Bill Oberst commented on my previous post as follows:
I am only a layman but have read this and related posts carefully several times to try and understand the difficulties of the problem and the ingenuity of the solution. I am currently reading Jonah Berg’s “Imagine: How Creativity Works” and I wonder if either Oleg Alexander or William Lambeth approached this problem by stepping away from it and daydreaming, or did a more ‘constant-focus-on-the-problem’ process lead to the solution?
I’ll try to answer this excellent question in this post.
I devote a lot of thought to the subject of creativity. I’m constantly trying to stay conscious of and to streamline my own creative process. So far I’ve identified two kinds of creative problems: Puzzles and Wicked Problems.
Think of your favorite puzzle games. Like jigsaw puzzles, tangrams, untangle puzzles, unblock puzzles, matchstick puzzles, Cut the Rope, etc. Solving puzzles is a creative process. Puzzles are “easy” problems because they are well defined. For example, in tangrams, you must match a given silhouette using all 7 tangram pieces. Puzzles usually have a structure like: do X, given the constraints Y. (In computer science, puzzles are called constraint satisfaction problems.) The constraints are the key to a well defined puzzle problem. Just think of other problems which are not usually referred to as puzzles, but which have the same structure. For example, creative writing assignments. Like, describe one of your family members, but the number of words you can use must match this family member’s age. Or Pictionary/Charades, where you describe a phrase by drawing it or acting it out. Improv and Whose Line Is It Anyway? fall into the same category.
Puzzles are solved by trial and error (aka optimization in computer science). It is rare that one solves a puzzle problem through one grand “inspiration”. It’s more like hundreds of tiny inspirations, each of which moves you closer to the optimal solution.
The bald cap wrinkling problem is an example of a puzzle problem. It is very well defined: Get rid of the goddamn wrinkles!
Now I will describe the creative process of solving the bald cap wrinkling issue as best as I remember it.
The first thing I tried was simple 2D tricks like using blur or median filters on the wrinkles. That looked like shit, so then I knew that the only way to get rid of the wrinkles was to replace the bald cap completely with a CG one. This meant two things: the head would have to be matchmoved and the lighting would have to match. Luckily, the matchmoving was going to happen anyway for a different reason: to warp the head to Neanderthal proportions. So the matchmoving pipeline was already developed and work on matchmoving was already in progress.
This left only the lighting issue. I made a list of possible ways to match the lighting:
- Manually (Too labor intensive.)
- Dig through old hard drives to find the chrome ball lighting reference from the shoot. (Too boring.)
- Optimize the lighting to match a target image. (Sounds like fun!)
I didn’t know exactly how I was gonna optimize the lighting, but I knew it was possible. After consulting with my PhD math whiz friends and coworkers and doing a few Google searches for things like “reverse engineer lighting”, I came upon “inverse lighting”, the technical term for what I was trying to do. The papers on inverse lighting basically confirmed the approach I was about to take: to solve the lighting using linear least squares. This kind of thing happens to me all the time: I “invent” a solution to a problem because I know it’s possible, and then do a Google search to find out what the problem is officially called and what potential solutions exist in the literature.
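To make the least-squares idea concrete, here’s a toy sketch in NumPy (my illustration only, not the actual oa Match Lighting code). Because lighting is linear, a render lit by several lights is just the sum of renders lit by each light alone, scaled by the per-light intensities, so matching a target image reduces to one linear solve:

```python
import numpy as np

# Toy inverse-lighting solve (an illustration, NOT the actual
# oa Match Lighting code). Lighting is linear: a render under several
# lights is the sum of single-light renders scaled by their
# intensities. Flatten one render per basis light into the columns of
# A; matching a target image b is then a linear least squares problem.

rng = np.random.default_rng(0)
n_pixels, n_lights = 5000, 16

A = rng.random((n_pixels, n_lights))       # one column per basis-light render
true_intensities = rng.random(n_lights)    # the "unknown" lighting to recover
b = A @ true_intensities                   # the target image, flattened

x, *_ = np.linalg.lstsq(A, b, rcond=None)  # solve A @ x ≈ b
print(np.allclose(x, true_intensities))    # → True
```

In a real setup the columns of A would be actual renders of the head under basis lights, and you’d probably want to constrain the intensities to be non-negative (e.g. with scipy.optimize.nnls), since lights can’t emit negative energy.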
So I prototyped the oa Match Lighting tool and it worked! (Needless to say, the tool went through several iterations of its own, which I won’t go into here.) Now I had a viable solution for matching the lighting. All the pieces were in place. Time to do an actual test frame.
Next came the familiar iterative process of “tweaking”. When you’re tweaking, you know you’re almost at the end! There are usually no new surprise variables, you’re just tweaking existing variables. But as you’ll see, tiny inspirations can come even at the tweaking stage. I’ll post images here of the different iterations.
Here’s the raw problem image again, to keep the context.
Here’s the first iteration of the CG bald cap.
Looking at the first shitty iteration, you might think that William Lambeth and I were worried. But we were not, because we knew that in a few more iterations we’d get to a good place. William touched up the textures a bit more and this was the second iteration.
A bit better, but William still felt that the bald spot was not shiny enough. That it was too matte. It certainly didn’t match the greasy highlight on the nose. I explained to him that the problem was not with the shininess of the material on the CG bald cap, but a bigger problem of the lighting itself being too diffuse. The reason for this was that the material of the bald cap in the original raw image was already matte! Therefore any lighting recovered from a matte surface will also be very diffuse. “There’s nothing we can do about it”, I said. Luckily, William refused to accept my “logic” and sent me the following image in which he added a fake highlight in Photoshop.
After seeing this image, I realized that he was absolutely right: the bald spot must be made shinier. But how? It was at this moment that I had a flash of inspiration. If William could paint a fake highlight on the final image, then why couldn’t I paint a fake highlight on the target image for the lighting tool to match? I knew in that instant that adding a fake highlight to the target image would translate into my tool creating a bright “sun” light in the lightmap! So I tried it and of course it worked! So much for “nothing we can do about it”. Here’s the target image I used with an added fake highlight.
And here’s the final iteration we ended up with!
It is a fact that anything I post on this blog has gone through a similar creative process, and you only get to see the final result. But of course the fun is in the creative process itself!
Wicked problems are creative problems that are much harder than puzzles. What makes them hard is that in addition to not knowing what the solution is, you don’t even know what the problem is! An example would be trying to come up with an idea for a screenplay. Where do you start? Anything goes, so the solution space is infinite.
Well, I like to think that the best place to start is with some constraints. Like nailing down the genre or theme. Maybe trying random configurations of genre and theme. Like “hazing…in space”. In other words, before a wicked problem can be solved, it must first be converted into a puzzle, a well defined problem. And it is VERY difficult to design a good puzzle!
No one has a magic formula for creativity. But currently I try to use the following formula:
CREATIVITY = CONSTRAINTS + RANDOMNESS
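For fun, the formula fits in a few lines of Python. This is a throwaway illustration, nothing more; the word lists are my own, apart from “hazing…in space” above. Hold the structure constant (the constraint) and let random draws fill in the slots:

```python
import random

# CONSTRAINTS + RANDOMNESS, as a toy: the premise template is the
# constraint; the draws are the randomness. Word lists are made up.
themes = ["hazing", "a heist", "first contact", "sibling rivalry", "a duel"]
settings = ["in space", "at a monastery", "on a whaling ship",
            "during the gold rush", "inside a beehive"]

def random_premise():
    return f"{random.choice(themes)}...{random.choice(settings)}"

print(random_premise())
```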
Let me end with a quote from a Cinefex article about Avatar. When designing the world of Pandora, James Cameron said:
When you’ve got all these possibilities, when you can do any kind of action imaginable, you have to be very disciplined. Applying rigor and discipline to the process has been the biggest challenge, in fact. Early on, we came up with the principle of denying ourselves infinite possibility–which sounds wrong. You’d think you’d want to embrace the infinite possibility; but you don’t because you’ll never get there. Ever. We stood by the principle of making a creative decision in the moment, and never second-guessing it. And just by making that decision, we had eliminated possibility. Every single day was about eliminating possibility.
As a followup to my previous post, here’s progress on the bald cap wrinkling fix.
My inverse lighting tool (which is now called oa Match Lighting) worked like a charm! I used a blurred version of the original frame as my target image. I also added a fake highlight to make the head shinier. Pretty cool that I can just “paint” the kind of lighting I want and my lighting tool will match it! Here’s the target image I used:
Here’s the closest match preview from my tool:
And here’s the lightmap that my tool generated. I used this lightmap to render the bald cap model at high res. Then composited the render over the original frame.
Special thanks to William Lambeth for creating the model and textures for the bald cap. His input on this issue has been invaluable.
I know it’s been ages since I’ve posted anything about PRIMITIVE. But that’s because I’ve been too busy making progress! In the past 6 months I’ve completed two critical milestones:
- I fixed most of the vertical parallax issues in my stereo footage.
- I finished the alpha cleanup on all the shots.
Vertical Parallax Fix
You may remember that PRIMITIVE was shot with the wrong beam splitter rig. This was my fault. I originally bought the cheaper version of the beam splitter rig from 3dfilmfactory, because I never imagined that I could later afford shooting with 2 RED cameras. The beam splitter rig that I got was never designed to hold 2 REDs, but somehow my DP managed to shove 2 REDs in there anyway. The result was a vertically misaligned 3D camera. So, I knew going in that I was gonna have vertical parallax problems. What I didn’t know was how big a deal it would be to fix this in post.
Here’s an example shot with really severe vertical parallax problems. Not all shots were this bad. The red lines connect corresponding points in the left and right images. Ideally, these lines are supposed to be completely horizontal. Notice that the problem can’t be fixed with a simple 2D shift because the vertical parallax is different for points close to the camera and points far from the camera. Looking at this image in 3D would give you a major headache because in order to achieve stereopsis your eyes would have to diverge vertically!
Here is the same shot with my vertical parallax fix applied. Notice how the same lines are almost horizontal now, which means that viewing this shot in 3D will be a pleasant experience.
So how does it work? Well, I tried several different “obvious” approaches, all of which failed. I became discouraged by the possibility that I would have to fall back to making the film in 2D. Then I said: “Let me try one last thing. If this doesn’t work, nothing will.” The idea I came up with was to convert each frame of the film into a low res 3D scan, like an animated “rubber sheet”. Then rerender the left view from a corrected position. Here’s a screenshot revealing the magic trick.
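To illustrate just the warping half of the idea, here’s a hedged NumPy toy. It is not the actual custom software (which rerenders from a low-res 3D scan): it fits a smooth vertical-offset field dy(x, y) to the vertical errors of matched points (the “red lines”), then resamples one eye’s image through that field:

```python
import numpy as np

# Hedged toy, not the real vision pipeline: model the vertical
# misalignment as a smooth field dy(x, y), fit it to matched-point
# offsets with least squares, and warp one eye's image through it.

def fit_dy_field(pts, dys, shape):
    """Fit dy ≈ a + b*x + c*y to matched points and evaluate the fitted
    field over the whole frame. pts: (n, 2) array of (x, y) positions."""
    x, y = pts[:, 0].astype(float), pts[:, 1].astype(float)
    A = np.column_stack([np.ones_like(x), x, y])
    a, b, c = np.linalg.lstsq(A, dys, rcond=None)[0]
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    return a + b * xx + c * yy

def warp_vertical(img, dy_field):
    """Resample each pixel from dy_field rows below it (bilinear)."""
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    src = np.clip(yy + dy_field, 0, h - 1)
    y0 = np.floor(src).astype(int)
    y1 = np.minimum(y0 + 1, h - 1)
    t = src - y0
    return (1 - t) * img[y0, xx] + t * img[y1, xx]
```

A planar fit like this can only remove smoothly varying misalignment; the depth-dependent part of the parallax is exactly why the real fix needed per-frame 3D scans.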
I won’t go into the highly technical implementation details. Needless to say, it took me months to write the custom computer vision software required for this to work. Luckily, it did work and the film is still going forward in 3D! And by the way, lesson learned. Next time I’ll get it right in camera!
Whereas fixing the vertical parallax was a creative challenge, cleaning up the alpha channels was a monotonous, never-ending hell. The task was routine: isolate the subject from the background using a combination of chroma key, paint, and rotoscoping. Here’s an example shot.
And here is the cleaned up alpha channel. Having this alpha channel allows us to composite the kids over a new background, like a sky.
Most of the shots were easy because all I had to do was chroma key the blue screen. Sometimes the chroma key had a few holes in it which I filled with a paint brush. But sometimes there would be severe blue spill from the cliff onto the Neanderthal’s body. Or sometimes the Neanderthal’s fingers would go outside the blue screen. In such cases I had to resort to my new mortal enemy: rotoscoping. Here’s an example of a particularly evil roto shot. This shot alone took me weeks to complete.
The thing is, I fully anticipated having to do this kind of work even before shooting. But I badly underestimated how long roto actually takes. And I overestimated how much free help I would get from others. I ended up doing most of this work myself, with some help from two other people. So, another hard lesson learned: next time, allocate money in the budget for roto! In fact, I would recommend this to any independent filmmaker: Allocate money in the budget for all non-creative tasks. I’m just glad it’s over.
The next steps are as follows:
- 3D track the Neanderthal’s head geometry. This will allow us to warp Bill Oberst’s head to Neanderthal proportions. This head warping effect will be what makes or breaks this film, and it could still go either way! We’ve developed a badass custom pipeline for the head tracking and I can’t wait to use it.
- Fix the bald cap wrinkling issues. The head tracking will help with that, too.
- Model, animate, and render props, like the nest and a more menacing branch.
- Finish 3D environments. This is already progressing nicely.
- Finish sound design. Can be completed in parallel with the VFX work.
- Composite all the elements!
Check out the latest progress from Salvador Cueto, the Environment Lead on PRIMITIVE! From what I understand, there are several 4K textures on this model of the cliff. The grass is fur, I think. Rendered in Mental Ray.
PRIMITIVE is a stereoscopic film. But up until now I haven’t been able to view any of my work in 3D. Sure, there’s anaglyph (red/cyan), but that’s bullshit. The mirror method is much better, but a bit awkward. So, after much research, I finally invested in my 3D monitor setup.
- NVidia 3D Vision Kit. One pair of active shutter glasses and infrared emitter. $200.
- Samsung SyncMaster 2233RZ 120Hz LCD monitor. $200 on Ebay.
- Stereoscopic Player software. Free trial allows up to 5 minutes of playback at a time, which is long enough to check shots.
To check a shot in 3D, all you have to do is render out a jpg sequence in the over/under format, with the left view on top and the right view on the bottom. (The shot has already been converged, usually on the subject’s eyes.) Like this:
Then convert the jpg sequence into an avi using ffmpeg. Like this:
ffmpeg -i F:\SUNSET_0100_N\topBottom2048.%04d.jpg -y -qscale 1 -r 24 F:\SUNSET_0100_N\topBottom2048.avi
Finally, open the avi in the Stereoscopic Player, set the viewing method to NVIDIA 3D Vision, and go to full screen mode. Put on your 3D glasses and enjoy!
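By the way, the over/under packing itself is trivial if you ever need to do it outside your renderer. Here’s a minimal sketch assuming Pillow and NumPy (the paths and function name are my own, just for illustration):

```python
import numpy as np
from PIL import Image

# Illustrative over/under packer: left view on top, right view on the
# bottom, exactly as the Stereoscopic Player expects.

def pack_over_under(left_path, right_path, out_path):
    left = np.asarray(Image.open(left_path))
    right = np.asarray(Image.open(right_path))
    assert left.shape == right.shape, "views must match in size"
    Image.fromarray(np.vstack([left, right])).save(out_path)
```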
There are a couple of other nice things about this setup:
- I can play any PC game in 3D! Nothing beats climbing the rooftops of Jerusalem in Assassin’s Creed in 3D.
- The Stereoscopic Player supports the Fujifilm FinePix W1 MPO format, which means I can view all the pictures I’ve taken with the W1 in 3D.
So, after having seen my shots in 3D, do I still think it was worth the trouble to make PRIMITIVE in 3D? Fuck yeah!
I’ve decided to 3D scan all my props during the Winter break. Like the big rock, the nest and egg, and the branch. I’ve set up a homemade 3D scanning pipeline using the Bundler Photogrammetry Package. (Watch the tutorial video on that page.)
I’ve been getting some really great results so I’m excited to share the process with you! Let’s take the big rock scan as an example.
The beauty of this pipeline is that it doesn’t require any special hardware. If you have a camera, you have a 3D scanner. If your subject is static (such as a prop or a building), you only need one camera. If your subject is alive, you’ll need lots of cameras for simultaneous capture.
The idea of “structure from motion” is to take lots of pictures of your subject from many different angles. I found that for props it is easier to have a static camera on a tripod and rotate the object in front of a greenscreen, rather than move the camera around the object. Here are some of the pictures I took of the big rock. I took 108 pictures with flash using my Fujifilm FinePix W1. The W1 is perfect for this because each picture is actually 2 pictures with a parallax. The scan probably would have worked with fewer than 108 images, but I found that it’s better to take lots of images. Also, I found that for the entire object to be solved from all angles, it is a good idea to find a “path” from one camera to the next. In other words, random camera angles are not as good as planned/consistent/efficient ones.
I keyed out the greenscreen in Nuke and made the background black. One caveat when saving images out of Nuke is that it doesn’t preserve the Exif data, which is necessary for the next step. So I wrote a small Python script to copy the Exif data from the original images to the ones saved out of Nuke.
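A script like that is easy to recreate. Here’s a minimal version of the idea using Pillow (my own sketch; the original script may have worked differently): lift the raw Exif block off the original JPEG and re-save the Nuke output with it.

```python
from PIL import Image

# Illustrative Exif-copy sketch, not the original script: Pillow
# exposes a JPEG's raw Exif block as bytes in info["exif"], and the
# JPEG saver accepts those bytes back via the exif= parameter.

def copy_exif(original_jpg, processed_jpg):
    exif_bytes = Image.open(original_jpg).info.get("exif")
    if exif_bytes is None:
        return  # original had no Exif block
    img = Image.open(processed_jpg)
    img.load()  # pull pixels into memory before overwriting the file
    img.save(processed_jpg, exif=exif_bytes)
```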
Now things get interesting. I ran the sparseRecon64.bat utility on all these images. This utility runs bundler.exe and solves a sparse point cloud of the automatically detected features and also solves the cameras. Only someone who has done image-based modeling manually (for example, manually clicking points in Autodesk ImageModeler) can appreciate how cool this step is. You can check the .ply file in MeshLab to make sure all cameras got solved.
Now things get even more exciting. After the sparse point cloud and cameras have been solved, I ran denseRecon.vbs. This compares each image with every other image pixel by pixel and generates a dense point cloud of the object! The dense reconstruction algorithm likes objects with lots of random texture (so the rock is perfect). It also doesn’t like really shiny objects because the highlights move from one image to the next.
In MeshLab, I used Poisson Surface Reconstruction to generate a mesh from the dense point cloud. Then I used Quadric Edge Collapse Decimation to reduce the poly count to 10,000 triangles.
Now we’ve got the geometry, and that’s how far the default Bundler Photogrammetry Package will take you. But what about the textures? Well, stay with me my friends, because I wrote a couple of MEL scripts to get high res textures onto the model. The first script I wrote imports the Bundler cameras and sparse point cloud into Maya. It also assigns the images as camera image planes. (Contact me, and I’ll probably give you this script.)
The second script projects the images from multiple cameras onto the model and automatically blends them into a single texture. (Contact me, and I’ll probably give you this script.) A UV set with non-overlapping UVs is required. (I just did a quick Unfold UVs in Maya.) I picked a subset of 6 cameras to project.
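Stripped of all the Maya specifics, the blending idea is a normalized weighted sum. Here’s a hedged NumPy toy (not the actual MEL script; the weight maps, e.g. how frontal each camera is to each texel with occluded texels zeroed, are assumed given):

```python
import numpy as np

# Illustrative multi-projection blend: each camera contributes a
# projected texture layer plus a per-texel confidence weight; the
# final texture is the weight-normalized sum of the layers.

def blend_projections(layers, weights, eps=1e-8):
    """layers: (n_cams, h, w, 3) projected textures.
    weights: (n_cams, h, w) per-texel confidence per camera."""
    w = weights[..., None]               # broadcast weights over RGB
    total = (layers * w).sum(axis=0)
    norm = np.maximum(w.sum(axis=0), eps)  # avoid divide-by-zero
    return total / norm
```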
Here is what the texture looks like.
And here is a quick render of the rock with the texture applied as both diffuse and bump map. I think it looks pretty damn good considering I only spent about 2 hours on this scan, with NO manual work.
Finally, if you don’t like command line programs, I found this Bundler GUI. I haven’t tried it, but in theory it should get you pretty far. However, it’s not free, and certainly not as nuanced as my pipeline.
(Arnold accent) Come on! Do it! Do it now! Try Bundler! Adjust the little rays of light!
PRIMITIVE VFX work has begun. We have over 30 talented artists from all over the world: Hungary, Serbia, Finland, Mexico, India, the UK, and the USA. And we’re still looking for more people!
We have two very strong Leads:
- Salvador Cueto, Environment Lead. Salvador did the environment concept art and designed the cliff structure for PRIMITIVE, so he was the natural choice.
- Myong Choi, Compositing Lead. Myong is currently a senior at Otis College of Art and Design. His reel is very strong and he is very dedicated.
And let’s not forget Sarah LaPenna, who’s doing a phenomenal job as the Postproduction Manager. I really don’t know what I’d do without her help.
I did a VFX breakdown of all of the shots. Here is an excerpt:
There’s a total of 75 VFX shots of varying difficulty. But it’s mostly the same techniques over and over. My theory is that the more artists we have, the quicker we can get all the tasks done.
Lots more VFX tutorials coming soon!
This just in! Salvador Cueto just delivered these phenomenal background concept paintings. I’m so happy with them! These environments are a thousand times better than anything I had in my mind. Plus, I really hate Salvador right now for painting like a motherfucker.
I sent Salvador a few frames from the movie with simple notes drawn on them. Then he painted the environments right on the frames in Photoshop. These environments will eventually be modeled and rendered photorealistically in 3D. The live action actors will be keyed out of the blue screen and composited into the 3D environments. We also did a 3D scan of the blue cliff set (before it was destroyed), so we can line up an accurate 3D model of the cliff to the plate.
What do you think?
I’m pleased to announce The Visual Effects Society’s Handbook of Visual Effects is finally out! I contributed sections on Facial Rigging and the Facial Action Coding System (FACS). It’s a huge honor to see my name among the other contributors, the pioneers who were doing visual effects when I was still in diapers. I’m not worthy!
Get your copy today!
I know I haven’t posted any PRIMITIVE updates lately. But that’s because so MUCH progress has been made, I haven’t had time to write a post! So here is all the latest news in one Mega update.
Upcoming Live Action Shoot
That’s right! It’s finally happening. After a LONG search for an affordable location, I finally found one–Area 11 Studio in Los Angeles. The lead came through Bill Oberst’s Twitter post, so thanks Bill, and I’ll never knock Twitter again. The owners of the space, musician Jimmy Kuehn and photographer Jessy Plume, gave me a very reasonable deal and the shoot is scheduled for June.
We had a very promising rehearsal with Bill Oberst and the 2 boys. Bill has started working out for the part. I’ve asked him to get as cut as possible, because Neanderthals had almost no body fat. I’ve also asked him to beef up his neck muscles. Bill is pretty cut already, so a month from now he’ll probably look like a fucking beast!
Special Makeup Effects
We were lucky to hire master special makeup effects artist, Simpat Beshirian, to design the makeup and costume of the Neanderthal. The Neanderthal will be created through a combination of analog and digital means. Simpat is taking care of all the analog work:
- A bald cap.
- A chin piece, because Neanderthals didn’t have a chin.
- Facial hair.
- A cool ear scar.
- False teeth.
- Dirty/split fingernails.
- Additional small scrapes and abrasions from the fall.
- The costume will be made from pieces of buffalo fur and other “skin”. Gretta and I were originally looking to buy a bear fur. However, it turns out bear furs are illegal in California. So we bought a six foot square buffalo fur instead. Yeah…
- For the closeup shot in which the boys pierce the Neanderthal’s back with a branch and the blood comes out, we’re going all practical with a special “blood back” prop. Simpat made a mold of Bill’s back, then poured it in a fleshy material with a blood bag inside. Then he hand-painted the back to look incredibly realistic. When this blood back is pierced with something sharp, the blood oozes out. Simpat made 3 of these blood backs and we used one of them to test out the effect. I posted a video of the test below. The test shows that we’ll have to glue the blood back to something immobile. There are also some air bubbles, so we’ll have to wipe them off in between takes. But otherwise the effect looks awesome!
Simpat started by creating a cast of Bill’s teeth, chin, ear, and back. I’ve been visiting his studio almost every week to look at his progress. He really knows his shit and I’m learning a lot from him. All pictures below are work in progress.
Gretta is making good progress on the boys’ costumes. One of the “nature skirts” is finished and I think it’s looking pretty crazy! Gretta is working on the second skirt now…
There are basically 3 main props in PRIMITIVE: the nest/egg, the branch, and the rocks. I made the nest out of branches, leaves, feathers, and whatnot Gretta and I gathered at a local park. It’s being held together with some diluted Elmer’s glue. I also bought an ostrich egg on Ebay to go inside the nest and double as some kind of large prehistoric egg. (I hope you’re not thinking pterodactyl, cause those were already extinct by the time Neanderthals came around.)
I found a cool looking branch for the kids to beat the Neanderthal with.
As for the rocks, I quickly realized that styrofoam rocks were NOT going to cut it. They simply don’t look real enough. So Gretta and I drove to the Natural Stone Yard in Wilmington. There they pretty much have any flavor of rock you could think of. I picked dark, sharp rocks called Basalt. Now I have some really cool looking rocks, but the problem is that these rocks are heavy and may be dangerous for the children to lift over their heads. So Simpat is going to create molds of these rocks, pour them in a lighter material, and paint them to look realistic. Total cost for rock props: $500. Realizing that THIS is why Hollywood movies cost millions: Priceless.
We are going to build a “cliff set” for the actors to interact with. The cliff set will have all the major forms in it, but will be painted green. All the texture detail will be added in post. In fact, we’re planning to get a 3D scan of the cliff set made after it is built, so that we have a perfect model to work with in post.
Salvador Cueto, our Mexican concept art genius, is doing the set design, first in 2D, then in 3D. The design is based on Basalt cliffs (so it matches the prop rocks (yes, I am that anal)), which tend to form into Basalt columns. His design will be closely followed by the set builders.
3D Filmmaking Theory
This topic easily deserves its own post. I must confess now, after everything I’ve learned, that I made my decision to shoot PRIMITIVE in 3D prematurely. I don’t regret making the decision, but I made it without knowing enough about the subject at the time. After I made the decision, I dived into the subject of stereoscopic filmmaking, reading everything I could get my hands on and doing as many tests as I could. (This was one of those knowledge frenzies I sometimes get into. A 24/7 binge of information, the goal of which is understanding. Psychologists call it the “rage to master”.)
After finally achieving understanding, I was forced to buy a mirror box (AKA beam-splitter rig), a $3000 piece of equipment (not including the cameras). Why did I spend 3 grand on a mirror box? Why couldn’t I just “stick 2 cameras together” side by side? Read on my friends…
- 3D photography and 3D filmmaking are not exactly the same thing. The main difference is that in 3D filmmaking the size of the projection screen is typically much larger than in 3D photography presentations. 3D movies are projected on 20-70 foot screens, whereas 3D slides are usually projected on 5 foot screens. Why does the size of the screen matter? Read on…
- The most important rule in 3D filmmaking is: DO NO HARM. This means, don’t let the viewer’s eyes converge (go cross-eyed) or diverge (go wall-eyed) so far as to cause discomfort. Angular convergence and divergence (or just vergence) is a function of the size of the screen, the distance of the viewer from the screen, the interocular distance (the distance between human eyes, about 2.5 inches), and the onscreen parallax (the physical distance between a point in the left image and the corresponding point in the right image when viewed without 3D glasses on the screen). There is a formula to work out the angular vergence given all these parameters, and there are vergence comfort recommendations for human eyes. Without getting too mathy, the gist is this: Try not to let the eyes diverge at all and don’t let them converge too much.
- There are 3 kinds of parallax: positive (objects appear behind the “screen” or stereo window), zero (at the stereo window), and negative (in front of the stereo window). Negative parallax (or objects popping out at you) is rarely used in 3D films today and is considered a gimmick. In modern 3D films most of the action takes place behind the stereo window in zero to positive parallax. But here is the thing: positive parallax pushes the eyes toward divergence. A positive parallax equal to the interocular distance (2.5 inches) makes the eyes parallel and the object appears at infinity. Anything larger diverges the eyes (wall-eyed) and causes discomfort. Really large parallax values (positive or negative) lead to double vision (loss of fusion) and a big headache. Therefore, positive parallax values should be kept small, and negative parallax values should be avoided altogether for aesthetic reasons. Now we know why we need to keep parallax values low, but how do we control parallax? Read on…
- There are 5 main variables which control parallax: interaxial distance (the distance between the lenses of two parallel cameras), horizontal image translation (AKA HIT, AKA setting convergence), the size of the screen, the nearest object to the camera, and the furthest object from the camera. The interaxial distance is set during the shoot, per shot. If you got the interaxial distance wrong, you will have to reshoot! The interaxial distance controls the “amount of depth” in the scene. The HIT controls where this “amount of depth” appears in Z space. The HIT is usually done in post, per shot. Most importantly, the HIT controls where the viewer should be looking (converging his eyes on). HIT is usually slaved to focus, so that whatever is in focus is also the point of zero parallax–the subject of the shot–what most viewers are looking at. For example, in a closeup of an actor, the zero parallax should be around the actor’s eyes and this is controlled by HIT–sliding the left image horizontally until the actor’s eyes on screen are at zero parallax. (The astute reader will notice here that you must compose your shots with a generous “safe zone” with enough room for cropping because of HIT.) The third variable, the size of the screen is usually assumed to be some large value, like 30 feet wide. A good rule of thumb is to keep positive parallax values around 2.5 inches on the projection screen–this way objects with a positive parallax of 2.5 inches will make the viewer’s eyes parallel and will appear at infinity. With me so far?
- To keep the positive parallax values that low, around 2.5 inches on a 30 foot wide screen, it turns out that for most medium and closeup shots the interaxial distance must be below 2.5 inches. If I were to stick 2 HD cameras together side by side, each camera being 6 inches wide, my minimum interaxial distance would be 6 inches. They have some small HD cameras nowadays, some as small as 3 inches wide, but that’s still not small enough! The mirror box, however, allows the interaxial distance to be anything from 0 to 6 inches. And THIS is why you need a mirror box for 3D filmmaking. For example, let’s say you’re doing a closeup of an actor with the background at infinity. The camera is 3 feet away from the actor and we’re using a wide angle lens. We know zero parallax will be set (using HIT) to the actor’s eyes. We’re trying to keep the background positive parallax around 2.5 inches on a 30 foot screen. Therefore, using either a formula or a preview monitor to work it out, the interaxial distance will be approximately 1.5 inches. And (with today’s technology) only the mirror box can give you such a low interaxial distance. To sum it all up: in 3D films meant to be projected on large screens, you want small parallax values and therefore you need small interaxial distance values.
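To put a number on the DO NO HARM rule, here’s the back-of-the-envelope vergence calculation in Python (the screen size and viewing distance are my own illustrative numbers, not from the shoot):

```python
import math

# Back-of-the-envelope vergence check; numbers are illustrative.
EYE_SEP_IN = 2.5  # interocular distance, inches

def vergence_deg(parallax_in, viewing_dist_in):
    """Vergence angle in degrees for a point with the given on-screen
    parallax (positive = behind the stereo window). Zero means the eyes
    are parallel (object at infinity); negative means divergence."""
    return math.degrees(
        2 * math.atan((EYE_SEP_IN - parallax_in) / (2 * viewing_dist_in)))

# A 30-foot-wide screen viewed from 30 feet (360 inches) away:
print(vergence_deg(0.0, 360))  # slight convergence, on the screen plane
print(vergence_deg(2.5, 360))  # parallax = eye separation: eyes parallel
print(vergence_deg(6.0, 360))  # negative: eyes diverge, discomfort
```

The sign flip at exactly 2.5 inches of positive parallax is the whole game: keep parallax below that and the eyes never diverge.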
For simplicity, I’ve left out countless details from the above explanation. If you crave more 3D filmmaking theory, I recommend the following resources:
- 3D Movie Making: Stereoscopic Digital Cinema from Script to Screen by Bernard Mendiburu. A modern, practical text. Start here.
- Foundations of the Stereoscopic Cinema by Lenny Lipton. This is probably the book James Cameron kept under his pillow while shooting Ghosts of the Abyss and Avatar. This book is heavy on theory and has lots of formulas. Lenny Lipton is one of the most influential people to modern 3D cinema and is one of the people primarily responsible for the RealD projection system. Highly recommended.
- StereoGraphics Developers’ Handbook by Lenny Lipton. Read chapters 2 and 3.
Mirror Box Setup
The mirror box works similarly to a teleprompter. It’s basically a box made out of 80/20 with a piece of teleprompter (half-silvered) glass inside, placed at a 45 degree angle. You stick two cameras into it: one camera shoots through the glass, while the other (the top one) shoots a reflection off the glass. I am using 2 Canon HV20s to test with, but will rent something beefier for the shoot. You control the interaxial distance by sliding the top camera horizontally.
Ok, but how do we know what the interaxial distance should be per shot? One way is to use a formula to calculate the interaxial given the nearest object, furthest object, focal length, and size of the screen. Personally, I don’t feel like messing around with measuring tapes and formulas when I’m on the set. I’d rather see a live preview of the amount of parallax I’m getting and adjust the interaxial distance accordingly. My solution is the following:
- I set both cameras to output a composite Standard Def signal (even though they are recording in HD).
- I use 2 EasyCAP Composite to USB adapters. These basically convert the composite video signal into a webcam signal. I plug the 2 EasyCAPs into a laptop running 32 bit XP. (EasyCAP drivers don’t work with 64 bit.)
- I use 2 pieces of software: Stereoscopic Player and Stereoscopic Multiplexer to preview an anaglyph (red/cyan) version of my composition in real time.
Using the above preview method, I can see the amount of parallax I’m getting on the laptop screen (by looking at the image without 3D glasses), and I can extrapolate what the parallax will be when blown up on a 30 foot screen. I can then adjust the HIT in the Stereoscopic Player software and the interaxial distance on the mirror box until the parallax is small enough. Usually no more than a quarter inch of parallax on a laptop screen yields comfortable 3D viewing on a much larger screen.
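The extrapolation from laptop to theater is simple proportionality: parallax is a fixed fraction of the image width, so its physical size scales linearly with the display. A sketch of the target on different screens (the 13-inch laptop width is my assumption, not a number from the post):

```python
# Target from the post: ~2.5 inches of positive parallax on a
# 30-foot (360-inch) wide projection screen.
TARGET_FRACTION = 2.5 / 360.0  # parallax as a fraction of screen width

def target_parallax(display_width_in):
    """The same target parallax expressed on a display of a given
    physical width, since parallax scales with image width."""
    return TARGET_FRACTION * display_width_in

# On a (hypothetical) 13-inch-wide laptop preview screen:
print(round(target_parallax(13.0), 3))  # ~0.09 inches
```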
Well, there you have it. An overview of where PRIMITIVE is today. If something is unclear (I’m sure something is), please post your questions in the comments and I will try to answer them. Chances are there will be radio silence on this blog until after the shoot!!
Well, here it is. The inevitable official announcement that PRIMITIVE will be shot in stereoscopic 3D. After seeing Avatar in 3D a second time I decided that I basically have no choice but to make PRIMITIVE in 3D. Which is fine by me because I always wanted to make it in 3D anyway, but was too afraid. In fact, if you recall, I initially was so inexperienced that I wanted to make PRIMITIVE in Standard Def. Later I changed my mind and decided to make it in 720P. And now, after developing my visual effects pipeline for at least 6 months, I feel absolutely confident that I am capable of shooting and compositing PRIMITIVE in 3D. (I say that now.)
I am currently researching the most appropriate 3D camera rig for this project. More on this soon!
Originally I planned to have set builders create a realistic “cliff set” for the actors to interact with and then do matte painted set extensions in comp. The cliff set would be arduously sculpted and painted to look like a cliff and would take a significant portion of the budget. However, the following greenscreen test proves that I can get away with using a simple “green set” for the actors to interact with. This green set would be just a plywood set painted green. It would be easier to construct and cost much less.
Here is the final result of the test:
Now let’s break it down. Why was I so afraid of having a simple green set in the first place? Take a look at the plate below. The subject is touching the screen. Which means that the subject is casting a shadow on the greenscreen and the greenscreen is bouncing light on the subject! This is basically the absolute worst case scenario for greenscreen work.
So the idea here is to extract the subject element and the shadow element and comp them over some background. The first thing I did was pull two keys: one for the lighter portion of the greenscreen and one for the shadow portion.
I color corrected the plate using Lab curves to suppress the green spill on the subject’s forehead and arm. I multiplied the two greenscreen keys together; this gave me an alpha for the subject. Then I premultiplied the color corrected plate by this alpha:
Now for the shadow. I multiplied the shadow key over the background at 90%, but it looked really flat and boring. So I decided to use an ambient occlusion pass; this was just a threshold of the darkest areas of the plate. I multiplied the ambient occlusion with the shadow and the shadow immediately came to life.
Finally, I comped the subject over the shadow. The edges were looking really crunchy so I blurred the edges masked by a contour pass (derived from the subject’s alpha).
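For readers unfamiliar with the math behind these comp operations, here is a single-pixel, single-channel sketch of the key steps. The values are illustrative only, and the “multiplied at 90%” shadow line is one plausible reading of that step:

```python
def over(fg_premult, fg_alpha, bg):
    """Composite a premultiplied foreground pixel over a background pixel."""
    return fg_premult + bg * (1.0 - fg_alpha)

# Two keys pulled from the plate: one for the bright greenscreen,
# one for the shadow portion. Multiplying them gives the subject alpha.
key_bright, key_shadow = 0.8, 0.9   # illustrative single-pixel values
alpha = key_bright * key_shadow

plate = 0.6                          # color-corrected plate value
fg = plate * alpha                   # premultiply by the subject alpha

bg = 0.4
# Shadow: darken the background by the shadow key at 90% strength
# (my reading of "multiplied the shadow key over the background at 90%").
shadowed_bg = bg * (1.0 - 0.9 * (1.0 - key_shadow))

# Finally, subject over the shadowed background:
print(round(over(fg, alpha, shadowed_bg), 3))
```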
While the final result still needs a lot of roto/paint cleanup, I’m happy with the overall look. What do you think?
After speaking with several people, I’ve changed my mind about making PRIMITIVE in Standard Def (720×405). The new output resolution will be HD720 (1280×720). We will shoot at HD1080 (1920×1080), pretending our frame is actually smaller–1280×720. This will give us a lot of room in the margins to reframe shots and add fake camera moves in post. Eventually, all shots will be scaled down or cropped to 1280×720 for VFX work.
Reasons for changing my mind:
- Youtube can now play videos at HD720.
- Some film festivals only accept HD work.
- HD720 blown up to HD1080 looks fine. Already tested this.
- A year from now Standard Def will be obsolete, if it isn’t already.
- Tracking is easier with more pixels.
- Luma/chroma compression ratio is better on HD cameras–better for greenscreen work.
- Adding fake motion control is easier with lots of margin room. But if we need an even bigger camera move, we can still use the panorama technique I described in my fake motion control post.
- HD720 is not a killer in terms of rendering or storage. It’s the perfect compromise between HD1080 and SD.
- Bigger is better!!
Last week I discovered something truly powerful–the Lab color space. So from now on I will do all my color correction (color grading) in Lab. I’ve made a short tutorial video explaining the basics of Lab.
If you want to take your own images to the next level, I encourage you to try Lab and find out more about this powerful color space! Here are some key points and tips:
- The main advantage of Lab is that Lightness and color are manipulated independently. This makes color correction intuitive and efficient.
- Lab was designed to mimic human perception of color, as opposed to RGB which was designed for electronic displays.
- Lab naturally heightens simultaneous contrast. This is why images color corrected in Lab appear sharper than images corrected in RGB–because the colors themselves have been “sharpened”! (No, this is not the same as increasing saturation in RGB.)
- Sharpen the Lightness channel only. Blur the a and b channels slightly to smooth out conversion artifacts.
- Don’t take my word for it. Check out the podcasts and articles on this page.
Major news. Remember the fake motion control test? Remember how I said the first test failed because there was too much vignetting caused by my 28mm wide angle lens? Remember how I said that to fix the vignetting I had to zoom all the way in to the ground glass of my Letus35 adapter, which made my wide angle lens not so wide anymore? Remember how pissed I was?
Well, this issue has been bugging me ever since. I just knew there was a way to cancel out the vignetting in comp, but I didn’t know how. Some forum suggested to shoot a white wall to get a frame of “just the vignette”, invert it, and Add it back to the problem image. I tried this, but doing so also made all the blacks in the vignetted area gray. I abandoned this idea. Little did I know how close I was to the answer!
A week later I started thinking about it again. I really want my lens to be wide. I Googled “vignette correction” and found that there is actually a vignette correction feature in Photoshop (Filter > Distort > Lens Correction > Vignette). I tried it and it worked. Now I knew for sure it was possible, so I started looking for the formula so I could recreate it in comp.
After much searching I found something called “Flat-Field Correction”, a common technique in astrophotography. It is very similar to the technique I described above: shoot a white wall with “just the vignette” (the flat field) at the same f-stop you shot the problem image with, then simply divide the problem image by the flat field frame!
Vignette corrected image = problem image / flat field frame
I plugged this into a comp and it worked! Which means my wide angle lens is back, baby!
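Here is a hedged sketch of the division on a single scanline. The epsilon guard and the normalization of the flat to its peak (so the image center keeps its original brightness) are my additions, not details from the post:

```python
def flat_field_correct(problem, flat, eps=1e-6):
    """Divide the problem image by the flat field (the 'just the vignette'
    frame), normalized so the brightest part of the flat maps to 1.0."""
    peak = max(flat)
    return [p / max(f / peak, eps) for p, f in zip(problem, flat)]

# One scanline: the vignette darkens the edges of both the problem
# image and the flat field by the same falloff.
falloff = [0.5, 0.8, 1.0, 0.8, 0.5]
scene   = [0.2, 0.6, 0.9, 0.6, 0.2]           # what the scanline should be
problem = [s * v for s, v in zip(scene, falloff)]
flat    = [1.0 * v for v in falloff]           # white wall times the vignette

corrected = flat_field_correct(problem, flat)
print([round(c, 2) for c in corrected])  # recovers the original scene values
```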
The funny part is that I could not find this information anywhere in the compositing literature. It seems to be some sort of well guarded secret. Astrophotography? Weird.
Anyway, take a look at the result. The original problem image is on top; the vignette corrected version is on the bottom.
I’m happy to report that the “fake motion control” test was a complete success! This simple trick will add a great deal of production value to PRIMITIVE and save an enormous amount of money. Here is the test:
What is Motion Control?
A motion control rig is basically a giant robotic arm holding a camera. It is used to recreate the same camera move over and over. Motion control is typically used for getting clean plates during moving shots. Clean plates are used to remove objects like wires from the scene. (In my case I need to remove the actor’s head!)
Here is an example of a motion control rig:
Trust me, you don’t even wanna know how much it costs to rent one of these.
Fake Motion Control
I recently found out that you can fake motion control with a panorama image if you stick to the following rules:
- The fake camera move is limited to pan and tilt only.
- All the action must happen “within the frame”. Action cannot be “cut off” by the edges of the frame.
- Action cannot cross from one frame to another.
- All shots are static camera shots.
For my test I decided to duplicate Gretta (because one is not enough!) and do a simple object transfer (the bags) from Gretta A to Gretta B. I shot the following elements (the order is important):
- Clean plate of the car with the trunk open.
- (Without moving the camera) Gretta A walks in from screen left and places bags next to the trunk.
- (Without moving the camera or the bags) Gretta B pretends to acknowledge Gretta A and the bags. Gretta B puts the bags in the trunk and closes the trunk.
- Open the trunk again and get the bags out so we’re back to the clean plate. Now pan slightly to the left and then slightly to the right of the car.
I then took the following steps in comp:
- Averaged several frames of the clean plate to get a completely grainless version. You actually need to average at least 30 frames to get rid of the grain completely.
- Also created grainless versions of the panLeft and panRight shots.
- Used a photo stitching program to stitch the panLeft, clean plate, and panRight grainless images into one long panorama.
- I composited the “action” plates of Gretta A and Gretta B over the panorama. I translated the action plates until they were perfectly aligned with the panorama underneath.
- I shifted the footage around by a few frames to get the timing right.
- I created the effects of the Gretta duplication and bag transfer using a few quick rotoshapes.
- I added the grain back in using a grain plate from an earlier test.
- Finally I cropped the image to my output resolution (720 x 405) and created the fake camera pan by keyframing a translate node.
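The frame-averaging step in the list above works because grain is (roughly) zero-mean noise: averaging N frames cuts its amplitude by about the square root of N while the static plate underneath survives. A minimal sketch with made-up pixel values:

```python
import random

def average_frames(frames):
    """Average a stack of frames pixel by pixel; random grain cancels out
    while the static image underneath is preserved."""
    n = len(frames)
    return [sum(px) / n for px in zip(*frames)]

random.seed(1)
true_pixels = [0.3, 0.5, 0.7]
# 30 'frames' of the same static plate, each with random grain on top:
frames = [[p + random.uniform(-0.05, 0.05) for p in true_pixels]
          for _ in range(30)]

clean = average_frames(frames)
print([round(c, 2) for c in clean])  # close to the true pixel values
```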
Here is an image from the comp which reveals the trick:
Mistakes and Failures:
I made a few mistakes in this test which were a lesson to me:
- Sloppy rotoshapes. You can actually see them flickering. That’s okay, I just wanted to see the results of this test as quickly as possible. I’ll spend more time on the rotoshapes in PRIMITIVE.
- On frame 60, Gretta B is missing part of her foot. This is because her foot was already behind the bags. I’ll be more careful next time with occlusions.
- On frame 275, Gretta’s elbow disappears a little. This is because of rule #2–the action cannot be cut off by the frame. I’ll keep my actors away from the edges of the frame next time.
Here is an earlier test which completely failed because of severe vignetting around the middle frame:
The vignetting was caused by a combination of my Letus35 adapter and my 28mm wide angle lens. To get rid of the vignetting I had to zoom all the way into the ground glass of my Letus35 adapter. However, doing so had a serious side effect–my angle of view was now reduced from 60 degrees to a measly 34 degrees. That’s like going from a 28mm lens to a 58mm lens! To get back up to 60 degrees AOV, I would need a 15mm lens and those cost around $2000, which is not in my budget. I am pissed, but I’m also happy that this test revealed the vignetting problem. Such is life.
Comments and questions are welcome!
Over the last few days I’ve done several important camera tests. These tests are critical to being able to match my specific camera’s properties in CGI rendering and compositing. The camera tests are:
- Angle of view
- Barrel Distortion
- Grain
- Motion Blur
- Depth of Field
- Lens Flare
Angle of View
The point of this test is to make sure that my CG camera has the same horizontal angle of view as my real camera. So if I know that I have a 28mm lens on my real camera, all I have to do is set my CG camera to have a 28mm focal length and I’m done. Right?
Wrong. The focal length is only part of the story. The other part of the story is the film size (or sensor size). There is a simple formula to calculate the angle of view given the focal length (f) and film size (d):
aov = 2*arctan(d/(2*f))
So if the focal length is 28mm and the horizontal film size is 36mm, then the horizontal angle of view is 65.5 degrees.
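The formula above, plugged into a few lines just to check the 65.5 degree figure:

```python
import math

def horizontal_aov(focal_mm, film_width_mm):
    """Horizontal angle of view in degrees from focal length (f) and
    film/sensor width (d): aov = 2 * arctan(d / (2 * f))."""
    return math.degrees(2.0 * math.atan(film_width_mm / (2.0 * focal_mm)))

print(round(horizontal_aov(28.0, 36.0), 1))  # 65.5 degrees
```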
The only problem with using the formula is that the film size on my camera is variable because of the Letus35 Extreme adapter. I can zoom in almost anywhere on the adapter ground glass, effectively changing the film size as I zoom. So there is no accurate way to determine what my film size is.
So I decided to take a different approach. I started with the angle of view I wanted to achieve: 60 degrees. (60 degrees is my favorite angle of view–wide, but doesn’t distort faces too much.) I then aimed my camera at a wall and moved the camera back by (an arbitrary) 84 inches. Using a right triangle calculator I determined that the base of my angle of view triangle is 97 inches. I marked up the wall with pieces of tape. Finally, I zoomed into the adapter ground glass until the pieces of tape were at the edges of the frame.
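The wall-marking math is the same right triangle run in reverse: the width of the view at distance d is 2 * d * tan(aov / 2). A quick check of the numbers above:

```python
import math

def view_width(aov_deg, distance_in):
    """Width of the field of view at a given distance:
    base = 2 * distance * tan(aov / 2)."""
    return 2.0 * distance_in * math.tan(math.radians(aov_deg / 2.0))

print(round(view_width(60.0, 84.0)))  # ~97 inches of wall between the tape marks
```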
In CG, I set my camera focal length to 28mm and adjusted the horizontalFilmAperture until the angle of view was 60 degrees. Perfect.
Barrel Distortion
Wide angle lenses (such as the 28mm) usually create a fair amount of barrel distortion in the recorded image. The exact amount of distortion must be known so that it can be applied to all CGI elements in compositing. For this test I shot a piece of graph paper with my camera. I brought the footage into my compositing app and undistorted the grid to make the lines as straight as possible. Now that I know the value to undistort the recorded image, I can apply the inverse of this value to all my CGI elements in comp.
Grain
Grain must be applied to all CGI elements in comp to match the grain of the camera. Usually this grain is simulated. But why simulate the grain when I have access to the real thing?!
For this test I shot 2 elements:
- Some wine bottles on a table shot for a few seconds.
- A grain plate–an evenly lit wall shot for a few seconds will do.
I then took the following steps in comp:
- I took the first 5 frames of the wine bottles shot and averaged them together. This gave me a completely grainless version of the wine bottles shot.
- I took the grain plate and blurred it a bit. Then I subtracted the blurred version of the grain plate from the original version. This gave me the pure grain produced by my specific camera.
- Finally I added the pure grain to the grainless version of the wine bottles shot and compared the result to the original wine bottles shot. Perfect match.
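The blur-and-subtract step above is a high-pass filter: the flat lighting of the wall survives the blur and cancels out, leaving only the grain. A one-dimensional sketch, using a simple 3-tap box blur as a stand-in for whatever blur the compositing app applies:

```python
def box_blur(pixels):
    """Simple 3-tap box blur with clamped edges."""
    n = len(pixels)
    return [(pixels[max(i - 1, 0)] + pixels[i] + pixels[min(i + 1, n - 1)]) / 3.0
            for i in range(n)]

def extract_grain(grain_plate):
    """Pure grain = grain plate minus its blurred version; the flat
    lighting cancels out and only the high-frequency grain remains."""
    blurred = box_blur(grain_plate)
    return [p - b for p, b in zip(grain_plate, blurred)]

def regrain(grainless, grain):
    """Add the extracted grain back onto a grainless plate."""
    return [g + n for g, n in zip(grainless, grain)]

# An evenly lit wall (value ~0.5) with a little grain on top:
plate = [0.52, 0.49, 0.51, 0.48, 0.50]
grain = extract_grain(plate)
print([round(g, 3) for g in grain])  # small values centered on zero
```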
Motion Blur
The amount of motion blur produced by the camera must be matched in the CGI renders. For this test I recorded a falling hacky sack. Then I rotomated the hacky sack in CG using a sphere. In theory, at 24 frames per second with a shutter angle of 180 degrees, the shutter speed would be 1/48th of a second. So I plugged those exact settings into my CG app: 24 fps, 180 degree shutter, and good anti-aliasing settings in my renderer. Luckily, the results were perfect on the first try.
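The shutter math: a 180 degree shutter is open for half of each frame interval, so at 24 fps the exposure time is (180/360)/24 = 1/48th of a second. As a one-liner:

```python
def shutter_time(fps, shutter_angle_deg):
    """Exposure time per frame: the shutter is open for
    (angle / 360) of each frame interval (1 / fps seconds)."""
    return (shutter_angle_deg / 360.0) / fps

print(shutter_time(24.0, 180.0))  # 1/48th of a second (~0.0208)
```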
Depth of Field
The depth of field produced by my lens must be matched in the CGI renders. For this test I shot apples on a table spaced 1 foot apart at different fStops. Then I recreated the scene in CG with spheres. I used the same settings from the real world in CG: same focusDistance and same fStops. Again the results from my renderer were perfect on the first try.
Lens Flare
A lens flare happens inside the lens and therefore must always come in front of all other elements in the comp. A lens flare can be simulated, but again, why simulate something I have access to? All I have to do is set my camera to record in the dark and then shine a flashlight into the lens, roughly matching the position of the light source. Done.
If you like pixels, you’ll love non-square pixels!
Here is the format pipeline for PRIMITIVE. The “subject” is a perfect sphere. Notice how the sphere gets squashed and stretched (animation term!) as it goes through the pipeline. (All images are 25% of actual size.)
- First the DV footage is digitized. If the DV footage were to be played on a 4:3 TV, the sphere would look perfectly round. However, on a computer monitor the sphere is squished by 10% in Y.
- Rendering and compositing with non-square pixels would cause problems, especially with 2D rotations. Therefore it’s best to work “square”. That is, resize the image in Y to achieve square pixels. The correct “square” resolution for a 720×480 DV image is 720×533 (480/0.9=533.33). All CGI rendering and compositing will be done at this resolution.
- The comp is cropped to achieve a master resolution of 720×405 (aspect ratio 1.78). I have lots of room (533-405=128 pixels) at the top and bottom of the comp image, so I can tweak my composition as needed.
- Finally, the master is resized to the different output formats. Notice how the Anamorphic DVD format stretches the sphere in Y. When played back on a 4:3 TV, the DVD player will squish the image in Y and put black bars at the top and bottom of the screen. When played back on a 16:9 TV, the DVD player will stretch the image to fill the frame. In both cases, the sphere will appear perfectly round.
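The resolution arithmetic in the steps above, sketched out (the 0.9 pixel aspect ratio for NTSC DV is the value used in the post):

```python
def dv_to_square(width, height, pixel_aspect=0.9):
    """Resize a non-square-pixel DV frame in Y to get square pixels."""
    return width, round(height / pixel_aspect)

w, h = dv_to_square(720, 480)
print(w, h)     # 720 533 -- the 'square' working resolution
print(h - 405)  # 128 pixels of headroom after cropping to 720x405
```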
Playing a standard definition video on an HDTV looks horrible because of the quick and dirty resizing the HDTV does. Therefore it is important to create an HD blowup using bicubic resampling for each frame. This way the standard definition image will at least have a chance at looking decent on an HDTV. Yay!
I had the idea that the Neanderthal will have some sort of scarification all over his body. It would be simple to create this effect on his CG face, but what about the rest of his body? Can it be done without using makeup effects?
Here is my attempt to create the scarification effect using compositing only. First, Gretta drew the design on my arm with a Lyra Aquacolor crayon. (She came up with this design on the fly!) Then I used my compositing app to key out the red paint and shift it to match my skin tone color. Finally, I embossed the alpha channel I pulled from the red paint and used an Overlay operation to layer the embossed design on my arm.
I’m fairly happy with the results. What do you think?