Progress Report

Latest

PRIMITIVE 20% of shots done

Ever since completing the environments last year, I’ve been compositing shots. Compositing is putting all the pieces together. It is the last thing I need to do, besides sound, to complete the film!

Yesterday I finished the 15th shot out of 75. I try to finish one shot every week, working one hour every morning. But realistically it ends up being a shot every 10 days or so. Plus the 3D aspect usually adds an additional day of work. I’m working from the easiest shots to the hardest. Every time I finish a shot, I add it proudly to my editing timeline. In the screenshot below, you can see the shots I’ve completed on the upper two video tracks vs. the full length film on the bottom track.

I’m relatively happy with the quality of the shots. I think they’re acceptable, or at least the best that I can do with the material. Of course they look a lot better in 3D than 2D, but I try to tell myself that even Avatar looked better in 3D than 2D.

At this rate, I will complete all the remaining shots in roughly 2 years (60 remaining shots at 10–11 days each works out to about 600–660 days). Unless some life-altering event happens that slows me down a bit… like having twins!

Gretta and I are expecting twins, a boy and a girl, on May 15th! We are very excited. My plan is to just faint during the delivery. I feel like I’m about to be put in a very unfamiliar situation. But hopefully I will adapt quickly. I will post more news after the twins are born!

PRIMITIVE Environments Done

The Story

Over the years, at least 3 different artists have tried in earnest to create environments for PRIMITIVE. Unfortunately, good as those artists are, I was never quite satisfied with their work. But I was also not paying them. So about 6 months ago, I decided to find a professional environment artist, with a long list of film credits, and pay them. The artist I found quoted me at least $20,000 (she was giving me a break, actually). Even though this was 10 times over my budget, I was willing to throw money at this problem to make it go away. Before committing to the full amount, I asked her if she was willing to do a paid test shot. Sensing my pickiness, she backed out of the project. (I don’t blame her.)

That was the last straw. And so, 6 months ago, I decided to do the environments myself. There was only one problem: Of all the VFX skills I had at the time, creating environments was NOT one of them. I had to learn this skill from scratch.

And after 6 months of trial and error, I finally achieved an adequate look. Click for 2K image:

Primitive environments cliff

Breakdown

One of the first and smartest things I did was take Greg Downing from xrez.com out to lunch. Greg is a highly experienced environment artist/panoramic photographer. He gave me some key insights into his workflow. I explained to him that I had no Photoshop painting skills at the time, so I had to rely on peripheral skills, like panoramic photography and photogrammetry. Greg reassured me that if I could find a real location with the right look, I could survey the living shit out of it.

So I bought myself a Canon Rebel T4i and started looking for locations. At first I considered taking a trip to either Yosemite National Park or Zion National Park. But luckily, I found an excellent location just 30 minutes away: Stoney Point in Chatsworth, CA.

Stoney Point is basically a giant pile of rocks, where many professional rock climbers come to train. I’ve gone there multiple times and always bring home a ton of great data to work with. I have surveyed this location extensively, including Gigapan/panoramic photography, HDR photography, video, and Agisoft scans.

An interesting story is how I risked my life to get a bird’s eye view panorama from the summit. I climbed up to the top (it’s not that hard), crawled up to the edge of the cliff on my belly, stuck my hand out as far as I could, and shot a handheld panorama looking 1200 feet down. One wrong move and it would’ve been all over! This picture should give you some idea of how high up I was, though it fails to convey the actual experience:

I won’t go into too much detail about the exact steps I took to create the environments. I could write a book about all the things I learned (i.e., mistakes I made). Instead, let me show you a progression:

Main lessons learned:

  • Whenever possible/appropriate, create a 3D asset instead of a 2.5D asset (matte painting projected onto low res geo). You’ll have several orders of magnitude more control with a 3D asset.
  • Resolution is king!! Use panoramic photography techniques to create textures 10K across or greater. The biggest panorama I shot was 65K across. (You can see a sliver of it in the background.) Don’t even bother going to cgtextures.com. What they call “Huge” is in fact a puny 5K image. That may be OK for games, but NOT for film. Shoot your own images whenever possible (see the stitching sketch just after this list).
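For what it’s worth, stitching your own big panoramas doesn’t require exotic software. Here is a minimal sketch using OpenCV’s stitcher, just to show the programmatic route; dedicated tools (and the Gigapan workflow) will handle really large panoramas better, and the file paths below are placeholders.

```python
import glob
import cv2

# Load the overlapping stills (placeholder path; shoot from a single nodal point).
paths = sorted(glob.glob("stoney_point/pano_row1/*.jpg"))
images = [img for img in (cv2.imread(p) for p in paths) if img is not None]

stitcher = cv2.Stitcher_create()   # OpenCV 4.x; older versions use createStitcher()
status, panorama = stitcher.stitch(images)

if status == 0:                    # 0 means Stitcher::OK
    cv2.imwrite("stoney_point_pano.jpg", panorama)
else:
    print("Stitching failed, status code:", status)
```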

 

So now PRIMITIVE is closer than ever to being finished! All that is left is rendering, compositing, and sound. Comments welcome!

Previs Techniques

In this article I’d like to compare a few different previs techniques and figure out which is the most efficient. Previs comes in many forms, including shot lists, storyboards, animatics, “toy commercials”, 3D previsualizations, physical blocking, virtual production, and anything in between. All these forms serve the same purpose: to prototype the filmmaker’s vision and to communicate it to others.

Wikipedia provides a good definition of the word “prototype”:

A prototype is an early sample or model built to test a concept or process or to act as a thing to be replicated or learned from.

What I want to stress here is that people create prototypes in virtually every human endeavor. Prototypes are so common that almost every profession calls prototyping by some other name: thumbnails (art), blocking (theater), software prototyping, rapid prototyping (industrial design), scale models (architecture), paper prototyping (game design), outlining (writing), etc. And as almost anyone in any field will tell you, creating and iterating on prototypes is the most creative part of making anything. The rest is just implementation.

Previs is a kind of prototype for filmmakers and animators. A previs is created to quickly and cheaply puzzle out most of the important aspects of a sequence, especially the action, camera moves, and editing. It is a video and audio plan. Personally, I’d never show up to the set without a previs. Frankly, the set is NOT the place to get creative–it’s the place to get shots done, because time is money on the set.

When creating a previs, it’s important to be able to iterate ideas quickly and to keep the process fun and fresh. But some previs techniques are more efficient than others. Which is best? I’ve made a list of the common previs techniques here. I’ve also (arbitrarily) rated them on their fun factor (is it fun working this way?) and relative efficiency (ability to iterate quickly) from 1 to 5, with 5 being the best score.

Shot Lists

This is the simplest kind of previs. Just make a list of the shots you’ll need. For some films a shot list is more than adequate. But for VFX films and animations, shot lists don’t convey enough information.

Requirements: pen and paper. Fun factor: 1. Efficiency: 5.

Storyboards/Animatics

Storyboards are the traditional form of previs, still preferred over other methods at some major studios, like Pixar. A good storyboard can convey a lot of information. An animatic, which is a storyboard edited together into a video with temp sound, can convey even more.

However, it’s probably not the most efficient way to make iterations. For example, making a camera change requires making a new drawing. Even if you draw very fast, it’s still a relatively slow method compared to some of the others.

Requirements: pen and paper, some drawing ability. Fun factor: 3. Efficiency: 3.

3D Animatics

A 3D animatic is created inside a 3D program by keyframing CG characters. Sometimes “stepped” keyframes are used for a more stop-motion look. This is the technique I used to create the previs for PRIMITIVE. From experience, I can tell you that this technique sucks. It is dead slow and NOT fun. I would not recommend it to anyone and I certainly would NEVER use this technique again! (Only exception is when the characters are almost static, like interview shoots.)

I’ve also seen 2.5D animatics, which is when you draw on flat “alpha cards”, arrange them in a 3D program and then do a virtual camera pass. (See the Ratatouille special features for an example.) Again, I can’t imagine the 2.5D animatic technique being very efficient either.

Requirements: 3D program and ability to model, rig, and animate characters. Fun factor: 1. Efficiency: 1.

“Toy Commercials”

This is a wonderful previs technique I recently discovered. Surprisingly, I’ve never seen this technique used in film production. (If you find any examples, please let me know!) But apparently many video game companies use it to prototype game levels. Here’s an example:

http://youtu.be/13rKR5JS7n8

What fun! The idea is to take a completely analog 3D approach. You can start with a sandbox. Literally a box filled with sand placed on a table. And fill it with toys, cardboard cutouts for buildings, and custom models made out of polymer oven bake clay (Sculpey). Then simply play with your “toys” and record the action with a small camera. I can’t possibly see a reason why this wouldn’t be fun and efficient. It’s also a good technique to use when you have no friends (like me) to help you with the previs. If you have kids, maybe you can get them in on the action!

I wish I knew about this technique when I was making the previs for PRIMITIVE. But back then I was so steeped in the “digital or die” mindset that I never even considered doing things in an analog way. I can only imagine now all the time I would have saved by simply pouring water over a toy Neanderthal’s head, instead of making a freaking fluid simulation. Idiot!

Requirements: a sandbox, toys, cardboard, armature wire, Sculpey, small camera. Fun factor: 5. Efficiency: 5.

Physical Blocking

This is an excellent technique. The idea is to block out the scene with your friends or actors and film them performing the action. This technique is used quite frequently by both Hollywood and indie filmmakers. For example, it was used on the Star Wars prequels for some sequences and is also a favorite of Freddie Wong. Highly recommended for people who have friends.

Requirements: friends or actors, space, props. Fun factor: 5. Efficiency: 5.

Virtual Production

This is the kind of previs that takes place on a mocap stage. Actors walk around in mocap suits and that data is instantly transferred into the 3D world. The director can follow the actors around with a monitor, which is treated as a virtual camera by the mocap system. What the director sees on the monitor is the 3D characters (which may not even look human) walking around in a virtual environment.

The director can even decouple blocking action from blocking camera moves. He can block the action first and then have the mocap played back later so he can walk around and block the cameras after the actors have gone home. Often when blocking the cameras this way, the relative motion of the camera will be scaled up so that a small movement of the preview monitor in the director’s hands can be translated to a 20 foot dolly move in the virtual world. Or the virtual camera can be offset so that the director can be sitting down in the back of the mocap stage and yet still be “following” the actors with a virtual camera. And of course, anything can be rapidly changed and rerendered, like props, costumes, or sets.
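To make the scaling and offsetting concrete, here is a toy sketch of the idea. The function name and numbers are made up for illustration; real virtual production systems expose this in their own ways.

```python
import numpy as np

def virtual_camera_position(tracked_pos_m, scale=20.0, offset_m=(0.0, 0.0, 0.0)):
    """Map a mocap-tracked monitor position (meters, stage space) to a virtual
    camera position. The scale turns a small physical move into a big dolly
    move; the offset lets the operator stand anywhere on the stage. All names
    and numbers here are illustrative, not any real system's API."""
    return np.asarray(tracked_pos_m) * scale + np.asarray(offset_m)

# A ~1 foot (0.3 m) push of the monitor becomes a ~20 foot (6 m) move in the virtual set.
print(virtual_camera_position([0.3, 0.0, 0.0]))
```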

It’s basically the coolest freaking previs system you can think of, but it’s also the most expensive. Nowadays, entire studios are devoted to virtual production, like The Third Floor. And some directors, like James Cameron, Peter Jackson, and Steven Spielberg, have begun working this way almost exclusively. It’s pretty awesome if you can afford it!

I should also mention that if your film is mostly CG, then you can forgo previs altogether if you use this technique. They call it virtual “production” for a reason: filmmakers who use this technique can literally MAKE the film in the blocking stage (Avatar, Tintin, Real Steel).

Requirements: a lot of money. Fun factor: 5. Efficiency: 5.

Summary

So what have we learned? We learned that some previs techniques are better than others. For example, storyboards and 3D animatics (or anything that requires “keyframing”) are less efficient and less fun than “toy commercials” and physical blocking. We also learned it’s a myth that only filmmakers with millions of dollars can afford “good previs”. While virtual production is certainly the coolest technique around, I’m not sure how well it stands up in a cost-benefit analysis against “toy commercials” and physical blocking. A previs is not about having the coolest visuals–it’s about figuring out your shots in the quickest, cheapest, and funnest way possible.

Finally, I would recommend that you prototype your ideas EARLY. This advice is echoed over and over in all professions. This means you can start doing previs as an aid to writing the screenplay!

Please comment! What previs techniques do you use? Did I fail to list some technique? Thanks.

Creativity and Puzzles

Bill Oberst commented on my previous post as follows:

I am only a layman but have read this and related posts carefully several times to try and understand the difficulties of the problem and the ingenuity of the solution. I am currently reading Jonah Berg’s “Imagine: How Creativity Works” and I wonder if either Oleg Alexander or William Lambeth approached this problem by stepping away from it and daydreaming, or did a more ‘constant-focus-on-the-problem’ process lead to the solution?

I’ll try to answer this excellent question in this post.

I devote a lot of thought to the subject of creativity. I’m constantly trying to stay conscious of and to streamline my own creative process. So far I’ve identified two kinds of creative problems: Puzzles and Wicked Problems.

Puzzles

Think of your favorite puzzle games: jigsaw puzzles, tangrams, untangle puzzles, unblock puzzles, matchstick puzzles, Cut the Rope, etc. Solving puzzles is a creative process. Puzzles are “easy” problems because they are well defined. For example, in tangrams, you must match a given silhouette using all 7 tangram pieces. Puzzles usually have a structure like: do X, given the constraints Y. (In computer science, puzzles are called constraint satisfaction problems.) The constraints are the key to a well defined puzzle problem. Just think of other problems which are not usually referred to as puzzles, but which have the same structure. For example, creative writing assignments: describe one of your family members, but the number of words you can use must match this family member’s age. Or Pictionary/Charades: describe a phrase by drawing it or acting it out. Improv/Whose Line Is It Anyway? fall into the same category.

Puzzles are solved by trial and error (aka optimization in computer science). It is rare that one solves a puzzle problem through one grand “inspiration”. It’s more like hundreds of tiny inspirations, each of which moves you closer to the optimal solution.

The bald cap wrinkling problem is an example of a puzzle problem. It is very well defined: Get rid of the goddamn wrinkles!

Now I will describe the creative process of solving the bald cap wrinkling issue as best as I remember it.

The first thing I tried was simple 2D tricks like using blur or median filters on the wrinkles. That looked like shit, so then I knew that the only way to get rid of the wrinkles was to replace the bald cap completely with a CG one. This meant two things: the head would have to be matchmoved and the lighting would have to match. Luckily, the matchmoving was going to happen anyway for a different reason: to warp the head to Neanderthal proportions. So the matchmoving pipeline was already developed and work on matchmoving was already in progress.

This left only the lighting issue. I made a list of possible ways to match the lighting:

  1. Manually (Too labor intensive.)
  2. Dig through old hard drives to find the chrome ball lighting reference from the shoot. (Too boring.)
  3. Optimize the lighting to match a target image. (Sounds like fun!)

I didn’t know exactly how I was gonna optimize the lighting, but I knew it was possible. After consulting with my PhD math whiz friends and coworkers and doing a few Google searches for things like “reverse engineer lighting”, I came upon “inverse lighting”, the technical term for what I was trying to do. The papers on inverse lighting basically confirmed the approach I was about to take: to solve the lighting using linear least squares. This kind of thing happens to me all the time: I “invent” a solution to a problem because I know it’s possible, and then do a Google search to find out what the problem is officially called and what potential solutions there are in the literature.

So I prototyped the oa Match Lighting tool and it worked! (Needless to say, the tool went through several iterations of its own, which I won’t go into here.) Now I had a viable solution for matching the lighting. All the pieces were in place. Time to do an actual test frame.

Next came the familiar iterative process of “tweaking”. When you’re tweaking, you know you’re almost at the end! There are usually no new surprise variables; you’re just tweaking existing ones. But as you’ll see, tiny inspirations can come even at the tweaking stage. I’ll post images here of the different iterations.

Here’s the raw problem image again, to keep the context.

Here’s the first iteration of the CG bald cap.

Looking at the first shitty iteration, you might think that William Lambeth and I were worried. But we were not, because we knew that in a few more iterations we’d get to a good place. William touched up the textures a bit more and this was the second iteration.

A bit better, but William still felt that the bald spot was not shiny enough. That it was too matte. It certainly didn’t match the greasy highlight on the nose. I explained to him that the problem was not the shininess of the material on the CG bald cap, but something bigger: the recovered lighting itself was too diffuse. The reason for this was that the material of the bald cap in the original raw image was already matte! Therefore any lighting recovered from a matte surface will also be very diffuse. “There’s nothing we can do about it,” I said. Luckily, William refused to accept my “logic” and sent me the following image in which he added a fake highlight in Photoshop.

After seeing this image, I realized that he was absolutely right: the bald spot must be made shinier. But how? It was at this moment that I had a flash of inspiration. If William can paint a fake highlight on the final image, then why can’t I paint a fake highlight on the target image for the lighting tool to match? I knew in that instant that adding a fake highlight to the target image would translate into my tool creating a bright “sun” light in the lightmap! So I tried it and of course it worked! So much for “nothing we can do about it”. Here’s the target image I used with an added fake highlight.

And here’s the final iteration we ended up with!

It is a fact that anything I post on this blog has gone through a similar creative process, and you only get to see the final result. But of course the fun is in the creative process itself!

Wicked Problems

Wicked problems are creative problems that are much harder than puzzles. What makes them hard is that in addition to not knowing what the solution is, you also don’t even know what the problem is! An example would be trying to come up with an idea for a screenplay. Where do you start? Anything goes, so the solution space is infinite.

Well, I like to think that the best place to start is with some constraints. Like nailing down the genre or theme. Maybe trying random configurations of genre and theme. Like “hazing…in space”. In other words, before a wicked problem can be solved, it must first be converted into a puzzle, a well defined problem. And it is VERY difficult to design a good puzzle!

No one has a magic formula for creativity. But currently I try to use the following formula:

CREATIVITY = CONSTRAINTS + RANDOMNESS
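Here’s a throwaway script that applies the formula to the screenplay example above: constrained lists plus a random pairing. The word lists themselves are just examples.

```python
import random

# Constraints: small, hand-picked lists (the entries are just examples).
genres = ["hazing", "heist", "courtroom drama", "coming of age", "survival"]
settings = ["in space", "underwater", "in ancient Rome", "inside a video game"]

# Randomness: smash the constrained pieces together and see what sparks.
for _ in range(5):
    print(random.choice(genres), random.choice(settings))
```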

Let me end with a quote from a Cinefex article about Avatar. When designing the world of Pandora, James Cameron said:

When you’ve got all these possibilities, when you can do any kind of action imaginable, you have to be very disciplined. Applying rigor and discipline to the process has been the biggest challenge, in fact. Early on, we came up with the principle of denying ourselves infinite possibility–which sounds wrong. You’d think you’d want to embrace the infinite possibility; but you don’t because you’ll never get there. Ever. We stood by the principle of making a creative decision in the moment, and never second-guessing it. And just by making that decision, we had eliminated possibility. Every single day was about eliminating possibility.


PRIMITIVE Bald Cap

As a followup to my previous post, here’s progress on the bald cap wrinkling fix.

Before:

After:

My inverse lighting tool (which is now called oa Match Lighting) worked like a charm! I used a blurred version of the original frame as my target image. I also added a fake highlight to make the head shinier. Pretty cool that I can just “paint” the kind of lighting I want and my lighting tool will match it! Here’s the target image I used:

Here’s the closest match preview from my tool:

And here’s the lightmap that my tool generated. I used this lightmap to render the bald cap model at high res. Then I composited the render over the original frame.
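For completeness, the comp at the end is the standard premultiplied “over” operation. Here’s a minimal NumPy version of just that step, ignoring everything a real compositing package handles (color management, grain, edge treatment); the variable names are only for illustration.

```python
import numpy as np

def over(fg_premult, fg_alpha, bg):
    """Standard premultiplied 'over': foreground plus background attenuated by
    (1 - alpha). All arrays are float images in [0, 1]; alpha is (H, W, 1)."""
    return fg_premult + bg * (1.0 - fg_alpha)

# comp = over(bald_cap_render, bald_cap_alpha, original_frame)
```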

Special thanks to William Lambeth for creating the model and textures for the bald cap. His input on this issue has been invaluable.

Comments welcome!

PRIMITIVE Inverse Lighting

Ok, this is really cool! (This post is a bit technical, but I tried my best to make it accessible to everyone. If something is not clear, please post a question in the comments.)

The Problem

Unfortunately, the bald cap is wrinkling in most shots and has to be completely replaced with a cleaned up CG version. Here’s an example of the wrinkling:

We are going to track the head geometry (based on a 3D scan of Bill Oberst, Jr.) per frame relative to the camera. (The actual head tracking is done by a custom geometry tracker written especially for PRIMITIVE. I will describe it in a future post.) Here’s a screenshot of what the head tracking looks like in Maya:

So all we have to do now is render out the clean CG bald cap and comp it over the original, right? Well, almost. Even though we have the camera and the head geometry, we still don’t know what the lighting is! In order to match the lighting in the original shot, I could take one of the following approaches:

  1. Match the lighting manually by placing lights around the CG bald cap and adjusting their colors. This would be a very labor intensive process.
  2. I actually took still Low Dynamic Range images of a chrome ball on the day of the shoot. Even though these are not true High Dynamic Range images, they could still be used for Image Based Lighting, especially if the clipped highlights are expanded. The only problem is that these images are now so far back in my pipeline that I would have to go digging through old hard drives to find them all. Boring.
  3. Write a tool which will estimate the lighting for me.

Guess which one I did?

Image Based Lighting

The tool I wrote is based around the idea of Image Based Lighting (IBL). For the uninitiated, here’s a very short explanation. IBL was introduced by my boss, Dr. Paul Debevec, in a Siggraph 98 paper called Rendering Synthetic Objects into Real Scenes. (I graduated high school that year!) The general idea is that if you’ve captured a High Dynamic Range panoramic environment of your scene, then you can use this environment as a spherical light source in an IBL capable renderer. Here’s an example environment (image from HDRLabs.com):

In IBL, we take this environment image and map it onto the inside of a giant sphere. We can then render objects placed inside this sphere using the environment map as a light source. Here’s an example of Bill’s head geometry lit by this environment:

You can actually see the blue light on his cheek coming from the TV screen.

Reflectance Fields

Reflectance fields were also introduced by Paul Debevec in a Siggraph 2000 paper called Acquiring the Reflectance Field of a Human Face. There’s an excellent explanation of reflectance fields in this short video:

http://people.ict.usc.edu/~debevec/Research/LS/debevec-imagebasedlighting-s2000.mov

The idea is that if you’ve captured or rendered your subject from all possible lighting directions, then it is possible to arbitrarily relight the subject by adding the images together in different proportions. In my case, the reflectance field (also known as a lighting basis) is one of the inputs to my program. I used a simple grid lighting basis, 16×8 pixels, for a total of 128 lighting conditions. Here’s the lighting basis I used:

 

I literally mapped this sequence of images onto a large sphere and rendered my object with IBL settings turned on in Mental Ray. Here’s the result:

As you can see, this produces 128 images, one image for each pixel in the 16×8 grid. Now let’s take our original environment of the room with the TV and scale it down to 16×8 pixels. Let’s call this the target lightmap.

Now let’s multiply each basis render by its corresponding pixel:

Now if we add up all these images together, we will get Bill’s face lit by the room’s environment. Simple, right? Let’s call this the target image:
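In NumPy terms, the multiply-and-add above is just a weighted sum. Here’s a minimal sketch, assuming the 128 basis renders and the 16×8 lightmap are already loaded as float arrays and that the basis images are ordered to match a row-major scan of the lightmap (the shapes are the important part; how you load them is up to you):

```python
import numpy as np

def relight(basis, lightmap):
    """Weight each basis render by its corresponding lightmap pixel and sum.

    basis:    (128, H, W, 3) float array, one render per lightmap pixel,
              ordered as a row-major scan of the lightmap (assumed here).
    lightmap: (8, 16, 3) float array, the 16x8 target lightmap.
    Returns the (H, W, 3) relit image.
    """
    weights = lightmap.reshape(-1, 1, 1, 3)    # (128, 1, 1, 3)
    return (basis * weights).sum(axis=0)       # weighted sum over all 128 lights

# target_image = relight(basis_renders, target_lightmap)
```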

Inverse Lighting

In the above case, the lighting (the image we called target lightmap) is known. But in the case of the bald cap, the lighting is unknown. Luckily, it is possible to reverse engineer the lighting through a process known as Inverse Lighting. Here’s a good paper about Inverse Lighting:

http://graphics.stanford.edu/~srm/publications/CIC97-invlig.pdf

My tool is simply an implementation of this paper in Python/Numpy. The tool takes as input the target image and the 128 basis renders. Assuming that the lighting in the target image can be reproduced by some combination of the basis images added together in different proportions, all we need to know is what those proportions are. The proportions (coefficients) are, in fact, the colors of the pixels in the estimated lightmap! My tool solves the coefficients (per channel) using a least squares solver and outputs an estimated lightmap, like this one:
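To give a flavor of the solve, here’s a bare-bones sketch of that least squares step in NumPy. This is an illustration of the idea, not the actual oa Match Lighting code; in particular, the mask argument and the clipping of negative coefficients are simple safeguards I’m adding here.

```python
import numpy as np

def estimate_lightmap(basis, target, mask=None):
    """Solve for per-channel lightmap coefficients so that the weighted sum of
    basis renders best matches the target image (linear least squares).

    basis:  (128, H, W, 3) float array of basis renders (row-major lightmap order).
    target: (H, W, 3) float array, e.g. the masked-out bald cap still.
    mask:   optional (H, W) boolean array of pixels to trust.
    Returns an (8, 16, 3) estimated lightmap.
    """
    n = basis.shape[0]
    if mask is None:
        mask = np.ones(target.shape[:2], dtype=bool)

    coeffs = np.zeros((n, 3))
    for c in range(3):                             # solve each channel separately
        A = basis[..., c][:, mask].T               # (num_pixels, 128)
        b = target[..., c][mask]                   # (num_pixels,)
        x, *_ = np.linalg.lstsq(A, b, rcond=None)
        coeffs[:, c] = np.clip(x, 0.0, None)       # lights can't be negative
    return coeffs.reshape(8, 16, 3)

# estimated_lightmap = estimate_lightmap(basis_renders, target_image)
```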

As you can see, this lightmap is very close to our target lightmap, especially in areas where we have data, like the front of the face. Areas where we don’t have any data, like the back of the head, are not close, but that doesn’t matter because we will only be rendering from one point of view!

Now let’s compare the original target image rendered by the known ground truth target lightmap:

With the result image rendered by the estimated lightmap produced by my tool:

As you can see, they are almost identical!

Now of course, this test data set is synthetic and therefore these results are ideal. In the real world case of the bald cap there will be error. For example, the target image will be a still of the bald cap (masked out of course) and the 128 basis renders will be of the CG bald cap, which may not be the exact same shape or color as Bill’s head. This will introduce some error, but we can be confident that the tool will produce the best possible lightmap approximation while trying to minimize the error.

So in summary, I just saved myself the countless hours of work it would have taken to match the lighting manually. I’m considering selling this tool. If you’re interested, please contact me.

I’m not sure if this will impress Paul Debevec. But considering that I failed Algebra twice, this will impress my mom. :)

PRIMITIVE Progress

I know it’s been ages since I’ve posted anything about PRIMITIVE. But that’s because I’ve been too busy making progress! In the past 6 months I’ve completed two critical milestones:

  • I fixed most of the vertical parallax issues in my stereo footage.
  • I finished the alpha cleanup on all the shots.

Vertical Parallax Fix

You may remember that PRIMITIVE was shot with the wrong beam splitter rig. This was my fault. I originally bought the cheaper version of the beam splitter rig from 3dfilmfactory, because I never imagined that I could later afford shooting with 2 RED cameras. The beam splitter rig that I got was never designed to hold 2 REDs, but somehow my DP managed to shove 2 REDs in there anyway. The result was a vertically misaligned 3D camera. So, I knew going in that I was gonna have vertical parallax problems. What I didn’t know was how big a deal it would be to fix this in post.

Here’s an example shot with really severe vertical parallax problems. Not all shots were this bad. The red lines connect corresponding points in the left and right images. Ideally, these lines are supposed to be completely horizontal. Notice that the problem can’t be fixed with a simple 2D shift because the vertical parallax is different for points close to the camera and points far from the camera. Looking at this image in 3D would give you a major headache because in order to achieve stereopsis your eyes would have to diverge vertically!

Here is the same shot with my vertical parallax fix applied. Notice how the same lines are almost horizontal now, which means that viewing this shot in 3D will be a pleasant experience.

So how does it work? Well, I tried several different “obvious” approaches, all of which failed. I became discouraged by the possibility that I would have to fall back to making the film in 2D. Then I said: “Let me try one last thing. If this doesn’t work, nothing will.” The idea I came up with was to convert each frame of the film into a low res 3D scan, like an animated “rubber sheet”. Then rerender the left view from a corrected position. Here’s a screenshot revealing the magic trick.
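To make the idea a bit more concrete, below is a rough sketch of a much simpler 2D relative of it: estimate dense left-to-right correspondences with OpenCV optical flow, keep only the vertical component, and warp the left eye so corresponding points end up on the same scanline. The actual fix goes through the 3D “rubber sheet” described above and is far more robust; treat this only as an illustration.

```python
import cv2
import numpy as np

def remove_vertical_parallax(left, right):
    """Warp the left eye so corresponding points share a scanline with the right
    eye. A crude 2D stand-in for the 3D rerendering approach, for illustration."""
    gl = cv2.cvtColor(left, cv2.COLOR_BGR2GRAY)
    gr = cv2.cvtColor(right, cv2.COLOR_BGR2GRAY)

    # Dense left -> right flow; flow[..., 1] is the per-pixel vertical parallax.
    flow = cv2.calcOpticalFlowFarneback(gl, gr, None, 0.5, 5, 31, 3, 7, 1.5, 0)

    h, w = gl.shape
    xs, ys = np.meshgrid(np.arange(w, dtype=np.float32),
                         np.arange(h, dtype=np.float32))
    # Backward warp: each output pixel pulls from the left-eye row whose content
    # should land on this (right-eye-aligned) scanline.
    map_x = xs
    map_y = ys - flow[..., 1]
    return cv2.remap(left, map_x, map_y, cv2.INTER_LINEAR)
```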

I won’t go into the highly technical implementation details. Needless to say, it took me months to write the custom computer vision software required for this to work. Luckily, it did work and the film is still going forward in 3D! And by the way, lesson learned. Next time I’ll get it right in camera!

Alpha Cleanup

Whereas fixing the vertical parallax was a creative challenge, cleaning up the alpha channels was a monotonous, never-ending hell. The task was commonplace: isolate the subject from the background using a combination of chroma key, paint, and rotoscoping. Here’s an example shot.

And here is the cleaned up alpha channel. Having this alpha channel allows us to composite the kids over a new background, like a sky.

Most of the shots were easy because all I had to do was chroma key the blue screen. Sometimes the chroma key had a few holes in it which I filled with a paint brush. But sometimes there would be severe blue spill from the cliff onto the Neanderthal’s body. Or sometimes the Neanderthal’s fingers would go outside the blue screen. In such cases I had to resort to my new mortal enemy: rotoscoping. Here’s an example of a particularly evil roto shot. This shot alone took me weeks to complete.
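The keying part, at least, is conceptually simple. Here’s a toy blue-screen key in NumPy/OpenCV just to show the basic idea; a real keyer does far more (spill suppression, edge treatment), and the threshold numbers here are arbitrary.

```python
import cv2
import numpy as np

def simple_blue_key(frame_bgr, threshold=20.0, softness=40.0):
    """Toy blue-screen key: alpha drops as blue starts to dominate red and green.
    Threshold/softness are arbitrary illustration values, tuned per shot."""
    b, g, r = cv2.split(frame_bgr.astype(np.float32))
    blueness = b - np.maximum(g, r)            # how strongly blue "wins"
    # Linear ramp: fully opaque below `threshold`, fully transparent above
    # `threshold + softness`.
    alpha = 255.0 * (1.0 - np.clip((blueness - threshold) / softness, 0.0, 1.0))
    return alpha.astype(np.uint8)              # 255 = subject, 0 = blue screen

# alpha = simple_blue_key(cv2.imread("neanderthal_frame_0001.png"))
```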

The thing is, I fully anticipated having to do this kind of work even before shooting. But I badly underestimated how long roto actually takes. And I overestimated how much free help I would get from others. I ended up doing most of this work myself, with some help from two other people. So, another hard lesson learned: next time, allocate money in the budget for roto! In fact, I would recommend this to any independent filmmaker: allocate money in the budget for all non-creative tasks. I’m just glad it’s over.

Next Steps

The next steps are as follows:

  • 3D track the Neanderthal’s head geometry. This will allow us to warp Bill Oberst’s head to Neanderthal proportions. This head warping effect will be what makes or breaks this film, and it could still go either way! We’ve developed a badass custom pipeline for the head tracking and I can’t wait to use it.
  • Fix the bald cap wrinkling issues. The head tracking will help with that, too.
  • Model, animate, and render props, like the nest and a more menacing branch.
  • Finish 3D environments. This is already progressing nicely.
  • Finish sound design. Can be completed in parallel with the VFX work.
  • Composite all the elements!

It’s clear that I will not finish all this work by the end of 2011, like I originally wanted to. But I can finally see the light at the end of the tunnel. It may be just a speck, but it’s there.

Games with a Purpose

Don’t miss this excellent lecture about human computation by Luis Von Ahn! You can also play the games here. ESP game is my favorite :)

 

PRIMITIVE ADR

This weekend we recorded all the ADR (automated dialog replacement) for PRIMITIVE. The recording took place in the spacious sound studio at Puget Sound, owned by supervising sound editor Joe Milner. We recorded all 3 actors in one day: Bill Oberst Jr. in the morning and the boys, Brendon Eggertsen and Christopher Mastandrea, in the afternoon. The material has now been sent to my very talented sound designer, Ken Showler, who will edit the ADR.

This was my first ADR session as a director and I learned a lot! Big thanks to everyone who made this session possible!

PRIMITIVE Environments Progress

Check out the latest progress from Salvador Cueto, the Environment Lead on PRIMITIVE! From what I understand, there are several 4K textures on this model of the cliff. The grass is fur, I think. Rendered in Mental Ray.