Tabletop AR simulator in VR

This is a little VR environment I made that simulates a 3D display on a tabletop. Think of Chewie and R2-D2 playing Dejarik on the Millennium Falcon in Star Wars. I wanted to get an idea of how such a system might look and feel, and to prototype for any future hardware being developed. The simulator actually clips the 3D content to the projection surface on the table, which should be similar to the way an actual device would work.

I had some fun getting this to work in Unity3D. It uses the SteamVR plugin, and the clipping effect is layered on with a set of custom shaders that make use of the stencil buffer. All of these shaders run before the opaque geometry queue, and they end up setting up a hole in the depth buffer that is the only place subsequent geometry can render.

This is done in five steps.

  1. In the Geometry-3 queue, the simulated retroreflective display surface on the table is rendered to the stencil buffer. Color and depth buffer writes are turned off for this shader; it only creates a mask that is used by later rendering stages.
  2. In the Geometry-2 queue, the room features like the table and floor are rendered. This shader doesn't draw over the area masked by the stencil, which leaves a hole on the simulated table top.
  3. Also in the Geometry-2 queue, a background is rendered using a quad aligned perpendicular to the camera at a distance. (I'll get to the reason for this a little later.)
  4. In the Geometry-1 queue, another quad is rendered, aligned perpendicular to the camera and right up against the near clipping plane. This is also masked by the stencil, and it writes to the depth buffer but not to the color buffer. So it doesn't display anything, but it does set the z-buffer to the minimum value everywhere except over the table top.
  5. All the rest of the scene objects are assumed to be on the table top. They are rendered in the standard way. Any fragment in the field of view that falls outside the table-top display area will fail the depth test.
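As a rough sketch, the mask shaders for steps 1 and 4 might look something like this. The shader names, the stencil `Ref` value, and the exact render states are my assumptions, not the project's actual source; in practice each `Shader` block would live in its own file, and step 2's room shader would just add a `Stencil { Ref 1 Comp NotEqual }` test to an otherwise ordinary shader.

```shaderlab
// Step 1: write the display surface into the stencil buffer only.
// Queue "Geometry-3" runs before the room and content geometry.
Shader "Custom/TableTopStencilMask"
{
    SubShader
    {
        Tags { "Queue" = "Geometry-3" }
        Pass
        {
            ColorMask 0   // no color writes
            ZWrite Off    // no depth writes
            Stencil
            {
                Ref 1
                Comp Always
                Pass Replace   // stencil = 1 wherever the surface is visible
            }
        }
    }
}

// Step 4: invisible near-plane quad that writes minimum depth
// everywhere the stencil is NOT set, sealing off the rest of the view.
Shader "Custom/NearPlaneDepthMask"
{
    SubShader
    {
        Tags { "Queue" = "Geometry-1" }
        Pass
        {
            ColorMask 0   // depth writes only, nothing visible
            ZWrite On
            Stencil
            {
                Ref 1
                Comp NotEqual   // skip the masked table-top area
            }
        }
    }
}
```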


A simpler way of doing this would have been to skip steps 3 and 4 and just do step 5 using another custom shader that only renders on top of the stencil mask. That, however, would have meant creating custom shaders for all the content, including things like particle effects. The method above means that no custom shaders are required for the main content.
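For comparison, here is a sketch of the stencil test every content shader would have needed under that simpler approach (again, this is my illustration of the rejected alternative, not code from the project):

```shaderlab
// Hypothetical alternative: each content shader tests the stencil
// so it only draws where the display surface wrote a 1.
SubShader
{
    Tags { "Queue" = "Geometry" }
    Pass
    {
        Stencil
        {
            Ref 1
            Comp Equal   // draw only over the masked table-top area
        }
        // ... the shader's normal lighting / texturing code ...
    }
}
```

Retrofitting that block into every standard, surface, and particle shader is exactly the maintenance burden the depth-mask trick avoids.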

This works but it has a few weird side effects.

The first side effect is the reason the background quad is needed. Normally the background would be cleared by the skybox, which is rendered last and draws on all areas of the screen that were not previously rendered in the current frame, i.e. the areas where the depth buffer is at the maximum distance. However, step 4 above sets much of the depth buffer to the near value without actually drawing anything, so those areas of the screen are never cleared and may contain artifacts from the previous frame. The background quad in step 3 basically clears the screen before the depth mask is written.
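The background quad itself can be very simple. A minimal sketch, assuming a flat fixed-function color (the real version might sample a texture instead):

```shaderlab
// Step 3: unlit backdrop quad, drawn in Geometry-2 so the whole view
// is covered with fresh color before the near-plane depth mask lands.
Shader "Custom/BackgroundQuad"
{
    Properties
    {
        _Color ("Background Color", Color) = (0, 0, 0, 1)
    }
    SubShader
    {
        Tags { "Queue" = "Geometry-2" }
        Pass
        {
            Color [_Color]   // legacy fixed-function flat color
        }
    }
}
```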

The second side effect has to do with the way Unity draws shadows. The stencil buffer method keeps the geometry from being rendered, but it doesn't keep it from casting shadows, which creates weird shading artifacts on the table sides and floor. I fixed this by making the table and floor not receive shadows. This works fine for my purposes, and I'm not sure what a more correct solution would be. Maybe when the Scriptable Render Pipeline comes along in Unity, two different shadow passes could be done.

Here is the source for the various shaders. I don't have a lot of shader-fu, so these are just tweaked from existing examples that I found.