Meat Shift was the first game our studio made as a group. We all come from strong backgrounds in 2D art, so for our first 3D project we decided to try to mimic a 2D style. This let us supplement our 3D models with 2D art for some of the more detailed & complex setpieces. The game is specifically intended to evoke the idea of a charcoal drawing. We chose charcoal both because it created a smoky, dreamlike feel appropriate for the game's mood, and because it allowed us to work mostly in black and white. Meat Shift was created for a game jam, so an art pipeline that let us build assets quickly was a must-have.

The most critical piece of tech art for our style was a postprocessing setup that applied the charcoal textures to the scene. This setup consisted of two rendering passes - an initial pass to create the base texture used for the overlay, then a postprocessing pass on the player's actual viewpoint to apply the overlay. Because we were working in Unity, we used a separate camera for each pass.

The first camera sat at the origin and was surrounded by a textured sphere. This let us roughly emulate a cubemap, allowing the player to see the charcoal texture remain consistent as they turned their head. This technique was inspired by the dithering overlay in Return of the Obra Dinn, which uses a similar approach to avoid disorienting the player. The actual material on the sphere used a really simple shader that packed three different black & white charcoal textures into a single output, one per RGB channel. We'd have just done the texture packing in Photoshop, but we wanted the ability to swap textures and adjust tiling more quickly in-engine. This first camera rendered to a RenderTexture the same size as the player's screen.

We then took that texture and used it as an input for the second step - a postprocessing shader on the main scene camera. This shader used three texture inputs - the initial render of the player's camera, the charcoal texture, and a paper-textured bump map.

Setting up texture inputs
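In rough sketch form, the inputs might be declared like this - every name here, along with the _PaperStrength control, is an assumption rather than the exact jam code:

```hlsl
// Hypothetical sketch of the effect's inputs; all names are assumptions.
Properties
{
    _MainTex ("Scene Render", 2D) = "white" {}        // filled by OnRenderImage
    _CharcoalTex ("Packed Charcoal", 2D) = "gray" {}  // RenderTexture from the sphere camera
    _PaperTex ("Paper Bump", 2D) = "bump" {}          // paper-textured bump map
    _PaperStrength ("Paper Strength", Float) = 0.01   // assumed distortion control
}

// Inside the CGPROGRAM block:
sampler2D _MainTex;
sampler2D _CharcoalTex;
sampler2D _PaperTex;
float4 _PaperTex_ST;   // separate tiling/offset for the paper UVs
float _PaperStrength;

struct v2f
{
    float4 pos : SV_POSITION;
    float2 uv : TEXCOORD0;       // screen UV
    float2 uvPaper : TEXCOORD1;  // extra UV slot so the paper can tile independently
};
```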
Once we've set up our inputs (making sure to provide an extra UV slot for the paper map so we can tile it differently if needed), we can move on to the actual meat of the shader.

Taking texture samples
This part consists of three steps.
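A hedged reconstruction of those steps follows; exactly how the paper bump map feeds into the other samples is our guess:

```hlsl
// Inside the fragment function - names are assumptions, and so is the way
// the paper bump map perturbs the charcoal lookup.
// (UnpackNormal comes from UnityCG.cginc.)

// 1. Sample the scene as rendered by the main camera.
fixed4 sceneColor = tex2D(_MainTex, i.uv);

// 2. Sample the paper bump map on its independently tiled UV set.
float3 paperNormal = UnpackNormal(tex2D(_PaperTex, i.uvPaper));

// 3. Sample the packed charcoal render, nudged by the paper grain.
fixed3 charcoal = tex2D(_CharcoalTex, i.uv + paperNormal.xy * _PaperStrength).rgb;
```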
(We also sample the charcoal texture using the original screen UV, for use later.)

Overlaying charcoal
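Continuing the sketch from above - the luminance weights are the standard Rec. 601 constants, which are presumably the "magic numbers" in question:

```hlsl
// Luminance from the scene sample ("magic numbers" = Rec. 601 weights).
float lum = dot(sceneColor.rgb, float3(0.299, 0.587, 0.114));

// Breakpoints at 33% and 66% light.
const float bpLow = 1.0 / 3.0;
const float bpHigh = 2.0 / 3.0;

// Blend between the packed textures: R (darkest) is strongest at 0%
// luminance, G (medium) at 33%, B (lightest) at 66% and above.
float darkToMid  = saturate(lum / bpLow);
float midToLight = saturate((lum - bpLow) / (bpHigh - bpLow));

float charcoalValue = lerp(charcoal.r, charcoal.g, darkToMid);
charcoalValue = lerp(charcoalValue, charcoal.b, midToLight);
```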
Here, we take our scene color sample and calculate its luminance using some magic numbers. After this, we calculate two luminance breakpoints for our scene - one at 33% light and one at 66% light. Then it's just a matter of taking the charcoal textures we packed way back when and using some simple lerping to figure out which one should be most visible at our level of light. In this scenario, the red channel (the darkest texture) is strongest at 0% luminance, the green channel (a medium texture) is strongest at 33% luminance, and the blue channel (a light texture) is strongest at 66% luminance.

Final steps
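To wrap up the fragment function - a sketch, with the redness measure being our guess at the approach described just below:

```hlsl
// How much redder is this pixel than its other channels? (assumed measure)
float redness = saturate(sceneColor.r - max(sceneColor.g, sceneColor.b));

// Red objects punch through the charcoal overlay in full color.
return fixed4(lerp(charcoalValue.xxx, sceneColor.rgb, redness), 1.0);
```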
We wanted to add some extra visual flair to our game, so we decided that red objects ought to show up in full color. This was a simple addition - instead of just returning the charcoal output, we lerped between it and the base color based on how red the object we were looking at was.

C# Implementation

If you watched the breakdown video above, then read this article, you might notice a couple of missing features. These were handled in the MonoBehaviour used to control the postprocessing shader. (Quick explainer for people not used to Unity's postprocessing setup: if you want to apply a shader image effect to a camera, you need to attach a script to that camera that manually processes the image every time the camera renders. This happens in a function called OnRenderImage(), and there's plenty of documentation about it available, so I won't go over it here.)

The features handled in our C# script mostly involve manipulating the charcoal and paper textures. The charcoal textures exist on a physical sphere in the Unity scene, so to create a jittery, hand-animated effect we just rotate the sphere randomly every so often. The paper texture has its own UV tiling in the charcoal shader, so we can jitter that manually on the same timing as the charcoal to make the whole thing look like part of the same effect.

Moving textures
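A sketch of what that controller script might look like - all names and the jitter timing are assumptions:

```csharp
using UnityEngine;

// Hypothetical sketch of the controller script; names and timing values
// are assumptions, not the original jam code.
public class CharcoalEffect : MonoBehaviour
{
    public Material charcoalMaterial;  // runs the postprocessing shader
    public Transform charcoalSphere;   // textured sphere around the charcoal camera
    public Camera charcoalCamera;      // sits at the origin inside the sphere
    public float jitterInterval = 0.25f;

    RenderTexture charcoalRT;
    float jitterTimer;

    void Update()
    {
        jitterTimer += Time.deltaTime;
        if (jitterTimer >= jitterInterval)
        {
            jitterTimer = 0f;

            // Randomly re-orient the charcoal sphere for a hand-animated jitter...
            charcoalSphere.rotation = Random.rotation;

            // ...and jitter the paper UV offset on the same beat so both
            // layers read as one effect.
            charcoalMaterial.SetTextureOffset("_PaperTex",
                new Vector2(Random.value, Random.value));
        }
    }

    void OnRenderImage(RenderTexture src, RenderTexture dest)
    {
        // Match the charcoal camera to the main camera's rotation, then
        // render it manually so the two views can't drift out of sync.
        charcoalCamera.transform.rotation = transform.rotation;
        charcoalCamera.Render();

        charcoalMaterial.SetTexture("_CharcoalTex", charcoalRT);
        Graphics.Blit(src, dest, charcoalMaterial);
    }
}
```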
We also move the camera used to render our charcoal texture to match the rotation of our main camera. It's notable that we only render the charcoal camera manually, inside this function - this makes sure there's no weirdness from mismatched rotations.

The other bits
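As a guess at the remaining setup, rounding out the sketch above (the depth buffer size and the disabled-camera trick are assumptions):

```csharp
// Hypothetical setup for the same class as above.
void Start()
{
    // Screen-sized target, matching the main camera's output.
    charcoalRT = new RenderTexture(Screen.width, Screen.height, 16);
    charcoalCamera.targetTexture = charcoalRT;

    // Disable the component so the charcoal camera never renders on
    // its own - we only ever call Render() from OnRenderImage.
    charcoalCamera.enabled = false;
}
```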
One last (nearly unused) feature: I didn't want to put the article out without mentioning this, even though it barely showed up in the actual game. One of the things about hand animation that we wanted to carry over into our game was the jittery, shaky feeling of movement. Jiggling the charcoal and paper textures got us part of the way there, but we wanted to push it further.

The solution we came up with was to mimic a quirk of PS1 hardware: vertex snapping. The PS1 had very limited graphical power, and it could only place vertices at exact pixel positions in screen space. Given that the hardware output at 320x240, this limited set of positions would often visibly distort models, particularly at longer distances. Moving the camera would also cause vertices to pop between positions as the closest pixel to their true position changed, creating a distinctive jitter effect.

It's not hard to find prebuilt shaders out there that mimic PS1 graphics very closely, and as our charcoal effect existed entirely as a postprocessing pass, they would have been completely compatible with our graphics pipeline. However, we really wanted the ability to use Unity's full surface shader feature set, including per-pixel lighting and realtime shadows. That took prebuilt shaders off the table - they only used vertex lighting and (very occasionally) baked lighting.

Another roadblock is that vertex functions are handled differently in surface shaders than in plain vertex/fragment shaders. A vertex/fragment shader's vertex function outputs its vertices in clip space, making vertex snapping trivial with built-in Unity shader functions. A surface shader's vertex function sits as a sort of intermediary step between the inputs and Unity's internal vertex function, meaning it needs to output vertices in the original object space. Unity doesn't have a built-in function for translating a position back from clip space to object space, so we needed to improvise.

THE SOLUTION
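On the C# side, the gist is to build a world-to-clip matrix by hand and pass it to the shader along with its inverse. A minimal sketch, with all names assumed:

```csharp
using UnityEngine;

// Hypothetical sketch: hand the vertex-snapping shader a matrix into
// camera (clip) space and one back out, ignoring Unity's built-ins.
[RequireComponent(typeof(Camera))]
public class VertexSnapMatrices : MonoBehaviour
{
    void OnPreRender()
    {
        Camera cam = GetComponent<Camera>();

        // World -> clip, built manually from the camera's own matrices.
        Matrix4x4 worldToClip =
            GL.GetGPUProjectionMatrix(cam.projectionMatrix, false)
            * cam.worldToCameraMatrix;

        // Pass both directions globally so every snapped material sees them.
        Shader.SetGlobalMatrix("_WorldToCamMatrix", worldToClip);
        Shader.SetGlobalMatrix("_CamToWorldMatrix", worldToClip.inverse);
    }
}
```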
The C# part of this solution was partially sourced from this Unity Answers thread. The OP of that thread had issues managing a script-generated matrix alongside Unity's built-in projection matrices. As such, our solution was to simply ignore the built-in matrices and pass two matrices of our own to the shader - one to convert our vertices to camera space, and one to convert them back.

The solution (HLSL)
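A sketch of the corresponding vertex function - the snap resolution and all names are assumptions:

```hlsl
// Hypothetical reconstruction of the snapping vertex function.
float4x4 _WorldToCamMatrix;   // set from C# every frame
float4x4 _CamToWorldMatrix;
float2 _SnapResolution;       // e.g. (320, 240) for the PS1 look

void vert (inout appdata_full v)
{
    // Object space -> world space -> clip space via our own matrix.
    float4 worldPos = mul(unity_ObjectToWorld, v.vertex);
    float4 clipPos = mul(_WorldToCamMatrix, worldPos);

    // Snap to a limited grid of screen positions.
    clipPos.xyz /= clipPos.w;                                          // perspective divide
    clipPos.xy = round(clipPos.xy * _SnapResolution) / _SnapResolution; // pixel snap
    clipPos.xyz *= clipPos.w;                                          // undo the divide

    // Clip space -> world space -> back to object space for Unity.
    worldPos = mul(_CamToWorldMatrix, clipPos);
    v.vertex = mul(unity_WorldToObject, worldPos);
}
```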
The end result is fairly readable - we just convert the vertex to world space, apply our first matrix, do the pixel snapping calculations, apply our second matrix to bring it back to world space, then convert the vertex back to object space and hand it to Unity. There is one issue here - this shader receives shadows perfectly well, but it casts shadows without any of our extra vertex snapping applied. To fix that, we need to add a shadow caster pass. I'm going to leave the code for that here as a resource. It's game jam code, so instead of wrapping the snapping process in a function and calling it from both passes, it's copy-pasted - but otherwise it's perfectly functional and demonstrates how to apply vertex adjustments to an object's cast shadow.

Shadow Pass
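A sketch of that pass, following Unity's standard shadow caster macro pattern, with the snapping copy-pasted in jam-style:

```hlsl
// Hypothetical sketch of the extra pass; the snapping code is the same
// as in the vertex function above, duplicated rather than factored out.
Pass
{
    Tags { "LightMode" = "ShadowCaster" }

    CGPROGRAM
    #pragma vertex vert
    #pragma fragment frag
    #pragma multi_compile_shadowcaster
    #include "UnityCG.cginc"

    float4x4 _WorldToCamMatrix;
    float4x4 _CamToWorldMatrix;
    float2 _SnapResolution;

    struct v2f
    {
        V2F_SHADOW_CASTER;
    };

    v2f vert (appdata_base v)
    {
        // Same snapping as the main pass, applied before the shadow
        // transfer so the cast shadow jitters along with the mesh.
        float4 worldPos = mul(unity_ObjectToWorld, v.vertex);
        float4 clipPos = mul(_WorldToCamMatrix, worldPos);
        clipPos.xyz /= clipPos.w;
        clipPos.xy = round(clipPos.xy * _SnapResolution) / _SnapResolution;
        clipPos.xyz *= clipPos.w;
        worldPos = mul(_CamToWorldMatrix, clipPos);
        v.vertex = mul(unity_WorldToObject, worldPos);

        v2f o;
        TRANSFER_SHADOW_CASTER_NORMALOFFSET(o)
        return o;
    }

    float4 frag (v2f i) : SV_Target
    {
        SHADOW_CASTER_FRAGMENT(i)
    }
    ENDCG
}
```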
Sadly, we didn't end up using this for most of the objects in Meat Shift. Replicating all the functionality of the Unity standard shader takes time, and we didn't have enough of it to make a new variant every time we needed a different form of transparency or texture map. It's present on a few of the props (I think the sausages and the wax cylinder in the break room both use it), but otherwise we just stuck with the standard shader and let the post-processing do all the work.
That's basically what I've got for this post. Meat Shift was a fairly short project, so our style for it was built out quickly and with a focus on simplicity in the art pipeline. I think it was fairly successful - for a small amount of time invested, we got a distinctive style that meshed very well with our story and gameplay.