Werewolfinator

A guide to the possibilities and pitfalls of trying to make an AAA Spark AR filter

Matt Greenhalgh
26 min read · Oct 26, 2020

This is a detailed, end-to-end technical write-up of a personal Spark AR project I created for Halloween 2020 using Spark AR Studio. If you’re interested in building advanced, patch graph-based Spark AR effects using shader, lighting and material techniques, I hope you’ll find it interesting and useful. I will also share some patch graph assets for the committed reader that you can use in your own projects. The content is pitched at Advanced to Expert level users but I’ll include links to useful background reading on key concepts.

A brief Table of Contents:

  • Motivation
  • Modelling
  • Texturing
  • Spark AR Patch configuration
  • Subsurface Scattering shaders
  • Anisotropic Hair shader
  • Eyeballs
  • Dynamic Environment Lighting
  • Lessons Learned
  • Conclusion
  • Free Swag

Disclaimer: this is a personal, non-commercial work and any opinions expressed do not reflect the views of Stink Studios.

Motivation

One of the big challenges facing AR in 2020 is realistic integration with the real-world environment. Making virtual objects look like they’re part of the world around you requires consistent lighting informed by the light conditions of the ambient environment. Capturing this lighting is no easy task. The camera presents limited information about the world at any given time, and extracting lighting information from unknown objects and materials is a hard problem to solve.

This is not an impediment, however, to the many Creators who use Spark AR to make Face Effects: its standard lighting model is perfectly acceptable for the vast majority of uses. Spark AR offers a more advanced Physically-based Material but, as I’ve written in the past, it contains a bug that prevents its use in realistic contexts. However, with the advent of Renderpasses in Spark AR Studio, new possibilities have opened up to capture real-world environment lighting and use it in AR scenes to create highly realistic integration of virtual objects into the real world. I wanted to attempt to solve this lighting challenge in a test project that could also showcase other advanced material shaders taken from AAA real-time game shader theory.

I chose a Werewolf transformation as a test case. It would offer up a number of interesting challenges: blend shape deformation of the base face mesh, rigging of mouth elements, fur growth and material creation, sub-surface scattering of light within the ears and advanced eye shader capabilities. Also, Halloween was around the corner so it would be a suitably scary way to take advantage of the real-world lighting capabilities if I could integrate them.

Modelling

I’ll summarise the modelling and texturing process as it was fairly standard for this kind of workflow.

I began by capturing reference artwork and anatomical diagrams to aid the modelling and texturing process. I didn’t follow any particular concept art but drew inspiration from a variety of sources. One early decision, which would have ramifications for the remainder of the project, was to base the model on an anatomically-correct hybrid of human and wolf skulls. Using Blender I created a 2D mesh outline traced off a human skull, plus a blend shape variant taken from a wolf skull, and then set the blend factor to an intermediate state of ~50% to act as a guide for modelling the fictional shape of a werewolf skull.

I sculpted a very rough skull for this hybrid and attached soft tissue elements like muscles and the nose to act as a base on which to establish the face mesh blend shape. Using a combination of the Shrinkwrap Modifier and a fair amount of manual coaxing I created a displaced version of Spark AR’s Face mesh asset that would be the Werewolf’s face.

Spark AR filters generally take one of two approaches to transformation effects: they either distort the face only and don’t attempt to manipulate the surrounding head or they obscure the head altogether with a larger mesh that hides most hairstyles. As my model contained a wolf-like skull that was smaller than the equivalent human skull I had a challenge: the top portion of the human head would remain visible above the wolf skull. Equipped with some ideas about squashing the camera feed texture and hiding things with fur I committed to a separate head model blend shape that would sit behind the face mesh. Finally I modelled human and werewolf blend shape ears aligned to the surface of, but separated from, the head mesh.

Modelling the mouth area presented quite a few challenges and now is as good a time as any to confess that, while I’m an intermediate modeller and texture artist, I don’t have a lot of experience rigging face meshes for animation or creating blend shapes. However, equipped with a broad understanding of the principles, I modelled the gums and soft palate of both Human and Werewolf blend shapes and rigged them with a simple two-bone armature that rotated at a broadly shared axis running through the jaw bone region. The upper and lower teeth and tongue were parented to the same rig and could be animated together based on a single jaw bone rotation. I added some deforming saliva strands too as a detail that could be selectively shown within Spark AR.

Rig used to control ear rotation, fur displacement around eyebrows, jaw and neck rotation

The basic jaw rig was supplemented with neck and ear bones to counteract neck mesh rotation and ear rotation respectively and eyebrow bones aligned to the exposed tracking positions used by Spark AR. Note that the official asset featuring red dots has some errors in it. Here is one I’ve generated that is correctly aligned to the exposed positions that you can use:

Pink Dots are correct exposed Spark feature points

For hair card placement I used the Blender HairTool AddOn to establish rows of hair traversing the top and rear of the head and around the jaw line. Arranging the hair in rows in this way was very helpful when it came to Alpha Blending in Spark AR (more on that later).

I only modelled a single eye asset but established scale and position offsets for the Human and Werewolf variants in Blender prior to exporting to Spark AR.

Finally, prior to texturing, I created a high-res sculpt of the face mesh featuring wrinkle detail to bake down to a Normal Map.

Finished mesh asset prior to exporting

Texturing

I use Substance Painter for texturing and I’ve developed a physically-inspired model for building up skin textures. In essence it just creates layers beginning with bone and working up through cartilage, musculature, fat, sub-dermis, capillary and vein, dermis, epidermis, and then surface layers like melanin, age spots, hair follicles and the like before adding separate layers to control roughness and normal map displacement. By using fill layers for each, and using masks to adjust their blending, I can create a lot of subtlety in the skin tone and the way the skin can look flushed or pallid.

I generated the textures at 2K but exported at 1K for use in Spark AR. I use a custom-made export format that’s very similar to the official Spark AR one but packs a shininess term into the Alpha channel of the ORM output.

Montage of layers contributing to Basecolor Texture
Grrrrrr!

Spark AR Patch configuration

So how do you go about building a project like this in Spark AR? I’ve laid out the steps above as a chronological process but in reality there was lots of iteration back and forth. It’s critical that you export meshes early on to check they import successfully into Spark and so I spent lots of time validating mesh import settings, scale and position against the tracked face. Textures also saw multiple iterations to adjust for roughness, normal intensity and the like.

Here’s the final patch graph that contains all the shader logic, material configuration and drivers for bone animation. It’s fair to say this is one of the bigger projects I’ve built.

I like to arrange commonly referenced values like device dimensions and utility textures in a stack on the left and use Send nodes to route them out to wherever is needed. Materials consume the bulk of the space in the middle. Transform manipulations occupy the columns on the right. Environment capture is configured at the top, and the large group at the bottom feeding into two yellow material Albedo slots is the eye shader material.

I used a variety of custom patch groups to aid functionality; there’s a full list here, but I’ll describe some of the highlights in more detail below.

Subsurface Scattering shader for ears and skin

One of the techniques I was keen to implement was Subsurface Scattering. This phenomenon, seen in a variety of materials from wax, to marble, to human skin, occurs when a proportion of light hitting a surface isn’t immediately reflected back or entirely absorbed but penetrates the surface, scatters against internal structures like cell walls, and bounces back out at a different location, often with a different colour because some wavelengths have been absorbed along the way. The classic example is people’s ears photographed with the sun directly behind them: they appear to glow red as the blood absorbs other wavelengths before the light forward-scatters out of the front of the ears. I adopted a variant of the technique presented here, combined with a custom falloff volume defined around the ears to moderate light penetration on thicker mesh areas like the base of the ears where they join the skull. This isn’t an especially generalised solution and requires some per-mesh tuning but gives reasonable results.
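To make the idea concrete, here’s a minimal sketch of a cheap back-lit transmission term in the spirit of that approach (this is the widely used Barré-Brisebois/Bouchard-style approximation rather than the exact patch group in the filter; all the constants are placeholders):

```python
import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v)) or 1.0
    return tuple(c / length for c in v)

def saturate(x):
    return max(0.0, min(1.0, x))

def fake_transmission(normal, view_dir, light_dir,
                      distortion=0.4, power=4.0, scale=1.0, thickness=0.5):
    """Cheap back-lit transmission term.

    normal, view_dir, light_dir: unit vectors from the shaded point.
    thickness: 0 for thin regions (ear tips), 1 for thick regions (ear base),
    standing in for the hand-tuned falloff volume described above.
    """
    # Bend the light vector towards the surface normal to fake in-scattering.
    bent = normalize(tuple(l + n * distortion for l, n in zip(light_dir, normal)))
    # How directly is the viewer looking at light exiting towards them?
    v_dot_b = saturate(sum(v * -b for v, b in zip(view_dir, bent)))
    return (v_dot_b ** power) * scale * (1.0 - thickness)

# A light almost directly behind a thin part of the ear scatters strongly towards the viewer.
print(fake_transmission((0, 0, 1), (0, 0, 1), (0, 0, -1), thickness=0.1))
```

The thickness input is doing the job of the falloff volume: thick regions near the skull suppress the transmission, thin ear tips let it through.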

Prototype Sub-surface Scattering Shader implemented in Spark AR
Human ears
Werewolf ears (exaggerated for dramatic effect)

This kind of forward scattering effect was complemented by a different kind of Sub-surface Scattering technique designed for the face material. It is actually a very simple technique that renders the lighting of the material to a Renderpass, blurs it by a scattering amount and then multiplies the result by the scattering colour before recombining with the original lit Renderpass using Max blend mode. It’s useful for suggesting subsurface scattering on regions of the face being struck by light at an angle so I applied it during the Lightning flashes seen in the effect to make the skin feel softer under the very strong lighting.
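In image-processing terms the whole trick boils down to something like this (a sketch using NumPy/SciPy in place of Renderpass patches; the scatter colour and radius are illustrative):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def screen_space_sss(lit_rgb, scatter_colour=(0.9, 0.35, 0.25), scatter_radius=6.0):
    """Blur the lit render, tint it with the scattering colour and recombine
    with a per-channel max, mimicking Spark AR's Max blend mode.

    lit_rgb: float array of shape (H, W, 3) holding the lit face render.
    """
    blurred = np.stack(
        [gaussian_filter(lit_rgb[..., c], sigma=scatter_radius) for c in range(3)],
        axis=-1,
    )
    scattered = blurred * np.asarray(scatter_colour)
    return np.maximum(lit_rgb, scattered)
```

The Max blend is what keeps the effect confined to glancing-light regions: wherever the original lighting is already brighter than the tinted blur, nothing changes.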

Fake Subsurface Scattering prototype under strong glancing angles. Note the halo glow around the mesh edges — an artefact of this technique

Anisotropic Hair Shader

Another shader material I was keen to integrate was Anisotropic Specular highlights for the fur cards. Standard specular reflections under the Phong shading model are omnidirectional: they extend equally in every direction over a mesh surface as dictated by the surface Normals. Hair, in common with the bases of saucepans, 70s HiFi separates and Call of Duty rifle barrels, exhibits what are known as Anisotropic reflection properties: micro-alignments in the surface material extend the reflections in one axis and self-shadow the reflection in the perpendicular axis. The result is reflections that appear to stretch out over the surface in the flow direction of the material. Due to the scales on the surface of hair strands, hair too exhibits this property, along with a small amount of subsurface scattering that lends it a directional sheen and rich, saturated long-tail highlights that move across the surface based on viewing angle.

I prototyped an Anisotropic shader in Spark AR based (I think, it was a while ago) on this paper. Results look great if you’re looking for lustrous shiny hair like you’ve just stepped out of a Salon. Unfortunately I thought a Werewolf would more likely look like it had just stepped out of a hedge, so I dialled back the specular shininess to quite subtle levels, to the point where the contribution of the effect was fairly minimal. Still, it lends the hair a more realistic quality that I like.
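Since I can’t point you at the exact paper, here’s a sketch of the Kajiya-Kay/Scheuermann strand-specular formulation that most hair-card shaders boil down to; treat the exponent and the attenuation as illustrative rather than the filter’s actual values:

```python
import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v)) or 1.0
    return tuple(c / length for c in v)

def smoothstep(edge0, edge1, x):
    t = max(0.0, min(1.0, (x - edge0) / (edge1 - edge0)))
    return t * t * (3.0 - 2.0 * t)

def strand_specular(tangent, view_dir, light_dir, exponent=80.0):
    """Anisotropic strand highlight driven by the hair tangent rather than the normal."""
    half = normalize(tuple(l + v for l, v in zip(light_dir, view_dir)))
    dot_th = sum(t * h for t, h in zip(tangent, half))
    sin_th = math.sqrt(max(0.0, 1.0 - dot_th * dot_th))
    dir_atten = smoothstep(-1.0, 0.0, dot_th)  # fade the highlight on the far side of the strand
    return dir_atten * sin_th ** exponent
```

The key difference from Phong is that the highlight is driven by the hair tangent rather than the surface normal, which is what produces the stretched, flowing reflections.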

The other feature I wanted for the hair was the ability to grow as the transformation progressed. I’m using a vertically-oriented greyscale hair strand texture for the strand information, so this was a simple case of animating the UV texture coordinates so that the texture slides into view over the duration of the transformation. By combining this with a Power patch applied to the texture greyscale value I could also make the hair appear to start as fine strands and become progressively thicker.
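The growth logic is simple enough to sketch in a few lines (the strand sampler and the exponent range are hypothetical stand-ins for the texture and Power patch in the graph):

```python
def hair_growth_sample(u, v, progress, strand_texture):
    """Sample a vertically-oriented greyscale strand texture so the fur slides
    into view and thickens as the transformation progresses.

    progress: 0 = human, 1 = fully grown fur.
    strand_texture(u, v) -> greyscale value in [0, 1] (hypothetical sampler).
    """
    # Offset the V coordinate so the strand texture scrolls into view over time.
    sampled = strand_texture(u, (v + (1.0 - progress)) % 1.0)
    # A high exponent early in the transition thins the strands; as progress
    # approaches 1 the exponent approaches 1 and the strands reach full width.
    exponent = 1.0 + 4.0 * (1.0 - progress)
    return sampled ** exponent
```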

Eyeballs

Far and away the most complex single shader material is the one for the eyeballs. There is undeniably way more detail included here than is warranted or even visible, but I was keen to try a variety of techniques to increase realism. The eyeballs’ UVs come from an orthographic front projection so that I could use a radial gradient as the basis for controlling the transition between the different layers that build up the eyeball shader. I began by modelling and creating textures and shaders for a photorealistic eyeball in Blender. I used Substance Designer to generate PBR textures for the Iris and combined them to achieve the look and capabilities I had in mind for the real-time equivalent.

Iris texture with RGBA channels configured for remapping to independent colour adjustment
Blender Render of eye capabilities target

I wanted to be able to control how bloodshot the eye was, the size of the iris and pupil independently, and the colour of the iris; to include the subtle refraction of the pupil and iris caused by the lens and the Fresnel fall-off of the specular reflections from the surrounding room; and to capture the scattering of light around the sclera/iris margin as well as the classic Bladerunner-esque reflection of light from the back of the sclera.

The graph, seen here, essentially just layers different colours and textures using Mix nodes in combination with Gradient Steps to build up the overall effect. By mapping the range adjustments of the gradient steps to the overall progress of the Werewolf transformation I could adjust the hue shift of the iris, the scale of the iris and pupil, the thickness of the capillaries in the eye and so on. I also applied some fake Index of Refraction bias to the combined iris and pupil texture by using the mesh normals to offset the texture sampling. Finally, I applied some fake lighting to the iris by inverting the mesh normals on the z axis so the shape appeared inverted for the purposes of the light calculation, effectively giving the iris a concave appearance even though I was using one unified convex mesh.
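Those two normal tricks look something like this in isolation (a minimal sketch; the offset strength is a made-up value):

```python
def fake_refraction_uv(u, v, normal, strength=0.03):
    """Offset the iris/pupil texture lookup by the normal's XY components to
    suggest refraction through the lens."""
    return u + normal[0] * strength, v + normal[1] * strength

def concave_iris_normal(normal):
    """Flip the Z component so the lighting calculation treats the convex iris
    mesh as if it were concave."""
    return (normal[0], normal[1], -normal[2])
```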

Base eyeball transformation sequence (without AO applied)

Although the animated sequence behaved broadly as I wanted it to, the eyes still didn’t integrate as well as I’d like. I was using a low-res render of my office to supply the surrounding reflections but it was equally bright for both eyes. In reality, the bridge of the nose and surrounding eyelids and brows would apply a fair amount of occlusion to these reflections and darken the sclera overall. I addressed this by sampling the colour value of the whites of the eyes to either side of the pupil location on the camera face texture in order to grab some real-world indication of the localised ambient occlusion. I then generated gradients that ran horizontally through each eye with these values brightened slightly and then multiplied over the sclera value.
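Conceptually, the occlusion for each eye is just a linear gradient between the two sampled brightness values (the brightening factor here is illustrative):

```python
def sclera_occlusion(inner_sample, outer_sample, u, brighten=1.15):
    """Horizontal occlusion tint across one eye, built from two camera-feed
    samples taken either side of the pupil.

    inner_sample, outer_sample: brightness of the eye whites at each side.
    u: horizontal position across the eyeball UVs, 0 (inner) to 1 (outer).
    Returns a factor to multiply over the sclera colour.
    """
    gradient = (1.0 - u) * inner_sample + u * outer_sample
    return min(1.0, gradient * brighten)
```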

Eyes in isolation with occlusion gradients applied
And implemented behind face mesh

The final addition was some vertical shadowing from the eyelids, generated from an ellipse SDF shape used as an inverted, soft mask whose vertical height was tied to the eye openness.
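A sketch of that mask, using an approximate ellipse distance rather than a true SDF (all constants are illustrative):

```python
import math

def eyelid_shadow(u, v, eye_openness, softness=0.15, shadow_strength=0.5):
    """Soft vertical shadow from the eyelids over the eyeball UV space.

    eye_openness: 0 = closed, 1 = fully open; drives the ellipse's vertical radius.
    Returns a factor to multiply over the eye colour.
    """
    rx, ry = 0.5, 0.5 * max(eye_openness, 1e-3)
    # Approximate signed distance to the ellipse boundary (negative inside).
    d = math.hypot((u - 0.5) / rx, (v - 0.5) / ry) - 1.0
    t = max(0.0, min(1.0, d / softness))
    return 1.0 - shadow_strength * t  # 1 inside the open region, darker towards the lids
```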

Final eye shader featuring both lateral and vertical AO techniques

As a final note, you won’t have noticed, but when you tap the screen to trigger the effect the environment map reflection actually changes based on the vertical position of the tap, creating a mix between day and night renders of my home office. I did this so I could make the effect look as realistic as possible by selecting the reflection to match the time of day…

It’s the details, right? (note Werewolfinator Spark AR Project running on main monitor)

Dynamic Environment Lighting

Ok, so if you’ve tried the effect I’d hope that the thing that struck you most was the AMAZING way it adapted to the lighting in your environment. In truth what probably struck you most was the TERRIBLE head integration at the rear. No matter, we will discuss the Dynamic Environment Lighting.

Now, spoiler alert, I’m not going to give you the exact steps to replicate it here. I’d like to keep some things a little secret, for a while at least, but I’ll give you enough to go ahead and work it out for yourselves. I think that’s the best way to learn. Fair warning though, there’s a lot of theory that comes with this. I’ve done my best to summarise it but I’d encourage you to follow up on some of the recommended reading if you’re serious about adding this level of effect to your filters. It’s a long but rewarding journey to go on.

Ok, so the basic challenge we face is this: How do we take the camera feed, and work out what light is illuminating that scene in order to apply the same illumination to a mesh model that we want to integrate into that scene?

To begin tackling this problem it’s worth thinking about how light reflects from a mesh. I’m going to skim through the theory here but there’s a great in-depth explanation from Substance here.

So, in a Physically-based model, the light we see reflected from an object can be separated into two components: the Diffuse illumination and the Specular reflections. If the lighting is static, and the object doesn’t move, then the Diffuse Illumination contribution is constant and, for any given point on that surface, simply reflects the sum total of light rays striking that point taken from a hemisphere of influence aligned to the surface Normal. The Specular Reflections however are view-dependent. As you, or the camera, move with respect to the object, the reflections appear to shift location over its surface.

There are a number of ways of calculating these two components but in real-time Physically-based lighting models it is generally achieved by sampling from an Environment Map that captures the surrounding lighting information. You’ll be familiar with these Environment maps if you’ve ever added them into a Material in Spark AR. They are 2:1 rectangular images that look like fish-eye lens captures of the environment. Spark AR Studio only ever shows you the raw form of these images in the Studio UI but in fact, behind the scenes, it separates them out into two discrete images. One is the crisp clean image you’re familiar with that is used for the specular reflections. In order to simulate the effect of rough materials, Spark AR Studio actually generates multiple smaller variants of these that are progressively blurred and then packed together into a MipMap Cascade that stores the pre-calculated result needed to display progressively rougher settings for your material. It also creates a Diffuse version of the image that looks like a very blurry version of the Specular environment map. This blur should technically integrate light from the entire hemisphere of influence around each direction, but in practice a limited number of random rays within that region are sampled to approximate the result.

Specular Radiance MipMap Cascade (left) and Diffuse Irradiance Map (right) generated from a Studio lighting environment
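If you want a feel for what the Diffuse Irradiance Map stores, here’s a minimal Monte Carlo sketch of the cosine-weighted hemisphere integral it bakes out per direction (the environment lookup is a made-up stand-in):

```python
import math
import random

def sample_environment(direction):
    """Stand-in for an equirectangular environment lookup (hypothetical):
    brighter towards 'up' to mimic overhead light."""
    up = max(0.0, direction[1])
    return (0.3 + 0.7 * up,) * 3

def diffuse_irradiance(normal, samples=512):
    """Estimate the diffuse lighting for a surface facing 'normal' by averaging
    cosine-weighted radiance over the hemisphere (the 1/pi Lambert factor is
    folded into the normalisation)."""
    total = [0.0, 0.0, 0.0]
    for _ in range(samples):
        # Uniform random direction, flipped into the hemisphere around the normal.
        d = [random.gauss(0.0, 1.0) for _ in range(3)]
        length = math.sqrt(sum(c * c for c in d)) or 1.0
        d = [c / length for c in d]
        cos_theta = sum(n * c for n, c in zip(normal, d))
        if cos_theta < 0.0:
            d = [-c for c in d]
            cos_theta = -cos_theta
        radiance = sample_environment(d)
        for i in range(3):
            total[i] += radiance[i] * cos_theta
    return [2.0 * t / samples for t in total]

print(diffuse_irradiance((0.0, 1.0, 0.0)))  # an upward-facing surface
```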

The final thing to understand is that for Dielectric materials — that is, non-metallics like plastic, concrete, wood and (with some caveats) Werewolf skin — the way that these two lighting terms combine depends on something called Fresnel reflectance. This measure indicates what proportion of light rays are reflected off a surface as Specular reflections and what proportion are refracted inwards to be absorbed at certain wavelengths and reflected outwards as perceived Diffuse colour. The full calculation is expensive, so in real-time use it is replaced by Schlick’s approximation of Fresnel Reflectance, which uses a power term with an exponent of 5. To see what this looks like on a Facemesh, here’s an image that indicates the proportion of the mesh that will display as Diffuse illuminated material (in black) vs Specular light reflected at grazing angles (white).

Fresnel Reflectance Mix (Diffuse: Black, Specular: White)

As you can see, the proportion of Specular light is very small and, critically, it is strongest on surfaces that are reflecting light at glancing angles from directly behind the mesh.
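Schlick’s approximation is compact enough to show directly; this sketch assumes a typical dielectric F0 of around 0.04, which is a common default rather than a value taken from the project:

```python
def schlick_fresnel(cos_theta, f0=0.04):
    """Schlick's approximation of Fresnel reflectance.

    cos_theta: dot product of the surface normal and the view direction.
    f0: reflectance at normal incidence (~0.04 for typical dielectrics).
    Returns the proportion of light reflected as specular; the rest is
    refracted and ends up in the diffuse term.
    """
    return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5

print(schlick_fresnel(1.0))   # head-on: ~0.04, almost entirely diffuse
print(schlick_fresnel(0.05))  # grazing angle: ~0.78, strongly specular
```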

Ok, so we now know we need to generate two types of environment map: one for Specular and one for Diffuse and that we’ll need to generate MipMaps of the former if we want to support different Roughness settings for our Material. But where do we get these from?

Let’s start with the Specular map. As we’ve seen from our black and white Fresnel Mix image above, we actually need very little Specular data and most of it needs to come from behind the mesh. This is actually ideal for our purposes: our camera image will generally display a reasonable portion of the scene behind a user in typical selfie-usage scenarios. We can just grab the background and use it to populate our base Specular Environment Map. But how much of it should stretch to fill the 2:1 aspect ratio rectangle? Well, if we think about it, the width of the screen we’re looking at reflects a viewing angle, and that viewing angle must be some proportion of 360 degrees. If we can work out what proportion it is then we can work out how much of the 2:1 space our background slice should take up. We can do this by taking the Focal Distance and half the screen width and doing some simple trigonometry: we know from school that the tangent of an angle in a right-angled triangle is equal to the opposite side divided by the adjacent side. We can therefore divide half the screen width by the focal distance and take the ArcTangent of the result to find this angle. Double it and we have our viewing angle, which we can then use to determine the proportion of the Environment Map we need to apply the background to. Rinse and repeat for the vertical viewing angle and we have a Specular environment map to use for our reflective illumination.
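In code the whole calculation is a couple of arctangents (the screen size and focal distance below are hypothetical numbers in consistent units, not values from the project):

```python
import math

def background_span(screen_width, screen_height, focal_distance):
    """Fraction of a 2:1 equirectangular map that the camera feed should cover,
    horizontally and vertically, given the camera's focal distance."""
    h_angle = 2.0 * math.degrees(math.atan((screen_width * 0.5) / focal_distance))
    v_angle = 2.0 * math.degrees(math.atan((screen_height * 0.5) / focal_distance))
    # The map's horizontal axis spans 360 degrees, the vertical axis 180 degrees.
    return h_angle / 360.0, v_angle / 180.0

print(background_span(screen_width=1080, screen_height=1920, focal_distance=1500))
```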

Now, there’s an obvious question here: what do we fill the rest of the environment map with? We have some choices here but for Dielectric materials with a Fresnel Power of 5 it doesn’t really matter: we’re just not going to see the contribution. However, for completeness: we can clamp the image contribution so it just stretches edge pixels to fill the gap. This is perhaps the safest choice, as we’re filling the environment map with representative lighting while making minimal assumptions about the rest of the scene. Alternatively we could tile the background. This works where the surrounding environment is broadly uniform, like an open field, but doesn’t work so well for indoor environments, which can have brightly painted feature walls and the like that create different colour casts. If we were feeling really fancy we could use Renderpasses to progressively build up the detail of the Environment Map as the camera rotates, adding patches as we go, but this is unnecessary for our particular use case as the live background will always supply the information we need.

Ok so Specular reflections done. What about Diffuse illumination? Referring back to the black and white Fresnel Mix above we can see that there’s a lot of Diffuse information that we need to capture on the face and, worse, it’s illuminated by the light coming from behind the camera, light we know nothing about as we can’t see it. Well, not so fast. We can see it, it’s just being reflected off a surface. Which surface? The face! The face contains all the information we need to reconstruct this diffuse lighting; it’s just mixed in with the person’s features. All we need to do is find a way to work out the colour of the light on the person’s face and we will have the information we need.

Ok so how can we extract this light? Well, it’s worth thinking about the calculation that is performed when we apply lighting to a material. It’s really as simple as this:

(Light 1 + Light 2 + Light 3) * Material Basecolor = Lit Material

We add up all the lights in the scene and multiply them by the base colour to get the lit material. Hopefully you can see the solution here already: we just need to reverse the operation, dividing the Lit Material by its Basecolor to get the sum of the lighting contribution.

Ah, but how do we know what the object’s Basecolor is? It’s coloured by different lights! Well, when it comes to human faces, we can have a pretty good guess. Now clearly we are entering sensitive territory here. It is our responsibility as Creators to consider the very wide variety of skin tone, complexion, age, impairment and disfigurement, make-up and adornment that may make up our users’ faces. However we can use technology to assist us in making an informed guess. We can sample from regions of the face that are least likely to be obscured by things like fringes and glasses, least likely to be covered by vivid make-up and least likely to be reflecting environmental lights strongly. There is no perfect solution here but by taking multiple sample points and averaging the results you can arrive at a reasonable approximation. I was fortunate to be using this lighting technique to illuminate a fictional creature, which could accommodate slight variances in hue without causing offence as there was no precedent for what colour a werewolf is supposed to be. There are slightly more pink and slightly more orange versions that I have seen, depending on the skin tone of the user, but the differences are minimal thanks to the above technique.

So, equipped with this average skin tone measure, we can now divide the extracted face texture from the camera feed by this skin colour and, ta-da, we get the illumination that is being applied to the face by the real world. Now, it’s noisy and, due to Spark AR’s lack of proper high-dynamic-range handling, the colours may well appear oversaturated and max out towards RGB/CMY values, but you’ve instantly got a very useful and powerful piece of information.
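Stripped of the patch plumbing, the extraction really is just a per-pixel divide (a NumPy sketch; the clamp stands in for the missing HDR handling mentioned above):

```python
import numpy as np

def extract_face_lighting(face_texture, base_skin_tone, eps=1e-3):
    """Recover the illumination falling on the face by reversing
    lit = lighting * basecolor.

    face_texture: float array (H, W, 3) of the camera-feed face texture in [0, 1].
    base_skin_tone: length-3 skin colour estimate averaged from several samples.
    """
    lighting = face_texture / np.maximum(np.asarray(base_skin_tone), eps)
    return np.clip(lighting, 0.0, 1.0)  # no real HDR available, so clamp
```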

Ok, so we have the illumination of the face; how do we map it to an Environment Map? We can work out how the surfaces of the face would project a UV Map onto a surrounding sphere, unwrap that sphere as an equirectangular map, and we’ve got a look-up texture that determines how to project the face onto an Environment Map. Sounds like a lot of work?

It is.

I did it and, here, let me save you the bother…

Rarely in the service of 3D rendering has so much effort been put into something so useless
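For the curious, the mapping baked into that look-up texture is essentially the standard direction-to-equirectangular conversion. Here’s a minimal sketch (the axis conventions are an assumption and may not match the actual asset):

```python
import math

def direction_to_equirect_uv(direction):
    """Map a unit direction (e.g. a face-mesh normal treated as a ray towards
    the surrounding sphere) to UV coordinates on a 2:1 equirectangular map."""
    x, y, z = direction
    u = 0.5 + math.atan2(x, -z) / (2.0 * math.pi)          # longitude
    v = 0.5 - math.asin(max(-1.0, min(1.0, y))) / math.pi  # latitude
    return u, v

print(direction_to_equirect_uv((0.0, 0.0, -1.0)))  # straight ahead maps to the centre
```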

Don’t thank me just yet. If you try projecting your extracted face illumination using this to an Environment Map you might be struck by the following thought…

‘What’s that beard doing there!?’

Or at least you would if you were me… or someone with a beard.

Because we’re using the entirety of the face for our diffuse map we pick up all its features as part of the calculated illumination: beard, eyebrows, glasses and all. I tried various masking and blurring techniques to obfuscate this but none of them gave great results. I eventually arrived at a good answer (and this is the bit I’m keeping secret) but you’ve got information enough between here and my Werewolf filter to think about what the correct kind of solution might be ;-)

Phew ok, that was a lot of theory and a lot of words but here’s the result shown in an early prototype of the technique.

Early test of Dynamic Environment Lighting capture (minus Specular component)

I was able to refine this further over the course of the project and ultimately ended up with this group of patch graphs that deliver the necessary components of the Dynamic Lighting solution outlined above.

Neat, clean… mysterious

This dynamic lighting contribution plugged neatly into the pre-existing PBRinator Patch I’ve previously released, shown here in its more grown up, less buggy, streamlined 2020.3 incarnation. By keeping the lighting calculation separate I could plug it into multiple instances of the PBRinator to apply the dynamic lighting to different materials without the overhead of recalculating the lighting each time.

Base Lighting Context
Raw Extracted Diffuse Lighting
Diffuse Lighting applied to a 50% Grey Material
Diffuse + Specular Mix applied to 50% Grey Material

Lessons Learned

AKA Stuff I screwed up on.

Ok, so it wasn’t all unbridled success with world’s-first lighting techniques. As I approached the final development stages I threw the filter open to testing by some celebrity Spark AR friends who I knew would be honest with any feedback. They were.

As hinted at previously, while I know my way around Spark AR and shader creation, my modelling and rigging skills are intermediate and noobie level respectively. One thing I hadn’t accounted for was how to make the gum/tongue/teeth meshes adapt to the different mouth sizes and shapes of users. I was morphing from the canonical Spark AR mesh position to my Werewolf mouth shape but had no way to account for the small but significant changes in shape that the live deformations applied to the face mesh would transfer over to my Werewolf face. No two werewolves would be the same, but their mouth blend shapes would be identical. This resulted in some obvious and ugly clipping, as one Piotar Boa was keen to point out to me during testing:

Trust me, this is Piotar Boa

I have some ideas now on how I’d go about this a second time, but the fact that the gums can deform softly while the teeth need to remain rigid does make it challenging to adapt the mouth interior to the user’s mouth shape. Still, room for improvement here.

An even worse issue, one that I was almost entirely oblivious to until very late on, was my solution for providing a back to the werewolf’s head. As mentioned at the outset, I had led myself down the path of anatomical appropriateness with a skull that was smaller than the equivalent human one. I thus had a small skull mesh behind the face mesh that I had to find a colour for before it was obscured by the fur growth. I chose to blend between colour samples from the left and right cheek regions to create a gradient that would look like a bald-head skin tone. By making the user’s hair recede and shrink backwards, the idea was that it would look like a plausible step in the transformation process. Because I spent 97% of the development time looking directly forward into a webcam mounted above my monitor, I’d neglected to think how this technique would play out on a (very) mobile phone. Tomas Pietravallo was kind enough to point out my short-sightedness.

Yeah, not my finest hour

Another mistake, one I walked into quite consciously, was just making this whole thing way too complicated (you might appreciate that by now). The shader processing overhead was bringing Spark AR Studio to its knees. My camera feed in the simulator was periodically flashing green, and the UI would abruptly shrink to 50% width with no warning before reverting to the correct aspect. Placing a new patch took ~5 seconds every time. The clues were there. Not surprisingly, this resulted in some bugs on different devices, as Josh Beckwith and Luke Hurd were able to demonstrate…

Josh with less of a background than I’d intended on a Pixel 3 and Luke’s eyes missing AO, Reflections, and well, most of the things eyes are supposed to have

For all that I’d always intended this as a technical exercise first and successful filter second, I didn’t like the idea that people were having a partial experience because I’d failed to consider, in good time, the kind of impact my technical choices would have on performance.

The most egregious mistake, and one that required me to re-publish my effect after it was live, was to overlook the health impact of my use of flashing lights in the scene. I’d wanted to feature Lightning from the outset for drama, and not least to contribute to the SSS techniques outlined above. I dropped in a placeholder implementation early on that was close to what I wanted to achieve. I was mindful that photosensitivity would be a consideration and I’d intended to come back, polish it further and check my strobe frequency was within an acceptable range, but as I got used to looking at the effect it fell off my conscious list of things to address before publishing. I’ve also now realised that there’s a (really obvious) option to add an instruction warning users of flashing lights in effects. I’m old and experienced enough to know better, so this was a humbling reminder to put users’ health and wellbeing above any other consideration whenever you’re creating content for use by others.

Lockdown has made it difficult to test our projects, with limited access to test devices, but I’ve learned some lessons which I can summarise here:

  1. Know your weaknesses as well as your strengths and take time to learn skills you lack experience in before jumping in
  2. Test on a phone, often. Walk around, turn it upside down, give it to other people and see how they use it. Don’t rely on the simulator.
  3. If Spark AR Studio is complaining about the complexity of your project it’s a good sign that you’ve strayed out of the bounds of its normal usage and it’s time to check that things are working as you’d intended on different devices before too much technical debt builds up.
  4. Most of all, always consider the health impacts of your filter on your users. It’s going to be both the first and last item on my To Do list in future.

Conclusion

I began this project in mid-August and spent the majority of weekends and many evenings working on it in the run-up to publishing in the week before Halloween. If I had to value it at my commercial rate, it would be well beyond the budget of most commercial filters I work on. As such, I can’t see the variety of techniques used here translating into a commercial filter soon.

Nonetheless I feel it is by demonstrating the potential of the Spark AR Platform through experiments like this, that push the boundaries of what is possible, that we open both users and clients’ eyes to the potential of AR and ensure that, as a medium, it doesn’t get mired in quick turn-around, low ambition, low value experiences. I’ve also had great fun working on it, as I hope you now have, learning about the process.

Lasse Mejlvang Tvedt demonstrating the dynamic lighting response

My considerable thanks to all the people who helped test it and allowed me to share their images above, and most of all to Davide La Sala for giving his valuable time to help debug my blend shapes. If I’ve missed out an explanation of anything you were curious about, feel free to let me know in the comments below and I’ll consider adding an update with any missing bits.

Free swag

Ok, I promised some freebie patches for people who’ve taken the time to read all this. I hope this post has proved interesting and useful but if you just scrolled to the bottom for the free stuff, that’s ok, I do too.

Free Spark AR Patches Repo

So, I thought about small portions of this overall effect that could be separated out and still prove useful on their own without requiring PBRinator or the whole Dynamic Lighting stack. Here is one: a modified version of the Anisotropic Hair Shader. I’ve wrapped it in a project so you can see its basic usage. I’ve included some basic textures to get you started, although you can create more sophisticated results with proper hair textures. Finally, take a look inside the group if you want to change the basic Melanin approach to colouring hair.

Thanks for taking the time to read this. I look forward to seeing any uses of the hair shader in your filters, and good luck with your creations.

Matt

Dedicated to Winston, our own special little wolf, who passed away during the production of this filter.
