Hello everyone! I've recently been spearheading the rendering of all of the new Megas from Legends Z-A while I have a bit of free time and motivation from the new game coming out. Given that the list is small, I figured I'd work through the new Mega list on my own while the rest of the team keeps going with their ScVi Pokemon tasks.
I'll open up this post by dropping all of the renders I have done, and then go into detail about new rendering techniques that should benefit the entire team. The rendering process is now orders of magnitude faster to go through, and it uses a method that is even more accurate to how the actual 3DS games drew their outlines.
Mega-Clefable:
Mega-Victreebel:
Mega-Starmie:
Mega-Dragonite:
Mega-Meganium:
Mega-Feraligatr:
Mega-Skarmory:
Mega-Froslass:
Mega-Emboar:
Okay! So, rendering technique. Before, we rendered these at an 8:1 base resolution ratio, or in other words, 8 times the target resolution on each axis, which works out to 64 times the total number of pixels. I had set it up this way to improve the accuracy of Blender's built-in edge detection filters I was using at the time. The lower the base resolution, the less accurately the outlines would be drawn, and more unintended areas would receive outlines with no way to filter them out. Blender's built-in edge detection (the variant I used is referred to as "Laplace") draws outlines around areas of high contrast in whatever we feed into the filter. The most important pass to put through the filter is what is called the "Normals" pass. The high-contrast areas often represent parts of the model that have creases, intersections, or are otherwise not a continuous flow of the object's mesh, and would more than likely be an intuitive place to draw an outline.
The result of this Laplace edge detection filter would then be put through more filtering nodes I set up to remove color, and to filter out any faint detections that would otherwise not make sense (for example, an edge between faces on a continuous mesh would still get highlighted, just very faintly).
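For anyone curious, here's roughly what that whole filter chain boils down to, written as a quick NumPy sketch (the kernel is the textbook Laplace one, but the threshold and the way I collapse the channels here are just placeholder guesses, not the actual node settings):

```python
import numpy as np
from scipy.ndimage import convolve

# Standard 3x3 Laplace kernel, similar in spirit to Blender's "Laplace" filter.
LAPLACE = np.array([[0,  1, 0],
                    [1, -4, 1],
                    [0,  1, 0]], dtype=np.float32)

def laplace_outline(normals, threshold=0.15):
    """normals: H x W x 3 float array holding the Normals pass in 0..1.
    Returns a single-channel edge mask with faint detections removed.
    The 0.15 threshold is a placeholder, not the actual node value."""
    # Run the kernel per channel and keep the strongest response, which
    # collapses the colored result down to a grayscale edge strength.
    response = np.stack(
        [np.abs(convolve(normals[..., c], LAPLACE)) for c in range(3)],
        axis=-1,
    ).max(axis=-1)
    # Drop faint detections (e.g. edges between faces of a continuous mesh)
    # so only creases and intersections keep their outline.
    return np.where(response > threshold, 1.0, 0.0)
```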
As I mentioned before, though, this method is not very easy to tune to work at small resolutions. We had to use quite a large base resolution to get a clean output from Laplace. With that large resolution, though, the result is fairly clean, as you can see in the images in the spoiler window above; it outputs what is on average a 2px-thick outline in all the important areas. However, if we kept this as-is when shrinking the render back down to its intended resolution, that's an 8x shrink, and the outlines would barely be visible because they wouldn't be able to maintain the 1px thickness we need.
The solution I came up with for this issue was to first apply a Blur effect to the entire filtered outline, and then use Color Ramp nodes in Blender to tune at what point within the blurred gradient the outline should cut off, which effectively controls its thickness.
Then this could be shrunk down to a 1/8th scale image, and the outline thickness could be tuned precisely with that color ramp to maintain the intended thickness, so that it's not too thick, or too thin to the point of leaving gaps in the drawn outlines.
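If you want a mental model of what the Blur + Color Ramp combo is doing, here's a small sketch of the equivalent math (the blur radius, ramp stops, and scale factor are all made-up illustrative numbers, not the values from my actual node tree):

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def ramp(values, low, high):
    """Rough stand-in for a two-stop Color Ramp node: below `low` maps to 0,
    above `high` maps to 1, with a linear gradient in between."""
    return np.clip((values - low) / max(high - low, 1e-6), 0.0, 1.0)

def downscaled_outline(outline_mask, blur_sigma=6.0, low=0.05, high=0.25, scale=1/8):
    """Blur the hard outline mask, pick the thickness with the ramp, then
    shrink to the target resolution. Every number here is illustrative."""
    blurred = gaussian_filter(outline_mask, sigma=blur_sigma)
    thickened = ramp(blurred, low, high)
    return zoom(thickened, scale, order=1)  # bilinear downscale to 1/8th size
```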
Now that I have gone over this, let me go over why this method ended up being a huge bottleneck, especially for a lot of people we've brought onto the team to help render out ScVi Pokemon, as a lot of them don't have super high end PC hardware like I do.
First issue is obvious: the base render resolution is very, very high compared to the target resolution. Rendering at a higher resolution will pretty clearly force renders to take longer and be more resource intensive.
Secondly, in the compositor, the main bottleneck is the blur functionality. Blurring images is very slow, especially when the blur has a fairly large radius and is working on a fairly large base image on top of that. This, in turn, also resulted in long render times.
And thirdly, something I didn't touch on yet: how lighting was handled. This is a bit too complicated for me to want to dive too deeply into in this thread, but in a nutshell, lighting was originally handled with a sun light that had a very large radius, in order to give the base shading render pass a very, very, VERY soft shadow. Given how large the radius of the light was, Blender EEVEE had to be set to a very high sample count. To put it simply, essentially every frame (a frame that is already 8x the intended scale) needed to be rendered a couple hundred times with different lighting samples to produce the soft shadow. This was the biggest bottleneck of them all, but it was hard to get around at the time, so it stuck.
The reason we needed soft shadows was to allow a fine amount of control and tuning over how shadows were drawn from parts of the model that are expected to cast them. Similar to how we filtered the blurred outline, we just used color ramps to tune where shadows would start and end, and this tuning needed base shading with nice soft gradients, hence the large light size.
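In math terms, that color ramp step is basically just a remap of the soft gradient, something like this little sketch (stop positions made up for illustration):

```python
import numpy as np

def shade_ramp(soft_shadow, shadow_stop=0.35, lit_stop=0.55):
    """Stand-in for the Color Ramp tuning on the soft shading pass: below
    shadow_stop is fully shadowed, above lit_stop is fully lit, with a short
    gradient in between. The stop positions here are made up."""
    return np.clip((soft_shadow - shadow_stop) / (lit_stop - shadow_stop), 0.0, 1.0)
```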
Anyway, that method is irrelevant now. There are still upcoming renders for ScVi that may use the old method, but I'm fairly certain everyone on the team will want to use this new method from here on out. So, let's go into why the new way is so much better.
Outlines. Outlines, outlines, outlines. The outlines now use the same exact method the 3DS uses. Blender's Laplace is not an accurate representation of how the 3DS handled outlines and edge detection in general; if it was, we wouldn't have needed such a MASSIVE base render to get it to behave the way we'd like. The method the 3DS uses is a lot simpler, and much more appropriate for small resolution renders. It's essentially a two-sample difference pass: one pass takes two copies of an input image, shifts one copy 1px to the left, and runs the two through a difference mix filter; the second pass does the same thing, but with the copy shifted 1px up instead of left. These two passes then get combined, and out comes a very clean 1px outline.
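To make that concrete, here's a minimal NumPy sketch of the technique as I understand it; keep in mind the exact way the two passes are combined and thresholded is my own reconstruction, not something confirmed from the hardware:

```python
import numpy as np

def shifted_difference_outline(img, shift_px=1, threshold=0.2):
    """3DS-style outline pass as I've reconstructed it: difference the image
    against a copy shifted left, difference it against a copy shifted up,
    then combine. img is an H x W x C float array (e.g. the Normals pass in
    0..1). The threshold and the combine step are my assumptions."""
    # Copies shifted shift_px to the left and up. np.roll wraps at the
    # border, which a real setup would handle with padding instead.
    left = np.roll(img, -shift_px, axis=1)
    up = np.roll(img, -shift_px, axis=0)
    # The two difference passes, each collapsed to a single channel.
    diff_x = np.abs(img - left).max(axis=-1)
    diff_y = np.abs(img - up).max(axis=-1)
    # Combine the passes and keep anything strong enough to count as an edge.
    combined = np.maximum(diff_x, diff_y)
    return np.where(combined > threshold, 1.0, 0.0)
```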
This image here was intentionally rendered at a high base resolution with high outline thickness to show more clearly how this works:
This method is really simple and easy, honestly, and I'm surprised I didn't look into it sooner. The idea came mainly from observing how the 3DS games rendered these outlines while playing them by... using a device that allowed rendering the games at a higher internal resolution than the original 3DS could display. Seeing the enlarged image made what they were doing a lot more obvious to me. That, combined with the game development experience I've accumulated over the past few years, made it quite easy to essentially, indirectly, reverse engineer how the outlines are drawn, and now I have a method that produces nearly the same kind of outlines you'd see on real hardware, even down to the same visual artifacts (look at the tops of Leafeon's ears in the above image, for example, and how they have a gap at the outline tip; this is how it looks on real 3DS hardware too, which you can observe yourself if you look closely).
Now, I was hoping to develop this method without needing to increase the base resolution at all, and I was CLOSE to managing it, but it was hard to filter out edge detection artifacts sufficiently, so I opted to just roll with a 2x base resolution (4x the total pixels). That is still a significant improvement over 8x scaling, and in the end this decision didn't come with significant render time penalties. And since this method is custom and built with Blender's compositing nodes, I can very easily increase the thickness of these outlines by 2x as well to keep the outline thickness consistent, as shown below.
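With the sketch from earlier, keeping that consistency just means scaling the pixel shift with the base resolution, e.g.:

```python
# At a 2x base resolution, shift by 2px instead of 1px so the outline still
# reads as roughly 1px after the final downscale (reusing the hypothetical
# shifted_difference_outline() from the earlier sketch).
outline = shifted_difference_outline(normals_2x, shift_px=2)
```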
The next improvement is something that kind of just comes along with the previous one: no more need for very large blur effects. There is still a blur node in the compositing tree for very specific situations, but it's a very small blur on a very small image and doesn't bottleneck renders when it's used, so that's a very, very nice victory there as well.
And lastly, the method of lighting has changed. Lighting for most elements of the render no longer uses Blender EEVEE's real-time lighting. Instead, I'm calculating lighting with math in the compositor, using dot products between the model's normals and the direction of an emulated light source. This is very basic lighting, but it works for our needs.
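Here's roughly what that compositor math works out to, as a NumPy sketch (the light direction is an example value, not the one I actually use):

```python
import numpy as np

def lambert_from_normals(normals_pass, light_dir=(0.4, 0.6, 0.7)):
    """Basic N.L lighting computed from the Normals pass, roughly what the
    compositor math is doing. The light direction is just an example value,
    and I'm assuming the pass encodes normals in the usual 0..1 range."""
    # Decode the normals pass from 0..1 back to -1..1 vectors.
    n = normals_pass * 2.0 - 1.0
    # Normalise the emulated light direction.
    l = np.asarray(light_dir, dtype=np.float32)
    l = l / np.linalg.norm(l)
    # Per-pixel dot product, clamped so back-facing areas go fully dark.
    return np.clip((n * l).sum(axis=-1), 0.0, 1.0)
```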
However, there are still situations where we absolutely NEED proper shadows, so here comes the last main change to this process: baked shadowmaps. The shadow texture here on Leafeon is the one it officially shipped with on the 3DS, and it was used to let the lighting engine render shadowed areas of the model that the basic dot product lighting method would not cover. I figure these official shadowmap textures are probably not baked, but rather hand-drawn to some extent at the very least. However, I didn't like the idea of requiring anyone on the team to hand-draw a shadowmap texture for a Pokemon, so I resorted to baked lighting, which is put into a texture and then processed to appear similar to how the 3DS shadow maps look. Take Mega-Meganium for example:
Mega-Meganium has very large flowers around its neck, which are a very, very obvious source of cast shadow that wouldn't render at all without some way to bake that cast shadow detail into a texture for the Pokemon to carry into the lighting pass. Not every Pokemon will need something like this, but if a Pokemon has a lot of large parts hovering over other parts of its body, there's a good chance it'll need it to not feel super flat. That's why I chose to give Mega-Meganium this custom shadow treatment; a rough sketch of how the baked mask folds into the lighting is below.
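And here's a tiny sketch of how a baked shadow mask like Mega-Meganium's can fold into the basic dot-product lighting; whether the real node tree multiplies or takes a minimum is my assumption:

```python
import numpy as np

def apply_baked_shadows(nl_lighting, baked_shadow_mask):
    """nl_lighting: H x W dot-product lighting from the previous sketch.
    baked_shadow_mask: H x W baked texture sampled into screen space,
    where 0 = occluded and 1 = open. Capping the lighting with the mask
    lets big overhanging parts (like the neck flowers) cast onto the body."""
    return np.minimum(nl_lighting, baked_shadow_mask)
```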
Alright! That's it for now! Expect more Legends Z-A Mega renders in the near future! I'll continue to work through the list in my free time until it's finished.