Results from an experiment/demo livestream testing a workflow built more for "sketch to render," using Vizcom AI's diffusion-assisted design tool. These 3D toon renders worked surprisingly well as inputs, considering the density variation. On closer inspection there is some minor foreground/background confusion, but not bad for mood renders! I also painted directly within Vizcom for some "inpainting"-style generation, and later used Photoshop to pull out some colors and fix a few artifacts; overall the post-processing was fairly minimal.
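
Vizcom's pipeline isn't public, but for anyone curious about the underlying technique, here's a minimal open-source analogue of the two generation steps (an image-to-image "sketch to render" pass, then masked inpainting) sketched with Hugging Face's diffusers library. All model IDs, prompts, file names, and parameter values here are illustrative assumptions, not Vizcom's actual settings.

```python
# Hypothetical analogue of the workflow above, using the open-source
# `diffusers` library. Vizcom's internals are proprietary, so every
# model ID, prompt, and parameter below is an assumption.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline, StableDiffusionInpaintPipeline

device = "cuda" if torch.cuda.is_available() else "cpu"
dtype = torch.float16 if device == "cuda" else torch.float32

# Step 1: "sketch to render" -- an img2img pass over the 3D toon render.
# `strength` controls how far diffusion drifts from the input image;
# lower values preserve more of the original composition.
img2img = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # assumed base model
    torch_dtype=dtype,
).to(device)

toon_render = Image.open("toon_render.png").convert("RGB").resize((768, 512))
rendered = img2img(
    prompt="moody cinematic concept render, soft volumetric light",
    image=toon_render,
    strength=0.55,        # keep the toon render's structure mostly intact
    guidance_scale=7.5,
).images[0]

# Step 2: "inpainting" -- regenerate only the masked regions, e.g. spots
# where the foreground/background got confused.
inpaint = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",  # assumed inpainting model
    torch_dtype=dtype,
).to(device)

mask = Image.open("fix_mask.png").convert("L")  # white = regenerate
fixed = inpaint(
    prompt="clean background, consistent depth",
    image=rendered,
    mask_image=mask,
).images[0]
fixed.save("mood_render_fixed.png")
```

The `strength` value is the main lever for the "works as input despite density variation" effect: high enough to restyle the toon shading, low enough that the composition survives.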