Probably the ideal workflow would use a reference image via something like IP-Adapter, since a simple colour palette wouldn't really give enough control (see https://x.com/P_Galbraith/status/1716405163420963196 for an example). Typically the character design is already done in a flat perspective, so it would be nice to have an IP-Adapter input alongside a detailed line drawing and a rough paintover.
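For illustration, here's a rough sketch of how that combined setup could be wired up with the diffusers library. The model IDs, file names, and strength values are assumptions chosen to show the idea, not a tested recipe:

```python
# Sketch: IP-Adapter reference + lineart ControlNet + rough paintover as the img2img init.
# Model IDs, file names and strengths below are illustrative assumptions.
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetImg2ImgPipeline
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11p_sd15_lineart", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

# IP-Adapter carries the character design (colours, costume) from the flat reference sheet.
pipe.load_ip_adapter("h94/IP-Adapter", subfolder="models", weight_name="ip-adapter_sd15.bin")
pipe.set_ip_adapter_scale(0.6)

lineart = load_image("character_lineart.png")    # detailed line drawing (this ControlNet typically expects white lines on black)
paintover = load_image("rough_paintover.png")    # rough colour/lighting blockout, same size as the lineart
reference = load_image("character_sheet.png")    # flat character design used as the IP-Adapter reference

image = pipe(
    prompt="character illustration, clean painted rendering",
    image=paintover,                   # img2img init from the paintover
    control_image=lineart,             # lineart ControlNet keeps the drawing intact
    ip_adapter_image=reference,        # design reference via IP-Adapter
    strength=0.6,                      # how far the result is allowed to move from the paintover
    controlnet_conditioning_scale=1.0,
    num_inference_steps=30,
).images[0]
image.save("out.png")
```

The strength and IP-Adapter scale would need tuning per piece: a lower strength keeps more of the paintover's lighting, while a higher IP-Adapter scale pulls the result closer to the design sheet.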
You would also need a way to control lighting (i.e. light sources and direction) and to handle multiple characters, etc., for it to be useful in complex compositions.
The workflow I've used in the past for this is based on the fantastic Stable Diffusion 1.5 ControlNet lineart model. See https://x.com/P_Galbraith/status/1716299002969469054 for an example.
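Roughly, that lineart-only workflow looks like this in diffusers (again just a sketch; the annotator step is optional if you already have clean lineart, and the model IDs are assumptions):

```python
# Sketch of the plain SD 1.5 lineart ControlNet workflow (no IP-Adapter, no paintover).
import torch
from controlnet_aux import LineartDetector
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

# Optional: extract lineart from an existing drawing instead of supplying it directly.
processor = LineartDetector.from_pretrained("lllyasviel/Annotators")
lineart = processor(load_image("sketch.png"))

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11p_sd15_lineart", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

image = pipe(
    prompt="character illustration, painted rendering",
    image=lineart,                     # the line drawing drives composition and edges
    controlnet_conditioning_scale=1.0,
    num_inference_steps=30,
).images[0]
image.save("lineart_out.png")
```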