Creative Workflows: The Human-AI Loop

Stop fighting the AI. Start collaborating. Learn the "Sandwich Workflow" to integrate Manual Sketching, AI Generation, and Photoshop Polish.

workflow_manager.py
# The Sandwich Workflow
def create_art(human_input):
    base = Human.sketch(human_input)           # 1. The Base: human concept
    generated = AI.render(base, "ControlNet")  # 2. The Filling: AI generation
    final = Human.polish(generated)            # 3. The Topping: human polish
    return final
The Sandwich Workflow

Guide: Welcome to the "Sandwich Workflow". In professional AI Art Direction, we don't just type a prompt and hope for the best. We create a collaborative loop between human intention and AI execution.


Hybrid Workflow Mastery

Master the Human-AI-Human loop, one phase at a time.

Phase 1: Human Concept

The process begins with YOU. Don't let the AI guess the composition. Sketch it out. Even a "napkin sketch" provides the skeletal structure the AI needs.

Workflow Check

Why do we start with a sketch instead of just text? Because a sketch locks in the composition up front: the AI fills in detail within your structure instead of hallucinating its own.


The Human-AI-Human Handoff: "The Sandwich"

Author

Pascual Vila

Full Stack Developer.

One of the biggest misconceptions about AI Art is that it's a "one-click" process. In a professional environment, relying solely on a text prompt is a recipe for mediocrity. The industry standard workflow is known as the Sandwich Workflow.

1. The Base (Human Input)

Before you even open Midjourney or Stable Diffusion, you should have a plan. A rough sketch, a composition layout in Photoshop, or a "Skeletal Map" using ControlNet ensures that the AI generates exactly the structure you need, rather than hallucinating a random composition.

2. The Filling (AI Generation)

This is where the magic happens. The AI takes your human constraints and fills in the texture, lighting, and rendering details. It does the "heavy lifting" of rendering millions of pixels, but it follows *your* direction.

3. The Topping (Human Polish)

AI is terrible at specifics: hands, text, logos, and specific brand colors. This is where you, the human, step back in. Using Photoshop, Generative Fill, and vector tools, you correct artifacts, composite layers, and finalize the piece.

Key Takeaway: The AI is not the artist; it is the brush. The Human provides the intent (Sketch) and the critique (Polish).
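The three phases above can be sketched as a single pipeline. A minimal toy sketch in plain Python; `human_sketch`, `ai_render`, and `human_polish` are stand-in functions illustrating the handoff, not a real image-processing API:

```python
# Toy sketch of the Sandwich Workflow: each stage is a stand-in
# function passing a dict down the pipeline.

def human_sketch(concept: str) -> dict:
    # 1. The Base: the human fixes the composition before any AI runs.
    return {"concept": concept, "composition": "locked"}

def ai_render(base: dict, control: str = "ControlNet") -> dict:
    # 2. The Filling: the AI fills in texture and lighting,
    #    constrained by the human-provided structure.
    return {**base, "rendered_by": "AI", "control": control}

def human_polish(generated: dict) -> dict:
    # 3. The Topping: the human fixes hands, text, logos, brand colors.
    return {**generated, "artifacts_fixed": True}

def create_art(concept: str) -> dict:
    return human_polish(ai_render(human_sketch(concept)))

art = create_art("product hero shot")
print(art["composition"], art["control"], art["artifacts_fixed"])
# → locked ControlNet True
```

The point of the structure: the AI stage never sees a free-form prompt alone; it always receives the human-locked base, and its output always passes through a human polish step before shipping.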

AI Workflow Glossary

Img2Img (Image-to-Image)
A process where the AI takes an input image (like a sketch) and a text prompt to generate a new image that follows the structure of the original input.
tool_settings.json
{ "input_image": "sketch.png", "denoising_strength": 0.75, "prompt": "oil painting" }
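The `denoising_strength` setting above controls how far the output may drift from the input sketch. A toy NumPy illustration of that intuition (real diffusion models denoise in latent space; this only shows the mixing idea):

```python
import numpy as np

# Toy illustration of img2img's denoising_strength: the input is mixed
# toward noise in proportion to the strength, so low strength keeps the
# sketch's structure and high strength lets the AI diverge from it.

rng = np.random.default_rng(0)
sketch = np.full((4, 4), 0.5)   # flat grey "input sketch"
noise = rng.random((4, 4))      # stand-in for fresh generation

def img2img_start(image, noise, denoising_strength):
    return (1 - denoising_strength) * image + denoising_strength * noise

faithful = img2img_start(sketch, noise, 0.2)  # stays close to the sketch
loose = img2img_start(sketch, noise, 0.9)     # mostly regenerated

# The low-strength version deviates less from the original sketch.
print(np.abs(faithful - sketch).mean() < np.abs(loose - sketch).mean())
# → True
```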
Inpainting
The process of masking a specific area of an image to regenerate only that part. Essential for fixing hands, eyes, or changing specific objects.
tool_settings.py
tool.select_area(mask)
tool.generate("fix fingers")
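The defining property of inpainting is that only the masked region is regenerated. A toy NumPy sketch of that contract, with a stand-in `generate` callable in place of a real model:

```python
import numpy as np

# Toy illustration of inpainting: pixels under the mask are regenerated;
# everything outside the mask is preserved exactly.

image = np.arange(16, dtype=float).reshape(4, 4)
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True  # the "broken hand" region to fix

def inpaint(image, mask, generate):
    result = image.copy()
    result[mask] = generate(mask.sum())  # regenerate masked pixels only
    return result

fixed = inpaint(image, mask, lambda n: np.full(n, -1.0))

print(np.array_equal(fixed[~mask], image[~mask]))  # → True: rest untouched
print(bool((fixed[mask] == -1.0).all()))           # → True: region replaced
```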
ControlNet
A neural network structure to control diffusion models by adding extra conditions (like edge detection, depth maps, or pose estimation).
tool_settings.py
ControlNetModel.load("canny_edge")
weight = 1.0
[Input] ---[ControlNet]---> [Output]
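The data flow above can be made concrete with a toy example. Here a crude gradient-based edge detector stands in for Canny, and a stub `generate` function stands in for the diffusion model; real ControlNet injects the condition into a diffusion U-Net:

```python
import numpy as np

# Toy illustration of a ControlNet-style condition: an edge map is
# extracted from the input and passed alongside the prompt, constraining
# the generator to the input's structure.

image = np.zeros((5, 5))
image[:, 2:] = 1.0  # input with one vertical edge

def canny_like(img):
    # Crude horizontal-gradient edge detector, a stand-in for Canny.
    return (np.abs(np.diff(img, axis=1)) > 0.5).astype(float)

def generate(prompt, condition, weight=1.0):
    # Stub generator: the weighted condition constrains where detail goes.
    return {"prompt": prompt, "edges_kept": float(condition.sum()) * weight}

edges = canny_like(image)
out = generate("oil painting", edges, weight=1.0)
print(out["edges_kept"])  # → 5.0 (one edge pixel per row)
```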
Generative Expand (Outpainting)
Extending the canvas beyond the original borders of the image using AI to imagine what lies outside the frame.
tool_settings.py
canvas.resize(scale=2.0)
fill_mode = "generative"
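As with inpainting, the key guarantee is what stays fixed: outpainting pads the canvas and fills only the new border, leaving the original image intact. A toy NumPy sketch of that behavior:

```python
import numpy as np

# Toy illustration of generative expand (outpainting): the canvas grows
# beyond the original borders, only the new area is filled, and the
# original image is preserved in the centre.

original = np.ones((2, 2))

def generative_expand(image, border, fill):
    h, w = image.shape
    canvas = np.full((h + 2 * border, w + 2 * border), fill)
    canvas[border:border + h, border:border + w] = image  # keep original
    return canvas

expanded = generative_expand(original, border=1, fill=0.5)
print(expanded.shape)                                # → (4, 4)
print(np.array_equal(expanded[1:3, 1:3], original))  # → True
```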