Dual Clip

#15
by Kaiber132 - opened

I can't make it work. I tried both gemma_3_12B_it_fp8_scaled.safetensors and gemma_3_12B_it_fpmixed.safetensors and still got "DualCLIPLoader Missing weight for layer gemma3_12b.transformer.model.layers.0.self_attn.q_proj"

Does yours look like this?

(screenshot)

I have the same problem.

I already updated to the very latest ComfyUI, and I also deliberately replaced the DualClipLoader node with a fresh one.
But I'm still getting this error:
Missing weight for layer gemma3_12b.transformer.model.layers.0.self_attn.q_proj

Strangely though, the video still generates successfully. Does anyone know why the error shows up but everything actually works anyway? Thanks!

Ohhhhhhhhhh you are talking about the annoying warning messages, not an error. Yeah, those are expected. Ignore them.
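If you want to double-check for yourself that the message is harmless, you can list which tensors the checkpoint actually contains and compare against the layer name in the warning. This is just a minimal sketch using the safetensors Python package; the file path is an assumption, so point it at wherever you saved the model:

```python
from safetensors import safe_open

# Path is an assumption; adjust it to the file you actually downloaded
path = "models/clip/gemma_3_12B_it_fp8_scaled.safetensors"

with safe_open(path, framework="pt", device="cpu") as f:
    keys = list(f.keys())

print(f"{len(keys)} tensors in file")

# Show everything stored for the first transformer block, so you can compare
# against the layer names mentioned in the warning
for k in sorted(keys):
    if "layers.0." in k:
        print(k)
```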

OK.
Thank you for the great work, Phr00t!

Mine actually fails, and yeah, it does look like that.

So I used the full version of gemma 12b it and just ran the workflow as intended, with the "A cat jumping over a dog." prompt and the same settings. I have 8 GB VRAM and 48 GB DDR5 RAM. After 4m 33s I got this error:

"Allocation on device 0 would exceed allowed memory. (out of memory)
Currently allocated : 1.16 GiB
Requested : 6.01 GiB
Device limit : 8.00 GiB
Free (according to CUDA) : 0 bytes
PyTorch limit (set by user-supplied memory fraction) : 17179869184.00 GiB
This error means you ran out of memory on your GPU."

As expected, I guess. I'll try --lowvram.
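For what it's worth, you can check before sampling whether that ~6 GiB allocation could ever fit next to what's already loaded. A rough sketch assuming PyTorch with CUDA is available; the 6.01 GiB figure is just copied from the error message above:

```python
import torch

# Query free/total VRAM on the current CUDA device
free_bytes, total_bytes = torch.cuda.mem_get_info()
free_gib = free_bytes / 1024**3
total_gib = total_bytes / 1024**3

requested_gib = 6.01  # taken from the "Requested" line of the error message

print(f"Free: {free_gib:.2f} GiB of {total_gib:.2f} GiB")
if free_gib < requested_gib:
    print("Not enough headroom for this allocation; try launching ComfyUI "
          "with --lowvram or lowering resolution / frame count.")
```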


I had the same problem.
But it works after disabling xFormers.
