alternate for Resolution Image Size Selector node
Hello,
I'm trying to use the workflow provided. Even though I have installed the Fearnworks Nodes from the ComfyUI Manager, the Resolution Image Size Selector node refuses to load, saying "When loading the graph, the following node types were not found".
So I removed it and placed an Empty Latent Image at 1024x1024. While running this workflow I get: RuntimeError: mat1 and mat2 shapes cannot be multiplied (784x1280 and 3840x1280)
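For context, that RuntimeError is a plain matrix-shape mismatch: a (784, 1280) activation is being multiplied by a (3840, 1280) weight, and the inner dimensions (1280 vs 3840) don't agree. In this thread the underlying cause turns out to be the text-encoder setup rather than the multiply itself, but here is a minimal NumPy sketch of the same shape rule (PyTorch raises it as RuntimeError, NumPy as ValueError):

```python
import numpy as np

a = np.zeros((784, 1280))   # e.g. activations going into a projection layer
w = np.zeros((3840, 1280))  # weight whose inner dimension doesn't match

try:
    a @ w                   # inner dims 1280 vs 3840 -> cannot be multiplied
except ValueError as e:
    print("shape mismatch:", e)

# the rule: (m, k) @ (k, n) -> (m, n); only the inner k must agree
# (the transpose below only illustrates the shape rule, it is not the fix)
print((a @ w.T).shape)      # (784, 1280) @ (1280, 3840) -> (784, 3840)
```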
I guess it has to do with the CLIP we have used. But I have already applied the mmproj and nodes_qwen.py fix you mentioned, so it must definitely be the input image, because without any input image it works.
Add the .mmproj for the text encoder into the text_encoder folder:
https://huggingface.co/mradermacher/Qwen2.5-VL-7B-Instruct-abliterated-GGUF/blob/main/Qwen2.5-VL-7B-Instruct-abliterated.mmproj-Q8_0.gguf
and you need https://github.com/MNeMoNiCuZ/ComfyUI-mnemic-nodes
After that you get a CLIP error:
https://github.com/city96/ComfyUI-GGUF/pull/349 # for the mmproj error, since I use the abliterated model, which has a lot more NSFW capability.
In ComfyUI/custom_nodes/ComfyUI-GGUF/loader.py, change line 13 to:

```python
TXT_ARCH_LIST = {"t5", "t5encoder", "llama", "qwen2vl", "clip"}
```

Only "clip" has to be added here.
Here is the full description of how to do it:
https://huggingface.co/Phr00t/Qwen-Image-Edit-Rapid-AIO/discussions/39
Great! I followed your steps and now I'm able to run the workflow. Thanks for the model and the helpful instructions!
A few follow-up questions: does that "π Resolution Image Size Selector" node offer an advantage over the usual Empty Latent Image node? I noticed you mention feeding the width from this node into the target size. So do the target size and the original image resolution influence the output? If so, was the change noticeable? I'm just curious, since this is the first time I'm running Qwen Image Edit and I have yet to tinker with it, so it might be helpful.
Think about a group photo where you want to change something.
With a smaller target size I got a wider angle, and Qwen thought "hey, here is some space to fill": a completely new person spawned on the left side and Qwen added details in the free space. If you use a target size larger than the latent, it zooms and crops.
So if you want to keep it as close as possible to the original, which is what I usually want, the target size has to be the width of the latent; that's why I connect it directly.
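The behavior above can be summarized as a rule of thumb; this is just my reading of the observations, not the node's actual code:

```python
def framing_effect(target_width: int, latent_width: int) -> str:
    """Rough mental model for Qwen-Image-Edit target size, based on the
    observations above -- illustrative only, not the real implementation."""
    if target_width < latent_width:
        # the model sees extra canvas and fills it: wider angle, new content
        return "outpaint"
    if target_width > latent_width:
        # the model zooms into the original framing and crops
        return "zoom/crop"
    # target == latent width: framing stays as close to the original as possible
    return "preserve"

print(framing_effect(768, 1024))   # -> outpaint
print(framing_effect(1024, 1024))  # -> preserve
```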
Actually I can't test that much, because I'm trying to build a better-quality version. Because of my low-spec hardware, my PC has been running at its RAM and swap limit since yesterday. Fingers crossed that something useful comes out, IF it doesn't die.
Cool. Then I'm also going to explore a bit with changing the target size depending on my image.
Then which GGUF file should I download for my low-end PC with 8 GB VRAM and 16 GB RAM, so that it will work in this workflow? I have put the mmproj file in the text encoder folder, renaming it to Qwen2.5-VL-7B-Instruct-mmproj-F16.gguf. It's not working, please help brother.
This is what I can see in CMD: img = Image.fromarray(np.clip(i, 0, 255).astype(np.uint8))
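For what it's worth, that line by itself is not the error: it is the standard float-array-to-8-bit-image conversion that appears in ComfyUI's save/preview code, so it just shows up in the traceback of whichever node was writing the image out. The actual exception message should be printed near it in the console. A standalone sketch of what the line does:

```python
import numpy as np
from PIL import Image

# i is a float image array (H, W, C); values may fall outside 0..255
i = np.random.rand(64, 64, 3) * 300

# clamp to the valid byte range and convert to an 8-bit PIL image
img = Image.fromarray(np.clip(i, 0, 255).astype(np.uint8))
print(img.size, img.mode)  # -> (64, 64) RGB
```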
No plan; maybe try the workflow from the model card and follow the steps described there.
Maybe remove --fast or sage attention from the startup command.
You are always free to compare with the city96 or QuantStack GGUFs; for me, and I guess for the 10k people who downloaded my model, it just works.
As I told you, rather google your error messages than asking me; it's all online. That's the way I walked, and now I'm here.
And this is what I tell everybody:
- I don't know everything.
- I don't want to google that for you.
But the first Google hit says: try --force-upcast-attention.
- Qwen-Image-Edit-Rapid-GGUF is bleeding edge, alpha, testing; it needs custom patches.
- Feel free to test.
Sorry for my harsh words.
I'm doing all this stuff to learn and keep my brain fresh. I succeeded in fixing the low-quant problem and shared the solution with the community. Thanks to City96 and QuantStack, because without them and their hard work, all of this would not be possible.
My rig is:
- i7-4790K from 2012
- 32 GB DDR3
- super-slow SATA SSD and HDD
- AMD Instinct MI25 (Vega 10) from 2016
The Vega 10 has no tensor cores, no matrix-multiply units, no WMMA.
So patching my Python environment to get the unsupported gfx900 running took around 2 months. On sd.cpp, black images are my daily friend.
Because of upcasting and no bf16 or int8 support, running the fp8 or q8 model kills my entire RAM.
I can't run the larger models.
And the slow DDR3 RAM, with models only partially loaded, makes them run so slowly on my GPU that I need the low quants to work with.
That's the only reason I fixed the low quants and created this repo:
because I needed it and no one did it for me.
BTW: I don't even know if the KJNodes GGUF stuff can handle Qwen in this state. To find out, I would have to read issues, commits and PRs on GitHub, so I recommend what I know is working: the City96 GGUF node with the CLIP patch I posted.