Instructions to use zai-org/GLM-Image with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Diffusers
How to use zai-org/GLM-Image with Diffusers:

```shell
pip install -U diffusers transformers accelerate
```

```python
import torch
from diffusers import DiffusionPipeline

# switch to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "zai-org/GLM-Image", dtype=torch.bfloat16, device_map="cuda"
)
prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]
```

- Inference
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- Draw Things
- DiffusionBee
Error deploying and running the model in an internal network environment
```shell
sglang serve --model-path /models
```

```
Traceback (most recent call last):
  File "/usr/local/bin/sglang", line 6, in <module>
    sys.exit(main())
             ^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/sglang/cli/main.py", line 40, in main
    serve(args, extra_argv)
  File "/usr/local/lib/python3.12/dist-packages/sglang/cli/serve.py", line 116, in serve
    execute_serve_cmd(parsed_args, remaining_argv)
  File "/usr/local/lib/python3.12/dist-packages/sglang/multimodal_gen/runtime/entrypoints/cli/serve.py", line 33, in execute_serve_cmd
    server_args = ServerArgs.from_cli_args(args, unknown_args)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/sglang/multimodal_gen/runtime/server_args.py", line 958, in from_cli_args
    return cls.from_dict(provided_args)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/sglang/multimodal_gen/runtime/server_args.py", line 973, in from_dict
    pipeline_config = PipelineConfig.from_kwargs(kwargs)
                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/sglang/multimodal_gen/configs/pipeline_configs/base.py", line 646, in from_kwargs
    model_info = get_model_info(
                 ^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/sglang/multimodal_gen/registry.py", line 540, in get_model_info
    config_info = _get_config_info(model_path, model_id=model_id)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/sglang/multimodal_gen/registry.py", line 352, in _get_config_info
    raise RuntimeError(f"No model info found for model path: {model_path}")
RuntimeError: No model info found for model path: /models
```
Renaming the local directory so the path ends with the model ID makes it run:

```shell
sglang serve --model-path ./zai-org/GLM-Image/
```

So it only works when the directory name matches the model ID? That's not very smart…
Check our GitHub repos for deployment instructions.