From ab7cca68018ad3ceadcace9d6ecb1bc1f6f46b4e Mon Sep 17 00:00:00 2001
From: Sven Killig
Date: Mon, 1 Dec 2025 14:32:55 +0100
Subject: [PATCH] fix huggingface blog URL (#13)

---
 docs/flux2_dev_hf.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/flux2_dev_hf.md b/docs/flux2_dev_hf.md
index 9c057c9..1e5f176 100644
--- a/docs/flux2_dev_hf.md
+++ b/docs/flux2_dev_hf.md
@@ -109,7 +109,7 @@ image = pipe(
 image.save("flux2_output.png")
 ```
 
-To understand how different quantizations affect the model's abilities and quality, access the [FLUX.2 on diffusers](https://huggingface.co/blog/flux2) blog
+To understand how different quantizations affect the model's abilities and quality, access the [FLUX.2 on diffusers](https://huggingface.co/blog/flux-2) blog
 
 ---
 
@@ -194,4 +194,4 @@ image.save("flux2_output.png")
 
 ## 🧮 Other VRAM sizes
 
-If you have different GPU sizes, you can experiment with different quantizations, for example, for 40-48G VRAM GPUs, (8-bit) quantization instead of 4-bit can be a good trade-off. You can learn more on the [diffusers FLUX.2 release blog](https://huggingface.co/blog/flux2)
+If you have different GPU sizes, you can experiment with different quantizations, for example, for 40-48G VRAM GPUs, (8-bit) quantization instead of 4-bit can be a good trade-off. You can learn more on the [diffusers FLUX.2 release blog](https://huggingface.co/blog/flux-2)
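Note: the 8-bit-vs-4-bit trade-off mentioned in the second hunk is easy to try. Below is a minimal sketch using diffusers' `PipelineQuantizationConfig` with the bitsandbytes 8-bit backend; the repo id `black-forest-labs/FLUX.2-dev`, the component names, and the call parameters are assumptions based on the patched doc and the linked blog, not part of this patch — adjust them to the names the blog actually uses.

```python
# Sketch of the 8-bit option suggested for 40-48 GB GPUs. Assumptions (not
# confirmed by this patch): the "black-forest-labs/FLUX.2-dev" repo id and
# the component name "transformer"; see https://huggingface.co/blog/flux-2
# for the confirmed setup.
import torch
from diffusers import DiffusionPipeline
from diffusers.quantizers import PipelineQuantizationConfig

# Quantize the diffusion transformer to 8-bit via bitsandbytes. Compared to
# bf16 this roughly halves the transformer's VRAM footprint, while avoiding
# the larger quality hit that 4-bit quantization can introduce.
quant_config = PipelineQuantizationConfig(
    quant_backend="bitsandbytes_8bit",
    quant_kwargs={"load_in_8bit": True},
    components_to_quantize=["transformer"],  # component names vary by pipeline
)

pipe = DiffusionPipeline.from_pretrained(
    "black-forest-labs/FLUX.2-dev",  # assumed repo id
    quantization_config=quant_config,
    torch_dtype=torch.bfloat16,
).to("cuda")

# Generation parameters here are illustrative, mirroring the doc's example.
image = pipe(
    "a photo of a forest with morning mist",
    num_inference_steps=28,
    guidance_scale=4.0,
).images[0]
image.save("flux2_output.png")
```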