diff --git a/docs/flux2_dev_hf.md b/docs/flux2_dev_hf.md
index 9c057c9..1e5f176 100644
--- a/docs/flux2_dev_hf.md
+++ b/docs/flux2_dev_hf.md
@@ -109,7 +109,7 @@ image = pipe(
 image.save("flux2_output.png")
 ```
 
-To understand how different quantizations affect the model's abilities and quality, access the [FLUX.2 on diffusers](https://huggingface.co/blog/flux2) blog
+To understand how different quantizations affect the model's abilities and quality, access the [FLUX.2 on diffusers](https://huggingface.co/blog/flux-2) blog
 
 ---
 
@@ -194,4 +194,4 @@ image.save("flux2_output.png")
 
 ## 🧮 Other VRAM sizes
 
-If you have different GPU sizes, you can experiment with different quantizations, for example, for 40-48G VRAM GPUs, (8-bit) quantization instead of 4-bit can be a good trade-off. You can learn more on the [diffusers FLUX.2 release blog](https://huggingface.co/blog/flux2)
+If you have different GPU sizes, you can experiment with different quantizations, for example, for 40-48G VRAM GPUs, (8-bit) quantization instead of 4-bit can be a good trade-off. You can learn more on the [diffusers FLUX.2 release blog](https://huggingface.co/blog/flux-2)