fix huggingface blog URL (#13)

Sven Killig
2025-12-01 14:32:55 +01:00
committed by GitHub
parent c7a09571ba
commit ab7cca6801


@@ -109,7 +109,7 @@ image = pipe(
 image.save("flux2_output.png")
 ```
-To understand how different quantizations affect the model's abilities and quality, access the [FLUX.2 on diffusers](https://huggingface.co/blog/flux2) blog
+To understand how different quantizations affect the model's abilities and quality, access the [FLUX.2 on diffusers](https://huggingface.co/blog/flux-2) blog
 ---
@@ -194,4 +194,4 @@ image.save("flux2_output.png")
 ## 🧮 Other VRAM sizes
-If you have different GPU sizes, you can experiment with different quantizations, for example, for 40-48G VRAM GPUs, (8-bit) quantization instead of 4-bit can be a good trade-off. You can learn more on the [diffusers FLUX.2 release blog](https://huggingface.co/blog/flux2)
+If you have different GPU sizes, you can experiment with different quantizations, for example, for 40-48G VRAM GPUs, (8-bit) quantization instead of 4-bit can be a good trade-off. You can learn more on the [diffusers FLUX.2 release blog](https://huggingface.co/blog/flux-2)