Compare commits


712 Commits

Author SHA1 Message Date
Dr.Lt.Data
2866193baf feat: Draft pip package policy management system (not yet integrated)
Add comprehensive pip dependency conflict resolution framework as draft implementation. This is self-contained and does not affect existing
ComfyUI Manager functionality.

Key components:
- pip_util.py with PipBatch class for policy-driven package management
- Lazy-loaded policy system supporting base + user overrides
- Multi-stage policy execution (uninstall → apply_first_match → apply_all_matches → restore)
- Conditional policies based on platform, installed packages, and ComfyUI version
- Comprehensive test suite covering edge cases, workflows, and platform scenarios
- Design and implementation documentation

Policy capabilities (draft):
- Package replacement (e.g., PIL → Pillow, opencv-python → opencv-contrib-python)
- Version pinning to prevent dependency conflicts
- Dependency protection during installations
- Platform-specific handling (Linux/Windows, GPU detection)
- Pre-removal and post-restoration workflows

Testing infrastructure:
- Pytest-based test suite with isolated environments
- Dependency analysis tools for conflict detection
- Coverage for policy priority, edge cases, and environment recovery

Status: Draft implementation complete, integration with manager workflows pending.
2025-10-04 08:55:59 +09:00
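A rough Python sketch of the policy flow this draft describes; `PipBatch`, the stage calls, and the policy shape below are assumptions taken from the commit message, not the actual `pip_util.py` API:

```python
# Hypothetical replacement policy, mirroring the examples in the commit.
replacement_policy = {
    "PIL": "Pillow",
    "opencv-python": "opencv-contrib-python",
}

def resolve_requirement(req: str) -> str:
    """Apply a first-match package replacement to one requirement string."""
    name = req.split("==")[0].strip()
    return req.replace(name, replacement_policy.get(name, name), 1)

# The multi-stage execution would then run something like:
#   batch = PipBatch(policy=base_policy_with_user_overrides)  # lazy-loaded
#   batch.uninstall(...)            # stage 1
#   batch.apply_first_match(...)    # stage 2
#   batch.apply_all_matches(...)    # stage 3
#   batch.restore(...)              # stage 4: restore protected deps
print(resolve_requirement("opencv-python==4.9.0"))  # opencv-contrib-python==4.9.0
```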
Dr.Lt.Data
1ab2b1aeb3 modified: reflect the change from --disable-manager to --enable-manager 2025-09-19 11:58:04 +09:00
Dr.Lt.Data
ffaeb6d3ff from draft-v4 to manager-v4 2025-09-13 08:07:44 +09:00
Dr.Lt.Data
6cc1ad4cc0 Merge branch 'main' into draft-v4 2025-09-13 08:06:45 +09:00
Dr.Lt.Data
27fc787294 update DB 2025-09-13 08:06:27 +09:00
snicolast
d23286d390 IndexTTS2 custom node (custom-node-list.json) (#2146) 2025-09-13 07:36:18 +09:00
Dr.Lt.Data
7c3ccc76c3 update DB 2025-09-12 12:48:20 +09:00
Dr.Lt.Data
892dc5d4f3 update DB 2025-09-12 07:53:17 +09:00
Dr.Lt.Data
e278692749 update DB 2025-09-11 12:36:38 +09:00
Dr.Lt.Data
8d77dd2246 update DB 2025-09-11 07:23:42 +09:00
Dr.Lt.Data
14ede2a585 update DB 2025-09-10 11:58:27 +09:00
Dr.Lt.Data
5b525622f1 update DB 2025-09-10 07:52:05 +09:00
Dr.Lt.Data
a24b11905c update DB 2025-09-09 12:19:49 +09:00
darkamenosa
5d70858341 Add Comfy Nano Banana - interact directly with the Gemini API using your own API key; also adds a custom batch-images node to avoid chaining many nodes (#2141) 2025-09-09 07:39:31 +09:00
dehypnotic
3daa006741 Update custom-node-list.json (#2140) 2025-09-09 07:39:18 +09:00
Dr.Lt.Data
0bcc0c2101 update DB 2025-09-08 12:31:06 +09:00
Dr.Lt.Data
b8850c808c update DB 2025-09-08 07:47:24 +09:00
Dr.Lt.Data
f4f2c01ac1 update DB 2025-09-08 06:40:55 +09:00
Dr.Lt.Data
7072e82dff update DB 2025-09-08 06:38:52 +09:00
Leylah Krell
53dc36c4cf Add ComfyUI Violet Tools to custom node list (#2136)
Added aesthetic-focused custom nodes package with 7 specialized nodes:
- Aesthetic Alchemist (style blending with 20+ curated aesthetics)
- Quality Queen (quality prompts)
- Glamour Goddess (hair/makeup)
- Body Bard (body features)
- Pose Priestess (positioning)
- Encoding Enchantress (text processing)
- Negativity Nullifier (negative prompts)

Features weighted blending, randomization, and modular YAML-based configuration.
2025-09-08 06:37:00 +09:00
Satadal Dhara
5aadc3af00 Updated Node List with My node (#2134) 2025-09-06 03:55:06 +09:00
Dr.Lt.Data
8c28a698ed update DB 2025-09-06 03:54:56 +09:00
Dr.Lt.Data
5ed6d8b202 update DB 2025-09-06 03:53:56 +09:00
Vantage with AI
b73dc7bf5e Changed name of node from ComfyUI-HunyuanFoley to Vantage-HunyuanFoley because of conflict. (#2130)
* Update custom-node-list.json

* Update custom-node-list.json
2025-09-06 03:51:08 +09:00
Dr.Lt.Data
d7799964de fixed: Issue where an invalid channel exception occurred when using the default channel
- Mismatch issue between ltdrdata/ and Comfy-Org/
modified: /v2/customnode/installed – cnr_id was being returned in a normalized form
modified: /v2/customnode/installed – when both an enabled nodepack and a disabled nodepack existed, modified to report only the enabled nodepack
fixed: Removed unnecessary warning messages printed during nodepack installation
2025-09-06 03:35:43 +09:00
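The reporting rule for `/v2/customnode/installed` described above, as a minimal standalone sketch (the `packs` shape is an assumption, not the handler's real data model):

```python
def dedup_installed(packs):
    """When a nodepack exists in both enabled and disabled form,
    report only the enabled one."""
    by_id = {}
    for pack in packs:  # pack: {"cnr_id": str, "enabled": bool, ...}
        cur = by_id.get(pack["cnr_id"])
        if cur is None or (pack["enabled"] and not cur["enabled"]):
            by_id[pack["cnr_id"]] = pack
    return list(by_id.values())

print(dedup_installed([
    {"cnr_id": "comfyui-foo", "enabled": False},
    {"cnr_id": "comfyui-foo", "enabled": True},
]))  # only the enabled entry survives
```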
Dr.Lt.Data
71d0f4ab63 update DB 2025-09-05 12:56:40 +09:00
Dr.Lt.Data
d479dcde81 update DB 2025-09-05 07:53:04 +09:00
Dr.Lt.Data
ae536017d5 update DB 2025-09-05 07:49:12 +09:00
matthewfriedrichs
67ddfce279 adding thought bubble custom node (#2129) 2025-09-05 07:48:06 +09:00
Vantage with AI
b1f39b34d7 Update custom-node-list.json (#2128) 2025-09-05 07:47:26 +09:00
Dr.Lt.Data
6cf958ccce update DB 2025-09-04 12:22:45 +09:00
Dr.Lt.Data
5378f0a8e9 bump version 2025-09-04 08:39:37 +09:00
Jin Yi
e13bf68775 Fix JSON serialization error in bulk import fail info API (#2119)
* fix: import failed info bulk api bug fix

* fix: Remove unused ImportFailInfoBulkResponse import
2025-09-04 08:36:46 +09:00
Dr.Lt.Data
eaed3677d3 update DB 2025-09-04 07:27:31 +09:00
sumitchatterjee13
b9c88da54d Add Nuke Nodes for ComfyUI to registry (#2123)
This PR adds nuke-nodes-comfyui to the ComfyUI Manager registry.

Features:
- Professional compositing nodes replicating Nuke functionality
- 15+ nodes including merge, grade, transform, and blur operations
- Designed for professional compositing workflows in ComfyUI
- Well-documented with installation instructions

Repository: https://github.com/sumitchatterjee13/nuke-nodes-comfyui
2025-09-04 07:23:48 +09:00
Dr.Lt.Data
104ae77f7a update DB 2025-09-03 12:12:40 +09:00
Dr.Lt.Data
bfcb2ce61b update DB 2025-09-03 07:40:58 +09:00
Dr.Lt.Data
d970fe68ea Merge branch 'main' into draft-v4 2025-09-03 01:24:47 +09:00
Dr.Lt.Data
63ba5fed09 update DB 2025-09-03 01:07:30 +09:00
Dr.Lt.Data
98a8464933 update DB 2025-09-03 00:16:55 +09:00
S4MUEL
7e3e6726e0 Add ComfyUI-Prepack to custom nodes list (#2121)
* Add ComfyUI-S4Tool-Image to custom nodes list

Add ComfyUI-S4Tool-Image to custom nodes list

* Update custom-node-list.json

Add custom-node : ComfyUI-S4Motion

* Add ComfyUI-S4Tool-Text to custom node list

Text rendering and styling nodes for ComfyUI. This extension provides a basic text renderer, multiple font loaders, and a style node that adds stroke, shadow, gradient fill, and opacity control.

* Add ComfyUI-Prepack to custom node list

A small, practical bundle of ComfyUI nodes that streamlines common workflows.

* Update custom-node-list.json

---------

Co-authored-by: Dr.Lt.Data <128333288+ltdrdata@users.noreply.github.com>
2025-09-03 00:15:49 +09:00
Dr.Lt.Data
09567b2bb2 update DB 2025-09-03 00:15:34 +09:00
Frief84
f3bd116184 Add ComfyUI-LoRAWeightAxisXY (#2120)
* Add ComfyUI-LoRAWeightAxisXY

* Update custom-node-list.json

---------

Co-authored-by: Dr.Lt.Data <128333288+ltdrdata@users.noreply.github.com>
2025-09-03 00:12:50 +09:00
Dr.Lt.Data
7509737563 update DB 2025-09-02 12:59:44 +09:00
Dr.Lt.Data
cfb815d879 update DB 2025-09-01 12:05:21 +09:00
Dr.Lt.Data
44241fb967 update DB 2025-09-01 07:31:34 +09:00
mengqin
c4b45129bd Update DB. (#2118) 2025-09-01 06:53:35 +09:00
Dr.Lt.Data
70741008ca Update DB 2025-08-31 18:11:54 +09:00
daehwa
6c2d2cae2a Add ComfyUI-NanoBananaAPI node entry (#2115) 2025-08-31 17:22:18 +09:00
gsusgg
28f13d3311 Add ComfyUI-CozyGen custom node entry (#2113)
Added a new custom node entry for ComfyUI-CozyGen with details.
2025-08-31 17:20:43 +09:00
Dr.Lt.Data
4e31aaa8fb update DB 2025-08-30 10:47:43 +09:00
dehypnotic
ba99f0c2cc Update custom-node-list.json (#2112) 2025-08-30 10:41:28 +09:00
Dr.Lt.Data
e0a96b4937 update DB 2025-08-29 13:00:32 +09:00
Dr.Lt.Data
82c055f527 update DB 2025-08-29 07:59:21 +09:00
Makki Shizu
f94008192c Update custom-node-list.json (#2110) 2025-08-29 07:47:26 +09:00
Fabio Sarracino
3895d5279e Add VibeVoice ComfyUI node (#2109) 2025-08-29 07:45:41 +09:00
Dr.Lt.Data
41be94690f bump version 2025-08-28 00:27:03 +09:00
Dr.Lt.Data
3d85ecc525 update DB 2025-08-28 00:25:45 +09:00
Dr.Lt.Data
7da00796e5 update DB 2025-08-27 12:21:31 +09:00
Dr.Lt.Data
6086419cb6 update DB 2025-08-27 07:51:36 +09:00
Dr.Lt.Data
5bc1f2f2c0 update DB 2025-08-26 19:39:38 +09:00
Changrz
32a83b211e Update Rodin Plugin url (#2102)
Co-authored-by: WhiteGiven <c15838568211@163.com>
2025-08-26 19:03:05 +09:00
Alex
bead7b3a7f Add Custom Node - Save Checkpoint with Metadata (#2105)
* Added entry for ComfyUI-SaveCheckpointWithMetadata

* Added entry for ComfyUI-SaveCheckpointWithMetadata in git-clone section
2025-08-26 19:01:52 +09:00
jialuw0830
815d6d6572 Add Eigen AI FLUX API Plugin to custom node list (#2104) 2025-08-26 18:59:51 +09:00
Christian Byrne
fbecbee4c3 Merge pull request #2106 from viva-jinyi/revert-legacy-hardcoding
Revert "As a temporary measure, the new UI will use the legacy/... ba…
2025-08-25 18:27:57 -07:00
Jin Yi
b9a7d2a78c Revert "As a temporary measure, the new UI will use the legacy/... backend structure."
This reverts commit 121a5a1888.
2025-08-26 10:07:32 +09:00
Dr.Lt.Data
95ce812992 update DB 2025-08-25 12:59:46 +09:00
Dr.Lt.Data
9a36f4748c update DB 2025-08-25 08:06:43 +09:00
Dr.Lt.Data
50b7849a35 update DB 2025-08-25 07:27:39 +09:00
Dr.Lt.Data
6f1245b27c update DB 2025-08-25 06:30:51 +09:00
dehypnotic
cc87ed3899 Update custom-node-list.json (#2097)
* Update custom-node-list.json

* Update custom-node-list.json

---------

Co-authored-by: Dr.Lt.Data <128333288+ltdrdata@users.noreply.github.com>
2025-08-25 06:28:06 +09:00
Dr.Lt.Data
1d9037fefe update DB 2025-08-25 06:27:46 +09:00
Daxamur
03016e2d16 Add DaxNodes to custom node list (#2100) 2025-08-25 06:26:28 +09:00
Dr.Lt.Data
bdfb70a58a bump version 2025-08-24 15:58:23 +09:00
Dr.Lt.Data
3d41617f4e update DB 2025-08-23 17:54:00 +09:00
Dr.Lt.Data
35151ffdd1 update DB 2025-08-23 09:20:01 +09:00
Dr.Lt.Data
4527d41a7a update DB 2025-08-22 21:13:29 +09:00
dehypnotic
553cba12f3 Update custom-node-list.json (#2096)
* Update custom-node-list.json

* Update custom-node-list.json

---------

Co-authored-by: Dr.Lt.Data <128333288+ltdrdata@users.noreply.github.com>
2025-08-22 20:54:35 +09:00
Dr.Lt.Data
00fb9c88e1 modified: remove matrix-nio dependency from requirements.txt
modified: The matrix share feature is now only available when the `matrix-nio` dependency is installed.

If `matrix-nio` is not installed:
1. Apply a strikethrough to the matrix checkbox text in the share UI and display a tooltip.
2. A warning is logged at startup indicating that `matrix-nio` is missing, along with the installation command.

fixed: Corrected an issue where PR #2025 was merged into draft-v4 but applied only to `legacy/..` and not to `glob/..`
2025-08-22 20:46:32 +09:00
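The optional-dependency gating described above follows a standard Python pattern; this is a sketch under that assumption (the message text and flag variable are illustrative; `matrix-nio` imports as `nio`):

```python
import logging

try:
    import nio  # the matrix-nio package
    HAS_MATRIX_NIO = True
except ImportError:
    HAS_MATRIX_NIO = False
    # Logged once at startup, as the commit describes.
    logging.warning(
        "matrix-nio is not installed; the Matrix share feature is disabled. "
        "Install it with: pip install matrix-nio"
    )
```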
Dr.Lt.Data
116e068ac3 update DB 2025-08-22 12:41:08 +09:00
Dr.Lt.Data
1010dd2d28 update DB 2025-08-22 07:35:26 +09:00
Dr.Lt.Data
68bc8302fd Update publish-to-pypi.yml 2025-08-22 06:17:55 +09:00
Dr.Lt.Data
596dad5cda Update publish-to-pypi.yml 2025-08-22 06:14:51 +09:00
Dr.Lt.Data
a924c280fb Update publish-to-pypi.yml 2025-08-22 06:08:59 +09:00
Dr.Lt.Data
7354242906 update workflow 2025-08-22 06:05:27 +09:00
Dr.Lt.Data
3d0bcf5979 update workflow 2025-08-22 06:00:26 +09:00
Dr.Lt.Data
e7d0b158e9 update DB 2025-08-22 05:41:35 +09:00
Dr.Lt.Data
10ff90787c Merge branch 'main' into draft-v4 2025-08-21 12:48:17 +09:00
Dr.Lt.Data
330c4657b1 update DB 2025-08-21 12:25:20 +09:00
Dr.Lt.Data
72a109f109 update DB 2025-08-21 07:29:53 +09:00
licyk
cf45c51dfb Add HDM-ext to custom-node-list (#2094) 2025-08-21 06:52:09 +09:00
Dr.Lt.Data
0b013adb34 update DB 2025-08-20 12:24:39 +09:00
Dr.Lt.Data
7457d91f64 update DB 2025-08-20 07:44:09 +09:00
Dr.Lt.Data
7fe1159426 update DB 2025-08-20 05:23:08 +09:00
renderartist
c2665e3677 Update custom-node-list.json (#2091)
* Update custom-node-list.json

* Update custom-node-list.json

---------

Co-authored-by: Dr.Lt.Data <128333288+ltdrdata@users.noreply.github.com>
2025-08-20 05:10:13 +09:00
Dr.Lt.Data
d63de803a4 update DB 2025-08-20 04:02:02 +09:00
Dr.Lt.Data
11aca3513c update DB 2025-08-20 03:53:51 +09:00
Joel Andrés Navarro Navarro
561c9f40e5 Update custom-node-list.json (#2089) 2025-08-20 03:49:46 +09:00
Saquib Alam
54ed13aadf add nodes for omini-kontext framework (#2087) 2025-08-20 03:47:56 +09:00
Dr.Lt.Data
109cc21337 update DB 2025-08-19 07:48:17 +09:00
Dr.Lt.Data
7e46b30fa5 update DB 2025-08-18 12:33:30 +09:00
Dr.Lt.Data
0ba112c2c7 update DB 2025-08-18 07:47:41 +09:00
david
fc15d94170 Update custom-node-list.json (#2086) 2025-08-18 07:38:28 +09:00
Dr.Lt.Data
dcb37d9c55 update DB 2025-08-17 18:23:05 +09:00
Marco Zanella
755b9d6342 Add ComfyUI-BooleanExpression to custom-node-list (#2084) 2025-08-17 17:53:24 +09:00
Joel Andrés Navarro Navarro
3d6151c94f Update custom-node-list.json (#2085) 2025-08-17 17:51:20 +09:00
jupo-ai
590bd8c4b9 Update custom-node-list.json (#2083) 2025-08-17 07:05:03 +09:00
Dr.Lt.Data
e99aafd876 update DB 2025-08-16 10:26:33 +09:00
Dr.Lt.Data
1f0adf8bcf update DB 2025-08-16 09:53:13 +09:00
jupo-ai
dbd5d5fb43 Update custom-node-list.json (#2082)
* Update custom-node-list.json

* Update custom-node-list.json
2025-08-16 09:36:35 +09:00
Dr.Lt.Data
a8b0e3641b update DB 2025-08-15 10:13:33 +09:00
AfterGlow.SYX
9efb350be9 Update custom-node-list.json (#2081) 2025-08-15 10:08:10 +09:00
Dr.Lt.Data
8d9820b3fb update DB 2025-08-14 23:24:08 +09:00
Dr.Lt.Data
103f89551a update DB 2025-08-14 22:00:23 +09:00
Dr.Lt.Data
6030d961ad update DB 2025-08-14 12:01:24 +09:00
Dr.Lt.Data
ee08c9e17f update DB 2025-08-14 07:42:41 +09:00
Dr.Lt.Data
48dd9a3240 update DB 2025-08-14 02:35:34 +09:00
Baverne
e122e206a6 Add TiledWan (#2078)
* Add TiledWan

* Add TiledWan

* Add TiledWan
2025-08-14 02:21:37 +09:00
Dr.Lt.Data
398b905758 update DB 2025-08-13 12:12:36 +09:00
Dr.Lt.Data
dc2ec08fe3 update DB 2025-08-13 07:44:54 +09:00
Dr.Lt.Data
3bf5edf5c9 update DB 2025-08-12 10:34:55 +09:00
Dr.Lt.Data
134bca526c update DB 2025-08-12 09:52:15 +09:00
Dr.Lt.Data
3393e58b06 update DB 2025-08-11 22:52:13 +09:00
Dr.Lt.Data
648d7e73c6 Merge branch 'main' into draft-v4 2025-08-11 12:51:34 +09:00
Dr.Lt.Data
eab6cdeee4 bump version 2025-08-11 12:48:38 +09:00
Christian Byrne
e8ec1ce8e3 recurse when finding nodes in workflow (#2070) 2025-08-11 12:47:20 +09:00
Dr.Lt.Data
b3581564ed update DB 2025-08-11 12:28:12 +09:00
S4MUEL
29e1bd95fd Add ComfyUI S4Motion to custom-node-list.json (#2072)
* Add ComfyUI-S4Tool-Image to custom nodes list

Add ComfyUI-S4Tool-Image to custom nodes list

* Update custom-node-list.json

Add custom-node : ComfyUI-S4Motion
2025-08-11 12:23:16 +09:00
Dr.Lt.Data
8bff401c14 update DB 2025-08-11 08:47:56 +09:00
Dr.Lt.Data
41798e9255 update DB 2025-08-11 07:44:25 +09:00
Dr.Lt.Data
9e4f0228d1 update DB 2025-08-10 20:54:49 +09:00
Dr.Lt.Data
76ee93c98c update DB 2025-08-10 11:25:27 +09:00
ericKuang
fb1a89efb7 Update custom-node-list.json (#2068)
Add ComfyUI-Only node:
Pain Point Solved: Eliminates the need to manually move .latent files into the ComfyUI input directory.
2025-08-10 11:16:32 +09:00
Dr.Lt.Data
aface43554 update DB 2025-08-10 11:02:38 +09:00
Dr.Lt.Data
a35f0157b2 update DB 2025-08-10 10:20:57 +09:00
Dr.Lt.Data
9b32162906 update DB 2025-08-09 15:13:30 +09:00
Dr.Lt.Data
21bba62572 update DB 2025-08-09 12:35:05 +09:00
Dr.Lt.Data
302327d6b3 update DB 2025-08-09 07:54:04 +09:00
Dr.Lt.Data
5667e8bcbb update DB 2025-08-08 23:13:50 +09:00
Dr.Lt.Data
ae66bd0e31 update DB 2025-08-08 12:15:46 +09:00
Dr.Lt.Data
48dfadc02d update DB 2025-08-08 07:54:54 +09:00
Dr.Lt.Data
3df6272bb6 update DB 2025-08-08 07:37:49 +09:00
CY-CHENYUE
e7f9bcda01 Update custom-node-list.json (#2064) 2025-08-08 07:35:24 +09:00
Dr.Lt.Data
205044ca66 update DB 2025-08-07 12:19:21 +09:00
Dr.Lt.Data
d497eb1f00 update DB 2025-08-07 08:42:22 +09:00
Dr.Lt.Data
4e6f970ee9 update DB 2025-08-06 12:14:25 +09:00
Dr.Lt.Data
0b6cdda6f5 update DB 2025-08-06 08:59:45 +09:00
Dr.Lt.Data
a896ded763 update DB 2025-08-06 07:26:55 +09:00
Dr.Lt.Data
fb5dd9ebc2 update DB 2025-08-05 12:24:03 +09:00
Dr.Lt.Data
c8b7db6c38 update DB 2025-08-05 08:57:36 +09:00
Dr.Lt.Data
44a3191be3 update DB 2025-08-05 07:16:04 +09:00
Dr.Lt.Data
b4f7cdc9e7 update DB 2025-08-05 06:20:52 +09:00
Alex Furer
8da07018d5 Update custom-node-list.json (#2058)
* Update custom-node-list.json

Added my custom node "AF-EditGeneratedPrompt", which lets one pipe in a generated prompt, edit it, or use the node as a regular prompting node. Thank you for your efforts!

* Update custom-node-list.json

---------

Co-authored-by: Dr.Lt.Data <128333288+ltdrdata@users.noreply.github.com>
2025-08-05 06:19:40 +09:00
Dr.Lt.Data
0c19a27065 update DB 2025-08-04 20:13:27 +09:00
jqy-yo
3296b0ecdf Add ComfyUI Gemini Nodes by jqy-yo (#2057)
Add entry for comfyui-gemini-nodes - a collection of custom nodes for integrating Google Gemini API with ComfyUI, providing AI capabilities for text generation, image generation, and video analysis.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-authored-by: jqy-yo <jqy-yo@users.noreply.github.com>
Co-authored-by: Claude <noreply@anthropic.com>
2025-08-04 19:58:59 +09:00
Uygar
0a07261124 Update custom-node-list.json (#2055) 2025-08-04 12:12:13 +09:00
Dr.Lt.Data
33106d0ecf update DB 2025-08-04 12:10:52 +09:00
Novice_Chen
5bb887206a add new node:ComfyUI-XingLiu (#2040) 2025-08-04 12:09:22 +09:00
Dr.Lt.Data
b30b0e27cb update DB 2025-08-04 08:59:56 +09:00
Dr.Lt.Data
363736489c update DB 2025-08-04 08:59:40 +09:00
Dr.Lt.Data
8dbf5e87a0 update DB 2025-08-04 07:39:25 +09:00
Dr.Lt.Data
0b30f2cb50 update DB 2025-08-04 07:02:06 +09:00
Brekel
ba5265dac4 Update custom-node-list.json (#2054)
Add ComfyUI-Brekel
2025-08-04 06:16:37 +09:00
Dr.Lt.Data
ecb9c65917 update DB 2025-08-04 06:16:24 +09:00
jupo-ai
8a98474600 Update custom-node-list.json (#2051) 2025-08-04 06:09:28 +09:00
Radiating Reverberations
b072216e67 Add Wan2.2 models from Comfy-Org (#2050) 2025-08-04 06:08:44 +09:00
Dr.Lt.Data
cfb3181716 update DB 2025-08-02 08:03:23 +09:00
Dr.Lt.Data
ab684cdc99 update DB 2025-08-01 12:22:27 +09:00
Dr.Lt.Data
facadc3a44 update DB 2025-08-01 07:29:09 +09:00
Christian Byrne
f599bc22d7 Merge pull request #2047 from viva-jinyi/feat/pydantic-validation-bulk-api
Add Pydantic validation to import_fail_info_bulk endpoint
2025-07-31 12:34:20 -07:00
Dr.Lt.Data
281319d2da update DB 2025-08-01 00:08:52 +09:00
Simlym
5cb203685c Update custom-node-list.json (#2045) 2025-07-31 23:44:48 +09:00
Jin Yi
300c6e7406 feat: Add Pydantic validation to import_fail_info_bulk endpoint
- Regenerated Pydantic models from updated OpenAPI specification
- Updated import_fail_info_bulk route handler to use ImportFailInfoBulkRequest/Response models
- Replaced manual JSON validation with Pydantic model validation
- Added proper error handling with ValidationError
- Updated data_models/__init__.py to export new models

Following the process outlined in data_models/README.md for type safety and consistency.
2025-07-31 14:15:21 +09:00
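A sketch of the validation pattern this commit describes. The real `ImportFailInfoBulkRequest` is generated from `openapi.yaml`; the stand-in model and handler below are illustrative only:

```python
from aiohttp import web
from pydantic import BaseModel, ValidationError

class ImportFailInfoBulkRequest(BaseModel):  # stand-in for the generated model
    cnr_ids: list[str] = []
    urls: list[str] = []

async def import_fail_info_bulk(request: web.Request) -> web.Response:
    try:
        req = ImportFailInfoBulkRequest.model_validate(await request.json())
    except ValidationError as e:
        # Pydantic's structured errors replace the old manual JSON checks.
        return web.json_response({"error": e.errors()}, status=400)
    # ... look up failure info for req.cnr_ids / req.urls ...
    return web.json_response({})
```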
Dr.Lt.Data
9c4d6a0773 Merge branch 'main' into draft-v4 2025-07-31 12:44:02 +09:00
Dr.Lt.Data
01fa37900b update DB 2025-07-31 12:32:47 +09:00
Dr.Lt.Data
edbe744e17 update DB 2025-07-31 07:57:27 +09:00
Jin Yi
2a32a1a4a8 Add bulk API endpoint for import fail info (#2039)
* feat(api): Implement endpoint for bulk import failure info

Adds the `/v2/customnode/import_fail_info_bulk` endpoint to allow
fetching multiple import error statuses in a single request.

* chore(api): Update OpenAPI spec for new bulk endpoint

Adds the `import_fail_info_bulk` route and its corresponding
request/response schemas to `openapi.yaml`.
2025-07-31 07:43:49 +09:00
Dr.Lt.Data
404bdb21e6 update DB 2025-07-30 18:39:08 +09:00
PD19 Anime
b260c9a512 Update custom-node-list.json (#2044) 2025-07-30 18:33:29 +09:00
Yuan-Man
4b941adb6a Add ComfyUI-SkyworkUniPic (#2043) 2025-07-30 18:32:15 +09:00
copusDev
bd752550a8 feat: change web icon (#2042)
Co-authored-by: john <john@server31.io>
2025-07-30 18:31:56 +09:00
Dr.Lt.Data
b8b71bb961 update DB 2025-07-30 12:16:25 +09:00
Kevin Lin
5aaf7a4092 Update custom node listing (#2041) 2025-07-30 12:03:28 +09:00
Dr.Lt.Data
030e02ffb8 update DB 2025-07-30 08:57:38 +09:00
Jin Yi
60746c6253 [feat] Add bulk import failure info API endpoint (#2035)
* [feat] Add bulk import failure info API endpoint

- Add import_fail_info_bulk endpoint to both glob and legacy manager servers
- Supports bulk processing of cnr_ids and urls arrays in single request
- Maintains same error handling pattern as original import_fail_info API
- Reduces API calls from N to 1 for conflict detection optimization
- Validates input parameters and provides proper error responses

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

* modified: remove manager button completely. Now, even when using the legacy UI, it must always be accessed through the menu.

* chore(api): Add temporary cache reload for import_fail_info_bulk

---------

Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Dr.Lt.Data <dr.lt.data@gmail.com>
2025-07-30 07:57:19 +09:00
Dr.Lt.Data
d962aa03f4 update DB 2025-07-30 07:37:26 +09:00
Dr.Lt.Data
121a5a1888 As a temporary measure, the new UI will use the legacy/... backend structure.
The glob/... version will be applied later after the cacheless implementation is completed.
2025-07-30 01:13:17 +09:00
Dr.Lt.Data
9e4a2aae43 update DB 2025-07-30 00:02:30 +09:00
rainlizard
ee6eb685e7 Add Whirlpool Upscaler (#2037)
* Added Whirlpool Upscaler

* Update custom-node-list.json

---------

Co-authored-by: Dr.Lt.Data <128333288+ltdrdata@users.noreply.github.com>
2025-07-29 23:52:57 +09:00
Dr.Lt.Data
09a38a32ce update DB 2025-07-29 21:30:45 +09:00
Android zhang
d13b19d43d Update custom-node-list.json (#2036)
Add ComfyUI-MoGe2
2025-07-29 21:02:18 +09:00
Dr.Lt.Data
5316ec1b4d Merge branch 'main' into draft-v4 2025-07-29 12:18:55 +09:00
Dr.Lt.Data
e730dca1ad update DB 2025-07-29 12:13:35 +09:00
Dr.Lt.Data
8da30640bb update DB
fixed: scanner.py
2025-07-29 07:45:05 +09:00
Dr.Lt.Data
6f4eb88e07 update DB 2025-07-28 12:15:58 +09:00
Dr.Lt.Data
d9592b9dab update DB 2025-07-28 08:57:58 +09:00
Dr.Lt.Data
b87ada72aa update DB 2025-07-28 07:04:57 +09:00
Dr.Lt.Data
83363ba1f0 update DB 2025-07-27 21:36:48 +09:00
Dr.Lt.Data
a2a7349ce4 Merge branch 'main' into draft-v4 2025-07-27 16:07:57 +09:00
Dr.Lt.Data
23ebe7f718 update DB 2025-07-27 15:04:41 +09:00
Dr.Lt.Data
e04264cfa3 update DB 2025-07-27 10:45:00 +09:00
Shmuel Ronen
8d29e5037f Add ComfyUI-HiggsAudio_Wrapper to custom node list (#2034) 2025-07-27 10:28:27 +09:00
Dr.Lt.Data
6926ed45b0 update DB 2025-07-26 21:05:02 +09:00
Dr.Lt.Data
736b85b8bb update DB 2025-07-26 20:51:43 +09:00
Nanthakumar
9e3361bc31 Update custom-node-list.json (#2031) 2025-07-26 20:37:40 +09:00
Dr.Lt.Data
6e10381020 update DB 2025-07-26 11:13:08 +09:00
Dr.Lt.Data
a1d37d379c update DB 2025-07-26 09:34:57 +09:00
comfyuistudio
07d87db7a2 Update custom-node-list.json: Add ComfyUI-Studio-nodes to custom_nodes registry (#2029)
* Update custom-node-list.json

* Update custom-node-list.json

* Update custom-node-list.json

---------

Co-authored-by: Dr.Lt.Data <128333288+ltdrdata@users.noreply.github.com>
2025-07-26 09:29:31 +09:00
Dr.Lt.Data
4e556673d2 update DB 2025-07-26 09:27:00 +09:00
AIWarper
f421304fc1 Update custom-node-list.json (#2028) 2025-07-26 09:25:34 +09:00
Dr.Lt.Data
6867616973 Merge branch 'main' into draft-v4 2025-07-25 12:26:42 +09:00
Dr.Lt.Data
c9271b1686 update DB 2025-07-25 12:19:45 +09:00
Dr.Lt.Data
12eb6863da update DB 2025-07-25 08:58:56 +09:00
Dr.Lt.Data
4834874091 fixed: ruff check 2025-07-25 07:26:48 +09:00
Dr.Lt.Data
8759ebf200 bump version 2025-07-25 07:03:14 +09:00
YAN Wenkun
d4715aebef Migrate matrix-client to matrix-nio (#2025) 2025-07-25 06:59:46 +09:00
Dr.Lt.Data
0fe2ade7bb update DB 2025-07-25 06:59:32 +09:00
Dr.Lt.Data
0c71565535 update DB 2025-07-24 21:28:41 +09:00
Dr.Lt.Data
cf8029ecd4 Merge branch 'main' into draft-v4 2025-07-24 12:41:48 +09:00
Dr.Lt.Data
6a637091a2 update DB 2025-07-24 12:10:49 +09:00
Dr.Lt.Data
31eba60012 update DB 2025-07-24 09:00:09 +09:00
Dr.Lt.Data
51e58e9078 update DB 2025-07-24 07:07:58 +09:00
Dr.Lt.Data
4a1e76730a fixed: security_check - robust checking
https://github.com/Comfy-Org/ComfyUI-Manager/issues/2002
2025-07-24 02:44:43 +09:00
Dr.Lt.Data
5599bb028b fixed: security_check - robust checking
https://github.com/Comfy-Org/ComfyUI-Manager/issues/2002
2025-07-24 02:38:53 +09:00
Dr.Lt.Data
552c6da0cc modified: download_url - provide more informative error messages
https://github.com/Comfy-Org/ComfyUI-Manager/issues/2016
2025-07-24 02:30:07 +09:00
Dr.Lt.Data
cc6817a891 fixed: cnr_utils – fixed improper behavior of bypass_ssl
https://github.com/Comfy-Org/ComfyUI-Manager/issues/2017
2025-07-24 02:15:31 +09:00
Dr.Lt.Data
fb48d1b485 update DB 2025-07-24 02:06:14 +09:00
Uygar
1c336dad6b ComfyUI-Artha-Gemini custom node (#2024)
* Add files via upload

* Update custom-node-list.json
2025-07-24 02:01:31 +09:00
Dr.Lt.Data
a4940d46cd update DB 2025-07-24 02:01:16 +09:00
猫大好き
499b2f44c1 Add builmenlabo custom node entry (#2020)
* Add files via upload

* Add files via upload

* Delete manager_registration.json

---------

Co-authored-by: Dr.Lt.Data <128333288+ltdrdata@users.noreply.github.com>
2025-07-24 01:59:13 +09:00
Yuan-Man
2b200c9281 Add ComfyUI-HiggsAudio (#2023) 2025-07-24 01:58:09 +09:00
Dr.Lt.Data
36a900c98f update DB 2025-07-23 12:50:44 +09:00
Dr.Lt.Data
5236b03f66 update DB 2025-07-23 07:32:34 +09:00
kpsss34
8be35e3621 Update custom-node-list.json (#2021)
Rename: ComfyUI-kpsss34-Sana to ComfyUI-kpsss34
2025-07-23 07:31:26 +09:00
Dariusz L
509f00fe89 Add Comfyui-LayerForge (#2022)
Add the "Comfyui-LayerForge" node to the community list.
2025-07-23 07:30:43 +09:00
Dr.Lt.Data
a98b87f148 update DB 2025-07-22 12:17:42 +09:00
Dr.Lt.Data
ae9b2b3b72 update DB 2025-07-22 08:59:51 +09:00
Dr.Lt.Data
02e1ec0ae3 update DB 2025-07-22 07:32:38 +09:00
Vaishnav V Nair
daefb0f120 Update custom-node-list.json (#2018)
first custom node
2025-07-22 07:22:18 +09:00
Dr.Lt.Data
ff0604e3b6 update DB 2025-07-21 12:14:49 +09:00
Dr.Lt.Data
20e41e22fa update DB 2025-07-21 08:59:07 +09:00
Dr.Lt.Data
59264c1fd9 Merge branch 'main' into draft-v4 2025-07-20 19:23:24 +09:00
Dr.Lt.Data
a0e3bdd594 update DB 2025-07-20 19:15:45 +09:00
brucew4yn3rp
6580aaf3ad Added Save Image (Selective Metadata) node (#2012) 2025-07-20 18:57:27 +09:00
Dr.Lt.Data
0b46701b60 update DB 2025-07-20 18:57:10 +09:00
Edoardo Carmignani
0bb4effede Add ComfyUI-ExtraLinks (#2009)
A one-click collection of alternate connection styles for ComfyUI.
2025-07-20 18:21:25 +09:00
Dr.Lt.Data
b07082a52d update DB 2025-07-19 18:16:26 +09:00
StrawBerryFist
04f267f5a7 Add StrawberryFist VRAM Optimizer node to custom-node-list.json (#2007)
* Add StrawberryFist VRAM Optimizer node to custom-node-list.json

* Update custom-node-list.json

---------

Co-authored-by: Dr.Lt.Data <128333288+ltdrdata@users.noreply.github.com>
2025-07-19 18:15:22 +09:00
Dr.Lt.Data
03ccce2804 fixed: cm-cli - provides pip dependency restoration using the options --pip-non-url, --pip-non-local-url, and --pip-local-url.
https://github.com/Comfy-Org/ComfyUI-Manager/issues/2008
2025-07-19 06:51:07 +09:00
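Judging from this commit, a restoration run would look something like `python cm-cli.py restore-dependencies --pip-non-url`; the flag names come from the commit message, while the `restore-dependencies` subcommand spelling is an assumption.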
Dr.Lt.Data
e894bd9f24 update DB 2025-07-18 07:50:14 +09:00
Dr.Lt.Data
10e6988273 update DB 2025-07-18 07:26:51 +09:00
Erehr
905b61e5d8 Publish ComfyUI-Eagle-Autosend (#2006) 2025-07-18 07:25:55 +09:00
Dr.Lt.Data
ee69d393ae update DB
update scanner script
2025-07-17 12:22:13 +09:00
Dr.Lt.Data
cab39973ae update DB 2025-07-17 12:10:40 +09:00
Dr.Lt.Data
d93f5d07bb update DB 2025-07-17 08:57:16 +09:00
Dr.Lt.Data
ba00ffe1ae update DB 2025-07-17 07:39:11 +09:00
Gilad Schreiber
6afaf5eaf5 Add LTX-Video 0.9.8 distilled models (#2005)
- Add LTX-Video 2B Distilled v0.9.8 (6.34GB)
- Add LTX-Video 2B Distilled FP8 v0.9.8 (4.46GB)
- Add LTX-Video 13B Distilled v0.9.8 (28.6GB)
- Add LTX-Video 13B Distilled FP8 v0.9.8 (15.7GB)

These v0.9.8 models feature improved prompt understanding and detail generation.
Both 2B and 13B variants available in standard and FP8 quantized versions.

Co-authored-by: gschreiber <gschreiber@infra-image-generator.c.ltx-research-vms.internal>
2025-07-17 07:38:53 +09:00
Dr.Lt.Data
d30459cc34 update DB 2025-07-16 12:31:58 +09:00
Dr.Lt.Data
e92fbb7b1b update DB 2025-07-16 12:24:26 +09:00
aiaiaikkk
42d464b532 Update custom-node-list.json (#2004)
* Update custom-node-list.json

* Update custom-node-list.json

---------

Co-authored-by: Dr.Lt.Data <128333288+ltdrdata@users.noreply.github.com>
2025-07-16 12:22:34 +09:00
Dr.Lt.Data
c2e9e5c63a update DB 2025-07-16 07:28:23 +09:00
Creepybits
bc36726925 Update custom-node-list.json (#2001)
Add Save To OneDrive node for ComfyUI
2025-07-16 07:08:59 +09:00
Dr.Lt.Data
22725b0188 add missing file 2025-07-15 18:52:17 +09:00
Dr.Lt.Data
7abbff8c31 update DB 2025-07-15 12:14:23 +09:00
Android zhang
6236f4bcf4 Add ComfyUI nodes to use Distill-Any-Depth prediction (#1999) 2025-07-15 06:27:32 +09:00
Jukka Seppänen
3c3e80f77f Add WanVideoWrapper (#1998)
* Add IC-Light nodes and models

* Add Florence2 and LuminaWrapper -nodes

https://github.com/kijai/ComfyUI-Florence2
https://github.com/kijai/ComfyUI-LuminaWrapper

* Update custom-node-list.json

* Update custom-node-list.json

* Update custom-node-list.json

* Add segment-anything-2

* Update custom-node-list.json

* Add T5 encoder models

* Update custom-node-list.json

* Add PyramidFlowWrapper

* Add HunyuanVideoWrapper

* Add ComfyUI-WanVideoWrapper
2025-07-15 06:25:56 +09:00
Dr.Lt.Data
4aae2fb289 update DB 2025-07-14 20:29:22 +09:00
Dr.Lt.Data
66ff07752f update DB 2025-07-14 19:04:10 +09:00
LaoMaoBoss
5cf92f2742 Add ComfyUI-WBLESS (#1990)
* Update custom-node-list.json

* Update custom-node-list.json

---------

Co-authored-by: Dr.Lt.Data <128333288+ltdrdata@users.noreply.github.com>
2025-07-14 19:03:33 +09:00
Dr.Lt.Data
6d3fddc474 update DB 2025-07-14 19:02:41 +09:00
Dr.Lt.Data
66d4ad6174 update DB 2025-07-14 18:58:05 +09:00
ChenNing
2a366a1607 Add ComfyUI_Image_Pin (#1992) 2025-07-14 18:56:22 +09:00
Dr.Lt.Data
d87a0995b4 update DB 2025-07-14 18:55:31 +09:00
Dr.Lt.Data
9a73a41e04 update DB 2025-07-14 18:55:11 +09:00
company8
ba041b36bc Update custom-node-list.json (#1993) 2025-07-14 18:54:18 +09:00
Eses
f5f9de69b4 Add EsesImageCompare node to node list (#1994)
Co-authored-by: eses <13034046+quasiblob@users.noreply.github.com>
2025-07-14 18:53:29 +09:00
Yuan-Man
71e56c62e8 Add ComfyUI-ThinkSound (#1989) 2025-07-14 18:52:27 +09:00
Dr.Lt.Data
a0b0c2b963 feat: initial implementation of middleware-based security policy 2025-07-12 11:31:07 +09:00
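ComfyUI's server is aiohttp-based, so a middleware-based security policy presumably hooks requests along these lines; the protected routes and the check itself are hypothetical:

```python
from aiohttp import web

RISKY_PREFIXES = ("/v2/manager/queue/",)  # hypothetical protected routes

def is_allowed(request: web.Request) -> bool:
    # e.g. a local-only policy; the real checks are policy-driven.
    return request.remote in ("127.0.0.1", "::1")

@web.middleware
async def security_policy(request: web.Request, handler):
    if request.path.startswith(RISKY_PREFIXES) and not is_allowed(request):
        return web.json_response({"error": "forbidden by security policy"},
                                 status=403)
    return await handler(request)

app = web.Application(middlewares=[security_policy])
```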
Dr.Lt.Data
0f496619fd update DB 2025-07-12 11:07:46 +09:00
Dr.Lt.Data
5fdd6a441a update DB 2025-07-12 09:07:33 +09:00
Dr.Lt.Data
00f287bb63 fixed: ruff check 2025-07-12 06:15:09 +09:00
Dr.Lt.Data
785268efa6 modified: By default, do not forcefully downgrade numpy to below version 2. I believe enough of a grace period has now been given.
https://github.com/Comfy-Org/ComfyUI-Manager/issues/1981#issuecomment-3058772842
2025-07-12 06:07:10 +09:00
Dr.Lt.Data
2c976d9394 update DB 2025-07-12 05:54:51 +09:00
Dr.Lt.Data
1e32582642 fixed: broken db 2025-07-12 05:29:32 +09:00
IsItDanOrAi
6f8f6d07f5 Update custom-node-list.json (#1980) 2025-07-12 05:28:36 +09:00
Gilad Schreiber
3958111e76 Add LTX-Video ICLoRA models for depth, pose, and canny control (#1988)
- Add LTX-Video ICLoRA Depth 13B v0.9.7 (81.9MB)
- Add LTX-Video ICLoRA Pose 13B v0.9.7 (151MB)
- Add LTX-Video ICLoRA Canny 13B v0.9.7 (81.9MB)

These In-Context LoRA models enable precise control for video-to-video generation
with depth, pose, and canny edge conditioning respectively.

Co-authored-by: gschreiber <gschreiber@infra-image-generator.c.ltx-research-vms.internal>
2025-07-12 05:20:21 +09:00
Dr.Lt.Data
86fcc4af74 update DB 2025-07-10 12:33:19 +09:00
Dr.Lt.Data
2fd26756df update DB 2025-07-10 07:41:25 +09:00
Eses
478f4b74d8 add ComfyUI-EsesImageTransform node (#1987)
Co-authored-by: eses <13034046+quasiblob@users.noreply.github.com>
2025-07-10 07:36:40 +09:00
Dr.Lt.Data
73d0d2a1bb update DB 2025-07-09 22:59:44 +09:00
Dr.Lt.Data
546db08ec4 update DB 2025-07-09 08:56:44 +09:00
Dr.Lt.Data
0dd41a8670 update DB 2025-07-09 07:19:11 +09:00
PD19 Anime
82c0c89f46 Add ComfyUI-PD19Anime-Nodes to custom node list (#1975)
* Update custom-node-list.json

* Update custom-node-list.json

---------

Co-authored-by: Dr.Lt.Data <128333288+ltdrdata@users.noreply.github.com>
2025-07-09 06:38:43 +09:00
Dr.Lt.Data
f4ce0fd5f1 Merge branch 'main' into draft-v4 2025-07-08 12:21:47 +09:00
Dr.Lt.Data
c3798bf4c2 update DB 2025-07-08 12:12:31 +09:00
Dr.Lt.Data
ff80b6ccb0 update DB 2025-07-08 08:58:03 +09:00
Eses
e729217116 add ComfyUI-EsesImageEffectCurves node (#1976)
Co-authored-by: eses <13034046+quasiblob@users.noreply.github.com>
2025-07-08 08:56:58 +09:00
Dr.Lt.Data
94c695daca update DB 2025-07-08 08:56:11 +09:00
FortunaCournot
9f189f0420 Stereoscopic Nodes added (#1978) 2025-07-08 08:55:24 +09:00
Bas Nijholt
ad09e53f60 Remove file argument from logging.error in manager_server.py (#1977)
Otherwise this results in:
```python
TypeError: Logger._log() got an unexpected keyword argument 'file' 
```
2025-07-08 08:48:16 +09:00
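The bug class in miniature: `print()`'s `file=` keyword is not a valid `logging` argument, which is exactly the `TypeError` quoted above.

```python
import logging
import sys

# logging.error("install failed", file=sys.stderr)  # raises the TypeError above

# Correct: let the logging handler own the stream.
logging.basicConfig(stream=sys.stderr)
logging.error("install failed")
```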
Dr.Lt.Data
092a7a5f3f update DB 2025-07-07 23:38:10 +09:00
Dr.Lt.Data
f45649bd25 update DB 2025-07-07 12:59:28 +09:00
Dr.Lt.Data
2595cc5ed7 bump version 2025-07-07 01:05:25 +09:00
Dr.Lt.Data
2f62190c6f update DB 2025-07-07 01:00:58 +09:00
Alexander Piskun
577314984c fix(Windows, numpy): fix for cm-cli usage (#1972) 2025-07-06 22:36:49 +09:00
Dr.Lt.Data
f0346b955b update DB 2025-07-06 16:57:36 +09:00
Dr.Lt.Data
70139ded4a bump version 2025-07-06 13:40:50 +09:00
Dr.Lt.Data
bf379900e1 update DB 2025-07-06 13:40:17 +09:00
Dr.Lt.Data
9bafc90f5e update DB 2025-07-06 08:31:22 +09:00
Alexander Piskun
fce0d9e88e fix(Windows, numpy): do not use 'uv' by default (#1971) 2025-07-06 08:23:31 +09:00
namtb96
2b3b154989 Add OmniGen2 Simple Node (#1970)
* add OmniGen2 custom node

* Change extension name
2025-07-06 08:22:02 +09:00
Dr.Lt.Data
948d2440a1 update DB 2025-07-05 09:40:28 +09:00
Dr.Lt.Data
5adbe1ce7a update DB 2025-07-05 06:42:32 +09:00
vrgamegirl19
8157d34ffa Add VRGameDevGirl’s Video Enhancement Nodes (#1966)
* Update custom-node-list.json

* Update custom-node-list.json

* Update custom-node-list.json

---------

Co-authored-by: Dr.Lt.Data <128333288+ltdrdata@users.noreply.github.com>
2025-07-05 06:26:15 +09:00
Dr.Lt.Data
3ec8cb2204 update DB 2025-07-05 06:06:16 +09:00
Dr.Lt.Data
0daa826543 fixed: invalid default config.ini
https://github.com/Comfy-Org/ComfyUI-Manager/issues/1967
2025-07-04 17:54:26 +09:00
Dr.Lt.Data
a66028da58 update DB 2025-07-04 08:53:35 +09:00
Dr.Lt.Data
807c9e6872 update DB 2025-07-04 07:02:41 +09:00
Dr.Lt.Data
e71f3774ba modified: If uv is available, set use_uv to True by default. 2025-07-03 12:32:50 +09:00
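One plausible way to implement that default (an assumption, not necessarily how the Manager detects uv):

```python
import shutil

use_uv = shutil.which("uv") is not None  # True iff a uv binary is on PATH
print(f"use_uv default: {use_uv}")
```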
Dr.Lt.Data
dd7314bf10 update DB 2025-07-03 12:22:59 +09:00
Dr.Lt.Data
f33bc127dc update DB 2025-07-03 07:31:25 +09:00
Creepybits
db92b87782 Update custom-node-list.json (#1965)
* Update custom-node-list.json

* Update custom-node-list.json

---------

Co-authored-by: Dr.Lt.Data <128333288+ltdrdata@users.noreply.github.com>
2025-07-03 07:08:40 +09:00
Dr.Lt.Data
eba41c8693 update DB 2025-07-02 21:38:06 +09:00
sunxAI
c855308162 Update DB (#1963)
* Update custom-node-list.json

update

* Update custom-node-list.json

---------

Co-authored-by: Dr.Lt.Data <128333288+ltdrdata@users.noreply.github.com>
2025-07-02 21:32:13 +09:00
Dr.Lt.Data
73d971bed8 bump version 2025-07-02 12:33:16 +09:00
copusDev
bcfe0c2874 feat: copus content add rating (#1962)
Co-authored-by: john <john@server31.io>
2025-07-02 12:32:17 +09:00
Dr.Lt.Data
931ff666ae update DB 2025-07-02 12:02:20 +09:00
Dr.Lt.Data
18b6d86cc4 update DB 2025-07-02 08:57:41 +09:00
Dr.Lt.Data
086040f858 bump version 2025-07-01 12:55:13 +09:00
Dr.Lt.Data
adbeb527d6 added: middleware manager for security policy 2025-07-01 12:54:29 +09:00
Dr.Lt.Data
043176168d Merge branch 'main' into draft-v4 2025-07-01 12:35:39 +09:00
Dr.Lt.Data
3c5efa0662 update DB 2025-07-01 12:18:14 +09:00
Dr.Lt.Data
9b739bcbbf update DB 2025-07-01 08:57:40 +09:00
Dr.Lt.Data
db89076e48 update DB 2025-07-01 07:30:59 +09:00
Dr.Lt.Data
19b341ef18 update DB 2025-07-01 01:04:40 +09:00
Dr.Lt.Data
be3713b1a3 update DB 2025-07-01 00:21:53 +09:00
Dr.Lt.Data
99c4415cfb update DB 2025-06-30 21:29:41 +09:00
方长君
7b311f2ccf Add MultiSaveImage custom node (#1956) 2025-06-30 21:13:20 +09:00
Dr.Lt.Data
4aeabfe0a7 update DB 2025-06-30 07:34:20 +09:00
Dr.Lt.Data
431ed02194 update DB 2025-06-30 07:25:27 +09:00
KarmaSwint
07f587ed83 Add KarmaNodes to Comfy Registry (#1958)
Co-authored-by: Karma Swint <karmaaswint@gmail.com>
2025-06-30 07:16:43 +09:00
S4MUEL
0408341d82 Add ComfyUI-S4Tool-Image to custom nodes list (#1957)
Add ComfyUI-S4Tool-Image to custom nodes list
2025-06-30 07:16:33 +09:00
Dr.Lt.Data
5b3c9432f3 update DB 2025-06-29 15:48:08 +09:00
Dr.Lt.Data
4a197e63f9 update DB 2025-06-28 23:31:08 +09:00
Dr.Lt.Data
ad79a2ef45 Merge branch 'main' into draft-v4 2025-06-28 19:59:19 +09:00
Dr.Lt.Data
0876a12fe9 update DB 2025-06-28 19:33:20 +09:00
Dr.Lt.Data
c43c7ecc03 update DB 2025-06-28 18:15:49 +09:00
Dr.Lt.Data
4a6dee3044 update DB 2025-06-28 08:45:28 +09:00
Dr.Lt.Data
019acdd840 update DB 2025-06-28 08:22:30 +09:00
PeterMikhai
1c98512720 Update custom-node-list.json (#1955)
* Update custom-node-list.json

* Update custom-node-list.json

* Update custom-node-list.json

---------

Co-authored-by: Dr.Lt.Data <128333288+ltdrdata@users.noreply.github.com>
2025-06-28 08:21:18 +09:00
Dr.Lt.Data
43041cebed modified: Do not modify generated_models.py directly; use openapi.yaml instead. 2025-06-28 07:54:17 +09:00
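In practice that means editing `openapi.yaml` and regenerating, e.g. `datamodel-codegen --input openapi.yaml --input-file-type openapi --output generated_models.py`; datamodel-codegen is the tool named elsewhere in this history, but the exact flags the project uses are an assumption.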
Dr.Lt.Data
23a09ad546 update DB 2025-06-27 12:23:33 +09:00
Dr.Lt.Data
0836e8fe7c update DB 2025-06-27 07:23:09 +09:00
Dr.Lt.Data
90196af8f8 update DB 2025-06-27 01:48:34 +09:00
Dr.Lt.Data
002e549a86 modified: security policy
- Strengthened the default security policy
- Subdivided the risky levels high and middle into high+, high, middle+, and middle
- Added support for personal_cloud network mode
- Updated README.md

fixed: invalid security message
fixed: legacy - crash when security policy violation occurred

modified: default 'use_uv' is now True
2025-06-27 01:38:38 +09:00
Dr.Lt.Data
1de6f859bf Merge branch 'main' into draft-v4 2025-06-26 23:21:04 +09:00
Dr.Lt.Data
566fe05772 update DB 2025-06-26 22:56:48 +09:00
uinodes
18772c6292 Update custom-node-list.json (#1953) 2025-06-26 22:34:15 +09:00
Yuan-Man
6278bddc9b Add ComfyUI-PosterCraft (#1952) 2025-06-26 22:33:02 +09:00
Dr.Lt.Data
f74bf71735 update DB 2025-06-26 08:58:08 +09:00
Dr.Lt.Data
efe9ed68b2 update DB 2025-06-26 06:56:56 +09:00
Ambrosinus
7c1e75865d Add ComfyUI-ATk-Nodes plugin (#1949)
* Update custom-node-list.json

* Update custom-node-list.json

fixed the insertion point so new entries stay in correct alphabetical order.
2025-06-26 06:37:29 +09:00
Dr.Lt.Data
89530fc4e7 Merge branch 'main' into draft-v4 2025-06-25 12:58:50 +09:00
Dr.Lt.Data
a0aee41f1a fixed: Support configuration with use_uv enabled in environments where only uv exists without pip.
https://github.com/Comfy-Org/ComfyUI-Manager/issues/1828
2025-06-25 12:44:26 +09:00
Dr.Lt.Data
2049dd75f4 update DB 2025-06-25 12:17:07 +09:00
Dr.Lt.Data
0864c35ba9 update DB 2025-06-25 07:27:45 +09:00
Dr.Lt.Data
92c9f66671 update DB 2025-06-25 00:52:31 +09:00
Dr.Lt.Data
223d6dad51 Merge branch 'main' into draft-v4 2025-06-25 00:46:12 +09:00
Dr.Lt.Data
815784e809 fixed: Fix issue where some nodepacks were displayed redundantly in custom nodes manager. 2025-06-25 00:18:18 +09:00
Dr.Lt.Data
2795d00d1e update DB 2025-06-24 23:39:31 +09:00
Dr.Lt.Data
86dd0b4963 update DB 2025-06-24 07:17:40 +09:00
Dr.Lt.Data
77a4f4819f update DB 2025-06-24 00:18:16 +09:00
Dr.Lt.Data
b63d603482 update DB 2025-06-23 23:40:54 +09:00
Dr.Lt.Data
e569b4e613 update DB 2025-06-23 12:35:42 +09:00
Gero Doll
8a70997546 Add ComfyUI Face Detection Node (#1947) 2025-06-23 12:29:58 +09:00
Dr.Lt.Data
80d0a0f882 update DB 2025-06-23 08:47:23 +09:00
Dr.Lt.Data
70b3997874 update DB 2025-06-23 06:53:30 +09:00
Dr.Lt.Data
e8e4311068 update DB 2025-06-22 18:43:10 +09:00
Christian Byrne
cb0fa5829d Merge pull request #1915 from Comfy-Org/feat/implement-batch-tracking-clean
[feat] Implement comprehensive batch tracking and OpenAPI-driven data models
2025-06-21 19:46:23 -07:00
bymyself
a66f86d4af cleanup records older than 16 days 2025-06-21 16:57:54 -07:00
bymyself
35d98dcea8 add batch_id to history task items 2025-06-21 16:45:50 -07:00
bymyself
38fefde06d add embedded python to system state 2025-06-21 16:29:40 -07:00
bymyself
75ecb31f8c add frontend version to system state capture 2025-06-21 16:28:00 -07:00
bymyself
77133375ad [fix] Ensure batch history is written when queue becomes empty 2025-06-21 16:01:25 -07:00
Dr.Lt.Data
c58b93ff51 update DB 2025-06-22 00:31:46 +09:00
Dr.Lt.Data
7d8ebfe91b update DB 2025-06-22 00:08:43 +09:00
Dr.Lt.Data
810381eab2 update DB 2025-06-22 00:03:44 +09:00
Dr.Lt.Data
61dc6cf2de update DB 2025-06-21 23:35:58 +09:00
NumZ
0205ebad2a Add ComfyUI-SeedVR2_VideoUpscaler Nodes (#1945)
* Update custom-node-list.json for Comfyui-Orpheus

add custom nodes from https://github.com/numz/Comfyui-Orpheus

* Update custom-node-list.json

add ComfyUI-SeedVR2_VideoUpscaler Node
2025-06-21 23:34:47 +09:00
Dr.Lt.Data
09a94133ac update DB 2025-06-21 23:34:05 +09:00
Dr.Lt.Data
1eb3c3b219 update DB 2025-06-21 23:25:10 +09:00
Alejandro Olivares Mompó
457845bb51 Add Kaizen Package by aleolidev (#1946) 2025-06-21 23:18:54 +09:00
Yuan-Man
0c11b46585 Add ComfyUI-OmniGen2 (#1944) 2025-06-21 23:17:36 +09:00
Dr.Lt.Data
c35100d9e9 update DB 2025-06-21 00:51:05 +09:00
Dr.Lt.Data
847031cb04 update DB 2025-06-20 12:33:28 +09:00
bymyself
d1ca6288a3 apply formatting 2025-06-19 16:41:16 -07:00
bymyself
624ad4cfe6 remove debug comments 2025-06-19 16:39:14 -07:00
Dr.Lt.Data
f8d87bb452 update DB 2025-06-20 07:38:39 +09:00
Dr.Lt.Data
f60b3505e0 update DB 2025-06-19 20:44:57 +09:00
Dr.Lt.Data
addefbc511 update DB 2025-06-19 12:55:15 +09:00
Dr.Lt.Data
c4314b25a3 update DB 2025-06-19 07:34:54 +09:00
Dr.Lt.Data
921bb86127 update DB 2025-06-18 12:37:38 +09:00
bymyself
d912fb0f8b [fix] Remove unused imports to fix Ruff linting errors 2025-06-17 15:27:21 -07:00
bymyself
e8fc053a32 [fix] Update data models to Pydantic v2 syntax to fix TypeError 2025-06-17 15:12:25 -07:00
bymyself
ce3b2bab39 refactor 2025-06-17 14:58:34 -07:00
bymyself
15e3699535 [cleanup] Remove outdated temp_queue_batch comment 2025-06-17 14:44:58 -07:00
bymyself
a4bf6bddbf [refactor] Use Pydantic models for query parameter validation
- Added query parameter models to OpenAPI spec for GET endpoints
- Regenerated data models to include new query param models
- Replaced manual validation with Pydantic model validation
- Removed obsolete validate_required_params helper function
- Provides better error messages and type safety for API endpoints

Co-Authored-By: Claude <noreply@anthropic.com>
2025-06-17 14:42:25 -07:00
bymyself
f1b3c6b735 [refactor] Move model utility functions to model_utils module 2025-06-17 14:24:31 -07:00
bymyself
e923434d08 [fix] Update client filtering to handle tuple structure in pending_tasks 2025-06-17 13:52:00 -07:00
bymyself
ddc9cd0fd5 [fix] Use tuples in TaskQueue heap for proper comparison support 2025-06-17 13:42:47 -07:00
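Why tuples fix the heap: `heapq` compares entries with `<`, and task objects are not orderable. A standard sketch (the real queue item type is the generated model, not a dict):

```python
import heapq
import itertools

heap = []
counter = itertools.count()  # monotonically increasing tie-breaker

def put(priority: int, task: dict):
    # Ties on priority fall through to the counter, so heapq never
    # has to compare two task objects directly.
    heapq.heappush(heap, (priority, next(counter), task))

put(1, {"op": "install"})
put(1, {"op": "update"})  # without the counter, heapq would try dict < dict here
print(heapq.heappop(heap))  # (1, 0, {'op': 'install'})
```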
bymyself
d081db0c30 [cleanup] Remove dead code do_update_all function
- Removed do_update_all function that was never called and only returned an error
- Removed "update-all" from OperationType enum as it's no longer used
- Regenerated data models to reflect the enum change

The update_all functionality now properly creates individual update tasks through the API endpoint rather than being a single monolithic task.

Co-Authored-By: Claude <noreply@anthropic.com>
2025-06-17 13:27:51 -07:00
bymyself
14298b0859 [fix] Remove unused imports to fix linting errors 2025-06-17 13:08:52 -07:00
bymyself
03ecda3cfe [feat] Implement comprehensive system state capture for batch records 2025-06-17 13:08:35 -07:00
bymyself
350cb767c3 [feat] Regenerate data models with enhanced ComfyUISystemState
- Add SecurityLevel and RiskLevel enums to generated models
- Enhance ComfyUISystemState with additional system information fields:
  - comfyui_root_path: ComfyUI installation directory
  - model_paths: Map of model types to configured paths
  - manager_version: ComfyUI Manager version
  - security_level: Current security configuration
  - network_mode: Network mode (online/offline/private)
  - cli_args: Selected CLI arguments
  - custom_nodes_count: Total number of custom nodes
  - failed_imports: List of failed imports
  - pip_packages: Installed pip packages

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-06-17 13:06:14 -07:00
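A stand-in for the enhanced `ComfyUISystemState` described above; the field names come from the commit message, while the types are assumptions (the real model is generated from `openapi.yaml`):

```python
from pydantic import BaseModel

class ComfyUISystemState(BaseModel):
    comfyui_root_path: str
    model_paths: dict[str, str]       # model type -> configured path
    manager_version: str
    security_level: str               # generated as a SecurityLevel enum
    network_mode: str                 # online / offline / private
    cli_args: dict[str, str]          # selected CLI arguments
    custom_nodes_count: int
    failed_imports: list[str]
    pip_packages: dict[str, str]      # name -> version
```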
bymyself
f450dcbb57 [feat] Add SecurityLevel and RiskLevel enums to OpenAPI schema
- Add SecurityLevel enum with strong/normal/normal-/weak values
- Add RiskLevel enum with block/high/middle values
- These will be used for security policy management

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-06-17 13:05:59 -07:00
bymyself
32e003965a fix files description in api 2025-06-17 10:36:52 -07:00
bymyself
65f0764338 fix duplicated schemas in openapi 2025-06-17 10:36:31 -07:00
bymyself
1bdb026079 explain glob vs legacy in claude memory 2025-06-17 10:36:08 -07:00
Dr.Lt.Data
b3a7fb9c3e update DB 2025-06-17 23:53:40 +09:00
Lord Lethris
c143c81a7e Update custom-node-list.json (#1941) 2025-06-17 23:46:54 +09:00
Dr.Lt.Data
dd389ba0f8 update DB 2025-06-17 22:34:28 +09:00
seeo
46b1649ab8 Update custom-node-list.json (#1940) 2025-06-17 22:24:24 +09:00
Dr.Lt.Data
89710412e4 fixed: indentation error 2025-06-17 07:27:46 +09:00
Dr.Lt.Data
931973b632 update DB 2025-06-17 07:22:13 +09:00
Dr.Lt.Data
60aaa838e3 update DB 2025-06-17 00:52:22 +09:00
Dr.Lt.Data
7e51286313 Merge branch 'main' into draft-v4 2025-06-17 00:33:31 +09:00
Dr.Lt.Data
1246538bbb fixed: Issue where installation status was not properly recognized when the nodepack ID registered in the registry was not normalized.
- ex) `ComfyUI-Crystools`

https://github.com/Comfy-Org/ComfyUI-Manager/issues/1834#issuecomment-2937370214
2025-06-17 00:31:51 +09:00
Dr.Lt.Data
80518abf9d update DB 2025-06-16 22:42:41 +09:00
Leon Wong
fc1ae2a18e added comfyui-leon-nodes to custom-node-list.json (#1937) 2025-06-16 22:17:45 +09:00
Yuan-Man
3fd8d2049c Add ComfyUI-Hunyuan3D-2.1 (#1936) 2025-06-16 22:16:50 +09:00
Dr.Lt.Data
35a6bcf20c update DB 2025-06-16 12:52:05 +09:00
Dr.Lt.Data
0d75fc331e update DB 2025-06-16 07:28:55 +09:00
Dr.Lt.Data
0a23e793e3 update DB 2025-06-15 15:43:09 +09:00
Dr.Lt.Data
2c1c03e063 update DB 2025-06-15 14:27:27 +09:00
Çağlayan Karagözler
64059d2949 Added ComfyUI-YouTubeUploader to custom nodes json (#1933)
* Update custom-node-list.json

Added ComfyUI-YouTubeUploader

* Update custom-node-list.json

* Update custom-node-list.json

Added proper link

* Update custom-node-list.json

---------

Co-authored-by: Dr.Lt.Data <128333288+ltdrdata@users.noreply.github.com>
2025-06-15 14:13:03 +09:00
Dr.Lt.Data
648aa7c4d3 update DB 2025-06-14 18:56:19 +09:00
bymyself
c888ea6435 [fix] Reduce excessive logging output to debug level
- Convert batch tracking messages to debug level (batch start, history saved)
- Convert task processing details to debug level
- Convert cache update messages to debug level
- Replace print() with logging.debug() for task processing
- Keep user-relevant messages at info level (ComfyUI updates, installation success)
- Resolves verbose output appearing without --verbose flag
2025-06-13 20:39:18 -07:00
bymyself
b089db79c5 [fix] Restore proper thread-based TaskQueue worker management
- Fix async/sync mismatch in TaskQueue worker implementation
- Use threading.Thread with asyncio.run() as originally designed
- Remove incorrect async task approach that caused blocking issues
- TaskQueue now properly manages its own thread lifecycle
- Resolves WebSocket message delivery and task processing issues
2025-06-13 20:27:41 -07:00
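The worker layout this fix restores, sketched with illustrative names: the queue owns a thread whose only job is to run an asyncio loop via `asyncio.run()`:

```python
import asyncio
import threading

class TaskQueue:
    def start_worker(self):
        self._thread = threading.Thread(
            target=lambda: asyncio.run(self._worker()), daemon=True)
        self._thread.start()

    async def _worker(self):
        # Process queued tasks; a real worker exits when the queue drains.
        while True:
            await asyncio.sleep(1)
```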
bymyself
7a73f5db73 [fix] Update CI to only check changed files
- Add tj-actions/changed-files to detect modified files in PR
- Only run OpenAPI validation if openapi.yaml was changed
- Only run Python linting on changed Python files (excluding legacy/)
- Remove incorrect "pip install ast" dependency
- Remove non-standard AST parsing and import checks
- Makes CI more efficient and prevents unrelated failures
2025-06-13 19:41:07 -07:00
bymyself
a96e7b114e [chore] Regenerate data models after OpenAPI fixes
- Updated generated_models.py to reflect OpenAPI 3.1 nullable format changes
- Models now use Optional[type] instead of nullable: true
- All affected models regenerated with datamodel-codegen
- Syntax and linting checks pass
2025-06-13 19:41:07 -07:00
bymyself
0148b5a3cc [fix] Fix OpenAPI validation errors for CI compliance
- Convert all nullable: true to OpenAPI 3.1 format using type: [type, 'null']
- Fix invalid array schema definition in ManagerMappings using oneOf
- Add default security: [] configuration to satisfy security-defined rule
- All 41 validation errors resolved, spec now passes with 0 errors
- 141 warnings remain (mostly missing operationId and example validation)
2025-06-13 19:41:07 -07:00
bymyself
2120a0aa79 [chore] Add dist/ to gitignore to exclude build artifacts 2025-06-13 19:40:27 -07:00
bymyself
706b6d8317 [refactor] Remove legacy thread management for TaskQueue
- Add proper async worker management to TaskQueue class
- Remove redundant task_worker_thread and task_worker_lock global variables
- Replace manual threading with async task management
- Update is_processing() logic to use TaskQueue state instead of thread status
- Implement automatic worker cleanup when queue processing completes
- Simplify queue start endpoint to use TaskQueue.start_worker()
2025-06-13 19:40:27 -07:00
bymyself
a59e6e176e [refactor] Remove redundant ExecutionStatus NamedTuple
- Eliminate TaskQueue.ExecutionStatus NamedTuple in favor of generated TaskExecutionStatus Pydantic model
- Remove manual conversion logic between NamedTuple and Pydantic model
- Use single source of truth for task execution status
- Clean up unused imports (Literal, NamedTuple)
- Maintain consistent data model usage throughout TaskQueue
2025-06-13 19:37:57 -07:00
bymyself
1d575fb654 [refactor] Replace non-standard OpenAPI validation with Redoc CLI
- Replace deprecated openapi-spec-validator with @redocly/cli
- Remove fragile custom regex-based route alignment script
- Use industry-standard OpenAPI validation tooling
- Switch from Python to Node.js for validation pipeline
- New validation catches 41 errors and 141 warnings that old validator missed
2025-06-13 19:37:57 -07:00
bymyself
98af8dc849 add claude memory 2025-06-13 19:37:57 -07:00
bymyself
4d89c69109 add installed packs to openapi 2025-06-13 19:37:57 -07:00
bymyself
b73dc6121f refresh cache before reporting status 2025-06-13 19:37:57 -07:00
bymyself
b55e1404b1 return installed pack list on status update 2025-06-13 19:37:57 -07:00
bymyself
0be0a2e6d7 migrate to data models for all routes 2025-06-13 19:37:57 -07:00
bymyself
3afafdb884 remove dist dir 2025-06-13 19:37:57 -07:00
bymyself
884b503728 [feat] Add comprehensive Pydantic validation to all API endpoints
- Updated all POST endpoints to use proper Pydantic model validation:
  - `/v2/manager/queue/task` - validates QueueTaskItem
  - `/v2/manager/queue/install_model` - validates ModelMetadata
  - `/v2/manager/queue/reinstall` - validates InstallPackParams
  - `/v2/customnode/import_fail_info` - validates cnr_id/url fields

- Added proper error handling with ValidationError for detailed error messages
- Updated TaskQueue.put() to handle both dict and Pydantic model inputs
- Added missing imports: InstallPackParams, ModelMetadata, ValidationError

Benefits:
- Early validation catches invalid data at API boundaries
- Better error messages for clients with specific validation failures
- Type safety throughout the request processing pipeline
- Consistent validation behavior across all endpoints

All ruff checks pass and validation is now enabled by default.
2025-06-13 19:37:57 -07:00
bymyself
7f1ebbe081 [cleanup] Remove completed TODO comments and fix ruff issues
- Removed completed TODO comments about code quality checks and client_id handling
- Updated comments to reflect implemented features
- Fixed ruff linting errors:
  - Removed duplicate constant definitions
  - Added missing locale import
  - Fixed unused imports
  - Moved is_local_mode logic to security_utils module
  - Added model_dir_name_map import to model_utils

All ruff checks now pass successfully.
2025-06-13 19:37:57 -07:00
bymyself
c8882dcb7c [feat] Implement comprehensive batch tracking and OpenAPI-driven data models
Enhances ComfyUI Manager with robust batch execution tracking and unified data model architecture:

- Implemented automatic batch history serialization with before/after system state snapshots
- Added comprehensive state management capturing installed nodes, models, and ComfyUI version info
- Enhanced task queue with proper client ID handling and WebSocket notifications
- Migrated all data models to OpenAPI-generated Pydantic models for consistency
- Added documentation for new TaskQueue methods (done_count, total_count, finalize)
- Fixed 64 linting errors with proper imports and code cleanup

Technical improvements:
- All models now auto-generated from openapi.yaml ensuring API/implementation consistency
- Batch tracking captures complete system state at operation start and completion
- Enhanced REST endpoints with comprehensive documentation
- Removed manual model files in favor of single source of truth
- Added helper methods for system state capture and batch lifecycle management
2025-06-13 19:36:55 -07:00
bymyself
601f1bf452 [feat] Add client_id support to task queue system
- Add client_id field to QueueTaskItem and TaskHistoryItem models
- Implement client-specific WebSocket message routing
- Add client filtering to queue status and history endpoints
- Follow ComfyUI patterns for session management
- Create data_models package for better code organization
2025-06-13 19:33:05 -07:00
Dr.Lt.Data
274bb81a08 update DB 2025-06-14 10:06:34 +09:00
Dr.Lt.Data
e2c90b4681 update DB 2025-06-13 22:41:52 +09:00
Dr.Lt.Data
fa0a98ac6e update DB 2025-06-13 12:53:51 +09:00
Dr.Lt.Data
e6e7b42415 update DB 2025-06-13 03:01:18 +09:00
Dr.Lt.Data
0b7ef2e1d4 update DB 2025-06-12 18:21:40 +09:00
Yuan-Man
2fac67a9f9 Add ComfyUI-Vui (#1930) 2025-06-12 18:15:32 +09:00
Dr.Lt.Data
8b9892de2e update DB 2025-06-12 12:31:04 +09:00
Dr.Lt.Data
b3290dc909 update DB 2025-06-12 12:24:22 +09:00
LargeModGames
3e3176eddb Update custom-node-list.json for new node: Add ComfyUI LoRA Auto Downloader (#1929)
* Add ComfyUI LoRA Auto Downloader extension

Adding ComfyUI LoRA Auto Downloader extension to the registry.
- Automatically downloads missing LoRAs from CivitAI
- Detects missing LoRAs in workflows
- Smart directory detection

* Update custom-node-list.json

* Update custom-node-list.json

---------

Co-authored-by: Dr.Lt.Data <128333288+ltdrdata@users.noreply.github.com>
2025-06-12 12:22:50 +09:00
Dr.Lt.Data
b1ef84894a update DB 2025-06-12 12:22:02 +09:00
hassan-sd
c6cffc92c4 Update custom-node-list.json for new node: comfyui-image-prompt-loader (#1928)
https://github.com/hassan-sd/comfyui-image-prompt-loader

Load images with automatic prompt extraction from Civitai URLs, caption files, or EXIF metadata. Features smart dataset detection and dynamic preview updates.
2025-06-12 12:16:27 +09:00
Dr.Lt.Data
efb9fd2712 update DB 2025-06-12 07:21:17 +09:00
Dr.Lt.Data
94b294ff93 update DB 2025-06-12 07:17:09 +09:00
Dr.Lt.Data
99a9e33648 update DB 2025-06-11 22:11:42 +09:00
gitadmini
055d94a919 add node extractstoryboards (#1927)
* add node extractstoryboards

* Update custom-node-list.json

---------

Co-authored-by: Dr.Lt.Data <128333288+ltdrdata@users.noreply.github.com>
2025-06-11 22:00:32 +09:00
Dr.Lt.Data
0978005240 update DB 2025-06-11 12:31:34 +09:00
Yuan-Man
1f796581ec Add ComfyUI-Direct3D-S2 node (#1925) 2025-06-11 07:31:56 +09:00
Dr.Lt.Data
f3a1716dad update DB 2025-06-11 07:23:14 +09:00
Zachary
a1c3a0db1f add my custom node for read metadata from filepath. (#1926) 2025-06-11 06:59:54 +09:00
Dr.Lt.Data
9f80cc8a6b update DB 2025-06-10 12:27:20 +09:00
Dr.Lt.Data
133786846e update DB 2025-06-10 07:28:53 +09:00
keit
bdf297a5c6 Add ComfyUI-keitNodes (#1924) 2025-06-10 07:28:02 +09:00
Dr.Lt.Data
6767254eb0 update DB 2025-06-10 07:27:48 +09:00
11dogzi
691cebd479 CYBERPUNK-STYLE-DIY (#1923) 2025-06-10 07:26:14 +09:00
xiaowc
f3932cbf29 Add Comfyui-Dynamic-Params Node Plugin (#1922)
* Update custom-node-list.json to add Comfyui-Dynamic-Params Node

* Update custom-node-list.json

---------

Co-authored-by: Dr.Lt.Data <128333288+ltdrdata@users.noreply.github.com>
2025-06-10 07:25:52 +09:00
Dr.Lt.Data
3f73a97037 update DB 2025-06-10 07:25:40 +09:00
Erehr
226f1f5be4 Add ComfyUI-EreNodes (#1921)
* Add ComfyUI-EreNodes

* Update custom-node-list.json
2025-06-10 07:23:53 +09:00
Dr.Lt.Data
7e45c07660 update DB 2025-06-10 07:23:40 +09:00
INuBq8
0c815036b9 Update custom-node-list.json (#1920) 2025-06-10 07:22:31 +09:00
Dr.Lt.Data
3870abfd2d Merge branch 'main' into draft-v4 2025-06-09 12:37:10 +09:00
Dr.Lt.Data
ae9fdd0255 update DB 2025-06-09 07:19:09 +09:00
Vlad Bondarovich
b3874ee6fd Update custom-node-list.json (#1917) 2025-06-09 06:06:15 +09:00
Eric W. Burns
62af4891f3 Update custom-node-list.json (#1912)
Submitting my new custom nodes at https://github.com/burnsbert/ComfyUI-EBU-Workflow for inclusion, thanks!
2025-06-09 06:02:16 +09:00
Budi Hartono
2176e0c0ad Add CAS Aspect Ratio Presets Node for ComfyUI to custom-node-list.json (#1910)
Add a custom node to quickly create empty latents in common resolutions and aspect ratios for SD 1.5, SDXL, Flux, Chroma, and HiDream. Choose from curated presets or generate by axis and aspect ratio. Appears in the 'latent' node group.
2025-06-09 06:01:18 +09:00
Dr.Lt.Data
cac105b0d5 fixed: prevent halting when log flushing fails.
https://github.com/Comfy-Org/ComfyUI-Manager/issues/1794
2025-06-08 06:54:39 +09:00
Dr.Lt.Data
cd7c42cc23 update DB 2025-06-08 06:39:30 +09:00
Dr.Lt.Data
a3fb847773 fixed: Don't override preview method if --preview-method is given
https://github.com/Comfy-Org/ComfyUI-Manager/issues/1887
2025-06-08 06:33:42 +09:00
Dr.Lt.Data
5c2f4f9e4b fixed: Issue where cloning Comfy-Org/ComfyUI-Manager would cause mismatches with ltdrdata/ComfyUI-Manager, resulting in it not being recognized properly.
https://github.com/Comfy-Org/ComfyUI-Manager/issues/1900
2025-06-08 06:24:19 +09:00
Dr.Lt.Data
0a511d5b87 update DB 2025-06-08 05:00:25 +09:00
Dr.Lt.Data
efe1aad5db update DB 2025-06-07 16:20:15 +09:00
Dr.Lt.Data
eed4c53df0 update DB 2025-06-07 12:55:45 +09:00
Dr.Lt.Data
9c08a6314b update DB 2025-06-07 12:32:42 +09:00
Pigidiy
a6b2d2c722 Add ComfyUI-LikeSpiderAI-UI (UI Framework for Node Creators) (#1907)
This PR adds a declarative UI framework for ComfyUI nodes: ComfyUI-LikeSpiderAI-UI.

Highlights:
- Minimalistic base class: LikeSpiderUINode
- Built-in input schema with auto-generated UI
- Example node: AudioExport (supports mp3/wav/flac + bitrate/filename)
- Designed for extensibility and clean UX

Author: Pigidiy
2025-06-07 12:31:47 +09:00
Dr.Lt.Data
3c6b5300e5 update DB 2025-06-06 14:37:15 +09:00
xmarre
f084c30b20 Add LoRA-Safe TorchCompile node (#1905)
* Add LoRA-Safe TorchCompile node

* Update custom-node-list.json

---------

Co-authored-by: xmarre <mmquant1@gmail.com>
Co-authored-by: Dr.Lt.Data <128333288+ltdrdata@users.noreply.github.com>
2025-06-06 14:17:19 +09:00
Dr.Lt.Data
206004fc1f update DB 2025-06-06 07:13:30 +09:00
Dr.Lt.Data
d9641cbff8 update DB 2025-06-06 06:14:09 +09:00
Dr.Lt.Data
13b272052a update DB 2025-06-06 05:56:26 +09:00
MDMAchine
c79e0d26d8 Update custom-node-list.json (#1904)
Added:
https://github.com/MDMAchine/ComfyUI_MD_Nodes
2025-06-06 05:55:26 +09:00
Dr.Lt.Data
ec4a4c2cfc update DB 2025-06-06 05:53:38 +09:00
leolee
9a9491bff9 Add Comfy-Topaz-Photo (#1901)
* Update custom-node-list.json

Add Comfy-Topaz-Photo

* Update custom-node-list.json

* Update custom-node-list.json

* Update custom-node-list.json

Add Comfy-Topaz-Photo

---------

Co-authored-by: Dr.Lt.Data <128333288+ltdrdata@users.noreply.github.com>
2025-06-06 05:53:13 +09:00
Dr.Lt.Data
5b5155819f update DB 2025-06-06 05:52:34 +09:00
Pigidiy
1b941c6b29 Fix: correct author & ID for ComfyUI-LikeSpiderAI-SaveMP3 (#1899)
* Fix: correct author & ID for ComfyUI-LikeSpiderAI-SaveMP3

This PR corrects the metadata for the ComfyUI-LikeSpiderAI-SaveMP3 node:

Changes author from aimingfail → Pigidiy

Adds missing version field: v1.0.0

Updates id from img2halftone → likeSpiderMP3

The previous metadata was mistakenly duplicated from another node.

Project repo: https://github.com/Pigidiy/ComfyUI-LikeSpiderAI-SaveMP3

* Update custom-node-list.json

---------

Co-authored-by: Dr.Lt.Data <128333288+ltdrdata@users.noreply.github.com>
2025-06-06 05:51:14 +09:00
e-tier-newbie
9b9665d2e9 Update custom-node-list.json (Add ComfyUI-E-Tier-TextSaver to node list) (#1879)
* Update custom-node-list.json

* Update custom-node-list.json

---------

Co-authored-by: Dr.Lt.Data <128333288+ltdrdata@users.noreply.github.com>
2025-06-06 05:49:02 +09:00
Dr.Lt.Data
4cceb46641 update DB 2025-06-03 18:50:49 +09:00
Dr.Lt.Data
19cf83cce6 update DB 2025-06-03 18:47:13 +09:00
Dr.Lt.Data
bb60d399fc update DB 2025-06-03 13:57:28 +09:00
Dr.Lt.Data
1a9f1dd0ae update DB 2025-06-03 10:47:49 +09:00
violetz
586c465aaa Add custom node: Hugging Face LoRA Uploader (#1897) 2025-06-03 10:42:15 +09:00
Dr.Lt.Data
50ceb974d9 update DB 2025-06-03 10:42:03 +09:00
Pigidiy
27cf40d392 Add: ComfyUI-LikeSpiderAI-SaveMP3 (save AUDIO to .mp3) (#1894)
* Add: ComfyUI-LikeSpiderAI-SaveMP3 (save AUDIO to .mp3)

Adds a node that saves AUDIO output to .mp3 format via ffmpeg.
Repo: https://github.com/Pigidiy/ComfyUI-LikeSpiderAI-SaveMP3

* Update custom-node-list.json

---------

Co-authored-by: Dr.Lt.Data <128333288+ltdrdata@users.noreply.github.com>
2025-06-03 10:39:15 +09:00
Dr.Lt.Data
bbb6005634 fixed: scanner
update DB
2025-06-03 10:36:48 +09:00
vivi-gomez
8dbd996558 Add ComfyUI Fix Node Translate custom node (#1892)
* Update custom-node-list.json

* Update custom-node-list.json

* Update custom-node-list.json

---------

Co-authored-by: Dr.Lt.Data <128333288+ltdrdata@users.noreply.github.com>
2025-06-03 10:35:41 +09:00
Dr.Lt.Data
8605345499 update DB 2025-06-01 06:55:16 +09:00
Dr.Lt.Data
8303e7c043 Merge branch 'main' into draft-v4
# Conflicts:
#	comfyui_manager/common/README.md
#	comfyui_manager/glob/manager_core.py
#	comfyui_manager/js/README.md
#	pyproject.toml
2025-06-01 06:23:11 +09:00
Dr.Lt.Data
3671ddbd4b update DB 2025-06-01 04:30:56 +09:00
Dr.Lt.Data
5bc1ceacb2 update DB 2025-06-01 04:11:34 +09:00
YuSuu
47b9fa3651 Add comfyui-merge plugin info (#1866) 2025-06-01 04:10:42 +09:00
Dr.Lt.Data
6062b87771 update DB 2025-06-01 04:09:55 +09:00
Yuan-Man
213152aa43 Add ComfyUI-ChatterboxTTS node (#1888) 2025-06-01 04:03:24 +09:00
Hiroaki Ogasawara
ea8047344f feat: ComfyUI-FramePackWrapper_PlusOne (#1891) 2025-06-01 04:01:57 +09:00
Dr.Lt.Data
a7bc167d53 update DB 2025-05-30 12:42:14 +09:00
Yuan-Man
18e78ee2c2 Add ComfyUI-HunyuanVideo-Avatar node (#1886) 2025-05-30 12:35:47 +09:00
Dr.Lt.Data
754236e35b update DB 2025-05-30 12:30:21 +09:00
Dr.Lt.Data
2645d62991 fixed: scanner.py - better limitation check 2025-05-30 07:26:03 +09:00
Dr.Lt.Data
e55d9416dc update DB 2025-05-29 07:49:40 +09:00
Yuan-Man
24d35eec54 Add ComfyUI-HunyuanPortrait node (#1882) 2025-05-29 05:29:50 +09:00
seungwoo-ji
ee053f50b4 fix: replace link to registry (#1883) 2025-05-29 05:27:13 +09:00
Dr.Lt.Data
3593c9ed3e update DB 2025-05-28 08:58:19 +09:00
Dr.Lt.Data
93f548696d update DB 2025-05-28 07:15:18 +09:00
Dr.Lt.Data
cecb952add update DB 2025-05-27 07:01:44 +09:00
Ethan Yang
596571bb38 add openvino custom node (#1864) 2025-05-27 06:28:23 +09:00
filtered
85a6fb75b8 Add workaround for delay in link connection (#1873)
New input sockets have no pos, and require a render frame to occur before links can be set to the correct location.
2025-05-27 06:27:45 +09:00
Dominik Bargiel
7dea42433b Update custom-node-list.json with Deadline Render manager plugin (#1874) 2025-05-27 06:27:06 +09:00
Faych Chen
ec5e4af6b7 feat: Add ComfyUI-BAGEL custom node (#1875) 2025-05-27 06:26:24 +09:00
Dr.Lt.Data
0048754fe8 fixed: An issue occurred when attempting to update a node pack installed via git clone if its URL had changed or if the node was not registered in custom-node-list.json.
https://github.com/Comfy-Org/ComfyUI-Manager/issues/1834#issuecomment-2907690538
2025-05-26 02:21:25 +09:00
Dr.Lt.Data
5c0bd0f79c bump version 2025-05-26 01:41:49 +09:00
Alexander Piskun
669cdffe08 fix(manager_util): used non normalized package name (#1867)
* set channel=default, mode=cache for git clone

* fix(manager_util): use normalized_name of package in fix_broken

Signed-off-by: bigcat88 <bigcat88@icloud.com>

---------

Signed-off-by: bigcat88 <bigcat88@icloud.com>
2025-05-26 01:41:07 +09:00
Dr.Lt.Data
3cd553301b update DB 2025-05-26 01:27:39 +09:00
hmwl
db7ef4f253 Add ComfyUI-TaskMonitor node (#1871) 2025-05-26 01:14:00 +09:00
Level Pixel
a09704567c Update custom-node-list.json for Level Pixel Advanced nodes (#1870)
Splitting the Level Pixel node package into two separate packages:
https://github.com/LevelPixel/ComfyUI-LevelPixel
https://github.com/LevelPixel/ComfyUI-LevelPixel-Advanced

Adding information about the new ComfyUI-LevelPixel-Advanced node package to custom-node-list.json.

The new ComfyUI-LevelPixel-Advanced package separates the LLM and VLM nodes, which are complex to install and use, from the rest of the main Level Pixel nodes.

Conflicting nodes will be removed from ComfyUI-LevelPixel later.
2025-05-26 01:12:16 +09:00
Dr.Lt.Data
21fe577a2e update DB 2025-05-25 23:51:21 +09:00
Yuan-Man
9f258f5c9c Add ComfyUI-Bagel node (#1863) 2025-05-25 23:44:55 +09:00
Dr.Lt.Data
9cd088feb0 update DB 2025-05-23 15:10:47 +09:00
Dr.Lt.Data
89e3828138 update DB 2025-05-21 22:23:08 +09:00
Christian Byrne
731c89dc27 [api] Add OpenAPI specification file (#1856) 2025-05-21 21:48:50 +09:00
Yuan-Man
3d920cab4d Add ComfyUI-AniSora node (#1860) 2025-05-21 21:47:04 +09:00
TrophiHunter
470b8c1fb8 Update custom-node-list.json (#1858)
Fixed node references to github
2025-05-21 21:46:34 +09:00
Christian Byrne
dbf988fd5a [docs] Add README for docs directory (#1855)
* [docs] Add README for docs directory

* [docs] Remove redundant sections from docs README
2025-05-21 21:45:17 +09:00
Christian Byrne
0031743ad4 [docs] Add README for node_db directory (#1854) 2025-05-21 21:45:05 +09:00
Christian Byrne
0f2c0ab65d [docs] Add README for js directory (#1853)
* [docs] Add README for js directory

* [docs] Update js/README.md based on PR review feedback

* [docs] Update js/README.md with corrected descriptions
2025-05-21 21:44:48 +09:00
Christian Byrne
53244b794f [docs] Add README for glob directory (#1852) 2025-05-21 21:44:24 +09:00
Dr.Lt.Data
416122d61d update DB 2025-05-21 00:03:10 +09:00
Dr.Lt.Data
d3c625e791 update DB 2025-05-20 23:43:34 +09:00
2frames
ca2c41783c Add AQnodes (#1849)
* add AQnodes

* add AQnodes - fix repo url

---------

Co-authored-by: pk <poczta@aquasite.pl>
2025-05-20 23:42:57 +09:00
Dr.Lt.Data
e2a6446585 update DB 2025-05-20 23:42:44 +09:00
ICAI Icelandic Center for Artificial Intelligence
839790b5ab Update custom-node-list.json (#1848)
added entry for Sample Scheduler Metrics Tester custom node
2025-05-20 23:41:32 +09:00
jqy-yo
58b9946936 Add Comfyui-BBoxLowerMask2 to custom-node-list (#1842) 2025-05-20 23:41:00 +09:00
Dr.Lt.Data
a19ba22eaf update DB 2025-05-20 23:40:40 +09:00
Yuan-Man
117715aa22 Add ComfyUI-MoviiGen node (#1846) 2025-05-20 23:35:37 +09:00
lum3on
891a5a85ee add ModelQuantizer node to custom node list (#1806)
* add-ModelQuantizer to custom node list

* Update custom-node-list.json

---------

Co-authored-by: yogotatara3 <milan.kastenmueller@thjnk.de>
Co-authored-by: Dr.Lt.Data <128333288+ltdrdata@users.noreply.github.com>
2025-05-20 23:32:43 +09:00
Dr.Lt.Data
35464654c1 fixed: cm_global importing error 2025-05-19 06:10:25 +09:00
Dr.Lt.Data
ec9d52d482 Merge branch 'main' into draft-v4 2025-05-19 06:07:31 +09:00
Dr.Lt.Data
166debfabb modified: On Python 3.13, the functionality that forcibly downgrades numpy (to below 2) is disabled.
- Starting from Python 3.13, prebuilt wheels for `numpy` 1.26.4 are no longer provided.

https://github.com/comfyanonymous/ComfyUI/discussions/8187
2025-05-19 05:13:40 +09:00
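A minimal sketch of the guard this commit describes; the behavior comes from the message above, while the function name is invented.

```python
import sys

def numpy_downgrade_enabled() -> bool:
    # Prebuilt wheels for numpy 1.26.4 are no longer provided on
    # Python >= 3.13, so the forced downgrade is skipped there.
    return sys.version_info < (3, 13)
```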
Dr.Lt.Data
7258a09fe5 update DB 2025-05-19 05:03:54 +09:00
Dr.Lt.Data
058a436187 update DB 2025-05-17 17:39:31 +09:00
Yuan-Man
1950802c55 Update ComfyUI-Step1X-3D node (#1840) 2025-05-17 17:11:51 +09:00
Dr.Lt.Data
eb52a03372 update DB 2025-05-16 03:52:03 +09:00
Dr.Lt.Data
f8aa428be3 update DB 2025-05-15 22:09:48 +09:00
Dr.Lt.Data
ec0893f136 update DB 2025-05-15 21:48:56 +09:00
TrophiHunter
92b99ea963 Update custom-node-list.json (#1832)
add my nodes to manager
2025-05-15 21:47:37 +09:00
Dr.Lt.Data
02cd52bb65 update DB 2025-05-15 21:45:19 +09:00
Dontdrunk
af1ec2c87b Update custom-node-list.json (#1818)
* Submit Registration

* Update custom-node-list.json

* Update custom-node-list.json
2025-05-15 21:43:29 +09:00
Dr.Lt.Data
41006c3a33 update DB 2025-05-15 08:09:03 +09:00
Gilad Schreiber
116a6d500d model-list: add new ltxv 13b distilled models. (#1835)
Co-authored-by: gschreiber <gschreiber@infra-image-generator.c.ltx-research-vms.internal>
2025-05-15 08:03:12 +09:00
Dr.Lt.Data
87d0ac807f update DB 2025-05-15 07:24:34 +09:00
Dr.Lt.Data
fc943172eb update DB 2025-05-14 06:07:35 +09:00
Gilad Schreiber
9daa5a2fbd fix: update ltxv upscale models metadata. (#1830)
Co-authored-by: gschreiber <gschreiber@infra-image-generator.c.ltx-research-vms.internal>
2025-05-14 06:07:22 +09:00
Dr.Lt.Data
b7b2746a61 update DB 2025-05-13 03:36:18 +09:00
Dr.Lt.Data
d66a4fbfc8 update DB 2025-05-13 03:23:47 +09:00
Dr.Lt.Data
683a172ad8 modified: Added a feature to prevent numpy from being forcibly downgraded to below 2 via pip_overrides.json.
https://github.com/Comfy-Org/ComfyUI-Manager/issues/1665#issuecomment-2862099191
2025-05-13 03:04:27 +09:00
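For context, `pip_overrides.json` maps a requested package spec to the spec that actually gets installed; the cm-cli diff later in this compare drops the built-in `{'numpy': 'numpy<2'}` default. A hedged sketch of how such a file is consumed:

```python
import json
import os

# Path is illustrative; the real file lives under the ComfyUI-Manager user files directory.
overrides_path = "pip_overrides.json"
pip_overrides = {}
if os.path.exists(overrides_path):
    with open(overrides_path, "r", encoding="UTF-8", errors="ignore") as f:
        # e.g. {"numpy": "numpy<2"} would opt back into the old forced downgrade
        pip_overrides = json.load(f)

def apply_override(spec: str) -> str:
    """Return the spec to actually install, honoring user overrides (helper name invented)."""
    return pip_overrides.get(spec, spec)
```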
Dr.Lt.Data
6e12358f5a update DB 2025-05-13 02:56:36 +09:00
Dr.Lt.Data
8bcf16dc90 fixed: A type error that occurred when creating the pip fixer object after retrieval of the installed-package list failed.
https://github.com/Comfy-Org/ComfyUI-Manager/issues/1804
2025-05-13 02:46:34 +09:00
Dr.Lt.Data
65c0a2a1f5 update DB 2025-05-13 02:10:21 +09:00
Alastor 666 1933
115236eb9c adding caching_to_not_waste custom node (#1786) 2025-05-13 02:06:23 +09:00
Dr.Lt.Data
08de942abe update DB 2025-05-13 02:05:51 +09:00
Seb Hirsch
e9dff83290 Update custom-node-list.json (#1802)
added seb nodes
2025-05-13 02:02:55 +09:00
Yuan-Man
3bc6c7584d Add ComfyUI-Muyan-TTS node (#1805) 2025-05-13 02:00:54 +09:00
Dr.Lt.Data
22a2bf1584 Apply https://github.com/Comfy-Org/ComfyUI-Manager/pull/1811 to prestartup_script as well. 2025-05-13 01:59:42 +09:00
Tomasz Dowgielewicz
79ece5f72c fix: handle pip package names with inline comments during installation (#1811)
Co-authored-by: Tomasz Dowgielewicz <todowgielewicz@artflow.me>
2025-05-13 01:53:44 +09:00
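A minimal sketch of the fix, assuming requirements-style lines such as `numpy==1.26.4  # pinned for compatibility`; edge cases like `#` inside URL fragments are ignored here.

```python
def strip_inline_comment(line: str) -> str:
    """Drop an inline '# ...' comment before handing the spec to pip."""
    return line.split("#", 1)[0].strip()

assert strip_inline_comment("numpy==1.26.4  # pinned for compatibility") == "numpy==1.26.4"
```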
VitoChenLY
5da6fe1373 extract_url_and_commit_id (#1813)
Co-authored-by: chenyijian <chenyijian@infini-ai.com>
2025-05-13 01:52:02 +09:00
moldwebs
48c10d0b95 Show models used in current workflow (#1819)
Simple JavaScript modification that filters models used in the current workflow
2025-05-13 01:48:29 +09:00
Dr.Lt.Data
9bb56b1457 update DB 2025-05-13 01:46:26 +09:00
1hew
83420fd828 Add ComfyUI-1hewNodes to custom node list (#1826)
Co-authored-by: yige1127 <wangyihe370875982@gmail.com>
2025-05-13 01:45:34 +09:00
Dr.Lt.Data
52f4b9506f update DB 2025-05-13 01:44:07 +09:00
fpgaminer
b501e9b20b Add fpgaminer/joycaption_comfyui to custom-node-list.json (#1827) 2025-05-13 01:43:28 +09:00
Dr.Lt.Data
1f7ae5319a update DB 2025-05-13 01:42:35 +09:00
Goshe-nite
68c201239d Update custom-node-list.json (#1825) 2025-05-13 01:42:13 +09:00
Dr.Lt.Data
6e4e43f612 update DB 2025-05-13 01:41:12 +09:00
AIWarper
81c3708f39 Add NormalCrafterWrapper custom node by AIWarper (#1816) 2025-05-13 01:40:43 +09:00
Dr.Lt.Data
f4d2bbde34 update DB 2025-05-13 01:40:25 +09:00
gasparuff
d14b42a42c Update custom-node-list.json (#1810)
added customselector node to custom-node-list.json
2025-05-13 01:34:46 +09:00
Dr.Lt.Data
0e9c32344c fix: syntax error 2025-05-12 18:33:24 +09:00
Liangbin Lian
30c4ea06af fix model DB for Hyper-SD LoRA (4steps) - SDXL (#1815) 2025-05-12 18:20:42 +09:00
Fadel Mochammad
8211264993 Add inline comment to __init__.py (#1823) 2025-05-12 18:15:27 +09:00
ClownsharkBatwing
67cf5b49e1 Update custom-node-list.json (#1821) 2025-05-12 18:15:12 +09:00
Dr.Lt.Data
90ce448380 Merge branch 'main' into draft-v4 2025-05-12 12:21:18 +09:00
Dr.Lt.Data
8e7ba18e05 update DB 2025-05-09 08:04:39 +09:00
Dr.Lt.Data
8359e1063e update DB 2025-05-09 07:23:33 +09:00
VitoChenLY
ca078e54b9 Add 'exit-on-fail' parameter to control failure behavior (#1807)
Co-authored-by: chenyijian <chenyijian@infini-ai.com>
2025-05-09 07:08:41 +09:00
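In practice this makes batch installs fail-fast: per the cm-cli diff further down in this compare, passing `--exit-on-fail` (e.g. `cm-cli install <pack-id> --exit-on-fail`; invocation illustrative) makes `install_node` call `sys.exit(1)` on the first node pack that fails, instead of continuing with the remainder.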
Dr.Lt.Data
f7e930c5a2 update DB 2025-05-08 02:03:46 +09:00
Dr.Lt.Data
479d95e1c8 update DB 2025-05-08 01:43:01 +09:00
Demis Bellot
2b0ff08eef Add ComfyUI Asset Downloader (#1799) 2025-05-08 01:34:02 +09:00
Dr.Lt.Data
67a487db15 update DB 2025-05-08 01:30:54 +09:00
Dr.Lt.Data
2488cb3458 update DB 2025-05-08 00:11:28 +09:00
Dr.Lt.Data
157e6336fa update DB 2025-05-08 00:09:38 +09:00
IrsalKhan
d808a1f406 Add ComfyUI DAM Object Extractor node (#1796)
* Update custom-node-list.json

* Update custom-node-list.json

* Update custom-node-list.json

---------

Co-authored-by: Dr.Lt.Data <128333288+ltdrdata@users.noreply.github.com>
2025-05-08 00:08:58 +09:00
Dr.Lt.Data
2bb4d8cd63 update DB 2025-05-08 00:08:42 +09:00
CY-CHENYUE
a8164e1631 Update custom-node-list.json (#1791)
* Update custom-node-list.json

* Update custom-node-list.json

---------

Co-authored-by: Dr.Lt.Data <128333288+ltdrdata@users.noreply.github.com>
2025-05-08 00:07:50 +09:00
Dr.Lt.Data
a31d286945 update DB 2025-05-08 00:05:49 +09:00
wakattac
12eeef4cf0 Update custom-node-list.json (#1793) 2025-05-08 00:04:36 +09:00
Yuan-Man
ce8e6dc36e Add ComfyUI-AudioX node (#1798) 2025-05-08 00:03:58 +09:00
Sssnap
7a32e544a7 Update custom-node-list.json (#1792) 2025-05-07 23:54:45 +09:00
Dr.Lt.Data
e16e9d7a0e update DB 2025-05-03 23:40:58 +09:00
unicough
821f908dbc Update custom-node-list.json (#1784) 2025-05-03 23:12:05 +09:00
Dr.Lt.Data
e007e6f897 update DB 2025-05-01 02:08:58 +09:00
Yuan-Man
94f496fd65 Add ComfyUI-Step1X-Edit node (#1780) 2025-05-01 01:15:03 +09:00
Dr.Lt.Data
d2ce35d2e6 update DB 2025-05-01 01:13:31 +09:00
somesomebody
2eeebb32dc Add comfyui-lorainfo-sidebar to custom node list (#1778) 2025-05-01 01:12:44 +09:00
Sander
f6d636d82f Add fixed MagicQuill node (#1768) 2025-05-01 01:08:11 +09:00
Dr.Lt.Data
56125839ac Merge branch 'main' into draft-v4 2025-04-29 00:30:02 +09:00
Dr.Lt.Data
0cd397623e update DB 2025-04-29 00:21:59 +09:00
Dr.Lt.Data
5978b6c9ee updated: PIPFixer - support for pytorch 2.7.0 2025-04-28 23:49:42 +09:00
Dr.Lt.Data
9e132811bc update DB 2025-04-28 00:43:52 +09:00
Dr.Lt.Data
cd49799bed fixed: crash related to a deleted CNR node after installation
modified: convert cm-cli.sh to cm-cli command
2025-04-28 00:13:31 +09:00
Dr.Lt.Data
d547a05106 Merge branch 'main' into draft-v4 2025-04-27 23:17:18 +09:00
Dr.Lt.Data
3a3b5c1f92 update DB 2025-04-27 23:16:48 +09:00
hua(Kungfu)
26be01ff82 Update custom-node-list.json (#1774) 2025-04-27 22:52:56 +09:00
Dr.Lt.Data
8f6dd92374 update DB 2025-04-26 18:24:56 +09:00
Dr.Lt.Data
d50b71a887 update DB 2025-04-26 14:51:07 +09:00
Dr.Lt.Data
3bc9cbc767 update DB 2025-04-26 13:16:26 +09:00
Yuan-Man
b6f6b4fd8a Add ComfyUI-LiveCC node (#1770) 2025-04-26 13:12:14 +09:00
Christian Byrne
a66bada8a3 Update workflow-metadata.js 2025-04-23 17:24:07 -07:00
Dr.Lt.Data
db0b57a14c Merge branch 'main' into draft-v4 2025-04-24 08:44:50 +09:00
Dr.Lt.Data
2048ac87a9 modified: glob.core - make the default network mode public.
Network mode does more than determine whether the CNR cache is used: even after the future switch to cacheless operation, it will remain a policy setting for user environments.
2025-04-24 08:41:17 +09:00
Dr.Lt.Data
9adf6de850 fixed: missing channels.list.template
modified: /ltdrdata -> /Comfy-Org
modified: set default network mode to public instead of offline
2025-04-23 08:58:47 +09:00
Dr.Lt.Data
7657c7866f fixed: perform reload when starting task worker 2025-04-22 12:39:09 +09:00
Dr.Lt.Data
d638f75117 modified: prevent displaying ComfyUI-Manager on list 2025-04-22 02:39:56 +09:00
Dr.Lt.Data
a804f7de19 update DB 2025-04-22 02:14:50 +09:00
Dr.Lt.Data
efff6b2c18 Merge branch 'main' into draft-v4 2025-04-22 01:20:57 +09:00
Dr.Lt.Data
72a61a9966 modified: pipfixer/blacklisting - add torchaudio 2025-04-22 01:17:22 +09:00
Dr.Lt.Data
b08bb658ea update DB 2025-04-22 01:13:57 +09:00
Dr.Lt.Data
7b28bf608b modified: unpin the ultralytics version 2025-04-22 00:43:20 +09:00
Dr.Lt.Data
0c46434164 fixed: avoid bare `except:` clauses
fixed: prestartup_script - remove useless exception handling in the comfy_path fallback resolution
2025-04-21 12:42:50 +09:00
Dr.Lt.Data
0bb8947c02 Merge branch 'main' into draft-v4 2025-04-21 12:12:27 +09:00
Dr.Lt.Data
b57747fdf1 update DB 2025-04-20 18:49:43 +09:00
Dr.Lt.Data
0735271b10 update DB 2025-04-20 17:13:47 +09:00
Dr.Lt.Data
770cd0f9f5 update DB 2025-04-19 10:31:07 +09:00
Dr.Lt.Data
32b6266dd9 update DB 2025-04-19 09:39:43 +09:00
NumZ
2a8412a2bf Update custom-node-list.json for Comfyui-Orpheus (#1754)
add custom nodes from https://github.com/numz/Comfyui-Orpheus
2025-04-19 09:35:28 +09:00
Dr.Lt.Data
0c4d289002 update DB 2025-04-19 09:34:52 +09:00
Nisaruj Rattanaaram
cee01fec25 Add comfyui-daam to custom node list (#1753)
* Update custom-node-list.json

* Update description
2025-04-19 09:34:17 +09:00
Dr.Lt.Data
f00686f3f2 update DB 2025-04-19 09:34:07 +09:00
FunnyFinger
bd33f7726e Add Dynamic Sliders Stack to custom node list (#1750)
* Update custom-node-list.json

* Update custom-node-list.json

Added my custom node to the list
2025-04-19 09:33:08 +09:00
Dr.Lt.Data
22ab526b0c update DB 2025-04-19 09:32:30 +09:00
Christian Byrne
af269d198d trim version string embedded in workflow (#1758) 2025-04-19 09:30:41 +09:00
Yuan-Man
995ef6356e Add ComfyUI-Kimi-VL node (#1756) 2025-04-19 09:30:02 +09:00
杨必赞
aa3bf77c28 Update custom-node-list.json (#1752) 2025-04-19 09:29:15 +09:00
Danteday
15667c1259 Update custom-node-list.json (#1751) 2025-04-19 09:28:53 +09:00
zzw5516
c7b6b565da feat: Add ComfyUI-zw-tools custom node to list (#1749) 2025-04-19 09:27:55 +09:00
Dr.Lt.Data
3214ab52c6 update DB 2025-04-15 23:40:14 +09:00
Legende
e3062ff613 Add custom node xLegende/ComfyUI-Prompt-Formatter (#1741)
Custom node for formatting prompts
2025-04-15 23:18:13 +09:00
Yoland Yan
036b63efe7 Change order of manager to be default install latest (#1747) 2025-04-15 18:46:24 +09:00
Christian Byrne
09e8e8798c Add is_legacy_manager_ui route from the legacy package as well (#1748)
* add `is_legacy_manager_ui` route to `legacy` package  as well

* add static
2025-04-15 18:36:38 +09:00
Christian Byrne
abfd85602e Only load legacy FE extension if --enable-manager-legacy-ui is set (#1746)
* only load JS extensions when legacy arg is set

* add `is_legacy_manager_ui` endpoint
2025-04-15 08:03:04 +09:00
Dr.Lt.Data
1816bb748e use --enable-manager-legacy-ui cli arg instead of env variable 2025-04-15 01:36:35 +09:00
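Taken together with the two commits above, the legacy front-end is now opt-in at launch via the `--enable-manager-legacy-ui` CLI argument (replacing the earlier environment variable), and clients can detect which UI is active through the `is_legacy_manager_ui` endpoint exposed by both the new and legacy packages.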
Dr.Lt.Data
8d3e1d60d0 update DB 2025-04-15 01:24:42 +09:00
Dr.Lt.Data
59876452f4 update DB 2025-04-15 00:37:10 +09:00
BIGMON
04972ad87f feat: Register ComfyUI-ResolutionPresets to custom nodes list (#1738)
* Add: register ComfyUI-ResolutionPresets

* Update custom-node-list.json

---------

Co-authored-by: Dr.Lt.Data <128333288+ltdrdata@users.noreply.github.com>
2025-04-15 00:29:27 +09:00
Dr.Lt.Data
c7e69f4e26 update DB 2025-04-15 00:28:53 +09:00
leolee
7a59b6d0d9 Update custom-node-list.json (#1745)
* Update custom-node-list.json

Add Comfy-Topaz-Photo

* Update custom-node-list.json

* Update custom-node-list.json

---------

Co-authored-by: Dr.Lt.Data <128333288+ltdrdata@users.noreply.github.com>
2025-04-15 00:28:03 +09:00
Yuan-Man
d227ad97a4 Add ComfyUI-HiDream-I1 node (#1744) 2025-04-15 00:25:45 +09:00
Dr.Lt.Data
b93a474dae update DB 2025-04-15 00:23:42 +09:00
Silver
a5fe075bf3 Add custom node silveroxides/ComfyUI-ModelUtils (#1652)
Custom nodes project for model management.
2025-04-15 00:22:38 +09:00
Dr.Lt.Data
05ceab68f8 restructuring
the existing cache-based implementation will be retained as a fallback under legacy/..., while glob/... will be updated to a cacheless implementation.
2025-04-13 09:26:02 +09:00
Christian Byrne
46a37907e6 add development guide (#1739) 2025-04-13 08:40:28 +09:00
Dr.Lt.Data
7fc8ba587e fixed: don't disable legacy ComfyUI-Manager unless --disable-comfyui is set 2025-04-12 21:24:29 +09:00
Dr.Lt.Data
7a35bd9d9a Merge branch 'main' into draft-v4 2025-04-12 21:22:34 +09:00
Dr.Lt.Data
17e5c3d2f5 update DB 2025-04-12 21:20:45 +09:00
Dr.Lt.Data
27bfc539f7 fixed: Removed the possibility of locking by opening the git repo.
https://github.com/Comfy-Org/ComfyUI-Manager/issues/1717
2025-04-12 21:10:14 +09:00
Dr.Lt.Data
a76ef49d2d Merge branch 'feat/cacheless-v2' into draft-v4 2025-04-12 20:11:33 +09:00
Dr.Lt.Data
bb0fcf6ea6 added: should_be_disabled function 2025-04-12 19:35:41 +09:00
Dr.Lt.Data
539e0a1534 Merge branch 'main' into draft-v4 2025-04-12 19:06:24 +09:00
Dr.Lt.Data
aaae6ce304 Merge branch 'feat/queue_batch' into draft-v4 2025-04-12 19:05:48 +09:00
Dr.Lt.Data
821fded09d update DB 2025-04-12 17:26:41 +09:00
Dr.Lt.Data
ec4a2aa873 update DB 2025-04-12 15:22:09 +09:00
Dr.Lt.Data
d6b2d54f3f update DB 2025-04-12 15:20:29 +09:00
Jerry Chukwudi
97ae67bb9a Add LoadImageFromHttpURL node by jerrywap (#1732)
Add LoadImageFromHttpURL node by jerrywap
2025-04-12 15:18:35 +09:00
Sander
765514a33f Added ComfyUI-api-tools (#1733)
Custom node that adds some extra API endpoints, including Prometheus monitoring
2025-04-12 15:17:50 +09:00
Yuan-Man
e2cdcc96c4 Add ComfyUI-UNO node (#1735) 2025-04-12 15:16:57 +09:00
Dr.Lt.Data
dbd25b0f0a Merge branch 'main' into feat/cacheless 2025-04-10 12:20:29 +09:00
Dr.Lt.Data
a128baf894 fixed: ruff check 2025-03-25 23:40:15 +09:00
Dr.Lt.Data
57b847eebf fixed: failed[..].ui_id -> failed 2025-03-24 23:12:45 +09:00
Dr.Lt.Data
149257e4f1 Merge branch 'main' into feat/queue_batch 2025-03-24 22:53:13 +09:00
Dr.Lt.Data
212b8e7ed2 feat: support task batch
POST /v2/manager/queue/batch
GET /v2/manager/queue/history_list
GET /v2/manager/queue/history?id={id}
GET /v2/manager/queue/abort_current
2025-03-24 22:49:38 +09:00
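A hedged usage sketch against the routes listed above; the routes come from the commit message, while the port, payload shape, and response fields are assumptions.

```python
import requests

BASE = "http://127.0.0.1:8188"  # assumed local ComfyUI address

# Enqueue a batch of manager tasks (payload shape assumed).
resp = requests.post(f"{BASE}/v2/manager/queue/batch", json={"tasks": []})
batch_id = resp.json().get("id")  # response field assumed

# Inspect history and abort the currently running task.
requests.get(f"{BASE}/v2/manager/queue/history_list")
requests.get(f"{BASE}/v2/manager/queue/history", params={"id": batch_id})
requests.get(f"{BASE}/v2/manager/queue/abort_current")
```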
Dr.Lt.Data
01ac9c895a Modify the structure to be installable via pip. 2025-03-19 22:15:53 +09:00
Dr.Lt.Data
ebcb14e6aa support installation of system added nodepack
modified: install_by_id - Change the install path of the CNR node added by the system to be based on the repo URL instead of the CNR ID.
2025-03-19 07:41:39 +09:00
107 changed files with 82572 additions and 14209 deletions

1
.env.example Normal file
View File

@@ -0,0 +1 @@
PYPI_TOKEN=your-pypi-token

70
.github/workflows/ci.yml vendored Normal file
View File

@@ -0,0 +1,70 @@
name: CI
on:
push:
branches: [ main, feat/*, fix/* ]
pull_request:
branches: [ main ]
jobs:
validate-openapi:
name: Validate OpenAPI Specification
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Check if OpenAPI changed
id: openapi-changed
uses: tj-actions/changed-files@v44
with:
files: openapi.yaml
- name: Setup Node.js
if: steps.openapi-changed.outputs.any_changed == 'true'
uses: actions/setup-node@v4
with:
node-version: '18'
- name: Install Redoc CLI
if: steps.openapi-changed.outputs.any_changed == 'true'
run: |
npm install -g @redocly/cli
- name: Validate OpenAPI specification
if: steps.openapi-changed.outputs.any_changed == 'true'
run: |
redocly lint openapi.yaml
code-quality:
name: Code Quality Checks
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
with:
fetch-depth: 0 # Fetch all history for proper diff
- name: Get changed Python files
id: changed-py-files
uses: tj-actions/changed-files@v44
with:
files: |
**/*.py
files_ignore: |
comfyui_manager/legacy/**
- name: Setup Python
if: steps.changed-py-files.outputs.any_changed == 'true'
uses: actions/setup-python@v5
with:
python-version: '3.9'
- name: Install dependencies
if: steps.changed-py-files.outputs.any_changed == 'true'
run: |
pip install ruff
- name: Run ruff linting on changed files
if: steps.changed-py-files.outputs.any_changed == 'true'
run: |
echo "Changed files: ${{ steps.changed-py-files.outputs.all_changed_files }}"
echo "${{ steps.changed-py-files.outputs.all_changed_files }}" | xargs -r ruff check

View File

@@ -4,7 +4,7 @@ on:
workflow_dispatch:
push:
branches:
- main
- manager-v4
paths:
- "pyproject.toml"
@@ -21,7 +21,7 @@ jobs:
- name: Set up Python
uses: actions/setup-python@v4
with:
python-version: '3.9'
python-version: '3.x'
- name: Install build dependencies
run: |
@@ -31,28 +31,28 @@ jobs:
- name: Get current version
id: current_version
run: |
CURRENT_VERSION=$(grep -oP 'version = "\K[^"]+' pyproject.toml)
CURRENT_VERSION=$(grep -oP '^version = "\K[^"]+' pyproject.toml)
echo "version=$CURRENT_VERSION" >> $GITHUB_OUTPUT
echo "Current version: $CURRENT_VERSION"
- name: Build package
run: python -m build
- name: Create GitHub Release
id: create_release
uses: softprops/action-gh-release@v2
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
files: dist/*
tag_name: v${{ steps.current_version.outputs.version }}
draft: false
prerelease: false
generate_release_notes: true
# - name: Create GitHub Release
# id: create_release
# uses: softprops/action-gh-release@v2
# env:
# GITHUB_TOKEN: ${{ github.token }}
# with:
# files: dist/*
# tag_name: v${{ steps.current_version.outputs.version }}
# draft: false
# prerelease: false
# generate_release_notes: true
- name: Publish to PyPI
uses: pypa/gh-action-pypi-publish@release/v1
uses: pypa/gh-action-pypi-publish@76f52bc884231f62b9a034ebfe128415bbaabdfc
with:
password: ${{ secrets.PYPI_TOKEN }}
skip-existing: true
verbose: true
verbose: true

View File

@@ -1,25 +0,0 @@
name: Publish to Comfy registry
on:
workflow_dispatch:
push:
branches:
- main-blocked
paths:
- "pyproject.toml"
permissions:
issues: write
jobs:
publish-node:
name: Publish Custom Node to registry
runs-on: ubuntu-latest
if: ${{ github.repository_owner == 'ltdrdata' || github.repository_owner == 'Comfy-Org' }}
steps:
- name: Check out code
uses: actions/checkout@v4
- name: Publish Custom Node
uses: Comfy-Org/publish-node-action@v1
with:
## Add your own personal access token to your Github Repository secrets and reference it here.
personal_access_token: ${{ secrets.REGISTRY_ACCESS_TOKEN }}

4
.gitignore vendored
View File

@@ -19,4 +19,6 @@ pip_overrides.json
check2.sh
/venv/
build
*.egg-info
dist
*.egg-info
.env

47
CONTRIBUTING.md Normal file
View File

@@ -0,0 +1,47 @@
## Testing Changes
1. Activate the ComfyUI environment.
2. Build package locally after making changes.
```bash
# from inside the ComfyUI-Manager directory, with the ComfyUI environment activated
python -m build
```
3. Install the package locally in the ComfyUI environment.
```bash
# Uninstall existing package
pip uninstall comfyui-manager
# Install the local package
pip install dist/comfyui-manager-*.whl
```
4. Start ComfyUI.
```bash
# after navigating to the ComfyUI directory
python main.py
```
## Manually Publish Test Version to PyPi
1. Set the `PYPI_TOKEN` environment variable in the env file.
2. If manually publishing, you likely want to use a release candidate version, so set the version in [pyproject.toml](pyproject.toml) to something like `0.0.1rc1`.
3. Build the package.
```bash
python -m build
```
4. Upload the package to PyPi.
```bash
python -m twine upload dist/* --username __token__ --password $PYPI_TOKEN
```
5. View at https://pypi.org/project/comfyui-manager/

View File

@@ -4,4 +4,12 @@ include comfyui_manager/glob/*
include LICENSE.txt
include README.md
include requirements.txt
include pyproject.toml
include pyproject.toml
include custom-node-list.json
include extension-node-list.json
include extras.json
include github-stats.json
include model-list.json
include alter-list.json
include comfyui_manager/channels.list.template
include comfyui_manager/pip-policy.json

View File

@@ -9,7 +9,7 @@
* V3.16: Support for `uv` has been added. Set `use_uv` in `config.ini`.
* V3.10: `double-click feature` is removed
* This feature has been moved to https://github.com/ltdrdata/comfyui-connection-helper
* V3.3.2: Overhauled. Officially supports [https://comfyregistry.org/](https://comfyregistry.org/).
* V3.3.2: Overhauled. Officially supports [https://registry.comfy.org/](https://registry.comfy.org/).
* You can see whole nodes info on [ComfyUI Nodes Info](https://ltdrdata.github.io/) page.
## Installation
@@ -215,13 +215,14 @@ The following settings are applied based on the section marked as `is_default`.
downgrade_blacklist = <Set a list of packages to prevent downgrades. List them separated by commas.>
security_level = <Set the security level => strong|normal|normal-|weak>
always_lazy_install = <Whether to perform dependency installation on restart even in environments other than Windows.>
network_mode = <Set the network mode => public|private|offline>
network_mode = <Set the network mode => public|private|offline|personal_cloud>
```
* network_mode:
- public: An environment that uses a typical public network.
- private: An environment that uses a closed network, where a private node DB is configured via `channel_url`. (Uses cache if available)
- offline: An environment that does not use any external connections when using an offline network. (Uses cache if available)
- personal_cloud: Applies relaxed security features in cloud environments such as Google Colab or Runpod, where strong security is not required.
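A hedged sketch of consuming this setting, mirroring the `configparser` pattern cm-cli uses elsewhere in this compare; the path and fallback default below are assumptions.

```python
import configparser

# Read the ComfyUI-Manager config described above; path is illustrative.
config = configparser.ConfigParser(strict=False)
config.read("config.ini")
network_mode = config["default"].get("network_mode", "public")
is_personal_cloud = network_mode.lower() == "personal_cloud"
```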
## Additional Feature
@@ -312,31 +313,33 @@ When you run the `scan.sh` script:
## Security policy
* Edit `config.ini` file: add `security_level = <LEVEL>`
* `strong`
* doesn't allow `high` and `middle` level risky feature
* `normal`
* doesn't allow `high` level risky feature
* `middle` level risky feature is available
* `normal-`
* doesn't allow `high` level risky feature if `--listen` is specified and not starts with `127.`
* `middle` level risky feature is available
* `weak`
* all feature is available
* `high` level risky features
* `Install via git url`, `pip install`
* Installation of custom nodes registered not in the `default channel`.
* Fix custom nodes
* `middle` level risky features
* Uninstall/Update
* Installation of custom nodes registered in the `default channel`.
* Restore/Remove Snapshot
* Restart
* `low` level risky features
* Update ComfyUI
The security settings are applied based on whether the ComfyUI server's listener is non-local and whether the network mode is set to `personal_cloud`.
* **non-local**: When the server is launched with `--listen` and is bound to a network range other than the local `127.` range, allowing remote IP access.
* **personal\_cloud**: When the `network_mode` is set to `personal_cloud`.
### Risky Level Table
| Risky Level | features |
|-------------|---------------------------------------------------------------------------------------------------------------------------------------|
| high+ | * `Install via git url`, `pip install`<BR>* Installation of nodepack registered not in the `default channel`. |
| high | * Fix nodepack |
| middle+ | * Uninstall/Update<BR>* Installation of nodepack registered in the `default channel`.<BR>* Restore/Remove Snapshot<BR>* Install model |
| middle | * Restart |
| low | * Update ComfyUI |
### Security Level Table
| Security Level | local | non-local (personal_cloud) | non-local (not personal_cloud) |
|----------------|--------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------|--------------------------------|
| strong | * Only `weak` level risky features are allowed | * Only `weak` level risky features are allowed | * Only `weak` level risky features are allowed |
| normal | * `high+` and `high` level risky features are not allowed<BR>* `middle+` and `middle` level risky features are available | * `high+` and `high` level risky features are not allowed<BR>* `middle+` and `middle` level risky features are available | * `high+`, `high` and `middle+` level risky features are not allowed<BR>* `middle` level risky features are available
| normal- | * All features are available | * `high+` and `high` level risky features are not allowed<BR>* `middle+` and `middle` level risky features are available | * `high+`, `high` and `middle+` level risky features are not allowed<BR>* `middle` level risky features are available
| weak | * All features are available | * All features are available | * `high+` and `middle+` level risky features are not allowed<BR>* `high`, `middle` and `low` level risky features are available
# Disclaimer

View File

@@ -1,6 +0,0 @@
default::https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main
recent::https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/node_db/new
legacy::https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/node_db/legacy
forked::https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/node_db/forked
dev::https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/node_db/dev
tutorial::https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/node_db/tutorial

View File

@@ -1,2 +0,0 @@
#!/bin/bash
python ./comfyui_manager/cm-cli.py $*

49
comfyui_manager/README.md Normal file
View File

@@ -0,0 +1,49 @@
# ComfyUI-Manager: Core Backend
This directory contains the Python backend modules that power ComfyUI-Manager, handling the core functionality of node management, downloading, security, and server operations.
## Directory Structure
- **glob/** - code for new cacheless ComfyUI-Manager
- **legacy/** - code for legacy ComfyUI-Manager
## Core Modules
- **manager_core.py**: The central implementation of management functions, handling configuration, installation, updates, and node management.
- **manager_server.py**: Implements server functionality and API endpoints for the web interface to interact with the backend.
## Specialized Modules
- **share_3rdparty.py**: Manages integration with third-party sharing platforms.
## Architecture
The backend follows a modular design pattern with clear separation of concerns:
1. **Core Layer**: Manager modules provide the primary API and business logic
2. **Utility Layer**: Helper modules provide specialized functionality
3. **Integration Layer**: Modules that connect to external systems
## Security Model
The system implements a comprehensive security framework with multiple levels:
- **strong**: Highest security - blocks all but the lowest-risk features
- **normal**: Standard security for most users
- **normal-**: More permissive for advanced users
- **weak**: Lowest security for development environments
## Implementation Details
- The backend is designed to work seamlessly with ComfyUI
- Asynchronous task queuing is implemented for background operations
- The system supports multiple installation modes
- Error handling and risk assessment are integrated throughout the codebase
## API Integration
The backend exposes a REST API via `manager_server.py` that enables:
- Custom node management (install, update, disable, remove)
- Model downloading and organization
- System configuration
- Snapshot management
- Workflow component handling

View File

@@ -1,8 +1,10 @@
import os
import logging
from aiohttp import web
from .common.manager_security import HANDLER_POLICY
from .common import manager_security
from comfy.cli_args import args
ENABLE_LEGACY_COMFYUI_MANAGER_FRONT_DEFAULT = True # Enable legacy ComfyUI Manager frontend while new UI is in beta phase
def prestartup():
from . import prestartup_script # noqa: F401
@@ -11,14 +13,92 @@ def prestartup():
def start():
logging.info('[START] ComfyUI-Manager')
from .glob import manager_server # noqa: F401
from .glob import share_3rdparty # noqa: F401
from .glob import cm_global # noqa: F401
from .common import cm_global # noqa: F401
should_show_legacy_manager_front = os.environ.get('ENABLE_LEGACY_COMFYUI_MANAGER_FRONT', 'false') == 'true' or ENABLE_LEGACY_COMFYUI_MANAGER_FRONT_DEFAULT
if not args.disable_manager and should_show_legacy_manager_front:
try:
import nodes
nodes.EXTENSION_WEB_DIRS['comfyui-manager-legacy'] = os.path.join(os.path.dirname(__file__), 'js')
except Exception as e:
print("Error enabling legacy ComfyUI Manager frontend:", e)
if args.enable_manager:
if args.enable_manager_legacy_ui:
try:
from .legacy import manager_server # noqa: F401
from .legacy import share_3rdparty # noqa: F401
from .legacy import manager_core as core
import nodes
logging.info("[ComfyUI-Manager] Legacy UI is enabled.")
nodes.EXTENSION_WEB_DIRS['comfyui-manager-legacy'] = os.path.join(os.path.dirname(__file__), 'js')
except Exception as e:
print("Error enabling legacy ComfyUI Manager frontend:", e)
core = None
else:
from .glob import manager_server # noqa: F401
from .glob import share_3rdparty # noqa: F401
from .glob import manager_core as core
if core is not None:
manager_security.is_personal_cloud_mode = core.get_config()['network_mode'].lower() == 'personal_cloud'
def should_be_disabled(fullpath:str) -> bool:
"""
1. Disables the legacy ComfyUI-Manager.
2. The blocklist can be expanded later based on policies.
"""
if args.enable_manager:
# In cases where installation is done via a zip archive, the directory name may not be comfyui-manager, and it may not contain a git repository.
# It is assumed that any installed legacy ComfyUI-Manager will have at least 'comfyui-manager' in its directory name.
dir_name = os.path.basename(fullpath).lower()
if 'comfyui-manager' in dir_name:
return True
return False
def get_client_ip(request):
peername = request.transport.get_extra_info("peername")
if peername is not None:
host, port = peername
return host
return "unknown"
def create_middleware():
connected_clients = set()
is_local_mode = manager_security.is_loopback(args.listen)
@web.middleware
async def manager_middleware(request: web.Request, handler):
nonlocal connected_clients
# security policy for remote environments
prev_client_count = len(connected_clients)
client_ip = get_client_ip(request)
connected_clients.add(client_ip)
next_client_count = len(connected_clients)
if prev_client_count == 1 and next_client_count > 1:
manager_security.multiple_remote_alert()
policy = manager_security.get_handler_policy(handler)
is_banned = False
# policy check
if len(connected_clients) > 1:
if is_local_mode:
if HANDLER_POLICY.MULTIPLE_REMOTE_BAN_NON_LOCAL in policy:
is_banned = True
if HANDLER_POLICY.MULTIPLE_REMOTE_BAN_NOT_PERSONAL_CLOUD in policy:
is_banned = not manager_security.is_personal_cloud_mode
if HANDLER_POLICY.BANNED in policy:
is_banned = True
if is_banned:
logging.warning(f"[Manager] Banning request from {client_ip}: {request.path}")
response = web.Response(text="[Manager] This request is banned.", status=403)
else:
response: web.Response = await handler(request)
return response
return manager_middleware

View File

@@ -0,0 +1,6 @@
default::https://raw.githubusercontent.com/Comfy-Org/ComfyUI-Manager/main
recent::https://raw.githubusercontent.com/Comfy-Org/ComfyUI-Manager/main/node_db/new
legacy::https://raw.githubusercontent.com/Comfy-Org/ComfyUI-Manager/main/node_db/legacy
forked::https://raw.githubusercontent.com/Comfy-Org/ComfyUI-Manager/main/node_db/forked
dev::https://raw.githubusercontent.com/Comfy-Org/ComfyUI-Manager/main/node_db/dev
tutorial::https://raw.githubusercontent.com/Comfy-Org/ComfyUI-Manager/main/node_db/tutorial

View File

View File

@@ -15,7 +15,7 @@ import git
import importlib
import manager_util
from ..common import manager_util
# read env vars
# COMFYUI_FOLDERS_BASE_PATH is not required in cm-cli.py
@@ -35,16 +35,18 @@ if not os.path.exists(os.path.join(comfy_path, 'folder_paths.py')):
import utils.extra_config
from .glob import cm_global
from .glob import manager_core as core
from .glob.manager_core import unified_manager
from .glob import cnr_utils
from ..common import cm_global
from ..legacy import manager_core as core
from ..common import context
from ..legacy.manager_core import unified_manager
from ..common import cnr_utils
comfyui_manager_path = os.path.abspath(os.path.dirname(__file__))
cm_global.pip_blacklist = {'torch', 'torchsde', 'torchvision'}
cm_global.pip_downgrade_blacklist = ['torch', 'torchsde', 'torchvision', 'transformers', 'safetensors', 'kornia']
cm_global.pip_overrides = {'numpy': 'numpy<2'}
cm_global.pip_blacklist = {'torch', 'torchaudio', 'torchsde', 'torchvision'}
cm_global.pip_downgrade_blacklist = ['torch', 'torchaudio', 'torchsde', 'torchvision', 'transformers', 'safetensors', 'kornia']
cm_global.pip_overrides = {}
if os.path.exists(os.path.join(manager_util.comfyui_manager_path, "pip_overrides.json")):
with open(os.path.join(manager_util.comfyui_manager_path, "pip_overrides.json"), 'r', encoding="UTF-8", errors="ignore") as json_file:
@@ -64,7 +66,7 @@ def check_comfyui_hash():
repo = git.Repo(comfy_path)
core.comfy_ui_revision = len(list(repo.iter_commits('HEAD')))
core.comfy_ui_commit_datetime = repo.head.commit.committed_datetime
except:
except Exception:
print('[bold yellow]INFO: Frozen ComfyUI mode.[/bold yellow]')
core.comfy_ui_revision = 0
core.comfy_ui_commit_datetime = 0
@@ -80,7 +82,7 @@ def read_downgrade_blacklist():
try:
import configparser
config = configparser.ConfigParser(strict=False)
config.read(core.manager_config.path)
config.read(context.manager_config_path)
default_conf = config['default']
if 'downgrade_blacklist' in default_conf:
@@ -88,7 +90,7 @@ def read_downgrade_blacklist():
items = [x.strip() for x in items if x != '']
cm_global.pip_downgrade_blacklist += items
cm_global.pip_downgrade_blacklist = list(set(cm_global.pip_downgrade_blacklist))
except:
except Exception:
pass
@@ -103,7 +105,7 @@ class Ctx:
self.no_deps = False
self.mode = 'cache'
self.user_directory = None
self.custom_nodes_paths = [os.path.join(core.comfy_base_path, 'custom_nodes')]
self.custom_nodes_paths = [os.path.join(context.comfy_base_path, 'custom_nodes')]
self.manager_files_directory = os.path.dirname(__file__)
if Ctx.folder_paths is None:
@@ -141,15 +143,14 @@ class Ctx:
if os.path.exists(extra_model_paths_yaml):
utils.extra_config.load_extra_path_config(extra_model_paths_yaml)
core.update_user_directory(user_directory)
context.update_user_directory(user_directory)
if os.path.exists(core.manager_pip_overrides_path):
with open(core.manager_pip_overrides_path, 'r', encoding="UTF-8", errors="ignore") as json_file:
if os.path.exists(context.manager_pip_overrides_path):
with open(context.manager_pip_overrides_path, 'r', encoding="UTF-8", errors="ignore") as json_file:
cm_global.pip_overrides = json.load(json_file)
cm_global.pip_overrides = {'numpy': 'numpy<2'}
if os.path.exists(core.manager_pip_blacklist_path):
with open(core.manager_pip_blacklist_path, 'r', encoding="UTF-8", errors="ignore") as f:
if os.path.exists(context.manager_pip_blacklist_path):
with open(context.manager_pip_blacklist_path, 'r', encoding="UTF-8", errors="ignore") as f:
for x in f.readlines():
y = x.strip()
if y != '':
@@ -162,15 +163,15 @@ class Ctx:
@staticmethod
def get_startup_scripts_path():
return os.path.join(core.manager_startup_script_path, "install-scripts.txt")
return os.path.join(context.manager_startup_script_path, "install-scripts.txt")
@staticmethod
def get_restore_snapshot_path():
return os.path.join(core.manager_startup_script_path, "restore-snapshot.json")
return os.path.join(context.manager_startup_script_path, "restore-snapshot.json")
@staticmethod
def get_snapshot_path():
return core.manager_snapshot_path
return context.manager_snapshot_path
@staticmethod
def get_custom_nodes_paths():
@@ -183,13 +184,18 @@ class Ctx:
cmd_ctx = Ctx()
def install_node(node_spec_str, is_all=False, cnt_msg=''):
def install_node(node_spec_str, is_all=False, cnt_msg='', **kwargs):
exit_on_fail = kwargs.get('exit_on_fail', False)
print(f"install_node exit on fail:{exit_on_fail}...")
if core.is_valid_url(node_spec_str):
# install via urls
res = asyncio.run(core.gitclone_install(node_spec_str, no_deps=cmd_ctx.no_deps))
if not res.result:
print(res.msg)
print(f"[bold red]ERROR: An error occurred while installing '{node_spec_str}'.[/bold red]")
if exit_on_fail:
sys.exit(1)
else:
print(f"{cnt_msg} [INSTALLED] {node_spec_str:50}")
else:
@@ -224,6 +230,8 @@ def install_node(node_spec_str, is_all=False, cnt_msg=''):
print("")
else:
print(f"[bold red]ERROR: An error occurred while installing '{node_name}'.\n{res.msg}[/bold red]")
if exit_on_fail:
sys.exit(1)
def reinstall_node(node_spec_str, is_all=False, cnt_msg=''):
@@ -430,8 +438,11 @@ def show_list(kind, simple=False):
flag = kind in ['all', 'cnr', 'installed', 'enabled']
for k, v in unified_manager.active_nodes.items():
if flag:
cnr = unified_manager.cnr_map[k]
processed[k] = "[ ENABLED ] ", cnr['name'], k, cnr['publisher']['name'], v[0]
cnr = unified_manager.cnr_map.get(k)
if cnr:
processed[k] = "[ ENABLED ] ", cnr['name'], k, cnr['publisher']['name'], v[0]
else:
processed[k] = None
else:
processed[k] = None
@@ -451,8 +462,11 @@ def show_list(kind, simple=False):
continue
if flag:
cnr = unified_manager.cnr_map[k]
processed[k] = "[ DISABLED ] ", cnr['name'], k, cnr['publisher']['name'], ", ".join(list(v.keys()))
cnr = unified_manager.cnr_map.get(k) # NOTE: can this be None if the pack was removed from CNR after installation?
if cnr:
processed[k] = "[ DISABLED ] ", cnr['name'], k, cnr['publisher']['name'], ", ".join(list(v.keys()))
else:
processed[k] = None
else:
processed[k] = None
@@ -461,8 +475,11 @@ def show_list(kind, simple=False):
continue
if flag:
cnr = unified_manager.cnr_map[k]
processed[k] = "[ DISABLED ] ", cnr['name'], k, cnr['publisher']['name'], 'nightly'
cnr = unified_manager.cnr_map.get(k)
if cnr:
processed[k] = "[ DISABLED ] ", cnr['name'], k, cnr['publisher']['name'], 'nightly'
else:
processed[k] = None
else:
processed[k] = None
@@ -482,9 +499,12 @@ def show_list(kind, simple=False):
continue
if flag:
cnr = unified_manager.cnr_map[k]
ver_spec = v['latest_version']['version'] if 'latest_version' in v else '0.0.0'
processed[k] = "[ NOT INSTALLED ] ", cnr['name'], k, cnr['publisher']['name'], ver_spec
cnr = unified_manager.cnr_map.get(k)
if cnr:
ver_spec = v['latest_version']['version'] if 'latest_version' in v else '0.0.0'
processed[k] = "[ NOT INSTALLED ] ", cnr['name'], k, cnr['publisher']['name'], ver_spec
else:
processed[k] = None
else:
processed[k] = None
@@ -585,7 +605,7 @@ def get_all_installed_node_specs():
return res
def for_each_nodes(nodes, act, allow_all=True):
def for_each_nodes(nodes, act, allow_all=True, **kwargs):
is_all = False
if allow_all and 'all' in nodes:
is_all = True
@@ -597,7 +617,7 @@ def for_each_nodes(nodes, act, allow_all=True):
i = 1
for x in nodes:
try:
act(x, is_all=is_all, cnt_msg=f'{i}/{total}')
act(x, is_all=is_all, cnt_msg=f'{i}/{total}', **kwargs)
except Exception as e:
print(f"ERROR: {e}")
traceback.print_exc()
@@ -641,13 +661,17 @@ def install(
None,
help="user directory"
),
exit_on_fail: bool = typer.Option(
False,
help="Exit on failure"
)
):
cmd_ctx.set_user_directory(user_directory)
cmd_ctx.set_channel_mode(channel, mode)
cmd_ctx.set_no_deps(no_deps)
pip_fixer = manager_util.PIPFixer(manager_util.get_installed_packages(), comfy_path, core.manager_files_path)
for_each_nodes(nodes, act=install_node)
pip_fixer = manager_util.PIPFixer(manager_util.get_installed_packages(), comfy_path, context.manager_files_path)
for_each_nodes(nodes, act=install_node, exit_on_fail=exit_on_fail)
pip_fixer.fix_broken()
@@ -684,7 +708,7 @@ def reinstall(
cmd_ctx.set_channel_mode(channel, mode)
cmd_ctx.set_no_deps(no_deps)
pip_fixer = manager_util.PIPFixer(manager_util.get_installed_packages(), comfy_path, core.manager_files_path)
pip_fixer = manager_util.PIPFixer(manager_util.get_installed_packages(), comfy_path, context.manager_files_path)
for_each_nodes(nodes, act=reinstall_node)
pip_fixer.fix_broken()
@@ -738,7 +762,7 @@ def update(
if 'all' in nodes:
asyncio.run(auto_save_snapshot())
pip_fixer = manager_util.PIPFixer(manager_util.get_installed_packages(), comfy_path, core.manager_files_path)
pip_fixer = manager_util.PIPFixer(manager_util.get_installed_packages(), comfy_path, context.manager_files_path)
for x in nodes:
if x.lower() in ['comfyui', 'comfy', 'all']:
@@ -839,7 +863,7 @@ def fix(
if 'all' in nodes:
asyncio.run(auto_save_snapshot())
pip_fixer = manager_util.PIPFixer(manager_util.get_installed_packages(), comfy_path, core.manager_files_path)
pip_fixer = manager_util.PIPFixer(manager_util.get_installed_packages(), comfy_path, context.manager_files_path)
for_each_nodes(nodes, fix_node, allow_all=True)
pip_fixer.fix_broken()
@@ -1116,7 +1140,7 @@ def restore_snapshot(
print(f"[bold red]ERROR: `{snapshot_path}` is not exists.[/bold red]")
exit(1)
pip_fixer = manager_util.PIPFixer(manager_util.get_installed_packages(), comfy_path, core.manager_files_path)
pip_fixer = manager_util.PIPFixer(manager_util.get_installed_packages(), comfy_path, context.manager_files_path)
try:
asyncio.run(core.restore_snapshot(snapshot_path, extras))
except Exception:
@@ -1148,7 +1172,7 @@ def restore_dependencies(
total = len(node_paths)
i = 1
pip_fixer = manager_util.PIPFixer(manager_util.get_installed_packages(), comfy_path, core.manager_files_path)
pip_fixer = manager_util.PIPFixer(manager_util.get_installed_packages(), comfy_path, context.manager_files_path)
for x in node_paths:
print("----------------------------------------------------------------------------------------------------")
print(f"Restoring [{i}/{total}]: {x}")
@@ -1167,7 +1191,7 @@ def post_install(
):
path = os.path.expanduser(path)
pip_fixer = manager_util.PIPFixer(manager_util.get_installed_packages(), comfy_path, core.manager_files_path)
pip_fixer = manager_util.PIPFixer(manager_util.get_installed_packages(), comfy_path, context.manager_files_path)
unified_manager.execute_install_script('', path, instant_execution=True)
pip_fixer.fix_broken()
@@ -1207,11 +1231,11 @@ def install_deps(
with open(deps, 'r', encoding="UTF-8", errors="ignore") as json_file:
try:
json_obj = json.load(json_file)
except:
except Exception:
print(f"[bold red]Invalid json file: {deps}[/bold red]")
exit(1)
pip_fixer = manager_util.PIPFixer(manager_util.get_installed_packages(), comfy_path, core.manager_files_path)
pip_fixer = manager_util.PIPFixer(manager_util.get_installed_packages(), comfy_path, context.manager_files_path)
for k in json_obj['custom_nodes'].keys():
state = core.simple_check_custom_node(k)
if state == 'installed':
@@ -1268,6 +1292,10 @@ def export_custom_node_ids(
print(f"{x['id']}@unknown", file=output_file)
def main():
app()
if __name__ == '__main__':
sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0])
sys.exit(app())

View File

@@ -0,0 +1,16 @@
# ComfyUI-Manager: Core Backend (glob)
This directory contains the Python backend modules that power ComfyUI-Manager, handling the core functionality of node management, downloading, security, and server operations.
## Core Modules
- **manager_downloader.py**: Handles downloading operations for models, extensions, and other resources.
- **manager_util.py**: Provides utility functions used throughout the system.
## Specialized Modules
- **cm_global.py**: Maintains global variables and state management across the system.
- **cnr_utils.py**: Helper utilities for interacting with the custom node registry (CNR).
- **git_utils.py**: Git-specific utilities for repository operations.
- **node_package.py**: Handles the packaging and installation of node extensions.
- **security_check.py**: Implements the multi-level security system for installation safety.

View File

View File

@@ -6,7 +6,7 @@ import time
from dataclasses import dataclass
from typing import List
from . import manager_core
from . import context
from . import manager_util
import requests
@@ -48,9 +48,9 @@ async def _get_cnr_data(cache_mode=True, dont_wait=True):
# Get ComfyUI version tag
if is_desktop:
# extract version from pyproject.toml instead of git tag
comfyui_ver = manager_core.get_current_comfyui_ver() or 'unknown'
comfyui_ver = context.get_current_comfyui_ver() or 'unknown'
else:
comfyui_ver = manager_core.get_comfyui_tag() or 'unknown'
comfyui_ver = context.get_comfyui_tag() or 'unknown'
if is_desktop:
if is_windows:
@@ -112,7 +112,7 @@ async def _get_cnr_data(cache_mode=True, dont_wait=True):
json_obj = await fetch_all()
manager_util.save_to_cache(uri, json_obj)
return json_obj['nodes']
except:
except Exception:
res = {}
print("Cannot connect to comfyregistry.")
finally:
@@ -180,7 +180,7 @@ def install_node(node_id, version=None):
else:
url = f"{base_url}/nodes/{node_id}/install?version={version}"
response = requests.get(url)
response = requests.get(url, verify=not manager_util.bypass_ssl)
if response.status_code == 200:
# Convert the API response to a NodeVersion object
return map_node_version(response.json())
@@ -191,7 +191,7 @@ def install_node(node_id, version=None):
def all_versions_of_node(node_id):
url = f"{base_url}/nodes/{node_id}/versions?statuses=NodeVersionStatusActive&statuses=NodeVersionStatusPending"
response = requests.get(url)
response = requests.get(url, verify=not manager_util.bypass_ssl)
if response.status_code == 200:
return response.json()
else:
@@ -211,6 +211,7 @@ def read_cnr_info(fullpath):
project = data.get('project', {})
name = project.get('name').strip().lower()
original_name = project.get('name')
# normalize version
# for example: 2.5 -> 2.5.0
@@ -222,6 +223,7 @@ def read_cnr_info(fullpath):
if name and version: # repository is optional
return {
"id": name,
"original_name": original_name,
"version": version,
"url": repository
}
@@ -237,7 +239,7 @@ def generate_cnr_id(fullpath, cnr_id):
if not os.path.exists(cnr_id_path):
with open(cnr_id_path, "w") as f:
return f.write(cnr_id)
except:
except Exception:
print(f"[ComfyUI Manager] unable to create file: {cnr_id_path}")
@@ -247,7 +249,7 @@ def read_cnr_id(fullpath):
if os.path.exists(cnr_id_path):
with open(cnr_id_path) as f:
return f.read().strip()
except:
except Exception:
pass
return None

View File

@@ -0,0 +1,108 @@
import sys
import os
import logging
from . import manager_util
import toml
import git
# read env vars
comfy_path: str = os.environ.get('COMFYUI_PATH')
comfy_base_path = os.environ.get('COMFYUI_FOLDERS_BASE_PATH')
if comfy_path is None:
try:
comfy_path = os.path.abspath(os.path.dirname(sys.modules['__main__'].__file__))
os.environ['COMFYUI_PATH'] = comfy_path
except Exception:
logging.error("[ComfyUI-Manager] environment variable 'COMFYUI_PATH' is not specified.")
exit(-1)
if comfy_base_path is None:
comfy_base_path = comfy_path
channel_list_template_path = os.path.join(manager_util.comfyui_manager_path, 'channels.list.template')
git_script_path = os.path.join(manager_util.comfyui_manager_path, "git_helper.py")
manager_files_path = None
manager_config_path = None
manager_channel_list_path = None
manager_startup_script_path:str = None
manager_snapshot_path = None
manager_pip_overrides_path = None
manager_pip_blacklist_path = None
manager_components_path = None
manager_batch_history_path = None
def update_user_directory(user_dir):
global manager_files_path
global manager_config_path
global manager_channel_list_path
global manager_startup_script_path
global manager_snapshot_path
global manager_pip_overrides_path
global manager_pip_blacklist_path
global manager_components_path
global manager_batch_history_path
manager_files_path = os.path.abspath(os.path.join(user_dir, 'default', 'ComfyUI-Manager'))
if not os.path.exists(manager_files_path):
os.makedirs(manager_files_path)
manager_snapshot_path = os.path.join(manager_files_path, "snapshots")
if not os.path.exists(manager_snapshot_path):
os.makedirs(manager_snapshot_path)
manager_startup_script_path = os.path.join(manager_files_path, "startup-scripts")
if not os.path.exists(manager_startup_script_path):
os.makedirs(manager_startup_script_path)
manager_config_path = os.path.join(manager_files_path, 'config.ini')
manager_channel_list_path = os.path.join(manager_files_path, 'channels.list')
manager_pip_overrides_path = os.path.join(manager_files_path, "pip_overrides.json")
manager_pip_blacklist_path = os.path.join(manager_files_path, "pip_blacklist.list")
manager_components_path = os.path.join(manager_files_path, "components")
manager_util.cache_dir = os.path.join(manager_files_path, "cache")
manager_batch_history_path = os.path.join(manager_files_path, "batch_history")
if not os.path.exists(manager_util.cache_dir):
os.makedirs(manager_util.cache_dir)
if not os.path.exists(manager_batch_history_path):
os.makedirs(manager_batch_history_path)
try:
import folder_paths
update_user_directory(folder_paths.get_user_directory())
except Exception:
# fallback:
# This case can only occur when running via cm-cli; in practice it is not exercised.
update_user_directory(os.path.abspath(manager_util.comfyui_manager_path))
def get_current_comfyui_ver():
"""
Extract version from pyproject.toml
"""
toml_path = os.path.join(comfy_path, 'pyproject.toml')
if not os.path.exists(toml_path):
return None
else:
try:
with open(toml_path, "r", encoding="utf-8") as f:
data = toml.load(f)
project = data.get('project', {})
return project.get('version')
except Exception:
return None
def get_comfyui_tag():
try:
with git.Repo(comfy_path) as repo:
return repo.git.describe('--tags')
except Exception:
return None

View File

@@ -4,6 +4,7 @@ class NetworkMode(enum.Enum):
PUBLIC = "public"
PRIVATE = "private"
OFFLINE = "offline"
PERSONAL_CLOUD = "personal_cloud"
class SecurityLevel(enum.Enum):
STRONG = "strong"

View File

@@ -156,27 +156,27 @@ def switch_to_default_branch(repo):
default_branch = repo.git.symbolic_ref(f'refs/remotes/{remote_name}/HEAD').replace(f'refs/remotes/{remote_name}/', '')
repo.git.checkout(default_branch)
return True
except:
except Exception:
# try checkout master
# try checkout main if failed
try:
repo.git.checkout(repo.heads.master)
return True
except:
except Exception:
try:
if remote_name is not None:
repo.git.checkout('-b', 'master', f'{remote_name}/master')
return True
except:
except Exception:
try:
repo.git.checkout(repo.heads.main)
return True
except:
except Exception:
try:
if remote_name is not None:
repo.git.checkout('-b', 'main', f'{remote_name}/main')
return True
except:
except Exception:
pass
print("[ComfyUI Manager] Failed to switch to the default branch")
@@ -447,7 +447,7 @@ def restore_pip_snapshot(pips, options):
res = 1
try:
res = subprocess.check_call([sys.executable, '-m', 'pip', 'install'] + non_url)
except:
except Exception:
pass
# fallback
@@ -456,7 +456,7 @@ def restore_pip_snapshot(pips, options):
res = 1
try:
res = subprocess.check_call([sys.executable, '-m', 'pip', 'install', x])
except:
except Exception:
pass
if res != 0:
@@ -467,7 +467,7 @@ def restore_pip_snapshot(pips, options):
res = 1
try:
res = subprocess.check_call([sys.executable, '-m', 'pip', 'install', x])
except:
except Exception:
pass
if res != 0:
@@ -478,7 +478,7 @@ def restore_pip_snapshot(pips, options):
res = 1
try:
res = subprocess.check_call([sys.executable, '-m', 'pip', 'install', x])
except:
except Exception:
pass
if res != 0:

View File

@@ -46,6 +46,8 @@ def git_url(fullpath):
for k, v in config.items():
if k.startswith('remote ') and 'url' in v:
if 'Comfy-Org/ComfyUI-Manager' in v['url']:
return "https://github.com/ltdrdata/ComfyUI-Manager"
return v['url']
return None

View File

@@ -55,7 +55,11 @@ def download_url(model_url: str, model_dir: str, filename: str):
return aria2_download_url(model_url, model_dir, filename)
else:
from torchvision.datasets.utils import download_url as torchvision_download_url
return torchvision_download_url(model_url, model_dir, filename)
try:
return torchvision_download_url(model_url, model_dir, filename)
except Exception as e:
logging.error(f"[ComfyUI-Manager] Failed to download: {model_url} / {repr(e)}")
raise
def aria2_find_task(dir: str, filename: str):

View File

@@ -0,0 +1,36 @@
from enum import Enum
is_personal_cloud_mode = False
handler_policy = {}
class HANDLER_POLICY(Enum):
MULTIPLE_REMOTE_BAN_NON_LOCAL = 1
MULTIPLE_REMOTE_BAN_NOT_PERSONAL_CLOUD = 2
BANNED = 3
def is_loopback(address):
import ipaddress
try:
return ipaddress.ip_address(address).is_loopback
except ValueError:
return False
def do_nothing():
pass
def get_handler_policy(x):
return handler_policy.get(x) or set()
def add_handler_policy(x, policy):
s = handler_policy.get(x)
if s is None:
s = set()
handler_policy[x] = s
s.add(policy)
multiple_remote_alert = do_nothing

View File

@@ -24,6 +24,7 @@ comfyui_manager_path = os.path.abspath(os.path.join(os.path.dirname(__file__), '
cache_dir = os.path.join(comfyui_manager_path, '.cache') # This path is also updated together in **manager_core.update_user_directory**.
use_uv = False
bypass_ssl = False
def is_manager_pip_package():
return not os.path.exists(os.path.join(comfyui_manager_path, '..', 'custom_nodes'))
@@ -53,7 +54,7 @@ def make_pip_cmd(cmd):
# DON'T USE StrictVersion - cannot handle pre_release version
# try:
# from distutils.version import StrictVersion
# except:
# except Exception:
# print(f"[ComfyUI-Manager] 'distutils' package not found. Activating fallback mode for compatibility.")
class StrictVersion:
def __init__(self, version_string):
@@ -139,7 +140,7 @@ async def get_data(uri, silent=False):
print(f"FETCH DATA from: {uri}", end="")
if uri.startswith("http"):
async with aiohttp.ClientSession(trust_env=True, connector=aiohttp.TCPConnector(verify_ssl=False)) as session:
async with aiohttp.ClientSession(trust_env=True, connector=aiohttp.TCPConnector(verify_ssl=not bypass_ssl)) as session:
headers = {
'Cache-Control': 'no-cache',
'Pragma': 'no-cache',
@@ -259,7 +260,7 @@ def get_installed_packages(renew=False):
pip_map[normalized_name] = y[1]
except subprocess.CalledProcessError:
logging.error("[ComfyUI-Manager] Failed to retrieve the information of installed pip packages.")
return set()
return {}
return pip_map
@@ -310,6 +311,7 @@ def parse_requirement_line(line):
torch_torchvision_torchaudio_version_map = {
'2.7.0': ('0.22.0', '2.7.0'),
'2.6.0': ('0.21.0', '2.6.0'),
'2.5.1': ('0.20.0', '2.5.0'),
'2.5.0': ('0.20.0', '2.5.0'),
@@ -328,6 +330,32 @@ torch_torchvision_torchaudio_version_map = {
}
def torch_rollback(prev):
spec = prev.split('+')
if len(spec) > 1:
platform = spec[1]
else:
cmd = make_pip_cmd(['install', '--force', 'torch', 'torchvision', 'torchaudio'])
subprocess.check_output(cmd, universal_newlines=True)
logging.error(cmd)
return
torch_ver = StrictVersion(spec[0])
torch_ver = f"{torch_ver.major}.{torch_ver.minor}.{torch_ver.patch}"
torch_torchvision_torchaudio_ver = torch_torchvision_torchaudio_version_map.get(torch_ver)
if torch_torchvision_torchaudio_ver is None:
cmd = make_pip_cmd(['install', '--pre', 'torch', 'torchvision', 'torchaudio',
'--index-url', f"https://download.pytorch.org/whl/nightly/{platform}"])
logging.info("[ComfyUI-Manager] restore PyTorch to nightly version")
else:
torchvision_ver, torchaudio_ver = torch_torchvision_torchaudio_ver
cmd = make_pip_cmd(['install', f'torch=={torch_ver}', f'torchvision=={torchvision_ver}', f"torchaudio=={torchaudio_ver}",
'--index-url', f"https://download.pytorch.org/whl/{platform}"])
logging.info(f"[ComfyUI-Manager] restore PyTorch to {torch_ver}+{platform}")
subprocess.check_output(cmd, universal_newlines=True)
class PIPFixer:
def __init__(self, prev_pip_versions, comfyui_path, manager_files_path):
@@ -335,32 +363,6 @@ class PIPFixer:
self.comfyui_path = comfyui_path
self.manager_files_path = manager_files_path
def torch_rollback(self):
spec = self.prev_pip_versions['torch'].split('+')
if len(spec) > 0:
platform = spec[1]
else:
cmd = make_pip_cmd(['install', '--force', 'torch', 'torchvision', 'torchaudio'])
subprocess.check_output(cmd, universal_newlines=True)
logging.error(cmd)
return
torch_ver = StrictVersion(spec[0])
torch_ver = f"{torch_ver.major}.{torch_ver.minor}.{torch_ver.patch}"
torch_torchvision_torchaudio_ver = torch_torchvision_torchaudio_version_map.get(torch_ver)
if torch_torchvision_torchaudio_ver is None:
cmd = make_pip_cmd(['install', '--pre', 'torch', 'torchvision', 'torchaudio',
'--index-url', f"https://download.pytorch.org/whl/nightly/{platform}"])
logging.info("[ComfyUI-Manager] restore PyTorch to nightly version")
else:
torchvision_ver, torchaudio_ver = torch_torchvision_torchaudio_ver
cmd = make_pip_cmd(['install', f'torch=={torch_ver}', f'torchvision=={torchvision_ver}', f"torchaudio=={torchaudio_ver}",
'--index-url', f"https://download.pytorch.org/whl/{platform}"])
logging.info(f"[ComfyUI-Manager] restore PyTorch to {torch_ver}+{platform}")
subprocess.check_output(cmd, universal_newlines=True)
def fix_broken(self):
new_pip_versions = get_installed_packages(True)
@@ -382,7 +384,7 @@ class PIPFixer:
elif self.prev_pip_versions['torch'] != new_pip_versions['torch'] \
or self.prev_pip_versions['torchvision'] != new_pip_versions['torchvision'] \
or self.prev_pip_versions['torchaudio'] != new_pip_versions['torchaudio']:
self.torch_rollback()
torch_rollback(self.prev_pip_versions['torch'])
except Exception as e:
logging.error("[ComfyUI-Manager] Failed to restore PyTorch")
logging.error(e)
@@ -413,7 +415,7 @@ class PIPFixer:
if len(targets) > 0:
for x in targets:
cmd = make_pip_cmd(['install', f"{x}=={versions[0].version_string}", "numpy<2"])
cmd = make_pip_cmd(['install', f"{x}=={versions[0].version_string}"])
subprocess.check_output(cmd, universal_newlines=True)
logging.info(f"[ComfyUI-Manager] 'opencv' dependencies were fixed: {targets}")
@@ -421,19 +423,6 @@ class PIPFixer:
logging.error("[ComfyUI-Manager] Failed to restore opencv")
logging.error(e)
# fix numpy
try:
np = new_pip_versions.get('numpy')
if np is not None:
if StrictVersion(np) >= StrictVersion('2'):
cmd = make_pip_cmd(['install', "numpy<2"])
subprocess.check_output(cmd , universal_newlines=True)
logging.info("[ComfyUI-Manager] 'numpy' dependency were fixed")
except Exception as e:
logging.error("[ComfyUI-Manager] Failed to restore numpy")
logging.error(e)
# fix missing frontend
try:
# NOTE: package name in requirements is 'comfyui-frontend-package'
@@ -472,7 +461,7 @@ class PIPFixer:
normalized_name = parsed['package'].lower().replace('-', '_')
if normalized_name in new_pip_versions:
if 'version' in parsed and 'operator' in parsed:
cur = StrictVersion(new_pip_versions[parsed['package']])
cur = StrictVersion(new_pip_versions[normalized_name])
dest = parsed['version']
op = parsed['operator']
if cur == dest:
@@ -520,7 +509,7 @@ def robust_readlines(fullpath):
try:
with open(fullpath, "r") as f:
return f.readlines()
except:
except Exception:
encoding = None
with open(fullpath, "rb") as f:
raw_data = f.read()
@@ -533,3 +522,69 @@ def robust_readlines(fullpath):
print(f"[ComfyUI-Manager] Failed to recognize encoding for: {fullpath}")
return []
def restore_pip_snapshot(pips, options):
non_url = []
local_url = []
non_local_url = []
for k, v in pips.items():
# NOTE: skip torch related packages
if k.startswith("torch==") or k.startswith("torchvision==") or k.startswith("torchaudio==") or k.startswith("nvidia-"):
continue
if v == "":
non_url.append(k)
else:
if v.startswith('file:'):
local_url.append(v)
else:
non_local_url.append(v)
# restore other pips
failed = []
if '--pip-non-url' in options:
# try all at once
res = 1
try:
res = subprocess.check_output(make_pip_cmd(['install'] + non_url))
except Exception:
pass
# fallback
if res != 0:
for x in non_url:
res = 1
try:
res = subprocess.check_output(make_pip_cmd(['install', '--no-deps', x]))
except Exception:
pass
if res != 0:
failed.append(x)
if '--pip-non-local-url' in options:
for x in non_local_url:
res = 1
try:
res = subprocess.check_output(make_pip_cmd(['install', '--no-deps', x]))
except Exception:
pass
if res != 0:
failed.append(x)
if '--pip-local-url' in options:
for x in local_url:
res = 1
try:
res = subprocess.check_output(make_pip_cmd(['install', '--no-deps', x]))
except Exception:
pass
if res != 0:
failed.append(x)
print(f"Installation failed for pip packages: {failed}")

View File

@@ -0,0 +1,713 @@
# Design Document for pip_util.py Implementation
This system is designed to minimize breakage of dependencies that are already installed.
## List of Functions to Implement
## Global Policy Management
### Global Variables
```python
_pip_policy_cache = None # Policy cache (program-wide, loaded once)
```
### Global Functions
* get_pip_policy(): Returns policy for resolving pip dependency conflicts (lazy loading)
- **Call timing**: Called whenever needed (automatically loads only once on first call)
- **Purpose**: Returns policy cache, automatically loads if cache is empty
- **Execution flow**:
1. Declare global _pip_policy_cache
2. If _pip_policy_cache is already loaded, return immediately (prevent duplicate loading)
3. Read base policy file:
- Path: {manager_util.comfyui_manager_path}/pip-policy.json
- Use empty dictionary if file doesn't exist
- Log error and use empty dictionary if JSON parsing fails
4. Read user policy file:
- Path: {context.manager_files_path}/pip-policy.user.json
- Create an empty JSON file if it doesn't exist ({"_comment": "User-specific pip policy overrides"})
- Log warning and use empty dictionary if JSON parsing fails
5. Apply merge rules (merge by package name; a runnable sketch follows this section):
- Start with base policy as base
- For each package in user policy:
* Package only in user policy: add to base
* Package only in base policy: keep in base
* Package in both: completely replace with user policy (entire package replacement, not section-level)
6. Store merged policy in _pip_policy_cache
7. Log policy load success (include number of loaded package policies)
8. Return _pip_policy_cache
- **Return value**: Dict (merged policy dictionary)
- **Exception handling**:
- File read failure: Log warning and treat file as empty dictionary
- JSON parsing failure: Log error and treat file as empty dictionary
- **Notes**:
- Lazy loading pattern automatically loads on first call
- Not thread-safe, caution needed in multi-threaded environments
- Policy file structure should support the following scenarios:
- Dictionary structure of {dependency name -> policy object}
- Policy object has four policy sections:
- **uninstall**: Package removal policy (pre-processing, condition optional)
- **apply_first_match**: Evaluate top-to-bottom and execute only the first policy that satisfies condition (exclusive)
- **apply_all_matches**: Execute all policies that satisfy conditions (cumulative)
- **restore**: Package restoration policy (post-processing, condition optional)
- Condition types:
- installed: Check version condition of already installed dependencies
- spec is optional
- package field: Specify package to check (optional, defaults to self)
- Explicit: Reference another package (e.g., numba checks numpy version)
- Omitted: Check own version (e.g., critical-package checks its own version)
- platform: Platform conditions (os, has_gpu, comfyui_version, etc.)
- If condition is absent, always considered satisfied
- uninstall policy (pre-removal policy):
- Removal policy list (condition is optional, evaluate top-to-bottom and execute only first match)
- When condition satisfied (or always if no condition): remove target package and abort installation
- If this policy is applied, all subsequent steps are ignored
- target field specifies package to remove
- Example: Unconditionally remove if specific package is installed
- Actions available in apply_first_match (determine installation method, exclusive):
- skip: Block installation of specific dependency
- force_version: Force change to specific version during installation
- extra_index_url field can specify custom package repository (optional)
- replace: Replace with different dependency
- extra_index_url field can specify custom package repository (optional)
- Actions available in apply_all_matches (installation options, cumulative):
- pin_dependencies: Pin currently installed versions of other dependencies
- pinned_packages field specifies package list
- Example: `pip install requests urllib3==1.26.15 certifi==2023.7.22 charset-normalizer==3.2.0`
- Real use case: Prevent urllib3 from upgrading to 2.x when installing requests
- on_failure: "fail" or "retry_without_pin"
- install_with: Specify additional dependencies to install together
- warn: Record warning message in log
- restore policy (post-restoration policy):
- Restoration policy list (condition is optional, evaluate top-to-bottom and execute only first match)
- Executed after package installation completes (post-processing)
- When condition satisfied (or always if no condition): force install target package to specific version
- target field specifies package to restore (can be different package)
- version field specifies version to install
- extra_index_url field can specify custom package repository (optional)
- Example: Reinstall/change version if specific package is deleted or wrong version
- Execution order:
1. uninstall evaluation: If condition satisfied, remove package and **terminate** (ignore subsequent steps)
2. apply_first_match evaluation:
- Execute first policy that satisfies condition among skip/force_version/replace
- If no matching policy, proceed with default installation of originally requested package
3. apply_all_matches evaluation: Apply all pin_dependencies, install_with, warn that satisfy conditions
4. Execute actual package installation (pip install or uv pip install)
5. restore evaluation: If condition satisfied, restore target package (post-processing)
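Before moving to the batch class, here is a minimal, self-contained sketch of the package-level merge rule from get_pip_policy step 5 above; `merge_policies` and the sample entries are illustrative, not part of the module's public API:

```python
def merge_policies(base: dict, user: dict) -> dict:
    """Package-level merge: a package present in the user policy completely
    replaces the base entry for that package (no section-level merging)."""
    merged = dict(base)
    for name, policy in user.items():
        if name.startswith("_"):  # skip metadata fields such as "_comment"
            continue
        merged[name] = policy  # whole-package replacement
    return merged

# Example: the user policy overrides the entire "click" entry from base.
base = {"click": {"apply_first_match": [{"type": "force_version", "version": "8.1.3"}]}}
user = {"_comment": "overrides", "click": {"apply_first_match": [{"type": "skip"}]}}
assert merge_policies(base, user)["click"]["apply_first_match"][0]["type"] == "skip"
```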
## Batch Unit Class (PipBatch)
### Class Structure
```python
class PipBatch:
"""
pip package installation batch unit manager
Maintains pip freeze cache during batch operations for performance optimization
Usage pattern:
# Batch operations (policy auto-loaded)
with PipBatch() as batch:
batch.ensure_not_installed()
batch.install("numpy>=1.20")
batch.install("pandas>=2.0")
batch.install("scipy>=1.7")
batch.ensure_installed()
"""
def __init__(self):
self._installed_cache = None # Installed packages cache (batch-level)
def __enter__(self):
return self
def __exit__(self, exc_type, exc_val, exc_tb):
self._installed_cache = None
```
### Private Methods
* PipBatch._refresh_installed_cache():
- **Purpose**: Read currently installed package information and refresh cache
- **Execution flow**:
1. Generate command using manager_util.make_pip_cmd(["freeze"])
2. Execute pip freeze via subprocess
3. Parse output:
- Each line is in "package_name==version" format
- Parse "package_name==version" to create dictionary
- Ignore editable packages (starting with -e)
- Ignore comments (starting with #)
4. Store parsed dictionary in self._installed_cache
- **Return value**: None
- **Exception handling**:
- pip freeze failure: Set cache to empty dictionary and log warning
- Parse failure: Ignore line and continue
* PipBatch._get_installed_packages():
- **Purpose**: Return cached installed package information (refresh if cache is None)
- **Execution flow**:
1. If self._installed_cache is None, call _refresh_installed_cache()
2. Return self._installed_cache
- **Return value**: {package_name: version} dictionary
* PipBatch._invalidate_cache():
- **Purpose**: Invalidate cache after package install/uninstall
- **Execution flow**:
1. Set self._installed_cache = None
- **Return value**: None
- **Call timing**: After install(), ensure_not_installed(), ensure_installed()
* PipBatch._parse_package_spec(package_info):
- **Purpose**: Split package spec string into package name and version spec
- **Parameters**:
- package_info: "numpy", "numpy==1.26.0", "numpy>=1.20.0", "numpy~=1.20", etc.
- **Execution flow**:
1. Use regex to split package name and version spec
2. Pattern: `^([a-zA-Z0-9_-]+)([><=!~]+.*)?$`
- **Return value**: (package_name, version_spec) tuple
- Examples: ("numpy", "==1.26.0"), ("pandas", ">=2.0.0"), ("scipy", None)
- **Exception handling**:
- Parse failure: Raise ValueError
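A runnable sketch of this parsing rule, using the regex given above; note that the specified pattern covers only simple names, so dotted package names (e.g. `zope.interface`) or extras would raise ValueError here:

```python
import re

_SPEC_RE = re.compile(r'^([a-zA-Z0-9_-]+)([><=!~]+.*)?$')

def parse_package_spec(package_info: str):
    """Split a spec like 'numpy>=1.20.0' into (name, version_spec)."""
    m = _SPEC_RE.match(package_info.strip())
    if m is None:
        raise ValueError(f"Invalid package spec: {package_info!r}")
    return m.group(1), m.group(2)  # version_spec is None when absent

assert parse_package_spec("numpy") == ("numpy", None)
assert parse_package_spec("numpy==1.26.0") == ("numpy", "==1.26.0")
assert parse_package_spec("pandas>=2.0.0") == ("pandas", ">=2.0.0")
```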
* PipBatch._evaluate_condition(condition, package_name, installed_packages):
- **Purpose**: Evaluate policy condition and return whether satisfied
- **Parameters**:
- condition: Policy condition object (dictionary)
- package_name: Name of package currently being processed
- installed_packages: {package_name: version} dictionary
- **Execution flow**:
1. If condition is None, return True (always satisfied)
2. Branch based on condition["type"]:
a. "installed" type:
- target_package = condition.get("package", package_name)
- Check current version with installed_packages.get(target_package)
- If not installed (None), return False
- If spec exists, compare version using packaging.specifiers.SpecifierSet
- If no spec, only check installation status (True)
b. "platform" type:
- If condition["os"] exists, compare with platform.system()
- If condition["has_gpu"] exists, check GPU presence (torch.cuda.is_available(), etc.)
- If condition["comfyui_version"] exists, compare ComfyUI version
- Return True if all conditions satisfied
3. Return True if all conditions satisfied, False if any unsatisfied
- **Return value**: bool
- **Exception handling**:
- Version comparison failure: Log warning and return False
- Unknown condition type: Log warning and return False
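As a concrete illustration of the "installed" branch, here is a self-contained sketch using the `packaging` library as specified above (the function name is illustrative):

```python
from packaging.specifiers import SpecifierSet
from packaging.version import Version

def evaluate_installed(condition: dict, package_name: str, installed: dict) -> bool:
    """'installed' condition: presence check, plus optional version spec."""
    target = condition.get("package", package_name)  # omitted package -> self
    current = installed.get(target)
    if current is None:
        return False
    spec = condition.get("spec")
    if spec is None:
        return True  # only the installation status matters
    try:
        return Version(current) in SpecifierSet(spec)
    except Exception:
        return False  # version comparison failure -> unsatisfied (per spec)

# numba referencing numpy's version rather than its own:
cond = {"type": "installed", "package": "numpy", "spec": "<2"}
assert evaluate_installed(cond, "numba", {"numpy": "1.26.4"}) is True
```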
### Public Methods
* PipBatch.install(package_info, extra_index_url=None, override_policy=False):
- **Purpose**: Perform policy-based pip package installation (individual package basis)
- **Parameters**:
- package_info: Package name and version spec (e.g., "numpy", "numpy==1.26.0", "numpy>=1.20.0")
- extra_index_url: Additional package repository URL (optional)
- override_policy: If True, skip policy application and install directly (default: False)
- **Execution flow**:
1. Call get_pip_policy() to get policy (lazy loading)
2. Use self._parse_package_spec() to split package_info into package name and version spec
3. Call self._get_installed_packages() to get cached installed package information
4. If override_policy=True → Jump directly to step 10 (skip policy)
5. Get policy for package name from policy dictionary
6. If no policy → Jump to step 10 (default installation)
7. **apply_first_match policy evaluation** (exclusive - only first match):
- Iterate through policy list top-to-bottom
- Evaluate each policy's condition with self._evaluate_condition()
- When first condition-satisfying policy found:
* type="skip": Log reason and return False (don't install)
* type="force_version": Change package_info version to policy's version
* type="replace": Completely replace package_info with policy's replacement package
- If no matching policy, keep original package_info
8. **apply_all_matches policy evaluation** (cumulative - all matches):
- Iterate through policy list top-to-bottom
- Evaluate each policy's condition with self._evaluate_condition()
- For all condition-satisfying policies:
* type="pin_dependencies":
- For each package in pinned_packages, query current version with self._installed_cache.get(pkg)
- Pin to installed version in "package==version" format
- Add to installation package list
* type="install_with":
- Add additional_packages to installation package list
* type="warn":
- Output message as warning log
- If allow_continue=false, wait for user confirmation (optional)
9. Compose final installation package list:
- Main package (modified/replaced package_info)
- Packages pinned by pin_dependencies
- Packages added by install_with
10. Handle extra_index_url:
- Parameter-passed extra_index_url takes priority
- Otherwise use extra_index_url defined in policy
11. Generate pip/uv command using manager_util.make_pip_cmd():
- Basic format: ["pip", "install"] + package list
- If extra_index_url exists: add ["--extra-index-url", url]
12. Execute command via subprocess
13. Handle installation failure:
- If pin_dependencies's on_failure="retry_without_pin":
* Retry with only main package excluding pinned packages
- If on_failure="fail":
* Raise exception and abort installation
- Otherwise: Log warning and continue
14. On successful installation:
- Call self._invalidate_cache() (invalidate cache)
- Log info if reason exists
- Return True
- **Return value**: Installation success status (bool)
- **Exception handling**:
- Policy parsing failure: Log warning and proceed with default installation
- Installation failure: Log error and raise exception (depends on on_failure setting)
- **Notes**:
- restore policy not handled in this method (batch-processed in ensure_installed())
- uninstall policy not handled in this method (batch-processed in ensure_not_installed())
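To make steps 8-9 concrete, the snippet below composes the final package list for the `requests` pin example from earlier; the installed versions and the `compose_install_args` name are illustrative:

```python
def compose_install_args(main_pkg: str, pinned_packages: list, installed: dict) -> list:
    """Steps 8-9: pin each listed dependency to its currently installed version."""
    pins = [f"{p}=={installed[p]}" for p in pinned_packages if p in installed]
    return [main_pkg] + pins

installed = {"urllib3": "1.26.15", "certifi": "2023.7.22", "charset-normalizer": "3.2.0"}
args = compose_install_args("requests", ["urllib3", "certifi", "charset-normalizer"], installed)
# manager_util.make_pip_cmd(["install"] + args) would then yield the actual command
print(args)  # ['requests', 'urllib3==1.26.15', 'certifi==2023.7.22', 'charset-normalizer==3.2.0']
```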
* PipBatch.ensure_not_installed():
- **Purpose**: Iterate through all policies and remove all packages satisfying uninstall conditions (batch processing)
- **Parameters**: None
- **Execution flow**:
1. Call get_pip_policy() to get policy (lazy loading)
2. Call self._get_installed_packages() to get cached installed package information
3. Iterate through all package policies in policy dictionary:
a. Check if each package has uninstall policy
b. If uninstall policy exists:
- Iterate through uninstall policy list top-to-bottom
- Evaluate each policy's condition with self._evaluate_condition()
- When first condition-satisfying policy found:
* Check if target package exists in self._installed_cache
* If installed:
- Generate command with manager_util.make_pip_cmd(["uninstall", "-y", target])
- Execute pip uninstall via subprocess
- Log reason in info log
- Add to removed package list
- Remove package from self._installed_cache
* Move to next package (only first match per package)
4. Complete iteration through all package policies
- **Return value**: List of removed package names (list of str)
- **Exception handling**:
- Individual package removal failure: Log warning only and continue to next package
- **Call timing**:
- Called at batch operation start to pre-remove conflicting packages
- Called before multiple package installations to clean installation environment
* PipBatch.ensure_installed():
- **Purpose**: Iterate through all policies and restore all packages satisfying restore conditions (batch processing)
- **Parameters**: None
- **Execution flow**:
1. Call get_pip_policy() to get policy (lazy loading)
2. Call self._get_installed_packages() to get cached installed package information
3. Iterate through all package policies in policy dictionary:
a. Check if each package has restore policy
b. If restore policy exists:
- Iterate through restore policy list top-to-bottom
- Evaluate each policy's condition with self._evaluate_condition()
- When first condition-satisfying policy found:
* Get target package name (policy's "target" field)
* Get version specified in version field
* Check current version with self._installed_cache.get(target)
* If current version is None or different from specified version:
- Compose as package_spec = f"{target}=={version}" format
- Generate command with manager_util.make_pip_cmd(["install", package_spec])
- If extra_index_url exists, add ["--extra-index-url", url]
- Execute pip install via subprocess
- Log reason in info log
- Add to restored package list
- Update cache: self._installed_cache[target] = version
* Move to next package (only first match per package)
4. Complete iteration through all package policies
- **Return value**: List of restored package names (list of str)
- **Exception handling**:
- Individual package installation failure: Log warning only and continue to next package
- **Call timing**:
- Called at batch operation end to restore essential package versions
- Called for environment verification after multiple package installations
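The decision step of ensure_installed() can be sketched as a pure function (conditions and extra_index_url are omitted for brevity; `plan_restores` is an illustrative name):

```python
def plan_restores(policy: dict, installed: dict) -> list:
    """Return the 'target==version' specs that restore rules would install."""
    specs = []
    for name, pkg_policy in policy.items():
        for rule in pkg_policy.get("restore", []):
            target, version = rule["target"], rule["version"]
            if installed.get(target) != version:  # missing or wrong version
                specs.append(f"{target}=={version}")
            break  # only the first rule per package is considered here
    return specs

policy = {"six": {"restore": [{"target": "six", "version": "1.16.0"}]}}
print(plan_restores(policy, {"six": "1.15.0"}))  # ['six==1.16.0']
```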
## pip-policy.json Examples
### Base Policy File ({manager_util.comfyui_manager_path}/pip-policy.json)
```json
{
"torch": {
"apply_first_match": [
{
"type": "skip",
"reason": "PyTorch installation should be managed manually due to CUDA compatibility"
}
]
},
"opencv-python": {
"apply_first_match": [
{
"type": "replace",
"replacement": "opencv-contrib-python",
"version": ">=4.8.0",
"reason": "opencv-contrib-python includes all opencv-python features plus extras"
}
]
},
"PIL": {
"apply_first_match": [
{
"type": "replace",
"replacement": "Pillow",
"reason": "PIL is deprecated, use Pillow instead"
}
]
},
"click": {
"apply_first_match": [
{
"condition": {
"type": "installed",
"package": "colorama",
"spec": "<0.5.0"
},
"type": "force_version",
"version": "8.1.3",
"reason": "click 8.1.3 compatible with colorama <0.5"
}
],
"apply_all_matches": [
{
"type": "pin_dependencies",
"pinned_packages": ["colorama"],
"reason": "Prevent colorama upgrade that may break compatibility"
}
]
},
"requests": {
"apply_all_matches": [
{
"type": "pin_dependencies",
"pinned_packages": ["urllib3", "certifi", "charset-normalizer"],
"on_failure": "retry_without_pin",
"reason": "Prevent urllib3 from upgrading to 2.x which has breaking changes"
}
]
},
"six": {
"restore": [
{
"target": "six",
"version": "1.16.0",
"reason": "six must be maintained at 1.16.0 for compatibility"
}
]
},
"urllib3": {
"restore": [
{
"condition": {
"type": "installed",
"spec": "!=1.26.15"
},
"target": "urllib3",
"version": "1.26.15",
"reason": "urllib3 must be 1.26.15 for compatibility with legacy code"
}
]
},
"onnxruntime": {
"apply_first_match": [
{
"condition": {
"type": "platform",
"os": "linux",
"has_gpu": true
},
"type": "replace",
"replacement": "onnxruntime-gpu",
"reason": "Use GPU version on Linux with CUDA"
}
]
},
"legacy-custom-node-package": {
"apply_first_match": [
{
"condition": {
"type": "platform",
"comfyui_version": "<1.0.0"
},
"type": "force_version",
"version": "0.9.0",
"reason": "legacy-custom-node-package 0.9.0 is compatible with ComfyUI <1.0.0"
},
{
"condition": {
"type": "platform",
"comfyui_version": ">=1.0.0"
},
"type": "force_version",
"version": "1.5.0",
"reason": "legacy-custom-node-package 1.5.0 is required for ComfyUI >=1.0.0"
}
]
},
"tensorflow": {
"apply_all_matches": [
{
"condition": {
"type": "installed",
"package": "torch"
},
"type": "warn",
"message": "Installing TensorFlow alongside PyTorch may cause CUDA conflicts",
"allow_continue": true
}
]
},
"some-package": {
"uninstall": [
{
"condition": {
"type": "installed",
"package": "conflicting-package",
"spec": ">=2.0.0"
},
"target": "conflicting-package",
"reason": "conflicting-package >=2.0.0 conflicts with some-package"
}
]
},
"banned-malicious-package": {
"uninstall": [
{
"target": "banned-malicious-package",
"reason": "Security vulnerability CVE-2024-XXXXX, always remove if attempting to install"
}
]
},
"critical-package": {
"restore": [
{
"condition": {
"type": "installed",
"package": "critical-package",
"spec": "!=1.2.3"
},
"target": "critical-package",
"version": "1.2.3",
"extra_index_url": "https://custom-repo.example.com/simple",
"reason": "critical-package must be version 1.2.3, restore if different or missing"
}
]
},
"stable-package": {
"apply_first_match": [
{
"condition": {
"type": "installed",
"package": "critical-dependency",
"spec": ">=2.0.0"
},
"type": "force_version",
"version": "1.5.0",
"extra_index_url": "https://custom-repo.example.com/simple",
"reason": "stable-package 1.5.0 is required when critical-dependency >=2.0.0 is installed"
}
]
},
"new-experimental-package": {
"apply_all_matches": [
{
"type": "pin_dependencies",
"pinned_packages": ["numpy", "pandas", "scipy"],
"on_failure": "retry_without_pin",
"reason": "new-experimental-package may upgrade numpy/pandas/scipy, pin them to prevent breakage"
}
]
},
"pytorch-addon": {
"apply_all_matches": [
{
"condition": {
"type": "installed",
"package": "torch",
"spec": ">=2.0.0"
},
"type": "pin_dependencies",
"pinned_packages": ["torch", "torchvision", "torchaudio"],
"on_failure": "fail",
"reason": "pytorch-addon must not change PyTorch ecosystem versions"
}
]
}
}
```
### Policy Structure Schema
```json
{
"$schema": "http://json-schema.org/draft-07/schema#",
"type": "object",
"patternProperties": {
"^.*$": {
"type": "object",
"properties": {
"uninstall": {
"type": "array",
"description": "When condition satisfied (or always if no condition), remove package and terminate",
"items": {
"type": "object",
"required": ["target"],
"properties": {
"condition": {
"type": "object",
"description": "Optional: always remove if absent",
"required": ["type"],
"properties": {
"type": {"enum": ["installed", "platform"]},
"package": {"type": "string", "description": "Optional: defaults to self"},
"spec": {"type": "string", "description": "Optional: version condition"},
"os": {"type": "string"},
"has_gpu": {"type": "boolean"},
"comfyui_version": {"type": "string"}
}
},
"target": {
"type": "string",
"description": "Package name to remove"
},
"reason": {"type": "string"}
}
}
},
"restore": {
"type": "array",
"description": "When condition satisfied (or always if no condition), restore package and terminate",
"items": {
"type": "object",
"required": ["target", "version"],
"properties": {
"condition": {
"type": "object",
"description": "Optional: always restore if absent",
"required": ["type"],
"properties": {
"type": {"enum": ["installed", "platform"]},
"package": {"type": "string", "description": "Optional: defaults to self"},
"spec": {"type": "string", "description": "Optional: version condition"},
"os": {"type": "string"},
"has_gpu": {"type": "boolean"},
"comfyui_version": {"type": "string"}
}
},
"target": {
"type": "string",
"description": "Package name to restore"
},
"version": {
"type": "string",
"description": "Version to restore"
},
"extra_index_url": {"type": "string"},
"reason": {"type": "string"}
}
}
},
"apply_first_match": {
"type": "array",
"description": "Execute only first condition-satisfying policy (exclusive)",
"items": {
"type": "object",
"required": ["type"],
"properties": {
"condition": {
"type": "object",
"description": "Optional: always apply if absent",
"required": ["type"],
"properties": {
"type": {"enum": ["installed", "platform"]},
"package": {"type": "string", "description": "Optional: defaults to self"},
"spec": {"type": "string", "description": "Optional: version condition"},
"os": {"type": "string"},
"has_gpu": {"type": "boolean"},
"comfyui_version": {"type": "string"}
}
},
"type": {
"enum": ["skip", "force_version", "replace"],
"description": "Exclusive action: determines installation method"
},
"version": {"type": "string"},
"replacement": {"type": "string"},
"extra_index_url": {"type": "string"},
"reason": {"type": "string"}
}
}
},
"apply_all_matches": {
"type": "array",
"description": "Execute all condition-satisfying policies (cumulative)",
"items": {
"type": "object",
"required": ["type"],
"properties": {
"condition": {
"type": "object",
"description": "Optional: always apply if absent",
"required": ["type"],
"properties": {
"type": {"enum": ["installed", "platform"]},
"package": {"type": "string", "description": "Optional: defaults to self"},
"spec": {"type": "string", "description": "Optional: version condition"},
"os": {"type": "string"},
"has_gpu": {"type": "boolean"},
"comfyui_version": {"type": "string"}
}
},
"type": {
"enum": ["pin_dependencies", "install_with", "warn"],
"description": "Cumulative action: adds installation options"
},
"pinned_packages": {
"type": "array",
"items": {"type": "string"}
},
"on_failure": {"enum": ["fail", "retry_without_pin"]},
"additional_packages": {"type": "array"},
"message": {"type": "string"},
"allow_continue": {"type": "boolean"},
"reason": {"type": "string"}
}
}
}
}
}
}
}
```
## Error Handling
* Default behavior when errors occur during policy execution:
- Log error and continue
- Only treat as installation failure when pin_dependencies's on_failure="fail"
- For other cases, leave warning and attempt originally requested installation
* pip_install: Performs pip package installation
- Uses manager_util.make_pip_cmd to generate commands, selecting between uv and pip as configured
- Allows policy application to be skipped via the override_policy flag
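For completeness, this is how a caller would bypass policies entirely, using the PipBatch API defined above (the import path is an assumption and depends on how the module is packaged):

```python
from pip_util import PipBatch  # exact import path depends on packaging

with PipBatch() as batch:
    # Policy evaluation is skipped entirely; the exact requested spec is installed.
    batch.install("opencv-python==4.8.1.78", override_policy=True)
```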

View File

@@ -0,0 +1,614 @@
# pip_util.py Implementation Plan Document
## 1. Project Overview
### Purpose
Implement a policy-based pip package management system that minimizes breakage of existing installed dependencies
### Core Features
- JSON-based policy file loading and merging (lazy loading)
- Per-package installation policy evaluation and application
- Performance optimization through batch-level pip freeze caching
- Automated conditional package removal/restoration
### Technology Stack
- Python 3.x
- packaging library (version comparison)
- subprocess (pip command execution)
- json (policy file parsing)
---
## 2. Architecture Design
### 2.1 Global Policy Management (Lazy Loading Pattern)
```
┌─────────────────────────────────────┐
│ get_pip_policy() │
│ - Auto-loads policy files on │
│ first call via lazy loading │
│ - Returns cache on subsequent calls│
└─────────────────────────────────────┘
┌─────────────────────────────────────┐
│ _pip_policy_cache (global) │
│ - Merged policy dictionary │
│ - {package_name: policy_object} │
└─────────────────────────────────────┘
```
### 2.2 Batch Operation Class (PipBatch)
```
┌─────────────────────────────────────┐
│ PipBatch (Context Manager) │
│ ┌───────────────────────────────┐ │
│ │ _installed_cache │ │
│ │ - Caches pip freeze results │ │
│ │ - {package: version} │ │
│ └───────────────────────────────┘ │
│ │
│ Public Methods: │
│ ├─ install() │
│ ├─ ensure_not_installed() │
│ └─ ensure_installed() │
│ │
│ Private Methods: │
│ ├─ _get_installed_packages() │
│ ├─ _refresh_installed_cache() │
│ ├─ _invalidate_cache() │
│ ├─ _parse_package_spec() │
│ └─ _evaluate_condition() │
└─────────────────────────────────────┘
```
### 2.3 Policy Evaluation Flow
```
install("numpy>=1.20") called
get_pip_policy() → Load policy (lazy)
Parse package name: "numpy"
Look up "numpy" policy in policy dictionary
├─ Evaluate apply_first_match (exclusive)
│ ├─ skip → Return False (don't install)
│ ├─ force_version → Change version
│ └─ replace → Replace package
├─ Evaluate apply_all_matches (cumulative)
│ ├─ pin_dependencies → Pin dependencies
│ ├─ install_with → Additional packages
│ └─ warn → Warning log
Execute pip install
Invalidate cache (_invalidate_cache)
```
---
## 3. Phase-by-Phase Implementation Plan
### Phase 1: Core Infrastructure Setup (2-3 hours)
#### Task 1.1: Project Structure and Dependency Setup (30 min)
**Implementation**:
- Create `pip_util.py` file
- Add necessary import statements
```python
import json
import logging
import platform
import re
import subprocess
from pathlib import Path
from typing import Dict, List, Optional, Tuple
from packaging.specifiers import SpecifierSet
from packaging.version import Version
from . import manager_util, context
```
- Set up logging
```python
logger = logging.getLogger(__name__)
```
**Validation**:
- Module loads without import errors
- Logger works correctly
#### Task 1.2: Global Variable and get_pip_policy() Implementation (1 hour)
**Implementation**:
- Declare global variable
```python
_pip_policy_cache: Optional[Dict] = None
```
- Implement `get_pip_policy()` function
- Check cache and early return
- Read base policy file (`{manager_util.comfyui_manager_path}/pip-policy.json`)
- Read user policy file (`{context.manager_files_path}/pip-policy.user.json`)
- Create file if doesn't exist (for user policy)
- Merge policies (complete package-level replacement)
- Save to cache and return
**Exception Handling**:
- `FileNotFoundError`: File not found → Use empty dictionary
- `json.JSONDecodeError`: JSON parse failure → Warning log + empty dictionary
- General exception: Warning log + empty dictionary
**Validation**:
- Returns empty dictionary when policy files don't exist
- Returns correct merged result when policy files exist
- Confirms cache usage on second call (load log appears only once)
#### Task 1.3: PipBatch Class Basic Structure (30 min)
**Implementation**:
- Class definition and `__init__`
```python
class PipBatch:
def __init__(self):
self._installed_cache: Optional[Dict[str, str]] = None
```
- Context manager methods (`__enter__`, `__exit__`)
```python
def __enter__(self):
return self
def __exit__(self, exc_type, exc_val, exc_tb):
self._installed_cache = None
return False
```
**Validation**:
- `with PipBatch() as batch:` syntax works correctly
- Cache cleared on `__exit__` call
---
### Phase 2: Caching and Utility Methods (2-3 hours)
#### Task 2.1: pip freeze Caching Methods (1 hour)
**Implementation**:
- Implement `_refresh_installed_cache()`
- Call `manager_util.make_pip_cmd(["freeze"])`
- Execute command via subprocess
- Parse output (package==version format)
- Exclude editable packages (-e) and comments (#)
- Convert to dictionary and store in `self._installed_cache`
- Implement `_get_installed_packages()`
- Call `_refresh_installed_cache()` if cache is None
- Return cache
- Implement `_invalidate_cache()`
- Set `self._installed_cache = None`
**Exception Handling**:
- `subprocess.CalledProcessError`: pip freeze failure → Empty dictionary
- Parse error: Ignore line + warning log
**Validation**:
- pip freeze results correctly parsed into dictionary
- New load occurs after cache invalidation and re-query
#### Task 2.2: Package Spec Parsing (30 min)
**Implementation**:
- Implement `_parse_package_spec(package_info)`
- Regex pattern: `^([a-zA-Z0-9_-]+)([><=!~]+.*)?$`
- Split package name and version spec
- Return tuple: `(package_name, version_spec)`
**Exception Handling**:
- Parse failure: Raise `ValueError`
**Validation**:
- "numpy" → ("numpy", None)
- "numpy==1.26.0" → ("numpy", "==1.26.0")
- "pandas>=2.0.0" → ("pandas", ">=2.0.0")
- Invalid format → ValueError
#### Task 2.3: Condition Evaluation Method (1.5 hours)
**Implementation**:
- Implement `_evaluate_condition(condition, package_name, installed_packages)`
**Handling by Condition Type**:
1. **condition is None**: Always return True
2. **"installed" type**:
- `target_package = condition.get("package", package_name)`
- Check version with `installed_packages.get(target_package)`
- If spec exists, compare using `packaging.specifiers.SpecifierSet`
- If no spec, only check installation status
3. **"platform" type**:
- `os` condition: Compare with `platform.system()`
- `has_gpu` condition: Check `torch.cuda.is_available()` (False if torch unavailable)
- `comfyui_version` condition: TODO (currently warning)
**Exception Handling**:
- Version comparison failure: Warning log + return False
- Unknown condition type: Warning log + return False
**Validation**:
- Write test cases for each condition type
- Verify edge case handling (torch not installed, invalid version format, etc.)
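A minimal sketch of the `has_gpu` probe described above, under the plan's assumption that torch is the only detection mechanism:

```python
def has_gpu() -> bool:
    """Return True when CUDA is available; False when torch is missing or broken."""
    try:
        import torch
        return torch.cuda.is_available()
    except Exception:
        return False
```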
---
### Phase 3: Core Installation Logic Implementation (4-5 hours)
#### Task 3.1: install() Method - Basic Flow (2 hours)
**Implementation**:
1. Parse package spec (`_parse_package_spec`)
2. Query installed package cache (`_get_installed_packages`)
3. If `override_policy=True`, install directly and return
4. Call `get_pip_policy()` to load policy
5. Default installation if no policy exists
**Validation**:
- Verify policy ignored when override_policy=True
- Verify default installation for packages without policy
#### Task 3.2: install() Method - apply_first_match Policy (1 hour)
**Implementation**:
- Iterate through policy list top-to-bottom
- Evaluate each policy's condition (`_evaluate_condition`)
- When condition satisfied:
- **skip**: Log reason and return False
- **force_version**: Force version change
- **replace**: Replace package
- Apply only first match (break)
**Validation**:
- Verify installation blocked by skip policy
- Verify version changed by force_version
- Verify package replaced by replace
#### Task 3.3: install() Method - apply_all_matches Policy (1 hour)
**Implementation**:
- Iterate through policy list top-to-bottom
- Evaluate each policy's condition
- Apply all condition-satisfying policies:
- **pin_dependencies**: Pin to installed version
- **install_with**: Add to additional package list
- **warn**: Output warning log
**Validation**:
- Verify multiple policies applied simultaneously
- Verify version pinning by pin_dependencies
- Verify additional package installation by install_with
#### Task 3.4: install() Method - Installation Execution and Retry Logic (1 hour)
**Implementation**:
1. Compose final package list
2. Generate command using `manager_util.make_pip_cmd()`
3. Handle `extra_index_url`
4. Execute installation via subprocess
5. Handle failure based on on_failure setting:
- `retry_without_pin`: Retry without pins
- `fail`: Raise exception
- Other: Warning log
6. Invalidate cache on success
**Validation**:
- Verify normal installation
- Verify retry logic on pin failure
- Verify error handling
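A sketch of the failure-handling branch in this task; the real code would route through manager_util.make_pip_cmd, while the plain `python -m pip` form and the `install_with_pins` name here are only to keep the sketch self-contained:

```python
import logging
import subprocess
import sys

logger = logging.getLogger(__name__)

def install_with_pins(main_pkg, pinned, on_failure="retry_without_pin"):
    """Try the pinned install first; degrade according to on_failure."""
    base = [sys.executable, "-m", "pip", "install"]
    try:
        subprocess.check_output(base + [main_pkg] + pinned, universal_newlines=True)
        return True
    except subprocess.CalledProcessError:
        if on_failure == "retry_without_pin" and pinned:
            subprocess.check_output(base + [main_pkg], universal_newlines=True)
            return True
        if on_failure == "fail":
            raise
        logger.warning("install failed for %s; continuing", main_pkg)
        return False
```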
---
### Phase 4: Batch Operation Methods Implementation (2-3 hours)
#### Task 4.1: ensure_not_installed() Implementation (1.5 hours)
**Implementation**:
1. Call `get_pip_policy()`
2. Iterate through all package policies
3. Check each package's uninstall policy
4. When condition satisfied:
- Check if target package is installed
- If installed, execute `pip uninstall -y {target}`
- Remove from cache
- Add to removal list
5. Execute only first match (per package)
6. Return list of removed packages
**Exception Handling**:
- Individual package removal failure: Warning log + continue
**Validation**:
- Verify package removal by uninstall policy
- Verify batch removal of multiple packages
- Verify continued processing of other packages even on removal failure
#### Task 4.2: ensure_installed() Implementation (1.5 hours)
**Implementation**:
1. Call `get_pip_policy()`
2. Iterate through all package policies
3. Check each package's restore policy
4. When condition satisfied:
- Check target package's current version
- If absent or different version:
- Execute `pip install {target}=={version}`
- Add extra_index_url if present
- Update cache
- Add to restoration list
5. Execute only first match (per package)
6. Return list of restored packages
**Exception Handling**:
- Individual package installation failure: Warning log + continue
**Validation**:
- Verify package restoration by restore policy
- Verify reinstallation on version mismatch
- Verify continued processing of other packages even on restoration failure
---
## 4. Testing Strategy
### 4.1 Unit Tests
#### Policy Loading Tests
```python
def test_get_pip_policy_empty():
"""Returns empty dictionary when policy files don't exist"""
def test_get_pip_policy_merge():
"""Correctly merges base and user policies"""
def test_get_pip_policy_cache():
"""Uses cache on second call"""
```
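As an illustration, the merge test could be fleshed out roughly as follows; this assumes pytest's tmp_path/monkeypatch fixtures, that the module globals can be redirected, and the import paths shown are assumptions:

```python
def test_get_pip_policy_merge(tmp_path, monkeypatch):
    import pip_util, manager_util, context  # import paths depend on packaging
    monkeypatch.setattr(manager_util, "comfyui_manager_path", str(tmp_path))
    monkeypatch.setattr(context, "manager_files_path", str(tmp_path))
    monkeypatch.setattr(pip_util, "_pip_policy_cache", None)  # reset lazy cache
    (tmp_path / "pip-policy.json").write_text(
        '{"six": {"restore": [{"target": "six", "version": "1.16.0"}]}}')
    (tmp_path / "pip-policy.user.json").write_text(
        '{"six": {"apply_first_match": [{"type": "skip", "reason": "test"}]}}')
    merged = pip_util.get_pip_policy()
    assert "restore" not in merged["six"]  # whole-package replacement
    assert merged["six"]["apply_first_match"][0]["type"] == "skip"
```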
#### Package Parsing Tests
```python
def test_parse_package_spec_simple():
"""'numpy' → ('numpy', None)"""
def test_parse_package_spec_version():
"""'numpy==1.26.0' → ('numpy', '==1.26.0')"""
def test_parse_package_spec_range():
"""'pandas>=2.0.0' → ('pandas', '>=2.0.0')"""
def test_parse_package_spec_invalid():
"""Invalid format → ValueError"""
```
#### Condition Evaluation Tests
```python
def test_evaluate_condition_none():
"""None condition → True"""
def test_evaluate_condition_installed():
"""Evaluates installed package condition"""
def test_evaluate_condition_platform():
"""Evaluates platform condition"""
```
### 4.2 Integration Tests
#### Installation Policy Tests
```python
def test_install_with_skip_policy():
"""Blocks installation with skip policy"""
def test_install_with_force_version():
"""Changes version with force_version policy"""
def test_install_with_replace():
"""Replaces package with replace policy"""
def test_install_with_pin_dependencies():
"""Pins versions with pin_dependencies"""
```
#### Batch Operation Tests
```python
def test_ensure_not_installed():
"""Removes packages with uninstall policy"""
def test_ensure_installed():
"""Restores packages with restore policy"""
def test_batch_workflow():
"""Tests complete batch workflow"""
```
### 4.3 Edge Case Tests
```python
def test_install_without_policy():
"""Default installation for packages without policy"""
def test_install_override_policy():
"""Ignores policy with override_policy=True"""
def test_pip_freeze_failure():
"""Handles empty cache on pip freeze failure"""
def test_json_parse_error():
"""Handles malformed JSON files"""
def test_subprocess_failure():
"""Exception handling when pip command fails"""
```
---
## 5. Error Handling Strategy
### 5.1 Policy Loading Errors
- **File not found**: Warning log + empty dictionary
- **JSON parse failure**: Error log + empty dictionary
- **No read permission**: Warning log + empty dictionary
### 5.2 Package Installation Errors
- **pip command failure**: Depends on on_failure setting
- `retry_without_pin`: Retry
- `fail`: Raise exception
- Other: Warning log
- **Invalid package spec**: Raise ValueError
### 5.3 Batch Operation Errors
- **Individual package failure**: Warning log + continue to next package
- **pip freeze failure**: Empty dictionary + warning log
---
## 6. Performance Optimization
### 6.1 Caching Strategy
- **Policy cache**: Reused program-wide via global variable
- **pip freeze cache**: Reused per batch, invalidated after install/remove
- **lazy loading**: Load only when needed
### 6.2 Parallel Processing Considerations
- Current implementation is not thread-safe
- Consider adding threading.Lock if needed
- Batch operations execute sequentially only
---
## 7. Documentation Requirements
### 7.1 Code Documentation
- Docstrings required for all public methods
- Specify parameters, return values, and exceptions
- Include usage examples
### 7.2 User Guide
- Explain `pip-policy.json` structure
- Policy writing examples
- Usage pattern examples
### 7.3 Developer Guide
- Architecture explanation
- Extension methods
- Test execution methods
---
## 8. Deployment Checklist
### 8.1 Code Quality
- [ ] All unit tests pass
- [ ] All integration tests pass
- [ ] Code coverage ≥80%
- [ ] No linting errors (flake8, pylint)
- [ ] Type hints complete (mypy passes)
### 8.2 Documentation
- [ ] README.md written
- [ ] API documentation generated
- [ ] Example policy files written
- [ ] Usage guide written
### 8.3 Performance Verification
- [ ] Policy loading performance measured (<100ms)
- [ ] pip freeze caching effectiveness verified (≥50% speed improvement)
- [ ] Memory usage confirmed (<10MB)
### 8.4 Security Verification
- [ ] Input validation complete
- [ ] Path traversal prevention
- [ ] Command injection prevention
- [ ] JSON parsing safety confirmed
---
## 9. Future Improvements
### 9.1 Short-term (1-2 weeks)
- Implement ComfyUI version check
- Implement user confirmation prompt (allow_continue=false)
- Thread-safe improvements (add Lock)
### 9.2 Mid-term (1-2 months)
- Add policy validation tools
- Policy migration tools
- More detailed logging and debugging options
### 9.3 Long-term (3-6 months)
- Web UI for policy management
- Provide policy templates
- Community policy sharing system
---
## 10. Risks and Mitigation Strategies
### Risk 1: Policy Conflicts
**Description**: Policies for different packages may conflict
**Mitigation**: Develop policy validation tools, conflict detection algorithm
### Risk 2: pip Version Compatibility
**Description**: Must work across various pip versions
**Mitigation**: Test on multiple pip versions, version-specific branching
### Risk 3: Performance Degradation
**Description**: Installation speed may decrease due to policy evaluation
**Mitigation**: Optimize caching, minimize condition evaluation
### Risk 4: Policy Misconfiguration
**Description**: Users may write incorrect policies
**Mitigation**: JSON schema validation, provide examples and guides
---
## 11. Timeline
### Week 1
- Phase 1: Core Infrastructure Setup (Day 1-2)
- Phase 2: Caching and Utility Methods (Day 3-4)
- Write unit tests (Day 5)
### Week 2
- Phase 3: Core Installation Logic Implementation (Day 1-3)
- Phase 4: Batch Operation Methods Implementation (Day 4-5)
### Week 3
- Integration and edge case testing (Day 1-2)
- Documentation (Day 3)
- Code review and refactoring (Day 4-5)
### Week 4
- Performance optimization (Day 1-2)
- Security verification (Day 3)
- Final testing and deployment preparation (Day 4-5)
---
## 12. Success Criteria
### Feature Completeness
- ✅ All policy types (uninstall, apply_first_match, apply_all_matches, restore) work correctly
- ✅ Policy merge logic works correctly
- ✅ Batch operations perform normally
### Quality Metrics
- ✅ Test coverage ≥80%
- ✅ All tests pass
- ✅ 0 linting errors
- ✅ 100% type hint completion
### Performance Metrics
- ✅ Policy loading <100ms
- ✅ ≥50% performance improvement with pip freeze caching
- ✅ Memory usage <10MB
### Usability
- ✅ Clear error messages
- ✅ Sufficient documentation
- ✅ Verified in real-world use cases

View File

@@ -0,0 +1,629 @@
"""
pip_util - Policy-based pip package management system
This module provides a policy-based approach to pip package installation
to minimize dependency conflicts and protect existing installed packages.
Usage:
# Batch operations (policy auto-loaded)
with PipBatch() as batch:
batch.ensure_not_installed()
batch.install("numpy>=1.20")
batch.install("pandas>=2.0")
batch.install("scipy>=1.7")
batch.ensure_installed()
"""
import json
import logging
import platform
import re
import subprocess
from pathlib import Path
from typing import Dict, List, Optional, Tuple
from packaging.requirements import Requirement
from packaging.specifiers import SpecifierSet
from packaging.version import Version
from . import manager_util, context
logger = logging.getLogger(__name__)
# Global policy cache (lazy loaded on first access)
_pip_policy_cache: Optional[Dict] = None
def get_pip_policy() -> Dict:
"""
Get pip policy with lazy loading.
Returns the cached policy if available, otherwise loads it from files.
This function automatically loads the policy on first access.
Thread safety: This function is NOT thread-safe.
Ensure single-threaded access during initialization.
Returns:
Dictionary of merged pip policies
Example:
>>> policy = get_pip_policy()
>>> numpy_policy = policy.get("numpy", {})
"""
global _pip_policy_cache
# Return cached policy if already loaded
if _pip_policy_cache is not None:
logger.debug("Returning cached pip policy")
return _pip_policy_cache
logger.info("Loading pip policies...")
# Load base policy
base_policy = {}
base_policy_path = Path(manager_util.comfyui_manager_path) / "pip-policy.json"
try:
if base_policy_path.exists():
with open(base_policy_path, 'r', encoding='utf-8') as f:
base_policy = json.load(f)
logger.debug(f"Loaded base policy from {base_policy_path}")
else:
logger.warning(f"Base policy file not found: {base_policy_path}")
except json.JSONDecodeError as e:
logger.error(f"Failed to parse base policy JSON: {e}")
base_policy = {}
except Exception as e:
logger.warning(f"Failed to read base policy file: {e}")
base_policy = {}
# Load user policy
user_policy = {}
user_policy_path = Path(context.manager_files_path) / "pip-policy.user.json"
try:
if user_policy_path.exists():
with open(user_policy_path, 'r', encoding='utf-8') as f:
user_policy = json.load(f)
logger.debug(f"Loaded user policy from {user_policy_path}")
else:
# Create empty user policy file
user_policy_path.parent.mkdir(parents=True, exist_ok=True)
with open(user_policy_path, 'w', encoding='utf-8') as f:
json.dump({"_comment": "User-specific pip policy overrides"}, f, indent=2)
logger.info(f"Created empty user policy file: {user_policy_path}")
except json.JSONDecodeError as e:
logger.warning(f"Failed to parse user policy JSON: {e}")
user_policy = {}
except Exception as e:
logger.warning(f"Failed to read user policy file: {e}")
user_policy = {}
# Merge policies (package-level override: user completely replaces base per package)
merged_policy = base_policy.copy()
for package_name, package_policy in user_policy.items():
if package_name.startswith("_"): # Skip metadata fields like _comment
continue
merged_policy[package_name] = package_policy # Complete package replacement
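# Example: if the base policy defines "numpy" and the user policy also defines
# "numpy", the user's entry is used as-is; entries are swapped whole, never
# deep-merged field by field.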
# Store in global cache
_pip_policy_cache = merged_policy
logger.info(f"Policy loaded successfully: {len(_pip_policy_cache)} package policies")
return _pip_policy_cache
class PipBatch:
"""
Pip package installation batch manager.
Maintains pip freeze cache during a batch of operations for performance optimization.
Usage pattern:
# Batch operations (policy auto-loaded)
with PipBatch() as batch:
batch.ensure_not_installed()
batch.install("numpy>=1.20")
batch.install("pandas>=2.0")
batch.install("scipy>=1.7")
batch.ensure_installed()
Attributes:
_installed_cache: Cache of installed packages from pip freeze
"""
def __init__(self):
"""Initialize PipBatch with empty cache."""
self._installed_cache: Optional[Dict[str, str]] = None
def __enter__(self):
"""Enter context manager."""
return self
def __exit__(self, exc_type, exc_val, exc_tb):
"""Exit context manager and clear cache."""
self._installed_cache = None
return False
def _refresh_installed_cache(self) -> None:
"""
Refresh the installed packages cache by executing pip freeze.
Parses pip freeze output into a dictionary of {package_name: version}.
Ignores editable packages and comments.
Raises:
No exceptions raised - failures result in empty cache with warning log
"""
try:
cmd = manager_util.make_pip_cmd(["freeze"])
result = subprocess.run(cmd, capture_output=True, text=True, check=True)
packages = {}
for line in result.stdout.strip().split('\n'):
line = line.strip()
# Skip empty lines
if not line:
continue
# Skip editable packages (-e /path/to/package or -e git+https://...)
# Editable packages don't have version info and are typically development-only
if line.startswith('-e '):
continue
# Skip comments (defensive: pip freeze typically doesn't output comments,
# but this handles manually edited requirements.txt or future pip changes)
if line.startswith('#'):
continue
# Parse package==version
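# e.g. "numpy==1.26.4" -> packages["numpy"] = "1.26.4"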
if '==' in line:
try:
package_name, version = line.split('==', 1)
packages[package_name.strip()] = version.strip()
except ValueError:
logger.warning(f"Failed to parse pip freeze line: {line}")
continue
self._installed_cache = packages
logger.debug(f"Refreshed installed packages cache: {len(packages)} packages")
except subprocess.CalledProcessError as e:
logger.warning(f"pip freeze failed: {e}")
self._installed_cache = {}
except Exception as e:
logger.warning(f"Failed to refresh installed packages cache: {e}")
self._installed_cache = {}
def _get_installed_packages(self) -> Dict[str, str]:
"""
Get cached installed packages, refresh if cache is None.
Returns:
Dictionary of {package_name: version}
"""
if self._installed_cache is None:
self._refresh_installed_cache()
return self._installed_cache
def _invalidate_cache(self) -> None:
"""
Invalidate the installed packages cache.
Should be called after install/uninstall operations.
"""
self._installed_cache = None
def _parse_package_spec(self, package_info: str) -> Tuple[str, Optional[str]]:
"""
Parse package spec string into package name and version spec using PEP 508.
Uses the packaging library to properly parse package specifications according to
PEP 508 standard, which handles complex cases like extras and multiple version
constraints that simple regex cannot handle correctly.
Args:
package_info: Package specification like "numpy", "numpy==1.26.0", "numpy>=1.20.0",
or complex specs like "package[extra]>=1.0,<2.0"
Returns:
Tuple of (package_name, version_spec)
Examples: ("numpy", "==1.26.0"), ("pandas", ">=2.0.0"), ("scipy", None)
Package names are normalized (e.g., "NumPy" -> "numpy")
Raises:
ValueError: If package_info cannot be parsed according to PEP 508
Example:
>>> batch._parse_package_spec("numpy>=1.20")
("numpy", ">=1.20")
>>> batch._parse_package_spec("requests[security]>=2.0,<3.0")
("requests", ">=2.0,<3.0")
"""
try:
    req = Requirement(package_info)
    # Requirement preserves the name's original spelling; canonicalize it
    # explicitly so "NumPy" and "numpy" resolve to the same entry, as the
    # docstring promises
    package_name = canonicalize_name(req.name)
    version_spec = str(req.specifier) if req.specifier else None
    return package_name, version_spec
except Exception as e:
raise ValueError(f"Invalid package spec: {package_info}") from e
def _evaluate_condition(self, condition: Optional[Dict], package_name: str,
installed_packages: Dict[str, str]) -> bool:
"""
Evaluate policy condition and return whether it's satisfied.
Args:
condition: Policy condition object (dict) or None
package_name: Current package being processed
installed_packages: Dictionary of {package_name: version}
Returns:
True if condition is satisfied, False otherwise
None condition always returns True
Example:
>>> condition = {"type": "installed", "package": "numpy", "spec": ">=1.20"}
>>> batch._evaluate_condition(condition, "numba", {"numpy": "1.26.0"})
True
"""
# No condition means always satisfied
if condition is None:
return True
condition_type = condition.get("type")
if condition_type == "installed":
# Check if a package is installed with optional version spec
target_package = condition.get("package", package_name)
installed_version = installed_packages.get(target_package)
# Package not installed
if installed_version is None:
return False
# Check version spec if provided
spec = condition.get("spec")
if spec:
try:
specifier = SpecifierSet(spec)
return Version(installed_version) in specifier
except Exception as e:
logger.warning(f"Failed to compare version {installed_version} with spec {spec}: {e}")
return False
# Package is installed (no spec check)
return True
elif condition_type == "platform":
# Check platform conditions (os, has_gpu, comfyui_version)
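# Illustrative condition: {"type": "platform", "os": "linux", "has_gpu": true}
# Every sub-condition present must hold for the overall condition to pass.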
conditions_met = True
# Check OS
if "os" in condition:
expected_os = condition["os"].lower()
actual_os = platform.system().lower()
if expected_os not in actual_os and actual_os not in expected_os:
conditions_met = False
# Check GPU availability
if "has_gpu" in condition:
expected_gpu = condition["has_gpu"]
try:
import torch
has_gpu = torch.cuda.is_available()
except ImportError:
has_gpu = False
if expected_gpu != has_gpu:
conditions_met = False
# Check ComfyUI version
if "comfyui_version" in condition:
# TODO: Implement ComfyUI version check
logger.warning("ComfyUI version condition not yet implemented")
return conditions_met
else:
logger.warning(f"Unknown condition type: {condition_type}")
return False
def install(self, package_info: str, extra_index_url: Optional[str] = None,
override_policy: bool = False) -> bool:
"""
Install a pip package with policy-based modifications.
Args:
package_info: Package specification (e.g., "numpy", "numpy==1.26.0", "numpy>=1.20.0")
extra_index_url: Additional package repository URL (optional)
override_policy: If True, skip policy application and install directly (default: False)
Returns:
True if installation succeeded, False if skipped by policy
Raises:
ValueError: If package_info cannot be parsed
subprocess.CalledProcessError: If installation fails (depending on policy on_failure settings)
Example:
>>> with PipBatch() as batch:
... batch.install("numpy>=1.20")
... batch.install("torch", override_policy=True)
"""
# Parse package spec
try:
package_name, version_spec = self._parse_package_spec(package_info)
except ValueError as e:
logger.error(f"Invalid package spec: {e}")
raise
# Get installed packages cache
installed_packages = self._get_installed_packages()
# Override policy - skip to direct installation
if override_policy:
logger.info(f"Installing {package_info} (policy override)")
cmd = manager_util.make_pip_cmd(["install", package_info])
if extra_index_url:
cmd.extend(["--extra-index-url", extra_index_url])
try:
subprocess.run(cmd, check=True)
self._invalidate_cache()
logger.info(f"Successfully installed {package_info}")
return True
except subprocess.CalledProcessError as e:
logger.error(f"Failed to install {package_info}: {e}")
raise
# Get policy (lazy loading)
pip_policy = get_pip_policy()
policy = pip_policy.get(package_name, {})
# If no policy, proceed with default installation
if not policy:
logger.debug(f"No policy found for {package_name}, proceeding with default installation")
cmd = manager_util.make_pip_cmd(["install", package_info])
if extra_index_url:
cmd.extend(["--extra-index-url", extra_index_url])
try:
subprocess.run(cmd, check=True)
self._invalidate_cache()
logger.info(f"Successfully installed {package_info}")
return True
except subprocess.CalledProcessError as e:
logger.error(f"Failed to install {package_info}: {e}")
raise
# Apply apply_first_match policies (exclusive - first match only)
final_package_info = package_info
final_extra_index_url = extra_index_url
policy_reason = None
apply_first_match = policy.get("apply_first_match", [])
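# Illustrative policy items for this stage (fields follow this draft):
#   {"type": "skip", "reason": "..."}                           -> abort install
#   {"type": "force_version", "version": "1.26.4"}              -> name==1.26.4
#   {"type": "replace", "replacement": "opencv-contrib-python"} -> swap package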
for policy_item in apply_first_match:
condition = policy_item.get("condition")
if self._evaluate_condition(condition, package_name, installed_packages):
policy_type = policy_item.get("type")
if policy_type == "skip":
reason = policy_item.get("reason", "No reason provided")
logger.info(f"Skipping installation of {package_name}: {reason}")
return False
elif policy_type == "force_version":
forced_version = policy_item.get("version")
final_package_info = f"{package_name}=={forced_version}"
policy_reason = policy_item.get("reason")
if "extra_index_url" in policy_item:
final_extra_index_url = policy_item["extra_index_url"]
logger.info(f"Force version for {package_name}: {forced_version} ({policy_reason})")
break # First match only
elif policy_type == "replace":
replacement = policy_item.get("replacement")
replacement_version = policy_item.get("version", "")
if replacement_version:
final_package_info = f"{replacement}{replacement_version}"
else:
final_package_info = replacement
policy_reason = policy_item.get("reason")
if "extra_index_url" in policy_item:
final_extra_index_url = policy_item["extra_index_url"]
logger.info(f"Replacing {package_name} with {final_package_info}: {policy_reason}")
break # First match only
# Apply apply_all_matches policies (cumulative - all matches)
additional_packages = []
pinned_packages = []
pin_on_failure = "fail"
apply_all_matches = policy.get("apply_all_matches", [])
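# Illustrative policy items for this stage (all matching items are applied):
#   {"type": "pin_dependencies", "pinned_packages": ["numpy"], "on_failure": "retry_without_pin"}
#   {"type": "install_with", "additional_packages": ["some-extra-package"]}
#   {"type": "warn", "message": "...", "allow_continue": true}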
for policy_item in apply_all_matches:
condition = policy_item.get("condition")
if self._evaluate_condition(condition, package_name, installed_packages):
policy_type = policy_item.get("type")
if policy_type == "pin_dependencies":
pin_list = policy_item.get("pinned_packages", [])
for pkg in pin_list:
installed_version = installed_packages.get(pkg)
if installed_version:
pinned_packages.append(f"{pkg}=={installed_version}")
else:
logger.warning(f"Cannot pin {pkg}: not currently installed")
pin_on_failure = policy_item.get("on_failure", "fail")
reason = policy_item.get("reason", "")
logger.info(f"Pinning dependencies: {pinned_packages} ({reason})")
elif policy_type == "install_with":
additional = policy_item.get("additional_packages", [])
additional_packages.extend(additional)
reason = policy_item.get("reason", "")
logger.info(f"Installing additional packages: {additional} ({reason})")
elif policy_type == "warn":
message = policy_item.get("message", "")
allow_continue = policy_item.get("allow_continue", True)
logger.warning(f"Policy warning for {package_name}: {message}")
if not allow_continue:
# TODO: Implement user confirmation
logger.info("User confirmation required (not implemented, continuing)")
# Build final package list
packages_to_install = [final_package_info] + pinned_packages + additional_packages
# Execute installation
cmd = manager_util.make_pip_cmd(["install"] + packages_to_install)
if final_extra_index_url:
cmd.extend(["--extra-index-url", final_extra_index_url])
try:
subprocess.run(cmd, check=True)
self._invalidate_cache()
if policy_reason:
logger.info(f"Successfully installed {final_package_info}: {policy_reason}")
else:
logger.info(f"Successfully installed {final_package_info}")
return True
except subprocess.CalledProcessError as e:
# Handle installation failure
if pinned_packages and pin_on_failure == "retry_without_pin":
logger.warning("Installation failed with pinned dependencies, retrying without pins")
retry_cmd = manager_util.make_pip_cmd(["install", final_package_info])
if final_extra_index_url:
retry_cmd.extend(["--extra-index-url", final_extra_index_url])
try:
subprocess.run(retry_cmd, check=True)
self._invalidate_cache()
logger.info(f"Successfully installed {final_package_info} (without pins)")
return True
except subprocess.CalledProcessError as retry_error:
logger.error(f"Retry installation also failed: {retry_error}")
raise
elif pin_on_failure == "fail":
logger.error(f"Installation failed: {e}")
raise
else:
logger.warning(f"Installation failed, but continuing: {e}")
return False
def ensure_not_installed(self) -> List[str]:
"""
Remove all packages matching uninstall policies (batch processing).
Iterates through all package policies and executes uninstall actions
where conditions are satisfied.
Returns:
List of removed package names
Example:
>>> with PipBatch() as batch:
... removed = batch.ensure_not_installed()
... print(f"Removed: {removed}")
"""
# Get policy (lazy loading)
pip_policy = get_pip_policy()
installed_packages = self._get_installed_packages()
removed_packages = []
for package_name, policy in pip_policy.items():
uninstall_policies = policy.get("uninstall", [])
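# Illustrative uninstall policy (hypothetical packages):
#   {"target": "opencv-python", "reason": "replaced by opencv-contrib-python", "condition": null}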
for uninstall_policy in uninstall_policies:
condition = uninstall_policy.get("condition")
if self._evaluate_condition(condition, package_name, installed_packages):
target = uninstall_policy.get("target")
reason = uninstall_policy.get("reason", "No reason provided")
# Check if target is installed
if target in installed_packages:
try:
cmd = manager_util.make_pip_cmd(["uninstall", "-y", target])
subprocess.run(cmd, check=True)
logger.info(f"Uninstalled {target}: {reason}")
removed_packages.append(target)
# Remove from cache
del installed_packages[target]
except subprocess.CalledProcessError as e:
logger.warning(f"Failed to uninstall {target}: {e}")
# First match only per package
break
return removed_packages
def ensure_installed(self) -> List[str]:
"""
Restore all packages matching restore policies (batch processing).
Iterates through all package policies and executes restore actions
where conditions are satisfied.
Returns:
List of restored package names
Example:
>>> with PipBatch() as batch:
... batch.install("numpy>=1.20")
... restored = batch.ensure_installed()
... print(f"Restored: {restored}")
"""
# Get policy (lazy loading)
pip_policy = get_pip_policy()
installed_packages = self._get_installed_packages()
restored_packages = []
for package_name, policy in pip_policy.items():
restore_policies = policy.get("restore", [])
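# Illustrative restore policy (hypothetical pin):
#   {"target": "numpy", "version": "1.26.4", "reason": "restore known-good baseline"}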
for restore_policy in restore_policies:
condition = restore_policy.get("condition")
if self._evaluate_condition(condition, package_name, installed_packages):
target = restore_policy.get("target")
version = restore_policy.get("version")
reason = restore_policy.get("reason", "No reason provided")
extra_index_url = restore_policy.get("extra_index_url")
# Check if target needs restoration
current_version = installed_packages.get(target)
if current_version is None or current_version != version:
try:
package_spec = f"{target}=={version}"
cmd = manager_util.make_pip_cmd(["install", package_spec])
if extra_index_url:
cmd.extend(["--extra-index-url", extra_index_url])
subprocess.run(cmd, check=True)
logger.info(f"Restored {package_spec}: {reason}")
restored_packages.append(target)
# Update cache
installed_packages[target] = version
except subprocess.CalledProcessError as e:
logger.warning(f"Failed to restore {target}: {e}")
# First match only per package
break
return restored_packages

File diff suppressed because it is too large.

@@ -2,6 +2,8 @@ import sys
import subprocess
import os
from . import manager_util
def security_check():
print("[START] Security scan")
@@ -66,18 +68,23 @@ https://blog.comfy.org/comfyui-statement-on-the-ultralytics-crypto-miner-situati
"lolMiner": [os.path.join(comfyui_path, 'lolMiner')]
}
installed_pips = subprocess.check_output([sys.executable, '-m', "pip", "freeze"], text=True)
installed_pips = subprocess.check_output(manager_util.make_pip_cmd(["freeze"]), text=True)
detected = set()
try:
anthropic_info = subprocess.check_output([sys.executable, '-m', "pip", "show", "anthropic"], text=True, stderr=subprocess.DEVNULL)
anthropic_reqs = [x for x in anthropic_info.split('\n') if x.startswith("Requires")][0].split(': ')[1]
if "pycrypto" in anthropic_reqs:
location = [x for x in anthropic_info.split('\n') if x.startswith("Location")][0].split(': ')[1]
for fi in os.listdir(location):
if fi.startswith("anthropic"):
guide["ComfyUI_LLMVISION"] = f"\n0.Remove {os.path.join(location, fi)}" + guide["ComfyUI_LLMVISION"]
detected.add("ComfyUI_LLMVISION")
anthropic_info = subprocess.check_output(manager_util.make_pip_cmd(["show", "anthropic"]), text=True, stderr=subprocess.DEVNULL)
requires_lines = [x for x in anthropic_info.split('\n') if x.startswith("Requires")]
if requires_lines:
anthropic_reqs = requires_lines[0].split(": ", 1)[1]
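# e.g. "Requires: anyio, distro, httpx" -> "anyio, distro, httpx"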
if "pycrypto" in anthropic_reqs:
location_lines = [x for x in anthropic_info.split('\n') if x.startswith("Location")]
if location_lines:
location = location_lines[0].split(": ", 1)[1]
for fi in os.listdir(location):
if fi.startswith("anthropic"):
guide["ComfyUI_LLMVISION"] = (f"\n0.Remove {os.path.join(location, fi)}" + guide["ComfyUI_LLMVISION"])
detected.add("ComfyUI_LLMVISION")
except subprocess.CalledProcessError:
pass

File diff suppressed because it is too large.

@@ -0,0 +1,68 @@
# Data Models
This directory contains Pydantic models for ComfyUI Manager, providing type safety, validation, and serialization for the API and internal data structures.
## Overview
- `generated_models.py` - All models auto-generated from OpenAPI spec
- `__init__.py` - Package exports for all models
**Note**: All models are now auto-generated from the OpenAPI specification. Manual model files (`task_queue.py`, `state_management.py`) have been deprecated in favor of a single source of truth.
## Generating Types from OpenAPI
All models are automatically generated from the OpenAPI specification using `datamodel-codegen`. This ensures type safety and consistency between the API specification and the Python code.
### Prerequisites
Install the code generator:
```bash
pipx install datamodel-code-generator
```
### Generation Command
To regenerate all models after updating the OpenAPI spec:
```bash
datamodel-codegen \
--use-subclass-enum \
--field-constraints \
--strict-types bytes \
--use-double-quotes \
--input openapi.yaml \
--output comfyui_manager/data_models/generated_models.py \
--output-model-type pydantic_v2.BaseModel
```
### When to Regenerate
You should regenerate the models when:
1. **Adding new API endpoints** that return new data structures
2. **Modifying existing schemas** in the OpenAPI specification
3. **Adding new state management features** that require new models
### Important Notes
- **Single source of truth**: All models are now generated from `openapi.yaml`
- **No manual models**: All previously manual models have been migrated to the OpenAPI spec
- **OpenAPI requirements**: New schemas must be referenced in API paths to be generated by datamodel-codegen
- **Validation**: Always validate the OpenAPI spec before generation:
```bash
python3 -c "import yaml; yaml.safe_load(open('openapi.yaml'))"
```
### Example: Adding New State Models
1. Add your schema to `openapi.yaml` under `components/schemas/`
2. Reference the schema in an API endpoint response
3. Run the generation command above
4. Update `__init__.py` to export the new models
5. Import and use the models in your code (see the sketch below)
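For step 5, a minimal sketch using one of the existing generated models (the field values are illustrative):

```python
from comfyui_manager.data_models import ManagerPackInfo

# Validate input data against the generated schema, then serialize it back out.
pack = ManagerPackInfo(id="ltdrdata/ComfyUI-Impact-Pack", version="1.0.0")
print(pack.model_dump_json())
```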
### Troubleshooting
- **Models not generated**: Ensure schemas are under `components/schemas/` (not `parameters/`)
- **Missing models**: Verify schemas are referenced in at least one API path
- **Import errors**: Check that new models are added to `__init__.py` exports


@@ -0,0 +1,137 @@
"""
Data models for ComfyUI Manager.
This package contains Pydantic models used throughout the ComfyUI Manager
for data validation, serialization, and type safety.
All models are auto-generated from the OpenAPI specification to ensure
consistency between the API and implementation.
"""
from .generated_models import (
# Core Task Queue Models
QueueTaskItem,
TaskHistoryItem,
TaskStateMessage,
TaskExecutionStatus,
# WebSocket Message Models
MessageTaskDone,
MessageTaskStarted,
MessageTaskFailed,
MessageUpdate,
ManagerMessageName,
# State Management Models
BatchExecutionRecord,
ComfyUISystemState,
BatchOperation,
InstalledNodeInfo,
InstalledModelInfo,
ComfyUIVersionInfo,
# Import Fail Info Models
ImportFailInfoBulkRequest,
ImportFailInfoBulkResponse,
ImportFailInfoItem,
ImportFailInfoItem1,
# Other models
OperationType,
OperationResult,
ManagerPackInfo,
ManagerPackInstalled,
SelectedVersion,
ManagerChannel,
ManagerDatabaseSource,
ManagerPackState,
ManagerPackInstallType,
ManagerPack,
InstallPackParams,
UpdatePackParams,
UpdateAllPacksParams,
UpdateComfyUIParams,
FixPackParams,
UninstallPackParams,
DisablePackParams,
EnablePackParams,
UpdateAllQueryParams,
UpdateComfyUIQueryParams,
ComfyUISwitchVersionQueryParams,
QueueStatus,
ManagerMappings,
ModelMetadata,
NodePackageMetadata,
SnapshotItem,
Error,
InstalledPacksResponse,
HistoryResponse,
HistoryListResponse,
InstallType,
SecurityLevel,
RiskLevel,
)
__all__ = [
# Core Task Queue Models
"QueueTaskItem",
"TaskHistoryItem",
"TaskStateMessage",
"TaskExecutionStatus",
# WebSocket Message Models
"MessageTaskDone",
"MessageTaskStarted",
"MessageTaskFailed",
"MessageUpdate",
"ManagerMessageName",
# State Management Models
"BatchExecutionRecord",
"ComfyUISystemState",
"BatchOperation",
"InstalledNodeInfo",
"InstalledModelInfo",
"ComfyUIVersionInfo",
# Import Fail Info Models
"ImportFailInfoBulkRequest",
"ImportFailInfoBulkResponse",
"ImportFailInfoItem",
"ImportFailInfoItem1",
# Other models
"OperationType",
"OperationResult",
"ManagerPackInfo",
"ManagerPackInstalled",
"SelectedVersion",
"ManagerChannel",
"ManagerDatabaseSource",
"ManagerPackState",
"ManagerPackInstallType",
"ManagerPack",
"InstallPackParams",
"UpdatePackParams",
"UpdateAllPacksParams",
"UpdateComfyUIParams",
"FixPackParams",
"UninstallPackParams",
"DisablePackParams",
"EnablePackParams",
"UpdateAllQueryParams",
"UpdateComfyUIQueryParams",
"ComfyUISwitchVersionQueryParams",
"QueueStatus",
"ManagerMappings",
"ModelMetadata",
"NodePackageMetadata",
"SnapshotItem",
"Error",
"InstalledPacksResponse",
"HistoryResponse",
"HistoryListResponse",
"InstallType",
"SecurityLevel",
"RiskLevel",
]


@@ -0,0 +1,561 @@
# generated by datamodel-codegen:
# filename: openapi.yaml
# timestamp: 2025-07-31T04:52:26+00:00
from __future__ import annotations
from datetime import datetime
from enum import Enum
from typing import Any, Dict, List, Optional, Union
from pydantic import BaseModel, Field, RootModel
class OperationType(str, Enum):
install = "install"
uninstall = "uninstall"
update = "update"
update_comfyui = "update-comfyui"
fix = "fix"
disable = "disable"
enable = "enable"
install_model = "install-model"
class OperationResult(str, Enum):
success = "success"
failed = "failed"
skipped = "skipped"
error = "error"
skip = "skip"
class TaskExecutionStatus(BaseModel):
status_str: OperationResult
completed: bool = Field(..., description="Whether the task completed")
messages: List[str] = Field(..., description="Additional status messages")
class ManagerMessageName(str, Enum):
cm_task_completed = "cm-task-completed"
cm_task_started = "cm-task-started"
cm_queue_status = "cm-queue-status"
class ManagerPackInfo(BaseModel):
id: str = Field(
...,
description="Either github-author/github-repo or name of pack from the registry",
)
version: str = Field(..., description="Semantic version or Git commit hash")
ui_id: Optional[str] = Field(None, description="Task ID - generated internally")
class ManagerPackInstalled(BaseModel):
ver: str = Field(
...,
description="The version of the pack that is installed (Git commit hash or semantic version)",
)
cnr_id: Optional[str] = Field(
None, description="The name of the pack if installed from the registry"
)
aux_id: Optional[str] = Field(
None,
description="The name of the pack if installed from github (author/repo-name format)",
)
enabled: bool = Field(..., description="Whether the pack is enabled")
class SelectedVersion(str, Enum):
latest = "latest"
nightly = "nightly"
class ManagerChannel(str, Enum):
default = "default"
recent = "recent"
legacy = "legacy"
forked = "forked"
dev = "dev"
tutorial = "tutorial"
class ManagerDatabaseSource(str, Enum):
remote = "remote"
local = "local"
cache = "cache"
class ManagerPackState(str, Enum):
installed = "installed"
disabled = "disabled"
not_installed = "not_installed"
import_failed = "import_failed"
needs_update = "needs_update"
class ManagerPackInstallType(str, Enum):
git_clone = "git-clone"
copy = "copy"
cnr = "cnr"
class SecurityLevel(str, Enum):
strong = "strong"
normal = "normal"
normal_ = "normal-"
weak = "weak"
class RiskLevel(str, Enum):
block = "block"
high_ = "high+"
high = "high"
middle_ = "middle+"
middle = "middle"
class UpdateState(Enum):
false = "false"
true = "true"
class ManagerPack(ManagerPackInfo):
author: Optional[str] = Field(
None, description="Pack author name or 'Unclaimed' if added via GitHub crawl"
)
files: Optional[List[str]] = Field(
None,
description="Repository URLs for installation (typically contains one GitHub URL)",
)
reference: Optional[str] = Field(
None, description="The type of installation reference"
)
title: Optional[str] = Field(None, description="The display name of the pack")
cnr_latest: Optional[SelectedVersion] = None
repository: Optional[str] = Field(None, description="GitHub repository URL")
state: Optional[ManagerPackState] = None
update_state: Optional[UpdateState] = Field(
None, alias="update-state", description="Update availability status"
)
stars: Optional[int] = Field(None, description="GitHub stars count")
last_update: Optional[datetime] = Field(None, description="Last update timestamp")
health: Optional[str] = Field(None, description="Health status of the pack")
description: Optional[str] = Field(None, description="Pack description")
trust: Optional[bool] = Field(None, description="Whether the pack is trusted")
install_type: Optional[ManagerPackInstallType] = None
class InstallPackParams(ManagerPackInfo):
selected_version: Union[str, SelectedVersion] = Field(
..., description="Semantic version, Git commit hash, latest, or nightly"
)
repository: Optional[str] = Field(
None,
description="GitHub repository URL (required if selected_version is nightly)",
)
pip: Optional[List[str]] = Field(None, description="PyPi dependency names")
mode: ManagerDatabaseSource
channel: ManagerChannel
skip_post_install: Optional[bool] = Field(
None, description="Whether to skip post-installation steps"
)
class UpdateAllPacksParams(BaseModel):
mode: Optional[ManagerDatabaseSource] = None
ui_id: Optional[str] = Field(None, description="Task ID - generated internally")
class UpdatePackParams(BaseModel):
node_name: str = Field(..., description="Name of the node package to update")
node_ver: Optional[str] = Field(
None, description="Current version of the node package"
)
class UpdateComfyUIParams(BaseModel):
is_stable: Optional[bool] = Field(
True,
description="Whether to update to stable version (true) or nightly (false)",
)
target_version: Optional[str] = Field(
None,
description="Specific version to switch to (for version switching operations)",
)
class FixPackParams(BaseModel):
node_name: str = Field(..., description="Name of the node package to fix")
node_ver: str = Field(..., description="Version of the node package")
class UninstallPackParams(BaseModel):
node_name: str = Field(..., description="Name of the node package to uninstall")
is_unknown: Optional[bool] = Field(
False, description="Whether this is an unknown/unregistered package"
)
class DisablePackParams(BaseModel):
node_name: str = Field(..., description="Name of the node package to disable")
is_unknown: Optional[bool] = Field(
False, description="Whether this is an unknown/unregistered package"
)
class EnablePackParams(BaseModel):
cnr_id: str = Field(
..., description="ComfyUI Node Registry ID of the package to enable"
)
class UpdateAllQueryParams(BaseModel):
client_id: str = Field(
..., description="Client identifier that initiated the request"
)
ui_id: str = Field(..., description="Base UI identifier for task tracking")
mode: Optional[ManagerDatabaseSource] = None
class UpdateComfyUIQueryParams(BaseModel):
client_id: str = Field(
..., description="Client identifier that initiated the request"
)
ui_id: str = Field(..., description="UI identifier for task tracking")
stable: Optional[bool] = Field(
True,
description="Whether to update to stable version (true) or nightly (false)",
)
class ComfyUISwitchVersionQueryParams(BaseModel):
ver: str = Field(..., description="Version to switch to")
client_id: str = Field(
..., description="Client identifier that initiated the request"
)
ui_id: str = Field(..., description="UI identifier for task tracking")
class QueueStatus(BaseModel):
total_count: int = Field(
..., description="Total number of tasks (pending + running)"
)
done_count: int = Field(..., description="Number of completed tasks")
in_progress_count: int = Field(..., description="Number of tasks currently running")
pending_count: Optional[int] = Field(
None, description="Number of tasks waiting to be executed"
)
is_processing: bool = Field(..., description="Whether the task worker is active")
client_id: Optional[str] = Field(
None, description="Client ID (when filtered by client)"
)
class ManagerMappings1(BaseModel):
title_aux: Optional[str] = Field(None, description="The display name of the pack")
class ManagerMappings(
RootModel[Optional[Dict[str, List[Union[List[str], ManagerMappings1]]]]]
):
root: Optional[Dict[str, List[Union[List[str], ManagerMappings1]]]] = Field(
None, description="Tuple of [node_names, metadata]"
)
class ModelMetadata(BaseModel):
name: str = Field(..., description="Name of the model")
type: str = Field(..., description="Type of model")
base: Optional[str] = Field(None, description="Base model type")
save_path: Optional[str] = Field(None, description="Path for saving the model")
url: str = Field(..., description="Download URL")
filename: str = Field(..., description="Target filename")
ui_id: Optional[str] = Field(None, description="ID for UI reference")
class InstallType(str, Enum):
git = "git"
copy = "copy"
pip = "pip"
class NodePackageMetadata(BaseModel):
title: Optional[str] = Field(None, description="Display name of the node package")
name: Optional[str] = Field(None, description="Repository/package name")
files: Optional[List[str]] = Field(None, description="Source URLs for the package")
description: Optional[str] = Field(
None, description="Description of the node package functionality"
)
install_type: Optional[InstallType] = Field(None, description="Installation method")
version: Optional[str] = Field(None, description="Version identifier")
id: Optional[str] = Field(
None, description="Unique identifier for the node package"
)
ui_id: Optional[str] = Field(None, description="ID for UI reference")
channel: Optional[str] = Field(None, description="Source channel")
mode: Optional[str] = Field(None, description="Source mode")
class SnapshotItem(RootModel[str]):
root: str = Field(..., description="Name of the snapshot")
class Error(BaseModel):
error: str = Field(..., description="Error message")
class InstalledPacksResponse(RootModel[Optional[Dict[str, ManagerPackInstalled]]]):
root: Optional[Dict[str, ManagerPackInstalled]] = None
class HistoryListResponse(BaseModel):
ids: Optional[List[str]] = Field(
None, description="List of available batch history IDs"
)
class InstalledNodeInfo(BaseModel):
name: str = Field(..., description="Node package name")
version: str = Field(..., description="Installed version")
repository_url: Optional[str] = Field(None, description="Git repository URL")
install_method: str = Field(
..., description="Installation method (cnr, git, pip, etc.)"
)
enabled: Optional[bool] = Field(
True, description="Whether the node is currently enabled"
)
install_date: Optional[datetime] = Field(
None, description="ISO timestamp of installation"
)
class InstalledModelInfo(BaseModel):
name: str = Field(..., description="Model filename")
path: str = Field(..., description="Full path to model file")
type: str = Field(..., description="Model type (checkpoint, lora, vae, etc.)")
size_bytes: Optional[int] = Field(None, description="File size in bytes", ge=0)
hash: Optional[str] = Field(None, description="Model file hash for verification")
install_date: Optional[datetime] = Field(
None, description="ISO timestamp when added"
)
class ComfyUIVersionInfo(BaseModel):
version: str = Field(..., description="ComfyUI version string")
commit_hash: Optional[str] = Field(None, description="Git commit hash")
branch: Optional[str] = Field(None, description="Git branch name")
is_stable: Optional[bool] = Field(
False, description="Whether this is a stable release"
)
last_updated: Optional[datetime] = Field(
None, description="ISO timestamp of last update"
)
class BatchOperation(BaseModel):
operation_id: str = Field(..., description="Unique operation identifier")
operation_type: OperationType
target: str = Field(
..., description="Target of the operation (node name, model name, etc.)"
)
target_version: Optional[str] = Field(
None, description="Target version for the operation"
)
result: OperationResult
error_message: Optional[str] = Field(
None, description="Error message if operation failed"
)
start_time: datetime = Field(
..., description="ISO timestamp when operation started"
)
end_time: Optional[datetime] = Field(
None, description="ISO timestamp when operation completed"
)
client_id: Optional[str] = Field(
None, description="Client that initiated the operation"
)
class ComfyUISystemState(BaseModel):
snapshot_time: datetime = Field(
..., description="ISO timestamp when snapshot was taken"
)
comfyui_version: ComfyUIVersionInfo
frontend_version: Optional[str] = Field(
None, description="ComfyUI frontend version if available"
)
python_version: str = Field(..., description="Python interpreter version")
platform_info: str = Field(
..., description="Operating system and platform information"
)
installed_nodes: Optional[Dict[str, InstalledNodeInfo]] = Field(
None, description="Map of installed node packages by name"
)
installed_models: Optional[Dict[str, InstalledModelInfo]] = Field(
None, description="Map of installed models by name"
)
manager_config: Optional[Dict[str, Any]] = Field(
None, description="ComfyUI Manager configuration settings"
)
comfyui_root_path: Optional[str] = Field(
None, description="ComfyUI root installation directory"
)
model_paths: Optional[Dict[str, List[str]]] = Field(
None, description="Map of model types to their configured paths"
)
manager_version: Optional[str] = Field(None, description="ComfyUI Manager version")
security_level: Optional[SecurityLevel] = None
network_mode: Optional[str] = Field(
None, description="Network mode (online, offline, private)"
)
cli_args: Optional[Dict[str, Any]] = Field(
None, description="Selected ComfyUI CLI arguments"
)
custom_nodes_count: Optional[int] = Field(
None, description="Total number of custom node packages", ge=0
)
failed_imports: Optional[List[str]] = Field(
None, description="List of custom nodes that failed to import"
)
pip_packages: Optional[Dict[str, str]] = Field(
None, description="Map of installed pip packages to their versions"
)
embedded_python: Optional[bool] = Field(
None,
description="Whether ComfyUI is running from an embedded Python distribution",
)
class BatchExecutionRecord(BaseModel):
batch_id: str = Field(..., description="Unique batch identifier")
start_time: datetime = Field(..., description="ISO timestamp when batch started")
end_time: Optional[datetime] = Field(
None, description="ISO timestamp when batch completed"
)
state_before: ComfyUISystemState
state_after: Optional[ComfyUISystemState] = Field(
None, description="System state after batch execution"
)
operations: Optional[List[BatchOperation]] = Field(
None, description="List of operations performed in this batch"
)
total_operations: Optional[int] = Field(
0, description="Total number of operations in batch", ge=0
)
successful_operations: Optional[int] = Field(
0, description="Number of successful operations", ge=0
)
failed_operations: Optional[int] = Field(
0, description="Number of failed operations", ge=0
)
skipped_operations: Optional[int] = Field(
0, description="Number of skipped operations", ge=0
)
class ImportFailInfoBulkRequest(BaseModel):
cnr_ids: Optional[List[str]] = Field(
None, description="A list of CNR IDs to check."
)
urls: Optional[List[str]] = Field(
None, description="A list of repository URLs to check."
)
class ImportFailInfoItem1(BaseModel):
error: Optional[str] = None
traceback: Optional[str] = None
class ImportFailInfoItem(RootModel[Optional[ImportFailInfoItem1]]):
root: Optional[ImportFailInfoItem1]
class QueueTaskItem(BaseModel):
ui_id: str = Field(..., description="Unique identifier for the task")
client_id: str = Field(..., description="Client identifier that initiated the task")
kind: OperationType
params: Union[
InstallPackParams,
UpdatePackParams,
UpdateAllPacksParams,
UpdateComfyUIParams,
FixPackParams,
UninstallPackParams,
DisablePackParams,
EnablePackParams,
ModelMetadata,
]
class TaskHistoryItem(BaseModel):
ui_id: str = Field(..., description="Unique identifier for the task")
client_id: str = Field(..., description="Client identifier that initiated the task")
kind: str = Field(..., description="Type of task that was performed")
timestamp: datetime = Field(..., description="ISO timestamp when task completed")
result: str = Field(..., description="Task result message or details")
status: Optional[TaskExecutionStatus] = None
batch_id: Optional[str] = Field(
None, description="ID of the batch this task belongs to"
)
end_time: Optional[datetime] = Field(
None, description="ISO timestamp when task execution ended"
)
class TaskStateMessage(BaseModel):
history: Dict[str, TaskHistoryItem] = Field(
..., description="Map of task IDs to their history items"
)
running_queue: List[QueueTaskItem] = Field(
..., description="Currently executing tasks"
)
pending_queue: List[QueueTaskItem] = Field(
..., description="Tasks waiting to be executed"
)
installed_packs: Dict[str, ManagerPackInstalled] = Field(
..., description="Map of currently installed node packages by name"
)
class MessageTaskDone(BaseModel):
ui_id: str = Field(..., description="Task identifier")
result: str = Field(..., description="Task result message")
kind: str = Field(..., description="Type of task")
status: Optional[TaskExecutionStatus] = None
timestamp: datetime = Field(..., description="ISO timestamp when task completed")
state: TaskStateMessage
class MessageTaskStarted(BaseModel):
ui_id: str = Field(..., description="Task identifier")
kind: str = Field(..., description="Type of task")
timestamp: datetime = Field(..., description="ISO timestamp when task started")
state: TaskStateMessage
class MessageTaskFailed(BaseModel):
ui_id: str = Field(..., description="Task identifier")
error: str = Field(..., description="Error message")
kind: str = Field(..., description="Type of task")
timestamp: datetime = Field(..., description="ISO timestamp when task failed")
state: TaskStateMessage
class MessageUpdate(
RootModel[Union[MessageTaskDone, MessageTaskStarted, MessageTaskFailed]]
):
root: Union[MessageTaskDone, MessageTaskStarted, MessageTaskFailed] = Field(
..., description="Union type for all possible WebSocket message updates"
)
class HistoryResponse(BaseModel):
history: Optional[Dict[str, TaskHistoryItem]] = Field(
None, description="Map of task IDs to their history items"
)
class ImportFailInfoBulkResponse(RootModel[Optional[Dict[str, ImportFailInfoItem]]]):
root: Optional[Dict[str, ImportFailInfoItem]] = None

File diff suppressed because it is too large.

File diff suppressed because it is too large.

@@ -0,0 +1,11 @@
- Anytime you make a change to the data being sent or received, you should follow this process:
1. Adjust the openapi.yaml file first
2. Verify the syntax of the openapi.yaml file using `yaml.safe_load`
3. Regenerate the types following the instructions in the `data_models/README.md` file
4. Verify the new data model is generated
5. Verify the syntax of the generated types files
6. Run formatting and linting on the generated types files
7. Adjust the `__init__.py` files in the `data_models` directory to match/export the new data model
8. Only then, make the changes to the rest of the codebase
9. Run the CI tests to verify that the changes are working
- The comfyui_manager is a Python package used to manage the ComfyUI server. It contains two sub-packages, `glob` and `legacy`, which represent the current version (`glob`) and the previous version (`legacy`); common utilities and data models sit outside both. When developing, work in the `glob` package. Ignore the `legacy` package entirely unless you have a very good reason to research how things were done in prior major versions of the package, and even then look only for the sake of knowledge or reflection, not to change code (unless explicitly asked to do so).


@@ -0,0 +1,55 @@
SECURITY_MESSAGE_MIDDLE = "ERROR: To use this action, a security_level of `normal or below` is required. Please contact the administrator.\nReference: https://github.com/ltdrdata/ComfyUI-Manager#security-policy"
SECURITY_MESSAGE_MIDDLE_P = "ERROR: To use this action, security_level must be `normal or below`, and network_mode must be set to `personal_cloud`. Please contact the administrator.\nReference: https://github.com/ltdrdata/ComfyUI-Manager#security-policy"
SECURITY_MESSAGE_NORMAL_MINUS = "ERROR: To use this feature, you must either set '--listen' to a local IP and set the security level to 'normal-' or lower, or set the security level to 'middle' or 'weak'. Please contact the administrator.\nReference: https://github.com/ltdrdata/ComfyUI-Manager#security-policy"
SECURITY_MESSAGE_GENERAL = "ERROR: This installation is not allowed in this security_level. Please contact the administrator.\nReference: https://github.com/ltdrdata/ComfyUI-Manager#security-policy"
SECURITY_MESSAGE_NORMAL_MINUS_MODEL = "ERROR: Downloading models that are not in '.safetensors' format is only allowed for models registered in the 'default' channel at this security level. If you want to download this model, set the security level to 'normal-' or lower."
def is_loopback(address):
import ipaddress
try:
return ipaddress.ip_address(address).is_loopback
except ValueError:
return False
model_dir_name_map = {
"checkpoints": "checkpoints",
"checkpoint": "checkpoints",
"unclip": "checkpoints",
"text_encoders": "text_encoders",
"clip": "text_encoders",
"vae": "vae",
"lora": "loras",
"t2i-adapter": "controlnet",
"t2i-style": "controlnet",
"controlnet": "controlnet",
"clip_vision": "clip_vision",
"gligen": "gligen",
"upscale": "upscale_models",
"embedding": "embeddings",
"embeddings": "embeddings",
"unet": "diffusion_models",
"diffusion_model": "diffusion_models",
}
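# e.g. model_dir_name_map["lora"] -> "loras"; aliases such as "unet" and
# "diffusion_model" both resolve to "diffusion_models"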
# List of all model directory names used for checking installed models
MODEL_DIR_NAMES = [
"checkpoints",
"loras",
"vae",
"text_encoders",
"diffusion_models",
"clip_vision",
"embeddings",
"diffusers",
"vae_approx",
"controlnet",
"gligen",
"upscale_models",
"hypernetworks",
"photomaker",
"classifiers",
]


@@ -23,7 +23,6 @@ import yaml
import zipfile
import traceback
from concurrent.futures import ThreadPoolExecutor, as_completed
import toml
orig_print = print
@@ -32,19 +31,22 @@ from packaging import version
import uuid
from . import cm_global
from . import cnr_utils
from . import manager_util
from . import git_utils
from . import manager_downloader
from .node_package import InstalledNodePackage
from .enums import NetworkMode, SecurityLevel, DBMode
from ..common import cm_global
from ..common import cnr_utils
from ..common import manager_util
from ..common import git_utils
from ..common import manager_downloader
from ..common.node_package import InstalledNodePackage
from ..common.enums import NetworkMode, SecurityLevel, DBMode
from ..common import context
version_code = [4, 0]
version_code = [4, 0, 2]
version_str = f"V{version_code[0]}.{version_code[1]}" + (f'.{version_code[2]}' if len(version_code) > 2 else '')
DEFAULT_CHANNEL = "https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main"
DEFAULT_CHANNEL_LEGACY = "https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main"
default_custom_nodes_path = None
@@ -62,7 +64,7 @@ def get_default_custom_nodes_path():
try:
import folder_paths
default_custom_nodes_path = folder_paths.get_folder_paths("custom_nodes")[0]
except:
except Exception:
default_custom_nodes_path = os.path.abspath(os.path.join(manager_util.comfyui_manager_path, '..'))
return default_custom_nodes_path
@@ -72,37 +74,11 @@ def get_custom_nodes_paths():
try:
import folder_paths
return folder_paths.get_folder_paths("custom_nodes")
except:
except Exception:
custom_nodes_path = os.path.abspath(os.path.join(manager_util.comfyui_manager_path, '..'))
return [custom_nodes_path]
def get_comfyui_tag():
try:
repo = git.Repo(comfy_path)
return repo.git.describe('--tags')
except:
return None
def get_current_comfyui_ver():
"""
Extract version from pyproject.toml
"""
toml_path = os.path.join(comfy_path, 'pyproject.toml')
if not os.path.exists(toml_path):
return None
else:
try:
with open(toml_path, "r", encoding="utf-8") as f:
data = toml.load(f)
project = data.get('project', {})
return project.get('version')
except:
return None
def get_script_env():
new_env = os.environ.copy()
git_exe = get_config().get('git_exe')
@@ -110,10 +86,10 @@ def get_script_env():
new_env['GIT_EXE_PATH'] = git_exe
if 'COMFYUI_PATH' not in new_env:
new_env['COMFYUI_PATH'] = comfy_path
new_env['COMFYUI_PATH'] = context.comfy_path
if 'COMFYUI_FOLDERS_BASE_PATH' not in new_env:
new_env['COMFYUI_FOLDERS_BASE_PATH'] = comfy_path
new_env['COMFYUI_FOLDERS_BASE_PATH'] = context.comfy_path
return new_env
@@ -135,12 +111,12 @@ def check_invalid_nodes():
try:
import folder_paths
except:
except Exception:
try:
sys.path.append(comfy_path)
sys.path.append(context.comfy_path)
import folder_paths
except:
raise Exception(f"Invalid COMFYUI_FOLDERS_BASE_PATH: {comfy_path}")
except Exception:
raise Exception(f"Invalid COMFYUI_FOLDERS_BASE_PATH: {context.comfy_path}")
def check(root):
global invalid_nodes
@@ -175,87 +151,11 @@ def check_invalid_nodes():
print("\n---------------------------------------------------------------------------\n")
# read env vars
comfy_path: str = os.environ.get('COMFYUI_PATH')
comfy_base_path = os.environ.get('COMFYUI_FOLDERS_BASE_PATH')
if comfy_path is None:
try:
comfy_path = os.path.abspath(os.path.dirname(sys.modules['__main__'].__file__))
os.environ['COMFYUI_PATH'] = comfy_path
except:
logging.error("[ComfyUI-Manager] environment variable 'COMFYUI_PATH' is not specified.")
exit(-1)
if comfy_base_path is None:
comfy_base_path = comfy_path
channel_list_template_path = os.path.join(manager_util.comfyui_manager_path, 'channels.list.template')
git_script_path = os.path.join(manager_util.comfyui_manager_path, "git_helper.py")
manager_files_path = None
manager_config_path = None
manager_channel_list_path = None
manager_startup_script_path:str = None
manager_snapshot_path = None
manager_pip_overrides_path = None
manager_pip_blacklist_path = None
manager_components_path = None
def update_user_directory(user_dir):
global manager_files_path
global manager_config_path
global manager_channel_list_path
global manager_startup_script_path
global manager_snapshot_path
global manager_pip_overrides_path
global manager_pip_blacklist_path
global manager_components_path
manager_files_path = os.path.abspath(os.path.join(user_dir, 'default', 'ComfyUI-Manager'))
if not os.path.exists(manager_files_path):
os.makedirs(manager_files_path)
manager_snapshot_path = os.path.join(manager_files_path, "snapshots")
if not os.path.exists(manager_snapshot_path):
os.makedirs(manager_snapshot_path)
manager_startup_script_path = os.path.join(manager_files_path, "startup-scripts")
if not os.path.exists(manager_startup_script_path):
os.makedirs(manager_startup_script_path)
manager_config_path = os.path.join(manager_files_path, 'config.ini')
manager_channel_list_path = os.path.join(manager_files_path, 'channels.list')
manager_pip_overrides_path = os.path.join(manager_files_path, "pip_overrides.json")
manager_pip_blacklist_path = os.path.join(manager_files_path, "pip_blacklist.list")
manager_components_path = os.path.join(manager_files_path, "components")
manager_util.cache_dir = os.path.join(manager_files_path, "cache")
if not os.path.exists(manager_util.cache_dir):
os.makedirs(manager_util.cache_dir)
try:
import folder_paths
update_user_directory(folder_paths.get_user_directory())
except Exception:
# fallback:
# This case is only possible when running with cm-cli, and in practice, this case is not actually used.
update_user_directory(os.path.abspath(manager_util.comfyui_manager_path))
cached_config = None
js_path = None
comfy_ui_required_revision = 1930
comfy_ui_required_commit_datetime = datetime(2024, 1, 24, 0, 0, 0)
comfy_ui_revision = "Unknown"
comfy_ui_commit_datetime = datetime(1900, 1, 1, 0, 0, 0)
channel_dict = None
valid_channels = {'default', 'local'}
valid_channels = {'default', 'local', DEFAULT_CHANNEL, DEFAULT_CHANNEL_LEGACY}
channel_list = None
@@ -399,18 +299,86 @@ class ManagedResult:
return self
class NormalizedKeyDict:
def __init__(self):
self._store = {}
self._key_map = {}
def _normalize_key(self, key):
if isinstance(key, str):
return key.strip().lower()
return key
def __setitem__(self, key, value):
    norm_key = self._normalize_key(key)
    if norm_key in self._key_map and self._key_map[norm_key] != key:
        # a different spelling of this key was stored before; drop the stale entry
        del self._store[self._key_map[norm_key]]
    self._key_map[norm_key] = key
    self._store[key] = value
def __getitem__(self, key):
norm_key = self._normalize_key(key)
original_key = self._key_map[norm_key]
return self._store[original_key]
def __delitem__(self, key):
norm_key = self._normalize_key(key)
original_key = self._key_map.pop(norm_key)
del self._store[original_key]
def __contains__(self, key):
return self._normalize_key(key) in self._key_map
def get(self, key, default=None):
return self[key] if key in self else default
def setdefault(self, key, default=None):
if key in self:
return self[key]
self[key] = default
return default
_MISSING = object()  # sentinel: lets pop() honor an explicit default of None

def pop(self, key, default=_MISSING):
    if key in self:
        val = self[key]
        del self[key]
        return val
    if default is not NormalizedKeyDict._MISSING:
        return default
    raise KeyError(key)
def keys(self):
return self._store.keys()
def values(self):
return self._store.values()
def items(self):
return self._store.items()
def __iter__(self):
return iter(self._store)
def __len__(self):
return len(self._store)
def __repr__(self):
return repr(self._store)
def to_dict(self):
return dict(self._store)
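# Illustrative behavior (keys differing only in case/whitespace share one entry):
#   d = NormalizedKeyDict()
#   d["ComfyUI-Impact-Pack"] = "v1"
#   "comfyui-impact-pack" in d    # True
#   d.to_dict()                   # {"ComfyUI-Impact-Pack": "v1"}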
class UnifiedManager:
def __init__(self):
self.installed_node_packages: dict[str, InstalledNodePackage] = {}
self.cnr_inactive_nodes = {} # node_id -> node_version -> fullpath
self.nightly_inactive_nodes = {} # node_id -> fullpath
self.unknown_inactive_nodes = {} # node_id -> repo url * fullpath
self.active_nodes = {} # node_id -> node_version * fullpath
self.unknown_active_nodes = {} # node_id -> repo url * fullpath
self.cnr_map = {} # node_id -> cnr info
self.repo_cnr_map = {} # repo_url -> cnr info
self.custom_node_map_cache = {} # (channel, mode) -> augmented custom node list json
self.cnr_inactive_nodes = NormalizedKeyDict() # node_id -> node_version -> fullpath
self.nightly_inactive_nodes = NormalizedKeyDict() # node_id -> fullpath
self.unknown_inactive_nodes = {} # node_id -> repo url * fullpath
self.active_nodes = NormalizedKeyDict() # node_id -> node_version * fullpath
self.unknown_active_nodes = {} # node_id -> repo url * fullpath
self.cnr_map = NormalizedKeyDict() # node_id -> cnr info
self.repo_cnr_map = {} # repo_url -> cnr info
self.custom_node_map_cache = {} # (channel, mode) -> augmented custom node list json
self.processed_install = set()
def get_module_name(self, x):
@@ -729,6 +697,8 @@ class UnifiedManager:
return latest
async def reload(self, cache_mode, dont_wait=True, update_cnr_map=True):
import folder_paths
self.custom_node_map_cache = {}
self.cnr_inactive_nodes = {} # node_id -> node_version -> fullpath
self.nightly_inactive_nodes = {} # node_id -> fullpath
@@ -795,7 +765,7 @@ class UnifiedManager:
if 'id' in x:
if x['id'] not in res:
res[x['id']] = (x, True)
except:
except Exception:
logging.error(f"[ComfyUI-Manager] broken item:{x}")
return res
@@ -814,7 +784,7 @@ class UnifiedManager:
channel = normalize_channel(channel)
nodes = await self.load_nightly(channel, mode)
res = {}
res = NormalizedKeyDict()
added_cnr = set()
for v in nodes.values():
v = v[0]
@@ -848,7 +818,7 @@ class UnifiedManager:
def safe_version(ver_str):
try:
return version.parse(ver_str)
except:
except Exception:
return version.parse("0.0.0")
def execute_install_script(self, url, repo_path, instant_execution=False, lazy_mode=False, no_deps=False):
@@ -862,14 +832,15 @@ class UnifiedManager:
else:
if os.path.exists(requirements_path) and not no_deps:
print("Install: pip packages")
pip_fixer = manager_util.PIPFixer(manager_util.get_installed_packages(), comfy_path, manager_files_path)
pip_fixer = manager_util.PIPFixer(manager_util.get_installed_packages(), context.comfy_path, context.manager_files_path)
lines = manager_util.robust_readlines(requirements_path)
for line in lines:
package_name = remap_pip_package(line.strip())
if package_name and not package_name.startswith('#') and package_name not in self.processed_install:
self.processed_install.add(package_name)
install_cmd = manager_util.make_pip_cmd(["install", package_name])
if package_name.strip() != "" and not package_name.startswith('#'):
clean_package_name = package_name.split('#')[0].strip()
install_cmd = manager_util.make_pip_cmd(["install", clean_package_name])
if clean_package_name != "" and not clean_package_name.startswith('#'):
res = res and try_install_script(url, repo_path, install_cmd, instant_execution=instant_execution)
pip_fixer.fix_broken()
@@ -883,7 +854,7 @@ class UnifiedManager:
return res
def reserve_cnr_switch(self, target, zip_url, from_path, to_path, no_deps):
script_path = os.path.join(manager_startup_script_path, "install-scripts.txt")
script_path = os.path.join(context.manager_startup_script_path, "install-scripts.txt")
with open(script_path, "a") as file:
obj = [target, "#LAZY-CNR-SWITCH-SCRIPT", zip_url, from_path, to_path, no_deps, get_default_custom_nodes_path(), sys.executable]
file.write(f"{obj}\n")
@@ -1289,7 +1260,7 @@ class UnifiedManager:
print(f"Download: git clone '{clone_url}'")
if not instant_execution and platform.system() == 'Windows':
res = manager_funcs.run_script([sys.executable, git_script_path, "--clone", get_default_custom_nodes_path(), clone_url, repo_path], cwd=get_default_custom_nodes_path())
res = manager_funcs.run_script([sys.executable, context.git_script_path, "--clone", get_default_custom_nodes_path(), clone_url, repo_path], cwd=get_default_custom_nodes_path())
if res != 0:
return result.fail(f"Failed to clone repo: {clone_url}")
else:
@@ -1320,67 +1291,66 @@ class UnifiedManager:
return result.fail(f'Path not found: {repo_path}')
# version check
repo = git.Repo(repo_path)
with git.Repo(repo_path) as repo:
if repo.head.is_detached:
if not switch_to_default_branch(repo):
return result.fail(f"Failed to switch to default branch: {repo_path}")
if repo.head.is_detached:
if not switch_to_default_branch(repo):
return result.fail(f"Failed to switch to default branch: {repo_path}")
current_branch = repo.active_branch
branch_name = current_branch.name
current_branch = repo.active_branch
branch_name = current_branch.name
if current_branch.tracking_branch() is None:
print(f"[ComfyUI-Manager] There is no tracking branch ({current_branch})")
remote_name = get_remote_name(repo)
else:
remote_name = current_branch.tracking_branch().remote_name
if remote_name is None:
return result.fail(f"Failed to get remote when installing: {repo_path}")
remote = repo.remote(name=remote_name)
try:
remote.fetch()
except Exception as e:
if 'detected dubious' in str(e):
print(f"[ComfyUI-Manager] Try fixing 'dubious repository' error on '{repo_path}' repository")
safedir_path = repo_path.replace('\\', '/')
subprocess.run(['git', 'config', '--global', '--add', 'safe.directory', safedir_path])
try:
remote.fetch()
except Exception:
print("\n[ComfyUI-Manager] Failed to fixing repository setup. Please execute this command on cmd: \n"
"-----------------------------------------------------------------------------------------\n"
f'git config --global --add safe.directory "{safedir_path}"\n'
"-----------------------------------------------------------------------------------------\n")
commit_hash = repo.head.commit.hexsha
if f'{remote_name}/{branch_name}' in repo.refs:
remote_commit_hash = repo.refs[f'{remote_name}/{branch_name}'].object.hexsha
else:
return result.fail(f"Not updatable branch: {branch_name}")
if commit_hash != remote_commit_hash:
git_pull(repo_path)
if len(repo.remotes) > 0:
url = repo.remotes[0].url
if current_branch.tracking_branch() is None:
print(f"[ComfyUI-Manager] There is no tracking branch ({current_branch})")
remote_name = get_remote_name(repo)
else:
url = "unknown repo"
remote_name = current_branch.tracking_branch().remote_name
def postinstall():
return self.execute_install_script(url, repo_path, instant_execution=instant_execution, no_deps=no_deps)
if remote_name is None:
return result.fail(f"Failed to get remote when installing: {repo_path}")
if return_postinstall:
return result.with_postinstall(postinstall)
remote = repo.remote(name=remote_name)
try:
remote.fetch()
except Exception as e:
if 'detected dubious' in str(e):
print(f"[ComfyUI-Manager] Try fixing 'dubious repository' error on '{repo_path}' repository")
safedir_path = repo_path.replace('\\', '/')
subprocess.run(['git', 'config', '--global', '--add', 'safe.directory', safedir_path])
try:
remote.fetch()
except Exception:
print("\n[ComfyUI-Manager] Failed to fixing repository setup. Please execute this command on cmd: \n"
"-----------------------------------------------------------------------------------------\n"
f'git config --global --add safe.directory "{safedir_path}"\n'
"-----------------------------------------------------------------------------------------\n")
commit_hash = repo.head.commit.hexsha
if f'{remote_name}/{branch_name}' in repo.refs:
remote_commit_hash = repo.refs[f'{remote_name}/{branch_name}'].object.hexsha
else:
if not postinstall():
return result.fail(f"Failed to execute install script: {url}")
return result.fail(f"Not updatable branch: {branch_name}")
return result
else:
return ManagedResult('skip').with_msg('Up to date')
if commit_hash != remote_commit_hash:
git_pull(repo_path)
if len(repo.remotes) > 0:
url = repo.remotes[0].url
else:
url = "unknown repo"
def postinstall():
return self.execute_install_script(url, repo_path, instant_execution=instant_execution, no_deps=no_deps)
if return_postinstall:
return result.with_postinstall(postinstall)
else:
if not postinstall():
return result.fail(f"Failed to execute install script: {url}")
return result
else:
return ManagedResult('skip').with_msg('Up to date')
def unified_update(self, node_id, version_spec=None, instant_execution=False, no_deps=False, return_postinstall=False):
orig_print(f"\x1b[2K\rUpdating: {node_id}", end='')
@@ -1504,7 +1474,7 @@ def identify_node_pack_from_path(fullpath):
# cnr
cnr = cnr_utils.read_cnr_info(fullpath)
if cnr is not None:
return module_name, cnr['version'], cnr['id'], None
return module_name, cnr['version'], cnr['original_name'], None
return None
else:
@@ -1516,7 +1486,7 @@ def identify_node_pack_from_path(fullpath):
if github_id is None:
try:
github_id = os.path.basename(repo_url)
except:
except Exception:
logging.warning(f"[ComfyUI-Manager] unexpected repo url: {repo_url}")
github_id = module_name
@@ -1554,7 +1524,10 @@ def get_installed_node_packs():
if info is None:
continue
res[info[0]] = { 'ver': info[1], 'cnr_id': info[2], 'aux_id': info[3], 'enabled': False }
# NOTE: don't add a disabled nodepack if an enabled one already exists
original_name = info[0].split('@')[0]
if original_name not in res:
res[info[0]] = { 'ver': info[1], 'cnr_id': info[2], 'aux_id': info[3], 'enabled': False }
return res
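To make the NOTE above concrete (illustrative; the '<id>@<version>' directory-name convention for disabled packs is inferred from the split):

"comfyui-impact-pack@1_0_3".split('@')[0]  # -> 'comfyui-impact-pack'; skipped when the enabled pack already occupies that id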
@@ -1571,10 +1544,10 @@ def get_channel_dict():
if channel_dict is None:
channel_dict = {}
if not os.path.exists(manager_channel_list_path):
shutil.copy(channel_list_template_path, manager_channel_list_path)
if not os.path.exists(context.manager_channel_list_path):
shutil.copy(context.channel_list_template_path, context.manager_channel_list_path)
with open(manager_channel_list_path, 'r') as file:
with open(context.manager_channel_list_path, 'r') as file:
channels = file.read()
for x in channels.split('\n'):
channel_info = x.split("::")
@@ -1638,29 +1611,31 @@ def write_config():
'db_mode': get_config()['db_mode'],
}
directory = os.path.dirname(manager_config_path)
directory = os.path.dirname(context.manager_config_path)
if not os.path.exists(directory):
os.makedirs(directory)
with open(manager_config_path, 'w') as configfile:
with open(context.manager_config_path, 'w') as configfile:
config.write(configfile)
def read_config():
try:
config = configparser.ConfigParser(strict=False)
config.read(manager_config_path)
config.read(context.manager_config_path)
default_conf = config['default']
manager_util.use_uv = default_conf['use_uv'].lower() == 'true' if 'use_uv' in default_conf else False
def get_bool(key, default_value):
return default_conf[key].lower() == 'true' if key in default_conf else default_value
manager_util.use_uv = default_conf['use_uv'].lower() == 'true' if 'use_uv' in default_conf else False
manager_util.bypass_ssl = get_bool('bypass_ssl', False)
return {
'http_channel_enabled': get_bool('http_channel_enabled', False),
'preview_method': default_conf.get('preview_method', manager_funcs.get_current_preview_method()).lower(),
'git_exe': default_conf.get('git_exe', ''),
'use_uv': get_bool('use_uv', False),
'use_uv': get_bool('use_uv', True),
'channel_url': default_conf.get('channel_url', DEFAULT_CHANNEL),
'default_cache_as_channel_url': get_bool('default_cache_as_channel_url', False),
'share_option': default_conf.get('share_option', 'all').lower(),
@@ -1678,16 +1653,20 @@ def read_config():
}
except Exception:
manager_util.use_uv = False
import importlib.util
# temporarily disable `uv` on Windows by default (https://github.com/Comfy-Org/ComfyUI-Manager/issues/1969)
manager_util.use_uv = importlib.util.find_spec("uv") is not None and platform.system() != "Windows"
manager_util.bypass_ssl = False
return {
'http_channel_enabled': False,
'preview_method': manager_funcs.get_current_preview_method(),
'git_exe': '',
'use_uv': False,
'use_uv': manager_util.use_uv,
'channel_url': DEFAULT_CHANNEL,
'default_cache_as_channel_url': False,
'share_option': 'all',
'bypass_ssl': False,
'bypass_ssl': manager_util.bypass_ssl,
'file_logging': True,
'component_policy': 'workflow',
'update_policy': 'stable-comfyui',
@@ -1695,7 +1674,7 @@ def read_config():
'model_download_by_agent': False,
'downgrade_blacklist': '',
'always_lazy_install': False,
'network_mode': NetworkMode.OFFLINE.value,
'network_mode': NetworkMode.PUBLIC.value,
'security_level': SecurityLevel.NORMAL.value,
'db_mode': DBMode.CACHE.value,
}
@@ -1741,27 +1720,27 @@ def switch_to_default_branch(repo):
default_branch = repo.git.symbolic_ref(f'refs/remotes/{remote_name}/HEAD').replace(f'refs/remotes/{remote_name}/', '')
repo.git.checkout(default_branch)
return True
except:
except Exception:
# try checkout master
# try checkout main if failed
try:
repo.git.checkout(repo.heads.master)
return True
except:
except Exception:
try:
if remote_name is not None:
repo.git.checkout('-b', 'master', f'{remote_name}/master')
return True
except:
except Exception:
try:
repo.git.checkout(repo.heads.main)
return True
except:
except Exception:
try:
if remote_name is not None:
repo.git.checkout('-b', 'main', f'{remote_name}/main')
return True
except:
except Exception:
pass
print("[ComfyUI Manager] Failed to switch to the default branch")
@@ -1769,10 +1748,10 @@ def switch_to_default_branch(repo):
def reserve_script(repo_path, install_cmds):
if not os.path.exists(manager_startup_script_path):
os.makedirs(manager_startup_script_path)
if not os.path.exists(context.manager_startup_script_path):
os.makedirs(context.manager_startup_script_path)
script_path = os.path.join(manager_startup_script_path, "install-scripts.txt")
script_path = os.path.join(context.manager_startup_script_path, "install-scripts.txt")
with open(script_path, "a") as file:
obj = [repo_path] + install_cmds
file.write(f"{obj}\n")
@@ -1805,16 +1784,6 @@ def try_install_script(url, repo_path, install_cmd, instant_execution=False):
print(f"\n## ComfyUI-Manager: EXECUTE => {install_cmd}")
code = manager_funcs.run_script(install_cmd, cwd=repo_path)
if platform.system() != "Windows":
try:
if not os.environ.get('__COMFYUI_DESKTOP_VERSION__') and comfy_ui_commit_datetime.date() < comfy_ui_required_commit_datetime.date():
print("\n\n###################################################################")
print(f"[WARN] ComfyUI-Manager: Your ComfyUI version ({comfy_ui_revision})[{comfy_ui_commit_datetime.date()}] is too old. Please update to the latest version.")
print("[WARN] The extension installation feature may not work properly in the current installed ComfyUI version on Windows environment.")
print("###################################################################\n\n")
except:
pass
if code != 0:
if url is None:
url = os.path.dirname(repo_path)
@@ -1827,11 +1796,11 @@ def try_install_script(url, repo_path, install_cmd, instant_execution=False):
# use subprocess to avoid file system lock by git (Windows)
def __win_check_git_update(path, do_fetch=False, do_update=False):
if do_fetch:
command = [sys.executable, git_script_path, "--fetch", path]
command = [sys.executable, context.git_script_path, "--fetch", path]
elif do_update:
command = [sys.executable, git_script_path, "--pull", path]
command = [sys.executable, context.git_script_path, "--pull", path]
else:
command = [sys.executable, git_script_path, "--check", path]
command = [sys.executable, context.git_script_path, "--check", path]
new_env = get_script_env()
process = subprocess.Popen(command, stdout=subprocess.PIPE, stderr=subprocess.PIPE, cwd=get_default_custom_nodes_path(), env=new_env)
@@ -1885,7 +1854,7 @@ def __win_check_git_update(path, do_fetch=False, do_update=False):
def __win_check_git_pull(path):
command = [sys.executable, git_script_path, "--pull", path]
command = [sys.executable, context.git_script_path, "--pull", path]
process = subprocess.Popen(command, env=get_script_env(), cwd=get_default_custom_nodes_path())
process.wait()
@@ -1901,7 +1870,7 @@ def execute_install_script(url, repo_path, lazy_mode=False, instant_execution=Fa
else:
if os.path.exists(requirements_path) and not no_deps:
print("Install: pip packages")
pip_fixer = manager_util.PIPFixer(manager_util.get_installed_packages(), comfy_path, manager_files_path)
pip_fixer = manager_util.PIPFixer(manager_util.get_installed_packages(), context.comfy_path, context.manager_files_path)
with open(requirements_path, "r") as requirements_file:
for line in requirements_file:
#handle comments
@@ -2081,6 +2050,13 @@ def is_valid_url(url):
return False
def extract_url_and_commit_id(s):
index = s.rfind('@')
if index == -1:
return (s, '')
else:
return (s[:index], s[index+1:])
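Behavior of the new helper at a glance (illustrative values only; since rfind splits at the last '@', SSH-style remotes such as git@host:user/repo would be split as well):

extract_url_and_commit_id("https://github.com/user/repo")        # -> ('https://github.com/user/repo', '')
extract_url_and_commit_id("https://github.com/user/repo@abc123") # -> ('https://github.com/user/repo', 'abc123')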
async def gitclone_install(url, instant_execution=False, msg_prefix='', no_deps=False):
await unified_manager.reload('cache')
await unified_manager.get_custom_nodes('default', 'cache')
@@ -2098,8 +2074,11 @@ async def gitclone_install(url, instant_execution=False, msg_prefix='', no_deps=
cnr = unified_manager.get_cnr_by_repo(url)
if cnr:
cnr_id = cnr['id']
return await unified_manager.install_by_id(cnr_id, version_spec='nightly', channel='default', mode='cache')
return await unified_manager.install_by_id(cnr_id, version_spec=None, channel='default', mode='cache')
else:
new_url, commit_id = extract_url_and_commit_id(url)
if commit_id != "":
url = new_url
repo_name = os.path.splitext(os.path.basename(url))[0]
# NOTE: Keep the original name where possible if the node pack is unknown
@@ -2127,11 +2106,15 @@ async def gitclone_install(url, instant_execution=False, msg_prefix='', no_deps=
clone_url = git_utils.get_url_for_clone(url)
if not instant_execution and platform.system() == 'Windows':
res = manager_funcs.run_script([sys.executable, git_script_path, "--clone", get_default_custom_nodes_path(), clone_url, repo_path], cwd=get_default_custom_nodes_path())
res = manager_funcs.run_script([sys.executable, context.git_script_path, "--clone", get_default_custom_nodes_path(), clone_url, repo_path], cwd=get_default_custom_nodes_path())
if res != 0:
return result.fail(f"Failed to clone '{clone_url}' into '{repo_path}'")
else:
repo = git.Repo.clone_from(clone_url, repo_path, recursive=True, progress=GitProgress())
if commit_id != "":
repo.git.checkout(commit_id)
repo.git.submodule('update', '--init', '--recursive')
repo.git.clear_cache()
repo.close()
@@ -2285,7 +2268,7 @@ def gitclone_uninstall(files):
url = url[:-1]
try:
for custom_nodes_dir in get_custom_nodes_paths():
dir_name = os.path.splitext(os.path.basename(url))[0].replace(".git", "")
dir_name: str = os.path.splitext(os.path.basename(url))[0].replace(".git", "")
dir_path = os.path.join(custom_nodes_dir, dir_name)
# safety check
@@ -2333,7 +2316,7 @@ def gitclone_set_active(files, is_disable):
url = url[:-1]
try:
for custom_nodes_dir in get_custom_nodes_paths():
dir_name = os.path.splitext(os.path.basename(url))[0].replace(".git", "")
dir_name: str = os.path.splitext(os.path.basename(url))[0].replace(".git", "")
dir_path = os.path.join(custom_nodes_dir, dir_name)
# safety check
@@ -2430,7 +2413,7 @@ def update_to_stable_comfyui(repo_path):
repo = git.Repo(repo_path)
try:
repo.git.checkout(repo.heads.master)
except:
except Exception:
logging.error(f"[ComfyUI-Manager] Failed to checkout 'master' branch.\nrepo_path={repo_path}\nAvailable branches:")
for branch in repo.branches:
logging.error('\t'+branch.name)
@@ -2453,7 +2436,7 @@ def update_to_stable_comfyui(repo_path):
logging.info(f"[ComfyUI-Manager] Updating ComfyUI: {current_tag} -> {latest_tag}")
repo.git.checkout(latest_tag)
return 'updated', latest_tag
except:
except Exception:
traceback.print_exc()
return "fail", None
@@ -2606,7 +2589,7 @@ async def get_current_snapshot(custom_nodes_only = False):
await unified_manager.get_custom_nodes('default', 'cache')
# Get ComfyUI hash
repo_path = comfy_path
repo_path = context.comfy_path
comfyui_commit_hash = None
if not custom_nodes_only:
@@ -2648,24 +2631,10 @@ async def get_current_snapshot(custom_nodes_only = False):
cnr_custom_nodes[info['id']] = info['ver']
else:
repo = git.Repo(fullpath)
if repo.head.is_detached:
remote_name = get_remote_name(repo)
else:
current_branch = repo.active_branch
if current_branch.tracking_branch() is None:
remote_name = get_remote_name(repo)
else:
remote_name = current_branch.tracking_branch().remote_name
commit_hash = repo.head.commit.hexsha
url = repo.remotes[remote_name].url
commit_hash = git_utils.get_commit_hash(fullpath)
url = git_utils.git_url(fullpath)
git_custom_nodes[url] = dict(hash=commit_hash, disabled=is_disabled)
except:
except Exception:
print(f"Failed to extract snapshots for the custom node '{path}'.")
elif path.endswith('.py'):
@@ -2696,7 +2665,7 @@ async def save_snapshot_with_postfix(postfix, path=None, custom_nodes_only = Fal
date_time_format = now.strftime("%Y-%m-%d_%H-%M-%S")
file_name = f"{date_time_format}_{postfix}"
path = os.path.join(manager_snapshot_path, f"{file_name}.json")
path = os.path.join(context.manager_snapshot_path, f"{file_name}.json")
else:
file_name = path.replace('\\', '/').split('/')[-1]
file_name = file_name.split('.')[-2]
@@ -2723,7 +2692,7 @@ async def extract_nodes_from_workflow(filepath, mode='local', channel_url='defau
with open(filepath, "r", encoding="UTF-8", errors="ignore") as json_file:
try:
workflow = json.load(json_file)
except:
except Exception:
print(f"Invalid workflow file: {filepath}")
exit(-1)
@@ -2736,7 +2705,7 @@ async def extract_nodes_from_workflow(filepath, mode='local', channel_url='defau
else:
try:
workflow = json.loads(img.info['workflow'])
except:
except Exception:
print(f"This is not a valid .png file containing a ComfyUI workflow: {filepath}")
exit(-1)
@@ -2884,7 +2853,7 @@ async def get_unified_total_nodes(channel, mode, regsitry_cache_mode='cache'):
if cnr_id is not None:
# cnr or nightly version
cnr_ids.remove(cnr_id)
cnr_ids.discard(cnr_id)
updatable = False
cnr = unified_manager.cnr_map[cnr_id]
@@ -3007,7 +2976,7 @@ def populate_github_stats(node_packs, json_obj_github):
v['stars'] = -1
v['last_update'] = -1
v['trust'] = False
except:
except Exception:
logging.error(f"[ComfyUI-Manager] DB item is broken:\n{v}")
@@ -3048,6 +3017,11 @@ async def restore_snapshot(snapshot_path, git_helper_extras=None):
info = yaml.load(snapshot_file, Loader=yaml.SafeLoader)
info = info['custom_nodes']
if 'pips' in info and info['pips']:
pips = info['pips']
else:
pips = {}
# for cnr restore
cnr_info = info.get('cnr_custom_nodes')
if cnr_info is not None:
@@ -3254,6 +3228,8 @@ async def restore_snapshot(snapshot_path, git_helper_extras=None):
unified_manager.repo_install(repo_url, to_path, instant_execution=True, no_deps=False, return_postinstall=False)
cloned_repos.append(repo_name)
manager_util.restore_pip_snapshot(pips, git_helper_extras)
# print summary
for x in cloned_repos:
print(f"[ INSTALLED ] {x}")
@@ -3278,12 +3254,12 @@ async def restore_snapshot(snapshot_path, git_helper_extras=None):
def get_comfyui_versions(repo=None):
if repo is None:
repo = git.Repo(comfy_path)
repo = git.Repo(context.comfy_path)
try:
remote = get_remote_name(repo)
repo.remotes[remote].fetch()
except:
except Exception:
logging.error("[ComfyUI-Manager] Failed to fetch ComfyUI")
versions = [x.name for x in repo.tags if x.name.startswith('v')]
@@ -3312,7 +3288,7 @@ def get_comfyui_versions(repo=None):
def switch_comfyui(tag):
repo = git.Repo(comfy_path)
repo = git.Repo(context.comfy_path)
if tag == 'nightly':
repo.git.checkout('master')
@@ -3352,5 +3328,5 @@ def repo_switch_commit(repo_path, commit_hash):
repo.git.checkout(commit_hash)
return True
except:
except Exception:
return None

File diff suppressed because it is too large

@@ -1,4 +1,5 @@
import mimetypes
from ..common import context
from . import manager_core as core
import os
@@ -9,6 +10,16 @@ import hashlib
import folder_paths
from server import PromptServer
import logging
import sys
try:
from nio import AsyncClient, LoginResponse, UploadResponse
matrix_nio_is_available = True
except Exception:
logging.warning(f"[ComfyUI-Manager] The matrix sharing feature has been disabled because the `matrix-nio` dependency is not installed.\n\tTo use this feature, please run the following command:\n\t{sys.executable} -m pip install matrix-nio\n")
matrix_nio_is_available = False
def extract_model_file_names(json_data):
@@ -66,21 +77,21 @@ async def share_option(request):
def get_openart_auth():
if not os.path.exists(os.path.join(core.manager_files_path, ".openart_key")):
if not os.path.exists(os.path.join(context.manager_files_path, ".openart_key")):
return None
try:
with open(os.path.join(core.manager_files_path, ".openart_key"), "r") as f:
with open(os.path.join(context.manager_files_path, ".openart_key"), "r") as f:
openart_key = f.read().strip()
return openart_key if openart_key else None
except:
except Exception:
return None
def get_matrix_auth():
if not os.path.exists(os.path.join(core.manager_files_path, "matrix_auth")):
if not os.path.exists(os.path.join(context.manager_files_path, "matrix_auth")):
return None
try:
with open(os.path.join(core.manager_files_path, "matrix_auth"), "r") as f:
with open(os.path.join(context.manager_files_path, "matrix_auth"), "r") as f:
matrix_auth = f.read()
homeserver, username, password = matrix_auth.strip().split("\n")
if not homeserver or not username or not password:
@@ -90,36 +101,36 @@ def get_matrix_auth():
"username": username,
"password": password,
}
except:
except Exception:
return None
def get_comfyworkflows_auth():
if not os.path.exists(os.path.join(core.manager_files_path, "comfyworkflows_sharekey")):
if not os.path.exists(os.path.join(context.manager_files_path, "comfyworkflows_sharekey")):
return None
try:
with open(os.path.join(core.manager_files_path, "comfyworkflows_sharekey"), "r") as f:
with open(os.path.join(context.manager_files_path, "comfyworkflows_sharekey"), "r") as f:
share_key = f.read()
if not share_key.strip():
return None
return share_key
except:
except Exception:
return None
def get_youml_settings():
if not os.path.exists(os.path.join(core.manager_files_path, ".youml")):
if not os.path.exists(os.path.join(context.manager_files_path, ".youml")):
return None
try:
with open(os.path.join(core.manager_files_path, ".youml"), "r") as f:
with open(os.path.join(context.manager_files_path, ".youml"), "r") as f:
youml_settings = f.read().strip()
return youml_settings if youml_settings else None
except:
except Exception:
return None
def set_youml_settings(settings):
with open(os.path.join(core.manager_files_path, ".youml"), "w") as f:
with open(os.path.join(context.manager_files_path, ".youml"), "w") as f:
f.write(settings)
@@ -136,7 +147,7 @@ async def api_get_openart_auth(request):
async def api_set_openart_auth(request):
json_data = await request.json()
openart_key = json_data['openart_key']
with open(os.path.join(core.manager_files_path, ".openart_key"), "w") as f:
with open(os.path.join(context.manager_files_path, ".openart_key"), "w") as f:
f.write(openart_key)
return web.Response(status=200)
@@ -179,28 +190,36 @@ async def api_get_comfyworkflows_auth(request):
@PromptServer.instance.routes.post("/v2/manager/set_esheep_workflow_and_images")
async def set_esheep_workflow_and_images(request):
json_data = await request.json()
with open(os.path.join(core.manager_files_path, "esheep_share_message.json"), "w", encoding='utf-8') as file:
with open(os.path.join(context.manager_files_path, "esheep_share_message.json"), "w", encoding='utf-8') as file:
json.dump(json_data, file, indent=4)
return web.Response(status=200)
@PromptServer.instance.routes.get("/v2/manager/get_esheep_workflow_and_images")
async def get_esheep_workflow_and_images(request):
with open(os.path.join(core.manager_files_path, "esheep_share_message.json"), 'r', encoding='utf-8') as file:
with open(os.path.join(context.manager_files_path, "esheep_share_message.json"), 'r', encoding='utf-8') as file:
data = json.load(file)
return web.Response(status=200, text=json.dumps(data))
@PromptServer.instance.routes.get("/v2/manager/get_matrix_dep_status")
async def get_matrix_dep_status(request):
if matrix_nio_is_available:
return web.Response(status=200, text='available')
else:
return web.Response(status=200, text='unavailable')
def set_matrix_auth(json_data):
homeserver = json_data['homeserver']
username = json_data['username']
password = json_data['password']
with open(os.path.join(core.manager_files_path, "matrix_auth"), "w") as f:
with open(os.path.join(context.manager_files_path, "matrix_auth"), "w") as f:
f.write("\n".join([homeserver, username, password]))
def set_comfyworkflows_auth(comfyworkflows_sharekey):
with open(os.path.join(core.manager_files_path, "comfyworkflows_sharekey"), "w") as f:
with open(os.path.join(context.manager_files_path, "comfyworkflows_sharekey"), "w") as f:
f.write(comfyworkflows_sharekey)
@@ -234,7 +253,7 @@ async def share_art(request):
try:
output_to_share = potential_outputs[int(selected_output_index)]
except:
except Exception:
# for now, pick the first output
output_to_share = potential_outputs[0]
@@ -330,15 +349,12 @@ async def share_art(request):
workflowId = upload_workflow_json["workflowId"]
# check if the user has provided Matrix credentials
if "matrix" in share_destinations:
if matrix_nio_is_available and "matrix" in share_destinations:
comfyui_share_room_id = '!LGYSoacpJPhIfBqVfb:matrix.org'
filename = os.path.basename(asset_filepath)
content_type = assetFileType
try:
from matrix_client.api import MatrixHttpApi
from matrix_client.client import MatrixClient
homeserver = 'matrix.org'
if matrix_auth:
homeserver = matrix_auth.get('homeserver', 'matrix.org')
@@ -346,20 +362,35 @@ async def share_art(request):
if not homeserver.startswith("https://"):
homeserver = "https://" + homeserver
client = MatrixClient(homeserver)
try:
token = client.login(username=matrix_auth['username'], password=matrix_auth['password'])
if not token:
return web.json_response({"error": "Invalid Matrix credentials."}, content_type='application/json', status=400)
except:
client = AsyncClient(homeserver, matrix_auth['username'])
# Login
login_resp = await client.login(matrix_auth['password'])
if not isinstance(login_resp, LoginResponse) or not login_resp.access_token:
await client.close()
return web.json_response({"error": "Invalid Matrix credentials."}, content_type='application/json', status=400)
matrix = MatrixHttpApi(homeserver, token=token)
# Upload asset
with open(asset_filepath, 'rb') as f:
mxc_url = matrix.media_upload(f.read(), content_type, filename=filename)['content_uri']
upload_resp, _maybe_keys = await client.upload(f, content_type=content_type, filename=filename)
asset_data = f.seek(0) or f.read() # get size for info below
if not isinstance(upload_resp, UploadResponse) or not upload_resp.content_uri:
await client.close()
return web.json_response({"error": "Failed to upload asset to Matrix."}, content_type='application/json', status=500)
mxc_url = upload_resp.content_uri
workflow_json_mxc_url = matrix.media_upload(prompt['workflow'], 'application/json', filename='workflow.json')['content_uri']
# Upload workflow JSON
import io
workflow_json_bytes = json.dumps(prompt['workflow']).encode('utf-8')
workflow_io = io.BytesIO(workflow_json_bytes)
upload_workflow_resp, _maybe_keys = await client.upload(workflow_io, content_type='application/json', filename='workflow.json')
workflow_io.seek(0)
if not isinstance(upload_workflow_resp, UploadResponse) or not upload_workflow_resp.content_uri:
await client.close()
return web.json_response({"error": "Failed to upload workflow to Matrix."}, content_type='application/json', status=500)
workflow_json_mxc_url = upload_workflow_resp.content_uri
# Send text message
text_content = ""
if title:
text_content += f"{title}\n"
@@ -367,9 +398,44 @@ async def share_art(request):
text_content += f"{description}\n"
if credits:
text_content += f"\ncredits: {credits}\n"
matrix.send_message(comfyui_share_room_id, text_content)
matrix.send_content(comfyui_share_room_id, mxc_url, filename, 'm.image')
matrix.send_content(comfyui_share_room_id, workflow_json_mxc_url, 'workflow.json', 'm.file')
await client.room_send(
room_id=comfyui_share_room_id,
message_type="m.room.message",
content={"msgtype": "m.text", "body": text_content}
)
# Send image
await client.room_send(
room_id=comfyui_share_room_id,
message_type="m.room.message",
content={
"msgtype": "m.image",
"body": filename,
"url": mxc_url,
"info": {
"mimetype": content_type,
"size": len(asset_data)
}
}
)
# Send workflow JSON file
await client.room_send(
room_id=comfyui_share_room_id,
message_type="m.room.message",
content={
"msgtype": "m.file",
"body": "workflow.json",
"url": workflow_json_mxc_url,
"info": {
"mimetype": "application/json",
"size": len(workflow_json_bytes)
}
}
)
await client.close()
except:
import traceback
traceback.print_exc()


@@ -0,0 +1,142 @@
import os
import git
import logging
import traceback
from comfyui_manager.common import context
import folder_paths
from comfy.cli_args import args
import latent_preview
from comfyui_manager.glob import manager_core as core
from comfyui_manager.common import cm_global
comfy_ui_hash = "-"
comfyui_tag = None
def print_comfyui_version():
global comfy_ui_hash
global comfyui_tag
is_detached = False
try:
repo = git.Repo(os.path.dirname(folder_paths.__file__))
core.comfy_ui_revision = len(list(repo.iter_commits("HEAD")))
comfy_ui_hash = repo.head.commit.hexsha
cm_global.variables["comfyui.revision"] = core.comfy_ui_revision
core.comfy_ui_commit_datetime = repo.head.commit.committed_datetime
cm_global.variables["comfyui.commit_datetime"] = core.comfy_ui_commit_datetime
is_detached = repo.head.is_detached
current_branch = repo.active_branch.name
comfyui_tag = context.get_comfyui_tag()
try:
if (
not os.environ.get("__COMFYUI_DESKTOP_VERSION__")
and core.comfy_ui_commit_datetime.date()
< core.comfy_ui_required_commit_datetime.date()
):
logging.warning(
f"\n\n## [WARN] ComfyUI-Manager: Your ComfyUI version ({core.comfy_ui_revision})[{core.comfy_ui_commit_datetime.date()}] is too old. Please update to the latest version. ##\n\n"
)
except Exception:
pass
# process on_revision_detected -->
if "cm.on_revision_detected_handler" in cm_global.variables:
for k, f in cm_global.variables["cm.on_revision_detected_handler"]:
try:
f(core.comfy_ui_revision)
except Exception:
logging.error(f"[ERROR] '{k}' on_revision_detected_handler")
traceback.print_exc()
del cm_global.variables["cm.on_revision_detected_handler"]
else:
logging.warning(
"[ComfyUI-Manager] Some features are restricted due to your ComfyUI being outdated."
)
# <--
if current_branch == "master":
if comfyui_tag:
logging.info(
f"### ComfyUI Version: {comfyui_tag} | Released on '{core.comfy_ui_commit_datetime.date()}'"
)
else:
logging.info(
f"### ComfyUI Revision: {core.comfy_ui_revision} [{comfy_ui_hash[:8]}] | Released on '{core.comfy_ui_commit_datetime.date()}'"
)
else:
if comfyui_tag:
logging.info(
f"### ComfyUI Version: {comfyui_tag} on '{current_branch}' | Released on '{core.comfy_ui_commit_datetime.date()}'"
)
else:
logging.info(
f"### ComfyUI Revision: {core.comfy_ui_revision} on '{current_branch}' [{comfy_ui_hash[:8]}] | Released on '{core.comfy_ui_commit_datetime.date()}'"
)
except Exception:
if is_detached:
logging.info(
f"### ComfyUI Revision: {core.comfy_ui_revision} [{comfy_ui_hash[:8]}] *DETACHED | Released on '{core.comfy_ui_commit_datetime.date()}'"
)
else:
logging.info(
"### ComfyUI Revision: UNKNOWN (The currently installed ComfyUI is not a Git repository)"
)
def set_preview_method(method):
if method == "auto":
args.preview_method = latent_preview.LatentPreviewMethod.Auto
elif method == "latent2rgb":
args.preview_method = latent_preview.LatentPreviewMethod.Latent2RGB
elif method == "taesd":
args.preview_method = latent_preview.LatentPreviewMethod.TAESD
else:
args.preview_method = latent_preview.LatentPreviewMethod.NoPreviews
core.get_config()["preview_method"] = method
def set_update_policy(mode):
core.get_config()["update_policy"] = mode
def set_db_mode(mode):
core.get_config()["db_mode"] = mode
def setup_environment():
git_exe = core.get_config()["git_exe"]
if git_exe != "":
git.Git().update_environment(GIT_PYTHON_GIT_EXECUTABLE=git_exe)
def initialize_environment():
context.comfy_path = os.path.dirname(folder_paths.__file__)
core.js_path = os.path.join(context.comfy_path, "web", "extensions")
# Legacy database paths - kept for potential future use
# local_db_model = os.path.join(manager_util.comfyui_manager_path, "model-list.json")
# local_db_alter = os.path.join(manager_util.comfyui_manager_path, "alter-list.json")
# local_db_custom_node_list = os.path.join(
# manager_util.comfyui_manager_path, "custom-node-list.json"
# )
# local_db_extension_node_mappings = os.path.join(
# manager_util.comfyui_manager_path, "extension-node-map.json"
# )
set_preview_method(core.get_config()["preview_method"])
print_comfyui_version()
setup_environment()
core.check_invalid_nodes()


@@ -0,0 +1,60 @@
import locale
import sys
import re
def handle_stream(stream, prefix):
stream.reconfigure(encoding=locale.getpreferredencoding(), errors="replace")
for msg in stream:
if (
prefix == "[!]"
and ("it/s]" in msg or "s/it]" in msg)
and ("%|" in msg or "it [" in msg)
):
if msg.startswith("100%"):
print("\r" + msg, end="", file=sys.stderr),
else:
print("\r" + msg[:-1], end="", file=sys.stderr),
else:
if prefix == "[!]":
print(prefix, msg, end="", file=sys.stderr)
else:
print(prefix, msg, end="")
def convert_markdown_to_html(input_text):
pattern_a = re.compile(r"\[a/([^]]+)]\(([^)]+)\)")
pattern_w = re.compile(r"\[w/([^]]+)]")
pattern_i = re.compile(r"\[i/([^]]+)]")
pattern_bold = re.compile(r"\*\*([^*]+)\*\*")
pattern_white = re.compile(r"%%([^*]+)%%")
def replace_a(match):
return f"<a href='{match.group(2)}' target='blank'>{match.group(1)}</a>"
def replace_w(match):
return f"<p class='cm-warn-note'>{match.group(1)}</p>"
def replace_i(match):
return f"<p class='cm-info-note'>{match.group(1)}</p>"
def replace_bold(match):
return f"<B>{match.group(1)}</B>"
def replace_white(match):
return f"<font color='white'>{match.group(1)}</font>"
input_text = (
input_text.replace("\\[", "&#91;")
.replace("\\]", "&#93;")
.replace("<", "&lt;")
.replace(">", "&gt;")
)
result_text = re.sub(pattern_a, replace_a, input_text)
result_text = re.sub(pattern_w, replace_w, result_text)
result_text = re.sub(pattern_i, replace_i, result_text)
result_text = re.sub(pattern_bold, replace_bold, result_text)
result_text = re.sub(pattern_white, replace_white, result_text)
return result_text.replace("\n", "<BR>")
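A worked example of the conversion above (values illustrative):

convert_markdown_to_html("[a/docs](https://example.com) **bold**")
# -> "<a href='https://example.com' target='blank'>docs</a> <B>bold</B>"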


@@ -0,0 +1,161 @@
import os
import logging
import concurrent.futures
import folder_paths
from comfyui_manager.glob import manager_core as core
from comfyui_manager.glob.constants import model_dir_name_map, MODEL_DIR_NAMES
def get_model_dir(data, show_log=False):
if "download_model_base" in folder_paths.folder_names_and_paths:
models_base = folder_paths.folder_names_and_paths["download_model_base"][0][0]
else:
models_base = folder_paths.models_dir
# NOTE: Validate to prevent path traversal.
if any(char in data["filename"] for char in {"/", "\\", ":"}):
return None
def resolve_custom_node(save_path):
save_path = save_path[13:] # remove 'custom_nodes/'
# NOTE: Validate to prevent path traversal.
if save_path.startswith(os.path.sep) or ":" in save_path:
return None
repo_name = save_path.replace("\\", "/").split("/")[
0
] # get custom node repo name
# NOTE: The creation of files within the custom node path should be removed in the future.
repo_path = core.lookup_installed_custom_nodes_legacy(repo_name)
if repo_path is not None and repo_path[0]:
# Returns the retargeted path based on the actually installed repository
return os.path.join(os.path.dirname(repo_path[1]), save_path)
else:
return None
if data["save_path"] != "default":
if ".." in data["save_path"] or data["save_path"].startswith("/"):
if show_log:
logging.info(
f"[WARN] '{data['save_path']}' is not allowed path. So it will be saved into 'models/etc'."
)
base_model = os.path.join(models_base, "etc")
else:
if data["save_path"].startswith("custom_nodes"):
base_model = resolve_custom_node(data["save_path"])
if base_model is None:
if show_log:
logging.info(
f"[ComfyUI-Manager] The target custom node for model download is not installed: {data['save_path']}"
)
return None
else:
base_model = os.path.join(models_base, data["save_path"])
else:
model_dir_name = model_dir_name_map.get(data["type"].lower())
if model_dir_name is not None:
base_model = folder_paths.folder_names_and_paths[model_dir_name][0][0]
else:
base_model = os.path.join(models_base, "etc")
return base_model
def get_model_path(data, show_log=False):
base_model = get_model_dir(data, show_log)
if base_model is None:
return None
else:
if data["filename"] == "<huggingface>":
return os.path.join(base_model, os.path.basename(data["url"]))
else:
return os.path.join(base_model, data["filename"])
def check_model_installed(json_obj):
def is_exists(model_dir_name, filename, url):
if filename == "<huggingface>":
filename = os.path.basename(url)
dirs = folder_paths.get_folder_paths(model_dir_name)
for x in dirs:
if os.path.exists(os.path.join(x, filename)):
return True
return False
total_models_files = set()
for x in MODEL_DIR_NAMES:
for y in folder_paths.get_filename_list(x):
total_models_files.add(y)
def process_model_phase(item):
if (
"diffusion" not in item["filename"]
and "pytorch" not in item["filename"]
and "model" not in item["filename"]
):
# non-general name case
if item["filename"] in total_models_files:
item["installed"] = "True"
return
if item["save_path"] == "default":
model_dir_name = model_dir_name_map.get(item["type"].lower())
if model_dir_name is not None:
item["installed"] = str(
is_exists(model_dir_name, item["filename"], item["url"])
)
else:
item["installed"] = "False"
else:
model_dir_name = item["save_path"].split("/")[0]
if model_dir_name in folder_paths.folder_names_and_paths:
if is_exists(model_dir_name, item["filename"], item["url"]):
item["installed"] = "True"
if "installed" not in item:
if item["filename"] == "<huggingface>":
filename = os.path.basename(item["url"])
else:
filename = item["filename"]
fullpath = os.path.join(
folder_paths.models_dir, item["save_path"], filename
)
item["installed"] = "True" if os.path.exists(fullpath) else "False"
with concurrent.futures.ThreadPoolExecutor(8) as executor:
for item in json_obj["models"]:
executor.submit(process_model_phase, item)
async def check_whitelist_for_model(item):
from comfyui_manager.data_models import ManagerDatabaseSource
json_obj = await core.get_data_by_mode(ManagerDatabaseSource.cache.value, "model-list.json")
for x in json_obj.get("models", []):
if (
x["save_path"] == item["save_path"]
and x["base"] == item["base"]
and x["filename"] == item["filename"]
):
return True
json_obj = await core.get_data_by_mode(ManagerDatabaseSource.local.value, "model-list.json")
for x in json_obj.get("models", []):
if (
x["save_path"] == item["save_path"]
and x["base"] == item["base"]
and x["filename"] == item["filename"]
):
return True
return False
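Illustrative usage (the item fields are hypothetical): the function returns True only when an entry with the same save_path, base, and filename exists in the cached or local model-list.json:

import asyncio

item = {"save_path": "checkpoints", "base": "SD1.5", "filename": "model.safetensors"}
allowed = asyncio.run(check_whitelist_for_model(item))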


@@ -0,0 +1,65 @@
import concurrent.futures
from comfyui_manager.glob import manager_core as core
def check_state_of_git_node_pack(
node_packs, do_fetch=False, do_update_check=True, do_update=False
):
if do_fetch:
print("Start fetching...", end="")
elif do_update:
print("Start updating...", end="")
elif do_update_check:
print("Start update check...", end="")
def process_custom_node(item):
core.check_state_of_git_node_pack_single(
item, do_fetch, do_update_check, do_update
)
with concurrent.futures.ThreadPoolExecutor(4) as executor:
for k, v in node_packs.items():
if v.get("active_version") in ["unknown", "nightly"]:
executor.submit(process_custom_node, v)
if do_fetch:
print("\x1b[2K\rFetching done.")
elif do_update:
update_exists = any(
item.get("updatable", False) for item in node_packs.values()
)
if update_exists:
print("\x1b[2K\rUpdate done.")
else:
print("\x1b[2K\rAll extensions are already up-to-date.")
elif do_update_check:
print("\x1b[2K\rUpdate check done.")
def nickname_filter(json_obj):
preemptions_map = {}
for k, x in json_obj.items():
if "preemptions" in x[1]:
for y in x[1]["preemptions"]:
preemptions_map[y] = k
elif k.endswith("/ComfyUI"):
for y in x[0]:
preemptions_map[y] = k
updates = {}
for k, x in json_obj.items():
removes = set()
for y in x[0]:
k2 = preemptions_map.get(y)
if k2 is not None and k != k2:
removes.add(y)
if len(removes) > 0:
updates[k] = [y for y in x[0] if y not in removes]
for k, v in updates.items():
json_obj[k][0] = v
return json_obj
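A small worked example of the preemption pass above (hypothetical data; the shape key -> [nickname_list, metadata] is implied by the code):

packs = {
    "https://github.com/a/PackA": [["NodeX", "NodeY"], {"preemptions": ["NodeX"]}],
    "https://github.com/b/PackB": [["NodeX", "NodeZ"], {}],
}
nickname_filter(packs)
# PackA preempts "NodeX", so PackB's nickname list becomes ["NodeZ"]; PackA keeps both.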


@@ -0,0 +1,67 @@
from comfyui_manager.glob import manager_core as core
from comfy.cli_args import args
from comfyui_manager.data_models import SecurityLevel, RiskLevel, ManagerDatabaseSource
def is_loopback(address):
import ipaddress
try:
return ipaddress.ip_address(address).is_loopback
except ValueError:
return False
def is_allowed_security_level(level):
is_local_mode = is_loopback(args.listen)
is_personal_cloud = core.get_config()['network_mode'].lower() == 'personal_cloud'
if level == RiskLevel.block.value:
return False
elif level == RiskLevel.high_.value:
if is_local_mode:
return core.get_config()['security_level'] in [SecurityLevel.weak.value, SecurityLevel.normal_.value]
elif is_personal_cloud:
return core.get_config()['security_level'] == SecurityLevel.weak.value
else:
return False
elif level == RiskLevel.high.value:
if is_local_mode:
return core.get_config()['security_level'] in [SecurityLevel.weak.value, SecurityLevel.normal_.value]
else:
return core.get_config()['security_level'] == SecurityLevel.weak.value
elif level == RiskLevel.middle_.value:
if is_local_mode or is_personal_cloud:
return core.get_config()['security_level'] in [SecurityLevel.weak.value, SecurityLevel.normal.value, SecurityLevel.normal_.value]
else:
return False
elif level == RiskLevel.middle.value:
return core.get_config()['security_level'] in [SecurityLevel.weak.value, SecurityLevel.normal.value, SecurityLevel.normal_.value]
else:
return True
async def get_risky_level(files, pip_packages):
json_data1 = await core.get_data_by_mode(ManagerDatabaseSource.local.value, "custom-node-list.json")
json_data2 = await core.get_data_by_mode(
ManagerDatabaseSource.cache.value,
"custom-node-list.json",
channel_url="https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main",
)
all_urls = set()
for x in json_data1["custom_nodes"] + json_data2["custom_nodes"]:
all_urls.update(x.get("files", []))
for x in files:
if x not in all_urls:
return RiskLevel.high_.value
all_pip_packages = set()
for x in json_data1["custom_nodes"] + json_data2["custom_nodes"]:
all_pip_packages.update(x.get("pip", []))
for p in pip_packages:
if p not in all_pip_packages:
return RiskLevel.block.value
return RiskLevel.middle_.value


@@ -0,0 +1,50 @@
# ComfyUI-Manager: Frontend (js)
This directory contains the JavaScript frontend implementation for ComfyUI-Manager, providing the user interface components that interact with the backend API.
## Core Components
- **comfyui-manager.js**: Main entry point that initializes the manager UI and integrates with ComfyUI.
- **custom-nodes-manager.js**: Implements the UI for browsing, installing, and managing custom nodes.
- **model-manager.js**: Handles the model management interface for downloading and organizing AI models.
- **components-manager.js**: Manages reusable workflow components system.
- **snapshot.js**: Implements the snapshot system for backing up and restoring installations.
## Sharing Components
- **comfyui-share-common.js**: Base functionality for workflow sharing features.
- **comfyui-share-copus.js**: Integration with the Copus sharing platform.
- **comfyui-share-openart.js**: Integration with the OpenArt sharing platform.
- **comfyui-share-youml.js**: Integration with the YouML sharing platform.
## Utility Components
- **cm-api.js**: Client-side API wrapper for communication with the backend.
- **common.js**: Shared utilities and helper functions used across the frontend.
- **node_fixer.js**: Utilities for fixing disconnected links and repairing malformed nodes by recreating them while preserving connections.
- **popover-helper.js**: UI component for popup tooltips and contextual information.
- **turbogrid.esm.js**: Grid component library - https://github.com/cenfun/turbogrid
- **workflow-metadata.js**: Handles workflow metadata parsing, validation, and cross-repository compatibility, including versioning, dependency tracking, and resource management.
## Architecture
The frontend follows a modular component-based architecture:
1. **Integration Layer**: Connects with ComfyUI's existing UI system
2. **Manager Components**: Individual functional UI components (node manager, model manager, etc.)
3. **Sharing Components**: Platform-specific sharing implementations
4. **Utility Layer**: Reusable UI components and helpers
## Implementation Details
- The frontend integrates directly with ComfyUI's UI system through `app.js`
- Dialog-based UI for most manager functions to avoid cluttering the main interface
- Asynchronous API calls to handle backend operations without blocking the UI
## Styling
CSS files are included for specific components:
- **custom-nodes-manager.css**: Styling for the node management UI
- **model-manager.css**: Styling for the model management UI
This frontend implementation provides a comprehensive yet user-friendly interface for managing the ComfyUI ecosystem.


@@ -14,9 +14,9 @@ import { OpenArtShareDialog } from "./comfyui-share-openart.js";
import {
free_models, install_pip, install_via_git_url, manager_instance,
rebootAPI, setManagerInstance, show_message, customAlert, customPrompt,
infoToast, showTerminal, setNeedRestart
infoToast, showTerminal, setNeedRestart, generateUUID
} from "./common.js";
import { ComponentBuilderDialog, getPureName, load_components, set_component_policy } from "./components-manager.js";
import { ComponentBuilderDialog, load_components, set_component_policy } from "./components-manager.js";
import { CustomNodesManager } from "./custom-nodes-manager.js";
import { ModelManager } from "./model-manager.js";
import { SnapshotManager } from "./snapshot.js";
@@ -222,9 +222,6 @@ function isBeforeFrontendVersion(compareVersion) {
}
}
const is_legacy_front = () => isBeforeFrontendVersion('1.2.49');
const isNewManagerUI = () => isBeforeFrontendVersion('1.16.4');
document.head.appendChild(docStyle);
var update_comfyui_button = null;
@@ -234,7 +231,7 @@ var restart_stop_button = null;
var update_policy_combo = null;
let share_option = 'all';
var is_updating = false;
var batch_id = null;
// copied style from https://github.com/pythongosssss/ComfyUI-Custom-Scripts
@@ -476,14 +473,19 @@ async function updateComfyUI() {
let prev_text = update_comfyui_button.innerText;
update_comfyui_button.innerText = "Updating ComfyUI...";
set_inprogress_mode();
const response = await api.fetchApi('/v2/manager/queue/update_comfyui');
// set_inprogress_mode();
showTerminal();
is_updating = true;
await api.fetchApi('/v2/manager/queue/start');
batch_id = generateUUID();
let batch = {};
batch['batch_id'] = batch_id;
batch['update_comfyui'] = true;
const res = await api.fetchApi(`/v2/manager/queue/batch`, {
method: 'POST',
body: JSON.stringify(batch)
});
}
function showVersionSelectorDialog(versions, current, onSelect) {
@@ -658,18 +660,17 @@ async function onQueueStatus(event) {
const isElectron = 'electronAPI' in window;
if(event.detail.status == 'in_progress') {
set_inprogress_mode();
// set_inprogress_mode();
update_all_button.innerText = `in progress.. (${event.detail.done_count}/${event.detail.total_count})`;
}
else if(event.detail.status == 'done') {
else if(event.detail.status == 'all-done') {
reset_action_buttons();
if(!is_updating) {
}
else if(event.detail.status == 'batch-done') {
if(batch_id != event.detail.batch_id) {
return;
}
is_updating = false;
let success_list = [];
let failed_list = [];
let comfyui_state = null;
@@ -769,41 +770,28 @@ api.addEventListener("cm-queue-status", onQueueStatus);
async function updateAll(update_comfyui) {
update_all_button.innerText = "Updating...";
set_inprogress_mode();
// set_inprogress_mode();
var mode = manager_instance.datasrc_combo.value;
showTerminal();
batch_id = generateUUID();
let batch = {};
if(update_comfyui) {
update_all_button.innerText = "Updating ComfyUI...";
await api.fetchApi('/v2/manager/queue/update_comfyui');
batch['update_comfyui'] = true;
}
const response = await api.fetchApi(`/v2/manager/queue/update_all?mode=${mode}`);
batch['update_all'] = mode;
if (response.status == 401) {
customAlert('Another task is already in progress. Please stop the ongoing task first.');
}
else if(response.status == 200) {
is_updating = true;
await api.fetchApi('/v2/manager/queue/start');
}
const res = await api.fetchApi(`/v2/manager/queue/batch`, {
method: 'POST',
body: JSON.stringify(batch)
});
}
function newDOMTokenList(initialTokens) {
const tmp = document.createElement(`div`);
const classList = tmp.classList;
if (initialTokens) {
initialTokens.forEach(token => {
classList.add(token);
});
}
return classList;
}
/**
* Check whether the node is a potential output node (img, gif or video output)
*/
@@ -1526,11 +1514,6 @@ app.registerExtension({
tooltip: "Share"
}).element
);
const shouldShowLegacyMenuItems = !isNewManagerUI();
if (shouldShowLegacyMenuItems) {
app.menu?.settingsGroup.element.before(cmGroup.element);
}
}
catch(exception) {
console.log('ComfyUI is outdated. New style menu based features are disabled.');


@@ -552,6 +552,20 @@ export class ShareDialog extends ComfyDialog {
this.matrix_destination_checkbox.style.color = "var(--fg-color)";
this.matrix_destination_checkbox.checked = this.share_option === 'matrix'; //true;
try {
api.fetchApi(`/v2/manager/get_matrix_dep_status`)
.then(response => response.text())
.then(data => {
if(data == 'unavailable') {
matrix_destination_checkbox_text.style.textDecoration = "line-through";
this.matrix_destination_checkbox.disabled = true;
this.matrix_destination_checkbox.title = "It has been disabled because the 'matrix-nio' dependency is not installed. Please install this dependency to use the matrix sharing feature.";
matrix_destination_checkbox_text.title = "It has been disabled because the 'matrix-nio' dependency is not installed. Please install this dependency to use the matrix sharing feature.";
}
})
.catch(error => {});
} catch (error) {}
this.comfyworkflows_destination_checkbox = $el("input", { type: 'checkbox', id: "comfyworkflows_destination" }, [])
const comfyworkflows_destination_checkbox_text = $el("label", {}, [" ComfyWorkflows.com"])
this.comfyworkflows_destination_checkbox.style.color = "var(--fg-color)";


@@ -71,7 +71,7 @@ export class CopusShareDialog extends ComfyDialog {
this.allFiles = [];
this.titleNum = 0;
}
createButtons() {
const inputStyle = {
display: "block",
@@ -201,13 +201,15 @@ export class CopusShareDialog extends ComfyDialog {
});
this.LockInput = $el("input", {
type: "text",
placeholder: "",
style: {
placeholder: "0",
style: {
width: "100px",
padding: "7px",
paddingLeft: "30px",
borderRadius: "4px",
border: "1px solid #ddd",
boxSizing: "border-box",
position: "relative",
},
oninput: (event) => {
let input = event.target.value;
@@ -301,7 +303,7 @@ export class CopusShareDialog extends ComfyDialog {
},
[]
);
const titleNumDom = $el(
"label",
{
@@ -342,15 +344,11 @@ export class CopusShareDialog extends ComfyDialog {
["0/70"]
);
// Additional Inputs Section
const additionalInputsSection = $el(
"div",
{ style: { ...sectionStyle, } },
[
$el("label", { style: labelStyle }, ["3⃣ Title "]),
this.TitleInput,
titleNumDom,
]
);
const additionalInputsSection = $el("div", { style: { ...sectionStyle } }, [
$el("label", { style: labelStyle }, ["3⃣ Title "]),
this.TitleInput,
titleNumDom,
]);
const SubtitleSection = $el("div", { style: sectionStyle }, [
$el("label", { style: labelStyle }, ["4⃣ Subtitle "]),
this.SubTitleInput,
@@ -379,7 +377,7 @@ export class CopusShareDialog extends ComfyDialog {
});
const blockChainSection_lock = $el("div", { style: sectionStyle }, [
$el("label", { style: labelStyle }, ["6Pay to download"]),
$el("label", { style: labelStyle }, ["6Download threshold"]),
$el(
"label",
{
@@ -392,11 +390,42 @@ export class CopusShareDialog extends ComfyDialog {
},
[
this.radioButtonsCheck_lock,
$el("div", { style: { marginLeft: "5px" ,display:'flex',alignItems:'center'} }, [
$el("span", { style: { marginLeft: "5px" } }, ["ON"]),
$el("span", { style: { marginLeft: "20px",marginRight:'10px' ,color:'#fff'} }, ["Price US$"]),
this.LockInput
]),
$el(
"div",
{
style: {
marginLeft: "5px",
display: "flex",
alignItems: "center",
position: "relative",
},
},
[
$el("span", { style: { marginLeft: "5px" } }, ["ON"]),
$el(
"span",
{
style: {
marginLeft: "20px",
marginRight: "10px",
color: "#fff",
},
},
["Unlock with"]
),
$el("img", {
style: {
width: "16px",
height: "16px",
position: "absolute",
right: "75px",
zIndex: "100",
},
src: "https://static.copus.io/images/admin/202507/prod/e2919a1d8f3c2d99d3b8fe27ff94b841.png",
}),
this.LockInput,
]
),
]
),
$el(
@@ -404,14 +433,25 @@ export class CopusShareDialog extends ComfyDialog {
{ style: { display: "flex", alignItems: "center", cursor: "pointer" } },
[
this.radioButtonsCheckOff_lock,
$el("span", { style: { marginLeft: "5px" } }, ["OFF"]),
$el(
"div",
{
style: {
marginLeft: "5px",
display: "flex",
alignItems: "center",
},
},
[$el("span", { style: { marginLeft: "5px" } }, ["OFF"])]
),
]
),
$el(
"p",
{ style: { fontSize: "16px", color: "#fff", margin: "10px 0 0 0" } },
["Get paid from your workflow. You can change the price and withdraw your earnings on Copus."]
[
]
),
]);
@@ -432,7 +472,7 @@ export class CopusShareDialog extends ComfyDialog {
});
const blockChainSection = $el("div", { style: sectionStyle }, [
$el("label", { style: labelStyle }, ["7️⃣ Store on blockchain "]),
$el("label", { style: labelStyle }, ["8️⃣ Store on blockchain "]),
$el(
"label",
{
@@ -463,6 +503,139 @@ export class CopusShareDialog extends ComfyDialog {
),
]);
this.ratingRadioButtonsCheck0 = $el("input", {
type: "radio",
name: "content_rating",
value: "0",
id: "content_rating0",
});
this.ratingRadioButtonsCheck1 = $el("input", {
type: "radio",
name: "content_rating",
value: "1",
id: "content_rating1",
});
this.ratingRadioButtonsCheck2 = $el("input", {
type: "radio",
name: "content_rating",
value: "2",
id: "content_rating2",
});
this.ratingRadioButtonsCheck_1 = $el("input", {
type: "radio",
name: "content_rating",
value: "-1",
id: "content_rating_1",
checked: true,
});
// content rating
const contentRatingSection = $el("div", { style: sectionStyle }, [
$el("label", { style: labelStyle }, ["7⃣ Content rating "]),
$el(
"label",
{
style: {
marginTop: "10px",
display: "flex",
alignItems: "center",
cursor: "pointer",
},
},
[
this.ratingRadioButtonsCheck0,
$el("img", {
style: {
width: "12px",
height: "12px",
marginLeft: "5px",
},
src: "https://static.copus.io/images/client/202507/test/b9f17da83b054d53cd0cb4508c2c30dc.png",
}),
$el("span", { style: { marginLeft: "5px", color: "#fff" } }, [
"All ages",
]),
]
),
$el(
"p",
{ style: { fontSize: "10px", color: "#fff", marginLeft: "20px" } },
["Safe for all viewers; no profanity, violence, or mature themes."]
),
$el(
"label",
{ style: { display: "flex", alignItems: "center", cursor: "pointer" } },
[
this.ratingRadioButtonsCheck1,
$el("img", {
style: {
width: "12px",
height: "12px",
marginLeft: "5px",
},
src: "https://static.copus.io/images/client/202507/test/7848bc0d3690671df21c7cf00c4cfc81.png",
}),
$el("span", { style: { marginLeft: "5px", color: "#fff" } }, [
"13+ (Teen)",
]),
]
),
$el(
"p",
{ style: { fontSize: "10px", color: "#fff", marginLeft: "20px" } },
[
"Mild language, light themes, or cartoon violence; no explicit content. ",
]
),
$el(
"label",
{ style: { display: "flex", alignItems: "center", cursor: "pointer" } },
[
this.ratingRadioButtonsCheck2,
$el("img", {
style: {
width: "12px",
height: "12px",
marginLeft: "5px",
},
src: "https://static.copus.io/images/client/202507/test/bc51839c208d68d91173e43c23bff039.png",
}),
$el("span", { style: { marginLeft: "5px", color: "#fff" } }, [
"18+ (Explicit)",
]),
]
),
$el(
"p",
{ style: { fontSize: "10px", color: "#fff", marginLeft: "20px" } },
[
"Explicit content, including sexual content, strong violence, or intense themes. ",
]
),
$el(
"label",
{ style: { display: "flex", alignItems: "center", cursor: "pointer" } },
[
this.ratingRadioButtonsCheck_1,
$el("img", {
style: {
width: "12px",
height: "12px",
marginLeft: "5px",
},
src: "https://static.copus.io/images/client/202507/test/5c802fdcaaea4e7bbed37393eec0d5ba.png",
}),
$el("span", { style: { marginLeft: "5px", color: "#fff" } }, [
"Not Rated",
]),
]
),
$el(
"p",
{ style: { fontSize: "10px", color: "#fff", marginLeft: "20px" } },
["No age rating provided."]
),
]);
// Message Section
this.message = $el(
@@ -526,6 +699,7 @@ export class CopusShareDialog extends ComfyDialog {
DescriptionSection,
// contestSection,
blockChainSection_lock,
contentRatingSection,
blockChainSection,
this.message,
buttonsSection,
@@ -534,7 +708,7 @@ export class CopusShareDialog extends ComfyDialog {
return layout;
}
/**
* api
* api
* @param {url} path
* @param {params} options
* @param {statusText} statusText
@@ -587,7 +761,9 @@ export class CopusShareDialog extends ComfyDialog {
url: data,
});
} else {
throw new Error("make sure your API key is correct and try again later");
throw new Error(
"make sure your API key is correct and try again later"
);
}
} catch (e) {
if (e?.response?.status === 413) {
@@ -628,8 +804,15 @@ export class CopusShareDialog extends ComfyDialog {
subTitle: this.SubTitleInput.value,
content: this.descriptionInput.value,
storeOnChain: this.radioButtonsCheck.checked ? true : false,
lockState:this.radioButtonsCheck_lock.checked ? 2 : 0,
unlockPrice:this.LockInput.value,
lockState: this.radioButtonsCheck_lock.checked ? 2 : 0,
unlockPrice: this.LockInput.value,
rating: this.ratingRadioButtonsCheck0.checked
? 0
: this.ratingRadioButtonsCheck1.checked
? 1
: this.ratingRadioButtonsCheck2.checked
? 2
: -1,
};
if (!this.keyInput.value) {
@@ -644,8 +827,8 @@ export class CopusShareDialog extends ComfyDialog {
throw new Error("Title is required");
}
if(this.radioButtonsCheck_lock.checked){
if (!this.LockInput.value){
if (this.radioButtonsCheck_lock.checked) {
if (!this.LockInput.value) {
throw new Error("Price is required");
}
}
@@ -695,23 +878,23 @@ export class CopusShareDialog extends ComfyDialog {
"Uploading workflow..."
);
if (res.status && res.data.status && res.data) {
localStorage.setItem("copus_token",this.keyInput.value);
const { data } = res.data;
if (data) {
const url = `${DEFAULT_HOMEPAGE_URL}/work/${data}`;
this.message.innerHTML = `Workflow has been shared successfully. <a href="${url}" target="_blank">Click here to view it.</a>`;
this.previewImage.src = "";
this.previewImage.style.display = "none";
this.uploadedImages = [];
this.allFilesImages = [];
this.allFiles = [];
this.TitleInput.value = "";
this.SubTitleInput.value = "";
this.descriptionInput.value = "";
this.selectedFile = null;
}
}
if (res.status && res.data.status && res.data) {
localStorage.setItem("copus_token", this.keyInput.value);
const { data } = res.data;
if (data) {
const url = `${DEFAULT_HOMEPAGE_URL}/work/${data}`;
this.message.innerHTML = `Workflow has been shared successfully. <a href="${url}" target="_blank">Click here to view it.</a>`;
this.previewImage.src = "";
this.previewImage.style.display = "none";
this.uploadedImages = [];
this.allFilesImages = [];
this.allFiles = [];
this.TitleInput.value = "";
this.SubTitleInput.value = "";
this.descriptionInput.value = "";
this.selectedFile = null;
}
}
} catch (e) {
throw new Error("Error sharing workflow: " + e.message);
}
@@ -757,7 +940,7 @@ export class CopusShareDialog extends ComfyDialog {
this.element.style.display = "block";
this.previewImage.src = "";
this.previewImage.style.display = "none";
this.keyInput.value = apiToken!=null?apiToken:"";
this.keyInput.value = apiToken != null ? apiToken : "";
this.uploadedImages = [];
this.allFilesImages = [];
this.allFiles = [];

View File

@@ -630,6 +630,14 @@ export function showTooltip(target, text, className = 'cn-tooltip', styleMap = {
});
}
export function generateUUID() {
return 'xxxxxxxx-xxxx-4xxx-yxxx-xxxxxxxxxxxx'.replace(/[xy]/g, function(c) {
const r = Math.random() * 16 | 0;
const v = c === 'x' ? r : (r & 0x3 | 0x8);
return v.toString(16);
});
}
function initTooltip () {
const mouseenterHandler = (e) => {
const target = e.target;
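The `generateUUID()` helper added above builds an RFC 4122 version 4 identifier from `Math.random()`; it is not cryptographically strong, but it is sufficient for correlating queue events with the batch that spawned them. A minimal usage sketch (the surrounding values are illustrative):

```javascript
import { generateUUID } from "./common.js";

// Tag a unit of client work with a throwaway identifier, e.g. an install batch.
const batchId = generateUUID(); // e.g. "3f2c9a1e-8b4d-4c1a-9e2f-0d5b6a7c8e9f"
console.log(batchId.length);    // 36 characters: 32 hex digits + 4 hyphens
```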

View File

@@ -7,7 +7,7 @@ import {
fetchData, md5, icons, show_message, customConfirm, customAlert, customPrompt,
sanitizeHTML, infoToast, showTerminal, setNeedRestart,
storeColumnWidth, restoreColumnWidth, getTimeAgo, copyText, loadCss,
showPopover, hidePopover
showPopover, hidePopover, generateUUID
} from "./common.js";
// https://cenfun.github.io/turbogrid/api.html
@@ -714,6 +714,7 @@ export class CustomNodesManager {
link.href = rowItem.reference;
link.target = '_blank';
link.innerHTML = `<b>${title}</b>`;
link.title = rowItem.originalData.id;
container.appendChild(link);
return container;
@@ -1410,15 +1411,16 @@ export class CustomNodesManager {
let version_cnt = 0;
if(!is_enable) {
if(rowItem.cnr_latest != rowItem.originalData.active_version && obj.length > 0) {
versions.push('latest');
}
if(rowItem.originalData.active_version != 'nightly') {
versions.push('nightly');
default_version = 'nightly';
version_cnt++;
}
if(rowItem.cnr_latest != rowItem.originalData.active_version && obj.length > 0) {
versions.push('latest');
}
}
for(let v of obj) {
@@ -1439,13 +1441,6 @@ export class CustomNodesManager {
}
async installNodes(list, btn, title, selected_version) {
let stats = await api.fetchApi('/v2/manager/queue/status');
stats = await stats.json();
if(stats.is_processing) {
customAlert(`[ComfyUI-Manager] There are already tasks in progress. Please try again after it is completed. (${stats.done_count}/${stats.total_count})`);
return;
}
const { target, label, mode} = btn;
if(mode === "uninstall") {
@@ -1472,10 +1467,10 @@ export class CustomNodesManager {
let needRestart = false;
let errorMsg = "";
await api.fetchApi('/v2/manager/queue/reset');
let target_items = [];
let batch = {};
for (const hash of list) {
const item = this.grid.getRowItemBy("hash", hash);
target_items.push(item);
@@ -1517,23 +1512,11 @@ export class CustomNodesManager {
api_mode = 'reinstall';
}
const res = await api.fetchApi(`/v2/manager/queue/${api_mode}`, {
method: 'POST',
body: JSON.stringify(data)
});
if (res.status != 200) {
errorMsg = `'${item.title}': `;
if(res.status == 403) {
errorMsg += `This action is not allowed with this security level configuration.\n`;
} else if(res.status == 404) {
errorMsg += `With the current security level configuration, only custom nodes from the <B>"default channel"</B> can be installed.\n`;
} else {
errorMsg += await res.text() + '\n';
}
break;
if(batch[api_mode]) {
batch[api_mode].push(data);
}
else {
batch[api_mode] = [data];
}
}
@@ -1550,7 +1533,24 @@ export class CustomNodesManager {
}
}
else {
await api.fetchApi('/v2/manager/queue/start');
this.batch_id = generateUUID();
batch['batch_id'] = this.batch_id;
const res = await api.fetchApi(`/v2/manager/queue/batch`, {
method: 'POST',
body: JSON.stringify(batch)
});
let failed = await res.json();
if(failed.length > 0) {
for(let k in failed) {
let hash = failed[k];
const item = this.grid.getRowItemBy("hash", hash);
errorMsg = `[FAIL] ${item.title}`;
}
}
this.showStop();
showTerminal();
}
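Taken together, the changes above replace the per-item POSTs to `/v2/manager/queue/${api_mode}` (plus a final `/start` call) with a single `/v2/manager/queue/batch` request: items are grouped by API mode, the whole group is tagged with a client-generated `batch_id`, and the response body lists the `ui_id` hashes of items the server rejected. A sketch of the payload this code assembles (the per-item fields are illustrative, not a documented schema):

```javascript
// Hypothetical batch with one install and one update, tagged so the later
// 'batch-done' event can be matched back to this submission.
// (Runs inside an async method, as in installNodes above.)
const batch = {
  batch_id: generateUUID(),
  install: [{ id: "comfyui-example-pack", ui_id: "a1b2c3", selected_version: "nightly" }],
  update:  [{ id: "comfyui-other-pack",   ui_id: "d4e5f6" }],
};

const res = await api.fetchApi("/v2/manager/queue/batch", {
  method: "POST",
  body: JSON.stringify(batch),
});
const failed = await res.json(); // ui_id hashes of rejected items; empty on success
```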
@@ -1571,7 +1571,7 @@ export class CustomNodesManager {
self.grid.updateCell(item, "action");
self.grid.setRowSelected(item, false);
}
else if(event.detail.status == 'done') {
else if(event.detail.status == 'batch-done' && event.detail.batch_id == self.batch_id) {
self.hideStop();
self.onQueueCompleted(event.detail);
}
@@ -1626,17 +1626,35 @@ export class CustomNodesManager {
getNodesInWorkflow() {
let usedGroupNodes = new Set();
let allUsedNodes = {};
const visitedGraphs = new Set();
for(let k in app.graph._nodes) {
let node = app.graph._nodes[k];
const visitGraph = (graph) => {
if (!graph || visitedGraphs.has(graph)) return;
visitedGraphs.add(graph);
if(node.type.startsWith('workflow>')) {
usedGroupNodes.add(node.type.slice(9));
continue;
const nodes = graph._nodes || graph.nodes || [];
for(let k in nodes) {
let node = nodes[k];
if (!node) continue;
// If it's a SubgraphNode, recurse into its graph and continue searching
if (node.isSubgraphNode?.() && node.subgraph) {
visitGraph(node.subgraph);
}
if (!node.type) continue;
// Group nodes / components
if(typeof node.type === 'string' && node.type.startsWith('workflow>')) {
usedGroupNodes.add(node.type.slice(9));
continue;
}
allUsedNodes[node.type] = node;
}
};
allUsedNodes[node.type] = node;
}
visitGraph(app.graph);
for(let k of usedGroupNodes) {
let subnodes = app.graph.extra.groupNodes[k]?.nodes;

View File

@@ -3,7 +3,7 @@ import { $el } from "../../scripts/ui.js";
import {
manager_instance, rebootAPI,
fetchData, md5, icons, show_message, customAlert, infoToast, showTerminal,
storeColumnWidth, restoreColumnWidth, loadCss
storeColumnWidth, restoreColumnWidth, loadCss, generateUUID
} from "./common.js";
import { api } from "../../scripts/api.js";
@@ -81,10 +81,13 @@ export class ModelManager {
value: ""
}, {
label: "Installed",
value: "True"
value: "installed"
}, {
label: "Not Installed",
value: "False"
value: "not_installed"
}, {
label: "In Workflow",
value: "in_workflow"
}];
this.typeList = [{
@@ -254,12 +257,31 @@ export class ModelManager {
rowFilter: (rowItem) => {
const searchableColumns = ["name", "type", "base", "description", "filename", "save_path"];
const models_extensions = ['.ckpt', '.pt', '.pt2', '.bin', '.pth', '.safetensors', '.pkl', '.sft'];
let shouldShown = grid.highlightKeywordsFilter(rowItem, searchableColumns, this.keywords);
if (shouldShown) {
if(this.filter && rowItem.installed !== this.filter) {
return false;
if(this.filter) {
if (this.filter == "in_workflow") {
rowItem.in_workflow = null;
if (Array.isArray(app.graph._nodes)) {
app.graph._nodes.forEach((item, i) => {
if (Array.isArray(item.widgets_values)) {
item.widgets_values.forEach((_item, i) => {
if (rowItem.in_workflow === null && _item !== null && models_extensions.includes("." + _item.toString().split('.').pop())) {
let filename = _item.match(/([^\/]+)(?=\.\w+$)/)[0];
if (grid.highlightKeywordsFilter(rowItem, searchableColumns, filename)) {
rowItem.in_workflow = "True";
grid.highlightKeywordsFilter(rowItem, searchableColumns, "");
}
}
});
}
});
}
}
return ((this.filter == "installed" && rowItem.installed == "True") || (this.filter == "not_installed" && rowItem.installed == "False") || (this.filter == "in_workflow" && rowItem.in_workflow == "True"));
}
if(this.type && rowItem.type !== this.type) {
@@ -413,24 +435,16 @@ export class ModelManager {
}
async installModels(list, btn) {
let stats = await api.fetchApi('/v2/manager/queue/status');
stats = await stats.json();
if(stats.is_processing) {
customAlert(`[ComfyUI-Manager] There are already tasks in progress. Please try again after it is completed. (${stats.done_count}/${stats.total_count})`);
return;
}
btn.classList.add("cmm-btn-loading");
this.showError("");
let needRefresh = false;
let errorMsg = "";
await api.fetchApi('/v2/manager/queue/reset');
let target_items = [];
let batch = {};
for (const item of list) {
this.grid.scrollRowIntoView(item);
target_items.push(item);
@@ -446,21 +460,12 @@ export class ModelManager {
const data = item.originalData;
data.ui_id = item.hash;
const res = await api.fetchApi(`/v2/manager/queue/install_model`, {
method: 'POST',
body: JSON.stringify(data)
});
if (res.status != 200) {
errorMsg = `'${item.name}': `;
if(res.status == 403) {
errorMsg += `This action is not allowed with this security level configuration.\n`;
} else {
errorMsg += await res.text() + '\n';
}
break;
if(batch['install_model']) {
batch['install_model'].push(data);
}
else {
batch['install_model'] = [data];
}
}
@@ -477,7 +482,24 @@ export class ModelManager {
}
}
else {
await api.fetchApi('/v2/manager/queue/start');
this.batch_id = generateUUID();
batch['batch_id'] = this.batch_id;
const res = await api.fetchApi(`/v2/manager/queue/batch`, {
method: 'POST',
body: JSON.stringify(batch)
});
let failed = await res.json();
if(failed.length > 0) {
for(let k in failed) {
let hash = failed[k];
const item = this.grid.getRowItemBy("hash", hash);
errorMsg = `[FAIL] ${item.name}`;
}
}
this.showStop();
showTerminal();
}
@@ -497,7 +519,7 @@ export class ModelManager {
// self.grid.updateCell(item, "tg-column-select");
self.grid.updateRow(item);
}
else if(event.detail.status == 'done') {
else if(event.detail.status == 'batch-done') {
self.hideStop();
self.onQueueCompleted(event.detail);
}
@@ -795,4 +817,4 @@ export class ModelManager {
close() {
this.element.style.display = "none";
}
}
}

View File

@@ -153,6 +153,7 @@ app.registerExtension({
app.canvas.graph.add(new_node, false);
node_info_copy(this, new_node, true);
app.canvas.graph.remove(this);
requestAnimationFrame(() => app.canvas.setDirty(true, true))
},
});
});

View File

@@ -70,8 +70,8 @@ class WorkflowMetadataExtension {
if (cnr_id === "comfy-core") return; // don't allow hijacking comfy-core name
if (cnr_id) nodeProperties.cnr_id = cnr_id;
else nodeProperties.aux_id = aux_id;
if (ver) nodeProperties.ver = ver;
} else if (["nodes", "comfy_extras"].includes(moduleType)) {
if (ver) nodeProperties.ver = ver.trim();
} else if (["nodes", "comfy_extras", "comfy_api_nodes"].includes(moduleType)) {
nodeProperties.cnr_id = "comfy-core";
nodeProperties.ver = this.comfyCoreVersion;
}
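With the change above, versions are whitespace-trimmed before being stored, and nodes from `comfy_api_nodes` are now stamped as `comfy-core` alongside `nodes` and `comfy_extras`. The net effect is that every serialized node carries provenance in its properties; an illustrative (not authoritative) result:

```javascript
const node = { properties: {} }; // stand-in for a graph node
node.properties = {
  cnr_id: "comfy-core", // or aux_id: "github-user/some-pack" when unregistered
  ver: "0.3.47",        // illustrative version string
};
```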

View File

View File

File diff suppressed because it is too large

View File

File diff suppressed because it is too large

View File

@@ -0,0 +1,451 @@
import mimetypes
from ..common import context
from . import manager_core as core
import os
from aiohttp import web
import aiohttp
import json
import hashlib
import folder_paths
from server import PromptServer
import logging
import sys
try:
from nio import AsyncClient, LoginResponse, UploadResponse
matrix_nio_is_available = True
except Exception:
logging.warning(f"[ComfyUI-Manager] The matrix sharing feature has been disabled because the `matrix-nio` dependency is not installed.\n\tTo use this feature, please run the following command:\n\t{sys.executable} -m pip install matrix-nio\n")
matrix_nio_is_available = False
def extract_model_file_names(json_data):
"""Extract unique file names from the input JSON data."""
file_names = set()
model_filename_extensions = {'.safetensors', '.ckpt', '.pt', '.pth', '.bin'}
# Recursively search for file names in the JSON data
def recursive_search(data):
if isinstance(data, dict):
for value in data.values():
recursive_search(value)
elif isinstance(data, list):
for item in data:
recursive_search(item)
elif isinstance(data, str) and '.' in data:
file_names.add(os.path.basename(data)) # file_names.add(data)
recursive_search(json_data)
return [f for f in list(file_names) if os.path.splitext(f)[1] in model_filename_extensions]
def find_file_paths(base_dir, file_names):
"""Find the paths of the files in the base directory."""
file_paths = {}
for root, dirs, files in os.walk(base_dir):
# Exclude certain directories
dirs[:] = [d for d in dirs if d not in ['.git']]
for file in files:
if file in file_names:
file_paths[file] = os.path.join(root, file)
return file_paths
def compute_sha256_checksum(filepath):
"""Compute the SHA256 checksum of a file, in chunks"""
sha256 = hashlib.sha256()
with open(filepath, 'rb') as f:
for chunk in iter(lambda: f.read(4096), b''):
sha256.update(chunk)
return sha256.hexdigest()
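`compute_sha256_checksum` hashes in 4 KiB chunks, so even multi-gigabyte model files are fingerprinted in constant memory. The same pattern in JavaScript for comparison (a Node.js sketch, not part of this codebase):

```javascript
const { createHash } = require("node:crypto");
const { createReadStream } = require("node:fs");

// Chunked SHA-256 of a file without loading it into memory.
function sha256OfFile(path) {
  return new Promise((resolve, reject) => {
    const hash = createHash("sha256");
    createReadStream(path, { highWaterMark: 4096 }) // 4 KiB chunks
      .on("data", (chunk) => hash.update(chunk))
      .on("end", () => resolve(hash.digest("hex")))
      .on("error", reject);
  });
}
```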
@PromptServer.instance.routes.get("/v2/manager/share_option")
async def share_option(request):
if "value" in request.rel_url.query:
core.get_config()['share_option'] = request.rel_url.query['value']
core.write_config()
else:
return web.Response(text=core.get_config()['share_option'], status=200)
return web.Response(status=200)
def get_openart_auth():
if not os.path.exists(os.path.join(context.manager_files_path, ".openart_key")):
return None
try:
with open(os.path.join(context.manager_files_path, ".openart_key"), "r") as f:
openart_key = f.read().strip()
return openart_key if openart_key else None
except Exception:
return None
def get_matrix_auth():
if not os.path.exists(os.path.join(context.manager_files_path, "matrix_auth")):
return None
try:
with open(os.path.join(context.manager_files_path, "matrix_auth"), "r") as f:
matrix_auth = f.read()
homeserver, username, password = matrix_auth.strip().split("\n")
if not homeserver or not username or not password:
return None
return {
"homeserver": homeserver,
"username": username,
"password": password,
}
except Exception:
return None
def get_comfyworkflows_auth():
if not os.path.exists(os.path.join(context.manager_files_path, "comfyworkflows_sharekey")):
return None
try:
with open(os.path.join(context.manager_files_path, "comfyworkflows_sharekey"), "r") as f:
share_key = f.read()
if not share_key.strip():
return None
return share_key
except Exception:
return None
def get_youml_settings():
if not os.path.exists(os.path.join(context.manager_files_path, ".youml")):
return None
try:
with open(os.path.join(context.manager_files_path, ".youml"), "r") as f:
youml_settings = f.read().strip()
return youml_settings if youml_settings else None
except Exception:
return None
def set_youml_settings(settings):
with open(os.path.join(context.manager_files_path, ".youml"), "w") as f:
f.write(settings)
@PromptServer.instance.routes.get("/v2/manager/get_openart_auth")
async def api_get_openart_auth(request):
# print("Getting stored Matrix credentials...")
openart_key = get_openart_auth()
if not openart_key:
return web.Response(status=404)
return web.json_response({"openart_key": openart_key})
@PromptServer.instance.routes.post("/v2/manager/set_openart_auth")
async def api_set_openart_auth(request):
json_data = await request.json()
openart_key = json_data['openart_key']
with open(os.path.join(context.manager_files_path, ".openart_key"), "w") as f:
f.write(openart_key)
return web.Response(status=200)
@PromptServer.instance.routes.get("/v2/manager/get_matrix_auth")
async def api_get_matrix_auth(request):
# print("Getting stored Matrix credentials...")
matrix_auth = get_matrix_auth()
if not matrix_auth:
return web.Response(status=404)
return web.json_response(matrix_auth)
@PromptServer.instance.routes.get("/v2/manager/youml/settings")
async def api_get_youml_settings(request):
youml_settings = get_youml_settings()
if not youml_settings:
return web.Response(status=404)
return web.json_response(json.loads(youml_settings))
@PromptServer.instance.routes.post("/v2/manager/youml/settings")
async def api_set_youml_settings(request):
json_data = await request.json()
set_youml_settings(json.dumps(json_data))
return web.Response(status=200)
@PromptServer.instance.routes.get("/v2/manager/get_comfyworkflows_auth")
async def api_get_comfyworkflows_auth(request):
# Check if the user has provided Matrix credentials in a file called 'matrix_accesstoken'
# in the same directory as the ComfyUI base folder
# print("Getting stored Comfyworkflows.com auth...")
comfyworkflows_auth = get_comfyworkflows_auth()
if not comfyworkflows_auth:
return web.Response(status=404)
return web.json_response({"comfyworkflows_sharekey": comfyworkflows_auth})
@PromptServer.instance.routes.post("/v2/manager/set_esheep_workflow_and_images")
async def set_esheep_workflow_and_images(request):
json_data = await request.json()
with open(os.path.join(context.manager_files_path, "esheep_share_message.json"), "w", encoding='utf-8') as file:
json.dump(json_data, file, indent=4)
return web.Response(status=200)
@PromptServer.instance.routes.get("/v2/manager/get_esheep_workflow_and_images")
async def get_esheep_workflow_and_images(request):
with open(os.path.join(context.manager_files_path, "esheep_share_message.json"), 'r', encoding='utf-8') as file:
data = json.load(file)
return web.Response(status=200, text=json.dumps(data))
@PromptServer.instance.routes.get("/v2/manager/get_matrix_dep_status")
async def get_matrix_dep_status(request):
if matrix_nio_is_available:
return web.Response(status=200, text='available')
else:
return web.Response(status=200, text='unavailable')
def set_matrix_auth(json_data):
homeserver = json_data['homeserver']
username = json_data['username']
password = json_data['password']
with open(os.path.join(context.manager_files_path, "matrix_auth"), "w") as f:
f.write("\n".join([homeserver, username, password]))
def set_comfyworkflows_auth(comfyworkflows_sharekey):
with open(os.path.join(context.manager_files_path, "comfyworkflows_sharekey"), "w") as f:
f.write(comfyworkflows_sharekey)
def has_provided_matrix_auth(matrix_auth):
return matrix_auth['homeserver'].strip() and matrix_auth['username'].strip() and matrix_auth['password'].strip()
def has_provided_comfyworkflows_auth(comfyworkflows_sharekey):
return comfyworkflows_sharekey.strip()
@PromptServer.instance.routes.post("/v2/manager/share")
async def share_art(request):
# get json data
json_data = await request.json()
matrix_auth = json_data['matrix_auth']
comfyworkflows_sharekey = json_data['cw_auth']['cw_sharekey']
set_matrix_auth(matrix_auth)
set_comfyworkflows_auth(comfyworkflows_sharekey)
share_destinations = json_data['share_destinations']
credits = json_data['credits']
title = json_data['title']
description = json_data['description']
is_nsfw = json_data['is_nsfw']
prompt = json_data['prompt']
potential_outputs = json_data['potential_outputs']
selected_output_index = json_data['selected_output_index']
try:
output_to_share = potential_outputs[int(selected_output_index)]
except Exception:
# for now, pick the first output
output_to_share = potential_outputs[0]
assert output_to_share['type'] in ('image', 'output')
output_dir = folder_paths.get_output_directory()
if output_to_share['type'] == 'image':
asset_filename = output_to_share['image']['filename']
asset_subfolder = output_to_share['image']['subfolder']
if output_to_share['image']['type'] == 'temp':
output_dir = folder_paths.get_temp_directory()
else:
asset_filename = output_to_share['output']['filename']
asset_subfolder = output_to_share['output']['subfolder']
if asset_subfolder:
asset_filepath = os.path.join(output_dir, asset_subfolder, asset_filename)
else:
asset_filepath = os.path.join(output_dir, asset_filename)
# get the mime type of the asset
assetFileType = mimetypes.guess_type(asset_filepath)[0]
share_website_host = "UNKNOWN"
if "comfyworkflows" in share_destinations:
share_website_host = "https://comfyworkflows.com"
share_endpoint = f"{share_website_host}/api"
# get presigned urls
async with aiohttp.ClientSession(trust_env=True, connector=aiohttp.TCPConnector(verify_ssl=False)) as session:
async with session.post(
f"{share_endpoint}/get_presigned_urls",
json={
"assetFileName": asset_filename,
"assetFileType": assetFileType,
"workflowJsonFileName": 'workflow.json',
"workflowJsonFileType": 'application/json',
},
) as resp:
assert resp.status == 200
presigned_urls_json = await resp.json()
assetFilePresignedUrl = presigned_urls_json["assetFilePresignedUrl"]
assetFileKey = presigned_urls_json["assetFileKey"]
workflowJsonFilePresignedUrl = presigned_urls_json["workflowJsonFilePresignedUrl"]
workflowJsonFileKey = presigned_urls_json["workflowJsonFileKey"]
# upload asset
async with aiohttp.ClientSession(trust_env=True, connector=aiohttp.TCPConnector(verify_ssl=False)) as session:
async with session.put(assetFilePresignedUrl, data=open(asset_filepath, "rb")) as resp:
assert resp.status == 200
# upload workflow json
async with aiohttp.ClientSession(trust_env=True, connector=aiohttp.TCPConnector(verify_ssl=False)) as session:
async with session.put(workflowJsonFilePresignedUrl, data=json.dumps(prompt['workflow']).encode('utf-8')) as resp:
assert resp.status == 200
model_filenames = extract_model_file_names(prompt['workflow'])
model_file_paths = find_file_paths(folder_paths.base_path, model_filenames)
models_info = {}
for filename, filepath in model_file_paths.items():
models_info[filename] = {
"filename": filename,
"sha256_checksum": compute_sha256_checksum(filepath),
"relative_path": os.path.relpath(filepath, folder_paths.base_path),
}
# make a POST request to /api/upload_workflow with form data key values
async with aiohttp.ClientSession(trust_env=True, connector=aiohttp.TCPConnector(verify_ssl=False)) as session:
form = aiohttp.FormData()
if comfyworkflows_sharekey:
form.add_field("shareKey", comfyworkflows_sharekey)
form.add_field("source", "comfyui_manager")
form.add_field("assetFileKey", assetFileKey)
form.add_field("assetFileType", assetFileType)
form.add_field("workflowJsonFileKey", workflowJsonFileKey)
form.add_field("sharedWorkflowWorkflowJsonString", json.dumps(prompt['workflow']))
form.add_field("sharedWorkflowPromptJsonString", json.dumps(prompt['output']))
form.add_field("shareWorkflowCredits", credits)
form.add_field("shareWorkflowTitle", title)
form.add_field("shareWorkflowDescription", description)
form.add_field("shareWorkflowIsNSFW", str(is_nsfw).lower())
form.add_field("currentSnapshot", json.dumps(await core.get_current_snapshot()))
form.add_field("modelsInfo", json.dumps(models_info))
async with session.post(
f"{share_endpoint}/upload_workflow",
data=form,
) as resp:
assert resp.status == 200
upload_workflow_json = await resp.json()
workflowId = upload_workflow_json["workflowId"]
# check if the user has provided Matrix credentials
if matrix_nio_is_available and "matrix" in share_destinations:
comfyui_share_room_id = '!LGYSoacpJPhIfBqVfb:matrix.org'
filename = os.path.basename(asset_filepath)
content_type = assetFileType
try:
homeserver = 'matrix.org'
if matrix_auth:
homeserver = matrix_auth.get('homeserver', 'matrix.org')
homeserver = homeserver.replace("http://", "https://")
if not homeserver.startswith("https://"):
homeserver = "https://" + homeserver
client = AsyncClient(homeserver, matrix_auth['username'])
# Login
login_resp = await client.login(matrix_auth['password'])
if not isinstance(login_resp, LoginResponse) or not login_resp.access_token:
await client.close()
return web.json_response({"error": "Invalid Matrix credentials."}, content_type='application/json', status=400)
# Upload asset
with open(asset_filepath, 'rb') as f:
upload_resp, _maybe_keys = await client.upload(f, content_type=content_type, filename=filename)
asset_data = f.seek(0) or f.read() # get size for info below
if not isinstance(upload_resp, UploadResponse) or not upload_resp.content_uri:
await client.close()
return web.json_response({"error": "Failed to upload asset to Matrix."}, content_type='application/json', status=500)
mxc_url = upload_resp.content_uri
# Upload workflow JSON
import io
workflow_json_bytes = json.dumps(prompt['workflow']).encode('utf-8')
workflow_io = io.BytesIO(workflow_json_bytes)
upload_workflow_resp, _maybe_keys = await client.upload(workflow_io, content_type='application/json', filename='workflow.json')
workflow_io.seek(0)
if not isinstance(upload_workflow_resp, UploadResponse) or not upload_workflow_resp.content_uri:
await client.close()
return web.json_response({"error": "Failed to upload workflow to Matrix."}, content_type='application/json', status=500)
workflow_json_mxc_url = upload_workflow_resp.content_uri
# Send text message
text_content = ""
if title:
text_content += f"{title}\n"
if description:
text_content += f"{description}\n"
if credits:
text_content += f"\ncredits: {credits}\n"
await client.room_send(
room_id=comfyui_share_room_id,
message_type="m.room.message",
content={"msgtype": "m.text", "body": text_content}
)
# Send image
await client.room_send(
room_id=comfyui_share_room_id,
message_type="m.room.message",
content={
"msgtype": "m.image",
"body": filename,
"url": mxc_url,
"info": {
"mimetype": content_type,
"size": len(asset_data)
}
}
)
# Send workflow JSON file
await client.room_send(
room_id=comfyui_share_room_id,
message_type="m.room.message",
content={
"msgtype": "m.file",
"body": "workflow.json",
"url": workflow_json_mxc_url,
"info": {
"mimetype": "application/json",
"size": len(workflow_json_bytes)
}
}
)
await client.close()
except:
import traceback
traceback.print_exc()
return web.json_response({"error": "An error occurred when sharing your art to Matrix."}, content_type='application/json', status=500)
return web.json_response({
"comfyworkflows": {
"url": None if "comfyworkflows" not in share_destinations else f"{share_website_host}/workflows/{workflowId}",
},
"matrix": {
"success": None if "matrix" not in share_destinations else True
}
}, content_type='application/json', status=200)
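For reference, a client invokes the `/v2/manager/share` route above with a JSON body shaped like the fields the handler reads; a minimal illustrative call (values are placeholders, not working credentials):

```javascript
const body = {
  matrix_auth: { homeserver: "matrix.org", username: "@user:matrix.org", password: "..." },
  cw_auth: { cw_sharekey: "..." },
  share_destinations: ["comfyworkflows"],
  credits: "",
  title: "My workflow",
  description: "",
  is_nsfw: false,
  prompt: { workflow: { /* workflow json */ }, output: { /* prompt json */ } },
  potential_outputs: [
    { type: "image", image: { filename: "out.png", subfolder: "", type: "output" } },
  ],
  selected_output_index: 0,
};
await api.fetchApi("/v2/manager/share", { method: "POST", body: JSON.stringify(body) });
```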

View File

@@ -749,8 +749,8 @@
"save_path": "loras/HyperSD/SDXL",
"description": "Hyper-SD LoRA (4steps) - SDXL",
"reference": "https://huggingface.co/ByteDance/Hyper-SD",
"filename": "Hyper-SD15-4steps-lora.safetensors",
"url": "https://huggingface.co/ByteDance/Hyper-SD/resolve/main/Hyper-SD15-4steps-lora.safetensors",
"filename": "Hyper-SDXL-4steps-lora.safetensors",
"url": "https://huggingface.co/ByteDance/Hyper-SD/resolve/main/Hyper-SDXL-4steps-lora.safetensors",
"size": "787MB"
},
{
@@ -1973,6 +1973,97 @@
"url": "https://dl.fbaipublicfiles.com/segment_anything/sam_vit_b_01ec64.pth",
"size": "375.0MB"
},
{
"name": "sam2.1_hiera_tiny.pt",
"type": "sam2.1",
"base": "SAM",
"save_path": "sams",
"description": "Segmenty Anything SAM 2.1 hiera model (tiny)",
"reference": "https://github.com/facebookresearch/sam2#model-description",
"filename": "sam2.1_hiera_tiny.pt",
"url": "https://dl.fbaipublicfiles.com/segment_anything_2/092824/sam2.1_hiera_tiny.pt",
"size": "149.0MB"
},
{
"name": "sam2.1_hiera_small.pt",
"type": "sam2.1",
"base": "SAM",
"save_path": "sams",
"description": "Segmenty Anything SAM 2.1 hiera model (small)",
"reference": "https://github.com/facebookresearch/sam2#model-description",
"filename": "sam2.1_hiera_small.pt",
"url": "https://dl.fbaipublicfiles.com/segment_anything_2/092824/sam2.1_hiera_small.pt",
"size": "176.0MB"
},
{
"name": "sam2.1_hiera_base_plus.pt",
"type": "sam2.1",
"base": "SAM",
"save_path": "sams",
"description": "Segmenty Anything SAM 2.1 hiera model (base+)",
"reference": "https://github.com/facebookresearch/sam2#model-description",
"filename": "sam2.1_hiera_base_plus.pt",
"url": "https://dl.fbaipublicfiles.com/segment_anything_2/092824/sam2.1_hiera_base_plus.pt",
"size": "309.0MB"
},
{
"name": "sam2.1_hiera_large.pt",
"type": "sam2.1",
"base": "SAM",
"save_path": "sams",
"description": "Segmenty Anything SAM 2.1 hiera model (large)",
"reference": "https://github.com/facebookresearch/sam2#model-description",
"filename": "sam2.1_hiera_large.pt",
"url": "https://dl.fbaipublicfiles.com/segment_anything_2/092824/sam2.1_hiera_large.pt",
"size": "857.0MB"
},
{
"name": "sam2_hiera_tiny.pt",
"type": "sam2",
"base": "SAM",
"save_path": "sams",
"description": "Segmenty Anything SAM 2 hiera model (tiny)",
"reference": "https://github.com/facebookresearch/sam2#model-description",
"filename": "sam2_hiera_tiny.pt",
"url": "https://dl.fbaipublicfiles.com/segment_anything_2/072824/sam2_hiera_tiny.pt",
"size": "149.0MB"
},
{
"name": "sam2_hiera_small.pt",
"type": "sam2",
"base": "SAM",
"save_path": "sams",
"description": "Segmenty Anything SAM 2 hiera model (small)",
"reference": "https://github.com/facebookresearch/sam2#model-description",
"filename": "sam2_hiera_small.pt",
"url": "https://dl.fbaipublicfiles.com/segment_anything_2/072824/sam2_hiera_small.pt",
"size": "176.0MB"
},
{
"name": "sam2_hiera_base_plus.pt",
"type": "sam2",
"base": "SAM",
"save_path": "sams",
"description": "Segmenty Anything SAM 2 hiera model (base+)",
"reference": "https://github.com/facebookresearch/sam2#model-description",
"filename": "sam2_hiera_base_plus.pt",
"url": "https://dl.fbaipublicfiles.com/segment_anything_2/072824/sam2_hiera_base_plus.pt",
"size": "309.0MB"
},
{
"name": "sam2_hiera_large.pt",
"type": "sam2",
"base": "SAM",
"save_path": "sams",
"description": "Segmenty Anything SAM 2 hiera model (large)",
"reference": "https://github.com/facebookresearch/sam2#model-description",
"filename": "sam2_hiera_large.pt",
"url": "https://dl.fbaipublicfiles.com/segment_anything_2/072824/sam2_hiera_large.pt",
"size": "857.0MB"
},
{
"name": "seecoder v1.0",
"type": "seecoder",
@@ -4006,6 +4097,29 @@
"size": "649MB"
},
{
"name": "Comfy-Org/omnigen2_fp16.safetensors",
"type": "diffusion_model",
"base": "OmniGen2",
"save_path": "default",
"description": "OmniGen2 diffusion model. This is required for using OmniGen2.",
"reference": "https://huggingface.co/Comfy-Org/Omnigen2_ComfyUI_repackaged",
"filename": "omnigen2_fp16.safetensors",
"url": "https://huggingface.co/Comfy-Org/Omnigen2_ComfyUI_repackaged/resolve/main/split_files/diffusion_models/omnigen2_fp16.safetensors",
"size": "7.93GB"
},
{
"name": "Comfy-Org/qwen_2.5_vl_fp16.safetensors",
"type": "clip",
"base": "qwen-2.5",
"save_path": "default",
"description": "text encoder for OmniGen2",
"reference": "https://huggingface.co/Comfy-Org/Omnigen2_ComfyUI_repackaged",
"filename": "qwen_2.5_vl_fp16.safetensors",
"url": "https://huggingface.co/Comfy-Org/Omnigen2_ComfyUI_repackaged/resolve/main/split_files/text_encoders/qwen_2.5_vl_fp16.safetensors",
"size": "7.51GB"
},
{
"name": "FLUX.1 [Schnell] Diffusion model",
"type": "diffusion_model",
@@ -4023,7 +4137,7 @@
"type": "VAE",
"base": "FLUX.1",
"save_path": "vae/FLUX1",
"description": "FLUX.1 VAE model",
"description": "FLUX.1 VAE model\nNOTE: This VAE model can also be used for image generation with OmniGen2.",
"reference": "https://huggingface.co/black-forest-labs/FLUX.1-schnell",
"filename": "ae.safetensors",
"url": "https://huggingface.co/black-forest-labs/FLUX.1-schnell/resolve/main/ae.safetensors",
@@ -4931,6 +5045,105 @@
"size": "1.26GB"
},
{
"name": "Comfy-Org/Wan2.2 i2v high noise 14B (fp16)",
"type": "diffusion_model",
"base": "Wan2.2",
"save_path": "diffusion_models/Wan2.2",
"description": "Wan2.2 diffusion model for i2v high noise 14B (fp16)",
"reference": "https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged",
"filename": "wan2.2_i2v_high_noise_14B_fp16.safetensors",
"url": "https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/resolve/main/split_files/diffusion_models/wan2.2_i2v_high_noise_14B_fp16.safetensors",
"size": "28.6GB"
},
{
"name": "Comfy-Org/Wan2.2 i2v high noise 14B (fp8_scaled)",
"type": "diffusion_model",
"base": "Wan2.2",
"save_path": "diffusion_models/Wan2.2",
"description": "Wan2.2 diffusion model for i2v high noise 14B (fp8_scaled)",
"reference": "https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged",
"filename": "wan2.2_i2v_high_noise_14B_fp8_scaled.safetensors",
"url": "https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/resolve/main/split_files/diffusion_models/wan2.2_i2v_high_noise_14B_fp8_scaled.safetensors",
"size": "14.3GB"
},
{
"name": "Comfy-Org/Wan2.2 i2v low noise 14B (fp16)",
"type": "diffusion_model",
"base": "Wan2.2",
"save_path": "diffusion_models/Wan2.2",
"description": "Wan2.2 diffusion model for i2v low noise 14B (fp16)",
"reference": "https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged",
"filename": "wan2.2_i2v_low_noise_14B_fp16.safetensors",
"url": "https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/resolve/main/split_files/diffusion_models/wan2.2_i2v_low_noise_14B_fp16.safetensors",
"size": "28.6GB"
},
{
"name": "Comfy-Org/Wan2.2 i2v low noise 14B (fp8_scaled)",
"type": "diffusion_model",
"base": "Wan2.2",
"save_path": "diffusion_models/Wan2.2",
"description": "Wan2.2 diffusion model for i2v low noise 14B (fp8_scaled)",
"reference": "https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged",
"filename": "wan2.2_i2v_low_noise_14B_fp8_scaled.safetensors",
"url": "https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/resolve/main/split_files/diffusion_models/wan2.2_i2v_low_noise_14B_fp8_scaled.safetensors",
"size": "14.3GB"
},
{
"name": "Comfy-Org/Wan2.2 t2v high noise 14B (fp16)",
"type": "diffusion_model",
"base": "Wan2.2",
"save_path": "diffusion_models/Wan2.2",
"description": "Wan2.2 diffusion model for t2v high noise 14B (fp16)",
"reference": "https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged",
"filename": "wan2.2_t2v_high_noise_14B_fp16.safetensors",
"url": "https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/resolve/main/split_files/diffusion_models/wan2.2_t2v_high_noise_14B_fp16.safetensors",
"size": "28.6GB"
},
{
"name": "Comfy-Org/Wan2.2 t2v high noise 14B (fp8_scaled)",
"type": "diffusion_model",
"base": "Wan2.2",
"save_path": "diffusion_models/Wan2.2",
"description": "Wan2.2 diffusion model for t2v high noise 14B (fp8_scaled)",
"reference": "https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged",
"filename": "wan2.2_t2v_high_noise_14B_fp8_scaled.safetensors",
"url": "https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/resolve/main/split_files/diffusion_models/wan2.2_t2v_high_noise_14B_fp8_scaled.safetensors",
"size": "14.3GB"
},
{
"name": "Comfy-Org/Wan2.2 t2v low noise 14B (fp16)",
"type": "diffusion_model",
"base": "Wan2.2",
"save_path": "diffusion_models/Wan2.2",
"description": "Wan2.2 diffusion model for t2v low noise 14B (fp16)",
"reference": "https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged",
"filename": "wan2.2_t2v_low_noise_14B_fp16.safetensors",
"url": "https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/resolve/main/split_files/diffusion_models/wan2.2_t2v_low_noise_14B_fp16.safetensors",
"size": "28.6GB"
},
{
"name": "Comfy-Org/Wan2.2 t2v low noise 14B (fp8_scaled)",
"type": "diffusion_model",
"base": "Wan2.2",
"save_path": "diffusion_models/Wan2.2",
"description": "Wan2.2 diffusion model for t2v low noise 14B (fp8_scaled)",
"reference": "https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged",
"filename": "wan2.2_t2v_low_noise_14B_fp8_scaled.safetensors",
"url": "https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/resolve/main/split_files/diffusion_models/wan2.2_t2v_low_noise_14B_fp8_scaled.safetensors",
"size": "14.3GB"
},
{
"name": "Comfy-Org/Wan2.2 ti2v 5B (fp16)",
"type": "diffusion_model",
"base": "Wan2.2",
"save_path": "diffusion_models/Wan2.2",
"description": "Wan2.2 diffusion model for ti2v 5B (fp16)",
"reference": "https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged",
"filename": "wan2.2_ti2v_5B_fp16.safetensors",
"url": "https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/resolve/main/split_files/diffusion_models/wan2.2_ti2v_5B_fp16.safetensors",
"size": "10.0GB"
},
{
"name": "Comfy-Org/umt5_xxl_fp16.safetensors",
@@ -4953,6 +5166,195 @@
"filename": "umt5_xxl_fp8_e4m3fn_scaled.safetensors",
"url": "https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/resolve/main/split_files/text_encoders/umt5_xxl_fp8_e4m3fn_scaled.safetensors",
"size": "6.74GB"
},
{
"name": "lllyasviel/FramePackI2V_HY",
"type": "FramePackI2V",
"base": "FramePackI2V",
"save_path": "diffusers/lllyasviel",
"description": "[SNAPSHOT] This is the f1k1_x_g9_f1k1f2k2f16k4_td FramePack for HY. [w/You cannot download this item on ComfyUI-Manager versions below V3.18]",
"reference": "https://huggingface.co/lllyasviel/FramePackI2V_HY",
"filename": "<huggingface>",
"url": "lllyasviel/FramePackI2V_HY",
"size": "25.75GB"
},
{
"name": "LTX-Video Spatial Upscaler v0.9.7",
"type": "upscale",
"base": "upscale",
"save_path": "default",
"description": "Spatial upscaler model for LTX-Video. This model enhances the spatial resolution of generated videos.",
"reference": "https://huggingface.co/Lightricks/LTX-Video",
"filename": "ltxv-spatial-upscaler-0.9.7.safetensors",
"url": "https://huggingface.co/Lightricks/LTX-Video/resolve/main/ltxv-spatial-upscaler-0.9.7.safetensors",
"size": "505MB"
},
{
"name": "LTX-Video Temporal Upscaler v0.9.7",
"type": "upscale",
"base": "upscale",
"save_path": "default",
"description": "Temporal upscaler model for LTX-Video. This model enhances the temporal resolution and smoothness of generated videos.",
"reference": "https://huggingface.co/Lightricks/LTX-Video",
"filename": "ltxv-temporal-upscaler-0.9.7.safetensors",
"url": "https://huggingface.co/Lightricks/LTX-Video/resolve/main/ltxv-temporal-upscaler-0.9.7.safetensors",
"size": "524MB"
},
{
"name": "LTX-Video 13B v0.9.7",
"type": "checkpoint",
"base": "LTX-Video",
"save_path": "checkpoints/LTXV",
"description": "High-resolution quality LTX-Video 13B model.",
"reference": "https://huggingface.co/Lightricks/LTX-Video",
"filename": "ltxv-13b-0.9.7-dev.safetensors",
"url": "https://huggingface.co/Lightricks/LTX-Video/resolve/main/ltxv-13b-0.9.7-dev.safetensors",
"size": "28.6GB"
},
{
"name": "LTX-Video 13B FP8 v0.9.7",
"type": "checkpoint",
"base": "LTX-Video",
"save_path": "checkpoints/LTXV",
"description": "Quantized version of the LTX-Video 13B model, optimized for lower VRAM usage while maintaining high quality.",
"reference": "https://huggingface.co/Lightricks/LTX-Video",
"filename": "ltxv-13b-0.9.7-dev-fp8.safetensors",
"url": "https://huggingface.co/Lightricks/LTX-Video/resolve/main/ltxv-13b-0.9.7-dev-fp8.safetensors",
"size": "15.7GB"
},
{
"name": "LTX-Video 13B Distilled v0.9.7",
"type": "checkpoint",
"base": "LTX-Video",
"save_path": "checkpoints/LTXV",
"description": "Distilled version of the LTX-Video 13B model, providing improved efficiency while maintaining high-resolution quality.",
"reference": "https://huggingface.co/Lightricks/LTX-Video",
"filename": "ltxv-13b-0.9.7-distilled.safetensors",
"url": "https://huggingface.co/Lightricks/LTX-Video/resolve/main/ltxv-13b-0.9.7-distilled.safetensors",
"size": "28.6GB"
},
{
"name": "LTX-Video 13B Distilled FP8 v0.9.7",
"type": "checkpoint",
"base": "LTX-Video",
"save_path": "checkpoints/LTXV",
"description": "Quantized distilled version of the LTX-Video 13B model, optimized for even lower VRAM usage while maintaining quality.",
"reference": "https://huggingface.co/Lightricks/LTX-Video",
"filename": "ltxv-13b-0.9.7-distilled-fp8.safetensors",
"url": "https://huggingface.co/Lightricks/LTX-Video/resolve/main/ltxv-13b-0.9.7-distilled-fp8.safetensors",
"size": "15.7GB"
},
{
"name": "LTX-Video 2B Distilled v0.9.8",
"type": "checkpoint",
"base": "LTX-Video",
"save_path": "checkpoints/LTXV",
"description": "LTX-Video 2B distilled model v0.9.8 with improved prompt understanding and detail generation.",
"reference": "https://huggingface.co/Lightricks/LTX-Video",
"filename": "ltxv-2b-0.9.8-distilled.safetensors",
"url": "https://huggingface.co/Lightricks/LTX-Video/resolve/main/ltxv-2b-0.9.8-distilled.safetensors",
"size": "6.34GB"
},
{
"name": "LTX-Video 2B Distilled FP8 v0.9.8",
"type": "checkpoint",
"base": "LTX-Video",
"save_path": "checkpoints/LTXV",
"description": "Quantized LTX-Video 2B distilled model v0.9.8 with improved prompt understanding and detail generation, optimized for lower VRAM usage.",
"reference": "https://huggingface.co/Lightricks/LTX-Video",
"filename": "ltxv-2b-0.9.8-distilled-fp8.safetensors",
"url": "https://huggingface.co/Lightricks/LTX-Video/resolve/main/ltxv-2b-0.9.8-distilled-fp8.safetensors",
"size": "4.46GB"
},
{
"name": "LTX-Video 13B Distilled v0.9.8",
"type": "checkpoint",
"base": "LTX-Video",
"save_path": "checkpoints/LTXV",
"description": "LTX-Video 13B distilled model v0.9.8 with improved prompt understanding and detail generation.",
"reference": "https://huggingface.co/Lightricks/LTX-Video",
"filename": "ltxv-13b-0.9.8-distilled.safetensors",
"url": "https://huggingface.co/Lightricks/LTX-Video/resolve/main/ltxv-13b-0.9.8-distilled.safetensors",
"size": "28.6GB"
},
{
"name": "LTX-Video 13B Distilled FP8 v0.9.8",
"type": "checkpoint",
"base": "LTX-Video",
"save_path": "checkpoints/LTXV",
"description": "Quantized LTX-Video 13B distilled model v0.9.8 with improved prompt understanding and detail generation, optimized for lower VRAM usage.",
"reference": "https://huggingface.co/Lightricks/LTX-Video",
"filename": "ltxv-13b-0.9.8-distilled-fp8.safetensors",
"url": "https://huggingface.co/Lightricks/LTX-Video/resolve/main/ltxv-13b-0.9.8-distilled-fp8.safetensors",
"size": "15.7GB"
},
{
"name": "LTX-Video 13B Distilled LoRA v0.9.7",
"type": "lora",
"base": "LTX-Video",
"save_path": "loras",
"description": "A LoRA adapter that transforms the standard LTX-Video 13B model into a distilled version when loaded.",
"reference": "https://huggingface.co/Lightricks/LTX-Video",
"filename": "ltxv-13b-0.9.7-distilled-lora128.safetensors",
"url": "https://huggingface.co/Lightricks/LTX-Video/resolve/main/ltxv-13b-0.9.7-distilled-lora128.safetensors",
"size": "1.33GB"
},
{
"name": "LTX-Video ICLoRA Depth 13B v0.9.7",
"type": "lora",
"base": "LTX-Video",
"save_path": "loras",
"description": "In-Context LoRA (IC LoRA) for depth-controlled video-to-video generation with precise depth conditioning.",
"reference": "https://huggingface.co/Lightricks/LTX-Video-ICLoRA-depth-13b-0.9.7",
"filename": "ltxv-097-ic-lora-depth-control-comfyui.safetensors",
"url": "https://huggingface.co/Lightricks/LTX-Video-ICLoRA-depth-13b-0.9.7/resolve/main/ltxv-097-ic-lora-depth-control-comfyui.safetensors",
"size": "81.9MB"
},
{
"name": "LTX-Video ICLoRA Pose 13B v0.9.7",
"type": "lora",
"base": "LTX-Video",
"save_path": "loras",
"description": "In-Context LoRA (IC LoRA) for pose-controlled video-to-video generation with precise pose conditioning.",
"reference": "https://huggingface.co/Lightricks/LTX-Video-ICLoRA-pose-13b-0.9.7",
"filename": "ltxv-097-ic-lora-pose-control-comfyui.safetensors",
"url": "https://huggingface.co/Lightricks/LTX-Video-ICLoRA-pose-13b-0.9.7/resolve/main/ltxv-097-ic-lora-pose-control-comfyui.safetensors",
"size": "151MB"
},
{
"name": "LTX-Video ICLoRA Canny 13B v0.9.7",
"type": "lora",
"base": "LTX-Video",
"save_path": "loras",
"description": "In-Context LoRA (IC LoRA) for canny edge-controlled video-to-video generation with precise edge conditioning.",
"reference": "https://huggingface.co/Lightricks/LTX-Video-ICLoRA-canny-13b-0.9.7",
"filename": "ltxv-097-ic-lora-canny-control-comfyui.safetensors",
"url": "https://huggingface.co/Lightricks/LTX-Video-ICLoRA-canny-13b-0.9.7/resolve/main/ltxv-097-ic-lora-canny-control-comfyui.safetensors",
"size": "81.9MB"
},
{
"name": "LTX-Video ICLoRA Detailer 13B v0.9.8",
"type": "lora",
"base": "LTX-Video",
"save_path": "loras",
"description": "A video detailer model on top of LTXV_13B_098_DEV trained on custom data using In-Context LoRA (IC LoRA) method.",
"reference": "https://huggingface.co/Lightricks/LTX-Video-ICLoRA-detailer-13b-0.9.8",
"filename": "ltxv-098-ic-lora-detailer-comfyui.safetensors",
"url": "https://huggingface.co/Lightricks/LTX-Video-ICLoRA-detailer-13b-0.9.8/resolve/main/ltxv-098-ic-lora-detailer-comfyui.safetensors",
"size": "1.31GB"
},
{
"name": "Latent Bridge Matching for Image Relighting",
"type": "diffusion_model",
"base": "LBM",
"save_path": "diffusion_models/LBM",
"description": "Latent Bridge Matching (LBM) Relighting model",
"reference": "https://huggingface.co/jasperai/LBM_relighting",
"filename": "LBM_relighting.safetensors",
"url": "https://huggingface.co/jasperai/LBM_relighting/resolve/main/model.safetensors",
"size": "5.02GB"
}
]
}

View File

@@ -12,10 +12,10 @@ import ast
import logging
import traceback
from .glob import security_check
from .glob import manager_util
from .glob import cm_global
from .glob import manager_downloader
from .common import security_check
from .common import manager_util
from .common import cm_global
from .common import manager_downloader
import folder_paths
manager_util.add_python_path_to_env()
@@ -35,10 +35,9 @@ else:
def current_timestamp():
return str(time.time()).split('.')[0]
security_check.security_check()
cm_global.pip_blacklist = {'torch', 'torchsde', 'torchvision'}
cm_global.pip_downgrade_blacklist = ['torch', 'torchsde', 'torchvision', 'transformers', 'safetensors', 'kornia']
cm_global.pip_blacklist = {'torch', 'torchaudio', 'torchsde', 'torchvision'}
cm_global.pip_downgrade_blacklist = ['torch', 'torchaudio', 'torchsde', 'torchvision', 'transformers', 'safetensors', 'kornia']
def skip_pip_spam(x):
@@ -65,12 +64,8 @@ comfy_path = os.environ.get('COMFYUI_PATH')
comfy_base_path = os.environ.get('COMFYUI_FOLDERS_BASE_PATH')
if comfy_path is None:
try:
comfy_path = os.path.abspath(os.path.dirname(sys.modules['__main__'].__file__))
os.environ['COMFYUI_PATH'] = comfy_path
except:
print("[ComfyUI-Manager] environment variable 'COMFYUI_PATH' is not specified.")
exit(-1)
comfy_path = os.path.abspath(os.path.dirname(sys.modules['__main__'].__file__))
os.environ['COMFYUI_PATH'] = comfy_path
if comfy_base_path is None:
comfy_base_path = comfy_path
@@ -91,9 +86,6 @@ manager_pip_blacklist_path = os.path.join(manager_files_path, "pip_blacklist.lis
restore_snapshot_path = os.path.join(manager_files_path, "startup-scripts", "restore-snapshot.json")
manager_config_path = os.path.join(manager_files_path, 'config.ini')
cm_cli_path = os.path.join(comfyui_manager_path, "cm-cli.py")
default_conf = {}
def read_config():
@@ -118,13 +110,14 @@ def check_file_logging():
read_config()
read_uv_mode()
security_check.security_check()
check_file_logging()
cm_global.pip_overrides = {'numpy': 'numpy<2'}
cm_global.pip_overrides = {}
if os.path.exists(manager_pip_overrides_path):
with open(manager_pip_overrides_path, 'r', encoding="UTF-8", errors="ignore") as json_file:
cm_global.pip_overrides = json.load(json_file)
cm_global.pip_overrides['numpy'] = 'numpy<2'
if os.path.exists(manager_pip_blacklist_path):
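Note the ordering above: `pip_overrides` now starts empty, user entries from `pip_overrides.json` are merged in, and the numpy pin is applied last, so user configuration can remap other packages but can no longer lift the `numpy<2` pin. A hypothetical `pip_overrides.json` illustrating the merge (keys are requested packages, values the replacement specifiers):

```json
{
  "PIL": "Pillow",
  "opencv-python": "opencv-contrib-python",
  "numpy": "numpy>=2"
}
```

With the code above, the first two entries take effect, while the numpy entry is silently overwritten back to `numpy<2` after the file is loaded.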
@@ -337,7 +330,12 @@ try:
log_file.write(message)
else:
log_file.write(f"[{timestamp}] {message}")
log_file.flush()
try:
log_file.flush()
except Exception:
pass
self.last_char = message if message == '' else message[-1]
if not file_only:
@@ -350,7 +348,10 @@ try:
original_stderr.flush()
def flush(self):
log_file.flush()
try:
log_file.flush()
except Exception:
pass
with std_log_lock:
if self.is_stdout:
@@ -438,35 +439,6 @@ except Exception as e:
print(f"[ComfyUI-Manager] Logging failed: {e}")
def ensure_dependencies():
try:
import git # noqa: F401
import toml # noqa: F401
import rich # noqa: F401
import chardet # noqa: F401
except ModuleNotFoundError:
my_path = os.path.dirname(__file__)
requirements_path = os.path.join(my_path, "requirements.txt")
print("## ComfyUI-Manager: installing dependencies. (GitPython)")
try:
subprocess.check_output(manager_util.make_pip_cmd(['install', '-r', requirements_path]))
except subprocess.CalledProcessError:
print("## [ERROR] ComfyUI-Manager: Attempting to reinstall dependencies using an alternative method.")
try:
subprocess.check_output(manager_util.make_pip_cmd(['install', '--user', '-r', requirements_path]))
except subprocess.CalledProcessError:
print("## [ERROR] ComfyUI-Manager: Failed to install the GitPython package in the correct Python environment. Please install it manually in the appropriate environment. (You can seek help at https://app.element.io/#/room/%23comfyui_space%3Amatrix.org)")
try:
print("## ComfyUI-Manager: installing dependencies done.")
except:
# maybe we should sys.exit() here? there is at least two screens worth of error messages still being pumped after our error messages
print("## [ERROR] ComfyUI-Manager: GitPython package seems to be installed, but failed to load somehow. Make sure you have a working git client installed")
ensure_dependencies()
print("** ComfyUI startup time:", current_timestamp())
print("** Platform:", platform.system())
print("** Python version:", sys.version)
@@ -490,7 +462,7 @@ def read_downgrade_blacklist():
items = [x.strip() for x in items if x != '']
cm_global.pip_downgrade_blacklist += items
cm_global.pip_downgrade_blacklist = list(set(cm_global.pip_downgrade_blacklist))
except:
except Exception:
pass
@@ -596,7 +568,10 @@ if os.path.exists(restore_snapshot_path):
if 'COMFYUI_FOLDERS_BASE_PATH' not in new_env:
new_env["COMFYUI_FOLDERS_BASE_PATH"] = comfy_path
cmd_str = [sys.executable, cm_cli_path, 'restore-snapshot', restore_snapshot_path]
if 'COMFYUI_PATH' not in new_env:
new_env['COMFYUI_PATH'] = os.path.dirname(folder_paths.__file__)
cmd_str = [sys.executable, '-m', 'comfyui_manager.cm_cli', 'restore-snapshot', restore_snapshot_path]
exit_code = process_wrap(cmd_str, custom_nodes_base_path, handler=msg_capture, env=new_env)
if exit_code != 0:
@@ -623,6 +598,7 @@ def execute_lazy_install_script(repo_path, executable):
lines = manager_util.robust_readlines(requirements_path)
for line in lines:
package_name = remap_pip_package(line.strip())
package_name = package_name.split('#')[0].strip()
if package_name and not is_installed(package_name):
if '--index-url' in package_name:
s = package_name.split('--index-url')

docs/README.md
View File

@@ -0,0 +1,41 @@
# ComfyUI-Manager: Documentation
This directory contains documentation for the ComfyUI-Manager, providing guides and tutorials for users in multiple languages.
## Directory Structure
The documentation is organized into language-specific directories:
- **en/**: English documentation
- **ko/**: Korean documentation
## Core Documentation Files
### Command-Line Interface
- **cm-cli.md**: Documentation for the ComfyUI-Manager Command Line Interface (CLI), which allows using manager functionality without the UI.
### Advanced Features
- **use_aria2.md**: Guide for using the aria2 download accelerator with ComfyUI-Manager for faster model downloads.
## Documentation Standards
The documentation follows these standards:
1. **Markdown Format**: All documentation is written in Markdown for easy rendering on GitHub and other platforms
2. **Language-specific Directories**: Content is separated by language to facilitate localization
3. **Feature-focused Documentation**: Each major feature has its own documentation file
4. **Updated with Releases**: Documentation is kept in sync with software releases
## Contributing to Documentation
When contributing new documentation:
1. Place files in the appropriate language directory
2. Use clear, concise language appropriate for the target audience
3. Include examples where helpful
4. Consider adding screenshots or diagrams for complex features
5. Maintain consistent formatting with existing documentation
This documentation directory will continue to grow to support the expanding feature set of ComfyUI-Manager.

node_db/README.md
View File

@@ -0,0 +1,95 @@
# ComfyUI-Manager: Node Database (node_db)
This directory contains the JSON database files that power ComfyUI-Manager's legacy node registry system. While the manager is gradually transitioning to the online Custom Node Registry (CNR), these local JSON files continue to provide important metadata about custom nodes, models, and their integrations.
## Directory Structure
The node_db directory is organized into several subdirectories, each serving a specific purpose:
- **dev/**: Development channel files with latest additions and experimental nodes
- **legacy/**: Historical/legacy nodes that may require special handling
- **new/**: New nodes that have passed initial verification but are still being evaluated
- **forked/**: Forks of existing nodes with modifications
- **tutorial/**: Example and tutorial nodes designed for learning purposes
## Core Database Files
Each subdirectory contains a standard set of JSON files:
- **custom-node-list.json**: Primary database of custom nodes with metadata
- **extension-node-map.json**: Maps each extension to the individual node classes it provides
- **model-list.json**: Catalog of models that can be downloaded through the manager
- **alter-list.json**: Alternative implementations of existing nodes, offered for compatibility or extended functionality
- **github-stats.json**: GitHub repository statistics for node popularity metrics
## Database Schema
### custom-node-list.json
```json
{
"custom_nodes": [
{
"title": "Node display name",
"name": "Repository name",
"reference": "Original repository if forked",
"files": ["GitHub URL or other source location"],
"install_type": "git",
"description": "Description of the node's functionality",
"pip": ["optional pip dependencies"],
"js": ["optional JavaScript files"],
"tags": ["categorization tags"]
}
]
}
```
### extension-node-map.json
```json
{
"extension-id": [
["list", "of", "node", "classes"],
{
"author": "Author name",
"description": "Extension description",
"nodename_pattern": "Optional regex pattern for node name matching"
}
]
}
```
## Transition to Custom Node Registry (CNR)
This local database system is being progressively replaced by the online Custom Node Registry (CNR), which provides:
- Real-time updates without manual JSON maintenance
- Improved versioning support
- Better security validation
- Enhanced metadata
The Manager supports both systems simultaneously during the transition period.
## Implementation Details
- The database follows a channel-based architecture for different sources
- Multiple database modes are supported: Channel, Local, and Remote
- The system supports differential updates to minimize bandwidth usage
- Security levels are enforced for different node installations based on source
## Usage in the Application
The Manager's backend uses these database files to:
1. Provide browsable lists of available nodes and models
2. Resolve dependencies for installation
3. Track updates and new versions
4. Map node classes to their source repositories (see the sketch after this list)
5. Assess risk levels for installation security
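
As a concrete illustration of item 4, the sketch below maps a node class name back to the extension that provides it, following the `extension-node-map.json` schema shown earlier (the file path and class name are illustrative; regex-based `nodename_pattern` matching is omitted for brevity):

```javascript
const fs = require("node:fs");

// Load the map: { "extension-id": [ [node classes], { metadata } ], ... }
const map = JSON.parse(fs.readFileSync("extension-node-map.json", "utf8"));

function findProvider(nodeClass) {
  for (const [extensionId, [nodeClasses, meta]] of Object.entries(map)) {
    if (nodeClasses.includes(nodeClass)) {
      return { extensionId, author: meta.author };
    }
  }
  return null; // not provided by any known extension
}

console.log(findProvider("SomeCustomNode")); // illustrative class name
```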
## Maintenance Scripts
Each subdirectory contains a `scan.sh` script that assists with:
- Scanning repositories for new nodes
- Updating metadata
- Validating database integrity
- Generating proper JSON structures
This database system enables a flexible, secure, and comprehensive management system for the ComfyUI ecosystem while the transition to CNR continues.

View File

File diff suppressed because it is too large

View File

File diff suppressed because it is too large

View File

File diff suppressed because it is too large

View File

@@ -1,3 +1,3 @@
#!/bin/bash
rm ~/.tmp/dev/*.py > /dev/null 2>&1
python ../../scanner.py ~/.tmp/dev
python ../../scanner.py ~/.tmp/dev $*

View File

@@ -1,5 +1,35 @@
{
"custom_nodes": [
{
"author": "synchronicity-labs",
"title": "ComfyUI Sync Lipsync Node",
"reference": "https://github.com/synchronicity-labs/sync-comfyui",
"files": [
"https://github.com/synchronicity-labs/sync-comfyui"
],
"install_type": "git-clone",
"description": "This custom node allows you to perform audio-video lip synchronization inside ComfyUI using a simple interface."
},
{
"author": "joaomede",
"title": "ComfyUI-Unload-Model-Fork",
"reference": "https://github.com/joaomede/ComfyUI-Unload-Model-Fork",
"files": [
"https://github.com/joaomede/ComfyUI-Unload-Model-Fork"
],
"install_type": "git-clone",
"description": "For unloading a model or all models, using the memory management that is already present in ComfyUI. Copied from [a/https://github.com/willblaschko/ComfyUI-Unload-Models](https://github.com/willblaschko/ComfyUI-Unload-Models) but without the unnecessary extra stuff."
},
{
"author": "SanDiegoDude",
"title": "ComfyUI-HiDream-Sampler [WIP]",
"reference": "https://github.com/SanDiegoDude/ComfyUI-HiDream-Sampler",
"files": [
"https://github.com/SanDiegoDude/ComfyUI-HiDream-Sampler"
],
"install_type": "git-clone",
"description": "A collection of enhanced nodes for ComfyUI that provide powerful additional functionality to your workflows.\nNOTE: The files in the repo are not organized."
},
{
"author": "PramaLLC",
"title": "ComfyUI BEN - Background Erase Network",

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

@@ -1,5 +1,320 @@
{
"models": [
{
"name": "Comfy-Org/Wan2.2 i2v high noise 14B (fp16)",
"type": "diffusion_model",
"base": "Wan2.2",
"save_path": "diffusion_models/Wan2.2",
"description": "Wan2.2 diffusion model for i2v high noise 14B (fp16)",
"reference": "https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged",
"filename": "wan2.2_i2v_high_noise_14B_fp16.safetensors",
"url": "https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/resolve/main/split_files/diffusion_models/wan2.2_i2v_high_noise_14B_fp16.safetensors",
"size": "28.6GB"
},
{
"name": "Comfy-Org/Wan2.2 i2v high noise 14B (fp8_scaled)",
"type": "diffusion_model",
"base": "Wan2.2",
"save_path": "diffusion_models/Wan2.2",
"description": "Wan2.2 diffusion model for i2v high noise 14B (fp8_scaled)",
"reference": "https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged",
"filename": "wan2.2_i2v_high_noise_14B_fp8_scaled.safetensors",
"url": "https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/resolve/main/split_files/diffusion_models/wan2.2_i2v_high_noise_14B_fp8_scaled.safetensors",
"size": "14.3GB"
},
{
"name": "Comfy-Org/Wan2.2 i2v low noise 14B (fp16)",
"type": "diffusion_model",
"base": "Wan2.2",
"save_path": "diffusion_models/Wan2.2",
"description": "Wan2.2 diffusion model for i2v low noise 14B (fp16)",
"reference": "https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged",
"filename": "wan2.2_i2v_low_noise_14B_fp16.safetensors",
"url": "https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/resolve/main/split_files/diffusion_models/wan2.2_i2v_low_noise_14B_fp16.safetensors",
"size": "28.6GB"
},
{
"name": "Comfy-Org/Wan2.2 i2v low noise 14B (fp8_scaled)",
"type": "diffusion_model",
"base": "Wan2.2",
"save_path": "diffusion_models/Wan2.2",
"description": "Wan2.2 diffusion model for i2v low noise 14B (fp8_scaled)",
"reference": "https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged",
"filename": "wan2.2_i2v_low_noise_14B_fp8_scaled.safetensors",
"url": "https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/resolve/main/split_files/diffusion_models/wan2.2_i2v_low_noise_14B_fp8_scaled.safetensors",
"size": "14.3GB"
},
{
"name": "Comfy-Org/Wan2.2 t2v high noise 14B (fp16)",
"type": "diffusion_model",
"base": "Wan2.2",
"save_path": "diffusion_models/Wan2.2",
"description": "Wan2.2 diffusion model for t2v high noise 14B (fp16)",
"reference": "https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged",
"filename": "wan2.2_t2v_high_noise_14B_fp16.safetensors",
"url": "https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/resolve/main/split_files/diffusion_models/wan2.2_t2v_high_noise_14B_fp16.safetensors",
"size": "28.6GB"
},
{
"name": "Comfy-Org/Wan2.2 t2v high noise 14B (fp8_scaled)",
"type": "diffusion_model",
"base": "Wan2.2",
"save_path": "diffusion_models/Wan2.2",
"description": "Wan2.2 diffusion model for t2v high noise 14B (fp8_scaled)",
"reference": "https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged",
"filename": "wan2.2_t2v_high_noise_14B_fp8_scaled.safetensors",
"url": "https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/resolve/main/split_files/diffusion_models/wan2.2_t2v_high_noise_14B_fp8_scaled.safetensors",
"size": "14.3GB"
},
{
"name": "Comfy-Org/Wan2.2 t2v low noise 14B (fp16)",
"type": "diffusion_model",
"base": "Wan2.2",
"save_path": "diffusion_models/Wan2.2",
"description": "Wan2.2 diffusion model for t2v low noise 14B (fp16)",
"reference": "https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged",
"filename": "wan2.2_t2v_low_noise_14B_fp16.safetensors",
"url": "https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/resolve/main/split_files/diffusion_models/wan2.2_t2v_low_noise_14B_fp16.safetensors",
"size": "28.6GB"
},
{
"name": "Comfy-Org/Wan2.2 t2v low noise 14B (fp8_scaled)",
"type": "diffusion_model",
"base": "Wan2.2",
"save_path": "diffusion_models/Wan2.2",
"description": "Wan2.2 diffusion model for t2v low noise 14B (fp8_scaled)",
"reference": "https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged",
"filename": "wan2.2_t2v_low_noise_14B_fp8_scaled.safetensors",
"url": "https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/resolve/main/split_files/diffusion_models/wan2.2_t2v_low_noise_14B_fp8_scaled.safetensors",
"size": "14.3GB"
},
{
"name": "Comfy-Org/Wan2.2 ti2v 5B (fp16)",
"type": "diffusion_model",
"base": "Wan2.2",
"save_path": "diffusion_models/Wan2.2",
"description": "Wan2.2 diffusion model for ti2v 5B (fp16)",
"reference": "https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged",
"filename": "wan2.2_ti2v_5B_fp16.safetensors",
"url": "https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/resolve/main/split_files/diffusion_models/wan2.2_ti2v_5B_fp16.safetensors",
"size": "10.0GB"
},
{
"name": "sam2.1_hiera_tiny.pt",
"type": "sam2.1",
"base": "SAM",
"save_path": "sams",
"description": "Segmenty Anything SAM 2.1 hiera model (tiny)",
"reference": "https://github.com/facebookresearch/sam2#model-description",
"filename": "sam2.1_hiera_tiny.pt",
"url": "https://dl.fbaipublicfiles.com/segment_anything_2/092824/sam2.1_hiera_tiny.pt",
"size": "149.0MB"
},
{
"name": "sam2.1_hiera_small.pt",
"type": "sam2.1",
"base": "SAM",
"save_path": "sams",
"description": "Segmenty Anything SAM 2.1 hiera model (small)",
"reference": "https://github.com/facebookresearch/sam2#model-description",
"filename": "sam2.1_hiera_small.pt",
"url": "https://dl.fbaipublicfiles.com/segment_anything_2/092824/sam2.1_hiera_small.pt",
"size": "176.0MB"
},
{
"name": "sam2.1_hiera_base_plus.pt",
"type": "sam2.1",
"base": "SAM",
"save_path": "sams",
"description": "Segmenty Anything SAM 2.1 hiera model (base+)",
"reference": "https://github.com/facebookresearch/sam2#model-description",
"filename": "sam2.1_hiera_base_plus.pt",
"url": "https://dl.fbaipublicfiles.com/segment_anything_2/092824/sam2.1_hiera_base_plus.pt",
"size": "309.0MB"
},
{
"name": "sam2.1_hiera_large.pt",
"type": "sam2.1",
"base": "SAM",
"save_path": "sams",
"description": "Segmenty Anything SAM 2.1 hiera model (large)",
"reference": "https://github.com/facebookresearch/sam2#model-description",
"filename": "sam2.1_hiera_large.pt",
"url": "https://dl.fbaipublicfiles.com/segment_anything_2/092824/sam2.1_hiera_large.pt",
"size": "857.0MB"
},
{
"name": "sam2_hiera_tiny.pt",
"type": "sam2",
"base": "SAM",
"save_path": "sams",
"description": "Segmenty Anything SAM 2 hiera model (tiny)",
"reference": "https://github.com/facebookresearch/sam2#model-description",
"filename": "sam2_hiera_tiny.pt",
"url": "https://dl.fbaipublicfiles.com/segment_anything_2/072824/sam2_hiera_tiny.pt",
"size": "149.0MB"
},
{
"name": "sam2_hiera_small.pt",
"type": "sam2",
"base": "SAM",
"save_path": "sams",
"description": "Segmenty Anything SAM 2 hiera model (small)",
"reference": "https://github.com/facebookresearch/sam2#model-description",
"filename": "sam2_hiera_small.pt",
"url": "https://dl.fbaipublicfiles.com/segment_anything_2/072824/sam2_hiera_small.pt",
"size": "176.0MB"
},
{
"name": "sam2_hiera_base_plus.pt",
"type": "sam2",
"base": "SAM",
"save_path": "sams",
"description": "Segmenty Anything SAM 2 hiera model (base+)",
"reference": "https://github.com/facebookresearch/sam2#model-description",
"filename": "sam2_hiera_base_plus.pt",
"url": "https://dl.fbaipublicfiles.com/segment_anything_2/072824/sam2_hiera_base_plus.pt",
"size": "309.0MB"
},
{
"name": "sam2_hiera_large.pt",
"type": "sam2",
"base": "SAM",
"save_path": "sams",
"description": "Segmenty Anything SAM 2 hiera model (large)",
"reference": "https://github.com/facebookresearch/sam2#model-description",
"filename": "sam2_hiera_large.pt",
"url": "https://dl.fbaipublicfiles.com/segment_anything_2/072824/sam2_hiera_large.pt",
"size": "857.0MB"
},
{
"name": "Comfy-Org/omnigen2_fp16.safetensors",
"type": "diffusion_model",
"base": "OmniGen2",
"save_path": "default",
"description": "OmniGen2 diffusion model. This is required for using OmniGen2.",
"reference": "https://huggingface.co/Comfy-Org/Omnigen2_ComfyUI_repackaged",
"filename": "omnigen2_fp16.safetensors",
"url": "https://huggingface.co/Comfy-Org/Omnigen2_ComfyUI_repackaged/resolve/main/split_files/diffusion_models/omnigen2_fp16.safetensors",
"size": "7.93GB"
},
{
"name": "Comfy-Org/qwen_2.5_vl_fp16.safetensors",
"type": "clip",
"base": "qwen-2.5",
"save_path": "default",
"description": "text encoder for OmniGen2",
"reference": "https://huggingface.co/Comfy-Org/Omnigen2_ComfyUI_repackaged",
"filename": "qwen_2.5_vl_fp16.safetensors",
"url": "https://huggingface.co/Comfy-Org/Omnigen2_ComfyUI_repackaged/resolve/main/split_files/text_encoders/qwen_2.5_vl_fp16.safetensors",
"size": "7.51GB"
},
{
"name": "Latent Bridge Matching for Image Relighting",
"type": "diffusion_model",
"base": "LBM",
"save_path": "diffusion_models/LBM",
"description": "Latent Bridge Matching (LBM) Relighting model",
"reference": "https://huggingface.co/jasperai/LBM_relighting",
"filename": "LBM_relighting.safetensors",
"url": "https://huggingface.co/jasperai/LBM_relighting/resolve/main/model.safetensors",
"size": "5.02GB"
},
{
"name": "LTX-Video 13B Distilled v0.9.7",
"type": "checkpoint",
"base": "LTX-Video",
"save_path": "checkpoints/LTXV",
"description": "Distilled version of the LTX-Video 13B model, providing improved efficiency while maintaining high-resolution quality.",
"reference": "https://huggingface.co/Lightricks/LTX-Video",
"filename": "ltxv-13b-0.9.7-distilled.safetensors",
"url": "https://huggingface.co/Lightricks/LTX-Video/resolve/main/ltxv-13b-0.9.7-distilled.safetensors",
"size": "28.6GB"
},
{
"name": "LTX-Video 13B Distilled FP8 v0.9.7",
"type": "checkpoint",
"base": "LTX-Video",
"save_path": "checkpoints/LTXV",
"description": "Quantized distilled version of the LTX-Video 13B model, optimized for even lower VRAM usage while maintaining quality.",
"reference": "https://huggingface.co/Lightricks/LTX-Video",
"filename": "ltxv-13b-0.9.7-distilled-fp8.safetensors",
"url": "https://huggingface.co/Lightricks/LTX-Video/resolve/main/ltxv-13b-0.9.7-distilled-fp8.safetensors",
"size": "15.7GB"
},
{
"name": "LTX-Video 13B Distilled LoRA v0.9.7",
"type": "lora",
"base": "LTX-Video",
"save_path": "loras",
"description": "A LoRA adapter that transforms the standard LTX-Video 13B model into a distilled version when loaded.",
"reference": "https://huggingface.co/Lightricks/LTX-Video",
"filename": "ltxv-13b-0.9.7-distilled-lora128.safetensors",
"url": "https://huggingface.co/Lightricks/LTX-Video/resolve/main/ltxv-13b-0.9.7-distilled-lora128.safetensors",
"size": "1.33GB"
},
{
"name": "lllyasviel/FramePackI2V_HY",
"type": "FramePackI2V",
"base": "FramePackI2V",
"save_path": "diffusers/lllyasviel",
"description": "[SNAPSHOT] This is the f1k1_x_g9_f1k1f2k2f16k4_td FramePack for HY. [w/You cannot download this item on ComfyUI-Manager versions below V3.18]",
"reference": "https://huggingface.co/lllyasviel/FramePackI2V_HY",
"filename": "<huggingface>",
"url": "lllyasviel/FramePackI2V_HY",
"size": "25.75GB"
},
{
"name": "LTX-Video Spatial Upscaler v0.9.7",
"type": "checkpoint",
"base": "LTX-Video",
"save_path": "checkpoints/LTXV",
"description": "Spatial upscaler model for LTX-Video. This model enhances the spatial resolution of generated videos.",
"reference": "https://huggingface.co/Lightricks/LTX-Video",
"filename": "ltxv-spatial-upscaler-0.9.7.safetensors",
"url": "https://huggingface.co/Lightricks/LTX-Video/resolve/main/ltxv-spatial-upscaler-0.9.7.safetensors",
"size": "505MB"
},
{
"name": "LTX-Video Temporal Upscaler v0.9.7",
"type": "checkpoint",
"base": "LTX-Video",
"save_path": "checkpoints/LTXV",
"description": "Temporal upscaler model for LTX-Video. This model enhances the temporal resolution and smoothness of generated videos.",
"reference": "https://huggingface.co/Lightricks/LTX-Video",
"filename": "ltxv-temporal-upscaler-0.9.7.safetensors",
"url": "https://huggingface.co/Lightricks/LTX-Video/resolve/main/ltxv-temporal-upscaler-0.9.7.safetensors",
"size": "524MB"
},
{
"name": "LTX-Video 13B v0.9.7",
"type": "checkpoint",
"base": "LTX-Video",
"save_path": "checkpoints/LTXV",
"description": "High-resolution quality LTX-Video 13B model.",
"reference": "https://huggingface.co/Lightricks/LTX-Video",
"filename": "ltxv-13b-0.9.7-dev.safetensors",
"url": "https://huggingface.co/Lightricks/LTX-Video/resolve/main/ltxv-13b-0.9.7-dev.safetensors",
"size": "28.6GB"
},
{
"name": "LTX-Video 13B FP8 v0.9.7",
"type": "checkpoint",
"base": "LTX-Video",
"save_path": "checkpoints/LTXV",
"description": "Quantized version of the LTX-Video 13B model, optimized for lower VRAM usage while maintaining high quality.",
"reference": "https://huggingface.co/Lightricks/LTX-Video",
"filename": "ltxv-13b-0.9.7-dev-fp8.safetensors",
"url": "https://huggingface.co/Lightricks/LTX-Video/resolve/main/ltxv-13b-0.9.7-dev-fp8.safetensors",
"size": "15.7GB"
},
{
"name": "Comfy-Org/Wan2.1 i2v 480p 14B (bf16)",
"type": "diffusion_model",
@@ -372,339 +687,6 @@
"filename": "llava_llama3_fp16.safetensors",
"url": "https://huggingface.co/Comfy-Org/HunyuanVideo_repackaged/resolve/main/split_files/text_encoders/llava_llama3_fp16.safetensors",
"size": "16.1GB"
},
{
"name": "PixArt-Sigma-XL-2-512-MS.safetensors (diffusion)",
"type": "diffusion_model",
"base": "pixart-sigma",
"save_path": "diffusion_models/PixArt-Sigma",
"description": "PixArt-Sigma Diffusion model",
"reference": "https://huggingface.co/PixArt-alpha/PixArt-Sigma-XL-2-512-MS",
"filename": "PixArt-Sigma-XL-2-512-MS.safetensors",
"url": "https://huggingface.co/PixArt-alpha/PixArt-Sigma-XL-2-512-MS/resolve/main/transformer/diffusion_pytorch_model.safetensors",
"size": "2.44GB"
},
{
"name": "PixArt-Sigma-XL-2-1024-MS.safetensors (diffusion)",
"type": "diffusion_model",
"base": "pixart-sigma",
"save_path": "diffusion_models/PixArt-Sigma",
"description": "PixArt-Sigma Diffusion model",
"reference": "https://huggingface.co/PixArt-alpha/PixArt-Sigma-XL-2-1024-MS",
"filename": "PixArt-Sigma-XL-2-1024-MS.safetensors",
"url": "https://huggingface.co/PixArt-alpha/PixArt-Sigma-XL-2-1024-MS/resolve/main/transformer/diffusion_pytorch_model.safetensors",
"size": "2.44GB"
},
{
"name": "PixArt-XL-2-1024-MS.safetensors (diffusion)",
"type": "diffusion_model",
"base": "pixart-alpha",
"save_path": "diffusion_models/PixArt-Alpha",
"description": "PixArt-Alpha Diffusion model",
"reference": "https://huggingface.co/PixArt-alpha/PixArt-XL-2-1024-MS",
"filename": "PixArt-XL-2-1024-MS.safetensors",
"url": "https://huggingface.co/PixArt-alpha/PixArt-XL-2-1024-MS/resolve/main/transformer/diffusion_pytorch_model.safetensors",
"size": "2.45GB"
},
{
"name": "Comfy-Org/hunyuan_video_t2v_720p_bf16.safetensors",
"type": "diffusion_model",
"base": "Hunyuan Video",
"save_path": "diffusion_models/hunyuan_video",
"description": "Huyuan Video diffusion model. repackaged version.",
"reference": "https://huggingface.co/Comfy-Org/HunyuanVideo_repackaged",
"filename": "hunyuan_video_t2v_720p_bf16.safetensors",
"url": "https://huggingface.co/Comfy-Org/HunyuanVideo_repackaged/resolve/main/split_files/diffusion_models/hunyuan_video_t2v_720p_bf16.safetensors",
"size": "25.6GB"
},
{
"name": "Comfy-Org/hunyuan_video_vae_bf16.safetensors",
"type": "VAE",
"base": "Hunyuan Video",
"save_path": "VAE",
"description": "Huyuan Video VAE model. repackaged version.",
"reference": "https://huggingface.co/Comfy-Org/HunyuanVideo_repackaged",
"filename": "hunyuan_video_vae_bf16.safetensors",
"url": "https://huggingface.co/Comfy-Org/HunyuanVideo_repackaged/resolve/main/split_files/vae/hunyuan_video_vae_bf16.safetensors",
"size": "493MB"
},
{
"name": "LTX-Video 2B v0.9.1 Checkpoint",
"type": "checkpoint",
"base": "LTX-Video",
"save_path": "checkpoints/LTXV",
"description": "LTX-Video is the first DiT-based video generation model capable of generating high-quality videos in real-time. It produces 24 FPS videos at a 768x512 resolution faster than they can be watched. Trained on a large-scale dataset of diverse videos, the model generates high-resolution videos with realistic and varied content.",
"reference": "https://huggingface.co/Lightricks/LTX-Video",
"filename": "ltx-video-2b-v0.9.1.safetensors",
"url": "https://huggingface.co/Lightricks/LTX-Video/resolve/main/ltx-video-2b-v0.9.1.safetensors",
"size": "5.72GB"
},
{
"name": "XLabs-AI/flux-canny-controlnet-v3.safetensors",
"type": "controlnet",
"base": "FLUX.1",
"save_path": "xlabs/controlnets",
"description": "ControlNet checkpoints for FLUX.1-dev model by Black Forest Labs.",
"reference": "https://huggingface.co/XLabs-AI/flux-controlnet-collections",
"filename": "flux-canny-controlnet-v3.safetensors",
"url": "https://huggingface.co/XLabs-AI/flux-controlnet-collections/resolve/main/flux-canny-controlnet-v3.safetensors",
"size": "1.49GB"
},
{
"name": "XLabs-AI/flux-depth-controlnet-v3.safetensors",
"type": "controlnet",
"base": "FLUX.1",
"save_path": "xlabs/controlnets",
"description": "ControlNet checkpoints for FLUX.1-dev model by Black Forest Labs.",
"reference": "https://huggingface.co/XLabs-AI/flux-controlnet-collections",
"filename": "flux-depth-controlnet-v3.safetensors",
"url": "https://huggingface.co/XLabs-AI/flux-controlnet-collections/resolve/main/flux-depth-controlnet-v3.safetensors",
"size": "1.49GB"
},
{
"name": "XLabs-AI/flux-hed-controlnet-v3.safetensors",
"type": "controlnet",
"base": "FLUX.1",
"save_path": "xlabs/controlnets",
"description": "ControlNet checkpoints for FLUX.1-dev model by Black Forest Labs.",
"reference": "https://huggingface.co/XLabs-AI/flux-controlnet-collections",
"filename": "flux-hed-controlnet-v3.safetensors",
"url": "https://huggingface.co/XLabs-AI/flux-controlnet-collections/resolve/main/flux-hed-controlnet-v3.safetensors",
"size": "1.49GB"
},
{
"name": "XLabs-AI/realism_lora.safetensors",
"type": "lora",
"base": "FLUX.1",
"save_path": "xlabs/loras",
"description": "A checkpoint with trained LoRAs for FLUX.1-dev model by Black Forest Labs",
"reference": "https://huggingface.co/XLabs-AI/flux-lora-collection",
"filename": "realism_lora.safetensors",
"url": "https://huggingface.co/XLabs-AI/flux-lora-collection/resolve/main/realism_lora.safetensors",
"size": "44.8MB"
},
{
"name": "XLabs-AI/art_lora.safetensors",
"type": "lora",
"base": "FLUX.1",
"save_path": "xlabs/loras",
"description": "A checkpoint with trained LoRAs for FLUX.1-dev model by Black Forest Labs",
"reference": "https://huggingface.co/XLabs-AI/flux-lora-collection",
"filename": "art_lora.safetensors",
"url": "https://huggingface.co/XLabs-AI/flux-lora-collection/resolve/main/scenery_lora.safetensors",
"size": "44.8MB"
},
{
"name": "XLabs-AI/mjv6_lora.safetensors",
"type": "lora",
"base": "FLUX.1",
"save_path": "xlabs/loras",
"description": "A checkpoint with trained LoRAs for FLUX.1-dev model by Black Forest Labs",
"reference": "https://huggingface.co/XLabs-AI/flux-lora-collection",
"filename": "mjv6_lora.safetensors",
"url": "https://huggingface.co/XLabs-AI/flux-lora-collection/resolve/main/mjv6_lora.safetensors",
"size": "44.8MB"
},
{
"name": "XLabs-AI/flux-ip-adapter",
"type": "lora",
"base": "FLUX.1",
"save_path": "xlabs/ipadapters",
"description": "A checkpoint with trained LoRAs for FLUX.1-dev model by Black Forest Labs",
"reference": "https://huggingface.co/XLabs-AI/flux-ip-adapter",
"filename": "ip_adapter.safetensors",
"url": "https://huggingface.co/XLabs-AI/flux-ip-adapter/resolve/main/ip_adapter.safetensors",
"size": "982MB"
},
{
"name": "stabilityai/SD3.5-Large-Controlnet-Blur",
"type": "controlnet",
"base": "SD3.5",
"save_path": "controlnet/SD3.5",
"description": "Blur Controlnet model for SD3.5 Large",
"reference": "https://huggingface.co/stabilityai/stable-diffusion-3.5-controlnets",
"filename": "sd3.5_large_controlnet_blur.safetensors",
"url": "https://huggingface.co/stabilityai/stable-diffusion-3.5-controlnets/resolve/main/sd3.5_large_controlnet_blur.safetensors",
"size": "8.65GB"
},
{
"name": "stabilityai/SD3.5-Large-Controlnet-Canny",
"type": "controlnet",
"base": "SD3.5",
"save_path": "controlnet/SD3.5",
"description": "Canny Controlnet model for SD3.5 Large",
"reference": "https://huggingface.co/stabilityai/stable-diffusion-3.5-controlnets",
"filename": "sd3.5_large_controlnet_canny.safetensors",
"url": "https://huggingface.co/stabilityai/stable-diffusion-3.5-controlnets/resolve/main/sd3.5_large_controlnet_canny.safetensors",
"size": "8.65GB"
},
{
"name": "stabilityai/SD3.5-Large-Controlnet-Depth",
"type": "controlnet",
"base": "SD3.5",
"save_path": "controlnet/SD3.5",
"description": "Depth Controlnet model for SD3.5 Large",
"reference": "https://huggingface.co/stabilityai/stable-diffusion-3.5-controlnets",
"filename": "sd3.5_large_controlnet_depth.safetensors",
"url": "https://huggingface.co/stabilityai/stable-diffusion-3.5-controlnets/resolve/main/sd3.5_large_controlnet_depth.safetensors",
"size": "8.65GB"
},
{
"name": "LTX-Video 2B v0.9 Checkpoint",
"type": "checkpoint",
"base": "LTX-Video",
"save_path": "checkpoints/LTXV",
"description": "LTX-Video is the first DiT-based video generation model capable of generating high-quality videos in real-time. It produces 24 FPS videos at a 768x512 resolution faster than they can be watched. Trained on a large-scale dataset of diverse videos, the model generates high-resolution videos with realistic and varied content.",
"reference": "https://huggingface.co/Lightricks/LTX-Video",
"filename": "ltx-video-2b-v0.9.safetensors",
"url": "https://huggingface.co/Lightricks/LTX-Video/resolve/main/ltx-video-2b-v0.9.safetensors",
"size": "9.37GB"
},
{
"name": "InstantX/FLUX.1-dev-IP-Adapter",
"type": "IP-Adapter",
"base": "FLUX.1",
"save_path": "ipadapter-flux",
"description": "FLUX.1-dev-IP-Adapter",
"reference": "https://huggingface.co/InstantX/FLUX.1-dev-IP-Adapter",
"filename": "ip-adapter.bin",
"url": "https://huggingface.co/InstantX/FLUX.1-dev-IP-Adapter/resolve/main/ip-adapter.bin",
"size": "5.29GB"
},
{
"name": "Comfy-Org/sigclip_vision_384 (patch14_384)",
"type": "clip_vision",
"base": "sigclip",
"save_path": "clip_vision",
"description": "This clip vision model is required for FLUX.1 Redux.",
"reference": "https://huggingface.co/Comfy-Org/sigclip_vision_384/tree/main",
"filename": "sigclip_vision_patch14_384.safetensors",
"url": "https://huggingface.co/Comfy-Org/sigclip_vision_384/resolve/main/sigclip_vision_patch14_384.safetensors",
"size": "857MB"
},
{
"name": "comfyanonymous/flux_text_encoders - t5xxl (fp16)",
"type": "clip",
"base": "t5",
"save_path": "text_encoders/t5",
"description": "Text Encoders for FLUX (fp16)",
"reference": "https://huggingface.co/comfyanonymous/flux_text_encoders",
"filename": "t5xxl_fp16.safetensors",
"url": "https://huggingface.co/comfyanonymous/flux_text_encoders/resolve/main/t5xxl_fp16.safetensors",
"size": "9.79GB"
},
{
"name": "comfyanonymous/flux_text_encoders - t5xxl (fp8_e4m3fn)",
"type": "clip",
"base": "t5",
"save_path": "text_encoders/t5",
"description": "Text Encoders for FLUX (fp8_e4m3fn)",
"reference": "https://huggingface.co/comfyanonymous/flux_text_encoders",
"filename": "t5xxl_fp8_e4m3fn.safetensors",
"url": "https://huggingface.co/comfyanonymous/flux_text_encoders/resolve/main/t5xxl_fp8_e4m3fn.safetensors",
"size": "4.89GB"
},
{
"name": "comfyanonymous/flux_text_encoders - t5xxl (fp8_e4m3fn_scaled)",
"type": "clip",
"base": "t5",
"save_path": "text_encoders/t5",
"description": "Text Encoders for FLUX (fp16)",
"reference": "https://huggingface.co/comfyanonymous/flux_text_encoders",
"filename": "t5xxl_fp8_e4m3fn_scaled.safetensors",
"url": "https://huggingface.co/comfyanonymous/flux_text_encoders/resolve/main/t5xxl_fp8_e4m3fn_scaled.safetensors",
"size": "5.16GB"
},
{
"name": "FLUX.1 [Dev] Diffusion model (scaled fp8)",
"type": "diffusion_model",
"base": "FLUX.1",
"save_path": "diffusion_models/FLUX1",
"description": "FLUX.1 [Dev] Diffusion model (scaled fp8)[w/Due to the large size of the model, it is recommended to download it through a browser if possible.]",
"reference": "https://huggingface.co/comfyanonymous/flux_dev_scaled_fp8_test",
"filename": "flux_dev_fp8_scaled_diffusion_model.safetensors",
"url": "https://huggingface.co/comfyanonymous/flux_dev_scaled_fp8_test/resolve/main/flux_dev_fp8_scaled_diffusion_model.safetensors",
"size": "11.9GB"
},
{
"name": "kijai/MoGe_ViT_L_fp16.safetensors",
"type": "MoGe",
"base": "MoGe",
"save_path": "MoGe",
"description": "Safetensors versions of [a/https://github.com/microsoft/MoGe](https://github.com/microsoft/MoGe)",
"reference": "https://huggingface.co/Kijai/MoGe_safetensors",
"filename": "MoGe_ViT_L_fp16.safetensors",
"url": "https://huggingface.co/Kijai/MoGe_safetensors/resolve/main/MoGe_ViT_L_fp16.safetensors",
"size": "628MB"
},
{
"name": "kijai/MoGe_ViT_L_fp16.safetensors",
"type": "MoGe",
"base": "MoGe",
"save_path": "MoGe",
"description": "Safetensors versions of [a/https://github.com/microsoft/MoGe](https://github.com/microsoft/MoGe)",
"reference": "https://huggingface.co/Kijai/MoGe_safetensors",
"filename": "MoGe_ViT_L_fp16.safetensors",
"url": "https://huggingface.co/Kijai/MoGe_safetensors/resolve/main/MoGe_ViT_L_fp16.safetensors",
"size": "1.26GB"
},
{
"name": "pulid_flux_v0.9.1.safetensors",
"type": "PuLID",
"base": "FLUX",
"save_path": "pulid",
"description": "This is required for PuLID (FLUX)",
"reference": "https://huggingface.co/guozinan/PuLID",
"filename": "pulid_flux_v0.9.1.safetensors",
"url": "https://huggingface.co/guozinan/PuLID/resolve/main/pulid_flux_v0.9.1.safetensors",
"size": "1.14GB"
},
{
"name": "pulid_v1.1.safetensors",
"type": "PuLID",
"base": "SDXL",
"save_path": "pulid",
"description": "This is required for PuLID (SDXL)",
"reference": "https://huggingface.co/guozinan/PuLID",
"filename": "pulid_v1.1.safetensors",
"url": "https://huggingface.co/guozinan/PuLID/resolve/main/pulid_v1.1.safetensors",
"size": "984MB"
},
{
"name": "Kolors-IP-Adapter-Plus.bin (Kwai-Kolors/Kolors-IP-Adapter-Plus)",
"type": "IP-Adapter",
"base": "Kolors",
"save_path": "ipadapter",
"description": "You can use this model in the [a/ComfyUI IPAdapter plus](https://github.com/cubiq/ComfyUI_IPAdapter_plus) extension.",
"reference": "https://huggingface.co/Kwai-Kolors/Kolors-IP-Adapter-Plus",
"filename": "Kolors-IP-Adapter-Plus.bin",
"url": "https://huggingface.co/Kwai-Kolors/Kolors-IP-Adapter-Plus/resolve/main/ip_adapter_plus_general.bin",
"size": "1.01GB"
},
{
"name": "Kolors-IP-Adapter-FaceID-Plus.bin (Kwai-Kolors/Kolors-IP-Adapter-Plus)",
"type": "IP-Adapter",
"base": "Kolors",
"save_path": "ipadapter",
"description": "You can use this model in the [a/ComfyUI IPAdapter plus](https://github.com/cubiq/ComfyUI_IPAdapter_plus) extension.",
"reference": "https://huggingface.co/Kwai-Kolors/Kolors-IP-Adapter-FaceID-Plus",
"filename": "Kolors-IP-Adapter-FaceID-Plus.bin",
"url": "https://huggingface.co/Kwai-Kolors/Kolors-IP-Adapter-FaceID-Plus/resolve/main/ipa-faceid-plus.bin",
"size": "2.39GB"
}
]
}


@@ -1,5 +1,25 @@
{
"custom_nodes": [
{
"author": "Comfy-Org",
"title": "ComfyUI React Extension Template",
"reference": "https://github.com/Comfy-Org/ComfyUI-React-Extension-Template",
"files": [
"https://github.com/Comfy-Org/ComfyUI-React-Extension-Template"
],
"install_type": "git-clone",
"description": "A minimal template for creating React/TypeScript frontend extensions for ComfyUI, with complete boilerplate setup including internationalization and unit testing."
},
{
"author": "comfyui-wiki",
"title": "ComfyUI-i18n-demo",
"reference": "https://github.com/comfyui-wiki/ComfyUI-i18n-demo",
"files": [
"https://github.com/comfyui-wiki/ComfyUI-i18n-demo"
],
"install_type": "git-clone",
"description": "ComfyUI custom node develop i18n support demo "
},
{
"author": "Suzie1",
"title": "Guide To Making Custom Nodes in ComfyUI",
@@ -321,6 +341,26 @@
],
"description": "Dynamic Node examples for ComfyUI",
"install_type": "git-clone"
},
{
"author": "Jonathon-Doran",
"title": "remote-combo-demo",
"reference": "https://github.com/Jonathon-Doran/remote-combo-demo",
"files": [
"https://github.com/Jonathon-Doran/remote-combo-demo"
],
"install_type": "git-clone",
"description": "A minimal test suite demonstrating how remote COMBO inputs behave in ComfyUI, with and without force_input"
},
{
"author": "J1mB091",
"title": "ComfyUI-J1mB091 Custom Nodes",
"reference": "https://github.com/J1mB091/ComfyUI-J1mB091",
"files": [
"https://github.com/J1mB091/ComfyUI-J1mB091"
],
"install_type": "git-clone",
"description": "Vibe Coded ComfyUI Custom Nodes"
}
]
}

openapi.yaml (new file, 1414 lines)

File diff suppressed because it is too large


@@ -5,7 +5,7 @@ build-backend = "setuptools.build_meta"
[project]
name = "comfyui-manager"
license = { text = "GPL-3.0-only" }
version = "4.0"
version = "4.0.2"
requires-python = ">= 3.9"
description = "ComfyUI-Manager provides features to install and manage custom nodes for ComfyUI, as well as various functionalities to assist with ComfyUI."
readme = "README.md"
@@ -13,28 +13,28 @@ keywords = ["comfyui", "comfyui-manager"]
maintainers = [
{ name = "Dr.Lt.Data", email = "dr.lt.data@gmail.com" },
{ name = "Yoland Yan", email = "yoland@drip.art" },
{ name = "Yoland Yan", email = "yoland@comfy.org" },
{ name = "James Kwon", email = "hongilkwon316@gmail.com" },
{ name = "Robin Huang", email = "robin@drip.art" },
{ name = "Robin Huang", email = "robin@comfy.org" },
]
classifiers = [
"Development Status :: 4 - Beta",
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"License :: OSI Approved :: GNU General Public License v3 (GPLv3)",
]
dependencies = [
"GitPython",
"PyGithub",
"matrix-client==0.4.0",
"transformers",
"huggingface-hub>0.20",
"typer",
"rich",
"typing-extensions",
"toml",
"uv",
"GitPython",
"PyGithub",
# "matrix-nio",
"transformers",
"huggingface-hub>0.20",
"typer",
"rich",
"typing-extensions",
"toml",
"uv",
"chardet"
]
@@ -48,6 +48,9 @@ Repository = "https://github.com/ltdrdata/ComfyUI-Manager"
where = ["."]
include = ["comfyui_manager*"]
[project.scripts]
cm-cli = "comfyui_manager.cm_cli.__main__:main"
[tool.ruff]
line-length = 120
target-version = "py39"


@@ -1,6 +1,6 @@
GitPython
PyGithub
matrix-client==0.4.0
# matrix-nio
transformers
huggingface-hub>0.20
typer


@@ -94,7 +94,7 @@ def extract_nodes(code_text):
return s
else:
return set()
except:
except Exception:
return set()
@@ -102,12 +102,8 @@ def extract_nodes(code_text):
def scan_in_file(filename, is_builtin=False):
global builtin_nodes
try:
with open(filename, encoding='utf-8') as file:
code = file.read()
except UnicodeDecodeError:
with open(filename, encoding='cp949') as file:
code = file.read()
with open(filename, encoding='utf-8', errors='ignore') as file:
code = file.read()
pattern = r"_CLASS_MAPPINGS\s*=\s*{([^}]*)}"
regex = re.compile(pattern, re.MULTILINE | re.DOTALL)
@@ -259,13 +255,13 @@ def clone_or_pull_git_repository(git_url):
repo.git.submodule('update', '--init', '--recursive')
print(f"Pulling {repo_name}...")
except Exception as e:
print(f"Pulling {repo_name} failed: {e}")
print(f"Failed to pull '{repo_name}': {e}")
else:
try:
Repo.clone_from(git_url, repo_dir, recursive=True)
print(f"Cloning {repo_name}...")
except Exception as e:
print(f"Cloning {repo_name} failed: {e}")
print(f"Failed to clone '{repo_name}': {e}")
def update_custom_nodes():
@@ -297,7 +293,7 @@ def update_custom_nodes():
pass
def is_rate_limit_exceeded():
return g.rate_limiting[0] == 0
return g.rate_limiting[0] <= 20
if is_rate_limit_exceeded():
print(f"GitHub API Rate Limit Exceeded: remained - {(g.rate_limiting_resettime - datetime.datetime.now().timestamp())/60:.2f} min")
@@ -400,7 +396,7 @@ def update_custom_nodes():
try:
download_url(url, temp_dir)
except:
except Exception:
print(f"[ERROR] Cannot download '{url}'")
with concurrent.futures.ThreadPoolExecutor(10) as executor:
@@ -500,8 +496,15 @@ def gen_json(node_info):
nodes_in_url, metadata_in_url = data[git_url]
nodes = set(nodes_in_url)
for x, desc in node_list_json.items():
nodes.add(x.strip())
try:
for x, desc in node_list_json.items():
nodes.add(x.strip())
except Exception as e:
print(f"\nERROR: Invalid json format '{node_list_json_path}'")
print("------------------------------------------------------")
print(e)
print("------------------------------------------------------")
node_list_json = {}
metadata_in_url['title_aux'] = title

tests/.gitignore (vendored, new file, 34 lines)

@@ -0,0 +1,34 @@
# Test environment and artifacts
# Virtual environment
test_venv/
venv/
env/
# pytest cache
.pytest_cache/
__pycache__/
*.pyc
*.pyo
# Coverage reports (module-specific naming)
.coverage
.coverage.*
htmlcov*/
coverage*.xml
*.cover
# Test artifacts
.tox/
.hypothesis/
# IDE
.vscode/
.idea/
*.swp
*.swo
*~
# OS
.DS_Store
Thumbs.db

tests/README.md (new file, 181 lines)

@@ -0,0 +1,181 @@
# ComfyUI Manager Test Suite
This directory contains all tests for the ComfyUI Manager project, organized by module structure.
## Directory Structure
```
tests/
├── setup_test_env.sh        # Setup isolated test environment
├── requirements.txt         # Test dependencies
├── pytest.ini               # Global pytest configuration
├── .gitignore               # Ignore test artifacts
└── common/                  # Tests for comfyui_manager/common/
    └── pip_util/            # Tests for pip_util.py
        ├── README.md        # pip_util test documentation
        ├── conftest.py      # pip_util test fixtures
        ├── pytest.ini       # pip_util-specific pytest config
        └── test_*.py        # Actual test files (to be created)
```
## Quick Start
### 1. Setup Test Environment (One Time)
```bash
cd tests
./setup_test_env.sh
```
This creates an isolated virtual environment with all test dependencies.
### 2. Run Tests
```bash
# Activate test environment
source test_venv/bin/activate
# Run all tests from root
cd tests
pytest
# Run specific module tests
cd tests/common/pip_util
pytest
# Deactivate when done
deactivate
```
## Test Organization
Tests mirror the source code structure:
| Source Code | Test Location |
|-------------|---------------|
| `comfyui_manager/common/pip_util.py` | `tests/common/pip_util/test_*.py` |
| `comfyui_manager/common/other.py` | `tests/common/other/test_*.py` |
| `comfyui_manager/module/file.py` | `tests/module/file/test_*.py` |
## Writing Tests
1. Create test directory matching source structure
2. Add `conftest.py` for module-specific fixtures
3. Add `pytest.ini` for module-specific configuration (optional)
4. Create `test_*.py` files with actual tests
5. Document in module-specific README
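As a sketch of step 2, a module-specific `conftest.py` might expose fixtures like the following (the fixture name and policy shape are illustrative, not the actual pip_util fixtures):
```python
# tests/common/pip_util/conftest.py (illustrative sketch)
import json

import pytest


@pytest.fixture
def sample_policy_file(tmp_path):
    """Write a throwaway policy file and return its path."""
    policy = {"pin_dependencies": {"urllib3": "1.26.15"}}  # hypothetical shape
    path = tmp_path / "policy.json"
    path.write_text(json.dumps(policy), encoding="utf-8")
    return path
```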
## Test Categories
Use pytest markers to categorize tests:
```python
@pytest.mark.unit
def test_simple_function():
    pass


@pytest.mark.integration
def test_complex_workflow():
    pass


@pytest.mark.e2e
def test_full_system():
    pass
```
Run by category:
```bash
pytest -m unit # Only unit tests
pytest -m integration # Only integration tests
pytest -m e2e # Only end-to-end tests
```
## Coverage Reports
Coverage reports are generated per module:
```bash
cd tests/common/pip_util
pytest # Generates htmlcov_pip_util/ and coverage_pip_util.xml
```
## Environment Isolation
**Why use venv?**
- ✅ Prevents test dependencies from corrupting main environment
- ✅ Allows safe package installation/uninstallation during tests
- ✅ Consistent test results across machines
- ✅ Easy to recreate clean environment
## Available Test Modules
- **[common/pip_util](common/pip_util/)** - Policy-based pip package management system tests
  - Unit tests for policy loading, parsing, condition evaluation
  - Integration tests for policy application (60% of tests)
  - End-to-end workflow tests
## Adding New Test Modules
1. Create directory structure: `tests/module_path/component_name/`
2. Add `conftest.py` with fixtures
3. Add `pytest.ini` if needed (optional)
4. Add `README.md` documenting the tests
5. Create `test_*.py` files
Example:
```bash
mkdir -p tests/data_models/config
cd tests/data_models/config
touch conftest.py README.md test_config_loader.py
```
## CI/CD Integration
Tests are designed to run in CI/CD pipelines:
```yaml
# Example GitHub Actions
- name: Setup test environment
  run: |
    cd tests
    ./setup_test_env.sh
- name: Run tests
  run: |
    source tests/test_venv/bin/activate
    pytest tests/
```
## Troubleshooting
### Import errors
```bash
# Make sure venv is activated
source test_venv/bin/activate
# Verify Python path
python -c "import sys; print(sys.path)"
```
### Tests not discovered
```bash
# Check pytest configuration
pytest --collect-only
# Verify test file naming (must start with test_)
ls test_*.py
```
### Clean rebuild
```bash
# Remove and recreate test environment
rm -rf test_venv/
./setup_test_env.sh
```
## Resources
- **pytest Documentation**: https://docs.pytest.org/
- **Coverage.py**: https://coverage.readthedocs.io/
- **Module-specific READMEs**: Check each test module directory


@@ -0,0 +1,423 @@
# Context Files Guide for pip_util Tests
Quick reference for all context files created for extending pip_util tests.
---
## 📋 File Overview
| File | Purpose | When to Use |
|------|---------|-------------|
| **DEPENDENCY_TREE_CONTEXT.md** | Complete dependency trees with version analysis | Adding new test packages or updating scenarios |
| **DEPENDENCY_ANALYSIS.md** | Analysis methodology and findings | Understanding why packages were chosen |
| **TEST_SCENARIOS.md** | Detailed test specifications | Writing new tests or understanding existing ones |
| **analyze_dependencies.py** | Interactive dependency analyzer | Exploring new packages before adding tests |
| **requirements-test-base.txt** | Base test environment packages | Setting up or modifying test environment |
---
## 🎯 Common Tasks
### Task 1: Adding a New Test Package
**Steps**:
1. **Analyze the package**:
```bash
python analyze_dependencies.py NEW_PACKAGE
```
2. **Check size and dependencies**:
```bash
./test_venv/bin/pip download --no-deps NEW_PACKAGE
ls -lh NEW_PACKAGE*.whl # Check size
```
3. **Verify dependency tree**:
- Open **DEPENDENCY_TREE_CONTEXT.md**
- Follow "Adding New Test Scenarios" section
- Document findings in the file
4. **Update requirements** (if pre-installation needed):
- Add to `requirements-test-base.txt`
- Run `./setup_test_env.sh` to recreate venv
5. **Write test**:
- Follow patterns in `test_dependency_protection.py`
- Use `reset_test_venv` fixture
- Add scenario to **TEST_SCENARIOS.md**
6. **Verify**:
```bash
pytest test_YOUR_NEW_TEST.py -v --override-ini="addopts="
```
---
### Task 2: Understanding Existing Tests
**Steps**:
1. **Read test scenario**:
- Open **TEST_SCENARIOS.md**
- Find your scenario (1-6)
- Review initial state, action, expected result
2. **Check dependency details**:
- Open **DEPENDENCY_TREE_CONTEXT.md**
- Look up package in table of contents
- Review dependency tree and version analysis
3. **Run analysis**:
```bash
python analyze_dependencies.py PACKAGE
```
4. **Examine test code**:
- Open relevant test file
- Check policy fixture
- Review assertions
---
### Task 3: Updating for New Package Versions
**When**: PyPI releases major version updates (e.g., urllib3 3.0)
**Steps**:
1. **Check current environment**:
```bash
python analyze_dependencies.py --env
```
2. **Analyze new versions**:
```bash
./test_venv/bin/pip index versions PACKAGE | head -20
python analyze_dependencies.py PACKAGE
```
3. **Update context files**:
- Update version numbers in **DEPENDENCY_TREE_CONTEXT.md**
- Update "Version Analysis" section
- Document breaking changes
4. **Test with new versions**:
- Update `requirements-test-base.txt` (if testing new base version)
- OR update test to verify protection from new version
- Run tests to verify behavior
5. **Update scenarios**:
- Update **TEST_SCENARIOS.md** with new version numbers
- Update expected results if behavior changed
---
### Task 4: Debugging Dependency Issues
**Problem**: Test fails with unexpected dependency versions
**Steps**:
1. **Check what's installed**:
```bash
./test_venv/bin/pip freeze | grep -E "(urllib3|certifi|six|requests)"
```
2. **Analyze what would install**:
```bash
python analyze_dependencies.py PACKAGE
```
3. **Compare with expected**:
- Open **DEPENDENCY_TREE_CONTEXT.md**
- Check "Install Scenarios" for the package
- Compare actual vs. expected
4. **Check for PyPI changes**:
```bash
./test_venv/bin/pip index versions PACKAGE
```
5. **Verify test environment**:
```bash
rm -rf test_venv && ./setup_test_env.sh
pytest test_FILE.py -v --override-ini="addopts="
```
---
## 📚 Context File Details
### DEPENDENCY_TREE_CONTEXT.md
**Contents**:
- Current test environment snapshot
- Complete dependency trees for all test packages
- Version analysis (current vs. latest)
- Upgrade scenarios matrix
- Guidelines for adding new scenarios
- Quick reference tables
**Use when**:
- Adding new test package
- Understanding why a package was chosen
- Checking version compatibility
- Updating for new PyPI releases
**Key sections**:
- Package Dependency Trees → See what each package depends on
- Version Analysis → Understand version gaps and breaking changes
- Adding New Test Scenarios → Step-by-step guide
---
### DEPENDENCY_ANALYSIS.md
**Contents**:
- Detailed analysis of each test scenario
- Real dependency verification using `pip --dry-run`
- Version difference analysis
- Rejected scenarios (and why)
- Package size verification
- Recommendations for implementation
**Use when**:
- Understanding test design decisions
- Evaluating new package candidates
- Reviewing why certain packages were rejected
- Learning the analysis methodology
**Key sections**:
- Test Scenarios with Real Dependencies → Detailed scenarios
- Rejected Scenarios → What NOT to use (e.g., click+colorama)
- Validation Commands → How to verify analysis
---
### TEST_SCENARIOS.md
**Contents**:
- Complete specifications for scenarios 1-6
- Exact package versions and states
- Policy configurations (JSON)
- Expected pip commands
- Expected final states
- Key points for each scenario
**Use when**:
- Writing new tests
- Understanding test expectations
- Debugging test failures
- Documenting new scenarios
**Key sections**:
- Each scenario section → Complete specification
- Summary tables → Quick reference
- Policy types summary → Available policy options
---
### analyze_dependencies.py
**Features**:
- Interactive package analysis
- Dry-run simulation
- Version comparison
- Pin impact analysis
**Use when**:
- Exploring new packages
- Verifying current environment
- Checking upgrade impacts
- Quick dependency checks
**Commands**:
```bash
# Analyze specific package
python analyze_dependencies.py requests
# Analyze all test packages
python analyze_dependencies.py --all
# Show current environment
python analyze_dependencies.py --env
```
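Under the hood, this kind of analysis reduces to parsing pip's dry-run output. A minimal sketch of that core step (illustrative only, not the script's actual implementation):
```python
import subprocess
import sys


def would_install(*packages):
    """Return the name-version tokens pip reports for a dry-run install."""
    result = subprocess.run(
        [sys.executable, "-m", "pip", "install", "--dry-run", *packages],
        capture_output=True, text=True, check=False,
    )
    for line in result.stdout.splitlines():
        if line.startswith("Would install"):
            return line.split()[2:]  # drop the "Would install" prefix
    return []


print(would_install("requests"))  # e.g. ['certifi-2025.8.3', 'idna-3.10', ...]
```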
---
### requirements-test-base.txt
**Contents**:
- Base packages for test environment
- Version specifications
- Comments explaining each package's purpose
**Use when**:
- Setting up test environment
- Adding pre-installed packages
- Modifying base versions
- Recreating clean environment
**Format**:
```txt
# Scenario X: Purpose
package==version # Comment explaining role
```
---
## 🔄 Workflow Examples
### Example 1: Adding flask Test
```bash
# 1. Analyze flask
python analyze_dependencies.py flask
# Output shows:
# Would install: Flask, Jinja2, MarkupSafe, Werkzeug, blinker, click, itsdangerous
# 2. Check sizes
./test_venv/bin/pip download --no-deps flask jinja2 werkzeug
ls -lh *.whl
# 3. Document in DEPENDENCY_TREE_CONTEXT.md
# Add section:
#    ### 3. flask → Dependencies
#    **Package**: `flask==3.1.2`
#    **Size**: ~100KB
#    ...
# 4. Write test
# Create test_flask_dependencies.py
# 5. Test
pytest test_flask_dependencies.py -v --override-ini="addopts="
```
---
### Example 2: Investigating Test Failure
```bash
# Test failed: "urllib3 version mismatch"
# 1. Check installed
./test_venv/bin/pip freeze | grep urllib3
# Output: urllib3==2.5.0 (expected: 1.26.15)
# 2. Analyze what happened
python analyze_dependencies.py requests
# 3. Check context
# Open DEPENDENCY_TREE_CONTEXT.md
# Section: "urllib3: Major Version Jump"
# Confirms: 1.26.15 → 2.5.0 is expected without pin
# 4. Verify test has pin
# Check test_dependency_protection.py for pin_policy fixture
# 5. Reset environment
rm -rf test_venv && ./setup_test_env.sh
# 6. Re-run test
pytest test_dependency_protection.py -v --override-ini="addopts="
```
---
## 🎓 Best Practices
### When Adding New Tests
✅ **DO**:
- Use `analyze_dependencies.py` first
- Document in **DEPENDENCY_TREE_CONTEXT.md**
- Add scenario to **TEST_SCENARIOS.md**
- Verify with real pip operations
- Keep packages lightweight (<500KB total)
❌ **DON'T**:
- Add packages without verifying dependencies
- Use packages with optional dependencies only
- Add heavy packages (>1MB)
- Skip documentation
- Mock subprocess for integration tests
---
### When Updating Context
✅ **DO**:
- Re-run `analyze_dependencies.py --all`
- Update version numbers throughout
- Document breaking changes
- Test after updates
- Note update date
❌ **DON'T**:
- Update only one file
- Skip verification
- Forget to update TEST_SCENARIOS.md
- Leave outdated version numbers
---
## 🆘 Quick Troubleshooting
| Problem | Check | Solution |
|---------|-------|----------|
| Test fails with version mismatch | `pip freeze` | Recreate venv with `./setup_test_env.sh` |
| Package not found | `pip index versions PKG` | Check if package exists on PyPI |
| Unexpected dependencies | `analyze_dependencies.py PKG` | Review dependency tree in context file |
| Wrong test data | **TEST_SCENARIOS.md** | Verify against documented scenario |
| Unclear why package chosen | **DEPENDENCY_ANALYSIS.md** | Read "Rejected Scenarios" section |
---
## 📞 Need Help?
1. **Check context files first**: Most answers are documented
2. **Run analyze_dependencies.py**: Verify current state
3. **Review test scenarios**: Understand expected behavior
4. **Examine dependency trees**: Understand relationships
5. **Check DEPENDENCY_ANALYSIS.md**: Learn the "why" behind decisions
---
## 📝 Maintenance Checklist
**Every 6 months or when major versions release**:
- [ ] Run `python analyze_dependencies.py --all`
- [ ] Check for new major versions: `pip index versions urllib3 certifi six`
- [ ] Update **DEPENDENCY_TREE_CONTEXT.md** version numbers
- [ ] Update **TEST_SCENARIOS.md** expected versions
- [ ] Test all scenarios: `pytest -v --override-ini="addopts="`
- [ ] Document any breaking changes
- [ ] Update this guide if workflow changed
---
## 🔗 File Relationships
```
requirements-test-base.txt
        ↓ (defines)
Current Test Environment
        ↓ (analyzed by)
analyze_dependencies.py
        ↓ (documents)
DEPENDENCY_TREE_CONTEXT.md
        ↓ (informs)
TEST_SCENARIOS.md
        ↓ (implemented in)
test_*.py files
```
---
**Last Updated**: 2025-10-01
**Python Version**: 3.12.3
**pip Version**: 25.2


@@ -0,0 +1,261 @@
# pip_util Test Package Dependency Analysis
Real dependency analysis using `pip install --dry-run` to verify meaningful test scenarios.
## Analysis Date
Generated: 2025-10-01
Tool: `pip install --dry-run --ignore-installed`
## Test Scenarios with Real Dependencies
### Scenario 1: Dependency Version Protection (requests + urllib3)
**Purpose**: Verify pin_dependencies prevents unwanted upgrades
**Initial Environment**:
```
urllib3==1.26.15
certifi==2023.7.22
charset-normalizer==3.2.0
```
**Without pin** (`pip install requests`):
```bash
Would install:
certifi-2025.8.3 # UPGRADED from 2023.7.22 (+2 years)
charset-normalizer-3.4.3 # UPGRADED from 3.2.0 (minor)
idna-3.10 # NEW dependency
requests-2.32.5 # NEW package
urllib3-2.5.0 # UPGRADED from 1.26.15 (MAJOR 1.x→2.x!)
```
**With pin** (`pip install requests urllib3==1.26.15 certifi==2023.7.22 charset-normalizer==3.2.0`):
```bash
Would install:
idna-3.10 # NEW dependency (required by requests)
requests-2.32.5 # NEW package
# Pinned packages stay at old versions:
urllib3==1.26.15 ✅ PROTECTED (prevented 1.x→2.x jump)
certifi==2023.7.22 ✅ PROTECTED
charset-normalizer==3.2.0 ✅ PROTECTED
```
**Key Finding**:
- `urllib3` 1.26.15 → 2.5.0 is a **MAJOR version jump** (breaking changes!)
- requests accepts both: `urllib3<3,>=1.21.1` (compatible with 1.x and 2.x)
- Pin successfully prevents unwanted major upgrade
---
### Scenario 2: Package with Dependency (python-dateutil + six)
**Purpose**: Verify pin_dependencies with dependency chain
**Analysis**:
```bash
$ pip install --dry-run python-dateutil
Would install:
python-dateutil-2.9.0.post0
six-1.17.0 # DEPENDENCY
```
**Initial Environment**:
```
six==1.16.0 # Older version
```
**Without pin** (`pip install python-dateutil`):
```bash
Would install:
python-dateutil-2.9.0.post0
six-1.17.0 # UPGRADED from 1.16.0
```
**With pin** (`pip install python-dateutil six==1.16.0`):
```bash
Would install:
python-dateutil-2.9.0.post0
# Pinned package:
six==1.16.0 ✅ PROTECTED
```
---
### Scenario 3: Package Deletion and Restore (six)
**Purpose**: Verify restore policy reinstalls deleted packages
**Initial Environment**:
```
six==1.16.0
attrs==23.1.0
packaging==23.1
```
**Action Sequence**:
1. Delete six: `pip uninstall -y six`
2. Verify deletion: `pip freeze | grep six` (empty)
3. Restore: `batch.ensure_installed()``pip install six==1.16.0`
**Expected Result**:
```
six==1.16.0 # ✅ RESTORED
```
---
### Scenario 4: Version Change and Restore (urllib3)
**Purpose**: Verify restore policy reverts version changes
**Initial Environment**:
```
urllib3==1.26.15
```
**Action Sequence**:
1. Upgrade: `pip install urllib3==2.5.0`
2. Verify change: `pip freeze | grep urllib3``urllib3==2.5.0`
3. Restore: `batch.ensure_installed()``pip install urllib3==1.26.15`
**Expected Result**:
```
urllib3==1.26.15 # ✅ RESTORED (downgraded from 2.5.0)
```
**Key Finding**:
- Downgrade from 2.x to 1.x requires explicit version specification
- pip allows downgrades with `pip install urllib3==1.26.15`
---
## Rejected Scenarios
### click + colorama (NO REAL DEPENDENCY)
**Analysis**:
```bash
$ pip install --dry-run click
Would install: click-8.3.0
$ pip install --dry-run click colorama==0.4.6
Would install: click-8.3.0 # colorama not installed!
```
**Finding**: click has **NO direct dependency** on colorama
- colorama is **optional** and platform-specific (Windows only)
- Not a good test case for dependency protection
**Recommendation**: Use python-dateutil + six instead
---
## Package Size Verification
```bash
Package Size Version Purpose
-------------------------------------------------------
urllib3 ~140KB 1.26.15 Protected dependency
certifi ~158KB 2023.7.22 SSL certificates
charset-normalizer ~46KB 3.2.0 Charset detection
idna ~69KB 3.10 NEW dep from requests
requests ~100KB 2.32.5 Main package to install
six ~11KB 1.16.0 Restore test
python-dateutil ~280KB 2.9.0 Depends on six
attrs ~61KB 23.1.0 Bystander
packaging ~48KB 23.1 Bystander
-------------------------------------------------------
Total ~913KB (< 1MB) ✅ All lightweight
```
---
## Dependency Graph
```
requests 2.32.5
├── charset_normalizer<4,>=2 (have: 3.2.0)
├── idna<4,>=2.5 (need: 3.10) ← NEW
├── urllib3<3,>=1.21.1 (have: 1.26.15, latest: 2.5.0)
└── certifi>=2017.4.17 (have: 2023.7.22, latest: 2025.8.3)
python-dateutil 2.9.0
└── six>=1.5 (have: 1.16.0, latest: 1.17.0)
```
---
## Version Compatibility Matrix
| Package | Old Version | Latest | Spec | Compatible? |
|---------|------------|--------|------|-------------|
| urllib3 | 1.26.15 | 2.5.0 | <3,>=1.21.1 | ✅ Both work |
| certifi | 2023.7.22 | 2025.8.3 | >=2017.4.17 | ✅ Both work |
| charset-normalizer | 3.2.0 | 3.4.3 | <4,>=2 | ✅ Both work |
| six | 1.16.0 | 1.17.0 | >=1.5 | ✅ Both work |
| idna | (none) | 3.10 | <4,>=2.5 | ⚠️ Must install |
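The "Compatible?" column can be double-checked programmatically with the `packaging` library, which is already present in the test environment as a bystander package:
```python
from packaging.specifiers import SpecifierSet

# requests' constraint on urllib3: both the pinned and the latest version satisfy it.
spec = SpecifierSet("<3,>=1.21.1")
for version in ("1.26.15", "2.5.0"):
    print(version, version in spec)  # True for both
```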
---
## Test Data Justification
### Why urllib3 1.26.15?
1. **Real world scenario**: Many projects pin urllib3<2 to avoid breaking changes
2. **Meaningful test**: 1.26.15 → 2.5.0 is a major version jump (API changes)
3. **Compatibility**: requests accepts both 1.x and 2.x (good for testing)
### Why certifi 2023.7.22?
1. **Real world scenario**: Older environment with outdated SSL certificates
2. **Meaningful test**: 2-year version gap (2023 → 2025)
3. **Safety**: Still compatible with requests
### Why six 1.16.0?
1. **Lightweight**: Only 11KB
2. **Real dependency**: python-dateutil actually depends on it
3. **Stable**: six is mature and rarely changes
---
## Recommendations for Test Implementation
### ✅ Keep These Scenarios:
1. **requests + urllib3 pin** - Real major version protection
2. **python-dateutil + six** - Real dependency chain
3. **six deletion/restore** - Real package management
4. **urllib3 version change** - Real downgrade scenario
### ❌ Remove These Scenarios:
1. **click + colorama** - No real dependency (colorama is optional/Windows-only)
### 📝 Update Required Files:
1. `requirements-test-base.txt` - Add idna (new dependency from requests)
2. `TEST_SCENARIOS.md` - Update with real dependency analysis
3. `test_dependency_protection.py` - Remove click-colorama test
4. `pip_util.design.en.md` - Update examples with verified dependencies
---
## Validation Commands
Run these to verify analysis:
```bash
# Check current environment
./test_venv/bin/pip freeze
# Simulate requests installation without pin
./test_venv/bin/pip install --dry-run requests
# Simulate requests installation with pin
./test_venv/bin/pip install --dry-run requests urllib3==1.26.15 certifi==2023.7.22 charset-normalizer==3.2.0
# Check python-dateutil dependencies
./test_venv/bin/pip install --dry-run python-dateutil
# Verify urllib3 version availability
./test_venv/bin/pip index versions urllib3 | head -20
```


@@ -0,0 +1,413 @@
# Dependency Tree Context for pip_util Tests
**Generated**: 2025-10-01
**Tool**: `pip install --dry-run --ignore-installed`
**Python**: 3.12.3
**pip**: 25.2
This document provides detailed dependency tree information for all test packages, verified against real PyPI data. Use this as a reference when extending tests.
---
## Table of Contents
1. [Current Test Environment](#current-test-environment)
2. [Package Dependency Trees](#package-dependency-trees)
3. [Version Analysis](#version-analysis)
4. [Upgrade Scenarios](#upgrade-scenarios)
5. [Adding New Test Scenarios](#adding-new-test-scenarios)
---
## Current Test Environment
**Base packages installed in test_venv** (from `requirements-test-base.txt`):
```
urllib3==1.26.15 # Protected from 2.x upgrade
certifi==2023.7.22 # Protected from 2025.x upgrade
charset-normalizer==3.2.0 # Protected from 3.4.x upgrade
six==1.16.0 # For deletion/restore tests
attrs==23.1.0 # Bystander package
packaging==23.1 # Bystander package
pytest==8.4.2 # Test framework
```
**Total environment size**: ~913KB (all packages < 1MB)
---
## Package Dependency Trees
### 1. requests → Dependencies
**Package**: `requests==2.32.5`
**Size**: ~100KB
**Purpose**: Main test package for dependency protection
#### Dependency Tree
```
requests==2.32.5
├── charset-normalizer<4,>=2
│ └── 3.2.0 (OLD) → 3.4.3 (LATEST)
├── idna<4,>=2.5
│ └── (NOT INSTALLED) → 3.10 (LATEST)
├── urllib3<3,>=1.21.1
│ └── 1.26.15 (OLD) → 2.5.0 (LATEST) ⚠️ MAJOR VERSION JUMP
└── certifi>=2017.4.17
└── 2023.7.22 (OLD) → 2025.8.3 (LATEST)
```
#### Install Scenarios
**Scenario A: Without constraints (fresh install)**
```bash
$ pip install --dry-run --ignore-installed requests
Would install:
certifi-2025.8.3 # Latest version
charset-normalizer-3.4.3 # Latest version
idna-3.10 # New dependency
requests-2.32.5 # Target package
urllib3-2.5.0 # Latest version (2.x!)
```
**Scenario B: With pin constraints**
```bash
$ pip install --dry-run requests \
      urllib3==1.26.15 \
      certifi==2023.7.22 \
      charset-normalizer==3.2.0
Would install:
certifi-2023.7.22 # Pinned to OLD version
charset-normalizer-3.2.0 # Pinned to OLD version
idna-3.10 # New dependency (not pinned)
requests-2.32.5 # Target package
urllib3-1.26.15 # Pinned to OLD version
```
**Impact Analysis**:
- ✅ Pin successfully prevents urllib3 1.x → 2.x major upgrade
- ✅ Pin prevents certifi 2023 → 2025 upgrade (2 years)
- ✅ Pin prevents charset-normalizer minor upgrade
- ⚠️ idna is NEW and NOT pinned (acceptable - new dependency)
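The pin only resolves because each held-back version still satisfies requests' declared specifiers. A quick self-check of that fact (a sketch, assuming the `packaging` library is installed; the dicts below restate the tree above):
```python
from packaging.specifiers import SpecifierSet
from packaging.version import Version

# Specifiers declared by requests==2.32.5 (from the tree above)
specs = {
    "urllib3": SpecifierSet(">=1.21.1,<3"),
    "certifi": SpecifierSet(">=2017.4.17"),
    "charset-normalizer": SpecifierSet(">=2,<4"),
}
pins = {"urllib3": "1.26.15", "certifi": "2023.7.22", "charset-normalizer": "3.2.0"}

for name, version in pins.items():
    # If a pin fell outside the specifier, pip would fail instead of pinning
    assert Version(version) in specs[name], f"{name}=={version} conflicts with requests"
print("All pins are resolvable alongside requests")
```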
---
### 2. python-dateutil → Dependencies
**Package**: `python-dateutil==2.9.0.post0`
**Size**: ~280KB
**Purpose**: Real dependency chain test (depends on six)
#### Dependency Tree
```
python-dateutil==2.9.0.post0
└── six>=1.5
    └── 1.16.0 (OLD) → 1.17.0 (LATEST)
```
#### Install Scenarios
**Scenario A: Without constraints**
```bash
$ pip install --dry-run --ignore-installed python-dateutil
Would install:
python-dateutil-2.9.0.post0 # Target package
six-1.17.0 # Latest version
```
**Scenario B: With pin constraints**
```bash
$ pip install --dry-run python-dateutil six==1.16.0
Would install:
python-dateutil-2.9.0.post0 # Target package
six-1.16.0 # Pinned to OLD version
```
**Impact Analysis**:
- ✅ Pin successfully prevents six 1.16.0 → 1.17.0 upgrade
- ✅ Real dependency relationship (verified via PyPI)
---
### 3. Other Test Packages (No Dependencies)
These packages have no dependencies or only have dependencies already in the test environment:
```
attrs==23.1.0 # No dependencies
packaging==23.1 # No dependencies (standalone)
six==1.16.0 # No dependencies (pure Python)
```
---
## Version Analysis
### urllib3: Major Version Jump (1.x → 2.x)
**Current**: 1.26.15 (2023)
**Latest**: 2.5.0 (2025)
**Breaking Changes**: YES - urllib3 2.0 removed deprecated APIs
**Available versions**:
```
2.x series: 2.5.0, 2.4.0, 2.3.0, 2.2.3, 2.2.2, 2.2.1, 2.2.0, 2.1.0, 2.0.7, ...
1.26.x: 1.26.20, 1.26.19, 1.26.18, 1.26.17, 1.26.16, 1.26.15, ...
1.25.x: 1.25.11, 1.25.10, 1.25.9, ...
```
**Why test with 1.26.15?**
- ✅ Real-world scenario: Many projects pin `urllib3<2` to avoid breaking changes
- ✅ Meaningful test: 1.x → 2.x is a major API change
- ✅ Compatibility: requests accepts both 1.x and 2.x (`urllib3<3,>=1.21.1`)
**Breaking changes in urllib3 2.0**:
- Removed `urllib3.contrib.pyopenssl`
- Removed `urllib3.contrib.securetransport`
- Changed import paths for some modules
- Updated connection pooling behavior
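Since requests' specifier admits both series, only the pin decides which side of the 1.x/2.x boundary the install lands on. A small helper (a sketch, assuming the `packaging` library) makes the "major version jump" checks used throughout these documents explicit:
```python
from packaging.version import Version

def is_major_jump(old: str, new: str) -> bool:
    """True when upgrading old -> new crosses a major version boundary."""
    return Version(new).major > Version(old).major

assert is_major_jump("1.26.15", "2.5.0")    # urllib3: breaking jump
assert not is_major_jump("3.2.0", "3.4.3")  # charset-normalizer: minor only
```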
---
### certifi: Long-Term Version Gap (2023 → 2025)
**Current**: 2023.7.22 (July 2023)
**Latest**: 2025.8.3 (August 2025)
**Gap**: ~2 years of SSL certificate updates
**Available versions**:
```
2025: 2025.8.3, 2025.7.14, 2025.7.9, 2025.6.15, 2025.4.26, ...
2024: 2024.12.25, 2024.11.28, 2024.10.29, 2024.9.19, ...
2023: 2023.11.17, 2023.7.22, 2023.5.7, ...
```
**Why test with 2023.7.22?**
- ✅ Real-world scenario: Older environments with outdated SSL certificates
- ✅ Meaningful test: 2-year gap shows protection of older versions
- ✅ Safety: Still compatible with requests (`certifi>=2017.4.17`)
---
### charset-normalizer: Minor Version Updates
**Current**: 3.2.0 (2023)
**Latest**: 3.4.3 (2025)
**Breaking Changes**: NO - only minor/patch updates
**Available versions**:
```
3.4.x: 3.4.3, 3.4.2, 3.4.1, 3.4.0
3.3.x: 3.3.2, 3.3.1, 3.3.0
3.2.x: 3.2.0
```
**Why test with 3.2.0?**
- ✅ Demonstrates protection of minor version updates
- ✅ Compatible with requests (`charset-normalizer<4,>=2`)
---
### six: Stable Version Update
**Current**: 1.16.0 (2021)
**Latest**: 1.17.0 (2024)
**Breaking Changes**: NO - six is very stable
**Available versions**:
```
1.17.0, 1.16.0, 1.15.0, 1.14.0, 1.13.0, 1.12.0, ...
```
**Why test with 1.16.0?**
- ✅ Real dependency of python-dateutil
- ✅ Small size (11KB) - lightweight for tests
- ✅ Demonstrates protection of stable packages
---
### idna: New Dependency
**Not pre-installed** - Added by requests
**Version**: 3.10
**Size**: ~69KB
**Dependency spec**: `idna<4,>=2.5` (from requests)
**Why NOT pre-installed?**
- ✅ Tests that new dependencies are correctly added
- ✅ Tests that pins only affect specified packages
- ✅ Real-world scenario: new dependency introduced by package update
---
## Upgrade Scenarios
### Scenario Matrix
| Package | Initial | Without Pin | With Pin | Change Type |
|---------|---------|-------------|----------|-------------|
| **urllib3** | 1.26.15 | 2.5.0 ❌ | 1.26.15 ✅ | Major (breaking) |
| **certifi** | 2023.7.22 | 2025.8.3 ❌ | 2023.7.22 ✅ | 2-year gap |
| **charset-normalizer** | 3.2.0 | 3.4.3 ❌ | 3.2.0 ✅ | Minor update |
| **six** | 1.16.0 | 1.17.0 ❌ | 1.16.0 ✅ | Stable update |
| **idna** | (none) | 3.10 ✅ | 3.10 ✅ | New dependency |
| **requests** | (none) | 2.32.5 ✅ | 2.32.5 ✅ | Target package |
| **python-dateutil** | (none) | 2.9.0 ✅ | 2.9.0 ✅ | Target package |
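The matrix maps directly onto a parametrized check. A sketch of that shape (hypothetical test, using the `get_installed_packages` helper fixture; it assumes requests was installed with the pin policy in an earlier step):
```python
import pytest

# (package, expected pinned version) rows from the matrix above
PIN_MATRIX = [
    ("urllib3", "1.26.15"),
    ("certifi", "2023.7.22"),
    ("charset-normalizer", "3.2.0"),
]

@pytest.mark.parametrize("package,expected", PIN_MATRIX)
def test_pin_holds(package, expected, get_installed_packages):
    installed = get_installed_packages()
    assert installed[package] == expected, f"{package} drifted from {expected}"
```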
---
## Adding New Test Scenarios
### Step 1: Identify Candidate Package
Use `pip install --dry-run` to analyze dependencies:
```bash
# Analyze package dependencies
./test_venv/bin/pip install --dry-run --ignore-installed PACKAGE
# Check what changes with current environment
./test_venv/bin/pip install --dry-run PACKAGE
# List available versions
./test_venv/bin/pip index versions PACKAGE
```
### Step 2: Verify Real Dependencies
**Good candidates**:
- ✅ Has 2+ dependencies
- ✅ Dependencies have version upgrades available
- ✅ Total size < 500KB (all packages combined)
- ✅ Real-world use case (popular package)
**Examples**:
```bash
# flask → click, werkzeug, jinja2 (good: multiple dependencies)
$ pip install --dry-run --ignore-installed flask
Would install: Flask-3.1.2 Jinja2-3.1.6 MarkupSafe-3.0.3 Werkzeug-3.1.3 blinker-1.9.0 click-8.3.0 itsdangerous-2.2.0
# pytest-cov → pytest, coverage (good: popular testing tool)
$ pip install --dry-run --ignore-installed pytest-cov
Would install: coverage-7.10.7 pytest-8.4.2 pytest-cov-7.0.0
```
**Bad candidates**:
- ❌ click → colorama (no real dependency - colorama is optional/Windows-only)
- ❌ pandas → numpy (too large - numpy is 50MB+)
- ❌ torch → ... (too large - 800MB+)
### Step 3: Document Dependencies
Add to this file:
```markdown
### Package: PACKAGE_NAME → Dependencies
**Package**: `PACKAGE==VERSION`
**Size**: ~XXXKB
**Purpose**: Brief description
#### Dependency Tree
(Use tree format)
#### Install Scenarios
(Show with/without pin)
#### Impact Analysis
(What does pin protect?)
```
### Step 4: Update Test Files
1. Add package to `requirements-test-base.txt` (if pre-installation needed)
2. Create policy fixture in test file
3. Write test function using `reset_test_venv` fixture
4. Update `TEST_SCENARIOS.md` with detailed scenario
---
## Maintenance Notes
### Updating This Document
Re-run analysis when:
- ✅ PyPI releases major version updates (e.g., urllib3 3.0)
- ✅ Adding new test packages
- ✅ Test environment base packages change
- ✅ Every 6 months (to catch version drift)
### Verification Commands
```bash
# Regenerate dependency tree
./test_venv/bin/pip install --dry-run --ignore-installed requests
./test_venv/bin/pip install --dry-run --ignore-installed python-dateutil
# Check current environment
./test_venv/bin/pip freeze
# Verify test packages still available on PyPI
./test_venv/bin/pip index versions urllib3
./test_venv/bin/pip index versions certifi
./test_venv/bin/pip index versions six
```
---
## Quick Reference: Package Specs
From actual package metadata:
```python
# requests dependencies (from requests==2.32.5)
install_requires = [
"charset_normalizer<4,>=2",
"idna<4,>=2.5",
"urllib3<3,>=1.21.1",
"certifi>=2017.4.17"
]
# python-dateutil dependencies (from python-dateutil==2.9.0)
install_requires = [
"six>=1.5"
]
# six dependencies
install_requires = [] # No dependencies
# attrs dependencies
install_requires = [] # No dependencies
# packaging dependencies
install_requires = [] # No dependencies
```
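These specs can be re-verified from installed metadata instead of being copied by hand. A minimal sketch, assuming the packages are installed in the active environment:
```python
from importlib.metadata import requires

# Prints each declared dependency spec, e.g. "urllib3<3,>=1.21.1"
for dist in ("requests", "python-dateutil", "six", "attrs", "packaging"):
    print(dist, "->", requires(dist) or "no dependencies")
```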
---
## Version Compatibility Table
| Package | Minimum | Maximum | Current Test | Latest | Notes |
|---------|---------|---------|--------------|--------|-------|
| urllib3 | 1.21.1 | <3.0 | 1.26.15 | 2.5.0 | Major version jump possible |
| certifi | 2017.4.17 | (none) | 2023.7.22 | 2025.8.3 | Always backward compatible |
| charset-normalizer | 2.0 | <4.0 | 3.2.0 | 3.4.3 | Within major version |
| six | 1.5 | (none) | 1.16.0 | 1.17.0 | Very stable |
| idna | 2.5 | <4.0 | (new) | 3.10 | Added by requests |
---
## See Also
- **DEPENDENCY_ANALYSIS.md** - Detailed analysis methodology
- **TEST_SCENARIOS.md** - Complete test scenario specifications
- **requirements-test-base.txt** - Base environment packages
- **README.md** - Test suite overview and usage

View File

@@ -0,0 +1,305 @@
# pip_util Integration Tests
Real integration tests for `pip_util.py` using actual PyPI packages and pip operations.
## Overview
These tests use a **real isolated venv** to verify pip_util behavior with actual package installations, deletions, and version changes. No mocks - real pip operations only.
## Quick Start
### 1. Setup Test Environment
```bash
cd tests/common/pip_util
./setup_test_env.sh
```
This creates `test_venv/` with base packages:
- urllib3==1.26.15
- certifi==2023.7.22
- charset-normalizer==3.2.0
- colorama==0.4.6
- six==1.16.0
- attrs==23.1.0
- packaging==23.1
- pytest (latest)
### 2. Run Tests
```bash
# Run all integration tests
pytest -v --override-ini="addopts="
# Run specific test
pytest test_dependency_protection.py -v --override-ini="addopts="
# Run with markers
pytest -m integration -v --override-ini="addopts="
```
## Test Architecture
### Real venv Integration
- **No subprocess mocking** - uses real pip install/uninstall
- **Isolated test venv** - prevents system contamination
- **Automatic cleanup** - `reset_test_venv` fixture restores state after each test
### Test Fixtures
**venv Management**:
- `test_venv_path` - Path to test venv (session scope)
- `test_pip_cmd` - pip command for test venv
- `reset_test_venv` - Restore venv to initial state after each test
**Helpers**:
- `get_installed_packages()` - Get current venv packages
- `install_packages(*packages)` - Install packages in test venv
- `uninstall_packages(*packages)` - Uninstall packages in test venv
**Policy Configuration**:
- `temp_policy_dir` - Temporary directory for base policies
- `temp_user_policy_dir` - Temporary directory for user policies
- `mock_manager_util` - Mock manager_util paths to use temp dirs
- `mock_context` - Mock context paths to use temp dirs
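These fixtures compose naturally in a test body. A usage sketch (hypothetical test name), combining the `make_policy` factory from `conftest.py` with the policy-path mocks above:
```python
def test_pin_policy_shape(make_policy, mock_manager_util, mock_context):
    # Build a pin_dependencies policy for requests with the factory fixture
    policy = make_policy(
        "requests",
        "pin_dependencies",
        section="apply_all_matches",
        pinned_packages=["urllib3", "certifi", "charset-normalizer"],
    )
    rule = policy["requests"]["apply_all_matches"][0]
    assert rule["type"] == "pin_dependencies"
    assert rule["pinned_packages"] == ["urllib3", "certifi", "charset-normalizer"]
```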
## Test Scenarios
### Scenario 1: Dependency Version Protection
**File**: `test_dependency_protection.py::test_dependency_version_protection_with_pin`
**Initial State**:
```python
urllib3==1.26.15
certifi==2023.7.22
charset-normalizer==3.2.0
```
**Action**: Install `requests` with pin_dependencies policy
**Expected Result**:
```python
# Dependencies stay at old versions (protected by pin)
urllib3==1.26.15 # NOT upgraded to 2.x
certifi==2023.7.22 # NOT upgraded
charset-normalizer==3.2.0 # NOT upgraded
requests==2.31.0 # newly installed
```
### Scenario 2: Click-Colorama Dependency Chain
**File**: `test_dependency_protection.py::test_dependency_chain_with_click_colorama`
**Initial State**:
```python
colorama==0.4.6
```
**Action**: Install `click` with force_version + pin_dependencies
**Expected Result**:
```python
colorama==0.4.6 # PINNED
click==8.1.3 # FORCED to specific version
```
### Scenario 3: Package Deletion and Restore
**File**: `test_environment_recovery.py::test_package_deletion_and_restore`
**Initial State**:
```python
six==1.16.0
attrs==23.1.0
packaging==23.1
```
**Action**: Delete `six` → call `batch.ensure_installed()`
**Expected Result**:
```python
six==1.16.0 # RESTORED to required version
```
### Scenario 4: Version Change and Restore
**File**: `test_environment_recovery.py::test_version_change_and_restore`
**Initial State**:
```python
urllib3==1.26.15
```
**Action**: Upgrade `urllib3` to 2.1.0 → call `batch.ensure_installed()`
**Expected Result**:
```python
urllib3==1.26.15 # RESTORED to required version (downgraded)
```
## Test Categories
### Priority 1 (Essential) ✅ ALL PASSING
- ✅ Dependency version protection (enhanced with exact versions)
- ✅ Package deletion and restore (enhanced with exact versions)
- ✅ Version change and restore (enhanced with downgrade verification)
- ✅ Pin only affects specified packages ✨ NEW
- ✅ Major version jump prevention ✨ NEW
### Priority 2 (Important)
- ✅ Complex dependency chains (python-dateutil + six)
- ⏳ Full workflow integration (TODO: update to real venv)
- ⏳ Pin failure retry (TODO: update to real venv)
### Priority 3 (Edge Cases)
- ⏳ Platform conditions (TODO: update to real venv)
- ⏳ Policy priority (TODO: update to real venv)
- ⏳ Unit tests (no venv needed)
- ⏳ Edge cases (no venv needed)
## Package Selection
All test packages are **real PyPI packages < 200KB**:
| Package | Size | Version | Purpose |
|---------|------|---------|---------|
| **urllib3** | ~100KB | 1.26.15 | Protected dependency (prevent 2.x upgrade) |
| **certifi** | ~10KB | 2023.7.22 | SSL certificates (pinned) |
| **charset-normalizer** | ~46KB | 3.2.0 | Charset detection (pinned) |
| **requests** | ~100KB | 2.31.0 | Main package to install |
| **colorama** | ~25KB | 0.4.6 | Terminal colors (pinned) |
| **click** | ~90KB | 8.1.3 | CLI framework (forced version) |
| **six** | ~11KB | 1.16.0 | Python 2/3 compatibility (restore) |
| **attrs** | ~61KB | 23.1.0 | Bystander package |
| **packaging** | ~48KB | 23.1 | Bystander package |
## Cleanup
### Manual Cleanup
```bash
# Remove test venv
rm -rf test_venv/
# Recreate fresh venv
./setup_test_env.sh
```
### Automatic Cleanup
The `reset_test_venv` fixture automatically:
1. Records initial package state
2. Runs test
3. Removes all packages (except pip/setuptools/wheel)
4. Reinstalls initial packages
## Troubleshooting
### Error: "Test venv not found"
**Solution**: Run `./setup_test_env.sh`
### Error: "Package not installed in initial state"
**Solution**: Check `requirements-test-base.txt` and recreate venv
### Tests are slow
**Reason**: Real pip operations take 2-3 seconds per test
**This is expected** - we're doing actual pip install/uninstall
## Implementation Details
### How reset_test_venv Works
```python
@pytest.fixture
def reset_test_venv(test_pip_cmd):
    # 1. Record initial state (pip freeze output)
    initial = subprocess.run(test_pip_cmd + ["freeze"], ...)

    yield  # Run test here

    # 2. Remove all packages except pip/setuptools/wheel
    current = subprocess.run(test_pip_cmd + ["freeze"], ...)
    subprocess.run(test_pip_cmd + ["uninstall", "-y", ...], ...)

    # 3. Restore initial state (the freeze output is written to a
    #    temporary requirements file and reinstalled)
    subprocess.run(test_pip_cmd + ["install", "-r", temp_requirements], ...)
```
### How make_pip_cmd is Patched
```python
@pytest.fixture(autouse=True)
def setup_pip_util(monkeypatch, test_pip_cmd):
    from comfyui_manager.common import pip_util

    def make_test_pip_cmd(args: List[str]) -> List[str]:
        return test_pip_cmd + args  # Use test venv pip

    monkeypatch.setattr(
        pip_util.manager_util,
        "make_pip_cmd",
        make_test_pip_cmd
    )
```
## Dependency Analysis Tool
Use `analyze_dependencies.py` to examine package dependencies before adding new tests:
```bash
# Analyze specific package
python analyze_dependencies.py requests
# Analyze all test packages
python analyze_dependencies.py --all
# Show current environment
python analyze_dependencies.py --env
```
**Output includes**:
- Latest available versions
- Dependencies that would be installed
- Version upgrades that would occur
- Impact of pin constraints
**Example output**:
```
📦 Latest version: 2.32.5
🔍 Scenario A: Install without constraints
Would install 5 packages:
• urllib3 1.26.15 → 2.5.0 ⚠️ UPGRADE
🔍 Scenario B: Install with pin constraints
Would install 5 packages:
• urllib3 1.26.15 (no change) 📌 PINNED
✅ Pin prevented 2 upgrade(s)
```
## Test Statistics
**Current Status**: 6 tests, 100% passing
```
test_dependency_version_protection_with_pin PASSED (2.28s)
test_dependency_chain_with_six_pin PASSED (2.00s)
test_pin_only_affects_specified_packages PASSED (2.25s) ✨ NEW
test_major_version_jump_prevention PASSED (3.53s) ✨ NEW
test_package_deletion_and_restore PASSED (2.25s)
test_version_change_and_restore PASSED (2.24s)
Total: 14.10s
```
**Test Improvements**:
- ✅ All tests verify exact version numbers
- ✅ All tests reference DEPENDENCY_TREE_CONTEXT.md
- ✅ Added 2 new critical tests (pin selectivity, major version prevention)
- ✅ Enhanced error messages with expected vs actual values
## Design Documents
- **TEST_IMPROVEMENTS.md** - Summary of test enhancements based on dependency context
- **DEPENDENCY_TREE_CONTEXT.md** - Verified dependency trees for all test packages
- **DEPENDENCY_ANALYSIS.md** - Dependency analysis methodology
- **CONTEXT_FILES_GUIDE.md** - Guide for using context files
- **TEST_SCENARIOS.md** - Detailed test scenario specifications
- **pip_util.test-design.md** - Test design and architecture
- **pip_util.design.en.md** - pip_util design documentation

View File

@@ -0,0 +1,433 @@
# Test Code Improvements Based on Dependency Context
**Date**: 2025-10-01
**Basis**: DEPENDENCY_TREE_CONTEXT.md analysis
This document summarizes all test improvements made using verified dependency tree information.
---
## Summary of Changes
### Tests Enhanced
| Test File | Tests Modified | Tests Added | Total Tests |
|-----------|----------------|-------------|-------------|
| `test_dependency_protection.py` | 2 | 2 | 4 |
| `test_environment_recovery.py` | 2 | 0 | 2 |
| **Total** | **4** | **2** | **6** |
### Test Results
```bash
$ pytest test_dependency_protection.py test_environment_recovery.py -v
test_dependency_protection.py::test_dependency_version_protection_with_pin PASSED
test_dependency_protection.py::test_dependency_chain_with_six_pin PASSED
test_dependency_protection.py::test_pin_only_affects_specified_packages PASSED ✨ NEW
test_dependency_protection.py::test_major_version_jump_prevention PASSED ✨ NEW
test_environment_recovery.py::test_package_deletion_and_restore PASSED
test_environment_recovery.py::test_version_change_and_restore PASSED
6 passed in 14.10s
```
---
## Detailed Improvements
### 1. test_dependency_version_protection_with_pin
**File**: `test_dependency_protection.py:34-94`
**Enhancements**:
- ✅ Added exact version assertions based on DEPENDENCY_TREE_CONTEXT.md
- ✅ Verified initial versions: urllib3==1.26.15, certifi==2023.7.22, charset-normalizer==3.2.0
- ✅ Added verification that idna is NOT pre-installed
- ✅ Added assertion that idna==3.10 is installed as NEW dependency
- ✅ Verified requests==2.32.5 is installed
- ✅ Added detailed error messages explaining what versions are expected and why
**Key Assertions Added**:
```python
# Verify expected OLD versions
assert initial_urllib3 == "1.26.15", f"Expected urllib3==1.26.15, got {initial_urllib3}"
assert initial_certifi == "2023.7.22", f"Expected certifi==2023.7.22, got {initial_certifi}"
assert initial_charset == "3.2.0", f"Expected charset-normalizer==3.2.0, got {initial_charset}"
# Verify idna is NOT installed initially
assert "idna" not in initial, "idna should not be pre-installed"
# Verify new dependency was added (idna is NOT pinned, so it gets installed)
assert "idna" in final_packages, "idna should be installed as new dependency"
assert final_packages["idna"] == "3.10", f"Expected idna==3.10, got {final_packages['idna']}"
```
**Based on Context**:
- DEPENDENCY_TREE_CONTEXT.md Section 1: requests → Dependencies
- Verified: Without pin, urllib3 would upgrade to 2.5.0 (MAJOR version jump)
- Verified: idna is NEW dependency (not in requirements-test-base.txt)
---
### 2. test_dependency_chain_with_six_pin
**File**: `test_dependency_protection.py:117-162`
**Enhancements**:
- ✅ Added exact version assertion for six==1.16.0
- ✅ Added exact version assertion for python-dateutil==2.9.0.post0
- ✅ Added detailed error messages
- ✅ Added docstring reference to DEPENDENCY_TREE_CONTEXT.md
**Key Assertions Added**:
```python
# Verify expected OLD version
assert initial_six == "1.16.0", f"Expected six==1.16.0, got {initial_six}"
# Verify final versions
assert final_packages["python-dateutil"] == "2.9.0.post0", f"Expected python-dateutil==2.9.0.post0"
assert final_packages["six"] == "1.16.0", "six should remain at 1.16.0 (prevented 1.17.0 upgrade)"
```
**Based on Context**:
- DEPENDENCY_TREE_CONTEXT.md Section 2: python-dateutil → Dependencies
- Verified: six is a REAL dependency (not optional like colorama)
- Verified: Without pin, six would upgrade from 1.16.0 to 1.17.0
---
### 3. test_pin_only_affects_specified_packages ✨ NEW
**File**: `test_dependency_protection.py:165-208`
**Purpose**: Verify that pin is selective, not global
**Test Logic**:
1. Verify idna is NOT pre-installed
2. Verify requests is NOT pre-installed
3. Install requests with pin policy (only pins urllib3, certifi, charset-normalizer)
4. Verify idna was installed at latest version (3.10) - NOT pinned
5. Verify requests was installed at expected version (2.32.5)
**Key Assertions**:
```python
# Verify idna was installed (NOT pinned, so gets latest)
assert "idna" in final_packages, "idna should be installed as new dependency"
assert final_packages["idna"] == "3.10", "idna should be at latest version 3.10 (not pinned)"
```
**Based on Context**:
- DEPENDENCY_TREE_CONTEXT.md: "⚠️ idna is NEW and NOT pinned (acceptable - new dependency)"
- Verified: Pin only affects specified packages in pinned_packages list
---
### 4. test_major_version_jump_prevention ✨ NEW
**File**: `test_dependency_protection.py:211-271`
**Purpose**: Verify that pin prevents MAJOR version jumps with breaking changes
**Test Logic**:
1. Verify initial urllib3==1.26.15
2. **Test WITHOUT pin**: Uninstall deps, install requests → urllib3 upgrades to 2.x
3. Verify urllib3 was upgraded to 2.x (starts with "2.")
4. Reset environment
5. **Test WITH pin**: Install requests with pin → urllib3 stays at 1.x
6. Verify urllib3 stayed at 1.26.15 (starts with "1.")
**Key Assertions**:
```python
# Without pin - verify urllib3 upgrades to 2.x
assert without_pin["urllib3"].startswith("2."), \
f"Without pin, urllib3 should upgrade to 2.x, got {without_pin['urllib3']}"
# With pin - verify urllib3 stays at 1.x
assert final_packages["urllib3"] == "1.26.15", \
"Pin should prevent urllib3 from upgrading to 2.x (breaking changes)"
assert final_packages["urllib3"].startswith("1."), \
f"urllib3 should remain at 1.x series, got {final_packages['urllib3']}"
```
**Based on Context**:
- DEPENDENCY_TREE_CONTEXT.md: "urllib3 1.26.15 → 2.5.0 is a MAJOR version jump"
- DEPENDENCY_TREE_CONTEXT.md: "urllib3 2.0 removed deprecated APIs"
- This is the MOST IMPORTANT test - prevents breaking changes
---
### 5. test_package_deletion_and_restore
**File**: `test_environment_recovery.py:33-78`
**Enhancements**:
- ✅ Added exact version assertion for six==1.16.0
- ✅ Added verification that six is restored to EXACT version (not latest)
- ✅ Added detailed error messages
- ✅ Added docstring reference to DEPENDENCY_TREE_CONTEXT.md
**Key Assertions Added**:
```python
# Verify six is initially installed at expected version
assert initial["six"] == "1.16.0", f"Expected six==1.16.0, got {initial['six']}"
# Verify six was restored to EXACT required version (not latest)
assert final_packages["six"] == "1.16.0", \
"six should be restored to exact version 1.16.0 (not 1.17.0 latest)"
```
**Based on Context**:
- DEPENDENCY_TREE_CONTEXT.md: "six: 1.16.0 (OLD) → 1.17.0 (LATEST)"
- Verified: Restore policy restores to EXACT version, not latest
---
### 6. test_version_change_and_restore
**File**: `test_environment_recovery.py:105-158`
**Enhancements**:
- ✅ Added exact version assertions (1.26.15 initially, 2.1.0 after upgrade)
- ✅ Added verification of major version change (1.x → 2.x)
- ✅ Added verification of major version downgrade (2.x → 1.x)
- ✅ Added detailed error messages explaining downgrade capability
- ✅ Added docstring reference to DEPENDENCY_TREE_CONTEXT.md
**Key Assertions Added**:
```python
# Verify version was changed to 2.x
assert installed_after["urllib3"] == "2.1.0", \
f"urllib3 should be upgraded to 2.1.0, got {installed_after['urllib3']}"
assert installed_after["urllib3"].startswith("2."), \
"urllib3 should be at 2.x series"
# Verify version was DOWNGRADED from 2.x back to 1.x
assert final["urllib3"] == "1.26.15", \
"urllib3 should be downgraded to 1.26.15 (from 2.1.0)"
assert final["urllib3"].startswith("1."), \
f"urllib3 should be back at 1.x series, got {final['urllib3']}"
```
**Based on Context**:
- DEPENDENCY_TREE_CONTEXT.md: "urllib3 can upgrade from 1.26.15 (1.x) to 2.5.0 (2.x)"
- Verified: Restore policy can DOWNGRADE (not just prevent upgrades)
- Tests actual version downgrade capability (2.x → 1.x)
---
## Test Coverage Analysis
### Before Improvements
| Scenario | Coverage |
|----------|----------|
| Pin prevents upgrades | ✅ Basic |
| New dependencies installed | ❌ Not tested |
| Pin is selective | ❌ Not tested |
| Major version jump prevention | ❌ Not tested |
| Exact version restoration | ❌ Not tested |
| Version downgrade capability | ❌ Not tested |
### After Improvements
| Scenario | Coverage | Test |
|----------|----------|------|
| Pin prevents upgrades | ✅ Enhanced | test_dependency_version_protection_with_pin |
| New dependencies installed | ✅ Added | test_dependency_version_protection_with_pin |
| Pin is selective | ✅ Added | test_pin_only_affects_specified_packages |
| Major version jump prevention | ✅ Added | test_major_version_jump_prevention |
| Exact version restoration | ✅ Enhanced | test_package_deletion_and_restore |
| Version downgrade capability | ✅ Enhanced | test_version_change_and_restore |
---
## Key Testing Principles Applied
### 1. Exact Version Verification
**Before**:
```python
assert final_packages["urllib3"] == initial_urllib3 # Generic
```
**After**:
```python
assert initial_urllib3 == "1.26.15", f"Expected urllib3==1.26.15, got {initial_urllib3}"
assert final_packages["urllib3"] == "1.26.15", "urllib3 should remain at 1.26.15 (prevented 2.x upgrade)"
```
**Benefit**: Fails with clear message if environment setup is wrong
---
### 2. Version Series Verification
**Added**:
```python
assert final_packages["urllib3"].startswith("1."), \
f"urllib3 should remain at 1.x series, got {final_packages['urllib3']}"
```
**Benefit**: Catches major version jumps even if exact version changes
---
### 3. Negative Testing (Verify NOT Installed)
**Added**:
```python
assert "idna" not in initial, "idna should not be pre-installed"
```
**Benefit**: Ensures test environment is in expected state
---
### 4. Context-Based Documentation
**Every test now includes**:
```python
"""
Based on DEPENDENCY_TREE_CONTEXT.md:
<specific section reference>
<expected behavior from context>
"""
```
**Benefit**: Links test expectations to verified dependency data
---
## Real-World Scenarios Tested
### Scenario 1: Preventing Breaking Changes
**Test**: `test_major_version_jump_prevention`
**Real-World Impact**:
- urllib3 2.0 removed deprecated APIs
- Many applications break when upgrading from 1.x to 2.x
- Pin prevents this automatic breaking change
**Verified**: ✅ Pin successfully prevents 1.x → 2.x upgrade
---
### Scenario 2: Allowing New Dependencies
**Test**: `test_pin_only_affects_specified_packages`
**Real-World Impact**:
- New dependencies are safe to add (idna)
- Pin should not block ALL changes
- Only specified packages are protected
**Verified**: ✅ idna installs at 3.10 even with pin policy active
---
### Scenario 3: Version Downgrade Recovery
**Test**: `test_version_change_and_restore`
**Real-World Impact**:
- Sometimes packages get upgraded accidentally
- Need to downgrade to known-good version
- Downgrade is harder than upgrade prevention
**Verified**: ✅ Can downgrade urllib3 from 2.x to 1.x
---
## Test Execution Performance
```
Test Performance Summary:
test_dependency_version_protection_with_pin 2.28s (enhanced)
test_dependency_chain_with_six_pin 2.00s (enhanced)
test_pin_only_affects_specified_packages 2.25s (NEW)
test_major_version_jump_prevention 3.53s (NEW - does 2 install cycles)
test_package_deletion_and_restore 2.25s (enhanced)
test_version_change_and_restore 2.24s (enhanced)
Total: 14.10s for 6 tests
Average: 2.35s per test
```
**Note**: `test_major_version_jump_prevention` is slower because it tests both WITH and WITHOUT pin (2 install cycles).
---
## Files Modified
1. **test_dependency_protection.py**: grew from 132 to 272 lines
   - Enhanced 2 existing tests
   - Added 2 new tests
2. **test_environment_recovery.py**: grew from 141 to 159 lines
   - Enhanced 2 existing tests
---
## Verification Against Context
All test improvements verified against:
| Context Source | Usage |
|----------------|-------|
| **DEPENDENCY_TREE_CONTEXT.md** | All version numbers, dependency trees |
| **DEPENDENCY_ANALYSIS.md** | Package selection rationale, rejected scenarios |
| **TEST_SCENARIOS.md** | Scenario specifications, expected outcomes |
| **requirements-test-base.txt** | Initial environment state |
| **analyze_dependencies.py** | Real-time verification of expectations |
---
## Future Maintenance
### When to Update Tests
Update tests when:
- ✅ PyPI releases new major versions (e.g., urllib3 3.0)
- ✅ Base package versions change in requirements-test-base.txt
- ✅ New test scenarios added to DEPENDENCY_TREE_CONTEXT.md
- ✅ Policy behavior changes in pip_util.py
### How to Update Tests
1. Run `python analyze_dependencies.py --all`
2. Update expected version numbers in tests
3. Update DEPENDENCY_TREE_CONTEXT.md
4. Update TEST_SCENARIOS.md
5. Run tests to verify
### Verification Commands
```bash
# Verify environment
python analyze_dependencies.py --env
# Verify package dependencies
python analyze_dependencies.py requests
python analyze_dependencies.py python-dateutil
# Run all tests
pytest test_dependency_protection.py test_environment_recovery.py -v --override-ini="addopts="
```
---
## Summary
- **6 tests** now verify real PyPI package dependencies
- **100% pass rate** with real pip operations
- **All version numbers** verified against DEPENDENCY_TREE_CONTEXT.md
- **Major version jump prevention** explicitly tested
- **Selective pinning** verified (only specified packages)
- **Version downgrade** capability tested
**Key Achievement**: Tests now verify actual PyPI behavior, not mocked expectations.

View File

@@ -0,0 +1,573 @@
# pip_util Test Scenarios - Test Data Specification
This document precisely defines all test scenarios, packages, versions, and expected behaviors used in the pip_util test suite.
## Table of Contents
1. [Test Scenario 1: Dependency Version Protection](#scenario-1-dependency-version-protection)
2. [Test Scenario 2: Complex Dependency Chain](#scenario-2-complex-dependency-chain)
3. [Test Scenario 3: Package Deletion and Restore](#scenario-3-package-deletion-and-restore)
4. [Test Scenario 4: Version Change and Restore](#scenario-4-version-change-and-restore)
5. [Test Scenario 5: Full Workflow Integration](#scenario-5-full-workflow-integration)
6. [Test Scenario 6: Pin Failure Retry](#scenario-6-pin-failure-retry)
---
## Scenario 1: Dependency Version Protection
**File**: `test_dependency_protection.py::test_dependency_version_protection_with_pin`
**Purpose**: Verify that `pin_dependencies` policy prevents dependency upgrades during package installation.
### Initial Environment State
```python
installed_packages = {
"urllib3": "1.26.15", # OLD stable version
"certifi": "2023.7.22", # OLD version
"charset-normalizer": "3.2.0" # OLD version
}
```
### Policy Configuration
```json
{
  "requests": {
    "apply_all_matches": [
      {
        "type": "pin_dependencies",
        "pinned_packages": ["urllib3", "certifi", "charset-normalizer"],
        "on_failure": "retry_without_pin"
      }
    ]
  }
}
```
### Action
```python
batch.install("requests")
```
### Expected pip Command
```bash
pip install requests urllib3==1.26.15 certifi==2023.7.22 charset-normalizer==3.2.0
```
### Expected Final State
```python
installed_packages = {
"urllib3": "1.26.15", # PROTECTED - stayed at old version
"certifi": "2023.7.22", # PROTECTED - stayed at old version
"charset-normalizer": "3.2.0", # PROTECTED - stayed at old version
"requests": "2.31.0" # NEWLY installed
}
```
### Without Pin (What Would Happen)
```python
# If pin_dependencies was NOT used:
installed_packages = {
"urllib3": "2.1.0", # UPGRADED to 2.x (breaking change)
"certifi": "2024.2.2", # UPGRADED to latest
"charset-normalizer": "3.3.2", # UPGRADED to latest
"requests": "2.31.0"
}
```
**Key Point**: Pin prevents `urllib3` from upgrading to 2.x, which has breaking API changes.
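How the policy becomes that exact pip command is easy to sketch (a hypothetical helper, not the actual PipBatch internals; current versions are taken from `pip freeze`):
```python
def build_pin_install_cmd(package, policy, installed):
    """Append '<dep>==<current>' for each pinned dependency that is present."""
    args = ["pip", "install", package]
    for rule in policy[package].get("apply_all_matches", []):
        if rule["type"] == "pin_dependencies":
            for dep in rule["pinned_packages"]:
                if dep in installed:
                    args.append(f"{dep}=={installed[dep]}")
    return args

installed = {"urllib3": "1.26.15", "certifi": "2023.7.22", "charset-normalizer": "3.2.0"}
policy = {"requests": {"apply_all_matches": [{
    "type": "pin_dependencies",
    "pinned_packages": ["urllib3", "certifi", "charset-normalizer"],
}]}}
print(" ".join(build_pin_install_cmd("requests", policy, installed)))
# pip install requests urllib3==1.26.15 certifi==2023.7.22 charset-normalizer==3.2.0
```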
---
## Scenario 2: Complex Dependency Chain
**File**: `test_dependency_protection.py::test_dependency_chain_with_click_colorama`
**Purpose**: Verify that `force_version` + `pin_dependencies` work together correctly.
### Initial Environment State
```python
installed_packages = {
"colorama": "0.4.6" # Existing dependency
}
```
### Policy Configuration
```json
{
  "click": {
    "apply_first_match": [
      {
        "condition": {
          "type": "installed",
          "package": "colorama",
          "spec": "<0.5.0"
        },
        "type": "force_version",
        "version": "8.1.3",
        "reason": "click 8.1.3 compatible with colorama <0.5"
      }
    ],
    "apply_all_matches": [
      {
        "type": "pin_dependencies",
        "pinned_packages": ["colorama"]
      }
    ]
  }
}
```
### Condition Evaluation
```python
# Check: colorama installed AND version < 0.5.0?
colorama_installed = True
colorama_version = "0.4.6" # 0.4.6 < 0.5.0 → True
# Result: Condition satisfied → apply force_version
```
### Action
```python
batch.install("click")
```
### Expected pip Command
```bash
pip install click==8.1.3 colorama==0.4.6
```
### Expected Final State
```python
installed_packages = {
"colorama": "0.4.6", # PINNED - version protected
"click": "8.1.3" # FORCED to specific version
}
```
**Key Point**:
- `force_version` forces click to install version 8.1.3
- `pin_dependencies` ensures colorama stays at 0.4.6
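The condition check above is a plain PEP 440 specifier test. A sketch of the evaluation, assuming the `packaging` library:
```python
from packaging.specifiers import SpecifierSet
from packaging.version import Version

installed_colorama = Version("0.4.6")

# Policy condition: colorama is installed AND its version matches "<0.5.0"
if installed_colorama in SpecifierSet("<0.5.0"):
    target = "click==8.1.3"  # force_version branch taken
print(target)  # click==8.1.3
```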
---
## Scenario 3: Package Deletion and Restore
**File**: `test_environment_recovery.py::test_package_deletion_and_restore`
**Purpose**: Verify that deleted packages can be restored to required versions.
### Initial Environment State
```python
installed_packages = {
"six": "1.16.0", # Critical package
"attrs": "23.1.0",
"packaging": "23.1"
}
```
### Policy Configuration
```json
{
  "six": {
    "restore": [
      {
        "target": "six",
        "version": "1.16.0",
        "reason": "six must be maintained at 1.16.0 for compatibility"
      }
    ]
  }
}
```
### Action Sequence
**Step 1**: Install package that removes six
```python
batch.install("python-dateutil")
```
**Step 1 Result**: six is DELETED
```python
installed_packages = {
# "six": "1.16.0", # ❌ DELETED by python-dateutil
"attrs": "23.1.0",
"packaging": "23.1",
"python-dateutil": "2.8.2" # ✅ NEW
}
```
**Step 2**: Restore deleted packages
```python
batch.ensure_installed()
```
**Step 2 Result**: six is RESTORED
```python
installed_packages = {
"six": "1.16.0", # ✅ RESTORED to required version
"attrs": "23.1.0",
"packaging": "23.1",
"python-dateutil": "2.8.2"
}
```
### Expected pip Commands
```bash
# Step 1: Install
pip install python-dateutil
# Step 2: Restore
pip install six==1.16.0
```
**Key Point**: `restore` policy automatically reinstalls deleted packages.
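A minimal sketch of the restore pass (a hypothetical helper, not the real `ensure_installed` implementation): compare the current `pip freeze` state against each restore rule and reinstall whatever is missing or at the wrong version, which also covers the downgrade case in Scenario 4 below.
```python
def restore_targets(policy, installed):
    """Return pip specs needed to satisfy all restore rules."""
    specs = []
    for rules in policy.values():
        for rule in rules.get("restore", []):
            target, required = rule["target"], rule["version"]
            # Missing package or wrong version -> reinstall exact version
            if installed.get(target) != required:
                specs.append(f"{target}=={required}")
    return specs

installed = {"attrs": "23.1.0", "packaging": "23.1", "python-dateutil": "2.8.2"}
policy = {"six": {"restore": [{"target": "six", "version": "1.16.0"}]}}
print(restore_targets(policy, installed))  # ['six==1.16.0']
```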
---
## Scenario 4: Version Change and Restore
**File**: `test_environment_recovery.py::test_version_change_and_restore`
**Purpose**: Verify that packages with changed versions can be restored to required versions.
### Initial Environment State
```python
installed_packages = {
"urllib3": "1.26.15", # OLD version (required)
"certifi": "2023.7.22"
}
```
### Policy Configuration
```json
{
  "urllib3": {
    "restore": [
      {
        "condition": {
          "type": "installed",
          "spec": "!=1.26.15"
        },
        "target": "urllib3",
        "version": "1.26.15",
        "reason": "urllib3 must be 1.26.15 for compatibility"
      }
    ]
  }
}
```
### Action Sequence
**Step 1**: Install package that upgrades urllib3
```python
batch.install("requests")
```
**Step 1 Result**: urllib3 is UPGRADED
```python
installed_packages = {
"urllib3": "2.1.0", # ❌ UPGRADED from 1.26.15 to 2.1.0
"certifi": "2023.7.22",
"requests": "2.31.0" # ✅ NEW
}
```
**Step 2**: Check restore condition
```python
# Condition: urllib3 installed AND version != 1.26.15?
urllib3_version = "2.1.0"
condition_met = (urllib3_version != "1.26.15") # True
# Result: Restore urllib3 to 1.26.15
```
**Step 2**: Restore to required version
```python
batch.ensure_installed()
```
**Step 2 Result**: urllib3 is DOWNGRADED
```python
installed_packages = {
"urllib3": "1.26.15", # ✅ RESTORED to required version
"certifi": "2023.7.22",
"requests": "2.31.0"
}
```
### Expected pip Commands
```bash
# Step 1: Install (causes upgrade)
pip install requests
# Step 2: Restore (downgrade)
pip install urllib3==1.26.15
```
**Key Point**: `restore` with condition can revert unwanted version changes.
---
## Scenario 5: Full Workflow Integration
**File**: `test_full_workflow_integration.py::test_uninstall_install_restore_workflow`
**Purpose**: Verify complete workflow: uninstall → install → restore.
### Initial Environment State
```python
installed_packages = {
"old-package": "1.0.0", # To be removed
"critical-package": "1.2.3", # To be restored
"urllib3": "1.26.15",
"certifi": "2023.7.22"
}
```
### Policy Configuration
```json
{
  "old-package": {
    "uninstall": [
      {
        "target": "old-package"
      }
    ]
  },
  "requests": {
    "apply_all_matches": [
      {
        "type": "pin_dependencies",
        "pinned_packages": ["urllib3", "certifi"]
      }
    ]
  },
  "critical-package": {
    "restore": [
      {
        "target": "critical-package",
        "version": "1.2.3"
      }
    ]
  }
}
```
### Action Sequence
**Step 1**: Remove old packages
```python
removed = batch.ensure_not_installed()
```
**Step 1 Result**:
```python
installed_packages = {
# "old-package": "1.0.0", # ❌ REMOVED
"critical-package": "1.2.3",
"urllib3": "1.26.15",
"certifi": "2023.7.22"
}
removed = ["old-package"]
```
**Step 2**: Install new package with pins
```python
batch.install("requests")
```
**Step 2 Result**:
```python
installed_packages = {
"critical-package": "1.2.3",
"urllib3": "1.26.15", # PINNED - no upgrade
"certifi": "2023.7.22", # PINNED - no upgrade
"requests": "2.31.0" # NEW
}
```
**Step 3**: Restore required packages
```python
restored = batch.ensure_installed()
```
**Step 3 Result**:
```python
installed_packages = {
"critical-package": "1.2.3", # Still present
"urllib3": "1.26.15",
"certifi": "2023.7.22",
"requests": "2.31.0"
}
restored = [] # Nothing to restore (all present)
```
### Expected pip Commands
```bash
# Step 1: Uninstall
pip uninstall -y old-package
# Step 2: Install with pins
pip install requests urllib3==1.26.15 certifi==2023.7.22
# Step 3: (No command - all packages present)
```
**Key Point**: Complete workflow demonstrates policy coordination.
---
## Scenario 6: Pin Failure Retry
**File**: `test_pin_failure_retry.py::test_pin_failure_retry_without_pin_succeeds`
**Purpose**: Verify automatic retry without pins when installation with pins fails.
### Initial Environment State
```python
installed_packages = {
"urllib3": "1.26.15",
"certifi": "2023.7.22"
}
```
### Policy Configuration
```json
{
  "requests": {
    "apply_all_matches": [
      {
        "type": "pin_dependencies",
        "pinned_packages": ["urllib3", "certifi"],
        "on_failure": "retry_without_pin"
      }
    ]
  }
}
```
### Action
```python
batch.install("requests")
```
### Attempt 1: Install WITH pins (FAILS)
```bash
# Command:
pip install requests urllib3==1.26.15 certifi==2023.7.22
# Result: FAILURE (dependency conflict)
# Error: "Package conflict: requests requires urllib3>=2.0"
```
### Attempt 2: Retry WITHOUT pins (SUCCEEDS)
```bash
# Command:
pip install requests
# Result: SUCCESS
```
**Final State**:
```python
installed_packages = {
"urllib3": "2.1.0", # UPGRADED (pins removed)
"certifi": "2024.2.2", # UPGRADED (pins removed)
"requests": "2.31.0" # INSTALLED
}
```
### Expected Behavior
1. **First attempt**: Install with pinned versions
2. **On failure**: Log warning about conflict
3. **Retry**: Install without pins
4. **Success**: Package installed, dependencies upgraded
**Key Point**: `retry_without_pin` provides automatic fallback for compatibility issues.
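A sketch of the fallback logic (illustrative shape only, not the actual PipBatch code): try the pinned install, and on failure either retry without pins or re-raise, depending on `on_failure`. The same branch covers Scenario 6b below.
```python
import subprocess

def install_with_pins(base_cmd, pins, on_failure="retry_without_pin"):
    try:
        subprocess.run(base_cmd + pins, check=True)
    except subprocess.CalledProcessError:
        if on_failure == "retry_without_pin":
            # Pins caused a resolver conflict: fall back to an unpinned install
            subprocess.run(base_cmd, check=True)
        else:  # on_failure == "fail"
            raise  # propagate to caller, environment left unchanged

install_with_pins(
    ["pip", "install", "requests"],
    ["urllib3==1.26.15", "certifi==2023.7.22"],
)
```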
---
## Scenario 6b: Pin Failure with Hard Fail
**File**: `test_pin_failure_retry.py::test_pin_failure_with_fail_raises_exception`
**Purpose**: Verify that `on_failure: fail` raises exception instead of retrying.
### Initial Environment State
```python
installed_packages = {
"urllib3": "1.26.15",
"certifi": "2023.7.22"
}
```
### Policy Configuration
```json
{
  "requests": {
    "apply_all_matches": [
      {
        "type": "pin_dependencies",
        "pinned_packages": ["urllib3", "certifi"],
        "on_failure": "fail"
      }
    ]
  }
}
```
### Action
```python
batch.install("requests")
```
### Attempt 1: Install WITH pins (FAILS)
```bash
# Command:
pip install requests urllib3==1.26.15 certifi==2023.7.22
# Result: FAILURE (dependency conflict)
# Error: "Package conflict: requests requires urllib3>=2.0"
```
### Expected Behavior
1. **First attempt**: Install with pinned versions
2. **On failure**: Raise `subprocess.CalledProcessError`
3. **No retry**: Exception propagates to caller
4. **No changes**: Environment unchanged
**Key Point**: `on_failure: fail` ensures strict version requirements.
---
## Summary Table: All Test Packages
| Package | Initial Version | Action | Final Version | Role |
|---------|----------------|--------|---------------|------|
| **urllib3** | 1.26.15 | Pin | 1.26.15 | Protected dependency |
| **certifi** | 2023.7.22 | Pin | 2023.7.22 | Protected dependency |
| **charset-normalizer** | 3.2.0 | Pin | 3.2.0 | Protected dependency |
| **requests** | (not installed) | Install | 2.31.0 | New package |
| **colorama** | 0.4.6 | Pin | 0.4.6 | Protected dependency |
| **click** | (not installed) | Force version | 8.1.3 | New package with forced version |
| **six** | 1.16.0 | Delete→Restore | 1.16.0 | Deleted then restored |
| **python-dateutil** | (not installed) | Install | 2.8.2 | Package that deletes six |
| **attrs** | 23.1.0 | No change | 23.1.0 | Bystander package |
| **packaging** | 23.1 | No change | 23.1 | Bystander package |
## Policy Types Summary
| Policy Type | Purpose | Example |
|-------------|---------|---------|
| **pin_dependencies** | Prevent dependency upgrades | Keep urllib3 at 1.26.15 |
| **force_version** | Force specific package version | Install click==8.1.3 |
| **restore** | Reinstall deleted/changed packages | Restore six to 1.16.0 |
| **uninstall** | Remove obsolete packages | Remove old-package |
| **on_failure** | Handle installation failures | retry_without_pin or fail |
## Test Data Design Principles
1. **Lightweight Packages**: All packages are <200KB for fast testing
2. **Real Dependencies**: Use actual PyPI package relationships
3. **Version Realism**: Use real version numbers from PyPI
4. **Clear Scenarios**: Each test demonstrates one clear behavior
5. **Reproducible**: The isolated test venv ensures consistent behavior across environments

View File

@@ -0,0 +1,261 @@
#!/usr/bin/env python3
"""
Dependency Tree Analyzer for pip_util Tests

Usage:
    python analyze_dependencies.py [package]
    python analyze_dependencies.py --all
    python analyze_dependencies.py --env

Examples:
    python analyze_dependencies.py requests
    python analyze_dependencies.py python-dateutil
    python analyze_dependencies.py --all
"""
import subprocess
import sys
from typing import Dict, List, Tuple, Optional
from pathlib import Path

PIP = "./test_venv/bin/pip"


def check_venv():
    """Check if test venv exists"""
    if not Path(PIP).exists():
        print("❌ Test venv not found!")
        print("   Run: ./setup_test_env.sh")
        sys.exit(1)


def get_installed_packages() -> Dict[str, str]:
    """Get currently installed packages"""
    result = subprocess.run(
        [PIP, "freeze"],
        capture_output=True,
        text=True,
        check=True
    )
    packages = {}
    for line in result.stdout.strip().split('\n'):
        if '==' in line:
            pkg, ver = line.split('==', 1)
            packages[pkg] = ver
    return packages


def analyze_package_dry_run(
    package: str,
    constraints: Optional[List[str]] = None
) -> Tuple[List[Tuple[str, str]], Dict[str, Tuple[str, str]]]:
    """
    Analyze what would be installed with --dry-run

    Returns:
        - List of (package_name, version) tuples in install order
        - Dict of package -> (current_version, new_version) for upgrades
    """
    cmd = [PIP, "install", "--dry-run", "--ignore-installed", package]
    if constraints:
        cmd.extend(constraints)
    result = subprocess.run(cmd, capture_output=True, text=True)

    # Parse "Would install" line
    would_install = []
    for line in result.stdout.split('\n'):
        if 'Would install' in line:
            packages_str = line.split('Would install')[1].strip()
            for pkg_str in packages_str.split():
                # rsplit keeps hyphenated names intact (e.g. charset-normalizer-3.4.3)
                parts = pkg_str.rsplit('-', 1)
                if len(parts) == 2:
                    would_install.append((parts[0], parts[1]))

    # Check against current installed
    installed = get_installed_packages()
    changes = {}
    for pkg, new_ver in would_install:
        if pkg in installed:
            old_ver = installed[pkg]
            if old_ver != new_ver:
                changes[pkg] = (old_ver, new_ver)
    return would_install, changes


def get_available_versions(package: str, limit: int = 10) -> Tuple[Optional[str], List[str]]:
    """
    Get available versions from PyPI

    Returns:
        - Latest version (None if not found)
        - List of available versions (limited)
    """
    result = subprocess.run(
        [PIP, "index", "versions", package],
        capture_output=True,
        text=True
    )
    latest = None
    versions = []
    for line in result.stdout.split('\n'):
        if 'LATEST:' in line:
            latest = line.split('LATEST:')[1].strip()
        elif 'Available versions:' in line:
            versions_str = line.split('Available versions:')[1].strip()
            versions = [v.strip() for v in versions_str.split(',')[:limit]]
    return latest, versions


def print_package_analysis(package: str, with_pin: bool = False):
    """Print detailed analysis for a package"""
    print(f"\n{'='*80}")
    print(f"Package: {package}")
    print(f"{'='*80}")

    installed = get_installed_packages()

    # Get latest version
    latest, available = get_available_versions(package)
    if latest:
        print(f"\n📦 Latest version: {latest}")
        print(f"📋 Available versions: {', '.join(available[:5])}")

    # Scenario 1: Without constraints
    print("\n🔍 Scenario A: Install without constraints")
    print(f"   Command: pip install {package}")
    would_install, changes = analyze_package_dry_run(package)
    if would_install:
        print(f"\n   Would install {len(would_install)} packages:")
        for pkg, ver in would_install:
            if pkg in changes:
                old_ver, new_ver = changes[pkg]
                print(f"   • {pkg:25} {old_ver:15} → {new_ver:15} ⚠️ UPGRADE")
            elif pkg in installed:
                print(f"   • {pkg:25} {ver:15} (already installed)")
            else:
                print(f"   • {pkg:25} {ver:15} ✨ NEW")

    # Scenario 2: With pin constraints (if dependencies exist)
    dependencies = [pkg for pkg, _ in would_install if pkg != package]
    if dependencies and with_pin:
        print("\n🔍 Scenario B: Install with pin constraints")
        # Create pin constraints for all current dependencies
        constraints = []
        pinned_names = set()
        for dep in dependencies:
            if dep in installed:
                constraints.append(f"{dep}=={installed[dep]}")
                pinned_names.add(dep)
        if constraints:
            print(f"   Command: pip install {package} {' '.join(constraints)}")
            would_install_pinned, changes_pinned = analyze_package_dry_run(
                package, constraints
            )
            print(f"\n   Would install {len(would_install_pinned)} packages:")
            for pkg, ver in would_install_pinned:
                # Compare against pinned names, not the "name==version" strings
                if pkg in pinned_names:
                    print(f"   • {pkg:25} {ver:15} 📌 PINNED")
                elif pkg in installed:
                    print(f"   • {pkg:25} {ver:15} (no change)")
                else:
                    print(f"   • {pkg:25} {ver:15} ✨ NEW")

            # Show what was prevented
            prevented = set(changes.keys()) - set(changes_pinned.keys())
            if prevented:
                print(f"\n   ✅ Pin prevented {len(prevented)} upgrade(s):")
                for pkg in prevented:
                    old_ver, new_ver = changes[pkg]
                    print(f"   • {pkg:25} {old_ver:15} ❌→ {new_ver}")


def analyze_all_test_packages():
    """Analyze all packages used in tests"""
    print("="*80)
    print("ANALYZING ALL TEST PACKAGES")
    print("="*80)
    test_packages = [
        ("requests", True),
        ("python-dateutil", True),
    ]
    for package, with_pin in test_packages:
        print_package_analysis(package, with_pin)
    print(f"\n{'='*80}")
    print("ANALYSIS COMPLETE")
    print(f"{'='*80}")


def print_current_environment():
    """Print current test environment"""
    print("="*80)
    print("CURRENT TEST ENVIRONMENT")
    print("="*80)
    installed = get_installed_packages()
    print(f"\nTotal packages: {len(installed)}\n")

    # Group by category
    test_packages = ["urllib3", "certifi", "charset-normalizer", "six", "attrs", "packaging"]
    framework = ["pytest", "iniconfig", "pluggy", "Pygments"]

    print("Test packages:")
    for pkg in test_packages:
        if pkg in installed:
            print(f"  {pkg:25} {installed[pkg]}")

    print("\nTest framework:")
    for pkg in framework:
        if pkg in installed:
            print(f"  {pkg:25} {installed[pkg]}")

    other = set(installed.keys()) - set(test_packages) - set(framework)
    if other:
        print("\nOther packages:")
        for pkg in sorted(other):
            print(f"  {pkg:25} {installed[pkg]}")


def main():
    """Main entry point"""
    check_venv()
    if len(sys.argv) == 1:
        print("Usage: python analyze_dependencies.py [package|--all|--env]")
        print("\nExamples:")
        print("  python analyze_dependencies.py requests")
        print("  python analyze_dependencies.py --all")
        print("  python analyze_dependencies.py --env")
        sys.exit(0)

    command = sys.argv[1]
    if command == "--all":
        analyze_all_test_packages()
    elif command == "--env":
        print_current_environment()
    elif command.startswith("--"):
        print(f"Unknown option: {command}")
        sys.exit(1)
    else:
        # Analyze specific package
        print_package_analysis(command, with_pin=True)


if __name__ == "__main__":
    main()

View File

@@ -0,0 +1,387 @@
"""
pytest configuration and shared fixtures for pip_util.py tests
This file provides common fixtures and configuration for all tests.
Uses real isolated venv for actual pip operations.
"""
import json
import subprocess
import sys
from pathlib import Path
from typing import Dict, List
from unittest.mock import MagicMock
import pytest
# =============================================================================
# Test venv Management
# =============================================================================
@pytest.fixture(scope="session")
def test_venv_path():
"""
Get path to test venv (must be created by setup_test_env.sh)
Returns:
Path: Path to test venv directory
"""
venv_path = Path(__file__).parent / "test_venv"
if not venv_path.exists():
pytest.fail(
f"Test venv not found at {venv_path}.\n"
"Please run: ./setup_test_env.sh"
)
return venv_path
@pytest.fixture(scope="session")
def test_pip_cmd(test_venv_path):
"""
Get pip command for test venv
Returns:
List[str]: pip command prefix for subprocess
"""
pip_path = test_venv_path / "bin" / "pip"
if not pip_path.exists():
pytest.fail(f"pip not found at {pip_path}")
return [str(pip_path)]
@pytest.fixture
def reset_test_venv(test_pip_cmd):
"""
Reset test venv to initial state before each test
This fixture:
1. Records current installed packages
2. Yields control to test
3. Restores original packages after test
"""
# Get initial state
result = subprocess.run(
test_pip_cmd + ["freeze"],
capture_output=True,
text=True,
check=True
)
initial_packages = result.stdout.strip()
yield
# Restore initial state
# Uninstall everything except pip, setuptools, wheel
result = subprocess.run(
test_pip_cmd + ["freeze"],
capture_output=True,
text=True,
check=True
)
current_packages = result.stdout.strip()
if current_packages:
packages_to_remove = []
for line in current_packages.split('\n'):
if line and '==' in line:
pkg = line.split('==')[0].lower()
if pkg not in ['pip', 'setuptools', 'wheel']:
packages_to_remove.append(pkg)
if packages_to_remove:
subprocess.run(
test_pip_cmd + ["uninstall", "-y"] + packages_to_remove,
capture_output=True,
check=False # Don't fail if package doesn't exist
)
# Reinstall initial packages
if initial_packages:
# Create temporary requirements file
import tempfile
with tempfile.NamedTemporaryFile(mode='w', suffix='.txt', delete=False) as f:
f.write(initial_packages)
temp_req = f.name
try:
subprocess.run(
test_pip_cmd + ["install", "-r", temp_req],
capture_output=True,
check=True
)
finally:
Path(temp_req).unlink()
# =============================================================================
# Directory and Path Fixtures
# =============================================================================
@pytest.fixture
def temp_policy_dir(tmp_path):
"""
Create temporary directory for policy files
Returns:
Path: Temporary directory for storing test policy files
"""
policy_dir = tmp_path / "policies"
policy_dir.mkdir()
return policy_dir
@pytest.fixture
def temp_user_policy_dir(tmp_path):
"""
Create temporary directory for user policy files
Returns:
Path: Temporary directory for storing user policy files
"""
user_dir = tmp_path / "user_policies"
user_dir.mkdir()
return user_dir
# =============================================================================
# Module Setup and Mocking
# =============================================================================
@pytest.fixture(autouse=True)
def setup_pip_util(monkeypatch, test_pip_cmd):
"""
Setup pip_util module for testing with real venv
This fixture:
1. Mocks comfy module (not needed for tests)
2. Adds comfyui_manager to path
3. Patches make_pip_cmd to use test venv
4. Resets policy cache
"""
# Mock comfy module before importing anything
comfy_mock = MagicMock()
cli_args_mock = MagicMock()
cli_args_mock.args = MagicMock()
comfy_mock.cli_args = cli_args_mock
sys.modules['comfy'] = comfy_mock
sys.modules['comfy.cli_args'] = cli_args_mock
# Add comfyui_manager parent to path so relative imports work
comfyui_manager_path = str(Path(__file__).parent.parent.parent.parent)
if comfyui_manager_path not in sys.path:
sys.path.insert(0, comfyui_manager_path)
# Import pip_util
from comfyui_manager.common import pip_util
# Patch make_pip_cmd to use test venv pip
def make_test_pip_cmd(args: List[str]) -> List[str]:
return test_pip_cmd + args
monkeypatch.setattr(
pip_util.manager_util,
"make_pip_cmd",
make_test_pip_cmd
)
# Reset policy cache
pip_util._pip_policy_cache = None
yield
# Cleanup
pip_util._pip_policy_cache = None
@pytest.fixture
def mock_manager_util(monkeypatch, temp_policy_dir):
"""
Mock manager_util module paths
Args:
monkeypatch: pytest monkeypatch fixture
temp_policy_dir: Temporary policy directory
"""
from comfyui_manager.common import pip_util
monkeypatch.setattr(
pip_util.manager_util,
"comfyui_manager_path",
str(temp_policy_dir)
)
@pytest.fixture
def mock_context(monkeypatch, temp_user_policy_dir):
"""
Mock context module paths
Args:
monkeypatch: pytest monkeypatch fixture
temp_user_policy_dir: Temporary user policy directory
"""
from comfyui_manager.common import pip_util
monkeypatch.setattr(
pip_util.context,
"manager_files_path",
str(temp_user_policy_dir)
)
# =============================================================================
# Platform Mocking Fixtures
# =============================================================================
@pytest.fixture
def mock_platform_linux(monkeypatch):
"""Mock platform.system() to return 'Linux'"""
monkeypatch.setattr("platform.system", lambda: "Linux")
@pytest.fixture
def mock_platform_windows(monkeypatch):
"""Mock platform.system() to return 'Windows'"""
monkeypatch.setattr("platform.system", lambda: "Windows")
@pytest.fixture
def mock_platform_darwin(monkeypatch):
"""Mock platform.system() to return 'Darwin' (macOS)"""
monkeypatch.setattr("platform.system", lambda: "Darwin")
@pytest.fixture
def mock_torch_cuda_available(monkeypatch):
"""Mock torch.cuda.is_available() to return True"""
class MockCuda:
@staticmethod
def is_available():
return True
class MockTorch:
cuda = MockCuda()
import sys
monkeypatch.setitem(sys.modules, "torch", MockTorch())
@pytest.fixture
def mock_torch_cuda_unavailable(monkeypatch):
"""Mock torch.cuda.is_available() to return False"""
class MockCuda:
@staticmethod
def is_available():
return False
class MockTorch:
cuda = MockCuda()
import sys
monkeypatch.setitem(sys.modules, "torch", MockTorch())
@pytest.fixture
def mock_torch_not_installed(monkeypatch):
"""Mock torch as not installed (ImportError)"""
import sys
if "torch" in sys.modules:
monkeypatch.delitem(sys.modules, "torch")
# =============================================================================
# Helper Functions
# =============================================================================
@pytest.fixture
def get_installed_packages(test_pip_cmd):
"""
Helper to get currently installed packages in test venv
Returns:
Callable that returns Dict[str, str] of installed packages
"""
def _get_installed() -> Dict[str, str]:
result = subprocess.run(
test_pip_cmd + ["freeze"],
capture_output=True,
text=True,
check=True
)
packages = {}
for line in result.stdout.strip().split('\n'):
if line and '==' in line:
pkg, ver = line.split('==', 1)
packages[pkg] = ver
return packages
return _get_installed
@pytest.fixture
def install_packages(test_pip_cmd):
"""
Helper to install packages in test venv
Returns:
Callable that installs packages
"""
def _install(*packages):
subprocess.run(
test_pip_cmd + ["install"] + list(packages),
capture_output=True,
check=True
)
return _install
@pytest.fixture
def uninstall_packages(test_pip_cmd):
"""
Helper to uninstall packages in test venv
Returns:
Callable that uninstalls packages
"""
def _uninstall(*packages):
subprocess.run(
test_pip_cmd + ["uninstall", "-y"] + list(packages),
capture_output=True,
check=False # Don't fail if package doesn't exist
)
return _uninstall
# =============================================================================
# Test Data Factories
# =============================================================================
@pytest.fixture
def make_policy():
"""
Factory fixture for creating policy dictionaries
Returns:
Callable that creates policy dict from parameters
"""
def _make_policy(
package_name: str,
policy_type: str,
section: str = "apply_first_match",
**kwargs
) -> Dict:
policy_item = {"type": policy_type}
policy_item.update(kwargs)
return {
package_name: {
section: [policy_item]
}
}
return _make_policy
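# Illustrative usage of the factory (a hedged sketch; "test_make_policy_example"
# is a hypothetical name, not part of the suite). It shows how one call expands
# into the nested policy layout the integration tests write to pip-policy.json:
def test_make_policy_example(make_policy):
    policy = make_policy(
        "requests",
        "pin_dependencies",
        section="apply_all_matches",
        pinned_packages=["urllib3"],
    )
    assert policy == {
        "requests": {
            "apply_all_matches": [
                {"type": "pin_dependencies", "pinned_packages": ["urllib3"]}
            ]
        }
    }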


@@ -0,0 +1,52 @@
[pytest]
# pytest configuration for pip_util.py tests
# Test discovery
testpaths = .
# Markers
markers =
unit: Unit tests for individual functions
integration: Integration tests for workflows
e2e: End-to-end tests for complete scenarios
# Output options - extend global config
addopts =
# Coverage options for pip_util
--cov=../../../comfyui_manager/common/pip_util
--cov-report=html:htmlcov_pip_util
--cov-report=term-missing
--cov-report=xml:coverage_pip_util.xml
# Coverage fail threshold
--cov-fail-under=80
# Coverage configuration
[coverage:run]
source = ../../../comfyui_manager/common
omit =
*/tests/*
*/test_*.py
*/__pycache__/*
*/test_venv/*
[coverage:report]
precision = 2
show_missing = True
skip_covered = False
exclude_lines =
# Standard pragma
pragma: no cover
# Don't complain about missing debug code
def __repr__
# Don't complain if tests don't hit defensive assertion code
raise AssertionError
raise NotImplementedError
# Don't complain if non-runnable code isn't run
if __name__ == .__main__.:
# Don't complain about abstract methods
@abstractmethod
[coverage:html]
directory = htmlcov
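# Example invocations (a hedged note using the markers defined above with
# pytest's standard -m flag; not part of the original config):
#   pytest -m unit            # run only unit tests
#   pytest -m integration     # run only integration tests
#   pytest -m "not e2e"       # skip end-to-end tests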


@@ -0,0 +1,20 @@
# Base packages for pip_util integration tests
# These packages are installed initially to test various scenarios
# All versions verified using: pip install --dry-run --ignore-installed
# Scenario 1: Dependency Version Protection (requests + urllib3)
# Purpose: Pin prevents urllib3 1.26.15 → 2.5.0 major upgrade
urllib3==1.26.15 # OLD stable version (prevent 2.x upgrade)
certifi==2023.7.22 # OLD version (prevent 2025.x upgrade)
charset-normalizer==3.2.0 # OLD version (prevent 3.4.x upgrade)
# Note: idna is NOT pre-installed (will be added by requests)
# Scenario 2: Package Deletion and Restore (six)
# Purpose: Restore policy reinstalls deleted packages
six==1.16.0 # Will be deleted and restored to 1.16.0
attrs==23.1.0 # Bystander package
packaging==23.1 # Bystander package (exact version string is 23.1, not 23.1.0; prevents upgrade to 25.0)
# Scenario 3: Version Change and Restore (urllib3)
# Purpose: Restore policy reverts version changes
# urllib3==1.26.15 (same as Scenario 1; will be upgraded to 2.x then restored)


@@ -0,0 +1,47 @@
#!/bin/bash
# Setup script for pip_util integration tests
# Creates a test venv and installs base packages
set -e # Exit on error
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
VENV_DIR="$SCRIPT_DIR/test_venv"
echo "Setting up test environment for pip_util integration tests..."
# Remove existing venv if present
if [ -d "$VENV_DIR" ]; then
echo "Removing existing test venv..."
rm -rf "$VENV_DIR"
fi
# Create new venv
echo "Creating test venv at $VENV_DIR..."
python3 -m venv "$VENV_DIR"
# Activate venv
source "$VENV_DIR/bin/activate"
# Upgrade pip
echo "Upgrading pip..."
pip install --upgrade pip
# Install pytest
echo "Installing pytest..."
pip install pytest
# Install base test packages
echo "Installing base test packages..."
pip install -r "$SCRIPT_DIR/requirements-test-base.txt"
echo ""
echo "Test environment setup complete!"
echo "Installed packages:"
pip freeze
echo ""
echo "To activate the test venv, run:"
echo " source $VENV_DIR/bin/activate"
echo ""
echo "To run tests:"
echo " pytest -v"


@@ -0,0 +1,271 @@
"""
Test dependency version protection with pin (Priority 1)
Tests that existing dependency versions are protected by pin_dependencies policy
"""
import json
from pathlib import Path
import pytest
@pytest.fixture
def pin_policy(temp_policy_dir):
"""Create policy with pin_dependencies for lightweight real packages"""
policy_content = {
"requests": {
"apply_all_matches": [
{
"type": "pin_dependencies",
"pinned_packages": ["urllib3", "certifi", "charset-normalizer"],
"on_failure": "retry_without_pin"
}
]
}
}
policy_file = temp_policy_dir / "pip-policy.json"
policy_file.write_text(json.dumps(policy_content, indent=2))
return policy_file
@pytest.mark.integration
def test_dependency_version_protection_with_pin(
pin_policy,
mock_manager_util,
mock_context,
reset_test_venv,
get_installed_packages
):
"""
Test existing dependency versions are protected by pin
Priority: 1 (Essential)
Purpose:
Verify that when installing a package that would normally upgrade
dependencies, the pin_dependencies policy protects existing versions.
Based on DEPENDENCY_TREE_CONTEXT.md:
Without pin: urllib3 1.26.15 → 2.5.0 (MAJOR upgrade)
With pin: urllib3 stays at 1.26.15 (protected)
"""
from comfyui_manager.common.pip_util import PipBatch
# Verify initial packages are installed (from requirements-test-base.txt)
initial = get_installed_packages()
assert "urllib3" in initial
assert "certifi" in initial
assert "charset-normalizer" in initial
# Record initial versions (from DEPENDENCY_TREE_CONTEXT.md)
initial_urllib3 = initial["urllib3"]
initial_certifi = initial["certifi"]
initial_charset = initial["charset-normalizer"]
# Verify expected OLD versions
assert initial_urllib3 == "1.26.15", f"Expected urllib3==1.26.15, got {initial_urllib3}"
assert initial_certifi == "2023.7.22", f"Expected certifi==2023.7.22, got {initial_certifi}"
assert initial_charset == "3.2.0", f"Expected charset-normalizer==3.2.0, got {initial_charset}"
# Verify idna is NOT installed initially
assert "idna" not in initial, "idna should not be pre-installed"
with PipBatch() as batch:
result = batch.install("requests")
final_packages = batch._get_installed_packages()
# Verify installation succeeded
assert result is True
assert "requests" in final_packages
# Verify versions were maintained (not upgraded to latest)
# Without pin, these would upgrade to: urllib3==2.5.0, certifi==2025.8.3, charset-normalizer==3.4.3
assert final_packages["urllib3"] == "1.26.15", "urllib3 should remain at 1.26.15 (prevented 2.x upgrade)"
assert final_packages["certifi"] == "2023.7.22", "certifi should remain at 2023.7.22 (prevented 2025.x upgrade)"
assert final_packages["charset-normalizer"] == "3.2.0", "charset-normalizer should remain at 3.2.0"
# Verify new dependency was added (idna is NOT pinned, so it gets installed)
assert "idna" in final_packages, "idna should be installed as new dependency"
assert final_packages["idna"] == "3.10", f"Expected idna==3.10, got {final_packages['idna']}"
# Verify requests was installed at expected version
assert final_packages["requests"] == "2.32.5", f"Expected requests==2.32.5, got {final_packages['requests']}"
@pytest.fixture
def python_dateutil_policy(temp_policy_dir):
"""Create policy for python-dateutil with six pinning"""
policy_content = {
"python-dateutil": {
"apply_all_matches": [
{
"type": "pin_dependencies",
"pinned_packages": ["six"],
"reason": "Protect six from upgrading"
}
]
}
}
policy_file = temp_policy_dir / "pip-policy.json"
policy_file.write_text(json.dumps(policy_content, indent=2))
return policy_file
@pytest.mark.integration
def test_dependency_chain_with_six_pin(
python_dateutil_policy,
mock_manager_util,
mock_context,
reset_test_venv,
get_installed_packages
):
"""
Test python-dateutil + six dependency chain with pin
Priority: 2 (Important)
Purpose:
Verify that pin_dependencies protects actual dependencies
(six is a real dependency of python-dateutil).
Based on DEPENDENCY_TREE_CONTEXT.md:
python-dateutil depends on six>=1.5
Without pin: six 1.16.0 → 1.17.0
With pin: six stays at 1.16.0 (protected)
"""
from comfyui_manager.common.pip_util import PipBatch
# Verify six is installed
initial = get_installed_packages()
assert "six" in initial
initial_six = initial["six"]
# Verify expected OLD version
assert initial_six == "1.16.0", f"Expected six==1.16.0, got {initial_six}"
with PipBatch() as batch:
result = batch.install("python-dateutil")
final_packages = batch._get_installed_packages()
# Verify installation succeeded
assert result is True
# Verify final versions
assert "python-dateutil" in final_packages
assert final_packages["python-dateutil"] == "2.9.0.post0", f"Expected python-dateutil==2.9.0.post0"
# Verify six was NOT upgraded (without pin, would upgrade to 1.17.0)
assert "six" in final_packages
assert final_packages["six"] == "1.16.0", "six should remain at 1.16.0 (prevented 1.17.0 upgrade)"
@pytest.mark.integration
def test_pin_only_affects_specified_packages(
pin_policy,
mock_manager_util,
mock_context,
reset_test_venv,
get_installed_packages
):
"""
Test that pin only affects specified packages, not all dependencies
Priority: 1 (Essential)
Purpose:
Verify that idna (new dependency) is installed even though
other dependencies are pinned. This tests that pin is selective,
not global.
Based on DEPENDENCY_TREE_CONTEXT.md:
idna is a NEW dependency (not in initial environment)
Pin only affects: urllib3, certifi, charset-normalizer
idna should be installed at latest version (3.10)
"""
from comfyui_manager.common.pip_util import PipBatch
# Verify initial state
initial = get_installed_packages()
assert "idna" not in initial, "idna should not be pre-installed"
assert "requests" not in initial, "requests should not be pre-installed"
with PipBatch() as batch:
result = batch.install("requests")
final_packages = batch._get_installed_packages()
# Verify installation succeeded
assert result is True
# Verify idna was installed (NOT pinned, so gets latest)
assert "idna" in final_packages, "idna should be installed as new dependency"
assert final_packages["idna"] == "3.10", "idna should be at latest version 3.10 (not pinned)"
# Verify requests was installed
assert "requests" in final_packages
assert final_packages["requests"] == "2.32.5"
@pytest.mark.integration
def test_major_version_jump_prevention(
pin_policy,
mock_manager_util,
mock_context,
reset_test_venv,
get_installed_packages,
install_packages,
uninstall_packages
):
"""
Test that pin prevents MAJOR version jumps (breaking changes)
Priority: 1 (Essential)
Purpose:
Verify that pin prevents urllib3 1.x → 2.x major upgrade.
This is the most important test because urllib3 2.0 has
breaking API changes.
Based on DEPENDENCY_TREE_CONTEXT.md:
urllib3 1.26.15 → 2.5.0 is a MAJOR version jump
urllib3 2.0 removed deprecated APIs
requests accepts both: urllib3<3,>=1.21.1
"""
from comfyui_manager.common.pip_util import PipBatch
# Verify initial urllib3 version
initial = get_installed_packages()
assert initial["urllib3"] == "1.26.15", "Expected urllib3==1.26.15"
# First, test WITHOUT pin to verify urllib3 would upgrade to 2.x
# (This simulates what would happen without our protection)
uninstall_packages("urllib3", "certifi", "charset-normalizer")
install_packages("requests")
without_pin = get_installed_packages()
# Verify urllib3 was upgraded to 2.x without pin
assert "urllib3" in without_pin
assert without_pin["urllib3"].startswith("2."), \
f"Without pin, urllib3 should upgrade to 2.x, got {without_pin['urllib3']}"
# Now reset and test WITH pin
uninstall_packages("requests", "urllib3", "certifi", "charset-normalizer", "idna")
install_packages("urllib3==1.26.15", "certifi==2023.7.22", "charset-normalizer==3.2.0")
with PipBatch() as batch:
result = batch.install("requests")
final_packages = batch._get_installed_packages()
# Verify installation succeeded
assert result is True
# Verify urllib3 stayed at 1.x (prevented major version jump)
assert final_packages["urllib3"] == "1.26.15", \
"Pin should prevent urllib3 from upgrading to 2.x (breaking changes)"
# Verify it's specifically 1.x, not 2.x
assert final_packages["urllib3"].startswith("1."), \
f"urllib3 should remain at 1.x series, got {final_packages['urllib3']}"


@@ -0,0 +1,279 @@
"""
Edge cases and boundary conditions (Priority 3)
Tests empty policies, malformed JSON, and edge cases
"""
import json
import subprocess
from pathlib import Path
import pytest
@pytest.mark.unit
def test_empty_base_policy_uses_default_installation(
empty_policy_file,
mock_manager_util,
mock_context
):
"""
Test default installation with empty policy
Priority: 3 (Recommended)
Purpose:
Verify that when policy is empty, the system falls back
to default installation behavior.
"""
# Path setup handled by conftest.py
from comfyui_manager.common.pip_util import get_pip_policy
policy = get_pip_policy()
assert policy == {}
@pytest.fixture
def malformed_policy_file(temp_policy_dir):
"""Create malformed JSON policy file"""
policy_file = temp_policy_dir / "pip-policy.json"
policy_file.write_text("{invalid json content")
return policy_file
@pytest.mark.unit
def test_json_parse_error_fallback_to_empty(
malformed_policy_file,
mock_manager_util,
mock_context,
capture_logs
):
"""
Test empty dict on JSON parse error
Priority: 3 (Recommended)
Purpose:
Verify that malformed JSON results in empty policy
with appropriate error logging.
"""
# Path setup handled by conftest.py
from comfyui_manager.common.pip_util import get_pip_policy
policy = get_pip_policy()
assert policy == {}
# Should have error log about parsing failure
assert any("parse" in record.message.lower() for record in capture_logs.records)
@pytest.mark.unit
def test_unknown_condition_type_returns_false(
mock_manager_util,
mock_context,
capture_logs
):
"""
Test unknown condition type returns False
Priority: 3 (Recommended)
Purpose:
Verify that unknown condition types are handled gracefully
by returning False with a warning.
"""
# Path setup handled by conftest.py
from comfyui_manager.common.pip_util import PipBatch
batch = PipBatch()
condition = {"type": "unknown_type", "some_field": "value"}
result = batch._evaluate_condition(condition, "pkg", {})
assert result is False
# Should have warning about unknown type
assert any("unknown" in record.message.lower() for record in capture_logs.records)
@pytest.fixture
def self_reference_policy(temp_policy_dir):
"""Create policy with self-reference"""
policy_content = {
"critical-package": {
"restore": [
{
"condition": {
"type": "installed",
"spec": "!=1.2.3"
},
"target": "critical-package",
"version": "1.2.3"
}
]
}
}
policy_file = temp_policy_dir / "pip-policy.json"
policy_file.write_text(json.dumps(policy_content, indent=2))
return policy_file
@pytest.fixture
def mock_self_reference_subprocess(monkeypatch):
"""Mock subprocess for self-reference test"""
call_sequence = []
installed_packages = {
"critical-package": "1.2.2"
}
def mock_run(cmd, **kwargs):
call_sequence.append(cmd)
# pip freeze
if "freeze" in cmd:
output = "\n".join([f"{pkg}=={ver}" for pkg, ver in installed_packages.items()])
return subprocess.CompletedProcess(cmd, 0, output, "")
# pip install
if "install" in cmd and "critical-package==1.2.3" in cmd:
installed_packages["critical-package"] = "1.2.3"
return subprocess.CompletedProcess(cmd, 0, "", "")
return subprocess.CompletedProcess(cmd, 0, "", "")
monkeypatch.setattr("subprocess.run", mock_run)
return call_sequence, installed_packages
@pytest.mark.integration
def test_restore_self_version_check(
self_reference_policy,
mock_manager_util,
mock_context,
mock_self_reference_subprocess
):
"""
Test restore policy checking its own version
Priority: 3 (Recommended)
Purpose:
Verify that when a condition omits the package field,
it correctly defaults to checking the package itself.
"""
# Path setup handled by conftest.py
from comfyui_manager.common.pip_util import PipBatch
call_sequence, installed_packages = mock_self_reference_subprocess
with PipBatch() as batch:
restored = batch.ensure_installed()
final = batch._get_installed_packages()
# Condition should evaluate with self-reference
# "1.2.2" != "1.2.3" → True
assert "critical-package" in restored
assert final["critical-package"] == "1.2.3"
@pytest.fixture
def partial_failure_policy(temp_policy_dir):
"""Create policy for multiple uninstalls"""
policy_content = {
"pkg-a": {
"uninstall": [{"target": "old-pkg-1"}]
},
"pkg-b": {
"uninstall": [{"target": "old-pkg-2"}]
},
"pkg-c": {
"uninstall": [{"target": "old-pkg-3"}]
}
}
policy_file = temp_policy_dir / "pip-policy.json"
policy_file.write_text(json.dumps(policy_content, indent=2))
return policy_file
@pytest.fixture
def mock_partial_failure_subprocess(monkeypatch):
"""Mock subprocess with one failure"""
call_sequence = []
installed_packages = {
"old-pkg-1": "1.0",
"old-pkg-2": "1.0",
"old-pkg-3": "1.0"
}
def mock_run(cmd, **kwargs):
call_sequence.append(cmd)
# pip freeze
if "freeze" in cmd:
output = "\n".join([f"{pkg}=={ver}" for pkg, ver in installed_packages.items()])
return subprocess.CompletedProcess(cmd, 0, output, "")
# pip uninstall
if "uninstall" in cmd:
if "old-pkg-2" in cmd:
# Fail on pkg-2
raise subprocess.CalledProcessError(1, cmd, "", "Uninstall failed")
else:
# Success on others
for pkg in ["old-pkg-1", "old-pkg-3"]:
if pkg in cmd:
installed_packages.pop(pkg, None)
return subprocess.CompletedProcess(cmd, 0, "", "")
return subprocess.CompletedProcess(cmd, 0, "", "")
monkeypatch.setattr("subprocess.run", mock_run)
return call_sequence, installed_packages
@pytest.mark.integration
def test_ensure_not_installed_continues_on_individual_failure(
partial_failure_policy,
mock_manager_util,
mock_context,
mock_partial_failure_subprocess,
capture_logs
):
"""
Test partial failure handling
Priority: 2 (Important)
Purpose:
Verify that when one package removal fails, the system
continues processing other packages.
"""
# Path setup handled by conftest.py
from comfyui_manager.common.pip_util import PipBatch
call_sequence, installed_packages = mock_partial_failure_subprocess
with PipBatch() as batch:
removed = batch.ensure_not_installed()
# Verify partial success
assert "old-pkg-1" in removed
assert "old-pkg-3" in removed
assert "old-pkg-2" not in removed # Failed
# Verify warning logged for failure
assert any("warning" in record.levelname.lower() for record in capture_logs.records)


@@ -0,0 +1,158 @@
"""
Test environment corruption and recovery (Priority 1)
Tests that packages deleted or modified during installation are restored
"""
import json
from pathlib import Path
import pytest
@pytest.fixture
def restore_policy(temp_policy_dir):
"""Create policy with restore section for lightweight packages"""
policy_content = {
"six": {
"restore": [
{
"target": "six",
"version": "1.16.0",
"reason": "six must be maintained at 1.16.0 for compatibility"
}
]
}
}
policy_file = temp_policy_dir / "pip-policy.json"
policy_file.write_text(json.dumps(policy_content, indent=2))
return policy_file
@pytest.mark.integration
def test_package_deletion_and_restore(
restore_policy,
mock_manager_util,
mock_context,
reset_test_venv,
get_installed_packages,
install_packages,
uninstall_packages
):
"""
Test package deleted by installation is restored
Priority: 1 (Essential)
Purpose:
Verify that when a package installation deletes another package,
the restore policy can bring it back with the correct version.
Based on DEPENDENCY_TREE_CONTEXT.md:
six==1.16.0 must be maintained for compatibility
After deletion, should restore to exactly 1.16.0
"""
from comfyui_manager.common.pip_util import PipBatch
# Verify six is initially installed at expected version
initial = get_installed_packages()
assert "six" in initial
assert initial["six"] == "1.16.0", f"Expected six==1.16.0, got {initial['six']}"
with PipBatch() as batch:
# Manually remove six to simulate deletion by another package
uninstall_packages("six")
# Check six was deleted
installed_after_delete = batch._get_installed_packages()
assert "six" not in installed_after_delete, "six should be deleted"
# Restore six
restored = batch.ensure_installed()
final_packages = batch._get_installed_packages()
# Verify six was restored to EXACT required version (not latest)
assert "six" in restored, "six should be in restored list"
assert final_packages["six"] == "1.16.0", \
"six should be restored to exact version 1.16.0 (not 1.17.0 latest)"
@pytest.fixture
def version_change_policy(temp_policy_dir):
"""Create policy for version change test with real packages"""
policy_content = {
"urllib3": {
"restore": [
{
"condition": {
"type": "installed",
"spec": "!=1.26.15"
},
"target": "urllib3",
"version": "1.26.15",
"reason": "urllib3 must be 1.26.15 for compatibility"
}
]
}
}
policy_file = temp_policy_dir / "pip-policy.json"
policy_file.write_text(json.dumps(policy_content, indent=2))
return policy_file
@pytest.mark.integration
def test_version_change_and_restore(
version_change_policy,
mock_manager_util,
mock_context,
reset_test_venv,
get_installed_packages,
install_packages
):
"""
Test package version changed by installation is restored
Priority: 1 (Essential)
Purpose:
Verify that when a package installation changes another package's
version, the restore policy can revert it to the required version.
Based on DEPENDENCY_TREE_CONTEXT.md:
urllib3 can move from 1.26.15 (1.x) to a 2.x release (this test simulates the change with 2.1.0)
Restore policy with condition "!=1.26.15" should downgrade back
This tests downgrade capability (not just upgrade prevention)
"""
from comfyui_manager.common.pip_util import PipBatch
# Verify urllib3 1.26.15 is installed
initial = get_installed_packages()
assert "urllib3" in initial
assert initial["urllib3"] == "1.26.15", f"Expected urllib3==1.26.15, got {initial['urllib3']}"
with PipBatch() as batch:
# Manually upgrade urllib3 to 2.x to simulate version change
# This is a MAJOR version upgrade (1.x → 2.x)
install_packages("urllib3==2.1.0")
installed_after = batch._get_installed_packages()
# Verify version was changed to 2.x
assert installed_after["urllib3"] == "2.1.0", \
f"urllib3 should be upgraded to 2.1.0, got {installed_after['urllib3']}"
assert installed_after["urllib3"].startswith("2."), \
"urllib3 should be at 2.x series"
# Restore urllib3 to 1.26.15 (this is a DOWNGRADE from 2.x to 1.x)
restored = batch.ensure_installed()
final = batch._get_installed_packages()
# Verify condition was satisfied (2.1.0 != 1.26.15) and restore was triggered
assert "urllib3" in restored, "urllib3 should be in restored list"
# Verify version was DOWNGRADED from 2.x back to 1.x
assert final["urllib3"] == "1.26.15", \
"urllib3 should be downgraded to 1.26.15 (from 2.1.0)"
assert final["urllib3"].startswith("1."), \
f"urllib3 should be back at 1.x series, got {final['urllib3']}"


@@ -0,0 +1,204 @@
"""
Test full workflow integration (Priority 1)
Tests the complete uninstall → install → restore workflow
"""
import json
import subprocess
from pathlib import Path
import pytest
@pytest.fixture
def workflow_policy(temp_policy_dir):
"""Create policy for full workflow test"""
policy_content = {
"target-package": {
"uninstall": [
{
"condition": {
"type": "installed",
"package": "conflicting-pkg"
},
"target": "conflicting-pkg",
"reason": "Conflicts with target-package"
}
],
"apply_all_matches": [
{
"type": "pin_dependencies",
"pinned_packages": ["numpy", "pandas"]
}
]
},
"critical-package": {
"restore": [
{
"target": "critical-package",
"version": "1.2.3",
"reason": "Critical package must be 1.2.3"
}
]
}
}
policy_file = temp_policy_dir / "pip-policy.json"
policy_file.write_text(json.dumps(policy_content, indent=2))
return policy_file
@pytest.fixture
def mock_workflow_subprocess(monkeypatch):
"""Mock subprocess for workflow test"""
call_sequence = []
# Initial environment: conflicting-pkg, numpy, pandas, critical-package
installed_packages = {
"conflicting-pkg": "1.0.0",
"numpy": "1.26.0",
"pandas": "2.0.0",
"critical-package": "1.2.3"
}
def mock_run(cmd, **kwargs):
call_sequence.append(cmd)
# pip freeze
if "freeze" in cmd:
output = "\n".join([f"{pkg}=={ver}" for pkg, ver in installed_packages.items()])
return subprocess.CompletedProcess(cmd, 0, output, "")
# pip uninstall
if "uninstall" in cmd:
# Remove conflicting-pkg
if "conflicting-pkg" in cmd:
installed_packages.pop("conflicting-pkg", None)
return subprocess.CompletedProcess(cmd, 0, "", "")
# pip install target-package (deletes critical-package)
if "install" in cmd and "target-package" in cmd:
# Simulate target-package installation deleting critical-package
installed_packages.pop("critical-package", None)
installed_packages["target-package"] = "1.0.0"
return subprocess.CompletedProcess(cmd, 0, "", "")
# pip install critical-package (restore)
if "install" in cmd and "critical-package==1.2.3" in cmd:
installed_packages["critical-package"] = "1.2.3"
return subprocess.CompletedProcess(cmd, 0, "", "")
return subprocess.CompletedProcess(cmd, 0, "", "")
monkeypatch.setattr("subprocess.run", mock_run)
return call_sequence, installed_packages
@pytest.mark.integration
def test_uninstall_install_restore_workflow(
workflow_policy,
mock_manager_util,
mock_context,
mock_workflow_subprocess
):
"""
Test complete uninstall → install → restore workflow
Priority: 1 (Essential)
Purpose:
Verify the complete workflow executes in correct order:
1. ensure_not_installed() removes conflicting packages
2. install() applies policies (pin_dependencies)
3. ensure_installed() restores deleted packages
"""
# Path setup handled by conftest.py
from comfyui_manager.common.pip_util import PipBatch
call_sequence, installed_packages = mock_workflow_subprocess
with PipBatch() as batch:
# Step 1: uninstall - remove conflicting packages
removed = batch.ensure_not_installed()
# Step 2: install target-package with pinned dependencies
result = batch.install("target-package")
# Step 3: restore critical-package that was deleted
restored = batch.ensure_installed()
# Verify Step 1: conflicting-pkg was removed
assert "conflicting-pkg" in removed
# Verify Step 2: target-package was installed with pinned dependencies
assert result is True
# Check that pip install was called with pinned packages
install_calls = [cmd for cmd in call_sequence if "install" in cmd and "target-package" in cmd]
assert len(install_calls) > 0
install_cmd = install_calls[0]
assert "target-package" in install_cmd
assert "numpy==1.26.0" in install_cmd
assert "pandas==2.0.0" in install_cmd
# Verify Step 3: critical-package was restored
assert "critical-package" in restored
# Verify final state
assert "conflicting-pkg" not in installed_packages
assert "critical-package" in installed_packages
assert installed_packages["critical-package"] == "1.2.3"
assert "target-package" in installed_packages
@pytest.mark.integration
def test_cache_invalidation_across_workflow(
workflow_policy,
mock_manager_util,
mock_context,
mock_workflow_subprocess
):
"""
Test cache is correctly refreshed at each workflow step
Priority: 1 (Essential)
Purpose:
Verify that the cache is invalidated and refreshed after each
operation (uninstall, install, restore) to reflect current state.
"""
# Path setup handled by conftest.py
from comfyui_manager.common.pip_util import PipBatch
call_sequence, installed_packages = mock_workflow_subprocess
with PipBatch() as batch:
# Initial cache state
cache1 = batch._get_installed_packages()
assert "conflicting-pkg" in cache1
assert "critical-package" in cache1
# After uninstall
removed = batch.ensure_not_installed()
cache2 = batch._get_installed_packages()
assert "conflicting-pkg" not in cache2 # Removed
# After install (critical-package gets deleted by target-package)
batch.install("target-package")
cache3 = batch._get_installed_packages()
assert "target-package" in cache3 # Added
assert "critical-package" not in cache3 # Deleted by target-package
# After restore
restored = batch.ensure_installed()
cache4 = batch._get_installed_packages()
assert "critical-package" in cache4 # Restored
# Verify cache was refreshed at each step
assert cache1 != cache2 # Changed after uninstall
assert cache2 != cache3 # Changed after install
assert cache3 != cache4 # Changed after restore

Some files were not shown because too many files have changed in this diff.