Compare commits


32 Commits

Author SHA1 Message Date
bymyself
49549ddcb8 [feat] Implement comprehensive batch tracking and OpenAPI-driven data models
Enhances ComfyUI Manager with robust batch execution tracking and unified data model architecture:

- Implemented automatic batch history serialization with before/after system state snapshots
- Added comprehensive state management capturing installed nodes, models, and ComfyUI version info
- Enhanced task queue with proper client ID handling and WebSocket notifications
- Migrated all data models to OpenAPI-generated Pydantic models for consistency
- Added documentation for new TaskQueue methods (done_count, total_count, finalize)
- Fixed 64 linting errors with proper imports and code cleanup

Technical improvements:
- All models now auto-generated from openapi.yaml ensuring API/implementation consistency
- Batch tracking captures complete system state at operation start and completion
- Enhanced REST endpoints with comprehensive documentation
- Removed manual model files in favor of single source of truth
- Added helper methods for system state capture and batch lifecycle management
2025-06-08 01:18:14 -07:00
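As a rough sketch of the before/after snapshot flow described in this commit message (all function and field names here are hypothetical, not the actual ComfyUI-Manager API):

import json
import time
from pathlib import Path

def run_batch_with_history(tasks, execute_task, capture_state, history_dir):
    # `execute_task` and `capture_state` are caller-supplied callables standing in
    # for the manager's real task runner and system-state helpers (installed nodes,
    # models, ComfyUI version, etc.).
    record = {
        "started_at": time.time(),
        "state_before": capture_state(),
        "tasks": [],
    }
    for task in tasks:
        record["tasks"].append({"task": task, "result": execute_task(task)})
    record["state_after"] = capture_state()
    record["finished_at"] = time.time()
    # Serialize the batch record so it can later be served by the history endpoints.
    out_path = Path(history_dir) / f"batch-{int(record['started_at'])}.json"
    out_path.write_text(json.dumps(record, indent=2))
    return record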
bymyself
35eddc2965 [feat] Add client_id support to task queue system
- Add client_id field to QueueTaskItem and TaskHistoryItem models
- Implement client-specific WebSocket message routing
- Add client filtering to queue status and history endpoints
- Follow ComfyUI patterns for session management
- Create data_models package for better code organization
2025-06-06 16:01:52 -07:00
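A minimal sketch of the client-scoped routing idea (class and method names are illustrative, not the manager's actual implementation): each queued task carries the client_id of the session that created it, and completion messages are sent only to that session's socket.

class ClientNotifier:
    def __init__(self):
        # Maps client_id -> a websocket-like object exposing an async send_json() method.
        self.sockets = {}

    def register(self, client_id, ws):
        self.sockets[client_id] = ws

    def unregister(self, client_id):
        self.sockets.pop(client_id, None)

    async def notify(self, client_id, event, payload):
        ws = self.sockets.get(client_id)
        if ws is None:
            return  # unknown or disconnected client: drop the message
        await ws.send_json({"type": event, "data": payload})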
Dr.Lt.Data
798b203274 fixed: cm_global importing error 2025-06-06 16:00:45 -07:00
Dr.Lt.Data
9d2034bd4f fixed: crash related to a CNR node deleted after installation
modified: convert cm-cli.sh to a cm-cli command
2025-06-06 16:00:45 -07:00
Dr.Lt.Data
6233fabe02 modified: glob.core - make the default network mode public.
Network mode does more than determine whether the CNR cache is used; even after the future switch to a cacheless implementation, it will remain a policy setting for user environments.
2025-06-06 16:00:45 -07:00
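For context, the network mode is a plain entry in the manager's config.ini (a minimal sketch; the section name and value shown are only an example):

[default]
network_mode = public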
Dr.Lt.Data
48ba2f4b4c fixed: missing channels.list.template
modified: /ltdrdata -> /Comfy-Org
modified: set default network mode to public instead of offline
2025-06-06 16:00:45 -07:00
Dr.Lt.Data
3799af0017 fixed: perform reload when starting task worker 2025-06-06 16:00:45 -07:00
Dr.Lt.Data
403947a5d1 modified: prevent displaying ComfyUI-Manager on list 2025-06-06 16:00:45 -07:00
Dr.Lt.Data
276ccca4f6 fixed: avoid bare except:
fixed: prestartup_script - remove needless exception handling when falling back to resolve comfy_path
2025-06-06 16:00:45 -07:00
Christian Byrne
31de92a7ef Add is_legacy_manager_ui route from the legacy package as well (#1748)
* add `is_legacy_manager_ui` route to `legacy` package  as well

* add static
2025-06-06 16:00:45 -07:00
Christian Byrne
3ae4aecd84 Only load legacy FE extension if --enable-manager-legacy-ui is set (#1746)
* only load JS extensions when legacy arg is set

* add `is_legacy_manager_ui` endpoint
2025-06-06 16:00:45 -07:00
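The gating described in these two commits boils down to a startup check on the CLI flag; a simplified sketch (helper names are hypothetical) mirroring the loader changes shown in the diffs further below:

def load_manager_frontend(args, enable_legacy_ui, enable_new_ui):
    # Register the legacy JS extension only when --enable-manager-legacy-ui was passed;
    # otherwise only the new (glob) server modules are loaded.
    if getattr(args, "enable_manager_legacy_ui", False):
        enable_legacy_ui()
    else:
        enable_new_ui()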
Dr.Lt.Data
7896949719 use --enable-manager-legacy-ui cli arg instead of env variable 2025-06-06 16:00:45 -07:00
Dr.Lt.Data
86c7482048 restructuring
The existing cache-based implementation will be retained as a fallback under legacy/..., while glob/... will be updated to a cacheless implementation.
2025-06-06 16:00:45 -07:00
Christian Byrne
0146655f0f add development guide (#1739) 2025-06-06 15:59:20 -07:00
Dr.Lt.Data
89bb61fb05 fixed: don't disable legacy ComfyUI-Manager unless --disable-comfyui is set 2025-06-06 15:59:20 -07:00
bymyself
dfd9a3ec7b use parsed version and id even when no cnr map exists 2025-06-06 15:59:20 -07:00
bymyself
985c987603 fix: installed nodes should still be initialized in offline mode 2025-06-06 15:59:20 -07:00
bymyself
1ce35679b1 fix: is_legacy_front should still be a function 2025-06-06 15:59:20 -07:00
bymyself
6e1c906aff if pip package, force offline mode 2025-06-06 15:59:20 -07:00
bymyself
cd8e87a3fb don't load legacy web dir when --disable-manager arg set 2025-06-06 15:59:20 -07:00
bymyself
8b9420731a enable legacy manager frontend during beta phase 2025-06-06 15:59:14 -07:00
bymyself
d9918cf773 add missing v2 prefix to customnode/installed route 2025-06-06 15:58:56 -07:00
bymyself
163782e445 don't handle queue in legacy front if element is not visible 2025-06-06 15:58:56 -07:00
bymyself
ad14e1ed13 don't show menu buttons if the ComfyUI frontend is past 1.16 2025-06-06 15:58:56 -07:00
bymyself
b4392293fa add workflow to publish to pypi 2025-06-06 15:58:56 -07:00
Dr.Lt.Data
e8ff505ebf Modify the structure to be installable via pip. 2025-06-06 15:58:56 -07:00
Dr.Lt.Data
8a5226b1d4 added: should_be_disabled function 2025-06-06 15:57:08 -07:00
Dr.Lt.Data
422af67217 fixed: ruff check 2025-06-06 15:57:08 -07:00
Dr.Lt.Data
5c300f75e7 fixed: failed[..].ui_id -> failed 2025-06-06 15:57:08 -07:00
Dr.Lt.Data
5ea7bf3683 feat: support task batch
POST /v2/manager/queue/batch
GET /v2/manager/queue/history_list
GET /v2/manager/queue/history?id={id}
GET /v2/manager/queue/abort_current
2025-06-06 15:57:08 -07:00
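A hedged example of exercising these endpoints from a client script; the host/port, request payload shape, and batch id are assumptions for illustration, only the routes themselves come from the commit message:

import requests

BASE = "http://127.0.0.1:8188"  # assumed local ComfyUI address

# Queue a batch of operations (payload shape is illustrative only).
requests.post(f"{BASE}/v2/manager/queue/batch",
              json=[{"kind": "install", "id": "some-pack", "version": "1.0.0"}])

# List past batches, then fetch a specific one by id (response shapes are assumptions).
print(requests.get(f"{BASE}/v2/manager/queue/history_list").json())
print(requests.get(f"{BASE}/v2/manager/queue/history", params={"id": "some-batch-id"}).json())

# Abort whatever task is currently running.
requests.get(f"{BASE}/v2/manager/queue/abort_current")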
Dr.Lt.Data
34efbe9262 Modify the structure to be installable via pip. 2025-06-06 15:57:08 -07:00
Dr.Lt.Data
9d24038a7d support installation of system-added nodepacks
modified: install_by_id - change the install path of a CNR node added by the system to be based on the repo URL instead of the CNR ID.
2025-06-06 15:54:38 -07:00
63 changed files with 15178 additions and 70494 deletions

View File

@@ -1,70 +0,0 @@
name: CI
on:
push:
branches: [ main, feat/*, fix/* ]
pull_request:
branches: [ main ]
jobs:
validate-openapi:
name: Validate OpenAPI Specification
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Check if OpenAPI changed
id: openapi-changed
uses: tj-actions/changed-files@v44
with:
files: openapi.yaml
- name: Setup Node.js
if: steps.openapi-changed.outputs.any_changed == 'true'
uses: actions/setup-node@v4
with:
node-version: '18'
- name: Install Redoc CLI
if: steps.openapi-changed.outputs.any_changed == 'true'
run: |
npm install -g @redocly/cli
- name: Validate OpenAPI specification
if: steps.openapi-changed.outputs.any_changed == 'true'
run: |
redocly lint openapi.yaml
code-quality:
name: Code Quality Checks
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
with:
fetch-depth: 0 # Fetch all history for proper diff
- name: Get changed Python files
id: changed-py-files
uses: tj-actions/changed-files@v44
with:
files: |
**/*.py
files_ignore: |
comfyui_manager/legacy/**
- name: Setup Python
if: steps.changed-py-files.outputs.any_changed == 'true'
uses: actions/setup-python@v5
with:
python-version: '3.9'
- name: Install dependencies
if: steps.changed-py-files.outputs.any_changed == 'true'
run: |
pip install ruff
- name: Run ruff linting on changed files
if: steps.changed-py-files.outputs.any_changed == 'true'
run: |
echo "Changed files: ${{ steps.changed-py-files.outputs.all_changed_files }}"
echo "${{ steps.changed-py-files.outputs.all_changed_files }}" | xargs -r ruff check

View File

@@ -4,7 +4,7 @@ on:
workflow_dispatch: workflow_dispatch:
push: push:
branches: branches:
- manager-v4 - main
paths: paths:
- "pyproject.toml" - "pyproject.toml"
@@ -21,7 +21,7 @@ jobs:
- name: Set up Python - name: Set up Python
uses: actions/setup-python@v4 uses: actions/setup-python@v4
with: with:
python-version: '3.x' python-version: '3.9'
- name: Install build dependencies - name: Install build dependencies
run: | run: |
@@ -31,28 +31,28 @@ jobs:
- name: Get current version - name: Get current version
id: current_version id: current_version
run: | run: |
CURRENT_VERSION=$(grep -oP '^version = "\K[^"]+' pyproject.toml) CURRENT_VERSION=$(grep -oP 'version = "\K[^"]+' pyproject.toml)
echo "version=$CURRENT_VERSION" >> $GITHUB_OUTPUT echo "version=$CURRENT_VERSION" >> $GITHUB_OUTPUT
echo "Current version: $CURRENT_VERSION" echo "Current version: $CURRENT_VERSION"
- name: Build package - name: Build package
run: python -m build run: python -m build
# - name: Create GitHub Release - name: Create GitHub Release
# id: create_release id: create_release
# uses: softprops/action-gh-release@v2 uses: softprops/action-gh-release@v2
# env: env:
# GITHUB_TOKEN: ${{ github.token }} GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# with: with:
# files: dist/* files: dist/*
# tag_name: v${{ steps.current_version.outputs.version }} tag_name: v${{ steps.current_version.outputs.version }}
# draft: false draft: false
# prerelease: false prerelease: false
# generate_release_notes: true generate_release_notes: true
- name: Publish to PyPI - name: Publish to PyPI
uses: pypa/gh-action-pypi-publish@76f52bc884231f62b9a034ebfe128415bbaabdfc uses: pypa/gh-action-pypi-publish@release/v1
with: with:
password: ${{ secrets.PYPI_TOKEN }} password: ${{ secrets.PYPI_TOKEN }}
skip-existing: true skip-existing: true
verbose: true verbose: true

.github/workflows/publish.yml (vendored, new file, 25 lines)
View File

@@ -0,0 +1,25 @@
name: Publish to Comfy registry
on:
workflow_dispatch:
push:
branches:
- main-blocked
paths:
- "pyproject.toml"
permissions:
issues: write
jobs:
publish-node:
name: Publish Custom Node to registry
runs-on: ubuntu-latest
if: ${{ github.repository_owner == 'ltdrdata' || github.repository_owner == 'Comfy-Org' }}
steps:
- name: Check out code
uses: actions/checkout@v4
- name: Publish Custom Node
uses: Comfy-Org/publish-node-action@v1
with:
## Add your own personal access token to your Github Repository secrets and reference it here.
personal_access_token: ${{ secrets.REGISTRY_ACCESS_TOKEN }}

.gitignore (vendored, 1 line changed)
View File

@@ -19,6 +19,5 @@ pip_overrides.json
check2.sh check2.sh
/venv/ /venv/
build build
dist
*.egg-info *.egg-info
.env .env

README.md (105 lines changed)
View File

@@ -89,20 +89,20 @@
## Paths ## Paths
In `ComfyUI-Manager` V4.0.3b4 and later, configuration files and dynamically generated files are located under `<USER_DIRECTORY>/__manager/`. In `ComfyUI-Manager` V3.0 and later, configuration files and dynamically generated files are located under `<USER_DIRECTORY>/default/ComfyUI-Manager/`.
* <USER_DIRECTORY> * <USER_DIRECTORY>
* If executed without any options, the path defaults to ComfyUI/user. * If executed without any options, the path defaults to ComfyUI/user.
* It can be set using --user-directory <USER_DIRECTORY>. * It can be set using --user-directory <USER_DIRECTORY>.
* Basic config files: `<USER_DIRECTORY>/__manager/config.ini` * Basic config files: `<USER_DIRECTORY>/default/ComfyUI-Manager/config.ini`
* Configurable channel lists: `<USER_DIRECTORY>/__manager/channels.ini` * Configurable channel lists: `<USER_DIRECTORY>/default/ComfyUI-Manager/channels.ini`
* Configurable pip overrides: `<USER_DIRECTORY>/__manager/pip_overrides.json` * Configurable pip overrides: `<USER_DIRECTORY>/default/ComfyUI-Manager/pip_overrides.json`
* Configurable pip blacklist: `<USER_DIRECTORY>/__manager/pip_blacklist.list` * Configurable pip blacklist: `<USER_DIRECTORY>/default/ComfyUI-Manager/pip_blacklist.list`
* Configurable pip auto fix: `<USER_DIRECTORY>/__manager/pip_auto_fix.list` * Configurable pip auto fix: `<USER_DIRECTORY>/default/ComfyUI-Manager/pip_auto_fix.list`
* Saved snapshot files: `<USER_DIRECTORY>/__manager/snapshots` * Saved snapshot files: `<USER_DIRECTORY>/default/ComfyUI-Manager/snapshots`
* Startup script files: `<USER_DIRECTORY>/__manager/startup-scripts` * Startup script files: `<USER_DIRECTORY>/default/ComfyUI-Manager/startup-scripts`
* Component files: `<USER_DIRECTORY>/__manager/components` * Component files: `<USER_DIRECTORY>/default/ComfyUI-Manager/components`
## `extra_model_paths.yaml` Configuration ## `extra_model_paths.yaml` Configuration
@@ -115,17 +115,17 @@ The following settings are applied based on the section marked as `is_default`.
## Snapshot-Manager ## Snapshot-Manager
* When you press `Save snapshot` or use `Update All` on `Manager Menu`, the current installation status snapshot is saved. * When you press `Save snapshot` or use `Update All` on `Manager Menu`, the current installation status snapshot is saved.
* Snapshot file dir: `<USER_DIRECTORY>/__manager/snapshots` * Snapshot file dir: `<USER_DIRECTORY>/default/ComfyUI-Manager/snapshots`
* You can rename snapshot file. * You can rename snapshot file.
* Press the "Restore" button to revert to the installation status of the respective snapshot. * Press the "Restore" button to revert to the installation status of the respective snapshot.
* However, for custom nodes not managed by Git, snapshot support is incomplete. * However, for custom nodes not managed by Git, snapshot support is incomplete.
* When you press `Restore`, it will take effect on the next ComfyUI startup. * When you press `Restore`, it will take effect on the next ComfyUI startup.
* The selected snapshot file is saved in `<USER_DIRECTORY>/__manager/startup-scripts/restore-snapshot.json`, and upon restarting ComfyUI, the snapshot is applied and then deleted. * The selected snapshot file is saved in `<USER_DIRECTORY>/default/ComfyUI-Manager/startup-scripts/restore-snapshot.json`, and upon restarting ComfyUI, the snapshot is applied and then deleted.
![model-install-dialog](https://raw.githubusercontent.com/ltdrdata/ComfyUI-extension-tutorials/Main/ComfyUI-Manager/images/snapshot.jpg) ![model-install-dialog](https://raw.githubusercontent.com/ltdrdata/ComfyUI-extension-tutorials/Main/ComfyUI-Manager/images/snapshot.jpg)
## cm-cli: command line tools for power users ## cm-cli: command line tools for power user
* A tool is provided that allows you to use the features of ComfyUI-Manager without running ComfyUI. * A tool is provided that allows you to use the features of ComfyUI-Manager without running ComfyUI.
* For more details, please refer to the [cm-cli documentation](docs/en/cm-cli.md). * For more details, please refer to the [cm-cli documentation](docs/en/cm-cli.md).
@@ -169,12 +169,12 @@ The following settings are applied based on the section marked as `is_default`.
} }
``` ```
* `<current timestamp>` Ensure that the timestamp is always unique. * `<current timestamp>` Ensure that the timestamp is always unique.
* "components" should have the same structure as the content of the file stored in `<USER_DIRECTORY>/__manager/components`. * "components" should have the same structure as the content of the file stored in `<USER_DIRECTORY>/default/ComfyUI-Manager/components`.
* `<component name>`: The name should be in the format `<prefix>::<node name>`. * `<component name>`: The name should be in the format `<prefix>::<node name>`.
* `<component node data>`: In the node data of the group node. * `<compnent nodeata>`: In the nodedata of the group node.
* `<version>`: Only two formats are allowed: `major.minor.patch` or `major.minor`. (e.g. `1.0`, `2.2.1`) * `<version>`: Only two formats are allowed: `major.minor.patch` or `major.minor`. (e.g. `1.0`, `2.2.1`)
* `<datetime>`: Saved time * `<datetime>`: Saved time
* `<packname>`: If the packname is not empty, the category becomes packname/workflow, and it is saved in the <packname>.pack file in `<USER_DIRECTORY>/__manager/components`. * `<packname>`: If the packname is not empty, the category becomes packname/workflow, and it is saved in the <packname>.pack file in `<USER_DIRECTORY>/default/ComfyUI-Manager/components`.
* `<category>`: If there is neither a category nor a packname, it is saved in the components category. * `<category>`: If there is neither a category nor a packname, it is saved in the components category.
``` ```
"version":"1.0", "version":"1.0",
@@ -189,7 +189,7 @@ The following settings are applied based on the section marked as `is_default`.
* Dragging and dropping or pasting a single component will add a node. However, when adding multiple components, nodes will not be added. * Dragging and dropping or pasting a single component will add a node. However, when adding multiple components, nodes will not be added.
## Support for installing missing nodes ## Support of missing nodes installation
![missing-menu](https://raw.githubusercontent.com/ltdrdata/ComfyUI-extension-tutorials/Main/ComfyUI-Manager/images/missing-menu.jpg) ![missing-menu](https://raw.githubusercontent.com/ltdrdata/ComfyUI-extension-tutorials/Main/ComfyUI-Manager/images/missing-menu.jpg)
@@ -215,24 +215,23 @@ The following settings are applied based on the section marked as `is_default`.
downgrade_blacklist = <Set a list of packages to prevent downgrades. List them separated by commas.> downgrade_blacklist = <Set a list of packages to prevent downgrades. List them separated by commas.>
security_level = <Set the security level => strong|normal|normal-|weak> security_level = <Set the security level => strong|normal|normal-|weak>
always_lazy_install = <Whether to perform dependency installation on restart even in environments other than Windows.> always_lazy_install = <Whether to perform dependency installation on restart even in environments other than Windows.>
network_mode = <Set the network mode => public|private|offline|personal_cloud> network_mode = <Set the network mode => public|private|offline>
``` ```
* network_mode: * network_mode:
- public: An environment that uses a typical public network. - public: An environment that uses a typical public network.
- private: An environment that uses a closed network, where a private node DB is configured via `channel_url`. (Uses cache if available) - private: An environment that uses a closed network, where a private node DB is configured via `channel_url`. (Uses cache if available)
- offline: An environment that does not use any external connections when using an offline network. (Uses cache if available) - offline: An environment that does not use any external connections when using an offline network. (Uses cache if available)
- personal_cloud: Applies relaxed security features in cloud environments such as Google Colab or Runpod, where strong security is not required.
## Additional Feature ## Additional Feature
* Logging to file feature * Logging to file feature
* This feature is enabled by default and can be disabled by setting `file_logging = False` in the `config.ini`. * This feature is enabled by default and can be disabled by setting `file_logging = False` in the `config.ini`.
* Fix node (recreate): When right-clicking on a node and selecting `Fix node (recreate)`, you can recreate the node. The widget's values are reset, while the connections maintain those with the same names. * Fix node(recreate): When right-clicking on a node and selecting `Fix node (recreate)`, you can recreate the node. The widget's values are reset, while the connections maintain those with the same names.
* It is used to correct errors in nodes of old workflows created before, which are incompatible with the version changes of custom nodes. * It is used to correct errors in nodes of old workflows created before, which are incompatible with the version changes of custom nodes.
* Double-Click Node Title: You can set the double-click behavior of nodes in the ComfyUI-Manager menu. * Double-Click Node Title: You can set the double click behavior of nodes in the ComfyUI-Manager menu.
* `Copy All Connections`, `Copy Input Connections`: Double-clicking a node copies the connections of the nearest node. * `Copy All Connections`, `Copy Input Connections`: Double-clicking a node copies the connections of the nearest node.
* This action targets the nearest node within a straight-line distance of 1000 pixels from the center of the node. * This action targets the nearest node within a straight-line distance of 1000 pixels from the center of the node.
* In the case of `Copy All Connections`, it duplicates existing outputs, but since it does not allow duplicate connections, the existing output connections of the original node are disconnected. * In the case of `Copy All Connections`, it duplicates existing outputs, but since it does not allow duplicate connections, the existing output connections of the original node are disconnected.
@@ -298,48 +297,46 @@ When you run the `scan.sh` script:
* It updates the `github-stats.json`. * It updates the `github-stats.json`.
* This uses the GitHub API, so set your token with `export GITHUB_TOKEN=your_token_here` to avoid quickly reaching the rate limit and malfunctioning. * This uses the GitHub API, so set your token with `export GITHUB_TOKEN=your_token_here` to avoid quickly reaching the rate limit and malfunctioning.
* To skip this step, add the `--skip-stat-update` option. * To skip this step, add the `--skip-update-stat` option.
* The `--skip-all` option applies both `--skip-update` and `--skip-stat-update`. * The `--skip-all` option applies both `--skip-update` and `--skip-stat-update`.
## Troubleshooting ## Troubleshooting
* If your `git.exe` is installed in a specific location other than system git, please install ComfyUI-Manager and run ComfyUI. Then, specify the path including the file name in `git_exe = ` in the `<USER_DIRECTORY>/__manager/config.ini` file that is generated. * If your `git.exe` is installed in a specific location other than system git, please install ComfyUI-Manager and run ComfyUI. Then, specify the path including the file name in `git_exe = ` in the `<USER_DIRECTORY>/default/ComfyUI-Manager/config.ini` file that is generated.
* If updating ComfyUI-Manager itself fails, please go to the **ComfyUI-Manager** directory and execute the command `git update-ref refs/remotes/origin/main a361cc1 && git fetch --all && git pull`. * If updating ComfyUI-Manager itself fails, please go to the **ComfyUI-Manager** directory and execute the command `git update-ref refs/remotes/origin/main a361cc1 && git fetch --all && git pull`.
* If you encounter the error message `Overlapped Object has pending operation at deallocation on ComfyUI Manager load` under Windows * If you encounter the error message `Overlapped Object has pending operation at deallocation on Comfyui Manager load` under Windows
* Edit `config.ini` file: add `windows_selector_event_loop_policy = True` * Edit `config.ini` file: add `windows_selector_event_loop_policy = True`
* If the `SSL: CERTIFICATE_VERIFY_FAILED` error occurs. * if `SSL: CERTIFICATE_VERIFY_FAILED` error is occured.
* Edit `config.ini` file: add `bypass_ssl = True` * Edit `config.ini` file: add `bypass_ssl = True`
(left column of the diff)

## Security policy

The security settings are applied based on whether the ComfyUI server's listener is non-local and whether the network mode is set to `personal_cloud`.

* **non-local**: When the server is launched with `--listen` and is bound to a network range other than the local `127.` range, allowing remote IP access.
* **personal_cloud**: When the `network_mode` is set to `personal_cloud`.

### Risky Level Table

| Risky Level | features |
|-------------|----------|
| high+ | * `Install via git url`, `pip install`<BR>* Installation of nodepack registered not in the `default channel`. |
| high | * Fix nodepack |
| middle+ | * Uninstall/Update<BR>* Installation of nodepack registered in the `default channel`.<BR>* Restore/Remove Snapshot<BR>* Install model |
| middle | * Restart |
| low | * Update ComfyUI |

### Security Level Table

| Security Level | local | non-local (personal_cloud) | non-local (not personal_cloud) |
|----------------|-------|----------------------------|--------------------------------|
| strong | * Only `weak` level risky features are allowed | * Only `weak` level risky features are allowed | * Only `weak` level risky features are allowed |
| normal | * `high+` and `high` level risky features are not allowed<BR>* `middle+` and `middle` level risky features are available | * `high+` and `high` level risky features are not allowed<BR>* `middle+` and `middle` level risky features are available | * `high+`, `high` and `middle+` level risky features are not allowed<BR>* `middle` level risky features are available |
| normal- | * All features are available | * `high+` and `high` level risky features are not allowed<BR>* `middle+` and `middle` level risky features are available | * `high+`, `high` and `middle+` level risky features are not allowed<BR>* `middle` level risky features are available |
| weak | * All features are available | * All features are available | * `high+` and `middle+` level risky features are not allowed<BR>* `high`, `middle` and `low` level risky features are available |

(right column of the diff)

## Security policy

* Edit `config.ini` file: add `security_level = <LEVEL>`
* `strong`
  * doesn't allow `high` and `middle` level risky feature
* `normal`
  * doesn't allow `high` level risky feature
  * `middle` level risky feature is available
* `normal-`
  * doesn't allow `high` level risky feature if `--listen` is specified and not starts with `127.`
  * `middle` level risky feature is available
* `weak`
  * all feature is available

* `high` level risky features
  * `Install via git url`, `pip install`
  * Installation of custom nodes registered not in the `default channel`.
  * Fix custom nodes
* `middle` level risky features
  * Uninstall/Update
  * Installation of custom nodes registered in the `default channel`.
  * Restore/Remove Snapshot
  * Restart
* `low` level risky features
  * Update ComfyUI
# Disclaimer # Disclaimer

View File

@@ -37,7 +37,7 @@ find ~/.tmp/default -name "*.py" -print0 | xargs -0 grep -E "crypto|^_A="
echo echo
echo CHECK3 echo CHECK3
find ~/.tmp/default -name "requirements.txt" | xargs grep "^\s*[^#]*https\?:" find ~/.tmp/default -name "requirements.txt" | xargs grep "^\s*https\\?:"
find ~/.tmp/default -name "requirements.txt" | xargs grep "^\s*[^#].*\.whl" find ~/.tmp/default -name "requirements.txt" | xargs grep "\.whl"
echo echo

View File

@@ -1,10 +1,5 @@
import os import os
import logging import logging
from aiohttp import web
from .common.manager_security import HANDLER_POLICY
from .common import manager_security
from comfy.cli_args import args
def prestartup(): def prestartup():
from . import prestartup_script # noqa: F401 from . import prestartup_script # noqa: F401
@@ -12,29 +7,25 @@ def prestartup():
def start(): def start():
from comfy.cli_args import args
logging.info('[START] ComfyUI-Manager') logging.info('[START] ComfyUI-Manager')
from .common import cm_global # noqa: F401 from .common import cm_global # noqa: F401
if args.enable_manager: if not args.disable_manager:
if args.enable_manager_legacy_ui: if args.enable_manager_legacy_ui:
try: try:
from .legacy import manager_server # noqa: F401 from .legacy import manager_server # noqa: F401
from .legacy import share_3rdparty # noqa: F401 from .legacy import share_3rdparty # noqa: F401
from .legacy import manager_core as core
import nodes import nodes
logging.info("[ComfyUI-Manager] Legacy UI is enabled.") logging.info("[ComfyUI-Manager] Legacy UI is enabled.")
nodes.EXTENSION_WEB_DIRS['comfyui-manager-legacy'] = os.path.join(os.path.dirname(__file__), 'js') nodes.EXTENSION_WEB_DIRS['comfyui-manager-legacy'] = os.path.join(os.path.dirname(__file__), 'js')
except Exception as e: except Exception as e:
print("Error enabling legacy ComfyUI Manager frontend:", e) print("Error enabling legacy ComfyUI Manager frontend:", e)
core = None
else: else:
from .glob import manager_server # noqa: F401 from .glob import manager_server # noqa: F401
from .glob import share_3rdparty # noqa: F401 from .glob import share_3rdparty # noqa: F401
from .glob import manager_core as core
if core is not None:
manager_security.is_personal_cloud_mode = core.get_config()['network_mode'].lower() == 'personal_cloud'
def should_be_disabled(fullpath:str) -> bool: def should_be_disabled(fullpath:str) -> bool:
@@ -42,7 +33,9 @@ def should_be_disabled(fullpath:str) -> bool:
1. Disables the legacy ComfyUI-Manager. 1. Disables the legacy ComfyUI-Manager.
2. The blocklist can be expanded later based on policies. 2. The blocklist can be expanded later based on policies.
""" """
if args.enable_manager: from comfy.cli_args import args
if not args.disable_manager:
# In cases where installation is done via a zip archive, the directory name may not be comfyui-manager, and it may not contain a git repository. # In cases where installation is done via a zip archive, the directory name may not be comfyui-manager, and it may not contain a git repository.
# It is assumed that any installed legacy ComfyUI-Manager will have at least 'comfyui-manager' in its directory name. # It is assumed that any installed legacy ComfyUI-Manager will have at least 'comfyui-manager' in its directory name.
dir_name = os.path.basename(fullpath).lower() dir_name = os.path.basename(fullpath).lower()
@@ -50,55 +43,3 @@ def should_be_disabled(fullpath:str) -> bool:
return True return True
return False return False
def get_client_ip(request):
peername = request.transport.get_extra_info("peername")
if peername is not None:
host, port = peername
return host
return "unknown"
def create_middleware():
connected_clients = set()
is_local_mode = manager_security.is_loopback(args.listen)
@web.middleware
async def manager_middleware(request: web.Request, handler):
nonlocal connected_clients
# security policy for remote environments
prev_client_count = len(connected_clients)
client_ip = get_client_ip(request)
connected_clients.add(client_ip)
next_client_count = len(connected_clients)
if prev_client_count == 1 and next_client_count > 1:
manager_security.multiple_remote_alert()
policy = manager_security.get_handler_policy(handler)
is_banned = False
# policy check
if len(connected_clients) > 1:
if is_local_mode:
if HANDLER_POLICY.MULTIPLE_REMOTE_BAN_NON_LOCAL in policy:
is_banned = True
if HANDLER_POLICY.MULTIPLE_REMOTE_BAN_NOT_PERSONAL_CLOUD in policy:
is_banned = not manager_security.is_personal_cloud_mode
if HANDLER_POLICY.BANNED in policy:
is_banned = True
if is_banned:
logging.warning(f"[Manager] Banning request from {client_ip}: {request.path}")
response = web.Response(text="[Manager] This request is banned.", status=403)
else:
response: web.Response = await handler(request)
return response
return manager_middleware

View File

@@ -46,7 +46,10 @@ comfyui_manager_path = os.path.abspath(os.path.dirname(__file__))
cm_global.pip_blacklist = {'torch', 'torchaudio', 'torchsde', 'torchvision'} cm_global.pip_blacklist = {'torch', 'torchaudio', 'torchsde', 'torchvision'}
cm_global.pip_downgrade_blacklist = ['torch', 'torchaudio', 'torchsde', 'torchvision', 'transformers', 'safetensors', 'kornia'] cm_global.pip_downgrade_blacklist = ['torch', 'torchaudio', 'torchsde', 'torchvision', 'transformers', 'safetensors', 'kornia']
cm_global.pip_overrides = {} if sys.version_info < (3, 13):
cm_global.pip_overrides = {'numpy': 'numpy<2'}
else:
cm_global.pip_overrides = {}
if os.path.exists(os.path.join(manager_util.comfyui_manager_path, "pip_overrides.json")): if os.path.exists(os.path.join(manager_util.comfyui_manager_path, "pip_overrides.json")):
with open(os.path.join(manager_util.comfyui_manager_path, "pip_overrides.json"), 'r', encoding="UTF-8", errors="ignore") as json_file: with open(os.path.join(manager_util.comfyui_manager_path, "pip_overrides.json"), 'r', encoding="UTF-8", errors="ignore") as json_file:
@@ -149,6 +152,9 @@ class Ctx:
with open(context.manager_pip_overrides_path, 'r', encoding="UTF-8", errors="ignore") as json_file: with open(context.manager_pip_overrides_path, 'r', encoding="UTF-8", errors="ignore") as json_file:
cm_global.pip_overrides = json.load(json_file) cm_global.pip_overrides = json.load(json_file)
if sys.version_info < (3, 13):
cm_global.pip_overrides = {'numpy': 'numpy<2'}
if os.path.exists(context.manager_pip_blacklist_path): if os.path.exists(context.manager_pip_blacklist_path):
with open(context.manager_pip_blacklist_path, 'r', encoding="UTF-8", errors="ignore") as f: with open(context.manager_pip_blacklist_path, 'r', encoding="UTF-8", errors="ignore") as f:
for x in f.readlines(): for x in f.readlines():
@@ -670,7 +676,7 @@ def install(
cmd_ctx.set_channel_mode(channel, mode) cmd_ctx.set_channel_mode(channel, mode)
cmd_ctx.set_no_deps(no_deps) cmd_ctx.set_no_deps(no_deps)
pip_fixer = manager_util.PIPFixer(manager_util.get_installed_packages(), comfy_path, context.manager_files_path) pip_fixer = manager_util.PIPFixer(manager_util.get_installed_packages(), comfy_path, core.manager_files_path)
for_each_nodes(nodes, act=install_node, exit_on_fail=exit_on_fail) for_each_nodes(nodes, act=install_node, exit_on_fail=exit_on_fail)
pip_fixer.fix_broken() pip_fixer.fix_broken()

View File

@@ -1,16 +0,0 @@
# ComfyUI-Manager: Core Backend (glob)
This directory contains the Python backend modules that power ComfyUI-Manager, handling the core functionality of node management, downloading, security, and server operations.
## Core Modules
- **manager_downloader.py**: Handles downloading operations for models, extensions, and other resources.
- **manager_util.py**: Provides utility functions used throughout the system.
## Specialized Modules
- **cm_global.py**: Maintains global variables and state management across the system.
- **cnr_utils.py**: Helper utilities for interacting with the custom node registry (CNR).
- **git_utils.py**: Git-specific utilities for repository operations.
- **node_package.py**: Handles the packaging and installation of node extensions.
- **security_check.py**: Implements the multi-level security system for installation safety.

View File

@@ -11,7 +11,6 @@ from . import manager_util
import requests import requests
import toml import toml
import logging
base_url = "https://api.comfy.org" base_url = "https://api.comfy.org"
@@ -24,7 +23,7 @@ async def get_cnr_data(cache_mode=True, dont_wait=True):
try: try:
return await _get_cnr_data(cache_mode, dont_wait) return await _get_cnr_data(cache_mode, dont_wait)
except asyncio.TimeoutError: except asyncio.TimeoutError:
logging.info("A timeout occurred during the fetch process from ComfyRegistry.") print("A timeout occurred during the fetch process from ComfyRegistry.")
return await _get_cnr_data(cache_mode=True, dont_wait=True) # timeout fallback return await _get_cnr_data(cache_mode=True, dont_wait=True) # timeout fallback
async def _get_cnr_data(cache_mode=True, dont_wait=True): async def _get_cnr_data(cache_mode=True, dont_wait=True):
@@ -80,12 +79,12 @@ async def _get_cnr_data(cache_mode=True, dont_wait=True):
full_nodes[x['id']] = x full_nodes[x['id']] = x
if page % 5 == 0: if page % 5 == 0:
logging.info(f"FETCH ComfyRegistry Data: {page}/{sub_json_obj['totalPages']}") print(f"FETCH ComfyRegistry Data: {page}/{sub_json_obj['totalPages']}")
page += 1 page += 1
time.sleep(0.5) time.sleep(0.5)
logging.info("FETCH ComfyRegistry Data [DONE]") print("FETCH ComfyRegistry Data [DONE]")
for v in full_nodes.values(): for v in full_nodes.values():
if 'latest_version' not in v: if 'latest_version' not in v:
@@ -101,7 +100,7 @@ async def _get_cnr_data(cache_mode=True, dont_wait=True):
if cache_state == 'not-cached': if cache_state == 'not-cached':
return {} return {}
else: else:
logging.info("[ComfyUI-Manager] The ComfyRegistry cache update is still in progress, so an outdated cache is being used.") print("[ComfyUI-Manager] The ComfyRegistry cache update is still in progress, so an outdated cache is being used.")
with open(manager_util.get_cache_path(uri), 'r', encoding="UTF-8", errors="ignore") as json_file: with open(manager_util.get_cache_path(uri), 'r', encoding="UTF-8", errors="ignore") as json_file:
return json.load(json_file)['nodes'] return json.load(json_file)['nodes']
@@ -115,7 +114,7 @@ async def _get_cnr_data(cache_mode=True, dont_wait=True):
return json_obj['nodes'] return json_obj['nodes']
except Exception: except Exception:
res = {} res = {}
logging.warning("Cannot connect to comfyregistry.") print("Cannot connect to comfyregistry.")
finally: finally:
if cache_mode: if cache_mode:
is_cache_loading = False is_cache_loading = False
@@ -181,7 +180,7 @@ def install_node(node_id, version=None):
else: else:
url = f"{base_url}/nodes/{node_id}/install?version={version}" url = f"{base_url}/nodes/{node_id}/install?version={version}"
response = requests.get(url, verify=not manager_util.bypass_ssl) response = requests.get(url)
if response.status_code == 200: if response.status_code == 200:
# Convert the API response to a NodeVersion object # Convert the API response to a NodeVersion object
return map_node_version(response.json()) return map_node_version(response.json())
@@ -192,7 +191,7 @@ def install_node(node_id, version=None):
def all_versions_of_node(node_id): def all_versions_of_node(node_id):
url = f"{base_url}/nodes/{node_id}/versions?statuses=NodeVersionStatusActive&statuses=NodeVersionStatusPending" url = f"{base_url}/nodes/{node_id}/versions?statuses=NodeVersionStatusActive&statuses=NodeVersionStatusPending"
response = requests.get(url, verify=not manager_util.bypass_ssl) response = requests.get(url)
if response.status_code == 200: if response.status_code == 200:
return response.json() return response.json()
else: else:
@@ -212,7 +211,6 @@ def read_cnr_info(fullpath):
project = data.get('project', {}) project = data.get('project', {})
name = project.get('name').strip().lower() name = project.get('name').strip().lower()
original_name = project.get('name')
# normalize version # normalize version
# for example: 2.5 -> 2.5.0 # for example: 2.5 -> 2.5.0
@@ -224,7 +222,6 @@ def read_cnr_info(fullpath):
if name and version: # repository is optional if name and version: # repository is optional
return { return {
"id": name, "id": name,
"original_name": original_name,
"version": version, "version": version,
"url": repository "url": repository
} }
@@ -241,7 +238,7 @@ def generate_cnr_id(fullpath, cnr_id):
with open(cnr_id_path, "w") as f: with open(cnr_id_path, "w") as f:
return f.write(cnr_id) return f.write(cnr_id)
except Exception: except Exception:
logging.error(f"[ComfyUI Manager] unable to create file: {cnr_id_path}") print(f"[ComfyUI Manager] unable to create file: {cnr_id_path}")
def read_cnr_id(fullpath): def read_cnr_id(fullpath):

View File

@@ -34,7 +34,7 @@ manager_pip_blacklist_path = None
manager_components_path = None manager_components_path = None
manager_batch_history_path = None manager_batch_history_path = None
def update_user_directory(manager_dir): def update_user_directory(user_dir):
global manager_files_path global manager_files_path
global manager_config_path global manager_config_path
global manager_channel_list_path global manager_channel_list_path
@@ -45,7 +45,7 @@ def update_user_directory(manager_dir):
global manager_components_path global manager_components_path
global manager_batch_history_path global manager_batch_history_path
manager_files_path = manager_dir manager_files_path = os.path.abspath(os.path.join(user_dir, 'default', 'ComfyUI-Manager'))
if not os.path.exists(manager_files_path): if not os.path.exists(manager_files_path):
os.makedirs(manager_files_path) os.makedirs(manager_files_path)
@@ -73,7 +73,7 @@ def update_user_directory(manager_dir):
try: try:
import folder_paths import folder_paths
update_user_directory(folder_paths.get_system_user_directory("manager")) update_user_directory(folder_paths.get_user_directory())
except Exception: except Exception:
# fallback: # fallback:
@@ -106,3 +106,4 @@ def get_comfyui_tag():
except Exception: except Exception:
return None return None

View File

@@ -4,7 +4,6 @@ class NetworkMode(enum.Enum):
PUBLIC = "public" PUBLIC = "public"
PRIVATE = "private" PRIVATE = "private"
OFFLINE = "offline" OFFLINE = "offline"
PERSONAL_CLOUD = "personal_cloud"
class SecurityLevel(enum.Enum): class SecurityLevel(enum.Enum):
STRONG = "strong" STRONG = "strong"

View File

@@ -46,8 +46,6 @@ def git_url(fullpath):
for k, v in config.items(): for k, v in config.items():
if k.startswith('remote ') and 'url' in v: if k.startswith('remote ') and 'url' in v:
if 'Comfy-Org/ComfyUI-Manager' in v['url']:
return "https://github.com/ltdrdata/ComfyUI-Manager"
return v['url'] return v['url']
return None return None

View File

@@ -55,11 +55,7 @@ def download_url(model_url: str, model_dir: str, filename: str):
return aria2_download_url(model_url, model_dir, filename) return aria2_download_url(model_url, model_dir, filename)
else: else:
from torchvision.datasets.utils import download_url as torchvision_download_url from torchvision.datasets.utils import download_url as torchvision_download_url
try: return torchvision_download_url(model_url, model_dir, filename)
return torchvision_download_url(model_url, model_dir, filename)
except Exception as e:
logging.error(f"[ComfyUI-Manager] Failed to download: {model_url} / {repr(e)}")
raise
def aria2_find_task(dir: str, filename: str): def aria2_find_task(dir: str, filename: str):

View File

@@ -1,36 +0,0 @@
from enum import Enum
is_personal_cloud_mode = False
handler_policy = {}
class HANDLER_POLICY(Enum):
MULTIPLE_REMOTE_BAN_NON_LOCAL = 1
MULTIPLE_REMOTE_BAN_NOT_PERSONAL_CLOUD = 2
BANNED = 3
def is_loopback(address):
import ipaddress
try:
return ipaddress.ip_address(address).is_loopback
except ValueError:
return False
def do_nothing():
pass
def get_handler_policy(x):
return handler_policy.get(x) or set()
def add_handler_policy(x, policy):
s = handler_policy.get(x)
if s is None:
s = set()
handler_policy[x] = s
s.add(policy)
multiple_remote_alert = do_nothing

View File

@@ -15,7 +15,7 @@ import re
import logging import logging
import platform import platform
import shlex import shlex
from functools import lru_cache from . import cm_global
cache_lock = threading.Lock() cache_lock = threading.Lock()
@@ -25,7 +25,6 @@ comfyui_manager_path = os.path.abspath(os.path.join(os.path.dirname(__file__), '
cache_dir = os.path.join(comfyui_manager_path, '.cache') # This path is also updated together in **manager_core.update_user_directory**. cache_dir = os.path.join(comfyui_manager_path, '.cache') # This path is also updated together in **manager_core.update_user_directory**.
use_uv = False use_uv = False
bypass_ssl = False
def is_manager_pip_package(): def is_manager_pip_package():
return not os.path.exists(os.path.join(comfyui_manager_path, '..', 'custom_nodes')) return not os.path.exists(os.path.join(comfyui_manager_path, '..', 'custom_nodes'))
@@ -39,64 +38,18 @@ def add_python_path_to_env():
os.environ['PATH'] = os.path.dirname(sys.executable)+sep+os.environ['PATH'] os.environ['PATH'] = os.path.dirname(sys.executable)+sep+os.environ['PATH']
@lru_cache(maxsize=2)
def get_pip_cmd(force_uv=False):
"""
Get the base pip command, with automatic fallback to uv if pip is unavailable.
Args:
force_uv (bool): If True, use uv directly without trying pip
Returns:
list: Base command for pip operations
"""
embedded = 'python_embeded' in sys.executable
# Try pip first (unless forcing uv)
if not force_uv:
try:
test_cmd = [sys.executable] + (['-s'] if embedded else []) + ['-m', 'pip', '--version']
subprocess.check_output(test_cmd, stderr=subprocess.DEVNULL, timeout=5)
return [sys.executable] + (['-s'] if embedded else []) + ['-m', 'pip']
except Exception:
logging.warning("[ComfyUI-Manager] python -m pip not available. Falling back to uv.")
# Try uv (either forced or pip failed)
import shutil
# Try uv as Python module
try:
test_cmd = [sys.executable] + (['-s'] if embedded else []) + ['-m', 'uv', '--version']
subprocess.check_output(test_cmd, stderr=subprocess.DEVNULL, timeout=5)
logging.info("[ComfyUI-Manager] Using uv as Python module for pip operations.")
return [sys.executable] + (['-s'] if embedded else []) + ['-m', 'uv', 'pip']
except Exception:
pass
# Try standalone uv
if shutil.which('uv'):
logging.info("[ComfyUI-Manager] Using standalone uv for pip operations.")
return ['uv', 'pip']
# Nothing worked
logging.error("[ComfyUI-Manager] Neither python -m pip nor uv are available. Cannot proceed with package operations.")
raise Exception("Neither pip nor uv are available for package management")
def make_pip_cmd(cmd): def make_pip_cmd(cmd):
""" if 'python_embeded' in sys.executable:
Create a pip command by combining the cached base pip command with the given arguments. if use_uv:
return [sys.executable, '-s', '-m', 'uv', 'pip'] + cmd
Args: else:
cmd (list): List of pip command arguments (e.g., ['install', 'package']) return [sys.executable, '-s', '-m', 'pip'] + cmd
else:
Returns: # FIXED: https://github.com/ltdrdata/ComfyUI-Manager/issues/1667
list: Complete command list ready for subprocess execution if use_uv:
""" return [sys.executable, '-m', 'uv', 'pip'] + cmd
global use_uv else:
base_cmd = get_pip_cmd(force_uv=use_uv) return [sys.executable, '-m', 'pip'] + cmd
return base_cmd + cmd
# DON'T USE StrictVersion - cannot handle pre_release version # DON'T USE StrictVersion - cannot handle pre_release version
# try: # try:
@@ -187,7 +140,7 @@ async def get_data(uri, silent=False):
print(f"FETCH DATA from: {uri}", end="") print(f"FETCH DATA from: {uri}", end="")
if uri.startswith("http"): if uri.startswith("http"):
async with aiohttp.ClientSession(trust_env=True, connector=aiohttp.TCPConnector(verify_ssl=not bypass_ssl)) as session: async with aiohttp.ClientSession(trust_env=True, connector=aiohttp.TCPConnector(verify_ssl=False)) as session:
headers = { headers = {
'Cache-Control': 'no-cache', 'Cache-Control': 'no-cache',
'Pragma': 'no-cache', 'Pragma': 'no-cache',
@@ -377,32 +330,6 @@ torch_torchvision_torchaudio_version_map = {
} }
def torch_rollback(prev):
spec = prev.split('+')
if len(spec) > 1:
platform = spec[1]
else:
cmd = make_pip_cmd(['install', '--force', 'torch', 'torchvision', 'torchaudio'])
subprocess.check_output(cmd, universal_newlines=True)
logging.error(cmd)
return
torch_ver = StrictVersion(spec[0])
torch_ver = f"{torch_ver.major}.{torch_ver.minor}.{torch_ver.patch}"
torch_torchvision_torchaudio_ver = torch_torchvision_torchaudio_version_map.get(torch_ver)
if torch_torchvision_torchaudio_ver is None:
cmd = make_pip_cmd(['install', '--pre', 'torch', 'torchvision', 'torchaudio',
'--index-url', f"https://download.pytorch.org/whl/nightly/{platform}"])
logging.info("[ComfyUI-Manager] restore PyTorch to nightly version")
else:
torchvision_ver, torchaudio_ver = torch_torchvision_torchaudio_ver
cmd = make_pip_cmd(['install', f'torch=={torch_ver}', f'torchvision=={torchvision_ver}', f"torchaudio=={torchaudio_ver}",
'--index-url', f"https://download.pytorch.org/whl/{platform}"])
logging.info(f"[ComfyUI-Manager] restore PyTorch to {torch_ver}+{platform}")
subprocess.check_output(cmd, universal_newlines=True)
class PIPFixer: class PIPFixer:
def __init__(self, prev_pip_versions, comfyui_path, manager_files_path): def __init__(self, prev_pip_versions, comfyui_path, manager_files_path):
@@ -410,6 +337,32 @@ class PIPFixer:
self.comfyui_path = comfyui_path self.comfyui_path = comfyui_path
self.manager_files_path = manager_files_path self.manager_files_path = manager_files_path
def torch_rollback(self):
spec = self.prev_pip_versions['torch'].split('+')
if len(spec) > 0:
platform = spec[1]
else:
cmd = make_pip_cmd(['install', '--force', 'torch', 'torchvision', 'torchaudio'])
subprocess.check_output(cmd, universal_newlines=True)
logging.error(cmd)
return
torch_ver = StrictVersion(spec[0])
torch_ver = f"{torch_ver.major}.{torch_ver.minor}.{torch_ver.patch}"
torch_torchvision_torchaudio_ver = torch_torchvision_torchaudio_version_map.get(torch_ver)
if torch_torchvision_torchaudio_ver is None:
cmd = make_pip_cmd(['install', '--pre', 'torch', 'torchvision', 'torchaudio',
'--index-url', f"https://download.pytorch.org/whl/nightly/{platform}"])
logging.info("[ComfyUI-Manager] restore PyTorch to nightly version")
else:
torchvision_ver, torchaudio_ver = torch_torchvision_torchaudio_ver
cmd = make_pip_cmd(['install', f'torch=={torch_ver}', f'torchvision=={torchvision_ver}', f"torchaudio=={torchaudio_ver}",
'--index-url', f"https://download.pytorch.org/whl/{platform}"])
logging.info(f"[ComfyUI-Manager] restore PyTorch to {torch_ver}+{platform}")
subprocess.check_output(cmd, universal_newlines=True)
def fix_broken(self): def fix_broken(self):
new_pip_versions = get_installed_packages(True) new_pip_versions = get_installed_packages(True)
@@ -431,7 +384,7 @@ class PIPFixer:
elif self.prev_pip_versions['torch'] != new_pip_versions['torch'] \ elif self.prev_pip_versions['torch'] != new_pip_versions['torch'] \
or self.prev_pip_versions['torchvision'] != new_pip_versions['torchvision'] \ or self.prev_pip_versions['torchvision'] != new_pip_versions['torchvision'] \
or self.prev_pip_versions['torchaudio'] != new_pip_versions['torchaudio']: or self.prev_pip_versions['torchaudio'] != new_pip_versions['torchaudio']:
torch_rollback(self.prev_pip_versions['torch']) self.torch_rollback()
except Exception as e: except Exception as e:
logging.error("[ComfyUI-Manager] Failed to restore PyTorch") logging.error("[ComfyUI-Manager] Failed to restore PyTorch")
logging.error(e) logging.error(e)
@@ -462,14 +415,32 @@ class PIPFixer:
if len(targets) > 0: if len(targets) > 0:
for x in targets: for x in targets:
cmd = make_pip_cmd(['install', f"{x}=={versions[0].version_string}"]) if sys.version_info < (3, 13):
subprocess.check_output(cmd, universal_newlines=True) cmd = make_pip_cmd(['install', f"{x}=={versions[0].version_string}", "numpy<2"])
subprocess.check_output(cmd, universal_newlines=True)
logging.info(f"[ComfyUI-Manager] 'opencv' dependencies were fixed: {targets}") logging.info(f"[ComfyUI-Manager] 'opencv' dependencies were fixed: {targets}")
except Exception as e: except Exception as e:
logging.error("[ComfyUI-Manager] Failed to restore opencv") logging.error("[ComfyUI-Manager] Failed to restore opencv")
logging.error(e) logging.error(e)
# fix numpy
if sys.version_info >= (3, 13):
logging.info("[ComfyUI-Manager] In Python 3.13 and above, PIP Fixer does not downgrade `numpy` below version 2.0. If you need to force a downgrade of `numpy`, please use `pip_auto_fix.list`.")
else:
try:
np = new_pip_versions.get('numpy')
if cm_global.pip_overrides.get('numpy') == 'numpy<2':
if np is not None:
if StrictVersion(np) >= StrictVersion('2'):
cmd = make_pip_cmd(['install', "numpy<2"])
subprocess.check_output(cmd , universal_newlines=True)
logging.info("[ComfyUI-Manager] 'numpy' dependency were fixed")
except Exception as e:
logging.error("[ComfyUI-Manager] Failed to restore numpy")
logging.error(e)
# fix missing frontend # fix missing frontend
try: try:
# NOTE: package name in requirements is 'comfyui-frontend-package' # NOTE: package name in requirements is 'comfyui-frontend-package'
@@ -569,69 +540,3 @@ def robust_readlines(fullpath):
print(f"[ComfyUI-Manager] Failed to recognize encoding for: {fullpath}") print(f"[ComfyUI-Manager] Failed to recognize encoding for: {fullpath}")
return [] return []
def restore_pip_snapshot(pips, options):
non_url = []
local_url = []
non_local_url = []
for k, v in pips.items():
# NOTE: skip torch related packages
if k.startswith("torch==") or k.startswith("torchvision==") or k.startswith("torchaudio==") or k.startswith("nvidia-"):
continue
if v == "":
non_url.append(k)
else:
if v.startswith('file:'):
local_url.append(v)
else:
non_local_url.append(v)
# restore other pips
failed = []
if '--pip-non-url' in options:
# try all at once
res = 1
try:
res = subprocess.check_output(make_pip_cmd(['install'] + non_url))
except Exception:
pass
# fallback
if res != 0:
for x in non_url:
res = 1
try:
res = subprocess.check_output(make_pip_cmd(['install', '--no-deps', x]))
except Exception:
pass
if res != 0:
failed.append(x)
if '--pip-non-local-url' in options:
for x in non_local_url:
res = 1
try:
res = subprocess.check_output(make_pip_cmd(['install', '--no-deps', x]))
except Exception:
pass
if res != 0:
failed.append(x)
if '--pip-local-url' in options:
for x in local_url:
res = 1
try:
res = subprocess.check_output(make_pip_cmd(['install', '--no-deps', x]))
except Exception:
pass
if res != 0:
failed.append(x)
print(f"Installation failed for pip packages: {failed}")

View File

@@ -2,8 +2,6 @@ import sys
import subprocess import subprocess
import os import os
from . import manager_util
def security_check(): def security_check():
print("[START] Security scan") print("[START] Security scan")
@@ -68,23 +66,18 @@ https://blog.comfy.org/comfyui-statement-on-the-ultralytics-crypto-miner-situati
"lolMiner": [os.path.join(comfyui_path, 'lolMiner')] "lolMiner": [os.path.join(comfyui_path, 'lolMiner')]
} }
installed_pips = subprocess.check_output(manager_util.make_pip_cmd(["freeze"]), text=True) installed_pips = subprocess.check_output([sys.executable, '-m', "pip", "freeze"], text=True)
detected = set() detected = set()
try: try:
anthropic_info = subprocess.check_output(manager_util.make_pip_cmd(["show", "anthropic"]), text=True, stderr=subprocess.DEVNULL) anthropic_info = subprocess.check_output([sys.executable, '-m', "pip", "show", "anthropic"], text=True, stderr=subprocess.DEVNULL)
requires_lines = [x for x in anthropic_info.split('\n') if x.startswith("Requires")] anthropic_reqs = [x for x in anthropic_info.split('\n') if x.startswith("Requires")][0].split(': ')[1]
if requires_lines: if "pycrypto" in anthropic_reqs:
anthropic_reqs = requires_lines[0].split(": ", 1)[1] location = [x for x in anthropic_info.split('\n') if x.startswith("Location")][0].split(': ')[1]
if "pycrypto" in anthropic_reqs: for fi in os.listdir(location):
location_lines = [x for x in anthropic_info.split('\n') if x.startswith("Location")] if fi.startswith("anthropic"):
if location_lines: guide["ComfyUI_LLMVISION"] = f"\n0.Remove {os.path.join(location, fi)}" + guide["ComfyUI_LLMVISION"]
location = location_lines[0].split(": ", 1)[1] detected.add("ComfyUI_LLMVISION")
for fi in os.listdir(location):
if fi.startswith("anthropic"):
guide["ComfyUI_LLMVISION"] = (f"\n0.Remove {os.path.join(location, fi)}" + guide["ComfyUI_LLMVISION"])
detected.add("ComfyUI_LLMVISION")
except subprocess.CalledProcessError: except subprocess.CalledProcessError:
pass pass

View File

@@ -29,7 +29,6 @@ datamodel-codegen \
--use-subclass-enum \ --use-subclass-enum \
--field-constraints \ --field-constraints \
--strict-types bytes \ --strict-types bytes \
--use-double-quotes \
--input openapi.yaml \ --input openapi.yaml \
--output comfyui_manager/data_models/generated_models.py \ --output comfyui_manager/data_models/generated_models.py \
--output-model-type pydantic_v2.BaseModel --output-model-type pydantic_v2.BaseModel

View File

@@ -30,15 +30,9 @@ from .generated_models import (
InstalledModelInfo, InstalledModelInfo,
ComfyUIVersionInfo, ComfyUIVersionInfo,
# Import Fail Info Models
ImportFailInfoBulkRequest,
ImportFailInfoBulkResponse,
ImportFailInfoItem,
ImportFailInfoItem1,
# Other models # Other models
OperationType, Kind,
OperationResult, StatusStr,
ManagerPackInfo, ManagerPackInfo,
ManagerPackInstalled, ManagerPackInstalled,
SelectedVersion, SelectedVersion,
@@ -48,16 +42,7 @@ from .generated_models import (
ManagerPackInstallType, ManagerPackInstallType,
ManagerPack, ManagerPack,
InstallPackParams, InstallPackParams,
UpdatePackParams,
UpdateAllPacksParams, UpdateAllPacksParams,
UpdateComfyUIParams,
FixPackParams,
UninstallPackParams,
DisablePackParams,
EnablePackParams,
UpdateAllQueryParams,
UpdateComfyUIQueryParams,
ComfyUISwitchVersionQueryParams,
QueueStatus, QueueStatus,
ManagerMappings, ManagerMappings,
ModelMetadata, ModelMetadata,
@@ -68,8 +53,8 @@ from .generated_models import (
HistoryResponse, HistoryResponse,
HistoryListResponse, HistoryListResponse,
InstallType, InstallType,
SecurityLevel, OperationType,
RiskLevel, Result,
) )
__all__ = [ __all__ = [
@@ -94,15 +79,9 @@ __all__ = [
"InstalledModelInfo", "InstalledModelInfo",
"ComfyUIVersionInfo", "ComfyUIVersionInfo",
# Import Fail Info Models
"ImportFailInfoBulkRequest",
"ImportFailInfoBulkResponse",
"ImportFailInfoItem",
"ImportFailInfoItem1",
# Other models # Other models
"OperationType", "Kind",
"OperationResult", "StatusStr",
"ManagerPackInfo", "ManagerPackInfo",
"ManagerPackInstalled", "ManagerPackInstalled",
"SelectedVersion", "SelectedVersion",
@@ -112,16 +91,7 @@ __all__ = [
"ManagerPackInstallType", "ManagerPackInstallType",
"ManagerPack", "ManagerPack",
"InstallPackParams", "InstallPackParams",
"UpdatePackParams",
"UpdateAllPacksParams", "UpdateAllPacksParams",
"UpdateComfyUIParams",
"FixPackParams",
"UninstallPackParams",
"DisablePackParams",
"EnablePackParams",
"UpdateAllQueryParams",
"UpdateComfyUIQueryParams",
"ComfyUISwitchVersionQueryParams",
"QueueStatus", "QueueStatus",
"ManagerMappings", "ManagerMappings",
"ModelMetadata", "ModelMetadata",
@@ -132,6 +102,6 @@ __all__ = [
"HistoryResponse", "HistoryResponse",
"HistoryListResponse", "HistoryListResponse",
"InstallType", "InstallType",
"SecurityLevel", "OperationType",
"RiskLevel", "Result",
] ]

View File

@@ -1,6 +1,6 @@
# generated by datamodel-codegen: # generated by datamodel-codegen:
# filename: openapi.yaml # filename: openapi.yaml
# timestamp: 2025-07-31T04:52:26+00:00 # timestamp: 2025-06-08T08:07:38+00:00
from __future__ import annotations from __future__ import annotations
@@ -11,298 +11,213 @@ from typing import Any, Dict, List, Optional, Union
from pydantic import BaseModel, Field, RootModel from pydantic import BaseModel, Field, RootModel
class OperationType(str, Enum): class Kind(str, Enum):
install = "install" install = 'install'
uninstall = "uninstall" uninstall = 'uninstall'
update = "update" update = 'update'
update_comfyui = "update-comfyui" update_all = 'update-all'
fix = "fix" update_comfyui = 'update-comfyui'
disable = "disable" fix = 'fix'
enable = "enable" disable = 'disable'
install_model = "install-model" enable = 'enable'
install_model = 'install-model'
class OperationResult(str, Enum): class QueueTaskItem(BaseModel):
success = "success" ui_id: str = Field(..., description='Unique identifier for the task')
failed = "failed" client_id: str = Field(..., description='Client identifier that initiated the task')
skipped = "skipped" kind: Kind = Field(..., description='Type of task being performed')
error = "error"
skip = "skip"
class StatusStr(str, Enum):
success = 'success'
error = 'error'
skip = 'skip'
class TaskExecutionStatus(BaseModel): class TaskExecutionStatus(BaseModel):
status_str: OperationResult status_str: StatusStr = Field(..., description='Overall task execution status')
completed: bool = Field(..., description="Whether the task completed") completed: bool = Field(..., description='Whether the task completed')
messages: List[str] = Field(..., description="Additional status messages") messages: List[str] = Field(..., description='Additional status messages')
class ManagerMessageName(str, Enum): class ManagerMessageName(str, Enum):
cm_task_completed = "cm-task-completed" cm_task_completed = 'cm-task-completed'
cm_task_started = "cm-task-started" cm_task_started = 'cm-task-started'
cm_queue_status = "cm-queue-status" cm_queue_status = 'cm-queue-status'
class ManagerPackInfo(BaseModel): class ManagerPackInfo(BaseModel):
id: str = Field( id: str = Field(
..., ...,
description="Either github-author/github-repo or name of pack from the registry", description='Either github-author/github-repo or name of pack from the registry',
) )
version: str = Field(..., description="Semantic version or Git commit hash") version: str = Field(..., description='Semantic version or Git commit hash')
ui_id: Optional[str] = Field(None, description="Task ID - generated internally") ui_id: Optional[str] = Field(None, description='Task ID - generated internally')
class ManagerPackInstalled(BaseModel): class ManagerPackInstalled(BaseModel):
ver: str = Field( ver: str = Field(
..., ...,
description="The version of the pack that is installed (Git commit hash or semantic version)", description='The version of the pack that is installed (Git commit hash or semantic version)',
) )
cnr_id: Optional[str] = Field( cnr_id: Optional[str] = Field(
None, description="The name of the pack if installed from the registry" None, description='The name of the pack if installed from the registry'
) )
aux_id: Optional[str] = Field( aux_id: Optional[str] = Field(
None, None,
description="The name of the pack if installed from github (author/repo-name format)", description='The name of the pack if installed from github (author/repo-name format)',
) )
enabled: bool = Field(..., description="Whether the pack is enabled") enabled: bool = Field(..., description='Whether the pack is enabled')
class SelectedVersion(str, Enum): class SelectedVersion(str, Enum):
latest = "latest" latest = 'latest'
nightly = "nightly" nightly = 'nightly'
class ManagerChannel(str, Enum): class ManagerChannel(str, Enum):
default = "default" default = 'default'
recent = "recent" recent = 'recent'
legacy = "legacy" legacy = 'legacy'
forked = "forked" forked = 'forked'
dev = "dev" dev = 'dev'
tutorial = "tutorial" tutorial = 'tutorial'
class ManagerDatabaseSource(str, Enum): class ManagerDatabaseSource(str, Enum):
remote = "remote" remote = 'remote'
local = "local" local = 'local'
cache = "cache" cache = 'cache'
class ManagerPackState(str, Enum): class ManagerPackState(str, Enum):
installed = "installed" installed = 'installed'
disabled = "disabled" disabled = 'disabled'
not_installed = "not_installed" not_installed = 'not_installed'
import_failed = "import_failed" import_failed = 'import_failed'
needs_update = "needs_update" needs_update = 'needs_update'
class ManagerPackInstallType(str, Enum): class ManagerPackInstallType(str, Enum):
git_clone = "git-clone" git_clone = 'git-clone'
copy = "copy" copy = 'copy'
cnr = "cnr" cnr = 'cnr'
class SecurityLevel(str, Enum): class UpdateState(str, Enum):
strong = "strong" false = 'false'
normal = "normal" true = 'true'
normal_ = "normal-"
weak = "weak"
class RiskLevel(str, Enum):
block = "block"
high_ = "high+"
high = "high"
middle_ = "middle+"
middle = "middle"
class UpdateState(Enum):
false = "false"
true = "true"
class ManagerPack(ManagerPackInfo): class ManagerPack(ManagerPackInfo):
author: Optional[str] = Field( author: Optional[str] = Field(
None, description="Pack author name or 'Unclaimed' if added via GitHub crawl" None, description="Pack author name or 'Unclaimed' if added via GitHub crawl"
) )
files: Optional[List[str]] = Field( files: Optional[List[str]] = Field(None, description='Files included in the pack')
None,
description="Repository URLs for installation (typically contains one GitHub URL)",
)
reference: Optional[str] = Field( reference: Optional[str] = Field(
None, description="The type of installation reference" None, description='The type of installation reference'
) )
title: Optional[str] = Field(None, description="The display name of the pack") title: Optional[str] = Field(None, description='The display name of the pack')
cnr_latest: Optional[SelectedVersion] = None cnr_latest: Optional[SelectedVersion] = None
repository: Optional[str] = Field(None, description="GitHub repository URL") repository: Optional[str] = Field(None, description='GitHub repository URL')
state: Optional[ManagerPackState] = None state: Optional[ManagerPackState] = None
update_state: Optional[UpdateState] = Field( update_state: Optional[UpdateState] = Field(
None, alias="update-state", description="Update availability status" None, alias='update-state', description='Update availability status'
) )
stars: Optional[int] = Field(None, description="GitHub stars count") stars: Optional[int] = Field(None, description='GitHub stars count')
last_update: Optional[datetime] = Field(None, description="Last update timestamp") last_update: Optional[datetime] = Field(None, description='Last update timestamp')
health: Optional[str] = Field(None, description="Health status of the pack") health: Optional[str] = Field(None, description='Health status of the pack')
description: Optional[str] = Field(None, description="Pack description") description: Optional[str] = Field(None, description='Pack description')
trust: Optional[bool] = Field(None, description="Whether the pack is trusted") trust: Optional[bool] = Field(None, description='Whether the pack is trusted')
install_type: Optional[ManagerPackInstallType] = None install_type: Optional[ManagerPackInstallType] = None
class InstallPackParams(ManagerPackInfo): class InstallPackParams(ManagerPackInfo):
selected_version: Union[str, SelectedVersion] = Field( selected_version: Union[str, SelectedVersion] = Field(
..., description="Semantic version, Git commit hash, latest, or nightly" ..., description='Semantic version, Git commit hash, latest, or nightly'
) )
repository: Optional[str] = Field( repository: Optional[str] = Field(
None, None,
description="GitHub repository URL (required if selected_version is nightly)", description='GitHub repository URL (required if selected_version is nightly)',
) )
pip: Optional[List[str]] = Field(None, description="PyPi dependency names") pip: Optional[List[str]] = Field(None, description='PyPi dependency names')
mode: ManagerDatabaseSource mode: ManagerDatabaseSource
channel: ManagerChannel channel: ManagerChannel
skip_post_install: Optional[bool] = Field( skip_post_install: Optional[bool] = Field(
None, description="Whether to skip post-installation steps" None, description='Whether to skip post-installation steps'
) )
class UpdateAllPacksParams(BaseModel): class UpdateAllPacksParams(BaseModel):
mode: Optional[ManagerDatabaseSource] = None mode: Optional[ManagerDatabaseSource] = None
ui_id: Optional[str] = Field(None, description="Task ID - generated internally") ui_id: Optional[str] = Field(None, description='Task ID - generated internally')
class UpdatePackParams(BaseModel):
node_name: str = Field(..., description="Name of the node package to update")
node_ver: Optional[str] = Field(
None, description="Current version of the node package"
)
class UpdateComfyUIParams(BaseModel):
is_stable: Optional[bool] = Field(
True,
description="Whether to update to stable version (true) or nightly (false)",
)
target_version: Optional[str] = Field(
None,
description="Specific version to switch to (for version switching operations)",
)
class FixPackParams(BaseModel):
node_name: str = Field(..., description="Name of the node package to fix")
node_ver: str = Field(..., description="Version of the node package")
class UninstallPackParams(BaseModel):
node_name: str = Field(..., description="Name of the node package to uninstall")
is_unknown: Optional[bool] = Field(
False, description="Whether this is an unknown/unregistered package"
)
class DisablePackParams(BaseModel):
node_name: str = Field(..., description="Name of the node package to disable")
is_unknown: Optional[bool] = Field(
False, description="Whether this is an unknown/unregistered package"
)
class EnablePackParams(BaseModel):
cnr_id: str = Field(
..., description="ComfyUI Node Registry ID of the package to enable"
)
class UpdateAllQueryParams(BaseModel):
client_id: str = Field(
..., description="Client identifier that initiated the request"
)
ui_id: str = Field(..., description="Base UI identifier for task tracking")
mode: Optional[ManagerDatabaseSource] = None
class UpdateComfyUIQueryParams(BaseModel):
client_id: str = Field(
..., description="Client identifier that initiated the request"
)
ui_id: str = Field(..., description="UI identifier for task tracking")
stable: Optional[bool] = Field(
True,
description="Whether to update to stable version (true) or nightly (false)",
)
class ComfyUISwitchVersionQueryParams(BaseModel):
ver: str = Field(..., description="Version to switch to")
client_id: str = Field(
..., description="Client identifier that initiated the request"
)
ui_id: str = Field(..., description="UI identifier for task tracking")
class QueueStatus(BaseModel): class QueueStatus(BaseModel):
total_count: int = Field( total_count: int = Field(
..., description="Total number of tasks (pending + running)" ..., description='Total number of tasks (pending + running)'
) )
done_count: int = Field(..., description="Number of completed tasks") done_count: int = Field(..., description='Number of completed tasks')
in_progress_count: int = Field(..., description="Number of tasks currently running") in_progress_count: int = Field(..., description='Number of tasks currently running')
pending_count: Optional[int] = Field( pending_count: Optional[int] = Field(
None, description="Number of tasks waiting to be executed" None, description='Number of tasks waiting to be executed'
) )
is_processing: bool = Field(..., description="Whether the task worker is active") is_processing: bool = Field(..., description='Whether the task worker is active')
client_id: Optional[str] = Field( client_id: Optional[str] = Field(
None, description="Client ID (when filtered by client)" None, description='Client ID (when filtered by client)'
) )
class ManagerMappings1(BaseModel): class ManagerMapping(BaseModel):
title_aux: Optional[str] = Field(None, description="The display name of the pack") title_aux: Optional[str] = Field(None, description='The display name of the pack')
class ManagerMappings( class ManagerMappings(
RootModel[Optional[Dict[str, List[Union[List[str], ManagerMappings1]]]]] RootModel[Optional[Dict[str, List[Union[List[str], ManagerMapping]]]]]
): ):
root: Optional[Dict[str, List[Union[List[str], ManagerMappings1]]]] = Field( root: Optional[Dict[str, List[Union[List[str], ManagerMapping]]]] = None
None, description="Tuple of [node_names, metadata]"
)
class ModelMetadata(BaseModel): class ModelMetadata(BaseModel):
name: str = Field(..., description="Name of the model") name: str = Field(..., description='Name of the model')
type: str = Field(..., description="Type of model") type: str = Field(..., description='Type of model')
base: Optional[str] = Field(None, description="Base model type") base: Optional[str] = Field(None, description='Base model type')
save_path: Optional[str] = Field(None, description="Path for saving the model") save_path: Optional[str] = Field(None, description='Path for saving the model')
url: str = Field(..., description="Download URL") url: str = Field(..., description='Download URL')
filename: str = Field(..., description="Target filename") filename: str = Field(..., description='Target filename')
ui_id: Optional[str] = Field(None, description="ID for UI reference") ui_id: Optional[str] = Field(None, description='ID for UI reference')
class InstallType(str, Enum): class InstallType(str, Enum):
git = "git" git = 'git'
copy = "copy" copy = 'copy'
pip = "pip" pip = 'pip'
class NodePackageMetadata(BaseModel): class NodePackageMetadata(BaseModel):
title: Optional[str] = Field(None, description="Display name of the node package") title: Optional[str] = Field(None, description='Display name of the node package')
name: Optional[str] = Field(None, description="Repository/package name") name: Optional[str] = Field(None, description='Repository/package name')
files: Optional[List[str]] = Field(None, description="Source URLs for the package") files: Optional[List[str]] = Field(None, description='Source URLs for the package')
description: Optional[str] = Field( description: Optional[str] = Field(
None, description="Description of the node package functionality" None, description='Description of the node package functionality'
) )
install_type: Optional[InstallType] = Field(None, description="Installation method") install_type: Optional[InstallType] = Field(None, description='Installation method')
version: Optional[str] = Field(None, description="Version identifier") version: Optional[str] = Field(None, description='Version identifier')
id: Optional[str] = Field( id: Optional[str] = Field(
None, description="Unique identifier for the node package" None, description='Unique identifier for the node package'
) )
ui_id: Optional[str] = Field(None, description="ID for UI reference") ui_id: Optional[str] = Field(None, description='ID for UI reference')
channel: Optional[str] = Field(None, description="Source channel") channel: Optional[str] = Field(None, description='Source channel')
mode: Optional[str] = Field(None, description="Source mode") mode: Optional[str] = Field(None, description='Source mode')
class SnapshotItem(RootModel[str]): class SnapshotItem(RootModel[str]):
root: str = Field(..., description="Name of the snapshot") root: str = Field(..., description='Name of the snapshot')
class Error(BaseModel): class Error(BaseModel):
error: str = Field(..., description="Error message") error: str = Field(..., description='Error message')
class InstalledPacksResponse(RootModel[Optional[Dict[str, ManagerPackInstalled]]]): class InstalledPacksResponse(RootModel[Optional[Dict[str, ManagerPackInstalled]]]):
@@ -311,235 +226,180 @@ class InstalledPacksResponse(RootModel[Optional[Dict[str, ManagerPackInstalled]]
class HistoryListResponse(BaseModel): class HistoryListResponse(BaseModel):
ids: Optional[List[str]] = Field( ids: Optional[List[str]] = Field(
None, description="List of available batch history IDs" None, description='List of available batch history IDs'
) )
class InstalledNodeInfo(BaseModel): class InstalledNodeInfo(BaseModel):
name: str = Field(..., description="Node package name") name: str = Field(..., description='Node package name')
version: str = Field(..., description="Installed version") version: str = Field(..., description='Installed version')
repository_url: Optional[str] = Field(None, description="Git repository URL") repository_url: Optional[str] = Field(None, description='Git repository URL')
install_method: str = Field( install_method: str = Field(
..., description="Installation method (cnr, git, pip, etc.)" ..., description='Installation method (cnr, git, pip, etc.)'
) )
enabled: Optional[bool] = Field( enabled: Optional[bool] = Field(
True, description="Whether the node is currently enabled" True, description='Whether the node is currently enabled'
) )
install_date: Optional[datetime] = Field( install_date: Optional[datetime] = Field(
None, description="ISO timestamp of installation" None, description='ISO timestamp of installation'
) )
class InstalledModelInfo(BaseModel): class InstalledModelInfo(BaseModel):
name: str = Field(..., description="Model filename") name: str = Field(..., description='Model filename')
path: str = Field(..., description="Full path to model file") path: str = Field(..., description='Full path to model file')
type: str = Field(..., description="Model type (checkpoint, lora, vae, etc.)") type: str = Field(..., description='Model type (checkpoint, lora, vae, etc.)')
size_bytes: Optional[int] = Field(None, description="File size in bytes", ge=0) size_bytes: Optional[int] = Field(None, description='File size in bytes', ge=0)
hash: Optional[str] = Field(None, description="Model file hash for verification") hash: Optional[str] = Field(None, description='Model file hash for verification')
install_date: Optional[datetime] = Field( install_date: Optional[datetime] = Field(
None, description="ISO timestamp when added" None, description='ISO timestamp when added'
) )
class ComfyUIVersionInfo(BaseModel): class ComfyUIVersionInfo(BaseModel):
version: str = Field(..., description="ComfyUI version string") version: str = Field(..., description='ComfyUI version string')
commit_hash: Optional[str] = Field(None, description="Git commit hash") commit_hash: Optional[str] = Field(None, description='Git commit hash')
branch: Optional[str] = Field(None, description="Git branch name") branch: Optional[str] = Field(None, description='Git branch name')
is_stable: Optional[bool] = Field( is_stable: Optional[bool] = Field(
False, description="Whether this is a stable release" False, description='Whether this is a stable release'
) )
last_updated: Optional[datetime] = Field( last_updated: Optional[datetime] = Field(
None, description="ISO timestamp of last update" None, description='ISO timestamp of last update'
) )
class OperationType(str, Enum):
install = 'install'
update = 'update'
uninstall = 'uninstall'
fix = 'fix'
disable = 'disable'
enable = 'enable'
install_model = 'install-model'
class Result(str, Enum):
success = 'success'
failed = 'failed'
skipped = 'skipped'
class BatchOperation(BaseModel): class BatchOperation(BaseModel):
operation_id: str = Field(..., description="Unique operation identifier") operation_id: str = Field(..., description='Unique operation identifier')
operation_type: OperationType operation_type: OperationType = Field(..., description='Type of operation')
target: str = Field( target: str = Field(
..., description="Target of the operation (node name, model name, etc.)" ..., description='Target of the operation (node name, model name, etc.)'
) )
target_version: Optional[str] = Field( target_version: Optional[str] = Field(
None, description="Target version for the operation" None, description='Target version for the operation'
) )
result: OperationResult result: Result = Field(..., description='Operation result')
error_message: Optional[str] = Field( error_message: Optional[str] = Field(
None, description="Error message if operation failed" None, description='Error message if operation failed'
) )
start_time: datetime = Field( start_time: datetime = Field(
..., description="ISO timestamp when operation started" ..., description='ISO timestamp when operation started'
) )
end_time: Optional[datetime] = Field( end_time: Optional[datetime] = Field(
None, description="ISO timestamp when operation completed" None, description='ISO timestamp when operation completed'
) )
client_id: Optional[str] = Field( client_id: Optional[str] = Field(
None, description="Client that initiated the operation" None, description='Client that initiated the operation'
) )
class ComfyUISystemState(BaseModel): class ComfyUISystemState(BaseModel):
snapshot_time: datetime = Field( snapshot_time: datetime = Field(
..., description="ISO timestamp when snapshot was taken" ..., description='ISO timestamp when snapshot was taken'
) )
comfyui_version: ComfyUIVersionInfo comfyui_version: ComfyUIVersionInfo
frontend_version: Optional[str] = Field( frontend_version: Optional[str] = Field(
None, description="ComfyUI frontend version if available" None, description='ComfyUI frontend version if available'
) )
python_version: str = Field(..., description="Python interpreter version") python_version: str = Field(..., description='Python interpreter version')
platform_info: str = Field( platform_info: str = Field(
..., description="Operating system and platform information" ..., description='Operating system and platform information'
) )
installed_nodes: Optional[Dict[str, InstalledNodeInfo]] = Field( installed_nodes: Optional[Dict[str, InstalledNodeInfo]] = Field(
None, description="Map of installed node packages by name" None, description='Map of installed node packages by name'
) )
installed_models: Optional[Dict[str, InstalledModelInfo]] = Field( installed_models: Optional[Dict[str, InstalledModelInfo]] = Field(
None, description="Map of installed models by name" None, description='Map of installed models by name'
) )
manager_config: Optional[Dict[str, Any]] = Field( manager_config: Optional[Dict[str, Any]] = Field(
None, description="ComfyUI Manager configuration settings" None, description='ComfyUI Manager configuration settings'
)
comfyui_root_path: Optional[str] = Field(
None, description="ComfyUI root installation directory"
)
model_paths: Optional[Dict[str, List[str]]] = Field(
None, description="Map of model types to their configured paths"
)
manager_version: Optional[str] = Field(None, description="ComfyUI Manager version")
security_level: Optional[SecurityLevel] = None
network_mode: Optional[str] = Field(
None, description="Network mode (online, offline, private)"
)
cli_args: Optional[Dict[str, Any]] = Field(
None, description="Selected ComfyUI CLI arguments"
)
custom_nodes_count: Optional[int] = Field(
None, description="Total number of custom node packages", ge=0
)
failed_imports: Optional[List[str]] = Field(
None, description="List of custom nodes that failed to import"
)
pip_packages: Optional[Dict[str, str]] = Field(
None, description="Map of installed pip packages to their versions"
)
embedded_python: Optional[bool] = Field(
None,
description="Whether ComfyUI is running from an embedded Python distribution",
) )
class BatchExecutionRecord(BaseModel): class BatchExecutionRecord(BaseModel):
batch_id: str = Field(..., description="Unique batch identifier") batch_id: str = Field(..., description='Unique batch identifier')
start_time: datetime = Field(..., description="ISO timestamp when batch started") start_time: datetime = Field(..., description='ISO timestamp when batch started')
end_time: Optional[datetime] = Field( end_time: Optional[datetime] = Field(
None, description="ISO timestamp when batch completed" None, description='ISO timestamp when batch completed'
) )
state_before: ComfyUISystemState state_before: ComfyUISystemState
state_after: Optional[ComfyUISystemState] = Field( state_after: Optional[ComfyUISystemState] = Field(
None, description="System state after batch execution" None, description='System state after batch execution'
) )
operations: Optional[List[BatchOperation]] = Field( operations: Optional[List[BatchOperation]] = Field(
None, description="List of operations performed in this batch" None, description='List of operations performed in this batch'
) )
total_operations: Optional[int] = Field( total_operations: Optional[int] = Field(
0, description="Total number of operations in batch", ge=0 0, description='Total number of operations in batch', ge=0
) )
successful_operations: Optional[int] = Field( successful_operations: Optional[int] = Field(
0, description="Number of successful operations", ge=0 0, description='Number of successful operations', ge=0
) )
failed_operations: Optional[int] = Field( failed_operations: Optional[int] = Field(
0, description="Number of failed operations", ge=0 0, description='Number of failed operations', ge=0
) )
skipped_operations: Optional[int] = Field( skipped_operations: Optional[int] = Field(
0, description="Number of skipped operations", ge=0 0, description='Number of skipped operations', ge=0
) )
class ImportFailInfoBulkRequest(BaseModel):
cnr_ids: Optional[List[str]] = Field(
None, description="A list of CNR IDs to check."
)
urls: Optional[List[str]] = Field(
None, description="A list of repository URLs to check."
)
class ImportFailInfoItem1(BaseModel):
error: Optional[str] = None
traceback: Optional[str] = None
class ImportFailInfoItem(RootModel[Optional[ImportFailInfoItem1]]):
root: Optional[ImportFailInfoItem1]
class QueueTaskItem(BaseModel):
ui_id: str = Field(..., description="Unique identifier for the task")
client_id: str = Field(..., description="Client identifier that initiated the task")
kind: OperationType
params: Union[
InstallPackParams,
UpdatePackParams,
UpdateAllPacksParams,
UpdateComfyUIParams,
FixPackParams,
UninstallPackParams,
DisablePackParams,
EnablePackParams,
ModelMetadata,
]
class TaskHistoryItem(BaseModel): class TaskHistoryItem(BaseModel):
ui_id: str = Field(..., description="Unique identifier for the task") ui_id: str = Field(..., description='Unique identifier for the task')
client_id: str = Field(..., description="Client identifier that initiated the task") client_id: str = Field(..., description='Client identifier that initiated the task')
kind: str = Field(..., description="Type of task that was performed") kind: str = Field(..., description='Type of task that was performed')
timestamp: datetime = Field(..., description="ISO timestamp when task completed") timestamp: datetime = Field(..., description='ISO timestamp when task completed')
result: str = Field(..., description="Task result message or details") result: str = Field(..., description='Task result message or details')
status: Optional[TaskExecutionStatus] = None status: Optional[TaskExecutionStatus] = None
batch_id: Optional[str] = Field(
None, description="ID of the batch this task belongs to"
)
end_time: Optional[datetime] = Field(
None, description="ISO timestamp when task execution ended"
)
class TaskStateMessage(BaseModel): class TaskStateMessage(BaseModel):
history: Dict[str, TaskHistoryItem] = Field( history: Dict[str, TaskHistoryItem] = Field(
..., description="Map of task IDs to their history items" ..., description='Map of task IDs to their history items'
) )
running_queue: List[QueueTaskItem] = Field( running_queue: List[QueueTaskItem] = Field(
..., description="Currently executing tasks" ..., description='Currently executing tasks'
) )
pending_queue: List[QueueTaskItem] = Field( pending_queue: List[QueueTaskItem] = Field(
..., description="Tasks waiting to be executed" ..., description='Tasks waiting to be executed'
)
installed_packs: Dict[str, ManagerPackInstalled] = Field(
..., description="Map of currently installed node packages by name"
) )
class MessageTaskDone(BaseModel): class MessageTaskDone(BaseModel):
ui_id: str = Field(..., description="Task identifier") ui_id: str = Field(..., description='Task identifier')
result: str = Field(..., description="Task result message") result: str = Field(..., description='Task result message')
kind: str = Field(..., description="Type of task") kind: str = Field(..., description='Type of task')
status: Optional[TaskExecutionStatus] = None status: Optional[TaskExecutionStatus] = None
timestamp: datetime = Field(..., description="ISO timestamp when task completed") timestamp: datetime = Field(..., description='ISO timestamp when task completed')
state: TaskStateMessage state: TaskStateMessage
class MessageTaskStarted(BaseModel): class MessageTaskStarted(BaseModel):
ui_id: str = Field(..., description="Task identifier") ui_id: str = Field(..., description='Task identifier')
kind: str = Field(..., description="Type of task") kind: str = Field(..., description='Type of task')
timestamp: datetime = Field(..., description="ISO timestamp when task started") timestamp: datetime = Field(..., description='ISO timestamp when task started')
state: TaskStateMessage state: TaskStateMessage
class MessageTaskFailed(BaseModel): class MessageTaskFailed(BaseModel):
ui_id: str = Field(..., description="Task identifier") ui_id: str = Field(..., description='Task identifier')
error: str = Field(..., description="Error message") error: str = Field(..., description='Error message')
kind: str = Field(..., description="Type of task") kind: str = Field(..., description='Type of task')
timestamp: datetime = Field(..., description="ISO timestamp when task failed") timestamp: datetime = Field(..., description='ISO timestamp when task failed')
state: TaskStateMessage state: TaskStateMessage
@@ -547,15 +407,11 @@ class MessageUpdate(
RootModel[Union[MessageTaskDone, MessageTaskStarted, MessageTaskFailed]] RootModel[Union[MessageTaskDone, MessageTaskStarted, MessageTaskFailed]]
): ):
root: Union[MessageTaskDone, MessageTaskStarted, MessageTaskFailed] = Field( root: Union[MessageTaskDone, MessageTaskStarted, MessageTaskFailed] = Field(
..., description="Union type for all possible WebSocket message updates" ..., description='Union type for all possible WebSocket message updates'
) )
class HistoryResponse(BaseModel): class HistoryResponse(BaseModel):
history: Optional[Dict[str, TaskHistoryItem]] = Field( history: Optional[Dict[str, TaskHistoryItem]] = Field(
None, description="Map of task IDs to their history items" None, description='Map of task IDs to their history items'
) )
class ImportFailInfoBulkResponse(RootModel[Optional[Dict[str, ImportFailInfoItem]]]):
root: Optional[Dict[str, ImportFailInfoItem]] = None


@@ -1,11 +0,0 @@
- Anytime you make a change to the data being sent or received, you should follow this process:
1. Adjust the openapi.yaml file first
2. Verify the syntax of the openapi.yaml file using `yaml.safe_load`
3. Regenerate the types following the instructions in the `data_models/README.md` file
4. Verify the new data model is generated
5. Verify the syntax of the generated types files
6. Run formatting and linting on the generated types files
7. Adjust the `__init__.py` files in the `data_models` directory to match/export the new data model
8. Only then, make the changes to the rest of the codebase
9. Run the CI tests to verify that the changes are working
- The comfyui_manager is a Python package used to manage the ComfyUI server. Apart from common utilities and data models, it contains two sub-packages: `glob` (the current version) and `legacy` (the previous version). Development happens in the `glob` package. Ignore the `legacy` package entirely unless you have a very good reason to research how things were done in prior major versions; even then, look only for knowledge or reflection, not to change code (unless explicitly asked to do so).
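
As a concrete, hedged sketch of steps 2-3 in the checklist above (assuming the repository layout used elsewhere in this diff; the flags mirror the `datamodel-codegen` invocation shown earlier):

    # Step 2: verify the OpenAPI spec is syntactically valid YAML.
    import subprocess
    import yaml

    with open("openapi.yaml") as f:
        spec = yaml.safe_load(f)

    # Step 3: regenerate the Pydantic models from the spec.
    subprocess.run(
        [
            "datamodel-codegen",
            "--input", "openapi.yaml",
            "--output", "comfyui_manager/data_models/generated_models.py",
            "--output-model-type", "pydantic_v2.BaseModel",
        ],
        check=True,
    )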


@@ -2,16 +2,20 @@
 This directory contains the Python backend modules that power ComfyUI-Manager, handling the core functionality of node management, downloading, security, and server operations.

-## Directory Structure
-
-- **glob/** - code for new cacheless ComfyUI-Manager
-- **legacy/** - code for legacy ComfyUI-Manager
-
 ## Core Modules

 - **manager_core.py**: The central implementation of management functions, handling configuration, installation, updates, and node management.
 - **manager_server.py**: Implements server functionality and API endpoints for the web interface to interact with the backend.
+- **manager_downloader.py**: Handles downloading operations for models, extensions, and other resources.
+- **manager_util.py**: Provides utility functions used throughout the system.

 ## Specialized Modules

+- **cm_global.py**: Maintains global variables and state management across the system.
+- **cnr_utils.py**: Helper utilities for interacting with the custom node registry (CNR).
+- **git_utils.py**: Git-specific utilities for repository operations.
+- **node_package.py**: Handles the packaging and installation of node extensions.
+- **security_check.py**: Implements the multi-level security system for installation safety.
 - **share_3rdparty.py**: Manages integration with third-party sharing platforms.

 ## Architecture


@@ -1,6 +1,6 @@
-SECURITY_MESSAGE_MIDDLE = "ERROR: To use this action, a security_level of `normal or below` is required. Please contact the administrator.\nReference: https://github.com/ltdrdata/ComfyUI-Manager#security-policy"
-SECURITY_MESSAGE_MIDDLE_P = "ERROR: To use this action, security_level must be `normal or below`, and network_mode must be set to `personal_cloud`. Please contact the administrator.\nReference: https://github.com/ltdrdata/ComfyUI-Manager#security-policy"
+from comfy.cli_args import args
+
+SECURITY_MESSAGE_MIDDLE_OR_BELOW = "ERROR: To use this action, a security_level of `middle or below` is required. Please contact the administrator.\nReference: https://github.com/ltdrdata/ComfyUI-Manager#security-policy"
 SECURITY_MESSAGE_NORMAL_MINUS = "ERROR: To use this feature, you must either set '--listen' to a local IP and set the security level to 'normal-' or lower, or set the security level to 'middle' or 'weak'. Please contact the administrator.\nReference: https://github.com/ltdrdata/ComfyUI-Manager#security-policy"
 SECURITY_MESSAGE_GENERAL = "ERROR: This installation is not allowed in this security_level. Please contact the administrator.\nReference: https://github.com/ltdrdata/ComfyUI-Manager#security-policy"
 SECURITY_MESSAGE_NORMAL_MINUS_MODEL = "ERROR: Downloading models that are not in '.safetensors' format is only allowed for models registered in the 'default' channel at this security level. If you want to download this model, set the security level to 'normal-' or lower."
@@ -15,6 +15,9 @@ def is_loopback(address):
     return False

+is_local_mode = is_loopback(args.listen)
+
 model_dir_name_map = {
    "checkpoints": "checkpoints",
    "checkpoint": "checkpoints",
@@ -34,22 +37,3 @@ model_dir_name_map = {
    "unet": "diffusion_models",
    "diffusion_model": "diffusion_models",
}
-
-# List of all model directory names used for checking installed models
-MODEL_DIR_NAMES = [
-    "checkpoints",
-    "loras",
-    "vae",
-    "text_encoders",
-    "diffusion_models",
-    "clip_vision",
-    "embeddings",
-    "diffusers",
-    "vae_approx",
-    "controlnet",
-    "gligen",
-    "upscale_models",
-    "hypernetworks",
-    "photomaker",
-    "classifiers",
-]
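
A hedged usage sketch (not part of the diff): a list like MODEL_DIR_NAMES above can be combined with ComfyUI's folder_paths registry to enumerate installed models, e.g. when capturing the kind of system-state snapshot described by the batch-tracking models. The helper name and file-extension filter below are illustrative assumptions.

    import os
    import folder_paths  # ComfyUI's model-path registry

    def list_installed_models(dir_names):
        """Map model filename -> {path, type} across all configured model folders."""
        found = {}
        for dir_name in dir_names:
            try:
                bases = folder_paths.get_folder_paths(dir_name)
            except KeyError:
                continue  # folder name not registered in this ComfyUI install
            for base in bases:
                if not os.path.isdir(base):
                    continue
                for root, _dirs, files in os.walk(base):
                    for fname in files:
                        if fname.endswith((".safetensors", ".ckpt", ".pt", ".pth")):
                            found[fname] = {"path": os.path.join(root, fname), "type": dir_name}
        return found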


@@ -41,12 +41,11 @@ from ..common.enums import NetworkMode, SecurityLevel, DBMode
 from ..common import context

-version_code = [4, 0, 3]
+version_code = [4, 0]
 version_str = f"V{version_code[0]}.{version_code[1]}" + (f'.{version_code[2]}' if len(version_code) > 2 else '')

 DEFAULT_CHANNEL = "https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main"
-DEFAULT_CHANNEL_LEGACY = "https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main"

 default_custom_nodes_path = None
@@ -151,11 +150,93 @@ def check_invalid_nodes():
print("\n---------------------------------------------------------------------------\n") print("\n---------------------------------------------------------------------------\n")
# read env vars
comfy_path: str = os.environ.get('COMFYUI_PATH')
comfy_base_path = os.environ.get('COMFYUI_FOLDERS_BASE_PATH')
if comfy_path is None:
try:
comfy_path = os.path.abspath(os.path.dirname(sys.modules['__main__'].__file__))
os.environ['COMFYUI_PATH'] = comfy_path
except:
logging.error("[ComfyUI-Manager] environment variable 'COMFYUI_PATH' is not specified.")
exit(-1)
if comfy_base_path is None:
comfy_base_path = comfy_path
channel_list_template_path = os.path.join(manager_util.comfyui_manager_path, 'channels.list.template')
git_script_path = os.path.join(manager_util.comfyui_manager_path, "git_helper.py")
manager_files_path = None
manager_config_path = None
manager_channel_list_path = None
manager_startup_script_path:str = None
manager_snapshot_path = None
manager_pip_overrides_path = None
manager_pip_blacklist_path = None
manager_components_path = None
manager_batch_history_path = None
def update_user_directory(user_dir):
global manager_files_path
global manager_config_path
global manager_channel_list_path
global manager_startup_script_path
global manager_snapshot_path
global manager_pip_overrides_path
global manager_pip_blacklist_path
global manager_components_path
global manager_batch_history_path
manager_files_path = os.path.abspath(os.path.join(user_dir, 'default', 'ComfyUI-Manager'))
if not os.path.exists(manager_files_path):
os.makedirs(manager_files_path)
manager_snapshot_path = os.path.join(manager_files_path, "snapshots")
if not os.path.exists(manager_snapshot_path):
os.makedirs(manager_snapshot_path)
manager_startup_script_path = os.path.join(manager_files_path, "startup-scripts")
if not os.path.exists(manager_startup_script_path):
os.makedirs(manager_startup_script_path)
manager_config_path = os.path.join(manager_files_path, 'config.ini')
manager_channel_list_path = os.path.join(manager_files_path, 'channels.list')
manager_pip_overrides_path = os.path.join(manager_files_path, "pip_overrides.json")
manager_pip_blacklist_path = os.path.join(manager_files_path, "pip_blacklist.list")
manager_components_path = os.path.join(manager_files_path, "components")
manager_util.cache_dir = os.path.join(manager_files_path, "cache")
manager_batch_history_path = os.path.join(manager_files_path, "batch_history")
if not os.path.exists(manager_util.cache_dir):
os.makedirs(manager_util.cache_dir)
if not os.path.exists(manager_batch_history_path):
os.makedirs(manager_batch_history_path)
try:
import folder_paths
update_user_directory(folder_paths.get_user_directory())
except Exception:
# fallback:
# This case is only possible when running with cm-cli, and in practice, this case is not actually used.
update_user_directory(os.path.abspath(manager_util.comfyui_manager_path))
cached_config = None
js_path = None
comfy_ui_required_revision = 1930
comfy_ui_required_commit_datetime = datetime(2024, 1, 24, 0, 0, 0)
comfy_ui_revision = "Unknown"
comfy_ui_commit_datetime = datetime(1900, 1, 1, 0, 0, 0)
channel_dict = None
-valid_channels = {'default', 'local', DEFAULT_CHANNEL, DEFAULT_CHANNEL_LEGACY}
+valid_channels = {'default', 'local'}
channel_list = None
@@ -299,86 +380,18 @@ class ManagedResult:
        return self
class NormalizedKeyDict:
def __init__(self):
self._store = {}
self._key_map = {}
def _normalize_key(self, key):
if isinstance(key, str):
return key.strip().lower()
return key
def __setitem__(self, key, value):
norm_key = self._normalize_key(key)
self._key_map[norm_key] = key
self._store[key] = value
def __getitem__(self, key):
norm_key = self._normalize_key(key)
original_key = self._key_map[norm_key]
return self._store[original_key]
def __delitem__(self, key):
norm_key = self._normalize_key(key)
original_key = self._key_map.pop(norm_key)
del self._store[original_key]
def __contains__(self, key):
return self._normalize_key(key) in self._key_map
def get(self, key, default=None):
return self[key] if key in self else default
def setdefault(self, key, default=None):
if key in self:
return self[key]
self[key] = default
return default
def pop(self, key, default=None):
if key in self:
val = self[key]
del self[key]
return val
if default is not None:
return default
raise KeyError(key)
def keys(self):
return self._store.keys()
def values(self):
return self._store.values()
def items(self):
return self._store.items()
def __iter__(self):
return iter(self._store)
def __len__(self):
return len(self._store)
def __repr__(self):
return repr(self._store)
def to_dict(self):
return dict(self._store)
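
A short illustrative sketch (not part of the diff) of how the NormalizedKeyDict above behaves: lookups are case-insensitive and ignore surrounding whitespace, while the originally inserted key is preserved for iteration.

    d = NormalizedKeyDict()
    d["ComfyUI-Impact-Pack"] = "/path/to/impact-pack"   # hypothetical node pack id/path

    assert "comfyui-impact-pack" in d
    assert d[" COMFYUI-IMPACT-PACK "] == "/path/to/impact-pack"
    assert list(d.keys()) == ["ComfyUI-Impact-Pack"]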
class UnifiedManager:
    def __init__(self):
        self.installed_node_packages: dict[str, InstalledNodePackage] = {}
-        self.cnr_inactive_nodes = NormalizedKeyDict()      # node_id -> node_version -> fullpath
-        self.nightly_inactive_nodes = NormalizedKeyDict()  # node_id -> fullpath
+        self.cnr_inactive_nodes = {}      # node_id -> node_version -> fullpath
+        self.nightly_inactive_nodes = {}  # node_id -> fullpath
        self.unknown_inactive_nodes = {}  # node_id -> repo url * fullpath
-        self.active_nodes = NormalizedKeyDict()            # node_id -> node_version * fullpath
+        self.active_nodes = {}            # node_id -> node_version * fullpath
        self.unknown_active_nodes = {}    # node_id -> repo url * fullpath
-        self.cnr_map = NormalizedKeyDict()                 # node_id -> cnr info
+        self.cnr_map = {}                 # node_id -> cnr info
        self.repo_cnr_map = {}            # repo_url -> cnr info
        self.custom_node_map_cache = {}   # (channel, mode) -> augmented custom node list json
        self.processed_install = set()

    def get_module_name(self, x):
@@ -784,7 +797,7 @@ class UnifiedManager:
        channel = normalize_channel(channel)
        nodes = await self.load_nightly(channel, mode)
-        res = NormalizedKeyDict()
+        res = {}
        added_cnr = set()
        for v in nodes.values():
            v = v[0]
@@ -1003,6 +1016,7 @@ class UnifiedManager:
""" """
result = ManagedResult('enable') result = ManagedResult('enable')
if 'comfyui-manager' in node_id.lower(): if 'comfyui-manager' in node_id.lower():
return result.fail(f"ignored: enabling '{node_id}'") return result.fail(f"ignored: enabling '{node_id}'")
@@ -1473,7 +1487,7 @@ def identify_node_pack_from_path(fullpath):
        # cnr
        cnr = cnr_utils.read_cnr_info(fullpath)
        if cnr is not None:
-            return module_name, cnr['version'], cnr['original_name'], None
+            return module_name, cnr['version'], cnr['id'], None
        return None
    else:
@@ -1523,10 +1537,7 @@ def get_installed_node_packs():
        if info is None:
            continue
-        # NOTE: don't add disabled nodepack if there is enabled nodepack
-        original_name = info[0].split('@')[0]
-        if original_name not in res:
-            res[info[0]] = { 'ver': info[1], 'cnr_id': info[2], 'aux_id': info[3], 'enabled': False }
+        res[info[0]] = { 'ver': info[1], 'cnr_id': info[2], 'aux_id': info[3], 'enabled': False }
    return res
@@ -1623,18 +1634,16 @@ def read_config():
        config = configparser.ConfigParser(strict=False)
        config.read(context.manager_config_path)
        default_conf = config['default']

+        manager_util.use_uv = default_conf['use_uv'].lower() == 'true' if 'use_uv' in default_conf else False
+
        def get_bool(key, default_value):
            return default_conf[key].lower() == 'true' if key in default_conf else False

-        manager_util.use_uv = default_conf['use_uv'].lower() == 'true' if 'use_uv' in default_conf else False
-        manager_util.bypass_ssl = get_bool('bypass_ssl', False)
-
        return {
            'http_channel_enabled': get_bool('http_channel_enabled', False),
            'preview_method': default_conf.get('preview_method', manager_funcs.get_current_preview_method()).lower(),
            'git_exe': default_conf.get('git_exe', ''),
-           'use_uv': get_bool('use_uv', True),
+           'use_uv': get_bool('use_uv', False),
            'channel_url': default_conf.get('channel_url', DEFAULT_CHANNEL),
            'default_cache_as_channel_url': get_bool('default_cache_as_channel_url', False),
            'share_option': default_conf.get('share_option', 'all').lower(),
@@ -1652,20 +1661,16 @@ def read_config():
        }
    except Exception:
-       import importlib.util
-       # temporary disable `uv` on Windows by default (https://github.com/Comfy-Org/ComfyUI-Manager/issues/1969)
-       manager_util.use_uv = importlib.util.find_spec("uv") is not None and platform.system() != "Windows"
-       manager_util.bypass_ssl = False
+       manager_util.use_uv = False
        return {
            'http_channel_enabled': False,
            'preview_method': manager_funcs.get_current_preview_method(),
            'git_exe': '',
-           'use_uv': manager_util.use_uv,
+           'use_uv': False,
            'channel_url': DEFAULT_CHANNEL,
            'default_cache_as_channel_url': False,
            'share_option': 'all',
-           'bypass_ssl': manager_util.bypass_ssl,
+           'bypass_ssl': False,
            'file_logging': True,
            'component_policy': 'workflow',
            'update_policy': 'stable-comfyui',
@@ -1783,6 +1788,16 @@ def try_install_script(url, repo_path, install_cmd, instant_execution=False):
print(f"\n## ComfyUI-Manager: EXECUTE => {install_cmd}") print(f"\n## ComfyUI-Manager: EXECUTE => {install_cmd}")
code = manager_funcs.run_script(install_cmd, cwd=repo_path) code = manager_funcs.run_script(install_cmd, cwd=repo_path)
if platform.system() != "Windows":
try:
if not os.environ.get('__COMFYUI_DESKTOP_VERSION__') and comfy_ui_commit_datetime.date() < comfy_ui_required_commit_datetime.date():
print("\n\n###################################################################")
print(f"[WARN] ComfyUI-Manager: Your ComfyUI version ({comfy_ui_revision})[{comfy_ui_commit_datetime.date()}] is too old. Please update to the latest version.")
print("[WARN] The extension installation feature may not work properly in the current installed ComfyUI version on Windows environment.")
print("###################################################################\n\n")
except Exception:
pass
if code != 0: if code != 0:
if url is None: if url is None:
url = os.path.dirname(repo_path) url = os.path.dirname(repo_path)
@@ -1901,27 +1916,6 @@ def execute_install_script(url, repo_path, lazy_mode=False, instant_execution=Fa
    return True

-def install_manager_requirements(repo_path):
-    """
-    Install packages from manager_requirements.txt if it exists.
-    This is specifically for ComfyUI's manager_requirements.txt.
-    """
-    manager_requirements_path = os.path.join(repo_path, "manager_requirements.txt")
-    if not os.path.exists(manager_requirements_path):
-        return
-
-    logging.info("[ComfyUI-Manager] Installing manager_requirements.txt")
-    with open(manager_requirements_path, "r") as f:
-        for line in f:
-            line = line.strip()
-            if line and not line.startswith('#'):
-                if '#' in line:
-                    line = line.split('#')[0].strip()
-                if line:
-                    install_cmd = manager_util.make_pip_cmd(["install", line])
-                    subprocess.run(install_cmd)

def git_repo_update_check_with(path, do_fetch=False, do_update=False, no_deps=False):
    """
@@ -2455,7 +2449,6 @@ def update_to_stable_comfyui(repo_path):
        else:
            logging.info(f"[ComfyUI-Manager] Updating ComfyUI: {current_tag} -> {latest_tag}")
            repo.git.checkout(latest_tag)
-            execute_install_script("ComfyUI", repo_path, instant_execution=False, no_deps=False)
        return 'updated', latest_tag
    except Exception:
        traceback.print_exc()
@@ -2874,7 +2867,7 @@ async def get_unified_total_nodes(channel, mode, regsitry_cache_mode='cache'):
        if cnr_id is not None:
            # cnr or nightly version
-            cnr_ids.discard(cnr_id)
+            cnr_ids.remove(cnr_id)
            updatable = False
            cnr = unified_manager.cnr_map[cnr_id]
@@ -3038,11 +3031,6 @@ async def restore_snapshot(snapshot_path, git_helper_extras=None):
        info = yaml.load(snapshot_file, Loader=yaml.SafeLoader)
        info = info['custom_nodes']

-    if 'pips' in info and info['pips']:
-        pips = info['pips']
-    else:
-        pips = {}
-
    # for cnr restore
    cnr_info = info.get('cnr_custom_nodes')
    if cnr_info is not None:
@@ -3249,8 +3237,6 @@ async def restore_snapshot(snapshot_path, git_helper_extras=None):
            unified_manager.repo_install(repo_url, to_path, instant_execution=True, no_deps=False, return_postinstall=False)
            cloned_repos.append(repo_name)

-    manager_util.restore_pip_snapshot(pips, git_helper_extras)
-
    # print summary
    for x in cloned_repos:
        print(f"[ INSTALLED ] {x}")


File diff suppressed because it is too large.


@@ -11,15 +11,6 @@ import hashlib
import folder_paths
from server import PromptServer
import logging
-import sys
-
-try:
-    from nio import AsyncClient, LoginResponse, UploadResponse
-    matrix_nio_is_available = True
-except Exception:
-    logging.warning(f"[ComfyUI-Manager] The matrix sharing feature has been disabled because the `matrix-nio` dependency is not installed.\n\tTo use this feature, please run the following command:\n\t{sys.executable} -m pip install matrix-nio\n")
-    matrix_nio_is_available = False

def extract_model_file_names(json_data):
@@ -202,14 +193,6 @@ async def get_esheep_workflow_and_images(request):
    return web.Response(status=200, text=json.dumps(data))

-@PromptServer.instance.routes.get("/v2/manager/get_matrix_dep_status")
-async def get_matrix_dep_status(request):
-    if matrix_nio_is_available:
-        return web.Response(status=200, text='available')
-    else:
-        return web.Response(status=200, text='unavailable')

def set_matrix_auth(json_data):
    homeserver = json_data['homeserver']
    username = json_data['username']
@@ -349,12 +332,15 @@ async def share_art(request):
    workflowId = upload_workflow_json["workflowId"]

    # check if the user has provided Matrix credentials
-    if matrix_nio_is_available and "matrix" in share_destinations:
+    if "matrix" in share_destinations:
        comfyui_share_room_id = '!LGYSoacpJPhIfBqVfb:matrix.org'
        filename = os.path.basename(asset_filepath)
        content_type = assetFileType

        try:
+            from matrix_client.api import MatrixHttpApi
+            from matrix_client.client import MatrixClient
+
            homeserver = 'matrix.org'
            if matrix_auth:
                homeserver = matrix_auth.get('homeserver', 'matrix.org')
@@ -362,35 +348,20 @@ async def share_art(request):
if not homeserver.startswith("https://"): if not homeserver.startswith("https://"):
homeserver = "https://" + homeserver homeserver = "https://" + homeserver
client = AsyncClient(homeserver, matrix_auth['username']) client = MatrixClient(homeserver)
try:
# Login token = client.login(username=matrix_auth['username'], password=matrix_auth['password'])
login_resp = await client.login(matrix_auth['password']) if not token:
if not isinstance(login_resp, LoginResponse) or not login_resp.access_token: return web.json_response({"error": "Invalid Matrix credentials."}, content_type='application/json', status=400)
await client.close() except Exception:
return web.json_response({"error": "Invalid Matrix credentials."}, content_type='application/json', status=400) return web.json_response({"error": "Invalid Matrix credentials."}, content_type='application/json', status=400)
# Upload asset matrix = MatrixHttpApi(homeserver, token=token)
with open(asset_filepath, 'rb') as f: with open(asset_filepath, 'rb') as f:
upload_resp, _maybe_keys = await client.upload(f, content_type=content_type, filename=filename) mxc_url = matrix.media_upload(f.read(), content_type, filename=filename)['content_uri']
asset_data = f.seek(0) or f.read() # get size for info below
if not isinstance(upload_resp, UploadResponse) or not upload_resp.content_uri:
await client.close()
return web.json_response({"error": "Failed to upload asset to Matrix."}, content_type='application/json', status=500)
mxc_url = upload_resp.content_uri
# Upload workflow JSON workflow_json_mxc_url = matrix.media_upload(prompt['workflow'], 'application/json', filename='workflow.json')['content_uri']
import io
workflow_json_bytes = json.dumps(prompt['workflow']).encode('utf-8')
workflow_io = io.BytesIO(workflow_json_bytes)
upload_workflow_resp, _maybe_keys = await client.upload(workflow_io, content_type='application/json', filename='workflow.json')
workflow_io.seek(0)
if not isinstance(upload_workflow_resp, UploadResponse) or not upload_workflow_resp.content_uri:
await client.close()
return web.json_response({"error": "Failed to upload workflow to Matrix."}, content_type='application/json', status=500)
workflow_json_mxc_url = upload_workflow_resp.content_uri
# Send text message
text_content = "" text_content = ""
if title: if title:
text_content += f"{title}\n" text_content += f"{title}\n"
@@ -398,47 +369,11 @@ async def share_art(request):
text_content += f"{description}\n" text_content += f"{description}\n"
if credits: if credits:
text_content += f"\ncredits: {credits}\n" text_content += f"\ncredits: {credits}\n"
await client.room_send( matrix.send_message(comfyui_share_room_id, text_content)
room_id=comfyui_share_room_id, matrix.send_content(comfyui_share_room_id, mxc_url, filename, 'm.image')
message_type="m.room.message", matrix.send_content(comfyui_share_room_id, workflow_json_mxc_url, 'workflow.json', 'm.file')
content={"msgtype": "m.text", "body": text_content} except Exception:
) logging.exception("An error occurred")
# Send image
await client.room_send(
room_id=comfyui_share_room_id,
message_type="m.room.message",
content={
"msgtype": "m.image",
"body": filename,
"url": mxc_url,
"info": {
"mimetype": content_type,
"size": len(asset_data)
}
}
)
# Send workflow JSON file
await client.room_send(
room_id=comfyui_share_room_id,
message_type="m.room.message",
content={
"msgtype": "m.file",
"body": "workflow.json",
"url": workflow_json_mxc_url,
"info": {
"mimetype": "application/json",
"size": len(workflow_json_bytes)
}
}
)
await client.close()
except:
import traceback
traceback.print_exc()
return web.json_response({"error": "An error occurred when sharing your art to Matrix."}, content_type='application/json', status=500) return web.json_response({"error": "An error occurred when sharing your art to Matrix."}, content_type='application/json', status=500)
return web.json_response({ return web.json_response({

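For reference, a minimal sketch (not the Manager's exact handler) of the matrix-nio flow that appears in the diff above: log in, upload an asset, post it to a room as an m.image event, then close the client. The homeserver, account, password, room id, and file name below are placeholders.

```python
import asyncio
from nio import AsyncClient, LoginResponse, UploadResponse

async def share_image(path: str) -> None:
    # Placeholders: substitute a real homeserver, user id, password, and room id.
    client = AsyncClient("https://matrix.org", "@someuser:matrix.org")
    try:
        login = await client.login("secret-password")
        if not isinstance(login, LoginResponse):
            raise RuntimeError(f"login failed: {login}")
        with open(path, "rb") as f:
            upload, _keys = await client.upload(f, content_type="image/png", filename="art.png")
        if not isinstance(upload, UploadResponse):
            raise RuntimeError(f"upload failed: {upload}")
        await client.room_send(
            room_id="!someroom:matrix.org",
            message_type="m.room.message",
            content={"msgtype": "m.image", "body": "art.png", "url": upload.content_uri},
        )
    finally:
        await client.close()

# asyncio.run(share_image("art.png"))
```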
View File

@@ -3,7 +3,7 @@ import git
import logging import logging
import traceback import traceback
from comfyui_manager.common import context from comfyui_manager.common import context, manager_util
import folder_paths import folder_paths
from comfy.cli_args import args from comfy.cli_args import args
import latent_preview import latent_preview
@@ -125,18 +125,17 @@ def initialize_environment():
context.comfy_path = os.path.dirname(folder_paths.__file__) context.comfy_path = os.path.dirname(folder_paths.__file__)
core.js_path = os.path.join(context.comfy_path, "web", "extensions") core.js_path = os.path.join(context.comfy_path, "web", "extensions")
# Legacy database paths - kept for potential future use local_db_model = os.path.join(manager_util.comfyui_manager_path, "model-list.json")
# local_db_model = os.path.join(manager_util.comfyui_manager_path, "model-list.json") local_db_alter = os.path.join(manager_util.comfyui_manager_path, "alter-list.json")
# local_db_alter = os.path.join(manager_util.comfyui_manager_path, "alter-list.json") local_db_custom_node_list = os.path.join(
# local_db_custom_node_list = os.path.join( manager_util.comfyui_manager_path, "custom-node-list.json"
# manager_util.comfyui_manager_path, "custom-node-list.json" )
# ) local_db_extension_node_mappings = os.path.join(
# local_db_extension_node_mappings = os.path.join( manager_util.comfyui_manager_path, "extension-node-map.json"
# manager_util.comfyui_manager_path, "extension-node-map.json" )
# )
set_preview_method(core.get_config()["preview_method"]) set_preview_method(core.get_config()["preview_method"])
print_comfyui_version() environment_utils.print_comfyui_version()
setup_environment() setup_environment()
core.check_invalid_nodes() core.check_invalid_nodes()

View File

@@ -1,6 +1,5 @@
import locale import locale
import sys import sys
import re
def handle_stream(stream, prefix): def handle_stream(stream, prefix):
@@ -20,41 +19,3 @@ def handle_stream(stream, prefix):
print(prefix, msg, end="", file=sys.stderr) print(prefix, msg, end="", file=sys.stderr)
else: else:
print(prefix, msg, end="") print(prefix, msg, end="")
def convert_markdown_to_html(input_text):
pattern_a = re.compile(r"\[a/([^]]+)]\(([^)]+)\)")
pattern_w = re.compile(r"\[w/([^]]+)]")
pattern_i = re.compile(r"\[i/([^]]+)]")
pattern_bold = re.compile(r"\*\*([^*]+)\*\*")
pattern_white = re.compile(r"%%([^*]+)%%")
def replace_a(match):
return f"<a href='{match.group(2)}' target='blank'>{match.group(1)}</a>"
def replace_w(match):
return f"<p class='cm-warn-note'>{match.group(1)}</p>"
def replace_i(match):
return f"<p class='cm-info-note'>{match.group(1)}</p>"
def replace_bold(match):
return f"<B>{match.group(1)}</B>"
def replace_white(match):
return f"<font color='white'>{match.group(1)}</font>"
input_text = (
input_text.replace("\\[", "&#91;")
.replace("\\]", "&#93;")
.replace("<", "&lt;")
.replace(">", "&gt;")
)
result_text = re.sub(pattern_a, replace_a, input_text)
result_text = re.sub(pattern_w, replace_w, result_text)
result_text = re.sub(pattern_i, replace_i, result_text)
result_text = re.sub(pattern_bold, replace_bold, result_text)
result_text = re.sub(pattern_white, replace_white, result_text)
return result_text.replace("\n", "<BR>")

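A self-contained sketch of just the `[a/label](url)` rewrite performed by the `convert_markdown_to_html` helper shown above; the other patterns (`[w/...]`, `[i/...]`, bold, `%%...%%`) follow the same substitute-by-regex shape.

```python
import re

# Same pattern as in the helper above: "[a/label](url)" -> anchor tag.
pattern_a = re.compile(r"\[a/([^]]+)]\(([^)]+)\)")

def replace_a(match: re.Match) -> str:
    # group(1) is the link label, group(2) is the target URL
    return f"<a href='{match.group(2)}' target='blank'>{match.group(1)}</a>"

text = "See [a/the manual](https://example.com/manual) for details."
print(re.sub(pattern_a, replace_a, text))
# -> See <a href='https://example.com/manual' target='blank'>the manual</a> for details.
```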
View File

@@ -1,10 +1,8 @@
import os import os
import logging import logging
import concurrent.futures
import folder_paths import folder_paths
from comfyui_manager.glob import manager_core as core from comfyui_manager.glob import manager_core as core
from comfyui_manager.glob.constants import model_dir_name_map, MODEL_DIR_NAMES
def get_model_dir(data, show_log=False): def get_model_dir(data, show_log=False):
@@ -73,89 +71,3 @@ def get_model_path(data, show_log=False):
return os.path.join(base_model, os.path.basename(data["url"])) return os.path.join(base_model, os.path.basename(data["url"]))
else: else:
return os.path.join(base_model, data["filename"]) return os.path.join(base_model, data["filename"])
def check_model_installed(json_obj):
def is_exists(model_dir_name, filename, url):
if filename == "<huggingface>":
filename = os.path.basename(url)
dirs = folder_paths.get_folder_paths(model_dir_name)
for x in dirs:
if os.path.exists(os.path.join(x, filename)):
return True
return False
total_models_files = set()
for x in MODEL_DIR_NAMES:
for y in folder_paths.get_filename_list(x):
total_models_files.add(y)
def process_model_phase(item):
if (
"diffusion" not in item["filename"]
and "pytorch" not in item["filename"]
and "model" not in item["filename"]
):
# non-general name case
if item["filename"] in total_models_files:
item["installed"] = "True"
return
if item["save_path"] == "default":
model_dir_name = model_dir_name_map.get(item["type"].lower())
if model_dir_name is not None:
item["installed"] = str(
is_exists(model_dir_name, item["filename"], item["url"])
)
else:
item["installed"] = "False"
else:
model_dir_name = item["save_path"].split("/")[0]
if model_dir_name in folder_paths.folder_names_and_paths:
if is_exists(model_dir_name, item["filename"], item["url"]):
item["installed"] = "True"
if "installed" not in item:
if item["filename"] == "<huggingface>":
filename = os.path.basename(item["url"])
else:
filename = item["filename"]
fullpath = os.path.join(
folder_paths.models_dir, item["save_path"], filename
)
item["installed"] = "True" if os.path.exists(fullpath) else "False"
with concurrent.futures.ThreadPoolExecutor(8) as executor:
for item in json_obj["models"]:
executor.submit(process_model_phase, item)
async def check_whitelist_for_model(item):
from comfyui_manager.data_models import ManagerDatabaseSource
json_obj = await core.get_data_by_mode(ManagerDatabaseSource.cache.value, "model-list.json")
for x in json_obj.get("models", []):
if (
x["save_path"] == item["save_path"]
and x["base"] == item["base"]
and x["filename"] == item["filename"]
):
return True
json_obj = await core.get_data_by_mode(ManagerDatabaseSource.local.value, "model-list.json")
for x in json_obj.get("models", []):
if (
x["save_path"] == item["save_path"]
and x["base"] == item["base"]
and x["filename"] == item["filename"]
):
return True
return False

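A generic sketch of the thread-pool fan-out used by `check_model_installed` above: each item is marked in place by a worker, and leaving the `with` block waits for all submitted checks to finish. The item list and the existence check are placeholders, not the Manager's real model data.

```python
import concurrent.futures
import os

items = [{"filename": "model_a.safetensors"}, {"filename": "model_b.ckpt"}]

def process_item(item: dict) -> None:
    # Each worker mutates only its own item, so no extra locking is needed here.
    path = os.path.join("/tmp/models", item["filename"])  # placeholder directory
    item["installed"] = "True" if os.path.exists(path) else "False"

with concurrent.futures.ThreadPoolExecutor(8) as executor:
    for item in items:
        executor.submit(process_item, item)

print(items)
```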
View File

@@ -1,49 +1,24 @@
from comfyui_manager.glob import manager_core as core
-from comfy.cli_args import args
-from comfyui_manager.data_models import SecurityLevel, RiskLevel, ManagerDatabaseSource
-def is_loopback(address):
-    import ipaddress
-    try:
-        return ipaddress.ip_address(address).is_loopback
-    except ValueError:
-        return False
def is_allowed_security_level(level):
-    is_local_mode = is_loopback(args.listen)
-    is_personal_cloud = core.get_config()['network_mode'].lower() == 'personal_cloud'
-    if level == RiskLevel.block.value:
+    if level == "block":
        return False
-    elif level == RiskLevel.high_.value:
+    elif level == "high":
        if is_local_mode:
-            return core.get_config()['security_level'] in [SecurityLevel.weak.value, SecurityLevel.normal_.value]
-        elif is_personal_cloud:
-            return core.get_config()['security_level'] == SecurityLevel.weak.value
+            return core.get_config()["security_level"] in ["weak", "normal-"]
        else:
-            return False
-    elif level == RiskLevel.high.value:
-        if is_local_mode:
-            return core.get_config()['security_level'] in [SecurityLevel.weak.value, SecurityLevel.normal_.value]
-        else:
-            return core.get_config()['security_level'] == SecurityLevel.weak.value
-    elif level == RiskLevel.middle_.value:
-        if is_local_mode or is_personal_cloud:
-            return core.get_config()['security_level'] in [SecurityLevel.weak.value, SecurityLevel.normal.value, SecurityLevel.normal_.value]
-        else:
-            return False
-    elif level == RiskLevel.middle.value:
-        return core.get_config()['security_level'] in [SecurityLevel.weak.value, SecurityLevel.normal.value, SecurityLevel.normal_.value]
+            return core.get_config()["security_level"] == "weak"
+    elif level == "middle":
+        return core.get_config()["security_level"] in ["weak", "normal", "normal-"]
    else:
        return True
async def get_risky_level(files, pip_packages):
-    json_data1 = await core.get_data_by_mode(ManagerDatabaseSource.local.value, "custom-node-list.json")
+    json_data1 = await core.get_data_by_mode("local", "custom-node-list.json")
    json_data2 = await core.get_data_by_mode(
-        ManagerDatabaseSource.cache.value,
+        "cache",
        "custom-node-list.json",
        channel_url="https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main",
    )
@@ -54,7 +29,7 @@ async def get_risky_level(files, pip_packages):
    for x in files:
        if x not in all_urls:
-            return RiskLevel.high_.value
+            return "high"
    all_pip_packages = set()
    for x in json_data1["custom_nodes"] + json_data2["custom_nodes"]:
@@ -62,6 +37,6 @@ async def get_risky_level(files, pip_packages):
    for p in pip_packages:
        if p not in all_pip_packages:
-            return RiskLevel.block.value
+            return "block"
-    return RiskLevel.middle_.value
+    return "middle"

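A condensed, standalone sketch of the risk-level gate in the enum-based variant of `is_allowed_security_level` above, using plain strings in place of the `RiskLevel`/`SecurityLevel` enums (assuming `high_`/`middle_`/`normal_` map to `high+`/`middle+`/`normal-`, as in the server-side copy later in this diff) and with the config lookups passed in as arguments.

```python
def is_allowed(level: str, security_level: str, is_local: bool, is_personal_cloud: bool) -> bool:
    if level == "block":
        return False
    if level == "high+":
        if is_local:
            return security_level in ("weak", "normal-")
        if is_personal_cloud:
            return security_level == "weak"
        return False
    if level == "high":
        if is_local:
            return security_level in ("weak", "normal-")
        return security_level == "weak"
    if level == "middle+":
        return (is_local or is_personal_cloud) and security_level in ("weak", "normal", "normal-")
    if level == "middle":
        return security_level in ("weak", "normal", "normal-")
    return True  # unknown levels are allowed

# Remote callers (neither local nor personal_cloud) only pass 'high' checks at 'weak'.
assert is_allowed("high", "weak", is_local=False, is_personal_cloud=False)
assert not is_allowed("middle+", "normal", is_local=False, is_personal_cloud=False)
```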
View File

@@ -13,7 +13,7 @@ This directory contains the JavaScript frontend implementation for ComfyUI-Manag
## Sharing Components
- **comfyui-share-common.js**: Base functionality for workflow sharing features.
-- **comfyui-share-copus.js**: Integration with the ComfyUI Copus sharing platform.
+- **comfyui-share-copus.js**: Integration with the ComfyUI Opus sharing platform.
- **comfyui-share-openart.js**: Integration with the OpenArt sharing platform.
- **comfyui-share-youml.js**: Integration with the YouML sharing platform.
@@ -47,4 +47,4 @@ CSS files are included for specific components:
- **custom-nodes-manager.css**: Styling for the node management UI
- **model-manager.css**: Styling for the model management UI
This frontend implementation provides a comprehensive yet user-friendly interface for managing the ComfyUI ecosystem.

View File

@@ -222,6 +222,9 @@ function isBeforeFrontendVersion(compareVersion) {
    }
}
+const is_legacy_front = () => isBeforeFrontendVersion('1.2.49');
+const isNewManagerUI = () => isBeforeFrontendVersion('1.16.4');
document.head.appendChild(docStyle);
var update_comfyui_button = null;
@@ -475,9 +478,9 @@ async function updateComfyUI() {
    // set_inprogress_mode();
    showTerminal();
    batch_id = generateUUID();
    let batch = {};
    batch['batch_id'] = batch_id;
    batch['update_comfyui'] = true;
@@ -664,13 +667,13 @@ async function onQueueStatus(event) {
        update_all_button.innerText = `in progress.. (${event.detail.done_count}/${event.detail.total_count})`;
    }
    else if(event.detail.status == 'all-done') {
-       reset_action_buttons();
+       // reset_action_buttons();
    }
    else if(event.detail.status == 'batch-done') {
        if(batch_id != event.detail.batch_id) {
            return;
        }
        let success_list = [];
        let failed_list = [];
        let comfyui_state = null;
@@ -777,7 +780,7 @@ async function updateAll(update_comfyui) {
showTerminal(); showTerminal();
batch_id = generateUUID(); batch_id = generateUUID();
let batch = {}; let batch = {};
if(update_comfyui) { if(update_comfyui) {
update_all_button.innerText = "Updating ComfyUI..."; update_all_button.innerText = "Updating ComfyUI...";
@@ -1514,6 +1517,11 @@ app.registerExtension({
tooltip: "Share" tooltip: "Share"
}).element }).element
); );
const shouldShowLegacyMenuItems = !isNewManagerUI();
if (shouldShowLegacyMenuItems) {
app.menu?.settingsGroup.element.before(cmGroup.element);
}
} }
catch(exception) { catch(exception) {
console.log('ComfyUI is outdated. New style menu based features are disabled.'); console.log('ComfyUI is outdated. New style menu based features are disabled.');

View File

@@ -552,20 +552,6 @@ export class ShareDialog extends ComfyDialog {
this.matrix_destination_checkbox.style.color = "var(--fg-color)"; this.matrix_destination_checkbox.style.color = "var(--fg-color)";
this.matrix_destination_checkbox.checked = this.share_option === 'matrix'; //true; this.matrix_destination_checkbox.checked = this.share_option === 'matrix'; //true;
try {
api.fetchApi(`/v2/manager/get_matrix_dep_status`)
.then(response => response.text())
.then(data => {
if(data == 'unavailable') {
matrix_destination_checkbox_text.style.textDecoration = "line-through";
this.matrix_destination_checkbox.disabled = true;
this.matrix_destination_checkbox.title = "It has been disabled because the 'matrix-nio' dependency is not installed. Please install this dependency to use the matrix sharing feature.";
matrix_destination_checkbox_text.title = "It has been disabled because the 'matrix-nio' dependency is not installed. Please install this dependency to use the matrix sharing feature.";
}
})
.catch(error => {});
} catch (error) {}
this.comfyworkflows_destination_checkbox = $el("input", { type: 'checkbox', id: "comfyworkflows_destination" }, []) this.comfyworkflows_destination_checkbox = $el("input", { type: 'checkbox', id: "comfyworkflows_destination" }, [])
const comfyworkflows_destination_checkbox_text = $el("label", {}, [" ComfyWorkflows.com"]) const comfyworkflows_destination_checkbox_text = $el("label", {}, [" ComfyWorkflows.com"])
this.comfyworkflows_destination_checkbox.style.color = "var(--fg-color)"; this.comfyworkflows_destination_checkbox.style.color = "var(--fg-color)";

View File

@@ -71,7 +71,7 @@ export class CopusShareDialog extends ComfyDialog {
this.allFiles = []; this.allFiles = [];
this.titleNum = 0; this.titleNum = 0;
} }
createButtons() { createButtons() {
const inputStyle = { const inputStyle = {
display: "block", display: "block",
@@ -201,15 +201,13 @@ export class CopusShareDialog extends ComfyDialog {
}); });
this.LockInput = $el("input", { this.LockInput = $el("input", {
type: "text", type: "text",
placeholder: "0", placeholder: "",
style: { style: {
width: "100px", width: "100px",
padding: "7px", padding: "7px",
paddingLeft: "30px",
borderRadius: "4px", borderRadius: "4px",
border: "1px solid #ddd", border: "1px solid #ddd",
boxSizing: "border-box", boxSizing: "border-box",
position: "relative",
}, },
oninput: (event) => { oninput: (event) => {
let input = event.target.value; let input = event.target.value;
@@ -303,7 +301,7 @@ export class CopusShareDialog extends ComfyDialog {
}, },
[] []
); );
const titleNumDom = $el( const titleNumDom = $el(
"label", "label",
{ {
@@ -344,11 +342,15 @@ export class CopusShareDialog extends ComfyDialog {
["0/70"] ["0/70"]
); );
// Additional Inputs Section // Additional Inputs Section
const additionalInputsSection = $el("div", { style: { ...sectionStyle } }, [ const additionalInputsSection = $el(
$el("label", { style: labelStyle }, ["3⃣ Title "]), "div",
this.TitleInput, { style: { ...sectionStyle, } },
titleNumDom, [
]); $el("label", { style: labelStyle }, ["3⃣ Title "]),
this.TitleInput,
titleNumDom,
]
);
const SubtitleSection = $el("div", { style: sectionStyle }, [ const SubtitleSection = $el("div", { style: sectionStyle }, [
$el("label", { style: labelStyle }, ["4⃣ Subtitle "]), $el("label", { style: labelStyle }, ["4⃣ Subtitle "]),
this.SubTitleInput, this.SubTitleInput,
@@ -377,7 +379,7 @@ export class CopusShareDialog extends ComfyDialog {
}); });
const blockChainSection_lock = $el("div", { style: sectionStyle }, [ const blockChainSection_lock = $el("div", { style: sectionStyle }, [
$el("label", { style: labelStyle }, ["6Download threshold"]), $el("label", { style: labelStyle }, ["6Pay to download"]),
$el( $el(
"label", "label",
{ {
@@ -390,42 +392,11 @@ export class CopusShareDialog extends ComfyDialog {
}, },
[ [
this.radioButtonsCheck_lock, this.radioButtonsCheck_lock,
$el( $el("div", { style: { marginLeft: "5px" ,display:'flex',alignItems:'center'} }, [
"div", $el("span", { style: { marginLeft: "5px" } }, ["ON"]),
{ $el("span", { style: { marginLeft: "20px",marginRight:'10px' ,color:'#fff'} }, ["Price US$"]),
style: { this.LockInput
marginLeft: "5px", ]),
display: "flex",
alignItems: "center",
position: "relative",
},
},
[
$el("span", { style: { marginLeft: "5px" } }, ["ON"]),
$el(
"span",
{
style: {
marginLeft: "20px",
marginRight: "10px",
color: "#fff",
},
},
["Unlock with"]
),
$el("img", {
style: {
width: "16px",
height: "16px",
position: "absolute",
right: "75px",
zIndex: "100",
},
src: "https://static.copus.io/images/admin/202507/prod/e2919a1d8f3c2d99d3b8fe27ff94b841.png",
}),
this.LockInput,
]
),
] ]
), ),
$el( $el(
@@ -433,25 +404,14 @@ export class CopusShareDialog extends ComfyDialog {
{ style: { display: "flex", alignItems: "center", cursor: "pointer" } }, { style: { display: "flex", alignItems: "center", cursor: "pointer" } },
[ [
this.radioButtonsCheckOff_lock, this.radioButtonsCheckOff_lock,
$el( $el("span", { style: { marginLeft: "5px" } }, ["OFF"]),
"div",
{
style: {
marginLeft: "5px",
display: "flex",
alignItems: "center",
},
},
[$el("span", { style: { marginLeft: "5px" } }, ["OFF"])]
),
] ]
), ),
$el( $el(
"p", "p",
{ style: { fontSize: "16px", color: "#fff", margin: "10px 0 0 0" } }, { style: { fontSize: "16px", color: "#fff", margin: "10px 0 0 0" } },
[ ["Get paid from your workflow. You can change the price and withdraw your earnings on Copus."]
]
), ),
]); ]);
@@ -472,7 +432,7 @@ export class CopusShareDialog extends ComfyDialog {
}); });
const blockChainSection = $el("div", { style: sectionStyle }, [ const blockChainSection = $el("div", { style: sectionStyle }, [
$el("label", { style: labelStyle }, ["8️⃣ Store on blockchain "]), $el("label", { style: labelStyle }, ["7️⃣ Store on blockchain "]),
$el( $el(
"label", "label",
{ {
@@ -503,139 +463,6 @@ export class CopusShareDialog extends ComfyDialog {
), ),
]); ]);
this.ratingRadioButtonsCheck0 = $el("input", {
type: "radio",
name: "content_rating",
value: "0",
id: "content_rating0",
});
this.ratingRadioButtonsCheck1 = $el("input", {
type: "radio",
name: "content_rating",
value: "1",
id: "content_rating1",
});
this.ratingRadioButtonsCheck2 = $el("input", {
type: "radio",
name: "content_rating",
value: "2",
id: "content_rating2",
});
this.ratingRadioButtonsCheck_1 = $el("input", {
type: "radio",
name: "content_rating",
value: "-1",
id: "content_rating_1",
checked: true,
});
// content rating
const contentRatingSection = $el("div", { style: sectionStyle }, [
$el("label", { style: labelStyle }, ["7⃣ Content rating "]),
$el(
"label",
{
style: {
marginTop: "10px",
display: "flex",
alignItems: "center",
cursor: "pointer",
},
},
[
this.ratingRadioButtonsCheck0,
$el("img", {
style: {
width: "12px",
height: "12px",
marginLeft: "5px",
},
src: "https://static.copus.io/images/client/202507/test/b9f17da83b054d53cd0cb4508c2c30dc.png",
}),
$el("span", { style: { marginLeft: "5px", color: "#fff" } }, [
"All ages",
]),
]
),
$el(
"p",
{ style: { fontSize: "10px", color: "#fff", marginLeft: "20px" } },
["Safe for all viewers; no profanity, violence, or mature themes."]
),
$el(
"label",
{ style: { display: "flex", alignItems: "center", cursor: "pointer" } },
[
this.ratingRadioButtonsCheck1,
$el("img", {
style: {
width: "12px",
height: "12px",
marginLeft: "5px",
},
src: "https://static.copus.io/images/client/202507/test/7848bc0d3690671df21c7cf00c4cfc81.png",
}),
$el("span", { style: { marginLeft: "5px", color: "#fff" } }, [
"13+ (Teen)",
]),
]
),
$el(
"p",
{ style: { fontSize: "10px", color: "#fff", marginLeft: "20px" } },
[
"Mild language, light themes, or cartoon violence; no explicit content. ",
]
),
$el(
"label",
{ style: { display: "flex", alignItems: "center", cursor: "pointer" } },
[
this.ratingRadioButtonsCheck2,
$el("img", {
style: {
width: "12px",
height: "12px",
marginLeft: "5px",
},
src: "https://static.copus.io/images/client/202507/test/bc51839c208d68d91173e43c23bff039.png",
}),
$el("span", { style: { marginLeft: "5px", color: "#fff" } }, [
"18+ (Explicit)",
]),
]
),
$el(
"p",
{ style: { fontSize: "10px", color: "#fff", marginLeft: "20px" } },
[
"Explicit content, including sexual content, strong violence, or intense themes. ",
]
),
$el(
"label",
{ style: { display: "flex", alignItems: "center", cursor: "pointer" } },
[
this.ratingRadioButtonsCheck_1,
$el("img", {
style: {
width: "12px",
height: "12px",
marginLeft: "5px",
},
src: "https://static.copus.io/images/client/202507/test/5c802fdcaaea4e7bbed37393eec0d5ba.png",
}),
$el("span", { style: { marginLeft: "5px", color: "#fff" } }, [
"Not Rated",
]),
]
),
$el(
"p",
{ style: { fontSize: "10px", color: "#fff", marginLeft: "20px" } },
["No age rating provided."]
),
]);
// Message Section // Message Section
this.message = $el( this.message = $el(
@@ -699,7 +526,6 @@ export class CopusShareDialog extends ComfyDialog {
DescriptionSection, DescriptionSection,
// contestSection, // contestSection,
blockChainSection_lock, blockChainSection_lock,
contentRatingSection,
blockChainSection, blockChainSection,
this.message, this.message,
buttonsSection, buttonsSection,
@@ -708,7 +534,7 @@ export class CopusShareDialog extends ComfyDialog {
return layout; return layout;
} }
/** /**
* api * api
* @param {url} path * @param {url} path
* @param {params} options * @param {params} options
* @param {statusText} statusText * @param {statusText} statusText
@@ -761,9 +587,7 @@ export class CopusShareDialog extends ComfyDialog {
url: data, url: data,
}); });
} else { } else {
throw new Error( throw new Error("make sure your API key is correct and try again later");
"make sure your API key is correct and try again later"
);
} }
} catch (e) { } catch (e) {
if (e?.response?.status === 413) { if (e?.response?.status === 413) {
@@ -804,15 +628,8 @@ export class CopusShareDialog extends ComfyDialog {
subTitle: this.SubTitleInput.value, subTitle: this.SubTitleInput.value,
content: this.descriptionInput.value, content: this.descriptionInput.value,
storeOnChain: this.radioButtonsCheck.checked ? true : false, storeOnChain: this.radioButtonsCheck.checked ? true : false,
lockState: this.radioButtonsCheck_lock.checked ? 2 : 0, lockState:this.radioButtonsCheck_lock.checked ? 2 : 0,
unlockPrice: this.LockInput.value, unlockPrice:this.LockInput.value,
rating: this.ratingRadioButtonsCheck0.checked
? 0
: this.ratingRadioButtonsCheck1.checked
? 1
: this.ratingRadioButtonsCheck2.checked
? 2
: -1,
}; };
if (!this.keyInput.value) { if (!this.keyInput.value) {
@@ -827,8 +644,8 @@ export class CopusShareDialog extends ComfyDialog {
throw new Error("Title is required"); throw new Error("Title is required");
} }
if (this.radioButtonsCheck_lock.checked) { if(this.radioButtonsCheck_lock.checked){
if (!this.LockInput.value) { if (!this.LockInput.value){
throw new Error("Price is required"); throw new Error("Price is required");
} }
} }
@@ -878,23 +695,23 @@ export class CopusShareDialog extends ComfyDialog {
"Uploading workflow..." "Uploading workflow..."
); );
if (res.status && res.data.status && res.data) { if (res.status && res.data.status && res.data) {
localStorage.setItem("copus_token", this.keyInput.value); localStorage.setItem("copus_token",this.keyInput.value);
const { data } = res.data; const { data } = res.data;
if (data) { if (data) {
const url = `${DEFAULT_HOMEPAGE_URL}/work/${data}`; const url = `${DEFAULT_HOMEPAGE_URL}/work/${data}`;
this.message.innerHTML = `Workflow has been shared successfully. <a href="${url}" target="_blank">Click here to view it.</a>`; this.message.innerHTML = `Workflow has been shared successfully. <a href="${url}" target="_blank">Click here to view it.</a>`;
this.previewImage.src = ""; this.previewImage.src = "";
this.previewImage.style.display = "none"; this.previewImage.style.display = "none";
this.uploadedImages = []; this.uploadedImages = [];
this.allFilesImages = []; this.allFilesImages = [];
this.allFiles = []; this.allFiles = [];
this.TitleInput.value = ""; this.TitleInput.value = "";
this.SubTitleInput.value = ""; this.SubTitleInput.value = "";
this.descriptionInput.value = ""; this.descriptionInput.value = "";
this.selectedFile = null; this.selectedFile = null;
} }
} }
} catch (e) { } catch (e) {
throw new Error("Error sharing workflow: " + e.message); throw new Error("Error sharing workflow: " + e.message);
} }
@@ -940,7 +757,7 @@ export class CopusShareDialog extends ComfyDialog {
this.element.style.display = "block"; this.element.style.display = "block";
this.previewImage.src = ""; this.previewImage.src = "";
this.previewImage.style.display = "none"; this.previewImage.style.display = "none";
this.keyInput.value = apiToken != null ? apiToken : ""; this.keyInput.value = apiToken!=null?apiToken:"";
this.uploadedImages = []; this.uploadedImages = [];
this.allFilesImages = []; this.allFilesImages = [];
this.allFiles = []; this.allFiles = [];

View File

@@ -1,5 +1,5 @@
.cn-manager {
-   --grid-font: -apple-system, BlinkMacSystemFont, "Segoe UI", "Noto Sans", Helvetica, Arial, sans-serif, "Apple Color Emoji", "Segoe UI Emoji";
+   --grid-font: -apple-system, BlinkMacSystemFont, "Segue UI", "Noto Sans", Helvetica, Arial, sans-serif, "Apple Color Emoji", "Segoe UI Emoji";
    z-index: 1099;
    width: 80%;
    height: 80%;

View File

@@ -714,7 +714,6 @@ export class CustomNodesManager {
link.href = rowItem.reference; link.href = rowItem.reference;
link.target = '_blank'; link.target = '_blank';
link.innerHTML = `<b>${title}</b>`; link.innerHTML = `<b>${title}</b>`;
link.title = rowItem.originalData.id;
container.appendChild(link); container.appendChild(link);
return container; return container;
@@ -1535,7 +1534,7 @@ export class CustomNodesManager {
else { else {
this.batch_id = generateUUID(); this.batch_id = generateUUID();
batch['batch_id'] = this.batch_id; batch['batch_id'] = this.batch_id;
const res = await api.fetchApi(`/v2/manager/queue/batch`, { const res = await api.fetchApi(`/v2/manager/queue/batch`, {
method: 'POST', method: 'POST',
body: JSON.stringify(batch) body: JSON.stringify(batch)
@@ -1550,7 +1549,7 @@ export class CustomNodesManager {
errorMsg = `[FAIL] ${item.title}`; errorMsg = `[FAIL] ${item.title}`;
} }
} }
this.showStop(); this.showStop();
showTerminal(); showTerminal();
} }
@@ -1626,35 +1625,17 @@ export class CustomNodesManager {
getNodesInWorkflow() {
    let usedGroupNodes = new Set();
    let allUsedNodes = {};
-   const visitedGraphs = new Set();
-   const visitGraph = (graph) => {
-       if (!graph || visitedGraphs.has(graph)) return;
-       visitedGraphs.add(graph);
-       const nodes = graph._nodes || graph.nodes || [];
-       for(let k in nodes) {
-           let node = nodes[k];
-           if (!node) continue;
-           // If it's a SubgraphNode, recurse into its graph and continue searching
-           if (node.isSubgraphNode?.() && node.subgraph) {
-               visitGraph(node.subgraph);
-           }
-           if (!node.type) continue;
-           // Group nodes / components
-           if(typeof node.type === 'string' && node.type.startsWith('workflow>')) {
-               usedGroupNodes.add(node.type.slice(9));
-               continue;
-           }
-           allUsedNodes[node.type] = node;
-       }
-   };
-   visitGraph(app.graph);
+   for(let k in app.graph._nodes) {
+       let node = app.graph._nodes[k];
+       if(node.type.startsWith('workflow>')) {
+           usedGroupNodes.add(node.type.slice(9));
+           continue;
+       }
+       allUsedNodes[node.type] = node;
+   }
    for(let k of usedGroupNodes) {
        let subnodes = app.graph.extra.groupNodes[k]?.nodes;

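The `visitGraph` variant in the diff above walks nested subgraphs with a visited set so shared or cyclic subgraphs are processed only once. The same pattern, sketched in Python with a deliberately generic (hypothetical) graph/node shape:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Node:
    type: str
    subgraph: Optional["Graph"] = None

@dataclass
class Graph:
    nodes: list = field(default_factory=list)

def collect_node_types(root: Graph) -> set:
    seen_graphs = set()   # ids of graphs already visited; guards against cycles
    found = set()

    def visit(graph: Graph) -> None:
        if id(graph) in seen_graphs:
            return
        seen_graphs.add(id(graph))
        for node in graph.nodes:
            if node.subgraph is not None:
                visit(node.subgraph)   # recurse first, then keep scanning this level
            found.add(node.type)

    visit(root)
    return found

inner = Graph([Node("KSampler")])
outer = Graph([Node("LoadImage"), Node("workflow>group", subgraph=inner)])
print(collect_node_types(outer))  # e.g. {'LoadImage', 'workflow>group', 'KSampler'} (order may vary)
```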
View File

@@ -41,12 +41,11 @@ from ..common.enums import NetworkMode, SecurityLevel, DBMode
from ..common import context
-version_code = [4, 0, 3]
+version_code = [4, 0]
version_str = f"V{version_code[0]}.{version_code[1]}" + (f'.{version_code[2]}' if len(version_code) > 2 else '')
DEFAULT_CHANNEL = "https://raw.githubusercontent.com/Comfy-Org/ComfyUI-Manager/main"
-DEFAULT_CHANNEL_LEGACY = "https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main"
default_custom_nodes_path = None
@@ -161,7 +160,7 @@ comfy_ui_revision = "Unknown"
comfy_ui_commit_datetime = datetime(1900, 1, 1, 0, 0, 0)
channel_dict = None
-valid_channels = {'default', 'local', DEFAULT_CHANNEL, DEFAULT_CHANNEL_LEGACY}
+valid_channels = {'default', 'local'}
channel_list = None
@@ -305,86 +304,18 @@ class ManagedResult:
return self return self
class NormalizedKeyDict:
def __init__(self):
self._store = {}
self._key_map = {}
def _normalize_key(self, key):
if isinstance(key, str):
return key.strip().lower()
return key
def __setitem__(self, key, value):
norm_key = self._normalize_key(key)
self._key_map[norm_key] = key
self._store[key] = value
def __getitem__(self, key):
norm_key = self._normalize_key(key)
original_key = self._key_map[norm_key]
return self._store[original_key]
def __delitem__(self, key):
norm_key = self._normalize_key(key)
original_key = self._key_map.pop(norm_key)
del self._store[original_key]
def __contains__(self, key):
return self._normalize_key(key) in self._key_map
def get(self, key, default=None):
return self[key] if key in self else default
def setdefault(self, key, default=None):
if key in self:
return self[key]
self[key] = default
return default
def pop(self, key, default=None):
if key in self:
val = self[key]
del self[key]
return val
if default is not None:
return default
raise KeyError(key)
def keys(self):
return self._store.keys()
def values(self):
return self._store.values()
def items(self):
return self._store.items()
def __iter__(self):
return iter(self._store)
def __len__(self):
return len(self._store)
def __repr__(self):
return repr(self._store)
def to_dict(self):
return dict(self._store)
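A quick usage sketch for the `NormalizedKeyDict` defined above; the behaviour shown is inferred from the class body (lookups are case- and whitespace-insensitive while the originally inserted key is preserved), not from any documented API.

```python
d = NormalizedKeyDict()
d["ComfyUI-Impact-Pack"] = "1.0.0"

print("comfyui-impact-pack" in d)   # True  -- containment uses the normalized key
print(d[" COMFYUI-IMPACT-PACK "])   # '1.0.0'  -- lookup tolerates case and whitespace
print(list(d.keys()))               # ['ComfyUI-Impact-Pack']  -- original key is kept
```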
class UnifiedManager:
    def __init__(self):
        self.installed_node_packages: dict[str, InstalledNodePackage] = {}
-       self.cnr_inactive_nodes = NormalizedKeyDict()  # node_id -> node_version -> fullpath
+       self.cnr_inactive_nodes = {}                   # node_id -> node_version -> fullpath
-       self.nightly_inactive_nodes = NormalizedKeyDict()  # node_id -> fullpath
+       self.nightly_inactive_nodes = {}               # node_id -> fullpath
        self.unknown_inactive_nodes = {}               # node_id -> repo url * fullpath
-       self.active_nodes = NormalizedKeyDict()        # node_id -> node_version * fullpath
+       self.active_nodes = {}                         # node_id -> node_version * fullpath
        self.unknown_active_nodes = {}                 # node_id -> repo url * fullpath
-       self.cnr_map = NormalizedKeyDict()             # node_id -> cnr info
+       self.cnr_map = {}                              # node_id -> cnr info
        self.repo_cnr_map = {}                         # repo_url -> cnr info
        self.custom_node_map_cache = {}                # (channel, mode) -> augmented custom node list json
        self.processed_install = set()
    def get_module_name(self, x):
@@ -790,7 +721,7 @@ class UnifiedManager:
        channel = normalize_channel(channel)
        nodes = await self.load_nightly(channel, mode)
-       res = NormalizedKeyDict()
+       res = {}
        added_cnr = set()
        for v in nodes.values():
            v = v[0]
@@ -1391,7 +1322,6 @@ class UnifiedManager:
return ManagedResult('skip') return ManagedResult('skip')
elif self.is_disabled(node_id): elif self.is_disabled(node_id):
return self.unified_enable(node_id) return self.unified_enable(node_id)
else: else:
version_spec = self.resolve_unspecified_version(node_id) version_spec = self.resolve_unspecified_version(node_id)
@@ -1627,18 +1557,16 @@ def read_config():
    config = configparser.ConfigParser(strict=False)
    config.read(context.manager_config_path)
    default_conf = config['default']
+   manager_util.use_uv = default_conf['use_uv'].lower() == 'true' if 'use_uv' in default_conf else False
    def get_bool(key, default_value):
        return default_conf[key].lower() == 'true' if key in default_conf else False
-   manager_util.use_uv = default_conf['use_uv'].lower() == 'true' if 'use_uv' in default_conf else False
-   manager_util.bypass_ssl = get_bool('bypass_ssl', False)
    return {
        'http_channel_enabled': get_bool('http_channel_enabled', False),
        'preview_method': default_conf.get('preview_method', manager_funcs.get_current_preview_method()).lower(),
        'git_exe': default_conf.get('git_exe', ''),
-       'use_uv': get_bool('use_uv', True),
+       'use_uv': get_bool('use_uv', False),
        'channel_url': default_conf.get('channel_url', DEFAULT_CHANNEL),
        'default_cache_as_channel_url': get_bool('default_cache_as_channel_url', False),
        'share_option': default_conf.get('share_option', 'all').lower(),
@@ -1657,17 +1585,15 @@ def read_config():
    except Exception:
        manager_util.use_uv = False
-       manager_util.bypass_ssl = False
        return {
            'http_channel_enabled': False,
            'preview_method': manager_funcs.get_current_preview_method(),
            'git_exe': '',
-           'use_uv': True,
+           'use_uv': False,
            'channel_url': DEFAULT_CHANNEL,
            'default_cache_as_channel_url': False,
            'share_option': 'all',
-           'bypass_ssl': manager_util.bypass_ssl,
+           'bypass_ssl': False,
            'file_logging': True,
            'component_policy': 'workflow',
            'update_policy': 'stable-comfyui',
@@ -1913,27 +1839,6 @@ def execute_install_script(url, repo_path, lazy_mode=False, instant_execution=Fa
return True return True
def install_manager_requirements(repo_path):
"""
Install packages from manager_requirements.txt if it exists.
This is specifically for ComfyUI's manager_requirements.txt.
"""
manager_requirements_path = os.path.join(repo_path, "manager_requirements.txt")
if not os.path.exists(manager_requirements_path):
return
logging.info("[ComfyUI-Manager] Installing manager_requirements.txt")
with open(manager_requirements_path, "r") as f:
for line in f:
line = line.strip()
if line and not line.startswith('#'):
if '#' in line:
line = line.split('#')[0].strip()
if line:
install_cmd = manager_util.make_pip_cmd(["install", line])
subprocess.run(install_cmd)
def git_repo_update_check_with(path, do_fetch=False, do_update=False, no_deps=False): def git_repo_update_check_with(path, do_fetch=False, do_update=False, no_deps=False):
""" """
@@ -2453,7 +2358,6 @@ def update_to_stable_comfyui(repo_path):
else: else:
logging.info(f"[ComfyUI-Manager] Updating ComfyUI: {current_tag} -> {latest_tag}") logging.info(f"[ComfyUI-Manager] Updating ComfyUI: {current_tag} -> {latest_tag}")
repo.git.checkout(latest_tag) repo.git.checkout(latest_tag)
execute_install_script("ComfyUI", repo_path, instant_execution=False, no_deps=False)
return 'updated', latest_tag return 'updated', latest_tag
except Exception: except Exception:
traceback.print_exc() traceback.print_exc()
@@ -2585,13 +2489,9 @@ def check_state_of_git_node_pack_single(item, do_fetch=False, do_update_check=Tr
def get_installed_pip_packages():
-   try:
-       # extract pip package infos
-       cmd = manager_util.make_pip_cmd(['freeze'])
-       pips = subprocess.check_output(cmd, text=True).split('\n')
-   except Exception as e:
-       logging.warning("[ComfyUI-Manager] Could not enumerate pip packages for snapshot: %s", e)
-       return {}
+   # extract pip package infos
+   cmd = manager_util.make_pip_cmd(['freeze'])
+   pips = subprocess.check_output(cmd, text=True).split('\n')
    res = {}
    for x in pips:
@@ -2876,7 +2776,7 @@ async def get_unified_total_nodes(channel, mode, regsitry_cache_mode='cache'):
        if cnr_id is not None:
            # cnr or nightly version
-           cnr_ids.discard(cnr_id)
+           cnr_ids.remove(cnr_id)
            updatable = False
            cnr = unified_manager.cnr_map[cnr_id]
@@ -3040,11 +2940,6 @@ async def restore_snapshot(snapshot_path, git_helper_extras=None):
info = yaml.load(snapshot_file, Loader=yaml.SafeLoader) info = yaml.load(snapshot_file, Loader=yaml.SafeLoader)
info = info['custom_nodes'] info = info['custom_nodes']
if 'pips' in info and info['pips']:
pips = info['pips']
else:
pips = {}
# for cnr restore # for cnr restore
cnr_info = info.get('cnr_custom_nodes') cnr_info = info.get('cnr_custom_nodes')
if cnr_info is not None: if cnr_info is not None:
@@ -3251,8 +3146,6 @@ async def restore_snapshot(snapshot_path, git_helper_extras=None):
unified_manager.repo_install(repo_url, to_path, instant_execution=True, no_deps=False, return_postinstall=False) unified_manager.repo_install(repo_url, to_path, instant_execution=True, no_deps=False, return_postinstall=False)
cloned_repos.append(repo_name) cloned_repos.append(repo_name)
manager_util.restore_pip_snapshot(pips, git_helper_extras)
# print summary # print summary
for x in cloned_repos: for x in cloned_repos:
print(f"[ INSTALLED ] {x}") print(f"[ INSTALLED ] {x}")

View File

@@ -23,7 +23,6 @@ from ..common import manager_util
from ..common import cm_global from ..common import cm_global
from ..common import manager_downloader from ..common import manager_downloader
from ..common import context from ..common import context
from ..common import manager_security
logging.info(f"### Loading: ComfyUI-Manager ({core.version_str})") logging.info(f"### Loading: ComfyUI-Manager ({core.version_str})")
@@ -37,8 +36,7 @@ logging.info("[ComfyUI-Manager] network_mode: " + network_mode_description)
comfy_ui_hash = "-"
comfyui_tag = None
-SECURITY_MESSAGE_MIDDLE = "ERROR: To use this action, a security_level of `normal or below` is required. Please contact the administrator.\nReference: https://github.com/Comfy-Org/ComfyUI-Manager#security-policy"
-SECURITY_MESSAGE_MIDDLE_P = "ERROR: To use this action, security_level must be `normal or below`, and network_mode must be set to `personal_cloud`. Please contact the administrator.\nReference: https://github.com/ltdrdata/ComfyUI-Manager#security-policy"
+SECURITY_MESSAGE_MIDDLE_OR_BELOW = "ERROR: To use this action, a security_level of `middle or below` is required. Please contact the administrator.\nReference: https://github.com/Comfy-Org/ComfyUI-Manager#security-policy"
SECURITY_MESSAGE_NORMAL_MINUS = "ERROR: To use this feature, you must either set '--listen' to a local IP and set the security level to 'normal-' or lower, or set the security level to 'middle' or 'weak'. Please contact the administrator.\nReference: https://github.com/Comfy-Org/ComfyUI-Manager#security-policy"
SECURITY_MESSAGE_GENERAL = "ERROR: This installation is not allowed in this security_level. Please contact the administrator.\nReference: https://github.com/Comfy-Org/ComfyUI-Manager#security-policy"
SECURITY_MESSAGE_NORMAL_MINUS_MODEL = "ERROR: Downloading models that are not in '.safetensors' format is only allowed for models registered in the 'default' channel at this security level. If you want to download this model, set the security level to 'normal-' or lower."
@@ -95,27 +93,13 @@ model_dir_name_map = {
def is_allowed_security_level(level):
-   is_personal_cloud = core.get_config()['network_mode'].lower() == 'personal_cloud'
    if level == 'block':
        return False
-   elif level == 'high+':
-       if is_local_mode:
-           return core.get_config()['security_level'] in ['weak', 'normal-']
-       elif is_personal_cloud:
-           return core.get_config()['security_level'] == 'weak'
-       else:
-           return False
    elif level == 'high':
        if is_local_mode:
            return core.get_config()['security_level'] in ['weak', 'normal-']
        else:
            return core.get_config()['security_level'] == 'weak'
-   elif level == 'middle+':
-       if is_local_mode or is_personal_cloud:
-           return core.get_config()['security_level'] in ['weak', 'normal', 'normal-']
-       else:
-           return False
    elif level == 'middle':
        return core.get_config()['security_level'] in ['weak', 'normal', 'normal-']
    else:
@@ -132,7 +116,7 @@ async def get_risky_level(files, pip_packages):
    for x in files:
        if x not in all_urls:
-           return "high+"
+           return "high"
    all_pip_packages = set()
    for x in json_data1['custom_nodes'] + json_data2['custom_nodes']:
@@ -142,7 +126,7 @@ async def get_risky_level(files, pip_packages):
        if p not in all_pip_packages:
            return "block"
-   return "middle+"
+   return "middle"
class ManagerFuncsInComfyUI(core.ManagerFuncs): class ManagerFuncsInComfyUI(core.ManagerFuncs):
@@ -561,8 +545,6 @@ async def task_worker():
logging.error("ComfyUI update failed") logging.error("ComfyUI update failed")
return "fail" return "fail"
elif res == "updated": elif res == "updated":
core.install_manager_requirements(repo_path)
if is_stable: if is_stable:
logging.info("ComfyUI is updated to latest stable version.") logging.info("ComfyUI is updated to latest stable version.")
return "success-stable-"+latest_tag return "success-stable-"+latest_tag
@@ -668,7 +650,7 @@ async def task_worker():
return 'success' return 'success'
except Exception as e: except Exception as e:
logging.error(f"[ComfyUI-Manager] ERROR: {e}") logging.error(f"[ComfyUI-Manager] ERROR: {e}", file=sys.stderr)
return f"Model installation error: {model_url}" return f"Model installation error: {model_url}"
@@ -776,29 +758,29 @@ async def queue_batch(request):
for x in v: for x in v:
res = await _uninstall_custom_node(x) res = await _uninstall_custom_node(x)
if res.status != 200: if res.status != 200:
failed.add(x['id']) failed.add(x[0])
else: else:
res = await _install_custom_node(x) res = await _install_custom_node(x)
if res.status != 200: if res.status != 200:
failed.add(x['id']) failed.add(x[0])
elif k == 'install': elif k == 'install':
for x in v: for x in v:
res = await _install_custom_node(x) res = await _install_custom_node(x)
if res.status != 200: if res.status != 200:
failed.add(x['id']) failed.add(x[0])
elif k == 'uninstall': elif k == 'uninstall':
for x in v: for x in v:
res = await _uninstall_custom_node(x) res = await _uninstall_custom_node(x)
if res.status != 200: if res.status != 200:
failed.add(x['id']) failed.add(x[0])
elif k == 'update': elif k == 'update':
for x in v: for x in v:
res = await _update_custom_node(x) res = await _update_custom_node(x)
if res.status != 200: if res.status != 200:
failed.add(x['id']) failed.add(x[0])
elif k == 'update_comfyui': elif k == 'update_comfyui':
await update_comfyui(None) await update_comfyui(None)
@@ -811,13 +793,13 @@ async def queue_batch(request):
for x in v: for x in v:
res = await _install_model(x) res = await _install_model(x)
if res.status != 200: if res.status != 200:
failed.add(x['id']) failed.add(x[0])
elif k == 'fix': elif k == 'fix':
for x in v: for x in v:
res = await _fix_custom_node(x) res = await _fix_custom_node(x)
if res.status != 200: if res.status != 200:
failed.add(x['id']) failed.add(x[0])
with task_worker_lock: with task_worker_lock:
finalize_temp_queue_batch(json_data, failed) finalize_temp_queue_batch(json_data, failed)
@@ -928,8 +910,8 @@ async def update_all(request):
async def _update_all(json_data):
-   if not is_allowed_security_level('middle+'):
-       logging.error(SECURITY_MESSAGE_MIDDLE_P)
+   if not is_allowed_security_level('middle'):
+       logging.error(SECURITY_MESSAGE_MIDDLE_OR_BELOW)
        return web.Response(status=403)
    with task_worker_lock:
@@ -1074,17 +1056,14 @@ async def fetch_customnode_list(request):
    if channel != 'local':
        found = 'custom'
-       if channel == core.DEFAULT_CHANNEL or channel == core.DEFAULT_CHANNEL_LEGACY:
-           channel = 'default'
-       else:
-           for name, url in core.get_channel_dict().items():
-               if url == channel:
-                   found = name
-                   break
+       for name, url in core.get_channel_dict().items():
+           if url == channel:
+               found = name
+               break
        channel = found
-   result = dict(channel=channel, node_packs=node_packs.to_dict())
+   result = dict(channel=channel, node_packs=node_packs)
    return web.json_response(result, content_type='application/json')
@@ -1183,7 +1162,7 @@ async def get_snapshot_list(request):
@routes.get("/v2/snapshot/remove")
async def remove_snapshot(request):
    if not is_allowed_security_level('middle'):
-       logging.error(SECURITY_MESSAGE_MIDDLE)
+       logging.error(SECURITY_MESSAGE_MIDDLE_OR_BELOW)
        return web.Response(status=403)
    try:
@@ -1200,8 +1179,8 @@ async def remove_snapshot(request):
@routes.get("/v2/snapshot/restore")
async def restore_snapshot(request):
-   if not is_allowed_security_level('middle+'):
-       logging.error(SECURITY_MESSAGE_MIDDLE_P)
+   if not is_allowed_security_level('middle'):
+       logging.error(SECURITY_MESSAGE_MIDDLE_OR_BELOW)
        return web.Response(status=403)
    try:
@@ -1313,65 +1292,6 @@ async def import_fail_info(request):
return web.Response(status=400) return web.Response(status=400)
@routes.post("/v2/customnode/import_fail_info_bulk")
async def import_fail_info_bulk(request):
try:
json_data = await request.json()
# Basic validation - ensure we have either cnr_ids or urls
if not isinstance(json_data, dict):
return web.Response(status=400, text="Request body must be a JSON object")
if "cnr_ids" not in json_data and "urls" not in json_data:
return web.Response(
status=400, text="Either 'cnr_ids' or 'urls' field is required"
)
await core.unified_manager.reload('cache')
await core.unified_manager.get_custom_nodes('default', 'cache')
results = {}
if "cnr_ids" in json_data:
if not isinstance(json_data["cnr_ids"], list):
return web.Response(status=400, text="'cnr_ids' must be an array")
for cnr_id in json_data["cnr_ids"]:
if not isinstance(cnr_id, str):
results[cnr_id] = {"error": "cnr_id must be a string"}
continue
module_name = core.unified_manager.get_module_name(cnr_id)
if module_name is not None:
info = cm_global.error_dict.get(module_name)
if info is not None:
results[cnr_id] = info
else:
results[cnr_id] = None
else:
results[cnr_id] = None
if "urls" in json_data:
if not isinstance(json_data["urls"], list):
return web.Response(status=400, text="'urls' must be an array")
for url in json_data["urls"]:
if not isinstance(url, str):
results[url] = {"error": "url must be a string"}
continue
module_name = core.unified_manager.get_module_name(url)
if module_name is not None:
info = cm_global.error_dict.get(module_name)
if info is not None:
results[url] = info
else:
results[url] = None
else:
results[url] = None
return web.json_response(results)
except Exception as e:
logging.error(f"[ComfyUI-Manager] Error processing bulk import fail info: {e}")
return web.Response(status=500, text="Internal server error")
@routes.post("/v2/manager/queue/reinstall") @routes.post("/v2/manager/queue/reinstall")
async def reinstall_custom_node(request): async def reinstall_custom_node(request):
await uninstall_custom_node(request) await uninstall_custom_node(request)
@@ -1436,8 +1356,8 @@ async def install_custom_node(request):
async def _install_custom_node(json_data): async def _install_custom_node(json_data):
if not is_allowed_security_level('middle+'): if not is_allowed_security_level('middle'):
logging.error(SECURITY_MESSAGE_MIDDLE_P) logging.error(SECURITY_MESSAGE_MIDDLE_OR_BELOW)
return web.Response(status=403, text="A security error has occurred. Please check the terminal logs") return web.Response(status=403, text="A security error has occurred. Please check the terminal logs")
# non-nightly cnr is safe # non-nightly cnr is safe
@@ -1542,7 +1462,7 @@ async def _fix_custom_node(json_data):
@routes.post("/v2/customnode/install/git_url") @routes.post("/v2/customnode/install/git_url")
async def install_custom_node_git_url(request): async def install_custom_node_git_url(request):
if not is_allowed_security_level('high+'): if not is_allowed_security_level('high'):
logging.error(SECURITY_MESSAGE_NORMAL_MINUS) logging.error(SECURITY_MESSAGE_NORMAL_MINUS)
return web.Response(status=403) return web.Response(status=403)
@@ -1562,7 +1482,7 @@ async def install_custom_node_git_url(request):
@routes.post("/v2/customnode/install/pip") @routes.post("/v2/customnode/install/pip")
async def install_custom_node_pip(request): async def install_custom_node_pip(request):
if not is_allowed_security_level('high+'): if not is_allowed_security_level('high'):
logging.error(SECURITY_MESSAGE_NORMAL_MINUS) logging.error(SECURITY_MESSAGE_NORMAL_MINUS)
return web.Response(status=403) return web.Response(status=403)
@@ -1580,7 +1500,7 @@ async def uninstall_custom_node(request):
async def _uninstall_custom_node(json_data): async def _uninstall_custom_node(json_data):
if not is_allowed_security_level('middle'): if not is_allowed_security_level('middle'):
logging.error(SECURITY_MESSAGE_MIDDLE) logging.error(SECURITY_MESSAGE_MIDDLE_OR_BELOW)
return web.Response(status=403, text="A security error has occurred. Please check the terminal logs") return web.Response(status=403, text="A security error has occurred. Please check the terminal logs")
node_id = json_data.get('id') node_id = json_data.get('id')
@@ -1606,7 +1526,7 @@ async def update_custom_node(request):
async def _update_custom_node(json_data): async def _update_custom_node(json_data):
if not is_allowed_security_level('middle'): if not is_allowed_security_level('middle'):
logging.error(SECURITY_MESSAGE_MIDDLE) logging.error(SECURITY_MESSAGE_MIDDLE_OR_BELOW)
return web.Response(status=403, text="A security error has occurred. Please check the terminal logs") return web.Response(status=403, text="A security error has occurred. Please check the terminal logs")
node_id = json_data.get('id') node_id = json_data.get('id')
@@ -1697,8 +1617,8 @@ async def install_model(request):
async def _install_model(json_data): async def _install_model(json_data):
if not is_allowed_security_level('middle+'): if not is_allowed_security_level('middle'):
logging.error(SECURITY_MESSAGE_MIDDLE_P) logging.error(SECURITY_MESSAGE_MIDDLE_OR_BELOW)
return web.Response(status=403, text="A security error has occurred. Please check the terminal logs") return web.Response(status=403, text="A security error has occurred. Please check the terminal logs")
# validate request # validate request
@@ -1706,7 +1626,7 @@ async def _install_model(json_data):
logging.error(f"[ComfyUI-Manager] Invalid model install request is detected: {json_data}") logging.error(f"[ComfyUI-Manager] Invalid model install request is detected: {json_data}")
return web.Response(status=400, text="Invalid model install request is detected") return web.Response(status=400, text="Invalid model install request is detected")
if not json_data['filename'].endswith('.safetensors') and not is_allowed_security_level('high+'): if not json_data['filename'].endswith('.safetensors') and not is_allowed_security_level('high'):
models_json = await core.get_data_by_mode('cache', 'model-list.json', 'default') models_json = await core.get_data_by_mode('cache', 'model-list.json', 'default')
is_belongs_to_whitelist = False is_belongs_to_whitelist = False
@@ -1863,7 +1783,7 @@ async def get_notice_legacy(request):
@routes.get("/v2/manager/reboot") @routes.get("/v2/manager/reboot")
def restart(self): def restart(self):
if not is_allowed_security_level('middle'): if not is_allowed_security_level('middle'):
logging.error(SECURITY_MESSAGE_MIDDLE) logging.error(SECURITY_MESSAGE_MIDDLE_OR_BELOW)
return web.Response(status=403) return web.Response(status=403)
try: try:
@@ -2029,10 +1949,9 @@ if not os.path.exists(context.manager_config_path):
core.write_config()
-# policy setup
-manager_security.add_handler_policy(reinstall_custom_node, manager_security.HANDLER_POLICY.MULTIPLE_REMOTE_BAN_NOT_PERSONAL_CLOUD)
-manager_security.add_handler_policy(install_custom_node, manager_security.HANDLER_POLICY.MULTIPLE_REMOTE_BAN_NOT_PERSONAL_CLOUD)
-manager_security.add_handler_policy(fix_custom_node, manager_security.HANDLER_POLICY.MULTIPLE_REMOTE_BAN_NOT_PERSONAL_CLOUD)
-manager_security.add_handler_policy(install_custom_node_git_url, manager_security.HANDLER_POLICY.MULTIPLE_REMOTE_BAN_NOT_PERSONAL_CLOUD)
-manager_security.add_handler_policy(install_custom_node_pip, manager_security.HANDLER_POLICY.MULTIPLE_REMOTE_BAN_NOT_PERSONAL_CLOUD)
-manager_security.add_handler_policy(install_model, manager_security.HANDLER_POLICY.MULTIPLE_REMOTE_BAN_NOT_PERSONAL_CLOUD)
+cm_global.register_extension('ComfyUI-Manager',
+                             {'version': core.version,
+                              'name': 'ComfyUI Manager',
+                              'nodes': {},
+                              'description': 'This extension provides the ability to manage custom nodes in ComfyUI.', })

View File

@@ -10,16 +10,6 @@ import hashlib
import folder_paths import folder_paths
from server import PromptServer from server import PromptServer
import logging
import sys
try:
from nio import AsyncClient, LoginResponse, UploadResponse
matrix_nio_is_available = True
except Exception:
logging.warning(f"[ComfyUI-Manager] The matrix sharing feature has been disabled because the `matrix-nio` dependency is not installed.\n\tTo use this feature, please run the following command:\n\t{sys.executable} -m pip install matrix-nio\n")
matrix_nio_is_available = False
def extract_model_file_names(json_data): def extract_model_file_names(json_data):
@@ -202,14 +192,6 @@ async def get_esheep_workflow_and_images(request):
return web.Response(status=200, text=json.dumps(data)) return web.Response(status=200, text=json.dumps(data))
@PromptServer.instance.routes.get("/v2/manager/get_matrix_dep_status")
async def get_matrix_dep_status(request):
if matrix_nio_is_available:
return web.Response(status=200, text='available')
else:
return web.Response(status=200, text='unavailable')
def set_matrix_auth(json_data): def set_matrix_auth(json_data):
homeserver = json_data['homeserver'] homeserver = json_data['homeserver']
username = json_data['username'] username = json_data['username']
@@ -349,12 +331,15 @@ async def share_art(request):
workflowId = upload_workflow_json["workflowId"] workflowId = upload_workflow_json["workflowId"]
# check if the user has provided Matrix credentials # check if the user has provided Matrix credentials
if matrix_nio_is_available and "matrix" in share_destinations: if "matrix" in share_destinations:
comfyui_share_room_id = '!LGYSoacpJPhIfBqVfb:matrix.org' comfyui_share_room_id = '!LGYSoacpJPhIfBqVfb:matrix.org'
filename = os.path.basename(asset_filepath) filename = os.path.basename(asset_filepath)
content_type = assetFileType content_type = assetFileType
try: try:
from matrix_client.api import MatrixHttpApi
from matrix_client.client import MatrixClient
homeserver = 'matrix.org' homeserver = 'matrix.org'
if matrix_auth: if matrix_auth:
homeserver = matrix_auth.get('homeserver', 'matrix.org') homeserver = matrix_auth.get('homeserver', 'matrix.org')
@@ -362,35 +347,20 @@ async def share_art(request):
if not homeserver.startswith("https://"): if not homeserver.startswith("https://"):
homeserver = "https://" + homeserver homeserver = "https://" + homeserver
client = AsyncClient(homeserver, matrix_auth['username']) client = MatrixClient(homeserver)
try:
# Login token = client.login(username=matrix_auth['username'], password=matrix_auth['password'])
login_resp = await client.login(matrix_auth['password']) if not token:
if not isinstance(login_resp, LoginResponse) or not login_resp.access_token: return web.json_response({"error": "Invalid Matrix credentials."}, content_type='application/json', status=400)
await client.close() except Exception:
return web.json_response({"error": "Invalid Matrix credentials."}, content_type='application/json', status=400) return web.json_response({"error": "Invalid Matrix credentials."}, content_type='application/json', status=400)
# Upload asset matrix = MatrixHttpApi(homeserver, token=token)
with open(asset_filepath, 'rb') as f: with open(asset_filepath, 'rb') as f:
upload_resp, _maybe_keys = await client.upload(f, content_type=content_type, filename=filename) mxc_url = matrix.media_upload(f.read(), content_type, filename=filename)['content_uri']
asset_data = f.seek(0) or f.read() # get size for info below
if not isinstance(upload_resp, UploadResponse) or not upload_resp.content_uri:
await client.close()
return web.json_response({"error": "Failed to upload asset to Matrix."}, content_type='application/json', status=500)
mxc_url = upload_resp.content_uri
# Upload workflow JSON workflow_json_mxc_url = matrix.media_upload(prompt['workflow'], 'application/json', filename='workflow.json')['content_uri']
import io
workflow_json_bytes = json.dumps(prompt['workflow']).encode('utf-8')
workflow_io = io.BytesIO(workflow_json_bytes)
upload_workflow_resp, _maybe_keys = await client.upload(workflow_io, content_type='application/json', filename='workflow.json')
workflow_io.seek(0)
if not isinstance(upload_workflow_resp, UploadResponse) or not upload_workflow_resp.content_uri:
await client.close()
return web.json_response({"error": "Failed to upload workflow to Matrix."}, content_type='application/json', status=500)
workflow_json_mxc_url = upload_workflow_resp.content_uri
# Send text message
text_content = "" text_content = ""
if title: if title:
text_content += f"{title}\n" text_content += f"{title}\n"
@@ -398,45 +368,10 @@ async def share_art(request):
text_content += f"{description}\n" text_content += f"{description}\n"
if credits: if credits:
text_content += f"\ncredits: {credits}\n" text_content += f"\ncredits: {credits}\n"
await client.room_send( matrix.send_message(comfyui_share_room_id, text_content)
room_id=comfyui_share_room_id, matrix.send_content(comfyui_share_room_id, mxc_url, filename, 'm.image')
message_type="m.room.message", matrix.send_content(comfyui_share_room_id, workflow_json_mxc_url, 'workflow.json', 'm.file')
content={"msgtype": "m.text", "body": text_content} except Exception:
)
# Send image
await client.room_send(
room_id=comfyui_share_room_id,
message_type="m.room.message",
content={
"msgtype": "m.image",
"body": filename,
"url": mxc_url,
"info": {
"mimetype": content_type,
"size": len(asset_data)
}
}
)
# Send workflow JSON file
await client.room_send(
room_id=comfyui_share_room_id,
message_type="m.room.message",
content={
"msgtype": "m.file",
"body": "workflow.json",
"url": workflow_json_mxc_url,
"info": {
"mimetype": "application/json",
"size": len(workflow_json_bytes)
}
}
)
await client.close()
except:
import traceback import traceback
traceback.print_exc() traceback.print_exc()
return web.json_response({"error": "An error occurred when sharing your art to Matrix."}, content_type='application/json', status=500) return web.json_response({"error": "An error occurred when sharing your art to Matrix."}, content_type='application/json', status=500)
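Because the side-by-side columns are flattened into single lines above, the two Matrix code paths are hard to follow: one column drives the asynchronous `matrix-nio` client (`AsyncClient`), the other the synchronous `matrix_client` API (`MatrixClient` / `MatrixHttpApi`). A condensed sketch of the nio-based path, with placeholder homeserver, credentials, room ID, and file path, and most error handling trimmed:

```python
# Condensed, illustrative sketch of the matrix-nio flow shown in the diff above.
# Placeholders: homeserver, credentials, room_id, and image_path are examples only.
import asyncio
from nio import AsyncClient, LoginResponse, UploadResponse

async def share_image_to_matrix(homeserver: str, username: str, password: str,
                                room_id: str, image_path: str) -> None:
    client = AsyncClient(homeserver, username)
    try:
        login_resp = await client.login(password)
        if not isinstance(login_resp, LoginResponse) or not login_resp.access_token:
            raise RuntimeError("Invalid Matrix credentials.")

        # Upload the asset and keep its mxc:// URI for the room message.
        with open(image_path, "rb") as f:
            upload_resp, _keys = await client.upload(
                f, content_type="image/png", filename="art.png")
        if not isinstance(upload_resp, UploadResponse) or not upload_resp.content_uri:
            raise RuntimeError("Failed to upload asset to Matrix.")

        # Post the image to the shared room as an m.image event.
        await client.room_send(
            room_id=room_id,
            message_type="m.room.message",
            content={"msgtype": "m.image", "body": "art.png",
                     "url": upload_resp.content_uri},
        )
    finally:
        await client.close()

# Example invocation (all placeholders):
# asyncio.run(share_image_to_matrix("https://matrix.org", "@user:matrix.org",
#                                   "password", "!room:matrix.org", "art.png"))
```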

View File

@@ -35,6 +35,7 @@ else:
def current_timestamp(): def current_timestamp():
return str(time.time()).split('.')[0] return str(time.time()).split('.')[0]
security_check.security_check()
cm_global.pip_blacklist = {'torch', 'torchaudio', 'torchsde', 'torchvision'} cm_global.pip_blacklist = {'torch', 'torchaudio', 'torchsde', 'torchvision'}
cm_global.pip_downgrade_blacklist = ['torch', 'torchaudio', 'torchsde', 'torchvision', 'transformers', 'safetensors', 'kornia'] cm_global.pip_downgrade_blacklist = ['torch', 'torchaudio', 'torchsde', 'torchvision', 'transformers', 'safetensors', 'kornia']
@@ -80,7 +81,7 @@ cm_global.register_api('cm.is_import_failed_extension', is_import_failed_extensi
comfyui_manager_path = os.path.abspath(os.path.dirname(__file__)) comfyui_manager_path = os.path.abspath(os.path.dirname(__file__))
custom_nodes_base_path = folder_paths.get_folder_paths('custom_nodes')[0] custom_nodes_base_path = folder_paths.get_folder_paths('custom_nodes')[0]
manager_files_path = folder_paths.get_system_user_directory("manager") manager_files_path = os.path.abspath(os.path.join(folder_paths.get_user_directory(), 'default', 'ComfyUI-Manager'))
manager_pip_overrides_path = os.path.join(manager_files_path, "pip_overrides.json") manager_pip_overrides_path = os.path.join(manager_files_path, "pip_overrides.json")
manager_pip_blacklist_path = os.path.join(manager_files_path, "pip_blacklist.list") manager_pip_blacklist_path = os.path.join(manager_files_path, "pip_blacklist.list")
restore_snapshot_path = os.path.join(manager_files_path, "startup-scripts", "restore-snapshot.json") restore_snapshot_path = os.path.join(manager_files_path, "startup-scripts", "restore-snapshot.json")
@@ -110,14 +111,13 @@ def check_file_logging():
read_config() read_config()
read_uv_mode() read_uv_mode()
security_check.security_check()
check_file_logging() check_file_logging()
cm_global.pip_overrides = {} cm_global.pip_overrides = {'numpy': 'numpy<2'}
if os.path.exists(manager_pip_overrides_path): if os.path.exists(manager_pip_overrides_path):
with open(manager_pip_overrides_path, 'r', encoding="UTF-8", errors="ignore") as json_file: with open(manager_pip_overrides_path, 'r', encoding="UTF-8", errors="ignore") as json_file:
cm_global.pip_overrides = json.load(json_file) cm_global.pip_overrides = json.load(json_file)
cm_global.pip_overrides['numpy'] = 'numpy<2'
if os.path.exists(manager_pip_blacklist_path): if os.path.exists(manager_pip_blacklist_path):
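Read together, the flattened lines above describe a user-level `pip_overrides.json`: a flat JSON object mapping a package name to the pip spec that should be installed in its place, loaded wholesale over the in-code default (the `numpy` pin appears in only one column of the diff). A minimal sketch of that load, with a placeholder path:

```python
# Illustrative sketch of how the user-level pip override file is read, mirroring
# the hunk above. The path is a placeholder; the example override is an assumption.
import json
import os

pip_overrides = {'numpy': 'numpy<2'}          # default visible in one column of the diff
overrides_path = "pip_overrides.json"         # placeholder for manager_pip_overrides_path
if os.path.exists(overrides_path):
    with open(overrides_path, 'r', encoding="UTF-8", errors="ignore") as json_file:
        pip_overrides = json.load(json_file)  # e.g. {"numpy": "numpy<2"}
    pip_overrides['numpy'] = 'numpy<2'        # one column re-applies the numpy pin after loading
```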
@@ -330,12 +330,7 @@ try:
log_file.write(message) log_file.write(message)
else: else:
log_file.write(f"[{timestamp}] {message}") log_file.write(f"[{timestamp}] {message}")
log_file.flush()
try:
log_file.flush()
except Exception:
pass
self.last_char = message if message == '' else message[-1] self.last_char = message if message == '' else message[-1]
if not file_only: if not file_only:
@@ -348,10 +343,7 @@ try:
original_stderr.flush() original_stderr.flush()
def flush(self): def flush(self):
try: log_file.flush()
log_file.flush()
except Exception:
pass
with std_log_lock: with std_log_lock:
if self.is_stdout: if self.is_stdout:
@@ -483,7 +475,7 @@ check_bypass_ssl()
# Perform install # Perform install
processed_install = set() processed_install = set()
script_list_path = os.path.join(manager_files_path, "startup-scripts", "install-scripts.txt") script_list_path = os.path.join(folder_paths.user_directory, "default", "ComfyUI-Manager", "startup-scripts", "install-scripts.txt")
pip_fixer = manager_util.PIPFixer(manager_util.get_installed_packages(), comfy_path, manager_files_path) pip_fixer = manager_util.PIPFixer(manager_util.get_installed_packages(), comfy_path, manager_files_path)

View File

File diff suppressed because it is too large

View File

@@ -139,9 +139,9 @@ You can set whether to use ComfyUI-Manager solely via CLI.
`restore-dependencies` `restore-dependencies`
* This command can be used if custom nodes are installed under the `ComfyUI/custom_nodes` path but their dependencies are not installed. * This command can be used if custom nodes are installed under the `ComfyUI/custom_nodes` path but their dependencies are not installed.
* It is useful when starting a new cloud instance, like Colab, where dependencies need to be reinstalled and installation scripts re-executed. * It is useful when starting a new cloud instance, like colab, where dependencies need to be reinstalled and installation scripts re-executed.
* It can also be utilized if ComfyUI is reinstalled and only the custom_nodes path has been backed up and restored. * It can also be utilized if ComfyUI is reinstalled and only the custom_nodes path has been backed up and restored.
### 7. Clear ### 7. Clear
In the GUI, installations, updates, or snapshot restorations are scheduled to execute the next time ComfyUI is launched. The `clear` command clears this scheduled state, ensuring no pre-execution actions are applied. In the GUI, installations, updates, or snapshot restorations are scheduled to execute the next time ComfyUI is launched. The `clear` command clears this scheduled state, ensuring no pre-execution actions are applied.
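For reference, both commands follow the usual cm-cli invocation; a typical sketch, assuming it is run from the ComfyUI-Manager directory with the same Python environment that runs ComfyUI:

```
python cm-cli.py restore-dependencies
python cm-cli.py clear
```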

View File

@@ -23,13 +23,13 @@ OPTIONS:
## How To Use? ## How To Use?
* `python cm-cli.py` 를 통해서 실행 시킬 수 있습니다. * `python cm-cli.py` 를 통해서 실행 시킬 수 있습니다.
* 예를 들어 custom node를 모두 업데이트 하고 싶다면 * 예를 들어 custom node를 모두 업데이트 하고 싶다면
* ComfyUI-Manager 경로에서 `python cm-cli.py update all` 명령을 실행할 수 있습니다. * ComfyUI-Manager경로 에서 `python cm-cli.py update all` 를 command를 실행할 수 있습니다.
* ComfyUI 경로에서 실행한다면, `python custom_nodes/ComfyUI-Manager/cm-cli.py update all` 와 같이 cm-cli.py 의 경로를 지정할 수도 있습니다. * ComfyUI 경로에서 실행한다면, `python custom_nodes/ComfyUI-Manager/cm-cli.py update all` 와 같이 cm-cli.py 의 경로를 지정할 수도 있습니다.
## Prerequisite ## Prerequisite
* ComfyUI 를 실행하는 python과 동일한 python 환경에서 실행해야 합니다. * ComfyUI 를 실행하는 python과 동일한 python 환경에서 실행해야 합니다.
* venv를 사용할 경우 해당 venv를 activate 한 상태에서 실행해야 합니다. * venv를 사용할 경우 해당 venv를 activate 한 상태에서 실행해야 합니다.
* portable 버전을 사용할 경우 run_nvidia_gpu.bat 파일이 있는 경로인 경우, 다음과 같은 방식으로 명령을 실행해야 합니다. * portable 버전을 사용할 경우 run_nvidia_gpu.bat 파일이 있는 경로인 경우, 다음과 같은 방식으로 코맨드를 실행해야 합니다.
`.\python_embeded\python.exe ComfyUI\custom_nodes\ComfyUI-Manager\cm-cli.py update all` `.\python_embeded\python.exe ComfyUI\custom_nodes\ComfyUI-Manager\cm-cli.py update all`
* ComfyUI 의 경로는 COMFYUI_PATH 환경 변수로 설정할 수 있습니다. 만약 생략할 경우 다음과 같은 경고 메시지가 나타나며, ComfyUI-Manager가 설치된 경로를 기준으로 상대 경로로 설정됩니다. * ComfyUI 의 경로는 COMFYUI_PATH 환경 변수로 설정할 수 있습니다. 만약 생략할 경우 다음과 같은 경고 메시지가 나타나며, ComfyUI-Manager가 설치된 경로를 기준으로 상대 경로로 설정됩니다.
``` ```
@@ -40,8 +40,8 @@ OPTIONS:
### 1. --channel, --mode ### 1. --channel, --mode
* 정보 보기 기능과 커스텀 노드 관리 기능의 경우는 --channel과 --mode를 통해 정보 DB를 설정할 수 있습니다. * 정보 보기 기능과 커스텀 노드 관리 기능의 경우는 --channel과 --mode를 통해 정보 DB를 설정할 수 있습니다.
* 예 들어 `python cm-cli.py update all --channel recent --mode remote`와 같은 명령을 실행할 경우, 현재 ComfyUI-Manager repo에 내장된 로컬의 정보가 아닌 remote의 최신 정보를 기준으로 동작하며, recent channel에 있는 목록을 대상으로만 동작합니다. * 예 들어 `python cm-cli.py update all --channel recent --mode remote`와 같은 command를 실행할 경우, 현재 ComfyUI-Manager repo에 내장된 로컬의 정보가 아닌 remote의 최신 정보를 기준으로 동작하며, recent channel에 있는 목록을 대상으로만 동작합니다.
* --channel, --mode 는 `simple-show, show, install, uninstall, update, disable, enable, fix` 명령에서만 사용 가능합니다. * --channel, --mode 는 `simple-show, show, install, uninstall, update, disable, enable, fix` command에서만 사용 가능합니다.
### 2. 관리 정보 보기 ### 2. 관리 정보 보기
@@ -51,7 +51,7 @@ OPTIONS:
* `[show|simple-show]` - `show`는 상세하게 정보를 보여주며, `simple-show`는 간단하게 정보를 보여줍니다. * `[show|simple-show]` - `show`는 상세하게 정보를 보여주며, `simple-show`는 간단하게 정보를 보여줍니다.
`python cm-cli.py show installed` 와 같은 명령을 실행하면 설치된 커스텀 노드의 정보를 상세하게 보여줍니다. `python cm-cli.py show installed` 와 같은 코맨드를 실행하면 설치된 커스텀 노드의 정보를 상세하게 보여줍니다.
``` ```
-= ComfyUI-Manager CLI (V2.24) =- -= ComfyUI-Manager CLI (V2.24) =-
@@ -67,7 +67,7 @@ FETCH DATA from: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main
[ DISABLED ] ComfyUI-Loopchain (author: Fannovel16) [ DISABLED ] ComfyUI-Loopchain (author: Fannovel16)
``` ```
`python cm-cli.py simple-show installed` 와 같은 명령을 이용해서 설치된 커스텀 노드의 정보를 간단하게 보여줍니다. `python cm-cli.py simple-show installed` 와 같은 코맨드를 이용해서 설치된 커스텀 노드의 정보를 간단하게 보여줍니다.
``` ```
-= ComfyUI-Manager CLI (V2.24) =- -= ComfyUI-Manager CLI (V2.24) =-
@@ -89,7 +89,7 @@ ComfyUI-Loopchain
* `installed`: enable, disable 여부와 상관없이 설치된 모든 노드를 보여줍니다 * `installed`: enable, disable 여부와 상관없이 설치된 모든 노드를 보여줍니다
* `not-installed`: 설치되지 않은 커스텀 노드의 목록을 보여줍니다. * `not-installed`: 설치되지 않은 커스텀 노드의 목록을 보여줍니다.
* `all`: 모든 커스텀 노드의 목록을 보여줍니다. * `all`: 모든 커스텀 노드의 목록을 보여줍니다.
* `snapshot`: 현재 설치된 커스텀 노드의 snapshot 정보를 보여줍니다. `show` 통해서 볼 경우는 json 출력 형태로 보여주며, `simple-show`를 통해서 볼 경우는 간단하게, 커밋 해시와 함께 보여줍니다. * `snapshot`: 현재 설치된 커스텀 노드의 snapshot 정보를 보여줍니다. `show` 통해서 볼 경우는 json 출력 형태로 보여주며, `simple-show`를 통해서 볼 경우는 간단하게, 커밋 해시와 함께 보여줍니다.
* `snapshot-list`: ComfyUI-Manager/snapshots 에 저장된 snapshot 파일의 목록을 보여줍니다. * `snapshot-list`: ComfyUI-Manager/snapshots 에 저장된 snapshot 파일의 목록을 보여줍니다.
### 3. 커스텀 노드 관리 하기 ### 3. 커스텀 노드 관리 하기
@@ -98,7 +98,7 @@ ComfyUI-Loopchain
* `python cm-cli.py install ComfyUI-Impact-Pack ComfyUI-Inspire-Pack ComfyUI_experiments` 와 같이 커스텀 노드의 이름을 나열해서 관리 기능을 적용할 수 있습니다. * `python cm-cli.py install ComfyUI-Impact-Pack ComfyUI-Inspire-Pack ComfyUI_experiments` 와 같이 커스텀 노드의 이름을 나열해서 관리 기능을 적용할 수 있습니다.
* 커스텀 노드의 이름은 `show`를 했을 때 보여주는 이름이며, git repository의 이름입니다. * 커스텀 노드의 이름은 `show`를 했을 때 보여주는 이름이며, git repository의 이름입니다.
(추후 nickname을 사용 가능하도록 업데이트할 예정입니다.) (추후 nickname 을 사용가능하돌고 업데이트 할 예정입니다.)
`[update|disable|enable|fix] all ?[--channel <channel name>] ?[--mode [remote|local|cache]]` `[update|disable|enable|fix] all ?[--channel <channel name>] ?[--mode [remote|local|cache]]`
@@ -124,7 +124,7 @@ ComfyUI-Loopchain
* `--pip-non-local-url`: web URL에 등록된 pip 패키지들에 대해서 복구를 수행 * `--pip-non-local-url`: web URL에 등록된 pip 패키지들에 대해서 복구를 수행
* `--pip-local-url`: local 경로를 지정하고 있는 pip 패키지들에 대해서 복구를 수행 * `--pip-local-url`: local 경로를 지정하고 있는 pip 패키지들에 대해서 복구를 수행
* `--user-directory`: 사용자 디렉토리 설정 * `--user-directory`: 사용자 디렉토리 설정
* `--restore-to`: 복구될 커스텀 노드가 설치될 경로. (이 옵션을 적용할 경우 오직 대상 경로에 설치된 custom nodes만 설치된 것으로 인식함.) * `--restore-to`: 복구될 커스텀 노드가 설치될 경로. (이 옵션을 적용할 경우 오직 대상 경로에 설치된 custom nodes 만 설치된 것으로 인식함.)
### 5. CLI only mode ### 5. CLI only mode
@@ -133,7 +133,7 @@ ComfyUI-Manager를 CLI로만 사용할 것인지를 설정할 수 있습니다.
`cli-only-mode [enable|disable]` `cli-only-mode [enable|disable]`
* security 혹은 policy 의 이유로 GUI 를 통한 ComfyUI-Manager 사용을 제한하고 싶은 경우 이 모드를 사용할 수 있습니다. * security 혹은 policy 의 이유로 GUI 를 통한 ComfyUI-Manager 사용을 제한하고 싶은 경우 이 모드를 사용할 수 있습니다.
* CLI only mode를 적용할 경우 ComfyUI-Manager 가 매우 제한된 상태로 로드되어, 내부적으로 제공하는 web API가 비활성화되며, 메인 메뉴에서도 Manager 버튼이 표시되지 않습니다. * CLI only mode를 적용할 경우 ComfyUI-Manager 가 매우 제한된 상태로 로드되어, 내부적으로 제공하는 web API가 비활성화 되며, 메인 메뉴에서도 Manager 버튼이 표시되지 않습니다.
### 6. 의존성 설치 ### 6. 의존성 설치
@@ -141,10 +141,10 @@ ComfyUI-Manager를 CLI로만 사용할 것인지를 설정할 수 있습니다.
`restore-dependencies` `restore-dependencies`
* `ComfyUI/custom_nodes` 하위 경로에 커스텀 노드들이 설치되어 있긴 하지만, 의존성이 설치되지 않은 경우 사용할 수 있습니다. * `ComfyUI/custom_nodes` 하위 경로에 커스텀 노드들이 설치되어 있긴 하지만, 의존성이 설치되지 않은 경우 사용할 수 있습니다.
* Colab과 같이 cloud instance를 새로 시작하는 경우 의존성 재설치 및 설치 스크립트가 재실행되어야 하는 경우 사용합니다. * colab 과 같이 cloud instance를 새로 시작하는 경우 의존성 재설치 및 설치 스크립트가 재실행 되어야 하는 경우 사용합니다.
* ComfyUI 재설치할 경우, custom_nodes 경로만 백업했다가 재설치할 경우 활용 가능합니다. * ComfyUI 재설치할 경우, custom_nodes 경로만 백업했다가 재설치 할 경우 활용 가능합니다.
### 7. clear ### 7. clear
GUI에서 install, update를 하거나 snapshot을 restore하는 경우 예약을 통해서 다음번 ComfyUI를 실행할 경우 실행되는 구조입니다. `clear` 는 이런 예약 상태를 clear해서, 아무런 사전 실행이 적용되지 않도록 합니다. GUI에서 install, update를 하거나 snapshot 을 restore하는 경우 예약을 통해서 다음번 ComfyUI를 실행할 경우 실행되는 구조입니다. `clear` 는 이런 예약 상태를 clear해서, 아무런 사전 실행이 적용되지 않도록 합니다.

View File

File diff suppressed because it is too large

View File

View File

File diff suppressed because it is too large

View File

@@ -1973,97 +1973,6 @@
"url": "https://dl.fbaipublicfiles.com/segment_anything/sam_vit_b_01ec64.pth", "url": "https://dl.fbaipublicfiles.com/segment_anything/sam_vit_b_01ec64.pth",
"size": "375.0MB" "size": "375.0MB"
}, },
{
"name": "sam2.1_hiera_tiny.pt",
"type": "sam2.1",
"base": "SAM",
"save_path": "sams",
"description": "Segmenty Anything SAM 2.1 hiera model (tiny)",
"reference": "https://github.com/facebookresearch/sam2#model-description",
"filename": "sam2.1_hiera_tiny.pt",
"url": "https://dl.fbaipublicfiles.com/segment_anything_2/092824/sam2.1_hiera_tiny.pt",
"size": "149.0MB"
},
{
"name": "sam2.1_hiera_small.pt",
"type": "sam2.1",
"base": "SAM",
"save_path": "sams",
"description": "Segmenty Anything SAM 2.1 hiera model (small)",
"reference": "https://github.com/facebookresearch/sam2#model-description",
"filename": "sam2.1_hiera_small.pt",
"url": "https://dl.fbaipublicfiles.com/segment_anything_2/092824/sam2.1_hiera_small.pt",
"size": "176.0MB"
},
{
"name": "sam2.1_hiera_base_plus.pt",
"type": "sam2.1",
"base": "SAM",
"save_path": "sams",
"description": "Segmenty Anything SAM 2.1 hiera model (base+)",
"reference": "https://github.com/facebookresearch/sam2#model-description",
"filename": "sam2.1_hiera_base_plus.pt",
"url": "https://dl.fbaipublicfiles.com/segment_anything_2/092824/sam2.1_hiera_base_plus.pt",
"size": "309.0MB"
},
{
"name": "sam2.1_hiera_large.pt",
"type": "sam2.1",
"base": "SAM",
"save_path": "sams",
"description": "Segmenty Anything SAM 2.1 hiera model (large)",
"reference": "https://github.com/facebookresearch/sam2#model-description",
"filename": "sam2.1_hiera_large.pt",
"url": "https://dl.fbaipublicfiles.com/segment_anything_2/092824/sam2.1_hiera_large.pt",
"size": "857.0MB"
},
{
"name": "sam2_hiera_tiny.pt",
"type": "sam2",
"base": "SAM",
"save_path": "sams",
"description": "Segmenty Anything SAM 2 hiera model (tiny)",
"reference": "https://github.com/facebookresearch/sam2#model-description",
"filename": "sam2_hiera_tiny.pt",
"url": "https://dl.fbaipublicfiles.com/segment_anything_2/072824/sam2_hiera_tiny.pt",
"size": "149.0MB"
},
{
"name": "sam2_hiera_small.pt",
"type": "sam2",
"base": "SAM",
"save_path": "sams",
"description": "Segmenty Anything SAM 2 hiera model (small)",
"reference": "https://github.com/facebookresearch/sam2#model-description",
"filename": "sam2_hiera_small.pt",
"url": "https://dl.fbaipublicfiles.com/segment_anything_2/072824/sam2_hiera_small.pt",
"size": "176.0MB"
},
{
"name": "sam2_hiera_base_plus.pt",
"type": "sam2",
"base": "SAM",
"save_path": "sams",
"description": "Segmenty Anything SAM 2 hiera model (base+)",
"reference": "https://github.com/facebookresearch/sam2#model-description",
"filename": "sam2_hiera_base_plus.pt",
"url": "https://dl.fbaipublicfiles.com/segment_anything_2/072824/sam2_hiera_base_plus.pt",
"size": "309.0MB"
},
{
"name": "sam2_hiera_large.pt",
"type": "sam2",
"base": "SAM",
"save_path": "sams",
"description": "Segmenty Anything SAM 2 hiera model (large)",
"reference": "https://github.com/facebookresearch/sam2#model-description",
"filename": "sam2_hiera_large.pt",
"url": "https://dl.fbaipublicfiles.com/segment_anything_2/072824/sam2_hiera_large.pt",
"size": "857.0MB"
},
{ {
"name": "seecoder v1.0", "name": "seecoder v1.0",
"type": "seecoder", "type": "seecoder",
@@ -4097,29 +4006,6 @@
"size": "649MB" "size": "649MB"
}, },
{
"name": "Comfy-Org/omnigen2_fp16.safetensors",
"type": "diffusion_model",
"base": "OmniGen2",
"save_path": "default",
"description": "OmniGen2 diffusion model. This is required for using OmniGen2.",
"reference": "https://huggingface.co/Comfy-Org/Omnigen2_ComfyUI_repackaged",
"filename": "omnigen2_fp16.safetensors",
"url": "https://huggingface.co/Comfy-Org/Omnigen2_ComfyUI_repackaged/resolve/main/split_files/diffusion_models/omnigen2_fp16.safetensors",
"size": "7.93GB"
},
{
"name": "Comfy-Org/qwen_2.5_vl_fp16.safetensors",
"type": "clip",
"base": "qwen-2.5",
"save_path": "default",
"description": "text encoder for OmniGen2",
"reference": "https://huggingface.co/Comfy-Org/Omnigen2_ComfyUI_repackaged",
"filename": "qwen_2.5_vl_fp16.safetensors",
"url": "https://huggingface.co/Comfy-Org/Omnigen2_ComfyUI_repackaged/resolve/main/split_files/text_encoders/qwen_2.5_vl_fp16.safetensors",
"size": "7.51GB"
},
{ {
"name": "FLUX.1 [Schnell] Diffusion model", "name": "FLUX.1 [Schnell] Diffusion model",
"type": "diffusion_model", "type": "diffusion_model",
@@ -4137,7 +4023,7 @@
"type": "VAE", "type": "VAE",
"base": "FLUX.1", "base": "FLUX.1",
"save_path": "vae/FLUX1", "save_path": "vae/FLUX1",
"description": "FLUX.1 VAE model\nNOTE: This VAE model can also be used for image generation with OmniGen2.", "description": "FLUX.1 VAE model",
"reference": "https://huggingface.co/black-forest-labs/FLUX.1-schnell", "reference": "https://huggingface.co/black-forest-labs/FLUX.1-schnell",
"filename": "ae.safetensors", "filename": "ae.safetensors",
"url": "https://huggingface.co/black-forest-labs/FLUX.1-schnell/resolve/main/ae.safetensors", "url": "https://huggingface.co/black-forest-labs/FLUX.1-schnell/resolve/main/ae.safetensors",
@@ -5045,105 +4931,6 @@
"size": "1.26GB" "size": "1.26GB"
}, },
{
"name": "Comfy-Org/Wan2.2 i2v high noise 14B (fp16)",
"type": "diffusion_model",
"base": "Wan2.2",
"save_path": "diffusion_models/Wan2.2",
"description": "Wan2.2 diffusion model for i2v high noise 14B (fp16)",
"reference": "https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged",
"filename": "wan2.2_i2v_high_noise_14B_fp16.safetensors",
"url": "https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/resolve/main/split_files/diffusion_models/wan2.2_i2v_high_noise_14B_fp16.safetensors",
"size": "28.6GB"
},
{
"name": "Comfy-Org/Wan2.2 i2v high noise 14B (fp8_scaled)",
"type": "diffusion_model",
"base": "Wan2.2",
"save_path": "diffusion_models/Wan2.2",
"description": "Wan2.2 diffusion model for i2v high noise 14B (fp8_scaled)",
"reference": "https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged",
"filename": "wan2.2_i2v_high_noise_14B_fp8_scaled.safetensors",
"url": "https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/resolve/main/split_files/diffusion_models/wan2.2_i2v_high_noise_14B_fp8_scaled.safetensors",
"size": "14.3GB"
},
{
"name": "Comfy-Org/Wan2.2 i2v low noise 14B (fp16)",
"type": "diffusion_model",
"base": "Wan2.2",
"save_path": "diffusion_models/Wan2.2",
"description": "Wan2.2 diffusion model for i2v low noise 14B (fp16)",
"reference": "https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged",
"filename": "wan2.2_i2v_low_noise_14B_fp16.safetensors",
"url": "https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/resolve/main/split_files/diffusion_models/wan2.2_i2v_low_noise_14B_fp16.safetensors",
"size": "28.6GB"
},
{
"name": "Comfy-Org/Wan2.2 i2v low noise 14B (fp8_scaled)",
"type": "diffusion_model",
"base": "Wan2.2",
"save_path": "diffusion_models/Wan2.2",
"description": "Wan2.2 diffusion model for i2v low noise 14B (fp8_scaled)",
"reference": "https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged",
"filename": "wan2.2_i2v_low_noise_14B_fp8_scaled.safetensors",
"url": "https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/resolve/main/split_files/diffusion_models/wan2.2_i2v_low_noise_14B_fp8_scaled.safetensors",
"size": "14.3GB"
},
{
"name": "Comfy-Org/Wan2.2 t2v high noise 14B (fp16)",
"type": "diffusion_model",
"base": "Wan2.2",
"save_path": "diffusion_models/Wan2.2",
"description": "Wan2.2 diffusion model for t2v high noise 14B (fp16)",
"reference": "https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged",
"filename": "wan2.2_t2v_high_noise_14B_fp16.safetensors",
"url": "https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/resolve/main/split_files/diffusion_models/wan2.2_t2v_high_noise_14B_fp16.safetensors",
"size": "28.6GB"
},
{
"name": "Comfy-Org/Wan2.2 t2v high noise 14B (fp8_scaled)",
"type": "diffusion_model",
"base": "Wan2.2",
"save_path": "diffusion_models/Wan2.2",
"description": "Wan2.2 diffusion model for t2v high noise 14B (fp8_scaled)",
"reference": "https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged",
"filename": "wan2.2_t2v_high_noise_14B_fp8_scaled.safetensors",
"url": "https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/resolve/main/split_files/diffusion_models/wan2.2_t2v_high_noise_14B_fp8_scaled.safetensors",
"size": "14.3GB"
},
{
"name": "Comfy-Org/Wan2.2 t2v low noise 14B (fp16)",
"type": "diffusion_model",
"base": "Wan2.2",
"save_path": "diffusion_models/Wan2.2",
"description": "Wan2.2 diffusion model for t2v low noise 14B (fp16)",
"reference": "https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged",
"filename": "wan2.2_t2v_low_noise_14B_fp16.safetensors",
"url": "https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/resolve/main/split_files/diffusion_models/wan2.2_t2v_low_noise_14B_fp16.safetensors",
"size": "28.6GB"
},
{
"name": "Comfy-Org/Wan2.2 t2v low noise 14B (fp8_scaled)",
"type": "diffusion_model",
"base": "Wan2.2",
"save_path": "diffusion_models/Wan2.2",
"description": "Wan2.2 diffusion model for t2v low noise 14B (fp8_scaled)",
"reference": "https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged",
"filename": "wan2.2_t2v_low_noise_14B_fp8_scaled.safetensors",
"url": "https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/resolve/main/split_files/diffusion_models/wan2.2_t2v_low_noise_14B_fp8_scaled.safetensors",
"size": "14.3GB"
},
{
"name": "Comfy-Org/Wan2.2 ti2v 5B (fp16)",
"type": "diffusion_model",
"base": "Wan2.2",
"save_path": "diffusion_models/Wan2.2",
"description": "Wan2.2 diffusion model for ti2v 5B (fp16)",
"reference": "https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged",
"filename": "wan2.2_ti2v_5B_fp16.safetensors",
"url": "https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/resolve/main/split_files/diffusion_models/wan2.2_ti2v_5B_fp16.safetensors",
"size": "10.0GB"
},
{ {
"name": "Comfy-Org/umt5_xxl_fp16.safetensors", "name": "Comfy-Org/umt5_xxl_fp16.safetensors",
@@ -5246,50 +5033,6 @@
"url": "https://huggingface.co/Lightricks/LTX-Video/resolve/main/ltxv-13b-0.9.7-distilled-fp8.safetensors", "url": "https://huggingface.co/Lightricks/LTX-Video/resolve/main/ltxv-13b-0.9.7-distilled-fp8.safetensors",
"size": "15.7GB" "size": "15.7GB"
}, },
{
"name": "LTX-Video 2B Distilled v0.9.8",
"type": "checkpoint",
"base": "LTX-Video",
"save_path": "checkpoints/LTXV",
"description": "LTX-Video 2B distilled model v0.9.8 with improved prompt understanding and detail generation.",
"reference": "https://huggingface.co/Lightricks/LTX-Video",
"filename": "ltxv-2b-0.9.8-distilled.safetensors",
"url": "https://huggingface.co/Lightricks/LTX-Video/resolve/main/ltxv-2b-0.9.8-distilled.safetensors",
"size": "6.34GB"
},
{
"name": "LTX-Video 2B Distilled FP8 v0.9.8",
"type": "checkpoint",
"base": "LTX-Video",
"save_path": "checkpoints/LTXV",
"description": "Quantized LTX-Video 2B distilled model v0.9.8 with improved prompt understanding and detail generation, optimized for lower VRAM usage.",
"reference": "https://huggingface.co/Lightricks/LTX-Video",
"filename": "ltxv-2b-0.9.8-distilled-fp8.safetensors",
"url": "https://huggingface.co/Lightricks/LTX-Video/resolve/main/ltxv-2b-0.9.8-distilled-fp8.safetensors",
"size": "4.46GB"
},
{
"name": "LTX-Video 13B Distilled v0.9.8",
"type": "checkpoint",
"base": "LTX-Video",
"save_path": "checkpoints/LTXV",
"description": "LTX-Video 13B distilled model v0.9.8 with improved prompt understanding and detail generation.",
"reference": "https://huggingface.co/Lightricks/LTX-Video",
"filename": "ltxv-13b-0.9.8-distilled.safetensors",
"url": "https://huggingface.co/Lightricks/LTX-Video/resolve/main/ltxv-13b-0.9.8-distilled.safetensors",
"size": "28.6GB"
},
{
"name": "LTX-Video 13B Distilled FP8 v0.9.8",
"type": "checkpoint",
"base": "LTX-Video",
"save_path": "checkpoints/LTXV",
"description": "Quantized LTX-Video 13B distilled model v0.9.8 with improved prompt understanding and detail generation, optimized for lower VRAM usage.",
"reference": "https://huggingface.co/Lightricks/LTX-Video",
"filename": "ltxv-13b-0.9.8-distilled-fp8.safetensors",
"url": "https://huggingface.co/Lightricks/LTX-Video/resolve/main/ltxv-13b-0.9.8-distilled-fp8.safetensors",
"size": "15.7GB"
},
{ {
"name": "LTX-Video 13B Distilled LoRA v0.9.7", "name": "LTX-Video 13B Distilled LoRA v0.9.7",
"type": "lora", "type": "lora",
@@ -5301,50 +5044,6 @@
"url": "https://huggingface.co/Lightricks/LTX-Video/resolve/main/ltxv-13b-0.9.7-distilled-lora128.safetensors", "url": "https://huggingface.co/Lightricks/LTX-Video/resolve/main/ltxv-13b-0.9.7-distilled-lora128.safetensors",
"size": "1.33GB" "size": "1.33GB"
}, },
{
"name": "LTX-Video ICLoRA Depth 13B v0.9.7",
"type": "lora",
"base": "LTX-Video",
"save_path": "loras",
"description": "In-Context LoRA (IC LoRA) for depth-controlled video-to-video generation with precise depth conditioning.",
"reference": "https://huggingface.co/Lightricks/LTX-Video-ICLoRA-depth-13b-0.9.7",
"filename": "ltxv-097-ic-lora-depth-control-comfyui.safetensors",
"url": "https://huggingface.co/Lightricks/LTX-Video-ICLoRA-depth-13b-0.9.7/resolve/main/ltxv-097-ic-lora-depth-control-comfyui.safetensors",
"size": "81.9MB"
},
{
"name": "LTX-Video ICLoRA Pose 13B v0.9.7",
"type": "lora",
"base": "LTX-Video",
"save_path": "loras",
"description": "In-Context LoRA (IC LoRA) for pose-controlled video-to-video generation with precise pose conditioning.",
"reference": "https://huggingface.co/Lightricks/LTX-Video-ICLoRA-pose-13b-0.9.7",
"filename": "ltxv-097-ic-lora-pose-control-comfyui.safetensors",
"url": "https://huggingface.co/Lightricks/LTX-Video-ICLoRA-pose-13b-0.9.7/resolve/main/ltxv-097-ic-lora-pose-control-comfyui.safetensors",
"size": "151MB"
},
{
"name": "LTX-Video ICLoRA Canny 13B v0.9.7",
"type": "lora",
"base": "LTX-Video",
"save_path": "loras",
"description": "In-Context LoRA (IC LoRA) for canny edge-controlled video-to-video generation with precise edge conditioning.",
"reference": "https://huggingface.co/Lightricks/LTX-Video-ICLoRA-canny-13b-0.9.7",
"filename": "ltxv-097-ic-lora-canny-control-comfyui.safetensors",
"url": "https://huggingface.co/Lightricks/LTX-Video-ICLoRA-canny-13b-0.9.7/resolve/main/ltxv-097-ic-lora-canny-control-comfyui.safetensors",
"size": "81.9MB"
},
{
"name": "LTX-Video ICLoRA Detailer 13B v0.9.8",
"type": "lora",
"base": "LTX-Video",
"save_path": "loras",
"description": "A video detailer model on top of LTXV_13B_098_DEV trained on custom data using In-Context LoRA (IC LoRA) method.",
"reference": "https://huggingface.co/Lightricks/LTX-Video-ICLoRA-detailer-13b-0.9.8",
"filename": "ltxv-098-ic-lora-detailer-comfyui.safetensors",
"url": "https://huggingface.co/Lightricks/LTX-Video-ICLoRA-detailer-13b-0.9.8/resolve/main/ltxv-098-ic-lora-detailer-comfyui.safetensors",
"size": "1.31GB"
},
{ {
"name": "Latent Bridge Matching for Image Relighting", "name": "Latent Bridge Matching for Image Relighting",
"type": "diffusion_model", "type": "diffusion_model",
@@ -5355,317 +5054,6 @@
"filename": "LBM_relighting.safetensors", "filename": "LBM_relighting.safetensors",
"url": "https://huggingface.co/jasperai/LBM_relighting/resolve/main/model.safetensors", "url": "https://huggingface.co/jasperai/LBM_relighting/resolve/main/model.safetensors",
"size": "5.02GB" "size": "5.02GB"
},
{
"name": "Qwen-Image VAE",
"type": "VAE",
"base": "Qwen-Image",
"save_path": "vae/qwen-image",
"description": "VAE model for Qwen-Image",
"reference": "https://huggingface.co/Comfy-Org/Qwen-Image_ComfyUI",
"filename": "qwen_image_vae.safetensors",
"url": "https://huggingface.co/Comfy-Org/Qwen-Image_ComfyUI/resolve/main/split_files/vae/qwen_image_vae.safetensors",
"size": "335MB"
},
{
"name": "Qwen 2.5 VL 7B Text Encoder (fp8_scaled)",
"type": "clip",
"base": "Qwen-2.5-VL",
"save_path": "text_encoders/qwen",
"description": "Qwen 2.5 VL 7B text encoder model (fp8_scaled)",
"reference": "https://huggingface.co/Comfy-Org/Qwen-Image_ComfyUI",
"filename": "qwen_2.5_vl_7b_fp8_scaled.safetensors",
"url": "https://huggingface.co/Comfy-Org/Qwen-Image_ComfyUI/resolve/main/split_files/text_encoders/qwen_2.5_vl_7b_fp8_scaled.safetensors",
"size": "3.75GB"
},
{
"name": "Qwen 2.5 VL 7B Text Encoder",
"type": "clip",
"base": "Qwen-2.5-VL",
"save_path": "text_encoders/qwen",
"description": "Qwen 2.5 VL 7B text encoder model",
"reference": "https://huggingface.co/Comfy-Org/Qwen-Image_ComfyUI",
"filename": "qwen_2.5_vl_7b.safetensors",
"url": "https://huggingface.co/Comfy-Org/Qwen-Image_ComfyUI/resolve/main/split_files/text_encoders/qwen_2.5_vl_7b.safetensors",
"size": "7.51GB"
},
{
"name": "Qwen-Image Diffusion Model (fp8_e4m3fn)",
"type": "diffusion_model",
"base": "Qwen-Image",
"save_path": "diffusion_models/qwen-image",
"description": "Qwen-Image diffusion model (fp8_e4m3fn)",
"reference": "https://huggingface.co/Comfy-Org/Qwen-Image_ComfyUI",
"filename": "qwen_image_fp8_e4m3fn.safetensors",
"url": "https://huggingface.co/Comfy-Org/Qwen-Image_ComfyUI/resolve/main/split_files/diffusion_models/qwen_image_fp8_e4m3fn.safetensors",
"size": "4.89GB"
},
{
"name": "Qwen-Image Diffusion Model (bf16)",
"type": "diffusion_model",
"base": "Qwen-Image",
"save_path": "diffusion_models/qwen-image",
"description": "Qwen-Image diffusion model (bf16)",
"reference": "https://huggingface.co/Comfy-Org/Qwen-Image_ComfyUI",
"filename": "qwen_image_bf16.safetensors",
"url": "https://huggingface.co/Comfy-Org/Qwen-Image_ComfyUI/resolve/main/split_files/diffusion_models/qwen_image_bf16.safetensors",
"size": "9.78GB"
},
{
"name": "Qwen-Image-Edit 2509 Diffusion Model (fp8_e4m3fn)",
"type": "diffusion_model",
"base": "Qwen-Image-Edit",
"save_path": "diffusion_models/qwen-image-edit",
"description": "Qwen-Image-Edit 2509 diffusion model (fp8_e4m3fn)",
"reference": "https://huggingface.co/Comfy-Org/Qwen-Image-Edit_ComfyUI",
"filename": "qwen_image_edit_2509_fp8_e4m3fn.safetensors",
"url": "https://huggingface.co/Comfy-Org/Qwen-Image-Edit_ComfyUI/resolve/main/split_files/diffusion_models/qwen_image_edit_2509_fp8_e4m3fn.safetensors",
"size": "4.89GB"
},
{
"name": "Qwen-Image-Edit 2509 Diffusion Model (bf16)",
"type": "diffusion_model",
"base": "Qwen-Image-Edit",
"save_path": "diffusion_models/qwen-image-edit",
"description": "Qwen-Image-Edit 2509 diffusion model (bf16)",
"reference": "https://huggingface.co/Comfy-Org/Qwen-Image-Edit_ComfyUI",
"filename": "qwen_image_edit_2509_bf16.safetensors",
"url": "https://huggingface.co/Comfy-Org/Qwen-Image-Edit_ComfyUI/resolve/main/split_files/diffusion_models/qwen_image_edit_2509_bf16.safetensors",
"size": "9.78GB"
},
{
"name": "Qwen-Image-Edit Diffusion Model (fp8_e4m3fn)",
"type": "diffusion_model",
"base": "Qwen-Image-Edit",
"save_path": "diffusion_models/qwen-image-edit",
"description": "Qwen-Image-Edit diffusion model (fp8_e4m3fn)",
"reference": "https://huggingface.co/Comfy-Org/Qwen-Image-Edit_ComfyUI",
"filename": "qwen_image_edit_fp8_e4m3fn.safetensors",
"url": "https://huggingface.co/Comfy-Org/Qwen-Image-Edit_ComfyUI/resolve/main/split_files/diffusion_models/qwen_image_edit_fp8_e4m3fn.safetensors",
"size": "4.89GB"
},
{
"name": "Qwen-Image-Edit Diffusion Model (bf16)",
"type": "diffusion_model",
"base": "Qwen-Image-Edit",
"save_path": "diffusion_models/qwen-image-edit",
"description": "Qwen-Image-Edit diffusion model (bf16)",
"reference": "https://huggingface.co/Comfy-Org/Qwen-Image-Edit_ComfyUI",
"filename": "qwen_image_edit_bf16.safetensors",
"url": "https://huggingface.co/Comfy-Org/Qwen-Image-Edit_ComfyUI/resolve/main/split_files/diffusion_models/qwen_image_edit_bf16.safetensors",
"size": "9.78GB"
},
{
"name": "Qwen-Image-Lightning 8steps V1.0",
"type": "lora",
"base": "Qwen-Image",
"save_path": "loras/qwen-image-lightning",
"description": "Qwen-Image-Lightning 8-step LoRA model V1.0",
"reference": "https://huggingface.co/lightx2v/Qwen-Image-Lightning",
"filename": "Qwen-Image-Lightning-8steps-V1.0.safetensors",
"url": "https://huggingface.co/lightx2v/Qwen-Image-Lightning/resolve/main/Qwen-Image-Lightning-8steps-V1.0.safetensors",
"size": "9.78GB"
},
{
"name": "Qwen-Image-Lightning 4steps V1.0",
"type": "lora",
"base": "Qwen-Image",
"save_path": "loras/qwen-image-lightning",
"description": "Qwen-Image-Lightning 4-step LoRA model V1.0",
"reference": "https://huggingface.co/lightx2v/Qwen-Image-Lightning",
"filename": "Qwen-Image-Lightning-4steps-V1.0.safetensors",
"url": "https://huggingface.co/lightx2v/Qwen-Image-Lightning/resolve/main/Qwen-Image-Lightning-4steps-V1.0.safetensors",
"size": "9.78GB"
},
{
"name": "Qwen-Image-Lightning 4steps V1.0 (bf16)",
"type": "lora",
"base": "Qwen-Image",
"save_path": "loras/qwen-image-lightning",
"description": "Qwen-Image-Lightning 4-step LoRA model V1.0 (bf16)",
"reference": "https://huggingface.co/lightx2v/Qwen-Image-Lightning",
"filename": "Qwen-Image-Lightning-4steps-V1.0-bf16.safetensors",
"url": "https://huggingface.co/lightx2v/Qwen-Image-Lightning/resolve/main/Qwen-Image-Lightning-4steps-V1.0-bf16.safetensors",
"size": "19.6GB"
},
{
"name": "Qwen-Image-Lightning 4steps V2.0",
"type": "lora",
"base": "Qwen-Image",
"save_path": "loras/qwen-image-lightning",
"description": "Qwen-Image-Lightning 4-step LoRA model V2.0",
"reference": "https://huggingface.co/lightx2v/Qwen-Image-Lightning",
"filename": "Qwen-Image-Lightning-4steps-V2.0.safetensors",
"url": "https://huggingface.co/lightx2v/Qwen-Image-Lightning/resolve/main/Qwen-Image-Lightning-4steps-V2.0.safetensors",
"size": "9.78GB"
},
{
"name": "Qwen-Image-Lightning 4steps V2.0 (bf16)",
"type": "lora",
"base": "Qwen-Image",
"save_path": "loras/qwen-image-lightning",
"description": "Qwen-Image-Lightning 4-step LoRA model V2.0 (bf16)",
"reference": "https://huggingface.co/lightx2v/Qwen-Image-Lightning",
"filename": "Qwen-Image-Lightning-4steps-V2.0-bf16.safetensors",
"url": "https://huggingface.co/lightx2v/Qwen-Image-Lightning/resolve/main/Qwen-Image-Lightning-4steps-V2.0-bf16.safetensors",
"size": "19.6GB"
},
{
"name": "Qwen-Image-Lightning 8steps V1.1",
"type": "lora",
"base": "Qwen-Image",
"save_path": "loras/qwen-image-lightning",
"description": "Qwen-Image-Lightning 8-step LoRA model V1.1",
"reference": "https://huggingface.co/lightx2v/Qwen-Image-Lightning",
"filename": "Qwen-Image-Lightning-8steps-V1.1.safetensors",
"url": "https://huggingface.co/lightx2v/Qwen-Image-Lightning/resolve/main/Qwen-Image-Lightning-8steps-V1.1.safetensors",
"size": "9.78GB"
},
{
"name": "Qwen-Image-Lightning 8steps V1.1 (bf16)",
"type": "lora",
"base": "Qwen-Image",
"save_path": "loras/qwen-image-lightning",
"description": "Qwen-Image-Lightning 8-step LoRA model V1.1 (bf16)",
"reference": "https://huggingface.co/lightx2v/Qwen-Image-Lightning",
"filename": "Qwen-Image-Lightning-8steps-V1.1-bf16.safetensors",
"url": "https://huggingface.co/lightx2v/Qwen-Image-Lightning/resolve/main/Qwen-Image-Lightning-8steps-V1.1-bf16.safetensors",
"size": "19.6GB"
},
{
"name": "Qwen-Image-Lightning 8steps V2.0",
"type": "lora",
"base": "Qwen-Image",
"save_path": "loras/qwen-image-lightning",
"description": "Qwen-Image-Lightning 8-step LoRA model V2.0",
"reference": "https://huggingface.co/lightx2v/Qwen-Image-Lightning",
"filename": "Qwen-Image-Lightning-8steps-V2.0.safetensors",
"url": "https://huggingface.co/lightx2v/Qwen-Image-Lightning/resolve/main/Qwen-Image-Lightning-8steps-V2.0.safetensors",
"size": "9.78GB"
},
{
"name": "Qwen-Image-Lightning 8steps V2.0 (bf16)",
"type": "lora",
"base": "Qwen-Image",
"save_path": "loras/qwen-image-lightning",
"description": "Qwen-Image-Lightning 8-step LoRA model V2.0 (bf16)",
"reference": "https://huggingface.co/lightx2v/Qwen-Image-Lightning",
"filename": "Qwen-Image-Lightning-8steps-V2.0-bf16.safetensors",
"url": "https://huggingface.co/lightx2v/Qwen-Image-Lightning/resolve/main/Qwen-Image-Lightning-8steps-V2.0-bf16.safetensors",
"size": "19.6GB"
},
{
"name": "Qwen-Image-Edit-Lightning 4steps V1.0",
"type": "lora",
"base": "Qwen-Image-Edit",
"save_path": "loras/qwen-image-edit-lightning",
"description": "Qwen-Image-Edit-Lightning 4-step LoRA model V1.0",
"reference": "https://huggingface.co/lightx2v/Qwen-Image-Lightning",
"filename": "Qwen-Image-Edit-Lightning-4steps-V1.0.safetensors",
"url": "https://huggingface.co/lightx2v/Qwen-Image-Lightning/resolve/main/Qwen-Image-Edit-Lightning-4steps-V1.0.safetensors",
"size": "9.78GB"
},
{
"name": "Qwen-Image-Edit-Lightning 4steps V1.0 (bf16)",
"type": "lora",
"base": "Qwen-Image-Edit",
"save_path": "loras/qwen-image-edit-lightning",
"description": "Qwen-Image-Edit-Lightning 4-step LoRA model V1.0 (bf16)",
"reference": "https://huggingface.co/lightx2v/Qwen-Image-Lightning",
"filename": "Qwen-Image-Edit-Lightning-4steps-V1.0-bf16.safetensors",
"url": "https://huggingface.co/lightx2v/Qwen-Image-Lightning/resolve/main/Qwen-Image-Edit-Lightning-4steps-V1.0-bf16.safetensors",
"size": "19.6GB"
},
{
"name": "Qwen-Image-Edit-Lightning 8steps V1.0",
"type": "lora",
"base": "Qwen-Image-Edit",
"save_path": "loras/qwen-image-edit-lightning",
"description": "Qwen-Image-Edit-Lightning 8-step LoRA model V1.0",
"reference": "https://huggingface.co/lightx2v/Qwen-Image-Lightning",
"filename": "Qwen-Image-Edit-Lightning-8steps-V1.0.safetensors",
"url": "https://huggingface.co/lightx2v/Qwen-Image-Lightning/resolve/main/Qwen-Image-Edit-Lightning-8steps-V1.0.safetensors",
"size": "9.78GB"
},
{
"name": "Qwen-Image-Edit-Lightning 8steps V1.0 (bf16)",
"type": "lora",
"base": "Qwen-Image-Edit",
"save_path": "loras/qwen-image-edit-lightning",
"description": "Qwen-Image-Edit-Lightning 8-step LoRA model V1.0 (bf16)",
"reference": "https://huggingface.co/lightx2v/Qwen-Image-Lightning",
"filename": "Qwen-Image-Edit-Lightning-8steps-V1.0-bf16.safetensors",
"url": "https://huggingface.co/lightx2v/Qwen-Image-Lightning/resolve/main/Qwen-Image-Edit-Lightning-8steps-V1.0-bf16.safetensors",
"size": "19.6GB"
},
{
"name": "Qwen-Image-Edit-2509-Lightning 4steps V1.0 (bf16)",
"type": "lora",
"base": "Qwen-Image-Edit",
"save_path": "loras/qwen-image-edit-lightning",
"description": "Qwen-Image-Edit-2509-Lightning 4-step LoRA model V1.0 (bf16)",
"reference": "https://huggingface.co/lightx2v/Qwen-Image-Lightning",
"filename": "Qwen-Image-Edit-2509-Lightning-4steps-V1.0-bf16.safetensors",
"url": "https://huggingface.co/lightx2v/Qwen-Image-Lightning/resolve/main/Qwen-Image-Edit-2509/Qwen-Image-Edit-2509-Lightning-4steps-V1.0-bf16.safetensors",
"size": "19.6GB"
},
{
"name": "Qwen-Image-Edit-2509-Lightning 4steps V1.0 (fp32)",
"type": "lora",
"base": "Qwen-Image-Edit",
"save_path": "loras/qwen-image-edit-lightning",
"description": "Qwen-Image-Edit-2509-Lightning 4-step LoRA model V1.0 (fp32)",
"reference": "https://huggingface.co/lightx2v/Qwen-Image-Lightning",
"filename": "Qwen-Image-Edit-2509-Lightning-4steps-V1.0-fp32.safetensors",
"url": "https://huggingface.co/lightx2v/Qwen-Image-Lightning/resolve/main/Qwen-Image-Edit-2509/Qwen-Image-Edit-2509-Lightning-4steps-V1.0-fp32.safetensors",
"size": "39.1GB"
},
{
"name": "Qwen-Image-Edit-2509-Lightning 8steps V1.0 (bf16)",
"type": "lora",
"base": "Qwen-Image-Edit",
"save_path": "loras/qwen-image-edit-lightning",
"description": "Qwen-Image-Edit-2509-Lightning 8-step LoRA model V1.0 (bf16)",
"reference": "https://huggingface.co/lightx2v/Qwen-Image-Lightning",
"filename": "Qwen-Image-Edit-2509-Lightning-8steps-V1.0-bf16.safetensors",
"url": "https://huggingface.co/lightx2v/Qwen-Image-Lightning/resolve/main/Qwen-Image-Edit-2509/Qwen-Image-Edit-2509-Lightning-8steps-V1.0-bf16.safetensors",
"size": "19.6GB"
},
{
"name": "Qwen-Image-Edit-2509-Lightning 8steps V1.0 (fp32)",
"type": "lora",
"base": "Qwen-Image-Edit",
"save_path": "loras/qwen-image-edit-lightning",
"description": "Qwen-Image-Edit-2509-Lightning 8-step LoRA model V1.0 (fp32)",
"reference": "https://huggingface.co/lightx2v/Qwen-Image-Lightning",
"filename": "Qwen-Image-Edit-2509-Lightning-8steps-V1.0-fp32.safetensors",
"url": "https://huggingface.co/lightx2v/Qwen-Image-Lightning/resolve/main/Qwen-Image-Edit-2509/Qwen-Image-Edit-2509-Lightning-8steps-V1.0-fp32.safetensors",
"size": "39.1GB"
},
{
"name": "Qwen-Image InstantX ControlNet Union",
"type": "controlnet",
"base": "Qwen-Image",
"save_path": "controlnet/qwen-image/instantx",
"description": "Qwen-Image InstantX ControlNet Union model",
"reference": "https://huggingface.co/Comfy-Org/Qwen-Image-InstantX-ControlNets",
"filename": "Qwen-Image-InstantX-ControlNet-Union.safetensors",
"url": "https://huggingface.co/Comfy-Org/Qwen-Image-InstantX-ControlNets/resolve/main/split_files/controlnet/Qwen-Image-InstantX-ControlNet-Union.safetensors",
"size": "2.54GB"
},
{
"name": "Qwen-Image InstantX ControlNet Inpainting",
"type": "controlnet",
"base": "Qwen-Image",
"save_path": "controlnet/qwen-image/instantx",
"description": "Qwen-Image InstantX ControlNet Inpainting model",
"reference": "https://huggingface.co/Comfy-Org/Qwen-Image-InstantX-ControlNets",
"filename": "Qwen-Image-InstantX-ControlNet-Inpainting.safetensors",
"url": "https://huggingface.co/Comfy-Org/Qwen-Image-InstantX-ControlNets/resolve/main/split_files/controlnet/Qwen-Image-InstantX-ControlNet-Inpainting.safetensors",
"size": "2.54GB"
} }
] ]
} }

View File

File diff suppressed because it is too large

View File

File diff suppressed because it is too large

View File

File diff suppressed because it is too large

View File

@@ -1,3 +1,3 @@
#!/bin/bash #!/bin/bash
rm ~/.tmp/dev/*.py > /dev/null 2>&1 rm ~/.tmp/dev/*.py > /dev/null 2>&1
python ../../scanner.py ~/.tmp/dev $* python ../../scanner.py ~/.tmp/dev

View File

@@ -1,25 +1,5 @@
{ {
"custom_nodes": [ "custom_nodes": [
{
"author": "synchronicity-labs",
"title": "ComfyUI Sync Lipsync Node",
"reference": "https://github.com/synchronicity-labs/sync-comfyui",
"files": [
"https://github.com/synchronicity-labs/sync-comfyui"
],
"install_type": "git-clone",
"description": "This custom node allows you to perform audio-video lip synchronization inside ComfyUI using a simple interface."
},
{
"author": "joaomede",
"title": "ComfyUI-Unload-Model-Fork",
"reference": "https://github.com/joaomede/ComfyUI-Unload-Model-Fork",
"files": [
"https://github.com/joaomede/ComfyUI-Unload-Model-Fork"
],
"install_type": "git-clone",
"description": "For unloading a model or all models, using the memory management that is already present in ComfyUI. Copied from [a/https://github.com/willblaschko/ComfyUI-Unload-Models](https://github.com/willblaschko/ComfyUI-Unload-Models) but without the unnecessary extra stuff."
},
{ {
"author": "SanDiegoDude", "author": "SanDiegoDude",
"title": "ComfyUI-HiDream-Sampler [WIP]", "title": "ComfyUI-HiDream-Sampler [WIP]",
@@ -169,16 +149,6 @@
], ],
"install_type": "git-clone", "install_type": "git-clone",
"description": "A fork of KJNodes for ComfyUI.\nVarious quality of life -nodes for ComfyUI, mostly just visual stuff to improve usability" "description": "A fork of KJNodes for ComfyUI.\nVarious quality of life -nodes for ComfyUI, mostly just visual stuff to improve usability"
},
{
"author": "huixingyun",
"title": "ComfyUI-SoundFlow",
"reference": "https://github.com/huixingyun/ComfyUI-SoundFlow",
"files": [
"https://github.com/huixingyun/ComfyUI-SoundFlow"
],
"install_type": "git-clone",
"description": "forked from https://github.com/fredconex/ComfyUI-SoundFlow (removed)"
} }
] ]
} }

View File

File diff suppressed because it is too large

View File

File diff suppressed because it is too large

View File

File diff suppressed because it is too large

View File

@@ -1,219 +1,5 @@
{ {
"models": [ "models": [
{
"name": "Comfy-Org/Wan2.2 i2v high noise 14B (fp16)",
"type": "diffusion_model",
"base": "Wan2.2",
"save_path": "diffusion_models/Wan2.2",
"description": "Wan2.2 diffusion model for i2v high noise 14B (fp16)",
"reference": "https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged",
"filename": "wan2.2_i2v_high_noise_14B_fp16.safetensors",
"url": "https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/resolve/main/split_files/diffusion_models/wan2.2_i2v_high_noise_14B_fp16.safetensors",
"size": "28.6GB"
},
{
"name": "Comfy-Org/Wan2.2 i2v high noise 14B (fp8_scaled)",
"type": "diffusion_model",
"base": "Wan2.2",
"save_path": "diffusion_models/Wan2.2",
"description": "Wan2.2 diffusion model for i2v high noise 14B (fp8_scaled)",
"reference": "https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged",
"filename": "wan2.2_i2v_high_noise_14B_fp8_scaled.safetensors",
"url": "https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/resolve/main/split_files/diffusion_models/wan2.2_i2v_high_noise_14B_fp8_scaled.safetensors",
"size": "14.3GB"
},
{
"name": "Comfy-Org/Wan2.2 i2v low noise 14B (fp16)",
"type": "diffusion_model",
"base": "Wan2.2",
"save_path": "diffusion_models/Wan2.2",
"description": "Wan2.2 diffusion model for i2v low noise 14B (fp16)",
"reference": "https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged",
"filename": "wan2.2_i2v_low_noise_14B_fp16.safetensors",
"url": "https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/resolve/main/split_files/diffusion_models/wan2.2_i2v_low_noise_14B_fp16.safetensors",
"size": "28.6GB"
},
{
"name": "Comfy-Org/Wan2.2 i2v low noise 14B (fp8_scaled)",
"type": "diffusion_model",
"base": "Wan2.2",
"save_path": "diffusion_models/Wan2.2",
"description": "Wan2.2 diffusion model for i2v low noise 14B (fp8_scaled)",
"reference": "https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged",
"filename": "wan2.2_i2v_low_noise_14B_fp8_scaled.safetensors",
"url": "https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/resolve/main/split_files/diffusion_models/wan2.2_i2v_low_noise_14B_fp8_scaled.safetensors",
"size": "14.3GB"
},
{
"name": "Comfy-Org/Wan2.2 t2v high noise 14B (fp16)",
"type": "diffusion_model",
"base": "Wan2.2",
"save_path": "diffusion_models/Wan2.2",
"description": "Wan2.2 diffusion model for t2v high noise 14B (fp16)",
"reference": "https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged",
"filename": "wan2.2_t2v_high_noise_14B_fp16.safetensors",
"url": "https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/resolve/main/split_files/diffusion_models/wan2.2_t2v_high_noise_14B_fp16.safetensors",
"size": "28.6GB"
},
{
"name": "Comfy-Org/Wan2.2 t2v high noise 14B (fp8_scaled)",
"type": "diffusion_model",
"base": "Wan2.2",
"save_path": "diffusion_models/Wan2.2",
"description": "Wan2.2 diffusion model for t2v high noise 14B (fp8_scaled)",
"reference": "https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged",
"filename": "wan2.2_t2v_high_noise_14B_fp8_scaled.safetensors",
"url": "https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/resolve/main/split_files/diffusion_models/wan2.2_t2v_high_noise_14B_fp8_scaled.safetensors",
"size": "14.3GB"
},
{
"name": "Comfy-Org/Wan2.2 t2v low noise 14B (fp16)",
"type": "diffusion_model",
"base": "Wan2.2",
"save_path": "diffusion_models/Wan2.2",
"description": "Wan2.2 diffusion model for t2v low noise 14B (fp16)",
"reference": "https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged",
"filename": "wan2.2_t2v_low_noise_14B_fp16.safetensors",
"url": "https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/resolve/main/split_files/diffusion_models/wan2.2_t2v_low_noise_14B_fp16.safetensors",
"size": "28.6GB"
},
{
"name": "Comfy-Org/Wan2.2 t2v low noise 14B (fp8_scaled)",
"type": "diffusion_model",
"base": "Wan2.2",
"save_path": "diffusion_models/Wan2.2",
"description": "Wan2.2 diffusion model for t2v low noise 14B (fp8_scaled)",
"reference": "https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged",
"filename": "wan2.2_t2v_low_noise_14B_fp8_scaled.safetensors",
"url": "https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/resolve/main/split_files/diffusion_models/wan2.2_t2v_low_noise_14B_fp8_scaled.safetensors",
"size": "14.3GB"
},
{
"name": "Comfy-Org/Wan2.2 ti2v 5B (fp16)",
"type": "diffusion_model",
"base": "Wan2.2",
"save_path": "diffusion_models/Wan2.2",
"description": "Wan2.2 diffusion model for ti2v 5B (fp16)",
"reference": "https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged",
"filename": "wan2.2_ti2v_5B_fp16.safetensors",
"url": "https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/resolve/main/split_files/diffusion_models/wan2.2_ti2v_5B_fp16.safetensors",
"size": "10.0GB"
},
{
"name": "sam2.1_hiera_tiny.pt",
"type": "sam2.1",
"base": "SAM",
"save_path": "sams",
"description": "Segmenty Anything SAM 2.1 hiera model (tiny)",
"reference": "https://github.com/facebookresearch/sam2#model-description",
"filename": "sam2.1_hiera_tiny.pt",
"url": "https://dl.fbaipublicfiles.com/segment_anything_2/092824/sam2.1_hiera_tiny.pt",
"size": "149.0MB"
},
{
"name": "sam2.1_hiera_small.pt",
"type": "sam2.1",
"base": "SAM",
"save_path": "sams",
"description": "Segmenty Anything SAM 2.1 hiera model (small)",
"reference": "https://github.com/facebookresearch/sam2#model-description",
"filename": "sam2.1_hiera_small.pt",
"url": "https://dl.fbaipublicfiles.com/segment_anything_2/092824/sam2.1_hiera_small.pt",
"size": "176.0MB"
},
{
"name": "sam2.1_hiera_base_plus.pt",
"type": "sam2.1",
"base": "SAM",
"save_path": "sams",
"description": "Segmenty Anything SAM 2.1 hiera model (base+)",
"reference": "https://github.com/facebookresearch/sam2#model-description",
"filename": "sam2.1_hiera_base_plus.pt",
"url": "https://dl.fbaipublicfiles.com/segment_anything_2/092824/sam2.1_hiera_base_plus.pt",
"size": "309.0MB"
},
{
"name": "sam2.1_hiera_large.pt",
"type": "sam2.1",
"base": "SAM",
"save_path": "sams",
"description": "Segmenty Anything SAM 2.1 hiera model (large)",
"reference": "https://github.com/facebookresearch/sam2#model-description",
"filename": "sam2.1_hiera_large.pt",
"url": "https://dl.fbaipublicfiles.com/segment_anything_2/092824/sam2.1_hiera_large.pt",
"size": "857.0MB"
},
{
"name": "sam2_hiera_tiny.pt",
"type": "sam2",
"base": "SAM",
"save_path": "sams",
"description": "Segmenty Anything SAM 2 hiera model (tiny)",
"reference": "https://github.com/facebookresearch/sam2#model-description",
"filename": "sam2_hiera_tiny.pt",
"url": "https://dl.fbaipublicfiles.com/segment_anything_2/072824/sam2_hiera_tiny.pt",
"size": "149.0MB"
},
{
"name": "sam2_hiera_small.pt",
"type": "sam2",
"base": "SAM",
"save_path": "sams",
"description": "Segmenty Anything SAM 2 hiera model (small)",
"reference": "https://github.com/facebookresearch/sam2#model-description",
"filename": "sam2_hiera_small.pt",
"url": "https://dl.fbaipublicfiles.com/segment_anything_2/072824/sam2_hiera_small.pt",
"size": "176.0MB"
},
{
"name": "sam2_hiera_base_plus.pt",
"type": "sam2",
"base": "SAM",
"save_path": "sams",
"description": "Segmenty Anything SAM 2 hiera model (base+)",
"reference": "https://github.com/facebookresearch/sam2#model-description",
"filename": "sam2_hiera_base_plus.pt",
"url": "https://dl.fbaipublicfiles.com/segment_anything_2/072824/sam2_hiera_base_plus.pt",
"size": "309.0MB"
},
{
"name": "sam2_hiera_large.pt",
"type": "sam2",
"base": "SAM",
"save_path": "sams",
"description": "Segmenty Anything SAM 2 hiera model (large)",
"reference": "https://github.com/facebookresearch/sam2#model-description",
"filename": "sam2_hiera_large.pt",
"url": "https://dl.fbaipublicfiles.com/segment_anything_2/072824/sam2_hiera_large.pt",
"size": "857.0MB"
},
{
"name": "Comfy-Org/omnigen2_fp16.safetensors",
"type": "diffusion_model",
"base": "OmniGen2",
"save_path": "default",
"description": "OmniGen2 diffusion model. This is required for using OmniGen2.",
"reference": "https://huggingface.co/Comfy-Org/Omnigen2_ComfyUI_repackaged",
"filename": "omnigen2_fp16.safetensors",
"url": "https://huggingface.co/Comfy-Org/Omnigen2_ComfyUI_repackaged/resolve/main/split_files/diffusion_models/omnigen2_fp16.safetensors",
"size": "7.93GB"
},
{
"name": "Comfy-Org/qwen_2.5_vl_fp16.safetensors",
"type": "clip",
"base": "qwen-2.5",
"save_path": "default",
"description": "text encoder for OmniGen2",
"reference": "https://huggingface.co/Comfy-Org/Omnigen2_ComfyUI_repackaged",
"filename": "qwen_2.5_vl_fp16.safetensors",
"url": "https://huggingface.co/Comfy-Org/Omnigen2_ComfyUI_repackaged/resolve/main/split_files/text_encoders/qwen_2.5_vl_fp16.safetensors",
"size": "7.51GB"
},
{
"name": "Latent Bridge Matching for Image Relighting",
"type": "diffusion_model",
@@ -687,6 +473,224 @@
"filename": "llava_llama3_fp16.safetensors", "filename": "llava_llama3_fp16.safetensors",
"url": "https://huggingface.co/Comfy-Org/HunyuanVideo_repackaged/resolve/main/split_files/text_encoders/llava_llama3_fp16.safetensors", "url": "https://huggingface.co/Comfy-Org/HunyuanVideo_repackaged/resolve/main/split_files/text_encoders/llava_llama3_fp16.safetensors",
"size": "16.1GB" "size": "16.1GB"
},
{
"name": "PixArt-Sigma-XL-2-512-MS.safetensors (diffusion)",
"type": "diffusion_model",
"base": "pixart-sigma",
"save_path": "diffusion_models/PixArt-Sigma",
"description": "PixArt-Sigma Diffusion model",
"reference": "https://huggingface.co/PixArt-alpha/PixArt-Sigma-XL-2-512-MS",
"filename": "PixArt-Sigma-XL-2-512-MS.safetensors",
"url": "https://huggingface.co/PixArt-alpha/PixArt-Sigma-XL-2-512-MS/resolve/main/transformer/diffusion_pytorch_model.safetensors",
"size": "2.44GB"
},
{
"name": "PixArt-Sigma-XL-2-1024-MS.safetensors (diffusion)",
"type": "diffusion_model",
"base": "pixart-sigma",
"save_path": "diffusion_models/PixArt-Sigma",
"description": "PixArt-Sigma Diffusion model",
"reference": "https://huggingface.co/PixArt-alpha/PixArt-Sigma-XL-2-1024-MS",
"filename": "PixArt-Sigma-XL-2-1024-MS.safetensors",
"url": "https://huggingface.co/PixArt-alpha/PixArt-Sigma-XL-2-1024-MS/resolve/main/transformer/diffusion_pytorch_model.safetensors",
"size": "2.44GB"
},
{
"name": "PixArt-XL-2-1024-MS.safetensors (diffusion)",
"type": "diffusion_model",
"base": "pixart-alpha",
"save_path": "diffusion_models/PixArt-Alpha",
"description": "PixArt-Alpha Diffusion model",
"reference": "https://huggingface.co/PixArt-alpha/PixArt-XL-2-1024-MS",
"filename": "PixArt-XL-2-1024-MS.safetensors",
"url": "https://huggingface.co/PixArt-alpha/PixArt-XL-2-1024-MS/resolve/main/transformer/diffusion_pytorch_model.safetensors",
"size": "2.45GB"
},
{
"name": "Comfy-Org/hunyuan_video_t2v_720p_bf16.safetensors",
"type": "diffusion_model",
"base": "Hunyuan Video",
"save_path": "diffusion_models/hunyuan_video",
"description": "Huyuan Video diffusion model. repackaged version.",
"reference": "https://huggingface.co/Comfy-Org/HunyuanVideo_repackaged",
"filename": "hunyuan_video_t2v_720p_bf16.safetensors",
"url": "https://huggingface.co/Comfy-Org/HunyuanVideo_repackaged/resolve/main/split_files/diffusion_models/hunyuan_video_t2v_720p_bf16.safetensors",
"size": "25.6GB"
},
{
"name": "Comfy-Org/hunyuan_video_vae_bf16.safetensors",
"type": "VAE",
"base": "Hunyuan Video",
"save_path": "VAE",
"description": "Huyuan Video VAE model. repackaged version.",
"reference": "https://huggingface.co/Comfy-Org/HunyuanVideo_repackaged",
"filename": "hunyuan_video_vae_bf16.safetensors",
"url": "https://huggingface.co/Comfy-Org/HunyuanVideo_repackaged/resolve/main/split_files/vae/hunyuan_video_vae_bf16.safetensors",
"size": "493MB"
},
{
"name": "LTX-Video 2B v0.9.1 Checkpoint",
"type": "checkpoint",
"base": "LTX-Video",
"save_path": "checkpoints/LTXV",
"description": "LTX-Video is the first DiT-based video generation model capable of generating high-quality videos in real-time. It produces 24 FPS videos at a 768x512 resolution faster than they can be watched. Trained on a large-scale dataset of diverse videos, the model generates high-resolution videos with realistic and varied content.",
"reference": "https://huggingface.co/Lightricks/LTX-Video",
"filename": "ltx-video-2b-v0.9.1.safetensors",
"url": "https://huggingface.co/Lightricks/LTX-Video/resolve/main/ltx-video-2b-v0.9.1.safetensors",
"size": "5.72GB"
},
{
"name": "XLabs-AI/flux-canny-controlnet-v3.safetensors",
"type": "controlnet",
"base": "FLUX.1",
"save_path": "xlabs/controlnets",
"description": "ControlNet checkpoints for FLUX.1-dev model by Black Forest Labs.",
"reference": "https://huggingface.co/XLabs-AI/flux-controlnet-collections",
"filename": "flux-canny-controlnet-v3.safetensors",
"url": "https://huggingface.co/XLabs-AI/flux-controlnet-collections/resolve/main/flux-canny-controlnet-v3.safetensors",
"size": "1.49GB"
},
{
"name": "XLabs-AI/flux-depth-controlnet-v3.safetensors",
"type": "controlnet",
"base": "FLUX.1",
"save_path": "xlabs/controlnets",
"description": "ControlNet checkpoints for FLUX.1-dev model by Black Forest Labs.",
"reference": "https://huggingface.co/XLabs-AI/flux-controlnet-collections",
"filename": "flux-depth-controlnet-v3.safetensors",
"url": "https://huggingface.co/XLabs-AI/flux-controlnet-collections/resolve/main/flux-depth-controlnet-v3.safetensors",
"size": "1.49GB"
},
{
"name": "XLabs-AI/flux-hed-controlnet-v3.safetensors",
"type": "controlnet",
"base": "FLUX.1",
"save_path": "xlabs/controlnets",
"description": "ControlNet checkpoints for FLUX.1-dev model by Black Forest Labs.",
"reference": "https://huggingface.co/XLabs-AI/flux-controlnet-collections",
"filename": "flux-hed-controlnet-v3.safetensors",
"url": "https://huggingface.co/XLabs-AI/flux-controlnet-collections/resolve/main/flux-hed-controlnet-v3.safetensors",
"size": "1.49GB"
},
{
"name": "XLabs-AI/realism_lora.safetensors",
"type": "lora",
"base": "FLUX.1",
"save_path": "xlabs/loras",
"description": "A checkpoint with trained LoRAs for FLUX.1-dev model by Black Forest Labs",
"reference": "https://huggingface.co/XLabs-AI/flux-lora-collection",
"filename": "realism_lora.safetensors",
"url": "https://huggingface.co/XLabs-AI/flux-lora-collection/resolve/main/realism_lora.safetensors",
"size": "44.8MB"
},
{
"name": "XLabs-AI/art_lora.safetensors",
"type": "lora",
"base": "FLUX.1",
"save_path": "xlabs/loras",
"description": "A checkpoint with trained LoRAs for FLUX.1-dev model by Black Forest Labs",
"reference": "https://huggingface.co/XLabs-AI/flux-lora-collection",
"filename": "art_lora.safetensors",
"url": "https://huggingface.co/XLabs-AI/flux-lora-collection/resolve/main/scenery_lora.safetensors",
"size": "44.8MB"
},
{
"name": "XLabs-AI/mjv6_lora.safetensors",
"type": "lora",
"base": "FLUX.1",
"save_path": "xlabs/loras",
"description": "A checkpoint with trained LoRAs for FLUX.1-dev model by Black Forest Labs",
"reference": "https://huggingface.co/XLabs-AI/flux-lora-collection",
"filename": "mjv6_lora.safetensors",
"url": "https://huggingface.co/XLabs-AI/flux-lora-collection/resolve/main/mjv6_lora.safetensors",
"size": "44.8MB"
},
{
"name": "XLabs-AI/flux-ip-adapter",
"type": "lora",
"base": "FLUX.1",
"save_path": "xlabs/ipadapters",
"description": "A checkpoint with trained LoRAs for FLUX.1-dev model by Black Forest Labs",
"reference": "https://huggingface.co/XLabs-AI/flux-ip-adapter",
"filename": "ip_adapter.safetensors",
"url": "https://huggingface.co/XLabs-AI/flux-ip-adapter/resolve/main/ip_adapter.safetensors",
"size": "982MB"
},
{
"name": "stabilityai/SD3.5-Large-Controlnet-Blur",
"type": "controlnet",
"base": "SD3.5",
"save_path": "controlnet/SD3.5",
"description": "Blur Controlnet model for SD3.5 Large",
"reference": "https://huggingface.co/stabilityai/stable-diffusion-3.5-controlnets",
"filename": "sd3.5_large_controlnet_blur.safetensors",
"url": "https://huggingface.co/stabilityai/stable-diffusion-3.5-controlnets/resolve/main/sd3.5_large_controlnet_blur.safetensors",
"size": "8.65GB"
},
{
"name": "stabilityai/SD3.5-Large-Controlnet-Canny",
"type": "controlnet",
"base": "SD3.5",
"save_path": "controlnet/SD3.5",
"description": "Canny Controlnet model for SD3.5 Large",
"reference": "https://huggingface.co/stabilityai/stable-diffusion-3.5-controlnets",
"filename": "sd3.5_large_controlnet_canny.safetensors",
"url": "https://huggingface.co/stabilityai/stable-diffusion-3.5-controlnets/resolve/main/sd3.5_large_controlnet_canny.safetensors",
"size": "8.65GB"
},
{
"name": "stabilityai/SD3.5-Large-Controlnet-Depth",
"type": "controlnet",
"base": "SD3.5",
"save_path": "controlnet/SD3.5",
"description": "Depth Controlnet model for SD3.5 Large",
"reference": "https://huggingface.co/stabilityai/stable-diffusion-3.5-controlnets",
"filename": "sd3.5_large_controlnet_depth.safetensors",
"url": "https://huggingface.co/stabilityai/stable-diffusion-3.5-controlnets/resolve/main/sd3.5_large_controlnet_depth.safetensors",
"size": "8.65GB"
},
{
"name": "LTX-Video 2B v0.9 Checkpoint",
"type": "checkpoint",
"base": "LTX-Video",
"save_path": "checkpoints/LTXV",
"description": "LTX-Video is the first DiT-based video generation model capable of generating high-quality videos in real-time. It produces 24 FPS videos at a 768x512 resolution faster than they can be watched. Trained on a large-scale dataset of diverse videos, the model generates high-resolution videos with realistic and varied content.",
"reference": "https://huggingface.co/Lightricks/LTX-Video",
"filename": "ltx-video-2b-v0.9.safetensors",
"url": "https://huggingface.co/Lightricks/LTX-Video/resolve/main/ltx-video-2b-v0.9.safetensors",
"size": "9.37GB"
},
{
"name": "InstantX/FLUX.1-dev-IP-Adapter",
"type": "IP-Adapter",
"base": "FLUX.1",
"save_path": "ipadapter-flux",
"description": "FLUX.1-dev-IP-Adapter",
"reference": "https://huggingface.co/InstantX/FLUX.1-dev-IP-Adapter",
"filename": "ip-adapter.bin",
"url": "https://huggingface.co/InstantX/FLUX.1-dev-IP-Adapter/resolve/main/ip-adapter.bin",
"size": "5.29GB"
},
{
"name": "Comfy-Org/sigclip_vision_384 (patch14_384)",
"type": "clip_vision",
"base": "sigclip",
"save_path": "clip_vision",
"description": "This clip vision model is required for FLUX.1 Redux.",
"reference": "https://huggingface.co/Comfy-Org/sigclip_vision_384/tree/main",
"filename": "sigclip_vision_patch14_384.safetensors",
"url": "https://huggingface.co/Comfy-Org/sigclip_vision_384/resolve/main/sigclip_vision_patch14_384.safetensors",
"size": "857MB"
}
]
}
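Each entry above follows the same model-list schema: `name`, `type`, `base`, `save_path`, `filename`, `url`, and `size`. As a rough illustration of how such an entry maps onto a local ComfyUI `models/` tree, the sketch below resolves one entry to a download target. The `models_root` layout, the fallback used for `save_path: "default"`, and the top-level `"models"` key are assumptions for the sketch, not the Manager's actual download logic.

```python
import json
import os
import urllib.request

def download_model_entry(entry: dict, models_root: str = "ComfyUI/models") -> str:
    """Resolve one model-list entry to <models_root>/<save_path>/<filename> and fetch it.

    Illustrative only: the models_root layout and the fallback for
    save_path == "default" (mapping it to the entry's type) are assumptions.
    """
    save_path = entry["save_path"]
    if save_path == "default":
        save_path = entry["type"]  # assumption: fall back to the type name
    target_dir = os.path.join(models_root, save_path)
    os.makedirs(target_dir, exist_ok=True)
    target_file = os.path.join(target_dir, entry["filename"])
    if not os.path.exists(target_file):
        urllib.request.urlretrieve(entry["url"], target_file)
    return target_file

if __name__ == "__main__":
    # assumption: a top-level "models" array, matching the closing braces above
    with open("model-list.json", encoding="utf-8") as f:
        models = json.load(f)["models"]
    entry = next(m for m in models if m["filename"] == "sigclip_vision_patch14_384.safetensors")
    print(download_model_entry(entry))
```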


@@ -10,16 +10,6 @@
"install_type": "git-clone", "install_type": "git-clone",
"description": "A minimal template for creating React/TypeScript frontend extensions for ComfyUI, with complete boilerplate setup including internationalization and unit testing." "description": "A minimal template for creating React/TypeScript frontend extensions for ComfyUI, with complete boilerplate setup including internationalization and unit testing."
}, },
{
"author": "comfyui-wiki",
"title": "ComfyUI-i18n-demo",
"reference": "https://github.com/comfyui-wiki/ComfyUI-i18n-demo",
"files": [
"https://github.com/comfyui-wiki/ComfyUI-i18n-demo"
],
"install_type": "git-clone",
"description": "ComfyUI custom node develop i18n support demo "
},
{
"author": "Suzie1",
"title": "Guide To Making Custom Nodes in ComfyUI",
@@ -341,36 +331,6 @@
],
"description": "Dynamic Node examples for ComfyUI",
"install_type": "git-clone"
},
{
"author": "Jonathon-Doran",
"title": "remote-combo-demo",
"reference": "https://github.com/Jonathon-Doran/remote-combo-demo",
"files": [
"https://github.com/Jonathon-Doran/remote-combo-demo"
],
"install_type": "git-clone",
"description": "A minimal test suite demonstrating how remote COMBO inputs behave in ComfyUI, with and without force_input"
},
{
"author": "J1mB091",
"title": "ComfyUI-J1mB091 Custom Nodes",
"reference": "https://github.com/J1mB091/ComfyUI-J1mB091",
"files": [
"https://github.com/J1mB091/ComfyUI-J1mB091"
],
"install_type": "git-clone",
"description": "Vibe Coded ComfyUI Custom Nodes"
},
{
"author": "aiforhumans",
"title": "XDev Nodes - Complete Toolkit",
"reference": "https://github.com/aiforhumans/comfyui-xdev-nodes",
"files": [
"https://github.com/aiforhumans/comfyui-xdev-nodes"
],
"install_type": "git-clone",
"description": "Complete ComfyUI development toolkit with 8 professional nodes including VAE tools, universal type testing, and comprehensive debugging infrastructure."
}
]
}
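The custom-node-list entries in this diff all share the same shape: `author`, `title`, `reference`, a `files` list holding the repository URL, `install_type: "git-clone"`, and a `description`. As a minimal, hedged sketch of consuming such an entry with GitPython (which the project already depends on), the snippet below clones each git-clone entry; the `custom_nodes_dir` layout and the top-level `"custom_nodes"` key are assumptions, and this is not the Manager's own installer.

```python
import json
import os

from git import Repo  # GitPython, already listed in the project's dependencies

def install_git_clone_entry(entry: dict, custom_nodes_dir: str = "ComfyUI/custom_nodes") -> None:
    """Clone a custom-node-list entry whose install_type is 'git-clone'.

    The entry layout mirrors the JSON above; the target directory is an assumption.
    """
    if entry.get("install_type") != "git-clone":
        return
    for repo_url in entry.get("files", []):
        name = os.path.splitext(os.path.basename(repo_url))[0]
        dest = os.path.join(custom_nodes_dir, name)
        if not os.path.exists(dest):
            Repo.clone_from(repo_url, dest, recursive=True)

if __name__ == "__main__":
    # assumption: a top-level "custom_nodes" array, matching the closing braces above
    with open("custom-node-list.json", encoding="utf-8") as f:
        data = json.load(f)
    for entry in data.get("custom_nodes", []):
        install_git_clone_entry(entry)
```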


File diff suppressed because it is too large


@@ -5,7 +5,7 @@ build-backend = "setuptools.build_meta"
[project]
name = "comfyui-manager"
license = { text = "GPL-3.0-only" }
-version = "4.0.3b4"
+version = "4.0.0-beta.4"
requires-python = ">= 3.9"
description = "ComfyUI-Manager provides features to install and manage custom nodes for ComfyUI, as well as various functionalities to assist with ComfyUI."
readme = "README.md"
@@ -13,28 +13,28 @@ keywords = ["comfyui", "comfyui-manager"]
maintainers = [
{ name = "Dr.Lt.Data", email = "dr.lt.data@gmail.com" },
-{ name = "Yoland Yan", email = "yoland@comfy.org" },
+{ name = "Yoland Yan", email = "yoland@drip.art" },
{ name = "James Kwon", email = "hongilkwon316@gmail.com" },
-{ name = "Robin Huang", email = "robin@comfy.org" },
+{ name = "Robin Huang", email = "robin@drip.art" },
]
classifiers = [
-"Development Status :: 5 - Production/Stable",
+"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: GNU General Public License v3 (GPLv3)",
]
dependencies = [
"GitPython",
"PyGithub",
-# "matrix-nio",
+"matrix-client==0.4.0",
"transformers",
"huggingface-hub>0.20",
"typer",
"rich",
"typing-extensions",
"toml",
"uv",
"chardet"
]
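Since `toml` is among the declared dependencies, the `[project]` metadata shown in this diff can be sanity-checked with a few lines. This is only an illustrative read of the fields above, not part of the Manager itself.

```python
import toml  # declared in the dependencies shown above

project = toml.load("pyproject.toml")["project"]
print(project["name"], project["version"])
print("requires-python:", project["requires-python"])
for dep in project["dependencies"]:
    print(" -", dep)  # GitPython, PyGithub, transformers, ...
```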


@@ -1,8 +1,8 @@
GitPython
PyGithub
-# matrix-nio
+matrix-client==0.4.0
transformers
-huggingface-hub
+huggingface-hub>0.20
typer
rich
typing-extensions


@@ -9,4 +9,4 @@ lint.select = [
"F", "F",
] ]
exclude = ["*.ipynb", "tests"] exclude = ["*.ipynb"]


@@ -7,15 +7,13 @@ import concurrent
import datetime
import concurrent.futures
import requests
import warnings
import argparse
builtin_nodes = set()
import sys
from urllib.parse import urlparse
from github import Github, Auth from github import Github
def download_url(url, dest_folder, filename=None):
@@ -41,51 +39,26 @@ def download_url(url, dest_folder, filename=None):
raise Exception(f"Failed to download file from {url}") raise Exception(f"Failed to download file from {url}")
def parse_arguments(): # prepare temp dir
"""Parse command-line arguments""" if len(sys.argv) > 1:
parser = argparse.ArgumentParser( temp_dir = sys.argv[1]
description='ComfyUI Manager Node Scanner', else:
formatter_class=argparse.RawDescriptionHelpFormatter, temp_dir = os.path.join(os.getcwd(), ".tmp")
epilog='''
Examples:
# Standard mode
python3 scanner.py
python3 scanner.py --skip-update
# Scan-only mode if not os.path.exists(temp_dir):
python3 scanner.py --scan-only temp-urls-clean.list os.makedirs(temp_dir)
python3 scanner.py --scan-only urls.list --temp-dir /custom/temp
python3 scanner.py --scan-only urls.list --skip-update
'''
)
parser.add_argument('--scan-only', type=str, metavar='URL_LIST_FILE',
help='Scan-only mode: provide URL list file (one URL per line)')
parser.add_argument('--temp-dir', type=str, metavar='DIR',
help='Temporary directory for cloned repositories')
parser.add_argument('--skip-update', action='store_true',
help='Skip git clone/pull operations')
parser.add_argument('--skip-stat-update', action='store_true',
help='Skip GitHub stats collection')
parser.add_argument('--skip-all', action='store_true',
help='Skip all update operations')
# Backward compatibility: positional argument for temp_dir
parser.add_argument('temp_dir_positional', nargs='?', metavar='TEMP_DIR',
help='(Legacy) Temporary directory path')
args = parser.parse_args()
return args
# Module-level variables (will be set in main if running as script) skip_update = '--skip-update' in sys.argv or '--skip-all' in sys.argv
args = None skip_stat_update = '--skip-stat-update' in sys.argv or '--skip-all' in sys.argv
scan_only_mode = False
url_list_file = None if not skip_stat_update:
temp_dir = None g = Github(os.environ.get('GITHUB_TOKEN'))
skip_update = False else:
skip_stat_update = True g = None
g = None
print(f"TEMP DIR: {temp_dir}")
parse_cnt = 0
@@ -100,22 +73,12 @@ def extract_nodes(code_text):
parse_cnt += 1
code_text = re.sub(r'\\[^"\']', '', code_text)
with warnings.catch_warnings(): parsed_code = ast.parse(code_text)
warnings.filterwarnings('ignore', category=SyntaxWarning)
warnings.filterwarnings('ignore', category=DeprecationWarning)
parsed_code = ast.parse(code_text)
# Support both ast.Assign and ast.AnnAssign (for type-annotated assignments)
assignments = (node for node in parsed_code.body if isinstance(node, (ast.Assign, ast.AnnAssign)))
assignments = (node for node in parsed_code.body if isinstance(node, ast.Assign))
for assignment in assignments:
# Handle ast.AnnAssign (e.g., NODE_CLASS_MAPPINGS: Type = {...}) if isinstance(assignment.targets[0], ast.Name) and assignment.targets[0].id in ['NODE_CONFIG', 'NODE_CLASS_MAPPINGS']:
if isinstance(assignment, ast.AnnAssign):
if isinstance(assignment.target, ast.Name) and assignment.target.id in ['NODE_CONFIG', 'NODE_CLASS_MAPPINGS']:
node_class_mappings = assignment.value
break
# Handle ast.Assign (e.g., NODE_CLASS_MAPPINGS = {...})
elif isinstance(assignment.targets[0], ast.Name) and assignment.targets[0].id in ['NODE_CONFIG', 'NODE_CLASS_MAPPINGS']:
node_class_mappings = assignment.value
break
else:
@@ -127,7 +90,7 @@ def extract_nodes(code_text):
for key in node_class_mappings.keys:
if key is not None and isinstance(key.value, str):
s.add(key.value.strip())
return s
else:
return set()
@@ -135,99 +98,6 @@ def extract_nodes(code_text):
return set()
def has_comfy_node_base(class_node):
"""Check if class inherits from io.ComfyNode or ComfyNode"""
for base in class_node.bases:
# Case 1: ComfyNode
if isinstance(base, ast.Name) and base.id == 'ComfyNode':
return True
# Case 2: io.ComfyNode
elif isinstance(base, ast.Attribute):
if base.attr == 'ComfyNode':
return True
return False
def extract_keyword_value(call_node, keyword):
"""
Extract string value of keyword argument
Schema(node_id="MyNode") -> "MyNode"
"""
for kw in call_node.keywords:
if kw.arg == keyword:
# ast.Constant (Python 3.8+)
if isinstance(kw.value, ast.Constant):
if isinstance(kw.value.value, str):
return kw.value.value
# ast.Str (Python 3.7-) - suppress deprecation warning
else:
with warnings.catch_warnings():
warnings.filterwarnings('ignore', category=DeprecationWarning)
if hasattr(ast, 'Str') and isinstance(kw.value, ast.Str):
return kw.value.s
return None
def is_schema_call(call_node):
"""Check if ast.Call is io.Schema() or Schema()"""
func = call_node.func
if isinstance(func, ast.Name) and func.id == 'Schema':
return True
elif isinstance(func, ast.Attribute) and func.attr == 'Schema':
return True
return False
def extract_node_id_from_schema(class_node):
"""
Extract node_id from define_schema() method
"""
for item in class_node.body:
if isinstance(item, ast.FunctionDef) and item.name == 'define_schema':
# Walk through function body
for stmt in ast.walk(item):
if isinstance(stmt, ast.Call):
# Check if it's Schema() call
if is_schema_call(stmt):
node_id = extract_keyword_value(stmt, 'node_id')
if node_id:
return node_id
return None
def extract_v3_nodes(code_text):
"""
Extract V3 node IDs using AST parsing
Returns: set of node_id strings
"""
global parse_cnt
try:
if parse_cnt % 100 == 0:
print(".", end="", flush=True)
parse_cnt += 1
with warnings.catch_warnings():
warnings.filterwarnings('ignore', category=SyntaxWarning)
warnings.filterwarnings('ignore', category=DeprecationWarning)
tree = ast.parse(code_text)
except (SyntaxError, UnicodeDecodeError):
return set()
nodes = set()
# Find io.ComfyNode subclasses
for node in ast.walk(tree):
if isinstance(node, ast.ClassDef):
# Check if inherits from ComfyNode
if has_comfy_node_base(node):
node_id = extract_node_id_from_schema(node)
if node_id:
nodes.add(node_id)
return nodes
# scan
def scan_in_file(filename, is_builtin=False):
global builtin_nodes
@@ -235,18 +105,13 @@ def scan_in_file(filename, is_builtin=False):
with open(filename, encoding='utf-8', errors='ignore') as file:
code = file.read()
# Support type annotations (e.g., NODE_CLASS_MAPPINGS: Type = {...}) and line continuations (\) pattern = r"_CLASS_MAPPINGS\s*=\s*{([^}]*)}"
pattern = r"_CLASS_MAPPINGS\s*(?::\s*\w+\s*)?=\s*(?:\\\s*)?{([^}]*)}"
regex = re.compile(pattern, re.MULTILINE | re.DOTALL)
nodes = set()
class_dict = {}
# V1 nodes detection
nodes |= extract_nodes(code)
# V3 nodes detection
nodes |= extract_v3_nodes(code)
code = re.sub(r'^#.*?$', '', code, flags=re.MULTILINE)
def extract_keys(pattern, code):
@@ -343,53 +208,6 @@ def get_nodes(target_dir):
return py_files, directories
def get_urls_from_list_file(list_file):
"""
Read URLs from list file for scan-only mode
Args:
list_file (str): Path to URL list file (one URL per line)
Returns:
list of tuples: [(url, "", None, None), ...]
Format: (url, title, preemptions, nodename_pattern)
- title: Empty string
- preemptions: None
- nodename_pattern: None
File format:
https://github.com/owner/repo1
https://github.com/owner/repo2
# Comments starting with # are ignored
Raises:
FileNotFoundError: If list_file does not exist
"""
if not os.path.exists(list_file):
raise FileNotFoundError(f"URL list file not found: {list_file}")
urls = []
with open(list_file, 'r', encoding='utf-8') as f:
for line_num, line in enumerate(f, 1):
line = line.strip()
# Skip empty lines and comments
if not line or line.startswith('#'):
continue
# Validate URL format (basic check)
if not (line.startswith('http://') or line.startswith('https://')):
print(f"WARNING: Line {line_num} is not a valid URL: {line}")
continue
# Add URL with empty metadata
# (url, title, preemptions, nodename_pattern)
urls.append((line, "", None, None))
print(f"Loaded {len(urls)} URLs from {list_file}")
return urls
def get_git_urls_from_json(json_file):
with open(json_file, encoding='utf-8') as file:
data = json.load(file)
@@ -437,52 +255,22 @@ def clone_or_pull_git_repository(git_url):
repo.git.submodule('update', '--init', '--recursive')
print(f"Pulling {repo_name}...")
except Exception as e:
print(f"Failed to pull '{repo_name}': {e}") print(f"Pulling {repo_name} failed: {e}")
else:
try:
Repo.clone_from(git_url, repo_dir, recursive=True)
print(f"Cloning {repo_name}...")
except Exception as e:
print(f"Failed to clone '{repo_name}': {e}") print(f"Cloning {repo_name} failed: {e}")
def update_custom_nodes(scan_only_mode=False, url_list_file=None): def update_custom_nodes():
"""
Update custom nodes by cloning/pulling repositories
Args:
scan_only_mode (bool): If True, use URL list file instead of custom-node-list.json
url_list_file (str): Path to URL list file (required if scan_only_mode=True)
Returns:
dict: node_info mapping {repo_name: (url, title, preemptions, node_pattern)}
"""
if not os.path.exists(temp_dir):
os.makedirs(temp_dir)
node_info = {}
# Select URL source based on mode git_url_titles_preemptions = get_git_urls_from_json('custom-node-list.json')
if scan_only_mode:
if not url_list_file:
raise ValueError("url_list_file is required in scan-only mode")
git_url_titles_preemptions = get_urls_from_list_file(url_list_file)
print("\n[Scan-Only Mode]")
print(f" - URL source: {url_list_file}")
print(" - GitHub stats: DISABLED")
print(f" - Git clone/pull: {'ENABLED' if not skip_update else 'DISABLED'}")
print(" - Metadata: EMPTY")
else:
if not os.path.exists('custom-node-list.json'):
raise FileNotFoundError("custom-node-list.json not found")
git_url_titles_preemptions = get_git_urls_from_json('custom-node-list.json')
print("\n[Standard Mode]")
print(" - URL source: custom-node-list.json")
print(f" - GitHub stats: {'ENABLED' if not skip_stat_update else 'DISABLED'}")
print(f" - Git clone/pull: {'ENABLED' if not skip_update else 'DISABLED'}")
print(" - Metadata: FULL")
def process_git_url_title(url, title, preemptions, node_pattern):
name = os.path.basename(url)
@@ -594,59 +382,46 @@ def update_custom_nodes(scan_only_mode=False, url_list_file=None):
if not skip_stat_update:
process_git_stats(git_url_titles_preemptions)
# Git clone/pull for all repositories
with concurrent.futures.ThreadPoolExecutor(11) as executor:
for url, title, preemptions, node_pattern in git_url_titles_preemptions:
executor.submit(process_git_url_title, url, title, preemptions, node_pattern)
# .py file download (skip in scan-only mode - only process git repos) py_url_titles_and_pattern = get_py_urls_from_json('custom-node-list.json')
if not scan_only_mode:
py_url_titles_and_pattern = get_py_urls_from_json('custom-node-list.json')
def download_and_store_info(url_title_preemptions_and_pattern):
url, title, preemptions, node_pattern = url_title_preemptions_and_pattern
name = os.path.basename(url)
if name.endswith(".py"):
node_info[name] = (url, title, preemptions, node_pattern)
try:
download_url(url, temp_dir)
except Exception:
print(f"[ERROR] Cannot download '{url}'")
with concurrent.futures.ThreadPoolExecutor(10) as executor:
executor.map(download_and_store_info, py_url_titles_and_pattern)
return node_info
def gen_json(node_info, scan_only_mode=False): def gen_json(node_info):
"""
Generate extension-node-map.json from scanned node information
Args:
node_info (dict): Repository metadata mapping
scan_only_mode (bool): If True, exclude metadata from output
"""
# scan from .py file
node_files, node_dirs = get_nodes(temp_dir)
comfyui_path = os.path.abspath(os.path.join(temp_dir, "ComfyUI"))
# Only reorder if ComfyUI exists in the list node_dirs.remove(comfyui_path)
if comfyui_path in node_dirs: node_dirs = [comfyui_path] + node_dirs
node_dirs.remove(comfyui_path)
node_dirs = [comfyui_path] + node_dirs
data = {}
for dirname in node_dirs:
py_files = get_py_file_paths(dirname)
metadata = {}
nodes = set()
for py in py_files:
nodes_in_file, metadata_in_file = scan_in_file(py, dirname == "ComfyUI")
nodes.update(nodes_in_file)
# Include metadata from .py files in both modes
metadata.update(metadata_in_file)
dirname = os.path.basename(dirname)
@@ -661,28 +436,17 @@ def gen_json(node_info, scan_only_mode=False):
if dirname in node_info:
git_url, title, preemptions, node_pattern = node_info[dirname]
# Conditionally add metadata based on mode metadata['title_aux'] = title
if not scan_only_mode:
# Standard mode: include all metadata
metadata['title_aux'] = title
if preemptions is not None:
metadata['preemptions'] = preemptions
if node_pattern is not None:
metadata['nodename_pattern'] = node_pattern
# Scan-only mode: metadata remains empty
data[git_url] = (nodes, metadata)
else:
# Scan-only mode: Repository not in node_info (expected behavior) print(f"WARN: {dirname} is removed from custom-node-list.json")
# Construct URL from dirname (author_repo format)
if '_' in dirname:
parts = dirname.split('_', 1)
git_url = f"https://github.com/{parts[0]}/{parts[1]}"
data[git_url] = (nodes, metadata)
else:
print(f"WARN: {dirname} is removed from custom-node-list.json")
for file in node_files:
nodes, metadata = scan_in_file(file)
@@ -695,16 +459,13 @@ def gen_json(node_info, scan_only_mode=False):
if file in node_info:
url, title, preemptions, node_pattern = node_info[file]
metadata['title_aux'] = title
# Conditionally add metadata based on mode if preemptions is not None:
if not scan_only_mode: metadata['preemptions'] = preemptions
metadata['title_aux'] = title
if node_pattern is not None:
if preemptions is not None: metadata['nodename_pattern'] = node_pattern
metadata['preemptions'] = preemptions
if node_pattern is not None:
metadata['nodename_pattern'] = node_pattern
data[url] = (nodes, metadata)
else:
@@ -716,10 +477,6 @@ def gen_json(node_info, scan_only_mode=False):
for extension in extensions:
node_list_json_path = os.path.join(temp_dir, extension, 'node_list.json')
if os.path.exists(node_list_json_path):
# Skip if extension not in node_info (scan-only mode with limited URLs)
if extension not in node_info:
continue
git_url, title, preemptions, node_pattern = node_info[extension]
with open(node_list_json_path, 'r', encoding='utf-8') as f:
@@ -739,26 +496,17 @@ def gen_json(node_info, scan_only_mode=False):
nodes_in_url, metadata_in_url = data[git_url]
nodes = set(nodes_in_url)
try: for x, desc in node_list_json.items():
for x, desc in node_list_json.items(): nodes.add(x.strip())
nodes.add(x.strip())
except Exception as e:
print(f"\nERROR: Invalid json format '{node_list_json_path}'")
print("------------------------------------------------------")
print(e)
print("------------------------------------------------------")
node_list_json = {}
# Conditionally add metadata based on mode metadata_in_url['title_aux'] = title
if not scan_only_mode:
metadata_in_url['title_aux'] = title
if preemptions is not None:
metadata_in_url['preemptions'] = preemptions metadata['preemptions'] = preemptions
if node_pattern is not None:
metadata_in_url['nodename_pattern'] = node_pattern
if node_pattern is not None:
metadata_in_url['nodename_pattern'] = node_pattern
nodes = list(nodes)
nodes.sort()
data[git_url] = (nodes, metadata_in_url)
@@ -768,53 +516,12 @@ def gen_json(node_info, scan_only_mode=False):
json.dump(data, file, indent=4, sort_keys=True)
if __name__ == "__main__": print("### ComfyUI Manager Node Scanner ###")
# Parse arguments
args = parse_arguments()
# Determine mode print("\n# Updating extensions\n")
scan_only_mode = args.scan_only is not None updated_node_info = update_custom_nodes()
url_list_file = args.scan_only if scan_only_mode else None
# Determine temp_dir print("\n# 'extension-node-map.json' file is generated.\n")
if args.temp_dir: gen_json(updated_node_info)
temp_dir = args.temp_dir
elif args.temp_dir_positional:
temp_dir = args.temp_dir_positional
else:
temp_dir = os.path.join(os.getcwd(), ".tmp")
if not os.path.exists(temp_dir): print("\nDONE.\n")
os.makedirs(temp_dir)
# Determine skip flags
skip_update = args.skip_update or args.skip_all
skip_stat_update = args.skip_stat_update or args.skip_all or scan_only_mode
if not skip_stat_update:
auth = Auth.Token(os.environ.get('GITHUB_TOKEN'))
g = Github(auth=auth)
else:
g = None
print("### ComfyUI Manager Node Scanner ###")
if scan_only_mode:
print(f"\n# [Scan-Only Mode] Processing URL list: {url_list_file}\n")
else:
print("\n# [Standard Mode] Updating extensions\n")
# Update/clone repositories and collect node info
updated_node_info = update_custom_nodes(scan_only_mode, url_list_file)
print("\n# Generating 'extension-node-map.json'...\n")
# Generate extension-node-map.json
gen_json(updated_node_info, scan_only_mode)
print("\n✅ DONE.\n")
if scan_only_mode:
print("Output: extension-node-map.json (node mappings only)")
else:
print("Output: extension-node-map.json (full metadata)")