Merge pull request #4 from cdb-boop/main

Major Rework
This commit is contained in:
Hayden
2024-03-06 09:43:49 +08:00
committed by GitHub
9 changed files with 4187 additions and 753 deletions

.gitignore

@@ -158,3 +158,5 @@ cython_debug/
# and can be added to the global gitignore or merged into this file. For a more nuclear
# option (not recommended) you can uncomment the following to ignore the entire idea folder.
#.idea/
ui_settings.yaml
server_settings.yaml

README.md

@@ -1,2 +1,104 @@
# comfyui-model-manager
Download, browse and delete models in ComfyUI.
<div>
<img src="demo-tab-download.png" alt="Model Manager Demo Screenshot" width="45%"/>
<img src="demo-tab-models.png" alt="Model Manager Demo Screenshot" width="45%"/>
</div>
## Features
### Download Tab
- View multiple models associated with a URL.
- Select a download directory.
- Optionally also download a model preview image (the default image alongside the model, an image from another URL, or a local upload).
- Civitai and HuggingFace API tokens are configurable in `server_settings.yaml`.
### Models Tab
- Search bar in models tab.
- Advanced keyword search using `"multiple words in quotes"` or a minus sign to `-exclude`.
- Search `/`subdirectories of model directories based on your file structure (for example, `/0/1.5/styles/clothing`).
- Add `/` at the start of the search bar to see auto-complete suggestions.
- Include models listed in ComfyUI's `extra_model_paths.yaml` or added in `ComfyUI/models`.
- Sort models by Date Created, Date Modified or Name.
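For illustration, queries using the search syntax above might look like this (paths and terms are examples):

```
"dark armor" -helmet       exact phrase match, excluding "helmet"
/0/1.5/styles/clothing     browse a specific subdirectory
```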
### Model Info View
- View model metadata, including training tags and bucket resolutions.
- Read, edit and save notes in a `.txt` file beside the model.
- Change or remove a model's preview image (add a different one using a url or local upload).
- Rename, move or **permanently** remove models.
### ComfyUI Node Graph
- Button to copy a model to the ComfyUI clipboard, or an embedding to the system clipboard. (Copying embeddings requires a secure HTTPS connection.)
- Button to add a model to the ComfyUI graph, or an embedding to selected nodes. (Useful for small screens/low resolutions.)
- Right, left, top and bottom toggleable sidebar modes.
- Drag a model onto the graph to add a new node.
- Drag a model onto an existing node to set the model field.
- Drag an embedding onto a text area to add it to the end.
### Settings Tab
- Settings are saved in `ui_settings.yaml`.
- Hide/Show 'add' and 'copy-to-clipboard' buttons.
- Text that is always appended to searches.
- Show/Hide the embedding file extension when adding an embedding.
- Colors follow ComfyUI's current theme.
## TODO
<details>
<summary>Expand TODO list</summary>
### Download Model
- Checkbox to optionally save description in `.txt` file for Civitai. (what about "About Model"?)
- Server setting to enable creating new folders (on download, on move).
### Download Model Info
- Auto-save notes? (requires debounce and save confirmation)
- Load workflow from preview (Should be easy to add with ComfyUI built-in clipboard.)
- Default weights on add/drag? (optional override on drag?)
- Optional (re)download `📥︎` model info from the internet and cache the text file locally. (requires checksum?)
- Radio buttons to swap between downloaded and server view.
### Sidebar
- Drag sidebar width/height dynamically.
### Accessibility
- Proper naming, labeling, alt text, etc. for html elements.
- Tool tips.
- Better error messages.
### Image preview
- Better placeholder preview. (with proper spelling!)
- Show preview images for videos.
- If ffmpeg or cv2 available, extract the first frame of the video and use as image preview.
### Settings
- Toggle exclusion of "hidden folders" with a `.` prefix.
- Sidebar default width/height.
- Toggle non-uniform preview sizes. (How to handle extreme aspect ratios?)
### Search filtering and sort
- Real-time search
  - Check the search code is optimized to avoid recalculation on every minor input change
- Filter directory dropdown
  - Filter directory content in the auto-suggest dropdown (not clear how this should be implemented)
- Filters dropdown
  - Stable Diffusion model version, if applicable (maybe a dropdown list of "Base Models" is more practical to implement?)
  - Favorites
- Swap between `and` and `or` keyword search? (currently `and`)
</details>


@@ -1,172 +1,841 @@
import os
import io
import pathlib
import shutil
from datetime import datetime
import sys
import copy
import importlib
import re
import base64
from aiohttp import web
import server
import urllib.parse
import urllib.request
import struct
import json
import requests
requests.packages.urllib3.disable_warnings()
import folder_paths
config_loader_path = os.path.join(os.path.dirname(__file__), 'config_loader.py')
config_loader_spec = importlib.util.spec_from_file_location('config_loader', config_loader_path)
config_loader = importlib.util.module_from_spec(config_loader_spec)
config_loader_spec.loader.exec_module(config_loader)
comfyui_model_uri = os.path.join(os.getcwd(), "models")
extension_uri = os.path.join(os.getcwd(), "custom_nodes" + os.path.sep + "ComfyUI-Model-Manager")
no_preview_image = os.path.join(extension_uri, "no-preview.png")
ui_settings_uri = os.path.join(extension_uri, "ui_settings.yaml")
server_settings_uri = os.path.join(extension_uri, "server_settings.yaml")
fallback_model_extensions = set([".bin", ".ckpt", ".onnx", ".pt", ".pth", ".safetensors"]) # TODO: magic values
image_extensions = (".apng", ".gif", ".jpeg", ".jpg", ".png", ".webp") # TODO: JavaScript does not know about this (x2 states)
#video_extensions = (".avi", ".mp4", ".webm") # TODO: Requires ffmpeg or cv2. Cache preview frame?
_folder_names_and_paths = None # dict[str, tuple[list[str], list[str]]]
def folder_paths_folder_names_and_paths(refresh = False):
global _folder_names_and_paths
if refresh or _folder_names_and_paths is None:
_folder_names_and_paths = {}
for item_name in os.listdir(comfyui_model_uri):
item_path = os.path.join(comfyui_model_uri, item_name)
if not os.path.isdir(item_path):
continue
if item_name == "configs":
continue
if item_name in folder_paths.folder_names_and_paths:
dir_paths, extensions = copy.deepcopy(folder_paths.folder_names_and_paths[item_name])
else:
dir_paths = [item_path]
extensions = copy.deepcopy(fallback_model_extensions)
_folder_names_and_paths[item_name] = (dir_paths, extensions)
return _folder_names_and_paths
def folder_paths_get_folder_paths(folder_name, refresh = False): # API function crashes querying unknown model folder
paths = folder_paths_folder_names_and_paths(refresh)
if folder_name in paths:
return paths[folder_name][0]
maybe_path = os.path.join(comfyui_model_uri, folder_name)
if os.path.exists(maybe_path):
return [maybe_path]
return []
def folder_paths_get_supported_pt_extensions(folder_name, refresh = False): # Missing API function
paths = folder_paths_folder_names_and_paths(refresh)
if folder_name in paths:
return paths[folder_name][1]
model_extensions = copy.deepcopy(fallback_model_extensions)
return model_extensions
def search_path_to_system_path(model_path):
sep = os.path.sep
model_path = os.path.normpath(model_path.replace("/", sep))
isep0 = 0 if model_path[0] == sep else -1
isep1 = model_path.find(sep, isep0 + 1)
if isep1 == -1 or isep1 == len(model_path):
return (None, None)
isep2 = model_path.find(sep, isep1 + 1)
if isep2 == -1 or isep2 - isep1 == 1:
isep2 = len(model_path)
model_path_type = model_path[isep0 + 1:isep1]
paths = folder_paths_get_folder_paths(model_path_type)
if len(paths) == 0:
return (None, None)
model_path_index = model_path[isep1 + 1:isep2]
try:
model_path_index = int(model_path_index)
except ValueError:
return (None, None)
if model_path_index < 0 or model_path_index >= len(paths):
return (None, None)
system_path = os.path.normpath(
paths[model_path_index] +
sep +
model_path[isep2:]
)
return (system_path, model_path_type)
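As a standalone sketch of the parsing above: a search path has the form `/<model type>/<base path index>/<relative path>`, where the index selects among a model type's configured directories. The helper and paths below are hypothetical, for illustration only:

```python
import os

def parse_search_path(model_path, base_paths):
    # e.g. "/checkpoints/0/styles/model.safetensors" ->
    #   type "checkpoints", base path index 0, subpath "styles/model.safetensors"
    parts = model_path.strip("/").split("/")
    if len(parts) < 2:
        return None
    model_type, index = parts[0], parts[1]
    try:
        index = int(index)
    except ValueError:
        return None
    paths = base_paths.get(model_type, [])
    if not 0 <= index < len(paths):
        return None
    return os.path.normpath(os.path.join(paths[index], *parts[2:]))

base_paths = {"checkpoints": ["/opt/ComfyUI/models/checkpoints"]}  # hypothetical
print(parse_search_path("/checkpoints/0/styles/model.safetensors", base_paths))
```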
def get_safetensor_header(path):
try:
with open(path, "rb") as f:
length_of_header = struct.unpack("<Q", f.read(8))[0]
header_bytes = f.read(length_of_header)
header_json = json.loads(header_bytes)
return header_json
except (OSError, struct.error, json.JSONDecodeError):
return {}
def end_swap_and_pop(x, i):
x[i], x[-1] = x[-1], x[i]
return x.pop(-1)
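`end_swap_and_pop` removes a list element in O(1) by swapping it with the last element first; the trade-off is that element order is not preserved. A minimal demonstration:

```python
def end_swap_and_pop(x, i):
    # O(1) unordered removal: swap element i to the end, then pop it
    x[i], x[-1] = x[-1], x[i]
    return x.pop(-1)

images = ["a.png", "b.png", "c.png"]
removed = end_swap_and_pop(images, 0)
print(removed)  # a.png
print(images)   # ['c.png', 'b.png'] -- order not preserved
```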
def model_type_to_dir_name(model_type):
if model_type == "checkpoint": return "checkpoints"
#elif model_type == "clip": return "clip"
#elif model_type == "clip_vision": return "clip_vision"
#elif model_type == "controlnet": return "controlnet"
elif model_type == "diffuser": return "diffusers"
elif model_type == "embedding": return "embeddings"
#elif model_type== "gligen": return "gligen"
elif model_type == "hypernetwork": return "hypernetworks"
elif model_type == "lora": return "loras"
#elif model_type == "style_models": return "style_models"
#elif model_type == "unet": return "unet"
elif model_type == "upscale_model": return "upscale_models"
#elif model_type == "vae": return "vae"
#elif model_type == "vae_approx": return "vae_approx"
else: return model_type
def ui_rules():
Rule = config_loader.Rule
return [
Rule("sidebar-default-height", 0.5, float, 0.0, 1.0),
Rule("sidebar-default-width", 0.5, float, 0.0, 1.0),
Rule("model-search-always-append", "", str),
Rule("model-persistent-search", True, bool),
Rule("model-show-label-extensions", False, bool),
Rule("model-preview-fallback-search-safetensors-thumbnail", False, bool),
Rule("model-show-add-button", True, bool),
Rule("model-show-copy-button", True, bool),
Rule("model-add-embedding-extension", False, bool),
Rule("model-add-drag-strict-on-field", False, bool),
Rule("model-add-offset", 25, int),
]
def server_rules():
Rule = config_loader.Rule
return [
#Rule("model_extension_download_whitelist", [".safetensors"], list),
Rule("civitai_api_key", "", str),
Rule("huggingface_api_key", "", str),
]
server_settings = config_loader.yaml_load(server_settings_uri, server_rules())
config_loader.yaml_save(server_settings_uri, server_rules(), server_settings)
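Given `server_rules()` above, the `server_settings.yaml` written on startup would contain roughly the following (values shown are empty placeholders; fill in your own API keys):

```yaml
# server_settings.yaml -- keys per server_rules(); values are placeholders
civitai_api_key: ""
huggingface_api_key: ""
```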
@server.PromptServer.instance.routes.get("/model-manager/settings/load")
async def load_ui_settings(request):
rules = ui_rules()
settings = config_loader.yaml_load(ui_settings_uri, rules)
return web.json_response({ "settings": settings })
@server.PromptServer.instance.routes.post("/model-manager/settings/save")
async def save_ui_settings(request):
body = await request.json()
settings = body.get("settings")
rules = ui_rules()
validated_settings = config_loader.validated(rules, settings)
success = config_loader.yaml_save(ui_settings_uri, rules, validated_settings)
return web.json_response({
"success": success,
"settings": validated_settings if success else "",
})
@server.PromptServer.instance.routes.get("/model-manager/preview/get")
async def get_model_preview(request):
uri = request.query.get("uri")
image_path = no_preview_image
image_extension = "png"
image_data = None
if uri != "no-preview":
sep = os.path.sep
uri = uri.replace("/" if sep == "\\" else "\\", sep)
path, _ = search_path_to_system_path(uri)
head, extension = os.path.splitext(path)
if os.path.exists(path):
image_extension = extension[1:]
image_path = path
elif os.path.exists(head) and os.path.splitext(head)[1] == ".safetensors":
image_extension = extension[1:]
header = get_safetensor_header(head)
metadata = header.get("__metadata__", None)
if metadata is not None:
thumbnail = metadata.get("modelspec.thumbnail", None)
if thumbnail is not None:
image_data = thumbnail.split(',')[1]
image_data = base64.b64decode(image_data)
if image_data is None:
with open(image_path, "rb") as file:
image_data = file.read()
return web.Response(body=image_data, content_type="image/" + image_extension)
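The `modelspec.thumbnail` value consulted above is a data URI (`data:image/<ext>;base64,<payload>`). A self-contained sketch of the split-and-decode step, using a made-up payload:

```python
import base64

# A data URI thumbnail: "data:image/<ext>;base64,<payload>"
thumbnail = "data:image/png;base64," + base64.b64encode(b"\x89PNG").decode("ascii")

header, payload = thumbnail.split(",", 1)
extension = header[header.find("/") + 1:header.find(";")]  # "png"
image_data = base64.b64decode(payload)

print(extension)   # png
print(image_data)  # b'\x89PNG'
```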
def download_model_preview(formdata):
path = formdata.get("path", None)
if type(path) is not str:
raise ValueError("Invalid path!")
path, _ = search_path_to_system_path(path)
path_without_extension, _ = os.path.splitext(path)
overwrite = formdata.get("overwrite", "true").lower()
overwrite = True if overwrite == "true" else False
image = formdata.get("image", None)
if type(image) is str:
image_path = download_image(image, path, overwrite)
_, image_extension = os.path.splitext(image_path)
else:
content_type = image.content_type
if not content_type.startswith("image/"):
raise ValueError("Invalid content type!")
image_extension = "." + content_type[len("image/"):]
if image_extension not in image_extensions:
raise ValueError("Invalid extension!")
image_path = path_without_extension + image_extension
if not overwrite and os.path.isfile(image_path):
raise ValueError("Image already exists!")
file: io.IOBase = image.file
image_data = file.read()
with open(image_path, "wb") as f:
f.write(image_data)
delete_same_name_files(path_without_extension, image_extensions, image_extension)
@server.PromptServer.instance.routes.post("/model-manager/preview/set")
async def set_model_preview(request):
formdata = await request.post()
try:
download_model_preview(formdata)
return web.json_response({ "success": True })
except ValueError as e:
print(e, file=sys.stderr, flush=True)
return web.json_response({ "success": False })
@server.PromptServer.instance.routes.post("/model-manager/preview/delete")
async def delete_model_preview(request):
model_path = request.query.get("path", None)
if model_path is None:
return web.json_response({ "success": False })
model_path = urllib.parse.unquote(model_path)
file, _ = search_path_to_system_path(model_path)
path_and_name, _ = os.path.splitext(file)
delete_same_name_files(path_and_name, image_extensions)
return web.json_response({ "success": True })
@server.PromptServer.instance.routes.get("/model-manager/models/list")
async def get_model_list(request):
use_safetensor_thumbnail = (
config_loader.yaml_load(ui_settings_uri, ui_rules())
.get("model-preview-fallback-search-safetensors-thumbnail", False)
)
model_types = os.listdir(comfyui_model_uri)
model_types.remove("configs")
model_types.sort()
models = {}
for model_type in model_types:
model_extensions = tuple(folder_paths_get_supported_pt_extensions(model_type))
file_infos = []
for base_path_index, model_base_path in enumerate(folder_paths_get_folder_paths(model_type)):
if not os.path.exists(model_base_path): # TODO: Bug in main code? ("ComfyUI\output\checkpoints", "ComfyUI\output\clip", "ComfyUI\models\t2i_adapter", "ComfyUI\output\vae")
continue
for cwd, _subdirs, files in os.walk(model_base_path):
dir_models = []
dir_images = []
for file in files:
if file.lower().endswith(model_extensions):
dir_models.append(file)
elif file.lower().endswith(image_extensions):
dir_images.append(file)
for model in dir_models:
model_name, model_ext = os.path.splitext(model)
image = None
image_modified = None
for iImage in range(len(dir_images)-1, -1, -1):
image_name, _ = os.path.splitext(dir_images[iImage])
if model_name == image_name:
image = end_swap_and_pop(dir_images, iImage)
img_abs_path = os.path.join(cwd, image)
image_modified = pathlib.Path(img_abs_path).stat().st_mtime_ns
break
abs_path = os.path.join(cwd, model)
stats = pathlib.Path(abs_path).stat()
model_modified = stats.st_mtime_ns
model_created = stats.st_ctime_ns
if use_safetensor_thumbnail and image is None and model_ext == ".safetensors":
# try to fallback on safetensor embedded thumbnail
header = get_safetensor_header(abs_path)
metadata = header.get("__metadata__", None)
if metadata is not None:
thumbnail = metadata.get("modelspec.thumbnail", None)
if thumbnail is not None:
i0 = thumbnail.find("/") + 1
i1 = thumbnail.find(";")
image_ext = "." + thumbnail[i0:i1]
if image_ext in image_extensions:
image = model + image_ext
image_modified = model_modified
rel_path = "" if cwd == model_base_path else os.path.relpath(cwd, model_base_path)
info = (model, image, base_path_index, rel_path, model_modified, model_created, image_modified)
file_infos.append(info)
file_infos.sort(key=lambda tup: tup[4], reverse=True) # TODO: remove sort; sorted on client
model_items = []
for model, image, base_path_index, rel_path, model_modified, model_created, image_modified in file_infos:
item = {
"name": model,
"path": "/" + os.path.join(model_type, str(base_path_index), rel_path, model).replace(os.path.sep, "/"), # relative logical path
#"systemPath": os.path.join(rel_path, model), # relative system path (less information than "search path")
"dateModified": model_modified,
"dateCreated": model_created,
#"dateLastUsed": "", # TODO: track server-side, send increment client-side
#"countUsed": 0, # TODO: track server-side, send increment client-side
}
if image is not None:
raw_post = os.path.join(model_type, str(base_path_index), rel_path, image)
item["preview"] = {
"path": urllib.parse.quote_plus(raw_post),
"dateModified": urllib.parse.quote_plus(str(image_modified)),
}
model_items.append(item)
models[model_type] = model_items
return web.json_response(models)
def linear_directory_hierarchy(refresh = False):
model_paths = folder_paths_folder_names_and_paths(refresh)
dir_list = []
dir_list.append({ "name": "", "childIndex": 1, "childCount": len(model_paths) })
for model_dir_name, (model_dirs, _) in model_paths.items():
dir_list.append({ "name": model_dir_name, "childIndex": None, "childCount": len(model_dirs) })
for model_dir_index, (_, (model_dirs, extension_whitelist)) in enumerate(model_paths.items()):
model_dir_child_index = len(dir_list)
dir_list[model_dir_index + 1]["childIndex"] = model_dir_child_index
for dir_path_index, dir_path in enumerate(model_dirs):
dir_list.append({ "name": str(dir_path_index), "childIndex": None, "childCount": None })
for dir_path_index, dir_path in enumerate(model_dirs):
if not os.path.exists(dir_path) or os.path.isfile(dir_path):
continue
#dir_list.append({ "name": str(dir_path_index), "childIndex": None, "childCount": 0 })
dir_stack = [(dir_path, model_dir_child_index + dir_path_index)]
while len(dir_stack) > 0: # DEPTH-FIRST
dir_path, dir_index = dir_stack.pop()
dir_items = os.listdir(dir_path)
dir_items = sorted(dir_items, key=str.casefold)
dir_child_count = 0
# TODO: sort content of directory: alphabetically
# TODO: sort content of directory: files first
subdirs = []
for item_name in dir_items: # BREADTH-FIRST
item_path = os.path.join(dir_path, item_name)
if os.path.isdir(item_path):
# dir
subdir_index = len(dir_list) # this must be done BEFORE `dir_list.append`
subdirs.append((item_path, subdir_index))
dir_list.append({ "name": item_name, "childIndex": None, "childCount": 0 })
dir_child_count += 1
else:
# file
_, file_extension = os.path.splitext(item_name)
if extension_whitelist is None or file_extension in extension_whitelist:
dir_list.append({ "name": item_name })
dir_child_count += 1
if dir_child_count > 0:
dir_list[dir_index]["childIndex"] = len(dir_list) - dir_child_count
dir_list[dir_index]["childCount"] = dir_child_count
subdirs.reverse()
for dir_path, subdir_index in subdirs:
dir_stack.append((dir_path, subdir_index))
return dir_list
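The function above flattens the directory tree into a list in which each directory entry points at a contiguous run of children via `childIndex`/`childCount`. A toy sketch of how a client might walk such a list (the entries are illustrative):

```python
# Directory entries carry "childIndex"/"childCount"; plain files have only "name".
dir_list = [
    {"name": "", "childIndex": 1, "childCount": 1},   # root
    {"name": "checkpoints", "childIndex": 2, "childCount": 1},
    {"name": "0", "childIndex": 3, "childCount": 1},  # base path index
    {"name": "model.safetensors"},                    # leaf file
]

def flatten(index):
    # Depth-first traversal of the flat child-range encoding
    entry = dir_list[index]
    names = [entry["name"]]
    start = entry.get("childIndex") or 0
    count = entry.get("childCount") or 0
    for child in range(start, start + count):
        names += flatten(child)
    return names

print(flatten(0))  # ['', 'checkpoints', '0', 'model.safetensors']
```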
@server.PromptServer.instance.routes.get("/model-manager/models/directory-list")
async def get_directory_list(request):
#body = await request.json()
dir_list = linear_directory_hierarchy(True)
#json.dump(dir_list, sys.stdout, indent=4)
return web.json_response(dir_list)
def download_file(url, filename, overwrite):
if not overwrite and os.path.isfile(filename):
raise ValueError("File already exists!")
filename_temp = filename + ".download"
def_headers = {
"User-Agent": "Mozilla/5.0 (iPad; CPU OS 12_2 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Mobile/15E148",
}
if url.startswith("https://civitai.com/"):
api_key = server_settings["civitai_api_key"]
if (api_key != ""):
def_headers["Authorization"] = f"Bearer {api_key}"
url += "&" if "?" in url else "?" # not the most robust solution
url += f"token={api_key}" # TODO: Authorization didn't work in the header
elif url.startswith("https://huggingface.co/"):
api_key = server_settings["huggingface_api_key"]
if api_key != "":
def_headers["Authorization"] = f"Bearer {api_key}"
rh = requests.get(url=url, stream=True, verify=False, headers=def_headers, proxies=None, allow_redirects=False)
if not rh.ok:
raise ValueError(
"Unable to download! Request header status code: " +
str(rh.status_code)
)
downloaded_size = 0
if rh.status_code == 200 and os.path.exists(filename_temp):
downloaded_size = os.path.getsize(filename_temp)
headers = {"Range": "bytes=%d-" % downloaded_size}
headers["User-Agent"] = def_headers["User-Agent"]
r = requests.get(url=url, stream=True, verify=False, headers=headers, proxies=None, allow_redirects=False)
if rh.status_code == 307 and r.status_code == 307:
# Civitai redirect
redirect_url = r.content.decode("utf-8")
if not redirect_url.startswith("http"):
# Civitai requires login (NSFW or user-required)
# TODO: inform user WHY download failed
raise ValueError("Unable to download from Civitai! Redirect url: " + str(redirect_url))
download_file(redirect_url, filename, overwrite)
return
if rh.status_code == 302 and r.status_code == 302:
# HuggingFace redirect
redirect_url = r.content.decode("utf-8")
redirect_url_index = redirect_url.find("http")
if redirect_url_index == -1:
raise ValueError("Unable to download from HuggingFace! Redirect url: " + str(redirect_url))
download_file(redirect_url[redirect_url_index:], filename, overwrite)
return
elif rh.status_code == 200 and r.status_code == 206:
# Civitai download link
pass
total_size = int(rh.headers.get("Content-Length", 0)) # TODO: pass in total size earlier
mode = "wb" if overwrite else "ab"
with open(filename_temp, mode) as f:
for chunk in r.iter_content(chunk_size=1024):
if chunk is not None:
downloaded_size += len(chunk)
f.write(chunk)
f.flush()
if total_size != 0:
fraction = 1 if downloaded_size == total_size else downloaded_size / total_size
progress = int(50 * fraction)
sys.stdout.reconfigure(encoding="utf-8")
sys.stdout.write(
"\r[%s%s] %d%%"
% (
"-" * progress,
" " * (50 - progress),
100 * fraction,
)
)
sys.stdout.flush()
print()
if overwrite and os.path.isfile(filename):
os.remove(filename)
os.rename(filename_temp, filename)
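The resume logic above keys off the partial `.download` file: if one exists, its size becomes the starting offset for an HTTP `Range` request. A minimal standalone sketch (the helper name is mine, not the extension's):

```python
import os

def resume_range_header(partial_path):
    # Request only the bytes we don't already have on disk
    downloaded = os.path.getsize(partial_path) if os.path.exists(partial_path) else 0
    return {"Range": "bytes=%d-" % downloaded}, downloaded

headers, downloaded = resume_range_header("model.safetensors.download")
print(headers)  # {'Range': 'bytes=0-'} when no partial file exists
```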
def download_image(image_uri, model_path, overwrite):
_, extension = os.path.splitext(image_uri) # TODO: doesn't work for https://civitai.com/images/...
if not extension in image_extensions:
raise ValueError("Invalid image type!")
path_without_extension, _ = os.path.splitext(model_path)
file = path_without_extension + extension
download_file(image_uri, file, overwrite)
return file
@server.PromptServer.instance.routes.get("/model-manager/model/info")
async def get_model_info(request):
model_path = request.query.get("path", None)
if model_path is None:
return web.json_response({ "success": False })
model_path = urllib.parse.unquote(model_path)
file, _ = search_path_to_system_path(model_path)
if file is None:
return web.json_response({})
info = {}
path, name = os.path.split(model_path)
info["File Name"] = name
info["File Directory"] = path
info["File Size"] = str(os.path.getsize(file)) + " bytes"
stats = pathlib.Path(file).stat()
date_format = "%Y-%m-%d %H:%M:%S"
date_modified = datetime.fromtimestamp(stats.st_mtime).strftime(date_format)
info["Date Modified"] = date_modified
info["Date Created"] = datetime.fromtimestamp(stats.st_ctime).strftime(date_format)
file_name, _ = os.path.splitext(file)
for extension in image_extensions:
maybe_image = file_name + extension
if os.path.isfile(maybe_image):
image_path, _ = os.path.splitext(model_path)
image_modified = pathlib.Path(maybe_image).stat().st_mtime_ns
info["Preview"] = {
"path": urllib.parse.quote_plus(image_path + extension),
"dateModified": urllib.parse.quote_plus(str(image_modified)),
}
break
header = get_safetensor_header(file)
metadata = header.get("__metadata__", None)
#json.dump(metadata, sys.stdout, indent=4)
#print()
if metadata is not None and info.get("Preview", None) is None:
thumbnail = metadata.get("modelspec.thumbnail")
if thumbnail is not None:
i0 = thumbnail.find("/") + 1
i1 = thumbnail.find(";", i0)
thumbnail_extension = "." + thumbnail[i0:i1]
if thumbnail_extension in image_extensions:
info["Preview"] = {
"path": request.query["path"] + thumbnail_extension,
"dateModified": date_modified,
}
if metadata is not None:
train_end = metadata.get("modelspec.date", "").replace("T", " ")
train_start = metadata.get("ss_training_started_at", "")
if train_start != "":
try:
train_start = float(train_start)
train_start = datetime.fromtimestamp(train_start).strftime(date_format)
except:
train_start = ""
info["Date Trained"] = (
train_start +
(" ... " if train_start != "" and train_end != "" else "") +
train_end
)
info["Base Training Model"] = metadata.get("ss_sd_model_name", "")
info["Base Model"] = metadata.get("ss_base_model_version", "")
info["Architecture"] = metadata.get("modelspec.architecture", "") # "stable-diffusion-xl-v1-base"
clip_skip = metadata.get("ss_clip_skip", "")
if clip_skip == "None":
clip_skip = ""
info["Clip Skip"] = clip_skip # default 1 (disable clip skip)
info["Model Sampling Type"] = metadata.get("modelspec.prediction_type", "") # "epsilon"
# it is unclear what these are
#info["Hash SHA256"] = metadata.get("modelspec.hash_sha256", "")
#info["SSHS Model Hash"] = metadata.get("sshs_model_hash", "")
#info["SSHS Legacy Hash"] = metadata.get("sshs_legacy_hash", "")
#info["New SD Model Hash"] = metadata.get("ss_new_sd_model_hash", "")
#info["Output Name"] = metadata.get("ss_output_name", "")
#info["Title"] = metadata.get("modelspec.title", "")
info["Author"] = metadata.get("modelspec.author", "")
info["License"] = metadata.get("modelspec.license", "")
if metadata is not None:
training_comment = metadata.get("ss_training_comment", "")
info["Description"] = (
metadata.get("modelspec.description", "") +
"\n\n" +
metadata.get("modelspec.usage_hint", "") +
"\n\n" +
(training_comment if training_comment != "None" else "")
).strip()
txt_file = file_name + ".txt"
notes = ""
if os.path.isfile(txt_file):
with open(txt_file, 'r', encoding="utf-8") as f:
notes = f.read()
info["Notes"] = notes
if metadata is not None:
img_buckets = metadata.get("ss_bucket_info", "{}")
if type(img_buckets) is str:
img_buckets = json.loads(img_buckets)
resolutions = {}
if img_buckets is not None:
buckets = img_buckets.get("buckets", {})
for resolution in buckets.values():
dim = resolution["resolution"]
x, y = dim[0], dim[1]
count = resolution["count"]
resolutions[str(x) + "x" + str(y)] = count
resolutions = list(resolutions.items())
resolutions.sort(key=lambda x: x[1], reverse=True)
info["Bucket Resolutions"] = resolutions
dir_tags = metadata.get("ss_tag_frequency", "{}")
if type(dir_tags) is str:
dir_tags = json.loads(dir_tags)
tags = {}
for train_tags in dir_tags.values():
for tag, count in train_tags.items():
tags[tag] = tags.get(tag, 0) + count
tags = list(tags.items())
tags.sort(key=lambda x: x[1], reverse=True)
info["Tags"] = tags
return web.json_response(info)
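The `ss_tag_frequency` handling above can be sketched standalone: the metadata maps each training folder to per-tag counts, which are summed across folders and sorted by frequency (the sample metadata below is made up):

```python
import json

# "ss_tag_frequency" maps training folders to {tag: count}
metadata = {"ss_tag_frequency": json.dumps({
    "10_style": {"1girl": 40, "armor": 12},
    "5_style": {"1girl": 15, "sword": 7},
})}

dir_tags = json.loads(metadata.get("ss_tag_frequency", "{}"))
tags = {}
for train_tags in dir_tags.values():
    for tag, count in train_tags.items():
        tags[tag] = tags.get(tag, 0) + count

tags = sorted(tags.items(), key=lambda x: x[1], reverse=True)
print(tags)  # [('1girl', 55), ('armor', 12), ('sword', 7)]
```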
@server.PromptServer.instance.routes.get("/model-manager/system-separator")
async def get_system_separator(request):
return web.json_response(os.path.sep)
@server.PromptServer.instance.routes.post("/model-manager/model/download")
async def download_model(request):
formdata = await request.post()
result = {
"success": False,
"invalid": None,
}
overwrite = formdata.get("overwrite", "false").lower()
overwrite = True if overwrite == "true" else False
model_path = formdata.get("path", "/0")
directory, model_type = search_path_to_system_path(model_path)
if directory is None:
result["invalid"] = "path"
return web.json_response(result)
download_uri = formdata.get("download")
if download_uri is None:
result["invalid"] = "download"
return web.json_response(result)
name = formdata.get("name")
_, model_extension = os.path.splitext(name)
if not model_extension in folder_paths_get_supported_pt_extensions(model_type):
result["invalid"] = "name"
return web.json_response(result)
file_name = os.path.join(directory, name)
try:
download_file(download_uri, file_name, overwrite)
except Exception as e:
print(e, file=sys.stderr, flush=True)
result["invalid"] = "model"
return web.json_response(result)
image = formdata.get("image")
if image is not None and image != "":
try:
download_model_preview({
"path": model_path + os.sep + name,
"image": image,
"overwrite": formdata.get("overwrite"),
})
except Exception as e:
print(e, file=sys.stderr, flush=True)
result["invalid"] = "preview"
result["success"] = True
return web.json_response(result)
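For reference, the handler above reads the form fields `path`, `download`, `name`, `image`, and `overwrite`. A minimal client-side sketch might look like this (the host and port are assumptions, 8188 being ComfyUI's default, and the model URL and search path are placeholders):

```python
from urllib.parse import urlencode
import urllib.request

# Form fields consumed by the /model-manager/model/download handler.
# The download URL and search path below are illustrative placeholders.
payload = urlencode({
    "path": "/0/checkpoints",  # search path, resolved server-side
    "download": "https://example.com/model.safetensors",
    "name": "model.safetensors",  # must use a supported model extension
    "image": "",  # optional preview image source
    "overwrite": "false",
}).encode()

req = urllib.request.Request(
    "http://127.0.0.1:8188/model-manager/model/download",  # assumed host/port
    data=payload,
)
# urllib.request.urlopen(req)  # uncomment against a running ComfyUI instance
```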
@server.PromptServer.instance.routes.post("/model-manager/model/move")
async def move_model(request):
body = await request.json()
old_file = body.get("oldFile", None)
if old_file is None:
return web.json_response({ "success": False })
old_file, old_model_type = search_path_to_system_path(old_file)
if old_file is None:
return web.json_response({ "success": False })
if not os.path.isfile(old_file):
return web.json_response({ "success": False })
_, model_extension = os.path.splitext(old_file)
if model_extension not in folder_paths_get_supported_pt_extensions(old_model_type):
# cannot move arbitrary files
return web.json_response({ "success": False })
new_file = body.get("newFile", None)
if new_file is None or new_file == "":
# cannot have empty name
return web.json_response({ "success": False })
new_file, new_model_type = search_path_to_system_path(new_file)
if new_file is None:
return web.json_response({ "success": False })
if not new_file.endswith(model_extension):
return web.json_response({ "success": False })
if os.path.isfile(new_file):
# cannot overwrite existing file
return web.json_response({ "success": False })
if model_extension not in folder_paths_get_supported_pt_extensions(new_model_type):
return web.json_response({ "success": False })
new_file_dir, _ = os.path.split(new_file)
if not os.path.isdir(new_file_dir):
return web.json_response({ "success": False })
if old_file == new_file:
return web.json_response({ "success": False })
try:
shutil.move(old_file, new_file)
except (OSError, shutil.Error) as e:
print(e, file=sys.stderr, flush=True)
return web.json_response({ "success": False })
old_file_without_extension, _ = os.path.splitext(old_file)
new_file_without_extension, _ = os.path.splitext(new_file)
# TODO: this could overwrite existing files...
for extension in image_extensions + (".txt",):
old_file = old_file_without_extension + extension
if os.path.isfile(old_file):
try:
shutil.move(old_file, new_file_without_extension + extension)
except (OSError, shutil.Error) as e:
print(e, file=sys.stderr, flush=True)
return web.json_response({ "success": True })
def delete_same_name_files(path_without_extension, extensions, keep_extension=None):
for extension in extensions:
if extension == keep_extension: continue
image_file = path_without_extension + extension
if os.path.isfile(image_file):
os.remove(image_file)
@server.PromptServer.instance.routes.post("/model-manager/model/delete")
async def delete_model(request):
result = { "success": False }
model_path = request.query.get("path", None)
if model_path is None:
return web.json_response(result)
model_path = urllib.parse.unquote(model_path)
file, model_type = search_path_to_system_path(model_path)
if file is None:
return web.json_response(result)
_, extension = os.path.splitext(file)
if extension not in folder_paths_get_supported_pt_extensions(model_type):
# cannot delete arbitrary files
return web.json_response(result)
if os.path.isfile(file):
os.remove(file)
result["success"] = True
path_and_name, _ = os.path.splitext(file)
delete_same_name_files(path_and_name, image_extensions)
txt_file = path_and_name + ".txt"
if os.path.isfile(txt_file):
os.remove(txt_file)
return web.json_response(result)
@server.PromptServer.instance.routes.post("/model-manager/notes/save")
async def set_notes(request):
body = await request.json()
text = body.get("notes", None)
if type(text) is not str:
return web.json_response({ "success": False })
model_path = body.get("path", None)
if type(model_path) is not str:
return web.json_response({ "success": False })
model_path, _ = search_path_to_system_path(model_path)
if model_path is None:
return web.json_response({ "success": False })
file_path_without_extension, _ = os.path.splitext(model_path)
filename = os.path.normpath(file_path_without_extension + ".txt")
if text.isspace() or text == "":
if os.path.exists(filename):
os.remove(filename)
else:
try:
with open(filename, "w", encoding="utf-8") as f:
f.write(text)
except OSError as e:
print(e, file=sys.stderr, flush=True)
return web.json_response({ "success": False })
return web.json_response({ "success": True })
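The notes endpoint, by contrast, reads a JSON body with `path` and `notes` keys, and an empty or whitespace-only `notes` string deletes the `.txt` file. A hedged client-side sketch (host and port assumed as elsewhere, search path a placeholder):

```python
import json

# JSON body consumed by the /model-manager/notes/save handler.
# The search path below is an illustrative placeholder.
body = json.dumps({
    "path": "/0/checkpoints/model.safetensors",
    "notes": "trigger words: example, test",
})
# urllib.request.urlopen(urllib.request.Request(
#     "http://127.0.0.1:8188/model-manager/notes/save",  # assumed host/port
#     data=body.encode(),
#     headers={"Content-Type": "application/json"},
# ))
```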
WEB_DIRECTORY = "web"

config_loader.py Normal file

@@ -0,0 +1,65 @@
import yaml
from dataclasses import dataclass
@dataclass
class Rule:
key: any
value_default: any
value_type: type
value_min: int | float | None
value_max: int | float | None
def __init__(self, key, value_default, value_type: type, value_min: int | float | None = None, value_max: int | float | None = None):
self.key = key
self.value_default = value_default
self.value_type = value_type
self.value_min = value_min
self.value_max = value_max
def _get_valid_value(data: dict, r: Rule):
if r.value_type != type(r.value_default):
raise Exception("'value_type' does not match type of 'value_default'!")
value = data.get(r.key)
if value is None:
value = r.value_default
else:
try:
value = r.value_type(value)
except (TypeError, ValueError):
value = r.value_default
value_is_numeric = r.value_type == int or r.value_type == float
if value_is_numeric and r.value_min is not None:
if r.value_type != type(r.value_min):
raise Exception("Type of 'value_type' does not match the type of 'value_min'!")
value = max(r.value_min, value)
if value_is_numeric and r.value_max is not None:
if r.value_type != type(r.value_max):
raise Exception("Type of 'value_type' does not match the type of 'value_max'!")
value = min(r.value_max, value)
return value
def validated(rules: list[Rule], data: dict | None = None):
data = data or {}
valid = {}
for r in rules:
valid[r.key] = _get_valid_value(data, r)
return valid
def yaml_load(path, rules: list[Rule]):
data = {}
try:
with open(path, 'r') as file:
data = yaml.safe_load(file) or {}
except (OSError, yaml.YAMLError):
pass
return validated(rules, data)
def yaml_save(path, rules: list[Rule], data: dict) -> bool:
data = validated(rules, data)
try:
with open(path, 'w') as file:
yaml.dump(data, file)
return True
except (OSError, yaml.YAMLError):
return False
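As a sketch of how these rules behave, the snippet below re-declares a stripped-down version of the same validation (condensed from `config_loader.py` purely for a standalone illustration; the setting names are hypothetical): values are coerced to the rule's type, fall back to the default on failure, and numeric values are clamped into the `[min, max]` range.

```python
# Condensed standalone version of config_loader's rule validation,
# for illustration only: coerce to the rule's type, fall back to the
# default on failure, then clamp numeric values into [min, max].
def validate_value(data, key, default, typ, lo=None, hi=None):
    value = data.get(key)
    if value is None:
        value = default
    else:
        try:
            value = typ(value)
        except (TypeError, ValueError):
            value = default
    if typ in (int, float):
        if lo is not None:
            value = max(lo, value)
        if hi is not None:
            value = min(hi, value)
    return value

# Hypothetical settings dict; key names are not from the real yaml files.
settings = {"image_size": "9000", "api_key": 123}
print(validate_value(settings, "image_size", 512, int, lo=64, hi=4096))  # → 4096
print(validate_value(settings, "api_key", "", str))                      # → 123
print(validate_value(settings, "missing", 0.5, float, lo=0.0, hi=1.0))   # → 0.5
```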

demo-tab-download.png (new binary file, 210 KiB; not shown)

demo-tab-models.png (new binary file, 947 KiB; not shown)


@@ -1,124 +0,0 @@
[
{
"type": "checkpoint",
"base": "sd-xl",
"name": "sd_xl_base_1.0.safetensors",
"page": "https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0",
"download": "https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/resolve/main/sd_xl_base_1.0.safetensors",
"description": "Stable Diffusion XL base model"
},
{
"type": "checkpoint",
"base": "sd-xl",
"name": "sd_xl_refiner_1.0.safetensors",
"page": "https://huggingface.co/stabilityai/stable-diffusion-xl-refiner-1.0",
"download": "https://huggingface.co/stabilityai/stable-diffusion-xl-refiner-1.0/resolve/main/sd_xl_refiner_1.0.safetensors",
"description": "Stable Diffusion XL refiner model"
},
{
"type": "vae",
"base": "sd-xl-vae",
"name": "sdxl_vae.safetensors",
"page": "https://huggingface.co/stabilityai/sdxl-vae",
"download": "https://huggingface.co/stabilityai/sdxl-vae/resolve/main/sdxl_vae.safetensors",
"description": "Stable Diffusion XL VAE"
},
{
"type": "checkpoint",
"base": "sd-1.5",
"name": "anything_v5.safetensors",
"page": "https://huggingface.co/stablediffusionapi/anything-v5",
"download": "https://huggingface.co/stablediffusionapi/anything-v5/resolve/main/unet/diffusion_pytorch_model.safetensors"
},
{
"type": "vae",
"name": "anything_v5.vae.safetensors",
"download": "https://huggingface.co/stablediffusionapi/anything-v5/resolve/main/vae/diffusion_pytorch_model.safetensors"
},
{
"type": "checkpoint",
"name": "Counterfeit-V3.0.safetensors",
"download": "https://huggingface.co/gsdf/Counterfeit-V3.0/resolve/main/Counterfeit-V3.0.safetensors"
},
{
"type": "embeddings",
"name": "EasyNegative.safetensors",
"download": "https://huggingface.co/datasets/gsdf/EasyNegative/resolve/main/EasyNegative.safetensors"
},
{
"type": "checkpoint",
"name": "CounterfeitXL_%CE%B2.safetensors",
"download": "https://huggingface.co/gsdf/CounterfeitXL/resolve/main/CounterfeitXL_%CE%B2.safetensors"
},
{
"type": "checkpoint",
"name": "AOM3A1B_orangemixs.safetensors",
"download": "https://huggingface.co/WarriorMama777/OrangeMixs/resolve/main/Models/AbyssOrangeMix3/AOM3A1B_orangemixs.safetensors"
},
{
"type": "vae",
"name": "orangemix.vae.pt",
"download": "https://huggingface.co/WarriorMama777/OrangeMixs/resolve/main/VAEs/orangemix.vae.pt"
},
{
"type": "checkpoint",
"name": "Deliberate.safetensors",
"download": "https://huggingface.co/XpucT/Deliberate/resolve/main/Deliberate.safetensors"
},
{
"type": "checkpoint",
"name": "Realistic_Vision_V5.1.safetensors",
"download": "https://huggingface.co/SG161222/Realistic_Vision_V5.1_noVAE/resolve/main/Realistic_Vision_V5.1.safetensors"
},
{
"type": "vae",
"name": "sd_vae.safetensors",
"download": "https://huggingface.co/stabilityai/sd-vae-ft-mse-original/resolve/main/vae-ft-mse-840000-ema-pruned.safetensors"
},
{
"type": "checkpoint",
"name": "LOFI_V3.safetensors",
"download": "https://huggingface.co/lenML/LOFI-v3/resolve/main/LOFI_V3.safetensors"
},
{
"type": "checkpoint",
"name": "NeverendingDream_noVae.safetensors",
"download": "https://huggingface.co/Lykon/NeverEnding-Dream/resolve/main/NeverendingDream_noVae.safetensors"
},
{
"type": "vae",
"name": "sd_vae.safetensors",
"download": "https://huggingface.co/stabilityai/sd-vae-ft-mse-original/resolve/main/vae-ft-mse-840000-ema-pruned.safetensors"
},
{
"type": "checkpoint",
"name": "ProtoGen_X5.8.safetensors",
"download": "https://huggingface.co/darkstorm2150/Protogen_x5.8_Official_Release/resolve/main/ProtoGen_X5.8.safetensors"
},
{
"type": "checkpoint",
"name": "GuoFeng3.4.safetensors",
"download": "https://huggingface.co/xiaolxl/GuoFeng3/resolve/main/GuoFeng3.4.safetensors"
},
{
"type": "lora",
"name": "Xiaorenshu_v15.safetensors",
"download": "https://huggingface.co/datamonet/xiaorenshu/resolve/main/Xiaorenshu_v15.safetensors"
},
{
"type": "lora",
"name": "Colorwater_v4.safetensors",
"download": "https://huggingface.co/niitokikei/Colorwater/resolve/main/Colorwater_v4.safetensors"
},
{
"type": "lora",
"name": "huyefo-v1.0.safetensors",
"download": "https://civitai.com/api/download/models/104426"
},
{
"type": "upscale_models",
"name": "RealESRGAN_x2plus.pth",
"download": "https://huggingface.co/Rainy-hh/Real-ESRGAN/resolve/main/RealESRGAN_x2plus.pth"
}
]


@@ -1,98 +1,3 @@
/* comfy table */
.comfy-table {
width: 100%;
table-layout: fixed;
border-collapse: collapse;
}
.comfy-table .table-head tr {
background-color: var(--tr-even-bg-color);
}
/* comfy tabs */
.comfy-tabs {
color: #fff;
}
.comfy-tabs-head {
display: flex;
gap: 8px;
flex-wrap: wrap;
border-bottom: 1px solid #6a6a6a;
}
.comfy-tabs-head .head-item {
padding: 8px 12px;
border: 1px solid #6a6a6a;
border-bottom: none;
border-top-left-radius: 8px;
border-top-right-radius: 8px;
cursor: pointer;
margin-bottom: -1px;
}
.comfy-tabs-head .head-item.active {
background-color: #2e2e2e;
cursor: default;
position: relative;
z-index: 1;
}
.comfy-tabs-body {
background-color: #2e2e2e;
border: 1px solid #6a6a6a;
border-top: none;
padding: 16px 0px;
}
/* comfy grid */
.comfy-grid {
display: flex;
flex-wrap: wrap;
gap: 16px;
}
.comfy-grid .item {
position: relative;
width: 230px;
height: 345px;
text-align: center;
overflow: hidden;
}
.comfy-grid .item img {
width: 100%;
height: 100%;
object-fit: contain;
}
.comfy-grid .item p {
position: absolute;
bottom: 0px;
background-color: #000a;
width: 100%;
margin: 0;
padding: 9px 0px;
}
/* comfy radio group */
.comfy-radio-group {
display: flex;
gap: 8px;
flex-wrap: wrap;
}
.comfy-radio {
display: flex;
gap: 4px;
padding: 4px 8px;
color: var(--input-text);
border: 1px solid var(--border-color);
border-radius: 8px;
background-color: var(--comfy-input-bg);
font-size: 18px;
}
/* model manager */
.model-manager {
box-sizing: border-box;
@@ -101,7 +6,7 @@
max-width: unset;
max-height: unset;
padding: 10px;
color: var(--bg-color);
z-index: 2000;
}
@@ -110,18 +15,55 @@
gap: 16px;
}
/* model manager common */
.model-manager.sidebar-left {
width: 50%;
left: 25%;
}
.model-manager.sidebar-top {
height: 50%;
top: 25%;
}
.model-manager.sidebar-bottom {
height: 50%;
top: 75%;
}
.model-manager.sidebar-right {
width: 50%;
left: 75%;
}
/* common */
.model-manager h1 {
min-width: 0;
}
.model-manager textarea {
width: 100%;
font-size: 1.2em;
border: solid 2px var(--border-color);
border-radius: 8px;
resize: vertical;
}
.model-manager input[type="file"] {
width: 100%;
}
.model-manager button,
.model-manager select,
.model-manager input {
padding: 4px 8px;
margin: 0;
border: 2px solid var(--border-color);
}
.model-manager button:disabled,
.model-manager select:disabled,
.model-manager input:disabled {
background-color: var(--comfy-menu-bg);
filter: brightness(1.2);
cursor: not-allowed;
}
@@ -136,27 +78,118 @@
}
.model-manager ::-webkit-scrollbar {
width: 16px;
}
.model-manager ::-webkit-scrollbar-track {
background-color: var(--comfy-input-bg);
border-right: 1px solid var(--border-color);
border-bottom: 1px solid var(--border-color);
}
.model-manager ::-webkit-scrollbar-thumb {
background-color: var(--fg-color);
border-radius: 3px;
}
/* model manager row */
.model-manager .search-text-area::-webkit-input-placeholder {
font-style: italic;
}
.model-manager .search-text-area:-moz-placeholder {
font-style: italic;
}
.model-manager .search-text-area::-moz-placeholder {
font-style: italic;
}
.model-manager .search-text-area:-ms-input-placeholder {
font-style: italic;
}
.icon-button {
height: 40px;
width: 40px;
line-height: 1.15;
}
.model-manager .row {
display: flex;
min-width: 0;
gap: 8px;
}
/* comfy tabs */
.model-manager .tab-header {
display: flex;
padding: 8px 0;
flex-direction: column;
background-color: var(--bg-color);
}
.model-manager .tab-header-flex-block {
width: 100%;
min-width: 0;
}
.model-manager .button-success {
color: green;
border-color: green;
}
.model-manager .button-failure {
color: darkred;
border-color: darkred;
}
.model-manager .no-select {
-webkit-user-select: none;
-ms-user-select: none;
user-select: none;
}
/* sidebar buttons */
.model-manager .sidebar-buttons {
overflow: hidden;
padding-right: 10px;
color: var(--input-text);
}
/* tabs */
.model-manager .comfy-tabs {
color: var(--fg-color);
}
.model-manager .comfy-tabs-head {
display: flex;
gap: 8px;
flex-wrap: wrap;
border-bottom: 2px solid var(--border-color);
}
.model-manager .comfy-tabs-head .head-item {
padding: 8px 12px;
border: 2px solid var(--border-color);
border-bottom: none;
border-top-left-radius: 8px;
border-top-right-radius: 8px;
background-color: var(--comfy-menu-bg);
cursor: pointer;
margin-bottom: 0px;
z-index: 1;
}
.model-manager .comfy-tabs-head .head-item.active {
background-color: var(--comfy-input-bg);
cursor: default;
position: relative;
z-index: 1;
}
.model-manager .comfy-tabs-body {
background-color: var(--bg-color);
border: 2px solid var(--border-color);
border-top: none;
padding: 16px 0px;
}
.model-manager .comfy-tabs {
flex: 1;
display: flex;
@@ -171,36 +204,302 @@
.model-manager .comfy-tabs-body > div {
position: relative;
max-height: 100%;
height: 100%;
width: auto;
padding: 0 16px;
overflow-x: auto;
}
/* model info view */
.model-manager .model-info-view {
background-color: var(--bg-color);
border: 2px solid var(--border-color);
box-sizing: border-box;
display: flex;
flex-direction: column;
height: 100%;
margin-top: 40px;
overflow-wrap: break-word;
overflow-y: auto;
padding: 20px;
}
.model-manager .model-info-container {
background-color: var(--bg-color);
border-radius: 16px;
color: var(--fg-color);
width: auto;
}
/* download tab */
.model-manager [data-name="Download"] summary {
padding: 16px;
word-wrap: break-word;
}
.model-manager [data-name="Download"] .download-settings {
flex: 1;
}
.model-manager .download-model-infos {
padding: 16px 0;
}
/* models tab */
.model-manager [data-name="Models"] .row {
position: sticky;
z-index: 1;
top: 0;
}
/* preview image */
.model-manager .item {
position: relative;
width: 230px;
height: 345px;
text-align: center;
overflow: hidden;
border-radius: 8px;
}
.model-manager .item img {
width: 100%;
height: 100%;
object-fit: cover;
}
.model-manager .model-preview-button-left,
.model-manager .model-preview-button-right {
position: absolute;
padding: 1px 6px;
top: 0;
bottom: 0;
margin: auto;
border-radius: 20px;
}
.model-manager .model-preview-button-right {
right: 4px;
}
.model-manager .model-preview-button-left {
left: 4px;
}
.model-manager .item .model-preview-overlay {
position: absolute;
top: 0;
left: 0;
height: 100%;
width: 100%;
background-color: rgba(0, 0, 0, 0);
}
/* grid */
.model-manager .comfy-grid {
display: flex;
flex-wrap: wrap;
gap: 16px;
}
.model-manager .comfy-grid .model-label {
background-color: #000a;
width: 100%;
height: 2.2rem;
position: absolute;
bottom: 0;
text-align: center;
line-height: 2.2rem;
}
.model-manager .comfy-grid .model-label > p {
width: calc(100% - 2rem);
overflow-x: scroll;
white-space: nowrap;
display: inline-block;
vertical-align: middle;
margin: 0;
}
.model-manager .comfy-grid .model-label {
scrollbar-width: none;
-ms-overflow-style: none;
}
.model-manager .comfy-grid .model-label ::-webkit-scrollbar {
width: 0;
height: 0;
}
.model-manager .comfy-grid .model-preview-top-right,
.model-manager .comfy-grid .model-preview-top-left {
position: absolute;
display: flex;
flex-direction: column;
gap: 8px;
top: 8px;
}
.model-manager .comfy-grid .model-preview-top-right {
right: 8px;
}
.model-manager .comfy-grid .model-preview-top-left {
left: 8px;
}
.model-manager .comfy-grid .model-button {
opacity: 0.65;
}
.model-manager .comfy-grid .model-button:hover {
opacity: 1;
}
.model-manager .comfy-grid .model-label {
user-select: text;
}
/* radio */
.model-manager .comfy-radio-group {
display: flex;
gap: 8px;
flex-wrap: wrap;
min-width: 0;
}
.model-manager .comfy-radio {
display: flex;
gap: 4px;
padding: 4px 16px;
color: var(--input-text);
border: 2px solid var(--border-color);
border-radius: 16px;
background-color: var(--comfy-input-bg);
font-size: 18px;
}
.model-manager .comfy-radio:has(> input[type="radio"]:checked) {
border-color: var(--border-color);
background-color: var(--comfy-menu-bg);
}
.model-manager .comfy-radio input[type="radio"]:checked + label {
color: var(--fg-color);
}
.model-manager .radio-input {
opacity: 0;
position: absolute;
}
/* model preview select */
.model-preview-select-radio-container {
min-width: 0;
flex: 1;
}
.model-manager .model-preview-select-radio-container img {
position: relative;
width: 230px;
height: 345px;
text-align: center;
overflow: hidden;
border-radius: 8px;
object-fit: cover;
}
/* topbar */
.model-manager .topbar-buttons {
position: absolute;
display: flex;
top: 10px;
right: 10px;
}
.model-manager .topbar-buttons button {
width: 33px;
height: 33px;
padding: 1px 6px;
}
/* search dropdown */
.model-manager .search-models {
display: flex;
flex-direction: row;
flex: 1;
min-width: 0;
}
.model-manager .model-select-dropdown {
min-width: 0;
overflow: auto;
}
.model-manager .search-text-area,
.model-manager .plain-text-area,
.model-manager .model-select-dropdown {
flex: 1;
min-height: 36px;
padding-block: 0;
min-width: 36px;
}
.model-manager .model-select-dropdown {
min-height: 40px;
}
.model-manager .search-dropdown {
position: absolute;
background-color: var(--bg-color);
border: 2px var(--border-color) solid;
color: var(--fg-color);
max-height: 30vh;
overflow: auto;
border-radius: 10px;
z-index: 1;
}
.model-manager .search-dropdown:empty {
display: none;
}
.model-manager .search-dropdown > p {
margin: 0;
padding: 0.85em 20px;
min-width: 0;
}
.model-manager .search-dropdown > p {
-ms-overflow-style: none; /* Internet Explorer 10+ */
scrollbar-width: none; /* Firefox */
}
.model-manager .search-dropdown > p::-webkit-scrollbar {
display: none; /* Safari and Chrome */
}
.model-manager .search-dropdown > p.search-dropdown-selected {
background-color: var(--border-color);
}
/* model manager settings */
.model-manager .model-manager-settings > div,
.model-manager .model-manager-settings > label {
display: flex;
flex-direction: row;
align-items: center;
gap: 8px;
margin: 16px 0;
}
.model-manager .model-manager-settings button {
height: 40px;
width: 120px;
}
.model-manager .model-manager-settings input[type="number"] {
width: 50px;
}
.search-settings-text {
width: 100%;
}

File diff suppressed because it is too large.