
ComfyUI Pony workflow example

ComfyUI Examples. FLUX.1 [schnell] for fast local development. These models excel in prompt adherence, visual quality, and output diversity.
Pinto: About SUPIR (Scaling-UP Image Restoration), a groundbreaking image restoration method that harnesses generative prior and the power of model scaling up. Then just click Queue Prompt and training starts! Click Manager > Update All.
Basic Vid2Vid 1 ControlNet - This is the basic Vid2Vid workflow updated with the new nodes. Leveraging multi-modal techniques and advanced generative prior, SUPIR marks a significant advance in intelligent and realistic image restoration.
Jan 20, 2024 · Inpainting in ComfyUI has not been as easy and intuitive as in AUTOMATIC1111. Discover, share and run thousands of ComfyUI workflows on OpenArt.
Apr 27, 2024 · The workflow has Upscale resolution from 1024 x 1024 and metadata compatible with the Civitai website (upload) after saving the image.
Created by: homer_26: Pony Diffusion model to create images with flexible prompts and numerous character possibilities, adding a 2.5D LoRA of details.
In the following example the positive text prompt is zeroed out in order for the final output to follow the input image more closely. The resources for inpainting workflows are scarce and riddled with errors. This guide is about how to set up ComfyUI on your Windows computer to run Flux.
Here is an example of 3 characters, each with its own pose, outfit, features, and expression: Left: woman wearing full armor, ginger hair, braided hair, hands on hips, serious. Middle: girl, princess dress, blonde hair, tiara, jewels, sitting on a throne, blushing. For demanding projects that require top-notch results, this workflow is your go-to option.
ComfyUI IPAdapter Plus; ComfyUI InstantID (Native); ComfyUI Essentials; ComfyUI FaceAnalysis; not to mention the documentation and video tutorials.
Animation workflow (A great starting point for using AnimateDiff) View Now.
Step 4: Update ComfyUI. A booru-API-powered prompt generator for AUTOMATIC1111's Stable Diffusion Web UI and ComfyUI with a flexible tag filtering system and customizable prompt templates. ComfyUI: The Ultimate Guide to Stable Diffusion's Powerful and Modular GUI. Make sure to reload the ComfyUI page after the update — clicking the restart button is not enough.
👏 Welcome to my ComfyUI workflow collection! To give everyone something useful I have put together a rough platform; if you have feedback or suggestions, or want me to implement a feature, open an issue or email me at theboylzh@163.com. Note: this workflow uses LCM.
Img2Img works by loading an image like this example image, converting it to latent space with the VAE and then sampling on it with a denoise lower than 1.0. Here is a link to download pruned versions of the supported GLIGEN model files. https://civitai.com/models/257749/pony-diffusion-v6-xl
Nov 25, 2023 · Upscaling (How to upscale your images with ComfyUI) View Now. A workaround in ComfyUI is to have another img2img pass on the layer diffuse result to simulate the effect of the stop-at parameter. See full list on github.com.
Jul 9, 2024 · This workflow contains custom nodes from various sources and can all be found using ComfyUI Manager. The only important thing is that for optimal performance the resolution should be set to 1024x1024 or other resolutions with the same amount of pixels but a different aspect ratio. ComfyUI has native support for Flux starting August 2024.
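Picking those "same amount of pixels, different aspect ratio" resolutions by hand is fiddly. Here is a minimal sketch of a helper that stays near the 1024x1024 pixel budget for an arbitrary aspect ratio; the rounding to multiples of 64 is an assumption that matches common SDXL/Pony practice, adjust it to taste.

```python
import math


def resolution_for_aspect(aspect_w: int, aspect_h: int,
                          target_pixels: int = 1024 * 1024,
                          multiple: int = 64) -> tuple[int, int]:
    """Return a (width, height) pair close to target_pixels for the given aspect ratio."""
    def snap(value: float) -> int:
        # Round to the nearest multiple so the latent dimensions stay friendly.
        return max(multiple, int(round(value / multiple)) * multiple)

    ratio = aspect_w / aspect_h
    height = math.sqrt(target_pixels / ratio)
    return snap(height * ratio), snap(height)


if __name__ == "__main__":
    for ar in [(1, 1), (4, 3), (3, 4), (16, 9), (9, 16)]:
        w, h = resolution_for_aspect(*ar)
        print(f"{ar[0]}:{ar[1]} -> {w}x{h} ({w * h} pixels)")
```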
The following images can be loaded in ComfyUI to get the full workflow. All the images in this repo contain metadata, which means they can be loaded into ComfyUI with the Load button (or dragged onto the window) to get the full workflow that was used to create the image. In a base+refiner workflow, though, upscaling might not look straightforward.
Efficiency Nodes for ComfyUI Version 2.0+. DocVQA allows you to ask questions about the content of document images, and the model will provide answers based on the visual and textual information in the document. See the following workflow for an example. See this next workflow for how to mix multiple images together: you can find the input image for the above workflows on the unCLIP example page.
Nov 13, 2023 · A Windows computer with an NVIDIA graphics card with at least 12 GB of VRAM. Hence, we'll delve into the most straightforward text-to-image processes in ComfyUI. As a pivotal catalyst within SUPIR, model scaling dramatically enhances its results.
This is hard/risky to implement directly in ComfyUI as it requires manually loading a model that has every change except the layer diffusion change applied. I found it very helpful.
A sample workflow for running CosXL models, such as my RobMix CosXL checkpoint. In the Load Checkpoint node, select the checkpoint file you just downloaded. The recommended strength is between 0.6 and 1.0.
You can load these images in ComfyUI to get the full workflow. All LoRA flavours — LyCORIS, LoHa, LoKr, LoCon, etc. — are used this way.
CosXL Edit Sample Workflow. ComfyUI ControlNet aux: plugin with preprocessors for ControlNet, so you can generate images directly from ComfyUI. ComfyUI AnyNode: Any Node you ask for — AnyNodeLocal. ComfyUI also has a mask editor that can be accessed by right-clicking an image in the LoadImage node and choosing "Open in MaskEditor". Basic txt2img with hires fix + face detailer.
Created by: CgTopTips: FLUX is an advanced image generation model, available in three variants: FLUX.1 [pro] for top-tier performance, FLUX.1 [dev] for efficient non-commercial use, and FLUX.1 [schnell] for fast local development.
The SDXL base checkpoint can be used like any regular checkpoint in ComfyUI. Simple Run and Go With Pony. Click Queue Prompt and watch your image generate. By examining key examples, you'll gradually grasp the process of crafting your unique workflows. Mixing ControlNets.
ComfyUI Manager: plugin for ComfyUI that helps detect and install missing plugins. This is the most well-organised and easy-to-use ComfyUI workflow I've come across so far showing the difference between the preliminary, base and refiner setups. ComfyUI workflow with all nodes connected. Achieves high FPS using frame interpolation (with RIFE). https://civitai.com
Aug 1, 2024 · For use cases please check out Example Workflows. Any Node workflow examples. Load the .json workflow file from the C:\Downloads\ComfyUI\workflows folder. A general-purpose ComfyUI workflow for common use cases. safetensors (https://huggingface.co/openai/clip-vit-large)
Mar 23, 2024 · Hey, this is my first ComfyUI workflow — hope you enjoy it! I've never shared a flow before, so if it has problems please let me know. This is the input image that will be used in this example. Here is an example using a first pass with AnythingV3 with the ControlNet and a second pass without the ControlNet with AOM3A3 (Abyss Orange Mix 3) and using their VAE. Be it for character, clothing, or more.
5 days ago · Img2Img Examples. You can load this image in ComfyUI to get the full workflow. Flux.1 ComfyUI install guidance, workflow and example.
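Several of the steps above boil down to loading a workflow .json and pressing Queue Prompt. If you would rather trigger that from a script, a minimal sketch against ComfyUI's local HTTP API looks like this; it assumes a default install listening on 127.0.0.1:8188 and a workflow exported with "Save (API Format)", and the file name is only a placeholder.

```python
import json
import urllib.request

COMFY_URL = "http://127.0.0.1:8188"  # default local ComfyUI address (assumption)


def queue_workflow(api_workflow_path: str) -> dict:
    """Queue a workflow that was exported from ComfyUI with 'Save (API Format)'."""
    with open(api_workflow_path, "r", encoding="utf-8") as f:
        workflow = json.load(f)
    payload = json.dumps({"prompt": workflow}).encode("utf-8")
    request = urllib.request.Request(
        f"{COMFY_URL}/prompt",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())  # contains the queued prompt id


if __name__ == "__main__":
    print(queue_workflow("pony_txt2img_api.json"))  # hypothetical exported workflow
```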
Feb 7, 2024 · My ComfyUI workflow that was used to create all example images with my model RedOlives: https://civitai.com/models/ The same concepts we explored so far are valid for SDXL. Or click the "code" button in the top right, then click "Download ZIP".
Because the context window compared to Hotshot-XL is longer, you end up using more VRAM. The resolution it allows is also higher, so a TXT2VID workflow ends up using 11.5 GB of VRAM if you use 1024x1024 resolution.
Hypernetworks are patches applied on the main MODEL, so to use them put them in the models/hypernetworks directory and use the Hypernetwork Loader node. Discover, share and run thousands of ComfyUI workflows on OpenArt.
Both of my images have the flow embedded in the image, so you can simply drag and drop the image into ComfyUI and it should open up the flow, but I've also included the json in a zip file. Some commonly used blocks are loading a checkpoint model, entering a prompt, specifying a sampler, etc. I built a free website where you can share and discover thousands of ComfyUI workflows — https://comfyworkflows.com/
This example merges 3 different checkpoints using simple block merging, where the input, middle and output blocks of the UNet can each have a different ratio. Click the Load Default button to use the default workflow. Create animations with AnimateDiff.
In this guide, I'll be covering a basic inpainting workflow. Here is an example workflow that can be dragged or loaded into ComfyUI. I made this using the following workflow, with two images as a starting point from the ComfyUI IPAdapter node repository.
Jan 8, 2024 · The optimal approach for mastering ComfyUI is by exploring practical examples. Topics: ai, style-transfer, text-to-image, image-to-image, inpainting, inpaint, text2image, image2image, outpaint, img2img, outpainting, stable-diffusion, prompt-generator, controlnet, comfyui, comfyui-workflow, ipadapter.
Aug 26, 2024 · ComfyUI FLUX LoRA Training is a powerful tool for adapting FLUX models to your specific needs. By fine-tuning with your custom datasets, you can create highly personalized FLUX LoRA models.
Download Clip-L model. Flux Schnell is a distilled 4-step model. You can find the Flux Schnell diffusion model weights here; this file should go in your ComfyUI/models/unet/ folder. If you choose a checkpoint that is not based on PonyXL you will most likely not get the same results as I did!
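As noted above, every image ComfyUI saves carries the workflow that produced it in its metadata, which is why drag-and-drop loading works. If you want to inspect that metadata outside the UI, here is a small sketch using Pillow; it assumes the usual "workflow" and "prompt" PNG text-chunk keys that ComfyUI writes, so verify the key names against your own files.

```python
import json
from PIL import Image  # pip install pillow


def read_embedded_workflow(png_path: str) -> dict | None:
    """Return the workflow JSON embedded in a ComfyUI-generated PNG, if present."""
    image = Image.open(png_path)
    # PNG text chunks are exposed via .text (and mirrored in .info) by Pillow.
    chunks = getattr(image, "text", None) or image.info
    for key in ("workflow", "prompt"):  # key names ComfyUI is commonly observed to use
        if key in chunks:
            return json.loads(chunks[key])
    return None


if __name__ == "__main__":
    workflow = read_embedded_workflow("ComfyUI_00001_.png")  # hypothetical output file
    print("nodes found:", len(workflow) if workflow else 0)
```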
ComfyUI stands out as an AI drawing tool with a versatile node-based and flow-style custom workflow. The lower the value, the more it will follow the concept. It offers convenient functionalities such as text-to-image generation. It is a simple workflow for Flux AI on ComfyUI.
In ComfyUI the saved checkpoints contain the full workflow used to generate them, so they can be loaded in the UI just like images to get the full workflow that was used to create them.
Jul 11, 2024 · With ComfyUI, sometimes the filename of a LoRA causes problems in the positive prompt. Merging 2 images together.
AP Workflow is a large ComfyUI workflow, and moving across its functions can be time-consuming. Put them in the models/upscale_models folder, then use the UpscaleModelLoader node to load them and the ImageUpscaleWithModel node to use them.
Each ControlNet/T2I adapter needs the image that is passed to it to be in a specific format, like depth maps, canny maps and so on, depending on the specific model, if you want good results. Pressing the letter or number associated with each Bookmark node will take you to the corresponding section of the workflow.
CosXL Sample Workflow. Think Diffusion's Stable Diffusion ComfyUI Top 10 Cool Workflows.
FLUX.1 [dev] for efficient non-commercial use, FLUX.1 [pro] for top-tier performance. https://civitai.com/models/628682/flux-1-checkpoint
For example, if it's in C:/database/5_images, data_path MUST be C:/database.
Jan 4, 2024 · ComfyUI, Stable Diffusion Web UI, Fooocus — a summary of how to use SDXL in ComfyUI.
Text to Image: Build Your First Workflow.
Sep 3, 2024 · Here is an example of a more complex 2-pass workflow. This image is first generated with the WD1.5 beta 3 illusion model, latent upscaled, and then a second pass is done with cardosAnime_v10.
Created by: Bocian: This workflow aims at creating images of 2+ characters with separate prompts for each thanks to the latent couple method, while solving the issues stemming from it. If you are not interested in having an upscaled image completely faithful to the original, you can create a draft with the base model in just a bunch of steps, then upscale the latent and apply a second pass with the base and a third pass with the refiner.
ComfyUI breaks down a workflow into rearrangeable elements so you can easily make your own. Flux Schnell is a distilled 4-step model. If you choose a checkpoint that is not based on PonyXL you will most likely not get the same results as I did!
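To make the "blocks connected into a graph" idea concrete, here is what a minimal text-to-image graph can look like in the API format that ComfyUI exports with "Save (API Format)". The node class names and input names follow a recent ComfyUI build and the checkpoint filename is only a placeholder, so compare against your own export if anything differs. A dictionary like this is exactly what the /prompt endpoint shown earlier accepts.

```python
# Node ids are arbitrary strings; links are [source_node_id, output_index].
workflow = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "ponyDiffusionV6XL.safetensors"}},  # placeholder name
    "2": {"class_type": "CLIPTextEncode",
          "inputs": {"clip": ["1", 1],
                     "text": "score_9, score_8_up, score_7_up, feral pony, show accurate"}},
    "3": {"class_type": "CLIPTextEncode",
          "inputs": {"clip": ["1", 1], "text": "low quality, blurry"}},
    "4": {"class_type": "EmptyLatentImage",
          "inputs": {"width": 1024, "height": 1024, "batch_size": 1}},
    "5": {"class_type": "KSampler",
          "inputs": {"model": ["1", 0], "positive": ["2", 0], "negative": ["3", 0],
                     "latent_image": ["4", 0], "seed": 640271075062843, "steps": 25,
                     "cfg": 7.0, "sampler_name": "euler", "scheduler": "normal",
                     "denoise": 1.0}},
    "6": {"class_type": "VAEDecode", "inputs": {"samples": ["5", 0], "vae": ["1", 2]}},
    "7": {"class_type": "SaveImage", "inputs": {"images": ["6", 0], "filename_prefix": "pony"}},
}
```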
Use the prompt: "score_8_up, score_7_up, score_6_up, feral pony, show accurate" and paste this in the upper window, the positive prompt.
Run any ComfyUI workflow with ZERO setup (free & open source) — try now.
Dec 4, 2023 · It might seem daunting at first, but you actually don't need to fully learn how these are connected. https://civitai.com/models/283810 The simplicity of this workflow is the point.
Efficient Loader node in ComfyUI. KSampler (Efficient) node in ComfyUI. The image below is the empty workflow with Efficient Loader and KSampler (Efficient) added and connected to each other.
Offers various art styles. In this example we will be using this image. By connecting various blocks, referred to as nodes, you can construct an image generation workflow. Specializes in adorable anime characters.
In the example below we use a different VAE to encode an image to latent space, and decode the result of the KSampler. Download it and place it in your input folder.
ControlNet Depth ComfyUI workflow (Use ControlNet Depth to enhance your SDXL images) View Now. A sample workflow for running CosXL Edit models, such as my RobMix CosXL Edit checkpoint. Img2Img ComfyUI workflow. SD3 ControlNets by InstantX are also supported.
Sytan SDXL ComfyUI: very nice workflow showing how to connect the base model with the refiner and include an upscaler.
ComfyUI - Ultimate Starter Workflow + Tutorial. Heya, I've been working on this workflow for like a month and it's finally ready, so I also made a tutorial on how to use it.
Jan 15, 2024 · In this workflow-building series, we'll learn added customizations in digestible chunks, synchronous with our workflow's development, and one update at a time.
A comprehensive collection of ComfyUI knowledge, including ComfyUI installation and usage, ComfyUI Examples, Custom Nodes, Workflows, and ComfyUI Q&A. These are examples demonstrating how to use LoRAs. ComfyUI breaks down the workflow into rearrangeable elements, allowing you to effortlessly create your custom workflow.
Lora Examples. If you see any red nodes, I recommend using ComfyUI Manager's "install missing custom nodes" function. I then recommend enabling Extra Options -> Auto Queue in the interface.
Welcome to the unofficial ComfyUI subreddit. These are examples demonstrating how to do img2img. The only way to keep the code open and free is by sponsoring its development.
Created by: Ashish Tripathi: Central Room Group: Start here. Lora Integration; Model Configuration and FreeU V2 Implementation; Image Processing and Resemblance Enhancement; Latent Space Manipulation with Noise Injection; Image Storage and Naming; Optional Detailer; Super-Resolution (SD Upscale); HDR Effect and Finalization. Performance: Processor (CPU): Intel Core i3-13500.
Here is how you use it in ComfyUI (you can drag this into ComfyUI to get the workflow): noise_augmentation controls how closely the model will try to follow the image concept.
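If you build Pony prompts programmatically, a tiny helper keeps the score_* quality tags consistent. This is a minimal sketch: the tag list mirrors the ones quoted above, and the extra tag in the usage line is just taken from the same example prompt.

```python
PONY_QUALITY_TAGS = [
    "score_9", "score_8_up", "score_7_up",
    "score_6_up", "score_5_up", "score_4_up",
]


def build_pony_prompt(subject: str, extra_tags: list[str] | None = None) -> str:
    """Prepend the usual Pony Diffusion V6 XL quality tags to a plain description."""
    tags = PONY_QUALITY_TAGS + (extra_tags or [])
    return ", ".join(tags + [subject])


if __name__ == "__main__":
    print(build_pony_prompt("feral pony", extra_tags=["show accurate"]))
```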
Make sure ComfyUI itself and ComfyUI_IPAdapter_plus are updated to the latest version. If you get "name 'round_up' is not defined", see THUDM/ChatGLM2-6B#272 (comment) and update cpm_kernels with pip install cpm_kernels or pip install -U cpm_kernels.
Aug 19, 2024 · Put it in ComfyUI > models > vae. This repo contains examples of what is achievable with ComfyUI. All the images in this repo contain metadata, which means they can be loaded into ComfyUI with the Load button (or dragged onto the window) to get the full workflow that was used to create the image. SDXL works with other Stable Diffusion interfaces such as Automatic1111, but the workflow for it isn't as straightforward.
Please note that in the example workflow using the example video we are loading every other frame of a 24-frame video and then turning that into an 8 fps animation (meaning things will be slowed compared to the original video). Workflow Explanations.
Update ComfyUI if you haven't already. Please share your tips, tricks, and workflows for using this software to create your AI art.
Apr 21, 2024 · Inpainting with ComfyUI isn't as straightforward as other applications. Here is an example for how to use the Canny ControlNet. Here is an example for how to use the Inpaint ControlNet; the example input image can be found here. That's all for the preparation — now we can start!
You can then load or drag the following image in ComfyUI to get the workflow: Flux Schnell. SDXL Examples. For example, errors may occur when generating hands, and serious distortions can occur when generating full-body characters.
3) This one goes into: ComfyUI_windows_portable\ComfyUI\models\loras. Then I created two more sets of nodes, from Load Images to the IPAdapters, and adjusted the masks so that they would be part of a specific section in the whole image.
All the images on this page contain metadata, which means they can be loaded into ComfyUI with the Load button (or dragged onto the window) to get the full workflow that was used to create the image.
Created by: C: FLUX is an advanced image generation model. Inpainting with a standard Stable Diffusion model.
This is a ComfyUI workflow with HandRefiner — easy and convenient hand correction, or hand fix. The SDXL base checkpoint can be used like any regular checkpoint in ComfyUI. Some people there just post a lot of very similar workflows just to show off the picture, which makes it a bit annoying when you want to find new interesting ways to do things in ComfyUI. Hopefully this will be useful to you.
Share, discover, and run thousands of ComfyUI workflows.
strength: 1.0, seed: 640271075062843. Here is a workflow for using it: Example.
May 19, 2024 · I will use PonyDiffusionV6XL in the example. Feb 7, 2024 · Why Use ComfyUI for SDXL.
This article covered how to use SDXL in ComfyUI. When SDXL was released, ComfyUI drew attention for supporting it faster than Stable Diffusion Web UI. Using ComfyUI Online.
You can then load or drag the following image in ComfyUI to get the workflow: Examples of ComfyUI workflows. CosXL models have better dynamic range and finer control than SDXL models. Save this image then load it or drag it on ComfyUI to get the workflow.
To speed up your navigation, a number of bright yellow Bookmark nodes have been placed in strategic locations. Since SDXL requires you to use both a base and a refiner model, you'll have to switch models during the image generation process.
All Workflows / Simple Run and Go With Pony. Update x-flux-comfy with git pull or reinstall it.
Those include inconsistent perspective, jarring blending between areas, and inability to generate characters interacting with each other in any way. Description. Then press "Queue Prompt" once and start writing your prompt. I will make only
Apr 10, 2024 · These tips and prompting styles will work with any model that directly uses Pony Diffusion V6 XL, like AutismMix Pony for example.
5 days ago · SDXL Examples. This tutorial provides a step-by-step walkthrough of the entire process, from organizing your training data to setting up the workflow, executing the training, and leveraging the resulting models. Official workflow example.
This workflow can use LoRAs, ControlNets, enabling negative prompting with KSampler, dynamic thresholding, inpainting, and more.
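Since several of the steps above amount to "download a file and drop it into the right models subfolder", a quick sanity-check script can save a round of debugging. This is only a sketch: the portable-install path is one example from the notes above, and the folder list just collects the directories this guide mentions; adjust both to your setup.

```python
from pathlib import Path

# Adjust to your own install location; the portable-build path above is one example.
COMFY_ROOT = Path(r"C:\ComfyUI_windows_portable\ComfyUI")

# Folders referenced in this guide (vae, ipadapter, clip_vision, loras, upscale_models, unet, gligen).
MODEL_DIRS = [
    "models/vae", "models/ipadapter", "models/clip_vision",
    "models/loras", "models/upscale_models", "models/unet", "models/gligen",
]


def check_model_folders(root: Path = COMFY_ROOT) -> None:
    """Report which model folders exist and how many .safetensors files each holds."""
    for rel in MODEL_DIRS:
        folder = root / rel
        if folder.is_dir():
            count = len(list(folder.glob("*.safetensors")))
            print(f"{rel:24s} ok       {count} .safetensors file(s)")
        else:
            print(f"{rel:24s} MISSING")


if __name__ == "__main__":
    check_model_folders()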
A CosXL Edit model takes a source image as input. An all-in-one FluxDev workflow in ComfyUI that combines various techniques for generating images with the FluxDev model, including img-to-img and text-to-img.
How it works: download and drop any image from the website into ComfyUI, and ComfyUI will load that image's entire workflow.
I added the node clip skip -2 (as recommended by the model).
In ComfyUI the saved checkpoints contain the full workflow used to generate them, so they can be loaded in the UI just like images to get the full workflow that was used to create them.
ai discord: score_9, score_8_up, score_7_up, score_6_up, score_5_up, score_4_up, then just describe what you want.
This fork includes support for Document Visual Question Answering (DocVQA) using the Florence2 model.
ControlNet and T2I-Adapter - ComfyUI workflow examples. Note that in these examples the raw image is passed directly to the ControlNet/T2I adapter. You can load these images in ComfyUI to get the full workflow.
May 27, 2024 · Simple ComfyUI workflow used for the example images for my model merge 3DPonyVision.
Dec 10, 2023 · Introduction to ComfyUI. EZ way: just download this one and run it like another checkpoint ;) https://civitai.com/models/628682/flux-1-checkpoint
This post hopes to bridge the gap by providing the following bare-bones inpainting examples with detailed instructions in ComfyUI. It combines advanced face swapping and generation techniques to deliver high-quality outcomes, ensuring a comprehensive solution for your needs.
All the images in this repo contain metadata, which means they can be loaded into ComfyUI with the Load button (or dragged onto the window) to get the full workflow that was used to create the image.
SDXL works with other Stable Diffusion interfaces such as Automatic1111, but the workflow for it isn't as straightforward.
Please note that in the example workflow using the example video we are loading every other frame of a 24-frame video and then turning that into an 8 fps animation (meaning things will be slowed compared to the original video). Workflow Explanations.
Update ComfyUI if you haven't already. Please share your tips, tricks, and workflows for using this software to create your AI art.
Apr 21, 2024 · Inpainting with ComfyUI isn't as straightforward as other applications. Reload to refresh the page if needed. Here is an example for how to use the Canny ControlNet. Here is an example for how to use the Inpaint ControlNet; the example input image can be found here.
That's all for the preparation — now we can start! You can then load or drag the following image in ComfyUI to get the workflow: Flux Schnell. SDXL Examples. For example, errors may occur when generating hands, and serious distortions can occur when generating full-body characters. Here is an example of how to use upscale models like ESRGAN.
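Regarding the clip skip -2 note above: a one-line conversion avoids mixing up the sign conventions. This sketch assumes you are translating a Civitai/A1111-style positive "Clip skip" value to the negative value used by ComfyUI's CLIP Set Last Layer node.

```python
def a1111_clip_skip_to_comfy(clip_skip: int) -> int:
    """Convert an A1111/Civitai-style 'Clip skip' (positive) to the ComfyUI
    'CLIP Set Last Layer' value (negative), e.g. 2 -> -2."""
    if clip_skip < 1:
        raise ValueError("Clip skip is 1 or greater in A1111-style UIs")
    return -clip_skip


print(a1111_clip_skip_to_comfy(2))  # -2, matching the clip skip -2 recommendation above
```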
This piece by Nathan Shipley didn't use this exact workflow, but it is a great example of how powerful and beautiful prompt scheduling can be. Any Node workflow examples.
The workflow has Upscale resolution to 1024 x 1024 and metadata compatible with the Civitai website (upload) after saving the image. By the end of this article, you will have a fully functioning text-to-image workflow in ComfyUI built entirely from scratch.
The easiest way to update ComfyUI is through the ComfyUI Manager. Very proficient in furry, feet, and almost every kind of NSFW content.
For some workflow examples and to see what ComfyUI can do, you can check out the Examples page. The way ComfyUI is built up, every image or video saves the workflow in the metadata, which means that once an image has been generated with ComfyUI, you can simply drag and drop it to get that complete workflow.
For example, <lora:Frieren_Pony:1.0> will be interpreted as Frieren Pony even though it wasn't your intent to use the file name as part of the prompt. The sample prompt as a test shows a really great result.
ComfyUI Examples. Simple workflow for using the new Stable Video Diffusion model in ComfyUI for image-to-video generation.
5 days ago · Inpaint Examples. Upload workflow. https://rentry.co/ponyxl_loras_n_stuff was used as a source, as well as the PurpleSmartAI Discord.
By fine-tuning with your custom datasets, you can create highly personalized FLUX LoRA models. You can find the Flux Schnell diffusion model weights here; this file should go in your ComfyUI/models/unet/ folder.
To use ComfyUI-LaMA-Preprocessor, you'll be following an image-to-image workflow and add in the following nodes: Load ControlNet Model, Apply ControlNet, and lamaPreprocessor. When setting the lamaPreprocessor node, you'll decide whether you want horizontal or vertical expansion and then set the amount of pixels you want to expand the image by.
Here is a basic example how to use it. As a reminder, you can save these image files and drag or load them into ComfyUI to get the workflow.
For some workflow examples and to see what ComfyUI can do, you can check out the Examples page.
Aug 31, 2024 · 🔗 The workflow integrates with ComfyUI's custom nodes and various tools like image conditioners, logic switches, and upscalers for a streamlined image generation process. Let's embark on a journey through fundamental workflow examples. Using SDXL 1.
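On the LoRA filename problem discussed above: ComfyUI itself loads LoRAs through nodes rather than prompt syntax, so if you reuse A1111-style prompts it helps to strip the <lora:...> tags out before encoding the text. Here is a rough sketch of that separation; the regex and the default weight of 1.0 are assumptions, and nodes such as CR LoRA Stack handle the actual loading.

```python
import re

LORA_TAG = re.compile(r"<lora:(?P<name>[^:>]+)(?::(?P<weight>[0-9.]+))?>")


def split_lora_tags(prompt: str) -> tuple[str, list[tuple[str, float]]]:
    """Pull <lora:name:weight> tags out of a prompt string and return the cleaned
    prompt plus (name, weight) pairs, so the LoRA file name never ends up being
    read as ordinary prompt text."""
    loras = [(m.group("name"), float(m.group("weight") or 1.0))
             for m in LORA_TAG.finditer(prompt)]
    cleaned = re.sub(r"\s{2,}", " ", LORA_TAG.sub("", prompt)).strip(" ,")
    return cleaned, loras


prompt = "score_9, score_8_up, elf girl in a forest <lora:Frieren_Pony:1.0>"
print(split_lora_tags(prompt))
# -> ('score_9, score_8_up, elf girl in a forest', [('Frieren_Pony', 1.0)])
```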