ComfyUI LoRA tips from Reddit

Tried the new LCM LoRAs. The one for SD1.5 works great; I was using it successfully for SD1.5 with the following settings: LCM LoRA strength 1.0, CFG Scale 1.5, Steps 4, Scheduler LCM.

To try a new LoRA I need to add a lora loader node, select the lora, move other nodes to keep the structure comprehensible, place the new lora loader on the canvas, disconnect the previous lora node from the sampler node, connect it to the new node, connect the new node to the sampler, realize this lora makes the image worse, and repeat the whole process. Since adding endless lora nodes tends to mess up even the simplest workflow, I'm looking for a plugin with a lora stacker node.

Take a Lora of person A and a Lora of person B, and place them into the same photo (SD1.5, not XL).

The weights are also interpreted differently: (word:1.1) in ComfyUI is much stronger than (word:1.1) in A1111. This can have bigger or smaller differences depending on the LoRA itself.

The LoRA Caption custom nodes, just like their name suggests, allow you to caption images so they are ready for LoRA training.

The TL;DR version is this: it makes an image from your prompt without a LoRA, runs it through ControlNet, and uses that to make a new image with the LoRA.

With the "ComfyUI Manager" extension you can install missing nodes almost automatically via the "install missing custom nodes" button. I'm pretty sure it's either in by default, or one of those two gives you the option.

The extension also provides XY plot components to better evaluate merge settings.

Is this the right way of doing it? On A1111 a positive "clip skip" value is indicated, stopping the CLIP that many layers before the last. I've followed many videos to make sure I've done it correctly.

Using only the trigger word in the prompt, you cannot control the Lora: the model's patch for the Lora is applied regardless of whether the trigger word is present. To prevent a Lora from being applied when it isn't used in the prompt, you need to directly connect the model that does not have the Lora applied.

What is a LoRA? Stable Diffusion models are fine-tuned using Low-Rank Adaptation (LoRA), a unique training technique.

You load ANY model (even a finetuned one), then connect it to the LCM-LoRA for the same base model.

Lora usage is confusing in ComfyUI, and the KSampler takes only one model. Anyone have an easy solution? PS: I've tried to pass the IPAdapter into the model for the Lora, and then plug it into the KSampler.

Workflows are much more easily reproducible and versionable, and ComfyUI is much better suited for studio use than the other GUIs available now.

In an upcoming post, we'll delve into how to use the XYZ plot in ComfyUI to further analyze the impacts of multiple LoRAs.

Here is a link to the workflow if you can't see the image clearly enough.

Is this workflow at all possible in ComfyUI? Unless I'm missing something somewhere, the LoRA loader nodes are also a PITA when you DON'T want to use a LoRA for a generation.

So, using the same type of prompts he is doing for pw_a, pw_b, etc.: set your lora loader to allow strength input and direct that kind of scheduling prompt to the Lora's strength. It works with the adjusted code in the node.

I use loras all the time in workflows with Ultimate SD Upscale.

When you have a Lora that accepts float strength values between -1 and 1, how can you randomize this for every generation?
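Whatever node combination answers this, the arithmetic it has to perform is just a deterministic rescale of an integer seed into the [-1, 1] range. A minimal Python sketch of that mapping (plain Python for illustration, not actual ComfyUI node code):

    import random

    def random_lora_strength(seed: int, lo: float = -1.0, hi: float = 1.0) -> float:
        """Map an arbitrary integer seed to a reproducible float in [lo, hi]."""
        return random.Random(seed).uniform(lo, hi)

    # a fresh seed per queued generation gives a fresh strength
    print(random_lora_strength(123456789))  # same seed -> same strength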
There is the randomized primitive INT, and there are math nodes that convert integers to floats.

The CLIP in the model is a tree-like structure, with the 'roots' layer being very general and the subsequent layers getting more and more specific. CLIP is the text part of the model, the bit that is used to decode the prompt.

The base model's "Model" and "Clip" outputs go to the respective "Model" and "Clip" inputs of the first Load Lora node.

Once I have all my settings perfect via ComfyUI and some SD1.5 Loras, is it possible for me to recreate everything as a script to be run on AWS? I guess I'm asking whether I can expect everything in the ComfyUI GUI to be equal to a script-run method. Thanks.

I want a ComfyUI workflow that's compatible with SDXL, with base model, refiner model, hi-res fix, and one LoRA all in one go.

If you drag a png made with ComfyUI into the window, you'll see the workflow in ComfyUI with the nodes etc.

Make sure you add the Lora trigger word to your prompt.

I can get it to work just fine with a text prompt, but I'm trying to give it a little more control with an image input. Check the ComfyUI image examples in the link.

Proper result from A1111: so far the only lora I used was either in A1111 or the LCM lora; now I made my own, but it doesn't seem to work. Help me make it better!

My ComfyUI workflow was created to solve that. But it separates the LoRA into another workflow (and it's not based on SDXL either).

I know you can do this by generating an image of two people using one lora (it will make the same person twice) and then inpainting the face with a different lora, using OpenPose / Regional Prompter. I follow his stuff a lot, trying to learn.

Previously I used to train LoRAs with Kohya_ss, but I think it would be very useful to train and test LoRAs directly in ComfyUI. Multiple lora references for Comfy are simply non-existent, not even on YouTube, where 1000 hours of video are uploaded every second.

I don't see what's special.

If not, install either ComfyUI Manager or ComfyUI Custom Scripts by pythongosssss.

Has anyone else had this issue, and how can I get past it?

Currently I have the Lora Stacker from Efficiency Nodes, but it works only with the proprietary Efficient KSampler node, and to make it worse the repository was archived on Jan 9, 2024, meaning it could permanently stop working with the next ComfyUI update any minute now.

My bad. I can already use wildcards in ComfyUI via Lilly Nodes, but there's no node I know of that makes it possible to call one or more LoRAs from a text prompt.

Another thing that bothers me: I've noticed in videos on FreeU that when the right settings are found, the resulting image is very different from the original image. Because it brightens the dark squares and darkens the bright ones.

You can find them by right-clicking and looking for the LJRE category, or you can double-click on an empty space and search for "caption".
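On the question above about recreating a ComfyUI setup as a script: the GUI is a front end to an HTTP queue, so a workflow exported with "Save (API Format)" can be submitted headlessly and should match a GUI run. A rough sketch, assuming a default local server; the file name workflow_api.json is a stand-in for your own export:

    import json
    import urllib.request

    SERVER = "http://127.0.0.1:8188"  # default ComfyUI address; adjust for your host

    # a workflow previously saved from ComfyUI in API format (assumed name)
    with open("workflow_api.json", encoding="utf-8") as f:
        workflow = json.load(f)

    # /prompt is the same queue endpoint the GUI itself uses
    req = urllib.request.Request(
        f"{SERVER}/prompt",
        data=json.dumps({"prompt": workflow}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(resp.read().decode())  # response contains the queued prompt_id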
Short version: I'm just getting started using Comfy, so are there any settings or nodes I should know about in Comfy that might affect the accuracy of a LoRA? ATM I've just got the model loader, empty latent, LoRA loader, positive prompt, and a KSampler, but what I'm getting isn't as accurate as the training samples Kohya produces.

So the stupid answer is that it will cost less than $30,000. The more likely answer is that you can train a LoRA on a single 4090 running for a day or two.

Better yet, output trigger words right into the prompt.

I call it 'The Ultimate ComfyUI Workflow': easily switch from Txt2Img to Img2Img, with a built-in Refiner, LoRA selector, Upscaler & Sharpener. PS: I also posted this in r/StableDiffusion before realizing this community existed.

So to use them in ComfyUI, load them like you would any other LoRA and change the strength to somewhere between 0 and 1.

No, for ComfyUI; it isn't made specifically for SDXL. At least for me it was like that, but I can't say for you, since we don't have the workflow you use.

I've trained a LoRA using Kohya_ss; however, when I load the LoRA in ComfyUI and use the tag words, it doesn't generate the image (e.g. a selfie).

You can do that with the name of a LoRA, and instead of completing the text you can click the little "i" symbol to bring up a window that seems to scrape data from CivitAI.

You don't need to create a model; that's the beauty of the LCM-LoRA presented here. I tested all of them, and each is now accompanied by a ComfyUI workflow that will get you started in no time. They are also quite simple to use with ComfyUI, which is the nicest part about them.

What is the easiest way to have a lora loader and an IPAdapter plugged into one KSampler? I've tried the simple model merge, but it doesn't seem to activate the IPAdapters.

I've started to use ComfyUI, but Loras don't work: they are in the correct folder and I have used all the triggers, but nothing happens with any of them.

And a few Loras require a positive weight in the negative text encode.

Mostly it's the way words (tokens) in your prompt are interpreted (the CLIP encoders).

The "Model" output of the last Load Lora node goes to the "Model" input of the sampler node. ComfyUI only allows stacking LoRA nodes, as far as I know.
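Written out in API format, the stacking described above is just the model/clip pair threaded through consecutive LoraLoader nodes. A sketch as a Python dict; the node IDs, file names, and strengths are invented, but the input names (model, clip, lora_name, strength_model, strength_clip) are the stock LoraLoader's:

    # Two chained LoRAs in ComfyUI API format; ["1", 0] means
    # "output slot 0 of node 1" (MODEL), ["1", 1] is slot 1 (CLIP).
    chain = {
        "1": {"class_type": "CheckpointLoaderSimple",
              "inputs": {"ckpt_name": "sd_xl_base_1.0.safetensors"}},
        "2": {"class_type": "LoraLoader",
              "inputs": {"model": ["1", 0], "clip": ["1", 1],
                         "lora_name": "styleA.safetensors",
                         "strength_model": 0.8, "strength_clip": 0.8}},
        "3": {"class_type": "LoraLoader",
              "inputs": {"model": ["2", 0], "clip": ["2", 1],
                         "lora_name": "styleB.safetensors",
                         "strength_model": 0.5, "strength_clip": 0.5}},
    }
    # node "3"'s MODEL output then feeds the KSampler, and its CLIP output
    # feeds the positive and negative CLIPTextEncode nodes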
I am just starting out with ComfyUI, so I need some advice. My SDXL Lora works fine with base SDXL and DreamXL in A1111, but I want to try it in ComfyUI with the refiner; I can't use the refiner in A1111 because the webui crashes when swapping to it, even though I use a 4080 16GB.

In my experience it works with most loras; the offset lora, for example, makes lines appear.

Putting a lora in the text, it didn't matter where in the prompt it went.

What am I doing wrong? I played around a lot with the lora strength, but the result always seems to have a lot of artifacts.

VFX artists are also typically very familiar with node-based UIs, as they are very common in that space.

Works well, but stretches my RAM to the absolute limit: 2.75s/it with the 14 frame model on a 4070.

I also have Loras for Eric, John and Ted; I'd like to have them randomized into the scene each time I queue a prompt, but no luck so far (a scripted way to do this is sketched after this section).

Tutorial 6 - upscaling: I talk a bunch about some of the different upscale methods and show what I think is one of the better ones; I also explain how a lora can be used in a ComfyUI workflow.

Do you want to save the image? Choose a Save Image node and you'll find the outputs in the folders, or you can right-click the image and save it that way too.

The image is being accepted and rendered, but I'm not getting any motion.

So I gave it already; it is in the examples.

The Lora won't work; it's ignored in Comfy.

Here's how you can incorporate LoRAs into your workflow in ComfyUI to unlock new creative possibilities.

However, what you CAN do in ComfyUI is generate an image with a normal model, then load the LCM-LoRA and upscale the generated image with the LCM sampler, using 8 steps.

Training a LoRA will cost much less than this, and it costs still less to train a LoRA for just one stage of Stable Cascade.

RockOfFire/ComfyUI_Comfyroll_CustomNodes: custom nodes for SDXL and SD1.5, including Multi-ControlNet, LoRA, Aspect Ratio, Process Switches, and many more.

So, I'm someone who stacks a lot of LoRAs in prompts on A1111, and moving to ComfyUI I want to try mixing different loras for different styles.

Try it with a lora.

What tools do you have that can help someone go through their Loras, TIs, Hypernetworks, and even base models, and that will show the keywords, sample images, and/or descriptions in a way that is easy to include in the workflow? Actually, lol, I'm not sure which custom node has it or whether this comes with ComfyUI now.

That's fine, but having to choose a LoRA, type the beginning of its name, and click the button is really fucking laborious. The workflow is saved as a json file.

Even though it's a slight annoyance having to wire them up, especially more than one, that does come with some UI validation and cleaner prompts.
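For the Eric/John/Ted randomization a few snippets up, nothing stock is mentioned in the thread, but scripted over the API it's a one-line patch before each queue submission. A sketch under the same assumptions as the earlier snippets (the file names and node ID are hypothetical):

    import random

    CHARACTER_LORAS = ["eric.safetensors", "john.safetensors", "ted.safetensors"]  # hypothetical files

    def randomize_character(workflow: dict, lora_node_id: str = "2") -> dict:
        """Point the LoraLoader at a random character file before queueing."""
        workflow[lora_node_id]["inputs"]["lora_name"] = random.choice(CHARACTER_LORAS)
        return workflow

    # randomize_character(chain); then POST to /prompt once per queued render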
There are many regional conditioning solutions available, but as soon as you try to add LoRA data to the conditioning channels, the LoRA data seems to overrun the whole generation.

Some LoRAs may work from -1.0 to +1.0, and some may support values outside that range.

The lora stacker in ComfyUI is pretty basic, and it's difficult to re-order the loras, remove empty lora slots, and so on. I've developed a ComfyUI extension that offers a wide range of LoRA merge techniques (including DARE).

It provides a workflow for SDXL (base + refiner).

For comparison, 30 steps of SDXL dpm2m sde++ takes 20 seconds.

Checkpoints --> Lora.

Furthermore, as you pointed out, it depends not only on each model but also on the image style, whether there's one LoRA or two, and so on.

In this setup it isn't clear where the lora stack goes and what it impacts; I want to know if I can specify this, or if I need to set a negative value for things like EasyNegative and a positive value for things like DolLikeness and so on.

It does work if connected with the LCM lora, but the images are too sharp where they shouldn't be (burnt), and not sharp enough where they should be.

That's a cost of about $30,000 for a full base-model train.

😋 The workflow is basically an image loader combined with a whole bunch of little modules for doing various tasks, like building a prompt from an image, generating a color gradient, or batch-loading images.

Then I make two basic pipes, one with LoRA A and one with LoRA B, and feed the model/clip of each into a separate conditioning box. I feed the latent from the first pass into sampler A with conditioning on the left-hand side of the image (coming from LoRA A), and into sampler B with right-side conditioning (from LoRA B).

Right-click on a KSampler and the dropdown MAY have the option to add hires fix.

Do I need to adjust something or do anything else? The same Loras worked fine in 1111.

Anyone have a workflow to do the following?

Comfy does the same, just denoting it as negative (I think it's referring to the Python idea of negative array indices denoting the last elements); let's say ComfyUI is more programmer-friendly. So 1 (A1111) = -1 (ComfyUI), and so on, for the clip skip values.

This is something I have been chasing for a while.

StabilityAI just released new ControlNet LoRAs for SDXL, so you can run these on your GPU without having to sell a kidney to buy a new one.

The Clip model is part of what you (if you want to) feed into the LoRA loader, and it will also have, in simple terms, trained weights applied to it to subtly adjust the output.

It would be great for a node to show the trigger word for a given Lora as part of the flow. It is available at CivitAI, but not always in the accompanying json files.

But I can't seem to figure out how to pass all that to a KSampler for the model.

Sharing checkpoints, loras, controlnets, upscalers, and all models between ComfyUI and Automatic1111: what's the best way?

I see a lot of tutorials demonstrating LoRA usage with Automatic1111, but not many for ComfyUI.

ComfyUI is also trivial to extend with custom nodes. Also, if this is new and exciting to you, feel free to post.

Lol, that's silly; it's a chance to learn stuff you don't know, and that's always worth a look.

That's the one I'm referring to.

This was incredibly easy to set up in auto1111 with the composable lora + latent couple extensions, but it seems an impossible mission in Comfy.
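The clip skip correspondence above (1 in A1111 equals -1 in ComfyUI) is mechanical enough to write down. A tiny sketch of the conversion whose result you would feed to ComfyUI's CLIPSetLastLayer node:

    def a1111_to_comfy_clip_skip(clip_skip: int) -> int:
        """A1111 counts skipped layers from the end starting at 1;
        ComfyUI's CLIPSetLastLayer expresses the same idea as a negative index."""
        if clip_skip < 1:
            raise ValueError("A1111 clip skip starts at 1")
        return -clip_skip

    assert a1111_to_comfy_clip_skip(1) == -1
    assert a1111_to_comfy_clip_skip(2) == -2  # the common anime-checkpoint setting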
I use it in the 2nd step of my workflow, where I create the realistic image with the ControlNet inputs. This is crazy.

    #config for comfyui
    #your base path should be either an existing comfy install or a central
    #folder where you store all of your models, loras, etc.
    comfyui:
        base_path: C:\Users\Blaize\Documents\COMFYUI\ComfyUI_windows_portable\ComfyUI\
        checkpoints: models/checkpoints/
        clip: models/clip/
        clip_vision: models/clip_vision/
        configs: models/configs/

I have a scene where a character lora gets styled into a given situation, i.e. Bob as a paladin riding a white horse in shining armour.

In simple terms, it's how much of the LoRA is applied to the Clip model.

It's ComfyUI; with the latest version you just need to drop the picture from the linked website into ComfyUI and you'll get the setup.

This is one of my best SDXL LoRAs ever! I hope you will like it! Trigger word: "in the style of drawholic".

Any advice or resource regarding the topic would be greatly appreciated!

LoRA has no concept of precedence (where it appears in the prompt order makes no difference), so the standard ComfyUI workflow of not injecting them into prompts at all actually makes sense. Whereas a single wildcard prompt can range from 0 LoRAs to 10.

Two others (lcm-lora-sdxl and lcm-lora-ssd-1b) generate images in around 1 minute at 5 steps.

Multiple characters from separate LoRAs interacting with each other.

Ctrl+Shift+B / Ctrl+B also doesn't do anything with the loader node selected on my install (the AIO Windows download).

Now I want to use a video game character lora. When I use this LoRA it always messes up my image.

The images above were all created with this method.

I'm trying to configure ComfyUI with AnimateDiff using a motion lora. Seems very hit and miss; most of what I'm getting looks like 2D camera pans.

When you use a Lora stacker, the Lora weight and Clip weight of the Lora are the same; when you load a lora in the lora loader, you can use two different values. Try changing that, or use a lora stacker that allows separate lora/clip weights.

Step 1: load the default ComfyUI workflow. This guide illustrates how using ComfyUI along with Efficiency Nodes not only simplifies the traditional workflow but also preserves its efficiency and elegance.

But what do I do with the model? The positive has a Lora loader; the negative has a Lora loader. I load the models fine and connect the proper nodes, and they work, but I'm not sure how to use them properly to mimic other webuis' behavior.

But how would you normalize the huge random seed to fit into that range? (The rescale sketched near the top of this section does exactly that.)

I found I can send the clip to the negative text encode.

That functionality of adding a combo box to pick the available embeddings would be sweet; it's something I've never seen in ComfyUI! Auto1111 gives it out of the box, and Comfy kind of discouraged me from using embeddings because of the lack of it (in Auto1111 the Civitai Helper is just amazing).

Tutorial 7 - Lora Usage.

You don't understand how ComfyUI works? It isn't a script but a workflow, generally in .json format (but images do the same thing), which ComfyUI supports as it is; you don't even need custom nodes.

In Automatic1111, for example, you load a LoRA and control its strength by simply typing something like this: <lora:Dragon_Ball_Backgrounds_XL:0.8>.
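That A1111 tag is easy to parse, which is presumably what a "LoRAs from the text prompt" custom node would do under the hood. A sketch of the extraction; the tag grammar is A1111's, everything else is illustrative:

    import re

    LORA_TAG = re.compile(r"<lora:([^:>]+)(?::([0-9.]+))?>")

    def extract_loras(prompt: str):
        """Strip A1111-style <lora:name:weight> tags from a prompt and
        return the cleaned prompt plus (name, weight) pairs."""
        loras = [(name, float(w) if w else 1.0) for name, w in LORA_TAG.findall(prompt)]
        return LORA_TAG.sub("", prompt).strip(), loras

    print(extract_loras("a city street <lora:Dragon_Ball_Backgrounds_XL:0.8>"))
    # -> ('a city street', [('Dragon_Ball_Backgrounds_XL', 0.8)])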
Both are quick and dirty tutorials without too much rambling; no workflows included, because of how basic they are.

So what's the point of it being in the prompt? When people share the settings used to generate images, they'll also include all the other things: cfg, seed, size, model name, model hash, etc.

Or just skip the lora-download Python code and upload the lora manually to the loras folder. Or do something even simpler: just paste the link of the loras into the model download link and then move the files to the different folders.

Hi everyone, I am looking for a way to train LoRA using ComfyUI. There is no "none" or "bypass" in the dropdown menu.

Take the outputs of that Load Lora node and connect them to the inputs of the next Lora node if you are using more than one Lora model.

Here is a with-LoRA and without comparison. Prompt: "A blue bird in the style of drawholic" (without / with). And one more. Prompt: "A man in the style of drawholic" (without / with). I hope you will like it! The link to my Patreon; it's free.

Upcoming tutorial: SDXL Lora + using a 1.5 Lora with SDXL, upscaling. Future tutorials planned: prompting practices, post-processing images, batch trickery, networking ComfyUI in your home network, masking and CLIPSeg awesomeness, and many more.

I want to automate the adjustment of the Lora weight: I would like to generate multiple images for every 0.2 change in weight, so I can compare them and choose the best one. I use Efficiency Nodes for ComfyUI Version 2.0+ for stacked Loras; a change in the weight of a lora can make a huge difference in the image, but with stacked Loras it becomes a time-consuming and tiring process.

Until then, I've lit a candle to the gods of Copy & Paste and created the Lora vs Lora plot in a workflow.
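For the 0.2-increment comparison above, the Efficiency Nodes XY plot does this in-graph; scripted over the API, the same sweep is a loop that patches the two strength inputs and queues once per value. A sketch under the same assumptions as the earlier snippets:

    def strength_sweep(lo: float = 0.0, hi: float = 1.0, step: float = 0.2):
        """Strength values at every `step`, endpoints included."""
        count = round((hi - lo) / step)
        return [round(lo + i * step, 2) for i in range(count + 1)]

    for s in strength_sweep():            # 0.0, 0.2, 0.4, 0.6, 0.8, 1.0
        print(f"queue with strength_model = strength_clip = {s}")
        # with the API sketches above: set the LoraLoader's strengths to s,
        # then POST the workflow to /prompt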