SDXL Refiner in ComfyUI: using SDXL 1.0 through an intuitive visual workflow builder

 
ComfyUI is a powerful modular graphical interface for Stable Diffusion models that lets you build complex workflows out of nodes. Like many SDXL users out there, I'm also new to ComfyUI and very much just a beginner in this regard, so study this workflow and its notes to understand how the pieces fit together.

According to the official documentation, SDXL needs its base and refiner models used together to achieve the best results, and the tool best suited to chaining multiple models is ComfyUI. The widely used WebUI (A1111) can only load one model at a time; to get the same effect there, you first generate with the base model in txt2img, then run the result through the refiner model in img2img. In practice: save the base outputs to a folder, go to img2img, choose batch, pick the refiner from the dropdown, and use the first folder as input and a second folder as output.

I was just using Sytan's workflow with a few changes to some of the settings, and I replaced the last part of it with a two-step upscale using the refiner model via Ultimate SD Upscale. This workflow uses similar concepts to my iterative one, with multi-model image generation consistent with the official approach for SDXL 0.9: the SDXL 1.0 Base model used in conjunction with the SDXL 1.0 Refiner. Hello FollowFox Community! Welcome to this part of the ComfyUI series, where we started from an empty canvas and, step by step, are building up SDXL workflows.

A typical workflow offers text-to-image with the SDXL 1.0 Base and Refiner models, an automatic calculation of the steps required for both the Base and the Refiner models, and a quick selector for the right image width/height combinations based on the SDXL training set; text-to-image with fine-tuned SDXL models is also supported. It works with bare ComfyUI (no custom nodes needed). A shared VAE load applies the VAE loading to both the base and refiner models, optimizing VRAM usage and enhancing overall performance. The images accompanying guides like this can be loaded in ComfyUI to get the full workflow. You can type in raw text tokens, but it won't work as well as a proper prompt setup. One answer that gets corrected a lot: no, ComfyUI isn't made specifically for SDXL.

In one example layout, the Prompt Group in the upper left holds the Prompt and Negative Prompt as String nodes, each wired to the samplers of both the Base and the Refiner. The Image Size group in the middle left sets the image dimensions; 1024 x 1024 is right for SDXL. The checkpoint loaders in the lower left are for the SDXL base, the SDXL refiner, and the VAE. This is the most well-organized and easy-to-use ComfyUI workflow I've come across so far for showing the difference between a preliminary, base, and refiner setup.

Yes, it's normal for quality to drop when you use the refiner with a LoRA; don't combine them. To get started, install SDXL (directory: models/checkpoints), then download and drop the workflow JSON file into ComfyUI. The same base-plus-refiner handoff is sketched in code below.
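Outside the node graph, that two-stage handoff can be expressed with Hugging Face's diffusers library. This is a minimal sketch, not part of any workflow above; the model IDs are the official SDXL 1.0 repos, while the prompt, step count, and the 0.8 switch-over point are illustrative:

```python
import torch
from diffusers import DiffusionPipeline

# Load the SDXL base, then a refiner that shares the base's second
# text encoder and VAE to save VRAM.
base = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16, variant="fp16", use_safetensors=True,
).to("cuda")
refiner = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0",
    text_encoder_2=base.text_encoder_2, vae=base.vae,
    torch_dtype=torch.float16, variant="fp16", use_safetensors=True,
).to("cuda")

prompt = "a historical painting of a battle scene, smoke rising from the ground"
n_steps = 25      # total diffusion steps for the whole schedule
switch_at = 0.8   # base handles the first 80% of denoising

# The base runs the high-noise portion and hands off a latent...
latent = base(
    prompt=prompt, num_inference_steps=n_steps,
    denoising_end=switch_at, output_type="latent",
).images

# ...and the refiner finishes the low-noise portion of the same schedule.
image = refiner(
    prompt=prompt, num_inference_steps=n_steps,
    denoising_start=switch_at, image=latent,
).images[0]
image.save("sdxl_base_plus_refiner.png")
```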
SDXL places very heavy emphasis at the beginning of the prompt, so put your main keywords first. In one 1024px comparison, a single image with 25 base steps and no refiner lost to one with 20 base steps plus 5 refiner steps: everything was better except the lapels. Image metadata is saved, but I'm running Vlad's SDNext. After an entire weekend reviewing the material, I think (I hope!) I got the implementation right: as the title says, I included the ControlNet XL OpenPose and FaceDefiner models (I am unable to upload the full-sized image). Place VAEs in the folder ComfyUI/models/vae.

The refiner is conditioned on an aesthetic score; the base isn't. Aesthetic score conditioning tends to break prompt following a bit (the LAION aesthetic score values are not the most accurate, and alternative aesthetic scoring methods have limitations of their own), so the base wasn't trained on it, which enables it to follow prompts as accurately as possible.

If you don't need LoRA support, separate seeds, CLIP controls, or hires fix, you can just grab the basic v1 version of the custom nodes extension for ComfyUI, which includes a workflow to use SDXL 1.0 with both the base and refiner checkpoints. SDXL 1.0, the flagship image model developed by Stability AI, stands as the pinnacle of open models for image generation. It is a latent diffusion model that uses a pretrained text encoder (OpenCLIP-ViT/G), pairing a 3.5B parameter base model with a separate refiner. Some people skip the refiner entirely, just using the SDXL base to run a 10-step DDIM KSampler pass, converting to an image, and running that through a SD 1.5 model.

A question that kept coming up around the leaked 0.9 files: do I need to download the remaining files (PyTorch weights, VAE, and UNet)? And is there an online guide for these leaked files, or do they install the same as 2.x checkpoint files? (I'm currently going to try them out in ComfyUI.) For low-VRAM setups the refiner makes a huge difference: with only a laptop running SDXL on 4 GB of VRAM, you can stay reasonably fast by using very few steps, e.g. 10 base + 5 refiner steps.

A good place to start if you have no idea how any of this works is Sytan's SDXL ComfyUI workflow, or Searge-SDXL: EVOLVED v4.x for ComfyUI, which runs fast. You really want to follow a guy named Scott Detweiler. Click "Manager" in ComfyUI, then "Install missing custom nodes", and install a custom SD 1.5 model and the SDXL refiner model as needed. More advanced examples exist too, such as "Hires Fix", aka 2-pass txt2img; these configs give you the option to do the full SDXL Base + Refiner workflow or the simpler SDXL Base-only workflow. Part 4 of this series installs custom nodes and builds out workflows. The issue with the refiner is simply Stability's OpenCLIP model. (From one tutorial channel: this episode opens a new series on another way of using Stable Diffusion, the node-based ComfyUI; longtime viewers will know the channel has always used WebUI for demos and explanations.)

In ComfyUI the handoff can be accomplished with the output of one KSampler node (using the SDXL base) leading directly into the input of another KSampler (using the refiner). A little about the step math: the total steps need to be divisible by 5, as shown in the sketch right after this paragraph. This works fine in SDXL 0.9 with updated checkpoints too, nothing fancy, no upscales, just straight refining from the latent.
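To make that step math concrete, here is a tiny hypothetical helper (split_steps is my own name, not from any workflow; the 4/5 : 1/5 ratio matches the split quoted later in this piece):

```python
def split_steps(total_steps: int, base_fraction: float = 0.8) -> tuple[int, int]:
    """Split a total step budget between the SDXL base and refiner.

    With the 4/5 : 1/5 split discussed here, 25 total steps become
    20 base steps + 5 refiner steps.
    """
    if total_steps % 5 != 0:
        raise ValueError("total steps should be divisible by 5")
    base_steps = int(total_steps * base_fraction)
    return base_steps, total_steps - base_steps

print(split_steps(25))  # (20, 5)
print(split_steps(15))  # (12, 3)
```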
(In Auto1111 I've tried generating with the Base model by itself, then using the Refiner for img2img, but that's not quite the same thing, and it doesn't produce the same output or the same quality.) The workflow should generate images first with the base and then pass them to the refiner for further refinement. The SDXL base checkpoint can be used like any regular checkpoint in ComfyUI; the final 1/5 of the steps are done in the refiner. Place upscalers in their folder under ComfyUI/models. Note that in ComfyUI txt2img and img2img are the same node.

In the realm of artificial intelligence and image synthesis, the Stable Diffusion XL (SDXL) model has gained significant attention for its ability to generate high-quality images from textual descriptions. The Stability AI team takes great pride in introducing SDXL 1.0, and a technical report on SDXL is now available. (One Japanese guide covers verifying that SDXL runs under SD.Next, for those who want to test SDXL in a web UI and push image quality further with the Refiner.) Fooocus and ComfyUI also use the v1.0 models. If you only have a LoRA for the base model, you may actually want to skip the refiner, or at least limit how much work it does.

By default, AP Workflow 6.0 is configured to generate images with the SDXL 1.0 Base model used in conjunction with the SDXL 1.0 Refiner: for example, 10 steps on the base SDXL model, then steps 10-20 on the SDXL refiner. The workflow I share is based upon SDXL using the base and refiner models together to generate the image, then running it through many different custom nodes to showcase the different options. Click Load and select the workflow JSON file you just downloaded; workflows can be shared in .json format, but images do the same thing, and ComfyUI supports that as-is, so you don't even need custom nodes. The solution to wrangling all of this is ComfyUI, which could be viewed as a programming method as much as a front end; I discovered it through an X post (aka Twitter) shared by makeitrad and was keen to explore what was available.

A sample prompt used for testing: "A historical painting of a battle scene with soldiers fighting on horseback, cannons firing, and smoke rising from the ground." I'm also creating some cool images with some SD 1.5 models. Model description (sdxl-0.9): this is a model that can be used to generate and modify images based on text prompts. I need a workflow for using SDXL 0.9; Part 3 of this series adds an SDXL refiner for the full SDXL process. For a ControlNet workflow, see the workflow for combining SDXL with a SD 1.5 model (directory: models/checkpoints); install your LoRAs in models/loras and restart. Set the base ratio to 1. I just uploaded the new version of my workflow; the chart above evaluates user preference for SDXL (with and without refinement) over SDXL 0.9. Copy the update-v3 file if you're updating, and on Colab the sdxl-1.0_comfyui_colab notebook will open.

Set up a quick workflow to do the first part of the denoising process on the base model, but instead of finishing it, stop early and pass the still-noisy result on to the refiner to finish the process; a schematic version of that follows.
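Here is how that early stop and handoff looks in ComfyUI's API-format JSON, assuming the stock KSamplerAdvanced node. This is a sketch, not a loadable workflow: the node IDs ("base_ckpt", "base_sampler", and so on), seeds, and the 25/20 step numbers are all illustrative, and the widget names reflect recent ComfyUI builds:

```python
# Two KSamplerAdvanced nodes sharing one 25-step schedule: the base
# samples steps 0-20 and keeps its leftover noise; the refiner picks up
# at step 20 without adding fresh noise.
base_sampler = {
    "class_type": "KSamplerAdvanced",
    "inputs": {
        "model": ["base_ckpt", 0],        # MODEL output of the base loader
        "positive": ["base_prompt", 0],
        "negative": ["base_negative", 0],
        "latent_image": ["empty_latent", 0],
        "add_noise": "enable",
        "noise_seed": 42,
        "steps": 25, "cfg": 8.0,
        "sampler_name": "euler", "scheduler": "normal",
        "start_at_step": 0, "end_at_step": 20,
        "return_with_leftover_noise": "enable",  # keep noise for the refiner
    },
}
refiner_sampler = {
    "class_type": "KSamplerAdvanced",
    "inputs": {
        "model": ["refiner_ckpt", 0],
        "positive": ["refiner_prompt", 0],
        "negative": ["refiner_negative", 0],
        "latent_image": ["base_sampler", 0],  # latent handed off from the base
        "add_noise": "disable",
        "noise_seed": 42,
        "steps": 25, "cfg": 8.0,
        "sampler_name": "euler", "scheduler": "normal",
        "start_at_step": 20, "end_at_step": 10000,
        "return_with_leftover_noise": "disable",
    },
}
```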
Model type: diffusion-based text-to-image generative model. The ComfyUI examples fully support SD 1.x/2.x, SDXL, and Stable Video Diffusion, with an asynchronous queue system; if you haven't installed ComfyUI yet, you can find it at the project page. The refiner is an img2img model, so you have to use it that way; with SDXL as the base model, the sky's the limit. You can point the SDXL refiner at old models too, but the result is mediocre. Searge-SDXL: EVOLVED v4.x ships with refiner and multi-GPU support. (For the Impact Pack, the subpack nodes live under custom_nodes\ComfyUI-Impact-Pack\impact_subpack.)

When you define the total number of diffusion steps you want the system to perform, the workflow will automatically allocate a certain number of those steps to each model, according to the refiner_start parameter; 4/5 of the total steps are done in the base. SDXL comes with a base and a refiner model, so you'll need to use them both while generating images, whether Txt2Img or Img2Img; then run python launch.py. Having issues with the refiner in ComfyUI is common at first, and it's a massive learning curve to get your bearings, but SDXL is arguably the best open-source image model.

(From a Japanese guide: here's an easy way to use SDXL on Google Colab. Pre-configured code builds the SDXL environment for you, and a pre-configured ComfyUI workflow file that skips the hard parts and emphasizes clarity and flexibility lets you generate AI illustrations right away. As the SDXL paper describes, the model takes the image width and height as conditioning inputs, so the node graph reflects that, and adding the Refiner extends it further.) ComfyUI was created by comfyanonymous, who made the tool to understand how Stable Diffusion works. One author optimized their UI for SDXL by removing the refiner model entirely. For me, style keywords went to both the base prompt and the refiner prompt. Basic setup for SDXL 1.0: for good images, typically around 30 sampling steps with SDXL Base will suffice; I used it on DreamShaper SDXL 1.0 as well, and settled on 2/5, or 12 steps, of upscaling.

After gathering some more knowledge about SDXL and ComfyUI, and experimenting a few days with both, I've ended up with this basic (no upscaling) 2-stage (base + refiner) workflow. It works pretty well for me: I change dimensions, prompts, and sampler parameters, but the flow itself stays as it is. (From a Chinese tutorial, the topics covered are: first, style control; second, how to connect the base and refiner models; third, regional prompt control; fourth, regional control of multi-pass sampling. Once the logic is right, you can wire a ComfyUI node graph however you like, so the video doesn't belabor the details.) The goal is to build up knowledge, understanding of this tool, and intuition on SDXL pipelines; how to use the prompts for Refine, Base, and General with the new SDXL model is another common question, along with the SDXL Prompt Styler node.

Stability AI released two diffusion models for research purposes, SDXL-base-0.9 and SDXL-refiner-0.9, and this is a comprehensive tutorial on understanding the basics of running SDXL 1.0 with the node-based user interface ComfyUI. With the 0.9 base+refiner, my system would freeze and render times would extend up to 5 minutes for a single render; if you get a 403 error when downloading, it's your Firefox settings or an extension that's messing things up. Copy the .safetensors files into the ComfyUI folder inside ComfyUI_windows_portable. Efficient controllable generation for SDXL is also possible with T2I-Adapters, which align internal knowledge in T2I models with external control signals.

For scripting, the ComfyUI API prompt format starts from:

    import json
    from urllib import request, parse
    import random
    # this is the ComfyUI api prompt format
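Extended into a runnable script, that stub might look like the following: a sketch assuming ComfyUI's default listen address (127.0.0.1:8188) and a workflow exported with the frontend's "Save (API Format)" option; the file name is illustrative:

```python
import json
import random
from urllib import request

def queue_prompt(prompt: dict) -> None:
    """POST an API-format workflow to a locally running ComfyUI server."""
    data = json.dumps({"prompt": prompt}).encode("utf-8")
    req = request.Request("http://127.0.0.1:8188/prompt", data=data)
    request.urlopen(req)

# Load a workflow exported with "Save (API Format)" and randomize its seeds.
with open("sdxl_base_refiner_api.json") as f:
    workflow = json.load(f)

for node in workflow.values():
    if node.get("class_type") == "KSamplerAdvanced":
        node["inputs"]["noise_seed"] = random.randint(0, 2**32 - 1)

queue_prompt(workflow)
```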
Continuing with the car analogy from an "SD 1.5 + SDXL Refiner Workflow" thread, ComfyUI vs Auto1111 is like driving manual shift vs automatic (no pun intended). However, the SDXL refiner obviously doesn't work with SD 1.5 models. Put the VAEs into ComfyUI\models\vae\SDXL and ComfyUI\models\vae\SD15. ComfyUI is a nodes/graph/flowchart interface for experimenting with and creating complex Stable Diffusion workflows without needing to code anything. Then refresh the browser (I lie; I just rename every new latent to the same filename). You can use the workflow in the Impact Pack to regenerate faces with the FaceDetailer custom node and the SDXL base and refiner models. SDXL has 2 text encoders on its base, and a specialty text encoder on its refiner.

On my 12GB 3060, A1111 can't generate a single SDXL 1024x1024 image without using RAM for VRAM at some point near the end of generation, even with --medvram set. If you want to use the SDXL checkpoints, you'll need to download them manually. (This series continues with Part 1: Stable Diffusion SDXL 1.0, Part 5: Scale and Composite Latents with SDXL, and Part 6.) But suddenly the SDXL model got leaked, so no more sleep. I've successfully run subpack/install.py for the Impact Pack; for a ControlNet model, move it to the "ComfyUI\models\controlnet" folder. I trained a LoRA model of myself using the SDXL 1.0 base. Install or update the required custom nodes, then generate the image.

To simplify the workflow, set up a base generation and a refiner refinement using two Checkpoint Loaders. While researching inpainting with SDXL 1.0: FWIW, the latest ComfyUI does launch and renders some images with SDXL on my EC2 instance. A working configuration: Width: 896; Height: 1152; CFG Scale: 7; Steps: 30; Sampler: DPM++ 2M Karras; Prompt: as above. It would need to denoise the image in tiles to run on consumer hardware, but at least it would probably only need a few steps to clean up VAE artifacts.

SDXL 1.0 is the highly anticipated model in the series: after the community spent weeks tinkering with randomized sets of models on the Discord bot since early May, the winning candidate was finally crowned for the release of SDXL 1.0. A common question about styles (for this workflow, or any other upcoming tool support for that matter): how are they applied via the prompt, and is a style just a keyword appended to the prompt? You can use any SDXL checkpoint model for the Base and Refiner models. There is also a tutorial video, "ComfyUI Master Tutorial - Stable Diffusion XL (SDXL) - Install On PC, Google Colab (Free) & RunPod". All images here were created using ComfyUI + SDXL 0.9. On an RTX 3060 with 12GB VRAM and 32GB system RAM, SDXL 1.0 base plus refiner generates an image in about two minutes; I am also using SDXL + refiner on a 3070 with 8 GB. Workflow 2 ("Simple") is easy to use, with 4K upscaling.

The base model was trained on the full range of denoising strengths, while the refiner was specialized on "high-quality, high resolution data" and denoising of <0.2; using the refiner is highly recommended for best results, and the img2img sketch after this section shows how to exploit that specialization directly.
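Because the refiner was specialized for that low-noise regime, it can also be run as a standalone img2img pass over an image you already have. A minimal sketch with diffusers, assuming the official refiner repo; the file names, prompt, and the low 0.25 strength (chosen to mirror the <0.2 denoising specialization) are illustrative:

```python
import torch
from diffusers import StableDiffusionXLImg2ImgPipeline
from diffusers.utils import load_image

refiner = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0",
    torch_dtype=torch.float16, variant="fp16", use_safetensors=True,
).to("cuda")

# Low strength keeps the composition intact and only sharpens fine
# detail, matching the refiner's low-noise training regime.
init_image = load_image("base_output.png").convert("RGB")
refined = refiner(
    prompt="a historical painting of a battle scene",
    image=init_image,
    strength=0.25,
    num_inference_steps=30,
).images[0]
refined.save("refined_output.png")
```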
The node is located just above the "SDXL Refiner" section. Many people want a ComfyUI workflow that's compatible with SDXL using the base model, refiner model, hires fix, and one LoRA all in one go; shared bundles like SDXL_LoRA_InPAINT | SDXL_With_LoRA | SDXL_Inpaint | SDXL_Refiner_Inpaint cover those combinations, and one example caption reads "Inpainting a woman with the v2 inpainting model". For LoRA training captions, in "Prefix to add to WD14 caption", write your TRIGGER followed by a comma and then your CLASS followed by a comma, like so: "lisaxl, girl, ".

The SDXL Prompt Styler Advanced is a new node for more elaborate workflows with linguistic and supportive terms. SDXL 1.0 is built on an innovative new architecture composed of a 3.5B parameter base model and a 6.6B parameter refiner model, making it one of the largest open image generators today. The refiner is trained specifically to do the last 20% of the timesteps, so the idea is to not waste time running the base through them. One user report: upscaling works (1.5x), but I can't get the refiner to work, and I have to close the terminal and restart A1111 again. This setup is the neatest so far, though 0.9 was already yielding strong images.

There are several options for how you can use the SDXL model, including this custom nodes extension, which provides a workflow for SDXL (base + refiner) and is totally ready for use with SDXL base and refiner built into txt2img. Always use the latest version of the workflow JSON file with the latest version of the custom nodes! Those are two different models, so remember the base and refiner are loaded separately. ComfyUI is great if you're like a developer, because you can just hook up some nodes instead of having to know Python to update A1111. Helpful extras include a node that adds "Reload Node (ttN)" to the right-click context menu, and ComfyUI ControlNet aux, a plugin with preprocessors for ControlNet so you can generate controlled images directly from ComfyUI. Here's what I've found: when I pair the SDXL base with my LoRA on ComfyUI, things seem to click and work pretty well. A full stack like ComfyUI with SDXL (Base+Refiner) + ControlNet XL OpenPose + FaceDefiner (2x) is possible, but ComfyUI is hard.

Yup, all images generated in the main ComfyUI frontend have the workflow embedded into the image (right now anything that uses the ComfyUI API doesn't have that, though). If you want the prompt for a specific workflow, you can copy it from the prompt section of the image metadata of images generated with ComfyUI; keep in mind ComfyUI is pre-alpha software, so this format will change a bit.

The only important thing is that, for optimal performance, the resolution should be set to 1024x1024 or other resolutions with the same amount of pixels but a different aspect ratio; a small helper for picking such sizes follows.
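This is a hypothetical helper of my own, not from any workflow above: it holds the pixel count near 1024x1024 for a requested aspect ratio and snaps to multiples of 64, which is the arithmetic the SDXL training buckets such as 896x1152 fall out of:

```python
def sdxl_resolution(aspect: float, target_pixels: int = 1024 * 1024,
                    multiple: int = 64) -> tuple[int, int]:
    """Pick a width/height with ~target_pixels area for a given w/h aspect."""
    width = (target_pixels * aspect) ** 0.5
    height = width / aspect

    def snap(v: float) -> int:
        return max(multiple, round(v / multiple) * multiple)

    return snap(width), snap(height)

print(sdxl_resolution(1.0))        # (1024, 1024)
print(sdxl_resolution(896 / 1152)) # (896, 1152), a portrait bucket
```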
SDXL Base+Refiner: all images are generated using both the SDXL Base model and the Refiner model, each automatically configured to perform a certain amount of diffusion. To make full use of SDXL, you'll need to load in both models, run the base model starting from an empty latent image, and then run the refiner on the base model's output to improve detail. Although SDXL works fine without the refiner (as demonstrated above), you really do need to use the refiner model to get the full use out of it. You can use the base model by itself, but for additional detail you should move to the refiner; it's also possible to use the refiner purely as img2img, but the proper intended way is the two-step text-to-image described above. Traditionally, working with SDXL required two separate KSamplers: one for the base model and another for the refiner model. Due to the current structure of ComfyUI, it is unable to distinguish between SDXL latents and SD 1.5 latents.

On speed: IDK what you are doing wrong to wait 90 seconds; SD 1.5 on A1111 takes 18 seconds to make a 512x768 image and around 25 more seconds to then hires-fix it. Both ComfyUI and Fooocus are slower for generation than A1111, YMMV. On my 1060 GTX with 6GB VRAM and 16GB RAM, it takes 4-6 minutes until both checkpoints (SDXL 1.0 base and refiner) are loaded. I was having very poor performance running SDXL locally in ComfyUI, to the point where it was basically unusable, and as I ventured further and tried adding the SDXL refiner into the mix, things got more involved. Play around with different samplers and different amounts of base steps (30, 60, 90, maybe even higher). The second setting flattens the image a bit and gives it a smoother appearance, a bit like an old photo.

If you look for the missing model you need in the manager and download it from there, it'll automatically be put in the right folder. Not a LoRA, but you can also download ComfyUI nodes for sharpness, blur, contrast, saturation, and similar adjustments. Workflow 1 ("Complejo") covers Base+Refiner plus upscaling. Currently, a beta version of AnimateDiff is out; please read the AnimateDiff repo README for more information about how it works at its core. Here is the rough plan (which might get adjusted) of the series: How To Use Stable Diffusion XL 1.0. It's official: Stability.ai has now released the first of its official Stable Diffusion SDXL ControlNet models. Yet another week and new tools have come out, so one must play and experiment with them.

All the images in this repo contain metadata, which means they can be loaded into ComfyUI with the Load button (or dragged onto the window) to get the full workflow that was used to create the image; drag any of them into your ComfyUI browser and the workflow is loaded. That makes it really easy to generate an image again with a small tweak, or just to check how you generated something. (One translated note: after the load succeeds you should see the main interface, and you need to re-select your refiner and base model.) A way to read that embedded metadata programmatically is sketched below.
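This sketch assumes a PNG saved by the stock ComfyUI frontend, which, in current builds, stores the graph under the "workflow" and "prompt" PNG text keys; the file name is illustrative:

```python
import json
from PIL import Image  # pip install pillow

img = Image.open("ComfyUI_00001_.png")
# ComfyUI writes its graph into PNG text chunks, exposed via img.info.
workflow_json = img.info.get("workflow")  # full editable node graph
prompt_json = img.info.get("prompt")      # API-format prompt

if workflow_json:
    workflow = json.loads(workflow_json)
    print(f"workflow contains {len(workflow.get('nodes', []))} nodes")
```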
This seems to give some credibility and license to the community to get started. Part 7 (this post) covers SDXL 1.0 with SDXL-ControlNet: Canny. To get started, check out the installation guide; the SDXL Discord server also has an option to specify a style. Holding Shift while dragging will move a node by the grid spacing size times 10. This GUI provides a highly customizable, node-based interface, allowing users to intuitively place the building blocks of Stable Diffusion. Curated collections such as Think Diffusion's "Stable Diffusion ComfyUI Top 10 Cool Workflows" are worth a look.

I tried with two checkpoint combinations but got the same results, starting from sd_xl_base_0.9; I think you can try 4x upscaling if you have the hardware for it. (Translated from a Chinese guide: as the comparison image shows, the images generated with the refiner model beat the base model's output in quality and captured detail; as the saying goes, no comparison, no harm!) With Vlad hopefully releasing tomorrow, I'll just wait on the SD.Next update. (A Japanese guide notes that your web UI must be a sufficiently recent version, and that conveniently using the refiner model, as described later in that guide, requires an even newer one.)

🧨 Diffusers can also drive these models, though one reported approach uses more steps, has less coherence, and skips several important factors in between. In addition to that, I have included two different upscaling methods, Ultimate SD Upscaling and hires fix. Just training the base model isn't feasible for accurately generating images of subjects such as people, animals, and so on, which is where LoRAs with SDXL come in. Copy the sd_xl_base_1.0 checkpoint into place: put the model downloaded here and the SDXL refiner in the folder ComfyUI_windows_portable\ComfyUI\models\checkpoints. The base .safetensors file, plus the refiner if you want it, should be enough; a small check script follows.
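To sanity-check that the checkpoints landed in the right place before launching, a tiny hypothetical script; the portable-install path mirrors the folder named above, and the file names are the official SDXL 1.0 release names (adjust both for your setup):

```python
from pathlib import Path

ckpt_dir = Path("ComfyUI_windows_portable/ComfyUI/models/checkpoints")
expected = ["sd_xl_base_1.0.safetensors", "sd_xl_refiner_1.0.safetensors"]

for name in expected:
    f = ckpt_dir / name
    # Report the size so a truncated download stands out immediately.
    status = f"{f.stat().st_size / 2**30:.1f} GiB" if f.exists() else "MISSING"
    print(f"{name}: {status}")
```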