Here is one 1024x1024 benchmark; hopefully it will be of some use. Performance benchmarks have already shown that the NVIDIA TensorRT-optimized model outperforms the baseline (non-optimized) model on GPUs such as the A10 and A100, although TensorRT adds some overhead to the first run while the engine is built.

In this Stable Diffusion XL (SDXL) benchmark, consumer GPUs (on SaladCloud) delivered 769 images per dollar, the highest among popular clouds. The images generated were of salads in the style of famous artists and painters.

Unless there is a breakthrough technology for SD 1.5, SDXL is the path forward, and since future versions after SDXL will probably need even more VRAM, it seems wise to get a card with more VRAM. SDXL basically uses two separate checkpoints to do what SD 1.5 does with one. One way to make major improvements would be to push tokenization (and prompt use) of specific hand poses, as hands have a more fixed morphology.

Let's also try increasing the output size: can the raw power of the RTX 3080 win out in this test? We will use Real-ESRGAN (Real Enhanced Super-Resolution Generative Adversarial Networks) for upscaling.

What about much older hardware? The M40 is a dinosaur speed-wise compared to modern GPUs, but 24GB of VRAM should let you run the official repo (versus one of the "low memory" optimized ones, which are much slower). The answer is that it's painfully slow, taking several minutes for a single image.

For additional details on PEFT, please check the PEFT blog post or the diffusers LoRA documentation.

Aug 30, 2023 • 3 min read.
First, let's start with a simple art composition using default parameters to give our GPUs a good workout. Benchmark results: the GTX 1650 is the surprising winner. As expected, our nodes with higher-end GPUs took less time per image, with the flagship RTX 4090 offering the best performance.

Early on the morning of July 27 (Japan time), the new version of Stable Diffusion, SDXL 1.0, was released. Dubbed SDXL v0.9 in its preview form, it is now available on the Clipdrop platform by Stability AI, and it is able to be run on a fairly standard PC, needing only a Windows 10 or 11 or Linux operating system, with 16GB RAM and an Nvidia GeForce RTX 20 graphics card (equivalent or higher standard) equipped with a minimum of 8GB of VRAM.

Compared to previous versions of Stable Diffusion, SDXL leverages a three-times-larger UNet backbone: the increase in model parameters is mainly due to more attention blocks and a larger cross-attention context, as SDXL uses a second text encoder. Be careful with head-to-head image comparisons, because without careful prompting SDXL prioritizes stylized art while SD 1 and 2 favor realism, so it is a strange comparison.

For fast inference, use the LCM LoRA with any SDXL diffusion model and the LCM scheduler; bingo! You get high-quality inference in just a few steps. I also looked at the tensor's weight values directly, which confirmed my suspicions.

Apple has also published code to get started with deploying to Apple Silicon devices.

From what I have tested, InvokeAI (latest version) has nearly the same generation times as A1111 (SDXL and SD 1.5).
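Per-image timings like the ones above are easy to gather with a small harness. This is a generic sketch, not tied to any particular UI; `fake_render` is a placeholder standing in for a real image-generation call.

```python
import time

def images_per_second(render, n_warmup=1, n_runs=5):
    """Time a generation callable, skipping warm-up runs.

    The first invocation is often much slower (model load, engine
    build), so it is excluded from the steady-state average.
    """
    for _ in range(n_warmup):
        render()
    start = time.perf_counter()
    for _ in range(n_runs):
        render()
    elapsed = time.perf_counter() - start
    return n_runs / elapsed

# Placeholder workload standing in for a real pipeline call.
def fake_render():
    time.sleep(0.01)

rate = images_per_second(fake_render)
print(f"{rate:.1f} images/sec")
```

Excluding warm-up runs matters for the comparisons in this document: TensorRT engine builds and model loads would otherwise dominate the first measurement.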
SDXL v0.9 speed and memory benchmark: test setup. Can consumer GPUs serve SDXL inference at scale? The answer from our Stable Diffusion XL (SDXL) benchmark: a resounding yes.

SDXL performance optimizations: but the improvements don't stop there. Select the safetensors file from the Checkpoint dropdown. With a 3070 8GB and SD 1.5, I'm able to build a 512x512 image, with 25 steps, in a little under 30 seconds.

SDXL 1.0: Guidance, Schedulers, and Steps. Right: visualization of the two-stage pipeline, in which we generate initial latents with the base model. I prefer the 4070 just for the speed.

If the web UI misbehaves after an update, delete the venv folder and let it redownload everything the next time you run it. Can someone, for the love of whoever is most dear to you, post a simple instruction on where to put the SDXL files and how to run the thing?

Let's create our own SDXL LoRA! For the purpose of this guide, I am going to create a LoRA of Liam Gallagher from the band Oasis. First, collect training images.

Refiner workflow: 6.0 aesthetic score, 2.5 negative aesthetic score; send the refiner to CPU, load the upscaler to GPU; upscale x2 using GFPGAN.

SDXL (ComfyUI) iterations/sec on Apple Silicon (MPS): I currently need to mass-produce certain images for a work project using Stable Diffusion, so I am naturally looking into SDXL. IP-Adapter can be generalized not only to other custom models fine-tuned from the same base model, but also to controllable generation using existing controllable tools.

☁️ FIVE Benefits of a Distributed Cloud powered by gaming PCs.

Last month, Stability AI released Stable Diffusion XL 1.0. I used ComfyUI and noticed a point that can easily be fixed to save compute resources. Stable Diffusion XL delivers more photorealistic results and a bit of text. PyTorch 2 seems to use slightly less GPU memory than PyTorch 1.
We cannot use any of the pre-existing benchmarking utilities to benchmark end-to-end Stable Diffusion performance, because the top-level StableDiffusionPipeline cannot be serialized into a single TorchScript object.

The chart above evaluates user preference for SDXL (with and without refinement) over Stable Diffusion 1.5 and 2.1. SDXL is now available via Clipdrop, GitHub, or the Stability AI Platform. What is interesting, though, is that the median time per image is actually very similar for the GTX 1650 and the RTX 4090: about 1 second.

SDXL combines a 3.5B-parameter base model with a 6.6B-parameter refiner model, making it one of the largest open image generators today. SD 1.5 workflow options: inputs are the prompt, positive, and negative terms.

As much as I want to build a new PC, I should wait a couple of years until components are more optimized for AI workloads in consumer hardware. For users with GPUs that have less than 3GB of VRAM, ComfyUI offers a low-VRAM mode. The disadvantage is that it slows down generation of a single SDXL 1024x1024 image by a few seconds on my 3060 GPU.

SDXL GeForce GPU benchmarks. The WebUI is easier to use, but not as powerful as the API. SD 1.5 is superior at human subjects and anatomy, including face and body, but SDXL is superior at hands. It should be noted that this is a per-node limit.

An example prompt: "1990s vintage colored photo, analog photo, film grain, vibrant colors, canon ae-1, masterpiece, best quality, realistic, photorealistic, (fantasy giant cat sculpture made of yarn:1.2)". It'll be faster than 12GB of VRAM, and if you generate in batches, it'll be even better.

If you would like to access these models for your research, please apply using one of the following links: the SDXL-base-0.9 model and the SDXL-refiner-0.9 model. The 3090 will definitely have a higher bottleneck than that, especially once next-gen consoles have AAA games moving data between SSD, RAM, and GPU at very high rates.
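The observation that median per-image times can look nearly identical across very different GPUs, while averages diverge, comes down to cold-start outliers. A quick stdlib illustration; the sample timings below are made up for demonstration, not measured:

```python
from statistics import mean, median

# Hypothetical per-image times (seconds): one cold-start outlier
# (model load / engine build), then steady-state generations.
slow_gpu = [9.0, 1.1, 1.0, 1.2, 1.1, 1.0]
fast_gpu = [3.0, 0.9, 1.0, 0.9, 1.0, 0.9]

# Medians are nearly identical...
print(median(slow_gpu), median(fast_gpu))  # 1.1 0.95
# ...but means still reflect the cold-start gap.
print(round(mean(slow_gpu), 2), round(mean(fast_gpu), 2))  # 2.4 1.28
```

This is why a benchmark should report both statistics: the median describes steady-state throughput, while the mean captures what a short-lived node actually delivers.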
SDXL's performance has been compared with previous versions of Stable Diffusion, such as SD 1.5 and 2.1. Despite its powerful output and advanced model architecture, SDXL 0.9 can be run on a modern consumer GPU.

From what I've seen, a popular benchmark is: Euler a sampler, 50 steps, 512x512. Optimized for maximum performance to run SDXL on the free Colab tier. At 769 SDXL images per dollar, consumer GPUs on Salad's distributed cloud are still the best bang for your buck for AI image generation, even when enabling no optimizations on Salad and all optimizations on AWS.

For TensorRT, the first invocation produces plan files in the engine folder.

Originally posted to Hugging Face and shared here with permission from Stability AI. SDXL 1.0 features a shared VAE load: the loading of the VAE is now applied to both the base and refiner models, optimizing your VRAM usage and enhancing overall performance.

Another low-effort comparison: a heavily finetuned model, probably with some post-processing, against a base model with a bad prompt. There are slight discrepancies between the output of SDXL-VAE-FP16-Fix and SDXL-VAE, but the decoded images should be close.

Figure 1: Images generated with the prompts "a high quality photo of an astronaut riding a (horse/dragon) in space" using Stable Diffusion and Core ML + diffusers.

SD 1.5 will likely continue to be the standard, with this new SDXL being an equal or slightly lesser alternative. The most notable benchmark was created by Bellon et al. Training T2I-Adapter-SDXL involved using 3 million high-resolution image-text pairs from LAION-Aesthetics V2, with training settings specifying 20000-35000 steps, a batch size of 128 (data parallel with a single-GPU batch size of 16), a constant learning rate of 1e-5, and mixed precision (fp16).
Power-saving modes can be devastating for performance; maybe take a look at your advanced power-saving options in the Windows settings too.

AUTOMATIC1111 pre-release version 1.6 and the --medvram-sdxl flag help with SDXL's VRAM usage. The SDXL base model performs significantly better than the previous variants. You can run SDXL inference in just 4 steps using the LCM LoRA. The beta version of Stability AI's latest model, SDXL, is now available for preview (Stable Diffusion XL Beta).

In a groundbreaking advancement, we have unveiled our latest optimization of the Stable Diffusion XL (SDXL 1.0) model. Specs and numbers: Nvidia RTX 2070 (8GiB VRAM). It resulted in a massive 5x performance boost for image generation.

(Sample images can also be found in the SDXL 0.9 article.) Description: SDXL is a latent diffusion model for text-to-image synthesis.

Stability AI, the company behind Stable Diffusion, said, "SDXL 1.0 is still in development: the architecture of SDXL 1.0 is expected to change before its release." This means that you can apply for either of the two links, and if you are granted access, you can use both. SDXL performance does seem sluggish compared to SD 1.5.

You should be good to go; enjoy the huge performance boost using SD-XL!
I figure from the related PR that you have to use --no-half-vae (would be nice to mention this in the changelog!). That's the benchmark, and what most other companies are trying really hard to topple. The mid-range price/performance of PCs hasn't improved much since I built mine.

Create models using simpler-yet-accurate prompts that can help you produce complex and detailed images. With 3.5 billion parameters, it can produce one-megapixel images in different aspect ratios. I don't think it will be long before that performance improvement comes with AUTOMATIC1111 right out of the box. At higher (often sub-optimal) resolutions (1440p, 4K, etc.) the 4090 will show increasing improvements compared to lesser cards.

As some of you may already know, Stable Diffusion XL, the latest and most capable version of Stable Diffusion, was announced last month and became a hot topic. Finally, AUTOMATIC1111 has fixed the high VRAM issue in pre-release version 1.6. Even less VRAM usage: less than 2 GB for 512x512 images on the 'low' VRAM usage setting (SD 1.5).

I get about 10 it/s; your card should obviously do better. Has anyone been running SDXL on their 3060 12GB? I'm wondering how fast/capable it is at different resolutions. Since SDXL is not yet mature, there are relatively few models and plugins for it, and its hardware requirements are higher. The advantage is that it allows batches larger than one. SDXL is slower than SD 1.5 when generating at 512, but faster at 1024, which is considered the base resolution for the model. The result: 769 hi-res images per dollar.

SD 1.5 vs SDXL comparison: all of our testing was done on the most recent drivers and BIOS versions, using the "Pro" or "Studio" versions of the drivers. Step 1: Update AUTOMATIC1111.
This might seem like a dumb question, but I've started trying to run SDXL locally to see what my computer is able to achieve. Funny, I've been running 892x1156 native renders in A1111 with SDXL for the last few days. For our tests, we'll use an RTX 4060 Ti 16 GB, an RTX 3080 10 GB, and an RTX 3060 12 GB graphics card. This ensures that you see similar behaviour to other implementations when setting the same number for Clip Skip.

The collective reliability factor: the chance of landing tails is 50% for 1 coin, 25% for 2 coins, 12.5% for 3, and so on. In brief: the Stable Diffusion XL (SDXL) benchmark shows consumer GPUs can serve SDXL inference at scale.

Even with great fine-tunes, ControlNet, and other tools, the sheer computational power required will price many out of the market, and even with top hardware, the 3x compute time will frustrate the rest sufficiently that they'll have to strike a personal balance. For a beginner, a 3060 12GB is enough; for SD, a 4070 12GB is essentially a faster 3060 12GB. This GPU handles SDXL very well, generating 1024x1024 images in just seconds.

In a notable speed comparison, SSD-1B achieves speeds up to 60% faster than the foundational SDXL model, a performance benchmark observed on A100 80GB and RTX 4090 GPUs. It needs at least 15-20 seconds to complete a single step, so it is impossible to train. Opinion: not so fast; the results are good enough. Around 3 seconds per iteration, depending on the prompt.

Base workflow options: inputs are only the prompt and negative words. The release went mostly under the radar because the generative image AI buzz has cooled.
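The coin-flip analogy above generalizes: if each interruptible consumer node is independently unavailable with probability p, the chance that all n of them are down at once is p^n, which is why a fleet of individually flaky machines can be collectively reliable. A small sketch; the probabilities and node counts are illustrative, not measured:

```python
def all_down_probability(p_single_down: float, n_nodes: int) -> float:
    """Probability that every one of n independent nodes is down at once."""
    return p_single_down ** n_nodes

# Like the coin flips: 50% for 1, 25% for 2, 12.5% for 3...
for n in (1, 2, 3):
    print(n, all_down_probability(0.5, n))

# Even fairly flaky nodes become collectively reliable in a fleet.
print(all_down_probability(0.5, 10))  # ~0.001
```

The independence assumption is the key caveat: correlated failures (a region-wide outage, a bad driver update) break the p^n math.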
For AI/ML inference at scale, the consumer-grade GPUs on community clouds outperformed the high-end GPUs on major cloud providers. SDXL benchmarks: CPU / GPU / RAM / 20 steps / Euler a / 1024x1024.

The LoRA training can be done with 12GB of GPU memory.

Image: Stable Diffusion benchmark results showing a comparison of image generation times. Let's dive into the details! Major highlights: one of the standout additions in this update is experimental support for Diffusers.

This repository hosts the TensorRT versions of Stable Diffusion XL 1.0. For video, the most you can do is to limit the diffusion to strict img2img outputs and post-process to enforce as much coherency as possible, which works like a filter on a pre-existing video. Scroll down a bit for a benchmark graph with the text SDXL.

The abstract from the paper is: "We present SDXL, a latent diffusion model for text-to-image synthesis." SDXL 1.0 is the evolution of Stable Diffusion and the next frontier of generative AI for images. Installing ControlNet. The weights of SDXL-0.9 are available for research purposes. SDXL 1.0 is a text-to-image generation tool with improved image quality and a user-friendly interface.

16GB of VRAM can guarantee comfortable 1024x1024 image generation using the SDXL model with the refiner. This architectural finesse and optimized training parameters position SSD-1B as a cutting-edge model in text-to-image generation.
On Wednesday, Stability AI released Stable Diffusion XL 1.0, an open model representing the next evolutionary step in text-to-image generation. "Finally, AUTOMATIC1111 has fixed the high VRAM issue in pre-release version 1.6.0-RC; it's taking only 7.5GB of VRAM while swapping the refiner too. Use the --medvram-sdxl flag when starting." A 4060 Ti, just for the VRAM.

The "Original" backend is the default and is fully compatible with all existing functionality and extensions. tl;dr: we use various formatting information from rich text, including font size, color, style, and footnotes, to increase control of text-to-image generation.

SDXL consists of a two-step pipeline for latent diffusion: first, we use a base model to generate latents of the desired output size; in the second step, we use a specialized high-resolution refiner model on those latents. Unfortunately, it is not well-optimized for the Automatic1111 WebUI. It'll most definitely suffice.

Instead, Nvidia will leave it up to developers to natively support SLI inside their games for older cards, the RTX 3090 and "future SLI-capable GPUs," which more or less means the end of the road. Eh, that looks right; according to benchmarks, the 4090 laptop GPU is going to be only slightly faster than a desktop 3090.

This time we bring you Stable Diffusion AI image-generation benchmarks for 17 graphics cards, from the RTX 2060 Super to the RTX 4090. However, ComfyUI can run the model very well. I have 32 GB of RAM, which might help a little. The article also gives a careful introduction to Hires. fix. The RTX 2080 Ti released at $1,199, the RTX 3090 at $1,499, and now the RTX 4090 is $1,599. So it takes about 50 seconds per image on defaults for everything. When fine-tuning SDXL at 256x256, it consumes about 57GiB of VRAM at a batch size of 4.
About 5 it/s; large batches are, per-image, considerably faster. Image size: 832x1216, upscaled by 2. The current benchmarks are based on the current version of SDXL 0.9; I find the results interesting.

SD-XL Base / SD-XL Refiner: about 2.5 it/s. Generate images at native 1024x1024 on SDXL. But when you need to use 14GB of VRAM, no matter how fast the 4070 is, you won't be able to do the same. Note: performance is measured as iterations per second for different batch sizes (1, 2, 4, 8, ...). Stability AI API and DreamStudio customers will be able to access the model this Monday.

4K SR benchmark dataset: the 4K RTSR benchmark provides a unique test set comprising ultra-high-resolution images from various sources, setting it apart from traditional super-resolution benchmarks.

Even with AUTOMATIC1111, the 4090 thread is still open. The most recent version is SDXL 0.9. In addition to this, with the release of SDXL, Stability AI have confirmed that they expect LoRAs to be the most popular way of enhancing images on top of the SDXL v1.0 base model. Stable Diffusion benchmarked: which GPU runs AI fastest (updated)? VRAM is king. Or drop $4k on a 4090 build now.

This also sometimes happens when I run dynamic prompts in SDXL and then turn them off. Copy across any models from other folders (or previous installations) and restart with the shortcut.
To see the great variety of images SDXL is capable of, check out Civitai's collection of selected entries from the SDXL image contest. SDXL-VAE-FP16-Fix was created by finetuning the SDXL-VAE to (1) keep the final output the same, but (2) make the internal activation values smaller, by scaling down weights and biases within the network.

My SDXL renders are EXTREMELY slow; after searching around for a bit, I heard that the default settings might be to blame. The model is designed to streamline the text-to-image generation process and includes fine-tuning support.

Over the benchmark period, we generated more than 60k images, uploading more than 90GB of content to our S3 bucket, incurring only $79 in charges from Salad, which is far less expensive than using an A10G on AWS, and orders of magnitude cheaper than fully managed services like the Stability API. It was nearly 40% faster than Easy Diffusion v2.5.

An example prompt: "Cover art from a 1990s SF paperback, featuring a detailed and realistic illustration." SDXL does not achieve better FID scores than the previous SD versions. I don't know whether I am doing something wrong, but here is a screenshot of my settings.

Today, we are excited to release optimizations to Core ML for Stable Diffusion in macOS 13. The 4060 is around 20% faster than the 3060 at a 10% lower MSRP, and offers similar performance to the 3060 Ti.

PC compatibility for SDXL 0.9: we're excited to announce the release of Stable Diffusion XL v0.9!

Over the past few weeks, the Diffusers team and the T2I-Adapter authors have been working closely to bring T2I-Adapter support for Stable Diffusion XL (SDXL) to the diffusers library. Human anatomy, which even Midjourney struggled with for a long time, is also handled much better by SDXL, although the finger problem seems to have remained.

SD.Next supports two main backends, Original and Diffusers, which can be switched on the fly. Original is based on the LDM reference implementation and significantly expanded on by A1111.
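The cost-efficiency metric used throughout, images per dollar, is just total images divided by total spend. With the rough figures above ("more than 60k images" for $79) it lands in the same ballpark as the headline 769 figure; the exact value depends on the precise image count, which the write-up does not give, so the numbers below are illustrative:

```python
def images_per_dollar(total_images: int, total_cost_usd: float) -> float:
    """Cost-efficiency metric: generated images per dollar of spend."""
    return total_images / total_cost_usd

# Ballpark check using the benchmark's rough figures: the write-up
# says "more than 60k images" for $79, so 60,600 is an assumption.
print(round(images_per_dollar(60_600, 79.0)))  # ~767
```

Note this metric folds together generation speed and hourly price, which is why slower consumer GPUs can beat faster data-center cards on it.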
If you have the money, the 4090 is a better deal. AUTO1111 on WSL2 Ubuntu with xformers: ~3 it/s. I believe that the best possible, and even "better," alternative is Vlad's SD.Next. Download the stable release of 1.0, which is more advanced than its predecessor, 0.9. Adding optimization launch parameters.

I asked the new GPT-4-Vision to look at 4 SDXL generations I made and give me prompts to recreate those images in DALL·E 3. 19 it/s (after initial generation). The drivers after that introduced the RAM + VRAM sharing tech, but it can slow things down.

The train_instruct_pix2pix_sdxl.py script implements the InstructPix2Pix training procedure while being faithful to the original implementation; we have only tested it at a small scale. In addition, the OpenVINO script does not fully support Hires. fix, LoRA, and some extensions.

I selected 26 images of this cat from Instagram for my dataset, used the automatic tagging utility, and further edited the captions to universally include "uni-cat" and "cat" using the BooruDatasetTagManager. Within those channels, you can use the following message structure to enter your prompt: /dream prompt: *enter prompt here*. Yesterday they also confirmed that the final SDXL model would have a base+refiner design. (This is running on Linux; if I use Windows and diffusers etc., then it's much slower, about 2m30s per image.)

Stable Diffusion XL (SDXL) is a powerful text-to-image generation model that iterates on the previous Stable Diffusion models in three key ways: the UNet is 3x larger, and SDXL combines a second text encoder (OpenCLIP ViT-bigG/14) with the original text encoder to significantly increase the number of parameters. (via Stability AI)
Looking to upgrade to a new card that'll significantly improve performance but not break the bank? Mean time: 22 seconds. But this bleeding-edge performance comes at a cost: SDXL requires a GPU with a minimum of 6GB of VRAM. We are proud to host the TensorRT versions of SDXL and to make the open ONNX weights available to users of SDXL globally. SDXL is the new version, but it remains to be seen if people are actually going to move on from SD 1.5.

In this SDXL benchmark, we generated 60.6k hi-res images with randomized prompts on 39 nodes equipped with RTX 3090 and RTX 4090 GPUs. Automatically load specific settings that are best optimized for SDXL. SDXL 1.0 in A1111 vs ComfyUI with 6GB of VRAM: thoughts? The train_text_to_image_sdxl.py script shows how to implement the training procedure and adapt it for Stable Diffusion XL.

In order to test performance in Stable Diffusion, we used one of our fastest platforms, the AMD Threadripper PRO 5975WX, although the CPU should have minimal impact on results. I just listened to the hyped-up SDXL 1.0 launch event; it came out ahead in all but two categories in the user preference comparison.
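The SDXL-VAE-FP16-Fix approach mentioned earlier works because half precision has a hard ceiling: float16 cannot represent magnitudes above 65504, so activations that would overflow in fp16 must be scaled down to survive. The stdlib `struct` module can pack IEEE 754 half-precision floats and makes that ceiling visible; this is a generic demonstration of the numeric limit, not the actual VAE code, and the scale factor is arbitrary:

```python
import struct

def fits_in_fp16(x: float) -> bool:
    """Return True if x can be packed as an IEEE 754 half-precision float."""
    try:
        struct.pack("e", x)  # "e" = binary16 (half precision)
        return True
    except OverflowError:
        return False

print(fits_in_fp16(65504.0))   # True: float16's largest finite value
print(fits_in_fp16(100000.0))  # False: overflows half precision
# Scaling the "activation" down first avoids the overflow:
print(fits_in_fp16(100000.0 / 128.0))  # True
```

This is the same idea as the fix described above: shrink internal values so they stay inside fp16's representable range while the decoded output stays (nearly) the same.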