floyoofficial
14.2k
Marketing
Photography
Production
Text2Image
Z-Image Turbo
Z-Image Turbo: Fast Image Generation in Seconds
floyoofficial
3.5k
Animation
Filmmaking
First and last frame
Game Development
Image to Video
Wan2.2
Wan2.2 14b - Image to Video w/ Optional Last Frame
Generate high quality video from a start frame, as well as an optional end frame with this Wan2.2 14b Image to Video workflow!
floyoofficial
11.1k
API
Floyo API
Image2Image
Nano Banana Pro
Nano Banana Pro Text-to-Image: Gemini 3 Pro
Google just released Nano Banana Pro, and honestly, it's a pretty big step up from the original Nano Banana. The main thing? It can actually put legible text in images now. Like, real text that you can read, not the garbled nonsense most AI models spit out.
Qwen Image Edit 2509 for LoRA Dataset
Create Character LoRA Dataset
Z-Image Turbo - Text to Image w/ Optional Image Input (Image to Image)
floyoofficial
3.1k
Image to Video
Wan
Wan2.1 FusionX Image2Video
Created by @vrgamedevgirl on Civitai, please support the original creator!
Wan 2.6 Reference to Video
floyoofficial
3.3k
API
Flux
LoRA Training
Fast LoRA Training for Flux via Floyo API
FLUX is great at generating images, but locking in a specific aesthetic or character is easier with a LoRA. Here's how to create your own.
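The listing doesn't document the request shape for the training API, so the following is a purely hypothetical sketch of what submitting a LoRA training job over HTTP might look like — the endpoint URL, field names, and parameter values are all invented for illustration, not the real Floyo API.

```python
import json
import urllib.request

# Hypothetical payload -- the real Floyo API request shape is not
# documented in this listing; every field name here is illustrative.
payload = {
    "model": "flux-dev",
    "trigger_word": "mychar",          # token that will activate the LoRA
    "dataset_url": "https://example.com/dataset.zip",
    "steps": 1000,
}

req = urllib.request.Request(
    "https://api.example.com/v1/lora/train",   # placeholder URL
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json",
             "Authorization": "Bearer YOUR_API_KEY"},
)
# response = urllib.request.urlopen(req)  # submit when pointed at a real API
```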
360
Image2Video
Wan2.1
Image to Character Spin
See an image of a character spin 360 degrees.
Key Inputs:
- Image reference: Use any JPG or PNG showing your subject clearly.
- Width & height: The default image resize resolution works best for portrait images; if the image is landscape, change from 480x832 to 832x480.
- Prompt: Follow the example format: "The video shows (describe the subject), performs a r0t4tion 360 degrees rotation."
- Denoise: The amount of variance in the new image. Higher has more variance.
- File Format: H.264 and more.
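The orientation rule in the Key Inputs can be sketched as a small helper: portrait (or square) inputs keep the default 480x832, landscape inputs swap to 832x480. The function name is invented for illustration.

```python
def spin_resolution(img_w, img_h):
    """Pick the resize resolution for the character-spin workflow.

    Portrait or square inputs use the default 480x832; landscape
    inputs swap the dimensions to 832x480, per the card's Key Inputs.
    """
    return (480, 832) if img_h >= img_w else (832, 480)

print(spin_resolution(768, 1024))   # portrait input  -> (480, 832)
print(spin_resolution(1920, 1080))  # landscape input -> (832, 480)
```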
Qwen Image Edit 2509 Face Swap and Inpainting
ComfyUI Flux LoRA Trainer
Created by @Kijai on Github, please support the original creator!
Wan2.2 Animate Character
Character Sheet
Controlnet
Flux
Flux Text to Character Sheet
Create a character and a range of consistent outputs, suitable for establishing character consistency, training a model, and keeping the character uniform across multiple scenes.
Key Inputs:
- Image reference: Use the included pose sheet to show the range of positions.
- Prompt: As descriptive a prompt as possible.
floyoofficial
2.0k
Flux Kontext
Lineart
Previz
Sketch to Image
Flux Kontext Sketch to LineArt + Color Previz
Quickly convert rough sketches into polished lineart and colorized concepts. Ideal for early storyboards, character designs, scene planning, and other visual explorations.
SeedVR2 Upscale: Upscale to Extreme Clarity
floyoofficial
2.6k
Outpainting
Video to Video
Wan
Wan2.1 and VACE for Video to Video Outpainting
Wan VACE video outpainting invites you to break free from the limits of the frame and explore endless creative possibilities.
Qwen Image Edit - Edit Image Easily
floyoofficial
3.8k
Filmmaking
LTX 2
LTX 2 Fast
Open Source
Text2Video
Videography
LTX 2 19B Fast for Text to Video
A text-to-video workflow using LTX 2.
Flux Dev - Text to Image w/ Optional Image Input
floyoofficial
2.1k
API
Image2Image
Nano Banana
Nano Banana 2
Text2Image
Nano Banana 2 - Google's #1 ranked image model
The top-ranked image model on Artificial Analysis and LM Arena. 4K output, text rendering, and subject consistency across 5 characters.
Animation
Filmmaking
Flux
Game Development
LoRA
Text to Image
Flux Character LoRA Test and Compare
Test and compare multiple epochs of a character LoRA side by side with preset prompts. When training a LoRA, you'll usually have a few checkpoints throughout the process to test. This workflow lets you load up to 4 LoRAs to test side by side, making it easier to determine which one is right for you!
Key Inputs:
- LoRA Loaders: Load each LoRA epoch for the same character, in up to 4 groups.
- Groups Bypasser: Enable/disable groups as needed. If you only have 2 epochs to test, disable the back 2 groups.
- Triggerword: Add the trigger word for your LoRA and it will auto-fill into the default prompts. Leave blank if you're using your own custom prompts that include the trigger word.
- LoRA Testing Prompts: The default prompts work well to get an idea of how your character will look in different situations, but feel free to replace them with your own prompts (max 4).
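The Triggerword behaviour described above can be sketched as simple string substitution: the trigger word is filled into each preset test prompt, and leaving it blank keeps custom prompts untouched. The prompt texts and function name here are invented for illustration.

```python
# Hypothetical preset prompts; "{tw}" marks where the trigger word lands.
DEFAULT_PROMPTS = [
    "photo of {tw} walking through a city at night",
    "{tw} sitting in a cafe, cinematic lighting",
    "close-up portrait of {tw}, studio background",
    "{tw} hiking a mountain trail at sunrise",
]

def fill_prompts(trigger_word, prompts=DEFAULT_PROMPTS):
    """Substitute the LoRA trigger word into each test prompt.

    A blank trigger word returns the prompts as written, mirroring the
    'leave blank if your custom prompts already include it' behaviour.
    """
    if not trigger_word:
        return list(prompts)
    return [p.format(tw=trigger_word) for p in prompts]

for p in fill_prompts("mychar"):
    print(p)
```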
API
Flux
Text to Image
Multi-Image Flux Ultra, Pro, Dev, Recraft+
Start with a prompt, and get a different render from a range of unique models at the same time.
Z-Image Turbo Controlnet 2.1 Image to Image
floyoofficial
1.1k
Flux
Kontext
Sketch to Image
Flux Kontext - Sketch to Image
Bring your sketches to life in full color with Flux Kontext!
Key Inputs:
- Load Image: Upload the sketch you want to transform.
- Prompt: Describe the desired output style, such as "Render this sketch as a realistic photo" or "Turn this sketch into a watercolor painting."
LTX 2.3 Pro Image to Video
LTX 2.3
floyoofficial
2.6k
Animation
Filmography
Grok
Image2Video
Grok Imagine for Image to Video
Turn images into excellent video using Grok Imagine.
Qwen Image Edit 2509: Combine Multiple Images Into One Scene for Fashion, Products, Poses & more
floyoofficial
1.1k
Animation
Filmmaking
Image2Video
LTX 2 Pro
Video Editing
LTX 2 Pro API for Image to Video
Image to Video using LTX 2 Pro API
floyoofficial
1.3k
Animation
Image to Video
Kling 2.6
Kling 2.6 Standard Motion Control
Create excellent movement for your characters using Kling 2.6 Standard Motion Control.
floyoofficial
2.5k
Controlnet
Flux
Video2Video
Wan2.1
Wan2.1 Fun Control and Flux for V2V Restyle
Create a new video by restyling an existing video with a reference image.
ace
face
face swap
faceswap
face swapper
sebastian kamph
swap
SMART FACE SWAPPER - Ace++ Flux Face swap
Face swapper built with Flux and ACE++. Works with added details too, like hats and jewelry. Smart features; use natural language.
Flux
Flux Kontext
Image2Image
kontext
panorama
Flux Kontext and HD360 LoRA for 360 Degree View
Flux Kontext 360° Workflow - Seamless Panorama Generation
Input: Simply upload an image in the "Load Image from Outputs" node.
Output: A 360° panoramic image.
Wan2.6 Image to Video
VibeVoice Text to Speech Single Speaker
floyoofficial
1.1k
Image2Video
Start and end frame
Wan2.1
Wan2.1 Start & End Frame Image to Video
Used for image to video generation, defined by the first frame and end frame images.
VEO3 Future of Video Creation
Image-to-Video with Reference Video (Prompt-Based Camera Rotation)
Video to Video with Camera Control with Wan
Adjust the camera angle of an existing video, like magic.
Flux
Text2Image
Flux Text to Image
Create original images using only text prompts, which can be simple or elaborate.
Key Inputs:
- Prompt: As descriptive a prompt as possible.
- Width & height: Optimal resolution settings are noted.
Text to Image with Multi-LoRA
Create consistent images with multiple LoRA models.
Z-Image Turbo + DyPE + SeedVR2 2.5 + TTP 16k reso
API
Floyo API
Image to Video
Seedance 1.5 Pro
Seedance 1.5 Pro with Draft Mode
Draft mode lets you experiment at low cost first by generating 480p draft videos.
Controlnet
Flux
Image
Image to Image with Flux ControlNet
Transform your images into something completely new while retaining specific details and composition from your original, using flexible controls.
Key Inputs:
- Image reference: Use any JPG or PNG showing your subject clearly.
- Prompt: As descriptive a prompt as possible.
- Denoise Strength: The amount of variance in the new image. Higher has more variance.
- Width & height: Try to match the aspect ratio of the original if possible.
Ace+
Fashion
Flux
Image to Image
Virtual Try-on
Flux Outfit Transfer
Virtual Outfit Try-On with Auto Segmentation. Try virtual clothing on any subject using Flux Dev, Ace Plus, and Redux, with automatic segmentation. Great for concept previews, fashion mockups, or character styling.
Key Inputs:
- Outfit: Load the outfit image you want to apply. Make sure it's high quality; visible artifacts or distortions may carry over into the final result.
- Actor: Add the subject or character you want to dress. Ideally, use a clear, front-facing image.
- Human Parts Ultra: Choose which parts of the body the clothing should apply to. For example, for a long-sleeve shirt, select torso, left arm, and right arm. This helps the model align the clothing properly during generation.
- Prompt: The default value works for most outfits, but you can adjust it to describe the desired outfit.
floyoofficial
1.3k
Recammaster
Video to Video
Wan
Wan2.1 and RecamMaster for V2V Camera Control
Adjust the camera angle of an existing video, like magic.
Character + Outfit → High-End Editorial Shoot
AnimateDiff
Control Image
HotshotXL
SDXL
Video2Video
Video to Video with Control Image
Breathe life into a character from an image reference using motion reference from a video.
Key Inputs:
- Image reference: Use any JPG or PNG showing your subject and the style of your shot clearly.
- Load Video: Use any MP4 that you would like to use for motion reference.
Flux
LoRA
Text2Image
Text to Image + LoRA model
Create an image from an AI model trained on something specific (a figure, outfit, art style, product, etc.) to ensure those details appear in the result.
floyoofficial
1.8k
Flux
Flux.2 Klein
Image2Image
FLUX.2 Klein 9B for Image Editing
Unified workflow: one model for text‑to‑image, image‑to‑image, and image editing
Character Sheet
Controlnet
Flux
Text to Character Sheet with a reference LoRA
Generate a character sheet using a prompt and a LoRA model of a particular person for more accurate renders.
Key Inputs:
- Load Image: Use any JPG or PNG of your pose sheet.
- Prompt: As descriptive a prompt as possible.
- Width & height: Optimal resolution is noted at 1280px x 1280px.
- Denoise: The amount of variance in the new image. Higher has more variance.
- ControlNet Strength: The amount of adherence to the original image. Higher has more adherence.
- Start Percent: The point in the generation process where the control starts exerting influence. (Have it start later to let the AI imagine first.)
- End Percent: The point in the generation process where the control stops exerting influence. (Have it end sooner to let the AI finish with some variation.)
- Flux Guidance: How much influence the prompt has over the image. Higher has more guidance.
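The Start Percent / End Percent inputs above amount to gating the control's influence to a window of the sampling run. This is a sketch of the idea, not ComfyUI's exact internals; the function name is invented.

```python
def controlnet_scale(step, total_steps, strength, start_percent, end_percent):
    """Return the ControlNet strength applied at a given sampling step.

    The control only exerts influence while normalised progress lies
    inside [start_percent, end_percent]; outside that window the sampler
    runs unguided, letting the AI imagine first and finish freely.
    """
    progress = step / total_steps
    if start_percent <= progress <= end_percent:
        return strength
    return 0.0

# With start=0.2 / end=0.8 over 20 steps, the first and last few steps
# are free of control influence:
schedule = [controlnet_scale(s, 20, 0.9, 0.2, 0.8) for s in range(20)]
print(schedule)
```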
FlashVSR Upscale Your Videos Instantly
API
Ecommerce
Image2Image
Nano Banana Pro
Product Ads
Nano Banana Pro for Multi Grid View of Product Ads
Create grids of different angles for your ecommerce products.
360° Character Turnaround & Sheet Workflow
Multi-Angle LoRA and Qwen Image Edit 2509: Unlocking Dynamic Camera Control for Your Images
floyoofficial
1.6k
Image2Image
Image Editing
Seedream 5.0
Text2Image
Seedream 5.0 Lite Unified for Image Generation
ByteDance's latest image model. Text-to-image, image editing, and multi-reference composition in one workflow.
floyoofficial
3.1k
Text2Image
Z-Image
Z-image-base
Z-Image Base for Text to Image
Create stunning images using the Z-Image base model (non-distilled).
Kling Omni One Video to Video Edit
Create Images Using Qwen Image Edit 2511
Qwen Image Edit 2511
floyoofficial
1.3k
MMaudio
Video to Video
MMAudio: Video to Synced Audio
Generate synchronized audio with a given video input. It can be combined with video models to get videos with audio.
floyoofficial
1.2k
Image
Inpaint
LoRA
Image Inpainting with LoRA
Change specific details on just a portion of the image for inpainting or Erase & Replace, adding a LoRA for extra control.
Flux
Image
Inpaint
Image Inpainting
Change specific details on just a portion of the image, sometimes known as inpainting or Erase & Replace.
Key Inputs:
- Image reference: Use any JPG or PNG showing your subject clearly.
- Masking tools: Right-click to reveal the masking tool option, and create a mask of the desired area to inpaint.
- Prompt: As descriptive a prompt as possible, to help guide what you would like replaced in the masked area.
Flux
Image
UltimateSD
Upscale
Flux Image Upscaler with UltimateSD
A simple workflow to enlarge and add detail to an existing image.
Key Inputs:
- Image: Use any JPG or PNG.
- Upscale by: The factor of magnification.
- Denoise: The amount of variance in the new image. Higher has more variance.
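What the "Upscale by" factor implies can be sketched numerically: it fixes the output resolution, and for a tiled upscaler of this kind it also determines how many tiles get re-sampled. The helper and tile size below are illustrative, not UltimateSD's actual defaults.

```python
import math

def upscale_plan(width, height, factor, tile=1024):
    """Compute the output resolution for an 'Upscale by' factor, plus
    how many tiles a tiled sampler would process at the given tile size.
    Tile size is an illustrative assumption, not a documented default."""
    out_w, out_h = round(width * factor), round(height * factor)
    tiles = math.ceil(out_w / tile) * math.ceil(out_h / tile)
    return out_w, out_h, tiles

print(upscale_plan(832, 1216, 2))  # -> (1664, 2432, 6)
```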
Start/End Frame Multi-Video via Floyo API
Compare between Luma Dream Machine and Kling Pro 1.6 via Fal API
API
Hunyuan
LoRA Training
LoRA Training Video with Hunyuan
Hunyuan is great at generating videos, but locking in a specific aesthetic or character is easier with a LoRA.
floyoofficial
1.8k
Animation
Filmmaking
Image to Video
Lipsync
Marketing
Multitalk
Wan2.1
Wan2.1 FusionX and MultiTalk - Image to Video
Turn any portrait - artwork, photos, or digital characters - into speaking, expressive videos that sync perfectly with audio input. MultiTalk handles lip movements, facial expressions, and body motion automatically.
FLUX
FLUX.2 Klein
Image2Image
LoRA
FLUX.2 Klein 9B + Realistic Enhanced Details LoRA
Create realistic images with enhanced details using FLUX.2 Klein 9B and a LoRA.
3D View
Animation
Architecture
Filmmaking
Game Development
Hunyuan 3D
Hunyuan3D
Image to 3D
Image to 3D with Hunyuan3D
A simple workflow to create a detailed & textured 3D model from a reference image.
HunyuanVideo Foley: Create a Lifelike Sound
Clothes Swap
Flux
Flux.2 Klein
Image Editing
LanPaint
FLUX.2 Klein 4B and LanPaint for Clothes Swap
Replace clothes using FLUX.2 Klein 4B and LanPaint.
Character Sheet
Image to Image
SDXL
Image to Character Sheet
Generate a character sheet with multiple angles from a single input image as reference.
Key Inputs:
- Image reference: Use any JPG or PNG showing your subject clearly. If you're trying to create a full-body output, a full-body input must be provided.
Seedance I2V: Image to Video in Minutes
Flux 2 Text-to-Image Generation
Controlnet
SD1.5
Scribble to Image
Turn your scribbles into a beautiful image with only a drawing tool and a text prompt.
Key Inputs:
- Scribble: Create your scribble with the painting and design tools.
- Prompt: As descriptive a prompt as possible.
- Width & height: Optimal resolution settings are noted.
- ControlNet Strength: The amount of adherence to the original image. Higher has more adherence.
- Start Percent: The point in the generation process where the control starts exerting influence. (Have it start later to let the AI imagine first.)
- End Percent: The point in the generation process where the control stops exerting influence. (Have it end sooner to let the AI finish with some variation.)
floyoofficial
1.1k
API
Controlnet
Floyo API
Image2Image
LoRA
Z-Image Turbo
Z-Image Turbo with Controlnet 2.1 and Qwen VLM
Create an accurate variety of images.
Qwen Image Edit 2511 Restore Damaged Old Photographs
Vertical Video Light & Mood Shift
fx-integration
image-to-image
qwen
reference-image
upscaling
video-conditioning
wan21-funcontrol
Vertical Video FX Inserter - Qwen + Wan 2.1 FunControl
Wan2.2 and Bullet Time LoRA: Transform Static Shots into Product Spins
Character Sheet
Face Swap
Flux
Image
Image to Image Character Sheet Face Swap with Ace+
Take a character sheet and use a reference image to replace all the faces with that new person.
Key Inputs:
- Load Image: Use any JPG or PNG showing your pose sheet.
- Load New Face: Use any JPG or PNG clearly showing the subject you would like to swap into the pose sheet.
- Prompt: As descriptive a prompt as possible.
- Width & height: Optimal resolution is noted at 1024px x 1024px.
- Keep Proportion: Enable keep_proportion if you want the output to keep the same proportions as the input.
- Denoise: The amount of variance in the new image. Higher has more variance.
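The keep_proportion option above boils down to an aspect-preserving resize: scale the source to fit inside the target box without distorting it. A minimal sketch, with an invented function name:

```python
def resize_keep_proportion(src_w, src_h, target_w, target_h):
    """Aspect-preserving fit: scale the source so it fits inside the
    target box (e.g. 1024x1024) without distorting its proportions,
    mirroring what a keep_proportion toggle typically does."""
    scale = min(target_w / src_w, target_h / src_h)
    return round(src_w * scale), round(src_h * scale)

print(resize_keep_proportion(1536, 2048, 1024, 1024))  # -> (768, 1024)
```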
360 Degree Product Video Using Nano Banana Pro
character replacement
character swap
image to video
masking
Points Editor
vertical video
Wan2.2 Animate
WanAnimateToVideo
Vertical Video Character Face & Actor Swap (Wan 2.2 Animate)
SeedVR2 and TTP Toolset 8k Image Upscale
Chroma 1 Radiance Text to Image
DyPe and Z-Image Turbo for High Quality Text to Image
AniSora 3.2 and Wan2.2: Best Practices for Generating Smooth Character 3D Spin
Nano Banana Pro Edit Image to Image
SeC Video Segmentation: Unleashing Adaptive, Semantic Object Tracking
Flux.2 Klein Image Expansion / Outpaint
Wan2.2 Fun Camera for Camera Control
Wan2.6 Text to Video
Qwen Image Edit 2509 + Flux Krea for Creating Next Scene
LoRA
Text2Video
Wan2.1
Text to Video and Wan with optional LoRA
Generate a high-quality video from a text prompt, and add a LoRA for extra control over character or style consistency.
Key Inputs:
- Prompt: As descriptive a prompt as possible.
- Load LoRA: Load your reference model here.
- Width & height: Optimal resolution settings are noted.
- File Format: H.264 and more.
Wan2.1 InfiniteTalk Video to Video
text2image
Wan2.1
Wan 2.1 Text2Image
Created by @yanokusnir on Reddit, please support the original creator! https://www.reddit.com/r/StableDiffusion/comments/1lu7nxx/wan_21_txt2img_is_amazing/ If this is your workflow, please contact us at team@floyo.ai to claim it!
Original post from the creator: Hello. This may not be news to some of you, but Wan 2.1 can generate beautiful cinematic images. I was wondering how Wan would work if I generated only one frame, so as to use it as a txt2img model. I am honestly shocked by the results. All the attached images were generated in Full HD (1920x1080px), and on my RTX 4080 graphics card (16GB VRAM) it took about 42s per image. I used the GGUF model Q5_K_S, but I also tried Q3_K_S and the quality was still great. The only postprocessing I did was adding film grain; it adds the right vibe to the images and it wouldn't be as good without it. Last thing: for the first 5 images I used the euler sampler with the beta scheduler - the images are beautiful with vibrant colors. For the last three I used ddim_uniform as the scheduler, and as you can see they are different, but I like the look even though it is not as striking. :) Enjoy.
Controlnet
SD1.5
Sketch to Image
Turn your sketches into full-blown scenes.
Key Inputs:
- Image reference: Use any JPG or PNG showing your subject clearly.
- Prompt: As descriptive a prompt as possible.
- Width & height: In pixels.
- ControlNet Strength: The amount of adherence to the original image. Higher has more adherence.
- Start Percent: The point in the generation process where the control starts exerting influence. (Have it start later to let the AI imagine first.)
- End Percent: The point in the generation process where the control stops exerting influence. (Have it end sooner to let the AI finish with some variation.)
Flux
Image
Redux
Image Redux with Flux
Create variations of a given image, or restyle it. It can be used to refine, explore, or transform ideas and concepts.
Key Inputs:
- Image reference: Use any JPG or PNG showing your subject clearly.
- Width & height: In pixels.
- Prompt: As descriptive a prompt as possible.
- Strength (step 5: value): Strength of the Redux model; play with the value to increase or decrease the amount of variation.
Kling 3.0 Pro for Image to Video
Turn images into a video using Kling 3.0 Pro
Wan Alpha Create Transparent Videos
Vertical Video FX Inserter / Element Pass with Seedream + Wan
Veo 3.1 Image to Video - First Frame and Optional Last Frame
Hunyuan
LoRA
Text2Video
Integrate a custom model with your text prompt to create a video with a consistent character, style or element. Key Inputs Prompt: as descriptive a prompt as possible. Make sure to include the trigger word from your LoRA below Load LoRA: Load your reference model here Width & height: resolution settings are noted in pixels Guidance strength (CFG): Higher numbers adhere more to the prompt Flow Shift: For temporal consistency, adjust to tweak video smoothness.
Text to Video + Hunyuan LoRA
FLUX
FLUX.2 Klein
Ghost Mannequin
Image2Image
SAM3
Create ghost mannequin clothing shots using FLUX.2 Klein, SAM3, and a Ghost Mannequin LoRA
FLUX.2 Klein 9B + SAM3 + GhostMannequin LoRA
3D
3D Model
Hyper3D Rodin v2
Image to 3D
Rodin v2
Turn your images into 3D using Hyper3D Rodin v2
Hyper3D Rodin V2 for Image to 3D
Grok Imagine for Imagine Edit
Edit images using Grok Imagine
floyoofficial
1.2k
Flux
Flux.2 Klein
Image2Image
Inpainting
LanPaint
Inpaint images using FLUX.2 Klein and LanPaint
FLUX.2 Klein 9B Image Inpainting
Clothing & Accessories Replacement
Image2Image
Image Edit
Qwen Image Edit 2511
Create different camera angles of an image using Qwen Image Edit 2511 and a dedicated multi-angle node
Camera Angle Control with QwenMultiAngle
Chatterbox Text to Speech
Text to speech workflow using Chatterbox
Studio Relighting for Composited Products
API
Filmography
Filmmaking
Floyo API
Image2Video
LTX 2 Fast
Image to Video using LTX 2 Fast API
LTX 2 Fast API for Image to Video
Create Photorealistic Packaging from Dielines
Qwen Image 2512 Text to Image
GPT Image 1.5
for Image Editing
Wan2.1 + WanMOVE for Animating Movement using Trajectory Path
Insert Products in Ecommerce Ads - NanoBanana Pro
3D Print LoRA and Flux Kontext for Image to 3D Print Mockup
Wan LoRA Trainer
ElevenLabs Text to Speech
VibeVoice Text to Speech Multi Speaker
Kling Omni One Image to Video
Light Restoration LoRA + Qwen Image Edit 2509 Image to Image
SAM3 Image Segmentation
FlatLogColor LoRA and Qwen Image Edit 2509
🔥Create Stunning 10 Second 3D Spin Shots in Seedance for Characters, Products, and Hero Scenes
Image to Video with Seedance Pro API
Wan2.1 and ATI for Control Video Motion: Draw Your Path, Get Your Video
animation
Ditto
lora
VACE
Video2Video
Wan
Upload any video, describe a new style, and Wan 2.1 rewrites every frame. Ditto keeps motion and structure intact across anime, Pixar, clay, and dozens more.
Wan 2.1 Vid2Vid Style Transfer with Ditto
SRPO Next-Gen Text-to-Image
Seedance Text to Video: Create Stunning 1080p Videos Instantly
Masking
Segmentation
Video
Use a video clip and visual markers to segment/create masks of the subject or the inverse. Key Inputs Load Video: Use any Mp4 that you would like to segment or create a mask from Select subject: Use 3 green selectors to identify your subject and one red selector to identify the space outside your subject Modify markers: Shift+Click to add markers, Shift+Right Click to remove markers
Video Masking with Sam2 Comparison
Vace
Video
Wan
Created by @davcha on Civitai, please support the original creator! https://civitai.com/models/1674121/simple-self-forcing-wan13bvace-workflow If this is your workflow, please contact us at team@floyo.ai to claim it! Original guide from creator: This is a very simple workflow to run Self-Forcing Wan 1.3B + Vace. It uses only a single custom node, which everyone making videos should have: Kosinkadink/ComfyUI-VideoHelperSuite. Everything else is pure comfy core. You'll need to download the model of your choice from lym00/Wan2.1-T2V-1.3B-Self-Forcing-VACE · Hugging Face and put it inside your /path/to/models/diffusion_models folder. This workflow is a very good starting point for experimenting. For how to use Vace, refer to [2503.07598] VACE: All-in-One Video Creation and Editing. You don't need to read the paper, of course; the information you are interested in is mostly at the top of page 7, which is reproduced in the following. In the WanVaceToVideo node, you have 3 optional inputs: control_video, control_masks, and reference_image. The names control_video and control_masks are a little misleading: you don't have to provide a full video. You can in fact provide a variety of things to obtain various effects. For example: if you provide a single image, it's more or less equivalent to image2video. If you provide a sequence of images separated by empty frames (img1, black, black, black, img2, black, black, black, img3, and so on), it's equivalent to interpolating between those images, filling in the black frames. A special case that makes this clear: img1, black, black, ..., black, img2 is equivalent to start-frame/end-frame to video. control_masks controls where Wan should paint: wherever the mask is 1, the original image will be kept.
So you can, for example, pad and/or mask an input image and use that image and mask as control_video and control_mask, and you'll effectively do an image2video inpaint and outpaint. If you input a video as control_video, you can control where the changes should happen in the same way using control_mask; you'll need to set one mask per frame of the video. If you input an image preprocessed with OpenPose or a depth map, you can finely control the movement in the video output. reference_image is an image that you feed to Wan+Vace to serve as a reference point. For example, if you put an image of someone's face here, there's a good chance you'll get a video with that person's face.
Simple Self-Forcing Wan1.3B+Vace workflow
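The sparse-keyframe trick from the guide (keyframes separated by black placeholder frames, with a per-frame mask) can be sketched in plain Python. This is an illustration of the idea only, not the WanVaceToVideo node's actual data format; the string placeholders stand in for image tensors, and the mask convention (1.0 = keep this frame, per the guide) is taken from the description above:

```python
# Illustrative sketch of building a sparse-keyframe control sequence:
# keyframes interleaved with black frames, plus a per-frame mask where
# 1.0 marks frames to preserve (following the guide's convention).
# Strings stand in for real image frames.

def build_control_sequence(keyframes, gap: int):
    """Interleave keyframes with `gap` black placeholder frames between each pair."""
    frames, mask = [], []
    for i, kf in enumerate(keyframes):
        frames.append(kf)
        mask.append(1.0)                 # keyframe: keep as-is
        if i < len(keyframes) - 1:
            frames.extend(["black"] * gap)
            mask.extend([0.0] * gap)     # black frame: let the model fill in
    return frames, mask

# Three keyframes with 3 black frames between each pair -> 9 control frames,
# equivalent to interpolating img1 -> img2 -> img3.
frames, mask = build_control_sequence(["img1", "img2", "img3"], gap=3)
```

With only two keyframes and a long gap this reduces to the start-frame/end-frame case the guide mentions.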
Image2Video
LTX
Video
Used for image to video generation, including first frame, end frame, or other multiple key frames. Key Inputs Load Image (Start Frame): Use any JPG or PNG showing your subject clearly to start your video Load Image (End Frame): Use any JPG or PNG showing your subject clearly to act as the last part of your video. Make sure it's the same resolution as the load image. Width & height: Optimal resolution settings are noted. LTX maximum resolution is 768x512 Prompt: as descriptive a prompt as possible
Image to Video with Multiframe Control
Flux
Image
LoRa
Upscale
Create a larger more detailed image along with an extra AI model for fine tuned guidance. Key Inputs Load Image: Use any JPG or PNG showing your subject clearly Load LoRA: Load your reference model here Prompt: as descriptive a prompt as possible Upscale by: The factor of magnification Denoise: The amount of variance in the new image. Higher has more variance.
Image Upscaler with LoRA
Flux
Image
Outpaint
Extend your images out for a wider field of view or just to see more of your subject. Expand compositions, change aspect ratios, or add creative elements while maintaining consistency in style, lighting, and detail while seamlessly blending with the existing artwork. Enhance visuals, create immersive scenes, and repurpose images for different formats without losing their original essence. Key Inputs Image reference: Use any JPG or PNG showing your subject clearly Prompt: as descriptive a prompt as possible to describe the area you want to extend out to Left, Right, Top, Bottom: Amount of extension in pixels Feathering: Amount of radius around the original image in pixels that the AI generated outpainting will blend with the original
Flux Fill Dev Image Outpainting
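The Feathering parameter described above can be pictured as a blend ramp. The sketch below is a hypothetical linear version of that idea (not Flux Fill's actual blend math): the original image's weight falls from 1 at its border to 0 once you are a full feather radius into the outpainted region.

```python
# Hedged sketch: a linear feathering ramp. `feather_px` plays the role
# of the Feathering input; the actual blend used by the workflow may differ.

def feather_weight(distance_px: float, feather_px: float) -> float:
    """Weight of the ORIGINAL image at `distance_px` outside its border."""
    if feather_px <= 0:
        return 0.0  # no feathering: hard seam, outpaint takes over immediately
    return max(0.0, 1.0 - distance_px / feather_px)

# With feathering = 32, a pixel 8 px outside the border is 75% original,
# and anything past 32 px is fully AI-generated.
w = feather_weight(8, 32)
```

A larger feather radius therefore trades a crisper original for a softer, less visible seam.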
Seedream 4.5 Image Generation
Ovi: Create a Talking Portrait
FantasyTalking
Image2Video
Lipsync
Wan2.1
Create high quality lipsync video from image inputs with Wan2.1 FantasyTalking Key Inputs Load Image: Select an image of a person with their face in clear view Load Audio: Choose audio file Frames: How many frames generated
Wan2.1 and FantasyTalking - Image2Video Lipsync
Grok Imagine for Text to Image
Create cool images using Grok Imagine
Captioning
LLM
Prompt Generator
Qwen3VL
VLM
Upload an image or video and get a detailed text description from Qwen3-VL. Choose your model size, pick a preset prompt, or write your own. Runs in your browser.
Qwen3-VL Image and Video Captioning
Hunyuan Video LoRA Trainer
Flux LoRA Trainer
subtitling
vid2vid
video generation
Upload a video and get it back with burned-in subtitles. Whisper transcribes the audio, then the text gets placed frame-by-frame with word-level timing.
Auto Subtitles with Whisper - Video to Video
Vertical Video Prop & Object Replacement Using Seedream + Wan 2.2
Vertical Video Background & Scene Rebuild
floyoofficial
1.1k
Flux
FLUX2 Klein
Photography
Text2Image
Create high quality images using the 9B FLUX.2 Klein model
FLUX.2 Klein 9B for Text to Image
Qwen Image Edit – Multi-Angle Camera View
Veo 3.1 Image to Video
Block-wise Image Upscaling with Qwen
Floyo API
Image2Video
PixVerse
Swap objects, characters, and backgrounds using PixVerse
Pixverse Swap for Image to Video Swap
floyoofficial
1.0k
Camera Control
Image2Image
LoRA
Qwen
Qwen Image Edit 2509
Re-render your subject from any camera angle with Qwen Image Edit 2509 and a Multi-Angle LoRA. Pan, tilt, rotate, wide-angle, or close-up. No trigger word.
Qwen Image Edit 2509 + Multi-Angle LoRA for Camera
Multiple Angle Lighting LoRA + 2511
Kling 2.5 Image to Video
3D Products with Logo - Wan2.6 Image to Video
Qwen Image Edit 2509 and Grayscale to Color LoRA
Grok Imagine for Text to Video
Create excellent videos using Grok Imagine for T2V
SAM3 for Video Masking using Text
Create video masks using SAM3 with text prompts only.
Vertical Video Scene Extension & Coverage Generator using Seedream +Wan
Vertical Video Scene Extension & Coverage Generator
Image Edit
Qwen
Qwen Image Edit 2511
Relighting
Relight images using the Qwen multi-angle light node
Qwen Multiangle Light with Qwen Image Edit 2511
Kling 3.0 for Video Generation
Coming soon page for Kling 3.0
API
Floyo API
Topaz
Video2Video
Video Upscale
Upload a video, pick your enhancement model and quality level, and Topaz Video AI sharpens, denoises, and upscales it. Audio is preserved. Output is H265 MP4.
Topaz Video Upscaler for Sharper Results
Minimax Speech 2.8 HD for Text to Speech
Create realistic speech using Minimax speech 2.8
Tripo3D for Image to 3D
Create a 3D model using Tripo3D v2.5
Meshy v6 for Image to 3D Model
Create a 3D model from Image using Meshy v6
Vidu Q3 for Image to Video
Turn your images into lifelike video
Realistic Product or Props Replacement
3D
Chord
Game Design
PBR Material
Text to 3D
Ubisoft
Create 3D game material assets using the Chord model from Ubisoft
Chord for PBR Material Generation using Text to 3D
audio
speech to text
srt
STT
subtitles
transcription
whisper
Upload any audio file and Whisper transcribes it into text with word-level and segment-level SRT subtitle files. Auto language detection included.
Whisper Speech-to-Text and SRT Subtitle Generator
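Once Whisper produces timed segments, turning them into an SRT file is a small formatting step. The sketch below assumes a simple Whisper-style segment shape (`start`/`end` in seconds plus `text`); the field names are an illustrative assumption, not this workflow's exact output:

```python
# Minimal sketch: convert Whisper-style timed segments into SRT text.
# The input dict shape ({'start', 'end', 'text'}) is assumed for illustration.

def srt_timestamp(seconds: float) -> str:
    """Format seconds as an SRT timestamp: HH:MM:SS,mmm."""
    ms = int(round(seconds * 1000))
    h, rem = divmod(ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def to_srt(segments) -> str:
    """segments: iterable of {'start': float, 'end': float, 'text': str}."""
    lines = []
    for i, seg in enumerate(segments, start=1):
        lines.append(str(i))                                        # cue number
        lines.append(f"{srt_timestamp(seg['start'])} --> {srt_timestamp(seg['end'])}")
        lines.append(seg["text"].strip())
        lines.append("")                                            # blank line between cues
    return "\n".join(lines)
```

Word-level subtitles, as mentioned in the description, are the same idea with one cue per word instead of per segment.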
Kling Master 2.0 Create Engaging Video Content
Next-Level Motion from Images using MiniMax
MiniMax Text-to-Video will Bring Your Creative Concepts to Life with Realistic Motion
ACE-Step 1.5 for Music Generation
Create stunning music using ACE Step 1.5
Boost Your Creative Video: Comprehensive Solutions with Seedance Image to Video
AILab
Audio to Text
Speech to Text
STT
Transcribe
Create a text from speech using Whisper STT
Whisper STT
3D
Animation
Architecture
Flux
Game Development
Hunyuan 3D
Image to 3D
Upscaling
Create a 3D model from a reference image with Flux Dev texture upscaling. Key Inputs Image: Use any JPG or PNG. Load the image you want to generate a 3D asset from, if it has a background this workflow will remove it and center the subject. Prompt: as descriptive a prompt as possible Denoise: The amount of variance in the new image. Higher has more variance. Notes: If you aren’t satisfied with the initial mesh, simply cancel the workflow generation process, preferably before the process reaches the SamplerCustomAdvanced node because applying the textures to the model may take a little bit more time, and you’ll be unable to cancel the generation during that time. The seed is fixed for the mesh generation, this is so if you need to retry the texture upscale you don't need to also re-generate the mesh. If you would like to try a different seed for a better mesh, simply expand the node below and change the seed to another random number. Changing the seed could help in some cases, but ultimately the biggest factor is the input image. If the first mesh isn't showing, give it a moment, there are some additional post processing steps going on in the background for de-light/multiview.
Image to 3D with Hunyuan3D w/ Texture Upscale
Craft Stunning Edits Instantly with Nano Banana Edit
LTX 2.0 – Prompting & Dynamic Camera Movement
SopranoTTS for Text to Speech
Turn text into speech using Soprano TTS
Wan2.2 Fun and RealismBoost LoRA for V2V
Audio2Audio
SoproTTS
Text to Speech
TTS
Turn your text into excellent speech using SoproTTS
Sopro for Text to Speech
Modify the Image using InstantX Union ControlNet
Image to SVG
Qwen Image Edit 2511
SVG
SVG Potracer
Create SVG images using Qwen Image Edit and the SVG Potracer node
SVG Potracer + Qwen Image 2511 for Image to SVG
Anything2Real 2601A
LTX 2.3 Audio to Video
LTX 2.3 Pro Text to Video
LongCat for Text to Image
Create cool images using the LongCat
Capybara for Image Editing
Edit your cool images using Capybara
LTX 2.3 Image to Video with Two-Pass Upscaling
Image to video
LTX
text to video
Video generation
Generate video and audio together with LTX 2.3 22B. Switch between text-to-video and image-to-video with one toggle. Separate multimodal guidance keeps video and audio quality tuned independently.
LTX 2.3 Text to Video and Image to Video
API
Floyo API
Kling
MotionControl
Transfer movements from a reference video to any character image.
Kling 3.0 Standard Motion Control
bitdance
T2V
text to image
Generate photorealistic images from text prompts using BitDance 14B, a 14-billion parameter autoregressive model that predicts up to 64 visual tokens per step.
BitDance 14B - Text to Image
API
FloyoAPI
Recraft
Text2Image
Generate images with Recraft V3 from a text prompt. Choose a preset size or custom dimensions, pick a style, and run.
Recraft V3 Text to Image
digital illustration
image to image
recraft
style transfer
text to image
Transform an existing image with Recraft V3. Upload a reference, write a prompt, set strength, and pick a style. Controls how much of the original survives.
Recraft V3 Image to Image - Style Transfer
image to video
ltx 2
text to video
video generation
Add seconds to an existing video with LTX 2.3. Upload a clip, then set the duration and mode.
LTX 2.3 - Extend Video
Filmography
LTX 2 Pro
Open Source
Text2Video
Videography
An open-source LTX 2 Pro workflow for text to video
LTX 2 19B Pro for Text to Video
Video Detailer using LTX 2 Vid2Vid
It can enhance the detail of the video
Animation
Filmography
Image2Video
LTX 2
Open Source
A workflow for LTX 2 image to video using the distilled model
LTX 2 19B Fast for Image to Video
ChatterBox
Higgs
Text to Speech
TTS
VibeVoice
A TTS Audio Suite workflow that can use different types of audio models.
Multi-Model Voice Conversion and Text to Speech
API
Bedrock
Nova Canvas
SDXL
Text to Image
Titan
Generate and compare images between 3 different models powered by Amazon Bedrock. Key Inputs Prompt: as descriptive a prompt as possible Models SDXL: Solid all-around performer with strong prompt adherence and wide style range Titan: Versatile model with built-in editing features and customization flexibility Nova Canvas: Quick iterations with creative flair, ideal for brainstorming and concept exploration
Amazon Bedrock - Text to Multi-Image with SDXL, Titan and Nova Canvas
LTX 2 Retake Video for Video Editing
Create Cinematic Poster & Ad from Your Product
LTX 2 Fast API for Text to Video
Text to video using LTX 2 Fast API
Insert Product into Existing Ad
Animation
Filmmaking
Image2Video
Kling 2.6 Pro
Create stunning videos using Kling 2.6 Pro
Kling 2.6 Pro for Image to Video
GPT-Image 1.5
Image2Image
Image2Video
Kling 2.6
Text2Image
VLM
Create a high quality demo for your products using Kling 2.6 Image to Video
Create Product Demo from Concept to Video
Static Watermark Remover
Wan2.1 + SCAIL for Animating Images for Movement
Z-Image Turbo + Chord Image to PBR Material
Change Product Shots with NanoBanana Pro
Kandinsky for Text to Video
Creating excellent videos using Kandinsky
background removal
film production
vfx
vid2vid
video generation
Remove any subject from video with MatAnyone2. Auto-detects by text, tracks frame by frame, and outputs a green screen, matte, and side-by-side comparison.
MatAnyone2 V2V with Auto Segmentation
Anima2
character design
concept art
fantasy
Text2Image
Generate images with Anima 2, a model built for anime and fantasy art. Write a prompt, set your resolution, and get stylized results in one run. Free to try.
Anima2 for Text to Image
Single Image to Multiple Consistent Shots
Wan 2.2 T2V Workflow with UnifiedReward Flex LoRA
Kling Omni One Image Edit
Image2Video
Kling Omni One
Next Scene LoRA
Qwen Image Edit 2511
Reference2Video
Creating a reshoot for a character
Character Reshoot using Qwen Edit 2511 + Kling O1
HunyuanImage 3.0 Text to Image
HunyuanVideo 1.5 for Image to Video
Ovis Text to Image
Upscaling Images to 4k using Qwen Image Edit 2511
Upscale from 2K to 4K
Z-Image Turbo Inpainting
Image2Image
Image2Video
Kling Omni One Video Edit
Qwen Image Edit 2511
Edit a character in a video without losing quality using a video-to-video workflow
Change in the Character using Image2Vid
Grok
image-to-image
multi-style
prompt-based editing
Multi-Style Image Transformation Workflow (One Input → Multiple Outputs)
Multi-Style Image Transformation Workflow
image-to-image
Nano banana
Omnipotent
Upload up to 3 reference images for a face, clothing, and scene, then describe what you want. Nano Banana 2 combines all three into one composed output.
Omnipotent Image 2.0 – Multi-Image Scene Composer
FLUX.2 Klein Inpaint + Segment Edit for Accurate Image Edits
Isometric Miniatures from a Selfie
FLUX.2 Klein 9B + Virtual Tryon LoRA
Virtually try on clothes using FLUX.2 Klein 9B and a Tryon LoRA
Qwen Image Edit – Portrait Light Migration
Camera Control
Image2Vid
Qwen Image Edit 2511
Vid2Vid
Wan2.6
Using witness cameras to recreate additional shots that were not captured by principal photography
Camera Angle Creation using Image2Vid
Kling Omni 1 Reference to Video
Create Magazine Cover & Package Design
Flux
Flux.2 Klein
Image Outpainting
Outpaint images using FLUX.2 Klein 4B with LanPaint and an Outpaint LoRA
FLUX.2 Klein 4B for Image Outpainting
Flux
Flux.2 Klein 4B
Image2Image
Create sprite sheets for game characters using FLUX.2 Klein 4B
FLUX.2 Klein 4B for Text to Sprite Sheet
Ecommerce
NanoBanana
Reference Image
Upload an outfit image to generate a fashion billboard
Generate Fashion Billboard Using Outfit Image
FLUX
Flux.2 Klein
Image2Image
Image Editing
LoRA
Edit images while keeping the subject consistent using FLUX.2 Klein 9B and a LoRA
FLUX.2 Klein 9B + Consistency LoRA
Qwen Thinking Prompt Refiner
Image Editing
LoRA
Qwen Image Edit 2511
Style Transfer
Create new images from your lineart drawing or sketch using the style transfer LoRA and Qwen Image Edit 2511
Qwen Image Edit 2511 + Style Transfer LoRA
Z-Image Turbo - 2K Upscaler
Qwen Image Max for Text to Image
Create high quality images using the flagship Qwen Image model
API
Image2Image
Image Editing
Qwen Image Max Edit
Edit images using Qwen Image Max Edit, the flagship Qwen image-editing model
Qwen Image Max Edit for Editing Images
animation
character design
concept art
lumina
portrait
text to image
Generate high-quality anime images with NetaYume Lumina, a fine-tuned model built on Lumina Image 2.0. Describe a scene, hit run, get detailed anime art.
NetaYume Lumina Text to Image
Dreamina 3.1 Text to Image
Fun Controlnet Union 2602
Image2Image
Image Editing
Qwen
Qwen Image 2512
Transform your images using Qwen Image 2512 with Fun Controlnet Union 2602
Qwen Image 2512 + Fun Controlnet Union 2602
API
Filmmaking
Floyo API
LTX 2 Pro
Text to Video
Videography
Text to video using LTX 2 Pro API
LTX 2 Pro API for Text to Video
ComfySketch for Creating Images
Draw cool images using comfysketch
Capybara for Text to Image
Create unique images using Capybara
audio
Audio2Audio
Chatterbox
tts
TTS Audio Suite
voice conversion
Convert any voice to match a target speaker using ChatterBox TTS. Upload source and narrator audio, run it, get back a converted MP3. No voice training needed.
Voice Changer using TTS Audio Suite (ChatterBox)
SAM2
Segment Anything 2
video2video
Video Mask
Create a video mask frame by frame using Segment Anything 2
Segment Anything 2 for Creating Video Mask
Qwen 3.5 Plus for Multimodal LLM and VLM
Analyze your images or videos using Qwen 3.5 Plus
SAM3 for Video Masking using Points
Create video masks using SAM3 with point prompts only.
Kling Image to Video with Reference Control
Kling O3 Pro - Image to Video
Kling O3 Pro - Video to Video Edit
API
Video2Video
Edit your video with Kling O3 Pro. Upload a clip, describe what to change, set duration and aspect ratio. Audio is preserved by default.
Kling O3 Pro - Video to Video Reference
Z-Image Base
Coming soon.
Kling O3 Video to Video — Standard Edit
Kling 3.0 Pro for Text to Video
Create videos using Kling 3.0
Krea Wan 14B Video to Video
image to video
ltx 2
retake
vid2vid
video generation
Re-generate a specific segment of an existing video with LTX 2.3.
LTX 2.3 - Retake Video
Image2Image
Image Editing
Seedream 4.5
Text2Image
An all-purpose Seedream 4.5 workflow for image generation
Seedream 4.5 Unified for Image Generation
Vidu Q3 for Text to Video
Create high-quality videos with Vidu Q3
Enjoy effortless image-to-image transformation into jaw-dropping photorealism using the Anime2Reality LoRA
Wan 2.6 Video Generation
Meshy v6 Text to 3D Model
Create a 3D model using Meshy v6 text-to-3D
Kling O3 Pro Text to Video
Kling O3 Standard Image to Video with Reference
Kling O3 Pro Image to Video with Reference
Audio2Audio
Audio Editing
Step Audio EditX
Voice Cloning
Upload a voice sample, transcribe it automatically with Whisper, then use Step-Audio EditX to clone that voice speaking your custom script. No trigger word needed.
Step Audio EditX for Voice Cloning
character design
character sheet
FaceAnalysis
Image2Image
Image Dataset
InsightFace
lora
lora training
Upload a reference face, point to your dataset, and InsightFace filters out images that don't match. Cosine, L2 Norm, and Euclidean distance all supported.
InsightFace for Filtering Character LoRA Dataset
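The three distance metrics listed above can be sketched with NumPy. The threshold value and the toy 3-d embeddings are assumptions for illustration (real face embeddings, such as InsightFace's, are much higher-dimensional):

```python
import numpy as np

def cosine_distance(a, b):
    # 1 minus the cosine similarity of two embedding vectors
    return 1.0 - float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def euclidean_distance(a, b):
    # Straight-line distance between the raw embeddings
    return float(np.linalg.norm(a - b))

def l2_norm_distance(a, b):
    # Euclidean distance after L2-normalizing both embeddings
    return float(np.linalg.norm(a / np.linalg.norm(a) - b / np.linalg.norm(b)))

ref = np.array([1.0, 0.0, 0.0])                      # reference face embedding
candidates = {"img_a": np.array([0.9, 0.1, 0.0]),    # close to the reference
              "img_b": np.array([0.0, 1.0, 0.0])}    # a different face

threshold = 0.3  # assumed cutoff; tune per metric and dataset
kept = [name for name, emb in candidates.items()
        if cosine_distance(ref, emb) < threshold]
print(kept)  # -> ['img_a']
```

Images whose embedding distance to the reference exceeds the threshold get filtered out of the LoRA dataset; a tighter threshold keeps only near-identical faces.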
Audio2Audio
Step Audio EditX
Voice Editing
Edit existing voice recordings with Step-Audio EditX. Change emotion, dialect, or style. Whisper transcribes your audio so you describe the edit, not the source.
Step Audio EditX for Voice Editing
Kling 3.0 Standard for Text to Video
Create videos using Kling 3.0 Standard
Kling 3.0 Standard for Image to Video
Animate images using Kling 3.0 Standard
Image Editing
Qwen
Qwen Image Edit 2511
VNCCS Utils
Create different poses of a person using the VNCCS custom node and Qwen Image Edit 2511
Qwen Image Edit 2511 and VNCCS Utils - Visual Pose
Audio Separation
Video to Audio
Upload a video, strip the audio, and split it into four clean stems (Bass, Drums, Other, and Vocals), then save your chosen stem as an MP3. No model required.
Audio Separation for Video to Audio
GPT Image 1.5 Text to Image
image-to-image
Lipsync
reference-image
seedream
upscaling
Video-conditioning
wan2.1_funControl
Vertical Video Lighting & Mood Shift Using Seedream + Wan
animation
character design
image to video
kling
video generation
Apply motion from a reference video to a still image with Kling 3.0 Pro.
Kling 3.0 Pro Motion Control
Create a Fashion Shoot - NanoBanana + Kling