The SuperImage Network currently offers multiple models for generating images from text, including both open-source and closed-source models. Users select which model to run when issuing generation commands through the service.
Model developers can also submit their models to the SuperImage Network; the community then votes on whether to incorporate the model into the distributed network.
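For illustration only, the sketch below shows how a client might specify a model when submitting a generation request. The endpoint URL, payload fields, and the "flux-dev" model identifier are assumptions, not the documented SuperImage Network API.

```python
# Hypothetical sketch only: the endpoint, payload fields, and "flux-dev"
# identifier are assumptions for illustration, not the documented API.
import requests

payload = {
    "model": "flux-dev",  # the model chosen by the user (hypothetical identifier)
    "prompt": "a snow-covered mountain village at dusk",
    "width": 1024,
    "height": 1024,
}
resp = requests.post(
    "https://superimage.example/api/v1/generate", json=payload, timeout=120
)
resp.raise_for_status()
with open("result.png", "wb") as f:
    f.write(resp.content)  # assumes the service returns raw image bytes
```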
FLUX Dev
A high-performance, open-weight version of FLUX, offering state-of-the-art image generation with strong prompt adherence, visual quality, image detail, and output diversity.
Development Team: Black Forest Labs
Launch Date: 2024.8
Model Parameters: 12B
Features: Exceptional image generation quality, versatile application capabilities, optimized performance and efficiency, open-source availability, and strong multilingual and cross-cultural capabilities.
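Because the FLUX.1 [dev] weights are openly available, the model can also be run locally, independently of the SuperImage Network. Below is a minimal sketch using the Hugging Face diffusers library, assuming diffusers v0.30 or later, a CUDA GPU, and access to the gated black-forest-labs/FLUX.1-dev checkpoint.

```python
# Minimal local sketch: text-to-image with FLUX.1 [dev] via Hugging Face diffusers.
# Assumes diffusers >= 0.30, a CUDA GPU, and access to the gated FLUX.1-dev weights.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    torch_dtype=torch.bfloat16,
)
pipe.enable_model_cpu_offload()  # trades some speed for lower VRAM usage

image = pipe(
    prompt="a watercolor lighthouse on a rocky coast at dawn",
    height=1024,
    width=1024,
    guidance_scale=3.5,
    num_inference_steps=50,
).images[0]
image.save("flux_dev_sample.png")
```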
Sana
The highest-performance version of Sana, offering state-of-the-art image generation with top-notch image quality, versatile application capabilities, optimized performance, and output diversity.
Development Team: NVIDIA (jointly developed)
Launch Date: 2025.1
Model Parameters: 7B
Features: Images with rich details and lifelike textures, powerful editing and customization capabilities.
CogView4
CogView4 is a leading open-source text-to-image generation model. The images it generates have realistic detail and are vividly lifelike. It adapts quickly and flexibly to varied needs and suits a wide range of application scenarios.
Development Team: Zhipu Qingyan Expert Team
Launch Date: 2025.3
Model Parameters: 123M
Features: Text-to-image generation, high quality and coherence, wide range of applications.
HiDream-I1
HiDream-I1 is a new open-source image generation model that achieves state-of-the-art image quality in just a few seconds.
Development Team: HiDream.ai
Launch Date: 2025.4
Model Parameters: 17B
Features: HiDream-I1 supports 4K ultra-high-definition image generation, features advanced text comprehension, multi-style adaptation, and precise detail control. It employs an efficient diffusion architecture, making it ideal for professional-grade design needs, with optimized inference speed and multimodal input support.