OneStopVision is an open-source toolkit offering a comprehensive suite of algorithms for face and body analysis, landmark extraction, and ControlNet integration in Stable Diffusion.
Updated Apr 17, 2024 · Python
👆 PyTorch implementation of "Ctrl-V: Higher Fidelity Video Generation with Bounding-Box Controlled Object Motion"
Goob Workshop
diffusers for CoreML
GPT-based pose image generator for conditioning SD models with ControlNet OpenPose
Adds a ControlNet preprocessing feature to the Extras tab
ControlNet using the facial landmark condition for de-identification purposes
An example Stable Diffusion ControlNet Discord bot
Hooocus is a (H)eadless variant of Fooocus – Focus on prompting and generating. This is still very much a work in progress.
A small puzzle game with autogenerated images
Text-to-text / image-to-text architecture prompt generator for Stable Diffusion or any other image-generation platform, based on Ollama LLMs
Using ControlNet right in Blender.
An out-of-the-box, integrated ControlNet training script
A lightweight neural network for controlling spatial information in Stable Diffusion, tuned on Chinese data
Official repository of "Towards Learning Contrast Kinetics with Multi-Condition Latent Diffusion Models"
An implementation of ControlNet as described in "Adding Conditional Control to Text-to-Image Diffusion Models" published by Zhang et al.
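The core idea from Zhang et al. is to train a copy of the diffusion model's encoder blocks on the conditioning signal, connected back through zero-initialized convolutions so the control branch contributes nothing at the start of training. A minimal NumPy sketch of that initialization property (block shapes and names are illustrative, not from any of the repositories listed here):

```python
import numpy as np

def base_block(x, w):
    # Frozen block of the pretrained diffusion model (here just a linear map)
    return x @ w

def control_branch(x, cond, w_copy, w_zero):
    # Trainable copy processes the input plus the condition;
    # a zero-initialized "convolution" gates its output
    return (x + cond) @ w_copy @ w_zero

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))      # feature map entering the block
cond = rng.normal(size=(4, 8))   # encoded control signal (pose, edges, ...)
w = rng.normal(size=(8, 8))      # frozen pretrained weights
w_copy = w.copy()                # trainable copy, initialized from w
w_zero = np.zeros((8, 8))        # zero conv: no contribution at init

out = base_block(x, w) + control_branch(x, cond, w_copy, w_zero)

# At initialization the zero conv nullifies the control branch,
# so the combined model reproduces the frozen model exactly:
assert np.allclose(out, base_block(x, w))
```

As `w_zero` is trained away from zero, the control branch gradually injects the conditioning signal without disturbing the pretrained behavior at the start.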
Code and data for the CVPR 2024 paper "EFHQ: Multi-purpose ExtremePose-Face-HQ dataset"
Outfit try-on app built with Gradio