Introducing MX Fusion
We're excited to introduce MX Fusion, short for Mixer Diffusion, our custom real-time genAI workflow. This first incarnation runs several real-time models, including SD Turbo and SDXL Turbo, plus compatible LoRAs. VFX from MXR are routed, along with text prompts, to local offline models to generate an endless variety of effects. MX Fusion combines 3D source content and text prompts to create a completely new workflow: 3D-2-VIDEO.
Key Features
4K Upsampling
Anti-Aliasing
Multiple Models (SD Turbo, etc.)
Custom LoRAs for Supported Models
Getting Started
Go to the A.I. section and click on the MX Fusion tab. Type in your prompt and hit Start. That's it.
Well, it's never really that simple. Here are some best practices to help you get the best-looking images and FX.
Running MX Fusion for the first time - The first run requires several models to be downloaded and then compiled. This process can take 5-20 minutes depending on your internet connection and GPU. Turn on debug mode in the MX Dif tab to see this process in action.
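MX Fusion handles all of this for you, but if you're curious why the first launch is slow, here is a rough, hypothetical sketch of the same download-then-compile pattern using the open-source diffusers library. This is not MX Fusion's actual code; the model ID and settings are just illustrative.

```python
# Hypothetical sketch of a first-run setup with the diffusers library.
# NOT MX Fusion's actual code; it only illustrates why the first launch
# is slow: weights are downloaded, then the model is compiled.
import torch
from diffusers import AutoPipelineForImage2Image

# The first call downloads several GB of weights into the local cache;
# later runs load from disk, which is much faster.
pipe = AutoPipelineForImage2Image.from_pretrained(
    "stabilityai/sd-turbo", torch_dtype=torch.float16
).to("cuda")

# Optional compilation trades a one-time wait for faster frames later.
pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead")
```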
Settings - You're likely running MXR and MX Fusion on the same GPU, which means the two will be competing for resources. It's best to go into Settings and cap the frame rate at 30 or 45 fps.
Don't Feed The A.I.
Prompts - Jokes aside, the more you feed the AI, both visually and with prompts, the better the results you'll get. Be descriptive and use words that cover everything from the style of the content, like "anime", to camera effects, like "shallow depth-of-field". Check out the genAI community to learn more about prompt writing. You can even ask ChatGPT to write prompts for you.
For example: "Female Robot Head in profile in a 3D cinematic high quality masterpiece shallow depth of field style"
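If you like to script your prompts, one simple pattern is to assemble a subject plus style and camera descriptors into a single string. A tiny hypothetical sketch in Python (the descriptors are just examples):

```python
# Hypothetical prompt-building helper; MX Fusion only needs the final string.
subject = "female robot head in profile"
style = ["3D", "cinematic", "high quality", "masterpiece"]
camera = ["shallow depth of field"]

prompt = ", ".join([subject] + style + camera)
print(prompt)
# female robot head in profile, 3D, cinematic, high quality, masterpiece, shallow depth of field
```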
Parameters - Several MX Dif settings can be changed in real-time. The two Step Schedule settings are the most powerful. The lower the Step Schedule 1 value, the more the AI will dream: you get more fully resolved images, but less of the original motion and shape of the MXR VFX. Step Schedule 2 adds detail; higher values are better, up to 50, but very high values can sometimes make images look overly contrasted and burnt.
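How the Step Schedule sliders map onto the sampler is internal to MX Fusion, but you can feel out the same dream-versus-fidelity tradeoff in code. Here's a hypothetical sketch using the open-source diffusers img2img pipeline; the prompt, file name, and parameter values are placeholders, and MX Fusion's sampler may work differently.

```python
# Hypothetical illustration of the dream-vs-fidelity tradeoff using the
# diffusers img2img pipeline; MX Fusion's internal sampler may differ.
import torch
from diffusers import AutoPipelineForImage2Image
from diffusers.utils import load_image

pipe = AutoPipelineForImage2Image.from_pretrained(
    "stabilityai/sd-turbo", torch_dtype=torch.float16
).to("cuda")

frame = load_image("frame.png").resize((512, 512))  # stand-in for an MXR frame

# Denoising more of the image (strength near 1.0) lets the AI dream:
# more fully resolved output, but less of the source motion and shape.
dreamy = pipe("anime cityscape", image=frame, num_inference_steps=2,
              strength=1.0, guidance_scale=0.0).images[0]

# Denoising less keeps the output closer to the original VFX.
faithful = pipe("anime cityscape", image=frame, num_inference_steps=4,
                strength=0.5, guidance_scale=0.0).images[0]
```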
Custom Models and LoRAs - If you're having trouble, try enabling the debug window on the MX Dif tab to see the console output and note any errors. Models are very GPU-memory intensive, which can be problematic on GPUs with less memory.
To change the model, LCM, VAE, or LoRA, you can copy/paste the file location of downloaded weights or copy Hugging Face IDs. For instance:
File Directory:
c:\[Your Safetensors]\safetensor.safetensor
Hugging Face ID:
stabilityai/sd-turbo
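If you're experimenting outside of MXR, the same two addressing styles work with the open-source diffusers library. A hypothetical sketch (the local paths and commented-out calls are placeholders, not files that ship with MXR):

```python
# Hypothetical sketch: loading weights by local path or Hugging Face ID
# with the diffusers library. Paths below are placeholders.
import torch
from diffusers import StableDiffusionPipeline

# From a Hugging Face ID (downloads into the local cache):
pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/sd-turbo", torch_dtype=torch.float16
)

# From a local single-file checkpoint:
# pipe = StableDiffusionPipeline.from_single_file(
#     r"c:\models\my_model.safetensors", torch_dtype=torch.float16
# )

# Attach a LoRA, again by local path or Hugging Face ID:
# pipe.load_lora_weights(r"c:\loras\my_lora.safetensors")
```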
Image by GalleonLisette on Civitai
Visual FX - The FX section allows for several types of processing for MX.RT output:
1. Normal - Native 512x512, upsampled to your screen size.
2. Mirror - Takes the square generated video and mirrors it, doubling the image horizontally. Native resolution: 1024x512.
3. Quad - Takes the square generated image and mirrors it left-to-right and top-to-bottom. Native resolution: 1024x1024.
You can preserve the native aspect ratio by disabling stretching. A minimal sketch of the reflection math appears after the effect captions below.
Mirror Effect
Quad Effect
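Under the hood these are simple reflections of the square output. A minimal NumPy sketch of the math (not MX Fusion's implementation; the zero-filled frame is a stand-in for a generated image):

```python
# Minimal NumPy sketch of the Mirror and Quad reflections.
# Not MX Fusion's implementation; it only shows the math.
import numpy as np

frame = np.zeros((512, 512, 3), dtype=np.uint8)  # stand-in for a generated frame

# Mirror: reflect left-to-right, doubling the width -> 1024x512 output.
mirror = np.concatenate([frame, frame[:, ::-1]], axis=1)

# Quad: mirror horizontally, then reflect top-to-bottom -> 1024x1024 output.
quad = np.concatenate([mirror, mirror[::-1]], axis=0)

print(mirror.shape, quad.shape)  # (512, 1024, 3) (1024, 1024, 3)
```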
Try the demo today for free on Steam
Quality-of-Life Improvements
Many little UI elements have been improved. For example:
1. The FX button in the right-side toolbar now links to FX in the Content Browser. Below the FX icon we added a new icon for post-processing effects such as color grading, chromatic aberration, exposure, etc.
2. Clicking on an item in the outliner now changes the focus of the details panel.
We also refactored how assets are loaded to improve initial app load times. However, this can cause a stutter when dragging and dropping items from the Content Browser into your scene for the first time.
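In broad strokes, deferred loading looks like the hypothetical sketch below: the cost moves from startup to the first use of each asset, which is the stutter you may notice. This is not MXR's actual code, just the general pattern.

```python
# Hypothetical sketch of deferred (lazy) asset loading; not MXR's actual code.
# Startup gets faster because nothing is loaded up front, but the first use
# of an asset pays the load cost, which can appear as a brief stutter.
from functools import lru_cache

@lru_cache(maxsize=None)
def get_asset(path: str) -> bytes:
    # The expensive disk read happens only on first access;
    # later calls return the cached bytes immediately.
    with open(path, "rb") as f:
        return f.read()
```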