MXR v1.4 Update

3 New GenAI Modes, Image Coherence Tools, Improved UI

ControlNet

Kind of like SkyNet but with more control.

We added 3 new modes that add structure to images. These modes provide more context for AI models, such as depth and separation between objects. Objects can pass in front of each other while maintaining their individuality. In color mode, objects can blend together to create new or odd shapes; this can produce interesting results, but sometimes you want objects to stay discrete. These modes also support native generation resolutions up to 1024x1024, though on some models increasing the resolution can result in odd artifacts.
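To see why a depth channel keeps objects discrete, here is a minimal NumPy sketch of depth-based compositing: the nearer pixel wins per-pixel, so shapes occlude each other instead of blending. This is a generic illustration of the idea, not MXR's actual pipeline; all names here are hypothetical.

```python
import numpy as np

def depth_composite(img_a, depth_a, img_b, depth_b):
    """Composite two RGB layers using per-pixel depth: the nearer
    (smaller-depth) pixel wins, so objects stay discrete instead of
    blending. Shapes: img_* (H, W, 3), depth_* (H, W)."""
    nearer = (depth_a <= depth_b)[..., None]  # broadcast over RGB
    return np.where(nearer, img_a, img_b)

# Two 2x2 layers: A is red and nearer only in the left column.
img_a = np.zeros((2, 2, 3)); img_a[..., 0] = 1.0  # red
img_b = np.zeros((2, 2, 3)); img_b[..., 2] = 1.0  # blue
depth_a = np.array([[0.2, 0.9], [0.2, 0.9]])
depth_b = np.full((2, 2), 0.5)

out = depth_composite(img_a, depth_a, img_b, depth_b)
# Left column shows layer A (red), right column shows layer B (blue).
```

Without the depth test, the two layers would have to be blended (as in color mode), which is where the "new or odd shapes" come from.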

Key Features

3 New GenAI Modes

Image Coherence Tools

Expanded Nodes Panel

Custom LoRAs for Supported Models
Getting Started

Go to the A.I. section and click on the MX Fusion tab. Type in your prompt and hit start. That's it.

Well, that's never really it. Here are some best practices to help you get the best-looking images and FX.
Blank Screens - Running genAI models for the first time takes 5-10 minutes. A black screen is displayed while the models are downloaded and built. Turn on "Console" in the advanced section of the AI tab to see this process in action. Compiling models is CPU intensive. Keep MXR open to speed up the process. Minimizing the app or multitasking slows down this process.
GPUs - A 4090 is recommended; 30-series cards work as well. Video memory is also very important when running models: a 4090 with 24GB is needed for ControlNet models running multiple inference steps. Running the color model on cards with less video memory is feasible but may result in "Fatal Error" crashes.
Don't Feed The A.I.

Prompts - Jokes aside, the more you feed the AI, both visually and with prompts, the better the results you'll get. Be descriptive, and use words that cover everything from the content style, like "anime", to camera effects, like "shallow depth-of-field". Check out the genAI community to learn more about prompt writing. You can even ask ChatGPT to write prompts for you.

For example: "Female Robot Head in profile in a 3D cinematic high quality masterpiece shallow depth of field style"
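Prompts like the one above follow a simple pattern: subject first, then style descriptors, then camera effects. A tiny, hypothetical helper (not an MXR API) makes the pattern explicit:

```python
def build_prompt(subject, style_words=(), camera_words=()):
    """Assemble a prompt from a subject plus style and camera
    descriptors. Hypothetical helper for illustration only."""
    parts = [subject, *style_words, *camera_words]
    return " ".join(p.strip() for p in parts if p.strip())

prompt = build_prompt(
    "Female Robot Head in profile",
    style_words=["3D cinematic", "high quality", "masterpiece"],
    camera_words=["shallow depth of field"],
)
```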
Parameters - Several MX Dif settings can be changed in real time. The two Step Schedule settings are the most powerful. The lower Step Schedule 1 is set, the more the AI will dream: you get more resolved images, but less of the original motion and shape of the MXR VFX. Step Schedule 2 adds detail; higher is better, up to 50, but high values can sometimes cause images to look overly contrasted and burnt.
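One way to picture the two Step Schedule values is as the start and end of a denoising run. The sketch below is an assumption about how such a schedule could map onto inference steps, for intuition only; it is not MXR's actual implementation.

```python
def plan_inference_steps(step1, step2, max_steps=50):
    """Illustrative mapping of two Step Schedule values onto a
    diffusion run (an assumption for intuition, not MXR's code).

    - step1: starting step. Lower values start earlier in the noise
      schedule, so the AI "dreams" more and keeps less of the
      input's motion and shape.
    - step2: final step, clamped to max_steps; very high values can
      over-contrast ("burn") the image.
    """
    step2 = min(step2, max_steps)
    start = max(0, min(step1, step2 - 1))
    return list(range(start, step2))

dreamy = plan_inference_steps(step1=2, step2=30)     # long run, more dreaming
faithful = plan_inference_steps(step1=20, step2=30)  # short run, keeps input
```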
Custom Models and LoRAs - If you're having trouble, try enabling the debug window on the MX Dif tab to see the console window and note any errors. Models are very GPU-memory intensive, which can be problematic on GPUs with less memory.

To change the model, LCM, VAE, or LoRA, you can copy/paste the file location of downloaded weights or copy a Hugging Face ID. For instance:

File Directory:
c:\[Your Safetensors]\safetensor.safetensor

Hugging Face ID:
StabilityAI/sd-turbo
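The path-or-ID behavior above can be sketched as a small resolver: local file paths look like filesystem paths, while Hugging Face IDs look like "owner/repo". This is a hypothetical helper mirroring the copy/paste behavior described, not MX Fusion's internal code.

```python
import os

def resolve_weights(source):
    """Classify a weights source string as a local file path or a
    Hugging Face repo ID (hypothetical helper, for illustration)."""
    # Windows drive paths ("c:\...") or existing files are local.
    if os.path.isabs(source) or os.path.exists(source) or "\\" in source:
        return ("file", source)
    # Hugging Face IDs look like "owner/repo".
    if source.count("/") == 1 and not source.startswith("/"):
        return ("huggingface", source)
    raise ValueError(f"Unrecognized weights source: {source}")

kind, _ = resolve_weights("StabilityAI/sd-turbo")  # -> "huggingface"
```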

Image by GalleonLisette on Civitai
Custom Models - MX Fusion is compatible with Stable Diffusion-trained models such as SD-Turbo (default) and Kohaku-v2. Copy "KBlueLeaf/kohaku-v2.1" into the Model dialog in MX Fusion to download it, or go to:

https://huggingface.co/KBlueLeaf/kohaku-v2.1
Custom LoRAs - Combine multiple LoRAs and adjust their weighting by adding an array using the plus sign. One of our favorite LoRAs is Dreamy XL for SDXL Turbo.

You can copy this Hugging Face ID into MX Fusion as a LoRA, or manually download it from Hugging Face: Lykon/dreamshaper-xl-1-0
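Combining weighted LoRAs boils down to adding weighted low-rank deltas to the base model's weights. Here is the standard LoRA math in NumPy as a sketch; it illustrates the technique generically and is not MX Fusion's internal implementation.

```python
import numpy as np

def apply_loras(base_w, loras):
    """Merge weighted LoRAs into a base weight matrix.
    Each LoRA is (A, B, weight): A is (r, in), B is (out, r), and
    its contribution is weight * (B @ A). Standard LoRA math,
    shown as a sketch."""
    merged = base_w.copy()
    for A, B, weight in loras:
        merged += weight * (B @ A)
    return merged

rng = np.random.default_rng(0)
base = rng.standard_normal((4, 4))
A1, B1 = rng.standard_normal((2, 4)), rng.standard_normal((4, 2))
A2, B2 = rng.standard_normal((2, 4)), rng.standard_normal((4, 2))

# Blend two LoRAs at 70% / 30% strength.
merged = apply_loras(base, [(A1, B1, 0.7), (A2, B2, 0.3)])
```

Adjusting the per-LoRA weight is what the array-with-plus-sign UI exposes: each entry scales how strongly that LoRA's delta is applied.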
Visual FX - The FX section allows for 4 different types of processing for MX.RT output:

1. Normal - Native 512x512, upsampled to your screen size.

2. Mirror - Takes the square generated video and creates a mirror image, doubling the image horizontally. Native resolution: 1024x512

3. Quad - Takes the square generated image and mirrors it left-to-right and top-to-bottom. Native resolution: 1024x1024

You can preserve the native aspect ratio by disabling stretching.
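The Mirror and Quad effects above are simple flips and concatenations, which is also where their native resolutions come from. A minimal NumPy sketch (generic image math, not MXR's renderer):

```python
import numpy as np

def mirror_fx(img):
    """Mirror a square frame horizontally: (H, W, 3) -> (H, 2W, 3)."""
    return np.concatenate([img, img[:, ::-1]], axis=1)

def quad_fx(img):
    """Mirror left-to-right and top-to-bottom: (H, W, 3) -> (2H, 2W, 3)."""
    row = mirror_fx(img)
    return np.concatenate([row, row[::-1]], axis=0)

frame = np.zeros((512, 512, 3), dtype=np.uint8)
assert mirror_fx(frame).shape == (512, 1024, 3)  # Mirror: 1024x512
assert quad_fx(frame).shape == (1024, 1024, 3)   # Quad: 1024x1024
```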
Mirror Effect
Quad Effect
Try the Demo today for Free - on Steam

Quality-of-Life Improvements

The Nodes panel is now found in the right toolbar with an expanding tray for better usability.

Experimental:
- Video Record feature
- Video background playback
- Volumetric FX

Roadmap

We've already started on features for our next update. Here is a preview of things to come:

1. Volumetric Smoke FX
2. Volumetric Fire FX
3. Volumetric Water FX
4. New portal and VJ FX
5. AI Creative Upsampling
6. GenAI in-painting

Try MXR For Free on Steam

Also available for purchase on the Epic Games Store

Disclaimer
- Stream Diffusion is provided under the terms of the Apache 2.0 License, and certain portions are provided under non-commercial research licenses. By using this software, you agree to comply with the terms and conditions outlined in these licenses. (Apache 2.0 License)
Stability AI models are provided under non-commercial research licenses; Stability AI allows companies with revenue below $1M a year to use their software. (Stability License)
This software is provided on an "AS IS" basis, without warranties or conditions of any kind, either express or implied, including, without limitation, any warranties of merchantability, fitness for a particular purpose, non-infringement, or title. In no event shall Pull LLC or its contributors be liable for any damages arising in connection with the software, whether direct, indirect, incidental, or consequential. Liability Limitation: Under no circumstances shall Pull LLC be responsible for any loss or damage that results from the use of this software, including but not limited to data loss, business interruption, or financial losses. You use this software at your own risk.

This Stability AI Model is licensed under the Stability AI Community License, Copyright ©  Stability AI Ltd. All Rights Reserved