💡 Core Insight • 🛠️ Env Installation • ⚙️ Data Process • 🚀 Training
- **2026.02.09** 💡 Highlight: Please check the Core Insight section! We explain how ReAlign enables efficient scaling by decoupling dependence on multimodal paired data.
- **2026.02.02** 🏖️ NOTE: The author is currently on a journey and will address the To-Do items upon return.
- **2026.02.02** ✨ We release the code of ReVision. Try training!
- **2026.01.31** 📝 We release the ReVision paper. [PAPER]
- **2025.06.10** ✨ We release the code of Unicorn. Try training!
- **2025.04.15** 🗂️ We release the Unicorn-1.2M & Unicorn-Instruction-471K datasets. [HF]
- **2025.03.28** 📝 We release the Unicorn paper. [PAPER]
Important
ReAlign leverages the Modality Gap phenomenon within the high-dimensional hyperspherical embedding space of multimodal contrastive learning to precisely map unpaired text representations into the visual representation distribution. Within this framework, text serves a dual role: the aligned text embeddings function as pseudo-visual representations providing equivalent semantics, while the original text serves as the supervision signal for the MLLM. This modality substitution mechanism decouples the model's dependence on expensive paired data, thereby serving as an efficient scaling law strategy that can be seamlessly integrated throughout the entire training lifecycle (spanning Pre-training, SFT, and RL). Theoretically, this paradigm is not limited to image-text pairs but can be generalized to any modalities possessing an aligned subspace.
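As a purely illustrative picture of this substitution, the sketch below shows how an unpaired caption could stand in for an image during training. All names (`text_encoder`, `realign_fn`, `projector`) are hypothetical placeholders, not the ReVision API.

```python
# Illustrative sketch only: `text_encoder`, `realign_fn`, and `projector`
# are hypothetical placeholders, not the ReVision implementation.
import torch


def build_pseudo_visual_tokens(caption: str, text_encoder, realign_fn, projector) -> torch.Tensor:
    """Use an unpaired caption in place of an image during MLLM training."""
    e_txt = text_encoder(caption)   # caption embedded into the shared contrastive space
    e_vis = realign_fn(e_txt)       # mapped into the visual distribution (see Step 2: ReAlign)
    return projector(e_vis)         # pseudo-visual tokens fed to the LLM

# The same caption is then reused as the language-modeling target,
# so the training step never requires a paired image.
```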
Create a conda virtual environment and activate it:
conda create -n ReVision python=3.10
conda activate ReVision

Basic requirements:
pip install --upgrade pip
pip install transformers==4.44.0
pip install torch torchvision xformers --index-url https://download.pytorch.org/whl/cu124

Install flash-attention:
pip install packaging
pip install flash-attn --no-build-isolation

Format your `dataset.json` as a dictionary containing two separate lists: `images` and `texts`.
- `images`: A list of dictionaries, each with an `id` and an `image` path.
- `texts`: A list of dictionaries, each with an `id` and `text` content.
{
"images": [
{
"id": "img_001",
"image": "0001.jpg"
},
{
"id": "img_002",
"image": "folder/0002.png"
}
],
"texts": [
{
"id": "txt_001",
"text": "This is a text sample description."
},
{
"id": "txt_002",
"text": "Another independent text entry."
}
]
}
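A quick, optional sanity check for this layout is sketched below; it assumes the field names shown in the example above.

```python
# Sketch: validate the expected dataset.json structure before embedding.
import json

with open("./data/dataset.json") as f:
    data = json.load(f)

# Every image entry needs an id and an image path; every text entry needs an id and text.
assert all({"id", "image"} <= entry.keys() for entry in data["images"])
assert all({"id", "text"} <= entry.keys() for entry in data["texts"])
print(f"{len(data['images'])} images, {len(data['texts'])} texts")
```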
Ensure your directory looks similar to this before running:
├── data/
│ ├── images/ # Root folder for images
│ └── dataset.json # The JSON index file above
├── models/
│ ├── llm2clip-openai/ # Local vision encoder path
│ └── llm2vec-llama3/ # Local text encoder path
└── embed.py
Run the script to generate embeddings. By default, this script runs Offline (using local model paths).
python embed.py \
--data_json "./data/dataset.json" \
--image_root "./data/images" \
--output_text_dir "./output/text_feats" \
--output_image_dir "./output/image_feats" \
--llm2clip_path "/path/to/local/llm2clip-model" \
--llm_model_name "/path/to/local/llm2vec-model" \
--bsz 512 \
--modality both
Arguments:
- `--modality`: Choose `both`, `text`, or `image`.
- `--bsz`: Batch size (default 1024; reduce to 512 or 256 if OOM occurs).
- `--online`: Add this flag to allow Hugging Face Hub access.
The script saves features in chunked .pkl files (default 200k records per file).
- `output/text_feats/text_embeds_1.pkl`
- `output/image_feats/image_embeds_1.pkl`
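To spot-check a saved chunk, you can load one file as sketched below. This assumes each chunk stores a dict mapping sample ids to embedding vectors; the exact on-disk layout is defined by `embed.py` and may differ.

```python
# Sketch: inspect one saved chunk (assumes a dict of {sample_id: embedding_vector}).
import pickle

with open("./output/text_feats/text_embeds_1.pkl", "rb") as f:
    chunk = pickle.load(f)

first_id, first_vec = next(iter(chunk.items()))
print(f"{len(chunk)} records, embedding dim {len(first_vec)}, e.g. id={first_id}")
```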
After generating the initial embeddings (Step 1), use this script to align the Text Embeddings into the Image Embedding Space. This process reduces the modality gap using the ReAlign method.
This script implements ReAlign, a robust, training-free statistical alignment strategy derived from our Fixed-frame Modality Gap Theory. It bridges the geometric misalignment by mapping the text representations ($e_y$) into the distribution of the image representations (anchored at $\mu_x$) in three steps; a minimal NumPy sketch follows the list.

1. **Anchor Alignment.** We first address the first-order distributional shift by eliminating the mean difference. The source embeddings are centered and then shifted to the target anchor:
   $$\dot{e}_y = (e_y - \mu_y) + \mu_x$$
2. **Trace Alignment.** Next, we adjust the scale of the residuals to match the global energy of the visual modality while preserving the spectral structure. A scaling factor $s$ is derived from the global trace (variance):
   $$s = \sqrt{\frac{\mathrm{Trace}_{img}}{\mathrm{Trace}_{txt}}} \quad \Rightarrow \quad \tilde{e}_y = \mu_x + s\,(e_y - \mu_y)$$
3. **Centroid Alignment.** While the affine transformation aligns statistics in Euclidean space, the subsequent spherical projection induces a secondary "Phantom Drift." We therefore perform a final centroid correction on the unit hypersphere, where $e'_y$ denotes the projection of $\tilde{e}_y$ onto the sphere and $\mu'$ its centroid, to strictly align the angular mass centers:
   $$e''_y = e'_y - \mu' + \mu_x$$
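The sketch below expresses the three steps in NumPy. It is an illustration of the formulas above, not the repo's `embed_ReAlign.py`, and it assumes the CLIP-style embeddings are already L2-normalized and that Trace denotes the trace of each modality's covariance matrix.

```python
# Minimal NumPy sketch of the three ReAlign steps (illustration of the formulas only).
import numpy as np


def realign(text_embeds: np.ndarray, image_embeds: np.ndarray) -> np.ndarray:
    """Map text embeddings (N_t, d) into the image embedding distribution (N_i, d)."""
    mu_y = text_embeds.mean(axis=0)     # text anchor
    mu_x = image_embeds.mean(axis=0)    # image anchor

    # Steps 1 + 2: anchor shift with trace-matched scaling of the residuals.
    trace_txt = ((text_embeds - mu_y) ** 2).sum(axis=1).mean()
    trace_img = ((image_embeds - mu_x) ** 2).sum(axis=1).mean()
    s = np.sqrt(trace_img / trace_txt)
    aligned = mu_x + s * (text_embeds - mu_y)            # \tilde{e}_y

    # Step 3: project onto the unit hypersphere, then correct the centroid drift.
    projected = aligned / np.linalg.norm(aligned, axis=1, keepdims=True)   # e'_y
    mu_prime = projected.mean(axis=0)                                      # \mu'
    return projected - mu_prime + mu_x                                     # e''_y
```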
Execute embed_ReAlign.py to process the .pkl files generated in Step 1.
python embed_ReAlign.py \
--input_dir "./output/text_feats" \
--img_input_dir "./output/image_feats" \
--output_dir "./output/aligned_feats" \
--chunk_size 10000 \
--strict_finite 1
Arguments:
- `--input_dir`: Path to the folder containing the text `.pkl` files (from Step 1).
- `--img_input_dir`: Path to the folder containing the image `.pkl` files.
- `--output_dir`: Where to save the aligned text embeddings.
- `--chunk_size`: Number of vectors to process in memory at once (default: 10,000).
- `--strict_finite`: Set to `1` (default) to abort immediately if `NaN` or `Inf` values are detected.
The script creates a trace/ subdirectory inside your output folder.
- Aligned text: `output/aligned_feats/trace/text_embeds_X_trace.pkl`
- Statistics: `output/aligned_feats/trace_stats.pkl` (contains the calculated means, scale factor, and trace values for validation).
Note: This script only transforms the Text embeddings. The Image embeddings remain unchanged as they serve as the "anchor" distribution.
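If you want a quick post-hoc check that the gap was actually closed, you can compare the centroids of the aligned text features and the image features. The sketch below assumes the dict-of-embeddings chunk layout mentioned in the Step 1 inspection sketch; the real format is defined by the scripts and may differ.

```python
# Sketch: compare centroids of aligned text features vs. image features.
import glob
import pickle
import numpy as np


def load_dir(pattern: str) -> np.ndarray:
    """Stack all embedding vectors found in .pkl chunks matching the pattern."""
    vecs = []
    for path in glob.glob(pattern):
        with open(path, "rb") as f:
            vecs.extend(pickle.load(f).values())
    return np.asarray(vecs)


txt = load_dir("./output/aligned_feats/trace/*.pkl")
img = load_dir("./output/image_feats/*.pkl")
print("centroid distance:", np.linalg.norm(txt.mean(axis=0) - img.mean(axis=0)))
```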
ReVision training is conducted on the aligned embeddings produced in Step 2.
Ensure your workspace is organized with the necessary JSON annotations and the feature files generated from the ReAlign step.
├── output/
│ └── aligned_feats/trace/ # [Input] The Aligned Embeddings (.pkl files) from Step 2
├── data/
│ ├── pretrain.json # Pretraining JSON
│ └── sft.json # SFT JSON
Before running the training scripts, you must manually configure the data loader to point to your feature directory.
1. Open the file `bunny/util/data_utils.py`.
2. Locate the `LazySupervisedDataset` class.
3. Find the `folder_path` variable inside the `__init__` method.
4. Modify it to point to your aligned features directory (e.g., the absolute path to `output/aligned_feats/trace`).
Example Modification in bunny/util/data_utils.py:
class LazySupervisedDataset(Dataset):
    def __init__(self, data_path, tokenizer, data_args):
        # ... (lines omitted)

        # [ACTION REQUIRED] Change this path to your Step 2 output directory
        # folder_path = '/old/path/...'
        folder_path = '/your/project/path/ReVision/output/aligned_feats/trace'

        # ... (the code will automatically load all .pkl files from this folder)

Note: The data loader uses a dictionary to map IDs from your JSON dataset to the embeddings in the `.pkl` files. Ensure the `id` fields in your JSON match the IDs used during the embedding process (Step 1).
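A quick way to catch mismatches before training is sketched below. It reuses the assumed dict-of-embeddings chunk layout from the earlier sketches and assumes the pretraining JSON is a list of samples that each carry an `id` field; adjust to your actual annotation format.

```python
# Sketch: verify that every id referenced in the training JSON has an embedding.
# Assumes .pkl chunks are dicts keyed by sample id and a list-of-samples JSON.
import glob
import json
import pickle

embedding_ids = set()
for path in glob.glob("./output/aligned_feats/trace/*.pkl"):
    with open(path, "rb") as f:
        embedding_ids |= set(pickle.load(f).keys())

with open("./data/pretrain.json") as f:
    missing = [item["id"] for item in json.load(f) if item["id"] not in embedding_ids]
print(f"{len(missing)} samples without embeddings")
```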
In this stage, we train the projector (MLP) to align the language model with the synthetic visual embeddings.
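For orientation, the sketch below shows a typical LLaVA-style two-layer MLP projector, i.e., the kind of component Stage 1 trains; the actual ReVision/Bunny projector architecture and hidden sizes may differ.

```python
# Illustrative only: a generic two-layer MLP projector (not the exact ReVision module).
import torch
import torch.nn as nn


class MLPProjector(nn.Module):
    def __init__(self, embed_dim: int, llm_hidden_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(embed_dim, llm_hidden_dim),
            nn.GELU(),
            nn.Linear(llm_hidden_dim, llm_hidden_dim),
        )

    def forward(self, pseudo_visual_embeds: torch.Tensor) -> torch.Tensor:
        # (batch, embed_dim) -> (batch, llm_hidden_dim), consumed as visual tokens by the LLM
        return self.net(pseudo_visual_embeds)
```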
Run the command:
cd ReVision
sh script/train/pretrain.sh
Configuration Notes:
- `--image_folder`: This argument in the script is ignored (it acts as a placeholder) because the code is hardcoded to load features from `folder_path` in `data_utils.py`.
- `--data_path`: Path to your pretraining JSON file.
In this stage, we perform Visual Instruction Tuning on the LLM backbone.
Run the command:
sh script/train/finetune_full.sh
Configuration Notes:
- `--pretrain_mm_mlp_adapter`: Must point to the `mm_projector.bin` saved from Stage 1.
- `--data_path`: Path to your SFT JSON file.
- Memory optimization: The script uses DeepSpeed ZeRO-3 (`zero3.json`) to handle the memory requirements of the 8B model.
- Release embedding quality detection code.
- Refactor and optimize training code.
- Release ReVision Pre-training data.
- Release ReVision SFT data.
- Release evaluation benchmarks.
- Release ReVision-8B model weights.
If you find this repository helpful, please consider citing our paper:
@misc{yu2026modalitygapdrivensubspacealignment,
title={Modality Gap-Driven Subspace Alignment Training Paradigm For Multimodal Large Language Models},
author={Xiaomin Yu and Yi Xin and Wenjie Zhang and Chonghan Liu and Hanzhen Zhao and Xiaoxing Hu and Xinlei Yu and Ziyue Qiao and Hao Tang and Xue Yang and Xiaobin Hu and Chengwei Qin and Hui Xiong and Yu Qiao and Shuicheng Yan},
year={2026},
eprint={2602.07026},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2602.07026},
}
@article{yu2025unicorn,
title={Unicorn: Text-only data synthesis for vision language model training},
author={Yu, Xiaomin and Ding, Pengxiang and Zhang, Wenjie and Huang, Siteng and Gao, Songyang and Qin, Chengwei and Wu, Kejian and Fan, Zhaoxin and Qiao, Ziyue and Wang, Donglin},
journal={arXiv preprint arXiv:2503.22655},
year={2025}
}

If you have any questions or are interested in collaborating, please reach out: yuxm02@gmail.com
