Dahye Kim*, Xavier Thomas*, Deepti Ghadiyaram
We study how rich visual semantic information is represented within various layers and denoising timesteps of different diffusion architectures. We uncover monosemantic, interpretable features by leveraging k-sparse autoencoders (k-SAE). We substantiate our mechanistic interpretations via transfer learning, using lightweight classifiers on off-the-shelf diffusion models' features. On 4 datasets, we demonstrate the effectiveness of diffusion features for representation learning. We provide an in-depth analysis of how different diffusion architectures, pre-training datasets, and language-model conditioning impact visual representation granularity, inductive biases, and transfer learning capabilities. Our work is a critical step towards deepening the interpretability of black-box diffusion models.
- `diffc_image_classification/`: image classification experiments with diffusion features. Example run file: `diffc_image_classification/run.sh`. Please see the README for more details.
- `SD-KSAE/`: experiments with k-sparse autoencoders (k-SAE) on diffusion features.
  - Extract features: `python extract_feature.py`
  - Train the k-SAE: `python train_ksae.py`
- `LLaVA_Diffusion/`: setup of LLaVA with diffusion features. For detailed setup instructions, and to run the code, refer to the LLaVA repository.
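The image classification experiments train lightweight classifiers on frozen diffusion features. As a rough, self-contained sketch of such a linear probe (the feature dimension, class count, and synthetic data below are illustrative placeholders, not the repository's actual pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for frozen diffusion features X and labels y (illustrative shapes).
n, d, n_classes = 200, 32, 4
X = rng.standard_normal((n, d))
y = rng.integers(0, n_classes, size=n)
# Make the toy problem separable: shift each class's features along a random direction.
X += np.eye(n_classes)[y] @ (rng.standard_normal((n_classes, d)) * 3.0)

# A "lightweight classifier" here means a single linear layer trained with
# plain gradient descent on the softmax cross-entropy; the backbone stays frozen.
W = np.zeros((d, n_classes))
b = np.zeros(n_classes)

def softmax(logits):
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

onehot = np.eye(n_classes)[y]
for _ in range(300):
    p = softmax(X @ W + b)
    grad = (p - onehot) / n          # gradient of mean cross-entropy w.r.t. logits
    W -= 0.5 * (X.T @ grad)
    b -= 0.5 * grad.sum(axis=0)

acc = (np.argmax(X @ W + b, axis=-1) == y).mean()
```

The point of the probe is that any separability in `acc` comes entirely from the quality of the extracted features, since the classifier itself has almost no capacity.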
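The k-SAE used in `SD-KSAE/` is not specified in this README. As a minimal sketch of the general technique, a k-sparse autoencoder encodes an input, zeroes out all but the k largest latent activations, and decodes from that sparse code; the layer sizes and initialization below are illustrative assumptions, not what `train_ksae.py` actually does:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: d_in = feature dim, d_latent = dictionary size, k = active units.
d_in, d_latent, k = 64, 256, 8
W_enc = rng.standard_normal((d_in, d_latent)) / np.sqrt(d_in)
W_dec = rng.standard_normal((d_latent, d_in)) / np.sqrt(d_latent)

def ksae_forward(x):
    """Encode, keep only the top-k latent activations per row, then decode."""
    z = np.maximum(x @ W_enc, 0.0)                  # ReLU pre-activations
    idx = np.argpartition(z, -k, axis=-1)[:, -k:]   # indices of the k largest
    mask = np.zeros_like(z)
    np.put_along_axis(mask, idx, 1.0, axis=-1)
    z_sparse = z * mask                             # at most k nonzeros per row
    return z_sparse @ W_dec, z_sparse

# Stand-in for a batch of extracted diffusion features.
x = rng.standard_normal((4, d_in))
recon, z_sparse = ksae_forward(x)
```

In practice the weights are trained to minimize the reconstruction error of `recon` against `x`; the hard top-k constraint is what pushes each latent unit toward a monosemantic, interpretable feature.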

