Hypergraph Vision Transformers: Images are More than Nodes, More than Edges

Joshua Fixelle
University of Virginia
CVPR 2025
Comparison of graph structures across methods: (a) CNNs, (b) Vision Transformers, (c) ViG with a KNN-clustered GNN, (d) ViHGNN with clustered hyperedges, and (e) our proposed HgVT. Strong group edges are shown with solid lines and weak edges with dashed lines; hyperedges are shown as shaded regions, with less dominant hyperedges as gray dashed regions.

Abstract

Recent advancements in computer vision have highlighted the scalability of Vision Transformers (ViTs) across various tasks, yet challenges remain in balancing adaptability, computational efficiency, and the ability to model higher-order relationships. Vision Graph Neural Networks (ViGs) offer an alternative by leveraging graph-based methodologies but are hindered by the computational bottlenecks of clustering algorithms used for edge generation. To address these issues, we propose the Hypergraph Vision Transformer (HgVT), which incorporates a hierarchical bipartite hypergraph structure into the vision transformer framework to capture higher-order semantic relationships while maintaining computational efficiency. HgVT leverages population and diversity regularization for dynamic hypergraph construction without clustering, and expert edge pooling to enhance semantic extraction and facilitate graph-based image retrieval. Empirical results demonstrate that HgVT achieves strong performance on image classification and retrieval, positioning it as an efficient framework for semantic-based vision tasks.

Hypergraph Representation

Comparison of (a) hypergraph and (b) equivalent bipartite representation, showing five hyperedges. Hypergraphs are represented in bipartite form, with distinct paths for image patches (vertices) and hyperedges, and with connections defined by a dynamic adjacency matrix.
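As a concrete illustration of the bipartite representation, the sketch below builds a toy hypergraph's incidence matrix and embeds it into a single bipartite adjacency matrix, where vertices and hyperedges form the two parts and all connections run between parts. The example hypergraph (4 vertices, 2 hyperedges) and the dense-matrix encoding are hypothetical choices for exposition, not the paper's implementation.

```python
import numpy as np

# Toy hypergraph (hypothetical example): hyperedge 0 covers vertices {0, 1, 2},
# hyperedge 1 covers vertices {2, 3}.
hyperedges = [[0, 1, 2], [2, 3]]
num_vertices = 4

# Incidence matrix H: H[v, e] = 1 if vertex v belongs to hyperedge e.
H = np.zeros((num_vertices, len(hyperedges)))
for e, members in enumerate(hyperedges):
    H[members, e] = 1.0

# Equivalent bipartite adjacency: the vertex part and the hyperedge part are
# connected only through H, with no vertex-vertex or edge-edge links.
A = np.block([
    [np.zeros((num_vertices, num_vertices)), H],
    [H.T, np.zeros((len(hyperedges), len(hyperedges)))],
])
print(A.shape)  # (6, 6)
```

In HgVT the adjacency is dynamic (learned per input) rather than fixed as in this static toy example.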

Architecture Overview

HgVT Architecture Diagram, composed of stacked HgVT blocks with adjacency matrix A, vertex features X(V), and hyperedge features X(E). Edge attention flow is shown with gray arrows; input norms and residual adds are omitted for clarity.
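To make the roles of A, X(V), and X(E) concrete, here is a minimal sketch of one bipartite message-passing step: hyperedges aggregate features from their member vertices, then vertices aggregate back from hyperedges. The sizes, random features, and mean-style aggregation are assumptions for illustration; the actual HgVT blocks use attention-based updates as shown in the diagram.

```python
import numpy as np

rng = np.random.default_rng(0)
num_patches, num_edges, dim = 8, 3, 16          # hypothetical sizes

X_v = rng.standard_normal((num_patches, dim))   # vertex (patch) features X(V)
X_e = rng.standard_normal((num_edges, dim))     # hyperedge features X(E)
A = rng.random((num_patches, num_edges))        # soft vertex-edge adjacency

def normalize(M, axis):
    """Normalize weights along an axis so each aggregation is a weighted mean."""
    return M / (M.sum(axis=axis, keepdims=True) + 1e-6)

# Edge update: each hyperedge pools a weighted mean of its member vertices.
X_e_new = normalize(A, axis=0).T @ X_v
# Vertex update: each vertex pools a weighted mean of its connected hyperedges.
X_v_new = normalize(A, axis=1) @ X_e_new
```

Because both updates route through A, learning the adjacency jointly with the features is what lets the hypergraph structure adapt per image without an explicit clustering step.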

Expert Edge Pooling

Inspired by Mixture of Experts (MoE), Expert Edge Pooling treats each edge as an expert and uses learned gate projections to score their relevance to the final classification token (top). This enables dynamic pruning and encourages specialization, leading edges to cluster by semantic macro-classes as shown in the UMAP visualizations (bottom). See Appendix J for details.
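The gating-and-pruning idea can be sketched in a few lines: a learned projection scores each hyperedge "expert," only the top-k survive, and the survivors are softmax-weighted into a pooled representation. The sizes, random weights, and top-k rule here are illustrative assumptions, not the paper's exact pooling formulation.

```python
import numpy as np

rng = np.random.default_rng(1)
num_edges, dim = 6, 16                        # hypothetical sizes
X_e = rng.standard_normal((num_edges, dim))   # hyperedge features
W_gate = rng.standard_normal((dim,))          # gate projection (learned; random here)

# Score each hyperedge expert, then keep only the top-k (dynamic pruning sketch).
scores = X_e @ W_gate
k = 2
top = np.argsort(scores)[-k:]

# Softmax over the surviving experts, then pool their features.
weights = np.exp(scores[top]) / np.exp(scores[top]).sum()
pooled = weights @ X_e[top]                   # classification representation
```

Scoring edges against a single gate encourages each hyperedge to specialize, which is consistent with the semantic macro-class clustering seen in the UMAP plots.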

Hypergraph Interpretability

One of the key emergent properties of HgVT is its interpretability: the learned hypergraph forms a natural hierarchy over image regions. The interactive visualization above illustrates the tree-like structure extracted from the final layer of the HgVT-Ti model on samples from the ImageNet validation set. The middle row shows aggregated pooling over virtual vertices, while the bottom row highlights the focus of each connected hyperedge via patch brightness. See Appendix E for details.

BibTeX

@misc{fixelle2025hypergraphvisiontransformersimages,
      title={Hypergraph Vision Transformers: Images are More than Nodes, More than Edges}, 
      author={Joshua Fixelle},
      year={2025},
      eprint={2504.08710},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2504.08710}, 
}