SeqAfford: Sequential 3D Affordance Reasoning via Multimodal Large Language Model

Chunlin Yu*, Hanqing Wang*, Ye Shi, Haoyang Luo, Sibei Yang, Jingyi Yu, Jingya Wang+
ShanghaiTech University, Shanghai, China
CVPR 2025

*Indicates Equal Contribution

+Indicates Corresponding Author

Sequential 3D Affordance Reasoning

We introduce SeqAfford, a multimodal large language model (MLLM) capable of the sequential affordance reasoning implied in human instructions: 1) Single Affordance Reasoning; 2) Sequential Affordance Reasoning; 3) Sequential Affordance Reasoning with Multiple Objects.

Abstract

3D affordance segmentation aims to link human instructions to touchable regions of 3D objects for embodied manipulation. Existing efforts typically adhere to single-object, single-affordance paradigms, where each affordance type or explicit instruction strictly corresponds to a specific affordance region, and are thus unable to handle long-horizon tasks. Such a paradigm cannot actively reason about complex user intentions that often imply sequential affordances. In this paper, we introduce the Sequential 3D Affordance Reasoning task, which extends the traditional paradigm by reasoning over complex user intentions and then decomposing them into a series of segmentation maps. Toward this, we construct the first instruction-based affordance segmentation benchmark that includes reasoning over both single and sequential affordances, comprising 180K instruction-point cloud pairs. Based on this benchmark, we propose SeqAfford, which unlocks a 3D multimodal large language model with additional affordance segmentation abilities, ensuring reasoning with world knowledge and fine-grained affordance grounding in a cohesive framework. We further introduce a multi-granular language-point integration module to enable 3D dense prediction. Extensive experimental evaluations show that our model excels over well-established methods and exhibits open-world generalization with sequential reasoning abilities.

Preparing the Instructions.

To better leverage the world knowledge of GPT-4o, we prompt it to generate diverse instructions based on four types of system prompts that take different modalities as input: a) the purely textual affordance type and object name; b) the mesh-rendered image of the object; c) the mesh-rendered image plus HOI (human-object interaction) images that reveal the object's affordances; d) the mesh-rendered image plus a textual description of the scenario.
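The snippet below is a minimal sketch of how such prompt variants could be assembled and sent to GPT-4o through the public OpenAI Python SDK. The helper names (build_messages, encode_image), the prompt wording, and the file names are illustrative assumptions, not the pipeline actually used to build the benchmark.

# Hypothetical sketch of the four prompt variants used for instruction generation.
# Only the OpenAI chat API calls follow the public SDK; everything else is assumed.
import base64
from openai import OpenAI

client = OpenAI()

def encode_image(path: str) -> str:
    """Return a base64 data URL for a local rendered image."""
    with open(path, "rb") as f:
        return "data:image/png;base64," + base64.b64encode(f.read()).decode()

def build_messages(variant: str, obj: str, affordance: str,
                   render: str = None, hoi: str = None, scenario: str = None):
    """Assemble chat messages for one of the four prompt variants (a-d)."""
    system = ("You write diverse, natural human instructions that imply the "
              f"affordance '{affordance}' of a {obj}, without naming it explicitly.")
    content = [{"type": "text", "text": f"Object: {obj}. Affordance: {affordance}."}]
    if variant in ("b", "c", "d") and render:   # mesh-rendered view of the object
        content.append({"type": "image_url", "image_url": {"url": encode_image(render)}})
    if variant == "c" and hoi:                  # human-object-interaction image
        content.append({"type": "image_url", "image_url": {"url": encode_image(hoi)}})
    if variant == "d" and scenario:             # textual scenario description
        content.append({"type": "text", "text": f"Scenario: {scenario}"})
    return [{"role": "system", "content": system},
            {"role": "user", "content": content}]

# Example: variant (c) with a rendered view plus an HOI image.
resp = client.chat.completions.create(
    model="gpt-4o",
    messages=build_messages("c", "mug", "grasp",
                            render="mug_render.png", hoi="mug_hoi.png"),
)
print(resp.choices[0].message.content)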

Comparison of Existing 3D Affordance Datasets with Ours.

#Point Cloud and #Instruction-Point Cloud Pairs denote the number of point clouds and instruction-point cloud pairs, respectively. X indicates that the dataset does not possess this attribute.

SeqAfford Method

Given the point clouds of the target objects and a complex human instruction, SeqAfford first reasons over the instruction and decomposes it into several hidden [SEG] tokens extracted from the last-layer embeddings, each representing an intermediate affordance segmentation result. Then, for each [SEG] token, the point features extracted by the 3D vision encoder dynamically interact with the [SEG] embedding before being sent to the decoder for mask generation. This interaction is achieved through multi-granular language-point integration, synergizing reasoning and affordance segmentation. We use LoRA for efficient fine-tuning.
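Below is a minimal PyTorch-style sketch of this inference flow, assuming placeholder modules (a point encoder, an integration block, a mask decoder) and a HuggingFace-style LLM interface; it illustrates the [SEG]-token routing described above, not the released implementation.

# Minimal sketch of the SeqAfford inference flow. Module names and shapes are
# placeholders; only the overall [SEG]-token routing follows the description above.
import torch
import torch.nn as nn

class SeqAffordSketch(nn.Module):
    def __init__(self, llm, point_encoder, integration, mask_decoder, seg_token_id):
        super().__init__()
        self.llm = llm                      # multimodal LLM fine-tuned with LoRA
        self.point_encoder = point_encoder  # 3D vision encoder over the point cloud
        self.integration = integration      # multi-granular language-point integration
        self.mask_decoder = mask_decoder    # per-point affordance mask head
        self.seg_token_id = seg_token_id    # vocabulary id of the [SEG] token

    @torch.no_grad()
    def forward(self, points, input_ids, attention_mask):
        # 1) Encode the point cloud into point features.
        point_feats = self.point_encoder(points)                      # (B, N, C)

        # 2) Run the LLM and keep the last-layer hidden states.
        out = self.llm(input_ids=input_ids, attention_mask=attention_mask,
                       output_hidden_states=True)
        hidden = out.hidden_states[-1]                                 # (B, T, D)

        # 3) Collect one embedding per [SEG] token in the sequence.
        seg_mask = input_ids.eq(self.seg_token_id)                     # (B, T)
        seg_embeds = hidden[seg_mask]                                  # (num_seg, D)

        # 4) Fuse each [SEG] embedding with the point features and decode a mask.
        masks = []
        for seg in seg_embeds:
            fused = self.integration(point_feats, seg)                 # (B, N, C)
            masks.append(self.mask_decoder(fused).sigmoid())           # (B, N) in [0, 1]
        return masks                                                   # one mask per step

In practice the [SEG] tokens are produced during autoregressive generation rather than given in input_ids; the sketch folds them into a single sequence for brevity.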

Multi-Granular Language-Point Integration Module.

We propose an interaction module between [SEG] tokens from the LLM and point features from the 3D vision encoder to synergize reasoning and segmentation in a cohesive framework. This module consists of a multi-granular feature propagation process and a point-language integration stage.
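As a rough illustration, the block below sketches one way the point-language integration stage could be realized, assuming the projected [SEG] embedding conditions point features at each granularity via cross-attention; the operators, dimensions, and the multi-granular propagation details are assumptions rather than the paper's exact design.

# Illustrative language-point integration block: a single projected [SEG] token
# modulates point features at every granularity through cross-attention.
import torch
import torch.nn as nn

class LanguagePointIntegration(nn.Module):
    def __init__(self, point_dim=256, lang_dim=4096, num_heads=8):
        super().__init__()
        self.lang_proj = nn.Linear(lang_dim, point_dim)   # project [SEG] into point space
        self.attn = nn.MultiheadAttention(point_dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(point_dim)
        self.ffn = nn.Sequential(
            nn.Linear(point_dim, 4 * point_dim), nn.GELU(),
            nn.Linear(4 * point_dim, point_dim),
        )

    def forward(self, multi_scale_feats, seg_embed):
        """
        multi_scale_feats: list of (B, N_l, C) point features at different granularities
        seg_embed:         (D,) last-layer embedding of one [SEG] token
        """
        # Broadcast the projected [SEG] embedding as a single-token language query.
        lang = self.lang_proj(seg_embed).view(1, 1, -1)                # (1, 1, C)

        fused = []
        for feats in multi_scale_feats:                                # coarse-to-fine levels
            lang_b = lang.expand(feats.size(0), -1, -1)                # (B, 1, C)
            # Point features attend to the language token (queries = points).
            attn_out, _ = self.attn(query=feats, key=lang_b, value=lang_b)
            x = self.norm(feats + attn_out)
            fused.append(x + self.ffn(x))
        # Levels are upsampled and merged elsewhere before the mask decoder.
        return fused

Using the points as attention queries keeps the per-point resolution needed for dense affordance masks while letting one language token modulate every granularity.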

Qualitative Results

SeqAfford understands human instructions and accurately segments the target affordance regions.

BibTeX

@inproceedings{yu2024seqafford,
  title={SeqAfford: Sequential 3D Affordance Reasoning via Multimodal Large Language Model},
  author={Yu, Chunlin and Wang, Hanqing and Shi, Ye and Luo, Haoyang and Yang, Sibei and Yu, Jingyi and Wang, Jingya},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year={2025}
}