Publication Date
12-1-2025
Document Type
Article
Publication Title
International Journal of Information Management Data Insights
Volume
5
Issue
2
DOI
10.1016/j.jjimei.2025.100355
Abstract
In the rapidly advancing field of visual search technology, traditional methods that rely only on visual features often struggle with accuracy and relevance. This challenge is particularly evident in e-commerce, where precise product recommendations are critical, and it is further complicated by keyword stuffing in product descriptions. To address these limitations, this study introduces BiLens, a multimodal recommendation framework that integrates visual and textual information. BiLens leverages large language models (LLMs) to generate descriptive captions from image queries, transforms those captions into word embeddings, and extracts visual features using Vision Transformers (ViT). The visual and textual representations are integrated through an early fusion strategy and compared via cosine similarity, enabling deeper contextual understanding and more accurate, relevant product recommendations that better capture customer intent. A comprehensive evaluation was conducted on Amazon product data across five categories, testing various image captioning models and embedding methods, including BLIP-2, ViT-GPT2, BLIP-Image-Captioning-Large, Florence-2-large, GIT (microsoft/git-base-coco), Word2Vec, GloVe, BERT, and ELMo. The combination of Florence-2-large and BERT emerged as the most effective, achieving a precision of 0.81±0.14 and an F1 score of 0.49±0.16. This setup was further validated on the Myntra dataset, demonstrating generalizability with a precision of 0.59±0.27, a recall of 0.47±0.25, and an F1 score of 0.52±0.24. Comparisons with image-only and text-only baselines confirmed the superiority of the fusion-based approach, with statistically significant improvements in F1 scores, underscoring BiLens's ability to deliver more accurate, context-aware product recommendations.
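The abstract's pipeline (early fusion of visual and textual representations, ranking by cosine similarity) can be sketched as follows. This is a minimal illustration with random placeholder vectors, not the paper's implementation: in BiLens the visual vector would come from a ViT and the textual vector from BERT embeddings of an LLM-generated caption, and all names here (`early_fusion`, dimension 768) are illustrative assumptions.

```python
import numpy as np

def l2_normalize(v: np.ndarray) -> np.ndarray:
    """Scale a vector to unit length so neither modality dominates the fusion."""
    norm = np.linalg.norm(v)
    return v / norm if norm > 0 else v

def early_fusion(visual: np.ndarray, textual: np.ndarray) -> np.ndarray:
    """Early fusion: normalize each modality, then concatenate into one vector."""
    return np.concatenate([l2_normalize(visual), l2_normalize(textual)])

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between two fused representations."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy example: one query and two candidate products, each represented by
# placeholder 768-dim "visual" and "textual" vectors (dimension is assumed).
rng = np.random.default_rng(0)
query = early_fusion(rng.normal(size=768), rng.normal(size=768))
candidates = {
    "product_a": early_fusion(rng.normal(size=768), rng.normal(size=768)),
    "product_b": early_fusion(rng.normal(size=768), rng.normal(size=768)),
}

# Rank candidates by similarity of their fused vectors to the query's.
ranked = sorted(candidates,
                key=lambda k: cosine_similarity(query, candidates[k]),
                reverse=True)
```

In a real system the recommendation step would score every catalog item's fused vector against the query's and return the top-k matches.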
Keywords
Generative AI, Image captioning, Large language models, Recommendation system, Vision transformers, Word embeddings
Creative Commons License
This work is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 4.0 License.
Department
Applied Data Science
Recommended Citation
Anitha Balachandran and Mohammad Masum. "A multimodal framework for enhancing E-commerce information management using vision transformers and large language models" International Journal of Information Management Data Insights (2025). https://doi.org/10.1016/j.jjimei.2025.100355