Publication Date

Summer 2025

Degree Type

Thesis

Degree Name

Master of Science (MS)

Department

Computer Engineering

Advisor

Stas Tiomkin; Carlos Rojas; Jorjeta Jetcheva

Abstract

Research in useful information extraction has been motivated by the increasing demand to extract insights from unstructured data, and by the need to store and transmit large volumes of information that often originate in unstructured sources such as videos. Research in rate-distortion theory and the information bottleneck paved the way for understanding and guiding the design of lossy encoders capable of extracting relevant information. Independently, research in deep representation learning has enabled numerous applications for unstructured, high-dimensional data such as images. However, the interpretability of these deep learning methods has remained limited. Several desirable properties of learned representations have been suggested, including disentanglement. We suggest that the Multivariate Information Bottleneck (MIB) is a natural approach to interpretable deep representation learning, as it can be used to implicitly obtain properties that previously required explicit optimization. To test this, we explore a particular instance of MIB, a parallel structure, as a fitting framework for disentanglement. While the structure can be applied to generative disentanglement, we focus on a variation involving a relevant variable and suggest a corresponding evaluation metric. We show experimentally that while relevant disentanglement is achievable with current methods, it remains limited due to non-trivial challenges. We discuss these limitations and directions for future work.
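
For reference (this formulation is not part of the abstract itself), the standard information bottleneck objective seeks a compressed representation T of an input X that remains informative about a relevance variable Y; a minimal sketch of the Lagrangian is:

\min_{p(t \mid x)} \; I(X;T) \;-\; \beta \, I(T;Y)

where \beta trades off compression against preserved relevance. The multivariate formulation referenced in the abstract generalizes this trade-off from a single bottleneck variable to a structured set of interacting variables.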

Available for download on Sunday, September 20, 2026
