QDR Labs Advanced AI Analytics and Research and VR Research


Renormalized Volume as an Information-Capacity Functional for Hyperbolic Generative Models

Sepulveda-Jimenez, Alfredo

Zenodo, 2026

Abstract

Hyperbolic representation learning has matured into a substantial subfield over the last decade, yet it lacks a principled, finite, conformally invariant capacity functional analogous to the differential entropy of Euclidean information theory. We propose that the renormalized volume Vren(g+) of the conformally compact Einstein (CCE) bulk geometry of a hyperbolic generative model serves precisely this role. Building on Anderson's Gauss–Bonnet identity in dimension four [And01] and the Henningson–Skenderis holographic renormalization scheme [HS98], we (i) define Vren rigorously, distinguishing the canonical Fefferman–Graham scheme from the practical Poincaré-radial scheme used by all popular hyperbolic embedding methods; (ii) give a sample-based estimator and analyse its bias–variance trade-off; (iii) formulate three conjectures, namely a topological capacity bound (C1), a holographic mutual-information bound (C2), and a rigidity-based identifiability statement (C3), and prove C1 unconditionally in dimension four; (iv) provide a working Python prototype (with a Julia transcription); and (v) propose five experiments testable on standard benchmarks (Poincaré WordNet, hyperbolic GCNs on graph datasets, Mip-NeRF unbounded scenes). The conjectures are open even in cases where the corresponding pure geometric statements are known, because the empirical, sample-based formulations require additional analytic work; we identify what is missing.
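The scheme dependence the abstract highlights can be seen already in the simplest case, the hyperbolic plane in the Poincaré-disk model. The sketch below is a toy illustration of renormalization by cutoff (not the paper's estimator; the cutoff values and the two-point fit are our own assumptions): it computes the regularized area out to a Euclidean-radius cutoff 1 − ε, strips the 1/ε divergence to extract a finite part, and compares it with the geodesic-radius cutoff scheme, whose counterterm subtraction yields a different finite part.

```python
import math

def regularized_area(eps):
    """Hyperbolic area of the Poincare disk out to Euclidean radius 1 - eps.

    Closed form of  integral_0^{1-eps} 8*pi*r / (1 - r^2)^2 dr,
    using the metric g = 4 (dr^2 + r^2 dtheta^2) / (1 - r^2)^2.
    Diverges like 2*pi/eps as eps -> 0.
    """
    u = eps * (2.0 - eps)  # u = 1 - (1 - eps)^2
    return 4.0 * math.pi * (1.0 / u - 1.0)

def poincare_radial_finite_part(eps1, eps2):
    """Two-point fit V(eps) ~ a/eps + b; returns (a, b).

    b is the finite part in the Poincare-radial cutoff scheme.
    """
    v1, v2 = regularized_area(eps1), regularized_area(eps2)
    a = (v1 - v2) / (1.0 / eps1 - 1.0 / eps2)
    return a, v1 - a / eps1

def geodesic_finite_part(rho):
    """Finite part in the geodesic-radius cutoff scheme.

    Area inside geodesic radius rho is 2*pi*(cosh(rho) - 1);
    subtracting the divergent counterterm pi*e^rho leaves
    pi*e^{-rho} - 2*pi, which tends to -2*pi as rho -> infinity.
    """
    area = 2.0 * math.pi * (math.cosh(rho) - 1.0)
    return area - math.pi * math.exp(rho)

a, b = poincare_radial_finite_part(1e-3, 5e-4)
print(a, b)                       # a ~ 2*pi, b ~ -3*pi
print(geodesic_finite_part(10.0))  # ~ -2*pi
```

The two finite parts disagree (about −3π versus −2π for the same geometry), which is precisely why the abstract insists on distinguishing the canonical Fefferman–Graham scheme from the practical Poincaré-radial scheme when defining Vren.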

Cite this paper

Sepulveda-Jimenez, Alfredo (2026). Renormalized Volume as an Information-Capacity Functional for Hyperbolic Generative Models. Zenodo.

DOI: 10.5281/zenodo.19966065