Probabilistic inference using contraction of tensor networks
Jul 10, 2024
Martin Roa Villescas
Abstract
Reasoning under uncertainty is a key challenge in fields such as AI, medical diagnosis, computer vision, and natural language processing. Probabilistic graphical models (PGMs) address this challenge by compactly representing the joint probability distributions of complex systems.
Despite the utility of PGMs, probabilistic inference remains a hard task because it involves high-dimensional combinatorial optimization. To address this, we introduce TensorInference.jl, a Julia package that combines the expressive power of PGMs with the computational power of tensor networks to improve the performance of probabilistic inference in complex models.
Probabilistic inference calculates probabilities from observed data using the axioms of probability theory. Inference methods fall into two categories: exact and approximate. Exact inference is NP-hard in general, with a cost that grows exponentially with the model’s treewidth. Approximate methods such as Markov chain Monte Carlo and variational inference, implemented in packages like Stan and PyMC3, scale better but offer no formal accuracy guarantees. This has driven a resurgence of interest in exact methods.
TensorInference.jl leverages recent advances in tensor networks to deliver exact, high-performance solutions to inference problems. It supports computing partition functions, marginal probabilities, the most probable variable assignments, and posterior samples.
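To make these queries concrete, here is a minimal sketch of how they might look in code. The names used (`read_model_file`, `TensorNetworkModel`, `probability`, `marginals`, `most_probable_config`, `sample`) follow the package documentation; the model file "asia.uai" and the evidence are placeholders.

```julia
using TensorInference

# Load a model stored in the UAI file format (placeholder path).
model = read_model_file("asia.uai")

# Build a tensor network, conditioning on observed evidence
# (here: variable 1 observed in state 0, purely illustrative).
tn = TensorNetworkModel(model; evidence = Dict(1 => 0))

probability(tn)           # partition function of the conditioned model
marginals(tn)             # marginal probabilities of the variables
most_probable_config(tn)  # most likely joint assignment and its log-probability
sample(tn, 100)           # draw 100 samples from the posterior
```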
Tensor networks, like PGMs, excel at representing the states of complex systems. Their computational efficiency hinges on the sequence in which tensors are contracted, so optimizing this contraction order is vital.
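As an illustration, the effect of the contraction order can be explored directly with OMEinsum, the einsum backend underlying TensorInference.jl. The sketch below assumes OMEinsum’s documented helpers `optimize_code`, `uniformsize`, and `contraction_complexity`:

```julia
using OMEinsum

# A chain of four matrix products, written as an einsum expression.
code = ein"ij,jk,kl,lm->im"

# Assume every index has dimension 100.
sizes = uniformsize(code, 100)

# Search for a cheap pairwise contraction order with simulated annealing.
optcode = optimize_code(code, sizes, TreeSA())

# Report the time and space complexity of the order that was found.
contraction_complexity(optcode, sizes)
```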
TensorInference.jl supports differentiable programming, works with generic element types, and lets users tune hyper-optimized contraction orders. It provides several contraction-order optimizers: a local-search method (TreeSA), two methods based on min-cut algorithms (SABipartite and KaHyParBipartite), and a greedy algorithm (GreedyMethod). It also exploits BLAS routines and GPUs for improved performance.
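A minimal sketch of selecting one of these optimizers, assuming the `optimizer` keyword of `TensorNetworkModel`; the `TreeSA` parameters shown are illustrative:

```julia
using TensorInference

model = read_model_file("asia.uai")  # placeholder model file

# Use the simulated-annealing local search (TreeSA) to optimize the
# contraction order; `ntrials` and `niters` control the search budget.
tn = TensorNetworkModel(model; optimizer = TreeSA(ntrials = 1, niters = 5))
```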
Date
Jul 10, 2024 2:00 PM — 2:40 PM
Authors
Teacher & Researcher
Martin Roa Villescas holds a BSc in Electronic Engineering from the National
University of Colombia and an MSc in Embedded Systems from Eindhoven
University of Technology (TU/e). He worked at Philips Research as an
embedded software designer from 2013 to 2018. He later returned to TU/e for
his doctoral research in model-based machine learning, carried out within
the PhD-Teaching Assistant trajectory combining research and teaching. Since
2023, he has been working at Fontys University of Applied Sciences in the
Netherlands, where he teaches in the Information and Communication
Technology program and conducts research in robotics and smart industry.