Beheshteh T. Rakhshan

PhD student, Université de Montréal & Mila - Québec AI Institute

About

I am a fourth-year PhD student in the theory group at the Department of Computer Science and Operations Research (DIRO) of Université de Montréal and at Mila, where I am fortunate to be advised by Guillaume Rabusseau. My research interests lie in the theoretical foundations of machine learning, particularly randomized algorithms and tensor decompositions.

Prior to joining Mila, I was a PhD student in the Mathematics Department at Purdue University, where I was advised by Petros Drineas; you can read my story here.

News

March 2025: Recipient of the Mila Women in AI scholarship (8k/year).

December 2024: Happy to present our work Efficient Leverage Score Sampling for Tensor Train Decomposition at NeurIPS 2024. [poster]

December 2024: The first draft of our new book, Towards Mastering Tensor Networks: A Comprehensive Guide, is now available!

September 2024: Our paper Efficient Leverage Score Sampling for Tensor Train Decomposition was accepted as a poster at NeurIPS 2024. [paper] [code]

August 2024: Started an internship at Zapata AI.

July 2024: Selected to participate in the 2024 Gene Golub SIAM Summer School.

December 2023: Served as a co-organizer of the New In ML workshop at NeurIPS 2023.

September 2023: Served as an organizer of the Mila Tensor Networks Reading Group. [More info here]

May 2021: IVADO PhD Excellence Scholarship recipient.

Research

I am broadly interested in the theory behind big data and machine learning problems. My research focuses on developing fast and efficient randomized algorithms with tensor networks for large-scale problems. My goal is to design algorithms with provable guarantees that offer accurate and fast alternatives to computationally expensive methods by leveraging dimensionality reduction and tensor decomposition techniques.

Talks

Spotlight on Beheshteh T. Rakhshan

About Mila and my research interests.

Rademacher Random Projections with Tensor Networks

Random projections have recently emerged as popular techniques in the machine learning community for reducing the dimension of very high-dimensional tensors. In this work, we consider a tensorized random projection map relying on the Tensor Train (TT) decomposition, where each element of the core tensors is drawn from a Rademacher distribution. Our theoretical results reveal that a Gaussian low-rank tensor represented in compressed TT format can be replaced by a TT tensor whose core elements are drawn from a Rademacher distribution, with the same embedding size. In addition, we show both theoretically and experimentally that the tensorized random projection in the Matrix Product Operator (MPO) format is not a Johnson-Lindenstrauss transform (JLT) and is therefore not a well-suited random projection map.
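To make the construction concrete, here is a minimal NumPy sketch (not the paper's code) of a TT random projection with Rademacher cores: each of the k output coordinates is the inner product of the input tensor with an independent rank-R TT tensor whose core entries are i.i.d. ±1. The function name and the 1/sqrt(k·R^(N−1)) variance normalization are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def tt_rademacher_projection(x, dims, k, R, rng=None):
    """Project a tensor x of shape `dims` down to R^k (illustrative sketch)."""
    rng = np.random.default_rng(rng)
    N = len(dims)
    out = np.empty(k)
    for i in range(k):
        # Independent TT cores with i.i.d. Rademacher (+/-1) entries;
        # boundary cores have rank 1 on their outer edge.
        cores = [rng.choice([-1.0, 1.0],
                            size=(R if n > 0 else 1,
                                  dims[n],
                                  R if n < N - 1 else 1))
                 for n in range(N)]
        # Contract <A_i, x> by sweeping over the modes of x.
        msg = np.tensordot(cores[0][0], x.reshape(dims[0], -1), axes=(0, 0))
        for n in range(1, N):
            msg = msg.reshape(msg.shape[0], dims[n], -1)
            msg = np.einsum('rds,rdt->st', cores[n], msg)  # contract mode n
        out[i] = msg.squeeze()
    # Assumed scaling so that E[||y||^2] = ||x||^2 for unit-variance cores.
    return out / np.sqrt(k * R ** (N - 1))

# Usage: project a 10x10x10 tensor down to 25 dimensions.
x = np.random.default_rng(0).standard_normal((10, 10, 10))
y = tt_rademacher_projection(x, (10, 10, 10), k=25, R=2, rng=1)
```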


Tensorized Random Projection

In this work, we introduce a novel random projection technique for efficiently reducing the dimension of very high-dimensional tensors. Building upon classical results on Gaussian random projections and Johnson-Lindenstrauss transforms (JLT), we propose two tensorized random projection maps relying on the tensor train (TT) and CP decomposition formats, respectively. Both maps have very low memory requirements and can be applied efficiently when the inputs are low-rank tensors given in the CP or TT format. Our theoretical analysis shows that the dense Gaussian matrix in the JLT can be replaced by a low-rank tensor implicitly represented in compressed form with random factors, while still approximately preserving the Euclidean distances of the projected inputs. In addition, our results reveal that the TT format is substantially superior to CP in terms of the size of the random projection needed to achieve the same distortion ratio.
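For comparison, here is a minimal sketch (again, not the paper's implementation) of the CP-format map, together with an empirical Johnson-Lindenstrauss-style check that projected norms concentrate around the original ones; the 1/sqrt(k·R) scaling and all names are illustrative assumptions.

```python
import numpy as np

def cp_gaussian_projection(x, dims, k, R, rng=None):
    """Project a tensor x of shape `dims` to R^k with rank-R CP maps (sketch)."""
    rng = np.random.default_rng(rng)
    out = np.empty(k)
    for i in range(k):
        # <A_i, x> where A_i = sum_r a_r^(1) o ... o a_r^(N) (outer products)
        # with i.i.d. N(0, 1) factor entries.
        acc = 0.0
        for _ in range(R):
            v = x
            for d in dims:
                a = rng.standard_normal(d)
                v = np.tensordot(a, v, axes=(0, 0))  # contract leading mode
            acc += v  # v is now a scalar
        out[i] = acc
    # Assumed scaling so that E[||y||^2] = ||x||^2.
    return out / np.sqrt(k * R)

# Empirical check: the projected squared norm should be close to the original.
rng = np.random.default_rng(0)
dims, k, R = (8, 8, 8), 200, 4
x = rng.standard_normal(dims)
y = cp_gaussian_projection(x, dims, k, R, rng=1)
print(np.linalg.norm(y) ** 2 / np.linalg.norm(x) ** 2)  # roughly 1.0
```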

Contact

Email: rakhshab@mila.quebec
