
Sparsifying Sums of Functions – Yang Liu



Computer Science/Discrete Mathematics Seminar II

Topic: Sparsifying Sums of Functions
Speaker: Yang Liu
Affiliation: Institute for Advanced Study
Date: November 21, 2023

Consider a function on R^n that can be written as a sum of functions f = f1 + f2 + … + fm, for m > n.

The question of approximating f by a reweighted sum using only a small number of summands has many applications in CS theory, mathematical analysis, data science, and numerical linear algebra.
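As a toy illustration of what a reweighted sparse approximation means, consider a hypothetical instance (the specific functions below are made up for illustration) where three one-dimensional summands can be replaced by a single reweighted term that agrees with the full sum everywhere:

```python
import numpy as np

# Hypothetical instance: f(x) = f1(x) + f2(x) + f3(x) with
# f1(x) = x^2, f2(x) = 2*x^2, f3(x) = x^2.
def f(x):
    return x**2 + 2 * x**2 + x**2

# A sparsifier keeps only f1, reweighted by 4, and matches f exactly:
# f(x) = 4 * f1(x) for all x.
def f_sparse(x):
    return 4.0 * x**2

xs = np.linspace(-3, 3, 7)
assert np.allclose(f(xs), f_sparse(xs))  # the sparse sum agrees everywhere
```

In general the summands are not scalar multiples of one another, so the sparsifier only approximates f up to a (1 ± ε) factor, but the goal is the same: few surviving terms, one nonnegative weight per term.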

We’ll see that sparsification is possible in substantial generality. For instance, if the functions fi are arbitrary norms, symmetric submodular functions, or come from a general class of linear models, then one can sparsify down to a number of terms that is nearly linear in the dimension. As an application, we obtain algorithms with near-optimal running time for a variety of linear regression problems, including ℓp regression for 1 < p < 2.

The most common way of building sparsifiers is by randomly sampling the functions fi to produce a sparse reweighting. Attempting to analyze such a sampling algorithm naturally leads to the study of chaining methods, entropy numbers, etc. In the first part of the talk, we will walk through such a proof for the special case of spectral sparsification, where each fi is the square of a linear function, i.e., fi(x) := ⟨ai, x⟩² for some ai in R^n. Then, time permitting, we will discuss extensions beyond this case.
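A minimal sketch of the spectral-sparsification special case, using importance sampling by leverage scores (a standard approach; the instance sizes and sample count below are arbitrary choices for illustration, not from the talk):

```python
import numpy as np

rng = np.random.default_rng(0)

# Random instance: m = 2000 quadratics fi(x) = <ai, x>^2 in dimension n = 20,
# so f(x) = ||Ax||^2 where the rows of A are the ai.
m, n = 2000, 20
A = rng.standard_normal((m, n))

# Leverage score of row i: tau_i = ai^T (A^T A)^{-1} ai.  The scores sum to n
# and measure how much each quadratic contributes to the sum.
G_inv = np.linalg.inv(A.T @ A)
tau = np.einsum("ij,jk,ik->i", A, G_inv, A)

# Sample s rows with probability proportional to tau, reweighting each picked
# row by 1/(s * p_i) so the reweighted sum is an unbiased estimator of A^T A.
p = tau / tau.sum()
s = 400
idx = rng.choice(m, size=s, p=p)
w = 1.0 / (s * p[idx])
A_tilde = A[idx] * np.sqrt(w)[:, None]  # A_tilde^T A_tilde ~= A^T A

# Check on one direction x: the sparse reweighted sum approximates f(x).
x = rng.standard_normal(n)
f_full = np.sum((A @ x) ** 2)
f_sparse = np.sum((A_tilde @ x) ** 2)
rel_err = abs(f_full - f_sparse) / f_full
```

With roughly s = O(n log n / ε²) samples, chaining arguments show the approximation holds simultaneously for every x, which is exactly the spectral-sparsification guarantee the talk analyzes.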

Based on joint work with Arun Jambulapati, James Lee, and Aaron Sidford.
