
Mathematical Sciences Research Institute


Seminar

Sparser Johnson-Lindenstrauss Transforms September 07, 2011 (10:30 AM PDT - 11:30 AM PDT)
Location: MSRI: Simons Auditorium
Speaker(s): Jelani Nelson

Abstract

The Johnson-Lindenstrauss (JL) lemma (1984) states that any n points in d-dimensional Euclidean space can be embedded into k = O((log n)/eps^2) dimensions so that all pairwise distances are preserved up to 1+/-eps. Furthermore, this embedding can be achieved via a linear mapping. The JL lemma is a useful tool for speeding up solutions to several high-dimensional problems: closest pair, nearest neighbor, diameter, minimum spanning tree, etc. It also speeds up some clustering and string processing algorithms, reduces the amount of storage required to store a dataset, and can be used to reduce memory required for numerical linear algebra problems such as linear regression and low rank approximation.
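For concreteness, here is a minimal sketch (not part of the talk) of the classical dense construction: a k x d matrix with i.i.d. Gaussian entries scaled by 1/sqrt(k), together with a spot-check that pairwise distances are roughly preserved. The constant in the choice of k is purely illustrative.

```python
# Minimal sketch of a dense Gaussian JL embedding: project n points in R^d
# down to k = O(log(n)/eps^2) dimensions and spot-check that pairwise
# distances are preserved up to 1 +/- eps. The constant 8 is illustrative.
import numpy as np

rng = np.random.default_rng(0)
n, d, eps = 100, 1000, 0.25
k = int(np.ceil(8 * np.log(n) / eps**2))

X = rng.standard_normal((n, d))               # n points in R^d
S = rng.standard_normal((k, d)) / np.sqrt(k)  # dense JL matrix
Y = X @ S.T                                   # embedded points in R^k

# spot-check distortion on a few random pairs
for _ in range(5):
    i, j = rng.choice(n, size=2, replace=False)
    orig = np.linalg.norm(X[i] - X[j])
    emb = np.linalg.norm(Y[i] - Y[j])
    print(f"distance ratio = {emb / orig:.3f}")  # should lie in [1-eps, 1+eps] w.h.p.
```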

The original proofs of the JL lemma let the linear mapping be specified by a random dense k x d matrix (e.g. i.i.d. Gaussian entries). Thus, performing an embedding requires dense matrix-vector multiplication. There has been much recent work on developing distributions that allow for embedding vectors quickly, begun by the work of [Ailon, Chazelle, STOC 2006]. Unfortunately, many of the works in this direction cannot take advantage of sparsity in the vector being embedded, and take Omega(d) time to embed a vector even with only one non-zero entry. This is particularly unfortunate in streaming applications, where a vector x receives coordinate-wise updates of the form x_i <- x_i + v, and the embedding of x must be maintained under these updates.
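The following hedged illustration (not from the talk) shows why column sparsity is what matters in the streaming setting: for any linear map S, the update x_i <- x_i + v changes the sketch by Sx <- Sx + v*S[:, i], so the per-update cost is the number of non-zero entries in column i. The sparsity pattern below is arbitrary and only meant to exhibit the cost difference.

```python
# For a linear map S, an update x_i <- x_i + v changes the sketch by
# Sx <- Sx + v * S[:, i]; the cost is the number of non-zeros in column i:
# k for a dense matrix, only s for a column-sparse one.
import numpy as np

rng = np.random.default_rng(1)
d, k, s = 5000, 200, 10

# dense map: every column has k non-zero entries
S_dense = rng.standard_normal((k, d)) / np.sqrt(k)

# column-sparse map, stored column-wise: s non-zero rows and values per column
# (the sparsity pattern is arbitrary here; it only illustrates update cost)
nz_rows = np.stack([rng.choice(k, size=s, replace=False) for _ in range(d)])
nz_vals = rng.choice([-1.0, 1.0], size=(d, s)) / np.sqrt(s)

sketch_dense = np.zeros(k)
sketch_sparse = np.zeros(k)

# stream of coordinate-wise updates (i, v), i.e. x_i <- x_i + v
for i, v in [(3, 1.0), (42, -2.5), (3, 0.5)]:
    sketch_dense += v * S_dense[:, i]            # Theta(k) work per update
    sketch_sparse[nz_rows[i]] += v * nz_vals[i]  # only O(s) work per update
```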

One way to speed up embeddings for sparse vectors is to develop distributions over linear mappings whose matrices are themselves sparse; investigation in this direction was carried out in [Achlioptas, PODS 2001] and [Dasgupta, Kumar, Sarlos, STOC 2010]. The former gave a distribution over embedding matrices in which each column has only s = k/3 non-zero entries (in expectation), and the latter one in which each column has only s = O~(eps^{-1}*log^2 n) non-zero entries (an improvement over s = k when 1/eps >> log n).
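For reference, here is a hedged sketch of the Achlioptas-style distribution: each entry is 0 with probability 2/3 and +/- sqrt(3/k) with probability 1/6 each, so each column has about k/3 non-zero entries in expectation. The DKS construction is more involved and not reproduced here.

```python
# Achlioptas-style sparse JL matrix: entries are 0 w.p. 2/3 and
# +/- sqrt(3/k) w.p. 1/6 each, so columns have ~k/3 non-zeros in expectation
# and each entry has variance 1/k (so E||Sx||^2 = ||x||^2).
import numpy as np

def achlioptas_matrix(k, d, rng):
    entries = rng.choice([1.0, 0.0, -1.0], size=(k, d), p=[1/6, 2/3, 1/6])
    return np.sqrt(3.0 / k) * entries

rng = np.random.default_rng(2)
k, d = 128, 2000
S = achlioptas_matrix(k, d, rng)
print("average non-zeros per column:", np.count_nonzero(S) / d)  # ~ k/3
```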

In this talk, I will present two new distributions over JL embeddings which achieve the best known sparsity to date: s = O(eps^{-1}*log n).
These are the first distributions to achieve o(k) non-zero entries per column regardless of how eps and n are related.
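As a rough illustration only, here is a sketch under the assumption that one of the constructions is block-structured in the CountSketch style: the k rows are split into s blocks, and in each block every column gets exactly one non-zero entry +/- 1/sqrt(s) in a uniformly random row. The exact definitions and the hashing used in the paper may differ; the constants below are illustrative.

```python
# Block-style column-sparse embedding (a sketch, not the paper's exact
# construction): k rows split into s blocks of size k/s, one signed non-zero
# per column per block, magnitude 1/sqrt(s), so each column has exactly s
# non-zeros and unit squared norm.
import numpy as np

def block_sparse_jl(k, d, s, rng):
    assert k % s == 0, "k should be a multiple of s for this sketch"
    b = k // s                                       # rows per block
    S = np.zeros((k, d))
    for blk in range(s):
        rows = blk * b + rng.integers(0, b, size=d)  # one row per column in this block
        signs = rng.choice([-1.0, 1.0], size=d)
        S[rows, np.arange(d)] = signs / np.sqrt(s)
    return S

rng = np.random.default_rng(3)
eps, n, d = 0.25, 1000, 5000
s = int(np.ceil(np.log(n) / eps))    # s = O(eps^{-1} * log n)
b = int(np.ceil(1.0 / eps))          # block size, so k = s*b = O(eps^{-2} * log n)
k = s * b
S = block_sparse_jl(k, d, s, rng)
print("non-zeros per column:", np.count_nonzero(S[:, 0]))  # exactly s
```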

This is based on joint work with Daniel Kane (Stanford).
