
Randomness and Pseudorandomness
Avi Wigderson, Institute for Advanced Study
Is the universe inherently deterministic or probabilistic? Perhaps more
importantly, can we tell the difference between the two?
Humanity has pondered the meaning and utility of randomness for millennia.
There is a remarkable variety of ways in which we utilize perfect coin
tosses to our advantage: in statistics, cryptography, game theory,
algorithms, gambling... Indeed, randomness seems indispensable! Which of
these applications would survive if the universe had no randomness in it
at all? Which of them would survive if only poor-quality randomness were
available, e.g., randomness arising from “unpredictable” phenomena like
the weather or the stock market?
A computational theory of randomness, developed in the past three
decades, reveals (perhaps counterintuitively) that very little is lost
in such deterministic or weakly random worlds – indeed, most
application areas above survive!
In this talk I'll explain the main ideas and results of this theory. A
key notion is pseudorandomness, whose understanding impacts large areas
of mathematics and computer science.
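
The abstract contains no technical details, but a minimal, hypothetical
sketch may help fix the idea of pseudorandomness: a deterministic
procedure stretches one short seed into a long stream that limited
statistical tests cannot distinguish from true coin tosses. The Python
toy below uses the classic Lehmer/MINSTD generator; it is far too weak
for cryptography and is meant only to illustrate the concept.

    def lehmer(seed: int, n: int):
        """Yield n pseudorandom values from one nonzero seed.
        Deterministic: the same seed always gives the same stream."""
        m, a = 2**31 - 1, 48271   # classic MINSTD parameters
        x = seed % m
        for _ in range(n):
            x = (a * x) % m
            yield x

    print(list(lehmer(42, 5)))   # a fixed seed reproduces this "random-looking" stream
    mean = sum(x / (2**31 - 1) for x in lehmer(42, 10_000)) / 10_000
    print(round(mean, 3))        # close to 0.5, as truly uniform samples would give

Such a generator passes simple frequency tests while being completely
predictable from its seed; making that tension quantitative is exactly
what the computational theory of pseudorandomness does.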

Towards Provable Bounds for Machine Learning: Three Vignettes
Sanjeev Arora, Princeton University
Many tasks in machine learning (especially unsupervised learning) are
provably intractable: NP-hard or worse. Nevertheless, researchers have
developed heuristic algorithms to solve these tasks in practice. In most
cases there are no provable guarantees on the performance of these
algorithms and heuristics, neither on their running time nor on the
quality of the solutions they return. Can we change this state of
affairs?
This talk will suggest that the answer is yes, and cover three recent
works as illustration. (a) A new algorithm for learning topic models
(including the Latent Dirichlet Allocation model of Blei et al., but
also more general models) that provably works under some reasonable
assumptions and, in practice, is up to 50 times faster than existing
software. It relies on a new procedure for nonnegative matrix
factorization; for contrast, a standard heuristic for that problem is
sketched below. (b) What classifiers are worth learning? (c) Provable
ICA with unknown Gaussian noise.
(Based upon joint works with Rong Ge, Ravi Kannan, Ankur Moitra,
Sushant Sachdeva.)
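
The abstract names the new nonnegative matrix factorization procedure
but gives no details, so nothing here attempts to reproduce it. As a
point of contrast, the sketch below shows the classic Lee-Seung
multiplicative-update heuristic for NMF, a widely used method that
comes with no global guarantee on running time or solution quality; the
function name and parameter values are illustrative assumptions, not
taken from the talk.

    import numpy as np

    def nmf_multiplicative(V, k, iters=200, eps=1e-9, seed=0):
        """Factor a nonnegative V (m x n) as W @ H with W (m x k) >= 0 and
        H (k x n) >= 0, using Lee-Seung multiplicative updates for squared
        error. A heuristic only: each update does not increase the
        reconstruction error, but the iteration can stall at a local optimum."""
        rng = np.random.default_rng(seed)
        m, n = V.shape
        W = rng.random((m, k))   # random nonnegative initialization
        H = rng.random((k, n))
        for _ in range(iters):
            H *= (W.T @ V) / (W.T @ W @ H + eps)   # eps guards against division by zero
            W *= (V @ H.T) / (W @ H @ H.T + eps)
        return W, H

    # Tiny demo on a synthetic nonnegative matrix of true rank 3.
    V = np.random.default_rng(1).random((20, 3)) @ np.random.default_rng(2).random((3, 30))
    W, H = nmf_multiplicative(V, k=3)
    print("reconstruction error:", np.linalg.norm(V - W @ H))

The gap on display here, a sensible answer with no certificate of
optimality and no bound on how bad a local optimum can be, is exactly
what provable algorithms of the kind described in vignette (a) aim to
close.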
