TCS+

A carbon-free dissemination of ideas across the globe.

TCS+ talk: Wednesday, March 16th — Tali Kaufman, Bar-Ilan University

Our next talk will take place this coming Wednesday, March 16th at 1:00 PM Eastern Time (10:00 AM Pacific Time, 18:00 Central European Time, 17:00 UTC). Tali Kaufman from Bar-Ilan University will speak about “Bounded degree high dimensional expanders” (abstract below).

Please make sure you reserve a spot for your group to join us live by signing up on the online form. As usual, for more information about the TCS+ online seminar series and the upcoming talks, or to suggest a possible topic or speaker, please see the website.

In recent years a high dimensional theory of expanders has emerged. The notion of combinatorial expansion of graphs (i.e. the Cheeger constant of a graph) has seen two generalizations to high dimensional simplicial complexes (or hypergraphs). One generalization is called coboundary expansion, and the other is termed cosystolic expansion. Gromov has shown that cosystolic expanders have the topological overlapping property. He then asked whether there exist bounded degree complexes with the topological overlapping property in every dimension.
In the well studied one dimensional case, the existence of a bounded degree combinatorial expander is easy to prove. However, bounded degree high dimensional expanders (random or explicit), according to either of the definitions above, were not known until our work.
In this talk we present a local to global criterion on a complex that implies cosystolic expansion. We then use our criterion to present explicit bounded degree cosystolic expanders of every dimension. This solves in the affirmative the open question raised by Gromov.
We expect that the emerging theory of high dimensional expansion is likely to have various applications in the theory of computation. Thus, one of the goals of this talk is to introduce this concept to the theory community.

No prior background is assumed.

Based on joint works with Alex Lubotzky, David Kazhdan, and Shai Evra.

 

TCS+ talk: Wednesday, March 2nd — Ola Svensson, EPFL

Our next talk will take place this coming Wednesday, March 2nd at 1:00 PM Eastern Time (10:00 AM Pacific Time, 19:00 Central European Time, 18:00 UTC). Ola Svensson (EPFL) will speak about “Lift-and-Round to Improve Scheduling on Unrelated Machines” (abstract below).

Please make sure you reserve a spot for your group to join us live by signing up on the online form. As usual, for more information about the TCS+ online seminar series and the upcoming talks, or to suggest a possible topic or speaker, please see the website.

Weighted completion time and makespan are some of the most relevant and well-studied measures of quality of service in scheduling and resource allocation problems. While these objectives have been studied since the 50’s, a systematic study of their approximability was started in the 90’s. Since then, impressive progress has led to a complete understanding of these objectives in simple machine models, such as identical machines.
In contrast, it remains a notorious problem to understand the approximability in the more general unrelated machine setting: the classic algorithms developed in the 90’s resisted any improvements while having guarantees that are far from the strongest known lower bounds.
In this talk, we overview recent developments with a focus on our recent approximation algorithm that improves upon the barrier of 3/2 for the weighted completion time objective. The improvement is based on a new lift-and-project based SDP relaxation and a general bipartite-rounding procedure that produces an assignment with certain strong negative correlation properties.

This is based on joint work with Nikhil Bansal and Aravind Srinivasan (abs/1511.07826).

 

TCS+ talk: Wednesday, February 17th — Ronitt Rubinfeld, Tel Aviv University and MIT

Our next talk will take place this coming Wednesday, February 17th at 1:00 PM Eastern Time (10:00 AM Pacific Time, 19:00 Central European Time, 18:00 UTC). Ronitt Rubinfeld (Tel Aviv University and MIT) will cover recent and exciting developments in “Local Computation Algorithms” (abstract below).

Please make sure you reserve a spot for your group to join us live by signing up on the online form. As usual, for more information about the TCS+ online seminar series and the upcoming talks, or to suggest a possible topic or speaker, please see the website.

Consider a setting in which inputs to and outputs from a computational problem are so large that there is no time to read them in their entirety. However, if one is only interested in small parts of the output at any given time, is it really necessary to solve the entire computational problem? Is it even necessary to view the whole input?
We survey recent work in the model of local computation algorithms, which, for a given input, support queries by a user for the values of specified bits of a legal output. The goal is to design local computation algorithms in such a way that very little of the input needs to be seen in order to determine the value of any single bit of the output. In this talk, we describe a number of problems for which sublinear time and space local computation algorithms have been developed.
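As a toy illustration of the model (my own sketch, not an algorithm from the talk), here is a local computation algorithm answering "is vertex v in the greedy maximal independent set?" for a fixed pseudo-random vertex ordering, probing only v's neighborhood rather than reading the whole graph:

```python
import random

# Toy local computation algorithm (LCA): decide whether vertex v belongs to
# the greedy maximal independent set induced by a pseudo-random vertex
# ordering, probing only v's local neighborhood. Names and the example graph
# are illustrative.

def rank(v, seed=0):
    """Deterministic pseudo-random rank, simulating a random vertex order."""
    return random.Random(seed * 1000003 + v).random()

def in_mis(v, adj, seed=0):
    """v is in the greedy MIS iff no lower-ranked neighbor is in the MIS."""
    return not any(rank(u, seed) < rank(v, seed) and in_mis(u, adj, seed)
                   for u in adj[v])

# A path on 5 vertices: 0 - 1 - 2 - 3 - 4
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
mis = {v for v in adj if in_mis(v, adj)}
```

Each query only explores chains of vertices with decreasing rank, which is how such algorithms avoid reading the whole input.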

 

TCS+ talk: Wednesday, February 3rd — Ilya Razenshteyn, MIT

Our next talk will take place this coming Wednesday, February 3rd at 1:00 PM Eastern Time (10:00 AM Pacific Time, 19:00 Central European Time, 18:00 UTC). Ilya Razenshteyn from MIT will speak about strong connections between sketching and embedding — namely, that Sketching and Embedding are Equivalent for Norms (abstract below).

Please make sure you reserve a spot for your group to join us live by signing up on the online form. As usual, for more information about the TCS+ online seminar series and the upcoming talks, or to suggest a possible topic or speaker, please see the website.

Imagine the following communication task. Alice and Bob each have a point from a metric space. They want to transmit a few bits and decide whether their points are close to each other or are far apart. Of particular interest are sketching protocols: Alice and Bob both compute short summaries of their inputs and then a referee, given these summaries, makes the decision; sketches are very useful for nearest neighbor search, streaming, randomized linear algebra, etc. Indyk (FOCS 2000) showed that for the \ell_p spaces with 0 < p \leq 2 the above problem admits a very efficient sketching protocol. Consequently, any metric that can be mapped into the \ell_p space with all the distances being approximately preserved has a good protocol as well.
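For intuition, here is a minimal sketching protocol for the Euclidean case (p = 2), assuming shared randomness between Alice and Bob; all names and parameters below are illustrative, not from the paper:

```python
import numpy as np

# Minimal sketching protocol for p = 2 (illustrative, assumes shared
# randomness): Alice and Bob compress their vectors with the same random
# Gaussian projection, and the referee estimates ||x - y||_2 from the
# short sketches alone.

rng = np.random.default_rng(0)
d, k = 1000, 200                               # ambient dimension, sketch size
G = rng.standard_normal((k, d)) / np.sqrt(k)   # shared random projection

x = rng.standard_normal(d)                     # Alice's point
y = x + 0.1 * rng.standard_normal(d)           # Bob's point, close to x

sketch_x, sketch_y = G @ x, G @ y              # k numbers each, instead of d
estimate = np.linalg.norm(sketch_x - sketch_y) # referee's distance estimate
truth = np.linalg.norm(x - y)
```

The referee sees only 2k numbers, yet by Gaussian concentration the estimate is close to the true distance with high probability.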

I will show that for normed spaces (a very important class of metric spaces) embedding into \ell_p is the only possible technique for solving the communication problem. Slightly more formally, we show that any normed space that admits a good communication (in particular, sketching) protocol for distinguishing close and far pairs of points embeds well into \ell_p with p being close to 1. The proof uses tools from communication complexity and functional analysis.

As a corollary, we will show communication lower bounds for the planar Earth Mover’s Distance (minimum-cost matching metric) and for the trace norm (the sum of the singular values of a matrix) by deriving them from the (known) non-embeddability theorems and (the contrapositive of) our result.

The talk is based on a joint paper with Alexandr Andoni and Robert Krauthgamer (arXiv:1411.2577).

 

TCS+ talk: Wednesday, January 20th — Scott Aaronson, MIT

To kick off the new semester, our next talk will take place this coming Wednesday, January 20th at 1:00 PM Eastern Time (10:00 AM Pacific Time, 19:00 Central European Time, 18:00 UTC). Scott Aaronson from MIT will speak about “The Largest Possible Quantum Speedups.”

Please make sure you reserve a spot for your group to join us live by signing up on the online form. As usual, for more information about the TCS+ online seminar series and the upcoming talks, please see the website.

I’ll describe recent progress on understanding the largest possible quantum vs. classical speedups in the setting of query complexity. First, I’ll discuss “Forrelation,” a problem that Andris Ambainis and I proved yields essentially the largest quantum speedup of any promise problem, as well as Fourier Sampling, which yields essentially the largest speedup of any sampling problem. I’ll then explain how Fourier Sampling relates to BosonSampling, the Bremner-Jozsa-Shepherd commuting Hamiltonians model, and other experimental systems that fall short of universal quantum computing, but that might be used in the near future to try to demonstrate quantum supremacy on some task. Finally, I’ll explain how my student Shalev Ben-David “broke the Grover barrier” this summer, achieving a 2.5th-power separation between randomized and quantum query complexities for a total Boolean function—and how the proof of a key conjecture about the Forrelation problem would improve the separation to cubic.
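For concreteness, the Forrelation quantity can be evaluated by brute force for tiny n; the definition below follows the Aaronson–Ambainis paper as I understand it, and the code is my own illustrative sketch:

```python
import itertools

# Brute-force evaluation of the Forrelation quantity
#   phi(f, g) = 2^{-3n/2} * sum_{x,y} f(x) * (-1)^{x . y} * g(y)
# for f, g : {0,1}^n -> {-1, +1}, which measures how correlated g is with
# the Boolean Fourier transform of f. Exponential time, tiny n only.

def forrelation(f, g, n):
    total = 0.0
    for x in itertools.product([0, 1], repeat=n):
        for y in itertools.product([0, 1], repeat=n):
            dot = sum(a * b for a, b in zip(x, y))
            total += f(x) * (-1) ** dot * g(y)
    return total / 2 ** (1.5 * n)

# With f = g = the constant +1 function, phi equals 2^{-n/2}.
phi = forrelation(lambda x: 1, lambda y: 1, n=2)
```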

TCS+ talk: Wednesday, December 9th — Virginia Vassilevska Williams, Stanford

The last talk of the semester will take place this coming Wednesday, December 9th at 1:00 PM Eastern Time (10:00 AM Pacific Time, 19:00 Central European Time, 18:00 UTC). Virginia Vassilevska Williams from Stanford will speak about “Fine-Grained Complexity of Problems in P”.

Please make sure you reserve a spot for your group to join us live by signing up on the online form. As usual, for more information about the TCS+ online seminar series and the upcoming talks, please see the website.

A central goal of algorithmic research is to determine how fast computational problems can be solved in the worst case. Theorems from complexity theory state that there are problems that, on inputs of size n, can be solved in t(n) time but not in t(n)^{1-eps} time for any eps > 0. The main challenge is to determine where in this hierarchy various natural and important problems lie. Throughout the years, many ingenious algorithmic techniques have been developed and applied to obtain blazingly fast algorithms for many problems. Nevertheless, for many other central problems, the best known running times are essentially those of the classical algorithms devised for them in the 1950s and 1960s.

Unconditional lower bounds seem very difficult to obtain, and so practically all known time lower bounds are conditional. For years, the main tool for proving hardness of computational problems has been NP-hardness reductions, basing hardness on P\neq NP. However, when we care about the exact running time (as opposed to merely polynomial vs non-polynomial), NP-hardness is not applicable, especially if the running time is already polynomial. In recent years, a new theory has been developed, based on “fine-grained reductions” that focus on exact running times. Mimicking NP-hardness, the approach is to (1) select a key problem X that is conjectured to require essentially t(n) time for some t, and (2) reduce X in a fine-grained way to many important problems. This approach has led to the discovery of many meaningful relationships between problems, and even sometimes to equivalence classes.

The main key problems used to base hardness on have been: the 3SUM problem, the CNF-SAT problem (based on the Strong Exponential Time Hypothesis (SETH)) and the All Pairs Shortest Paths problem (APSP). Research on SETH-based lower bounds has flourished in particular in recent years, showing that the classical algorithms are optimal for problems such as Approximate Diameter, Edit Distance, Fréchet Distance, Longest Common Subsequence (LCS), etc.
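As a concrete touchstone, here is the classical quadratic-time dynamic program for LCS, the kind of algorithm that SETH-based lower bounds show to be essentially optimal (a standard textbook sketch, not code from the talk):

```python
# Classical O(mn) dynamic program for Longest Common Subsequence.
# Under SETH, no strongly subquadratic algorithm exists for this problem,
# so this textbook routine is essentially the best possible.

def lcs_length(a: str, b: str) -> int:
    m, n = len(a), len(b)
    # dp[i][j] = LCS length of a[:i] and b[:j]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]
```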

In this talk I will give an overview of the current progress in this area of study, and will highlight some exciting new developments leading to new, even more believable conjectures and to surprising new consequences of fast algorithms for Edit Distance or LCS.

TCS+ talk: Wednesday, November 25th — Piotr Indyk, MIT

Our next talk will take place this coming Wednesday, November 25th at 1:00 PM Eastern Time (10:00 AM Pacific Time, 19:00 Central European Time, 18:00 UTC). Piotr Indyk from MIT will speak about “Fast Algorithms for Structured Sparsity”.

Please make sure you reserve a spot for your group to join us live by signing up on the online form. As usual, for more information about the TCS+ online seminar series and the upcoming talks, please see the website.

Sparse representations of signals (i.e., representations that have only few non-zero or large coefficients) have emerged as powerful tools in signal processing theory, algorithms, machine learning and other applications. However, real-world signals often exhibit rich structure beyond mere sparsity. For example, a natural image, once represented in the wavelet domain, often has the property that its large coefficients occupy a subtree of the wavelet hierarchy, as opposed to arbitrary positions. A general approach to capturing this type of additional structure is to model the support of the signal of interest (i.e., the set of indices of large coefficients) as belonging to a particular family of sets. Computing a sparse representation of the signal then corresponds to the problem of finding the support from the family that maximizes the sum of the squares of the selected coefficients. Such a modeling approach has proved to be beneficial in a number of applications including compression, de-noising, compressive sensing and machine learning. However, the resulting optimization problem is often computationally difficult or intractable, which is undesirable in many applications where large signals and datasets are commonplace.
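A toy version of the support-selection problem just described: among a family of allowed supports, pick the one that maximizes the captured energy, i.e. the sum of squares of the selected coefficients. For plain k-sparsity the family is all size-k subsets and top-k selection is optimal; structured models restrict the family. The example is my own, not from the talk:

```python
import itertools

# Support selection: among a family of allowed supports, maximize the
# sum of squares of the selected coefficients. For plain k-sparsity the
# family is all size-k subsets; structured sparsity (tree-sparse,
# graph-sparse, ...) would shrink this family, which is where the
# computational difficulty comes from.

def best_support(signal, family):
    return max(family, key=lambda S: sum(signal[i] ** 2 for i in S))

signal = [0.1, 3.0, -0.2, 2.0, 0.05]
k = 2
all_k_subsets = [frozenset(S) for S in itertools.combinations(range(5), k)]
support = best_support(signal, all_k_subsets)  # picks the two largest |coefficients|
```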

In this talk, I will outline some of the past and more recent algorithms for finding structured sparse representations of signals, including piecewise constant approximations, tree-sparse approximations and graph-sparse approximations. The algorithms borrow several techniques from combinatorial optimization (e.g., dynamic programming), graph theory, and approximation algorithms. For many problems the algorithms often run in (nearly) linear time, which makes them applicable to very large datasets.

Joint work with Chinmay Hegde and Ludwig Schmidt.

TCS+ talk: Wednesday, November 11th — Tim Roughgarden, Stanford

Our next talk will take place this coming Wednesday, November 11th at **NOTE UNUSUAL TIME** 12:00 PM Eastern Time (09:00 AM Pacific Time, 18:00 Central European Time, 17:00 UTC). Tim Roughgarden from Stanford will speak about some new connections in complexity theory and algorithmic game theory:

Please make sure you reserve a spot for your group to join us live by signing up on the online form. As usual, for more information about the TCS+ online seminar series and the upcoming talks, please see the website.

We survey three recent applications of complexity theory to algorithmic game theory.

First, we explain “why prices need algorithms,” in the sense that the guaranteed existence of market equilibria is inextricably connected to the computational complexity of related optimization problems: demand oracles, revenue-maximization, and welfare-maximization. This connection implies several impossibility results, and suggests a complexity-theoretic explanation for the lack of useful extensions of the Walrasian equilibrium concept.

Second, we explain when and how communication and computational lower bounds for algorithms for an optimization problem translate to lower bounds on the price of anarchy (POA) in games derived from the problem. This is a new approach to POA lower bounds, based on reductions in lieu of explicit constructions. These lower bounds are particularly significant for problems of designing games that have only near-optimal equilibria, ranging from the design of simple combinatorial auctions to the existence of effective tolls for routing networks.

Finally, we study “the borders of Border’s theorem,” a fundamental result in auction design that gives an intuitive linear characterization of the feasible interim allocation rules of a Bayesian single-item environment. We explain a complexity-theoretic barrier that indicates, assuming standard complexity class separations, that Border’s theorem cannot be extended significantly beyond the state-of-the-art.

Includes joint work with Parikshit Gopalan, Noam Nisan, and Inbal Talgam-Cohen.

TCS+ talk: Wednesday, October 28th — Li-Yang Tan, TTIC

Our next talk will take place this coming Wednesday, October 28th at 1:00 PM Eastern Time (10:00 AM Pacific Time, 18:00 Central European Time, 17:00 UTC). Li-Yang Tan from TTIC will speak about “An average-case depth hierarchy theorem for Boolean circuits” (abstract below).

Please sign up on the online form if you wish to join the talk as a group (we still have a few seats left, so don’t refrain!). As usual, for more information about the TCS+ online seminar series and the upcoming talks, please see the website.

We prove an average-case depth hierarchy theorem for Boolean circuits over the standard basis of AND, OR, and NOT gates. Our hierarchy theorem says that for every d \geq 2, there is an explicit n-variable Boolean function f, computed by a linear-size depth-d formula, which is such that any depth-(d-1) circuit that agrees with f on a (1/2 + o_n(1)) fraction of all inputs must have size \exp(n^{\Omega(1/d)}). This answers an open question posed by Håstad in his Ph.D. thesis.

Our average-case depth hierarchy theorem implies that the polynomial hierarchy is infinite relative to a random oracle with probability 1, confirming a conjecture of Håstad, Cai, and Babai.  We also use our result to show that there is no “approximate converse” to the results of Linial, Mansour, Nisan and Boppana on the total influence of constant-depth circuits, thus answering a question posed by O’Donnell, Kalai, and Hatami.

A key ingredient in our proof is a notion of random projections which generalizes random restrictions.

Joint work with Ben Rossman and Rocco Servedio.

Also, note that the following TCS+ talk will be given by Tim Roughgarden, on November 11th at 12pm EST. This is one hour earlier than usual: mark your calendars!

TCS+ talk: Wednesday, October 14th — Ryan O’Donnell, CMU

Our next talk will take place this coming Wednesday, October 14th at 1:00 PM Eastern Time (10:00 AM Pacific Time, 19:00 Central European Time, 17:00 UTC). Ryan O’Donnell from CMU will speak about his work on “How to refute a random CSP”, to appear in the next FOCS (abstract below).

Please make sure you reserve a spot for your group to join us live by signing up on the online form. As usual, for more information about the TCS+ online seminar series and the upcoming talks, please see the website.

Let P be a k-ary predicate and let H be a uniformly random constraint satisfaction problem with n variables and m = m(n) constraints, each of type P applied to k literals. For example, if P is the 3-ary OR predicate, then H is a classic random instance of 3SAT.

For each P there is a critical constant \alpha such that H will be satisfiable (with high probability) if m < \alpha n and will be unsatisfiable (whp) if m > \alpha n. In the former case, the natural algorithmic task is to find a satisfying assignment to the variables.

In the latter case, the natural algorithmic task is to find a refutation; i.e., a proof of unsatisfiability. This task becomes algorithmically easier the larger m is. As an example, in the case of 3SAT, it is known that efficient refutation algorithms exist provided m \gg n^{3/2}. We will discuss the refutation problem in general, focusing on how the predicate, P, affects the number of constraints, m, required for efficient refutation. We will also describe the applications to hardness of learning.
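To make the model concrete, here is a small generator for random 3SAT (the case where P is the 3-ary OR predicate), together with a brute-force satisfiability check usable only for tiny n; this is my own illustrative sketch, not code from the talk:

```python
import itertools
import random

# Random 3SAT, i.e. the model above with P the 3-ary OR predicate:
# n variables, m clauses, each clause made of 3 uniformly chosen literals.
# Helper names are illustrative; the brute-force check is exponential in n.

def random_3sat(n, m, seed=0):
    rng = random.Random(seed)
    instance = []
    for _ in range(m):
        variables = rng.sample(range(n), 3)    # 3 distinct variables
        # (v, True) is the positive literal, (v, False) the negated one
        instance.append([(v, rng.choice([True, False])) for v in variables])
    return instance

def satisfies(assignment, instance):
    # Every clause must contain at least one matching literal.
    return all(any(assignment[v] == sign for v, sign in clause)
               for clause in instance)

def satisfiable(instance, n):
    return any(satisfies(dict(enumerate(bits)), instance)
               for bits in itertools.product([True, False], repeat=n))

instance = random_3sat(n=5, m=3, seed=1)
```

Note that any 3-clause instance is satisfiable, since each 3-literal clause rules out only 1/8 of the 2^n assignments; the transition to unsatisfiability happens once m grows past the critical density.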
