TCS+

A carbon-free dissemination of ideas across the globe.

TCS+ talk: Wednesday, March 9 — Roei Tell, IAS

The next TCS+ talk will take place this coming Wednesday, March 9th at 1:00 PM Eastern Time (10:00 AM Pacific Time, 19:00 Central European Time, 18:00 UTC). Roei Tell from IAS will speak about “Hardness vs Randomness, Revised: Uniform, Non-Black-Box, and Instance-Wise” (abstract below).

You can reserve a spot as an individual or a group to join us live by signing up on the online form. Registration is not required to attend the interactive talk, and the link will be posted on the website the day prior to the talk; however, by registering in the form, you will receive a reminder along with the link. (The recorded talk will also be posted on our website afterwards.) As usual, for more information about the TCS+ online seminar series and the upcoming talks, or to suggest a possible topic or speaker, please see the website.

Abstract: In this talk I’ll show how to revise the classical hardness-vs-randomness framework, so that it can work in a non-black-box fashion. Specifically, we will construct derandomization algorithms that don’t rely on classical PRGs, and instead “extract pseudorandomness” from the given input on which we want to simulate the probabilistic machine.

Using a non-black-box approach allows us to deduce stronger conclusions (or alternatively rely on weaker hypotheses), compared to classical approaches. In one instantiation of the new framework, we reveal a close connection between the promiseBPP = promiseP conjecture and a new type of uniform lower bounds. In another instantiation, we simulate probabilistic algorithms with essentially no observable runtime overhead, under plausible hypotheses.

Based on a joint work with Lijie Chen.

TCS+ talk: Wednesday, February 23 — Merav Parter, Weizmann Institute of Science

The first TCS+ talk of 2022 will take place on Wednesday, February 23rd at 1:00 PM Eastern Time (10:00 AM Pacific Time, 19:00 Central European Time, 18:00 UTC). Merav Parter from Weizmann Institute of Science will speak about “New Diameter Reducing Shortcuts: Breaking the O(\sqrt{n}) Barrier” (abstract below).


Abstract: For an n-vertex digraph G=(V,E), a shortcut set is a (small) subset of edges H taken from the transitive closure of G that, when added to G, guarantees that the diameter of G \cup H is small. Shortcut sets, introduced by Thorup in 1993, have a wide range of applications in algorithm design, especially in the context of parallel, distributed, and dynamic computation on directed graphs. A folklore result in this context shows that every n-vertex digraph admits a shortcut set of linear size (i.e., of O(n) edges) that reduces the diameter to \widetilde{O}(\sqrt{n}). Despite extensive research over the years, the question of whether one can reduce the diameter to o(\sqrt{n}) with \widetilde{O}(n) shortcut edges has been left open.
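The flavor of the folklore \sqrt{n} bound is easy to see on the simplest hard instance, a directed path. The toy sketch below (not the paper's algorithm, just an illustration of shortcutting) adds O(n) transitive-closure edges that each jump roughly \sqrt{n} ahead, and checks by BFS that the diameter drops from n-1 to O(\sqrt{n}):

```python
from collections import deque
import math

def diameter(n, edges):
    """Largest finite shortest-path distance over all ordered reachable pairs."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
    best = 0
    for s in range(n):
        dist = {s: 0}
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        best = max(best, max(dist.values()))
    return best

n = 64
path = [(i, i + 1) for i in range(n - 1)]   # directed path: diameter n - 1
step = math.isqrt(n)
# O(n) extra edges, all from the transitive closure: jump ~sqrt(n) ahead
shortcuts = [(i, min(i + step, n - 1)) for i in range(n - 1)]
d_path = diameter(n, path)                  # 63
d_short = diameter(n, path + shortcuts)     # drops to O(sqrt(n))
```

Breaking below \sqrt{n} for arbitrary digraphs with only \widetilde{O}(n) edges is exactly what the talk's result achieves.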

In this talk, I will present the first improved diameter-sparsity tradeoff for this problem, breaking the \sqrt{n} diameter barrier. Specifically, we show an O(n^{\omega})-time randomized algorithm for computing a linear shortcut set that reduces the diameter of the digraph to \widetilde{O}(n^{1/3}). We also extend our algorithms to provide improved (\beta,\epsilon) hopsets for n-vertex weighted directed graphs.

Joint work with Shimon Kogan.

TCS+ talk: Wednesday, December 8 — Guy Blanc, Stanford

The next TCS+ talk (and last of the semester!) will take place this coming Wednesday, December 8th at 1:00 PM Eastern Time (10:00 AM Pacific Time, 19:00 Central European Time, 18:00 UTC). Guy Blanc from Stanford will speak about “Properly learning decision trees in almost polynomial time” (abstract below).


Abstract: We give an n^{O(\log\log n)}-time membership query algorithm for properly and agnostically learning decision trees under the uniform distribution over \{-1,1\}^n. Even in the realizable setting, the previous fastest runtime was n^{O(\log n)}, a consequence of a classic algorithm of Ehrenfeucht and Haussler.

Our algorithm shares similarities with practical heuristics for learning decision trees, which we augment with additional ideas to circumvent known lower bounds against these heuristics. To analyze our algorithm, we prove a new structural result for decision trees that strengthens a theorem of O’Donnell, Saks, Schramm, and Servedio. While the OSSS theorem says that every decision tree has an influential variable, we show how every decision tree can be “pruned” so that every variable in the resulting tree is influential.
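For the OSSS-style notion used here, the influence of variable i on f is \mathrm{Inf}_i(f) = \Pr_x[f(x) \ne f(x^{(i)})] under the uniform distribution, where x^{(i)} flips coordinate i. A brute-force check on a small hypothetical tree (this one computes AND of the first two variables) shows that every variable queried by the tree is influential, while a variable absent from the tree has influence zero:

```python
from itertools import product

# A toy decision tree over {-1,1}^3 as nested tuples: a leaf is
# ("leaf", value); an internal node is ("var", i, left, right),
# taking left when x[i] == -1 and right when x[i] == 1.
TREE = ("var", 0,
        ("leaf", -1),
        ("var", 1, ("leaf", -1), ("leaf", 1)))

def evaluate(tree, x):
    while tree[0] == "var":
        _, i, left, right = tree
        tree = left if x[i] == -1 else right
    return tree[1]

def influence(tree, i, n=3):
    """Inf_i(f) = Pr_{x uniform}[f(x) != f(x with coordinate i flipped)]."""
    flips = 0
    for x in product((-1, 1), repeat=n):
        y = list(x)
        y[i] = -y[i]
        flips += evaluate(tree, x) != evaluate(tree, tuple(y))
    return flips / 2 ** n

infs = [influence(TREE, i) for i in range(3)]   # [0.5, 0.5, 0.0]
```

The structural theorem in the talk says any decision tree can be pruned so that, as in this tiny example, every remaining variable has nonzero influence.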

Joint work with Jane Lange, Mingda Qiao, and Li-Yang Tan (arXiv:2109.00637). To appear in FOCS 2021.

TCS+ talk: Wednesday, December 1 — William Kuszmaul, MIT

The next TCS+ talk will take place this coming Wednesday, December 1st at 1:00 PM Eastern Time (10:00 AM Pacific Time, 19:00 Central European Time, 18:00 UTC). William Kuszmaul from MIT will speak about “Linear Probing Revisited: Tombstones Mark the Demise of Primary Clustering” (abstract below).


Abstract: The linear-probing hash table is one of the oldest and most widely used data structures in computer science. However, linear probing also famously comes with a major drawback: as soon as the hash table reaches a high memory utilization, elements within the hash table begin to cluster together, causing insertions to become slow. This phenomenon, now known as “primary clustering”, was first captured by Donald Knuth in 1963; at a load factor of 1 - 1/x, the expected time per insertion becomes \Theta(x^2), rather than the more desirable \Theta(x).

We show that there is more to the story than the classic analysis would seem to suggest. It turns out that small design decisions in how deletions are implemented have dramatic effects on the asymptotic performance of insertions. If these design decisions are made correctly, then even a hash table that is continuously at a load factor 1 - \Theta(1/x) can achieve average insertion time \tilde{O}(x). A key insight is that the tombstones left behind by deletions cause a surprisingly strong “anti-clustering” effect, and that when insertions and deletions are one-for-one, the anti-clustering effects of deletions actually overpower the clustering effects of insertions.
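The tombstones in question are the standard deletion markers in linear probing: a deleted slot cannot simply be emptied, or later probe sequences would terminate early. The minimal sketch below illustrates that mechanism only (it is a toy fixed-capacity set, not the paper's graveyard hashing):

```python
EMPTY, TOMB = object(), object()

class LinearProbing:
    """Toy fixed-capacity linear-probing set with tombstone deletions."""
    def __init__(self, capacity=8):
        self.slots = [EMPTY] * capacity

    def _probe(self, key):
        start = hash(key) % len(self.slots)
        for d in range(len(self.slots)):
            yield (start + d) % len(self.slots)

    def insert(self, key):
        free = None
        for i in self._probe(key):
            s = self.slots[i]
            if s is TOMB:
                if free is None:
                    free = i              # remember the first reusable tombstone
            elif s is EMPTY:
                self.slots[i if free is None else free] = key
                return
            elif s == key:
                return                    # already present
        if free is not None:
            self.slots[free] = key        # no empty slot left, reuse a tombstone

    def contains(self, key):
        for i in self._probe(key):
            s = self.slots[i]
            if s is EMPTY:
                return False              # a truly empty slot ends the probe...
            if s is not TOMB and s == key:
                return True               # ...but a tombstone does not
        return False

    def delete(self, key):
        for i in self._probe(key):
            s = self.slots[i]
            if s is EMPTY:
                return
            if s is not TOMB and s == key:
                self.slots[i] = TOMB      # leave a tombstone, not an empty slot
                return
```

The talk's surprising message is that these leftover markers, usually seen as dead weight, are what counteract primary clustering when insertions and deletions alternate.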

We also present a new variant of linear probing, which we call “graveyard hashing”, that completely eliminates primary clustering on any sequence of operations. If, when an operation is performed, the current load factor is 1 - 1/x for some x, then the expected cost of the operation is O(x). One corollary is that, in the external-memory model with a data block size of B, graveyard hashing offers the following remarkable guarantee: at any load factor 1-1/x satisfying x = o(B), graveyard hashing achieves 1 + o(1) expected block transfers per operation. Past external-memory hash tables have only been able to offer a 1+o(1) guarantee when the block size B is at least \Omega(x^2).

Based on joint work with Michael A. Bender and Bradley C. Kuszmaul (arXiv:2107.01250). To appear in FOCS 2021.

TCS+ talk: Wednesday, November 10 — Kuikui Liu, University of Washington

The next TCS+ talk will take place this coming Wednesday, November 10th at 1:00 PM Eastern Time (10:00 AM Pacific Time, 19:00 Central European Time, 18:00 UTC). Kuikui Liu from the University of Washington will speak about “Spectral Independence: A New Tool to Analyze Markov Chains” (abstract below).


Abstract: Markov chain Monte Carlo is a widely used class of algorithms for sampling from high-dimensional probability distributions, both in theory and in practice. While simple to implement, analyzing the rate of convergence to stationarity, i.e. the “mixing time”, remains a challenging problem in many settings. We introduce a new technique to bound mixing times called “spectral independence”, which says that certain pairwise correlation matrices all have bounded spectral norm. This surprisingly powerful technique originates in the emerging study of high-dimensional expanders, and has allowed us to “unify” nearly all existing approaches to approximate counting and sampling by building new connections with other areas, including statistical physics, geometry of polynomials, functional analysis, and more. Through these connections, several long-standing open problems have recently been answered, including counting bases of matroids and optimal mixing of the Glauber dynamics/Gibbs sampler up to the algorithmic phase transition threshold.
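As a concrete instance of the Glauber dynamics mentioned at the end: to sample independent sets of a graph (the hardcore model, here at fugacity 1), each step picks a uniformly random vertex and resamples its membership conditioned on the rest of the configuration. A minimal sketch, with the 4-cycle as a hypothetical example:

```python
import random

def glauber_independent_sets(adj, steps, seed=0):
    """Single-site Glauber dynamics for the hardcore model at fugacity 1:
    each step resamples one random vertex given its neighbors."""
    rng = random.Random(seed)
    n = len(adj)
    in_set = [False] * n
    for _ in range(steps):
        v = rng.randrange(n)
        if any(in_set[u] for u in adj[v]):
            in_set[v] = False                 # blocked: v must stay out
        else:
            in_set[v] = rng.random() < 0.5    # free: in with probability 1/2
    return in_set

# 4-cycle; the stationary distribution is uniform over its 7 independent sets
adj = [[1, 3], [0, 2], [1, 3], [0, 2]]
sample = glauber_independent_sets(adj, steps=1000)
```

Spectral independence gives a way to prove that chains like this mix rapidly, by bounding the spectral norm of the pairwise influence matrices of the target distribution.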

Based on several joint works with Dorna Abdolazimi, Nima Anari, Zongchen Chen, Shayan Oveis Gharan, Eric Vigoda, Cynthia Vinzant, and June Vuong.

TCS+ talk: Wednesday, October 27 — Shravas Rao, Northwestern University

The next TCS+ talk will take place this coming Wednesday, October 27th at 1:00 PM Eastern Time (10:00 AM Pacific Time, 19:00 Central European Time, 17:00 UTC). Shravas Rao from Northwestern University will speak about “Degree vs. Approximate Degree and Quantum Implications of Huang’s Sensitivity Theorem” (abstract below).


Abstract: Based on the recent breakthrough of Huang (2019), we show that for any total Boolean function f,

  • The degree of f is at most quadratic in the approximate degree of f. This is optimal as witnessed by the OR function.
  • The deterministic query complexity of f is at most quartic in the quantum query complexity of f. This matches the known separation (up to log factors) due to Ambainis, Balodis, Belovs, Lee, Santha, and Smotrovs (2017).

We apply these results to resolve the quantum analogue of the Aanderaa–Karp–Rosenberg conjecture. We show that if f is a nontrivial monotone graph property of an n-vertex graph specified by its adjacency matrix, then Q(f)=\Omega(n), which is also optimal. We also show that the approximate degree of any read-once formula on n variables is \Theta(\sqrt{n}).
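The "degree" in the first item is the degree of the unique multilinear real polynomial agreeing with f on the Boolean cube; for OR it equals n, while OR's approximate degree is only \Theta(\sqrt{n}), which is why OR witnesses optimality. A small brute-force sketch computes this exact degree via Möbius inversion, where the coefficient of the monomial for a set S is \sum_{T \subseteq S} (-1)^{|S|-|T|} f(1_T):

```python
from itertools import product

def multilinear_degree(f, n):
    """Degree of the unique multilinear polynomial agreeing with
    f : {0,1}^n -> {0,1}, found by Mobius inversion over subsets."""
    deg = 0
    for S in product((0, 1), repeat=n):      # S as a 0/1 indicator vector
        coeff = 0
        for T in product((0, 1), repeat=n):
            if all(t <= s for t, s in zip(T, S)):   # T is a subset of S
                coeff += (-1) ** (sum(S) - sum(T)) * f(T)
        if coeff != 0:
            deg = max(deg, sum(S))
    return deg

OR = lambda x: int(any(x))
d = multilinear_degree(OR, 3)   # the exact degree of OR_3 is 3
```

The theorem above bounds this exact degree by the square of the approximate degree for every total Boolean function.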

Based on joint work with Scott Aaronson, Shalev Ben-David, Robin Kothari, and Avishay Tal.

TCS+ talk: Wednesday, October 13 — Nutan Limaye, IT University of Copenhagen

The next TCS+ talk will take place next Wednesday, October 13th at 1:00 PM Eastern Time (10:00 AM Pacific Time, 19:00 Central European Time, 17:00 UTC: check your time zone!). Nutan Limaye from IT University of Copenhagen will speak about “Superpolynomial Lower Bounds Against Low-Depth Algebraic Circuits” (abstract below).


Abstract: Every multivariate polynomial P(X) can be written as a sum of monomials, i.e. a sum of products of variables and field constants. In general, the size of such an expression is the number of monomials that have a non-zero coefficient in P.

What happens if we add another layer of complexity, and consider sums of products of sums (of variables and field constants)? Now, it becomes unclear how to prove that a given polynomial P(X) does not have small expressions. In this work, we solve exactly this problem.

More precisely, we prove that certain explicit polynomials have no polynomial-sized “Sigma-Pi-Sigma” (sums of products of sums) representations. We can also show similar results for Sigma-Pi-Sigma-Pi, Sigma-Pi-Sigma-Pi-Sigma and so on for all “constant-depth” expressions.

The talk is based on a joint work of Nutan Limaye, Srikanth Srinivasan, and Sébastien Tavenas.

TCS+ talk: Wednesday, September 29 — Audra McMillan, Apple

The next TCS+ talk (and first of the semester!) will take place this coming Wednesday, September 29th at 1:00 PM Eastern Time (10:00 AM Pacific Time, 19:00 Central European Time, 17:00 UTC: check your time zone!). We’re excited to have Audra McMillan from Apple speak about “Hiding among the clones: a simple and nearly optimal analysis of privacy amplification by shuffling” (abstract below).

You can reserve a spot as an individual or a group to join us live by signing up on the online form. Due to security concerns, registration is required to attend the interactive talk. (The recorded talk will also be posted on our website afterwards, so people who did not sign up will still be able to watch the talk.)

As usual, for more information about the TCS+ online seminar series and the upcoming talks, or to suggest a possible topic or speaker, please see the website.

Abstract: Differential privacy (DP) is a model of privacy-preserving machine learning that has garnered significant interest in recent years due to its rigorous privacy guarantees. An algorithm is differentially private if its output is stable under small changes to the input database. While DP has been adopted in a variety of applications, most applications of DP in industry actually satisfy a stronger notion called local differential privacy, in which data subjects perturb their data before it reaches the data analyst. While this requires less trust, it comes at a substantial cost to accuracy. Recent work of Erlingsson, Feldman, Mironov, Raghunathan, Talwar, and Thakurta [EFMRTT19] demonstrated that random shuffling amplifies the differential privacy guarantees of locally randomized data. Such amplification implies substantially stronger privacy guarantees for systems in which data is contributed anonymously [BEMMRLRKTS17] and has led to significant interest in the shuffle model of privacy [CSUZZ19, EFMRTT19]. In this talk, we will discuss a new result on privacy amplification by shuffling, which achieves the asymptotically optimal dependence on the local privacy parameter. Our result is based on a new proof strategy that is simpler than previous approaches, and extends to a slightly weaker notion known as approximate differential privacy with nearly the same guarantees.
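A minimal sketch of the shuffle model being analyzed (using textbook randomized response as the local randomizer; the parameters are illustrative, not the paper's): each user applies an \epsilon-LDP randomizer to their own bit, and the shuffler then discards the association between users and reports. The amplification theorem says the shuffled output satisfies much stronger central DP than \epsilon alone suggests.

```python
import math
import random

def randomized_response(bit, eps, rng):
    """eps-LDP randomized response: report the true bit with probability
    e^eps / (e^eps + 1), otherwise report its flip."""
    p_true = math.exp(eps) / (math.exp(eps) + 1)
    return bit if rng.random() < p_true else 1 - bit

def shuffle_mechanism(bits, eps, seed=0):
    rng = random.Random(seed)
    reports = [randomized_response(b, eps, rng) for b in bits]
    rng.shuffle(reports)   # the shuffler forgets which user sent which report
    return reports

reports = shuffle_mechanism([1, 0, 1, 1, 0, 0, 0, 1], eps=1.0)
```

The talk's result pins down, up to lower-order terms, how the central privacy guarantee of this shuffled output scales with the local parameter \epsilon.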

Based on joint work with Vitaly Feldman and Kunal Talwar (https://arxiv.org/abs/2012.12803).

TCS+ talk: Wednesday, June 9 — Pravesh Kothari (CMU) and Ankur Moitra (MIT)

The next TCS+ talk will take place this coming Wednesday, June 9th at 1:00 PM Eastern Time (10:00 AM Pacific Time, 19:00 Central European Time, 17:00 UTC). Pravesh Kothari (CMU) and Ankur Moitra (MIT) will jointly speak about “Robustly Learning Mixtures of Gaussians” (abstract below).

Note that the seminar will be a bit longer than usual: it’s a double feature!


Abstract: For a while now the problem of robustly learning a high-dimensional mixture of Gaussians has had a target on its back. The first works in algorithmic robust statistics gave provably robust algorithms for learning a single Gaussian. Since then there has been steady progress, including algorithms for robustly learning mixtures of spherical Gaussians, mixtures of Gaussians under separation conditions, and arbitrary mixtures of two Gaussians. In this talk we will discuss two recent works that essentially resolve the general problem. There are important differences in their techniques, setup, and overall quantitative guarantees, which we will discuss.

The talk will cover the following independent works:

  • Liu, Moitra, “Settling the Robust Learnability of Mixtures of Gaussians”
  • Bakshi, Diakonikolas, Jia, Kane, Kothari, Vempala, “Robustly Learning Mixtures of k Arbitrary Gaussians”

TCS+ talk: Wednesday, May 26 — Kira Goldner, Columbia University

The next TCS+ talk will take place this coming Wednesday, May 26th at 1:00 PM Eastern Time (10:00 AM Pacific Time, 19:00 Central European Time, 17:00 UTC). Kira Goldner from Columbia University will speak about “An Overview of Using Mechanism Design for Social Good” (abstract below).


Abstract: In order to accurately predict an algorithm’s outcome and quality when it interacts with participants who have a stake in the outcome, we must design it to be robust to strategic manipulation. This is the subject of algorithmic mechanism design, which borrows ideas from game theory and economics to design robust algorithms. In this talk, I will show how results from the theoretical foundations of algorithmic mechanism design can be used to solve problems of societal concern.

I will overview recent work in this area in many different applications — housing, labor markets, carbon license allocations, health insurance markets, and more — as well as discuss open problems and directions ripe for tools from both mechanism design and general TCS.
