TCS+

A carbon-free dissemination of ideas across the globe.

TCS+ talk: Wednesday, May 27 — Rahul Ilango, MIT

The next TCS+ talk will take place this coming Wednesday, May 27th at 1:00 PM Eastern Time (10:00 AM Pacific Time, 19:00 Central European Time, 17:00 UTC). Rahul Ilango from MIT will speak about “Is it (NP) hard to distinguish order from chaos?” (abstract below).

You can reserve a spot as an individual or a group to join us live by signing up on the online form. Due to security concerns, registration is required to attend the interactive talk. (The link to the YouTube livestream will also be posted on our website on the day of the talk, so people who did not sign up will still be able to watch the talk live.) As usual, for more information about the TCS+ online seminar series and the upcoming talks, or to suggest a possible topic or speaker, please see the website.

Abstract: The Minimum Circuit Size Problem (MCSP) roughly asks what the “complexity” of a given string is. Informally, one can think of this as determining the degree of “computational order” a string has.
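To make the informal definition concrete: for a very small number of inputs, the circuit complexity of a truth table can be found by exhaustive search. The following sketch (an editorial illustration, not from the talk) encodes each wire's truth table as a bitmask and brute-forces the fewest AND/OR/NOT gates needed:

```python
from itertools import combinations_with_replacement

N = 2                 # number of Boolean inputs
ROWS = 1 << N         # rows in the truth table
MASK = (1 << ROWS) - 1

# Truth table of input variable i, encoded as a bitmask: bit a holds
# the value of x_i under the a-th input assignment.
INPUTS = [sum(((a >> i) & 1) << a for a in range(ROWS)) for i in range(N)]

def min_size(target, wires=None, budget=4):
    """Fewest AND/OR/NOT gates computing `target`, or None if it needs
    more than `budget` gates. Pure brute force -- only viable for tiny N."""
    wires = INPUTS[:] if wires is None else wires
    if target in wires:
        return 0
    if budget == 0:
        return None
    candidates = {MASK & ~w for w in wires}                 # NOT gates
    for u, v in combinations_with_replacement(wires, 2):
        candidates.add(u & v)                               # AND gates
        candidates.add(u | v)                               # OR gates
    best = None
    for c in candidates:
        sub = min_size(target, wires + [c], budget - 1)
        if sub is not None and (best is None or sub + 1 < best):
            best = sub + 1
    return best

AND = INPUTS[0] & INPUTS[1]
NAND = MASK & ~AND
print(min_size(INPUTS[0]), min_size(AND), min_size(NAND))  # 0 1 2
```

MCSP asks exactly this kind of question at scale, where no sub-brute-force algorithm is known: the input is a truth table of length 2^n, and the search space of circuits is enormous.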

In the past several years, there has been a resurgence of interest in MCSP. A series of exciting results have begun unraveling what looks to be a fascinating story. This story already reveals deep connections between MCSP and a growing list of fields, including cryptography, learning theory, structural complexity theory, average-case complexity, and circuit complexity. As an example, Santhanam recently proved a conditional equivalence between the complexity of MCSP and the existence of one-way functions.

This talk is split into two parts. The first part is a broad introduction to MCSP, answering the following questions: What is this problem? Why is it interesting? What do we know so far, and where might the story go next? The second part discusses recent joint work with Bruno Loff and Igor Oliveira showing that the “multi-output version” of MCSP is NP-hard.

 

TCS+ talk: Wednesday, May 20 — Mark Bun, Boston University

The next TCS+ talk will take place this coming Wednesday, May 20th at 1:00 PM Eastern Time (10:00 AM Pacific Time, 19:00 Central European Time, 17:00 UTC). Mark Bun from Boston University will speak about “An Equivalence between Private Classification and Online Predictability” (abstract below).

You can reserve a spot as an individual or a group to join us live by signing up on the online form. Due to security concerns, registration is required to attend the interactive talk. (The link to the YouTube livestream will also be posted on our website on the day of the talk, so people who did not sign up will still be able to watch the talk live.) As usual, for more information about the TCS+ online seminar series and the upcoming talks, or to suggest a possible topic or speaker, please see the website.

Abstract: We prove that every concept class with finite Littlestone dimension can be learned by an (approximate) differentially-private algorithm. The converse direction was shown in recent work of Alon, Livni, Malliaris, and Moran, STOC ’19. Together these two results show that a class of functions is privately learnable if and only if it is learnable in the mistake-bound model of online learning. To establish our result, we introduce “global stability,” a new notion of algorithmic stability for learning algorithms that we show can always be satisfied when learning classes of finite Littlestone dimension.
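The Littlestone dimension at the heart of this equivalence has a short recursive definition: it is the depth of the deepest complete "mistake tree" the class can realize. A brute-force sketch (an editorial illustration of the definition, not the paper's algorithm), evaluated on threshold functions over a small domain:

```python
def ldim(H, X):
    """Littlestone dimension of hypothesis class H, given as a set of
    label tuples indexed by the points of X. Recursion: a mistake tree of
    depth d rooted at point x needs BOTH label restrictions of H to admit
    complete trees of depth d-1, hence the 1 + min(...) below."""
    if len(H) <= 1:
        return 0
    best = 0
    for i in range(len(X)):
        H0 = frozenset(h for h in H if h[i] == 0)
        H1 = frozenset(h for h in H if h[i] == 1)
        if H0 and H1:
            best = max(best, 1 + min(ldim(H0, X), ldim(H1, X)))
    return best

# Thresholds on {1, ..., 7}: h_t(x) = 1 iff x >= t, for t = 1, ..., 8.
X = list(range(1, 8))
H = frozenset(tuple(1 if x >= t else 0 for x in X) for t in range(1, 9))
print(ldim(H, X))  # 3, i.e. log2(8): an adversary can force 3 mistakes
```

Note the contrast with VC dimension (which is 1 for thresholds): the Littlestone dimension tracks the binary-search depth an online adversary can exploit, which is exactly the mistake bound in online learning.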

 

TCS+ talk: Wednesday, May 13 — Sahil Singla, Princeton University and IAS

The next TCS+ talk will take place this coming Wednesday, May 13th at 1:00 PM Eastern Time (10:00 AM Pacific Time, 19:00 Central European Time, 17:00 UTC). Sahil Singla from Princeton University and IAS will speak about “Online Vector Balancing and Geometric Discrepancy” (abstract below).

You can reserve a spot as an individual or a group to join us live by signing up on the online form. Due to security concerns, registration is required to attend the interactive talk. (The link to the YouTube livestream will also be posted on our website on the day of the talk, so people who did not sign up will still be able to watch the talk live.) As usual, for more information about the TCS+ online seminar series and the upcoming talks, or to suggest a possible topic or speaker, please see the website.

Abstract: We consider an online vector balancing question where T vectors, chosen from an arbitrary distribution over [-1,1]^n, arrive one-by-one and must be immediately given a \{+1,-1\} sign. The goal is to keep the discrepancy—the \ell_{\infty}-norm of any signed prefix-sum—as small as possible. A concrete example of this question is the online interval discrepancy problem, where T points are sampled one-by-one uniformly in the unit interval [0,1], and the goal is to immediately color them \{+1,-1\} such that every sub-interval always remains nearly balanced. Since random coloring incurs \Omega(T^{1/2}) discrepancy, while the worst-case offline bounds are \Theta(\sqrt{n \log (T/n)}) for vector balancing and 1 for interval balancing, a natural question is whether one can (nearly) match the offline bounds in the online setting for these problems. One must exploit the stochasticity, since in the worst case the discrepancy of any online algorithm is known to be \Omega(T^{1/2}).
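To make the setup concrete, here is a small simulation of the two baselines mentioned above (an editorial sketch; the greedy rule is a naive baseline, not the paper's algorithm): random signing versus greedily picking the sign that keeps the running sum smallest.

```python
import random

def linf(v):
    return max(abs(x) for x in v)

def online_sign(vectors, greedy=True, rng=None):
    """Assign each arriving vector a +/-1 sign immediately; return the
    largest l_inf norm of any signed prefix sum (the discrepancy)."""
    rng = rng or random.Random(0)
    s = [0.0] * len(vectors[0])
    disc = 0.0
    for v in vectors:
        if greedy:
            # pick whichever sign leaves the smaller running sum
            plus = linf([a + b for a, b in zip(s, v)])
            minus = linf([a - b for a, b in zip(s, v)])
            sign = 1 if plus <= minus else -1
        else:
            sign = rng.choice([1, -1])
        s = [a + sign * b for a, b in zip(s, v)]
        disc = max(disc, linf(s))
    return disc

r = random.Random(1)
vecs = [[r.uniform(-1, 1) for _ in range(5)] for _ in range(2000)]
print("random:", online_sign(vecs, greedy=False))
print("greedy:", online_sign(vecs, greedy=True))
```

In one dimension the greedy rule provably keeps the discrepancy at most 1 (each step, the opposing sign yields ||S| - |v|| <= 1); the challenge the talk addresses is what can be guaranteed in high dimensions, where greedy has no such bound.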

In this work, we introduce a new framework that allows us to handle online vector balancing even when the input distribution has dependencies across coordinates. In particular, this lets us obtain a \textrm{poly}(n, \log T) bound for online vector balancing under arbitrary input distributions, and a \textrm{poly}\log (T) bound for online interval discrepancy. Our framework is powerful enough to capture other well-studied geometric discrepancy problems; e.g., we obtain a \textrm{poly}(\log^d (T)) bound for the online d-dimensional Tusnády’s problem. All our bounds are tight up to polynomial factors.

A key new technical ingredient in our work is an anti-concentration inequality for sums of pairwise uncorrelated random variables, which might also be of independent interest.

Based on joint works with Nikhil Bansal, Haotian Jiang, Janardhan Kulkarni, and Makrand Sinha. Part of this work appears in STOC 2020 and is available at https://arxiv.org/abs/1912.03350

 

TCS+ talk: Wednesday, May 6 — Nathan Klein, University of Washington

The next TCS+ talk will take place this coming Wednesday, May 6th at 1:00 PM Eastern Time (10:00 AM Pacific Time, 19:00 Central European Time, 17:00 UTC). Nathan Klein from the University of Washington will speak about “An improved approximation algorithm for TSP in the half integral case” (abstract below).

You can reserve a spot as an individual or a group to join us live by signing up on the online form. Due to security concerns, registration is required to attend the interactive talk. (The link to the YouTube livestream will also be posted on our website on the day of the talk, so people who did not sign up will still be able to watch the talk live.) As usual, for more information about the TCS+ online seminar series and the upcoming talks, or to suggest a possible topic or speaker, please see the website.

Abstract: A classic result of Christofides from the 1970s gives a fast algorithm for the traveling salesperson problem (TSP) that returns a solution of cost at most 3/2 times the optimum. Since then, however, no better approximation algorithm has been found. In this talk, I will give an overview of research towards the goal of beating 3/2 and will present the first sub-3/2 approximation algorithm for the special case of “half integral” TSP instances. These instances have received significant attention in part due to a conjecture of Schalekamp, Williamson and van Zuylen that they attain the integrality gap of the subtour polytope. If this conjecture is true, our work shows that the integrality gap of the polytope is bounded away from 3/2, giving hope for an improved approximation for the general case. This talk presents joint work with Anna Karlin and Shayan Oveis Gharan.
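For context on how such guarantees work, here is the simpler "double-tree" 2-approximation (an editorial sketch; Christofides improves it to 3/2 by adding a minimum-cost matching, and the talk's algorithm is different again): build a minimum spanning tree, then shortcut a preorder walk of it.

```python
import math
import random
from itertools import permutations

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def double_tree_tour(pts):
    """2-approximation for metric TSP: build an MST with Prim's algorithm,
    then visit vertices in DFS preorder, shortcutting repeated visits.
    The walk costs at most twice the MST, and MST cost <= OPT."""
    n = len(pts)
    in_tree, children = {0}, {i: [] for i in range(n)}
    best = {i: (dist(pts[0], pts[i]), 0) for i in range(n)}
    while len(in_tree) < n:
        v = min((i for i in range(n) if i not in in_tree),
                key=lambda i: best[i][0])
        in_tree.add(v)
        children[best[v][1]].append(v)
        for i in range(n):
            if i not in in_tree and dist(pts[v], pts[i]) < best[i][0]:
                best[i] = (dist(pts[v], pts[i]), v)
    tour, stack = [], [0]
    while stack:                      # iterative DFS preorder
        v = stack.pop()
        tour.append(v)
        stack.extend(reversed(children[v]))
    return tour

def tour_cost(pts, tour):
    return sum(dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

rng = random.Random(0)
pts = [(rng.random(), rng.random()) for _ in range(8)]
approx = tour_cost(pts, double_tree_tour(pts))
opt = min(tour_cost(pts, [0] + list(p)) for p in permutations(range(1, 8)))
print(f"approx = {approx:.3f}, opt = {opt:.3f}")
```

The triangle inequality is what makes the shortcutting step safe, which is why all of these guarantees are stated for metric TSP.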

TCS+ talk: Wednesday, April 29 — Sepideh Mahabadi, TTIC

The next TCS+ talk will take place this coming Wednesday, April 29th at 1:00 PM Eastern Time (10:00 AM Pacific Time, 19:00 Central European Time, 17:00 UTC). Sepideh Mahabadi from TTIC will speak about “Non-Adaptive Adaptive Sampling in Turnstile Streams” (abstract below).

You can reserve a spot as an individual or a group to join us live by signing up on the online form. Due to security concerns, registration is required to attend the interactive talk. (The link to the YouTube livestream will also be posted on our website on the day of the talk, so people who did not sign up will still be able to watch the talk live.) As usual, for more information about the TCS+ online seminar series and the upcoming talks, or to suggest a possible topic or speaker, please see the website.

Abstract: Adaptive sampling is a useful algorithmic tool for data summarization problems in the classical centralized setting, where the entire dataset is available to the single processor performing the computation. Adaptive sampling repeatedly selects rows of an underlying n by d matrix A, where n \gg d, with probabilities proportional to their distances to the subspace of the previously selected rows. Intuitively, adaptive sampling seems to be limited to trivial multi-pass algorithms in the streaming model of computation due to its inherently sequential nature of assigning sampling probabilities to each row only after the previous iteration is completed. Surprisingly, we show this is not the case by giving the first one-pass algorithms for adaptive sampling on turnstile streams, using space \text{poly}(d,k,\log n), where k is the number of adaptive sampling rounds to be performed.
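In the centralized setting the adaptive sampling loop itself is short; here is a minimal pure-Python sketch (an editorial illustration using squared distances and Gram–Schmidt, not the streaming algorithm from the talk):

```python
import random

def residual(row, basis):
    """Component of `row` orthogonal to the span of the orthonormal basis."""
    r = list(row)
    for b in basis:
        c = sum(x * y for x, y in zip(r, b))
        r = [x - c * y for x, y in zip(r, b)]
    return r

def adaptive_sample(A, k, rng=None):
    """Pick up to k row indices of A, each chosen with probability
    proportional to its squared distance from the span of the rows
    already picked (stopping early once the rows span everything)."""
    rng = rng or random.Random(0)
    basis, picked = [], []
    for _ in range(k):
        res = [residual(row, basis) for row in A]
        w = [sum(x * x for x in r) for r in res]
        total = sum(w)
        if total < 1e-12:            # remaining rows lie in the span
            break
        t = rng.uniform(0, total)    # sample index i with prob w[i]/total
        i = 0
        while t > w[i]:
            t -= w[i]
            i += 1
        picked.append(i)
        norm = w[i] ** 0.5
        basis.append([x / norm for x in res[i]])
    return picked
```

The sequential dependence is visible in the code: the weights w for round r+1 cannot be computed until the round-r row has been chosen, which is what makes a one-pass turnstile implementation so counterintuitive.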

Our adaptive sampling procedure has a number of applications to various data summarization problems on turnstile streams that either improve state-of-the-art or have only been previously studied in the more relaxed row-arrival model. This includes column subset selection, subspace approximation, projective clustering, and volume maximization. We complement our volume maximization algorithmic results with lower bounds that are tight up to lower order terms, even for multi-pass algorithms. By a similar construction, we also obtain lower bounds for volume maximization in the row-arrival model, which we match with competitive upper bounds.

This is a joint work with Ilya Razenshteyn, David Woodruff, and Samson Zhou.

TCS+ talk: Wednesday, April 22 — Huacheng Yu, Princeton

The next TCS+ talk will take place this coming Wednesday, April 22nd at 1:00 PM Eastern Time (10:00 AM Pacific Time, 19:00 Central European Time, 17:00 UTC). Huacheng Yu from Princeton will speak about a “Nearly Optimal Static Las Vegas Succinct Dictionary” (abstract below).

You can reserve a spot as an individual or a group to join us live by signing up on the online form. Due to security concerns, registration is required to attend the interactive talk. (The link to the YouTube livestream will also be posted on our website on the day of the talk, so people who did not sign up will still be able to watch the talk live.) As usual, for more information about the TCS+ online seminar series and the upcoming talks, or to suggest a possible topic or speaker, please see the website.

Abstract: Given a set S of n (distinct) keys from key space [U], each associated with a value from \Sigma, the static dictionary problem asks to preprocess these (key, value) pairs into a data structure, supporting value-retrieval queries: for any given x \in [U], valRet(x) must return the value associated with x if x is in S, or return “N/A” if x is not in S. The special case where |\Sigma|=1 is called the membership problem. The “textbook” solution is to use a hash table, which occupies linear space and answers each query in constant time. On the other hand, the minimum possible space to encode all (key, value) pairs is only \textrm{OPT} := \lg \binom{U}{n} + n \lg |\Sigma| bits, which could be much less.
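The interface, and the textbook baseline the abstract contrasts against, take only a few lines (a sketch for illustration; Python's dict plays the role of the hash table):

```python
def preprocess(pairs):
    """Textbook solution: store the (key, value) pairs in a hash table.
    Queries run in O(1) expected time, but the space is linear in the
    machine-word representation rather than close to the OPT bound."""
    return dict(pairs)

def valRet(table, x):
    """Return the value associated with key x, or "N/A" if x is absent."""
    return table.get(x, "N/A")

S = [(17, "a"), (42, "b"), (5, "c")]
T = preprocess(S)
print(valRet(T, 42), valRet(T, 7))  # b N/A
```

The talk's question is how close the space of such a structure can get to the information-theoretic minimum OPT while keeping constant query time.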

In this talk, we will present a randomized dictionary data structure using \textrm{OPT}+\textrm{poly}\lg n+O(\lg\lg\cdots\lg U) bits of space and expected constant query time, assuming the query algorithm has access to an external data-independent lookup table of size n^{0.001}. Previously, even for membership queries and when U\leq n^{O(1)}, the best known data structure with constant query time required \textrm{OPT}+n/\textrm{poly} \lg n bits of space (due to Pagh [Pagh’01] and Pătraşcu [Pat’08]); with space at most \textrm{OPT}+n^{0.999}, its query time is O(\lg n).

TCS+ talk: Wednesday, April 8 — Ramon van Handel, Princeton

The next TCS+ talk will take place this coming Wednesday, April 8th at 1:00 PM Eastern Time (10:00 AM Pacific Time, 19:00 Central European Time, 17:00 UTC). Ramon van Handel from Princeton will speak about how “Rademacher type and Enflo type coincide” (abstract below).

You can reserve a spot as an individual or a group to join us live by signing up on the online form. (The link will also be posted on our website on the day of the talk, so people who did not sign up will still be able to join, until the maximum capacity of 300 seats is reached.) As usual, for more information about the TCS+ online seminar series and the upcoming talks, or to suggest a possible topic or speaker, please see the website.

Abstract: In Banach space theory, Rademacher type is an important invariant that controls many geometric and probabilistic properties of normed spaces. It is of considerable interest in various settings to understand to what extent such powerful tools extend to general metric spaces. A natural metric analogue of Rademacher type was proposed by Enflo in the 1960s-70s, and has found a number of interesting applications. Despite much work in the intervening years, however, the relationship between Rademacher type and Enflo type has remained unclear. This basic question is settled in joint work with Paata Ivanisvili and Sasha Volberg: in the setting of Banach spaces, Rademacher type and Enflo type coincide. The proof is based on a very simple but apparently novel insight on how to prove dimension-free inequalities on the Boolean cube. I will not assume any prior background in Banach space theory in the talk.
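For readers new to the area, the two notions being compared are (standard definitions, paraphrased here for convenience, not quoted from the talk): a Banach space X has Rademacher type p if for some constant T and all x_1, \dots, x_n \in X, with \varepsilon uniform in \{-1,1\}^n,

```latex
\mathbb{E}\Big\| \sum_{i=1}^n \varepsilon_i x_i \Big\|^p
  \;\le\; T^p \sum_{i=1}^n \|x_i\|^p ,
```

while X has Enflo type p if the same holds for arbitrary (not necessarily linear) functions of the signs: for all f : \{-1,1\}^n \to X,

```latex
\mathbb{E}\big\| f(\varepsilon) - f(-\varepsilon) \big\|^p
  \;\le\; T^p \sum_{i=1}^n
  \mathbb{E}\big\| f(\varepsilon) - f(\varepsilon_1,\dots,-\varepsilon_i,\dots,\varepsilon_n) \big\|^p .
```

Taking f(\varepsilon) = \sum_i \varepsilon_i x_i recovers the Rademacher inequality (both sides pick up a factor 2^p), so Enflo type trivially implies Rademacher type; the talk concerns the long-open converse.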

TCS+ talk: Wednesday, April 1 — Venkat Guruswami, CMU

The next TCS+ talk will take place this coming Wednesday, April 1st at 1:00 PM Eastern Time (10:00 AM Pacific Time, 19:00 Central European Time, 17:00 UTC). Venkat Guruswami from CMU will speak about “Arıkan meets Shannon: Polar codes with near-optimal convergence to channel capacity” (abstract below).

You can reserve a spot as an individual or a group to join us live by signing up on the online form. (The link will also be posted on our website on the day of the talk, so people who did not sign up will still be able to join, until the maximum capacity of 300 seats is reached.) As usual, for more information about the TCS+ online seminar series and the upcoming talks, or to suggest a possible topic or speaker, please see the website.

Abstract: We establish a constructive version of Shannon’s noisy coding theorem for binary codes, with information rate converging near-optimally fast to channel capacity as a function of code length. Specifically, let W be an arbitrary binary-input memoryless symmetric channel with Shannon capacity I(W), and fix any \delta >0. We construct, for any sufficiently small \varepsilon >0, binary linear codes of block length O(1/\varepsilon^{2+\delta}) and rate I(W)-\varepsilon that enable reliable communication on W with quasi-linear time encoding and decoding. Shannon’s theorem implies the existence of such codes (without efficient constructions or decoding) with block length O(1/\varepsilon^2). This quadratic dependence on the gap \varepsilon to capacity is known to be best possible. Previously such a construction was only known for the case of the erasure channel.

Our codes are a variant of Arıkan’s polar codes based on multiple carefully constructed local kernels, one for each intermediate channel that arises in the decoding. A key technical ingredient in the analysis is a strong converse to the noisy coding theorem that shows extreme unpredictability of even a single message bit when communicating via random linear codes at rates slightly above capacity.
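The erasure channel mentioned above is the one case where polarization is easy to see numerically. A minimal sketch (the textbook recursion for Arıkan's 2x2 kernel on a BEC, included here for intuition; it is not the paper's multi-kernel construction): each channel with erasure probability e splits into a worse channel (2e - e^2) and a better one (e^2).

```python
def polarize(eps, levels):
    """Erasure probabilities of the 2**levels synthesized channels after
    recursively applying Arikan's 2x2 polar transform to a BEC(eps)."""
    chans = [eps]
    for _ in range(levels):
        chans = [e for x in chans for e in (2 * x - x * x, x * x)]
    return chans

chans = polarize(0.5, 10)
good = sum(1 for e in chans if e < 1e-2) / len(chans)
print(f"fraction of nearly-noiseless channels: {good:.3f} (capacity = 0.5)")
```

Since (2e - e^2 + e^2)/2 = e, the average erasure probability is preserved at every level while the individual channels drift toward 0 or 1; the fraction of near-perfect channels approaches the capacity 1 - eps, and the talk's contribution is making this convergence near-optimally fast.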

Joint work with Andrii Riazanov and Min Ye.

 

TCS+ talk: Wednesday, March 25 — Dana Moshkovitz, UT Austin

The next TCS+ talk will take place this coming Wednesday, March 25th at 1:00 PM Eastern Time (10:00 AM Pacific Time, 18:00 Central European Time, 17:00 UTC). Dana Moshkovitz from UT Austin will speak about “Nearly Optimal Pseudorandomness From Hardness” (abstract below).

You can reserve a spot as an individual or a group to join us live by signing up on the online form. (The link will also be posted on our website on the day of the talk, so people who did not sign up will still be able to join, until the maximum capacity of 300 seats is reached.) As usual, for more information about the TCS+ online seminar series and the upcoming talks, or to suggest a possible topic or speaker, please see the website.

Abstract: Existing proofs that deduce BPP=P from circuit lower bounds convert randomized algorithms into deterministic algorithms with a large polynomial slowdown. We convert randomized algorithms into deterministic ones with little slowdown. Specifically, assuming exponential lower bounds against randomized single-valued nondeterministic (SVN) circuits, we convert any randomized algorithm over inputs of length n running in time t\geq n to a deterministic one running in time t^{2+\alpha} for an arbitrarily small constant \alpha > 0. Such a slowdown is nearly optimal, as, under complexity-theoretic assumptions, there are problems with an inherent quadratic derandomization slowdown. We also convert any randomized algorithm that errs rarely into a deterministic algorithm having a similar running time (with pre-processing). The latter derandomization result holds under weaker assumptions, of exponential lower bounds against deterministic SVN circuits. Our results follow from a new, nearly optimal, explicit pseudorandom generator. The construction uses, among other ideas, a new connection between pseudoentropy generators and locally list recoverable codes.

 

Updated Spring schedule: increased talk frequency, and joining as individuals

We hope you are all as safe and sound as possible these days, and will be for the weeks to come.

For those of you confined at home, it may be hard to remain connected with the TCS community or stay up-to-date with current research while in-person seminars and conferences are cancelled. To help mitigate this, we have decided to increase the frequency of our online seminars for now. This won’t restore normalcy, but every little thing counts.

Here is our current, now weekly, schedule. We may add more talks in the days to come, so keep an eye on our calendar, and don’t hesitate to suggest talks and results you are curious about.

  • 03/25: Dana Moshkovitz (UT Austin) on “Nearly Optimal Pseudorandomness From Hardness”
  • 04/01: Venkat Guruswami (CMU) on “Arıkan meets Shannon: Polar codes with near-optimal convergence to channel capacity”
  • 04/08: Rahul Ilango (MIT) on “NP-Hardness of Circuit Minimization for Multi-Output Functions”
  • 04/15: Ramon van Handel (Princeton) on “Rademacher type and Enflo type coincide”
  • 04/22: Huacheng Yu (Princeton) on “Nearly Optimal Static Las Vegas Succinct Dictionary”
  • 04/29: Sepideh Mahabadi (TTIC) on “Non-Adaptive Adaptive Sampling on Turnstile Streams”

We emphasize that you can (and should) join as individuals, from home: if you are interested in a talk, ask for a spot for yourself, no need to go to your institution. We have the live audience capacity for this to work, so don’t hesitate!

We will post individual announcements several days before each talk, including the abstracts and how to ask for a spot, as usual; and the talks will of course be available on YouTube and our website afterwards if you couldn’t make it to the live, interactive one. Crucially, we would like your feedback: not only talk suggestions as pointed out before, but also any idea or suggestion you may have of things we could do or implement, or of content you would like to see. You can send us feedback by emailing any of our organizers, or leaving a comment below.

Stay safe,

The TCS+ team

Note: you don’t need to sign up in advance (the link will be made public on our website the day of the talk, and you can just join then). We only encourage you to do so in order for us to get a sense of the audience size, but it’s optional: don’t feel you have to plan ahead!
