A sensitivity analysis in an observational study tests whether the qualitative conclusions of an analysis would change if we were to allow for the possibility of limited bias due to confounding. The design sensitivity of a hypothesis test quantifies the asymptotic performance of the test in a sensitivity analysis against a particular alternative. We propose a new, non-asymptotic, distribution-free test, the uniform general signed rank test, for observational studies with paired data, and examine its performance under Rosenbaum's sensitivity analysis model. Our test can be viewed as adaptively choosing from among a large underlying family of signed rank tests, and we show that the uniform test achieves design sensitivity equal to the maximum design sensitivity over the underlying family of signed rank tests. Our test thus achieves superior, and sometimes infinite, design sensitivity, indicating it will perform well in sensitivity analyses on large samples. We support this conclusion with simulations and a data example, showing that the advantages of our test extend to moderate sample sizes as well.
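For readers unfamiliar with sensitivity analysis, here is a minimal sketch of the basic idea using the classical sign test on paired data (not the uniform general signed rank test proposed in the paper). Under Rosenbaum's model with hidden bias at most Γ ≥ 1, the null chance that a pair favors treatment is at most Γ/(1+Γ), which yields a worst-case binomial p-value; the numbers below are illustrative, not from the paper.

```python
import math

def binom_sf(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def sign_test_sensitivity(n_pairs, n_positive, gamma):
    """Worst-case one-sided sign-test p-value under Rosenbaum's sensitivity
    model: with hidden bias at most gamma >= 1, the null probability that a
    pair favors treatment is at most gamma / (1 + gamma)."""
    return binom_sf(n_positive, n_pairs, gamma / (1.0 + gamma))

# 100 pairs, 65 of which favor treatment: significant assuming no hidden
# bias (gamma = 1), but the conclusion weakens as allowed bias grows.
for gamma in (1.0, 1.5, 2.0):
    p = sign_test_sensitivity(100, 65, gamma)
    print(f"gamma = {gamma}: worst-case p-value = {p:.4f}")
```

The largest Γ at which the worst-case p-value stays below the significance level measures how robust the qualitative conclusion is to unmeasured confounding; design sensitivity describes the limit of this quantity as the sample grows.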
A confidence sequence is a sequence of confidence intervals that is uniformly valid over an unbounded time horizon. In this paper, we develop confidence sequences whose widths go to zero, with non-asymptotic coverage guarantees under nonparametric conditions. Our technique draws a connection between the classical Cramér-Chernoff method for exponential concentration bounds, the law of the iterated logarithm (LIL), and the sequential probability ratio test---our confidence sequences extend the first to time-uniform concentration bounds; provide tight, non-asymptotic characterizations of the second; and generalize the third to nonparametric settings, including sub-Gaussian and Bernstein conditions, self-normalized processes, and matrix martingales. We illustrate the generality of our proof techniques by deriving an empirical-Bernstein bound growing at a LIL rate, as well as a novel upper LIL for the maximum eigenvalue of a sum of random matrices. Finally, we apply our methods to covariance matrix estimation and to estimation of sample average treatment effect under the Neyman-Rubin potential outcomes model.
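To make the definition concrete, here is a small sketch of one classical member of this family: Robbins' normal-mixture boundary for a sum of 1-sub-Gaussian increments, whose implied confidence interval for the mean shrinks to zero at roughly a LIL rate. This is a textbook construction used for illustration, not the paper's general machinery; the parameters below are arbitrary choices.

```python
import math
import random

def mixture_bound(n, rho=1.0, alpha=0.05):
    """Robbins' normal-mixture boundary: if S_n is a sum of n independent
    mean-zero 1-sub-Gaussian increments, then with probability >= 1 - alpha,
    |S_n| < mixture_bound(n) simultaneously for ALL n >= 1 (time-uniform)."""
    return math.sqrt((n + rho) * (math.log((n + rho) / rho) + 2 * math.log(1 / alpha)))

# The implied confidence interval for the mean has half-width u(n)/n -> 0:
print([round(mixture_bound(n) / n, 3) for n in (10, 100, 10_000)])

# Monte Carlo check of time-uniform coverage with standard normal increments.
random.seed(0)
crossed = 0
for _ in range(1000):
    s = 0.0
    for n in range(1, 1001):
        s += random.gauss(0, 1)
        if abs(s) >= mixture_bound(n):
            crossed += 1
            break
print("crossing fraction:", crossed / 1000)  # at most about alpha = 0.05
```

Contrast this with a fixed-sample Chernoff bound, which holds at one preselected n only: the mixture boundary pays a log factor in width in exchange for validity at every stopping time.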
This paper develops a class of exponential bounds for the probability that a martingale sequence crosses a time-dependent linear threshold. Our key insight is that it is both natural and fruitful to formulate exponential concentration inequalities in this way. We illustrate this point by presenting a single assumption and a single theorem that together strengthen many tail bounds for martingales, including classical inequalities (1960-80) by Bernstein, Bennett, Hoeffding, and Freedman; contemporary inequalities (1980-2000) by Shorack and Wellner, Pinelis, Blackwell, van de Geer, and de la Peña; and several modern inequalities (post-2000) by Khan, Tropp, Bercu and Touati, Delyon, and others. In each of these cases, we give the strongest and most general statements to date, quantifying the time-uniform concentration of scalar, matrix, and Banach-space-valued martingales, under a variety of nonparametric assumptions in discrete and continuous time. In doing so, we bridge the gap between existing line-crossing inequalities, the sequential probability ratio test, the Cramér-Chernoff method, self-normalized processes, and other parts of the literature.
We develop nonasymptotic confidence sequences for average treatment effect estimation, offering uniform coverage over an unbounded time horizon and achieving arbitrary precision. These confidence sequences enable an investigator to continuously monitor an experiment and stop based on observed results without invalidating inferential guarantees. We build upon exponential concentration inequalities to guarantee coverage at all sample sizes under a randomization inference framework with fixed potential outcomes, giving a sequence of confidence intervals for a changing sequence of finite-population estimands.
It is now commonplace for organizations with websites or mobile apps to run randomized controlled experiments, or “A/B tests” as they’re often called in industry. Such experiments provide a reliable way to determine which product changes lead to the most successful user interactions. In this lecture we will discuss why randomized experiments are so important, talk about some of the key design choices that go into A/B tests, and get a brief introduction to sequential monitoring of experimental results.
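A quick simulation shows why sequential monitoring needs special care: if you repeatedly apply an ordinary fixed-sample test while the data accumulate, the false-positive rate is badly inflated. This A/A-test sketch (no true effect, arbitrary illustrative parameters) is the kind of motivating example the lecture has in mind.

```python
import random

random.seed(1)
Z_CRIT = 1.96                 # fixed-sample two-sided critical value, alpha = 0.05
n_trials, n_peeks, batch = 500, 200, 5

false_positives = 0
for _ in range(n_trials):
    s, n = 0.0, 0
    for _ in range(n_peeks):
        for _ in range(batch):       # a new batch of observations arrives
            s += random.gauss(0, 1)  # A/A test: the true effect is zero
            n += 1
        z = s / n**0.5               # z-statistic, known unit variance
        if abs(z) > Z_CRIT:          # looks "significant" -> stop and ship
            false_positives += 1
            break

rate = false_positives / n_trials
print(f"false-positive rate with repeated peeking: {rate:.2f}")  # far above 0.05
```

Sequential methods such as confidence sequences replace the fixed-sample critical value with a time-uniform boundary, so that stopping the moment a result looks significant no longer breaks the error guarantee.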
The management of dependencies among classes is one of the most important (and underappreciated) aspects of object-oriented programming. In this talk I make a case for composition over inheritance and give a brief introduction to dependency injection. I then spend the rest of the talk outlining a step-by-step, Fowler-style method for migrating incrementally from a system built upon static binding of global state to one that uses dependency injection exclusively.
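As a minimal Python sketch of the end state of that migration (names are illustrative, not from the talk): a class that composes a clock received through its constructor, rather than inheriting from a clock class or reaching directly for the global `time.time()`.

```python
import time

class SystemClock:
    """Production dependency: wraps the global time.time() behind an object."""
    def now(self):
        return time.time()

class FakeClock:
    """Test double that can be injected wherever a clock is expected."""
    def __init__(self, t=0.0):
        self.t = t
    def now(self):
        return self.t
    def advance(self, dt):
        self.t += dt

class SessionTimer:
    """Composes a clock received via constructor injection, instead of
    inheriting from a clock class or calling time.time() directly."""
    def __init__(self, clock):
        self.clock = clock
        self.start = clock.now()
    def elapsed(self):
        return self.clock.now() - self.start

clock = FakeClock(100.0)
timer = SessionTimer(clock)
clock.advance(30.0)
print(timer.elapsed())  # → 30.0
```

Because `SessionTimer` never touches global state, production code passes it a `SystemClock` while tests pass a `FakeClock`, with no monkey-patching required.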
In this talk I share some of my experiences developing a large-scale web crawler using Python and AWS. I give an overview of the Mercator web crawler, share some tips and hard-earned wisdom on implementing Mercator with Python and AWS, and end with some real-world results from our crawls.
Recorded in late 2013/early 2014 while I was playing with a couple of awesome musicians in Little Heart. I played the lead guitar parts, wrote some of them, and coiled and uncoiled a lot of microphone cables.