Illustrating Frostman measures


In Week 3 of the Geometric Measure Theory course I’m teaching, an important class of objects we discuss is Frostman measures. An s-Frostman measure is a measure $ \mu$ on Euclidean space such that $ \mu(B(x,r))\leq Cr^s$ for every ball $ B(x,r)$. In this note I show how I generated illustrations of them using Matplotlib.

Frostman measures are crucial for estimating the Hausdorff measure and dimension of a set. In particular, if $ E$ is a set and $ \mu$ is an s-Frostman measure, then $ \mu(E)\lesssim \mathscr{H}^s(E)$. Hence, if such a measure exists and assigns positive measure to the set $ E$, its s-Hausdorff measure must be positive, and so its dimension must be at least s. This is a very convenient way of bounding dimension from below, since it is usually easier to construct such a measure than to prove bounds on dimension by hand.
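The inequality above is the mass distribution principle, and the proof is one line: given a cover of $ E$ by sets $ U_i$ of small diameter, pick points $ x_i\in U_i$, so that each $ U_i\subset B(x_i,\operatorname{diam} U_i)$ and

```latex
\mu(E) \;\le\; \sum_i \mu(U_i)
       \;\le\; \sum_i \mu\bigl(B(x_i,\operatorname{diam} U_i)\bigr)
       \;\le\; C \sum_i (\operatorname{diam} U_i)^s .
```

Taking the infimum over all such covers (and letting the diameters shrink) gives $ \mu(E)\leq C\,\mathscr{H}^{s}(E)$.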

For my class, I wanted to illustrate what Frostman measures actually look like. Frostman’s Lemma says that any Borel set of positive s-measure supports a nontrivial s-Frostman measure, and the proof for compact sets as shown in Mattila’s Geometry of Sets and Measures in Euclidean Space is pretty constructive, in the sense that it lays out an algorithm that approximates the measure.

The algorithm in $ \mathbb{R}^{d}$ goes roughly as follows: without loss of generality, assume our set is contained in the unit cube $ [0,1]^d$. We fix some positive integer $ m$ and define our first measure $ \mu_m^m$ to be a measure such that $ \mu_m^m(Q)=\ell(Q)^s$ for any dyadic cube of sidelength $ \ell(Q)=2^{-m}$ that intersects our set $ E$; on cubes that miss $ E$, the measure is zero. We can pick this measure to be, for example,

$$ \sum_{\ell(Q)= 2^{-m}\atop Q\cap E\neq\emptyset} 2^{m(d-s)}\chi_{Q}\,dx$$

where $ dx$ denotes Lebesgue measure (note that the d-dimensional Lebesgue measure of a dyadic cube $ Q$ of sidelength $ 2^{-m}$ is $ 2^{-md}$, while $ \ell(Q)^s = 2^{-ms}$, so each such cube carries mass $ 2^{m(d-s)}\cdot 2^{-md}=2^{-ms}$).
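As a sanity check, here is a minimal numerical sketch of this initial measure on a dyadic grid, taking d = 2 and representing the cubes of sidelength $ 2^{-m}$ as cells of a $ 2^m\times 2^m$ array (the grid representation and variable names are mine, not the notebook’s):

```python
import numpy as np

# Toy version of mu_m^m in d = 2: mark the dyadic cubes of sidelength
# 2^-m that meet the set E, then put constant density 2^{m(d-s)} on them.
m, d, s = 5, 2, 1.0
n = 2 ** m
occupied = np.zeros((n, n), dtype=bool)
occupied[10:14, 10:14] = True  # a toy "set E" occupying a few cubes

density = np.where(occupied, 2.0 ** (m * (d - s)), 0.0)
cell_volume = 2.0 ** (-m * d)        # Lebesgue measure of each cube
mass = density * cell_volume         # mass that mu_m^m gives each cube

# Each occupied cube carries mass ell(Q)^s = 2^{-ms}, as claimed:
assert np.allclose(mass[occupied], 2.0 ** (-m * s))
```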

Now we work our way backwards in scale, defining an updated measure $ \mu_{m}^{m-1}$ as follows: if there is a cube $ Q$ of sidelength $ 2^{-(m-1)}$ for which $ \mu_{m}^{m}(Q)>\ell(Q)^{s}$, then we define $ \mu_{m}^{m-1}|_{Q} = \frac{\ell(Q)^{s}}{\mu_{m}^{m}(Q)} \mu_{m}^{m}|_Q$ (note that after this correction, we have $ \mu_{m}^{m-1}(Q)=\ell(Q)^{s}$); otherwise, we set $ \mu_{m}^{m-1}|_{Q} =\mu_{m}^{m}|_Q$ (that is, we don’t alter the measure in this cube). We repeat this for the cubes of sidelength $ 2^{-(m-2)}$: if there is a cube $ Q$ of sidelength $ 2^{-(m-2)}$ for which $ \mu_{m}^{m-1}(Q)>\ell(Q)^{s}$, then we define $ \mu_{m}^{m-2}|_{Q} = \frac{\ell(Q)^{s}}{\mu_{m}^{m-1}(Q)} \mu_{m}^{m-1}|_Q$ (so now $ \mu_{m}^{m-2}(Q)=\ell(Q)^{s}$); otherwise, we set $ \mu_{m}^{m-2}|_{Q} =\mu_{m}^{m-1}|_Q$. We continue in this way, defining $ \mu_{m}^{m-3}$ and so on, until we reach the top cube $ [0,1]^d$ and obtain a measure $ \mu_{m}^{0}$.

By construction, this measure satisfies $ \mu_{m}^{0}(Q)\leq \ell(Q)^{s}$ for every dyadic cube of sidelength at least $ 2^{-m}$: whenever our sequence of measures violated this inequality on a cube, we rescaled the measure inside that cube so that the inequality holds with equality. Finally, we take a weak limit of the measures $ \mu_m^{0}$ as $ m$ goes to infinity, and that gives us our Frostman measure. If $ E$ has positive s-measure, one can show that this limit measure assigns positive measure to $ E$.
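The whole finite-level construction fits in a few lines. Below is a sketch in plain NumPy for d = 2, with the set represented by a boolean occupancy grid at scale $ 2^{-m}$; the function name and representation are my own, not the notebook’s gmt-based code:

```python
import numpy as np

def frostman_m0(occupied: np.ndarray, s: float) -> np.ndarray:
    """Sketch of the finite-level measure mu_m^0 in d = 2.

    `occupied` is a 2^m x 2^m boolean grid marking the dyadic cubes of
    sidelength 2^-m that meet E.  Returns the mass that mu_m^0 assigns
    to each such cube.  (Helper name and signature are illustrative.)
    """
    n = occupied.shape[0]
    m = int(np.log2(n))
    # mu_m^m: every occupied cube starts with mass ell(Q)^s = 2^{-ms}.
    mass = np.where(occupied, 2.0 ** (-m * s), 0.0)
    # Work backwards in scale: cubes of sidelength 2^-k, k = m-1, ..., 0.
    for k in range(m - 1, -1, -1):
        b = 2 ** (m - k)            # fine cells per side of a coarse cube
        cap = 2.0 ** (-k * s)       # ell(Q)^s at this scale
        for i in range(0, n, b):
            for j in range(0, n, b):
                q = mass[i:i + b, j:j + b]   # view into the coarse cube Q
                total = q.sum()
                if total > cap:              # too much mass: rescale in Q
                    q *= cap / total
    return mass
```

For example, if every cube is occupied and s = 1, each coarsening step halves the total mass, and the final measure spreads total mass 1 uniformly over the grid.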

Note that the support of $ \mu$ lies in $ E$, but in the places where our set $ E$ is in some sense distributed $ t$-dimensionally with $ t>s$, the size of the measure is suppressed (these are the cubes where we had to shrink our measure because there was too much mass). In other words, we expect an $ s$-Frostman measure to be large around parts of the set that are distributed like a $ t$-dimensional set for $ t\leq s$.

As you can see, this gives a very simple algorithm for computing $ \mu_{m}^{0}$. In this Colab notebook you’ll see how I went about it. The one thing of note is that I use a module called “gmt” that contains classes for geometric objects like Cubes and Intervals, each with some methods that make the code a bit more readable.

In the figure below, I generate a heatmap of the density of the level m=5 Frostman measure (so $ \mu_{5}^{0}$ in the above proof) for a particular set and for four values of the exponent: s=.1, 1, 1.5, and 2.

The set we consider consists of points randomly distributed according to a multivariate Gaussian distribution near the center of the square, essentially simulating a ball (which looks 2-dimensional), plus some noise (i.e. many isolated points). Additionally, we tack on a line passing through the set, which is distributed in a 1-dimensional way.
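For concreteness, here is one way such a point set could be generated; the sample sizes, covariance, and the line’s placement are illustrative guesses, not the notebook’s exact values:

```python
import numpy as np

rng = np.random.default_rng(0)

# Gaussian "blob" near the center of the unit square (looks 2-dimensional).
blob = rng.multivariate_normal([0.5, 0.5], 0.01 * np.eye(2), size=500)
# Isolated noise points scattered over the square.
noise = rng.uniform(0.0, 1.0, size=(50, 2))
# A line segment through the square (distributed 1-dimensionally).
t = rng.uniform(0.0, 1.0, size=200)
line = np.column_stack([t, 0.2 + 0.6 * t])

points = np.vstack([blob, noise, line])
# Keep only the points that landed inside [0,1]^2.
points = points[(points >= 0).all(axis=1) & (points <= 1).all(axis=1)]
```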

For s=.1, the Frostman measure concentrates around the low-dimensional parts of the set, like the isolated points, and is small along the line and the blob, where things are distributed at least 1-dimensionally. For s=1, the line (being 1-dimensional) now carries large mass with respect to the Frostman measure, but the interior of the blob still does not. For s=1.5, the measure gives large mass to the central blob, though mostly near its boundary, since the set is more concentrated near the center. Finally, the 2-Frostman measure just returns a constant density, so the measure is Lebesgue measure restricted to the 5th-generation squares containing our set.
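Here is a self-contained sketch of how such a figure can be produced with Matplotlib, using a toy occupancy grid (a square “blob” plus a diagonal “line”) in place of the actual point set, and a compact reimplementation of the finite-level construction; all names and parameters here are illustrative:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")          # render off-screen; drop this in a notebook
import matplotlib.pyplot as plt

m, d = 5, 2
n = 2 ** m
occ = np.zeros((n, n), dtype=bool)
occ[8:24, 8:24] = True                      # toy "blob"
occ[np.arange(n), np.arange(n)] = True      # toy "line"

def mu_m0_mass(s):
    """Mass of mu_m^0 on each cube of sidelength 2^-m (sketch, d = 2)."""
    mass = np.where(occ, 2.0 ** (-m * s), 0.0)
    for k in range(m - 1, -1, -1):
        b, cap = 2 ** (m - k), 2.0 ** (-k * s)
        for i in range(0, n, b):
            for j in range(0, n, b):
                q = mass[i:i + b, j:j + b]
                if q.sum() > cap:
                    q *= cap / q.sum()
    return mass

fig, axes = plt.subplots(1, 4, figsize=(16, 4))
for ax, s in zip(axes, [0.1, 1.0, 1.5, 2.0]):
    density = mu_m0_mass(s) / 2.0 ** (-m * d)   # mass per cube -> density
    ax.imshow(density, origin="lower", cmap="viridis")
    ax.set_title(f"s = {s}")
fig.savefig("frostman_heatmaps.png")
```

Note that for s = 2 the caps $ \ell(Q)^{s}=\ell(Q)^{d}$ are never exceeded, so no rescaling happens and the density comes out constant on the occupied squares, matching the last panel described above.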