
(Image credit David Zerba, 2013)
— GRAPHS OF LARGE GIRTH —
To refresh on a bit of graph theory, we define a k-regular graph to be a graph in which every vertex has k neighbors. Thus 0-regular graphs are collections of disconnected vertices, while 1-regular graphs consist of disconnected edges. A 2-regular graph consists of disconnected cycles (or perhaps infinite chains, if the graph is infinite).
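These small cases are easy to check by machine. The sketch below (function and variable names are my own, purely illustrative) verifies $k$-regularity for the three degenerate families just described: an independent set, a perfect matching, and a cycle graph.

```python
# Degree checks for the three degenerate families just described.
# Graphs are plain adjacency dictionaries; all names are illustrative.

def is_k_regular(adj, k):
    """True when every vertex has exactly k neighbors."""
    return all(len(nbrs) == k for nbrs in adj.values())

isolated = {v: set() for v in range(4)}                    # 0-regular
matching = {0: {1}, 1: {0}, 2: {3}, 3: {2}}                # 1-regular
n = 6
cycle = {v: {(v - 1) % n, (v + 1) % n} for v in range(n)}  # 2-regular: C_6

print(is_k_regular(isolated, 0), is_k_regular(matching, 1), is_k_regular(cycle, 2))
# True True True
```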
For a general graph $G$, we define the girth of $G$, written $\mathrm{girth}(G)$, to be the length of the shortest non-trivial cycle in $G$. If no cycles of finite length exist, as is necessarily the case with 0-regular and 1-regular graphs, the girth is said to be infinite. In the case of 2-regular graphs on $n$ vertices, the trivial upper bound $\mathrm{girth}(G) \leq n$ is attained by the cycle graph on $n$ vertices.
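The definition of girth is also easy to explore computationally. The sketch below computes the girth by breadth-first search from each vertex (a standard textbook algorithm, not taken from any paper discussed here); as a test case it recovers the well-known fact that the Petersen graph is 3-regular with girth 5.

```python
from collections import deque

# Girth via BFS from every vertex: a non-tree edge seen at depths d(u)
# and d(w) closes a cycle of length d(u) + d(w) + 1 through the root,
# and minimizing over all roots yields the girth.

def girth(adj):
    best = float("inf")
    for root in adj:
        dist, parent = {root: 0}, {root: None}
        q = deque([root])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w], parent[w] = dist[u] + 1, u
                    q.append(w)
                elif w != parent[u]:
                    # non-tree edge: closes a cycle through the root
                    best = min(best, dist[u] + dist[w] + 1)
    return best

# The Petersen graph: outer 5-cycle, inner pentagram, five spokes.
petersen = {v: set() for v in range(10)}
for i in range(5):
    petersen[i] |= {(i + 1) % 5, (i - 1) % 5, i + 5}
    petersen[i + 5] |= {5 + (i + 2) % 5, 5 + (i - 2) % 5, i}

print(girth(petersen))  # 5
```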
The story is quite changed if we consider finite $k$-regular graphs with $k \geq 3$. Here, one can show that the girth of a $k$-regular graph on $n$ vertices satisfies
\[ \mathrm{girth}(G) \leq 2\log_{k-1}(n) + O(1), \]
and Erdős and Sachs showed that logarithmic growth (with a constant worse than 2) is actually attainable. The best known constant in these estimates is currently 4/3, as happens for certain trivalent sextet graphs and for the Ramanujan graphs of Lubotzky, Phillips, and Sarnak. (In particular, it's not known whether or not the constant 2 is sharp.)
A construction due to Brooks and Makover relates graphs of large girth to hyperbolic surfaces of large systole. (Here, the systole of a hyperbolic surface is the shortest homotopically non-trivial (and non-peripheral) loop on the surface.) This construction is perhaps unsurprising, since the dual graph of a surface triangulation is 3-regular.
— BOUNDING THE NUMBER OF MATRICES OF SMALL TRACE —
In 2016, Bram Petri and I submitted a paper that builds upon the machinery of Erdős and Sachs to construct a family of genus $g$ surfaces for which the length of the systole grows at least logarithmically in $g$. (This lower bound on the size of the systole is sharp up to constants.)
A key ingredient in our paper (and where I, as a number theorist, finally made myself useful) is a count of matrices with non-negative entries and bounded trace. To be specific, let $\Gamma$ denote the semigroup of $\mathrm{SL}_2(\mathbb{Z})$ consisting of matrices with non-negative entries, and let
\[ N(t) = \#\{\gamma \in \Gamma : \operatorname{tr} \gamma = t\}. \]
This will actually be infinite for $t = 2$, since our semigroup contains
\[ \begin{pmatrix} 1 & n \\ 0 & 1 \end{pmatrix} \]
for all $n \geq 0$. Otherwise, $N(t)$ is finite for $t \geq 3$. Moreover, we see that $N(t)$ has a nice closed form in terms of the divisor function:

Proposition 1 (PW, Proposition 3.2): For $t \geq 3$, we have
\[ N(t) = \sum_{a=1}^{t-1} d\big(a(t-a)-1\big). \]
Here, the trivial bound $d(n) \ll_\epsilon n^\epsilon$ (see Apostol's Introduction to Analytic Number Theory, Theorem 13.12, e.g.) gives $N(t) \ll_\epsilon t^{1+\epsilon}$, and it's reasonable to assume that averaging over the many values of the divisor function would show $N(t) \sim C\, t \log t$ for some constant $C$. Unfortunately, I suspect that showing this would be quite hard, as averaging arithmetic functions over the values of a quadratic polynomial is notoriously difficult. (Compare, for example, the problem of finding infinitely many primes in the outputs of a quadratic polynomial, and in particular, Hardy-Littlewood's Conjecture F.)
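Assuming the divisor-sum formula in the form reconstructed above (fixing the diagonal $(a, t-a)$ forces $bc = a(t-a)-1$, so each choice of diagonal contributes $d(a(t-a)-1)$ matrices), Proposition 1 is easy to sanity-check against a brute-force search over matrix entries. A quick sketch, with all names illustrative:

```python
# Testing the divisor-sum formula for N(t), the number of matrices
# (a, b; c, d) in SL_2(Z) with non-negative entries and trace t >= 3.

def num_divisors(n):
    return sum(1 for k in range(1, n + 1) if n % k == 0)

def N_formula(t):
    return sum(num_divisors(a * (t - a) - 1) for a in range(1, t))

def N_bruteforce(t):
    """Enumerate entries directly: a + d = t, ad - bc = 1, entries >= 0.
    Since bc = a(t-a) - 1 <= t^2/4, searching b, c up to t^2//4 + 1 suffices."""
    bound = t * t // 4 + 1
    count = 0
    for a in range(1, t):
        dd = t - a
        for b in range(bound + 1):
            for c in range(bound + 1):
                if a * dd - b * c == 1:
                    count += 1
    return count

print([N_formula(t) for t in range(3, 8)])  # [2, 6, 8, 14, 14]
```

For instance, $N(3) = 2$ comes from the two matrices $\left(\begin{smallmatrix}1&1\\1&2\end{smallmatrix}\right)$ and $\left(\begin{smallmatrix}2&1\\1&1\end{smallmatrix}\right)$.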
For averages of divisor sums over linear polynomials, far more is known. In the absolute simplest case, we have
\[ \sum_{n \leq x} d(n) = x \log x + (2\gamma - 1)x + O\big(x^\theta\big) \tag{1} \]
for some $\theta < 1$, in which $\gamma$ denotes the Euler-Mascheroni constant. Dirichlet proved that $\theta = 1/2$ is admissible, and incremental reduction of $\theta$ towards the conjecturally-optimal exponent $\theta = 1/4 + \epsilon$ is known as the Dirichlet divisor problem. The current bound is due to Huxley (2003), who gives $\theta = 131/416$. (This is also the record exponent in the Gauss Circle Problem, which reflects the similarity of these problems. In fact, Huxley holds both records.)
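Dirichlet's estimate (1) is pleasant to see numerically. The sketch below evaluates the divisor partial sum exactly via the identity $\sum_{n \le x} d(n) = \sum_{k \le x} \lfloor x/k \rfloor$ and compares it against the main term; the discrepancy should be of roughly square-root size.

```python
import math

# Dirichlet's estimate (1), numerically: the divisor partial sum equals
# x log x + (2*gamma - 1) x up to an error of roughly sqrt(x) size.

gamma = 0.5772156649015329  # Euler-Mascheroni constant

def divisor_partial_sum(x):
    # sum_{n <= x} d(n) = sum_{k <= x} floor(x/k), an exact O(x) identity
    return sum(x // k for k in range(1, x + 1))

x = 10**5
err = divisor_partial_sum(x) - (x * math.log(x) + (2 * gamma - 1) * x)
print(err, math.sqrt(x))  # the error is far smaller than x itself
```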
Fortunately for Bram and me, a bound for the partial sums of $N(t)$ was sufficient for the purposes of our paper. Explicitly, we need only consider
\[ \sum_{t \leq T} N(t). \]
The trivial bound implies that $\sum_{t \leq T} N(t) \ll_\epsilon T^{2+\epsilon}$, but we expect to remove the $\epsilon$-dependence via averaging. Interchanging the order of summation (and substituting $m = t - a$), we write
\[ \sum_{t \leq T} N(t) = \sum_{\substack{a, m \geq 1 \\ 3 \leq a+m \leq T}} d(am - 1) = \sum_{m=2}^{T-1} d(m-1) + \sum_{a=2}^{T-1} \sum_{m=1}^{T-a} d(am - 1). \]
The first term is $O(T \log T)$ by the classic Dirichlet divisor estimate, and in the second we've reduced our work to the study of the divisor function in averages over arithmetic progressions (the arguments $am - 1$ run through the residue class $-1 \bmod a$). Luckily, this has been studied before; for example, Fouvry and Iwaniec give that
\[ \sum_{\substack{n \leq x \\ n \equiv c \bmod a}} d(n) = \frac{1}{\varphi(a)} \sum_{\substack{n \leq x \\ (n,a)=1}} d(n) + O\big(x^{1/3+\epsilon}\big), \]
uniformly for $(c,a) = 1$ and $a$ in a wide range, as a consequence of the fact that the divisor function equidistributes over the (primitive) residue classes.
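This equidistribution can be illustrated numerically: for a fixed small modulus, the divisor sum over each primitive residue class should be close to the $1/\varphi(a)$-share of the coprime total. The modulus $a = 12$ and the cutoff below are arbitrary choices of mine.

```python
import math

# Divisor sums over primitive residue classes c mod a, compared against
# the 1/phi(a) share of the full coprime sum.

def sieve_divisors(x):
    d = [0] * (x + 1)
    for k in range(1, x + 1):
        for m in range(k, x + 1, k):
            d[m] += 1
    return d

x, a = 10**5, 12
d = sieve_divisors(x)
phi_a = sum(1 for r in range(1, a + 1) if math.gcd(r, a) == 1)

coprime_total = sum(d[n] for n in range(1, x + 1) if math.gcd(n, a) == 1)
share = coprime_total / phi_a

ratios = [
    sum(d[n] for n in range(c, x + 1, a)) / share
    for c in range(1, a) if math.gcd(c, a) == 1
]
print(ratios)  # each ratio should be close to 1
```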
Tracking the propagation of this error term and adjusting some limits of summation for simplicity, we see that
\[ \sum_{t \leq T} N(t) = \sum_{a \leq T} \frac{1}{\varphi(a)} \sum_{\substack{n \leq a(T-a) \\ (n,a)=1}} d(n) + O\big(T^{5/3+\epsilon}\big). \]
The next step in our evaluation is estimation of the inner sum. If we open up the divisor function in the line above, we see that
\[ \sum_{\substack{n \leq x \\ (n,a)=1}} d(n) = \sum_{\substack{b \leq x \\ (b,a)=1}} \sum_{\substack{c \leq x/b \\ (c,a)=1}} 1. \tag{2} \]
The innermost sum just counts the number of integers less than $x/b$ which are relatively prime to $a$, which suggests a heuristic main term of the form $\frac{\varphi(a)}{a} \cdot \frac{x}{b}$ for this sum. The shape of the error is harder to guess, but is nevertheless small:
Proposition 2: We have
\[ \sum_{\substack{n \leq x \\ (n,a)=1}} 1 = \frac{\varphi(a)}{a}\, x + O\big(d(a)\big). \]

Proof: Standard tricks give
\[ \sum_{\substack{n \leq x \\ (n,a)=1}} 1 = \sum_{n \leq x} \sum_{d \mid (n,a)} \mu(d) = \sum_{d \mid a} \mu(d) \left\lfloor \frac{x}{d} \right\rfloor = x \sum_{d \mid a} \frac{\mu(d)}{d} - \sum_{d \mid a} \mu(d) \left\{ \frac{x}{d} \right\}, \]
in which $\mu$ denotes the Möbius function, $\lfloor x \rfloor$ denotes the integer part of $x$, and $\{x\}$ denotes the fractional part of $x$. The Dirichlet convolution identity
\[ \sum_{d \mid a} \frac{\mu(d)}{d} = \frac{\varphi(a)}{a} \]
expresses the first term at right as $\frac{\varphi(a)}{a} x$. To bound the second term, we need only remark that each of the $d(a)$ terms in the divisor sum is $O(1)$. $\square$
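Proposition 2 is easy to stress-test: the proof actually shows that the error is at most $2^{\omega(a)} \leq d(a)$ in absolute value, since each fractional part lies in $[0,1)$. A quick exhaustive check over small $a$ and several cutoffs (names illustrative):

```python
import math

# Stress test of Proposition 2: the count of n <= x with gcd(n, a) = 1
# equals x * phi(a)/a up to an error of absolute size at most d(a).

def phi(a):
    return sum(1 for r in range(1, a + 1) if math.gcd(r, a) == 1)

def num_divisors(a):
    return sum(1 for k in range(1, a + 1) if a % k == 0)

checked = 0
for a in range(1, 60):
    for x in (10, 100, 1000):
        count = sum(1 for n in range(1, x + 1) if math.gcd(n, a) == 1)
        assert abs(count - x * phi(a) / a) <= num_divisors(a)
        checked += 1
print(checked, "cases verified")
```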
If you're just looking for a slightly-improved upper bound (as was the case in my paper with Bram), it's enough to bound the inner sum in (2) via Proposition 2 and write
\[ \sum_{\substack{n \leq x \\ (n,a)=1}} d(n) = \sum_{\substack{b \leq x \\ (b,a)=1}} \left( \frac{\varphi(a)}{a} \cdot \frac{x}{b} + O\big(d(a)\big) \right). \]
Ignoring coprimality in the $b$-sum for simplicity and applying the bound $\sum_{b \leq x} b^{-1} \ll \log x$ gives the following upper bound for the partial sums of $N(t)$:

Theorem 3 (PW, Proposition 3.4): We have $\sum_{t \leq T} N(t) \ll T^2 \log T$.
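Taking this bound for a numerical spin: the sketch below computes the partial sums $\sum_{t \le T} N(t)$ from the divisor-sum formula for $N(t)$ as reconstructed above, and watches the ratio against $T^2 \log T$, which should stay bounded. The cutoffs are arbitrary, and the limiting constant is not computed here.

```python
import math

# Growth rate of the partial sums of N(t), with
# N(t) = sum_{a=1}^{t-1} d(a(t-a) - 1) for t >= 3.

def sieve_divisors(x):
    """d[n] = number of divisors of n, for 1 <= n <= x."""
    d = [0] * (x + 1)
    for k in range(1, x + 1):
        for m in range(k, x + 1, k):
            d[m] += 1
    return d

def M(T, d):
    """sum over 3 <= t <= T of N(t)."""
    return sum(d[a * (t - a) - 1] for t in range(3, T + 1) for a in range(1, t))

Tmax = 300
d = sieve_divisors(Tmax * Tmax // 4 + 1)
ratios = [M(T, d) / (T * T * math.log(T)) for T in (100, 200, 300)]
print(ratios)  # bounded and slowly varying
```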
— DIRICHLET’S HYPERBOLA METHOD —
There's some clear room for improvement here, and while looking over this paper recently I decided to see what could be shown with a little more work. The core inefficiency in the previous section concerns the way we opened up the divisor sum $\sum_{n \leq x,\, (n,a)=1} d(n)$ in line (2). In essence, our treatment resembles the calculation
\[ \sum_{n \leq x} d(n) = \sum_{b \leq x} \sum_{c \leq x/b} 1 = \sum_{b \leq x} \left( \frac{x}{b} + O(1) \right) = x \log x + O(x), \]
which gives a quick proof of the fact that the main term in (1) is $x \log x$ but introduces a large error and obscures some secondary terms. The same is true in our estimate above: we produce a main term (for simplicity, just an upper bound) but can say little about the error.
As it happens, our problem can be solved using the same tool that Dirichlet used to extract refined asymptotics in (1): the hyperbola method. (So-called because the divisors of $n$ are in bijection with lattice points on the hyperbola $bc = n$.)

Very briefly, the hyperbola method exploits the $b \leftrightarrow c$ symmetry of the hyperbola $bc = n$ to count the number of lattice points beneath it. Combinatorially, this corresponds to counting only the smallest factor in a pair of divisors, which is of course no larger than $\sqrt{n}$. Explicitly, one has
\[ \sum_{n \leq x} d(n) = \sum_{bc \leq x} 1 = 2 \sum_{b \leq \sqrt{x}} \sum_{b < c \leq x/b} 1 + \sum_{b = c \leq \sqrt{x}} 1 = 2 \sum_{b \leq \sqrt{x}} \left( \left\lfloor \frac{x}{b} \right\rfloor - b \right) + \lfloor \sqrt{x} \rfloor. \]
Here, the inner sum of the double sum counts the number of lattice points under the hyperbola $bc = x$ that satisfy $c > b$. As we vary $b$, this counts about half of the points under the hyperbola; to finish, we double our estimate to account for the points with $b > c$ and add a final sum to treat the $b = c$ case. (This is presented in greater detail in Apostol, Section 3.5.)
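The hyperbola method is easy to dramatize in code: the folded count agrees exactly with the direct $O(x)$-term evaluation of the divisor partial sum, while using only $O(\sqrt{x})$ terms. (Below the folded form is written as $2\sum_{b \le \sqrt{x}} \lfloor x/b \rfloor - \lfloor \sqrt{x} \rfloor^2$, an equivalent rearrangement of the display above.)

```python
import math

# Dirichlet's hyperbola method as an exact lattice-point identity:
#   sum_{n<=x} d(n) = 2 * sum_{b<=sqrt(x)} floor(x/b) - floor(sqrt(x))^2.

def direct(x):
    # O(x) terms: sum_{n<=x} d(n) = sum_{b<=x} floor(x/b)
    return sum(x // b for b in range(1, x + 1))

def hyperbola(x):
    # O(sqrt(x)) terms: fold along the line b = c
    s = math.isqrt(x)
    return 2 * sum(x // b for b in range(1, s + 1)) - s * s

for x in (1, 2, 10, 99, 100, 10**4, 123456):
    assert direct(x) == hyperbola(x)
print("exact agreement")
```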
With some bookkeeping, we realize that a similar statement holds for the sum over $n \leq x$ coprime to $a$:
\[ \sum_{\substack{n \leq x \\ (n,a)=1}} d(n) = 2 \sum_{\substack{b \leq \sqrt{x} \\ (b,a)=1}} \sum_{\substack{b < c \leq x/b \\ (c,a)=1}} 1 + \sum_{\substack{b \leq \sqrt{x} \\ (b,a)=1}} 1. \]
Applying Proposition 2 gives
\[ \sum_{\substack{n \leq x \\ (n,a)=1}} d(n) = 2 \sum_{\substack{b \leq \sqrt{x} \\ (b,a)=1}} \left( \frac{\varphi(a)}{a} \left( \frac{x}{b} - b \right) + O\big(d(a)\big) \right) + \frac{\varphi(a)}{a} \sqrt{x} + O\big(d(a)\big). \tag{3} \]
To handle the two terms that appear in the sum at right in (3), we require two variants of Proposition 2. The first follows from Proposition 2 and partial summation, and we state it without proof:
\[ \sum_{\substack{b \leq y \\ (b,a)=1}} \frac{1}{b} = \frac{\varphi(a)}{a} \big( \log y + \gamma \big) - \sum_{d \mid a} \frac{\mu(d) \log d}{d} + O\!\left( \frac{d(a)}{y} \right). \]
The proof of the other fact we'll need is different enough that we state it as a Lemma. The proof is similar to that of Proposition 2 and is hinted at in the exercises below.

Lemma 4: We have
\[ \sum_{\substack{b \leq y \\ (b,a)=1}} b = \frac{\varphi(a)}{a} \cdot \frac{y^2}{2} + O\big(y\, d(a)\big). \]

Proof: See the Exercises. $\square$

Simplification of the sums in (3) (taking $y = \sqrt{x}$) produces an estimate for the restricted partial sums of divisors in the spirit of Dirichlet's hyperbola method:
\[ \sum_{\substack{n \leq x \\ (n,a)=1}} d(n) = \frac{\varphi(a)^2}{a^2}\, x \big( \log x + 2\gamma - 1 \big) - 2\, \frac{\varphi(a)}{a}\, x \sum_{d \mid a} \frac{\mu(d) \log d}{d} + O\big(\sqrt{x}\, d(a)\big). \]
Heuristically, we consider the factors $\varphi(a)/a$ as marking the proportion of integers that occur in the sums over integers coprime to $a$. The divisor sum that appears can be considered as a correction term, signifying that divisors aren't perfectly distributed in residue classes. We note that Dirichlet's estimate (1) follows from taking $a = 1$.
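The heuristic reading of the factors $\varphi(a)/a$ can be spot-checked numerically: dividing the restricted divisor sum by $(\varphi(a)/a)^2\, x \log x$ should give a ratio near 1, with the secondary terms (of relative size $O(1/\log x)$) nudging it somewhat upward. The moduli and cutoff below are arbitrary choices of mine.

```python
import math

# Ratio of the restricted divisor sum to its heuristic main term
# (phi(a)/a)^2 * x * log(x).

def sieve_divisors(x):
    d = [0] * (x + 1)
    for k in range(1, x + 1):
        for m in range(k, x + 1, k):
            d[m] += 1
    return d

x = 10**5
d = sieve_divisors(x)
ratios = {}
for a in (2, 3, 5, 6):
    phi_a = sum(1 for r in range(1, a + 1) if math.gcd(r, a) == 1)
    restricted = sum(d[n] for n in range(1, x + 1) if math.gcd(n, a) == 1)
    ratios[a] = restricted / ((phi_a / a) ** 2 * x * math.log(x))
print(ratios)  # near 1, slightly above
```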
— ASYMPTOTICS FOR MATRICES OF BOUNDED TRACE —
Now that we have a good estimate for sums of divisors over the integers coprime to $a$, our work is nearly done. Elementary bounds for error terms massage our expression for the partial sums of $N(t)$ into
\[ \sum_{t \leq T} N(t) = \sum_{a \leq T} \frac{\varphi(a)}{a} (T-a) \Big( \log\big(a(T-a)\big) + 2\gamma - 1 \Big) - 2 \sum_{a \leq T} (T-a) \sum_{d \mid a} \frac{\mu(d) \log d}{d} + O\big(T^{5/3+\epsilon}\big). \tag{4} \]
The first term at right in (4) represents the dominant contribution. By partial summation, this term reduces to the partial sums of $\varphi(a)/a$, for which we recall (see Apostol, Theorem 3.7, e.g.) that
\[ \sum_{n \leq x} \frac{\varphi(n)}{n} = \frac{6}{\pi^2}\, x + O(\log x). \]
(Better error bounds are known but this is more than sufficient.) Thus the first term in (4) grows as $\frac{6}{\pi^2}\, T^2 \log T + O(T^2)$.
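The totient average just quoted is quick to verify numerically; the constant $6/\pi^2$ is $1/\zeta(2)$. Totients are sieved below via a standard prime-by-prime sieve (an illustration of mine, not taken from the paper):

```python
import math

# The totient average: sum_{n<=x} phi(n)/n = (6/pi^2) x + O(log x).

def totient_sieve(x):
    phi = list(range(x + 1))
    for p in range(2, x + 1):
        if phi[p] == p:  # p is prime: apply the factor (1 - 1/p)
            for m in range(p, x + 1, p):
                phi[m] -= phi[m] // p
    return phi

x = 10**5
phi = totient_sieve(x)
err = sum(phi[n] / n for n in range(1, x + 1)) - (6 / math.pi**2) * x
print(err, math.log(x))  # the error is at most of logarithmic size
```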
The double sum in (4) may be simplified by reversing the order of summation, so that the sum over $d \mid a$ becomes the outer sum. Splitting into two terms and treating each separately gives
\[ \sum_{a \leq T} (T-a) \sum_{d \mid a} \frac{\mu(d) \log d}{d} = \frac{T^2}{2} \sum_{d \leq T} \frac{\mu(d) \log d}{d^2} + O\big(T \log^2 T\big). \]
To simplify further, we note that
\[ \sum_{d \leq T} \frac{\mu(d) \log d}{d^2} = \frac{\zeta'(2)}{\zeta(2)^2} + O\!\left( \frac{\log T}{T} \right), \]
in which we've used that $\sum_{d \geq 1} \mu(d)\, d^{-s} = 1/\zeta(s)$ (differentiated at $s = 2$) to obtain the limiting value. Collecting our estimates for each line in (4), we see at last that
\[ \sum_{t \leq T} N(t) = \frac{6}{\pi^2}\, T^2 \log T + c\, T^2 + O\big(T^{5/3+\epsilon}\big) \]
for an explicit constant $c$.
It's unclear what the optimal error should be in this expression. Certainly our error grows without bound (it inherits $\Omega$-type fluctuations from the divisor problem), and numerical evidence suggests that it can be quite large. In fact, this numerical evidence suggests the conjecture that this error is $O\big(T^{1/2+\epsilon}\big)$, which matches the conjectured exponent in the Dirichlet divisor problem (renormalized, since our main term here is of size $T^2 \log T$ instead of $x \log x$).
A plot of the error term, normalized by its conjectured order of growth, is given in the figure below:
— EXERCISES —
Exercise 1: Prove Lemma 4 by modifying the proof of Proposition 2 and applying the estimate
\[ \sum_{m \leq z} m = \frac{z^2}{2} + O(z). \]
Exercise 2: By reversing the order of summation, prove that
\[ \sum_{a \leq T} (T-a) \sum_{d \mid a} \frac{\mu(d) \log d}{d} = \frac{T^2}{2} \sum_{d \leq T} \frac{\mu(d) \log d}{d^2} + O\big(T \log^2 T\big). \]
Exercise 3: Prove that
and apply the Cauchy-Schwarz inequality to prove that .
Exercise 4: Prove that
Conclude using Exercise 3 that .