# Research Interests

Classical computability theory studies sets of natural numbers (\( \omega \)) under the relation of relative computability. Intuitively, a set \( A\subseteq\omega \) is computable from \( X\subseteq\omega \) (written \( A\leq_T X \)) if there is an algorithm that can decide membership in \( A \) by asking questions about membership in \( X \). The equivalence classes of sets that are computable from each other (\( A\leq_T X \) and \( X \leq_T A \)) are known as the Turing degrees, and the smallest such degree is denoted \( 0 \). The sets of degree \( 0 \) are termed the (outright) computable sets.

One of the first, and most obvious, classes of sets to study from the perspective of computability theory is the class of computably enumerable sets. These are the sets whose elements can be listed by some effective procedure (algorithm). This class is more expansive than the computable sets: to be computable, a set must have an algorithm which eventually commits to a yes/no answer for every membership question, whereas to be computably enumerable, the algorithm need only commit to elements being in the set.

To illustrate this difference, consider the halting set (denoted \( 0' \)) consisting of all (codes for) programs that eventually halt, i.e., finish executing and return a value. This set can't be computable since, if some program computed a function \( g(i) \) determining whether program \( i \) halts, you could write a program which scans its own code in memory, feeds that code to the subroutine computing \( g \), and then does the opposite of whatever \( g \) says (i.e., halts just if \( g \) says it doesn't). However, this set is easily seen to be computably enumerable by simply simulating all possible programs (e.g., at stage \( s \) running the first \( s \) programs for \( s \) steps) and enumerating them if they are ever seen to halt.
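The dovetailing simulation just described can be sketched in a few lines of Python. This is only a toy model: genuine enumeration operates on program codes, while here "programs" are stand-in generators that take one step per `next` call and halt when exhausted; the names `dovetail` and `halts_after` are mine for illustration, not any standard library.

```python
def dovetail(programs, stages):
    """Enumerate (a finite stage of) the halting set by dovetailing.

    At stage s we run each of the first s programs up to s total steps,
    so every halting program is eventually seen to halt, but we never
    get stuck waiting on a divergent one."""
    halted = set()
    runs = [p() for p in programs]   # one persistent run per program
    steps = [0] * len(programs)      # steps taken so far by each run
    for s in range(1, stages + 1):
        for i in range(min(s, len(programs))):
            if i in halted:
                continue
            while steps[i] < s:      # catch run i up to s steps
                steps[i] += 1
                try:
                    next(runs[i])
                except StopIteration:
                    halted.add(i)    # program i halted: enumerate it
                    break
    return halted

def halts_after(n):
    """A toy 'program' that halts after n computation steps."""
    def prog():
        for _ in range(n):
            yield                    # one step of computation
    return prog

def diverges():
    """A toy 'program' that runs forever."""
    def prog():
        while True:
            yield
    return prog
```

Running `dovetail` for more stages only ever adds elements, which is exactly the sense in which the halting set is enumerable but not decidable: membership is eventually confirmed, but absence is never certified.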

## Implicit Definability and Rate of Growth

The study of functions computable from sufficiently fast growing functions goes back to the earliest days of computability theory. In the 1950s Kuznecov and Trahtenbrot [10] as well as Myhill [11] established that a function \( g \) is uniformly computed by every function \( f \) pointwise above \( g \) (denoted \( f \gg g \)) if and only if \( g \) is a \( \Pi^0_1 \) singleton. Jockusch and McLaughlin studied the notion of being computable in every pointwise dominating function under the name \( 0 \)-majorreducibility, and Solovay [14] identified the functions computable from all sufficiently fast growing functions with the \( \Delta^1_1 \) definable functions. We adopt the more recent terminology from Slaman and Groszek [6]. By their definition, a function \( f\in\omega^{\omega} \) is a modulus (of computation) of \( X \) if every \( g\in\omega^{\omega} \) with \( g \gg f \) computes \( X \). A self-modulus is then a function \( f \) such that \( f \) is a modulus for \( f \). We call a modulus uniform if a single reduction witnesses all of these computations.
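These notions can be summarized in symbols as follows (a restatement of the definitions above; the functional notation \( \Phi_e \) for the \( e \)-th Turing reduction is the usual convention):

```latex
% g \gg f : g pointwise dominates f
\begin{align*}
  g \gg f &\iff (\forall n)\,[\, g(n) \geq f(n) \,] \\
  f \text{ is a modulus of } X &\iff (\forall g \gg f)\,[\, X \leq_T g \,] \\
  f \text{ is a self-modulus} &\iff (\forall g \gg f)\,[\, f \leq_T g \,] \\
  f \text{ is a uniform modulus of } X &\iff (\exists e)(\forall g \gg f)\,[\, \Phi_e(g) = X \,]
\end{align*}
```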

In my thesis I explored several properties of moduli and self-moduli. One particularly interesting question I addressed was the existence of non-uniform self-moduli. Say a function \( g \) is an \( \alpha \)-nonuniform self-modulus if \( g \) is a self-modulus but nothing truth-table computable^{1} from \( g^{(\alpha)} \) is a uniform modulus of \( g \). While interesting in its own right, what is particularly compelling about this notion is its connection to genericity and implicit definability. Previously, Kuznecov and Trahtenbrot [10] and later Myhill [11] had identified the \( \Pi^0_1 \) singletons with the uniform self-moduli. I was able to extend their result to the following theorem^{2}.

Thus, lacking a uniform modulus within \( \alpha \) jumps can be seen as a kind of \( \Pi^0_{\alpha+1} \) genericity property. However, having a modulus of computation is itself a kind of uniqueness property and is thus in tension with the genericity required to avoid having a uniform modulus. Thus my result

can be seen as demonstrating the existence of functions that are globally very generic but locally special. The trick I employed to resolve this tension was to build \( g \) as a very typical, non-isolated element of some \( \Pi^{0}_1 \) class in \( \omega^{\omega} \), yet in such a fashion that every member of the \( \Pi^{0}_1 \) class was locally special. In particular, I built \( g \) so that any function \( h \) majorizing \( g \) either could uniquely recover \( g \) by using its knowledge of where \( g \) was small or was so fast growing that it could directly compute \( g \). This general approach of coding \( g \) into the locations at which \( g \) is small (i.e., less than the uniform self-modulus for \( 0^{(\beta)} \geq_T g \)) was also deployed in my thesis to prove the finite version of the above theorem, but to generalize the construction to the infinite case I needed a manageable way to construct \( \Pi^{0}_1 \) classes of arithmetically typical elements.

I eventually found the approach I needed in Harrington's solution to McLaughlin's conjecture [7], where he described how to build a (non-trivial) \( \Pi^{0}_1 \) class with each element in some sense generic relative to the class, i.e., the truth or falsity of any \( \Pi^{0}_{\alpha} \) statement about a path \( g \) in the class is decided by some finite initial segment of \( g \). However, as Harrington's approach had appeared only in a brief set of unpublished notes, in my paper [5] I first had to present a rigorous proof of Harrington's result. This required that I provide my own non-trivial arguments and constructions to fill in missing details. While Harrington's results were accepted as valid, several members of the community wished to see a rigorous proof of his result and its corollaries in print. For this reason I included in my paper rigorous proofs of several of Harrington's corollaries in addition to describing the basic method I required for my result.

## Properties of \( \omega \)-REA Sets

Up to Turing degree, one may define an \( \omega \)-REA set as follows

In effect this says that an \( \omega \)-REA set is one in which the \( (n+1) \)-st column of the set is a c.e. set relative to the first \( n \) columns, and the computation of the \( (n+1) \)-st column from the first \( n \) columns is given uniformly. Professor Shore observed that if \( C \not\leq_{T} 0 \) is \( \omega \)-REA then there is some \( B \leq_T C \) with \( 0 <_{T} B <_{T} 0'' \), and he asked whether this would still hold if \( 0'' \) were replaced with \( 0' \). I answered this question in the negative in [4] with the following theorem.
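The column-by-column description can be written out as follows. This display is my sketch of the standard formulation, using \( W^{Y}_{e} \) for the \( e \)-th set c.e. in \( Y \); the exact bookkeeping in [4] may differ:

```latex
% C is \omega-REA: each column is uniformly c.e. in the join of the
% previous columns.
\begin{align*}
  C &= \bigoplus_{n\in\omega} C^{[n]} \text{ is } \omega\text{-REA if, for some computable } f,\\
  C^{[0]} &\text{ is c.e., and } C^{[n+1]} = W^{\,C^{[0]}\oplus\cdots\oplus C^{[n]}}_{f(n)} \text{ for every } n.
\end{align*}
```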

Since allowing any column in the set \( C \) to be non-computable would prevent \( C \) from satisfying the theorem, my construction of \( C \) proceeded by viewing \( C \) as the result of some c.e. list of commitments for the computations building later columns of \( C \) from earlier ones. I then used a \( \Pi^0_2 \) approximation argument to control these commitments and thereby construct \( C \). This framework is augmented by a strategy to prevent incompatible guesses about how requirements are fulfilled from interfering with each other, plus a mechanism that always allows intermediate commitments to be rolled back. Thus, if we ever see a later guess at the value of \( \Phi_{i}(C) \) that disagrees with an earlier guess, we can switch between these two guesses as needed to ensure \( \Phi_{i}(C) \) disagrees with a given \( \Delta^0_2 \) set.

## Automorphisms of the c.e. Sets

In joint work with Peter Cholak and Karen Lange, I've worked on questions about the automorphisms of the lattice of c.e. sets. We denote the class of c.e. sets structured by set containment by \( \mathscr{E} \). Thus an automorphism \( p \) of \( \mathscr{E} \) is a bijection of the c.e. sets such that \( p(W_e) \subseteq p(W_i) \iff W_e \subseteq W_i \). In exploring these automorphisms, we focused on the \( n \)-tardy sets. Informally, a c.e. set is \( n \)-tardy if it allows elements to enter only very slowly. More formally, \( A \) is \( n \)-tardy if for every total computable function \( p(s) \) there is a sequence of c.e. sets \( X_{e_1}, X_{e_2},\ldots,X_{e_n} \) such that:

\begin{gather*} X_{e_1} - X_{e_2} \cup X_{e_3} - X_{e_4} \dots \stackrel{\tiny\text{def}}{=} X^n_e = \overline{A} \\ (\forall x)(\forall s)\left[ x \in X^n_{e,s} \implies x \not\in A_{p(s)} \right] \end{gather*}

The interest in \( \mathscr{E} \) extends back to the work of Lachlan, Martin, and others in the 1960s. Initial investigations deployed only effective constructions of automorphisms, an obvious limitation of the technique. Further progress resulted from increasingly sophisticated methods for constructing less and less effective automorphisms. As the tools for constructing automorphisms became more powerful, the class of sets known to be automorphic to a complete set grew to include many classes of c.e. sets. This naturally raised the question of whether every c.e. set is automorphic to a complete set. Harrington and Soare [9] finally resolved this question by defining a property \( Q(A) \) in the language of \( \mathscr{E} \) guaranteeing incompleteness. In particular, \( Q(A) \) guarantees incompleteness by requiring elements to enter \( A \) in a very slow fashion, i.e., \( Q(A) \) entails that \( A \) is \( 2 \)-tardy.

Harrington and Soare's construction of an incomplete orbit left two major issues unresolved. First, what kinds of sets can be \( n \)-tardy, and second, what other incomplete orbits, if any, exist? These two issues are closely related, as Harrington and Soare demonstrated that every set that isn't very tardy is automorphic to a complete set. Therefore, studying the properties of the very tardy sets throws light on the existence and potential properties of other incomplete orbits. In joint work with Karen Lange and Peter Cholak, I've made substantial contributions to the understanding of both issues.

Harrington and Soare asked in [8] whether every \( 3 \)-tardy set is computable from some \( 2 \)-tardy set. This question is of particular interest since, by a result of Harrington and Soare, it is equivalent to asking whether every \( 3 \)-tardy set is computed by some set in every orbit. We resolved this question with the following theorem:

The proof of the above theorem used a novel kind of construction that paired dangerous requirements with backups in such a way that the threat to the primary requirement could only be avoided by granting an injury-free win to the backup. We also established that Harrington and Soare's result showing that no low simple set can be \( 2 \)-tardy is sharp by proving:

More significantly, we succeeded in showing that not only is there more than one incomplete orbit, but there are as many incomplete orbits as possible. In particular, we defined a generalization \( Q^n(A) \) of \( Q(A) \) in the language of \( \mathscr{E} \) with \( Q^{2}(A)=Q(A) \) and proved the following theorems about these properties:

An immediate consequence of these two theorems is that there are countably infinitely many incomplete orbits. Interestingly, the properties we exhibit are all \( \Pi_2 \) formulas in the language of \( \mathscr{E} \), which, if they constitute an exhaustive list of the incomplete orbits, would be a counterpoint to the result of Cholak, Downey, and Harrington [2] exhibiting a \( \Sigma^1_1 \) complete orbit.

## Other Work and Future Directions

A question also posed by Shore, relating to my work on \( \alpha \)-REA sets, is the existence of an \( \alpha \)-REA set of minimal arithmetic degree. While the machinery used in such a construction will likely build on what I used in [4], the problem presents a different set of challenges. In particular, to be of non-zero arithmetic degree the set must avoid being computed by \( 0^{(n)} \) for every \( n \in \omega \) while still implementing some version of Sacks's tree strategy for the construction of minimal degrees.

Another problem, suggested to me by Johanna Franklin, that I've recently become interested in is whether the bi-hyperhyperimmune degrees are closed upwards. Johanna Franklin and Barbara Csima have constructed bi-hyperhyperimmune sets in every degree above \( 0'' \), but their coding argument can't be extended to show upward closure in general. While initially this question might seem to have little to do with my previous research, it's my view that constructing a counterexample will require the use of both a \( \Pi^0_2 \) approximation argument to guess at which indices code for valid weak arrays as well as c.e. trees whose paths give possible Turing equivalent sets via particular functionals. Leveraging the c.e. trees, one must then construct weak arrays demonstrating that every infinite path through these trees fails to be bi-hyperhyperimmune.

Finally, another problem recently brought to my attention by Simpson (with background available in [13]) is the question of whether \( \mathcal{P}_{w} \) is dense under weak reducibility, where \( \mathcal{P}_{w} \) is the lattice of mass problems associated with non-empty \( \Pi^0_1 \) classes. Under weak reducibility, one collection of Turing degrees \( P \) is weakly reducible to \( Q \) if every degree \( \underset{\widetilde{\phantom{d}}}{d} \in Q \) computes some \( \underset{\widetilde{\phantom{d}}}{d}' \in P \). Thus, being able to produce a member of \( Q \) is at least as hard as being able to produce a member of \( P \).
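In symbols, this is the Muchnik-style formulation of the reducibility (I write degrees in plain boldface here for readability):

```latex
% P is weakly reducible to Q: every degree in Q bounds a degree in P.
P \leq_w Q \iff (\forall \mathbf{d}\in Q)(\exists \mathbf{d}'\in P)\left[\, \mathbf{d}' \leq_T \mathbf{d} \,\right]
```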

\( \mathcal{P}_{w} \) is dense under weak reducibility if, given any two \( \Pi^0_1 \) classes \( P <_w Q \) in Cantor space, there is some \( \Pi^0_1 \) class \( R \) with \( P <_w R <_w Q \). Simpson has brought to bear on this question a powerful translation lemma [12], which states that for any non-empty \( \Pi^0_1 \) class \( P \) and \( \Sigma^0_3 \) class \( S \) there is some \( \Pi^0_1 \) class \( Q \equiv_w S \cup P \). One initial strategy to prove \( \mathcal{P}_{w} \) is not dense suggested by this lemma was to build a \( \Sigma^0_3 \) class \( S \) consisting of the cone strictly above \( \underset{\widetilde{\phantom{x}}}{x} \), another \( \Sigma^0_3 \) class \( S'=S \cup \{\underset{\widetilde{\phantom{x}}}{x}\} \), and some non-empty \( \Pi^0_1 \) class not containing \( \underset{\widetilde{\phantom{x}}}{x} \). By the translation lemma there would then be comparable elements of \( \mathcal{P}_{w} \) differing only by \( \underset{\widetilde{\phantom{x}}}{x} \), contradicting the density of \( \mathcal{P}_{w} \). However, after Simpson mentioned this approach to me at a conference, I was able to demonstrate that any \( \Sigma^{0,x}_3 \) class containing the cone strictly above \( \underset{\widetilde{\phantom{x}}}{x} \) must also contain \( \underset{\widetilde{\phantom{x}}}{x} \). While this result frustrates one particular approach to the question of density, the translation lemma still provides a powerful tool with which to attack the problem. In particular, since \( \Sigma^0_3 \) classes in the Baire and Cantor spaces are equivalent (up to degree), the translation lemma lets me bring my experience constructing complex \( \Pi^0_1 \) sets in Baire space to bear on this question. Though this approach might seem roundabout, it provides greater control and a larger toolkit than direct construction of \( \Pi^0_1 \) classes.

Though I've answered my initial questions about rate of growth and Turing degree, the nice relation between having a (close) uniform modulus and implicit definability naturally raises the question of whether there is a similar relation between having a (close) modulus and some alternative form of implicit definability. Though vague as stated, this question seems closely related to the question asked by Anderson in [1] calling for a characterization of the reals \( n \)-generic relative to a perfect tree \( T \). The notion of genericity used by Anderson allows the formula to make reference to the (pruned) tree \( T \) itself and is the natural way to make functions that lack any modulus computable within a small number of jumps. On the other hand, if the notion of genericity is relaxed slightly so the formulas are no longer allowed to reference \( T \), then being \( \beta+1 \) generic and non-isolated on some \( T \) corresponds to not being a \( \Pi^0_{\beta+1} \) singleton and, by my earlier result, is equivalent to lacking any close uniform modulus. Taken together, this suggests there are promising connections between forcing on trees, implicit definability, and the existence of a close modulus of computation.

^{[1]}
Equivalently, computable via a reduction which is total on all possible oracles.

^{[2]}
The theorem is reworded for easier readability in this context.

### References

[1] Reals \( n \)-generic relative to some perfect tree, J. Symbolic Logic 73 (2008), no. 2, 401--411. MR2414456

[2] The complexity of orbits of computably enumerable sets, Bull. Symbolic Logic 14 (2008), no. 1, 69--87. MR2395047

[3] On \( n \)-tardy sets, Submitted (2010), 1--38.

[4] An \( \omega \)-REA set forming a minimal pair with \( 0' \), Submitted (2010), 1--11.

[5] —, Harrington's solution to McLaughlin's conjecture and non-uniform self-moduli, Submitted (2010), 1--27.

[6] Moduli of computation (talk), January 2007.

[7] McLaughlin's conjecture, handwritten notes, September 1976.

[8] Codable sets and orbits of computably enumerable sets, J. Symbolic Logic 63 (1998), no. 1, 1--28. MR1610758

[9] —, Definable properties of the computably enumerable sets, Ann. Pure Appl. Logic 94 (1998), no. 1-3, 97--125, Conference on Computability Theory (Oberwolfach, 1996). MR1640265

[10] Investigation of partially recursive operators by means of the theory of Baire space, Dokl. Akad. Nauk SSSR (N.S.) 105 (1955), 897--900. MR0077474

[11] Finitely representable functions, Constructivity in Mathematics: Proceedings of the Colloquium Held at Amsterdam, 1957 (A. Heyting, ed.), Studies in Logic and the Foundations of Mathematics, North-Holland Publishing Co., Amsterdam, 1959, pp. 195--207. MR0105362

[12] An extension of the recursively enumerable Turing degrees, J. London Math. Soc. 75 (2007), no. 2, 287--297. MR2340228

[13] —, Mass problems and measure-theoretic regularity, Bull. Symbolic Logic 15 (2009), no. 4, 385--409. MR2682785

[14] Hyperarithmetically encodable sets, Trans. Amer. Math. Soc. 239 (1978), 99--122. MR0491103