Lineability within probability theory settings

The search for lineability consists in finding large vector spaces of mathematical objects with special properties. Such examples have arisen in recent years in a wide range of settings, such as real and complex analysis, sequence spaces, linear dynamics, norm-attaining functionals, zeros of polynomials in Banach spaces, Dirichlet series, and non-convergent Fourier series, among others. In this paper we present the novelty of linking this notion of lineability to the area of Probability Theory by providing positive (and negative) results within the framework of martingales, random variables, and certain stochastic processes.


Introduction
Since the beginning of the 21st century many authors have become interested in the study of linearity within nonlinear settings or, in other words, in the search for linear structures of mathematical objects enjoying certain special or unexpected properties. Vector spaces and linear algebras are elegant mathematical structures which, at first glance, seem to be "forbidden" to families of "strange" objects. In other words, take a function with some special or (as it is sometimes called) "pathological" property (for example, the classical nowhere differentiable function, also known as Weierstrass' monster). Coming up with a concrete example of such a function might be difficult. In fact, it may seem so difficult that, once one succeeds, one is tempted to believe that there cannot be many functions of that kind, and that one certainly cannot find infinite dimensional vector spaces or infinitely generated algebras of such functions. This is, however, exactly what has been happening in recent years in many fields of mathematics, from linear chaos to real and complex analysis [2,6,15], passing through set theory [17] and linear and multilinear algebra, or even operator theory [9,11], topology, measure theory [5,6,13], and abstract algebra.
Recall that, as is nowadays common terminology, a subset M of a topological vector space X is called lineable (respectively, spaceable) in X if there exists an infinite dimensional linear space (respectively, an infinite dimensional closed linear space) Y ⊂ M ∪ {0}. Moreover, given an algebra A, a subset B ⊂ A is said to be algebrable if there is a subalgebra C of A such that C ⊂ B ∪ {0} and the cardinality of any system of generators of C is infinite (see, e.g., [2,3,7]).
As we mentioned above, there have recently been many results regarding the linear structure of certain special subsets. One of the earliest results in this direction was provided by Gurariy, who showed that the set of Weierstrass' monsters is lineable [18]. More recently, Enflo et al. [15] proved that, for every infinite dimensional closed subspace X of C[0, 1], the set of functions in X having infinitely many zeros in [0, 1] is spaceable in X (see also [12,16]). A vast literature on this topic has been built during the last decade, and we refer the interested reader to the survey paper [7] or, for a more detailed and thorough study, to the forthcoming monograph [3].
In this paper we relate, for the first time, the topic of lineability to Probability Theory and Stochastic Processes. One needs to be careful, however, when trying to find linear structures within certain sets of objects in this setting. Indeed, the set of probability density functions cannot contain any linear space, since any non-trivial multiple of a density already fails to be a probability density function. At a deeper level, if we had two martingales {X_n}_n and {Y_n}_n, with corresponding filtrations {F_n}_n and {G_n}_n, the sequence of random variables {X_n + Y_n}_n is not, in general, a martingale unless there were a "universal" filtration that would comply with both simultaneously. Nevertheless, we shall consider some classical (counter)examples in probability theory and study to what extent it is possible to obtain lineability-related results. In this paper we shall consider lineability and algebrability problems related to the following concepts: (i) convergent martingales that are not L^1-bounded, (ii) pointwise convergence of random variables, (iii) stochastic processes that are L^2-bounded and converge in L^2, yet do not converge at any point off a null set, (iv) zero-mean sequences of mutually independent random variables with divergent sample mean, and (v) unbounded random variables with finite expected value.

Preliminaries and notation
In this section, we recall some results that will be needed throughout the paper (for more details see, e.g., [10]).
Let Ω be a non-empty set and let F be a σ-algebra over Ω. We say that the pair (Ω, F) is a probabilizable (measurable) space. Given (Ω, F), a filtration is an increasing sequence {F_n}_{n∈N} of σ-algebras such that F_n ⊂ F for every n ∈ N.
Adding a probability measure P : F → [0, 1], we say that the triplet (Ω, F, P) is a probability space. A random variable X on (Ω, F, P) is a real-valued function defined on Ω such that for every open subset B ⊂ R we have X^{−1}(B) ∈ F. The expected value of the random variable X, namely E(X), is computed as E(X) = ∫_Ω X dP. A collection of random variables indexed by a totally ordered set, representing the evolution of some system of random variables, is said to be a stochastic process.
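The expected value E(X) = ∫_Ω X dP can be illustrated numerically. Below is a minimal Monte Carlo sketch on the probability space ([0, 1], B([0, 1]), λ); the particular random variable X(ω) = ω² and the sample size are our own choices for illustration.

```python
import random

def expected_value_mc(X, n_samples=200_000, seed=0):
    """Monte Carlo estimate of E(X) = ∫_Ω X dP for a random variable X
    on the probability space ([0, 1], B([0, 1]), λ)."""
    rng = random.Random(seed)
    return sum(X(rng.random()) for _ in range(n_samples)) / n_samples

# Example: X(ω) = ω² has E(X) = ∫₀¹ ω² dω = 1/3.
estimate = expected_value_mc(lambda w: w * w)
```
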
We now introduce the notion of the conditional expectation of a random variable X: given a sub-σ-algebra G ⊂ F and an integrable random variable X, the conditional expectation E[X|G] is the (a.s. unique) G-measurable, integrable random variable satisfying ∫_A E[X|G] dP = ∫_A X dP for every A ∈ G.
A sequence of random variables {X_n}_n defined on (Ω, F, P) is said to be a Markov chain if, for every n ≥ 1, the conditional distribution of X_{n+1} given X_1, . . . , X_n depends only upon the state of X_n. Given a sequence of random variables {X_n}_{n∈N} and a filtration {F_n}_{n∈N} of σ-algebras of F, we say that {X_n}_{n∈N} is a martingale if, for every n ∈ N, X_n is F_n-measurable and integrable and E[X_{n+1} | F_n] = X_n almost surely (a.s. from now on).
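The martingale identity E[X_{n+1} | F_n] = X_n can be checked empirically on the simplest example, the symmetric random walk (our own illustrative choice): grouping simulated paths by their current value, the average value one step later should match the current value within each group.

```python
import random

def conditional_mean_check(n_paths=100_000, steps=5, seed=1):
    """Empirical check of E[X_{n+1} | F_n] = X_n for the simple symmetric
    random walk X_n = ξ_1 + ... + ξ_n, with independent signs ξ_i ∈ {-1, +1}.
    Paths are grouped by their value at time `steps`; within each group the
    average value one step later should be close to the group's common
    current value."""
    rng = random.Random(seed)
    groups = {}  # value of X_n  ->  observed values of X_{n+1}
    for _ in range(n_paths):
        x = sum(rng.choice((-1, 1)) for _ in range(steps))
        groups.setdefault(x, []).append(x + rng.choice((-1, 1)))
    return {x: sum(v) / len(v) for x, v in groups.items() if len(v) >= 1000}

conditional_means = conditional_mean_check()
```
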
Finally, let us recall the following definition that will be necessary in order to introduce the notion of a martingale indexed by a directed set (see, e.g., [14]).

Definition 2 (directed set)
A directed set is a nonempty set D together with a reflexive and transitive relation ≤ such that every pair of elements of D has an upper bound in D. Let D be a directed set and let {X_d : d ∈ D} be an indexed family of random variables, and let {F_d : d ∈ D} be a family of σ-algebras such that F_{d_1} ⊂ F_{d_2} whenever d_1 ≤ d_2. We say that {X_d : d ∈ D} is a martingale indexed by D if every X_d is F_d-measurable and integrable and E[X_{d_2} | F_{d_1}] = X_{d_1} a.s. whenever d_1 ≤ d_2.

Lineability of special sequences of random variables
The motivation for our first result is the fact that many martingale convergence theorems require the martingale to be L^1-bounded (for instance, Doob's famous martingale convergence theorem, or Lévy's zero-one law [10]). However, this condition, although sufficient, is not necessary. Indeed, there is a classical and well known example due to Ash (see [4], or [21, Example 9.15] for a more modern reference), in which (briefly) the author constructed a martingale via a Markov chain {X_n : n ∈ N}, properly defined on a probability space (Ω, F, P), such that (X_n)_n converges for every ω ∈ Ω, while E[|X_n|] → ∞ as n → ∞.
Here, although (as we mentioned in the Introduction) one cannot consider lineability within martingales, we shall show that one can construct an infinite dimensional vector space every non-zero element of which, {X_n : n ∈ N}, is a convergent sequence of random variables with E[|X_n|] → ∞ as n → ∞. That is, the main tool in Ash's example is actually "not as uncommon" as one might expect. The proof is somewhat technical, although constructive.
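Ash's chain is not reproduced in full here; the sketch below simulates one chain of this flavour under an assumed transition rule (absorbing once non-zero; while at 0, a jump to ±a_n with probability p_n each, with the parametrisation a_n = s^n·n^s and p_n = s^{−n} taken from the proof below). It is meant only to illustrate that every sample path converges (each path is eventually constant), not as the exact construction of [4].

```python
import random

S = 3  # s is a fixed odd prime in this illustration (our own choice)

def a(n):  # jump size a_n = s^n · n^s  (assumed parametrisation)
    return S**n * n**S

def p(n):  # jump probability p_n = 1/s^n  (assumed parametrisation)
    return S ** (-n)

def sample_path(steps, rng):
    """One path: the chain starts at 0 and, while at 0, moves at step n to
    ±a_n with probability p_n each (staying at 0 otherwise); once non-zero
    it is absorbed, hence every single path converges."""
    x, path = 0, [0]
    for n in range(1, steps + 1):
        if x == 0:
            u = rng.random()
            if u < p(n):
                x = a(n)
            elif u < 2 * p(n):
                x = -a(n)
        path.append(x)
    return path

rng = random.Random(2)
paths = [sample_path(40, rng) for _ in range(500)]
eventually_constant = all(pt[-1] == pt[-5] for pt in paths)
frac_absorbed = sum(pt[-1] != 0 for pt in paths) / len(paths)
```

With s = 3, the probability of ever leaving 0 is 1 − ∏_{n≥1}(1 − 2/3^n) ≈ 0.77, so roughly three quarters of the simulated paths get absorbed at a non-zero value, and all of them converge.
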

Theorem 1 The set of convergent sequences of random variables {X_n : n ∈ N} that are not L^1-bounded is lineable.
Proof First let us denote by S = {s_j}_{j∈N} the (increasing) sequence of odd prime numbers. Next, for every s ∈ S we consider the Markov chain {X_n^{(s)} : n ∈ N} defined as follows. Let X_1^{(s)} := 0 and, given X_n^{(s)}, let X_{n+1}^{(s)} = X_n^{(s)} whenever X_n^{(s)} ≠ 0, while, if X_n^{(s)} = 0, let X_{n+1}^{(s)} take the values a_{n+1} and −a_{n+1} with probability p_{n+1} each, and the value 0 with probability 1 − 2p_{n+1}. Moreover, note that for every n ∈ N, E[X_{n+1}^{(s)} | X_n^{(s)}] = X_n^{(s)}. Therefore, for every s ∈ S, the Markov chain {X_n^{(s)} : n ∈ N} is a martingale with respect to the natural filtration, that is, F_n = σ(X_1, . . . , X_n) for all n. Furthermore, given s ∈ S, and assuming all of the above random variables are properly defined on a probability space (Ω, F, P), we have that either X_n^{(s)}(ω) = 0 for every n ∈ N or there is some m ∈ N such that X_n^{(s)}(ω) = X_m^{(s)}(ω) ≠ 0 for every n ≥ m; in either case the sequence converges. Before carrying on with the main construction, let us recall that it can be assumed, without loss of generality, that the set {X_n^{(s)} : s ∈ S} is linearly independent, just taking, for instance, disjoint supports in the construction of the random variables.
Our aim now is to show that any non-zero element in the linear span of {X_n^{(s)} : s ∈ S} is convergent and not L^1-bounded. The convergence is straightforward from the fact that each {X_n^{(s)}}_n converges pointwise. We still need a couple of estimates in order to achieve our goal. For every A ∈ F, let us denote by I_A the characteristic function of the set A. Fix s ∈ S and k ∈ N and, for the sake of simplicity, denote a_n := s^n n^s and p_n := 1/s^n. Applying the definition of X_n^{(s)}, making some simple calculations, and keeping in mind that 0 < 1 − 2p_j < 1 for every j ∈ {1, . . . , k − 1}, we obtain the following lower bound for E[|X_k^{(s)}|]:
E[|X_k^{(s)}|] ≥ Σ_{n=1}^k (∏_{j=1}^{n−1} (1 − 2p_j)) · 2 p_n a_n ≥ 2 (∏_{j=1}^∞ (1 − 2/s^j)) · Σ_{n=1}^k n^s.
In the previous expression, let us recall that the quantity ∏_{j=1}^∞ (1 − 2/s^j) is known, in Number Theory, as the q-Pochhammer symbol (also known as the q-shifted factorial, see [8]) (2; s)_∞, which verifies 0 < (2; s)_∞ < 1 if s > 2 (which complies with our hypotheses). We thus have E[|X_k^{(s)}|] ≥ 2 (2; s)_∞ R_{s+1}(k), where R_{s+1}(k) := Σ_{n=1}^k n^s, and it can be easily checked that R_{s+1}(k) is a polynomial of degree s + 1 with lim_{k→∞} R_{s+1}(k) = +∞.
Now, let X_k ∈ span{X_k^{(s)} : s ∈ S}, so that X_k = Σ_{n=1}^m α_n X_k^{(s_n)}, where s_1 < s_2 < · · · < s_m are elements of S, {α_n}_n ⊂ R, and (without loss of generality) α_m ≠ 0. Let us now show that {X_k}_k is not L^1-bounded. Indeed, using the linearity of E[·], the reverse triangle inequality, and the previous estimates, we see that E[|X_k|] is bounded below by the difference of two terms: the dominant term 2|α_m| · (2; s_m)_∞ · R_{s_m+1}(k), a polynomial of degree s_m + 1 in k, and a remainder controlled by a polynomial of degree s_{m−1} + 1, with s_{m−1} < s_m. Therefore {X_k}_k is not L^1-bounded, and the result is proved.
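The estimates in the proof can be checked numerically. The sketch below assumes, as above, a_n = s^n n^s and p_n = 1/s^n, computes E[|X_k^{(s)}|] from the absorption probabilities of the chain, and compares it with the lower bound 2·(2; s)_∞·R_{s+1}(k), where R_{s+1}(k) = Σ_{n=1}^k n^s.

```python
def q_pochhammer(s, terms=300):
    """(2; s)_∞ = ∏_{j≥1} (1 − 2/s^j); satisfies 0 < (2; s)_∞ < 1 for s > 2."""
    prod = 1.0
    for j in range(1, terms + 1):
        prod *= 1.0 - 2.0 / s**j
    return prod

def abs_mean(s, k):
    """E[|X_k^{(s)}|]: the chain is first absorbed at ±a_n at step n with
    probability 2·p_n·∏_{j<n}(1 − 2·p_j), each such event contributing
    a_n·p_n = n^s to the expectation."""
    total, surv = 0.0, 1.0  # surv = P(chain still at 0 before step n)
    for n in range(1, k + 1):
        total += surv * 2.0 * n**s
        surv *= 1.0 - 2.0 / s**n
    return total

def lower_bound(s, k):
    """2 · (2; s)_∞ · R_{s+1}(k), where R_{s+1}(k) = Σ_{n=1}^k n^s is a
    polynomial in k of degree s + 1 (hence the bound diverges as k → ∞)."""
    return 2.0 * q_pochhammer(s) * sum(n**s for n in range(1, k + 1))

pairs = [(abs_mean(s, k), lower_bound(s, k)) for s in (3, 5, 7) for k in (10, 100)]
growth = abs_mean(3, 200) / abs_mean(3, 10)
```
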

Remark 1
We recall that the previous result could certainly be stated in terms of martingales, recalling, of course, that martingales adapted to the same filtration form a vector space (the proof would follow the same ideas as that of Theorem 1). Now, let us continue focusing on obtaining lineability-related results for certain subsets of random variables enjoying "unexpected" properties. For instance, in [21, Example 9.2], the authors provide (given any b > 0) a sequence of integrable random variables {X_n}_{n∈N} and an integrable random variable X such that X_n converges to X pointwise and, yet, E[X_n] = −b and E[X] = b (the important point here being that, under the previous hypotheses, E[X_n] ≠ E[X] for every n ∈ N). This construction can be generalized in order to obtain a positive cone (see, e.g., [1]) of such elements since, in general, linearity of elements enjoying such properties might get lost.
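Sequences of this kind are easy to verify numerically. The snippet below checks one concrete instance of this behaviour on ([0, 1], B([0, 1]), λ); the particular choice X ≡ b and X_n = b − 2bn·1_{(0, 1/n)} is our own and is not necessarily the example of [21].

```python
def X_n(n, w, b=1.0):
    """X_n(ω) = b − 2bn·1_{(0, 1/n)}(ω) on ([0, 1], B([0, 1]), λ)."""
    return b - 2.0 * b * n if 0.0 < w < 1.0 / n else b

def mean_X_n(n, b=1.0):
    """Exact E[X_n] = b − 2bn · λ((0, 1/n)) = −b, for every n."""
    return b - 2.0 * b * n * (1.0 / n)

# For each fixed ω, X_n(ω) = b as soon as 1/n ≤ ω, so X_n → X ≡ b pointwise,
# yet E[X_n] = −b for all n while E[X] = b.  For α > 0 the same holds for
# α·X_n with b replaced by α·b, which is the positive-cone phenomenon.
limits = [X_n(n, 0.3) for n in (10, 100, 1000)]
means = [mean_X_n(n) for n in range(1, 20)]
```
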
To see how linearity may fail, let {X_n}_{n∈N} and {Y_n}_{n∈N} be sequences of integrable random variables converging pointwise to the integrable random variables X and Y (respectively), and let b, c > 0 be such that E[X_n] = −b, E[X] = b, E[Y_n] = −c, and E[Y] = c. Then {X_n − Y_n}_n converges pointwise to X − Y while, if b = c, E[X_n − Y_n] = 0 = E[X − Y], which does not fall into the class of examples we are working with. Thus, the above property is "not a lineable one". However, one can try to find a positive cone of such objects, as was done in [1] when certain sets failed to be lineable (calling these sets coneable). More precisely, a subset M of a topological vector space X is called positively coneable in X if there exists an infinite, linearly independent set B such that αB ⊂ M for every α > 0.

Theorem 2 Let us consider the probability space ([0, 1], B([0, 1]), λ), where λ denotes the Lebesgue measure. The set of sequences of integrable random variables {X_n}_n converging pointwise to an integrable random variable X such that lim_{n→∞} E[X_n] ≠ E[X] is positively coneable.
Proof For every m ∈ N, take B^{(m)}, C^{(m)} > 0 and define the random variables X_n^{(m)} and X^{(m)}, for every ω ∈ [0, 1], where {a_m}_{m∈N} ⊂ N is defined recursively so as to avoid major overlappings. This choice of the a_m's permits us to state that the sequences {X_n^{(m)} : m ∈ N} are linearly independent when seen as regular functions in R^{[0,1]}. The sequence {X_n^{(m)}}_n converges to X^{(m)} pointwise as n tends to infinity. It can be easily seen that {X_n^{(m)}}_n is a sequence of integrable random variables for every m ∈ N and that X^{(m)} is an integrable random variable, too.
Furthermore, for every n, m ∈ N one can compute E[X_n^{(m)}] and E[X^{(m)}] and check that lim_{n→∞} E[X_n^{(m)}] ≠ E[X^{(m)}]. We then consider the positive cone given by C_n = {α X_n^{(m)} : m ∈ N, α > 0}, where any element Y_n ∈ C_n can be written as Y_n = α X_n^{(m)} for some m ∈ N and α > 0 and, hence, satisfies lim_{n→∞} E[Y_n] ≠ E[lim_n Y_n], since multiplication by α > 0 preserves the inequality between the limits of the expected values. The following result shows the algebrability of the set of unbounded random variables with a finite expected value. The example used for the construction is inspired by [21, Example 5.2].
Theorem 3 The set of unbounded random variables with a finite expected value is algebrable.

Proof For each n ∈ N, we define a function f_n which is null except on the interval J_n := [n, n + 1/n³]. The random variable X defined as the sum of the f_n's then has expected value E[X] = π²/12. Let us consider a Cantor set on the unit interval obtained as C = ∩_{n=0}^∞ I_n, where I_0 = [0, 1] and I_n is obtained from I_{n−1} by removing the inner third of each of its subintervals. Let us define L_n := J_n ∩ (n + I_n). Let {α_l}_l be an uncountable set of irrational numbers in (0, 1) which are linearly independent over Q; then, for every α ∈ {α_l}_l, we define the functions X_α (null elsewhere, off the corresponding translates of the sets L_n).
We then consider the algebra A({X_α}_α) generated by these functions. It is clear that for every α ∈ {α_l}_l the random variable X_α has a finite expected value and is an unbounded random variable. Besides, this algebra is uncountably generated. Given an arbitrary function in the algebra, generated by finitely many of the X_α's, on the one hand these random variables are unbounded, too: indeed, letting α_min := min{α_m : 1 ≤ m ≤ m_0}, we get X(n + α_min) = n. On the other hand, this random variable has a finite expected value as well.
Remark 2 Let us recall that, in the previous result, the unboundedness holds outside every interval of finite length, which adds an extra pathology to the considered property.
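The construction above relies on the specific intervals J_n and a Cantor-type set, whose displayed formulas are not reproduced here. As a simpler, hedged illustration of the bare phenomenon (an unbounded random variable with finite expected value), one can take X(ω) = ω^{−1/3} on ((0, 1], B((0, 1]), λ), our own choice rather than the construction of the proof:

```python
import random

def X(w):
    """X(ω) = ω^(−1/3) on ((0, 1], B, λ): unbounded near 0, yet integrable,
    with E[X] = ∫₀¹ ω^(−1/3) dω = [(3/2)·ω^(2/3)]₀¹ = 3/2."""
    return w ** (-1.0 / 3.0)

rng = random.Random(5)
samples = [X(1.0 - rng.random()) for _ in range(400_000)]  # 1 − U ∈ (0, 1]
mc_mean = sum(samples) / len(samples)   # Monte Carlo estimate of E[X] = 3/2
largest = max(samples)                  # grows without bound as ω → 0⁺
```
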
For the final part of this paper, let us recall the work [20] (see also [21, Example 9.17]), in which Walsh provided an example of a martingale (indexed by a directed set) that is L^2-bounded, converges in L^2, and yet does not converge at any point off a null set. Our aim here shall be to generalize this example in order to build an infinite dimensional linear space every non-zero element of which is a martingale enjoying the previous property. Before starting the proof, we need to recall the following lemma (due to Muñoz, Palmberg, Puglisi, and the second author), which is a particular case of [19, Theorem 3.5]. In what follows, (ℓ_p, ‖·‖_p) denotes the Banach space of real valued sequences with the usual p-norm.

Lemma 1 The set (ℓ_2 \ ℓ_1) ∪ {0} contains an infinite dimensional (countably generated) linear space.

Theorem 4
The set of stochastic processes that are L^2-bounded, converge in L^2, and do not converge at any point off a null set, is lineable.
Proof By Lemma 1, let V be any (countably generated) linear space contained in (ℓ_2 \ ℓ_1) ∪ {0} and let {{h_n^{(m)}}_n : m ∈ N} be a basis for V; for instance, and in order to be clearer in the coming construction, we can take the basis from [19, Theorem 3.5]. For every m ∈ Q ∩ (1/2, 1), let {X_n^{(m)}}_n be a linearly independent (and infinite) set, every element of which is a sequence of mutually independent random variables such that, provided m ∈ Q ∩ (1/2, 1), P[X_n^{(m)} = −1] and the remaining mass are prescribed so that E[X_n^{(m)}] = 0 for every n ∈ N. By construction, the series Σ_{n∈N} h_n^{(m)} X_n^{(m)} converges almost surely for every m ∈ Q ∩ (1/2, 1). Let D be the family of all finite subsets of N, partially ordered by set inclusion, which is a directed set. For every d ∈ D and m ∈ Q ∩ (1/2, 1) we define M_d^{(m)} := Σ_{n∈d} h_n^{(m)} X_n^{(m)}. Therefore, for every m ∈ Q ∩ (1/2, 1), and with respect to its own filtration, it can be easily checked that {M_d^{(m)} : d ∈ D} enjoys the required properties.

Indeed, take s ≥ 2, s ∈ N, and let Ω_1 = {ω ∈ Ω : X_n^{(s)}(ω) = −n^s} and Ω_2 = Ω \ Ω_1. Now, if ω ∈ Ω_1, we have X_k^{(s)}(ω) = −k^s, obtaining that, as n → ∞, Eq. (37) holds. Let now V = span{X_n^{(s)} : s ≥ 2, s ∈ N} and let Y_n ∈ V. Thus, Y_n can be written as Y_n = Σ_{i=1}^N α_i X_n^{(s_i)} for some N ∈ N, s_i ∈ N, 2 ≤ s_1 < s_2 < · · · < s_N, and α_i ∈ R for every i ∈ {1, 2, . . . , N} with α_N ≠ 0. By the linearity of E[·] and Eq. (35), we can compute E[Y_n] and, from the previous inequality, the fact that s_N > s_{N−1} > · · · > s_1 ≥ 2, and Eqs. (36) and (37), it can be seen that (1/n) Σ_{k=1}^n Y_k → ∞ a.s., so the claim holds for ω ∈ Ω_1. The case ω ∈ Ω_2 also holds in a similar fashion and, thus, we spare the details of the calculations involved in it.
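The almost sure convergence of Σ_n h_n^{(m)} X_n^{(m)} for ℓ_2 coefficients rests on Kolmogorov's convergence theorem, since Σ_n Var(h_n X_n) = Σ_n h_n² < ∞. The sketch below illustrates this with the concrete choice h_n = 1/n ∈ ℓ_2 \ ℓ_1 and symmetric signs P(X_n = ±1) = 1/2 (our own instance, not necessarily the basis used in the proof): past a large index, the partial sums of a simulated path barely move.

```python
import random

def signed_series_partials(n_terms, rng):
    """Partial sums S_N = Σ_{n≤N} X_n/n with independent symmetric signs
    X_n ∈ {−1, +1}; since Σ 1/n² < ∞, Kolmogorov's theorem gives a.s.
    convergence, even though Σ |1/n| diverges (so (1/n)_n ∈ ℓ₂ \ ℓ₁)."""
    s, partials = 0.0, []
    for n in range(1, n_terms + 1):
        s += rng.choice((-1.0, 1.0)) / n
        partials.append(s)
    return partials

rng = random.Random(7)
max_tail_oscillation = 0.0
for _ in range(50):
    ps = signed_series_partials(20_000, rng)
    # past N = 10 000 a convergent path should barely move:
    osc = max(abs(x - ps[9999]) for x in ps[10000:])
    max_tail_oscillation = max(max_tail_oscillation, osc)
```
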