Week 3 Power series
Version 2025/02/12 Week 3 in PDF All notes in PDF To other weeks
Notice (Feb 2025): Week 3 lectures begin with the Alternating Series Test, Theorem 2.8.
We now consider series whose \(n\)th term depends on a variable \(x,\) and ask: for which \(x\) does the series converge? We study the simplest kind of series involving \(x\): power series.
Definition: power series.
Let \(c_0,c_1,c_2,\dots \) be real numbers. A series of the form \(\sum _{n=0}^\infty c_nx^n,\) which we also write as \(C(x),\) is called a power series in the variable \(x.\)
A power series can be convergent for some values of \(x\) and divergent for others:
Example: geometric series as a power series.
Let \(G(x)\) denote the power series \(1+x+x^2+x^3+\dots \) Then \(G(x)\) is convergent when \(|x|<1\) and divergent when \(|x|\ge 1.\) Indeed, for any given value of \(x,\) \(G(x)\) is a geometric series with ratio \(x,\) and the known results for geometric series apply.
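This behaviour can be checked numerically. The following sketch is an optional illustration, not part of the notes; the helper name `partial_sum` is our own.

```python
# Optional numerical check: partial sums of G(x) = 1 + x + x^2 + ...

def partial_sum(x, n):
    """Sum of the first n terms of G(x)."""
    return sum(x**k for k in range(n))

# For |x| < 1 the partial sums approach the geometric sum 1/(1-x):
print(partial_sum(0.5, 50), 1 / (1 - 0.5))   # both close to 2.0

# For |x| >= 1 the partial sums do not settle down:
print([partial_sum(1.5, n) for n in (5, 10, 20)])
```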
The set of \(x\) where \(G(x)\) is convergent is the interval \((-1,1).\) We will show that the picture for other power series is broadly similar, although their interval of convergence can be closed, half-open or open, or be the whole real line \(\RR .\) We begin with a lemma.
Lemma 3.1: absolute convergence for smaller modulus.
If, for a given \(x_0\in \RR ,\) the power series \(C(x_0)\) is convergent, then for all \(y\) with \(|y|<|x_0|,\) \(C(y)\) is absolutely convergent.
Proof. Assume \(C(x_0)=\sum _{n=0}^\infty c_n x_0^n\) is a convergent series. Then by the Nullity Test, the sequence \((c_nx_0^n)_{n\ge 0}\) has limit \(0.\) Sequences that have a limit are bounded, so there exists \(M\ge 0\) such that \(|c_n x_0^n|\le M\) for all \(n.\)
Given \(y\) with \(|y|<|x_0|,\) set \(r=\frac {|y|}{|x_0|}.\) As \(0\le r<1,\) the geometric series \(\sum \limits _n Mr^n\) converges, and
\[ 0\le |c_n y^n| = |c_n x_0^n| r^n \le Mr^n\textsf { for all }n. \]
Hence by the Comparison Test, \(\sum _{n=0}^\infty |c_n y^n|\) is convergent. This means that the series \(C(y)=\sum _{n=0}^\infty c_ny^n\) is absolutely convergent. □
Corollary 3.2: the set of points where the power series is convergent.
The set of \(t\in \RR \) such that the series \(C(t)\) is convergent is one of the following:
i. interval \((-R,R),\) \([-R,R],\) \((-R,R]\) or \([-R,R)\) where \(R>0;\)
ii. \(\{0\};\)
iii. \(\RR .\)
We formally put \(R=0\) in ii. and \(R=\infty \) in iii., so that there is a unique value \(R\in [0,\infty ]\) such that \(C(t)\) is absolutely convergent when \(|t|<R\) and divergent when \(|t|>R.\)
Figure 3.1: The possible intervals \(I\) of convergence of a power series
Definition: interval of convergence, radius of convergence.
The set \(I\) given in Corollary 3.2 is called the interval of convergence of the power series \(C(x),\) and \(R\in [0,\infty ]\) is the radius of convergence.
Remark: suppose the radius of convergence \(R\) of \(C(x)\) is strictly between \(0\) and \(\infty .\) At \(x=R,\) the series \(C(x)\) may
• be absolutely convergent; or
• be conditionally convergent; or
• be divergent.
At \(x=-R\) one of the three options will also hold. We note that the series is absolutely convergent at \(x=R\) iff it is absolutely convergent at \(x=-R,\) because \(\sum _{n\ge 0}|a_nR^n|\) and \(\sum _{n\ge 0}|a_n(-R)^n|\) are the same series.
We have already obtained the following
Result: interval of convergence of \(G(x)=\sum _{n=0}^\infty x^n\).
\(G(x)=1+x+x^2+\dots \) has radius of convergence \(1,\) interval of convergence \((-1,1).\)
We will use a practical method to find the interval of convergence of a power series:
Method: finding the interval of convergence.
1. Apply the Ratio Test to the non-negative series \(\sum _{n=0}^\infty |a_n| |x|^n.\)
2. The answer will show absolute convergence of \(\sum _{n=0}^\infty a_n x^n\) when \(|x|<R,\) where \(R\) is the radius of convergence.
3. Use further tests to check convergence of \(\sum _{n=0}^\infty a_n x^n\) at \(x=-R\) and \(x=R.\)
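As an optional numerical companion to this method (the function name `radius_from_ratio` is our own, and the sketch assumes the limit \(\lim _{n\to \infty }|a_n/a_{n+1}|\) exists), one can estimate the radius of convergence as the reciprocal of the Ratio Test limit:

```python
def radius_from_ratio(coeff, n=10**6):
    """Crude estimate of R = lim |a_n / a_{n+1}|, evaluated at one
    large index n (assumes this limit exists)."""
    return abs(coeff(n) / coeff(n + 1))

# a_n = 1/n (Example 1 below): R = lim (n+1)/n = 1
print(radius_from_ratio(lambda n: 1 / n))
# a_n = n   (Example 2 below): R = lim n/(n+1) = 1
print(radius_from_ratio(lambda n: n))
```

Note that a numerical estimate says nothing about the endpoints \(x=\pm R\); step 3 still requires separate tests.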
Example 1: find the interval of convergence of the series \(\sum _{n=1}^\infty \frac {x^n}{n}= x+\frac {x^2}2+\frac {x^3}3+\dots \) and determine the type of convergence at the endpoints of the interval.
Solution. We apply the Ratio Test to the series \(\sum _{n=1}^\infty \frac {|x^n|}{n}:\)
\[ \ell = \lim _{n\to \infty }\frac {|x^{n+1}|/(n+1)}{|x^n|/n}= \lim _{n\to \infty } |x|\frac {n}{n+1} = |x|\lim _{n\to \infty } \frac {1}{1+\frac 1n}=|x|\frac {1}{1+0}=|x|. \]
If \(\ell <1,\) that is \(|x|<1,\) the series \(\sum _{n=1}^\infty \frac {x^n}{n}\) is absolutely convergent. If \(|x|>1,\) the series is not absolutely convergent. We conclude that the radius of convergence is \(R=1.\)
What happens when \(x=\pm 1\)?
At \(x=-1\) the series \(\sum _{n=1}^\infty \frac {x^n}{n}\) becomes \(-1+\frac 12 -\frac 13+\frac 14-\dots \) This is \(-1\) times the alternating harmonic series, convergent (conditionally) by the Alternating Series Test.
At \(x=1\) the series is the harmonic series \(1+\frac 12 +\frac 13+\frac 14+\dots ,\) divergent. We have:
Result: the interval of convergence of \(\sum _{n=1}^\infty \frac {x^n}{n}\).
The interval of convergence is \([-1,1).\) At \(x=-1\) convergence is conditional.
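A quick numerical illustration (optional; the closed-form value \(-\ln 2\) of the sum at \(x=-1\) is a standard fact not proved in these notes):

```python
import math

def partial_sum(x, n):
    """Sum of the first n terms of x + x^2/2 + x^3/3 + ..."""
    return sum(x**k / k for k in range(1, n + 1))

# At x = -1: conditionally convergent; the sum is -ln 2 (standard fact).
print(partial_sum(-1.0, 10**5), -math.log(2))

# At x = 1: the harmonic series; the partial sums keep growing.
print([partial_sum(1.0, 10**k) for k in (1, 2, 3)])
```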
Example 2: find the interval of convergence of the series \(\sum _{n=1}^\infty nx^n = x+2x^2+3x^3+\dots \) and determine the type of convergence at the endpoints of the interval.
Solution. We apply the Ratio Test to the series \(\sum _{n=1}^\infty n|x^n|:\)
\[ \ell = \lim _{n\to \infty }\frac {(n+1)|x^{n+1}|}{n|x^n|}= \lim _{n\to \infty } |x|\frac {n+1}{n} = |x|. \]
If \(\ell <1,\) that is \(|x|<1,\) the series \(\sum _{n=1}^\infty nx^n\) is absolutely convergent. If \(|x|>1,\) the series is not absolutely convergent. We conclude that the radius of convergence is \(R=1.\)
At \(x=\pm 1\) the series is \(\sum _{n=1}^\infty n(\pm 1)^n,\) divergent by the Nullity Test, since its terms do not tend to \(0.\)
Result: the interval of convergence of \(\sum _{n=1}^\infty n x^n\).
The interval of convergence is \((-1,1).\)
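Again an optional numerical illustration (the closed form \(x/(1-x)^2\) for the sum on \((-1,1)\) is a standard identity, not derived in these notes):

```python
def partial_sum(x, n):
    """Sum of the first n terms of x + 2x^2 + 3x^3 + ..."""
    return sum(k * x**k for k in range(1, n + 1))

# Inside (-1, 1) the sum equals x/(1-x)^2 (standard identity):
x = 0.5
print(partial_sum(x, 100), x / (1 - x)**2)   # both close to 2.0

# At x = 1 the terms n * 1^n do not tend to 0, so the partial sums blow up:
print(partial_sum(1.0, 100))   # 1 + 2 + ... + 100 = 5050.0
```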
Remark. A limitation of power series is that they can only define a function on an interval centred at zero. To overcome this, one can consider, for \(a\in \RR ,\) power series at \(a:\)
\[ \sum _{n=0}^\infty c_n (x-a)^n. \]
Such a series defines a function on an interval \(I\) such that \((a-R,a+R)\subseteq I \subseteq [a-R,a+R]\) where \(R\) is the radius of convergence.
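For instance (an illustration of our own, not from the notes), taking \(c_n=1\) for all \(n\) gives a geometric series in \(x-a,\)
\[ \sum _{n=0}^\infty (x-a)^n = \frac {1}{1-(x-a)}, \qquad |x-a|<1, \]
whose interval of convergence is \((a-1,a+1),\) an interval centred at \(a\) with radius of convergence \(R=1.\)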
The function defined by a power series is continuous
Every power series \(C(x)\) can be viewed as a real-valued function on \(I,\) its interval of convergence. We claim that, at least on the open interval \((-R, R),\) this function is continuous. As \(C(x)\) is an infinite sum of terms of the form \(c_nx^n\) which are continuous functions of \(x,\) it makes sense to study the continuity of a sum of a series of functions. We begin with the following
Proposition 3.3: infinite sum of increasing continuous functions.
Suppose each of the functions \(f_1(x), f_2(x), \dots \) is increasing and continuous on \([c,d],\) and for each \(x\in [c,d]\) the series \(\sum _{m=1}^\infty f_m(x)\) is convergent with sum \(F(x).\) Then
\(F(x)\) is a continuous function on \([c,d].\)
Proof. By the criterion of continuity, we need to prove, for each \(b\in (c,d),\) that \(\lim \limits _{x\to b-} F(x) = \lim \limits _{x\to b+} F(x) = F(b).\) We will only prove that \(\lim \limits _{x\to b-} F(x)\) is \(F(b);\) the case of \(\lim \limits _{x\to b+}\) is similar and is left to the student. By Lemma 1.1, we need to show that
\(\seteqnumber{0}{3.}{0}\)\begin{equation*} \tag {*} \lim _{n\to \infty } F(x_n) = F(b), \end{equation*}
given any strictly increasing sequence \(x_1,x_2,\dots \) with limit \(b.\) To prove (*), we sum the double series \(a_{m,n}=f_m(x_n)-f_m(x_{n-1}),\)
\[ \begin {matrix} f_1(x_2)-f_1(x_1) & f_1(x_3)-f_1(x_2) & f_1(x_4)-f_1(x_3) & \dots \\ f_2(x_2)-f_2(x_1) & f_2(x_3)-f_2(x_2) & f_2(x_4)-f_2(x_3) & \dots \\ f_3(x_2)-f_3(x_1) & f_3(x_3)-f_3(x_2) & f_3(x_4)-f_3(x_3) & \dots \\ \vdots & \vdots & \vdots & \ddots \end {matrix} \]
by two methods. The first method is summation by columns, where we calculate
\[ \mathrm {ColumnSum}_n = \sum _{m=1}^\infty \left (f_m(x_n) - f_m(x_{n-1})\right ) = F(x_n) - F(x_{n-1}), \]
so that \(\sum _{n=2}^\infty \mathrm {ColumnSum}_n = \sum _{n=2}^\infty \left (F(x_n) - F(x_{n-1})\right ).\) A partial sum \((F(x_2) - F(x_1)) + (F(x_3) - F(x_2)) + \dots + (F(x_n) - F(x_{n-1}))\) of this series telescopes: the intermediate terms cancel, leaving \(F(x_n) - F(x_1).\) The sum of the series is the limit of the partial sums, so
\(\seteqnumber{0}{3.}{0}\)\begin{equation*} \sum _{n=2}^\infty \mathrm {ColumnSum}_n = \lim _{n\to \infty } F(x_n) - F(x_1). \end{equation*}
The second method is summation by rows: using telescoping sums again, we calculate
\[ \mathrm {RowSum}_m = \sum _{n=2}^\infty \left (f_m(x_n)-f_m(x_{n-1})\right ) = \lim _{n\to \infty } f_m(x_n) - f_m(x_1). \]
Here \(f_m\) is continuous, so \(x_n\to b\) as \(n\to \infty \) implies \(\lim _{n\to \infty } f_m(x_n)= f_m(b).\) Therefore
\[ \sum _{m=1}^\infty \mathrm {RowSum}_m = \sum _{m=1}^\infty \left (f_m(b) - f_m(x_1)\right ) = F(b) - F(x_1). \]
Note that all the \(a_{m,n}\) are non-negative: \(x_{n}>x_{n-1},\) \(f_m\) is increasing, so \(f_m(x_n)\ge f_m(x_{n-1}).\) Hence by Proposition 2.3, the sum of all the \(a_{m,n}\) does not depend on the method of summation, which means that \(\lim _{n\to \infty } F(x_n) - F(x_1) = F(b) - F(x_1).\) This proves (*) and the Proposition. □
If we consider the function \(f_m(x) = c_m x^m\) on, say, an interval \([0,d],\) it can be increasing or decreasing, depending on the sign of \(c_m.\) Recall that a function which is increasing on \([c,d]\) or is decreasing on \([c,d]\) is called a monotone function. With this in mind, we give a modified version of the previous result.
Proposition 3.4: absolutely convergent sum of monotone continuous functions.
Suppose each of the functions \(f_1(x), f_2(x), \dots \) is monotone and continuous on \([c,d],\) and for each \(x\in [c,d]\) the series \(\sum _{m=1}^\infty f_m(x)\) is absolutely convergent and has sum
\(F(x).\) Then \(F(x)\) is a continuous function on \([c,d].\)
Proof (not given in class). We repeat the proof of Proposition 3.3 verbatim, omitting the last paragraph, which begins with the words “Note that all the \(a_{m,n}\) are non-negative”: here \(a_{m,n} = f_m(x_n) - f_m(x_{n-1})\) may be negative, so we cannot use Proposition 2.3 to say that the sum of the double series does not depend on the method of summation.
Instead, we will use Claim 2.7. For that, we need to check that \(\sum _{m,n} |a_{m,n}|<+\infty .\) Note that in the \(m\)th row, all numbers \(a_{m,2}, a_{m,3},\dots \) are either all non-negative if \(f_m\) is an increasing function, or all non-positive if \(f_m\) is decreasing. That is,
\(\seteqnumber{0}{3.}{0}\)\begin{align*} \sum _{n=2}^\infty |a_{m,n}| & = |f_m(x_2)-f_m(x_1)| + |f_m(x_3)-f_m(x_2)| + \dots \\ & = \begin{cases} & (f_m(x_2)-f_m(x_1)) + (f_m(x_3)-f_m(x_2))+\dots , \\ & \text { if $f_m$ is increasing,} \\ & -(f_m(x_2)-f_m(x_1)) - (f_m(x_3)-f_m(x_2)) - \dots , \\ & \text { if $f_m$ is decreasing,} \\ \end {cases} \end{align*} which is a telescopic series with sum \(|f_m(b) - f_m(x_1)|.\) By the triangle inequality, \(|f_m(b) - f_m(x_1)|\) is bounded above by \(|f_m(b)|+|f_m(x_1)|.\)
Therefore, \(\sum _{m,n}|a_{m,n}| \le \sum _{m=1}^\infty |f_m(b)|+ \sum _{m=1}^\infty |f_m(x_1)|.\) This is finite by the assumption of absolute convergence. So Claim 2.7 guarantees that summation of the \(a_{m,n}\) by columns gives the same answer as summation by rows, leading to the same conclusion as in Proposition 3.3. □
Theorem 3.5: a function defined by a power series is continuous.
Let \(C(x)\) denote the sum of a power series with radius of convergence \(R.\) Then \(C(x)\) is a continuous function on the interval \((-R, R).\)
Proof. Let \(C(x)=c_0+c_1x+c_2 x^2+\dots \) Fix any \(d\) with \(0<d<R.\) Each term \(c_m x^m\) is a continuous and monotone function on \([0,d],\) and the convergence for \(|x|\le d\) is absolute, so by Proposition 3.4, \(C(x)\) is continuous on \([0,d].\) As \(d<R\) was arbitrary, \(C(x)\) is continuous on \([0,R).\)
On \((-R, 0],\) \(C(x)\) is continuous for exactly the same reason. We have to split the interval \((-R, R)\) into \((-R,0]\) and \([0,R)\) because if \(m\) is even, \(x^m\) is not monotone on the whole \((-R, R)\) but is monotone on \((-R,0]\) and on \([0,R).\)
It is easy to check (using one-sided limits at \(0\)) that a function, continuous on \((-R,0]\) and on \([0,R),\) is continuous on \((-R,R).\) □