In mathematics, Ramanujan's Master Theorem, named after Srinivasa Ramanujan,[1] is a technique that provides an analytic expression for the Mellin transform of an analytic function.
Page from Ramanujan's notebook stating his Master theorem.
The result is stated as follows:
If a complex-valued function f(x) has an expansion of the form
f(x) = \sum_{k=0}^{\infty} \frac{\varphi(k)}{k!}\,(-x)^k
then the Mellin transform of f(x) is given by
\int_0^{\infty} x^{s-1} f(x)\, dx = \Gamma(s)\, \varphi(-s)
where Γ(s) is the gamma function.
It was widely used by Ramanujan to calculate definite integrals and infinite series. Higher-dimensional versions of this theorem also appear in quantum physics (through Feynman diagrams).[2]
A similar result was also obtained by Glaisher.[3]
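The statement can be sanity-checked numerically. The sketch below (the helper name `mellin_numeric` is invented for illustration) takes f(x) = e^{−x}, for which φ(k) = 1 for every k, so the theorem predicts that the Mellin transform equals Γ(s)·φ(−s) = Γ(s):

```python
import math

def mellin_numeric(f, s, lo=-80.0, hi=6.0, n=20000):
    """Approximate the Mellin transform  ∫_0^∞ x^{s-1} f(x) dx
    via the substitution x = e^u and the trapezoidal rule."""
    h = (hi - lo) / n
    total = 0.0
    for i in range(n + 1):
        u = lo + i * h
        w = 0.5 if i in (0, n) else 1.0
        total += w * math.exp(s * u) * f(math.exp(u))
    return total * h

# f(x) = e^{-x} has phi(k) = 1 for all k, so the predicted
# Mellin transform is Gamma(s) * phi(-s) = Gamma(s).
s = 0.5
print(mellin_numeric(lambda x: math.exp(-x), s), math.gamma(s))
```

The two printed values agree closely, and the same check works for any 0 < s < 1.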
Alternative formalism
An alternative formulation of Ramanujan's Master Theorem is as follows:
\int_0^{\infty} x^{s-1} \left( \lambda(0) - x\,\lambda(1) + x^2\,\lambda(2) - \cdots \right) dx = \frac{\pi}{\sin(\pi s)}\, \lambda(-s)
which is converted to the above form after substituting λ(n) ≡ φ(n)/Γ(1+n) and using the functional equation for the gamma function.
The integral above is convergent for 0 < Re(s) < 1, subject to growth conditions on φ.[4]
A proof of Ramanujan's Master Theorem subject to "natural" assumptions (though not the weakest necessary conditions) was provided by G. H. Hardy[5] (chapter XI), employing the residue theorem and the well-known Mellin inversion theorem.
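For a concrete instance of the alternative form (a sketch; the helper name `alt_master_lhs` is invented for illustration): taking λ(n) = 1 for all n gives the series 1 − x + x² − ⋯ = 1/(1+x), so the integral should evaluate to π/sin(πs). A direct numerical check:

```python
import math

def alt_master_lhs(s, lo=-60.0, hi=60.0, n=24000):
    """Numerically evaluate ∫_0^∞ x^{s-1}/(1+x) dx, i.e. the
    alternative form with lambda(n) = 1, via x = e^u and the
    trapezoidal rule."""
    h = (hi - lo) / n
    total = 0.0
    for i in range(n + 1):
        u = lo + i * h
        w = 0.5 if i in (0, n) else 1.0
        total += w * math.exp(s * u) / (1.0 + math.exp(u))
    return total * h

# Predicted value: pi / sin(pi * s)
s = 0.5
print(alt_master_lhs(s), math.pi / math.sin(math.pi * s))
```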
Application to Bernoulli polynomials
The generating function of the Bernoulli polynomials B_k(x) is given by:
\frac{z\, e^{xz}}{e^z - 1} = \sum_{k=0}^{\infty} B_k(x)\, \frac{z^k}{k!}
These polynomials are given in terms of the Hurwitz zeta function :
\zeta(s,a) = \sum_{n=0}^{\infty} \frac{1}{(n+a)^s}
by ζ(1 − n, a) = −B_n(a)/n for n ≥ 1.
Using Ramanujan's Master Theorem and the generating function of the Bernoulli polynomials, one obtains the following integral representation:[6]
\int_0^{\infty} x^{s-1} \left( \frac{e^{-ax}}{1 - e^{-x}} - \frac{1}{x} \right) dx = \Gamma(s)\, \zeta(s,a)
which is valid for 0 < Re(s) < 1.
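This representation can be checked numerically for a = 1, where ζ(s, 1) is the ordinary Riemann zeta function and the first factor of the integrand simplifies to 1/(e^x − 1). The sketch below (helper name invented for illustration) compares the integral at s = 1/2 against Γ(1/2)·ζ(1/2), using the known value ζ(1/2) ≈ −1.4603545088:

```python
import math

def hurwitz_integral(s, lo=-30.0, hi=40.0, n=28000):
    """Numerically evaluate ∫_0^∞ x^{s-1} (1/(e^x - 1) - 1/x) dx,
    the a = 1 case, via x = e^u and the trapezoidal rule."""
    h = (hi - lo) / n
    total = 0.0
    for i in range(n + 1):
        u = lo + i * h
        x = math.exp(u)
        # 1/(e^x - 1); for very large x the term is effectively 0
        t1 = 1.0 / math.expm1(x) if x < 700.0 else 0.0
        w = 0.5 if i in (0, n) else 1.0
        total += w * math.exp(s * u) * (t1 - 1.0 / x)
    return total * h

s = 0.5
zeta_half = -1.4603545088095868  # known value of zeta(1/2)
print(hurwitz_integral(s), math.gamma(s) * zeta_half)
```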
Application to the gamma function
Weierstrass's definition of the gamma function
\Gamma(x) = \frac{e^{-\gamma x}}{x} \prod_{n=1}^{\infty} \left( 1 + \frac{x}{n} \right)^{-1} e^{x/n}
is equivalent to the expression
\log \Gamma(1+x) = -\gamma x + \sum_{k=2}^{\infty} \frac{\zeta(k)}{k}\, (-x)^k
where ζ(k) is the Riemann zeta function.
Applying Ramanujan's Master Theorem, we have:
\int_0^{\infty} x^{s-1}\, \frac{\gamma x + \log \Gamma(1+x)}{x^2}\, dx = \frac{\pi}{\sin(\pi s)}\, \frac{\zeta(2-s)}{2-s}
valid for 0 < Re(s) < 1.
Special cases of s = 1/2 and s = 3/4 are
\int_0^{\infty} \frac{\gamma x + \log \Gamma(1+x)}{x^{5/2}}\, dx = \frac{2\pi}{3}\, \zeta\!\left( \frac{3}{2} \right)
\int_0^{\infty} \frac{\gamma x + \log \Gamma(1+x)}{x^{9/4}}\, dx = \sqrt{2}\, \frac{4\pi}{5}\, \zeta\!\left( \frac{5}{4} \right)
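The s = 1/2 special case can be verified numerically. The sketch below (helper name invented for illustration) handles the x → 0 cancellation between γx and log Γ(1+x) by replacing the integrand below a cutoff c with the first two terms of the series above, integrated in closed form, and computes ζ(3/2) by direct summation with an Euler–Maclaurin tail estimate:

```python
import math

EULER_GAMMA = 0.5772156649015329  # Euler–Mascheroni constant

def lhs_integral(c=1e-3, hi=50.0, n=40000):
    """∫_0^∞ (γx + log Γ(1+x)) / x^{5/2} dx, split at x = c.
    Below c, the series gives γx + log Γ(1+x) ≈ ζ(2)x²/2 − ζ(3)x³/3,
    whose contribution integrates in closed form."""
    zeta2, zeta3 = math.pi**2 / 6, 1.2020569031595943
    head = zeta2 * math.sqrt(c) - (2 * zeta3 / 9) * c**1.5
    lo = math.log(c)
    h = (hi - lo) / n
    total = 0.0
    for i in range(n + 1):
        u = lo + i * h  # x = e^u substitution
        x = math.exp(u)
        w = 0.5 if i in (0, n) else 1.0
        total += w * (EULER_GAMMA * x + math.lgamma(1.0 + x)) * math.exp(-1.5 * u)
    return head + total * h

# zeta(3/2) = sum over n^{-3/2} plus Euler–Maclaurin tail correction
N = 10000
zeta_32 = sum(k**-1.5 for k in range(1, N + 1)) + 2 / math.sqrt(N) - 0.5 * N**-1.5
print(lhs_integral(), (2 * math.pi / 3) * zeta_32)
```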
Application to Bessel functions
The Bessel function of the first kind has the power series
J_{\nu}(z) = \sum_{k=0}^{\infty} \frac{(-1)^k}{\Gamma(k+\nu+1)\, k!} \left( \frac{z}{2} \right)^{2k+\nu}
By Ramanujan's Master Theorem, together with some identities for the gamma function and rearranging, we can evaluate the integral
\frac{2^{\nu-2s}\, \pi}{\sin(\pi(s-\nu))} \int_0^{\infty} z^{s-1-\nu/2}\, J_{\nu}(\sqrt{z})\, dz = \Gamma(s)\, \Gamma(s-\nu)
valid for 0 < 2 Re(s) < Re(ν) + 3/2.
Equivalently, if the spherical Bessel function j_ν(z) is preferred, the formula becomes
\frac{2^{\nu-2s} \sqrt{\pi}\, (1-2s+2\nu)}{\cos(\pi(s-\nu))} \int_0^{\infty} z^{s-1-\nu/2}\, j_{\nu}(\sqrt{z})\, dz = \Gamma(s)\, \Gamma\!\left( \frac{1}{2} + s - \nu \right)
valid for 0 < 2 Re(s) < Re(ν) + 2.
The solution is remarkable in that it interpolates across the major identities for the gamma function. In particular, the choice of J_0(√z) gives the square of the gamma function, j_0(√z) gives the duplication formula, z^{−1/2} J_1(√z) gives the reflection formula, and fixing to the evaluable s = 1/2 or s = 1 gives the gamma function by itself, up to reflection and scaling.
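The j_0 claim can be sketched directly, assuming the classical evaluation ∫_0^∞ t^{μ−1} sin t dt = Γ(μ) sin(πμ/2) for −1 < μ < 1. With ν = 0, j_0(√z) = sin(√z)/√z, and the substitution z = t² reduces the spherical formula to Legendre's duplication formula:

```latex
\int_0^\infty z^{s-1} j_0(\sqrt{z})\,dz
  = 2\int_0^\infty t^{2s-2}\sin t\,dt
  = 2\,\Gamma(2s-1)\,\sin\!\frac{\pi(2s-1)}{2}
  = -2\,\Gamma(2s-1)\cos(\pi s),
% so the left-hand side of the spherical formula becomes
\frac{2^{-2s}\sqrt{\pi}\,(1-2s)}{\cos(\pi s)}\cdot\bigl(-2\,\Gamma(2s-1)\cos(\pi s)\bigr)
  = 2^{1-2s}\sqrt{\pi}\,(2s-1)\,\Gamma(2s-1)
  = 2^{1-2s}\sqrt{\pi}\,\Gamma(2s),
```

which equals Γ(s) Γ(1/2 + s) precisely by the duplication formula.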
Bracket integration method
The bracket integration method (method of brackets) applies Ramanujan's Master Theorem to a broad range of integrals.[7] [8] The bracket integration method generates an integral of a series expansion, introduces simplifying notations, solves linear equations, and completes the integration using formulas arising from Ramanujan's Master Theorem.[8]
Generate an integral of a series expansion
This method transforms the integral to an integral of a series expansion involving M variables, x_1, …, x_M, and S summation parameters, n_1, …, n_S. A multivariate integral may assume this form:[2]: 8
\int_0^{\infty} \cdots \int_0^{\infty} \sum_{n_1,\ldots,n_S=0}^{\infty} \varphi(n_1 \cdots n_S)\ \prod_{j=1}^{S} \left( \frac{(-1)^{n_j}}{n_j!} \right) \prod_{j=1}^{M} (x_j)^{-c_j + a_{j1} \cdot n_1 + \cdots + a_{jS} \cdot n_S - 1}\ dx_1 \cdots dx_M
(B.0)
Apply special notations
The bracket (⟨⋯⟩), indicator (ϕ), and monomial power notations replace terms in the series expansion.[2]: 8
\int_0^{\infty} x^{c + b \cdot n - 1}\ dx \ \to\ \langle c + b \cdot n \rangle
(B.1)
\frac{(-1)^n}{n!}\ \to\ \phi_n
(B.2)
\prod_{j=1}^{S} \left( \frac{(-1)^{n_j}}{n_j!} \right)\ \to\ \phi_{n_1,\ldots,n_S}
(B.3)
\left( \sum_{k=1}^{P} u_k \right)^{\mp d}\ \to\ \sum_{n_1,\ldots,n_P=0}^{\infty} \phi_{n_1,\ldots,n_P} \prod_{k=1}^{P} u_k^{n_k}\, \frac{\langle \pm d + \sum_{j=1}^{P} n_j \rangle}{\Gamma(\pm d)}
(B.4)
Application of these notations transforms the integral to a bracket series containing B brackets.[7]: 56
\sum_{n_1,\ldots,n_S=0}^{\infty} \varphi(n_1 \cdots n_S)\ \phi(n_1 \cdots n_S) \prod_{j=1}^{B} \left\langle -c_j + \sum_{k=1}^{S} a_{jk} \cdot n_k \right\rangle
(B.5)
Each bracket series has an index defined as index = number of sums − number of brackets.
Among all bracket series representations of an integral, the representation with a minimal index is preferred.[8]: 984
Solve linear equations
The array of coefficients a_{jk} must have maximal rank, with linearly independent leading columns, in order to solve the following set of linear equations.[2]: 8 [8]: 985
If the index is non-negative, solve this equation set for each n_j^∗. The terms n_j^∗ may be linear functions of {n_{B+1}, …, n_S}.
-c_j + \sum_{k=1}^{B} a_{jk} \cdot n_k^{\ast} + \sum_{k=B+1}^{S} a_{jk} \cdot n_k = 0
(B.6)
If the index is zero, equation (B.6) simplifies to solving this equation set for each n_j^∗:
-c_j + \sum_{k=1}^{B} a_{jk} \cdot n_k^{\ast} = 0
(B.7)
If the index is negative, the integral cannot be determined.
Apply formulas
If the index is non-negative, the integral takes this form.[7]: 54
\sum_{n_{B+1} \ldots n_S = 0}^{\infty} \frac{\varphi(n_1^{\ast} \ldots n_B^{\ast}, n_{B+1} \ldots n_S) \cdot \prod_{j=1}^{B} \Gamma(-n_j^{\ast})}{\det|A|}
(B.8)
These rules apply.[8]: 985
A series is generated for each choice of free summation parameters, {n_{B+1}, …, n_S}.
Series converging in a common region are added.
If a choice generates a divergent or null series (a series with zero-valued terms), the series is rejected.
A bracket series of negative index is assigned no value.
If all series are rejected, then the method cannot be applied.
If the index is zero, formula (B.8) simplifies to the following, and no sum occurs:
\frac{\varphi(n_1^{\ast} \ldots n_S^{\ast}) \cdot \prod_{j=1}^{S} \Gamma(-n_j^{\ast})}{\det|A|}
(B.9)
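For the index-zero case with a single sum and a single bracket ⟨c + b·n⟩, rules (B.7) and (B.9) amount to a few lines of code. The sketch below (function name invented for illustration) solves the one linear equation and applies the closed-form rule:

```python
import math

def bracket_one_sum(phi, b, c):
    """Index-zero method of brackets, one sum and one bracket <c + b*n>:
    solve c + b*n = 0 for n* (rule B.7), then return
    phi(n*) * Gamma(-n*) / |det A|, where det A = b (rule B.9)."""
    n_star = -c / b
    return phi(n_star) * math.gamma(-n_star) / abs(b)

# Sanity check: e^{-x} = sum_n phi_n x^n gives phi(n) = 1, and
# ∫_0^∞ x^{s-1} e^{-x} dx has bracket series sum_n phi_n <n + s>,
# so the method should return Gamma(s), consistent with the Master Theorem.
print(bracket_one_sum(lambda n: 1.0, b=1.0, c=0.5))  # Gamma(1/2) ≈ 1.7724538509
```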
Mathematical basis
Apply this variable transformation to the general integral form (B.0).[4]: 14
y_k = x_1^{a_{1k}} \cdot \ldots \cdot x_M^{a_{Mk}}
(B.10)
This is the transformed integral (B.11 ) and the result from applying Ramanujan's Master Theorem (B.12 ).
(\det|A|^{-1}) \cdot \int_0^{\infty} \cdots \int_0^{\infty} \sum_{n_1,\ldots,n_S=0}^{\infty} \varphi(n_1 \cdots n_S) \prod_{j=1}^{S} \left( \frac{(-1)^{n_j}}{n_j!} \right) \prod_{j=1}^{M} (y_j)^{-n_j^{\ast} + n_j - 1}\ dy_1 \cdots dy_M
(B.11)
= \sum_{n_{M+1} \cdots n_S = 0}^{\infty} \frac{\varphi(n_1^{\ast} \ldots n_M^{\ast}, n_{M+1} \ldots n_S) \cdot \prod_{j=1}^{M} \Gamma(-n_j^{\ast})}{\det|A|}
(B.12)
The number of brackets (B) equals the number of integrals (M) (B.1). In addition to generating the algorithm's formulas (B.8, B.9), the variable transformation also generates the algorithm's linear equations (B.6, B.7).[4]: 14
Example
The bracket integration method is applied to this integral.
\int_0^{\infty} x^{3/2} \cdot e^{-x^3/2}\ dx
Generate the integral of a series expansion (B.0).
\int_0^{\infty} \sum_{n=0}^{\infty} 2^{-n} \cdot \frac{(-1)^n}{n!} \cdot x^{(3 \cdot n + 5/2) - 1}\ dx
Apply special notations (B.1, B.2).
\sum_{n=0}^{\infty} 2^{-n} \cdot \phi(n) \cdot \left\langle 3 \cdot n + \frac{5}{2} \right\rangle
Solve the linear equation (B.7).
3 \cdot n^{\ast} + \frac{5}{2} = 0, \quad n^{\ast} = -\frac{5}{6}, \quad |A| = 3
Apply formula (B.9) to complete the integration:

\frac{2^{5/6} \cdot \Gamma\!\left( \frac{5}{6} \right)}{3}
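This closed form can be confirmed by direct quadrature (a sketch: Simpson's rule over a truncated range, which suffices because the integrand decays like e^{−x³/2}):

```python
import math

def simpson(f, a, b, n=4000):
    """Composite Simpson's rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

# Integrand of the example; the tail beyond x = 10 is negligible.
approx = simpson(lambda x: x**1.5 * math.exp(-x**3 / 2), 0.0, 10.0)
exact = 2**(5/6) * math.gamma(5/6) / 3
print(approx, exact)
```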
References
1. Berndt, B. (1985). Ramanujan's Notebooks, Part I. New York: Springer-Verlag.
2. González, Iván; Moll, V. H.; Schmidt, Iván (2011). "A generalized Ramanujan Master Theorem applied to the evaluation of Feynman diagrams". arXiv:1103.0588 [math-ph].
3. Glaisher, J. W. L. (1874). "A new formula in definite integrals". The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science. 48 (315): 53–55. doi:10.1080/14786447408641072.
4. Amdeberhan, Tewodros; Gonzalez, Ivan; Harrison, Marshall; Moll, Victor H.; Straub, Armin (2012). "Ramanujan's Master Theorem". The Ramanujan Journal. 29 (1–3): 103–120. doi:10.1007/s11139-011-9333-y.
5. Hardy, G. H. (1978). Ramanujan: Twelve Lectures on Subjects Suggested by His Life and Work (3rd ed.). New York, NY: Chelsea. ISBN 978-0-8284-0136-4.
6. Espinosa, O.; Moll, V. (2002). "On some definite integrals involving the Hurwitz zeta function. Part 2". The Ramanujan Journal. 6 (4): 449–468. arXiv:math/0107082. doi:10.1023/A:1021171500736.
7. Gonzalez, Ivan; Moll, Victor H. (July 2010). "Definite integrals by the method of brackets. Part 1". Advances in Applied Mathematics. 45 (1): 50–73. doi:10.1016/j.aam.2009.11.003.
8. Gonzalez, Ivan; Jiu, Lin; Moll, Victor H. (2020). "An extension of the method of brackets. Part 2". Open Mathematics. 18 (1): 983–995. arXiv:1707.08942. doi:10.1515/math-2020-0062.