I am having trouble understanding how we can define the Green's function just like that, because previously it was just an inverse d'Alembertian, and the d'Alembertian depends on only one variable, whereas in this definition its inverse depends on two variables. The confusing part for me is whether this definition is independent of our previous definitions or just a consequence of them. If it is just a consequence of those definitions, how can I check that this equation is satisfied using only the previous definitions? You can find the content on page 41 of Schwartz's QFT book.
-
$\begingroup$ Well, take $z=x-y$ and you will reduce the "two variables" to just one. $\endgroup$ – naturallyInconsistent, Aug 1 at 3:11
-
$\begingroup$ If the question is about a definition: usually when using operators, one can perform a Fourier transform on them, obtaining polynomials, with each derivative corresponding to a power of a Fourier-space variable. These can appear in the numerator or the denominator, so the differential operator becomes "possible" to divide by. As for the number of variables, I do not really understand your question, since it seems to me that the d'Alembertian depends on the variables $x^\mu$, and its inverse will also depend on the same coordinates. Why would the number of variables change? $\endgroup$ – Pierre Polovodov, Aug 1 at 12:05
-
$\begingroup$ It's just a sloppy physicist's notation... the equation (3.77) should read $\frac{1}{\square} \delta^{(4)}(x-y)$ $\endgroup$ – Jeanbaptiste Roux, Aug 1 at 13:27
2 Answers
If $L: V\rightarrow V$ is a linear operator on a finite dimensional vector space, then by choosing a basis, you can always write it as a matrix, and write the relation $w = Lv$ as $w_i = \sum_{j} L_{ij}v_j$.
Notably, this does not hold for operators between infinite dimensional spaces in general, at least not like this.
In the physics literature, this is however not an obstacle, and authors often essentially assume that every operator between function spaces has a kernel, i.e. if $L$ is a linear map between suitable (and usually not explicitly defined) function spaces, the relation $g = Lf$ can be written as an integral operator relation $$g(x) = \int L(x, y)f(y)\, dy,$$ which is really just the continuum analogue of the matrix equation above. The bivariate function $L(x, y)$ above is called the kernel of the operator $L$.
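As a small numerical sketch of this correspondence (with an arbitrarily chosen smooth kernel, purely for illustration), discretizing the integral as a Riemann sum turns the kernel relation literally into the matrix equation above:

```python
import numpy as np

# Sample a (made-up) kernel L(x, y) = exp(-(x - y)^2) on a grid; the relation
# g(x) = \int L(x, y) f(y) dy becomes the matrix-vector sum g_i = sum_j L_ij f_j dy.
x = np.linspace(-5.0, 5.0, 200)
dy = x[1] - x[0]
L = np.exp(-(x[:, None] - x[None, :]) ** 2)  # kernel sampled as a matrix
f = np.sin(x)

g = L @ f * dy  # discretized integral operator = matrix acting on a vector
```

Here the matrix `L` plays exactly the role of $L_{ij}$, with the grid spacing `dy` supplying the integration measure.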
As mentioned, most linear operators don't have a kernel, but this doesn't stop people from treating them as if they had kernels. Sometimes using function notation for distributions can make it appear that an operator has a kernel when in fact it doesn't and still retain at least some semblance of rigour.
So with that in mind, if $D$ is a linear differential operator between some (implicit) function spaces, then a bivariate function $G(x, y)$ is called a Green function for the operator if $G$ is the kernel of an inverse of $D$ (I say "an" rather than "the" because most differential operators are not invertible; for example, elliptic operators require appropriate boundary conditions for a unique solution to exist, so an elliptic operator is invertible only if the function space is restricted by a choice of boundary conditions).
Let's for a moment suppose that $D$ can be written as an integral operator (it usually can't but whatever), then we have $$f(x)=\iint D(x, z)G(z, y)f(y)\,dz\,dy,$$ which means that the inner integral just produces the Dirac delta, $\int D(x, z)G(z,y)\, dz = \delta(x-y)$, which is really just the continuum analogue of $\delta_{ij}=\sum_k A_{ik}B_{kj}$.
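This discrete analogue is easy to check numerically. A minimal sketch, using the 1D operator $-d^2/dx^2$ with Dirichlet boundary conditions as a stand-in for the d'Alembertian (the structure of the argument is the same):

```python
import numpy as np

# Finite-difference matrix for D = -d^2/dx^2 on n interior points,
# Dirichlet boundary conditions (so the operator is invertible).
n = 50
h = 1.0 / (n + 1)
D = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2

# The Green's "matrix" is simply the inverse: sum_k D_ik G_kj = delta_ij,
# the discrete analogue of D_x G(x, y) = delta(x - y).
G = np.linalg.inv(D)

print(np.allclose(D @ G, np.eye(n)))  # True
```

Each column `G[:, j]` is the discrete Green's function for a delta source at grid point `j`.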
But if we recall that $D(x,y)$ is just the fictitious kernel of $D$, then this can be written as $$D_x G(x,y) = \delta(x-y), $$ where the index $x$ in $D_x$ signifies that the operator acts on the $x$ variable of $G(x,y)$ only.
So $G(x, y)$ is bivariate by design because it is supposed to be a "continuum matrix" representation of a linear operator.
However if the operator $D$ is translation invariant, then so is $G$ and in this case one can define a single-variable function $\hat G(z)$ by $\hat G(x-y) = G(x, y)$. It is then a very common abuse of notation to use the same symbol for $G$ and $\hat G$.
"previously it was just an inverse d'Alembertian and the d'Alembertian depends on only one variable, whereas in this definition its inverse depends on two variables."
For a homogeneous system, the Green's function $G(x,y)$ can be written as a function of just the difference $x-y$. This is because for a homogeneous system the propagator (A.K.A. the "Green's function") from $x$ to $y$ should be the same as the propagator from $x-a$ to $y-a$ for any $a$.
In this case (considering just the d'Alembertian) the system is homogeneous, since $\frac{\partial \phi}{\partial x}=\frac{\partial (x-a)}{\partial x}\cdot\frac{\partial \phi}{\partial (x-a)}=\frac{\partial \phi}{\partial (x-a)}$ for constant $a$.
But in that case we have $$ \phi(x) = \int G(x-a,y-a)\phi(y)\,dy\;,\qquad\text{(homogeneous system)} $$ and we can choose $a=x$ to find $$ \phi(x) = \int G(0, y-x)\phi(y)\,dy \equiv \int g(x-y)\phi(y)\,dy\;, $$ where $$ g(z) = G(0,-z)\;. $$
