Fundamental Concepts

Because A is Hermitian, we also have

$$\langle a''|A = a''^*\langle a''|, \qquad (1.3.2)$$

where $a', a'', \ldots$ are eigenvalues of A. If we multiply both sides of (1.3.1) by $\langle a''|$ on the left, both sides of (1.3.2) by $|a'\rangle$ on the right, and subtract, we obtain

$$(a' - a''^*)\langle a''|a'\rangle = 0. \qquad (1.3.3)$$

Now $a'$ and $a''$ can be taken to be either the same or different. Let us first choose them to be the same; we then deduce the reality condition (the first half of the theorem)

$$a' = a'^*, \qquad (1.3.4)$$

where we have used the fact that $|a'\rangle$ is not a null ket. Let us now assume $a'$ and $a''$ to be different. Because of the just-proved reality condition, the difference $a' - a''^*$ that appears in (1.3.3) is equal to $a' - a''$, which cannot vanish, by assumption. The inner product $\langle a''|a'\rangle$ must then vanish:

$$\langle a''|a'\rangle = 0, \qquad (a' \neq a''), \qquad (1.3.5)$$

which proves the orthogonality property (the second half of the theorem).

We expect on physical grounds that an observable has real eigenvalues, a point that will become clearer in the next section, where measurements in quantum mechanics will be discussed. The theorem just proved guarantees the reality of eigenvalues whenever the operator is Hermitian. That is why we talk about Hermitian observables in quantum mechanics.

It is conventional to normalize $|a'\rangle$ so that the $\{|a'\rangle\}$ form an orthonormal set:

$$\langle a''|a'\rangle = \delta_{a''a'}. \qquad (1.3.6)$$

We may logically ask, Is this set of eigenkets complete? Since we started our discussion by asserting that the whole ket space is spanned by the eigenkets of A, the eigenkets of A must therefore form a complete set by construction of our ket space.*

Eigenkets as Base Kets

We have seen that the normalized eigenkets of A form a complete orthonormal set. An arbitrary ket in the ket space can be expanded in terms

* The astute reader, already familiar with wave mechanics, may point out that the completeness of eigenfunctions we use can be proved by applying the Sturm-Liouville theory to the Schrödinger wave equation.
But to "derive" the Schrödinger wave equation from our fundamental postulates, the completeness of the position eigenkets must be assumed.
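As a quick numerical check of the theorem just proved (not part of the text, and with an arbitrarily chosen illustrative matrix), one can verify with NumPy that a Hermitian matrix has real eigenvalues and orthogonal eigenvectors:

```python
import numpy as np

# A small Hermitian matrix (A = A†), chosen arbitrarily for illustration.
A = np.array([[2.0, 1.0 - 1.0j],
              [1.0 + 1.0j, 3.0]])
assert np.allclose(A, A.conj().T)  # Hermiticity

# np.linalg.eigh is specialized to Hermitian operators.
eigvals, eigvecs = np.linalg.eigh(A)

# Reality condition (1.3.4): the eigenvalues come out real.
print(eigvals)  # [1. 4.]

# Orthogonality (1.3.5): eigenkets belonging to distinct eigenvalues
# have vanishing inner product <a''|a'>.
inner = np.vdot(eigvecs[:, 0], eigvecs[:, 1])
print(abs(inner))  # ~0
```

Note that `eigh` returns real eigenvalues by construction; the point of the assertions is that they agree with a direct check of the defining properties.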
1.3. Base Kets and Matrix Representations

of the eigenkets of A. In other words, the eigenkets of A are to be used as base kets in much the same way as a set of mutually orthogonal unit vectors is used as base vectors in Euclidean space.

Given an arbitrary ket $|\alpha\rangle$ in the ket space spanned by the eigenkets of A, let us attempt to expand it as follows:

$$|\alpha\rangle = \sum_{a'} c_{a'}|a'\rangle. \qquad (1.3.7)$$

Multiplying $\langle a''|$ on the left and using the orthonormality property (1.3.6), we can immediately find the expansion coefficient,

$$c_{a'} = \langle a'|\alpha\rangle. \qquad (1.3.8)$$

In other words, we have

$$|\alpha\rangle = \sum_{a'} |a'\rangle\langle a'|\alpha\rangle, \qquad (1.3.9)$$

which is analogous to an expansion of a vector $\mathbf{V}$ in (real) Euclidean space:

$$\mathbf{V} = \sum_i \hat{e}_i(\hat{e}_i \cdot \mathbf{V}), \qquad (1.3.10)$$

where $\{\hat{e}_i\}$ form an orthogonal set of unit vectors. We now recall the associative axiom of multiplication: $|a'\rangle\langle a'|\alpha\rangle$ can be regarded either as the number $\langle a'|\alpha\rangle$ multiplying $|a'\rangle$ or, equivalently, as the operator $|a'\rangle\langle a'|$ acting on $|\alpha\rangle$. Because $|\alpha\rangle$ in (1.3.9) is an arbitrary ket, we must have

$$\sum_{a'} |a'\rangle\langle a'| = 1, \qquad (1.3.11)$$

where the 1 on the right-hand side is to be understood as the identity operator. Equation (1.3.11) is known as the completeness relation or closure.

It is difficult to overestimate the usefulness of (1.3.11). Given a chain of kets, operators, or bras multiplied in legal orders, we can insert, in any place at our convenience, the identity operator written in form (1.3.11). Consider, for example, $\langle\alpha|\alpha\rangle$; by inserting the identity operator between $\langle\alpha|$ and $|\alpha\rangle$, we obtain

$$\langle\alpha|\alpha\rangle = \langle\alpha|\left(\sum_{a'} |a'\rangle\langle a'|\right)|\alpha\rangle = \sum_{a'} |\langle a'|\alpha\rangle|^2. \qquad (1.3.12)$$

This, incidentally, shows that if $|\alpha\rangle$ is normalized, then the expansion coefficients in (1.3.7) must satisfy

$$\sum_{a'} |c_{a'}|^2 = \sum_{a'} |\langle a'|\alpha\rangle|^2 = 1. \qquad (1.3.13)$$
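The expansion (1.3.7)-(1.3.9) and the completeness relation (1.3.11) are easy to verify numerically; the following sketch (not part of the text) uses the eigenvectors of an arbitrary 2x2 Hermitian matrix as the orthonormal base kets:

```python
import numpy as np

# Orthonormal base kets: columns of the eigenvector matrix of an
# arbitrarily chosen Hermitian operator (any orthonormal basis works).
H = np.array([[1.0, 2.0 - 1.0j],
              [2.0 + 1.0j, -1.0]])
_, basis = np.linalg.eigh(H)   # basis[:, i] plays the role of |a^(i)>

# Completeness relation (1.3.11): sum over a' of |a'><a'| is the identity.
identity = sum(np.outer(basis[:, i], basis[:, i].conj()) for i in range(2))
assert np.allclose(identity, np.eye(2))

# Expand a normalized ket |alpha> with coefficients c_a' = <a'|alpha> (1.3.8).
alpha = np.array([0.6, 0.8j])
c = basis.conj().T @ alpha

# (1.3.13): the coefficients of a normalized ket satisfy sum |c_a'|^2 = 1.
total = np.sum(np.abs(c) ** 2)
assert np.isclose(total, 1.0)
print("sum |c|^2 =", total)
```

The reconstruction $\sum_{a'} |a'\rangle\langle a'|\alpha\rangle$ of course returns `alpha` itself, which is just (1.3.9) in matrix form.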
Let us now look at $|a'\rangle\langle a'|$ that appears in (1.3.11). Since this is an outer product, it must be an operator. Let it operate on $|\alpha\rangle$:

$$\left(|a'\rangle\langle a'|\right)\cdot|\alpha\rangle = |a'\rangle\langle a'|\alpha\rangle = c_{a'}|a'\rangle. \qquad (1.3.14)$$

We see that $|a'\rangle\langle a'|$ selects that portion of the ket $|\alpha\rangle$ parallel to $|a'\rangle$, so $|a'\rangle\langle a'|$ is known as the projection operator along the base ket $|a'\rangle$ and is denoted by $\Lambda_{a'}$:

$$\Lambda_{a'} \equiv |a'\rangle\langle a'|. \qquad (1.3.15)$$

The completeness relation (1.3.11) can now be written as

$$\sum_{a'} \Lambda_{a'} = 1. \qquad (1.3.16)$$

Matrix Representations

Having specified the base kets, we now show how to represent an operator, say X, by a square matrix. First, using (1.3.11) twice, we write the operator X as

$$X = \sum_{a''}\sum_{a'} |a''\rangle\langle a''|X|a'\rangle\langle a'|. \qquad (1.3.17)$$

There are altogether $N^2$ numbers of the form $\langle a''|X|a'\rangle$, where N is the dimensionality of the ket space. We may arrange them into an $N \times N$ square matrix such that the column and row indices appear as follows:

$$\langle a''|X|a'\rangle \qquad (a''\text{: row},\ a'\text{: column}). \qquad (1.3.18)$$

Explicitly we may write the matrix as

$$X \doteq \begin{pmatrix} \langle a^{(1)}|X|a^{(1)}\rangle & \langle a^{(1)}|X|a^{(2)}\rangle & \cdots \\ \langle a^{(2)}|X|a^{(1)}\rangle & \langle a^{(2)}|X|a^{(2)}\rangle & \cdots \\ \vdots & \vdots & \ddots \end{pmatrix}, \qquad (1.3.19)$$

where the symbol $\doteq$ stands for "is represented by."†

Using (1.2.38), we can write

$$\langle a''|X|a'\rangle = \langle a'|X^\dagger|a''\rangle^*. \qquad (1.3.20)$$

At last, the Hermitian adjoint operation, originally defined by (1.2.24), has been related to the (perhaps more familiar) concept of the complex conjugate transposed. If an operator B is Hermitian, we have

$$\langle a''|B|a'\rangle = \langle a'|B|a''\rangle^*. \qquad (1.3.21)$$

† We do not use the equality sign here because the particular form of a matrix representation depends on the particular choice of base kets used. The operator is different from a representation of the operator, just as the actress is different from a poster of the actress.
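A short numerical sketch (not part of the text, basis generated at random) shows the projection operators (1.3.15)-(1.3.16) and the adjoint relation (1.3.20) at work:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 3
# Orthonormal base kets |a^(i)>: columns of a random unitary matrix,
# obtained here from a QR decomposition (an illustrative choice).
Q, _ = np.linalg.qr(rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N)))

# Projection operators (1.3.15) and completeness (1.3.16).
Lambdas = [np.outer(Q[:, i], Q[:, i].conj()) for i in range(N)]
for L in Lambdas:
    assert np.allclose(L @ L, L)          # projectors are idempotent
assert np.allclose(sum(Lambdas), np.eye(N))

# Matrix representation (1.3.17)-(1.3.19): element (i, j) is <a^(i)|X|a^(j)>.
X = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
X_rep = Q.conj().T @ X @ Q

# (1.3.20): the representation of X† is the complex conjugate transpose
# of the representation of X.
Xdag_rep = Q.conj().T @ X.conj().T @ Q
assert np.allclose(Xdag_rep, X_rep.conj().T)
print("projection and adjoint checks passed")
```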
The way we arranged $\langle a''|X|a'\rangle$ into a square matrix is in conformity with the usual rule of matrix multiplication. To see this, just note that the matrix representation of the operator relation

$$Z = XY \qquad (1.3.22)$$

reads

$$\langle a''|Z|a'\rangle = \langle a''|XY|a'\rangle = \sum_{a'''} \langle a''|X|a'''\rangle\langle a'''|Y|a'\rangle. \qquad (1.3.23)$$

Again, all we have done is to insert the identity operator, written in form (1.3.11), between X and Y!

Let us now examine how the ket relation

$$|\gamma\rangle = X|\alpha\rangle \qquad (1.3.24)$$

can be represented using our base kets. The expansion coefficients of $|\gamma\rangle$ can be obtained by multiplying $\langle a'|$ on the left:

$$\langle a'|\gamma\rangle = \langle a'|X|\alpha\rangle = \sum_{a''} \langle a'|X|a''\rangle\langle a''|\alpha\rangle. \qquad (1.3.25)$$

But this can be seen as an application of the rule for multiplying a square matrix with a column matrix once the expansion coefficients of $|\alpha\rangle$ and $|\gamma\rangle$ arrange themselves to form column matrices as follows:

$$|\alpha\rangle \doteq \begin{pmatrix} \langle a^{(1)}|\alpha\rangle \\ \langle a^{(2)}|\alpha\rangle \\ \langle a^{(3)}|\alpha\rangle \\ \vdots \end{pmatrix}, \qquad |\gamma\rangle \doteq \begin{pmatrix} \langle a^{(1)}|\gamma\rangle \\ \langle a^{(2)}|\gamma\rangle \\ \langle a^{(3)}|\gamma\rangle \\ \vdots \end{pmatrix}. \qquad (1.3.26)$$

Likewise, given

$$\langle\gamma| = \langle\alpha|X, \qquad (1.3.27)$$

we can regard

$$\langle\gamma|a'\rangle = \sum_{a''} \langle\alpha|a''\rangle\langle a''|X|a'\rangle. \qquad (1.3.28)$$

So a bra is represented by a row matrix as follows:

$$\langle\gamma| \doteq \left(\langle\gamma|a^{(1)}\rangle, \langle\gamma|a^{(2)}\rangle, \langle\gamma|a^{(3)}\rangle, \ldots\right) = \left(\langle a^{(1)}|\gamma\rangle^*, \langle a^{(2)}|\gamma\rangle^*, \langle a^{(3)}|\gamma\rangle^*, \ldots\right). \qquad (1.3.29)$$

Note the appearance of complex conjugation when the elements of the column matrix are written as in (1.3.29). The inner product $\langle\beta|\alpha\rangle$ can be written as
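The correspondences (1.3.22)-(1.3.29), operator products as matrix products, kets as columns, and bras as conjugated rows, can be checked directly; the following is an illustrative sketch (not part of the text) with randomly generated operators:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 3
# Orthonormal base kets as columns of a random unitary matrix Q.
Q, _ = np.linalg.qr(rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N)))
X = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
Y = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))

def rep(op):
    """Matrix elements <a^(i)|op|a^(j)> in the chosen basis."""
    return Q.conj().T @ op @ Q

# (1.3.22)-(1.3.23): the representation of Z = XY is the matrix product.
assert np.allclose(rep(X @ Y), rep(X) @ rep(Y))

# (1.3.24)-(1.3.26): |gamma> = X|alpha> becomes column = matrix @ column.
alpha = rng.normal(size=N) + 1j * rng.normal(size=N)
alpha_col = Q.conj().T @ alpha           # column of <a^(i)|alpha>
gamma_col = Q.conj().T @ (X @ alpha)     # column of <a^(i)|gamma>
assert np.allclose(gamma_col, rep(X) @ alpha_col)

# (1.3.29): the bra <alpha| is the complex-conjugated row vector, and the
# inner product <alpha|alpha> is row times column.
alpha_row = alpha_col.conj()
assert np.isclose(alpha_row @ alpha_col, np.vdot(alpha, alpha))
print("matrix-representation checks passed")
```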
the product of the row matrix representing $\langle\beta|$ with the column matrix representing $|\alpha\rangle$:

$$\langle\beta|\alpha\rangle = \sum_{a'} \langle\beta|a'\rangle\langle a'|\alpha\rangle = \left(\langle a^{(1)}|\beta\rangle^*, \langle a^{(2)}|\beta\rangle^*, \ldots\right)\begin{pmatrix} \langle a^{(1)}|\alpha\rangle \\ \langle a^{(2)}|\alpha\rangle \\ \vdots \end{pmatrix}. \qquad (1.3.30)$$

If we multiply the row matrix representing $\langle\alpha|$ with the column matrix representing $|\beta\rangle$, then we obtain just the complex conjugate of the preceding expression, which is consistent with the fundamental property of the inner product (1.2.12). Finally, the matrix representation of the outer product $|\beta\rangle\langle\alpha|$ is easily seen to be

$$|\beta\rangle\langle\alpha| \doteq \begin{pmatrix} \langle a^{(1)}|\beta\rangle\langle a^{(1)}|\alpha\rangle^* & \langle a^{(1)}|\beta\rangle\langle a^{(2)}|\alpha\rangle^* & \cdots \\ \langle a^{(2)}|\beta\rangle\langle a^{(1)}|\alpha\rangle^* & \langle a^{(2)}|\beta\rangle\langle a^{(2)}|\alpha\rangle^* & \cdots \\ \vdots & \vdots & \ddots \end{pmatrix}. \qquad (1.3.31)$$

The matrix representation of an observable A becomes particularly simple if the eigenkets of A themselves are used as the base kets. First, we have

$$A = \sum_{a''}\sum_{a'} |a''\rangle\langle a''|A|a'\rangle\langle a'|. \qquad (1.3.32)$$

But the square matrix $\langle a''|A|a'\rangle$ is obviously diagonal,

$$\langle a''|A|a'\rangle = \langle a'|A|a'\rangle\,\delta_{a''a'} = a'\,\delta_{a''a'}, \qquad (1.3.33)$$

so

$$A = \sum_{a'} a'|a'\rangle\langle a'| = \sum_{a'} a'\Lambda_{a'}. \qquad (1.3.34)$$

Spin ½ Systems

It is here instructive to consider the special case of spin ½ systems. The base kets used are $|S_z;\pm\rangle$, which we denote, for brevity, as $|\pm\rangle$. The simplest operator in the ket space spanned by $|\pm\rangle$ is the identity operator, which, according to (1.3.11), can be written as

$$1 = |+\rangle\langle +| + |-\rangle\langle -|. \qquad (1.3.35)$$

According to (1.3.34), we must be able to write $S_z$ as

$$S_z = (\hbar/2)\left[\left(|+\rangle\langle +|\right) - \left(|-\rangle\langle -|\right)\right]. \qquad (1.3.36)$$
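The spectral form (1.3.34)-(1.3.36) is concrete enough to type in directly; here is a minimal sketch (not part of the text, working in units where $\hbar = 1$, an arbitrary choice) building $S_z$ from outer products of the base kets:

```python
import numpy as np

hbar = 1.0  # units where hbar = 1 (an arbitrary illustrative choice)

# Base kets |+> and |-> represented as column vectors.
ket_plus = np.array([1.0, 0.0])
ket_minus = np.array([0.0, 1.0])

# Identity operator (1.3.35): |+><+| + |-><-| = 1.
one = np.outer(ket_plus, ket_plus) + np.outer(ket_minus, ket_minus)
assert np.allclose(one, np.eye(2))

# S_z from the spectral form (1.3.36): eigenvalues +-hbar/2 times projectors.
Sz = (hbar / 2) * (np.outer(ket_plus, ket_plus)
                   - np.outer(ket_minus, ket_minus))

# |+> and |-> are indeed eigenkets with eigenvalues +hbar/2 and -hbar/2.
assert np.allclose(Sz @ ket_plus, (hbar / 2) * ket_plus)
assert np.allclose(Sz @ ket_minus, -(hbar / 2) * ket_minus)
print(Sz)  # the familiar diagonal matrix diag(hbar/2, -hbar/2)
```

In this basis $S_z$ is diagonal, exactly as (1.3.33) promises for an observable represented in its own eigenbasis.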