Journal of Mathematical Psychology 47 (2003) 90–100
doi:10.1016/S0022-2496(02)00028-7

Tutorial

Tutorial on maximum likelihood estimation

In Jae Myung*
Department of Psychology, Ohio State University, 1885 Neil Avenue Mall, Columbus, OH 43210-1222, USA
Received 30 November 2001; revised 16 October 2002

Abstract

In this paper, I provide a tutorial exposition on maximum likelihood estimation (MLE). The intended audience of this tutorial is researchers who practice mathematical modeling of cognition but are unfamiliar with the estimation method. Unlike least-squares estimation, which is primarily a descriptive tool, MLE is a preferred method of parameter estimation in statistics and is an indispensable tool for many statistical modeling techniques, in particular in non-linear modeling with non-normal data. The purpose of this paper is to provide a good conceptual explanation of the method with illustrative examples so the reader can have a grasp of some of the basic principles.
© 2003 Elsevier Science (USA). All rights reserved.

1. Introduction

In psychological science, we seek to uncover general laws and principles that govern the behavior under investigation. As these laws and principles are not directly observable, they are formulated in terms of hypotheses. In mathematical modeling, such hypotheses about the structure and inner working of the behavioral process of interest are stated in terms of parametric families of probability distributions called models. The goal of modeling is to deduce the form of the underlying process by testing the viability of such models.

Once a model is specified with its parameters, and data have been collected, one is in a position to evaluate its goodness of fit, that is, how well it fits the observed data. Goodness of fit is assessed by finding parameter values of a model that best fit the data, a procedure called parameter estimation.

There are two general methods of parameter estimation: least-squares estimation (LSE) and maximum likelihood estimation (MLE). The former has been a popular choice of model fitting in psychology (e.g., Rubin, Hinton, & Wenzel, 1999; Lamberts, 2000; but see Usher & McClelland, 2001) and is tied to many familiar statistical concepts such as linear regression, sum of squares error, proportion variance accounted for (i.e., r^2), and root mean squared deviation. LSE, which unlike MLE requires no or minimal distributional assumptions, is useful for obtaining a descriptive measure for the purpose of summarizing observed data, but it has no basis for testing hypotheses or constructing confidence intervals.

On the other hand, MLE is not as widely recognized among modelers in psychology, but it is a standard approach to parameter estimation and inference in statistics. MLE has many optimal properties in estimation: sufficiency (complete information about the parameter of interest contained in its MLE estimator); consistency (true parameter value that generated the data recovered asymptotically, i.e. for data of sufficiently large samples); efficiency (lowest-possible variance of parameter estimates achieved asymptotically); and parameterization invariance (same MLE solution obtained independent of the parametrization used). In contrast, no such things can be said about LSE. As such, most statisticians would not view LSE as a general method for parameter estimation, but rather as an approach that is primarily used with linear regression models. Further, many of the inference methods in statistics are developed based on MLE. For example, MLE is a prerequisite for the chi-square test, the G-square test, Bayesian methods, inference with missing data, modeling of random effects, and many model selection criteria such as the Akaike information criterion (Akaike, 1973) and the Bayesian information criterion (Schwarz, 1978).

*Fax: +614-292-5601. E-mail address: myung.1@osu.edu.
In this tutorial paper, I introduce the maximum likelihood estimation method for mathematical modeling. The paper is written for researchers who are primarily involved in empirical work and publish in experimental journals (e.g. Journal of Experimental Psychology) but do modeling. The paper is intended to serve as a stepping stone for the modeler to move beyond the current practice of using LSE to more informed modeling analyses, thereby expanding his or her repertoire of statistical instruments, especially in non-linear modeling. The purpose of the paper is to provide a good conceptual understanding of the method with concrete examples. For in-depth, technically more rigorous treatment of the topic, the reader is directed to other sources (e.g., Bickel & Doksum, 1977, Chap. 3; Casella & Berger, 2002, Chap. 7; DeGroot & Schervish, 2002, Chap. 6; Spanos, 1999, Chap. 13).

2. Model specification

2.1. Probability density function

From a statistical standpoint, the data vector y = (y_1, ..., y_m) is a random sample from an unknown population. The goal of data analysis is to identify the population that is most likely to have generated the sample. In statistics, each population is identified by a corresponding probability distribution. Associated with each probability distribution is a unique value of the model's parameter. As the parameter changes in value, different probability distributions are generated. Formally, a model is defined as the family of probability distributions indexed by the model's parameters.

Let f(y | w) denote the probability density function (PDF) that specifies the probability of observing the data vector y given the parameter w. Throughout this paper we will use a plain letter for a vector (e.g. y) and a letter with a subscript for a vector element (e.g. y_i). The parameter w = (w_1, ..., w_k) is a vector defined on a multi-dimensional parameter space. If individual observations, y_i's, are statistically independent of one another, then according to the theory of probability, the PDF for the data y = (y_1, ..., y_m) given the parameter vector w can be expressed as a multiplication of PDFs for individual observations,

f(y = (y_1, y_2, ..., y_m) | w) = f_1(y_1 | w) f_2(y_2 | w) ... f_m(y_m | w).   (1)

To illustrate the idea of a PDF, consider the simplest case with one observation and one parameter, that is, m = k = 1. Suppose that the data y represents the number of successes in a sequence of 10 Bernoulli trials (e.g. tossing a coin 10 times) and that the probability of a success on any one trial, represented by the parameter w, is 0.2. The PDF in this case is given by

f(y | n = 10, w = 0.2) = 10!/(y!(10-y)!) (0.2)^y (0.8)^{10-y}   (y = 0, 1, ..., 10),   (2)

which is known as the binomial distribution with parameters n = 10 and w = 0.2.

Fig. 1. Binomial probability distributions of sample size n = 10 and probability parameter w = 0.2 (top) and w = 0.7 (bottom).
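As a concrete illustration (not part of the original article; a minimal Python sketch using only the standard library), the binomial PDF of Eq. (2), together with its w = 0.7 counterpart plotted in Fig. 1, can be tabulated directly:

```python
from math import comb

def binomial_pmf(y, n, w):
    # Probability of y successes in n Bernoulli trials with success probability w.
    return comb(n, y) * w**y * (1 - w)**(n - y)

n = 10
for w in (0.2, 0.7):  # the two parameter values plotted in Fig. 1
    pmf = [round(binomial_pmf(y, n, w), 3) for y in range(n + 1)]
    print(f"w = {w}: {pmf}")
```

Each choice of w yields a different distribution over y = 0, 1, ..., 10, which is exactly the sense in which a model is a family of probability distributions indexed by its parameter.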
Note that the number of trials (n) is considered as a parameter. The shape of this PDF is shown in the top panel of Fig. 1. If the parameter value is changed to, say, w = 0.7, a new PDF is obtained as

f(y | n = 10, w = 0.7) = 10!/(y!(10-y)!) (0.7)^y (0.3)^{10-y}   (y = 0, 1, ..., 10),   (3)

whose shape is shown in the bottom panel of Fig. 1. The following is the general expression of the PDF of the binomial distribution for arbitrary values of w and n:

f(y | n, w) = n!/(y!(n-y)!) w^y (1-w)^{n-y}   (0 ≤ w ≤ 1; y = 0, 1, ..., n),   (4)

which as a function of y specifies the probability of data y for a given value of n and w. The collection of all such PDFs generated by varying the parameter across its range (0-1 in this case for w; n ≥ 1) defines a model.

2.2. Likelihood function

Given a set of parameter values, the corresponding PDF will show that some data are more probable than other data. In the previous example, the PDF with w = 0.2 shows that y = 2 is more likely to occur than y = 5 (0.302 vs. 0.026). In reality, however, we have already observed the data. Accordingly, we are faced with an inverse problem: Given the observed data and a model of interest, find the one PDF, among all the probability densities that the model prescribes, that is most likely to have produced the data. To solve this inverse problem, we define the likelihood function by reversing the roles of the data vector y and the parameter vector w in f(y | w), i.e.

L(w | y) = f(y | w).   (5)

Thus L(w | y) represents the likelihood of the parameter w given the observed data y, and as such is a function of w. For the one-parameter binomial example in Eq. (4), the likelihood function for y = 7 and n = 10 is given by

L(w | n = 10, y = 7) = f(y = 7 | n = 10, w) = 10!/(7! 3!) w^7 (1-w)^3   (0 ≤ w ≤ 1).   (6)

The shape of this likelihood function is shown in Fig. 2. There exists an important difference between the PDF f(y | w) and the likelihood function L(w | y). As illustrated in Figs. 1 and 2, the two functions are defined on different axes, and therefore are not directly comparable to each other. Specifically, the PDF in Fig. 1 is a function of the data given a particular set of parameter values, defined on the data scale. On the other hand, the likelihood function is a function of the parameter given a particular set of observed data, defined on the parameter scale. In short, Fig. 1 tells us the probability of a particular data value for a fixed parameter, whereas Fig. 2 tells us the likelihood ("unnormalized probability") of a particular parameter value for a fixed data set. Note that the likelihood function in this figure is a curve because there is only one parameter besides n, which is assumed to be known.

Fig. 2. The likelihood function given observed data y = 7 and sample size n = 10 for the one-parameter model described in the text.
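To make the distinction between the PDF and the likelihood function concrete, the following sketch (my illustration, not code from the article; standard library only) evaluates L(w | n = 10, y = 7) of Eq. (6) on a grid of parameter values, i.e. the curve of Fig. 2; even a crude grid search already locates its peak near w = 0.7.

```python
from math import comb

def likelihood(w, n=10, y=7):
    # L(w | n, y) = f(y | n, w) of Eq. (6), read as a function of the parameter w.
    return comb(n, y) * w**y * (1 - w)**(n - y)

grid = [i / 1000 for i in range(1001)]         # parameter values in [0, 1]
values = [likelihood(w) for w in grid]
w_best = grid[values.index(max(values))]
print(f"grid maximum near w = {w_best:.3f}, L = {max(values):.3f}")  # ~0.700 and ~0.267
```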
If the model has two parameters, the likelihood function will be a surface sitting above the parameter space. In general, for a model with k parameters, the likelihood function L(w | y) takes the shape of a k-dimensional geometrical "surface" sitting above a k-dimensional hyperplane spanned by the parameter vector w = (w_1, ..., w_k).

3. Maximum likelihood estimation

Once data have been collected and the likelihood function of a model given the data is determined, one is in a position to make statistical inferences about the population, that is, the probability distribution that underlies the data. Given that different parameter values index different probability distributions (Fig. 1), we are interested in finding the parameter value that corresponds to the desired probability distribution.

The principle of maximum likelihood estimation (MLE), originally developed by R.A. Fisher in the 1920s, states that the desired probability distribution is the one that makes the observed data "most likely," which means that one must seek the value of the parameter vector that maximizes the likelihood function L(w | y). The resulting parameter vector, which is sought by searching the multi-dimensional parameter space, is called the MLE estimate, and is denoted by w_{MLE} = (w_{1,MLE}, ..., w_{k,MLE}). For example, in Fig. 2, the MLE estimate is w_{MLE} = 0.7, for which the maximized likelihood value is L(w_{MLE} = 0.7 | n = 10, y = 7) = 0.267. The probability distribution corresponding to this MLE estimate is shown in the bottom panel of Fig. 1. According to the MLE principle, this is the population that is most likely to have generated the observed data of y = 7. To summarize, maximum likelihood estimation is a method to seek the probability distribution that makes the observed data most likely.

3.1. Likelihood equation

MLE estimates need not exist nor be unique. In this section, we show how to compute MLE estimates when they exist and are unique. For computational convenience, the MLE estimate is obtained by maximizing the log-likelihood function, ln L(w | y). This is because the two functions, ln L(w | y) and L(w | y), are monotonically related to each other, so the same MLE estimate is obtained by maximizing either one. Assuming that the log-likelihood function, ln L(w | y), is differentiable, if w_{MLE} exists, it must satisfy the following partial differential equation known as the likelihood equation:

∂ ln L(w | y) / ∂w_i = 0   (7)

at w_i = w_{i,MLE} for all i = 1, ..., k. This is because the definition of a maximum or minimum of a continuous differentiable function implies that its first derivatives vanish at such points.

The likelihood equation represents a necessary condition for the existence of an MLE estimate. An additional condition must also be satisfied to ensure that ln L(w | y) is a maximum and not a minimum, since the first derivative cannot reveal this. To be a maximum, the shape of the log-likelihood function should be concave (it must represent a peak, not a valley) in the neighborhood of w_{MLE}. This can be checked by calculating the second derivatives of the log-likelihoods and showing whether they are all negative at w_i = w_{i,MLE} for i = 1, ..., k (see footnote 1):

∂^2 ln L(w | y) / ∂w_i^2 < 0.   (8)

Footnote 1: Consider the Hessian matrix H(w) defined as H_{ij}(w) = ∂^2 ln L(w) / ∂w_i ∂w_j (i, j = 1, ..., k). Then a more accurate test of the concavity condition requires that H(w) be negative definite, that is, z' H(w = w_{MLE}) z < 0 for any k × 1 real-valued vector z, where z' denotes the transpose of z.

To illustrate the MLE procedure, let us again consider the previous one-parameter binomial example given a fixed value of n. First, by taking the logarithm of the likelihood function L(w | n = 10, y = 7) in Eq. (6), we obtain the log-likelihood as

ln L(w | n = 10, y = 7) = ln(10!/(7! 3!)) + 7 ln w + 3 ln(1 - w).   (9)

Next, the first derivative of the log-likelihood is calculated as

d ln L(w | n = 10, y = 7) / dw = 7/w - 3/(1 - w) = (7 - 10w) / (w(1 - w)).   (10)

By requiring this equation to be zero, the desired MLE estimate is obtained as w_{MLE} = 0.7. To make sure that the solution represents a maximum, not a minimum, the second derivative of the log-likelihood is calculated and evaluated at w = w_{MLE},

d^2 ln L(w | n = 10, y = 7) / dw^2 = -7/w^2 - 3/(1 - w)^2 = -47.62 < 0,   (11)

which is negative, as desired.
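The worked example of Eqs. (9)-(11) can also be checked numerically, in the spirit of the optimization approach discussed next. The sketch below is my own illustration and assumes SciPy is available: it minimizes the negative log-likelihood with a bounded scalar optimizer and then evaluates the closed-form second derivative at the solution, reproducing w_{MLE} = 0.7 and the curvature of about -47.62.

```python
from math import comb, log
from scipy.optimize import minimize_scalar

n, y = 10, 7

def neg_log_likelihood(w):
    # Negative of ln L(w | n = 10, y = 7) from Eq. (9); minimizing it maximizes the log-likelihood.
    return -(log(comb(n, y)) + y * log(w) + (n - y) * log(1 - w))

res = minimize_scalar(neg_log_likelihood, bounds=(1e-6, 1 - 1e-6), method="bounded")
w_mle = res.x
print(f"w_MLE = {w_mle:.4f}")                      # ~0.7000, as obtained analytically from Eq. (10)

# Second derivative of the log-likelihood at w_MLE, as in Eq. (11).
second_deriv = -y / w_mle**2 - (n - y) / (1 - w_mle)**2
print(f"second derivative = {second_deriv:.2f}")   # ~ -47.62 < 0, so the solution is a maximum
```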
In practice, however, it is usually not possible to obtain an analytic-form solution for the MLE estimate, especially when the model involves many parameters and its PDF is highly non-linear. In such situations, the MLE estimate must be sought numerically using non-linear optimization algorithms. The basic idea of non-linear optimization is to quickly find optimal parameters that maximize the log-likelihood. This is done by searching much smaller subsets of the multi-dimensional parameter space rather than exhaustively searching the whole parameter space, which becomes intractable as the number of parameters increases. The "intelligent" search proceeds by trial and error over the course of a series of iterative steps. Specifically, on each iteration, by taking into account the results from the previous iteration, a new set of parameter values is obtained by adding small changes to the previous parameters in such a way that the new parameters are likely to lead to improved performance. Different optimization algorithms differ in how this updating routine is conducted. The iterative process, as shown by a series of arrows in Fig. 3, continues until the parameters are judged to have converged (i.e., point B in Fig. 3) on the optimal set of parameters according to an appropriately predefined criterion. Examples of the stopping criterion include the maximum number of iterations allowed or the minimum amount of change in parameter values between two successive iterations.

3.2. Local maxima

It is worth noting that the optimization algorithm does not necessarily guarantee that a set of parameter values that uniquely maximizes the log-likelihood will be found. Finding optimum parameters is essentially a heuristic process in which the optimization algorithm tries to improve upon an initial set of parameters that is supplied by the user. Initial parameter values are chosen either at random or by guessing. Depending upon the choice of the initial parameter values, the algorithm could prematurely stop and return a sub-optimal set of parameter values. This is called the local maxima problem. As an example, note in Fig. 3 that although the starting parameter value at point a2 will lead to the optimal point B, called the global maximum, the starting parameter value at point a1 will lead to point A, which is a sub-optimal solution. Similarly, the starting parameter value at a3 will lead to another sub-optimal solution at point C.

Unfortunately, there exists no general solution to the local maxima problem. Instead, a variety of techniques have been developed in an attempt to avoid the problem, though there is no guarantee of their effectiveness. For example, one may choose different starting values over multiple runs of the iteration procedure and then examine the results to see whether the same solution is obtained repeatedly. When that happens, one can conclude with some confidence that a global maximum has been found (see footnote 2).

Footnote 2: A stochastic optimization algorithm known as simulated annealing (Kirkpatrick, Gelatt, & Vecchi, 1983) can overcome the local maxima problem, at least in theory, though the algorithm may not be a feasible option in practice as it may take an unrealistically long time to find the solution.

Fig. 3. A schematic plot of the log-likelihood function for a fictitious one-parameter model. Point B is the global maximum whereas points A and C are two local maxima. The series of arrows depicts an iterative optimization process.
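The restart heuristic described above is straightforward to implement. The sketch below is an illustrative pattern rather than the article's own code; it assumes NumPy and SciPy, and it reuses the one-parameter binomial log-likelihood as a stand-in for a model's log-likelihood (that example has a single maximum, so the restarts will simply agree, which is precisely the informal check suggested in the text).

```python
import numpy as np
from scipy.optimize import minimize

def log_lik(w, n=10, y=7):
    # Stand-in log-likelihood: the binomial example of Eq. (9), with the constant term dropped.
    return y * np.log(w[0]) + (n - y) * np.log(1 - w[0])

def multistart_mle(log_lik, bounds, n_starts=20, seed=0):
    # Run a local optimizer from several random starting points and keep the best solution found.
    rng = np.random.default_rng(seed)
    best = None
    for _ in range(n_starts):
        w0 = [rng.uniform(lo, hi) for lo, hi in bounds]            # random initial parameter values
        res = minimize(lambda w: -log_lik(w), w0, bounds=bounds)   # maximize by minimizing the negative
        if res.success and (best is None or res.fun < best.fun):
            best = res
    return best

best = multistart_mle(log_lik, bounds=[(1e-6, 1 - 1e-6)])
print(best.x)  # ~[0.7]; agreement across restarts raises confidence that the global maximum was found
```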