Dynamic Discrete Choice Models: Methods, Matlab Code, and Exercises

May 2022

This document supports the first computing sessions in a graduate course on dynamic discrete choice models. It is centered around basic Matlab code for solving, simulating, and empirically analyzing a simple model of firm entry and exit under uncertainty. This code is available from a public GitHub repository and can be distributed to students as starter code, for example using GitHub Classroom. Exercises ask students to adapt and extend this starter code to apply different and more advanced computational and econometric methods to a wider range of models.

 ✶ We thank Jeffrey Campbell for help with Komments++ and Nan Yang and our students in CentER Tilburg's Empirical Industrial Organization 2 (230323) for comments on earlier versions of this code. First draft: February 8, 2011. We welcome the use of this software under an MIT license. DOI: 10.5281/zenodo.4287733. © 2020 Jaap H. Abbring and Tobias J. Klein. § Department of Econometrics & OR, Tilburg University, P.O. Box 90153, 5000 LE Tilburg, The Netherlands. E-mail: jaap@abbring.org. Web: jaap.abbring.org. † Department of Econometrics & OR, Tilburg University, P.O. Box 90153, 5000 LE Tilburg, The Netherlands. E-mail: t.j.klein@uvt.nl. Web: www.tobiasklein.ws.

##### Viewing and Using this File

This file (dynamicDiscreteChoice.m.html) documents the code in the GitHub repository jabbring/dynamic-discrete-choice (a Zip archive packages both together). It was generated from the Matlab script dynamicDiscreteChoice.m using the Komments++ package, which was created and generously provided to us by Jeffrey R. Campbell. It documents how you can run the script dynamicDiscreteChoice.m with Matlab to specify, simulate, and estimate an empirical discrete-time version of the model of firm entry and exit under uncertainty by Dixit (1989). These computations will use, and therefore exemplify the use of, various tailor-made Matlab functions that are documented in this file. Thus, this file is also a guide to these functions and the way they can be adapted and used in your own exercises.

In Safari and Firefox, you can switch between the default view of this document, which displays the working code with all its documentation, and an alternative view that shows only the code by pressing “c”.

##### Software Requirements

Running the code documented in this file requires a recent version of Matlab with its Optimization Toolbox. The code can easily be adapted to use the Knitro solver instead of Matlab's Optimization Toolbox (see Section 5).

##### Implementations in Other Languages

A Julia implementation of this code by Rafael Greminger is available from the GitHub repository rgreminger/DDCModelsExample.jl.

The remainder of this document proceeds as follows. The next section covers two functions that define the decision problem, flowpayoffs and bellman. The versions of these functions handed out to the students, and documented here, define a very basic entry and exit problem with sunk costs and ongoing uncertainty. By changing these functions, different and more extensive decision problems can be studied with the procedures documented in later sections. Section 2 discusses fixedPoint, which solves for the optimal choice-specific expected discounted values (and therewith for the optimal decision rule) by finding the unique fixed point of the contraction mapping defined by bellman. Section 3 documents how simulateData can be used to simulate data for the decision problem set up by flowpayoffs and bellman. Section 4 documents functions for estimating the transition matrix $\Pi$ and for computing the log partial likelihood for conditional choice data. Section 5 documents the script that uses these functions to set up the model, simulate data, and estimate the model with Rust (1987)'s nested fixed point (NFXP) maximum likelihood method. Section 6 contains a range of student exercises. Appendix 1 provides mathematical background by discussing some theory of contractions. Appendix 2 documents procedures that are called by the main routines, but that are of limited substantive interest, such as randomDiscrete.

### 1 A Firm Entry and Exit Problem

Consider a simple, discrete-time version of the model of firm entry and exit under uncertainty in Dixit (1989). At each time $t\in\mathbb{N}$, a firm can choose to either serve a market (choice 1) or not (choice 0). The payoffs from either choice depend on the firm's choice $A_{t-1}$ in the previous period, because the firm may have to incur a sunk cost to enter or exit the market (for definiteness, suppose that the firm is initially inactive, $A_0=0$). Payoffs also depend on externally determined (choice-independent) observed scalar state variables $X_t$, as well as unobserved state variables $\varepsilon_{t}(0)$ and $\varepsilon_{t}(1)$ that are revealed to the firm before it makes its choice in period $t$. Specifically, in period $t$, its flow profits from choice 0 are


$u_0(X_t,A_{t-1})+\varepsilon_{t}(0)= -A_{t-1}\delta_0+\varepsilon_{t}(0),$

and its flow profits from choice 1 are


$u_1(X_t,A_{t-1})+\varepsilon_{t}(1)=\beta_0+\beta_1 X_t-(1-A_{t-1})\delta_1+\varepsilon_{t}(1).$

Note that $\delta_0$ and $\delta_1$ are exit and entry costs that the firm only pays if it changes state. Gross of these costs, an active firm makes profits $\beta_0+\beta_1 X_t+\varepsilon_{t}(1)$ and an inactive firm has profits $\varepsilon_{t}(0)$.

The state variable $X_t$ has finite support ${\cal X}\equiv\{x^1,\ldots,x^K\}$. From its random initial value $X_0$, it follows a first-order Markov chain with $K\times K$ transition probability matrix $\Pi$, with typical element $\Pi_{ij}=\Pr(X_{t+1}=x^j|X_t=x^i)$, independently of the firm's choices. The profit shocks $\varepsilon_{t}(a)$ are independent of $\{X_t\}$ and across time $t$ and choices $a$, with type I extreme value distributions centered around 0. Like $X_t$, they may affect but are not affected by the firm's choices. Thus, $X_t$ and $\varepsilon_t\equiv[\varepsilon_t(0),\varepsilon_t(1)]$ are “exogenous” state variables of which the evolution is externally specified. Consequently, the firm controls the evolution of the state $(X_t,A_{t-1},\varepsilon_t)$ only through its choice of $A_{t-1}$.

The firm has rational expectations about future states and choices. It chooses the action $A_t$ that maximizes its expected flow of profits, discounted at a factor $\rho<1$.

Two functions, flowpayoffs and bellman, together code up this simple model. If you wish to experiment with other functional forms for the flow profits, you should edit flowpayoffs.m. The model's dynamic specification can be changed in bellman.m.

#### 1.1 Flow Payoffs

The function flowpayoffs computes the mean (over $\varepsilon$) flow payoffs, $u_0(x,a)$ and $u_1(x,a)$, for each pair $(x,a)\in{\cal X}\times\{0,1\}$ of profit state and past choice.

```matlab
% flowpayoffs.m
function [u0,u1] = flowpayoffs(supportX,beta,delta)
```

It requires the following input arguments:

{supportX} a $K\times 1$ vector with the support points of the profit state $X_t$ (the elements of $\cal{X}$, consistently ordered with the Markov transition matrix $\Pi$);
{beta} a $2\times 1$ vector that contains the intercept ($\beta_0$) and profit state slope ($\beta_1$) of the net payoffs to choice $1$; and
{delta} a $2\times 1$ vector that contains the firm's exit ($\delta_0$) and entry ($\delta_1$) costs.

It returns

{u0} a $K\times 2$ matrix of which the $(i,j)$th entry is $u_0(x^i,j-1)$ and
{u1} a $K\times 2$ matrix of which the $(i,j)$th entry is $u_1(x^i,j-1)$.

That is, the rows correspond to the support points of $X_t$, and the columns to the choice in the previous period, $A_{t-1}$.

The function flowpayoffs first stores the number $K$ of elements of supportX in a scalar nSuppX.

```matlab
nSuppX = size(supportX,1);
```

Then, it constructs a $K\times 2$ matrix u0 with the value of


$\left[\begin{array}{ccc} u_0(x^1,0)&~~~&u_0(x^1,1)\\ \cdot&~~~&\cdot\\ \cdot&~~~&\cdot\\ \cdot&~~~&\cdot\\ u_0(x^{K},0)&~~~&u_0(x^{K},1) \end{array}\right] = \left[\begin{array}{ccc} 0&~~~&-\delta_0\\ \cdot&~~~&\cdot\\ \cdot&~~~&\cdot\\ \cdot&~~~&\cdot\\ 0&~~~&-\delta_0 \end{array}\right]$

and a $K\times 2$ matrix u1 with the value of


$\left[\begin{array}{ccc} u_1(x^1,0)&~~~&u_1(x^1,1)\\ \cdot&~~~&\cdot\\ \cdot&~~~&\cdot\\ \cdot&~~~&\cdot\\ u_1(x^K,0)&~~~&u_1(x^{K},1) \end{array}\right] = \left[\begin{array}{ccc} \beta_0+\beta_1 x^1-\delta_1&~~~&\beta_0+\beta_1 x^1\\ \cdot&~~~&\cdot\\ \cdot&~~~&\cdot\\ \cdot&~~~&\cdot\\ \beta_0+\beta_1 x^{K}-\delta_1&~~~&\beta_0+\beta_1 x^{K} \end{array}\right].$

```matlab
u0 = [zeros(nSuppX,1) -delta(1)*ones(nSuppX,1)];
u1 = [ones(nSuppX,1) supportX]*beta*[1 1]-delta(2)*ones(nSuppX,1)*[1 0];
```

You can change the specification of the flow profits by editing these two lines of code.
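As a cross-check on this vectorized construction, the same two matrices can be built in NumPy (a hypothetical Python translation for illustration only, not part of the repository; the names mirror the Matlab inputs):

```python
import numpy as np

def flow_payoffs(support_x, beta, delta):
    """Mean flow payoffs u0, u1 as K x 2 arrays; columns index the past choice (0, 1)."""
    support_x = np.asarray(support_x, dtype=float)
    K = support_x.size
    # Choice 0 pays nothing, except the exit cost delta[0] when the firm was active.
    u0 = np.column_stack([np.zeros(K), -delta[0] * np.ones(K)])
    # Choice 1 pays beta[0] + beta[1]*x, minus the entry cost delta[1] for entrants.
    base = beta[0] + beta[1] * support_x
    u1 = np.column_stack([base - delta[1], base])
    return u0, u1
```

For instance, with $\delta_1=1$ the two columns of u1 differ by exactly the entry cost.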

#### 1.2 Bellman Operator

The expected discounted profits net of $\varepsilon_{t}(a)$ immediately following choice $a$ at time $t$, $U_a(X_t,A_{t-1})$, satisfy a recursive system $U_0=\Psi_0(U_0,U_1)$ and $U_1=\Psi_1(U_0,U_1)$ or, with $U\equiv(U_0,U_1)$ and $\Psi\equiv(\Psi_0,\Psi_1)$, simply $U=\Psi(U)$ (see e.g. Rust (1994)). Here, $\Psi$ is a Bellman-like operator that embodies the model's dynamic specification and that depends on the flow payoffs $u_0$ and $u_1$, the Markov transition matrix $\Pi$, and the discount factor $\rho$. Its elements are the operators $\Psi_0$ and $\Psi_1$, with $\Psi_a$ implicitly defined by the right hand side of

(1)

$\label{eq:bellman} U_a(X_t,A_{t-1})=u_a(X_t,A_{t-1})+\rho\mathbb{E}\left[R_a(X_{t+1})|X_t\right],$

where $R_a(x)\equiv\mathbb{E}\left[\max\{U_0(x,a)+\varepsilon_{t+1}(0),U_1(x,a)+\varepsilon_{t+1}(1)\}\right]$ is McFadden (1981)'s social surplus for the binary choice problem with utilities $U_0(x,a)+\varepsilon_{t+1}(0)$ and $U_1(x,a)+\varepsilon_{t+1}(1)$. The first term in the right-hand side of (1) equals period $t$'s flow profits following choice $a$, net of $\varepsilon_t(a)$. The second term equals the expected discounted profits from continuing into period $t+1$ with choice $a$ (note that, given the choice $a$, these continuation payoffs do not depend on the past choice $A_{t-1}$). The (mean-zero) extreme value assumptions on $\varepsilon_{t+1}(0)$ and $\varepsilon_{t+1}(1)$ imply that

(2)

$\label{eq:surplus} R_a(x)=\log\left\{\exp\left[U_0(x,a)\right]+\exp\left[U_1(x,a)\right]\right\}.$

The function bellman applies the operator $\Psi$ once to input values of $U$ and returns $\Psi(U)$.

```matlab
% bellman.m
function [capU0,capU1] = bellman(capU0,capU1,u0,u1,capPi,rho)
```

It requires the following input arguments:

{capU0} a $K\times 2$ matrix of which the $(i,j)$th entry is $U_0(x^i,j-1)$;
{capU1} a $K\times 2$ matrix of which the $(i,j)$th entry is $U_1(x^i,j-1)$;
{u0} a $K\times 2$ matrix of which the $(i,j)$th entry is $u_0(x^i,j-1)$;
{u1} a $K\times 2$ matrix of which the $(i,j)$th entry is $u_1(x^i,j-1)$;
{capPi} the $K\times K$ Markov transition matrix $\Pi$ for $\{X_t\}$, with typical element $\Pi_{ij}=\Pr(X_{t+1}=x^j|X_t=x^i)$; and
{rho} a scalar with the value of the discount factor $\rho$.

It returns

{capU0} a $K\times 2$ matrix of which the $(i,j)$th entry is $U_0(x^i,j-1)$;
{capU1} a $K\times 2$ matrix of which the $(i,j)$th entry is $U_1(x^i,j-1)$.

To this end, bellman first computes the surpluses $R_0(x)$ and $R_1(x)$ in (2) for all $x\in\cal{X}$ and stacks these in $K\times 1$ vectors r0 and r1.

```matlab
r0 = log(exp(capU0(:,1))+exp(capU1(:,1)));
r1 = log(exp(capU0(:,2))+exp(capU1(:,2)));
```

Then, it applies (1) to compute new values of capU0 and capU1.

```matlab
capU0 = u0 + rho*capPi*r0*[1 1];
capU1 = u1 + rho*capPi*r1*[1 1];
```

Here, the conditional expectation over $X_{t+1}$ in (1) is taken by premultiplying the vectors r0 and r1 by the Markov transition matrix capPi. The vectors r0 and r1 are postmultiplied by [1 1] because the surpluses, and therefore the continuation payoffs, are independent of the past choice that indexes the columns of capU0 and capU1.

The logit assumption only affects the operator $\Psi$, and therefore the function bellman, through the specification of the surpluses $R_0(x)$ and $R_1(x)$ in (2). If you want to change the logit assumption, you should change the computation of r0 and r1 (and make sure to adapt the computation of choice probabilities and inverse choice probabilities elsewhere as well).
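For readers who prefer to see the operator spelled out, here is a hypothetical NumPy sketch of one application of $\Psi$; unlike bellman.m, it evaluates the log-sum-exp in (2) in a numerically stabilized form (an addition of ours, not a feature of the Matlab code):

```python
import numpy as np

def bellman_step(U0, U1, u0, u1, Pi, rho):
    """One application of Psi under the type I extreme value (logit) assumption."""
    def logsumexp_pair(a, b):
        # Stable evaluation of log(exp(a) + exp(b)).
        m = np.maximum(a, b)
        return m + np.log(np.exp(a - m) + np.exp(b - m))

    # Surpluses R_a(x), one K-vector per past choice a (columns of U0, U1).
    r0 = logsumexp_pair(U0[:, 0], U1[:, 0])
    r1 = logsumexp_pair(U0[:, 1], U1[:, 1])
    # U_a = u_a + rho * E[R_a(X')|X]; the continuation value is the same in both columns.
    return u0 + rho * (Pi @ r0)[:, None], u1 + rho * (Pi @ r1)[:, None]
```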

### 2 Solving the Decision Problem

The function fixedPoint computes the fixed point $U$ of the Bellman-like operator $\Psi$, using the method of successive approximations. It is easy to show that $\Psi$ is a contraction, so that the method of successive approximations converges globally, at a linear rate; iterations are stopped once successive values of $U$ are within a positive maximum absolute distance tolFixedPoint of each other (see Appendix 1).

```matlab
% fixedPoint.m
function [capU0,capU1] = fixedPoint(u0,u1,capPi,rho,tolFixedPoint,bellman,capU0,capU1)
```

It requires the following input arguments:

{u0} a $K\times 2$ matrix of which the $(i,j)$th entry is $u_0(x^i,j-1)$;
{u1} a $K\times 2$ matrix of which the $(i,j)$th entry is $u_1(x^i,j-1)$;
{capPi} the $K\times K$ Markov transition matrix $\Pi$ for $\{X_t\}$, with typical element $\Pi_{ij}=\Pr(X_{t+1}=x^j|X_t=x^i)$;
{rho} a scalar with the value of the discount factor $\rho$;
{tolFixedPoint} a scalar tolerance level that is used to determine convergence of the successive approximations;
{bellman} the handle to the function [capU0,capU1] = bellman(capU0,capU1,u0,u1,capPi,rho) that iterates once on $\Psi$;
{capU0} a $K\times 2$ matrix of which the $(i,j)$th entry is a starting value for $U_0(x^i,j-1)$ (optional; set to [] to select default starting value);
{capU1} a $K\times 2$ matrix of which the $(i,j)$th entry is a starting value for $U_1(x^i,j-1)$ (optional; set to [] to select default starting value).

It returns

{capU0} a $K\times 2$ matrix of which the $(i,j)$th entry is $U_0(x^i,j-1)$; and
{capU1} a $K\times 2$ matrix of which the $(i,j)$th entry is $U_1(x^i,j-1)$.

The function fixedPoint first stores the number $K$ of states in a scalar nSuppX, taking it from the dimension of capPi (supportX itself is not among its inputs).

```matlab
nSuppX = size(capPi,1);
```

The starting values for $U_0$ and $U_1$ are set to $0$ if the input arguments capU0 and capU1 are empty.

```matlab
if isempty(capU0)
    capU0 = zeros(nSuppX,2);
end
if isempty(capU1)
    capU1 = zeros(nSuppX,2);
end
```

The $K\times 2$ matrices inU0 and inU1 store the values of $U$ that are fed into the operator $\Psi$, for comparison with the value of $\Psi(U)$ that is subsequently stored in capU0 and capU1. They are initialized to deviate from capU0 and capU1 by more than tolFixedPoint, so that the while statement allows at least one iteration of $\Psi$, and stops as soon as $\max\{\max|\Psi_0(U)-U_0|,\max|\Psi_1(U)-U_1|\}$ no longer exceeds the tolerance level in tolFixedPoint.

```matlab
inU0 = capU0+2*tolFixedPoint;
inU1 = capU1+2*tolFixedPoint;
while (max(max(abs(inU0-capU0)))>tolFixedPoint) || (max(max(abs(inU1-capU1)))>tolFixedPoint)
    inU0 = capU0;
    inU1 = capU1;
    [capU0,capU1] = bellman(inU0,inU1,u0,u1,capPi,rho);
end
```

You can replace fixedPoint by another function if you want to use alternative methods, such as Newton methods, for computing the fixed point $U$, or if you want to work with finite horizon problems.
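A self-contained sketch of the same successive-approximations loop in Python (hypothetical; it inlines the logit Bellman update rather than taking a function handle, and uses the default zero starting values):

```python
import numpy as np

def solve_fixed_point(u0, u1, Pi, rho, tol=1e-10):
    """Iterate U <- Psi(U) until the sup-norm change is at most tol."""
    U0, U1 = np.zeros_like(u0), np.zeros_like(u1)
    while True:
        # Surpluses R_a(x) under the logit assumption, one column per past choice.
        r0 = np.log(np.exp(U0[:, 0]) + np.exp(U1[:, 0]))
        r1 = np.log(np.exp(U0[:, 1]) + np.exp(U1[:, 1]))
        new_U0 = u0 + rho * (Pi @ r0)[:, None]
        new_U1 = u1 + rho * (Pi @ r1)[:, None]
        if max(np.abs(new_U0 - U0).max(), np.abs(new_U1 - U1).max()) <= tol:
            return new_U0, new_U1
        U0, U1 = new_U0, new_U1
```

Because $\Psi$ is a contraction with modulus $\rho$, the loop needs on the order of $\log(\text{tol})/\log(\rho)$ iterations.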

### 3 Data Simulation

Suppose that we have computed $\Delta U(x,a)\equiv U_1(x,a)-U_0(x,a)$ for all $(x,a)\in{\cal X}\times\{0,1\}$, using e.g. flowpayoffs and fixedPoint, and that we have specified the Markov transition matrix $\Pi$. Then, the function simulateData can be used to simulate $N$ independent histories $\{(X_1,A_1),\ldots,(X_T,A_T)\}$, taking $A_0=0$ as given and drawing $X_1$ from the stationary distribution of $\{X_t\}$.

```matlab
% simulateData.m
function [choices,iX] = simulateData(deltaU,capPi,nPeriods,nFirms)
```

The function simulateData requires the following input arguments:

{deltaU} a $K\times 2$ matrix of which the $(i,j)$th entry is $\Delta U(x^i,j-1)$;
{capPi} the $K\times K$ Markov transition matrix $\Pi$ for $\{X_t\}$, with typical element $\Pi_{ij}=\Pr(X_{t+1}=x^j|X_t=x^i)$;
{nPeriods} the scalar number $T$ of time periods to simulate data for; and
{nFirms} the scalar number $N$ of firms to simulate data for.

It returns

{choices} a $T\times N$ matrix with simulated choices, with each column containing an independent simulation of $(A_1,\ldots,A_T)$; and
{iX} a $T\times N$ matrix with simulated states, with each column containing the indices (in ${\cal X}$) of an independent simulation of $(X_1,\ldots,X_T)$. For example, if $X_1=x^3$ for some firm, then 3, not $x^3$, is stored in iX.

The function simulateData first stores the number $K$ of states in a scalar nSuppX, taking it from the dimension of capPi.

```matlab
nSuppX = size(capPi,1);
```

Next, it assumes that $\{X_t\}$ is ergodic and initializes the simulation by drawing $X_1$, $N$ independent times, from the stationary distribution of $\{X_t\}$. To this end, it first solves


$\left[\begin{array}{cccc} 1-\Pi_{11} &-\Pi_{21} &\cdots &-\Pi_{K1}\\ -\Pi_{12} &1-\Pi_{22}&\cdots &-\Pi_{K2}\\ \vdots & &\ddots &\vdots\\ -\Pi_{1(K-1)}&\cdots &~~1-\Pi_{(K-1)(K-1)}~~&-\Pi_{K(K-1)}\\ 1 &\cdots &1 &1 \end{array}\right]P^\infty= \left(\begin{array}{c} 0\\0\\ \vdots\\ 0\\1 \end{array}\right)$

for the $K\times 1$ vector $P^\infty$ with the stationary probabilities $P^\infty_k=\lim_{t\rightarrow\infty}\Pr(X_t=x^k)$ and stores the result in pInf.

```matlab
oneMinPi = eye(nSuppX)-capPi';
pInf = [oneMinPi(1:nSuppX-1,:);ones(1,nSuppX)]\[zeros(nSuppX-1,1);1];
```
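The same stationary-distribution computation in NumPy (a hypothetical sketch; it replaces the last, redundant, equation of $(I-\Pi')P^\infty=0$ by the adding-up constraint, exactly as above):

```python
import numpy as np

def stationary_distribution(Pi):
    """Solve for P with (I - Pi')P = 0 and sum(P) = 1, assuming {X_t} is ergodic."""
    K = Pi.shape[0]
    A = np.eye(K) - Pi.T
    A[-1, :] = 1.0            # replace the redundant last equation by sum(P) = 1
    b = np.zeros(K)
    b[-1] = 1.0
    return np.linalg.solve(A, b)
```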

Then, it uses the auxiliary function randomDiscrete (see Appendix 2) and the values stored in pInf to simulate a $1\times N$ vector of values of $X_1$ from the stationary distribution $P^\infty$ and stores their indices in iX.

```matlab
iX = randomDiscrete(pInf*ones(1,nFirms));
```

Using these $N$ simulated values of $X_1$, and $N$ simulated values of $-\Delta\varepsilon_1\equiv\varepsilon_1(0)-\varepsilon_1(1)$ that are stored in deltaEpsilon, it simulates $N$ values of the first choice by using that $A_1=1$ if $\Delta U(X_1,0)>-\Delta\varepsilon_1$ and $A_1=0$ otherwise. These are stored in the $1\times N$ vector choices.

```matlab
deltaEpsilon = random('ev',zeros(1,nFirms),ones(1,nFirms))-random('ev',zeros(1,nFirms),ones(1,nFirms));
choices = deltaU(iX,1)' > deltaEpsilon;
```

Finally, recursively for $t=2,\ldots,T$: $N$ values of $X_t$ are simulated, using the transition matrix $\Pi$ and randomDiscrete, and their indices are added as a row at the bottom of the $(t-1)\times N$ matrix iX; then $N$ values of $A_t$ are simulated, using that $A_t=1$ if $\Delta U(X_t,A_{t-1})>-\Delta\varepsilon_t$ and $A_t=0$ otherwise, and stored as a row at the bottom of the $(t-1)\times N$ matrix choices.

```matlab
for t = 2:nPeriods
    iX = [iX;randomDiscrete(capPi(iX(end,:),:)')];
    deltaEpsilon = random('ev',zeros(1,nFirms),ones(1,nFirms))-random('ev',zeros(1,nFirms),ones(1,nFirms));
    choices = [choices;(deltaU(iX(end,:)+nSuppX*choices(end,:)) > deltaEpsilon)];
end
```
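The recursion can be mirrored in Python for a single firm (a hypothetical sketch, not the repository code; random_discrete stands in for the auxiliary randomDiscrete and samples by inverting the cumulative distribution, and the difference of two independent type I EV draws plays the role of $-\Delta\varepsilon_t$):

```python
import numpy as np

rng = np.random.default_rng(12345)

def random_discrete(p):
    """Draw a 0-based index from the discrete distribution p by inverting its CDF."""
    return int(np.searchsorted(np.cumsum(p), rng.uniform()))

def simulate_firm(deltaU, Pi, p_inf, T):
    """Simulate one firm's (state index, choice) path; A_0 = 0, X_1 ~ p_inf."""
    i = random_discrete(p_inf)   # X_1 from the stationary distribution
    a = 0                        # A_0 = 0: the firm starts out inactive
    path = []
    for _ in range(T):
        d_eps = rng.gumbel() - rng.gumbel()   # eps_t(0) - eps_t(1)
        a = int(deltaU[i, a] > d_eps)         # A_t = 1 iff Delta U(X_t, A_{t-1}) > eps_t(0) - eps_t(1)
        path.append((i, a))
        i = random_discrete(Pi[i])            # draw X_{t+1} given X_t
    return path
```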

### 4 Estimation

Suppose that we have a random sample $\left\{\left[(x_{1n},a_{1n}),\ldots,(x_{Tn},a_{Tn})\right]; n=1,\ldots,N\right\}$ from the population of state and choice histories $\{(X_1,A_1),\ldots,(X_T,A_T)\}$. Following Rust (1994), we provide functions for implementing a two-stage procedure in which, first, the state transition matrix $\Pi$ is estimated directly from observed state transitions and, second, the remaining parameters are estimated by maximizing the partial likelihood for the observed choices.

#### 4.1 First Stage: State Transitions

The function estimatePi computes a frequency estimate of the Markov transition matrix $\Pi$ from state transition data.

```matlab
% estimatePi.m
function piHat = estimatePi(iX,nSuppX)
```

It requires the following input arguments:

{iX} a $T\times N$ matrix with indices of observed states $x_{tn}$ in ${\cal X}$ (for example, if $x_{11}=x^3$, then the first element of iX is 3, not $x^3$);
{nSuppX} the scalar number $K$ of support points of the profit state $X_t$ (the number of elements of $\cal{X}$)

and returns

{piHat} an estimate $\hat\Pi$ of the $K\times K$ Markov transition matrix $\Pi$ for $\{X_t\}$, with typical element $\hat\Pi_{ij}$ equal to the sample frequency of transitions to state $j$ among all transitions from state $i$.

The function estimatePi first stores the number of time periods $T$ in a scalar nPeriods.

```matlab
nPeriods = size(iX,1);
```

Then, for each pair $(i,j)\in\{1,\ldots,K\}\times\{1,\ldots,K\}$, it estimates the probability $\Pi_{ij}=\Pr(X_{t+1}=x^j|X_t=x^i)$ by the appropriate sample frequency, the number of transitions from $i$ to $j$ divided by the total number of transitions from $i$ in the data iX.

```matlab
for i=1:nSuppX
    for j=1:nSuppX
        piHat(i,j) = sum(sum((iX(2:nPeriods,:)==j)&(iX(1:nPeriods-1,:)==i)))/sum(sum((iX(1:nPeriods-1,:)==i)));
    end
end
```

Note that estimatePi requires a positive number of transition observations from each state. More generally, the frequency estimator that it implements only performs well with samples that are large relative to the state space. With relatively small samples, the frequency estimator should be replaced by one that smooths across support points.
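In Python, the frequency estimator, together with a simple add-$\lambda$ smoothed variant of the kind the previous paragraph alludes to, might look as follows (a hypothetical sketch; iX holds 1-based state indices as in the Matlab code):

```python
import numpy as np

def estimate_pi(iX, K, smoothing=0.0):
    """Frequency estimate of Pi from a T x N panel of 1-based state indices.

    smoothing > 0 adds that pseudo-count to every cell (add-lambda smoothing),
    which keeps the estimate well defined even for states with few transitions."""
    counts = np.full((K, K), smoothing)
    for col in np.asarray(iX).T:                  # loop over firms
        for s, s_next in zip(col[:-1], col[1:]):  # consecutive observed states
            counts[s - 1, s_next - 1] += 1.0
    return counts / counts.sum(axis=1, keepdims=True)
```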

#### 4.2 Second Stage: Choices

The function negLogLik computes minus the log partial likelihood for the conditional choice part of the data. Optionally, it also returns minus the corresponding score vector and an estimate of the information matrix for the parameter (sub)vector $\theta\equiv(\beta_0,\beta_1,\delta_1)'$ (the scores are specific to the estimation example in Section 5's script and should be adapted for inference on other parameters).

```matlab
% negLogLik.m
function [nll,negScore,informationMatrix] = ...
    negLogLik(choices,iX,supportX,capPi,beta,delta,rho,flowpayoffs,bellman,fixedPoint,tolFixedPoint)
```

The function negLogLik requires the following input arguments:

{choices} a $T\times N$ matrix with choice observations $a_{tn}$;
{iX} a $T\times N$ matrix with indices of observed states $x_{tn}$ in ${\cal X}$ (for example, if $x_{11}=x^3$, then the first element of iX is 3, not $x^3$);
{supportX} a $K\times 1$ vector with the support points of the profit state $X_t$ (the elements of $\cal{X}$, consistently ordered with the Markov transition matrix $\Pi$);
{capPi} the (possibly estimated) $K\times K$ Markov transition matrix $\Pi$ for $\{X_t\}$, with typical element $\Pi_{ij}=\Pr(X_{t+1}=x^j|X_t=x^i)$;
{beta} a $2\times 1$ vector that contains the intercept ($\beta_0$) and profit state slope ($\beta_1$) of the flow payoffs to choice $1$;
{delta} a $2\times 1$ vector that contains the firm's exit ($\delta_0$) and entry ($\delta_1$) costs;
{rho} a scalar with the value of the discount factor $\rho$;
{flowpayoffs} a handle of a function [u0,u1]=flowpayoffs(supportX,beta,delta) that computes the mean flow payoffs $u_0$ and $u_1$;
{bellman} a handle of a function [capU0,capU1] = bellman(capU0,capU1,u0,u1,capPi,rho) that iterates once on $\Psi$;
{fixedPoint} a handle of a function [capU0,capU1] = fixedPoint(u0,u1,capPi,rho,tolFixedPoint,bellman,capU0,capU1) that computes the fixed point $U$ of $\Psi$; and
{tolFixedPoint} a scalar tolerance level that is used to determine convergence of the successive approximations of the fixed point $U$ of $\Psi$.

It returns

{nll} a scalar with minus the log partial likelihood for the conditional choices

and optionally

{negScore} a $3\times 1$ vector with minus the partial likelihood score for $\theta$ and
{informationMatrix} a $3\times 3$ matrix with the sum of the $N$ outer products of the individual contributions to the score for $\theta$.

The function negLogLik first stores the number $K$ of elements of supportX in a scalar nSuppX.

```matlab
nSuppX = size(supportX,1);
```

Next, it computes the flow payoffs $u_0$ (u0) and $u_1$ (u1), the choice-specific net expected discounted values $U_0$ (capU0) and $U_1$ (capU1), their contrast $\Delta U$ (deltaU), and the implied probabilities $1/\left[1+\exp(\Delta U)\right]$ of not serving the market (pExit) for the given parameter values. Note that this implements the inner loop of the NFXP procedure.

```matlab
[u0,u1] = flowpayoffs(supportX,beta,delta);
[capU0,capU1] = fixedPoint(u0,u1,capPi,rho,tolFixedPoint,bellman,[],[]);
deltaU = capU1-capU0;
pExit = 1./(1+exp(deltaU));
```

##### Log Partial Likelihood

The contribution to the likelihood of firm $n$'s choice in period $t$ is the conditional choice probability


$p(a_{tn}|x_{tn},a_{(t-1)n})=a_{tn}+\frac{1-2 a_{tn} }{1+\exp\left[\Delta U(x_{tn},a_{(t-1)n})\right]},$

with $a_{0n}=0$. The function negLogLik first computes these probabilities for each firm $n$ and period $t$ and stores them in a $T\times N$ matrix p. Then, it returns minus the sum of their logs, the log partial likelihood for the conditional choices, in nll.

```matlab
laggedChoices = [zeros(1,size(choices,2));choices(1:end-1,:)];
p = choices + (1-2*choices).*pExit(iX+nSuppX*laggedChoices);
nll = -sum(sum(log(p)));
```
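A hypothetical NumPy version of this likelihood evaluation, given a precomputed pExit array and with iX again holding 1-based indices:

```python
import numpy as np

def neg_log_lik_choices(choices, iX, p_exit):
    """Minus the log partial likelihood of a T x N panel of choices.

    p_exit[i, a] = 1/(1 + exp(deltaU(x^(i+1), a))) is the probability of choice 0."""
    T, N = choices.shape
    lagged = np.vstack([np.zeros((1, N), dtype=int), choices[:-1]])
    pe = p_exit[iX - 1, lagged]             # P(choice 0 | state, lagged choice)
    p = choices + (1 - 2 * choices) * pe    # probability of the observed choice
    return -np.log(p).sum()
```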

##### Score

If two or more output arguments are demanded from negLogLik, it computes and returns minus the partial likelihood score for $\theta$ (the derivative of minus the log partial likelihood with respect to $\theta$), in negScore.

```matlab
if nargout>=2
```

Firm $n$'s contribution to the score equals


$\frac{\partial\log\left[\prod_{t=1}^Tp(a_{tn}|x_{tn},a_{(t-1)n})\right]}{\partial \theta} = -\sum_{t=1}^T\left(1-2 a_{tn}\right)\left[1-p(a_{tn}|x_{tn},a_{(t-1)n})\right] \frac{\partial\Delta U(x_{tn},a_{(t-1)n})}{\partial\theta}.$

Its calculation requires that we compute $\partial\Delta U/\partial\theta$. Recall that $U$, and therewith $\Delta U$, is only implicitly given by $U=\Psi(U)$. Note that $U=(U_0,U_1)$ is defined on a set with $2K$ points, so that $U$ can be represented by a $4K\times 1$ vector and $\Psi$ by a mapping from $\mathbb{R}^{4K}$ into $\mathbb{R}^{4K}$. Specifically, $U$ can be represented by the $4K\times 1$ vector $\bar U$ that lists the values of $U_0$ and $U_1$ on their domain,


$\bar U=\left[U_0(x^1,0),\ldots,U_0(x^K,0),U_0(x^1,1),\ldots,U_0(x^K,1),U_1(x^1,0),\ldots,U_1(x^K,0),U_1(x^1,1),\ldots,U_1(x^K,1)\right]',$

and that satisfies


$\bar U= \tilde\Psi_\theta(\bar U),$

with $\tilde\Psi_\theta:\mathbb{R}^{4K}\rightarrow\mathbb{R}^{4K}$ an appropriately rearranged version of $\Psi$ (note that we made its dependence on $\theta$ explicit). With this alternative representation of $U$ and $\Psi$ in place, we can solve


$\left[I_{4K} - \frac{\partial \tilde \Psi_\theta(\bar U)}{\partial\bar U'}\right]\frac{\partial\bar U}{\partial \theta'} = \frac{\partial \tilde\Psi_\theta(\bar U)}{\partial\theta'}$

for $\partial\bar U/\partial\theta'$ (see also Rust (1994), p. 3110), where $I_i$ is an $i\times i$ identity matrix;


$\frac{\partial \tilde\Psi_\theta(\bar U)}{\partial\bar U'}= \left(\begin{array}{cccc} ~D_{00}~ &~O_K~ &~D_{10}~ &~O_K~\\ ~D_{00}~ &~O_K~ &~D_{10}~ &~O_K~\\ ~O_K~ &~D_{01}~ &~O_K~ &~D_{11}~\\ ~O_K~ &~D_{01}~ &~O_K~ &~D_{11}~ \end{array}\right),$

with


$D_{a'a}=\rho\Pi\left[\begin{array}{cccc} p(a'|x^1,a) &0 &\cdots &0\\ 0 &p(a'|x^2,a)& &\vdots\\ \vdots & &\ddots &0\\ 0 &\cdots &0 &p(a'|x^K,a) \end{array}\right],$

and $O_i$ is an $i\times i$ matrix of zeros; and


$\frac{\partial \tilde \Psi_\theta(\bar U)}{\partial\theta'}= \left[ \begin{array}{cccc} ~0~ & ~0~ & ~0~\\ &\cdot&\\ &\cdot&\\ &\cdot&\\ ~0~ & ~0~ & ~0~\\ ~0~ & ~0~ & ~0~\\ &\cdot&\\ &\cdot&\\ &\cdot&\\ ~0~ & ~0~ & ~0~\\ ~1~ & ~x^1~ & ~-1~\\ &\cdot&\\ &\cdot&\\ &\cdot&\\ ~1~ & ~x^K~ & ~-1~\\ ~1~ & ~x^1~ & ~0~\\ &\cdot&\\ &\cdot&\\ &\cdot&\\ ~1~ & ~x^K~ & ~0~ \end{array} \right].$
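The solve in the display above is an ordinary linear system. A generic NumPy illustration (with small placeholder matrices, not the model's actual derivatives) confirms the mechanics:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 6, 3                                # stand-ins for 4K and dim(theta)
J = 0.1 * rng.random((n, n))               # placeholder for dPsi/dUbar' (small norm, like a contraction)
B = rng.random((n, k))                     # placeholder for dPsi/dtheta'
# Solve [I - J] X = B for X = dUbar/dtheta'.
X = np.linalg.solve(np.eye(n) - J, B)
# The solution satisfies the implicit-function fixed-point relation X = J X + B.
residual = X - (J @ X + B)
```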

The function negLogLik first computes $D_{00}$ (d00), $D_{01}$ (d01), $D_{10}$ (d10), $D_{11}$ (d11) and constructs $\partial \tilde\Psi_\theta(\bar U)/\partial\bar U'$ (dPsi_dUbar).

```matlab
d00 = rho*capPi*diag(pExit(:,1));
d01 = rho*capPi*diag(pExit(:,2));
d10 = rho*capPi-d00;
d11 = rho*capPi-d01;
dPsi_dUbar = [[d00;d00;zeros(2*nSuppX,nSuppX)] [zeros(2*nSuppX,nSuppX);d01;d01] ...
              [d10;d10;zeros(2*nSuppX,nSuppX)] [zeros(2*nSuppX,nSuppX);d11;d11]];
```

It then computes $\partial\tilde\Psi_\theta(\bar U)/\partial\theta'$ (dPsi_dTheta; note that the following line is the only code in negLogLik that needs to be changed if other parameters than those in $\theta$ are estimated).

```matlab
dPsi_dTheta = [[zeros(2*nSuppX,1);ones(2*nSuppX,1)] [zeros(2*nSuppX,1);supportX;supportX] [zeros(2*nSuppX,1);-ones(nSuppX,1);zeros(nSuppX,1)]];
```

Next, it computes $\partial\bar U/\partial\theta'$ (dUbar_dTheta) and $\partial\Delta U/\partial\theta'$ (dDeltaU_dTheta).

```matlab
dUbar_dTheta = (eye(4*nSuppX)-dPsi_dUbar)\dPsi_dTheta;
dDeltaU_dTheta = dUbar_dTheta(2*nSuppX+1:4*nSuppX,:)-dUbar_dTheta(1:2*nSuppX,:);
```

Finally, it computes the $1\times 3$ vector $-\partial\log\left[\prod_{t=1}^T p(a_{tn}|x_{tn},a_{(t-1)n})\right]/\partial \theta'$ for each $n$, stacks these individual (minus) score contributions in the $N\times 3$ matrix negFirmScores, and sums them to compute minus the score vector, negScore.

```matlab
nTheta = size(dUbar_dTheta,2);
negFirmScores = repmat((1-2*choices).*(1-p),[1 1 nTheta]);
for i=1:nTheta
    negFirmScores(:,:,i) = negFirmScores(:,:,i).*dDeltaU_dTheta(iX+nSuppX*laggedChoices+2*(i-1)*nSuppX);
end
negFirmScores = squeeze(sum(negFirmScores,1));
negScore = sum(negFirmScores)';
end
```

##### Information Matrix

If three output arguments are demanded, negLogLik computes the sum of the $N$ outer products of the individual score contributions,


$\sum_{n=1}^N\frac{\partial\log\left[\prod_{t=1}^Tp(a_{tn}|x_{tn},a_{(t-1)n})\right]}{\partial\theta}\cdot\frac{\partial\log\left[\prod_{t=1}^Tp(a_{tn}|x_{tn},a_{(t-1)n})\right]}{\partial\theta'},$

and returns it in informationMatrix.

```matlab
if nargout==3
    informationMatrix = zeros(nTheta,nTheta);
    for n=1:size(negFirmScores,1)
        informationMatrix = informationMatrix + negFirmScores(n,:)'*negFirmScores(n,:);
    end
end
```

When evaluated at an estimate of $\theta$, this provides an estimate of the expected partial likelihood information matrix for $\theta$. This estimate can in turn be used to estimate the variance-covariance matrix, and thus the standard errors, of the maximum partial likelihood estimator $\hat\theta$ of $\theta$.
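Concretely, the standard errors come from the inverse of the estimated information matrix; a hypothetical helper (not part of the Matlab code) makes the last step explicit:

```python
import numpy as np

def standard_errors(information_matrix):
    """Standard errors of theta-hat: square roots of the diagonal of I(theta)^(-1)."""
    return np.sqrt(np.diag(np.linalg.inv(information_matrix)))
```

With the outer-product estimate above evaluated at $\hat\theta$, standard_errors returns one standard error per element of $\theta$.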

### 5 The Script that Puts It All Together

The script in dynamicDiscreteChoice.m simulates data and computes maximum likelihood estimates using the nested fixed point (NFXP) method of Rust (1987) and Rust (1994). It takes ${\cal X}$, $\delta_0$, and $\rho$ to be known, either takes $\Pi$ to be known or estimates it in a first stage, and focuses on maximum partial likelihood estimation of the remaining parameters, $\beta_0$, $\beta_1$, and $\delta_1$, from conditional choice probabilities.

#### 5.1 Simulating Data

First, we set the number of time periods (nPeriods) and firms (nFirms) that we would like to have in our sample.

```matlab
nPeriods = 100
nFirms = 1000
```

We also set the tolerance tolFixedPoint on the fixed point $U$ of $\Psi$ that we will use to determine the simulation's entry and exit rules. This same tolerance will also be used when solving the model in the inner loop of the NFXP procedure.

 dynamicDiscreteChoice.m, line 81:

    tolFixedPoint = 1e-10

Next, we specify the values of the model's parameters used in the simulation:

{nSuppX} the scalar number $K$ of elements of ${\cal X}$;
{supportX} the $K\times 1$ vector ${\cal X}$ with the support points of $X_t$;
{capPi} the $K\times K$ Markov transition matrix $\Pi$ for $\{X_t\}$, with typical element $\Pi_{ij}=\Pr(X_{t+1}=x^j|X_t=x^i)$;
{beta} the $2\times 1$ vector $\beta$ with the parameters of the flow profit of active firms;
{delta} the $2\times 1$ vector of exit and entry costs $\delta$; and
{rho} the scalar discount factor $\rho$.

 dynamicDiscreteChoice.m, lines 93–99:

    nSuppX = 5;
    supportX = (1:nSuppX)'
    capPi = 1./(1+abs(ones(nSuppX,1)*(1:nSuppX)-(1:nSuppX)'*ones(1,nSuppX)));
    capPi = capPi./(sum(capPi')'*ones(1,nSuppX))
    beta = [-0.1*nSuppX;0.2]
    delta = [0;1]
    rho = 0.95

For these parameter values, we compute the flow payoffs $u_0$ (u0) and $u_1$ (u1), the choice-specific expected discounted values $U_0$ (capU0) and $U_1$ (capU1), and their contrast $\Delta U$ (deltaU).

 dynamicDiscreteChoice.m, lines 103–105:

    [u0,u1] = flowpayoffs(supportX,beta,delta);
    [capU0,capU1] = fixedPoint(u0,u1,capPi,rho,tolFixedPoint,@bellman,[],[]);
    deltaU = capU1-capU0;

With $\Delta U$ computed, and $\Pi$ specified, we proceed to simulate a $T\times N$ matrix of choices choices and a $T\times N$ matrix of states iX (recall from Section 3 that iX contains indices that point to elements of ${\cal X}$ rather than those values themselves).

 dynamicDiscreteChoice.m, line 109:

    [choices,iX] = simulateData(deltaU,capPi,nPeriods,nFirms);

#### 5.2 Nested Fixed Point Maximum Likelihood Estimation

First, suppose that $\Pi$ is known. We use fmincon from Matlab's Optimization Toolbox to maximize the partial likelihood for the choices (the code can easily be adapted to use other optimizers and packages, because these have very similar syntax; see below). Because fmincon is a minimizer, we use minus the log likelihood as its objective. The function negLogLik computes this objective, but it has input arguments other than the vector of model parameters to be estimated. Because the syntax of fmincon does not allow this, we define a function handle objectiveFunction to an anonymous function that equals negLogLik but does not have these extra inputs.

 dynamicDiscreteChoice.m, lines 115–116:

    objectiveFunction = @(parameters)negLogLik(choices,iX,supportX,capPi,parameters(1:2),[delta(1);parameters(3)],...
        rho,@flowpayoffs,@bellman,@fixedPoint,tolFixedPoint)

Before we can put fmincon to work on this objective function, we have to set some of its other input arguments. We specify a $3\times 1$ vector startvalues with starting values for the parameters to be estimated, $(\beta_0,\beta_1,\delta_1)'$.

 dynamicDiscreteChoice.m, line 120:

    startvalues = [-1;-0.1;0.5];

We also set a lower bound of 0 on the third parameter, $\delta_1$, and (nonbinding) lower bounds of $-\infty$ on the other two parameters (lowerBounds). There is no need to specify upper bounds.[2]

 dynamicDiscreteChoice.m, lines 124–125:

    lowerBounds = -Inf*ones(size(startvalues));
    lowerBounds(3) = 0;

Finally, we pass some options, including the tolerances that specify the outer loop's convergence criterion, to fmincon through the structure OptimizerOptions (recall that we have already set the inner loop tolerance in tolFixedPoint). We use the function optimset from the Optimization Toolbox to assign values to specific fields (options) in OptimizerOptions and then call fmincon to run the NFXP maximum likelihood procedure (to use Knitro instead, simply replace fmincon by knitromatlab, knitrolink, or ktrlink, depending on the packages installed[3]).

 dynamicDiscreteChoice.m, lines 129–131:

    OptimizerOptions = optimset('Display','iter','Algorithm','interior-point','AlwaysHonorConstraints','bounds',...
        'GradObj','on','TolFun',1E-6,'TolX',1E-10,'DerivativeCheck','off','TypicalX',[beta;delta(2)]);
    [maxLikEstimates,~,exitflag] = fmincon(objectiveFunction,startvalues,[],[],[],[],lowerBounds,[],[],OptimizerOptions)

This gives maximum partial likelihood estimates of $(\beta_0,\beta_1,\delta_1)$. To calculate standard errors, we call negLogLik once more to estimate the corresponding Fisher information matrix and store this in informationMatrix. Its inverse is an estimate of the maximum likelihood estimator's asymptotic variance-covariance matrix.

 dynamicDiscreteChoice.m, lines 135–136:

    [~,~,informationMatrix] = objectiveFunction(maxLikEstimates);
    standardErrors = sqrt(diag(inv(informationMatrix)));

The resulting parameter estimates and standard errors are displayed (third and fourth columns), together with the parameters' true (first column) and starting values (second column).

 dynamicDiscreteChoice.m, lines 140–143:

    disp('Summary of Results');
    disp('--------------------------------------------');
    disp('    true     start     estim     ste.');
    disp([[beta;delta(2)] startvalues maxLikEstimates standardErrors]);

#### 5.3 Extension to an Unknown Markov Transition Matrix for the State

Finally, consider the more realistic case that $\Pi$ is not known. In this case, Rust (1994) suggests a two-stage procedure. In the first stage, we estimate $\Pi$ using estimatePi and store the results in a $K\times K$ matrix piHat.

 dynamicDiscreteChoice.m, line 148:

    piHat = estimatePi(iX,nSuppX)

In the second stage, maximum partial likelihood estimates of $(\beta_0,\beta_1,\delta_1)$ can be computed using the NFXP procedure, with piHat replacing capPi. This, together with the question of how the first-stage sampling error affects the precision of the second-stage estimator of $(\beta_0,\beta_1,\delta_1)$, is left for the exercises.
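The first-stage frequency estimator of the transition matrix, in the spirit of estimatePi, can be sketched in Python (a hypothetical translation using 0-based state indices):

```python
import numpy as np

def estimate_transition_matrix(states, n_states):
    """Frequency estimator of a Markov transition matrix.

    states: (T, N) integer array of state indices in {0, ..., n_states-1},
            one column per firm (a Python analogue of iX, which is 1-based).
    Returns the n_states x n_states matrix whose (i, j) entry is the share
    of transitions out of state i that land in state j.
    """
    counts = np.zeros((n_states, n_states))
    for i in range(n_states):
        for j in range(n_states):
            counts[i, j] = np.sum((states[:-1] == i) & (states[1:] == j))
    return counts / counts.sum(axis=1, keepdims=True)

# Example: one chain visiting 0 -> 1 -> 0 -> 0 -> 1
states = np.array([[0], [1], [0], [0], [1]])
pi_hat = estimate_transition_matrix(states, 2)
```

Out of state 0 the chain moves 0→1, 0→0, 0→1, so the first row of `pi_hat` is $(1/3,2/3)$; out of state 1 it moves 1→0 once, so the second row is $(1,0)$.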

### 6 Student Exercises

#### 6.1 Theoretical Exercises

##### Theoretical Exercise 1

Prove that $\Psi$ is a contraction.

##### Theoretical Exercise 2

Is $\rho$ identified in the example model? What about $\delta_0$? (To the extent that these are identified, you may want to extend your numerical experiments below to include their estimation.) See Abbring (2010) for results and references.

##### Theoretical Exercise 3

Show the equivalence of policy iteration and the Newton-Kantorovich method for solving $U-\Psi(U)=0$ for $U$ (see Appendices 1.3 and 1.5). Read about the properties of policy iteration and discuss what these imply about the convergence properties of the Newton-Kantorovich method. Does policy iteration converge in a finite number of steps? Why (not)?

#### 6.2 Computational Exercises

The computational exercises typically ask you to modify the code so that it can handle alternative econometric procedures and models. This often requires adapting the analytic computation of the derivatives of the log likelihood. It may be convenient to first (or only) work with numerical derivatives by setting the option "GradObj" in fmincon (or its alternative) to "off"; this allows you to find the estimates without using the analytical gradient in the optimization step. Once you have coded up the analytical derivatives (in order to use them in the optimization step and to obtain an estimate of the standard errors), you are advised to verify them against numerical derivatives, for example by setting the option "DerivativeCheck" in fmincon to "on" (but make sure to set this option back to "off" once you have verified the derivatives and start using the code for repeated, e.g. Monte Carlo, calculations).

##### Computational Exercise 1

Extend the script in dynamicDiscreteChoice.m with a single simulation to a Monte Carlo experiment in which estimates are computed for a sequence of simulated data sets (for now, as in dynamicDiscreteChoice.m, take $\Pi$ to be known). Keep track of the estimates and report statistics such as their means and standard deviations. Compare the latter to asymptotic standard errors. Make sure to set the seed of the random number generator so that your numerical results can be replicated. Note that you can also use this Monte Carlo setup in later questions to study the finite-sample performance of the various procedures studied.
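A minimal skeleton of such a seeded Monte Carlo loop, with toy stand-ins for the simulation and estimation steps (in the exercise these would wrap simulateData and the NFXP call from dynamicDiscreteChoice.m), might look as follows in Python:

```python
import numpy as np

def simulate(rng, theta_true, n):
    """Toy data generating process: theta_true plus standard normal noise."""
    return theta_true + rng.standard_normal(n)

def estimate(data):
    """Toy estimator: the sample mean."""
    return data.mean()

def monte_carlo(n_reps, theta_true=1.0, n=500, seed=20220501):
    """Run n_reps simulate-estimate replications with a fixed seed, so
    that the whole experiment can be replicated exactly."""
    rng = np.random.default_rng(seed)
    estimates = np.array([estimate(simulate(rng, theta_true, n))
                          for _ in range(n_reps)])
    return estimates.mean(), estimates.std()

mean_hat, std_hat = monte_carlo(n_reps=200)
```

Because one generator instance is seeded once and reused across replications, rerunning `monte_carlo` with the same seed reproduces the experiment exactly, while each replication still gets fresh draws.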

##### Computational Exercise 2

For the case of estimating the demand for differentiated products using NFXP maximum likelihood, as in Berry, Levinsohn, and Pakes (1995), Dubé, Fox, and Su (2012) claim that it is important to carefully balance the convergence tolerances used in the inner (contraction) loop and the outer (likelihood maximization) loop. In particular, they argue that the inner loop needs to be solved to a much higher precision than the tolerance used in the outer loop. Experiment with the values of tolFixedPoint on the one hand and the tolerances in the TolX and TolFun fields of OptimizerOptions on the other to investigate this issue in the context of this package's firm entry and exit problem.

##### Computational Exercise 3

Implement the MPEC approach to maximum likelihood estimation of our structural model, as advocated by Su and Judd (2012).

• First, modify negLogLik so that it takes $(U_0,U_1)$ or $\Delta U$ as an input argument (instead of solving the model for them) and computes the log (partial) likelihood directly from the choice probabilities implied by $\Delta U$.
• Then, extend the script in dynamicDiscreteChoice.m so that it alternatively maximizes the log likelihood with respect to both the parameters of interest and the values of $\Delta U$, subject to the constraint on $\Delta U$ implied by $U=\Psi(U)$ (which can be specified using the function bellman).

Would you expect the NFXP and MPEC approaches to give the same estimates of the parameters of interest (up to numerical precision)? How do the two procedures compare in numerical performance? Which one is faster? Compare your results to those in Iskhakov et al. (2016) and explain.

##### Computational Exercise 4

Compute the two-stage maximum partial likelihood estimates of $(\beta_0,\beta_1,\delta_1)$ for the case that $\Pi$ is not known. Are the estimators of the standard errors that assume $\Pi$ known consistent if $\Pi$ is estimated instead? Read Rust (1994) and explain why the two-stage estimator of $(\beta_0,\beta_1,\delta_1)$ is not efficient. The full information maximum likelihood (FIML) estimator estimates all parameters, including the ones in $\Pi$, simultaneously by maximizing the full likelihood for the observed state transitions and choices. Write an alternative for negLogLik that codes up this likelihood function and adapt the estimation script so that it obtains the FIML estimates and their estimated standard errors. Do not code up the gradient and do not use it in the likelihood maximization. Perform a Monte Carlo study of the two-stage and FIML estimators of $(\beta_0,\beta_1,\delta_1)$. For both estimators, report the means and standard deviations of the estimates of $(\beta_0,\beta_1,\delta_1)$ and the average estimates of the standard errors across Monte Carlo simulations. Compare and discuss. Finally, discuss the three-stage approach suggested by Rust (1994).

##### Computational Exercise 5

Implement a simulation-based two-step estimator along the lines of Hotz et al. (1994), Rust (1994), and Bajari, Benkard, and Levin (2007).

• Add a new function for nonparametrically estimating the choice probabilities $p(a|X_t,A_{t-1})\equiv\Pr(A_t=a|X_t,A_{t-1})$ over the support of $(X_t,A_{t-1})$. With enough data and a finite state space, you can simply use the conditional sample frequency.
• Add (to this same function or in a new function) a procedure for estimating $\Delta U$ by inverting the estimated choice probabilities (as proposed by Hotz and Miller (1993) and Hotz et al. (1994)).
• Add a new function for simulating a given number of state (including $\varepsilon_t$) and choice paths of some given length from each state observed in the data and for each first choice that can be made at the beginning of the path. Make use of the linearity of the flow utility in the parameters to reduce the dimensionality of the objects returned by this function and needed for constructing the objective in the next step.
• Adapt negLogLik into an objective function that takes nonparametric estimates of $p$ or $\Delta U$ and (a relevant summary of) simulated states and choices as inputs. As possible objectives to be minimized, implement both a weighted distance between the nonparametric estimates of $\Delta U$ and the simulated values of $\Delta U$, and minus a log pseudo-likelihood based on the choice probabilities implied by the simulated values of $\Delta U$. When doing so, exploit the linearity of the flow utilities in the parameters.
• Finally, extend the script in dynamicDiscreteChoice.m so that it successively calls these functions to implement a two step procedure for estimating the model, as in Hotz and Miller (1993) and Hotz et al. (1994). Analyze the numerical and statistical performance of this procedure with both of the possible objective functions implemented and compare. Discuss the theoretical relation between both approaches; see Pesendorfer and Schmidt-Dengler (2008).

See the course slides for a brief and general description of this procedure and Rust (1994) for a detailed algorithm, with discussion. Note that the algorithm described in Rust (1994) is similar to that of Bajari, Benkard, and Levin (2007) for games that we will discuss later in the course. Both build on the ideas of Hotz et al. (1994), who use the special logit structure to simplify the simulation (see the discussion in Bajari, Benkard, and Levin (2007)).
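For the extreme value errors used here, the inversion step of Hotz and Miller (1993) has a closed form: $\Delta U(x,a)=\log p(1|x,a)-\log p(0|x,a)$. A minimal numpy sketch of this logit inversion:

```python
import numpy as np

def hotz_miller_inversion(p1):
    """Invert logit choice probabilities into value contrasts.

    With i.i.d. type 1 extreme value errors, p(1|x, a) equals the logit of
    dU(x, a), so dU = log p(1|x, a) - log p(0|x, a). p1 holds estimated
    probabilities of choosing 1, state by state.
    """
    p1 = np.asarray(p1, dtype=float)
    return np.log(p1) - np.log1p(-p1)   # log p1 - log(1 - p1)

dU = hotz_miller_inversion([0.5, 0.75])
```

A probability of $0.5$ inverts to a contrast of $0$, and $0.75$ inverts to $\log 3$; `np.log1p(-p1)` computes $\log(1-p_1)$ accurately for probabilities near one.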

##### Computational Exercise 6

Extend the code for simulation and NFXP estimation so that it can handle a finite number of unobserved types, as in the work of Eckstein and Wolpin (1990), Keane and Wolpin (1997), and Eckstein and Wolpin (1999). Specifically, suppose that there are two types in the population, one with entry cost $\delta_1^1$ and the other with entry cost $\delta_1^2$. The firms are distributed across both unobserved entry cost types independently from all other variables in the model. We would like to estimate $\beta_0,\beta_1,\delta_1^1,\delta_1^2$, and the share of agents with entry cost $\delta_1^1$. To this end:

• Extend simulateData so that it randomly draws an entry cost from a distribution with two points of support, $\delta_1^1$ and $\delta_1^2$, and simulates choice data corresponding to the entry cost drawn, for each firm, independently across firms.
• Extend negLogLik so that it computes each firm's likelihood contribution as the mixture (expectation) of the likelihood contributions for each entry cost type (which are given by the current specification of the likelihood for homogeneous firms) over the distribution of these two types in the population of firms.
• Finally, extend the script in dynamicDiscreteChoice.m so that it successively calls these functions. Check whether you can recover the entry cost distribution, and the other parameters, well.
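The mixture construction in the second bullet — mixing on the likelihood scale, firm by firm, before taking logs — can be sketched as follows (Python; the per-type log likelihood contributions below are hypothetical numbers, standing in for the homogeneous-firm likelihood evaluated at $\delta_1^1$ and $\delta_1^2$):

```python
import numpy as np

def mixture_neg_log_lik(firm_log_liks, type_prob):
    """Negative log likelihood with two unobserved entry-cost types.

    firm_log_liks: (N, 2) array; column k holds each firm's log likelihood
    contribution computed as if all firms were of type k.
    type_prob: population share of type 1.
    """
    weights = np.array([type_prob, 1.0 - type_prob])
    # Mix on the likelihood (not log likelihood) scale, firm by firm
    firm_liks = np.exp(firm_log_liks) @ weights
    return -np.sum(np.log(firm_liks))

# Two firms, hypothetical per-type log likelihood contributions
ll = np.array([[np.log(0.2), np.log(0.4)],
               [np.log(0.5), np.log(0.1)]])
nll = mixture_neg_log_lik(ll, 0.5)
```

With equal type shares, both firms' mixed likelihood contributions equal $0.3$, so the objective is $-2\log 0.3$. Note the order of operations: exponentiate each type-specific log likelihood, average across types per firm, then take logs and sum across firms.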

##### Computational Exercise 7

Implement a version of the two-step estimator of Hotz et al. (1994) that similarly allows for a finite number of unobserved types. To this end, combine this estimator with the EM algorithm as in Arcidiacono and Miller (2011) (see also Arcidiacono and Ellickson (2011)).

##### Computational Exercise 8

Perform a Monte Carlo study of all the estimators that you have implemented. Evaluate and compare their numerical performance as measured in terms of convergence success and computation time; and their finite sample statistical performance in terms of, among other things, mean squared errors. Experiment with different choices of the numerical design parameters, such as the inner and outer loop tolerances used in the NFXP procedure.

##### Computational Exercise 9

Extend fixedPoint so that it has the option to compute $U$ using the Newton-Kantorovich method instead of successive approximations, or a hybrid method that combines both (as in Rust (1987)). Note that, with a finite state space as in our example, the Newton-Kantorovich method reduces to the Newton-Raphson method and requires the Jacobian $\partial\tilde\Psi_\theta(\bar U)/\partial\bar U'$ computed in an intermediate step of negLogLik. Explore the numerical performance of the various methods. What is your preferred method?

• Abbring, Jaap H. (2010). Identification of dynamic discrete choice models. Annual Review of Economics 2, 367-394.
• Aguirregabiria, Victor and Pedro Mira (2002). Swapping the nested fixed point algorithm: A class of estimators for discrete Markov decision models. Econometrica 70(4), 1519-1543.
• Arcidiacono, Peter and Paul B. Ellickson (2011). Practical methods for estimation of dynamic discrete choice models. Annual Review of Economics 3, 363-394.
• Arcidiacono, Peter and David A. Miller (2011). Conditional choice probability estimation of dynamic discrete choice models with unobserved heterogeneity. Econometrica 79, 1823-1867.
• Bajari, Patrick, Lanier Benkard, and Jonathan Levin (2007). Estimating dynamic models of imperfect competition. Econometrica 75(5), 1331-1370.
• Berry, Steven, James Levinsohn, and Ariel Pakes (1995). Automobile prices in market equilibrium. Econometrica 63, 841-890.
• Dixit, Avinash K. (1989). Entry and exit decisions under uncertainty. Journal of Political Economy 97(3), 620-638.
• Dubé, Jean-Pierre, Jeremy T. Fox, and Che-Lin Su (2012). Improving the numerical performance of static and dynamic aggregate discrete choice random coefficients demand estimation. Econometrica 80, 2231-2267.
• Eckstein, Zvi and Kenneth I. Wolpin (1990). Estimating a market equilibrium search model from panel data on individuals. Econometrica 58(4), 783-808.
• Eckstein, Zvi and Kenneth I. Wolpin (1999). Why youths drop out of high school: The impact of preferences, opportunities, and abilities. Econometrica 67(6), 1295-1339.
• Hotz, V. Joseph and David A. Miller (1993). Conditional choice probabilities and the estimation of dynamic models. Review of Economic Studies 60, 497-529.
• Hotz, V. Joseph, David A. Miller, Seth Sanders and Jeffrey Smith (1994). A simulation estimator for dynamic models of discrete choice. Review of Economic Studies 61(2), 265-289.
• Iskhakov, Fedor, Jinhyuk Lee, John Rust, Bertel Schjerning and Kyoungwon Seo (2016). Comment on "Constrained optimization approaches to estimation of structural models". Econometrica 84(1), 365-370.
• Judd, Kenneth L. (1998). Numerical Methods in Economics. MIT Press. Cambridge, MA.
• Su, Che-Lin and Kenneth L. Judd (2012). Constrained optimization approaches to estimation of structural models. Econometrica 80(5), 2213-2230.
• Keane, Michael P. and Kenneth I. Wolpin (1997). The career decisions of young men. Journal of Political Economy 105(3), 473-522.
• Ljungqvist, Lars and Thomas J. Sargent (2000). Recursive Macroeconomic Theory, Second edition. MIT Press. Cambridge, MA.
• McFadden, Daniel (1981). Econometric models of probabilistic choice. In C. Manski and D. McFadden (Eds.). Discrete Data with Econometric Applications. MIT Press. Cambridge, MA.
• Pesendorfer, Martin and Philipp Schmidt-Dengler (2008). Asymptotic least squares estimators for dynamic games. Review of Economic Studies 75(3), 901-928.
• Puterman, Martin L. and Shelby L. Brumelle (1979). On the convergence of policy iteration in stationary dynamic programming. Mathematics of Operations Research 4(1), 60-69.
• Rust, John (1987). Optimal replacement of GMC bus engines: An empirical model of Harold Zurcher. Econometrica 55, 999-1033.
• Rust, John (1994). Structural estimation of Markov decision processes. In R. Engle and D. McFadden (Eds.). Handbook of Econometrics 4, 3081-3143. North-Holland. Amsterdam.
• Stokey, Nancy L. and Robert E. Lucas (with Edward C. Prescott) (1989). Recursive Methods in Economic Dynamics. Harvard University Press. Cambridge, MA.

[1] The code has been tested with Matlab 2014a and later, with its Optimization Toolbox, on iMacs with OS X 10.9 and later. A free trial version of Knitro is available from Artelys.

[2] Note that fmincon, like the alternatives discussed below, allows the user to specify bounds on parameters; if another function is used that does not allow for bounds on the parameters, you can use an alternative parameterization to ensure that parameters only take values in some admissible set (for example, you can specify $\delta_1=\exp(\delta_1^*)$ for $\delta_1^*\in\mathbb{R}$ to ensure that $\delta_1 \gt 0$). Minimizers like fmincon also allow you to impose more elaborate constraints on the parameters; you will need this option when implementing the MPEC alternative to NFXP of Su and Judd (2012) (see Section 6).

[3] fmincon requires Matlab's Optimization Toolbox; knitromatlab is included in Knitro 9.0; knitrolink uses both; and ktrlink can be used if the Optimization Toolbox is installed with an earlier version of Knitro.

### 1 Contractions and Dynamic Programming

See e.g. Stokey, Lucas, and Prescott (1989) and Judd (1998) for a rigorous and comprehensive treatment of the topics in this appendix and their economic applications.

#### 1.1 Definitions

A metric space is a set $\cal{U}$ with a metric $m:\cal{U}\times\cal{U}\rightarrow\mathbb{R}$ such that, for any $U,U',U''\in\cal{U}$,

1. $m(U,U')=0$ if and only if $U=U'$,
2. $m(U,U')=m(U',U)$, and
3. $m(U,U'')\leq m(U,U')+m(U',U'')$ (triangle inequality).

A Cauchy sequence in $({\cal U},m)$ is a sequence $\{U_n\}$ in ${\cal U}$ such that, for each $\epsilon \gt 0$, there exists some $n(\epsilon)\in\mathbb{N}$ such that $m(U_n,U_{n'}) \lt \epsilon$ for all $n,n'\geq n(\epsilon)$.

A metric space $({\cal U},m)$ is complete if every Cauchy sequence in $({\cal U},m)$ converges to a point in ${\cal U}$.

A map $\Psi:{\cal U}\rightarrow{\cal U}$ is a contraction with modulus $\rho\in(0,1)$ if $m\left[\Psi(U),\Psi(U')\right]\leq \rho m(U,U')$ for all $U,U'\in{\cal U}$.

#### 1.2 Contraction Mapping (Banach Fixed Point) Theorem

If $({\cal U},m)$ is a complete metric space and $\Psi:{\cal U}\rightarrow{\cal U}$ is a contraction, then there exists a unique $U\in{\cal U}$ such that $U=\Psi(U)$. Furthermore, for any $U_0\in{\cal U}$, the sequence $\{U_n\}$ with $U_{n+1}=\Psi(U_{n})$, $n\in\mathbb{N}$, converges to $U$.

##### Sketch of Proof

Suppose $U,U'\in{\cal U}$ are such that $U=\Psi(U)$ and $U'=\Psi(U')$. Then, $0\leq m(U,U')=m\left[\Psi(U),\Psi(U')\right]\leq\rho m(U,U')$ for some $\rho\in(0,1)$. Consequently, $m(U,U')=0$ and $U=U'$.

Because $\Psi$ is a contraction, any $\{U_n\}$ it generates is a Cauchy sequence. Because $({\cal U},m)$ is complete, it converges to some $U\in{\cal U}$. Because contractions are continuous, $U=\Psi(U)$.

#### 1.3 Computing the Fixed Point of a Contraction

The method of successive approximations directly applies the Contraction Mapping Theorem: it approximates the fixed point $U$ of a contraction $\Psi$ with the sequence $\{U_n\}$ generated from $U_{n+1}=\Psi(U_n)$ on a finitely discretized state space. This method converges globally, but only at a linear rate.
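A minimal Python sketch of successive approximations, applied to a toy affine contraction with modulus 0.9 (not this package's $\Psi$):

```python
import numpy as np

def successive_approximations(psi, U0, tol=1e-12, max_iter=10_000):
    """Iterate U_{n+1} = psi(U_n) until successive iterates differ by
    less than tol in the sup norm."""
    U = U0
    for _ in range(max_iter):
        U_new = psi(U)
        if np.max(np.abs(U_new - U)) < tol:
            return U_new
        U = U_new
    raise RuntimeError("no convergence within max_iter iterations")

# Toy contraction on R^2: psi(U) = b + 0.9 * A @ U with A a Markov matrix,
# so psi has modulus 0.9 in the sup norm; its fixed point solves
# (I - 0.9 A) U = b.
A = np.array([[0.7, 0.3], [0.4, 0.6]])
b = np.array([1.0, 2.0])
psi = lambda U: b + 0.9 * A @ U
U_star = successive_approximations(psi, np.zeros(2))
```

Because the modulus is 0.9, each iteration shrinks the error by at most a factor 0.9, which is the linear convergence rate referred to above.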

The Newton-Kantorovich method searches for a zero of $I-\Psi$, where $I:U\in{\cal U}\mapsto U$ is the identity mapping. It approximates $U$ with the sequence generated from


$U_{n+1}=U_n-\left[I-\Psi'_{U_n}\right]^{-1}\left[U_n-\Psi(U_n)\right],$

with $I-\Psi'_{U_n}$ the Fréchet derivative of $I-\Psi$ at $U_n$ (with a discrete state space, this is simply the linear map defined by the appropriate finite dimensional Jacobian, as in negLogLik).
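A matching sketch of the Newton-Kantorovich iteration on a finite state space (Python; applied to a toy affine contraction, for which the method converges in a single step because the Jacobian is constant):

```python
import numpy as np

def newton_kantorovich(psi, jac, U0, tol=1e-12, max_iter=100):
    """Newton-Kantorovich iterations for U = psi(U) on a finite state space.

    jac(U) returns the Jacobian of psi at U; each step solves the linear
    system (I - psi'(U_n)) (U_{n+1} - U_n) = psi(U_n) - U_n.
    """
    U = U0
    for _ in range(max_iter):
        step = np.linalg.solve(np.eye(U.size) - jac(U), psi(U) - U)
        U = U + step
        if np.max(np.abs(step)) < tol:
            return U
    raise RuntimeError("no convergence within max_iter iterations")

# Toy affine contraction psi(U) = b + 0.9 * A @ U; its Jacobian is the
# constant matrix 0.9 * A, so one Newton step lands on the fixed point.
A = np.array([[0.7, 0.3], [0.4, 0.6]])
b = np.array([1.0, 2.0])
psi = lambda U: b + 0.9 * A @ U
U_star = newton_kantorovich(psi, lambda U: 0.9 * A, np.zeros(2))
```

For a nonlinear $\Psi$, such as this package's smoothed Bellman operator, the Jacobian varies with $U$ and the method instead converges quadratically near the fixed point.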

#### 1.4 Blackwell's Sufficient Conditions for a Contraction

Let ${\cal U}$ be the space of functions $U:{\cal X}\rightarrow\mathbb{R}$ such that $\sup|U| \lt \infty$ ($U$ bounded), with metric $m(U,U')=\sup|U-U'|$. Suppose that, for some $\rho\in(0,1)$, $\Psi:{\cal U}\rightarrow{\cal U}$ satisfies

1. (monotonicity) $\Psi(U)\leq \Psi(U')$ and
2. (discounting) $\Psi(U+\gamma)\leq \Psi(U)+\rho\gamma$

for all $U,U'\in{\cal U}$ such that $U\leq U'$ and all $\gamma\in(0,\infty)$. Then, $\Psi$ is a contraction with modulus $\rho$.

##### Sketch of Proof

Let $U,U'\in{\cal U}$ be such that $U\leq U'$. Note that $U\leq U'+m(U,U')$, so that $\Psi(U)\leq \Psi(U')+\rho m(U,U')$ by monotonicity and discounting. Similarly, $\Psi(U')\leq \Psi(U)+\rho m(U,U')$. Thus, $m\left[\Psi(U),\Psi(U')\right]\leq\rho m(U,U')$.

#### 1.5 Application to Dynamic Programming

The choice-specific value function $U$ is a fixed point of $\Psi$, which satisfies Blackwell's sufficient conditions with $\rho$ equal to the discount factor.

• Under regularity conditions such that $U$ is bounded, this ensures that it is the unique fixed point of a contraction on the complete (Banach) space $({\cal U},m)$ of bounded functions.
• If ${\cal U}'\subset{\cal U}$ is closed and $\Psi(U')\in{\cal U}'$ for all $U'\in{\cal U}'$, then $\Psi$ is a contraction on the complete subspace $({\cal U}',m)$, so that $U\in{\cal U}'$ (examples: monotonicity, continuity, etcetera).
• $U$ can be computed by successive approximations, the Newton-Kantorovich method, or a hybrid algorithm as in Rust (1987).

In this context, the method of successive approximations is alternatively referred to as value iteration. Moreover, the Newton-Kantorovich method is closely related to an alternative method called policy iteration. Each of this method's iterations takes a value function as input, determines the corresponding optimal policy, and then gives the values of applying that policy forever as output (see e.g. Ljungqvist and Sargent (2000), Section 3.1.1, which refers to this method as Howard's policy improvement algorithm, or Rust (1994), Section 2.5).

Puterman and Brumelle (1979) show that policy iteration is generally equivalent to applying the Newton-Kantorovich method to finding the fixed point of the Bellman equation (in the sense that both produce the same sequence of value functions when starting from the same value). Ljungqvist and Sargent (2000), Section 4.4, present and develop this result for the special case of a finite state space.

Finally, for our example's special case of a finite ${\cal X}$, Aguirregabiria and Mira (2002)'s Proposition 1 establishes that policy iteration is equivalent to applying the Newton-Kantorovich method to finding a fixed point of the Bellman-like operator $\Psi$ (which Aguirregabiria and Mira (2002) refer to as a smoothed Bellman operator).

### 2 Miscellaneous Utilities

This section documents some miscellaneous utilities that are called by the main functions. At this point, it only includes randomDiscrete.

The function randomDiscrete returns a random draw from the distribution of $(Y_1,\ldots,Y_n)$, with $Y_1,\ldots,Y_n$ independently discretely distributed with (not necessarily identical) distributions on $\{1,2,\ldots,k\}$ for some $2\leq k \lt \infty$.

 randomDiscrete.m, line 7:

    function y = randomDiscrete(p)

It requires one input argument:

{p} a $k\times n$ matrix whose $(i,j)$th element is $\Pr(Y_j=i)$.

It returns

{y} a $1\times n$ vector with a random draw $(y_1,\ldots,y_n)$ from $(Y_1,\ldots,Y_n)$.

The function randomDiscrete first stores the number $k$ of support points in a scalar nSupp and the number $n$ of random variables in a scalar nVar.

 randomDiscrete.m, lines 19–20:

    nSupp = size(p,1);
    nVar = size(p,2);

Then, it creates a $(k-1)\times n$ matrix uniformDraws with $k-1$ identical rows, each containing the same $n$ independent draws from a standard uniform distribution, and computes the $k\times n$ matrix cumulativeP whose $(i,j)$th element is $\Pr(Y_j\leq i)$.

 randomDiscrete.m, lines 24–25:

    uniformDraws = ones(nSupp-1,1)*random('unif',zeros(1,nVar),ones(1,nVar));
    cumulativeP = cumsum(p);

Finally, for each $j$, it sets $y_j$ equal to 1 plus the number of elements of $\{\Pr(Y_j\leq 1),\ldots,\Pr(Y_j\leq k-1)\}$ that are weakly smaller than the uniform random draw in the $j$th column of uniformDraws.

 randomDiscrete.m, line 29:

    y = sum([ones(1,nVar);cumulativeP(1:nSupp-1,:)<=uniformDraws]);
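The same inverse-CDF logic can be sketched in Python (a hypothetical numpy translation of randomDiscrete):

```python
import numpy as np

def random_discrete(p, rng):
    """Draw one value per column of p, where column j holds the pmf of
    Y_j on {1, ..., k}: draw u ~ U(0,1) and return 1 plus the number of
    cumulative probabilities P(Y_j <= i), i < k, that are <= u.
    """
    k, n = p.shape
    cum = np.cumsum(p, axis=0)          # (i, j) entry: P(Y_j <= i+1)
    u = rng.uniform(size=(1, n))        # one uniform draw per variable
    return 1 + np.sum(cum[:-1, :] <= u, axis=0)

rng = np.random.default_rng(0)
p = np.array([[0.2], [0.5], [0.3]])     # pmf of one variable on {1, 2, 3}
draws = random_discrete(np.tile(p, (1, 20_000)), rng)
```

Tiling the single pmf across 20,000 columns draws that many independent copies in one vectorized call; their empirical frequencies should be close to $(0.2, 0.5, 0.3)$.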

For reference, the remaining functions in the package are listed in full below.

 flowpayoffs.m:

    function [u0,u1] = flowpayoffs(supportX,beta,delta)
    nSuppX = size(supportX,1);
    u0 = [zeros(nSuppX,1) -delta(1)*ones(nSuppX,1)];
    u1 = [ones(nSuppX,1) supportX]*beta*[1 1]-delta(2)*ones(nSuppX,1)*[1 0];
 bellman.m:

    function [capU0,capU1] = bellman(capU0,capU1,u0,u1,capPi,rho)
    r0 = log(exp(capU0(:,1))+exp(capU1(:,1)));
    r1 = log(exp(capU0(:,2))+exp(capU1(:,2)));
    capU0 = u0 + rho*capPi*r0*[1 1];
    capU1 = u1 + rho*capPi*r1*[1 1];
 fixedPoint.m:

    function [capU0,capU1] = fixedPoint(u0,u1,capPi,rho,tolFixedPoint,bellman,capU0,capU1)
    nSuppX = size(capPi,1);
    if isempty(capU0)
        capU0 = zeros(nSuppX,2);
    end
    if isempty(capU1)
        capU1 = zeros(nSuppX,2);
    end
    inU0 = capU0+2*tolFixedPoint;
    inU1 = capU1+2*tolFixedPoint;
    while (max(max(abs(inU0-capU0)))>tolFixedPoint) || (max(max(abs(inU1-capU1)))>tolFixedPoint)
        inU0 = capU0;
        inU1 = capU1;
        [capU0,capU1] = bellman(inU0,inU1,u0,u1,capPi,rho);
    end
 simulateData.m:

    function [choices,iX] = simulateData(deltaU,capPi,nPeriods,nFirms)
    nSuppX = size(capPi,1);
    oneMinPi = eye(nSuppX)-capPi';
    pInf = [oneMinPi(1:nSuppX-1,:);ones(1,nSuppX)]\[zeros(nSuppX-1,1);1];
    iX = randomDiscrete(pInf*ones(1,nFirms));
    deltaEpsilon = random('ev',zeros(1,nFirms),ones(1,nFirms))-random('ev',zeros(1,nFirms),ones(1,nFirms));
    choices = deltaU(iX,1)' > deltaEpsilon;
    for t = 2:nPeriods
        iX = [iX;randomDiscrete(capPi(iX(end,:),:)')];
        deltaEpsilon = random('ev',zeros(1,nFirms),ones(1,nFirms))-random('ev',zeros(1,nFirms),ones(1,nFirms));
        choices = [choices;(deltaU(iX(end,:)+nSuppX*choices(end,:)) > deltaEpsilon)];
    end
##### estimatePi.m

```matlab
function piHat = estimatePi(iX,nSuppX)

nPeriods = size(iX,1);

% Frequency estimator of the transition matrix Pi
piHat = zeros(nSuppX,nSuppX);
for i = 1:nSuppX
    for j = 1:nSuppX
        piHat(i,j) = sum(sum((iX(2:nPeriods,:)==j)&(iX(1:nPeriods-1,:)==i)))/sum(sum(iX(1:nPeriods-1,:)==i));
    end
end
```
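The double loop computes the frequency estimator, which is the maximum likelihood estimator of the transition probabilities:

$$\hat\pi_{ij}=\frac{\sum_{n}\sum_{t=2}^{T}\mathbb{1}\{X_{t-1,n}=x^i,\,X_{t,n}=x^j\}}{\sum_{n}\sum_{t=2}^{T}\mathbb{1}\{X_{t-1,n}=x^i\}}.$$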
##### negLogLik.m

```matlab
function [nll,negScore,informationMatrix] = ...
    negLogLik(choices,iX,supportX,capPi,beta,delta,rho,flowpayoffs,bellman,fixedPoint,tolFixedPoint)

nSuppX = size(supportX,1);

% Solve the model for the trial parameter values and compute the implied
% probabilities of choosing inactivity
[u0,u1] = flowpayoffs(supportX,beta,delta);
[capU0,capU1] = fixedPoint(u0,u1,capPi,rho,tolFixedPoint,bellman,[],[]);
deltaU = capU1-capU0;
pExit = 1./(1+exp(deltaU));

% Negative log partial likelihood: p holds the probability of each
% observed choice given the state and lagged choice
laggedChoices = [zeros(1,size(choices,2));choices(1:end-1,:)];
p = choices + (1-2*choices).*pExit(iX+nSuppX*laggedChoices);
nll = -sum(sum(log(p)));

% Negative score: differentiate the fixed point implicitly in the payoff
% parameters theta (the columns of dPsi_dTheta correspond to beta(1),
% beta(2), and delta(2))
if nargout>=2
    d00 = rho*capPi*diag(pExit(:,1));
    d01 = rho*capPi*diag(pExit(:,2));
    d10 = rho*capPi-d00;
    d11 = rho*capPi-d01;
    dPsi_dUbar = [[d00;d00;zeros(2*nSuppX,nSuppX)] [zeros(2*nSuppX,nSuppX);d01;d01] ...
        [d10;d10;zeros(2*nSuppX,nSuppX)] [zeros(2*nSuppX,nSuppX);d11;d11]];
    dPsi_dTheta = [[zeros(2*nSuppX,1);ones(2*nSuppX,1)] [zeros(2*nSuppX,1);supportX;supportX] ...
        [zeros(2*nSuppX,1);-ones(nSuppX,1);zeros(nSuppX,1)]];
    dUbar_dTheta = (eye(4*nSuppX)-dPsi_dUbar)\dPsi_dTheta;
    dDeltaU_dTheta = dUbar_dTheta(2*nSuppX+1:4*nSuppX,:)-dUbar_dTheta(1:2*nSuppX,:);

    % Sum each firm's per-period contributions to the minus score
    nTheta = size(dUbar_dTheta,2);
    negFirmScores = repmat((1-2*choices).*(1-p),[1 1 nTheta]);
    for i = 1:nTheta
        negFirmScores(:,:,i) = negFirmScores(:,:,i).*dDeltaU_dTheta(iX+nSuppX*laggedChoices+2*(i-1)*nSuppX);
    end
    negFirmScores = squeeze(sum(negFirmScores,1));
    negScore = sum(negFirmScores)';
end

% Outer product of the gradients estimate of the information matrix
if nargout==3
    informationMatrix = zeros(nTheta,nTheta);
    for n = 1:size(negFirmScores,1)
        informationMatrix = informationMatrix + negFirmScores(n,:)'*negFirmScores(n,:);
    end
end
```
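Two textbook results underlie negLogLik.m. First, with independent type I extreme value shocks, the probability of choosing inactivity is the binary logit

$$\Pr(a=0\mid x,a_{-1})=\frac{1}{1+\exp\left[\Delta\overline U(x,a_{-1})\right]},\qquad \Delta\overline U\equiv\overline U_1-\overline U_0,$$

which is what `pExit` stores. Second, the score differentiates through the fixed point $\overline U=\Psi(\overline U;\theta)$ using the implicit function theorem,

$$\frac{\partial\overline U}{\partial\theta'}=\left(I-\frac{\partial\Psi}{\partial\overline U'}\right)^{-1}\frac{\partial\Psi}{\partial\theta'},$$

which the code evaluates as `dUbar_dTheta = (eye(4*nSuppX)-dPsi_dUbar)\dPsi_dTheta`.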
##### randomDiscrete.m

```matlab
function y = randomDiscrete(p)

nSupp = size(p,1);
nVar = size(p,2);

% Inverse cumulative distribution method: for each variable, y is the
% smallest support point at which the cumulative probability exceeds the
% uniform draw
uniformDraws = ones(nSupp-1,1)*random('unif',zeros(1,nVar),ones(1,nVar));
cumulativeP = cumsum(p);
y = sum([ones(1,nVar);cumulativeP(1:nSupp-1,:)<=uniformDraws]);
```
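The final line implements inverse cumulative distribution sampling: with $F$ the cdf implied by a column of `p` and $u$ the corresponding uniform draw,

$$y=1+\sum_{k=1}^{\texttt{nSupp}-1}\mathbb{1}\{F(k)\le u\}=\min\{k:F(k)>u\},$$

so each support point $k$ is drawn with probability $p_k$.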