Forum: Econometrics & Statistics Forum › Econometrics & Statistical Software › Stata Board
2021-11-18

Non-convergence of a mixed logit model in Stata: Hello everyone. I am working on a choice experiment. I set price, the ASC, and the interactions between the ASC and individual characteristics as fixed parameters, and the other attributes (coded 1/2/3/4) as random parameters. The problem is this: when I leave out the ASC and the interaction terms, the mixed logit model converges, but once I add them the model no longer converges.

Following advice from the forum, I have tried taking logs of the explanatory variables and winsorizing them, but the model still fails to converge.

Could anyone tell me what might cause this and how I should handle it? Also, is there a method I can use to identify outliers?
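On the outlier question, a common first pass in Stata is to inspect the tails of each continuous variable and flag observations far from the mean; a minimal sketch, with a hypothetical variable name `price`:

```stata
* Inspect the distribution, including percentiles and extreme values:
summarize price, detail

* Flag observations more than 3 standard deviations from the mean
* (one common rule of thumb, not a definitive cutoff):
egen zprice = std(price)
gen flag = abs(zprice) > 3 & !missing(zprice)
list if flag
```

Whether flagged observations should be winsorized, trimmed, or kept is a judgment call that depends on the data.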

Many thanks in advance for your help!


My ASC is coded as follows:

Each choice set contains Option 1, Option 2, and a status-quo alternative, so one choice set corresponds to three rows of data. If the respondent chose Option 1 or Option 2, the ASC for the three rows is 0 0 0; if the respondent chose the status quo, the ASC for the three rows is 0 0 1.
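For comparison, in a typical long-format choice-experiment setup the ASC is coded as a property of the alternative itself (1 on the status-quo row in every choice set, regardless of which alternative was chosen), not conditional on the respondent's choice. A minimal sketch, with hypothetical variable names (`gid` = choice-set id, `alt` = alternative number with 3 as the status quo, `choice` = 1 on the chosen row):

```stata
* ASC is 1 on every status-quo row, 0 on the two option rows:
gen asc = (alt == 3)

* A hypothetical mixlogit call with the ASC and its interaction held fixed:
* mixlogit choice price asc asc_income, group(gid) id(pid) rand(attr1 attr2)
```

If the ASC in your data varies with the choice made rather than with the alternative, that alone could cause estimation problems.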



All replies
2021-11-19 17:01:29
Hi OP! I'm not familiar with this model; is the relevant command -asclogit-? A quick look at the Stata help suggests it is also estimated by maximum likelihood, so you could try changing the estimation algorithm with the -technique()- option. The default is nr; you can switch to dfp or bfgs, or any combination of these with nr (the documentation says bhhh is not supported).
You could also try Roodman's -cmp- command, which is more flexible (though I'm not sure whether it can handle your model). Below are Roodman's suggestions for when convergence fails or is very slow:
If you are having trouble achieving (or waiting for) convergence with cmp, these techniques
    might help:

        1. Changing the search techniques using ml's technique() option, or perhaps the search
            parameters, through its maximization options. cmp accepts all these and passes them on
            to ml. The default Newton-Raphson search method usually works very well once ml has
            found a concave region. The DFP algorithm (tech(dfp)) often works better before then,
            and the two can be mixed, as with tech(dfp nr). See the details of the technique()
            option at ml.
        2. If the estimation problem requires the GHK algorithm (see above), change the number of
            draws per observation in the simulation sequence using the ghkdraws() option. By
            default, cmp uses twice the square root of the number of observations for which the GHK
            algorithm is needed, i.e., the number of observations that are censored in at least
            three equations. Raising simulation accuracy by increasing the number of draws is
            sometimes necessary for convergence and can even speed it by improving search precision.
            On the other hand, especially when the number of observations is high, convergence can
            be achieved, at some loss in precision, with remarkably few draws per observation--as
            few as 5 when the sample size is 10,000 (Cappellari and Jenkins 2003). And taking more
            draws can also greatly extend execution time.
        3. If getting many "(not concave)" messages, try the difficult option, which instructs ml to
            use a different search algorithm in non-concave regions.
        4. If the search appears to be converging in likelihood--if the log likelihood is hardly
            changing in each iteration--and yet fails to converge, try adding a nrtolerance(#) or
            nonrtolerance option to the command line after the comma. These are ml options that
            control when convergence is declared. (See ml_opts, below.) By default, ml declares
            convergence when the log likelihood is changing very little with successive iterations
            (within tolerances adjustable with the tolerance(#) and ltolerance(#) options) and when
            the calculated gradient vector is close enough to zero.  In some difficult problems,
            such as ones with nearly collinear regressors, the imprecision of floating point numbers
            prevents ml from quite satisfying the second criterion.  It can be loosened by using the
            nrtolerance(#) to set the scaled gradient tolerance to a value larger than its default
            of 1e-5, or eliminated altogether with nonrtolerance. Because of the risks of
            collinearity, cmp warns when the condition number of an equation's regressor matrix
            exceeds 20 (Greene 2000, p. 40).
        5. Try cmp's interactive mode, via the interactive option. This allows the user to interrupt
            maximization by hitting Ctrl-Break or its equivalent, investigate and adjust the current
            solution, and then restart maximization by typing ml max. Techniques for exploring and
            changing the current solution include displaying the current coefficient and gradient
            vectors (with mat list $ML_b or mat list $ML_g) and running ml plot. See help ml, [R]
            ml, and Gould, Pitblado, and Sribney (2006) for details. cmp is slower in interactive
            mode.
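Applied to -mixlogit-, suggestions 1, 3, and 4 above might look like the following (all variable names here are hypothetical placeholders, not the OP's actual variables):

```stata
* choice = chosen-alternative indicator, gid = choice-set id, pid = respondent id.
* Mix DFP and NR search, use -difficult- for non-concave regions,
* and loosen the scaled-gradient tolerance from its 1e-5 default:
mixlogit choice price asc asc_income, group(gid) id(pid) ///
    rand(attr1 attr2 attr3) technique(dfp nr) difficult nrtolerance(1e-4)
```

These are standard ml maximize options that -mixlogit- passes through to ml, so they can be combined or dropped as needed.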

2021-11-20 21:26:19
Raymond.K posted on 2021-11-19 17:01:
Hi OP! I'm not familiar with this model; is the relevant command -asclogit-? A quick look at the Stata help suggests it is also estimated by maximum likelihood, so you could try ...
Hello, I am using the mixlogit command. Thank you very much for your answer; I'll go try it out.

2021-11-22 11:41:39
出一个东西 posted on 2021-11-20 21:26:
Hello, I am using the mixlogit command. Thank you very much for your answer; I'll go try it out.
I see. I looked at the -mixlogit- help file, and it also accepts maximize_options, so you could try adjusting those. Since I'm not familiar with the underlying model, I can only offer these small technical tips, haha.

2021-11-23 12:06:35
Raymond.K posted on 2021-11-22 11:41:
I see. I looked at the -mixlogit- help file, and it also accepts maximize_options, so you could try adjusting those. Since I'm not familiar with the underlying model, ...
You're too kind. Thanks for your suggestions!

2021-11-29 11:09:01
Hey, did you ever solve this? I'm running into the same problem.
