metropolis_hastings() running too slow #117
Comments
I then also started an estimation of 10,000 draws per block, for 125 blocks, to get 1.25 million total draws (inclusive of draws that would be burned). My estimated runtime is 2 hours and 1 minute (about 1 minute per block). My estimation script (after adding your Gabaix model to my local version of DSGE.jl):
I get 3.33 ms for the time it takes to calculate the log posterior once. This will help us get a sense of the difference in machine speed.
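A minimal sketch of how such a per-evaluation timing can be taken, assuming a stand-in `log_posterior` function (hypothetical toy density; the real one comes from the DSGE.jl model):

```julia
# Toy stand-in for the model's log posterior -- for illustration only.
log_posterior(θ) = -0.5 * sum(abs2, θ)

θ = randn(30)                 # a parameter vector of DSGE-like size
log_posterior(θ)              # first call compiles; exclude it from timing
n = 1_000
t = @elapsed for _ in 1:n
    log_posterior(θ)
end
avg_ms = 1e3 * t / n          # average milliseconds per evaluation
println("average time per evaluation: $(round(avg_ms; digits = 4)) ms")
```

In practice `BenchmarkTools.@btime` gives more robust numbers, but the `@elapsed` loop above needs no extra packages.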
Thanks again, William. The date start really didn't make sense. While the tests below were done using …, in the speed test you suggested I got 3.38 ms, so the remaining runtime difference might be due to our machines. Using …, I didn't really get your point about …
…meaning there is only one parameter block, as is the standard case for random-walk MH. However, it is known that blocking parameters can improve sampling efficiency. Since the FRBNY DSGE team has mostly moved toward using our SMC algorithm, we only got around to implementing a very naive parameter blocking scheme, which randomly allocates parameters into blocks. Our MH algorithm will randomly split the parameters of your model into 5 blocks during the setup of the algorithm and will block-update during sampling.
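The naive random blocking described above can be sketched as follows (assumed behavior, not DSGE.jl's actual implementation): shuffle the parameter indices, then cut the shuffled list into roughly equal contiguous chunks.

```julia
using Random

# Randomly allocate parameter indices 1:n_params into n_blocks blocks.
function random_blocks(n_params::Integer, n_blocks::Integer;
                       rng::AbstractRNG = Random.default_rng())
    idx = shuffle(rng, collect(1:n_params))
    per = cld(n_params, n_blocks)   # ceiling division: block size
    [idx[(b - 1) * per + 1:min(b * per, n_params)] for b in 1:n_blocks]
end

blocks = random_blocks(30, 5)   # e.g. 30 parameters into 5 blocks of 6
```

During sampling, each MH iteration would then propose and accept/reject one block at a time while holding the other blocks fixed.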
I did get a bit confused with the date specification, but the period I'm interested in is actually 1982:Q1 to 2007:Q4, so a bit longer than the one I was using before. Nevertheless, using 125 blocks of 10,000 draws I got it to run in 118 minutes. I tried using … The script I'm using now is:
Strangely, I'm not getting rejection rates as high as you reported with …
Hi caioodantas, I just remembered another feature of the DSGE.jl code which may suggest our code is even faster than Dynare. I don't know how the Dynare code for estimation works since I haven't looked into it, but the number of simulations (i.e. …
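The sentence above is cut off, but one plausible reading is about thinning: if `n_mh_simulations` counts *saved* (post-thinning) draws per block, the totals for the settings in this thread multiply out as below. This is a back-of-the-envelope sketch of that reading, not a statement of DSGE.jl internals.

```julia
# Settings from this thread.
n_mh_simulations = 250_000    # assumed: draws saved per block, after thinning
n_mh_blocks      = 5
mh_thin          = 4

saved_total = n_mh_simulations * n_mh_blocks   # draws kept overall
evaluations = saved_total * mh_thin            # posterior evaluations under this reading
println("saved: $saved_total, posterior evaluations: $evaluations")
```

Under this reading the 1,250,000 kept draws match Dynare's `replics`, but the sampler would actually evaluate the posterior 5,000,000 times, which would make the per-evaluation comparison with Dynare more favorable than the wall-clock times suggest.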
I'm estimating an An-Schorfheide-like model (available here: https://github.com/caioodantas/Behavioral-New-Keynesian-Model), but the predicted runtime (10 hours) is higher than what I get using Dynare (3 hours).
I'm using the following custom settings:
```julia
m <= Setting(:data_vintage, "820102")
m <= Setting(:date_forecast_start, quartertodate("2007-Q4"))
m <= Setting(:n_mh_simulations, 250000)
m <= Setting(:n_mh_blocks, 5)
m <= Setting(:mh_thin, 4)
m <= Setting(:use_population_forecast, false) # turn off; population forecast not available as data
m <= Setting(:mh_cc, 0.6, "Jump size for Metropolis-Hastings (after initialization)")
```
Is there a way to speed up this process?
In Dynare I was using only 1 block with 1,250,000 simulations (`replics`), but when I try to use `n_mh_simulations = 1250000` in Julia I get:
```
ERROR: LoadError: InexactError: check_top_bit(UInt64, -1250000)
Stacktrace:
 [1] throw_inexacterror(::Symbol, ::Type{UInt64}, ::Int64) at .\boot.jl:558
 [2] check_top_bit at .\boot.jl:572 [inlined]
 [3] toUInt64 at .\boot.jl:683 [inlined]
 [4] UInt64 at .\boot.jl:713 [inlined]
 [5] convert at .\number.jl:7 [inlined]
 [6] setindex! at .\array.jl:847 [inlined]
 [7] _dataspace(::Tuple{Int64,Int64}, ::Tuple{}) at C:\Users\Samsung\.julia\packages\HDF5\T1b9x\src\HDF5.jl:1221
```
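The stack trace shows a negative `Int64` (-1250000) being converted to `UInt64` while HDF5 builds a dataspace. A minimal reproduction of that error class (illustrative only; it does not pinpoint where the sign flip happens inside DSGE.jl):

```julia
# Converting a negative Int64 to UInt64 is the failure mode in the trace above.
err = try
    UInt64(-1_250_000)
catch e
    e
end
println(typeof(err))   # prints: InexactError
```

This suggests some internal dimension or count goes negative when `n_mh_simulations` is set that large with the other settings unchanged, before it is handed to HDF5 as an unsigned size.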