We need to improve the way Cloudy treats the radiative transfer of IR dust emission in models that extend to large A_V. Once Cloudy reaches the point where all atoms are in molecular form, it begins to take large zone sizes until the stopping criterion is reached. One consequence is that a single zone can become optically thick in the IR, which leads to energy-conservation problems. Deep in a molecular cloud, all of the luminosity should emerge in the FIR, or at least in the IR. However, if you compare the total luminosity at the illuminated face to the total luminosity in the final zone, the final-zone luminosity is orders of magnitude lower.
To see this effect, run a model such as orion_hii_pdr.in with the calculation extended to A_V = 1,000 or 10,000 mag, then compare the total intensity given at the beginning of the output to the total FIR intensity. You will need to run the model for at least two iterations, and be sure to use the emergent intensity.
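The comparison above can be sketched numerically. This is a minimal illustration with synthetic spectra, not a parser for Cloudy's actual save-file format: the frequency grid and the two toy power-law spectra are made-up numbers chosen only to show how an emergent total can come out orders of magnitude below the incident total.

```python
import numpy as np

def total_intensity(nu, f_nu):
    """Integrate F_nu over frequency (trapezoid rule) to get total intensity."""
    return 0.5 * np.sum((f_nu[1:] + f_nu[:-1]) * (nu[1:] - nu[:-1]))

# Hypothetical, already-parsed spectra: frequency grid plus incident and
# emergent F_nu. The 1e-3 normalization gap stands in for the (much larger)
# energy-conservation deficit described in the text.
nu = np.logspace(11, 16, 500)             # Hz
incident = 1e-18 * (nu / 1e13) ** -1.5    # toy incident spectrum
emergent = 1e-21 * (nu / 1e13) ** -1.5    # toy emergent spectrum, much fainter

ratio = total_intensity(nu, emergent) / total_intensity(nu, incident)
print(f"emergent/incident = {ratio:.1e}")  # prints 1.0e-03
```

If energy were conserved, this ratio would be close to 1; the bug report says it is instead orders of magnitude below unity.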
One way I have found to at least partially work around this issue is to artificially decrease the zone thickness deep in the model. If I run a model at constant density and I know the A_V/N_H ratio, then I can set a maximum zone thickness with "set drmax xxx" such that the change in A_V per zone never exceeds 1-2 mag. This ensures that no zone is optically thick in the IR. With this workaround the total luminosity is correct to within 5%, which, in a model where essentially all of the IR is FIR, should be good enough to get the FIR right. Plots of the dust spectrum also agree reasonably well with observations. However, this is by no means a final fix: something about the dust RT needs to be modified. Some combination of improved zone logic, the RT of the dust continuum, etc., needs to be worked out; there should be a way to take larger zone sizes with improved RT methods. My workaround takes a long time to finish a single model (~5,000-10,000 zones), which is highly inefficient.
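The drmax choice above is simple arithmetic; here is a sketch of it. The density and the A_V/N_H ratio are illustrative assumptions (a typical ISM-like value, not numbers from the original note), and the convention that "set drmax" takes the log of the thickness in cm is also an assumption to check against the Cloudy documentation.

```python
import math

# Assumed illustrative values -- not taken from the original note.
AV_PER_NH = 5.3e-22   # mag cm^2, roughly a typical ISM A_V/N_H ratio
N_H = 1.0e4           # cm^-3, assumed constant hydrogen density
DELTA_AV_MAX = 1.0    # mag, target maximum extinction change per zone

# At constant density, dA_V = (A_V/N_H) * n_H * dr, so the largest zone
# thickness that keeps dA_V <= DELTA_AV_MAX is:
dr_max = DELTA_AV_MAX / (AV_PER_NH * N_H)   # cm
log_dr_max = math.log10(dr_max)             # assuming "set drmax" takes a log

print(f"dr_max = {dr_max:.3e} cm")
print(f"set drmax {log_dr_max:.2f}")
```

With these example numbers the limit comes out near 1.9e17 cm; the cost, as noted above, is that the model then needs thousands of zones to reach large A_V.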