Myths and Legends: LOOT

I hear a lot of Myths about LOOT. I hear it’s the perfect load order tool, and if you run LOOT your game will never have any bugs. I hear it’s a horrible mess, and running LOOT will turn your load order into a massive pile of sewage. I even hear that LOOT never does anything and why do people recommend it? (On that last one: Learn the difference between mods and plugins, mmkay?)

Thing is, LOOT isn’t Jesus come to save us, and it’s not a pile of dog poo either. It’s a massive crowdsourcing project.

Like this piece of music:

It’s not a bad song, but there’s some places where you just don’t understand why it made the decisions it made. Got caught in a loop or something.

And it’s pretty cool for something that’s totally free and all crowd sourced.


LOOT does not solve bugs. LOOT sorts plugins to the best of its ability. And it reports issues – if those issues have been reported to LOOT. Right? LOOT can’t know things it doesn’t know. It’s not magic.

LOOT cannot replace your own reading on incompatibilities, looking at mod overwrites in TES5Edit, looking for errors, etc. If you say “I ran LOOT, I don’t know why it’s buggy”, I’m going to take a quick look to see if you actually ran LOOT, then tell you to go look for bugs.


LOOT does not do a bad job sorting. It does a fantastic job sorting. I know just about all there is to know about mods 😉 and I still can’t sort a load order as well as LOOT can, with all the tools available to me.

Some people claim LOOT doesn’t sort mods correctly. This is also a Myth. Because guess what? Half the time, LOOT actually DOES sort the damned thing correctly and you just think otherwise because you’re wrong, or you didn’t actually try it, or some other reason. The other half the time, it’s because LOOT didn’t know it was doing anything wrong. LOOT only knows what it knows.

You are the crowd.

Go tell it what to do on GitHub and quit your pointless bitching.

The above thoughts also apply to Mod Picker (which I promise is coming out Soon TM and at this point that is a Riot Soon TM and not a Blizzard one and I’m sorry).


Myths and Legends: Papyrus Ini Settings

It’s well established in the community that some ini settings should never be changed. For the most part, this is true. Increasing uGridsToLoad WILL decrease the stability of your game. Adding HWHavokThreads WILL (probably) do literally nothing.

And yet, there is the pervasive Myth that some settings will improve your game’s performance, or at least prevent stack dumps. The Legends of the community will tell you the opposite – that changing these settings will actually INCREASE the likelihood of stack dumps.

The truth might be a little more subtle.

However, nothing will improve the stability of the scripting engine as much as using fewer, and better-written, mods.


No one thinks this setting is unsafe. It will lengthen your loading screens by the time listed, in this case half a second (the default value). I’d recommend leaving it at less than 1000, as large values may noticeably increase loading screen time.
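I believe the setting being discussed here is fPostLoadUpdateTimeMS in the [Papyrus] section of Skyrim.ini – the name and default value below are from my own notes, so verify against your own install:

```ini
[Papyrus]
; Default: extends every loading screen by 500 ms (half a second)
; so scripts get a chance to catch up before gameplay resumes.
fPostLoadUpdateTimeMS=500.0
```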


This setting controls how much time per frame Papyrus gets to do its thing. Each frame, at 60 fps, is 16.67 ms. 1.2 ms of that (the default) is reserved for Papyrus to do its calculations. The remainder goes to other calculations, with the largest chunk going to drawing the frame. If you’re struggling to stay at 60 fps, it’s because your computer can’t do everything it needs to do (calculations and rendering) in those 16.67 ms. If you’re well over 60 fps and have to cap it, your computer has no problem finishing a frame well inside 16.67 ms.
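This one, if memory serves, is fUpdateBudgetMS in the [Papyrus] section of Skyrim.ini – again, the name and default below are from my notes, so double-check:

```ini
[Papyrus]
; Default: Papyrus gets up to 1.2 ms of each frame for script processing.
fUpdateBudgetMS=1.2
```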

My understanding is that any papyrus steps that do not get completed in the time set by this setting get pushed to the next frame. After getting pushed past a certain number of frames, the script may fail to run or will certainly fail to do what it was supposed to do in a timely fashion. If the game decides a script is frozen altogether because it hasn’t had a chance to do its thing in a very, very long time, it may dump it. That’s bad.

If your computer is running at a lower frame rate, scripts will take longer to run too. Remember, they only get a certain amount of time per frame. If your computer is taking 30 ms to draw a frame (about 33 fps), Papyrus only gets about 40 ms out of every second to do its thing. If your computer is taking 16.67 ms to draw a frame (60 fps), Papyrus gets about 72 ms out of every second to do its thing. Everything is smoother at a higher frame rate!
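A quick sketch of that arithmetic – a toy model that assumes Papyrus uses its full 1.2 ms default budget every single frame:

```python
# Toy model: Papyrus gets a fixed budget out of each frame (default
# fUpdateBudgetMS = 1.2 ms), so slower frames mean fewer frames per second,
# and therefore less total script time per second.

def papyrus_ms_per_second(frame_time_ms, budget_ms=1.2):
    """Total milliseconds per second Papyrus gets, given the frame time."""
    frames_per_second = 1000.0 / frame_time_ms
    return frames_per_second * budget_ms

# 30 ms per frame (~33 fps): Papyrus gets ~40 ms of every second.
print(round(papyrus_ms_per_second(30)))     # 40
# 16.67 ms per frame (60 fps): Papyrus gets ~72 ms of every second.
print(round(papyrus_ms_per_second(16.67)))  # 72
```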

However, more things to do in papyrus will not decrease frame rate, because of that little setting up there. It prevents a laggy or badly written script from taking over and freezing the game – the game will not wait for the script.

So. What happens when we increase that setting? Papyrus can use more time. Unlike the load screen thing, it doesn’t have to use more time. It just can.

Let’s say we increase it by 25%. A nice conservative change.

Papyrus gets 25% more time to process. Everything else you need to do to get a frame drawn in 16.67 ms gets 0.3 ms less to process. Doesn’t seem real significant, does it? It’s a big boon to papyrus if it happens to need all that time, and a tiny change in how much time your computer gets to draw a frame, if it uses all that time.

Let’s say your computer is really struggling. You’re at 30 frames, and you know scripts aren’t running in a timely fashion.

That change will reduce your framerate. But, it will overall increase how much time papyrus gets… at the cost of everything else!

If your computer’s really breezing along, and you’d be at 100 fps if you didn’t have to cap it to 60… your computer can process a frame in 10 ms. You have to cap it at 60. Why not give an extra 5 ms to papyrus? If it needs it, you’ll still be above 60 fps. If it doesn’t need it, you’ll be exactly where you were before. Your scripts may run in fewer frames (they may not), leading to an overall performance improvement and more stable gameplay.
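Using the hypothetical numbers above, the headroom check looks like this – a sketch of the reasoning, not anything the game actually computes:

```python
# Headroom check for a machine that could hit 100 fps but is capped at 60.
# All numbers here are the hypothetical ones from the paragraph above.

frame_cap_ms = 1000 / 60    # ~16.67 ms allowed per frame at a 60 fps cap
work_ms = 10.0              # this machine finishes a frame's work in 10 ms
extra_papyrus_ms = 5.0      # the extra budget we're considering giving Papyrus

headroom_ms = frame_cap_ms - work_ms   # ~6.67 ms of idle time per frame
# Even if Papyrus eats the entire extra budget, we stay at 60 fps:
print(headroom_ms > extra_papyrus_ms)  # True
```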

But if your game gets a particularly difficult to process scene, with a lot of scripts and a lot of things to compute… it’s suddenly going to chug much, much harder. Not only will it take more time to draw the frame, but papyrus will demand that time too instead of patiently waiting its turn.

And if you think that Papyrus is going to take a full 800 ms to process anything, per frame? Just turn that shit off. You’re saying Papyrus gets to take almost an entire second just to do its stuff if it needs it? If your game is ever that degree of laggy, it is literally unplayable.

And that’s why you should never set these the way the popular Myths will have you do it.

But nor should you shun them in fear, despite what the Legends tell you, because they’re not actually that scary.


These settings control how much memory is devoted to holding Skyrim’s script processes. Anyone who’s ever thought “Wildcat is only a 76 kb download? That can’t be right!” has come across the fact that scripts are really small. They don’t require much memory to process, either.

The first two control the size of stacks for Papyrus to allocate. The way this works might remind you of the way the familiar SKSE patch works, except this is only for Papyrus (and these are stacks, not heaps – see the discussion below). Also, those values are literally six orders of magnitude smaller, because they’re in bytes, not megabytes. Also, unlike the SKSE patch, THIS ALLOCATOR IS NOT BROKEN.
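For reference, I believe “the first two” are these [Papyrus] settings in Skyrim.ini – names and defaults from my notes, so verify against your own ini:

```ini
[Papyrus]
; Defaults, in bytes (not megabytes). Leave these alone.
iMinMemoryPageSize=128
iMaxMemoryPageSize=512
```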

Repeat after me:

Papyrus Memory allocation IS NOT BROKEN.

So there’s no need to increase the stack size! It will fucking allocate a new stack when it needs to! You don’t need to make bigger stacks, because it can make more stacks!

Anyways. No reason to mess with these then. According to the CK wiki:

“iMinMemoryPageSize is the smallest amount of memory the VM will allocate for a single stack page, in bytes. Smaller values will waste less memory on small stacks, but larger values will reduce the number of allocations for stacks with many small frames (which improves performance).”

Decreasing = less memory wastage, perhaps important if your Skyrim uses more than 3.1 GB (Seriously, if your Skyrim ever uses more than 3.1 GB of RAM, screenshot that shit, I want to see it! And your modlist! Please!)

Increasing = fewer allocations, better performance – which is probably not noticeable because allocation is not really the slow part of the skyrim engine in most cases.

“iMaxMemoryPageSize is the largest amount of memory the VM will allocate for a single stack page, in bytes. Smaller values may force the VM to allocate more pages for large stack frames. Larger values may cause the memory allocator to allocate differently, decreasing performance for large stack frames.”

Decreasing = more allocations, less performance

Increasing = broken memory allocation, less performance.


It doesn’t help, and it can certainly hurt.

I guess I just agreed with the Myth, eh? Well that’s the problem with Myths. Most of them have a grain of truth.


Last one. This is the maximum total amount of stack memory. So Skyrim can only use 75 kb of memory for Papyrus stacks. That’s… not a whole lot.
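This one should be iMaxAllocatedMemoryBytes in the [Papyrus] section – name and default from my notes, so double-check (76800 bytes is that 75 kb):

```ini
[Papyrus]
; Default: 76800 bytes = 75 kb total for all active script stacks.
iMaxAllocatedMemoryBytes=76800
```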

But there’s a reason it’s not a whole lot. Scripts are tiny. Real tiny. Even all 76 kb of Wildcat doesn’t need 75 kb to process at any one time.

But still… Skyrim… with a lot of mods… that’s a lot of scripts. Even if they’re tiny that can all add up. What if papyrus needs more and can’t allocate it?

It waits for memory to be freed, and then it uses it. Waiting = slower scripts, and eventually, if your game is that overloaded, stack dumps.

So why don’t we increase it?

Because if you make it bigger, you get stack thrashing, and stack thrashing causes stack dumps, and that’s bad.

(I can’t actually explain stack thrashing. I read the whole Wikipedia article on it and I’m still not really sure what it is, other than “buffer overflow”, and I can’t explain what a buffer overflow is, other than it’s bad. It’s real bad. It’s a heck of a lot worse than waiting on a slow script.)

Again, small increases might give papyrus a little more breathing room without harming it too much. 25%. Maybe 50%. But doubling it? Increasing it by 9 orders of magnitude? (I’m not kidding. I wish I was kidding). Don’t do that.

It’s probably important right now to note the difference between Stacks (what all that discussion up there was about) and Heaps (which are what the SKSE memory patch edits).

Stacks are quick memory used for active calculations. They’re allocated, the calculation gets done, and they go away. Easy.

Heaps are slow memory used for storing things, like objects and variables. They’re allocated, lots of different stacks access them to do different things, and they don’t get cleared. They stay there and the heaps slowly grow as new objects get named.

That’s why stacks are so much smaller, and why it’s really not a big deal to have a slightly-too-small cap on total stack memory (because a stack just goes away when it’s done, and then you can make a new one). And why it’s a big deal to have a slightly-too-small heap size.
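As a loose analogy – this is a toy model in Python, not how the Papyrus VM is actually implemented – the difference looks like this:

```python
# Toy analogy: stacks are short-lived scratch space that vanishes when the
# work is done; the heap is long-lived storage that keeps growing as new
# objects get named.

heap = {}  # long-lived storage: persists and grows across script runs

def run_script(name, value):
    stack = [value, value * 2]  # scratch space, alive only for this call
    result = sum(stack)
    heap[name] = result         # the result lands in the heap and stays
    return result               # the stack is freed when we return

run_script("bounty", 10)
run_script("gold", 5)
print(heap)  # {'bounty': 30, 'gold': 15} - the stacks are long gone
```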

So – the Myth has some truth, and some falsehood. Or maybe depending on which Myth you read, it was all false or all true. There’s a lot of different Myths. But written here, is the truth, as best I know it.

Myths and Legends: Autosaves and Quicksaves

TES is a universe built on Myths and Legends. Or Legends that are Myths. Or Myths that are really Legends. Everything is true and not true at the same time. It’s a great universe to create in.

No surprise that Myths about modding are just as common.

There is an extremely popular myth that the default Skyrim autosaves and quicksaves do not properly stop scripts and therefore will cause CTDs. Or save bloat. Or stack dumps. (The actual symptom is different every time the myth is repeated – a common problem with myths.) Supposedly, only going to the Skyrim menu and saving from there will result in a safe save. Or the console. That works too.

This is most certainly a myth. First of all, anyone can test for themselves that quicksaves and autosaves do indeed stop scripts from running, identical to going to the main menu. If you turn on Papyrus logging and go make a quicksave, you can easily see the “VM is freezing” and “VM is thawing” messages that indicate scripts were stopped, recorded, and restarted. This happens regardless of the type of save.

No, the rendering engine doesn’t stop, but the rendering engine doesn’t get baked into a save, now does it?

Secondly, there’s this excellent breakdown of why this is an utterly ridiculous thing to even think in the first place, by Merad.

To quote:

“The whole ‘script running while saving’ thing, however, is moronic. Creating a save requires capturing a snapshot of the world state to a file. If you allow the world state to be altered while you are saving it, of course you will end up with blatant corruption everywhere. That’s the kind of mistake that a sophomore CS major should know to avoid. I find it hard to believe that Beth could have devs that stupid, and also hard to believe that the game would function at all if it was written that way.”

Now where did this myth come from?

There’s a few possibilities.

  1. Windows corrupted the file while handling the IO of writing the save. While unlikely, Windows does occasionally corrupt files, especially on older computers. My understanding is that this is more likely if you’re overwriting files.

  2. The saves got corrupted because of skyrim/mod bugs, and people who relied only on autosaves and quicksaves didn’t have old backup saves to go to.

  3. Overwriting saves (giving a new save the same file name as an old one) takes longer because of an issue with SKSE. It just takes longer; there’s no actual harm in it.

What’s the real answer?

Autosaves and Quicksaves are perfectly fine, but you should never overwrite files, as that may cause problems. Plus, you always want an old save to return to in case something bad happens to your more recent saves. Don’t overwrite or delete saves!