donatas abraitis

Meltdown - Intel’s consumer loan for performance

It looks like everyone is involved in this bazaar. Cloud providers can clap their hands at the chance to boost profits by ~30%, Intel, in turn, will sell more chips, GDP goes up, and so on.

Personally, I don't think this is a bug, or even Intel's fault. I think it's actually good that it surfaced now. Maybe developers will start thinking about performance again? Or, finally, there won't be any excuses left for not running the newest kernel releases(?).

I wrote a post about 5-level page tables more than half a year ago, and I see that this is the direction we are heading. It's really a good time to think about increasing PAGE_SIZE or using huge pages, to cut down on the vast number of TLB misses and page-table walks, which are expensive - exactly the overhead behind the numbers you see in Google results (a performance penalty of 5-35%).
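To make the huge-pages point concrete, here is a minimal sketch (mine, not from the original post) of a process explicitly requesting 2 MiB huge pages with mmap(MAP_HUGETLB). It assumes the system has huge pages reserved, e.g. via /proc/sys/vm/nr_hugepages:

    #define _GNU_SOURCE
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    #define LENGTH (256UL * 1024 * 1024)   /* 256 MiB backed by 2 MiB huge pages */

    int main(void)
    {
        /* MAP_HUGETLB asks the kernel for huge pages; this fails unless
         * huge pages were reserved, e.g. via /proc/sys/vm/nr_hugepages. */
        void *addr = mmap(NULL, LENGTH, PROT_READ | PROT_WRITE,
                          MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
        if (addr == MAP_FAILED) {
            perror("mmap(MAP_HUGETLB)");
            return 1;
        }

        /* Touching the region now costs far fewer TLB entries:
         * 128 entries for 256 MiB instead of 65536 with 4 KiB pages. */
        memset(addr, 0, LENGTH);

        munmap(addr, LENGTH);
        return 0;
    }

The same 256 MiB working set needs 128 TLB entries with 2 MiB pages instead of 65536 with 4 KiB pages, which is exactly the kind of pressure relief I mean.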

Why is it expensive?

The kernel's memory is mapped into every running process's address space - that's why crossing into the kernel is fast. The most important player for performance here is the TLB, which caches virtual-to-physical translations and tries to keep them warm as long as possible, so the CPU can avoid costly page-table walks.

With KAISER (or KPTI - kernel page-table isolation) we get two separate sets of page tables, one for kernel space and one for user space. The user-space set is essentially a shadow copy with almost all kernel mappings stripped out.

Hence, every kernel-user-kernel transition or process-to-process context switch now flushes TLB entries, and there goes the performance.
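A rough way to feel that cost yourself (my own sketch, not from any patch set) is to time a cheap syscall in a tight loop - with KPTI every entry and exit crosses the page-table switch, so comparing the per-call time when booted with pti=off (or nopti) against the default makes the overhead visible:

    #define _GNU_SOURCE
    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>
    #include <sys/syscall.h>

    #define ITERATIONS 1000000UL

    int main(void)
    {
        struct timespec start, end;

        clock_gettime(CLOCK_MONOTONIC, &start);
        for (unsigned long i = 0; i < ITERATIONS; i++) {
            /* Use the raw syscall so a cached getpid() in libc doesn't hide
             * the trip into the kernel; each call is a user->kernel->user
             * transition, i.e. exactly what KPTI makes more expensive. */
            syscall(SYS_getpid);
        }
        clock_gettime(CLOCK_MONOTONIC, &end);

        double ns = (end.tv_sec - start.tv_sec) * 1e9
                  + (end.tv_nsec - start.tv_nsec);
        printf("avg per syscall: %.1f ns\n", ns / ITERATIONS);
        return 0;
    }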

Something like context-based TLB flushes would help for process-to-process switches, but for kernel-user-kernel transitions PCID (process-context ID) comes to the rescue: TLB entries tagged with a context ID don't have to be flushed on every switch.
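Whether your CPU actually has PCID (and INVPCID, which the patched kernels rely on to invalidate a single context instead of the whole TLB) is easy to check. A small sketch using GCC's <cpuid.h>, assuming a reasonably recent GCC/clang and the usual bit positions (leaf 1 ECX[17] for PCID, leaf 7 EBX[10] for INVPCID):

    #include <stdio.h>
    #include <cpuid.h>

    int main(void)
    {
        unsigned int eax, ebx, ecx, edx;

        /* CPUID leaf 1: ECX bit 17 reports PCID support. */
        if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx))
            return 1;
        printf("pcid:    %s\n", (ecx & (1u << 17)) ? "yes" : "no");

        /* CPUID leaf 7 (subleaf 0): EBX bit 10 reports INVPCID, which lets
         * the kernel invalidate one context instead of the whole TLB. */
        if (__get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx))
            printf("invpcid: %s\n", (ebx & (1u << 10)) ? "yes" : "no");

        return 0;
    }

The same flags also show up as pcid and invpcid in /proc/cpuinfo.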

Conclusion

If we are now all talking about how bad speculation is, what are your thoughts on speculative next-interrupt prediction?

I think it's more or less the same story from the performance side - especially for networking. Imagine you operate a latency-bound network and, instead of polling or busy polling, you speculate on the next interrupt. After predicting when the next interrupt will probably arrive, you can allocate structures, for instance skbs, in advance, or do something more sophisticated.
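To make the idea a bit more tangible, here is a purely hypothetical user-space sketch, with made-up names and constants, of predicting the next interrupt from an EWMA of inter-arrival times and pre-allocating buffers when it is expected soon. The real thing would live in a driver and allocate skbs, not malloc'ed blobs:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define PREALLOC_THRESHOLD_NS 50000   /* "soon" = within 50 us (arbitrary) */
    #define POOL_SIZE             64

    struct rx_predictor {
        uint64_t last_irq_ns;     /* timestamp of the previous interrupt */
        uint64_t avg_gap_ns;      /* EWMA of inter-arrival times */
        void    *pool[POOL_SIZE]; /* buffers allocated ahead of time */
        int      pooled;
    };

    /* Called from the interrupt path: update the prediction. */
    static void predictor_on_irq(struct rx_predictor *p, uint64_t now_ns)
    {
        if (p->last_irq_ns) {
            uint64_t gap = now_ns - p->last_irq_ns;
            /* EWMA with alpha = 1/8, the same style of smoothing TCP uses for RTT. */
            p->avg_gap_ns = p->avg_gap_ns ? (7 * p->avg_gap_ns + gap) / 8 : gap;
        }
        p->last_irq_ns = now_ns;
    }

    /* Called from a slack moment (idle, a timer): if the next interrupt is
     * expected shortly, fill the pool now instead of on the hot path. */
    static void predictor_maybe_prealloc(struct rx_predictor *p, uint64_t now_ns,
                                         size_t buf_size)
    {
        uint64_t predicted_next = p->last_irq_ns + p->avg_gap_ns;
        bool soon = p->avg_gap_ns &&
                    (predicted_next <= now_ns ||
                     predicted_next - now_ns < PREALLOC_THRESHOLD_NS);

        while (soon && p->pooled < POOL_SIZE)
            p->pool[p->pooled++] = malloc(buf_size);  /* stand-in for skb alloc */
    }

    int main(void)
    {
        struct rx_predictor p = {0};

        /* Feed in a fake stream of interrupts 40 us apart, then ask whether
         * it is worth pre-allocating before the next one. */
        for (uint64_t t = 0; t < 10; t++)
            predictor_on_irq(&p, t * 40000);

        predictor_maybe_prealloc(&p, 9 * 40000 + 1000, 2048);
        printf("pre-allocated %d buffers (avg gap %lu ns)\n", p.pooled,
               (unsigned long)p.avg_gap_ns);
        return 0;
    }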