Intel 13th Gen CPUs

Intel gen 5 PCIe Gotcha
Intel 12th and 13th gen CPUs have 16 lanes of PCIe gen 5, and plugging a PCIe SSD into the NVMe slot closest to the processor, at least on some MSI Z790 motherboards, leaves only 8 lanes for the GPU, which is not enough for an RTX 4090. Neither NVMe SSDs nor GPUs currently support PCIe gen 5, and the RTX 4090 may be the only GPU that can saturate 8 lanes of gen 4 PCIe.
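For anyone who wants to confirm what their GPU actually negotiated, a minimal sketch along these lines can query the current and maximum PCIe link from nvidia-smi (this assumes an NVIDIA card with nvidia-smi on the PATH; also note the link generation can drop at idle for power saving, so read it under load):

```python
# Query the GPU's negotiated PCIe link via nvidia-smi.
# Assumes an NVIDIA GPU and nvidia-smi available on the PATH.
import subprocess

def gpu_pcie_link():
    out = subprocess.run(
        ["nvidia-smi",
         "--query-gpu=pcie.link.gen.current,pcie.link.gen.max,"
         "pcie.link.width.current,pcie.link.width.max",
         "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    gen_cur, gen_max, width_cur, width_max = [f.strip() for f in out.split(",")]
    print(f"PCIe gen {gen_cur}/{gen_max}, width x{width_cur}/x{width_max}")
    # Link gen often drops at idle for power saving; width stuck at x8
    # under load is the symptom of the slot-sharing gotcha described above.
    if width_cur != width_max:
        print("GPU is below its maximum link width - check which M.2 slot the SSD is in.")

if __name__ == "__main__":
    gpu_pcie_link()
```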
 
Thanks. Are all sims single core?
Not completely, but it's the best indicator we've got that correlates well.
And the single-core speeds don't matter. The CPU will not run at its single-core boost speeds even with very old games. Completely ignore single-core speeds; they're just marketing crap and useless unless you're into benchmarking.
My interpretation of what @RCHeliguy said (and with which I agree) is that - for all current sim racing games - the 12700 can already handle more than enough threads that you would gain nothing from the extra cores in the 13900. This means that multi-threaded benchmark results will leave you none the wiser about how much faster a 13900 might run those games. The single-threaded benchmark results will at least give you a decent estimate of the potential improvement to be had. (Single-core clock speeds may be what you're talking about @Spinelli but this is a different point in my view.)
My gut feeling: hardly any sim racers would benefit much from this particular upgrade. People desperate to gain a slight boost for VR might want to go for it, but given that @BillyBobSenna is on triple 1440p, a small variation in frame rate isn't gonna be a biggie.
 
We definitely see a good 5-8% step up from the 12 to the 13 series in iRacing during the worst scenarios, like grid/first laps.
For those who count fps per dollar, maybe not worth it, but no doubt it performs better.
 
We definitely see a good 5-8% step up from the 12 to the 13 series in iRacing during the worst scenarios, like grid/first laps.
For those who count fps per dollar, maybe not worth it, but no doubt it performs better.
PurePc.pl got 79.2 fps for the 12700k and 87.5 fps for the 13900k.
That's 110.5% of the 12700k.

The question is whether a 5800X3D build, after selling the Intel CPU/mobo combo, would be cheaper than the 13900K, lol.
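For anyone double-checking those numbers, the arithmetic is trivial; the prices below are placeholders, not quotes (a quick Python sketch):

```python
# Sanity check of the PurePC.pl ACC figures quoted above.
fps_12700k = 79.2
fps_13900k = 87.5

ratio = fps_13900k / fps_12700k * 100
print(f"13900K is {ratio:.1f}% of the 12700K (+{ratio - 100:.1f}%)")

# fps per dollar with hypothetical prices - plug in your own local numbers.
price_12700k, price_13900k = 400.0, 660.0
print(f"{fps_12700k / price_12700k:.3f} vs {fps_13900k / price_13900k:.3f} fps per dollar")
```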
 
Still waiting for a next gen VR headset.
Yeah I'm waiting for the 7800x3D and the 8600X.
I really want to push my CPU avg fps headroom from 10 fps to 50 fps beyond the capabilities of my monitor (90 fps) :roflmao:
Not really serious obv., but the 1% lows are only around 70-80 fps, so I could argue with some data :D
 
I'm disappointed in the 7950X3D and 7900X3D. They may not have the same kind of insane gaming boosts over the non-3D 7950X as the 5800X3D has over the non-3D 5800X. Why? Because the 3D cache will only be on 1 of the CCDs - so just 8 cores / 16 threads - and those cores will also be clocked lower, at 5.0 GHz, if my sources are correct, while the other 8 cores / 16 threads on the other CCD will have no 3D cache but will therefore keep the higher stock frequency of the non-3D 7950X (5.5 GHz or 5.7 or whatever it is).

This is kind of cool because it allows you to have 8 other cores / 16 threads still clocked very high, which can benefit A) highly multithreaded programs, and B) games that don't, or barely, take advantage of the extra cache (you would then use the 8 higher-clocked non-3D-cache cores for those games).

Better explained here:

Personally I would have preferred an all-or-nothing approach, but the above approach does help in other situations. I honestly think the real reason AMD did this was either heat issues and/or problems with two 3D caches working together while being split across two different CCDs.

Might as well just go for the 7800X3D for the majority of gamers. However, if you were already planning to get a 7950X or 7900X anyways, and you also play games, then you might as well get the 7950X3D/7900X3D.

P.S. I stated previously that it sounds like the 3D cache versions this gen (7000 series) are supposed to be running at the same frequency as the non-3D cache versions. It looks like that's wrong.

My interpretation of what @RCHeliguy said (and with which I agree) is that - for all current sim racing games - the 12700 can already handle more than enough threads that you would gain nothing from the extra cores in the 13900. This means that multi-threaded benchmark results will leave you none the wiser about how much faster a 13900 might run those games. The single-threaded benchmark results will at least give you a decent estimate of the potential improvement to be had. (Single-core clock speeds may be what you're talking about @Spinelli but this is a different point in my view.)
My gut feeling: hardly any sim racers would benefit much from this particular upgrade. People desperate to gain a slight boost for VR might want to go for it, but given that @BillyBobSenna is on triple 1440p, a small variation in frame rate isn't gonna be a biggie.
Yes, my bad. I didn't mean per-core performance or IPC, I meant single-core clock speeds, you're correct. For example, the 5.8 GHz and 6.0 GHz single-core boost clocks on the 13900K and 13900KS respectively - just ignore those. They'll only hit those speeds during single-core benchmark programs or for fractions of a second here and there doing extremely light tasks like booting Windows.

I still don't understand why the single-core boost speeds aren't used even if we specifically go into Windows Task Manager and set the affinity of a program to only use 1 core and/or thread. Yet, when a benchmark runs its purpose-built single-core test, the single-core boost speeds are in fact properly used and held for the entire duration.
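One way to poke at this outside of Task Manager is a rough sketch like the one below: pin the process to a single logical CPU, keep it busy, and watch the reported clock. It assumes the third-party psutil package is installed, and note that on some platforms (Windows in particular) psutil may only report an aggregate frequency rather than a per-core one:

```python
# Pin this process to one logical CPU, load it, and sample the reported clock.
# Requires the third-party psutil package (pip install psutil).
import time
import psutil

proc = psutil.Process()
proc.cpu_affinity([0])                        # restrict the process to logical CPU 0

end = time.time() + 10                        # sample for ~10 seconds
while time.time() < end:
    _ = sum(i * i for i in range(100_000))    # keep the pinned core busy
    freq = psutil.cpu_freq()                  # MHz; aggregate on some OSes
    if freq:
        print(f"reported clock: {freq.current:.0f} MHz", end="\r")
print()
```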
 
PurePc.pl got 79.2 fps for the 12700k and 87.5 fps for the 13900k.
That's 110.5% of the 12700k.
Gosh, that's a larger real-world improvement than I'd have guessed at (tho obvs still too small for me to upgrade). This was iRacing?
Because the 3D cache will only be on 1 of the CCDs - so just 8 cores / 16 threads - and those cores will also be clocked lower, at 5.0 GHz, if my sources are correct, while the other 8 cores / 16 threads on the other CCD will have no 3D cache but will therefore keep the higher stock frequency of the non-3D 7950X (5.5 GHz or 5.7 or whatever it is).
Yup, that's how Steve from GN described it. An interesting design which will be a nightmare for the process scheduler to get right, and I wait with interest to see if people end up having to manually tweak CPU affinities.
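If it does come to manual tweaking, the kind of thing people would end up scripting looks roughly like this. It's only a sketch: it assumes psutil is installed, that the V-cache CCD is exposed as the first 16 logical CPUs (cores 0-7 with SMT, which you'd need to verify on your own system), and "mygame.exe" is just a placeholder process name:

```python
# Pin a running game to the CCD that (we assume) carries the 3D V-cache.
# Requires the third-party psutil package (pip install psutil).
import psutil

GAME_EXE = "mygame.exe"            # placeholder: the game's process name
VCACHE_CPUS = list(range(16))      # assumption: cache CCD = first 16 logical CPUs

for p in psutil.process_iter(["name"]):
    name = p.info["name"]
    if name and name.lower() == GAME_EXE:
        p.cpu_affinity(VCACHE_CPUS)
        print(f"Pinned PID {p.pid} ({name}) to logical CPUs {VCACHE_CPUS}")
```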
Might as well just go for the 7800X3D for the majority of gamers.
Agreed.
I still don't understand why the single-core boost speeds aren't used even if we specifically go into Windows Task Manager and set the affinity of a program to only use 1 core and/or thread. Yet, when a benchmark runs its purpose-built single-core test, the single-core boost speeds are in fact properly used and held for the entire duration.
Damn, that's weird. I haven't tinkered with that stuff for gaming in forever but I'd have expected it to work the same as a single-core benchmark. If your tests have involved games, I can only suppose that some other cores have been active to a great enough degree that the CPU somehow crossed its threshold for single-core running. Baffling though.
 
This was iRacing?
ACC. Why does nobody click on my link :roflmao:

PurePC.pl keeps the benchmarks in English, and Google auto-translate also works very well.
They do 2 things for us simracers very well:
- Test ACC with extreme AI settings
- Do all tests with stock clocks and with really high OC

The OC tests aren't really important with the latest generations, but they let the 9600K almost match the 9900K, since Intel simply lowered the stock boost clocks of the i5 (and i7).
 
PurePc.pl got 79.2 fps for the 12700k and 87.5 fps for the 13900k.
That's 110.5% of the 12700k.

The question is weather a 5800x3D build and selling the Intel cpu/mobo combo would be cheaper than the 13900k, lol.
Top end stuff will always be more expensive than compromises.
 
ACC. Why does nobody click on my link :roflmao:

PurePC.pl keeps the benchmarks in English, and Google auto-translate also works very well.
They do 2 things for us simracers very well:
- Test ACC with extreme AI settings
- Do all tests with stock clocks and with really high OC

The OC tests aren't really important with the latest generations, but they let the 9600K almost match the 9900K, since Intel simply lowered the stock boost clocks of the i5 (and i7).
The testing is only in 1080p, right?
 

Still waiting for a next gen VR headset.
(meme image)
 
I am still enjoying my Index and doing more with it because of my 4090.

Today I was playing "The Climb" for the first time in a long time. The visuals are so much better than they were with the Rift and the vertigo is real. My hands were hurting because I had a death grip on my controllers.

I'm not sure what some of the Oculus titles are doing but they look absolutely fantastic!
 
Intel gen 5 PCIe Gotcha
Intel 12th and 13th gen CPUs have 16 lanes of PCIe gen 5, and plugging a PCIe SSD into the NVMe slot closest to the processor, at least on some MSI Z790 motherboards, leaves only 8 lanes for the GPU, which is not enough for an RTX 4090. Neither NVMe SSDs nor GPUs currently support PCIe gen 5, and the RTX 4090 may be the only GPU that can saturate 8 lanes of gen 4 PCIe.
Ya. It's not a big deal anyways. The CPU has 20x PCI-e lanes, 16x PCI-e 5 and 4x PCI-e 4. The chipset has another 28x PCI-e 4 lanes. There are two "closest to the processor" slots on that motherboard. The other one uses the CPU's 4x PCI-e 4 lanes. Like you said though, if someone doesn't know, they could cut their GPU lanes in half.

There must be a bug in his BIOS or an issue with his BIOS settings if none of the other NVMe slots would boot the drive.
 