Intel 13th Gen CPUs

So the 14th gen has production issues...
14th gen has no production issues. The oxidation issue was confined to one specific plant (an American fab) and affected only a small supply of 13th gen chips.

Meanwhile, I've never been a big water cooling fan. As he mentions below, it's really only good for a few degrees and is mostly an enthusiast thing, and as such it has seen spiraling price increases, with custom loop companies starting to go out of business.

I think it's a little overblown. Lots of what he said can be applied to many products in many industries.

I personally thought (not based on any "hard" info) that custom water cooling was getting a bit of an increase in business over the past few years since CPU temps have really increased. AMD chips deliberately run themselves at 90-95 degrees so they can boost as fast as possible; Intel chips run hot and do a similar thing.
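
To illustrate what that temperature-targeting behavior means (a toy model with made-up numbers, NOT AMD's actual boost algorithm): the chip is basically a thermostat in reverse - it raises clocks until the die hits the temperature target, then holds it there. A better cooler buys you clocks, not lower temps.

```c
/* Toy model of temperature-targeted boost - made-up numbers, NOT
 * AMD's actual algorithm. The controller pushes frequency up until
 * the die sits at its temperature target, so a better cooler (lower
 * k_thermal) converges to higher clocks, not lower temperatures. */
#include <stdio.h>

int main(void) {
    double freq_mhz  = 4000.0;  /* starting clock */
    double target_c  = 95.0;    /* temperature the chip aims for */
    double ambient_c = 25.0;
    double k_thermal = 0.013;   /* deg C per MHz: crude cooler model */

    for (int step = 0; step < 15; step++) {
        double temp_c = ambient_c + k_thermal * freq_mhz;
        printf("step %2d: %6.0f MHz -> %5.1f C\n", step, freq_mhz, temp_c);
        freq_mhz += 25.0 * (target_c - temp_c);  /* proportional nudge */
    }
    return 0;
}
```

This settles at ~5400 MHz pinned to 95 C; lower k_thermal (better cooling) and the same loop settles at higher clocks but the same 95 C.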

I also think - again, not based on any "hard" info - that AIO water coolers have never been as popular and in-demand as they are now (meaning the past 2 or so years), again mostly because of very hot chips.

Custom water cooling has always been mostly an enthusiast thing that makes little sense in terms of price-to-performance. AIOs certainly do make sense though, as most of them are cheap - some even cheaper than air coolers.

Personally, a few months back, I went from using an AIO for the past 12 or so years to a custom-loop, delidded, liquid-metal, direct-die setup (14900KS). I'm basically never heat-limited now: I can overclock and run tests at any voltage, frequency, and power I desire, because heat is no longer the bottleneck.
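
For anyone wondering why removing the thermal ceiling matters so much: dynamic CPU power scales roughly with C·V²·f, so small voltage bumps cost disproportionate heat. A back-of-envelope sketch with made-up numbers (not my actual 14900KS settings):

```c
/* Back-of-envelope: dynamic CPU power scales roughly as P ~ C * V^2 * f.
 * Numbers below are illustrative, not measured 14900KS figures. */
#include <stdio.h>

int main(void) {
    double p0 = 253.0;            /* baseline package power, watts */
    double v0 = 1.30, f0 = 5.7;   /* baseline volts, GHz */
    double v1 = 1.45, f1 = 6.1;   /* overclocked volts, GHz */

    /* scale baseline power by (V1/V0)^2 * (f1/f0) */
    double p1 = p0 * (v1 / v0) * (v1 / v0) * (f1 / f0);
    printf("~%.0f W -> ~%.0f W\n", p0, p1);   /* ~253 W -> ~337 W */
    return 0;
}
```

A ~12% voltage bump plus ~7% more clock works out to roughly a third more heat to dump - which is exactly the headroom direct-die cooling gives you.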

Custom PC watercooling is an enthusiast thing. It's more for fun. It's like people who spend a bunch of money, time, and effort to make their car have, say, 430 hp instead of 400 - in the real world, it barely makes a difference...and, besides you, no one cares.
 
People can be enthusiasts about anything. I consider my PCs to be disposable toasters, so I never have glass panels to see inside. I get a nice, well-ventilated case and a Noctua CPU cooler and call it done - no fuss, and easy to upgrade later. But all that means is that it isn't a priority for me. If I thought it was something cool to do, I'm sure I would have been all in.
 
I saw recently that some motherboards are coming out with all the plugs on the back to make them look cleaner, so there are more improvements on the way for people wanting to show off their computers.
 
It might even be a win for those of us who see no point in pretty PC interiors with coloured lights, by making installation and maintenance easier. Even just getting rid of the power cables to GPUs would already be a big win in that respect (Asus are one of the firms exploring custom PCIe slots for high power delivery).
It's a big change though, so will be interesting to see if it catches on.
 
Well yet another Intel release that isn't worth an upgrade.
Check back next year and maybe the year after that...




Yup. Seems like junk. Intel went after efficiency gains so that the next CPUs after Arrow Lake can bump up performance at decent power figures... OK, sure, whatever you say, Intel. They're still getting slaughtered by AMD, especially the X3D CPUs, and at even lower power figures (not that I personally care about power).

This makes me feel even more satisfied about going with a direct-die, 6.0-6.1 GHz all-core 14900KS.

Having said that, I'll wait for Frame Chasers' review. He'll do two things: disable the E-cores and run the CPU at "max" overclocks. That'll be the real test for me.

P.S. Some people are saying that performance may improve a lot when Intel goes back to its own manufacturing node with Nova Lake. They used TSMC for Arrow Lake. Well, that's 1-2 years away, so who cares for now.
 
The largest Dutch tech/IT website did a benchmark of the new line of Intel processors. To my surprise, they often benchmark ACC on day one, and they've now done so with the new Ultra series. Good for us, of course, because we have something to relate to! Click me. Tested with an RTX 4090.

As someone who's in the market for a new PC this winter, I'm quite disappointed. And AMD is about to release an improved version of their current processors - I read something about an 8% increase in gaming performance - basically setting Intel so far behind it's embarrassing.

 
...and that is why I told people, IF they're not in a hurry, to wait for the 9800X3D / X870.
I got bagged for it, lol.


 
Sometimes it even gets massacred by the 14900K. What a disaster, lol.

The latency on Arrow Lake is an abomination too - like 100 ns, or 80 ns with really good RAM.
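
For reference, those latency figures come from pointer-chasing style tests: chase a chain of randomized pointers through a buffer much bigger than the caches, so nearly every hop is a DRAM round trip. A rough sketch of the idea (real tools like Intel MLC or AIDA64 are much more careful about prefetchers, TLBs, etc.):

```c
/* Rough sketch of how "~100 ns" memory latency gets measured: chase
 * randomized pointers through a buffer far larger than L3 so almost
 * every load is a DRAM round trip. Real tools (Intel MLC, AIDA64)
 * are much more careful; this is just the core idea. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (64u * 1024 * 1024 / sizeof(size_t))  /* 64 MiB buffer */

int main(void) {
    size_t *next = malloc(N * sizeof *next);
    if (!next) return 1;

    /* Sattolo's algorithm: one big random cycle, so the hardware
     * prefetcher can't guess the next address. */
    for (size_t i = 0; i < N; i++) next[i] = i;
    srand(1);
    for (size_t i = N - 1; i > 0; i--) {
        size_t j = (size_t)rand() % i;  /* j in [0, i-1] */
        size_t t = next[i]; next[i] = next[j]; next[j] = t;
    }

    size_t hops = 50 * 1000 * 1000, idx = 0;
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (size_t i = 0; i < hops; i++) idx = next[idx];  /* serial loads */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    printf("~%.1f ns per load (idx=%zu)\n", ns / hops, idx);
    free(next);
    return 0;
}
```

Each load depends on the previous one, so the loop can't be pipelined away - the per-hop time is (roughly) the memory latency.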



Anyone buying a new platform: get the 7800X3D or 9800X, or, preferably, wait for the 9800X3D.

Still waiting for Frame Chasers' review.
 
Currently it appears to be a dice throw in terms of which works better for a given title.

The i9 still wins by 17% in single-thread performance, and even titles like DCS that have added better multi-thread support still run faster on an i9. DCS is supposed to use up to 16 threads across 8 P-cores for rendering, a separate thread for audio, etc., and it still performs better on an i9 than on the 7950X3D.

I don't know if DCS has specific optimizations for Intel chips or why it still runs faster on an i9.

However, for sim racing, both ACC and iRacing do better with the AMD chips right now.

Hopefully the 9800X3D will catch up where it's currently behind and jump further ahead where it's already ahead, while still using less power than the Arrow Lake CPUs, albeit at a higher price tag.
 
It's quite surprising that the 285K is so variable - both in performance relative to competitors like the 14900K or the 7800X3D, and in terms of frame-time spikes during certain benchmarks.
The former may just be a side-effect of it being a somewhat different architecture, and the latter... I dunno, maybe linked to the memory latency issues (but it's pretty nasty). No idea what fraction of these issues can be patched away with BIOS/microcode tweaks, but I guess we'll find out.
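
On quantifying those spikes: the usual way reviewers turn frame-time traces into a single number is the 99th-percentile frame time (reported as "1% lows"). A quick sketch of that arithmetic, with made-up frame times:

```c
/* How reviewers usually quantify frame-time spikes: sort the frame
 * times and report the 99th percentile (the "1% low" figure).
 * Sample data below is made up for illustration. */
#include <stdio.h>
#include <stdlib.h>

static int cmp(const void *a, const void *b) {
    double d = *(const double *)a - *(const double *)b;
    return (d > 0) - (d < 0);
}

int main(void) {
    /* frame times in ms: mostly ~7 ms (~144 fps) with a few spikes */
    double ft[] = {6.9, 7.1, 7.0, 6.8, 7.2, 24.5, 7.0, 6.9, 19.8, 7.1};
    size_t n = sizeof ft / sizeof ft[0];

    qsort(ft, n, sizeof ft[0], cmp);
    double p99 = ft[(size_t)(0.99 * (n - 1))];   /* crude percentile */
    printf("99th-percentile frame time: %.1f ms (~%.0f fps 1%% low)\n",
           p99, 1000.0 / p99);
    return 0;
}
```

The average here is still ~144 fps, but the 1% low lands around 50 fps - which is exactly why a spiky 285K run can look fine on the average bar and still feel bad.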

I'm baffled about how both Intel and AMD have managed to badly botch CPU launches recently. Feels like unforced errors but maybe they had targets they had to hit... 🤷‍♂️

Meanwhile, despite good intentions, I rarely work my CPUs hard for "productivity" stuff, so realistically the only speed-sensitive thing I ever do is gaming. That makes it a total no-brainer for me to wait for the 9800X3D*, though I won't be buying it until after it has been heavily tested. (Trying to learn from the insanity of the last year - with AMD CPUs burning up and Intel CPUs, umm, basically burning up too.)

[*having kinda waited too long on the 7800X3D upgrade I'd planned; its pricing now is as high as the 9800X3D's is likely to be]
 
JayzTwoCents used CUDIMM RAM. Gaming benchmarks are looking a lot better now.

I wonder how much Jay was paid or "influenced" for that video. I've seen quite a few 285K CUDIMM reviews with the 285K still performing poorly.

Correct me if I'm wrong, but I'm pretty sure CUDIMM only makes it easier to hit higher speeds; it's not inherently faster itself (i.e. at the same speeds).

15th gen's / the 200 series' CPU layout is a disaster with all its latency and terrible routing. The ring bus, the core and cluster interconnect layout - everything is horrendous in terms of latency.
 
The thing I remember about the more recent CPUs is that they have larger and faster caches, both primary and secondary. Each time the internal caches improve, main memory speed has less of an impact on overall performance.
 
Exactly, and in the 200 series' case, when things drop out of the cache (L2 and/or L3, can't remember), such as in more heavily CPU-dependent scenarios, performance tanks due to the terrible latencies all over its CPU layout.

Those terrible latencies would be much less of a problem - much less performance-impacting - if the cache were huge, as with AMD's X3D chips, but the cache on Intel's 200 series is just too small.
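
The textbook way to see this is average memory access time: AMAT = hit time + miss rate × miss penalty. A quick sketch with illustrative numbers (not measured 285K or X3D figures) showing how a bigger cache masks a slow trip to DRAM:

```c
/* Why a big cache masks slow memory: average memory access time (AMAT)
 *   AMAT = hit_time + miss_rate * miss_penalty
 * Numbers are illustrative, not measured 285K / X3D figures. */
#include <stdio.h>

static double amat(double hit_ns, double miss_rate, double penalty_ns) {
    return hit_ns + miss_rate * penalty_ns;
}

int main(void) {
    double penalty = 100.0;  /* ns to DRAM, roughly Arrow Lake territory */
    /* small cache: 10% of accesses miss; huge (X3D-style) cache: 2% */
    printf("small cache: %.1f ns avg\n", amat(10.0, 0.10, penalty));
    printf("huge  cache: %.1f ns avg\n", amat(10.0, 0.02, penalty));
    return 0;
}
```

With a ~100 ns miss penalty, cutting the miss rate from 10% to 2% nearly halves the average access time - and the worse the latency, the more a big cache pays off.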

I even read of a person getting big performance gains by disabling all P-cores except one and letting the E-cores do most of the work, in order to avoid some of the architecture's latency hits as much as possible. What a joke.
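
(You don't literally have to disable cores in the BIOS to try that, by the way - pinning a process to chosen cores gets you most of the same effect. A Linux-only sketch; the E-core IDs below are hypothetical, check lscpu for your machine's actual layout.)

```c
/* One way to experiment with core steering without BIOS changes:
 * pin the current process to a chosen set of logical CPUs.
 * Linux-only sketch; which IDs are E-cores varies by machine
 * (check lscpu for the real mapping - IDs 16-23 are hypothetical). */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    cpu_set_t set;
    CPU_ZERO(&set);
    /* Hypothetical layout: logical CPUs 16-23 are E-cores. */
    for (int cpu = 16; cpu <= 23; cpu++) CPU_SET(cpu, &set);

    if (sched_setaffinity(0, sizeof(set), &set) != 0) {
        perror("sched_setaffinity");
        return 1;
    }
    printf("pinned pid %d to CPUs 16-23\n", (int)getpid());
    /* ...launch the latency-sensitive workload from here... */
    return 0;
}
```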

I hope Intel releases Bartlett Lake for Intel 12/13/14 gen owners to easily upgrade to.
 
Just looked into Jay's video a little deeper. Man, is that guy a shill.
  1. He's wrong about CUDIMM inherently bringing performance improvements. CUDIMM performs the same as "regular" RAM - maybe even a touch worse - at the same speeds; it just makes it easier / more likely to achieve higher speeds.
  2. His graphs only show results for 13th and 14th gen chips power-limited to 253 W.
  3. His graphs only show 13th & 14th gen chips running 6400 MHz RAM versus the 285K running 8400 MHz - see the bandwidth math below. Dual-DIMM Z790 boards can usually hit 8200-8600 MHz as long as the CPU's IMC can keep up, and quad-DIMM Z790 boards can often do 7600 MHz fairly easily and 7200 with their eyes closed.
What a guy.
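
For scale on point 3: peak bandwidth for a standard dual-channel (128-bit) DDR5 setup is roughly the transfer rate × 16 bytes, so the RAM gap alone hands the 285K ~30% more raw bandwidth before the benchmark even starts. Quick arithmetic:

```c
/* Raw arithmetic behind those speed grades: peak bandwidth for a
 * standard dual-channel (128-bit) DDR5 setup is MT/s * 16 bytes.
 * Theoretical peaks only; real sustained bandwidth is lower. */
#include <stdio.h>

int main(void) {
    int grades[] = {6400, 7200, 8400};   /* MT/s figures from the thread */
    for (size_t i = 0; i < sizeof grades / sizeof grades[0]; i++) {
        double gbs = grades[i] * 16.0 / 1000.0;  /* MT/s * 16 B -> GB/s */
        printf("DDR5-%d: %.1f GB/s peak\n", grades[i], gbs);
    }
    return 0;
}
```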
 
I know too little about this particular subject, so thanks for the additional info, guys! I'm in the market for a new PC, but it seems better to wait for more reviews/tests and the upcoming AMD processor. I assume you can't make up for bad cache design, so I think I'll just wait until the complete package is reviewable (Z890, 50xx GPUs, BIOS updates, etc.).
 