AMD Ryzen For Simracing?

PBO and tuned negative Curve Optimizer settings are the best way to overclock Ryzen 5000 CPUs. My 5800X is capable of hitting 5 GHz on the boosted cores all the time. You have to spend some time working on it, but essentially you do it with independent per-core settings, and the two fastest cores (identified in Ryzen Master) can't be offset as much as the others. I think I was lucky, as I could do the maximum -30 on all cores and a -15 offset on the two fastest. Use something like the Cinebench all-core test to find stable settings.

The idea is a bit like undervolting per core, and the additional thermal headroom allows a higher boost. All-core overclocking isn't worth doing on Ryzen, in my opinion.
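For anyone who likes to keep track of the process, here's a minimal sketch of that per-core search as code. `apply_curve_offsets` and `run_stress_test` are hypothetical stand-ins; in reality you change the offsets in the BIOS (or Ryzen Master) and run Cinebench yourself:

```python
# Minimal sketch of the per-core Curve Optimizer search described above.
# apply_curve_offsets() and run_stress_test() are hypothetical placeholders:
# in practice you enter the offsets in the BIOS and run Cinebench by hand.

def apply_curve_offsets(offsets):
    """Hypothetical: stands in for setting the values in the BIOS."""
    print("Curve Optimizer offsets:", offsets)

def run_stress_test():
    """Hypothetical: stands in for a Cinebench all-core run."""
    return True  # replace with the actual pass/fail observation

def find_stable_offsets(num_cores=8, best_cores=(0, 1), step=5):
    offsets = {core: 0 for core in range(num_cores)}
    for core in range(num_cores):
        # The two fastest cores (from Ryzen Master) tolerate less offset.
        floor = -15 if core in best_cores else -30
        while offsets[core] > floor:
            trial = dict(offsets)
            trial[core] = offsets[core] - step
            apply_curve_offsets(trial)
            if not run_stress_test():
                break  # keep the last stable value for this core
            offsets = trial
    return offsets

print("Final offsets:", find_stable_offsets())
```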
 
Edit: beat me to it Trebormoore84 lol.

@Yo0601 and all 5XXX owners

I'm new to Ryzen 5XXX myself, but have a look at the "Curve Optimizer" settings in your BIOS. I'm just reading up and haven't tried it yet, but the idea is to undervolt individual cores using an offset. These efficiency gains lead to better boost behaviour.

Here's an article:


Here is a useful tool/thread: (See first post)

 
Many thanks for the links.

So I did more tries using only Ryzen Master on a clean BIOS, PBO on:
PPT limit 130 W
EDC 135 A
TDC 75 A
RAM stock 3400 CL16
paired 1:1 with FCLK 1700; is that better?

Auto OC gives all cores at 4.5 GHz.
PBO gives 4.65 GHz on only 2 cores.
On manual I reached the same limit as my BIOS OC at 4.7 GHz all-core, but temps are worse with RM.
I need to fine-tune the per-core curves in the BIOS, as RM doesn't seem to work when on manual, or just redo the VID offset.

CPU-Z validation: https://valid.x86.fr/e3gwxf
Stable with IntelBurnTest on Very High.
 
My 2 cents about torture tests for stability:
I personally never need the full performance when an application is able to use all cores of the CPU.
This might be different for you due to Blender.

I only need the max boost speeds when games/applications can only make use of 1-3 cores.

So what I did is limit the max power to 90 W, which allows a Cinebench 4-thread run at 4.9 GHz on all cores. (I tested this, and with my system I get some microstutters in games like AC when Windows throws the game onto a different core but that core runs at sub-4 GHz.)

So I locked them all to the same clock when boosting.

As soon as I put Cinebench to 5+ threads, the CPU clocks down dynamically and everything stays stable anyway.

So I only pushed my CPU to be stable at up to 90 W.
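To illustrate why a 90 W cap still leaves full boost for light loads, here's a toy model; the per-core wattage and the quadratic clock/power scaling are rough assumptions for illustration, not measurements:

```python
# Toy model of a 90 W package cap: a few threads can hold the locked boost
# clock, more threads force the CPU to clock down to stay under the cap.
# 18 W per core at 4.9 GHz and power ~ clock^2 are illustrative guesses;
# behaviour below base clock isn't modeled.

def sustained_clock_ghz(threads, cap_w=90.0, boost=4.9, base=3.7, w_at_boost=18.0):
    ghz = boost
    while ghz > base:
        draw = threads * w_at_boost * (ghz / boost) ** 2
        if draw <= cap_w:
            return round(ghz, 1)
        ghz -= 0.1
    return base

for t in (2, 4, 6, 8, 12):
    print(f"{t} threads -> ~{sustained_clock_ghz(t)} GHz under a 90 W cap")
```

With these made-up numbers, 4 threads still hold 4.9 GHz while 6+ threads drop down, which matches the Cinebench behaviour described above.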
 
The last 2 years give me pause; it takes so much money to jump resolutions these days.
I think I will stick with the 5600X + RTX 2080; it runs rF2 pancake-smooth @ 4K DL scaling... :( lol
 
I too plan to stick with AM4.
I have a Ryzen 5 5600X on an MSI B450 Gaming Pro Carbon AC.
It does everything I need for now.
I am planning a GPU upgrade when pricing stabilizes... maybe sometime around March-April.
Decided to undervolt my 5600X last night.
That thing is rock stable and runs ACC and every other title insanely smoothly.
 
Hardware Unboxed have a 7600X review with slightly more comprehensive memory testing. It's early days and people have yet to learn how to get the best from these things.

For gaming though, these seem barely competitive with 12th gen when 13th gen is around the corner. Raptor Lake will likely walk all over these offerings, but then AMD will strap on some extra cache and claim "world's fastest gaming CPU". Marketing wins.
 
Seems like the great value of the Ryzen 5 is gone for now.
$300 mobos without debug codes sounds bad...

I hope DDR5 prices will come down further and that the B650 boards will be good enough for most users at only 120-150€.

Otherwise the 5600 + B550 + DDR4-3600 CL16 will have almost twice the fps per € and the 5800X3D will have far better value for simracing!

I'd like to get a "cheap" B650 Extreme board with a 7600X + DDR5-6000 at around the same fps-per-euro ratio as the 12600K, but investing in the upgrade path of an 8800X3D or whatever might come.

A bit more money now, but overall saving quite a bit compared to getting a 12400F, then a 14400F and a 16400F, or a 5600 non-X + B550 now and then a B750 + 8600X or so.
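A rough sketch of that fps-per-euro comparison; every price and fps figure below is a placeholder assumption, so plug in current local prices and benchmarks of the sims you actually run:

```python
# Rough fps-per-euro comparison of whole platform bundles (CPU + board + RAM).
# All numbers are illustrative placeholders, not quotes or benchmark results.

builds = {
    "5600 + B550 + DDR4-3600 CL16": {"cost": 150 + 110 + 120, "fps": 120},
    "5800X3D + B550 + DDR4-3600":   {"cost": 450 + 110 + 120, "fps": 165},
    "7600X + B650 + DDR5-6000":     {"cost": 330 + 250 + 180, "fps": 155},
    "12600K + B660 + DDR4-3600":    {"cost": 300 + 140 + 120, "fps": 145},
}

for name, b in sorted(builds.items(), key=lambda kv: -kv[1]["fps"] / kv[1]["cost"]):
    print(f"{name}: {b['fps'] / b['cost']:.3f} fps/€")
```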
 
The 7600X should come down to around 250€ soon enough, as will DDR5 pricing as the market saturates. Avoid the cheapest B-series boards and you should be golden even for a 16-core. You may keep the board a long time, so it's better to buy right than to buy cheap.

Same with memory. If you can afford to avoid the cheaper stuff and go for something fast enough to use in a future upgrade, you only buy once.

The only selling point that would sway me towards AM5 right now is the longevity of the socket. Absolutely the best thing about AM4!

I bought an Asus X370 Prime Pro + Ryzen 1600 in May 2017. I have since sold that system on to a friend who, a couple of months ago, had me upgrade it to a 5900X + more memory. I literally cleared the CMOS, added the memory, updated the BIOS, dropped the CPU in, and away it went on the same Windows install. Fantastic stuff.

Just gotta hope AMD stay competitive for another 5 years...
 
They also have DDR4 beating DDR5 on Intel 12th gen by ~7%. Could it be that they used some bargain-bin (as much as that applies here) DDR5 RAM?

EDIT:
Indeed, they did: Corsair Vengeance 5200 MHz CL38.
They seem to use the same slow DDR5 memory for both Intel 12th gen and the Ryzen 7000 series. It probably saves them the effort of redoing the Intel benchmarks, but the slower memory undoubtedly produces relatively poor results for both chips against processors using DDR4. The difference between 5200 MHz and 6000 MHz DDR5 memory could be as high as 10-15%. However, it's still notable that the PurePC ACC results for the Ryzen 7950X and 7700X fall below the Intel 12600K.
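As a sanity check on that range: in a fully memory-bandwidth-bound scene, fps would scale at most with the memory clock, which puts a rough ceiling on the gain:

```python
# Upper bound on what DDR5-6000 can add over DDR5-5200: fps scales at most
# linearly with memory clock, and real games sit well below this ceiling.
slow, fast = 5200, 6000
print(f"theoretical ceiling: {fast / slow - 1:.1%}")  # ~15.4%
```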

In the Hardware Unboxed Ryzen 7600X review mentioned by Bob, the ACC benchmark results are based on 16 cars, medium settings, on a sunny day. Ignoring the 5800X3D, the 7600X comes out on top. In another benchmark, where the settings are cranked up to 29 cars, epic settings, with medium rain, Intel regains the top spot.
Ryzen 7000 ACC benchmark.jpg

With 50 cars on the grid in pouring rain, the PurePC ACC results are probably not that far off. Also, in terms of IPC (Cinebench 15), there is only a 2% difference between 12th gen and the Ryzen 7000 series.
 
If you run sims at 1080p, it means you can't afford this stuff anyway.

Oh look mum, I gain 10 fps at a resolution I don't even use.

This is more like it: 5600X vs 7700X with a 3090 @ 4K.
Now tell me in which of those titles any gain is worth it,
because I can't see diddly squat.

Bupkis ;)

 
In my opinion, if you test something for its performance, you should test at the limit.
CPU benchmarks need to be run at the CPU limit, GPU benchmarks at the GPU limit.

That's independent of the resolution or settings. Testing parts at their limit is part of professional testing.
You don't test the max load of a rope by testing the max "realistic" load for the use case and, if it doesn't break, writing down "good enough".
If a kitesurfing line can hold 1000 kg, that's far more than needed.
But if a 500 kg rope costs 50€ and the 1000 kg rope costs 70€, you might want to invest the extra 20 bucks to be able to pull your car with it, should you get stuck at the beach with only other surfers around and the tow ropes buried below piles of equipment.

You never know if a use case might change completely and it's always good to know the limits.

If you aren't interested in 720p/1080p CPU benchmarks, don't look at them.
And if you are annoyed by them, write to the reviewers to put them at the end of reviews or hide them behind an optional click, lol.

CPU benchmarks in the GPU limit (basically everything at 4K at today's performance level) are pretty useless, since the numbers are either identical to the GPU benchmark results or identical to the lower-resolution CPU benchmark results.
You don't really get anything "new" from these results, apart from checking for issues like a cheaper CPU not having enough PCIe lanes.
It's basically just doing the work for you of looking at the GPU benchmarks and the CPU benchmarks and noting down the limits.

If you can't note down the limits of each part and understand which will apply to your use case, you have bigger problems than being annoyed by the low-resolution CPU benchmarks.

Then people always argue, "But nobody uses such a low resolution."
On the other hand, I know a lot of people using 4K for the clarity and the naturally low aliasing, but only at medium settings rather than everything maxed out.
That results in the fps of 1080p ultra or 1440p high.

You can't know what you'll want in the future. Maybe you'll give or sell your system to a friend or family member who wants to race in esports leagues and will use a 1080p 360 Hz monitor with the lowest graphics settings.

To put some numbers at the end:
Guru3D Far Cry, best and worst AMD/Intel (with a 3090):

1080p:
5800X3D = 169 fps
12900k = 149 fps
9900k = 120 fps
5600G = 106 fps

1440p:
5800X3D = 163 fps
12900k = 143 fps
9900k = 120 fps
5600G = 102 fps

4k:
5800X3D = 113 fps
12900k = 113 fps
9900k = 104 fps
5600G = 94 fps

3090 review with a 9900k:
1080p = 120 fps
1440p = 120 fps
4k = 104 fps

Noting down:
4K and 1440p benchmarks were useless for the CPUs.
1080p and 1440p benchmarks were useless for the 3090.
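"Noting down the limits" is really just taking the minimum of the two. A small sketch using the numbers above, treating the 3090's Far Cry 4K limit as ~113 fps (where both the 5800X3D and 12900k land):

```python
# First-order model: measured fps ~= min(CPU-limit fps, GPU-limit fps).
# CPU limits from the 1080p runs above; 3090 GPU limit at 4k ~113 fps.

cpu_limit = {"5800X3D": 169, "12900k": 149, "9900k": 120, "5600G": 106}
gpu_limit_4k = 113

for cpu, fps in cpu_limit.items():
    print(f"{cpu} @ 4k: predicted ~{min(fps, gpu_limit_4k)} fps")

# 5800X3D and 12900k predict 113 (matches the table); 9900k and 5600G
# come out slightly high (104/94 measured) because CPU- and GPU-bound
# frametimes mix within a run, as noted below.
```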

One important fact though:
CPUs are barely influenced by the resolution used, apart from bandwidth issues in rare cases.
But GPUs don't scale linearly across resolutions!

I'm not 100% sure why the fps are slightly higher at lower resolutions for CPU tests, but I guess some frametimes within the runs are longer on the GPU than on the CPU at higher resolutions, which drags the average fps down slightly.

So I personally would be fine with testing CPUs at 800x600, plus an 8K run as a quick bandwidth check.
GPUs, however, need to be tested at all mainstream resolutions!
 
I think the 1080p benchmark is a great indicator, but it really only matters if you are planning to keep the system for multiple GPU generations. It gives you insight into potential future bottlenecks. The CPU might be a great match today, but by the time you upgrade to something like a (future) 5090 Ti or 8900 XT, that CPU may start to become the weak link even at 4K.

In other words, it helps to temper expectations.
 
Exactly! Although that's only really applicable if you want to play current games at higher fps in the future. Everything else is crystal ball reading.

One thing that's not that useful in general but important for me:
I like to buy whatever is the most reasonable option. For CPUs that means:
- enough fps for my current use case
- the best fps-to-price ratio of all CPUs with enough fps
- the best fps-to-watt ratio

More and more reviewers do this calculation for you, which I really like!
Right now that would be a 12400F with DDR4 and a B660 mobo (see the sketch below).
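Here's a quick sketch of that three-step pick; the fps, price and power numbers are placeholder assumptions:

```python
# The three-step pick from above: filter CPUs that hit the fps target,
# then rank by fps per euro with fps per watt as the tie-breaker.
# All numbers are illustrative placeholders, not benchmark results.

cpus = [
    {"name": "12400F",  "fps": 140, "price": 180, "watts": 80},
    {"name": "7600X",   "fps": 160, "price": 330, "watts": 105},
    {"name": "5800X3D", "fps": 175, "price": 450, "watts": 100},
]
target_fps = 130

viable = [c for c in cpus if c["fps"] >= target_fps]
best = max(viable, key=lambda c: (c["fps"] / c["price"], c["fps"] / c["watts"]))
print(best["name"])  # with these placeholder numbers: 12400F
```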

I'm planning to get a 7600X with undervolting/curve optimizing on a B650(E) mobo and DDR5, of course.
I'm going to wait for the next X3D variant to be released and then do the three calculations above to decide what I'll buy.
I'm hoping the premium for "future-proof" RAM and mobo won't be high enough to kill the fps-to-price ratio...


Very interesting video btw:



2.3% higher fps at 26.2% lower power consumption.
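Those two deltas combine into a single efficiency figure:

```python
# +2.3% fps at -26.2% power works out to roughly 39% more fps per watt.
fps_ratio = 1.023
power_ratio = 1 - 0.262
print(f"fps/W improvement: {fps_ratio / power_ratio - 1:.1%}")  # ~38.6%
```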
 
My R5 5600X was recording in the background during a 20+ minute online race in ACC yesterday.
The thing is smooth as glass.
The video on the undervolting forgot one step.


View attachment 604786
Looks like HWMonitor has some issues reading everything out correctly?
A 5600X at 6.3 GHz is a bit off :roflmao:

And I'm not sure what you want to say with that screenshot?
Not wanting to be mean or anything, just wondering...
No loads are visible, and the current overall load is 3.6%, so I guess it was taken after the race/recording.

The max values take the loading screens etc. into account, so max values alone can't be trusted most of the time.
Max values when starting Chrome after a reboot would show 100% load, 1.385 V vcore, 85°C and hitting the power limit on my 10600K.
Only for 2 seconds though.

You don't show any fps data or what kind of recording (quality, CPU/GPU encoding, etc.) you used.
So all we can see is:
- it didn't overheat
- max power was 78 W
- max vcore was 1.216 V
- max VID was 1.237 V, so it had a -0.021 V undervolt at max vcore
- it wasn't fully utilized, so 12 threads were enough (max load 64.3%)

So the value of your post, from my perspective, comes down to "the performance of this CPU was good enough for me and it runs within spec".
But maybe I'm missing something here?

If you want a better read-out, I'd recommend using HWiNFO64 in "sensors only" mode (it asks when launching).
It has min/max and average!
To reset these, simply click the clock button at the bottom.

The best results come from resetting after all the loading is done:
when you're in the pit menu on track and the recording has started.
Then take the screenshot right before stopping the recording, while still in the pit menu.

This way min/max/avg will be accurate.
If you reset before the loading screens and take the screenshot after the recording is stopped, the average will still be very accurate, but the min/max can be ignored.
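If you'd rather do the trimming after the fact, HWiNFO can also log to CSV and you can compute min/max/avg over just the in-race window offline. A minimal sketch; the file name, column label and row range are assumptions that depend on your own log:

```python
# Offline version of the "reset in the pits" trick: compute min/max/avg
# from a HWiNFO CSV log, restricted to the rows recorded during the race.
# "hwinfo_log.csv", the column label and the row range are assumptions.
import csv
from statistics import mean

def window_stats(path, column, first_row, last_row):
    values = []
    with open(path, newline="", encoding="utf-8", errors="ignore") as f:
        for i, row in enumerate(csv.DictReader(f)):
            if first_row <= i <= last_row:  # skip loading screens and menus
                values.append(float(row[column]))
    return min(values), max(values), mean(values)

lo, hi, avg = window_stats("hwinfo_log.csv", "CPU Package Power [W]", 120, 1400)
print(f"min {lo:.1f} / max {hi:.1f} / avg {avg:.1f} W")
```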
 
Yeah! I saw that.
I was more interested in the cooling and power draw.
 
