MrFuzzy

Intel is still the best choice for MSFS (and other games)


1 hour ago, Nedo68 said:

no difference at 4k.

This is important to me. After running my sim on a 43" 4K monitor, I can't go back to 1080p, evah. 

The 4K results would also be comparable for VR.

1 hour ago, Colonel X said:

It's been the same for a long time. Intel leads the race for a premium price. Sometime later, AMD will come up with a comparable performing CPU for a smaller price. Since MSFS doesn't need more than 6-8 cores, AMD's big advantage of many cores for good value doesn't really work here. 

This is somewhat true for now, but for how long?

Obviously, if anyone is going 4K, you may as well get the cheapest option, seeing that the high-end Intel and AMD CPUs are equal on FPS.


AMD Ryzen 5800X3D, GIGABYTE X570 Aorus Ultra, 32GB DDR4 3600 MHz RAM, 2* 1 TB SSD,1*4 TB M.2, 2*1 TB M.2, (4TB) HDD,RADEON RX 6800XT NITRO+ OC SE 16GB GDDR6, NZXT Kraken X73, NZXT 710 Case, X55 JOYSTICK/THROTTLES, LG 4K monitor, Dell 1080 monitor. Honeycomb Alpha Yoke, Bravo Throttle. Thrustmaster TPR Pedals. Tobii Eye tracker.


If you have a high-end GPU, play at 1080p (sometimes 1440p with an RTX 3080), and want to push as many frames as possible (mostly for competitive FPS games and high-refresh-rate monitors), then Intel is indeed the best choice, provided you can live with some Comet Lake drawbacks (obscene power consumption and the security vulnerabilities inherent to the Skylake architecture).

What's for sure is that Zen 3 will take the clear lead in single-threaded performance soon, so Intel's only advantage left will be gone then (and could stay like that for a long time unless they manage to secure enough capacity from other foundries).



A chart doesn't really mean much without specifying dollar spent per FPS gained, because that's all that matters in the end.

It totally depends on someone's budget and how much of it they allocate to the video card vs. the CPU. AMD is hard to beat on CPUs under $300, which leaves room in some people's budgets to go up one notch on the video card.

Intel and AMD aren't that different on price/performance until you get under $300; then AMD kind of wins (at least when I checked six months ago, not sure about now). Otherwise, Intel wins on the more expensive CPUs, for this game at least.

Prices are constantly changing, so I'm not sure about right now, as I haven't checked in a while. AMD and Intel are in a non-stop pricing battle, so you can usually find a deal here and there.


AMD 5800x | Nvidia 3080 (12gb) | 64gb ram


In my opinion, Intel is NOT the best choice. The performance gain comes with such a huge price tag that it is simply not worth it.


18 minutes ago, eaim said:

This is somewhat true for now, but for how long?

Obviously, if anyone is going 4K, you may as well get the cheapest option, seeing that the high-end Intel and AMD CPUs are equal on FPS.

Yes, exactly! I see two reasons why MSFS may make better use of more cores in the future: DX12 distributes the rendering/drawcall workload better across many cores, and airliners can delegate the simulation of their complex systems to other cores. The whole issue with FPS hits from the 787 glass cockpit shows that it's absolutely worth considering. There's no reason why that should run on the main thread, as it apparently does, and multi-threading the screen updates is comparatively straightforward. It wouldn't surprise me to see these optimisations in a future update, and I'm sure 3rd-party airliners will use multi-threading to improve performance.
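As a rough sketch of the kind of delegation I mean (all names and numbers here are made up for illustration, this is not Asobo's code), a systems simulation can run on its own worker thread and publish state snapshots that the main/render thread reads without ever waiting for a simulation step to finish:

```cpp
#include <atomic>
#include <chrono>
#include <mutex>
#include <thread>

// Hypothetical avionics state published by the systems thread.
struct AvionicsState {
    double n1 = 0.0;             // engine fan speed, % (illustrative)
    double cabinAltitude = 0.0;  // ft (illustrative)
};

class SystemsSimulator {
public:
    void start() {
        worker_ = std::thread([this] {
            while (running_.load()) {
                AvionicsState next = step();  // heavy systems math, off the main thread
                {
                    std::lock_guard<std::mutex> lock(mu_);
                    latest_ = next;           // publish a finished snapshot
                }
                std::this_thread::sleep_for(std::chrono::milliseconds(10));
            }
        });
    }

    void stop() {
        running_.store(false);
        worker_.join();
    }

    // The render thread grabs the most recent snapshot; it never blocks
    // on the simulation step itself.
    AvionicsState snapshot() {
        std::lock_guard<std::mutex> lock(mu_);
        return latest_;
    }

private:
    AvionicsState step() {
        AvionicsState s;
        s.n1 = 85.0;              // placeholder for a real engine model
        s.cabinAltitude = 8000.0; // placeholder for a real pressurisation model
        return s;
    }

    std::mutex mu_;
    AvionicsState latest_;
    std::atomic<bool> running_{true};
    std::thread worker_;
};
```

The point of the snapshot pattern is that the render thread's cost is one mutex lock and a small copy per frame, regardless of how expensive the systems math gets.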

 


My simming system: AMD Ryzen 5800X3D, 32GB RAM, RTX 4070 Ti Super 16GB, LG 38" 3840x1600


All that tells me is that I can use anything listed with perfectly satisfactory results. When are we going to stop all this FPS nonsense and get back to flying?


Intel 10700K @ 5.1Ghz, Asus Hero Maximus motherboard, Noctua NH-U12A cooler, Corsair Vengeance Pro 32GB 3200 MHz RAM, RTX 2060 Super GPU, Cooler Master HAF 932 Tower, Thermaltake 1000W Toughpower PSU, Windows 10 Professional 64-Bit, 100TB of disk storage. Klaatu barada nickto.

1 hour ago, Farlis said:

In my opinion, Intel is NOT the best choice. The performance gain comes with such a huge price tag that it is simply not worth it.

Well, the price tag has already come down a good bit since AMD introduced the Ryzen 3000 series. But Ryzen CPUs already have a lot going for them, such as good multi-threaded performance, better TDP, and fewer issues with Meltdown/Spectre-type vulnerabilities (okay, fixed for the latest-gen Intel parts). That's why many users couldn't care less about missing the single-threaded performance crown by 10-20%. For now that crown is clearly Intel's, but AMD has been closing in, and it looks like they'll finally close the gap with Zen 3 in October, or maybe even pull ahead of Intel.

After many years of relative boredom in the hardware world, we're finally seeing some action! Intel vs. AMD, Nvidia's 3000 series, AMD's upcoming Big Navi touted to attack Nvidia's high end - it's definitely an exciting fall season!

And keep in mind that CPU performance has a lot to do with architecture-specific software optimisations, be it with respect to cache sizes, out-of-order execution, or available SIMD instruction sets such as SSE4.1/4.2, AVX2, etc. With Intel dominating the CPU market over the last decade, there was little incentive for developers to spend extra time optimising for AMD CPUs. While this may not make a big difference in many situations, there have been cases where popular tools and libraries used optimised SSE4 and AVX2 code paths on Intel CPUs but fell back to a slow path on AMD CPUs, even when those supported all the necessary instructions. This happened with Intel's Math Kernel Library (MKL) and with Matlab, probably among other software, and it can make a difference of 3x or more (Matlab example: https://www.extremetech.com/computing/308501-crippled-no-longer-matlab-2020a-runs-amd-cpus-at-full-speed ). I'm not even saying this was intentional (although it's a distinct possibility), but in a world where everyone in need of raw compute power uses an Intel CPU, there's no good reason to optimise your code for the slower competitor. With an increasing market share, software is bound to get better optimisation for AMD too.
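For illustration, here is roughly what dispatching on the actual CPU feature rather than the vendor string looks like with GCC/Clang on x86. The function names are hypothetical; `__builtin_cpu_supports` and `__builtin_cpu_init` are real compiler builtins (MSVC would need `__cpuid` instead), and gating the fast branch on "GenuineIntel" instead of the feature flag is exactly the MKL-style pitfall described above:

```cpp
#include <cstddef>

// Plain fallback loop; works on any CPU.
static double sum_scalar(const double* v, std::size_t n) {
    double s = 0.0;
    for (std::size_t i = 0; i < n; ++i) s += v[i];
    return s;
}

// Hypothetical dispatcher. Checking the feature flag itself means an
// AVX2-capable AMD chip takes the fast path too; a vendor-string check
// would send it down the slow one.
double fast_sum(const double* v, std::size_t n) {
    __builtin_cpu_init();  // must run before __builtin_cpu_supports
    if (__builtin_cpu_supports("avx2")) {
        // A hand-written AVX2 kernel would go here; the scalar loop
        // stands in for it to keep this sketch self-contained.
        return sum_scalar(v, n);
    }
    return sum_scalar(v, n);
}
```

The same idea applies at whatever granularity a library dispatches: per function pointer, per DLL, or per whole code path.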

 


My simming system: AMD Ryzen 5800X3D, 32GB RAM, RTX 4070 Ti Super 16GB, LG 38" 3840x1600


As with most things in life, it depends. The difference between AMD and Intel at 4K is imperceptible, with Intel taking the lead as resolution lowers. This is due to the load skewing toward the CPU as framerates increase with lower resolution and Intel's current core clock advantage. AMD is the clear choice for simmers who also do 3D modeling and video rendering since those applications scale very well with AMD's larger core counts.

Getting MSFS to scale across a large number of CPU cores is extremely difficult; it's not just a trivial matter of optimizing code. Many of the calculations required for simulation can't run concurrently on separate cores because of dependencies in the calculation process. For example, if I sent two people off into separate rooms, each with one of these two equations, neither could ever pin down the values of x and y on their own. The system must be solved together, step by step. It's a trivial example, but imagine this playing out in complex rendering calculations that need to run many times per second.

3x - y = 7

2x + 3y = 1
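Worked through by substitution, the sequential dependency is plain: x has to be known before y can be computed, so there is nothing to hand to a second core.

```cpp
// Solving the pair above by substitution. Every step consumes the
// result of the previous one:
//   3x - y = 7   =>  y = 3x - 7
//   2x + 3y = 1  =>  2x + 3(3x - 7) = 1  =>  11x = 22  =>  x = 2
double solve_x() { return 22.0 / 11.0; }            // x = 2
double solve_y(double x) { return 3.0 * x - 7.0; }  // y = 3*2 - 7 = -1
```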


13 minutes ago, pstrub said:

Meltdown / Spectre type vulnerabilities (ok, fixed for latest gen Intel)

Those two seem to be fixed by now, but unfortunately more vulnerabilities keep showing up every now and then, albeit less serious ones. It is obvious by now that Intel relaxed their approach to security when designing the Skylake architecture in exchange for easier performance gains, which is why I can't recommend that architecture anymore. Zen 2 is much superior, although somewhat held back by DRAM latency and the MCM design in the Matisse implementation (Renoir is even better; its efficiency is just incredible). Even if they're stuck on an old node, Intel needs to get rid of Skylake as soon as possible.


On 9/23/2020 at 6:29 AM, Hyperfocal said:

Getting MSFS to scale across a large number of CPU cores is extremely difficult; it's not just a trivial matter of optimizing code. Many of the calculations required for simulation often can't be programmed to run concurrently in separate cores due to dependencies in the calculation process.

DirectX 12 might make threading optimization easier; I don't know, as I don't often do that kind of programming. This type of optimization is always tricky, though. As you probably already know, it's not hard to write threading code, but it's hard to write perfectly optimized threading code. This game is reasonably well optimized, but it also has to download scenery on the fly, which still causes those pauses (I think the servers just get busier at times, but I'm not sure). I know the download itself doesn't have much to do with optimization, but it does mean they have to keep track of a lot more threads. That made their threading code even harder to write than in other games, which only download saves, updates, and maybe some live themes.

Well, this game has to do all of that while fighting those aerial imagery downloads in real time, so they have to get thread concurrency nailed down very well or they'll hit locking issues that could even affect how well Windows responds. It's just difficult.
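A minimal sketch of the kind of hand-off involved (purely illustrative, not the sim's actual code): a network thread pushes finished imagery tiles into a locked queue, and only the loader thread ever blocks on it, so the render thread never waits on the network.

```cpp
#include <condition_variable>
#include <mutex>
#include <queue>
#include <string>
#include <utility>

// Hypothetical tile-streaming queue. The tile payload is just a string
// here to keep the sketch self-contained.
class TileQueue {
public:
    // Called by the network/download thread.
    void push(std::string tile) {
        {
            std::lock_guard<std::mutex> lock(mu_);
            tiles_.push(std::move(tile));
        }
        cv_.notify_one();  // wakes the loader thread, never the render thread
    }

    // Called by the loader thread; blocks until a tile is available.
    std::string pop() {
        std::unique_lock<std::mutex> lock(mu_);
        cv_.wait(lock, [this] { return !tiles_.empty(); });
        std::string t = std::move(tiles_.front());
        tiles_.pop();
        return t;
    }

private:
    std::mutex mu_;
    std::condition_variable cv_;
    std::queue<std::string> tiles_;
};
```

Get this boundary wrong (say, by taking the queue lock on the render thread while a slow download holds it) and you get exactly the stutters and system-wide hitches described above.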

Asobo did a great job technically, and I'm sure they had long discussions about this. Sure, it's not in the top 10 best-optimized games, but they're at a disadvantage compared to others because of the complexities above. For how well optimized it is, it runs above average; I'd say between good and great, but not excellent.

Red Dead Redemption 2 has some similarities to FS 2020 in what both had to accomplish in the game engine, and RDR 2 did it better in some ways. I'm not a huge gamer now that I'm hitting late middle age, but I do tend to check out the graphics of the latest games to see what's going on. RDR 2, just like a flight sim, has to draw huge viewing distances with vegetation and trees. It uses DX12 or Vulkan (you can switch between the two), and I'd say it probably has the edge in optimization, but not by much.

RDR 2's water and weather effects are the ones to beat, though. Once we get to RDR 2's level of weather effects, the game will feel 90% real or more.

As for which did it better, they both have their strengths. RDR 2 is the one to beat overall, though, along with maybe some of the Tomb Raider games, and TR also did best on a mega PC, though it ran okay on regular PCs.

RDR 2 has the advantage of a static world, whereas FS 2020 does not, and that's a big difference. But they should look at RDR 2's weather effects and build something similar; it doesn't even have to be as good, that's just an example.


AMD 5800x | Nvidia 3080 (12gb) | 64gb ram

On 9/23/2020 at 3:16 AM, SierraHotel said:

Complete BS

How is what he posted objectively wrong? The benchmarks speak for themselves. This isn't a matter of opinion. 



AMD's chiplet strategy, versus Intel's monolithic dies, was originally seen as a cost-cutting measure: chiplets yield better than monolithic chips, and a defect only costs you a small chiplet. They also let you scale up to more cores relatively simply, but with some performance hits.

Of course, what we've seen over the past few years is that chiplets, aside from yielding better and allowing more cores, have let AMD move to 10 nm and better, while Intel has been stuck at 14 nm seemingly forever.

Basically, the Intel approach is better in theory, but in practice the chiplet-based AMD processors at 10 and 7 nm are cheaper and are actually catching up with Intel in terms of single-thread performance.

Which to buy? It's not always an easy decision. The partisans on both sides are very biased (while foaming at the mouth about how they have the true "facts" of the matter). To make matters worse, most reviews focus on high-end processors, and the brand that wins at the top is not necessarily the best buy in the mid or low price range. Basically, pick the dollars you want to spend, then compare whatever reviews exist of the choices at your price point.


On 9/23/2020 at 4:04 AM, Ricardo41 said:

Intel vs. AMD, AMD vs. Nvidia. Here we go again. Threads like this one usually don't end well.

 

On 9/23/2020 at 4:14 AM, GSalden said:

We just have to accept it and move on ... 🤓

Yes, and therefore, I have my bucket of buttered popcorn and I am in before the lock. 😜


My computer: ABS Gladiator Gaming PC featuring an Intel 10700F CPU, EVGA CLC-240 AIO cooler (dead fans replaced with Noctua fans), Asus Tuf Gaming B460M Plus motherboard, 16GB DDR4-3000 RAM, 1 TB NVMe SSD, EVGA RTX3070 FTW3 video card, dead EVGA 750 watt power supply replaced with Antec 900 watt PSU.


Here's my question: why do you need more than 40 FPS? Can't the human eye only see around 35 FPS? I've let games go hog wild at 130+ FPS, then vsynced to 60, and noticed NO difference. So even if it's much higher, will you really notice? Also, once DirectX 12 drops, you might just see AMD lead the pack.


