TuFun

140 FPS in Microsoft Flight Simulator

Recommended Posts

With DLSS 3 on, that image was still only rendering at 2560 x 1440. Since there is plenty of performance headroom to spare, I would like to see what that 4090 can do rendering 3840 instead of 2560.


AMD Ryzen 9 7950X3D | RTX 4090 | 48GB DDR5 7200 RAM | 4TB M.2 NVMe SSD | Corsair H150i Liquid Cooled | 4K Dell G3223Q G-Sync | Win11 x64 Pro

18 hours ago, TravelRunner404 said:

The one place the 4090 shines is VR.  My Varjo Aero can now hit the 2D CPU speed limit in VR with DLSS Quality and Ultra settings, or I can use DLAA/TAA and motor on at 35 FPS with those settings.  It's double what the 3090 was offering at those high resolutions.  If you are looking to push resolution that's where it separates itself generationally from the 30 series.

I assume you're using DLAA to supersample at a resolution that exceeds the Aero's native res. If that's true, it's very impressive to say the least (and with Ultra settings too), and I'm curious what your supersample ratio is. Personally, for VR I'll skimp a bit on image quality if it buys me a noticeable fps boost, hence the settings I use.


Rod O.

i7 10700k @5.0 HT on|Asus Maximus XII Hero|G.Skill 2x16GB DDR4 4000 cas 16|evga RTX 3080 Ti FTW3 Ultra|Noctua NH-D15S|Thermaltake GF1 850W PSU|WD Black SN750 M.2 1TB SSD (x2)|Plextor M9Pe .5TB NVMe PCIe x4 SSD (MSFS dedicated)|Fractal Design Focus G Case

Win 10 Pro 64|HP Reverb G2 revised VR HMD|Asus 25" IPS 2K 60Hz monitor|Saitek X52 Pro & Pedals|TIR 5 (now retired)


So based on my very limited reading about DLSS 3, it seems that what it does is interpolate an extra frame independently of the CPU and insert it between the real frames generated by the CPU and GPU together. So if your CPU is, say, in the 90 to 100% utilization range, your GPU is in the 60 to 70% range, and you are getting about 30 fps, that's pretty much it, otherwise known as a CPU bottleneck. Given the same level of input from the CPU for those 30 fps, DLSS 3 allows the GPU to interpolate an additional frame on its own without additional load on the CPU. What stood out in that video for me is that MSFS developer mode only showed the frame count that involved the CPU, whereas RTSS showed the frame count from the GPU, which was about double.

Well, my TV can do that. "Judder reduction" on a Samsung 4K TV is a setting that smooths out 24 fps by interpolating extra frames. I don't know exactly how many extra frames it adds, but I'd say it adds one interpolated frame for every real frame it gets. So for a 24 Hz signal it will show you 48 fps, which should be pretty smooth. I'll bear witness to the fact that it is perfectly smooth. Judder reduction also works perfectly for 30 Hz and 50 Hz (not 50 Hz in flight sim, but for sim racers). So when I have my system set up with V-sync on for 30 fps at 30 Hz with the judder reduction filter on, I'm actually seeing 60 fps.

The MSFS developer mode FPS counter shows 30 fps and RTSS also shows 30 fps, but I'm seeing 60 fps. As I see it, a 4090 is a slightly improved 3090, but instead of the judder reduction being on the TV end of the HDMI cable, it's on the GPU end of the HDMI cable. And instead of calling it judder reduction, they give it a sexy name like Deep Learning Super Sampling 3.
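To make the "interpolated extra frame" idea concrete, here is a minimal Python sketch of the simplest possible version: blend two real frames into one synthetic in-between frame so a 30 fps stream is presented at 60 fps. The function name and the random placeholder frames are made up for illustration; real motion-compensated interpolation (whether a TV's judder reduction or DLSS 3 Frame Generation) is far more sophisticated than a plain crossfade.

```python
import numpy as np

def interpolate_frame(frame_a, frame_b, t=0.5):
    """Naive in-between frame: a straight per-pixel blend of two real frames.

    Real motion-compensated interpolation (a TV's judder reduction or DLSS 3
    Frame Generation) also estimates motion, so this only illustrates the
    rate-doubling idea, not how either product actually works.
    """
    blended = (1.0 - t) * frame_a.astype(np.float32) + t * frame_b.astype(np.float32)
    return blended.astype(frame_a.dtype)

# Placeholder "rendered" frames: two random 1080p RGB images.
prev_frame = np.random.randint(0, 256, (1080, 1920, 3), dtype=np.uint8)
next_frame = np.random.randint(0, 256, (1080, 1920, 3), dtype=np.uint8)

mid_frame = interpolate_frame(prev_frame, next_frame)

# Presenting prev, mid, next doubles the displayed rate: 30 real fps -> 60 shown.
print(mid_frame.shape, mid_frame.dtype)  # (1080, 1920, 3) uint8
```

A plain crossfade like this is why cheap interpolation smears moving objects; motion-aware approaches (discussed further down the thread) exist precisely to avoid that.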

Edited by FBW737

26 minutes ago, FBW737 said:

The MSFS developer mode FPS counter shows 30 fps and RTSS also shows 30 fps, but I'm seeing 60 fps. As I see it, a 4090 is a slightly improved 3090, but instead of the judder reduction being on the TV end of the HDMI cable, it's on the GPU end of the HDMI cable. And instead of calling it judder reduction, they give it a sexy name like Deep Learning Super Sampling 3.

Sounds like it to me as well (just an upsample/supersample trick with added frame interpolation), and whenever a marketing team wants to hype something, they always add "AI" to it. There's no real AI involved, as it would bottleneck the CPU (or GPU); any real AI in image processing is very CPU heavy in most cases and out of reach of current machines. There is currently all kinds of AI work being done in rendering, and some of it can take absolutely forever to finish a render, much less trying to do it in real time. Reminds me of Sony's claim that they had AI image processing on their projectors; they had a lookup table for certain movies. It was just a well-done generic sharpening filter. There is no enforcement in marketing; they can pretty much say whatever they want (even up to the point of complete misrepresentation of functionality).

That said, for people with money to burn, the 4090 is still worth it, simply because they have money to burn. No disrespect meant for anyone with a 4090 of course, I would buy one too if I had the money to burn.

Edited by Alpine Scenery

AMD 5800x | Nvidia 3080 (12gb) | 64gb ram


Remember when everyone in the forum would blast people for wanting more than 25 fps because that's all your eyes could see? Ahhh, those were the good old days :laugh: Now it's 140 fps, omg!!



AME GE90, GP7200 CFM56 


So much confusion here (as usual). If you're sceptical about the frame boost that DLSS 3 + Frame Generation gives, you have NOT UNDERSTOOD it yet.

The AI IS REAL and happens on the Tensor cores of your GPU. The 4090 has 512 of these specialized cores.

These are not to be confused with the 16k+ CUDA cores that rasterize the frames (turn the 3D mathematical model into 2D frames).

The magic is the frame generation. DLSS v2 has to predict the AI-generated frames from only ONE normally rendered frame, and this leads to unwanted artefacts and serious blurring in many situations. DLSS v3, on the other hand, takes TWO normally generated frames, so the AI has a start and an end point to work with.

Add to this the motion vectors that help the AI to figure out which way things in the scene are moving from frame to frame.

The end result is free (from the CPU point of view) frames and increased smoothness with minimal artifacts.

The tradeoff is added latency due to the required TWO normally generated frames, but this only really impacts competitive gaming and not flight sims.

The DLSS part handles only the upscaling and is mostly useful at 4K resolutions, but might be OK at 1440p as well.

You should be able to activate AI frame generation only and render at native resolution with TAA. This might be the better option for 1440p users.

I have neither MSFS nor a 4000 series card yet, but I understand the technology, and this is the way forward, no doubt.

It is not snake oil. It's the bee's knees right here, folks. But man, is it expensive...
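For anyone who wants the description above in code form, here is a hedged toy sketch in Python of motion-vector-guided interpolation: two rendered frames plus a per-pixel motion field, warped and blended into an in-between frame. All function names, the nearest-neighbour warp, and the placeholder data are invented for illustration only; NVIDIA's actual frame generation pipeline (per their materials, a hardware optical flow accelerator plus a neural network on the Tensor cores) is far more sophisticated and is not public at this level of detail.

```python
import numpy as np

def warp_toward(frame, motion, t):
    """Approximate where each pixel was a fraction t ago along its motion vector.

    frame:  (H, W, 3) image
    motion: (H, W, 2) per-pixel motion in pixels (dy, dx) from this frame to the next
    Nearest-neighbour gather at (y, x) - t * motion: a crude backward warp meant
    only to illustrate the use of motion vectors, not a production-quality warp.
    """
    h, w = frame.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    src_y = np.clip(np.rint(ys - t * motion[..., 0]).astype(int), 0, h - 1)
    src_x = np.clip(np.rint(xs - t * motion[..., 1]).astype(int), 0, w - 1)
    return frame[src_y, src_x]

def generate_in_between(frame_a, frame_b, motion_a_to_b, t=0.5):
    """Toy frame generation: warp frame A part-way along its motion vectors,
    then crossfade with frame B to paper over the holes the warp leaves."""
    warped_a = warp_toward(frame_a, motion_a_to_b, t)
    blend = (1.0 - t) * warped_a.astype(np.float32) + t * frame_b.astype(np.float32)
    return blend.astype(frame_a.dtype)

# Placeholder data: a random 720p frame and a uniform 4-pixel pan to the right.
frame_a = np.random.randint(0, 256, (720, 1280, 3), dtype=np.uint8)
frame_b = np.roll(frame_a, shift=4, axis=1)
motion = np.zeros((720, 1280, 2), dtype=np.float32)
motion[..., 1] = 4.0  # dx = +4 pixels per frame

middle = generate_in_between(frame_a, frame_b, motion)
print(middle.shape)  # (720, 1280, 3)
```

A gather-style warp like this leaves holes and halos around moving edges, which is exactly the class of artefact the second rendered frame and the network are there to clean up in the real thing.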



It makes sense to get as much power as you can, since there are tons of addons that can slow things down. That said, it doesn't make sense to try to target FPS higher than 60 with everything active; there is no benefit in a flight sim over 60 fps. Even from 40 to 60 the benefit is very debatable.

 

 


AMD 5800x | Nvidia 3080 (12gb) | 64gb ram

1 hour ago, neumanix said:

The AI IS REAL and happens on the Tensor cores of your GPU. The 4090 has 512 of these specialized cores.

These are not to be confused with the 16k+ CUDA cores that rasterize the frames (turn the 3D mathematical model into 2D frames).

The magic is the frame generation. DLSS v2 has to predict the AI-generated frames from only ONE normally rendered frame, and this leads to unwanted artefacts and serious blurring in many situations.

AI can be called anything; frame generation is not AI, sorry. This isn't AI as I would generally classify it.

If..then statements can be classified as AI (wizards used to be called AI). When you throw the word "learning" into the algorithm's name, it presupposes that the algorithm gets more advanced as time passes, because it is, well, "learning". There is nothing "learning" about this algorithm: fire it up, and in the first 60 seconds it will be just as good as it is an hour later. Apparently you want to defend the marketing, but there is no predictive AI in the frame generation. I've seen predictive AI frame generation; it takes about 2.5 hours to render one frame. The statistics needed to process a 4K image in a way that lends itself to true AI require a ridiculous amount of image processing and floating-point calculations. Trying to improve frame generation with AI is like trying to make a tractor faster by loading it up with concrete. Every major company calls everything AI these days, are you not paying attention...

The only way to improve frame generation with AI would be to use a master cloud database that had game-specific attributes in it that you could query in real time, so the processing didn't have to be so heavy. Everything would have to be done in a lookup table and be OBJECT specific, and the labor required to generate all those parameters ahead of time would require a whole new type of game engine that output data to a database for image-gathering info. The amount of power required for something that actually teaches itself to do things in advanced imagery is off the charts. Sure, there are simple AI algorithms, but they're not going to be advanced enough to improve real-time renders; we're not even 5% of the way there yet. So it is entirely pointless to add AI to real-time image processing at this point, which means it's marketing AI.

Every marketing team in every company is told to use the word AI.

 

Edited by Alpine Scenery

AMD 5800x | Nvidia 3080 (12gb) | 64gb ram

6 minutes ago, Alpine Scenery said:

AI can be called anything; frame generation is not AI, sorry. I do understand frame generation, and I've coded some AI before. This isn't AI as I generally classify it.

If..then statements can be classified as AI (wizards used to be called AI). When you throw the word "learning" into the algorithm's name, it presupposes that the algorithm gets more advanced as time passes, because it is, well, "learning". There is nothing "learning" about this algorithm: fire it up, and in the first 60 seconds it will be just as good as it is an hour later. Apparently you want to defend the marketing, but there is no predictive AI in the frame generation. I've seen predictive AI frame generation; it takes about 2.5 hours to render one frame. The statistics needed to process a 4K image in a way that lends itself to true AI require a ridiculous amount of image processing and floating-point calculations. Trying to improve frame generation with AI is like trying to make a tractor faster by loading it up with concrete. Every major company calls everything AI these days, are you not paying attention...

The only way to improve frame generation with AI would be to use a master cloud database that had game-specific attributes in it that you could query in real time, so the processing didn't have to be so heavy. Everything would have to be done in a lookup table and be OBJECT specific, and the labor required to generate all those parameters ahead of time would require a whole new type of game engine that output data to a database for image-gathering info. The amount of power required for something that actually teaches itself to do things in advanced imagery is off the charts. Sure, there are simple AI algorithms, but they're not going to be advanced enough to improve real-time renders, not in real time; we're not even 5% of the way there yet.

Every marketing team in every company is told to use the word AI. Give me a break.

 

I agree that Nvidia is abusing the term AI in the case of DLSS3. I think there's some merit to the use of the term for DLSS2 where they actually "train" or tune the engine using still images from the game so it "learns" what things should look like.

I wouldn't accuse anyone of overusing the term "AI" these days... we universally use it to describe "AI traffic", or in other games "AI enemies" or "AI teammates", which is just a fancy term for "bots", for example... all of which are just elements in the game that the computer is controlling.

17 minutes ago, neumanix said:

DLSS v3, on the other hand, takes TWO normally generated frames, so the AI has a start and an end point to work with

That's what the judder reduction filter in my Samsung 4K TV does.


3 minutes ago, Virtual-Chris said:

I agree that Nvidia is abusing the term AI in the case of DLSS3. I think there's some merit to the use of the term for DLSS2 where they actually "train" or tune the engine using still images from the game so it "learns" what things should look like.

I wouldn't accuse anyone of overusing the term "AI" these days... we universally use it to describe "AI traffic", or in other games "AI enemies" or "AI teammates", which is just a fancy term for "bots", for example... all of which are just elements in the game that the computer is controlling.

That's called testing and tuning, not AI. Otherwise, every program ever written should be called AI, because everything is tuned to a final outcome. I'm not talking about using AI in those types of things, but the marketing at every company calls everything AI.

Basic linear regression (just call it AI, see the toy fit at the end of this post)
Any type of data storage for later analysis (it's no longer analytics, it's AI)
Any statistics that are beyond a basic median, call it AI

AI is hardly used in gaming at this point; the only places you find it are in the rendering tools themselves. Maya and 3DS Max, for example, have some addons that do interesting stuff with REAL AI, but it is very basic AI and doesn't really "learn" anything beyond using some type of hit-miss ratio to generate objects. There is hardly any learning AI in anything consumer level that is useful yet. The outcomes of learning AI are always too erratic to be useful in most things, except in the research field or for modeling data.
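To illustrate the linear regression jab above with a concrete toy (the numbers and the frame-time framing are entirely made up for illustration): a plain least-squares line fit is all that many "AI-powered prediction" bullet points amount to.

```python
import numpy as np

# Made-up sample data: average frame time (ms) logged over five flights.
flight_number = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
frame_time_ms = np.array([33.1, 32.9, 33.4, 33.0, 33.2])

# A plain least-squares line fit -- the kind of thing marketing might
# happily relabel as "AI-powered performance prediction".
slope, intercept = np.polyfit(flight_number, frame_time_ms, deg=1)
print(f'Predicted frame time for flight 6: {slope * 6 + intercept:.1f} ms')
```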

Edited by Alpine Scenery

AMD 5800x | Nvidia 3080 (12gb) | 64gb ram

50 minutes ago, Alpine Scenery said:

Even from 40 to 60 the benefit is very debatable.

Using head tracking, 60 is way better than 40. 


Feeling some slight buyer's remorse now with my purchase of a 3080 Ti.


Specs: 11900K (5 GHz), 64 GB RAM 3600 MHz, RTX 3080 Ti

2 minutes ago, rocketlaunch said:

Feeling some slight buyer's remorse now with my purchase of a 3080 Ti.

You should; apparently, if you leave a 4090 running long enough, the AI will figure out how to bring you a cup of coffee.
 

Edited by Alpine Scenery

AMD 5800x | Nvidia 3080 (12gb) | 64gb ram


I have nothing against AI-generated frames. If the feeling of fluidity and smoothness increases, I don't care whether those frames come from my hard-working CPU or my hallucinating GPU.
It's not perfect, but the tech will likely just skyrocket from here, like most things that involve machine learning do.

I'm just a bit irked by the fact that my two-year-old €2000 RTX 3090 isn't invited to the DLSS 3 party yet. But I get it. It's business. Gotta sell those 4090s!


R7 5800X3D | RTX 4080 OC 16 GB | 64 GB 3600 | 3440x1440 G-Sync | Logitech Pro Throttles Rudder Yoke Panels | Thrustmaster T.16000M FCS | TrackIR 5 | Oculus Rift S
Experience with Flight Simulator since early 1990s

