Everything posted by Virtual-Chris

  1. Ok, that would explain a lot of my confusion. Why not use the in-game scaling setting vs editing the config file?
  2. So basically you're saying you don't notice any image degradation with DLSS3 (or you can live with it) so it's a win for you. That's perfectly fine. I'm still undecided. I haven't seen it yet outside of highly compressed YouTube videos.
  3. I get how DLSS and scaling work, but I still don't understand where this secondary scaling comes into it. What you're describing sounds like primary scaling. Is this how you understand the process works, both with and without DLSS?

     Without DLSS, using primary scaling:
     - Your display resolution is 1440p.
     - You set resolution scaling (primary scaling) in game to 58%.
     - The GPU renders the frame at 835p.
     - The GPU upscales this, without any processing, to 1440p to match your display.
     - The result is a blurry mess but great FPS, since the GPU is only rendering a fraction of the pixels.

     Now with DLSS:
     - Your display resolution is 1440p.
     - You set resolution scaling (primary scaling) to 100%.
     - DLSS2 is set to Balanced.
     - The GPU renders the frame at 835p.
     - The GPU then upscales this, with AI processing, to 1440p to match your display.
     - The resulting image is less blurry than without DLSS, but it's still a bit blurry as the AI is not perfect. The FPS is great because the GPU is only rendering a fraction of the pixels.

     Again with DLSS, but with added scaling:
     - Your display resolution is 1440p.
     - You set resolution scaling (primary scaling) to 150% (2160p).
     - DLSS2 is set to Balanced.
     - The GPU renders the frame at 1252p (58% of 2160p).
     - The GPU then upscales this, with AI processing, to 1440p to match your display.
     - The resulting image may be less blurry than the case above, but the performance isn't going to be as great since the GPU is rendering more pixels.

     Where in this process does your secondary scaling come in, and what does it do exactly? My assertion is that there's really only one scaling opportunity for rendering, and that's decided before the frame is rendered. I guess some post-processing like sharpening and filtering could be applied at a supersampled resolution... is that what this secondary scaling is? If so, it's not affecting what DLSS is doing or the resolution DLSS is working with. That's decided by the primary scaling.
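The three cases above boil down to one calculation, which can be sketched as a few lines of Python. This is only an illustration of the arithmetic in the post, assuming DLSS Balanced renders at 58% of the target resolution (the ratio used above); the function name is made up:

```python
# Sketch of where the render resolution comes from, per the scenarios above.
# Assumption: DLSS Balanced renders at 58% of the (scaled) target resolution.

def render_height(display_height: int, primary_scale: float,
                  dlss_ratio: float = 1.0) -> int:
    """Target = display * primary scaling; DLSS (if on) renders at
    dlss_ratio of that target before upscaling back to the display."""
    target = display_height * primary_scale
    return int(target * dlss_ratio)

# Case 1 - no DLSS, 58% primary scaling: GPU renders 835p, plain upscale to 1440p.
print(render_height(1440, 0.58))        # 835
# Case 2 - DLSS Balanced at 100% primary scaling: same 835p input, AI upscale.
print(render_height(1440, 1.0, 0.58))   # 835
# Case 3 - DLSS Balanced at 150% primary scaling: GPU renders 1252p.
print(render_height(1440, 1.5, 0.58))   # 1252
```

Cases 1 and 2 render the same pixel count, which is why the FPS is similar and only the upscaler differs; case 3 renders roughly 2.25x the pixels of the other two.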
  4. Fair enough... let's call them AI-generated frames. The bottom line is there are now two types of frames, game engine generated frames, and AI generated frames. Interesting... Of course, you could have doubled your frame rate any time by lowering LOD, resolution, using scaling, or a variety of other settings. What is it about DLSS3 that is so magical? Is it that you don't notice any degradation with DLSS3 but those other methods of trading image quality for FPS are more noticeable?
  5. I agree, more is better, but there are diminishing returns. Where those returns really start to diminish will be different for everyone. For you, it may be at 144Hz. For me, it's above 60Hz, and for some here, it's 30Hz. I have to see DLSS3 in action with my own eyes, but I am skeptical. Let's be honest about what it is... it's a post-processing technique that injects duplicate frames with some motion extrapolation applied. If your sim is only running at 30FPS before DLSS3, it's still only putting out 30FPS of real content after DLSS3. All DLSS3 is doing is adding some AI-generated content to make it feel faster.
  6. That thread is mostly talking about render scaling. In that thread, no one really knows what secondary scaling does. One person suggests it affects the rendering of the cockpit textures:

     "I'm not 100% sure, but I think it has something to do with the resolution scaling in the interior cockpit. I set mine to 0.500000 and tested; my cockpit textures and labels were noticeably worse than at 1.00000."

     Then this guy says:

     "SecondaryScaling does the same as DSR yet in game (and using in game down sampling). ... The render resolution is what the environment gets rendered at, polygons, textures etc. The post process resolution is where sharpen, color grading and TAA get resolved. DSR allows you to start with a higher screen resolution, gives you more control over down sampling and gets better results when you use Nvidia filters."

     I gather DSR is like super sampling. Are you sure that this secondary scaling is being applied before DLSS scaling? I think DLSS scaling will be applied to the rendered resolution. If you can control the scaling in the post-processing stage, I think that would be after DLSS has done the upscaling.
  7. Thanks... that makes sense. The OP makes it sound like this Post resolution is BEFORE DLSS processing though. Even using your methodology, I'm still having difficulty understanding how this post upscaling can improve image quality. Let's say your monitor is 2560p. With DLSS2, the GPU renders it at like 1700p, then the DLSS upscales it to 2560p using machine learning to fill the gaps. Now let's say I then tell it to upscale that to 3840p using this secondary scaling and then the monitor renders that at 2560p. That last step of upscaling and downscaling would seemingly add nothing. What am I missing?
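The intuition in the question, that a naive upscale followed by a matching downscale adds nothing, can be checked directly. A minimal sketch, using nearest-neighbour resampling on a toy 2x2 "image" (real post-processing chains use smarter filters, and sharpening applied at the higher resolution could change the result, so this only covers the naive case described above):

```python
# Naive round trip: nearest-neighbour upscale by an integer factor,
# then downscale by the same factor, returns the identical pixels.

def upscale_nn(img, factor):
    """Repeat each pixel `factor` times horizontally and vertically."""
    return [[px for px in row for _ in range(factor)]
            for row in img for _ in range(factor)]

def downscale_nn(img, factor):
    """Keep every `factor`-th pixel in each dimension."""
    return [row[::factor] for row in img[::factor]]

img = [[1, 2], [3, 4]]
big = upscale_nn(img, 2)          # [[1,1,2,2],[1,1,2,2],[3,3,4,4],[3,3,4,4]]
assert downscale_nn(big, 2) == img  # round trip is lossless but adds no information
```

So with this kind of resampling the extra up/down step is a pure no-op; any quality difference would have to come from processing (sharpening, filtering, TAA resolve) done while the image is at the higher resolution.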
  8. I'm still not sure what "post" is. Here's my understanding of the flow... maybe outline where this post resolution comes into it?

     1. The CPU sends data to the GPU to render the frame.
     2. The GPU renders the frame at a resolution dictated by your settings. This could be higher than your display resolution for scaling above 100%, or lower if using DLSS2 or a scaling setting below 100%.
     3. The rendered frame is then scaled (or not) and output to your display. If using DLSS2, it will upscale from the render resolution, using ML algorithms to fill in the missing pixels to achieve display resolution. If using normal scaling, it's just upsampled without adding any data.
  9. I don’t understand. There’s the resolution the GPU renders at and there’s your display resolution. What is post and what is secondary scaling?
  10. According to the devs on the official forums. 1.3.8 is a typo. The current version is 1.3.6.
  11. I haven't had a chance to try it, but this sounds off... I'm fairly sure the real plane needs a lot of rudder trim. The fact it requires less now and not more seems wrong.
  12. One way to look at it is that DLSS is just another setting you can use to trade image quality for performance. In the sim, you have several settings that do this already:

      - Resolution: run at a lower res for more FPS and lower image quality.
      - Scaling: render at a lower res and then upscale for more FPS and lower image quality.
      - LOD: use a lower LOD for more FPS and lower image quality.
      - Other graphics settings for terrain, trees, objects, etc.: trade FPS for image quality.

      DLSS2 is just another one of these settings that trades FPS for image quality... it's really just a new form of scaling, which you already have, except it uses machine learning to try to fill in the gaps. It comes down to: what image quality sacrifices are you willing to make, and what FPS are you willing to live with? No one can decide for you, as you may be really bothered by blurry cockpit instruments, or LOD pop-in, or poor-quality textures, while others aren't.

      When it comes to what GPU to buy, I would personally only ever buy Nvidia, but not because of DLSS... just because they seem to have the best overall solution, from cards to drivers to game support. They are pushing tech forward with DLSS, though, and that's another good reason to buy Nvidia. AMD makes the best processors right now (in my view), but Nvidia makes the best GPUs.
  13. In my case, I'm more interested because the card may push me over 60FPS consistently in TAA with DX12. I'm close now since I upgraded to the 5800x3D (with the 3090) and if this can push me over 60FPS most of the time, then I don't need more FPS and thus don't need DLSS3. I'm a bit of an image quality snob. I will have to judge DLSS3 with my own eyes, but it seems a bit contrary for me to want to run LOD400, on Ultra settings, at 4K and then use something like DLSS3 to get some performance out of it at the expense of image artifacts. I could pull a lot of other levers if all I cared about was FPS.
  14. My wallet thanks you for lightening its load! 😄 It sounds like this could get me over 60FPS consistently with TAA, and certainly with DLSS3. It may be the last GPU I need to buy until going to 8K.
  15. There's a lot of confusion out there about what to use and under what conditions. I've seen a video by Nvidia where they recommend using V-Sync Fast (in the NV Control Panel) with G-Sync for the best experience, but I can't find it now.

      In my case, I recently purchased an OLED G-Sync monitor and had to turn G-Sync off due to the notorious OLED flicker issue. V-Sync Fast works just as well as G-Sync did for me... without the flicker. If you're not on an OLED panel, then try just using G-Sync... it should offer the lowest latency. Whatever you do, don't use V-Sync in the game. If you're going to use V-Sync, use V-Sync Fast in the NV Control Panel.

      Here's a description of all the various sync options (not mine... copied from Reddit):

      - V-SYNC: framerate synced with monitor refresh rate (no tearing; input lag, increasing as the framerate lowers).
      - V-SYNC OFF: framerate un-synced (tearing, very little input lag).
      - Adaptive Sync: at high framerates, V-Sync is enabled to eliminate tearing; at low framerates, it's disabled to minimise stuttering (input lag at high framerates, tearing at low framerates).
      - G-SYNC / FreeSync: syncs the refresh rate to the frame rate (no tearing, minor input lag). Important to note: if the maximum refresh rate is hit, G-Sync is no longer functioning, so input lag may be encountered if V-Sync is enabled; with V-Sync disabled there's no input lag, but there will be tearing.
      - FAST SYNC: de-coupled rendering and monitor refresh. V-Sync is off, but the monitor only shows completely rendered frames, thus eliminating tearing (minor input lag but no tearing!). It's also monitor-agnostic, so it works with any screen. It's analogous to hardware triple buffering but does not back-pressure the render pipeline; it's a clever solution with very little overhead, so there should be no performance hit. It can still be used with G-Sync for systems that struggle to maintain FPS above the refresh rate.
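The Fast Sync behaviour in that last bullet can be illustrated with a toy simulation: the GPU renders un-throttled, and each monitor refresh shows only the newest completely rendered frame, silently dropping the rest. This is only a sketch of the described behaviour, with made-up function names and illustrative timings:

```python
# Toy model of Fast Sync: at each refresh tick, display the newest frame
# that finished rendering before the tick; earlier unshown frames are dropped.

def fast_sync_refreshes(frame_times, refresh_interval, duration):
    """Return the index of the frame shown at each refresh tick.

    frame_times: completion time (ms) of each rendered frame, ascending.
    """
    shown = []
    t = refresh_interval
    while t <= duration:
        ready = [i for i, ft in enumerate(frame_times) if ft <= t]
        shown.append(ready[-1] if ready else None)  # newest complete frame
        t += refresh_interval
    return shown

# GPU at ~90 FPS (11.1 ms/frame) on a 60 Hz panel (16.7 ms refresh):
frames = [11.1 * (i + 1) for i in range(9)]
print(fast_sync_refreshes(frames, 16.7, 100))  # frames 1 and 4 are never shown
```

Because only whole frames are ever scanned out, there is no tearing, and because rendering never waits for the display, there is no back-pressure on the pipeline, exactly the trade described in the glossary.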
  16. Thanks. If I can summarize… you can’t use it if you really need it and you don’t really need it if you can use it. 🙂 I mean, he’s basically saying that you need 100FPS natively before you turn on DLSS3 or you’re going to notice artifacts. At least for me, if I can get 100FPS natively, I don’t need DLSS. I guess it’s another lever we have in our toolkit, but for me, I want max image quality at a rock solid 60FPS. So back to my previous question… what’s the FPS advantage using TAA? My 3090 gets me 40-50FPS at 4K, Ultra, LOD400 with TAA … is a 4090 going to get me over 60FPS with the 5800x3D feeding it?
  17. I’ve read through this thread and maybe I missed it, but I still have no idea what FPS increase I could expect on 4K Ultra TAA from a 3090 to a 4090… has anyone got any actual numbers?
  18. Cool. Let us know when it's out. I think I will buy it.
  19. @MikeT707That looks better... is that now part of the product?
  20. Can someone who went from a 3080/3090 to a 4090 report on the FPS increase using just TAA?
  21. I haven't tried DX12 yet. I wonder why V-Sync Fast doesn't work on DX12. Is that global, or isolated to MSFS? Is it a bug, or will it never work on DX12? It's a real shame, as it seems like the ultimate choice for tear-free, flicker-free refresh for me.
  22. Yeah, tell me about it. I bought an LG OLED specifically because it had G-Sync. But unfortunately, the only effect I could see was that it induced OLED flickering on dark scenes. Having G-Sync on or off made no difference to tearing as v-sync fast takes care of that. I really don't see the benefit of g-sync, at least where v-sync fast is an option.
  23. I've played around with a lot of settings and I get a tear-free, flicker-free image with in-game v-sync off, g-sync off, and Nvidia Control Panel v-sync set to fast with no frame limitation. I turned g-sync off because it introduces OLED flicker in dark scenes... turns out g-sync was doing nothing for me (I'm at 45-60FPS on a 120Hz panel).
  24. Thanks. It may get better as the tech matures, but this shouldn't surprise anyone. DLSS, whether it's v2 upscaling or v3 frame prediction, is all about creating something from nothing... and it's unlikely to be as good as the full something. Did you upgrade from a 3080? If so, what's the FPS improvement with TAA on the 4090?
  25. Interesting... I wonder if that's a bug, or if that's the way it's supposed to work.