About boez

  1. A 'manual install' option is selectable; it places the aircraft folder on your desktop, from where you may want to move it somewhere more appropriate. I've written to Milviz suggesting they may want to allow the user to specify the destination. A.
  2. I cannot replicate this at all. In other words, if I restart the software and start at EGNX on Ultra settings, I get the same as Aristoteles' picture 2. I can clearly see a degrade when I go to Medium, as it then looks like picture 1. Are you sure you haven't got your UserCfg.opt file set to read-only? p.s. I'm running HDR, so getting a screenshot is a PITA. But believe me, I'm getting Ultra with Ultra settings, with no swapping of settings needed.
  3. I was resigned to a 6-hour complete re-install, but I thought I'd try one more restart... and hey presto, the issue is now gone 🙂
  4. Thanks Shack95 - I've not seen this myself until the past few days! This location is one of my post-update test flights, so I'm certain it wasn't a problem previously. It seems strange that it's not random tiles, which could suggest a server connectivity error; as stated, I've both restarted and then rebooted to no avail. Cache off (actually never used since week 1).
  5. Hi guys. Before I post at Zendesk, can anyone see if they are missing a Bing satellite tile at the UK Lake District, 54.568, -3.151? I'm not using either cache (rolling or manual) and have never seen this issue at this location prior to yesterday. Several restarts/reboots, but the issue remains. You can get to the location quickly by going into Dev mode and using the teleport window 🙂 Edit: top two photos looking eastward; last image looking west back towards the location.
  6. (I presume you mean 'pre-rendered' frames?) - see my comments earlier in the thread... https://www.avsim.com/forums/topic/595625-v113160-screen-tearing-not-a-bug-but-a-design-change/?do=findComment&comment=4477465 https://www.avsim.com/forums/topic/595625-v113160-screen-tearing-not-a-bug-but-a-design-change/?do=findComment&comment=4477525
  7. Actually, I've been thinking about your Q some more and still come to the conclusion that leaving it at the default of 3 is best (BTW, that default is probably what causes the confusion with triple buffering!). Here's why...
In your case, with a stronger CPU than GPU, within less than a second of starting MSFS that pre-render queue will be filled up and the CPU will be *forced* to wait on the GPU. Any new frame added will be rendered eventually, but the queue will stay at 3 (see later for exceptions to this). If you set the length to 1, then the queue fills up instantly but the CPU is still forced to wait. In other words, the CPU % use and fps will be identical (but can you spot why in fast shooter games the latter option has an advantage?). In both cases the GPU slows up the CPU.
However, let's assume the CPU now has to go off and fetch some texture from the disk (or internet) and this takes a long time - longer than a single frametime of e.g. 33 msecs. With a full queue of 3 pre-renders, the GPU can still continue working (as it has instructions in the queue) and the queue length is reduced by 1 (or more) entries. But NO STUTTER 🙂 (providing the CPU being busy doesn't last *too* long!). When you only have a pre-render queue of 1, then in the same scenario the GPU is available for work BUT has nothing to do, no instructions, nada, so it stalls and STUTTER. Eventually in both cases the CPU will catch up and re-fill the queue, but by using a pre-render queue of 3 we have smoothed over a potential stutter.
*Answer to your Q: As flight sim enthusiasts we have an advantage in that time delay (latency) is not too much of an issue. We are not trying to kill a fast-moving enemy! With a pre-render queue of 3 we POTENTIALLY have a time delay of (e.g.) 33 msecs × 3 between a frame first being requested and it being seen on the screen. Enough to miss the target... hmmm, not so sure, but hey, I'm no gamer.
Hence a lot of the tips we should be careful about implementing come from first-person shooter forums, and they should be disregarded IF you prefer smoothness over lower time delay (latency).
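The queue behaviour described above can be sketched with a toy simulation (a rough model under simplifying assumptions: the faster CPU can top the queue up instantly each frame interval, the GPU consumes exactly one queued request per interval, and a stalled CPU submits nothing):

```python
def gpu_idle_frames(queue_max, stall_start, stall_len, total_frames=30):
    """Count frames where the GPU has nothing queued to draw (a stutter).

    Toy model: each tick the (faster) CPU tops the request queue up to
    queue_max unless it is stalled fetching a texture; the GPU then
    consumes one queued request. An empty queue means a stalled GPU.
    """
    queued = 0
    idle = 0
    for tick in range(total_frames):
        cpu_stalled = stall_start <= tick < stall_start + stall_len
        if not cpu_stalled:
            queued = queue_max        # CPU refills the pre-render queue
        if queued > 0:
            queued -= 1               # GPU renders one queued request
        else:
            idle += 1                 # nothing to draw -> visible stutter
    return idle

# A 2-frame CPU stall: a queue of 3 absorbs it, a queue of 1 stutters.
print(gpu_idle_frames(queue_max=3, stall_start=10, stall_len=2))  # 0
print(gpu_idle_frames(queue_max=1, stall_start=10, stall_len=2))  # 2
```

A longer stall (say 5 frames) stutters in both cases, but the deeper queue still absorbs the first few frames of it, matching the "providing the CPU being busy doesn't last *too* long" caveat above.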
  8. I usually don't give advice, for the reason stated 🙂 leaving it at the default is the safest option! A sound approach is to turn all settings to minimum, thus ensuring smooth fps (if it's not smooth even then, you've no chance of getting it smooth!), then gradually increase them. There are a few guides as to what affects GPU vs CPU the most, but generally screen resolution, AA, lighting effects, etc. are GPU-intensive, so you need to be careful there. This is laborious but ensures the settings match your system. Edit: to add that ideally you should test every setting one by one, returning each setting to minimum before moving onto the next, then select the best combination based on these findings. This is the Six Sigma approach, but it's impractical for so many settings!
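That one-factor-at-a-time idea can be sketched in a few lines. Everything here is hypothetical: `measure_fps` stands in for whatever benchmark run you use, and the setting names and fps numbers are invented for illustration.

```python
def rank_settings_by_cost(min_settings, max_levels, measure_fps):
    """One-factor-at-a-time: from the all-minimum baseline, raise each
    setting alone to its maximum, measure fps, and rank by fps cost."""
    base = measure_fps(min_settings)
    costs = {}
    for name, level in max_levels.items():
        trial = dict(min_settings)
        trial[name] = level          # raise just this one setting
        costs[name] = base - measure_fps(trial)
    # most expensive settings first
    return sorted(costs.items(), key=lambda kv: kv[1], reverse=True)

# Fake benchmark for illustration only (made-up penalty numbers).
def fake_measure(settings):
    penalty = {"resolution": 12, "anti_aliasing": 7, "clouds": 3}
    return 60 - sum(penalty[k] for k, v in settings.items() if v == "max")

baseline = {"resolution": "min", "anti_aliasing": "min", "clouds": "min"}
maxed = {k: "max" for k in baseline}
print(rank_settings_by_cost(baseline, maxed, fake_measure))
# [('resolution', 12), ('anti_aliasing', 7), ('clouds', 3)]
```

The ranking tells you which settings to spend your fps budget on last; as the post notes, a full factorial sweep would be more rigorous but is impractical with this many settings.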
  9. The first part is true: this setting is for OpenGL only. In fact, in the nVidia control panel the description you see when you move the mouse over this setting states that. The second part is not correct, but it's a common misconception repeated in many forums! Triple Buffering refers to storage areas in GPU VRAM, while Pre-Rendered Frames are drawing requests that the CPU sends to the GPU. Let me expand...
'Triple Buffering' refers to the VRAM storage areas (buffers) where the GPU puts the frames that it has already rendered (drawn) OR is in the process of rendering. So you have a buffer which is currently complete and is the image that you can see on the screen, a buffer that contains a complete image which will be displayed on the screen next, and a buffer that the GPU is currently rendering (drawing) into and which will eventually have its turn being displayed. The GPU cycles through these buffers continuously.
The Pre-Rendered Frames count refers to the MAXIMUM number of rendering requests that the CPU can pass to the GPU (which will be processed by the GPU, become frames in the buffers, and hence eventually be displayed!). IF the GPU is not already busy drawing a frame, then it will take this request and draw it immediately while, without waiting, the CPU continues making up the next request it will send - BUT if the GPU is busy, then the request is added to a queue. Once the MAXIMUM length of this queue is reached (so the queue is full), the CPU just has to wait until a slot becomes free. The important point here is the MAXIMUM part - if you have a faster GPU than CPU, then this queue will always (usually!) be empty: the GPU draws faster than the CPU can request. Conversely, if the GPU is slower than the CPU, then the queue may always be full and the CPU will stall while waiting. This is why user experience can vary so much, why a tip you read in a forum may be completely wrong for your system, and why a balanced system is the best option.
Edit: Note you could have a mix of triple buffering and pre-rendered frames if the graphics API allowed it. In fact, I understand this is what Fast Sync is doing, but it also allows the 'spare' buffer to be dropped (not displayed) if the GPU is lagging behind the CPU.
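To make the distinction concrete, here's a minimal Python sketch of the two separate mechanisms; the labels and sizes are illustrative, not an actual driver API:

```python
from collections import deque
import queue

# Triple buffering: three VRAM buffers whose ROLES rotate on each flip.
# Position 0 = displayed, position 1 = ready (complete, shown next),
# position 2 = being rendered into.
buffer_ids = deque([0, 1, 2])

def flip(bufs):
    """On a flip, the 'ready' buffer becomes 'displayed', the buffer being
    rendered becomes 'ready', and the old displayed buffer is recycled."""
    bufs.rotate(-1)
    return bufs[0]   # id of the buffer now on screen

# Pre-rendered frames: a bounded queue of draw REQUESTS from CPU to GPU.
prerender = queue.Queue(maxsize=3)   # the '3' in the driver setting
for frame in range(3):
    prerender.put_nowait(frame)      # CPU submits work without waiting
# A fourth put_nowait() would raise queue.Full: the CPU must now wait
# until the GPU drains a request - exactly the MAXIMUM described above.
```

One mechanism lives in VRAM and governs what is shown; the other is a CPU-to-GPU work queue and governs how far ahead the CPU may run. They are independent, which is why (as the Edit notes) an API could combine them.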
  10. I should add that, like my original post, I've greatly simplified these explanations. Masses of information have been written on the subject, and I've tried to include just the salient points. Remember, any fps over the refresh rate of your monitor are wasted! You won't see them 🙂
  11. It's due to what's called frame pacing. Frames per second (fps) is an average representation of the time that each individual frame takes to be displayed (called frametime). You can be averaging 30 fps, but if that average is the result of very fast then very slow frametimes, it will not seem smooth. E.g. assume 30 fps is the target, so that needs an average frametime of 1/30 s = 33 msecs.
Frametime of 1st frame: 27.2 msecs; 2nd frame: 34.2 msecs; 3rd frame: 33.4 msecs; 4th frame: 28.2 msecs; etc., to a total of 30 frames. This may average over 1 second of counting to 30 fps, but it will not seem smooth.
Whereas: frametime of 1st frame: 32.99 msecs; 2nd frame: 33.01 msecs; 3rd frame: 33.00 msecs; etc., to a total of 30 frames. This also averages to 30 fps but is much smoother. Of course, CPU/GPU usage is identical (more or less!). In-game settings, swap chain modes, and coding techniques all contribute to this variation.
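The arithmetic above can be checked in a few lines. The frametime lists here are invented so that both average exactly 33 ms; only the spread differs:

```python
def pacing_stats(frametimes_ms):
    """Return (average fps, worst deviation from the mean frametime in ms).

    Two runs can share the same average fps while one of them has far
    worse pacing - the deviation, not the average, is what you feel.
    """
    mean = sum(frametimes_ms) / len(frametimes_ms)
    fps = 1000.0 / mean
    worst_dev = max(abs(t - mean) for t in frametimes_ms)
    return round(fps, 1), round(worst_dev, 2)

uneven = [27.0, 39.0, 30.0, 36.0]   # mean 33 ms, poorly paced
even   = [33.0, 33.0, 33.0, 33.0]   # mean 33 ms, perfectly paced

print(pacing_stats(uneven))  # (30.3, 6.0)
print(pacing_stats(even))    # (30.3, 0.0)
```

Both report the same 30.3 fps average; only the deviation column reveals which run would feel smooth.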
  12. I also use 30 Hz with Vsync ON/60 fps and have no post-update stutters... (other than when complex scenery is loading - vsync cannot do anything about that, sadly). In other words, I'm pretty much getting the same experience as before, but I should say I've only had about 2 hours with the new version. It does seem smoother, but I'll reserve judgement for now. Even with an RTX 3090 (thanks Ian!) and a 10700K, the occasional stutter is part of the experience! Edit: With higher-spec GPUs the copy operation I refer to in my OP will be negligible, so in terms of increased fps or lowered GPU % usage I doubt you'd see any difference. Hence lower-spec GPUs should see the most benefit from this change, but YMMV.
  13. Quick answer: Post update, MSFS now uses a DXGI swap chain mode which allows for (is actually designed for) a frame rate unthrottled by vsync, but this has the side effect of allowing screen tearing to be observed. This was a design change by Asobo (Edit: I presume!).
Solution to the screen tearing: If you want to run MSFS in fullscreen mode, then switch on vsync in either the in-game settings or the nVidia control panel. If you don't want to use vsync, then run in windowed mode (you can make the window almost as big as the desktop, just not the exact size of the desktop).
Longer answer: The DXGI swap chain coordinates the handling of a frame drawn by the app on its journey to the screen. The app, the Desktop Window Manager (DWM), and the graphics card are all involved in this. Prior to this update, MSFS used a basic form of the swap chain called "Copy Blt[sic]" which was *always* subject to vsync restrictions placed on it by DWM. The downside of this method is the need to copy the entire screen (Edit: entire app window) at some point in the chain - this takes time. The upside was that no screen tearing could be seen. This latest version of MSFS uses a mode called Flip (discard), which does not need that screen copy. However, if you run MSFS in a window using this mode, then DWM still applies vsync, but you retain the speed increase of not needing a copy. BUT if you run MSFS in fullscreen mode (note this is still NOT fullscreen exclusive), then DWM removes that restriction and leaves control of vsync up to the app itself. This is called Flip (discard) immediate; DWM takes a back seat. DirectX detects when the app is completely covering the screen and uses this to switch into that mode. Note that any other windows open on top of MSFS will result in that mode *not* being applied. This mode was added to DirectX to remove the latency inherent in any swap chain that involves vsync being turned on, but the cost is that screen tearing can be observed.
It is generally accepted that use of this flip (edit: flip immediate) mode results in an app that runs as well as the old Full Screen Exclusive mode, but with none of the disadvantages.
  14. I think he just recovered pre-Dec 22nd CGL files from a backup he had... see this thread (in particular the post on Dec 28th, 6:00 pm, by mappamundi himself): A global solution for the world-wide spike & elevation artifacts introduced by Update 1.12.13 - Community / General Discussion - Microsoft Flight Simulator Forums