mcdonoughdr

P3Dv5 Excessive VRAM usage

Recommended Posts

Video memory in DX12 is allocated within a budget. That budget can consist of local and non-local (shared) memory. The programmer must check the available space before committing a resource, or the texture doesn't load.
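The budget check described above can be sketched in Python. This is illustrative only, not actual DXGI code; in a real DX12 app the budget and usage figures would come from the runtime, but the decision logic the programmer owns looks roughly like this:

```python
# Illustrative sketch (not real DXGI calls): in DX12 the application is
# given a memory budget and must compare its own usage against it before
# committing a new resource -- nothing spills over automatically as in DX11.

def can_load(texture_bytes, budget_bytes, current_usage_bytes):
    """Return True only if the texture fits in the remaining budget."""
    headroom = budget_bytes - current_usage_bytes
    return texture_bytes <= headroom

MB = 1024 * 1024
# A 512 MB texture against an 8 GB budget with 7.8 GB already in use:
print(can_load(512 * MB, 8192 * MB, 7800 * MB))  # not enough headroom
```

If the check fails, it is up to the application to decide what happens next; skipping the load is the benign outcome.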


Steve Waite: Engineer at codelegend.com


Another problem to recognise with VRAM use is that other programs, such as browsers, also consume video memory, and that can affect the outcome. Crashing or seizing is most likely a fault in a scenery or add-on.


Steve Waite: Engineer at codelegend.com

11 hours ago, ttbq1 said:

I have an 11GB GPU and have been testing V5 since release day, and trust me, I use different graphics settings depending on the aircraft and the airport I am flying. Sometimes I need to change the graphics settings during cruise.

I generally agree with everything you've said, and I, too, have settings profiles saved based on six weeks of 'learning' what budget I need where and with what aircraft. I also learned that when approaching an airport, there's a 'temporary' need (I can't trace it, but it is consistent). I'll be approaching an airport using 5GB of VRAM, it will jump to 6.6/7.1, and then about 60 seconds later drop back to about 6GB and stay in that range all the way to the gate. I think this temporary jump is what causes most of the CTDs people experience on arrival, so enough budget has to be left when approaching a complex airport. Basically, if I don't have 2GB free 50 miles from my destination, I need to lower settings until I do, or I am guaranteed a CTD. I've learned this process and don't get CTDs, but it isn't intuitive or seamless. In DX11, that temporary need would just be shuttled to shared GPU memory, cause a couple of stutters, and then all would return to normal; the flight continues without a CTD.

Where I disagree is that LM is 'accountable' insofar as their implementation of DX12 leaves a lot to be desired from a user-experience standpoint. There are basically two practices standard in modern DX12 titles that are missing from LM's implementation.

The first is a VRAM budget calculator in the settings menu. I haven't used a DX12 title other than P3D that doesn't have this. They all give a clear indication of how much VRAM the chosen settings will need, like this:

[Screenshot: VRAM budget indicator in another DX12 title's settings menu]
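The kind of settings-menu estimator being asked for could be as simple as a lookup table summed per option. A minimal sketch follows; the option names and per-option MB costs are invented for illustration (a real sim would measure them), only the idea of a running total shown in the menu comes from the post:

```python
# Hypothetical per-option VRAM costs in MB -- invented numbers for
# illustration; a real title would profile these.
SETTING_COST_MB = {
    ("texture_resolution", "1024"): 900,
    ("texture_resolution", "2048"): 2100,
    ("ea", "medium"): 600,
    ("ea", "high"): 1100,
    ("dynamic_lighting", "on"): 450,
}

def estimate_vram_mb(choices, base_mb=1500):
    """Sum a base cost plus the cost of each chosen option."""
    return base_mb + sum(SETTING_COST_MB[c] for c in choices)

chosen = [("texture_resolution", "2048"), ("ea", "medium")]
print(f"Estimated VRAM: {estimate_vram_mb(chosen)} MB")  # 1500 + 2100 + 600
```

Even a rough table like this would replace the six weeks of manual trial and error the post describes.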

It has taken me six weeks to learn how much VRAM a certain number of AI aircraft take at 1024 vs 2048 textures, how much VRAM TrueGlass needs when it rains versus when it doesn't, how much VRAM EA needs at Medium vs High, how much VRAM is saved between LOD Medium and LOD High, the VRAM delta between 2xSSAA and 4xSSAA, how much VRAM dynamic lighting takes, or what simobject shadows need... It is asking too much of users, even enterprise and professional users, to keep track of this stuff manually.

The second DX12 standard feature missing from P3D is over-budget handling. P3D simply crashes when over budget. Combined with the lack of clear data about which options require what VRAM budget, the user is left completely in the dark. DX12 titles, as a rule, do not CTD when over VRAM budget. They have the logic built in to not go over budget, reducing texture resolutions or LOD to fit the available VRAM, and they have a cascading order of priority in which they do this. I'm sure it's hard, but literally every other DX12 title does this. Approaching an airport... need to load 1.2GB of new textures, only 0.8GB remaining, reduce EA to Medium, free up 0.5GB, now there's 1.3GB free, boom. All done behind the scenes without causing a CTD. This has apparently been discussed amongst the developer/beta group, according to simbol, and is hopefully being considered or in the works.
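The cascading fallback described above can be sketched in a few lines. This is a hedged illustration of the idea, not LM's or anyone's actual implementation; the fallback names and MB savings are hypothetical, but the 1.2GB-needed / 0.8GB-free numbers mirror the example in the post:

```python
# Hypothetical fallbacks in priority order: (description, MB freed).
FALLBACKS = [
    ("EA high -> medium", 500),
    ("4xSSAA -> 2xSSAA", 700),
    ("2048 -> 1024 AI textures", 1200),
]

def fit_load(needed_mb, free_mb):
    """Step down options in priority order until the pending load fits.

    Returns (fits, steps_taken). Instead of crashing when over budget,
    the engine degrades quality behind the scenes.
    """
    steps = []
    for name, saved in FALLBACKS:
        if free_mb >= needed_mb:
            break
        free_mb += saved
        steps.append(name)
    return free_mb >= needed_mb, steps

# The post's scenario: 1.2 GB of textures to load, 0.8 GB free.
ok, steps = fit_load(needed_mb=1200, free_mb=800)
print(ok, steps)  # one step (EA to medium) frees enough to fit
```

If every fallback is exhausted and the load still doesn't fit, the engine would skip or defer the load rather than CTD.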

This DX12 stuff is hard, and I'm happy LM bit the bullet, but they need to finish the job by deploying industry standard DX12 features to their DX12 app. In the meantime, I've got settings that let me enjoy the sim, but it is irritating to have to remember to change settings on the fly (or to forget to...and CTD) and I'm not willing to accept the low settings required in London or Miami for all my flying.

1 hour ago, SteveW said:

The hardware is abstracted. If there's no room for a texture it doesn't load. If a CTD occurs loading scenery, then that is a fault in a scenery, missing or corrupt file maybe.

That really doesn't seem to be the case. I can make a vanilla, out-of-the-box P3D v5 crash all day long simply by increasing settings; that shouldn't be possible. In DX11 it was abstracted and I could push all sliders to the right without a CTD. In DX12 it is not, and I cannot. Not an ideal user experience. (To be clear, I operate far from sliders-to-the-right, mostly in the middle or on lower settings.)

Edited by cwburnett

Chris Burnett

i9 9900K @ 5Ghz | Radeon RX 6800 | 32GB DDR4 3200 MHz | 2 x M.2 Gen 3x4 1TB SSDs + 1 x SATA 1TB SSD
Honeycomb Alpha Yoke | Honeycomb Bravo Throttle | Saitek Rudder Pedals | Thrustmaster Warthog HOTAS


System memory raises a signal called a page fault when there is insufficient memory in the current allocation. The GPU does not do this; instead, the programmer makes a call to the system to determine the available memory in the budget, and if there's enough, the process continues to load another object. So, for example, if the object is a texture of 2KB and the available budget is 4KB, the program will continue and load the texture. However, if that texture is corrupt, it may look like a texture of infinite size to the GPU, and the program crashes. Excessive VRAM use could indicate something like that is happening.
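The load path described above, including how a corrupt texture defeats the size check, can be sketched like this (illustrative Python, not driver code; the numbers match the 2KB/4KB example in the post):

```python
def try_load_texture(reported_size, available):
    """Compare a texture's reported size against the available budget.

    A sane loader validates the header first; a corrupt header that
    reports a nonsense size would otherwise sail past any budget check.
    """
    if reported_size is None or reported_size < 0:
        raise ValueError("corrupt texture header: unusable size")
    if reported_size > available:
        return False   # doesn't fit: skip or defer the load
    return True        # fits: safe to commit the allocation

print(try_load_texture(2048, 4096))          # 2KB into 4KB: loads
print(try_load_texture(float("inf"), 4096))  # "infinite" size never fits
```

The crash scenario is the unvalidated case: a loader that trusts a garbage size and commits the allocation anyway.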


Steve Waite: Engineer at codelegend.com


I also have high VRAM usage. I always fly the B738 into add-on airports, no default airports, and had to use default settings to avoid any CTDs. If I select "Use high-resolution terrain textures" I get a warning about my GPU RAM, and that, to me, lowers the visual quality drastically.

This is my system:

1. ASUS Z170-K, i7-6700, GTX 1080, 16GB RAM, SSDs (no overclocking)

2. Win10/64, P3Dv5.1, Active Sky, Orbx Global Base with LC, Vector, HD Trees, all their basic stuff, FreshMeshX, and add-on scenery.

These are my P3D settings (very basic):

[Screenshots: P3D display settings]

This is my GPU Performance from Task Manager:

[Screenshot: Task Manager GPU performance]

This is what I get visually; if there is bad weather with rain etc., my GPU usage is in the 80-90% range.

[Screenshot: in-sim view]

I used P3Dv4.5 before and in my opinion I had better GPU performance with higher "World" settings. Do you agree with me?

What do you think of my systems performance figures and the graphics presented?

Edited by portanav

Michael (Beta Tester ProATC-x)

SIM Specs: ASUS Z170-K, i7-6700, 16GB RAM, GTX 1080, SSDs

Apps: Win10/64, P3Dv5/Prosim737, ActiveSky, REX SF3D, TOGA Env...


I can't even run v5 for longer than 10 minutes in VR; 1080 Ti 11GB user here.


Current system: ASUS ROG STRIX Z370-F GAMING, Intel i7-8086K @ 5.0GHz locked, 32GB RAM @ 2666MHz, Gigabyte AORUS GTX 1080 Ti Extreme Edition 11G, M.2 SSD, Oculus Rift S.


This is my GPU and CPU performance. Look at how high my CPU core 0 is; I thought LM said (in their forum) P3D would use all cores equally. It does not look like that to me. I had the weather maxed out with low vis, heavy rain, low cloud and thunderstorms; aircraft on the ground.

What do you folks think of this? Is this normal for my system, good, or should I consider myself lucky?

Continued from post one above.

[Screenshot: GPU and CPU performance graphs]

Edited by portanav

Michael (Beta Tester ProATC-x)

SIM Specs: ASUS Z170-K, i7-6700, 16GB RAM, GTX 1080, SSDs

Apps: Win10/64, P3Dv5/Prosim737, ActiveSky, REX SF3D, TOGA Env...

11 minutes ago, portanav said:

This is my GPU and CPU performance. Look at how high my CPU core 0 is; I thought LM said (in their forum) P3D would use all cores equally. It does not look like that to me.

LM has never said this... and what you are seeing is normal. 🙂


Bert


Michael, I'm guessing you, or rather your plane, was stationary while taking those screenshots. Start moving, and you will see P3D using the other cores as well, namely to load new scenery.


Best regards, Dimitrios

On 4/15/2020 at 5:20 AM, Dave_YVR said:

8GB isn't a lot of VRAM anymore. Adjust the sliders to make it work for your system.

Not a major problem, tackled sensibly.

I have a 1080 8GB card and have not yet run out of VRAM. My settings are Normal to High, and even with TE England and the PMDG 744 landing at payware EGLL I have only used about 6.5GB of my 7.1GB available.

V5 gives equal results at lower settings than v4 did. IMHO.


Intel i7 6700K @4.3. 32gb Gskill 3200 RAM. Z170x Gigabyte m/b. 28" LG HD monitor. Win 10 Home. 500g Samsung 960 as Windows home. 1 Gb Mushkin SSD for P3D. GTX 1080 8gb.

26 minutes ago, IanHarrison said:

Not a major problem, tackled sensibly.

I have a 1080 8Gb card and have not yet run out of Vram. My settings are Normal to High and even with TE England and PMDG 744 landing at payware EGLL  I have only used about 6.5 of my 7.1 available. 

V5 gives equal results at lower settings than v4 did. IMHO.

Add a little PBR and EA (if LM ever makes it worth using), and 8GB is suddenly obsolete. 


23 hours ago, portanav said:

I thought LM said (in their forum) P3D would use all cores equally

As Bert already pointed out, that's not what they say. P3D does use cores other than Core 0 (or whichever core the main thread is on), but how much work it can hand off is limited by the nature of the task. To distribute load evenly across CPU cores you have to be able to divide the workload equally, and you can't do that in a flight simulation (or really in a game of any kind). Simply put, calculations whose results are needed for the remaining calculations have to be performed in order, so they cannot usefully be put onto separate cores, and a lot of those calculations go into rendering each frame. Any workload separate enough to be split onto its own thread already has been.

Is it possible to design a sim engine from the ground up that puts more work on separate cores than P3D does now? Most likely. Is it possible to design one that splits its load equally across however many CPU cores you have? No, it isn't, and it never will be. And even if it were, once you start fully loading multiple cores at the same time, other performance limiters kick in, notably I/O and memory access.

You might be surprised how much of the time a CPU core is effectively doing nothing at all, because the code it is executing is spinning in a loop waiting for data to arrive from RAM or from a device. Storage, even super-fast storage like NVMe, is glacial compared to the speed of the CPU, and you can burn many cycles waiting on a data load. Finding ways to break off and do productive work in those cycles (instead of waiting), and then return exactly when the data becomes available, is where the big gains are being found right now, and it is the subject of a great deal of research into parallel programming. If we could fill all those gaps all the time, we would gain back significant processing power.
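The overlap-instead-of-wait idea can be sketched in Python. This is illustrative only; a real engine would use platform-specific asynchronous I/O rather than a thread pool, and the "slow load" here just simulates a fetch:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def slow_load():
    time.sleep(0.2)        # stands in for a slow disk/RAM fetch
    return "scenery tile"

with ThreadPoolExecutor() as pool:
    future = pool.submit(slow_load)    # start the load, don't block
    useful_work = sum(range(1000))     # CPU does real work meanwhile
    data = future.result()             # rejoin exactly when data is ready

print(useful_work, data)
```

The spinning-in-a-loop alternative would burn those same cycles doing nothing while `slow_load` completed.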

Generally, though, multi-core is not the answer to CPU speed limits for anything other than highly parallel tasks. The whole reason chip designers went multi-core is that they had reached the practical speed limits of single-core designs, and chip-makers had to find somewhere to grow in order to sell new products. Single-core improvements do still happen: a current Intel or AMD CPU is still faster, clock cycle for clock cycle on a single core, than a CPU from a decade ago. But those gains are small and will not continue forever.

Basically, we hit a speed limit a long time ago, and PCs are never again going to see the doubling of performance they used to enjoy back in the days of Moore's Law, barring fundamentally new technology that allows us to run chips faster and cooler, or something completely new like optical computing.

The onus is now on software developers to improve their code and make it more efficient and more parallel, but there are hard limits on how far you can take that, depending on your software's purpose. The DX12 update was about the biggest single thing LM could do to increase performance, but it has come at the cost of increased instability, and most often the reason for that has nothing to do with P3D itself (though some of it is also clearly down to bugs in the sim). There's lots of good discussion of this up-thread.

Simmers need to realise that there's no silver performance bullet waiting in the wings, either via hardware or software. We're in the age of small gains now.


Temporary sim: 9700K @ 5GHz, 2TB NVMe SSD, RTX 3080Ti, MSFS + SPAD.NeXT
