carlito777

A first glimpse at Prepar3d v2.3


 

 


For my P3D display settings, overclocking from 3.9Ghz to 4.8Ghz produces 1 fps difference with no impact to stutters.  GPU overclocking will produce about 3 fps difference.

 

Although my CPU and GPU aren't as advanced as yours, that has been my experience also. Mostly what OCing does in P3d is to give the electric company a few more pennies. I'm taking a wild guess that Word Not Allowed's initial theory that one CPU thread is limiting everything tends to be borne out by these kinds of results.

Guest

I'm taking a wild guess that Word Not Allowed's initial theory that one CPU thread is limiting everything tends to be borne out by these kinds of results.

 

I know Word Not Allowed has some expectations that all CPUs should be running at or close to 100%, but from a programming perspective there MUST be a single thread manager that brings everything together (all the other threads) and keeps everything synchronized with "world time" ... this is why you'll always see one core loaded more than any other.  Threads finish at different times and the thread manager will need to make sure everything it needs to build the next scene has reported back (completed).

 

If there wasn't a single managing thread then everything would happen out of sequence relative to the world time clock.  As I've always maintained, the best anyone can expect to see across all CPUs is about 50% usage, with one CPU doing the thread management part (which is often at 90% usage or higher).

 

The flight simulation paradigm is not like the video rendering paradigm, where the work can be divided into equal chunks that utilize all cores at almost 100%.
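
If it helps to picture that, here is a bare-bones sketch of the pattern in plain C++ (purely illustrative -- the names and structure are mine, not P3D/ESP code): one manager thread fans the frame's work out to workers and then has to wait for every piece to report back before world time is allowed to advance.

```cpp
// Hypothetical illustration of a single "manager" thread keeping worker
// tasks in step with a world clock -- not actual P3D/ESP source.
#include <functional>
#include <future>
#include <vector>

struct WorldClock {
    double simTime = 0.0;
    void advance(double dt) { simTime += dt; }
};

void runFrame(WorldClock& clock, double dt,
              const std::vector<std::function<void()>>& tasks)
{
    std::vector<std::future<void>> pending;
    pending.reserve(tasks.size());

    // Fan the frame's work out to worker threads (terrain, AI, weather, ...).
    for (const auto& task : tasks)
        pending.push_back(std::async(std::launch::async, task));

    // The manager must wait until everything needed to build the next scene
    // has reported back. Workers finish at different times and then idle,
    // which is why overall CPU usage tops out well below 100% while the
    // managing core stays busy.
    for (auto& f : pending)
        f.wait();

    clock.advance(dt);  // only now does "world time" move forward
}
```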

 

Cheers, Rob.

 

EDIT: gains can be made with overclocking, but I just don't consider them significant enough to make a difference in P3D V2.x -- and there is some risk, especially if you disable thermal protection (be it CPU or GPU).


Although my CPU and GPU aren't as advanced as yours, that has been my experience also. Mostly what OCing does in P3d is to give the electric company a few more pennies. I'm taking a wild guess that Word Not Allowed's initial theory that one CPU thread is limiting everything tends to be borne out by these kinds of results.

 

That was because Word Not Allowed tested with all the GPU effects (tessellation, water, shadows etc.) turned off, to show the maximum performance he could get while being CPU-limited.


Regarding P3D v2.3, is there any time perspective on the release date?

 

Weeks, months, etc etc.

 

The reason I ask: I'm building up a new computer and don't want to install everything twice in a short time.... B)


Rob,

 

I think I remember you stating at some point that the "vegetation autogen" had already been optimized.  I am curious if this is why I can max out that slider and still obtain 100+ fps with Orbx PNW bush flying.  As soon as I hit any type of civilization, my framerate drops (I am running building autogen on Dense).  I see about a 50%-plus drop in framerate, and another 50% at night around the Seattle area.

Guest

 

 


Regarding P3D v2.3, is there any time perspective on the release date?

 

None that I'm aware of.  If  you are planning a new PC build, I'd wait for Intel to release Haswell-E using the X99 chipset (DDR4 support) -- supposed to be Q4 2014 release.  Also, nVidia will have a new GPU out later this year.

 

 

 


As soon as I hit any type of civilization, my framerate drops (I am running building autogen on Dense).

 

This has not changed in the 2.3 early Beta, and I'm not sure there is anything to be done in regards to buildings.  Trees are essentially 2 polygons per tree, whereas autogen buildings may have a minimum of 5 polygons on up to 15-20 polygons -- hence the much higher fps hit for buildings.
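
To put rough numbers on that (purely illustrative -- real autogen polygon budgets vary by model):

```cpp
// Back-of-the-envelope comparison; the counts are illustrative only.
#include <cstdio>

int main()
{
    const int objectsInView    = 10000;  // assumed number of autogen objects
    const int polysPerTree     = 2;      // crossed flat quads
    const int polysPerBuilding = 15;     // mid-range autogen building (assumed)

    std::printf("trees:     %d polygons\n", objectsInView * polysPerTree);      // 20,000
    std::printf("buildings: %d polygons\n", objectsInView * polysPerBuilding);  // 150,000
    // Roughly 7.5x the geometry for the same object count, before textures
    // and night lighting -- hence the bigger fps hit over cities.
    return 0;
}
```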

 

Cheers, Rob.


None that I'm aware of.  If  you are planning a new PC build, I'd wait for Intel to release Haswell-E using the X99 chipset (DDR4 support) -- supposed to be Q4 2014 release.  Also, nVidia will have a new GPU out later this year.

 

 

 

 

This has not changed in the 2.3 early Beta, and I'm not sure there is anything to be done in regards to buildings.  Trees are essentially 2 polygons per tree, whereas autogen buildings may have a minimum of 5 polygons on up to 15-20 polygons -- hence the much higher fps hit for buildings.

 

Cheers, Rob.

 

 

 

well, that would explain it, thx...


Bakern, on 08 Jul 2014 - 12:03 PM, said:

 

 

Regarding P3D v2.3, is there any time perspective on the release date?

 

 

 

None that I'm aware of. If you are planning a new PC build, I'd wait for Intel to release Haswell-E using the X99 chipset (DDR4 support) -- supposed to be Q4 2014 release. Also, nVidia will have a new GPU out later this year.

 

 

 

 

Thanks for the answer. Well, I have done one upgrade now (I hadn't had any upgrade since the i7 gen 1 :huh: ), and that's enough, I think. I've started the simulator install process again. The point was to ask "if" there was any time perspective on v2.3; since you don't know, I'd better wait and put in v2.3 when it's released.


This has not changed in the 2.3 early Beta, and I'm not sure there is anything to be done in regards to buildings.  Trees are essentially 2 polygons per tree, whereas autogen buildings may have a minimum of 5 polygons on up to 15-20 polygons -- hence the much higher fps hit for buildings.

 

Are they planning to give buildings autogen the same CPU/GPU optimisations they gave to vegetation autogen? Zach said in the Prepar3D forums that they cannot apply the same VAS optimisations to buildings, but didn't touch on the matter of multi-threading the workload.


 

 


but from a programming perspective there MUST be a single thread manager that brings everything together (all the other threads) and keeps everything synchronized with "world time" ... this is why you'll always see one core loaded more than any other.

 

I've written enough software myself to understand that. But the so-called manager thread shouldn't be the limiting process. I think, unfortunately, that for now there is still a lot of work being done by the CPU in P3D and the parallelism just isn't that efficient. Faster and faster GPUs and more and more CPU cores aren't going to speed things up that much as long as one CPU core is pegged near 100% all the time.

Guest

I've written enough software myself to understand that. But the so-called manager thread shouldn't be the limiting process.

 

 

Have to disagree with you there: GPUs will speed things up in terms of fps (especially AA and the processing of shadows, shadow quality, and water animations) -- the computation requirements to render a scene with full HDR, shadows, and reflections are very much GPU limited.

 

In terms of terrain loading, that appears to be a latency issue (aka CPU bound).  There is still MUCH the CPU has to do: physics, instruments, SDK-supported items, SimConnect, AI management, ATC, weather, etc.

 

I see a repeating theme in various forums of folks believing their hardware "should be fast enough", but their reference points are either non-existent or they compare it to some 3D shooter.  There is no such thing as a "world clock" in 3D shooters: time of day is fixed for the particular level they are trying to finish, there is no change in lighting over a 24-hour period ... there's a long, long list of key elements in a flight sim that just aren't there in a 3D shooter.

 

The only way I can see the ESP engine working around the terrain loading issue is to load A LOT more data (aka the LOD_RADIUS), but they can't ... 32-bit limit.  Go 64-bit and you lose compatibility with just about every 3rd party product, but it would solve the terrain loading.  Is this a limit of the quad tree method of world representation in 3D space?  I don't know; in theory a quad tree should work well for threading ... but I have no idea of the performance of a quad tree, especially around navigating it (searches).

 

Going back to this article from Microsoft on the quadtree in Flight Simulator: http://www.microsoft.com/Products/Games/FSInsider/developers/Pages/GlobalTerrain.aspx -- scroll down to the section where it talks about the quadtree (linear quadtree); the heading is "Subdividing the Surface of the Earth".

 

This brings me to Orbx Vector (a wonderful product, but it puts a strain on the quad tree) - it'll consume 200-300MB of VAS, and that's mostly because of how the quad tree must grow to accommodate much more data (roads, power lines, etc.) ... this growth in the quadtree just makes the CPU struggle that much more to keep up.  If I install Vector, my stutters do increase ... this tells me that quad tree navigation is slow - quote from that article:

 

 

 

Each cell in the quad tree is assigned a unique identifier composed from the cell's depth in the tree and its distance in cells south of the North Pole and east of the 180th meridian.  With this scheme, only simple integer operations are required to find the identifier of any cell's parent, siblings, and children.  We use the cell identifier as a search key when retrieving the data for each cell from disk and to cache and find cells in hash tables at run time. Explicit links between quad tree cells are only used where necessary for speed but otherwise we save on link storage and maintenance by using the cell identifiers as implicit links.  This type of implicit quad tree design is sometimes called a linear quad tree [13].
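
In code, that identifier scheme works out to something like the sketch below (my own guess at a linear-quadtree key -- the field layout and packing are assumptions, not FSX/P3D internals):

```cpp
// Sketch of a "linear quadtree" cell identifier as the article describes:
// depth plus row/column offsets, with parent and children found by simple
// integer operations. Field sizes and packing are my own assumptions.
#include <cstdint>

struct CellId {
    uint32_t depth;  // level in the tree (0 = the whole earth)
    uint32_t row;    // cells south of the North Pole at this depth
    uint32_t col;    // cells east of the 180th meridian at this depth
};

inline CellId parent(CellId c) { return { c.depth - 1, c.row / 2, c.col / 2 }; }

inline CellId child(CellId c, int quad)  // quad: 0..3 = NW, NE, SW, SE
{
    return { c.depth + 1, c.row * 2 + (quad >> 1), c.col * 2 + (quad & 1) };
}

// Packed into one integer, the identifier can be used directly as the
// hash-table key the article mentions for caching and finding cells.
inline uint64_t cellKey(CellId c)
{
    return (uint64_t(c.depth) << 58) | (uint64_t(c.row) << 29) | c.col;
}
```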

 

In the "Scene Graph" section of the above link:

 

 

 

The terrain scene graph is designed for rendering with as much fidelity as possible but without having to load more data than necessary.  Therefore, we only want to use the data with the highest level of detail (LOD) near the viewpoint and use progressively less detail as distance from the viewpoint increases.   Small cells deep in the tree contain data with the highest LOD so we want to concentrate them near the viewpoint.  The larger cells near the root of the tree hold data with low LOD so we want them to be visible from greater distances than the smaller cells. To accomplish this, we cache cells from all levels of the tree in the scene graph but we use a geometrically decreasing geographic radius as the depth of the cells increases.  Because the cache radius decreases at about the same rate as cell size, the number of cells cached in each level of the scene graph remains relatively constant at all levels.   
Ideally, the scene graph radius for each level of the quad tree would be determined by screen resolution, with higher resolutions requiring larger radii, but we leave the choice up to the individual user so they can find a balance between view quality and the capabilities of their machine.  In practice, we allow a user-selectable radius of between 2.5 and 4.5 cells, which seems to be a good balance between keeping the total number of cells low, while still allowing enough granularity for streaming in new content as the viewpoint moves through the world.  To support the requirement for multiple independent views, we create a separate scene graph for each viewpoint.  When the scene graphs for multiple views overlap spatially, they share as much view-independent data as possible such as textures and raw, untriangulated elevation data.
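
To make the cell-count arithmetic concrete (illustrative only -- the maximum depth and the square cache shape are assumptions on my part):

```cpp
// Rough illustration of why the number of cached cells per level stays
// about constant: the cache radius (in cells) is the same at every depth,
// while the cells themselves halve in size each level down.
#include <cmath>
#include <cstdio>

int main()
{
    const double radiusCells = 4.5;  // user-selectable 2.5 .. 4.5 per the article
    const int    maxDepth    = 11;   // assumed deepest cached level

    const int side     = static_cast<int>(std::ceil(2.0 * radiusCells + 1.0));
    const int perLevel = side * side;  // square cache around the viewpoint (assumed shape)

    std::printf("~%d cells cached per level, ~%d cells total across %d levels\n",
                perLevel, perLevel * (maxDepth + 1), maxDepth + 1);
    // Raising the radius grows the working set quadratically, and packing
    // more data into each cell (roads, power lines, ...) adds weight on top
    // of that -- which is where the VAS and quad-tree pressure comes from.
    return 0;
}
```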

 

Has LM reached the point of diminishing returns?  Are quad trees just too slow?  LM obviously doesn't think so; I will, however, remain reserved.  I don't think there is anything inherently wrong with the ESP engine, it just needs significant breaking changes to move it out of a 4GB limit and beyond single-GPU support.

 

I know this is not a favorable situation and most don't want to hear it, including LM ... but I'm not going to blow smoke; this is just my humble opinion of the self-imposed restrictions (which could be entirely based on resources and funding).  XP10 also seems to be in the same boat when it comes to multi-GPU support ... XP10 has the address space and engine that can keep the terrain flowing, but not the multi-GPU support to keep the frame rates up.

 

64bit

Compatibility

Multi-GPU support

 

P3D has compatibility, XP has 64-bit ... but neither has all 3, or even 2 out of 3.  I guess that's just where we're at; we just have to keep supporting both and hope.  The other harsh reality I'd rather not think about, and try to stay positive - but I'm not going to kid myself: I know the challenges and I know the development resources.

 

Since I don't have a spare $90-100 Million lying around to invest in flight simulation development ... just gotta make the best of it regardless of platform.

 

Cheers, Rob.

To add to the article/link I referenced:

 

 

A better solution, based on our experience, is to run most of the terrain engine tasks on Win32 fibers.  Fibers are similar to threads in that they each have their own context (i.e. register state and call stack).  However, the scheduling of fibers is entirely up to the application using them.

 
With fibers, Flight Simulator can schedule the terrain tasks to run in the interval between iterations of the main game loop and not during rendering or other time critical operations.
 
Fibers do have their down side, however.  Flight Simulator's fiber scheduler is cooperative, which means the fiber tasks must periodically call back into the scheduler to see if it's time to yield.
 
While this takes some getting used to when writing code, it does simplify the job of synchronizing data between fibers and the main game loop because all of the possible points of preemption are known.
 
Recognizing the increased availability of dual-core processors, we plan to execute some fibers in a single thread on the second core.  Fiber tasks that exchange data only with other fibers on the same core can still use lightweight synchronization, but any data exchange with the main game loop or other fibers running on the primary core must be fully synchronized obviously.

 

Last sentence is key to my point about CPU utilization.
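
For anyone curious what that fiber pattern looks like concretely, here is a minimal sketch using the real Win32 fiber API -- the terrain task and the yield policy are invented for illustration, and this is obviously not Flight Simulator's actual scheduler:

```cpp
// Minimal sketch of cooperative fiber scheduling with the Win32 fiber API.
// The task and the yield policy are made up for illustration.
#include <windows.h>
#include <cstdio>

static LPVOID g_mainFiber = nullptr;

// A terrain-style task that does a slice of work, then cooperatively yields.
VOID CALLBACK TerrainTask(LPVOID)
{
    for (;;) {
        for (int cell = 0; cell < 50; ++cell) {
            // ... load / triangulate one cell's worth of data here ...
        }
        // Cooperative scheduling: the task itself decides when to hand
        // control back to the main game loop.
        SwitchToFiber(g_mainFiber);
    }
}

int main()
{
    g_mainFiber    = ConvertThreadToFiber(nullptr);  // main loop becomes a fiber
    LPVOID terrain = CreateFiber(0, TerrainTask, nullptr);

    for (int frame = 0; frame < 60; ++frame) {
        // ... render, physics, input for this frame ...
        SwitchToFiber(terrain);  // run terrain work *between* frames,
                                 // never during rendering
        std::printf("frame %d done\n", frame);
    }

    DeleteFiber(terrain);
    return 0;
}
```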

 

Cheers, Rob.


Good point there Rob.  I notice that in my case, most of what would be described as stutters really are introduced after I install Orbx Global and Vector.  I can verify that without them, the bare sim runs like glass.  Once I add them (and shadows), I can tell the processing load has increased very much.  So there is a lot to consider when running with these additions.  Of course, I like my cake and would like to eat it too... so if there are further benefits to optimizing the sim to afford better graphics performance with these additions, I'm all for it!!!


Once again, there is no point to 64-bit at this time.  There is no CPU or GPU combination on the market that will keep up with it for the AVERAGE CONSUMER.  2 years from now, YES.  But that is an eternity.  You cannot live in the future NOW.  We have to live within real-world limitations on real-world time, as some would say.  You could have 64-bit and it wouldn't make a bit of difference right now ... in the real market, it's simply not gonna make a difference to how well it runs for the vast majority, and thus sells.

I'd buy it.  You'd buy it.  But early adopters are never your bread and butter, they're your food tasters.  If they don't like it you have to pay attention.  If some of them die off you should panic.  (If they start taking bites at you, you're kinda doomed, because if they're doing that to stay excited, then next thing there's no one there at all.)  I'd like to point out that we're all sitting at the table drooling like 5-month-olds.

I think LM's development path is IRREPROACHABLE.  They also hire the best of the best, without fanfare.  This is Lockheed Martin you are talking about.  They are 3 times older than Microsoft, probably with a deeper, richer employee culture and history without all the prissiness, and everything they do has life or death implications.  Don't you think the company has SOPs established throughout to stay fast, lean, and ahead of everyone else's reasoning (they do).  Fact: 64-bit is NEEDED.  Fact: not today, not at all -- talk about diminishing returns.  They're doing first things first.  The order of things is the trick of it all, isn't it?


Disclaimer:  9900k@5.3ghz on Asus Maximus X Formula, G.Skill TridentZ RGB 4x8GB 4266/17 XMP, EVGA 2080 ti Kingpin (8400/2160Mhz), Samsung 960 EVO 250GB PCIe M.2 NVMe SSD , 28TB HDD total - 4TB+ photoscenery, Romex Software PrimoCache RAM and SSD cache (must have!), 3x1080p 30" monitors, Samsung Odyssey VR HMD, Pimax 4k & BE HMDs, Samsung Gear VR '17, Homdio v1, Cardboard, custom loop 2x 360x64ML Rads, Thermaltake View 71, VRM watercool, Thermal Grizzly Conductonaut CPU (naked die), Fujipoly / ModRight Ultra Extreme System Builder Thermal Pad on MB VRM. 8x Corsair ML120 (slight positive pressure). 🙂


Rob, if you're not allowed to talk about building autogen, just let me know. I think I asked the same question in the past and you ignored it then as well.


Once again, there is no point to 64-bit at this time.  There is no CPU or GPU combination on the market that will keep up with it for the AVERAGE CONSUMER.  2 years from now, YES.  But that is an eternity. ...

 

That's simply not true. Things like a long rendering distance require lots of RAM, but not that much CPU and GPU.

 

The big problems with 64-bit are obviously all the addons which won't be compatible and the amount of development time it will take to "convert" Prepar3D.

