Dirk98

Current AM for P3Dv5 suggestions?


Any thoughts and suggestions on the current AM to use with HT On in P3Dv5 on an 8-core CPU?

It used to be 00.11.11.11.11.11.01 for P3Dv4.5

I'm running 11.11.11.11.11.11.11.11 now.

Thanks.

Hi Dirk,

Unless I am mistaken, 1111111111111111 is the same as running no AM, so there's not really any point in having it in the cfg.


AMD Ryzen 3900X - Asus Crosshair VI Hero - G.Skill 32GB (2x16GB) 3000 C14 DDR4 @ 3600 C16

AMD Radeon VII 16GB HBM2 @ 1950/1150 1135mv  - 3 x Iiyama G-Master GB2888UHSU 4k @ 7680x1440

Saitek X-55 Rhino - Track IR5 - Obutto Sim Cockpit + Triple Monitor Stand - Fancy some Techno? https://www.mixcloud.com/dj_bully/


Preamble:

Most software does not care where on the CPU its threads run; the jobscheduler moves them around to suit demand.

Rather differently, P3D (and FSX) will always align tasks on a per Logical Processor basis via an Affinity Mask. That is, an Affinity Mask is in use whether an AM is specified or not.

With P3D the AM cannot be avoided. The jobscheduler is instructed by the program to keep those tasks aligned on those LPs once they are set up.

With no AM on an 8-core CPU with Hyper-Threading enabled, the effective mask is AM=0=65535=11,11,11,11,11,11,11,11, which is the entire affinity space used by the system. Note that the pair of ones '11' on the right means that two demanding P3D tasks will be placed there as if on two physical cores, when they in fact belong to only one physical core.

With Hyper-Threading disabled, no AM is AM=0=255=11111111, so the eight major tasks are split out over the 8 Logical Processors. Again the entire affinity space is utilised by the system, but without the benefit of HT optimising the use of those cores. Note also that here the first two P3D tasks are each allocated a physical core of their own.
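A quick way to check these equivalences is to convert the comma-separated binary notation into the decimal value the cfg expects. A minimal sketch in plain Python (nothing P3D-specific, just base-2 conversion):

```python
# Convert the binary AM notation used above into a decimal AffinityMask value.
# The leftmost pair is the highest core; the rightmost pair is core zero.

def am_value(pairs: str) -> int:
    return int(pairs.replace(",", ""), 2)

print(am_value("11,11,11,11,11,11,11,11"))  # 65535 - HT on, all 16 LPs
print(am_value("11111111"))                 # 255 - HT off, all 8 LPs
```

The same helper reproduces the values suggested further down the thread, e.g. `am_value("01,01,01,01,01,01,01,01")` gives 21845.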

As can be seen, disabling HT avoids LP sharing by simply removing the core optimisation from the picture. It is a no-brainer way to avoid core sharing, giving maximum bandwidth to all tasks laid out by the simulator.

Note that utilising HT provides better throughput for system resources which the simulator ultimately relies on.

However in P3D some of the Logical Processor aligned tasks are very demanding (task zero, for example), and we don't want two essentially monolithic demanding tasks sharing a core. Instead, we choose the AM to avoid sharing on some cores, providing maximum core bandwidth for those tasks.

With a purely multitasking algorithm, as with converting a movie between formats, we would allow all the LPs to be used without question.

Instead, with P3D (and FSX) we can allocate '01' pairs to all cores, or to the eight least significant binary pair positions, for good performance. With '01' on core zero we might see 95% load on LP0 and 1.5% on LP1; this gives maximum bandwidth to LP0, which in this case carries the main task of completing the scene render, allowing maximum frames per second from the HT-enabled core. We can use core zero because the jobscheduler will avoid it, keeping other tasks to later cores.
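To see which LPs a given decimal value actually enables (bit 0 = LP0, the rightmost position in the notation above), a small decode helps — again just illustrative Python, not anything from P3D itself:

```python
def lps_in_mask(am: int, total_lps: int = 16) -> list:
    """Return the Logical Processors enabled by a decimal AffinityMask value."""
    return [lp for lp in range(total_lps) if (am >> lp) & 1]

print(lps_in_mask(21845))  # [0, 2, 4, 6, 8, 10, 12, 14] - one LP per physical core
```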

We can improve loading by pairing '11' in the more significant binary digits, where parallel tasks take seconds to complete and sharing won't affect fps performance but can improve background throughput.

A balance is required because simply allowing all LPs to be used for tasks pushes out system resources and does not stop monolithic demanding tasks from being shared with each other on a single core.

 

Try in the range of:

21845 = 01,01,01,01,01,01,01,01 - this is what P3D would 'see' with HT disabled and no AM (=0=255). It avoids sharing tasks on any core, but we get only 8 tasks split out.

5461 = 00,01,01,01,01,01,01,01 - fewer LPs indeed, but this frees up resources in the system. Depending on the demand on system resources it can prove a better balance.

13653 = 00,11,01,01,01,01,01,01 - increasing parallel tasks can help even though they are shared on a core.

16213 = 00,11,11,11,01,01,01,01 - too many tasks split out can have a negative impact overall.
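For anyone trying these, the value goes in Prepar3D.cfg as a plain decimal number under the [JOBSCHEDULER] section (as in P3D v4/v5 — back up the file first), for example:

```ini
[JOBSCHEDULER]
AffinityMask=21845
```

Deleting the line returns you to the default full affinity space described above.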


 


Steve Waite: Engineer at codelegend.com

Posted (edited)
2 hours ago, SteveW said:

16213 = 00,11,11,11,01,01,01,01 - too many tasks split out can have a negative impact overall.

Excellent write up and read, as always.

I will experiment with this one as well if it looks ok:

15701 = 00,11,11,01,01,01,01,01

Thanks,

PS: should it be the same "loading time" calculation method?

Edited by Dirk98

11 minutes ago, Dirk98 said:

I will experiment with this one..

Good place to start. Reducing loading times is a good indicator, but if you only see tiny decreases you probably have enough LPs. You will then be reasonably in the region of best performance, more or less.



