Janov

X-Plane 11.51 beta 1 is available


Just a heads-up (it was already mentioned in another thread): beta 11.51 has been released:

https://developer.x-plane.com/2020/11/stuff-we-are-working-on/

Release notes are here:

https://www.x-plane.com/kb/x-plane-11-51-release-notes/

Note that you must manually run your installer (to be found in your main X-Plane folder) and check the "Check for betas" box as well.

This beta does not really introduce anything new; it is a pure bugfix release, aimed mostly at stamping out the lingering "Device loss" crashes.

As with all betas: Install at your own risk. If you want to go back to stable, just uncheck the "Check for betas" box and run the installer again. You can go back to stable - but you can never go back to a previous beta (unless you made a copy).

Cheers, Jan

 



First say that the next update can't be a "0.01" given how much time has passed since the previous major release, then look at what gets posted a day later...?

Oh, the irony! 😂

Edited by Bjoern

7950X3D + 6900 XT + 64 GB + Linux | 4800H + RTX2060 + 32 GB + Linux
My add-ons from my FS9/FSX days

10 minutes ago, Bjoern said:

First say that the next update can't be a "0.01" given how much time has passed since the previous major release, then look at what gets posted a day later...?

Oh, the irony! 😂

Yeah, I had to chuckle a bit when I read that 🙂



As the other thread is closed:

The M1 is the fastest single-core laptop processor, at least in the benchmarks shown so far. With about 10 watts of power consumption, the blog entry from the graphics developer looks right. As X-Plane depends heavily on single-core performance, the M1 might come out on top.

It's not a great day for x86 users and people who grew up with the 386 and Pentium. I never thought a RISC architecture could beat a CISC processor, as CISC designs normally do more work per instruction. Perhaps the security mitigations on x86 waste CPU cycles.

6 hours ago, BigDee said:

As the other thread is closed:

It's probably a discussion worth spinning off into its own thread. (If you want to do that, I'll quote myself in it.)

As far as I can tell, the gains are coming at the OS level.

The single-thread gains aren't coming so much from the hardware; it's that POSIX systems are good at distributing a single-threaded workload over multiple cores (and even multiple CPUs).

These things have been coming for a while (this video is from 7 years ago).

Apple is just the first to start bringing them into the consumer space.

 

Edited by mSparks

AutoATC Developer


I do think it's OK to talk about the M1 chip in this thread; not every topic has to be a straight line, and it is somewhat related to the subject, as X-Plane will support the M1 chip in the future.

"the single thread gains aren't coming so much from the hardware, it's that posix is great at distributing a single threaded workload over multiple cores (and even multiple cpus)"

This is interesting, and I had never heard of it before: an OS layer that distributes code across multiple processors. But still, the M1 chip is very fast at multi-core too, for example in Cinebench R23.

Considering it uses four fast cores and no hyper-threading, it should be faster per core than an equivalent AMD processor. On the other hand, interestingly, the M1 consumes about 20 watts, while the top mobile Ryzen consumes 15 to 25 watts with eight cores and hyper-threading.

 

So, Apple might have improved the RISC architecture very well, or even adopted a CISC equivalent in the M1 chip. Either way, it has become a top-end competitor; what's more, x86 emulation seems to work very well. Looking at Microsoft, their ARM implementation is bad. Hardware speed has never been an issue for me lately; it's just that with every new Windows release performance drops and has to be compensated for with new hardware. Windows 10 cannot run native code anymore, and that's an ultimate killing factor.

 

 

 

1 hour ago, BigDee said:

I do think it's OK to talk about the M1 chip in this thread

No problem; it's just that this is a truly massive topic that could very quickly diverge from even X-Plane, and even beyond the much broader topic of simulation.

1 hour ago, BigDee said:

This is interesting, and I had never heard of it before: an OS layer that distributes code across multiple processors.

There is of course a lot more to it (and it is the hardware too); probably the best description of some of the technicalities I've seen is the "Gary Explains" video on it:

But a lot of this is about how the OS layer does its threading, too. Windows, for example, just takes a really naive "round robin" approach, moving each main application thread from virtual core to virtual core, more to prevent hotspots on the chip than for any performance consideration.

POSIX-based threading (if you can even call it that, because there is an army of options to choose from on POSIX systems, depending on how the system is configured) is much smarter: every action gets a priority, from the main application thread down to the objects and functions it is using (closely related to "messaging" and to how an application handles memory, shared memory and remote resources). This is really where all the research has been for the last 10 or 15 years, with a lot of recent help from the real-time guys.

What this means is that even applications not written with "being parallel" in mind can be split up by the OS and distributed to the CPUs in a way that makes the best use of the given resources. It's actually gotten so good these days (particularly with Swift and Java/Kotlin) that it can be hard to explicitly write code that outperforms it (what's left is mostly about finding big halting points that can be made to run asynchronously).

This is the #1 reason Windows on ARM runs so terribly compared to Linux/Android and iOS. Without this, even today, "pure" ARM single-core performance is still way behind Intel/AMD from roughly 15 to 20 years ago. With it, four 2 GHz cores and four 1 GHz cores can almost appear to a POSIX application like a 12 GHz Intel, if such a thing existed in a perfect world; but to an application on the Windows kernel, that application will spend half its time at 2 GHz and the other half at 1 GHz (imho what killed Windows Phone).
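The arithmetic behind that "12 GHz" figure can be sketched as a toy throughput model. The core counts and clocks are the ones quoted above; the best-case number assumes perfectly distributable work with zero scheduling overhead, which is an idealization, not a measurement:

```python
# Idealized aggregate throughput of a heterogeneous (big.LITTLE-style) CPU.
# Assumes work can be spread perfectly across cores with no overhead.
big_cores, big_clock_ghz = 4, 2.0
little_cores, little_clock_ghz = 4, 1.0

# Best case: the scheduler keeps every core busy, so clocks add up.
aggregate_ghz = big_cores * big_clock_ghz + little_cores * little_clock_ghz

# Naive round-robin case: one main thread spends half its time on a big
# core and half on a little core, averaging the two clocks.
round_robin_ghz = (big_clock_ghz + little_clock_ghz) / 2

print(aggregate_ghz)    # 12.0
print(round_robin_ghz)  # 1.5
```

Real schedulers land somewhere between these two extremes, but the gap shows why scheduling policy matters so much on heterogeneous cores.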

Edited by mSparks



Still, it also runs true multi-core much faster than a new x86 processor. I will wait for articles with more technical background and benchmarks to arrive. Also, once there is X-Plane-specific information, it's worth opening a dedicated thread for the M1. POSIX looks very interesting.

Edit: If POSIX does what it says, it might be that Microsoft has totally lost on multi-core. And it's not only Microsoft: the CAD software makers claim multi-core is impossible for CAD. That's a totally dumb statement; even a simple import of a CAD file can be split across multiple cores. The reality is that importing a complex CAD file can take up to two hours, because it runs on one core, because, yes, "it's impossible".
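As a rough sketch of that idea (using a hypothetical line-oriented file of "x,y,z" points, not any real CAD format): split the file into chunks and parse each chunk independently. The demo below runs the chunks sequentially with `map()`; because `parse_chunk` is a pure function, swapping in `concurrent.futures.ProcessPoolExecutor.map()` would spread the same chunks across cores.

```python
# Sketch: a parallelizable import of a hypothetical line-oriented file.
# Each line is "x,y,z"; chunks are independent, so the work can be spread
# over cores (e.g. with concurrent.futures.ProcessPoolExecutor).

def parse_chunk(lines):
    """Parse one chunk of lines into (x, y, z) float tuples."""
    return [tuple(float(v) for v in line.split(",")) for line in lines]

def split_into_chunks(lines, n):
    """Split a list of lines into n roughly equal chunks."""
    size = (len(lines) + n - 1) // n
    return [lines[i:i + size] for i in range(0, len(lines), size)]

data = ["1,2,3", "4,5,6", "7,8,9", "10,11,12"]
chunks = split_into_chunks(data, 2)

# Sequential for the demo; replace map() with executor.map() to use all cores.
points = [p for chunk in map(parse_chunk, chunks) for p in chunk]
print(points[0])  # (1.0, 2.0, 3.0)
```

Real CAD formats have cross-references between entities, which complicates (but rarely forbids) this kind of chunking.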

Edited by BigDee

21 minutes ago, BigDee said:

it also runs true multi-core much faster than a new x86 processor.

Kinda my point: when compared with Windows, not when compared with Linux.

If anything, Linux on a big x86_64 chip still has by far the upper hand. Apple don't target that market; they beat Windows with small x86_64 chips. Even the Mac Pro has only a 2.5 GHz base clock.

The benefit of small RISC instructions on ARM is that it's easier for the OS to split work and send it to different cores, giving that extra kick. But the front line there is not Apple; it's actually Nvidia (who Apple don't play nice with) with their DGX Station, and of course their announced buyout of ARM.

But those are not consumer units.

Edited by mSparks


8 hours ago, mSparks said:

Kinda my point: when compared with Windows, not when compared with Linux.

I have read your posts about Linux performance carefully, but it's still unclear to me why X-Plane should run at double the performance on Linux, or why Cinebench should be much faster on Linux than on Windows. If an application on Windows is programmed for multi-core the right way, it does utilize the full power, for example Cinebench. I just don't see a big performance gap for Linux or the M1 in Cinebench.

 

The benchmarks I have seen so far showed both being nearly on the same level:

- Cinebench R10: 2% difference, Windows vs. Linux

- X-Plane: nearly the same fps, Windows vs. Linux

I need some help here, specifically on the M1 and Cinebench multi-core.

8 hours ago, BigDee said:

why X-Plane should run at double the performance on Linux

Same reason macOS is so much faster.

POSIX operating systems are designed to make good use of more than one CPU per application regardless of how you code the application; Windows isn't, and until very recently didn't even give developers the option to do so by default without resorting to something like the Boost libraries.

8 hours ago, BigDee said:

X-Plane nearly same fps - Windows vs Linux

If the fps is bottlenecked at the GPU (i.e. high graphics settings), it doesn't matter how much more CPU resource you throw at it; it won't get any faster (and vice versa).

This is still heavily simplifying, because there is a lot more going on, like bus and memory speeds and caches (plus cache misses).

8 hours ago, BigDee said:

specifically the M1 and Cinebench multi-core

https://www.maxon.net/en/cinebench

Quote
  • Cinebench R23 now supports Apple’s M1-powered computing systems
     
  • Cinebench is now based on the latest Release 23 code using updated compilers

When you build an application, it relies on the operating system for much of its functionality via so-called "syscalls"; it looks like Maxon just started using syscalls that were probably "new" 30 years ago.

This is the main reason benchmarks often don't give the real picture, just a "view" of where things are. Something similar applies in the GPU space: even comparing Nvidia and AMD hardware with similar capabilities, AMD drivers are generally terrible and unstable, and the benchmarks don't reflect this.

See also

https://www.osnews.com/story/22683/intel-forced-to-remove-cripple-amd-function-from-compiler/

 

Edited by mSparks



From my experience:

NVidia GTX 1060: Slight advantage for Linux in OpenGL and Vulkan mode, but nothing worth writing home about.

AMD RX5700XT: The proprietary Vulkan driver from AMD delivers the same performance in Windows and Linux; the Linux open source driver is around 10% faster than both. In OpenGL mode, the open source Linux driver is like 40% faster than AMD's proprietary Windows driver, delivering performance equal to AMD's proprietary Vulkan driver.

As much as I like OpenBenchmarking and Linux, the chart that mSparks likes to post, with Linux being twice as fast as Windows, is not representative, first and foremost because it does not use the worst-case benchmark level 5 or 55, but a "medium" one, which must logically be level 3.


2 hours ago, Bjoern said:

does not use the worst case benchmark level 5 or 55, but a "medium" one which must logically be level 3.

Yes and no.

No in the context of talking about the benefits of multithreading.

What it shows is that as you go higher up the settings, good multithreading doesn't help anymore; the bottleneck is bus and memory speed.

Typically, for distributed compute loads, the bottleneck is nearly always the network bandwidth, which is why InfiniBand exists and why there are recent efforts on optical interconnects.
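As a back-of-the-envelope illustration of that network bottleneck (all figures hypothetical): a distributed job is network-bound whenever shipping the data takes longer than computing on it locally.

```python
# Back-of-envelope: is a distributed workload network-bound?
# All figures are hypothetical, for illustration only.
data_gb = 10.0          # data shipped per work unit, in gigabytes
link_gbps = 1.0         # link speed: 1 Gbit/s Ethernet
compute_seconds = 20.0  # time to process one unit locally

transfer_seconds = data_gb * 8 / link_gbps  # GB -> gigabits, then / (Gbit/s)
network_bound = transfer_seconds > compute_seconds

print(transfer_seconds)  # 80.0
print(network_bound)     # True: a faster interconnect (e.g. InfiniBand) helps
```

In this scenario the link, not the CPUs, sets the pace, which is exactly when exotic interconnects pay for themselves.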

The only headroom Laminar have left is on the GPU, and as seen with Enhanced Cloudscapes, that can get eaten very quickly. But they can't really touch that in XP11, because it would blow the current minimum specs out of the water.

 

Edited by mSparks


