
MattNischan

Members
  • Content Count

    35
  • Donations

    $0.00 
  • Joined

  • Last visited

Community Reputation

27 Neutral

Flight Sim Profile

  • Commercial Member
    No
  • Online Flight Organization Membership
    Other
  • Virtual Airlines
    No


  1. You said this: "I would NOT recommend WiFi for the new MSFS or anything that is latency critical." There is no evidence that the MSFS scenery streaming will be that latency critical, or that WiFi will not be sufficient. I additionally wager it will not be very bandwidth sensitive. There is also precisely zero evidence that any transient hiccups in connection quality are going to suddenly make you take a massive scenery quality hit. This is all complete conjecture, and makes absolutely no sense at all given the way these types of systems and engines are designed.

     I have never, in all my years as an IT professional, seen the kind of wild, constant jitter in latency you describe, even in high-traffic, suboptimal environments with many computers per square foot, lots of metal around, and constant logging. Sure, you're going to get some latency spikes, absolutely. But I get those randomly on wired connections too. Upstream connections are not always perfect either (yay Comcast). All I'm saying is that there is no evidence that WiFi won't be just fine, and additionally, knowing what I do about engine design, it is probably going to be just fine. Like I said, I'll be more than happy to eat my words if I'm wrong, but there's no need to be hyperbolic this early in the game.

     I don't get how this turned into some kind of strange war, but I'll bow out. Apparently there are egos to protect. We shall all find out when it hits our computers next year. Cheers all, Matt N
  2. Because that wasn't the argument. The claim was that WiFi will not be sufficient or recommended for the new MSFS. There's nothing about the apparent engine design, nor about the characteristics of WiFi connections versus wired ones in this scenario, that would warrant that recommendation. We agree that a WiFi connection is not superior to a wired one and that RTT has an impact (sometimes significant) on TCP/IP bandwidth capacity. I am well aware of the math (a rough version of it is sketched at the end of this post). That was not my claim, as I already stated in my last post. There's no need to flex here; I'm just a chill dude offering my advice, knowing the subject area from both a practical and a technical standpoint.

     The claim that a WiFi connection will not be sufficient for MSFS is unwarranted, and there's no need to whip people into a tizzy, getting them thinking they need to go buy expensive gigabit routers and hook wires up just to stream a little scenery data. The thing is, from a practical standpoint, the latency to the server will always swamp the latency that your radio, which is feet away, will add. And if the average WiFi connection were really such a horrible worst case as is being claimed, nobody would be streaming Hulu to their laptops and tablets. I don't understand the need for tech folks to claim the sky is falling. Everyone relax, WiFi will be just fine for this purpose. That's all I was saying. No need for the bad vibes, seriously.

     Rob, simbol, if it turns out I'm wrong, I'll send you both a bottle of booze of your choosing, up to $100. Truce?
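     For anyone curious about the math being referred to, here is a rough back-of-envelope sketch of the classic TCP throughput ceiling (window size divided by RTT). The window size and RTT figures are purely illustrative assumptions, not measurements of any particular connection:

         #include <cstdio>

         int main() {
             // Illustrative assumptions only: a 256 KB receive window, a 20 ms wired
             // RTT to the server, and an extra 3 ms added by the WiFi hop.
             const double window_bytes = 256.0 * 1024.0;
             const double rtt_wired_s  = 0.020;
             const double rtt_wifi_s   = 0.023;

             // Throughput ceiling <= window / RTT
             const double wired_mbit = window_bytes * 8.0 / rtt_wired_s / 1e6;
             const double wifi_mbit  = window_bytes * 8.0 / rtt_wifi_s  / 1e6;

             std::printf("Wired ceiling: %.0f Mbit/s\n", wired_mbit);
             std::printf("WiFi ceiling:  %.0f Mbit/s (%.0f%% of wired)\n",
                         wifi_mbit, 100.0 * wifi_mbit / wired_mbit);
             return 0;
         }

     Under those assumptions the WiFi hop costs roughly 13% of the ceiling, which is a long way from the 80-90% haircut being implied.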
  3. Wow, not at all, and I don't get the vitriol here. I was simply stating, based on my technical expertise in multiple parts of this field (having both deployed WiFi in the enterprise and programmed distributed systems and games), that I don't believe the small amount of extra latency will be a problem for this type of application, and that I very seriously doubt much bandwidth will be needed. I would be extremely surprised to find otherwise. Certainly no one here is claiming that WiFi is a better PHY than wired. I'm just saying that you recommended avoiding WiFi for MSFS, in big bold letters, when I doubt that kind of hyperbole is warranted for this type of scenario.

     This is a game; unless the server you connect to is in your house, hardly anyone ever gets single-digit pings playing games online, and it would be absolute netcode suicide to build in so little leeway that 10-15ms is going to trash the scenery streaming. And let's be honest here: I have never measured a difference that large between WiFi and wired (normally I see differences of about 1-3ms extra), even in super crowded enterprise scenarios with lots of metal walls between everything.

     This is not a streaming video game platform like Stadia, PlayStation Now, or OnLive (RIP), where you need dedicated low latency to provide accurate per-frame rendering and control inputs because the game is rendered and processed server side. Scenery data will be streamed to the client as the client enters scenery zones; the game is rendered locally (a simplified sketch of that flow is at the end of this post). There's no other way to do it, and that much is super clear from the fact that you can pre-download and cache scenery areas. Thus, the little extra latency of WiFi is not going to be a problem, and even the slight RTT hit is not going to cut your bandwidth down 80-90%. I stand by my analysis. If you can stream Netflix, you'll be fine here. When the game comes out, if I'm wrong about that, I'll gladly be the first to eat my words.
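     To illustrate the kind of client-side streaming being described, here is a heavily simplified sketch. Nothing about it is confirmed engine behavior; the tile keying, cache, and prefetch radius are all hypothetical, purely to show why a few extra milliseconds of radio latency disappear into the prefetch margin:

         #include <chrono>
         #include <cstdint>
         #include <future>
         #include <map>
         #include <utility>
         #include <vector>

         // Hypothetical scenery tile, fetched from the server or read from the local cache.
         struct SceneryTile {
             int32_t x, y;
             std::vector<uint8_t> payload; // compressed mesh/texture data
         };

         class SceneryStreamer {
         public:
             // Called as the aircraft moves: request tiles around the aircraft ahead of
             // time, so the renderer only ever reads from the local cache and never
             // waits on the network.
             void Update(int32_t acX, int32_t acY) {
                 const int32_t radius = 2; // prefetch ring (illustrative)
                 for (int32_t x = acX - radius; x <= acX + radius; ++x)
                     for (int32_t y = acY - radius; y <= acY + radius; ++y)
                         if (!cache_.count({x, y}) && !pending_.count({x, y}))
                             pending_[{x, y}] = std::async(std::launch::async,
                                                           &SceneryStreamer::FetchTile, x, y);

                 // Move finished downloads into the cache without blocking.
                 for (auto it = pending_.begin(); it != pending_.end();) {
                     if (it->second.wait_for(std::chrono::seconds(0)) == std::future_status::ready) {
                         cache_[it->first] = it->second.get();
                         it = pending_.erase(it);
                     } else {
                         ++it;
                     }
                 }
             }

         private:
             static SceneryTile FetchTile(int32_t x, int32_t y) {
                 return {x, y, {}}; // stand-in for an HTTP request to the scenery service
             }

             std::map<std::pair<int32_t, int32_t>, SceneryTile> cache_;
             std::map<std::pair<int32_t, int32_t>, std::future<SceneryTile>> pending_;
         };

         int main() {
             SceneryStreamer streamer;
             streamer.Update(512, 384); // hypothetical tile coordinates under the aircraft
             return 0;
         }

     The point of the sketch is simply that the fetch happens well before the data is needed on screen, so a couple of extra milliseconds of WiFi latency never reach the renderer.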
  4. Based on what information? I think it's way too early to say that WiFi would not be recommended. As an IT professional (first a sysadmin for 10 years, now a programmer for 6), we use WiFi professionally in all sorts of applications that have reasonable bandwidth requirements, and latency is generally not a problem (including live streaming applications). Within a normal-sized home, WiFi is going to add at most 10-15ms of latency, and given the most likely architecture employed by the engine (i.e. spitballing based on what I know about both systems and game engine programming), I highly, highly doubt that having the lowest possible pings is going to be any concern whatsoever.

     Similarly, I keep hearing about people being nervous about bandwidth requirements. Yes, they have 2PB of data at their disposal. That's _their side_ data; I guarantee it is not all coming to the client. And keep in mind the earth is roughly 197M mi^2. But mostly, the thing to keep in mind is that this is still just a game that has to run on commodity hardware. Mesh data is super tiny and very, very compressible (and by this I mean all in-game vertex data). Texture data has to fit in VRAM, and loading in and out of VRAM constantly is extremely expensive, which means even photogrammetry-based buildings can't be massive 4K x 4K textures per building; you'd never get it all to fit. So, realistically, the on-screen data for your visible area is not going to be that massive. I would be highly, highly surprised if the maximum needed bandwidth was any more than 20-25 Mbit/s in the worst case (flying low and really fast), and I think latency will be meaningless unless it is high enough that it impacts your actual bandwidth (a rough version of that arithmetic is at the end of this post).

     Aircraft simply don't change position or vector quickly enough at that scale to defeat any caching mechanism they would have in place to grab anticipated data. And don't forget, this data still has to fit on your disk without eating it for breakfast (either by unexpectedly taking up all the space on disk, or by burning through huge numbers of write cycles on SSDs), and additionally the most recent areas you've been to will already be on disk, so your connection is not a concern there. All this says to me that bandwidth requirements will be very sane (think Netflix) and that latency will not be a big deal. -Matt N
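     For a back-of-envelope sense of that 20-25 Mbit/s worst case, here is the kind of arithmetic being described. Every number in it is an illustrative assumption (tile size, per-tile payload, cache dimensions), not anything published about the engine:

         #include <cstdio>

         int main() {
             // Illustrative assumptions only.
             const double ground_speed_kts  = 250.0; // flying low and fast
             const double tile_size_nm      = 1.0;   // ~1 nm square scenery tiles
             const double strip_width_tiles = 10.0;  // fresh tiles crossed along the path
             const double mbit_per_tile     = 8.0;   // compressed mesh + textures, dense area

             // Sustained case: only the leading edge of the cached area needs new data.
             const double tiles_per_sec =
                 ground_speed_kts / 3600.0 / tile_size_nm * strip_width_tiles;
             std::printf("Sustained: ~%.1f Mbit/s\n", tiles_per_sec * mbit_per_tile);

             // Cold start: spawning into a dense area and filling a 20x20 tile cache
             // over roughly two minutes.
             const double burst_mbit = 20.0 * 20.0 * mbit_per_tile;
             std::printf("Cold-start burst: ~%.1f Mbit/s\n", burst_mbit / 120.0);
             return 0;
         }

     Under those guesses you land on a sustained rate in the single-digit Mbit/s range and a cold-start burst in the mid-20s, the same ballpark as the 20-25 Mbit/s worst case above.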
  5. Just to be extra clear here as well: I was expressing my own opinion of what I would do in their position, if I were in a good enough fiscal position to do so. I certainly don't expect or feel entitled to a free product that took work to produce. In fact, I expect most devs will probably have to charge, since the market is so niche that they aren't sitting on mounds of capital they can just burn. The cost of testing is not insignificant, and can be a large percentage of, or even equal to, your dev team cost. So, I know I threw testing out there without a lot of context, but if quick playability testing takes a few people two weeks (which is going to be a low figure for these types of products), that can already be 1 to 1.5 man-months, and you could easily be incurring $5-10K in costs. That is no small number for a niche developer.
  6. If, in fact, just a recompile is needed, and I were in that developer's position, I would probably opt to cash in on the goodwill and provide the upgrade free of charge. However, given that there is so much new code under the hood of MSFS, I would never dream of shipping without a decent testing cycle (even in this case), and that's going to cost money. It is near certain that some assumption you've made about an esoteric part of the original sim will not have carried over in exactly the same way, and now maybe your descent phase is jerking the nose up and down a bunch. Just to be clear, I don't sell flight sim software, so this isn't me defending a practice I may have a conflict of interest about (I actually do distributed financial systems). I'm just being honest that even something that seems simple ends up costing time and money to do. Whether a particular company sees it as an investment to take the hit and capture customer goodwill, or whether they're even fiscally able to, is really going to be up to them and will vary by company. -Matt N
  7. As a software developer, I believe the compatibility mode they talked about will be limited to using the .air file (the aero lookup table), since it was only spoken about in the context of the physics system. What seems more difficult, but still perhaps possible, is that the model format can be used in the new engine. The reason I hedge on that is that the new models clearly use a much more advanced materials system, and from a software design perspective I'm finding it hard to come up with a design that would easily accommodate both types of systems (the old, with simple UV diffuse/specular/lighting textures, and the new, which looks like assignable per-surface materials as in most modern games).

     From the other end, I am almost certain that any aircraft using custom code shipped separately in a DLL (most of the good ones, and PMDG uses a _lot_ of custom code) will need to be ported. FS development relies very, very heavily on knowing the memory layout of the FSX/P3D executable process (which is how FSUIPC reads so much more than the SDK provides natively), and for the rest that use the SDKs, your library needs to be linked to that DLL as well. The memory layout of the process will be wildly different. As a result, the only way developers will get away with not having to write code during the port is if the SDK has a completely identical API surface that matches their existing usage and they are already not relying on the memory layout. Since the engine appears to be quite different (certainly much more different than FSX was to P3D initially), I doubt that the SDK's APIs will completely match the previous ones. Even if the APIs do happen to match, the minimum required would still be a recompile against the new SDK, because the two will certainly not be binary compatible (even just going from linking against a 32-bit version of a library to a 64-bit version requires a recompile; see the sketch at the end of this post). Either scenario also means a testing cycle, to make sure the new SDK functions and methods behave the way you expect from before.

     I'm glad the Asobo team is looping developers in early, although it does sound like the SDK will be a moving target and a work in progress after the initial release. So, my expectation is that an extremely code-heavy product like the PMDG NGX or 777 is probably not a day-one item. My forecast is 3-6 months at a minimum after release for something like that (if indeed it even makes fiscal sense; they may wait and do it on the next aircraft version). Some of the smaller, less detailed aircraft might be ported more quickly or easily, but overall I don't expect any of the high quality stuff to be a simple import/export type exercise, just looking at how I would imagine the systems to be designed and how software has to go together in general. -Matt N
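     To make the binary-compatibility point concrete, here is a tiny sketch. The struct and field names are hypothetical, not the real SimConnect API; the point is only that the same source code produces a different binary layout on a 32-bit and a 64-bit build:

         #include <cstddef>
         #include <cstdio>

         // Hypothetical SDK message passed between the sim and an add-on DLL.
         struct AircraftStateMsg {
             void*       userContext;   // 4 bytes on a 32-bit build, 8 bytes on a 64-bit build
             float       latitude;      // sits at offset 4 on 32-bit, offset 8 on 64-bit
             float       longitude;
             std::size_t payloadLength; // also 4 vs. 8 bytes depending on the target
         };

         int main() {
             // A DLL compiled against the 32-bit layout would read 'latitude' from the
             // wrong offset inside a 64-bit host process, even though the API "looks"
             // identical in the headers. Hence the mandatory recompile.
             std::printf("sizeof(AircraftStateMsg) on this build: %zu bytes\n",
                         sizeof(AircraftStateMsg));
             return 0;
         }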
  8. I'm a lead engineer of a big software project. Here's the thing: both sides are correct. I'll speak to two things, the internal P3D architecture and external parties.

     C++ is what is called an unmanaged language. In other words, the programmer has to specifically allocate memory for things as well as explicitly tell the program to deallocate those things. This is where the term "memory leak" comes from: the programmer has allocated objects, but later has forgotten to delete them. In managed languages (Java, JS, C#, Python, etc.), the deletion of objects is taken care of for you. When nobody is using those objects anymore, a process called the garbage collector comes in and deletes them. It's all obviously hugely more complicated than that, but that's the general gist from FL350. C++ guys know that there are newer ways of dealing with this stuff these days, with smart pointers and other things, but I'm going to stick with the basics.

     In a 32-bit application, you have chiefly two limitations:

       • The magnitude and/or precision of numbers you can represent
       • The amount of object memory you can grab before the allocator throws an Out Of Memory exception

     Surprise! The second limitation is actually the same as the first one. Wait, what? That's right, the 32-bit memory limitation all goes back to numbers. When a process starts in Windows, Windows creates what is called a virtual address space. You can think of it as a big array of bytes, and that's the memory your process gets to access. So, if you wanted to access the memory, you could do:

         byte firstByte = memory[0];
         byte lastByte = memory[4294967295];

     Now, this doesn't actually work in C++, as there isn't a magic array called memory. But it illustrates the problem: in 32-bit land, you can't use a number higher than that. That's the maximum unsigned (only positive numbers) integer available to you. Under the hood, the memory allocator actually does work somewhat like that, and therefore once it runs out of numbers, it can't store any more bytes in memory. It has no more address space available. Once the allocator sees that all the space is used up, bam! Out Of Memory exception.

     Now, there are other practical limitations that mean the real space is closer to 3.5GB instead of 4, due to the kernel loading things into the front of the process memory as well as memory fragmentation. Memory fragmentation is particularly challenging as you get close to the memory limit. For example, let's pretend my whole memory is only 8 bytes. If I have two 4-byte things I want to load, no problem, right?

         1 1 1 1 2 2 2 2  <- Two items side by side

     But what if I load something that's 1 byte, load 4 bytes, delete the first object, then load another 4 bytes?

         1 - - - - - - -  <- Loaded object 1
         1 2 2 2 2 - - -  <- Loaded object 2, 3 bytes free
         - 2 2 2 2 - - -  <- Deleted object 1, 4 bytes free
         - 2 2 2 2 - - -  <- Try to load object 3 (4 bytes), get OOM

     Even with 4 bytes free, there aren't 4 contiguous bytes free. Trying to load another 4 bytes results in an OOM exception. We can't just move object 2, because the program may be using the pointer to that object currently, and that would cause Big Problems(tm) with the program. This is why sometimes you get OOMs way before other times. It all depends on that specific run, and how memory got fragmented during that run.

     However, there's good news! Fixing the allocator to address more memory requires only one step: change the compile target to 64-bit instead of 32-bit, and recompile the program (and associated libraries). Now you can access 16 exabytes of RAM! You might need a bigger case, though. That's where the "it's not so hard" camp is correct. Recompiling to 64-bit is super easy.

     The bigger problem is the underlying architecture. In the video game world, every collection of objects is on a pretty strict diet, so oftentimes you would budget maximums for each kind of item. You would take an inventory and say, ok, each tree looks like the following:

         struct Tree {
             byte type;
             float x;
             float y;
             float z;
         }

     Each floating point position is 32 bits (4 bytes), and the type is 1 byte. So, each tree is 13 bytes. Doesn't sound like a lot, but consider how many can end up on screen. Each item type is going to be tracked in the program inside some collection. Those collections are likely fixed size, given how old FS is as an engine and what kind of memory targets it was up against. Once you go 64-bit, all these different budgets have to be adjusted up, and it's a balancing act, because even though you can address 16 exabytes, it's pretty likely Joe Simmer doesn't have that kind of memory in their machine. They likely only have an 8GB-16GB budget. So, you still need to either come up with algorithms to dynamically adjust the budgets, rebalance them by hand, or just deal with the fact that if someone busts the budget, they're going to be getting massive stutters when they hit the disk for swap memory. Going 64-bit doesn't mean you get to ignore the practical memory limitations of current machines. And dynamic management scaling algorithms are not simple stuff, and they all fight against each other. So, that's where the "it's really hard" camp is correct. The real world still exists. (There's a small sketch of what such a budget looks like in code at the end of this post.)

     From the third party side, you're supplied with a library that you can link your library or program against, which supplies all the functions you can call in the SDK. If the SDK DLL changes platforms (i.e. to 64-bit), you need to recompile your software against the new DLL, because the actual memory locations of the SDK functions may have changed, and you need your software to run the correct code. It is also possible, although unlikely, that some vendors are dynamically loading the SDK DLL (in which case you wouldn't need to recompile), but I doubt it, as it is a ton more work for no particular benefit. This is the easy case. However, if the SDK changed some aspect of its functions, then it becomes more difficult. This could be relatively simple: for example, a function called ChangeAircraftPosition may have taken 3 32-bit floats as parameters, and now it takes 3 64-bit floats (doubles). That would likely be a pretty small change. But if they change how you move your aircraft completely, then things might be a lot more difficult. This is where the "difficult" camp was going with their thinking: that the move to 64-bit would also be a large API-breaking situation if LM decided to make big changes.

     In the end, it looks like LM chose a pretty reasonable path for third parties. The SDK API hasn't really changed, and all the various file formats still appear to be using 32-bit ints and floats, as far as I can tell. So, scenery designers don't really have to do much, and people using the SDK get by with minimal effort. Most of the changes are under the hood.
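     For what it's worth, here is a tiny sketch of the kind of fixed budget being described. It is purely illustrative (the pool class and the 64 MB figure are made up for the example), but it shows why going 64-bit doesn't make the budgeting problem go away:

         #include <cstddef>
         #include <cstdio>
         #include <vector>

         struct Tree {
             unsigned char type;
             float x, y, z;
         }; // 13 bytes of payload; alignment padding makes it 16 in practice

         // Illustrative fixed-budget pool: the cap is chosen against the player's RAM,
         // not against the (now enormous) 64-bit address space.
         class TreePool {
         public:
             explicit TreePool(std::size_t budgetBytes)
                 : maxTrees_(budgetBytes / sizeof(Tree)) { trees_.reserve(maxTrees_); }

             bool TryAdd(const Tree& t) {
                 if (trees_.size() >= maxTrees_) return false; // over budget: cull or skip
                 trees_.push_back(t);
                 return true;
             }

         private:
             std::size_t maxTrees_;
             std::vector<Tree> trees_;
         };

         int main() {
             TreePool autogenTrees(64 * 1024 * 1024); // e.g. a 64 MB autogen tree budget
             autogenTrees.TryAdd({0, 1.0f, 2.0f, 3.0f});
             std::printf("Budget holds up to %zu trees\n",
                         std::size_t(64 * 1024 * 1024) / sizeof(Tree));
             return 0;
         }

     Whether those caps get rebalanced by hand or managed by a dynamic scaling algorithm is exactly the hard part described above.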
  9. This is simply not true. I'm sure there was a mis-wording here, but just to clarify: Windows 7, 8, and 8.1 are non-server operating systems (i.e. desktop/workstation). All three fully support multi-GPU using either NVIDIA SLI or AMD Crossfire. Windows Server 2008, 2008 R2, 2012, and 2012 R2 are server operating systems. They are not optimized for desktop/workstation tasks and favor background threads. Currently, GeForce drivers are not supported on these operating systems. You can get hardware-accelerated GPU support if you have the correct type of card, such as a Quadro or Titan, but I would not recommend it unless you are using them to run a render farm. As far as X-Plane is concerned, it will of course attempt to make use of multiple GPUs if they are present. HOWEVER, the current deferred rendering implementation makes it difficult to share resources between multiple cards, so you may actually see performance decrease. Source: I'm a senior sysadmin. -Matt Nischan