About MatthiasKNU

Profile Information

  • Interests
    Aviation, Flight Simulation, Aerial Photography

  1. Dear all! First of all, here are the pictures from Uganda: And in the meantime another country is ready: Tanzania! As always, the overview map has also been updated. More information can be found here or via private message: https://www.fsxforum.de/viewtopic.php?t=19503 I wish you all a nice weekend!
  2. You are absolutely right, I do not have P3Dv6. But from what I've seen in the many screenshots, it is still miles away from what I'm talking about here... But please, if I am wrong, I would be very happy to be proven wrong with pictures! 😉
  3. Well, that's really an excellent question. And I'll say it quite honestly: I don't know. I think it simply comes down to user numbers: many users = many sceneries, few users = few sceneries. So one should actually ask the question the other way around: why does P3D have significantly fewer users than X-Plane, and in turn far fewer than MSFS? (Why fewer than X-Plane I personally cannot understand, since I don't get along with XP at all... 😉) With MSFS, however, many reasons are obvious: the landscape rendering is very good out of the box, as are the lighting and weather rendering, and it is easily available for PC and Xbox via the MSFS store. For 95% of users, that is what matters most. P3D has other strengths, but those interest fewer users. So, to come back to your question of what would have to change: the default P3D would have to change, in particular on the following points:
     - Landscape rendering (yes, with the help of AI's capabilities!)
     - Weather depiction (here too there is huge potential for AI, e.g. live weather derived not from model calculations as in MSFS but from live satellite and weather-radar data)
     - Engine and lighting rendering
     - The distribution channel: not only via the LM homepage, which alone scares many people off
     These are all points that we as users can't really influence; LM would have to act here. But since LM has a strong military focus, these points are probably not a priority. As I said, this is all just my subjective opinion, nothing I can back up with facts.
  4. Dear all, some time ago a short preview, today the full announcement! I went back to Africa and recreated some countries there, mainly in Central Africa. Let's start with a country in the far west: Gabon! Continuing east from there, the Republic of the Congo: And its big neighboring country, the Democratic Republic of the Congo: Further east are two smaller countries: Rwanda... ...and Burundi: Uganda is also nearby and completed, but I haven't managed to take screenshots yet. Sorry! But one more country further south: Botswana! And with that, I'm through with today's announcements! For more information and links, please send me a private message as always. The overview map is, as always, updated: https://www.fsxforum.de/viewtopic.php?t=19503 I wish you all a wonderful weekend!
  5. Yes, I totally agree with you there! The one prerequisite, however, is that the influence of materials and objects on light behavior, and thus on how things appear to the viewer in reality, is actually known, so that it can be recreated in an engine. And that is not yet the case. The "creation" of the scenery (terrain shape, vector data, buildings, trees, even traffic up to animated people, creatures, and birds) can already be done to some extent by AI, still with some cutbacks, provided the computing capacity is there. But the visualization of the results ultimately has to be done by the PC, and as long as the basics for realistic visualization are missing, you can't really exploit the advantages of this method. And this is the real problem: engines that are among the best at realistic rendering (for example, Unreal Engine) have not yet managed to process such large amounts of data, while engines that can handle such amounts don't look as good. AI may be able to help here in the future, but we're not there yet.
  6. Yes, NVIDIA has great tools for that, I won't deny that by any means. A lot can be solved with AI these days, also as an aid to understanding processes. Here, however, we have a completely different problem: what you are proposing is more or less a physically based world model that represents all processes in the real world without exception. This is necessary because it is impossible to determine all the parameters that influence the appearance in reality, worldwide (or at all?). Hence the "world model". Incidentally, this is also what climate science tries to do, but in a heavily slimmed-down form, because the computing times are too long. And there "only" the climate is calculated, with uncertainty factors, and we are still talking about supercomputers calculating for weeks and months. Now scale that up: we no longer want simple 2D/3D arrays of raster data at a resolution of 50 km/pixel, but a three-dimensional image in which every ray of light is simulated correctly, at an insane resolution. And not just one picture: we want 60 pictures per second or even more! And there we are: even a cluster of the most powerful computers in the world would give up. Hence my statement: this is not possible (yet).
  7. As someone who has trained AI systems and even programmed them myself: no, AI can't do that.
  8. That would surprise me: we can't even capture all the processes that influence "the way it looks" in reality, so how are we supposed to be able to recreate that on the computer?
  9. No problem, I was just wondering if I had overlooked something... 😉
  10. Yes, so how is it then? Without the -200ER expansion I can't activate the EFB either... With the -200ER expansion installed, it works. Here from PMDG: https://forum.pmdg.com/forum/main-forum/pmdg-777-forum/111342-efb-missing-in-777-300er-and-777-200lr#post111434 https://forum.pmdg.com/forum/main-forum/pmdg-777-forum/113229-777-efb-yes-or-no?p=113234#post113234
  11. ... and (as far as I know) you have to own the 777-200ER Expansion Pack.
  12. No, of course not: P3D is not capable of handling all of that. As always, it's not so black and white; it's the mix that makes it work. So yes, in many places 3D objects in P3D are useful, but in many places classic 2D elements are also what make it look good.
  13. Everything displayed on a screen is necessarily 2D, but of course you try to add depth to it. And of course, three-dimensional data is inherently larger than two-dimensional data. But what is actually meant here is: it can often perform better to simply display three-dimensional data directly than to compute something that merely looks 3D from two-dimensional data.
  14. I am still using this old but fantastic program for lowering texture sizes and optimizing textures: https://www.nexusmods.com/skyrim/mods/12801/
  15. Yes, such data exist: partly on a regional basis, partly per country, and globally with huge data gaps (OpenStreetMap). The problem with these vector datasets is that, except for OSM (which cannot be used globally), all of them have to be paid for, and they are not cheap. If we instead use remote-sensing data (i.e. mainly satellite systems), we have a spatial resolution of 10 m in the visible range (Sentinel-2). But to derive vegetation types we have to go into the multispectral range, and there we are at 30 m/pixel (Sentinel-2 and Landsat-9 OLI) or 90 m/pixel (Landsat-8 TIRS). One step further, to detect the phenological state of vegetation, road surfaces, roof-shingle types, etc., we need hyperspectral data; there we are at EnMAP and PRISMA with 30 m/pixel. But to capture field shapes, roads, etc., even 10 m/pixel is often already too coarse. As someone from DLR assured me, though, more than 30 m/pixel was simply not physically possible for EnMAP. All that's needed now is an algorithm that can process these petabytes and exabytes of data, and an engine that can display the results. PS: By the way, what a vector landscape based on 10 m/pixel data looks like can be seen in MSFS in regions where no aerial imagery is available, like here, south of Kisangani:
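The scale argument in post 6 can be made concrete with a back-of-envelope estimate. Every figure below (frame size, samples per pixel, cost per light path, machine throughput) is an illustrative assumption chosen for this sketch, not a measured value:

```python
# Rough cost of brute-force light simulation at 60 fps.
# All figures are illustrative assumptions, not measurements.

width, height = 3840, 2160          # 4K frame
fps = 60
samples_per_pixel = 10_000          # assumed: near-noise-free path tracing
flops_per_path = 1_000_000          # assumed: full spectral, many-bounce path

rays_per_second = width * height * fps * samples_per_pixel
flops_needed = rays_per_second * flops_per_path

exaflops_machine = 1e18             # roughly one top-tier exascale supercomputer
print(f"rays per second: {rays_per_second:.2e}")
print(f"FLOP/s needed:   {flops_needed:.2e}")
print(f"exascale machines needed: {flops_needed / exaflops_machine:.1f}")
```

Even under these generous simplifications, a single 4K stream already demands several of the fastest machines in existence, which is the point the post makes about clusters "giving up".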
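The "petabytes and exabytes" in post 15 can likewise be estimated roughly. Land area, band count, bit depth, and revisit interval below are rounded assumptions, not mission specifications:

```python
# Rough data volume of global hyperspectral land coverage.
# All figures are rounded assumptions, not mission specs.

land_area_m2 = 1.49e14       # Earth's land surface, approx.
pixel_size_m = 10            # Sentinel-2-class ground sampling distance
bands = 200                  # assumed hyperspectral band count
bytes_per_sample = 2         # assumed 16-bit quantization

pixels = land_area_m2 / pixel_size_m**2
snapshot_bytes = pixels * bands * bytes_per_sample

revisits_per_year = 365 // 5     # assumed 5-day revisit interval
archive_bytes = snapshot_bytes * revisits_per_year

print(f"pixels per snapshot: {pixels:.2e}")
print(f"one global snapshot: {snapshot_bytes / 1e15:.2f} PB")
print(f"one-year archive:    {archive_bytes / 1e15:.1f} PB")
```

A single global snapshot at 10 m with 200 bands is already around half a petabyte, and a year of repeated acquisitions lands in the tens of petabytes, before any higher-resolution or time-series products are added.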