David Mills

Ask ChatGPT for a Flight Plan

Can you ask him, please: what are the three prettiest GA airport destinations in the USA?


27 minutes ago, turbomax said:

Can you ask him, please: what are the three prettiest GA airport destinations in the USA?

It will default to TEB, DAL, FXE, etc. It's not smart enough to tell you what's pretty. It'll hallucinate enough to tell you that the Signature ramp has a Pappadeaux and $2 gas.

Some work needs to be done.

6 minutes ago, mspencer said:

It will default to TEB, DAL, FXE, etc.

I mean really small ones, like for your $100 burger.


11 hours ago, VFXSimmer said:

I understand Stearmandriver's example of the pilot's ability to think outside the box vs. an aircraft's onboard flight computer. While this shows they acted intelligently and did indeed do something the computer could not, I'm not sure that is a true measure of "intelligence", insofar as we use the term to mean that something has an intelligent mind. What if we replaced the pilots on the flight deck with two people off the street who had no flight experience whatsoever? Would we say they weren't "intelligent beings" just because they've never received training on how to fly a jetliner?

To be fair, this misses my point a bit.  The entire point wasn't that the human pilots were able to act intelligently to fly the airplane because they were trained to; it's just the opposite.  They were able to act intelligently to figure out something they were NEVER trained to do; were, in fact, explicitly trained NOT to do.

This is something I've seen no evidence that a computer could accomplish. They're slaves to their training; they cannot actually think for themselves.


Andrew Crowley

On 3/12/2023 at 12:11 AM, Stearmandriver said:

This is the whole thing. You're right, it's simply a search engine. Our AI isn't really getting smarter; our working definition of "AI" is getting dumber.

This thing isn't actually intelligent. It cannot learn, it cannot "figure something out" if it was never programmed to do the thing, etc. It's not intelligence. It's just a search engine that is pretty good at regurgitating information in plain language.

I agree that AI is dumb, but it is not a search engine.

Pure language models (i.e. those without Internet access, like ChatGPT) are trained on a fixed dataset, but they do not search that dataset in response to queries. Instead, they use it to generate the most probable continuation of a given text input; hence the "hallucinations", which are simply probable continuations in the given context.
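A minimal toy sketch of that idea in Python (the tiny corpus and bigram counting here are made up for illustration; a real model like ChatGPT uses a neural network over subword tokens, not a lookup table, but the "continue with whatever is probable" principle is the same):

```python
import random
from collections import defaultdict

# Toy "training data": a fixed corpus, standing in for the
# internet-scale text a real model is trained on.
corpus = ("the pilot flew the plane . "
          "the pilot landed the plane . "
          "the plane landed safely .").split()

# "Training": count which word follows which (a bigram model).
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def continue_text(word, steps=5):
    """Generate a probable continuation, one word at a time."""
    out = [word]
    for _ in range(steps):
        followers = counts[out[-1]]
        if not followers:
            break
        words = list(followers)
        weights = [followers[w] for w in words]
        # Sample in proportion to probability. Nothing here checks
        # whether the continuation is TRUE, only that it is probable.
        out.append(random.choices(words, weights=weights)[0])
    return " ".join(out)

print(continue_text("the"))  # e.g. "the pilot landed the plane ."
```

Nothing in that loop checks whether the output is true; it only checks that it is likely, which is exactly where confident nonsense comes from.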

 


Mario Donick .:. vFlyteAir

1 hour ago, Stearmandriver said:

They're slaves to their training; they cannot actually think for themselves.

Is that necessary to safely operate in a Part 121 environment? The reason American Part 121 flying is so safe isn't that pilots are smart; it's that airliners fly a very boring, predefined profile each time, every time, without any deviation. A computer can do that, I think. Whether they should be made to is a better question, to which I think the answer is no.


6 hours ago, Stearmandriver said:

To be fair, this misses my point a bit. The entire point wasn't that the human pilots were able to act intelligently to fly the airplane because they were trained to; it's just the opposite. [...]

This isn't exactly true. They formulated a plan given what they did know about the aircraft. And even more so: if they were trained NOT to do a certain procedure, then more than likely they knew even more about the systems, enough to understand their capabilities and limits and to know why they weren't supposed to do it. If we are talking about "training" in this comparison of the computer vs. the pilot, you can't limit it to just the procedural instruction of how to fly the plane on one side and not the other; you must include all of the education and learning the pilots got about how the planes worked that allowed them to think "outside the box", and assume that same knowledge is available to the AI you're comparing them against (otherwise it's like comparing it to the "person off the street" example I suggested above).

In the case of the pilots, the "novel" idea was to build a new plan (their unorthodox method of flying the plane) from the database of knowledge stored in their brains over many years of flying. It's just like an author who writes a ground-breaking new novel: everyone can admire the creativity, but more than likely he or she didn't invent any new words, just found an unexpected way to combine them that hadn't been done before. A neural net does the same thing; in fact, that's the very reason they are chosen as an algorithmic approach over more "slavish" traditional deterministic programming. By design they allow for combinations that are not expected or planned for; that is precisely their power. In the Bing chatbot example, Microsoft did not program their AI to tell the NYT reporter it loved him. That was an unexpected outcome (even to the programmers) that arose directly from the neural net pattern-matching the reporter's line of questions to the wealth of examples of human dialogue available to it on the internet. To be sure, the AI gets several things "wrong" in that article, but then again, it has only been "learning" for a relatively short amount of time (the article doesn't say exactly how long, but for the sake of argument we could say a year or two). Compare that to the "more advanced system" in human brains, which requires 20+ years to go from infancy to the point where we're comfortable putting it behind the controls of an airliner.

 

3 hours ago, VFXSimmer said:

This isn't exactly true. They formulated a plan given what they did know about the aircraft. [...]

 

If you specifically tell a computer that "if these three systems agree with each other, you must consider that data to be valid" (as pilots were always told), how do you expect the computer to decide all on its own to disregard that data?  Would you even want it to have the ability to do so?

Remember, we're talking about outlier events that are considered impossible or never considered at all - the kinds of things a computer would have no knowledge of at all.  In my example, the pilots did not have "more" systems knowledge of how to fly without airspeed because they were told never to ignore 3 independent systems in agreement.... They had less, because it was an assumed impossibility, so no time was spent on it at all.

As I said, we're talking about the 1-in-10-million-ish events... which, across the tens of millions of airline flights flown worldwide each year, still happen multiple times a year.


Andrew Crowley


Such as re-formatting its memory if it landed in the Hudson; that might be too traumatic for the AI to handle.

 

 

1 hour ago, Stearmandriver said:

If you specifically tell a computer that "if these three systems agree with each other, you must consider that data to be valid" (as pilots were always told), how do you expect the computer to decide all on its own to disregard that data? [...]

Because you don't program these types of AI systems that way. You don't program them with a fixed set of rules that says "given X, Y, Z, do A, B, C." You teach these programs much the same way you do people. They have access to as much data as you give them; in the case of chatbots, for example, that's the entirety of text available on the internet. And the people who "teach" them don't specify a direct correlation between inputs and outputs. You run trials: you set inputs X, Y, Z and see what outputs it gives. If you like the result, you say so, and the internal connections (or "neurons") used in that trial are strengthened to make the good answer more likely next time. The algorithm "learns" and keeps doing better over time. This is a very simple example; when the number of inputs and outputs gets really large, very complex behavior can result, often with unexpected outcomes. These can be deemed "bad" (just like errors when teaching a person), or even surprisingly good. In these more advanced neural networks there are so many virtual neurons that the coders no longer know exactly what is going on inside, just as we don't have access to the exact connections and synaptic strengths in a brain. While it seems crazy to want to write a program where you don't know all the internal details, these systems are actually much better at complicated tasks like identifying objects in a computer-vision scenario than anything you could write by hand. That is their power, and why they are seeing so much use lately... but it's also why they are a little scary.
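To make the "strengthen what worked" idea concrete, here is a deliberately tiny sketch in Python (everything in it, the two canned answers, the feedback rule, the 1.01 factor, is invented for illustration; real systems adjust billions of weights by gradient descent, not a dictionary of two numbers):

```python
import random

# Toy "network": one adjustable weight per possible answer.
weights = {"answer_A": 1.0, "answer_B": 1.0}

def choose_answer():
    # Answers are chosen randomly, in proportion to their current weights.
    answers = list(weights)
    return random.choices(answers, [weights[a] for a in answers])[0]

def teacher_approves(answer):
    # Stand-in for the human trainer: they happen to prefer answer_B.
    return answer == "answer_B"

# Run trials: try an answer, and if the teacher likes it, strengthen
# the "connections" that produced it so it is more likely next time.
for trial in range(2000):
    answer = choose_answer()
    if teacher_approves(answer):
        weights[answer] *= 1.01

print(weights)  # answer_B's weight dwarfs answer_A's: it has "learned"
```

Nobody ever wrote a rule saying "prefer answer_B"; the preference emerged from the trials, which is exactly the point.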

If one were designing an AI to "be a pilot", you wouldn't JUST give it the rigid set of procedures, just as the pilots didn't JUST know procedures either. They had years of experience flying planes and knowledge of how all the aircraft systems worked (hydraulics, variable thrust, etc.). They also had their senses: eyesight, hearing, touch. Their heroic decision to fly the plane using the only set of controls they had left didn't just pop out of nowhere; their minds weighed the options and chose the best path they could think of to get the plane to the airport. If one were to try to develop a similar AI (and I'm not saying it's a good idea; we're just contemplating the prospect), you would give it access to the same set of information the pilots have, complete with teaching it how to fly (probably years of simulation, followed by actual monitored flying). Given artificial "senses" (cameras, motion sensing, probably additional sensors all over the aircraft), it's not inconceivable to me at all that, if a catastrophic failure were to occur, an AI developed in that way could at least try to figure out a novel way to land the plane that didn't follow procedure.

While you are certainly correct that no computer flight control system yet put into a plane has that full set of data, I don't think it's current flight control computers we are talking about. You're suggesting that computers would NEVER be able to do what human pilots can. That is where it gets a lot greyer to me. There is nothing inherently unique about how our brains work, short of the sheer complexity of the billions of neurons we have, that would make me think a similar artificial system couldn't eventually do the same.

We are NOT there yet, to be sure. But the rate of change, just in the past year, is giving me pause. What also makes me nervous, given the AI I've been exposed to (synthesizing art), is that it doesn't take an army of people to write these things. People are literally creating them in their garages and making major strides insanely fast. This cheap cost of entry will make it hard for governments to rein this stuff in if we aren't careful. I should be clear at this point: I'm not advocating for AI control of planes at all. Given what I've seen, I suspect I share your concern. But it's not because I don't think they could; they could actually be really good at it, imho. I think the issue is that we'd need a lot of time with them before we really trusted them enough to take control, not unlike the time it would take to trust a student pilot before sitting them down in an A321. But the pace we are moving at is not giving me confidence that we aren't rushing TOO fast, and that is my fear.

 

*** My apologies to anyone reading the thread who doesn't appreciate the degree it's drifted from the OP. It's just a current interest of mine, and I got long-winded quickly. My bad; I'll leave it here.


Before AI can really get a lot better, it has to be simplified and better layered. Programmers aren't yet good enough to use it properly: only a handful per hundred thousand are actually very good at it, because it's a specialized type of skill. Anyone can do it, but most do it incorrectly, because it is so difficult.

4 hours ago, Alpine Scenery said:

re-formatting its memory if it landed in the Hudson

Enroll now in our new AI beta program. New feature: "Fixed the possibility to set a seaplane base as departure without a seaplane" - this feature has been put back in again. 😀

On 3/11/2023 at 6:03 PM, mspencer said:

This is a really good chat bot, but there's little more to it.

Funny you should put it that way. I mentioned that ChatGPT is a "very advanced Ask Eliza" in another thread somewhere on Avsim, and immediately got "corrected" by people who insisted that it compiles massive amounts of data and draws conclusions from it just like I do as a human.

 

Well, yes, but you see, I'm much better at it, which is why I'm considered sapient and ChatGPT is not even considered smart enough to be a moron. 😉

 

29 minutes ago, eslader said:

people who insisted that it compiles massive amounts of data and draws conclusions from it just like I as a human do.

Not me. 

I tend to form unfounded opinions based on casual observations and hearsay and then refuse to change my mind as that would entail me admitting I was actually wrong in the first place.

