David Mills

Ask ChatGPT for a Flight Plan


45 minutes ago, Glenn Fitzpatrick said:

Not me. 

I tend to form unfounded opinions based on casual observations and hearsay and then refuse to change my mind as that would entail me admitting I was actually wrong in the first place.

This is the way.


Take-offs are optional, landings are mandatory.
The only time you have too much fuel is when you're on fire.
To make a small fortune in aviation you must start with a large fortune.

There's nothing less important than the runway behind you and the altitude above you.
It's better to be on the ground wishing you were in the air, than in the air wishing you were on the ground.

6 hours ago, Glenn Fitzpatrick said:

Not me. 

I tend to form unfounded opinions based on casual observations and hearsay and then refuse to change my mind as that would entail me admitting I was actually wrong in the first place.

🤣

To be fair, I think everyone does that from time to time... just a casual observation.


i7-4790k @ 4.4ghz for the moment. Asus z87-k mobo. GTX 1080, 32gb ram. couple of SSDs....Saitek X52

36 minutes ago, GaryK said:

🤣

To be fair, I think everyone does that from time to time... just a casual observation.

If I proved you wrong, would you change your mind? 🤣




Pilot to AI: "HAL, extend the landing gear now."

AI to pilot: "I'm sorry, Dave. I'm afraid I can't do that." 🙈


Sometimes I have to admit to myself:
"Si tacuisses, philosophus mansisses"

 

On 3/13/2023 at 12:30 PM, Stearmandriver said:

To be fair, this misses my point a bit.  The entire point wasn't that the human pilots were able to act intelligently to fly the airplane because they were trained to; it's just the opposite.  They were able to act intelligently to figure out something they were NEVER trained to do; were, in fact, explicitly trained NOT to do.

This is what I've seen no evidence that a computer could accomplish.  They're slaves to their training; they cannot actually think for themselves.

I'm pretty sure those pilots, who happen to be humans, were "trained" (or better, "learned") to recognize patterns (like every human does) and therefore deviations from patterns, like when engine/wind sounds somehow seem "weird" in relation to the 3 speed indicators. What you might call "intuition" is nothing else but a very primitive pattern-recognition algorithm. Which, again, is exactly what a neural net can learn. I'll try to make it even clearer:

Imagine ChatGPT knows nothing about that situation with the 3 faulty speed indicators, as if it had never happened before. Now you put ChatGPT into a cockpit, make him able to read the speed indicators, but also make him able to "hear" (or analyze frequencies, if you want) engine and wind sounds. He knows how a specific airplane normally sounds at certain speeds and N1s etc. Therefore he notices a deviation, calculates how probable it is that all 3 speed indicators are malfunctioning, calculates how probable it is that all 3 speed indicators are working but the engine sounds, N1 etc. are "off", then maybe does some additional checks, and then decides that the speed indicators are likely all wrong. Where's the difference to a human pilot?
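To put that probability comparison into concrete terms, here is a toy sketch of the kind of check I mean. Every number and name in it is invented for illustration; a real system would learn these priors from data rather than hard-code them:

```python
# Toy decision: trust 3 agreeing airspeed indicators, or the independent
# speed estimate derived from engine/wind sound? All priors are invented.
def trust_indicators(p_common_cause):
    """True if the three agreeing indicators should still be trusted
    when they disagree with the sound-based speed estimate."""
    p_one_fails = 0.01                # invented prior: one indicator fails
    p_independent = p_one_fails ** 3  # all three fail independently
    p_estimate_off = 0.05             # invented prior: the estimate is wrong
    # A common cause (e.g. iced-over pitot tubes) can take out all three
    # "independent" indicators at once -- that is the crucial term.
    p_all_fail = p_independent + p_common_cause
    return p_all_fail < p_estimate_off

print(trust_indicators(0.0))  # True: independent triple failure too unlikely
print(trust_indicators(0.2))  # False: common-cause failure is now likelier
```

The point of the toy: as long as failures are modelled as independent, the "never question three agreeing indicators" rule falls straight out of the math; add a common-cause term and the same math overrules it.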

Edited by Fiorentoni

6 hours ago, WestAir said:

If I proved you wrong, would you change your mind? 🤣

No point; my proof is conclusive. Unfortunately, much of my evidence falls into the category of "topics best not discussed on forums" 😉



7 hours ago, Fiorentoni said:

I'm pretty sure those pilots, who happen to be humans, were "trained" (or better, "learned") to recognize patterns (like every human does) and therefore deviations from patterns, like when engine/wind sounds somehow seem "weird" in relation to the 3 speed indicators. What you might call "intuition" is nothing else but a very primitive pattern-recognition algorithm. Which, again, is exactly what a neural net can learn. I'll try to make it even clearer:

Imagine ChatGPT knows nothing about that situation with the 3 faulty speed indicators, as if it had never happened before. Now you put ChatGPT into a cockpit, make him able to read the speed indicators, but also make him able to "hear" (or analyze frequencies, if you want) engine and wind sounds. He knows how a specific airplane normally sounds at certain speeds and N1s etc. Therefore he notices a deviation, calculates how probable it is that all 3 speed indicators are malfunctioning, calculates how probable it is that all 3 speed indicators are working but the engine sounds, N1 etc. are "off", then maybe does some additional checks, and then decides that the speed indicators are likely all wrong. Where's the difference to a human pilot?

You missed the point where "he" (an algorithm gets a pronoun now?) would have been specifically programmed to never ignore multiple concurring independent systems.

If ChatGPT (or even "real" AI, if such a thing existed) was specifically told to never do something... how would it do it?

And who would have ever known before this to give the AI sensory inputs like sound, vibrational feel (not the vibe engine instruments), etc?  In retrospect it seems obvious, but remember that we are talking about things that were never anticipated or were considered impossible - how would we even know which inputs to give a computer to be able to recognize these occurrences?

The main thing, though, is that pilots were always told explicitly to never ignore multiple concurring independent systems, so the computer would of course have been told the same.  So, how do you have a computer that both follows orders and is able to break them at whim? How do you replicate human judgement, in other words?

You surely know this cannot be done today; it's never been demonstrated.

You keep focusing on how it would solve problems you know how to make it solve, which isn't what we're talking about.  I can only assume this will be a very common problem in the industry - the engineers developing this stuff having no concept of the human factors involved.


Andrew Crowley

1 hour ago, Stearmandriver said:

You missed the point where "he" (an algorithm gets a pronoun now?) would have been specifically programmed to never ignore multiple concurring independent systems.

If ChatGPT (or even "real" AI, if such a thing existed) was specifically told to never do something... how would it do it?

And who would have ever known before this to give the AI sensory inputs like sound, vibrational feel (not the vibe engine instruments), etc?  In retrospect it seems obvious, but remember that we are talking about things that were never anticipated or were considered impossible - how would we even know which inputs to give a computer to be able to recognize these occurrences?

The main thing, though, is that pilots were always told explicitly to never ignore multiple concurring independent systems, so the computer would of course have been told the same.  So, how do you have a computer that both follows orders and is able to break them at whim? How do you replicate human judgement, in other words?

You surely know this cannot be done today; it's never been demonstrated.

You keep focusing on how it would solve problems you know how to make it solve, which isn't what we're talking about.  I can only assume this will be a very common problem in the industry - the engineers developing this stuff having no concept of the human factors involved.

The problem is that your mental concept of an AI is that of a robot that does only what you tell him to do, nothing more than an algorithm in an electrical box. That is *not* how a neural net AI works. An AI with a neural net learns new tasks, expands and corrects its abilities, etc. It doesn't follow "orders" like a coffee machine. In our example, a neural net AI would have literally *learned* to break with the usual path of never questioning 3 speed indicators indicating the same thing, just like a human pilot *learned* to not always do what's written in the manual. A neural net AI would, given enough sensors and given the, let's call it, experience level of a human pilot, indeed be able to solve problems it does not yet "know" how to solve.
Unless you understand and accept that the technological reality of a neural net is vastly different from your common "computer", this discussion is futile, I'm afraid.
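To illustrate what "learned" means here, a toy sketch: a single crude perceptron-style unit that starts out encoding the hard rule "never distrust the indicators" and, after contrary experience, ends up overriding it. This is a deliberately primitive stand-in for a neural net; all numbers are invented:

```python
# One weight + bias map a "sound/speed mismatch" score in [0, 1]
# to a decision. Initially the bias encodes the rule: never distrust.
weight, bias = 0.0, -1.0

def decide(mismatch):
    return weight * mismatch + bias > 0   # True = distrust the indicators

# Experience: flights where a large mismatch meant faulty indicators.
training = [(0.9, True), (0.8, True), (0.1, False), (0.0, False)]
for _ in range(50):                       # crude perceptron-style updates
    for mismatch, faulty in training:
        target = 1.0 if faulty else 0.0
        output = 1.0 if decide(mismatch) else 0.0
        weight += 0.5 * (target - output) * mismatch
        bias += 0.5 * (target - output)

print(decide(0.85))   # True: experience has overridden the built-in "never"
print(decide(0.05))   # False: ordinary flights still trust the indicators
```

Nobody re-programmed the rule; the update loop moved the weights until the unit's behaviour contradicted its initial configuration. That is the difference from a coffee machine.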



ChatGPT isn't hooked up to the internet; it's working from training data that is several years old. Bing's version of it is connected, though.

 

Edited by Tuskin38

4 hours ago, Fiorentoni said:

The problem is that your mental concept of an AI is that of a robot that does only what you tell him to do, nothing more than an algorithm in an electrical box. That is *not* how a neural net AI works. An AI with a neural net learns new tasks, expands and corrects its abilities, etc. It doesn't follow "orders" like a coffee machine. In our example, a neural net AI would have literally *learned* to break with the usual path of never questioning 3 speed indicators indicating the same thing, just like a human pilot *learned* to not always do what's written in the manual. A neural net AI would, given enough sensors and given the, let's call it, experience level of a human pilot, indeed be able to solve problems it does not yet "know" how to solve.
Unless you understand and accept that the technological reality of a neural net is vastly different from your common "computer", this discussion is futile, I'm afraid.

Oh no, I fully understand that real AI is fundamentally different from everything we've seen thus far.  That's my entire point actually. You're describing things that do not exist, and according to industry analysts, probably never will.




Just following Fiorentoni's point, but possibly taking the topic into the realms of Hangar Chat...
Do we want to control what we create, or will we be satisfied with influence only? Do we want controllable or autonomous AI?

See Mary Shelley, Philip K. Dick et al. The stories may seem trite, cliched or hackneyed by today's standards, but they are still incredibly profound and totally relevant with regard to creating AI and understanding what intelligence is.

Control vs. influence is at the heart of this. I fear that humans operating in a patriarchy are always going to struggle understanding the distinction.

Do we want autonomous or controllable AI pilots (or anything else)? If we want autonomous AI, then I would ask: what's the point in removing humans? Is AI a vanity project?

Controllable AI is an oxymoron anyway.

 

Edited by GaryK


4 hours ago, Stearmandriver said:

Oh no, I fully understand that real AI is fundamentally different from everything we've seen thus far.  That's my entire point actually. You're describing things that do not exist, and according to industry analysts, probably never will.

Neural networks have been researched and used in the academic world for a long while; the main difference in recent years is the continuous improvement of hardware optimized for AI computation and, of course, enough big companies willing to invest in this field, which in turn drives even more money into the development of even better hardware.


The good thing about researching stuff like machine learning is that you just need to crunch numbers fast. There is no need for ridiculous precision; even 32-bit is good enough.

Which means the hardware is relatively cheap compared to, say, genomics or statistics. Cheap meaning tens of thousands of dollars, not hundreds of thousands, just for one GPU.
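A quick standard-library illustration of the precision trade-off; the array sizes here are arbitrary:

```python
from array import array

# A million parameters stored as 32-bit vs 64-bit floats.
weights32 = array('f', [0.0]) * 1_000_000
weights64 = array('d', [0.0]) * 1_000_000
print(weights32.itemsize, weights64.itemsize)   # 4 8 (bytes per value)
print(len(weights32) * weights32.itemsize)      # 4000000 bytes, half of 64-bit

# 32-bit loses precision, but nowhere near enough to matter for training.
x32 = array('f', [0.1])[0]
print(x32 == 0.1, abs(x32 - 0.1) < 1e-7)        # False True
```

Half the bytes per parameter means twice the model in the same memory, and GPUs execute far more 32-bit (or 16-bit) operations per second than 64-bit ones, which is exactly why ML hardware is cheap relative to double-precision scientific computing.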


A look at the behavioural problems and almost complete lack of critical thinking in the generation that has never known a time without Google and Pornhub tells me this whole online direction is a road to hell.

The brave new world is f@cked

On 3/11/2023 at 1:45 PM, David Mills said:

[screenshot: ChatGPT's response to an IFR flight-plan request]

 

In the example above, I didn't ask for an ILS frequency at KLAX, but ChatGPT will give you that information as well if you ask. Try it out. ChatGPT is also very good at figuring out what you want even if you express your request poorly. For example, I mis-typed IFR as IRF, but it still understood my intention. I'll bet you $5 that, within one year, ChatGPT (a Microsoft-backed product) will be incorporated into MSFS to tremendously boost the quality of ATC.

EDIT: The web address is https://openai.com/blog/chatgpt

Hmmm... this makes me think of the dangers of relying on ChatGPT info:

I asked:
Who is David Mills in avsim?

ChatGPT replied:
 

David Mills was a prominent member of the AVSIM community, which is a website and forum dedicated to flight simulation enthusiasts. He was known for his contributions to the website and the wider flight simulation community, particularly in the areas of aircraft design and development.

David Mills passed away in 2017, and his contributions to the community were widely recognized and appreciated by his fellow flight simulation enthusiasts. Many members of the AVSIM community have expressed their condolences and shared memories of David Mills, highlighting his generosity, expertise, and passion for flight simulation.


Win 11 Pro, MSFS Deluxe, Quest 2.

