hs118

SayIntentions ATC and GPT-4o announcement today

Recommended Posts

Posted (edited)
1 hour ago, sloppysmusic said:

Maybe you need one of these for an uninterrupted nap... sorry, I mean "FCOM study period"?

Drinking bird 

Ha! Yes, one of those would be perfect.

I remember years ago a colleague inventing a (hypothetical) pulley system attached to the flight deck door handle and the back of your shirt, so that when a cabin crew member came into the flight deck, the cord attached to the opening door would pull your slumped body into an upright position, giving the impression you were awake and alert.

Edited by jon b
  • Like 2

787 captain.  

Previously 24 years on 747-400. Technical advisor on PMDG 747 legacy versions QOTS 1, FS9 and Aerowinx PS1.

3 hours ago, abrams_tank said:

If AI is that expensive, then how is OpenAI able to allow almost anybody in the entire world to use ChatGPT for free?

Because they're treating it as an investment and a research tool. The more you use it, the more data they have to tune it.

LLMs are presently one of the most expensive compute workloads in existence. Even before the recent explosion, OpenAI was spending somewhere between $1-2M per day in compute costs, and the estimates just for training GPT-4o are in the hundreds of millions. These models are massive and need a ton of really expensive hardware to make them go.

A super basic GPT-3 setup, useful for a small handful of users, is typically quoted at around this specification:

  • 4x compute nodes (probably with something like A800s) - $40-50K/ea
  • 8x InfiniBand (200Gbps) per node - $16K/ea for cards, $20K for the switch
  • Storage Server - $35K, plus $20K for connectivity infrastructure

So, just for a small setup sized for a basic research lab, you're already in for $300+K of static hardware costs before enclosures, racking, cooling, and electricity. Will the costs come down? Incrementally, sure, but there are no magic wands to wave here nor any freebies to be had. A single VM of this configuration in Azure (before storage and bandwidth costs) is $20K/mo.
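As a rough back-of-the-envelope tally of that parts list (a sketch with my own midpoint figures, and assuming the quote means 8 InfiniBand cards in total, since the per-node count is ambiguous):

# Rough tally of the quoted GPT-3-class lab setup. Figures come from the list
# above; the 8-cards-total reading of the InfiniBand line is an assumption.
compute_nodes = 4 * 45_000       # 4 nodes at $40-50K each (midpoint used)
ib_cards = 8 * 16_000            # InfiniBand 200 Gbps cards at $16K each
ib_switch = 20_000               # InfiniBand switch
storage = 35_000 + 20_000        # storage server plus connectivity

total = compute_nodes + ib_cards + ib_switch + storage
print(f"Static hardware: ~${total:,}")        # ~$383,000, i.e. "$300+K"

azure_monthly = 20_000                        # quoted Azure VM rate
print(f"Cloud: ${azure_monthly:,}/mo = ${12 * azure_monthly:,}/yr")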

  • Like 5
  • Upvote 2

Posted (edited)
2 hours ago, martinboehme said:

It's currently a money pit for them. I have no insight into their business model, but I assume the idea is to get users exposed to the technology and make them see the value of AI so they can convert them to paying users later. It's the classic tech spiel of "get users first, then worry about how to monetize the product later".

I can't see the same thing working in MSFS. If they introduce AI ATC for free, there will be an uproar if they try to make users pay for the service later on.

So actually, I did some reading on what you said and yes, it does seem like the compute power required to run ChatGPT, especially with all the users using it for free, is a loss maker for OpenAI at the moment (the estimate is some 200 million users, but I'm not sure what percentage are free and what percentage are paid). But I think there will always be a free version of ChatGPT/Co-Pilot/Bard, for two reasons:

  1. Throttling and keeping a compute cap on each query for free accounts, which lowers the quality of the answer returned (the compute cap can sometimes lead to the famous "hallucination" of answers)
  2. Improvements in technology over time will lower the cost per query

With respect to 2., I'll simply cite this excerpt about Deep Blue versus modern-day chess programs that run on a desktop or laptop computer:

Quote

Nowadays, one can run chess programs even more advanced than Deep Blue on a standard desktop or laptop computer.

Deep Blue back in 1997 required a room full of supercomputers to beat Garry Kasparov. So for an average person to have a chess program as advanced as Deep Blue back in 1997, it would have cost hundreds of millions or billions of USD (or whatever they spent on Deep Blue back in 1997). But with improvements to hardware and to the software/algorithms over the years, everything gets cheaper. Now, you and I can afford a chess program in 2024 that can beat Deep Blue, for a fraction of the price.

I also see that they are coming up with "AI chips" to replace the GPUs currently providing the compute power for AI. It looks like Microsoft themselves have come up with an AI chip that will start to replace some of the Nvidia GPUs they are using for AI computation. When the industry comes up with tailor-made chips for AI (just like the tailor-made ASIC chips for mining Bitcoin, which phased out GPUs), the cost will come down further.

With respect to 1., OpenAI, Microsoft, Google, etc. can throttle the number of questions and cut down on the compute power spent before returning an answer to a particular question:

Quote

The cost-cutting took a toll: Bard stumbled over basic facts in its launch demonstration, shearing $100 billion from the value of Google’s shares. Bing, for its part, went off the rails early on, prompting Microsoft to scale back both its personality and the number of questions users could ask it in a given conversation.

Such errors, sometimes called “hallucinations,” have become a major concern with AI language models as both individuals and companies increasingly rely on them. Experts say they’re a function of the models’ basic design: They’re built to generate likely sequences of words, not true statements.

With respect to cutting down on the compute power for each question, this is akin to chess programs that return an answer earlier, which saves on compute time. For example, instead of computing 1 million chess positions, the program only computes, say, 10,000 positions, which is a lot cheaper (I am making up the numbers; I don't know how many positions a modern chess program computes). See the toy sketch below.
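Here's a toy sketch of that budget idea (a made-up game and a made-up budgeting rule, nothing like a real chess engine): a search that stops expanding once its node budget is spent returns a cheaper but lower-quality answer, the same trade-off as capping compute per LLM query.

# Toy negamax with a node budget, illustrating "compute fewer positions, pay less".
# Game (hypothetical): players alternately take 1-3 objects; taking the last wins.
def capped_negamax(counter, budget):
    """Return (score, nodes): +1 = side to move wins, -1 = loses, 0 = cut off."""
    nodes = 0

    def search(c):
        nonlocal nodes
        nodes += 1
        if c == 0:
            return -1                 # opponent took the last object: we lost
        if nodes >= budget:
            return 0                  # budget spent: give up with a neutral guess
        return max(-search(c - t) for t in (1, 2, 3) if t <= c)

    return search(counter), nodes

print(capped_negamax(12, budget=100_000))  # exact answer (~2,000 nodes searched)
print(capped_negamax(12, budget=50))       # much cheaper, but possibly wrong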

So I think there will always be a free version of Bard, ChatGPT, etc. Of course there will be paid versions as well, for companies that require more compute power and more accurate answers. And OpenAI is also making money by charging companies to use their API. But I think for everyday users there will always be a free version, as long as you have companies like Microsoft, Google, Amazon, and Apple competing against each other (technically, Co-Pilot isn't free because you have to own a copy of Windows, and if Apple comes out with AI too, you would have to own an Apple product to use it).

Edited by abrams_tank
  • Like 1

i5-12400, RTX 3060 Ti, 32 GB RAM

16 minutes ago, MattNischan said:

So, just for a small setup sized for a basic research lab, you're already in for $300+K of static hardware costs before enclosures, racking, cooling, and electricity. Will the costs come down? Incrementally, sure, but there are no magic wands to wave here nor any freebies to be had. A single VM of this configuration in Azure (before storage and bandwidth costs) is $20K/mo.

Thanks for the information. I still think, though, that the costs will come down a lot, along with increased efficiency over time. Similar to how costs came down, paired with increased efficiency, for chess programs over the years, you can now afford a chess program in 2024 that is even better than Deep Blue from 1997 (and Deep Blue cost a fortune to run back in 1997).


i5-12400, RTX 3060 Ti, 32 GB RAM

Posted (edited)

It is free to write prompts in a web browser on their website. It is very hard, if not impossible, to monetize the individual user writing those prompts.
 

On the other hand, as soon as you offer an API, businesses can build all kinds of services on top of the AI. That is where ChatGPT brings in the big cash.
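For example (a minimal sketch using OpenAI's public Python SDK; the model name, prompts, and token cap are placeholders I picked for illustration), a business integration pays per token through the API rather than using the free web UI:

# Sketch of a paid API call; requires the "openai" package and an OPENAI_API_KEY.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model choice
    messages=[
        {"role": "system", "content": "You are an ATC controller for a flight simulator."},
        {"role": "user", "content": "Request taxi clearance at KSEA from gate A4."},
    ],
    max_tokens=100,  # capping output tokens also caps the per-call cost
)
print(response.choices[0].message.content)

Every call like this is billed per input and output token, which is exactly the kind of usage that can be metered and monetized, unlike anonymous prompts typed into a browser.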

Edited by espent

// 5800X3D // RTX 3090 // 64GB RAM // HP REVERB G2 //

4 hours ago, abrams_tank said:

If AI is that expensive, then how is OpenAI able to allow almost anybody in the entire world to use ChatGPT for free

I assume the AI trains itself when interacting with "…almost anybody…"


Sometimes I have to admit to myself:
"Si tacuisses, philosophus mansisses"

 

12 hours ago, martinboehme said:

I believe they've announced the monthly price will come down from 30 dollars to 15 dollars (read this elsewhere). 

That would still be around $200 a year. Until this drops well below $100, it's just too expensive.

7 hours ago, bendead said:

49% of Open AI, .....

Makes me wonder what they will do with this tech for MSFS 2024

 doors that Open 100%

  • Like 1

AMD 7800X3D, Windows 11, Gigabyte X670 AORUS Elite AX Motherboard, 64GB DDR5 G.SKILL Trident Z5 NEO RGB (AMD Expo), RTX 4090,  Samsung 980 PRO M.2 NVMe SSD 2 TB PCIe 4.0, Samsung 980 PRO M.2 NVMe SSD 1 TB PCIe 4.0, 4K resolution 50" TV @60Hz, HP Reverb G2 VR headset @ 90 Hz, Honeycomb Aeronautical Bravo Throttle Quadrant, be quiet 1000W PSU, Noctua NH-U12S chromax.black air cooler.

60-130 fps. no CPU overclocking.

very nice.

1 hour ago, abrams_tank said:

the compute power required to run ChatGPT, especially with all the users using it for free, is a loss maker for OpenAI at the moment

OpenAI will find a way to eliminate its own cost.

  • Like 1

AMD 7800X3D, Windows 11, Gigabyte X670 AORUS Elite AX Motherboard, 64GB DDR5 G.SKILL Trident Z5 NEO RGB (AMD Expo), RTX 4090,  Samsung 980 PRO M.2 NVMe SSD 2 TB PCIe 4.0, Samsung 980 PRO M.2 NVMe SSD 1 TB PCIe 4.0, 4K resolution 50" TV @60Hz, HP Reverb G2 VR headset @ 90 Hz, Honeycomb Aeronautical Bravo Throttle Quadrant, be quiet 1000W PSU, Noctua NH-U12S chromax.black air cooler.

60-130 fps. no CPU overclocking.

very nice.

13 hours ago, martinboehme said:

I believe they've announced the monthly price will come down from 30 dollars to 15 dollars (read this elsewhere). 

They never said that. It will become cheaper though.


Cheers, Bert

AMD Ryzen 5900X, 32 GB RAM, RTX 3080 Ti, Windows 11 Home 64 bit, MSFS

Posted (edited)
2 hours ago, abrams_tank said:

Similar to how costs came down, paired with increased efficiency, for chess programs over the years, you can now afford a chess program in 2024 that is even better than Deep Blue from 1997 (and Deep Blue cost a fortune to run back in 1997).

It's not really very similar. Something like chess didn't come down in cost/size due to large leaps in algorithmic efficiency (there were some, but remember that 1997 was home to Pentium IIs at 300 MHz); it came down naturally as a result of Moore's Law. It's still basically just as complex to perform as it was when Deep Blue did it; it's just that you now happen to have that amount of power (and, even more so, memory) in a desktop.

Those computational leaps have been gone for nearly 15 years now. It's just more and more parallelism, which takes up space and is expensive. AI will be relatively expensive for a good decade or more, and the best indicator of this is that cloud compute costs haven't gone down much in 10 years. You don't get 1000% better compute for the same price from 2014 to 2024 like you did from 1997 to 2007; maybe 15-20% better.

Right now, AI is the new gold rush. Everyone is spending billions more than they're making, hoping that AI will become a monetizable household commodity, the same way billions were spent on Google Assistant, Alexa, Siri, and Cortana, with the same (in that case, dashed) hopes. However, it's unclear exactly how LLMs are going to fit that bill, as they are more like probabilistic text generators than any kind of intelligence in the way you or I think about it. They don't actually know the correctness or soundness of any given statement: they can't do math, they can't tell what is true or false, they have no concept of logic; they're just producing what seems like a conversationally plausible response to a text prompt.
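To make the "probabilistic text generator" point concrete, here is a toy sketch (the hard-coded probabilities stand in for billions of learned parameters; no real model looks like this): the core loop just samples a plausible next token, and nothing in it checks truth.

# Toy next-token sampler: the essential loop of an LLM, minus the actual model.
import random

next_token_probs = {  # invented probabilities, purely for illustration
    ("cleared", "for"): {"takeoff": 0.6, "landing": 0.3, "pizza": 0.1},
    ("for", "takeoff"): {"runway": 0.7, ".": 0.3},
}

def sample_next(context):
    dist = next_token_probs.get(context, {".": 1.0})
    tokens, weights = zip(*dist.items())
    return random.choices(tokens, weights=weights)[0]

tokens = ["cleared", "for"]
for _ in range(3):
    tokens.append(sample_next(tuple(tokens[-2:])))
print(" ".join(tokens))  # plausible-sounding, but nothing guarantees it's true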

Edited by MattNischan
  • Like 2
  • Upvote 2


SayIntentions is quickly gonna surpass BATC.

With a taxi editor AND Navigraph support, all that is needed is the same voice fidelity as BATC (the only thing BATC has going for it right now).

 

15 minutes ago, Rimshot said:

They never said that. It will become cheaper though.

See the discussion up-thread: this information was mis-reported on cruiselevel.de (now corrected there).

  • Like 1

Posted (edited)
38 minutes ago, MattNischan said:

It's not really very similar. Something like chess didn't come down in cost/size due to large leaps in algorithmic efficiency (there were some, but remember that 1997 was home to Pentium IIs at 300 MHz); it came down naturally as a result of Moore's Law. It's still basically just as complex to perform as it was when Deep Blue did it; it's just that you now happen to have that amount of power (and, even more so, memory) in a desktop.

Those computational leaps have been gone for nearly 15 years now. It's just more and more parallelism, which takes up space and is expensive. AI will be relatively expensive for a good decade or more, and the best indicator of this is that cloud compute costs haven't gone down much in 10 years. You don't get 1000% better compute for the same price from 2014 to 2024 like you did from 1997 to 2007; maybe 15-20% better.

Right now, AI is the new gold rush. Everyone is spending billions more than they're making, hoping that AI will become a monetizable household commodity, the same way billions were spent on Google Assistant, Alexa, Siri, and Cortana, with the same (in that case, dashed) hopes. However, it's unclear exactly how LLMs are going to fit that bill, as they are more like probabilistic text generators than any kind of intelligence in the way you or I think about it. They don't actually know the correctness or soundness of any given statement: they can't do math, they can't tell what is true or false, they have no concept of logic; they're just producing what seems like a conversationally plausible response to a text prompt.

In the case of chess engines, it seems like modern engines are able to take advantage of multiple cores: https://chess.stackexchange.com/questions/24338/how-does-engine-strength-scale-with-hardware. However, as per that link, there are diminishing returns as more cores are added (a quick sketch of why is below).
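A quick way to see those diminishing returns is Amdahl's law (a sketch; the 95% parallel fraction is my assumption for illustration, not a measured chess-engine figure):

# Amdahl's law: speedup(n) = 1 / ((1 - p) + p / n), p = parallel fraction of work.
def speedup(cores, p=0.95):
    return 1 / ((1 - p) + p / cores)

for n in (1, 2, 4, 8, 16, 64, 1024):
    print(f"{n:>5} cores -> {speedup(n):5.2f}x")
# Each doubling of cores buys less and less; the serial 5% caps the speedup at 20x.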

Concerning AI, I believe the advancement of AI chips will lower the cost of computing for AI. AI chips remind me of the ASIC chips used to mine Bitcoin. Back in the day, GPUs were used to mine Bitcoin, but with the appearance of ASIC chips, GPUs were no longer efficient for mining; ASICs became far more efficient. I think the same may happen with AI chips: as they become more advanced, they will reduce the cost to compute AI over time (assuming everything else stays equal, which may not be the case as AI algorithms become more advanced).

Edited by abrams_tank

i5-12400, RTX 3060 Ti, 32 GB RAM

4 hours ago, UAL4life said:

SayIntentions is quickly gonna surpass BATC.

With a taxi editor AND Navigraph support, all that is needed is the same voice fidelity as BATC (the only thing BATC has going for it right now).

 

At its cost, it will only attract a very small percentage of simmers.

  • Upvote 2

 

BOBSK8    MSFS 2020, PMDG 737-600-800, Fenix A320, FSLTL, TrackIR, Avliasoft EFB2, ATC by PF3,

A Pilot's Life V2, CLX PC, Auto FPS, Active Sky FS, PMDG DC6, A2A Comanche, Milviz C310

 

