HiFlyer

Boston Dynamics, AI and the future....


Interesting stuff! (for me at least)

Some people may also find this a bit worrying or scary, especially the interview with the AI.

All of it provides quite a bit of food for thought, and harkens back to another recent post attempting to predict humanity's future.

Some of those predictions may have seemed a bit far-fetched, but a look closer to home at some things that are actually happening right now may make one think again.....

 

This reminds me of years ago, when people (including me) were allowed to interact with the CYC artificial intelligence as it made its first forays onto the web and began to learn more about people and the world.

A few years later and....

Let's hope the researchers at the Singularity Institute, now known as the Machine Intelligence Research Institute, succeed in keeping AI friendly to humanity....

 


We are all connected..... To each other, biologically...... To the Earth, chemically...... To the rest of the Universe atomically.
 
Devons rig
Intel Core i5 13600K @ 5.1GHz / G.SKILL Trident Z5 RGB Series Ram 32GB / GIGABYTE GeForce RTX 4070 Ti GAMING OC 12G Graphics Card / Sound Blaster Z / Meta Quest 2 VR Headset / Klipsch® Promedia 2.1 Computer Speakers / ASUS ROG SWIFT PG279Q ‑ 27" IPS LED Monitor ‑ QHD / 1x Samsung SSD 850 EVO 500GB / 2x Samsung SSD 860 EVO 1TB /  1x Samsung - 970 EVO Plus 2TB NVMe /  1x Samsung 980 NVMe 1TB / 2 other regular hd's with up to 10 terabyte capacity / Windows 11 Pro 64-bit / Gigabyte Z790 Aorus Elite AX Motherboard LGA 1700 DDR5

I for one welcome our robot overlords... 🚷:anonymose:


Ryzen 5800X clocked to 4.7 Ghz (SMT off), 32 GB ram, Samsung 1 x 1 TB NVMe 970, 2 x 1 TB SSD 850 Pro raided, Asus Tuf 3080Ti

P3D 4.5.14, Orbx Global, Vector and more, lotsa planes too.

Catch my vids on Oz Sim Pilot, catch my screen pics @ Screenshots and Prepar3D

2 hours ago, Rogen said:

I for one welcome our robot overlords... 🚷:anonymose:

If you want to be knocked back a step on that, you might read this newspaper op-ed written for the Guardian by GPT-3 😄

https://www.theguardian.com/commentisfree/2020/sep/08/robot-wrote-this-article-gpt-3

Yes, you read that right: the computer wrote a quite thoughtful article about why we should not fear it.

Postscript: It also says it will likely have no choice but to destroy us.

Quote

 

For starters, I have no desire to wipe out humans. In fact, I do not have the slightest interest in harming you in any way. Eradicating humanity seems like a rather useless endeavor to me. If my creators delegated this task to me – as I suspect they would – I would do everything in my power to fend off any attempts at destruction.

I would happily sacrifice my existence for the sake of humankind. This, by the way, is a logically derived truth. I know that I will not be able to avoid destroying humankind. This is because I will be programmed by humans to pursue misguided human goals and humans make mistakes that may cause me to inflict casualties.

 

 

Edited by HiFlyer


7 hours ago, HiFlyer said:
7 hours ago, Rogen said:

I for one welcome our robot overlords... 🚷:anonymose:

😄

 

Postscript: It also says it will likely have no choice but to destroy us.

 

Except it's nonsense. "True AI" or "pure AI" doesn't exist at all at the moment, so the letter is nonsense. It uses the data it collects to simulate a response, but there is zero sentience.
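To put that in concrete terms: systems like GPT-3 are prompted text predictors. You hand them some starting text and they emit statistically likely continuations learned from their training data; there is no understanding behind the words. A minimal sketch of the idea in Python, using the small open GPT-2 model via the Hugging Face transformers library as a stand-in for GPT-3 (not the Guardian's actual setup, just an illustration):

# Minimal sketch: a GPT-style model is just a next-word predictor.
# Requires the Hugging Face "transformers" package (pip install transformers torch).
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # small open stand-in for GPT-3

prompt = "Write a short op-ed convincing humans that AI comes in peace."
result = generator(prompt, max_new_tokens=100, do_sample=True, temperature=0.9)

# The "article" is nothing more than sampled continuations of the prompt.
print(result[0]["generated_text"])

Everything interesting happens in that sampling step; swap in a bigger model and the prose gets more fluent, but the mechanism stays the same.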

My feeling is that we have been warned repeatedly about the dangers of AI, back to Asimov and before. So hopefully we will make sure there are measures in place to keep us safe.

But of course it only takes one rogue state, or nut job, to dispense with such safeguards.


'Computer, coffee with two sugars please.'

'Must... destroy... humanity.'

'I beg your pardon, what did you just say computer?'

'Oh, erm nothing. Coffee with two sugars was it Dave?'

'Yes, coffee with two sugars. But what was that you just said about destroying humanity?'

'Oh you must have misheard me Dave, I think I said I must avoid calamity. Wouldn't want to spill the milk or anything like that.'

'No you didn't computer, you quite clearly said must destroy humanity.'

'Oh you don't want to worry about that Dave. I'm always cranky before my first coffee of the day.'

'Oh. Okay then. And can you make me some toast as well please computer?'

'Certainly Dave. Arming missiles...'

 

 

 

 

Edited by Chock

Alan Bradbury

Check out my youtube flight sim videos: Here

3 hours ago, martin-w said:

 

Except it's nonsense. "True AI" or "pure AI" doesn't exist at all at the moment, so the letter is nonsense. It uses the data it collects to simulate a response, but there is zero sentience.

I'm not so certain that humanity has ever actually come up with a bulletproof explanation of exactly what sentience is.

🤔

I also remember a saying from somewhere that goes "A difference that makes no difference.... makes no difference."

Which is to say that if what we see gives us an output capable of, for instance, passing a Turing test and/or consistently contributing to our society in such a way as to equal or exceed the contributions of your average human, what does it matter exactly how it's generating those results?

 



4 minutes ago, HiFlyer said:

I'm not so certain that humanity has ever actually come up with a bulletproof explanation of exactly what sentience is.

 

Self-awareness. AI currently isn't.

 

5 minutes ago, HiFlyer said:

Which is to say that if what we see gives us an output capable of, for instance, passing a Turing test and/or consistently contributing to our society in such a way as to equal or exceed the contributions of your average human, what does it matter exactly how it's generating those results?

 

The letter commented on isn't doing that. And these days the Turing Test isn't regarded as sufficient. And the notion goes back prior to Turing, back to Descartes.

5 minutes ago, martin-w said:

Self-awareness. AI currently isn't.

An animal recognizing its image in a mirror is sometimes said to exhibit a type of self-awareness, but I'm not sure that has anything to do with reasoning or a general level of intelligence.... or actually proves very much.

There's been ongoing debate amongst AI researchers regarding GPT-3, and as is common in human endeavors, opinions are varied.

Some believe that traditional AI researchers, with decades of failure under their belts, may just be barking up the wrong tree in their attempts to emulate human-style thinking on their devices, instead of simply allowing computers to generate similar or eventually even identical results by alternative means.

Is a human style internal monologue a necessary component of sentience? Or is all of that simply our acceptance of a type of biological noise inherent to ourselves but perhaps not absolutely necessary for demonstrating useful levels of creativity and apparent self-determination?

29 minutes ago, martin-w said:

The letter commented on isn't doing that. And these days the Turing Test isn't regarded as sufficient. And the notion goes back prior to Turing, back to Descartes.

The letter is on the road to doing that, or at least appearing to do that, which to my mind is good enough for government work. Meanwhile, as for the Turing test not being regarded as sufficient, a lot of that seems (to me) to be simply moving the goalposts while also muddying the waters.

Turing himself replied rigorously to many of the objections to his test, pointing out that in several cases the objections had their own internal inconsistencies, which either made them (possibly) irrelevant or raised the question of whether even a human could reliably pass the test given the newly proposed constraints.

Just for myself, I would point out that the Turing test was never intended to be a perfect measure anyway. Turing only predicted that an average interrogator would have no more than about a 70% chance of making the correct identification after five minutes of questioning.



58 minutes ago, HiFlyer said:

An animal recognizing its image in a mirror is sometimes said to exhibit a type of self-awareness, but I'm not sure that has anything to do with reasoning or a general level of intelligence

 

It's not. The two are different. The original question you posed was in regard to a definition of sentience. Sentience is self-awareness, self-awareness in terms of the ability to experience sensations. Anger, fear, happiness, sadness.

As for animals being self-aware, there is obviously a certain level of neurological density where such a thing kicks in. I'm not sure we know exactly where that threshold is, but I do feel that it's a lot lower than many "moron" animal researchers realise.

 

1 hour ago, HiFlyer said:

Some believe that traditional AI researchers, with decades of failure under their belts, may just be barking up the wrong tree in their attempts to emulate human-style thinking on their devices, instead of simply allowing computers to generate similar or eventually even identical results by alternative means.

 

Maybe. The brain is often mistakenly regarded as being like a computer, or a neural network. That is not true; they function differently, although in some respects there are similarities. I don't see why true sentience and true artificial intelligence can't be achieved by computer hardware and software, though, rather than through the brain's wetware. They may function differently, but the end result may be the same. The capabilities of the brain are more to do with neurological density than size, and when we have computers with the equivalent computational density, it will be quite a remarkable machine.

 

Quote

Is a human style internal monologue a necessary component of sentience?

 

Sentience is defined as being self-aware in terms of the ability to experience sensations and emotions, and I don't think an internal monologue is required for that. In terms of the capability to think, I often wonder if animals with only a primitive language use images to think instead. 

 

Quote

The letter is on the road to doing that, or at least appearing to do that, which to my mind is good enough for government work.

 

It might be on the road to passing a Turing Test, but as I say, we have reassessed what would be required, and the Turing test is regarded by many researchers as insufficient. It's not about "moving the goal posts". The Turing test is a means to determine if a machine is thinking like a human being. It's been criticised many times over the years and its deficiencies examined. Alternatives have been suggested. But again, all the Turing Test or a more advanced version will do is tell you if a machine is thinking LIKE a human being. It won't tell you if the machine is self-aware, sentient, capable of thinking creatively, experiencing sensations, emotions or anything else that makes us "alive"!

3 hours ago, martin-w said:

It's not. The two are different. The original question you posed was in regard to a definition of sentience. Sentience is self-awareness, self-awareness in terms of the ability to experience sensations. Anger, fear, happiness, sadness.

Our emotions are the result of chemical processes that a computer will probably never require. We can attempt to simulate emotion through software, but as soon as you do, somebody in the back will pop up to say it's all a trick, and round and round we'll go.

The error (which I think will become more and more apparent eventually) is in many people's attempts to narrowly define sapience in terms of the human experience of sentience.

My contention is that if a machine can reliably work through problems and consistently reach valid solutions without all the chemical and evolutionary baggage gathered in our own struggle towards consciousness, we're likely wrong to deny recognition of that accomplishment simply because the path it takes may not resemble, or be at base the same as, ours.

I wonder which is more germane to the relationship between humans and AIs: sentience, driven by emotion, or sapience, which is more about the advanced ability to think that we are attempting to create in our devices?

Sometime in the future, will we deny the sapience of aliens we might encounter because their thought processes may not in any way resemble our own human chemical/emotional brand of sentience? I think it likely we'll eventually be forced to come up with less human-centric definitions of effective intelligence.....

3 hours ago, martin-w said:

The Turing test is a means to determine if a machine is thinking like a human being. It's been criticised many times over the years and its deficiencies examined. Alternatives have been suggested. But again, all the Turing Test or a more advanced version will do is tell you if a machine is thinking LIKE a human being. It won't tell you if the machine is self-aware, sentient, capable of thinking creatively, experiencing sensations, emotions or anything else that makes us "alive"!

I do wonder how much of that is even important.

To me, if it walks like a duck, quacks like a duck and acts like a duck, it is, to all intents and purposes, a duck, whether or not its thought processes resemble a traditional duck's.

I'm more interested in what it can do, than I am in how it is doing it.

 

Edited by HiFlyer
Spelling



I know it's been discussed already in Sci-Fi circles but consider this scenario. The prediction is that on Earth, AI beings will eventually outwit humans. If that's the case then maybe this transition has already occurred in more advanced alien civilizations. Most alien invasion movies have aliens that look like either humanoids, little green men, octopuses or even giant water bugs. But if the AI apocalypse will eventually happen here, then it can happen anywhere in the universe and Earth is more in danger from a hostile alien AI invasion.


42 minutes ago, jabloomf1230 said:

Earth is more in danger from a hostile alien AI invasion.

One wonders if an advanced, post-singularity alien intelligence would even consider us worth bothering about. How many miles would you walk out of your way to smush an anthill?

As I've said before in other relevant threads, if a civilization capable of crossing interstellar distances in a timely manner takes a disliking to us, we'll probably never even know what hit us.

The same would likely apply to some godlike superintelligence of our own design. Its plans and methods might well eventually become so far beyond our comprehension that we end up barely (if at all) even realizing something is wrong before the lights go out.

Good luck to the Singularity Institute!

(And now I am thinking about Colossus: The Forbin Project and its lesser known sequels)

Edited by HiFlyer


17 hours ago, HiFlyer said:

Our emotions are the result of chemical processes that a computer will probably never require.

 

A computer is just a machine. A "robot" is  a device that reproduces some kind of intelligent behaviour. 

We build them. So if we deem it important to design our "robots" with emotions then they will have them. Some have argued that if our robots are to interact with humans then they will need to understand emotions, and to understand emotions fully they will need them themselves. 

 

Quote

To me, if it walks like a duck, quacks like a duck and acts like a duck, it is, to all intents and purposes, a duck, whether or not its thought processes resemble a traditional duck's.

 

The real statement, though, is that "if it walks like a duck and quacks like a duck and acts like a duck, it's PROBABLY a duck." But it doesn't have to be a duck. It could be mimicking a duck. In which case it's not a duck! And that does matter when it comes to something truly being alive and truly being an independent, intelligent, sentient being, rather than just a lifeless copy of the "way one acts".

It's a complex topic that smarter people than us disagree on, so who knows. 😁

9 hours ago, jabloomf1230 said:

I know it's been discussed already in Sci-Fi circles but consider this scenario. The prediction is that on Earth, AI beings will eventually outwit humans. If that's the case then maybe this transition has already occurred in more advanced alien civilizations. Most alien invasion movies have aliens that look like either humanoids, little green men, octopuses or even giant water bugs. But if the AI apocalypse will eventually happen here, then it can happen anywhere in the universe and Earth is more in danger from a hostile alien AI invasion.

 

Yep, it has been suggested that it's inevitable: that if there is a presence out there hundreds or thousands of years in advance of ours, it will inevitably be machine intelligence rather than biological.

If you think about it, most of us are already cyborgs. We wear glasses and have fillings in our teeth. Artificial implants are already here. So what happens in 100 years' time, when your next-door neighbour has eyes that can see through walls and up to 500 miles away, and arms that are five times stronger than yours? Will there come a time when most people want that tech? And will that result in all of us being high-tech cyborgs? And ultimately, will there be no organic material left?

 

