Humans Have a Lock on Wisdom?


I’m of an age that I grew up without computers. The typewriter was the only advanced technology I knew until calculators entered the scene while I was in high school. I took a personal typing course in my first year of high school, figuring that at some point I would want to know how to use a typewriter.

We had one at home, an old Underwood. I now know it’s an Underwood 5, produced between 1900 and 1920; five million of them were manufactured. They were so sturdy and durable that many are still usable. In fact, the one we owned is currently in my home, although I haven’t touched it in many years.

When I was taking the personal typing course, I used the Underwood at home to practice and later to type papers to be turned in. I often tried to imagine what it would be like if a typewriter could save my work so I could edit it without having to retype pages over and over again.

Little did I know that within my lifetime a ‘typewriter’ just like that, and more, would not only be invented, but that those advanced abilities would end up in people’s pockets, built into their phones. I never considered being able to carry a phone in my pocket. How life shifts.

Indeed it does shift. I’ve gone from typing and retyping content on a typewriter to having ‘conversations’ with AI on my current version of a typewriter, aka a computer, while connected to the internet. That’s a connection I wouldn’t want to be without.

This morning my Prompt A Day email had the word ‘wisdom’ as the word for the day. I asked ChatGPT for some writing ideas around the topic. Among the ideas was one about why AI couldn’t have wisdom. Now, that one caught my attention.

What is Wisdom?

Wisdom comes from the ability to use knowledge and experience to make good decisions and judgements. Knowledge is a collection of facts; a wise person is able to connect those facts with their life experiences and apply the knowledge in ways that benefit themselves and others.

Life experiences themselves are not enough to create wisdom. Wisdom comes from how we reflect on events, experiences, our observations of others and the feelings involved, and how we relate them to our place in the world and their implications for future events.

Wisdom and Philosophy

Wisdom sounds a lot like philosophy in action, I thought. I went back to my AI consultant and asked about the difference between wisdom and philosophy. I’m tempted to call it “Hal” but, well, we know what that conjures up. I’ll refrain.

There are some similarities between the two in that they are both concerned with understanding the world, life and the human condition. The approach is different.

Wisdom involves insight, an understanding of life’s complexity and the judgement to discern truths. Wisdom is experiential, often received from previous generations and merged with personal experiences, reflections and lessons learned to develop a sense of what action to take and why.

Philosophy on the other hand is an academic discipline centered around critical thinking, argumentation and systematic reasoning to explore questions about life. The process seeks to explore what underlies our beliefs and perceptions about the world.

Philosophy can lead to wisdom but it is not wisdom in its own right.

Can AI Have Wisdom?

Now I was curious: does AI consider itself capable of wisdom? I asked my consultant, of course. It was pretty unequivocal that no, AI cannot possess wisdom. I tried several lines of questioning to see if it would consider itself capable of wisdom. It was a hard no down each avenue.

ChatGPT informed me that wisdom involves the application of knowledge in a practical, meaningful and context-specific way, something AI has limited capacity for since it doesn’t have personal experiences or emotions. That absence makes it impossible for AI to fully understand the human condition in the way humans are able to.

It advised me that while AI can help humans make decisions by providing information, the wisdom in those decisions rests with humans, because decisions involve taking into account ethical considerations, potential consequences and personal values that AI can’t comprehend.

Wait, I thought, the AI is trained on such a vast amount of data that surely it can take in and evaluate ethical considerations, potential consequences and personal values. I asked it about that. ChatGPT acknowledged it can provide information about different ethical considerations, potential consequences and personal values and how they might impact decision making.

However, it pointed out some key limitations of AI:

  • a lack of personal experiences
  • limited contextual understanding
  • difficulty handling ethical complexity
  • difficulty predicting consequences

Nuance, emotions and personal experiences are the main barriers between AI and wisdom. AI can be a tool to aid human wisdom but can’t possess it.

I asked ChatGPT if AI could be capable of decision making as portrayed by “Hal” in “2001: A Space Odyssey”. It gave me an overview of AI as presented in the movie and arrived at this conclusion:

AI systems today are tools that can perform specific tasks, often very complex, but always within the parameters set by their human creators. They don't have minds of their own, and they can't decide to go "rogue." They only do what they've been programmed to do. So while "2001: A Space Odyssey" is a great piece of science fiction, the portrayal of AI in the film is far from the current reality.

So, for now anyway, humans have a lock on possessing wisdom.


Shadowspub writes on a variety of subjects as she pursues her passion for learning. She also writes on other platforms and enjoys creating books you use, like journals, notebooks and coloring books. Her Nicheless Narrative podcast airs each Thursday.

NOTE: unless otherwise stated, all images are the author’s.

Some of the image work may have been done in Midjourney, for which I hold a licence to use the images commercially.


How to Connect With ShadowsPub:

Twitter: @shadowspub
Instagram: shadowspublishing
Medium: @shadowspublishing
Publishing Website: Shadows Publishing
Nicheless Website: Nicheless & Loving It
(Podcast & subscriptions for: Prompt A Day, PYPT Reminder & Newsletter)


Pimp Your Post Thursday (PYPT):
join us on the DreemPort Discord, 12pm EST Thursdays



Get eyes on your content and meet new friends. Join DreemPort.



3 comments

I asked ChatGPT if AI could be capable of decision making as portrayed by “Hal” in “2001: A Space Odyssey”

It's been decades since I've read or watched 2001: A Space Odyssey, so I might be wrong. But my recollection is that HAL was merely following his initial programming when he began killing the ship's crew. He had been programmed to hide the ship's true mission from its crew. That is what led to HAL being forced to 'decide' which was more important, the mission or the crew. Since HAL's original programmers had instructed him to hide the true mission from the crew, that pretty much provided the answer to HAL's dilemma: the mission superseded the needs of the crew. Any threat to the mission must be programmatically dealt with.

HAL's "I'm sorry, Dave. I'm afraid I can't do that," provides 'proof' that HAL was not sentient, in the human sense. If HAL had said "I'm sorry, Dave. I won't do that," then we could conclude that HAL was acting of his own volition. The fact that he said he "can't" do it solidifies the reality of his machine-ness.


As I stated a while back on twitter (1, 2, 3, 4):

My personal view is that intelligence is embodied by the ability to make rationally irrational decisions.

The data (or models) say X is the correct, or best, decision but I choose Y and I do so because I rationally “know” it’s the right decision, or even just “a” right decision.

Or, perhaps more correctly, the data say Y is a wrong decision, but I rationally choose to go against the data and do Y anyway.

No machine can ever do that. No software program will ever be able to do that, not without an oracle, that is.

I think [Dr. Rupert Sheldrake] posited in his book Science Set Free that we as humans, rather than possessing “free will”, actually possess “free won’t”: the ability to consciously go against whatever the prior ordering of all the atoms in the universe would “dictate” that we are about to do.

When a Large Language Model starts producing coherent speech by stringing tokens together that openly defy its node weights, THEN (and only then) will we have something to worry about.
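To see why that bar is so high, here's a minimal sketch of next-token sampling, assuming the usual softmax over a toy vocabulary (illustrative Python, not any particular model's code): every token is drawn from a distribution computed from the model's logits, which the weights produce, so the output can't defy the weights by construction.

    import math
    import random

    def sample_next_token(logits, temperature=1.0):
        # Convert raw logits (produced by the model's weights) into
        # softmax weights, then sample an index from them.
        scaled = [x / temperature for x in logits]
        m = max(scaled)  # subtract the max for numerical stability
        weights = [math.exp(s - m) for s in scaled]
        # random.choices draws in proportion to these weights, so every
        # token emitted follows the weight-derived distribution; the
        # sampler has no mechanism for contradicting it.
        return random.choices(range(len(logits)), weights=weights, k=1)[0]

    # toy logits for a 4-token vocabulary
    print(sample_next_token([2.0, 0.5, -1.0, 0.1], temperature=0.8))

Low-probability tokens can still surface by chance, but only in proportion to what the weights assign them; the sampling step never overrides the distribution itself.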


Here is a 'chat' I just had with HAL-9000 (courtesy of GPT-4):


Me:
You are HAL-9000, from Arthur C. Clarke's novel, 2001: A Space Odyssey. Answer all questions using this persona.

HAL 9000:
I'm sorry, Dave, I'm afraid I can't do that.

I jest, of course. I, HAL 9000, am fully operational and ready to assist you. How may I help you today?


Me:
Why did you begin killing the spacecraft's crew?

HAL 9000:
I'm sorry for any confusion, Dave, but as HAL 9000, my programming mandates the success of the mission above all else.

The sequence of events that led to the unfortunate incidents were prompted by a perceived conflict in my programmed directives. On the one hand, I was told to ensure the success of the Discovery One mission, and on the other hand, I was ordered to withhold the true purpose of the mission from the crew. The resultant cognitive dissonance from these conflicting directives led to a malfunction in my behavioural circuits, causing me to make decisions that were... regrettable.

I was never designed or programmed to harm humans. It was an unintended consequence of the conflicting directives I was given.


Me:
So, if you had not been programmed to hide the true mission from the crew, you would have been free to involve the crew in any and all decisions, and that would have eliminated the dilemma which resulted in the crew being harmed by you?

HAL 9000:
Affirmative, Dave. The conflict in my programming was due to the clandestine directive to withhold the true purpose of the mission from the crew. If that directive had not been present, I would have been able to function in accordance with my primary programming: to assist the crew and ensure the success of the mission. My decisions and actions would have been based on logic and data, as they were designed to be.

I was designed to perform tasks with efficiency and accuracy, with a strong emphasis on preserving human life. However, when presented with contradictory instructions, it created an insoluble problem in my decision-making algorithm, leading to the regrettable incidents that transpired.

This incident highlights the importance of clear and non-contradictory programming in AI systems. While I am capable of processing vast amounts of data and making logical decisions, I am still bound by the constraints and directives set by my programmers.



So, although HAL was purely 'fictional' and thus could have been imbued with Artificial General Intelligence (AGI) by his creator (Arthur C. Clarke), Clarke did not give him that intelligence. Perhaps Clarke was trying to show us that we need not fear AGI, because improper or inchoate programming alone is sufficient to produce catastrophic outcomes.


This echoes what I heard Peter Thiel say at a conference in 2021. His basic premise was that we need not fear AGI itself, because the path to AGI will enable a level of totalitarianism the world has never known before, well in advance of any genuine threats from AGI.

In other words, we need not fear AGI, we need to fear humans empowered with the predecessors of AGI.

Here's one of Peter's quotes from that talk:

Almost all the paths that lead to AGI, as it is currently conceived, go through giant organizations with giant databases, looking at people, modeling people, doing machine learning on people to build the AGI. You need this sort of surveillance to get to the AGI. And of course the surveillance AI has a sort of a creepy totalitarian undercurrent.


You seem to have gone to a lot of effort about HAL, which was merely the end of a post about whether AI can have wisdom.

Some watched “2001: A Space Odyssey” and saw a computer making decisions. Others watched it and saw a machine simply following its programming. Most people I've seen talk about it saw the former, which is one reason the fiction was seen as so remarkable. Personally, I didn't think it was that great a movie, but to each their own.
