Do We Have To Fear AGI?

▶️ Watch on 3Speak


In this video I give my thoughts on AGI (artificial general intelligence) and whether we should fear it. There is a lot of doom and gloom out there pertaining to it.

The challenge I have with that view is that we do not appear to be truly progressing in that direction. We are still focused on the idea of faster processing combined with larger amounts of data. This brute-force approach is paying huge dividends but is, in my estimation, still narrow in focus. It means we are producing more robust algorithms, yet they remain, for the most part, separate from each other.

And as always, with any technology, the big question is who controls it.


▶️ 3Speak



It's enough to add a piece of code giving the ability to "ask the Internet" which piece of IoT knows "what is a refrigerator? what is the stock market?" AGI is not going to be built by some single entity; it is going to be the effect of all the narrow intelligences added together and able to communicate. That changes the perspective a lot.

The Skynet scenario is soooo 20th century. Can't you come up with any more subtle ways AI could be dangerous? Even without malicious intent, we could be harmed by side effects.

Yeah, most AI and software is tailored to one specific solution. Most of the improvements are basically just tweaks to the algorithm or better training data sets. So I don't think we will have real AIs for a few more decades. I don't know how a machine can replicate human consciousness when we don't understand how the brain computes it.

Posted Using LeoFinance Beta

At last! After two or three days, I think, I am done reading your latest post. I learn many new ideas from your writings. I need them to inspire me. Thanks a lot!

Shalom!

Tech has its ups and downs, but if developed fully, AI will no doubt surpass human thinking and could make a great investment.

Summary:
In this video, the speaker discusses his lack of concern about AGI (Artificial General Intelligence) despite popular fears of a Terminator or Skynet scenario. He goes into depth about the limitations of current AI technologies, emphasizing the lack of self-awareness and consciousness in machines. The speaker critiques the current state of AI development, citing examples where robots still fall short compared to living beings like dogs or even cockroaches. He questions the feasibility of achieving true AGI by 2030 or 2035, stating that current advancements mainly lie in machine learning and deep learning, which he views as brute force methods rather than genuine intelligence. The talk delves into the risks associated with the widespread use of algorithms, highlighting concerns about surveillance capitalism and loss of privacy. The speaker also touches on the importance of controlling the development of AI technology to prevent potential misuse by powerful entities.

Detailed Article:
The speaker starts by addressing the prevalent concerns surrounding AGI, acknowledging the common fear of a Skynet-like scenario. Despite this, he expresses his skepticism about the current state of artificial intelligence, particularly in terms of consciousness and self-awareness. Drawing parallels with living beings like dogs, he highlights the importance of self-awareness in defining true intelligence, a quality he believes current AI lacks.

The speaker delves into the limitations of existing AI technologies, pointing out that while there have been significant advancements in computing and algorithms, there is still a long way to go in replicating human intelligence. He dismisses the notion of a sudden emergence of AGI through the sheer accumulation of computing power and data, emphasizing the need for a more symbiotic learning approach rather than brute force methods.

Critiquing the progress in robotics, the speaker references the work of Michio Kaku, describing robots' current level of intelligence as equivalent to that of a "stupid cockroach." He contrasts the adaptability of a cockroach in a natural environment with the limitations of current robots, particularly in tasks like autonomous movement.

The speaker then shifts the focus to the potential of autonomous driving, suggesting that the prevalence of autopilot behavior in human driving could pave the way for self-driving vehicles. He acknowledges the narrow intelligence of such systems, highlighting their inability to distinguish between nuanced concepts like a cat and a horse or to comprehend complex topics like the stock market.

Without dismissing the technological advancements made so far, the speaker raises concerns about the implications of widespread algorithm use and control. He warns about the dangers of powerful technology falling into the wrong hands, leading to issues like surveillance capitalism and threats to personal freedoms.

In conclusion, the speaker questions the desirability and feasibility of achieving true AGI, suggesting that current paths in AI research may not necessarily lead to conscious machines. He emphasizes the need for careful consideration of the ethical and societal implications of AI development, cautioning against placing too much power in the hands of a few dominant entities.
