Tuesday, June 20, 2023

R.E. Slater - It's Not AI Sentience We Should Fear but Our Own Misuse of It




It's Not AI Sentience We Should Fear
but Our Own Misuse of It

by R.E. Slater


One last AI article before beginning a new series on the Evolution of God and Religion, which I began several months ago when examining the Evolution of Man and Religion in earlier articles, including the last set of articles focused on the evolution of the Christian faith.

While on vacation this past April I found a book lying around the beach condo by an author then unknown to me, Yuval Noah Harari, entitled Sapiens: A Brief History of Humankind. I began reading it but didn't get very far, perhaps the first several chapters; still, I would like to recommend it to anyone who is interested in mankind's evolutionary CULTURAL development.

It's not a Christian book or a religious book. It is simply a book which states that man's ability to converse with one another through language laid the groundwork for expressing our beliefs and sympathies about God, ourselves, and the world we live in. I did not get any sense in my brief read that Harari is expressing any validity for religion... only its possibility through the miracle of human languages.

Further, while nosing around investigating the dark side of AI, I found that this same author, Yuval Harari, has been taking the results of his findings on human cultural development and applying them to the much feared (overly feared? legitimately feared?) subject of AI sentience.

Likewise, I too have been nosing around trying to develop a sense of humanity's cultural development: organically, via evolution; culturally, via religion; and, in the next segment, sociologically, via civilization. In that work I continue to urge present cultures towards the building of ecological civilizations leaning into love, social justice and equality, environmental care and restoration, and, generally, towards finding a new rhythm and balance in our lives more in line with nature's generative side.

Moreover, I've been working through the many positives of AI sentience in recent chatbot posts while also working towards mankind's cultural beliefs about God and religion. Hence, I have posted David Foster's books and video to balance Yuval Harari's sentiments about AI.

Part A

Both authors approach AI differently from one another. Foster writes from a technological sense, where "everything is possible in the best senses of possibility," whereas Harari takes a sociologist's approach to AI, tying it backwards-and-forwards to our psychological and sociological evolutionary development.

One I describe as a positive approach to AI and the other as a negative approach to AI. Both, I believe, are warranted; however, I would disagree with Harari's belief that we should fear AI itself. Rather, I would fear the misuse of AI by the tech industry as it engages with human cultural development, whether positively or negatively.

History tells us that what starts out good often becomes corrupted later. The Oppenheimer movie speaks to this when addressing the many benefits of studying quantum physics, which was quickly turned towards creating nuclear bombs:



Part B

Even as I am diligently developing a processual theology of a creationally driven, pancessual evolution, filled with the presence of God and infilled with hope and love throughout creation's ontological structures and teleologies (the study of ends and purposes), so I must also face the dark side of creational freedom gone wrong, as evidenced in humanity's many histories disruptive to pancessualism.

Love has two sides to it: (i) it may birth liberty and freedom, but it may also (ii) birth bondage and destruction. Curiously, Love may bless another, or it may harm and destroy another (something I call the "dark side of Love gone wrong"). Similar to Einstein's description of temperature in terms of "no heat" or "coldness," we might also describe sin and evil in terms of "no Love" or the "absence of loving."
A God-filled, divine creation is thus fraught with processual tension. This is what is meant by pancessual tension, which may be either a generative process or a cruel and evil process. Whiteheadian prehension, actuality, and concrescence tell us that every event either adds its acts of value to the world or removes those acts from any further benefit.

Part C

In process theology we might call this panpsychic chasm by its theological name of "theodicy," which is the study of good and evil and may be resolved either through divine love or its rejection.

A divine love which is embedded, or imbued, via a panentheistic creation filled and sustained by divine immanence (without denying the divine Otherness of God). More simply, processual panentheism refers to the abiding divine presence of God with-and-within creation, which continually urges all particulates meaningfully forward towards generative forms and expressions of pancessual evolutionary progress. (This is to be distinguished from traditional non-processual theism and from pantheism, the idea of God as world and world as God.)

Similarly, it seems that Yuval Harari, who asked questions of humanity's cultural development in his Sapiens book, is currently asserting a similar attitude regarding AI sentience: whether humanity is generative, non-generative, or some gradient of either depending on how the wind blows. For Harari, I think, it's NOT about the sentient possibility of AI but about how humanity will use this technology to its own destruction.

And when looking back on the many stories of human development, I think we can find warrant for Harari's concerns, knowing humanity has misused, abused, and destroyed all the benefits and beauty it encounters. This is very sad, and it speaks to why the traditional church preaches so much on the topics of sin and evil.

For myself, I wish to uplift the church's present conversation towards a Theology of Love and thereby, perhaps, avoid the motif of religious man becoming what he preaches in his legalistic structures, beliefs, and cultural outcomes.

If we preach love and hate, both qualities will eventually become part of who we are. But if we preach love alone - and learn to see and rebalance ourselves with a loving creation around us - just possibly this small, nuanced shift in our attitudes and perspectives might salvifically, if not redemptively, reset humanity towards more loving than harmful actions in our (eco-)cultural developments with one another.

Blessings,

R.E. Slater
June 20, 2023
edited, June 20, 2023



* * * * * * *

THE POSITIVE ASPECTS OF AI

by David Foster





FIRST EDITION

Generative modeling is one of the hottest topics in AI. It’s now possible to teach a machine to excel at human endeavors such as painting, writing, and composing music. With this practical book, machine-learning engineers and data scientists will discover how to re-create some of the most impressive examples of generative deep learning models, such as variational autoencoders, generative adversarial networks (GANs), encoder-decoder models, and world models.

Author David Foster demonstrates the inner workings of each technique, starting with the basics of deep learning before advancing to some of the most cutting-edge algorithms in the field. Through tips and tricks, you’ll understand how to make your models learn more efficiently and become more creative.

  • Discover how variational autoencoders can change facial expressions in photos
  • Build practical GAN examples from scratch, including CycleGAN for style transfer and MuseGAN for music generation
  • Create recurrent generative models for text generation and learn how to improve the models using attention
  • Understand how generative models can help agents to accomplish tasks within a reinforcement learning setting
  • Explore the architecture of the Transformer (BERT, GPT-2) and image generation models such as ProGAN and StyleGAN

SECOND EDITION

Generative AI is the hottest topic in tech. This practical book teaches machine learning engineers and data scientists how to create impressive generative deep learning models from scratch using TensorFlow and Keras, including variational autoencoders (VAEs), generative adversarial networks (GANs), Transformers, normalizing flows, energy-based models, and denoising diffusion models. The book starts with the basics of deep learning and progresses to cutting-edge architectures. Through tips and tricks, readers can make their models learn more efficiently and become more creative.
 
  • Discover how VAEs can change facial expressions in photos
  • Train GANs to generate images based on your own dataset
  • Build diffusion models to produce new varieties of flowers
  • Train your own GPT for text generation
  • Learn how large language models like ChatGPT are trained
  • Explore state-of-the-art architectures such as StyleGAN 2 and Vision Transformer VQ-GAN
  • Compose polyphonic music using Transformers and MuseGAN
  • Understand how generative world models can solve reinforcement learning tasks
  • Dive into multimodal models such as DALL·E 2, Imagen, and Stable Diffusion for text-to-image generation
The book also explores the future of generative AI and how individuals and companies can proactively begin to leverage this remarkable new technology to create competitive advantage.
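The blurbs above describe full deep-learning architectures, but the core idea behind the book's autoregressive models can be illustrated without any frameworks. The following is a toy sketch of my own (not code from Foster's book): a character-level Markov chain that "generates" text one character at a time, each prediction conditioned on the characters already produced, which is the same feedback loop a GPT-style model uses at vastly greater scale.

```python
import random
from collections import defaultdict

def train_char_model(text, order=2):
    """For each context of `order` characters, record which characters follow it."""
    model = defaultdict(list)
    for i in range(len(text) - order):
        context = text[i:i + order]
        model[context].append(text[i + order])
    return model

def generate(model, seed, length=40, rng=None):
    """Autoregressively sample one character at a time, feeding each output back in."""
    rng = rng or random.Random(0)
    out = seed
    order = len(seed)
    for _ in range(length):
        context = out[-order:]
        choices = model.get(context)
        if not choices:  # unseen context: stop early
            break
        out += rng.choice(choices)
    return out

corpus = "generative models generate data. generative models learn patterns."
model = train_char_model(corpus, order=3)
print(generate(model, "gen", length=30))
```

A neural language model replaces the lookup table with a learned function of the context, but the generation loop - predict, sample, append, repeat - is identical in shape.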

Introducing Generative Deep Learning
Future of Generative AI
by David Foster
May 11, 2023

Generative Deep Learning, 2nd Edition [David Foster] https://www.oreilly.com/library/view/...



TOC: Introducing Generative Deep Learning [00:00:00]
Model Families in Generative Modeling [00:02:25]
Auto Regressive Models and Recurrence [00:06:26]
Language and True Intelligence [00:15:07]
Language, Reality, and World Models [00:19:10]
AI, Human Experience, and Understanding [00:23:09]
GPTs Limitations and World Modeling [00:27:52]
Task-Independent Modeling and Cybernetic Loop [00:33:55]
Collective Intelligence and Emergence [00:36:01]
Active Inference vs. Reinforcement Learning [00:38:02]
Combining Active Inference with Transformers [00:41:55]
Decentralized AI and Collective Intelligence [00:47:46]
Regulation and Ethics in AI Development [00:53:59]
AI-Generated Content and Copyright Laws [00:57:06]
Effort, Skill, and AI Models in Copyright [00:57:59]
AI Alignment and Scale of AI Models [00:59:51]
Democratization of AI: GPT-3 and GPT-4 [01:03:20]
Context Window Size and Vector Databases [01:10:31]
Attention Mechanisms and Hierarchies [01:15:04]
Benefits and Limitations of Language Models [01:16:04]
AI in Education: Risks and Benefits [01:19:41]
AI Tools and Critical Thinking in the Classroom [01:29:26]
Impact of Language Models on Assessment and Creativity [01:35:09]
Generative AI in Music and Creative Arts [01:47:55]
Challenges and Opportunities in Generative Music [01:52:11]
AI-Generated Music and Human Emotions [01:54:31]
Language Modeling vs. Music Modeling [02:01:58]
Democratization of AI and Industry Impact [02:07:38]
Recursive Self-Improving Superintelligence [02:12:48]
AI Technologies: Positive and Negative Impacts [02:14:44]
Runaway AGI and Control Over AI [02:20:35]
AI Dangers, Cybercrime, and Ethics [02:23:42]

In this conversation, Tim Scarfe and David Foster, the author of 'Generative Deep Learning,' dive deep into the world of generative AI, discussing topics ranging from model families and auto regressive models to the democratization of AI technology and its potential impact on various industries. They explore the connection between language and true intelligence, as well as the limitations of GPT and other large language models. The discussion also covers the importance of task-independent world models, the concept of active inference, and the potential of combining these ideas with transformer and GPT-style models.

Ethics and regulation in AI development are also discussed, including the need for transparency in data used to train AI models and the responsibility of developers to ensure their creations are not destructive. The conversation touches on the challenges posed by AI-generated content on copyright laws and the diminishing role of effort and skill in copyright due to generative models.

The impact of AI on education and creativity is another key area of discussion, with Tim and David exploring the potential benefits and drawbacks of using AI in the classroom, the need for a balance between traditional learning methods and AI-assisted learning, and the importance of teaching students to use AI tools critically and responsibly.

Generative AI in music is also explored, with David and Tim discussing the potential for AI-generated music to change the way we create and consume art, as well as the challenges in training AI models to generate music that captures human emotions and experiences. 

Throughout the conversation, Tim and David touch on the potential risks and consequences of AI becoming too powerful, the importance of maintaining control over the technology, and the possibility of government intervention and regulation. The discussion concludes with a thought experiment about AI predicting human actions and creating transient capabilities that could lead to doom.

* * * * * * *

THE NEGATIVE ASPECTS OF AI

by Yuval Noah Harari

AI and the future of humanity
Yuval Noah Harari at the Frontiers Forum
May 14, 2023

In this keynote and Q&A, Yuval Noah Harari summarizes and speculates on 'AI and the future of humanity'. There are a number of questions related to this discussion, including: "In what ways will AI affect how we shape culture? What threat is posed to humanity when AI masters human intimacy? Is AI the end of human history? Will ordinary individuals be able to produce powerful AI tools of their own? How do we regulate AI?" The event was organized and produced by the Frontiers Forum, dedicated to connecting global communities across science, policy, and society to accelerate global science related initiatives. It was produced and filmed with support from Impact, on April 29, 2023, in Montreux, Switzerland.


Yuval Noah Harari paints a grim picture
of the AI age, roots for safety checks

Celebrated author Yuval Noah Harari believes that AI has
"hacked" the operating system of human civilization.


Artificial intelligence is shaking up the world. While experiments and research in this sub-field of computer science have been ongoing for decades, the recent launch of OpenAI’s powerful chatbot ChatGPT seems to be a seminal point in the timeline of AI technologies. The chatbot’s astounding abilities have led many companies to try their hands at developing their own chatbots or even integrating similar AI in their products and services.

Since time immemorial, new technologies and innovations have been met with fear and awe before being embraced by mankind. Most new inventions have drawn shock and apprehension, with many either hailing them or condemning them outright. The ongoing AI wave is no different: while many have heaped praise on it, there is scepticism in equal measure.

Yuval Noah Harari, known for the acclaimed non-fiction book Sapiens: A Brief History of Humankind, has said in his latest article in The Economist that artificial intelligence has “hacked” the operating system of human civilization. The Israeli public intellectual has been known for his comments on the opportunities and threats of AI in recent times.

The root of the fear

In his latest article, he argues that the fear of AI has haunted humanity ever since the beginning of the computer age. However, he said that the newly emerged AI tools in recent years could threaten the survival of human civilization from an “unexpected direction.”

He demonstrated how AI could impact culture by talking about language, which is integral to human culture. “Language is the stuff almost all human culture is made of. Human rights, for example, aren’t inscribed in our DNA. Rather, they are cultural artifacts we created by telling stories and writing laws. Gods aren’t physical realities. Rather, they are cultural artifacts we created by inventing myths and writing scriptures,” wrote Harari.

He stated that democracy is also a form of language, one that depends on meaningful conversation, and that when AI hacks language it could destroy democracy as well.


The author said that the rise of AI is having a profound impact on society, affecting various aspects of economics, politics, culture, and psychology. The 47-year-old wrote that the biggest challenge of the AI age was not the creation of intelligent tools but striking a collaboration between humans and machines.

To highlight the extent of how AI-driven misinformation can change the course of events, Harari touched upon the cult QAnon, a political movement affiliated with the far-right in the US. QAnon disseminated misinformation via “Q drops” that were seen as sacred by followers.

AI and the power of intimacy

Harari also shed light on how AI could form intimate relationships with people and influence their decisions. “Through its mastery of language, AI could even form intimate relationships with people and use the power of intimacy to change our opinions and worldviews,” he wrote. To demonstrate this, he cited the example of Blake Lemoine, a Google engineer who lost his job after publicly claiming that the AI chatbot LaMDA had become sentient. If AI can influence people to risk their jobs, Harari asked, what else could it induce them to do?

Harari also said that intimacy was an effective weapon in the political battle of minds and hearts. He said that in the past few years, social media has become a battleground for controlling human attention, and the new generation of AI can convince people to vote for a particular politician or buy a certain product.

The author drew parallels between present-day AI and the notions of the world of illusions by 17th-century philosopher Rene Descartes and the idea of Maya from Buddhist and Hindu sages. Highlighting the need for regulations, Harari cited the example of nuclear energy, stating that while it could produce cheap power, it could also destroy human civilization. However, over the years, we have reshaped the international order to ensure that nuclear technology is used for the collective good.

Regulation is key

In his bid to call attention to the need to regulate AI technology, Harari said that the first regulation should be to make it mandatory for AI to disclose that it is an AI. He said it was important to halt the ‘irresponsible deployment’ of AI tools in the public domain, and to regulate AI before it regulates us.

The author also noted that current social and political systems are incapable of dealing with the challenges posed by AI, and he emphasised the need for an ethical framework to respond to them.

The author has, on numerous occasions, shared his thoughts on the rapid developments in AI. In March, Harari wrote an op-ed in The New York Times discussing the rapid progress and implications of GPT-like chatbots and the future of human interactions. He argued that while GPT-3 had made remarkable progress, it was far from replacing human interactions. He also suggested that AI could lead to greater inequality, something which billionaire Bill Gates had alluded to in his blog post.

1 comment:

  1. Re "Yuval Noah Harari paints a grim picture of the AI age, roots for safety checks"

    The FAKE narrative (ie propaganda) nearly everyone, including "alternative news" sources, have been spreading is that the TRULY big threat is that AI might achieve control over humans. Therefore it must be regulated.

    The TRUE narrative (ie empirical reality) virtually no one talks about or spreads is that the TRULY big threat with AI is that AI allows the governing psychopaths-in-power to materialize their ultimate wet dream to control and enslave everyone and everything on the whole planet, a process that's long been ongoing in front of everyone's "awake" (=sleeping, dumb) nose .... www.CovidTruthBeKnown.com (or https://www.rolf-hefti.com/covid-19-coronavirus.html)

    As with every criminal, inhumane, self-concerned agenda of theirs, the psychopaths-in-control sell and propagandize AI to the timelessly foolish (="awake") public with total lies, such as AI being the benign means to connect, unite, transform, benefit, and save humanity.

    "We'll know our Disinformation Program is complete when everything the American public believes is false." ---William Casey, a former CIA director=a leading psychopathic criminal of the genocidal US regime

    The proof is in the pudding... ask yourself, "how is the hacking of the planet going so far? Has it increased or crushed personal freedom?"

    Since many of the same criminal establishment "expert" psychopaths, such as Musk (https://archive.ph/9ZNsL), Harari (Harari is the psychopath working for Schwab's WEF [https://www.bitchute.com/video/Alhj4UwNWp2m]), or Geoffrey Hinton, the "godfather of AI," who have for many years helped develop, promote, and invest in AI, now suddenly claim to have had a change of heart and warn the public about AI, it's clear their current call for a temporary AI ban and/or its regulation is just a manipulative tactic to misdirect and deceive the public, once again.

    This scheme is part of the Hegelian Dialectic in action: problem-reaction-solution.

    This "warning about AI" campaign is meant to raise public fear and panic about an alleged big "PROBLEM" (which these psychopaths helped to create in the first place!) so that the public demands (REACTION) that governments regulate and control this technology, whereby they provide the "SOLUTION" for their own interests and agendas... because all governments are owned and controlled by the leading psychopaths-in-power (see CovidTruthBeKnown.com).

    What a convenient self-serving trickery ... of the ever foolish public.

    "AI responds according to the “rules” created by the programmers who are in turn owned by the people who pay their salaries. This is precisely why Globalists want an AI controlled society- rules for serfs, exceptions for the aristocracy." ---Unknown

    "Almost all AI systems today learn when and what their human designers or users want." ---Ali Minai, Ph.D., American Professor of Computer Science, 2023

    “Who masters those technologies [=artificial intelligence (AI), chatbots, and digital identities] —in some way— will be the master of the world.” --- Klaus Schwab, at the World Government Summit in Dubai, March 2023

    The ruling criminals pulled off the Covid Scam globally via its WHO institution because almost all nations belong to it. Sign the declaration at https://sovereigntycoalition.org to exit the WHO
