Living Quantum Lives in a Quantum World: My grandson loves to walk outdoors. On a hot day we took a short walk, then came back to run under the sprinkler to get wet, laugh, and cool off.
The Future of Quantum AI
by R.E. Slater
Having introduced a reasonable discussion of Process-based (processual) Radical theology, which blends the best of progressive religious faiths and non-faiths with their deep desire for global human goodness, I here wish to begin working out how to approach AI Sentience by looking at humanity in its past derivations, from (i) gross, unfeeling human sentience to (ii) what it might mean to become more fully sentient as human beings acting more humanely with one another.
Utilizing the hypothetical Alter Ego of AI might be of help when looking at ourselves as either monsters or angels, regardless of whether a machine can develop conscious feelings or ennobled ends. Much too often we humans fall into the uncaring, monstrous category, contrary to what I believe is the God of Love's divine sentient model: to love one another over ourselves through self-sacrificial service and care.
If man is the summa cum laude of sentience, by which we human beings measure all other things against ourselves - including quantum AI - then man's monstrosities do not seem very complimentary to what it means to be sentient when we act so grossly inhuman: cruelly inhumane, harming, or devious. So let's just say that I'm not convinced that we, ourselves, are quite the proper definition, or description, of what it means to be a "sentient" being.
Through this series I will be very curious to see what corollaries there may be between human beingness and whether or not humanity can become more than what we have ever been, past or present. If I were to guess, I would say acting humanely in love will always be something our species can only strive for but never finally accomplish. This is the realist in me.
And to those Christians out there who say Jesus is the answer - even as I do - let us look at ourselves and truly ask this question again. To those conscientious few who are trying to live out the love of Jesus - or the humaneness of your beliefs - well, God bless you. You, above many, are willing to be the used-and-useful broken vessels of the Holy Spirit bringing mighty rivers of living waters to all who are around you. I wonder at your marvels and pray for your success over the powers wishing to overcome you. Stand in the Spirit and be strong!
Still, my doubtfulness toward man's general abilities to transcend our human-ness remains clear-eyed with regard to our earthly failings and evils. Even so, all creation - not only humans - was created in God's Imago Dei to be, and to become, generatively loving, nurturing, and nourishing in passing along value-producing acts and deeds. But with this loving freedom has also come the deep harm we do to one another - and to nature itself - contrary to the Divine Image in which we were birthed and made....
An earthly Image corrupted by the very loving freedom we were given - the freedom the Church declares to be our "sin nature". A nature, or disposition, which our own experience declares to us - along with the Apostle Paul in Romans 5-9 - as the great weight of conflict and tension wrestling daily with our errant souls in fully becoming what we could be in God's divine potentiality. Assuredly, Jesus' atoning redemption provides divine power for our spiritual potential, but even so, we can't help but ask why, with such great heavenly aid, we fail so often and so completely against what we are and are to be.
And so, the question remains: "Can we become more than what we are?"
Naturally the Bible embodies many of these same questions in a dated sense, even as humanity's centuries-old classical literature has repeated similar themes of conflict, tension, abject failure, soulful agony, personal repentance, deliverance, salvation, and redemption, time and again, up to, and including, today's modern SciFi games and literature.
These themes of loss and gain can be thought of in processual terms as defining our soulful sentience: deep struggle, agonizing conflict, the deep inability to prevent loss, and our weakness before difficult situations that are in us, and about us, and which we are at times helpless to stop.
So the question of who-we-are, as measured by how-we-are, daily confronts us along with the why-we-are and the where-we-are-going. These are some of the age-old questions which look back at us from the works our enfeebled hands have produced by toil and sweat - perhaps by our own sinful deeds and evil acts or, prayerfully, by loving, selfless sacrifice. DV
We, the vainglorious --
we earthly sentients,
may come as gods of harm
or as great helps to many.
This fearsome sentence we bear,
this deepest of sacraments we wear,
that burdens, burns, inflicts,
that lifts up or brings low,
is too great a thing
to have, to hold, to wield,
so fierce its works,
so awesome its majesties.
We, Thy holy Sentinels --
workers of endless wonders,
casters of resplendent miracles,
bringers of mighty gifts,
riches, and holy blessings,
ever by Thy loving hand foresworn.
An undying fire never to go out,
fiercest of fires and turmoils,
Giver of winds and waters,
Creator of men and gods,
emblazoned in fallible image,
unfallen and undying, most cursed.
R.E. Slater
June 8, 2023
So here are a few questions I have as we go forward:
- What is meant by sentience versus consciousness and conscientiousness? How are they alike or different?
- Can AI portend a greater form of sentience than human sentience?
- If there is such a thing as "Super Sentience" then what can this mean?
- Can sentient AI feelings be knowingly exploited? And vice versa, can AI exploit human feelings?
- What do we mean by an "Ethical Sentience," and can this be a corollary more appropriately referring to "Super Sentience"?
Peace,
R.E. Slater
June 8, 2023
Why Quantum Computing Is Even More Dangerous Than Artificial Intelligence
The world already failed to regulate AI. Let’s not repeat that epic mistake.
Aug 21, 2022
LaMDA | Is google's AI sentient? | 34:00
Full audio conversation between Blake Lemoine and LaMDA
by Curly Tail Media - Jun 23, 2022
Can artificial intelligence come alive? That question is at the center of a debate raging in Silicon Valley after Blake Lemoine, a Google computer scientist, claimed that the company's AI appears to have consciousness.
“Godfather of AI” Geoffrey Hinton Warns of the “Existential Threat” of AI | 18:08
by Amanpour and Company - May 9, 2023
Geoffrey Hinton, considered the godfather of Artificial Intelligence, made headlines with his recent departure from Google. He quit to speak freely and raise awareness about the risks of AI. For more on the dangers and how to manage them, Hinton joins Hari Sreenivasan.
DW Business Special | DW News
Apr 14, 2023
A leading expert in artificial intelligence warns that the race to develop more sophisticated models is outpacing our ability to regulate the technology. Critics say his warnings overhype the dangers of new AI models like GPT. But MIT professor Max Tegmark says private companies risk leading the world into dangerous territory without guardrails on their work. His Future of Life Institute issued a letter, signed by tech luminaries like Elon Musk, calling on Silicon Valley to immediately pause work on AI for six months to unite on a safe way forward. Without that, Tegmark says, the consequences could be devastating for humanity.
AI and the future of humanity
Yuval Noah Harari at the Frontiers Forum
May 14, 2023
In this keynote and Q&A, Yuval Noah Harari summarizes and speculates on 'AI and the future of humanity'. There are a number of questions related to this discussion, including: "In what ways will AI affect how we shape culture? What threat is posed to humanity when AI masters human intimacy? Is AI the end of human history? Will ordinary individuals be able to produce powerful AI tools of their own? How do we regulate AI?" The event was organized and produced by the Frontiers Forum, dedicated to connecting global communities across science, policy, and society to accelerate global science-related initiatives. It was produced and filmed with support from Impact, on April 29, 2023, in Montreux, Switzerland. For more information on the Frontiers Forum go to https://forum.frontiersin.org/
* * * * * * *
Google’s Sycamore quantum computer | Rocco Ceselin/Google
Google demonstrates vital step towards large-scale quantum computers
by Matthew Sparkes, 14 July 2021
Google has shown that its Sycamore quantum computer can detect and fix computational errors, an essential step for large-scale quantum computing, but its current system generates more errors than it solves.
Error-correction is a standard feature for ordinary, or classical, computers, which store data using bits with two possible states: 0 and 1. Transmitting data with extra “parity bits” that warn if a 0 has flipped to 1, or vice versa, means such errors can be found and fixed.
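To make the classical case concrete, here is a minimal Python sketch (my illustration, not from the article) of single-parity-bit error detection: the sender appends a bit that makes the count of 1s even, and the receiver flags any word whose parity has gone odd. Note that a single parity bit only detects a flip; locating and fixing it takes additional parity bits, as in Hamming codes.

```python
def add_parity(bits):
    """Append an even-parity bit so the total number of 1s is even."""
    return bits + [sum(bits) % 2]

def parity_ok(word):
    """Return True if the word still has an even number of 1s."""
    return sum(word) % 2 == 0

data = [1, 0, 1, 1]
word = add_parity(data)   # -> [1, 0, 1, 1, 1]
print(parity_ok(word))    # True: nothing flipped

word[2] ^= 1              # a single bit flips in transit
print(parity_ok(word))    # False: the error is detected
```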
In quantum computing the problem is far more complex as each quantum bit, or qubit, exists in a mixed state of 0 and 1, and any attempt to measure them directly destroys the data. One longstanding theoretical solution to this has been to cluster many physical qubits into a single “logical qubit”. Although such logical qubits have been created previously, they hadn’t been used for error correction until now.
Julian Kelly at Google AI Quantum and his colleagues have demonstrated the concept on Google’s Sycamore quantum computer, with logical qubits ranging in size from five to 21 physical qubits, and found that logical qubit error rates dropped exponentially for each additional physical qubit. The team was able to make careful measurements of the extra qubits that didn’t collapse their state but, when taken collectively, still gave enough information to deduce whether errors had occurred.
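As a loose classical analogy to those logical qubits (my sketch, not Google's actual protocol), a repetition code spreads one logical bit across n physical bits and decodes by majority vote. Simulating it shows the logical error rate falling rapidly as n grows from 5 toward 21, echoing the exponential suppression the team reports.

```python
import random

def logical_error_rate(n_physical, p_flip, trials=100_000):
    """Estimate how often majority-vote decoding of an n-bit
    repetition code returns the wrong logical bit when each
    physical bit flips independently with probability p_flip."""
    errors = 0
    for _ in range(trials):
        flips = sum(random.random() < p_flip for _ in range(n_physical))
        if flips > n_physical // 2:  # majority corrupted -> wrong answer
            errors += 1
    return errors / trials

for n in (1, 5, 9, 21):
    print(n, logical_error_rate(n, p_flip=0.05))
```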
Kelly says that this means it is possible to create practical, reliable quantum computers in the future. “This is basically our first half step along the path to demonstrate that,” he says. “A viable way of getting to really large-scale, error-tolerant computers. It’s sort of a look ahead for the devices that we want to make in the future.”
The team has managed to demonstrate this solution conceptually but a vast engineering challenge remains. Adding more qubits to each logical qubit brings its own problems as each physical qubit is itself susceptible to errors. The chance of a logical qubit encountering an error rises as the number of qubits inside it increases.
There is a breakeven point in this process, known as the threshold, where the error-correction features catch more problems than the increase in qubits brings. Crucially, Google's error correction doesn't yet meet the threshold. To do so will require less noisy physical qubits that encounter fewer errors, and larger numbers of them devoted to each logical qubit. The team believes that mature quantum computers will need 1000 physical qubits to make each logical qubit – Sycamore currently has just 54 physical qubits.
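The breakeven logic can be put in rough numbers using the textbook scaling for a distance-d code (an approximation of mine, not figures from Google's paper): each step up in code distance multiplies the logical error rate by roughly the ratio of the physical error rate p to the threshold p_th, so below threshold the suppression compounds exponentially, while above threshold adding qubits only makes things worse.

```python
def logical_error(p, p_th, d):
    """Rough textbook scaling for a distance-d code:
    logical error ~ (p / p_th) ** ((d + 1) // 2).
    Shrinks with d when p < p_th; grows with d when p > p_th."""
    return (p / p_th) ** ((d + 1) // 2)

for d in (3, 5, 7, 11):
    print(d,
          logical_error(p=0.001, p_th=0.01, d=d),  # below threshold: improves
          logical_error(p=0.02,  p_th=0.01, d=d))  # above threshold: degrades
```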
Peter Knight at Imperial College London says Google’s research is progress towards something essential for future quantum computers. “If we couldn’t do this we’re not going to have a large scale machine,” he says. “I applaud the fact they’ve done it, simply because without this, without this advance, you will still have uncertainty about whether the roadmap towards fault tolerance was feasible. They removed those doubts.”
But he says it will be a vast engineering challenge to actually meet the threshold and build effective error correction, which would mean building a processor with many more qubits than has been demonstrated until now.
* * * * * * *
Will AI Ever Become Sentient?
What Do the Latest Trends Say?
Written by Bishwadeep Mitra
20 March 2023
In 2022, Google employee Blake Lemoine hit international headlines when he declared that a Google chatbot had gained sentience. His June 11 blog highlighted the chatbot LaMDA's claims to consciousness, which mimicked reality so closely that they became a point of serious concern. Is AI (Artificial Intelligence) really gaining sentience, that is, becoming able to experience feelings? How do we deal with such a possibility, especially when expert opinion surrounding sentient AI is already quite dystopian? Can we approach the idea of artificial sentience without being anxious at every step? This blog provides the essential answers to bridge public apprehension with scientific evidence about what sentient AI is.
What is Sentient AI?
Simply put, sentience is the capacity to feel and register experiences and feelings. AI becomes sentient when an artificial agent achieves the empirical intelligence to think, feel, and perceive the physical world around it just as humans do. Sentient AI would be equipped to process and use language in a natural way, opening an entirely new world of possibilities for technological revolution.
Are Language Programs like LaMDA Sentient AI?
LaMDA, which stands for 'Language Model for Dialogue Applications', is a chatbot built on a language model similar in technology to the GPT-3 and BERT models. According to its proprietary owner, Google, LaMDA can ingest trillions of words from the Internet and use them to form sensible, open-ended conversations. Although the neural network architecture that LaMDA uses has trained its language models to improve the specificity of their responses, it is still far from being considered sentient AI.
A 2022 BuiltIn report states that the answers LaMDA provides are comprehensive but not original. And there is reason to believe this. First, language models are now sophisticated enough to take on different personality types. Second, LaMDA can process millions of articles and discussions within seconds to come up with seemingly original content. Thus, the hype surrounding LaMDA being conscious of death, stemming from its fear of 'being switched off', is more a product of the AI agent's research skills and personality-adaptation techniques than a result of sentience. In short, LaMDA is not sentient.
What if AI Becomes More Sentient than Humans?
With the rapid advancement of technology, it is often felt that copying human intelligence is only a matter of time, with imagination leading to dystopian worlds ruled by machines. However, the much revered British computer scientist Stuart Russell draws a crucial distinction between human-compatible AI and sentient AI. In his book 'Artificial Intelligence: A Modern Approach,' he points to the rise of an AI-driven culture where the agent doesn't blindly follow orders but also tries to understand the nature of the query. However, according to Russell, three components must be present for AI to become sentient:
- A perfect unity of an external body and internal mind
- A defining original language for the AI to access
- A defining culture to wire with other sentient beings
If technology does take that turn and sentient AI does develop, these could be its presumed effects:
- Communication Errors: Despite understanding human languages, the modus operandi of AI is hard logic. As the core functional paradigm of human beings is emotion, there will be major discrepancies in human-AI communication.
- Eroding Control: When AI develops autonomy, it might not consent to being controlled or ordered.
- Diminishing Trust: It would be difficult to establish trust in a sentient AI that might develop the ability to provide incorrect information depending on circumstances. At the same time, when AI becomes better than or at par with humans, we could also lose trust in human abilities.
What Happens When an AI Becomes Sentient?
Having understood what sentient AI is, can we know how it will behave after it becomes sentient? We do not yet have a robust, evidence-based answer to this question. However, speculation connects the concept of sentience with awareness, a sense of power, autonomy, and independence. This is why some scientists believe that with the rise of sentience in the digital realm, robots will learn to feel emotions in addition to processing massive amounts of information via Natural Language Processing (NLP). Thereafter, AI agents may start experiencing a wide range of complex emotions and successfully articulate them. But there is no empirical evidence to suggest such massive development in the next few decades or so.
Robert Long, a research fellow at the Future of Humanity Institute at the University of Oxford, defines sentience as a subset of consciousness. He further says that in order to be conscious, one needs to have subjective experiences. This experiential knowledge determines the absoluteness of a distinct voice, which is readily visible in human conscious subjects. As experts argue about the threshold of sentience, there is a defining technological gap in the system vis-a-vis testing the sentience of an AI agent. However, neuroscientist Giulio Tononi's Integrated Information Theory suggests that, in principle, it is possible to digitize consciousness in its entirety. Moreover, AI researcher Sam Bowman, one of the founders of the General Language Understanding Evaluation (GLUE) benchmark, claims it is plausible that AI will become sentient within the next two decades, but also draws a line of caution: we need to understand sentient AI's progress and where it is leading us before stepping further.
What are the Main Concerns About It?
A 2022 Guardian article mentions that the culture of apprehension around sentient AI will inform the ideas that future generations hold about the concept. For instance, there have been many ominous rumours surrounding Google's LaMDA despite the discovery of its limitations. In order not to be victims of misinformation, we should be aware of the widespread concerns surrounding sentient AI and what we can do to curtail them, such as:
- AI’s self-autonomy may allow it to inquire about its rights
- AI’s idea of conscience and objective morality might not align with that of humans, leading to friction
- AI agents might not prefer being subjected to experiments
- AI agents may demand the same treatment as human beings
Frequently Asked Questions
How Long Will it be Before AI is Sentient Enough to Think for Itself?
While there is no definitive answer here, philosopher David Chalmers claims there could be a 20% chance of sentient AI within the next ten years.
What Would be the Early Signs of Sentient AI Arriving?
The day AI surpasses human intelligence in the most complex tasks, and understands the ramifications of each such task, will be the day AI becomes sentient.
How Would I Interact with a Sentient AI?
Sentient AI would be smart enough to understand any human language. You can interact with a sentient AI as you do with humans - through verbal language, sign language, or audio-visual media.
Understanding sentient AI's role in building a futuristic world is crucial for developing human-AI synergy. To further develop an understanding of sentient technology, and the possibility of a sentience-driven AI culture, explore Emeritus' artificial intelligence and machine learning courses.