Quotes & Sayings


We, and creation itself, actualize the possibilities of the God who sustains the world, towards becoming in the world in a fuller, deeper way. - R.E. Slater

There is urgency in coming to see the world as a web of interrelated processes of which we are integral parts, so that all of our choices and actions have [consequential effects upon] the world around us. - Process Metaphysician Alfred North Whitehead

Kurt Gödel's Incompleteness Theorems say (i) that no consistent closed system can prove its own consistency within itself and, that (ii) all open systems are rightly understood as incomplete. - R.E. Slater

The most true thing about you is what God has said to you in Christ, "You are My Beloved." - Tripp Fuller

The God among us is the God who refuses to be God without us, so great is God's Love. - Tripp Fuller

According to some Christian outlooks we were made for another world. Perhaps, rather, we were made for this world to recreate, reclaim, redeem, and renew unto God's future aspiration by the power of His Spirit. - R.E. Slater

Our eschatological ethos is to love. To stand with those who are oppressed. To stand against those who are oppressing. It is that simple. Love is our only calling and Christian Hope. - R.E. Slater

Secularization theory has been massively falsified. We don't live in an age of secularity. We live in an age of explosive, pervasive religiosity... an age of religious pluralism. - Peter L. Berger

Exploring the edge of life and faith in a post-everything world. - Todd Littleton

I don't need another reason to believe, your love is all around for me to see. – Anon

Thou art our need; and in giving us more of thyself thou givest us all. - Khalil Gibran, Prayer XXIII

Be careful what you pretend to be. You become what you pretend to be. - Kurt Vonnegut

Religious beliefs, far from being primary, are often shaped and adjusted by our social goals. - Jim Forest

We become who we are by what we believe and can justify. - R.E. Slater

People, even more than things, need to be restored, renewed, revived, reclaimed, and redeemed; never throw out anyone. – Anon

Certainly, God's love has made fools of us all. - R.E. Slater

An apocalyptic Christian faith doesn't wait for Jesus to come, but for Jesus to become in our midst. - R.E. Slater

Christian belief in God begins with the cross and resurrection of Jesus, not with rational apologetics. - Eberhard Jüngel, Jürgen Moltmann

Our knowledge of God is through the 'I-Thou' encounter, not in finding God at the end of a syllogism or argument. There is a grave danger in any Christian treatment of God as an object. The God of Jesus Christ and Scripture is irreducibly subject and never made as an object, a force, a power, or a principle that can be manipulated. - Emil Brunner

“Ehyeh Asher Ehyeh” means "I will be that who I have yet to become." - God (Ex 3.14) or, conversely, “I AM who I AM Becoming.”

Our job is to love others without stopping to inquire whether or not they are worthy. - Thomas Merton

The church is God's world-changing social experiment of bringing unlikes and differents to the Eucharist/Communion table to share life with one another as a new kind of family. When this happens, we show to the world what love, justice, peace, reconciliation, and life together is designed by God to be. The church is God's show-and-tell for the world to see how God wants us to live as a blended, global, polypluralistic family united with one will, by one Lord, and baptized by one Spirit. – Anon

The cross that is planted at the heart of the history of the world cannot be uprooted. - Jacques Ellul

The Unity in whose loving presence the universe unfolds is inside each person as a call to welcome the stranger, protect animals and the earth, respect the dignity of each person, think new thoughts, and help bring about ecological civilizations. - John Cobb & Farhan A. Shah

If you board the wrong train it is of no use running along the corridors of the train in the other direction. - Dietrich Bonhoeffer

God's justice is restorative rather than punitive; His discipline is merciful rather than punishing; His power is made perfect in weakness; and His grace is sufficient for all. – Anon

Our little [biblical] systems have their day; they have their day and cease to be. They are but broken lights of Thee, and Thou, O God art more than they. - Alfred Lord Tennyson

We can’t control God; God is uncontrollable. God can’t control us; God’s love is uncontrolling! - Thomas Jay Oord

Life in perspective but always in process... as we are relational beings in process to one another, so life events are in process in relation to each event... as God is to Self, is to world, is to us... like Father, like sons and daughters, like events... life in process yet always in perspective. - R.E. Slater

To promote societal transition to sustainable ways of living and a global society founded on a shared ethical framework which includes respect and care for the community of life, ecological integrity, universal human rights, respect for diversity, economic justice, democracy, and a culture of peace. - The Earth Charter Mission Statement

Christian humanism is the belief that human freedom, individual conscience, and unencumbered rational inquiry are compatible with the practice of Christianity or even intrinsic in its doctrine. It represents a philosophical union of Christian faith and classical humanist principles. - Scott Postma

It is never wise to have a self-appointed religious institution determine a nation's moral code. The opportunities for moral compromise and failure are high; the moral codes and creeds assuredly racist, discriminatory, or subjectively and religiously defined; and the pronouncement of inhumanitarian political objectives quite predictable. - R.E. Slater

God's love must both center and define the Christian faith and all religious or human faiths seeking human and ecological balance in worlds of subtraction, harm, tragedy, and evil. - R.E. Slater

In Whitehead’s process ontology, we can think of the experiential ground of reality as an eternal pulse whereby what is objectively public in one moment becomes subjectively prehended in the next, and whereby the subject that emerges from its feelings then perishes into public expression as an object (or “superject”) aiming for novelty. There is a rhythm of Being between object and subject, not an ontological division. This rhythm powers the creative growth of the universe from one occasion of experience to the next. This is the Whiteheadian mantra: “The many become one and are increased by one.” - Matthew Segall

Without Love there is no Truth. And True Truth is always Loving. There is no dichotomy between these terms but only seamless integration. This is the premier centering focus of a Processual Theology of Love. - R.E. Slater

-----

Note: Generally I do not respond to commentary. I may read the comments but wish to reserve my time to write (or write from the comments I read). Instead, I'd like to see our community help one another and in the helping encourage and exhort each of us towards Christian love in Christ Jesus our Lord and Savior. - re slater

Saturday, February 4, 2023

What is a Language Model?



What Is a Language Model?

July 20, 2022


What are they used for? Where can you find them?
And what kind of information do they actually store?


Our aim at deepset is that everyone, no matter their level of technical background, can harness the power of modern natural language processing (NLP) and language models for their own use case. Haystack, our open-source framework, makes this a reality.

When we talk to our users, we encounter common sources of confusion about NLP and machine learning. Therefore, in the upcoming blog posts, we want to explain some basic NLP concepts in understandable language. First up: language models.


Language Models in NLP

Language models take center stage in NLP. But what is a language model? To answer that question, let’s first clarify the term model and its use in machine learning.

What is a machine learning model?

The real world is complex and confusing. Models serve to represent a particular field of interest — a domain — in simpler terms. For example, weather models are simplified representations of meteorological phenomena and their interactions. These models help us understand the weather domain better and make predictions about it.

In machine learning, models are much the same. They serve mainly to predict events based on past data, which is why they’re also known as forecasting or predictive models.

The data that we feed to a machine learning (ML) algorithm allows it to devise a model of the data's domain. That data should represent reality as faithfully as possible, so that the models based on it can approximate the real world closely.

What is a language model (LM)?

A language model is a machine learning model designed to represent the language domain. It can be used as a basis for a number of different language-based tasks — for instance, question answering, summarization, sentiment analysis, named entity recognition, and plenty of other tasks that operate on natural language.

In a domain like weather forecasting, it’s easy to see how past data helps a model to predict a future state. But how do you apply that to language? In order to understand how the concept of prediction factors into language modeling, let’s take a step back and talk about linguistic intuition.

Linguistic intuition

As the speaker of a language, you have assembled an astonishing amount of knowledge about it, much of which cannot be taught explicitly. It includes judgments about grammaticality (whether or not a sentence is syntactically correct), synonymity (whether two words mean roughly the same) and sentence completion. Suppose I asked you to fill in the gap in the following sentence:

“Julia is looking for ___ purse.”

You’d probably say “her” or “my” or any other pronoun. Even a possessive noun phrase like “the cat Pablo’s” would work. But you wouldn’t guess something like “toothbrush” or “Las Vegas.” Why? Because of linguistic intuition.

Training a language model

Language models seek to model linguistic intuition. That is not an easy feat. As we’ve said, linguistic intuition isn’t learned through schooling but through constant use of a language (Noam Chomsky even postulated the existence of a special “language organ” in humans). So how can we model it?

Today’s state of the art in NLP is driven by large neural networks. Neural language models like BERT learn something akin to linguistic intuition by processing millions of data points. In machine learning, this process is known as “training.”

To train a model, we need to come up with tasks that cause it to learn a representation of a given domain. For language modeling, a common task consists of completing the missing word in a sentence, much like in our example earlier. Through this and other training tasks, a language model learns to encode the meanings of words and longer text passages.
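To make this concrete, here is a minimal sketch of that masked-word task using the Hugging Face transformers library and a pretrained BERT model. The model name and example sentence are our own illustrative choices, not prescribed by this article:

```python
# A minimal sketch: asking a pretrained BERT model to fill in a masked word,
# the same objective it was trained on. Requires: pip install transformers torch
from transformers import pipeline

# "fill-mask" loads a model trained on the masked-word task described above
unmasker = pipeline("fill-mask", model="bert-base-uncased")

# BERT marks the gap with the special [MASK] token
predictions = unmasker("Julia is looking for [MASK] purse.")

for p in predictions:
    # Each prediction carries a candidate word and the model's score for it
    print(f"{p['token_str']:>12}  {p['score']:.3f}")
# Expect pronouns like "her" to score highly; "toothbrush" will not
```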

So how do you get from a computational representation of a language’s semantic properties to a model that can perform specific tasks like question answering or summarization?


General-purpose Versus Domain-specific Language Models

General language models like BERT or its bigger sister RoBERTa require huge amounts of data to learn a language’s regularities. NLP practitioners often use Wikipedia and other freely available collections of textual data to train them. By now, BERT-like models exist for practically all the languages with a sufficiently large Wikipedia. In fact, we at deepset have produced several models for German and English, which you can check out on our models page.


So what can you do with these models? Why are they so popular? Well, BERT can be used to enhance language understanding, for example in the Google search engine. But arguably the biggest value of general-purpose language models is that they can serve as a basis for other language-based tasks like question answering. By exposing it to different datasets and adjusting the training objective, we can adapt a general language model to a specific use case.

Fine-tuning a language model

There are many tasks that benefit from a representation of linguistic intuition. Examples of such tasks are sentiment analysis, named entity recognition, question answering, and others. Adapting a general-purpose language model to such a task is known as fine-tuning.


Fine-tuning requires data specific to the task you want the model to accomplish. For instance, to fine-tune your model to the question-answering task, you need a dataset of question-answer pairs. Such data often needs to be created and curated manually, which makes it quite expensive to generate. On the bright side, fine-tuning requires much less data than training a general language model.
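As an illustrative sketch, fine-tuning for one of these tasks can look like the following, here sentiment analysis with the Hugging Face Trainer API. The dataset choice, sample sizes, and hyperparameters are assumptions for demonstration, not a prescribed recipe:

```python
# A minimal fine-tuning sketch: adapting a general-purpose BERT model to
# sentiment analysis. Requires: pip install transformers datasets torch
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# A small public sentiment dataset (illustrative choice)
dataset = load_dataset("imdb")
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length")

dataset = dataset.map(tokenize, batched=True)

# A fresh two-label classification head sits on top of the pretrained model
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="sentiment-bert", num_train_epochs=1),
    train_dataset=dataset["train"].shuffle(seed=42).select(range(2000)),
    eval_dataset=dataset["test"].select(range(500)),
)
trainer.train()  # far less data and compute than pretraining from scratch
```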

Where to look for models

Both general-purpose models and fine-tuned models can be saved and shared. The Hugging Face model hub is the most popular platform for model-sharing, with tens of thousands of models of different sizes, for different languages and use cases. Chances are high that your own use case is already covered by one of the models on the model hub.

To help you find a model that might fit your needs, you can use the interface on the left side of the model hub page to filter by task, language, and other criteria. This lets you specifically look for models that have been trained for question answering, summarization, and many other tasks. Once you’ve found a suitable model, all you need to do is plug it into your NLP pipeline, connect to your database, and start experimenting.
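That last step can take surprisingly little code. Here is a minimal sketch that pulls deepset's publicly shared roberta-base-squad2 model from the hub and runs it on a toy example (the question and context below are our own):

```python
# A minimal sketch: downloading a fine-tuned model from the Hugging Face
# model hub and running question answering. pip install transformers torch
from transformers import pipeline

# deepset's RoBERTa model, fine-tuned on the SQuAD 2.0 QA dataset
qa = pipeline("question-answering", model="deepset/roberta-base-squad2")

result = qa(
    question="What does fine-tuning require?",
    context="Fine-tuning requires data specific to the task you want the "
            "model to accomplish, such as a dataset of question-answer pairs.",
)
print(result["answer"], result["score"])
```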

How to handle domain-specific language

Though we often talk about languages as if they were homogeneous entities, the reality is very far from that. There are, for example, some professional domains — like medicine or law — that use highly specialized jargon, which non-experts can barely understand. Similarly, when a general BERT model is used to process data from one of those domains, it might perform poorly — just like a person without a degree in the field.

A technique called domain adaptation provides the solution: here, the pretrained model undergoes additional training steps, this time on specialized data like legal documents or medical papers.
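Here is a minimal sketch of such a domain-adaptation step, assuming a local file of in-domain text; the file name, base model, and settings are illustrative:

```python
# A minimal domain-adaptation sketch: continue masked-language-model training
# on in-domain text, e.g. legal documents. pip install transformers datasets torch
from datasets import load_dataset
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

# One in-domain document per line (hypothetical local file)
corpus = load_dataset("text", data_files={"train": "legal_corpus.txt"})
corpus = corpus.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=256),
    batched=True, remove_columns=["text"])

# The collator randomly masks 15% of tokens, recreating the pretraining task
collator = DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15)

Trainer(
    model=model,
    args=TrainingArguments(output_dir="legal-bert", num_train_epochs=1),
    train_dataset=corpus["train"],
    data_collator=collator,
).train()
```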


The Hugging Face model hub contains BERT-based language models that have been adapted to the scientific, medical, legal, or financial domain. These domain-specific language models can then serve as a basis for further downstream tasks. For instance, this highly specialized model extracts named entities (like names for cells and proteins) from biomedical texts in English and Spanish.


What Can Language Models Do?

Language models can seem very smart. In this demo, for example, we show how well our RoBERTa model can answer questions about the Game of Thrones universe. It’s important to note, though, that this language model doesn’t actually know anything. It is just very good at extracting the right answers from documents — thanks to its mastery of human language and the fine-tuning it received on a question-answering dataset. It operates similarly to a human agent reading through documents to extract information from them, only much, much faster!

Other types of language models take a completely different approach. For example, the famed GPT family of generative language models actually does memorize information. These models have so many parameters — billions — that they can store information picked up during training in addition to learning the language's regularities.

So what can a language model do? Exactly what it’s been trained to do — not more, not less. Some models are trained to extract answers from text, others to generate answers from scratch. Some are trained to summarize text, others simply learn to represent language.

If your documents don’t use highly specialized language, a pre-trained model might work just fine — no further training required. Other use cases, however, might benefit from additional training steps. In our upcoming blog post, we’ll explore in more detail how you can work with techniques like fine-tuning and domain adaptation to get the most out of language models.


Composable NLP with Haystack

Modern NLP builds on decades of research and incorporates complex concepts from math and computer science. That’s why we promote a practice of composable NLP with Haystack, which lets users build their own NLP-based systems through a mix-and-match approach. You don’t have to be an NLP practitioner to use our framework, just as you don’t need to know anything about hardware or electricity to use a computer.

Want to see how to integrate pre-trained language models into an NLP pipeline? Check out our GitHub repository or sign up to deepset Cloud.

To learn more about NLP, make sure to download our free ebook NLP for Product Managers.



Tips & Tricks of Using A.I. LLMs, Parts 1 & 2


Large Language Models and
Where to Use Them: Part 1

Jul 7, 2022 • 8 min read

Over the past few years, large language models (LLMs) have evolved from emerging to mainstream technology. In this blog post, we'll explore some of the most common natural language processing (NLP) use cases that they can address. This is part one of a two-part series.

Large Language Models and Where to Use Them
You can find Part 2 here.


A large language model (LLM) is a type of machine learning model that can handle a wide range of natural language processing (NLP) use cases. But due to their versatility, LLMs can be a bit overwhelming for newcomers who are trying to understand when and where to use these models.

In this blog series, we’ll simplify LLMs by mapping out the seven broad categories of use cases where you can apply them, with examples from Cohere's LLM platform. Hopefully, this can serve as a starting point as you begin working with the Cohere API, or even seed some ideas for the next thing you want to build.

The seven use case categories are:
  1. Generate
  2. Summarize
  3. Rewrite
  4. Extract
  5. Search/Similarity
  6. Cluster
  7. Classify
Because of the general-purpose nature of LLMs, the range of use cases and relevant industries within each category is extremely wide. This post will not attempt to delve too deeply into each, but it will provide you with enough ideas and examples to help you start experimenting.


1. Generate

Probably the first thing that comes to mind when talking about LLMs is their ability to generate original and coherent text. And that’s what this use case category is all about. LLMs are pre-trained using a huge collection of text gathered from a variety of sources. This means that they are able to capture the patterns of how language is used and how humans write.

Getting the best out of these generation models is now becoming a whole field of study in and of itself, called prompt engineering. In fact, the first four use case categories on our list all leverage prompt engineering in their own ways.

More on the other three later, but the basic idea in prompt engineering is to provide the model with a small amount of contextual information that serves as a cue for generating a specific sequence of text.

One way to set up the context is to write a few lines of a passage for the model to continue. Imagine writing an essay or marketing copy where you would begin with the first few sentences about a topic, and then have the model complete the paragraph or even the whole piece.

Another way is by writing a few example patterns that indicate the type of text that we want the model to generate. This is an interesting one because of the different ways we can shape the models and the various applications this entails.

Let’s take one example. The goal here is to have the model generate the first paragraph of a blog post. First, we prepare a short line of context about what we’d like the model to write. Then, we prepare two examples — each containing the blog’s title, its audience, the tone of voice, and the matching paragraph.

Finally, we feed the model with this prompt, together with the information for the new blog. And the model will duly generate the text that matches the context, as seen below.


Completion: [screenshot from the Cohere Playground]


You can test it out by accessing the saved preset.

In fact, the excerpt you read at the beginning of this blog was generated using this preset!
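In code, a prompt like this is simply passed to a generation endpoint. Here is a minimal sketch using Cohere's Python SDK; the API key and prompt text are placeholders of our own, and the model name and SDK details may differ between versions:

```python
# A minimal generation sketch with the Cohere Python SDK. pip install cohere
import cohere

co = cohere.Client("YOUR_API_KEY")  # placeholder key

# A context line plus two worked examples, then the new blog's details;
# the model is expected to continue the pattern with a first paragraph.
prompt = """This program writes the first paragraph of a blog post.

Title: Why Sleep Matters
Audience: busy professionals
Tone: friendly
First paragraph: We all know we should sleep more, but between deadlines and
family life it rarely happens. The good news is that small changes help.

Title: How Language Models Work
Audience: developers
Tone: informative
First paragraph: If you have ever wondered how a computer can complete your
sentences, the answer is a language model trained on mountains of text.

Title: Getting Started with Container Gardening
Audience: urban dwellers
Tone: encouraging
First paragraph:"""

# "xlarge" was a Cohere generation model at the time of writing (illustrative)
response = co.generate(model="xlarge", prompt=prompt,
                       max_tokens=100, temperature=0.8)
print(response.generations[0].text)
```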

That was just one example, but how we prompt a model is limited only by our creativity. Here are some other examples:
  • Writing product descriptions, given the product name and keywords
  • Writing chatbot/conversational AI responses
  • Developing a question-answering interface
  • Writing emails, given the purpose/command
  • Writing headlines and paragraphs

2. Summarize

The second use case category, which also leverages prompt engineering, is text summarization. Think about the amount of text that we deal with on a typical day, such as reports, articles, meeting notes, emails, transcripts, and so on. We can have an LLM summarize a piece of text by prompting it with a few examples of a full document and its summary.

The following is an example of article summarization, where we prepare the prompt to contain the full passage of an article and its one-line summary.

Prompt: [screenshot from the Cohere Playground]

Completion: [screenshot from the Cohere Playground]


You can test it out by accessing the saved preset.
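The same generation call handles summarization; only the prompt changes. Here is a minimal sketch of such a few-shot prompt, with placeholder passages of our own rather than the preset's:

```python
# A few-shot summarization prompt: full passages paired with one-line
# summaries, then the new passage for the model to summarize.
import cohere

co = cohere.Client("YOUR_API_KEY")  # placeholder key

new_passage = ("Researchers found that regular short walks improved focus "
               "and mood in office workers more than one long daily workout.")

prompt = f"""Passage: The company reported record quarterly revenue, driven
by strong demand for its cloud services, though hardware sales declined.
Summary: Record revenue on cloud demand despite weaker hardware sales.

Passage: City council approved the new bike-lane network after two years of
public consultation, with construction set to begin next spring.
Summary: Bike-lane network approved; construction starts next spring.

Passage: {new_passage}
Summary:"""

# A low temperature keeps the summary close to the source text
response = co.generate(model="xlarge", prompt=prompt,
                       max_tokens=50, temperature=0.3)
print(response.generations[0].text.strip())
```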

Here are some other example documents where LLM summarization will be useful:
  • Customer support chats
  • Environmental, Social, and Governance (ESG) reports
  • Earnings calls
  • Paper abstracts
  • Dialogues and transcripts

3. Rewrite

Another flavor of prompt engineering is text rewriting. Rewriting is another of those tasks that we do every day and spend a lot of time on; if we could automate it, we would be free to work on more creative tasks.

Rewriting text can mean different things and take different forms, but one common example is text correction. The following is the task of correcting the spelling and grammar in voice-to-text transcriptions. We prepare the prompt with a short bit of context about the task, followed by examples of incorrect and corrected transcriptions.

Prompt: [screenshot from the Cohere Playground]

Completion: [screenshot from the Cohere Playground]


You can test it out by accessing the saved preset.

Here are some other example use cases for using an LLM to rewrite text:
  • Paraphrase a piece of text in a different voice
  • Build a spell checker that corrects text capitalizations
  • Rephrase chatbot responses
  • Redact personally identifiable information
  • Turn a complex piece of text into a digestible form

4. Extract

Text extraction is another use case category that can leverage a generation LLM. The idea is to take a long piece of text and extract only the key information or words from the text.

The following is the task of extracting relevant information from contracts. We prepare the prompt with a short bit of context about the task, followed by a couple of example contracts and the extracted text.

Prompt: [screenshot from the Cohere Playground]

Completion: [screenshot from the Cohere Playground]


You can test it out by accessing the saved preset.

Some other use cases in this category include:
  • Extract named entities from a document
  • Extract keywords and keyphrases from articles
  • Flag for personally identifiable information
  • Extract supplier and contract terms
  • Create tags for blogs

Conclusion

In part two of this series, we’ll continue our exploration of the remaining three use case categories (Search/Similarity, Cluster, and Classify). We’ll also explore how LLM APIs can help address more complex use cases. The world is complex, and a lot of problems can only be tackled by piecing multiple NLP models together. We’ll look at some examples of how we can quickly snap together a combination of API endpoints in order to build more complete solutions.


* * * * * * *

Large Language Models and
Where to Use Them: Part 2

Jul 7, 2022 • 8 min read

Over the past few years, large language models (LLMs) have evolved from emerging to mainstream technology. In this blog post, we'll explore some of the most common natural language processing (NLP) use cases that they can address. This is part two of a two-part series.

It can be a bit overwhelming for someone new to Large Language Models (LLMs) to understand when and where to use them in natural language processing (NLP) use cases. In this blog series, we simplify LLM application by mapping out the seven broad categories of use cases that you can address with Cohere’s LLM.

In Part 1 of our series, we covered the first four use case categories: Generate, Summarize, Rewrite, and Extract. In this post, we will cover the other three: Search/Similarity, Cluster, and Classify. Finally, we'll look at how we can combine the different types, making their applications much more interesting and useful.


5. Search/Similarity

Any mention of LLMs will most likely spark discussion around their text generation capabilities, as we've seen in the previous four use cases. The less-talked-about but equally powerful capability is text representation.

While text generation is about creating new text, text representation is about making sense of existing text. Think about the amount of unstructured text data being generated today that’s only accelerated by the increasingly ubiquitous internet. It would not be possible for humans to process this massive volume of information without NLP-powered automation.

One such use case category for text representation is similarity search. Given a text query, the goal is to find documents that are most similar to the query.

The most obvious example use case for this is search engines. As users, we expect the search results to return links and documents that are highly relevant to our query. What makes modern search engines work very well is their ability to match the query to the appropriate results not just via keyword-matching, but by semantic similarity.

In simple words, they are able to perform matching based on meaning, context, themes, ideas — abstract concepts that may use different words altogether, but very much relate to each other.

Let’s say a user enters the search string “ground transportation at the airport.” The search engine must be able to know that the user is looking for taxis, car rentals, trains, or other similar services, even if the user doesn’t explicitly mention them.

When we input a piece of text into a representation model, instead of generating more text, the model generates a set of numbers that represent the meaning or context of the input text. These numbers are called “text embeddings”. In LLMs, they tend to be a very long sequence of numbers, typically in the thousands, and the longer they are, the more information is stored about the text.

With Cohere, you can access this type of model via the Embed endpoint. This Python notebook provides an example of a semantic search application, where given a question, the search engine would return other frequently asked questions (FAQ) whose text embeddings are the most similar to the question.
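Here is a minimal sketch of that idea: embed a query and a few FAQ entries via the Embed endpoint, then rank the FAQs by cosine similarity. The FAQ texts are our own, and the model name and SDK details may vary by version:

```python
# A minimal semantic-search sketch using text embeddings.
# pip install cohere numpy
import cohere
import numpy as np

co = cohere.Client("YOUR_API_KEY")  # placeholder key

faqs = [
    "Which taxi and rideshare services operate at the airport?",
    "Where can I rent a car near the terminal?",
    "What items are prohibited in checked baggage?",
]
query = "ground transportation at the airport"

# One embedding (a long vector of numbers) per input text
vectors = co.embed(model="large", texts=faqs + [query]).embeddings
faq_vecs, query_vec = np.array(vectors[:-1]), np.array(vectors[-1])

# Cosine similarity: higher means closer in meaning, not just shared words
scores = faq_vecs @ query_vec / (
    np.linalg.norm(faq_vecs, axis=1) * np.linalg.norm(query_vec))

for score, faq in sorted(zip(scores, faqs), reverse=True):
    print(f"{score:.3f}  {faq}")
# The taxi and car-rental questions should outrank the baggage question
```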

The notebook goes on to show all the questions on a two-dimensional plot, shown in the image below, where the closer two points are on the plot, the more semantically similar they are.

Two examples of similar questions about sharks and Boxing Day


This concept can be applied to a much broader range of use cases, for example:
  • Retrieval of related and useful documents within an organization
  • Similar product recommendations
  • eCommerce product search
  • Next article recommendations based on reading history
  • Selecting chatbot responses from an available list

6. Cluster

Clustering is another use case category that leverages text embeddings. The idea is to take a group of documents and make sense of how they are organized and how they are related to each other.

In the previous use case, we visualized a set of documents on a plot to get a sense of how similar, or different, they are from each other. Clustering uses the same principles, but adds another step of organizing them into groups. This can be done via clustering algorithms — for example, k-means clustering, where we specify the number of clusters and the algorithm returns the appropriate cluster for each piece of text.
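A minimal sketch of that two-step recipe, embeddings followed by k-means via scikit-learn, follows; the posts and cluster count are toy illustrations:

```python
# A minimal clustering sketch: embed documents, then group them with k-means.
# pip install cohere scikit-learn numpy
import cohere
import numpy as np
from sklearn.cluster import KMeans

co = cohere.Client("YOUR_API_KEY")  # placeholder key

posts = [
    "Ask HN: How do I negotiate a higher salary?",
    "Ask HN: Best resources for learning Rust?",
    "Ask HN: How do you deal with burnout?",
    "Ask HN: Which database should a small startup pick?",
]

# Step 1: one embedding vector per post
embeddings = np.array(co.embed(model="large", texts=posts).embeddings)

# Step 2: we choose the number of clusters; the algorithm assigns each post
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(embeddings)

for label, post in zip(labels, posts):
    print(label, post)
```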

This Python notebook, also leveraging the Embed endpoint, goes into detail about how to make sense of three thousand “Ask HN” (Hacker News) posts. First, the text embeddings for each are generated. This is followed by clustering them into smaller groups by the theme or topic of the posts, supplemented by the keywords that represent the topic of each group.

Finally, these posts are visualized on a plot, shown in the image below, where one color represents a topic cluster. Below you can see a few topics emerging, such as life, career, coding, startups, and computer science.

Eight clusters from the top 3,000 Ask HN posts,
with each set of keywords representing a topic


This technique can be applied to a number of different tasks, such as:
  • Organizing customer feedback and requests into topics
  • Segmenting products into categories based on product descriptions
  • Turning ESG reports and news into themes
  • Organizing a huge corpus of company documents
  • Discovering emerging themes in survey responses


7. Classify

Last but not least is the text classification category, and that’s because it is probably the most widely applicable use of NLP today. You can think of it as similar to clustering, with a slight twist.

Clustering is called an “unsupervised learning” algorithm. That’s because we don’t know what the clusters are beforehand — we specify a number of clusters (we can choose any number), and the algorithm will group the documents we give it accordingly.

On the other hand, classification is a “supervised learning” algorithm, because this time, we already know beforehand what those clusters, or more precisely classes, are.

For example, say we have a list of eCommerce customer inquiries, and for routing purposes, we would like to categorize each of them into one of three classes: Shipping, Returns, and Tracking. To make the classifier work, we first need to train it by showing it enough examples of a piece of text, such as “Do you offer same day shipping?”, and its actual class, which in this case is Shipping.

With LLMs, there are a couple of possible approaches to doing this. The first is via text embeddings, demonstrated in this Python notebook. It shows an example of training a classifier using text embeddings. First, it generates the embeddings of each piece of text. Next, it uses these embeddings as the input for training the classifier. For this kind of setup, the number of training examples required will depend on the task, but typically it can range in the hundreds or even thousands.
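A minimal sketch of the embeddings-as-features approach follows, shrunk to a toy six-example dataset for illustration (in practice you would use the hundreds or thousands of examples mentioned above):

```python
# A minimal sketch: text embeddings as input features for a standard classifier.
# pip install cohere scikit-learn
import cohere
from sklearn.linear_model import LogisticRegression

co = cohere.Client("YOUR_API_KEY")  # placeholder key

texts = [
    "Do you offer same day shipping?",
    "How long does delivery take?",
    "Can I return an item I bought last week?",
    "What is your refund policy?",
    "Where is my package right now?",
    "My order shows no tracking updates.",
]
labels = ["Shipping", "Shipping", "Returns", "Returns", "Tracking", "Tracking"]

# Step 1: turn each text into an embedding vector
features = co.embed(model="large", texts=texts).embeddings

# Step 2: train an ordinary classifier on those vectors
clf = LogisticRegression(max_iter=1000).fit(features, labels)

new = co.embed(model="large", texts=["Has my parcel left the warehouse?"])
print(clf.predict(new.embeddings))  # expected: ['Tracking']
```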

The other approach is by leveraging “few-shot” classification. With this approach, we leverage prompt engineering to provide classification examples to the model. This has been shown to work well with as few as five training examples per class, though results still depend on the kind of task we are working on. But this option allows us to build a working classifier when we don’t have many training examples — an all-too-common problem.

Here’s how we would build the eCommerce inquiries classifier with a few-shot approach. The following is a screenshot from the Cohere Playground, where we leverage the Classify endpoint to build a classifier.

First, we prepare the prompt containing examples of text-class pairs. With a minimum of five examples per class, and three classes, we give it a total of fifteen examples.

The list of examples used to build the classifier


Next, we add any number of inputs that we would like to classify — here we have two inputs as examples.

The list of inputs for the classifier to classify


We can then trigger the classification, in which the model will output the predicted class for each input and the accompanying confidence level values, which indicate how confident the model is in its prediction of each class.


The predictions given by the classifier together with the confidence levels


You can test it out by accessing the saved preset.
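Outside the Playground, the same classifier can be built with a few lines against the Classify endpoint. Here is a minimal sketch with fifteen toy examples of our own; note that the Example import path and response field shapes have shifted between SDK versions:

```python
# A minimal few-shot classification sketch with Cohere's Classify endpoint.
# pip install cohere  (the Example import path has moved in newer SDK versions)
import cohere
from cohere.classify import Example

co = cohere.Client("YOUR_API_KEY")  # placeholder key

# At least five labeled examples per class, three classes (toy data)
examples = [
    Example("Do you offer same day shipping?", "Shipping"),
    Example("Can you ship to Italy?", "Shipping"),
    Example("How long does shipping take?", "Shipping"),
    Example("Is express delivery available?", "Shipping"),
    Example("What are the shipping costs?", "Shipping"),
    Example("Can I return my order?", "Returns"),
    Example("What is your refund policy?", "Returns"),
    Example("The item arrived damaged, what now?", "Returns"),
    Example("Do I pay for return postage?", "Returns"),
    Example("How do I exchange a size?", "Returns"),
    Example("Where is my package?", "Tracking"),
    Example("My tracking number does not work.", "Tracking"),
    Example("When will my order arrive?", "Tracking"),
    Example("Has my order shipped yet?", "Tracking"),
    Example("Can I see my delivery status?", "Tracking"),
]

response = co.classify(model="medium",
                       inputs=["Has my parcel left the warehouse?"],
                       examples=examples)

for c in response.classifications:
    # Each result carries the predicted class and the model's confidence
    # (the exact confidence format varies by SDK version)
    print(c.input, "->", c.prediction, c.confidence)
```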

Some example areas where text classification can be useful include:
  • Content moderation for toxic comments on online platforms
  • Intent classification in chatbots
  • Sentiment analysis on social media activity
  • eCommerce product categorization
  • Assigning customer support tickets to the right teams

Getting the best out of the Cohere API

Now that we’ve covered the seven main use case categories for LLMs, let’s consider how we can build really interesting applications — by stacking these different capabilities together. Let’s look at a few examples and start with a fun one.

Imagine that you are creating a chatbot that needs to have a certain voice or style. In our case, that bot happens to be a pirate!

Let’s make it a game where people can enter a phrase, and the bot will decide whether the phrase is “pirate” enough. And if it’s not, the bot will even correct the phrase and turn it into pirate lingo!

This is actually something that our team has experimented with ourselves. But without going into the implementation details, to make it work, we had to first classify whether or not a phrase is acceptable pirate speak. If it’s not, then we put the phrase through a pirate paraphraser. We then compare the similarity between the generated phrase and the original phrase, and only if they are similar enough would the bot return the new phrase.

To make this happen, we made use of three use case categories: Classify, Rewrite, and Search/Similarity.


A summary flow of the pirate paraphraser


A more serious example would be a chatbot that answers questions on a forum. Here’s one possible basic implementation. First, we implement a classification step to determine if a user has entered a question or just a general comment or chat. And if it’s a question, then we proceed to search for the question from our database that is the most similar to the query, so we can provide a relevant answer.


A summary flow of the question answering chatbot
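Here is a minimal sketch of how those two steps might be snapped together in code, reusing the Classify and Embed patterns from the earlier sketches; all intent examples and FAQ entries are toy placeholders:

```python
# A minimal composition sketch: classify the user's message, then answer
# questions by semantic similarity search over stored FAQs.
# pip install cohere numpy
import cohere
import numpy as np
from cohere.classify import Example  # import path varies by SDK version

co = cohere.Client("YOUR_API_KEY")  # placeholder key

INTENT_EXAMPLES = [
    Example("How do I reset my password?", "question"),
    Example("What are the forum rules?", "question"),
    Example("Where can I download the app?", "question"),
    Example("Is there a mobile version?", "question"),
    Example("When was this site launched?", "question"),
    Example("Thanks, that helped a lot!", "chitchat"),
    Example("Good morning everyone.", "chitchat"),
    Example("This forum is great.", "chitchat"),
    Example("Just stopping by to say hi.", "chitchat"),
    Example("Have a nice weekend!", "chitchat"),
]

FAQ = {
    "How do I reset my password?": "Use the 'Forgot password' link on the login page.",
    "Where can I change my profile picture?": "Open Settings, then Profile.",
}

def reply(message: str) -> str:
    # Step 1: is this a question or just a general comment?
    intent = co.classify(model="medium", inputs=[message],
                         examples=INTENT_EXAMPLES).classifications[0].prediction
    if intent != "question":
        return "Thanks for the comment!"
    # Step 2: find the stored question closest in meaning to the query
    questions = list(FAQ)
    vectors = np.array(co.embed(model="large",
                                texts=questions + [message]).embeddings)
    q_vecs, m_vec = vectors[:-1], vectors[-1]
    scores = q_vecs @ m_vec / (np.linalg.norm(q_vecs, axis=1)
                               * np.linalg.norm(m_vec))
    return FAQ[questions[int(np.argmax(scores))]]

print(reply("I forgot my password, what do I do?"))
```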


In another example, let’s say we are building an article recommendation system, where the goal is to provide a list of other articles most relevant to the one that a user is currently reading. This article demonstrates an example of implementing similarity search, classification, and extraction in a basic recommender.


A summary flow of the article recommender


We can take it even further by combining these steps with other APIs. In a recent blog post, we describe how to build complete, fully playable Magic the Gathering cards with AI, combining the capabilities of Cohere’s API together with a text-to-image generation API.


A summary flow of the Magic the Gathering card generator


Conclusion

With these examples, we are only just scratching the surface. The possibilities of using LLMs are limited only by our imagination. This is an exciting time where any developer and team, not just the big players anymore, can tackle some of the toughest NLP challenges by leveraging cutting-edge AI technologies that are made available via simple API calls.



What Are the Signs of True Faith in Your Children?



What Are the Signs of True Faith
in Your Children?

Meg Bucher, Writer and Author
Feb 01, 2021

“But the fruit of the Spirit is love, joy, peace, patience, kindness, goodness, faithfulness, gentleness and self-control; against such things there is no law.” (Galatians 5:22-23)

My 10-year-old threw one arm around her momma and the other in the air as we sang our favorite worship song. Real faith moves. She had been bullied to her breaking point, and it led her straight to Jesus. I love that memory fiercely, but I hold it loosely. Transformation is ongoing for all of us – every journey littered with highs and lows.

I still consider myself a parenting rookie. Raising kids drives me to my knees in prayer more than I could have ever understood it would when my babies were just babies. In our society, we talk a lot about self-care. My parenting self-care strategy is to get quiet with Jesus. The more time I make to faithfully seek Him, the more He prepares me to survive another day of motherhood. “A Christ-centered life begins with realizing that the source of everything we are is the Lord,” Paul Tripp explains. “He created us, he owns us, he gifted us with talents, he authors our story, and every blessing that we receive comes from him.”

Christ-centered lives parent from a place of humble submission instead of pride and authority. God has placed us purposefully to parent the particular children we are raising. He intimately knows and has purposefully designed us and them. Raising children to be Christ-centered in a world waving a self-centered banner is hard, but not impossible. Through the power of Christ in us, let’s pray our children see what it’s like to live a Christ-centered life, and choose to live that way as well.




A Prayer for Seeing True Faith
in Your Children
Father,

Today as we talk about signs of our children’s faith, we ask You to provide clarity and encouragement. We are imperfect people, and our children are imperfect too. There are bound to be clashes of character and will, disagreement and misunderstandings. Parenting is challenging. Growing up is challenging. Meet us, and our children, in our challenges daily, Father. May we be slow to speak when we want to snap, and patient when we’re rushing to be on time. Let us lean into Your timing, Your plans, Your ways, and Your will, Father.

We pray our children would honor and obey us, so they receive Your full blessing! And we pray for You to equip and guide us to lead them in Your Truth, to live Your Truth out each day, and to love others the way You command. Let our lives bring glory and honor to You. May we be the biggest witness for our children. Let our lives, imperfect and messy, but faithful and honest, be signs of our faith to them.

Father, You reign sovereign over all. Parenting can make us feel frantically out of control. Bless and uphold us. Calm our hearts, and continue to minister to the anxieties of our hearts as we lift them up to You. Thank You for Your compassionate care for us, Lord Jesus. We pray in Your powerful name,

Amen.

The illusion of perfection is something we need to release into the hands of our heavenly Father. Perfect kids and easy parenting are no one’s reality. God is faithful to remind us of His faithful pursuit of our children, regardless of our messy human nature, and the sin that so easily entangles all of us. Through mistakes, mess-ups, groundings, misunderstandings, long-winded lectures, disagreements, sassiness, and drama, God is faithful and good. I have watched my children grow in their faith as the teen years creep into our reality, and I see the work of Jesus shining through them in the following ways.




1. The Way They Treat or Serve Others

“Therefore encourage one another and build each other up, just as in fact you are doing.” (1 Thessalonians 5:11)

Every time we drive by sirens and flashing lights, my youngest drops her head to pray. True faith has an instinctual reaction to pray for others. Aloud, she lifts up perfect strangers, and close family and friends. Christ-centered lives face outward.

Following Jesus allows us to see others as He sees them. Christ commanded us to love God, and love each other. True faith shows up for the kid sitting alone at the lunch table, or stands up for the one being bullied. Out of empathy gained from each unique situation, true faith activates each lesson learned to reach out in love and encouragement for someone going through what they have gone through.

Kindness to siblings and friends is a sign of faith in action. Encouraging others, finding common interests with new friends seated next to them in class, and asking for prayer on account of others is a sign of true faith. The way we treat others puts the true nature of our hearts center-stage.

The evangelist Mark wrote, “For even the Son of Man did not come to be served, but to serve, and to give his life as a ransom for many” (Mark 10:45). Christ-centered lives seek to serve others. Children may ask to donate to the local food bank or food drive at church or school, drop off outgrown clothes, or pass them on to friends or family in need. Often the things our children struggle with and overcome through Christ become the very conduits through which He works when the same situation arises in another’s life. As children grow older, their concerns grow deeper alongside the issues they work through themselves. Being the new kid at school lends empathy for other newbies as they arrive, and having been the victim of bullying allows a deeper and more compassionate perspective to lead the kindness movement among their peers.




2. Confidence Rooted in Christ

“But blessed is the one who trusts in the LORD, whose confidence is in him.” (Jeremiah 17:7)

Confidence in Christ is a humble submission and commitment to work hard with the talents and gifts God has given us. Everything we do is meant to glorify Him. True faith is hard-working, and gives glory to God. We live in a world that encourages self-love. Pride can deceitfully creep into our children’s consciousness, not only recognizable by arrogance, but in reverse as they put themselves down and count themselves out. Godly confidence is humble, hard working, and God-honoring. True faith chooses to believe the truth about who God says we are, rather than believe self-destructive lies.

The apostle John recorded these words of Jesus: “As the Father has loved me, so have I loved you. Now remain in my love” (John 15:9). When children ask for wisdom, that’s our cue to bring God’s truth into their situation. Faithfully, when they are obedient to listen and apply His word to what they're walking through, they learn where to come back again next time. Loving people sounds easy until all the spots at the lunch table fill up and loneliness sets in. It’s easy to be bitter when left out and feeling lonely, instead of looking around to see who God has placed in their lives. It’s also easy to forget about the lonely when seated at a table full of friends. On both sides of the table, those with true faith choose to allow the love and peace of Christ to guide them.

Psalm 119:103 reads, “How sweet your words taste to me; they are sweeter than honey.” Craving God’s word, whether it means poring over the Bible themselves or asking us for more of His wisdom, is a sign of true faith. Paul wrote to the Thessalonians, “pray without ceasing” (1 Thessalonians 5:17). Let me assure you, nothing is too trivial for a junior high girl to pray about! Our children, no matter what age, will learn to turn not only to us, but to God in prayer when they or those they know need help or healing. True faith produces continual and conversational prayer, bookended only by sleeping and rising. As we all grow in our faith, we learn to stay tuned to our Savior, always.




Monday, January 30, 2023

Chatbot and I Discuss Proving God and Using Love as a Theology



Chatbot and I Discuss Proving God and
Using Love as a Theology

by R.E. Slater and OpenAI's ChatGPT
January 30, 2023

*Disclaimer: Reading through "Chatbot's" responses, there are statements and replies I would either disagree with or tweak a bit to be truer to the source. I will highlight those areas when I do. - R.E. Slater

Introduction

I developed this chat a couple of days ago as I was playing around with OpenAI's service tool. Today, I refined it a bit before releasing it. Mostly, I like ChatGPT because it presents grossly complicated ideas clearly and simply.

But this is also where I must distinguish AI-generated presentations from my own work - which I do here - in subject areas I have published on in a hundred different ways over the years, by form and by function.

Today's chat is yet another iteration of how a new theology of important difference is seeking to counter older theologies overlooking difference based upon previous culturally-subjective philosophic models of bias.

Today's Christianity requires a better expression than the one I grew up in - and with which I have become greatly discontented - in its unbiblical positions and outcomes. Which I find humorous, as it is usually my older Christian groups which would claim the same about me, even as I find it more true of traditional church creeds and dogmas.

Which is also why so many people have drifted from the Christian church and Christianity itself, because such creeds and dogmas have produced so many ungodly perceptions and attitudes amongst their adherents.

And rather than struggle with building a progressive Christianity upon the older versions I was taught, I simply removed the barriers to godly love to re-establish its Jesus-core, whereby progressive faiths might more easily rest when emphasizing God's love over God's judgments and wraths.

See what you think... this theology is there even in the vast realms of artificial intelligence. It lies there in plain view if we but look for it.

R.E. Slater
January 30, 2023


* * * * * * * * * *


Hi. Prove God

It is not possible to prove the existence of God using scientific methods, as God is generally considered to be a supernatural being. The existence of God is a matter of faith and belief, and different people have different beliefs about God. Some believe in the existence of one all-powerful God, while others believe in multiple gods or no gods at all. The concept of God is also central to many religions, and different religions have different beliefs about the nature and attributes of God.

Prove there is no God

It is not possible to prove the non-existence of God using scientific methods, as the existence of God is a matter of faith and belief. Arguments against the existence of God are typically based on philosophical and logical reasoning, rather than empirical evidence. Some argue that the concept of God is illogical or that the existence of God is incompatible with the existence of evil and suffering in the world. Others argue that the complexity and diversity of life can be explained through evolution and natural processes, rather than by the actions of a supreme being. However, these are arguments and perspectives, not proofs, as the existence of God is not a scientific hypothesis that can be tested with empirical data.

Does process theology remove the problem of theodicy?

Process theology is a type of theology that emphasizes the idea that God is constantly involved in the world and that the universe is in a state of constant change and evolution. It views God as a dynamic and evolving being who is affected by the world and its events.

One of the key ideas in process theology is the concept of divine persuasion, which holds that God works to persuade creatures to act in certain ways, rather than forcing them to do so. This means that God does not control everything that happens in the world, but rather works to influence and guide it.

In terms of the problem of theodicy, which is the question of how to reconcile the existence of evil and suffering with the belief in a benevolent and all-powerful God, process theology offers an alternative perspective. It suggests that God is not the creator of evil, but rather that it arises from the free choices of creatures and the natural processes of the universe. God is seen as working to overcome evil and suffering, rather than causing it.

Process theology does not completely remove the problem of theodicy, but offers an alternative perspective on it, by emphasizing the dynamic and evolving nature of God and the world, and by de-emphasizing the idea of God as an all-powerful, all-knowing and all-good being.

ADDENDUM: As such, process theology emphasizes an all-loving, uncontrolling God where love is free to be-and-become through a series of indeterminate creational events in which God participates with creation. Importantly, God is ontologically different from creation but intimately infilling freewilled creational events.

More so, that God's very being, God's DNA as it were, was poured into creation's birth and through continuous, processual cycles of rebirth. That it is unnecessary for God to control as God is on every leading edge of creational pronouncement as well as every series of creational rebirthing, both good and bad (re sin and evil).

Meaningfully, God is in every processual event along with every actualization of every processual event, as they twist-and-turn in relational enactment, bond, estrangement, cooperation, and union with other relationally interactive events. This is not divine control but divine birthing again-and-again in a cosmos infilled with God's love and generational insistence as a guiding teleology.

- re slater

How is divine omnipotence different from amipotence?

The terms "omnipotent" and "amipotent" are related to the concept of power and the nature of God.

Omnipotence is the belief that God is all-powerful, meaning that God has the ability to do anything that is logically possible. In other words, there is nothing that God cannot do. This is a traditional attribute of God in monotheistic religions such as Christianity, Islam and Judaism.

Amipotence, on the other hand, is a term that is used in some theological and philosophical contexts to describe a God who has limited power, or the ability to do certain things but not others. This idea of a limited God is often associated with Process Theology and Open Theism, which propose a God who is not all-knowing, all-powerful or unchanging, but a God who is affected by the world and is in a state of evolution [with creation's own processual evolution. Further, amipotence refers to a loving, uncontrolling God rather than an all-powerful God who sometimes loves. - res].

In summary, the term "omnipotent" refers to the belief that God has the ability to do anything that is logically possible, while "amipotent" refers to the belief that God has limited power, or the ability to do certain things but not others [because of divine love and not because of divine fiat. By God's love comes a freewill creation. Which is to say that all Godly attributes come after God's love and not before God's love. - re slater]

Describe divine amipotence in terms of divine love

Divine amipotence, as it relates to divine love, suggests that God's power is not absolute or limitless, but rather is limited in some way. This idea is associated with the belief that God's power is directed towards promoting love and goodness in the world.

In this view, God does not have the power to control everything that happens in the world, but rather works to influence and guide it towards love and goodness. God's power is seen as being directed towards the well-being and flourishing of creatures, rather than the exercise of arbitrary control.

In this sense, God's love is not a passive or detached love, but rather an active and engaged love that is constantly working to bring about the best for creatures. God's love is seen as the source of God's power and the driving force behind God's actions in the world. This perspective on divine amipotence suggests that God's power is grounded in and directed by God's love, which is the foundation of God's relationship with the world.

It's important to note that this perspective on divine amipotence is different from the traditional perspective of God's omnipotence, where God's power is absolute and limitless, and God is not affected by the world or the actions of creatures [based upon previous Greek Hellenizations or beliefs about gods and godhead such as Zeus and the Olympiad. Process thought removes Greek influences upon church doctrine and in its place promotes a processual metaphysic of cosmology and godhead. - res]

Explain divine sovereignty from a process viewpoint

Divine sovereignty is the belief that God is the supreme authority and ruler over all things. From a process viewpoint, divine sovereignty is understood differently than in traditional theology.

In traditional theology, divine sovereignty is often understood as God having complete control over everything that happens in the world, including the actions of creatures. God is often seen as the creator and sustainer of the universe, who is in control of all events and outcomes.

However, in process theology, the concept of divine sovereignty is understood differently. It emphasizes the idea that God is not the creator of [determinator of divine causal opportunities within] the world, but rather that the universe is in a state of constant change and evolution [as its own divinely initiated self-creator]. God is seen as being involved in the world, but not as the one who controls it.

[This is a difficult paragraph which I find myself continually working on... that God and creation are separate but intimately part with one another as the result of a divinely initiated processual relation with each other birthing profoundly loving and good teleological events which are immediately corrupted at inception countering a divine insistence towards generative value.

A creational inception event which then requires continuous acts of divine atonement and redemption - and not just once by the Christian Cross of Jesus, which among other things symbolizes this yin-yang between God and creation. That these continuous creational process events are deeply, intimately underlaid and infilled with divine love and goodness.

It's my own philosophic-theological struggle to remove divine determination in favor of divine design allowing creational response with an activating processually-evolving creational teleology. That is, creation has the potential to fully love but not the ability to do so on its own... which I find a paradox if creation was at one time activated by divine love.

Further, creation's very "birth was as a continua creatio event ("from a primal state of somethingness") as versus creation ex nihilo, "out of nothing" referring to a primal state that wasn't, then was, by divine act. That creation was a clay awaiting a molder, a clay-organizer (rather than maker in the strictest sense), a clay-fashioner. Thus, process theology must speak to panentheism as opposed to Westernized classic theism on the one hand, or Eastern pantheism on the other hand. Here's a diagram of that arrangement: - re slater]

 

https://relevancy22.blogspot.com/2018/12/describing-relational-process-based.html


...From a process viewpoint, divine sovereignty is understood as God's ability to influence and guide the world towards love and goodness. God is seen as working to persuade creatures to act in certain ways, rather than forcing them to do so. This means that God does not control everything that happens in the world, but rather works to influence and guide it.

In this sense, divine sovereignty is understood as God's ability to guide the world towards love and goodness, rather than as an absolute and arbitrary control over it. It is also understood that God is affected by the world, God is in a state of evolution, and that the world is not a finished product, but a work in progress.

Using process theology, explain how there can be a creational teleology, and if so, describe what this process-based teleology can mean.

In process theology, the concept of teleology refers to the idea that there is a purpose or goal to the universe, and that this purpose is guided by God.

In traditional theology, teleology is often understood as the idea that God created the universe with a specific purpose or goal in mind, and that this purpose is reflected in the natural laws and processes of the universe. However, in process theology, the concept of teleology is understood differently.

Process theology proposes that the universe is in a state of constant change and evolution, and that God is not the creator of the world, but rather involved in it. In this sense, teleology is not seen as something that is imposed on the world by God, but rather as something that emerges from the interactions and relationships between creatures and their environment.

From a process viewpoint, creational teleology refers to the idea that the universe is constantly evolving and that God is guiding this process towards the goal of greater love and goodness. This teleology is not a fixed or predetermined goal, but rather a dynamic and evolving one. It suggests that the purpose of the universe is not fixed but is constantly evolving as the universe itself evolves.

This process-based teleology can be understood as the idea that God is working to bring about the best possible outcome for creatures, and that this outcome is constantly evolving as the universe itself evolves. It also means that God is not the one who determines the outcome, but the one who guides the process towards the best possible outcome, and that the outcome is not predetermined but rather a result of the interactions between the creatures and their environment.

In summary, in process theology, creational teleology refers to the idea that the universe is constantly evolving, and that God is guiding this process towards the goal of greater love and goodness, and that this goal is not predetermined but rather a result of the interactions between the creatures and their environment.

Excellent. Thank you.

You're welcome! I'm glad I could help. If you have any other questions, feel free to ask.