Quotes & Sayings


We, and creation itself, actualize the possibilities of the God who sustains the world, towards becoming in the world in a fuller, deeper way. - R.E. Slater

There is urgency in coming to see the world as a web of interrelated processes of which we are integral parts, so that all of our choices and actions have [consequential effects upon] the world around us. - Process Metaphysician Alfred North Whitehead

Kurt Gödel's Incompleteness Theorem says (i) all closed systems are unprovable within themselves and that (ii) all open systems are rightly understood as incomplete. - R.E. Slater

The most true thing about you is what God has said to you in Christ, "You are My Beloved." - Tripp Fuller

The God among us is the God who refuses to be God without us, so great is God's Love. - Tripp Fuller

According to some Christian outlooks we were made for another world. Perhaps, rather, we were made for this world to recreate, reclaim, redeem, and renew unto God's future aspiration by the power of His Spirit. - R.E. Slater

Our eschatological ethos is to love. To stand with those who are oppressed. To stand against those who are oppressing. It is that simple. Love is our only calling and Christian Hope. - R.E. Slater

Secularization theory has been massively falsified. We don't live in an age of secularity. We live in an age of explosive, pervasive religiosity... an age of religious pluralism. - Peter L. Berger

Exploring the edge of life and faith in a post-everything world. - Todd Littleton

I don't need another reason to believe, your love is all around for me to see. – Anon

Thou art our need; and in giving us more of thyself thou givest us all. - Khalil Gibran, Prayer XXIII

Be careful what you pretend to be. You become what you pretend to be. - Kurt Vonnegut

Religious beliefs, far from being primary, are often shaped and adjusted by our social goals. - Jim Forest

We become who we are by what we believe and can justify. - R.E. Slater

People, even more than things, need to be restored, renewed, revived, reclaimed, and redeemed; never throw out anyone. – Anon

Certainly, God's love has made fools of us all. - R.E. Slater

An apocalyptic Christian faith doesn't wait for Jesus to come, but for Jesus to become in our midst. - R.E. Slater

Christian belief in God begins with the cross and resurrection of Jesus, not with rational apologetics. - Eberhard Jüngel, Jürgen Moltmann

Our knowledge of God is through the 'I-Thou' encounter, not in finding God at the end of a syllogism or argument. There is a grave danger in any Christian treatment of God as an object. The God of Jesus Christ and Scripture is irreducibly subject and never made as an object, a force, a power, or a principle that can be manipulated. - Emil Brunner

“Ehyeh Asher Ehyeh” means "I will be who I have yet to become." - God (Ex 3.14) or, alternately, “I AM who I AM Becoming.”

Our job is to love others without stopping to inquire whether or not they are worthy. - Thomas Merton

The church is God's world-changing social experiment of bringing unlikes and differents to the Eucharist/Communion table to share life with one another as a new kind of family. When this happens, we show to the world what love, justice, peace, reconciliation, and life together is designed by God to be. The church is God's show-and-tell for the world to see how God wants us to live as a blended, global, polypluralistic family united with one will, by one Lord, and baptized by one Spirit. – Anon

The cross that is planted at the heart of the history of the world cannot be uprooted. - Jacques Ellul

The Unity in whose loving presence the universe unfolds is inside each person as a call to welcome the stranger, protect animals and the earth, respect the dignity of each person, think new thoughts, and help bring about ecological civilizations. - John Cobb & Farhan A. Shah

If you board the wrong train it is of no use running along the corridors of the train in the other direction. - Dietrich Bonhoeffer

God's justice is restorative rather than punitive; His discipline is merciful rather than punishing; His power is made perfect in weakness; and His grace is sufficient for all. – Anon

Our little [biblical] systems have their day; they have their day and cease to be. They are but broken lights of Thee, and Thou, O God, art more than they. - Alfred Lord Tennyson

We can’t control God; God is uncontrollable. God can’t control us; God’s love is uncontrolling! - Thomas Jay Oord

Life in perspective but always in process... as we are relational beings in process to one another, so life events are in process in relation to each event... as God is to Self, is to world, is to us... like Father, like sons and daughters, like events... life in process yet always in perspective. - R.E. Slater

To promote societal transition to sustainable ways of living and a global society founded on a shared ethical framework which includes respect and care for the community of life, ecological integrity, universal human rights, respect for diversity, economic justice, democracy, and a culture of peace. - The Earth Charter Mission Statement

Christian humanism is the belief that human freedom, individual conscience, and unencumbered rational inquiry are compatible with the practice of Christianity or even intrinsic in its doctrine. It represents a philosophical union of Christian faith and classical humanist principles. - Scott Postma

It is never wise to have a self-appointed religious institution determine a nation's moral code. The opportunities for moral compromise and failure are high; the moral codes and creeds assuredly racist, discriminatory, or subjectively and religiously defined; and the pronouncement of inhumanitarian political objectives quite predictable. - R.E. Slater

God's love must both center and define the Christian faith and all religious or human faiths seeking human and ecological balance in worlds of subtraction, harm, tragedy, and evil. - R.E. Slater

In Whitehead’s process ontology, we can think of the experiential ground of reality as an eternal pulse whereby what is objectively public in one moment becomes subjectively prehended in the next, and whereby the subject that emerges from its feelings then perishes into public expression as an object (or “superject”) aiming for novelty. There is a rhythm of Being between object and subject, not an ontological division. This rhythm powers the creative growth of the universe from one occasion of experience to the next. This is the Whiteheadian mantra: “The many become one and are increased by one.” - Matthew Segall

Without Love there is no Truth. And True Truth is always Loving. There is no dichotomy between these terms but only seamless integration. This is the premier centering focus of a Processual Theology of Love. - R.E. Slater

-----

Note: Generally I do not respond to commentary. I may read the comments but wish to reserve my time to write (or write from the comments I read). Instead, I'd like to see our community help one another and in the helping encourage and exhort each of us towards Christian love in Christ Jesus our Lord and Savior. - re slater

Tuesday, February 7, 2023

50th Anniversary Conference of the Center for Process Studies



A 3-day celebration of the 50-year legacy of CPS and its creative transformation in the context of a new generation of process thinkers


Date and time
Wed, Feb 15, 2023, 9:00 AM – Fri, Feb 17, 2023, 6:00 PM PST

Location
Claremont United Church of Christ
233 Harrison Avenue
Claremont, CA 91711

CONFERENCE VISION AND RATIONALE

Founded in 1973 by John B. Cobb Jr. and David Ray Griffin, The Center for Process Studies (CPS) is coming upon its 50-year anniversary as a faculty research center of Claremont School of Theology. Under the leadership of Cobb and Griffin, CPS earned a global reputation as the southern California hub for the study of process philosophy and theology. Through rigorous research, cutting-edge conferences, and academic publications, Cobb and Griffin enacted the mission of CPS to facilitate the development, relevance and application of Whitehead’s philosophy across disciplines. The continued leadership by subsequent co-directors Marjorie Suchocki (1990), Philip Clayton (2003), Roland Faber (2006), Monica Coleman (2008) and others, built upon the foundations of Cobb and Griffin and continued to lead CPS into new generations of research and relevance. Now led by executive director Wm. Andrew Schwartz (2013) and program director Andrew M. Davis (2020), CPS enters its 50th year indebted to the contributions of the past, but looking toward the novelty of the future.

Co-sponsored by The Cobb Institute, The Institute for Ecological Civilization, and the Institute for the Postmodern Development of China, the “50th Anniversary Conference of the Center for Process Studies” celebrates the 50-year legacy of CPS and its creative transformation in the context of a new generation of process thinkers. Three core layers of investigation, each with four respective sessions (A-D), are proposed in light of this purpose:

1. Reenchanting Religion: Process Theology in the 21st Century (Feb 15th)

  • A. Divinity Reimagined: What is novel in the explication and/or reception of a process doctrine (or model) of God over and against its alternatives? Where are challenges and opportunities arising in current debates and conversations?
  • B. Religious Experience and Religious Belonging: What forms of religious experience, intuition and belonging are substantiated and/or produced from within process theology as creatively expressed across ecclesial settings? What does process theology/spirituality look like when lived out communally?
  • C. Panpsychism and Religious Naturalism: In what ways are process conceptions of panpsychism (or panexperientialism) relevant to the re-formulation of central Christian doctrines e.g., creation, Christology and divine action? What is the relationship between panpsychism and affirmations of religious naturalism?
  • D. Beyond Dialogue and Deep Religious Pluralism: Where are new connections being made between process theology and different religious systems and thinkers? In what ways is process theology informing new discussions of religious pluralism and multiplicity?

2. Science and Philosophy: Nature and the Nature of Reality (Feb 16th)

  • A. Facts, Values and Possibilities: How does process thought understand the relationship between facts, values and possibilities as variously expressed from physics to metaphysics? In what ways does process philosophy offer an interpretive metaphysics for the findings of physics?
  • B. New Materialism, Poststructuralism, and Process Philosophy: Where do points of connection lie between process philosophy, new materialism and poststructuralism? What new voices/insights are being integrated and what interdisciplinary insights are emerging?
  • C. Brains, Souls and Self: What new contributions is process thought making to related concerns and questions in neurology, psychology and psychiatry? Where have advances been made with respect to the brain, the mind-body problem, personal identity and the self, and personality growth and transformation?
  • D. Art, Beauty and Creativity: In what ways does process philosophy inform aesthetic experience, theory, and expression? Where are new developments and connections being made across artistic disciplines?

3. Process in Practice: Society, Sustainability and Ecological Civilization (Feb 17th)

  • A. Economies and Communities for the Common Good: In what ways is process thought informing and/or critiquing economic theories, policies and measures? What practical revisions are being proposed and where? To what end are these revisions sought and for which communities?
  • B. Power, Peace and Politics: What resources does process thought sustain in attempts to remedy deepening bifurcations across politics, society, and culture? What extremes does process thought seek to avoid in political theory, governance, and power? How is peace possible given the dysfunctional entanglement of power and politics?
  • C. Process Philosophy and Education: How is process philosophy informing and/or transforming educational theory and practice? Where do theoretical and practical weaknesses lie from grade-school to university settings and beyond? How is process philosophy responding to the crises in education?
  • D. Environmental Ethics and Ecological Civilization: What theoretical and practical ethical measures does process thought support and/or justify in light of the ecological crisis and the quest for an ecological civilization? What requires transformation in our relation to the environment and its human and non-human “others”?

It remains a truly exciting time for the global process movement, with a new generation of scholars leading the way. This conference aims not only to celebrate the 50-year legacy of CPS, but also to anticipate its continued relevance and creative transformation in the decades still ahead.



Alfred North Whitehead

About The Center for Process Studies

Our Mission

Driven by the principle of relationality and commitment to the common good, the Center for Process Studies (CPS) works on cutting-edge discourse across disciplines to promote the exploration of interconnection, change, and intrinsic value as core features of our world.

As a faculty-based research center at Claremont School of Theology (CST), CPS conducts research and develops educational resources that explore the implications of these principles on a range of topics (e.g. science, ecology, culture, philosophy, religion, education, psychology, political theory, etc.) in a unique transdisciplinary style that harmonizes fragmented disciplinary thinking in order to develop integrated and holistic modes of understanding.

The CPS mission is carried out through academic conferences, courses, and seminars, a robust visiting scholars program, the world’s largest library related to process-relational writings, and an array of publications (including a peer-reviewed journal and a number of active books series).


Our History

The Center for Process Studies (CPS) is a faculty-based research center at Claremont School of Theology, in affiliation with Claremont Graduate University. The Center conducts research, publishes scholarly and popular works, organizes courses and conferences, and seeks to promote the common good for our common world by means of the relational approach found in process thought.

Influenced by the work of Alfred North Whitehead and John B. Cobb, Jr., CPS is dedicated to the expansive exploration of ever-new sectors of academia and society through the transformative and ever-transforming tradition of process thinking. Process thinkers engage many different religious traditions and non-religious worldviews, working to both create positive social change and protect the natural environment. Among the Center’s publications are: Process Studies, the leading refereed journal in the field; Process Perspectives, a popular newsmagazine; and multiple series of books and publications stemming from its various projects.

Founded in 1973 by John B. Cobb, Jr. and David Ray Griffin as a faculty center of Claremont School of Theology in affiliation with Claremont Graduate University, the Center for Process Studies was established to facilitate the development and application of a relational worldview. Marjorie Hewitt Suchocki became a co-director in 1990. Philip Clayton became a co-director in fall 2003. Roland Faber became a co-director in January 2006, and Monica A. Coleman became a co-director in fall 2008. In fall 2013, Wm Andrew Schwartz (a PhD student at the time) was appointed as Managing Director of CPS. In this capacity, he played a central role in organizing the 2000-person "Seizing an Alternative" conference in June 2015. Upon completion of his PhD in fall 2016, Schwartz was appointed as Executive Director of CPS. As of summer of 2020, CPS is affiliated with Willamette University and located in Salem, OR.

Through seminars, conferences, publications, a library/archive, and visiting scholars program, the Center for Process Studies encourages research on the form of process thought which received its primary impetus from philosophers Alfred North Whitehead (1861-1947) and Charles Hartshorne (1897-2000).

CPS contributes to the development of a new cultural paradigm of systematic metaphysics influenced by a relational worldview. As a resource for scholars and professionals, CPS coordinates multidisciplinary and trans-disciplinary research on pressing issues while seeking to avoid the inertia and limitations of segregated university disciplines. Where other new paradigm institutes focus on singular issues—like ecology, agriculture, feminism, race and class, decentralized political economic theory, or appropriate technology—a typical process focus is to integrate these issues through a non-dualistic worldview applicable to a wide range of issues.

Deeply appreciative of the natural sciences, process philosophy uniquely integrates science, religion, ethics, and aesthetics. It portrays the cosmos as an organic whole analyzable into internally related processes. In this way process thought offers a postmodern alternative to the mechanistic model that still influences much scientific work and is presupposed in much humanistic literature. The relational process perspective deals with multicultural, feminist, ecological, inter-religious, political, philosophical, and economic issues. Process thought provides a basis for discussion between Eastern and Western religious and cultural traditions. It offers an agenda on the social, political, and economic order that brings issues of human justice together with a concern for ecology. In these and other ways, process thinkers hope to contribute to those movements that will carry the world beyond war, injustice, and despair.


Saturday, February 4, 2023

What is a Language Model?




July 20, 2022


What are they used for? Where can you find them?
And what kind of information do they actually store?


Our aim at deepset is that everyone, no matter their level of technical background, can harness the power of modern natural language processing (NLP) and language models for their own use case. Haystack, our open-source framework, makes this a reality.

When we talk to our users, we encounter common sources of confusion about NLP and machine learning. Therefore, in the upcoming blog posts, we want to explain some basic NLP concepts in understandable language. First up: language models.


Language Models in NLP

Language models take center stage in NLP. But what is a language model? To answer that question, let’s first clarify the term model and its use in machine learning.

What is a machine learning model?

The real world is complex and confusing. Models serve to represent a particular field of interest — a domain — in simpler terms. For example, weather models are simplified representations of meteorological phenomena and their interactions. These models help us understand the weather domain better and make predictions about it.

In machine learning, models are much the same. They serve mainly to predict events based on past data, which is why they’re also known as forecasting or predictive models.

The data that we feed to a machine learning (ML) algorithm allows it to devise a model of the data's domain. That data should represent reality as faithfully as possible, so that the models based on it can approximate the real world closely.

What is a language model (LM)?

A language model is a machine learning model designed to represent the language domain. It can serve as the basis for a number of language-based tasks, for instance question answering, summarization, sentiment analysis, and plenty of other tasks that operate on natural language.

In a domain like weather forecasting, it’s easy to see how past data helps a model to predict a future state. But how do you apply that to language? In order to understand how the concept of prediction factors into language modeling, let’s take a step back and talk about linguistic intuition.

Linguistic intuition

As the speaker of a language, you have assembled an astonishing amount of knowledge about it, much of which cannot be taught explicitly. It includes judgments about grammaticality (whether or not a sentence is syntactically correct), synonymity (whether two words mean roughly the same) and sentence completion. Suppose I asked you to fill in the gap in the following sentence:

“Julia is looking for ___ purse.”

You’d probably say “her” or “my” or any other pronoun. Even a possessive noun phrase like “the cat Pablo’s” would work. But you wouldn’t guess something like “toothbrush” or “Las Vegas.” Why? Because of linguistic intuition.

Training a language model

Language models seek to model linguistic intuition. That is not an easy feat. As we’ve said, linguistic intuition isn’t learned through schooling but through constant use of a language (Noam Chomsky even postulated the existence of a special “language organ” in humans). So how can we model it?

Today’s state of the art in NLP is driven by large neural networks. Neural language models like BERT learn something akin to linguistic intuition by processing millions of data points. In machine learning, this process is known as “training.”

To train a model, we need to come up with tasks that cause it to learn a representation of a given domain. For language modeling, a common task consists of completing the missing word in a sentence, much like in our example earlier. Through this and other training tasks, a language model learns to encode the meanings of words and longer text passages.
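The "complete the missing word" training task can be sketched in a few lines. The toy helper below generates (masked sentence, target word) pairs; real models like BERT perform this masking at the tokenizer level over huge corpora, so this is only an illustration of the objective, not the actual implementation:

```python
import random

def make_training_pair(sentence, mask_token="[MASK]", seed=0):
    """Turn a sentence into a (masked input, target word) pair by hiding
    one randomly chosen word: the 'fill in the gap' training task."""
    words = sentence.split()
    i = random.Random(seed).randrange(len(words))
    masked = words[:i] + [mask_token] + words[i + 1:]
    return " ".join(masked), words[i]

# The model is trained to predict `target` given `masked`.
masked, target = make_training_pair("Julia is looking for her purse")
```

A model that repeatedly guesses the hidden word, and is corrected when wrong, gradually encodes the regularities of the language.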

So how do you get from a computational representation of a language’s semantic properties to a model that can perform specific tasks like question answering or summarization?


General-purpose Versus Domain-specific Language Models

General language models like BERT or its bigger sister RoBERTa require huge amounts of data to learn a language’s regularities. NLP practitioners often use Wikipedia and other freely available collections of textual data to train them. By now, BERT-like models exist for practically all the languages with a sufficiently large Wikipedia. In fact, we at deepset have produced several models for German and English, which you can check out on our models page.


So what can you do with these models? Why are they so popular? Well, BERT can be used to enhance language understanding, for example in the Google search engine. But arguably the biggest value of general-purpose language models is that they can serve as a basis for other language-based tasks like question answering. By exposing it to different datasets and adjusting the training objective, we can adapt a general language model to a specific use case.

Fine-tuning a language model

There are many tasks that benefit from a representation of linguistic intuition. Examples of such tasks are sentiment analysis, named entity recognition, question answering, and others. Adapting a general-purpose language model to such a task is known as fine-tuning.


Fine-tuning requires data specific to the task you want the model to accomplish. For instance, to fine-tune your model to the question-answering task, you need a dataset of question-answer pairs. Such data often needs to be created and curated manually, which makes it quite expensive to generate. On the bright side, fine-tuning requires much less data than training a general language model.

Where to look for models

Both general-purpose models and fine-tuned models can be saved and shared. The Hugging Face model hub is the most popular platform for model-sharing, with tens of thousands of models of different sizes, for different languages and use cases. Chances are high that your own use case is already covered by one of the models on the model hub.

To help you find a model that might fit your needs, you can use the interface on the left side of the model hub page to filter by task, language, and other criteria. This lets you specifically look for models that have been trained for question answering, summarization, and many other tasks. Once you’ve found a suitable model, all you need to do is plug it into your NLP pipeline, connect to your database, and start experimenting.
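Plugging a hub model into a pipeline can be as short as the sketch below. It assumes the `transformers` library is installed; `deepset/roberta-base-squad2` is one of deepset's publicly shared question-answering models on the hub, and the helpers around it are our own:

```python
def best_answer(candidates):
    """Pick the highest-scoring span from question-answering pipeline output."""
    return max(candidates, key=lambda c: c["score"])["answer"]

def ask(question, context, top_k=3):
    """Run a hub model locally; the weights are downloaded on first use."""
    from transformers import pipeline  # pip install transformers
    qa = pipeline("question-answering", model="deepset/roberta-base-squad2")
    # With top_k > 1 the pipeline returns a list of candidate answers.
    return best_answer(qa(question=question, context=context, top_k=top_k))
```

Something like `ask("Who founded CPS?", some_document)` would then return the highest-scoring answer span found in the document.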

How to handle domain-specific language

Though we often talk about languages as if they were homogeneous entities, the reality is very far from that. There are, for example, some professional domains — like medicine or law — that use highly specialized jargon, which non-experts can barely understand. Similarly, when a general BERT model is used to process data from one of those domains, it might perform poorly — just like a person without a degree in the field.

A technique called domain adaptation provides the solution: here, the pretrained model undergoes additional training steps, this time on specialized data like legal documents or medical papers.


The Hugging Face model hub contains BERT-based language models that have been adapted to the scientific, medical, legal, or financial domain. These domain-specific language models can then serve as a basis for further downstream tasks. For instance, this highly specialized model extracts named entities (like names for cells and proteins) from biomedical texts in English and Spanish.
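In code, a domain-adapted model drops into the same pipeline interface as a general one. Note that the default `model_id` below is a placeholder, not a real checkpoint; substitute whichever domain-specific NER model you find on the hub:

```python
def confident_entities(entities, threshold=0.8):
    """Keep only the entity spans the model is reasonably sure about."""
    return [e for e in entities if e["score"] >= threshold]

def extract_entities(text, model_id="some-org/biomedical-ner"):
    """NB: the default model_id is a placeholder; pick a real
    domain-adapted NER checkpoint from the Hugging Face hub."""
    from transformers import pipeline  # pip install transformers
    # aggregation_strategy="simple" merges word pieces into whole-entity spans.
    ner = pipeline("ner", model=model_id, aggregation_strategy="simple")
    return confident_entities(ner(text))
```

The rest of your pipeline stays the same; only the model id changes when you move from a general to a domain-specific checkpoint.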


What Can Language Models Do?

Language models can seem very smart. In this demo, for example, we show how well our RoBERTa model can answer questions about the Game of Thrones universe. It’s important to note, though, that this language model doesn’t actually know anything. It is just very good at extracting the right answers from documents — thanks to its mastery of human language and the fine-tuning it received on a question-answering dataset. It operates similarly to a human agent reading through documents to extract information from them, only much, much faster!

Other types of language models take a completely different approach. For example, models in the famed GPT family of generative language models actually do memorize information. They have billions of parameters, enough to store information picked up during training in addition to learning the language's regularities.

So what can a language model do? Exactly what it’s been trained to do — not more, not less. Some models are trained to extract answers from text, others to generate answers from scratch. Some are trained to summarize text, others simply learn to represent language.

If your documents don’t use highly specialized language, a pre-trained model might work just fine — no further training required. Other use cases, however, might benefit from additional training steps. In our upcoming blog post, we’ll explore in more detail how you can work with techniques like fine-tuning and domain adaptation to get the most out of language models.


Composable NLP with Haystack

Modern NLP builds on decades of research and incorporates complex concepts from math and computer science. That’s why we promote a practice of composable NLP with Haystack, which lets users build their own NLP-based systems through a mix-and-match approach. You don’t have to be an NLP practitioner to use our framework, just as you don’t need to know anything about hardware or electricity to use a computer.

Want to see how to integrate pre-trained language models into an NLP pipeline? Check out our GitHub repository or sign up to deepset Cloud.

To learn more about NLP, make sure to download our free ebook NLP for Product Managers.



Tips & Tricks of Using A.I. LLMs, Parts 1 & 2


Large Language Models and
Where to Use Them: Part 1

Jul 7, 2022 • 8 min read

Over the past few years, large language models (LLMs) have evolved from emerging to mainstream technology. In this blog post, we'll explore some of the most common natural language processing (NLP) use cases that they can address. This is part one of a two-part series.

Large Language Models and Where to Use Them
You can find Part 2 here.


A large language model (LLM) is a type of machine learning model that can handle a wide range of natural language processing (NLP) use cases. But due to their versatility, LLMs can be a bit overwhelming for newcomers who are trying to understand when and where to use these models.

In this blog series, we’ll simplify LLMs by mapping out the seven broad categories of use cases where you can apply them, with examples from Cohere's LLM platform. Hopefully, this can serve as a starting point as you begin working with the Cohere API, or even seed some ideas for the next thing you want to build.

The seven use case categories are:
  1. Generate
  2. Summarize
  3. Rewrite
  4. Extract
  5. Search/Similarity
  6. Cluster
  7. Classify
Because of the general-purpose nature of LLMs, the range of use cases and relevant industries within each category is extremely wide. This post will not attempt to delve too deeply into each, but it will provide you with enough ideas and examples to help you start experimenting.


1. Generate

Probably the first thing that comes to mind when talking about LLMs is their ability to generate original and coherent text. And that’s what this use case category is all about. LLMs are pre-trained using a huge collection of text gathered from a variety of sources. This means that they are able to capture the patterns of how language is used and how humans write.

Getting the best out of these generation models is now becoming a whole field of study in and of itself called prompt engineering. In fact, the first four use case categories on our list all leverage prompt engineering in their own ways.

More on the other three later. Prompt engineering is a vast topic, but at a very high level, the idea is to provide the model with a small amount of contextual information as a cue for generating a specific sequence of text.

One way to set up the context is to write a few lines of a passage for the model to continue. Imagine writing an essay or marketing copy where you would begin with the first few sentences about a topic, and then have the model complete the paragraph or even the whole piece.

Another way is by writing a few example patterns that indicate the type of text that we want the model to generate. This is an interesting one because of the different ways we can shape the model and the various applications that this entails.

Let’s take one example. The goal here is to have the model generate the first paragraph of a blog post. First, we prepare a short line of context about what we’d like the model to write. Then, we prepare two examples — each containing the blog’s title, its audience, the tone of voice, and the matching paragraph.

Finally, we feed the model with this prompt, together with the information for the new blog. And the model will duly generate the text that matches the context, as seen below.


Completion:


You can test it out by accessing the saved preset.

In fact, the excerpt you read at the beginning of this blog was generated using this preset!

That was just one example, but how we prompt a model is limited only by our creativity. Here are some other examples:
  • Writing product descriptions, given the product name and keywords
  • Writing chatbot/conversational AI responses
  • Developing a question-answering interface
  • Writing emails, given the purpose/command
  • Writing headlines and paragraphs
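Concretely, the prompt pattern described above (a line of context plus a couple of worked examples, then the fields for the new item) can be assembled in a few lines of code. The helper below is our own sketch; the field names and "--" separator are illustrative, not a format the API requires:

```python
def build_prompt(task, examples, new_item, fields):
    """Assemble a few-shot prompt. `new_item` should omit the field the
    model is to complete (the last entry in `fields`)."""
    lines = [task, ""]
    for ex in examples:
        lines += [f"{f}: {ex[f]}" for f in fields] + ["--"]
    lines += [f"{f}: {new_item[f]}" for f in fields if new_item.get(f)]
    lines.append(f"{fields[-1]}:")  # leave the last field for the model
    return "\n".join(lines)

prompt = build_prompt(
    "This program writes the first paragraph of a blog post given its title.",
    examples=[{"Title": "What is a language model?",
               "First Paragraph": "Language models take center stage in NLP."}],
    new_item={"Title": "Large language models and where to use them"},
    fields=["Title", "First Paragraph"],
)
```

The resulting string can then be sent to a generation endpoint such as Cohere's generate call, with the separator ("--") supplied as a stop sequence so the model halts after one completion.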

2. Summarize

The second use case category, which also leverages prompt engineering, is text summarization. Think about the amount of text that we deal with on a typical day, such as reports, articles, meeting notes, emails, transcripts, and so on. We can have an LLM summarize a piece of text by prompting it with a few examples of a full document and its summary.

The following is an example of article summarization, where we prepare the prompt to contain the full passage of an article and its one-line summary.

Prompt:


Completion:


You can test it out by accessing the saved preset.
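The shape of such a summarization prompt can be sketched as follows. The passages and the separator are invented placeholders, not the saved preset.

```python
# A sketch of a summarization prompt: full passages paired with one-line
# summaries, then the new passage left open for the model to complete.
# The passages and separator below are invented placeholders.
def summarization_prompt(pairs, new_passage):
    blocks = [f"Passage: {p}\nSummary: {s}" for p, s in pairs]
    blocks.append(f"Passage: {new_passage}\nSummary:")
    return "\n--\n".join(blocks)

pairs = [
    ("The company reported record quarterly revenue driven by cloud sales.",
     "Record revenue on cloud growth."),
]
prompt = summarization_prompt(pairs, "The team shipped three features this sprint.")
```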

Here are some other example documents where LLM summarization will be useful:
  • Customer support chats
  • Environmental, Social, and Governance (ESG) reports
  • Earnings calls
  • Paper abstracts
  • Dialogues and transcripts

3. Rewrite

Another flavor of prompt engineering is text rewriting. Rewriting is another task that we do every day and spend a lot of time on; automating it would free us up to work on more creative tasks.

Rewriting text can mean different things and take different forms, but one common example is text correction. The following is the task of correcting the spelling and grammar in voice-to-text transcriptions. We prepare the prompt with a short bit of context about the task, followed by examples of incorrect and corrected transcriptions.

Prompt:


Completion:


You can test it out by accessing the saved preset.

Here are some other example use cases for using an LLM to rewrite text:
  • Paraphrase a piece of text in a different voice
  • Build a spell checker that corrects text capitalizations
  • Rephrase chatbot responses
  • Redact personally identifiable information
  • Turn a complex piece of text into a digestible form

4. Extract

Text extraction is another use case category that can leverage a generation LLM. The idea is to take a long piece of text and extract only the key information or words from the text.

The following is the task of extracting relevant information from contracts. We prepare the prompt with a short bit of context about the task, followed by a couple of example contracts and the extracted text.

Prompt:


Completion:


You can test it out by accessing the saved preset.

Some other use cases in this category include:
  • Extract named entities from a document
  • Extract keywords and keyphrases from articles
  • Flag for personally identifiable information
  • Extract supplier and contract terms
  • Create tags for blogs

Conclusion

In part two of this series, we’ll continue our exploration of the remaining three use case categories (Search/Similarity, Cluster, and Classify). We’ll also explore how LLM APIs can help address more complex use cases. The world is complex, and a lot of problems can only be tackled by piecing multiple NLP models together. We’ll look at some examples of how we can quickly snap together a combination of API endpoints in order to build more complete solutions.


* * * * * * *

Large Language Models and
Where to Use Them: Part 2

Jul 7, 2022 • 8 min read

Over the past few years, large language models (LLMs) have evolved from emerging to mainstream technology. In this blog post, we'll explore some of the most common natural language processing (NLP) use cases that they can address. This is part two of a two-part series.

It can be a bit overwhelming for someone new to Large Language Models (LLMs) to understand when and where to use them in natural language processing (NLP) use cases. In this blog series, we simplify the application of LLMs by mapping out the seven broad categories of use cases that you can address with Cohere’s LLM.

In Part 1 of our series, we covered the first four use case categories: Generate, Summarize, Rewrite, and Extract. In this post, we will cover the other three: Search, Cluster, and Classify. Finally, we’ll look at how we can combine the different types, making their applications much more interesting and useful.


5. Search/Similarity

Any mention of LLMs will most likely spark discussion around their text generation capabilities, as we’ve seen in the previous four use cases. The less-talked-about but equally powerful capability is text representation.

While text generation is about creating new text, text representation is about making sense of existing text. Think about the amount of unstructured text data being generated today that’s only accelerated by the increasingly ubiquitous internet. It would not be possible for humans to process this massive volume of information without NLP-powered automation.

One such use case category for text representation is similarity search. Given a text query, the goal is to find documents that are most similar to the query.

The most obvious example use case for this is search engines. As users, we expect the search results to return links and documents that are highly relevant to our query. What makes modern search engines work very well is their ability to match the query to the appropriate results not just via keyword-matching, but by semantic similarity.

In simpler terms, they are able to perform matching based on meaning, context, themes, and ideas — abstract concepts that may use different words altogether, but very much relate to each other.

Let’s say a user enters the search string “ground transportation at the airport.” The search engine must be able to know that the user is looking for taxis, car rentals, trains, or other similar services, even if the user doesn’t explicitly mention them.

When we input a piece of text into a representation model, instead of generating more text, the model generates a set of numbers that represent the meaning or context of the input text. These numbers are called “text embeddings”. In LLMs, they tend to be a very long sequence of numbers, typically in the thousands, and the longer they are, the more information is stored about the text.
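As an illustration of how embeddings are compared, here is a toy cosine-similarity check. The 4-dimensional vectors are invented stand-ins for real embeddings, which typically have thousands of dimensions.

```python
import numpy as np

# Toy illustration of comparing text embeddings by cosine similarity.
# Real embeddings (e.g. from an embed endpoint) have thousands of
# dimensions; these 4-d vectors are invented for readability.
def cosine_similarity(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

query      = [0.9, 0.1, 0.0, 0.2]   # "ground transportation at the airport"
taxi_doc   = [0.8, 0.2, 0.1, 0.3]   # a page about airport taxis
recipe_doc = [0.0, 0.9, 0.8, 0.1]   # an unrelated cooking article

# The taxi page scores higher even though it shares no exact keywords.
assert cosine_similarity(query, taxi_doc) > cosine_similarity(query, recipe_doc)
```

A semantic search system ranks documents by exactly this kind of score between the query's embedding and each document's embedding.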

With Cohere, you can access this type of model via the Embed endpoint. This Python notebook provides an example of a semantic search application, where given a question, the search engine would return other frequently asked questions (FAQ) whose text embeddings are the most similar to the question.

It goes on to show all the questions on a two-dimensional plot, shown in the image below, where the closer two points are on the plot, the more semantically similar they are.

Two examples of similar questions about sharks and Boxing Day


This concept can be applied to a much broader range of use cases, for example:
  • Retrieval of related and useful documents within an organization
  • Similar product recommendations
  • eCommerce product search
  • Next article recommendations based on reading history
  • Selecting chatbot responses from an available list

6. Cluster

Clustering is another use case category that leverages text embeddings. The idea is to take a group of documents and make sense of how they are organized and how they are related to each other.

In the previous use case, we visualized a set of documents on a plot to get a sense of how a set of documents are similar, or different, from each other. Clustering uses the same principles, but adds another step of organizing them into groups. This can be done via clustering algorithms, for example, k-means clustering, where we specify the number of clusters and the algorithm will return the appropriate cluster associated with each piece.
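A minimal hand-rolled k-means over toy 2-d "embeddings" illustrates the idea. The points and the deterministic initialization (one seed point per region) are chosen for readability; real k-means uses random or k-means++ initialization, and real embeddings have far more dimensions.

```python
import numpy as np

# A hand-rolled k-means sketch on toy 2-d "embeddings". The points and the
# fixed initialization are invented to keep the demo deterministic.
def kmeans(points, init_indices, iters=10):
    centroids = points[init_indices].astype(float)
    for _ in range(iters):
        # Assign each point to its nearest centroid.
        dists = np.linalg.norm(points[:, None] - centroids[None], axis=2)
        labels = dists.argmin(axis=1)
        # Move each non-empty centroid to the mean of its assigned points.
        for j in range(len(centroids)):
            if np.any(labels == j):
                centroids[j] = points[labels == j].mean(axis=0)
    return labels

points = np.array([[0.1, 0.2], [0.2, 0.1], [0.15, 0.15],   # e.g. career posts
                   [0.9, 0.8], [0.8, 0.9], [0.85, 0.85]])  # e.g. coding posts
labels = kmeans(points, init_indices=[0, 3])  # two clusters recovered
```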

This Python notebook, also leveraging the Embed endpoint, goes into detail about how to make sense of three thousand “Ask HN” (Hacker News) posts. First, the text embeddings for each are generated. This is followed by clustering them into smaller groups by the theme or topic of the posts, supplemented by the keywords that represent the topic of each group.

Finally, these posts are visualized on a plot, shown in the image below, where each color represents a topic cluster. Below you can see a few topics emerging, such as life, career, coding, startups, and computer science.

Eight clusters from the top 3,000 Ask HN posts,
with each set of keywords representing a topic


This technique can be applied to a number of different tasks, such as:
  • Organizing customer feedback and requests into topics
  • Segmenting products into categories based on product descriptions
  • Turning ESG reports and news into themes
  • Organizing a huge corpus of company documents
  • Discovering emerging themes in survey response analysis


7. Classify

Last but not least is the text classification category, which is probably the most widely applicable use of NLP today. You can think of it as similar to clustering, with a slight twist.

Clustering is called an “unsupervised learning” algorithm. That’s because we don’t know what the clusters are beforehand. We simply choose the number of clusters, and the algorithm groups the documents we give it into that many clusters.

On the other hand, classification is a “supervised learning” algorithm, because this time, we already know beforehand what those clusters, or more precisely classes, are.

For example, say we have a list of eCommerce customer inquiries, and for routing purposes, we would like to categorize each of them into one of three classes: Shipping, Returns, and Tracking. To make the classifier work, we first need to train it by showing it enough examples of a piece of text, such as “Do you offer same day shipping?”, and its actual class, which in this case is Shipping.

With LLMs, there are a couple of possible approaches to doing this. The first is via text embeddings, demonstrated in this Python notebook. It shows an example of training a classifier using text embeddings. First, it generates the embeddings of each piece of text. Next, it uses these embeddings as the input for training the classifier. For this kind of setup, the number of training examples required will depend on the task, but typically it can range in the hundreds or even thousands.
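As a rough sketch of the embeddings-as-features idea, here is a simple nearest-centroid classifier. The 2-d vectors and labels are invented, and real setups would typically train a proper classifier (such as logistic regression) on much higher-dimensional embeddings.

```python
import numpy as np

# A sketch of classification on top of embeddings using a nearest-centroid
# rule. The vectors and labels below are invented for illustration.
train_embeddings = np.array([
    [0.9, 0.1], [0.8, 0.2],   # "Shipping" inquiries
    [0.1, 0.9], [0.2, 0.8],   # "Returns" inquiries
])
train_labels = ["Shipping", "Shipping", "Returns", "Returns"]

def predict(embedding):
    classes = sorted(set(train_labels))
    # One centroid per class: the mean of that class's training embeddings.
    centroids = {
        c: np.mean([e for e, l in zip(train_embeddings, train_labels) if l == c],
                   axis=0)
        for c in classes
    }
    return min(classes, key=lambda c: np.linalg.norm(embedding - centroids[c]))

print(predict(np.array([0.7, 0.3])))  # → "Shipping" (nearest that centroid)
```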

The other approach is to leverage “few-shot” classification, where we use prompt engineering to provide classification examples to the model. This has been shown to work well with as few as five training examples per class, though results still depend on the kind of task we are working on. It allows us to build a working classifier when we don’t have many training examples — an all-too-common problem.

Here’s how we would build the eCommerce inquiries classifier with a few-shot approach. The following is a screenshot from the Cohere Playground, where we leverage the Classify endpoint to build a classifier.

First, we prepare the prompt containing examples of text-class pairs. With a minimum of five examples per class, and three classes, we give it a total of fifteen examples.

The list of examples used to build the classifier


Next, we add any number of inputs that we would like to classify — here we have two inputs as examples.

The list of inputs for the classifier to classify


We can then trigger the classification, in which the model will output the predicted class for each input and the accompanying confidence level values, which indicate how confident the model is in its prediction of each class.


The predictions given by the classifier together with the confidence levels


You can test it out by accessing the saved preset.

Some example areas where text classification can be useful include:
  • Content moderation for toxic comments on online platforms
  • Intent classification in chatbots
  • Sentiment analysis on social media activity
  • eCommerce product categorization
  • Assigning customer support tickets to the right teams

Getting the best out of the Cohere API

Now that we’ve covered the seven main use case categories for LLMs, let’s consider how we can build really interesting applications — by stacking these different capabilities together. Let’s look at a few examples and start with a fun one.

Imagine that you are creating a chatbot that needs to have a certain voice or style. In our case, that bot happens to be a pirate!

Let’s make it a game where people can enter a phrase, and the bot will decide whether the phrase is “pirate” enough. And if it’s not, the bot will even correct the phrase and turn it into pirate lingo!

This is actually something that our team has experimented with ourselves. But without going into the implementation details, to make it work, we had to first classify whether or not a phrase is acceptable pirate speak. If it’s not, then we put the phrase through a pirate paraphraser. We then compare the similarity between the generated phrase and the original phrase, and only if they are similar enough would the bot return the new phrase.

To make this happen, we made use of three use case categories: Classify, Rewrite, and Search/Similarity.


A summary flow of the pirate paraphraser
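The three-step flow can be sketched end to end. Each function below is a trivial stand-in for the corresponding endpoint call; the wordlists, swap table, and similarity threshold are all invented for illustration.

```python
# A toy end-to-end sketch of the pirate bot flow (Classify → Rewrite →
# Similarity). Each step is a simple stand-in for an LLM endpoint call;
# the wordlists, swaps, and threshold are invented for illustration.
PIRATE_WORDS = {"ahoy", "matey", "arr", "ye"}

def is_pirate(phrase):                       # stand-in for a classifier
    return any(w in phrase.lower().split() for w in PIRATE_WORDS)

def piratify(phrase):                        # stand-in for a rewriter
    swaps = {"hello": "ahoy", "friend": "matey", "you": "ye"}
    return " ".join(swaps.get(w, w) for w in phrase.lower().split())

def similarity(a, b):                        # stand-in for embedding similarity
    wa, wb = set(a.split()), set(b.split())
    return len(wa & wb) / len(wa | wb)

def pirate_bot(phrase, threshold=0.3):
    if is_pirate(phrase):
        return phrase                        # already pirate enough
    rewritten = piratify(phrase)
    # Only return the rewrite if it stayed close to the original meaning.
    return rewritten if similarity(phrase.lower(), rewritten) >= threshold else phrase

print(pirate_bot("see you later friend"))  # → "see ye later matey"
```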


A more serious example would be a chatbot that answers questions on a forum. Here’s one possible basic implementation. First, we implement a classification step to determine if a user has entered a question or just a general comment or chat. And if it’s a question, then we proceed to search for the question from our database that is the most similar to the query, so we can provide a relevant answer.


A summary flow of the question answering chatbot


In another example, let’s say we are building an article recommendation system, where the goal is to provide a list of other articles most relevant to the one that a user is currently reading. This article demonstrates an example of implementing similarity search, classification, and extraction in a basic recommender.


A summary flow of the article recommender


We can take it even further by combining these steps with other APIs. In a recent blog post, we describe how to build complete, fully playable Magic the Gathering cards with AI, combining the capabilities of Cohere’s API together with a text-to-image generation API.


A summary flow of the Magic the Gathering card generator


Conclusion

With these examples, we are only just scratching the surface. The possibilities of using LLMs are limited only by our imagination. This is an exciting time where any developer and team, not just the big players anymore, can tackle some of the toughest NLP challenges by leveraging cutting-edge AI technologies that are made available via simple API calls.



What Are the Signs of True Faith in Your Children?




Meg Bucher, Writer and Author
Feb 01, 2021

“But the fruit of the Spirit is love, joy, peace, patience, kindness, goodness, faithfulness, gentleness and self-control; against such things there is no law.” (Galatians 5:22-23)

My 10-year-old threw one arm around her momma and the other in the air as we sang our favorite worship song. Real faith moves. She had been bullied to her breaking point, and it led her straight to Jesus. I love that memory fiercely, but I hold it loosely. Transformation is ongoing for all of us, every journey littered with highs and lows.

I still consider myself a parenting rookie. Raising kids drives me to my knees in prayer more than I could have ever understood it would when my babies were just babies. In our society, we talk a lot about self-care. My parenting self-care strategy is to get quiet with Jesus. The more time I make to faithfully seek Him, the more He prepares me to survive another day of motherhood. “A Christ-centered life begins with realizing that the source of everything we are is the Lord,” Paul Tripp explains. “He created us, he owns us, he gifted us with talents, he authors our story, and every blessing that we receive comes from him.”

Christ-centered lives parent from a place of humble submission instead of pride and authority. God has placed us purposefully to parent the particular children we are raising. He intimately knows and has purposefully designed us and them. Raising children to be Christ-centered in a world waving a self-centered banner is hard, but not impossible. Through the power of Christ in us, let’s pray our children see what it’s like to live a Christ-centered life, and choose to live that way as well.

Photo Credit: © Getty Images/myshkovsky



A Prayer for Seeing True Faith
in Your Children
Father,

Today as we talk about signs of our children’s faith, we ask You to provide clarity and encouragement. We are imperfect people, and our children are imperfect too. There are bound to be clashes of character and will, disagreement and misunderstandings. Parenting is challenging. Growing up is challenging. Meet us, and our children, in our challenges daily, Father. May we be slow to speak when we want to snap, and patient when we’re rushing to be on time. Let us lean into Your timing, Your plans, Your ways, and Your will, Father.

We pray our children would honor and obey us, so they receive Your full blessing! And we pray for You to equip and guide us to lead them in Your Truth, to live Your Truth out each day, and to love others the way You command. Let our lives bring glory and honor to You. May we be the biggest witness for our children. Let our lives, imperfect and messy, but faithful and honest, be signs of our faith to them.

Father, You reign sovereign over all. Parenting can make us feel frantically out of control. Bless and uphold us. Calm our hearts, and continue to minister to the anxieties of our hearts as we lift them up to You. Thank You for Your compassionate care for us, Lord Jesus. We pray in Your powerful name,

Amen.

The illusion of perfection is something we need to release into the hands of our heavenly Father. Perfect kids and easy parenting are no one’s reality. God is faithful to remind us of His faithful pursuit of our children, regardless of our messy human nature, and the sin that so easily entangles all of us. Through mistakes, mess-ups, groundings, misunderstandings, long-winded lectures, disagreements, sassiness, and drama, God is faithful and good. I have watched my children grow in their faith as the teen years creep into our reality, and I see the work of Jesus shining through them in the following ways.

Photo Credit: © Getty Images/Michael Truelove



1. The Way They Treat or Serve Others

“Therefore encourage one another and build each other up, just as in fact you are doing.” (1 Thessalonians 5:11)

Every time we drive by sirens and flashing lights, my youngest drops her head to pray. True faith has an instinctual reaction to pray for others. Aloud, she lifts up perfect strangers, and close family and friends. Christ-centered lives face outward.

Following Jesus allows us to see others as He sees them. Christ commanded us to love God, and love each other. True faith shows up for the kid sitting alone at the lunch table, or stands up for the one being bullied. Out of empathy gained from each unique situation, true faith applies each lesson learned, reaching out in love and encouragement to someone going through what they have gone through.

Kindness to siblings and friends is a sign of faith in action. Encouraging others, finding common interests with new friends seated next to them in class, and asking for prayer on account of others is a sign of true faith. The way we treat others puts the true nature of our hearts center-stage.

Mark wrote, “For even the Son of Man did not come to be served, but to serve, and to give his life as a ransom for many” (Mark 10:45). Christ-centered lives seek to serve others. Children may ask to donate to the local food bank or food drive at church or school, or drop off outgrown clothes to pass on to friends or family in need. Often the things our children struggle with and overcome through Christ become the very conduits He uses to bring change to someone else facing the same situation. As children grow older, their concerns grow deeper alongside the issues they work through themselves. Being the new kid at school lends empathy for other newbies as they arrive, and having been the victim of bullying allows a deeper and more compassionate perspective to lead the kindness movement among their peers.

Photo Credit: © Getty Images/omgimages



2. Confidence Rooted in Christ

“But blessed is the one who trusts in the LORD, whose confidence is in him.” (Jeremiah 17:7)

Confidence in Christ is a humble submission and commitment to work hard with the talents and gifts God has given us. Everything we do is meant to glorify Him. True faith is hard-working, and gives glory to God. We live in a world that encourages self-love. Pride can deceitfully creep into our children’s consciousness, not only recognizable by arrogance, but in reverse as they put themselves down and count themselves out. Godly confidence is humble, hard-working, and God-honoring. True faith chooses to believe the truth about who God says we are, rather than believe self-destructive lies.

The apostle John recorded these words of Jesus: “As the Father has loved me, so have I loved you. Now remain in my love” (John 15:9). When children ask for wisdom, that’s our cue to bring God’s truth into their situation. Faithfully, when they are obedient to listen and apply His word to what they're walking through, they learn where to come back again next time. Loving people sounds easy until all the spots at the lunch table fill up and loneliness sets in. It’s easy to be bitter when left out and feeling lonely, instead of looking around to see who God has placed in their lives. It’s also easy to forget about the lonely when seated at a table full of friends. On both sides of the table, those with true faith choose to allow the love and peace of Christ to guide them.

Psalm 119:103 reads, “How sweet your words taste to me; they are sweeter than honey.” Craving God’s word, whether it means poring through the Bible themselves or asking us for more of His wisdom, is a sign of true faith. Paul wrote to the Thessalonians, “pray without ceasing” (1 Thessalonians 5:17). Let me assure you, nothing is too trivial for a junior high girl to pray about! Our children, no matter what age, will learn to turn not only to us, but to God in prayer when they or those they know need help or healing. True faith produces continual and conversational prayer, bookended only by sleeping and rising. As we all grow in our faith, we learn to stay tuned to our Savior, always.

Photo Credit: © Unsplash/Ben White