Quotes & Sayings


We, and creation itself, actualize the possibilities of the God who sustains the world, towards becoming in the world in a fuller, deeper way. - R.E. Slater

There is urgency in coming to see the world as a web of interrelated processes of which we are integral parts, so that all of our choices and actions have [consequential effects upon] the world around us. - Process Metaphysician Alfred North Whitehead

Kurt Gödel's Incompleteness Theorems say that (i) closed systems cannot prove their own consistency from within and (ii) all open systems are rightly understood as incomplete. - R.E. Slater

The most true thing about you is what God has said to you in Christ, "You are My Beloved." - Tripp Fuller

The God among us is the God who refuses to be God without us, so great is God's Love. - Tripp Fuller

According to some Christian outlooks we were made for another world. Perhaps, rather, we were made for this world to recreate, reclaim, redeem, and renew unto God's future aspiration by the power of His Spirit. - R.E. Slater

Our eschatological ethos is to love. To stand with those who are oppressed. To stand against those who are oppressing. It is that simple. Love is our only calling and Christian Hope. - R.E. Slater

Secularization theory has been massively falsified. We don't live in an age of secularity. We live in an age of explosive, pervasive religiosity... an age of religious pluralism. - Peter L. Berger

Exploring the edge of life and faith in a post-everything world. - Todd Littleton

I don't need another reason to believe, your love is all around for me to see. – Anon

Thou art our need; and in giving us more of thyself thou givest us all. - Khalil Gibran, Prayer XXIII

Be careful what you pretend to be. You become what you pretend to be. - Kurt Vonnegut

Religious beliefs, far from being primary, are often shaped and adjusted by our social goals. - Jim Forest

We become who we are by what we believe and can justify. - R.E. Slater

People, even more than things, need to be restored, renewed, revived, reclaimed, and redeemed; never throw out anyone. – Anon

Certainly, God's love has made fools of us all. - R.E. Slater

An apocalyptic Christian faith doesn't wait for Jesus to come, but for Jesus to become in our midst. - R.E. Slater

Christian belief in God begins with the cross and resurrection of Jesus, not with rational apologetics. - Eberhard Jüngel, Jürgen Moltmann

Our knowledge of God is through the 'I-Thou' encounter, not in finding God at the end of a syllogism or argument. There is a grave danger in any Christian treatment of God as an object. The God of Jesus Christ and Scripture is irreducibly subject and never made as an object, a force, a power, or a principle that can be manipulated. - Emil Brunner

“Ehyeh Asher Ehyeh” means "I will be that who I have yet to become." - God (Ex 3.14) or, conversely, “I AM who I AM Becoming.”

Our job is to love others without stopping to inquire whether or not they are worthy. - Thomas Merton

The church is God's world-changing social experiment of bringing unlikes and differents to the Eucharist/Communion table to share life with one another as a new kind of family. When this happens, we show to the world what love, justice, peace, reconciliation, and life together is designed by God to be. The church is God's show-and-tell for the world to see how God wants us to live as a blended, global, polypluralistic family united with one will, by one Lord, and baptized by one Spirit. – Anon

The cross that is planted at the heart of the history of the world cannot be uprooted. - Jacques Ellul

The Unity in whose loving presence the universe unfolds is inside each person as a call to welcome the stranger, protect animals and the earth, respect the dignity of each person, think new thoughts, and help bring about ecological civilizations. - John Cobb & Farhan A. Shah

If you board the wrong train it is of no use running along the corridors of the train in the other direction. - Dietrich Bonhoeffer

God's justice is restorative rather than punitive; His discipline is merciful rather than punishing; His power is made perfect in weakness; and His grace is sufficient for all. – Anon

Our little [biblical] systems have their day; they have their day and cease to be. They are but broken lights of Thee, and Thou, O God art more than they. - Alfred Lord Tennyson

We can’t control God; God is uncontrollable. God can’t control us; God’s love is uncontrolling! - Thomas Jay Oord

Life in perspective but always in process... as we are relational beings in process to one another, so life events are in process in relation to each event... as God is to Self, is to world, is to us... like Father, like sons and daughters, like events... life in process yet always in perspective. - R.E. Slater

To promote societal transition to sustainable ways of living and a global society founded on a shared ethical framework which includes respect and care for the community of life, ecological integrity, universal human rights, respect for diversity, economic justice, democracy, and a culture of peace. - The Earth Charter Mission Statement

Christian humanism is the belief that human freedom, individual conscience, and unencumbered rational inquiry are compatible with the practice of Christianity or even intrinsic in its doctrine. It represents a philosophical union of Christian faith and classical humanist principles. - Scott Postma

It is never wise to have a self-appointed religious institution determine a nation's moral code. The opportunities for moral compromise and failure are high; the moral codes and creeds assuredly racist, discriminatory, or subjectively and religiously defined; and the pronouncement of inhumanitarian political objectives quite predictable. - R.E. Slater

God's love must both center and define the Christian faith and all religious or human faiths seeking human and ecological balance in worlds of subtraction, harm, tragedy, and evil. - R.E. Slater

In Whitehead’s process ontology, we can think of the experiential ground of reality as an eternal pulse whereby what is objectively public in one moment becomes subjectively prehended in the next, and whereby the subject that emerges from its feelings then perishes into public expression as an object (or “superject”) aiming for novelty. There is a rhythm of Being between object and subject, not an ontological division. This rhythm powers the creative growth of the universe from one occasion of experience to the next. This is the Whiteheadian mantra: “The many become one and are increased by one.” - Matthew Segall

Without Love there is no Truth. And True Truth is always Loving. There is no dichotomy between these terms but only seamless integration. This is the premier centering focus of a Processual Theology of Love. - R.E. Slater

-----

Note: Generally I do not respond to commentary. I may read the comments but wish to reserve my time to write (or write from the comments I read). Instead, I'd like to see our community help one another and in the helping encourage and exhort each of us towards Christian love in Christ Jesus our Lord and Savior. - re slater

Friday, June 14, 2024

8 Questions About Using AI Responsibly, Answered


Illustration: Señor Salme


How to avoid pitfalls around data privacy, bias, misinformation, generative AI, and more.

May 09, 2023


Ethics in the Age of AI

Summary

Generative AI tools are poised to change the way every business operates. As your own organization begins strategizing which to use, and how, operational and ethical considerations are inevitable. This article delves into eight of them, including how your organization should prepare to introduce AI responsibly, how you can prevent harmful bias from proliferating in your systems, and how to avoid key privacy risks.

Introduction

While the question of how organizations can (and should) use AI isn’t a new one, the stakes and urgency of finding answers have skyrocketed with the release of ChatGPT, Midjourney, and other generative AI tools. Everywhere, people are wondering:

  • How can we use AI tools to boost performance?
  • Can we trust AI to make consequential decisions?
  • Will AI take away my job?
The power of AI introduced by OpenAI, Microsoft, and Nvidia — and the pressure to compete in the market — make it inevitable that your organization will have to navigate the operational and ethical considerations of machine learning, large language models, and much more. And while many leaders are focused on operational challenges and disruptions, the ethical concerns are at least — if not more — pressing. Given how regulation lags technological capabilities and how quickly the AI landscape is changing, the burden of ensuring that these tools are used safely and ethically falls to companies.

In my work at the intersection of occupations, technology, and organizations, I’ve examined how leaders can develop digital mindsets and the dangers of biased large language models. I have identified best practices for organizations’ use of technology and amplified consequential issues to help ensure that AI implementations are ethical. To help you better identify how you and your company should be thinking about these issues — and make no mistake, you should be thinking about them — I collaborated with HBR to answer eight questions posed by readers on LinkedIn.

[ 1 ]
How should I prepare to introduce AI at my organization?

To start, it’s important to recognize that the optimal way to work with AI is different from the way we’ve worked with other new technologies. In the past, most new tools simply enabled us to perform tasks more efficiently. People wrote with pens, then typewriters (which were faster), then computers (which were even faster). Each new tool allowed for more-efficient writing, but the general processes (drafting, revising, editing) remained largely the same.

AI is different. It has a more substantial influence on our work and our processes because it’s able to find patterns that we can’t see and then use them to provide insights and analysis, predictions, suggestions, and even full drafts all on its own. So instead of thinking of AI as the tools we use, we should think of it as a set of systems with which we can collaborate.

To effectively collaborate with AI at your organization, focus on three things:
First, ensure that everyone has a basic understanding of how digital systems work.

A digital mindset is a collection of attitudes and behaviors that help you to see new possibilities using data, technology, algorithms, and AI. You don’t have to become a programmer or a data scientist; you simply need to take a new and proactive approach to collaboration (learning to work across platforms), computation (asking and answering the right questions), and change (accepting that it is the only constant). Everyone in your organization should be working toward at least 30% fluency in a handful of topics, such as systems’ architecture, AI, machine learning, algorithms, AI agents as teammates, cybersecurity, and data-driven experimentation.
Second, make sure your organization is prepared for continuous adaptation and change.

Bringing in new AI requires employees to get used to processing new streams of data and content, analyzing them, and using their findings and outputs to develop a new perspective. Likewise, to use data and technology most efficiently, organizations need an integrated organizational structure. Your company needs to become less siloed and should build a centralized repository of knowledge and data to enable constant sharing and collaboration. Competing with AI not only requires incorporating today’s technologies but also being mentally and structurally prepared to adapt to future advancements. For example, individuals have begun incorporating generative AI (such as ChatGPT) into their daily routines, regardless of whether companies are prepared or willing to embrace its use.
Third, build AI into your operating model.

As my colleagues Marco Iansiti and Karim R. Lakhani have shown, the structure of an organization mirrors the architecture of the technological systems within it, and vice versa. If tech systems are static, your organization will be static. But if they’re flexible, your organization will be flexible. This strategy played out successfully at Amazon. The company was having trouble sustaining its growth and its software infrastructure was “cracking under pressure,” according to Iansiti and Lakhani. So Jeff Bezos wrote a memo to employees announcing that all teams should route their data through “application programming interfaces” (APIs), which allow various types of software to communicate and share data using set protocols. Anyone who didn’t would be fired. This was an attempt to break the inertia within Amazon’s tech systems — and it worked, dismantling data siloes, increasing collaboration, and helping to build the software- and data-driven operating model we see today. While you may not want to resort to a similar ultimatum, you should think about how the introduction of AI can — and should — change your operations for the better.
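The "set protocols" idea behind the API mandate can be sketched in a few lines. This is a hypothetical illustration (the order data and the `get_order` function are invented for the example), not Amazon's actual design:

```python
import json

# Hypothetical sketch: instead of reading another team's database directly,
# a consumer calls a small, stable interface the owning team publishes.
# The "set protocol" is a documented request/response shape, here plain
# dicts serialized as JSON, as they would be over HTTP in a real service.

_ORDERS = {  # the owning team's private data store
    "o-1001": {"customer": "c-42", "total_cents": 1999},
}

def get_order(order_id: str) -> str:
    """Public API: returns a JSON document, never internal objects."""
    order = _ORDERS.get(order_id)
    if order is None:
        return json.dumps({"error": "not_found"})
    return json.dumps({"order_id": order_id, **order})

# Any team (in any language) can consume the JSON without knowing
# how orders are stored internally.
print(get_order("o-1001"))
```

Because consumers depend only on the published JSON shape, the owning team can change its internal storage freely, which is exactly the decoupling that broke Amazon's data silos.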

[ 2 ]
How can we ensure transparency in how AI makes decisions?

Leaders need to recognize that it is not always possible to know how AI systems are making decisions. Some of the very characteristics that allow AI to quickly process huge amounts of data and perform certain tasks more accurately or efficiently than humans can also make it a black box: We can’t see how the output was produced. However, we can all play a role in increasing transparency and accountability in AI decision-making processes in two ways:
Recognize that AI is invisible and inscrutable and be transparent in presenting and using AI systems.

Callen Anthony, Beth A. Bechky, and Anne-Laure Fayard identify invisibility and inscrutability as core characteristics that differentiate AI from prior technologies. It’s invisible because it often runs in the background of other technologies or platforms without users being aware of it; for every Siri or Alexa that people understand to be AI, there are many technologies, such as antilock brakes, that contain unseen AI systems. It’s inscrutable because, even for AI developers, it’s often impossible to understand how a model reaches an outcome, or even identify all the data points it’s using to get there — good, bad, or otherwise.

As AIs rely on progressively larger datasets, this becomes increasingly true. Consider large language models (LLMs) such as OpenAI’s ChatGPT or Microsoft’s Bing. They are trained on massive datasets of books, webpages, and documents scraped from across the internet — the GPT-3 model behind ChatGPT has 175 billion parameters and was built to predict the likelihood that something will occur (a character, word, or string of words, or even an image or tonal shift in your voice) based on either its preceding or surrounding context. The autocorrect feature on your phone is an example of the accuracy — and inaccuracy — of such predictions. But it’s not just the size of the training data: Many AI algorithms are also self-learning; they keep refining their predictive powers as they get more data and user feedback, adding new parameters along the way.
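As a toy illustration of predicting "the likelihood that something will occur" from preceding context, here is a word-level bigram predictor. Real LLMs use neural networks with billions of parameters rather than raw counts, but the prediction task has the same shape:

```python
from collections import Counter, defaultdict

# Toy illustration (not how GPT works internally): predict the next word
# from counts of which word followed which in a tiny "training corpus".

corpus = "the cat sat on the mat and the cat slept".split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most likely next word given the preceding word."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else "<unknown>"

print(predict_next("the"))  # "cat" follows "the" twice, "mat" only once
```

Even this trivial model shows the inscrutability problem in miniature: the output is just a statistical echo of the training data, and nothing in the prediction itself reveals which data produced it.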

AIs often have broad capabilities because of invisibility and inscrutability — their ability to work in the background and find patterns beyond our grasp. Currently, there is no way to peer into the inner workings of an AI tool and guarantee that the system is producing accurate or fair output. We must acknowledge that some opacity is a cost of using these powerful systems. As a consequence, leaders should exercise careful judgment in determining when and how it’s appropriate to use AI, and they should document when and how AI is being used. That way people will know that an AI-driven decision was appraised with an appropriate level of skepticism, including its potential risks or shortcomings.
Prioritize explanation as a central design goal.

The research brief “Artificial Intelligence and the Future of Work,” by MIT scientists, notes that AI models can become more transparent through practices like highlighting specific areas in data that contribute to AI output, building models that are more interpretable, and developing algorithms that can be used to probe how a different model works. Similarly, leading AI computer scientist Timnit Gebru and her colleagues Emily Bender, Angelina McMillan-Major, and Margaret Mitchell (credited as “Shmargaret Shmitchell”) argue that practices like premortem analyses that prompt developers to consider both project risks and potential alternatives to current plans can increase transparency in future technologies. Echoing this point, in March of 2023, prominent tech entrepreneurs Steve Wozniak and Elon Musk, along with employees of Google and Microsoft, signed a letter advocating for AI development to be more transparent and interpretable.

[ 3 ]
How can we erect guardrails around LLMs so that their responses are true and consistent with the brand image we want to project?

LLMs come with several serious risks. They can:

  • perpetuate harmful bias by deploying negative stereotypes or minimizing minority viewpoints
  • spread misinformation by repeating falsehoods or making up facts and citations
  • violate privacy by using data without people’s consent
  • cause security breaches if they are used to generate phishing emails or other cyberattacks
  • harm the environment because of the significant computational resources required to train and run these tools

Data curation and documentation are two ways to curtail those risks and ensure that LLMs will give responses that are more consistent with, not harmful to, your brand image.
Tailor data for appropriate outputs.

LLMs are often developed using internet-based data containing billions of words. However, common sources of this data, like Reddit and Wikipedia, lack sufficient mechanisms for checking accuracy, fairness, or appropriateness. Consider which perspectives are represented on these sites and which are left out. For example, 67% of Reddit’s contributors are male. And on Wikipedia, 84% of contributors are male, with little representation from marginalized populations.

If you instead build an LLM around more-carefully vetted sources, you reduce the risk of inappropriate or harmful responses. Bender and colleagues recommend curating training datasets “through a thoughtful process of deciding what to put in, rather than aiming solely for scale and trying haphazardly to weed out…‘dangerous’, ‘unintelligible’, or ‘otherwise bad’ [data].” While this might take more time and resources, it exemplifies the adage that an ounce of prevention is worth a pound of cure.
Document data.

There will surely be organizations that want to leverage LLMs but lack the resources to train a model with a curated dataset. In situations like this, documentation is crucial because it enables companies to get context from a nonproprietary model’s developers on which datasets it uses and the biases they may contain, as well as guidance on how software built on the model might be appropriately deployed. This practice is analogous to the standardized information used in medicine to indicate which studies have been used in making health care recommendations.

AI developers should prioritize documentation to allow for safe and transparent use of their models. And people or organizations experimenting with a model must look for this documentation to understand its risks and whether it aligns with their desired brand image.
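A machine-readable sketch of such documentation might look like the following; the field names and values are invented for illustration, loosely in the spirit of model cards and datasheets for datasets:

```python
# Hypothetical documentation record for a model built on a nonproprietary
# LLM; every name and value here is invented for the example.

model_card = {
    "model": "acme-support-bot-v2",
    "base_model": "third-party LLM (nonproprietary)",
    "training_data_sources": ["web crawl", "public forums"],
    "known_biases": [
        "contributor base skews male",
        "low representation of non-English text",
    ],
    "intended_uses": ["drafting support replies with human review"],
    "out_of_scope_uses": ["medical or legal advice", "unreviewed publishing"],
}

def check_deployment(card: dict, use_case: str) -> bool:
    """Refuse any deployment whose use case the documentation rules out."""
    return use_case not in card["out_of_scope_uses"]

assert check_deployment(model_card, "drafting support replies with human review")
assert not check_deployment(model_card, "medical or legal advice")
```

The point is not the format but the habit: documented sources, known biases, and out-of-scope uses travel with the model, so downstream adopters can check alignment with their brand before deploying.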

[ 4 ]
How can we ensure that the dataset we use to train AI models is representative and doesn’t include harmful biases?

Sanitizing datasets is a challenge that your organization can help overcome by prioritizing transparency and fairness over model size and by representing diverse populations in data curation.

First, consider the trade-offs you make. Tech companies have been pursuing larger AI systems because they tend to be more effective at certain tasks, like sustaining human-seeming conversations. However, if a model is too large to fully understand, it’s impossible to rid it of potential biases. To fully combat harmful bias, developers must be able to understand and document the risks inherent to a dataset, which might mean using a smaller one.

Second, if diverse teams, including members of underrepresented populations, collect and produce the data used to train models, then you’ll have a better chance of ensuring that people with a variety of perspectives and identities are represented in them. This practice also helps to identify unrecognized biases or blinders in the data.

AI will only be trustworthy once it works equitably, and that will only happen if we prioritize diversifying data and development teams and clearly document how AI has been designed for fairness.

[ 5 ]
What are the potential risks of data privacy violations with AI?

AI that uses sensitive employee and customer data is vulnerable to bad actors. To combat these risks, organizations should learn as much as they can about how their AI has been developed and then decide whether it’s appropriate to use secure data with it. They should also keep tech systems updated and earmark budget resources to keep the software secure. This requires continuous action, as a small vulnerability can leave an entire organization open to breaches.

Blockchain innovations can help on this front. A blockchain is a secure, distributed ledger that records data transactions, and it’s currently being used for applications like creating payment systems (not to mention cryptocurrencies).
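The tamper-evidence behind such a ledger can be sketched as a hash chain, in which each record commits to the one before it; real blockchains add distribution and consensus on top of this. A minimal sketch:

```python
import hashlib
import json

# Minimal sketch of blockchain-style tamper-evidence: each record stores a
# hash of the previous record, so altering any earlier entry breaks every
# hash that follows it.

def block_hash(block: dict) -> str:
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append(ledger: list, payment: dict) -> None:
    prev = block_hash(ledger[-1]) if ledger else "0" * 64
    ledger.append({"prev_hash": prev, "payment": payment})

def is_valid(ledger: list) -> bool:
    return all(
        ledger[i]["prev_hash"] == block_hash(ledger[i - 1])
        for i in range(1, len(ledger))
    )

ledger: list = []
append(ledger, {"from": "alice", "to": "bob", "cents": 500})
append(ledger, {"from": "bob", "to": "carol", "cents": 250})
assert is_valid(ledger)

ledger[0]["payment"]["cents"] = 999  # tampering with history...
assert not is_valid(ledger)          # ...is immediately detectable
```

This is why such ledgers suit payment records: a breach that silently rewrites past transactions becomes detectable by anyone holding a copy of the chain.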

When it comes to your operations more broadly, consider this privacy by design (PbD) framework from former Information and Privacy Commissioner of Ontario Ann Cavoukian, which recommends that organizations embrace seven foundational principles:

  1. Be proactive, not reactive — preventative, not remedial.
  2. Lead with privacy as the default setting.
  3. Embed privacy into design.
  4. Retain full functionality, including privacy and security.
  5. Ensure end-to-end security.
  6. Maintain visibility and transparency.
  7. Respect user privacy — keep systems user-centric.

Incorporating PbD principles into your operation requires more than hiring privacy personnel or creating a privacy division. All the people in your organization need to be attuned to customer and employee concerns about these issues. Privacy isn’t an afterthought; it needs to be at the core of digital operations, and everyone needs to work to protect it.
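The second PbD principle, privacy as the default setting, translates directly into code: a new account should share nothing until the user explicitly opts in. A hypothetical sketch (the settings and values are invented for illustration):

```python
from dataclasses import dataclass

# Hypothetical sketch of "privacy as the default setting": every
# data-sharing option starts off, and retention starts at a minimum.

@dataclass
class AccountPrivacy:
    share_usage_analytics: bool = False   # off unless the user opts in
    personalized_ads: bool = False
    profile_visible_publicly: bool = False
    data_retention_days: int = 30         # keep the minimum, not "forever"

settings = AccountPrivacy()               # a brand-new account
assert not settings.share_usage_analytics # defaults protect the user

settings.personalized_ads = True          # opting in is an explicit act
```

Encoding the defaults in the type itself means no code path can accidentally create an account that over-shares, which is what "embed privacy into design" asks for.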

[ 6 ]
How can we encourage employees to use AI for productivity purposes and not simply to take shortcuts?

Even with the advent of LLMs, AI technology is not yet capable of performing the dizzying range of tasks that humans can, and there are many things that it does worse than the average person. Using each new tool effectively requires understanding its purpose.

For example, think about ChatGPT. By learning about language patterns, it has become so good at predicting which words are supposed to follow others that it can produce seemingly sophisticated text responses to complicated questions. However, there’s a limit to the quality of these outputs because being good at guessing plausible combinations of words and phrases is different from understanding the material. So ChatGPT can produce a poem in the style of Shakespeare because it has learned the particular patterns of his plays and poems, but it cannot produce the original insight into the human condition that informs his work.

By contrast, AI can be better and more efficient than humans at making predictions because it can process much larger amounts of data much more quickly. Examples include predicting early dementia from speech patterns, detecting cancerous tumors invisible to the human eye, and planning safer routes through battlefields.

Employees should therefore be encouraged to evaluate whether AI’s strengths match up to a task and proceed accordingly. If you need to process a lot of information quickly, it can do that. If you need a bunch of new ideas, it can generate them. Even if you need to make a difficult decision, it can offer advice, provided it’s been trained on relevant data.

But you shouldn’t use AI to create meaningful work products without human oversight. If you need to write a large number of documents with very similar content, AI may be a useful generator of what has long been referred to as boilerplate material. Be aware that its outputs are derived from its datasets and algorithms, and they aren’t necessarily good or accurate.

[ 7 ]
How worried should we be that AI will replace jobs?

Every technological revolution has created more jobs than it has destroyed. Automobiles put horse-and-buggy drivers out of business but led to new jobs building and fixing cars, running gas stations, and more. The novelty of AI technologies makes it easy to fear they will replace humans in the workforce. But we should instead view them as ways to augment human performance. For example, companies like Collective[i] have developed AI systems that analyze data to produce highly accurate sales forecasts quickly; traditionally, this work took people days or weeks to pull together. But no salespeople are losing their jobs. Rather, they’ve got more time to focus on more important parts of their work: building relationships, managing, and actually selling.

Similarly, services like OpenAI’s Codex can autogenerate programming code for basic purposes. This doesn’t replace programmers; it allows them to write code more efficiently and automate repetitive tasks like testing so that they can work on higher-level issues such as systems architecture, domain modeling, and user experience.

The long-term effects on jobs are complex and uneven, and there can be periods of job destruction and displacement in certain industries or regions. To ensure that the benefits of technological progress are widely shared, it is crucial to invest in education and workforce development to help people adapt to the new job market.

Individuals and organizations should focus on upskilling and reskilling to prepare to make the most of new technologies. AI and robots aren’t replacing humans anytime soon. The more likely reality is that people with digital mindsets will replace those without them.

[ 8 ]
How can my organization ensure that the AI we develop or use won’t harm individuals or groups or violate human rights?

The harms of AI bias have been widely documented. In their seminal 2018 paper “Gender Shades,” Joy Buolamwini and Timnit Gebru showed that popular facial recognition technologies offered by companies like IBM and Microsoft were nearly perfect at identifying white, male faces but misidentified Black female faces as much as 35% of the time. Facial recognition can be used to unlock your phone, but is also used to monitor patrons at Madison Square Garden, surveil protesters, and identify suspects in police investigations — and misidentification has led to wrongful arrests that can derail people’s lives. As AI grows in power and becomes more integrated into our daily lives, its potential for harm grows exponentially, too. Here are strategies to safeguard AI.
Slow down and document AI development.

Preventing AI harm requires shifting our focus from the rapid development and deployment of increasingly powerful AI to ensuring that AI is safe before release.

Transparency is also key. Earlier in this article, I explained how clear descriptions of the datasets used in AI, and of the potential biases within them, help to reduce harm. When algorithms are openly shared, organizations and individuals can better analyze and understand the potential risks of new tools before using them.

Establish and protect AI ethics watchdogs.

The question of who will ensure safe and responsible AI is currently unanswered. Google, for example, employs an ethical-AI team, but in 2020 the company fired Gebru after she sought to publish a paper warning of the risks of building ever-larger language models. Her exit from Google raised the question of whether tech developers are able, or incentivized, to act as ombudsmen for their own technologies and organizations. More recently, an entire team at Microsoft focused on ethics was laid off. But many in the industry recognize the risks, and as noted earlier, even tech icons have called for policymakers working with technologists to create regulatory systems to govern AI development.

Whether it comes from government, the tech industry, or another independent system, the establishment and protection of watchdogs is crucial to protecting against AI harm.

Watch where regulation is headed.

Even as the AI landscape changes, governments are trying to regulate it. In the United States, 21 AI-related bills were passed into law last year. Notable acts include an Alabama provision outlining guidelines for using facial recognition technology in criminal proceedings and legislation that created a Vermont Division of Artificial Intelligence to review all AI used by the state government and to propose a state AI code of ethics. More recently, the U.S. federal government moved to enact executive actions on AI, which will be vetted over time.

The European Union is also considering legislation — the Artificial Intelligence Act — that includes a classification system determining the level of risk AI could pose to the health and safety or the fundamental rights of a person. Italy has temporarily banned ChatGPT. The African Union has established a working group on AI, and the African Commission on Human and Peoples’ Rights adopted a resolution to address implications for human rights of AI, robotics, and other new and emerging technologies in Africa.

China passed a data protection law in 2021 that established user consent rules for data collection and recently passed a unique policy regulating “deep synthesis technologies” that are used for so-called “deep fakes.” The British government released an approach that applies existing regulatory guidelines to new AI technology.

* * * * * *

Billions of people around the world are discovering the promise of AI through their experiments with ChatGPT, Bing, Midjourney, and other new tools. Every company will have to confront questions about how these emerging technologies will apply to them and their industries. For some it will mean a significant pivot in their operating models; for others, an opportunity to scale and broaden their offerings. But all must assess their readiness to deploy AI responsibly without perpetuating harm to their stakeholders and the world at large.

About the Author
Tsedal Neeley is the Naylor Fitzhugh Professor of Business Administration and senior associate dean of faculty and research at Harvard Business School. She is the coauthor of the book The Digital Mindset: What It Really Takes to Thrive in the Age of Data, Algorithms, and AI and the author of the book Remote Work Revolution: Succeeding from Anywhere.
