Quotes & Sayings


We, and creation itself, actualize the possibilities of the God who sustains the world, towards becoming in the world in a fuller, deeper way. - R.E. Slater

There is urgency in coming to see the world as a web of interrelated processes of which we are integral parts, so that all of our choices and actions have [consequential effects upon] the world around us. - Process Metaphysician Alfred North Whitehead

Kurt Gödel's Incompleteness Theorem says that (i) all closed systems are unprovable within themselves and (ii) all open systems are rightly understood as incomplete. - R.E. Slater

The most true thing about you is what God has said to you in Christ, "You are My Beloved." - Tripp Fuller

The God among us is the God who refuses to be God without us, so great is God's Love. - Tripp Fuller

According to some Christian outlooks we were made for another world. Perhaps, rather, we were made for this world to recreate, reclaim, redeem, and renew unto God's future aspiration by the power of His Spirit. - R.E. Slater

Our eschatological ethos is to love. To stand with those who are oppressed. To stand against those who are oppressing. It is that simple. Love is our only calling and Christian Hope. - R.E. Slater

Secularization theory has been massively falsified. We don't live in an age of secularity. We live in an age of explosive, pervasive religiosity... an age of religious pluralism. - Peter L. Berger

Exploring the edge of life and faith in a post-everything world. - Todd Littleton

I don't need another reason to believe, your love is all around for me to see. – Anon

Thou art our need; and in giving us more of thyself thou givest us all. - Khalil Gibran, Prayer XXIII

Be careful what you pretend to be. You become what you pretend to be. - Kurt Vonnegut

Religious beliefs, far from being primary, are often shaped and adjusted by our social goals. - Jim Forest

We become who we are by what we believe and can justify. - R.E. Slater

People, even more than things, need to be restored, renewed, revived, reclaimed, and redeemed; never throw out anyone. – Anon

Certainly, God's love has made fools of us all. - R.E. Slater

An apocalyptic Christian faith doesn't wait for Jesus to come, but for Jesus to become in our midst. - R.E. Slater

Christian belief in God begins with the cross and resurrection of Jesus, not with rational apologetics. - Eberhard Jüngel, Jürgen Moltmann

Our knowledge of God is through the 'I-Thou' encounter, not in finding God at the end of a syllogism or argument. There is a grave danger in any Christian treatment of God as an object. The God of Jesus Christ and Scripture is irreducibly subject and never made as an object, a force, a power, or a principle that can be manipulated. - Emil Brunner

“Ehyeh Asher Ehyeh” means "I will be that who I have yet to become." - God (Ex 3.14) or, conversely, “I AM who I AM Becoming.”

Our job is to love others without stopping to inquire whether or not they are worthy. - Thomas Merton

The church is God's world-changing social experiment of bringing unlikes and differents to the Eucharist/Communion table to share life with one another as a new kind of family. When this happens, we show to the world what love, justice, peace, reconciliation, and life together is designed by God to be. The church is God's show-and-tell for the world to see how God wants us to live as a blended, global, polypluralistic family united with one will, by one Lord, and baptized by one Spirit. – Anon

The cross that is planted at the heart of the history of the world cannot be uprooted. - Jacques Ellul

The Unity in whose loving presence the universe unfolds is inside each person as a call to welcome the stranger, protect animals and the earth, respect the dignity of each person, think new thoughts, and help bring about ecological civilizations. - John Cobb & Farhan A. Shah

If you board the wrong train it is of no use running along the corridors of the train in the other direction. - Dietrich Bonhoeffer

God's justice is restorative rather than punitive; His discipline is merciful rather than punishing; His power is made perfect in weakness; and His grace is sufficient for all. – Anon

Our little [biblical] systems have their day; they have their day and cease to be. They are but broken lights of Thee, and Thou, O God art more than they. - Alfred Lord Tennyson

We can’t control God; God is uncontrollable. God can’t control us; God’s love is uncontrolling! - Thomas Jay Oord

Life in perspective but always in process... as we are relational beings in process to one another, so life events are in process in relation to each event... as God is to Self, is to world, is to us... like Father, like sons and daughters, like events... life in process yet always in perspective. - R.E. Slater

To promote societal transition to sustainable ways of living and a global society founded on a shared ethical framework which includes respect and care for the community of life, ecological integrity, universal human rights, respect for diversity, economic justice, democracy, and a culture of peace. - The Earth Charter Mission Statement

Christian humanism is the belief that human freedom, individual conscience, and unencumbered rational inquiry are compatible with the practice of Christianity or even intrinsic in its doctrine. It represents a philosophical union of Christian faith and classical humanist principles. - Scott Postma

It is never wise to have a self-appointed religious institution determine a nation's moral code. The opportunities for moral compromise and failure are high; the moral codes and creeds assuredly racist, discriminatory, or subjectively and religiously defined; and the pronouncement of inhumanitarian political objectives quite predictable. - R.E. Slater

God's love must both center and define the Christian faith and all religious or human faiths seeking human and ecological balance in worlds of subtraction, harm, tragedy, and evil. - R.E. Slater

In Whitehead’s process ontology, we can think of the experiential ground of reality as an eternal pulse whereby what is objectively public in one moment becomes subjectively prehended in the next, and whereby the subject that emerges from its feelings then perishes into public expression as an object (or “superject”) aiming for novelty. There is a rhythm of Being between object and subject, not an ontological division. This rhythm powers the creative growth of the universe from one occasion of experience to the next. This is the Whiteheadian mantra: “The many become one and are increased by one.” - Matthew Segall

Without Love there is no Truth. And True Truth is always Loving. There is no dichotomy between these terms but only seamless integration. This is the premier centering focus of a Processual Theology of Love. - R.E. Slater

-----

Note: Generally I do not respond to commentary. I may read the comments but wish to reserve my time to write (or write from the comments I read). Instead, I'd like to see our community help one another and in the helping encourage and exhort each of us towards Christian love in Christ Jesus our Lord and Savior. - re slater


Friday, June 14, 2024

8 Questions About Using AI Responsibly, Answered


[Illustration by Señor Salme]



How to avoid pitfalls around data privacy, bias, misinformation, generative AI, and more.

May 09, 2023


Ethics in the Age of AI

Summary

Generative AI tools are poised to change the way every business operates. As your own organization begins strategizing which to use, and how, operational and ethical considerations are inevitable. This article delves into eight of them, including how your organization should prepare to introduce AI responsibly, how you can prevent harmful bias from proliferating in your systems, and how to avoid key privacy risks.

Introduction

While the question of how organizations can (and should) use AI isn’t a new one, the stakes and urgency of finding answers have skyrocketed with the release of ChatGPT, Midjourney, and other generative AI tools. Everywhere, people are wondering:

  • How can we use AI tools to boost performance?
  • Can we trust AI to make consequential decisions?
  • Will AI take away my job?
The power of AI introduced by OpenAI, Microsoft, and Nvidia — and the pressure to compete in the market — make it inevitable that your organization will have to navigate the operational and ethical considerations of machine learning, large language models, and much more. And while many leaders are focused on operational challenges and disruptions, the ethical concerns are at least as pressing, if not more so. Given how regulation lags technological capabilities and how quickly the AI landscape is changing, the burden of ensuring that these tools are used safely and ethically falls to companies.

In my work at the intersection of occupations, technology, and organizations, I’ve examined how leaders can develop digital mindsets and the dangers of biased large language models. I have identified best practices for organizations’ use of technology and amplified consequential issues that help ensure that AI implementations are ethical. To help you better identify how you and your company should be thinking about these issues — and make no mistake, you should be thinking about them — I collaborated with HBR to answer eight questions posed by readers on LinkedIn.

[ 1 ]
How should I prepare to introduce AI at my organization?

To start, it’s important to recognize that the optimal way to work with AI is different from the way we’ve worked with other new technologies. In the past, most new tools simply enabled us to perform tasks more efficiently. People wrote with pens, then typewriters (which were faster), then computers (which were even faster). Each new tool allowed for more-efficient writing, but the general processes (drafting, revising, editing) remained largely the same.

AI is different. It has a more substantial influence on our work and our processes because it’s able to find patterns that we can’t see and then use them to provide insights and analysis, predictions, suggestions, and even full drafts all on its own. So instead of thinking of AI as the tools we use, we should think of it as a set of systems with which we can collaborate.

To effectively collaborate with AI at your organization, focus on three things:
First, ensure that everyone has a basic understanding of how digital systems work.

A digital mindset is a collection of attitudes and behaviors that help you to see new possibilities using data, technology, algorithms, and AI. You don’t have to become a programmer or a data scientist; you simply need to take a new and proactive approach to collaboration (learning to work across platforms), computation (asking and answering the right questions), and change (accepting that it is the only constant). Everyone in your organization should be working toward at least 30% fluency in a handful of topics, such as systems’ architecture, AI, machine learning, algorithms, AI agents as teammates, cybersecurity, and data-driven experimentation.
Second, make sure your organization is prepared for continuous adaptation and change.

Bringing in new AI requires employees to get used to processing new streams of data and content, analyzing them, and using their findings and outputs to develop a new perspective. Likewise, to use data and technology most efficiently, organizations need an integrated organizational structure. Your company needs to become less siloed and should build a centralized repository of knowledge and data to enable constant sharing and collaboration. Competing with AI requires not only incorporating today’s technologies but also being mentally and structurally prepared to adapt to future advancements. For example, individuals have begun incorporating generative AI (such as ChatGPT) into their daily routines, regardless of whether companies are prepared or willing to embrace its use.
Third, build AI into your operating model.

As my colleagues Marco Iansiti and Karim R. Lakhani have shown, the structure of an organization mirrors the architecture of the technological systems within it, and vice versa. If tech systems are static, your organization will be static. But if they’re flexible, your organization will be flexible. This strategy played out successfully at Amazon. The company was having trouble sustaining its growth and its software infrastructure was “cracking under pressure,” according to Iansiti and Lakhani. So Jeff Bezos wrote a memo to employees announcing that all teams should route their data through “application programming interfaces” (APIs), which allow various types of software to communicate and share data using set protocols. Anyone who didn’t would be fired. This was an attempt to break the inertia within Amazon’s tech systems — and it worked, dismantling data siloes, increasing collaboration, and helping to build the software- and data-driven operating model we see today. While you may not want to resort to a similar ultimatum, you should think about how the introduction of AI can — and should — change your operations for the better.
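
To make the API mandate concrete, here is a minimal sketch of the pattern Bezos demanded: a team exposes its data only through a service interface rather than granting direct database access. All names here are invented for illustration; this is not Amazon's actual code or architecture.

```python
# Minimal sketch of "expose data only through an API" (hypothetical names).
# Other teams call this HTTP endpoint instead of reading the team's
# database directly, so the interface, not the storage, is the contract.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

INVENTORY = {"sku-123": {"name": "widget", "in_stock": 42}}  # stand-in datastore

class InventoryAPI(BaseHTTPRequestHandler):
    def do_GET(self):
        item = INVENTORY.get(self.path.strip("/"))   # e.g. GET /sku-123
        self.send_response(200 if item else 404)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(json.dumps(item or {"error": "not found"}).encode())

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), InventoryAPI).serve_forever()
```

Because consumers depend only on the protocol, the team behind the API can change its internal storage without breaking anyone else, which is what dismantles data silos.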

[ 2 ]
How can we ensure transparency in how AI makes decisions?

Leaders need to recognize that it is not always possible to know how AI systems are making decisions. Some of the very characteristics that allow AI to quickly process huge amounts of data and perform certain tasks more accurately or efficiently than humans can also make it a black box: We can’t see how the output was produced. However, we can all play a role in increasing transparency and accountability in AI decision-making processes in two ways:
Recognize that AI is invisible and inscrutable and be transparent in presenting and using AI systems.

Callen Anthony, Beth A. Bechky, and Anne-Laure Fayard identify invisibility and inscrutability as core characteristics that differentiate AI from prior technologies. It’s invisible because it often runs in the background of other technologies or platforms without users being aware of it; for every Siri or Alexa that people understand to be AI, there are many technologies, such as antilock brakes, that contain unseen AI systems. It’s inscrutable because, even for AI developers, it’s often impossible to understand how a model reaches an outcome, or even identify all the data points it’s using to get there — good, bad, or otherwise.

As AIs rely on progressively larger datasets, this becomes increasingly true. Consider large language models (LLMs) such as OpenAI’s ChatGPT or Microsoft’s Bing. They are trained on massive datasets of books, webpages, and documents scraped from across the internet — OpenAI’s LLM was trained using 175 billion parameters and was built to predict the likelihood that something will occur (a character, word, or string of words, or even an image or tonal shift in your voice) based on either its preceding or surrounding context. The autocorrect feature on your phone is an example of the accuracy — and inaccuracy — of such predictions. But it’s not just the size of the training data: Many AI algorithms are also self-learning; they keep refining their predictive powers as they get more data and user feedback, adding new parameters along the way.
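
At toy scale, that predict-from-context mechanic looks like the sketch below: count which words follow which in a corpus, then rank continuations by estimated probability. Real LLMs learn such distributions with neural networks over vastly longer contexts; this only illustrates the principle, not how they are built.

```python
# Toy next-word predictor: counts bigrams in a tiny corpus and estimates
# the probability of each continuation, roughly what simple autocorrect does.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word):
    follows = bigrams.get(word)
    if not follows:
        return None
    total = sum(follows.values())
    # Each candidate word with its estimated probability, most likely first.
    return [(w, n / total) for w, n in follows.most_common()]

print(predict_next("the"))  # [('cat', 0.5), ('mat', 0.25), ('fish', 0.25)]
```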

AIs often have broad capabilities because of invisibility and inscrutability — their ability to work in the background and find patterns beyond our grasp. Currently, there is no way to peer into the inner workings of an AI tool and guarantee that the system is producing accurate or fair output. We must acknowledge that some opacity is a cost of using these powerful systems. As a consequence, leaders should exercise careful judgment in determining when and how it’s appropriate to use AI, and they should document when and how AI is being used. That way people will know that an AI-driven decision was appraised with an appropriate level of skepticism, including its potential risks or shortcomings.
Prioritize explanation as a central design goal.

The research brief “Artificial Intelligence and the Future of Work,” by MIT scientists, notes that AI models can become more transparent through practices like highlighting specific areas in data that contribute to AI output, building models that are more interpretable, and developing algorithms that can be used to probe how a different model works. Similarly, leading AI computer scientist Timnit Gebru and her colleagues Emily Bender, Angelina McMillan-Major, and Margaret Mitchell (credited as “Shmargaret Shmitchell”) argue that practices like premortem analyses that prompt developers to consider both project risks and potential alternatives to current plans can increase transparency in future technologies. Echoing this point, in March of 2023, prominent tech entrepreneurs Steve Wozniak and Elon Musk, along with employees of Google and Microsoft, signed a letter advocating for AI development to be more transparent and interpretable.
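
One generic version of "highlighting specific areas in data that contribute to AI output" is permutation importance: scramble one input feature and measure how much the model's accuracy drops. This sketch uses invented data and a stand-in model; it illustrates the probing idea, not the MIT brief's specific methods.

```python
# Permutation importance: a model-agnostic probe of which input
# features a model's predictions actually depend on.
import numpy as np

rng = np.random.default_rng(0)

# Invented data: the label depends strongly on feature 0, weakly on 1, not at all on 2.
X = rng.normal(size=(500, 3))
y = (X[:, 0] + 0.1 * X[:, 1] > 0).astype(int)

def model(X):  # stand-in for any trained black-box classifier
    return (X[:, 0] + 0.1 * X[:, 1] > 0).astype(int)

baseline = (model(X) == y).mean()

for j in range(X.shape[1]):
    Xp = X.copy()
    rng.shuffle(Xp[:, j])  # destroy feature j's relationship to the label
    print(f"feature {j}: accuracy drop {baseline - (model(Xp) == y).mean():.3f}")
```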

[ 3 ]
How can we erect guardrails around LLMs so that their responses are true and consistent with the brand image we want to project?

LLMs come with several serious risks. They can:

  • perpetuate harmful bias by deploying negative stereotypes or minimizing minority viewpoints
  • spread misinformation by repeating falsehoods or making up facts and citations
  • violate privacy by using data without people’s consent
  • cause security breaches if they are used to generate phishing emails or other cyberattacks
  • harm the environment because of the significant computational resources required to train and run these tools

Data curation and documentation are two ways to curtail those risks and ensure that LLMs will give responses that are more consistent with, not harmful to, your brand image.
Tailor data for appropriate outputs.

LLMs are often developed using internet-based data containing billions of words. However, common sources of this data, like Reddit and Wikipedia, lack sufficient mechanisms for checking accuracy, fairness, or appropriateness. Consider which perspectives are represented on these sites and which are left out. For example, 67% of Reddit’s contributors are male. And on Wikipedia, 84% of contributors are male, with little representation from marginalized populations.

If you instead build an LLM around more-carefully vetted sources, you reduce the risk of inappropriate or harmful responses. Bender and colleagues recommend curating training datasets “through a thoughtful process of deciding what to put in, rather than aiming solely for scale and trying haphazardly to weed out…‘dangerous’, ‘unintelligible’, or ‘otherwise bad’ [data].” While this might take more time and resources, it exemplifies the adage that an ounce of prevention is worth a pound of cure.
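
In miniature, "deciding what to put in" can be as simple as an allowlist of vetted sources plus explicit quality rules applied before anything enters the training set. Source names and thresholds below are invented for illustration:

```python
# Hypothetical curation pass: keep only documents from vetted sources
# and apply simple quality filters before they enter a training set.
VETTED_SOURCES = {"peer_reviewed_journal", "licensed_news_archive", "internal_docs"}

def keep(doc):
    return (
        doc["source"] in VETTED_SOURCES
        and len(doc["text"].split()) >= 50        # drop tiny fragments
        and not doc.get("flagged_by_reviewer")    # human review wins
    )

raw_corpus = [
    {"source": "licensed_news_archive", "text": "word " * 200, "flagged_by_reviewer": False},
    {"source": "anonymous_forum", "text": "word " * 200},
]

curated = [d for d in raw_corpus if keep(d)]
print(len(curated))  # 1 — the unvetted forum document is excluded
```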
Document data.

There will surely be organizations that want to leverage LLMs but lack the resources to train a model with a curated dataset. In situations like this, documentation is crucial because it enables companies to get context from a nonproprietary model’s developers on which datasets it uses and the biases they may contain, as well as guidance on how software built on the model might be appropriately deployed. This practice is analogous to the standardized information used in medicine to indicate which studies have been used in making health care recommendations.

AI developers should prioritize documentation to allow for safe and transparent use of their models. And people or organizations experimenting with a model must look for this documentation to understand its risks and whether it aligns with their desired brand image.
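
Such documentation is often structured as a "datasheet" or "model card." The sketch below shows, with invented values, the kind of fields a deploying organization would look for and one simple check it might run against them:

```python
# Hypothetical "model card" record: the documentation a deploying
# organization would look for before adopting a third-party model.
model_card = {
    "model_name": "example-llm-7b",            # invented name
    "training_data": ["web crawl 2022", "licensed book corpus"],
    "known_biases": [
        "underrepresents non-English text",
        "contributor base skews male (cf. the Reddit/Wikipedia statistics above)",
    ],
    "intended_uses": ["drafting", "summarization"],
    "out_of_scope_uses": ["medical or legal advice", "automated hiring decisions"],
    "evaluation": {"toxicity_benchmark": "see appendix", "last_reviewed": "2023-05"},
}

def fits_use_case(card, use):
    # A deployer's first sanity check: is our use intended, and not out of scope?
    return use in card["intended_uses"] and use not in card["out_of_scope_uses"]

print(fits_use_case(model_card, "summarization"))  # True
```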

[ 4 ]
How can we ensure that the dataset we use to train AI models is representative and doesn’t include harmful biases?

Sanitizing datasets is a challenge that your organization can help overcome by prioritizing transparency and fairness over model size and by representing diverse populations in data curation.

First, consider the trade-offs you make. Tech companies have been pursuing larger AI systems because they tend to be more effective at certain tasks, like sustaining human-seeming conversations. However, if a model is too large to fully understand, it’s impossible to rid it of potential biases. To fully combat harmful bias, developers must be able to understand and document the risks inherent to a dataset, which might mean using a smaller one.

Second, if diverse teams, including members of underrepresented populations, collect and produce the data used to train models, then you’ll have a better chance of ensuring that people with a variety of perspectives and identities are represented in them. This practice also helps to identify unrecognized biases or blinders in the data.

AI will only be trustworthy once it works equitably, and that will only happen if we prioritize diversifying data and development teams and clearly document how AI has been designed for fairness.

[ 5 ]
What are the potential risks of data privacy violations with AI?

AI that uses sensitive employee and customer data is vulnerable to bad actors. To combat these risks, organizations should learn as much as they can about how their AI has been developed and then decide whether it’s appropriate to use secure data with it. They should also keep tech systems updated and earmark budget resources to keep the software secure. This requires continuous action, as a small vulnerability can leave an entire organization open to breaches.

Blockchain innovations can help on this front. A blockchain is a secure, distributed ledger that records data transactions, and it’s currently being used for applications like creating payment systems (not to mention cryptocurrencies).
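
The property that makes a blockchain attractive here, tamper evidence, fits in a few lines: each record commits to a hash of the previous record, so editing history breaks the chain. This is a bare-bones, single-machine sketch, not a distributed ledger:

```python
# Minimal hash chain: each block commits to the previous block's hash,
# so any retroactive edit to recorded data is detectable.
import hashlib, json

def make_block(data, prev_hash):
    body = json.dumps({"data": data, "prev": prev_hash}, sort_keys=True)
    return {"data": data, "prev": prev_hash,
            "hash": hashlib.sha256(body.encode()).hexdigest()}

chain = [make_block("genesis", "0" * 64)]
for tx in ["alice pays bob 5", "bob pays carol 2"]:
    chain.append(make_block(tx, chain[-1]["hash"]))

def verify(chain):
    for prev, block in zip(chain, chain[1:]):
        body = json.dumps({"data": block["data"], "prev": block["prev"]}, sort_keys=True)
        if block["prev"] != prev["hash"] or \
           block["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
    return True

print(verify(chain))                       # True
chain[1]["data"] = "alice pays bob 500"    # tamper with history
print(verify(chain))                       # False
```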

When it comes to your operations more broadly, consider this privacy by design (PbD) framework from former Information and Privacy Commissioner of Ontario Ann Cavoukian, which recommends that organizations embrace seven foundational principles:

  1. Be proactive, not reactive — preventative, not remedial.
  2. Lead with privacy as the default setting.
  3. Embed privacy into design.
  4. Retain full functionality, including privacy and security.
  5. Ensure end-to-end security.
  6. Maintain visibility and transparency.
  7. Respect user privacy — keep systems user-centric.

Incorporating PbD principles into your operation requires more than hiring privacy personnel or creating a privacy division. All the people in your organization need to be attuned to customer and employee concerns about these issues. Privacy isn’t an afterthought; it needs to be at the core of digital operations, and everyone needs to work to protect it.
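
Principle 2, privacy as the default setting, has a direct engineering analogue: every privacy-relevant option ships in its most protective state, and sharing is opt-in. A hypothetical settings object makes the point:

```python
# Hypothetical account settings illustrating "privacy as the default":
# the most protective value is what users get without doing anything.
from dataclasses import dataclass

@dataclass
class PrivacySettings:
    share_usage_analytics: bool = False   # opt-in, never opt-out
    profile_visibility: str = "private"   # not "public"
    data_retention_days: int = 30         # shortest retention by default

settings = PrivacySettings()              # a new user's defaults
print(settings)
```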

[ 6 ]
How can we encourage employees to use AI for productivity purposes and not simply to take shortcuts?

Even with the advent of LLMs, AI technology is not yet capable of performing the dizzying range of tasks that humans can, and there are many things that it does worse than the average person. Using each new tool effectively requires understanding its purpose.

For example, think about ChatGPT. By learning about language patterns, it has become so good at predicting which words are supposed to follow others that it can produce seemingly sophisticated text responses to complicated questions. However, there’s a limit to the quality of these outputs because being good at guessing plausible combinations of words and phrases is different from understanding the material. So ChatGPT can produce a poem in the style of Shakespeare because it has learned the particular patterns of his plays and poems, but it cannot produce the original insight into the human condition that informs his work.

By contrast, AI can be better and more efficient than humans at making predictions because it can process much larger amounts of data much more quickly. Examples include predicting early dementia from speech patterns, detecting cancerous tumors indistinguishable to the human eye, and planning safer routes through battlefields.

Employees should therefore be encouraged to evaluate whether AI’s strengths match up to a task and proceed accordingly. If you need to process a lot of information quickly, it can do that. If you need a bunch of new ideas, it can generate them. Even if you need to make a difficult decision, it can offer advice, provided it’s been trained on relevant data.

But you shouldn’t use AI to create meaningful work products without human oversight. If you need to write a large number of documents with very similar content, AI may be a useful generator of what has long been referred to as boilerplate material. Be aware that its outputs are derived from its datasets and algorithms, and they aren’t necessarily good or accurate.

[ 7 ]
How worried should we be that AI will replace jobs?

Every technological revolution has created more jobs than it has destroyed. Automobiles put horse-and-buggy drivers out of business but led to new jobs building and fixing cars, running gas stations, and more. The novelty of AI technologies makes it easy to fear they will replace humans in the workforce. But we should instead view them as ways to augment human performance. For example, companies like Collective[i] have developed AI systems that analyze data to produce highly accurate sales forecasts quickly; traditionally, this work took people days and weeks to pull together. But no salespeople are losing their jobs. Rather, they’ve got more time to focus on more important parts of their work: building relationships, managing, and actually selling.

Similarly, services like OpenAI’s Codex can autogenerate programming code for basic purposes. This doesn’t replace programmers; it allows them to write code more efficiently and automate repetitive tasks like testing so that they can work on higher-level issues such as systems architecture, domain modeling, and user experience.

The long-term effects on jobs are complex and uneven, and there can be periods of job destruction and displacement in certain industries or regions. To ensure that the benefits of technological progress are widely shared, it is crucial to invest in education and workforce development to help people adapt to the new job market.

Individuals and organizations should focus on upskilling and scaling to prepare to make the most of new technologies. AI and robots aren’t replacing humans anytime soon. The more likely reality is that people with digital mindsets will replace those without them.

[ 8 ]
How can my organization ensure that the AI we develop or use won’t harm individuals or groups or violate human rights?

The harms of AI bias have been widely documented. In their seminal 2018 paper “Gender Shades,” Joy Buolamwini and Timnit Gebru showed that popular facial recognition technologies offered by companies like IBM and Microsoft were nearly perfect at identifying white, male faces but misidentified Black female faces as much as 35% of the time. Facial recognition can be used to unlock your phone, but is also used to monitor patrons at Madison Square Garden, surveil protesters, and tap suspects in police investigations — and misidentification has led to wrongful arrests that can derail people’s lives. As AI grows in power and becomes more integrated into our daily lives, its potential for harm grows exponentially, too. Here are strategies to safeguard AI.
Slow down and document AI development.

Preventing AI harm requires shifting our focus from the rapid development and deployment of increasingly powerful AI to ensuring that AI is safe before release.

Transparency is also key. Earlier in this article, I explained how clear descriptions of the datasets used in AI, and of the potential biases within them, help to reduce harm. When algorithms are openly shared, organizations and individuals can better analyze and understand the potential risks of new tools before using them.

Establish and protect AI ethics watchdogs.

The question of who will ensure safe and responsible AI is currently unanswered. Google, for example, employs an ethical-AI team, but in 2020 it fired Gebru after she sought to publish a paper warning of the risks of building ever-larger language models. Her exit from Google raised the question of whether tech developers are able, or incentivized, to act as ombudsmen for their own technologies and organizations. More recently, an entire team at Microsoft focused on ethics was laid off. But many in the industry recognize the risks, and as noted earlier, even tech icons have called for policymakers working with technologists to create regulatory systems to govern AI development.

Whether it comes from government, the tech industry, or another independent system, the establishment and protection of watchdogs is crucial to protecting against AI harm.

Watch where regulation is headed.

Even as the AI landscape changes, governments are trying to regulate it. In the United States, 21 AI-related bills were passed into law last year. Notable acts include an Alabama provision outlining guidelines for using facial recognition technology in criminal proceedings and legislation that created a Vermont Division of Artificial Intelligence to review all AI used by the state government and to propose a state AI code of ethics. More recently, the U.S. federal government moved to enact executive actions on AI, which will be vetted over time.

The European Union is also considering legislation — the Artificial Intelligence Act — that includes a classification system determining the level of risk AI could pose to the health and safety or the fundamental rights of a person. Italy has temporarily banned ChatGPT. The African Union has established a working group on AI, and the African Commission on Human and Peoples’ Rights adopted a resolution to address implications for human rights of AI, robotics, and other new and emerging technologies in Africa.

China passed a data protection law in 2021 that established user consent rules for data collection and recently passed a unique policy regulating “deep synthesis technologies” that are used for so-called “deep fakes.” The British government released an approach that applies existing regulatory guidelines to new AI technology.

* * * * * *

Billions of people around the world are discovering the promise of AI through their experiments with ChatGPT, Bing, Midjourney, and other new tools. Every company will have to confront questions about how these emerging technologies will apply to them and their industries. For some it will mean a significant pivot in their operating models; for others, an opportunity to scale and broaden their offerings. But all must assess their readiness to deploy AI responsibly without perpetuating harm to their stakeholders and the world at large.

About the Author
Tsedal Neeley is the Naylor Fitzhugh Professor of Business Administration and senior associate dean of faculty and research at Harvard Business School. She is the coauthor of the book The Digital Mindset: What It Really Takes to Thrive in the Age of Data, Algorithms, and AI and the author of the book Remote Work Revolution: Succeeding from Anywhere.

What is Consequentialism?


Consequentialist and
Non-consequentialist theory




* * * * * * *

Deontological or Consequentialist Leadership

Published Dec 16, 2015


Yesterday, I wrote "Theory and application of critical thinking to leadership development" and developed a Red Team vs. Blue Team model to divide between Essentialist and Constructionist worldviews. Today, I connected some more dots. Maybe there is some truth in my proposal, maybe not; I present the idea to the crowd to test the hypothesis.

Can we assess "effective leadership" by investigating the concepts of deontological and consequentialist good in relation to leadership - specifically which good is better for leadership?


Hypothesis

Deontological leadership is better than consequentialist leadership.
I define "better" as more effective at achieving success, with success measured by beating competitors.

Background on Ethics

There is a lot of debate about how to define good leadership.

  • We might have a usable (but vague, "I know it when I see it") understanding of leadership.
  • We have 3,000 years of difficulty in defining what is good.

We do have a good-enough rule-of-thumb understanding of what is good: the Golden Rule. The Golden Rule, or ethic of reciprocity, is a moral maxim or principle of altruism found in nearly every human culture and religion, suggesting it is related to a fundamental human nature (source: Wikipedia).

Ethics (moral philosophy) is a branch of philosophy that attempts to define the concepts of good and bad. Trying to agree on what is “good”, such as defining what a good life is, is a never-ending debate.

Normative ethics are prescriptive by nature and deal with how we ought to act. The top three ethical systems are:

  • Consequentialism – (Ethics of Ends) the ends justify the means. A good outcome defines good action; the outcome is more important than the actions taken to reach it. Key question: Can a good and ethical solution result from the use of unethical or immoral means?
  • Virtue Ethics – actions define character, and right actions are based on virtues, which lie in the golden mean between the vice of excess and the vice of deficit. No universal principles.
  • Deontology – (Ethics of Means) duty/rule-based ethics in which each act is judged as good or bad, and the end is judged by the sum of good and bad acts. The goodness (moral intentions) of actions is more important than the end result.

Deontology and consequentialism are in opposition to one another, while virtue ethics lies somewhere in between. To simplify our debate about leadership, let's create two worldviews on leadership, each with its own definition of "good":

  • Consequentialist leadership
  • Deontological leadership

This graphic extends the concept I created yesterday.



Relevance to business and leadership

There are a number of philosophies on the purpose of a business. Some people are of the opinion that shareholders come first, some believe that employees come first, and others put the customer first. (I put the customer first.)
I wonder: "Do leaders guided by consequentialist ethics more often put the shareholder first, while leaders guided by deontological ethics put the customer first?"

"Does your prioritization of stakeholders influence your leadership ethics or do your ethics influence your stakeholder prioritization?"


Customers first


There is only one valid definition of a business purpose:
to create a customer.
 - Peter Drucker, 1973

It is not the employer who pays the wages.
Employers only handle the money.
It is the customer who pays the wages.
- Henry Ford

Shareholders first

There is one and only one social responsibility of business
– to use its resources and engage in activities designed to increase
its profits so long as it stays within the rules of the game, which is
to say, engages in open and free competition without deception or fraud.
- Milton Friedman, Chicago School, 1970

 

“The sole purpose of a business is to maximize shareholder value.”
- Michael Jensen and William Meckling, 1976 (paraphrased)

The following graphic illustrates the hypothesis.

- Deontological leadership is better than consequentialist leadership.


Questions pertaining to the hypothesis:

  • Does consequentialist leadership have short-term advantages?
  • Does deontological leadership show benefit in the long term?
  • Which is better? Gaining short-term advantage or working towards long-term advantage?
  • Does a focus on short-term shareholder value have a detrimental effect on long-term shareholder value?

Do you support the hypothesis?

Questions about you:

  • Which ethical system do you live by in your personal life?
  • What is of greater importance to you, winning or how you win?
  • Are you focused on short-term or long-term success?
  • Are you willing to lose if winning requires you to do something bad?
  • Can you function under one set of ethics in your personal life and another set of ethics in your business life?
  • Will we ever agree on what "good" leadership is?
  • Maybe the only important issue is what is "good" for you.

Consider this:

Values → Ethics → Culture → Morals → Actions


It is important to pick one or the other because you need consistency in your interactions with your team. Consistency leads to predictability and trust. Your reputation is developed over years.


Extra stuff

Saul Alinsky’s rules about the ethics of ends and means. (9 is my favorite.)

  1. One’s concern with the ethics of means and ends varies inversely with one’s personal interest in the issue.
  2. The judgment of the ethics of means is dependent upon the political position of those sitting in judgment.
  3. In war the end justifies almost any means. (really?)
  4. Judgment must be made in the context of the times in which the action occurred and not from any other chronological vantage point.
  5. Concern with ethics increases with the number of means available and vice versa.
  6. The less important the end to be desired, the more one can afford to engage in ethical evaluations of means.
  7. Generally, success or failure is a mighty determinant of ethics.
  8. The morality of a means depends upon whether the means is being employed at a time of imminent defeat or imminent victory.
  9. Any effective means is automatically judged by the opposition as being unethical.
  10. You do what you can with what you have and clothe it with moral garments.
  11. Goals must be phrased in general terms like “Liberty, Equality, Fraternity,” “Of the Common Welfare,” “Pursuit of Happiness,” or “Bread and Peace.”


* * * * * *


Consequentialism
Every advantage in the past is judged in the light of the final issue. — Demosthenes

In moral philosophy, consequentialism is a class of normative, teleological ethical theories that holds that the consequences of one's conduct are the ultimate basis for judgement about the rightness or wrongness of that conduct. Thus, from a consequentialist standpoint, a morally right act (or omission from acting) is one that will produce a good outcome. Consequentialism, along with eudaimonism, falls under the broader category of teleological ethics, a group of views which claim that the moral value of any act consists in its tendency to produce things of intrinsic value.[1] Consequentialists hold in general that an act is right if and only if the act (or in some views, the rule under which it falls) will produce, will probably produce, or is intended to produce, a greater balance of good over evil than any available alternative. Different consequentialist theories differ in how they define moral goods, with chief candidates including pleasure, the absence of pain, the satisfaction of one's preferences, and broader notions of the "general good".
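
That rightness condition can be stated schematically. In the notation below (mine, not the article's), A is the set of available alternatives and V(a) is the balance of good over evil that act a produces:

```latex
% Schematic consequentialist criterion (notation assumed for illustration):
% an act is right iff no available alternative yields a better balance
% of good over evil.
\[
  \mathrm{Right}(a) \;\iff\; \forall a' \in A :\; V(a) \ge V(a')
\]
```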

Consequentialism is usually contrasted with deontological ethics (or deontology): deontology, in which rules and moral duty are central, derives the rightness or wrongness of one's conduct from the character of the behaviour itself, rather than the outcomes of the conduct. It is also contrasted with both virtue ethics which focuses on the character of the agent rather than on the nature or consequences of the act (or omission) itself, and pragmatic ethics which treats morality like science: advancing collectively as a society over the course of many lifetimes, such that any moral criterion is subject to revision.

Some argue that consequentialist theories (such as utilitarianism) and deontological theories (such as Kantian ethics) are not necessarily mutually exclusive. For example, T. M. Scanlon advances the idea that human rights, which are commonly considered a "deontological" concept, can only be justified with reference to the consequences of having those rights.[2] Similarly, Robert Nozick argued for a theory that is mostly consequentialist, but incorporates inviolable "side-constraints" which restrict the sort of actions agents are permitted to do.[2] Derek Parfit argued that, in practice, when understood properly, rule consequentialism, Kantian deontology, and contractualism would all end up prescribing the same behavior.[3]

Etymology

The term consequentialism was coined by G. E. M. Anscombe in her essay "Modern Moral Philosophy" in 1958.[4][5] However, the meaning of the word has changed over the time since Anscombe used it: in the sense she coined it, she had explicitly placed J.S. Mill in the nonconsequentialist and W.D. Ross in the consequentialist camp, whereas, in the contemporary sense of the word, they would be classified the other way round.[4][6] This is due to changes in the meaning of the word, not due to changes in perceptions of W.D. Ross's and J.S. Mill's views.[4][6]

Classification

One common view is to classify consequentialism, together with virtue ethics, under a broader label of "teleological ethics".[7][1] Proponents of teleological ethics (Greek: telos, 'end, purpose' + logos, 'science') argue that the moral value of any act consists in its tendency to produce things of intrinsic value,[1] meaning that an act is right if and only if it, or the rule under which it falls, produces, will probably produce, or is intended to produce, a greater balance of good over evil than any alternative act. This concept is exemplified by the famous aphorism "the end justifies the means," variously attributed to Machiavelli or Ovid;[8] i.e., if a goal is morally important enough, any method of achieving it is acceptable.[9][10]

Teleological ethical theories are contrasted with deontological ethical theories, which hold that acts themselves are inherently good or bad, rather than good or bad because of extrinsic factors (such as the act's consequences or the moral character of the person who acts).[11]

Forms of consequentialism

Utilitarianism

Jeremy Bentham, best known for his advocacy of utilitarianism

Nature has placed mankind under the governance of two sovereign masters, pain and pleasure. It is for them alone to point out what we ought to do, as well as to determine what we shall do. On the one hand the standard of right and wrong, on the other the chain of causes and effects, are fastened to their throne. They govern us in all we do, in all we say, in all we think...

— Jeremy Bentham, The Principles of Morals and Legislation (1789) Ch I, p 1

In summary, Jeremy Bentham states that people are driven by their interests and their fears, but their interests take precedence over their fears; their interests are carried out in accordance with how people view the consequences that might be involved with their interests. Happiness, in this account, is defined as the maximization of pleasure and the minimization of pain. It can be argued that the existence of phenomenal consciousness and "qualia" is required for the experience of pleasure or pain to have an ethical significance.[12][13]

Historically, hedonistic utilitarianism is the paradigmatic example of a consequentialist moral theory. This form of utilitarianism holds that what matters is the aggregate happiness; the happiness of everyone, and not the happiness of any particular person. John Stuart Mill, in his exposition of hedonistic utilitarianism, proposed a hierarchy of pleasures, meaning that the pursuit of certain kinds of pleasure is more highly valued than the pursuit of other pleasures.[14] However, some contemporary utilitarians, such as Peter Singer, are concerned with maximizing the satisfaction of preferences, hence preference utilitarianism. Other contemporary forms of utilitarianism mirror the forms of consequentialism outlined below.
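
The aggregate view can likewise be written schematically (again, my notation, offered only as illustration): the value of an act sums each person's pleasure minus pain, with no one's happiness counting more than anyone else's:

```latex
% Aggregate utility under hedonistic utilitarianism (schematic notation):
% everyone's pleasure and pain count equally, whoever's they are.
\[
  U(a) \;=\; \sum_{i=1}^{n} \bigl( \mathrm{pleasure}_i(a) - \mathrm{pain}_i(a) \bigr)
\]
```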

Rule consequentialism

In general, consequentialist theories focus on actions. However, this need not be the case. Rule consequentialism is a theory that is sometimes seen as an attempt to reconcile consequentialism with deontology, or rules-based ethics[15]—and in some cases, this is stated as a criticism of rule consequentialism.[16] Like deontology, rule consequentialism holds that moral behavior involves following certain rules. However, rule consequentialism chooses rules based on the consequences that the selection of those rules has. Rule consequentialism exists in the forms of rule utilitarianism and rule egoism.

Various theorists are split as to whether the rules are the only determinant of moral behavior or not. For example, Robert Nozick held that a certain set of minimal rules, which he calls "side-constraints," are necessary to ensure appropriate actions.[2] There are also differences as to how absolute these moral rules are. Thus, while Nozick's side-constraints are absolute restrictions on behavior, Amartya Sen proposes a theory that recognizes the importance of certain rules, but these rules are not absolute.[2] That is, they may be violated if strict adherence to the rule would lead to much more undesirable consequences.

One of the most common objections to rule-consequentialism is that it is incoherent, because it is based on the consequentialist principle that what we should be concerned with is maximizing the good, but then it tells us not to act to maximize the good, but to follow rules (even in cases where we know that breaking the rule could produce better results).

In Ideal Code, Real World, Brad Hooker avoids this objection by not basing his form of rule-consequentialism on the ideal of maximizing the good. He writes:[17]

[T]he best argument for rule-consequentialism is not that it derives from an overarching commitment to maximise the good. The best argument for rule-consequentialism is that it does a better job than its rivals of matching and tying together our moral convictions, as well as offering us help with our moral disagreements and uncertainties.

Derek Parfit described Hooker's book as the "best statement and defence, so far, of one of the most important moral theories."[18]

State consequentialism

It is the business of the benevolent man to seek to promote what is beneficial to the world and to eliminate what is harmful, and to provide a model for the world. What benefits he will carry out; what does not benefit men he will leave alone.[19]

— Mozi, Mozi (5th century BC), Part I

State consequentialism, also known as Mohist consequentialism,[20] is an ethical theory that evaluates the moral worth of an action based on how much it contributes to the welfare of a state.[20] According to the Stanford Encyclopedia of Philosophy, Mohist consequentialism, dating back to the 5th century BCE, is the "world's earliest form of consequentialism, a remarkably sophisticated version based on a plurality of intrinsic goods taken as constitutive of human welfare."[21]

Unlike utilitarianism, which views utility as the sole moral good, "the basic goods in Mohist consequentialist thinking are...order, material wealth, and increase in population."[22] During the time of Mozi, war and famine were common, and population growth was seen as a moral necessity for a harmonious society. The "material wealth" of Mohist consequentialism refers to basic needs, like shelter and clothing; and "order" refers to Mozi's stance against warfare and violence, which he viewed as pointless and a threat to social stability.[23] In The Cambridge History of Ancient China, Stanford sinologist David Shepherd Nivison writes that the moral goods of Mohism "are interrelated: more basic wealth, then more reproduction; more people, then more production and wealth...if people have plenty, they would be good, filial, kind, and so on unproblematically."[22]

The Mohists believed that morality is based on "promoting the benefit of all under heaven and eliminating harm to all under heaven." In contrast to Jeremy Bentham's views, state consequentialism is not utilitarian because it is not hedonistic or individualistic. The importance of outcomes that are good for the community outweighs the importance of individual pleasure and pain.[24] The term state consequentialism has also been applied to the political philosophy of the Confucian philosopher Xunzi.[25] On the other hand, "legalist" Han Fei "is motivated almost totally from the ruler's point of view."[26]

Ethical egoism

Ethical egoism can be understood as a consequentialist theory according to which the consequences for the individual agent are taken to matter more than any other result. Thus, egoism will prescribe actions that may be beneficial, detrimental, or neutral to the welfare of others. Some, like Henry Sidgwick, argue that a certain degree of egoism promotes the general welfare of society for two reasons: because individuals know how to please themselves best, and because if everyone were an austere altruist then general welfare would inevitably decrease.[27]

Ethical altruism

Ethical altruism can be seen as a consequentialist theory which prescribes that an individual take actions that have the best consequences for everyone except for himself.[28] This was advocated by Auguste Comte, who coined the term altruism, and whose ethics can be summed up in the phrase "Live for others."[29]

Two-level consequentialism

The two-level approach involves engaging in critical reasoning and considering all the possible ramifications of one's actions before making an ethical decision, but reverting to generally reliable moral rules when one is not in a position to stand back and examine the dilemma as a whole. In practice, this equates to adhering to rule consequentialism when one can only reason on an intuitive level, and to act consequentialism when in a position to stand back and reason on a more critical level.[30]

This position can be described as a reconciliation between act consequentialism—in which the morality of an action is determined by that action's effects—and rule consequentialism—in which moral behavior is derived from following rules that lead to positive outcomes.[30]

The two-level approach to consequentialism is most often associated with R. M. Hare and Peter Singer.[30]
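
Because the two-level view is essentially a decision procedure, it can be sketched as one. The code below is a loose illustration of the idea, not an algorithm Hare or Singer themselves give; the rule set, options, and scores are invented:

```python
# Loose sketch of two-level consequentialism as a decision procedure.
# The rules, options, and values are invented for illustration.
RELIABLE_RULES = {"don't lie", "keep promises", "don't harm"}

def choose(options, can_reason_critically, expected_value):
    if can_reason_critically:
        # Critical level: act consequentialism, weigh all ramifications.
        return max(options, key=expected_value)
    # Intuitive level: fall back on generally reliable moral rules.
    permitted = [a for a in options if a["violates"].isdisjoint(RELIABLE_RULES)]
    return permitted[0] if permitted else None

options = [
    {"name": "white lie", "violates": {"don't lie"}, "value": 5},
    {"name": "hard truth", "violates": set(), "value": 3},
]

# Under time pressure, the rule against lying decides:
print(choose(options, False, lambda a: a["value"])["name"])  # hard truth
# With time to reflect critically, expected value decides:
print(choose(options, True, lambda a: a["value"])["name"])   # white lie
```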

Motive consequentialism

Another consequentialist version is motive consequentialism, which looks at whether the state of affairs that results from the motive to choose an action is better or at least as good as each alternative state of affairs that would have resulted from alternative actions. This version gives relevance to the motive of an act and links it to its consequences. An act can therefore not be wrong if the decision to act was based on a right motive. A possible inference is that one can not be blamed for mistaken judgments if the motivation was to do good.[31]

Negative consequentialism

Most consequentialist theories focus on promoting some sort of good consequences. However, negative utilitarianism lays out a consequentialist theory that focuses solely on minimizing bad consequences.

One major difference between these two approaches is the agent's responsibility. Positive consequentialism demands that we bring about good states of affairs, whereas negative consequentialism requires that we avoid bad ones. Stronger versions of negative consequentialism will require active intervention to prevent bad and ameliorate existing harm. In weaker versions, simple forbearance from acts tending to harm others is sufficient. An example of this is the slippery-slope argument, which encourages others to avoid a specified act on the grounds that it may ultimately lead to undesirable consequences.[32]

Often "negative" consequentialist theories assert that reducing suffering is more important than increasing pleasure. Karl Popper, for example, claimed that "from the moral point of view, pain cannot be outweighed by pleasure."[33] (While Popper is not a consequentialist per se, this is taken as a classic statement of negative utilitarianism.) When considering a theory of justice, negative consequentialists may use a statewide or global-reaching principle: the reduction of suffering (for the disadvantaged) is more valuable than increased pleasure (for the affluent or luxurious).

Acts and omissions

Since pure consequentialism holds that an action is to be judged solely by its result, most consequentialist theories hold that a deliberate action is no different from a deliberate decision not to act. This contrasts with the "acts and omissions doctrine", which is upheld by some medical ethicists and some religions: it asserts there is a significant moral distinction between acts and deliberate non-actions which lead to the same outcome. This contrast is brought out in issues such as voluntary euthanasia.

Actualism and possibilism

The normative status of an action depends on its consequences according to consequentialism. The consequences of the actions of an agent may include other actions by this agent. Actualism and possibilism disagree on how later possible actions impact the normative status of the current action by the same agent. Actualists assert that it is only relevant what the agent would actually do later for assessing the value of an alternative. Possibilists, on the other hand, hold that we should also take into account what the agent could do, even if she would not do it.[34][35][36][37]

For example, assume that Gifre has the choice between two alternatives, eating a cookie or not eating anything. Having eaten the first cookie, Gifre could stop eating cookies, which is the best alternative. But after having tasted one cookie, Gifre would freely decide to continue eating cookies until the whole bag is finished, which would result in a terrible stomach ache and would be the worst alternative. Not eating any cookies at all, on the other hand, would be the second-best alternative. Now the question is: should Gifre eat the first cookie or not? Actualists are only concerned with the actual consequences. According to them, Gifre should not eat any cookies at all since it is better than the alternative leading to a stomach ache. Possibilists, however, contend that the best possible course of action involves eating the first cookie and this is therefore what Gifre should do.[38]
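
The disagreement can be made concrete by attaching invented utility numbers to Gifre's options and scoring them the way each camp recommends:

```python
# Gifre's cookie case with invented utilities:
#   eat one cookie then stop -> +10 (best, but not what she would do)
#   eat the whole bag        -> -10 (worst; her actual disposition)
#   eat nothing              ->  +5 (second best)
WOULD_ACTUALLY_DO = {"eat first cookie": "eat the whole bag"}
VALUES = {"eat one then stop": 10, "eat the whole bag": -10, "eat nothing": 5}

def actualist_value(choice):
    # Actualism: judge "eat first cookie" by what Gifre WOULD do next.
    return VALUES[WOULD_ACTUALLY_DO.get(choice, choice)]

def possibilist_value(choice):
    # Possibilism: judge by the best continuation she COULD choose.
    if choice == "eat first cookie":
        return max(VALUES["eat one then stop"], VALUES["eat the whole bag"])
    return VALUES[choice]

for f in (actualist_value, possibilist_value):
    best = max(["eat first cookie", "eat nothing"], key=f)
    print(f.__name__, "recommends:", best)
# actualist_value recommends: eat nothing
# possibilist_value recommends: eat first cookie
```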

One counterintuitive consequence of actualism is that agents can avoid moral obligations simply by having an imperfect moral character.[34][36] For example, a lazy person might justify rejecting a request to help a friend by arguing that, due to her lazy character, she would not have done the work anyway, even if she had accepted the request. By rejecting the offer right away, she managed at least not to waste anyone's time. Actualists might even consider her behavior praiseworthy since she did what, according to actualism, she ought to have done. This seems to be a very easy way to "get off the hook" that is avoided by possibilism. But possibilism has to face the objection that in some cases it sanctions and even recommends what actually leads to the worst outcome.[34][39]

Douglas W. Portmore has suggested that these and other problems of actualism and possibilism can be avoided by constraining what counts as a genuine alternative for the agent.[40] On his view, it is a requirement that the agent has rational control over the event in question. For example, eating only one cookie and stopping afterward is an option for Gifre only if she has the rational capacity to repress her temptation to continue eating. If the temptation is irrepressible, then this course of action is not considered to be an option and is therefore not relevant when assessing what the best alternative is. Portmore suggests that, given this adjustment, we should prefer a view very closely associated with possibilism called maximalism.[38]

Issues

Action guidance

One important characteristic of many normative moral theories such as consequentialism is the ability to produce practical moral judgements. At the very least, any moral theory needs to define the standpoint from which the goodness of the consequences are to be determined. What is primarily at stake here is the responsibility of the agent.[41]

The ideal observer

One common tactic among consequentialists, particularly those committed to an altruistic (selfless) account of consequentialism, is to employ an ideal, neutral observer from which moral judgements can be made. John Rawls, a critic of utilitarianism, argues that utilitarianism, in common with other forms of consequentialism, relies on the perspective of such an ideal observer.[2] The particular characteristics of this ideal observer can vary from an omniscient observer, who would grasp all the consequences of any action, to an ideally informed observer, who knows as much as could reasonably be expected, but not necessarily all the circumstances or all the possible consequences. Consequentialist theories that adopt this paradigm hold that right action is the action that will bring about the best consequences from this ideal observer's perspective.

The real observer

In practice, it is very difficult, and at times arguably impossible, to adopt the point of view of an ideal observer. Individual moral agents do not know everything about their particular situations, and thus do not know all the possible consequences of their potential actions. For this reason, some theorists have argued that consequentialist theories can only require agents to choose the best action in line with what they know about the situation.[42] However, if this approach is naïvely adopted, then moral agents who, for example, recklessly fail to reflect on their situation and act in a way that brings about terrible results could be said to be acting in a morally justifiable way. Acting without first informing oneself of the circumstances of the situation can lead even the most well-intended actions to yield miserable consequences. As a result, it could be argued that there is a moral imperative for agents to inform themselves as much as possible about a situation before judging the appropriate course of action. This imperative is, of course, itself derived from consequentialist thinking: a better-informed agent is able to bring about better consequences.

Consequences for whom

Moral action always has consequences for certain people or things. Varieties of consequentialism can be differentiated by the beneficiary of the good consequences. That is, one might ask "Consequences for whom?"

Agent-focused or agent-neutral

A fundamental distinction can be drawn between theories which require that agents act for ends perhaps disconnected from their own interests and drives, and theories which permit that agents act for ends in which they have some personal interest or motivation. These are called "agent-neutral" and "agent-focused" theories respectively.

Agent-neutral consequentialism ignores the specific value a state of affairs has for any particular agent. Thus, in an agent-neutral theory, an actor's personal goals do not count any more than anyone else's goals in evaluating what action the actor should take. Agent-focused consequentialism, on the other hand, focuses on the particular needs of the moral agent. Thus, in an agent-focused account, such as one that Peter Railton outlines, the agent might be concerned with the general welfare, but the agent is more concerned with the immediate welfare of herself and her friends and family.[2]
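
The difference can be stated in terms of how welfare is aggregated. In the toy sketch below (all welfare values and weights are hypothetical), an agent-neutral evaluation counts everyone's welfare equally, while an agent-focused evaluation lets the agent weight herself and those close to her more heavily:

    welfare = {"self": 3, "friend": 4, "stranger": 8}

    def agent_neutral_value(welfare):
        # Whose welfare it is makes no difference to the evaluation.
        return sum(welfare.values())

    def agent_focused_value(welfare, weights):
        # The agent's own projects and attachments carry extra weight.
        return sum(weights[who] * w for who, w in welfare.items())

    weights = {"self": 2.0, "friend": 1.5, "stranger": 1.0}
    print(agent_neutral_value(welfare))           # -> 15
    print(agent_focused_value(welfare, weights))  # -> 20.0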

These two approaches could be reconciled by acknowledging the tension between an agent's interests as an individual and as a member of various groups, and by seeking to optimize among all of these interests. For example, it may be meaningful to speak of an action as being good for someone as an individual but bad for them as a citizen of their town.

Human-centered?

Many consequentialist theories may seem primarily concerned with human beings and their relationships with other human beings. However, some philosophers argue that we should not limit our ethical consideration to the interests of human beings alone. Jeremy Bentham, who is regarded as the founder of utilitarianism, argues that animals can experience pleasure and pain, thus demanding that 'non-human animals' should be a serious object of moral concern.[43]

More recently, Peter Singer has argued that it is unreasonable that we do not give equal consideration to the interests of animals as to those of human beings when we choose the way we are to treat them.[44] Such equal consideration does not necessarily imply identical treatment of humans and non-humans, any more than it necessarily implies identical treatment of all humans.

Value of consequences

One way to divide various consequentialisms is by the types of consequences that are taken to matter most, that is, which consequences count as good states of affairs. According to utilitarianism, a good action is one that results in an increase in pleasure, and the best action is one that results in the most pleasure for the greatest number. Closely related is eudaimonic consequentialism, according to which a full, flourishing life, which may or may not be the same as enjoying a great deal of pleasure, is the ultimate aim. Similarly, one might adopt an aesthetic consequentialism, in which the ultimate aim is to produce beauty. However, one might instead fix on non-psychological goods as the relevant effect. Thus, one might pursue an increase in material equality or political liberty instead of something like the more ephemeral "pleasure". Other theories adopt a package of several goods, all to be promoted equally. Because the consequentialist approach assumes that the outcomes of a moral decision can be quantified in terms of "goodness" or "badness", or at least put in order of increasing preference, it is especially well suited to a probabilistic and decision-theoretic approach.[45][46]
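
That suitability is easy to illustrate: once outcomes are assigned numerical "goodness" and probabilities, choosing an action reduces to maximizing expected value. A minimal sketch, with wholly hypothetical actions and numbers:

    # Each action maps to (probability, goodness) pairs over its outcomes.
    ACTIONS = {
        "donate":     [(0.8, 10), (0.2, -2)],
        "invest":     [(0.5, 15), (0.5, -5)],
        "do nothing": [(1.0, 0)],
    }

    def expected_goodness(outcomes):
        # Probability-weighted sum of the goodness of each outcome.
        return sum(p * g for p, g in outcomes)

    best = max(ACTIONS, key=lambda a: expected_goodness(ACTIONS[a]))
    print(best)  # -> donate (expected value 7.6 vs 5.0 vs 0.0)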

Virtue ethics

Consequentialism can also be contrasted with aretaic moral theories such as virtue ethics. Whereas consequentialist theories posit that the consequences of action should be the primary focus of our thinking about ethics, virtue ethics insists that it is the character, rather than the consequences of actions, that should be the focal point. Some virtue ethicists hold that consequentialist theories totally disregard the development and importance of moral character. For example, Philippa Foot argues that consequences in themselves have no ethical content unless that content has been provided by a virtue such as benevolence.[2]

However, consequentialism and virtue ethics need not be entirely antagonistic. Iain King has developed an approach that reconciles the two schools.[47][48][49][50] Other consequentialists consider effects on the character of people involved in an action when assessing consequences. Similarly, a consequentialist theory may aim at the maximization of a particular virtue or set of virtues. Finally, following Foot's lead, one might adopt a sort of consequentialism that argues that virtuous activity ultimately produces the best consequences.

Max Weber

Ultimate end

The ultimate end is a concept in the moral philosophy of Max Weber, according to which individuals act in a faithful, rather than a rational, manner.[51]

We must be clear about the fact that all ethically oriented conduct may be guided by one of two fundamentally differing and irreconcilably opposed maxims: conduct can be oriented to an ethic of ultimate ends or to an ethic of responsibility. [...] There is an abysmal contrast between conduct that follows the maxim of an ethic of ultimate ends — that is in religious terms, "the Christian does rightly and leaves the results with the Lord" — and conduct that follows the maxim of an ethic of responsibility, in which case one has to give an account of the foreseeable results of one's action.

— Max Weber, Politics as a Vocation, 1918[52]

Criticisms

G. E. M. Anscombe objects to the consequentialism of Sidgwick on the grounds that the moral worth of an action is premised on the predictive capabilities of the individual, relieving them of responsibility for the "badness" of an act should they "make out a case for not having foreseen" negative consequences.[5]

The future amplification of the effects of small decisions[53] is an important factor that makes it more difficult to predict the ethical value of consequences,[54] even though most would agree that only predictable consequences carry moral responsibility.[51]

Bernard Williams has argued that consequentialism is alienating because it requires moral agents to put too much distance between themselves and their own projects and commitments. Williams argues that consequentialism requires moral agents to take a strictly impersonal view of all actions, since it is only the consequences, and not who produces them, that are said to matter. Williams argues that this demands too much of moral agents, since (he claims) consequentialism demands that they be willing to sacrifice any and all personal projects and commitments in any given circumstance in order to pursue the most beneficent course of action possible. He argues further that consequentialism fails to make sense of the intuition that it can matter whether or not someone is personally the author of a particular consequence: for example, that participating in a crime can matter, even if the crime would have been committed anyway, or would even have been worse, without the agent's participation.[55]

Some consequentialists—most notably Peter Railton—have attempted to develop a form of consequentialism that acknowledges and avoids the objections raised by Williams. Railton argues that Williams's criticisms can be avoided by adopting a form of consequentialism in which moral decisions are to be determined by the sort of life that they express. On his account, the agent should choose the sort of life that will, on the whole, produce the best overall effects.[2]

References

  1. "Teleological Ethics." Encyclopedia of Philosophy, via Encyclopedia.com. 28 May 2020. Retrieved 2 July 2020.
  2. Scheffler, Samuel (1988). Consequentialism and Its Critics. Oxford: Oxford University Press. ISBN 978-0-19-875073-4.
  3. Parfit, Derek (2011). On What Matters. Oxford: Oxford University Press.
  4. Seidel, Christian (2018). Consequentialism: New Directions, New Problems. Oxford: Oxford University Press. pp. 2–3. ISBN 9780190270124.
  5. Anscombe, G. E. M. (1958). "Modern Moral Philosophy". Philosophy 33 (124): 1–19. doi:10.1017/S0031819100037943.
  6. Diamond, Cora (1997). "Consequentialism in Modern Moral Philosophy and in 'Modern Moral Philosophy'". In Oderberg, David S.; Laing, Jacqueline A. (eds.), Human Lives: Critical Essays on Consequentialist Bioethics. London: Palgrave Macmillan. pp. 13–38. doi:10.1007/978-1-349-25098-1_2. ISBN 978-1-349-25098-1.
  7. "Teleological ethics". Encyclopedia Britannica. Retrieved 5 August 2020.
  8. Cf. "the end justifies the means" in Wiktionary.
  9. "The end justifies the means". Cambridge English Dictionary.
  10. Mizzoni, John (2009). Ethics: The Basics. John Wiley & Sons. pp. 97 f., 104. ISBN 9781405189941.
  11. Thomas, A. Jean (2015). "Deontology, Consequentialism and Moral Realism". Minerva 19: 1–24. ISSN 1393-614X.
  12. Levy, N. (2014). "The Value of Consciousness". Journal of Consciousness Studies 21 (1–2): 127–138. PMC 4001209. PMID 24791144.
  13. Shepherd, Joshua (2018). Consciousness and Moral Status. Routledge. ISBN 9781315396347. hdl:20.500.12657/30007.
  14. Mill, John Stuart (1998). Utilitarianism. Oxford: Oxford University Press. ISBN 978-0-19-875163-2.
  15. D'Souza, Jeevan. "On Measuring the Moral Value of Action". Philos, China.
  16. Williams, Bernard (1993). "Utilitarianism". In Morality. Cambridge: Cambridge University Press.
  17. Hooker, Brad (2000). Ideal Code, Real World. Oxford: Oxford University Press. p. 101.
  18. Hooker, Brad (2003). Ideal Code, Real World (new ed. 2002), back cover. Oxford: Oxford University Press. ISBN 978-0-19-925657-0.
  19. Mo Di; Xunzi; Han Fei (1967). Basic Writings of Mo Tzu, Hsün Tzu, and Han Fei Tzu, trans. Burton Watson. New York: Columbia University Press. p. 110. ISBN 978-0-231-02515-7.
  20. Ivanhoe, P. J.; Van Norden, Bryan William (2005). Readings in Classical Chinese Philosophy. Hackett Publishing. p. 60. ISBN 978-0-87220-780-6. "He advocated a form of state consequentialism, which sought to maximize three basic goods: the wealth, order, and population of the state."
  21. Fraser, Chris ([2002] 2015). "Mohism". The Stanford Encyclopedia of Philosophy, edited by E. N. Zalta.
  22. Loewe, Michael; Shaughnessy, Edward L. (1999). The Cambridge History of Ancient China. Cambridge: Cambridge University Press. p. 761. ISBN 978-0-521-47030-8.
  23. Van Norden, Bryan W. (2011). Introduction to Classical Chinese Philosophy. Hackett Publishing. p. 52. ISBN 978-1-60384-468-0.
  24. Garfield, Jay L.; Edelglass, William (2011). The Oxford Handbook of World Philosophy. Oxford: Oxford University Press. p. 62. ISBN 978-0-19-532899-8. "The goods that serve as criteria of morality are collective or public, in contrast, for instance, to individual happiness or well-being."
  25. Chatterjee, Deen K. (2011). Encyclopedia of Global Justice. Springer. p. 1170. ISBN 978-1-4020-9159-9. "In this sense, one can interpret Xunzi's political philosophy as a form of state utilitarianism or state consequentialism."
  26. Hansen, Chad (1994). "Fa (Standards: Laws) and Meaning Changes in Chinese Philosophy" (PDF). Philosophy East and West 44 (3): 435–488. doi:10.2307/1399736. JSTOR 1399736.
  27. Sidgwick, Henry (1907). The Methods of Ethics. New York: Dover (1981). ISBN 978-0-915145-28-7.
  28. Fisher, James; Dowden, Bradley. "Ethics". Internet Encyclopedia of Philosophy.
  29. Moran, Gabriel (2006). "Christian Religion and National Interests" (PDF).
  30. Sinnott-Armstrong, Walter ([2003] 2019). "Consequentialism". The Stanford Encyclopedia of Philosophy (Winter 2015 ed.), edited by E. N. Zalta. Metaphysics Research Lab, Stanford University. Retrieved 1 February 2019.
  31. Adams, R. M. (1976). "Motive Utilitarianism". Journal of Philosophy 73 (14): 467–481. doi:10.2307/2025783. JSTOR 2025783.
  32. Haigh, Matthew; Wood, Jeffrey S.; Stewart, Andrew J. (2016). "Slippery slope arguments imply opposition to change" (PDF). Memory & Cognition 44 (5): 819–836. doi:10.3758/s13421-016-0596-9. ISSN 0090-502X. PMID 26886759.
  33. Popper, Karl (1945). The Open Society and Its Enemies, vol. 1. Routledge. pp. 284–285.
  34. Timmerman, Travis; Cohen, Yishai (2020). "Actualism and Possibilism in Ethics". The Stanford Encyclopedia of Philosophy. Metaphysics Research Lab, Stanford University.
  35. Cohen, Yishai; Timmerman, Travis (2016). "Actualism Has Control Issues". Journal of Ethics and Social Philosophy 10 (3): 1–18. doi:10.26556/jesp.v10i3.104.
  36. Timmerman, Travis; Swenson, Philip (2019). "How to Be an Actualist and Blame People". Oxford Studies in Agency and Responsibility 6: 216–240. doi:10.1093/oso/9780198845539.003.0009. ISBN 9780198845539.
  37. Jackson, Frank; Pargetter, Robert (1986). "Oughts, Options, and Actualism". Philosophical Review 95 (2): 233–255. doi:10.2307/2185591. JSTOR 2185591.
  38. Portmore, Douglas W. (2019). "5. Rationalist Maximalism". Opting for the Best: Oughts and Options. New York: Oxford University Press.
  39. Goldman, Holly S. (1976). "Dated Rightness and Moral Imperfection". Philosophical Review 85 (4): 449–487. doi:10.2307/2184275. JSTOR 2184275.
  40. Portmore, Douglas W. (2019). "3. What's the Relevant Sort of Control?". Opting for the Best: Oughts and Options. New York: Oxford University Press.
  41. Stables, Andrew (2016). "Responsibility beyond rationality: The case for rhizomatic consequentialism". International Journal of Children's Spirituality 9 (2): 219–225. doi:10.1080/1364436042000234404.
  42. Mackie, J. L. (1990) [1977]. Ethics: Inventing Right and Wrong. London: Penguin. ISBN 978-0-14-013558-9.
  43. Bentham, Jeremy (1996). An Introduction to the Principles of Morals and Legislation. Oxford: Oxford University Press. ISBN 978-0-19-820516-6.
  44. Singer, Peter (2002). Kuhse, Helga (ed.). Unsanctifying Human Life. Oxford: Blackwell. ISBN 978-0-631-22507-2.
  45. Simmons, H. J. (1986). "The Quantification of 'Happiness' in Utilitarianism" (PhD thesis). Hamilton, ON: McMaster University.
  46. Audi, Robert (2007). "Can Utilitarianism Be Distributive? Maximization and Distribution as Criteria in Managerial Decisions". Business Ethics Quarterly 17 (4): 593–611.
  47. King, Iain (2008). How to Make Good Decisions and Be Right All the Time: Solving the Riddle of Right and Wrong. London: Continuum.
  48. Chandler, Brett (2014). "24 and Philosophy". Blackwell. Retrieved 27 December 2019.
  49. Frezzo, Eldo (2018). Medical Ethics: A Reference Guide. Routledge. p. 5. ISBN 978-1138581074.
  50. Zuckerman, Phil (2019). What It Means to Be Moral. Counterpoint. p. 21. ISBN 978-1640092747.
  51. "Revisiting Max Weber's Ethic of Responsibility". Perspektiven der Ethik 12. Tübingen: Mohr Siebeck (2018). p. 67.
  52. Weber, Max. Originally a speech at Munich University, 1918; published as "Politik als Beruf" (Munich: Duncker & Humblot, 1919), later in Gesammelte Politische Schriften (Munich, 1921), pp. 396–450. In English: Gerth, H. H.; Mills, C. Wright (trans. and eds.), Max Weber: Essays in Sociology (New York: Oxford University Press, 1946), pp. 77–128.
  53. Gregersen, Hal B.; Sailer, Lee (1993). "Chaos theory and its implications for social science research". Human Relations 46 (7): 777–802. doi:10.1177/001872679304600701.
  54. Lenman, James (2000). "Consequentialism and Cluelessness". Philosophy & Public Affairs 29 (4): 342–370.
  55. Smart, J. J. C.; Williams, Bernard (1973). Utilitarianism: For and Against. Cambridge: Cambridge University Press. pp. 98 ff.
