Quotes & Sayings


We, and creation itself, actualize the possibilities of the God who sustains the world, towards becoming in the world in a fuller, deeper way. - R.E. Slater

There is urgency in coming to see the world as a web of interrelated processes of which we are integral parts, so that all of our choices and actions have [consequential effects upon] the world around us. - Process Metaphysician Alfred North Whitehead

Kurt Gödel's incompleteness theorems say that (i) no closed system can prove its own consistency from within itself and (ii) all such systems are rightly understood as incomplete. - R.E. Slater

The most true thing about you is what God has said to you in Christ, "You are My Beloved." - Tripp Fuller

The God among us is the God who refuses to be God without us, so great is God's Love. - Tripp Fuller

According to some Christian outlooks we were made for another world. Perhaps, rather, we were made for this world to recreate, reclaim, redeem, and renew unto God's future aspiration by the power of His Spirit. - R.E. Slater

Our eschatological ethos is to love. To stand with those who are oppressed. To stand against those who are oppressing. It is that simple. Love is our only calling and Christian Hope. - R.E. Slater

Secularization theory has been massively falsified. We don't live in an age of secularity. We live in an age of explosive, pervasive religiosity... an age of religious pluralism. - Peter L. Berger

Exploring the edge of life and faith in a post-everything world. - Todd Littleton

I don't need another reason to believe, your love is all around for me to see. – Anon

Thou art our need; and in giving us more of thyself thou givest us all. - Khalil Gibran, Prayer XXIII

Be careful what you pretend to be. You become what you pretend to be. - Kurt Vonnegut

Religious beliefs, far from being primary, are often shaped and adjusted by our social goals. - Jim Forest

We become who we are by what we believe and can justify. - R.E. Slater

People, even more than things, need to be restored, renewed, revived, reclaimed, and redeemed; never throw out anyone. – Anon

Certainly, God's love has made fools of us all. - R.E. Slater

An apocalyptic Christian faith doesn't wait for Jesus to come, but for Jesus to become in our midst. - R.E. Slater

Christian belief in God begins with the cross and resurrection of Jesus, not with rational apologetics. - Eberhard Jüngel, Jürgen Moltmann

Our knowledge of God is through the 'I-Thou' encounter, not in finding God at the end of a syllogism or argument. There is a grave danger in any Christian treatment of God as an object. The God of Jesus Christ and Scripture is irreducibly subject and never made as an object, a force, a power, or a principle that can be manipulated. - Emil Brunner

“Ehyeh Asher Ehyeh” means "I will be that who I have yet to become." - God (Ex 3.14) or, conversely, “I AM who I AM Becoming.”

Our job is to love others without stopping to inquire whether or not they are worthy. - Thomas Merton

The church is God's world-changing social experiment of bringing unlikes and differents to the Eucharist/Communion table to share life with one another as a new kind of family. When this happens, we show to the world what love, justice, peace, reconciliation, and life together is designed by God to be. The church is God's show-and-tell for the world to see how God wants us to live as a blended, global, polypluralistic family united with one will, by one Lord, and baptized by one Spirit. – Anon

The cross that is planted at the heart of the history of the world cannot be uprooted. - Jacques Ellul

The Unity in whose loving presence the universe unfolds is inside each person as a call to welcome the stranger, protect animals and the earth, respect the dignity of each person, think new thoughts, and help bring about ecological civilizations. - John Cobb & Farhan A. Shah

If you board the wrong train it is of no use running along the corridors of the train in the other direction. - Dietrich Bonhoeffer

God's justice is restorative rather than punitive; His discipline is merciful rather than punishing; His power is made perfect in weakness; and His grace is sufficient for all. – Anon

Our little [biblical] systems have their day; they have their day and cease to be. They are but broken lights of Thee, and Thou, O God art more than they. - Alfred Lord Tennyson

We can’t control God; God is uncontrollable. God can’t control us; God’s love is uncontrolling! - Thomas Jay Oord

Life in perspective but always in process... as we are relational beings in process to one another, so life events are in process in relation to each event... as God is to Self, is to world, is to us... like Father, like sons and daughters, like events... life in process yet always in perspective. - R.E. Slater

To promote societal transition to sustainable ways of living and a global society founded on a shared ethical framework which includes respect and care for the community of life, ecological integrity, universal human rights, respect for diversity, economic justice, democracy, and a culture of peace. - The Earth Charter Mission Statement

Christian humanism is the belief that human freedom, individual conscience, and unencumbered rational inquiry are compatible with the practice of Christianity or even intrinsic in its doctrine. It represents a philosophical union of Christian faith and classical humanist principles. - Scott Postma

It is never wise to have a self-appointed religious institution determine a nation's moral code. The opportunities for moral compromise and failure are high; the moral codes and creeds assuredly racist, discriminatory, or subjectively and religiously defined; and the pronouncement of inhumanitarian political objectives quite predictable. - R.E. Slater

God's love must both center and define the Christian faith and all religious or human faiths seeking human and ecological balance in worlds of subtraction, harm, tragedy, and evil. - R.E. Slater

In Whitehead’s process ontology, we can think of the experiential ground of reality as an eternal pulse whereby what is objectively public in one moment becomes subjectively prehended in the next, and whereby the subject that emerges from its feelings then perishes into public expression as an object (or “superject”) aiming for novelty. There is a rhythm of Being between object and subject, not an ontological division. This rhythm powers the creative growth of the universe from one occasion of experience to the next. This is the Whiteheadian mantra: “The many become one and are increased by one.” - Matthew Segall

Without Love there is no Truth. And True Truth is always Loving. There is no dichotomy between these terms but only seamless integration. This is the premier centering focus of a Processual Theology of Love. - R.E. Slater

-----

Note: Generally I do not respond to commentary. I may read the comments but wish to reserve my time to write (or write from the comments I read). Instead, I'd like to see our community help one another and in the helping encourage and exhort each of us towards Christian love in Christ Jesus our Lord and Savior. - re slater

Saturday, June 8, 2024

Why Quantum Theory Does Not Support Materialism



Why Quantum Theory Does Not Support Materialism

by Bruce L. Gordon, Ph.D.
History and Philosophy of Physics, Baylor University
March 30, 2016


Materialism (or physicalism or naturalism) is the view that the sum and substance of everything that exists is exhausted by physical objects and processes and whatever supervenes causally upon them. The resources available to the materialist for providing an explanation of how the universe works are therefore restricted to material objects, causes, events and processes. Because quantum theory is thought to provide the bedrock for our scientific understanding of physical reality, it is to this theory that the materialist inevitably appeals in support of his worldview. But having fled to science in search of a safe haven for his doctrines, the materialist instead finds that quantum theory in fact dissolves and defeats his materialist understanding of the world.

Before we launch into a more detailed defense of this claim, it will help for those who are unfamiliar with quantum theory to have at their disposal a few non-technical definitions of central concepts. First of all, what is quantum theory? Broadly speaking, it is the mathematical theory describing the behavior of the physical world at the smallest and most fundamental level. It comprises quantum mechanics and quantum field theory, along with a variety of associated concepts and applications. Quantum mechanics describes the motion of objects at the atomic and subatomic scale. Fundamental to quantum mechanics is the duality of its phenomena – objects such as electrons and protons behave as either particles or waves depending on the experimental context. Similarly, radiation, such as light, exhibits both wave and particle behavior.

Quantum field theory is the quantum description of systems with an infinite number of degrees of freedom. It is frequently convenient to represent systems consisting of large numbers of objects – such as the ions and electrons in a metal or the nucleons in large nuclei – in the quantum field formalism.

Relativistic quantum field theory combines field theory (for example, the theory of the electromagnetic field), quantum mechanics and special relativity theory into a single mathematical structure. It is one of the primary tools of mathematical physicists. The search continues for an adequate quantum theory of gravity that would successfully express general relativity as a quantum field theory.

Quantum cosmology applies the quantum theory of fields to the question of the origin of the universe and its early development, but an adequate quantum cosmology ultimately requires a complete theory of quantum gravity.

One of the chief characteristics of quantum phenomena is their nonlocality and nonlocalizability. Every time a quantum object or system interacts with another quantum object or system, their existence becomes “entangled” in such a way that what happens to one of them instantaneously affects the other no matter how far apart they have separated. Since local effects obey the constraints of special relativity and propagate at speeds less than or equal to that of light, such instantaneous correlations are called nonlocal, and the quantum systems manifesting them are said to exhibit nonlocality. A result in mathematical physics called Bell’s theorem – after John Stewart Bell, the Irish physicist who proved it – shows that no hidden (empirically undetectable) variables can be added to the description of quantum systems exhibiting nonlocal behavior which would explain these instantaneous correlations on the basis of local considerations.

When such local variables are introduced, the predictions of the modified theory differ from those of quantum mechanics. A series of experiments beginning with those conducted by Alain Aspect at the University of Paris in the 1980s has demonstrated quite conclusively that quantum theory, not some theory modified by local hidden parameters, generates the correct predictions. The physical world, therefore, is fundamentally nonlocal and permeated with instantaneous connections and correlations. Nonlocalizability is a related phenomenon in relativistic quantum mechanics and quantum field theory in which it is impossible to isolate an unobserved quantum object, such as an electron, in a bounded region of space. As we shall see, nonlocality and nonlocalizability present intractable problems for the materialist.
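The quantitative gap between local hidden-variable theories and quantum mechanics can be made concrete with a few lines of arithmetic. The sketch below is illustrative only and goes beyond the article: it uses the standard textbook singlet-state correlation E(a, b) = -cos(a - b) and the CHSH combination of correlations measured in Bell-type experiments such as Aspect's. Any local hidden-variable model must keep this combination at or below 2; the quantum prediction reaches 2√2 ≈ 2.83.

```python
import math

# Quantum (singlet-state) correlation between spin measurements made
# along directions separated by the angle (a - b): E = -cos(a - b).
def quantum_correlation(a, b):
    return -math.cos(a - b)

# CHSH combination S = E(a,b) - E(a,b') + E(a',b) + E(a',b').
# Bell/CHSH: every local hidden-variable theory satisfies |S| <= 2.
def chsh(a, a2, b, b2):
    return (quantum_correlation(a, b) - quantum_correlation(a, b2)
            + quantum_correlation(a2, b) + quantum_correlation(a2, b2))

# Measurement angles that maximize the quantum violation.
a, a2 = 0.0, math.pi / 2
b, b2 = math.pi / 4, 3 * math.pi / 4

s = abs(chsh(a, a2, b, b2))
print(f"quantum CHSH value: {s:.3f}  (local hidden-variable bound: 2)")
```

The printed value, 2√2 ≈ 2.828, exceeds the local bound of 2 — this is the excess correlation that the Aspect experiments confirmed and that no local mechanism can reproduce.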

The ground has now been laid to summarize an argument showing not only that quantum theory does not support materialism but also that it is incompatible with materialism. The argument can be formulated in terms of the following premises and conclusion:

  • P1. Materialism is the view that the sum and substance of everything that exists is exhausted by physical objects and processes and whatever supervenes causally upon them.
  • P2. The explanatory resources of materialism are therefore restricted to material objects, causes, events and processes.
  • P3. Neither nonlocal quantum correlations nor (in light of nonlocalizability) the identity of the fundamental constituents of material reality can be explained or characterized if the explanatory constraints of materialism are preserved.
  • P4. These quantum phenomena require an explanation.
_____________________________________________

Therefore, materialism/naturalism/physicalism is irremediably deficient as a worldview, and consequently should be rejected as false and inadequate.

The first two premises of this argument are uncontroversial: the first is just a definition and the second is a consequence of this definition. The key premises of the argument are thus the third and fourth; once these are established, the conclusion follows directly. Let’s focus our attention, therefore, on justifying the claims in premises three and four.

In order for a particle to be a material individual, it must possess one or more well-defined and uniquely identifying properties. The prime example of such a property is spatio-temporal location. In order for something to exist as an individual material object, it must occupy a certain volume of space at a certain time. If it does not, then whatever it is – if it’s anything at all – it’s not a material object. The problem for the materialist is that the particles of relativistic quantum mechanics are not so localizable.

Stated roughly, Gerhard Hegerfeldt and David Malament have shown that if one assumes (quite reasonably) that an individual particle can neither serve as an infinite source of energy nor be in two places at once, then that particle has zero probability of being found in any bounded spatial region, no matter how large! In short, the “particle” doesn’t exist anywhere in space, and so, to be honest, it doesn’t really exist at all. Hans Halvorson and Robert Clifton have extended these results and closed some loopholes by showing that the Hegerfeldt-Malament proof still works under conditions that are even more general. In particular, they’ve shown that once relativity is taken into account, there can be no intelligible notion of microscopic material objects. Particle talk has pragmatic utility in relation to macroscopic appearances, but it has no basis in microphysical reality (and this is the rock-bottom reality for the materialist).

The underlying problem is this: there are correlations in nature that require a causal explanation but for which no physical explanation is in principle possible. Furthermore, the nonlocalizability of field quanta entails that these entities, whatever they are, fail the criterion of material individuality. So, paradoxically and ironically, the most fundamental constituents and relations of the material world cannot, in principle, be understood in terms of material substances. Since there must be some explanation for these things, the correct explanation will have to be one which is non-physical – and this is plainly incompatible with any and all varieties of materialism.

One possible materialist strategy of defense is to claim that nonlocal phenomena do not require an explanation since, while they may be a bit puzzling epistemically, they are not, ultimately, metaphysically problematic. This idea that none of the regularities in nature need causal grounding is captured in a concept that David Lewis calls “Humean supervenience.” Humean supervenience is intended as an account of how nature determines what is true about laws and chances quite independently of what we humans believe about the world – in other words, it is still to be understood as an ontological theory, not an epistemic one. The theory takes the fundamental relations of the world to be spatio-temporal in a manner consistent with special relativity, and has an ontology of points – or point-sized occupants of points – along with local qualities that are their intrinsic properties. Everything else supervenes on this spatio-temporal arrangement of local qualities. On this view, observed natural regularities are laws just in case they are the theorems of an axiomatic deductive system whose theorems are true and which strikes an optimal balance between simplicity and informativeness. Lewis postulates that there is exactly one best such system.

But this borders on incoherence. Humean supervenience would require that quantum outcomes, while nonlocally correlated, should nonetheless be understood in terms of local properties. Under such conditions, it becomes necessary to postulate random devices in harmony at spacelike separation without any deeper ontological explanation. Perhaps I can engender the requisite sense of puzzlement in the following way: accepting the plausibility of Humean supervenience in this context would be equivalent to believing that people sitting at typewriters in rooms on opposite sides of the world and simultaneously producing identical texts were not and had never been in communication with each other. The quantum description of the world is at least this improbable under Humean supervenience, with the added wrinkle that no common cause in the history of the system or locally transmitted information can account for the correlation. Incredulity is not just the natural response here; it is a necessary one. When the implications of the concept are grasped, Humean supervenience serves as a reductio of itself. So I repeat: a deeper explanation for quantum nonlocality is required and no physical explanation is possible.

The challenge to making metaphysical sense of quantum theory, therefore, is to give an account of what the world is like when it has an objective structure that does not supervene on material objects. With this stricture, the rather startling answer that begins to seem plausible is that preserving and explaining the objective structure of appearances requires reviving a type of phenomenalism in which our perception of the physical universe is constituted by sense-data conforming to certain structural constraints, but absent a material reality giving rise to these sensory perceptions. What remains, therefore, is an ontology of minds experiencing and generating mental events and processes that, when sensory in character, have a formal structure characterized by the fundamental symmetries and constraints represented in physical theory. The fact that these sensory perceptions are not mostly of our own making points to the falsity of any solipsistic inclination, but it also engenders some metaphysical and epistemological puzzlement. There is, however, one quite reasonable way to ground this ontology and obviate puzzlement: metaphysical objectivity and epistemic intersubjectivity are preserved in a theistic metaphysics that looks a lot like the immaterialism proposed by George Berkeley and Jonathan Edwards.

---

*Bruce Gordon received his Ph.D. in the history and philosophy of physics from Northwestern University. His primary research interests are in the areas of philosophy of science, philosophy of physics, analytic metaphysics, philosophical theology, and questions at the intersection of these disciplines. He has been at Baylor University since 1999 in the role of an administrator and adjunct assistant professor of philosophy. He is currently a scholar in residence at the Baylor Institute for Faith and Learning.

What is Neurophilosophy?


Credit: Perrin Ireland


REFERENCES



Five chapters in the book's first part, "Some Elementary Neuroscience," sketch the history of the science of nervous systems and provide a general introduction to neurophysiology, neuroanatomy, and neuropsychology. In the second part, "Recent Developments in the Philosophy of Science," chapters place the mind-body problem within the wider context of the philosophy of science. Drawing on recent research in this area, a general account of intertheoretic reduction is explained, arguments for a reductionist strategy are developed, and traditional objections from dualists and other antireductionists are answered in novel ways. The third part, "A Neurophilosophical Perspective," concludes the book with a presentation and discussion of some of the most promising theoretical developments currently under exploration in functional neurobiology and in the connectionist models within artificial intelligence research.




Bringing together recent case studies and insights into current developments, this collection introduces philosophers to a range of experimental methods from neuroscience. Chapters provide a comprehensive survey of the discipline, covering neuroimaging such as EEG and MRI, causal interventions like brain stimulation, advanced statistical methods, and approaches drawing on research into the development of human individuals and humankind.

A team of experts combine clear explanations of complex methods with reports of cutting-edge research, advancing our understanding of how these tools can be applied to further philosophical inquiries into agency, emotions, enhancement, perception, personhood and more. With contributions organised by neuroscientific method, this volume provides an accessible overview for students and scholars coming to neurophilosophy for the first time, presenting a range of topics from responsibility to metacognition. 

* * * * * * *

Neurophilosophy: My brain and I

Nature volume 499, page 282 (2013)

Chris Frith reflects on a book that probes the knotty nexus between brain and mind.

Touching a Nerve: The Self as Brain
by Patricia Churchland
W. W. Norton: 2013. ISBN: 978-0-3930-5832-1

Patricia Churchland is the doyenne of neurophilosophers. She believes, as I do, that to understand the mind, one must understand the brain, using evidence from neuroscience to refine concepts such as free will. Many philosophers and others are unhappy with this proposal. The problem, Churchland writes, is that deep down we are all dualists. Our conscious selves inhabit the world of ideas; our brains, the world of objects.

So deep is this split that we find it hard to accept an intimate relationship between the mind and brain. In Touching a Nerve, Churchland hopes to help us overcome this aversion and accept the “neural reality of our mental lives”. To encourage the general reader, she emphasizes her background as an unsophisticated country girl whose common sense stems from growing up on a farm in an isolated valley in British Columbia, Canada.

She begins by showing us how common sense and neuroscience reveal that there is no need for a soul. We are beginning to have an inkling of the underlying mechanisms that enable thinking, feeling and deciding, such as the precise way in which the anaesthetic procaine removes the sensation of pain. Common sense and neuroscience also tell us that there is no life after death. The light at the end of the tunnel associated with near-death experiences is the effect of oxygen starvation on the brain's visual system.

Churchland goes on to discuss morality, aggression, free will and consciousness. But if you were expecting thorough-going interpretations of these concepts in neuroscientific terms, you will be disappointed. She promotes the 'ordinary' meaning of free will — “intending your action, knowing what you are doing, and being of a sound mind”. She does not consider the disturbing results of neuroscience research, which suggest that awareness of action — intending and knowing — occurs after the action has been selected. We are also told that moral values such as honesty, loyalty and courage depend on learning local conventions and hearing the “stories [that] give you a sense of the right way to act”.

I have no quarrel with the idea that upbringing and culture have important roles in determining behaviour, but this does not seem compatible with Churchland's view that “our brains determine everything about who we are and how we experience the world”. She also misses the opportunity to present studies that explore the links between brain and culture. There are special processes in the human brain, such as the ability to imitate others with high fidelity, that enable the cumulative development of culture. At the same time, culture moulds the brain and may even drive genetic evolution (see S. E. Fisher & M. Ridley Science 340, 929–930; 2013). Each human brain is part of a dynamic, interacting system of other brains embedded in culture.

What neuroscience there is in Touching a Nerve is accurate and commendably up to date. There are useful notes associated with each chapter, including primary sources. Yet I became increasingly irritated by the mixture of science and homespun wisdom. Stories about badly behaved schoolgirls and White Leghorn hens did not help my understanding of the basis of aggression and sex. And the referencing is patchy: why does the statement “not every disappointment can be remedied” deserve a reference, whereas the neural basis of Charles Bonnet syndrome and the claim that patients with schizophrenia can tickle themselves do not? As for common sense, I agree with developmental biologist Lewis Wolpert that the important findings of science typically go against it. It is the data supporting the common-sense interpretation that need to be most carefully checked.

Nevertheless, it may well be true that dualism is deeply ingrained in our nature. A recent brain-imaging study revealed that we have two circumscribed brain circuits: one enables us to think about mental causation, such as how unfairness makes us angry; the other enables us to think about physical causality, such as how heat activates pain receptors. These circuits are mutually antagonistic, so we cannot do both at once (see A. I. Jack et al. NeuroImage 66, 385–401; 2013). But if mind–brain dualism is so deeply ingrained, why are the shops full of books such as Touching a Nerve, which show that it is the brain that makes decisions, determines moral values and explains political attitudes?

I can only assume that these are the modern equivalent of Gothic horror stories. We love to be frightened by the thought that we are nothing more than the 1.5 kilograms of sentient meat that is our brain, but we don't really believe it. I don't think Churchland really believes it either.

---

*Chris Frith is emeritus professor of neuropsychology at the Wellcome Trust Centre for Neuroimaging at University College London, and a fellow of All Souls College, Oxford. His books include Making Up the Mind: How the Brain Creates Our Mental World.




* * * * * * *


What is neurophilosophy: Do we need a non-reductive form?
Empirical science and naturalized philosophy lie on the same continuum. The branches and methodologies of the two disciplines (observational-experimental vs. rational-argumentative) are thus open to a possible union. Naturalized philosophy consequently makes possible an interdisciplinary, systematic, and bidirectional interaction between neuroscience and philosophy, as represented by non-reductive neurophilosophy (part 1.4).

Neurophilosophy

Neurophilosophy or the philosophy of neuroscience is the interdisciplinary study of neuroscience and philosophy that explores the relevance of neuroscientific studies to the arguments traditionally categorized as philosophy of mind. The philosophy of neuroscience attempts to clarify neuroscientific methods and results using the conceptual rigor and methods of philosophy of science.

Specific issues

Below is a list of specific issues important to philosophy of neuroscience:

  • "The indirectness of studies of mind and brain"[1]
  • "Computational or representational analysis of brain processing"[2]
  • "Relations between psychological and neuroscientific inquiries"[3]
  • Modularity of mind[2]
  • What constitutes adequate explanation in neuroscience?[4]
  • "Location of cognitive function"[5]

Indirectness of studies of the mind and brain

Many of the methods and techniques central to neuroscientific discovery rely on assumptions that can limit the interpretation of the data. Philosophers of neuroscience have discussed such assumptions in the use of functional magnetic resonance imaging (fMRI),[6][7] dissociation in cognitive neuropsychology,[8][9] single unit recording,[10] and computational neuroscience.[11] Following are descriptions of many of the current controversies and debates about the methods employed in neuroscience.

fMRI

Many fMRI studies rely heavily on the assumption of localization of function[12] (also known as functional specialization).

Localization of function means that many cognitive functions can be localized to specific brain regions. An example of functional localization comes from studies of the motor cortex.[13] There seem to be different groups of cells in the motor cortex responsible for controlling different groups of muscles.

Many philosophers of neuroscience criticize fMRI for relying too heavily on this assumption. Michael Anderson points out that subtraction-method fMRI misses a lot of brain information that is important to the cognitive processes.[14] Subtraction fMRI only shows the differences between the task activation and the control activation, but many of the brain areas activated in the control are obviously important for the task as well.
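Anderson's objection can be seen in a toy example (all numbers invented for illustration): a region that is equally active in both the task and control conditions vanishes from the subtraction map, even though it may be essential to performing the task.

```python
# Toy activation "maps": one value per brain region, recorded during
# a task condition and a control condition. Numbers are hypothetical.
task    = [3.0, 5.0, 2.0, 4.0]
control = [3.0, 1.0, 2.0, 4.0]

# Subtraction fMRI reports only the task-minus-control difference.
subtraction_map = [t - c for t, c in zip(task, control)]

# Regions 0, 2, and 3 are active in BOTH conditions — and may be
# crucial to the task — yet the subtraction map reports them as zero.
print(subtraction_map)  # → [0.0, 4.0, 0.0, 0.0]
```

Only region 1 survives the subtraction, which is exactly the information loss the criticism targets: "difference from control" is not the same thing as "involved in the task."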

Rejections of fMRI

Some philosophers entirely reject any notion of localization of function and thus believe fMRI studies to be profoundly misguided.[15] These philosophers maintain that brain processing acts holistically, that large sections of the brain are involved in processing most cognitive tasks (see holism in neurology and the modularity section below). One way to understand their objection to the idea of localization of function is the radio repairman thought experiment.[16] In this thought experiment, a radio repairman opens up a radio and rips out a tube. The radio begins whistling loudly and the radio repairman declares that he must have ripped out the anti-whistling tube. There is no anti-whistling tube in the radio and the radio repairman has confounded function with effect. This criticism was originally targeted at the logic used by neuropsychological brain lesion experiments, but the criticism is still applicable to neuroimaging.

These considerations are similar to Van Orden and Paap's criticism of circularity in neuroimaging logic.[17] According to them, neuroimagers assume that their theory of cognitive component parcellation is correct and that these components divide cleanly into feed-forward modules. These assumptions are necessary to justify their inference of brain localization. The logic is circular if the researcher then uses the appearance of brain region activation as proof of the correctness of their cognitive theories.

Reverse inference

A different problematic methodological assumption within fMRI research is the use of reverse inference.[18] A reverse inference is when the activation of a brain region is used to infer the presence of a given cognitive process. Poldrack points out that the strength of this inference depends critically on the likelihood that a given task employs a given cognitive process and the likelihood of that pattern of brain activation given that cognitive process. In other words, the strength of reverse inference is based upon the selectivity of the task used as well as the selectivity of the brain region activation.
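Poldrack's point is naturally cast in Bayesian terms. In the sketch below (all probabilities are invented purely for illustration), the posterior probability that a cognitive process is engaged, given activation in a region, collapses when the region is not selective — that is, when it also activates during tasks that do not engage the process.

```python
# Reverse inference as Bayes' rule. The posterior probability that a
# cognitive process P is engaged, given activation A in a region, is
#   P(P|A) = P(A|P) * P(P) / [ P(A|P) * P(P) + P(A|~P) * (1 - P(P)) ]
def posterior(p_act_given_proc, p_act_given_not, prior):
    evidence = p_act_given_proc * prior + p_act_given_not * (1 - prior)
    return p_act_given_proc * prior / evidence

# A highly selective region: rarely active unless the process runs.
selective = posterior(0.90, 0.05, prior=0.5)

# A non-selective region: active during many unrelated tasks too.
nonselective = posterior(0.90, 0.60, prior=0.5)

print(f"selective region:     P(process|activation) = {selective:.2f}")
print(f"non-selective region: P(process|activation) = {nonselective:.2f}")
```

With identical activation evidence, the selective region yields a posterior near 0.95 while the non-selective region yields only 0.60 — which is why activation in a promiscuously responsive area licenses only a weak reverse inference.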

A 2011 article published in the New York Times has been heavily criticized for misusing reverse inference.[19] In the study, participants were shown pictures of their iPhones and the researchers measured activation of the insula. The researchers took insula activation as evidence of feelings of love and concluded that people loved their iPhones. Critics were quick to point out that the insula is not a very selective piece of cortex, and therefore not amenable to reverse inference.

The neuropsychologist Max Coltheart took the problems with reverse inference a step further and challenged neuroimagers to give one instance in which neuroimaging had informed psychological theory.[20] Coltheart takes the burden of proof to be an instance where the brain imaging data is consistent with one theory but inconsistent with another theory.

Roskies maintains that Coltheart's ultra-cognitive position makes his challenge unwinnable.[21] Because Coltheart maintains that the implementation of a cognitive state has no bearing on the function of that cognitive state, it is impossible to find neuroimaging data that can comment on psychological theories in the way Coltheart demands. Neuroimaging data will always be relegated to the lower level of implementation and so cannot selectively determine one or another cognitive theory.

In a 2006 article, Richard Henson suggests that forward inference can be used to infer dissociation of function at the psychological level.[22] Such inferences can be made when there are crossed activations between two task types in two brain regions and no change in activation in a mutual control region.

Pure insertion

One final assumption is the assumption of pure insertion in fMRI.[23] The assumption of pure insertion is the assumption that a single cognitive process can be inserted into another set of cognitive processes without affecting the functioning of the rest. For example, to find the reading comprehension area of the brain, researchers might scan participants while they were presented with a word and while they were presented with a non-word (e.g. "Floob"). If the researchers then infer that the resulting difference in brain pattern represents the regions of the brain involved in reading comprehension, they have assumed that these changes are not reflective of changes in task difficulty or differential recruitment between tasks. The term pure insertion was coined by Donders as a criticism of reaction time methods.
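
The worry can be made concrete with a toy subtraction. In this sketch the signal is, by assumption, a simple sum of hypothetical components, and the non-word condition is made slightly harder; the subtraction then fails to isolate the reading component. All numbers are invented for illustration:

```python
# Toy illustration of the pure-insertion problem. The additive component
# model and every value here are invented, not real BOLD measurements.

def bold(reading, difficulty, baseline=1.0):
    """Hypothetical BOLD signal as a sum of components."""
    return baseline + reading + difficulty

word    = bold(reading=0.5, difficulty=0.2)   # real word: comprehension engaged
nonword = bold(reading=0.0, difficulty=0.4)   # "Floob": harder to process

# The subtraction is supposed to isolate reading comprehension (0.5),
# but the difficulty difference (-0.2) is silently folded in.
difference = round(word - nonword, 3)
print(difference)  # 0.3, not the true reading component of 0.5
```

If inserting the reading process also changed task difficulty or recruitment, the subtracted map mixes both, which is exactly what pure insertion assumes away.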

Resting-state functional-connectivity MRI

Recently, researchers have begun using a new functional imaging technique called resting-state functional-connectivity MRI.[24] Subjects are scanned while sitting idly in the scanner. By looking at the natural fluctuations in the blood-oxygen-level-dependent (BOLD) signal while the subject is at rest, researchers can see which brain regions co-vary in activation. They can then use the patterns of covariance to construct maps of functionally linked brain areas.

The name "functional connectivity" is somewhat misleading, since the data indicate only co-variation. Still, this is a powerful method for studying large networks throughout the brain.
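
The core computation can be sketched with synthetic time series standing in for real preprocessed BOLD data: two regions driven by a shared slow fluctuation correlate with each other, while an unrelated region does not.

```python
import math
import random

# Sketch of a functional-connectivity computation on synthetic "BOLD"
# time series; real analyses use preprocessed scanner data, not this toy.

def pearson(x, y):
    """Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

random.seed(0)
# Two regions driven by a shared slow fluctuation plus independent noise:
shared = [math.sin(t / 5.0) for t in range(200)]
region_a = [s + random.gauss(0, 0.3) for s in shared]
region_b = [s + random.gauss(0, 0.3) for s in shared]
region_c = [random.gauss(0, 1.0) for _ in range(200)]  # unrelated region

print(pearson(region_a, region_b) > pearson(region_a, region_c))  # True
```

Note that the correlation establishes only co-variation, not a direct anatomical connection, which is the ambiguity in the name.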

Methodological issues

Two important methodological issues need to be addressed. First, there are many different possible brain mappings that could be used to define the brain regions for the network, and the results could vary significantly depending on the regions chosen.

Second, which mathematical techniques are best suited to characterizing these brain regions?

The brain regions of interest are somewhat constrained by the size of the voxels. Rs-fcMRI uses voxels that are a few cubic millimeters, so the brain regions have to be defined on a larger scale. Two of the statistical methods commonly applied to network analysis can work at the single-voxel spatial scale, but graph-theoretic methods are extremely sensitive to the way nodes are defined.

Brain regions can be divided according to their cellular architecture, according to their connectivity, or according to physiological measures. Alternatively, one could take a "theory-neutral" approach, and randomly divide the cortex into partitions with an arbitrary size.

As mentioned earlier, there are several approaches to network analysis once the brain regions have been defined. Seed-based analysis begins with an a priori defined seed region and finds all of the regions that are functionally connected to it. Wig et al. caution that the resulting network structure gives no information concerning the inter-connectivity of the identified regions or the relations of those regions to regions other than the seed region.

Another approach is to use independent component analysis (ICA) to create spatio-temporal component maps; the components are then sorted into those that carry information of interest and those that are caused by noise. Wig et al. again caution that inferring functional brain-region communities is difficult under ICA. ICA also has the issue of imposing orthogonality on the data.[25]

Graph theory uses a matrix to characterize covariance between regions, which is then transformed into a network map. The problem with graph-theoretic analysis is that the network mapping is heavily influenced by the a priori definition of brain regions and connectivity (nodes and edges). This places the researcher at risk of cherry-picking regions and connections according to their own preconceived theories. However, graph-theoretic analysis is still considered extremely valuable, as it is the only method that gives pair-wise relationships between nodes.
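
A stripped-down sketch of that node-and-edge step, with made-up region labels, correlation values, and threshold — precisely the places where the a priori choices enter:

```python
# Minimal sketch of the graph-theoretic step: thresholding a (here
# hand-written) correlation matrix into a network map. The region
# names, the correlation values, and the 0.5 threshold are all
# arbitrary choices -- exactly the dependence the criticism targets.

regions = ["A", "B", "C", "D"]
corr = {
    ("A", "B"): 0.8, ("A", "C"): 0.1, ("A", "D"): 0.2,
    ("B", "C"): 0.6, ("B", "D"): 0.1, ("C", "D"): 0.7,
}

threshold = 0.5
# Keep only region pairs whose correlation clears the threshold:
edges = [pair for pair, r in corr.items() if r >= threshold]
print(edges)  # the pair-wise relationships that define the network
```

Halving or doubling the threshold yields a different network from the same data, which is why the node and edge definitions matter so much.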

While ICA may have the advantage of being a fairly principled method, it seems that using both methods will be important for better understanding the network connectivity of the brain. Mumford et al. hoped to avoid these issues by using a principled approach that could also determine pair-wise relationships: a statistical technique adapted from the analysis of gene co-expression networks.

Dissociation in cognitive neuropsychology

Cognitive neuropsychology studies brain-damaged patients and uses the patterns of selective impairment to make inferences about the underlying cognitive structure. Dissociation between cognitive functions is taken to be evidence that these functions are independent. Theorists have identified several key assumptions that are needed to justify these inferences:[26]

  1. Functional modularity – the mind is organized into functionally separate cognitive modules.
  2. Anatomical modularity – the brain is organized into functionally separate modules. This assumption is very similar to the assumption of functional localization. It differs from the assumption of functional modularity because it is possible to have separable cognitive modules that are implemented by diffuse patterns of brain activation.
  3. Universality – The basic organization of functional and anatomical modularity is the same for all normal humans. This assumption is needed if we are to make any claim about functional organization based on dissociation that extrapolates from the instance of a case study to the population.
  4. Transparency / Subtractivity – the mind does not undergo substantial reorganization following brain damage, so it is possible to remove one functional module without significantly altering the overall structure of the system. This assumption is necessary to justify using brain-damaged patients to make inferences about the cognitive architecture of healthy people.

There are three principal types of evidence in cognitive neuropsychology: association, single dissociation and double dissociation.[27] Association inferences observe that certain deficits are likely to co-occur. For example, there are many cases of deficits in both abstract and concrete word comprehension following brain damage. Association studies are considered the weakest form of evidence, because the results could be accounted for by damage to neighboring brain regions rather than damage to a single cognitive system.[28] Single dissociation inferences observe that one cognitive faculty can be spared while another is damaged following brain damage. This pattern indicates that a) the two tasks employ different cognitive systems, b) the two tasks occupy the same system and the damaged task is downstream of the spared task, or c) the spared task requires fewer cognitive resources than the damaged task. The "gold standard" of cognitive neuropsychology is the double dissociation. A double dissociation occurs when brain damage impairs task A in patient 1 but spares task B, while brain damage spares task A in patient 2 but impairs task B. It is assumed that one instance of double dissociation is sufficient proof to infer separate cognitive modules in the performance of the two tasks.
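
The crossed pattern itself is simple enough to write down directly. In this sketch the patient scores and the impairment cutoff are invented; the code merely encodes the pattern that the double-dissociation inference relies on:

```python
# Toy encoding of the double-dissociation pattern; the scores are
# invented, and "impaired" is just an arbitrary cutoff for illustration.

CUTOFF = 50  # hypothetical impairment threshold

patients = {
    "Patient 1": {"task_A": 20, "task_B": 90},  # A impaired, B spared
    "Patient 2": {"task_A": 95, "task_B": 15},  # A spared, B impaired
}

def impaired(score):
    return score < CUTOFF

def double_dissociation(p1, p2):
    """True only for the crossed pattern: p1 fails A/passes B, p2 the reverse."""
    return (impaired(p1["task_A"]) and not impaired(p1["task_B"])
            and not impaired(p2["task_A"]) and impaired(p2["task_B"]))

print(double_dissociation(patients["Patient 1"], patients["Patient 2"]))  # True
```

The criticisms that follow target the inference from this behavioral pattern to separate modules, not the pattern itself.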

Many theorists criticize cognitive neuropsychology for its dependence on double dissociations. In one widely cited study, Juola and Plunkett used a connectionist model to demonstrate that double-dissociation behavioral patterns can occur through random lesions of a single module.[29] They created a multilayer connectionist system trained to pronounce words, repeatedly simulated random destruction of nodes and connections, and plotted the resulting performance on a scatter plot. The results showed deficits in irregular noun pronunciation with spared regular verb pronunciation in some cases, and deficits in regular verb pronunciation with spared irregular noun pronunciation in others. These results suggest that a single instance of double dissociation is insufficient to justify the inference to multiple systems.[30]

Chater offers a theoretical case in which double-dissociation logic can be faulty.[31] If two tasks, task A and task B, use almost all of the same systems but differ by one mutually exclusive module apiece, then the selective lesioning of those two modules would seem to indicate that A and B use different systems. Chater uses the example of someone who is allergic to peanuts but not shrimp and someone who is allergic to shrimp but not peanuts, and argues that double-dissociation logic leads one to infer that peanuts and shrimp are digested by different systems. John Dunn offers another objection to double dissociation.[32] He claims that it is easy to demonstrate the existence of a true deficit but difficult to show that another function is truly spared. As more data are accumulated, the estimated effect size will converge toward zero, but a positive effect size, however small, can never be ruled out. Therefore, it is impossible to be fully confident that a given double dissociation actually exists.

On a different note, Alfonso Caramazza has given a principled reason for rejecting the use of group studies in cognitive neuropsychology.[33] Studies of brain-damaged patients can take the form either of a single case study, in which an individual's behavior is characterized and used as evidence, or of a group study, in which a group of patients displaying the same deficit have their behavior characterized and averaged. To justify grouping a set of patient data together, the researcher must know that the group is homogeneous, that is, that their behavior is equivalent in every theoretically meaningful way. In brain-damaged patients, this can only be established a posteriori by analyzing the behavior patterns of all the individuals in the group. Thus, according to Caramazza, any group study is either the equivalent of a set of single case studies or is theoretically unjustified. Newcombe and Marshall pointed out that some syndromes (they used Geschwind's syndrome as an example) are consistent enough that group studies might still serve as a useful heuristic in cognitive neuropsychological research.[34]

Single-unit recordings

It is commonly understood in neuroscience that information is encoded in the brain by the firing patterns of neurons.[35] Many of the philosophical questions surrounding the neural code are related to questions about representation and computation that are discussed below. There are other methodological questions, including whether neurons represent information through an average firing rate or through the temporal dynamics of their firing, and whether neurons represent information individually or as a population.
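
The rate-versus-timing question can be illustrated with a toy spike train (the spike times below are invented, in seconds):

```python
# A rate code reduces a spike train to its average firing rate; a
# temporal code also cares where the spikes fall. The spike times
# here are invented for illustration.

spike_times = [0.01, 0.05, 0.12, 0.31, 0.33, 0.34, 0.80]
window = 1.0  # seconds of recording

rate = len(spike_times) / window  # rate code: spikes per second

# Inter-spike intervals carry temporal structure that a pure rate
# code throws away (note the burst around 0.31-0.34 s):
intervals = [b - a for a, b in zip(spike_times, spike_times[1:])]
print(rate, min(intervals))
```

Two trains with the same rate but different interval structure are identical to a rate code and distinct to a temporal code, which is what makes the question substantive.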

Computational neuroscience

Many of the philosophical controversies surrounding computational neuroscience involve the role of simulation and modeling as explanation. Carl Craver has been especially vocal about such interpretations.[36] Jones and Love wrote an especially critical article targeted at Bayesian behavioral modeling that did not constrain the modeling parameters by psychological or neurological considerations.[37] Eric Winsberg has written about the role of computer modeling and simulation in science generally, and his characterization is applicable to computational neuroscience.[38]

Computation and representation in the brain

The computational theory of mind has been widespread in neuroscience since the cognitive revolution in the 1960s. This section will begin with a historical overview of computational neuroscience and then discuss various competing theories and controversies within the field.

Historical overview

Computational neuroscience began in the 1930s and 1940s with two groups of researchers. The first group consisted of Alan Turing, Alonzo Church and John von Neumann, who were working to develop computing machines and the mathematical underpinnings of computer science.[39] This work culminated in the theoretical development of so-called Turing machines and the Church–Turing thesis, which formalized the mathematics underlying computability theory. The second group consisted of Warren McCulloch and Walter Pitts, who were working to develop the first artificial neural networks. McCulloch and Pitts were the first to hypothesize that neurons could be used to implement a logical calculus that could explain cognition. They used their toy neurons to develop logic gates that could make computations.[40] However, these developments failed to take hold in the psychological sciences and neuroscience until the mid-1950s and 1960s. Behaviorism had dominated psychology until the 1950s, when new developments in a variety of fields overturned behaviorist theory in favor of a cognitive theory. From the beginning of the cognitive revolution, computational theory played a major role in theoretical developments. Minsky and McCarthy's work in artificial intelligence, Newell and Simon's computer simulations, and Noam Chomsky's importation of information theory into linguistics were all heavily reliant on computational assumptions.[41] By the early 1960s, Hilary Putnam was arguing in favor of machine functionalism, in which the brain instantiated Turing machines. By this point computational theories were firmly fixed in psychology and neuroscience. By the mid-1980s, a group of researchers began using multilayer feed-forward analog neural networks that could be trained to perform a variety of tasks.
The work of researchers like Sejnowski, Rosenberg, Rumelhart, and McClelland was labeled connectionism, and the discipline has continued since then.[42] The connectionist mindset was embraced by Paul and Patricia Churchland, who then developed their "state space semantics" using concepts from connectionist theory. Connectionism was also condemned by researchers such as Fodor, Pylyshyn, and Pinker. The tension between the connectionists and the classicists is still being debated today.

Representation

One of the reasons that computational theories are appealing is that computers have the ability to manipulate representations to give meaningful output. Digital computers use strings of 1s and 0s to represent content. Most cognitive scientists posit that the brain uses some form of representational code that is carried in the firing patterns of neurons. Computational accounts seem to offer an easy way of explaining how human brains carry and manipulate the perceptions, thoughts, feelings, and actions of individuals.[43] While most theorists maintain that representation is an important part of cognition, the exact nature of that representation is highly debated. The two main arguments come from advocates of symbolic representations and advocates of associationist representations.

Symbolic representational accounts have been famously championed by Fodor and Pinker. Symbolic representation means that objects are represented by symbols and processed through rule-governed manipulations that are sensitive to the constituent structure of the representations. The fact that symbolic representation is sensitive to this structure is a major part of its appeal. Fodor proposed the language of thought hypothesis, in which mental representations are manipulated in the same way that language is syntactically manipulated in order to produce thought. According to Fodor, the language of thought hypothesis explains the systematicity and productivity seen in both language and thought.[44]
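
Structure-sensitivity can be shown with a minimal sketch: one rule, defined over constituent structure alone, applies to any proposition of the right form. The tuple mini-grammar here is an invented stand-in, not Fodor's actual formalism:

```python
# Sketch of rule-governed symbol manipulation. The (relation, subject,
# object) tuple format is an invented toy grammar for illustration.

def swap_arguments(proposition):
    """A rule sensitive only to constituent structure: (R, x, y) -> (R, y, x)."""
    relation, subj, obj = proposition
    return (relation, obj, subj)

# Systematicity: a system that can represent "John loves Mary" can,
# by the very same rule, represent "Mary loves John".
p = ("loves", "John", "Mary")
print(swap_arguments(p))  # ('loves', 'Mary', 'John')
```

The rule never inspects what "loves", "John", or "Mary" mean; it operates on structure alone, which is the feature symbolic accounts prize.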

Associationist representations are most often described with connectionist systems. In connectionist systems, representations are distributed across all the nodes and connection weights of the system and are thus said to be sub-symbolic.[45] A connectionist system is, moreover, capable of implementing a symbolic system. There are several important aspects of neural nets that suggest that distributed parallel processing provides a better basis for cognitive functions than symbolic processing. Firstly, the inspiration for these systems came from the brain itself, indicating biological relevance. Secondly, these systems are capable of content-addressable memory, which is far more efficient than memory searches in symbolic systems. Thirdly, neural nets are resilient to damage, while even minor damage can disable a symbolic system. Lastly, soft constraints and generalization when processing novel stimuli allow nets to behave more flexibly than symbolic systems.
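
The content-addressable-memory point can be sketched in a few lines: a stored pattern is recovered from a degraded cue by similarity to the content itself, not by looking up an address. The ±1 patterns and labels below are invented toy data:

```python
# Minimal sketch of content-addressable retrieval, in the spirit of toy
# connectionist demos. The stored +1/-1 patterns are invented.

stored = {
    "cat": [1, 1, -1, -1, 1],
    "dog": [-1, 1, 1, -1, -1],
}

def recall(cue):
    """Return the stored label whose pattern best overlaps the cue."""
    def overlap(pattern):
        return sum(a * b for a, b in zip(cue, pattern))
    return max(stored, key=lambda label: overlap(stored[label]))

# A degraded "cat" cue (one element flipped, one zeroed) still
# retrieves "cat" -- the content itself addresses the memory.
print(recall([1, -1, -1, -1, 0]))
```

A symbolic lookup table given the same corrupted key would simply miss, which is the efficiency and robustness contrast the paragraph describes.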

The Churchlands described representation in a connectionist system in terms of state space. The content of the system is represented by an n-dimensional vector, where n is the number of nodes in the system and the direction of the vector is determined by the activation pattern of the nodes. Fodor rejected this method of representation on the grounds that two different connectionist systems could not have the same content.[46] Further mathematical analysis of connectionist systems revealed that systems containing similar content could be mapped graphically to reveal clusters of nodes that were important to representing the content.[47] However, state-space vector comparison was not amenable to this type of analysis. Recently, Nicholas Shea has offered his own account of content within connectionist systems that employs the concepts developed through cluster analysis.
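
A minimal sketch of the state-space picture, with invented activation values: each network state is a vector of node activations, and similarity of content is proximity of vectors (here measured by cosine similarity, one common choice):

```python
import math

# Sketch of the state-space idea: a 4-node network's state is a
# 4-dimensional activation vector. The activation values are invented.

def cosine(u, v):
    """Cosine similarity: 1.0 for identical directions, ~0 for unrelated."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

state_1 = [0.9, 0.1, 0.8, 0.2]
state_2 = [0.8, 0.2, 0.9, 0.1]   # nearby in state space: similar content
state_3 = [0.1, 0.9, 0.1, 0.9]   # a distant region of the space

print(cosine(state_1, state_2) > cosine(state_1, state_3))  # True
```

Fodor's objection is that this geometry is tied to one network's particular nodes, so two different networks cannot literally share a content vector.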

Views on computation

Computationalism, a kind of functionalist philosophy of mind, is committed to the position that the brain is some sort of computer, but what does it mean to be a computer? The definition of computation must be narrow enough to limit the number of objects that can be called computers: for example, it seems problematic to have a definition wide enough to allow stomachs and weather systems to count as computing. However, the definition must also be broad enough to encompass the wide variety of computational systems. For example, if the definition of computation is limited to the syntactic manipulation of symbolic representations, then most connectionist systems would not count as computing.[48] Rick Grush distinguishes between computation as a tool for simulation and computation as a theoretical stance in cognitive neuroscience.[49] On the former reading, anything that can be computationally modeled counts as computing. On the latter, the brain literally computes, in a way that systems such as fluid-dynamic systems and planetary orbits do not. The challenge for any computational definition is to keep the two senses distinct.

Alternatively, some theorists choose to accept a narrow or wide definition for theoretical reasons. Pancomputationalism is the position that everything can be said to compute. This view has been criticized by Piccinini on the grounds that such a definition makes computation trivial to the point where it is robbed of its explanatory value.[50]

The simplest definition of computation is that a system can be said to be computing when a computational description can be mapped onto its physical description. This is an extremely broad definition, and it ends up endorsing a form of pancomputationalism. Putnam and Searle, who are often credited with this view, maintain that computation is observer-relative: if you want to view a system as computing, then you can say that it is computing. Piccinini points out that, on this view, not only is everything computing, but everything is computing in an indefinite number of ways.[51] Since it is possible to apply an indefinite number of computational descriptions to a given system, the system ends up computing an indefinite number of tasks.

The most common view of computation is the semantic account. Semantic approaches use a notion of computation similar to the mapping approaches, with the added constraint that the system must manipulate representations with semantic content. Note from the earlier discussion of representation that both the Churchlands' connectionist systems and Fodor's symbolic systems use this notion of computation. In fact, Fodor is famously credited with saying "No computation without representation".[52] Computational states can be individuated by an externalist appeal to content in the broad sense (i.e. the object in the external world) or by an internalist appeal to content in the narrow sense (content defined by the properties of the system).[53] In order to fix the content of the representation, it is often necessary to appeal to the information contained within the system. Grush provides a criticism of the semantic account.[49] He points out that the semantic account appeals to the informational content of a system to demonstrate representation by the system, yet merely containing information is not enough. He uses his coffee cup as an example of a system that contains information, such as the heat conductance of the cup and the time since the coffee was poured, but is too mundane to compute in any robust sense. Semantic computationalists try to escape this criticism by appealing to the evolutionary history of the system; this is called the biosemantic account. Grush uses the example of his feet, saying that on this account his feet would not be computing the amount of food he had eaten, because their structure had not been evolutionarily selected for that purpose. Grush replies to the appeal to biosemantics with a thought experiment: imagine that lightning strikes a swamp somewhere and creates an exact copy of you. According to the biosemantic account, this swamp-you would be incapable of computation because there is no evolutionary history with which to justify assigning representational content. The idea that of two physically identical structures one can be said to be computing while the other is not should be disturbing to any physicalist.

There are also syntactic or structural accounts of computation. These accounts do not need to rely on representation; however, it is possible to use both structure and representation as constraints on computational mapping. Oron Shagrir identifies several philosophers of neuroscience who espouse structural accounts. According to him, Fodor and Pylyshyn require some sort of syntactic constraint on their theory of computation. This is consistent with their rejection of connectionist systems on the grounds of systematicity. He also identifies Piccinini as a structuralist, quoting his 2008 paper: "the generation of output strings of digits from input strings of digits in accordance with a general rule that depends on the properties of the strings and (possibly) on the internal state of the system".[54] Though Piccinini undoubtedly espouses structuralist views in that paper, he claims that mechanistic accounts of computation avoid reference to either syntax or representation.[53] It is possible that Piccinini thinks there are differences between syntactic and structural accounts of computation that Shagrir does not respect.

In his view of mechanistic computation, Piccinini asserts that functional mechanisms process vehicles in a manner sensitive to the differences between different portions of the vehicle, and thus can be said to generically compute. He claims that these vehicles are medium-independent, meaning that the mapping function will be the same regardless of the physical implementation. Computing systems can be differentiated based upon the vehicle structure and the mechanistic perspective can account for errors in computation.

Dynamical systems theory presents itself as an alternative to computational explanations of cognition. These theories are staunchly anti-computational and anti-representational. Dynamical systems are defined as systems that change over time in accordance with a mathematical equation. Dynamical systems theory claims that human cognition is a dynamical system, in the same sense in which computationalists claim that the human mind is a computer.[55] A common objection leveled at dynamical systems theory is that dynamical systems are computable and therefore a subset of computationalism. Van Gelder is quick to point out that there is a big difference between being a computer and being computable: making the definition of computing wide enough to incorporate dynamical models would effectively embrace pancomputationalism.
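
The contrast can be made concrete with a toy dynamical model: the state evolves continuously under a differential equation rather than by discrete symbol manipulation. The equation dx/dt = -k(x - target) and all parameter values below are an invented illustration, not a model from the dynamicist literature:

```python
# A toy dynamical system integrated with Euler steps: the state is a
# single real number evolving under dx/dt = -k * (x - target).
# The equation and parameters are invented for illustration.

def simulate(x0, target, k=1.0, dt=0.01, steps=1000):
    x = x0
    for _ in range(steps):
        x += -k * (x - target) * dt  # one Euler integration step
    return x

# Whatever the starting state, the trajectory settles onto the
# attractor at `target` -- no symbols or rules are manipulated.
print(simulate(x0=5.0, target=1.0), simulate(x0=-3.0, target=1.0))
```

Such a model is computable (we just simulated it), which is precisely why van Gelder insists that being simulable on a computer is not the same as being a computer.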

Notes

  1. ^ Bechtel, Mandik & Mundale 2001, p. 15.
  2. ^ a b Bechtel, Mandik & Mundale 2001, pp. 15–16, 18–19.
  3. ^ Bechtel, Mandik & Mundale 2001, p. 16.
  4. ^ Craver, "Explaining the Brain: Mechanisms and Mosaic Unity of Neuroscience" 2007, Oxford University Press, citation: preface vii
  5. ^ Bickle, John, Mandik, Peter and Landreth, Anthony, "The Philosophy of Neuroscience", The Stanford Encyclopedia of Philosophy (Summer 2010 Edition), Edward N. Zalta (ed.), URL = <http://plato.stanford.edu/archives/sum2010/entries/neuroscience/>
  6. ^ Poldrack (2010). "Subtraction and Beyond" in Hanson and Bunzl, Human Brain Mapping. pp. 147–160
  7. ^ Klein C. (2010) "Philosophical Issues in Neuroimaging" Philosophy Compass 5(2) pp. 186–198
  8. ^ Dunn (2003) "The Elusive Dissociation" cortex 39 no. 1 pp. 21–37
  9. ^ Dunn and Kirsner. 2003. What can we infer from double dissociations?
  10. ^ deCharms and Zador (2000) "Neural Representation and the temporal code" Annual Review of Neuroscience 23: pp. 613–47
  11. ^ Winsberg (2003). "Simulated Experiments: a Methodology for the Virtual World" Philosophy of Science.vol 70 no 1 105–125
  12. ^ Huettel, Song and McCarthy Functional Magnetic Resonance Imaging 2009 Sinauer Associates pp. 1
  13. ^ Passingham, R. E. Stephan, K. E. Kotter, R. "The anatomical basis of functional localization in the cortex" Nature Reviews Neuroscience. 2002, VOL 3; PART 8, pages 606–616
  14. ^ Anderson.(2007) "The Massive Redeployment Hypothesis and Functional Topography of the Brain" Philosophical Psychology Vol20 no 2 pp.144–149
  15. ^ Anderson (2007) "The Massive Redeployment Hypothesis and Functional Topography of the Brain" Philosophical Psychology Vol 20 no 2 pp. 149–152
  16. ^ Bunzl, Hanson, and Poldrack "An Exchange about Localization of Function" Human Brain Mapping. pp. 50
  17. ^ VanOrden, G and Paap, K "Functional Neuroimaging fails to discover Pieces of the Mind" Philosophy of science. 64 pp. S85-S94
  18. ^ Poldrack (2006). "Can Cognitive Processes be inferred from Neuroimaging Data"Trends in Cognitive Sciences. vol 10 no 2
  19. ^ Hayden, B "Do you Really love Your iPhone that Way" http://www.psychologytoday.com/blog/the-decision-tree/201110/do-you-really-love-your-iphone-way
  20. ^ Coltheart, M(2006b), "What Has Functional Neuroimaging Told Us about the Mind (So Far)?", Cortex 42: 323–331.
  21. ^ Roskies, A. (2009). "Brain-Mind and Structure-Function Relations: A Methodological Response to Coltheart" Philosophy of Science. vol 76
  22. ^ Henson, R (2006). "Forward Inference Using Functional Neuroimaging: Dissociations vs Associations" Trends in Cognitive Sciences vol 10 no 2
  23. ^ Poldrack "Subtraction and Beyond" in Hanson and Bunzl Human Brain Mapping pp. 147–160
  24. ^ Wig, Schlaggar, and Peterson (2011) "Concepts and Principals in the Analysis of Brain Networks" Annals of the New York Academy of Sciences 1224
  25. ^ Mumford et al (2010) "Detecting network modules in fMRI time series: A weighted network analysis approach" Neuroimage. 52
  26. ^ Coltheart, M "Assumptions and Methods in Cognitive Neuropsychology" in The Handbook of Cognitive Neuropsychology. 2001
  27. ^ Patterson, K and Plaut, D (2009) "Shallow Droughts Intoxicate the Brain: Lessons from Cognitive Science for Cognitive Neuropsychology"
  28. ^ Davies, M (2010) "Double Dissociation: Understanding its Role in Cognitive Neuropsychology" Mind & Language vol 25 no 5 pp500-540
  29. ^ Juola and Plunkett (1998). "Why Double Dissociations Don't Mean Much" Proceedings of the Cognitive Science Society
  30. ^ Keren, G and Schuly (2003) "Two is not Always Better than One: a Critical Evaluation of Two System Theories" Perspectives on Psychological Science Vol 4 no 6
  31. ^ Chater, N (2003). "How Much Can We Learn From Double Dissociations" Cortex 39 pp. 176–179
  32. ^ Dunn, J (2003) "The elusive Dissociation" Cortex 39 no 1 21–37
  33. ^ Caramazza, A (1986) "On Drawing Inferences about the Structure of Normal Cognitive Systems From the Analysis of Patterns of Impaired Performance: the Case for Single Case Studies"
  34. ^ Newcombe and Marshall (1988). "Idealization Meets Psychometrics. The case for the Right Groups and the Right Individuals" Human Cognitive Neuropsychology edited by Ellis and Young
  35. ^ deCharms and Zador (2000) "Neural Representations and the Cortical Code" Annual Review of Neuroscience 23:613–647
  36. ^ Craver, Carl Explaining the Brain. Oxford University Press New York, New York. 2007
  37. ^ Jones and Love (2011) "Bayesian Fundamentalism or Enlightenment? On the explanatory status and theoretical contribution of Bayesian models of cognition" Behavioral and Brain Sciences vol 34 no 4
  38. ^ Winberg, E (2003). "Simulated Experiments: Methodology for a Virtual World" Philosophy of Science.vol 70 no 1
  39. ^ Horst, Steven, "The Computational Theory of Mind", The Stanford Encyclopedia of Philosophy (Spring 2011 Edition), Edward N. Zalta (ed.), URL = http://plato.stanford.edu/archives/spr2011/entries/computational-mind/
  40. ^ Piccinini, G (2009) "Computationalism in the Philosophy of Mind" Philosophy Compass vol 4
  41. ^ Miller, G (2003) "The Cognitive Revolution: a Historical Perspective" Trends in Cognitive Sciences. vol 7 no 3
  42. ^ Garson, James, "Connectionism", The Stanford Encyclopedia of Philosophy (Winter 2010 Edition), Edward N. Zalta (ed.), URL = http://plato.stanford.edu/archives/win2010/entries/connectionism/
  43. ^ Pitt, David, "Mental Representation", The Stanford Encyclopedia of Philosophy (Fall 2008 Edition), Edward N. Zalta (ed.), URL = <http://plato.stanford.edu/archives/fall2008/entries/mental-representation/>
  44. ^ Aydede, Murat, "The Language of Thought Hypothesis", The Stanford Encyclopedia of Philosophy (Fall 2010 Edition), Edward N. Zalta (ed.), URL = <http://plato.stanford.edu/archives/fall2010/entries/language-thought/>
  45. ^ Bechtel and Abrahamsen. Connectionism and the Mind. 2nd ed. Malden, Mass. : Blackwell, 2002.
  46. ^ Shea, N. "Content and its Vehicles in Connectionist Systems" Mind and Language. 2007
  47. ^ Laakso, Aarre & Cottrell, Garrison W. (2000). Content and cluster analysis: Assessing representational similarity in neural systems. Philosophical Psychology 13 (1):47–76
  48. ^ Shagrir (2010). "Computation San Diego Style" Philosophy of science vol 77
  49. ^ a b Grush, R (2001) "The Semantic Challenge to Computational Neuroscience" in Peter K. Machamer, Peter McLaughlin & Rick Grush (eds.), Theory and Method in the Neurosciences. University of Pittsburgh Press.
  50. ^ Piccinini, G. (2010). "The Mind as Neural Software? Understanding Functionalism, Computationalism, and Computational Functionalism." Philosophy and Phenomenological Research
  51. ^ Piccinini, G. (2010b). "The Mind as Neural Software? Understanding Functionalism, Computationalism, and Computational Functionalism." Philosophy and Phenomenological Research 81
  52. ^ Piccinini, G (2009) "Computation in the Philosophy of Mind" Philosophy Compass. vol 4
  53. ^ a b Piccinini, Gualtiero, "Computation in Physical Systems", The Stanford Encyclopedia of Philosophy (Fall 2010 Edition), Edward N. Zalta (ed.), URL = <http://plato.stanford.edu/archives/fall2010/entries/computation-physicalsystems/>
  54. ^ Piccinini (2008). "Computation without Representation" Philosophical Studies vol 137 no 2
  55. ^ van Gelder, T. J. (1998) The dynamical hypothesis in cognitive science. Behavioral and Brain Sciences 21, 1–14
