To anyone who is reading this or has read any of my previous posts, please accept my apologies for not updating in a while. There have been other things on my mind, and through experimenting with blogging as a means of expression, I have reached the conclusion that it probably isn’t my thing. Consequently, I have no intention right now of continuing this project and will revert to reading news of interest here on WordPress, rather than attempting to write my own.
It surprised me that I hadn’t come across this video before: a debate between a fascinating selection of speakers about the relation between facts and values.
If you read my earlier post ”Can ethics be moved in the direction of becoming scientific?”, you may recall the criticism directed at Sam Harris’s book The Moral Landscape. The argument he puts forth in the debate is pretty much identical to the main point of the book (that moral values can in principle be deduced from facts).
Even granting that values reduce to facts about the well-being of conscious creatures, Patricia Churchland, also a neuroscientist, makes a good point: we can disagree about values while agreeing about the facts.
We can look upon the same evidence and draw different normative conclusions about what should be done taking this evidence into account. Furthermore, well-being is not some predefined ultimate goal, even though evolution might have favored creatures capable of attaining it.
On a side note, it’s also interesting that some people not only disagree about values, but also about evidence. Take the debate between Richard Dawkins and Wendy Wright for example, where Dawkins repeatedly encourages Wright to ”just look at the evidence”, while she doesn’t seem to understand what he means and clearly thinks she has seen the evidence for herself.
Even in less extreme cases, we might interpret evidence differently, based on our values and on what we want to be true according to some preconceived, favored world-view, or our beliefs about the world.
Reducing all values to well-being would require an extremely broad definition of the term ”well-being”, one that includes even the minimal feeling of satisfaction one gets from finding an interpretation of the facts that agrees with one’s world-view.
Another problem with Harris’s ideas is expanded upon by philosopher Simon Blackburn, to be discussed later.
Next up is Peter Singer, who holds what is probably the most common-sense view: that science can inform morality but not determine it. Science can’t really tell us what is right and wrong, but it can indeed help us make that decision.
(…I’m having a somewhat disturbing association to this Hitler rant, telling us why we need philosophers to do the dirty work science can’t complete on its own. But this was added mostly for the lulz.)
This time, Lawrence Krauss departed from his tradition of using image slides, replacing them with quotes ”to appear scholarly”, surely impressing the philosophers while conveniently opening up the discussion. He goes on to argue that it’s impossible to tell right from wrong without science, because science teaches us about the reality by which we are constrained. His point is not that we unanimously reach the same value judgments by following the methods of science, but that, as rational beings, it helps us determine the consequences of our actions by distinguishing reality from non-reality. So both the ”scientific method of secular empiricism” and rationality are required for moral truth, according to his argument.
As Krauss concludes, we are here because science works. People viewed as uncivilized or immoral by Western standards have turned their backs on science, denied the reality around them, or are simply irrational. This produces societies marked by poverty and disorder, and excludes the benefits of modern technology. It seems to me, although my knowledge is probably inadequate to judge, that this view of the causes of poverty and lack of civilization is rather naive; the same goes for the conception of humans as rational beings.
”Reality is that which, when you stop believing in it, is still there.”
~Philip K. Dick
”The only way to have real success in science …is to describe the evidence very carefully without regard to the way you feel it should be. If you have a theory, you must try and explain what’s good about it and what’s bad about it equally. In science you learn a sort of standard integrity and honesty.”
~Richard Feynman
(Ugh, I should’ve used that Feynman quote in last week’s religion class, where a debate sparked up about whether creationism should be taught in biology lessons. People trying to equate acceptance of evidence with religious belief honestly make me want to slam my head into a wall, at least when I have to listen to their arguments…)
Simon Blackburn raises an interesting point about our mental states that prompts a distinction between a) how the world is represented internally: information, knowledge and beliefs about our environment, and b) our desires, intentions and concerns (things we care about), which in turn warrants a division between facts and values. This is because the latter puts us in a disposition to change something about the world to conform to what we care about, while beliefs and knowledge simply represent the world without any intention or desire to change it.
Steven Pinker also problematizes the question, opening with an enigmatic ”yes and no” to whether science can tell us right from wrong. The composition of the panel might also reflect that what the organizers of the debate meant by ”science” was simply ”un-religion”, and he goes on to discuss why religion cannot determine right from wrong, for various reasons, much to the awe of the (not randomly selected) audience.
For more on this question of morality and religion, I’d recommend the book God Is Not Great by Christopher Hitchens.
In line with Peter Singer, Pinker holds the view that science, as most people define it, is a necessary but not sufficient condition for telling us right from wrong.
For readers who are still with me at this point, I encourage anyone who’s interested in the subject matter to listen to this debate. If you’re familiar with the speakers, though, most of what they say in the opening statements will probably be fairly unsurprising, and maybe a bit disappointing in its brevity.
Since my previous entry on ethics was better received than expected (I didn’t really anticipate anyone noticing), I now feel tempted to write another entry on the subject of philosophy. This time down the path of logic, but beware that this text will only be scratching the surface of something deeper and more extensive than I have so far familiarized myself with.
Since reading a biography of Ludwig Wittgenstein some time ago, I have felt compelled to look into the subject of formal logic, as approached by Gottlob Frege and Bertrand Russell and then expanded upon by Wittgenstein. The pursuit of creating a perfect language of logic seemed a bit overambitious, and I didn’t check the original work, particularly the Tractatus Logico-Philosophicus (hereafter TLP), until recently.
This is, however, a humble attempt to convey my interpretation and connect it with a few other ideas.
The main conclusion of the TLP is that the relation between language and facts cannot itself be expressed in language; and that which cannot be thought cannot be expressed in language, and thereof one must be silent.
TLP is a rather odd but eloquent book, structured into propositions numbered to clarify their order and with decimals indicating if a proposition follows from the previous one.
In the preface, Wittgenstein seems to proceed from the premise that a limit can only exist if something can be known about both sides of it. He says: ”The book will therefore draw a limit to thinking, or rather -not to thinking, but to the expression of thoughts; for, in order to draw a limit to thinking we should have to be able to think of both sides of this limit (we should therefore have to be able to think what cannot be thought)”.

At least for me, it required some contemplation to understand why this premise is true (I was trying to think of a mathematical counterexample, but insufficient mathematical knowledge prevented me from finding one). In trying to understand something, it is often useful to put it in your own words: if one cannot envision both sides of a limit, one cannot know whether the limit exists. Hence this becomes an epistemological problem rather than a logical one, although it remains unclear whether Wittgenstein makes this distinction. After all, he is talking about thoughts, which are often constrained by epistemology, in the sense that our thoughts about the world are constrained by what we can know about it. In a way, this insight would have been sufficient for Wittgenstein to arrive at his final conclusion, but there are other steps and parts of the book, perhaps not easily accessible to a general reader like me, that are still of importance.
Let’s move on to the terms Wittgenstein uses in his propositions. Here, the introduction by Russell was of great help in making sense of them (although Wittgenstein himself claimed this introduction was riddled with misunderstandings).
A ”fact” is anything that can be true or false; conversely, facts are what make propositions true or false. If a proposition (statement or sentence) is broken down into its smallest possible constituents, we get something called ”simples” or ”objects” (apparently synonymous, but I prefer ”object” and shall use it from here on). In turn, objects can be parts of ”atomic facts”, the smallest parts of propositions that can still be true or false. Thus ”atomic facts” are indivisible, or at least lose their status as facts if divided. For example, if ”Socrates is wise” is an atomic fact, then ”Socrates” and ”wise” are its objects.
From Wittgenstein’s propositions about these terms, and proposition 1.13 ”The facts in logical space are the world”, Russell deduces ”The world is fully described if all atomic facts are known, together with the fact that these are all of them”.
A proposition stating an atomic fact is conveniently named ”atomic proposition”. All atomic propositions are logically independent of each other, which means they don’t imply one another or follow from one another, and neither are they inconsistent.
Propositions used in logical inference, however, are called ”molecular”. These can serve as inputs to truth functions built with the logical operators AND, OR and NOT. In conventional formal logic (for exceptions, see fuzzy logic), a truth function f(x) can take on either of two values, true or false. The variable x ranges over objects; consider for example the propositional function ”x is a philosopher”. If every element of the domain is a philosopher, then f(x) holds true for all values of x, expressed as ∀x.f(x)
f(x) being true for at least one x is expressed as ∃x.f(x)
Note the symbolic, algebraic way of expressing the same things as in everyday language, but with much shorter sentences and clearer meaning.
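To make the notation concrete, here is a small sketch (my own illustration, not anything from the TLP or from Russell’s introduction) of how ∀x.f(x) and ∃x.f(x) behave over a finite domain, using Python’s all and any as stand-ins for the quantifiers:

```python
# The propositional function f(x) = "x is a philosopher",
# modelled as a Python predicate over a small domain.
philosophers = {"Socrates", "Plato", "Kant"}

def f(x):
    # Truth function: maps each x to True or False
    return x in philosophers

only_philosophers = ["Socrates", "Plato", "Kant"]
mixed = ["Socrates", "Darwin"]

# ∀x.f(x): f holds for every x in the domain
assert all(f(x) for x in only_philosophers)
assert not all(f(x) for x in mixed)

# ∃x.f(x): f holds for at least one x in the domain
assert any(f(x) for x in mixed)
assert not any(f(x) for x in ["Darwin", "Newton"])
```

Over a finite domain, ∀ is just an AND taken over every element and ∃ an OR, which is exactly what all and any compute.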
The two systems are not fully analogous, though, since everyday language is open to ambiguities, as we shall see. In formal logic, a double negation, !(!p), is equal to an affirmation of p. In ordinary language, by contrast, one can say ”this doesn’t mean that I don’t want it” and mean something different from ”this means I want it”. Clearly there’s an ambiguity in the former, for it expresses uncertainty about whether you want something or not, while the latter is definitively affirmative. (The semantics of everyday language does not seem to follow the law of the excluded middle.)
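As a toy illustration of this contrast (my own sketch; the neg3 function and the use of None for ”undecided” are hypothetical, not from the TLP), two-valued negation collapses under double application, while adding a Kleene-style third truth value lets the excluded middle fail:

```python
# Two-valued logic: double negation is affirmation, !(!p) == p
for p in (True, False):
    assert (not (not p)) == p

# A toy three-valued (Kleene-style) negation with an "undecided"
# value, represented here by None:
def neg3(p):
    return None if p is None else (not p)

# Double negation still maps each value to itself...
for p in (True, False, None):
    assert neg3(neg3(p)) == p

# ...but the excluded middle fails: "p or not-p" is itself
# undecided when p is undecided (None or None evaluates to None).
p = None
assert (p or neg3(p)) is None
```

The ambiguous ”this doesn’t mean that I don’t want it” behaves more like the undecided value: negating it twice returns the same undecided state rather than a definite yes.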
I think one of the aims of creating a language of formal logic was to resolve the misunderstandings that may arise from this and similar examples. And I wouldn’t be surprised if there was a strong psychological drive to do so, since it would make communication with other individuals much more straightforward, and accessible to a contemplative and introverted mind. (But that is not to say I know much about the enigmatic personality of Mr. Wittgenstein.)
It’s also worth mentioning that other interpretations exist of what formal/mathematical logic attempted to do. Douglas Hofstadter wrote in his famous Gödel, Escher, Bach: an Eternal Golden Braid (GEB) that mathematical logic ”began with the attempts to mechanize the thought processes of reasoning”. Thus not only communication would theoretically be influenced by this language, but also internal thoughts and the processes we use to arrive at conclusions and make decisions.
However, it’s important not to confuse formal logic, an a-priori system of deduction, with the psychological process of reasoning. They are of course in practice inevitably related, but separate as subjects of study. Furthermore, the assumption that all human thought follows logical steps is obviously idealistic, since the arguments our minds concoct are often prone to logical fallacies and cognitive biases. Although, going back to Wittgenstein, if one uses the more restrictive definition of a thought as ”a logical picture of facts”, it can be argued that all thoughts are logical.
But what if you think of something that doesn’t correspond to a fact about the world, such as an invisible pink unicorn: is that not a thought? Are you merely imagining things, which is different from the act of thinking? Or, to complicate matters, you could be thinking of something you believe is a fact but which later turns out not to be the case. Does this thought spontaneously disappear from the category of thoughts in a puff of logic? Rather than loopholes in the TLP, these are probably cases Wittgenstein would have dismissed as nonsensical: ”whereof one cannot speak, thereof one must be silent”.
Other such cases include the broad domains of ethics, metaphysics and aesthetics, which by their subjective nature do not correspond to facts about the world. The (Wittgensteinian) position of not speaking of these topics, since they lie outside the limits of language, could be viewed as an ethical stance in itself. (Although this would put ethics in the ”nonsense” category, which seems a bit absurd, or at least nihilistic.)
I’m feeling a gradual loss of interest in finishing this text, maybe out of indolence, or because I had no planned structure to begin with. I have also noticed that I have been alternating between writing about the system created in the TLP and writing within it, though mostly about it, which is perhaps not as good a way to make sense of the contents.
Describing the content and terminology used in the book is what I mean by being within the system. Here, one should not mix personal interpretation with the words of the original author, so as to convey as accurate a sample as possible of what it is like to read the book and to delve into the thoughts of the author. When writing about the system, however, one can convey the personal experience of reading by describing one’s interpretation of the content, as opposed to the content itself, as well as relating it to other works or ideas.
This might be a blurry line though, probably more so than Wittgenstein’s line between sense and nonsense. Stepping outside the TLP, one does not simply transcend the limits of language (although it feels quite liberating to escape the state of confusion).
It was probably a bad idea to start with the TLP to get acquainted with formal logic, since its very purpose was to criticize philosophy and point out its limitations. That said, I don’t think it has completely destroyed my curiosity, and any suggestions for modern textbooks on formal logic would be greatly appreciated in the comments section. Thanks for reading.
- Tribute to Wittgenstein (footnotes2plato.com)
- Wittgenstein and Rejectionism (maverickphilosopher.typepad.com)
- A Wittgensteinian critique of conceptual confusion in psychological research (epistemicepistles.wordpress.com)
In this post, I would like to submit an essay on moral philosophy that I wrote last year as part of a philosophy class. Since I didn’t reach a coherent conclusion, the title is set as a question, rather evasively formulated. (Although I’m considering pragmatic ethics, as in that of John Dewey, as an appealing ethical standpoint after exploring a little more.)
Without further ado:
In the history of philosophy, one of the greatest problems in ethics is the problem of deriving true value statements from empirical facts, the ”is-ought problem”. The purpose of this essay is to examine whether normativity is objective, and how (or if) moral knowledge can be obtained.
Let me start by presenting two different views on human nature in the history of philosophy. Rousseau defended the view that human beings are born altruistic and that society eventually corrupts them, whereas Hobbes defended the opposite view: that humans are born selfish and it is society that teaches them better.1 These alternative hypotheses are important for describing human actions, but do they matter when considering how we ”should” act?
The word ”should” is value-laden and expresses something driven by an intention or purpose, often toward a corresponding goal. It goes beyond a simple declaration of facts about how the world ”is”. For value-statements to exist, there must be a subject, a conscious agent, who holds the expressed value; it won’t simply appear out of nowhere. Clearly, there is a mental aspect involved in linking facts to values. But does the same fact always relate to the same value for every conscious being confronted with that fact? Apparently not, as will hopefully become clear in the next section.
Upon examining different cultures, some of which practice examples of what Western traditions consider ”wrong” and morally abhorrent (such as female genital mutilation or mass suicide), there seems to be a conflict between different belief-systems about what ways of life are morally favorable. The existence of (and sometimes incoherence in) opinions about right- and wrongness is perhaps the best argument for moral relativism, although arguably not a relevant defense of normative moral relativism (in the sense that you ought to be tolerant of every traditional practice).
The opposite view has also been taken, for example by utilitarians and consequentialists: that there is a ”true” way of reasoning to arrive at the ”best” theory of what is right and wrong. Utilitarian philosopher Sam Harris argues that the best way of determining moral worth is to consider the well-being of conscious creatures as ”objectively good”2. At least in my interpretation, he is also saying that this is the only sensible way of arriving at a pragmatic moral truth. A key point in his argument is that ”moral view A is truer than moral view B, if A entails a more accurate understanding of the connections between human thoughts/intentions/behavior and human well-being.” This would have the convenient implication that evidence can be provided for or against a moral theory, for example in experimental psychology, assuming an established definition of what is meant by ”well-being” (which Harris does not really provide in his book The Moral Landscape) and of the scale on which well-being is to be pursued (e.g. individual or societal, current or potential lives, other species, etc.).
Contemporary experimental psychologists and cognitive neuroscientists, (such as Patricia Churchland, Antonio Damasio and Joshua Greene) have examined moral judgment with the aim of explaining how the mind works when making morally significant decisions. According to a review article by Greene, they have found support for a ”dual-process theory”, according to which utilitarian thinking (e.g. considering the ”greater good”) is enabled by controlled cognitive processes, while deontological judgment of matters such as individual rights, are driven mostly by intuitive emotions.
These investigations could be included in the emerging interdisciplinary field of neurophilosophy, in which neuroscientific evidence is used to better understand issues related to philosophy of mind. Although many philosophers used to dismiss the findings of neuroscience as irrelevant with the claim that ”what matters is the software, not the hardware”, there is a growing interest in this intersection with neuroscience nowadays.
Arguably, most people would agree that moral traits include aspects such as sympathy, altruism, cooperation and justice. These are thus probably some of the traits meant by researchers trying to find the ”neural mechanisms of moral cognition”, when they use the term ”moral behavior”.
But even if there were a well-established inter-subjective agreement on which traits were morally right and wrong, would that be a sufficient condition for morality to be objective? Can neuroscientists and experimental psychologists study ethical imperatives without themselves committing a naturalistic fallacy?
Since morality is about the separation of good and bad, we immediately encounter the problem of judging what is good and what is bad in a scientific description of how the brain works. There seems to be no way around Hume’s law, since morality is inevitably, by definition, concerned with value-statements.
Let us examine the implications that would follow from regarding inter-subjective agreement as a sufficient condition for moral universality. Here, I think it is useful to make a comparison with other subjects, such as physics and mathematics, which are universal in the sense that even a hypothetical alien species could discover them by observing the universe and deriving the laws of nature (assuming they do not suddenly change). On the other hand, we could compare it to a subject such as art, which revolves around matters of taste, and often involves a personal categorization of what is considered aesthetically pleasing. Is morality merely a ”matter of taste”?
I think not, at least if we confine the definition to our own species and examine the genetic (a-priori) ways we engage in social interaction and cooperation. Cultural differences aside, we are in general confined by natural predispositions to interact in certain ways, and also, through reasoning, to arrive at similar conclusions about right and wrong actions. Clearly there are exceptions, but on average, the fluctuations in value systems are relatively small.
Now consider, as a thought experiment, that we created artificial intelligence with consciousness and the same reasoning capacities as humans, except with no emotions or ability to feel pleasure and pain. According to what desirable standard would they evaluate the right- or wrongness of actions that involve what we humans call ”well-being” or ”suffering”?
Although statistical inter-subjective agreement would be a reasonable requirement for a universal standard of right and wrong, in practice such agreement would be highly anthropocentric. There might, however, be a way to qualify this conclusion by defining truth in a pragmatic sense: at present, the world is controlled largely by a human population, and all the relevant moral decisions are thus made by members of our species. Perhaps the exploration of the value-sources of other potentially conscious beings remains beyond the scope of necessity and the limits of contemporary technology. I think it is worth considering as a possibility, however, that human conceptions of maximum well-being may be something entirely different, or even non-existent, for other types of intelligence. Doesn’t this somewhat undermine the controversial view of an objective morality?
In my current view, the most important aspect of ethics is its practical function. In a world full of individual, somewhat independently thinking brains, there will inevitably be different opinions on questions of what is right or wrong. Nevertheless, it is how people act on the large scale that shows the effects of these opinions, and thus how well they function in building and maintaining a society. I also think it is most likely impossible to construct a moral code or theory that applies in every situation. All theories examined so far have been found, by critics, to contain problems and loopholes in certain contexts. It is therefore fortunate that many different opinions exist, so that we can have an ongoing discussion rather than blindly accepting and following what a certain theory says without questioning its validity.
In quite a disheartening way, this conclusion takes us back to the starting point: finding the most realistic theory of ethics. Whether this is done by relating values to facts, or by other means, remains unclear. One of the main reasons why I remain unconvinced by Sam Harris’s thesis is that it rests on the premise that well-being is intrinsically good and suffering intrinsically bad, which is not arbitrary, but not really self-evident either. Another problem is that of measurement: how is well-being measured? When individuals are simply asked to evaluate the quality of their lives, or questioned moment by moment, they tend to give different, sometimes contradictory answers.
Trying to measure well-being in a large group of people is even more difficult. Psychologists, economists and followers of Bentham have tried to find a connection between economic welfare and happiness, but it is hard to pinpoint the relevant indicators. One thing seems certain, though: well-being is not proportional to income beyond a minimum level.
1. The debate in biology over ”nature vs. nurture” is, I think, similar to the opposing views of Rousseau and Hobbes, and reflects the philosophical debate between rationalism and empiricism, but from an ethics perspective. If every individual human being had the same predispositions in the judgment of moral values, there would probably be no argument about whether morality is determined by nature or enforced by nurture (a-priori vs. a-posteriori). But since this is evidently not the case (otherwise there would be no debate in ethics), there must exist individual differences. The question of whether these differences are genetic or cultural could be philosophically relevant if the answer would limit or constrain human psychology: not being able to behave in a certain way constrains the expectations on how you ”ought” to behave. Other than this, it is unclear how much a thorough examination of the psychology of moral judgments would yield on the subject of ”true morality”.
2. In his definition, values relate to facts about the well-being of conscious creatures, which is the “most important source of value”. Although he admits that most of these relations remain to be discovered by scientists, his thesis asserts that they are discoverable in principle. The well-being of conscious creatures on a greater scale can be metaphorically illustrated as peaks on a moral landscape, while the troughs represent the depths of suffering.
Although I do accept the premise that mandatory education acts as a demotivating force, destroying creativity, I have seen counterexamples where the demands of the educational system lift individuals to accomplish what lethargy and indolence would otherwise prevent them from doing. It was mainly the idea of grades and being judged that served as a motivator, but for some, it only led to panic attacks and sleepless nights.
Last Friday on the TED blog, attention was devoted to education. Starting with the influential Sir Ken Robinson (without whom a post on education would be incomplete), a whole maze of links opened up to the avid explorer. I couldn’t say it more eloquently than Robinson himself: the dominant culture of education is a bit like dieting without losing or gaining any weight. In other words, the purpose of teaching is lost when no learning occurs. Curiosity and imagination constitute the engine that drives learning, but if we’re all part of a manufacturing process, forced onto a conveyor belt of testing and involuntary group projects, it’s no wonder that the spark of curiosity doesn’t ignite. If students are not treated as individuals, creativity diminishes. The problems of teaching and the stressful responsibilities of a teacher take time and energy away from engaging with students. The fruitful feedback loop between teacher and student is forgotten in this mechanical process, as are the conditions under which people thrive. According to Robinson, this is a cultural problem prevalent throughout most of the world.
But what is that force that makes people flourish, what are the causes of creativity and what are its effects?
In psychology, the term ‘flow’ is used to describe the balance between challenge and automaticity that gives rise to a sense of devotion to a single task, often coupled with euphoric feelings and productivity. Since this state depends on the task being sufficiently challenging but not too difficult, individual differences most likely make classroom-style learning an impossible means of achieving this effect in everyone. In other words, some are bound to fall behind, while others are bored to tears; few, if any, fit the narrow point of balance.
Sadly, students have little or nothing they can do to influence this, and at an early age it’s common to think that the current way is the only way possible, or at least the only probable one, since change is unlikely to come about. I still hold this pessimistic outlook, even though my frustration drives me to write about it in an attempt to spread the dissatisfaction that I think many students share.
It’s quite clear that both under- and over-stimulation are harmful to creativity, but why is creativity so important in the first place? The reasons include the subjective well-being it induces, its role as a basic need (as necessary as eating and sleeping for some individuals), and the practical contributions produced out of an underlying creative desire.
Let me mention an inspirational person as an example of this: the neuroanatomist Santiago Ramón y Cajal. By the end of the 19th century, he hypothesized that neurons were individual components, as opposed to the contemporary doctrine of a continuous neural mass, long before electron microscopy could settle the matter. He illustrated this through a series of drawings and is also known for his artistic work. I’m biased toward mentioning Cajal, as I am currently reading his book Advice for a Young Investigator (which I would recommend to anyone interested in the processes of science and learning). Some of the quotes found there are remarkably prescient and accurate still today. It’s interesting to find oneself guilty of many of the examples described as errors or ”diseases of the will”. This quote, for instance:
”I believe that excessive admiration for the work of great minds is one of the most unfortunate preoccupations of intellectual youth—along with a conviction that certain problems cannot be attacked, let alone solved, because of one’s relatively limited abilities.”
Maybe my own admiration for people with great minds (like Cajal himself) places this ingenuity as an unreachable ideal, so that all efforts exerted in such a direction are perceived as futile. On a more encouraging note, however (the following is also taken from Advice for a Young Investigator):
”I continue to believe that there is always room for anyone with average intelligence and an eagerness for recognition to utilize his energy and tempt fate. Like the lottery, fate doesn’t always smile on the rich; from time to time it brings joy to the homes of the lowly. Instead, consider the possibility that any man could, if he were so inclined, be the sculptor of his own brain, and that even the least gifted may, like the poorest land that has been well cultivated and fertilized, produce an abundant harvest.”
Honestly, I don’t care much for this preoccupation with the results or recognition of one’s labour, at least not yet. I believe the pursuit has value in itself, and also serves as a mechanism for individual and collective development, albeit subjectively.
This digression leads back to the intended subject of education, where measurable results (grades, qualifications, report cards) overshadow curiosity and devotion to the subject in importance. I find this quite sad, as I don’t see why they need to be mutually exclusive.
A possible combination could perhaps be achieved through learning at one’s own pace; online education is, in my opinion, a neat example of that. Ironically, the workload of compulsory education takes much time away from an individual’s own pursuit of knowledge. Most people seem to want to learn only what they ‘have to’ learn for a test, to achieve the grades they strive for. Obviously I’m generalizing from personal experience; this might not be as common elsewhere (I live in Sweden, by the way).
A final sub-topic I wanted to mention is group work. It has been found that the work efforts of individuals are often greater than those of groups, where productivity actually diminishes. This is due in part to the ‘free rider’ and ‘sucker’ effects, similar to the tragedy of the commons but concerning the motivation of group members to collaborate on a mutual project. An individual can choose not to contribute at all to the group (the free rider) yet still enjoy the benefits of the others’ work, provided they don’t do the same. Or one person can do all the work for the rest of the group members (the sucker) and not get any extra credit. In both cases productivity is lost.
Furthermore, most introverted people I’ve heard from (myself included) have had bad experiences with group work. Unfortunately it still has a prevalent role in education, which I think is both disturbing and unnecessary.