13.2 Language & Culture
Perspectives: An Open Invitation to Cultural Anthropology, 2nd Edition
Linda Light, California State University, Long Beach
Linda.Light@csulb.edu
Learning Objectives
- Explain the relationship between human language and culture.
- Identify the universal features of human languages and the design features that make them unique.
- Describe the structures of language: phonemes, morphemes, syntax, semantics, and pragmatics.
- Assess the relationship between language variations and ethnic or cultural identity.
- Explain how language is affected by social class, ethnicity, gender and other aspects of identity.
- Evaluate the reasons why languages change and efforts that can be made to preserve endangered languages.
THE IMPORTANCE OF HUMAN LANGUAGE TO HUMAN CULTURE
Students in my cultural anthropology classes are required to memorize a six-point thumbnail definition of culture, which includes all of the features most anthropologists agree are key to its essence. Then, I refer back to the definition as we arrive at each relevant unit in the course. Here it is—with the key features in bold type.
Culture is:
- An integrated system of mental elements (beliefs, values, worldview, attitudes, norms), the behaviors motivated by those mental elements, and the material items created by those behaviors;
- A system shared by the members of the society;
- 100 percent learned, not innate;
- Based on symbolic systems, the most important of which is language;
- Humankind’s most important adaptive mechanism, and
- Dynamic, constantly changing.
This definition serves to underscore the crucial importance of language to all human cultures. In fact, human language can be considered a culture’s most important feature since complex human culture could not exist without language and language could not exist without culture. They are inseparable because language encodes culture and provides the means through which culture is shared and passed from one generation to the next. Humans think in language and carry out all cultural activities through language. It surrounds our every waking and sleeping moment, although we do not usually think about its importance. For that matter, humans do not think about their immersion in culture either, much as fish, if they were endowed with intelligence, would not think much about the water that surrounds them. Without language and culture, humans would be just another great ape. Anthropologists must have skills in linguistics so they can learn the languages and cultures of the people they study.
All human languages are symbolic systems that make use of symbols to convey meaning. A symbol is anything that serves to refer to something else, but has a meaning that cannot be guessed because there is no obvious connection between the symbol and its referent. This feature of human language is called arbitrariness. For example, many cultures assign meanings to certain colors, but the meaning for a particular color may be completely different from one culture to another. Western cultures like the United States use the color black to represent death, but in China it is the color white that symbolizes death. White in the United States symbolizes purity and is used for brides’ dresses, but no Chinese woman would ever wear white to her wedding. Instead, she usually wears red, the color of good luck. Words in languages are symbolic in the same way. The word key in English is pronounced exactly the same as the word qui in French, meaning “who,” and ki in Japanese, meaning “tree.” One must learn the language in order to know what any word means.
THE BIOLOGICAL BASIS OF LANGUAGE
The human anatomy that allowed the development of language emerged six to seven million years ago when the first human ancestors became bipedal—habitually walking on two feet. Most other mammals are quadrupedal—they move about on four feet. This evolutionary development freed up the forelimbs of human ancestors for other activities, such as carrying items and doing more and more complex things with their hands. It also started a chain of anatomical adaptations. One adaptation was a change in the way the skull was placed on the spine. The skull of quadrupedal animals is attached to the spine at the back of the skull because the head is thrust forward. With the new upright bipedal position of pre-humans, the attachment to the spine moved toward the center of the base of the skull. This skeletal change in turn brought about changes in the shape and position of the mouth and throat anatomy.
Humans have all the same organs in the mouth and throat that the other great apes have, but the larynx, or voice box (you may know it as the Adam’s apple), is in a lower position in the throat in humans. This creates a longer pharynx, or throat cavity, which functions as a resonating and amplifying chamber for the speech sounds emitted by the larynx. The rounding of the shape of the tongue and palate, or the roof of the mouth, enables humans to make a greater variety of sounds than any great ape is capable of making (see Figure 1).
Speech is produced by exhaling air from the lungs, which passes through the larynx. The voice is created by the vibration of the vocal folds in the larynx when they are pulled tightly together, leaving a narrow slit for the air to pass through under pressure. The narrower the slit, the higher the pitch of the sound produced. The sound waves in the exhaled air pass through the pharynx then out through the mouth and/or the nose. The different positions and movements of the articulators—the tongue, the lips, the jaw—produce the different speech sounds.
Along with the changes in mouth and throat anatomy that made speech possible came a gradual enlargement and compartmentalization of the brain of human ancestors over millions of years. The modern human brain is among the largest, in proportion to body size, of all animals. This development was crucial to language ability because a tremendous amount of brain power is required to process, store, produce, and comprehend the complex system of any human language and its associated culture. In addition, two areas in the left brain are specifically dedicated to the processing of language; no other species has them. They are Broca’s area in the left frontal lobe near the temple, and Wernicke’s area, in the temporal lobe just behind the left ear.
Language Acquisition in Childhood
Linguist Noam Chomsky proposed that all languages share the properties of what he called Universal Grammar (UG), a basic template for all human languages, which he believed was embedded in our genes, hard-wiring the brains of all human children to acquire language. Although the theory of UG is somewhat controversial, it is a fact that all normally developing human infants have an innate ability to acquire the language or languages used around them. Without any formal instruction, children easily acquire the sounds, words, grammatical rules, and appropriate social functions of the language(s) that surround them. They master the basics by about age three or four. This also applies to children, both deaf and hearing, who are exposed to signed language.
If a child is not surrounded by people who are using a language, that child will gradually lose the ability to acquire language naturally without effort. If this deprivation continues until puberty, the child will no longer be biologically capable of attaining native fluency in any language, although they might be able to achieve a limited competency. This phenomenon has been called the Critical Age Range Hypothesis. A number of abused children who were isolated from language input until they were past puberty provide stark evidence to support this hypothesis. The classic case of “Genie” is an example of this evidence.[1]
Found at the age of almost 14, Genie had been confined for all of her life to her room and, since the age of two, had been tied to a potty chair during the day and to a crib at night with almost no verbal interaction and only minimal attention to her physical needs. After her rescue, a linguist worked with her intensively for about five years in an attempt to help her learn to talk, but she never achieved language competence beyond that of a two-year-old child. The hypothesis also applies to the acquisition of a second language. A person who starts the study of another language after puberty will have to exert a great deal of effort and will rarely achieve native fluency, especially in pronunciation. There is plenty of evidence for this in the U.S. educational system. You might very well have had this same experience. It makes you wonder why our schools rarely offer foreign language classes before the junior high school level.
The Gesture Call System and Non-Verbal Human Communication
All animals communicate and many animals make meaningful sounds. Others use visual signs, such as facial expressions, color changes, body postures and movements, light (fireflies), or electricity (some eels). Many use the sense of smell and the sense of touch. Most animals use a combination of two or more of these systems in their communication, but their systems are closed systems in that they cannot create new meanings or messages. Human communication is an open system that can easily create new meanings and messages. Most animal communication systems are basically innate; they do not have to learn them, but some species’ systems entail a certain amount of learning. For example, songbirds have the innate ability to produce the typical songs of their species, but most of them must be taught how to do it by older birds.
Great apes and other primates have relatively complex systems of communication that use varying combinations of sound, body language, scent, facial expression, and touch. Such systems have therefore been referred to as gesture-call systems. Humans share a number of forms of this gesture-call, or non-verbal, system with the great apes. Spoken language undoubtedly evolved embedded within it. All human cultures have not only verbal languages, but also non-verbal systems that are consistent with their verbal languages and cultures and vary from one culture to another. We will discuss the three most important human non-verbal communication systems.
Kinesics
Kinesics is the term used to designate all forms of human body language, including gestures, body position and movement, facial expressions, and eye contact. Although all humans can potentially perform these in the same way, different cultures may have different rules about how to use them. For example, eye contact for Americans is highly valued as a way to show we are paying attention and as a means of showing respect. But for the Japanese, eye contact is usually inappropriate, especially between two people of different social statuses. The lower status person must look down and avoid eye contact to show respect for the higher status person.
Facial expressions can convey a host of messages, usually related to the person’s attitude or emotional state. Hand gestures may convey unconscious messages, or constitute deliberate messages that can replace or emphasize verbal ones.
Proxemics
Proxemics is the study of the social use of space, specifically the distance an individual tries to maintain around himself in interactions with others. The size of the “space bubble” depends on a number of social factors, including the relationship between the two people, their relative status, their gender and age, their current attitude toward each other, and above all their culture. In some cultures, such as in Brazil, people typically interact in a relatively close physical space, usually along with a lot of touching. Other cultures, like the Japanese, prefer to maintain a greater distance with a minimum amount of touching or none at all. If one person stands too far away from the other according to cultural standards, it might convey the message of emotional distance. If a person invades the culturally recognized space bubble of another, it could mean a threat. Or, it might show a desire for a closer relationship. It all depends on who is involved.
Paralanguage
Paralanguage refers to those characteristics of speech beyond the actual words spoken. These include the features that are inherent to all speech: pitch, loudness, and tempo or duration of the sounds. Varying pitch can convey any number of messages: a question, sarcasm, defiance, surprise, confidence or lack of it, impatience, and many other often subtle connotations. An utterance that is shouted at close range usually conveys an emotional element, such as anger or urgency. A word or syllable that is held for an undue amount of time can intensify the impact of that word. For example, compare “It’s beautiful” versus “It’s beauuuuu-tiful!” Often the latter type of expression is further emphasized by extra loudness of the syllable, and perhaps higher pitch; all can serve to make a part of the utterance more important. Other paralinguistic features that often accompany speech might be a chuckle, a sigh or sob, deliberate throat clearing, and many other non-verbal sounds like “hm,” “oh,” “ah,” and “um.”
Most non-verbal behaviors are unconsciously performed and not noticed unless someone violates the cultural standards for them. In fact, a deliberate violation itself can convey meaning. Other non-verbal behaviors are done consciously like the U.S. gestures that indicate approval, such as thumbs up, or making a circle with your thumb and forefinger—“OK.” Other examples are waving at someone or putting a forefinger to your lips to quiet another person. Many of these deliberate gestures have different meanings (or no meaning at all) in other cultures. For example, the gestures of approval in U.S. culture mentioned above may be obscene or negative gestures in another culture.
Try this: As an experiment in the power of non-verbal communication, try violating one of the cultural rules for proxemics or eye contact with a person you know. Choosing your “guinea pigs” carefully (they might get mad at you!), try standing or sitting a little closer or farther away from them than you usually would for a period of time, until they notice (and they will notice). Or, you could choose to give them a bit too much eye contact, or too little, while you are conversing with them. Note how they react to your behavior and how long it takes them to notice.
HUMAN LANGUAGE COMPARED WITH THE COMMUNICATION SYSTEMS OF OTHER SPECIES
Human language is qualitatively and quantitatively different from the communication systems of all other species of animals. Linguists have long tried to create a working definition that distinguishes it from non-human communication systems. Linguist Charles Hockett’s solution was to create a hierarchical list of what he called design features, or descriptive characteristics, of the communication systems of all species, including that of humans.[2] The features of human language not shared with any other species illustrate exactly how it differs from the communication systems of all other species.
Hockett’s Design Features
The communication systems of all species share the following features:
1. A mode of communication by which messages are transmitted through a system of signs, using one or more sensory systems to transmit and interpret, such as vocal-auditory, visual, tactile, or kinesic;
2. Semanticity: the signs carry meaning for the users, and
3. Pragmatic function: all signs serve a useful purpose in the life of the users, from survival functions to influencing others’ behavior.
Some communication systems (including humans) also exhibit the following features:
4. Interchangeability: the ability of individuals within a species to both send and receive messages. One species that lacks this feature is the honeybee. Only a female “worker bee” can perform the dance that conveys to her hive-mates the location of a newly discovered food source. Another example is the mockingbird, whose songs are performed only by the males to attract mates and mark their territories.
5. Cultural transmission: the need for some aspects of the system to be learned through interaction with others, rather than being 100 percent innate or genetically programmed. The mockingbird learns its songs from other birds, or even from other sounds in its environment that appeal to it.
6. Arbitrariness: the form of a sign is not inherently or logically related to its meaning; signs are symbols. It could be said that the movements in the honeybees’ dance are arbitrary since anyone who is not a honeybee could not interpret their meaning.
Only true human language also has the following characteristics:
7. Discreteness: every human language is made up of a small number of meaningless discrete sounds. That is, the sounds can be isolated from each other, for purposes of study by linguists, or to be represented in a writing system.
8. Duality of patterning (two levels of combination): at the first level of patterning, these meaningless discrete sounds, called phonemes, are combined to form words and parts of words that carry meaning, or morphemes. At the second level of patterning, morphemes are recombined to form a potentially infinite number of longer messages such as phrases and sentences according to a set of rules called syntax. It is this level of combination that is entirely lacking in the communication abilities of all other animals and makes human language an open system while all other animal systems are closed. (A small computational sketch of these two levels follows this list.)
9. Displacement: the ability to communicate about things that are outside of the here and now made possible by the features of discreteness and duality of patterning. While other species are limited to communicating about their immediate time and place, we can talk about any time in the future or past, about any place in the universe, or even fictional places.
10. Productivity/creativity: the ability to produce and understand messages that have never been expressed before or to express new ideas. People do not speak according to prepared scripts, as if they were in a movie or a play; they create their utterances spontaneously, according to the rules of their language. It also makes possible the creation of new words and even the ability to lie.
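To make duality of patterning more concrete, here is a small computational sketch in Python. Everything in it is invented for illustration (the tiny lexicon and the single Subject + Verb + Object rule); it is not a model of any real language. It simply shows meaningless sound units combining into meaningful morphemes (the first level of patterning) and those morphemes recombining into many distinct messages (the second level).

```python
# Level 1 of patterning: meaningless phonemes combine into meaningful
# morphemes. (The inventory and spellings below are invented toys.)
morphemes = {
    "cat": ("k", "ae", "t"),   # free morpheme
    "dog": ("d", "o", "g"),    # free morpheme
    "-s": ("s",),              # bound plural morpheme
}

# Level 2 of patterning: morphemes combine into larger messages
# according to a single toy word-order rule: Subject + Verb + Object.
nouns = ["cat", "dog"]
verbs = ["chased", "saw"]

def sentences():
    """Generate every message the toy grammar allows."""
    for subj in nouns:
        for verb in verbs:
            for obj in nouns:
                for plural in ("", "s"):   # optionally add the -s morpheme
                    yield f"the {subj} {verb} the {obj}{plural}"

all_sentences = list(sentences())
print(len(all_sentences))   # 16 distinct messages from a handful of units
print(all_sentences[-1])    # "the dog saw the dogs"
```

Adding even one more noun or verb to the toy lexicon multiplies the number of possible messages, which is the sense in which human language is an open system.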
A number of great apes, including gorillas, chimpanzees, bonobos and orangutans, have been taught human sign languages with all of the human design features. In each case, the apes have been able to communicate as humans do to an extent, but their linguistic abilities are reduced by the limited cognitive abilities that accompany their smaller brains.
UNIVERSALS OF LANGUAGE
Languages we do not speak or understand may sound like meaningless babble to us, but all the human languages that have ever been studied by linguists are amazingly similar. They all share a number of characteristics, which linguists call language universals. These language universals can be considered properties of the Universal Grammar that Chomsky proposed. Here is a list of some of the major ones.
- All human cultures have a human language and use it to communicate.
- All human languages change over time, a reflection of the fact that all cultures are also constantly changing.
- All languages are systematic, rule driven, and equally complex overall, and equally capable of expressing any idea that the speaker wishes to convey. There are no primitive languages.
- All languages are symbolic systems.
- All languages have a basic word order of elements, like subject, verb, and object, with variations.
- All languages have similar basic grammatical categories such as nouns and verbs.
- Every spoken language is made up of discrete sounds that can be categorized as vowels or consonants.
- The underlying structure of all languages is characterized by the feature duality of patterning, which permits any speaker to utter any message they need or wish to convey, and any speaker of the same language to understand the message.
DESCRIPTIVE LINGUISTICS: STRUCTURES OF LANGUAGE
The study of the structures of language is called descriptive linguistics. Descriptive linguists discover and describe the phonemes of a language, research called phonology. They study the lexicon (the vocabulary) of a language and how the morphemes are used to create new words, or morphology. They analyze the rules by which speakers create phrases and sentences, or the study of syntax. And they look at how these features all combine to convey meaning in certain social contexts, fields of study called semantics and pragmatics.
The Sounds of Language: Phonemes
A phoneme is defined as the minimal unit of sound that can make a difference in meaning if substituted for another sound in a word that is otherwise identical. The phoneme itself does not carry meaning. For example, in English if the sound we associate with the letter “p” is substituted for the sound of the letter “b” in the word bit, the word’s meaning is changed because now it is pit, a different word with an entirely different meaning. The human articulatory anatomy is capable of producing many hundreds of sounds, but no language has more than about 100 phonemes. English has about 36 or 37 phonemes, including about eleven vowels, depending on dialect. Hawaiian has only five vowels and about eight consonants. No two languages have the same exact set of phonemes.
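Linguists establish that two sounds are separate phonemes by finding minimal pairs such as bit and pit. The short Python sketch below automates that search over a tiny, hand-made word list with invented transcriptions; it illustrates the logic only and is not a real phonological analysis.

```python
# Toy lexicon: each word mapped to an invented phonetic transcription.
words = {
    "bit": ("b", "i", "t"),
    "pit": ("p", "i", "t"),
    "bat": ("b", "ae", "t"),
    "pat": ("p", "ae", "t"),
}

def minimal_pairs(lexicon):
    """Yield pairs of words that differ in exactly one sound."""
    items = list(lexicon.items())
    for i, (w1, s1) in enumerate(items):
        for w2, s2 in items[i + 1:]:
            if len(s1) != len(s2):
                continue
            diffs = [(a, b) for a, b in zip(s1, s2) if a != b]
            if len(diffs) == 1:
                yield w1, w2, diffs[0]

for w1, w2, (a, b) in minimal_pairs(words):
    print(f"{w1} / {w2}: /{a}/ contrasts with /{b}/")
# "bit / pit: /b/ contrasts with /p/" is evidence that /b/ and /p/
# are separate phonemes, since swapping them changes the meaning.
```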
Linguists use a written system called the International Phonetic Alphabet (IPA) to represent the sounds of a language. Unlike the letters of our alphabet that spell English words, each IPA symbol always represents only one sound no matter the language. For example, the letter “a” in English can represent the different vowel sounds in such words as cat, make, papa, law, etc., but the IPA symbol /a/ always and only represents the vowel sound of papa or pop.
The Units That Carry Meaning: Morphemes
A morpheme is a minimal unit of meaning in a language; a morpheme cannot be broken down into any smaller units that still relate to the original meaning. It may be a word that can stand alone, called an unbound morpheme (dog, happy, go, educate). Or it could be a part of a word that carries meaning but cannot stand alone and must be attached to another morpheme; these are called bound morphemes. They may be placed at the beginning of the root word, such as un– (“not,” as in unhappy), or re– (“again,” as in rearrange). Or, they may follow the root, as in -ly (makes an adjective into an adverb: quickly from quick), -s (for plural, possessive, or a verb ending) in English. Some languages, like Chinese, have very few if any bound morphemes. Others, like Swahili, have so many that nouns and verbs cannot stand alone as separate words; they must have one or more other bound morphemes attached to them.
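As a rough illustration of how bound morphemes attach to free morphemes, here is a toy Python function that peels a few English prefixes and suffixes off a word. The affix lists and length checks are crude guesses for illustration only; real morphological analysis is far more complicated.

```python
PREFIXES = ["un", "re"]          # a few bound morphemes that precede a root
SUFFIXES = ["ly", "ed", "s"]     # a few bound morphemes that follow a root

def split_morphemes(word):
    """Very crudely split a word into prefix, root, and suffix."""
    parts = []
    for p in PREFIXES:
        if word.startswith(p) and len(word) > len(p) + 2:
            parts.append(p + "-")
            word = word[len(p):]
            break
    suffix = None
    for s in SUFFIXES:
        if word.endswith(s) and len(word) > len(s) + 2:
            suffix = "-" + s
            word = word[:-len(s)]
            break
    parts.append(word)            # what remains is treated as the free morpheme
    if suffix:
        parts.append(suffix)
    return parts

print(split_morphemes("unhappy"))     # ['un-', 'happy']
print(split_morphemes("quickly"))     # ['quick', '-ly']
print(split_morphemes("rearranged"))  # ['re-', 'arrang', '-ed']  (crude indeed)
```

The last example shows why such a naive splitter fails: it has no knowledge of spelling rules or of which roots actually exist, which is exactly the kind of knowledge a real morphological analysis must encode.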
The Structure of Phrases and Sentences: Syntax
Rules of syntax tell the speaker how to put morphemes together grammatically and meaningfully. There are two main types of syntactic rules: rules that govern word order, and rules that direct the use of certain morphemes that perform a grammatical function. For example, the order of words in the English sentence “The cat chased the dog” cannot be changed around or its meaning would change: “The dog chased the cat” (something entirely different) or “Dog cat the chased the” (something meaningless). English relies on word order much more than many other languages do because it has so few morphemes that can do the same type of work.
For example, in our sentence above, the phrase “the cat” must go first in the sentence, because that is how English indicates the subject of the sentence, the one that does the action of the verb. The phrase “the dog” must go after the verb, indicating that it is the dog that received the action of the verb, or is its object. Other syntactic rules tell us that we must put “the” before its noun, and “–ed” at the end of the verb to indicate past tense. In Russian, the same sentence has fewer restrictions on word order because it has bound morphemes that are attached to the nouns to indicate which one is the subject and which is the object of the verb. So the sentence koshka [chased] sobaku, which means “the cat chased the dog,” has the same meaning no matter how we order the words, because the –a on the end of koshka means the cat is the subject, and the –u on the end of sobaku means the dog is the object. If we switched the endings and said koshku [chased] sobaka, now it means the dog did the chasing, even though we haven’t changed the order of the words. Notice, too, that Russian does not have a word for “the.”
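The contrast between the two strategies, word order in English and case endings in Russian, can be sketched in a few lines of Python. The vocabulary is limited to the cat-and-dog example above, the parsing rules are deliberately simplistic, and the verb form gnala is an assumed transliteration standing in for the Russian past-tense verb meaning “chased.”

```python
VERBS_RU = {"gnala"}   # assumed transliteration of a Russian verb, "chased"

def parse_english(sentence):
    # English strategy: the noun before the verb is the subject,
    # the noun after the verb is the object.
    words = sentence.lower().replace("the ", "").split()
    subject, verb, obj = words
    return {"subject": subject, "verb": verb, "object": obj}

def parse_russian(sentence):
    # Russian strategy (toy version): the case ending, not the position,
    # marks the role: -a for the subject, -u for the object.
    roles = {}
    for word in sentence.lower().split():
        if word in VERBS_RU:
            roles["verb"] = word
        elif word.endswith("u"):
            roles["object"] = word
        elif word.endswith("a"):
            roles["subject"] = word
    return roles

print(parse_english("The cat chased the dog"))
# {'subject': 'cat', 'verb': 'chased', 'object': 'dog'}
print(parse_russian("sobaku gnala koshka"))
# {'object': 'sobaku', 'verb': 'gnala', 'subject': 'koshka'}
```

Reordering the Russian words changes nothing in the parse, while reordering the English words swaps subject and object, just as described above.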
Conveying Meaning in Language: Semantics and Pragmatics
The whole purpose of language is to communicate meaning about the world around us, so the study of meaning is of great interest to linguists and anthropologists alike. The field of semantics focuses on the study of the meanings of words and other morphemes as well as how the meanings of phrases and sentences derive from them. Linguists have recently enjoyed examining the multitude of meanings and uses of the word “like” among American youth, made famous by the 1983 film Valley Girl. Although it started as a feature of California English, it has spread all across the country, and even to many young second-language speakers of English. It’s, like, totally awesome, dude!
The study of pragmatics looks at the social and cultural aspects of meaning and how the context of an interaction affects it. One aspect of pragmatics is the speech act. Any time we speak we are performing an act, but what we are actually trying to accomplish with that utterance may not be interpretable through the dictionary meanings of the words themselves. For example, if you are at the dinner table and say, “Can you pass the salt?” you are probably not asking if the other person is capable of giving you the salt. Often the more polite an utterance, the less direct it will be syntactically. For example, rather than using the imperative syntactic form and saying “Give me a cup of coffee,” it is considered more polite to use the question form and say “Would you please give me a cup of coffee?”
LANGUAGE VARIATION: SOCIOLINGUISTICS
Languages Versus Dialects
The number of languages spoken around the world is somewhat difficult to pin down, but we usually see a figure between 6,000 and 7,000. Why are they so hard to count? The term language is commonly used to refer to the idealized “standard” of a variety of speech with a name, such as English, Turkish, Swedish, Swahili, or Urdu. One language is usually considered to be incomprehensible to speakers of another one. The word dialect is often applied to a subordinate variety of a language and the common assumption is that we can understand someone who speaks another dialect of our own language.
These terms are not really very useful to describe actual language variation. For example, many of the hundreds of “dialects” spoken in China are very different from each other and are not mutually comprehensible to speakers of other Chinese “dialects.” The Chinese government promotes the idea that all of them are simply variants of the “Chinese language” because it helps to promote national solidarity and loyalty among Chinese people to their country and reduce regional factionalism. In contrast, the languages of Sweden, Denmark, and Norway are considered separate languages, but if a Swede, a Dane, and a Norwegian were to have a conversation together, each could use their own language and understand most of what the others say. Does this make them dialects or languages? The Serbian and Croatian languages are considered by their speakers to be separate languages due to distinct political and religious cultural identities. They even employ different writing systems to emphasize difference, but the two are essentially the same language and mutually intelligible.
So in the words of linguist John McWhorter, actually “dialects is all there is.”[3] What he means by this is that a continuum of language variation is geographically distributed across populations in much the same way that human physical variation is, with the degree of difference between any two varieties increasing across increasing distances. This is the case even across national boundaries. Catalan, the language of northeastern Spain, is closer to the languages of southern France, Provençal and Occitan, than any of them is to its associated national language, Spanish or French. One language variety blends with the next geographically like the colors of the rainbow. However, the historical influence of colonizing states has affected that natural distribution. Thus, there is no natural “language” with variations called “dialects.” Usually one variety of a language is considered the “standard,” but this choice is based on the social and political prestige of the group that speaks that variety; it has no inherent superiority over the other variants called its “dialects.” The way people speak is an indicator of who they are, where they come from, and what social groups they identify with, as well as what particular situation they find themselves in, and what they want to accomplish with a specific interaction.
How Does Language Variation Develop?
Why do people from different regions in the United States speak so differently? Why do they speak differently from the people of England? A number of factors have influenced the development of English dialects, and they are typical causes of dialect variation in other languages as well.
Settlement patterns: The first English settlers to North America brought their own dialects with them. Settlers from different parts of the British Isles spoke different dialects (they still do), and they tended to cluster together in their new homeland. The present-day dialects typical of people in various areas of the United States, such as New England, Virginia, New Jersey, and Delaware, still reflect these original settlement sites, although they certainly have changed from their original forms.
Migration routes: After they first settled in the United States, some people migrated further west, establishing dialect boundaries as they traveled and settled in new places.
Geographical factors: Rivers, mountains, lakes and islands affected migration routes and settlement locations, as well as the relative isolation of the settlements. People in the Appalachian mountains and on certain islands off the Atlantic coast were relatively isolated from other speakers for many years and still speak dialects that sound very archaic compared with the mainstream.
Language contact: Interactions with other language groups, such as Native Americans, French, Spanish, Germans, and African-Americans, along paths of migration and settlement resulted in mutual borrowing of vocabulary, pronunciation, and some syntax.
Have you ever heard of “Spanglish”? It is a form of Spanish spoken near the borders of the United States that is characterized by a number of words adopted from English and incorporated into the phonological, morphological and syntactic systems of Spanish. For example, the Spanish sentence Voy a estacionar mi camioneta, or “I’m going to park my truck” becomes in Spanglish Voy a parquear mi troca. Many other languages have such English-flavored versions, including Franglais and Chinglish. Some countries, especially France, actively try to prevent the incursion of other languages (especially English) into their language, but the effort is always futile. People will use whatever words serve their purposes, even when the “language police” disapprove. Some Franglais words that have invaded in spite of the authorities’ protestations include the recently acquired binge-drinking, beach, e-book, and drop-out, while older ones include le weekend and stop.
Region and occupation: Rural farming people may continue to use archaic expressions compared with urban people, who have much more contact with contemporary life styles and diverse speech communities.
Social class: Social status differences cut across all regional variations of English. These differences reflect the education and income level of speakers.
Group reference: Other categories of group identity, including ethnicity, national origin of ancestors, age, and gender can be symbolized by the way we speak, indicating in-group versus out-group identity. We talk like other members of our groups, however we define that group, as a means of maintaining social solidarity with other group members. This can include occupational or interest-group jargon, such as medical or computer terms, or surfer talk, as well as pronunciation and syntactic variations. Failure to make linguistic accommodation to those we are speaking to may be interpreted as a kind of symbolic group rejection even if that dialect might be relatively stigmatized as a marker of a disrespected minority group. Most people are able to use more than one style of speech, also called register, so that they can adjust depending on who they are interacting with: their family and friends, their boss, a teacher, or other members of the community.
Linguistic processes: New developments that promote the simplification of pronunciation or syntactic changes to clarify meaning can also contribute to language change.
These factors do not work in isolation. Any language variation is the result of a number of social, historical, and linguistic factors that collectively affect the speech of individuals, so dialect change in a particular speech community is a continual process.
Try This: Which of these terms do you use, pop versus soda versus coke? Pail versus bucket? Do you say “vayse” or “vahze” for the vessel you put flowers in? Where are you from? Can you find out where each term or pronunciation is typically used? Can you find other regional differences like these?
What Is a “Standard” Variety of a Language?
The standard of any language is simply one of many variants that has been given special prestige in the community because it is spoken by the people who have the greatest amount of prestige, power, and (usually) wealth. In the case of English, its development has been in part the result of the invention of the printing press in the fifteenth century and the subsequent increase in printed versions of the language. This then stimulated more than a hundred years of deliberate efforts by grammarians to standardize spelling and grammatical rules. Their decisions invariably favored the dialect spoken by the aristocracy. Some of their other decisions were rather arbitrarily determined by standards more appropriate to Latin, or even mathematics. For example, as in many other languages, it was typical among the common people of the time (and it still is among the present-day working classes and in casual speech) to use multiple negative particles in a sentence, like “I don’t have no money.” Those eighteenth-century grammarians said we must use either don’t or no, but not both, that is, “I don’t have any money” or “I have no money.” They based this on a mathematical rule that says that two negatives make a positive. (When multiplying two negative numbers, such as -5 times -2, the result is a positive 10.) These grammarians claimed that if we used the double negative, we would really be saying the positive, or “I have money.” Obviously, anyone who utters that double-negative sentence is not trying to say that they have money, but the rule still applies for standard English to this day.
Non-standard varieties of English, also known as vernaculars, are usually distinguished from the standard by their inclusion of such stigmatized forms as multiple negatives, the use of the verb form ain’t (which was originally the normal contraction of am not, as in “I ain’t,” comparable to “you aren’t,” or “she isn’t”); pronunciation of words like this and that as dis and dat; pronunciation of final “–ing” as “–in;” and any other feature that grammarians have decreed as “improper” English.
The standard of any language is a rather artificial, idealized form of language, the language of education. One must learn its rules in school because it is not anyone’s true first language. Everyone speaks a dialect, although some dialects are closer to the standard than others. Those that are regarded with the least prestige and respect in society are associated with the groups of people who have the least amount of social prestige. People with the highest levels of education have greater access to the standard, but even they usually revert to their first dialect as the appropriate register in the context of an informal situation with friends and family. In other words, no language variety is inherently better or worse than any other one. It is due to social attitudes that people label some varieties as “better” or “proper,” and others as “incorrect” or “bad.” Recall Language Universal 3: “All languages are systematic, rule driven, and equally complex overall, and equally capable of expressing any idea that the speaker wishes to convey.”
In 1972 sociolinguist William Labov did an interesting study in which he looked at the pronunciation of the sound /r/ in the speech of New Yorkers in two different department stores. Many people from that area drop the /r/ sound in words like fourth and floor (fawth, floah), but this pronunciation is primarily associated with lower social classes and is not a feature of the approved standard for English, even in New York City. In two different contexts, an upscale store and a discount store, Labov asked customers what floor a certain item could be found on, already knowing it was the fourth floor. He then asked them to repeat their answer, as though he hadn’t heard it correctly. He compared the first with the second answers by the same person, and he compared the answers in the expensive store versus the cheaper store. He found 1) that the responders in the two stores differed overall in their pronunciation of this sound, and 2) that the same person may differ between situations of less and more self-consciousness (first versus second answer). That is, people in the upscale store tended to pronounce the /r/, and responders in both stores tended to produce the standard pronunciation more in their second answers in an effort to sound “higher class.” These results showed that the pronunciation or deletion of /r/ in New York correlates with both social status and context.[4]
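Labov’s method amounts to counting pronunciations across conditions. The Python sketch below shows how such counts might be tabulated; the responses in it are invented to mirror the pattern he reported and are not his actual data.

```python
from collections import defaultdict

# Hypothetical responses: (store, answer_number, whether /r/ was pronounced).
responses = [
    ("upscale", 1, True),  ("upscale", 2, True),
    ("upscale", 1, False), ("upscale", 2, True),
    ("discount", 1, False), ("discount", 2, False),
    ("discount", 1, False), ("discount", 2, True),
]

# Tally /r/ pronunciation by store and by first vs. second answer.
totals = defaultdict(lambda: [0, 0])   # key -> [times /r/ pronounced, total]
for store, answer, has_r in responses:
    for key in (store, f"answer {answer}"):
        totals[key][0] += int(has_r)
        totals[key][1] += 1

for key, (r_count, n) in totals.items():
    print(f"{key}: /r/ pronounced in {r_count}/{n} responses "
          f"({100 * r_count / n:.0f}%)")
```

With real data, higher rates of /r/ in the upscale store and in the more careful second answers are what support the correlation with social status and context.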
There is nothing inherently better or worse in either pronunciation; it depends entirely on the social norms of the community. The same /r/ deletion that is stigmatized in New York City is the prestigious, standard form in England, used by the upper class and announcers for the BBC. The pronunciation of the /r/ sound in England is stigmatized because it is used by lower-status people in some industrial cities.
It is important to note that almost everyone has access to a number of different language variations and registers. They know that one variety is appropriate to use with some people in some situations, and others should be used with other people or in other situations. The use of several language varieties in a particular interaction is known as code-switching.
Try This: To understand the importance of using the appropriate register in a given context, the next time you are with a close friend or family member try using the register, or style of speech, that you might use with your professor or a respected member of the clergy. What is your friend’s reaction? I do not recommend trying the reverse experiment, using a casual vernacular register with such a respected person (unless they are also a close friend). Why not?
Linguistic Relativity: The Whorf Hypothesis
In the early 1930s, Benjamin Whorf was a graduate student studying with linguist Edward Sapir at Yale University in New Haven, Connecticut. Sapir, considered the father of American linguistic anthropology, was responsible for documenting and recording the languages and cultures of many Native American tribes, which were disappearing at an alarming rate. This was due primarily to the deliberate efforts of the United States government to force Native Americans to assimilate into the Euro-American culture. Sapir and his predecessors were well aware of the close relationship between culture and language because each culture is reflected in and influences its language. Anthropologists need to learn the language of the culture they are studying in order to understand the world view of its speakers. Whorf believed that the reverse is also true, that a language affects culture as well, by actually influencing how its speakers think. His hypothesis proposes that the words and the structures of a language influence how its speakers think about the world, how they behave, and ultimately the culture itself. (See our definition of culture above.) Simply stated, Whorf believed that human beings see the world the way they do because the specific languages they speak influence them to do so. He developed this idea through both his work with Sapir and his work as a chemical engineer for the Hartford Insurance Company investigating the causes of fires.
One of his cases while working for the insurance company was a fire at a business where there were a number of gasoline drums. Those that contained gasoline were surrounded by signs warning employees to be cautious around them and to avoid smoking near them. The workers were always careful around those drums. On the other hand, empty gasoline drums were stored in another area, but employees were more careless there. Someone tossed a cigarette or lighted match into one of the “empty” drums, it went up in flames, and started a fire that burned the business to the ground. Whorf theorized that the meaning of the word empty implied to the worker that “nothing” was there to be cautious about so the worker behaved accordingly. Unfortunately, an “empty” gasoline drum may still contain fumes, which are more flammable than the liquid itself.
Whorf’s studies at Yale involved working with Native American languages, including Hopi. The Hopi language is quite different from English in many ways. For example, let’s look at how the Hopi language deals with time. Western languages (and cultures) view time as a flowing river in which we are being carried continuously away from a past, through the present, and into a future. Our verb systems reflect that concept with specific tenses for past, present, and future. We think of this concept of time as universal, that all humans see it the same way. A Hopi speaker has very different ideas and the structure of their language both reflects and shapes the way they think about time. The Hopi language has no present, past, or future tense. Instead, it divides the world into what Whorf called the manifested and unmanifest domains. The manifested domain deals with the physical universe, including the present, the immediate past and future; the verb system uses the same basic structure for all of them. The unmanifest domain involves the remote past and the future, as well as the world of desires, thought, and life forces. The set of verb forms dealing with this domain is consistent for all of these areas and is different from the manifested ones. Also, there are no words for hours, minutes, or days of the week.
Native Hopi speakers often had great difficulty adapting to life in the English speaking world when it came to being “on time” for work or other events. It is simply not how they had been conditioned to behave with respect to time in their Hopi world, which followed the phases of the moon and the movements of the sun. In a book about the Abenaki who lived in Vermont in the mid-1800s, Trudy Ann Parker described their concept of time, which very much resembled that of the Hopi and many of the other Native American tribes. “They called one full day a sleep, and a year was called a winter. Each month was referred to as a moon and always began with a new moon. An Indian day wasn’t divided into minutes or hours. It had four time periods—sunrise, noon, sunset, and midnight. Each season was determined by the budding or leafing of plants, the spawning of fish or the rutting time for animals. Most Indians thought the white race had been running around like scared rabbits ever since the invention of the clock.”[5]
The lexicon, or vocabulary, of a language is an inventory of the items a culture talks about and has categorized in order to make sense of the world and deal with it effectively. For example, modern life is dictated for many by the need to travel by some kind of vehicle—cars, trucks, SUVs, trains, buses, etc. We therefore have thousands of words to talk about them, including types of vehicles, models, brands, or parts.
The most important aspects of each culture are similarly reflected in the lexicon of its language. Among the societies living in the islands of Oceania in the Pacific, fish have great economic and cultural importance. This is reflected in the rich vocabulary that describes all aspects of the fish and the environments that islanders depend on for survival. For example, in Palau there are about 1,000 fish species and Palauan fishermen knew, long before biologists existed, details about the anatomy, behavior, growth patterns and habitat of most of them—in many cases far more than modern biologists know even today. Much of fish behavior is related to the tides and the phases of the moon. Throughout Oceania, the names given to certain days of the lunar months reflect the likelihood of successful fishing. For example, in the Caroline Islands, the name for the night before the new moon is otolol, which means “to swarm.” The name indicates that the best fishing days cluster around the new moon. In Hawai`i and Tahiti two sets of days have names containing the particle `ole or `ore; one occurs in the first quarter of the moon and the other in the third quarter. The same name is given to the prevailing wind during those phases. The words mean “nothing,” because those days were considered bad for fishing as well as planting.
Parts of Whorf’s hypothesis, known as linguistic relativity, were controversial from the beginning, and still are among some linguists. Yet Whorf’s ideas now form the basis for an entire sub-field of cultural anthropology: cognitive or psychological anthropology. A number of studies have been done that support Whorf’s ideas. Linguist George Lakoff’s work looks at the pervasive existence of metaphors in everyday speech that can be said to predispose a speaker’s world view and attitudes on a variety of human experiences.[6]
A metaphor is an expression in which one kind of thing is understood and experienced in terms of another entirely unrelated thing; the metaphors in a language can reveal aspects of the culture of its speakers. Take, for example, the concept of an argument. In logic and philosophy, an argument is a discussion involving differing points of view, or a debate. But the conceptual metaphor in American culture can be stated as ARGUMENT IS WAR. This metaphor is reflected in many expressions of the everyday language of American speakers: I won the argument. He shot down every point I made. They attacked every argument we made. Your point is right on target. I had a fight with my boyfriend last night. In other words, we use words appropriate for discussing war when we talk about arguments, which are certainly not real war. But we actually think of arguments as verbal battles that often involve anger, and even violence, which then structures how we argue.
To illustrate that this concept of argument is not universal, Lakoff suggests imagining a culture where an argument is not something to be won or lost, with no strategies for attacking or defending, but rather as a dance where the dancers’ goal is to perform in an artful, pleasing way. No anger or violence would occur or even be relevant to speakers of this language, because the metaphor for that culture would be ARGUMENT IS DANCE.
LANGUAGE IN ITS SOCIAL SETTINGS: LANGUAGE AND IDENTITY
The way we speak can be seen as a marker of who we are and with whom we identify. We talk like the other people around us, reflecting where we live, our social class, our region of the country, our ethnicity, and even our gender. These categories are not homogeneous. Not all New Yorkers talk exactly the same; not all women speak according to stereotypes; not all African-Americans speak an African-American dialect. No one speaks the same way in all situations and contexts, but there are some consistencies in speaking styles that are associated with many of these categories.
Social Class
As discussed above, people can indicate social class by the way they speak. The closer to the standard version their dialect is, the more they are seen as a member of a higher social class because the dialect reflects a higher level of education. In American culture, social class is defined primarily by income and net worth, and it is difficult (but not impossible) to acquire wealth without a high level of education. However, the speech of people in the higher social classes also varies with the region of the country where they live, because there is no single standard of American English, especially with respect to pronunciation. An educated Texan will sound different from an educated Bostonian, but they will use the standard version of English from their own region. The lower the social class of a community, the more their language variety will differ from both the standard and from the vernaculars of other regions.
Ethnicity
An ethnicity, or ethnic group, is a group of people who identify with each other based on some combination of shared cultural heritage, ancestry, history, country of origin, language, or dialect. In the United States such groups are frequently referred to as “races,” but there is no such thing as biological race, and this misconception has historically led to racism and discrimination. Because of the social implications and biological inaccuracy of the term “race,” it is often more accurate and appropriate to use the terms ethnicity or ethnic group. A language variety is often associated with an ethnic group when its members use language as a marker of solidarity. They may also use it to distinguish themselves from a larger, sometimes oppressive, language group when they are a minority population.
A familiar example of an oppressed ethnic group with a distinctive dialect is African-Americans. They have a unique history among minorities in the United States, with their centuries-long experience as captive slaves and subsequent decades under Jim Crow laws. (These laws restricted their rights after their emancipation from slavery.) With the Civil Rights Acts of 1964 and 1968 and other laws, African-Americans gained legal rights to access public places and housing, but it is not possible to eliminate racism and discrimination only by passing laws; both still exist among the white majority. It is no longer “politically correct” to openly express racism, but it is much less frowned upon to express negative attitudes about African-American Vernacular English (AAVE). Typically, it is not the language itself that these attitudes are targeting; it is the people who speak it.
As with any language variety, AAVE is a complex, rule-driven, grammatically consistent language variety, a dialect of American English with a distinctive history. A widely accepted hypothesis of the origins of AAVE is as follows. When Africans were captured and brought to the Americas, they brought their own languages with them. But some of them already spoke a version of English called a pidgin. A pidgin is a language that springs up out of a situation in which people who do not share a language must spend extended amounts of time together, usually in a working environment. Pidgins are the only exception to Language Universal 3 (all languages are systematic, rule driven, and equally complex overall, and equally capable of expressing any idea that the speaker wishes to convey).
There are no primitive languages, but a pidgin is a simplified language form, cobbled together based mainly on one core language, in this case English, using a small number of phonemes, simplified syntactic rules, and a minimal lexicon of words borrowed from the other languages involved. A pidgin has no native speakers; it is used primarily in the environment in which it was created. An English-based pidgin was used as a common language in many areas of West Africa by traders interacting with people of numerous language groups up and down the major rivers. Some of the captive Africans could speak this pidgin, and it spread among them after the slaves arrived in North America and were exposed daily to English speakers. Eventually, the use of the pidgin expanded to the point that it developed into the original forms of what has been called a Black English plantation creole. A creole is a language that develops from a pidgin when it becomes so widely used that children acquire it as one of their first languages. In this situation it becomes a more fully complex language consistent with Universal number 3.
Not all African-Americans speak AAVE, and some people who are not African-American also speak it. Anyone who grows up in an area where their friends speak it may be a speaker of AAVE, like the rapper Eminem, a white man who grew up in an African-American neighborhood in Detroit. Present-day AAVE is not homogeneous; there are many regional and class variations. Most variations have several features in common, for instance, two phonological features: the dropped /r/ typical of some New York dialects, and the pronunciation of the “th” sound of words like this and that as a /d/ sound, dis and dat. Most of the features of AAVE are also present in many other English dialects, but those dialects are not as severely stigmatized as AAVE is. It is interesting, but not surprising, that AAVE and southern dialects of white English share many features. During the centuries of slavery in the south, African-American slaves outnumbered whites on most plantations. Which group do you think had the most influence on the other group’s speech? The African-American community itself is divided about the acceptability of AAVE. It is probably because of the historical oppression of African-Americans as a group that the dialect has survived to this day, in resistance to the majority white society’s disapproval.
Language and Gender
In any culture that has differences in gender role expectations—and all cultures do—there are differences in how people talk based on their sex and gender identity. These differences have nothing to do with biology. Children are taught from birth how to behave appropriately as a male or a female in their culture, and different cultures have different standards of behavior. It must be noted that not all men and women in a society meet these standards, but when they do not they may pay a social price. Some societies are fairly tolerant of violations of their standards of gendered behavior, but others are less so.
In the United States, men are generally expected to speak in a low, rather monotone pitch; it is seen as masculine. If they do not sound sufficiently masculine, American men are likely to be negatively labeled as effeminate. Women, on the other hand, are freer to use their entire pitch range, which they often do when expressing emotion, especially excitement. When a woman is a television news announcer, she will modulate the pitch of her voice toward a range more typical of a man’s in order to be perceived as more credible. Women tend to use minimal responses in a conversation more than men. These are the vocal indications that one is listening to a speaker, such as m-hm, yeah, I see, wow, and so forth. They tend to face their conversation partners more and use more eye contact than men. This is one reason women often complain that men do not listen to them.
Deborah Tannen, a professor of linguistics at Georgetown University in Washington, D.C., has done research for many years on language and gender. Her basic finding is that in conversation women tend to use styles that are relatively cooperative, to emphasize an equal relationship, while men seem to talk in a more competitive way in order to establish their positions in a hierarchy. She emphasizes that both men and women may be cooperative and competitive in different ways.[7]
Other societies have very different standards for gendered speech styles. In Madagascar, men use a very flowery style of talk, using proverbs, metaphors and riddles to indirectly make a point and to avoid direct confrontation. The women on the other hand speak bluntly and say directly what is on their minds. Both men and women admire men’s speech and think of women’s speech as inferior. When a man wants to convey a negative message to someone, he will ask his wife to do it for him. In addition, women control the marketplaces where tourists bargain for prices because it is impossible to bargain with a man who will not speak directly. It is for this reason that Malagasy women are relatively independent economically.
In Japan, women were traditionally expected to be subservient to men and to speak in a “feminine” style appropriate to their position as wife and mother, but Japanese culture has been changing in recent decades as more and more women join the work force and achieve positions of relative power. Such women must find ways of speaking that maintain their feminine identities while expressing their authority in interactions with men, a challenging balancing act. To a certain extent, women in the United States face the same balancing act. Even Margaret Thatcher, prime minister of the United Kingdom, took voice training lessons to adjust her speech, balancing expectations of femininity with an expression of authority.
The Deaf Culture and Signed Languages
Deaf people constitute a linguistic minority in many societies worldwide based on their common experience of life. This often results in their identification with a local Deaf culture. Such a culture may include shared beliefs, attitudes, values, and norms, like any other culture, and it is invariably marked by communication through a sign language. It is not enough to be physically deaf (spelled with a lower case “d”) to belong to a Deaf culture (written with a capital “D”). In fact, one does not even need to be deaf. Identification with a Deaf culture is a personal choice. It can include family members of deaf people or anyone else who associates with deaf people, as long as the community accepts them. Especially important, members of Deaf culture are expected to be competent communicators in the sign language of the culture. In fact, there have been profoundly deaf people who were not accepted into the local Deaf community because they could not sign. In some deaf schools, at least in the United States, the practice has been to teach deaf children how to lip read and speak orally, and to prevent them from using a signed system; they were expected to blend in with the hearing community as much as possible. This is called the oralist approach to education, but it is considered by members of the Deaf community to be a threat to the existence of their culture. For the same reason, the development of cochlear implants, which can provide a degree of hearing for some deaf children, has been controversial in U.S. Deaf communities. Members often have a positive attitude toward their deafness and do not consider it to be a disability. To them, regaining hearing represents disloyalty to the group and a desire to leave it.
According to the World Federation of the Deaf, there are over 200 distinct sign languages in the world, and they are not mutually comprehensible. They are all considered by linguists to be true languages, consistent with linguistic definitions of all human languages. They differ only in being based on a gestural-visual rather than a vocal-auditory sensory mode. Each is a true language with basic units comparable to phonemes but composed of hand positions, shapes, and movements, plus some facial expressions. Each has its own unique set of morphemes and grammatical rules. American Sign Language (ASL), too, is a true language separate from English; it is not English on the hands. As with other signed languages, it is possible to sign a word-for-word translation of English, using finger spelling for some words, which is helpful in teaching deaf children to read, but most Deaf signers prefer their own language, ASL, for ordinary interactions. Of course, Deaf culture identity intersects with other kinds of cultural identity, like nationality, ethnicity, gender, class, and sexual orientation, so each Deaf culture is not only small but very diverse.
LANGUAGE CHANGE: HISTORICAL LINGUISTICS
Recall the language universal stating that all languages change over time. In fact, it is not possible to keep them from doing so. How and why does this happen? The study of how languages change is known as historical linguistics. The processes, both historical and linguistic, that cause language change can affect all of its systems: phonological, morphological, lexical, syntactic, and semantic.
Historical linguists have placed most of the languages of the world into taxonomies, groups of languages classified together on the basis of systematic similarities among their words, that is, words with similar forms and the same or similar meanings. Language taxonomies create something like a family tree of languages. For example, words in the Romance family of languages, called sister languages, show great similarities to each other because they have all derived from the same “mother” language, Latin (the language of Rome). In turn, Latin is considered a “sister” language to Sanskrit (once spoken in India and now the mother language of many of India’s modern languages, and still the language of the Hindu religion) and classical Greek. Their “mother” language is called “Indo-European,” which is also the mother (or grandmother!) language of almost all the rest of the European languages.
Let’s briefly examine the history of the English language as an example of these processes of change. England was originally populated by Celtic peoples, the ancestors of today’s Irish, Scots, and Welsh. The Romans invaded the islands in the first century AD, bringing their Latin language with them. This was the edge of their empire; their presence there was not as strong as it was on the European mainland. When the Roman Empire was defeated in about 500 AD by Germanic-speaking tribes from northern Europe (the “barbarians”), a number of those related Germanic languages came to be spoken in various parts of what would become England. These included the languages of the Angles and the Saxons, whose names form the origin of the term Anglo-Saxon and of the name of England itself—Angle-land. At this point, the languages spoken in England included those Germanic languages, which gradually merged as various dialects of English, with a small influence from the Celtic languages, some Latin from the Romans, and a large influence from Viking invaders. This form of English, generally referred to as Old English, lasted for about 500 years. In 1066 AD, England was invaded by William the Conqueror from Normandy, France. The new French rulers brought the French language with them. French is a Latin-based language, and it is by far the greatest source of the Latin-based words in English today; almost 10,000 French words were adopted into the English of the time period. This was the beginning of Middle English, which lasted another 500 years or so.
The change to Modern English had two main causes. One was the invention of the printing press in the fifteenth century, which resulted in a deliberate effort to standardize the various dialects of English, mostly in favor of the dialect spoken by the elite. The other source of change, during the fifteenth and sixteenth centuries, was a major shift in the pronunciation of many of the vowels, known as the Great Vowel Shift. Middle English words like hus and ut came to be pronounced house and out. Many other vowel sounds also changed in a similar manner.
None of the early forms of English are easily recognizable as English to modern speakers. Here is an example of the first two lines of the Lord’s Prayer in Old English, from 995 AD, before the Norman Invasion:
Fæder ūre, ðū ðē eart on heofonum,
Sī ðīn nama gehālgod.
Here are the same two lines in Middle English, English spoken from 1066 AD until about 1500 AD. These are taken from the Wycliffe Bible in 1389 AD:
Our fadir that art in heuenes,
halwid be thi name. [8]
The following late Middle English/early Modern English version, from the 1526 AD Tyndale Bible, shows some of the results of grammarians’ efforts to standardize spelling and vocabulary so that the printed word could be distributed more widely after the invention of the printing press:
O oure father which arte in heven,
halowed be thy name.
And finally, this example is from the King James Version of the Bible, 1611 AD, in the early Modern English language of Shakespeare. It is almost the same archaic form that many modern Christians still use.
Our father which art in heauen,
hallowed be thy name.
Over the centuries since the beginning of Modern English, it has been further affected by exposure to other languages and dialects worldwide. This exposure brought about new words and changed meanings of old words. More changes to the sound systems resulted from phonological processes that may or may not be attributable to the influence of other languages. Many other changes, especially in recent decades, have been brought about by cultural and technological changes that require new vocabulary to deal with them.
Try This: Just think of all the words we use today that have either changed their primary meanings, or are completely new: mouse and mouse pad, google, app, computer (which used to be a person who computes!), texting, cool, cell, gay. How many more can you think of?
GLOBALIZATION AND LANGUAGE
Globalization is the spread of people, their cultures and languages, products, money, ideas, and information around the world. Globalization is nothing new; it has been happening throughout the existence of humans, but for the last 500 years it has been increasing in its scope and pace, primarily due to improvements in transportation and communication. Beginning in the fifteenth century, English explorers started spreading their language to colonies in all parts of the world. English is now one of the three or four most widely spoken languages. It has official status in at least 60 countries, and it is widely spoken in many others. Other colonizers also spread their languages, especially Spanish, French, Portuguese, Arabic, and Russian. Like English, each has its regional variants. One effect of colonization has often been the suppression of local languages in favor of the language of the more powerful colonizers.
In the past half century, globalization has been dominated by the spread of North American popular culture and language to other countries. Today it is difficult to find a country that does not have American music, movies and television programs, or Coca Cola and McDonald’s, or many other artifacts of life in the United States, and the English terms that go with them.
In addition, people are moving from rural areas to cities in their own countries, or they are migrating to other countries in unprecedented numbers. Many have moved because they are refugees fleeing violence or because they find it increasingly difficult to survive economically in their own countries. This mass movement of people has led to the ongoing extinction of large numbers of the world’s languages as people abandon their home regions and languages in order to assimilate into their new homes.
Language Shift, Language Maintenance, and Language Death
About half of the world’s more than seven billion people speak just ten of the approximately 6,000 languages still surviving today. These include Mandarin Chinese, two languages from India, Spanish, English, Arabic, Portuguese, Russian, Japanese, and German. Many of the rest of the world’s languages are spoken by a few thousand people, or even just a few hundred, and most of them are threatened with extinction, a process called language death. It has been predicted that by the end of this century up to 90 percent of the languages spoken today will be gone. The rapid disappearance of so many languages is of great concern to linguists and anthropologists alike. When a language is lost, its associated culture and unique set of knowledge and worldview are lost with it forever. Remember Whorf’s hypothesis. An interesting website shows short videos of the last speakers of several endangered languages, including one speaking an African “click language.”
Some minority languages are not threatened with extinction, even those that are spoken by a relatively small number of people. Others, spoken by many thousands, may be doomed. What determines which survive and which do not? Smaller languages that are associated with a specific country are likely to survive. Others that are spoken across many national boundaries are also less threatened, such as Quechua, an indigenous language spoken throughout much of South America, including Colombia, Ecuador, Peru, Chile, Bolivia, and Argentina. The great majority of the world’s languages are spoken by people with minority status in their countries. After all, there are only about 193 countries in the world, and over 6,000 languages are spoken in them. You can do the math.
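To get a rough sense of the scale of this imbalance, the back-of-the-envelope arithmetic below uses only the approximate figures cited above (about 6,000 languages, about 193 countries, and a predicted loss of up to 90 percent by the end of the century). It is a minimal illustrative sketch, not a precise count.

```python
# Rough arithmetic based on the approximate figures cited in this section.
languages = 6000        # approximate number of surviving languages
countries = 193         # approximate number of countries in the world
predicted_loss = 0.90   # predicted share of today's languages lost by 2100

avg_languages_per_country = languages / countries
languages_surviving_2100 = languages * (1 - predicted_loss)

print(f"Average languages per country: {avg_languages_per_country:.0f}")          # about 31
print(f"Languages predicted to survive to 2100: {languages_surviving_2100:.0f}")  # about 600
```

Even this crude average of roughly thirty languages per country makes clear why the great majority of the world’s languages are spoken by people with minority status in their own countries.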
The survival of the language of a given speech community is ultimately based on the accumulation of individual decisions by its speakers to continue using it or to abandon it. The abandonment of a language in favor of a new one is called language shift. These decisions are usually influenced by the society’s prevailing attitudes. In the case of a minority speech community that is surrounded by a more powerful majority, an individual might keep or abandon the native language depending on a complex array of factors. The most important factors will be the attitudes of the minority people toward themselves and their language, and the attitude of the majority toward the minority.
Language represents a marker of identity, an emblem of group membership and solidarity, but that marker may have a downside as well. If the majority looks down on the minority as inferior in some way and discriminates against them, some members of the minority group may internalize that attitude and try to blend in with the majority by adopting the majority’s culture and language. Others might more highly value their identity as members of that stigmatized group, in spite of the discrimination by the majority, and continue to speak their language as a symbol of resistance against the more powerful group. One language that is a minority language in the United States, yet shows no sign of dying out either there or in the world at large, is Spanish. It is the primary language in many countries, and in the United States it is by far the largest minority language.
A former student of mine, James Kim (pictured in Figure 3 as a child with his brother), illustrates some of the common dilemmas a child of immigrants might go through as he loses his first language. Although he was born in California, he spoke only Korean for the first six years of his life. Then he went to school, where he was the only Korean child in his class. He quickly learned English, the language of instruction and the language of his classmates. Under peer pressure, he began refusing to speak Korean, even to his parents, who spoke little English. His parents tried to encourage him to keep his Korean language and culture by sending him to Korean school on Saturdays, but soon he refused to attend. As a college student, James began to regret the loss of the language of his parents, not to mention his relationship with them. He tried to take a college class in Korean, but it was too difficult and time consuming. After consulting with me, he created a six-minute radio piece, called “First Language Attrition: Why My Parents and I Don’t Speak the Same Language,” while he was an intern at a National Public Radio station. He interviewed his parents in the piece and was embarrassed to realize he needed an interpreter.[9] Since that time, he has started taking Korean lessons again, and he took his first trip to Korea with his family during the summer of 2014. He was very excited about the prospect of reconnecting with his culture, with his first language, and especially with his parents.
The Korean language as a whole is in no danger of extinction, but many Korean speaking communities of immigrants in the United States, like other minority language groups in many countries, are having difficulty maintaining their language and culture. Those who are the most successful live in large, geographically coherent neighborhoods; they maintain closer ties to their homeland by frequent visits, telephone, and email contact with relatives. There may also be a steady stream of new immigrants from the home country. This is the case with most Spanish speaking communities in the United States, but it is less so with the Korean community.[10]
Another example of an oppressed minority group that has struggled with language and culture loss is Native Americans. Many groups were completely wiped out by the European colonizers, some by deliberate genocide but the great majority (up to 90 percent) by the diseases that the white explorers brought with them, against which Native Americans had no immunity. In the twentieth century, the American government stopped trying to kill Native Americans and instead tried to assimilate them into the white majority culture. It did this in part by forcing Native American children to go to boarding schools where they were required to cut their hair, practice Christianity, and speak only English. When they were allowed to go back home years later, they had lost their languages and their culture, but had not become culturally “white” either. The status of Native Americans in the nineteenth and twentieth centuries as a scorned minority prompted many to hide their ethnic identities even from their own children. In this way, the many hundreds of original Native American languages in the United States have dwindled to fewer than 140 spoken today, according to UNESCO. More than half of those could disappear in the next few years, since many are spoken by only a handful of older members of their tribes. However, a number of Native American tribes have recently been making efforts to revive their languages and cultures, with the help of linguists and often by using texts and old recordings made by early linguists like Edward Sapir.
Revitalization of Indigenous Languages
A fascinating example of a tribal language revitalization program is that of the Wampanoag tribe in Massachusetts. The Wampanoag were the Native Americans who met the Puritans when they landed at Plymouth Rock, helped them survive the first winter, and who were with them at the first Thanksgiving. The contemporary descendants of that historic tribe still live in Massachusetts, but bringing back their language was not something Wampanoag people had ever thought possible because no one had spoken it for more than a century.
A young Wampanoag woman named Jessie Little Doe Baird (pictured in Figure 4 with her daughter Mae) was inspired by a series of dreams in which her ancestors spoke to her in their language, which she of course did not understand. She eventually earned a master’s degree in Algonquian linguistics at the Massachusetts Institute of Technology in Cambridge, Massachusetts, and launched a project to bring her language back from the dead. This process was made possible by the existence of a large collection of documents, including copies of the King James Bible, written phonetically in Wampanoag during the seventeenth and eighteenth centuries. She also worked with speakers of related languages in the Algonquian family to help in the reconstruction of the language. The community has established a school to teach the language to the children and to promote its use among the entire community. Her daughter Mae is among the first new native speakers of Wampanoag.[11]
How Is the Digital Age Changing Communication?
The invention of the printing press in the fifteenth century was just the beginning of technological transformations that made it possible to spread information and ideas in European languages across time and space through the printed word. Recent advances in travel and digital technology are rapidly transforming communication; now we can be in contact with almost anyone, anywhere, in seconds. However, it could be said that the new age of instantaneous access to everything and everyone is actually continuing a social divide that started with the printing press.
In the fifteenth century, few people could read and write, so only the tiny educated minority were in a position to benefit from printing. Today, only those who have computers and the skills to use them, the educated and relatively wealthy, have access to this brave new world of communication. Some schools have adopted computers and tablets for their students, but these schools are more often found in wealthier neighborhoods. Thus, technology is continuing to contribute to the growing gap between the economic haves and the have-nots.
There is also a digital generation gap between the young, who have grown up with computers, and the older generations, who have had to learn to use computers as adults. These two generations have been referred to as digital natives and digital immigrants.[12] The difference between the two groups can be compared to that of children versus adults learning a new language; learning is accomplished much more easily by the young.
Computers, and especially social media, have made it possible for millions of people to connect with each other for purposes of political activism, including “Occupy Wall Street” in the United States and the “Arab Spring” in the Middle East. Some anthropologists have introduced computers and cell phones to the people they studied in remote areas, and in this way they were able to stay in contact after finishing their ethnographic work. Those people, in turn, were now able to have greater access to the outside world.
Facebook and Twitter are becoming key elements in the survival of a number of endangered indigenous languages. Facebook is now available in over 70 languages, and Twitter in about 40 languages. For example, a website has been created that seeks to preserve Anishinaabemowin, an endangered Native American language from Michigan. The language has 8,000-10,000 speakers, but most of the native speakers are over 70 years old, which means the language is threatened with extinction. Modern social media are an ideal medium to help encourage young people to communicate in their language to keep it alive.[13] Clearly, language and communication through modern technology are in the forefront of a rapidly changing world, for better or for worse. It’s anybody’s guess what will happen next.
Discussion Questions
- How do you think modern communication technologies like cell phones and computers are changing how people communicate? Is the change positive or negative?
- How is language related to social and economic inequality? Do you think that attitudes about language varieties have affected you and/or your family?
- How has the use of specific terms in the news helped to shape public opinion? For example, what are the different implications of the terms terrorist versus freedom fighter? Downsizing versus firing staff at a company? What about euphemistic terms used in reference to war, such as friendly fire, pacification, and collateral damage? Can you think of other examples?
- Think about the different styles you use when speaking to your siblings and parents, your friends, your significant other, your professors, your grandparents. What are some of the specific differences among these styles? What do these differences indicate about the power relationships between you and others?
GLOSSARY
Arbitrariness: the relationship between a symbol and its referent (meaning), in which there is no obvious connection between them.
Bound morpheme: a unit of meaning that cannot stand alone; it must be attached to another morpheme.
Closed system: a form of communication that cannot create new meanings or messages; it can only convey pre-programmed (innate) messages.
Code-switching: using two or more language varieties in a particular interaction.
Creole: a language that develops from a pidgin when the pidgin becomes so widely used that children acquire it as one of their first languages. Creoles are more fully complex than pidgins.
Critical age range hypothesis: the hypothesis, supported by research, that a child will gradually lose the ability to acquire language naturally and without effort if he or she is not exposed to other people speaking a language until past the age of puberty. It applies to the acquisition of a second language as well.
Cultural transmission: the need for some aspects of the system to be learned; a feature of some species’ communication systems.
Design features: descriptive characteristics of the communication systems of all species, including that of humans, proposed by linguist Charles Hockett to serve as a definition of human language.
Dialect: a variety of speech. The term is often applied to a subordinate variety of a language. Speakers of two dialects of the same language do not necessarily always understand each other.
Discreteness: a feature of human speech in which the individual sounds are discrete units that can be isolated from one another.
Displacement: the ability to communicate about things that are outside of the here and now.
Duality of patterning: at the first level of patterning, meaningless discrete sounds of speech are combined to form words and parts of words that carry meaning. At the second level of patterning, those units of meaning are recombined to form a potentially infinite number of longer messages such as phrases and sentences.
Gesture-call system: a system of non-verbal communication using varying combinations of sound, body language, scent, facial expression, and touch, typical of great apes and other primates, as well as humans.
Historical linguistics: the study of how languages change.
Interchangeability: the ability of all individuals of the species to both send and receive messages; a feature of some species’ communication systems.
Kinesics: the study of all forms of human body language.
Language: an idealized form of speech, usually referred to as the standard variety.
Language death: the total extinction of a language.
Language shift: when a community stops using their old language and adopts a new one.
Language universals: characteristics shared by all human languages.
Larynx: the voice box, containing the vocal bands that produce the voice.
Lexicon: the vocabulary of a language.
Linguistic relativity: the idea that the structures and words of a language influence how its speakers think, how they behave, and ultimately the culture itself (also known as the Whorf Hypothesis).
Middle English: the form of the English language spoken from 1066 AD until about 1500 AD.
Minimal response: the vocal indications that one is listening to a speaker.
Modern English: the form of the English language spoken from about 1500 AD to the present.
Morphemes: the basic meaningful units in a language.
Morphology: the study of the morphemes of language.
Old English: English language from its beginnings to about 1066 AD.
Open system: a form of communication that can create an infinite number of new messages; a feature of human language only.
Oralist approach: an approach to the education of deaf children that emphasizes lip reading and speaking orally while discouraging use of signed language.
Palate: the roof of the mouth.
Paralanguage: those characteristics of speech beyond the actual words spoken, such as pitch, loudness, and tempo.
Pharynx: the throat cavity, located above the larynx.
Phonemes: the basic meaningless sounds of a language.
Phonology: the study of the sounds of language.
Pidgin: a simplified language that springs up out of a situation in which people who do not share a language must spend extended amounts of time together.
Pragmatic function: the useful purpose of a communication. Usefulness is a feature of all species’ communication systems.
Pragmatics: how social context contributes to meaning in an interaction.
Productivity/creativity: the ability to produce and understand messages that have never been expressed before.
Proxemics: the study of the social use of space, including the amount of space an individual tries to maintain around himself in his interactions with others.
Register: a style of speech that varies depending on who is speaking to whom and in what context.
Semanticity: the meaning of signs in a communication system; a feature of all species’ communication systems.
Semantics: how meaning is conveyed at the word and phrase level.
Speech act: the intention or goal of an utterance; the intention may be different from the dictionary definitions of the words involved.
Standard: the variant of any language that has been given special prestige in the community.
Symbol: anything that serves to refer to something else.
Syntax: the rules by which a language combines morphemes into larger units.
Taxonomies: systems of classification.
Universal grammar (UG): a theory developed by linguist Noam Chomsky suggesting that a basic template for all human languages is embedded in our genes.
Unbound morpheme: a morpheme that can stand alone as a separate word.
Vernaculars: non-standard varieties of a language, which are usually distinguished from the standard by their inclusion of stigmatized forms.
ABOUT THE AUTHOR
Linda Light has been a lecturer in linguistic and cultural anthropology at California State University Long Beach since 1995. During much of that period she also taught as adjunct professor at Cypress College, Santa Ana College, Rancho Santiago College, and Golden West College, all in Orange County, California. She was a consultant to Coastline Community College District in the production of thirty-five educational videos that were used in three series, including the cultural anthropology series Our Diverse World. Her main areas of interest have been indigenous language loss and maintenance, language and gender, and first language attrition in the children of immigrants.
- You can find a documentary film about Genie via Google or YouTube under the title Genie, Secret of the Wild Child, a NOVA production. ↵
- Adapted here from Nick Cipollone, Steven Keiser, and Shravan Vasishth, Language Files (Columbus: Ohio State University Press, 1998), 20-23. ↵
- John McWhorter, The Power of Babel: A Natural History of Language (New York: Times Books, Henry Holt, 2001), 53. ↵
- William Labov, The Social Stratification of English in New York City (Cambridge, UK: Cambridge University Press, 1964). ↵
- Trudy Ann Parker, Aunt Sarah, Woman of the Dawnland (Lancaster, NH: Dawnland Publications, 1994), 56. ↵
- George Lakoff and Mark Johnson, Metaphors We Live By (Chicago and London: The University of Chicago Press, 1980), 4-5. ↵
- For more information see Deborah Tannen, Gender and Discourse (Oxford, UK: Oxford University Press, 1996). Or, Deborah Tannen, You Just Don’t Understand: Women and Men in Conversation (New York: Harper Collins, 2010). ↵
- From Wikipedia: History of the Lord’s Prayer in English. ↵
- You can hear the 6-minute piece at http://www.scpr.org/programs/offramp/2012/04/05/25912/first-language-attrition-why-my-parents-and-i-dont/ ↵
- From François Grosjean, Life with Two Languages: An Introduction to Bilingualism (Cambridge, Mass: Harvard University Press, 1982), chapter two. ↵
- Filmmaker Anne Makepeace created a documentary of the story, called We Still Live Here: Âs Nutayuneân, which PBS broadcast in 2010. You can watch the clips from the video online. ↵
- Terms first coined by John Palfrey and Urs Gasser, Born Digital: Understanding the First Generation of Digital Natives (New York: Basic Books, 2008). ↵
- Lydia Emmanouilidou, For Rare Languages, Social Media Provide New Hope. http://www.npr.org/sections/alltechconsidered/2014/07/26/333732206/for-rare-languages-social-media-provide-new-hope ↵
Chapter Outline
Few symbols of global trade and Americanization abroad have been as powerful as Coca-Cola (Figure 15.1). Though it is the most widely consumed soft drink in the world, its conquest of the globe has not been without controversy. Coca-Cola bottlers were accused of interfering with labor union organization in South America in the 1990s, and in 2014, the company was forced to close a bottling plant in northern India that was depriving farmers of water. Today, although Coca-Cola is making an effort to restore the water that it uses in places like India and South Africa, some critics claim that it still uses more than it replenishes. Coke is an apt symbol of the interconnectedness of our contemporary world and the challenges it presents. Many people enjoy the benefits that come with an increasingly globalized economy, but many have also been harmed in the process.
Learning Objectives
By the end of this section, you should be able to:
- Define globalization and describe its manifestation in modern society
- Discuss the pros and cons of globalization from an economic standpoint
What Is Globalization?
Globalization refers to the process of integrating governments, cultures, and financial markets through international trade into a single world market. Often, the process begins with a single motive, such as market expansion (on the part of a corporation) or increased access to healthcare (on the part of a nonprofit organization). But usually there is a snowball effect, and globalization becomes a mixed bag of economic, philanthropic, entrepreneurial, and cultural efforts. Sometimes the efforts have obvious benefits, even for those who worry about cultural colonialism, such as campaigns to bring clean-water technology to rural areas that do not have access to safe drinking water.
Other globalization efforts, however, are more complex. Let us look, for example, at the North American Free Trade Agreement (NAFTA). The agreement was among Canada, the United States, and Mexico and allowed much freer trade opportunities without the kind of tariffs (taxes) and import laws that restrict international trade. Often, trade opportunities are misrepresented by politicians and economists, who sometimes offer them up as a panacea to economic woes. For example, trade can lead to both increases and decreases in job opportunities. This is because while more relaxed export rules create the potential for job growth in the United States, increased imports can have the opposite effect. As the United States imports more goods from outside the country, jobs typically decrease, as more and more products are made overseas.
Many prominent economists believed that when NAFTA was created in 1994 it would lead to major gains in jobs. But by 2010, the evidence pointed to the opposite impact: an estimated 682,900 U.S. jobs lost across all states (Parks 2011). While NAFTA did increase the flow of goods and capital across the northern and southern U.S. borders, it also increased unemployment in Mexico, which spurred greater amounts of illegal immigration motivated by a search for work. NAFTA was renegotiated in 2018 and was formally replaced by the United States-Mexico-Canada Agreement in 2020.
There are several forces driving globalization, including the global economy and multinational corporations that control assets, sales, production, and employment (United Nations 1973). Characteristics of multinational corporations include the following: A large share of their capital is collected from a variety of different nations, their business is conducted without regard to national borders, they concentrate wealth in the hands of core nations and already wealthy individuals, and they play a key role in the global economy.
We see the emergence of global assembly lines, where products are assembled over the course of several international transactions. For instance, Apple designs its next-generation Mac prototype in the United States; components are made in various peripheral nations; they are then shipped to another peripheral nation, such as Malaysia, for assembly; and tech support is outsourced to India.
Globalization has also led to the development of global commodity chains, where internationally integrated economic links connect workers and corporations for the purpose of manufacture and marketing (Plahe 2005). For example, in maquiladoras, mostly found in northern Mexico, workers may sew imported precut pieces of fabric into garments.
Globalization also brings an international division of labor, in which comparatively wealthy workers from core nations compete with the low-wage labor pool of peripheral and semi-peripheral nations. This can lead to a sense of xenophobia, which is an illogical fear and even hatred of foreigners and foreign goods. Corporations trying to maximize their profits in the United States are conscious of this risk and attempt to “Americanize” their products, selling shirts printed with U.S. flags that were nevertheless made in Mexico.
Aspects of Globalization
Globalized trade is nothing new. Societies in ancient Greece and Rome traded with other societies in Africa, the Middle East, India, and China. Trade expanded further during the Islamic Golden Age and after the rise of the Mongol Empire. The establishment of colonial empires after the voyages of discovery by European countries meant that trade was going on all over the world. In the nineteenth century, the Industrial Revolution led to even more trade of ever-increasing amounts of goods. However, the advance of technology, especially communications, after World War II and the Cold War triggered the explosive acceleration in the process occurring today.
One way to look at the similarities and differences that exist among the economies of different nations is to compare their standards of living. The statistic most commonly used to do this is GDP per capita: the gross domestic product, or GDP, of a country divided by its population. The table below compares the countries with the highest GDP per capita to those with the lowest, out of the 228 countries listed in the CIA World Factbook.
| Country | GDP Per Capita in U.S. Dollars |
|---|---|
| Monaco | 185,829.00 |
| Liechtenstein | 181,402.80 |
| Bermuda | 117,089.30 |
| Luxembourg | 114,704.60 |
| Isle of Man | 89,108.40 |
| Cayman Islands | 85,975.00 |
| Macao SAR, China | 84,096.40 |
| Switzerland | 81,993.70 |
| Ireland | 78,661.00 |
| Norway | 75,419.60 |
| Burundi | 261.20 |
| Malawi | 411.60 |
| Sudan | 441.50 |
| Central African Republic | 467.90 |
| Mozambique | 503.60 |
| Afghanistan | 507.10 |
| Madagascar | 523.40 |
| Sierra Leone | 527.50 |
| Niger | 553.90 |
| Congo, Dem. Rep. | 580.70 |
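As a reminder of how the per capita figures above are computed (total GDP divided by population), here is a minimal illustrative sketch; the numbers in it are purely hypothetical and are not taken from the table.

```python
# GDP per capita = gross domestic product / population.
# Both figures below are hypothetical and used only to show the arithmetic.
gdp_total_usd = 50_000_000_000   # hypothetical total GDP: 50 billion U.S. dollars
population = 10_000_000          # hypothetical population: 10 million people

gdp_per_capita = gdp_total_usd / population
print(f"GDP per capita: {gdp_per_capita:,.2f} U.S. dollars")  # 5,000.00
```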
There are benefits and drawbacks to globalization. Some of the benefits include the exponentially accelerated progress of development, the creation of international awareness and empowerment, and the potential for increased wealth (Abedian 2002). However, experience has shown that countries can also be weakened by globalization. Some critics of globalization worry about the growing influence of enormous international financial and industrial corporations that benefit the most from free trade and unrestricted markets. They fear these corporations can use their vast wealth and resources to control governments to act in their interest rather than that of the local population (Bakan 2004). Indeed, when looking at the countries at the bottom of the list above, we are looking at places where the primary beneficiaries of mineral exploitation are major corporations and a few key political figures.
Other critics oppose globalization for what they see as negative impacts on the environment and local economies. Rapid industrialization, often a key component of globalization, can lead to widespread economic damage due to the lack of a regulatory environment (Speth 2003). Further, as there are often no social institutions in place to protect workers in countries where jobs are scarce, some critics state that globalization leads to weak labor movements (Boswell and Stevis 1997). Finally, critics are concerned that wealthy countries can force economically weaker nations to open their markets while protecting their own local products from competition (Wallerstein 1974). This can be particularly true of agricultural products, which are often one of the main exports of poor and developing countries (Koroma 2007). In a 2007 article for the United Nations, Koroma discusses the difficulties faced by “least developed countries” (LDCs) that seek to participate in globalization efforts. These countries typically lack the infrastructure to be flexible and nimble in their production and trade, and therefore are vulnerable to everything from unfavorable weather conditions to international price volatility. In short, rather than offering them more opportunities, the increased competition and fast pace of a globalized market can make it more challenging than ever for LDCs to move forward (Koroma 2007).
The increasing use of outsourcing of manufacturing and service-industry jobs to developing countries has caused increased unemployment in some developed countries. Countries that do not develop new jobs to replace those that move, and train their labor force to do them, will find support for globalization weakening.
Learning Objectives
By the end of this section, you will be able to:
- Explain how trade agreements and attempts to regulate world trade have shaped the global economy since the 1990s
- Analyze the way multinational corporations have affected politics, workers, and the environment in developing nations
- Discuss the way globalization has affected workers around the world
In many ways, World Wars I and II were only temporary interruptions in a centuries-long process of global integration. This process is often called globalization, the interconnectedness of societies and economies throughout the world as a result of trade, technology, and the adoption and sharing of elements of culture. Globalization facilitates the movement of goods, people, technologies, and ideas across international borders. Historians of globalization note that it has a very long history. In the days of the Roman Empire and the Han dynasty, Europeans and Asians were connected to one another through trade along the Silk Roads. In the fourteenth century, the Black Death spread from Asia to Europe and North Africa, killing people on all three continents. With the European colonization of the Americas in the sixteenth and seventeenth centuries and the British colonization of Australia, all of the world’s inhabited continents became enmeshed in exchanges of peoples, products, and ideas that increased in the nineteenth century as the result of both technological developments and the imperialist impulses of industrialized nations. Only the world wars of the twentieth century brought a temporary halt to these exchanges. Furthermore, once the world wars were over, globalization not only resumed its pre–World War I trajectory but even gained speed, despite the Cold War and decolonization efforts in Asia and Africa. As the Cold War came to a close, the United States and increasingly powerful corporations ensured that capitalism and free-market economics would dominate the globe.
Global Trade
Even during the Cold War and decolonization, economic development and industrialization continued around the world. Japan and West Germany, destroyed and defeated in the 1940s, were striking examples. Each dove headlong into postwar rebuilding efforts that paid huge dividends. They invested heavily in their economies and saw industrial production and economic growth skyrocket over the 1950s and 1960s. By 1970, both had become economic powerhouses in their regions.
Similar, but smaller, economic miracles occurred in other places, especially in Europe. Spain underwent a period of spectacular growth fueled by imported technology, government funding, and increased tourism and industrialization in the 1960s. Italy began even earlier. By the early 1960s, its annual gross domestic product (GDP) growth—the increase in value of all the goods and services the country was producing—had peaked at just over 8 percent. France bounced back from the war years with a rise in population and an impressive new consumer culture that became accustomed to a high standard of living and access to modern conveniences like automobiles, televisions, and household appliances. Comparable growth occurred in Belgium, Greece, and the Netherlands.
Contributing to this postwar economic growth was the emergence of regional economic cooperation in Western Europe. The process began with the creation of the European Coal and Steel Community (ECSC) in 1951. The original member countries of the ECSC were West Germany, France, Italy, the Netherlands, Belgium, and Luxembourg. To both foster economic integration and preserve the peace, these member states agreed to break down trade barriers between them by creating a common market in steel and coal. Across the six countries, these products were allowed to flow without restrictions such as customs duties.
The success of the ECSC eventually paved the way for further economic integration in Europe with the formation of the European Economic Community (EEC), also called the Common Market, in 1957. The EEC used the economic-cooperation model developed by the ECSC and greatly expanded it to remove trading and investment restrictions across its member states, far beyond just coal and steel. While generally successful, the EEC’s efforts at economic integration occasionally met resistance. Farmers who stood to lose economically protested the way it opened national markets to agricultural products produced more cheaply in other countries and sought protectionist policies. Yet despite these protests and concerns, the EEC continued to expand.
Although Britain supported the EEC’s economic goals, it was initially not interested in the political cooperation the group represented. When it did signal its desire to join, Britain wanted special protections for its agriculture and other exceptions for its Commonwealth connections, such as Canada. This meant negotiating with France, then the EEC’s most powerful member. Since decisions at that time were made unanimously and member countries had the power to unilaterally veto, France’s approval was crucial. President Charles de Gaulle of France did not approve, however, and he cut off negotiations with Britain in 1960. Over the next several years, Britain officially applied for EEC membership twice: in 1961 and again in 1967. Both times de Gaulle worried that admitting Britain, with its strong ties to the United States, would transform the organization into an Atlantic community controlled by Washington. Britain was admitted in 1973 (along with Denmark and Ireland), when de Gaulle was no longer president of France. But many in the United Kingdom had wanted to stay out, particularly within the deeply divided Labour Party. It took a UK referendum in 1975 to confirm the country’s EEC membership.
In Their Own Words
Charles de Gaulle Vetoes British Admission to the EEC
In January 1963, French president Charles de Gaulle made the following statement at a press conference explaining his opposition to Britain’s application to join the EEC or Common Market:
England in effect is insular, she is maritime, she is linked through her exchanges, her markets, her supply lines to the most diverse and often the most distant countries; she pursues essentially industrial and commercial activities, and only slight agricultural ones. She has in all her doings very marked and very original habits and traditions. [. . .]
One might sometimes have believed that our English friends, in posing their candidature to the Common Market, were agreeing to transform themselves to the point of applying all the conditions which are accepted and practised by the Six [the six founding members: Belgium, France, Italy, Luxembourg, the Netherlands, and West Germany]. But the question, to know whether Great Britain can now place herself like the Continent and with it inside a tariff which is genuinely common, to renounce all Commonwealth preferences, to cease any pretence that her agriculture be privileged, and, more than that, to treat her engagements with other countries of the free trade area as null and void—that question is the whole question. [. . .]
Further, this community, increasing in such fashion, would see itself faced with problems of economic relations with all kinds of other States, and first with the United States. It is to be foreseen that the cohesion of its members, who would be very numerous and diverse, would not endure for long, and that ultimately it would appear as a colossal Atlantic community under American dependence and direction, and which would quickly have absorbed the community of Europe.
It is a hypothesis which in the eyes of some can be perfectly justified, but it is not at all what France is doing or wanted to do—and which is a properly European construction.
Yet it is possible that one day England might manage to transform herself sufficiently to become part of the European community, without restriction, without reserve and preference for anything whatsoever; and in this case the Six would open the door to her and France would raise no obstacle, although obviously England’s simple participation in the community would considerably change its nature and its volume.
—Charles de Gaulle, Veto on British Membership of the EEC
- Why does de Gaulle note that Britain’s relationship with the United States is a problem?
- What does this statement suggest about the special protections Britain wanted in order to become an EEC member?
- Do you think that de Gaulle was right to worry about the influence of the United States? Why or why not?
By 1993, the EEC had been integrated into the newly created European Union (EU), which had grown to fifteen member states by 1995 and in 2022 included twenty-seven member states. The EU was conceived as a single market for the free movement of goods, services, money, and people. Citizens of member countries can freely move to other EU countries and legally work there, just as they can in their own country. In 1999, the EU introduced its own currency, the euro. Initially used only for commercial and financial transactions in eleven of the EU countries, euro notes and coins had become the legal currency in the majority of EU countries by the start of 2002. The euro was not universally adopted by member states, though. The United Kingdom, Denmark, Sweden, and a few others kept their own currencies. The adoption of the euro was swiftly followed by a major expansion of the EU to include several Central and Eastern European countries including Poland, Hungary, and the Czech Republic.
In 2016, however, 52 percent of United Kingdom voters chose to leave the EU, and Britain officially cut its ties with the organization, its largest trading partner, on January 31, 2020. Some EU opponents in the United Kingdom had claimed the group was economically dysfunctional, especially after the recession of 2008, and that it reduced British sovereignty. Many also disliked the fact that its membership in the EU made it easier for people from elsewhere in Europe, including recent immigrants from the Middle East and Africa, to enter Britain. Those who wanted to stay argued that leaving would hurt British trade with Europe and make Britain poorer. Some also argued that losing the trade advantages that came with belonging to the EU could result in shortages in stores and more expensive products.
The breakup, colloquially known as “Brexit,” was hardly amicable. In Britain, many who supported staying in the EU were shocked by the decision and demanded a re-vote, though none was taken. In Europe, the European Parliament, the EU’s legislative body, was angry that Brexit might threaten the entire EU project. To show its displeasure and possibly to deter any other member states from leaving, the European Parliament wanted the separation to be as painful for the United Kingdom as possible, and many member countries supported this hardline stance. In today’s post-Brexit world, there are new regulations on British goods entering the EU and no automatic recognition of British professional licenses in the EU. Britons seeking to make long-term stays in EU countries now must apply for visas.
Among the forces driving European economic integration was a desire for Europe to be more independent of the United States. This wish was understandable given the massive economic and political power the United States began wielding after World War II. Not only did the United States possess a huge military force with a global reach and growing installations in Europe and beyond, but its economic strength was the envy of the world. Having experienced the Great Depression in the years before the war, the United States emerged after it as an economic powerhouse with a highly developed industrial sector. More importantly, with the exception of the attack on Pearl Harbor, it had avoided the wartime destruction experienced in Europe and Asia. As a result, not only was it able to provide funds to war-torn nations to rebuild their economies and their infrastructure as part of the Marshall Plan, but U.S.-manufactured goods also flooded into markets around the world. By 1960, U.S. GDP had grown to $543 billion a year (in current dollars). By comparison, the United Kingdom had the second-largest economy with a GDP of $73 billion (in current dollars).
The economic might of the United States brought unprecedented growth and a rise in the population’s standard of living in the decades after the war. With ready access to well-paying industrial jobs, the middle class boomed in the 1950s and 1960s. With their buying restrained during the war years by the rationing of goods, consumers were eager to use their wartime savings to purchase new automobiles, televisions, household appliances, and suburban homes. Many enjoyed steady employment with new union-won benefits like weekends off and paid vacations. They also began sending their children to college in ever-greater numbers. By 1960, about 3.6 million young students were enrolled in higher education, an increase of 140 percent over the previous two decades. For many, access to higher education was the gateway to the middle or upper class.
By the 1970s, the U.S. economy began to cool, partially because of domestic inflation and increased competition from abroad. Oil embargoes by Arab nations belonging to the Organization of the Petroleum Exporting Countries, in retaliation for U.S. support of Israel during the 1973 Arab-Israeli war, and a general strategy by oil-producing nations to raise their prices, further stressed the economy by creating gas shortages for consumers. Other countries that had supported Israel, such as the Netherlands, also suffered from the embargo. U.S. manufacturers, once dominant around the world in most major industrial sectors, including the production of steel and automobiles, were suffering by the end of the 1970s as a result of competition from Japan and Western Europe. This reality led to a number of difficult economic transformations in the United States. But it also encouraged some national leaders to seek regional international economic integration along lines similar to Western Europe’s achievement. Beginning in the 1980s, the United States and Canada entered into negotiations to create their own regional free-trading zone. This made sense because Canada not only shared a long border with the United States but was also its largest trading partner. In 1988, the two countries agreed to the Canada-U.S. Free Trade Agreement (Canada-US FTA), which eliminated barriers to the movement of goods and services between the two.
Almost immediately, Mexico signaled its interest in creating a similar free trade agreement with the United States. U.S. president Ronald Reagan had floated the concept during his 1980 election campaign. But successful completion of the Canada-US FTA convinced Mexican leaders that the time was ripe to act on the idea. Ultimately, Canada joined the plan with Mexico as well, and by the end of 1992, all three countries had signed the North American Free Trade Agreement (NAFTA) (Figure 15.4). The intent of NAFTA was to reduce trade barriers and allow goods to flow freely among the three countries. Despite considerable resistance within the United States, largely from industrial workers who feared their factories and jobs would be relocated to Mexico where wages were far lower, the agreement was ratified by all three countries and went into effect in 1994.
The creation of Canada-US FTA and later NAFTA represented an important policy shift for the United States. Until the early 1980s, the country had largely avoided limited regional trading deals, preferring instead to seek comprehensive global agreements. Such efforts had begun relatively soon after World War II in the form of the General Agreement on Tariffs and Trade (GATT), signed in 1947 by twenty-three countries. Initially conceived as a way to reinforce other postwar economic recovery efforts, GATT was designed to prevent the reemergence of prewar-style trade barriers, to lower trade barriers overall, and to create a system for arbitrating international trade disputes. After its initial acceptance in 1947, GATT underwent a number of revisions, completed in negotiating sessions referred to as rounds, to promote free trade and international investment.
In 1995, as a result of the Uruguay Round of negotiations, GATT was transformed into the World Trade Organization (WTO), clearing the path for free trade among 123 countries. Like GATT, the WTO was intended to support international trade, reduce trade barriers, and resolve trade disputes between countries. Unlike GATT, however, the WTO is not a free trade agreement. Rather, it is an organization that ensures nondiscriminatory trade among WTO members. This means that trade barriers are allowed, but they must apply equally to all members. Many observers saw the creation of the WTO as a triumph of globalization, or the emergence of a single integrated global economy. China joined in 2001, a clear sign of its integration into global market systems.
Because the WTO is regarded as a major force for globalization, its meetings often attract protestors who oppose corporate power and the economic, political, and cultural influence of wealthy nations in the developed world on less-developed nations. In 1999, in Seattle, Washington, a diverse group of students, labor union representatives, environmentalists, and activists of many kinds protested the abuses associated with globalization. Police confronted activists staging marches and sit-ins with rubber bullets and tear gas, and trade talks ground to a halt. Meetings of the WTO continue to attract protestors.
As the creation of the EU and NAFTA demonstrates, the signing of international trade agreements like GATT, which was ratified by countries on six continents, did not prevent regions from establishing their own free trade blocs, like MERCOSUR, the South American trading bloc created in 1991. Nor have such blocs been confined to the West. The most notable to emerge in Asia are the Association of Southeast Asian Nations (ASEAN) and the Asia-Pacific Economic Cooperation (APEC) (Figure 15.5).
ASEAN had its beginnings in the 1960s when Indonesia, Malaysia, the Philippines, Singapore, and Thailand agreed to cooperate economically to foster regional development and resist the expansion of communism in Asia. Largely successful, the organization expanded in the 1980s and 1990s to include more countries. By the early 2000s, it was openly advocating the creation of EU-style integration in the area.
APEC, launched in 1989, was in many ways a response to the growth of regional trading blocs like ASEAN and the EEC. Initially composed of twelve Asia-Pacific countries including Australia, the United States, South Korea, and Singapore, it has since grown to include Mexico, China, Chile, Russia, and more. It promotes free trade and international economic cooperation among its members.
Multinationals and the Push to Privatize
The growing global economic integration represented by the rise of the WTO and regional trading blocs opened new opportunities for multinational corporations to extend their reach and influence around the world. A multinational corporation, or MNC, is a corporate business entity that controls the production of goods and services in multiple countries. MNCs are not new. Some, like the British East India Company and the Hudson’s Bay Company, exerted great influence during Europe’s imperial expansion in the early modern and modern periods. But with globalization and improvements in transportation and communication technology, MNCs have thrived, especially since the 1950s. They have used their growing wealth to lobby governments to create conditions favorable to their operation, enabling them to become even more powerful.
Multinationals have also benefited greatly from the lowering of trading barriers around the world. These developments have encouraged major automobile manufacturers like Volkswagen, Toyota, Chevrolet, Kia, and Nissan to build and operate assembly plants in Mexico, for example, where workers are paid lower wages than they are in countries like Germany, Japan, South Korea, or the United States. This translates to significant cost savings and thus higher profits for them. And because Mexico is part of a free trade bloc with the United States and Canada, cars made there can be exported for sale in the United States or Canada without the need to pay tariffs.
Supporters of MNCs claim that the benefits for workers in this arrangement are also substantial. They get access to well-paying and reliable industrial jobs not available before, and their paychecks flow into the local community and contribute to a general rise in the standard of living. The companies themselves invest in local infrastructure like roads, powerlines, and factories, and their presence often helps expand support industries like restaurants that cater to workers and shipping companies they hire to move their goods. Technology spillovers also occur when MNCs either help to develop necessary job skills in local workers or introduce new technologies that ultimately become available to domestic industries in the host country. Finally, MNCs that focus on retail and establish branches in other countries, such as Walmart, Aldi, Costco, Carrefour, Ikea, and many more, provide access to high-quality consumer goods like clothing, appliances, and electronics at competitive prices, raising the standard of living in the countries where they operate.
There are drawbacks, however. Critics of MNCs note that while workers may be paid more than they could earn working for local businesses, they are still paid far less than the multinational can afford. Workers are often prevented from forming unions and forced to work long hours in unsafe environments. Furthermore, many multinationals are based in developed countries, mostly in the West, and they tend to express the interests and cultural norms of those countries. This bias has sometimes led to accusations of neocolonialism (the use of economic, political, or cultural power by developed countries to influence or control less-developed countries), particularly for the way MNCs have helped accelerate the homogenization of cultures around the world by exporting not only goods from the West but also ideas and behaviors.
Link to Learning
These short Planet Money videos follow the manufacture of a simple t-shirt and reveal how many people around the world are involved. Click on “Chapters” to navigate among the five short videos.
One such idea is that countries should encourage the privatization of public services like utilities, transportation systems, and postal services. Privatization means delegating these services to private companies that operate to earn a profit rather than having them delivered by arms of the government. Organizations like the World Bank and the International Monetary Fund (IMF) have pushed for privatization as a way to make these public services more efficient. The World Bank is an international organization that offers financing and support to developing countries seeking to improve their economies. Founded in 1944 to rebuild countries after World War II, it later shifted its focus to global development. The IMF, also created in 1944, promotes global monetary stability by helping countries improve their economies with fiscal plans and sometimes loans. In exchange, it requires countries to adopt plans that often include privatizing public services and paying off their debts.
In some places, this privatization process has been successful. For decades after independence, for instance, India employed a mixed-economy strategy, with a combination of free-market policies and heavy intervention by the government. The result, however, was that a large part of the public sector operated under cumbersome bureaucratic controls that critics complained slowed economic growth. Beginning in the 1990s, new leadership in India began pursuing economic liberalization by privatizing aspects of its large public sector including airlines, shipbuilding, telecommunications, electric power, and heavy industry. As expected, these efforts increased productivity and efficiency, but the public often was confronted by higher prices and loss of access to services.
The World Bank made similar efforts across Latin America. Some view privatization there as largely successful, noting that as the number of state-owned industries declined, the profitability and efficiency of the privatized companies increased. But sometimes, as in Bolivia, privatization came at considerable cost. In 1999, the Bechtel Corporation, a U.S.-based multinational, was awarded a contract by the Bolivian city of Cochabamba to improve the efficiency of the city’s water delivery system. By January 2000, however, water delivery in Cochabamba had actually become worse. Service rates increased even for people who did not receive any water at all. This result led to large and sometimes violent protests later known as the Cochabamba Water War. The Bolivian government sided with the protestors, expelled Bechtel, and passed the Bolivian Water Law in April 2000. Water in Bolivia was no longer privatized. In the aftermath of the protests, which drew international media attention, the World Bank promised to study and revise its procedures and recommendations.
Multinationals have been crucial in the emergence of modern China as an economic powerhouse. Starting in the 1970s under Deng Xiaoping, the country began to pursue a market-based growth strategy, adopting some of the tools of capitalist economies without ending its Communist system. These economic changes, sometimes called the “Opening of China,” vastly changed the country’s role in the world’s economy and improved a new generation’s prospects for a higher standard of living. Factories constructed for MNCs in coastal cities like Shanghai, Guangzhou, and Shenzhen were filled with workers from the Chinese countryside. Large ships stacked high with Chinese factory-made products crossed the Pacific and Indian Oceans and passed through the Suez and Panama Canals. With the world’s largest population, China is also home to the world’s largest group of consumers. In 2001 it joined the WTO, and in 2010 it became the world’s second-largest economy, after the United States. As of 2022, China’s economy was predicted to become the world’s largest by 2030.
Exporting Culture
Besides promoting controversial ideas like privatization, MNCs, many of which are headquartered in the United States, also are responsible for exporting elements of western culture, especially popular culture. Although many people around the world enjoy such things as western fashions, movies and television shows, popular music, and fast food, other people fear that such influences harm local cultures and contribute to the Americanization (and homogenization) of the world.
Few countries have been as culturally influential as the United States, thanks to its global dominance after World War II. U.S. troops at military bases around the world were often the first to expose their hosts in Europe and Asia to American traditions, sports, and norms. Consumers around the world purchased a wide range of “Made in USA” products, including Coca-Cola, Levi’s jeans, and Hollywood movies, which, along with American music, helped to spread the American dialect of the English language.
Some early Americanization efforts were intentional, such as in Germany and Japan where the goal was to lay a foundation for U.S.-style democracy by projecting ideas like freedom and affluence via popular culture. These ideas were also attractive in South Korea and South Vietnam. Typically, young people patronized fast-food restaurants like McDonald’s and Pizza Hut, purchased blue jeans and T-shirts, watched American television shows, and sought out recordings of the latest popular music.
The popularity of American culture led many countries to fear the loss of their own unique cultural characteristics and the weakening of their domestic industries. Brazil, Greece, Spain, South Korea, and others imposed screen quotas, limiting the hours theaters could show foreign movies. In 1993, France required the nation’s radio stations to reserve 40 percent of their airtime for French music.
Hollywood movies and American recording artists are still major players, but in the twenty-first century, diversity has returned to the global stage. Japanese anime and manga have become global phenomena. South Korean K-pop bands like BTS, iKon, and Got7 have gained audiences in the United States, Europe, and across Asia. In 2021, the Korean-made serial thriller Squid Game became the most-watched Netflix show of all time. Korean television dramas are also very popular in Southeast Asian countries like the Philippines. Korean popular culture borrows American cultural styles but invigorates them with distinctly Korean elements. For example, K-pop, with its large groups and flashy choreographed dancing, was clearly influenced by hip-hop. K-pop itself is thus a potent reminder that globalization is often the product of cultural sharing rather than a one-directional flow of cultural norms.
Winners and Losers in a Globalizing World
While it is tempting to see globalization and the rise of MNCs as generally benefiting the people of developed countries, the reality is more complex. Improvements in transportation and MNCs’ use of labor resources in countries around the world have sometimes harmed workers in developed countries like the United States. Supporters of NAFTA, for example, argued that the United States would benefit from reduced trade barriers, lower prices for agricultural goods from Mexico, and newly available jobs for lower-wage workers. Opponents, however, recognized that the groups hardest hit by transformations in the labor market would be those least able to withstand the damage, mainly working-class manufacturing workers in the United States and Canada. Both sets of predictions proved accurate.
After NAFTA was implemented in 1994, trade and development across the three participating countries surged. Trade across the U.S.-Mexico border surpassed $480 billion by 2015, an inflation-adjusted increase of 465 percent over the pre-NAFTA total. A similar, if smaller, increase occurred in U.S.-Canadian trade during the same time. In Mexico, per capita GDP grew by more than 24 percent, topping $9,500. Even more impressive growth occurred in Canada and the United States.
But between 1993 and 2021, the United States lost millions of manufacturing jobs when some companies found it more profitable to relocate to Mexico. Not all these job losses can be attributed to NAFTA, but many can, as manufacturing that otherwise would have taken place in the United States was moved to maquiladoras, factories in Mexico along the U.S. border that employ people for low wages. Maquiladoras often receive materials from U.S. manufacturers, transform them into finished products, and then ship them back to the United States for the manufacturers to use. After the passage of NAFTA, U.S. car manufacturers began to make use of parts produced inexpensively in Mexico that would have been much more expensive had they been produced in the United States. For example, in 2015, Brake Parts Inc. moved its operations from California to Nuevo Laredo, Mexico, and almost three hundred U.S. workers lost their jobs in the process. Workers in the automobile industry, once the backbone of the U.S. industrial sector, suffered as jobs and automotive plants were relocated to Mexico. Some economists, however, argue that the use of inexpensive parts produced in maquiladoras allowed the U.S. automobile industry to survive. Jobs in the clothing industry also declined 85 percent. There were simply fewer obvious advantages to keeping such jobs in the United States.
Economists say the loss of manufacturing jobs was less a result of NAFTA than of other structural economic changes in the United States, such as automation. And while many U.S. workers lost their jobs as a result of NAFTA, millions of others found work in industries that produced goods for sale in Mexico. Nevertheless, the impression that NAFTA and globalization have brought poverty and misery to the working class in the United States remains strong and has influenced the nation’s politics since the 1990s. Responding to these beliefs, in 2017, President Donald Trump initiated a renegotiation of NAFTA, creating the United States-Mexico-Canada Agreement (USMCA). The USMCA included strong property rights protections, compelled Canada to open its dairy market more broadly, and required that workers in the automotive sector in all three countries be paid competitive wages. The agreement replaced NAFTA and went into effect in 2020.
The complaints that arose during the NAFTA debates had been voiced for decades. Since the 1970s, many in the United States had argued that globalization allowed Japan, its enemy during World War II, to race ahead and outcompete domestic manufacturers. By the 1980s, Japan was exporting a huge volume of consumer electronics and automobiles into the U.S. market. Its economic resurgence created a massive trade deficit (the difference in value between what a country imports and what it exports) in the United States and rising concerns about American global competitiveness.
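The trade deficit defined in the parentheses above is simple arithmetic: subtract the value of a country’s exports from the value of its imports. The short Python sketch below illustrates the calculation with hypothetical dollar figures, not data drawn from this chapter.

# Illustrative sketch of the trade-deficit definition above; all figures are hypothetical.
imports_value = 350.0  # value of goods and services imported, in billions of dollars (hypothetical)
exports_value = 280.0  # value of goods and services exported, in billions of dollars (hypothetical)
trade_balance = exports_value - imports_value  # a negative balance is a trade deficit
if trade_balance < 0:
    print(f"Trade deficit of ${abs(trade_balance):.0f} billion")
else:
    print(f"Trade surplus of ${trade_balance:.0f} billion")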
Political figures like Walter Mondale, the 1984 Democratic Party nominee for president, called the trade deficit a threat to the United States and spoke in dire terms about an emerging global trade war. Many others drew connections between Japan’s economic rise and growing unemployment in the United States. Auto workers were especially vocal, declaring that Japan was using unfair practices and artificially limiting the number of American cars that could be sold in Japan. While the reality was more complicated, politicians were primed to respond with tough talk and reforms. In 1981, President Reagan pressured Japan to limit the number of cars it exported to the United States. The United States also made efforts to limit the importation of foreign steel and semiconductors for the same reasons.
By the start of the 1990s, Japan’s economic engine was starting to cool, and so were American concerns about Japanese dominance. However, companies in the developed world faced challenges rooted in the high cost of living and the resulting high wages they had to pay their employees. Globalization offered a solution in the form of outsourcing and offshoring. Outsourcing occurs when a company hires an outside firm, sometimes abroad, to perform tasks it used to perform internally, like accounting, payroll, human resources, and data processing services. Offshoring occurs when a company continues to conduct its own operations but physically moves them overseas to access cheaper labor markets. Outsourcing and offshoring were hardly new in the 1990s. But with globalization and the ability to ship goods around the world, they became a major cost-savings option for large companies. Trade agreements like NAFTA made it possible to build plants in Mexico and still sell the products those plants produced in the United States. Companies could also offshore some of their operations to countries in Asia where labor was much cheaper.
Those hired overseas experienced their own problems, however, because outsourcing often led to the rise of sweatshops, factories where poorly paid workers labor in dangerous environments. Images of women and children in horrific working conditions in Central America, India, and Southeast Asia circulated around the developed world in the 1990s as examples of the consequences of outsourcing. Major MNCs like Nike, Gap, Forever 21, Walmart, Victoria’s Secret, and others have been harshly criticized for using sweatshops to produce their shoes and clothing lines. In 2013, the plight of sweatshop workers gained widespread attention when a building called Rana Plaza, a large complex of garment factories in Bangladesh, collapsed, killing 1,134 and injuring another 2,500 of the low-wage workers who made clothing for luxury brands like Gucci, Versace, and Moncler. The disaster led to a massive protest in Bangladesh demanding better wages and reforms. In the aftermath, studies and inspections of similar factories revealed that almost none had adequate safety infrastructure in place.
The Past Meets the Present
Sweatshops and Factory Safety: Then and Now
On March 25, 1911, a fire started at the Triangle shirtwaist factory in New York City that caused the deaths of 146 workers, most of them immigrant women and girls of Italian or Jewish heritage. It was soon discovered that the factory had poor safety features, and the doors were locked during the workday, making it impossible for the workers to flee.
News of the tragedy spread quickly in New York and around the country. Government corruption, which was widely reported, had allowed the factory to continue operating despite its poor conditions and safety deficiencies. In the end, the tragedy led to massive protests in New York. New York State even set up a Factory Investigating Commission to prevent such events from happening again.
Just over a century later in Bangladesh, a similar tragedy occurred. On April 24, 2013, the Rana Plaza building collapsed, killing 1,134 workers and injuring thousands of others. The owners of the garment factories housed in the building knew it had structural problems but still demanded that employees continue to work or lose their jobs.
A massive public outcry in Bangladesh followed the building’s collapse. People were outraged by the disaster and by the fact that the workers made some of the lowest wages in the world sewing garments for major multinational companies. The public demanded that the families be compensated and those responsible be prosecuted, leading many of the garment companies to donate money to the families of those lost. The disaster also inspired a public movement to make the garment industry in Bangladesh more transparent.
- Why do you think it remains difficult to get garment factories like these to prioritize safety?
- Beyond the obvious, what are some of the similarities between the two tragedies?
Link to Learning
To better understand the similarities between the Triangle shirtwaist factory fire and the Rana Plaza collapse, view photos of the Triangle shirtwaist factory fire and read interviews with survivors of the fire. Then look at the photo essay covering the Rana Plaza collapse by award-winning photographer and activist Taslima Akhter.
The multinational technology company Apple Inc. has faced intense criticism in recent years for its use of foreign-owned sweatshops in Asia. Since the early 2000s, reports of sprawling factories with hundreds of thousands of workers assembling iPhones have made the news and led to a public-relations nightmare for the company. Investigative journalists have revealed that workers suffer a pattern of harsh and humiliating punishments, fines, physical assaults, and withheld wages. In some instances, conditions in the factories have pushed assembly-line workers to commit suicide rather than continue. Apple has insisted it takes all such accusations seriously and has tried to end relationships with such assembly plants, but those familiar with the problems have complained that little progress has been made.
Even when MNCs commit to providing a safe working environment and fair wages abroad, the practice of subcontracting often makes this impossible to guarantee. Foreign companies to which multinationals send work often distribute it among a number of smaller companies that may, in turn, subcontract it further. It is sometimes difficult for MNCs to know exactly where their goods are actually produced and thus to enforce rules about wages and working conditions.
Multinationals have harmed the countries in which they operate in other ways as well, including deforestation and the depletion of clean, drinkable water. They are also major producers of greenhouse gases and responsible for air pollution and the dumping of toxic waste. MNCs leave little in the way of profit in the countries they exploit, so funds are often lacking to repair or mitigate the damage they do.
The desire for a better life and opportunities in the developed world has led many in the developing world to migrate. Millions of immigrants from Mexico and other parts of Latin America have made their way into the United States over the last few decades. They typically find low-paid labor harvesting crops, cleaning homes, and serving as caretakers for children. In these jobs, they serve an important role in the U.S. economy, often doing work that U.S.-born workers are unwilling to do. Many entered the country illegally and live and work in the shadows to avoid deportation. This makes them vulnerable to abuse, and they are sometimes preyed on by human traffickers and unscrupulous employers. The United States is not the only country where this dynamic occurs. Saudi Arabia, for example, depends heavily on foreign workers to fill jobs Saudi Arabian citizens are reluctant to take, such as caretaker or domestic servant. Some workers have reported physical and emotional abuse to international human rights watchdog organizations like Human Rights Watch and Amnesty International. Being immigrants, they often have little access to relief from the country’s justice system.
Learning Objectives
By the end of this section, you will be able to:
- Discuss the development of complex digital computers and their effects on human society
- Analyze the effects of the internet and social media on society
- Describe important medical developments of the last fifty years and current medical challenges
World War II brought about a massive technological transformation as countries like Germany and the United States rapidly innovated to avoid destruction and defeat their enemies. In the decades after the war, there was major progress in medical technology, the creation of new vaccines, and the elimination of deadly diseases. All these achievements had profound effects on the way people lived, traveled, and worked. Underlying them were major advancements in the field of information technologies, such as computers. Once the war was over, the development of increasingly powerful computers ushered in a computer revolution as powerful as the nineteenth century’s Industrial Revolution, and with it a digital age.
The Digital Computer Revolution
Many of the technological advancements of the 1940s and 1950s came in the form of increasingly powerful analog computers, which analyze a continuous stream of information, much like that recorded on vinyl records. Analog computers worked well for solving big mathematical problems, such as the calculations related to electrical power delivery systems or the study of nuclear physics. However, one of their weaknesses was that they were inefficient at managing large amounts of data. Digital computers, or those that translate information into a complex series of ones and zeros, were far more capable of managing bulk data. Just a few years after the war, digital computing received a huge boost with the invention of the transistor, a device with far more computing potential than its predecessor, the vacuum tube. Scientists could amplify this enlarged computing capacity even further by wiring multiple transistors together in increasingly complex ways.
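As a minimal illustration of those “ones and zeros” (an aside not drawn from the chapter), the short Python sketch below shows how a digital system can represent the characters of a word as binary digits.

# Minimal illustration: digital systems ultimately store text as bits.
# Each character maps to a number, and that number can be written in binary (ones and zeros).
word = "data"
for character in word:
    code_point = ord(character)       # numeric code for the character
    bits = format(code_point, "08b")  # the same number written as eight binary digits
    print(character, code_point, bits)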
The use of multiple transistors for computing purposes was an important step, but it had obvious drawbacks. Making machines capable of processing a great deal of information required connecting many transistors, which took up a great deal of space. Then, in the late 1950s, inventors in the United States developed an innovative solution. Using silicon, they could integrate transistors and capacitors in a way that clumsy wiring could not accomplish. The silicon-based integrated circuit freed computer technology from size constraints and opened the door to additional advancements in computing power.
Even so, digital computers remained large, expensive, and complicated to operate, and their use was largely confined to universities and the military. Only gradually over the 1970s did computing technology become more widely available, largely thanks to mass-produced general-purpose computers, sometimes called minicomputers, designed by IBM and the Digital Equipment Corporation. These served a variety of government and private purposes, such as tabulating the census, managing the flow of tax monies, and processing calculations related to creditworthiness (Figure 15.15). But despite being somewhat cheaper, minicomputers remained out of reach for average users.
The journey from minicomputers to personal computers began with the Intel Corporation, established in 1968 in Mountain View, California, in a region now commonly called Silicon Valley. During the 1970s, Intel developed a line of integrated circuits that were not only more powerful than their predecessors but also programmable. These became known as microprocessors, and they revolutionized computing by holding all of a computer’s processing power in a single integrated circuit. In 1975, a company in New Mexico released the first marketed personal computer, the Altair 8800. This used an Intel microprocessor and was promoted to computer hobbyists eager to wield a level of computing power once available to only a few. The Altair’s popularity inspired competing products like the Apple, the Commodore, and the Tandy Radio Shack computer (Figure 15.16). These personal computer systems were far easier to use and appealed to a much larger market than just hobbyists.
By 1982, there were 5.5 million personal computers in the United States, and over the next decade, their number and computing power rose exponentially. Computers proliferated in government offices, private firms, and family homes. Then, in 1984, Apple introduced the world to the Macintosh computer, which not only used a mouse but also replaced the standard code-based user interface with one based on graphics and icons. Recognizing the user-friendly possibilities of this graphic interface, competitors followed suit. Before long, the design popularized by Apple had become the norm.
By the end of the 1980s, not only had personal computers become common, but the microprocessor itself could be found everywhere. Microprocessors were incorporated into automobiles, cash registers, televisions, and household appliances and made possible a variety of other electronic devices like videocassette recorders and video game systems (Figure 15.17). Computer systems were created to store and manage financial, educational, and health-care information. In one form or another and whether they realized it or not, by the 1990s, almost everyone in the developed world was interacting with computers.
Modems were hardly new in the 1990s, but they became much faster and more common with the rise of the internet. The origins of the internet date back to the 1960s and the efforts by government researchers in the United States to use computers to share information. These developments were especially important for the U.S. Department of Defense during the Cold War and resulted in the emergence of the Advanced Research Projects Agency Network (ARPANET). In creating ARPANET, researchers developed many of the technologies that over the next few decades formed the basis for the internet we know today.
The Internet and Social Media
The process of globalization has been accelerated by the rise of the internet and the social media platforms it hosts, like Instagram, Facebook, and Twitter. Many people were introduced to the potential of computer networks for sharing information and creating small social networks in the 1980s, when individual users became able to connect their computers to others by using modems and telephone networks. This connectivity gave rise to regional bulletin board systems (BBSs), in which one person’s computer served as a host for those of other users (Figure 15.18). BBSs functioned much like websites today. Though they ran far more slowly and had limited capabilities, they allowed users to share computer files like games and images, post messages for others to read, participate in virtual discussions and debates, and play text-based online games. BBSs used phone networks to communicate, and long-distance calls were then expensive, so their users tended to be local.
Throughout the 1980s, BBSs continued to be popular with computer hobbyists and those intrigued by the idea of unique virtual communities, while networking technology improved steadily behind the scenes. The United States, Europe, and other developed countries were busy adopting a uniform set of networking protocols, the TCP/IP suite, that would allow computers around the world to communicate easily with one another. Once this protocol suite had been established, the commercial internet as we currently understand it was born.
As early as 1987, about thirty thousand hosts resided on the burgeoning internet. Soon telecommunications and software companies began to exploit this new network by creating online service providers like America Online (AOL) to act as gateways to the internet. Initially, they used standard phone lines and modems to connect, much as BBSs had. But as the volume of information on the internet increased exponentially, service providers turned to more expensive broadband connections that used cable television lines and even dedicated lines to connect. During the 1990s, the first websites, the first internet search engines, and the first commercial internet platforms were established.
By 2005, more than one billion people worldwide were using the internet regularly. They were able to shop online, make phone calls around the world, and even create their own websites with almost no technical training. Never before had the world been so connected. In 2004, Facebook was launched. Originally a networking tool for Harvard students, it quickly expanded globally to become a giant in the new world of social media. By 2010, nearly half a billion Facebook users around the world were sharing images and messages, creating communities, and linking to news stories. By 2022, the number of Facebook users had reached nearly three billion.
Before 2007, almost all internet users gained access to the network via a personal computer, either at home or at work. That year, however, Apple Inc. released the first iPhone, a powerful cell phone that was also a portable computer capable of performing tasks that once required a desktop computer. Even more revolutionary, it connected to the internet wirelessly through cell-phone infrastructure. While the iPhone was not the first phone to connect to the internet, its revolutionary touch-screen interface was far superior to earlier systems. Within just a few years, other cell-phone manufacturers were imitating its design and putting smartphones, and thus internet access, in the pockets of users around the world.
Smartphones have transformed life in developing countries, where they have helped bypass some of the traditional stages of infrastructure creation. In Africa, for example, people living where no landlines existed can now communicate with others using cell phones. Small farmers and traders can use cell phones for banking and to connect with potential suppliers and customers. In communities without libraries, schoolchildren can access the internet’s resources to study.
Smartphones have also democratized the internet, serving as powerful tools for organizing and promoting political change. The large pro-democracy movement in Cairo’s Tahrir Square captured the world’s attention in 2011, for example. But it began with twenty-five-year-old activist Asmaa Mahfouz’s YouTube video of January 18, 2011, in which she spoke directly to the camera and urged young Egyptians to protest at the square as part of the larger Arab Spring, a call for government reform and democracy that echoed across the Arab world.
The Arab Spring was touched off in December 2010 when Muhammad Bouazizi, a young college graduate, set himself on fire in Tunisia after government officials there tried to interfere with the fruit cart that was his only source of income. Other young Tunisians took to the streets in protest, and demonstrations began again in January 2011. As people died in confrontations with government forces, President Zine al-Abidine Ben Ali fled the country, and Tunisia’s prime minister resigned shortly thereafter.
The Tunisian protests led to similar demonstrations in Egypt. On January 17, 2011, an Egyptian set himself on fire near the nation’s Parliament to protest the lack of economic opportunities. Crowds of mostly young people responded with massive demonstrations that lasted weeks (Figure 15.19). These demonstrations were fueled by and broadcast to the world through text messages, photos, tweets, videos, and Facebook posts sent by thousands of mobile phones, including that of Mahfouz. The devices amplified the calls for democracy and showed the world the Egyptian government’s use of violence to try to silence the protestors. Egyptian president Hosni Mubarak resigned on February 11, 2011. He was later convicted for his role in ordering government forces to harm and kill protestors.
In the wake of the Egyptian protests, activists in Libya, Yemen, Syria, Morocco, Lebanon, Jordan, and other countries coordinated their activities using computers and smartphones to access social media, video, and mobile phone messaging. These efforts resulted in protests, changes to the laws, and even the toppling of governments, such as in Egypt and Tunisia. They also led to civil wars in Syria, Iraq, and Libya that caused thousands of deaths and a refugee crisis in the Mediterranean. While Twitter and Facebook were useful for scaling up protests, the movements to which they gave birth often struggled to find a purpose in countries without a well-established resistance movement.
Link to Learning
In this interview, Egyptian-American journalist and pro-democracy activist Mona Eltahawy talks about the Arab Spring and revolution in Egypt and the use of social media as a tool for organizing. She addresses the role of social media in two parts. Take a look at her answers to “Did the government completely misjudge what they were doing?” and “Could this have happened without social media, without these new technologies?”
Since 2011, governments around the world have come to recognize the power of social media to bring about change, and many authoritarian and even ostensibly democratic leaders have moved to limit or block social media use in their countries. China has blocked Facebook and Twitter since 2009 and encourages its citizens to instead use the state-authorized app WeChat, which shares information with the government. In 2020, India banned the social media app TikTok, claiming it threatened state security and public order. In March 2022, following its February invasion of Ukraine, Russia banned Instagram and Facebook because, the government alleged, the platforms carried messages calling for violence against Russian troops and against Russian president Vladimir Putin. Turkmenistan has gone further than China, India, or Russia. It not only bans Facebook and Twitter, but it also requires citizens applying for internet access to swear they will not try to evade state censorship.
Link to Learning
China is noted for its strict internet censorship, and its government blocks access to a large number of sites in a policy colloquially known as the Great Firewall of China. The Comparitech service allows you to check whether particular websites are blocked in China by entering and searching for them.
In the United States, lawmakers have recognized that social media platforms like Facebook and Twitter can both promote and endanger democracy. Social media provides extremist groups with the ability to attract followers from across the nation and incite violence. Groups can use the platforms to spread fake news, and a report by the U.S. Senate has concluded that Russian intelligence operatives used Facebook, Twitter, and Instagram to manipulate voters. Legislators have called on social media companies to more actively censor the content on their platforms and limit or block access by groups or persons spreading hate speech or disinformation. The potential for misuse of technology is heightened by advances that enable the creation of deepfakes, computer-generated images that closely resemble real people.
Medical Miracles and Ongoing Health Challenges
Advances in computer technology were not the only technological success stories of the post–World War II world. In 1947, scientists perfected an artificial kidney, and just five years later, the first successful kidney transplant was performed. In the 1950s, antipsychotic drugs were developed and used to treat neurological disorders that once consigned patients to a lifetime of difficult treatment in a psychiatric hospital. In the 1950s, geneticists discovered the double-helix structure of DNA, information that was crucial for later advancements such as the ability to use DNA to diagnose and treat genetic diseases. In 1962, a surgical team successfully reattached a severed limb for the first time, and in 1967, the first human heart transplant took place. Over the next decade and a half, medical advances made it possible to conduct telemedicine, view and monitor internal organs without performing surgery, and monitor the heartbeat of a fetus during pregnancy.
Medical science also made enormous gains in eradicating diseases that had been common for centuries. For example, polio had caused paralysis and even death since the late nineteenth century, but in 1950, the first successful polio vaccine, developed by the Polish-born virologist Hilary Koprowski, was demonstrated as effective in children. This was an orally ingested live vaccine, a weakened form of the virus designed to help the immune system develop antibodies. In the meantime, researcher Jonas Salk at the University of Pittsburgh was developing an injectable dead-virus vaccine (Figure 15.20). This vaccine used a virus that had been rendered inactive but that still triggered the body to produce antibodies. In 1955, Salk’s vaccine was licensed for use in the United States, and mass distribution began there. Other vaccines were developed in the United States and other countries over the next several years. Their use has nearly eradicated polio cases, which once numbered in the hundreds of thousands. When polio was detected in an adult in New York in July 2022, it was the first case in the United States since 2013.
The eradication of smallpox is another important success story. Centuries ago, smallpox devastated communities around the world, especially Native American groups, which had no immunity to the disease when Europeans brought it to their shores. Early vaccines based on the cowpox virus were deployed in the United States and Europe in the eighteenth century with great effect. In the twentieth century, advancements made the vaccine safer and easier to administer. However, by the 1950s, much of the world remained unvaccinated and susceptible. In 1959, the World Health Organization (WHO) began working to eradicate smallpox through mass vaccination, redoubling its efforts in 1967 through its Intensified Eradication Program. During the 1970s, smallpox was eradicated in South America, Asia, and Africa. In 1980, the WHO declared it had been eliminated globally.
The WHO’s smallpox program is considered the most effective disease-eradication initiative in history, but it was an aggressive campaign not easily replicated. And without a vaccine, the problems of controlling transmissible diseases can be immense. A novel disease was first reported among Los Angeles’s gay community in 1981, and by 1982 it had become known as AIDS (acquired immunodeficiency syndrome). Researchers realized it was commonly transmitted through sexual intercourse but could also be passed by shared needles and blood transfusions. At that time, the U.S. Centers for Disease Control explained that AIDS was not transmitted through casual contact, but the information did little to calm rising concerns about this still largely mysterious and deadly disease. By 1987, more than 60,000 people in the world had died of AIDS. In the United States, the government was slow to fund research to develop treatments or to find a cure. That year, activists at the Lesbian and Gay Community Services Center in New York City formed the AIDS Coalition to Unleash Power (ACT UP). They were concerned with the toll that AIDS was taking on the gay community and with the government’s seeming lack of concern regarding a disease that the media depicted as affecting primarily gay men, an already stigmatized group. ACT UP engaged in nonviolent protest to bring attention to their cause and worked to correct misinformation regarding the disease and those who were infected with it.
By the year 2000, scientists in the developed world had acquired a sophisticated understanding of AIDS and the human immunodeficiency virus (HIV), and treatments had emerged that made it a manageable rather than a lethal disease, at least in the developed world. But in parts of the developing world, like Sub-Saharan Africa, infection rates were still rising. One difficulty was that HIV infection and AIDS had become associated with homosexuality, which carried stigma and, in some places, even legal penalties that made those infected reluctant to seek help. Addressing transmission with the general public also meant broaching sometimes culturally sensitive topics like sexual intercourse. Those attempting to control the spread of the disease often found themselves trying to influence social and cultural practices, a complicated task fraught with pitfalls.
This does not mean there were no successes. The proliferation of condom use, circumcision, and public information campaigns, along with the declining cost of treatment, has greatly reduced the extent of the epidemic in Africa. But AIDS is still an enormous and devastating reality for Africans today. Sub-Saharan Africa is home to nearly 70 percent of the world’s HIV-positive cases. Women and children are particularly affected; Africa accounts for 92 percent of all cases of infected pregnant women and 90 percent of all infected children.
Ebola virus has also threatened the health of Africans. The first known outbreak of Ebola, a hemorrhagic fever, took place in Central Africa in 1976. Since then, there have been several other outbreaks. In 2013–2016, an outbreak in West Africa quickly spread across national borders and threatened to become a global epidemic. Approximately ten thousand people fell ill in Liberia alone, and nearly half of those infected died.
The most recent challenge to world health, the COVID-19 pandemic, demonstrates the effects of both globalization and technological developments. The coronavirus SARS-CoV-2 appeared in Wuhan, China, an industrial and commercial hub, in December 2019. Airplane and cruise ship passengers soon unwittingly spread it throughout the world; the first confirmed case in the United States appeared in January 2020. As every continent reported infections, offices, stores, and schools closed and travel bans appeared. Despite these restrictions, middle-class and wealthy people in the developed world continued almost as normal. Many worked, studied, shopped, visited friends and family, and consulted doctors online from their homes.
Low-paid workers in service industries often lost their jobs, however, as restaurants and hotels closed, and children without access to computers or stable internet connections struggled to keep up with their classes. Even the more fortunate in the developed world confronted shortages of goods from toilet paper to medicines to infant formula when global supply chains stalled as farm laborers, factory workers, dock hands, and railroad employees fell ill or workplaces closed to prevent the spread of infection. Developing countries lacked funds to support their citizens through prolonged periods of unemployment. Although vaccines were developed in several countries, they were available primarily to people in wealthier nations. As of March 2022, only 1 percent of all vaccine doses administered worldwide had been given to people in low-income countries.
Beyond the Book
Public Art and Modern Pandemics
Dangerous diseases like HIV/AIDS can energize more than just the doctors working in laboratories and the global leaders publishing reports. During the early years of the HIV/AIDS crisis, grassroots organizers from around the world strove to focus attention on the problem. Their actions were necessary because governments often did little to prevent the spread of the disease or provide treatment for those infected. The AIDS Coalition to Unleash Power (ACT UP) became known for staging loud protests in public and sometimes private places to raise awareness about the disease. In the United States, the publicity generated through groups like ACT UP forced the government to pay greater attention and to budget more money to the search for a cure. Some artists responded to this movement with murals in well-known locations like the Berlin Wall (Figure 15.21).
While some murals about diseases were a call to action, especially about HIV/AIDS, others have aimed to educate the public. A mural painted on a wall in Kenya for World Malaria Day 2014 showed viewers the proper use of bed nets to help lower the rate of infection (Figure 15.22).
During the COVID-19 pandemic, artists also went to the streets. Some of the murals they painted demanded action or celebrated health workers. Others called for awareness about the rising number of elderly people dying of the disease (Figure 15.23).
- What makes art a powerful medium for conveying messages about awareness? What aspects of these murals seem especially powerful to you?
- Do you recall seeing artwork from the COVID-19 pandemic or any other disease outbreak? What stood out in it?
- What other art forms might an artist use to communicate political or social messages? How are these methods effective?
Learning Objectives
By the end of this section, you will be able to:
- Explain the reasons for the rise of environmentalism around the world
- Identify ways in which environmental groups have faced resistance
- Describe the ways in which the global community has attempted to address environmental issues like climate change
While concern for the Earth and anxiety about the negative effects humans have wrought are hardly new, the modern environmental movement, with its characteristic public activism, has roots in nineteenth-century reactions to industrialization. Writers of that time, like George Perkins Marsh, John Ruskin, Octavia Hill, and many others, expressed a romantic view of nature that contrasted sharply with the industrial transformations they witnessed around them. Their ideas gave birth to preservation efforts and the creation of national parks, first in the United States and later in Australia, South Africa, India, and nations in Europe. Environmental concern has only grown stronger as societies around the world must now grapple with the enormous and ongoing consequences of increasing industrialization, manifested particularly in the global threat of climate change.
The Rise of Environmentalism
In the post–World War II period, as the United States and Europe experienced unprecedented economic growth and a rapidly rising standard of living, anxiety regarding the condition of the environment rose to the surface and gained political significance. In the developed West, members of the generation reaching maturity in the 1960s occasionally struggled with their affluence and the recognition that it came with disastrous environmental consequences. These consequences were not limited to the developed world. In the 1950s and 1960s, as part of the Green Revolution, agricultural scientists developed new high-yielding varieties of rice and wheat and synthetic fertilizers and pesticides that increased food production in both the developed and the developing world. The result was that millions were saved from hunger, and infant mortality in developing nations decreased. Scientist Norman Borlaug, who was considered largely responsible for the Green Revolution, received a Nobel Peace Prize in 1970 for his work. Such benefits, however, came with a heavy price tag, in many ways. Small farmers had to borrow money to purchase the new high-yielding seeds, fertilizers, and pesticides, and many found themselves deeply in debt. Reliance on the new varieties of crops reduced biodiversity, and chemical fertilizers and pesticides polluted the soil and water.
In 1962, Rachel Carson published her bestselling book Silent Spring, which railed against the proliferation of dangerous pesticides like DDT (Figure 15.9). Carson drew connections between the political power of the chemical industry and the many adverse effects of chemicals that made their way into food supplies and human bodies. Though strongly condemned by large chemical companies, the book was undeniably influential. It was a finalist for the National Book Award for nonfiction, and its ideas inspired a generation of young activists. In 1972, the use of DDT in U.S. agriculture was banned.
Carson’s book tapped into the growing anxiety of many who felt the economic growth from which they benefited was environmentally unsustainable. These fears were confirmed by a series of ecological disasters that galvanized public attention. They included the 1958 Niger Delta oil spill in Nigeria, the 1962 start of the Centralia mine fire in Pennsylvania (which is still burning), the 1967 Torrey Canyon oil spill in the United Kingdom, the 1969 Cuyahoga River fire in Ohio, and the 1969 Santa Barbara oil spill. By the 1970s, environmental concerns had translated into political action. In April 1970, approximately twenty million people in the United States participated in the world’s first Earth Day celebration, a grassroots movement intended to raise public awareness about the environment (Figure 15.10).
In 1972, scientist Donella Meadows and others from the Massachusetts Institute of Technology published a report called The Limits to Growth, which used computer models to predict that humanity would soon reach absolute limits on its use of resources, with disastrous consequences. The report had been commissioned by the Club of Rome, a nonprofit group of scientists, economists, and other intellectuals founded in 1968 to address global problems like pollution and environmental degradation. The Limits to Growth circulated widely and reinforced public concerns about a widespread environmental crisis on Earth.
Over the next decade, green parties, political parties organized around environmental concerns, proliferated in countries around Europe, proving popular with the young and highly educated. Some green party founders, such as Petra Kelly of the German Green Party, had studied in the United States and were influenced by its environmental movement. By the 1990s, there were green parties in almost every country in Europe and also in the United States, Canada, Argentina, Chile, Australia, and New Zealand.
One of the factors motivating green parties in Europe was growing concern about nuclear technology. Following the 1951 creation of the first nuclear reactor for producing energy, nuclear power plants became common in the United States, Europe, and the Soviet Union. Once hailed as a cleaner alternative to polluting coal-burning power, nuclear energy began to stall as environmentally conscious populations around the world voiced concerns about its potential dangers. News of the partial meltdown of the Three Mile Island nuclear reactor in Pennsylvania in 1979 gave new vigor to the already strong antinuclear movement.
Still considered the worst nuclear accident in U.S. history, the Three Mile Island disaster released radioactive gases throughout the plant and into the surrounding area. After news of it reached the public, more than 100,000 residents fled the area. Despite President Jimmy Carter’s efforts to calm the public, the event shattered the country’s belief that such plants could be operated safely. Just a few years later, in 1986, an accident at the Chernobyl nuclear power plant in Ukraine, which was then part of the Soviet Union, resulted in the single largest uncontrolled radioactive release ever recorded. Although the Soviet Union reported that only thirty-one people died as a direct result of the accident, more than 200,000 had to be resettled in the wake of the disaster, and in 2005 the United Nations estimated that another four thousand could still die as a result of exposure to radiation released at Chernobyl. The area around Chernobyl was declared off limits, and the Soviet Union suffered irreparable harm to its reputation as a competent and technologically advanced superpower.
In Their Own Words
Voices from Chernobyl
The 1986 Chernobyl catastrophe in the Union of Soviet Socialist Republics (USSR) required the creation of an “exclusion zone” of towns, villages, forests, and farms that had to be abandoned due to radioactive contamination (Figure 15.11). However, the Soviet government minimized information about the crisis to reduce its embarrassment and maintain an image of technical dominance. Decades later, we can glimpse what people were thinking and doing at the time.
At that time my notions of nuclear power stations were utterly idyllic. At school and at the university we’d been taught that this was a magical factory that made ‘energy out of nothing,’ where people in white robes sat and pushed buttons. Chernobyl blew up when we weren’t prepared. And also there wasn’t any information. We got stacks of paper marked ‘Top Secret.’ ‘Reports of the accident: secret;’ ‘Results of medical observations: secret;’ ‘Reports about the radioactive exposure of personnel involved in the liquidation of the accident: secret.’ And so on. There were rumors: someone read in some paper, someone heard, someone said . . . . Some people listened to what was being said in the West, they were the only ones talking about what pills to take and how to take them. But most often the reaction was: our enemies are celebrating, but we still have it better.
—Zoya Danilovna Bruk, environmental inspector interviewed by Svetlana Alexievich, Voices from Chernobyl, Translated by Keith Gessen
At first everyone said, ‘It’s a catastrophe,’ and then everyone said, ‘It’s nuclear war.’ I’d read about Hiroshima and Nagasaki, I’d seen documentary footage. It’s frightening, but understandable: atomic warfare, the explosion’s radius. I could even imagine it. But what happened to us didn’t fit into my consciousness. You feel how some completely unseen thing can enter and then destroy the whole world, can crawl in and enter you. I remember a conversation with this scientist: ‘This is for thousands of years,’ he explained. ‘The decomposition of uranium: that’s 238 half-lives. Translated into time: that’s a billion years. And for thorium: its fourteen billion years.’ Fifty, one hundred, two hundred. But beyond that? Beyond that my consciousness couldn’t go.
—Anatoly Shimanskiy, journalist interviewed by Svetlana Alexievich, Voices from Chernobyl, Translated by Keith Gessen
- Do you think Chernobyl changed the prospects for nuclear energy use? Why or why not?
- How should governments handle disasters of this magnitude? If your government dealt with this event, how would you want it to do so?
It was also in the 1980s that scientists first detected the existence of an “ozone hole” over Antarctica. The ozone layer is a portion of the Earth’s upper atmosphere with especially high concentrations of ozone molecules (Figure 15.12). It blocks much of the potentially harmful radiation the Earth receives from the sun and thus is essential for life on this planet as we know it. The news that this layer had a hole—an area severely depleted by the use of manufactured chemicals in common consumer items like aerosols, refrigerants, and food packaging—startled the public. Some environmentalists predicted an apocalyptic near-future in which billions would die of skin cancer caused by the sun, and the Earth’s surface would become unlivable.
In the 1980s and early 1990s, people also became concerned about the plight of the environmentally invaluable Amazon rainforest. The movement known as Save the Rainforest brought professional environmentalists and concerned citizens together to raise awareness about deforestation. Brazil’s extensive rainforests had been under threat since the 1960s, when cattle ranchers and others began clearing thousands of acres of pristine forest. Until the 1980s, few people had paid much attention. But as concerns rose in wealthy countries about the harm major beef producers and other multinational corporations were doing to the people and resources of developing countries, including the elimination of the “lungs of the planet” (trees that produce oxygen and absorb the carbon dioxide created by industrial processes), more people began to take notice.
Environmentalists stressed the rainforest’s unique biodiversity and warned of the consequences of destroying both the sole source of potentially world-changing drugs and animal species found nowhere else on Earth. Anthropologists and Indigenous activists spoke of the effect on Indigenous peoples who lived and hunted in the rainforest and were threatened with the loss of both home and livelihood. These warnings merged with developed countries’ anxieties about overconsumption and living beyond their means. The result was the Save the Rainforest campaign. By 1991, the effort had borne fruit, and deforestation in the Amazon had declined to one of the lowest recorded rates. Between 2005 and 2010, Brazil managed to reduce the destruction, but millions of hectares are still being cleared each year.
It was also during the late 1980s that much of the world was first introduced to the concept of global warming, the general rise in Earth’s temperature that scientists have observed over approximately the last two hundred years. The consensus is that this warming is the result of a steady increase in fossil-fuel burning since the start of the Industrial Revolution. The process has contributed to a rise in greenhouse gas levels, which trap yet more heat within the Earth’s atmosphere. Global warming is just one aspect of climate change, a broader phenomenon that includes changes in temperature, weather, storm activity, wind patterns, sea levels, and other influences on the planet.
Both global warming and climate change present enormous challenges for the future. Rising sea levels may make some large coastal cities around the world unlivable. Stronger storms, floods, and more intense heat can make life unbearable in entire regions. Extreme weather events like hurricanes, heat waves, and forest fires caused by drought and high temperatures may kill and injure thousands and cause billions of dollars in property losses. Hotter, wetter conditions may encourage the breeding of insects that spread infectious diseases like malaria and West Nile virus. These changes in turn will likely lead to worldwide problems. The World Bank estimates that over the next few decades, more than 200 million people could become climate refugees, people forced to flee their homes to find livable climates elsewhere.
Link to Learning
Artists have always tried to highlight problems around the world. Climate change is no exception. Take a look at this PBS report on the Ghost Forest exhibit created by artist Maya Lin for one stunning example.
Environmentalism Today
Many of the most outspoken proponents of environmentalism and policies to curb fossil-fuel use have come from the United States and developed countries in Europe. Some of their anxieties about the state of the environment are a product of their own affluence and the sense that it is unsustainable. Yet frequently their environmental concerns have clashed with the interests of developing countries, which are largely geared toward growth. Concern about the Amazon rainforest is one example of this dynamic.
The depletion of the rainforest was largely the result of the expansion of cattle ranching and later of farming in the Amazon River basin. When rainforest trees are removed for these purposes, the greenhouse gas carbon dioxide (CO2) is released, contributing to global warming. Ranching also contributes to a rise in methane, another greenhouse gas, produced by the cattle themselves. While environmentalists in the United States and Europe viewed with horror the increase of greenhouse gases and the loss of animal habitats, plants, and trees, the clearing of land brought economic opportunities for many Brazilians. It created jobs for poor workers, produced lumber for construction, and opened space for ranchers to graze cattle and farmers to grow crops that could be consumed domestically or exported for profit.
Debates about whether these outcomes are positive or negative reflect the fact that developed countries have mostly freed themselves from concerns about survival. They are able now to focus on sustainability, while developing countries like Brazil must exploit their natural resources to get by and improve their economic position. Between 1990 and 2019, China’s coal consumption increased nearly fourfold. Its energy needs have become enormous as China has industrialized and its citizens have experienced a rising standard of living. While the Chinese are not immune to criticisms about high coal consumption, which emits greenhouse gases and contributes to global warming, their policies suggest their larger concern is maintaining their country’s continued growth and development.
Groups like Greenpeace, an environmental organization founded in 1971, sometimes express dismay about such perspectives. For example, in the 1970s, Greenpeace and other environmental groups pushed for a ban on seal hunting around the world, calling it unsustainable and cruel to the animals. However, many in Canada and other Arctic regions depend on sealing for their livelihoods. They felt Greenpeace and other environmentalists undermined their industry and hampered their efforts to provide for their families. They resented being portrayed as unfeeling killers by organizations that appeared to know little about their lives.
A similar conflict erupted in the forests of the Pacific Northwest in the 1980s as environmentalists ramped up protests against logging in order to preserve the spotted owl, a threatened species indigenous to states like Washington and Oregon (Figure 15.13). (Threatened species are those the government identifies as likely to soon be endangered.) The tens of thousands of loggers who depended on the industry to survive complained that while the environmental concerns were real, their livelihoods should take priority over the survival of an individual species. The conflict became known as the Timber Wars and gained enormous media attention in the late 1980s and early 1990s. In the end, the U.S. government sought a compromise in the Northwest Forest Plan of 1994, which restricted forest exploitation and satisfied neither group.
The Global Response to Climate Change
Countries from around the world have regularly tried to address environmental problems together. The Montreal Protocol of 1987 was a global agreement to ban and phase out specific chemicals in industrial and consumer products in the expectation that natural processes would restore the Earth’s ozone layer. The accord went into effect almost immediately, and the ozone layer has been steadily recovering ever since. Scientists anticipate that it will be fully healed by 2070, if not earlier.
At the United Nations in 1989, Prime Minister Margaret Thatcher of the United Kingdom gave an urgent warning about the environment and especially climate change. She called for the world to embrace nuclear power as a substitute for coal-burning power plants, but she acknowledged that the increasingly influential green party movement remained firmly committed to preventing the proliferation of nuclear technology.
Thatcher’s speech marked a global acknowledgment that global warming and climate change were pressing problems requiring international solutions. Given the success of the Montreal Protocol, she suggested a similar solution to climate change, but it met with resistance from the United States. President Reagan and Thatcher had both risen to power by embracing neoliberal economic policies, which call for market-oriented approaches and a rollback of regulation. Reagan remained largely unmoved even as Dr. James Hansen, an atmospheric scientist working for the U.S. government, testified in Congress that global warming was a real and dangerous threat (Figure 15.14).
Despite U.S. resistance, the larger international community proceeded apace. In 1992, at a conference in Rio de Janeiro, Brazil, the United Nations established the United Nations Framework Convention on Climate Change (UNFCCC), which convenes annually for negotiations among member countries. Notable meetings have occurred in Kyoto (1997), Copenhagen (2009), Paris (2015), and Glasgow (2021). The UNFCCC Paris Agreement of 2015 took into account how much each member country is able to pay for a major transition away from fossil fuels. On a five-year cycle, the agreement asks each country to make a contribution proportionate to its own needs and its ability to pay for new energy infrastructure, though it has no enforcement mechanism. The goal of the Paris Agreement is total warming of less than 2°C (3.6°F) above levels from the time of industrialization, around the year 1750. While some have called the agreement successful in helping approach this goal, others feel many countries are not fulfilling the promises they made.
Learning Objectives
By the end of this section, you will be able to:
- Discuss the development of complex digital computers and their effects on human society
- Analyze the effects of the internet and social media on society
- Describe important medical developments of the last fifty years and current medical challenges
World War II brought about a massive technological transformation as countries like Germany and the United States rapidly innovated to avoid destruction and defeat their enemies. In the decades after the war, there was major progress in medical technology, the creation of new vaccines, and the elimination of deadly diseases. All these achievements had profound effects on the way people lived, traveled, and worked. Underlying them were major advancements in the field of information technologies, such as computers. Once the war was over, the development of increasingly powerful computers ushered in a computer revolution as powerful as the nineteenth century’s Industrial Revolution, and with it a digital age.
The Digital Computer Revolution
Many of the technological advancements of the 1940s and 1950s came in the form of increasingly powerful analog computers, which analyze a continuous stream of information, much like that recorded on vinyl records. Analog computers worked well for solving big mathematical problems, such as the calculations related to electrical power delivery systems or the study of nuclear physics. However, one of their weaknesses was that they were inefficient at managing large amounts of data. Digital computers, or those that translate information into a complex series of ones and zeros, were far more capable of managing bulk data. Just a few years after the war, digital computing received a huge boost with the invention of the transistor, a device with far more computing potential than its predecessor, the vacuum tube. Scientists could extend this computing capacity even further by wiring multiple transistors together in increasingly complex ways.
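To make the idea of digital representation concrete, the short Python sketch below shows how text and numbers can be encoded as patterns of ones and zeros and decoded again. It is a modern illustration of the general principle only, not a model of any historical machine, and the sample string and number in it are chosen arbitrarily for the example.

```python
# A minimal sketch of digital encoding: turning information into ones and zeros.
# Illustrative only; the sample text "ENIAC" and the number 1970 are arbitrary.

def to_binary(text):
    """Encode each character of a string as an 8-bit binary number."""
    return [format(ord(ch), "08b") for ch in text]

def from_binary(bits):
    """Decode a list of 8-bit binary strings back into text."""
    return "".join(chr(int(b, 2)) for b in bits)

encoded = to_binary("ENIAC")
print(encoded)                # ['01000101', '01001110', '01001001', '01000001', '01000011']
print(from_binary(encoded))   # ENIAC

# Numbers are handled the same way: 1970 becomes the bit pattern 11110110010.
print(format(1970, "b"))
```

Early digital computers carried out this kind of translation in hardware, with each one or zero represented physically, for example by the on or off state of a switch built from vacuum tubes or, later, transistors.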
The use of multiple transistors for computing purposes was an important step, but it had obvious drawbacks. Making machines capable of processing a great deal of information required connecting many transistors, which took up a great deal of space. Then, in the late 1950s, inventors in the United States developed an innovative solution. Using silicon, they could combine transistors and capacitors on a single chip, something clumsy wiring could not accomplish. The silicon-based integrated circuit freed computer technology from size constraints and opened the door to additional advancements in computing power.
Even so, digital computers remained large, expensive, and complicated to operate, and their use was largely confined to universities and the military. Only gradually over the 1970s did computing technology become more widely available, largely thanks to mass-produced general-purpose computers, sometimes called minicomputers, designed by IBM and the Digital Equipment Corporation. These served a variety of government and private purposes, such as calculating the census, managing the flow of tax monies, and processing calculations related to creditworthiness (Figure 15.15). But despite being somewhat cheaper, minicomputers remained out of reach for average users.
The journey from minicomputers to personal computers began with the Intel Corporation, established in 1968 in Mountain View, California, in a region now commonly called Silicon Valley. During the 1970s, Intel developed a line of integrated circuits that were not only more powerful than their predecessors but also programmable. These became known as microprocessors, and they revolutionized computing by holding all of a computer’s processing power in a single integrated circuit. In 1975, a company in New Mexico released the first marketed personal computer, the Altair 8800. This used an Intel microprocessor and was promoted to computer hobbyists eager to wield a level of computing power once available to only a few. The Altair’s popularity inspired competing products like the Apple, the Commodore, and the Tandy Radio Shack computer (Figure 15.16). These personal computer systems were far easier to use and appealed to a much larger market than just hobbyists.
By 1982, there were 5.5 million personal computers in the United States, and over the next decade, their number and computing power rose exponentially. Computers proliferated in government offices, private firms, and family homes. Then, in 1984, Apple introduced the world to the Macintosh computer, which not only used a mouse but also replaced the standard text-based command interface with one based on graphics and icons. Recognizing the user-friendly possibilities of this graphic interface, competitors followed suit. Before long, the design popularized by Apple had become the norm.
By the end of the 1980s, not only had personal computers become common, but the microprocessor itself could be found everywhere. Microprocessors were incorporated into automobiles, cash registers, televisions, and household appliances and made possible a variety of other electronic devices like videocassette recorders and video game systems (Figure 15.17). Computer systems were created to store and manage financial, educational, and health-care information. In one form or another and whether they realized it or not, by the 1990s, almost everyone in the developed world was interacting with computers.
Modems were hardly new in the 1990s, but they became much faster and more common with the rise of the internet. The origins of the internet date back to the 1960s and the efforts by government researchers in the United States to use computers to share information. These developments were especially important for the U.S. Department of Defense during the Cold War and resulted in the emergence of the Advanced Research Projects Agency Network (ARPANET). In creating ARPANET, researchers developed many of the technologies that over the next few decades formed the basis for the internet we know today.
The Internet and Social Media
The process of globalization has been accelerated by the rise of the internet and the social media platforms, like Instagram, Facebook, and Twitter, that it hosts. Many people were introduced to the potential of computer networks for sharing information and creating small social networks in the 1980s, when individual users became able to connect their computers to others by using modems and telephone networks. This connectivity gave rise to regional bulletin board systems (BBSs), in which one person’s computer served as a host for those of other users (Figure 15.18). BBSs functioned much like websites today. Though they ran far more slowly and had limited capabilities, they allowed users to share computer files like games and images, post messages for others to read, participate in virtual discussions and debates, and play text-based online games. BBSs used phone networks to communicate, and long-distance calls were then expensive, so their users tended to be local.
Throughout the 1980s, BBSs continued to be popular with computer hobbyists and those intrigued by the idea of unique virtual communities, while networking technology improved steadily behind the scenes. The United States, Europe, and other developed countries were busy adopting a uniform protocol system that would allow computers around the world to communicate easily with one another. Once this protocol had been established, the commercial internet as we currently understand it was born.
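The uniform protocol system described here became the TCP/IP suite, which still underpins the internet today. To give a rough sense of what it means for two computers to speak a common protocol, the short Python sketch below (a modern, much-simplified illustration; the loopback address, port number, and messages are arbitrary choices for the demo) has one piece of code listen for a TCP connection while another connects and exchanges a message with it.

```python
# A minimal sketch of two programs communicating over the shared TCP/IP protocol,
# using Python's standard socket library. Both halves run on one machine here,
# but the same calls work between computers anywhere on the internet.
import socket
import threading
import time

HOST, PORT = "127.0.0.1", 50007  # loopback address and an arbitrary demo port

def server():
    """Listen for one connection, read a message, and echo it back."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen(1)
        conn, _addr = srv.accept()
        with conn:
            message = conn.recv(1024)
            conn.sendall(b"Echo: " + message)

# Start the listening side in the background, then connect to it as a client.
threading.Thread(target=server, daemon=True).start()
time.sleep(0.5)  # give the server a moment to start listening

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as client:
    client.connect((HOST, PORT))
    client.sendall(b"Hello from another computer")
    print(client.recv(1024).decode())  # prints: Echo: Hello from another computer
```

Because both sides follow the same rules for addressing, connecting, and transmitting data, the exchange works regardless of what hardware or operating system either computer uses, which is precisely what allowed networks around the world to interconnect.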
As early as 1987, about thirty thousand hosts resided on the burgeoning internet. Soon telecommunications and software companies began to exploit this new network by creating online service providers like America Online (AOL) to act as gateways to the internet. Initially, they used standard phone lines and modems to connect, much as BBSs had. But as the volume of information on the internet increased exponentially, service providers turned to more expensive broadband connections that used cable television lines and even dedicated lines to connect. During the 1990s, the first websites, the first internet search engines, and the first commercial internet platforms were established.
By 2005, more than one billion people worldwide were using the internet regularly. They were able to shop online, make phone calls around the world, and even create their own websites with almost no technical training. Never before had the world been so connected. In 2004, Facebook was launched. Originally a networking tool for Harvard students, it quickly expanded globally to become a giant in the new world of social media. By 2010, nearly half a billion Facebook users around the world were sharing images and messages, creating communities, and linking to news stories. By 2022, the number of Facebook users had reached nearly three billion.
Before 2007, almost all internet users gained access to the network via a personal computer, either at home or at work. That year, however, Apple Inc. released the first iPhone, a powerful cell phone but also a portable computer capable of performing tasks that once required a desktop computer. Even more revolutionary, it connected to the internet wirelessly through cell-phone infrastructure. While the iPhone was not the first phone to connect to the internet, its touch-screen interface was far superior to earlier systems. Within just a few years, other cell-phone manufacturers were imitating its design and putting smartphones, and thus internet access, in the pockets of users around the world.
Smartphones have transformed life in developing countries, where they have helped bypass some of the traditional stages of infrastructure creation. In Africa, for example, people living where no landlines existed can now communicate with others using cell phones. Small farmers and traders can use cell phones for banking and to connect with potential suppliers and customers. In communities without libraries, schoolchildren can access the internet’s resources to study.
Smartphones have also democratized the internet, serving as powerful tools for organizing and promoting political change. The large pro-democracy movement in Cairo’s Tahrir Square captured the world’s attention in 2011, for example. But it began with twenty-five-year-old activist Asmaa Mahfouz’s YouTube video of January 18, 2011, in which she spoke directly to the camera and urged young Egyptians to protest at the square as part of the larger Arab Spring, a call for government reform and democracy that echoed across the Arab world.
The Arab Spring was touched off in December 2010 when Muhammad Bouazizi, a young college graduate, set himself on fire in Tunisia after government officials there tried to interfere with the fruit cart that was his only source of income. Other young Tunisians took to the streets in protest, and demonstrations began again in January 2011. As people died in confrontations with government forces, President Zine al-Abidine Ben Ali fled the country, and Tunisia’s prime minister resigned shortly thereafter.
The Tunisian protests led to similar demonstrations in Egypt. On January 17, 2011, an Egyptian set himself on fire near the nation’s Parliament to protest the lack of economic opportunities. Crowds of mostly young people responded with massive demonstrations that lasted weeks (Figure 15.19). These demonstrations were fueled by and broadcast to the world through text messages, photos, tweets, videos, and Facebook posts sent by thousands of mobile phones, including that of Mahfouz. The devices amplified the calls for democracy and showed the world the Egyptian government’s use of violence to try to silence the protestors. Egyptian president Hosni Mubarak resigned on February 11, 2011. He was later convicted for his role in ordering government forces to harm and kill protestors.
In the wake of the Egyptian protests, activists in Libya, Yemen, Syria, Morocco, Lebanon, Jordan, and other countries coordinated their activities using computers and smartphones to access social media, video, and mobile phone messaging. These efforts resulted in protests, changes to the laws, and even the toppling of governments, such as in Egypt and Tunisia. They also led to civil wars in Syria, Iraq, and Libya that caused thousands of deaths and a refugee crisis in the Mediterranean. While Twitter and Facebook were useful for scaling up protests, the movements to which they gave birth often struggled to find a purpose in countries without a well-established resistance movement.
Link to Learning
In this interview, Egyptian-American journalist and pro-democracy activist Mona Eltahawy talks about the Arab Spring and revolution in Egypt and the use of social media as a tool for organizing. She addresses the role of social media in two parts. Take a look at her answers to “Did the government completely misjudge what they were doing?” and “Could this have happened without social media, without these new technologies?”
Since 2011, governments around the world have come to recognize the power of social media to bring about change, and many authoritarian and even ostensibly democratic leaders have moved to limit or block social media use in their countries. China has blocked Facebook and Twitter since 2009 and encourages its citizens to instead use the state-authorized app WeChat, which shares information with the government. In 2020, India banned the social media app TikTok, claiming it threatened state security and public order. In March 2022, following its February invasion of Ukraine, Russia banned Instagram and Facebook because, the government alleged, the platforms carried messages calling for violence against Russian troops and against Russian president Vladimir Putin. Turkmenistan has gone further than China, India, or Russia. It not only bans Facebook and Twitter, but it also requires citizens applying for internet access to swear they will not try to evade state censorship.
Link to Learning
China is noted for its strict internet censorship, and its government blocks access to a large number of sites in a policy colloquially known as the Great Firewall of China. The Comparitech service allows you to check whether particular websites are blocked in China by entering and searching for them.
In the United States, lawmakers have recognized that social media platforms like Facebook and Twitter can both promote and endanger democracy. Social media provides extremist groups with the ability to attract followers from across the nation and incite violence. Groups can use the platforms to spread fake news, and a report by the U.S. Senate has concluded that Russian intelligence operatives used Facebook, Twitter, and Instagram to manipulate voters. Legislators have called on social media companies to more actively censor the content on their platforms and to limit or block access by groups or persons spreading hate speech or disinformation. The potential for misuse of technology is heightened by advances that enable the creation of deepfakes, computer-generated images that closely resemble real people.
Medical Miracles and Ongoing Health Challenges
Advances in computer technology were not the only technological success stories of the post–World War II world. In 1947, scientists perfected an artificial kidney, and just five years later, the first successful kidney transplant was performed. In the 1950s, antipsychotic drugs were developed and used to treat neurological disorders that once consigned patients to a lifetime of difficult treatment in a psychiatric hospital. In the 1950s, geneticists discovered the double-helix structure of DNA, information that was crucial for later advancements such as the ability to use DNA to diagnose and treat genetic diseases. In 1962, a surgical team successfully reattached a severed limb for the first time, and in 1967, the first human heart transplant took place. Over the next decade and a half, medical advances made it possible to conduct telemedicine, view and monitor internal organs without performing surgery, and monitor the heartbeat of a fetus during pregnancy.
Medical science also made enormous gains in eradicating diseases that had been common for centuries. For example, polio had caused paralysis and even death since the late nineteenth century, but in 1950, the first successful polio vaccine, developed by the Polish-born virologist Hilary Koprowski, was demonstrated to be effective in children. This was an orally ingested live vaccine, a weakened form of the virus designed to help the immune system develop antibodies. In the meantime, researcher Jonas Salk at the University of Pittsburgh was developing an injectable vaccine that used a killed form of the virus (Figure 15.20). Though inactive, the virus still triggered the body to produce antibodies. In 1955, Salk’s vaccine was licensed for use in the United States, and mass distribution began there. Other vaccines were developed in the United States and other countries over the next several years. Their use has nearly eradicated polio cases, which once numbered in the hundreds of thousands. When polio was detected in an adult in New York in July 2022, it was the first case in the United States since 2013.
The eradication of smallpox is another important success story. Centuries ago, smallpox devastated communities around the world, especially Native American groups, which had no immunity to the disease when Europeans brought it to their shores. Early vaccines based on the cowpox virus were deployed in the United States and Europe in the eighteenth century with great effect. In the twentieth century, advancements made the vaccine safer and easier to administer. However, by the 1950s, much of the world remained unvaccinated and susceptible. In 1959, the World Health Organization (WHO) began working to eradicate smallpox through mass vaccination, redoubling its efforts in 1967 through its Intensified Eradication Program. During the 1970s, smallpox was eradicated in South America, Asia, and Africa. In 1980, the WHO declared it had been eliminated globally.
The WHO’s smallpox program is considered the most effective disease-eradication initiative in history, but it was an aggressive campaign not easily replicated. And without a vaccine, the problems of controlling transmissible diseases can be immense. A novel disease was first reported among Los Angeles’s gay community in 1981, and by 1982 it had become known as AIDS (acquired immunodeficiency syndrome). Researchers realized it was commonly transmitted through sexual intercourse but could also be passed by shared needles and blood transfusions. At that time, the U.S. Centers for Disease Control explained that AIDS was not transmitted through casual contact, but the information did little to calm rising concerns about this still largely mysterious and deadly disease. By 1987, more than 60,000 people in the world had died of AIDS. In the United States, the government was slow to fund research to develop treatments or to find a cure. That year, activists at the Lesbian and Gay Community Services Center in New York City formed the AIDS Coalition to Unleash Power (ACT UP). They were alarmed by the toll AIDS was taking on the gay community and by the government’s seeming lack of concern regarding a disease the media depicted as affecting primarily gay men, an already stigmatized group. ACT UP engaged in nonviolent protest to bring attention to their cause and worked to correct misinformation regarding the disease and those who were infected with it.
By the year 2000, scientists in the developed world had acquired a sophisticated understanding of AIDS and the human immunodeficiency virus (HIV) that causes it, and treatments had emerged that made it a manageable rather than a lethal disease, at least in the developed world. But in parts of the developing world, like Sub-Saharan Africa, infection rates were still rising. One difficulty was that HIV infection and AIDS had become associated with homosexuality, which carried stigma and, in some places, even legal penalties that made those infected reluctant to seek help. Addressing transmission with the general public also meant broaching sometimes culturally sensitive topics like sexual intercourse. Those attempting to control the spread of the disease often found themselves trying to influence social and cultural practices, a complicated task fraught with pitfalls.
This does not mean there were not successes. The proliferation of condom use, circumcision, and public information campaigns, along with the declining cost of treatment, has greatly reduced the extent of the epidemic in Africa. But AIDS is still an enormous and devastating reality for Africans today. Sub-Saharan Africa is home to nearly 70 percent of the world’s HIV-positive cases. Women and children are particularly affected; Africa accounts for 92 percent of all cases of infected pregnant women and 90 percent of all infected children.
Ebola virus has also threatened the health of Africans. The first known outbreak of Ebola, a hemorrhagic fever, took place in Central Africa in 1976. Since then, there have been several other outbreaks. In 2013–2016, an outbreak in West Africa quickly spread across national borders and threatened to become a global epidemic. Approximately ten thousand people fell ill in Liberia alone, and nearly half of those infected died.
The most recent challenge to world health, the COVID-19 pandemic, demonstrates the effects of both globalization and technological developments. The coronavirus SARS-CoV-2 appeared in Wuhan, China, an industrial and commercial hub, in December 2019. Airplane and cruise ship passengers soon unwittingly spread it throughout the world; the first confirmed case in the United States appeared in January 2020. As every continent reported infections, offices, stores, and schools closed and travel bans were imposed. Despite these restrictions, middle-class and wealthy people in the developed world carried on almost as normal. Many worked, studied, shopped, visited friends and family, and consulted doctors online from their homes.
Low-paid workers in service industries often lost their jobs, however, as restaurants and hotels closed, and children without access to computers or stable internet connections struggled to keep up with their classes. Even the more fortunate in the developed world confronted shortages of goods from toilet paper to medicines to infant formula when global supply chains stalled as farm laborers, factory workers, dock hands, and railroad employees fell ill or workplaces closed to prevent the spread of infection. Developing countries lacked funds to support their citizens through prolonged periods of unemployment. Although vaccines were developed in several countries, they were available primarily to people in wealthier nations. As of March 2022, only 1 percent of all vaccine doses administered worldwide had been given to people in low-income countries.
Beyond the Book
Public Art and Modern Pandemics
Dangerous diseases like HIV/AIDS can energize more than just doctors working in laboratories and global leaders publishing reports. During the early years of the HIV/AIDS crisis, grassroots organizers from around the world strove to focus attention on the problem. Their actions were necessary because governments often did little to prevent the spread of the disease or provide treatment for those infected. The AIDS Coalition to Unleash Power (ACT UP) became known for staging loud protests in public and sometimes private places to raise awareness about the disease. In the United States, the publicity generated through groups like ACT UP forced the government to pay greater attention and to budget more money for the search for a cure. Some artists responded to this movement with murals in well-known locations like the Berlin Wall (Figure 15.21).
While some murals about diseases were a call to action, especially about HIV/AIDS, others have aimed to educate the public. A mural painted on a wall in Kenya for World Malaria Day 2014 showed viewers the proper use of bed nets to help lower the rate of infection (Figure 15.22).
During the COVID-19 pandemic, artists also went to the streets. Some of the murals they painted demanded action or celebrated health workers. Others called for awareness about the rising number of elderly people dying of the disease (Figure 15.23).
- What makes art a powerful medium for conveying messages about awareness? What aspects of these murals seem especially powerful to you?
- Do you recall seeing artwork from the COVID-19 pandemic or any other disease outbreak? What stood out in it?
- What other art forms might an artist use to communicate political or social messages? How are these methods effective?