"

5.2 Research Issues in Cultural Psychology

L D Worthy; T Lavigne; and F Romero

Research methods are the elements of a psychological investigation used to describe and explain psychological phenomena and constructs. Research methods can also be used to predict outcomes and to control for methodological issues through objective and systematic analysis. Information, or data, for psychological research can be collected from different sources, such as human participants (e.g., surveys, interviews), animal studies (e.g., learning and behavior), and archival sources (e.g., tweets and social media posts). Research is carried out through experimentation, observation, analysis, and comparison.

When conducting research within a culture (an indigenous study) or across cultures (a cross-cultural study), many things can go wrong that make collecting, analyzing, and interpreting data difficult. This section reviews four common methodological issues in cultural research (He, 2010):

  • Sampling Bias
  • Procedural Bias
  • Instrument Bias
  • Interpretation Issues

Procedural Bias

Procedural bias, sometimes referred to as administration bias, is related to the conditions of the study, including the setting and how the instruments are administered across cultures (He, 2010). The interaction between the research participant and the interviewer is another source of procedural bias that can interfere with cultural comparisons.

Setting

Where the study is conducted can have a major influence on how the data is collected, analyzed, and later interpreted. Settings can be small (e.g., a home or community center) or large (e.g., countries or regions) and can influence how a survey is administered or how participants respond. In a large cross-cultural health study, Steels and colleagues (2014) found that the postal system in Vietnam was unreliable, which forced a major and unexpected change in survey methodology; as a result, the researchers had to recruit more participants from urban areas than from rural areas. Harzing and Reiche (2013) found that their online survey was blocked in China because of the government's internet censoring practices, but with minor changes it could later be administered.

 

A courier is getting on his motorcycle. The materials being delivered are in a large container at the back of the motorcycle.
Problems with infrastructure and services may limit research participation. [Image by tomcatgeorge, Mail Delivery Taiwan, CC BY 2.0, https://commons.wikimedia.org/wiki/File:Taiwan_Mail_Delivery.jpg]

Instrument Administration

In addition to the setting, how the data is collected (e.g., a paper-and-pencil survey versus an online survey) may influence social desirability and response rates. Dwight and Feigelson (2000) completed a meta-analysis of computerized testing and socially desirable responding and found that impression management (one dimension of social desirability) was lower in online assessment. The effect was small, but it has broad implications for how results are interpreted and compared across cultural groups when testing occurs online.

 

A male student is sitting in front of two computers. There are books and papers in front of the student.
How you take a survey or test may influence how you respond to certain types of questions. [Image by Mr Stein, Open Cheat, CC-NC-SA 2.0, https://www.flickr.com/photos/5tein/2348649408]

Harzing and Reiche (2013) found that paper-and-pencil surveys were overwhelmingly preferred by their participants, a sample of international human resource managers, and had much higher response rates than the online survey. It is important to note that online response rates were likely higher in Japan and Korea largely because of difficulties photocopying and mailing paper versions of the survey in those countries.

Interviewer and Interviewee Issues

The interviewer effect can easily occur when there are communication problems between interviewers and interviewees, especially when they have different first languages and cultural backgrounds (van de Vijver and Tanzer, 2003). Interviewers who are not familiar with cultural norms and values may unintentionally offend participants or colleagues, or compromise the integrity of the study.

An example of the interviewer effect was summarized by Davis and Silver (2003), who found that African American respondents answered fewer political knowledge questions correctly when interviewed by a European American interviewer than by an African American interviewer. Administration conditions that can lead to bias should be taken into consideration before beginning the research, and researchers should exercise caution when interpreting and generalizing results.

 

A female sign language interpreter is signing to an audience.
A translator or interpreter can unintentionally change a question in ways that alter how a participant responds. [Image by daveynin, CC BY 2.0]

Using a translator is no guarantee that interviewer bias will be reduced. Translators may unintentionally change a question or item by omitting, revising, or reducing content. These language changes can alter the intent or nuance of a survey item (Berman, 2011), which in turn alters the answers participants provide.

Sampling Bias

In the United States and other Western countries, it is common to recruit university undergraduate students to participate in psychological research studies. Using samples of convenience drawn from this very thin slice of humanity presents a problem when trying to generalize to the larger public and across cultures. Aside from over-representing young, middle-class Caucasians, college students may also be more compliant and more susceptible to attitude change, have less stable personality traits and interpersonal relationships, and possess stronger cognitive skills than samples reflecting a wider range of age and experience (Peterson & Merunka, 2014; Visser, Krosnick, & Lavrakas, 2000).

Put simply, these traditional samples (college students) may not be sufficiently representative of the broader population. Furthermore, considering that 96% of participants in psychology studies come from western, educated, industrialized, rich, and democratic countries (so-called WEIRD cultures; Henrich, Heine, & Norenzayan, 2010), and that the majority of these are also psychology students, the question of non-representativeness becomes even more serious.

 

A crowd of students wearing college graduation regalia.
How confident can we be that the results of social psychology studies generalize to the wider population if study participants are largely of the WEIRD variety? [Image: Mike Miley, http://goo.gl/NtvlU8, CC BY-SA 2.0]

When studying a basic cognitive process (e.g., working memory) or an aspect of social behavior that appears to be fairly universal (e.g., cooperation), a non-representative sample may not be a big deal. Over time, however, research has repeatedly demonstrated the important role that individual differences (e.g., personality traits and cognitive abilities) and culture (e.g., individualism vs. collectivism) play in shaping social behavior.

For instance, even if we only consider a tiny sample of research on aggression, we know that narcissists are more likely to respond to criticism with aggression (Bushman & Baumeister, 1998); conservatives, who have a low tolerance for uncertainty, are more likely to prefer aggressive actions against those considered to be “outsiders” (de Zavala et al., 2010); countries where men hold the bulk of power in society have higher rates of physical aggression directed against female partners (Archer, 2006); and males from the southern part of the United States are more likely to react with aggression following an insult (Cohen et al., 1996).

 

Two images of the Madonna and Child are shown side by side. On the left is an image of the Madonna and Child from Japan; the image on the right shows the Madonna and Child from Renaissance Italy.
Images of the Madonna and Child reflect unique cultural differences and practices. [Image by Adriatikus, Japanese Madonna and Child, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=3551312; School of Lucca, Madonna and Child, CC0 Public Domain, http://www.publicdomainfiles.com/show_file.php?id=13931100012102]

When conducting research across cultures, it is important to ensure equivalence across the samples drawn from each culture in order to maintain the validity of the research study (Harzing et al., 2013; Matsumoto and Juang, 2013). A sample of middle-school students in the United States asked about their online shopping experiences is unlikely to be representative of middle-school students in Kenya. Even when trying to control for demographic differences, there are some experiences that cannot be separated from culture (Matsumoto and Juang, 2013). For example, being Catholic in the United States does not have the same meaning as being Catholic in Japan or Brazil. Researchers must consider the experiences of the sample in addition to basic demographic information.

Instrument Bias

A final type of method bias is instrument bias. Despite the name, it has little to do with the instrument (the survey or test) itself; rather, it refers to the participant's experience and familiarity with test taking. There are two main types of instrument bias discussed in cross-cultural research (He, 2012): familiarity with the type of test (e.g., cognitive versus educational) and familiarity with response methods (e.g., multiple choice or rating scales).

Demetriou and colleagues (2005) describe an example of familiarity with test type in a study comparing Chinese and Greek children on visual-spatial tasks. The researchers found that Chinese children outperformed Greek children, but not because of cultural differences in visual-spatial ability: learning to write requires extensive practice in all cultures, and writing Chinese is a highly visual-spatial task, so Chinese children had far more practice with visual-spatial skills.

 

A pencil is lying on a partially completed scantron.
Using a scantron assumes participants use the same letter system. [Image by lecroitg, CC0, via Wikimedia Commons, https://commons.wikimedia.org/wiki/File:Test-986769_640.jpg]

An example of how instrument bias can be reduced comes from a study of Zambian and British children (Serpell, 1979). The children were asked to reproduce a pattern using several different response methods, including paper and pencil, plasticine, configurations of hand positions, and iron wire. The British children scored significantly higher with the paper-and-pencil method, while the Zambian children scored higher when iron wire was used (Serpell, 1979). These results make sense within their cultural contexts: paper-and-pencil testing is a common experience in formal Western education systems, and making models with iron wire was a popular pastime among Zambian children. By using different response methods (i.e., paper and pencil, iron wire), the researchers were able to separate performance from bias related to the response method.

Another issue related to instrument bias is response bias, the systematic tendency to respond in a certain way to items or questions. Many things may lead to response bias, including how survey questions are phrased, the demeanor of the researcher, or the participant's desire to be a good participant and provide "the right" answers. There are three common types of response bias:

Socially desirable responding (SDR) is the tendency to respond in a way that makes you look good. Studies that examine sensitive topics (e.g., sexuality, sexual behaviors, and mental health) or behaviors that violate social norms (e.g., fetishes, binge drinking, smoking and drug use) are particularly susceptible to SDR.

Acquiescence bias is the tendency to agree rather than disagree with items on a questionnaire. It can also mean agreeing with statements when you are unsure or in doubt. Studies have consistently shown that acquiescence response bias occurs more frequently among participants of low socioeconomic status and among participants from collectivistic cultures (Harzing, 2006; Smith & Fischer, 2008). Additionally, work by Ross and Mirowsky (1984) found that Mexicans were more likely than European Americans to engage in acquiescence and socially desirable responding on a survey about mental health.

 

Five rows with five stars each. Some of the stars are shaded yellow to reflect ratings and choices.
Understanding the response method [Image by sthenstudio: Star ratings CC0 https://pixabay.com/en/ratings-stars-quality-best-ranking-1482011/]

Extreme response bias is the tendency to use the ends of the scale (all high or all low values) regardless of what the item is asking or measuring. A demonstration of extreme response bias can be found in the work of Hui and Triandis (1989). These authors found that Hispanics tended to choose extremes on a five-point rating scale more often than did European Americans although no significant cross-cultural differences were found for 10-point scales.

Interpretation Issues

One problem with cross-cultural studies is that they are vulnerable to ethnocentric bias. This means that the researcher who designs the study might be influenced by personal biases that could affect research outcomes, without even being aware of it. For example, a study on happiness across cultures might investigate the ways that personal freedom is associated with feeling a sense of purpose in life. The researcher might assume that when people are free to choose their own work and leisure, they are more likely to pick options they care deeply about. Unfortunately, this researcher might overlook the fact that in much of the world it is considered important to sacrifice some personal freedom in order to fulfill one's duty to the group (Triandis, 1995). Because of the danger of this type of bias, cultural psychologists must continue to improve their methodology.

Another problem with cross-cultural studies is that they are susceptible to the cultural attribution fallacy. This happens when the researcher concludes that there are real cultural differences between groups without any actual support for this conclusion. Yoo (2013) explains that if a researcher attributes a difference between two countries on a psychological construct to one country being an individualistic (I) culture and the other a collectivistic (C) culture, without empirically connecting the observed differences to IC, then the researcher has committed a cultural attribution fallacy.

License


5.2 Research Issues in Cultural Psychology Copyright © by L D Worthy; T Lavigne; and F Romero is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License, except where otherwise noted.