Effects of Altering Grammar and Spelling on Perceived Author Credibility
Clemson University - Psychology

Abstract

This study aimed to examine the relationship between passage quality and perceived author credibility. Passage quality was defined as the number of mechanical errors in a text, whereas perceived author credibility was defined as the degree to which participants believed that the author was a reliable and truthful source. Two surveys and a passage in three different forms (a good passage containing no errors, a fair passage with a few errors, and a poor passage with many errors) were completed by 69 college students in a between-subjects design. Survey questions measuring perceived author credibility were examined in order to detect a difference in participants’ responses among the three passage quality conditions. A significant difference in perceived author credibility was detected among the conditions. The results of this study suggest that a text of better quality will elicit greater degrees of perceived author credibility from the readers of the text.

 

            The purpose of this study was to investigate a potential relationship between passage quality, with respect to grammar and spelling, and perceived author credibility. Heyman (1992) described how credibility can be divided into two dimensions, or factors: a trust factor and a qualification factor. That is, an individual’s perceived credibility is based upon the degree to which the perceiver finds the individual to be trustworthy and competent in the given topic area. While much research has been gathered on factors affecting credibility, the majority of this research has focused on face-to-face or verbal communication between the participant and the individual whose credibility is being assessed (King, Minami, & Samovar, 1985). In addition, subject variables contributing to perceived credibility, such as gender and ethnicity, have been studied exhaustively (Gomez & Pearson, 1990). Little literature is available, however, regarding the potential impact of grammar and spelling on perceived credibility; this variable is therefore worth exploring. Brown (2004) points out that grammatical and spelling errors are the number one reason why articles are rejected upon submission to publishers. He goes on to say that an individual who is inattentive to detail could be perceived as being inattentive to the quality of the research being done for the journal. While this logic is generally accepted by the public, no mechanism has been proposed to explain why this occurs. This study is therefore important for understanding whether a relationship exists between passage quality and perceived author credibility.

            The coherence of a text-based passage has been shown to affect the interest of the reader in that passage (Schraw, 2001). Coherence is defined in this case as the degree to which the reader can understand the text. Alterations in grammar and spelling can cause a passage to be less coherent; therefore, the interest of the reader in the passage is decreased. This idea should be applicable to perceived author credibility, since one of the factors affecting credibility is interest (Pratkanis & Gliner, 2004-2005). Because subject material, passage length, and use of jargon can also affect the interest of a reader in a passage, these variables must be controlled (or at least held constant) in order to observe only the effect of changes in grammar and spelling.

            Citera, Beauregard, & Mitsuya (2005) conducted an experiment that relates to text-based credibility. Participants were placed into two treatments: a face-to-face (FTF) negotiation condition and an e-negotiation condition. In the FTF negotiation condition, participants were paired, and each played the role of a buyer and a seller (these roles were then interchanged so that everyone could be both a buyer and a seller). Various costs of an automobile were negotiated before a survey was completed that examined the participants’ degree of trust and perceived credibility in the dealings with their partner. The same conditions were followed for the e-negotiation condition; however, the dealings between participants took place in an internet chat environment. The results of this experiment indicate that the FTF negotiation approach fosters greater trust between the buyer and seller, and it also improves the buyer’s perceived credibility of the seller. This is explained by the Psychological Distance Theory, which states that closer physical interaction with an individual fosters relationship building, whereas a lack of physical interaction causes a person to be more wary of that individual (Citera, Beauregard, & Mitsuya, 2005). These results indicate that the perceived credibility of an author is likely to be lower when a reader encounters a text by that author than it would be if the reader spoke about the topic face-to-face with the author.

            The fact that the reader has no physical basis on which to judge the author of a text (as in a face-to-face situation) leaves a limited number of factors on which the reader can base the credibility of an author. Fragale (2004) has identified familiarity with a text to be correlated with an increased acceptance that the text comes from a credible source. Fragale tested this by revealing to participants six urban legends regarding food products. In one condition, the participants were given these legends twice, and in another condition, the six urban legends were repeated five times each. Participants were then asked if it was likely for each of the legends to include facts found in Consumer Reports (a credible source). There were significantly more participants who believed that the legends could be found in Consumer Reports for the five-times repetition condition than there were for the two-times repetition condition, indicating that familiarity with a topic can cause the reader to attribute that topic to a credible source. This provides a warning that the subject matter of a passage can influence the perceived credibility of an author, based upon how experienced the reader is with the information in the passage.

            Work by Pratkanis & Gliner (2004-2005) has produced results that concur with these findings. By examining the effects of message and message source on message effectiveness, Pratkanis & Gliner were able to show that the impact of a topic on the reader can be predicted by the subject matter of the topic and who is presenting the topic. This was done by having two conditions for each independent variable. For the message, a protection message (nuclear disarmament) and a technical message (the possible existence of a tenth planet in our solar system) were used. For the message source, either a second-grade schoolgirl or an older, professional-looking male doctor was used. Participants were found to be impacted more by the nuclear disarmament message (protection) when it came from the schoolgirl, and they were impacted more by the tenth planet message (technical) when it came from the doctor. This interaction shows that perceived credibility can also be influenced by the subject matter. Expertise in a particular area causes an individual to be perceived as more credible in that area. Therefore, it is important to construct a passage that could come from a variety of sources exhibiting a gradient of reliability based upon their degree of expertise in the given subject area.

            This research on credibility identifies factors such as passage interest, coherence, technicality, and expertise as significant influences upon perceived author credibility. However, none of these issues specifically addresses the mechanics of language use. What if a highly credible source (say, an astrophysicist) writes a paper on a topic in which they have expertise, but it is a poorly written paper (mechanically speaking)? If the same paper had better mechanics, would it be perceived as being written by a more credible source? This study will attempt to answer these questions by dividing participants (consisting of university students) into three groups. One group of participants will receive a passage that has no grammatical or spelling errors, another group will have a passage with minor grammatical and spelling errors, and the final group will read a passage with severe grammatical and spelling errors. A survey completed before the passage is read will introduce controls, such as interest in the passage topic and judgments of the credibility of varying levels of academic status. The survey taken after the passage is read will measure the participants’ perceived credibility of the author of the passage.

            The primary goal of this study was to investigate the relationship between the use of proper grammar and spelling and the perceived credibility of an author. The literature presented indicates that coherent, interesting, and repeated passages will result in a greater degree of perceived author credibility. However, research also shows that information presented in a text format will be perceived as less credible than the same information received in a face-to-face situation. Because this study involves a text-based passage and not a physical interaction between the participant and the author of the passage, grammar and spelling should play a significant role in the coherence, and therefore perceived credibility, of the author. Proper grammar and spelling were expected to improve perceptions of an author’s credibility, whereas improper grammar and spelling were expected to worsen perceptions of an author’s credibility.

Method

Participants

            Recruited participants consisted of students on campus at Clemson University who were over the age of 18. Thirty male and 39 female participants were recruited, resulting in a total of 69 participants. The average age of the students was 20.26, with a range of 18 to 30 and a standard deviation of 1.86. One 30-year-old participant was not dropped, because age was not expected to influence perceived author credibility. Two of the 69 participants were dropped. The first participant who was dropped disagreed with question 5 on the pre-passage survey (gave a response of “2”), so their response to question 5 on the post-passage survey was defined as invalid. The second participant was dropped for the same reason, and also because they responded in a significantly different manner to questions 3 and 4 of the pre-passage survey than they did to questions 1 and 2 of the post-passage survey. This discrepancy was large enough to suggest that the passage affected the participant’s responses, so the remaining responses could not be considered valid. There were three levels of the independent variable, and 23 participants were assigned to each level. Participants were collected via convenience sampling, since the investigator recruited fellow students by word of mouth. The participants were then assigned to one of the three levels of the independent variable (“good passage,” “fair passage,” or “poor passage”) through the use of a random number generator. Participants were volunteers and therefore received no compensation for their inclusion in the study. Participants were not screened for any specialized prerequisite skills. Before completing the surveys, the participants needed to read and understand an informational letter. The participants were asked if they had any questions regarding the content of the informational letter before they participated in the study.

Materials

            Informational letter. Approached participants were required to read the informational letter. This letter identified the purpose of the study, the eligibility requirements for recruitment, the tasks that the participants would complete, the amount of time that the surveys would take, the voluntary nature of the study, and the contact information of the investigators.

            Pre-passage survey. The first survey that the participants completed will be referred to as the “pre-passage survey,” and it can be found in Appendix A. The first two questions (regarding age and gender) on the pre-passage survey were collected to describe the sample. The next three questions were based on a six-point Likert scale, with “1” being a strong disagreement and “6” being a strong agreement with the statement being made. Questions 3 and 4 asked the participant to indicate how strongly they felt about the importance of proper grammar and spelling, respectively. These two questions were designed as control questions to see if the participant was affected by the passage (the post-passage survey contained the same two questions to see if the passage changed the participant’s feelings regarding proper grammar and spelling). Question 5 asked the participant if they felt that a college professor is a more credible source of information regarding their specialty than a graduate student. This was a control question to ensure that the participant believed that a college professor is a more credible source than a graduate student. This was necessary to give validity to question 5 in the post-passage survey, which asked participants to identify who they thought was most likely the author of the passage.

            Post-passage survey. The second survey that participants completed will be called the “post-passage survey,” and it can be found in Appendix C. After reading the given passage, participants answered five questions. The first four questions were based on a six-point Likert scale, with “1” being a strong disagreement and “6” being a strong agreement with the statement being made. Questions 1 and 2, regarding proper grammar and spelling respectively, were intended to be control questions, and they are identical to questions 3 and 4 in the pre-passage survey. Their purpose was to examine whether reading the passage had any effect on participants’ feelings towards proper grammar and spelling. Question 3 asked the participant if they believed that the passage consisted of accurate information. This question was used to quantitatively assess the participants’ degree of perceived author credibility. Question 4 asked the participant if they believed that the author of the passage was a credible source. This was another measure taken into account when rating the participants’ perceived author credibility. Question 5 also measured credibility, but it was not on a Likert scale. It asked the participant to identify who was most likely the author of the passage: an elementary school student (the least credible choice), a middle school student, a high school student, an undergraduate student, a graduate student, or a college professor (the most credible choice). To ensure that these options were presented from the least credible to the most credible source, precautions were taken in question 5 of the pre-passage survey to measure the magnitude of credibility for two of the responses present in question 5 of the post-passage survey. This made question 5 a more reliable measure of perceived author credibility. No questions on the pre-passage or post-passage survey needed to be reverse-scored.

            Passage. The passage came in three different forms, representative of the three levels of the independent variable: the “good passage,” “fair passage,” and “poor passage,” which can be found in Appendix B. The good passage contained no grammatical or spelling errors. The fair passage contained grammatical and spelling errors that are relatively easy to make, whereas the poor passage contained a large number of grammatical and spelling errors. The passage was selected from a technical writing website by Dennis G. Jerz (2000). D. G. Jerz provided permission to alter the passage into the “fair” and “poor” forms (personal communication, February 17, 2006). In the “fair passage,” there were a total of ten mistakes: six spelling errors and four grammatical errors. In the “poor passage,” there were a total of 31 mistakes: 20 spelling errors and 11 grammatical errors. All mistakes made in the fair and poor passages are underlined in Appendix B. Note that the passages given to participants did not have headings (such as “good passage,” “fair passage,” and “poor passage”), and they also did not have any underlined mistakes.

            Debriefing form. After collection of the post-passage survey, the debriefing form was given to participants. This form provided the participants with the purpose of the study, the hypothesis of the investigator, and the contact information for the principal investigator and the co-investigator.

            Other materials. Pens and clipboards were provided for the participants’ convenience. The investigator also provided each participant with a folder into which the pre-passage survey, passage, and post-passage survey were placed in order to protect the participants’ anonymity.

            Study environment. Participants were recruited from multiple levels of the Cooper Library building at Clemson University. Data collection took place on two consecutive nights, with recruitment occurring between the hours of 6:00 PM and 12:00 AM.

Apparatus

            The only specialized equipment used in this study was an online application called the Random Number Generator (Daniels, 2001-2003). This program is written in JavaScript and selects integers at random based upon a time-seeded algorithm. The program was employed to randomly assign participants to each level of the independent variable. Block randomization was also used: within each block of three participants, each of the three levels of the independent variable was assigned exactly once.
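As a concrete illustration, the block-randomization scheme described above can be sketched in a few lines of Python. The condition labels mirror the study’s three passage types; the use of Python’s random module (rather than the JavaScript generator the investigators used), the function name, and the seed value are assumptions made for this sketch.

```python
import random

CONDITIONS = ["good passage", "fair passage", "poor passage"]

def block_randomize(n_participants, seed=None):
    """Assign conditions in blocks of three so that each condition
    appears exactly once per block, as in the study's design."""
    rng = random.Random(seed)
    assignments = []
    while len(assignments) < n_participants:
        block = CONDITIONS[:]
        rng.shuffle(block)  # random order within each block of three
        assignments.extend(block)
    return assignments[:n_participants]

# 69 participants yields exactly 23 per condition, as reported above
plan = block_randomize(69, seed=1)
```

Because 69 is a multiple of the block size, this scheme guarantees equal group sizes while keeping the order of conditions unpredictable to the investigator.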

Procedures

            If a volunteer was interested in being involved in the study, the individual was given the informational letter to read. When the participant had finished reading the informational letter, the investigator asked the participant if they had any questions regarding the letter. If the participant agreed to take part in the study, then the participant kept the letter. The investigator then gave the participant the pre-passage survey, a clipboard, and a pen. The pre-passage survey had previously been numbered on the top-right corner of the page. The passage and post-passage survey had this same number on their top-right corners in order to identify which surveys and passages went together (for example, one participant received a pre-passage survey, passage, and post-passage survey that all had a “1” on their top-right corners; the next participant had a “2” on the top-right corners of all three forms). Once the participant had completed the pre-passage survey, they placed it into the folder provided by the investigator in order to maintain their anonymity. After the participant placed the pre-passage survey into the folder, the investigator gave the participant the passage. Participants received one of three versions of the same passage: a “good passage,” a “fair passage,” or a “poor passage.” A random number generator was used to assign participants to one of these three passages. When the participant finished reading the passage, the investigator gave the participant the post-passage survey. The investigator then notified the participant that they were allowed to refer to the passage while completing the post-passage survey. Once the participant had completed the post-passage survey, the participant placed the passage and post-passage survey into the same folder as the pre-passage survey. The investigator then collected the folder from the participant.

Design

            This study was set up to compare the means of participants’ responses to three questions (Questions 3, 4, and 5 of the post-passage survey) across the three levels of the independent variable (passage quality) and to determine whether differences among the levels were present. To do this, three separate one-way ANOVAs (one for each of the three questions) were run, and Bonferroni’s Post Hoc Test was also conducted for each question. Since each participant received only one of the three levels of the independent variable, and each participant was randomly assigned a level, this was a between-subjects design.
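The between-subjects comparisons described above rest on the one-way ANOVA’s F ratio, computed from the between-group and within-group sums of squares. The sketch below, in plain Python with no statistics library, shows that computation; the Likert-style scores are invented for illustration and are not the study’s data.

```python
def one_way_anova(*groups):
    """Pure-Python one-way ANOVA.

    Returns (F statistic, between-groups df, within-groups df)."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    # Between-groups sum of squares: spread of group means around the grand mean
    ss_between = sum(
        len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups
    )
    # Within-groups sum of squares: spread of scores around their own group mean
    ss_within = sum(
        sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups
    )
    df_between, df_within = k - 1, n - k
    f_stat = (ss_between / df_between) / (ss_within / df_within)
    return f_stat, df_between, df_within

# Hypothetical six-point Likert responses for the three passage conditions
good, fair, poor = [6, 5, 6, 5], [4, 5, 4, 3], [3, 2, 3, 4]
f_stat, dfb, dfw = one_way_anova(good, fair, poor)
```

With k = 3 conditions and n = 69 participants, the degrees of freedom work out to F(2, 66), matching the values reported in the Results. A Bonferroni correction then simply multiplies each pairwise comparison’s p-value by the number of comparisons (three, for three conditions).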

            The independent variable examined was passage quality, which in this case referred specifically to the proper use of grammar and spelling in a written work. There were three treatments of the independent variable: the “good passage,” “fair passage,” and “poor passage.” The good passage contained no grammatical or spelling errors, the fair passage contained a total of ten errors (six spelling errors and four grammatical errors), and the poor passage contained a total of 31 errors (20 spelling errors and 11 grammatical errors). Passage quality was split into these three treatments to determine whether it related to perceived author credibility.

            The dependent variable examined was perceived author credibility, defined in this study as a reader’s confidence that the information they read comes from a reliable, truthful source. Perceived author credibility was placed on a discrete scale, ranging in magnitude from “1” (low perceived author credibility) to “6” (high perceived author credibility). The post-passage survey examined this in questions 3, 4, and 5. Questions 3 and 4 were already in quantitative form (with a response of “1” indicating low perceived author credibility and a response of “6” indicating high perceived author credibility), but question 5 had to be converted to a quantitative form. A response of “Elementary School Student” on question 5 was assigned a value of “1” (low perceived author credibility), whereas a response of “College Professor” was assigned a value of “6” (high perceived author credibility). Note that these values were assigned to the responses on question 5 of the post-passage survey only if the participant agreed with question 5 of the pre-passage survey (a response of “3,” “4,” “5,” or “6”). Perceived author credibility was therefore quantified, providing a basis for defining the participant’s low or high confidence in the reliability of the information within the passage.
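The conversion of Question 5 responses into the 1-to-6 scale, including the rule that a score counts only when the participant agreed with the pre-passage control question, can be sketched as follows. The paper fixes only the endpoints (elementary school student = 1, college professor = 6); the evenly spaced intermediate values and the function name are assumptions of this sketch.

```python
# Least-to-most-credible ordering from post-passage Question 5.
# Endpoints (1 and 6) come from the paper; intermediate values are
# assumed to be evenly spaced.
AUTHOR_SCORES = {
    "Elementary School Student": 1,
    "Middle School Student": 2,
    "High School Student": 3,
    "Undergraduate Student": 4,
    "Graduate Student": 5,
    "College Professor": 6,
}

def score_author_question(author_choice, pre_q5_response):
    """Return the credibility score for post-passage Question 5, or None
    when the participant disagreed with pre-passage Question 5 (a
    response of 1 or 2), which invalidates the ordering assumption."""
    if pre_q5_response not in (3, 4, 5, 6):
        return None
    return AUTHOR_SCORES[author_choice]
```

Returning None for invalidated responses mirrors the study’s decision to drop participants whose pre-passage control answers undermined the scale.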

Data Quantification

            The questions on the pre-passage survey and post-passage survey were separated into “question types.” Question 3 on the pre-passage survey (which asks how strongly participants feel about proper grammar use) was designated as the “Grammar” question type. Question 4 on the pre-passage survey (which asks how strongly participants feel about proper spelling use) was likewise defined as the “Spelling” question type. Question 5 on the pre-passage survey (which asks how strongly participants feel that a professor is a more credible source in their specialty area than a graduate student) was condensed to the “Professor” question type. On the post-passage survey, a similar labeling process was conducted in order to display results in a more succinct manner. Question 1 (which was the same as Question 3 on the pre-passage survey) was also labeled “Grammar,” and Question 2 (which was the same as Question 4 on the pre-passage survey) was also labeled “Spelling.” Question 3 on the post-passage survey (which asks participants to identify how accurate the information in the passage was) was labeled “Accuracy.” Question 4 on the post-passage survey (which asks how credible the author is) was labeled “Credibility,” and Question 5 (which asks who is most likely the author of the passage) was labeled “Author.” Questions 3, 4, and 5 (Accuracy, Credibility, and Author, respectively) were used as measures of perceived author credibility.

            For the pre-passage survey, one-way ANOVAs were conducted for the Grammar, Spelling, and Professor question types with respect to the three passage quality conditions. This checked for pre-existing differences between groups. For the post-passage survey, one-way ANOVAs were also conducted for Grammar, Spelling, Accuracy, Credibility, and Author with respect to passage quality. Grammar and Spelling from the post-passage survey were examined to ensure that no differences existed among the responses of participants in each condition. Accuracy, Credibility, and Author were examined to see if perceived author credibility changed with passage quality. Finally, a 3 (passage quality) x 2 (survey type) design was examined in a repeated measures ANOVA. The two questions examined within each survey type were Grammar and Spelling. This was conducted to determine whether the passage had any effect upon participants’ responses to the Grammar and Spelling question types in the pre-passage and post-passage surveys.

Results

Analysis of Pre-passage Survey

            Figure 1 shows the responses of participants on proper grammar use, proper spelling use, and whether or not they saw a professor as a more credible source than a graduate student for the pre-passage survey. Responses of participants to proper grammar use (“grammar”) were compared across passage quality conditions using a one-way ANOVA to determine whether there were any pre-existing differences between groups. The ANOVA revealed no significant differences in participants’ responses to the “grammar” question. In a similar manner, the responses of participants on proper spelling use (“spelling”) were also compared between groups using a one-way ANOVA. Again, no significant differences among participants’ responses to the “spelling” question were found. Finally, participants reported how strongly they felt that a college professor is a more credible source in their area of specialty than a graduate student (“professor”), and their responses were also compared among the three groups using a one-way ANOVA. There was no significant difference in participants’ responses to the “professor” question among passage qualities.

Analysis of Post-passage Survey

            In a similar manner to the pre-passage survey analysis, means and standard errors of the mean for participants’ responses on the use of proper grammar and spelling in the post-passage survey are displayed in Figure 2. Participants’ responses regarding grammar were compared across passage quality conditions using a one-way ANOVA to examine whether any significant differences existed between groups after the passage was read. No significant difference was discovered between groups for grammar. Participants’ responses regarding spelling were then examined against passage quality for the post-passage survey. No significant difference was discovered between groups for spelling.

            Figure 3 shows the responses of participants to each question measuring perceived author credibility (“accuracy,” “credibility,” and “author”) for each passage condition on the post-passage survey. For the question measuring accuracy, a main effect was revealed, F(2, 66) = 8.80, p < 0.001. Bonferroni’s Post Hoc Test revealed a difference between the good passage and the poor passage, but no significant differences between means for the good passage and the fair passage or between the fair passage and the poor passage were found. There was also a significant main effect found on the question measuring credibility, F(2, 66) = 12.80, p < 0.001. A difference was revealed between the good passage and the fair passage on the question measuring credibility. Furthermore, a significant difference was also detected on the credibility question between means for the good passage and the poor passage. No significant difference was found between the fair passage and the poor passage. The question regarding the author of the passage revealed a significant main effect, F(2, 66) = 21.53, p < 0.001. A significant difference in responses between the good passage and the fair passage was detected. Another significant difference was found between the responses to the good passage and the poor passage. There was no significant difference detected between the fair passage and the poor passage for the question regarding the author of the passage.

Comparison of Means for Grammar and Spelling Between Pre-passage and Post-passage Survey

            Figure 4 summarizes the results of a repeated measures ANOVA that was conducted to determine whether differences existed between groups for the questions regarding grammar and spelling on the pre-passage survey and the post-passage survey. There were no significant main effects or interactions found between or within groups with regard to the questions on grammar for the pre-passage survey or the post-passage survey. There were also no significant differences found between or within groups with regard to the questions on spelling for the pre-passage survey and the post-passage survey.

Discussion

            In order to examine whether or not a relationship existed between perceived author credibility and passage quality, 69 students at Clemson University were given two brief surveys and a passage to read. The passage that students read came in three forms, representative of the three levels of the independent variable: a “good passage” (with no grammatical or spelling errors), a “fair passage” (with some grammatical and spelling errors), and a “poor passage” (with many grammatical and spelling errors). Other than grammatical and spelling errors, the three passages were identical. Twenty-three students were assigned to each condition, but two participants were dropped (both from the “fair passage” condition). Three of the survey questions that participants answered after reading the passage were used to measure their degree of perceived author credibility, which was defined as the degree to which the participants believed that the author of the passage was trustworthy and competent in the given subject matter. The three questions on the post-passage survey used to measure perceived author credibility asked participants if they thought that the information in the article was accurate (Question 3), if they thought that the author of the passage was a credible source (Question 4), and who they thought the author of the passage was (Question 5). Participants responded that the author of the “good passage” was a more credible source than the author of the “poor passage,” suggesting a relationship between perceived author credibility and passage quality.

            The questions that measured perceived author credibility (Questions 3, 4, and 5 on the post-passage survey) were referred to in the results as “Accuracy,” “Credibility,” and “Author,” respectively. Figure 3 shows the relationship between perceived author credibility and passage quality with respect to these three credibility questions. Participants perceived the author of the “good passage” to be a more credible source than the author of the “fair” and “poor” passages. The perceived author credibility of the “fair passage,” however, was not found to be significantly different from that of the “poor passage.” These results suggest that the coherence of a text-based passage (which was manipulated through passage quality) has an effect on an individual’s level of perceived author credibility.

            The findings of this research concur with the work of Ambler and Hollier (2004), which showed how “waste” (excessive product descriptions and euphemisms) led participants to ascribe a greater degree of credibility to the item being advertised. Ambler and Hollier’s work demonstrates that information is judged as credible when it is of high quality (in this case, quality was described by the amount of “waste” in the presentation). Participants who were shown a lower-quality presentation for a product generally did not find the product to be credible. While that research focuses on the credibility of an object, it emphasizes that presentation matters in any sort of text or graphical format. Combining that research with the present findings on the relationship between perceived author credibility and passage quality implies that graphical or text representations can be perceived differently based upon the quality of the presented information.

            Schraw and Lehman’s (2001) study on reader interest as a function of coherence in a text-based passage also converges with the results of this research. They found that less coherent passages decreased participants’ interest levels, and Pratkanis and Gliner (2004-2005) identified interest as a factor that influences readers’ perceptions of an author’s credibility. The present research therefore extends the implications of coherence as a key factor determining the perceived credibility of an author. This finding should prove especially useful when information is presented to audiences without a face-to-face interaction between the presenter and the audience (Citera, Beauregard, & Mitsuya, 2005). Citera et al. found that face-to-face interactions produce greater perceived credibility between two people than interactions over the internet. When physical interaction between people is not possible, the present research suggests that the quality of the presented information is critical to the audience’s perceived author credibility.

            Coherence was relied upon heavily to describe the level of perceived credibility of an author. This factor was shown to be a valid measure of credibility in Criteria-Based Content Analysis (CBCA) (Zaparniuk, Yuille, & Taylor, 1995). Zaparniuk et al. examined truthfulness by using the CBCA as a criterion for judging the responses of adult witnesses on trial. Coherence, that is, the repeated and accurate presentation of events, is the first criterion the CBCA uses to measure the credibility of a witness’s story. A disjointed, discontinuous story (analogous to the poor grammar and spelling of the “poor passage”) was viewed as having little credibility. Zaparniuk et al.’s work therefore supports the use of coherence as a valid measure of perceived author credibility.

            While the use of passage quality as a representation of coherence was justified, the results of this investigation could be challenged on several grounds. First, more questions could have been used to measure a participant’s perceived author credibility. Since only three survey questions measured this construct, some aspect of credibility may not have been covered; for example, readers could have been asked how truthful they thought the author of the passage was. Furthermore, participants’ interpretation of each question must be addressed. Question 4 on the post-passage survey asked readers how accurate they thought the passage was, but accuracy can be defined in many ways (for example, how closely the passage matched accepted and known information, or whether the author believed the information he was giving was true). Question 6 on the post-passage survey asked participants to identify who they thought the author of the passage was; however, after collecting the surveys, the researcher occasionally heard participants remark that they had merely assumed the researcher was the author. Further measures must be taken to collect validity information on the questions contained in this study’s surveys. Additionally, convenience sampling was employed to gather participants: the study recruited only students found in the library at night. A sampling limitation of this size greatly decreases the generalizability of the results to the student population at large. Generalizability was also limited by the subject matter of the passage. Given that the topic was technical writing, participants may have assumed that its author would naturally write with few (if any) spelling or grammatical errors. Had the passage been about a different topic (for example, environmental sustainability), participants may have expected a greater number of grammatical and spelling errors. The expectations that the subject matter elicited in participants may therefore have threatened the validity of this study.

            Despite these limitations, internal validity was protected (especially with respect to a selection threat) by using a random number generator to randomly assign participants to the passage quality conditions (“good passage,” “fair passage,” and “poor passage”). A script was also used to ensure that each participant was recruited in the same manner and given survey items at the correct times and in the correct order. Question 3 on the pre-passage survey and Question 1 on the post-passage survey were identical, asking participants to indicate how strongly they felt about the proper use of grammar; likewise, Question 4 on the pre-passage survey and Question 2 on the post-passage survey were identical, asking how strongly participants felt about the proper use of spelling. These questions served as controls to ensure that the consistency of participants’ responses was maintained and that the passage did not influence participants’ feelings about grammar and spelling. Two outliers were identified by the discrepancy between their responses to these four items and were dropped to maintain the integrity of the data analysis.
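The assignment and screening procedure just described can be sketched as follows. The function names, the 1-5 attitude scale, and the two-point discrepancy threshold are illustrative assumptions, not the study's actual tooling or parameters:

```python
import random

CONDITIONS = ["good passage", "fair passage", "poor passage"]

def assign_conditions(participant_ids, seed=None):
    """Randomly assign participants across the three passage quality conditions,
    keeping group sizes balanced (69 participants -> 23 per condition)."""
    rng = random.Random(seed)
    ids = list(participant_ids)
    rng.shuffle(ids)
    # Deal the shuffled participants round-robin so each condition fills evenly
    return {pid: CONDITIONS[i % len(CONDITIONS)] for i, pid in enumerate(ids)}

def inconsistent_responder(pre, post, threshold=2):
    """Flag a participant whose answers to the repeated control items (attitudes
    toward grammar and spelling, asked pre- and post-passage on a 1-5 scale)
    diverge by more than `threshold` points on any item."""
    return any(abs(pre[item] - post[item]) > threshold for item in pre)
```

A screening pass like `inconsistent_responder({"grammar": 5, "spelling": 4}, {"grammar": 1, "spelling": 4})` would flag the participant, mirroring how the two outliers in the study were identified by pre/post discrepancies.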

            The results of this research suggest that a blind or double-blind experiment should be conducted in the future; since the researcher interacted with the participants, this interaction may have influenced their responses to survey items. It is also imperative that more valid measures of perceived author credibility be introduced into future research in order to justify a causal relationship between perceived author credibility and passage quality. Naturally, the study should be extended to other age groups and localities in order to test for regional effects. A potentially interesting longitudinal study could examine perceived author credibility over time by giving participants multiple works by a single author whose identity is revealed to them. Perhaps a “familiarity” effect could be identified, analogous to the Psychological Distance Theory described by Citera et al. (2005). Moreover, this design could be modified to manipulate passage quality as well (for example, participants read the same identified author over time, but the quality of the author’s writing is intentionally decreased or increased). An examination of perceived author credibility in response to manipulated passage quality certainly opens up many avenues for further research.

References

Ambler, T., & Hollier, A. E. (2004). The waste in advertising is the part that works. Journal of Advertising Research, 44(4), 375-389.

Brown, R. T. (2004). Editorial: A general approach to publication in the Journal of Pediatric Psychology: From the process of preparing your manuscript to revisions and resubmissions. Journal of Pediatric Psychology, 29(1), 1-5.

Citera, M., Beauregard, R., & Mitsuya, T. (2005). An experimental study of credibility in e-negotiations. Psychology & Marketing, 22(2), 163-179.

Daniels, M. (2001-2003). Random Number Generator. Retrieved February 21, 2006, from http://www.mdani.demon.co.uk/para/random.htm.

Fragale, A. R. (2004). Evolving informational credentials: The (mis)attribution of believable facts to credible sources. Personality and Social Psychology Bulletin, 30(2), 225-236.

Gomez, C. F. R., & Pearson, J. C. (1990). Students' perceptions of the credibility and homophility of native and non-native English speaking teaching assistants. Communication Research Reports, 7(1), 58-62.

Heyman, S. (1992). A study of Australian and Singaporean perceptions of source credibility. Communication Research Reports, 9(2), 137-150.

Jerz, D. G. (2000). Technical writing: What is it? Retrieved February 16, 2006, from http://jerz.setonhill.edu/resources/FAQ/TW.htm.

King, S. W., Minami, Y., & Samovar, L. (1985). A comparison of Japanese and American perceptions of source credibility. Communication Research Reports, 2(1), 76-79.

Pratkanis, A. R., & Gliner, M. D. (2004-2005). And when shall a little child lead them? Evidence for an altercasting theory of source credibility. Current Psychology, 23(4), 279-304.

Schraw, G., & Lehman, S. (2001). Situational interest: A review of the literature and directions for future research. Educational Psychology Review, 13(1), 23-52.

Zaparniuk, J., Yuille, J. C., & Taylor, S. (1995). Assessing the credibility of true and false statements. International Journal of Law and Psychiatry, 18(3), 343-352.