This study investigated the effect of textual modification on learners' reading comprehension ability. For this purpose, 115 male and female university students majoring in English Translation participated in the study. After being homogenized by means of an MELAB test, 60 learners were selected and randomly assigned to two groups, one control and one experimental. Both groups then sat for a pre-test of reading comprehension, whose purpose was to measure the learners' initial reading comprehension ability. The experimental group subsequently received treatment based on the textual modification strategy, whereas the control group received no treatment. The treatment took 10 sessions. At the end of the course, both groups sat for a post-test of reading comprehension, and the data were analysed through ANCOVA. The results showed that learners' reading ability improves more when they are provided with the textual modification strategy.

Key words: textual modification, reading comprehension

Table of Contents

Chapter 1: Introduction
1.0) Introduction
1.1) Theoretical Framework
1.2) Statement of the Problem
1.3) Purpose of the Study
1.4) Research Question
1.5) Research Hypothesis
1.6) Significance of the Study
1.7) Definitions of Key Terms
1.7.1) Textual Modification
1.7.2) Reading Comprehension
1.8) Summary

Chapter 2: Review of the Related Literature
2.0) Introduction
2.1) Theoretical Framework
2.2) Reading Comprehension, Past and Present
2.2.1) The Top-Down (Concept-Driven) Approach
2.2.2) The Bottom-Up (Serial, Text-Based) Approach
2.2.3) The Interactive Approach
2.3) Schema Theory
2.4) Parsing
2.5) Reading Materials
2.5.1) Interest
2.5.2) Objectives
2.5.3) Readability
2.5.4) Authenticity
2.6) Some Sources of Syntactic Complexity
2.6.1) Surface Complexity
2.6.1.1) Amount
2.6.1.2) Density
2.6.1.3) Ambiguity
2.6.2) Interpretive Complexity
2.6.3) Systematic Complexity
2.6.3.1) Sentence Length
2.6.3.2) Preposed Clauses
2.6.3.3) Passive Sentences
2.6.3.4) Relative Clauses and Embedding
2.6.3.5) A Proposition-Based Measure of Comprehensibility
2.7) Syntactic Complexity and Reading
2.8) Simplification of Reading Materials
2.8.1) Splitting the Sentence
2.8.2) Changing Discourse Markers
2.8.3) Transformation to Active Voice
2.8.4) Inversion of Clause Ordering
2.8.5) Subject-Verb-Object Ordering
2.8.6) Topicalization and Detopicalization
2.9) Simplification and Authenticity
2.10) Summary

Chapter 3: Methodology
3.0) Introduction
3.1) Design of the Study
3.2) Participants of the Study
3.3) Materials of the Study
3.4) Procedures of the Study
3.5) Statistical Analysis
3.6) Summary

Chapter 4: Results
4.0) Data Analysis and Findings
4.1) Results of Hypothesis Testing
4.2) Summary

Chapter 5: Discussion and Implications
5.0) Discussion
5.1) Pedagogical Implications
5.2) Implications for Teaching
5.3) Limitations of the Study
5.4) Suggestions for Further Research

References

Appendices
Appendix A: MELAB Test
Appendix B: Pre-test (a test from the Nelson-Denny Reading Comprehension Tests)
Appendix C: Treatment procedure for the experimental group (syntactically simplified texts)
Appendix D: Post-test

List of Tables
Table 2.1 Survey of Simplification Studies and Results
Table 4.1 Group Statistics
Table 4.2 Independent Samples Test
Table 4.3 Descriptive Statistics and Independent t-Test for the Comparison of Pre-test Results
Table 4.4 Independent Samples Test
Table 4.5 Paired Samples Test

Chapter One: Introduction

Introduction

Textual modification can be defined as any process that reduces the syntactic or lexical complexity of a text while attempting to preserve its meaning and information content. The aim of textual modification is to make text easier for a human reader to comprehend, or easier for a program to process.

A common method for assessing whether a text is suitable for a particular reading age is a readability metric, such as the Flesch readability score, proposed in 1943 and more recently popularized by Microsoft Word. These metrics are based solely on surface attributes of a text, such as average sentence and word lengths. The term readability is therefore something of a misnomer: these metrics do not attempt to judge how readable, well written or cohesive a text is, or even whether it is grammatical. Rather, they suggest the reading age for which a text (assumed to be well written, cohesive and relevant in content) is suitable, by means of a calibration with school reading grades.
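The sketch below shows how a surface-based metric of this kind can be computed. It is a minimal illustration rather than the exact procedure used by Flesch or by Microsoft Word: the sentence splitter is naive and the syllable counter is a rough vowel-group heuristic of my own, but it makes clear that only sentence and word length influence the score.

import re

def count_syllables(word: str) -> int:
    # Rough heuristic: one syllable per group of consecutive vowels.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    # Flesch Reading Ease: higher scores indicate easier text.
    # Only surface attributes are used: average sentence length (in words)
    # and average word length (in syllables).
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    if not sentences or not words:
        return 0.0
    avg_sentence_len = len(words) / len(sentences)
    avg_syllables = sum(count_syllables(w) for w in words) / len(words)
    return 206.835 - 1.015 * avg_sentence_len - 84.6 * avg_syllables

original = "John, who was the CEO of a company, played golf."
split = "John played golf. John was the CEO of a company."
print(flesch_reading_ease(original), flesch_reading_ease(split))
# The split version scores higher ("easier") purely because the average
# sentence length is halved; the words themselves are unchanged.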
Theoretical Framework

Compared to controlled generation and text summarization, there has been significantly less work on the automatic textual modification of existing text. Interestingly, the two main groups involved in textual modification have had very different motivations. The group at UPenn (Chandrasekar et al., 1996; Chandrasekar and Srinivas, 1997) viewed text simplification as a preprocessing tool to improve the performance of their parser. The PSET project, on the other hand, focused on simplifying newspaper text for aphasic readers (Carroll et al., 1998; Carroll et al., 1999b).

Chandrasekar et al.'s motivation for textual modification was largely to reduce sentence length as a preprocessing step for a parser. They treated textual modification as a two-stage process: analysis followed by transformation. Their research focused on dis-embedding relative clauses and appositives and separating out coordinated clauses. Their first approach (Chandrasekar et al., 1996) was to hand-craft simplification rules, the example from their paper being:

V W:NP, X:REL PRON Y, Z. −→ V W Z. W Y.

which can be read as "if a sentence consists of any text V followed by a noun phrase W, a relative pronoun X and a sequence of words Y enclosed in commas, and a sequence of words Z, then the embedded clause can be made into a new sentence with W as the subject noun phrase". This rule can, for example, be used to perform the following modification:

John, who was the CEO of a company, played golf.
−→ John played golf. John was the CEO of a company.

In practice, linear pattern-matching rules like the hand-crafted one above do not work very well. For example, to simplify

A friend from London, who was the CEO of a company, played golf, usually on Sundays.

it is necessary to decide whether the relative clause attaches to friend or London, and whether the clause ends at company or golf. And if a parser is used to resolve these ambiguities (as in their second approach, summarized below), the intended use of text simplification as a preprocessor to a parser is harder to justify.
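Purely for illustration, the sketch below implements a rule of this linear pattern-matching kind with a regular expression. The assumptions are mine, not Chandrasekar et al.'s: the noun phrase W is crudely approximated as the single word before the first comma, the embedded clause is assumed to end at the next comma, and the relative pronouns are limited to who/which/that. These guesses reproduce exactly the attachment problem discussed above.

import re

# A linear pattern in the spirit of the hand-crafted rule
#   V W:NP , X:REL PRON Y , Z .  -->  V W Z .  W Y .
RULE = re.compile(
    r"^(?P<V>.*?)(?P<W>\b[\w']+)\s*,\s*"          # ... V W,
    r"(?:who|which|that)\s+(?P<Y>[^,]+)\s*,\s*"   # REL PRON Y,
    r"(?P<Z>.+?)\.\s*$"                           # Z.
)

def simplify(sentence: str) -> str:
    m = RULE.match(sentence)
    if not m:
        return sentence
    v, w, y, z = (m.group(g).strip() for g in "VWYZ")
    return f"{v} {w} {z}.".strip() + f" {w.capitalize()} {y}."

print(simplify("John, who was the CEO of a company, played golf."))
# -> John played golf. John was the CEO of a company.

print(simplify("A friend from London, who was the CEO of a company, "
               "played golf, usually on Sundays."))
# -> A friend from London played golf, usually on Sundays.
#    London was the CEO of a company.
# The clause is wrongly attached to "London" rather than "friend",
# which is precisely the ambiguity a linear rule cannot resolve.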
Their second approach (Chandrasekar and Srinivas, 1997) was to have the program learn simplification rules from an aligned corpus of sentences and their hand-simplified forms. The original and simplified sentences were parsed using a Lightweight Dependency Analyser (LDA) (Srinivas, 1997) that acted on the output of a supertagger (Joshi and Srinivas, 1994). These parses were chunked into phrases. Simplification rules were induced from a comparison of the structures of the chunked parses of the original and hand-simplified text. The learning algorithm worked by flattening subtrees that were the same on both sides of the rule, replacing identical strings of words with variables, and then computing tree-to-tree transformations to obtain rules in terms of these variables.

This approach involved the manual simplification of a reasonable quantity of text. The authors justified it on the grounds that hand-crafting rules is time-consuming. However, it is likely that the intuitions used to manually simplify sentences can be encoded in rules without too much time overhead. In addition, while this approach is interesting from the machine-learning point of view, it seems unlikely that a system that learns from a corpus simplified by hand will outperform a system in which the rules themselves have been hand-crafted. Textual modification can increase the throughput of a parser only if it reduces the syntactic ambiguity in the text. Hence, a textual modification system has to be able to make disambiguation decisions without a parser in order to be of use to parsing. This early work on textual modification therefore raised more issues than it addressed. Moreover, since the authors did not provide any evaluations, it is difficult to assess how well their approaches to text simplification worked.

The PSET project (Devlin and Tait, 1998; Carroll et al., 1998), in contrast, was aimed at people with aphasia rather than at parsers and was more justified in making use of a parser for the analysis stage. For syntactic simplification, the PSET project roughly followed the approach of Chandrasekar et al. It used a probabilistic LR parser (Briscoe and Carroll, 1995) for the analysis stage and unification-based pattern matching of hand-crafted rules over phrase-marker trees for the transformation stage. The project reports that on 100 news articles, the parser returned 81% full parses, 15% parse fragments and 4% parse failures. An example of the kind of simplification rule used in the textual modification component of the PSET project is:

(S (?a) (S (?b) (S (?c)))) −→ (?a) (?c)
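To make the notation concrete, here is a minimal sketch of unification-style pattern matching over phrase-marker trees in the spirit of this rule. It is not the PSET implementation: trees are toy nested tuples, pattern variables simply bind to whole subtrees, and the example tree is shaped by hand to fit the rule rather than taken from real parser output.

# Trees are nested tuples (label, child, ...); leaves are strings.
# Pattern variables "?a", "?b", "?c" bind to whole subtrees.

def match(pattern, tree, bindings):
    # Try to unify a pattern with a tree, extending the bindings in place.
    if isinstance(pattern, str) and pattern.startswith("?"):
        if pattern in bindings:
            return bindings[pattern] == tree
        bindings[pattern] = tree
        return True
    if isinstance(pattern, str) or isinstance(tree, str):
        return pattern == tree
    if len(pattern) != len(tree) or pattern[0] != tree[0]:
        return False
    return all(match(p, t, bindings) for p, t in zip(pattern[1:], tree[1:]))

def apply_rule(lhs, rhs, tree):
    # If the tree matches the left-hand side, return the constituents
    # named on the right-hand side as separate output sentences.
    bindings = {}
    if not match(lhs, tree, bindings):
        return [tree]
    return [bindings[var] for var in rhs]

lhs = ("S", "?a", ("S", "?b", ("S", "?c")))   # (S (?a) (S (?b) (S (?c))))
rhs = ["?a", "?c"]                            # --> (?a) (?c)

tree = ("S",
        ("CLAUSE", "John played golf"),
        ("S",
         ("CONJ", "but"),
         ("S",
          ("CLAUSE", "Mary watched television"))))

print(apply_rule(lhs, rhs, tree))
# -> [('CLAUSE', 'John played golf'), ('CLAUSE', 'Mary watched television')]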


