Adjective Representation in Contextual Vocabulary Acquisition




Christopher Garver

May 6, 2002

CSE 663: Advanced Topics in Knowledge Representation

Abstract

This paper examines the representation of knowledge dealing with adjectives and how this knowledge can be derived from context using the SNePS knowledge representation system. The first area of discussion is the representation of a passage where the adjective to be defined is “taciturn” and the derivation comes from background knowledge about what is entailed when two objects or properties are “unlike.” The next section discusses the general information that would be useful in defining an unknown adjective, and provides an outline for how an adjective algorithm similar to Karen Ehrlich’s noun algorithm would be built. The third section lists alternative passages that might be represented in the future and provides a brief discussion of some of these passages. The paper concludes with a description of the next steps that need to be taken in this segment of contextual vocabulary acquisition study.


Passage Representation

The initial passage chosen for adjective research was the following.




  1. “Unlike his brothers, who were noisy, outgoing, and very talkative, Fred was quite taciturn.” (Kawachi 2)

The representation of this passage detailed below builds on the representation created by Kazuhiro Kawachi in his own paper on this passage. A completely new representation was not needed, since the original used case frames already recognized by Ehrlich’s algorithm. The changes made were generally changes to the information being represented, not to the way it was represented. All figures referred to are located at the back of this paper, as are the incomplete SNePS coding of this representation and a sample run of that code. The case frames used for this representation are the same ones listed in the section on syntax and semantics of Kawachi’s report (Kawachi 3-7).

The direct translation of this passage into a semantic network is fairly straightforward. As shown in Figure (1), only a few basic ideas make up the representation. There is an object with the name Fred that has the property of being taciturn. Ehrlich’s algorithm does not currently support the “mod-head” relationship needed to express “quite taciturn,” but it is possible to say that Fred is simply “taciturn” without affecting the meaning of the passage. There is a node expressing that Fred possesses an object, and that the two are in the “brothers” relationship, meaning that Fred has brothers. Fred’s brothers have the properties of being noisy, outgoing, and talkative. Again, “very talkative” can be shortened to “talkative” without changing the meaning. Finally, Fred and his brothers are arguments in the relationship “unlike.” The original representation also contained “before-after” case frames to indicate that Fred and his brothers had these properties in the past. However, since most narratives take place in the past tense, this seemed an unnecessary distinction. Also, in the original, “taciturn” was an argument in the “unlike” relationship with “noisy,” “outgoing,” and “talkative,” separately. This is something we do want shown, but it should be inferred by SNePS once the appropriate background knowledge is added. At this point, though, claims about which properties are unlike “taciturn” cannot be discerned directly from the passage, so they should not be included (Kawachi 8).
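As a rough illustration, the passage’s assertions can be sketched in plain Python rather than SNePS; the dictionaries and sets below are invented stand-ins for the “object-property,” possession, and “unlike” case frames, not actual SNePSUL code.

```python
# Illustrative stand-in for the semantic network of passage (1).
# "quite" and "very" modifiers are dropped, as discussed in the text.

properties = {
    "Fred": {"taciturn"},
    "brothers": {"noisy", "outgoing", "talkative"},
}

possess = {("Fred", "brothers")}                 # Fred has brothers
unlike = {frozenset({"Fred", "brothers"})}       # symmetric "unlike" relation

def is_unlike(a, b):
    """True iff a and b stand in the symmetric 'unlike' relation."""
    return frozenset({a, b}) in unlike
```

Because the relation is stored as an unordered pair, `is_unlike("brothers", "Fred")` holds just as `is_unlike("Fred", "brothers")` does, matching the symmetry of the “unlike” case frame.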

The remainder of the representation deals with the background knowledge necessary to infer that “taciturn” is unlike the properties that belong to Fred’s brothers. The first piece of background knowledge that was suggested by Kawachi is as follows.



  1. “For all x and y, if x and y are unlike each other, then there is some property z such that either x or y (but not both) has the property z.” (Kawachi 9)

The purpose of this is to infer that either Fred is not noisy, not outgoing, or not talkative, or that Fred’s brothers are not taciturn. The semantic network for this is shown in Figure (2). The only change made from Kawachi’s version was the removal of an unnecessary “min-max-arg” case frame (Kawachi 9).
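The intent of this rule can be sketched in plain Python (a stand-in for the SNePS rule, not SNePSUL code): the distinguishing property z that exactly one of the two objects possesses corresponds to the symmetric difference of their property sets.

```python
# Sketch of Kawachi's first background rule: if x and y are unlike,
# some property z is possessed by exactly one of them.

def distinguishing_properties(props_x, props_y):
    """Properties possessed by exactly one of the two objects
    (the symmetric difference of the property sets)."""
    return props_x ^ props_y

def rule_holds(props_x, props_y):
    """The rule requires at least one distinguishing property."""
    return bool(distinguishing_properties(props_x, props_y))
```

For the passage, `rule_holds({"taciturn"}, {"noisy", "outgoing", "talkative"})` is satisfied, since every listed property distinguishes Fred from his brothers.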

The next piece of background knowledge that Kawachi used is below.


  1. “For all x and y, if x and y are unlike each other, then there is some property w such that x has w and y does not have w, and there is some property z such that y has z and x does not have z.” (Kawachi 12)

This knowledge is not used in the current representation, however. This is because there are cases where this rule can be shown to be false. Suppose that there is an object with five properties and another object with the same five properties plus one more. If these two objects are stated to be unlike, then this rule says that both objects have a property that the other doesn’t have. But all of the properties of the first object also belong to the second object, meaning that the first object has no property that makes the rule true. While it could be argued that the first object has the property of not having the extra property, the point seems moot, as the rule itself doesn’t seem to serve much purpose in Kawachi’s representation except as a model to be used by the next rule (Kawachi 12).
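The counterexample above can be made concrete with a small sketch (illustrative Python, not part of the representation): the stronger rule demands that each object have a property the other lacks, which fails whenever one property set is a subset of the other.

```python
# The rejected rule: unlike objects must EACH have an exclusive property.
def each_has_exclusive_property(props_x, props_y):
    return bool(props_x - props_y) and bool(props_y - props_x)

# The counterexample from the text: five shared properties plus one extra.
five = {"p1", "p2", "p3", "p4", "p5"}
five_plus_one = five | {"p6"}
```

Here the first object has no property the second lacks, so the rule fails even though the two objects could legitimately be called unlike.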

Presumably, we now know that Fred is taciturn and does not have the property of being noisy, while his brothers are noisy (only one property will be used to keep the discussion simple). The next step is to infer that “taciturn” and “noisy” are unlike each other. This is done through Kawachi’s next rule. The semantic network for this can be found in Figure (3).



  1. “For all w and z, if there is some object x such that x has w and does not have z, and there is some object y such that y has z and does not have w, and if x and y are unlike each other, then w and z are unlike each other.” (Kawachi 12)

With the exception of some unnecessary “min-max-arg” relationships, Kawachi’s representation required no changes (Kawachi 12).
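The lifting rule can be sketched as follows in plain Python. Note one simplification: absence from a property set here stands in for the explicit negated assertions (“x does not have z”) that SNePS would require as direct input.

```python
# Sketch of the rule lifting "unlike" from objects to their properties.
def properties_unlike(w, z, props_x, props_y, objects_unlike):
    """w and z are unlike if some x has w but not z, some y has z but
    not w, and x and y are themselves unlike."""
    return (objects_unlike
            and w in props_x and z not in props_x
            and z in props_y and w not in props_y)
```

Applied to the passage, “taciturn” and “noisy” come out unlike because Fred has the former but not the latter, his brothers the reverse, and the two are asserted unlike.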

Now that it can be inferred that “taciturn” is unlike “noisy,” it is necessary to define what it means for two properties to be unlike each other. This is done through two final pieces of background knowledge. The first, created by Kawachi, is as follows. This rule is displayed in Figure (4).


  1. “For all v1 and v2, if v1 and v2 are in the relation “unlike,” they are not equivalent to each other and are members of or subclasses of the same class.” (Kawachi 16)

The idea that two properties that are unlike are not equivalent to each other is the definition of what it is for two things to be unlike. The second part is an assumption that we make, that “taciturn” is somehow related to “noisy.” We should assume that the two words used in the “unlike” relationship must be comparable to each other in some way; otherwise, they would not have been mentioned together. For example, we wouldn’t expect to come across a sentence like the following.




  1. “Unlike his brothers, who were noisy, outgoing, and very talkative, Fred was quite tall.” (Kawachi 16)

Likewise, two objects being compared in the “unlike” relationship should have something in common. As Kawachi points out, replacing “Fred” with “the building” also makes little sense. The representation that Kawachi uses covers both of these cases, and was only changed to include “member-class” and “subclass-superclass” case frames instead of “ISA” relationships (Kawachi 15-6). The last piece of background knowledge needed about “unlike” is below and is shown in Figure (5).



  1. For all r, s, and x, if r is unlike s and r is a property of x, then s is not a property of x.

This means that two properties that are unlike each other cannot both be possessed by the same object. In this case, Fred cannot be both taciturn and noisy, because Fred is taciturn and taciturn is unlike noisy. This rule was developed to deal with some of the issues that Kawachi raised at the end of his paper. There are cases where the two properties said to be unlike are not necessarily opposites of each other. For example, in the following case, “happy” and “ecstatic,” while having the same general meaning, are varying degrees of the same concept.




  1. “Unlike his brother, who was happy, Fred was ecstatic.” (Kawachi 18)

While the sentence is grammatically correct, the current knowledge would lead SNePS to believe that “happy” is unlike “ecstatic.” Even if this isn’t the exact interpretation desired, the above rules can at least clarify the situation. We can infer that happy and ecstatic are not equivalent to each other (which is true, they just aren’t opposites), that they are related to each other (members of the class “mood”), and that they both can’t occur at the same time in a person (a person can’t have two moods at once). This can also be useful in cases where the properties being described have no distinct opposites, as in the case of color (Kawachi 18-9).
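The exclusion rule, that two unlike properties cannot both hold of the same object, can be sketched in plain Python (an illustration of the intended inference, not the SNePS implementation).

```python
# Sketch of the exclusion rule: every property unlike a property the
# object already has is ruled out for that object.
def excluded_properties(obj_props, unlike_pairs):
    """Return the properties excluded for an object, given its known
    properties and a set of symmetric (r, s) 'unlike' pairs."""
    out = set()
    for r, s in unlike_pairs:
        if r in obj_props:
            out.add(s)
        if s in obj_props:
            out.add(r)
    return out
```

Given that Fred is taciturn and taciturn is unlike noisy, the sketch excludes “noisy” for Fred, exactly the inference the rule is meant to license.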

The next step was to take all of these semantic networks and input them into SNePS. This representation and its testing were not complete at the time this paper was written. The code for this and a sample run are included at the end of this paper. The immediate next step in this process is to determine the effectiveness of using Skolem representation. While SNePS does accept the current representation, the addition of rule (4) does not cause SNePS to infer any new information. This could be because of an inaccurate representation of the rule. Another possible cause is the presence of existentially quantified variables in the antecedent. In this case, even though the properties “taciturn” and “noisy” meet all of the requirements for the rule to infer that they are unlike, the rule might be too weak for SNePS to act on it. A stronger version of the rule, where the objects x and y are no longer Skolem variables and are linked to the rule by “forall,” has been written, but has not been added successfully, as it causes contradictions to be inferred. This rule has been commented out, but can be seen in both the code and the sample run. As this new rule is still tentative, it won’t be discussed in great detail in this paper.

Adjective Algorithm

Currently, work is being done on algorithms that can traverse the semantic networks developed in SNePS and acquire information that provides useful definitions for certain words. At this time, progress is being made on algorithms for nouns and for verbs. A future goal for the CVA project is to develop a similar algorithm for defining adjectives. The following section discusses what information would be necessary for such an algorithm to derive and the general representations needed for this. It also raises a number of issues for future consideration. An algorithm for adverbs could also be built off of this model, since adverbs serve as descriptions for verbs in the same way that adjectives describe nouns.

Probably the most useful information that the algorithm could provide would be a list of synonyms for the adjective in question. If the assumption is made that all words other than the adjective in question are known to the reader, then this list would provide a direct definition for the adjective. The final version of the algorithm should list all adjectives that are either equivalent to or synonymous with the word being defined. Precedence should be given first to equivalencies, represented through the “equiv-equiv” case frame defined in Karen Ehrlich’s noun algorithm. If no equivalencies exist, it should list words with similar, but not equal, meanings, represented by the “synonym-synonym” case frame, discussed in the online dictionary of CVA SNePS case frames (Rapaport, Ehrlich, and Broklawski). If there is no direct knowledge about synonyms, it might be desirable to list the adjectives that can co-exist with the undefined adjective as possible synonyms. For example, consider the following passage.

  1. “I believe there never existed in his station a more respectable-looking man. He was taciturn, soft-footed, very quiet in his manner, deferential, observant, always at hand when wanted, and never near when not wanted; but his great claim to consideration was his respectability.” (Dickens)

It is never directly stated that the word “taciturn” is synonymous to any of the other adjectives that are used, and without any background knowledge, the algorithm wouldn’t find any synonyms. It could, however, list “respectable-looking,” “soft-footed,” “very quiet,” “deferential,” “observant,” and so on, as possible synonyms. The value of this functionality in the final version of the algorithm is debatable. With the right passage, it might give the reader a feeling of what the true definition is. On the other hand, it might be misleading; being “taciturn” has nothing to do with being “respectable-looking,” for example.
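The proposed lookup order (equivalences first, then synonyms, then co-occurring adjectives as weak candidates) might be sketched like this; the dictionary structures are invented stand-ins for the “equiv-equiv” and “synonym-synonym” case frames, not actual algorithm code.

```python
# Sketch of the proposed synonym-precedence step of an adjective algorithm.
def define_by_synonym(adj, equivalents, synonyms, cooccurring):
    """Report equivalences first, then synonyms, then (weakly) the
    adjectives merely co-occurring with adj in the passage."""
    if equivalents.get(adj):
        return ("equivalent", equivalents[adj])
    if synonyms.get(adj):
        return ("synonym", synonyms[adj])
    return ("possible synonym", cooccurring.get(adj, set()))
```

For the Dickens passage, with no equivalences or synonyms known, the sketch falls through to the co-occurring adjectives, labeled only as possible synonyms, mirroring the caveat that some of them (e.g. “respectable-looking”) may be misleading.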

A list of antonyms of the adjective in question could be as useful as the list of synonyms. If the assumption were made that a person who does not possess a property always possesses its opposite, then the antonyms could provide as strong a definition as the synonyms. Even without that assumption, they would still narrow down the possible definition. If the writer of the algorithm deems it necessary, an “antonym-antonym” case frame can be created for this purpose, with a definition similar to the pre-existing “synonym-synonym” case frame. However, this might not be necessary, based on suggestions made in Kawachi’s paper. Logically, the knowledge that two adjectives are not equivalent to each other does not mean that they are opposites: “hot” and “short” are not equivalent, but as they are unrelated, they are also not opposites. But for SNePS to draw a conclusion about the relationship between two words, it needs direct input about those words. Kawachi suggests assuming that two words being compared have some relevance to each other; otherwise the comparison wouldn’t be made (Kawachi 15-6). Under that stipulation, it might be enough to represent antonyms simply by representing that they are not equivalent or not synonymous, since that information would not have been input unless the words had definitions relevant for comparison. Unlike synonyms, a list of possible antonyms consisting of the adjectives not used with the adjective in question would not be useful, since it could potentially list nothing but synonyms. For example, if Fred is described as “taciturn,” and his brother is described as having a number of properties similar to being taciturn, the fact that these properties are not listed as common to Fred would mark them as potential antonyms, which would be very misleading to the user.

The next piece of information that would be useful is the general class that the adjective belongs to. The word “hot” would belong to the class of adjectives describing “temperature,” for example, and the word “quick” would belong to the class “speed.” In the absence of useful synonyms or antonyms, this information would at least provide the reader with the topic the adjective refers to. Membership in a class could also provide a list of possible synonyms derived from other members of the same class. This membership could be expressed by the pre-existing “member-class” relationship. However, this raises the question of what happens when an adjective belongs to two or more classes. The adjective “heated” can be used both in reference to the temperature of an object, as in “the muffins were heated,” and to the tone of something, as in “the heated debate.” This isn’t a serious problem within the scope of the adjective algorithm, since both definitions would be included when the algorithm was used, but it could cause confusion in the noun algorithm. References to the properties of objects have always assumed that a property can belong to only a single class, probably because most of the work on CVA has dealt with nouns, where the existence of these properties is what matters to the definition of a noun, not the definitions of the properties themselves. However, to clear up possible ambiguities, it would be useful to parenthetically include the known class of the property being listed by the noun algorithm. To do this, one would have to alter the representation of properties in semantic networks. Currently, an object and a property possessed by that object are linked by an “object-property” relationship. If one wanted to include the class membership of the property, the “property” arc of the previous relationship would point to the molecular node connecting the property and its class via the “member-class” case frame. The paths would also have to be adjusted so that the “object-property” case frame extended the “property” arc to the property in question. An example of this is illustrated in Figure (6).
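The proposed extension, in which the “property” arc points at a property already paired with its class, might look like this in a plain-Python sketch; the structures are illustrative, not SNePS path syntax.

```python
# Sketch of the extended representation: each property of an object is
# stored together with its class, so the noun algorithm can report the
# class parenthetically and disambiguate words like "heated".
object_property = {
    # object -> set of (property, class-of-property) pairs
    "muffins": {("heated", "temperature")},
    "debate":  {("heated", "tone")},
}

def properties_with_class(obj):
    """Render an object's properties with their classes in parentheses."""
    return {f"{p} ({c})" for p, c in object_property.get(obj, set())}
```

The two senses of “heated” now remain distinguishable, since each occurrence carries its own class annotation.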

The next two items that could be useful in defining an adjective are a list of the objects that are known to have that adjective refer to them and a list of the objects that can’t have that adjective refer to them. In the case where the reader knows the definitions of all words other than the adjective in question, knowledge of an object that possesses the property would at least narrow down the definition of the adjective. If there are multiple objects that the adjective refers to, the reader, or possibly even the algorithm, might be able to detect properties common to all of the objects that would narrow down the definition even further. The same logic would apply to the list of objects that cannot have the adjective refer to them, except that the result would be an antonym for the adjective. In the semantic network, this information would be represented either through the usage and possible negation of the “object-property” case frame or by the extended “object-property” relationship described above. If guidelines are needed for the information returned by the algorithm, they should follow those set forth in the streamlined version of Ehrlich’s noun algorithm: the most specific class that can possess the property should be reported back with highest priority, and if no class memberships can be found, then the specific objects that have the property should be reported (Rapaport, Broklawski, and Napieralski). The problem with these guidelines is that they are only useful if all of the objects in the class share many similarities. For example, if we know that Fred’s ball is “spherical,” and that Fred’s ball is a baseball, having the algorithm return the class membership “baseball” is useful in deriving the definition of “spherical,” since all baseballs possess the same properties. On the other hand, if we know that Fred is “taciturn” and that Fred is human, the algorithm would return “human” as the class membership. This would tell us very little about what “taciturn” means, since humans vary in many different ways. In this case, it might be more useful to simply return that “Fred” is taciturn, depending on how much is known about Fred. Ultimately, it should be the decision of the implementer of the algorithm whether or not the above guidelines would be useful for defining the class of the adjective.

The final items that the algorithm should return are a list of the actions that an object can perform while possessing the adjective in question, and a list of the actions that can’t be performed. There are some cases where the definition has to be derived from actions that are limited by the presence of a certain property, as in the passage below.



  1. "He was, in short, the least communicative of men. He talked very little, and seemed all the more mysterious for his taciturn manner." (Verne)

Even though the knowledge that the man is “least communicative” could be considered a sufficient definition, the fact that he also doesn’t talk much adds more depth to the definition. These lists can be derived from “object-act-agent” relationships that are linked to an object in the “object-property” relationship with the adjective being defined. There are also some cases where a possible action is connected to an object by the “object-property” case frame. For example, with “a bird flies,” the bird is the object and “flies” is the property. The “object-act-agent” case frame isn’t used because the bird isn’t known to be flying at this point in time; it could just be resting on the ground. However, at this point there is no way to differentiate between verbs being stored as properties and adjectives being stored as properties. Because of this, the verbs in the “object-property” relationship would be listed among the possible synonyms of the adjective.
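Gathering the two action lists might be sketched as follows in plain Python; the tuple sets stand in for “object-act-agent” assertions and their negations, and the names are illustrative only.

```python
# Sketch of collecting the actions compatible and incompatible with an
# adjective: gather the acts of agents that possess the property, and the
# acts explicitly negated for them.
def actions_for_adjective(adj, has_property, performs, does_not_perform):
    """Return (can, cannot): actions performed / negated for the
    objects known to possess the adjective adj."""
    agents = {o for o, p in has_property if p == adj}
    can = {a for o, a in performs if o in agents}
    cannot = {a for o, a in does_not_perform if o in agents}
    return can, cannot
```

For the Verne passage, with the man asserted taciturn and asserted not to talk (much), the sketch lists “talk” among the actions incompatible with “taciturn.”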


Other Passages

Even though an adjective algorithm has not been created yet, it will eventually be necessary to represent other passages. In addition to the current passage, it would be a good idea to represent a passage where the definition of the adjective “taciturn” is derived from the properties that the taciturn person does possess, rather than those that he doesn’t. Below are the passages that seemed the most promising for future work, a brief discussion of the background knowledge that will be necessary for some of these, and the issues that are raised. Eventually, passages using adjectives other than “taciturn” will need to be represented as well.



  1. "He was, in short, the least communicative of men. He talked very little, and seemed all the more mysterious for his taciturn manner." (Verne)

Out of all of the passages listed here, this is the shortest and would probably be the easiest to represent. Most of the descriptions used are related to the definition of “taciturn,” even though none of the other adjectives are synonymous with “taciturn,” unless “least communicative” is counted as a single property. The first piece of background knowledge that would be necessary is that if an object x possesses a manner that is described by an adjective y, then adjective y also describes the object x. In this case, because this man has a taciturn manner, he himself is taciturn. This enables the properties of the man to be considered as possible synonyms for “taciturn.” If the adjective algorithm is written and provides listings of the actions that can and can’t be associated with “taciturn,” then it is sufficient to represent that if an object x performs very little of action y, then object x does not perform action y, at least in the present scope. This would result in “talk” being listed as a verb that is not associated with the adjective “taciturn.” Otherwise, the necessary knowledge for expressing this would be that if an object x performs very little of an action y, then object x will not possess any of the properties that might result from action y. In English: because the man talked very little, he couldn’t be considered talkative, noisy, loud, and so on. This isn’t too useful with just the passage by itself, but combined with another rule describing the properties that all people who talk possess, it would be helpful in the definition. The last key bit of background knowledge is that, out of all objects of class x, if an object x1 of class x is “least of” a property y, then object x1 does not possess property y, in the scope of the passage. Since this man is the least communicative of men, he therefore can’t be considered communicative. A possible problem in representing this knowledge, or even in representing that the man is the least communicative of men, is that the “mod-head” case frame used to connect modifiers such as “least,” “very,” and “quite” to adjectives isn’t supported in the algorithms currently in development. As stated earlier, this is probably because the focus has been on representing nouns and verbs, where this case frame isn’t as necessary. However, background knowledge about certain modifiers and how they relate their property to the object possessing the property can be useful in the definition of adjectives, particularly in cases like this one, where the modifier indicates that the property is non-existent. This would require editing the noun and verb algorithms to accommodate the “mod-head” relationship. If such a change isn’t deemed necessary, it is still possible to translate these relations into roughly equivalent nodes for use in the “object-property” relationship.
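The negating effect of modifiers like “least” and “very little” might be sketched like this in plain Python; the triple structure is an invented stand-in for the unsupported “mod-head” case frame.

```python
# Sketch of the "least of" background rule: within a passage, an object
# that is least-y of its class (or does very little of y) does not
# possess property y.
NEGATING_MODIFIERS = {"least", "very little"}

def negated_by_modifier(mod_head_facts):
    """From (object, modifier, property) triples, return the
    (object, property) pairs the passage implicitly negates."""
    return {(o, p) for o, m, p in mod_head_facts
            if m in NEGATING_MODIFIERS}
```

For the Verne passage, the fact that the man is the least communicative of men yields the negated pair (man, communicative), while an intensifier like “quite” negates nothing.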



  1. "I believe there never existed in his station a more respectable-looking man. He was taciturn, soft-footed, very quiet in his manner, deferential, observant, always at hand when wanted, and never near when not wanted; but his great claim to consideration was his respectability." (Dickens)

This passage contains multiple synonyms for the word “taciturn,” but also contains a fair amount of misleading information. To establish what the possible synonyms are, we need a rule saying that if an object x has a property y, then any other property z of object x can be either equivalent to or synonymous with property y. Note that this rule isn’t very strong; it is possible that any, all, or none of the other properties of the object would be synonymous with the adjective being defined. From this, we would end up with a list of properties, some of which are relevant while others aren’t. Depending on how the coder chooses to represent the information about the man being “very quiet in his manner,” the background knowledge about an object’s manner from the previous passage might be useful. This is another instance where the “mod-head” case frame would be used, to represent “very quiet,” even though it is not as critical in this case, since all we need to know is that the man is quiet. As it stands, if nothing is known about the adjective “taciturn,” it would be difficult to establish which of the many possible synonyms are actual synonyms. This passage would be a good test for the functionality of the part of the algorithm dealing with possible synonyms.



  1. "The patient, a neurotic, respectable business man thirty-three years of age, a good husband and father, on his return from a business journey of some weeks' duration is found to have become depressed and taciturn, and as the days pass his melancholy deepens. At first he would not speak, but soon when he wished to speak could not, making vain attempts at articulation." (Fraser)

This passage stands out from the others because it was taken from a psychological journal, meaning that it fits in with one of the final goals of the CVA project: to use the strategies developed during the project to assist students in understanding difficult vocabulary in scientific texts. While the remainder of the article does not seem to be completely scientific in content, this description of the symptoms matches those that might be found in other medical journals (Rapaport and Kibby 2). The background knowledge for this passage must associate the fact that the man would not speak with the fact that he is taciturn. If properties like “neurotic,” “respectable,” and “good” are represented as having been true in the past, then it might be possible to come up with a rule ensuring that those properties aren’t listed as possible synonyms of “taciturn,” which seems to be a property that didn’t exist in the past. However, inferring that those properties no longer exist would be an inaccurate representation. It would also be useful to represent what it means to be in a state of melancholy. One possible way of doing this would be to develop two new case frames, “object-state” and “state-property.” The “object-state” relationship would be between an object and the state that the object is in. Here, the businessman is in a state of melancholy. The “state-property” case frame would be used to represent all of the properties that are associated with that state. Being depressed is one of the properties associated with being in a state of melancholy, for example. A rule would then be needed to state that if object x has a state y, and state y has a property z, then object x also has property z. With enough information about what properties are associated with melancholy, a number of possible synonyms for taciturn could be found. An example of the “object-state” and “state-property” case frames is shown in Figure (7).
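The proposed “object-state” and “state-property” case frames and the accompanying inheritance rule might be sketched as follows; the frame names follow the text, but the Python structures and the exact property set for melancholy are illustrative assumptions.

```python
# Sketch of the proposed state case frames: an object in a state
# inherits the properties associated with that state.
object_state = {"businessman": "melancholy"}
state_property = {"melancholy": {"depressed"}}

def inherited_properties(obj):
    """Properties an object acquires from the state it is in
    (the rule: x has state y and y has property z => x has z)."""
    return state_property.get(object_state.get(obj), set())
```

Here the businessman, being in a state of melancholy, inherits the property of being depressed; with richer knowledge about melancholy, the same mechanism would surface further candidate synonyms for “taciturn.”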


Even though there was not enough time to consider all of these passages in as much detail as the ones above, here are the remaining passages that seem promising to represent.


  1. "In fact, the Indians that I have had an opportunity of seeing in real life are quite different from those described in poetry. They are by no means the stoics that they are represented, taciturn, unbending, without a tear or a smile. Taciturn they are, it is true, when in company with white men, whose goodwill they distrust, and whose language they do not understand; but the white man is equally taciturn under like circumstances. When the Indians are among themselves, however, there cannot be greater gossips." (Irving)




  1. "In Xiu Xiu: The Sent Down Girl, a first feature film from actress Joan Chen (The Last Emperor, Heaven and Earth), a tailor's young daughter growing up in the last days of Mao's Cultural Revolution is abruptly [suddenly] "sent down" to the countryside in 1975 to learn horse-herding, with the promise that when she returns she will lead her own all-girl horse cavalry unit. Little does she know that the Revolution is on its last legs and the unit has long since been disbanded [broken up]. Billeted [housed] in a ragged tent with a taciturn Tibetan herder, the naïve [innocent] Xiu Xiu pines [longs] for home, indifferent [not showing care] both to her lovely natural surroundings and to the quiet integrity of her host […]." (Taylor)



  1. "Newsweek's story [concerning the debate about drilling for oil in Alaska's National Wildlife refuge] has the same thrust, but a different approach. It opens with a taciturn Alaskan pilot, "a former rodeo rider and crop duster," who flies the magazine's reporter to his destination. "Nobody would mistake Dirk Nickisch for a tree hugger," writes Jeffrey Bartholet. "But as he takes off and flies over the northern mountains of Alaska into one of the last unspoiled wilderness areas of America, he explains (if you ask him) why he doesn't want multinational oil companies to explore and drill for oil in any part of the refuge." (Powers)





  1. "Mr. Higgins's acidic novel [Bomber's Law] opens with two plainclothes policemen sitting in a car waiting for a suspect to appear. They are a grouchy veteran and the younger colleague to whom he is turning over the investigation, and they hate each other for reasons of origin, education, connections, temperament, and previous association. Most of these reasons emerge in their garrulous [wordy], raspy conversation. No Higgins character has ever been taciturn." (Adams)




  1. "Inside the farmhouse, the family greetings were casual and restrained. His parents and his brothers and in-laws did not seem overly impressed by the prospect that the eldest son would soon occupy one of the most powerful positions of government. […] As sometimes happens in those families, however, the energy and ambition seemed to have been concentrated disproportionately [unevenly] in one child, David, perhaps at the expense of others. His mother, Carol, a big-boned woman with metallic blond hair, was the one who made David work for A's in school. In political debate, David Stockman was capable of dazzling opponents with words; his brothers seemed shy and taciturn in his presence." (Greider)

Future Work

The most important step that needs to be taken is the correct implementation of one of these passages in SNePS. The passage discussed at the beginning is the most likely candidate, as its semantic networks have already been mapped, but this is at the discretion of the implementer. After at least one passage has been represented properly, it will be possible to work on an adjective algorithm based on the guidelines discussed earlier. Testing this algorithm will require at least one SNePS representation, which is why having one is crucial. After this, other passages with adjectives that need defining should be represented. These can either focus on the word “taciturn,” to see what kind of knowledge base can be built from multiple passages, or focus on other words, to test the range of the algorithm. The latter would mimic the work currently being done with nouns and verbs, so it seems the better of the two options.
Works Cited

Adams, P.L. “Brief Reviews: Bomber’s Law by George V. Higgins.” The Atlantic Monthly. December 1993.

Dickens, Charles. “David Copperfield.” About Classic Literature Guide. 2002. About, Inc. 5 May 2002 .

Fraser, Donald. “Journal of Abnormal Psychology: A Case of Possession.” The Modern English Collection. Ed. Charles Keller. University of Virginia. 5 May 2002 .

Greider, W. The Atlantic Monthly. December 1981.

Irving, Washington. “A Tour on the Prairies.” The Modern English Collection. University of Virginia. 5 May 2002 .

Kawachi, Kazuhiro. “Vocabulary Acquisition from the Context Containing unlike.” 2001.

Powers, W. “The Arctic persuasion.” The Atlantic Monthly. 15 August 2001.

Rapaport, William J., Karen Ehrlich, and Marc K. Broklawski. “A Dictionary of CVA SNePS Case Frames.” 21 February 2002. 6 May 2002 .

Rapaport, William J., Marc K. Broklawski, and Scott T. Napieralski. “Streamlined version of the Ehrlich algorithm in English.” 5 May 2002 .

Rapaport, William J., and Michael W. Kibby. “Contextual Vocabulary Acquisition: Development of a Computational Theory and Educational Curriculum.” 2000.

Taylor, E. “Film: A Lost Generation.” The Atlantic Monthly. May 1999.



Verne, Jules. “Around the World in 80 Days.” About Classic Literature Guide. 2002. About, Inc. 5 May 2002 .







