
ALICE

Chua, Hans Gabriel


Cu, Geraldine Elaine
Ibarrientos, Chester Paul
Paguiligan, Moira Denise
Adviser: Ms. Ethel Ong
Children of the digital generation need a
media-rich learning environment simply to
hold their attention (Prensky, 2001)

Storytelling is a fun activity for
children as it can be done with
peers. Moreover, it serves as a
bridge to hone their literacy skills
(Cassell, Ananny, et al., 2000)
GENERAL OBJECTIVE
To develop a story writing system that integrates a virtual agent to collaborate with children by suggesting story text, or by giving prompts when the children are unable to continue the story they are writing.

SPECIFIC OBJECTIVES
See Section 4.2.2
To allow users to choose between structured and free-form writing
To allow users to input story text
To use story understanding techniques to analyze the input story text and generate an ASR
  Input Sentence: John is a doctor and is handsome.
To use story generation techniques to generate a story segment, when the peer is acting as a collaborator
To generate prompts to help the user continue his/her story, when the peer is acting as a facilitator
To allow the user to save his/her story, whether complete or incomplete, into his/her library
To allow the user to read or remove saved stories from his/her library
SYSTEM ARCHITECTURE
1.
UNDERSTANDING STORY TEXT
SCOPE AND LIMITATIONS
Input...
1. Must be in English natural language
2. Must be a simple sentence in active/passive voice
3. Must have no spelling and grammatical errors
4. First introduced character must not be a pronoun
5. Can be a paragraph or a sentence
6. Events must be in sequential order
STORY UNDERSTANDING
TEXT COREFERENCING
Determine the nouns referred to by the pronouns
The story is coreferenced from the start
This allows the system to resolve the pronouns in the new input
The text is processed twice so that "They" is resolved to only one noun
TEXT COREFERENCING
Two Processes
Code Listing 5-8 Text Coreferencing of Alice

Input:
James and Steve are students. They went to school. They saw Steve's friend by the classroom.

Output after First Process:
James and Steve are students . James and Steve went to school . James and Steve saw Steve 's friend by the classroom .

Output after Second Process:

Preprocessing:124 - 1 1 : 1 5
// 1st word in Sentence 1 (James and Steve) = 5th word in Sentence 1 (students)
Preprocessing:124 - 2 1 : 1 1
// 1st word in Sentence 2 (James and Steve) = 1st word in Sentence 1 (James and Steve)
Preprocessing:124 - 3 1 : 1 1
// 1st word in Sentence 3 (James and Steve) = 1st word in Sentence 1 (James and Steve)
...
Preprocessing:124 - 3 10 : 3 10
// 10th word in Sentence 3 (classroom) = 10th word in Sentence 3 (classroom)
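
A minimal sketch of how the coreferencing step could be driven from Stanford CoreNLP; the annotator set and the chain-printing format are assumptions, since the thesis's own wrapper code is not shown on this slide.

import edu.stanford.nlp.dcoref.CorefChain;
import edu.stanford.nlp.dcoref.CorefCoreAnnotations;
import edu.stanford.nlp.pipeline.Annotation;
import edu.stanford.nlp.pipeline.StanfordCoreNLP;
import java.util.Properties;

public class CorefSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        // dcoref is the coreference annotator in the CoreNLP versions of that period;
        // newer releases use "coref" instead
        props.setProperty("annotators", "tokenize, ssplit, pos, lemma, ner, parse, dcoref");
        StanfordCoreNLP pipeline = new StanfordCoreNLP(props);

        Annotation doc = new Annotation(
            "James and Steve are students. They went to school. They saw Steve's friend by the classroom.");
        pipeline.annotate(doc);

        // Each chain links the pronoun mentions back to one representative noun phrase,
        // which is what lets "They" be rewritten as "James and Steve".
        for (CorefChain chain : doc.get(CorefCoreAnnotations.CorefChainAnnotation.class).values()) {
            System.out.println(chain.getRepresentativeMention() + " <- " + chain.getMentionsInTextualOrder());
        }
    }
}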
DEPENDENCY PARSING
Relations between the words in the sentence
Relations recognized by the system:
  compound, nmod:agent, nmod:to
  nsubj (nominal subject), xcomp (open clausal complement), nmod:in (nominal modifier: in)
  auxpass (passive auxiliary), amod (adjectival modifier), nmod:on (nominal modifier: on)
  iobj (indirect object), advmod (adverbial modifier), nmod:poss (nominal modifier: possession)
  dobj (direct object), nmod:at (nominal modifier: at), neg (negation relation)
  nmod:for (nominal modifier: for), nmod:near (nominal modifier: near), conj:and (conjunct relation: and)
SEMANTIC ROLE LABELLING
Detect common nouns in the story
Issues with existing SRL tools
  Dependent on syntactic relations
  o Location must be an object of the preposition
(see page 92)
SEMANTIC ROLE LABELLING
Detect common nouns in the story
Own SRL
Check in the knowledge base whether the word falls under any of the categories:
o Person
o Place
o Object
o else...Unknown

(see page 92)
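
A minimal sketch of this category check, assuming the isA assertions have already been loaded into a map; the real system queries its ConceptNet-based knowledge base rather than the hard-coded entries used here.

import java.util.*;

public class NounCategorizer {
    private final Map<String, String> isA = new HashMap<>();

    public NounCategorizer() {
        // Assumed contents; the thesis reads these from the filtered ConceptNet/specialized KB.
        isA.put("doctor", "Person");
        isA.put("park", "Place");
        isA.put("ball", "Object");
    }

    // Return the story-understanding category, or "Unknown" when no isA assertion is found.
    public String categorize(String noun) {
        return isA.getOrDefault(noun.toLowerCase(), "Unknown");
    }

    public static void main(String[] args) {
        NounCategorizer c = new NounCategorizer();
        System.out.println(c.categorize("doctor")); // Person
        System.out.println(c.categorize("xyzzy"));  // Unknown
    }
}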


NAMED ENTITY RECOGNIZER
Detect proper nouns in the story
Used CoreNLP's NER
Categorizes whether a noun is a person, place, object or unknown
Limitations/Issues
o Names must be written correctly (start with capital letters)
o Specific categorization
(see page 92)
PATTERN MATCHING
Extract information using linguistic patterns
Target Information
Doer
Location
Indirect object
Direct object
Negation
Possession
Description
References
(see page 94 to 97 for linguistic patterns used)
PATTERN MATCHING
Extract information using linguistic patterns
Relations used (needed information, plus additional information/grammar purposes):
  Compound, Passive auxiliary
  Nominal subject, Indirect object
  Direct object
  Open/closed clausal complement
  Adjectival modifier
  Adverbial modifier
  Nominal modifiers
  Nominal modifier: possession
  Negation modifier
  Conjunct: and
(an illustrative sketch of this relation-to-slot mapping follows below)
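
The sketch below shows, under assumptions, how the dependency relations above can be obtained from CoreNLP and mapped to a few of the target slots (doer, direct object, indirect object, negation); the full set of linguistic patterns on pages 94 to 97 is much richer than this.

import edu.stanford.nlp.ling.CoreAnnotations;
import edu.stanford.nlp.pipeline.Annotation;
import edu.stanford.nlp.pipeline.StanfordCoreNLP;
import edu.stanford.nlp.semgraph.*;
import edu.stanford.nlp.util.CoreMap;
import java.util.Properties;

public class PatternSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.setProperty("annotators", "tokenize, ssplit, pos, lemma, depparse");
        StanfordCoreNLP pipeline = new StanfordCoreNLP(props);

        Annotation doc = new Annotation("Felisa bought her sister a cake.");
        pipeline.annotate(doc);

        for (CoreMap sentence : doc.get(CoreAnnotations.SentencesAnnotation.class)) {
            SemanticGraph deps = sentence.get(
                SemanticGraphCoreAnnotations.CollapsedCCProcessedDependenciesAnnotation.class);
            for (SemanticGraphEdge edge : deps.edgeIterable()) {
                String rel = edge.getRelation().toString();
                String dependent = edge.getDependent().word();
                // Very small subset of the patterns: map relations to story slots.
                if (rel.equals("nsubj"))      System.out.println("doer: " + dependent);
                else if (rel.equals("dobj"))  System.out.println("direct object: " + dependent);
                else if (rel.equals("iobj"))  System.out.println("indirect object: " + dependent);
                else if (rel.equals("neg"))   System.out.println("negation on: " + edge.getGovernor().word());
            }
        }
    }
}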
CONCEPT PARSER
Extracts the main idea of the sentence
Also uses linguistic patterns and templates (see pages 99 to 101)

The extracted CONCEPT is looked up in SENTICNET to obtain its POLARITY.
ELEMENT DETECTION
Rules/restrictions to detect each element
Character
Preferably named
if isA person (for common nouns)

Location
Preferably named
if isA place (for common nouns)
ELEMENT DETECTION
Rules/restrictions to detect each element
Events
Must have an action

Conflict
Clause with least negative polarity that
reached the threshold (-0.2)
ELEMENT DETECTION
Rules/restrictions to detect each element
Resolution
Clause with positive emotion found at
the end
Must be experienced by at least 1
doer in the conflict
ELEMENT DETECTION
Rules/restrictions to detect each element
Resolution
If not a negated clause, related to the
conflict within three hops
(See figure 5-8 in page 105)

Example
Conflict : injury
Resolution : live
ELEMENT DETECTION
Rules/restrictions to detect each element
Resolution
If the conflict's main verb signifies
negative emotion, all concepts from the
conflict must be found in the resolution
with the exception of the negation verb

Example
Conflict : hate, going to school
Resolution : love, going to school
ELEMENT DETECTION
Rules/restrictions to detect each element
Resolution
If negated by the word "not", all concepts must be similar to the conflict without "not"

Example
Conflict : not pretty
Resolution : pretty
EXAMPLE
John went to China. John got an injury in China. So, John went to
the hospital. Doctor Chow gave John a medicine. John applied the
medicine. John healed and lived.
Start
Characters: John, Doctor Chow
Location: China
Conflict: John got an injury
Middle
Series of actions: went to hospital
gave medicine
applied medicine
End
Resolution: John lived
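
A sketch of the conflict rule only, not the thesis code: clause polarities are assumed to have already been looked up in SenticNet, and the rule simply keeps the most negative clause that reaches the -0.2 threshold.

import java.util.*;

class Clause {
    String text;
    double polarity; // assumed to come from the SenticNet concept lookup
    Clause(String text, double polarity) { this.text = text; this.polarity = polarity; }
}

public class ConflictDetector {
    static final double THRESHOLD = -0.2;

    static Clause detectConflict(List<Clause> clauses) {
        Clause conflict = null;
        for (Clause c : clauses) {
            // keep the most negative clause among those that reached the threshold
            if (c.polarity <= THRESHOLD && (conflict == null || c.polarity < conflict.polarity)) {
                conflict = c;
            }
        }
        return conflict;
    }

    public static void main(String[] args) {
        List<Clause> clauses = Arrays.asList(
            new Clause("John went to China", 0.10),
            new Clause("John got an injury", -0.45),
            new Clause("John went to the hospital", -0.05));
        System.out.println(detectConflict(clauses).text); // John got an injury
    }
}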
2.
KNOWLEDGE BASE
2.1.
SENTICNET
SENTICNET
Only utilized the concepts and polarity values
Code Listing 5-2 Example Concept in SenticNet

<rdf:Description rdf:about="http://sentic.net/api/en/concept/a_lot_of_flowers">
<rdf:type rdf:resource="http://sentic.net/api/concept"/>
<text>a lot of flowers</text>
<semantics rdf:resource="http://sentic.net/api/en/concept/flower"/>
<semantics rdf:resource="http://sentic.net/api/en/concept/love"/>
<semantics rdf:resource="http://sentic.net/api/en/concept/show_love"/>
<semantics rdf:resource="http://sentic.net/api/en/concept/rose"/>
<semantics rdf:resource="http://sentic.net/api/en/concept/give_flower"/>
<pleasantness>0.027</pleasantness>
<attention>0.093</attention>
<sensitivity>0.025</sensitivity>
<aptitude>0.071</aptitude>
<polarity>0.055</polarity>
</rdf:Description>
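
A minimal sketch of reading just these two fields out of the SenticNet RDF dump with the standard DOM parser; the file name is an assumption, and the other affective dimensions are skipped because only the concepts and polarity values are used.

import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.*;

public class SenticNetLoader {
    public static void main(String[] args) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse("senticnet.rdf.xml"); // assumed path to the SenticNet dump

        NodeList concepts = doc.getElementsByTagName("rdf:Description");
        for (int i = 0; i < concepts.getLength(); i++) {
            Element concept = (Element) concepts.item(i);
            NodeList text = concept.getElementsByTagName("text");
            NodeList polarity = concept.getElementsByTagName("polarity");
            // keep only the concept text and its polarity value
            if (text.getLength() > 0 && polarity.getLength() > 0) {
                System.out.println(text.item(0).getTextContent() + " -> "
                        + Double.parseDouble(polarity.item(0).getTextContent()));
            }
        }
    }
}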
2.2.
CONCEPTNET/
COMMON SENSE
KNOWLEDGE BASE
FILTERING
Code Listing 5-1 Summary of steps done for filtering ConceptNet

1. Only extract English-to-English assertions


2. Filter using semantic relations: IsA, HasA, UsedFor,
AtLocation, HasProperty, Causes, HasPrerequisite,
HasSubevent, HasFirstSubevent, HasLastSubevent
3. Threshold of assertion > 1.0
4. Manual filtering
5. Filter via POS: Noun, verb, adjective, preposition, particle,
adverb, personal pronoun, possessive pronoun, proper noun

(see page 79)
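
A sketch of steps 1 to 3 (language, relation and weight filters) over an assumed tab-separated dump; the column layout and file names are assumptions about the ConceptNet 5 distribution, and steps 4 and 5 (manual and POS filtering) are not shown.

import java.io.*;
import java.util.*;

public class ConceptNetFilter {
    static final Set<String> RELATIONS = new HashSet<>(Arrays.asList(
        "/r/IsA", "/r/HasA", "/r/UsedFor", "/r/AtLocation", "/r/HasProperty",
        "/r/Causes", "/r/HasPrerequisite", "/r/HasSubevent",
        "/r/HasFirstSubevent", "/r/HasLastSubevent"));

    public static void main(String[] args) throws IOException {
        try (BufferedReader in = new BufferedReader(new FileReader("conceptnet_assertions.csv"));
             PrintWriter out = new PrintWriter("conceptnet_filtered.csv")) {
            String line;
            while ((line = in.readLine()) != null) {
                String[] cols = line.split("\t");
                if (cols.length < 5) continue;           // skip malformed rows in this sketch
                String relation = cols[1], start = cols[2], end = cols[3];
                double weight = Double.parseDouble(cols[4]); // assumed weight column
                boolean english = start.startsWith("/c/en/") && end.startsWith("/c/en/");
                if (english && RELATIONS.contains(relation) && weight > 1.0) {
                    out.println(line);
                }
            }
        }
    }
}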


FILTERING
Issues encountered
  Concepts are unfamiliar to children
  No certainty of knowledge extracted
  Generates irrelevant and incomprehensible story segments
RELATIONS ADAPTED
Story Understanding Module
isA

Concept A relation Concept B

doctor isA person

park isA place

ball isA object


RELATIONS ADAPTED
Text Generation Module
atLocation, isA, hasA, hasProperty, usedFor, causes
(for TG explanation, see page 115)

Concept A | Relation | Concept B | Description
accountant | atLocation | firm | A is a typical location for B
friction | causes | heat | A is typically the source of B
air | hasA | oxygen | B belongs to A
alcohol | hasProperty | addictive | A can be described as B
abbey | isA | church | A is a subtype or a specific instance of B
yoga | usedFor | meditation | A is used for B; the purpose of A is B
2.3.
SPECIALIZED
KNOWLEDGE BASE
DERIVATION
1. Extract nouns, adjectives and verbs from 30 children's stories

2. Search algorithm extracts substring matches


Example word: happy
Extracted: happy, happy go lucky, happy kid

3. Filter using a threshold of 1.3 weight

4. Maximum of 3 words

5. All relations
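
A sketch of step 2 (substring matching) over an assumed in-memory list of knowledge-base concepts; the thesis searches its filtered ConceptNet instead of this hard-coded list.

import java.util.*;

public class SubstringSearch {
    // Keep concepts that contain the story word and are at most 3 words long.
    static List<String> matches(String word, List<String> concepts) {
        List<String> found = new ArrayList<>();
        for (String concept : concepts) {
            if (concept.contains(word) && concept.split(" ").length <= 3) {
                found.add(concept);
            }
        }
        return found;
    }

    public static void main(String[] args) {
        List<String> concepts = Arrays.asList("happy", "happy go lucky", "happy kid", "sad");
        System.out.println(matches("happy", concepts)); // [happy, happy go lucky, happy kid]
    }
}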
ADDITIONAL KNOWLEDGE
Person attributes

Instances of persons

Location

Nationality

Size

Texture

Talent
2.4.
ABSTRACT STORY REPRESENTATION

SENTENCE
John healed and lived.
= John healed + John lived

Each part (John healed, John lived) is a CLAUSE, classified as an EVENT or a DESCRIPTION.
Each clause contains CONCEPT(S), whose POLARITY is obtained from SENTICNET.
CLAUSE
Code Listing 5-4 Properties of Clauses
protected List<String> concepts;
protected Map<String, Noun> doers;
protected boolean isNegated;

DESCRIPTION
Gabriel is a student and is tall.
= Gabriel is a student + Gabriel is tall
REFERENCE AND ATTRIBUTE RELATIONS
hasA, isA, notIsA, capableOf, hasProperty, notHasA, atLocation, notHasProperty
OTHER RELATIONS
Sentence | Assertion | Description
Shelie is not ugly. | Shelie notHasProperty ugly | Concept B is a property not possessed by Concept A
Jill is not in Manila. | Jill notAtLocation Manila | Concept A is not in location Concept B
Harry does not have a dog. | Harry notHasA dog | Concept B is a noun not possessed by Concept A
Paul is not an employee. | Paul notIsA employee | Concept A is not an instance of Concept B
Leon has a car. | Car isOwnedBy Leon | Concept A is possessed by Concept B; partnered with hasA
CLAUSE
EVENTS
Felisa bought her sister a cake.
(indirect object: her sister; direct object: a cake)
Felisa capableOf buy
SPECIAL CLAUSE
Also stores the polarity

Conflict

Resolution
NOUNS
Code Listing 5-3 Properties of Nouns
protected String id;
protected boolean isCommon;
protected Map<String, List<String>> attributes;
protected Map<String, Map<String, Noun>> references;

4 CATEGORIES
Character
Object
Location
else...Unknown
NOUNS
Marie is a student, is beautiful and loves to read.

Properties | Values (noun: Marie) | Values (noun: student)
id | Marie | student
isCommon | false | true
attributes | Marie hasProperty beautiful; Marie capableOf love |
references | Marie isA student |
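
For illustration only, the values above hand-built in the shape of Code Listing 5-3; the real values come from the story understanding pipeline, and the references map actually nests Noun objects rather than the plain strings used in this sketch.

import java.util.*;

public class NounExample {
    public static void main(String[] args) {
        // Noun for "Marie" from the example sentence
        Map<String, List<String>> attributes = new HashMap<>();
        attributes.put("hasProperty", Arrays.asList("beautiful"));
        attributes.put("capableOf", Arrays.asList("love"));

        Map<String, List<String>> references = new HashMap<>();
        references.put("isA", Arrays.asList("student")); // simplified; the thesis stores a Noun here

        System.out.println("id=Marie, isCommon=false");
        System.out.println("attributes=" + attributes);
        System.out.println("references=" + references);
    }
}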
STORY ELEMENTS
Description of each element
Start
Characters
a person that can be named or not
Location
a place that can be named or not
Conflict
a special clause with least negative
polarity that reached the threshold
STORY ELEMENTS
Description of each element
Middle
Series of Actions
At least 2 event clauses
End
Resolution
A Special clause that is the opposite or
related to the conflict
...ideally, the solution to the conflict...
3.
GENERATING STORY
TEXTS AND PROMPTS
TWO ROLES OF ALICE

Collaborator Facilitator
COLLABORATOR

There is a temple in China.


John has a car.
Doctor Chow is a human.
John has a belly button.
FACILITATOR

Tell me more about John.


Where did John go to the
hospital.
What is the nationality of John?
Write more about why Doctor
Chow gave the medicine.
CONTENT PLANNER
ALGORITHM
Code Listing 5-13 Algorithm of the Content Planner

Initial threshold = 3
Check latest sentence if frequency of all nouns mentioned is less than
the threshold
If frequency of a noun is less than the threshold, add noun as a
candidate to prompt
Else if all the frequency of the nouns in current sentence reached
the threshold, check the frequency of the nouns found in the
previous sentences.
If frequency of a noun is less than the threshold, add noun
as a candidate to prompt
Else if all the frequency of the nouns reached the threshold,
threshold + 2
Randomly pick a noun from the candidates to prompt
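
A direct sketch of Code Listing 5-13 under assumed helper data: the noun frequencies per sentence are taken to be precomputed by the understanding module, and the real planner would retry after relaxing the threshold.

import java.util.*;

public class ContentPlanner {
    private int threshold = 3;
    private final Random random = new Random();

    public String pickNounToPrompt(List<Map<String, Integer>> nounFrequenciesPerSentence) {
        Map<String, Integer> latest =
            nounFrequenciesPerSentence.get(nounFrequenciesPerSentence.size() - 1);
        List<String> candidates = new ArrayList<>();

        // 1) nouns in the latest sentence that are still below the threshold
        for (Map.Entry<String, Integer> e : latest.entrySet()) {
            if (e.getValue() < threshold) candidates.add(e.getKey());
        }
        // 2) otherwise, look at nouns from the previous sentences
        if (candidates.isEmpty()) {
            for (int i = nounFrequenciesPerSentence.size() - 2; i >= 0; i--) {
                for (Map.Entry<String, Integer> e : nounFrequenciesPerSentence.get(i).entrySet()) {
                    if (e.getValue() < threshold) candidates.add(e.getKey());
                }
            }
        }
        // 3) if every noun already reached the threshold, relax it and try again later
        if (candidates.isEmpty()) {
            threshold += 2;
            return null;
        }
        return candidates.get(random.nextInt(candidates.size()));
    }
}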
COLLABORATOR

CATEGORIES
Character
Object
Location
CONCEPTNET
used as supplementary knowledge
TEMPLATE-BASED APPROACH
COLLABORATOR
Relation | Part of Story | Template | Text Generated
atLocation | Start | There is <start> in <end>. | There is chair in room.
atLocation | Start | <start> is in <end>. | Jaime is in concert.
isA | | <start> is a <end>. | John is a human.
isA | | <start> is <end>. | John is rock climbing.
hasA | | <start> has <end>. | Mary has belly button.
hasProperty | | <start> can be <end>. | Pie can be good with ice cream.
causes/usedFor | Middle/End | <start> produces <end>. | Injury produces pain.
causes/usedFor | Middle/End | <start> <end> <object>. | Jill wrote using pencil.
causes/usedFor | Middle/End | <start> became <end>. | John suffered.
(a small sketch of this template filling follows below)

FACILITATOR
General Prompt
Imperative sentence
Start of the story
Description of the nouns
Specific Prompt
Interrogative sentence
List of topics
Start of the story
Description of the nouns
Special Prompt
Interrogative/Imperative
Details of events
Middle/End of story
TEMPLATE-BASED APPROACH
GENERAL PROMPTS

Code Listing 5-12 Templates of General Prompts


private String[] nounStartDirective = { "Describe <noun>.",
"Tell me more about <noun>.",
"Write more about <noun>.",
"I want to hear more about <noun>.",
"Tell something more about <noun>." };
ADDITIONAL RULES
GENERAL PROMPTS
Not identical to previous prompt
Noun used by prompt has a frequency less than the
threshold
Specific Prompt is given to the user if he/she failed to
correctly respond to the general prompt
(only applicable to person and object)
Another general prompt is given if all topics in the
specific prompts were answered
Use the same noun if user failed to give a valid
response, especially if the noun is an unknown/location
TEMPLATE-BASED APPROACH
SPECIFIC PROMPTS

Code Listing 5-18 Topics for Specific Prompts


private String[] objectTopics = {"color", "shape", "size", "texture"};
private String[] personTopics = {"attitude", "nationality", "talent"};

What is the <topic> of <noun>?


Noun to be asked | Text Generated
Person | What is the nationality of Jenny?
Object | What is the color of Nadine's bag?
ADDITIONAL RULES
SPECIFIC PROMPTS
Not identical to previous prompt
A prompt with a new topic with previous noun is
given if user asked for another prompt
If the question has been previously asked and has
been answered correctly, it must not be asked again
If the answer is incorrect, Alice will provide an
example as a guide for the same prompt.
TEMPLATE-BASED APPROACH
SPECIAL PROMPTS
Code Listing 5-14 Templates of Special Prompts

private String[] causeEffectDirectivePhraseFormat = {


"Tell me why <phrase>.",
"Explain why <phrase>.",
"Write more about why <phrase>.",
"Write the reason why <phrase>."};

private String[] causeEffectAlternative = {


"Tell me more about what happened.",
"Tell me what happened next.",
"Then what happened?"};

See page 109 to 112 for algorithm in generating <phrase>


SIMPLENLG APPROACH
SPECIAL PROMPTS
Why or How Question
The default type of question asked
Where Question
Asked if the verb is a location verb, but there
is no location present in the sentence
What Question
Asked if the sentence does not have a direct
object
See Code Listing 5-16 in page 111
for algorithm in generating questions
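
A sketch of the SimpleNLG question generation, not the thesis code: a clause is built from the child's sentence and the interrogative type is switched to produce the Why/How question; the expected realizations are shown in the comments.

import simplenlg.framework.NLGFactory;
import simplenlg.lexicon.Lexicon;
import simplenlg.realiser.english.Realiser;
import simplenlg.phrasespec.SPhraseSpec;
import simplenlg.features.Feature;
import simplenlg.features.InterrogativeType;
import simplenlg.features.Tense;

public class SpecialPromptSketch {
    public static void main(String[] args) {
        Lexicon lexicon = Lexicon.getDefaultLexicon();
        NLGFactory factory = new NLGFactory(lexicon);
        Realiser realiser = new Realiser(lexicon);

        // clause built from the child's sentence "Rj won the game"
        SPhraseSpec clause = factory.createClause("Rj", "win", "the game");
        clause.setFeature(Feature.TENSE, Tense.PAST);

        clause.setFeature(Feature.INTERROGATIVE_TYPE, InterrogativeType.HOW);
        System.out.println(realiser.realiseSentence(clause)); // expected: How did Rj win the game?

        clause.setFeature(Feature.INTERROGATIVE_TYPE, InterrogativeType.WHY);
        System.out.println(realiser.realiseSentence(clause)); // expected: Why did Rj win the game?
    }
}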
ADDITIONAL RULES
SPECIAL PROMPTS
Not identical to previous prompt
New prompt is given if user asked for another
prompt
ANSWER CHECKER
General Prompt
Answer must contain an expression that can be
coreferenced to the noun being asked
Answer must be complete, especially if the verb
used exists in CoreNLP's copula
ANSWER CHECKER
Specific Prompt
Answer must be an instance of the topic of the
prompt, with reference to the knowledge base
Answer must have an expression that can be
coreferenced to the noun being asked in the
prompt
Answer must have the dependency relation:
nsubj(<subject>,<instance of topic>)
ANSWER CHECKER
Special Prompt
Answer must contain an expression that can be
coreferenced to the noun being asked
Number of doers in answer == number of doers
in prompts
Doers of the sentence must be the subject
DEMO TIME
4.
RESULTS
TYPES OF TESTS/EVALUATION
Black-box Testing
Story Understanding Module
Information Extraction
Element Detection
Text Generation Module
User Acceptance Testing
Children Evaluation
Round 1
Round 2
Expert Evaluation
Round 1
Round 2
BLACK-BOX TESTING
Information Extraction
  Description Clauses: 24 cases
  Event Clauses: 35 cases
  Combination: 15 cases
  Other Structures: 12 cases

Result | Description | Events | Combination | Other structures
Correct | 21 | 17 | 11 | 9
Incorrect | 3 | 18 | 4 | 3

See page 118 for analysis
BLACK-BOX TESTING
Element Detection

Character/Location

Conflict

Two Events

Resolution

See appendix for test cases and their results


BLACK-BOX TESTING
Text Generation Module

Restrictions

Answer Checker
See appendix for test cases and their results
CHILDREN EVALUATION
User Acceptance Testing
Round 1
17 kids
Grade 3 to 4
Public and private schools
Nine 8 year olds
Eight 9 year olds
Round 2
7 kids
Grade 3 to 4
Private schools
Only 5 were able to answer the evaluation form
CHILDREN EVALUATION
User Acceptance Testing
Methodology
The students were briefed about the system (its purpose
and features) and were shown a demo on how to use the
system prior to testing
The students were made aware that an evaluation form is
to be answered after using the system
The students were observed while they were interacting with Alice
The students' interaction with Alice was logged in a file
The students were also interviewed while they were answering the evaluation form
CHILDREN EVALUATION
Item 1. Were you able to finish your story using Alice? (based on self-evaluation)

Round | Yes | No
Round 1 | 9 | 8
Round 2 | 5 | 2
CHILDREN EVALUATION
Item 1. Were you able to finish your story using Alice? (based on the system's detection of a complete story)

Round | Finished | Not Finished
Round 1 | 0 | 17
Round 2 | 2 | 5

Round 1: strict rule to detect the location; relies on NER for character/location; conflict did not reach the -0.2 threshold
Round 2: most reached the end; lenient location rule; most used "hates" and "does not like" for the conflict; uses SRL for common nouns
CHILDREN EVALUATION
Item 2. Was Alice able to help you when you were writing a story? (a vague question)

Round | Yes | No
Round 1 | 13 | 4
Round 2 | 7 | 0

Round 1: No
  2 already have a story in mind
  2 said that Alice has too many errors - they did not consider it a help
Round 1/Round 2: Yes
  Difficulty in writing a story
  Want to have more options to continue their story
  Amusing prompts/story segments
CHILDREN EVALUATION
Item 3. Did you find Alice's suggestions useful or not?

Round | Yes | No | Did not ask for it | No evaluation
Round 1 | 8 | 6 | 3 | 0
Round 2 | 5 | 0 | 0 | 2

Round 1: No
  Many grammatical errors
  Incomprehensible/irrelevant/too obvious concepts
  No suggestions
Round 2: Observation
  More comprehensible concepts - used the specialized knowledge base
  Still has grammar problems
CHILDREN EVALUATION
Examples why the children did not find the story segments useful
(Round 1) Input:
Rosana has a car. They went to SM. They looked around SM. They shopped for clothes.

Many grammatical errors | Rosana is a rock climbing.
Incomprehensible concept | Rosana is a grammatical category.
Irrelevant concept | Rosana is an individual.
Too obvious concept | Rosana is a human.
CHILDREN EVALUATION
Examples when children got ideas from story segments

Story Text 6-7 Get ideas from Story Segments (Round 2)

Story Segment generated: Read caused an idea
Child wrote: Lexine can read

Story Segment generated: The rj won using play sport.
Child wrote: Rj won the game.

Story Segment generated: Jonah olso has a belly button.
Child wrote: the big fish has a belly button then jonah climb up to go to belly button of the big fish
CHILDREN EVALUATION
Actual appended story segments

Round 1:
  Rosana has a car
  A chicken can be delicious
  A hamburger is a sandwich
  A teacher is an educator
  The prince is royalty
  Alice's dress can be blue
  A day is a time period
Round 2:
  Jenna is honest
  Ava learned using the digital library
  A day is a time period
  Kevin has a car
  Kevin's car can be a fun
  Kevin's car can be expensive

Reasons for appending:
  Cannot think of anything to write
  Story segment stated a fact
  Story segment gave new detail to their story
CHILDREN EVALUATION
Item 4. Did you find Alice's ideas useful or not?

Round | Yes | No | Did not ask for it | No evaluation
Round 1 | 7 | 8 | 2 | 0
Round 2 | 5 | 0 | 0 | 2

Round 1: No
  Repeatedly asks about a character/location
  Prompted the wrong topic
  Delayed prompt
  Broad prompts
Round 1/Round 2: Yes
  Gave them ideas on who/what to talk about
Round 2: Observation
  Cannot correctly answer the specific prompts
CHILDREN EVALUATION
Examples why the children did not find the prompts useful

Repeatedly asks about a noun:
  Prompt Generated: Write more about the family.
  Prompt Generated: Tell something more about the family.
  Prompt Generated: I want to hear more about the family.
  Prompt Generated: Describe the family.
  Prompt Generated: Write more about the family.

Wrong topic:
  Child: once opon a time there was a rabbit who was having a tour around the forest and saw a house and he saw many poitions and he wanted to touch it
  Prompt Generated: Tell something more about the time

Delayed prompt:
  Child wrote: Once a upon a time there is a baby tiger play with his friends and he became big now and he are wild and dangerous. and he killed his family and now he is in the zoo
  Prompt Generated: Write more about the family.

Broad prompt:
  Describe the basketball.
CHILDREN EVALUATION
Example when child encountered a specific prompt

Story Text 6-5 Specific Prompt encountered by a child

[1] Child wrote : Kevin is alive


[2] Prompt Generated: Tell something more about Kevin.
[3] Child Answered : Kevin is color yellow
[4] Prompt Generated: What is the attitude of Kevin?
[5] Child Answered : kevin is kind
[6] Prompt Generated: An example of attitude is inclination.
What is the attitude of Kevin?
[7] Child Answered : Kevin is inclination
[8] Prompt Generated: An example of attitude is hardball. What
is the attitude of Kevin?
* child asked for a story segment
CHILDREN EVALUATION
Examples when children used the prompt

Story Text 6-4 Followed prompts


Child wrote : He like to play basketball.
Prompt Generated: Describe the basketball.
Child wrote : His favorite team is Golden State Wariors and his favorite place
is Ocean park.

Child wrote : The church is in Singapore. And after that he went home.Ralph
ask his father,daddy
I went to church.
Prompt Generated: I want to hear more about boy's father.
Child wrote : Ralph's daddy is Eric.
CHILDREN EVALUATION
Item 5. Are the ideas & suggestions helpful?

Round | Yes | No | Did not ask for it | No evaluation
Round 1 | 9 | 6 | 2 | 0
Round 2 | 5 | 0 | 0 | 2

Round 1: No
  Grammatical errors
  Repetitive
  Unknown/too obvious concepts
  Broad prompts
Round 1/Round 2: Yes
  They were able to continue their story
Round 2: Observation
  Based on the logs, 6/7 acknowledged Alice's help
CHILDREN EVALUATION
Item 6. Is Alice a friend or a teacher?

Round | Friend | Teacher | Neither | No evaluation
Round 1 | 9 | 6 | 2 | 0
Round 2 | 5 | 0 | 0 | 2

Friend
A student in appearance
Helped them in writing by giving ideas/suggestions
Teacher
Guided them in writing their story
Helped them in writing by giving ideas/suggestions
Neither
Several mistakes
Did not use Alice
CHILDREN EVALUATION
Item 7. What other features would you like to see in
Alice?
CHILDREN EVALUATION
Item 8. What can you say about the user interface?
Comments
Easy to navigate
Colors, layout and fonts were fine
o 3 children said that they want it to be in other
colors
The peer's appearance is OK
o One child asked for a male counterpart of Alice

Round 1 Observation
Children thought that there's only one To Do List even when they were briefed
Children were confused on how to terminate the
dialog boxes
EXPERT EVALUATION
User Acceptance Testing
Objective
Evaluate the appropriateness and the effectiveness of
Alice (see page 51)

Methodology
Evaluators were briefed about the criteria and the logs
from the children's interaction with Alice
Evaluators were given two similar evaluation criteria,
one for round 1 and one for round 2 testing
Rate the prompts and story segments from 1.0 to 5.0,
with 1.0 being the lowest and 5.0 as the highest
EXPERT EVALUATION
User Acceptance Testing
Evaluators
Ms. Pacis
Thoroughly evaluated the logs for the round
1 testing
Gave the qualitative rating for the logs of the
round 2 testing

Mr. Gojo-Cruz
Only evaluated the round 2 testing logs
FACILITATOR
Expert Evaluation

Ideal characteristics
Found under the 5.0 of the evaluation criteria

Metrics | Round 1 Testing (Ms. Pacis) | Round 2 Testing (Ms. Pacis) | Round 2 Testing (Mr. Gojo-Cruz) | Average
Vocabulary | 3.0 | 3.5 | 5.0 | 3.83
Information and Grammar | 1.0 | 3.5 | 4.0 | 2.83
Tone | 2.0 | 3.5 | 4.0 | 3.17

Their comments are found on page 132
FACILITATOR
Expert Evaluation
Metrics | Round 1 (Ms. Pacis) | Round 2 (Ms. Pacis) | Round 2 (Mr. Gojo-Cruz) | Average
Vocabulary | 3.0 | 3.5 | 5.0 | 3.83

Reason for scores given in Round 1


Appropriate vocabulary for the target age-group
o They were consulted before about the template
Children might have difficulty in responding to the prompts because they
are too broad
o Observed in the logs as children kept on asking for another prompt
Reasons for scores in Round 2
Appropriate vocabulary for the target age-group
General prompts still exist
The special prompts are specific, in context and are relevant to the story
FACILITATOR
Expert Evaluation
Metrics | Round 1 (Ms. Pacis) | Round 2 (Ms. Pacis) | Round 2 (Mr. Gojo-Cruz) | Average
Information and Grammar | 1.0 | 3.5 | 4.0 | 2.83

Reason for scores given in Round 1


Numerous grammatical, punctuation and spelling errors
Reasons for scores in Round 2
Less spelling, grammatical and punctuation errors were present
o Most grammatical errors were due to the template
o Most of the children's inputs were spelled/punctuated correctly
because of the implementation of the grammar-and-spelling checker
Specific prompts may contain unfamiliar concepts to children
FACILITATOR
Expert Evaluation
Metrics | Round 1 (Ms. Pacis) | Round 2 (Ms. Pacis) | Round 2 (Mr. Gojo-Cruz) | Average
Tone | 2.0 | 3.5 | 4.0 | 3.17

Reason for scores given in Round 1


Templates were not comic
Some templates lack pronouns, which made the prompts sound strict
Reasons for scores in Round 2
Improved templates
Asks for specific details that made Alice sound more like a peer
FACILITATOR
Most common errors found in both rounds of testing

Round 1: 119 unique prompts
Round 2: 22 unique prompts

Issues | Round 1 | Round 2
Too general | 100% | 68%
Prompted the wrong topic | 17% | 32%
Wrong grammar | 23% | 9%

NOTE: A prompt may fall under more than one category
See page 6-133 for analysis
FACILITATOR
Examples of prompt errors

Too general (reason: Prompt Generator)
  Prompt Generated: Write more about the family.
  Prompt Generated: Tell something more about the family.
  Prompt Generated: I want to hear more about the family.
  Prompt Generated: Describe the family.
  Prompt Generated: Write more about the family.

Wrong topic (reason: Content Planner)
  Child: she saw the children while Nanny was asking for help she introduced herself and her name was Lucy the big person and the children told her to open the window
  Prompt Generated: Describe the window.

Wrong grammar (reason: User input / Surface Realizer)
  Prompt Generated: Describe the aherd.
  Prompt Generated: Tell something more about the ingridien.
FACILITATOR
Categories of wrong grammar

Child error
  Child wrote: and he put it out of the house and the rabbit did not nkow he was out of the house and that morning the wisard went out of the house to get ingridiens for a potion
  Prompt Generated: Describe the wisard.

Template issues
  Erroneous prompt: Tell me more what happened
  Corrected prompt: Tell me more! What happened? / Tell me what happened.

Missing pronouns or articles
  Prompt Generated: Tell something more about boy's day
  Correct prompt: Tell me something more about the boy's day

Inappropriate pronouns or articles
  Prompt Generated: I want to hear more about the mother
  Correct prompt: I want to hear more about her mother
FACILITATOR
Examples of good prompts in round 2

Child: Rj plays basketball. Rj won the game.


Prompt Generated: How did Rj win the game?
Child wrote: because me and my team have teamwork.

Prompt Generated: What is the attitude of Kevin?


Child wrote: kevin is kind.
COLLABORATOR
Expert Evaluation

Ideal characteristics
Found under the 5.0 of the evaluation criteria

Metrics | Initial Testing (Ms. Pacis) | Final Testing (Ms. Pacis) | Final Testing (Mr. Gojo-Cruz) | Average
Vocabulary | 3.0 | 3.5 | 4.0 | 3.5
Grammar | 1.0 | 3.5 | 4.0 | 2.83
Tone | 3.0 | 3.5 | 4.0 | 3.5
Relevance | 3.0 | 3.5 | 4.0 | 3.5

Their comments are found on page 6-136
COLLABORATOR
Expert Evaluation
Metrics | Initial Testing (Ms. Pacis) | Final Testing (Ms. Pacis) | Final Testing (Mr. Gojo-Cruz) | Average
Vocabulary | 3.0 | 3.5 | 4.0 | 3.5

Reason for scores given in Round 1


Concepts are unfamiliar, but may still be comprehensible to children
o Example: A fairy is a spiritual being

Reasons for scores in Round 2


More concepts were familiar and child- friendly
COLLABORATOR
Expert Evaluation
Metrics | Initial Testing (Ms. Pacis) | Final Testing (Ms. Pacis) | Final Testing (Mr. Gojo-Cruz) | Average
Grammar | 1.0 | 3.5 | 4.0 | 2.83

Reason for scores given in Round 1


Grammar errors
o Inappropriate articles, example: Ralph's friend can be a fun
o Not so appropriate verbs
o System error, example: A windowwindow can be clear

Reasons for scores in Round 2


Minor grammatical errors
Few spelling errors
COLLABORATOR
Expert Evaluation
Metrics | Initial Testing (Ms. Pacis) | Final Testing (Ms. Pacis) | Final Testing (Mr. Gojo-Cruz) | Average
Tone | 3.0 | 3.5 | 4.0 | 3.5

Reason for scores given in Round 1


Plain and blunt templates

Reasons for scores in Round 2


More templates used, but are still monotonous
COLLABORATOR
Expert Evaluation
Metrics | Initial Testing (Ms. Pacis) | Final Testing (Ms. Pacis) | Final Testing (Mr. Gojo-Cruz) | Average
Relevance | 3.0 | 3.5 | 4.0 | 3.5

Reason for scores given in Round 1


The story segments used characters/locations/objects that are present in
the story
The story segments generated do not have a firm connection to the story
being written and are repetitive

Reasons for scores in Round 2


The generated story segments are more varied, more instances of story
segments were connected to the story
The story segments used characters/locations/objects that are present in
the story
COLLABORATOR
Most common errors found in both rounds of testing

Round 1: 87 unique story segments
Round 2: 30 unique story segments

Issues | Round 1 | Round 2
Concepts are not age appropriate | 40% | 17%
Wrong grammar | 33% | 33%
Unconnected to story | 24% | 17%
Too obvious/nonsense concepts | 33% | 17%

See page 138 for analysis

COLLABORATOR
Examples of story segments whose concepts are not age-appropriate

Round 1:
  A student is an enrollee.
  A lot is a divide.
  A beggar is an impoverish.
  A prince is an aristocrat.
  A day is a work time.
  A mother is an abbess.
  Japan is a lacquerware.
  A recess is a concave shape.
Round 2:
  Jenna is a populate
  Jenna is an omnivore
  Jonah olso's beard is a rim
  A day is a time period
  A day is an era
COLLABORATOR
Examples of story segments whose concepts are too obvious

Round 1:
  A girl is a young female human.
  A girl can be a female.
  Janella is a human.
  Japan is an island.
  A jar is a containerful.
Round 2:
  Jonah olso's beard is a hair
  Jonah olso's water is not air
  A water can be wet
COLLABORATOR
Examples of story segments that make no sense

Round 1:
  Jenna is a populate.
  Gabriela is a grammatical category.
Round 2:
  A water can be blue
  Jonah olso's water is Life.
COLLABORATOR
Examples of story segments that are unconnected to the story

Child wrote : A rabbit jump on Alice shoes and she follow the rabbit and
the rabbit went to a hole and Alice fall to a house and drink a potion and
make her small and her dress was bigger than her

Story Segment Generated: Alice has a belly button.


COLLABORATOR
Examples of story segments with more appropriate verbs

Story Segment Generated | Better Story Segment
Janella is a rock climbing | Janella went rock climbing
Japan is a lacquerware | Japan has a lacquerware
Janella is a family | Janella has family
COLLABORATOR
Categories of wrong grammar

Template issues
  Erroneous story segment: Lexine learned something using watch tv.
  Corrected story segment: Lexine learned something by watching tv

Missing pronouns or articles
  Story Segment Generated: Ava learned using digital library.
  Correct story segment: Ava learned using a digital library.

Inappropriate pronouns or articles
  Story Segment Generated: Kevin's car can be a fun
  Correct story segment: Kevin's car can be fun
5.
CONCLUSION
SPECIFIC OBJECTIVES
See Section 4.2.2
To allow users to choose between structured and free-form writing
To allow users to input story text
To use story understanding techniques to analyze the input story text and generate an ASR
To use story generation techniques to generate a story segment, when the peer is acting as a collaborator
To generate prompts to help the user continue his/her story, when the peer is acting as a facilitator
To allow the user to save his/her story, whether complete or incomplete, into his/her library
To allow the user to read or remove saved stories from his/her library
ISSUES
1. Additional concepts in Knowledge Base
2. Introduce the concept of time
3. Additional linguistic patterns
4. Accurate polarity
5. Culturally appropriate content
6. Better surface realizer
7. Utilize WordNet
8. User Interface
FUTURE STUDIES
1. Add a role of a critic
2. Empathic peer
3. Character Dialogue
4. Dialogue History
5. Profiling
6. Learning from input
7. Utilize a story-driven approach
8. Story Themes
9. Manipulate the child's mood
THE END!
Any questions?
References
Blair, K., Schwartz, D. L., Biswas, G., & Leelawong, K. (2007). Pedagogical agents for learning by teaching: Teachable agents.
Educational Technology & Society, 47, 56-61.

Cambria, E., Olsher, D., & Rajagopal, D. (2014). SenticNet 3: A common and common-sense knowledge base for cognition-driven
sentiment analysis. In C. E. Brodley & P. Stone (Eds.), Proceedings of the twenty-eighth AAAI conference on artificial
intelligence, july 27 -31, 2014, quebec city, quebec, canada. (pp. 1515-1521). AAAI Press. Retrieved from
http://www.aaai.org/ocs/index.php/AAAI/AAAI14/paper/view/8479

Cassell, J. (2001). Towards a model of technology and literacy development: Story listening systems (Tech. Rep. No. ML-GNL-01-1).
Massachusetts Institute of Technology.

Cassell, J., Ryokai, K., Druin, A., Klaff, J., Laurel, B., & Pinkard, N. (2000). Storyspaces: Interfaces for children's voices. In Chi
'00 extended abstracts on human factors in computing systems (pp. 243-244). New York, NY, USA: ACM. Retrieved from
http://doi.acm.org/10.1145/633292.633434 doi: 10.1145/633292.633434

Chuu, C., & Kim, H. (2012). Storyfighter: A common sense storytelling game.

Conceptnet 5. (n.d.). Retrieved from http://conceptnet5.media.mit.edu/


References
Consignado, D. G., Ong, S. J., & Ong, E. C. J. (2014). Designing interactive stories to teach positive social behavior to children
with autism. In Proceedings of the 5th international workshop on empathic computing. Australia.

Framenet. (n.d.). Retrieved from https://framenet.icsi.berkeley.edu/fndrupal/about

Gottlieb, D. & Juster, J. (n.d.). Generating a dynamic gaming environment using omcs.

Hourcade, J. P., Bederson, B. B., Druin, A., & Taxén, G. (2002). KidPad: A collaborative storytelling tool for children. In
CHI '02 extended abstracts on human factors in computing systems. ACM.

Jerz, D., & Kennedy, K. (n.d.). Short story tips: 10 ways to improve your creative writing. Retrieved from
http://jerz.setonhill.edu/writing/creative1/shortstory/

Liu, H., & Singh, P. (2002). Makebelieve: Using commonsense knowledge to generate stories. In R. Dechter & R. S. Sutton (Eds.),
Aaai/iaai (p. 957-958). AAAI Press / The MIT Press. Retrieved from http://dblp.uni-
trier.de/db/conf/aaai/aaai2002.html#LiuS02

Liu, H., & Singh, P. (2004). Conceptnet - a practical commonsense reasoning tool-kit. BT Technology Journal, 22 (4), 211-226.
Retrieved from http://dx.doi.org/ 10.1023/B:BTTJ.0000047600.45421.6d doi: 10.1023/B:BTTJ.0000047600.45421.6d
References
Manning, C. D., Surdeanu, M., Bauer, J., Finkel, J. R., Bethard, S., & McClosky, D. (2014). The stanford corenlp natural language
processing toolkit. In Proceedings of the 52nd annual meeting of the association for computational linguistics, ACL 2014,
june 22-27, 2014, baltimore, md, usa, system demonstrations (pp. 55-60). Retrieved from
http://aclweb.org/anthology/P/P14/P14-5010.pdf

McIntyre, N., & Lapata, M. (2009). Learning to tell tales: A data-driven approach to story generation. In Proceedings of the joint
conference of the 47th annual meeting of the acl and the 4th international joint conference on natural language processing of
the afnlp: Volume 1 - volume 1 (pp. 217-225). Stroudsburg, PA, USA: Association for Computational Linguistics. Retrieved
from http://dl.acm.org/citation.cfm?id=1687878.1687910

Ong, E. (2014). Picture Books: Challenges and opportunities in automatic story generation. In S. Ona & Z. C. Pablo (Eds.),
Information and Communications Technology in the philippines: Contemporary Perspective (pp. 1-17). Manila, Philippines:
De La Salle University Publishing House.

Ong, E., Bienes, K., Jimenez, N., Miranda, E., & Pascual, G. (2014). A system for collecting commonsense knowledge from
children. DLSU Research Congress 2014, De La Salle University, Manila.

Rambo, R. (2015). English Composition 1. Retrieved April 04, 2016, from http://www2.ivcc.edu/rambo/eng1001/sentences.htm
References
Robertson, J., & Good, J. (2003). Ghostwriter: A narrative virtual environment for children. In Proceedings of the 2003 conference
on interaction design and children (pp. 85-91). New York, NY, USA: ACM. Retrieved from
http://doi.acm.org/10.1145/953536.953549 doi: 10.1145/953536.953549

Roxas, R. J., Huang, D. L., Peralta, B. E., & Ong, E. (2014). Generating text descriptions in the alex interactive storytelling system
using a semantic ontology. Philippine Computing Journal, 9 (1), 34-43.

Simple sentences. (n.d.). Retrieved April 04, 2016, from


https://www.dlsweb.rmit.edu.au/lsu/content/4_WritingSkills/writing_tuts/sentences_LL/simple.html

Singh, P. (2001). The public acquisition of commonsense knowledge. Retrieved from citeseer.ist.psu.edu/singh02public.html

Xu, Y., Park, H., & Baek, Y. (2011). A new approach toward digital storytelling: An activity focused on writing self-efficacy in a
virtual learning environment. Educational Technology & Society, 14 (4), 181-191. Retrieved from http://dblp.uni-
trier.de/db/journals/ ets/ets14.html#XuPB11

Williams, R., Barry, B., & Singh, P. (2005). Comickit: Acquiring story scripts using common sense feedback. In Proceedings of the
10th international conference on intelligent user interfaces (pp. 302-304). New York, NY, USA: ACM. Retrieved from
http://doi.acm.org/10.1145/1040830.1040907 doi: 10.1145/1040830.1040907
References
Wordnet. (2015). Retrieved from https://wordnet.princeton.edu/
