The Switchboard Dialog Act Corpus

  1. Overview
  2. Getting and using the corpus
    1. Downloads
    2. Python classes (preferred)
      1. Transcript objects
      2. Utterance objects
      3. CorpusReader objects
    3. Working directly with the CSV file (dispreferred but okay)
  3. Annotations
    1. Dialog act annotations
    2. Penn Treebank 3 POS
    3. Penn Treebank 3 Trees
  4. Exercises

Overview

The Switchboard Dialog Act Corpus (SwDA) extends the Switchboard-1 Telephone Speech Corpus, Release 2, with turn/utterance-level dialog-act tags. The tags summarize syntactic, semantic, and pragmatic information about the associated turn. The SwDA project was undertaken at UC Boulder in the late 1990s.

Recommended reading:

Note: Here is updated SwDA code that is Python 2/3 compatible. It is recommended over the code below.

Code and data:

Getting and using the corpus

Downloads

The SwDA transcripts are a free download:

The files are human-readable text files with lines like this:

b          B.22 utt1: Uh-huh. /

sd          A.23 utt1: I work off and on just temporarily and usually find friends to babysit,  /
sd          A.23 utt2: {C but } I don't envy anybody who's in that <laughter> situation to find day care. /

b          B.24 utt1: Yeah. /

It's worth unpacking the archive file and opening up a few of the transcripts to get a feel for what they are like.

The SwDA is not inherently linked to the Penn Treebank 3 parses of Switchboard, and it is far from straightforward to align the two resources (Calhoun et al. 2010, §2.4). In addition, the SwDA is not distributed with the Switchboard's tables of metadata about the conversations and their participants. I'd like us to have easy access to all this information, so I created a version of the corpus that pools all of this information to the best of my ability:

When you unpack swda.zip, you get a directory with the same basic structure as that of swb1_dialogact_annot.tar.gz. The file swda-metadata.csv contains the transcript and caller metadata for this subset of the Switchboard.

The format for all the transcript files is the same. I describe the column values below, in the context of the Python code I wrote for us to work with this corpus.

Python classes (preferred)

The Python classes:

Transcript objects

The code's Transcript objects model the individual files in the corpus. A Transcript object is built from a transcript filename and the corpus metadata file:

    >>> from swda import Transcript
    >>> trans = Transcript('swda/sw00utt/sw_0001_4325.utt.csv', 'swda/swda-metadata.csv')

Transcript objects have the following attributes:

Attribute name Object type Value
ptb_basename str The filename: directory/basename
conversation_no int The numerical conversation Id.
talk_day datetime with methods like month, year, ...
topic_description str short description
length int in seconds
prompt str long description/query/instruction
from_caller_no int The numerical Id of the from (A) caller
from_caller_sex str MALE, FEMALE
from_caller_education int 0, 1, 2, 3, 9
from_caller_birth_year datetime YYYY
from_caller_dialect_area str MIXED, NEW ENGLAND, NORTH MIDLAND, NORTHERN, NYC, SOUTH MIDLAND, SOUTHERN, UNK, WESTERN
to_caller_no int The numerical Id of the to (B) caller
to_caller_sex str MALE, FEMALE
to_caller_education int 0, 1, 2, 3, 9
to_caller_birth_year datetime YYYY
to_caller_dialect_area str MIXED, NEW ENGLAND, NORTH MIDLAND, NORTHERN, NYC, SOUTH MIDLAND, SOUTHERN, UNK, WESTERN
utterances list A list of Utterance objects.
Table TRANSCRIPT
The attributes of Transcript objects, with their associated Python classes and possible values.

The attributes permit easy access to the properties of transcripts. Continuing the above:

    >>> trans.topic_description
    'CHILD CARE'
    >>> trans.prompt
    'FIND OUT WHAT CRITERIA THE OTHER CALLER WOULD USE IN SELECTING CHILD CARE SERVICES FOR A PRESCHOOLER. IS IT EASY OR DIFFICULT TO FIND SUCH CARE?'
    >>> trans.talk_day
    datetime.datetime(1992, 3, 23, 0, 0)
    >>> trans.talk_day.year
    1992
    >>> trans.talk_day.month
    3
    >>> trans.from_caller_sex
    'FEMALE'

The utterances attribute of Transcript objects is the list of Utterance objects for that transcript, in the order in which they appear in the original transcript file.
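
For example, continuing with the trans instance from above, here is a minimal sketch that prints the first few utterances (caller, act_tag, and text are Utterance attributes described in the next section):

    for utt in trans.utterances[:5]:
        print utt.caller, utt.act_tag, utt.text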

Utterance objects

Utterance objects have the following attributes:

Attribute name Object type Value
caller str A, B, @A, @B, @@A, @@B
caller_no int The caller Id.
caller_sex str MALE or FEMALE
caller_education str 0, 1, 2, 3, 9
caller_birth_year int 4-digit year
caller_dialect_area str MIXED, NEW ENGLAND, NORTH MIDLAND, NORTHERN, NYC, SOUTH MIDLAND, SOUTHERN, UNK, WESTERN
transcript_index int line number relative to the whole transcript
utterance_index int Utterance number (can span multiple transcript_index values)
subutterance_index int Utterances can be broken across lines; this gives the internal position.
act_tag str The dialog act tag; see below
text str the text of the utterance
pos str the part-of-speech tagged portion of the utterance
trees list A list of nltk.tree.Tree objects giving the parse(s) of the text; see below for discussion
Table UTTERANCE
The attributes of Utterance objects, with their associated Python classes and possible values.

Assuming you still have your Python interpreter open and the trans instance set as before, you can continue with code like the following:

    >>> utt = trans.utterances[19]
    >>> utt.caller
    'B'
    >>> utt.act_tag
    'sv'
    >>> utt.text
    '[ I guess + --'
    >>> utt.pos
    '[ I/PRP ] guess/VBP --/:'
    >>> len(utt.trees)
    1
    >>> utt.trees[0].pprint()
    '(S (EDITED (RM (-DFL- \\[)) (S (NP-SBJ (PRP I)) (VP-UNF (VBP guess))) (IP (-DFL- \\+))) (NP-SBJ (PRP I)) (VP (VBP guess) (RS (-DFL- \\])) (SBAR (-NONE- 0) (S (NP-SBJ (PRP we)) (VP (MD can) (VP (VB start)))))) (. .))'

Perhaps the most noteworthy attribute is utt.trees. This is always a list of nltk.tree.Tree objects (sometimes an empty list, because only a subset of the Switchboard was parsed). For our utt instance, there is just one tree, and it properly contains the actual utterance content. In this case, the rest of the tree's content occurs two utterances later, because speaker A interrupts:

    >>> trans.utterances[19].text
    '[ I guess + --'
    >>> trans.utterances[20].text
    'Okay. /'
    >>> trans.utterances[21].text
    '-- I guess ] we can start. {F Uh, } /'
    >>> trans.utterances[21].trees[0].pprint()
    '(S (EDITED (RM (-DFL- \\[)) (S (NP-SBJ (PRP I)) (VP-UNF (VBP guess))) (IP (-DFL- \\+))) (NP-SBJ (PRP I)) (VP (VBP guess) (RS (-DFL- \\])) (SBAR (-NONE- 0) (S (NP-SBJ (PRP we)) (VP (MD can) (VP (VB start)))))) (. .))'
    >>> trans.utterances[21].trees[1].pprint()
    '(INTJ (UH Uh) (, ,) (-DFL- E_S))'

Cautionary note: Because the trees often properly contain the utterance, they cannot be used to gather word- or phrase-level statistics unless care is taken to restrict attention to the subtrees, or fragments thereof, that represent the utterance itself. For additional discussion, see the Penn Treebank 3 Trees section below.

CorpusReader objects

The main interface provided by swda.py is the CorpusReader, which allows you to iterate through the entire corpus, gathering information as you go. CorpusReader objects are built from just the root of the directory containing your CSV files. (It assumes that swda-metadata.csv sits at the top level of that root directory.)

    >>> from swda import CorpusReader
    >>> # CorpusReader objects are built from the name of the corpus root:
    >>> corpus = CorpusReader('swda')

The two central methods for CorpusReader objects are iter_transcripts() and iter_utterances().

Here's a function that uses iter_transcripts() to gather information relating education levels and dialect areas:

    #!/usr/bin/env python

    from collections import defaultdict
    from operator import itemgetter
    from swda import CorpusReader

    def swda_education_region():
        """Create a count dictionary relating education and region."""
        d = defaultdict(int)
        corpus = CorpusReader('swda')
        # Iterate through the transcripts; display_progress=True tracks progress:
        for trans in corpus.iter_transcripts(display_progress=True):
            d[(trans.from_caller_education, trans.from_caller_dialect_area)] += 1
            d[(trans.to_caller_education, trans.to_caller_dialect_area)] += 1
        # Turn d into a list of tuples as d.items(), sort it based on the
        # second (index 1) member of those tuples, largest first, and
        # print out the results:
        for key, val in sorted(d.items(), key=itemgetter(1), reverse=True):
            print key, val

The method iter_utterances() is basically an abbreviation of the following nested loop:

    for trans in corpus.iter_transcripts():
        for utt in trans.utterances:
            yield utt

The following code uses iter_utterances() to drill right down to the utterances to count the raw tags:

    #!/usr/bin/env python

    from collections import defaultdict
    from operator import itemgetter
    from swda import CorpusReader

    def tag_counts():
        """Gather and print counts of the tags."""
        d = defaultdict(int)
        corpus = CorpusReader('swda')
        # Loop, counting tags:
        for utt in corpus.iter_utterances(display_progress=True):
            d[utt.act_tag] += 1
        # Print the results sorted by count, largest to smallest:
        for key, val in sorted(d.items(), key=itemgetter(1), reverse=True):
            print key, val

The output is a list that is very much like the one under "Finally, for reference, here are the original 226 tags" at the Coders' Manual page. (I don't know why the counts differ slightly from the ones given there. I tried many variations (adding/removing * or @ from the tags; adding/removing a hard-to-detect nameless file in the distribution that duplicates sw09utt/sw_0904_2767.utt; etc.), but I was never able to reproduce the counts exactly.)

Working directly with the CSV file (dispreferred but okay)

It is possible to work with our SwDA CSV-based distribution using a program like Excel or R. The following code shows how to read in the CSV files and work with them a bit in R:

    filenames = Sys.glob(file.path('swda', '*', '*.csv'))
    # Read the first file, then append the rest:
    swda = read.csv(filenames[1])
    for (i in 2:length(filenames)) { swda = rbind(swda, read.csv(filenames[i])) }
    xtabs(~ act_tag, data=swda)
    act_tag
         "      %      +     aa     ad      b    b^m   ...
        26  15547  17813  10136    666  36180    688   ...

We can also read in the metadata and relate an utterance to it via the conversation_no value:

    metadata = read.csv('swda/swda-metadata.csv')
    utt = swda[2011, ]
    uttMeta = subset(metadata, conversation_no==utt$conversation_no)
    uttMeta$from_caller_birth_year
    1969

In principle, this could be every bit as useful as the Python classes. Indeed, there are advantages to working with data in tabular/database format, as opposed to constantly looping through all the files. However, if you take this route, you'll have to write your own methods for dealing with the special values for trees, tags, dates, and so forth. I think Python is ultimately a better tool for grappling with the diverse information in the SwDA.
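
If you prefer to stay in Python but skip the swda.py classes, the CSV files can also be read directly with the standard csv module. Here is a minimal sketch that recreates the act_tag counts from the R example above; the function name csv_tag_counts is just for illustration, and the act_tag column name and the swda/ directory layout are as shown earlier:

    #!/usr/bin/env python

    import csv
    import glob
    import os
    from collections import defaultdict
    from operator import itemgetter

    def csv_tag_counts():
        """Count act_tag values by reading the transcript CSV files directly."""
        d = defaultdict(int)
        # The transcript files sit one directory below the root (e.g. swda/sw00utt/*.csv),
        # so this glob deliberately skips swda/swda-metadata.csv at the top level:
        for filename in glob.glob(os.path.join('swda', '*', '*.csv')):
            with open(filename) as f:
                for row in csv.DictReader(f):
                    d[row['act_tag']] += 1
        for key, val in sorted(d.items(), key=itemgetter(1), reverse=True):
            print key, val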

Annotations

I now briefly review the special annotations of this subset of the Switchboard: the act tags, the POS annotations, and the parsetrees.

Dialog act annotations

There are over 200 tags in the corpus. The Coders' Manual defines a system for collapsing them down to 44 tags. (They say 42; I am not sure what they do with 'x', and their table has 43 rows, so it might be that 42 is just a minor miscount.)

The Utterance method damsl_act_tag() converts the original tags to this 44-member tag set:

    >>> from swda import Transcript
    >>> trans = Transcript('swda/sw00utt/sw_0001_4325.utt.csv', 'swda/swda-metadata.csv')
    >>> utt = trans.utterances[80]
    >>> utt.act_tag
    'sd^e'
    >>> utt.damsl_act_tag()
    'sd'

The tags are the main addition to the corpus. Here is the table of training-set stats from the Coders' Manual extended with a column giving the total counts for the entire corpus, using damsl_act_tag().

name act_tag example train_count full_count
1 Statement-non-opinion sd Me, I'm in the legal department. 72824 75145
2 Acknowledge (Backchannel) b Uh-huh. 37096 38298
3 Statement-opinion sv I think it's great 25197 26428
4 Agree/Accept aa That's exactly it. 10820 11133
5 Abandoned or Turn-Exit % So, - 10569 15550
6 Appreciation ba I can imagine. 4633 4765
7 Yes-No-Question qy Do you have to have any special training? 4624 4727
8 Non-verbal x [Laughter], [Throat_clearing] 3548 3630
9 Yes answers ny Yes. 2934 3034
10 Conventional-closing fc Well, it's been nice talking to you. 2486 2582
11 Uninterpretable % But, uh, yeah 2158 15550
12 Wh-Question qw Well, how old are you? 1911 1979
13 No answers nn No. 1340 1377
14 Response Acknowledgement bk Oh, okay. 1277 1306
15 Hedge h I don't know if I'm making any sense or not. 1182 1226
16 Declarative Yes-No-Question qy^d So you can afford to get a house? 1174 1219
17 Other fo_o_fw_by_bc Well give me a break, you know. 1074 883
18 Backchannel in question form bh Is that right? 1019 1053
19 Quotation ^q You can't be pregnant and have cats 934 983
20 Summarize/reformulate bf Oh, you mean you switched schools for the kids. 919 952
21 Affirmative non-yes answers na It is. 836 847
22 Action-directive ad Why don't you go first 719 746
23 Collaborative Completion ^2 Who aren't contributing. 699 723
24 Repeat-phrase b^m Oh, fajitas 660 688
25 Open-Question qo How about you? 632 656
26 Rhetorical-Questions qh Who would steal a newspaper? 557 575
27 Hold before answer/agreement ^h I'm drawing a blank. 540 556
28 Reject ar Well, no 338 346
29 Negative non-no answers ng Uh, not a whole lot. 292 302
30 Signal-non-understanding br Excuse me? 288 298
31 Other answers no I don't know 279 286
32 Conventional-opening fp How are you? 220 225
33 Or-Clause qrr or is it more of a company? 207 209
34 Dispreferred answers arp_nd Well, not so much that. 205 207
35 3rd-party-talk t3 My goodness, Diane, get down from there. 115 117
36 Offers, Options, Commits oo_co_cc I'll have to check that out 109 110
37 Self-talk t1 What's the word I'm looking for 102 103
38 Downplayer bd That's all right. 100 103
39 Maybe/Accept-part aap_am Something like that 98 105
40 Tag-Question ^g Right? 93 92
41 Declarative Wh-Question qw^d You are what kind of buff? 80 80
42 Apology fa I'm sorry. 76 79
43 Thanking ft Hey thanks a lot 67 78
Table DAMSL
The DAMSL tags with their training-set counts as reported in the Coders' Manual and the counts for the full corpus as calculated by damsl_act_tag().
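
For reference, here is a minimal sketch of how the full_count column above can be recomputed; it follows the same pattern as tag_counts() earlier, but counts utt.damsl_act_tag() rather than the raw tag (the function name damsl_tag_counts is just for illustration):

    #!/usr/bin/env python

    from collections import defaultdict
    from operator import itemgetter
    from swda import CorpusReader

    def damsl_tag_counts():
        """Count the collapsed DAMSL tags over the full corpus."""
        d = defaultdict(int)
        corpus = CorpusReader('swda')
        for utt in corpus.iter_utterances(display_progress=True):
            d[utt.damsl_act_tag()] += 1
        for key, val in sorted(d.items(), key=itemgetter(1), reverse=True):
            print key, val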

Most of the Coders' Manual is devoted to explaining how to make decisions about the tags. This is extremely valuable information if you decide to study the tags for scientific purposes, because the instructions provide insights into what the tags mean and how the annotators made decisions.

Penn Treebank 3 POS

Utterance objects have methods for accessing the POS-tagged version of the utterance as a plain string, and as a list of (string, tag) tuples. In addition, optional parameters to the methods allow you to regularize the words and tags in various ways:

    >>> from swda import Transcript
    >>> trans = Transcript('swda/sw00utt/sw_0001_4325.utt.csv', 'swda/swda-metadata.csv')
    >>> utt = trans.utterances[53]
    >>> utt.text
    "{C And } it's a small office that she works in -- /"

utt.pos gives you the raw string of the POS version:

    >>> utt.pos
    "And/CC [ it/PRP ] 's/BES [ a/DT small/JJ office/NN ] that/WDT [ she/PRP ] works/VBZ in/RB --/:"

You can use utt.text_words() to break the raw text on whitespace. More interesting is utt.pos_words(), which does the same for the POS-tagged version; that version is often simpler, in that it lacks disfluency markers and information about the nature of the turn.

    >>> utt.pos_words()
    ['And', 'it', "'s", 'a', 'small', 'office', 'that', 'she', 'works', 'in', '--']

The option wn_lemmatize=True runs the WordNet lemmatizer:

    >>> utt.pos_words(wn_lemmatize=True)
    ['And', 'it', "'s", 'a', 'small', 'office', 'that', 'she', 'work', 'in', '--']

pos_lemmas() has the same options as pos_words() but it returns the (string, tag) tuples:

    >>> utt.pos_lemmas(wn_lemmatize=True)
    [('And', 'cc'), ('it', 'prp'), ("'s", 'bes'), ('a', 'dt'), ('small', 'a'), ('office', 'n'), ('that', 'wdt'), ('she', 'prp'), ('work', 'v'), ('in', 'r'), ('--', ':')]

As far as I can tell, the alignment between the raw text and the POS tags is extremely reliable, with differences largely concerning elements that were not tagged (mostly disfluency markers and non-verbal elements).

Penn Treebank 3 Trees

Not all utterances have trees; only a subset of the Switchboard is fully parsed. Here's a quick count of the utterances with parsetrees:

    >>> from swda import CorpusReader
    >>> sum([1 for utt in CorpusReader('swda').iter_utterances() if utt.trees])
    118218

There are 221616 utterances in all, so about 53% have trees.
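
The total of 221616 can be computed in the same way, just dropping the trees condition:

    >>> sum([1 for utt in CorpusReader('swda').iter_utterances()])
    221616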

The relationship between the utterances/POS and the trees is highly fraught. There is no simple mapping from the original release of the corpus, or the POS version, to the trees. For the parsing, some utterances were merged together into single trees, others were split across trees, and the basic numbering was changed, often dramatically. I myself did the text–POS–tree alignments automatically (not by hand!) using a wide range of heuristic matching techniques. There are definitely lingering misalignments. (If you notice any, please send me the transcript and utterance number.)

In the example used just above, the utterance and its POS match the tree, with the non-matching material being just trace markers and disfluency tags:

    >>> [tree.pprint() for tree in utt.trees]
    ["(S (CC And) (NP-SBJ (PRP it)) (VP (BES 's) (NP-PRD (NP (DT a) (JJ small) (NN office)) (SBAR (WHNP-1 (WDT that)) (S (NP-SBJ (PRP she)) (VP (VBZ works) (PP-LOC (RB in) (NP (-NONE- *T*-1)))))))) (-DFL- E_S))"]
    >>> utt.tree_lemmas(wn_lemmatize=True)
    [('And', 'CC'), ('it', 'PRP'), ("'s", 'BES'), ('a', 'DT'), ('small', 'JJ'), ('office', 'NN'), ('that', 'WDT'), ('she', 'PRP'), ('works', 'VBZ'), ('in', 'RB'), ('*T*-1', '-NONE-'), ('E_S', '-DFL-')]

Sometimes the utterance corresponds to a subtree of a given tree. In that case, utt.trees includes the entire tree, and it is important to restrict attention to the utterance's substructure when thinking about (counting elements of) the tree(s):

    >>> trans = Transcript('swda/sw01utt/sw_0116_2406.utt.csv', 'swda/swda-metadata.csv')
    >>> utt = trans.utterances[66]
    >>> utt.text
    'if not more /'
    >>> utt.trees[0].pprint()
    '(S (CC but) (NP-SBJ (NNP Chuck) (NNP Norris)) (, ,) (PP (IN of) (NP (NN course))) (, ,) (VP (MD could) (VP (VB be) (ADJP-PRD (ADVP (RB just) (IN about)) (JJ equal)) (, ,) (FRAG (IN if) (RB not) (ADJP (JJR more))))) (-DFL- E_S))'

Here, one can imagine pulling out (FRAG (IN if) (RB not) (ADJP (JJR more))) to work with it separately from its containing tree. NLTK Tree objects have a subtrees() method that makes this easy:

    >>> from nltk.tree import Tree
    >>> frag = Tree('(FRAG (IN if) (RB not) (ADJP (JJR more)))')
    >>> frag in utt.trees[0].subtrees()
    True

The most challenging situation is where the utterance overlaps two trees, but does not correspond to either of them, or even to identifiable subtrees of them:

    >>> trans = Transcript('swda/sw00utt/sw_0020_4109.utt.csv', 'swda/swda-metadata.csv')
    >>> utt = trans.utterances[15]
    >>> utt.text
    'right? /'
    >>> utt.trees[0].pprint()
    (S (INTJ (UH so)) (NP-SBJ (PRP I)) (ADVP (RB just)) (VP (VBP press) (NP (CD one)) (ADVP (RB then)) (-DFL- E_S) (INTJ (JJ right))) (. ?) (-DFL- E_S))

Here, there is no unique node that dominates right, ?, and the disfluency marker while excluding the rest of the utterance.

Of course, the easiest tree structures to deal with are those that correspond exactly to the utterance itself. The Utterance method tree_is_perfect_match() allows you to pick out just those situations. It does this by heuristically matching the raw-text terminals with the leaves of the tree structure. The following function counts the number of such utterances:

    #!/usr/bin/env python

    from collections import defaultdict
    from swda import CorpusReader

    def count_matches():
        """Determine how many utterances have a single precisely matching tree."""
        d = defaultdict(int)
        corpus = CorpusReader('swda')
        for utt in corpus.iter_utterances():
            if len(utt.trees) == 1:
                if utt.tree_is_perfect_match():
                    d['match'] += 1
                else:
                    d['mismatch'] += 1
        print "match: %s (%s percent)" % (d['match'], d['match']/float(sum(d.values())))

Running this reports a match count of 96370, a proportion of 0.829738688708, so roughly 83% of the utterances with exactly one tree match that tree exactly. This suggests that, when studying the trees, we can limit attention to the matching-tree subset. However, we should first check that the overall distribution of tags is the same for this subset; it is conceivable that a specific tag never gets its own tree and thus would appear less often in this subset.

Figure PERCOMPARE compares the percentages in Table DAMSL with the percentages from the restricted subset that has full-tree matches. The distributions look largely the same, suggesting that work involving parsetrees can limit attention to the matching-tree subset. However, if an analysis focuses on a specific subset of the tags, then more careful comparison is advised. (For example, x (non-verbal) and ^g (tag-questions) seem to be quite different from this perspective: non-verbal utterances are typically not parsed at all, and tag-questions are often treated as their own dialogue act but merged with the preceding tree when parsed.)

figures/swda/matching-tree-cmp.png
Figure PERCOMPARE
Comparing percentages of tags for the full corpus and the restricted subset that have single, precisely matching trees.
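
For concreteness, here is a minimal sketch of how the restricted distribution in Figure PERCOMPARE can be gathered, combining damsl_act_tag() with tree_is_perfect_match() (the function name matching_tree_tag_dist is just for illustration):

    from collections import defaultdict
    from swda import CorpusReader

    def matching_tree_tag_dist():
        """Relative frequencies of DAMSL tags, restricted to utterances whose
        single tree precisely matches the utterance."""
        d = defaultdict(int)
        corpus = CorpusReader('swda')
        for utt in corpus.iter_utterances():
            if len(utt.trees) == 1 and utt.tree_is_perfect_match():
                d[utt.damsl_act_tag()] += 1
        total = float(sum(d.values()))
        return dict((key, val / total) for key, val in d.items())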

Exercises

SAMPLE Pick a transcript at random and study it a bit, to get a sense for what the data are like. Some things you might informally assess:

  1. How often do the callers speak in complete sentences?
  2. Where do you see the influence of their assigned topic?
  3. Do the callers stay on topic most of the time?
  4. Do you see any reflection of the dialect-area meta-data in the speech of the participants?

META The following code skeleton loops through the transcripts, creating an opportunity to count pieces of meta-data at that level. Complete the code by counting two different pieces of meta-data. Submit both the code and its output as your answer.

    from collections import defaultdict
    from operator import itemgetter
    from swda import CorpusReader

    def swda_transcript_metadata_counter():
        # A one-dimensional count dictionary with 0 as the default value:
        d = defaultdict(int)
        # Instantiate the corpus:
        corpus = CorpusReader('swda')
        # Iterate through the transcripts; display_progress=True tracks progress:
        for trans in corpus.iter_transcripts(display_progress=True):
            # Keep track of the meta-data using d ...

        # Turn d into a list of tuples as d.items(), sort it based on the
        # second (index 1) member of those tuples, largest first, and
        # print out the results:
        for key, val in sorted(d.items(), key=itemgetter(1), reverse=True):
            print key, val

Advanced extension: allow the user to supply a Transcript attribute as the argument to the function, and then use that attribute inside the loop, to compile its count distribution.

ROOTS The following skeletal code loops through the utterances, creating an opportunity to count utterance-level information.

  1. Finish this function so that it keeps track of the distribution of root node labels on nltk.tree.Tree objects. Submit the output from this run.
  2. Modify the function so that it uses tree_is_perfect_match() to restrict attention to utterances with exactly one tree. Submit both the code and output from this run.
  3. Do the distributions of the root nodes differ in any worrisome ways between the full corpus and the subset?

    from collections import defaultdict
    from operator import itemgetter
    from swda import CorpusReader

    def swda_root_nodes():
        # A one-dimensional count dictionary with 0 as the default value:
        d = defaultdict(int)
        # Instantiate the corpus:
        corpus = CorpusReader('swda')
        # Iterate through the utterances:
        for utt in corpus.iter_utterances(display_progress=True):
            # Count tree root nodes here using d ...

        # Turn d into a list of tuples as d.items(), sort it based on the
        # second (index 1) member of those tuples, largest first, and
        # print out the results:
        for key, val in sorted(d.items(), key=itemgetter(1), reverse=True):
            print key, val

POS This question compares heavily edited newspaper text with naturalistic dialogue by looking at the distribution of POS tags in two such resources.

  1. Build a probability distribution over the raw (not WordNet-lemmatized) part-of-speech tags in the SwDA.
  2. Run the following NLTK code, which builds such a distribution for the NLTK fragment of the Wall Street Journal Penn Treebank corpus.
  3. Identify 3-5 ways in which the two distributions differ.

    from collections import defaultdict
    from nltk.corpus import treebank

    def treebank_pos_dist():
        """Build a POS relative frequency distribution for the NLTK subset of the WSJ Treebank."""
        d = defaultdict(int)
        for fileid in treebank.fileids():
            for word in treebank.tagged_words(fileid):
                d[word[1]] += 1
        dist = {}
        total = float(sum(d.values()))
        for key, val in d.iteritems():
            dist[key] = d[key] / total
        return dist

TAGS How are tag questions parsed? Choose one of the following two methods for addressing this:

  1. Easier option: browse around in the CSV files looking for utterances marked with the dialog-act tag of a tag question. Study the associated trees and provide a characterization of the tag question structure or structures using a diagram or labeled bracketing.
  2. Harder but more satisfying option: write code to extract all the things that have the dialog-act tag of a tag question and look at what the associated trees are like. Write a separate function that takes an nltk.tree.Tree object as its argument and returns a list (possibly empty) of all the tag-question substructures in that tree.