This function builds a word frequency list from a corpus.
Usage
freqlist(
  x,
  re_drop_line = NULL,
  line_glue = NULL,
  re_cut_area = NULL,
  re_token_splitter = re("[^_\\p{L}\\p{N}\\p{M}'-]+"),
  re_token_extractor = re("[_\\p{L}\\p{N}\\p{M}'-]+"),
  re_drop_token = NULL,
  re_token_transf_in = NULL,
  token_transf_out = NULL,
  token_to_lower = TRUE,
  perl = TRUE,
  blocksize = 300,
  verbose = FALSE,
  show_dots = FALSE,
  dot_blocksize = 10,
  file_encoding = "UTF-8",
  ngram_size = NULL,
  max_skip = 0,
  ngram_sep = "_",
  ngram_n_open = 0,
  ngram_open = "[]",
  as_text = FALSE
)
Arguments
- x
Either a list of filenames of the corpus files (if `as_text` is `FALSE`) or the actual text of the corpus (if `as_text` is `TRUE`).
If `as_text` is `TRUE` and the length of the vector `x` is larger than one, then each item in `x` is treated as a separate line (or a separate series of lines) in the corpus text. Within each item of `x`, the character `"\n"` is also treated as a line separator.
- re_drop_line
`NULL` or character vector. If `NULL`, it is ignored. Otherwise, a character vector (assumed to be of length 1) containing a regular expression. Lines in `x` that contain a match for `re_drop_line` are treated as not belonging to the corpus and are excluded from the results. (For a combined illustration of the preprocessing arguments, see the sketch after this list.)
- line_glue
`NULL` or character vector. If `NULL`, it is ignored. Otherwise, all lines in a corpus file (or in `x`, if `as_text` is `TRUE`) are glued together into one character vector of length 1, with the string `line_glue` pasted in between consecutive lines. The value of `line_glue` can also be the empty string `""`. The 'line glue' operation is conducted immediately after the 'drop line' operation.
- re_cut_area
`NULL` or character vector. If `NULL`, it is ignored. Otherwise, all matches in a corpus file (or in `x`, if `as_text` is `TRUE`) are 'cut out' of the text prior to the identification of the tokens (and are therefore not taken into account when the tokens are identified). The 'cut area' operation is conducted immediately after the 'line glue' operation.
- re_token_splitter
Regular expression or `NULL`. Regular expression that identifies the locations where lines in the corpus files are split into tokens. (See Details.) The 'token identification' operation is conducted immediately after the 'cut area' operation.
- re_token_extractor
Regular expression that identifies the locations of the actual tokens. This argument is only used if `re_token_splitter` is `NULL`. (See Details.) The 'token identification' operation is conducted immediately after the 'cut area' operation.
- re_drop_token
Regular expression or `NULL`. If `NULL`, it is ignored. Otherwise, it identifies tokens that are to be excluded from the results. Any token that contains a match for `re_drop_token` is removed from the results. The 'drop token' operation is conducted immediately after the 'token identification' operation.
- re_token_transf_in
Regular expression that identifies areas in the tokens that are to be transformed. This argument works together with the argument `token_transf_out`.
If both `re_token_transf_in` and `token_transf_out` differ from `NA`, then all matches, in the tokens, for the regular expression `re_token_transf_in` are replaced with the replacement string `token_transf_out`.
The 'token transformation' operation is conducted immediately after the 'drop token' operation.
- token_transf_out
Replacement string. This argument works together with `re_token_transf_in` and is ignored if `re_token_transf_in` is `NULL` or `NA`.
- token_to_lower
Logical. Whether tokens must be converted to lowercase before the result is returned. The 'token to lower' operation is conducted immediately after the 'token transformation' operation.
- perl
Logical. Whether the PCRE regular expression flavor is used in the arguments that contain regular expressions.
- blocksize
Number that indicates how many corpus files are read into memory 'at each individual step' during the steps in the procedure. Normally the default value of `300` should not be changed, but when one works with exceptionally small corpus files it may be worthwhile to use a higher number, and when one works with exceptionally large corpus files it may be worthwhile to use a lower number.
- verbose
If `TRUE`, messages are printed to the console to indicate progress.
- show_dots, dot_blocksize
If `TRUE`, dots are printed to the console to indicate progress.
- file_encoding
File encoding that is assumed in the corpus files.
- ngram_size
Argument in support of ngrams/skipgrams (see also `max_skip`).
If one wants to identify individual tokens, the value of `ngram_size` should be `NULL` or `1`. If one wants to retrieve token ngrams/skipgrams, `ngram_size` should be an integer indicating the size of the ngrams/skipgrams, e.g. `2` for bigrams, or `3` for trigrams, etc.
- max_skip
Argument in support of skipgrams. This argument is ignored if `ngram_size` is `NULL` or `1`.
If `ngram_size` is `2` or higher, and `max_skip` is `0`, then regular ngrams are retrieved (although they may contain open slots; see `ngram_n_open`).
If `ngram_size` is `2` or higher, and `max_skip` is `1` or higher, then skipgrams are retrieved (which in the current implementation cannot contain open slots; see `ngram_n_open`).
For instance, if `ngram_size` is `3` and `max_skip` is `2`, then 2-skip trigrams are retrieved. Or if `ngram_size` is `5` and `max_skip` is `3`, then 3-skip 5-grams are retrieved. (For a sketch, see the additional examples at the end of the Examples section.)
- ngram_sep
Character vector of length 1 containing the string that is used to separate/link tokens in the representation of ngrams/skipgrams in the output of this function.
- ngram_n_open
If `ngram_size` is `2` or higher, and moreover `ngram_n_open` is a number higher than `0`, then ngrams with 'open slots' in them are retrieved. These ngrams with 'open slots' are generalizations of fully lexically specific ngrams, the generalization being that one or more of the items in the ngram are replaced by a notation that stands for 'any arbitrary token'.
For instance, if `ngram_size` is `4` and `ngram_n_open` is `1`, and if moreover the input contains a 4-gram `"it_is_widely_accepted"`, then the output will contain all modifications of `"it_is_widely_accepted"` in which one (since `ngram_n_open` is `1`) of the items in this ngram is replaced by an open slot. The first and the last item in an ngram are never turned into an open slot; only the items in between are candidates for being turned into open slots. Therefore, in the example, the output will contain `"it_[]_widely_accepted"` and `"it_is_[]_accepted"`.
As a second example, if `ngram_size` is `5` and `ngram_n_open` is `2`, and if moreover the input contains a 5-gram `"it_is_widely_accepted_that"`, then the output will contain `"it_[]_[]_accepted_that"`, `"it_[]_widely_[]_that"`, and `"it_is_[]_[]_that"`.
- ngram_open
Character string used to represent open slots in ngrams in the output of this function.
- as_text
Logical. Whether `x` is to be interpreted as a character vector containing the actual contents of the corpus (if `as_text` is `TRUE`) or as a character vector containing the names of the corpus files (if `as_text` is `FALSE`). If `as_text` is `TRUE`, then the arguments `blocksize`, `verbose`, `show_dots`, `dot_blocksize`, and `file_encoding` are ignored.
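The preprocessing arguments above are applied in a fixed order: 'drop line', 'line glue', 'cut area', 'token identification', 'drop token', 'token transformation', and 'token to lower'. The following sketch illustrates how a few of them interact; the input vector and the regular expressions are illustrative choices, not package defaults.

# A minimal sketch of the preprocessing arguments (illustrative input
# and regular expressions; these are not package defaults).
txt <- c("<header>Skip this line entirely</header>",
         "A tiny corpus with some mark-up.",
         "It contains <b>bold</b> words and hyphen-ated words.")
freqlist(txt,
         re_drop_line       = "<header>", # drop lines matching this pattern
         re_cut_area        = "<[^>]+>",  # cut remaining tags out of the text
         re_token_transf_in = "-",        # rewrite hyphens inside tokens ...
         token_transf_out   = "",         # ... to nothing: "mark-up" -> "markup"
         as_text = TRUE)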
Value
An object of class `freqlist`, which is based on the class `table`.
It has additional attributes and methods such as:
- base `print()`, `as_data_frame()`, `summary()` and `sort()` methods,
- an interactive `explore()` method,
- various getters, including `tot_n_tokens()`, `n_types()` and `n_tokens()` (values that are also returned by `summary()`), and more (see the sketch at the end of this section),
- subsetting methods such as `keep_types()`, `keep_pos()`, etc., including `[]` subsetting (see brackets).
Additional manipulation functions include `type_freqs()` to extract the frequencies of different items, `freqlist_merge()` to combine frequency lists, and `freqlist_diff()` to subtract a frequency list from another.
Objects of class `freqlist` can be saved to file with `write_freqlist()`; these files can be read with `read_freqlist()`.
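A minimal sketch of a few of these methods, assuming a small inline corpus:

flist <- freqlist("the cat sat on the mat", as_text = TRUE)
n_types(flist)                      # number of distinct types (here: 5)
n_tokens(flist)                     # number of tokens (here: 6)
tot_n_tokens(flist)                 # total number of tokens the list is based on
keep_types(flist, c("the", "cat"))  # keep only the listed types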
Details
The actual token identification is based either on the `re_token_splitter` argument, a regular expression that identifies the areas between the tokens, or on `re_token_extractor`, a regular expression that identifies the areas that are the tokens.
The first mechanism is the default mechanism: the argument `re_token_extractor` is only used if `re_token_splitter` is `NULL`.
Currently the implementation of `re_token_extractor` is considerably less time-efficient than that of `re_token_splitter`.
Examples
toy_corpus <- "Once upon a time there was a tiny toy corpus.
It consisted of three sentences. And it lived happily ever after."
(flist <- freqlist(toy_corpus, as_text = TRUE))
#> Frequency list (types in list: 19, tokens in list: 21)
#> rank type abs_freq nrm_freq
#> ---- --------- -------- --------
#> 1 a 2 952.381
#> 2 it 2 952.381
#> 3 after 1 476.190
#> 4 and 1 476.190
#> 5 consisted 1 476.190
#> 6 corpus 1 476.190
#> 7 ever 1 476.190
#> 8 happily 1 476.190
#> 9 lived 1 476.190
#> 10 of 1 476.190
#> 11 once 1 476.190
#> 12 sentences 1 476.190
#> 13 there 1 476.190
#> 14 three 1 476.190
#> 15 time 1 476.190
#> 16 tiny 1 476.190
#> 17 toy 1 476.190
#> 18 upon 1 476.190
#> 19 was 1 476.190
print(flist, n = 20)
#> Frequency list (types in list: 19, tokens in list: 21)
#> rank type abs_freq nrm_freq
#> ---- --------- -------- --------
#> 1 a 2 952.381
#> 2 it 2 952.381
#> 3 after 1 476.190
#> 4 and 1 476.190
#> 5 consisted 1 476.190
#> 6 corpus 1 476.190
#> 7 ever 1 476.190
#> 8 happily 1 476.190
#> 9 lived 1 476.190
#> 10 of 1 476.190
#> 11 once 1 476.190
#> 12 sentences 1 476.190
#> 13 there 1 476.190
#> 14 three 1 476.190
#> 15 time 1 476.190
#> 16 tiny 1 476.190
#> 17 toy 1 476.190
#> 18 upon 1 476.190
#> 19 was 1 476.190
as.data.frame(flist)
#> rank type abs_freq nrm_freq
#> 1 1 a 2 952.3810
#> 2 2 it 2 952.3810
#> 3 3 after 1 476.1905
#> 4 4 and 1 476.1905
#> 5 5 consisted 1 476.1905
#> 6 6 corpus 1 476.1905
#> 7 7 ever 1 476.1905
#> 8 8 happily 1 476.1905
#> 9 9 lived 1 476.1905
#> 10 10 of 1 476.1905
#> 11 11 once 1 476.1905
#> 12 12 sentences 1 476.1905
#> 13 13 there 1 476.1905
#> 14 14 three 1 476.1905
#> 15 15 time 1 476.1905
#> 16 16 tiny 1 476.1905
#> 17 17 toy 1 476.1905
#> 18 18 upon 1 476.1905
#> 19 19 was 1 476.1905
as_tibble(flist)
#> # A tibble: 19 × 4
#> rank type abs_freq nrm_freq
#> <dbl> <chr> <dbl> <dbl>
#> 1 1 a 2 952.
#> 2 2 it 2 952.
#> 3 3 after 1 476.
#> 4 4 and 1 476.
#> 5 5 consisted 1 476.
#> 6 6 corpus 1 476.
#> 7 7 ever 1 476.
#> 8 8 happily 1 476.
#> 9 9 lived 1 476.
#> 10 10 of 1 476.
#> 11 11 once 1 476.
#> 12 12 sentences 1 476.
#> 13 13 there 1 476.
#> 14 14 three 1 476.
#> 15 15 time 1 476.
#> 16 16 tiny 1 476.
#> 17 17 toy 1 476.
#> 18 18 upon 1 476.
#> 19 19 was 1 476.
summary(flist)
#> Frequency list (types in list: 19, tokens in list: 21)
print(summary(flist))
#> Frequency list (types in list: 19, tokens in list: 21)
t_splitter <- "(?xi) [:\\s.;,?!\"]+"
freqlist(toy_corpus,
         re_token_splitter = t_splitter,
         as_text = TRUE)
#> Frequency list (types in list: 19, tokens in list: 21)
#> rank type abs_freq nrm_freq
#> ---- --------- -------- --------
#> 1 a 2 952.381
#> 2 it 2 952.381
#> 3 after 1 476.190
#> 4 and 1 476.190
#> 5 consisted 1 476.190
#> 6 corpus 1 476.190
#> 7 ever 1 476.190
#> 8 happily 1 476.190
#> 9 lived 1 476.190
#> 10 of 1 476.190
#> 11 once 1 476.190
#> 12 sentences 1 476.190
#> 13 there 1 476.190
#> 14 three 1 476.190
#> 15 time 1 476.190
#> 16 tiny 1 476.190
#> 17 toy 1 476.190
#> 18 upon 1 476.190
#> 19 was 1 476.190
freqlist(toy_corpus,
         re_token_splitter = t_splitter,
         token_to_lower = FALSE,
         as_text = TRUE)
#> Frequency list (types in list: 20, tokens in list: 21)
#> rank type abs_freq nrm_freq
#> ---- --------- -------- --------
#> 1 a 2 952.381
#> 2 And 1 476.190
#> 3 It 1 476.190
#> 4 Once 1 476.190
#> 5 after 1 476.190
#> 6 consisted 1 476.190
#> 7 corpus 1 476.190
#> 8 ever 1 476.190
#> 9 happily 1 476.190
#> 10 it 1 476.190
#> 11 lived 1 476.190
#> 12 of 1 476.190
#> 13 sentences 1 476.190
#> 14 there 1 476.190
#> 15 three 1 476.190
#> 16 time 1 476.190
#> 17 tiny 1 476.190
#> 18 toy 1 476.190
#> 19 upon 1 476.190
#> 20 was 1 476.190
t_extractor <- "(?xi) ( [:;?!] | [.]+ | [\\w'-]+ )"
freqlist(toy_corpus,
         re_token_splitter = NA,
         re_token_extractor = t_extractor,
         as_text = TRUE)
#> Frequency list (types in list: 20, tokens in list: 24)
#> rank type abs_freq nrm_freq
#> ---- --------- -------- --------
#> 1 . 3 1250.000
#> 2 a 2 833.333
#> 3 it 2 833.333
#> 4 after 1 416.667
#> 5 and 1 416.667
#> 6 consisted 1 416.667
#> 7 corpus 1 416.667
#> 8 ever 1 416.667
#> 9 happily 1 416.667
#> 10 lived 1 416.667
#> 11 of 1 416.667
#> 12 once 1 416.667
#> 13 sentences 1 416.667
#> 14 there 1 416.667
#> 15 three 1 416.667
#> 16 time 1 416.667
#> 17 tiny 1 416.667
#> 18 toy 1 416.667
#> 19 upon 1 416.667
#> 20 was 1 416.667
freqlist(letters, ngram_size = 3, as_text = TRUE)
#> Frequency list (types in list: 24, tokens in list: 24)
#> rank type abs_freq nrm_freq
#> ---- ----- -------- --------
#> 1 a_b_c 1 416.667
#> 2 b_c_d 1 416.667
#> 3 c_d_e 1 416.667
#> 4 d_e_f 1 416.667
#> 5 e_f_g 1 416.667
#> 6 f_g_h 1 416.667
#> 7 g_h_i 1 416.667
#> 8 h_i_j 1 416.667
#> 9 i_j_k 1 416.667
#> 10 j_k_l 1 416.667
#> 11 k_l_m 1 416.667
#> 12 l_m_n 1 416.667
#> 13 m_n_o 1 416.667
#> 14 n_o_p 1 416.667
#> 15 o_p_q 1 416.667
#> 16 p_q_r 1 416.667
#> 17 q_r_s 1 416.667
#> 18 r_s_t 1 416.667
#> 19 s_t_u 1 416.667
#> 20 t_u_v 1 416.667
#> ...
#>
freqlist(letters, ngram_size = 2, ngram_sep = " ", as_text = TRUE)
#> Frequency list (types in list: 25, tokens in list: 25)
#> rank type abs_freq nrm_freq
#> ---- ---- -------- --------
#> 1 a b 1 400
#> 2 b c 1 400
#> 3 c d 1 400
#> 4 d e 1 400
#> 5 e f 1 400
#> 6 f g 1 400
#> 7 g h 1 400
#> 8 h i 1 400
#> 9 i j 1 400
#> 10 j k 1 400
#> 11 k l 1 400
#> 12 l m 1 400
#> 13 m n 1 400
#> 14 n o 1 400
#> 15 o p 1 400
#> 16 p q 1 400
#> 17 q r 1 400
#> 18 r s 1 400
#> 19 s t 1 400
#> 20 t u 1 400
#> ...
#>
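# Additional sketches, not part of the original examples (output not shown):
# skipgrams and ngrams with open slots, using the arguments max_skip and
# ngram_n_open documented above.
freqlist(letters, ngram_size = 3, max_skip = 1, as_text = TRUE)     # 1-skip trigrams
freqlist(letters, ngram_size = 4, ngram_n_open = 1, as_text = TRUE) # e.g. "a_[]_c_d"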