Read Google Analytics Exports
read_ga.Rd
Read exported Google Analytics data saved as .csv.
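For orientation, here is a minimal sketch of a read_ga() call. The file path is hypothetical, the call assumes the package providing read_ga() is attached, and the arguments shown are the ones documented under Arguments below.

    # Minimal sketch: read the first (main) table from a hypothetical export,
    # tidied and with the totals row dropped (the documented defaults).
    ga_main <- read_ga("analytics_export.csv")

    # Read every table in the export, leave them untidied, and keep the totals row.
    ga_all <- read_ga(
      "analytics_export.csv",
      read_all   = TRUE,
      tidy       = FALSE,
      keep_total = TRUE
    )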
Arguments
- ga_file
The path to a Google Analytics .csv file.
- read_all
Whether all tables in the exported file should be read, as a logical scalar. If FALSE (default), only the first (main) table is read.
- tidy
Whether tables should be tidied with tidy_ga_tbl(), as a logical scalar (default: TRUE).
- keep_total
Whether to keep the row with totals that appears at the bottom of the table, as a logical scalar (default: FALSE).
- ...
Arguments passed on to readr::read_csv; see the sketch after this list for an example of supplying these options.
file
Either a path to a file, a connection, or literal data (either a single string or a raw vector).
Files ending in .gz, .bz2, .xz, or .zip will be automatically uncompressed. Files starting with http://, https://, ftp://, or ftps:// will be automatically downloaded. Remote gz files can also be automatically downloaded and decompressed.
Literal data is most useful for examples and tests. To be recognised as literal data, the input must be either wrapped with I(), be a string containing at least one new line, or be a vector containing at least one string with a new line.
Using a value of clipboard() will read from the system clipboard.
quote
Single character used to quote strings.
col_names
Either TRUE, FALSE or a character vector of column names.
If TRUE, the first row of the input will be used as the column names, and will not be included in the data frame. If FALSE, column names will be generated automatically: X1, X2, X3 etc.
If col_names is a character vector, the values will be used as the names of the columns, and the first row of the input will be read into the first row of the output data frame.
Missing (NA) column names will generate a warning, and be filled in with dummy names ...1, ...2 etc. Duplicate column names will generate a warning and be made unique; see name_repair to control how this is done.
col_types
One of NULL, a cols() specification, or a string. See vignette("readr") for more details.
If NULL, all column types will be inferred from guess_max rows of the input, interspersed throughout the file. This is convenient (and fast), but not robust. If the guessed types are wrong, you'll need to increase guess_max or supply the correct types yourself.
Column specifications created by list() or cols() must contain one column specification for each column. If you only want to read a subset of the columns, use cols_only().
Alternatively, you can use a compact string representation where each character represents one column:
c = character
i = integer
n = number
d = double
l = logical
f = factor
D = date
T = date time
t = time
? = guess
_ or - = skip
By default, reading a file without a column specification will print a message showing what readr guessed they were. To remove this message, set show_col_types = FALSE or set options(readr.show_col_types = FALSE).
col_select
Columns to include in the results. You can use the same mini-language as dplyr::select() to refer to the columns by name. Use c() to use more than one selection expression. Although this usage is less common, col_select also accepts a numeric column index. See ?tidyselect::language for full details on the selection language.
id
The name of a column in which to store the file path. This is useful when reading multiple input files and there is data in the file paths, such as the data collection date. If NULL (the default) no extra column is created.
locale
The locale controls defaults that vary from place to place. The default locale is US-centric (like R), but you can use locale() to create your own locale that controls things like the default time zone, encoding, decimal mark, big mark, and day/month names.
na
Character vector of strings to interpret as missing values. Set this option to character() to indicate no missing values.
quoted_na
Should missing values inside quotes be treated as missing values (the default) or strings. This parameter is soft deprecated as of readr 2.0.0.
comment
A string used to identify comments. Any text after the comment characters will be silently ignored.
trim_ws
Should leading and trailing whitespace (ASCII spaces and tabs) be trimmed from each field before parsing it?
skip
Number of lines to skip before reading data. If comment is supplied any commented lines are ignored after skipping.
n_max
Maximum number of lines to read.
guess_max
Maximum number of lines to use for guessing column types. Will never use more than the number of lines read. See vignette("column-types", package = "readr") for more details.
name_repair
Handling of column names. The default behaviour is to ensure column names are "unique". Various repair strategies are supported:
- "minimal": No name repair or checks, beyond basic existence of names.
- "unique" (default value): Make sure names are unique and not empty.
- "check_unique": no name repair, but check they are unique.
- "universal": Make the names unique and syntactic.
- A function: apply custom name repair (e.g., name_repair = make.names for names in the style of base R).
- A purrr-style anonymous function, see rlang::as_function().
This argument is passed on as repair to vctrs::vec_as_names(). See there for more details on these terms and the strategies used to enforce them.
num_threads
The number of processing threads to use for initial parsing and lazy reading of data. If your data contains newlines within fields the parser should automatically detect this and fall back to using one thread only. However if you know your file has newlines within quoted fields it is safest to set num_threads = 1 explicitly.
progress
Display a progress bar? By default it will only display in an interactive session and not while knitting a document. The automatic progress bar can be disabled by setting option readr.show_progress to FALSE.
skip_empty_rows
Should blank rows be ignored altogether? i.e. If this option is TRUE then blank rows will not be represented at all. If it is FALSE then they will be represented by NA values in all the columns.
lazy
Read values lazily? By default, this is FALSE, because there are special considerations when reading a file lazily that have tripped up some users. Specifically, things get tricky when reading and then writing back into the same file. But, in general, lazy reading (lazy = TRUE) has many benefits, especially for interactive use and when your downstream work only involves a subset of the rows or columns.
Learn more in should_read_lazy() and in the documentation for the altrep argument of vroom::vroom().
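Because ... is forwarded to readr::read_csv(), any of the options above can be supplied directly in the read_ga() call. The sketch below is illustrative only: the file path, column names, date format, missing-value strings, and locale are assumptions about a particular export, not behaviour guaranteed by read_ga().

    # Sketch of forwarding readr::read_csv() options through ...;
    # column names and formats are assumed, adjust them to the export at hand.
    ga_sessions <- read_ga(
      "analytics_export.csv",                            # hypothetical export path
      col_types = readr::cols(
        Date     = readr::col_date(format = "%Y%m%d"),   # assumed date layout
        Sessions = readr::col_integer()
      ),
      na     = c("", "(not set)"),                        # assumed missing-value markers
      locale = readr::locale(decimal_mark = ",", grouping_mark = ".")
    )

A compact col_types string works the same way, e.g. "Di" for a date column followed by an integer column, using the letter codes listed above.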