Abstract
Common Crawl is a massive multi-petabyte dataset hosted by Amazon. It contains archived HTML web page data from 2008 to the present. Common Crawl has been widely used for text mining purposes. Using data extracted from Common Crawl has several advantages over a direct crawl of web data, among them eliminating the risk that a user's home IP address is blacklisted for accessing a given web site too frequently. However, Common Crawl is a data sample, so questions arise about how well it represents the original data. We perform systematic tests of the similarity between topics estimated from Common Crawl and topics estimated from the full data of online forums. Our target is online discussions from a user forum for automotive enthusiasts, but our research strategy can be applied to other domains and samples to evaluate the representativeness of topic models. We show that topic proportions estimated from Common Crawl are not significantly different from those estimated on the full data. We also show that the topics are similar in their word compositions, and no less similar than topics estimated under true random sampling, which we simulate through a series of experiments. Our research will be of interest to analysts who wish to use Common Crawl to study topics of interest in user forum data, and to analysts applying topic models to other data samples.
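The abstract does not specify an implementation, but the comparison it describes (topics fit separately on a sample and on the full corpus, then compared by topic proportions and word compositions) can be sketched as follows. This is a minimal, hypothetical example assuming scikit-learn and SciPy; `full_docs` and `sample_docs` are placeholder lists of document strings, and the distance measure and topic matching used here are illustrative choices, not necessarily those used in the paper.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from scipy.spatial.distance import jensenshannon

def fit_lda(docs, vectorizer, n_topics=20, seed=0):
    """Fit LDA and return (topic-word distributions, doc-topic distributions)."""
    X = vectorizer.transform(docs)
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=seed)
    doc_topic = lda.fit_transform(X)
    topic_word = lda.components_ / lda.components_.sum(axis=1, keepdims=True)
    return topic_word, doc_topic

# Placeholder corpora: full forum data and the Common Crawl sample of it.
full_docs = ["..."]    # hypothetical list of forum posts (full data)
sample_docs = ["..."]  # hypothetical list of posts recovered from Common Crawl

# Fit a shared vocabulary so topic-word distributions are directly comparable.
vectorizer = CountVectorizer(max_features=10_000, stop_words="english")
vectorizer.fit(full_docs + sample_docs)

full_topics, full_doc_topic = fit_lda(full_docs, vectorizer)
sample_topics, sample_doc_topic = fit_lda(sample_docs, vectorizer)

def nearest_topic_distances(a, b):
    """For each topic in `a`, the Jensen-Shannon distance to its nearest topic in `b`."""
    dists = np.array([[jensenshannon(x, y) for y in b] for x in a])
    return dists.min(axis=1)

# Word-composition similarity: how close each sample topic is to some full-data topic.
per_topic_dist = nearest_topic_distances(sample_topics, full_topics)
print("mean JS distance between matched topics:", per_topic_dist.mean())

# Corpus-level topic proportions (average of per-document topic weights).
print("full-data topic proportions:", full_doc_topic.mean(axis=0).round(3))
print("sample topic proportions:   ", sample_doc_topic.mean(axis=0).round(3))
```

The same distance computation can be repeated against models fit on simulated random subsamples of the full data to obtain the random-sampling baseline the abstract refers to.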
| Original language | American English |
| --- | --- |
| Title of host publication | 2017 IEEE International Conference |
| DOIs | |
| State | Published - 2017 |
Projects
- Teaching and research with LexisNexis HPCC systems with Clemson University
  Xu, L. (CoI) & Apon, A. (PI)
  01/1/18 → 12/1/18
  Project: Research (Finished)