IndicCorp was built by discovering and scraping thousands of web sources — primarily news sites, magazines, and books — over a period of several months.
IndicCorp is one of the largest publicly available corpora for Indian languages. It was used to train our released models, which have obtained state-of-the-art performance on many tasks.
The corpus is a single large text file containing one sentence per line. The publicly released version is randomly shuffled, untokenized and deduplicated.
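The exact preparation pipeline is not part of this release, but a similar deduplicate-then-shuffle pass over a one-sentence-per-line file can be sketched in a few lines of Python (the function name and file paths below are illustrative, not from the original pipeline):

```python
import random

def dedup_and_shuffle(input_path, output_path, seed=0):
    """Illustrative pass: drop exact-duplicate lines, then shuffle.

    File names are placeholders; this is not the released pipeline.
    """
    with open(input_path, 'r', encoding='utf-8') as in_fp:
        # dict.fromkeys preserves first occurrence and drops exact duplicates
        sentences = list(dict.fromkeys(line.rstrip('\n') for line in in_fp))
    # seeded shuffle so the result is reproducible
    random.Random(seed).shuffle(sentences)
    with open(output_path, 'w', encoding='utf-8') as out_fp:
        for sent in sentences:
            out_fp.write(sent + '\n')
```

Note that this holds the unique sentences in memory; for a corpus of this size, an external-memory tool such as `sort -u` would be used instead.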
| Language | # News Articles* | Sentences | Tokens | Link |
|----------|------------------|-----------|--------|------|
\* Excluding articles obtained from the OSCAR corpus
To process the corpus into other forms (tokenized, transliterated, etc.), you can use the indicnlp library. For example, the following snippet tokenizes the corpus:
```python
from indicnlp.tokenize.indic_tokenize import trivial_tokenize
from indicnlp.normalize.indic_normalize import IndicNormalizerFactory

lang = 'kn'
input_path = 'kn'
output_path = 'kn.tok.txt'

normalizer_factory = IndicNormalizerFactory()
normalizer = normalizer_factory.get_normalizer(lang)

def process_sent(sent):
    # normalize the script, then whitespace-join the tokens
    normalized = normalizer.normalize(sent)
    processed = ' '.join(trivial_tokenize(normalized, lang))
    return processed

with open(input_path, 'r', encoding='utf-8') as in_fp, \
        open(output_path, 'w', encoding='utf-8') as out_fp:
    # iterate line by line instead of readlines(), which would load
    # the entire corpus into memory
    for line in in_fp:
        sent = line.rstrip('\n')
        toksent = process_sent(sent)
        out_fp.write(toksent)
        out_fp.write('\n')
```
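For transliteration, indicnlp exploits the fact that the Brahmi-derived script blocks in Unicode (Devanagari at U+0900, Bengali at U+0980, Kannada at U+0C80, and so on) are largely aligned at 128-codepoint offsets. As a rough stdlib-only illustration of that idea — not the library's actual transliterator, which handles the many script-specific exceptions this sketch ignores — corresponding characters can be mapped by shifting codepoints between blocks:

```python
# Naive sketch of Indic transliteration via Unicode block offsets.
# The language codes and base offsets below follow the Unicode charts;
# real transliterators (e.g. indicnlp's) handle exceptions this ignores.
SCRIPT_BASE = {'hi': 0x0900, 'bn': 0x0980, 'ta': 0x0B80, 'kn': 0x0C80}

def naive_transliterate(text, src, tgt):
    offset = SCRIPT_BASE[tgt] - SCRIPT_BASE[src]
    out = []
    for ch in text:
        cp = ord(ch)
        if SCRIPT_BASE[src] <= cp < SCRIPT_BASE[src] + 0x80:
            out.append(chr(cp + offset))  # shift into the target block
        else:
            out.append(ch)  # leave punctuation, digits, etc. unchanged
    return ''.join(out)
```

For example, Devanagari क (U+0915, KA) maps to Kannada ಕ (U+0C95, KA) under a `'hi'` to `'kn'` shift.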