Resources



Please suggest any other resources you may be aware of. Raise a pull request or an issue to add more resources to the catalog. Put the proposed entry in the following format:

[Wikipedia Dumps](https://dumps.wikimedia.org/)

Add a small, informative description of the dataset and provide links to any paper/article/site documenting the resource. Mention your name too, so we can acknowledge your contribution to building this catalog in the CONTRIBUTORS list.

:new: Added an Evaluation Benchmarks section

:+1: Featured Resources

  • IIT Bombay English-Hindi Parallel Corpus: Largest publicly available English-Hindi parallel corpus (about 1.5 million segments)
  • CVIT-IIITH PIB Multilingual Corpus: Mined from Press Information Bureau for many Indian languages. Contains both English-IL and IL-IL corpora (IL=Indian language).
  • CVIT-IIITH Mann ki Baat Corpus: Mined from Indian PM Narendra Modi’s Mann ki Baat speeches.
  • AI4Bharat IndicNLP Project: Text corpora, word embeddings, text classification datasets for Indian languages.
  • iNLTK: iNLTK aims to provide out of the box support for various NLP tasks that an application developer might need for Indic languages.
  • Dakshina Dataset: The Dakshina dataset is a collection of text in both Latin and native scripts for 12 South Asian languages. Contains an aggregate of around 300k word pairs and 120k sentence pairs. Useful for transliteration.

Browse the entire catalog…

:raising_hand: Note: Many known resources have not yet been classified into the catalog. They can be found as open issues in the repo.

Major Indic Language NLP Repositories

Libraries

  • Indic NLP Library: Python library for various Indian language NLP tasks like tokenization, sentence splitting, normalization, script conversion, transliteration, etc. (see the usage sketch after this list)
  • pyiwn: Python Interface to IndoWordNet
  • Indic-OCR: OCR for Indic scripts
  • CLTK: Toolkit for many of the world’s classical languages. Support for Sanskrit. Some parts of the Sanskrit library are forked from the Indic NLP Library.
  • iNLTK: iNLTK aims to provide out of the box support for various NLP tasks that an application developer might need for Indic languages.
  • Sanskrit Coders Indic Transliteration: Script conversion and romanization for Indian languages.
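
The libraries above are pip-installable Python packages. As a quick, non-authoritative orientation, the sketch below shows how the Indic NLP Library might be used for tokenization and script conversion; exact setup steps (e.g. downloading the library's language resources) are covered in its own documentation.

```python
# Illustrative sketch only -- assumes `pip install indic-nlp-library`.
from indicnlp.tokenize import indic_tokenize
from indicnlp.transliterate.unicode_transliterate import UnicodeIndicTransliterator

hindi_text = "यह एक उदाहरण वाक्य है ।"

# Word/punctuation tokenization for Devanagari text
tokens = indic_tokenize.trivial_tokenize(hindi_text, lang='hi')
print(tokens)

# Rule-based script conversion: Devanagari (hi) to Tamil (ta) script
print(UnicodeIndicTransliterator.transliterate(hindi_text, 'hi', 'ta'))
```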

Evaluation Benchmarks

Benchmarks spanning multiple tasks.

  • GLUECoS: Benchmark for Hindi-English code-mixed data, covering the following tasks: Language Identification (LID), POS Tagging (POS), Named Entity Recognition (NER), Sentiment Analysis (SA), Question Answering (QA), and Natural Language Inference (NLI).
  • AI4Bharat Text Classification: A compilation of classification datasets for 10 languages.

Standards

Text Corpora

Monolingual Corpus

Language Identification

Lexical Resources

NER Corpora

Parallel Translation Corpus

Parallel Transliteration Corpus

Text Classification

Textual Entailment

  • XNLI corpus: Hindi and Urdu test sets and machine-translated training sets (from English MultiNLI). A loading sketch follows this entry.
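
One convenient (unofficial) way to access the Hindi and Urdu XNLI splits is through the Hugging Face `datasets` package; the snippet below assumes that package is installed and that the public `xnli` dataset card exposes per-language configurations such as `hi` and `ur`.

```python
# Hedged sketch: assumes `pip install datasets`; config names follow the
# public XNLI dataset card on the Hugging Face Hub.
from datasets import load_dataset

xnli_hi = load_dataset("xnli", "hi")   # fields: premise / hypothesis / label
print(xnli_hi["test"][0])

xnli_ur = load_dataset("xnli", "ur")
print(xnli_ur["test"].num_rows)
```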

Paraphrase

Sentiment, Sarcasm, Emotion Analysis

Question Answering

Dialog

Discourse

Information Extraction

  • EventXtract-IL: Event extraction for Tamil and Hindi. Described in this paper.
  • [EDNIL-FIRE2020](https://ednilfire.github.io/ednil/2020/index.html): Event extraction for Tamil, Hindi, Bengali, Marathi, and English. Described in this paper.

POS Tagged corpus

Chunk Corpus

Dependency Parse Corpus

Coreference Corpus

Models

Word Embeddings

Sentence Embeddings

  • BERT Multilingual: BERT model trained on the Wikipedias of many languages (including major Indic languages); see the pooling sketch after this list.
  • iNLTK: ULMFiT and TransformerXL pre-trained embeddings for many languages, trained on Wikipedia and some news articles.
  • albert-base-sanskrit: ALBERT-based model trained on Sanskrit Wikipedia.
  • RoBERTa-hindi-guj-san: Multilingual RoBERTa like model trained on Hindi, Sanskrit and Gujarati.
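
Multilingual BERT does not prescribe a sentence-embedding recipe; a common approach is to mean-pool its final hidden states. The sketch below assumes the Hugging Face `transformers` and `torch` packages are installed and is meant only as an illustration.

```python
# Illustrative mean-pooling over mBERT outputs; not an official recipe.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModel.from_pretrained("bert-base-multilingual-cased")

sentences = ["यह एक वाक्य है।", "இது ஒரு வாக்கியம்."]
batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    hidden = model(**batch).last_hidden_state               # (batch, seq, dim)

mask = batch["attention_mask"].unsqueeze(-1)                 # ignore padding
embeddings = (hidden * mask).sum(dim=1) / mask.sum(dim=1)    # (batch, dim)
print(embeddings.shape)                                      # e.g. [2, 768]
```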

Multilingual Word Embeddings

Morphanalyzers

SMT Models

Speech Corpora

OCR Corpora

Multimodal Corpora

Language Specific Catalogs

Pointers to language-specific NLP resource catalogs