OSCAR

Open Source Project on Multilingual Resources for Machine Learning

The OSCAR project (Open Super-large Crawled Aggregated coRpus) is an Open Source project aiming to provide web-based multilingual resources and datasets for Machine Learning (ML) and Artificial Intelligence (AI) applications. The project focuses specifically on providing large quantities of unannotated raw data of the kind commonly used in the pre-training of large deep learning models. The OSCAR project has developed high-performance data pipelines specifically conceived to classify and filter large amounts of web data. The project has also paid special attention to improving the data quality of web-based corpora and to providing data for low-resource languages, so that these new ML/AI technologies are accessible to as many communities as possible.

The new OSCAR 23.01 is finally available, check it out here! 🚀
Join our Discord community here! 💬

Data is distributed by language in both original and deduplicated form. There are currently 166 different languages available. If you use OSCAR, please consider giving us feedback by writing to the email address below. Also consider citing our papers.

If you want to contribute to OSCAR, please open a pull request here.

Since 2019, the OSCAR Project has been funded by Inria (project-team ALMAnaCH) and the PRAIRIE institute. Starting in 2023, DFKI and the German Federal Ministry for Economic Affairs and Climate Action (BMWK), through the OpenGPT-X project, have joined Inria, ALMAnaCH and the PRAIRIE institute in funding the OSCAR Project. During 2022 and at the beginning of 2023, OSCAR was also briefly funded by the University of Mannheim.

If you are interested in OSCAR and would like to access the corpus, send us an email at the address below with “OSCAR Access Request” as the subject line. Please include your first and last name, affiliation, contact details, the languages you need, and a brief description of how you intend to use OSCAR.

Funding provided by

Inria

Funding Organization

ALMAnaCH

Funding Lab

PRAIRIE Institute

Funding Institute

DFKI

Funding Organization

OpenGPT-X

Funding Project

The OSCAR Team

Core

Julien Abadji

Research Engineer

Rua Ismail

Research Engineer

Laurent Romary

Senior Researcher

Benoît Sagot

Senior Researcher

Collaborators

Sebastian Nagel

Crawl Engineer & Data Scientist

Ayyoob Imani

Ph.D. student at LMU Munich

Contributors

Sotaro Takeshita

Ph.D. Student

Patrick Teufert

Data Scientist

Partners

Common Crawl

Partner Organization

License

Corpus License

These data are released under this licensing scheme:

  • We do not own any of the text from which these data have been extracted.
  • We license the actual packaging and annotations of these data under the Creative Commons CC0 license (“no rights reserved”).
  • To the extent possible under French law, Inria has waived all copyright and related or neighboring rights to OSCAR.
  • To the extent possible under German law, DFKI GmbH and Universität Mannheim have waived all copyright and related or neighboring rights to OSCAR.
  • This work is published from: France.

CC0

Code Licenses

All of the software repositories produced by the OSCAR Project are available on GitHub and include repository-specific licensing information. For more information please visit the OSCAR Project Organization on GitHub.

Notice and take down policy

Notice: Should you consider that our data contains material that is owned by you and should therefore not be reproduced here, please:

  • Clearly identify yourself, with detailed contact data such as an address, telephone number or email address at which you can be contacted.
  • Clearly identify the copyrighted work claimed to be infringed.
  • Clearly identify the material that is claimed to be infringing and information reasonably sufficient to allow us to locate the material.
  • And use the contact form below.

Take down: We will comply with legitimate requests by removing the affected sources from the next release of the corpus.