
John Snow Labs NLU provides state-of-the-art algorithms for NLP & NLU with 10,000+ pretrained models in 200+ languages. It enables swift and simple development and research with its powerful Pythonic, Keras-inspired API. It is powered by John Snow Labs' powerful Spark NLP library.

Project description

NLU: The Power of Spark NLP, the Simplicity of Python

John Snow Labs' NLU is a Python library for applying state-of-the-art text mining, directly on any dataframe, with a single line of code.

As a facade of the award-winning Spark NLP library, it comes with 1000+ pretrained models in 100+ languages, all production-grade, scalable, and trainable, with everything accessible in 1 line of code.

NLU in Action

See how easy it is to use any of the thousands of models in 1 line of code. There are hundreds of tutorials and simple examples you can copy and paste into your projects to achieve state-of-the-art results easily.

NLU & Streamlit in Action

This 1 line lets you visualize and play with 1000+ SOTA NLU & NLP models in 200 languages:

streamlit run https://raw.githubusercontent.com/JohnSnowLabs/nlu/master/examples/streamlit/01_dashboard.py

NLU provides a tight and simple integration with Streamlit, which enables building powerful web apps that showcase NLU models in just 1 line of code.

View the NLU & Streamlit documentation or the NLU & Streamlit examples section.


All NLU resources overview

Take a look at our official NLU page: https://nlu.johnsnowlabs.com/ for user documentation and examples

| Resource | Description |
|----------|-------------|
| Install NLU | Just run pip install nlu pyspark==3.0.2 |
| The NLU Namespace | Find all the names of models you can load with nlu.load() |
| The nlu.load(<Model>) function | Load any of the 1000+ models in 1 line (see the sketch below this table) |
| The nlu.load(<Model>).predict(data) function | Predict on strings, lists of strings, numpy arrays, and Pandas, Modin, and Spark DataFrames |
| The nlu.load(<train.Model>).fit(data) function | Train a text classifier for 2-class, N-class, or multi-label classification, Named-Entity-Recognition, or Part-of-Speech tagging |
| The nlu.load(<Model>).viz(data) function | Visualize the results of Word Embedding Similarity Matrices, Named Entity Recognizers, Dependency Trees & Parts of Speech, Entity Resolution, Entity Linking, or Entity Status Assertion |
| The nlu.load(<Model>).viz_streamlit(data) function | Display an interactive GUI which lets you explore and test every model and feature in NLU in 1 click |
| General Concepts | General concepts in NLU |
| The latest release notes | Newest features added to NLU |
| Overview of NLU 1-liner examples | The most commonly used models and their results |
| Overview of NLU 1-liner examples for healthcare models | The most commonly used healthcare models and their results |
| Overview of all NLU tutorials and examples | 100+ tutorials on how to use NLU on text datasets for various problems and from various sources such as Twitter, Chinese news, crypto news headlines, airline traffic communication, and product review classifier training |
| Connect with us on Slack | Problems, questions, or suggestions? We have a very active and helpful community of 2000+ AI enthusiasts putting NLU, Spark NLP & Spark OCR to good use |
| Discussion Forum | Want a more in-depth discussion with the community? Post a thread in our discussion forum |
| John Snow Labs Medium | Articles and tutorials on NLU, Spark NLP, and Spark OCR |
| John Snow Labs Youtube | Videos and tutorials on NLU, Spark NLP, and Spark OCR |
| NLU Website | The official NLU website |
| GitHub Issues | Report a bug |
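
The table above lists the five core calls: load, predict, fit, viz, and viz_streamlit. Below is a minimal sketch of how they fit together; the spell names ('sentiment', 'ner', 'train.sentiment') all appear on this page, while the 'text'/'y' column names used for training are an assumption.

```python
# Minimal sketch of the core NLU calls described in the table above.
# Spell names come from this page; the 'text'/'y' training columns are an assumption.
import pandas as pd
import nlu

# Load a model and predict on a plain string
sentiment_df = nlu.load('sentiment').predict('I love NLU! <3')

# Visualize the output of an NER spell
nlu.load('ner').viz('John Snow Labs is based in Delaware')

# Train a classifier on a labeled DataFrame (assumed column names 'text' and 'y')
train_df = pd.DataFrame({'text': ['I love it', 'I hate it'], 'y': ['positive', 'negative']})
trained_pipe = nlu.load('train.sentiment').fit(train_df)

# Open the interactive Streamlit GUI for a loaded model
# nlu.load('ner').viz_streamlit('Explore every model and feature in 1 click')
```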

Getting Started with NLU

To get your hands on the power of NLU, you just need to install it via pip and ensure that Java 8 is installed and properly configured. Check out the Quickstart for more info.

pip install nlu pyspark==3.0.2

Loading and predicting with any model in 1 line of Python

import nlu 

nlu.load('sentiment').predict('I love NLU! <3') 

Loading and predicting with multiple models in 1 line

Get 6 different embeddings in 1 line and use them for downstream data science tasks!

nlu.load('bert elmo albert xlnet glove use').predict('I love NLU! <3') 
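
The same 1-liner also runs on whole DataFrames (see Supported Data Types below). A hedged sketch, assuming the text sits in a column named 'text':

```python
# Sketch: running the multi-embedding spell on a pandas DataFrame.
# Assumes the text to process is stored in a column named 'text'.
import pandas as pd
import nlu

df = pd.DataFrame({'text': ['I love NLU! <3', 'Six embeddings in one line.']})
embeddings_df = nlu.load('bert elmo albert xlnet glove use').predict(df)
print(embeddings_df.columns)  # one set of output columns per loaded embedding
```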

What kind of models does NLU provide?

NLU provides everything a data scientist might wish for in one line of code; a short sketch of a few of these capabilities follows the list below.

  • 1000+ pre-trained models

  • 100+ of the latest NLP word embeddings (BERT, ELMO, ALBERT, XLNET, GLOVE, BIOBERT, ELECTRA, COVIDBERT) and different variations of them

  • 50+ of the latest NLP sentence embeddings (BERT, ELECTRA, USE) and different variations of them

  • 100+ Classifiers (NER, POS, Emotion, Sarcasm, Questions, Spam)

  • 300+ Supported Languages

  • Summarize Text and Answer Questions with T5

  • Labeled and Unlabeled Dependency parsing

  • Various Text Cleaning and Pre-Processing methods like Stemming, Lemmatizing, Normalizing, Filtering, Cleaning pipelines and more
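
As a taste of the capabilities listed above, here is a hedged sketch combining T5 question answering with a few pre-processing spells; the spell names ('answer_question', 'en.t5', 'norm', 'en.stem') all appear in the tutorial table further down this page.

```python
# Sketch: T5 question answering/summarization and basic text cleaning in 1-liners.
# Spell names are taken from the tutorial table on this page.
import nlu

# Closed-book question answering with T5
nlu.load('answer_question').predict('What is the capital of France?')

# T5 summarization (the 'summarize:' prefix is the usual T5 task prefix)
nlu.load('en.t5').predict('summarize: NLU is a Python library for state-of-the-art text mining ...')

# Normalizing and stemming as simple pre-processing steps
nlu.load('norm').predict('Texts were CLEANED, normalized and stemmed!')
nlu.load('en.stem').predict('Texts were CLEANED, normalized and stemmed!')
```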

Classifiers trained on many different datasets

Choose the right tool for the right task! Whether you analyze movies or Twitter, NLU has the right model for you; a usage sketch follows this list.

  • trec6 classifier

  • trec10 classifier

  • spam classifier

  • fake news classifier

  • emotion classifier

  • cyberbullying classifier

  • sarcasm classifier

  • sentiment classifier for movies

  • IMDB Movie Sentiment classifier

  • Twitter sentiment classifier

  • NER pretrained on OntoNotes

  • NER trained on CoNLL

  • Language classifier for 20 languages on the wiki 20 lang dataset.
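
A hedged usage sketch for a few of the classifiers above, using spell names that appear in the tutorial table below:

```python
# Sketch: loading a few of the pretrained classifiers listed above.
import nlu

nlu.load('en.classify.sarcasm').predict('Oh great, another Monday morning.')
nlu.load('en.classify.fakenews').predict('Scientists discover water on Mars.')
nlu.load('en.sentiment.twitter').predict('NLU makes NLP easy!')
```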

Utilities for the Data Science NLU applications

Working with text data can sometimes be quite a dirty job. NLU helps you keep your hands clean by providing components that take the data-engineering-intensive tasks off your hands (see the sketch after this list).

  • Datetime Matcher

  • Pattern Matcher

  • Chunk Matcher

  • Phrases Matcher

  • Stopword Cleaners

  • Pattern Cleaners

  • Slang Cleaner
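
A hedged sketch of a few of these utilities; the spell names ('match.datetime', 'match.chunks', 'stopwords') come from the tutorial table below.

```python
# Sketch: date matching, chunk matching, and stopword cleaning in 1-liners.
import nlu

nlu.load('match.datetime').predict('We met on 2021-03-01 and will meet again next Friday.')
nlu.load('match.chunks').predict('The big brown fox jumped over the lazy dog.')
nlu.load('stopwords').predict('This is a sentence with quite a few stopwords in it.')
```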

Where can I see all models available in NLU?

To see all the models available for loading in NLU, check the NLU Namespace, the John Snow Labs Models Hub, or go straight to the source.

Supported Data Types

  • Pandas DataFrame and Series

  • Spark DataFrames

  • Modin with Ray backend

  • Modin with Dask backend

  • Numpy arrays

  • Strings and lists of strings
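
A hedged sketch showing .predict() on several of the input types listed above; the 'text' column name for DataFrames is an assumption, and Spark or Modin DataFrames are passed the same way as pandas.

```python
# Sketch: the same pipeline applied to different supported input types.
# The 'text' column name is an assumption; Spark/Modin DataFrames work analogously.
import numpy as np
import pandas as pd
import nlu

pipe = nlu.load('sentiment')

pipe.predict('A single string')                                    # str
pipe.predict(['A list', 'of strings'])                             # list of str
pipe.predict(np.array(['A numpy', 'array of strings']))            # numpy array
pipe.predict(pd.DataFrame({'text': ['A pandas DataFrame row']}))   # pandas DataFrame
```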

Overview of all tutorials using the NLU-Library

The following table lists all available tutorials using NLU. These tutorials will help you learn how to use the NLU library for your own tasks. Some of the tasks NLU handles are translating from any language to English, lemmatizing, tokenizing, cleaning text of symbols or unwanted syntax, spellchecking, detecting entities, analyzing sentiment, and many more!
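
Before diving into the table, here is a hedged sketch of a few of these multilingual 1-liners; the spells 'en.translate_to.de', 'zh.translate_to.en', and 'lang' all appear in the table below.

```python
# Sketch: translation and language identification 1-liners from the tutorial table.
import nlu

nlu.load('en.translate_to.de').predict('I love NLU!')       # English -> German (Marian)
nlu.load('zh.translate_to.en').predict('你好，世界')          # Chinese -> English (Marian)
nlu.load('lang').predict('Bonjour tout le monde')           # language identification
```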

{:.table2}

| Tutorial Description | NLU Spells Used |Open In Colab | Dataset and Paper References |

|-----------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|

| Albert Word Embeddings with NLU | albert, sentiment pos albert emotion |Open In Colab | Albert-Paper, Albert on Github, Albert on TensorFlow, T-SNE, T-SNE-Albert, Albert_Embedding |

| Bert Word Embeddings with NLU | bert, pos sentiment emotion bert |Open In Colab | Bert-Paper, Bert Github, T-SNE, T-SNE-Bert, Bert_Embedding |

| BIOBERT Word Embeddings with NLU | biobert , sentiment pos biobert emotion |Open In Colab | BioBert-Paper, Bert Github , BERT: Deep Bidirectional Transformers, Bert Github, T-SNE, T-SNE-Biobert, Biobert_Embedding |

| COVIDBERT Word Embeddings with NLU | covidbert, sentiment covidbert pos |Open In Colab | CovidBert-Paper, Bert Github, T-SNE, T-SNE-CovidBert, Covidbert_Embedding |

| ELECTRA Word Embeddings with NLU | electra, sentiment pos en.embed.electra emotion |Open In Colab | Electra-Paper, T-SNE, T-SNE-Electra, Electra_Embedding |

| ELMO Word Embeddings with NLU | elmo, sentiment pos elmo emotion |Open In Colab | ELMO-Paper, Elmo-TensorFlow, T-SNE, T-SNE-Elmo, Elmo-Embedding |

| GLOVE Word Embeddings with NLU | glove, sentiment pos glove emotion |Open In Colab | Glove-Paper, T-SNE, T-SNE-Glove , Glove_Embedding |

| XLNET Word Embeddings with NLU | xlnet, sentiment pos xlnet emotion |Open In Colab | XLNet-Paper, Bert Github, T-SNE, T-SNE-XLNet, Xlnet_Embedding |

| Multiple Word-Embeddings and Part of Speech in 1 Line of code | bert electra elmo glove xlnet albert pos |Open In Colab | Bert-Paper, Albert-Paper, ELMO-Paper, Electra-Paper, XLNet-Paper, Glove-Paper |

| Normalizing with NLU | norm |Open In Colab | - |

| Detect sentences with NLU | sentence_detector.deep, sentence_detector.pragmatic, xx.sentence_detector |Open In Colab | Sentence Detector |

| Spellchecking with NLU | n.a. | n.a. | - |

| Stemming with NLU | en.stem, de.stem |Open In Colab | - |

| Stopwords removal with NLU | stopwords |Open In Colab | Stopwords |

| Tokenization with NLU | tokenize |Open In Colab | - |

| Normalization of Documents | norm_document |Open In Colab | - |

| Open and Closed book question answering with Google's T5 | en.t5 , answer_question |Open In Colab | T5-Paper, T5-Model |

| Overview of every task available with T5 | en.t5.base |Open In Colab | T5-Paper, T5-Model |

| Translate between more than 200 Languages in 1 line of code with Marian Models | tr.translate_to.fr, en.translate_to.fr, fr.translate_to.he, en.translate_to.de |Open In Colab | Marian-Papers, Translation-Pipeline (En to Fr), Translation-Pipeline (En to Ger) |

| BERT Sentence Embeddings with NLU | embed_sentence.bert, pos sentiment embed_sentence.bert |Open In Colab | Bert-Paper, Bert Github, Bert-Sentence_Embedding |

| ELECTRA Sentence Embeddings with NLU | embed_sentence.electra, pos sentiment embed_sentence.electra |Open In Colab | Electra Paper, Sentence-Electra-Embedding |

| USE Sentence Embeddings with NLU | use, pos sentiment use emotion |Open In Colab | Universal Sentence Encoder, USE-TensorFlow, Sentence-USE-Embedding |

| Sentence similarity with NLU using BERT embeddings | embed_sentence.bert, use en.embed_sentence.electra embed_sentence.bert |Open In Colab | Bert-Paper, Bert Github, Bert-Sentence_Embedding |

| Part of Speech tagging with NLU | pos |Open In Colab | Part of Speech |

| NER Aspect Airline ATIS | en.ner.aspect.airline |Open In Colab | NER Airline Model, Atis intent Dataset |

| NLU-NER_CONLL_2003_5class_example | ner |Open In Colab | NER-Piple |

| Named-entity recognition with Deep Learning ONTO NOTES | ner.onto |Open In Colab | NER_Onto |

| Aspect based NER-Sentiment-Restaurants | en.ner.aspect_sentiment |Open In Colab | - |

| Detect Named Entities (NER), Part of Speech Tags (POS) and Tokenize in Chinese | zh.segment_words, zh.pos, zh.ner, zh.translate_to.en | Open In Colab | Translation-Pipeline (Zh to En) |

| Detect Named Entities (NER), Part of Speech Tags (POS) and Tokenize in Japanese | ja.segment_words, ja.pos, ja.ner, ja.translate_to.en | Open In Colab | Translation-Pipeline (Ja to En) |

| Detect Named Entities (NER), Part of Speech Tags (POS) and Tokenize in Korean | ko.segment_words, ko.pos, ko.ner.kmou.glove_840B_300d, ko.translate_to.en | Open In Colab | - |

| Date Matching | match.datetime | Open In Colab | - |

| Typed Dependency Parsing with NLU | dep | Open In Colab | Dependency Parsing |

| Untyped Dependency Parsing with NLU | dep.untyped | Open In Colab | - |

| E2E Classification with NLU | e2e | Open In Colab | e2e-Model |

| Language Classification with NLU | lang | Open In Colab | - |

| Cyberbullying Classification with NLU | classify.cyberbullying | Open In Colab | Cyberbullying-Classifier |

| Sentiment Classification with NLU for Twitter | emotion | Open In Colab | Emotion detection |

| Fake News Classification with NLU | en.classify.fakenews | Open In Colab | Fakenews-Classifier |

| Intent Classification with NLU | en.classify.intent.airline | Open In Colab | Airline-Intention classifier, Atis-Dataset |

| Question classification based on the TREC dataset | en.classify.questions | Open In Colab | Question-Classifier |

| Sarcasm Classification with NLU | en.classify.sarcasm | Open In Colab | Sarcasm-Classifier |

| Sentiment Classification with NLU for Twitter | en.sentiment.twitter | Open In Colab | Sentiment_Twitter-Classifier |

| Sentiment Classification with NLU for Movies | en.sentiment.imdb | Open In Colab | Sentiment_imdb-Classifier |

| Spam Classification with NLU | en.classify.spam | Open In Colab | Spam-Classifier |

| Toxic text classification with NLU | en.classify.toxic | Open In Colab | Toxic-Classifier |

| Unsupervised keyword extraction with NLU using the YAKE algorithm | yake | Open In Colab | - |

| Grammatical Chunk Matching with NLU | match.chunks | Open In Colab | - |

| Getting n-Grams with NLU | ngram | Open In Colab | - |

| Assertion | en.med_ner.clinical en.assert, en.med_ner.clinical.biobert en.assert.biobert, ... | Open In Colab | Healthcare-NER, NER_Clinical-Classifier, Toxic-Classifier |

| De-Identification Model overview | med_ner.jsl.wip.clinical en.de_identify, med_ner.jsl.wip.clinical en.de_identify.clinical, ... | Open In Colab | NER-Clinical |

| Drug Normalization | norm_drugs | Open In Colab | - |

| Entity Resolution | med_ner.jsl.wip.clinical en.resolve_chunk.cpt_clinical, med_ner.jsl.wip.clinical en.resolve.icd10cm, ... | Open In Colab | NER-Clinical, Entity-Resolver clinical |

| Medical Named Entity Recognition | en.med_ner.ade.clinical, en.med_ner.ade.clinical_bert, en.med_ner.anatomy,en.med_ner.anatomy.biobert, ... | Open In Colab | - |

| Relation Extraction | en.med_ner.jsl.wip.clinical.greedy en.relation, en.med_ner.jsl.wip.clinical.greedy en.relation.bodypart.problem, ... | Open In Colab | - |

| Visualization of NLP-Models with Spark-NLP and NLU | ner, dep.typed, med_ner.jsl.wip.clinical resolve_chunk.rxnorm.in, med_ner.jsl.wip.clinical resolve.icd10cm | Open In Colab | NER-Piple, Dependency Parsing, NER-Clinical, Entity-Resolver (Chunks) clinical |

| NLU Covid-19 Emotion Showcase | emotion | Open In GitHub | Emotion detection |

| NLU Covid-19 Sentiment Showcase | sentiment | Open In GitHub | Sentiment classification |

| NLU Airline Emotion Demo | emotion | Open In GitHub | Emotion detection |

| NLU Airline Sentiment Demo | sentiment | Open In GitHub | Sentiment classification |

| Bengali NER Hindi Embeddings for 30 Models | bn.ner, bn.lemma, ja.lemma, am.lemma, bh.lemma, en.ner.onto.bert.small_l2_128,.. | Open In Colab | Bengali-NER, Bengali-Lemmatizer, Japanese-Lemmatizer, Amharic-Lemmatizer |

| Entity Resolution | med_ner.jsl.wip.clinical en.resolve.umls, med_ner.jsl.wip.clinical en.resolve.loinc, med_ner.jsl.wip.clinical en.resolve.loinc.biobert | Open In Colab | - |

| NLU 20 Minutes Crashcourse - the fast Data Science route | spell, sentiment, pos, ner, yake, en.t5, emotion, answer_question, en.t5.base ... | Open In Colab | T5-Model, Part of Speech, NER-Piple, Emotion detection , Spellchecker, Sentiment classification |

| Chapter 0: Intro: 1-liners | sentiment, pos, ner, bert, elmo, embed_sentence.bert | Open In Colab | Part of Speech, NER-Piple, Sentiment classification, Elmo-Embedding, Bert-Sentence_Embedding |

| Chapter 1: NLU base-features with some classifiers on testdata | emotion, yake, stem | Open In Colab | Emotion detection |

| Chapter 2: Translation between 300+ languages with Marian | tr.translate_to.en, en.translate_to.fr, en.translate_to.he | Open In Colab | Translation-Pipeline (En to Fr), Translation (En to He) |

| Chapter 3: Answer questions and summarize Texts with T5 | answer_question, en.t5, en.t5.base | Open In Colab | T5-Model |

| Chapter 4: Overview of T5-Tasks | en.t5.base | Open In Colab | T5-Model |

| Graph NLU 20 Minutes Crashcourse - State of the Art Text Mining for Graphs | spell, sentiment, pos, ner, yake, emotion, med_ner.jsl.wip.clinical, ... | Open In Colab | Part of Speech, NER-Piple, Emotion detection, Spellchecker, Sentiment classification |

| Healthcare with NLU | med_ner.human_phenotype.gene_biobert, med_ner.ade_biobert, med_ner.anatomy, med_ner.bacterial_species,... | Open In Colab | - |

| Part 0: Intro: 1-liners | spell, sentiment, pos, ner, bert, elmo, embed_sentence.bert | Open In Colab | Bert-Paper, Bert Github, T-SNE, T-SNE-Bert , Part of Speech, NER-Piple, Spellchecker, Sentiment classification, Elmo-Embedding , Bert-Sentence_Embedding |

| Part 1: NLU base-features with some classifiers on Testdata | yake, stem, ner, emotion | Open In Colab | NER-Piple, Emotion detection |

| Part 2: Translate between 200+ Languages in 1 line of code with Marian-Models | en.translate_to.de, en.translate_to.fr, en.translate_to.he | Open In Colab | Translation-Pipeline (En to Fr), Translation-Pipeline (En to Ger), Translation (En to He) |

| Part 3: More Multilingual NLP-translations for Asian Languages with Marian | en.translate_to.hi, en.translate_to.ru, en.translate_to.zh | Open In Colab | Translation (En to Hi), Translation (En to Ru), Translation (En to Zh) |

| Part 4: Unsupervised Chinese Keyword Extraction, NER and Translation from Chinese news | zh.translate_to.en, zh.segment_words, yake, zh.lemma, zh.ner | Open In Colab | Translation-Pipeline (Zh to En), Zh-Lemmatizer |

| Part 5: Multilingual sentiment classifier training for 100+ languages | train.sentiment, xx.embed_sentence.labse train.sentiment | n.a. | Sentence_Embedding.Labse |

| Part 6: Question-answering and Text-summarization with the T5 Model | answer_question, en.t5, en.t5.base | Open In Colab | T5-Paper |

| Part 7: Overview of all tasks available with T5 | en.t5.base | Open In Colab | T5-Paper |

| Part 8: Overview of some of the Multilingual models with State Of the Art accuracy (1-liner) | bn.lemma, ja.lemma, am.lemma, bh.lemma, zh.segment_words, ... | Open In Colab | Bengali-Lemmatizer, Japanese-Lemmatizer, Amharic-Lemmatizer |

| Overview of some Multilingual models available with State Of the Art accuracy (1-liner) | bn.ner.cc_300d, ja.ner, zh.ner, th.ner.lst20.glove_840B_300D, ar.ner | Open In Colab | Bengali-NER |

| NLU 20 Minutes Crashcourse - the fast Data Science route | - | Open In Colab | - |

Need help?

Simple NLU Demos

Features in NLU Overview

  • Tokenization

  • Trainable Word Segmentation

  • Stop Words Removal

  • Token Normalizer

  • Document Normalizer

  • Stemmer

  • Lemmatizer

  • NGrams

  • Regex Matching

  • Text Matching

  • Chunking

  • Date Matcher

  • Sentence Detector

  • Deep Sentence Detector (Deep learning)

  • Dependency parsing (Labeled/unlabeled)

  • Part-of-speech tagging

  • Sentiment Detection (ML models)

  • Spell Checker (ML and DL models)

  • Word Embeddings (GloVe and Word2Vec)

  • BERT Embeddings (TF Hub models)

  • ELMO Embeddings (TF Hub models)

  • ALBERT Embeddings (TF Hub models)

  • XLNet Embeddings

  • Universal Sentence Encoder (TF Hub models)

  • BERT Sentence Embeddings (42 TF Hub models)

  • Sentence Embeddings

  • Chunk Embeddings

  • Unsupervised keyword extraction

  • Language Detection & Identification (up to 375 languages)

  • Multi-class Sentiment analysis (Deep learning)

  • Multi-label Sentiment analysis (Deep learning)

  • Multi-class Text Classification (Deep learning)

  • Neural Machine Translation

  • Text-To-Text Transfer Transformer (Google T5)

  • Named entity recognition (Deep learning)

  • Easy TensorFlow integration

  • GPU Support

  • Full integration with Spark ML functions

  • 1000+ pre-trained models in 200+ languages!

  • Multi-lingual NER models: Arabic, Chinese, Danish, Dutch, English, Finnish, French, German, Hebrew, Italian, Japanese, Korean, Norwegian, Persian, Polish, Portuguese, Russian, Spanish, Swedish, Urdu and more

  • Natural Language inference

  • Coreference resolution

  • Sentence Completion

  • Word sense disambiguation

  • Clinical entity recognition

  • Clinical Entity Linking

  • Entity normalization

  • Assertion Status Detection

  • De-identification

  • Relation Extraction

  • Clinical Entity Resolution

Citation

We have published a paper that you can cite for the NLU library:

@article{KOCAMAN2021100058,
    title = {Spark NLP: Natural language understanding at scale},
    journal = {Software Impacts},
    pages = {100058},
    year = {2021},
    issn = {2665-9638},
    doi = {https://doi.org/10.1016/j.simpa.2021.100058},
    url = {https://www.sciencedirect.com/science/article/pii/S2665963821000063},
    author = {Veysel Kocaman and David Talby},
    keywords = {Spark, Natural language processing, Deep learning, Tensorflow, Cluster},
    abstract = {Spark NLP is a Natural Language Processing (NLP) library built on top of Apache Spark ML. It provides simple, performant & accurate NLP annotations for machine learning pipelines that can scale easily in a distributed environment. Spark NLP comes with 1100+ pretrained pipelines and models in more than 192+ languages. It supports nearly all the NLP tasks and modules that can be used seamlessly in a cluster. Downloaded more than 2.7 million times and experiencing 9x growth since January 2020, Spark NLP is used by 54% of healthcare organizations as the world’s most widely used NLP library in the enterprise.}
}



Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

Aj_Zsl_nlu-4.2.0.tar.gz (564.0 kB)

Uploaded Source

Built Distribution

Aj_Zsl_nlu-4.2.0-py3-none-any.whl (646.3 kB)

Uploaded Python 3

File details

Details for the file Aj_Zsl_nlu-4.2.0.tar.gz.

File metadata

  • Download URL: Aj_Zsl_nlu-4.2.0.tar.gz
  • Upload date:
  • Size: 564.0 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.10.10

File hashes

Hashes for Aj_Zsl_nlu-4.2.0.tar.gz

| Algorithm | Hash digest |
|-----------|-------------|
| SHA256 | 21e560c3d0c221c0918c89d239fe72a47102d61d8d28f076bb9802925d64cd86 |
| MD5 | ee1e988f26242e4b2b19f651b99cecf3 |
| BLAKE2b-256 | bb465806c3eedbfcb45e31846dd1b1f1ca09d38647599d9787f523dd837e9ae2 |

See more details on using hashes here.

File details

Details for the file Aj_Zsl_nlu-4.2.0-py3-none-any.whl.

File metadata

  • Download URL: Aj_Zsl_nlu-4.2.0-py3-none-any.whl
  • Upload date:
  • Size: 646.3 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.10.10

File hashes

Hashes for Aj_Zsl_nlu-4.2.0-py3-none-any.whl

| Algorithm | Hash digest |
|-----------|-------------|
| SHA256 | 2354b34a0376e186c8b11b296163f5051bb97cbadf1973a3193ae68a186625ce |
| MD5 | 9d1a04966368e3e57e9e5406fe0247a9 |
| BLAKE2b-256 | 1d8c7138e51f5a1a7de8b8a456c7f9559d0f0c0037d50afafab81ac5ebb73e1c |

See more details on using hashes here.
