
Word2Vec Library

Project description

Word Embeddings

Distributed representations (DRs) of words (i.e., word embeddings) capture semantic and syntactic regularities of a language by analyzing the distributions of word relations within textual data. Modeling methods that generate DRs rely on the assumption that 'words that occur in similar contexts tend to have similar meanings' (the distributional hypothesis), which stems from the nature of language itself. Due to their unsupervised nature, these methods do not require any human-annotated input to train, which allows researchers to train on very large datasets at relatively low cost.

Traditional representations of words (i.e., one-hot vectors) are based on sparse word-word (W x W) co-occurrence matrices, where W is the number of distinct words in the corpus. Distributed word representations (DRs) (i.e., word embeddings), on the other hand, are dense word-context (W x C) matrices, where C < W and C is the number of context dimensions determined by the underlying model assumptions. Dense representations are arguably better at capturing generalized information and more resistant to overfitting, since context vectors represent shared properties of words. DRs are real-valued vectors in which each context can be considered a continuous feature of a word. Because they represent abstract features of words, DRs can be reused across higher-level tasks with ease, even if they were trained on entirely different datasets.
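
To make the contrast concrete, here is a minimal sketch in plain numpy with illustrative sizes; the names cooccurrence and embeddings are hypothetical and not part of this library:

import numpy as np

W = 10000  # number of distinct words in the corpus
C = 300    # number of context dimensions

# Count-based view: a sparse W x W word-word co-occurrence matrix (mostly zeros).
cooccurrence = np.zeros((W, W), dtype=np.int32)

# Distributed view: a dense W x C word-context matrix; each row is a word embedding.
embeddings = np.random.uniform(-0.5 / C, 0.5 / C, size=(W, C))

# Each context dimension acts as a continuous feature of the word,
# so similarity between words reduces to similarity between rows.
def cosine(u, v):
    return float(u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))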

Prediction-based DR models gained much attention after Mikolov et al.'s neural-network-based SkipGram model in 2013. The secret behind prediction-based models is simple: never build a sparse matrix at all. Instead of reducing sparse matrices to dense ones, they construct dense matrix representations directly. These models are trained like any other supervised learning task, by feeding in many positive and negative samples without incurring any human supervision cost. Their aim is to maximize the probability of each context c under the same distributional assumptions on word-context co-occurrences as count-based models.

SkipGram is a prediction-based distributional semantic model (DSM) consisting of a shallow neural network architecture inspired by neural language modeling (LM) intuitions. It is commonly known through its open-source implementation library, word2vec. SkipGram acts like a log-linear classifier maximizing the prediction of the surrounding words of a word within a context (center window). Probabilistic word and sentence prediction from the local neighbors of a word has been successfully applied to LM tasks under the Markov assumption. SkipGram leverages the same idea by treating the words within the window as positive and negative instances and learning weights (for k contexts) that maximize word predictions. In the training process, each word vector starts as a random vector and then iteratively shifts toward the vectors of its neighboring words.
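
The update described above can be sketched as follows. This is an illustrative skip-gram-with-negative-sampling step in plain numpy, not the library's internal implementation, and all names (sgns_step, W_in, W_out, etc.) are hypothetical:

import numpy as np

def sgns_step(W_in, W_out, center, context, negatives, lr=0.025):
    # W_in, W_out: (V, C) input/output embedding matrices
    # center: index of the center word; context: index of one surrounding word
    # negatives: indices of k randomly sampled negative words
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
    v = W_in[center]
    grad_v = np.zeros_like(v)
    for idx, label in [(context, 1.0)] + [(n, 0.0) for n in negatives]:
        u = W_out[idx]
        g = lr * (sigmoid(v @ u) - label)  # prediction error scaled by learning rate
        grad_v += g * u
        W_out[idx] = u - g * v             # shift the context vector
    W_in[center] = v - grad_v              # shift the center word vector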

Video Lectures

For Developers

You can also see the Cython, Java, C++, Swift, Js, or C# repositories.

Requirements

Python

To check if you have a compatible version of Python installed, use the following command:

python -V

You can find the latest version of Python here.

Git

Install the latest version of Git.

Pip Install

pip3 install NlpToolkit-WordToVec

Download Code

In order to work on the code, create a fork from the GitHub page. Use Git to clone the code to your local machine, or run the line below on Ubuntu:

git clone <your-fork-git-link>

A directory called WordToVec-Py will be created. Alternatively, you can use the link below to explore the code:

git clone https://github.com/starlangsoftware/WordToVec-Py.git

Open the project with the PyCharm IDE

Steps for opening the cloned project:

  • Start the IDE
  • Select File | Open from the main menu
  • Choose the WordToVec-Py folder
  • Select the "Open as Project" option
  • After a couple of seconds, the dependencies will be downloaded.

Detailed Description

To initialize the artificial neural network:

NeuralNetwork(self, corpus: Corpus, parameter: WordToVecParameter)

To train the neural network:

train(self) -> VectorizedDictionary
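
Putting the two calls together, a minimal end-to-end sketch might look like this (assuming the module paths Corpus.Corpus, WordToVec.WordToVecParameter, and WordToVec.NeuralNetwork, and an input file named corpus.txt; check the repository for the exact class and parameter names):

from Corpus.Corpus import Corpus
from WordToVec.NeuralNetwork import NeuralNetwork
from WordToVec.WordToVecParameter import WordToVecParameter

# Read the training corpus from a plain-text file (assumed path).
corpus = Corpus("corpus.txt")

# Use the default parameters; adjust via WordToVecParameter if needed.
parameter = WordToVecParameter()

# Initialize the shallow neural network and train the embeddings.
neural_network = NeuralNetwork(corpus, parameter)
dictionary = neural_network.train()  # returns a VectorizedDictionary of word vectors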

Cite

@inproceedings{ercan-yildiz-2018-anlamver,
	title = "{A}nlam{V}er: Semantic Model Evaluation Dataset for {T}urkish - Word Similarity and Relatedness",
	author = {Ercan, G{\"o}khan  and
  	Y{\i}ld{\i}z, Olcay Taner},
	booktitle = "Proceedings of the 27th International Conference on Computational Linguistics",
	month = aug,
	year = "2018",
	address = "Santa Fe, New Mexico, USA",
	publisher = "Association for Computational Linguistics",
	url = "https://www.aclweb.org/anthology/C18-1323",
	pages = "3819--3836",
}

