How to classify text using Word2Vec

What is Transfer Learning?

Transfer learning is one of the most important breakthroughs in machine learning! It lets us reuse models created by others.

Not everyone has access to billions of text snippets, or to the GPUs/TPUs needed to extract patterns from them. But if someone who does can do that work and pass on the learnings, the rest of us can use them directly to solve business problems.

When someone creates a model on a huge generic dataset and passes only the trained model to others for use, this is known as transfer learning: everyone doesn’t have to train a model on such a huge amount of data; instead, they “transfer” the learnings from others to their own system.

Transfer learning is really helpful in NLP, especially for vectorization of text, because converting text to vectors even for 50K records is slow. If we can use pre-trained models from others, that resolves the problem of converting the text data to numeric data, and we can continue with the other tasks, such as classification or sentiment analysis.

Stanford’s GloVe and Google’s Word2Vec are two really popular choices for text vectorization using transfer learning.


Word2Vec

Word2Vec is not a single algorithm but a combination of two techniques: CBOW (Continuous Bag of Words) and the Skip-gram model.

Both are shallow neural networks that map word(s) to a target variable that is also a word (or words). The weights these networks learn act as the word vector representations.

Essentially, each word is represented as a vector of numbers.

Sample Word2Vec vectors

CBOW

CBOW (Continuous Bag of Words) predicts the probability of a word occurring given the words surrounding it. The context can be a single word or a group of words.

Skip-gram model

The Skip-gram model architecture usually tries to achieve the reverse of what the CBOW model does: it tries to predict the source context words (the surrounding words) given a target word (the center word).

Which one should be used?

For a large corpus with higher-dimensional vectors, it is better to use Skip-gram, but it is slower to train. CBOW is better for a small corpus and is faster to train as well.

CBOW vs Skip-gram
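To make the distinction concrete, here is a minimal sketch of training both variants with gensim (4.x API); the toy corpus and all parameters are illustrative assumptions, not part of this case study.

from gensim.models import Word2Vec

corpus = [['support', 'ticket', 'raised', 'for', 'server', 'outage'],
          ['user', 'unable', 'to', 'login', 'to', 'the', 'portal']]

# sg=0 trains CBOW (the default); sg=1 trains the Skip-gram model
cbow_model = Word2Vec(corpus, vector_size=100, window=5, min_count=1, sg=0)
skipgram_model = Word2Vec(corpus, vector_size=100, window=5, min_count=1, sg=1)

print(cbow_model.wv['server'][:5])  # first few values of a learned word vector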

Word2Vec vectors are basically a form of word representation that bridges the human understanding of language to that of a machine.

They are learned representations of text in an n-dimensional space, where words that have the same meaning have a similar representation. That is, two similar words are represented by almost identical vectors (sets of numbers) that are placed very close together in the vector space.

For example, look at the below diagram: the words King and Queen appear close to each other. Similarly, the words Man and Woman appear close to each other, due to the kind of numeric vectors assigned to these words by Word2Vec. If you compute the distance between two words using their numeric vectors, then words that are related to each other by context will have a smaller distance between them.

Word2Vec word representation


Case Study: Support Ticket Classification using Word2Vec

In a previous case study, I showed you how you can convert text data into numeric form using TF-IDF, and then use it to create a classification model to predict the priority of support tickets.

In this case study, I will use the same dataset and show you how you can use the numeric representations of words from Word2Vec to create a classification model.

Problem Statement: Use the Microsoft support ticket text description to classify a new ticket into P1/P2/P3.

You can download the data required for this case study here.

Reading the support ticket data

This data contains 19,796 rows and 2 columns. The column “body” represents the ticket description and the column “urgency” represents the priority.

Support Ticket Data
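A minimal sketch of the reading step; the file name 'supportTicketData.csv' is an assumption, so adjust it to match your download.

import pandas as pd

TicketData = pd.read_csv('supportTicketData.csv')
print(TicketData.shape)   # expected: (19796, 2)
TicketData.head()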


Visualising the distribution of the Target variable

Now we check whether the target variable has a balanced distribution or not, i.e., whether each priority type has enough rows to be learned.

If the data had been imbalanced, for example with very few rows for the P1 category, then you would need to balance it using one of the popular techniques like over-sampling, under-sampling, or SMOTE.
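Assuming the data was loaded as TicketData as sketched above, the distribution plot can be produced like this:

import matplotlib.pyplot as plt

# Bar plot of the number of tickets per priority level
TicketData['urgency'].value_counts().plot(kind='bar', title='Ticket priority distribution')
plt.show()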

Support ticket data target variable

The above bar plot shows that there are enough rows for each ticket type. Hence, this is balanced data for classification.


Count Vectorization: converting text data to numeric

This step removes all the stopwords and creates a document-term matrix.

We will use this matrix for further processing: for each word in the document-term matrix, we will use its Word2Vec numeric vector representation.

Count Vectorization of text data
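A minimal sketch of this step; get_feature_names_out() requires scikit-learn >= 1.0 (older versions use get_feature_names(), as in the comment code below):

from sklearn.feature_extraction.text import CountVectorizer
import pandas as pd

# Building the document-term matrix with English stopwords removed
vectorizer = CountVectorizer(stop_words='english')
X = vectorizer.fit_transform(TicketData['body'])
CountVectorizedData = pd.DataFrame(X.toarray(), columns=vectorizer.get_feature_names_out())

# Attaching the target column, matching the snippets discussed in the comments
CountVectorizedData['Priority'] = TicketData['urgency'].values
print(CountVectorizedData.shape)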


Word2Vec conversion:

Now we will use the Word2Vec representation of words to convert the above document-term matrix to a much smaller matrix, where each row is the sum of the Word2Vec vectors of the words present in that document.

For example, look at the below diagram. The flow is shown for one sentence; the same happens for every sentence in the corpus.

  • The numeric representation of each word is taken from Word2Vec.
  • All the word vectors are added, producing a single vector.
  • That single vector represents the information of the sentence and is hence treated as one row.

word2vec conversion from text

Note: If you feel that your laptop is hanging due to the processing required for the below commands, you can use Google Colab notebooks!

Downloading Google’s word2Vec model

  • We will use the pre-trained Word2Vec model from Google. It contains word vectors for a vocabulary of 3 million words.
  • It was trained on around 100 billion words from the Google News dataset.

Download link: https://drive.google.com/file/d/0B7XkCwpI5KDYNlNUTTlSS21pQmM/edit?usp=sharing

The download is a binary file that contains the numeric representation of each word.
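A minimal sketch of loading the binary with gensim (4.x API); adjust the path to wherever you saved the download, and note that the model takes several GB of RAM.

from gensim.models import KeyedVectors

# Loading the 3-million-word Google News model
GoogleModel = KeyedVectors.load_word2vec_format(
    'GoogleNews-vectors-negative300.bin', binary=True)

print(GoogleModel['hello'].shape)  # (300,): every word maps to a 300-dim vector
print(GoogleModel['hello'][:10])   # first few values of a sample vector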

Word2Vec shape

Sample Word2Vec vector


Finding Similar words

This is one of the interesting features of Word2Vec. You can pass a word and find the words most similar to it.

In the below example, you can see that the most similar words to “king” are “kings” and “queen”. This is possible because of the context learned by the Word2Vec model: since words like “queen” and “prince” are used in the context of “king”, the numeric word vectors for these words have similar values, and hence the cosine similarity score is high.
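The lookup itself is a one-liner with gensim:

# Top-5 words closest to 'king' by cosine similarity
print(GoogleModel.most_similar('king', topn=5))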

Word2Vec similarity

Sample similar words


Converting every sentence to a numeric vector

For each word in a sentence, we extract the numeric form of the word and then simply add all the numeric forms for that sentence to represent the sentence, as sketched below.
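Here is a sketch of that conversion function, reconstructed from the versions discussed in the comments below; it assumes the fitted vectorizer, the TicketData DataFrame, and the loaded GoogleModel from the earlier steps. Deriving the vocabulary from the vectorizer itself avoids the off-by-one boolean-index error some readers ran into.

import numpy as np
import pandas as pd

# The vocabulary of the document-term matrix
WordsVocab = pd.Index(vectorizer.get_feature_names_out())

def FunctionText2Vec(inpTextData):
    # Converting the text to a document-term matrix
    X = vectorizer.transform(inpTextData)
    CountVecData = pd.DataFrame(X.toarray(), columns=WordsVocab)

    # One 300-dimensional row per sentence
    W2Vec_Data = np.zeros([X.shape[0], 300])

    # Looping through each row (sentence) in the data
    for i in range(CountVecData.shape[0]):
        # Initiating the sentence vector with all zeros
        Sentence = np.zeros(300)
        # Adding the Word2Vec vector of every vocabulary word
        # that is present in this sentence
        for word in WordsVocab[CountVecData.iloc[i, :] >= 1]:
            if word in GoogleModel.key_to_index:
                Sentence = Sentence + GoogleModel[word]
        W2Vec_Data[i] = Sentence
    return pd.DataFrame(W2Vec_Data)

# Converting all ticket descriptions to sentence vectors
W2Vec_Data = FunctionText2Vec(TicketData['body'])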


Preparing Data for ML

Word2Vec data for ML
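A sketch of the preparation, reusing the names defined above; the 300 numeric columns become the predictors and the ticket priority becomes the target:

# Attaching the target variable to the sentence vectors
W2Vec_Data['Priority'] = TicketData['urgency'].values
TargetVariable = 'Priority'
Predictors = W2Vec_Data.columns[:-1]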


Splitting the data into training and testing

word2Vec data train test split
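A minimal sketch of the split; the 70/30 ratio and random state are illustrative choices.

from sklearn.model_selection import train_test_split

X = W2Vec_Data[Predictors].values
y = W2Vec_Data[TargetVariable].values
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=428)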


Standardization/Normalization

This is an optional step. It can speed up model training, and it matters for distance-based algorithms such as KNN (see below).
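A minimal sketch, fitting the scaler on the training data only:

from sklearn.preprocessing import StandardScaler

PredictorScalerFit = StandardScaler().fit(X_train)
X_train = PredictorScalerFit.transform(X_train)
X_test = PredictorScalerFit.transform(X_test)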

Standardization output for Word2Vec data


Training ML classification models

Now the data is ready for machine learning. There are 300 predictors and one target variable. We will use the below algorithms and select the best one out of them based on the accuracy scores. You can add more algorithms to this list as per your preference.

  • Naive Bayes
  • KNN
  • Logistic Regression
  • Decision Trees
  • AdaBoost


Naive Bayes

This algorithm trains very fast! The accuracy may not always be very high, but the speed is guaranteed!

I have commented out the cross-validation section just to save computing time. You can uncomment and execute those commands as well.
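A minimal sketch; GaussianNB is assumed here because the summed Word2Vec features are continuous (and can be negative, which rules out MultinomialNB).

from sklearn.naive_bayes import GaussianNB
# from sklearn.model_selection import cross_val_score

NB = GaussianNB()
NB_model = NB.fit(X_train, y_train)
print('Test accuracy:', NB_model.score(X_test, y_test))

# Cross-validation, commented out to save computing time:
# print(cross_val_score(NB, X, y, cv=10, scoring='f1_weighted'))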

Naive Bayes output for Word2Vec data


KNN

This is a distance-based supervised ML algorithm. Make sure you standardize/normalize the data before using this algorithm; otherwise, the accuracy will be low.
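A minimal sketch; the number of neighbors is an illustrative choice to tune.

from sklearn.neighbors import KNeighborsClassifier

KNN = KNeighborsClassifier(n_neighbors=10)
KNN_model = KNN.fit(X_train, y_train)
print('Test accuracy:', KNN_model.score(X_test, y_test))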

KNN model output for word2vec


Logistic Regression

This algorithm also trains very fast. Hence, whenever we are working with high-dimensional data, trying out Logistic Regression is sensible. The accuracy may not always be the best.
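A minimal sketch; max_iter is raised since 300-dimensional data may need more iterations to converge.

from sklearn.linear_model import LogisticRegression

LR = LogisticRegression(max_iter=1000)
LR_model = LR.fit(X_train, y_train)
print('Test accuracy:', LR_model.score(X_test, y_test))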

Logistic regression output for word2vec data


Decision Tree

This algorithm trains more slowly than Naive Bayes or Logistic Regression, but it can produce better results.
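A minimal sketch; the depth cap is an illustrative choice to control over-fitting.

from sklearn.tree import DecisionTreeClassifier

DT = DecisionTreeClassifier(max_depth=20)
DT_model = DT.fit(X_train, y_train)
print('Test accuracy:', DT_model.score(X_test, y_test))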

Decision Tree output for word2vec data


Adaboost

This is a tree-based boosting algorithm. We can use this algorithm if the data is not high-dimensional; otherwise, it takes a lot of time to train.
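A minimal sketch using scikit-learn's defaults (decision stumps as the base learner):

from sklearn.ensemble import AdaBoostClassifier

AB = AdaBoostClassifier(n_estimators=100)
AB_model = AB.fit(X_train, y_train)
print('Test accuracy:', AB_model.score(X_test, y_test))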

Adaboost output for word2vec


Training the best model on full data

The Logistic Regression algorithm produced the highest accuracy on this data, hence we select it as the final model for deployment.
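A sketch of the retraining step; the scaler is refit on the full data as well.

from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

# Retraining the selected model on all rows before deployment
PredictorScalerFit = StandardScaler().fit(X)
Final_model = LogisticRegression(max_iter=1000).fit(
    PredictorScalerFit.transform(X), y)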


Making predictions on new cases

To deploy this model, all we need to do is write a function that takes new data as input, performs all the required pre-processing, and passes the data to the final model.
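A sketch of such a function, reusing the pieces defined in the steps above; the sample ticket text is made up for illustration.

def PredictTicketPriority(inpText):
    # inpText: a list of raw ticket descriptions
    W2Vec_new = FunctionText2Vec(inpText)
    X_new = PredictorScalerFit.transform(W2Vec_new.values)
    return Final_model.predict(X_new)

# Example usage with a hypothetical new ticket
print(PredictTicketPriority(['Unable to connect to the VPN server']))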

prediction output for word2vec


Conclusion

Transfer learning has made NLP research faster by providing an easy way to share the models produced by big companies and build on top of them. Similar to Word2Vec, we have other algorithms like GloVe, Doc2Vec, and BERT, which I have discussed in separate case studies.

I hope this post helped you to understand how Word2Vec vectors are created and how to use them to convert any text into numeric form.

Consider sharing this post with your friends to spread the knowledge and help me grow as well! 🙂

Author Details
Lead Data Scientist
Farukh is an innovator in solving industry problems using Artificial intelligence. His expertise is backed with 10 years of industry experience. Being a senior data scientist he is responsible for designing the AI/ML solution to provide maximum gains for the clients. As a thought leader, his focus is on solving the key business problems of the CPG Industry. He has worked across different domains like Telecom, Insurance, and Logistics. He has worked with global tech leaders including Infosys, IBM, and Persistent systems. His passion to teach inspired him to create this website!

10 thoughts on “How to classify text using Word2Vec”

  1. Hi Farrukh,
    Nice code. Can you let me know if these lines are correct, as I am getting an error while executing them?

    for word in WordsVocab[CountVecData.iloc[i,:]>=1]:
        if word in GoogleModel.key_to_index.keys():
            Sentence=Sentence+GoogleModel[word]

      1. Nice code.
        I had the same error.

        /tmp/ipykernel_7835/1460014381.py in FunctionText2Vec(inpTextData)
        19 # Looping thru each word in the sentence and if its present in
        20 # the Word2Vec model then storing its vector
        ---> 21 for word in WordsVocab[CountVecData.iloc[i,:] >= 1]:
        22 #print(word)
        23 if word in GoogleModel.key_to_index.keys():

        ~/anaconda3/envs/nltk/lib/python3.7/site-packages/pandas/core/indexes/base.py in __getitem__(self, key)
        4614 key = np.asarray(key, dtype=bool)
        4615
        -> 4616 result = getitem(key)
        4617 if not is_scalar(result):
        4618 # error: Argument 1 to "ndim" has incompatible type "Union[ExtensionArray,

        IndexError: boolean index did not match indexed array along dimension 0; dimension is 8764 but corresponding boolean dimension is 8765

        1. I believe the error is in the previous line:

          # CountVectorizedData may not have “columns[:-1]”
          WordsVocab=CountVectorizedData.columns[:-1]

          —-

          # Creating the list of words which are present in the Document term matrix
          WordsVocab=CountVectorizedData.columns[:-1]

          # Printing sample words
          WordsVocab[0:10]

  2. Hey, thanks for the article! Really helpful in getting me up and running on my first NLP classification use case.
    I refactored the sentence pooling function to pre-populate the results object and overwrite rows and to work with numpy arrays instead of Pandas dfs and was able to speed it up around 50x:
    def FunctionText2Vec(inpTextData, CountVecData=CountVecData, vectorizer=vectorizer, vec_len=300, nlp=GoogleModel):
        # Converting the text to numeric data
        X = vectorizer.transform(inpTextData)
        CountVecData = pd.DataFrame(X.toarray(), columns=vectorizer.get_feature_names())

        # Creating empty dataframe to hold sentences
        # W2Vec_Data=pd.DataFrame(np.zeros(X.shape[0], vec_len))
        W2Vec_Data = np.zeros([X.shape[0], vec_len])  # ndarray version

        # Looping through each row for the data
        for i in range(CountVecData.shape[0]):

            # initiating a sentence with all zeros
            Sentence = np.zeros(vec_len)

            # Looping thru each word in the sentence and if its present in
            # the Word2Vec model then storing its vector
            for word in WordsVocab[CountVecData.iloc[i,:] >= 1]:
                # print(word)
                if word in nlp.key_to_index.keys():
                    Sentence = Sentence + nlp[word]

            # inserting the sentence to the dataframe
            W2Vec_Data[i] = Sentence

        W2Vec_pd = pd.DataFrame(W2Vec_Data)
        return W2Vec_pd

  3. Hi, if a word's vector is not present in the Word2Vec model, then you are adding only a zero vector of length 300, right?

  4. Very good tutorial, thanks! But there is a problem in def FunctionText2Vec(inpTextData): I can't run further from it. This is the exact error:
    for word in WordsVocab[CountVecData.iloc[i,:]>=1]:
    ^
    SyntaxError: invalid syntax
