Machine Learning :: Text feature extraction (tf-idf) – Part I
Short introduction to Vector Space Model (VSM)
In information retrieval and text mining, term frequency – inverse document frequency (also called tf-idf) is a well-known method for evaluating how important a word is in a document. tf-idf is also a very interesting way to convert the textual representation of information into a Vector Space Model (VSM), or into sparse features; we'll discuss more about that later, but first let's try to understand what tf-idf and the VSM are.
The VSM has a very confusing past; see, for example, the paper The most influential paper Gerard Salton Never Wrote, which explains the history behind the ghost-cited paper that in fact never existed. In sum, the VSM is an algebraic model representing textual information as a vector, where the components of this vector could represent the importance of a term (tf-idf) or even its absence or presence (Bag of Words) in a document. It is important to note that the classical VSM proposed by Salton incorporates local and global parameters/information, in the sense that it uses both the isolated term being analyzed and the entire collection of documents. The VSM, interpreted lato sensu, is a space where text is represented as a vector of numbers instead of its original string representation; the VSM represents the features extracted from the document.
Let's try to mathematically define the VSM and tf-idf together with concrete examples; for the concrete examples I'll be using Python (as well as the amazing scikits.learn Python module).
Going to the vector space
The first step in modeling documents into a vector space is to create a dictionary of the terms present in them. To do that, you can simply select all terms from the documents and convert each one to a dimension in the vector space. But we know that there are some kinds of words (stop words) that are present in almost all documents, and since what we're doing is extracting important features from documents, features that identify them among other similar documents, terms like "the", "is", "at", "on", etc. aren't going to help us, so during the information extraction we'll just ignore them.
Let’s take the documents below to define our (stupid) document space:
Train Document Set:
d1: The sky is blue.
d2: The sun is bright.

Test Document Set:
d3: The sun in the sky is bright.
d4: We can see the shining sun, the bright sun.
Now, what we have to do is create an index vocabulary (dictionary) of the words of the train document set; using the documents $d_1$ and $d_2$ from the document set, we'll have the following index vocabulary, denoted as $\mathbf{E}(t)$, where $t$ is the term:

$$\mathbf{E}(t) = \begin{cases} 1, & \text{if } t \text{ is ``blue''} \\ 2, & \text{if } t \text{ is ``sun''} \\ 3, & \text{if } t \text{ is ``bright''} \\ 4, & \text{if } t \text{ is ``sky''} \end{cases}$$
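Before moving on, here is a minimal sketch of this vocabulary construction in plain Python. This is illustrative only, not how scikit.learn does it: the tokenization is naive and the stop-word list is a tiny stand-in. Also, since it assigns indices in scan order, the numbering it produces differs from the $\mathbf{E}(t)$ above, but any consistent assignment of indices to terms works:

import re

# Tiny stand-in stop-word list, just for illustration.
stop_words = set(["the", "is", "in", "at", "on", "we", "can", "see"])

def build_vocabulary(documents):
    # Assign a sequential index to every new non-stop term we encounter.
    vocabulary = {}
    for document in documents:
        for term in re.findall(r"\w+", document.lower()):
            if term not in stop_words and term not in vocabulary:
                vocabulary[term] = len(vocabulary) + 1  # 1-indexed, like E(t)
    return vocabulary

train_set = ("The sky is blue.", "The sun is bright.")
print build_vocabulary(train_set)
# contains: sky -> 1, blue -> 2, sun -> 3, bright -> 4
# (the key order of the printed dict may vary)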
Note that terms like "is" and "the" were ignored, as cited before. Now that we have an index vocabulary, we can convert the test document set into a vector space where each term of the vector is indexed according to our index vocabulary, so the first term of the vector represents the "blue" term of our vocabulary, the second represents "sun", and so on. Now we're going to use the term frequency to represent each term in our vector space; the term frequency is nothing more than a measure of how many times each term present in our vocabulary is present in the documents $d_3$ or $d_4$. We define the term frequency as a counting function:

$$\mathrm{tf}(t, d) = \sum_{x \in d} \mathrm{fr}(x, t)$$
where $\mathrm{fr}(x, t)$ is a simple function defined as:

$$\mathrm{fr}(x, t) = \begin{cases} 1, & \text{if } x = t \\ 0, & \text{otherwise} \end{cases}$$
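Both definitions translate directly into Python; here is a minimal sketch, assuming a naive tokenization by word characters:

import re

def fr(x, t):
    # fr(x, t) = 1 if the token x is the term t, 0 otherwise
    return 1 if x == t else 0

def tf(t, d):
    # tf(t, d): sum fr(x, t) over all tokens x of the document d
    return sum(fr(x, t) for x in re.findall(r"\w+", d.lower()))

print tf("sun", "We can see the shining sun, the bright sun.")  # 2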
So, what $\mathrm{tf}(t, d)$ returns is how many times the term $t$ is present in the document $d$. An example of this could be $\mathrm{tf}(\text{``sun''}, d_4) = 2$, since we have only two occurrences of the term "sun" in the document $d_4$. Now that you understand how the term frequency works, we can go on to the creation of the document vector, which is represented by:

$$\vec{v_{d_n}} = \big(\mathrm{tf}(t_1, d_n), \mathrm{tf}(t_2, d_n), \ldots, \mathrm{tf}(t_F, d_n)\big)$$
Each dimension of the document vector is represented by a term of the vocabulary; for example, $\mathrm{tf}(t_1, d_2)$ represents the term frequency of term 1, or $t_1$ (which is our "blue" term of the vocabulary), in the document $d_2$.
Let's now show a concrete example of how the documents $d_3$ and $d_4$ are represented as vectors:

$$\vec{v_{d_3}} = \big(\mathrm{tf}(t_1, d_3), \mathrm{tf}(t_2, d_3), \mathrm{tf}(t_3, d_3), \mathrm{tf}(t_4, d_3)\big)$$
$$\vec{v_{d_4}} = \big(\mathrm{tf}(t_1, d_4), \mathrm{tf}(t_2, d_4), \mathrm{tf}(t_3, d_4), \mathrm{tf}(t_4, d_4)\big)$$
which evaluates to:

$$\vec{v_{d_3}} = (0, 1, 1, 1)$$
$$\vec{v_{d_4}} = (0, 2, 1, 0)$$
As you can see, since the documents $d_3$ and $d_4$ are:
d3: The sun in the sky is bright.
d4: We can see the shining sun, the bright sun.
The resulting vector $\vec{v_{d_3}}$ shows that we have, in order, 0 occurrences of the term "blue", 1 occurrence of the term "sun", and so on. In $\vec{v_{d_4}}$, we have 0 occurrences of the term "blue", 2 occurrences of the term "sun", etc.
But wait: since we have a collection of documents, now represented by vectors, we can represent them as a matrix of shape $|D| \times F$, where $|D|$ is the cardinality of the document space (how many documents we have) and $F$ is the number of features, in our case the vocabulary size. The matrix representation of the vectors described above is:

$$M_{|D| \times F} = \begin{bmatrix} 0 & 1 & 1 & 1 \\ 0 & 2 & 1 & 0 \end{bmatrix}$$
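Reusing the tf() sketch from above, the whole matrix for our test set can be produced with a couple of list comprehensions (the vocabulary order is fixed by hand here so that it matches $\mathbf{E}(t)$):

# Build the |D| x F term-frequency matrix for the test documents,
# reusing the tf() sketch defined earlier.
vocabulary = ["blue", "sun", "bright", "sky"]   # fixed to match E(t)
test_set = ("The sun in the sky is bright.",
            "We can see the shining sun, the bright sun.")

matrix = [[tf(term, document) for term in vocabulary]
          for document in test_set]
print matrix  # [[0, 1, 1, 1], [0, 2, 1, 0]]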
As you may have noted, these matrices representing the term frequencies tend to be very sparse (with the majority of their terms zeroed), and that's why you'll commonly see them represented as sparse matrices.
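For example, SciPy can store the same matrix in coordinate (COO) format, keeping only the non-zero entries; here is a small sketch of the idea:

import numpy as np
from scipy.sparse import coo_matrix

dense = np.array([[0, 1, 1, 1],
                  [0, 2, 1, 0]])
sparse = coo_matrix(dense)  # stores only the 5 non-zero entries
print sparse
# (0, 1)  1
# (0, 2)  1
# (0, 3)  1
# (1, 1)  2
# (1, 2)  1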
Python practice
Environment Used: Python v.2.7.2, Numpy 1.6.1, Scipy v.0.9.0, Sklearn (Scikits.learn) v.0.9.
Now that we know the theory behind term frequency and vector space conversion, let's show how easy it is to do using the amazing scikit.learn Python module.
Scikit.learn comes with lots of examples as well as interesting real-life datasets you can use, and also some helper functions to download, for instance, 18k newsgroups posts.
Since we already defined our small train/test dataset before, let’s use them to define the dataset in a way that scikit.learn can use:
train_set = ("The sky is blue.", "The sun is bright.") test_set = ("The sun in the sky is bright.", "We can see the shining sun, the bright sun.")
In scikit.learn, what we have presented as the term frequency is called CountVectorizer, so we need to import it and create a new instance:
from sklearn.feature_extraction.text import CountVectorizer
vectorizer = CountVectorizer()
The CountVectorizer already uses by default an analyzer called WordNGramAnalyzer, which is responsible for converting the text to lowercase, removing accents, extracting tokens, filtering stop words, etc. You can see more information by printing the class information:
print vectorizer

CountVectorizer(analyzer__min_n=1,
    analyzer__stop_words=set(['all', 'six', 'less', 'being', 'indeed',
    'over', 'move', 'anyway', 'four', 'not', 'own', 'through',
    'yourselves', (...)
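To get a feel for what such an analyzer does, here is a simplified sketch of that preprocessing pipeline in plain Python. This is illustrative only, not scikit.learn's actual implementation, and the stop-word list is a tiny stand-in:

import re
import unicodedata

def simple_analyzer(text, stop_words=frozenset(["the", "is", "in", "at", "on"])):
    # Lowercase, strip accents, extract word tokens, drop stop words --
    # roughly the pipeline the default analyzer applies.
    text = unicodedata.normalize("NFKD", unicode(text)).encode("ascii", "ignore")
    return [t for t in re.findall(r"\w+", text.lower()) if t not in stop_words]

print simple_analyzer("The sky is blue.")  # ['sky', 'blue']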
Let's now create the vocabulary index:
vectorizer.fit_transform(train_set)
print vectorizer.vocabulary

{'blue': 0, 'sun': 1, 'bright': 2, 'sky': 3}
See that the vocabulary created is the same as $\mathbf{E}(t)$ (except that it is zero-indexed).
Let’s use the same vectorizer now to create the sparse matrix of our test_set documents:
smatrix = vectorizer.transform(test_set)
print smatrix

(0, 1)  1
(0, 2)  1
(0, 3)  1
(1, 1)  2
(1, 2)  1
Note that the sparse matrix created, called smatrix, is a SciPy sparse matrix with elements stored in coordinate (COO) format. You can, however, convert it into a dense format:
smatrix.todense()

matrix([[0, 1, 1, 1],
        [0, 2, 1, 0]], dtype=int64)
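If you want to see which vocabulary term each column of the matrix corresponds to, one way is to invert the term-to-index mapping. A small sketch, assuming the 0.9-era vocabulary attribute used above (later releases renamed it to vocabulary_):

# Invert the term -> column-index mapping to label the matrix columns.
index_to_term = dict((index, term)
                     for term, index in vectorizer.vocabulary.items())
print [index_to_term[i] for i in range(len(index_to_term))]
# ['blue', 'sun', 'bright', 'sky']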
Note that this is the same matrix we cited earlier in this post, representing the two document vectors $\vec{v_{d_3}}$ and $\vec{v_{d_4}}$.
We'll see in the next post how we define the idf (inverse document frequency) instead of the simple term frequency, as well as how a logarithmic scale is used to adjust the measurement of term frequencies according to their importance, and how we can use the result to classify documents using some of the well-known machine learning approaches.
I hope you liked this post, and if you really liked it, leave a comment so I'll be able to know if there are enough people interested in this series of posts on Machine Learning topics.
As promised, here is the second part of this tutorial series.
References
The classic Vector Space Model
The most influential paper Gerard Salton never wrote
Updates
21 Sep 11 – fixed some typos and the vector notation
22 Sep 11 – fixed import of sklearn according to the new 0.9 release and added the environment section
02 Oct 11 – fixed Latex math typos
18 Oct 11 – added link to the second part of the tutorial series
04 Mar 11 – Fixed formatting issues
"latex path not specified." — all over the text.
I'm using the latex from the wordpress.com service; for me it is working, maybe their service is down for a while =( Thanks for reporting.
The link to The most influential paper Gerard Salton Never Wrote fails. Try the cached copy at CiteSeer: The most influential paper Gerard Salton Never Wrote.
Very enjoyable post! I have pointed to it from my blog: http://tm.durusau.net/?p=15199
Thank you Patrick, I’m glad you liked it. I updated the link with the CiteSeer copy.
Very interesting read. Keep the good work.
Thanks, the mix of actual examples with theory is very handy to see the theory in action, and it helps retain the theory better. Though in this particular post, I was a little disappointed, as I felt it ended too soon. I would like longer articles, but I guess longer articles turn off the majority of readers.
Very interesting blogpost, I’m sure up for more on the topic :)!
I recently had to handle VSM & TF-IDF in Python too, in a text-processing task of returning the most similar strings to an input string. I haven't looked at scikits.learn, but it sure looks useful and straightforward.
I use Gensim (VSM for human beings: http://radimrehurek.com/gensim/) together with NLTK for preparing the data (i.e., word tokenizing, lowercasing, and removing stopwords). I can highly recommend both libraries!
For some more (slightly out of date) details of my approach, see: http://graus.nu/blog/simple-keyword-extraction-in-python/
Thanks for the post, and looking forward to part II :).
Informative Blog Post, helped me a lot in understanding the concept. Please, keep the series going.
Thanks for posting this, would love to see more.
Very well written and interesting!
Thanks for this, most interesting. I look forward to reading your future posts on the subject.
It is very useful and easy for start and is well organized. Thanks.
Thanks, I’m glad you liked it.
Thanks a lot for this writeup. At times it's really good to know what is cooking backstage behind all the fancy and magical functions.
Thank you! It is very useful for me to learn about the vector space model. But I have some doubts; please clarify them for me.
1. In my work I have added terms and inverse document frequency. I want to achieve more accuracy, so I want to add something more; please advise me…
Great post, I will certainly try this out.
I would be interested to see a similar detailed break down on using something like svmlight in conjunction with these techniques.
Thanks!
Hey again – my outputs are slightly different from yours… I think there may have been changes to the module in scikit-learn.
Will let you know what I find out.
Take care
Hello there,
So basically the class feature_extraction.text.Vectorizer in Sklearn is now deprecated and replaced by feature_extraction.text.TfidfVectorizer.
The whole module has been completely re-factored –
here’s the change-log from the Scikit-learn website:
http://scikit-learn.org/dev/whats_new.html
See under ‘API changes summary’ for what’s changed
Just thought I'd give you a heads-up about this.
Enjoyed your post regardless!
Take care
Hello Jaques, great thanks for the feedback !
Thank you for your post. I am currently working on a way to index documents, but with vocabulary terms taken from a thesaurus in SKOS format.
Your posts are interesting and very helpful to me.
Thanks for the feedback Anita, I’m glad you liked it.
Hey, thanks for the very insightful post! I had no idea modules existed in Python that could do that for you (I calculated it the hard way :/)
Just curious, do you happen to know about using tf-idf weighting as a feature selection or text categorization method? I've been looking at many papers (most from China for some reason) but am finding numerous ways of approaching this question.
If there’s any advice or direction to steer me towards as far as additional resources, that would be greatly appreciated.
Hi
I am using python-2.7.3, numpy-1.6.2-win32-superpack-python2.7, scipy-0.11.0rc1-win32-superpack-python2.7, scikit-learn-0.11.win32-py2.7
I tried to repeat your steps but I couldn't print the vectorizer.vocabulary (see below).
Any suggestions?
Regards
Andres Soto
>>> train_set = ("The sky is blue.", "The sun is bright.")
>>> test_set = ("The sun in the sky is bright.",
"We can see the shining sun, the bright sun.")
>>> from sklearn.feature_extraction.text import CountVectorizer
>>> vectorizer = CountVectorizer()
>>> print vectorizer
CountVectorizer(analyzer=word, binary=False, charset=utf-8,
    charset_error=strict, dtype=<type 'long'>, input=content,
    lowercase=True, max_df=1.0, max_features=None, max_n=1, min_n=1,
    preprocessor=None, stop_words=None, strip_accents=None,
    token_pattern=\b\w\w+\b, tokenizer=None, vocabulary=None)
>>> vectorizer.fit_transform(train_set)
<2x6 sparse matrix of type ''
    with 8 stored elements in COOrdinate format>
>>> print vectorizer.vocabulary
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
    print vectorizer.vocabulary
AttributeError: 'CountVectorizer' object has no attribute 'vocabulary'
>>>
use underscore:
print vectorizer.vocabulary_
I tried to fix the parameters of CountVectorizer (analyzer=WordNGramAnalyzer, vocabulary=dict) but it didn't work. Therefore I decided to install sklearn 0.9, and it works, so we could say that everything is OK, but I still would like to know what is wrong with sklearn 0.11.
Hello Andres, what I know is that this API changed a lot in sklearn 0.10/0.11; I heard some discussions about these changes but I can't remember where right now.
Thanks for the great overview, looks like the part 2 link is broken. It would be great if you could fix it. Thank You.
Thanks for the feedback Gavin, the link is ok, it seems that the problem is sourceforge hosting that is throwing some errors.
I am using a Mac running version 0.11, but I got the following error. I wonder how I change this according to the latest API:
>>> train_set
('The sky is blue.', 'The sun is bright.')
>>> vectorizer.fit_transform(train_set)
<2x6 sparse matrix of type ''
    with 8 stored elements in COOrdinate format>
>>> print vectorizer.vocabulary
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'CountVectorizer' object has no attribute 'vocabulary'
Hello, Mr. Perone! Thank you very much, I'm a newbie in TF-IDF and your posts have helped me a lot to understand it. Greetings from Japan^^
Great thanks for the feedback, I'm very glad you liked it and that the post helped you!
Wonderful post… It helped me understand the VSM concept 🙂
Thanks for this awesome post! Eminently readable introduction to the topic.
very well written. Good example presented in a form that makes it easy to follow and understand. Curious now to read more…
Thank you for sharing your knowledge
Thomas, Germany
Thanks Thomas, I appreciate your feedback.
Hi. I am having trouble understanding how to compute tf-idf weights for a text file I have which contains 300k lines of text. Each line is considered as a document. Example excerpt from the text file:
hacking hard
jeetu smart editor
shyamal vizualizr
setting demo hacks
vivek mans land
social routing guys minute discussions learn photography properly
naseer ahmed yahealer
sridhar vibhash yahoo search mashup
vaibhav chintan facebook friend folio
vaibhav chintan facebook friend folio
judgess comments
slickrnot
I’m pretty confused as to what I should do. Thanks
Thanks. It was helpful. Was looking for a good Python Vectorizer tutorial.
That’s really interesting post, thanks a lot!
Thanks for the feedback Igor.
Really helpful post
Really a very good effort in explaining in such a simple way.
It is a very good text.
Thanks for explanation.
It's giving the idf vector as zero if the test set is the same as the train set. Why???
Thank you very much! I encourage you to continue; this is a very helpful post <3
Thanks
A well written clear explanation
Definitely a reference when taking the first steps in text mining.
Thank you Matthieu !
Good post with example
Thank you very much. This post helped me a lot. Very well written and a clear explanation.
Hannah
Awesome stuff. I really appreciate the simplicity and clarity of the information. A great, great help.
Great thanks Tim, I’m glad you liked it.
Thanks for the great post. You have explained it in simple words, so that a novice like me can understand. Will move on to read the next part!
Solution to question of Andres and Gavin:
>>> print vectorizer.vocabulary_
(with underscore at the end in new versions of scikit!)
will output:
{u’blue’: 0, u’bright’: 1, u’sun’: 4, u’is’: 2, u’sky’: 3, u’the’: 5}
Hey brother,
Do you know exactly what the difference is between (vectorizer.vocabulary_) and (vectorizer.get_feature_names())?
Thanks, a great one and useful!
"… you can simple select …" -> "… you can simply select …"
Very nice post!
You made tf-idf look really interesting. I am looking forward to some more such posts.
Hi Christian,
Thank you for sharing. Looking forward to more updates from you.
This is Great!
Really helpful for starters like me!
Thanks & Keep up the Good Work!
Cheers!
Could you please tell me what the difference is between the feature names and vocabulary_?
I printed them both after vectorizing; they seem to have different words??
Also, I need to print out the most informative words in each class; could you suggest a way, please?
thanks
Thank you. This was a very informative post.
nice one….helped a lot…!!
Thanks…very nicely explained. I feel I could understand the concept and now I will experiment. It will be very helpful in my work
Thank you for your post, it is very helpful. If possible, can you tell how it will work in MATLAB?
Great and simple. Thanks,very helpful!
Thanks .. it was very inspiring tutorial for me
Well written blog..I really loved it..:) Thank you..
Hi, you have a nice blog.
You mentioned that in text mining, stop words like "the, is, at, on", etc. aren't going to help us. This is partly true; for example, when analyzing webpages, you want to ignore the advertisements on a page, and one good approach is to ignore those sentences that do not have stop words, as compared to normal sentences, which do have these words.
Very helpful. Like your writing style.
I tried this out, but did not quite get the expected result. Please see below:

import nltk
from sklearn.feature_extraction.text import CountVectorizer

train_set = ("The sky is blue.", "The sun is bright.")
test_set = ("The sun in the sky is bright.", "We can see the shining sun, the bright sun.")

vectorizer = CountVectorizer()
stopwords = nltk.corpus.stopwords.words('english')
vectorizer.stop_words = stopwords
print vectorizer
vectorizer.fit_transform(train_set)
print vectorizer.vocabulary

And I get 'None'
Instead, try:
print(vectorizer.vocabulary_)
great, it works. Thanks!
very helpful post!
Great post! Thanks.
Great tutorial!! Thankz
Really nice tutorial. Very helpful to get some context additional to the official skikit-learn tutorial and user guide. Thanks.
It's cool work
Thank you for helping in understanding.
Thank you so much, Christian! This post helped me a lot!
I need a Java program for indexing a set of files by computing tf and idf. Please help me.
Thanks a lot……..Post really helped me a lot!!!!!!!!!
The article is helpful. Thanks.
As a PhD candidate in sociology who is diving into the world of machine learning, this post was also very helpful for me. Thanks!
This is very helpful … it gave me thorough understanding of the concepts …
Great article!! Read much but this belongs definitely to the “good stuff”!!!
It was cool man
Hello there!
I have a question regarding natural language processing. There are two terms in this field ‘feature extraction’ and ‘feature selection’. I don’t exactly understand the difference between them and whether we only use one of them or is it possible to use both for text classification?
My second question is whether ‘tf’ and ‘tfidf’ are considered feature extraction methods in NLP?
TypeError: __init__() got an unexpected keyword argument 'analyzer__stop_words'
Could you help with this error?
It is an interesting article indeed. Personally, I know everything that has been mentioned in this post and I did all of it before, but sometimes it is worth spending a little time to review some stuff that you already know. Keep up the good work!
Really appreciate you taking the time to write this post.
Pretty detailed and well explained.
Appreciated.
I tried print(vectorizer.vocabulary_) and it works, but my output is:
{'the': 5, 'sky': 3, 'is': 2, 'blue': 0, 'sun': 4, 'bright': 1}
Do you know why it didn't ignore "is" and "the"?
I was also facing the same issue but found the solution. You can initialize the vectorizer as follows:
vectorizer = CountVectorizer(stop_words="english")
The above will drop all English stop words.
cheers..
Nice work Christian…
This is by far the best article on TF-IDF and Vector spaces. Thank you for posting such a helpful article. Please keep writing more articles on Machine learning basics and concepts. Thank you!
Very nice explanation, Allah bless you.
Good tutorial. It explains things in a simple and clear way to newbies like me… Thanks for sharing it…
Thank you
You made it so easy to understand!
excellent article…very informative and way of explanation is very good.
Thank you so much!!!
This post helped a lot….waiting for next article….
print vectorizer.vocabulary_ — the trailing underscore (_) is missing.
The CountVectorizer() method for stop-word removal does not seem to be clear; please complete the function with the correct syntax.
Great post..Very clean explanation of the concept. Python codes are an added bonus. Thank you 🙂
Thanks Christian, very good. Too bad it took me this long to start studying this.
Hi! I'm currently making a search engine for journals with the tf-idf method for my undergraduate thesis, but my professor said that my method is too old. Could you recommend some newer methods from the past 5 years? Or maybe a method to optimize tf-idf? Additional research papers about the method would be great. Thank you very much!
Very interesting and succinct read! I am ramping up on ML and it really helped. Going to read your other posts too.
This post is so great, keep up the good work.
Very well explained with examples step by step. Easy to understand and really very helpful. Thanks a lot for such efforts. Please post further also.
Thanks for detailed explanation
Detailed and simplified explanation .
Thank you so much !!! Keep up the good work 🙂
Hello Christian,
I want to thank you for the great work you are doing. I was given a project on a similarity scoring system that uses modern plagiarism-checking technology and returns a similarity score for student submissions based on how similar the answers provided by 2 students are. My question is: I have already started learning Python; which NLP library do I learn to achieve this? I already have C# and ASP.NET experience. Please help me.