c, Python, Uncategorized

Hacking into Python objects internals

You know, Python represents every object using the low-level C API PyObject structure (or PyVarObject for variable-size objects), so, concretely, you can cast any Python object pointer to this type. This inheritance is built by hand: every new object type must start with the macro PyObject_HEAD, which defines the PyObject header for the object. The PyObject structure is declared in Include/object.h as:

[enlighter lang="c"]
typedef struct _object {
    PyObject_HEAD
} PyObject;
[/enlighter]

and the PyObject_HEAD macro is defined as:

[enlighter lang="c"]
#define PyObject_HEAD                   \
    _PyObject_HEAD_EXTRA                \
    Py_ssize_t ob_refcnt;               \
    struct _typeobject *ob_type;
[/enlighter]

… with two fields (forget the _PyObject_HEAD_EXTRA, it's only used for a tracing debug feature) called ob_refcnt and ob_type, representing the reference count of the object and the type of the object. I know you can use sys.getrefcount to get the reference count of an object, but hacking the object memory using ctypes is by far more powerful, since you can get the contents of any field of the object (in cases where you don't have a native API for that). I'll show more examples later, but let's focus on the reference counting field of the object.
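Before poking at raw offsets, here is a minimal sketch of how you can mirror that header with a ctypes.Structure and read both fields of any object. This is an assumption-laden illustration: it assumes a CPython 2.x build without Py_TRACE_REFS (so no _PyObject_HEAD_EXTRA fields) and the field layout shown above, and the PyObjectHead class name is just illustrative:

[enlighter lang="python"]
import ctypes

class PyObjectHead(ctypes.Structure):
    # Mirrors PyObject_HEAD: ob_refcnt (Py_ssize_t) followed by ob_type (a pointer).
    _fields_ = [("ob_refcnt", ctypes.c_ssize_t),
                ("ob_type", ctypes.c_void_p)]

value = 666
header = PyObjectHead.from_address(id(value))
print header.ob_refcnt        # reference count of the 'value' object
print hex(header.ob_type)     # address of the object's type structure
[/enlighter]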

Getting the reference count (ob_refcnt)

So, in Python, we have the built-in function id(). This function returns the identity of the object, but, looking at its definition in the CPython implementation, you'll notice that id() returns the memory address of the object; see the source in Python/bltinmodule.c:

[enlighter lang="c"]
static PyObject *
builtin_id(PyObject *self, PyObject *v)
{
    return PyLong_FromVoidPtr(v);
}
[/enlighter]

… the function PyLong_FromVoidPtr returns a Python long object from a void pointer. So, in CPython, this value is the address of the object in memory, as shown below:

[enlighter lang="python"]
>>> value = 666
>>> hex(id(value))
'0x8998e50' # memory address of the 'value' object
[/enlighter]

Now that we have the memory address of the object, we can use the Python ctypes module to get the reference count by accessing the ob_refcnt attribute; here is the code needed to do that:

[enlighter lang="python"]
>>> import ctypes
>>> value = 666
>>> value_address = id(value)
>>>
>>> ob_refcnt = ctypes.c_long.from_address(value_address)
>>> ob_refcnt
c_long(1)
[/enlighter]

What I'm doing here is getting the integer value of the ob_refcnt field of the PyObject in memory. Let's add a new reference to the 'value' object we created, and then check the reference count again:

[enlighter lang=”python”]
>>> value_ref = value
>>> id(value_ref) == id(value)
True
>>> ob_refcnt
c_long(2)
[/enlighter]

Note that the reference counting was increased by 1 due to the new reference variable called ‘value_ref’.
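As a sanity check, you can compare the raw ob_refcnt reading against sys.getrefcount; keep in mind that sys.getrefcount reports one extra reference, because passing the object as an argument to the function temporarily creates another reference. Continuing the session above, you would typically see:

[enlighter lang="python"]
>>> import sys
>>> sys.getrefcount(value)   # ob_refcnt plus the temporary reference held by the call itself
3
>>> ob_refcnt
c_long(2)
[/enlighter]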

Interned strings state (ob_sstate)

Now, getting the reference count wasn't even funny, we already had the sys.getrefcount API for that, but what about the interned state of strings? In order to avoid creating different allocations for the same string (and to speed up comparisons), Python uses a dictionary that works like a “cache” for strings; this dictionary is defined in Objects/stringobject.c:

[enlighter lang="c"]
/* This dictionary holds all interned strings.  Note that references to
   strings in this dictionary are *not* counted in the string's ob_refcnt.
   When the interned string reaches a refcnt of 0 the string deallocation
   function will delete the reference from this dictionary.

   Another way to look at this is that to say that the actual reference
   count of a string is:  s->ob_refcnt + (s->ob_sstate?2:0)
*/
static PyObject *interned;
[/enlighter]

I also copied here the comment about the dictionary, because it is interesting to note that the references from this dictionary aren't counted in the string's ob_refcnt.

So, the interned state of a string object is held in the ob_sstate attribute of the string object. Let's see the definition of the Python string object:

[enlighter lang="c"]
typedef struct {
    PyObject_VAR_HEAD
    long ob_shash;
    int ob_sstate;
    char ob_sval[1];

    /* Invariants:
     *     ob_sval contains space for 'ob_size+1' elements.
     *     ob_sval[ob_size] == 0.
     *     ob_shash is the hash of the string or -1 if not computed yet.
     *     ob_sstate != 0 iff the string object is in stringobject.c's
     *       'interned' dictionary; in this case the two references
     *       from 'interned' to this object are *not counted* in ob_refcnt.
     */
} PyStringObject;
[/enlighter]

As you can see, string objects start with the PyObject_VAR_HEAD macro, which defines another header attribute. Let's see its definition to get the complete picture of the structure:

[enlighter lang="c"]
#define PyObject_VAR_HEAD               \
    PyObject_HEAD                       \
    Py_ssize_t ob_size; /* Number of items in variable part */
[/enlighter]

The PyObject_VAR_HEAD macro adds another field called ob_size, which is the number of items in the variable part of the Python object (i.e. the number of items in a list object). So, before getting to the ob_sstate field, we need to shift our offset to skip the fields ob_refcnt (long) and ob_type (void*) (from PyObject_HEAD), the field ob_size (long) (from PyObject_VAR_HEAD) and the field ob_shash (long) from the PyStringObject. Concretely, we need to skip an offset of three long-sized fields plus one void*-sized field:

[enlighter lang="python"]
>>> ob_sstate_offset = ctypes.sizeof(ctypes.c_long)*3 + ctypes.sizeof(ctypes.c_voidp)
>>> ob_sstate_offset
16
[/enlighter]

Now, let's prepare two cases, one string that we know isn't interned and another that surely is interned; then we'll force the interning of the non-interned string and check the result of the ob_sstate attribute:

[enlighter lang="python"]
>>> a = "lero"
>>> b = "".join(["l", "e", "r", "o"])
>>> ctypes.c_long.from_address(id(a) + ob_sstate_offset)
c_long(1)
>>> ctypes.c_long.from_address(id(b) + ob_sstate_offset)
c_long(0)
>>> ctypes.c_long.from_address(id(intern(b)) + ob_sstate_offset)
c_long(1)
[/enlighter]

Note that the interned state for the object “a” is 1 and for the object “b” is 0. After forcing the interning of “b” with intern(), we can see that the ob_sstate field has changed to 1.

Changing internal states (evil mode)

Now, let’s suppose we want to change some internal state of a Python object through the interpreter. Let’s try to change the value of an int object. Int objects are defined in Include/intobject.h:

[enlighter lang="c"]
typedef struct {
    PyObject_HEAD
    long ob_ival;
} PyIntObject;
[/enlighter]

As you can see, the internal value of an int is stored in the ob_ival field; to change it, we just need to skip the ob_refcnt (long) and the ob_type (void*) fields from the PyObject_HEAD:

[enlighter lang="python"]
>>> value = 666
>>> ob_ival_offset = ctypes.sizeof(ctypes.c_long) + ctypes.sizeof(ctypes.c_voidp)
>>> ob_ival = ctypes.c_int.from_address(id(value)+ob_ival_offset)
>>> ob_ival
c_long(666)
>>> ob_ival.value = 8
>>> value
8
[/enlighter]

And that is it, we have changed the value of the int object directly in memory.

I hope you liked it; you can play with lots of other Python objects like lists and dicts. Note that this method is just intended to show how Python objects are structured in memory and how you can inspect and change them using the native API, but obviously, you're not supposed to use this to change the value of ints lol.

Update 11/29/11: you're not supposed to do such things in your production code or anything like that; in this post I'm making lazy assumptions about arch details like sizes of primitives, etc. Be warned.

Cite this article as: Christian S. Perone, "Hacking into Python objects internals," in Terra Incognita, 23/11/2011, https://blog.christianperone.com/2011/11/hacking-into-python-objects-internals/.
CPP

C++11 user-defined literals and some constructions

I was taking a look at the proposal N2765 (user-defined literals), already implemented in the development snapshots of GCC 4.7, and I was thinking about how user-defined literals can be used to create some interesting and sometimes strange constructions.

Introduction to user-defined literals

C++03 has some literals, like the “f” in “12.2f” that converts the double value to float. The problem is that these literals aren't very flexible, since they're pretty much fixed: you can't change them or create new ones. To overcome this situation, C++11 introduced the concept of “user-defined literals”, which gives the user the ability to create new custom literal modifiers. User-defined literals can create either built-in types (e.g. int) or user-defined types (e.g. classes), and much of their usefulness comes from the fact that they can return objects instead of only primitives.

The new syntax for the user-defined literals is:

[enlighter lang="C++"]
OutputType operator "" _suffix(const char *literal_string);
[/enlighter]

… in the case of a literal string. The OutputType is anything you want (object or primitive) and “_suffix” is the name of the literal modifier. The leading underscore isn't required, but if you don't use it you'll get warnings telling you that suffixes not preceded by an underscore are reserved for future standardization.

Examples

Kmh to Mph converter

[enlighter lang="C++" escaped="true" lines="1000"]
#include <iostream>

// stupid converter class
class Converter
{
public:
    Converter(double kmph) : m_kmph(kmph) {};
    ~Converter() {};

    double to_mph(void)
    { return m_kmph / 1.609344; }

private:
    double m_kmph;
};

// user-defined literal (takes long double, so it matches floating-point literals)
Converter operator "" kmph(long double kmph)
{ return Converter(kmph); }

int main(void)
{
    std::cout << "Converter: " << (80.0kmph).to_mph() << std::endl;
    // note that I'm using parenthesis in order to
    // be able to call the 'to_mph' method
    return 0;
}
[/enlighter]

Note that the literal operator for numeric types should take either long double (for floating-point literals) or unsigned long long (for integral literals). There is no signed type, because a signed literal is parsed as an expression with the sign as a unary prefix and the unsigned number part.

std::string literal

[enlighter lang="C++" escaped="true" lines="1000"]
#include <iostream>
#include <string>
#include <cstddef>

std::string operator "" s (const char* p, size_t n)
{ return std::string(p, n); }

int main(void)
{
    std::cout << "convert me to a string"s.length() << std::endl;
    // here you don't need the parenthesis, note that the
    // c-string was automagically converted to std::string
    return 0;
}
[/enlighter]

system() call

[enlighter lang="C++" escaped="true" lines="1000"]
#include <cstdlib>
#include <cstddef>

int operator "" ex(const char *cmd, size_t num_chars)
{ return system(cmd); }

int main(void)
{
    "ls -lah"ex;
    return 0;
}
[/enlighter]

alias and std::map

[enlighter lang="C++" escaped="true" lines="1000"]
#include <iostream>
#include <map>
#include <string>

typedef std::map<std::string, int> MyMap;

MyMap create_map()
{
    MyMap m;
    m["lol"] = 7;
    return m;
}

auto m = create_map();

int& operator "" m(const char *key, size_t length)
{ return m[key]; }

int main(void)
{
    std::cout << "lol"m << std::endl; // 7
    "lol"m = 2;
    std::cout << "lol"m << std::endl; // 2
    return 0;
}
[/enlighter]

References

Wikipedia :: C++11 (User-defined literals)

Proposal N2765

Machine Learning, Python

Machine Learning :: Text feature extraction (tf-idf) – Part II

Read the first part of this tutorial: Text feature extraction (tf-idf) – Part I.

This post is a continuation of the first part, where we started to learn the theory and practice of text feature extraction and vector space model representation. I really recommend that you read the first part of the series in order to follow this second post.

Since a lot of people liked the first part of this tutorial, this second part is a little longer than the first.

Introduction

In the first post, we learned how to use the term frequency to represent textual information in the vector space. However, the main problem with the term-frequency approach is that it scales up frequent terms and scales down rare terms, which are empirically more informative than the high-frequency terms. The basic intuition is that a term that occurs frequently in many documents is not a good discriminator, and this really makes sense (at least in many experimental tests); the important question here is: why would you, in a classification problem for instance, emphasize a term which is present in almost the entire corpus of your documents?

The tf-idf weight comes to solve this problem. What tf-idf gives is a measure of how important a word is to a document in a collection, and that's why tf-idf incorporates local and global parameters: it takes into consideration not only the isolated term but also the term within the document collection. What tf-idf then does to solve that problem is to scale down the frequent terms while scaling up the rare terms; a term that occurs 10 times more than another isn't 10 times more important than it, and that's why tf-idf uses the logarithmic scale to do it.

But let's go back to our definition of \mathrm{tf}(t,d), which is actually the term count of the term t in the document d. The use of this simple term frequency could lead us to problems like keyword spamming, which is when we have a repeated term in a document with the purpose of improving its ranking on an IR (Information Retrieval) system, or even create a bias towards long documents, making them look more important than they are just because of the high frequency of the term in the document.

To overcome this problem, the term frequency \mathrm{tf}(t,d) of a document on a vector space is usually also normalized. Let’s see how we normalize this vector.

Vector normalization

Suppose we are going to normalize the term-frequency vector \vec{v_{d_4}} that we have calculated in the first part of this tutorial. The document d4 from the first part of this tutorial had this textual representation:

d4: We can see the shining sun, the bright sun.

And the vector space representation using the non-normalized term-frequency of that document was:

\vec{v_{d_4}} = (0,2,1,0)

Normalizing the vector is the same as calculating its unit vector, and unit vectors are denoted using the “hat” notation: \hat{v}. The definition of the unit vector \hat{v} of a vector \vec{v} is:

  \displaystyle \hat{v} = \frac{\vec{v}}{\|\vec{v}\|_p}

Where the \hat{v} is the unit vector, or the normalized vector, the \vec{v} is the vector going to be normalized and the \|\vec{v}\|_p is the norm (magnitude, length) of the vector \vec{v} in the L^p space (don’t worry, I’m going to explain it all).

The unit vector is actually nothing more than a normalized version of the vector; it is a vector whose length is 1.

The normalization process (Source: http://processing.org/learning/pvector/)

But the important question here is how the length of the vector is calculated and to understand this, you must understand the motivation of the L^p spaces, also called Lebesgue spaces.

Lebesgue spaces

How long is this vector? (Source: http://processing.org/learning/pvector/)

Usually, the length of a vector \vec{u} = (u_1, u_2, u_3, \ldots, u_n) is calculated using the Euclidean norm (a norm is a function that assigns a strictly positive length or size to all vectors in a vector space), which is defined by:


  \|\vec{u}\| = \sqrt{u^2_1 + u^2_2 + u^2_3 + \ldots + u^2_n}

But this isn’t the only way to define length, and that’s why you see (sometimes) a number p together with the norm notation, like in \|\vec{u}\|_p. That’s because it could be generalized as:

  \displaystyle \|\vec{u}\|_p = ( \left|u_1\right|^p + \left|u_2\right|^p + \left|u_3\right|^p + \ldots + \left|u_n\right|^p )^\frac{1}{p}

and simplified as:

  \displaystyle \|\vec{u}\|_p = (\sum\limits_{i=1}^{n}\left|\vec{u}_i\right|^p)^\frac{1}{p}

So when you read about an L2-norm, you're reading about the Euclidean norm, the norm with p=2, which is the most common norm used to measure the length of a vector and is typically called “magnitude”; actually, when you see an unqualified length measure (without the p number), you're looking at the L2-norm (Euclidean norm).

When you read about an L1-norm, you're reading about the norm with p=1, defined as:

  \displaystyle \|\vec{u}\|_1 = ( \left|u_1\right| + \left|u_2\right| + \left|u_3\right| + \ldots + \left|u_n\right|)

which is nothing more than a simple sum of the absolute values of the components of the vector, also known as the taxicab distance or Manhattan distance.

Taxicab geometry versus Euclidean distance: In taxicab geometry all three pictured lines have the same length (12) for the same route. In Euclidean geometry, the green line has length 6 \times \sqrt{2} \approx 8.48, and is the unique shortest path.
Source: Wikipedia :: Taxicab Geometry

Note that you can also use any norm to normalize the vector, but we're going to use the most common norm, the L2-norm, which is also the default in the 0.9 release of scikits.learn. You can also find papers comparing the performance of the two approaches, among other methods to normalize the document vector; actually, you can use any other method, but you have to be consistent: once you've used a norm, you have to use it for the whole process directly involving the norm (a unit vector that was normalized with the L1-norm isn't going to have length 1 if you later take its L2-norm).
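To make the difference between the two norms concrete, here is a small sketch (using numpy, which we'll also use in the practice section) that computes both norms for our example vector and shows that an L1-normalized vector does not have an L2-norm of 1:

import numpy as np

v = np.array([0.0, 2.0, 1.0, 0.0])

l1 = np.abs(v).sum()           # L1-norm (taxicab): 3.0
l2 = np.sqrt((v ** 2).sum())   # L2-norm (Euclidean): sqrt(5) ~= 2.236

print l1, l2

# normalizing with the L1-norm doesn't give a vector with unit L2 length
v_l1 = v / l1
print np.sqrt((v_l1 ** 2).sum())   # ~= 0.745, not 1.0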

Back to vector normalization

Now that you know what the vector normalization process is, we can try a concrete example, the process of using the L2-norm (we'll use the right terms now) to normalize our vector \vec{v_{d_4}} = (0,2,1,0) in order to get its unit vector \hat{v_{d_4}}. To do that, we'll simply plug it into the definition of the unit vector to evaluate it:

  \hat{v} = \frac{\vec{v}}{\|\vec{v}\|_p} \\ \\  \hat{v_{d_4}} = \frac{\vec{v_{d_4}}}{||\vec{v_{d_4}}||_2} \\ \\ \\  \hat{v_{d_4}} = \frac{(0,2,1,0)}{\sqrt{0^2 + 2^2 + 1^2 + 0^2}} \\ \\  \hat{v_{d_4}} = \frac{(0,2,1,0)}{\sqrt{5}} \\ \\  \small \hat{v_{d_4}} = (0.0, 0.89442719, 0.4472136, 0.0)

And that is it ! Our normalized vector \hat{v_{d_4}} has now a L2-norm \|\hat{v_{d_4}}\|_2 = 1.0.

Note that here we have normalized our term frequency document vector, but later we’re going to do that after the calculation of the tf-idf.
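If you want to double-check this result by hand, a short numpy sketch reproduces it (np.linalg.norm computes the L2-norm by default):

import numpy as np

v_d4 = np.array([0.0, 2.0, 1.0, 0.0])
v_d4_hat = v_d4 / np.linalg.norm(v_d4)

print v_d4_hat                    # [ 0.          0.89442719  0.4472136   0.        ]
print np.linalg.norm(v_d4_hat)    # 1.0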

 The term frequency – inverse document frequency (tf-idf) weight

Now you have understood how the vector normalization works in theory and practice, let’s continue our tutorial. Suppose you have the following documents in your collection (taken from the first part of tutorial):

Train Document Set:

d1: The sky is blue.
d2: The sun is bright.

Test Document Set:

d3: The sun in the sky is bright.
d4: We can see the shining sun, the bright sun.

Your document space can be defined then as D = \{ d_1, d_2, \ldots, d_n \} where n is the number of documents in your corpus, and in our case as D_{train} = \{d_1, d_2\} and D_{test} = \{d_3, d_4\}. The cardinality of our document space is defined by \left|{D_{train}}\right| = 2 and \left|{D_{test}}\right| = 2, since we have only two documents for training and two for testing, but the two sets obviously don't need to have the same cardinality.

Let’s see now, how idf (inverse document frequency) is then defined:

  \displaystyle \mathrm{idf}(t) = \log{\frac{\left|D\right|}{1+\left|\{d : t \in d\}\right|}}

where \left|\{d : t \in d\}\right| is the number of documents where the term t appears (that is, where the term-frequency function satisfies \mathrm{tf}(t,d) \neq 0); we're only adding 1 into the formula to avoid zero-division.

The formula for the tf-idf is then:

  \mathrm{tf\mbox{-}idf}(t) = \mathrm{tf}(t, d) \times \mathrm{idf}(t)

and this formula has an important consequence: a high weight of the tf-idf calculation is reached when you have a high term frequency (tf) in the given document (local parameter) and a low document frequency of the term in the whole collection (global parameter).

Now let’s calculate the idf for each feature present in the feature matrix with the term frequency we have calculated in the first tutorial:

  M_{train} =  \begin{bmatrix}  0 & 1 & 1 & 1\\  0 & 2 & 1 & 0  \end{bmatrix}

Since we have 4 features, we have to calculate \mathrm{idf}(t_1), \mathrm{idf}(t_2), \mathrm{idf}(t_3), \mathrm{idf}(t_4):

  \mathrm{idf}(t_1) = \log{\frac{\left|D\right|}{1+\left|\{d : t_1 \in d\}\right|}} = \log{\frac{2}{1}} = 0.69314718

 

  \mathrm{idf}(t_2) = \log{\frac{\left|D\right|}{1+\left|\{d : t_2 \in d\}\right|}} = \log{\frac{2}{3}} = -0.40546511

 

  \mathrm{idf}(t_3) = \log{\frac{\left|D\right|}{1+\left|\{d : t_3 \in d\}\right|}} = \log{\frac{2}{3}} = -0.40546511

 

  \mathrm{idf}(t_4) = \log{\frac{\left|D\right|}{1+\left|\{d : t_4 \in d\}\right|}} = \log{\frac{2}{2}} = 0.0

These idf weights can be represented by a vector as:

  \vec{idf_{train}} = (0.69314718, -0.40546511, -0.40546511, 0.0)
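These idf values can be reproduced with a few lines of numpy, counting the document frequency of each term directly from the term-frequency matrix (a small sketch, using the same +1 smoothing as the formula above):

import numpy as np

M_train = np.array([[0, 1, 1, 1],
                    [0, 2, 1, 0]])

n_docs = M_train.shape[0]             # |D| = 2
doc_freq = (M_train > 0).sum(axis=0)  # number of documents where each term appears

idf = np.log(float(n_docs) / (1 + doc_freq))
print idf
# [ 0.69314718 -0.40546511 -0.40546511  0.        ]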

Now that we have our matrix with the term frequency (M_{train}) and the vector representing the idf for each feature of our matrix (\vec{idf_{train}}), we can calculate our tf-idf weights. What we have to do is a simple multiplication of each column of the matrix M_{train} with the respective \vec{idf_{train}} vector dimension. To do that, we can create a square diagonal matrix called M_{idf} with both the vertical and horizontal dimensions equal to the vector \vec{idf_{train}} dimension:

  M_{idf} =   \begin{bmatrix}   0.69314718 & 0 & 0 & 0\\   0 & -0.40546511 & 0 & 0\\   0 & 0 & -0.40546511 & 0\\   0 & 0 & 0 & 0   \end{bmatrix}

and then multiply it to the term frequency matrix, so the final result can be defined then as:

  M_{tf\mbox{-}idf} = M_{train} \times M_{idf}

Please note that matrix multiplication isn't commutative: the result of A \times B will be different from the result of B \times A, and this is why the M_{idf} is on the right side of the multiplication, to accomplish the desired effect of multiplying each idf value by its corresponding feature:

   \begin{bmatrix}   \mathrm{tf}(t_1, d_1) & \mathrm{tf}(t_2, d_1) & \mathrm{tf}(t_3, d_1) & \mathrm{tf}(t_4, d_1)\\   \mathrm{tf}(t_1, d_2) & \mathrm{tf}(t_2, d_2) & \mathrm{tf}(t_3, d_2) & \mathrm{tf}(t_4, d_2)   \end{bmatrix}   \times   \begin{bmatrix}   \mathrm{idf}(t_1) & 0 & 0 & 0\\   0 & \mathrm{idf}(t_2) & 0 & 0\\   0 & 0 & \mathrm{idf}(t_3) & 0\\   0 & 0 & 0 & \mathrm{idf}(t_4)   \end{bmatrix}   \\ =   \begin{bmatrix}   \mathrm{tf}(t_1, d_1) \times \mathrm{idf}(t_1) & \mathrm{tf}(t_2, d_1) \times \mathrm{idf}(t_2) & \mathrm{tf}(t_3, d_1) \times \mathrm{idf}(t_3) & \mathrm{tf}(t_4, d_1) \times \mathrm{idf}(t_4)\\   \mathrm{tf}(t_1, d_2) \times \mathrm{idf}(t_1) & \mathrm{tf}(t_2, d_2) \times \mathrm{idf}(t_2) & \mathrm{tf}(t_3, d_2) \times \mathrm{idf}(t_3) & \mathrm{tf}(t_4, d_2) \times \mathrm{idf}(t_4)   \end{bmatrix}

Let’s see now a concrete example of this multiplication:

   M_{tf\mbox{-}idf} = M_{train} \times M_{idf} = \\   \begin{bmatrix}   0 & 1 & 1 & 1\\   0 & 2 & 1 & 0   \end{bmatrix}   \times   \begin{bmatrix}   0.69314718 & 0 & 0 & 0\\   0 & -0.40546511 & 0 & 0\\   0 & 0 & -0.40546511 & 0\\   0 & 0 & 0 & 0   \end{bmatrix} \\   =   \begin{bmatrix}   0 & -0.40546511 & -0.40546511 & 0\\   0 & -0.81093022 & -0.40546511 & 0   \end{bmatrix}

And finally, we can apply our L2 normalization process to the M_{tf\mbox{-}idf} matrix. Please note that this normalization is “row-wise”, because we're going to handle each row of the matrix as a separate vector to be normalized, and not the matrix as a whole:

   M_{tf\mbox{-}idf} = \frac{M_{tf\mbox{-}idf}}{\|M_{tf\mbox{-}idf}\|_2}      = \begin{bmatrix}   0 & -0.70710678 & -0.70710678 & 0\\   0 & -0.89442719 & -0.4472136 & 0   \end{bmatrix}

And that is our pretty normalized tf-idf weight of our testing document set, which is actually a collection of unit vectors. If you take the L2-norm of each row of the matrix, you’ll see that they all have a L2-norm of 1.
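The whole calculation can be checked with a short numpy sketch, building the diagonal idf matrix and normalizing each row (this reuses the idf vector computed above; the variable names are just illustrative):

import numpy as np

M_train = np.array([[0, 1, 1, 1],
                    [0, 2, 1, 0]], dtype=float)
idf = np.array([0.69314718, -0.40546511, -0.40546511, 0.0])

M_idf = np.diag(idf)               # square diagonal matrix with the idf weights
M_tfidf = np.dot(M_train, M_idf)   # tf-idf weights, still unnormalized

# row-wise L2 normalization: each document vector becomes a unit vector
row_norms = np.sqrt((M_tfidf ** 2).sum(axis=1)).reshape(-1, 1)
M_tfidf_normalized = M_tfidf / row_norms

print M_tfidf_normalized
# [[ 0.         -0.70710678 -0.70710678  0.        ]
#  [ 0.         -0.89442719 -0.4472136   0.        ]]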

 Python practice

Environment Used: Python v.2.7.2, Numpy 1.6.1, Scipy v.0.9.0, Sklearn (Scikits.learn) v.0.9.

Now the section you were waiting for ! In this section I’ll use Python to show each step of the tf-idf calculation using the Scikit.learn feature extraction module.

The first step is to create our training and testing document set and computing the term frequency matrix:

from sklearn.feature_extraction.text import CountVectorizer

train_set = ("The sky is blue.", "The sun is bright.")
test_set = ("The sun in the sky is bright.",
"We can see the shining sun, the bright sun.")

count_vectorizer = CountVectorizer()
count_vectorizer.fit_transform(train_set)
print "Vocabulary:", count_vectorizer.vocabulary

# Vocabulary: {'blue': 0, 'sun': 1, 'bright': 2, 'sky': 3}

freq_term_matrix = count_vectorizer.transform(test_set)
print freq_term_matrix.todense()

#[[0 1 1 1]
#[0 2 1 0]]

Now that we have the frequency term matrix (called freq_term_matrix), we can instantiate the TfidfTransformer, which is going to be responsible for calculating the tf-idf weights for our term frequency matrix:

from sklearn.feature_extraction.text import TfidfTransformer

tfidf = TfidfTransformer(norm="l2")
tfidf.fit(freq_term_matrix)

print "IDF:", tfidf.idf_

# IDF: [ 0.69314718 -0.40546511 -0.40546511  0.        ]

Note that I've specified the norm as L2; this is optional (actually the default is the L2-norm), but I've added the parameter to make it explicit that it's going to use the L2-norm. Also note that you can see the calculated idf weights by accessing the internal attribute called idf_. Now that the fit() method has calculated the idf for the matrix, let's transform the freq_term_matrix to the tf-idf weight matrix:

tf_idf_matrix = tfidf.transform(freq_term_matrix)
print tf_idf_matrix.todense()

# [[ 0.         -0.70710678 -0.70710678  0.        ]
# [ 0.         -0.89442719 -0.4472136   0.        ]]

And that is it, the tf_idf_matrix is actually our previous M_{tf\mbox{-}idf} matrix. You can accomplish the same effect by using the Vectorizer class of Scikit.learn, which is a vectorizer that automatically combines the CountVectorizer and the TfidfTransformer for you. See this example to learn how to use it for the text classification process.
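As a rough sketch of how that combined class could be used (assuming the Vectorizer in the 0.9 release exposes the same fit/transform interface we used above; check the linked example for the exact API):

from sklearn.feature_extraction.text import Vectorizer

vectorizer = Vectorizer()                       # CountVectorizer + TfidfTransformer in one step
vectorizer.fit(train_set)                       # learn the vocabulary and the idf from the train set
tf_idf_matrix = vectorizer.transform(test_set)  # tf-idf weighted, L2-normalized matrix

print tf_idf_matrix.todense()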

I really hope you liked the post; I tried to make it as simple as possible, even for people without the required mathematical background in linear algebra, etc. In the next Machine Learning post I'm expecting to show how you can use tf-idf to calculate the cosine similarity.

If you liked it, feel free to comment and make suggestions, corrections, etc.

Cite this article as: Christian S. Perone, "Machine Learning :: Text feature extraction (tf-idf) – Part II," in Terra Incognita, 03/10/2011, https://blog.christianperone.com/2011/10/machine-learning-text-feature-extraction-tf-idf-part-ii/.

References

Understanding Inverse Document Frequency: on theoretical arguments for IDF

Wikipedia :: tf-idf

 The classic Vector Space Model

Sklearn text feature extraction code

Updates

13 Mar 2015 Formatting, fixed image issues.
03 Oct 2011 Added the info about the environment used for Python examples

Machine Learning, Python

Machine Learning :: Text feature extraction (tf-idf) – Part I

Short introduction to Vector Space Model (VSM)

In information retrieval or text mining, the term frequency – inverse document frequency (also called tf-idf) is a well-known method to evaluate how important a word is in a document. tf-idf is a very interesting way to convert the textual representation of information into a Vector Space Model (VSM), or into sparse features; we'll discuss more about it later, but first, let's try to understand what tf-idf and the VSM are.

VSM has a very confusing past; see for example the paper The most influential paper Gerard Salton Never Wrote, which explains the history behind the ghost cited paper that in fact never existed. In sum, VSM is an algebraic model representing textual information as a vector, where the components of this vector could represent the importance of a term (tf-idf) or even its absence or presence (Bag of Words) in a document; it is important to note that the classical VSM proposed by Salton incorporates local and global parameters/information (in the sense that it uses both the isolated term being analyzed as well as the entire collection of documents). VSM, interpreted in a lato sensu, is a space where text is represented as a vector of numbers instead of its original string textual representation; the VSM represents the features extracted from the document.

Let's try to mathematically define the VSM and tf-idf together with concrete examples; for the concrete examples I'll be using Python (as well as the amazing scikits.learn Python module).

Going to the vector space

The first step in modeling the document into a vector space is to create a dictionary of terms present in the documents. To do that, you can simply select all terms from the document and convert them to dimensions in the vector space. But we know that there are some kinds of words (stop words) that are present in almost all documents, and since what we're doing is extracting important features from documents, features that identify them among other similar documents, terms like “the”, “is”, “at”, “on”, etc. aren't going to help us, so in the information extraction we'll just ignore them.

Let’s take the documents below to define our (stupid) document space:

Train Document Set:

d1: The sky is blue.
d2: The sun is bright.

Test Document Set:

d3: The sun in the sky is bright.
d4: We can see the shining sun, the bright sun.

Now, what we have to do is to create an index vocabulary (dictionary) of the words of the train document set. Using the documents d1 and d2 from the document set, we'll have the following index vocabulary, denoted as \mathrm{E}(t), where t is the term:

   \mathrm{E}(t) =   \begin{cases}   1, & \mbox{if } t\mbox{ is ``blue''} \\   2, & \mbox{if } t\mbox{ is ``sun''} \\   3, & \mbox{if } t\mbox{ is ``bright''} \\   4, & \mbox{if } t\mbox{ is ``sky''} \\   \end{cases}

 

Note that terms like “is” and “the” were ignored, as cited before. Now that we have an index vocabulary, we can convert the test document set into a vector space where each term of the vector is indexed as in our index vocabulary, so the first term of the vector represents the “blue” term of our vocabulary, the second represents “sun”, and so on. Now, we're going to use the term-frequency to represent each term in our vector space; the term-frequency is nothing more than a measure of how many times the terms present in our vocabulary \mathrm{E}(t) are present in the documents d3 or d4. We define the term-frequency as a counting function:

   \mathrm{tf}(t,d) = \sum\limits_{x\in d} \mathrm{fr}(x, t)

where the \mathrm{fr}(x, t) is a simple function defined as:

   \mathrm{fr}(x,t) =   \begin{cases}   1, & \mbox{if } x = t \\   0, & \mbox{otherwise} \\   \end{cases}

So, what tf(t,d) returns is how many times the term t is present in the document d. An example of this could be  tf(``sun'', d4) = 2, since we have only two occurrences of the term “sun” in the document d4. Now that you understand how the term-frequency works, we can go on to the creation of the document vector, which is represented by:

   \displaystyle \vec{v_{d_n}} =(\mathrm{tf}(t_1,d_n), \mathrm{tf}(t_2,d_n), \mathrm{tf}(t_3,d_n), \ldots, \mathrm{tf}(t_n,d_n))

Each dimension of the document vector is represented by the term of the vocabulary, for example, the \mathrm{tf}(t_1,d_2) represents the frequency-term of the term 1 or t_1 (which is our “blue” term of the vocabulary) in the document d_2.

Let’s now show a concrete example of how the documents d_3 and d_4 are represented as vectors:

   \vec{v_{d_3}} = (\mathrm{tf}(t_1,d_3), \mathrm{tf}(t_2,d_3), \mathrm{tf}(t_3,d_3), \ldots, \mathrm{tf}(t_n,d_3)) \\   \vec{v_{d_4}} = (\mathrm{tf}(t_1,d_4), \mathrm{tf}(t_2,d_4), \mathrm{tf}(t_3,d_4), \ldots, \mathrm{tf}(t_n,d_4))

which evaluates to:

   \vec{v_{d_3}} = (0, 1, 1, 1) \\   \vec{v_{d_4}} = (0, 2, 1, 0)

As you can see, since the documents d_3 and d_4 are:

d3: The sun in the sky is bright.
d4: We can see the shining sun, the bright sun.

The resulting vector \vec{v_{d_3}} shows that we have, in order, 0 occurrences of the term “blue”, 1 occurrence of the term “sun”, and so on. In \vec{v_{d_4}}, we have 0 occurrences of the term “blue”, 2 occurrences of the term “sun”, etc.
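Before moving on, here is a plain-Python sketch of the counting function above, just to make the construction of these vectors concrete (the vocabulary order follows the E(t) index defined earlier, and the naive tokenization is only illustrative):

# index vocabulary E(t): blue, sun, bright, sky
vocabulary = ["blue", "sun", "bright", "sky"]

d3 = "The sun in the sky is bright."
d4 = "We can see the shining sun, the bright sun."

def fr(x, t):
    # 1 if the token x is the term t, 0 otherwise
    return 1 if x == t else 0

def tf(t, d):
    # term frequency: how many times the term t appears in document d
    tokens = d.lower().replace(".", "").replace(",", "").split()
    return sum(fr(x, t) for x in tokens)

print [tf(t, d3) for t in vocabulary]  # [0, 1, 1, 1]
print [tf(t, d4) for t in vocabulary]  # [0, 2, 1, 0]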

But wait, since we have a collection of documents, now represented by vectors, we can represent them as a matrix with |D| \times F shape, where |D| is the cardinality of the document space (how many documents we have) and F is the number of features, in our case represented by the vocabulary size. An example of the matrix representation of the vectors described above is:

   M_{|D| \times F} =   \begin{bmatrix}   0 & 1 & 1 & 1\\   0 & 2 & 1 & 0   \end{bmatrix}

As you may have noted, these matrices representing the term frequencies tend to be very sparse (with the majority of terms zeroed), and that's why you'll commonly see these matrices represented as sparse matrices.

Python practice

Environment Used: Python v.2.7.2, Numpy 1.6.1, Scipy v.0.9.0, Sklearn (Scikits.learn) v.0.9.

Since we know the theory behind the term frequency and the vector space conversion, let's show how easy it is to do that using the amazing scikit.learn Python module.

Scikit.learn comes with lots of examples, as well as interesting real-life datasets you can use, and also some helper functions to download 18k newsgroups posts, for instance.

Since we already defined our small train/test dataset before, let’s use them to define the dataset in a way that scikit.learn can use:

train_set = ("The sky is blue.", "The sun is bright.")
test_set = ("The sun in the sky is bright.",
    "We can see the shining sun, the bright sun.")

In scikit.learn, what we have presented as the term-frequency is called CountVectorizer, so we need to import it and create a new instance:

from sklearn.feature_extraction.text import CountVectorizer
vectorizer = CountVectorizer()

The CountVectorizer already uses by default an “analyzer” called WordNGramAnalyzer, which is responsible for converting the text to lowercase, removing accents, extracting tokens, filtering stop words, etc. You can see more information by printing the class information:

print vectorizer

CountVectorizer(analyzer__min_n=1,
analyzer__stop_words=set(['all', 'six', 'less', 'being', 'indeed', 'over', 'move', 'anyway', 'four', 'not', 'own', 'through', 'yourselves', (...)

Let’s create now the vocabulary index:

vectorizer.fit_transform(train_set)
print vectorizer.vocabulary
{'blue': 0, 'sun': 1, 'bright': 2, 'sky': 3}

See that the vocabulary created is the same as E(t) (except that it is zero-indexed).

Let’s use the same vectorizer now to create the sparse matrix of our test_set documents:

smatrix = vectorizer.transform(test_set)

print smatrix

(0, 1)        1
(0, 2)        1
(0, 3)        1
(1, 1)        2
(1, 2)        1

Note that the sparse matrix created, called smatrix, is a Scipy sparse matrix with elements stored in Coordinate format. But you can convert it into a dense format:

smatrix.todense()

matrix([[0, 1, 1, 1],
        [0, 2, 1, 0]], dtype=int64)

Note that the sparse matrix created is the same matrix M_{|D| \times F} we cited earlier in this post, which represents the two document vectors \vec{v_{d_3}} and \vec{v_{d_4}}.

We'll see in the next post how we define the idf (inverse document frequency) instead of the simple term-frequency, as well as how the logarithmic scale is used to adjust the measurement of term frequencies according to their importance, and how we can use it to classify documents using some of the well-known machine learning approaches.

I hope you liked this post, and if you really liked it, leave a comment so I'll be able to know if there are enough people interested in this series of posts on Machine Learning topics.

As promised, here is the second part of this tutorial series.

Cite this article as: Christian S. Perone, "Machine Learning :: Text feature extraction (tf-idf) – Part I," in Terra Incognita, 18/09/2011, https://blog.christianperone.com/2011/09/machine-learning-text-feature-extraction-tf-idf-part-i/.

References

The classic Vector Space Model

The most influential paper Gerard Salton never wrote

Wikipedia: tf-idf

Wikipedia: Vector space model

Scikits.learn Examples

Updates

21 Sep 11 fixed some typos and the vector notation
22 Sep 11 – fixed import of sklearn according to the new 0.9 release and added the environment section
02 Oct 11 – fixed Latex math typos
18 Oct 11 – added link to the second part of the tutorial series
04 Mar 11 – Fixed formatting issues

Genetic Algorithms, Pyevolve, Python

The Interactive Robotic Painting Machine !

I'm glad to announce a project created by Benjamin Grosser called “Interactive Robotic Painting Machine”. The machine uses Python and Pyevolve as its Genetic Algorithm core, and the concept is very interesting:

What I’ve built to consider these questions is an interactive robotic painting machine that uses artificial intelligence to paint its own body of work and to make its own decisions. While doing so, it listens to its environment and considers what it hears as input into the painting process. In the absence of someone or something else making sound in its presence, the machine, like many artists, listens to itself. But when it does hear others, it changes what it does just as we subtly (or not so subtly) are influenced by what others tell us.

Read more about the project in the Benjamin Grosser website.

 

News, Science

News: Algorithm searches for models that best explain experimental data

From the original news article from PhysOrg:

An evolutionary computation approach developed by Franklin University’s Esmail Bonakdarian, Ph.D., was used to analyze data from two classical economics experiments. As can be seen in this figure, optimization of the search over subsets of the maximum model proceeds initially at a quick rate and then slowly continues to improve over time until it converges. The top curve (red) shows the optimum value found so far, while the lower, jagged line (green) shows the current average fitness value for the population in each generation.

(…)

Regression analysis has been the traditional tool for finding and establishing statistically significant relationships in research projects, such as for the economics examples Bonakdarian chose. As long as the number of independent variables is relatively small, or the experimenter has a fairly clear idea of the possible underlying relationship, it is feasible to derive the best model using standard software packages and methodologies.

However, Bonakdarian cautioned that if the number of independent variables is large, and there is no intuitive sense about the possible relationship between these variables and the dependent variable, “the experimenter may have to go on an automated ‘fishing expedition’ to discover the important and relevant independent variables.”

You can see the original research paper here.