
Capturing and comparing writers’ profiles through Markov chains

Any text structure can be represented using Markov Chain states and transition probabilities. For the sake of simplicity, I’ll not use probabilities to describe the state transitions; I’ll use only true or false, saying only whether a given transition is possible or not (100% or 0%).

A Markov Chain in this simple form can be formulated as a set of possible states, let’s say S = \left\{q_1,q_2,q_3,\ldots,q_n\right\}, together with a dictionary-like data structure D(q_x,q_y) \subseteq S that maps each pair of consecutive states to the set of states that can follow them. But let’s put an example text on it to make it clear:

the white rabbit and the white fox

Then the possible states of this text are S = \left\{ ``the'', ``white'', ``rabbit'', ``and'', ``fox''\right\}, and the dictionary structure is:

D(``rabbit'', ``and'') = \left\{``the''\right\}
D(``white'', ``rabbit'') = \left\{``and''\right\}
D(``and'', ``the'') = \left\{``white''\right\}
D(``the'', ``white'') = \left\{``rabbit'', ``fox''\right\}
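A minimal sketch of how this structure can be built in Python (this is just an illustration of the idea, not the actual pymarksim code; the NLTK tokenization is simplified here to a plain split()):

from collections import defaultdict

def update_chain(text, chain=None):
    """Update (or create) a chain mapping each pair of consecutive
    words to the set of words that may follow it; transitions are
    only possible/impossible, no probabilities involved."""
    if chain is None:
        chain = defaultdict(set)
    tokens = text.split()  # simplified tokenizer
    for q_x, q_y, q_next in zip(tokens, tokens[1:], tokens[2:]):
        chain[(q_x, q_y)].add(q_next)
    return chain

chain = update_chain("the white rabbit and the white fox")
print(chain[("the", "white")])  # {'rabbit', 'fox'} (set order may vary)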

These transition rules can be updated incrementally. Note that the rules represent the structure of the text and not the text itself; when updated with a whole book, for instance, they capture the profile of the writer, the constructions the author often uses, and this profile is reinforced even further when the chain is incrementally updated with more than one book by the same author.

I implemented a simple tool using Python and NLTK (used just to parse and tokenize the text) that reads an input text and incrementally updates a Markov Chain data structure, so the chain can be reused later to check similarity against another text.

The most interesting part of these chains is that they offer a property also present in Bloom Filters: false positives are possible, but not false negatives. In terms of our implementation and formulation, this property means that if we update a chain with the text of two books by Lewis Carroll, there could exist (with a very low probability, of course) another author, with whose books the chain was never trained, who produces exactly the same chain, creating a false positive. Note, however, that it is almost impossible to find books by different authors sharing the same chain; this would actually be very strange, since it would require the authors’ styles and word choices to be nearly the same.

Note that we can also compare different chains in order to measure the similarity between two author/book profiles. This comparison is done by searching a previously trained chain (built with the same author’s books or not) for the entries of the current chain, matching the structure of the two chains, and then calculating the ratio between the matches found and the total number of rules in the chain we are checking.
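A sketch of that comparison, using the same illustrative structure as before (counting one rule per pair-to-word transition is my assumption here, not necessarily the exact pymarksim metric):

def chain_match(trained, checked):
    """Percentage of the rules of 'checked' also present in 'trained'
    (same word pair allowing the same next word), relative to the
    total number of rules in the chain we are checking."""
    matches = total = 0
    for pair, next_words in checked.items():
        known = trained.get(pair, set())
        for word in next_words:
            total += 1
            if word in known:
                matches += 1
    return 100.0 * matches / total if total else 0.0

# e.g. print("Match %.2f %%" % chain_match(lewis_chain, other_chain))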

But hey, that’s a lot of blah blah blah, so let’s see the tool in practice. Here is the help of the implemented tool (I created a repository on GitHub if you’re interested in the source code):

$ python -m pymarksim.markov --help

pymarksim v.0.1
By Christian S. Perone <christian.perone@gmail.com>
https://blog.christianperone.com

Type --help parameter for help.

Options:
 -h, --help            show this help message and exit
 -d DUMP_FILE, --df=DUMP_FILE
                       Pickle object file ex. pickle.db
 -i INPUT_TEXT, --input-text=INPUT_TEXT
                       Input text
 -m MODE, --mode=MODE  Mode of operation: train, check
 -c COMPRESS_LEVEL, --compress-level=COMPRESS_LEVEL
                       The gzip compression level, default is 9 (max).

So let’s train a chain using the “Alice’s Adventures in Wonderland” book (alice_adventures.txt) by Lewis Carroll (taken from Project Gutenberg):

$ python -m pymarksim.markov -d lewis.db -i alice_adventures.txt -m train

pymarksim v.0.1
By Christian S. Perone <christian.perone@gmail.com>
https://blog.christianperone.com

Type --help parameter for help.

File lewis.db not found, creating a new one...
Updating...
Done !

Here the tool created the file lewis.db, which stores the chain as a Python dictionary structure (I’ve used the cPickle module to serialize it, with gzip compression).
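The persistence layer is simple; here is a minimal sketch of the save/load idea (shown with the Python 3 pickle module in place of the old cPickle; the function names are illustrative, not the actual pymarksim API):

import gzip
import pickle

def save_chain(chain, filename, compress_level=9):
    # serialize the chain dictionary and gzip-compress it on disk
    with gzip.open(filename, "wb", compresslevel=compress_level) as f:
        pickle.dump(dict(chain), f)

def load_chain(filename):
    with gzip.open(filename, "rb") as f:
        return pickle.load(f)

Now let’s check how close this chain is to the original book text: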

$ python -m pymarksim.markov -d lewis.db -i alice_adventures.txt -m check

pymarksim v.0.1
By Christian S. Perone <christian.perone@gmail.com>
https://blog.christianperone.com

Type --help parameter for help.

File lewis.db found, loading...
Checking...
Match 100.00 %

As you can see, we have a perfect match between the chain and the book text. Now comes the interesting part: let’s check an unrelated book against this chain trained on Alice’s Adventures in Wonderland. For this, I took the book “The Adventures of Sherlock Holmes” (doyle.txt) by Sir Arthur Conan Doyle (also from Project Gutenberg):

$ python -m pymarksim.markov -d lewis.db -i doyle.txt -m check 

pymarksim v.0.1
By Christian S. Perone <christian.perone@gmail.com>
https://blog.christianperone.com

Type --help parameter for help.

File lewis.db found, loading...
Checking...
Match 7.14 %

See that the match of the Sherlock Holmes chain against the Alice’s Adventures chain is only 7.14%; remember that a perfect match is 100.00% !

Let’s now check this chain, trained only with Alice’s Adventures in Wonderland, against another book by the same author, called “Through the Looking-Glass” (looking-glass.txt):

$ python -m pymarksim.markov -d lewis.db -i looking-glass.txt -m check 

pymarksim v.0.1
By Christian S. Perone <christian.perone@gmail.com>
https://blog.christianperone.com

Type --help parameter for help.

File lewis.db found, loading...
Checking...
Match 25.93 %

See that now we have a match of 25.93% ! Let’s check against an even more closely related book, also by Lewis Carroll, called “Alice’s Adventures Under Ground” (underground.txt):

$ python -m pymarksim.markov -d lewis.db -i underground.txt -m check 

pymarksim v.0.1
By Christian S. Perone <christian.perone@gmail.com>
https://blog.christianperone.com

Type --help parameter for help.

File lewis.db found, loading...
Checking...
Match 57.84 %

That’s very related, don’t you think ?

I hope you liked it. Markov Chains are a very powerful way to capture the structure of processes, and they can be used in ways we usually do not expect.

“So many out-of-the-way things had happened lately, that Alice had begun to think that very few things indeed were really impossible.” – Alice’s Adventures in Wonderland



PyOhio 2010: Genetic Programming in Python

Eric Floehr (from Intellovations) kindly sent me the presentation he gave at PyOhio 2010. I think Eric has captured some nice features of Pyevolve that few people use, like DB Adapters, dot plotting, the Interactive Mode, real-time statistics, etc. He also presents an interesting use case where he uses Genetic Programming to forecast weather based on historical data:



Thank you Eric !


The future can be written in RPython now

Following the recent article arguing why PyPy is the future of Python, I must say: PyPy is not the future of Python, it is the present. When I last tested it (PyPy-c 1.1.0) with Pyevolve on the optimization of a simple Sphere function, it was at least 2x slower than Unladen Swallow Q2, but at that time PyPy was not able to JIT. Now, with the new release of PyPy and its JIT support, the scenario has changed.

PyPy has evolved a lot (you can actually see this evolution here); nice work was done on the GC system, saving (when compared to CPython) 8 bytes per allocated object, which is very interesting for applications that make heavy use of object allocation (GP systems are a strong example, since when they are implemented in object-oriented languages, each syntax-tree node is an object). Efforts are also being made to improve support for CPython extensions (written in C/C++). One of them is a little tricky: the use of RPyC to proxy the remote calls to CPython through TCP. The other seems far more effective: the creation of the CPyExt subsystem. By using CPyExt, all you need is to have your CPython API functions implemented in CPyExt. A lot of people are working on this right now, and you can too; it’s a long road to good API coverage, but when you think about the advantages, this road becomes short.

In order to benchmark CPython, Jython, CPython+Psyco, Unladen Swallow and PyPy, I’ve used the Rastrigin function optimization (an example of that implementation is here, in Example 7 of Pyevolve 0.6rc1):

f(x) = 10n + \sum_{i=1}^{n}\left[x_{i}^{2} - 10\cos(2\pi x_{i})\right]

Due to its large search space and number of local minima, the Rastrigin function is often used to measure the performance of Genetic Algorithms. The Rastrigin function has a global minimum at x=0, where f(x) = 0; in order to increase the search space and the required resources, I’ve used 40 variables (n=40) and 10k generations.
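For reference, here is a hedged sketch of such a setup with Pyevolve; it follows the spirit of Example 7, but the parameter values below are illustrative rather than the exact ones shipped with 0.6rc1:

from math import cos, pi

from pyevolve import Consts, G1DList, GSimpleGA, Initializators, Mutators

def rastrigin(genome):
    # f(x) = 10n + sum_i [x_i^2 - 10 cos(2 pi x_i)]
    total = 0.0
    for i in range(len(genome)):
        total += genome[i] ** 2 - 10.0 * cos(2 * pi * genome[i])
    return 10.0 * len(genome) + total

genome = G1DList.G1DList(40)                     # n = 40 variables
genome.setParams(rangemin=-5.12, rangemax=5.12)  # usual Rastrigin domain
genome.initializator.set(Initializators.G1DListInitializatorReal)
genome.mutator.set(Mutators.G1DListMutatorRealGaussian)
genome.evaluator.set(rastrigin)

ga = GSimpleGA.GSimpleGA(genome)
ga.setMinimax(Consts.minimaxType["minimize"])    # global minimum is f(0) = 0
ga.setGenerations(10000)                         # the 10k generations used here
ga.evolve(freq_stats=1000)
print(ga.bestIndividual())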

Here is the information about the versions used in this benchmark:

No warm-up was performed in the JVM or in PyPy. The PyPy translator was executed using the “-Ojit” option in order to get the JIT version of the Python interpreter. The JVM was executed in server mode: I tested the client and server modes for both Sun JVM and IcedTea6, and the best results were observed from the server mode of Sun JVM; however, when I compared the client mode of IcedTea6 with the client mode of Sun JVM, the best results were from IcedTea6 (the same as its server mode). Unladen Swallow was compiled using the project wiki instructions for building an optimized binary.

The machine used was an Intel(R) Core(TM) 2 Duo E4500 (2 × 2.20GHz) with 2GB of RAM.

Here are the results of the benchmark, measured in wall time (seconds) for each setup (the best of 3 sequential runs):

As you can see, PyPy with JIT got a speedup of 2.57x when compared to CPython 2.6.5, and it was 2.0x faster than the current Unladen Swallow trunk.

PyPy is not only the future of Python, it is becoming the present right now. PyPy will bring us not only an implementation of Python in Python (which in itself is the valuable result of great efforts), but it will also bring performance back. Many doubted this at the beginning, wondering how an implementation of Python in Python could possibly be faster than an implementation in C; this is where the translation and JIT magic enter. When the time comes that the Python interpreter can be entirely written in a high-level language (actually almost the same language, which is really weird), the Python community will be able to focus on improving the language itself instead of spending time on the complexity of lower-level languages. Isn’t that the great point of these efforts ?

By the way, just to note: PyPy isn’t only a translator for a Python interpreter written in RPython, it’s a translator of RPython, which means that PyPy isn’t only the future of Python, but probably the future of many interpreters.


Pyevolve 0.6rc1 released !

I’m proud to announce Pyevolve 0.6rc1 ! This is the first release candidate, but it’s pretty stable for production use (as of this 0.6 version, we are much closer to a stable codebase, thanks to the community).

See the documentation site and the What’s New.

I would like to thank the people who directly or indirectly contributed to this release: Boris Gorelik, Amit Saha, Jelle Feringa, Henrik Rudstrom, Matteo de Felice, the SIGEVOlution Board, Mike Benoit, Ryan Campbell, Jonas A. Gustavsson, Soham Sadhu, Ernesto Costa, Ido Ben-Tsion, Frank Goodman, Vishal Belsare, Benjamin Bush; and the many people who gave us feedback on their experience using Pyevolve in their applications.

For downloads, go to the Downloads section at Documentation site.

If you see something wrong with this release candidate, please create a ticket reporting your problem at Pyevolve Trac, so we can provide fixes to the official release.

Happy coding !

– Christian S. Perone


Riemann Zeta function visualizations with Python

While playing with mpmath and its Riemann Zeta function evaluator, I came up with these interesting animated plots using Matplotlib (the source code is at the end of the post).

The Riemann zeta function is an analytic function defined over the complex plane with one complex variable, denoted “s”. The Riemann zeta function is very important to mathematics due to its deep relation with the primes; the zeta function is given by:

\zeta(s) = \sum_{n=1}^\infty \frac{1}{n^s} = \frac{1}{1^s} + \frac{1}{2^s} + \frac{1}{3^s} + \cdots

for Re(s) > 1.

So, let \zeta(s)=v where s = a + b\imath and v = u + k\imath.
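Evaluating \zeta(s) and splitting it into those components takes only a few lines with mpmath (a small sketch; the point chosen is near the first non-trivial zero):

import mpmath

s = mpmath.mpc(0.5, 14.134725)  # s = a + bi, near the first non-trivial zero
v = mpmath.zeta(s)              # v = u + ki
u, k = v.real, v.imag
print(u, k)                     # both very close to zero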

The first plot uses the triplet (x,y,z) coordinates to plot a 3D space where each component is given by:

  • x = Re(v) (or x = u, previously defined);
  • y = Im(v) (or y = k, previously defined);
  • z = Im(s) (or z = b, previously defined);
  • The time component used in the animation is called \theta and it’s given by \theta = Re(s) or simply \theta = a.

To plot the animation, I’ve used the following intervals (a sketch of how a single frame can be computed follows the list):

  • For Re(s): the interval \left[0.01, 10.0\right), calculated at every 0.01 step – shown in the plot at the top right;
  • For Im(s): the interval \left[0.1, 200.0\right), calculated at every 0.1 step – shown as the z axis.
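Here is a minimal sketch of how one frame of the 3D animation can be computed (mpmath plus matplotlib’s mplot3d; the frame file naming for MEncoder is illustrative, not my original script):

import mpmath
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # enables the "3d" projection

def zeta_frame_3d(theta, b_values):
    """One animation frame: Re(s) fixed at theta, Im(s) sweeping b_values."""
    v = [mpmath.zeta(mpmath.mpc(theta, float(b))) for b in b_values]
    x = [float(z.real) for z in v]   # x = Re(v) = u
    y = [float(z.imag) for z in v]   # y = Im(v) = k
    fig = plt.figure()
    ax = fig.add_subplot(111, projection="3d")
    ax.plot(x, y, b_values)          # z = Im(s) = b
    ax.set_title(r"$\theta = %.2f$" % theta)
    fig.savefig("frame_%05d.png" % int(round(theta * 100)))
    plt.close(fig)

zeta_frame_3d(0.5, np.arange(0.1, 200.0, 0.1))  # the critical line Re(s) = 1/2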

This plot was done using a fixed interval (no auto scale) for the (x,y,z) coordinates. Re(s) = 1/2 (\theta = 1/2) is where the non-trivial zeroes of the Riemann Zeta function lie.

Now see the same plot but this time using auto scale (automatically resized x,y coordinates):

Note the (x,y) auto scaling.

Now see another plot, this time using a 2D space where each component is given by:

  • x = Im(s) (or x = b, previously defined);
  • y = Im(v) (blue) and Re(v) (green);
  • The time component used in the animation is called \theta and it’s given by \theta = Re(s) or simply \theta = a.

To plot the animation, I’ve used the following intervals (a sketch of one 2D frame follows the list):

  • For Re(s): the interval \left[0.01, 10.0\right), calculated at every 0.01 step – shown in the title of the plot at the top;
  • For Im(s): the interval \left[0.1, 50.0\right), calculated at every 0.1 step – shown as the x axis.
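And a sketch of one frame of this 2D variant, at the critical line (the red dots marking the first zeroes are left out for brevity):

import mpmath
import numpy as np
import matplotlib.pyplot as plt

theta = 0.5                     # Re(s), the critical line
b = np.arange(0.1, 50.0, 0.1)   # Im(s), shown as the x axis
v = [mpmath.zeta(mpmath.mpc(theta, float(t))) for t in b]

plt.plot(b, [float(z.imag) for z in v], "b", label="Im(v)")
plt.plot(b, [float(z.real) for z in v], "g", label="Re(v)")
plt.title(r"$\theta = %.2f$" % theta)
plt.legend()
plt.show()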

This plot was done using a fixed interval (no auto scale) for the (x,y) coordinates. Re(s) = 1/2 (\theta = 1/2) is where the non-trivial zeroes of the Riemann Zeta function lie. The first 10 non-trivial zeroes of the Riemann Zeta function are shown as red dots: the points where the two series, Im(v) and Re(v), cross each other on a red dot at the critical line (Re(s) = 1/2) are where the zeroes of the Zeta function lie. Note how the real and imaginary parts turn away from each other as Re(s) increases.

Now see the same plot but this time using auto scale (automatically resized y coordinate):

If you are interested in more visualizations of the Riemann Zeta function, you’ll like the well-done paper by J. Arias-de-Reyna called “X-Ray of Riemann zeta-function”.

I always liked the way visualization affects the understanding of math functions. Anscombe’s quartet is a clear example of how important visualization is.

The source code used to create the plots is available here:

Source code for the 2D plots

Source code for the 3D plots

I hope you liked the post ! To make the plots and videos I’ve used matplotlib, mpmath and MEncoder.

– Christian S. Perone


New issue of SIGEVOlution (Volume 4 Issue 3)

The new issue of SIGEVOlution (the newsletter of ACM Special Interest Group on Genetic and Evolutionary Computation) was released:

In this release you can read about:

Issues in applying computational intelligence
By Arthur Kordon

JavaXCSF
By Patrick O. Stalph, Martin V. Butz

And a lot of information about new PhD theses, new journal issues and about events to come.


News: Using genetic algorithms to optimise current and future health planning

From a publication of 28/10/2010 by the authors Satoshi Sasaki, Alexis J. Comber, Hiroshi Suzuki and Chris Brunsdon:

Background

Ambulance response time is a crucial factor in patient survival. The number of emergency cases (EMS cases) requiring an ambulance is increasing due to changes in population demographics. This is decreasing ambulance response times to the emergency scene. This paper predicts EMS cases for 5-year intervals from 2020 to 2050 by correlating current EMS cases with demographic factors at the level of the census area and predicted population changes. It then applies a modified grouping genetic algorithm to compare current and future optimal locations and numbers of ambulances. Sets of potential locations were evaluated in terms of the (current and predicted) EMS case distances to those locations.

Results

Future EMS demands were predicted to increase by 2030 using the model (R2=0.71). The optimal locations of ambulances based on future EMS cases were compared with current locations and with optimal locations modelled on current EMS case data. Optimising the location of ambulance stations reduced the average response times by 57 seconds. Current and predicted future EMS demand at modelled locations were calculated and compared.

Conclusions

The reallocation of ambulances to optimal locations improved response times and could contribute to higher survival rates from life-threatening medical events. Modelling EMS case ‘demand’ over census areas allows the data to be correlated to population characteristics and optimal ‘supply’ locations to be identified. Comparing current and future optimal scenarios allows more nuanced planning decisions to be made. This is a generic methodology that could be used to provide evidence in support of public health planning and decision making.

Read the original paper here.