Programming, Python

Simple and effective coin segmentation using Python and OpenCV

The new generation of OpenCV bindings for Python is getting better and better thanks to the hard work of the community. The new bindings, called "cv2", are the replacement for the old "cv" bindings. In this new generation, almost all operations now return native Python objects or NumPy objects, which is pretty nice: it simplifies a lot of code, improves performance in some areas (you can now use NumPy's optimized operations), and enables integration with other frameworks like scikit-image, which also uses NumPy arrays for image representation.

In this example, I'll show how to segment coins in images, or even in a real-time video capture, with a simple approach using thresholding, morphological operators, and contour approximation. This approach is a lot simpler than the one using Otsu's thresholding and Watershed segmentation here in the OpenCV Python tutorials, which I highly recommend reading for its robustness. Unfortunately, the approach using Otsu's thresholding is highly dependent on illumination normalization. One could extract small patches of the image to implement something similar to an adaptive Otsu binarization (like the one implemented in Leptonica, the framework used by Tesseract OCR) to overcome this problem (a rough sketch of that idea follows the image below), but let's see another approach. For reference, see the output of Otsu's thresholding on an image taken with my webcam under non-normalized illumination:


Original image vs. Otsu binarization
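For illustration, here is a rough sketch of that patch-based Otsu idea (my own approximation, not the Leptonica implementation): split the grayscale image into tiles and run Otsu's thresholding independently on each one, so every region adapts to its local illumination.

import cv2
import numpy as np

def patchwise_otsu(gray, tile=64):
    # Threshold each tile independently with Otsu's method so that
    # each region adapts to its own local illumination.
    out = np.zeros_like(gray)
    for y in range(0, gray.shape[0], tile):
        for x in range(0, gray.shape[1], tile):
            patch = gray[y:y+tile, x:x+tile]
            _, out[y:y+tile, x:x+tile] = cv2.threshold(
                patch, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return out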

1. Setting the Video Capture configuration

The first step to create a real-time Video Capture using the Python bindings is to instantiate the VideoCapture class, set the properties and then start reading frames from the camera:

import numpy as np
import cv2

cap = cv2.VideoCapture(0)
cap.set(cv2.cv.CV_CAP_PROP_FRAME_WIDTH, 1280)
cap.set(cv2.cv.CV_CAP_PROP_FRAME_HEIGHT, 720)

In newer versions (not yet released at the time of writing), these constants move into the cv2 module itself (e.g. cv2.CAP_PROP_FRAME_WIDTH); for now, let's just use the cv2.cv module.
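If you want the snippet to keep working after that change, a small hedge like the one below should do; it assumes only that newer builds expose the constants as cv2.CAP_PROP_FRAME_WIDTH / cv2.CAP_PROP_FRAME_HEIGHT, which is where they ended up.

# Use the old cv2.cv constants when available, the new-style ones otherwise.
try:
    prop_w = cv2.cv.CV_CAP_PROP_FRAME_WIDTH
    prop_h = cv2.cv.CV_CAP_PROP_FRAME_HEIGHT
except AttributeError:
    prop_w = cv2.CAP_PROP_FRAME_WIDTH
    prop_h = cv2.CAP_PROP_FRAME_HEIGHT

cap.set(prop_w, 1280)
cap.set(prop_h, 720)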

2. Reading image frames

The next step is to use the VideoCapture object to read the frames and then convert them to grayscale (we are not going to use color information to segment the coins):

while True:
    ret, frame = cap.read()
    roi = frame[0:500, 0:500]
    gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)

Note that here I'm extracting a small portion of the complete image (where the coins are located), but you don't have to do that if your image contains only coins. At this moment, we have the following gray image:

The original gray image captured.

3. Applying adaptive thresholding

In this step we will apply adaptive thresholding after a Gaussian blur, to eliminate the noise we have in the image:

gray_blur = cv2.GaussianBlur(gray, (15, 15), 0)
# Adaptive threshold over an 11x11 Gaussian-weighted neighborhood, subtracting
# a constant of 1; THRESH_BINARY_INV makes the coins white on black.
thresh = cv2.adaptiveThreshold(gray_blur, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                               cv2.THRESH_BINARY_INV, 11, 1)

See the effect of the Gaussian kernel on the image:

The original gray image and the image after applying the Gaussian kernel.

And now the effect of the adaptive thresholding on the blurred image:

The effect of the adaptive thresholding on the blurred image

Note that at this point we already have the coins segmented, except for the small noisy specks inside the coins and in some places around them.

4. Morphology

Morphological operators are used to dilate, erode, and perform other operations on the pixels of the image. Here, because the camera can sometimes introduce artifacts, we will use the morphological closing operation to make sure that the borders of the coins are always closed; otherwise, we might find a coin that looks like a semicircle or something like that. To understand the effect of closing (a dilation of the pixels followed by an erosion), see the image below:

Morphological Closing

You can see that after some iterations of the operation, the circles start to become filled. To use the Closing operation, we’ll use the morphologyEx function from the OpenCV Python bindings:

kernel = np.ones((3, 3), np.uint8)
closing = cv2.morphologyEx(thresh, cv2.MORPH_CLOSE, kernel, iterations=4)

See now the effect of the Closing operation on our coins:

Closing Operation in the coins

Morphological operators are very simple: the main principle is the application of a structuring element (in our case, a 3×3 block) over the pixels of the image. If you want to understand it better, please see this animation explaining the operation of erosion.
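To make the principle concrete, here is a tiny self-contained sketch (the toy array is made up for illustration) showing that closing is just a dilation followed by an erosion:

import numpy as np
import cv2

# A toy binary ring with a one-pixel gap in its border.
img = np.zeros((7, 7), np.uint8)
img[1:6, 1:6] = 255
img[2:5, 2:5] = 0
img[1, 3] = 0  # the gap

kernel = np.ones((3, 3), np.uint8)
dilated = cv2.dilate(img, kernel)  # grows the white border, sealing the gap
closed = cv2.erode(dilated, kernel)  # shrinks it back to roughly its old size
# The same result in one call:
# closed = cv2.morphologyEx(img, cv2.MORPH_CLOSE, kernel)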

5. Contour detection and filtering

After applying the morphological operators, all we have to do is to find the contour of each coin and then filter the contours having an area smaller or larger than a coin area. You can imagine the procedure of finding contours in OpenCV as the operation of finding connected components and their boundaries. To do that, we’ll use the OpenCV findContours function.

cont_img = closing.copy()
contours, hierarchy = cv2.findContours(cont_img, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)

Note that we made a copy of the closing image because findContours modifies the image passed as its first parameter. We're also using the RETR_EXTERNAL flag, which means that only the extreme outer contours are returned, and the CHAIN_APPROX_SIMPLE flag, which returns a compact representation of each contour; for more information see here.

After finding the contours, we need to iterate over each one and check its area, filtering out contours whose area is smaller or larger than a coin's area. We also fit an ellipse to each contour found. We could have used the minimum enclosing circle, but since my camera isn't perfectly above the coins, the coins appear with a small inclination, describing an ellipse.

for cnt in contours:
    area = cv2.contourArea(cnt)
    if area < 2000 or area > 4000:
        continue

    if len(cnt) < 5:
        continue

    ellipse = cv2.fitEllipse(cnt)
    cv2.ellipse(roi, ellipse, (0,255,0), 2)

Note that in the code above we iterate over each contour, filter out coins with an area smaller than 2000 or greater than 4000 (hardcoded values I found for Brazilian coins at this distance from the camera), then check the number of points of the contour, because the fitEllipse function needs at least 5 points, and finally use the ellipse function to draw the ellipse in green over the original image.
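If your camera happens to be mounted directly above the coins, the minimum enclosing circle mentioned earlier is even simpler; a sketch of what that branch of the loop could look like (same cnt and roi as above):

# Alternative to fitEllipse: fit a circle, which works well when the
# camera is perpendicular to the table and the coins project as circles.
(x, y), radius = cv2.minEnclosingCircle(cnt)
cv2.circle(roi, (int(x), int(y)), int(radius), (0, 255, 0), 2)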

To show the final image with the contours, we just use the imshow function to open a new window with the image:

cv2.imshow('final result', roi)

And finally, this is the result at the end of all the steps described above:

The final image with the contours detected

The complete source code:

import numpy as np
import cv2

def run_main():
    cap = cv2.VideoCapture(0)
    cap.set(cv2.cv.CV_CAP_PROP_FRAME_WIDTH, 1280)
    cap.set(cv2.cv.CV_CAP_PROP_FRAME_HEIGHT, 720)

    while True:
        ret, frame = cap.read()
        roi = frame[0:500, 0:500]
        gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)

        gray_blur = cv2.GaussianBlur(gray, (15, 15), 0)
        thresh = cv2.adaptiveThreshold(gray_blur, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                       cv2.THRESH_BINARY_INV, 11, 1)

        kernel = np.ones((3, 3), np.uint8)
        closing = cv2.morphologyEx(thresh, cv2.MORPH_CLOSE,
                                   kernel, iterations=4)

        cont_img = closing.copy()
        contours, hierarchy = cv2.findContours(cont_img, cv2.RETR_EXTERNAL,
                                               cv2.CHAIN_APPROX_SIMPLE)

        for cnt in contours:
            area = cv2.contourArea(cnt)
            if area < 2000 or area > 4000:
                continue

            if len(cnt) < 5:
                continue

            ellipse = cv2.fitEllipse(cnt)
            cv2.ellipse(roi, ellipse, (0,255,0), 2)

        cv2.imshow("Morphological Closing", closing)
        cv2.imshow("Adaptive Thresholding", thresh)
        cv2.imshow('Contours', roi)

        if cv2.waitKey(1) & 0xFF == ord('q'):
            break

    cap.release()
    cv2.destroyAllWindows()

if __name__ == "__main__":
    run_main()
Bitcoin, Programming, Python

The beauty of Bitcoin P2P network

So, in the last few days I released Protocoin, a pure Python framework with a Bitcoin P2P network implementation. While I'm in the process of developing v0.2 of the framework (with nice new features like Bitcoin key management; you can see a preview here), I would like to show a real-time visualization I made with Protocoin and Ubigraph: a node connects to a seed node and then issues a GetAddr message to each node, connecting to the received nodes in a breadth-first-search fashion. I'll release the code used to create this visualization in the next release of Protocoin as soon as possible. I hope you enjoy it!
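For the curious, the crawl order can be sketched in a few lines of library-agnostic Python; connect_and_getaddr below is a hypothetical helper (Protocoin's actual client API may differ) that connects to a peer, sends getaddr, and returns the advertised addresses:

from collections import deque

def crawl(seed, connect_and_getaddr, max_nodes=500):
    # Breadth-first search over the peers advertised by each node.
    seen, queue = {seed}, deque([seed])
    while queue and len(seen) < max_nodes:
        node = queue.popleft()
        for peer in connect_and_getaddr(node):
            if peer not in seen:
                seen.add(peer)
                queue.append(peer)
    return seen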

Color legend

Yellow = Connecting
Green = Connected
Blue = Disconnected after connection

Video

Programming, Python

Heat map of traffic accident data from DataPoa

The new version of Django GIS Brasil will be released this week; below is an example of a map created using the new Django GIS Brasil data imported from DataPoa.

The example below is a heat map of traffic accident data for Porto Alegre/RS from 2000 to 2012. The street axes (streets, avenues, etc.) will also be present in Django GIS Brasil.


Heatmap POA
Traffic accident map for Porto Alegre/RS. Click to enlarge.
Machine Learning, Programming, Python

Machine Learning :: Cosine Similarity for Vector Space Models (Part III)

* It has been a long time since I wrote the TF-IDF tutorial (Part I and Part II) and, as I promised, here is the continuation. Unfortunately I had no time to fix the previous tutorials for the newer versions of the scikit-learn (sklearn) package, nor to answer all the questions, but I hope to do that in the near future.

So, in the previous tutorials we learned how a document can be modeled in the Vector Space, how the TF-IDF transformation works, and how TF-IDF is calculated; now we are going to learn how to use a well-known similarity measure (Cosine Similarity) to calculate the similarity between different documents.

The Dot Product

Let's begin with the definition of the dot product for two vectors \vec{a} = (a_1, a_2, a_3, \ldots) and \vec{b} = (b_1, b_2, b_3, \ldots), where a_n and b_n are the components of the vectors (features of the document, or TF-IDF values for each word of the document in our example) and n is the dimension of the vectors:

  \vec{a} \cdot \vec{b} = \sum_{i=1}^n a_ib_i = a_1b_1 + a_2b_2 + \cdots + a_nb_n

As you can see, the definition of the dot product is simply the multiplication of corresponding components of the two vectors, summed together. See an example of a dot product for two vectors with 2 dimensions each (2D):

  \vec{a} = (0, 3) \\
  \vec{b} = (4, 0) \\
  \vec{a} \cdot \vec{b} = 0 \times 4 + 3 \times 0 = 0

The first thing you probably noticed is that the result of a dot product between two vectors isn’t another vector but a single value, a scalar.
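You can verify the 2D example with NumPy (just a quick check, not part of the tutorial code):

import numpy as np

a = np.array([0, 3])
b = np.array([4, 0])
print np.dot(a, b)  # prints 0: a scalar, and zero because the vectors are orthogonal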

This is all very simple and easy to understand, but what is a dot product? What is the intuitive idea behind it? What does it mean to have a dot product of zero? To understand it, we need to look at the geometric definition of the dot product:

  \vec{a} \cdot \vec{b} = \|\vec{a}\|\|\vec{b}\|\cos{\theta}

Rearranging the equation to understand it better using the commutative property, we have:

  \vec{a} \cdot \vec{b} = \|\vec{b}\|\|\vec{a}\|\cos{\theta}

So, what is the term \displaystyle \|\vec{a}\|\cos{\theta}? This term is the projection of the vector \vec{a} onto the vector \vec{b}, as shown in the image below:

The projection of the vector A onto the vector B. From Wikipedia.

Now, what happens when the vector \vec{a} is orthogonal (at an angle of 90 degrees) to the vector \vec{b}, as in the image below?

Two orthogonal vectors (with 90 degrees angle).

There will be no adjacent side of the triangle: it will be equivalent to zero, the term \displaystyle \|\vec{a}\|\cos{\theta} will be zero, and the resulting multiplication with the magnitude of the vector \vec{b} will also be zero. Now you know that when the dot product between two different vectors is zero, they are orthogonal to each other (they form an angle of 90 degrees); this is a very neat way to check the orthogonality of different vectors. It is also important to note that we are using 2D examples, but the most amazing fact is that we can also calculate angles and similarity between vectors in higher-dimensional spaces, and that is why math lets us see beyond the obvious even when we can't visualize or imagine the angle between, for instance, two vectors with twelve dimensions.

The Cosine Similarity

The cosine similarity between two vectors (or two documents in the Vector Space) is a measure that calculates the cosine of the angle between them. This metric is a measurement of orientation, not magnitude; it can be seen as a comparison between documents in a normalized space, because we're taking into consideration not only the magnitude of each word count (tf-idf) of each document, but the angle between the documents. To build the cosine similarity equation, we solve the dot product equation for \cos{\theta}:

  \displaystyle \vec{a} \cdot \vec{b} = \|\vec{a}\|\|\vec{b}\|\cos{\theta} \\
  \cos{\theta} = \frac{\vec{a} \cdot \vec{b}}{\|\vec{a}\|\|\vec{b}\|}

And that is it, this is the cosine similarity formula. Cosine similarity generates a metric that says how related two documents are by looking at the angle instead of the magnitude, as in the examples below:

The Cosine Similarity values for different documents, 1 (same direction), 0 (90 deg.), -1 (opposite directions).

Note that even if we had a vector pointing far from another vector, they could still have a small angle, and that is the central point of using cosine similarity: the measurement tends to ignore the higher term counts in documents. Suppose we have a document with the word "sky" appearing 200 times and another document with the word "sky" appearing 50 times; the Euclidean distance between them will be large, but the angle will still be small because they point in the same direction, which is what matters when we are comparing documents.
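A quick sketch of that "sky" intuition, with made-up two-feature count vectors (the second one has the same proportions, just four times smaller counts):

import numpy as np

d1 = np.array([200.0, 10.0])
d2 = np.array([50.0, 2.5])
cos_sim = np.dot(d1, d2) / (np.linalg.norm(d1) * np.linalg.norm(d2))
print cos_sim  # 1.0 -- same direction, maximum similarity
print np.linalg.norm(d1 - d2)  # ~150.2 -- yet a large Euclidean distance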

Now that we have a Vector Space Model of documents (like on the image below) modeled as vectors (with TF-IDF counts) and also have a formula to calculate the similarity between different documents in this space, let’s see now how we do it in practice using scikit-learn (sklearn).

Vector Space Model

Practice Using Scikit-learn (sklearn)

* In this tutorial I'm using Python 2.7.5 and scikit-learn 0.14.1.

The first thing we need to do is to define our set of example documents:

documents = (
"The sky is blue",
"The sun is bright",
"The sun in the sky is bright",
"We can see the shining sun, the bright sun"
)

And then we instantiate the Sklearn TF-IDF Vectorizer and transform our documents into the TF-IDF matrix:

from sklearn.feature_extraction.text import TfidfVectorizer
tfidf_vectorizer = TfidfVectorizer()
tfidf_matrix = tfidf_vectorizer.fit_transform(documents)
print tfidf_matrix.shape
(4, 11)
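If you are curious which 11 terms those columns correspond to, the vectorizer keeps the vocabulary; get_feature_names() is the accessor in these older sklearn versions:

print tfidf_vectorizer.get_feature_names()
[u'blue', u'bright', u'can', u'in', u'is', u'see', u'shining', u'sky', u'sun', u'the', u'we']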

Now that we have the TF-IDF matrix (tfidf_matrix) for each document (the rows of the matrix) with 11 tf-idf terms (the columns of the matrix), we can calculate the cosine similarity between the first document ("The sky is blue") and each of the other documents in the set:

from sklearn.metrics.pairwise import cosine_similarity
cosine_similarity(tfidf_matrix[0:1], tfidf_matrix)
array([[ 1.        ,  0.36651513,  0.52305744,  0.13448867]])

tfidf_matrix[0:1] is the SciPy operation to get the first row of the sparse matrix, and the resulting array is the cosine similarity between the first document and all documents in the set. Note that the first value of the array is 1.0 because it is the cosine similarity between the first document and itself. Also note that due to the presence of similar words in the third document ("The sun in the sky is bright"), it achieved a better score.

If you want, you can also solve the Cosine Similarity for the angle between vectors:

  \cos{\theta} = \frac{\vec{a} \cdot \vec{b}}{\|\vec{a}\|\|\vec{b}\|}

We only need to isolate the angle (\theta) and move the \cos to the right-hand side of the equation:

  \theta = \arccos{\frac{\vec{a} \cdot \vec{b}}{\|\vec{a}\|\|\vec{b}\|}}

The \arccos is the same as the inverse of the cosine (\cos^{-1}).

Let's, for instance, check the angle between the first and third documents:
import math
# This was already calculated on the previous step, so we just use the value
cos_sim = 0.52305744
angle_in_radians = math.acos(cos_sim)
print math.degrees(angle_in_radians)
58.462437107432784

And that angle of ~58.5 degrees is the angle between the first and the third documents of our document set.

That is it, I hope you liked this third tutorial!
Cite this article as: Christian S. Perone, "Machine Learning :: Cosine Similarity for Vector Space Models (Part III)," in Terra Incognita, 12/09/2013, https://blog.christianperone.com/2013/09/machine-learning-cosine-similarity-for-vector-space-models-part-iii/.

Related Material

A video about Dot Product on The Khan Academy

Wikipedia: Dot Product

Wikipedia: Cosine Similarity

Scikit-learn (sklearn) – The de facto Machine Learning package for Python

genetic programming, Python

Genetic Programming and a LLVM JIT for restricted Python AST expressions

A small intro on the rationale

So I'm working on a Symbolic Regression Machine written in C/C++ called Shine, which is intended to be a JIT for Genetic Programming libraries (like Pyevolve, for instance). The main rationale behind Shine is that today we have a lot of research on speeding up Genetic Programming using GPUs (the GPU fever!) or other special hardware, but we don't have many papers on optimizing GP using the state-of-the-art compiler optimizations that we have in clang, gcc, etc.

The "hot spot", the component that consumes most of the CPU resources in Genetic Programming today, is the evaluation of each individual in order to calculate the fitness of the program tree. This evaluation is often executed for each set of parameters of the "training" set. Suppose you want to do a symbolic regression of a single expression like the Pythagorean theorem, and you have a linear space of parameters from 1.0 to 1000.0 with a step of 0.1: you have about 10,000 evaluations for each individual (program tree) of your population!

What Shine does is described in the image below:

It takes the individual from the Genetic Programming engine and converts it to the LLVM Intermediate Representation (LLVM assembly language); after that, it runs the LLVM transformation passes (here is where the true power of modern compilers enters the GP context), and then the LLVM JIT converts the optimized LLVM IR into native code for the specified target (x86, PowerPC, etc.).

You can see below the Shine architecture:

This architecture brings a lot of flexibility to Genetic Programming: you can, for instance, write functions to be used later in your individuals in any language supported by LLVM. What matters to Shine is the LLVM IR, so you can use any language that LLVM supports, take the IR generated by LLVM, mix code from C, C++, Ada, Fortran, D, etc., and use your functions as non-terminal nodes of your Genetic Programming trees.

Shine is still in early development; it looks like a simple idea, but I still have a lot of problems to solve, like how to JIT the evaluation process itself instead of calling the JITed trees from Python through ctypes bindings.

Doing Genetic Programming on the Python AST itself

During the development of Shine, an idea occurred to me: I could use a restricted Python Abstract Syntax Tree (AST) as the representation of individuals in a Genetic Programming engine. The main advantage of this is flexibility and the possibility of reusing a lot of things. Of course, a shared library written in C/C++ would be useful for many Genetic Programming engines that don't use Python, but since my spare time to work on this is becoming more and more rare, I started to rethink the approach, using Python and the Python bindings for LLVM (llvmpy), and I discovered that it is pretty easy to JIT a restricted set of the Python AST to native code using LLVM; that is what this post is going to show.

JIT’ing a restricted Python AST

The most amazing parts of LLVM are obviously the number of transformation passes, the JIT, and of course the ability to use the entire framework through a simple API (OK, not so simple sometimes). To simplify this example, I'm going to use an arbitrary restricted subset of the Python AST that supports only subtraction (-), addition (+), multiplication (*), and division (/).

To understand the Python AST, you can use the Python parser, which converts source code into an AST:

>>> import ast
>>> astp = ast.parse("2*7")
>>> ast.dump(astp)
'Module(body=[Expr(value=BinOp(left=Num(n=2), op=Mult(), right=Num(n=7)))])'

What the parser created is an Abstract Syntax Tree containing a BinOp (binary operation) with the left operand as the number 2, the right operand as the number 7, and the operation itself as multiplication (Mult); very easy to understand. What we are going to do to create the LLVM IR is create a visitor that visits each node of the tree. To do that, we can subclass the Python NodeVisitor class from the ast module. What the NodeVisitor does is visit each node of the tree and call the method 'visit_OPERATOR' if it exists; when the NodeVisitor visits the BinOp node, for example, it will call the method 'visit_BinOp', passing the BinOp node itself as a parameter.
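To see the dispatch mechanism in isolation, here is a minimal visitor (separate from the JIT class below) that only prints what it finds while walking the tree:

import ast

class PrintVisitor(ast.NodeVisitor):
    def visit_BinOp(self, node):
        print 'BinOp:', type(node.op).__name__
        self.generic_visit(node)  # keep walking into the children

    def visit_Num(self, node):
        print 'Num:', node.n

PrintVisitor().visit(ast.parse("2*7"))
# Prints: BinOp: Mult / Num: 2 / Num: 7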

The structure of the class for the JIT visitor will look like the code below:

# Import the ast and the llvm Python bindings
import ast
from llvm import *
from llvm.core import *
from llvm.ee import *
import llvm.passes as lp

class AstJit(ast.NodeVisitor):
    def __init__(self):
        pass

What we need to do now is create an initialization method to keep the last state of the JIT visitor. This is needed because we are going to JIT the content of the Python AST into a function, and the last instruction of the function needs to return the result of the last instruction visited by the JIT. We also need to receive the LLVM Module object in which our function will be created, as well as the closure type; for the sake of simplicity, I'm not typing any objects, I'm just assuming that all numbers in the expression are integers, so the closure type will be the LLVM integer type.

def __init__(self, module, parameters):
    self.last_state = None
    self.module = module
    # Parameters that will be created on the IR function
    self.parameters = parameters
    self.closure_type = Type.int()
    # An attribute to hold a link to the created function
    # so we can use it to JIT later
    self.func_obj = None
    self._create_builder()


def _create_builder(self):
    # How many parameters of integer type
    params = [self.closure_type] * len(self.parameters)

    # The prototype of the function, returning an integer
    # and receiving the integer parameters
    ty_func = Type.function(self.closure_type, params)

    # Add the function to the module with the name 'func_ast_jit'
    self.func_obj = self.module.add_function(ty_func, 'func_ast_jit')

    # Create an argument in the function for each parameter specified
    for index, pname in enumerate(self.parameters):
        self.func_obj.args[index].name = pname

    # Create a basic block and the builder
    bb = self.func_obj.append_basic_block("entry")
    self.builder = Builder.new(bb)

Now what we need to implement in our visitor are the 'visit_OPERATOR' methods for the BinOp, Num, and Name nodes. We will also implement a method to create the return instruction that returns the last state.

# A 'Name' is a node produced in the AST when you
# access a variable, like '2+x+y', 'x' and 'y' are
# the two names created on the AST for the expression.
def visit_Name(self, node):
    # This variable is what function argument ?
    index = self.parameters.index(node.id)
    self.last_state = self.func_obj.args[index]
    return self.last_state

# Here we create a LLVM IR integer constant using the
# Num node, on the expression '2+3' you'll have two
# Num nodes, the Num(n=2) and the Num(n=3).
def visit_Num(self, node):
    self.last_state = Constant.int(self.closure_type, node.n)
    return self.last_state

# The visitor for the binary operation
def visit_BinOp(self, node):
    # Get the operation, left and right arguments
    lhs = self.visit(node.left)
    rhs = self.visit(node.right)
    op = node.op

    # Convert each operation (Sub, Add, Mult, Div) to their
    # LLVM IR integer instruction equivalent
    if isinstance(op, ast.Sub):
        op = self.builder.sub(lhs, rhs, 'sub_t')
    elif isinstance(op, ast.Add):
        op = self.builder.add(lhs, rhs, 'add_t')
    elif isinstance(op, ast.Mult):
        op = self.builder.mul(lhs, rhs, 'mul_t')
    elif isinstance(op, ast.Div):
        op = self.builder.sdiv(lhs, rhs, 'sdiv_t')

    self.last_state = op
    return self.last_state

# Build the return (ret) statement with the last state
def build_return(self):
    self.builder.ret(self.last_state)

And that is it, our visitor is ready to convert a Python AST to LLVM IR assembly language. To run it, we'll first create an LLVM module and an expression:

module = Module.new('ast_jit_module')
# Note that I'm using two variables 'a' and 'b'
expr = "(2+3*b+33*(10/2)+1+3/3+a)/2"
node = ast.parse(expr)
print ast.dump(node)

Will output:

Module(body=[Expr(value=BinOp(left=BinOp(left=BinOp(left=BinOp(
left=BinOp(left=BinOp(left=Num(n=2), op=Add(), right=BinOp(
left=Num(n=3), op=Mult(), right=Name(id='b', ctx=Load()))), op=Add(),
right=BinOp(left=Num(n=33), op=Mult(), right=BinOp(left=Num(n=10),
op=Div(), right=Num(n=2)))), op=Add(), right=Num(n=1)), op=Add(),
right=BinOp(left=Num(n=3), op=Div(), right=Num(n=3))), op=Add(),
right=Name(id='a', ctx=Load())), op=Div(), right=Num(n=2)))])


Now we can finally run our visitor on the generated AST and check the LLVM IR output:

visitor = AstJit(module, ['a', 'b'])
visitor.visit(node)
visitor.build_return()
print module

Will output the LLVM IR:

; ModuleID = 'ast_jit_module'

define i32 @func_ast_jit(i32 %a, i32 %b) {
entry:
  %mul_t = mul i32 3, %b
  %add_t = add i32 2, %mul_t
  %add_t1 = add i32 %add_t, 165
  %add_t2 = add i32 %add_t1, 1
  %add_t3 = add i32 %add_t2, 1
  %add_t4 = add i32 %add_t3, %a
  %sdiv_t = sdiv i32 %add_t4, 2
  ret i32 %sdiv_t
}

Now is when the real fun begins: we want to run LLVM optimization passes to optimize our code to a level equivalent to GCC -O2. To do that, we create a PassManagerBuilder and a PassManager; the PassManagerBuilder is the component that adds the passes to the PassManager, and you can also manually add arbitrary transformations like dead code elimination, function inlining, etc.:

pmb = lp.PassManagerBuilder.new()
# Optimization level
pmb.opt_level = 2

pm = lp.PassManager.new()
pmb.populate(pm)

# Run the passes into the module
pm.run(module)
print module

Will output:

; ModuleID = 'ast_jit_module'

define i32 @func_ast_jit(i32 %a, i32 %b) nounwind readnone {
entry:
  %mul_t = mul i32 %b, 3
  %add_t3 = add i32 %a, 169
  %add_t4 = add i32 %add_t3, %mul_t
  %sdiv_t = sdiv i32 %add_t4, 2
  ret i32 %sdiv_t
}

And here we have the optimized LLVM IR of the Python AST expression. The next step is to JIT that IR into native code and then execute it with some parameters:

ee = ExecutionEngine.new(module)
arg_a = GenericValue.int(Type.int(), 100)
arg_b = GenericValue.int(Type.int(), 42)

retval = ee.run_function(visitor.func_obj, [arg_a, arg_b])
print "Return: %d" % retval.as_int()

Will output:

Return: 197

And that's it: you have created an AST-to-LLVM-IR converter, optimized the LLVM IR with the transformation passes, and then converted it to native code using the LLVM execution engine. I hope you liked it =)

Cite this article as: Christian S. Perone, "Genetic Programming and a LLVM JIT for restricted Python AST expressions," in Terra Incognita, 15/08/2012, https://blog.christianperone.com/2012/08/genetic-programming-and-a-llvm-jit-for-restricted-python-ast-expressions/.
Python

Review Board – Review Requests plotting using Python

Review Board is one of those projects the Python community is always proud of; I really think it has become the de facto standard for code reviews nowadays. Review Board comes with an interesting and very useful REST API that you can use to retrieve information about code reviews, comments, diffs, and every single piece of information stored in its database. So, I created a small Python script called rbstats that retrieves information about the Review Requests made by users and then plots a heat map using matplotlib. I'll show an example using the Review Board system of the Apache foundation.
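rb_stats itself does more than this, but the core fetch-and-aggregate step can be sketched roughly as below; the /api/review-requests/ resource is part of Review Board's public REST API, while the field handling and the simple bar plot here are my own guess, not the tool's actual code:

import json
import urllib2
import matplotlib.pyplot as plt

url = "https://reviews.apache.org/api/review-requests/?max-results=80"
data = json.load(urllib2.urlopen(url))

# Count review requests per submitter.
counts = {}
for rr in data["review_requests"]:
    user = rr["links"]["submitter"]["title"]
    counts[user] = counts.get(user, 0) + 1

plt.bar(range(len(counts)), counts.values())
plt.xticks(range(len(counts)), counts.keys(), rotation=90)
plt.show()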

To use the tool, just call it with the API URL of the Review Board system, e.g.:

python rb_stats.py --max-results 80 https://reviews.apache.org/api

and then you'll get a graphical plot like this (click to enlarge):

Here the "hottest" points are weighted according to the number of code reviews that the user on one axis has created for the user on the other axis. You can also plot the statistics of a single user, for instance benjaminhindman, using the autumn colormap and a maximum of 400 results:

python rb_stats.py \
    --max-results 400 \
    --from-user benjaminhindman \
    --colormap autumn \
    https://reviews.apache.org/api


Click to enlarge the image:
