## Benford’s law emerges from a deep language model

Reading Time: 1 minute

I was experimenting with the digit distribution of a pre-trained Transformer language model (LM) (weights from the OpenAI repository), and I found a very interesting correlation between Benford’s law and the digit distribution of the language model after conditioning it on some particular phrases.
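For reference, Benford’s law predicts that in many naturally occurring collections of numbers the leading digit $$d$$ appears with probability:

$$P(d) = \log_{10}\left(1 + \frac{1}{d}\right), \quad d \in \{1, \ldots, 9\}$$

so the digit 1 is expected to lead about 30.1% of the time, while the digit 9 leads less than 5% of the time.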

Below is the correlation between Benford’s law and the language model after conditioning on one of these phrases (shown in the figure):

## PyTorch 1.0 tracing JIT and LibTorch C++ API to integrate PyTorch into NodeJS

Reading Time: 8 minutes

Update 28 Feb 2019: I added a new blog post with a slide deck containing the presentation I did for PyData Montreal.

Today, at the PyTorch Developer Conference, the PyTorch team announced the plans for and the release of the PyTorch 1.0 preview, with many nice features such as a JIT for model graphs (with and without tracing) as well as LibTorch, the PyTorch C++ API, which is in my opinion one of the most important release announcements made today.

Given the huge interest in understanding how this new API works, I decided to write this article showing an example of the many opportunities that are now open after the release of the PyTorch C++ API. In this post, I’ll integrate PyTorch inference into native NodeJS using NodeJS C++ add-ons, just as an example of the integration between different frameworks/languages that is now possible using the C++ API.

Below you can see the final result:

As you can see, the integration is seamless and I could use a traced ResNet as the computational graph model and feed any tensor to it to get the output predictions.

### Introduction

Simply put, libtorch is a library version of PyTorch. It contains the underlying foundation that is used by PyTorch, such as ATen (the tensor library), which contains all the tensor operations and methods. Libtorch also contains the autograd component, which adds automatic differentiation to ATen tensors.

A word of caution for those who are starting now: be careful with tensors, which can be created both from ATen and from autograd, and do not mix them. ATen will return plain tensors (when you create them using the at namespace), while the autograd functions (from the torch namespace) will return Variables, adding the automatic differentiation mechanism.
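To make this distinction concrete, here is a minimal sketch of the two namespaces (my own example, assuming the libtorch 1.0 preview API):

#include <iostream>
#include <torch/torch.h>

int main() {
  // A plain ATen tensor: no autograd machinery attached
  at::Tensor a = at::ones({2, 2});

  // A tensor created through the torch namespace: it participates in
  // automatic differentiation when requires_grad is set
  torch::Tensor b = torch::ones({2, 2}, torch::requires_grad());

  // Autograd can backpropagate through operations on b
  auto loss = (b * b).sum();
  loss.backward();

  std::cout << b.grad() << std::endl;
  return 0;
}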

For a more extensive tutorial on how PyTorch internals work, please take a look at my previous tutorial on the PyTorch internal architecture.

Libtorch can be downloaded from the PyTorch website, and it will only be available as a preview for a while. You can also find the documentation on this site, which is mostly Doxygen-rendered documentation. I found the library pretty stable, which makes sense given that it is actually exposing the stable foundations of PyTorch; however, there are some issues with headers and some minor problems concerning the library organization that you might find while starting to work with it (these will hopefully be fixed soon).

For NodeJS, I’ll use the Native Abstractions library (nan), which is the most recommended library (it is basically a header-only library) for creating NodeJS C++ add-ons, together with cmake-js, because libtorch already provides the cmake files that make our building process much easier. However, the focus here will be on the C++ code and not on the building process.

The flow for developing, tracing, serializing, and loading the model can be seen in the figure on the left.

It starts with the development process and tracing being done in PyTorch (Python domain), followed by the loading and inference in the C++ domain (in our case, the NodeJS add-on).

### Wrapping the Tensor

In NodeJS, to create an object as a first-class citizen of the JavaScript world, you need to inherit from the ObjectWrap class, which will be responsible for wrapping a C++ component.

#ifndef TENSOR_H
#define TENSOR_H

#include <nan.h>
#include <torch/torch.h>

namespace torchjs {

class Tensor : public Nan::ObjectWrap {
 public:
  static NAN_MODULE_INIT(Init);

  void setTensor(at::Tensor tensor) {
    this->mTensor = tensor;
  }

  torch::Tensor getTensor() {
    return this->mTensor;
  }

  static v8::Local<v8::Object> NewInstance();

 private:
  explicit Tensor();
  ~Tensor();

  static NAN_METHOD(New);
  static NAN_METHOD(toString);
  static Nan::Persistent<v8::Function> constructor;

  torch::Tensor mTensor;
};

}  // namespace torchjs

#endif

As you can see, most of the code for the definition of our Tensor class is just boilerplate. The key point here is that torchjs::Tensor will wrap a torch::Tensor, and we added two special public methods (setTensor and getTensor) to set and get this internal torch tensor.

I won’t show all the implementation details, because most of it is NodeJS boilerplate code to construct the object, etc. I’ll focus on the parts that touch the libtorch API, like the code below, where we are creating a small textual representation of the tensor to show in JavaScript (the toString method):

NAN_METHOD(Tensor::toString) {
  Tensor* obj = ObjectWrap::Unwrap<Tensor>(info.Holder());
  std::stringstream ss;

  at::IntList sizes = obj->mTensor.sizes();
  ss << "Tensor[Type=" << obj->mTensor.type() << ", ";
  ss << "Size=" << sizes << "]";

  info.GetReturnValue().Set(Nan::New(ss.str()).ToLocalChecked());
}

What we are doing in the code above is just getting the internal tensor object from the wrapped object by unwrapping it. After that, we build a string representation with the tensor sizes (the size of each dimension) and its type (float, etc.).

### Wrapping Tensor-creation operations

Let’s now create the wrapper code for the torch::ones function, which is responsible for creating a tensor of any defined shape filled with constant 1’s.

NAN_METHOD(ones) {
  // Sanity checking of the arguments
  if (info.Length() < 2)
    return Nan::ThrowError(Nan::New("Wrong number of arguments").ToLocalChecked());

  if (!info[0]->IsArray() || !info[1]->IsBoolean())
    return Nan::ThrowError(Nan::New("Wrong argument types").ToLocalChecked());

  // Retrieving parameters (require_grad and tensor shape)
  const bool require_grad = info[1]->BooleanValue();
  const v8::Local<v8::Array> array = info[0].As<v8::Array>();
  const uint32_t length = array->Length();

  // Convert from v8::Array to std::vector
  std::vector<int64_t> dims;
  for (uint32_t i = 0; i < length; i++) {
    const int64_t d = array->Get(i)->NumberValue();
    dims.push_back(d);
  }

  // Call libtorch and create a new torchjs::Tensor object
  // wrapping the new torch::Tensor that was created by torch::ones
  torch::Tensor v = torch::ones(dims, torch::requires_grad(require_grad));
  auto newinst = Tensor::NewInstance();
  Tensor* obj = Nan::ObjectWrap::Unwrap<Tensor>(newinst);
  obj->setTensor(v);

  info.GetReturnValue().Set(newinst);
}

So, let’s go through this code. We first check the arguments of the function: we are expecting a tuple (a JavaScript array) with the tensor shape and a boolean indicating whether or not we want to compute gradients for this tensor node. After that, we convert the parameters from V8 JavaScript types into native C++ types. As soon as we have the required parameters, we call the torch::ones function from libtorch; this function creates a new tensor, which we then wrap using the torchjs::Tensor class that we created earlier.

And that’s it: we just exposed one torch operation that can be used as a native JavaScript operation.

### Intermezzo for the PyTorch JIT

The newly introduced PyTorch JIT revolves around the concept of the Torch Script. A Torch Script is a restricted subset of the Python language that comes with its own compiler and transform passes (optimizations, etc.).

This script can be created in two different ways: by using a tracing JIT or by providing the script itself. In the tracing mode, your computational graph nodes will be visited and the operations recorded to produce the final script, while scripting is the mode where you provide this description of your model yourself, taking into account the restrictions of the Torch Script.

Note that if your code has branching decisions that depend on external factors or data, tracing won’t work as you expect, because it will record only that particular execution of the graph; hence the alternative option of providing the script. However, in most cases, tracing is what we need.

To understand the differences, let’s take a look at the Intermediate Representation (IR) from the script module generated both by tracing and by scripting.

@torch.jit.script
def happy_function_script(x):
    ret = torch.rand(0)
    if True == True:
        ret = torch.rand(1)
    else:
        ret = torch.rand(2)
    return ret

def happy_function_trace(x):
    ret = torch.rand(0)
    if True == True:
        ret = torch.rand(1)
    else:
        ret = torch.rand(2)
    return ret

traced_fn = torch.jit.trace(happy_function_trace,
                            (torch.tensor(0),),
                            check_trace=False)

In the code above, we’re providing two functions: one uses the @torch.jit.script decorator, which is the scripting way to create a Torch Script, while the second function is used by the tracing function torch.jit.trace. Note that I intentionally added a “True == True” decision to the functions (which will always be true).

Now, if we inspect the IR generated by these two different approaches, we’ll clearly see the difference between the tracing and scripting approaches:

# 1) Graph from the scripting approach
graph(%x : Dynamic) {
  %16 : int = prim::Constant[value=2]()
  %10 : int = prim::Constant[value=1]()
  %7 : int = prim::Constant[value=1]()
  %8 : int = prim::Constant[value=1]()
  %9 : int = aten::eq(%7, %8)
  %ret : Dynamic = prim::If(%9)
    block0() {
      %11 : int[] = prim::ListConstruct(%10)
      %12 : int = prim::Constant[value=6]()
      %13 : int = prim::Constant[value=0]()
      %14 : int[] = prim::Constant[value=[0, -1]]()
      %ret.2 : Dynamic = aten::rand(%11, %12, %13, %14)
      -> (%ret.2)
    }
    block1() {
      %17 : int[] = prim::ListConstruct(%16)
      %18 : int = prim::Constant[value=6]()
      %19 : int = prim::Constant[value=0]()
      %20 : int[] = prim::Constant[value=[0, -1]]()
      %ret.3 : Dynamic = aten::rand(%17, %18, %19, %20)
      -> (%ret.3)
    }
  return (%ret);
}

# 2) Graph from the tracing approach
graph(%0 : Long()) {
  %7 : int = prim::Constant[value=1]()
  %8 : int[] = prim::ListConstruct(%7)
  %9 : int = prim::Constant[value=6]()
  %10 : int = prim::Constant[value=0]()
  %11 : int[] = prim::Constant[value=[0, -1]]()
  %12 : Float(1) = aten::rand(%8, %9, %10, %11)
  return (%12);
}

As we can see, the IR is very similar to the LLVM IR. Note that in the tracing approach, the recorded trace contains only one path from the code (the true path), while in the scripting approach we have both branching alternatives. However, even in scripting, the always-false branch can be optimized away with a dead code elimination transform pass.

PyTorch JIT has a lot of transformation passes that are used for loop unrolling, dead code elimination, etc. You can find these passes here. Note that conversion to other formats such as ONNX can be implemented as a pass on top of this intermediate representation (IR), which is quite convenient.

### Tracing the ResNet

Now, before implementing the Script Module in NodeJS, let’s first trace a ResNet network using PyTorch (using just Python):

import torch
import torchvision

traced_net = torch.jit.trace(torchvision.models.resnet18(),
                             torch.rand(1, 3, 224, 224))
traced_net.save("resnet18_trace.pt")

As you can see from the code above, we just have to provide a tensor example (in this case, a batch of a single image with 3 channels and a size of 224×224). After that, we just save the traced network into a file called resnet18_trace.pt.

Now we’re ready to implement the Script Module in NodeJS in order to load this file that was traced.

### Wrapping the Script Module

This is now the implementation of the Script Module in NodeJS:

// Class constructor
ScriptModule::ScriptModule(const std::string filename) {
  // Load the traced network from the file (in the 1.0 preview,
  // torch::jit::load returns a shared_ptr to a script::Module)
  mModule = torch::jit::load(filename);
}

// JavaScript object creation
NAN_METHOD(ScriptModule::New) {
  if (info.IsConstructCall()) {
    // Get the filename parameter
    v8::String::Utf8Value param_filename(info[0]->ToString());
    const std::string filename = std::string(*param_filename);

    // Create a new script module using that file name
    ScriptModule *obj = new ScriptModule(filename);
    obj->Wrap(info.This());
    info.GetReturnValue().Set(info.This());
  } else {
    v8::Local<v8::Function> cons = Nan::New(constructor);
    info.GetReturnValue().Set(Nan::NewInstance(cons).ToLocalChecked());
  }
}


As you can see from the code above, we’re just creating a class that calls the torch::jit::load function, passing the file name of the traced network. We also have the implementation of the JavaScript object creation, where we convert the parameters to C++ types and then create a new instance of torchjs::ScriptModule.

The wrapping of the forward pass is also quite straightforward:

NAN_METHOD(ScriptModule::forward) {
  ScriptModule* script_module = ObjectWrap::Unwrap<ScriptModule>(info.Holder());
  Nan::MaybeLocal<v8::Object> maybe = Nan::To<v8::Object>(info[0]);

  Tensor* tensor = Nan::ObjectWrap::Unwrap<Tensor>(maybe.ToLocalChecked());

  torch::Tensor torch_tensor = tensor->getTensor();
  torch::Tensor output = script_module->mModule->forward({torch_tensor}).toTensor();

  auto newinst = Tensor::NewInstance();
  Tensor* obj = Nan::ObjectWrap::Unwrap<Tensor>(newinst);
  obj->setTensor(output);
  info.GetReturnValue().Set(newinst);
}

As you can see, in this code we just receive a tensor as an argument, get the internal torch::Tensor from it, and then call the forward method of the script module; we wrap the output in a new torchjs::Tensor and then return it.

And that’s it, we’re ready to use our built module in native NodeJS as in the example below:

var torchjs = require("./build/Release/torchjs");
var script_module = new torchjs.ScriptModule("resnet18_trace.pt");
var data = torchjs.ones([1, 3, 224, 224], false);
var output = script_module.forward(data);

I hope you enjoyed it! Libtorch opens the door to the tight integration of PyTorch with many different languages and frameworks, which is quite exciting and a huge step towards production deployment.

– Christian S. Perone

## Concentration inequalities – Part I

Reading Time: 4 minutes

### Introduction

Concentration inequalities, or probability bounds, are very important tools for the analysis of Machine Learning algorithms or randomized algorithms. In statistical learning theory, we often want to show that random variables, given some assumptions, are close to their expectations with high probability. This article provides an overview of the most basic inequalities used in the analysis of these concentration measures.

### Markov’s Inequality

Markov’s inequality is one of the most basic bounds, and it assumes almost nothing about the random variable. The assumptions that Markov’s inequality makes are that the random variable $$X$$ is non-negative ($$X \geq 0$$) and has a finite expectation ($$\mathbb{E}\left[X\right] < \infty$$). Markov’s inequality is given by:

$$\underbrace{P(X \geq \alpha)}_{\text{Probability of being greater than constant } \alpha} \leq \underbrace{\frac{\mathbb{E}\left[X\right]}{\alpha}}_{\text{Bounded above by expectation over constant } \alpha}$$

What this means is that the probability that the random variable $$X$$ is greater than or equal to the constant $$\alpha$$ is bounded above by the expectation of $$X$$ divided by $$\alpha$$. What is remarkable about this bound is that it holds for any distribution with non-negative values, and it doesn’t depend on any feature of the probability distribution; it only requires some weak assumptions and the first moment, the expectation.
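A short derivation shows where the bound comes from (sketched here for a continuous non-negative $$X$$ with density $$f$$):

$$\mathbb{E}\left[X\right] = \int_{0}^{\infty} x f(x)\,dx \geq \int_{\alpha}^{\infty} x f(x)\,dx \geq \alpha \int_{\alpha}^{\infty} f(x)\,dx = \alpha P(X \geq \alpha)$$

Dividing both sides by $$\alpha$$ gives Markov’s inequality.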

Example: a grocery store sells an average of 40 beers per day (it’s summer!). What is the probability that it will sell 80 or more beers tomorrow?

\begin{align} P(X \geq \alpha) & \leq\frac{\mathbb{E}\left[X\right]}{\alpha} \\\\ P(X \geq 80) & \leq\frac{40}{80} = 0.5 = 50\% \end{align}

Markov’s inequality doesn’t depend on any property of the random variable’s probability distribution, so it’s obvious that there are better bounds to use if information about the probability distribution is available.

### Chebyshev’s Inequality

When we have information about the underlying distribution of a random variable, we can take advantage of properties of this distribution to know more about the concentration of this variable. Let’s take for example a normal distribution with mean $$\mu = 0$$ and unit standard deviation $$\sigma = 1$$ given by the probability density function (PDF) below:

$$f(x) = \frac{1}{\sqrt{2\pi}}e^{-x^2/2}$$

Integrating from -1 to 1, $$\int_{-1}^{1} \frac{1}{\sqrt{2\pi}}e^{-x^2/2}\,dx$$, we know that about 68% of the data is within $$1\sigma$$ (one standard deviation) of the mean $$\mu$$, and about 95% is within $$2\sigma$$ of the mean. However, when it’s not possible to assume normality, any other amount of data can be concentrated within $$1\sigma$$ or $$2\sigma$$.
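For reference, evaluating these integrals for the standard normal gives:

$$\int_{-1}^{1} \frac{1}{\sqrt{2\pi}}e^{-x^2/2}\,dx \approx 0.6827 \qquad \int_{-2}^{2} \frac{1}{\sqrt{2\pi}}e^{-x^2/2}\,dx \approx 0.9545$$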

Chebyshev’s inequality provides a way to get a bound on the concentration for any distribution, without assuming any underlying property except a finite mean and variance. Chebyshev’s also holds for any random variable, not only for non-negative variables as in Markov’s inequality.

Chebyshev’s inequality is given by the following relation:

$$P(\mid X - \mu \mid \geq k\sigma) \leq \frac{1}{k^2}$$

which can also be rewritten as:

$$P(\mid X - \mu \mid < k\sigma) \geq 1 - \frac{1}{k^2}$$
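In fact, Chebyshev’s inequality is just Markov’s inequality applied to the non-negative random variable $$(X - \mu)^2$$:

$$P(\mid X - \mu \mid \geq k\sigma) = P\left((X - \mu)^2 \geq k^2\sigma^2\right) \leq \frac{\mathbb{E}\left[(X - \mu)^2\right]}{k^2\sigma^2} = \frac{\sigma^2}{k^2\sigma^2} = \frac{1}{k^2}$$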

For the concrete case of $$k = 2$$, Chebyshev’s inequality tells us that at least 75% of the data is concentrated within 2 standard deviations of the mean, since $$1 - \frac{1}{2^2} = 0.75$$. And this holds for any distribution.

Now, when we compare this result for $$k = 2$$ with the 95% concentration of the normal distribution within $$2\sigma$$, we can see how conservative Chebyshev’s bound is. However, one must not forget that this holds for any distribution, not only for a normally distributed random variable, and all that Chebyshev’s inequality needs are the first and second moments of the data. Something important to note is that, in the absence of more information about the random variable, this bound cannot be improved.

### Chebyshev’s Inequality and the Weak Law of Large Numbers

Chebyshev’s inequality can also be used to prove the weak law of large numbers, which says that the sample mean converges in probability towards the true mean.

That can be done as follows:

• Consider a sequence of i.i.d. (independent and identically distributed) random variables $$X_1, X_2, X_3, \ldots$$ with mean $$\mu$$ and variance $$\sigma^2$$;
• The sample mean is $$M_n = \frac{X_1 + \ldots + X_n}{n}$$ and the true mean is $$\mu$$;
• For the expectation of the sample mean we have: $$\mathbb{E}\left[M_n\right] = \frac{\mathbb{E}\left[X_1\right] + \ldots +\mathbb{E}\left[X_n\right]}{n} = \frac{n\mu}{n} = \mu$$
• For the variance of the sample mean we have: $$Var\left[M_n\right] = \frac{Var\left[X_1\right] + \ldots +Var\left[X_n\right]}{n^2} = \frac{n\sigma^2}{n^2} = \frac{\sigma^2}{n}$$
• By applying Chebyshev’s inequality, we have: $$P(\mid M_n - \mu \mid \geq \epsilon) \leq \frac{\sigma^2}{n\epsilon^2}$$ for any (fixed) $$\epsilon > 0$$. As $$n$$ increases, the right side of the inequality goes to zero. Intuitively, this means that for a large $$n$$, the distribution of $$M_n$$ concentrates around $$\mu$$.
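In other words, for every fixed $$\epsilon > 0$$:

$$\lim_{n \to \infty} P(\mid M_n - \mu \mid \geq \epsilon) = 0$$

which is precisely the statement that the sample mean $$M_n$$ converges in probability to the true mean $$\mu$$.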

### Improving on Markov’s and Chebyshev’s with Chernoff Bounds

Before getting into the Chernoff bound, let’s understand the motivation behind it and how one can improve on Chebyshev’s bound. To understand this, we first need to understand the difference between pairwise independence and mutual independence. For the pairwise independence of three events A, B, and C, we have:

$$P(A \cap B) = P(A)P(B) \\ P(A \cap C) = P(A)P(C) \\ P(B \cap C) = P(B)P(C)$$

This means that any pair (any two events) is independent, but not necessarily that:

$$P(A \cap B\cap C) = P(A)P(B)P(C)$$

which is called “mutual independence” and is a stronger notion of independence. By definition, mutual independence implies pairwise independence, but the opposite isn’t always true. And this is the case where we can improve on Chebyshev’s bound, as it is not possible to do so without making these further assumptions (stronger assumptions lead to stronger bounds).
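A classic counterexample makes the gap concrete: flip two fair coins, let $$A$$ be “the first coin shows heads”, $$B$$ be “the second coin shows heads”, and $$C$$ be “the two coins show different faces”. Each of the three pairs is independent (each pairwise intersection has probability $$\frac{1}{4} = \frac{1}{2} \cdot \frac{1}{2}$$), but:

$$P(A \cap B \cap C) = 0 \neq \frac{1}{8} = P(A)P(B)P(C)$$

since both coins cannot show heads and show different faces at the same time.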

We’ll talk about the Chernoff bounds in the second part of this tutorial!

## NLP word representations and the Wittgenstein philosophy of language

Reading Time: 6 minutes

I gave an introductory talk on word embeddings in the past, and this write-up is an extended version of the part about the philosophical ideas behind word vectors. The aim of this article is to provide an introduction to Ludwig Wittgenstein’s main ideas on linguistics that are closely related to techniques that are distributional by design (I’ll talk about what this means later), such as word2vec [Mikolov et al., 2013], GloVe [Pennington et al., 2014], and Skip-Thought Vectors [Kiros et al., 2015], among others.

One of the most interesting aspects of Wittgenstein is perhaps the fact that he developed two very different philosophies during his life, each of which had great influence. Something quite rare for someone who spent so much time working on these ideas and then retreated from them, even after the major influence they exerted, especially on the Vienna Circle. A true lesson in intellectual honesty and, in my opinion, an important legacy.

Wittgenstein was an avid reader of Schopenhauer’s philosophy, and in the same way that Schopenhauer inherited his philosophy from Kant, especially regarding the division between what can be experienced (phenomena) and what cannot (noumena), contrasting things as they appear to us with things as they are in themselves, Wittgenstein concluded that Schopenhauer’s philosophy was fundamentally right. He believed that in the noumenal realm we have no conceptual understanding, and therefore we will never be able to say anything about it (without lapsing into nonsense), in contrast to the phenomenal realm of our experience, which we can indeed talk about and try to understand. By adding secure foundations, such as logic, to the phenomenal world, he was able to reason about how the world is describable by language, and thus to map the limits of how and what can be expressed in language or in conceptual thought.

The first main theory of language from Wittgenstein, described in his Tractatus Logico-Philosophicus, is known as the “picture theory of language” (aka the picture theory of meaning). This theory is based on an analogy with painting: Wittgenstein realized that a painting is something very different from the natural landscape it depicts; however, a skilled painter can still represent the real landscape by placing patches or strokes corresponding to the reality of the natural landscape. Wittgenstein gave the name “logical form” to this set of relationships between the painting and the natural landscape. This logical form, the set of internal relationships common to both representations, is why the painter was able to represent reality (here I call both “representations” to be coherent with Schopenhauer’s and Kant’s terms, because reality is also a representation for us, as distinguished from the thing-in-itself).

This theory was important, especially in our context (NLP), because Wittgenstein realized that the same thing happens with language: we are able to assemble words into sentences to match the same logical form of what we want to describe. The logical form was the core idea that made us able to talk about the world. However, Wittgenstein later realized that he had picked just a single task, out of the vast number of tasks that language can perform, and had created a whole theory of meaning around it.

The fact is, language can do many other things besides representing (picturing) reality. With language, as Wittgenstein noticed, we can give orders, and we can’t say that an order is a picture of something. As soon as he realized these counter-examples, Wittgenstein abandoned the picture theory of language and adopted a much more powerful metaphor: language as a tool. And here we’re approaching the modern view of meaning in language, as well as the main foundational idea behind many modern Machine Learning techniques for word/sentence representations that work quite well. Once you realize that language works as a tool, if you want to understand the meaning of it, you just need to understand all the possible things you can do with it. And if you take, for instance, a word or concept in isolation, its meaning is the sum of all of its uses, and this meaning is fluid and can have many different faces. This important thought can be summarized in the well-known quote below:

The meaning of a word is its use in the language.

(…)

One cannot guess how a word functions. One has to look at its use, and learn from that.

– Ludwig Wittgenstein, Philosophical Investigations

And indeed it makes complete sense, because once you exhaust all the uses of a word, there is nothing left of it. Reality is also far more fluid than usually thought, because:

Our language can be seen as an ancient city: a maze of little streets and squares, of old and new houses, and of houses with additions from various periods (…)

– Ludwig Wittgenstein, Philosophical Investigations

John R. Firth was a linguist also known for popularizing this context-dependent nature of meaning; he too used Wittgenstein’s Philosophical Investigations as a resource to emphasize the importance of context for meaning, as I quote below:

The placing of a text as a constituent in a context of situation contributes to the statement of meaning since situations are set up to recognize use. As Wittgenstein says, ‘the meaning of words lies in their use.’ (Phil. Investigations, 80, 109). The day-to-day practice of playing language games recognizes customs and rules. It follows that a text in such established usage may contain sentences such as ‘Don’t be such an ass !’, ‘You silly ass !’, ‘What an ass he is !’ In these examples, the word ass is in familiar and habitual company, commonly collocated with you silly-, he is a silly-, don’t be such an-. You shall know a word by the company it keeps ! One of the meanings of ass is its habitual collocation with such other words as those above quoted. Though Wittgenstein was dealing with another problem, he also recognizes the plain face-value, the physiognomy of words. They look at us ! ‘The sentence is composed of words and that is enough’.

– John R. Firth

This idea of learning the meaning of a word by the company it keeps is exactly what word2vec (and also other count-based methods based on co-occurrence) is doing by means of data, learning in an unsupervised fashion with a supervised task that was by design built to predict context (or to predict a word given its context, depending on whether you use skip-gram or CBOW); it was also a source of inspiration for the Skip-Thought Vectors. Nowadays, this idea is also known as the “Distributional Hypothesis”, which is also being used in fields other than linguistics.

Now, it is quite amazing that if we look at the work by Neelakantan et al., 2015, called “Efficient Non-parametric Estimation of Multiple Embeddings per Word in Vector Space”, where they point out an important deficiency in word2vec, namely that each word type has only one vector representation, you’ll see that this has deep philosophical motivations if we relate it to the ideas of Wittgenstein and Firth: as Wittgenstein noticed, the meaning of a word is unlikely to wear a single face, and word2vec seems to be converging to an approximation of the average meaning of a word instead of capturing the polysemy inherent in language.

A concrete example of the multi-faceted nature of words can be seen in the word “evidence”, whose meaning can be quite different to a historian, a lawyer, and a physicist. Hearsay cannot count as evidence in a court, while it is many times the only evidence that a historian has, and it doesn’t even arise in physics. Recent works such as ELMo [Peters, Matthew E. et al., 2018], which uses different levels of features from an LSTM trained with a language model objective, are also a very interesting direction, with excellent results, towards incorporating context-dependent semantics into word representations and breaking with the tradition of the shallow representations seen in word2vec.

We’re in an exciting time, when it is really amazing to see how many deep philosophical foundations are actually hidden in Machine Learning techniques. It is also very interesting that we’re learning a lot of linguistic lessons from Machine Learning experimentation, which we can see as an important means of discovery, forming an amazing virtuous circle. I think we have never been as self-conscious and concerned with language as we have in the past few years.

I really hope you enjoyed reading this!

– Christian S. Perone

### References

Magee, Bryan. The history of philosophy. 1998.

Mikolov, Tomas et al. Efficient Estimation of Word Representations in Vector Space. 2013. https://arxiv.org/abs/1301.3781

Pennington, Jeffrey et al. GloVe: Global Vectors for Word Representation. 2014. https://nlp.stanford.edu/projects/glove/

Kiros, Ryan et al. Skip-Thought Vectors. 2015. https://arxiv.org/abs/1506.06726

Neelakantan, Arvind et al. Efficient Non-parametric Estimation of Multiple Embeddings per Word in Vector Space. 2015. https://arxiv.org/abs/1504.06654

Peters, Matthew E. et al. Deep contextualized word representations. 2018. https://arxiv.org/abs/1802.05365

Léon, Jacqueline. Meaning by collocation. The Firthian filiation of Corpus Linguistics. 2007.