The “tribes” of machine learning

This is a Google auto-translation. Sorry if there are any mistakes...

Talking about machine learning means talking about a fairly novel concept that is closely related to artificial intelligence. The term is already appearing in strategic discussions when we think about the profiles and technologies that our organizations will need very soon.
Basically, it is a question of determining how machines should learn to interpret the data we have today in order to learn from it. Something that, over time, will only grow, since it is considered the future of technology within computer science.

Pedro Domingos, in his book “The Master Algorithm”, describes the five currents or schools that differ in the discipline with which they approach machine learning or, in other words, in the type of solution (seen as an algorithm) that they use. The five tribes are: Symbolists, Connectionists, Evolutionists, Bayesians, and Analogizers. The Connectionist current encompasses what is known as deep learning, which is nothing more than a set of algorithms used to interpret and draw conclusions from an abstract data model.

The inverse deduction of the symbolists

To speak of the symbolist current is to speak of a current that has logic and philosophy as its fundamental pillars. From there they practice what is called inverse deduction: a technique that tries to fill in the gaps that remain unresolved within the machine's knowledge.
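To make the idea concrete, here is a hypothetical toy sketch (not an actual inverse-deduction engine): given the fact “Socrates is human” and the observed conclusion “Socrates is mortal”, the missing general rule “humans are mortal” can be induced. The predicate names and helper function are made up for illustration.

```python
# Toy "inverse deduction": induce the general rule that bridges
# a known fact and an observed conclusion about the same entity.
facts = {("human", "Socrates")}
conclusions = {("mortal", "Socrates")}

def induce_rules(facts, conclusions):
    """For every fact and conclusion that mention the same entity,
    propose the general rule: fact_predicate(X) -> conclusion_predicate(X)."""
    rules = set()
    for f_pred, f_arg in facts:
        for c_pred, c_arg in conclusions:
            if f_arg == c_arg:
                rules.add((f_pred, c_pred))  # e.g. human(X) -> mortal(X)
    return rules

print(induce_rules(facts, conclusions))  # {('human', 'mortal')}
```

Real symbolist systems work over far richer logic programs, but the pattern is the same: deduction runs rules forward, inverse deduction runs the gap backward to propose the rule itself.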


Connectionists and Neuroscience

As for the proponents of the connectionist current, it must be said that they do not believe logic has much impact on machine learning. Instead, they try to create small brains using what is called backpropagation. The idea is that the interconnections allowed by a brain-like structure, as proposed by neuroscience, are the best way to interpret something as complex as big data.
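A minimal sketch of the idea, assuming the simplest possible “brain”: a single sigmoid neuron trained by gradient descent to learn logical OR. The dataset, learning rate, and epoch count are all illustrative choices, and a real network would have many layers through which the error is propagated backward.

```python
import math
import random

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Toy dataset: logical OR of two binary inputs.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

w = [random.uniform(-1, 1) for _ in range(2)]  # connection weights
b = 0.0                                        # bias
lr = 1.0                                       # learning rate

for _ in range(2000):
    for (x1, x2), y in data:
        out = sigmoid(w[0] * x1 + w[1] * x2 + b)
        # Gradient of the squared error, pushed back through the sigmoid:
        grad = (out - y) * out * (1 - out)
        w[0] -= lr * grad * x1
        w[1] -= lr * grad * x2
        b -= lr * grad

predictions = [round(sigmoid(w[0] * x1 + w[1] * x2 + b)) for (x1, x2), _ in data]
print(predictions)
```

After training, the rounded outputs should match the OR truth table; the learning is entirely in the adjusted connection strengths, not in any explicit logical rule.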


Evolutionists stick to Darwin

This is a resounding statement. Not for nothing do the theories of evolution proposed by Charles Darwin serve as the basis for the machine learning they advocate. They propose learning from several candidate theories at once; that is, the machine searches for a solution to a problem along several paths simultaneously. In this way, not only is a solution sought and found, but at the same time the candidate theories that do not serve that goal are discarded.


Bayesians bet on probability

The basis of the Bayesians is probability theory and statistics. In this case, it is a question of calculating how unlikely a hypothesis is in order to discard it as a possible solution. From there, and bearing in mind that the process is not at all simple, the stored beliefs are updated as new evidence arrives, which is equivalent to a learning process.
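The core of this tribe is a single Bayes-rule update. A minimal sketch, with made-up illustrative numbers for a spam-filtering scenario:

```python
def bayes_update(prior, likelihood, evidence_prob):
    """Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E)."""
    return likelihood * prior / evidence_prob

# Hypothesis H: an email is spam. Evidence E: it contains the word "offer".
# All probabilities below are invented for illustration.
p_spam = 0.2                        # prior belief
p_offer_given_spam = 0.6            # likelihood of the evidence under H
p_offer_given_ham = 0.1             # likelihood under the alternative
p_offer = p_offer_given_spam * p_spam + p_offer_given_ham * (1 - p_spam)

posterior = bayes_update(p_spam, p_offer_given_spam, p_offer)
print(round(posterior, 2))  # 0.6
```

Seeing the word “offer” raises the spam probability from 0.2 to 0.6; repeating this update with each new piece of evidence is exactly the Bayesian notion of learning.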


Analogists and the principle of the whole

Analogizers, for their part, assert that the principle of analogy best sums up what can be regarded as learning. Like human beings, machines must draw analogies in order to solve the problems that arise. These analogies are stored in a database so that, as time passes, it takes less time to draw a new one. This is one of the most impactful currents, as it is among the simplest to implement at a technical level.
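The simplest algorithm in this spirit is the k-nearest-neighbors classifier: new problems are solved by looking up the most similar stored examples. A minimal sketch with made-up toy data points:

```python
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def knn_predict(examples, query, k=3):
    """Classify by analogy: take the majority label among the k stored
    examples most similar to the query point."""
    nearest = sorted(examples, key=lambda ex: euclidean(ex[0], query))[:k]
    labels = [label for _, label in nearest]
    return max(set(labels), key=labels.count)

# The "database of analogies": (features, label) pairs, invented for illustration.
memory = [((1.0, 1.0), "A"), ((1.2, 0.8), "A"), ((0.9, 1.1), "A"),
          ((5.0, 5.0), "B"), ((5.2, 4.9), "B"), ((4.8, 5.1), "B")]

print(knn_predict(memory, (1.1, 1.0)))  # A
print(knn_predict(memory, (5.1, 5.0)))  # B
```

Note that there is no training phase at all: learning is simply remembering examples, and prediction is finding the closest analogy, which is why this approach is so easy to implement.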


As can be seen, there are many approaches from which machine learning can be tackled, obtaining, at least in theory, quite similar results.


If you are interested in the subject, here are some interesting sources:

  • The book by Pedro Domingos: