Blog Archives

The question of question understanding

I had a very long flight on my last Easter vacation and I decided to use that spare time to reread some chapters from Physics of the Impossible by Michio Kaku.

While I was reading the robots chapter I began to ponder some of the ideas mentioned. I understand that some scientists argue that our brain is the most complicated system ever made by nature, and that this is why it will be impossible to replicate. I would not say that this is untrue, but I think that we tend to be really egocentric when we think about our own species' capabilities. Although our brain may be an incredible tool, nature has developed many other brains that are capable of dealing in an amazing way with a hostile environment, even though we do not exactly know today what their level of consciousness is when doing so.

It is mentioned that robots are capable of achieving goals, but without any real understanding of the goal pursued. A robot might be able to fix a car’s tire, but without any understanding of what a tire or a pump really is. Is this true? Do we understand what a tire or a pump is? Even when dealing with these simple objects, human definitions will surely vary from one person to another. So what knowledge should the robot have in order to really understand the purpose of its goal? What is a tire? What materials is it made of? Should it understand its use? Or might it be the fact that tires are usually round? I think human beings tend to get lost sometimes in the sea of their own abstract ideas and concepts, and this should not be the purpose of a common robot. I think our sometimes egocentric vanity does not help us realise that human beings are not perfect, although evolution implies exactly that: capacities are not at their best and can always get better. I think a good example of this is the square game, where robots tend to be much better than us even though it involves a little bit of thinking and very few operations.

If you don’t know the answer already try thinking about it, it will give you a better idea of what I am trying to explain here.

The human brain is sometimes not so good when we have to think outside the box.

So I think we need to create new robotic systems that understand the processes they will be dealing with: robots that can understand their own plant. System failures usually happen due to failures in the control system more than in the system itself, and control engineers usually fix these problems by adjusting the control system’s parameters. Therefore, complex systems should be able to understand their own plant and their controllers in order to adjust their parameters and respond accordingly if any problem arises. We need to create robots that understand their interactions with the world and their capacity to pursue their goals effectively, in the way that is best suited for “surviving” in the environment they have to deal with.
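As a toy illustration of this idea, here is a minimal sketch of a controller that monitors its own closed-loop performance and retunes its gain when the result is poor. All names and numbers are my own inventions, and the plant is a trivially simple first-order system, so this is only meant to show the shape of the idea, not a real control design:

```python
def step_plant(x, u, a):
    """A very simple first-order plant: the state moves by a * u each step."""
    return x + a * u

def run(a, kp, setpoint=1.0, steps=40):
    """Close the loop with a proportional controller and return the
    remaining tracking error after a fixed number of steps."""
    x = 0.0
    for _ in range(steps):
        error = setpoint - x
        x = step_plant(x, kp * error, a)
    return abs(setpoint - x)

def autotune(a, kp=0.1, target=1e-3, lr=0.5, iters=100):
    """Crude self-tuning: the 'robot' inspects its own performance and
    probes a larger gain; it keeps the change only if it helps,
    otherwise it backs off."""
    for _ in range(iters):
        err = run(a, kp)
        if err < target:
            break  # good enough: the loop understands it is performing well
        if run(a, kp * (1 + lr)) < err:
            kp *= (1 + lr)       # larger gain improved tracking, keep it
        else:
            kp *= (1 - lr / 2)   # it made things worse (e.g. oscillation), shrink
    return kp
```

The point is that the tuning loop never needs to "know" the plant coefficient `a`; it only observes its own tracking error and adjusts, which is roughly what the paragraph above asks of a robot that understands its own controller.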

Michio Kaku explains:

“Animals can be conscious, but at a different level of consciousness than the human being. We should try to classify the different types and levels of consciousness before debating any philosophical issues about it. In time robots might develop a silicon consciousness (…) erasing the difference between semantics and syntax, and their answers would be indistinguishable from a human’s. This way the question of whether or not they actually understand the question would be basically irrelevant.”

The tale of the boy who saw without eyes

I don’t know if you have already heard about the human echolocation phenomenon. For those of you who haven’t, this post’s title has probably left you a bit astonished, but human echolocation is an ability that has been known since at least the 1950s.

We could say that human echolocation is a process similar, in a way, to the one used by bats, dolphins and some whales to recognise their surroundings and location, much like the way a sonar works, using sound echoes to recognise objects. What we normally see is just the light reflected from an object’s surface, which gives us the outline, form and size of what we are looking at. Evolution has developed eyes as light “sensors”, and eyes plus their brain connections provide us with a really powerful tool to cope with the environment: to walk around, recognise objects and position ourselves easily in space. In contrast, we could define human echolocation as the ability of humans to detect objects in the environment by sensing echoes from those objects, actively creating sounds, for example by making clicking noises or tapping a cane. People trained to orient themselves with echolocation are able to interpret the sound waves reflected by nearby objects, accurately identifying their location and size, so this ability is used by some blind people to navigate their environment using their auditory senses rather than their visual ones.
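The basic arithmetic behind any echo-based ranging, whether sonar or human, is simple: the sound covers the distance to the object twice, so halving the round-trip delay gives the range. A minimal sketch (the 343 m/s figure is the speed of sound in air at roughly 20 °C; echolocators of course do this unconsciously):

```python
SPEED_OF_SOUND = 343.0  # metres per second, in air at about 20 °C

def distance_from_echo(delay_s):
    """One-way distance to the reflecting object: the sound travels
    out and back, so divide the round trip by two."""
    return SPEED_OF_SOUND * delay_s / 2.0

# An echo arriving 10 ms after the click puts the object about 1.7 m away.
```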

Vision and hearing are closely related in that both process reflected waves of energy, and both systems can extract a great deal of information about the environment by interpreting the complex patterns of the reflected energy they receive. Sound carries information about the nature and arrangement of objects and other environmental features, giving information about the location, dimensions and density of the objects it reflects from.

It has recently been shown that expert blind echolocators use what is normally the “visual” part of the brain, the primary visual cortex, to process the echoes. Most interestingly, in the experiments performed, the brain areas that process auditory information were not activated in these subjects any more than in other, sighted subjects. This suggests that expert blind echolocators sense the world similarly to the way other people do, but use a completely different strategy for gathering the information.

When talking about human echolocators, we must mention Daniel Kish, born in 1966 in Montebello, California. Blind since he was 13 months old, he is an expert in human echolocation and president of World Access for the Blind, a non-profit founded in 2000 which helps people with all forms of blindness. Kish and his organisation have taught echolocation to at least 500 blind children around the world, inspiring other scientists to study human echolocation. Other remarkable human echolocators are Lucas Murray from Poole, Dorset, who was born blind and, taught by Daniel Kish, was one of the first British people to learn to visualise his surroundings using human echolocation; and Ben Underwood (1992–2009).

The human brain continues to hold incredible features; evolution was the great tool that boosted its creation and guides its capabilities. Do we use the whole capabilities of our brain? The most reasonable option is that we must use most of them. Brains are expensive organs in terms of energy costs, and I believe it is reasonable that evolution wouldn’t have allowed such pointless energy expenses. So I believe that if we weren’t using our whole brain, the brain would probably have shrunk. There is an example of this. In some types of birds, during the mating season the male, which is the one who searches for food, has to remember where the best food is and where the nest is in order to return to it with the food, so its brain is bigger than the female’s, which does not have to gather this knowledge. But it is incredible to realise that outside the mating season, when the birds do not have to remember where it is easy to find lots of food and how to return to the nest, the male’s brain shrinks in order to spend less energy, and when the mating season returns the male’s brain expands again.

Do we use the brain in the only way it can be used? I believe this is a totally different question. I believe perception is the key fact, usually missing, when trying to understand cognition. Human echolocators do not “hear” the echoes as we would; they really “see” the sound reflected from the objects. I mean, their brain constructs the image the same way ours does, but using sound instead of light, obtaining in the end very similar results from evolution’s point of view, which is, in the end, helping the entity survive in its environment. So I believe that only time will tell whether there are totally different ways of using our brain, but we must always be really open minded about any new ways of thinking that will surely arise in the future.

At last I would like to mention Kevin Warwick, born on the 9th of February 1954 in Coventry, UK. He is a British scientist and professor of cybernetics at the University of Reading, well known for his studies on direct interfaces between computer systems and the human nervous system, and he has done interesting research in the field of echolocation. Kevin Warwick is an incredible scientist, related to cognition and artificial intelligence, and deserves a future entry in this blog, so for now I will only talk about his particular conception of artificial intelligence: he claims that we have many limits, such as our sensorimotor abilities, that we can overcome with machines. In his own words:

“There is no way I want to stay a mere human”

I would like to add to this blog entry this documentary on Ben Underwood (1992–2009) and the human echolocation phenomenon, not only for the scientific content but also because it tells his beautiful story of struggle and overcoming. When this was recorded he was still alive. Hope you enjoy it.

The tale of the Parasitic Wasp

“I cannot persuade myself that a beneficent and omnipotent God would have designedly created parasitic wasps with the express intention of their feeding within the living bodies of Caterpillars.”

Charles Darwin

I’m mainly an evolutionist. I have always felt astonished at how Darwin was able to synthesise nature’s complexity into a pattern of laws which were able to predict nature’s behaviour, explaining not only why all species were shaped by evolution, but, more importantly, why they had to evolve.

In order to fight the aggressive environment, billions of years of evolution statistically made chemical components associate into little cells, and later made those little cells interact and associate into more complex agents, which were better adapted to fighting this environment and surviving. Those cells developed and specialised in order to execute tasks more easily and quickly. At a large scale, it seems that perceiving and understanding the environment is vital for survival. Even more, it looks like evolution has chosen to embrace abilities that help to understand the environment better and cope with it. It seems that modelling the world, a perception of the world, is a common task of the brain, working as a filter of information that saves energy and time in order to survive efficiently.

I believe that the main purpose of an autonomous agent is to “survive” in its environment. This has always been a great problem for autonomous robotics researchers: the environment is always changing, and we are not able to predict all of the possible paths or possible problems, so we cannot anticipate them. Even if this were possible, it would be too complex to try to analyse all of the information gathered.

Now, if evolution’s goal is the survival of species, and perception, world modelling and the specialisation of functionalities are the tools it uses to fight and cope with the environment, then I believe we should try to understand how these tools are implemented in order to develop better autonomous agents that can cope with this changing environment.

Parasitic wasps grow inside caterpillars in order to feed themselves; for some reason they have evolved this way in order to survive as a species. This is only one little example of how “intelligent” evolution can be: although its result may seem strange and a little disgusting to a lot of us, for the parasitic wasp it is really a great way of fighting the environment.