
The question of question understanding

I had a very long flight on my last Easter vacation and I decided to use that spare time to reread some chapters from Physics of the Impossible by Michio Kaku.

While I was reading the robots chapter I began to ponder some of the ideas mentioned. I understand that some scientists argue that our brain is the most complicated system ever made by nature, and that this is why it will be impossible to replicate. I would not say that this is untrue, but I think that we tend to be really egocentric when we think about our own species' capabilities. Although our brain can be an incredible tool, there are still many other brains that nature has developed that are capable of dealing in an amazing way with a hostile environment, even though we do not exactly know today what their level of consciousness is when doing so.

It is mentioned that robots are capable of achieving goals, but without any real understanding of the pursued goal. A robot might be able to fix a car's tire, but without any understanding of what a tire or a pump really is. Is this true? Do we understand what a tire or a pump is? Even with these simple objects, human definitions will surely vary from one person to another. So what knowledge should the robot have in order to really understand its goal's purpose? What is a tire? What materials is it made of? Should it understand its use? Or might it be the fact that tires are usually round? I think human beings sometimes get lost in the sea of their own abstract ideas and concepts, and that this should not be the purpose of a common robot. I think our sometimes egocentric vanity does not help us realise that human beings are not perfect, although evolution implies exactly that: capacities are not at their best and can always get better. I think a good example of this is the square game, where robots tend to be much better than us even though it implies a little bit of thinking and very few operations.

If you don't know the answer already, try thinking about it; it will give you a better idea of what I am trying to explain here.

The human brain is sometimes not so good when we have to think outside the box.

So I think we need to create new robotic systems that understand the processes they will be dealing with. Robots that can understand their own plant. System failures usually happen due to faults in the control system more than in the system itself, and control engineers usually fix these problems by adjusting the control system's parameters. Therefore, complex systems should be able to understand their own plant and their controllers, so that they can adjust their parameters and act accordingly if any problem arises. We need to create robots that can understand their interactions with the world and their capacity to effectively pursue their goals, in whatever way is best suited for "surviving" in the environment they have to deal with.
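The idea of a system adjusting its own controller parameters has a classical counterpart in adaptive control. As a toy illustration (not from the book, and with every name and value chosen by me as an assumption), here is a minimal sketch of model-reference adaptation: a proportional controller tunes its gain online, via the so-called MIT rule, so that a plant with an unknown gain tracks a reference model, without ever being told the plant's gain.

```python
# Toy model-reference adaptive control sketch (MIT rule).
# All names and parameter values are illustrative assumptions.

def adapt_gain(kp=2.0, km=1.0, r=1.0, gamma=0.5, dt=0.1, steps=500):
    """Adapt the controller gain theta so the plant output kp*theta*r
    tracks the reference model output km*r, without knowing kp."""
    theta = 0.0                        # controller gain, initially wrong
    for _ in range(steps):
        u = theta * r                  # control action
        y = kp * u                     # static plant with unknown gain kp
        ym = km * r                    # output the reference model expects
        e = y - ym                     # tracking error
        theta += -gamma * e * ym * dt  # MIT rule: gradient step on e**2
    return theta

theta = adapt_gain()
print(round(theta, 3))  # converges toward km / kp = 0.5
```

The point of the sketch is only the shape of the loop: the "robot" never receives the plant model explicitly; it builds a usable internal parameter purely from the mismatch between what it did and what it expected, which is roughly the kind of self-understanding the paragraph above argues for.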

Michio Kaku explains:

“Animals can be conscious, but at a different level of consciousness than human beings. We should try to classify the different types and levels of consciousness before debating any philosophical issues about it. In time robots might develop a silicon consciousness (…) erasing the difference between semantics and syntax, and their answers would be indistinguishable from a human's. This way the question of whether or not they actually understand the question would be basically irrelevant.”
