Blog Archives

The question of question understanding

I had a very long flight on my last Easter vacation, and I decided to use the spare time to reread some chapters of Physics of the Impossible by Michio Kaku.

While I was reading the robots chapter I began to ponder some of the ideas it mentions. I understand that some scientists argue that our brain is the most complicated system nature has ever produced, and that this is why it will be impossible to replicate. I would not say that this is untrue, but I think that we tend to be really egocentric when we think about our own species' capabilities. Although our brain is an incredible tool, nature has developed many other brains that cope in amazing ways with hostile environments, even though we do not know today exactly what their level of consciousness is while doing so.

The book mentions that robots are capable of achieving goals, but without any real understanding of the goal they pursue. A robot might be able to fix a car's tire, but without any understanding of what a tire or a pump really is. Is this true? Do we understand what a tire or a pump is? Even for these simple objects, human definitions will surely vary from one person to another. So what knowledge should the robot have in order to really understand the purpose of its goal? What is a tire? What materials is it made of? Should it understand its use? Or the fact that tires are usually round? I think human beings sometimes get lost in the sea of their own abstract ideas and concepts, and that this should not be the purpose of an ordinary robot. I think our egocentric vanity does not help us realise that human beings are not perfect, although evolution implies exactly that: capacities are never at their best and can always improve. I think a good example of this is the square game, where robots tend to be much better than we are, even though it involves a little bit of thinking and very few operations.

If you don't already know the answer, try thinking about it; it will give you a better idea of what I am trying to explain here.

The human brain is sometimes not so good when it has to think outside the box.

So I think we need to create new robotic systems that understand the processes they deal with: robots that can understand their own plant. System failures usually arise from faults in the control system rather than in the plant itself, and control engineers usually fix these problems by adjusting the controller's parameters. Therefore, complex systems should be able to understand their own plant and their own controllers, so that they can adjust those parameters and act accordingly if a problem arises. We need to create robots that understand their interactions with the world and their capacity to pursue their goals effectively, in the way best suited to "surviving" in the environment they have to deal with.
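As a rough illustration of that idea, here is a minimal sketch, assuming a toy first-order plant and made-up gains and thresholds, of a controller that monitors its own tracking error and re-tunes one of its parameters when performance stays poor. None of the names or numbers come from a real robot.

```python
# A minimal sketch (illustrative names and numbers): a proportional controller
# that watches its own recent tracking error and re-tunes its gain when the
# error stays large, i.e. a system that "understands" its controller enough
# to adjust it.

def self_tuning_loop(setpoint=1.0, steps=200, dt=0.05):
    gain = 0.5            # initial controller gain (assumed)
    state = 0.0           # plant output
    recent_errors = []    # sliding window used for self-monitoring

    for _ in range(steps):
        error = setpoint - state
        control = gain * error        # proportional control law
        state += control * dt         # toy first-order plant: x' = u

        # Self-monitoring: if the average recent error stays large,
        # the system adapts its own controller parameter.
        recent_errors.append(abs(error))
        if len(recent_errors) > 20:
            recent_errors.pop(0)
            if sum(recent_errors) / len(recent_errors) > 0.2:
                gain *= 1.1           # nudge the gain upward

    return state, gain

output, adapted_gain = self_tuning_loop()
print(f"final output {output:.3f}, adapted gain {adapted_gain:.2f}")
```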

Michio Kaku explains:

“Animals can be conscious, but at a different level of consciousness than human beings. We should try to classify the different types and levels of consciousness before debating any of its philosophical issues. In time, robots might develop a silicon consciousness (…), erasing the difference between semantics and syntax, and their answers would be indistinguishable from a human's. In this way, the question of whether or not they actually understand the question would be basically irrelevant.”

The Human Brain Project

This past week an interesting new project about the human brain was approved by the European Commission. The project has been selected as a European FET (Future and Emerging Technologies) Flagship, which means it is set to receive a billion euros and be funded over 10 years through the EU's research and innovation funding programmes.

Modern neuroscience has been enormously productive but unsystematic. The data it produces describes different levels of biological organisation, in different areas of the brain, in different species, at different stages of development. Today we urgently need to integrate this data, to show how the parts fit together in a single multi-level system. The Human Brain Project will create the world's largest experimental facility for developing the most detailed model of the brain, for studying how the human brain works, and ultimately for developing personalised treatments for neurological and related diseases. This research lays the scientific and technical foundations for medical progress that has the potential to dramatically improve the quality of life for millions of Europeans.

The project will pursue four goals:

  1. Generate strategically selected data essential to seed brain atlases, build brain models and catalyse contributions from other groups.
  2. Identify mathematical principles underlying the relationships between different levels of brain organisation.
  3. Integrate systems of Information and Communications Technologies, providing platforms offering services to neuroscientists, clinical researchers and technology developers.
  4. Develop first draft models and prototype technologies, demonstrating how the platforms can be used to produce results with immediate value for basic neuroscience, medicine and computing technology.

From these goals the project derives further subgoals it wants to achieve; I want to highlight the following:

Understanding the relationships between brain structure and function, and integrating the principles of cognition, would from a technological perspective give developers the tools they need to build robots with the potential to achieve human-like capabilities, impossible to realise in systems that do not have a brain-like controller. The Human Brain Project's work in neuromorphic computing and neurorobotics would open the road to compact, low-power systems with the long-term potential to achieve brain-like intelligence.

I want to end this post with the following paragraph, quoted from the International Technology Roadmap for Semiconductors, 2011.

The appeal of neuromorphic architectures lies in i) their potential to achieve (human-like) intelligence based on unreliable devices typically found in neuronal tissue, ii) their strategies to deal with anomalies, emphasising not only tolerance to noise and faults, but also the active exploitation of noise to increase the effectiveness of operations, and iii) their potential for low-power operation. Traditional von Neumann machines are less suitable with regard to item i), since for these types of tasks they require a machine complexity (the number of gates and computational power) that tends to increase exponentially with the complexity of the environment (the size of the input). Neuromorphic systems, on the other hand, exhibit a more gradual increase of their machine complexity with respect to the environmental complexity. Therefore, at the level of human-like computing tasks, neuromorphic machines have the potential to be superior to von Neumann machines.

HBP video overview from the Human Brain Project on Vimeo.

Source for the information: The Human Brain Project: A Report to the European Commission.

Dr. Marvin Minsky: Building Intelligent Machines

Most of you have probably heard about Dr. Marvin Minsky, one of the most influential authorities in cognitive science and co-founder of the Massachusetts Institute of Technology's AI laboratory. In my opinion, he is one of the most intelligent thinkers I have ever heard.

This is a talk from 2009 in which Marvin Minsky tries to explain why we need intelligent machines and what we can expect from them. His research group has been working on cognitive architectures, trying to develop better intelligent machines using AI. Although it is a few years old, it is one of the most interesting and funny talks I have come across so far, and I recently found it on iTunes.

Education is no good, unless you teach them to question it. And it is not popular.

A sarcastic Marvin Minsky begins by talking about why we need intelligent machines. He claims that if we knew how to build a machine that could think more or less the same way people can, we would surely understand its behaviour, and in the future we would know how to fix and replace all of that machine's parts, and equally all of the brain's. In his own words, "Immortality would be easy to obtain." This way we would be able to make backup copies of our brains and upload and download them into these machines.

One thing he points out, with which I cannot but totally agree, is that in universities, high schools and colleges, people are developing the same kinds of robots again and again. In his own words: "You can learn a lot from doing this, like that if you step on connectors… they break." 🙂 People should be working on something more useful. For example, robot soccer might be beautiful to watch for a few minutes, but it is useless and has little value for humanity. Why not research robots that do something more useful?

The best is the enemy of the good. Don’t spend a lot of time trying to find the best way. Find six good ways, it will take you half the time and your machine will be six times better when the environment changes

Dr. Minsky remarks that the ambiguity of language is really a useful tool for learning, as it is possible to end up with a better result by misunderstanding what you were initially told. It is like when somebody draws a different idea from something you read before, one you did not think of at the time; but now that you have been told about it, the idea seems really plausible. It is clear that people learn in various ways, and that people develop different ways to learn. I agree with Dr. Minsky that we should focus our efforts not on education as we do it right now, but on understanding ways to learn and on improving old, commonly used techniques such as reinforcement by reward. We should rather ask ourselves why some kids learn more from the same experience than others, and improve our ways of teaching how to learn.

If you can tell yourself what you did, you might misunderstand it and do something better next time

Talking about AI, Dr. Minsky gives a brief history from the first AI programs to symbolic calculation, and how this development has provided powerful tools for mathematical integration such as Matlab or Mathematica. But right now, computers can only solve logically formulated problems and cannot operate by analogy. He explains that there are not enough people researching example-based reasoning, which he believes will be the future of computer programming, and that instead too much effort and money is spent on neural networks, statistical learning, genetic algorithms… which will inevitably pass away and become obsolete.

Thousands of people are doing something because they see thousands of other people doing it, so therefore it must be good. I know only about twenty or thirty people in the whole world who are working on how to get computers to do anything like ordinary common sense reasoning, and that's where I think the future lies.

Throughout the lecture Dr. Minsky gives his point of view on a bunch of different topics and problems in society, science, education, politics, even sports, in a very sarcastic way. Although I do not agree with all of them, I found them really funny, and they make the lecture easy to listen to and understand.

On a menu, when you have a choice, I always order chicken these days, because your children won't have any chicken to order

For anyone who is interested in reading Dr. Minsky: The Emotion Machine

Building Intelligent Machines

Sparking Minds

The other day I came across a really interesting project named FIONA. Maybe some of you have heard of it. You can check out its cool website here.

Project FIONA, or "the community for the creation of the artificial mind", aims, as its developers explain, to be an online platform for creating the next generation of virtual avatars. Many blog and internet users know what an avatar is; basically, we could define it as a personalised graphic file or rendering that represents a computer user.

The avatars are built from what the community calls sparks, which are something like little applications: contributors can upload their killer feature, a specific behaviour, an amazing character, or the best of their knowledge on a topic, and these sparks plug into the platform.

With this type of technology we can expect, in the near future, a new generation of avatars that might be able to export your ideas, your knowledge, even your personality, in ways only science fiction visionaries dreamt about. Virtual teachers could attend virtual classes, this technology could change the way we understand meetings, avatars could be used as interfaces for waiters and sellers, and I can imagine many more possibilities…

I have not been able to test the platform yet, neither the interface nor the spark programming; I hope to have some time in the near future to try it (it looks like it has a nice, friendly interface to work with, see the second video). So I am not able to judge right now whether FIONA's avatars can really manage complex tasks and develop real, clear conversations and interactions, but I believe this project can be a great platform for testing whether these are actual present possibilities or only steps towards future technologies. Time will tell.

You can find a few tutorials for beginners on their website and YouTube, like this one:

Robots with a mind of their own

Divide and conquer is a common technique used by strategists in many fields. Comprehension usually involves understanding the details, and it usually means better adaptation to the "environment". I'll try to explain myself with an example. Imagine facing a problem in any science without understanding the main physics behind the whole procedure. You may have encountered this: you are able to solve that specific problem, or even a similar one, but when the "environment" changes, meaning the problem differs from the initial one only slightly, your vague abstract idea of the physics confuses you and makes you solve it the wrong way. Anyone who already understands the underlying physics, however, will easily adapt their previous answer the right way.

Imagine you memorised the multiplication table for two; then you will be able to carry out any multiplication involving the number 2. When asked to multiply 3*2 you can answer easily, but if the question is 3*3 you can't. If, instead, you learn the table for two while understanding that 2*5 means five times two (2+2+2+2+2), you will be able to work out 3*8 as well. And you might be able to infer that 8/2 equals 4 because there are two fours "inside" an eight (4*2) (4+4). In mathematical terms you could say this understanding is better adapted to a changeable mathematical environment.
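A tiny sketch of the same idea in code, purely illustrative: multiplication treated as repeated addition, and division as counting how many times one number fits inside another.

```python
# Multiplication understood as repeated addition, division as counting how
# many times the divisor "fits inside" the dividend. Toy functions only.

def multiply(a, times):
    """a * times, computed as 'times' repeated additions of a."""
    total = 0
    for _ in range(times):
        total += a
    return total

def divide(a, b):
    """a // b, computed by counting how many b's fit inside a."""
    count = 0
    while a >= b:
        a -= b
        count += 1
    return count

print(multiply(2, 5))   # 2+2+2+2+2 = 10
print(multiply(3, 8))   # 3+3+3+3+3+3+3+3 = 24
print(divide(8, 2))     # 4: two fours (or four twos) fit inside eight
```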

For many years control engineers have tried to reduce complex, abstract processes to simpler, more manageable ones, which interact with each other to make up the more complex process that encompasses them. This way, using feedback loops, engineers are able to drive a controlled system towards its goal.
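As an illustration of that decomposition, here is a minimal sketch of two simple proportional feedback loops stacked together: an outer position loop sets the target for an inner velocity loop, and together they control one toy plant. The plant model and gains are invented for the example.

```python
# Two simple loops that interact to control one system: the outer loop turns
# position error into a desired velocity, the inner loop turns velocity error
# into an acceleration command. Gains and plant are illustrative only.

def cascade_control(target_position=1.0, steps=500, dt=0.01):
    position, velocity = 0.0, 0.0
    kp_outer, kp_inner = 2.0, 5.0      # made-up proportional gains

    for _ in range(steps):
        desired_velocity = kp_outer * (target_position - position)  # outer loop
        acceleration = kp_inner * (desired_velocity - velocity)     # inner loop
        velocity += acceleration * dt   # toy double-integrator plant
        position += velocity * dt

    return position

print(f"position after 5 s: {cascade_control():.3f}")
```

Each loop on its own is trivial, but their interaction drives the more complex behaviour towards the goal.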

As robots are usually really complex systems, they are very difficult to control. Interactions between their components can easily lead to malfunctions that make it impossible for the system to achieve its desired goal. Now, new, simpler, adaptable robots are being researched by Amsterdam engineers working within a European project, a great idea for tackling complex difficulties: robots that will be able to interact with each other in incredible ways (shown in the video).

I imagine the future of robotics as multiple interactive robots, each specialised in small tasks, working in groups, capable of sharing information and developing solutions together. Understanding perception will be very important for these robots, so that each can communicate its own situation and the group's final goal to the others. I believe cognitive science and the study of social interactions will clearly help the research into new cognitive architectures, which will allow us in the near future to master new control techniques for these multi-robot platforms.

For now we will have to settle for this incredible mock-up.

Cognitive computers? A reality for IBM

Listening to Dharmendra Modha, manager of IBM's Cognitive Computing Systems, makes me wonder how far we are from really interacting with cognitive computers. This year IBM has been able to simulate about 500 billion neurons and 100 trillion synapses, all running on a collection of ninety-six of the world's fastest computers. The project's code name is Compass, and its goal is to simulate the brain of a macaque monkey, making it the most ambitious attempt to date.

Compass is part of a long-standing effort known as neuromorphic engineering, an approach to building computers developed by the engineer Carver Mead. The premise behind Mead's approach is that brains and computers are fundamentally different, and that the best way to build smart machines is to build computers that work more like brains, especially for tasks like common-sense interpretation, understanding language, and sensation.

Whereas traditional computers largely work by executing serial tasks (one step after another) using classical logic (if-while, and-or), neuromorphic systems work in parallel and draw their inspiration as much as possible from the human brain, describing functionality in terms of neurons, dendrites and axons.
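As a rough picture of what describing functionality in terms of neurons can mean, here is a minimal sketch of a leaky integrate-and-fire neuron, a generic textbook model; it is not IBM's actual implementation, and the parameters are illustrative.

```python
# A leaky integrate-and-fire neuron: the membrane potential leaks a little
# each step, integrates incoming current, and emits a spike (then resets)
# whenever it crosses a threshold. Generic textbook model, toy parameters.

def lif_neuron(input_current, threshold=1.0, leak=0.95):
    """Yield True on every time step where the neuron fires a spike."""
    membrane = 0.0
    for current in input_current:
        membrane = membrane * leak + current   # leak, then integrate input
        if membrane >= threshold:
            membrane = 0.0                     # reset after the spike
            yield True
        else:
            yield False

# A constant drip of input current accumulates into a regular spike train.
spikes = list(lif_neuron([0.2] * 30))
print("spike train:", "".join("|" if s else "." for s in spikes))
```

Many such units running in parallel and connected to one another is the basic style of computation contrasted above with step-by-step serial logic.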

If Moore's law continues to hold and the number of transistors on integrated circuits keeps doubling every two years, what will we be able to accomplish in a few years? A growing crew of neuroscientists and engineers believe that the key to building better autonomous machines that emulate the brain's capacities is to implement them neuron by neuron. I humbly think that there is still too much research to be done on how neurons connect, and that implementing them neuron by neuron, even knowing how they are connected, is not enough to deduce the complete behaviour.
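Just to put numbers on that doubling, here is a quick back-of-the-envelope sketch; the starting transistor count is an assumption chosen only for illustration.

```python
# If transistor counts double every two years, the multiple after n years is
# 2 ** (n / 2). The starting count below is an illustrative assumption.

start = 5_000_000_000              # assume ~5 billion transistors today
for years in (2, 4, 6, 10):
    factor = 2 ** (years / 2)      # one doubling per two-year period
    print(f"after {years:2d} years: x{factor:.0f} -> {int(start * factor):,} transistors")
```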

We should rather emphasise understanding how the brain and nervous system work in other, simpler species; otherwise we could just be trying to mimic something we really don't understand. Time will tell.

The human brain has awesome powers of sensation, perception, cognition, emotion, action and interaction. It can bring together multiple sensory modalities, while consuming less power than a light bulb and occupying less volume than a two-litre bottle of soda

The tale of the Parasitic Wasp

“I cannot persuade myself that a beneficent and omnipotent God would have designedly created parasitic wasps with the express intention of their feeding within the living bodies of Caterpillars.”

Charles Darwin

I'm mainly an evolutionist. I have always been astonished by how Darwin was able to synthesise nature's complexity into a pattern of laws that could predict nature's behaviour, explaining not only why all species were shaped by evolution, but more importantly why they had to evolve.

In order to fight the aggressive environment, billions of years of evolution statistically made chemical components associate into little cells, and later made those little cells interact and associate into more complex agents, which were better adapted in terms of fighting this environment and surviving. Those cells developed and specialised in order to execute tasks more easily and faster. At a large scale, it seems that perceiving and understanding the environment is vital for survival. Even more, it looks like evolution has chosen to embrace abilities that help to understand the environment better and cope with it. It seems that modelling the world, a perception of the world, is a common task of the brain, working as a filter of information that makes the best use of energy, time… in order to survive efficiently.

I believe that the main purpose of an autonomous agent is to "survive" against the environment. This has always been a great problem for autonomous-robotics researchers. The environment is always changing; we are not able to predict all of the possible paths or problems, so we cannot anticipate them. Even if this were possible, it would be too complex to analyse all of the information gathered.

Now, if evolution's goal is the survival of species, and perception, world modelling and the specialisation of functions are the tools it uses to fight and cope with the environment, then I believe we should try to understand how these tools are implemented in order to develop better autonomous agents that can cope with this changeable environment.

Parasitic wasps grow inside caterpillars in order to feed themselves; for some reason they have evolved this way in order to survive as a species. This is only one small example of how "intelligent" evolution can be: although the result may seem strange and a little disgusting to many of us, for the parasitic wasp it is really a great way of fighting the environment.