Monthly Archives: November 2012

Robots with a mind of their own

Divide and conquer is a common technique used by strategists in many fields. Comprehension usually involves understanding the details, and that usually means better adaptation to the “environment”. Let me explain with an example: suppose you face a problem in some science without understanding the underlying physics of the whole procedure. You may be able to solve that specific problem, or even a similar one, but when the “environment” changes, meaning the new problem differs from the initial one only slightly, your vague picture of the physics can mislead you into solving it the wrong way. Someone who already understands the physics, on the other hand, will easily adapt their previous answer the right way.

Imagine you memorized the times table for two; you will then be able to accomplish any multiplication involving the number 2. When asked to multiply 3*2 you answer easily, but if the question is 3*3 you can’t. If instead you learn the table for two while understanding that 2*5 means five times two (2+2+2+2+2), you will be able to answer something like 3*8 immediately. You might even infer that 8/2 equals 4 because there are two fours “inside” an eight (4*2, that is, 4+4). In mathematical terms you could say this knowledge is better adapted to a changeable mathematical environment.
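The same idea, multiplication as repeated addition and division as counting how many times one number fits “inside” another, can be written as a tiny sketch:

```python
# Multiplication understood as repeated addition: a * b is "b added a times".
def multiply(a, b):
    total = 0
    for _ in range(a):
        total += b
    return total

# Division as the inverse question: how many times does b fit "inside" a?
def divide(a, b):
    count = 0
    while a >= b:
        a -= b
        count += 1
    return count

print(multiply(3, 8))  # 24
print(divide(8, 2))    # 4, because 2 fits four times inside 8
```

Nothing here is specific to the number 2: once the concept is grasped, any product or quotient follows from the same procedure.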

For many years control engineers have tried to reduce complex, abstract processes to simpler, more achievable ones, which interact with each other to make up the more complex process that encompasses them. This way, using feedback loops, engineers are able to steer an automated control system towards its goal.
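A minimal sketch of such a feedback loop (purely illustrative, not tied to any real controller or robot API): the system repeatedly measures the error between its state and its goal and feeds a fraction of that error back as a correction.

```python
# Proportional feedback: correct the state by a fraction of the current error.
def control_step(current, goal, gain=0.5):
    error = goal - current          # how far are we from the goal?
    return current + gain * error   # feed part of the error back in

position, goal = 0.0, 10.0
for _ in range(20):                 # each loop iteration is one feedback cycle
    position = control_step(position, goal)
print(round(position, 3))           # converges toward 10.0
```

Each simple loop only knows its own error signal, yet composing many of them is how engineers automate the larger process.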

As robots are usually very complex systems, they are very difficult to control. Interactions between their components can easily lead to malfunctions that make it impossible for the system to achieve its desired goal. Now, new, simpler, adaptable robots are being researched by Amsterdam engineers working on a European project, a great idea for tackling complex difficulties: robots that will be able to interact with each other in incredible ways (shown in the video).

I imagine the future of robotics as multiple interactive robots specialised in small tasks, working in groups, capable of sharing information and developing solutions together. Understanding perception will be very important for these robots, so that each one can communicate its own situation and the group’s final goal to the others. I believe cognitive science and the study of social interactions will clearly help the research on new cognitive architectures, which will allow us in the near future to master new control techniques for these multi-robot platforms.

For now we will have to settle for this incredible mock-up.

Cognitive computers? A reality for IBM

Listening to Dharmendra Modha, manager of IBM’s Cognitive Computing Systems group, makes me wonder how far we are from really interacting with cognitive computers. This year IBM was able to simulate about 500 billion neurons and 100 trillion synapses, all running on a collection of ninety-six of the world’s fastest computers. The project, code-named Compass, aims to simulate the brain of a macaque monkey, making it the most ambitious attempt to this day.

Compass is part of a long-standing effort known as neuromorphic engineering, an approach to building computers developed by the engineer Carver Mead. The premise behind Mead’s approach is that brains and computers are fundamentally different, and that the best way to build smart machines is to build computers that work more like brains, especially for common-sense interpretation, understanding language and sensation.

Whereas traditional computers largely work by executing serial tasks (one step after another) using classical logic (if/while, and/or), neuromorphic systems work in parallel and draw their inspiration as much as possible from the human brain, describing functionality in terms of neurones, dendrites and axons.
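As a toy illustration of what “describing functionality in terms of neurones” means (this is a simplistic leaky integrate-and-fire model, nothing like the far more detailed neurone models Compass actually uses): instead of if/else rules, behaviour emerges from charge accumulating, leaking away and occasionally crossing a firing threshold.

```python
# Toy leaky integrate-and-fire neurone: integrate input current, leak charge,
# and emit a spike whenever the membrane potential crosses a threshold.
def simulate(inputs, leak=0.9, threshold=1.0):
    potential, spikes = 0.0, []
    for current in inputs:
        potential = potential * leak + current  # leak old charge, add input
        if potential >= threshold:
            spikes.append(True)                 # fire a spike...
            potential = 0.0                     # ...and reset the potential
        else:
            spikes.append(False)
    return spikes

print(simulate([0.4, 0.4, 0.4, 0.4]))  # [False, False, True, False]
```

Note that no single input is big enough to cause a spike; the third one fires only because of charge accumulated over time, which is exactly the kind of state-driven, parallel-friendly behaviour serial if/else logic doesn’t capture naturally.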

If Moore’s law continues to hold and the number of transistors on integrated circuits keeps doubling every two years, what will we be able to accomplish in a few years? A growing crew of neuroscientists and engineers believe that the key to building better autonomous machines that emulate the brain’s capacities is to implement them neurone by neurone. I humbly believe that there is too much research still to be done on neurone connections, and that simulating the system neurone by neurone, even knowing how they are connected, isn’t enough to deduce the complete behaviour.
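Just to put that doubling into numbers (a back-of-the-envelope sketch, not a prediction):

```python
# Moore's law as simple arithmetic: transistor counts double every two years.
def projected_count(initial, years):
    return initial * 2 ** (years // 2)

# A chip with one billion transistors today...
print(projected_count(1_000_000_000, 10))  # ...32 billion in a decade
```

Exponential growth of raw transistor budgets is what makes neurone-scale simulation even conceivable, but, as argued above, more transistors alone don’t tell us how the neurones should be wired together.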

We would do better to focus on understanding how the brain and nervous system work in other, simpler species; otherwise we could just be trying to mimic something we really don’t understand. Time will tell.

The human brain has awesome powers of sensation, perception, cognition, emotion, action and interaction. It can bring together multiple sensory modalities while consuming less power than a light bulb and occupying less volume than a two-litre bottle of soda.

The tale of Daniel Tammet: The boy with the incredible brain

I remember a few years ago I came across a documentary called something like “Incredible minds“. It was about a man called Daniel Tammet, an autistic savant able to work out huge numerical problems in an incredible way. I was really impressed, not only by what the documentary showed he was capable of, but even more by the way he explained how he did it. He claimed he had never seen numbers the way “others” did. Instead he was synaesthetic with numbers: he felt sensations related to them. And he claimed that when he was solving a numerical problem he was really abstracting a shape out of a mental image that the numbers involved sketched in his mind.

This made me wonder whether the way maths is taught is really the ideal way, or whether I simply understood maths differently from the way others did. And when I say maths, I mean any other subject or concept that might be studied. We mostly learn the simplest concepts through patterns repeated constantly, again and again, until they become almost subconscious. Think, for example, of times tables, or of the first vocabulary lessons in most foreign-language classes you might have attended. Maybe understanding the basics by repetition is not the most effective way to learn; even if infants seem to pick up these concepts fast, I believe we don’t really make them understand in a meaningful way.

Given that your own perception shapes any concept you might learn, it would be really interesting to find other ways of teaching common concepts that could be a lot more powerful than the way it is done today. Research on mental syndromes, and on synaesthetes and the way they understand their environment, could help us open our eyes.

I recently came across this TED talk from Daniel Tammet that may well be self-explanatory about these ideas: