A theory on Intelligence

I doubt the contents of this article are in any way profound, yet I’d like to open my blog with a theory I developed the other day after reading an interesting article on language I stumbled across on Planet Debian. Here goes…

Intelligence

I am coming to think of human intelligence as being formed from a set of processors in the brain, each developing the ability to achieve learned purposes very rapidly and precisely, as opposed to a single generic reasoning machine.

Examples of processing learned intuitively:

  • language processing
  • recognizing facial expressions
  • mathematical reasoning
  • precise recognition of colours, as painters learn (*)
  • “supposed” colour, which everyone learns to deduce for objects in shadow, under differently coloured lights, etc.
  • rapidly coming up with a response to arguments
  • critically reasoning about heard evidence rather than (intuitively?) absorbing information
  • manipulating the many joints in our arms so as to place fingers precisely, and knowing how muscles must be used to achieve this

All but one of these (the starred item) appear difficult for machines to master, requiring at least a well-thought-out algorithm. The starred item may be difficult for humans because it involves unlearning or bypassing standard processing.

Decision making

The above could all be considered intermediate processors: they either interpret information from the world around them into a higher-level (less detailed) representation, or they are concerned with how decisions should be enacted to achieve the desired goal. They are mostly tasks computers are learning to achieve.

However, how does the brain decide what to do with these: to make decisions such as what to touch or whether to respond to an argument? One possibility is a specialised processor operating along similar lines, for example reducing each considered option to a numerical score and taking the best. Since time is often important, some available options are abandoned early, and if a time-critical decision is needed, a decision may be taken before all analyses have reached their conclusions.

It would appear to me that this may not be exactly what happens; instead an evaluation processor gives only a rough verdict such as poor, reasonable, very bad idea, or great, and once an adequately good option has been found it is selected (rather than waiting until all options have been considered or some time-out is reached). If no option appears sufficiently good, further evaluation often takes place, rather than a rash choice of an inadequate option.
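In other words, something like a satisficing search rather than exhaustive scoring. A minimal sketch of this idea in Python (the verdict scale, option names and threshold are all invented for illustration, not a claim about how the brain does it):

```python
from enum import IntEnum

class Verdict(IntEnum):
    """Rough verdicts, ordered from worst to best."""
    VERY_BAD_IDEA = 0
    POOR = 1
    REASONABLE = 2
    GREAT = 3

def decide(options, evaluate, good_enough=Verdict.REASONABLE):
    """Return the first option judged adequately good; if none is,
    fall back to the best of the rough verdicts seen so far."""
    best_option, best_verdict = None, None
    for option in options:
        verdict = evaluate(option)
        if verdict >= good_enough:
            return option  # adequate: stop, don't wait for the rest
        if best_verdict is None or verdict > best_verdict:
            best_option, best_verdict = option, verdict
    return best_option  # nothing adequate; a fuller model would evaluate further

# Hypothetical usage: choosing a response to touching something hot.
ratings = {"pull hand away": Verdict.GREAT, "ignore it": Verdict.VERY_BAD_IDEA}
print(decide(ratings, ratings.get))  # -> pull hand away
```

The essential difference from the score-everything-and-take-the-best model is the early return: evaluation stops as soon as an adequately good option turns up.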

Perhaps some actions become so common that detailed processing of their appropriateness is curtailed or skipped completely.

This all gives rise to another question: where do suggested options come from? I would suggest largely from memory: learned responses to similar situations. This does not explain how ingenuity comes into play: is it derived from more abstract thinking (less a learned response), and/or from chance (a natural coincidence leading to the same end, or a misplaced thought)? It does, however, seem consistent with the idea that ingenuity is a rare resource.

Learning

Well, given the above, one could in theory write a “decision maker”, and even all of the auxiliary processors this requires. Whether or not such a system could equal a human in intelligence is not fully clear; I would like to suggest that in theory such a system could work similarly to a human brain, but that mistakes and oversights in its design would flaw its abilities, and without a self-adaptation ability it would be unable to make up for these.

What an intelligent system would therefore need is the ability to adapt itself: to build and improve processors for particular tasks (such as the learned tasks listed above), and above all to be able to build processors for completely new tasks unaided.

Motivation

Another thing such a system needs is motivation. In humans this can take the form of both rewards and penalties: rewards such as things which theoretically advantage oneself (gaining a resource such as money, or learning a new ability) or the satisfaction of a job well done, and penalties such as pain, injury or the loss of a resource.

One question I must ask is: can these motivations be learned and influenced by oneself, are they learned only in childhood through the influence of others, or are they entirely hard-wired behaviour (in humans)? I suspect not the latter, since people can learn to enjoy a coffee, or to enjoy seeing a pupil succeed, etc. (although this isn’t very hard evidence).

How motivation influences decisions seems reasonably clear: decisions are evaluated in the light of these motivating factors (there’s little point in deciding whether to bother avoiding a burnt hand, for example, without any way of evaluating whether a burnt hand is a positive or negative thing).
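As a toy continuation of the earlier decision sketch (the outcome names and weights below are entirely made up), motivations could be modelled as the weights that turn an option’s predicted outcomes into a score; something of this kind could stand in for the evaluate function passed to decide above:

```python
# Hypothetical motivation weights: rewards positive, penalties negative.
MOTIVATIONS = {"gain resource": +2, "job well done": +1,
               "pain": -3, "loss of resource": -2}

def motivated_score(predicted_outcomes):
    """Sum the motivation weights of an option's predicted outcomes;
    without such weights, a burnt hand is neither good nor bad."""
    return sum(MOTIVATIONS.get(outcome, 0) for outcome in predicted_outcomes)

print(motivated_score(["pain", "job well done"]))  # -> -2
```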

So the final question, in the light of the above and the context of artificial intelligence, should be how to decide what motivating factors an artificial intelligence should be given. Should these be primarily focused on making artificial intelligences our ideal slaves (“tools” does not seem appropriate for an intelligent machine), or should they aim more at ensuring the success of our creation? Should they aim to embed a deep bond between humans and AIs, or would this be likely to lead to unfortunate outcomes (such as humans developing animosity towards AIs and AIs in return coming to hate humans)? Would AIs be likely to resent humans if crippling motivations were imposed on them?

Further notes: learning

Is the ability to learn perhaps in fact (a) a general computational/reasoning node which recognises how to split a problem into small parts and solve each part, and eventually optimises by building “hardware” or recorded algorithms to solve each of these recognised parts, or (b) a “genetic evolution” algorithm based on random trial and error, or (c) something in between? I suspect that learning in humans happens along lines closer to (a) than (b), on the evidence that learning in humans usually happens a little bit at a time, with the next step best done only when the previous has been understood and at least partially intuitively learned.
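A crude sketch of option (a), with every name here hypothetical: a learner that splits problems into parts and records (“builds hardware for”) each part once solved, so that it is answered instantly the next time it appears:

```python
def learn_to_solve(problem, split, solve_directly, combine, learned=None):
    """Option (a) as code: decompose, solve the parts, and cache each
    solved part so it becomes an instant, "intuitive" response.

    split(p)          -> sub-problems of p, or [] if p is atomic
    solve_directly(p) -> solution of an atomic problem
    combine(parts)    -> solution assembled from part-solutions
    """
    if learned is None:
        learned = {}
    if problem in learned:
        return learned[problem]  # already-built "hardware"
    parts = split(problem)
    if not parts:
        solution = solve_directly(problem)
    else:
        solution = combine(
            [learn_to_solve(p, split, solve_directly, combine, learned)
             for p in parts])
    learned[problem] = solution  # record the newly mastered part
    return solution

# Hypothetical usage: Fibonacci by decomposition; the cache is what
# makes later, larger problems fast.
fib_split = lambda n: [n - 1, n - 2] if n > 1 else []
print(learn_to_solve(10, fib_split, lambda n: n, sum))  # -> 55
```

Option (b) would instead mutate candidate solutions at random and keep whatever works; the observation that human learning proceeds a little at a time, each step resting on previously mastered parts, fits the caching pattern of (a) rather better.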
