Ir. Hj. Othman bin Hj. Ahmad


Since this theory was presented on the 4th of March, 1992, in an internal seminar at the Nanyang Technological University, there has not been much development. I am still trying to finish a book on this theory, explaining further what could be done with mathematical manipulations of the common perceptions of intelligence. Even Information Theory is not yet well accepted by information technologists, who are dominated by so-called computer scientists. Only communication engineers use Information Theory exclusively. Yet this theory of intelligence was already being used by a robotics engineer, G. N. Saridis, even before I proposed this formal measurement technique for intelligence.

The date, 4th of March, 1992, is the date of the file of the slides, intell_s, written in WordPerfect 5.1. I cannot remember exactly when the theory first came into my mind. What I remember is that it occurred just after a professor from an Indian university gave lectures on the prowess of the Indian mathematicians who actually invented zero and the numbering system that is the foundation of all transform mathematics. That lecture also touched on the mysticism of the Hindu religion, where infinity converges into one, in a circle. That mental gymnastics was probably the trigger for this theory.

The theory came about as I was designing my processor, a general-purpose digital signal processor (DSP), into which I tried to incorporate general-purpose functions as described by a high-level language, C. The purpose was to reduce the amount of effort that engineers had to endure in writing code for a DSP. I had already informally equated general-purposeness with intelligence, because of the ability to program and modify the code easily. I had spent endless hours writing code for a TMS32010 DSP in assembly language. A DSP needs a single-cycle multiplier, so its cycle time cannot be as short as that of pure RISC-type computers. The cycle time for a conditional jump instruction can be adjusted. It can be as low as a single cycle, as in a RISC computer, but it can be slowed down to accommodate the decision process and reduce problems of latency.

I needed a method to justify the various techniques. There is a relationship between latency and the number of conditional or unconditional jumps. Conditional jumps are much harder to implement at speed than unconditional jumps. It is therefore logical to use the frequency of conditional jumps as a measure of the need for fast, high-latency conditional jumps. If there are very few conditional jumps, we can afford to reduce the cycle time of jumps but suffer a penalty whenever a conditional jump is executed. If there are too many conditional jumps, we must optimise by reducing the latency penalty and increasing the cycle time of conditional jumps, but the overall cycle time will have to be increased.
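The trade-off above can be sketched with a simple cost model: the average time per instruction is the base cycle time plus the conditional-jump frequency times the jump penalty. The function and all the figures below are hypothetical, chosen only to illustrate where the crossover between the two design styles occurs; they are not from the original design notes.

```python
# Hypothetical cost model: compare two processor designs given the
# fraction of instructions that are conditional jumps.

def avg_time_per_instruction(base_cycle, jump_fraction, jump_penalty):
    """Average time per instruction when each conditional jump costs
    an extra `jump_penalty` on top of the `base_cycle`."""
    return base_cycle + jump_fraction * jump_penalty

# Design A: short base cycle, but conditional jumps pay a large latency penalty.
# Design B: longer base cycle, but conditional jumps are almost free.
for f in (0.01, 0.10, 0.30):  # assumed conditional-jump frequencies
    a = avg_time_per_instruction(1.0, f, 5.0)
    b = avg_time_per_instruction(1.5, f, 0.5)
    print(f"jump fraction {f:.2f}: design A {a:.3f}, design B {b:.3f}")
```

With these assumed numbers, design A wins at low jump frequencies and design B wins once conditional jumps become frequent, which is exactly the optimisation decision described above.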

Then it struck me that a program that has many conditional jumps appears to be more intelligent than a program that just computes. Conditional jumps also form what is called a variant, which cannot be proven correct by the formal software verification techniques I learned at Edinburgh University, and which cannot be removed by optimising compilers. A program that does not require conditional jumps can be optimised down to as little as a single instruction, i.e. just retrieving the answer: it is nothing more than a straight look-up table.
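A minimal illustration of that last point, using a hypothetical branch-free function: since every input follows the same instruction path, the whole computation collapses into a precomputed look-up table, and "running the program" becomes a single retrieval.

```python
# A program with no conditional jumps computes a fixed function of its
# input, so it can be replaced entirely by a precomputed look-up table.
# The function below is an arbitrary example for illustration.

def straight_line(x):
    # No branches: every input follows the same instruction sequence.
    return (x * x + 3 * x) % 16

# Precompute all answers: the program collapses into a table.
TABLE = [straight_line(x) for x in range(16)]

def as_lookup(x):
    return TABLE[x]  # a single "retrieve the answer" step

assert all(straight_line(x) == as_lookup(x) for x in range(16))
```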

In 1980, in my final year at City University, London, I was introduced to Information Theory in a practical way. It was not like what Shannon had written; it was more a way of measuring information content. It was probably my best subject, as I scored 100%. Ever since I learned that theory, I have been applying it to measure the information content of microprocessors. My results were not encouraging, because they were too complex and meaningless. After learning about numerical methods, I was already dissatisfied with the linear equations that we use to solve engineering problems. The real world is non-linear. For example, hysteresis is very common and cannot be modelled by linear equations. The best approach is the trial-and-error method of numerical analysis, where we calculate point by point and time step by time step; for such problems it is the only general method. We do not need to know the formulae, which assume linear conditions. A computer can solve problems more accurately by analysing point by point. My faith was broken further when the formulae of solid-state physics proved similarly unreliable. The formulae are mere assumptions.
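The practical way of measuring information content referred to here is, in essence, Shannon's entropy of a symbol stream, in bits per symbol. A minimal sketch (the example strings are arbitrary):

```python
# Shannon entropy of a symbol stream, in bits per symbol:
# H = sum over symbols of p * log2(1/p), with p the symbol's frequency.
from collections import Counter
from math import log2

def entropy_bits_per_symbol(stream):
    counts = Counter(stream)
    n = len(stream)
    return sum((c / n) * log2(n / c) for c in counts.values())

print(entropy_bits_per_symbol("aaaa"))      # 0.0 -- a constant carries no information
print(entropy_bits_per_symbol("abab"))      # 1.0 -- one bit per symbol
print(entropy_bits_per_symbol("abcdabcd"))  # 2.0 -- two bits per symbol
```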

With the above background, I started to analyse how I could mathematically equate conditional jumps to intelligence. Since intelligence is an abstract object, as abstract as information, I used Information Theory to analyse the conditional jump. The turning point was my realisation that conditional jumps deal with changing addresses, not the contents of the addresses. The result is then easy to calculate: just count the number of conditional jumps. Earlier, I had been trying to calculate the information content of each instruction. Just imagine how difficult it is to calculate the amount of information in an add instruction with its various operands, versus a multiply instruction. It is possible, by counting the logic gates used to implement the instructions, but the result would not be consistent and could not be represented by a simple formula. I was not interested in calculating the semantic content of an instruction, because I also believe that the value of an instruction is up to the user of that instruction. Mathematically, it is already possible to measure the complexity of an instruction by the truth table necessary to implement it, without its redundancies. Logic designers should be well aware of this method.
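Counting conditional jumps is then a purely syntactic exercise over a program listing. A sketch of the idea, using a hypothetical toy assembly listing and an assumed set of conditional-branch mnemonics (neither is from the original work):

```python
# Count conditional jumps in a toy assembly listing. The mnemonic set
# and the listing below are hypothetical, for illustration only.

CONDITIONAL_JUMPS = {"jz", "jnz", "jg", "jl", "je", "jne"}

def count_conditional_jumps(listing):
    count = 0
    for line in listing.splitlines():
        words = line.strip().split()
        if words and words[0].lower() in CONDITIONAL_JUMPS:
            count += 1
    return count

program = """
    mov r0, 10
loop:
    sub r0, 1
    jnz loop
    cmp r0, r1
    je  done
    add r0, r1
done:
    ret
"""
print(count_conditional_jumps(program))  # 2
```

Under this measure the straight-line arithmetic scores zero, while the loop and the comparison each contribute one conditional jump.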

When I first discovered the implications, I perspired and my heart beat faster. I was worried, because this discovery would make the study of intelligent systems more scientific and productive, and it should also remove a lot of myths about intelligence. I was also disappointed that no one else, better qualified and in a better position than I was, had proposed this theory before. That includes Alan Turing, known as the father of the computer. After a literature search, I found one person, a robotics researcher, G. N. Saridis, already implementing this definition of intelligence. I was only a part-time researcher; my main job was as a telecommunications engineer. I was not interested in the politics of publications.

After my internal seminar presentation, I tried to publish this paper. My aim was to put the theory in the public domain forever, so that someone better equipped than I am might develop it further. I was disappointed when it was rejected by many publishers, even in unmoderated channels. Finally it was accepted for publication at ISITA 1992 in Singapore. The programme chairman was Professor Hirota, a Japanese. Somehow, I suspected that nationality played a role, but a colleague of mine, a regular publisher of technical papers, pointed out that the Japanese are good at mathematics. Professor Hirota, who chaired my presentation, questioned me on the use of this theory. My answer, which was published in the paper, was to characterise machines and programs so that they can be matched well: programs that need more intelligence should run on machines that are optimised for intelligence, i.e. those that can handle conditional jumps and interrupts fastest. I have not heard of such machines yet.