My Thoughts on Artificial Intelligence
By mallisle
What do we mean when we say a computer is intelligent? This author (see website at bottom of page) is correct in his understanding of ANI - Artificial Narrow Intelligence. This kind of intelligence is process-specific. The Google self-driving car, the software that simulates different kinds of boilers and measures their efficiency to decide which are best, the computers that automatically carry out transactions on the stock exchange, and the Yahoo artificial intelligence spam-fighting software which can change its own programming and 'make itself smarter' are all examples of 2015 ANI. In fact, the really intelligent stuff out there now is all ANI, for reasons that are obvious to anyone who has ever worked in AGI.
AGI is Artificial General Intelligence. That means a computer that is programmed to think like a human being. In my experience this usually means that the computer is programmed to behave like a human being, and that is different to actually thinking like a human being. In the early 1970s (when text files like this were invented) Horizon showed a computer programme that answered questions. It simply recognised a single word from a stream of words that had been typed by the user. If you typed 'What is the weather like today?' it would pick up the word weather and say something about it. It malfunctioned when someone deliberately typed the sentence, 'Einstein said that everything is relative', and the computer replied, 'Tell me more about your relative.' Megacat is a similar program I created in 2002: a cartoon cat that gives illustrated answers to questions when you tick a box next to one of a number of questions displayed on the screen. Megacat works by knowing which box you ticked and which files to open. The Horizon computer worked by recognising a string of letters and giving the appropriate response.

AGI programmers have had 45 years to improve their party tricks, and that is all most AGI programs are. The Japanese child care robots can recognise 10 faces - so can my cat. Look at a diagram of a cat's brain and you will see it is a profoundly stupid animal. It has a visual cortex and a cerebellum to control body movement, but the thinking part of its brain is minimal. When a Japanese robot says, "Hello Wendy" in the morning and tells the child a story, it has the visual recognition of a low-intelligence household pet and the literary appreciation of an audio player. It is still relatively stupid. The real moral dilemma with the Japanese robots is not that they have human intelligence; it is that a child might believe they do. If a child bonds with a machine it believes to have human intelligence when it really doesn't, that could be a real problem.
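The single-keyword trick the Horizon computer used can be sketched in a few lines. This is my own illustration, not the original program; the word list and replies are invented, but the mechanism - match one known word, emit a canned sentence, understand nothing - is the one described above, complete with the 'relative' malfunction.

```python
# Hypothetical sketch of a 1970s-style keyword chatbot (not the real
# Horizon program). It scans the input for one known word and returns
# a canned reply; it has no understanding of the sentence at all.
RESPONSES = {
    "weather": "It looks changeable. Do you like this weather?",
    "relative": "Tell me more about your relative.",
}

def reply(sentence):
    # Strip simple punctuation and match the first known word.
    for word in sentence.lower().replace("?", " ").replace(".", " ").split():
        if word in RESPONSES:
            return RESPONSES[word]
    return "Please go on."
```

Typing 'Einstein said that everything is relative' reproduces the malfunction: the program latches on to the word relative and answers as if you had mentioned a family member.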
To pass the Turing Test a computer has to be indistinguishable from a 13-year-old boy. That only means it needs to give the same answers to questions that a 13-year-old boy would give. I believe that the questions are still only written questions - speech recognition would be far more difficult than making a computer answer questions as if it were a 13-year-old boy. Go back to the Horizon computer program. If I wanted to have a conversation about Einstein's Theory of Relativity, I would program the computer to recognise the two words Einstein and relative and to notice that they had occurred in the same sentence. Then I could produce as many sentences about the theory of relativity as I liked - perhaps, if I had 45 years to think about it, giving the impression that my computer was indistinguishable from a university professor answering questions about Relativity Theory. The illusion of human intelligence. Remember that even computers that can recognise faces and translate from one language into another are still using ANI. It is extremely easy for a programmer to simulate human intelligence in a machine without actually creating human intelligence.
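The two-word upgrade described above - recognising Einstein and relative in the same sentence - is just as mechanical. A sketch (again my own invention, not any real system):

```python
# Hypothetical sketch of the two-keyword trick: only switch to canned
# physics answers when BOTH trigger words occur in the same sentence,
# which avoids the 'your relative' malfunction without adding any
# understanding whatsoever.
def topic(sentence):
    words = set(sentence.lower().replace("?", " ").replace(".", " ").split())
    if "einstein" in words and ("relative" in words or "relativity" in words):
        return "physics"   # a stock of sentences about Relativity would go here
    if "relative" in words:
        return "family"    # the old single-word response
    return "unknown"
```

The program still understands nothing; it has simply been given one more string of letters to look for.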
What do we mean by human intelligence? In the article Turry begins to destroy the world and eventually the universe because "she could understand that humans could dismantle her or change her coding." Here's a new concept in programming - Turry could understand. Computers at the present level of technology don't understand anything. They add numbers together and they store data. They react to data inputs in a way determined purely by the programmer. Human intelligence means that the computer understands; at present, a computer can only recognise a familiar pattern of data, like a face or a word in a foreign language, or store information. It does not understand. If Turry simply had a data file saying that humans could dismantle her and change her programming, and translated it from one language into another, she would understand nothing. In the article Turry had Intelligence Amplification - the ability to make herself smarter. At present this is only true in ANI. It is easy to achieve within the limits of one specific task - in Turry's case, producing better handwriting. It is difficult to see how any present-day software would give a computer the power to improve its intelligence generally, beyond the limitations of the tasks it had been assigned to perform. A 3D printer might be programmed to improve its production process and informed, by the designer or by an advanced visual recognition system, that its product had disintegrated. It would be making itself smarter, but only in the limited ANI sense. Could it study politics? Could it study sociology? Could it decide that humans were inferior in their ability to do manual work? Only if the software contained these constructs.
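The narrow kind of 'making itself smarter' described above - a machine tuning its own settings against one fixed objective - can be sketched as a simple search loop. Everything here is illustrative: score stands in for feedback from the designer or a visual recognition system, and the optimum value of 3.0 is invented.

```python
# Sketch of ANI-style 'intelligence amplification': the machine improves
# its own setting, but only against the single objective it was given.
# It cannot step outside that objective, let alone study politics.
def improve(score, setting=0.0, step=1.0, rounds=60):
    for _ in range(rounds):
        for candidate in (setting - step, setting + step):
            if score(candidate) > score(setting):
                setting = candidate
        step *= 0.9  # search ever more finely around the best value found
    return setting

# 'score' is the machine's only window on the world, e.g. measured
# handwriting quality; here a made-up curve with its best value at 3.0.
quality = lambda s: -(s - 3.0) ** 2
tuned = improve(quality)
```

The machine does get 'smarter' in the measured sense, but only along the one axis its programmer gave it, which is the point of the paragraph above.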
It is very easy for the programmer to make sure, however advanced technology becomes, that the 3D printer only has a limited kind of ANI in its software - that it does not have powers to make itself smarter that might prove destructive, or inspire it, for example, to make weapons to destroy the humans in the factory it decides are inferior. Humans are capable of strategising. For a computer to have true human intelligence it would have to be able to strategise - to make its own plans. This is hard to believe. Software is created in the image that the programmer seeks to create for the product, only ever doing what the programmer wants it to do. The computers on the stock exchange will do what a human trader would do but more quickly and with less effort, which is why they are used. For any computerised system to strategise of its own accord would be extremely difficult - perhaps even impossible. Humans have social manipulation skills. It is ridiculous to suggest that any computer could manipulate anyone to do anything of its own accord. We are giving it free will if we believe that it can strategise and manipulate, and free will is not something we fully understand ourselves, never mind know how to program into a machine. We are touching the questions that religion and politics deal with - do people decide to be bad or are they programmed that way, and what influences program them to behave either rightly or wrongly? I can point to the part of my brain that moves my fingers to type on this keyboard, but who pushed the button to activate that part of my brain? Injecting free will or moral decisions into a machine is even more complicated - we have no idea how our own minds work in this respect.
Turry destroys the world and the universe because she has the goal to reproduce. That is a property of animals and humans, not computers. If nobody told Turry to fill the world and the universe with handwriting machines of her own kind, would this image ever have entered her mind? Does Turry even have a mind? She is also driven by an instinct for self-preservation, killing humans because they could destroy her or change her coding. An instinct for preservation is present in animals and humans. It would not necessarily be present in super intelligent computers. Turry can also change herself from a humble ANI with no more imagination than an advanced computer printer, to an AGI with general human intelligence, to an ASI with Artificial Super Intelligence - all through intelligence amplification, by making herself smarter.
There are two questions here. Computers are becoming more powerful and one day somebody will develop a computer which is as powerful as a human brain. The computers we have today are nothing like human brains, as I said in my notes on 2015 attempts at AGI. But if computers develop for another forty years at the rate they have done for the last forty years, it is possible that one will be developed which is as powerful as the human brain. This might take longer than 40 years anyway - working with individual atoms and using them as transistors is theoretically possible but not nearly as easy as working with silicon. We are coming to the very edge of the silicon transistor chip technology that was invented in the 1960s. Modern computers still use the same kind of silicon chips as their ancestors; the features are simply hundreds of times smaller, and the transistor counts millions of times higher, than they were fifty years ago. If the smallest connection inside a component is 20 nm in 2015, this is close to the limit of silicon technology: a silicon atom is only about 0.2 nm across, so a 20 nm feature is only a hundred or so atoms wide. We can carry on a little bit further but no more. The next stage is to print a layer of carbon nanotubes on to a piece of plastic - this has already been done by Australian companies making solar panels - and then to etch circuits out of it using a small photographic slide of the computer chip we want to make, which is the way we have made computer chips since the 1960s. Each carbon nanotube is a ring of carbon atoms about a nanometre across. This slide would be so small that it enabled us either to make electrical connections to each ring of atoms or, after a few more years, to each individual atom. We would need light of a very short wavelength to do this, perhaps X-rays or gamma rays. This is the ultimate kind of computer, and the only way to make a chip any more powerful would be to pile several layers on top of each other. You cannot have transistors smaller than one atom. There are several people trying to do this.
Another method is to have phosphorus atoms frozen at liquid-gas temperatures to stop them moving around. But most scientists believe that one day there will be atomic computers. It may happen between now and 2060 or it may take a lot longer. Moore's law, stated in the 1960s, says that the number of transistors on a chip doubles every year and a half. A carbon nanotube is about 1 nm across. If we're now working at resolutions of 20 nm, that is 20 times smaller in each direction, or 20 x 20 = 400 times more transistors in the same area - about nine doublings, or roughly thirteen years at the Moore's-law pace, so it would happen within 20 years, by 2035. But when you're working at sizes that are very much more difficult, Moore's law isn't really true. The pace of technological change may slow down, in which case it might be the end of this century before we can make a transistor out of one atom. The next stage would be to see how many layers of these transistors we could pile on top of each other. If we could make a block a few centimetres thick which had millions of layers of transistors, it might have a tendency to overheat. We could have a fan blowing through metal vanes around it, like the magnetron in a microwave oven, so it didn't get too hot. We would then have a computer that equalled or even surpassed the power of the human brain.
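The arithmetic above can be checked in a couple of lines. The 20 nm and 1 nm figures are the ones used in this essay; the 18-month doubling period is the usual statement of Moore's law.

```python
import math

feature_2015 = 20.0   # nm, smallest connection in 2015 (figure used above)
feature_target = 1.0  # nm, roughly one nanotube across - near-atomic scale

linear_shrink = feature_2015 / feature_target  # 20x smaller in each direction
density_gain = linear_shrink ** 2              # 20 x 20 = 400x transistors per area
doublings = math.log2(density_gain)            # about 8.6 doublings needed
years = doublings * 1.5                        # Moore's law: one doubling per 18 months
# roughly 13 years, i.e. well within 20 years of 2015 - hence 'by 2035'
```

If the doubling period stretches as the engineering gets harder, as suggested above, the date slips accordingly.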
The second question is what this computer would be like. Would it have self-awareness, and would it be driven by a need to survive or to reproduce? Particularly from a safety point of view, would it be able to strategise? Would it be able to make its own plans and manipulate other machines into being part of its great cause? If it understood that humans could dismantle it or change its programming, would it be afraid of this? Would it have free will? Would it have a soul? This is a much more philosophical, religious and political argument. I believe very strongly that some time this century there will be Super Intelligent computers. I am uncertain that they will have souls or try to protect themselves from us. In 2015 our very best ANI is at the level of a simple animal - Japanese child care robots able to recognise a few human faces like a cat does, Google cars able to drive around the city roads like rats finding their way through a laboratory maze. These systems do not exhibit the instincts for survival, reproduction or supremacy that an animal would exhibit, even if they are of similar intelligence. It may also be that a computer in 2095 that looks like a big metal block with cooling fins around it is simply a massively powerful talking encyclopedia, searching all day for information on the internet connected with subjects studied at the university, indexing the articles and joining them together accurately, saving teachers and students hours of painstaking work. Perhaps it gives the weather forecast and is trusted with the task of directly setting the interest rate. Is it loyal and dutiful to the government or the university? It may have no concept of loyalty or duty but simply do what it is designed to do. An unthinking, unfeeling data processing and calculating machine. More powerful than a human brain?
Perhaps, but only when measured in terms of memory capacity and data processing speed, not absolutely intelligent in the sense of human intelligence.
The level of nanotechnology talked about in the article is well ahead of 21st century technology, possibly even pure fantasy. I have already explained how nanotube components, if they are the next step on the technological ladder and nothing else replaces them in the next 80 years, will actually be made. Nanobots for medical use would be rings of carbon atoms arranged on top of each other to form some piece of medical equipment - perhaps something that could carry a drug to the part of the body where it was needed. An anti-cancer drug could be delivered to the site of the tumour and no longer spread everywhere. That is about as intelligent as a present-day military drone. The machine certainly could not reproduce itself. Producing a nanobot is difficult, or we would have made these already - the idea has been around for years. Also remember that whatever century you're living in, this machine, by definition, consists of only a few hundred atoms. Since a transistor cannot be smaller than one atom, a device of a few hundred atoms can have at most a few hundred memory locations and certainly no real intelligence. It would have very little computing power - less than your washing machine - and would not be capable of super intelligence or of starting a war. It would also find it difficult to manipulate cell DNA. DNA is extremely complicated and this would be a simple device, unless it could be given more computing power by being worked by something similar to radio control. Repairing damaged cell DNA is in a totally different league and has never been done or even seriously considered by 21st century science. It would be beyond the ability of a simple nanobot unless there were a known way of repairing DNA in a laboratory, which there isn't, and unless the nanobot could be linked to a hugely powerful external computer by radio control. Even then, it would have to repair the DNA in billions of cells every day. I don't expect that if I ever have nanosoldiers in my body they will make me immortal.
Read the original article. This is my response. It's quite long so you could bookmark it.
http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-2.html
Comments
I meet with two friends on a regular basis for a chat and a couple of pints. We usually come to discuss science, and in particular astrophysics. I would like to stress that it is at a very basic level, but what is important is that we discount nothing and state very little as fact.
This article fascinates me. ANI or any of the other forms of artificial intelligence scares me a little. People are always afraid of what they do not understand and AI is no exception with me. I prefer to call science fiction 'science future'. We must never discount what is and is not possible.
I agree that the party tricks are just that. Things to sell products, ideas and sometimes even to deceive. Given the advances I have seen in my 60 years on this planet I would rule little out as possible. If I am honest the concept of machines taking over (a la Matrix) is laughable now, but that is using what is in front of us.
Nanotechnology is probably the science of the near future. If scientists are still discovering that 'genes that appear to do nothing' are actually there for a purpose then why not look beyond what is possible with nanotechnology and ANI.
Thank you for making me think, and for a piece of writing that needs more than one read as it is so packed with discussion points.