I inadvertently walked into the ladies' loos at work today - in my defence, it seems the building's designers decided to mix up the sexes' respective bathrooms by floor! Why was this revelatory? Well, I barely needed to push open the first of two doors before I knew something was amiss - by the time the second one was cracked open, I was already turning back. Pride intact, I might add!
"I knew something was amiss" got me thinking - there were no visual clues that this was not the gents (besides the huge sign on the door, but I was texting at the time and had my head down...). The only thing I can assume was that the smell was not quite the same, triggering a subconscious "this is not right" reaction. I thus successfully passed a context problem and prevented myself suffering (psychological) harm!
Now, my success in this simple task might seem trivial, but it immediately occurred to me that a pseudo-AI might find solving a similar context problem rather difficult. This is, if you like, a demonstration of the validity of the Chinese room thought experiment. I could construct a look-up table containing every characteristic of a ladies' loo that had ever been encountered, and yet the machine could inadvertently walk into one tomorrow, given the right change in circumstances. It doesn't fundamentally understand the concept of the ladies' loo, and hence it is unable to judge whether to proceed in the context of the information it has available.
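To make that concrete, here's a minimal sketch in Python of such a look-up table. Everything in it - the features, the entries, the scents - is invented for illustration; the point is simply that a table-driven system has no answer for a context it has never catalogued.

    # A minimal sketch of the look-up-table "room classifier" described above.
    # The features and table entries are hypothetical, invented purely for
    # illustration - no real system is being described here.

    LADIES_LOO_TABLE = {
        # (has_urinals, has_sanitary_bin, scent): is_ladies
        (False, True, "floral"): True,
        (False, True, "neutral"): True,
        (True, False, "neutral"): False,  # clearly the gents
    }

    def classify_room(has_urinals: bool, has_sanitary_bin: bool, scent: str) -> bool:
        """Return True if the table says 'ladies', False if 'gents'.

        Raises KeyError for any combination of features the table has never
        seen - the system has no underlying concept to fall back on.
        """
        return LADIES_LOO_TABLE[(has_urinals, has_sanitary_bin, scent)]

    # A familiar situation is handled fine...
    print(classify_room(True, False, "neutral"))   # False: the gents

    # ...but a novel context (this building swaps the loos by floor, and the
    # cleaner has used a new air freshener) falls straight through the table.
    try:
        print(classify_room(False, True, "citrus"))
    except KeyError:
        print("No entry - the table cannot judge an unseen context")

My nose, by contrast, didn't need a matching entry: the judgement came from understanding, not retrieval.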
I have also started to wonder about context and AIs in general. Those subtle clues that my brain used to discover that I had walked through the wrong door are just the tip of the iceberg for human beings. Most of us have an instinctive ability to read shifts in posture and tone in one another - witness the immediate recognition of hierarchy in business (and many social) groups. This is a result of our recent history as sub-linguistic pack animals: it makes sense for weaker members of the pack to detect subconsciously when the alpha animals are getting ready to give them a clip round the ear, and to beat a hasty retreat to reduce the risk of injury. If we did succeed in building true AIs, would they have the same context recognition? Would they form natural hierarchies?
My immediate feeling is that they would not, unless there were an evolutionary rationale for such behaviour. I believe it is possible to instil such behaviour by building certain basic instincts into an AI - the desire to survive, for one, even if that takes the form of a desire to reproduce (and hence a need to survive). Furthermore, there is the need to induce situations where that survival could be threatened, but in such a way that the threat could be avoided through early recognition of the danger (a toy sketch of this follows below). I quickly reach a spiral of complexity when considering this - as creators we could program what we like, but a truly self-aware, learning AI could simply reprogram itself unless we had some means of ingraining instincts like those above - an AI subconscious. An interesting idea.
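For what it's worth, here's a toy sketch of that evolutionary setup in Python. The model and numbers are entirely made up: each "agent" is nothing more than a heritable threshold for how early it flees a threat, and selection does the rest - the "instinct" emerges rather than being programmed in.

    # A toy sketch of the evolutionary setup suggested above. Nothing here is
    # a real AI - the model and numbers are invented purely to illustrate how
    # a survival "instinct" could be selected for rather than hand-coded.

    import random

    def simulate(generations: int = 50, pop_size: int = 100) -> float:
        # Each agent is just a flee threshold in [0, 1]: higher = flees earlier.
        population = [random.random() for _ in range(pop_size)]

        for _ in range(generations):
            survivors = []
            for threshold in population:
                danger = random.random()  # how quickly today's threat escalates
                # An agent survives only if it recognises the danger early enough.
                if threshold > danger * 0.8:
                    survivors.append(threshold)
            if not survivors:
                survivors = [random.random() for _ in range(pop_size)]
            # Survivors "reproduce" with slight mutation back to full size.
            population = [
                min(1.0, max(0.0, random.choice(survivors) + random.gauss(0, 0.05)))
                for _ in range(pop_size)
            ]
        return sum(population) / len(population)

    print(f"mean flee threshold after selection: {simulate():.2f}")

Run it and the mean threshold climbs steadily towards early flight: the wariness isn't programmed anywhere, it simply falls out of who survives to reproduce.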
I digress and verge on the Asimov. True AI is many years away - further out than fusion power, fuel cells, mind-altering mobiles and pretty much every other subject I think about. Keeping to my self-control mechanism of "what's the commercial value of that?", I've also started to wonder what the value of investing in artificial intelligence is. This line of thought is even more nebulous than the above, so I'll leave it alone while it crystallises. Apologies for the ramble - hopefully it stimulated some more organised thought for you :).
"I knew something was amiss" got me thinking - there were no visual clues that this was not the gents (besides the huge sign on the door, but I was texting at the time and had my head down...). The only thing I can assume was that the smell was not quite the same, triggering a subconscious "this is not right" reaction. I thus successfully passed a context problem and prevented myself suffering (psychological) harm!
Now, my success in this simple task might seem trivial, however it immediately occurred that an pseudo-AI might find solving a similar context problem rather difficult. This is, if you like, a demonstration of the validity of the Chinese room thought experiment. I could construct a look-up table containing every possible characteristic of a lady's loo that had ever been encountered and yet it could inadvertantly walk into one tomorrow, given the right change in circumstances. It doesn't fundamentally understand the concept of the lady's loo and hence it is unable to make an judgement on whether to proceed in the context of the information it has available.
I also have started to wonder about context and AIs in general. Those subtle clues that my brain used to discover that I walked through the wrong door are just the tip of the iceberg for human beings. Most of us have an instinctive ability to read shifts in posture and tone in one another - witness the immediate recognition of hierarchy in business (and many social) groups. This is a result of our recent history as sub-liguistic pack animals: it makes sense for weaker members of the pack to detect subconsciously when the alpha animals are getting ready to give them a clip round the ear and beat a hasty retreat to reduce the risk of injury. If we did succeed in building true AIs, would they also have the same context recognition? Would they form natural heirarchies?
My immediate feeling is that they would not, unless there was an evolutionary rationale for such behaviour. I believe that it is possible to incite such behaviour by building certain basic instincts into an AI - desire to survive, for one, even if that takes the form of a desire to reproduce (and hence a need to survive). Furthermore, there is the need to induce situations where that survival could be threatened, but in such a way that the threat could be avoided through early recognition of the danger. I quickly reach a spiral of complexity when considering this - as creator we could programme what we like, but a truly self-aware, learning AI could simply reprogramme itself unless we had some means of ingraining instincts like above - AI subconscious. An interesting idea.
I digress and verge on the Asimov. True AI is many years away - further out than fusion power, fuel cells, mind altering mobiles and pretty much every subject I think about. Keeping to my self-control mechanism of "what's the commercial value of that", I've also started to wonder what the value of investing in artificial intelligence is. This line of thought is even more nebulous than the above, so I'll leave that alone while it crystallises. Apologies for the ramble - hopefully it stimulated some more organised thought for you :).