It has sensors that cut the heat off and push the toast out. At the most complex level, computers can store massive amounts of information and provide instant access to it. So at this very basic level of "knowing", many machines already do know.
Turning to the third definition of knowing, do any machines "recognize or distinguish in comparison"? Again, the answer is yes. A car can recognize a whole host of facts, from the road conditions to the air temperature to its own state of repair. It distinguishes a full fuel tank from an empty one and acts on that information by providing a warning light for the driver. The car distinguishes a door that is closed from a door that is open, and will beep when a door has been left ajar. The car, or rather the computer within its engine, is distinguishing between one situation and another.
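This kind of "distinguishing" can be made concrete with a minimal sketch. The function below (the sensor names and thresholds are hypothetical, chosen only for illustration) maps sensor readings to warnings, which is essentially what the car's computer does: it discriminates between states and acts, without any awareness of what the states mean.

```python
def warnings(fuel_level: float, door_closed: bool) -> list[str]:
    """Map raw sensor readings to driver warnings.

    fuel_level is a fraction of a full tank (0.0-1.0); door_closed
    reports the door sensor. Both names are illustrative, not taken
    from any real automotive system.
    """
    alerts = []
    if fuel_level < 0.1:            # distinguish a near-empty tank...
        alerts.append("low fuel")   # ...and act by lighting a warning
    if not door_closed:             # distinguish an open door...
        alerts.append("door ajar")  # ...and act by sounding a beep
    return alerts

print(warnings(0.05, False))  # → ['low fuel', 'door ajar']
print(warnings(0.80, True))   # → []
```

The point of the sketch is how little is required: the "recognition" is nothing more than comparisons against fixed conditions.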
So the third definition seems to have been met. But perhaps the most important question is whether a machine does, or ever will be able to, "perceive". Essentially this is a question of whether a machine will ever be able to "think". Will we turn on a computer one day and have it say "hullo, I am here", and know that it is saying it? Thus the basic question is whether a machine will ever be alive.
The idea of a thinking computer, with all the benefits and risks involved, has existed virtually since computers were invented. From the robots of 1950s science fiction, to Star Trek computers gone out of control, to HAL, and on to Data in Star Trek: The Next Generation, there has been a pervasive fascination with, and fear of, the thinking computer. The fascination involves the apparently limitless potential of a machine that combines the raw processing power of a computer with a human ability to reason. The fears stem from the same possibilities: will human beings remain the dominant life-form alongside a thinking computer?
However, as the slow development of robotic technology has shown, the creation of Artificial Intelligence (AI) is much more difficult than previously thought. Computers can learn from their mistakes, can teach one another and can even repair one another, but none so far has shown any sign of self-awareness. They do not, or at least do not seem to, think.
At the moment the processing power of a computer is similar to that found within simple animals such as an ant or a flea. Such animals do not have "brains" in the sense of higher animals such as mammals. They have neural centers where their relatively simple life functions are organized and implemented. Ants probably are not self-aware in the manner of a human being, and yet they are "alive". The question arises of whether a computer will show self-awareness in a mammalian or perhaps even human sense when, within a few decades, it has a similar processing speed to the human brain.
Here the analysis moves into the highly difficult business of defining "life", "thought" and perhaps even "spirit". As far as biology is concerned, the human brain is made up of a network of neurons between which various connections are made. Two neurons, or long lines of neurons, either have a