Interesting article, but I suspect you are way off the mark. It is tempting to draw analogies between the brain/consciousness & the computer/program, but the idea of conscious computers simply does not stack up. Without consciousness you cannot have belief. Whilst science & philosophy do not yet understand what consciousness is, we can draw some conclusions about what consciousness is not. If you are not familiar with John Searle's Chinese Room Argument, I suggest you have a look. The argument makes it clear that there is far more to "understanding" than just processing information (which is what computers do). Information is processed when we think, but thinking is not the same thing as information processing. The Hard Problem of consciousness (as defined by the philosopher David Chalmers) still eludes science. Until we have even a basic grasp of what consciousness actually is, we should not expect our machines to start doing our thinking for us.
RE: Can Machines Ever Have Beliefs?