The point isn't whether the brain is a Turing machine, it's whether it can be simulated by one. Push-down automata and finite state machines, for example, aren't Turing machines, but a Turing machine can simulate either of them. Since the anatomy of the brain and its functionality can be represented as binary data plus functions that operate on that data, the brain can be simulated by a Turing machine.
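To make the "weaker machine simulated by a stronger one" idea concrete, here is a minimal sketch (my own illustration, not anything from the thread): a general-purpose program, itself runnable on a Turing machine, simulating a finite state machine. The particular FSM, which accepts binary strings with an even number of 1s, is an arbitrary choice.

```python
def run_fsm(transitions, start, accepting, tape):
    """Simulate a finite state machine: step over the input one symbol at a time."""
    state = start
    for symbol in tape:
        state = transitions[(state, symbol)]
    return state in accepting

# Example FSM: states 'even'/'odd' track the parity of 1s seen so far.
EVEN_ONES = {
    ('even', '0'): 'even', ('even', '1'): 'odd',
    ('odd', '0'): 'odd',   ('odd', '1'): 'even',
}

print(run_fsm(EVEN_ONES, 'even', {'even'}, '1101'))  # -> False (three 1s)
print(run_fsm(EVEN_ONES, 'even', {'even'}, '1001'))  # -> True (two 1s)
```

The simulator is strictly more general than the machine it simulates, which is the relationship the argument claims holds between a Turing machine and a brain.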
Here you are presupposing that the brain is super-Turing, when in fact it is not.
Just want to point out that I find it mildly amusing that, despite having clearly stated that my initial post was based on the assumption that the human brain is a Turing machine, without actually speculating one way or the other, I am somehow being challenged by both sides of that argument in this thread.
But just to clear things up, here is the original passage you quoted, with emphasis added:
And if our brains aren't Turing machines, then I guess the singularity is at least an impossibility.
But anyway, I should thank you for proving my earlier point that basically all transhumanists think the human brain is a Turing machine.
OldM8 said:
It's important to keep in mind that from a purely physical point of view, there's nothing really special about the human brain. Ultimately, it's a jumble of atoms obeying the laws of physics, just like everything else. So theoretically, if we had a Turing machine with a huge amount of memory to store the positions and velocities of all the atoms comprising a model of our brain, we could simulate it using the basic laws of particle physics. In other words, the brain is just a lump of matter with emergent properties, like all other machines are, including computers.
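The simulation the quote describes can be sketched at toy scale (this is my own illustration, nothing like an actual brain simulation): store each particle's position and velocity, and step them forward under Newton's second law. The two-particle setup, constant forces, and time step are all arbitrary assumptions.

```python
DT = 0.01  # time step, arbitrary units

def step(positions, velocities, forces, mass=1.0):
    """One semi-implicit Euler step of Newton's second law, F = m*a."""
    new_vel = [v + (f / mass) * DT for v, f in zip(velocities, forces)]
    new_pos = [p + v * DT for p, v in zip(positions, new_vel)]
    return new_pos, new_vel

# Two particles in one dimension under equal and opposite constant forces.
pos, vel = [0.0, 1.0], [0.0, 0.0]
for _ in range(100):
    pos, vel = step(pos, vel, forces=[1.0, -1.0])
```

The quote's point is about scale: the same state-plus-update-rule scheme applied to every atom in a brain is, in principle, a Turing-machine computation, just one with an astronomical memory requirement.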
I am not entirely convinced it is true. I am not suggesting that a human brain might have metaphysical properties, just that a Turing machine has a number of limitations that a brain might not theoretically be constrained by, if it operates on some fundamentally different model. As far as I am aware, no one knows for certain whether that is the case.
The singularity almost universally refers to the point where humans create an artificial intelligence that surpasses human intelligence, at which point the exponential growth curve becomes dramatically steeper than it is now.
Eh, the steepness of an exponential growth curve is kinda irrelevant. But a good rule of thumb when dealing with exponential curves in real life: they never last. Can't say when this one will end, but it might be sooner than you think.
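The "exponential curves never last" point can be illustrated numerically (a hedged sketch with arbitrary parameters, not a model of any real trend): logistic growth is indistinguishable from exponential growth early on, but saturates at a carrying capacity instead of continuing forever.

```python
import math

def exponential(t, rate=0.5):
    """Unbounded exponential growth."""
    return math.exp(rate * t)

def logistic(t, rate=0.5, capacity=100.0):
    """Looks exponential for small t, then levels off at `capacity`."""
    return capacity / (1.0 + (capacity - 1.0) * math.exp(-rate * t))

# The two curves track each other early and diverge wildly later.
for t in (0, 5, 10, 20):
    print(t, round(exponential(t), 1), round(logistic(t), 1))
```

Which curve a real-world trend is actually on is only obvious in hindsight, which is the "can't say when this one will end" caveat.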
Silicon is merely a substrate upon which information is stored and manipulated, just as neurons are. You seem to be conflating silicon with the computing framework implemented upon it. Current computer chips only do "dumb stuff" because that's all they have been designed and programmed to do. There are plenty of mushy blobs of neurons in nature that are also quite stupid (snails, fish, etc.), but that doesn't mean neurons can't produce human-level intelligence. The development of strong AI requires the intersection of advanced hardware and software. Once we learn what algorithms/processes are responsible for producing general intelligence, we can implement them in dedicated hardware, and we will witness the genesis of the singularity.
My point was really just that computers are not very close to competing with human brains at this point in time.
Not really. Humans get distracted and procrastinate primarily because of the way our biological reward system works. We seek experiences that release dopamine, which is why we frequently switch to watching YouTube videos or playing games instead of focusing on the task at hand.
Have you ever watched a YouTube video or played a video game where you didn't learn anything? What even is the task at hand when it comes to advancing technology? I suspect this stuff is actually pretty useful for any form of creativity.
But to be a little more direct about my point here: I think a general intelligence machine is soooo different from our current computers that assuming it will operate in a similar fashion, and retain all the benefits of current computers, seems a little unlikely to me. It makes more sense to expect them to act like the actual general intelligence machines we have all around us.