time travelling fun

When I said that qualia may not exist, I was just implying that physicalism is still a viable theory. The existence of qualia does bring up the mind-body problem.

"Predefined" and "free will" are mutually exclusive.

You don't need qualia to know what you are. You ARE your qualia. If there is no quale, the set of what is "You" is an empty set.

Back on topic, "true" time travel to the past must work in such a way that the object which travelled will eventually arrive back at its own timeline even if it did not move once it arrived. Also, the past must have already been changed. Time travel does not make the timeline steer from course X to course Y: the timeline was never going toward X in the first place!
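
For what it's worth, that self-consistency requirement can be phrased as a fixed-point condition: a history that includes the traveller's visit must produce exactly itself. A toy sketch in Python (the two-label history and the evolve() dynamics are entirely made up, just to illustrate the idea):

# Toy self-consistency check (illustrative only).
# evolve() says which history results when a traveller from a given
# history goes back and acts in the past.
def evolve(history):
    effects = {"X": "Y", "Y": "Y"}  # hypothetical dynamics
    return effects[history]

# A history is self-consistent iff it is a fixed point of evolve().
consistent = [h for h in ("X", "Y") if evolve(h) == h]
print(consistent)  # ['Y'] -- the timeline was never going toward X

Course X fails the check, so it was never a live option; only Y ever existed.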
 

vonFiedler

"Predefined" and "free will" are mutually exclusive.
But that's not what's been said in this thread. It's been said that if something is predefined, you didn't have free will because you wouldn't have done anything differently. But the idea that the two concepts are mutually exclusive is just silly. If I did something, I can't change that I did it, but the problem with the argument against free will is in that very sentence: I can't change that I did it. Then you smartasses say "Oh, well you need to define I". How is there any confusion about what I means? If a button is pushed by me, are you going to tell me that I didn't push that button? Did the Easter Bunny push it? So there shouldn't be any question as to what I means. You know who I'm talking about when I say I. Don't act like you don't. I live. I don't coast by and fool myself into thinking that I'm a slave to external stimuli, because my brain is wired not to be. And regardless of what you have to say about the matter, the contents of my brain are who I am. That's the most physical, scientific explanation of the matter. You say qualia, some say soul, I say neurons. I can prove the existence and function of neurons.
 
Don't you understand that "You" and "Your body" are two different entities? I can conceive of a body without qualia. I can conceive of my own body without qualia; however, such a concept is itself a quale. Something material must have pressed the button, because the button has physico-chemical properties, and "You" are not material: qualia have no physico-chemical properties. They do not carry the energy needed to alter the inertia of the button.

You did not press the button; your body did. Edit: your brain sent signals, not "you".

Edit: you need to define "kill" first before making such a claim. Besides, you are now confusing the metaphysical concept of a "person" with the moral/legal concept of a "person". Causality and responsibility are not the same thing.
 

vonFiedler

No, I did press the button. I sent signals to my hand to move it downwards. Again, I can prove the existence of such a system.

"You" and "your gun" are two different entities, but if you kill someone with your gun you still killed someone.
 
vonFiedler, you seem to have a very broad view of free will. In the interests of understanding, please answer this thought experiment: suppose I create a robot and program it, and part of its programming is that whenever it sees a circular red button of diameter 2 inches it will press it exactly once. I release the robot into the world. It wanders around for a little bit, then it sees a red button with a two inch diameter and it presses it.
In your opinion, has the robot used free will in pressing the button?
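
To pin the scenario down, here is the entire robot as a few lines of Python (a hypothetical sketch; every name in it is mine, not part of the thought experiment):

# Minimal sketch of the button-pressing robot described above.
def press(button):
    print("pressed a", button["color"], "button")

class Robot:
    def __init__(self):
        self.pressed = set()  # buttons this robot has already pressed

    def see(self, button):
        # The fixed rule: press any circular red button of diameter
        # 2 inches, exactly once per button.
        if (button["shape"] == "circle"
                and button["color"] == "red"
                and button["diameter_inches"] == 2
                and id(button) not in self.pressed):
            self.pressed.add(id(button))
            press(button)

robot = Robot()
robot.see({"shape": "circle", "color": "red", "diameter_inches": 2})

The question, then, is whether anything in those few lines counts as exercising free will.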
 

vonFiedler

Edit: you need to define "kill" first before making such a claim.
No I don't. You know exactly what I mean, stop being a tit.

lati0s, you guys seem to be struggling to determine whether the robot even pressed the button at all. Now you say the robot is intelligent. Do you mean sentient? If the robot is sentient, it has free will. If a robot is sentient, it could feasibly stop pressing buttons even if you programmed it to. At the very least, it would stop and think "why do I always press buttons?". A sentient AI could decide whether or not it wants to follow its programming. It could choose not to press the button to spite its programming. Then you'd still say it didn't press the button as a result of its programming and thus lacks free will. You've got a very pessimistic attitude about a very common sense concept and I get the feeling you'll come up with nine more thought experiments to try and change my mind before giving up. But a sentient robot would have free will whether it presses the button or not. If it isn't sentient, I don't know what you mean by intelligent robot.
 
I now realize that the intelligent part was irrelevant to the thought experiment and have edited my post to reflect that. Please tell me under what circumstances you think the robot would be exercising free will. (Just saying 'if it's sentient' isn't an answer, as 'sentient' is very vague.)
 

vonFiedler

If the robot is not intelligent, if it is merely a tool, it has no free will. In essence, the robot's creator used his free will to push buttons in a more efficient manner (for whatever reason). A tool has no choices, predetermined or otherwise. Your robot is no different from the tap. Actually, a better analogy would be a rock that you throw. It left your field of influence when you threw it, but it still exerts your will.

EDIT:
(Just saying 'if it's sentient' isn't an answer, as 'sentient' is very vague.)

Only trying to ascertain what you meant by intelligent.
 
If the robot is not intelligent, if it is merely a tool
What qualifications would have to be met for you to consider the robot intelligent enough for this to not be the case? Please do not be vague and just say something like 'if it can think' without qualifying it.
 
Jack Jack said:
1- You forgot to define I. My definition of I is: the ensemble of my Qualia (http://en.wikipedia.org/wiki/Qualia).
That definition strikes me as both bizarre and utterly useless. Could you define "I" in such a way that it doesn't rest on a heap of philosophical nonsense? Judging by your further comments, your definition of "self" is of the kind that essentially amounts to nothing.

"Predefined" and "free will" are mutually exclusive.
That depends on your definitions. If someone disagrees that "predefined" and "free will" are mutually exclusive, it is fair to say that they don't mean "free will" the same way that you do. Granted, determinism and free will are indeed mutually exclusive in the understanding of most people. But if someone holds that they are not, that's merely unconventional; it doesn't really matter.

Jack Jack, if tomorrow you meet a beautiful girl and she's interested in you and you don't have the guts to ask her out, you don't have to feel bad about it. You don't have free will. If you've made mistakes in your life, don't learn from them. You didn't make those mistakes. Hormones did. Go rape someone. Tell the judge you didn't have free will. And when you die in bed lonely and without any accomplishments in life, it won't feel so bad, because you didn't have free will anyway. You guys have taken something as elementary as free will and turned it into an excuse. An excuse so outrageously pathetic it's like you want me to be a cynic again. But then I remember that you guys are cynics, so I swallow my frustration and remember exactly why I'm on the side of the people with internally inconsistent concepts. I cherish learning from my mistakes and I love who I am, and the things I am predestined to do, and being predictable is not the same thing as having no will. You might as well say that you don't have a personality. And if I wasn't such a semantics whore, I might agree.
Unlike you (it seems), we really, really don't care whether we have free will or not. It has no effect on our behavior. What we do care about is knowing what we're talking about, and assessing what is true. If we deem that determinism and free will are incompatible, and that determinism is the case, then we will conclude that free will does not exist. Whoop dee fucking doo.

Free will discussions are not about colloquial free will; they're about digging deeper. Your definition of free will essentially amounts to "we have free will! it's obvious!". And then you pull bizarre equivalences about free will, thinking, existence and whatnot, even though none of these concepts are really related.

I mean, fuck, if you can't do philosophy, stay out of it.

I am, or I am not. If both of these are possible outcomes and we will never know the answer, one is still more practical and productive than the other. So I am. I don't need quales to know what I am, I only need a brain and I happen to have one. It seems to me like everyone but me has an internally inconsistent concept of who they are in this thread. I am, I exist, I make choices, and I do. The outcome of my choices is predefined by my free will.
Neither is more "practical" or "productive" than the other. Whether I believe I have free will or not, that God exists or not, that I exist or not, won't change my behavior. I want to believe what's true, and avoid believing what's false, and then optimize my own well-being according to these beliefs. I'm not debating whether free will exists because it matters; I'm doing it because I'm curious about the topic. Because it sure as fuck does not matter to me whether I have it or not - all I care about, always, is whether I can be confident that my belief correlates with truth.

Then you smartasses say "Oh, well you need to define I". How is there any confusion about what I means? If a button is pushed by me, are you going to tell me that I didn't push that button? Did the Easter Bunny push it? So there shouldn't be any question as to what I means. You know who I'm talking about when I say I. Don't act like you don't. I live. I don't coast by and fool myself into thinking that I'm a slave to external stimuli, because my brain is wired not to be. And regardless of what you have to say about the matter, the contents of my brain are who I am. That's the most physical, scientific explanation of the matter. You say qualia, some say soul, I say neurons. I can prove the existence and function of neurons.
Oh for fuck's sake, can't you just read the goddamn thread? I've defined "I" as the collection of machines that my brain can identify with. Jack Jack has defined "I" as "the ensemble of my qualia" (whatever that means). Some religious people might define "I" as "my soul" (again - whatever that means). You have defined "I" as "the contents of my brain" (which is quite decent! but why did it take you so goddamn long to say it?) Regardless of whether they really make sense or not, that's four definitions that at least one person is going to support. FOUR.

And then you have the fucking nerve to call me a smartass for asking for definitions? If you don't define "I", how the fuck am I supposed to know what you're talking about? How am I supposed to know whether you believe in souls like so many people do, or that your definition is more reasonable, or even weirder? I'm not here to read your mind, you know. If I ask for definitions, that's because it is necessary. That's because I've seen the kind of bullshit people come up with sometimes.

When I say that most people have an inconsistent idea of free will, it is mainly because a widely held opinion is that free will and determinism are mutually exclusive, and free will and randomness (of choice) are also mutually exclusive. Since this covers all possible cases, there's nothing left that free will could be. Furthermore, many people have a strange conception of their "self" (soul, qualia, etc) that's divorced from their brain. All this put together is a huge mess, and I hope you can see why definitions are so important.
 
I just want everyone to take a moment to reflect on the fact that this thread has not 1 Brain response, but 9.
 
10

lati0s, you guys seem to be struggling to determine whether the robot even pressed the button at all. Now you say the robot is intelligent. Do you mean sentient? If the robot is sentient, it has free will. If a robot is sentient, it could feasibly stop pressing buttons even if you programmed it to. At the very least, it would stop and think "why do I always press buttons?". A sentient AI could decide whether or not it wants to follow its programming. It could choose not to press the button to spite its programming. Then you'd still say it didn't press the button as a result of its programming and thus lacks free will. You've got a very pessimistic attitude about a very common sense concept and I get the feeling you'll come up with nine more thought experiments to try and change my mind before giving up. But a sentient robot would have free will whether it presses the button or not. If it isn't sentient, I don't know what you mean by intelligent robot.
Robots don't "violate their programming", they are defined by their programming (much like you are defined by your brain). For this reason, one could say that whatever a robot does, no matter how simple, is an act of free will. You seem to be under the impression that the way through which the robot was made actually matters, but that is ridiculous (or so it seems to me). A robot which presses any red button it sees could either be programmed by a human to act like that, or it could spontaneously appear out of nowhere, out of pure chance. That would be the same robot! Suppose that it is the latter that happened - then it's nonsense to define sentience as the possibility that the robot would violate its programming. It was never programmed, it popped out of nowhere. Pressing red buttons is what it does.

Stop rambling about "common sense". You simply have no idea how complex it is to formally define concepts such as "free will" without them being too broad or inconsistent. For instance, your description of what a sentient robot would be reeks of inconsistency, because even though you don't seem to realize it, you are defining the robot differently from how you are defining yourself. If you are your brain, then the robot is its programming. In the same way that any different brain would not be you, any different robot would not be that robot. If the robot were to violate its programming, it would be acting out of character, acting upon factors that are external to it. For instance, it could violate its programming if a gamma ray were to fall at the right spot and change a register, but that's hardly free will; it's external aid.
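
Both points are easy to picture concretely (a sketch with made-up names, not anyone's actual claim about robotics): the robot just is its rule table, wherever the table came from, and the only way its behavior deviates is if something external rewrites that table:

# The robot is its rule, however the rule came to exist.
rule = {"red": "press", "blue": "ignore"}   # written by a human...
spontaneous_rule = dict(rule)               # ...or popped out of nowhere
assert rule == spontaneous_rule             # same rule -> same robot

def act(robot_rule, color):
    return robot_rule[color]

print(act(rule, "red"))   # press

# The gamma-ray case: an EXTERNAL event mutates the rule table.
rule["red"] = "ignore"    # cosmic-ray bit flip, not the robot's doing
print(act(rule, "red"))   # ignore -- a different robot now, not free will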
 

vonFiedler

You said its intelligence was irrelevant, not me. I'm still waiting for your definition. I can think of a few that you might be referring to, but it's not my thought experiment.

Or maybe you should just come up with a different thought experiment. At least with time travel I'm no physicist, and I can only rely on established scientific fact. Artificial intelligence is my area of expertise. Making a sentient AI is a work in progress for me. I shouldn't really struggle to define sentience as it applies to humans, but in a thread where people can't define kill I guess it is a bit more complex. We don't simply process information or rely on instinct; we can be introspective, self aware, we can have sensations and experiences. I know you know what sentience means, you weren't born yesterday. Now if I were to draw a fine line between sentience and non-sentience, the best test is probably whether or not the robot can ask "why am I here?". That's sappy, but even if that isn't the line between sentience and non-sentience it's damn close. Now, you can program an AI and try to make it sentient, but a sentient AI is not guaranteed to follow your programming. Then it's no different than a person. It has initial programming and it learns to change because of the world around it. We're back to square one and I still say that's free will.

EDIT:
Brain, for the love of god. I. I just can't get past this. No one should have to define I. Or kill. Maybe sentience. Do you own a dictionary or not? I'm not writing a legal document, I'm arguing on a competitive Pokemon forum. Definitions are important, but they are also assumed. Knowing what certain words mean and whether they have multiple meanings is important just for ordering food at a restaurant. When instead of trying to refute my argument all you do is ask for the definition of "I", that annoys me. I can do philosophy. Oh god can I do philosophy. But I stay grounded with things that I can prove now (the function of my brain) or that I can prove later (the workings of AI). What you're doing isn't philosophy. It's fantasy.
 
That definition strikes me as both bizarre and utterly useless. Could you define "I" in such a way that it doesn't rest on a heap of philosophical nonsense? Judging by your further comments, your definition of "self" is of the kind that essentially amounts to nothing.

But there is only one empty set, which means that everybody's the same and that every self is rigorously identical. Cool. Now what.
I think that Jack Jack's definition is rather similar to "I am the sum of my experiences and memories," which is a definition I have heard from time to time. Experiences and memories essentially amount to all of your knowledge, since knowledge builds from past experiences.

If you have no qualia, you have no memories or experiences. Therefore, all those with no memories are the same.

vonFiedler said:
Or maybe you should just come up with a different thought experiment. At least with time travel I'm no physicist, and I can only rely on established scientific fact. Artificial intelligence is my area of expertise. Making a sentient AI is a work in progress for me. I shouldn't really struggle to define sentience as it applies to humans, but in a thread where people can't define kill I guess it is a bit more complex. We don't simply process information or rely on instinct; we can be introspective, self aware, we can have sensations and experiences. I know you know what sentience means, you weren't born yesterday. Now if I were to draw a fine line between sentience and non-sentience, the best test is probably whether or not the robot can ask "why am I here?". That's sappy, but even if that isn't the line between sentience and non-sentience it's damn close. Now, you can program an AI and try to make it sentient, but a sentient AI is not guaranteed to follow your programming. Then it's no different than a person. It has initial programming and it learns to change because of the world around it. We're back to square one and I still say that's free will.
>>> print "why am I here?"
why am I here?

Oh my god my computer is sentient.

Sentience to me means an object is aware of its own existence. How that can be proven, I have no idea. I can't prove anyone else is sentient, but I assume I am myself. A sentient AI does what its programming does. There are self-mutating programs, but they still follow their own source code. A human is similar in that it changes over time, but a human always follows what the brain sets out for it. Sentience does not play a role whatsoever.
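
Since "self-mutating programs" came up, here is the pattern in miniature (a made-up Python sketch, not any particular real program): it rewrites its own behavior table at runtime, yet every rewrite is itself dictated by the fixed outer code, so it never escapes its source:

# A "self-mutating" program: it edits its own dispatch table,
# but the editing policy is itself part of the fixed source code.
table = {"greet": lambda: print("hello")}

def mutate():
    # Hard-coded mutation rule: after one greeting, greet louder.
    table["greet"] = lambda: print("HELLO!")

table["greet"]()   # hello
mutate()           # the program changes itself...
table["greet"]()   # HELLO! ...exactly as its source code prescribed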
 

vonFiedler

>>> print "why am I here?"
why am I here?

Oh my god my computer is sentient.

Sentience to me means an object is aware of its own existence. How that can be proven, I have no idea. I can't prove anyone else is sentient, but I assume I am myself. A sentient AI does what its programming does. There are self-mutating programs, but they still follow their own source code. A human is similar in that it changes over time, but a human always follows what the brain sets out for it. Sentience does not play a role whatsoever.
The computer would have to do that without your orders. And actually, you're right. You can't prove that anyone else is sentient. But you could make an AI that is as "sentient" as anyone else. But still, because it is sentient you can't guarantee that it will follow its source code. A sentient AI needs to be able to change its own source code. Thus, you could program it not to kill (oh no, an undefined word), but when it asks "why?" you'd better have a damn good answer. Now, like you said, it's just similar to a human and we're back at square one.

EDIT:
Maybe this has gone on long enough. I get that you guys want me to expand my thinking into new territories. But I've given the free will problem a good go over the years. I really have. Jokes aside, I am something of a fatalist. Free will comes up. But I stand by my point of view. Some of the things you guys say are possibilities. But they aren't scientific fact, and having a consensus on a philosophical subject is meaningless. "This is my philosophy and it's correct" is just bizarre. I only came into this thread to inject a little common sense into the problem (which at the time was time travel, and I think Brain was spot on in that debate). I'm not trying to belittle anyone when I say that; I think when it comes to science fiction and philosophy most people don't want to consider common sense. Like it's a hindrance, something that prevents you from seeking the truth. But it really doesn't have to be.

EDIT 2:
And because I'm sure it matters, I define common sense as a sense of basic assumptions. When we seek the truth, we learn to question our basic assumptions. But they don't all end up being false. I could get even more specific and say that common sense usually refers to the assumptions we make using inductive reasoning.
 
a "true" time travel to the past must be made in such a way that the object which travelled will eventually arrive back at it's own timeline even if it did not move once it arrived.
It doesn't have to be able to meet itself without moving. Just being able to meet itself is sufficient (not sure if even necessary) for time travel.

About free will - I'm sure you can create a definition that allows it to exist. But the "common sense" definition tends to include the idea that "I could have done differently", and it not being a case of "I had to do what I did". It's this idea that I think doesn't really work.

On "I" - I am the mass of flesh and blood and other bits capable of coherent action as a single organism.

On sentient AIs - taking the definition that sentience means being aware of one's own existence, we need to make such awareness POSSIBLE to start with. That's not too hard for a physical robot. For a "pure" program, we need to give it an environment that allows it to examine itself, and quite possibly others like it. One possible environment springs to mind in fact - /proc, on Linux and similar systems. (It contains information about all running programs and some info about the host system). Program an AI that can read /proc and has suitable intelligence and learning, let several instances loose and I think you have a situation where sentience could be observed.
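
For what the self-inspection primitive might look like, here is a minimal sketch (Linux only; this is just the /proc plumbing, not the intelligence or learning part):

import os

# Read this process's own identity from /proc/self (Linux).
def who_am_i():
    with open("/proc/self/status") as f:
        status = dict(line.split(":\t", 1) for line in f if ":\t" in line)
    return status["Name"].strip(), status["Pid"].strip()

# Numeric entries in /proc are the PIDs of running processes.
def everyone_else(my_pid):
    return [p for p in os.listdir("/proc") if p.isdigit() and p != my_pid]

name, pid = who_am_i()
print("I am", name, "pid", pid, "among", len(everyone_else(pid)), "others")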
 

vonFiedler

On sentient AIs - taking the definition that sentience means being aware of one's own existence, we need to make such awareness POSSIBLE to start with. That's not too hard for a physical robot. For a "pure" program, we need to give it an environment that allows it to examine itself, and quite possibly others like it. One possible environment springs to mind in fact - /proc, on Linux and similar systems. (It contains information about all running programs and some info about the host system). Program an AI that can read /proc and has suitable intelligence and learning, let several instances loose and I think you have a situation where sentience could be observed.
The suitable intelligence and learning are the hard part. You can make a program that knows exactly what you want it to know, and learns exactly how you want it to learn, but it must be able to learn beyond the limits of its programming.

Currently, I'm trying to program an AI that would appear to recognize storytelling. The goal is for it to recognize the mood of a story, dissect that story into parts, and be able to tell the story back to another person without copying it word for word. I'm not talking about having an AI paraphrase Shakespeare, but if it could gossip and seem believable, that'd be a huge leap in AI.
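
At its very crudest, that pipeline might be sketched like this (every function here is a placeholder I made up, not vonFiedler's actual code): classify the mood, segment the story, then restate what was found in different words:

# Crude sketch of a story-retelling pipeline (all parts placeholder).
def mood(story):
    # Stand-in mood detector: count a few hand-picked cue words.
    sad = sum(w in story.lower() for w in ("died", "alone", "lost"))
    return "sad" if sad else "neutral"

def dissect(story):
    # Stand-in segmentation: one part per sentence.
    return [s.strip() for s in story.split(".") if s.strip()]

def retell(story):
    parts = dissect(story)
    # A real system would paraphrase each part; this only reports
    # the structure it found, to keep the sketch short.
    return "a %s story in %d parts, ending with: %r" % (
        mood(story), len(parts), parts[-1])

print(retell("The dog was lost. It wandered for days. It found home."))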
 
Did you mean "A Sound of Thunder", not "The Sound of Thunder"? It makes a difference.

Also, I wouldn't use Sci-Fi as evidence... it represents an extreme among extremes.
 
Wait, I don't get it. In this scenario, are we assuming the role of some spectral observer in time, or do we interact with the denizens of the new time period even though we can't change it?

If the latter, then it would be far safer to travel to the past. Who knows, the future may be an irradiated wasteland with malevolent dictators fighting constant wars over the last remaining fossil fuels.

But by definition, we know what's in the past, so I would totally go to Rome and watch gladiators.
 
I'd prefer going to the past instead of the future, just for the sake of seeing a ton of famous historical events like the signing of the Declaration of Independence, seeing famous inventions in their earliest state, and seeing famous people. I've also always wondered what my ancestry was like.
 
