
(June 26th, 2018, 12:58)Bacchus Wrote: What are 'results'? Aren't they also particles? Doesn't the question then become 'Given identical particles, do you get identical particles?' You might introduce the time coordinate -- given identical particles at t would particles be identical, within quantum noise at t+1? That's not a falsifiable statement by itself. We don't have two worlds to run in parallel.

It is not falsifiable in absolute terms, yes. But I could see experimental evidence pointing towards or against such a result, as defined by confidence intervals: operating within the constraints of physical law, isolating systems to the greatest extent possible, maybe (just speculating out-there hypothetically) some method of setting a system's temperature to absolute zero, or compressing it into a singularity, to destroy any information about the particles' prior states.


(June 26th, 2018, 12:58)Bacchus Wrote: But even if it were, if you hold to materialism and see that t+1 the particles are different, how would you choose between: postulating a yet-unobserved particle, abandoning materialism and abandoning determinism?

Like the mosaic, yes, we can never know for certain whether the fundamental concept is incorrect or we're just missing a refinement to it, like a yet-unobserved particle. I would propose that the exact results of the experimental evidence would lean in one of those directions; maybe there's a gravitational perturbation indicating a particle of dark matter. More experiments would follow to refine the confidence intervals. At some point, you decide what confidence interval you are willing to accept as truth.

(June 26th, 2018, 13:03)T-hawk Wrote:
(June 25th, 2018, 17:08)TheHumanHydra Wrote: So you, reasonably, ask why I demand that you not possess a genejacked slave. And a better answer than I gave before is that I hold the genejack to be a member of the polity ('humans living in X')

Yang's argument is that the genejack is by definition not a member of that polity.  To make a genejack is to preclude from ever existing whatever essence you consider to qualify it as such.

I believe your only counterargument rests on the divine, that that essence is God-granted and cannot be precluded or created by human activity.

(June 25th, 2018, 17:08)TheHumanHydra Wrote: Your stand is unreasonable, in that you can't point to an ultimate reason to stick to your guns, while mine is constrained by an objective standard.

My ultimate reason to stick to my guns is "because I want to and nothing stops me."  That's subjective but your standard is equally so.  You define your wants by what you perceive as the genejack's detriment -- but the genejack itself perceives no detriment.  Aggregating a number of subjective justifications into what you call society does not make that aggregation objective.

(June 25th, 2018, 17:08)TheHumanHydra Wrote: I will agree with you, then, that materialistically, there is no free will, but -- sorry! -- must continue to disagree for now that there is no will. Like Bacchus, I think that the aggregate physical processes are accurately described as will (choosing -- even if predictably).

I've actually come around to that view as well, that the aggregate physical processes could be described as will, as I just said to Bacchus.  But I don't think there's any practical consequence to that definition.  It's an abstraction like "water wants to flow downhill" or "the chlorine atom wants to fill its electron shell", just describing the outcome of the forces acting on the physical process as a desire.

(June 25th, 2018, 17:08)TheHumanHydra Wrote: As I observe the world, however, your world view (that higher-order phenomena are not real as such) seems patently incoherent.

Bacchus made the right arguments here, the top-down ontology as we've been calling it.  It is equivalently true to describe reality as aggregations of particles as to describe it as subdivisions that resulted from the big-bang singularity (or even from a divine creator).  It is equivalently true to describe the fundament of reality as "what cannot be subdivided" as to describe it as "what cannot be aggregated", because that aggregation already includes the entire universe.  Bacchus brought me to realize that my position is not inherently superior to that one, but neither is the reverse.

1. Yang's argument is that the genejack is by definition not a member of that polity.  To make a genejack is to preclude from ever existing whatever essence you consider to qualify it as such.


I believe your only counterargument rests on the divine, that that essence is God-granted and cannot be precluded or created by human activity.

Yang may define his genejack that way, but I may not. Look, since you believe that people are mere aggregations of particles, not morally different from any other, you are obviously going to have problems accepting the moral dignification of anything or anyone. But you do have to make those dignifications, because your system of practical ethics (don't murder so I don't get shanked) has to functionally acknowledge the value others place on so-called humans, if not philosophically. If the polity chooses to dignify the genejacks, it seems as difficult for you to gainsay them as for any other so-called human, regardless of what you actually believe about those beings/aggregates. Basically, you already make concessions from your worldview to get along. I predict that society is going to ask you to make this concession too. Will you?

Yes, ultimately, my valuation rests on the divine! But practically, we have to work out a functioning society while disagreeing. It seems easier for you to abandon (hypothetical) efforts to make a genejack than for me and many others to abandon our obligation to stop you.

2. My ultimate reason to stick to my guns is "because I want to and nothing stops me."  That's subjective but your standard is equally so.  You define your wants by what you perceive as the genejack's detriment -- but the genejack itself perceives no detriment.  Aggregating a number of subjective justifications into what you call society does not make that aggregation objective.

Sorry, I believe you misunderstood me. You believe that personal imperatives are self-assigned, that is, subjective. I believe that my imperatives are externally assigned, that is, objective (yes, the belief itself is subjective -- but we are talking about how we functionally live these positions out). Assuming neither of us is persuaded to abandon our belief systems, who has the greater freedom of action? You do: you can malleably reassign your imperatives. I do not have that power. As such, I ask you (or some future person) to use your freedom of action to reassign your imperatives and preempt the conflict between us.

(Also, all this is postulating that a hypothetical society faced with this question will actually present resistance to the genejackers -- someone will try and stop you. What then?)

3. I've actually come around to that view as well, that the aggregate physical processes could be described as will, as I just said to Bacchus.  But I don't think there's any practical consequence to that definition.  It's an abstraction like "water wants to flow downhill" or "the chlorine atom wants to fill its electron shell", just describing the outcome of the forces acting on the physical process as a desire.

Cool, we've reached some agreement here, then. Just to kick the can around, I think the practical consequence (for people in society) is that it enables determinist materialists to think of their cognition as meaningful (to be clear, it's a perceptual difference, not a real one). So I could stand in a convenience store, broke and tempted by the Mars bars, and say, 'I'm just going to steal one. I can't help it anyway,' or I could say, 'I know my decision is already determined, but I understand that by my process of cognition I will arrive at the determination, so I will stop and think about whether it's really a good idea to steal the chocolate bar.' What do you think?

4. Bacchus made the right arguments here, the top-down ontology as we've been calling it. ...

Yeah, Bacchus is noticeably smarter (or better-read) than I am. I should probably leave most of this stuff to him. tongue That was a gracious and fair-minded response, though, thanks! And thanks for the discussion on these points.

(June 26th, 2018, 14:02)TheHumanHydra Wrote: If the polity chooses to dignify the genejacks
...
(Also, all this is postulating that a hypothetical society faced with this question will actually present resistance to the genejackers -- someone will try and stop you. What then?)

This is the remaining point where we're disagreeing. You're inherently assuming that societal opinion will be against genejacks. I make no such assumption.

What action I take has nothing to do with morality (which as you've surmised by now doesn't exist under materialism/determinism anyway.) It is to maximize my personal utility function given all inputs. If I could simply go pick up one at Genejacks R Us with no further consequence, I would. If I'd be imprisoned or executed, I wouldn't.

You do not get to define my personal utility function, you merely seek to define the inputs. I don't get to define yours either, even though it includes feelings for the genejack that to me are as silly as feelings for a Roomba. You can ask that I alter my function, sure, but I can equally ask you to alter yours. Each is equally malleable and neither is any more objective than the other. (If you say yours is not malleable because of a divine standard, then the point of malleability is that divine belief in the first place.)


(June 26th, 2018, 14:02)TheHumanHydra Wrote: But practically, we have to work out a functioning society while disagreeing.

Do we? Why do we incur that obligation any more than bees in their hive or the lion in the jungle?


(June 26th, 2018, 14:02)TheHumanHydra Wrote: So I could stand in a convenience store, broke and tempted by the Mars bars, and say, 'I'm just going to steal one. I can't help it anyway,' or I could say, 'I know my decision is already determined, but I understand that by my process of cognition I will arrive at the determination, so I will stop and think about whether it's really a good idea to steal the chocolate bar.' What do you think?

I'm not sure what you're asking -- those are the same to me.

(June 26th, 2018, 12:05)T-hawk Wrote: What is the difference between terminating a computer program that prints out "I enjoy life and don't want to be terminated" versus terminating a human that says "I enjoy life and don't want to be terminated"?

You would say that the computer program has no perception or enjoyment while the human does.  But Bacchus also says that free will is empirically unverifiable from outside the system; how do you reconcile those?

Uhm, I wouldn't? If the computer is sentient to the level of a human being, I would say it shouldn't be terminated. If we ever develop an algorithm that can achieve the same delusions of emotion and free will that we have, I see no reason why it shouldn't deserve the right to life and freedom humans enjoy. Consciousness is consciousness as far as I'm concerned; we are, after all, by your own insight, just a set of particles that create an illusion. I don't give a damn whether that illusion is created by organic matter or piles of copper wires.

We don't actually even know for sure that any free will except our own exists; I (or you) could be surrounded by philosophical zombies. It just seems more straightforward and materialist to assume that people who look like us function like us internally. It's also definitely worth erring on the side of caution on this question.

(June 26th, 2018, 14:47)Japper007 Wrote: If the computer is sentient to the level of that of a human being I would say it shouldn't be terminated.

How do you make that determination, is the question.

(June 26th, 2018, 01:42)Bacchus Wrote: There are various ways towards objective morality that do not require a transcendental lawgiver.

Utilitarianism is of course one such: it puts suffering and satisfaction, or some aggregate of the two, as an objective fact of the world -- and can consequently at least try to judge actions in terms of their net effect on the amount of suffering and satisfaction. This kind of approach can look particularly convincing on a materialist-determinist-soft-reductionist reading, where satisfaction and suffering are objective neurological phenomena that perhaps can be physically measured.

There are lots of problems with utilitarianism, and thankfully it's not the only way forward. There is a kind of logical rationalism which starts with the will and works through just what it means to will, and shows that acting immorally amounts to a categorical failure. When you take your carnal desires as overriding reasons for acting, for example, you are just not willing. This can also be couched as a 'lawgiver' argument -- you act freely only insofar as you act according to a law you give yourself; you have to be the lawgiver and, if you believe in free will, you do have the capacity to be such. That's Kant's argument.

There is another way to objective morality which is my favourite.

Bacchus, thanks for your response. I strongly disagree, and I think it may again be a matter of definitions.

This is almost parallel to what T-hawk was saying about 'free' will. We perceive it as free, but perhaps it is not.

I think we tend to perceive morality as this immutable force/rule inherent in the universe. Under a non-theistic framework, it is not.

That is, we usually argue that an action is immoral (so don't do it). We don't usually ask a person, 'will you please consider perceiving this action as immoral for yourself (and then not do it)?'

Utilitarianism is an arbitrary system of morality (so not a reflection of any inherent universal moral system). It exists because some humans choose to value suffering and satisfaction in certain ways. It does not provide a reason to value suffering and satisfaction in those ways. So you can judge actions and states objectively relative to utilitarianism. You can possibly measure suffering or satisfaction somewhat objectively. But the valuation of suffering as 'bad' and satisfaction as 'good' is entirely subjective. Why ought we to really care? (You can see that by asking, 'why ought?' I am necessarily appealing to some moral imperative higher than utilitarianism.) You can say, 'it will produce a better-functioning society,' but that is an appeal to efficiency, or, 'because, tit-for-tat, it will benefit you,' but that is an appeal to selfishness. If I say, 'because it's right,' I'm making an appeal beyond utilitarianism to justify it. I must come up with some other font of objective morality, or I must acknowledge that morality is not what I think of it as and is rather an expression of what I prefer in my own and others' behaviour.

Your Kantian argument just seems to lead plainly to an acceptance of morality as subjective. If you stop at the 'just not willing' part, then we are instead valuing actions against agency, not against rightness or wrongness (morality). Why is it wrong not to employ reason? Again, I am appealing to a higher standard.

I think the path to objective morality lies along the Socratic road of asking 'why ought I to?' (Here I'm stealing from my atheist friend, and he from elsewhere.) Until the buck stops somewhere other than your personal preferences, I don't think you've arrived at a non-subjective source for moral behaviour.

(June 26th, 2018, 14:54)TheHumanHydra Wrote: Until the buck stops somewhere other than your personal preferences, I don't think you've arrived at a non-subjective source for moral behaviour.

And I contend this is impossible, since the buck-stopping itself is a personal preference.

Why do you think we need a non-subjective standard for morality, HumanHydra? If I get your argument, you seem to insist there just IS an objective measurement (X Deity says Y), but why?

(June 26th, 2018, 14:46)T-hawk Wrote:
(June 26th, 2018, 14:02)TheHumanHydra Wrote: If the polity chooses to dignify the genejacks
...
(Also, all this is postulating that a hypothetical society faced with this question will actually present resistance to the genejackers -- someone will try and stop you. What then?)

This is the remaining point where we're disagreeing.  You're inherently assuming that societal opinion will be against genejacks.  I make no such assumption.

What action I take has nothing to do with morality (which as you've surmised by now doesn't exist under materialism/determinism anyway.)  It is to maximize my personal utility function given all inputs.  If I could simply go pick up one at Genejacks R Us with no further consequence, I would.  If I'd be imprisoned or executed, I wouldn't.

You do not get to define my personal utility function, you merely seek to define the inputs.  I don't get to define yours either, even though it includes feelings for the genejack that to me are as silly as feelings for a Roomba.  You can ask that I alter my function, sure, but I can equally ask you to alter yours.  Each is equally malleable and neither is any more objective than the other.  (If you say yours is not malleable because of a divine standard, then the point of malleability is that divine belief in the first place.)


(June 26th, 2018, 14:02)TheHumanHydra Wrote: But practically, we have to work out a functioning society while disagreeing.

Do we?  Why do we incur that obligation any more than bees in their hive or the lion in the jungle?


(June 26th, 2018, 14:02)TheHumanHydra Wrote: So I could stand in a convenience store, broke and tempted by the Mars bars, and say, 'I'm just going to steal one. I can't help it anyway,' or I could say, 'I know my decision is already determined, but I understand that by my process of cognition I will arrive at the determination, so I will stop and think about whether it's really a good idea to steal the chocolate bar.' What do you think?

I'm not sure what you're asking -- those are the same to me.

1. I agree that we're agreed on almost all points now (even the last one, about the belief in the divine being malleable; yes, I said as much!). As for the prediction of social opposition, well, it simply seems to me a reasonable projection from past and present circumstances (e.g. I can at least predict with a fair degree of accuracy that I would be opposed), but yes, it's opinion.

Well, it appears our attempts to refine each other's inputs (I like the phrase) have failed. Alas for each of us! Hopefully the discussion was at least entertaining or stimulating for any readers.

2. Do we?  Why do we incur that obligation any more than bees in their hive or the lion in the jungle?

Well, no, materialist-deterministically we don't (as in my post about morality, just posted). It's a shorthand to describe the terrain on which I wanted to have the discussion.

3. Then pay it no mind. smile