Wednesday, July 6, 2016

Software as Mind; Today's Response.

WF's current thoughts are HERE.

I don't think that I have made my position clear, so I will attempt to elaborate in order to clarify it. You (WF) say this:
"Let's start with a few basic assumptions, even if only for the sake of argument. If we suppose that our minds have:
1. The desire to understand the world
2. The concept of good / better
3. The desire to exercise our own agency
Then (to my thinking) that covers the areas of the human condition that we've discussed."

But a limited list of what minds have does not produce any knowledge of the total physical existence of the whole, complete human mind. If the human mind is to be shown to be completely physical, then the entire mind, in all its aspects, must be demonstrated to be physical. In the software analogy, that means that the most complex and most disparate aspects must be shown to be replicable in deterministic software. (Perhaps that is where you are headed here, but I'm not sure.)

Here I will argue that if the human mind is to be shown reducible to software, thereby demonstrating that the human mind is likely to be merely physical in nature, then the software must demonstrate the ability to produce all (the sum total) of the processes available to the human mind. This seems hardly arguable, yet the current demonstration is limited to the presumptively simple processes, and does not argue for the ability to replicate the most complex aspects, or the sum total, of the human mind's intellectual and emotional capacities. In fact, by agreeing that humans are not automatons, you seem to agree to the opposite: that human agency shows non-determinism in the human mind.

In order to make the case that the mind is purely a physical entity, more than mere analogies would be required (especially analogies to purely physical items which are dependent, not independent agents). In fact, without any actual data now or possibly ever, no empirical statements can be made on the subject. So the case being made is purely inferential in nature, and other than being based in organic analogy, it is based on a presumptive ability to replicate the complete function of human minds in deterministic computer software, comprehensive programming which runs freely, and which creates agency and all the functions of human minds.

The question becomes what then, exactly, are the range, limits and full categorical capacities of the human mind which are to be produced in the combination of software and hardware? This is absolutely necessary if human mind is to be completely reproduced in computer technology. And if the whole mind cannot be fully reproduced in software, then the claim of total physical existence for the complete mind is not supported by the software analogy.

So far in this discussion, the first step has been to restrict the range of the human mind to “rational thought”, a small subset of the human mind, and one hardly used by some minds. By starting with the presumptively deterministic nature of rational thought, it is said to be replicable in software. I’ll argue against this detail in a bit. First, a comparison of software and hardware state-machine decisions to rational deduction.

The nature of analytical rational thought is different from the design of IF/THEN (note 1) decision points in software; here’s why: basic rational deductive analysis starts with a proposed “truth statement” which is to be analyzed. It will be declared undeniably valid and true IFF (if and only if) there exists a series of syllogisms which is grounded in First Principles, has premises previously shown to be deductively true and valid, and is necessary and sufficient to cause the “truth statement” to be known to be immutably true. These premises are not originally known to exist, and therefore must be created, crafted carefully using the principles of Aristotelian grounded deduction to guarantee the truth of each premise in the chain.

While this looks similar to the design of IF/THEN decisions, it is not the same. The software design starts with the conclusion being defined as the desired outcome of known inputs; the task of the designer is to accumulate or fabricate the correct inputs which cause the output to occur (be true) when the time is appropriate, or conversely, to produce the desired output when the previously existing inputs are properly asserted. This is designation (design), not analysis. There is no need to analyze the premises/inputs, because they are obviously required by the design in order to produce the necessary output, so they must already exist and be valid by definition by the time the output is required. The process is deterministic. Both process and outcome are known in advance, by design.
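
As a rough illustration (the scenario and names here are hypothetical, invented for this sketch), a designed IF/THEN decision might look like the following Python. The designer has fixed both the required inputs and the desired output before the code ever runs:

```python
# A minimal sketch of a designed IF/THEN decision point. The designer
# fixed both the required inputs and the desired output in advance; the
# code merely checks that the pre-arranged inputs are asserted when the
# output is needed. Nothing is analyzed or discovered at run time.

def door_unlock_decision(badge_valid: bool, pin_correct: bool, door_armed: bool) -> bool:
    """Unlock (True) only when the designed-in inputs are all asserted."""
    if badge_valid and pin_correct and not door_armed:
        return True   # the outcome the designer wanted, by construction
    return False      # every other input combination was designed to fail

# Both process and outcome are known in advance, by design:
assert door_unlock_decision(True, True, False) is True
assert door_unlock_decision(True, False, False) is False
```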

It is vastly more work, and vastly more complex, to have to discover a chain of grounded, true and valid premises which will support the truth of a proposed “truth statement” which actually might not be true at all, and thus might have no supporting premises in existence. So the IF/THEN as used in code is not the same as the IF/THEN of Aristotelian deductive analysis. Further, I don’t believe it can be shown with any conviction that the sequitur nature of an analytical argument (that the conclusion logically follows, with demonstrable necessity and sufficiency) can be deduced by software, especially when it is not known in advance. Given that, it would not be even remotely possible to design a deterministic analysis of a stand-alone proposed “truth statement”. This presents as a lock-out, a falsification of the concept of software as mind, even at the rational level.
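
To make the contrast concrete, here is a toy backward-chaining search over a hand-coded rule base (the facts, rules, and goal names are invented for illustration). It can trace a chain of premises only when the programmer has already placed those premises in the base; where no premise exists, the search is locked out:

```python
# A toy backward-chainer over a hand-coded rule base. It "finds" a chain
# of premises supporting a goal, but only among premises the programmer
# already supplied; it cannot originate new grounded premises.

FACTS = {"socrates_is_a_man"}                 # givens supplied by the designer
RULES = {"socrates_is_mortal": ["socrates_is_a_man", "all_men_are_mortal"],
         "all_men_are_mortal": []}            # treated here as an accepted premise

def provable(goal: str, seen=frozenset()) -> bool:
    """Return True if the goal follows from the supplied facts and rules."""
    if goal in FACTS:
        return True
    if goal in seen:                          # guard against circular premises
        return False
    premises = RULES.get(goal)
    if premises is None:                      # no premise exists in the base,
        return False                          # so the search is locked out
    return all(provable(p, seen | {goal}) for p in premises)

print(provable("socrates_is_mortal"))   # True  - the chain was designed in
print(provable("socrates_is_wise"))     # False - no premises were supplied
```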

Another obstacle is the determination of self-evident, axiomatic First Principles for grounding (note 2) the argument; can software be designed which would discover or at least agree that X, but not Y, is intuitively and obviously a universal truth due solely to its “self-evidence”, both ontologically and epistemologically? I doubt this to the extent that it strongly appears to be a lock-out, a falsifier for the concept of software as mind.

And how would experimental programs be designed by software, based only on inductive observations but requiring the elimination of non-essential variables, noise and external influences, bad instrumentation, non-linear processes, etc., in pursuit of isolating the cause for an effect? Could software be motivated to design new electronic/optical/quantum machinery and test equipment specific to an experimental application? Would a failed experiment suggest to software either a better experiment, or a better hypothesis? This is another point of doubtful ability of software, running freely and replicating human minds, to the point of the appearance of lock-out, another falsifier for the concept of software as mind.

Again, the task of demonstrating conclusively that the entirety of the human mind reduces to material existence is far and away more complex than attempting to fulfill with software the very limited (yet very daunting) task of deductive analysis under the discipline of Aristotelian principles… as simple as those principles are for a mind to understand and use. Analysis by this method is creative, because it requires fabricating new premises that not only are individually true and valid, but that also, as a chain of premises, are necessary and sufficient to support the conclusion. I doubt that computer software can produce any creativity beyond the algorithmic level; here I’m thinking of Chaos Theory and repetitive calculation with previous results as input data (frequently vaunted as “creative”, but actually not).
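
For reference, the sort of "repetitive calculation with previous results as input data" mentioned here can be as simple as iterating the logistic map, a stock Chaos Theory example. The output looks erratic, yet every value is fully determined by the previous one; a minimal sketch:

```python
# The logistic map: a textbook Chaos Theory iteration in which each new
# value is computed from the previous result. The trajectory looks wild,
# but it is entirely deterministic - rerun it and the numbers are identical.

def logistic_trajectory(x0: float, r: float = 3.9, steps: int = 10) -> list:
    """Iterate x -> r * x * (1 - x), feeding each result back in as input."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

print(logistic_trajectory(0.2))                              # apparently erratic
assert logistic_trajectory(0.2) == logistic_trajectory(0.2)  # but repeatable, always
```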

Even more creative is the stacking of premises which are true and valid in pursuit of a previously unknown conclusion. Even humans have trouble with this, as the Darwinian “sciences” demonstrate. In Darwinian inferential logic, premises are created and stacked without concern for their absolute truth, with concern only for appearance and inference. However, the process produces the new conclusions of evolution and common descent as its product. While the evidence does not immutably support the conclusion, the process is a valid abductive approach using Peirce’s logic of inference. So subjective inference is used rather than empirical, contingent facts.

Inference to conclusion never produces immutable, incorrigible truth, and its use for truth claims is bogus. Another example of this is the use of Bayes’ probability inference, which is wide open to misuse due to the injection of personal bias. Bayesian calculations work well when used with previously observed empirical probabilities, but not with non-empirical, subjectively inferred probabilities. Selecting appropriate premises for Bayesian use is a very difficult procedure, and it is doubtful that such selection could be reduced to software.
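
For reference, Bayes' theorem itself is mechanical arithmetic: P(H|E) = P(E|H) × P(H) / P(E). The sketch below uses invented numbers for a generic diagnostic-test scenario; the calculation is trivial once the probabilities are supplied, and the opening for personal bias lies entirely in where the prior P(H) comes from:

```python
# Bayes' theorem with supplied probabilities. The arithmetic is trivial;
# the contested step is where the prior comes from. All numbers here are
# invented for illustration (a generic diagnostic-test scenario).

def posterior(prior: float, sensitivity: float, false_positive: float) -> float:
    """P(hypothesis | positive test) via Bayes' theorem."""
    p_evidence = sensitivity * prior + false_positive * (1 - prior)
    return sensitivity * prior / p_evidence

# With an empirically measured prior (an observed base rate of 1%):
print(posterior(prior=0.01, sensitivity=0.95, false_positive=0.05))  # ~0.16

# With a subjectively inflated prior, the same machinery "confirms" the bias:
print(posterior(prior=0.50, sensitivity=0.95, false_positive=0.05))  # ~0.95
```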

Returning to the point made at the beginning: what are the full capacities of the human mind? What range and limits exist to human intellect, emotion, creativity, analytic capacity, etc.? Can specifications even be developed which fully define the complete human mind? What about genius level thought? Software must work at genius level (or beyond) if it is to cover all human minds.

Further, what about qualia? What about full comprehension of the implications of an accumulation of disparate facts? What about skepticism, solipsism, Pyrrhonism? What about the obsessive search for purpose? Self-focused, even narcissistic purpose? The list is enormous.

Finally, to address the issue of desire as a force, I think that terminology is confusing. If desire is, in fact, a unique physical, deterministic force, then it exists outside and beyond the four known physical forces (note 3), which would make it non-physical, non-material and a disproof of the concept of physical-only mind. It seems to me to be more appropriately termed a motivation, rather than a physical force. While a desire might be causal for certain effects, the "desire to conquer Europe" is difficult to pin to any physical entity in the brain, and it seems even less likely to arise automatically from general purpose software which is replicating human minds. In fact, desire - being irrational and even antirational - shows the necessity of the software's ability to generate irrational adherence to fallacious pursuits, ideologies, and subjective opinion over fact, because that is part of the human mind, too.

To conclude, the full range of the complete human mind must be shown to be replicable in software if the computer analogy to mind is to be supported. I doubt that this is possible, and the falsifiers given above seem much more likely than the ability of software to accomplish the replication of the whole, complete human mind in all its array of complexity. This skeptical conclusion, based in probable falsifiers, serves only as an argument against the software theory of mind, and not against any other theory of the purely physical existence of mind; other theories would require other analyses.

NOTES:
1. There are other uses of IF/THEN statements, too. (a) "Fit of pique": IF she goes, THEN I'm staying here. (b) "Pragmatism": IF my keys aren't found, THEN we can't get in the house. (c) "Forecast": IF it rains hard tonight, THEN the ground will be too muddy to play pick-up baseball tomorrow (therefore, we'll go to the matinee instead). (A code sketch of how such statements collapse in software follows these notes.)

2. Grounding in self-evident axioms (e.g., First Principles) is necessary to avoid either the infinite regression of premises, or the circular, self-referencing premises (Appeal to Self-Authority Fallacy).

3. The four forces known to physics are gravity, electromagnetism, the weak nuclear force, and the strong nuclear force.
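
As promised in note 1, here is a sketch (the function and values are invented for illustration) of how the varied natural-language senses of IF/THEN all collapse, in software, into the same mechanical branch on a stored boolean; the "forecast" example from note 1(c) is used:

```python
# In code, every natural-language sense of IF/THEN reduces to the same
# mechanical branch on a boolean. The "forecast" sense, sketched:

def plan_for_tomorrow(rained_hard_tonight: bool) -> str:
    if rained_hard_tonight:          # the condition is just a stored boolean;
        return "go to the matinee"   # the code knows nothing of mud or baseball
    return "play pick-up baseball"

print(plan_for_tomorrow(True))    # "go to the matinee"
print(plan_for_tomorrow(False))   # "play pick-up baseball"
```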

3 comments:

CJ said...

" it is based on a presumptive ability to replicate the complete function of human minds in deterministic computer software, comprehensive programming which runs freely, and which creates agency and all the functions of human minds."

As fascinating as this discussion is, I unfortunately didn't have time to read it thoroughly. However, this put me in mind of the recent Tesla crash, and discussions of liability surrounding self-driving cars. Who's to blame? The driver? Tesla? No one suggests the car.

If we are arguing for eventually replicating the full human mind in AI systems, including agency, then we should be putting the Tesla car on trial for involuntary manslaughter. Anyone who finds the idea odd, I suggest, is implicitly recognizing the problem: for all the progress we've made in developing AI systems that mimic human behavior, we simply cannot recognize machines as agents, moral or otherwise.

Stan said...

Excellent point.
Or perhaps there will be an admission that these simulation machines are not individuals which have names and personalities and choice when in unforeseen situations.

And what about choice? Does anyone have choice regarding inputs which are not actually received? The car was perceptually ignorant of the presence of the truck. The lack of perception was the fault of... the car? the conditions? the designer? the truck? the mud on the lens? Or...

Having rational choice in unforeseen situations is not likely to arise from deterministic processing.

The car likely performed as designed.

Weekend Fisher said...

Hi all

I'd agree that machines just aren't agents at this point. TBD whether they will become so (and if we can avoid them becoming HAL in the process ...)

At any rate, I've got my next response up. We're making progress in getting at our underlying premises ...

http://weekendfisher.blogspot.com/2016/07/physics-biology-and-mind.html

Take care & God bless
WF