The Ability of Computers to Compute Mathematical Propositions
I think that computers are able to figure out, with an algorithm, the propositions accepted as true by the mathematics community. Moreover, I think there are holes in Gödel's incompleteness theorem and Turing's halting problem resulting from the formalism employed in these proofs and from concepts that are assumed in them. One gets the impression that Gödel's mathematical platonism indicated that mathematics was something that occurred in the minds of people in a way that axiomatic formal systems of math did not capture. Many believe that Gödel's theorem also shows that certain mathematical propositions cannot be solved by an algorithm used by a computer. One such person is mathematical physicist Roger Penrose. I don't see, from a logical point of view, how his arguments lead to the conclusion that machines can't compute mathematical propositions, a position that he attempts to argue for in books like The Emperor's New Mind and Shadows of the Mind.
In this brief write-up I wish to point out what I think are the illogical gaps in Gödel's and Turing's arguments, which rest on assumed concepts that are vaguely constructed or arguably on logical errors. The essay will then go on to suggest that math is best understood as something that occurs as a psychological phenomenon in the minds of mathematicians. I hope to demonstrate this by showing that the terms used in Gödel's and Turing's papers are more complicated than how they are employed in those papers, and that the terms therefore don't achieve a quality of sense that allows them to function as solid foundations for commenting on real-world computability.
With no further qualification, I will lay out below what I suggest are the holes in the concepts used in these papers, primarily in Gödel's and somewhat in Turing's. I have listed the points in order of importance.
1) The paradigm of axioms, rules of inference, and theorems: modus ponens is often portrayed in the form of "If P then Q. P exists, therefore Q exists". "If P then Q" can be understood as the theoretical rule of inference, and "P exists, therefore Q exists" can be said to be an application of that rule to an existentially quantified universe. It is not clear why some thing P should bring about some other thing Q, or why simply saying that P and Q exist together doesn't capture the situation just as accurately. Perhaps we could say that variables needing to conform to a rule of inference is itself a rule, a play of this language game unto whatever infinity we may live throughout.
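To make the paradigm being questioned concrete, here is a minimal sketch of modus ponens treated as a purely mechanical rule: a toy "knowledge base" of facts and "if P then Q" pairs, from which new facts are derived by blind application. The names and rules are illustrative, not drawn from any formal system in the papers discussed.

```python
# A toy forward-chaining application of modus ponens.
# facts: things taken to "exist"; rules: pairs meaning "if P then Q".
facts = {"P"}
rules = [("P", "Q")]

def apply_modus_ponens(facts, rules):
    """Repeatedly add Q whenever P is known and 'if P then Q' is a rule."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for p, q in rules:
            if p in derived and q not in derived:
                derived.add(q)
                changed = True
    return derived

print(apply_modus_ponens(facts, rules))  # Q is derived alongside P
```

Notice that the machine never explains why P "brings about" Q; it simply records that both now stand in the derived set, which is close to the essay's point that "P and Q exist together" may describe the situation just as well.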
2) Undecidability and a Turing machine: the notion of "undecidability" may be illogical given that a Turing machine, or a machine sufficiently similar to it, when actually constructed and not just used as a thought experiment, is something that is always part of a decided observation (barring irregularities like the possibly cogent Copenhagen interpretation). Berkeley philosophy professor John Searle gives an exposition of Penrose's formulation of Gödel's argument in terms of Turing machine movement. A Turing machine is often modeled off the application of binary language, where a mark made by the moving program head is either formed or not formed (erased) and is thus analogous to a program that uses the 0s or 1s of binary code that together are pearl-strung to make a string.
Undecidability, if I understand correctly, can result when a Turing machine either goes on "infinitely" (a qualifier that attempts to describe something that would never be observed in Turing machine movement by a presumably "finite" human, and thus may be illogical as a posit when talking about the set of all finitely constructable computers) or when the machine theoretically loops "infinitely" (which we will presumably never witness) over the small territory of a finite space of Turing machine tape (one perhaps wryly thinks of Hamlet's soliloquy at the war front or no man's land during WWI), shifting back and forth and writing and/or erasing marks in the same pattern. The set of all possible Turing movements, theoretically infinite as conceptualized in the presumably finite human mind, seems to either loop infinitely without halting or proceed in linear and ordinal furtherance without halting. Again, to attempt a reality test, this notion also involves endless computation, something observationally insuperable. If someone actually
builds a Turing machine (which has been done many times; one is made of Legos), then it will either halt on a series of marked or unmarked squares or, from observational limitation, it will "halt" on the last square or squares that are observed by whoever is observing it, such that it will always yield a result of a finite square or linear group of squares (again analogous to a single binary symbol or string of symbols). If this is the case then there would be no undecidability, as long as the visual data that constitutes a Turing machine and its occurrence, perhaps over time or perhaps not, is a solid observation that is not to be doubted, in either a Turing machine or a "metalogical" ultimate Turing machine (or perhaps my reasoning is off here). This paradigm would eliminate the notion of undecidability of mathematical statements that can be corresponded with one or many Turing squares and applied to or corresponded with Gödel's argument (as Penrose attempts to do). We wouldn't accept the notion that there is some higher
theoretical number or integer (assuming a linear model of ordinal arithmetic progression) that hasn't been reached yet, because from an empirical perspective we can only talk about the arithmetic numbers that have so far been thought of or computed by any possible collection of minds and computers, whose efforts are either summated together or not. This is to treat mathematics inductively, in the manner of mathematical induction. This list would probably be finite, and seemingly would form an empirically complete and consistent taxonomist's catalogue. Any time a new mathematical notion was observed, it could be appended to the list just like a taxonomy of animals or flora. Mathematical notions like irrational numbers or imaginary numbers could be called into question (do they correspond with legitimate or well-formed psychological states, or are they mere language games, or something else, perhaps a socially acceptable formalism?).
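The observational point above can be made concrete with a minimal Turing machine simulator: any actually built machine is only ever watched for finitely many steps, so in practice we either see it halt or we cut observation off at a bound. The particular states, symbols, and step bound below are illustrative, not taken from Turing's paper.

```python
# A minimal Turing machine on a sparse tape of 0s and 1s.
# transitions: (state, read_symbol) -> (write_symbol, move, next_state).
# max_steps plays the role of the finite observer: we stop looking after it.
def run_turing_machine(transitions, start_state, max_steps=1000):
    tape = {}  # unwritten squares default to blank (0)
    head, state, steps = 0, start_state, 0
    while state != "HALT" and steps < max_steps:
        symbol = tape.get(head, 0)
        write, move, state = transitions[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
        steps += 1
    return state == "HALT", tape, steps

# An example machine that writes three 1s, moving right, then halts.
transitions = {
    ("A", 0): (1, "R", "B"),
    ("B", 0): (1, "R", "C"),
    ("C", 0): (1, "R", "HALT"),
}
halted, tape, steps = run_turing_machine(transitions, "A")
print(halted, sorted(tape.items()), steps)
```

Whatever machine is plugged in, the simulator always returns a decided, finite observation: either a genuine halt or the tape as it stood when the step budget ran out, which is the situation the essay argues every physically built Turing machine is in.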
3) Correspondence lemma: the correspondence between prime factorization statements and statements of propositional logic is something that can be called into question, because "correspondence" (whatever that may mean) between a thing and a symbol, or between a symbol and a symbol, or between any combination of things and/or symbols, is only semantically establishable by fiat. If something holds in a language we can call language A that helps us conclude, by the corresponding yoke of isomorphism, that another language B has some property, then we can make that same conclusion by reasoning in language B alone, without the indirect scaffolding of language A. This sort of reasoning, and therefore the use of this reasoning to place statements of one language into another in some way to create "self-reference", doesn't seem like a hard-and-fast way to make an argument about real-world computation of mathematics.
4) Self-reference: the notion of reference and/or meaning and/or correspondence (these three, or any pair of them, can be synonyms or "semi-synonyms") is itself not well-defined. The notions of "reference" and (especially) "self-reference", I think, some people consciously and/or unconsciously picture as a statement-like entity with a rounded, vector-like loop above it that starts at the statement and, after its loop, ends up pointing back towards it. This may be to apply a false spatial analogy. These considerations suggest that "reference" and "self-reference" are not semantically well-formed and thus not helpfully able to justify the use of the Gödel statement in his paper.
5) Provability: if axioms, rules of inference, and theorems (which model in some way an algorithm as it exists in reality) are to be doubted, as is argued in point 1, then the notion of "provability" itself, as it is used in mathematics (and wherever a notion like this is used in proof theory) or otherwise, is to be called into question, it seems. "Provability" is a term that doesn't seem to correspond to anything immediately observable (this can be discussed more in another paper), and so it seems to be a potentially incorrectly invoked and applied term. Mathematical concepts and concepts of proof might be things that are activated or deactivated, just as the Kantian concepts of space and time and causality might be. Evidence to possibly support this hypothesis with respect to time: frames of observation dilate in time, measurably by a clock on a platform accelerated to speeds approaching the speed of light, in accordance with Einstein's relativity; meditators sometimes report a subjective sense of "timelessness"; and people's experience of time can be warped by being in a life-threatening circumstance like a car crash (where a few seconds can seem protracted like hours, the mind arguably computing an inconceivably rapid gloss of memories to search for solutions, or arguably a spandrel, or arguably something else). All of this perhaps casts doubt on the notion of "provability" in mathematics.
6) Cantor's, Gödel's, and Turing's use of the diagonalization argument:
Turing and Gödel borrowed a rumored-to-be-powerful argument to demonstrate that there are states or sequences or numbers that evade the severest tabulation. The method generally is laid out as follows: take sequences of symbols--numbers, function symbols, 0s and 1s, take your pick--and list the horizontal rows vertically, column-like, so that each symbol, as it were, stacks upon another symbol that is below it in another sequence. To illustrate this further I will use the 0s and 1s that are used for Turing's able-to-be-computed numbers. Once we have the stacked rows of horizontal symbols--in this case rows of 1s and 0s--we change them, systematically, starting from the first symbol or term and then going on to the next. How is this modification to be done? Take the first term of the first row. If it is a 0 make it 1; if it is a 1 make it 0. Do this down the diagonal of the finite table of sequence strings: the n-th term of the n-th row. Now, once we have done this, and collected all our modified symbols, we can put them together, in the order that we created them, and we will have a new sequence. The way that we created the sequence of symbols insulates it from being identical to any of the other sequences on the table, since it differs from each row in at least one position. Therefore, it is a new string. So, if a computer or mathematician claims to have a complete list of strings, there will always be, by the hoodwinking, Pan-like trickery of the diagonalization argument, some new string that evades the list, that is yet to be computed. Thus, there is supposedly a countless and/or infinite amount of strings of symbols. Let's just say this: we can't use this argument to claim that there are an infinite amount of strings if the term "infinity" doesn't correspond to something we observe (this is the correspondence theory of truth, which may have its faults, but let's just say that it is probably the best game in town, and if we were to divorce ourselves from it then perhaps all we are saying is nonsense, pragmatic or otherwise).
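The diagonal construction just described can be written out directly over a finite table of equal-length bit strings (the table contents below are illustrative): flip the i-th bit of the i-th row, and the assembled string is guaranteed to differ from every row.

```python
# Diagonalization over a finite table of equal-length bit strings:
# the new string's i-th symbol is the flipped i-th symbol of row i,
# so it disagrees with every row in at least one position.
def diagonalize(table):
    return "".join("1" if row[i] == "0" else "0" for i, row in enumerate(table))

table = ["0101", "1111", "0000", "1010"]
new_string = diagonalize(table)
print(new_string)  # "1011" -- not equal to any row of the table
assert new_string not in table
```

Note that on a finite table the output is just one more finite string, which can simply be appended to the table; the leap to "infinitely many strings" is exactly the step the essay is questioning.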
Some say there are an infinite amount of things in the universe; some say that "infinity" corresponds to a platonic mathematical realm of numbers. Therefore, either "infinity" corresponds to an infinite reality we observe or an infinite platonic reality we observe, or both. However, we are not infinite like God--I don't know of anyone who would claim that they or others perceive an abstract and/or physical set of states that are infinite in number--therefore, something seems awry.
7) The use of the negation tilde in Gödel: the negation tilde "~", which corresponds to the Anglo-Saxon "not", may be a concept placed (spatially?) upon reality, like the notions of space, time, and causality, a la Kant. Reality does not seem to admit of anything that is "not a thing" (unless the notion of "nothing" as invoked by the philosophical physicist (or physics-informed philosopher?) and notably popular atheist Victor Stenger is cogent (he seems, confusingly, to predicate of nothingness in the manner of existential quantification by contending that nothingness is inherently "unstable"), or unless the notion of "emptiness" in, say, philosophical Tibetan Buddhism is cogent). This observation may cast a certain light (shadowy in dimensional width) of suspicion on Aristotle's law of the excluded middle, which says, in a verbal reworking of symbolic logic, the logician's markedly air-tight leitmotif, that something is either A or not A. The entirety of the universe, in a framework cajoled into composition by Newton and other absent-minded associates, can be described in completeness, one would presume, without the use of "not". The fact that I am writing "not" and you are reading "not" is wasting computational space of minds and/or brains and/or pixels comprising a GUI that could be more usefully employed describing the world or universe that is actually there.
8) Completeness: reality presumably is complete in itself, including any subsection of it that involves formal computation written on a piece of paper, or constructed as a machine algorithm, or the psychological state of thinking about mathematics. All propositions that we currently know can be written on paper or a computer completely (it seems we have enough informatic space for that), and if we discover a new proposition then we can append it to that list. Things presumably aren't complete only if, from an empirical perspective, our singular or collective mathematical thought processes have a higher "bit-size" (however that is to be construed exactly) than the amount of computer storage space and/or paper space that we use or can use to record those thoughts by isomorphism.
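The appendable, taxonomy-like catalogue this point imagines can be sketched in a few lines: a finite list of propositions, complete at any given moment, extended only when a new proposition is actually observed. The propositions listed are illustrative placeholders.

```python
# A finite, growable catalogue of observed mathematical propositions,
# in the spirit of a taxonomist's list of animals or flora.
catalogue = ["1 + 1 = 2", "2 + 2 = 4"]

def observe(catalogue, proposition):
    """Append a newly observed proposition unless it is already listed."""
    if proposition not in catalogue:
        catalogue.append(proposition)
    return catalogue

observe(catalogue, "3 + 3 = 6")   # a new observation extends the list
observe(catalogue, "1 + 1 = 2")   # a repeat observation changes nothing
print(len(catalogue))  # the catalogue stays finite at every moment
```

At every moment the list is finite and, in the essay's empirical sense, complete with respect to everything observed so far.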
9) Consistency: reality itself is presumably consistent (as a preemptive maneuver) because we are able to do things like build machines where the parts consistently adhere to themselves to make a "molecular" construction by composition, and to scratch out the chicken scratches of mathematics on paper, in which the scratch-featuring paper and paper-inhabiting scratches are themselves consistent states. Therefore, when talking about computers while under the persuasion of a belief in a consistent world, it seems best not to think in terms of formal systems that are ostensibly using the term "consistency" to mean something different. The psychological states of mathematics, if countable and therefore able to be recorded onto a finite collection of, say, strips of paper that propositions can be written on, are things that are part of this preemptively consistent reality, and thus so are the strips of paper we could write on. That which is consistent can be reached by an implying reductio ad absurdum, by pointing out what is inconsistent or contradictory. Stated with more spatially keen exactness, the class of all inconsistent statements (given a statistically savvy sense of the general and preferred usage of words and how they correspond to general psychological states) is a subclass of all possible Cartesian-creative verbal constructions that can be made; the inconsistent set implies the consistent set by non-overlapping spatial differentiation. However, if reality is consistent, as we said we would treat it, then "inconsistency" must be something that doesn't refer or apply to reality at all. Thus the term may just apply to conceived-of possibilities that are all contradictory and thus not all "compossible" (a Leibnizian notion), from which one possibility is chosen to be actualized into action or behavior.
10) Arithmetic: this point is tenuously relevant to Gödel and Turing and at the same time very relevant to their proofs, since it is about doubting the notions of mathematics itself. This seems a more fitting point to make at the end, because I predict it will be less popular given the current culture of science and mathematics. I don't personally think that mathematics fits with the correspondence theory of truth as it is thought of by common sense. I think the ideas of physics--Newton's, Einstein's, Feynman's, Witten's--can be recast in terms without mathematics. Making claims about arithmetic and the natural numbers assumes that mathematics refers to something in a way that lets its statements be decided to be true or false; however, if the world is made of qualities or states, then symbolic logic, which deals with state-like variables and not quantity-like numbers, is a more likely candidate to pull off correspondence. Given the potential Kantian arbitrariness of math concepts, we should consider how this arbitrary quality applies to the question of a complete and consistent catalogue of math statements able to be placed in correspondence with the squares of a Turing machine. If you find this doubt about arithmetic unconvincing and/or unpalatable, then maybe one of the other points above is better latched onto; this argument can then be considered a fictional apparitional wisp or a self-denying patch of melting snow that barely even happened.
Now that we have looked at these 10 points, let's look at a way that mathematics can be processed by a computer.
If the brain and mind can decide mathematical statements, given their sizes individually and collectively, then a computer, likewise, can probably produce the constructions of statements that correspond with mathematics. This takes into consideration the verbal mediums we have absorbed by ostension through the teaching of our moms, dads, caretakers, and our fanged but fur-motherly wolf wise-elders. Either the world's current fastest supercomputer--Tianhe-2, the computer in Guangzhou, China, which processes at a peak speed of 54.902 petaFLOPS--can produce these propositional statements of mathematics by being isomorphically modeled off the historically irrecoverable, largely mammalian subsections of the brains of the best of 'em--Gauss, Euler, Erdős, Ramanujan, etc.--OR a computer that is eventually created whose constituent parts are smaller and finer than the clusters of firing neurons--a machine made of nano parts or quantum bits, perhaps--will be able to compute all mathematics equally, and then, in time's ponderous fullness, will surpass any mathematician, given that it has the correct programming. The programming might involve adapting algorithms that in turn produce algorithms. The question of whether humans can create the correct programming is a harder question to determine, but one is inclined to think that surveying the verbalizations of great living mathematicians using behavioristic methods--mathematicians like Grigori Perelman (who was awarded, and righteously rejected, 1 million dollars for solving the Poincaré conjecture) and the mathematically sharp Edward Witten--paired with PET and fMRI scans of these same thinkers, comprises a dual strategy to construct an algorithm that can be uploaded into a supercomputer that produces mathematical statements and new statements that bring about mathematical ideas.
One would wish to try to compute or taxonomize like a cartographer all types of concepts (and not only mathematics) in the style of Leibniz's characteristica universalis; however, the brain may sift through empirically searchable "spaces of mind-concepts" in a way that is not always easily predictable. Perhaps the corpulent and thickened corpus callosum of Einstein's allowed his mind to explore spaces of realizable and possible thoughts of space and time, subjectively understood, that can't be reached by our minds and brains, given that they aren't as blessed as his with the ability to conceive of space and time and physical reality. Perhaps a person can cultivate Einstein's subjective ability at conceptualizing if they very systematically direct their neurons towards that end by socialization and education and independent study, progressively building those circuits up through each sleep cycle, like a rat latently dreaming ardently about mazes. Presumably Platyhelminthes--the fluke worm--has a brain that can only faintly search the "space" of all possible concepts, and probably it can't understand our notions of space and time and causality except on a very rudimentary level.
The question becomes empirical. We are led to ask: how likely is it that our ability to program computers can be expected to search spaces of math and other concepts in a way that mirrors and then eventually outpaces the human mind and brain? All possible computers constructable in empirically perceivable reality are themselves modelable off a searchable space, just as mathematical propositions can undergo a similar modeling.
A computer can probably be made to produce statements that are inductively savvy in a way that can predict reality and therefore predict possible concepts we can encounter in the future. My reasoning for this is as follows:
Inductive reasoning typically is based on observations of the past and possibly the present that are used to make a conclusion about the future. There were 12 things we'll call A; therefore, in the "future" it is possible that reality will also have an A. This typically occurs as an association of usually two and sometimes three things. This is the common example: I saw 12 white swans in the past and/or present, therefore I will probably (whatever "probably" means) see a 13th white swan sometime in the future (or, "if there is a swan in the future then it will be white"). In a variant vocabulary, the color of whiteness and the spatial and/or kinesthetic sense-data that we gerrymander out of the totality of observed reality and (in some way) correspond with the term "swan", given that they were observed together in the past, are likely to occur coupled in the future. Understanding reality--past, present, future (to the degree that this triple breakdown of time says something about reality; it seems possible that a "many worlds" interpretation, a "Gödel universe", or some other model of the universe would challenge this all-too-human intuition)--seems to be largely or wholly done by this process. To understand the past and fill out a history full of it, we use "reverse" induction (based on our past of reading the copied writings of Josephus, Pliny, the Talmud, and analysis of the New Testament documents, it is likely that Jesus existed).
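The swan example above can be sketched as the simplest possible inductive rule: from a finite record of past observations, estimate how likely the next swan is to be white by a bare frequency count. The prior of 0.5 for the no-evidence case is an assumption of this sketch, not anything argued in the text.

```python
# Bare-frequency induction: the probability assigned to "the next swan
# is white" is just the fraction of observed swans that were white.
observations = ["white"] * 12  # twelve white swans seen so far

def predict_white(observations):
    if not observations:
        return 0.5  # no evidence either way (an assumed prior)
    return observations.count("white") / len(observations)

print(predict_white(observations))  # 1.0: all evidence points to white
```

A single black swan dropped into the list immediately lowers the estimate, which is the familiar fragility of induction the passage alludes to.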
The sea is largely filled with individuals that fit into the class Chondrichthyes (itself a general term corresponding to many individual species, and then individuals within each species, whose diversity can be modeled off fuzzy logic and displayed in a biomorphic computer program, as shown by Richard Dawkins in The Blind Watchmaker); therefore we can say that, given the underwater animals we have so far observed and taxonomized as fish, it is likely that new fish we encounter will resemble them. This can be used, with an accuracy that is to be consistently improved upon (where the inductive tweaking of an inductive principle programmed into a computer is like a written and processed meta-study, a study of studies), to become a marked predictor of what sort of species we are to find in the twilight zone of the ocean, deeply entrenched, within the Mariana Trench or otherwise, raggedly scuttling along the floors of silent seas.
There are irregularities and lumpiness in reality, like how dark matter is distributed at roughly 25 percent in the Milky Way galaxy and the rest of the universe (given its doubtable model function as a predictor of gravitational pull). I invoke this extended exposition of induction and science to try to argue that what occurs in the mind, by introspection, given the thoughts and concepts and emotions we have, may be a similar inductive enterprise, given how we seem to inductively zoom in on what occurs in our minds and come to more accurate approximations, just as the ideal gas law or special and general relativity are better approximations. This is to be done perhaps in the manner of mathematical induction. A computer can theoretically be programmed to predict the new thoughts we will have (and therefore come closer to realizing Leibniz's vision of the characteristica universalis; I consider concepts a type of thought, since beliefs also seem to be a type of thought, though this can be semantic to a degree).
Last thought: the brain and/or the fastest computers in the world combined can only process so much data, if the notion that reality can't be compressed is correct (if Chaitin's notions of complexity are right, then the binary string 0000 is just as complex as the binary string 1010). If this is true, then the bit-size of the universe can't be captured by a few paltry computers and paltry brains and minds. We are possibly a speck of a speck of a speck, and so on. However, this assumes the notion of bit-size, which assumes some counting mathematical process. If math is a concept we place on reality, then maybe this obstacle is able to be overcome, but this is for another write-up.
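The compressibility question above can be probed empirically, at least crudely: here zlib stands in as a rough proxy for the theoretical shortest-program measure of Chaitin/Kolmogorov complexity (a deliberate simplification; real algorithmic complexity is uncomputable). At scale, a patterned string shrinks dramatically while a patternless one barely shrinks at all, which is the sense in which an incompressible reality would overwhelm any paltry store of bits.

```python
import os
import zlib

# A highly patterned string versus a patternless one, both 10,000 bytes.
repetitive = b"0" * 10000
random_ish = os.urandom(10000)

# zlib is a crude stand-in for algorithmic (Kolmogorov/Chaitin) complexity.
print(len(zlib.compress(repetitive)))  # tiny: the pattern compresses away
print(len(zlib.compress(random_ish)))  # near 10000: almost no compression
```

For very short strings like 0000 and 1010 the description overhead dominates and the two are of comparable complexity, which is consistent with the sentence above; the divergence only shows up as the strings grow.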
In time's weary fullness we can probably construct a computer that will advance human understanding beyond what the current human imagination is capable of practically envisioning.