Monday, January 27, 2014

Hawking's new thoughts on information & chaos & black hole physics...

Interesting new paper by Stephen Hawking, though I only half-understand it... (ok maybe 2/3 ...)

http://arxiv.org/pdf/1401.5761v1.pdf

Basically: He is discussing a certain case [stuff happening inside a black hole] where general relativity says something is not observable, but quantum theory says it is in principle observable....

Hawking's new solution is that the data escaping from inside the black hole is chaotically messed up, so that it's sort of in principle observable, but in practice too complicated to actually see...

This seems in line with my notion that quantum logic is for stuff that you cannot in principle measure -- where the YOU is highlighted ... i.e. this has to do with what you, as an information-processing system, have the specific capacity to measure without losing your you-ness...

hmmmm...

6 comments:

Bill Lauritzen said...

Yes, but my friend Mamikon, an astrophysicist now at Caltech working on mathematical issues, has claimed for years, and still claims, that black holes don't exist. He says they are a fashionable model that will pass with time. He has an alternative theory. See: http://www.mamikon.com/SovietArt/PapersUSSR.html

PonderSeekDiscover said...

According to the Unitary Flow blog, Hawking doesn't technically say black holes don't exist: http://www.unitaryflow.com/2014/01/black-hole-paradox-5-black-holes-redefined.html.

And with regard to Quantum Logic, I exchanged a few rather interesting emails with Kevin Knuth on the subject (after reading the Symmetry paper posted to your website), and as he points out, Quantum Logic may not be wrong, but it has some serious foundational issues:

"[…] I understand that you are disappointed. From what I understand given
your statements, this is due to the fact that there are several
perspectives on probability theory. I am VERY familiar with this
phenomenon in the area of machine learning and artificial
intelligence. A lack of understanding of probability theory is really
the reason that Artificial Intelligence (AI) failed in the 1980s. You
might take a look at Peter Cheeseman's 1985 paper "In Defense of
Probability," presented at the International Joint Conference on
Artificial Intelligence.

Part of this is historical. Consider that the first papers on the
Foundations of Probability Theory really came out in the 1930's and
40's:
Kolmogorov 1933 ("Foundations of the Theory of Probability")
DeFinetti 1931 ("Probabilism: A Critical Essay on the Theory of
Probability and on the Value of Science")
Cox 1946 ("Probability, Frequency, and Reasonable Expectation",
American Journal of Physics, 14, 1-13)

Kolmogorov's foundation is based on sets and measure theory,
DeFinetti's foundation is based on Betting, and Cox's foundation is
based on consistent quantification of an algebra. They are VERY
different from one another in perspective, and hence capability and
generality.

Note that these works were all written well after the foundations of
quantum mechanics (which relies heavily on probability theory) were
laid. Schrodinger got the Nobel Prize in 1933, which was the year
that Kolmogorov's foundation of probability based on set theory was
first written. In my opinion, this has been one reason why QM has
been a mess. Even Feynman made stupid statements about probability.

In 1936 Von Neumann and Birkhoff published their famous paper on "The
logic of quantum mechanics".
This is 10 years before Cox's algebraic foundation, which is most
similar to the approach Von Neumann and Birkhoff were taking. And if
I remember right, Von Neumann and Birkhoff rule out a Boolean algebra
of quantum propositions. This is unfortunate since that paper made
the connection between partial order and inference LONG before I
became involved in the game with my order-theoretic foundation (Knuth
and Skilling, 2013). They failed to realize that there is a space of
measurement sequences with its own algebra and this is coupled to a
space of logical propositions about those sequences, which follow a
Boolean algebra. This has generated an entire industry of Quantum
Logic that may not be totally wrong, but at least has some serious
foundational problems.

I have to say that this mess started by Von Neumann and Birkhoff
drives me bonkers. They were both geniuses, and as my friend Keith
Earle says, this is measured by how long they have been able to retard
progress. Unfortunately, they messed up.

[…]

My derivations of probability theory grew out of Cox's foundation, and as such are FAR
more general. Sure, you can use consistent betting to derive the sum
rule for probability:

P( A or B | I ) = P( A | I ) + P( B | I ) - P( A and B | I)

But can you use DeFinetti to derive these?

Psi( A join B ) = Psi( A ) + Psi( B )
where A and B are Feynman paths and Psi returns an amplitude?

Mutual Information (as a function of entropies)
MI( A; B) = H(A) + H(B) - H(A,B)

Polya's Min-Max Rule
Min(A, B) = A + B - Max(A, B)
where A and B are real numbers or integers?

Or this rule involving least common multiples and greatest common divisors?
log LCM(A, B) = log A + log B - log GCD(A, B)

Or the formula for the Euler characteristic for regular polytopes?
chi = V - E + F

You get the point."
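
A quick numerical aside of my own (a minimal sketch in plain Python with made-up toy numbers, not anything taken from Knuth's papers): the same inclusion-exclusion shape can be checked directly for several of the cases he lists.

import math
from collections import Counter

# 1) Probability sum rule on a toy sample space of two coin flips:
#    P(A or B | I) = P(A | I) + P(B | I) - P(A and B | I)
space = [(x, y) for x in (0, 1) for y in (0, 1)]

def P(event):
    return sum(1 for w in space if event(w)) / len(space)

def A(w): return w[0] == 1   # "first coin came up heads"
def B(w): return w[1] == 1   # "second coin came up heads"

assert math.isclose(P(lambda w: A(w) or B(w)),
                    P(A) + P(B) - P(lambda w: A(w) and B(w)))

# 2) Mutual information from entropies: MI(A;B) = H(A) + H(B) - H(A,B)
def H(counts):
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

joint = Counter({(0, 0): 4, (0, 1): 1, (1, 0): 1, (1, 1): 4})   # made-up joint counts
marg_A, marg_B = Counter(), Counter()
for (x, y), c in joint.items():
    marg_A[x] += c
    marg_B[y] += c
print("MI(A;B) =", H(marg_A) + H(marg_B) - H(joint))

# 3) Polya's min-max rule: Min(A,B) = A + B - Max(A,B)
a, b = 7, 12
assert min(a, b) == a + b - max(a, b)

# 4) log LCM(A,B) = log A + log B - log GCD(A,B)
g = math.gcd(a, b)
assert math.isclose(math.log(a * b // g),
                    math.log(a) + math.log(b) - math.log(g))

Of course this only checks particular instances; Knuth's point is that the common shape falls out of consistent quantification on a lattice, not from any one of these special cases.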

Anonymous said...

Recently Peter Woit posted (on his Not Even Wrong blog) a somewhat scathing review of Max Tegmark’s new book, “Our Mathematical Universe: My Quest for the Ultimate Nature of Reality” [MT], which I found intriguing (the review, I mean). So the next day I went to the bookstore, bought the book, and sat and read it straight through. I found the book refreshing and posted the following retort, which was promptly deleted, to Dr. Woit’s blog:

“In fundamental calculus one learns that in order for a mathematical model to be ontologically meaningful, every element in the model must have an ontological referent; one can’t just add a constant, structure, or element of any kind to make things work. Assume, for the sake of argument, that quantum entanglement is a thoroughly demonstrated and irrefutable ontological phenomenon and examine the consequence: entanglement is described in the mathematical model by the inseparability of Hilbert space; Hilbert space must have an ontological referent. Many scientists deny this and resort to epistemological arguments because it makes them uncomfortable. But if John Wheeler was correct and elementary particles have an information-theoretic basis, there’s nothing that says Hilbert space must have intrinsic existence.

Here’s another argument worth consideration. In his latest assault on religion, Richard Dawkins makes the argument that religion is not only foolhardy but potentially deadly. Assume, for the sake of argument, that Evolutionary Theory, like most theories, is a reasonably close approximation to what actually takes place – ontologically – and examine the consequence: within the framework of his own theory (which I personally am in agreement with) Dawkins’ argument becomes absurd. If one examines the context in which much of the bone record of ancestral humankind was found, it becomes clear that religion evolved quite early and has withstood the onslaught of natural selection for millennia: why? Perhaps, rather than continuously assaulting religion, Dawkins should examine religion with an open, scientific mind from the context of Evolutionary Theory; the book, “The Universe in a Single Atom” [HHDL] could very well be informative.

It is easy to dismiss arguments which make one uncomfortable as “an indulgence in their inner crank,” but arguments which make people uncomfortable are the hallmark of the scientific enterprise and occasionally represent a case of punctuated equilibrium. I find Max Tegmark’s book refreshing and his Mathematical Universe Hypothesis highly plausible!”

Now this is an interesting lead in to the question at hand: why has religion withstood the onslaught of natural selection for millennia? One can quite easily answer this question within the framework of Evolutionary Theory: religion demonstrates a high degree of evolutionary fitness. But this is unsatisfying because the second question immediately obtains: why does religion exhibit a high degree of evolutionary fitness?

Anonymous said...

So let’s ask the question, from the perspective of your Pattern Theoretics: what is a Buddha? You developed your system of magician dynamics, which I’ve never studied in pure form, but, based on your writings, I conclude that it consists of two systems of interacting functions on Non Well-Founded Sets; you call one system the magicians and the other the anti-magicians. These systems exist on Pattern Space, presumably some novel algebraic system of your own discovery. These systems run around casting spells on one another leading to autopoiesis in the form of structural conspiracies, patterns conspiring to maintain themselves. These structural conspiracies tend to gravitate towards semi-stable states of effective complexity – attractors – but yet maintain the ability to evolve and adapt.

So then, from this perspective a Darth Vader is an anti-magician who is somehow capable of bouncing the magician system out of an attractor representing a relatively high degree of effective complexity and into one of lower effective complexity – an agent of entropy! Conversely, a Buddha then, is the antithesis of Mr. Vader – an agent of negentropy.

Kevin Knuth has demonstrated quite compellingly that one can generalize a Boolean lattice to a calculus in contextually meaningful ways, and then use this contextually meaningful process calculus in conjunction with The Principle of Maximum Entropy to make sound inferences within the context at hand. So technically, one could generate a Boolean lattice from knowledge of the present state, derive a process calculus based on said lattice, and make sound inferences about the future state. But can one DEDUCE the future state from knowledge about the present state? In short, no!
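
To make the inference-versus-deduction point concrete, here is a minimal sketch (plain Python, Jaynes' classic die example, and emphatically not Knuth's lattice-based calculus): given only a constraint on the mean, the Principle of Maximum Entropy returns a probability distribution over the next state (an inference), never a uniquely deduced future state.

import math

# States the system can occupy next (a six-sided die) and the single
# piece of knowledge about it: the expected value of the next roll.
states = [1, 2, 3, 4, 5, 6]
target_mean = 4.5

def mean_for(lam):
    # Mean of the MaxEnt (Gibbs) distribution with p_i proportional to exp(lam * i).
    weights = [math.exp(lam * s) for s in states]
    return sum(s * w for s, w in zip(states, weights)) / sum(weights)

# Solve for the Lagrange multiplier by bisection (mean_for increases with lam).
lo, hi = -5.0, 5.0
for _ in range(200):
    mid = (lo + hi) / 2
    if mean_for(mid) < target_mean:
        lo = mid
    else:
        hi = mid
lam = (lo + hi) / 2

weights = [math.exp(lam * s) for s in states]
p = [w / sum(weights) for w in weights]
print("MaxEnt distribution over the next state:", [round(q, 3) for q in p])
print("Constraint check, mean =", round(sum(s * q for s, q in zip(states, p)), 3))

The output is a distribution consistent with what is known, which is exactly the point: the formalism tells you how to bet, not what will happen.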

Recently I purchased the illustrated edition of the Bardo Todol translated and compiled by Glenn Mullin and Thomas Kelly [MK] from which I quote:

“The night after seeing his (Khamtrul Yeshey Dorje) first plane he dreamed of Padma Sambhava and Yeshey Tsogyal. Padma Sambhava looked at him and said, ‘When iron birds fly, and horses run on wheels, my dharma will travel to the land of the Red Man’” (pgs. 133-134).

And from the endnote:

“Chogyam Trungpa was the first to publish the verse in America. Naturally, Americans loved the idea of being mentioned in prophecy. Buddha also made a similar prophecy that Tibetans often quote. It comes from one of the numerous editions of the Lankavatara Sutra, wherein Buddha states, ‘2500 years after my passing my dharma will go to the land of the Red Man.’”

Now you wax on about doing what’s in the best interest of the Cosmos in your Cosmist Manifesto, ridding it of religion being one of your inferred best interest scenarios, but how do you propose to DEDUCE what is in the best interests of the Cosmos? You know, one key feature of a Darth Vader, being an agent of entropy, is that they act out of ignorance and these actions generally have profound repercussions! Are you a Darth Vader? I oftentimes wonder!

In the same Bardo Todol referenced above, His Holiness the Dalai Lama states that we are in a delicate period, a crossroad, in which either a 1,000-year period of light or a 1,000-year period of darkness can obtain; which are you striving for?

The latest Essay Contest sponsored by the Foundational Questions Institute poses the question: how should humanity steer the future? This is rather presumptuous but what the heck. Why don’t you chop and screw your Cosmist Manifesto down to 9 pages and submit; see how it stands up amongst your peers. Consider it a direct challenge Mr. Vader!

According to the Buddhist Treasures we’re in a Golden Aeon in which 1,000 Buddhas will appear in succession – 1,000 Buddhas! To date there have been five. Good Luck Mr. Vader. . .

Unknown said...

This comment has been removed by the author.

Unknown said...

Nice read! I do remember reading in ancient Indian texts that there are universes upon universes. We recently published an article which attempts to satisfy the curiosity: if we are living in a multiverse, how far away might our closest neighbors be?

http://www.blueplanetjournal.com/cosmos/mapping-the-multiverse-how-far-to-the-next-universe.html