John Baez originally shared this post:
Q: What’s negative information?
A: I could tell you, but then you’d know even less…
Just kidding. In 2005 Michal Horodecki, Jonathan Oppenheim and Andreas Winter wrote a nice paper on negative information. I find it a bit easier to think in terms of entropy. Entropy is the information you're missing about the precise details of a system. For example, if I have a coin under my hand and you can't see which side is up, you'll say it has an entropy of one bit.
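If you like formulas, here is a tiny sketch of where that "one bit" comes from, using the usual Shannon entropy S = -sum_i p_i log2(p_i). The code is just my illustration, not anything from the paper:

    import math

    # Two equally likely outcomes (heads or tails), hidden from view.
    p = [0.5, 0.5]

    # Shannon entropy: S = -sum_i p_i log2(p_i), measured in bits.
    S = -sum(q * math.log2(q) for q in p)
    print(S)   # 1.0 -- one bit of missing information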
Suppose you have a big physical system B and some part of it, say A. In classical mechanics the entropy of B is always bigger than that of A:
S(B) ≥ S(A)
where S means ‘entropy’. In particular, if we know everything we can about B, we know all we can about A. In quantum mechanics this isn’t true, so S(B) – S(A) can be negative. For example, it’s possible to have an entangled pair of electrons with no entropy, where if we look at either one, it has an entropy of one bit: we don’t know if its spin is up or down.
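Here is a minimal sketch of that quantum example, using numpy. The singlet state of two electrons and the von Neumann entropy S(rho) = -Tr(rho log2 rho) are standard; the code is only my illustration of the numbers, not anything from the paper:

    import numpy as np

    def entropy_bits(rho):
        # von Neumann entropy S(rho) = -Tr(rho log2 rho), from the eigenvalues
        evals = np.linalg.eigvalsh(rho)
        evals = evals[evals > 1e-12]          # drop zero eigenvalues (0 log 0 = 0)
        return float(-np.sum(evals * np.log2(evals)))

    # Entangled pair of electrons: the singlet (|01> - |10>)/sqrt(2),
    # written in the basis |00>, |01>, |10>, |11>.
    psi = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)
    rho_pair = np.outer(psi, psi)             # the whole pair: a pure state

    # Look at just one electron: trace out the other one.
    rho_one = np.trace(rho_pair.reshape(2, 2, 2, 2), axis1=1, axis2=3)

    print(entropy_bits(rho_pair))   # ~0.0 -- we know all there is to know about the pair
    print(entropy_bits(rho_one))    # ~1.0 -- one electron alone looks like a fair coin

So with B the whole pair and A one electron, S(B) – S(A) = –1 bit.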
The paper by Horodecki, Oppenheim and Winter studied the implications of negative information for communication. There was a popularization here:
Quantum information can be negative, Phys.org, 4 August 2005, http://phys.org/news5621.html
but I understood less after reading it than before, so I decided to write this.
Puzzle: why do physicists use S to stand for entropy?
Michal Horodecki, Jonathan Oppenheim and Andreas Winter, Quantum information can be negative, arXiv:quant-ph/0505062, http://arxiv.org/abs/quant-ph/0505062
Abstract: Given an unknown quantum state distributed over two systems, we determine how much quantum communication is needed to transfer the full state to one system. This communication measures the …