Archive


This summer, I switched my mathematical passion from work, where it is my everyday research tool, to leisure, through this “powerful, passionate and inspiring” read into the heart of the hidden reality of maths, in the words of the New York Times book review. Indeed, the touching story of Edward Frenkel’s life in Russia and his path to becoming a mathematician takes the reader deep into the beauty and elegance of what he himself describes as the art of mathematics. With extreme simplicity, and always accompanied by the people who shaped his academic journey up to his professorship at the University of California, Berkeley, Frenkel winds across a lifetime of mathematical symmetries and barriers. Highly recommended!

Love and Math: The Heart of Hidden Reality, by Edward Frenkel


I write this post alongside the sad news of the car accident that ended the life of one of the most brilliant mathematicians of the 20th century, John Forbes Nash. While battling his mental illness, he managed to solve some of the most important theoretical problems of his time, for which he received both the Nobel Memorial Prize in Economic Sciences in 1994 and the Abel Prize in mathematics in 2015.

In my opinion, what made Nash a beautiful mind was his unique way of approaching problems. Probably because he worked on his own and had a broad view of mathematical research (in fact, unlike many mathematicians, he was not a specialist), he could tackle some of the most famous open problems in mathematics, such as a problem on elliptic partial differential equations suggested by Louis Nirenberg. Nirenberg himself gives the following answer to the question of whether there is a mathematician he would consider a genius: “I can think of one, and that’s John Nash (…) He had a remarkable mind. He thought about things differently from other people”.

During this semester, I have been attending an Information Theory course at the Barcelona Graduate School of Mathematics, and the last part of the course has been devoted to Quantum Information Theory, at the hands of ICREA Professor Andreas Winter. The science of the very small, where interactions between matter and energy involve a discrete treatment of physical quantities, has fostered the development of a mathematical approach to the information held in the states of a quantum system. Due to the Heisenberg uncertainty principle, it is impossible to fully measure the state of a quantum system, and hence to express it in terms of classical information, i.e., bits.

One of the fundamental tools in quantum information theory is Verschränkung, or entanglement, a physical phenomenon whereby the quantum state of a pair of entangled (correlated) particles cannot be described separately. Entanglement was the subject of a 1935 paper by Albert Einstein, Boris Podolsky and Nathan Rosen, whose argument came to be known as the EPR paradox: the description of physical reality given by quantum mechanics is incomplete. An example is the following quantum state, in bra-ket notation:

|\Phi^+\rangle=\frac{1}{\sqrt{2}}\left(|00\rangle+|11\rangle\right).
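As a toy illustration (not part of the original post; the variable names are my own), the following short NumPy sketch builds this Bell state and prints the probabilities of the four joint measurement outcomes. Only 00 and 11 appear, each with probability 1/2: the two measurements are perfectly correlated even though neither qubit has a definite state on its own.

import numpy as np

# Single-qubit basis states |0> and |1> as vectors
ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

# Bell state |Phi+> = (|00> + |11>) / sqrt(2), built with tensor (Kronecker) products
phi_plus = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)

# Born rule: outcome probabilities are the squared amplitudes
probs = np.abs(phi_plus) ** 2
for outcome, p in zip(["00", "01", "10", "11"], probs):
    print(outcome, p)   # prints 0.5 for 00 and 11, 0.0 for 01 and 10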

The term entanglement was in fact coined by Erwin Schrödinger in the same year, when he thought about the problem of interpreting quantum superposition applied to a cat’s life. The cat, viewed as a quantum system, can be simultaneously in two states: alive and dead. The thought experiment goes as follows: a cat, a flask of poison, and a radioactive source are placed in a sealed box. If an internal monitor detects radioactivity (i.e., a single atom decaying), the flask is shattered, releasing the poison and killing the cat. The Copenhagen interpretation of quantum mechanics implies that, after a while, the cat is simultaneously alive and dead. Yet, when one looks in the box, one sees the cat either alive or dead, not both. This poses the question of when exactly quantum superposition ends and reality collapses into one possibility or the other.

Today I came across a very interesting article by Sergio Verdú, published in the IEEE Transactions on Information Theory in 1998 to celebrate the 50th anniversary of Claude E. Shannon’s Magna Carta of the information era. Verdú gives a brief chronicle of the historical footsteps towards a theory of the fundamental limits of both data compression and data transmission. The tentacles of Shannon’s paper have reached, besides communication engineering, many other fields such as mathematics, physics, cryptology, economics, biology and even linguistics.

Before 1948, the major communication systems of the time already provided some of the crucial ingredients that would enable information theory as we know it today: the Morse code (1830) is an efficient way to encode information in the duration of the signals that make up each symbol, and spread-spectrum techniques (1940) showed that bandwidth is another important parameter in reliable communication. Attempts to quantify the amount of transmitted information, rate, or capacity were made by Nyquist (1924), Küpfmüller (1924), Hartley (1928), and Kotel’nikov (1933). The letter H was used to denote the amount of information associated with a number K of states:

H=\log(K)
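For concreteness (a worked example of mine, not from Verdú’s article): taking the logarithm to base 2, this measure is simply the number of bits needed to label K equally likely states, and a few lines of Python suffice to evaluate it.

from math import log2

# Hartley-style measure: information of a source with K equally likely states, in bits
def hartley(K):
    return log2(K)

print(hartley(2))     # 1.0 bit   -- one binary symbol
print(hartley(26))    # ~4.70 bits -- one letter drawn uniformly from a 26-letter alphabet
print(hartley(1024))  # 10.0 bits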

What Nyquist and Hartley missed was a probabilistic or statistical treatment of the sources of information. In fact, probabilistic modelling of information sources has a very long history in cryptography. As early as 1380 and 1658, tables of frequencies of letters and pairs of letters had been compiled for the purpose of decrypting secret messages! Precisely at the conclusion of his WWII work on cryptography, Shannon prepared a classified report in which he included several of these notions, including entropy and the very words information theory. After graduating from MIT, a twenty-two-year-old Shannon came up with a ground-breaking abstraction of the communication process, which he developed upon joining Bell Labs in 1941.
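To see what that probabilistic treatment buys (an illustrative sketch of mine, using an arbitrary sample text rather than anything from the article): the empirical letter frequencies of a piece of English text give a Shannon entropy well below the log2(26) ≈ 4.70 bits per letter of the uniform state count, and that gap is exactly what a frequency table, whether for cryptanalysis or for compression, exploits.

from collections import Counter
from math import log2

# Empirical letter distribution and Shannon entropy of a sample text
text = "what nyquist and hartley missed was a probabilistic treatment of the source"
letters = [c for c in text if c.isalpha()]
n = len(letters)
counts = Counter(letters)

entropy = -sum((k / n) * log2(k / n) for k in counts.values())
print(round(entropy, 2), "bits per letter")  # noticeably less than log2(26) ~ 4.70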

By 1948, the need for a theory of communication describing the fundamental limits of concepts that are trivial today, such as bandwidth, signal-to-noise ratio or data rate, was evident and recognized by the scientific community. In only a few months, several theories were put on the table with the hope of success (including those of Clavier, Earp, Goldman, Laplume, Shannon, Tuller and Wiener). One of them would prove to be everlasting.

Read the full article here.

Some fictitious (and apparently bad) reviews of seminal works by brilliant minds of the 20th century, including the father of information theory, Claude E. Shannon, and computer science visionaries such as Turing and Dijkstra. Whenever you get such imaginative negative reviews, don’t give up 🙂 Here are the reviews of A Mathematical Theory of Communication:

Claude E. Shannon: A Mathematical Theory of Communication.

  1. This paper is poorly motivated and excessively abstract. It is unclear for what practical problem it might be relevant. The author claims that “semantic aspects of communication are irrelevant to the engineering problems,” which seems to indicate that his theory is suitable mostly for transmitting gibberish. Alas, people will not pay to have gibberish transmitted anywhere.
  2. I don’t understand the relevance of discrete sources: No matter what one does, in the end, the signal will have to be modulated using good old-fashioned vacuum tubes, so the signal on the “channel” will always be analog.
  3. A running example would have helped make the presentation clearer and less theoretical, but none is provided. Also, the author presents no implementation details or experiments taken from a practical application.
  4. Confidential comments to the editor: The only thing absolutely wrong with this paper is that it doesn’t quite “resonate” with what the research community finds exciting. At any point, there are sexy topics and unsexy ones: these days, television is sexy and color television is even sexier. Discrete channels with a finite number of symbols are good for telegraphy, but telegraphy is 100 years old, hardly a good research topic.
  5. The author mentions computing machines, such as the recent ENIAC. Well, I guess one could connect such machines, but a recent IBM memo stated that a dozen or so such machines will be sufficient for all the computing that we’ll ever need in the foreseeable future, so there won’t be a whole lot of connecting going on with only a dozen ENIACs!
    IBM has decided to stay out of the electronic computing business, and this journal should probably do the same!

The original “We are sorry to inform you” article can be found here.