ChinaVis memo

Although ChinaVis is not an important or celebrated visualization conference, there was still something interesting.

Day1: course on information theory

Entropy

  • Shannon’s entropy:
    $H(X) = -\sum_x p(x)\log p(x)$

  • Joint entropy:
    the uncertainty of a pair of random variables taken together
    $H(X,Y) = -\sum_{x,y}p(x,y)\log p(x,y)$

  • Conditional entropy:
    $H(X|Y) = -\sum_{x,y}p(x,y)\log\frac{p(x,y)}{p(y)}$
    Derivation:
    $$
    \begin{aligned}
    H(X|Y) &= \sum_y p(Y=y)H(X|Y=y) \\
    &= -\sum_y p(Y=y)\sum_x p(x|y)\log p(x|y) \\
    &= -\sum_{x,y}p(x,y)\log p(x|y)
    \end{aligned}
    $$

  • Relative entropy (KL Divergence)
    measures how much one probability distribution diverges from another
    $D_{KL}(p\,\|\,q) = \sum_{x}p(x)\log\frac{p(x)}{q(x)}$

  • Mutual information
    The relative entropy between $p(x,y)$ and $p(x)p(y)$:
    $I(X;Y) = D_{KL}(p(x,y)\,\|\,p(x)p(y)) = \sum_{x,y}p(x,y)\log\frac{p(x,y)}{p(x)p(y)}$
    If X and Y are independent, $p(x,y) = p(x)p(y)$ and $I(X;Y) = 0$, so mutual information indicates the degree of dependency between X and Y (a numeric sketch follows this list).
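
A minimal numeric sketch of these quantities on a made-up 2×2 joint distribution (base-2 logs; the numbers and function names are mine, not from the course):

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits of a probability vector p."""
    p = p[p > 0]                      # convention: 0 * log(0) = 0
    return -np.sum(p * np.log2(p))

# Toy joint distribution p(x, y) -- made-up numbers for illustration only.
p_xy = np.array([[0.4, 0.1],
                 [0.1, 0.4]])

p_x = p_xy.sum(axis=1)                # marginal p(x)
p_y = p_xy.sum(axis=0)                # marginal p(y)

H_xy = entropy(p_xy.ravel())          # joint entropy H(X,Y)
H_y = entropy(p_y)
H_x_given_y = H_xy - H_y              # chain rule: H(X|Y) = H(X,Y) - H(Y)

# Mutual information I(X;Y) = D_KL(p(x,y) || p(x)p(y)).
indep = np.outer(p_x, p_y)            # product of marginals p(x)p(y)
I_xy = np.sum(p_xy * np.log2(p_xy / indep))

print(f"H(X,Y) = {H_xy:.3f}, H(X|Y) = {H_x_given_y:.3f}, I(X;Y) = {I_xy:.3f}")
```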

Application Example

  1. Determine the viewpoint with the least information loss.

  2. Divide the whole image into multiple blocks and use entropy to decide which blocks should be rendered at high resolution and which at low resolution, in order to reduce rendering time (a rough sketch follows this list).
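
A rough numeric sketch of the second idea, assuming grayscale intensities in [0, 1]; the block size, bin count, and entropy threshold are placeholders I picked, not values from the talk:

```python
import numpy as np

def block_entropy(block, bins=16):
    """Entropy (bits) of a block's intensity histogram."""
    hist, _ = np.histogram(block, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]                       # drop empty bins (0 * log(0) = 0)
    return -np.sum(p * np.log2(p))

# Synthetic test image: a flat left half and a noisy right half.
rng = np.random.default_rng(0)
image = np.zeros((128, 128))
image[:, 64:] = rng.random((128, 64))

B = 64                                 # block size (assumed)
threshold = 1.0                        # entropy cutoff in bits (assumed)
for i in range(0, image.shape[0], B):
    for j in range(0, image.shape[1], B):
        h = block_entropy(image[i:i+B, j:j+B])
        level = "high" if h > threshold else "low"
        print(f"block ({i:3d},{j:3d}): H = {h:.2f} bits -> {level} resolution")
```

Flat blocks come out with near-zero entropy and get tagged for low resolution; textured blocks carry more information and keep full resolution.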

Day2

Four Levels of Visualization

Hmm... never mind, this part wasn't very interesting.