Here is the definition of the Arrow-Pratt coefficient of absolute risk aversion:
A=-U''(W)/U'(W)
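To make the definition concrete, here is a minimal sketch (not from the source) that computes A(W) symbolically with sympy; the utility functions chosen below, log and CARA, are illustrative assumptions:

```python
# A minimal sketch: compute the Arrow-Pratt coefficient A(W) = -U''(W)/U'(W)
# symbolically. The example utilities (log and CARA) are assumptions for
# illustration, not from the source.
import sympy as sp

W, a = sp.symbols("W a", positive=True)

def arrow_pratt(U, W):
    # A(W) = -U''(W)/U'(W)
    return sp.simplify(-sp.diff(U, W, 2) / sp.diff(U, W))

print(arrow_pratt(sp.log(W), W))        # log utility: A(W) = 1/W
print(arrow_pratt(-sp.exp(-a * W), W))  # CARA utility: A(W) = a (constant)
```

Note that for exponential utility the coefficient does not depend on wealth, which is why it is called constant absolute risk aversion (CARA).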
To derive this, we ask a simple question. Suppose your net wealth is W, and suppose you are forced to take a small fair gamble \epsilon with zero expected payoff and variance \sigma^2. How much are you willing to pay to avoid that gamble?
Call that amount c. This implies that
E[U(W+\epsilon)] = U(W-c)
Taking a second-order Taylor expansion of the left side and a first-order expansion of the right side, and using E[\epsilon] = 0 and E[\epsilon^2] = \sigma^2, we have
U(W) + U''(W)\sigma^2/2 = U(W) - cU'(W)
Solving for c gives
c = (-U''(W)/U'(W))\sigma^2/2 = A\sigma^2/2.
How much you are willing to pay thus depends on your absolute risk aversion.
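As a quick numerical check of the approximation (a sketch under assumed values: log utility, wealth of 100, and a fair coin flip of +/- 5, none of which come from the source):

```python
# Numerical check of c ~ A*sigma^2/2 for log utility U(W) = ln(W).
# Assumptions: W = 100 and a fair +/- sigma coin flip, so E[eps] = 0
# and Var(eps) = sigma^2. These values are illustrative, not from the source.
import math

W, sigma = 100.0, 5.0

# Exact cost c: solve E[U(W + eps)] = U(W - c), i.e.
# 0.5*ln(W + sigma) + 0.5*ln(W - sigma) = ln(W - c).
EU = 0.5 * math.log(W + sigma) + 0.5 * math.log(W - sigma)
c_exact = W - math.exp(EU)

# Arrow-Pratt approximation: A = 1/W for log utility.
c_approx = (1.0 / W) * sigma**2 / 2

print(f"exact c  = {c_exact:.4f}")   # 0.1251
print(f"approx c = {c_approx:.4f}")  # 0.1250
```

The exact and approximate premia agree to three decimal places, as expected for a gamble that is small relative to wealth.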
For details, see Huang and Litzenberger (1988).