HANDLING state in a monadic way is one of the techniques Haskellers come to learn about. But how does it work, roughly?
The following is a simplified, intuitive explanation of the state monad. It's basically a trick of function currying. Suppose you have some program state, and some functions that do something to your program state:
type MyState = Int
increaseState :: MyState -> Int -> MyState
increaseState myState n = myState + n
decreaseState :: MyState -> Int -> MyState
decreaseState myState n = myState - n
That's workable, but it's not composable: it's inconvenient to keep wrapping function calls like:
let myState = 0
in increaseState (decreaseState myState 1) 2
Because the earlier states are nested inside the first argument, and you also have to provide a second argument each time, you don't get any syntactic win here.
But if you express the functions a little differently:
increaseState :: Int -> MyState -> MyState
increaseState n myState = myState + n
decreaseState :: Int -> MyState -> MyState
decreaseState n myState = myState - n
You can take advantage of the associativity of the function arrow and think of the function types as Int -> (MyState -> MyState). Now, this return type is a transformation from an initial state to a final state. If we alias the type as type StateWrapper = MyState -> MyState, we can write our function types as Int -> StateWrapper.
So, the new function types are:
increaseState, decreaseState :: Int -> StateWrapper
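Spelled out in full, with the alias declared as actual code, the snippet looks like this (the same definitions as before, only the signatures change):
type MyState = Int
type StateWrapper = MyState -> MyState

increaseState :: Int -> StateWrapper
increaseState n myState = myState + n

decreaseState :: Int -> StateWrapper
decreaseState n myState = myState - n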
If we say increaseState 2, we get a result of type StateWrapper. If we say decreaseState 1, we get a result also of type StateWrapper. If we compose these two values (which are actually functions, remember), we get a final value of type StateWrapper, that is, MyState -> MyState. In fact, no matter how many stateful operations we do, if we compose them all in sequence, we get a final function of type MyState -> MyState.
At the end of the sequencing (the composition), all we have to do is feed in the initial state, and we get back the final state after carrying out all the 'stateful' operations.
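For example, using the two functions above, the earlier nested call can be rewritten as a composition that's only applied to the initial state at the very end (a small sketch; finalState is just an illustrative name):
-- Compose right to left: decrease by 1 first, then increase by 2.
finalState :: MyState
finalState = (increaseState 2 . decreaseState 1) 0
-- finalState == 1, the same result as the nested version above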
So, you ask, why the need for a state monad at all? Why not just do function composition in the first place? The answer is that the state wrapper type is slightly more complicated than what we've seen above. In reality, it's closer to type StateWrapper s a = s -> (a, s), that is, given an initial state s, it returns a result value a and a final state s. A series of functions of this type can't be composed together using normal function composition, and hence we use monadic binding to do the job.
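To see why, here is a sketch of a function with that shape (tick is a made-up example, not from the text): it returns the current state as its result value, and an incremented state as the new state.
type StateWrapper s a = s -> (a, s)

-- Illustrative only: return the current counter as the result, and bump it.
tick :: StateWrapper Int Int
tick n = (n, n + 1)

-- 'tick . tick' doesn't typecheck: tick produces an (Int, Int) pair
-- but expects a plain Int as input, so (.) can't thread the state for us.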
So monadic binding is a more powerful form of function composition, and hence of action sequencing, that can handle unruly types (ones that still follow a certain pattern, hence the famous (>>=) :: m a -> (a -> m b) -> m b).
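As a rough sketch of what that binding looks like for this particular shape (a hand-rolled helper, not the real Control.Monad.State API, which wraps the function in a newtype and gives it a proper Monad instance):
-- Run the first step, then feed its result to the next step while
-- threading the updated state through.
bindState :: StateWrapper s a -> (a -> StateWrapper s b) -> StateWrapper s b
bindState step k = \s0 ->
  let (a, s1) = step s0
  in  k a s1

-- Usage: sequence two ticks, then run the whole thing from an initial state.
twoTicks :: StateWrapper Int (Int, Int)
twoTicks =
  tick `bindState` \a ->
  tick `bindState` \b ->
  \s -> ((a, b), s)   -- 'return (a, b)' for this shape

-- twoTicks 0 == ((0, 1), 2)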
To get a more accurate idea of how the state monad works, see Graham Hutton's excellent tutorial.