Untyped Boolean Expressions
Untyped Lambda Calculus
BV[x] = ∅, BV[λx.t] = BV[t] ∪ {x}, BV[t1 t2] = BV[t1] ∪ BV[t2]
Recursion
Types
Haskell and Laziness
take 1 $ quicksort
is worst case O(n^2), since quicksort may not determine the first element until the full algorithm completes.
take 1 $ mergesort
is worst case O(n), because it only needs to run a single pass of the algorithm, grabbing the first element from each subarray and recursively taking the smaller of the two heads.
Typeclasses in Haskell
deriving Eq
at the end of a data
definition includes default definitions of structural equality for a data type.
>>=
operator ("bind") is just a fancy operator for chaining, much like a semicolon in C.
Just Number | Nothing
models an optional value without needing to redefine every function in a composition expression; the >>=
operator allows us to chain functions together while handling the new optional type.
More fundamentally, we first learn about Functors and Applicatives; Monads are just more expressive Applicative Functors, which are just more expressive Functors.
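A small sketch of this chaining with the Maybe monad; safeDiv and parseAndDiv are made-up names for illustration:

```haskell
import Text.Read (readMaybe)

-- Division that may fail, wrapped in Maybe.
safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv a b = Just (a `div` b)

-- Each >>= passes the unwrapped value along, short-circuiting on Nothing,
-- much like a semicolon that also checks for failure.
parseAndDiv :: String -> String -> Maybe Int
parseAndDiv s1 s2 =
  readMaybe s1 >>= \a ->
  readMaybe s2 >>= \b ->
  safeDiv a b
```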
(a -> b)
or more specifically (a -> f a)
fmap id (f a) = f a
(the identity function should still work)
fmap (x . y) (f a) = fmap x $ fmap y (f a)
(composition is chaining)
Applicatives add pure
and <*>
, which allow us to apply functions that have been wrapped in the functor, like f (a -> b)
Monads provide return
(same as pure
), the >>=
bind operator, >>
, and fail
, but we mostly care about bind.
do
notation is syntactic sugar for this.
The Maybe
monad lets us check and handle possible failure; the list monad lets us handle nondeterminism by running a function on all possibilities.
Expressivity, meaning, guarantees, implementation
Parentheses-based syntax facilitates metaprogramming
A truly minimal subset of Racket: lambda abstraction & function application
mutation is changing the value bound to a name, or stored in a data structure (think variable)
box
provides an intermediate mutation mechanism, and is a Racket value that contains a Racket value.
Propositions are variables, with logical connectives
Γ ⊢ α
means "From the set of propositions Γ, we can prove the proposition α." Inference rules have their own syntax; see notes.
Can be described with production rules as a grammar, with term simplifying to true, false, or conditionals
Prove a property P by structural induction on the parse tree
Need meaning to our language:
Determinacy: if t → t' and t → t'', then t' = t''
A term t is in normal form if there is no t' s.t. t → t'
A value is a term that is a possible final result (only true
and false
in our system)
If t is in normal form, then t is a value
Normalization: forall terms t, there is a normal form t' s.t. t →* t'
In our definition of this, a number is either 0 or a successor of a number
In our extended definition, there are terms whose normal forms aren't values, e.g. succ true (a "stuck" term)
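A minimal sketch of this term language and its small-step evaluator in Haskell (constructor names are made up); step returning Nothing means the term is in normal form, and succ true is a normal form that is not a value:

```haskell
-- Terms of the toy language: booleans, conditionals, and numbers.
data Term
  = TTrue | TFalse
  | TIf Term Term Term
  | TZero | TSucc Term
  deriving (Eq, Show)

-- Numeric values: 0 or a successor of a numeric value.
isNumericVal :: Term -> Bool
isNumericVal TZero     = True
isNumericVal (TSucc t) = isNumericVal t
isNumericVal _         = False

-- One small step of evaluation; Nothing means no rule applies (normal form).
step :: Term -> Maybe Term
step (TIf TTrue  t _) = Just t
step (TIf TFalse _ f) = Just f
step (TIf c t f)      = (\c' -> TIf c' t f) <$> step c
step (TSucc t)        = TSucc <$> step t
step _                = Nothing
```

Here step (TSucc TTrue) is Nothing, yet TSucc TTrue is not a value: it is stuck.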
OCaml is, in a sense, lots of syntactic sugar atop the lambda calculus
In first-order logic (forall, exists), there is no procedure for deciding statements. Church proved this, first needing to define what a procedure is.
3 grammar rules in the lambda calculus (the grammar as given is ambiguous)
note concrete syntax tree is just more implementation details
formalizing abstract lambda definition is tricky, out of scope
truth is a semantic notion, proof is a syntactic notion, and we want to manipulate to prove.
Abstractions extend as far right as possible
Purely, functions only have one parameter, so functions of multiple parameters must return a function (a lambda of n - 1 parameters): curried functions
λx.t
, we call x a binder whose scope does not extend beyond t
λx.λx.x
is parsed as λx.(λx.x)
and is a function that ignores its argument and returns the identity function.
In λx.y
, y
is free because it is not bound
We can recursively define the set of bound variables of an expression because the grammar is recursive:
notice this is just our three grammar rules
A variable can be both free and bound if there are multiple occurrences in different scopes
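Because the grammar is recursive, the recursive definitions of bound and free variables translate directly into Haskell (a sketch; Term and the function names are illustrative):

```haskell
import Data.Set (Set)
import qualified Data.Set as Set

-- The three grammar rules of the untyped lambda calculus.
data Term = Var String | Lam String Term | App Term Term

-- Bound variables: BV[x] = {}, BV[λx.t] = BV[t] ∪ {x}, BV[t1 t2] = BV[t1] ∪ BV[t2]
bv :: Term -> Set String
bv (Var _)     = Set.empty
bv (Lam x t)   = Set.insert x (bv t)
bv (App t1 t2) = bv t1 `Set.union` bv t2

-- Free variables: a variable is free when not bound by an enclosing λ.
fv :: Term -> Set String
fv (Var x)     = Set.singleton x
fv (Lam x t)   = Set.delete x (fv t)
fv (App t1 t2) = fv t1 `Set.union` fv t2
```

In x (λx.x), the variable x is both free (the first occurrence) and bound (inside the lambda).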
[x ↦ t2] t1
indicates substituting t2 for x in t1.
(λx.zx)y
should simplify to zy
(λx.t1)t2 → [x ↦ t2] t1
Substitution rule gotchas
If s
contains free variables, they must not be captured (bound) by a lambda after substitution.
This definition of substitution gives us alpha equivalence, =α:
λx.t =α λy.[x ↦ y]t (for y not free in t)
β-reduction is (λx.t1) t2 →β [x ↦ t2] t1
, notice it just says passing a function an argument is equivalent to substitution in its term
A redex is a function application (λx.t1) t2
. Two redexes:
(λx.(λy.x)z)w
contains redex 1 (the outer application to w)
and redex 2 (the inner (λy.x)z)
If redex 1 first:
(λy.w)z
If 2:
(λx.x)w
But both reduce to the same result, w
The rules for full beta reduction says perform any beta reduction in the expression you want
Confluence: the order of reduction doesn't change the final result. But not every term has a normal form:
(λx.xx)(λx.xx)
one-step beta reduces to itself (substituting (λx.xx)
for x in x x gives the term back)
(λz.w)((λy.yy)(λy.yy))
can reduce forever if we try to first reduce the argument, or reduce in one step if we reduce the outer redex (which discards the argument).
The Standardization Theorem says that repeatedly reducing the leftmost, outermost redex will find a normal form if it exists. This strategy is called normal order reduction and is deterministic, unlike full beta reduction in general.
(λz.w)((λy.yy)(λy.yy))
1
2
2 is the outermost (in this case also leftmost)
Call-by-name is like normal order, but does not reduce in the body of abstractions
Haskell uses call-by-need, a lazy, memoizing variant of call-by-name
Call-by-value is strict and eager: it reduces the leftmost, innermost redex
We use symbols to represent familiar notions.
To use conditionals, if B then T else F, we say that if some complex statement B simplifies to λx.λy.x
it's true, and if it simplifies to λx.λy.y
it's false
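These Church booleans can be sketched directly in Haskell (tru, fls, churchIf are illustrative names); "if B then T else F" is just the application B T F:

```haskell
-- Church booleans: true selects its first argument, false its second
-- (tru = λx.λy.x, fls = λx.λy.y).
tru, fls :: a -> a -> a
tru x _ = x
fls _ y = y

-- A conditional is just application of the boolean to both branches.
churchIf :: (a -> a -> a) -> a -> a -> a
churchIf b t f = b t f
```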
cons consumes two arguments, f and r, and produces a function consuming a message. This produced function represents a list: based on a sort of control boolean that we pass into it, it returns either the first or the rest of the list, recursively defining a list as we know it.
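The idea above can be sketched in Haskell, using Church booleans as the "message" (note Haskell's types force both components to the same type here; the real encoding lives in the untyped calculus):

```haskell
-- Selector messages: tru picks the first component, fls the second.
tru, fls :: a -> a -> a
tru x _ = x
fls _ y = y

-- cons f r produces a function awaiting a selector message.
cons :: a -> a -> ((a -> a -> a) -> a)
cons f r = \sel -> sel f r

-- first/rest just pass in the appropriate control boolean.
first, rest :: ((a -> a -> a) -> a) -> a
first p = p tru
rest  p = p fls
```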
General Recursion p 64
The ?
needs to be something that produces fact
, so let's stick a variable there, call it r
, and stick a λr
at the start.
If x = f(x), then x is a fixed point of f: applying f does nothing to it.
for pfact, we need to compute the fixed point (fact
) of pfact
suppose fact = Y pfact
; then fact = pfact fact
becomes Y pfact = pfact (Y pfact)
, so Y must satisfy
Y f = f (Y f)
forall f.
Can't substitute pfact into r because the function signature would be wrong in the recursive call
we end up with this cool definition of pfact'
(protofactorial) which when applied to itself, produces pfact' pfact'
as a recursive call
finally, rearranging
The Y combinator doesn't work with call-by-value:
Y g = (λr.g(r r))(λr.g(r r)) // call-by-value FORCES (r r) to evaluate first
// normal order reduction DOES work though
We have to delay this evaluation:
(r r) = λx.(r r) x
, now it needs to be applied before it gets reduced.
Replacing f
with λx.f x
is called η-expansion.
What if we take out variables and λ entirely? We can reintroduce combinators (e.g. S, K, I), which still give us ways to represent functions.
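Since Haskell is call-by-need, the plain Y (no η-expansion) terminates; the only obstacle is that the self-application r r is untypable, which a newtype wrapper fixes. A sketch (Mu, y, fact are illustrative names):

```haskell
-- Mu wraps the self-applicable argument so (x (Mu x)) type checks.
newtype Mu a = Mu (Mu a -> a)

-- y f = f (y f): unrolling g (Mu g) produces f applied to itself lazily.
y :: (a -> a) -> a
y f = g (Mu g)
  where g (Mu x) = f (x (Mu x))

-- pfact-style open recursion, tied into a knot with y.
fact :: Integer -> Integer
fact = y (\r n -> if n == 0 then 1 else n * r (n - 1))
```

Under call-by-value this same y would loop; that is exactly why the η-expanded variant (the Z combinator) exists.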
De Bruijn indices: replace variables with numbers based on their binder depth
ML ("meta language") was designed for theorem provers: it was supposed to be used to express a proof and mechanically step through it to verify it.

Sum ("or"-together) types: each disjunct needs to have a constructor defined for it (prevents type ambiguity): type iors = Int of int | String of string;;
, where Int
is an arbitrarily named constructor; it acts as a label so we can distinguish Int 6
(- : iors = Int 6
) vs 6
(- : int = 6
)
type 'a option = None | Some of 'a
let p = {x = 5 ; y = 2};;
Stack.pop
Look at the parse tree of the expression rather than dynamic evaluation of the expression
t:T
is pronounced "t has type T".
Uniqueness: if t
is typable, then its type is unique and there is only one derivation.
Progress: if t:T
, then either t
is a value or there is some t'
s.t. t → t'
Preservation: if t:T
and t → t'
, then t':T
Type rules for sum types:
We have left (inl
) and right (inr
) injections to express which side of the sum the term satisfies.
inl 5 as Nat+Bool
is an annotation to help the type checker figure out what type it is.
General Recursion
pfact r n = if n == 0 then 1
else n * r r (n - 1)
so let's look at typing currently untyped terms.
λx:A.λy:B.x has an obvious return type A
λx.λy.y x cannot be typed on sight; there are many possible types, so we need type inference
Algorithms work by structural recursion on the term t.
Consumes the context and term, and produces the inferred type and the set of substitutions
given an untyped term t, annotate each abstraction variable x with a fresh type variable X, and record the association x:X in context Γ
We need three inference rules to do this, one for each possible way t can be formed
σT1 = σT2
First accumulates type constraints and then finds substitutions that satisfy them.
Now we need to solve the system of equations. A solution for a problem (Γ, t, S, C) is a pair (σ, T) such that
σ unifies C and σS = T
We want soundness and completeness as properties
Recursive definition of the unify()
operation; unify, in the context of constraint-based type inference, is the operation of solving a system of equations.
unify(C)
will produce the principal unifier, the best one.
A problem arises when we use polymorphic functions: applying them to data, they get specialized.
Let-polymorphism uses type reconstruction as above, and generalizes it to provide a simple form of polymorphism.
id
as ∀X. X → X
take 1 $ qsort
will be worst case O(n^2)
take 1 $ msort
will be worst case O(n) because it will lazily only look at the first element of each subproblem, and O(m log n) for take m
since we just need to find replacements for the element we removed from the possible smallest elements, one at each height per iteration
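A concrete sketch of why laziness helps here, using a standard top-down mergesort (msort and smallest are illustrative names); take 1 only forces the head of each merge, never the full sorted list:

```haskell
-- A standard top-down mergesort.
msort :: Ord a => [a] -> [a]
msort []  = []
msort [x] = [x]
msort xs  = merge (msort as) (msort bs)
  where
    (as, bs) = splitAt (length xs `div` 2) xs
    merge [] ys = ys
    merge xs' [] = xs'
    merge (x:xs') (y:ys)
      | x <= y    = x : merge xs' (y:ys)
      | otherwise = y : merge (x:xs') ys

-- Laziness: this forces only one element from each merge level.
smallest :: Ord a => [a] -> [a]
smallest = take 1 . msort
```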
foldr c b [] = b
foldr c b (x:xs) = c x (foldr c b xs)
looking at space complexity:
foldr (+) 0 [1,2,3,...,n]
1 + (2 + (3 + (... + (n + 0))))
O(n) space
but foldl is tail recursive (using tail call optimization) in an eager language:
foldl (+) 0 [1,2,3,...,n]
((((0 + 1) + 2) + 3) + ... + n)
O(1) space in an eager language
O(n) space in lazy
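The standard fix in lazy Haskell is the strict left fold from Data.List, which forces the accumulator at each step and recovers the O(1)-space behaviour of eager tail recursion:

```haskell
import Data.List (foldl')

-- foldl builds a chain of thunks ((((0+1)+2)+3)+...) in a lazy language;
-- foldl' evaluates the accumulator at every step, so no thunks pile up.
sumStrict :: [Int] -> Int
sumStrict = foldl' (+) 0
```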
Type classes in Haskell offer a controlled approach to "ad hoc overloading":
we want one particular name to mean different things in different contexts.
Eq
instances are types with the notion of equality, via the `==`
and `/=`
operators.
The deriving
keyword gives you default behaviour.
data Btree a =
Empty
 Node (Btree a) a (Btree a)
deriving Eq
This is the obvious structural equality.
We can also define non-default equality by explicitly defining our own equality.
data First = Pair Int Int
instance Eq First where
  (Pair x _) == (Pair y _) = (x == y)
(Pair 1 3) == (Pair 2 3)
> False
(Pair 1 3) == (Pair 1 4)
> True
Ord
Inherits from Eq, specifies <
, >
, <=
, >=
, default defn of min
and max
, compare
msort :: (Ord a) => [a] -> [a]
can be read as: msort has type list-to-list, s.t. a is an instance of Ord
more generally:
fcnname :: constraints => type signature
Show
specifies the method show :: a -> String
and read :: String -> a
which is a simple parser (crazy cool)
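For example, read really does parse, driven by the expected result type (the annotation picks the instance's parser):

```haskell
-- read parses a String back into a value of the annotated type.
parsedList :: [Int]
parsedList = read "[1,2,3]"

-- show renders a value to a String.
rendered :: String
rendered = show (1 :: Int, True)
```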
Simply a dictionary of operators to their function signatures
Code defining Num:
class Num a where
  (+), (*) :: a -> a -> a
  negate :: a -> a
Then under the hood, the compiler conceptually creates the following:
data NumDict a
  = MkND (a -> a -> a)
         (a -> a -> a)
         (a -> a)
plus  (MkND p _ _) = p
times (MkND _ t _) = t
neg   (MkND _ _ n) = n
Again, it's really just a hidden dictionary.
Haskell's purity, higher-order functions, and parameterized algebraic data types let us implement polymorphism at a "higher level" than any other language. Type classes allow us to describe the behavior of types. Again, in the previous section we have type classes like Eq
that describe types that can perform equality checks. Here are more powerful type classes.
Monoids generalize operations like +
on numbers and []
with ++
on lists (more generalizable, so more useful):
class Monoid a where
  mempty :: a
  mappend :: a -> a -> a
  mconcat :: [a] -> a
  mconcat = foldr mappend mempty
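Lists are the canonical instance: mempty is [] and mappend is (++), so mconcat folds a list of lists down with (++) (this instance ships with the Prelude; shown here as usage):

```haskell
-- mconcat [[1,2],[3],[4,5]] = [1,2] ++ ([3] ++ ([4,5] ++ []))
joined :: [Int]
joined = mconcat [[1, 2], [3], [4, 5]]
```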
The Functor class includes Maybe
, and can be used for lists; it can represent higher-order concepts like these. Broadly, functors are things that can be mapped over, and this is reflected by the definition, which simply provides an interface for fmap:
class Functor f where
  fmap :: (a -> b) -> f a -> f b
-- notice there is no default implementation of fmap
-- also notice that f is not a concrete type that holds a value,
-- but rather a type constructor that takes one type as the parameter
instance Functor [] where
  fmap = map
-- the type signature for map is:
map :: (a -> b) -> [a] -> [b]
-- so the above instance for lists is correct, where:
f = []
instance Functor Maybe where
  fmap _ Nothing = Nothing
  fmap g (Just a) = Just (g a) -- unwrap and wrap again in another Just
Concrete types have kind *
; think of a concrete type as a type constant, a type constructor with 0 arguments. Functor instances have kind * -> *
; (,)
and (->)
have kind * -> * -> *
In class example:
instance Functor ((->) r) where
  fmap = (.)
now we have: f makes an r -> ?
from above,
fmap :: (a -> b) -> f a -> f b
so when f = ((->) r),
fmap :: (a -> b) -> (r -> a) -> (r -> b)
which is exactly function composition.
Another "more useful" example:
instance Functor ((,) w) where
  fmap f (w, a) = (w, f a)
Still kind of weird and limited, since we only have one-argument functors. If you want fmap to apply a function to the other argument, you can simply wrap the type in some newtype
that flips the arguments and overrides the fmap
definition to unswap accordingly.
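A sketch of that newtype trick for pairs (Flip is a hypothetical name, not a standard library type): fmap now reaches the first component instead of the second.

```haskell
-- Flip swaps the pair's type arguments, so the "mapped" slot is the first one.
newtype Flip b a = Flip { unFlip :: (a, b) }

instance Functor (Flip b) where
  fmap f (Flip (a, b)) = Flip (f a, b)
```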
Functor laws:
fmap id = id
fmap (g . h) = fmap g . fmap h
class Functor f => Applicative f where
  pure :: a -> f a
  (<*>) :: f (a -> b) -> f a -> f b
<*>
is pronounced "applied" (maybe). Compare ($) :: (a -> b) -> a -> b
: applied
is very close to the actual application operator.
Maybe
, []
, ((->) r)
and ((,) w)
are all instances of Applicative
(all of kind * -> *)
Looking at an example of Maybe
:
instance Applicative Maybe where
  pure = Just -- implementing the Applicative interface
  Nothing <*> _ = Nothing
  _ <*> Nothing = Nothing
  (Just f) <*> (Just y) = Just (f y)
fmap (+) (Just 2) <*> (Just 3)
> Just 5
(+) <$> (Just 2) <*> (Just 3)
> Just 5
pure (+) <*> (Just 2) <*> (Just 3)
> Just 5
So notice we're "lifting" things into Maybe
, kind of just general definitions of operators that let us perform optional things within the Maybe
type
pure
lifts a function up, making it a Maybe
-compatible function: we write pure f <*> x
instead of f <$> x
Again, monads are more powerful than applicatives:
class Applicative m => Monad m where
  (=<<) :: (a -> m b) -> m a -> m b -- produces a value in the context m, from a value a in context m
  (>>=) :: m a -> (a -> m b) -> m b -- reversed arguments
  (>>) :: m a -> m b -> m b
  m >> n = m >>= \_ -> n
  fail :: String -> m a
  fail s = error s
looking at IO example:
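A minimal sketch of IO as a monad (greeting and greet are made-up names): >>= sequences the actions and passes the result of one to the next, and do notation is the same program desugared.

```haskell
-- A pure helper, separated out so the logic is testable.
greeting :: String -> String
greeting name = "Hello, " ++ name

-- Bind style: getLine :: IO String feeds its result to the continuation.
greet :: IO ()
greet = getLine >>= \name -> putStrLn (greeting name)

-- Equivalent do-notation sugar.
greetDo :: IO ()
greetDo = do
  name <- getLine
  putStrLn (greeting name)
```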
There are varieties of polymorphism; we studied let-polymorphism.
Γ ⊢ [x ↦ t1] t2 : T2
-------------------------- (T-LetPoly, simple)
Γ ⊢ let x = t1 in t2 : T2
But we can pass garbage into the value for x and if it's not used in t2, it won't be type checked, so we need to add the extra premise to check this:
Γ ⊢ [x ↦ t1] t2 : T2    Γ ⊢ t1 : T1
------------------------------------
Γ ⊢ let x = t1 in t2 : T2
Let-polymorphism only allows top-level let-bindings; it "disallows functions that take polymorphic values as arguments"?
System F introduces type quantifiers. (λX.t)[T] → [X ↦ T]t
is a type application. The pair of a type application and a type abstraction forms a redex analogous to normal app-abs pairs.
id id :: ∀X. X → X
We can't type λx.x x
in the simply typed calculus, but we can in System F.
Type checking is still simple structural recursion, so the progress and preservation theorems apply.
We still need to encode booleans and integers to make it useful. These encodings let us leverage the structural properties of our simple language definition while making it expressive.
Again, a boolean is λx.λy.x :: X → X → X
. This can be expressed instead as ∀X. X → X → X
in System F.
Natural numbers consume a successor and a zero: Nat :: ∀X. (X → X) → X → X
, where X → X
is the type of the successor.
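The System F type of naturals can be written down in Haskell with RankNTypes (a sketch; zero, suc, toInt are illustrative names), and reading a numeral back is just instantiating X at Int:

```haskell
{-# LANGUAGE RankNTypes #-}

-- Nat = ∀X. (X -> X) -> X -> X: a numeral consumes a successor and a zero.
type Nat = forall x. (x -> x) -> x -> x

zero :: Nat
zero _ z = z

-- The successor applies s one more time.
suc :: Nat -> Nat
suc n s z = s (n s z)

-- Instantiate X at Int to convert back to an ordinary number.
toInt :: Nat -> Int
toInt n = n (+ 1) 0
```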