CS442: Programming Languages Principles



Untyped Boolean Expressions

Untyped Lambda Calculus



Haskell and Laziness

Typeclasses in Haskell


More fundamentally, we first learn about Functors and Applicatives: Monads are more expressive Applicative Functors, which in turn are more expressive Functors.

1 Intro

Expressivity, meaning, guarantees, implementation

parentheses-based syntax facilitates metaprogramming

A truly minimal subset of Racket: lambda abstraction & function application

Mutation in Racket

mutation is changing the value bound to a name, or stored in a data structure (think variable)

More notes on Racket

2 Logic Review & Arithmetic

Propositions are variables combined with logical connectives

Γ |- α means “From the set of propositions Γ, we can prove the proposition α.” Inference rules have their own syntax; see notes.

Untyped Boolean Expressions

Can be described with production rules as a grammar: a term is true, false, or a conditional

Prove a property P by structural induction on the parse tree

We need to give meaning to our language:

Untyped Arithmetic Expressions

In our definition of this, a number is either 0 or a successor of a number

  1. 0
  2. successor t
  3. predecessor t
  4. iszero t

In our definition, there are normal forms that aren't values (stuck terms), e.g. succ true
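The grammar and the stuck-term observation can be sketched as a one-step evaluator (a sketch; constructor and function names are ours, and the boolean terms are included alongside the numeric ones):

```haskell
-- Terms of the untyped arithmetic language (constructor names are ours)
data Term = TTrue | TFalse | Zero
          | Succ Term | Pred Term | IsZero Term
  deriving (Eq, Show)

-- numeric values: 0, succ 0, succ (succ 0), ...
isNumeric :: Term -> Bool
isNumeric Zero     = True
isNumeric (Succ t) = isNumeric t
isNumeric _        = False

-- one small step of evaluation; Nothing means "no rule applies",
-- which covers both values and stuck terms like Succ TTrue
step :: Term -> Maybe Term
step (Pred Zero)               = Just Zero
step (IsZero Zero)             = Just TTrue
step (Pred (Succ n))   | isNumeric n = Just n
step (IsZero (Succ n)) | isNumeric n = Just TFalse
step (Succ t)   = Succ   <$> step t
step (Pred t)   = Pred   <$> step t
step (IsZero t) = IsZero <$> step t
step _          = Nothing
```

Here `step (Succ TTrue)` is `Nothing` even though `Succ TTrue` is not a value: a stuck term.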

3 Untyped Lambda Calculus

OCaml is, in a sense, lots of syntactic sugar atop the lambda calculus

For first-order logic (forall, exists), there is no general procedure for deciding statements. Church proved this, needing first to define what a procedure is.

The lambda calculus has 3 grammar rules (and the grammar is ambiguous)

note: the concrete syntax tree is just more implementation detail

formalizing abstract lambda definition is tricky, out of scope

truth is a semantic notion, proof is a syntactic notion, and we want to manipulate syntax to prove things.

Abstractions extend as far right as possible

Purely, functions only have one parameter, so a function of multiple parameters must return a function of n - 1 parameters: curried functions
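Haskell works exactly this way; every function takes one argument. A small sketch (names are ours):

```haskell
-- add is really a one-argument function returning a function:
-- add :: Int -> (Int -> Int)
add :: Int -> Int -> Int
add x y = x + y

-- partial application: supplying only the first argument
-- yields the function of the remaining n - 1 parameters
inc :: Int -> Int
inc = add 1

main :: IO ()
main = print (inc 41)  -- 42
```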

In λx.t, we call x a binder; its scope does not extend beyond t

In λx.y, y is free because it is not bound

We can recursively define the set of bound variables of an expression because the grammar is recursive:

  1. BV[x] = ∅
  2. BV[ λx.t ] = BV[t] ∪ {x}
  3. BV[ t1 t2 ] = BV[t1] ∪ BV[t2]

notice this is just our three grammar rules

A variable can be both free and bound if there are multiple occurrences in different scopes
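The recursive definitions translate directly into Haskell (a sketch; the term datatype and names are ours). In (λx.x) x, the variable x is both bound and free:

```haskell
import Data.List (union, delete, nub)

-- lambda terms, mirroring the three grammar rules
data Term = Var String | Lam String Term | App Term Term

-- bound variables, one case per grammar rule
bv :: Term -> [String]
bv (Var _)     = []
bv (Lam x t)   = nub (x : bv t)
bv (App t1 t2) = bv t1 `union` bv t2

-- free variables, for contrast: a binder removes its own variable
fv :: Term -> [String]
fv (Var x)     = [x]
fv (Lam x t)   = delete x (fv t)
fv (App t1 t2) = fv t1 `union` fv t2
```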

Substitution rule gotchas

  1. we need a case that doesn't touch bound variables
  2. if the substituted term s contains free variables, they must not be captured by a binder after substitution

This definition of substitution gives us alpha equivalence, =a

β-reduction is (λx.t1) t2 ->β [x |-> t2] t1; notice it just says that passing an argument to a function is substitution into its body

Two redexes: consider, e.g., (λx.x) ((λy.y) z), where redex 1 is the inner (λy.y) z and redex 2 is the outer application.

If redex 1 first: (λx.x) ((λy.y) z) → (λx.x) z → z

If 2: (λx.x) ((λy.y) z) → (λy.y) z → z

But they're the same: both paths reach the normal form z

The rule for full beta reduction says: perform any beta reduction anywhere in the expression you want

Confluence holds, but not every term has a normal form:

(λx.x x)(λx.x x) one-step beta reduces to itself (substitute (λx.x x) for x in the body x x)

(λz.w)((λy.y y)(λy.y y)) can reduce forever if we first try to reduce the argument, or reduces in one step (to w) if we reduce the outer application.

The Standardization Theorem says that repeatedly reducing the leftmost, outermost redex will find a normal form if one exists. This is called normal order reduction, and it is deterministic, unlike full beta reduction in general.

In the example above, the outer application is the outermost redex (in this case also the leftmost)

Call by name is like normal order, but does not reduce in the body of abstractions

Haskell uses a lazy (memoized) version of call-by-name: call-by-need

Call-by-value is strict and eager: leftmost, innermost

Programming in Lambda Calculus

We use symbols to represent familiar notions.

To encode conditionals, if B then T else F: we say a (possibly complex) term B is true if it simplifies to λx.λy.x, and false if it simplifies to λx.λy.y
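These Church booleans can be transcribed into Haskell directly (a sketch; the names true, false, cond are ours):

```haskell
-- Church booleans as plain functions
true, false :: a -> a -> a
true  x _ = x   -- λx.λy.x
false _ y = y   -- λx.λy.y

-- "if b then t else f" is just applying b to the two branches
cond :: (a -> a -> a) -> a -> a -> a
cond b t f = b t f

main :: IO ()
main = putStrLn (cond true "then-branch" "else-branch")
```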

cons consumes two arguments, f and r, and produces a function consuming a message. The produced function represents a list: based on a sort of control boolean that we pass into it, it returns either the first or the rest of the list, recursively defining a list as we know it.
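The cons described above, sketched in Haskell (the selector functions first and rest are our names for the two "messages"):

```haskell
-- cons builds a function waiting for a "message" (a selector)
cons :: a -> b -> ((a -> b -> c) -> c)
cons f r = \msg -> msg f r

-- the two messages select the first element or the rest
first :: ((a -> b -> a) -> a) -> a
first p = p (\f _ -> f)

rest :: ((a -> b -> b) -> b) -> b
rest p = p (\_ r -> r)

main :: IO ()
main = print (first (cons (1 :: Int) (2 :: Int)),
              rest  (cons (1 :: Int) (2 :: Int)))  -- (1,2)
```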

Natural Numbers p 60
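Church numerals, sketched in Haskell (a sketch with our names; a numeral n applies a successor n times to a zero):

```haskell
-- Church numerals: n means "apply s to z, n times"
zero, one, two :: (a -> a) -> a -> a
zero _ z = z
one  s z = s z
two  s z = s (s z)

-- successor wraps one more application of s
suc :: ((a -> a) -> a -> a) -> (a -> a) -> a -> a
suc n s z = s (n s z)

-- convert back to Int by using (+ 1) as successor and 0 as zero
toInt :: ((Int -> Int) -> Int -> Int) -> Int
toInt n = n (+ 1) 0

main :: IO ()
main = print (toInt (suc two))  -- 3
```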



General Recursion p 64

If x = f(x), then x is a fixed point of f: applying f to it changes nothing

for pfact, we need to compute the fixed point (fact) of pfact

suppose fact = Y pfact; then

fact = pfact fact
Y pfact = pfact (Y pfact)

and in general, for all f:

Y f = f (Y f)

Can't substitute pfact into r because the function signature would be wrong in the recursive call

we end up with this cool definition of pfact' (protofactorial), which, when applied to itself, produces pfact' pfact' as a recursive call

finally, rearranging
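Laziness lets Haskell express the fixed-point equation Y f = f (Y f) literally (a sketch; this fix is the non-self-applicative version, and the names are ours):

```haskell
-- fix satisfies exactly Y f = f (Y f); laziness (normal-order-like
-- evaluation) keeps the recursion from unfolding forever
fix :: (a -> a) -> a
fix f = f (fix f)

-- protofactorial: the recursive call is abstracted into the parameter r
pfact :: (Int -> Int) -> Int -> Int
pfact r n = if n == 0 then 1 else n * r (n - 1)

-- the factorial is the fixed point of pfact
fact :: Int -> Int
fact = fix pfact

main :: IO ()
main = print (fact 5)  -- 120
```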

The Y combinator doesn't work with call by value:

Y = λf.(λr.f (r r)) (λr.f (r r))

Call by value FORCES the argument (r r) to evaluate first, which loops forever; normal order reduction DOES work though.

We have to delay this evaluation (the standard call-by-value fix is the Z combinator, which η-expands r r into λv.r r v).

Can we eliminate variables altogether? Taking out λ-binders means re-introducing combinators (e.g. S, K, I), which still give us a way to represent functions.

de Bruijn (pronounced roughly "brown") indices p 74

replace variables with numbers based on the nesting DEPTH of their binder
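A sketch of the representation (the datatype and names are ours; we use 0-indexed distances here, and conventions vary): a variable becomes the number of binders between its occurrence and its own binder.

```haskell
-- nameless (de Bruijn) terms
data DB = DVar Int | DLam DB | DApp DB DB
  deriving (Eq, Show)

-- λx.λy.x becomes λ.λ.1 : x's binder is one lambda outward
k :: DB
k = DLam (DLam (DVar 1))

-- λx.x becomes λ.0 : x's binder is the nearest lambda
i :: DB
i = DLam (DVar 0)
```

α-equivalent terms now have identical representations, which is the point.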

4 OCaml

Meta Language, built for theorem provers: it was meant to express a proof and mechanically step through it to verify it

5 Types

Look at the parse tree of the expression, rather than its dynamic evaluation

6 Extension to Lambda Calculus

Type rules for sum types:

General Recursion

pfact r n = if n == 0 then 1
            else n * r r (n - 1)

7 Type Inference

So let's look at typing currently-untyped terms.

λx:A.λy:B.x has an obvious return type, A (its full type is A -> B -> A)

λx.λy.y x is not directly typable without type variables; we need to find a substitution before we can type the expression (in other words, there are many possible types, e.g. A -> (A -> B) -> B)

Algorithms work by structural recursion on the term t.

  1. recursively find substitutions that type the subterms, then combine them according to the relationship between subterms.
  2. accumulate constraints (equations involving type variables), then solve the system of equations. In practice this is easier to prove correct, optimize, and extend.

Algorithm W

Consumes a context and a term, and produces the inferred type and the set of substitutions

given an untyped term t, annotate each abstraction variable x with a fresh type variable X, and record the association x:X in the context Γ

We need three inference rules to do this, one for each possible way t can be formed

  1. if t is just a variable, all we can do is look it up in Γ
  2. abstraction: we want to show that it has some type of the form X -> T (X is a fresh type variable, T is some type). Doing so adds nothing to our substitution map σ; that only happens in application (where substitution intuitively happens)
  3. application: we want to show that the application has some type T, i.e. Γ |- t1 t2 : T. As premises we need that t1 has a function type T' -> T under some substitution σ1, and ALSO that t2 has type T' under some other substitution σ2, so that the expression is well typed.

Constraint-based type inference

First accumulates type constraints and then finds substitutions that satisfy them.

  1. for variables, no constraints are generated
  2. for abstractions, the annotation goes in the context
  3. for application, things are simpler because we don't need to build the substitution as we go. Instead, we throw everything into the constraint set C rather than σ, accumulating all type relations as equations.

Now we need to solve the system of equations. A solution for a problem (Γ, t, S, C) is a pair (σ, T) such that
σ unifies C and σS = T

We want soundness and completeness as properties

Solving Constraint Set

Recursive definition of the unify() operation, where unify, in the context of constraint-based type inference, is the operation that solves the system of equations
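A minimal sketch of such a unify over simple types, with only type variables and arrow types (the datatype and helper names are ours, and real implementations use more efficient substitution representations):

```haskell
data Type = TVar String | TArrow Type Type
  deriving (Eq, Show)

type Subst = [(String, Type)]

-- replace one type variable by a type
subst :: String -> Type -> Type -> Type
subst x t (TVar y)     = if x == y then t else TVar y
subst x t (TArrow a b) = TArrow (subst x t a) (subst x t b)

-- apply a whole substitution, earliest binding first
applyAll :: Subst -> Type -> Type
applyAll s t = foldl (\ty (x, u) -> subst x u ty) t s

occurs :: String -> Type -> Bool
occurs x (TVar y)     = x == y
occurs x (TArrow a b) = occurs x a || occurs x b

-- solve a constraint set; Nothing means it is unsatisfiable
unify :: [(Type, Type)] -> Maybe Subst
unify [] = Just []
unify ((s, t) : cs)
  | s == t = unify cs                 -- trivial constraint, drop it
unify ((TVar x, t) : cs)
  | occurs x t = Nothing              -- occurs check: no infinite types
  | otherwise  = do
      rest <- unify [ (subst x t a, subst x t b) | (a, b) <- cs ]
      Just ((x, applyAll rest t) : rest)
unify ((t, TVar x) : cs) = unify ((TVar x, t) : cs)
unify ((TArrow a1 b1, TArrow a2 b2) : cs) =
  unify ((a1, a2) : (b1, b2) : cs)
```

For example, unifying A -> B with C -> C yields a substitution sending both A and B to C.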


A problem arises when we use polymorphic functions: applying them to data specializes them.

Let-polymorphism uses type reconstruction as above, and generalizes it to provide a simple form of polymorphism.

8 Haskell and Laziness

take 1 $ qsort will be worst case O(n^2)

take 1 $ msort will be worst case O(n), because it lazily looks only at the first element of each subproblem; take m costs O(n + m log n), since each removed minimum only needs replacement comparisons, one at each height of the merge tree, per iteration
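A standard lazy mergesort illustrates this (a sketch; the split strategy is ours): take 1 forces only enough comparisons to surface the minimum.

```haskell
msort :: Ord a => [a] -> [a]
msort []  = []
msort [x] = [x]
msort xs  = merge (msort l) (msort r)
  where
    (l, r) = splitAt (length xs `div` 2) xs
    -- merge produces its head after one comparison per level,
    -- so "take 1" never sorts the tails
    merge [] ys = ys
    merge xs' [] = xs'
    merge (x:xs') (y:ys')
      | x <= y    = x : merge xs' (y:ys')
      | otherwise = y : merge (x:xs') ys'

main :: IO ()
main = print (take 1 (msort [3, 1, 4, 1, 5, 9, 2, 6 :: Int]))  -- [1]
```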

foldr c b [] = b
foldr c b (x:xs) = c x (foldr c b xs)

looking at space complexity:

foldr (+) 0 [1,2,3,...,n]
1 + (2 + (3 + (... + (n + 0))))
      O(n) space

but foldl is tail recursive (enabling tail call optimization) in an eager language:

foldl (+) 0 [1,2,3,...,n]
((((0 + 1) + 2) + 3) + ... + n)
    O(1) space if eager
    O(n) space if lazy (a chain of unevaluated thunks builds up)
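The standard fix in Haskell is Data.List's foldl', which forces the accumulator at every step and so recovers O(1) space even under lazy evaluation:

```haskell
import Data.List (foldl')

-- strict left fold: each partial sum is evaluated immediately,
-- so no thunk chain accumulates
main :: IO ()
main = print (foldl' (+) 0 [1 .. 1000000 :: Int])  -- 500000500000
```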

9 Types in Haskell

Type classes in Haskell offer a controlled approach to "ad hoc overloading".

we want one particular name to mean different things in different contexts

Some basic classes


Eq is the class of types with a notion of equality

deriving keyword gives you default behaviour.

data Btree a = Leaf
             | Node (Btree a) a (Btree a)
  deriving Eq

This is the obvious structural equality.

We can also define non-default equality by explicitly defining our own equality.

data First = Pair Int Int

instance Eq First where
  (Pair x _) == (Pair y _) = (x == y)

(Pair 1 3) == (Pair 2 3)
> False
(Pair 1 3) == (Pair 1 4)
> True

Ord inherits from Eq; it specifies <, >, <=, >=, compare, and default definitions of min and max

msort :: (Ord a) => [a] -> [a]

can be read as: msort maps a list to a list, such that a is an instance of Ord.
more generally:
fcn-name :: constraints => type signature

Show and Read specify the methods show :: a -> String and read :: String -> a, where read is a simple parser (crazy cool)

Implementaiton of type classes

Simply a dictionary mapping each operator to its implementation

Code defining Num:

class Num a where
  (+), (*) :: a -> a -> a
  negate :: a -> a

Then under the hood, the compiler conceptually creates the following:

data NumDict a
= MkND (a -> a -> a)
       (a -> a -> a)
       (a -> a)
plus  (MkND p _ _) = p
times (MkND _ t _) = t
neg  (MkND _ _ n) = n

Again, it's really just a hidden dictionary.
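To see the dictionary in action, here's a self-contained sketch (intDict and square are our names): an "instance" is just a NumDict value, and overloaded code takes the dictionary as an explicit argument, which is conceptually what the compiler does for a `Num a =>` constraint.

```haskell
data NumDict a = MkND (a -> a -> a) (a -> a -> a) (a -> a)

plus, times :: NumDict a -> a -> a -> a
plus  (MkND p _ _) = p
times (MkND _ t _) = t

neg :: NumDict a -> a -> a
neg (MkND _ _ n) = n

-- the "instance for Int": a record of Int operations
intDict :: NumDict Int
intDict = MkND (+) (*) negate

-- what "square :: Num a => a -> a" compiles to, conceptually
square :: NumDict a -> a -> a
square d x = times d x x

main :: IO ()
main = print (square intDict 7)  -- 49
</imports>
```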

More Useful Type Classes

Haskell's purity, higher-order functions, and parameterized algebraic data types let us implement polymorphism at a "higher level" than most languages. Type classes allow us to describe the behavior of types. Again, in the previous section we saw type classes like Eq that describe types supporting equality checks. Here are more powerful type classes.

  1. A Monoid in math is a set with an identity element and an associative binary operation (one that can be bracketed any way)
class Monoid a where
  mempty :: a
  mappend :: a -> a -> a
  mconcat :: [a] -> a
  mconcat = foldr mappend mempty
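Lists are the classic instance: mempty is [] and mappend is (++). Using the standard library's classes (which, in modern GHC, additionally route Monoid through Semigroup):

```haskell
-- mconcat = foldr mappend mempty, so for lists it concatenates
main :: IO ()
main = do
  print (mconcat [[1], [2, 3], []] :: [Int])  -- [1,2,3]
  print (mconcat ["fold", "r ", "works"])     -- "foldr works"
```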
  2. The Functor class is motivated by the awkwardness of mapping over Maybe, and also applies to lists. Broadly, functors are things that can be mapped over, and this is reflected by the definition, which simply provides an interface for fmap
class Functor f where
  fmap :: (a -> b) -> f a -> f b

{- notice there is no default implementation of fmap -}
{- also notice that f is not a concrete type that holds a value, 
   but rather a type constructor that takes one type as the parameter -}

instance Functor [] where  
    fmap = map  

{- the type signature for map is: -}
map :: ( a -> b ) -> [a] -> [b]

{- so the above instance for lists is correct! where: -}
f = []

instance Functor Maybe where
  fmap _ Nothing = Nothing
  fmap g (Just a) = Just (g a) {- unwrap and wrap again in another Just -}
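Usage is exactly the "less awkward Maybe" the motivation promised:

```haskell
-- fmap applies a function inside the Maybe context,
-- leaving Nothing untouched
main :: IO ()
main = do
  print (fmap (+ 1) (Just (2 :: Int)))       -- Just 3
  print (fmap (+ 1) (Nothing :: Maybe Int))  -- Nothing
```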
  3. Kinds classify type constructors: a Functor instance must have kind * -> *, so constructors taking two type arguments (like (,) or (->)) must be partially applied first

In class example:

instance Functor ((->) r) where
  fmap = (.)

now f a means r -> a
from above,
fmap :: (a -> b) -> f a -> f b
so when f = ((->) r), fmap gets the type

(a -> b) -> (r -> a) -> (r -> b)

which is exactly function composition (.)
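Trying it out (this instance ships with the Prelude, so it works out of the box):

```haskell
-- fmap on functions is composition: fmap g h = g . h
main :: IO ()
main = print (fmap (+ 1) (* 2) (5 :: Int))  -- (+1) ((*2) 5) = 11
```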


Another "more useful" example:

instance Functor ((,) w) where
  fmap f (w,a) = (w, f a)

Still kind of weird and limited, since Functor only maps over the last type argument. If we want to map over the other argument (e.g. the first component of a pair), we can wrap it in a newtype that flips the type arguments and define fmap to unswap accordingly.
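A sketch of that newtype trick (Flip and unFlip are our names; base's Data.Bifunctor is the off-the-shelf solution to the same problem):

```haskell
{-# LANGUAGE FlexibleInstances #-}

-- Flip swaps the type arguments, so Flip (,) w a wraps (a, w)
newtype Flip f a b = Flip { unFlip :: f b a }

-- now fmap reaches the FIRST component of the pair
instance Functor (Flip (,) w) where
  fmap g (Flip (a, w)) = Flip (g a, w)

main :: IO ()
main = print (unFlip (fmap (+ 1) (Flip (2 :: Int, "ctx"))))  -- (3,"ctx")
```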

Functor laws:

fmap id = id
fmap (g . h) = fmap g . fmap h
  4. The Applicative class has less power than monads by design
class Functor f => Applicative f where
  pure :: a -> f a
  (<*>) :: f (a -> b) -> f a -> f b

Looking at an example of Maybe:

instance Applicative Maybe where
  pure = Just                       {- implementing Applicative interface -}
  Nothing <*> _ = Nothing
  _ <*> Nothing = Nothing
  (Just f) <*> (Just y) = Just (f y)

fmap (+) (Just 2) <*> (Just 3)
> Just 5
(+) <$> (Just 2) <*> (Just 3)
> Just 5
pure (+) <*> (Just 2) <*> (Just 3)
> Just 5

So notice we're "lifting" things into Maybe: general definitions of the operators let us compute with optional values inside the Maybe type

Revisiting the Monad class

Again, monads are more powerful than applicatives:

class Applicative m => Monad m where
  (=<<) :: (a -> m b) -> m a -> m b {- from a value in context m, produce a new value in context m -}
  (>>=) :: m a -> (a -> m b) -> m b {- the same with arguments reversed; in GHC, (>>=) is the actual class method and (=<<) is a plain function -}
  (>>) :: m a -> m b -> m b
  m >> n = m >>= \_ -> n
  fail :: String -> m a             {- in modern GHC, fail lives in the separate MonadFail class -}
  fail s = error s
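The Maybe monad shows why this is more powerful than Applicative: the next step can depend on the previous result, and failure short-circuits the chain (safeDiv is our name for the example):

```haskell
-- division that fails on zero instead of crashing
safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv x y = Just (x `div` y)

main :: IO ()
main = do
  print (Just 2 >>= safeDiv 10)                  -- Just 5
  print (Just 0 >>= safeDiv 10)                  -- Nothing
  print (Just 2 >>= safeDiv 10 >>= safeDiv 100)  -- Just 20
```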

looking at IO example:
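A small sketch of IO as a Monad (the greeting helper is our name): (>>) sequences two effects, and (>>=) feeds one action's result to the next.

```haskell
-- a pure function for the result we thread through (>>=)
greeting :: Int -> String
greeting n = "the answer is " ++ show n

main :: IO ()
main =
  putStrLn "computing..."
    >> return (6 * 7)
    >>= \n -> putStrLn (greeting n)
```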

Higher-order Polymorphism: System F

There are many varieties of polymorphism; we studied let-polymorphism

  gamma |- [x |-> t1] t2 : T2
------------------------------- T-LetPoly simple
gamma |- let x = t1 in t2 : T2

But we could pass garbage as the value for x: if x is not used in t2, t1 is never type checked, so we need an extra premise to check it:

gamma |- [x |-> t1] t2 : T2    gamma |- t1 : T1
----------------------------------------------- T-LetPoly
         gamma |- let x = t1 in t2 : T2

Let-polymorphism only allows polymorphism at let-bindings; it "disallows functions that take polymorphic values as arguments"

System F introduces quantifiers. (lambda X.t)[T] -> [X |-> T]t is a type application. A type abstraction paired with a type application forms a redex analogous to normal app-abs pairs.


Type checking is still simple structural recursion, so progress and preservation theorems apply.


We still need to encode booleans and integers to make the language useful. These encodings let us keep the structural properties of a small core language definition while remaining expressive.

Again, a boolean is lambda x. lambda y. x :: X -> X -> X. In System F this can instead be typed as forall X . X -> X -> X.

Natural numbers consume a successor and a zero: Nat :: forall X . (X -> X) -> X -> X, where X -> X is the type of the successor.
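These System F types can be transcribed into Haskell with higher-rank types (a sketch; the names are ours, and RankNTypes stands in for System F's explicit quantifiers):

```haskell
{-# LANGUAGE RankNTypes #-}

-- forall X . X -> X -> X
type CBool = forall x. x -> x -> x

ctrue, cfalse :: CBool
ctrue  t _ = t
cfalse _ f = f

-- forall X . (X -> X) -> X -> X
type CNat = forall x. (x -> x) -> x -> x

czero :: CNat
czero _ z = z

csucc :: CNat -> CNat
csucc n s z = s (n s z)

-- instantiate X at Int to read the numeral back
toInt :: CNat -> Int
toInt n = n (+ 1) 0

main :: IO ()
main = print (toInt (csucc (csucc czero)))  -- 2
```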