## Grade Breakdown

Test | Percentage
---|---
Exam 1 | 15%
Exam 2 | 15%
Exam 3 | 15%
Exam 4 (final) | 15%
Lab assignments* | 40%
Extra credit homework | up to 4% extra credit

*The laboratory project assignments consist of four programming projects, each counting 10% of the final grade.

## Exam Calendar

Exam | Date | Time
---|---|---
Exam 1 | 2/2 | regular class time
Exam 2 | 2/28 | regular class time
Exam 3 | 4/4 | regular class time
Final | 5/4 | 7:30AM-9:30AM

## Letter Grades

Range | Grade | Range | Grade | Range | Grade | Range | Grade | Range | Grade
---|---|---|---|---|---|---|---|---|---
94-100% | A | 87-89% | B+ | 77-79% | C+ | 67-69% | D+ | 0-59% | F
90-93% | A- | 83-86% | B | 73-76% | C | 63-66% | D | |
80-82% | B- | | | 70-72% | C- | | | 60-62% | D-

## Exam 1 Study Guide

- Textbook chapters 1, 2 (2.1-2.9), and 3 (3.1-3.9).
- JVM Specification SE 8: sections 1.1, 1.2, 2.1-2.7, 2.11, 4.1-4.10, and 6.1-6.5.
- Lecture notes.

The 2005 exam is available here.

The 2007 exam is available here.

The knowledge and techniques you should master include (but are **not** necessarily limited to) the following:

- The structure of a compiler: compiler phases and passes
- The analysis-synthesis model
- Definition of a context-free grammar: terminals, nonterminals, productions, start symbols
- Derivations and sentential forms
- Parse trees
- Ambiguity problem, operator associativity, and operator precedence
- Syntax-directed translation: attributes and semantic rules
- Translation schemes with semantic actions
- Recursive descent (predictive) parsing
- Using FIRST to write a recursive descent parser
- Left factoring and left-recursion elimination
- Handling identifiers and keywords with symbol tables
- Abstract stack machines
- The basics of the JVM
- Interaction between the lexical analyzer and parser
- Definition of tokens, token attributes, patterns, lexemes, alphabets, strings, and languages
- Operations on languages
- Regular expressions and notational shorthand
- Regular definitions
- Transition diagrams: how to construct and code a scanner based on regular definitions
- The Lex/Flex specification
- Definition of NFA, DFA, transition graph, transition table
- Thompson's construction algorithm
- NFA simulation with ε-closure and move
- Subset construction algorithm
- Minimizing the number of states of a DFA using the partition algorithm
- Converting a regular expression into a DFA directly
- NFA/DFA time-space tradeoffs
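
Several of the topics above (FIRST sets, predictive parsing, operator precedence and associativity) come together in a recursive descent parser. Below is a minimal sketch in Python for a toy arithmetic grammar; the grammar, class, and method names are illustrative assumptions, not taken from the course materials. Each nonterminal becomes one function, and the next token's membership in a FIRST set selects the alternative to expand.

```python
import re

# Toy grammar (one level of nesting per precedence level):
#   expr   -> term (('+'|'-') term)*
#   term   -> factor (('*'|'/') factor)*
#   factor -> NUM | '(' expr ')'

def tokenize(s):
    return re.findall(r"\d+|[-+*/()]", s)

class Parser:
    def __init__(self, tokens):
        self.tokens = tokens
        self.pos = 0

    def peek(self):
        return self.tokens[self.pos] if self.pos < len(self.tokens) else None

    def eat(self, expected=None):
        t = self.peek()
        if t is None or (expected is not None and t != expected):
            raise SyntaxError(f"expected {expected!r}, got {t!r}")
        self.pos += 1
        return t

    def expr(self):
        v = self.term()
        while self.peek() in ('+', '-'):      # left-associative additive ops
            op = self.eat()
            v = v + self.term() if op == '+' else v - self.term()
        return v

    def term(self):
        v = self.factor()
        while self.peek() in ('*', '/'):      # higher precedence than +/-
            op = self.eat()
            v = v * self.factor() if op == '*' else v // self.factor()
        return v

    def factor(self):
        if self.peek() == '(':                # FIRST(factor) contains '('
            self.eat('(')
            v = self.expr()
            self.eat(')')
            return v
        return int(self.eat())                # otherwise a NUM token

print(Parser(tokenize("2+3*(4-1)")).expr())   # prints 11
```

Note how the grammar is already left-factored and free of left recursion, which is exactly what makes this one-token-lookahead strategy work.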

## Exam 2 Study Guide

- Textbook chapter 4, excluding (1st ed.) section 4.6 "Operator Precedence Parsing" and the (1st ed.) section 4.7 parts "efficient construction of LALR parsing tables" and "compaction of LR parsing tables".
- Lecture notes.

The 2005 exam is available here.

The 2007 exam is available here.

The knowledge and techniques you should master include (but are **not** necessarily limited to) the following:

- Position of the parser in the front-end of the compiler model
- Importance of error handling, the viable prefix property, and error recovery strategies
- Grammars, derivations, parse trees, and the Chomsky hierarchy
- General and immediate left-recursion elimination
- Left factoring
- FIRST and FOLLOW
- Constructing an LL(1) parse table
- LL parsing and error recovery
- Shift-reduce parsing
- Constructing the set of LR(0) items using closure and goto
- Constructing an SLR parse table
- Constructing the set of LR(1) items using closure and goto (for canonical LR(1) and LALR(1) parse tables)
- Constructing canonical LR(1) and LALR(1) parse tables
- LR parsing and error recovery
- Subset relationship between LL(1), SLR(1), LR(1), and LALR(1) grammars
- Proving that a grammar is LL(1), SLR(1), LR(1), or LALR(1)
- Resolving shift/reduce conflicts with operator precedence and associativity
- The Yacc specification of a grammar
- Combining Yacc/Bison with Lex/Flex
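
FIRST (and FOLLOW) sets underlie most of the constructions listed above, from LL(1) tables to SLR lookaheads. The sketch below computes FIRST sets by fixed-point iteration; the function and symbol names are illustrative assumptions, with `ε` marking the empty string.

```python
EPS = 'ε'

def first_sets(grammar, nonterminals):
    """grammar: dict mapping nonterminal -> list of productions (tuples of symbols)."""
    first = {A: set() for A in nonterminals}
    changed = True
    while changed:                      # iterate until no set grows
        changed = False
        for A, prods in grammar.items():
            for prod in prods:
                nullable_prefix = True
                for X in prod:
                    # A terminal contributes itself; a nonterminal its FIRST set.
                    f = first[X] if X in nonterminals else {X}
                    add = (f - {EPS}) - first[A]
                    if add:
                        first[A] |= add
                        changed = True
                    if EPS not in f:    # stop unless X can derive ε
                        nullable_prefix = False
                        break
                if nullable_prefix and EPS not in first[A]:
                    first[A].add(EPS)
                    changed = True
    return first

# The classic left-factored expression grammar:
#   E -> T E';  E' -> + T E' | ε;  T -> F T';  T' -> * F T' | ε;  F -> ( E ) | id
g = {
    'E':  [('T', "E'")],
    "E'": [('+', 'T', "E'"), (EPS,)],
    'T':  [('F', "T'")],
    "T'": [('*', 'F', "T'"), (EPS,)],
    'F':  [('(', 'E', ')'), ('id',)],
}
first = first_sets(g, set(g))
print(first['E'])   # FIRST(E) = {'(', 'id'}
```

The same fixed-point style extends to FOLLOW sets, and the resulting sets fill the LL(1) parse table: a multiply-defined table entry is exactly an LL(1) conflict.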

## Exam 3 Study Guide

- Textbook sections (2nd ed.): 5.1 to 5.4, 6.1 to 6.2, 6.3.3 to 6.3.6, 6.4, 6.6, 6.7, 6.9, and 7.1 to 7.3.
- Lecture notes.

The 2005 exam is available here.

The 2007 exam is available here.

The knowledge and techniques you should master include (but are **not** necessarily limited to) the following:

- Syntax-directed definitions.
- Attribute grammars.
- Attribute dependence graphs.
- Parse tree evaluation methods for attributes.
- S- and L-attributed grammars.
- Top-down parsing with S- and L-attributed grammars.
- Bottom-up parsing with S- and L-attributed grammars.
- Translation schemes.
- Using marker nonterminals in translation schemes.
- Eliminating left recursion from translation schemes.
- Static checking versus dynamic checking.
- Type checks, flow-of-control checks, uniqueness checks, name-related checks.
- Type expressions and type representations.
- Structural equivalence versus name equivalence of types.
- Using Post systems to define type rules (don't need to memorize specific type rules).
- Syntax-directed definitions for constructing type graphs.
- Syntax-directed definitions for type checking.
- Syntax-directed definitions for type coercion.
- Syntax-directed definitions to compute lvalue/rvalue properties.
- Intermediate code representations, pros and cons.
- Symbol tables and data structures to support scoping rules.
- Three address code generation of expressions in scope.
- Three address code generation of array indexing.
- Three address code generation of function calls.
- Three address code generation of relational operators.
- Backpatch lists to support code generation of short-circuit operators.
- Activation trees
- Control stacks
- Activation records (subroutine frames)
- Allocation of activation records
- Control links
- Access links
- Calling sequences and parameter passing
- Scope rules and bindings
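
Three-address code generation for expressions can be sketched as a small syntax-directed translation. The example below assumes a hypothetical tuple-based AST (a node is either a variable name or `(op, left, right)`); each operator node receives a fresh temporary, mirroring the `E.place`/`E.code` attributes of a syntax-directed definition.

```python
import itertools

def gen_tac(node, code, fresh):
    """Return the 'place' holding node's value, appending instructions to code."""
    if isinstance(node, str):
        return node                       # a name's place is the name itself
    op, left, right = node
    l = gen_tac(left, code, fresh)        # left operand first
    r = gen_tac(right, code, fresh)
    t = f"t{next(fresh)}"                 # fresh temporary for this subexpression
    code.append(f"{t} = {l} {op} {r}")    # one three-address instruction
    return t

code = []
place = gen_tac(('+', 'a', ('*', 'b', 'c')), code, itertools.count(1))
print("\n".join(code))
# t1 = b * c
# t2 = a + t1
```

Array indexing, function calls, and short-circuit booleans follow the same pattern, except that boolean operators emit jumps whose targets are filled in later via backpatch lists.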

## Final Exam Study Guide

- Textbook sections (2nd ed.): 8.1 to 8.4, 8.6 to 8.8, 9.1 to 9.2, and 9.6.
- Lecture notes.

The 2005 exam is available here.

The 2007 exam is available here, with solutions.

The knowledge and techniques you should master include (but are **not** necessarily limited to) the following:

- Target code generation
- Instruction cost issues
- Addressing modes
- Instruction selection
- Basic blocks
- Control flow graphs
- Next-use information
- A simple code generator based on getreg()
- Register allocation and assignment
- Register allocation with graph coloring
- Peephole optimization
- Constant folding
- Constant combining
- Strength reduction
- Constant propagation
- Common subexpression elimination
- Global common subexpression elimination
- Backward copy propagation
- Dead code elimination
- Forward copy propagation
- Code motion
- Loop strength reduction
- Induction variable elimination
- Dominators and dominator trees
- Loop detection with depth first spanning trees
- Natural loops
- Reducible and irreducible loops
- Preheaders
- Global data flow analysis
- Dataflow equations for reaching definitions (gen, kill, in, out)
- Dataflow equations for live variable analysis (use, def, in, out)
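
The live-variable equations listed last can be solved by simple iteration to a fixed point. The sketch below uses hypothetical block names on a three-block CFG; it implements the backward equations `in[B] = use[B] ∪ (out[B] − def[B])` and `out[B] = ∪ in[S]` over successors `S`.

```python
def liveness(blocks, succ, use, defs):
    """Iterate the live-variable dataflow equations until nothing changes."""
    live_in  = {b: set() for b in blocks}
    live_out = {b: set() for b in blocks}
    changed = True
    while changed:
        changed = False
        for b in blocks:
            # out[B] is the union of in[S] over all successors S of B
            out = set().union(*(live_in[s] for s in succ[b])) if succ[b] else set()
            # in[B] = use[B] ∪ (out[B] − def[B])
            inn = use[b] | (out - defs[b])
            if out != live_out[b] or inn != live_in[b]:
                live_out[b], live_in[b] = out, inn
                changed = True
    return live_in, live_out

# Straight-line CFG:  B1: a = 1   B2: b = a + 1   B3: return b
blocks = ['B1', 'B2', 'B3']
succ = {'B1': ['B2'], 'B2': ['B3'], 'B3': []}
use  = {'B1': set(),  'B2': {'a'}, 'B3': {'b'}}
defs = {'B1': {'a'},  'B2': {'b'}, 'B3': set()}
live_in, live_out = liveness(blocks, succ, use, defs)
print(live_out['B1'])   # {'a'}: a is live on exit from B1
```

Reaching definitions use the same iteration scheme in the forward direction, with `gen`/`kill` in place of `use`/`def`; liveness results like these feed register allocation via the interference graph.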