COMPILER DESIGN LECTURE NOTES (Subject Code: BCS-305)
for Bachelor of Technology in Computer Science and Engineering & Information Technology
Department of Computer Science and Engineering & Information Technology
Veer Surendra Sai University of Technology (Formerly UCE, Burla)
Burla, Sambalpur, Odisha

Lecture notes prepared by:
Prof. D. Chandrasekhar Rao
Prof. Kishore Kumar Sahu
Prof. Pradipta Kumar Das

Lecture-10: Regular expressions, regular grammars, conversion of a regular expression into an NFA.
Ref: Automata Theory, KLP Mishra, N. Chandrasekharan; Automata Theory, AV Aho, JD Ullman

Lecture-11: A language for specifying lexical analyzers, design of a lexical-analyzer generator.
Ref: Principles of Compiler Design, A.V. Aho, R. Sethi, J.D. Ullman

Lecture-12: The role of the parser, syntactic errors and recovery actions.
Ref: Principles of Compiler Design, A.V. Aho, R. Sethi, J.D. Ullman

Lecture-13: Context-free grammars, parse trees, derivations, leftmost derivation, rightmost derivation, ambiguity.
Ref: Automata Theory, KLP Mishra, N. Chandrasekharan; Automata Theory, AV Aho, JD Ullman

Lecture-14: Eliminating ambiguity, predictive parsing, recursive descent parsing, predictive parsing using tables.
Ref: Principles of Compiler Design, A.V. Aho, R. Sethi, J.D. Ullman

Lecture-15: Top-down parsing, bottom-up parsing, shift-reduce parsing using the ACTION/GOTO tables.
Ref: Principles of Compiler Design, A.V. Aho, R. Sethi, J.D. Ullman

Lecture-16: Table construction; SLR, LL, LALR grammars; practical considerations for LALR grammars.
Ref: Principles of Compiler Design, A.V. Aho, R. Sethi, J.D. Ullman; Automata Theory, KLP Mishra, N. Chandrasekharan

Lecture-17: Syntax-directed translation, syntax-directed definitions, bottom-up evaluation of S-attributed definitions.
Ref: Principles of Compiler Design, A.V. Aho, R. Sethi, J.D. Ullman

Lecture-18: L-attributed definitions, top-down translation, bottom-up evaluation of inherited attributes.
Ref: Principles of Compiler Design, A.V. Aho, R. Sethi, J.D. Ullman

Lecture-19: Recursive evaluators, space for attribute values at compile time, assigning space at compiler-construction time, analysis of syntax-directed definitions.
Ref: Principles of Compiler Design, A.V. Aho, R. Sethi, J.D. Ullman

Lecture-20: Semantic actions, semantic analysis, symbol tables, types and type checking.
Ref: Principles of Compiler Design, A.V. Aho, R. Sethi, J.D. Ullman

Lecture-21: Run-time environment, activation records, run-time storage organization.
Ref: Principles of Compiler Design, A.V. Aho, R. Sethi, J.D. Ullman

Lecture-22: Symbol tables, language facilities for dynamic storage allocation, dynamic storage allocation techniques.
Ref: Principles of Compiler Design, A.V. Aho, R. Sethi, J.D. Ullman

Lecture-23: Intermediate code generation, intermediate languages, declarations.
Ref: Principles of Compiler Design, A.V. Aho, R. Sethi, J.D. Ullman

Lecture-24: Assignment statements, Boolean expressions, case statements, backpatching, procedure calls.
Ref: Principles of Compiler Design, A.V. Aho, R. Sethi, J.D. Ullman

Lecture-25: Code generation, issues in the design of a code generator, the target machine.
Ref: Principles of Compiler Design, A.V. Aho, R. Sethi, J.D. Ullman

Lecture-26: Run-time storage management, basic blocks and flow graphs.
Ref: Principles of Compiler Design, A.V. Aho, R. Sethi, J.D. Ullman

Lecture-27: A simple code generator, register and address descriptors, a code generation algorithm.
Ref: Principles of Compiler Design, A.V. Aho, R. Sethi, J.D. Ullman
Lecture-28: Register allocation and assignment, global register allocation, usage counts, register assignment for outer loops, register allocation by graph coloring.
Ref: Principles of Compiler Design, A.V. Aho, R. Sethi, J.D. Ullman

Lecture-29: The DAG representation of basic blocks, DAG construction, applications of DAGs.
Ref: Principles of Compiler Design, A.V. Aho, R. Sethi, J.D. Ullman

Lecture-30: Peephole optimization, redundant-instruction elimination, flow-of-control optimizations, algebraic simplifications, use of machine idioms.
Ref: Principles of Compiler Design, A.V. Aho, R. Sethi, J.D. Ullman

Lecture-31: Generating code from DAGs, rearranging the order, a heuristic ordering for DAGs (contd.).
Ref: Principles of Compiler Design, A.V. Aho, R. Sethi, J.D. Ullman

Lecture-32: Optimal ordering for trees, the labeling algorithm, code generation from a labeled tree, multiregister operations, algebraic properties.
Ref: Principles of Compiler Design, A.V. Aho, R. Sethi, J.D. Ullman

Lecture-33: The dynamic programming code-generation algorithm, a class of register machines, the principle of dynamic programming, contiguous evaluation (contd.).
Ref: Principles of Compiler Design, A.V. Aho, R. Sethi, J.D. Ullman

Lecture-34: The dynamic programming algorithm, code-generator generators.
Ref: Principles of Compiler Design, A.V. Aho, R. Sethi, J.D. Ullman

Lecture-35: Introduction to code optimization, an organization for an optimizing compiler.
Ref: Principles of Compiler Design, A.V. Aho, R. Sethi, J.D. Ullman

Lecture-36: The principal sources of optimization, function-preserving transformations, common subexpressions, copy propagation (contd.).
Ref: Principles of Compiler Design, A.V. Aho, R. Sethi, J.D. Ullman

Lecture-37: Dead-code elimination, loop optimizations, code motion, induction variables and strength reduction.
Ref: Principles of Compiler Design, A.V. Aho, R. Sethi, J.D. Ullman

Lecture-38: Optimization of basic blocks, loops in flow graphs, introduction to global data-flow analysis.
Ref: Principles of Compiler Design, A.V. Aho, R. Sethi, J.D. Ullman

Lecture-39: Code-improving transformations, dealing with aliases, data-flow analysis of structured flow graphs, efficient data-flow algorithms.
Ref: Principles of Compiler Design, A.V. Aho, R. Sethi, J.D. Ullman

Lecture-40: A tool for data-flow analysis, estimation of types, symbolic debugging of optimized code.
Ref: Principles of Compiler Design, A.V. Aho, R. Sethi, J.D. Ullman

Module-I: Introduction to Compiling

1.1 INTRODUCTION TO THE LANGUAGE PROCESSING SYSTEM

Fig 1.1: Language Processing System

Preprocessor
A preprocessor produces input to compilers. It may perform the following functions:
1. Macro processing: A preprocessor may allow a user to define macros that are shorthands for longer constructs.
2. File inclusion: A preprocessor may include header files into the program text.
3. Rational preprocessor: These preprocessors augment older languages with more modern flow-of-control and data-structuring facilities.
4. Language extensions: These preprocessors attempt to add capabilities to the language by means of built-in macros.

COMPILER
A compiler is a translator program that takes a program written in a high-level language (HLL), the source program, and translates it into an equivalent program in a machine-level language (MLL), the target program. An important part of a compiler is reporting errors to the programmer.

Fig 1.2: Structure of Compiler
Executing a program written in a HLL programming language consists of two parts: the source program must first be compiled (translated) into an object program; the resulting object program is then loaded into memory and executed.

Fig 1.3: Execution process of source program in Compiler

ASSEMBLER
Programmers found it difficult to write or read programs in machine language. They began to use mnemonics (symbols) for each machine instruction, which they would subsequently translate into machine language. Such a mnemonic machine language is now called an assembly language. Programs known as assemblers were written to automate the translation of assembly language into machine language. The input to an assembler program is called the source program; the output is a machine language translation (object program).

INTERPRETER
An interpreter is a program that appears to execute a source program as if it were machine language.

Fig 1.4: Execution in Interpreter

Languages such as BASIC, SNOBOL and LISP can be translated using interpreters. JAVA also uses an interpreter. The process of interpretation can be carried out in the following phases:
1. Lexical analysis
2. Syntax analysis
3. Semantic analysis
4. Direct execution

Advantages:
Modifications to the user program can easily be made and implemented as execution proceeds.
The type of object that a variable denotes may change dynamically.
Debugging a program and finding errors is a simplified task for a program used for interpretation.
The interpreter for the language makes it machine independent.

Disadvantages:
The execution of the program is slower.
Memory consumption is higher.

LOADER AND LINK-EDITOR
Once the assembler produces an object program, that program must be placed into memory and executed. The assembler could place the object program directly in memory and transfer control to it, thereby causing the machine language program to be executed. However, this would waste core by leaving the assembler in memory while the user's program was being executed. Also, the programmer would have to retranslate his program with each execution, thus wasting translation time. To overcome these problems of wasted translation time and memory, system programmers developed another component called a loader. "A loader is a program that places programs into memory and prepares them for execution." It would be more efficient if subroutines could be translated into object form which the loader could "relocate" directly behind the user's program. The task of adjusting programs so they may be placed in arbitrary core locations is called relocation. Relocating loaders perform four functions.

1.2 TRANSLATOR
A translator is a program that takes as input a program written in one language and produces as output a program in another language. Besides program translation, the translator performs another very important role: error detection. Any violation of the HLL specification is detected and reported to the programmer. The important roles of a translator are:
1. Translating the HLL program input into an equivalent ML program.
2. Providing diagnostic messages wherever the programmer violates the specification of the HLL.

1.3 LIST OF COMPILERS
1. Ada compilers
2. ALGOL compilers
3. BASIC compilers
4. C compilers
5. C# compilers
6. C++ compilers
7. COBOL compilers
8. Common Lisp compilers
9. ECMAScript interpreters
10. Fortran compilers
11. Java compilers
12. Pascal compilers
13. PL/I compilers
14. Python compilers
15. Smalltalk compilers
1.4 STRUCTURE OF THE COMPILER DESIGN

Phases of a compiler: A compiler operates in phases. A phase is a logically interrelated operation that takes the source program in one representation and produces output in another representation. The phases of a compiler are shown below. There are two parts to compilation:
a. Analysis (machine independent / language dependent)
b. Synthesis (machine dependent / language independent)
The compilation process is partitioned into a number of sub-processes called 'phases'.

Lexical Analysis:-
The LA or scanner reads the source program one character at a time, carving the source program into a sequence of atomic units called tokens.

Fig 1.5: Phases of Compiler

Syntax Analysis:-
The second stage of translation is called syntax analysis or parsing. In this phase expressions, statements, declarations etc. are identified by using the results of lexical analysis. Syntax analysis is aided by using techniques based on the formal grammar of the programming language.

Intermediate Code Generation:-
An intermediate representation of the final machine language code is produced. This phase bridges the analysis and synthesis phases of translation.

Code Optimization:-
This is an optional phase designed to improve the intermediate code so that the output runs faster and takes less space.

Code Generation:-
The last phase of translation is code generation. A number of optimizations to reduce the length of the machine language program are carried out during this phase. The output of the code generator is the machine language program for the specified computer.

Table Management (or) Book-keeping:-
This portion keeps the names used by the program and records essential information about each. The data structure used to record this information is called a 'Symbol Table'.

Error Handlers:-
This is invoked whenever a flaw (error) in the source program is detected.

The output of the LA is a stream of tokens, which is passed to the next phase, the syntax analyzer or parser. The SA groups the tokens together into syntactic structures called expressions. Expressions may further be combined to form statements. The syntactic structure can be regarded as a tree whose leaves are the tokens; such trees are called parse trees. The parser has two functions. It checks whether the tokens coming from the lexical analyzer occur in patterns that are permitted by the specification for the source language. It also imposes on the tokens a tree-like structure that is used by the subsequent phases of the compiler.

For example, if a program contains the expression A+/B, then after lexical analysis this expression might appear to the syntax analyzer as the token sequence id+/id. On seeing the /, the syntax analyzer should detect an error situation, because the presence of these two adjacent binary operators violates the formation rules of an expression.

Syntax analysis makes explicit the hierarchical structure of the incoming token stream by identifying which parts of the token stream should be grouped. For example, A/B*C has two possible interpretations:
1. divide A by B and then multiply by C, or
2. multiply B by C and then use the result to divide A.
Each of these two interpretations can be represented in terms of a parse tree.
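To make the scanner/parser interface concrete, the following is a minimal scanner sketch in Python; the token names and the tokenize helper are illustrative choices, not part of the notes. It carves the character stream A+/B into the token sequence id + / id that the parser then checks:

import re

# Illustrative token classes; a real scanner for a full language has many more.
TOKEN_SPEC = [
    ("id",   r"[A-Za-z_][A-Za-z0-9_]*"),  # identifiers
    ("plus", r"\+"),                      # operator +
    ("div",  r"/"),                       # operator /
    ("ws",   r"[ \t\n]+"),                # whitespace, discarded
]

def tokenize(source):
    """Carve the source string into (token-name, lexeme) pairs."""
    pos = 0
    while pos < len(source):
        for name, pattern in TOKEN_SPEC:
            m = re.match(pattern, source[pos:])
            if m:
                if name != "ws":          # skip whitespace tokens
                    yield (name, m.group())
                pos += m.end()
                break
        else:
            raise SyntaxError("illegal character %r" % source[pos])

print(list(tokenize("A+/B")))
# [('id', 'A'), ('plus', '+'), ('div', '/'), ('id', 'B')]
# The scanner itself accepts this stream; it is the parser that must
# reject the two adjacent binary operators + and /.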
Intermediate Code Generation:-
The intermediate code generator uses the structure produced by the syntax analyzer to create a stream of simple instructions. Many styles of intermediate code are possible. One common style uses instructions with one operator and a small number of operands. The output of the syntax analyzer is some representation of a parse tree; the intermediate code generation phase transforms this parse tree into an intermediate-language representation of the source program.

Code Optimization:-
This is an optional phase designed to improve the intermediate code so that the output runs faster and takes less space. Its output is another intermediate-code program that does the same job as the original, but in a way that saves time and/or space.

a. Local Optimization:-
There are local transformations that can be applied to a program to make an improvement. For example,
  if A < B goto L2
  goto L3
L2:
can be replaced by the single statement
  if A >= B goto L3
Another important local optimization is the elimination of common subexpressions:
  A := B + C + D
  E := B + C + F
might be evaluated as
  T1 := B + C
  A := T1 + D
  E := T1 + F
taking advantage of the common subexpression B + C.

b. Loop Optimization:-
Another important source of optimization concerns increasing the speed of loops. A typical loop improvement is to move a computation that produces the same result each time around the loop to a point in the program just before the loop is entered.
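Local common-subexpression elimination of the kind just shown is easy to mechanize over straight-line three-address code. The following Python sketch assumes, for simplicity, that no variable in the block is reassigned; its (dest, op, left, right) tuple form is an illustrative encoding, not the notes' notation:

def eliminate_cse(block):
    """Replace a recomputed (op, left, right) with a copy of the first result."""
    available = {}   # (op, left, right) -> name already holding that value
    out = []
    for dest, op, left, right in block:
        key = (op, left, right)
        if key in available:
            out.append((dest, ":=", available[key], None))  # reuse earlier value
        else:
            available[key] = dest
            out.append((dest, op, left, right))
    return out

# A := B + C + D ; E := B + C + F, already broken into three-address code:
block = [
    ("T1", "+", "B", "C"),
    ("A",  "+", "T1", "D"),
    ("T2", "+", "B", "C"),   # recomputes B + C
    ("E",  "+", "T2", "F"),
]
for instr in eliminate_cse(block):
    print(instr)
# ('T2', ':=', 'T1', None) shows T2 reusing T1 instead of re-adding B + C.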
Code Generator:-
The code generator produces the object code by deciding on the memory locations for data, selecting code to access each datum, and selecting the registers in which each computation is to be done. Many computers have only a few high-speed registers in which computations can be performed quickly. A good code generator would attempt to utilize registers as efficiently as possible.

Table Management OR Book-keeping:-
A compiler needs to collect information about all the data objects that appear in the source program. The information about data objects is collected by the early phases of the compiler, the lexical and syntactic analyzers. The data structure used to record this information is called a Symbol Table.

Error Handling:-
One of the most important functions of a compiler is the detection and reporting of errors in the source program. The error messages should allow the programmer to determine exactly where the errors have occurred. Errors may occur in any of the phases of a compiler. Whenever a phase of the compiler discovers an error, it must report the error to the error handler, which issues an appropriate diagnostic message. Both the table-management and error-handling routines interact with all phases of the compiler.

Example:
Fig 1.6: Compilation Process of a source code through phases

2. A Simple One-Pass Compiler:

2.0 INTRODUCTION
In computer programming, a one-pass compiler is a compiler that passes through the parts of each compilation unit only once, immediately translating each part into its final machine code. This is in contrast to a multi-pass compiler, which converts the program into one or more intermediate representations in steps between source code and machine code, and which reprocesses the entire compilation unit in each sequential pass.

2.1 OVERVIEW
• Language definition
o Appearance of a programming language:
  Vocabulary: regular expressions
  Syntax: Backus-Naur Form (BNF) or context-free grammar (CFG)
o Semantics: informal language or some examples

Fig 2.1: Structure of our compiler front end

2.2 SYNTAX DEFINITION
• To specify the syntax of a language: CFG and BNF
o Example: the if-else statement in C has the form
  statement → if ( expression ) statement else statement
• An alphabet of a language is a set of symbols.
o Examples:
  {0,1} for a binary number system; language = {0, 1, 100, 101, ...}
  {a,b,c}; language = {a, b, c, ac, abcc, ...}
  {if, (, ), else, ...} for if statements; language = {if(a==1)goto 10, ...}
• A string over an alphabet
o is a sequence of zero or more symbols from the alphabet.
o Examples: 0, 1, 10, 00, 11, 111, ... are strings over the alphabet {0,1}.
o The null string is a string that does not have any symbol of the alphabet.
• Language
o A language is a subset of all the strings over a given alphabet.
o Alphabets Ai and languages Li over Ai:
  A0 = {0,1}            L0 = {0, 1, 100, 101, ...}
  A1 = {a,b,c}          L1 = {a, b, c, ac, abcc, ...}
  A2 = all C tokens     L2 = all sentences of C programs
• Example 2.1. Grammar for expressions consisting of digits and plus and minus signs.
o Language of expressions L = {9-5+2, 3-1, ...}
o The productions of the grammar for this language L are:
  list → list + digit
  list → list - digit
  list → digit
  digit → 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9
o list, digit: grammar variables, grammar symbols
o 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, -, +: tokens, terminal symbols
• Conventions for specifying a grammar
o Terminal symbols: boldface strings such as if, num, id
o Nonterminal symbols (grammar symbols): italicized names such as list, digit, A, B
• Grammar G = (N, T, P, S)
o N: a set of nonterminal symbols
o T: a set of terminal symbols, tokens
o P: a set of production rules
o S: a start symbol, S ∈ N
• Grammar G for the language L = {9-5+2, 3-1, ...}
o G = (N, T, P, S)
  N = {list, digit}
  T = {0,1,2,3,4,5,6,7,8,9,-,+}
  P: list → list + digit
     list → list - digit
     list → digit
     digit → 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9
  S = list
• Some definitions for a language L and its grammar G
• Derivation: a sequence of replacements S ⇒ α1 ⇒ α2 ⇒ … ⇒ αn is a derivation of αn. Example: a derivation of 1+9 from the grammar G:
  leftmost derivation: list ⇒ list + digit ⇒ digit + digit ⇒ 1 + digit ⇒ 1 + 9
  rightmost derivation: list ⇒ list + digit ⇒ list + 9 ⇒ digit + 9 ⇒ 1 + 9
• Language of a grammar, L(G)
L(G) is the set of sentences that can be generated from the grammar G:
  L(G) = { x | S ⇒* x }, where x is a sequence of terminal symbols.
• Example: Consider the grammar G = (N, T, P, S) with
  N = {S}, T = {a, b}, S = S, P = {S → aSb | ε}
o Is aabb a sentence of L(G)? (Derivation of the string aabb:)
  S ⇒ aSb ⇒ aaSbb ⇒ aaεbb ⇒ aabb (i.e., S ⇒* aabb), so aabb ∈ L(G).
o There is no derivation for aa, so aa ∉ L(G).
o Note: L(G) = { aⁿbⁿ | n ≥ 0 }, where aⁿbⁿ means n a's followed by n b's.
• Parse Tree
A derivation can be conveniently represented by a derivation tree (parse tree).
o The root is labeled by the start symbol.
o Each leaf is labeled by a token or ε.
o Each interior node is labeled by a nonterminal symbol.
o When a production A → x1 … xn is applied, nodes labeled by x1 … xn are made children of the node labeled by A.
• root: the start symbol; internal nodes: nonterminals; leaf nodes: terminals
o Example grammar G:
  list → list + digit | list - digit | digit
  digit → 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9
• leftmost derivation for 9-5+2:
  list ⇒ list + digit ⇒ list - digit + digit ⇒ digit - digit + digit ⇒ 9 - digit + digit ⇒ 9 - 5 + digit ⇒ 9 - 5 + 2
• rightmost derivation for 9-5+2:
  list ⇒ list + digit ⇒ list + 2 ⇒ list - digit + 2 ⇒ list - 5 + 2 ⇒ digit - 5 + 2 ⇒ 9 - 5 + 2

Fig 2.2: Parse tree for 9-5+2 according to the grammar in Example 2.1

Ambiguity
• A grammar is said to be ambiguous if the grammar has more than one parse tree for some string of tokens.
• Example 2.5. Suppose a grammar G that cannot distinguish between lists and digits as in Example 2.1:
  G: string → string + string | string - string | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9

Fig 2.3: Two parse trees for 9-5+2

• 9-5+2 has two parse trees, so grammar G is ambiguous.
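The trouble with ambiguity is that the two parse trees for 9-5+2 denote different computations. A small Python sketch (the tuple encoding of trees is an illustrative choice) evaluates both trees and shows that they disagree:

# Each tree is (op, left, right) or a plain number.
def evaluate(tree):
    if isinstance(tree, int):
        return tree
    op, left, right = tree
    lv, rv = evaluate(left), evaluate(right)
    return lv - rv if op == "-" else lv + rv

left_tree  = ("+", ("-", 9, 5), 2)   # (9-5)+2, the conventional grouping
right_tree = ("-", 9, ("+", 5, 2))   # 9-(5+2), the other parse tree

print(evaluate(left_tree))    # 6
print(evaluate(right_tree))   # 2

An unambiguous grammar with left-associative operators, as defined next, forces the first tree.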
Associativity of operators
An operator is said to be left-associative if an operand with operators on both sides of it is taken by the operator to its left.
e.g. 9+5+2 ≡ (9+5)+2; assignment is right-associative: a=b=c ≡ a=(b=c)
• Left-associative grammar:
  list → list + digit | list - digit
  digit → 0 | 1 | … | 9
• Right-associative grammar:
  right → letter = right | letter
  letter → a | b | … | z

Fig 2.4: Parse trees for left- and right-associative operators

Precedence of operators
We say that an operator (*) has higher precedence than another operator (+) if the operator (*) takes its operands before the other operator (+) does.
• e.g. 9+5*2 ≡ 9+(5*2), 9*5+2 ≡ (9*5)+2
• left-associative operators: +, -, *, /
• right-associative operators: =
• Syntax of full expressions:

  operator | associativity | precedence
  +, -     | left          | 1 (low)
  *, /     | left          | 2 (high)

  expr → expr + term | expr - term | term
  term → term * factor | term / factor | factor
  factor → digit | ( expr )
  digit → 0 | 1 | … | 9

• Syntax of statements:
  stmt → id = expr ;
       | if ( expr ) stmt
       | if ( expr ) stmt else stmt
       | while ( expr ) stmt
  expr → expr + term | expr - term | term
  term → term * factor | term / factor | factor
  factor → digit | ( expr )
  digit → 0 | 1 | … | 9

2.3 SYNTAX-DIRECTED TRANSLATION (SDT)
A formalism for specifying translations for programming language constructs (attributes of a construct: type, string, location, etc.).
• Syntax-directed definition (SDD) for the translation of constructs
• Syntax-directed translation scheme (SDTS) for specifying translations

Postfix notation for an expression E:
• If E is a variable or constant, then the postfix notation for E is E itself (E.t ≡ E).
• If E is an expression of the form E1 op E2, where op is a binary operator:
o E1' is the postfix of E1,
o E2' is the postfix of E2,
o then E1' E2' op is the postfix for E1 op E2.
• If E is (E1), and E1' is the postfix of E1, then E1' is also the postfix for E.
e.g. 9 - 5 + 2 ⇒ 9 5 - 2 +; 9 - (5 + 2) ⇒ 9 5 2 + -

Syntax-Directed Definition (SDD) for translation
• An SDD is a set of semantic rules predefined for each production, respectively, for translation.
• A translation is an input-output mapping procedure. For the translation of an input X:
o construct a parse tree for X;
o synthesize attributes over the parse tree. Suppose a node n in the parse tree is labeled by X, and X.a denotes the value of attribute a of X at that node; compute X's attribute X.a using the semantic rules associated with X.

Example 2.6. SDD for infix-to-postfix translation.
Fig 2.5: Syntax-directed definition for infix to postfix translation
An example of synthesized attributes for input X = 9-5+2:
Fig 2.6: Attribute values at nodes in a parse tree

Syntax-Directed Translation Schemes (SDTS)
• A translation scheme is a context-free grammar in which program fragments called translation actions are embedded within the right sides of the productions. For example:

  production           SDD (infix to postfix)              SDTS
  list → list + term   list.t = list.t || term.t || "+"    list → list + term { print("+") }

• print("+") is a translation (semantic) action.
• An SDTS generates an output for each sentence x generated by the underlying grammar by executing the actions in the order they appear during a depth-first traversal of a parse tree for x.
1. Design a translation scheme (SDTS) for the translation.
2. Translate: a) parse the input string x, and b) emit the action results encountered during the depth-first traversal of the parse tree.

Fig 2.7: Example of a depth-first traversal of a tree
Fig 2.8: An extra leaf is constructed for a semantic action
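One way to realize such a scheme is a recursive-descent translator whose print actions sit exactly where the SDTS embeds them. The Python sketch below (function and variable names are illustrative) handles the digit/+/- fragment of the grammar; the left recursion expr → expr + term is rewritten as a loop, the standard trick, so the actions still fire in depth-first order:

def translate(s):
    pos = 0

    def term():
        nonlocal pos
        if pos < len(s) and s[pos].isdigit():
            print(s[pos], end=" ")   # action for term -> digit: emit the digit
            pos += 1
        else:
            raise SyntaxError("digit expected at position %d" % pos)

    def expr():
        nonlocal pos
        term()
        # expr -> expr + term { print("+") } | expr - term { print("-") } | term,
        # with the left recursion turned into iteration
        while pos < len(s) and s[pos] in "+-":
            op = s[pos]
            pos += 1
            term()
            print(op, end=" ")       # action fires after the right operand

    expr()
    print()

translate("9-5+2")   # prints: 9 5 - 2 +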
Example 2.8. SDD vs. SDTS for infix-to-postfix translation:

  productions          SDD                                 SDTS
  expr → expr + term   expr.t = expr.t || term.t || "+"    expr → expr + term { print("+") }
  expr → expr - term   expr.t = expr.t || term.t || "-"    expr → expr - term { print("-") }
  expr → term          expr.t = term.t                     expr → term
  term → 0             term.t = "0"                        term → 0 { print("0") }
  term → 1             term.t = "1"                        term → 1 { print("1") }
  …                    …                                   …
  term → 9             term.t = "9"                        term → 9 { print("9") }

• Actions translating the input 9-5+2:
Fig 2.9: Actions translating 9-5+2 into 9 5 - 2 +
1) Parse. 2) Translate.
Do we have to maintain the whole parse tree? No: semantic actions are performed during parsing, and we do not need the nodes whose semantic actions are already done.

2.4 PARSING
If the token string x ∈ L(G), then the output is a parse tree; else an error message.

Top-Down parsing
1. At a node n labeled with nonterminal A, select one of the productions whose left side is A and construct children of node n with the symbols on the right side of that production.
2. Find the next node at which a subtree is to be constructed.
Example grammar G:
  type → simple | ↑ id | array [ simple ] of type
  simple → integer | char | num dotdot num

Fig 2.10: Top-down parsing while scanning the input from left to right
Fig 2.11: Steps in the top-down construction of a parse tree

• The selection of a production for a nonterminal may involve trial and error, i.e. backtracking.
• G: S → aSb | c | ab. According to the top-down parsing procedure, are acb and aabb in L(G)? (In the traces below, · marks how much of the input has been matched.)

• S/acb ⇒ aSb/acb ⇒ a·Sb/acb ⇒ a·aSbb/acb ⇒ ✗
    (S→aSb)      (move)       (S→aSb)    backtrack
  ⇒ a·Sb/acb ⇒ a·cb/acb ⇒ ac·b/acb ⇒ acb·/acb ✓
    (S→c)       (move)      (move)
so acb ∈ L(G). It is finished in 7 steps, including one backtracking.

• S/aabb ⇒ aSb/aabb ⇒ a·Sb/aabb ⇒ a·aSbb/aabb ⇒ aa·Sbb/aabb ⇒ aa·aSbbb/aabb ⇒ ✗
    (S→aSb)      (move)        (S→aSb)        (move)         (S→aSb)      backtrack
  ⇒ aa·Sbb/aabb ⇒ aa·cbb/aabb ⇒ ✗
    (S→c)         backtrack
  ⇒ aa·Sbb/aabb ⇒ aa·abbb/aabb ⇒ ✗
    (S→ab)        backtrack
  ⇒ aa·Sbb/aabb ⇒ ✗
    backtrack
  ⇒ a·Sb/aabb ⇒ a·cb/aabb ⇒ ✗
    (S→c)        backtrack
  ⇒ a·Sb/aabb ⇒ a·abb/aabb ⇒ aa·bb/aabb ⇒ aab·b/aabb ⇒ aabb·/aabb ✓
    (S→ab)       (move)       (move)       (move)
so aabb ∈ L(G), but the process is laborious: it needs 18 steps, including 5 backtrackings.
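The trial-and-error procedure traced above can be written down directly as a backtracking recognizer. In this minimal Python sketch (names are illustrative), each production of S → aSb | c | ab is tried in order, and a generator yields every input position reachable after matching S; exhausting one alternative and moving to the next is exactly the backtracking:

def parse_S(s, i):
    """Yield every index j such that S derives s[i:j], trying productions in order."""
    # S -> a S b
    if i < len(s) and s[i] == "a":
        for j in parse_S(s, i + 1):
            if j < len(s) and s[j] == "b":
                yield j + 1
    # S -> c
    if i < len(s) and s[i] == "c":
        yield i + 1
    # S -> a b
    if s[i:i + 2] == "ab":
        yield i + 2

def accepts(s):
    # The sentence is in L(G) iff some parse of S consumes the whole input.
    return any(j == len(s) for j in parse_S(s, 0))

print(accepts("acb"))    # True
print(accepts("aabb"))   # True
print(accepts("aab"))    # False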