International Journal of Research in Engineering, Science and Management Volume-2, Issue-5, May-2019 www.ijresm.com | ISSN (Online): 2581-5792

Abstract: Present-day compilers offer a profusion of code optimizations, but almost all of them follow the same traditional method: they apply a predetermined sequence of phases, namely scanning, lexing, syntax analysis, and semantic analysis, followed by intermediate code generation and finally target code generation. Applying optimizations in a fixed order, without checking whether the code is actually improved, gives no assurance that the compiled program uses fewer resources or executes faster. To overcome this problem, techniques such as reverse inlining (procedural abstraction), cross-linking optimization, trace inlining, leaf-function optimization, and combined code motion and register allocation are used; these are more efficient and generate better machine code. Trace inlining in particular helps in analyzing the performance and the size of the generated low-level code. Various inlining rules evaluated on the benchmark suites DaCapo 9.12 Bach, SPECjbb2005, and SPECjvm2008 show that trace-based compilers attain nearly 51% better performance than the method-based Java HotSpot client compiler. Moreover, the positive effect of the larger compilation scope on other compiler optimizations is reported. One of the techniques used to decide a good optimization sequence for a program is the artificial neural network.

Keywords: Compiler Analysis

1. Introduction

Code optimization is the process of transforming a program in which each step takes the result of the preceding step as input and modifies the code so that it consumes minimal resources such as CPU time and memory while delivering superior performance. Code optimizations are of two kinds: machine dependent and machine independent. Transforming the code according to the target device architecture, i.e., modifying the generated code, is called machine-dependent optimization.
It makes use of CPU registers and, instead of symbolic memory labels, fixed memory addresses. Machine-independent optimization [1], in contrast, improves the intermediate code in order to obtain better final code. This paper focuses on new techniques that support emerging computer architectures. Very long instruction word (VLIW) architectures, for example, allow instruction-level parallelism to be exploited, meaning that more than one instruction executes simultaneously provided the instructions are not interdependent. With parallel execution the code takes less time to run and is therefore more efficient. This paper accordingly also discusses current trends in compilation techniques. Traditional compilers were used to generate machine code until architectural changes emerged; because those compilers were hardware dependent, it became necessary to build compilers that are hardware independent. Compiling source code for the kinds of microprocessors designed to control device functionality has pushed programmers toward controlled power consumption, real-time execution, and agility of code. In static compilation, the compiler processes the program method by method: a control-flow graph (CFG) is constructed for each method, the graph is optimized, and native code is generated by traversing the CFG. Suganuma et al. proposed a just-in-time (JIT) dynamic compiler that accesses runtime profile information, selects code only if it influences the overall runtime, and inlines only those parts of method bodies [2]. Gal proposed a trace-based approach to building dynamic compilers, in which frequently executed cyclic code paths are identified and recorded. These traces are assembled dynamically into a tree-like data structure. The major advantage is that the trace tree contains only the code regions that are actually relevant.
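The idea of recording only the hot, actually executed path can be sketched as follows. This is an illustrative toy, not Gal's implementation: the threshold, the flat instruction list, and all names (`run`, `HOT_THRESHOLD`, `traces`) are invented for the sketch.

```python
# Sketch of trace-based hot-path detection: a loop header that executes
# often triggers trace recording, and only the path actually taken is
# captured. The threshold and program format are illustrative.

HOT_THRESHOLD = 3
counters = {}          # loop-header pc -> execution count
traces = {}            # loop-header pc -> recorded trace (list of ops)

def run(program, loop_headers, steps):
    pc = 0
    recording = None   # (header_pc, ops) while a trace is being recorded
    for _ in range(steps):
        op = program[pc]
        if pc in loop_headers:
            counters[pc] = counters.get(pc, 0) + 1
            if recording and recording[0] == pc:
                traces[pc] = recording[1]      # cycle closed: keep the trace
                recording = None
            elif counters[pc] == HOT_THRESHOLD and pc not in traces:
                recording = (pc, [])           # header became hot: record
        if recording:
            recording[1].append(op)
        pc = (pc + 1) % len(program)           # straight-line loop for brevity

program = ['load', 'add', 'store', 'jump']
run(program, loop_headers={0}, steps=20)
print(traces)  # {0: ['load', 'add', 'store', 'jump']}
```

Untaken CFG edges never enter `traces`; a real trace compiler would compile the recorded trace and fall back to the interpreter when execution leaves it, which is the behavior the next paragraph describes.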
Edges that appear in the static CFG but are not executed at runtime are not included in the trace representation; if such an edge is taken after all, execution is handed back to an interpreter. Because the trace tree has no control-flow merge points, optimization algorithms are greatly simplified, which increases performance and reduces the amount of machine code generated. We will see how traditional and modern compilers differ in their optimizations and compiler analysis, and also the modifications made to conventional compilers in order to meet the emerging changes.

2. Literature survey

Phase-ordering optimization techniques: Instead of applying a fixed sequence of transformations to the whole program, the sequence is applied to discrete portions of it, whereby a superior optimization sequence is selected automatically for each portion.

Phase ordering with a genetic algorithm [3]: It is used in

A Review of Conventional and Upcoming Approaches for Compiler Analysis and Code Optimization
M. Shobha 1, Soumyashree Dhabade 2, C. Sowmya 3, Sini Anna Alex 4
1,2,3 Student, Dept. of Computer Science and Engineering, Ramaiah Institute of Technology, Bengaluru, India
4 Assistant Professor, Dept. of Computer Science and Engg., Ramaiah Institute of Technology, Bengaluru, India
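Searching for a good phase order with a genetic algorithm, as mentioned above, can be sketched as evolving pass sequences against a fitness function. Everything here is a stand-in: the pass names, the toy `cost` model, and the population parameters are invented for illustration; a real system would compile the program with each candidate order and measure it.

```python
# Illustrative genetic algorithm for compiler phase ordering: chromosomes
# are permutations of optimization passes; fitness is a toy cost model.
import random

PASSES = ['inline', 'const_fold', 'dce', 'licm', 'regalloc']

def cost(order):
    # Toy model: reward folding constants before dead-code elimination,
    # and allocating registers last. Lower is better.
    c = len(order)
    if order.index('const_fold') < order.index('dce'):
        c -= 2
    if order[-1] == 'regalloc':
        c -= 1
    return c

def mutate(order):
    # Swap two passes to produce a neighboring ordering.
    a, b = random.sample(range(len(order)), 2)
    order = order[:]
    order[a], order[b] = order[b], order[a]
    return order

def evolve(generations=50, pop_size=20, seed=0):
    random.seed(seed)
    pop = [random.sample(PASSES, len(PASSES)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)
        survivors = pop[:pop_size // 2]           # selection: keep best half
        pop = survivors + [mutate(random.choice(survivors))
                           for _ in range(pop_size - len(survivors))]
    return min(pop, key=cost)

best = evolve()
print(best, cost(best))
```

The search discovers a per-program ordering rather than a fixed pipeline, which is exactly the advantage phase-ordering techniques claim over the traditional fixed sequence.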