Transistor-Level Waveform Evaluation for Timing Analysis

Qin Tang, Amir Zjajo, Michel Berkelaar, Nick van der Meijs
Circuits and Systems Group, Delft University of Technology
Q.tang@tudelft.nl

Abstract—In (Statistical) Static Timing Analysis (S/STA), one of the crucial steps in the gate-level design flow, delay modeling has focused on gate-level models. However, the black-box nature of gate-level models limits the accuracy of timing analysis, especially in nanometer technologies. In this paper, we present an efficient transistor model (Xmodel) for building gate models which, benefiting from transistor-level detail, is independent of input waveform, output load and circuit structure. The proposed model provides both high accuracy and efficiency in comparison with Spice/Spectre for all analysis scenarios, including multiple-input switching, and for all cell types, including cells with high stacks. We also present a statistical extension of the proposed model (SXmodel), since parameter variations are no longer negligible in nanometer technologies. Using the General Threshold (GVT) library in the Nangate 45nm package, experiments show the high accuracy and efficiency of the proposed gate modeling and waveform evaluation methodology.

Keywords: transistor-level, gate modeling, timing analysis, statistical, parameter variation.

I. INTRODUCTION

The simulation of logic gates, the basic building blocks of digital circuits, is of paramount importance in macrocell/block characterization and (statistical) static timing analysis (S/STA). By using gate-level models (GLMs) such as CCS [1] and ECSM [2], S/STA calculates delay and slew much faster, at the cost of discarding accurate waveform information. GLMs model delay and slew as a function of input slew and output effective capacitance (C_eff) for a given cell arc, and store the characterized data in lookup tables (LUTs) or polynomial functions.
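To make the GLM evaluation concrete, the sketch below shows how a characterized 2-D delay LUT, indexed by input slew and C_eff, is typically evaluated with bilinear interpolation. The table values, axes, and function names are hypothetical and purely illustrative; they are not taken from any real library.

```python
from bisect import bisect_right

# Hypothetical characterization data for one timing arc (values illustrative only):
# rows indexed by input slew (ns), columns by effective load capacitance (pF).
SLEW_AXIS = [0.01, 0.05, 0.10, 0.50]      # input transition times (ns)
LOAD_AXIS = [0.001, 0.005, 0.02, 0.08]    # effective capacitance C_eff (pF)
DELAY_LUT = [                             # characterized cell delay (ns)
    [0.010, 0.018, 0.040, 0.120],
    [0.012, 0.020, 0.043, 0.124],
    [0.015, 0.024, 0.048, 0.130],
    [0.030, 0.040, 0.068, 0.155],
]

def lut_delay(slew, ceff):
    """Bilinear interpolation of the 2-D delay table, as a LUT-based GLM does."""
    # Locate the grid cell containing the query point (clamped to the table edges).
    i = min(max(bisect_right(SLEW_AXIS, slew) - 1, 0), len(SLEW_AXIS) - 2)
    j = min(max(bisect_right(LOAD_AXIS, ceff) - 1, 0), len(LOAD_AXIS) - 2)
    # Normalized position inside the cell along each axis.
    ts = (slew - SLEW_AXIS[i]) / (SLEW_AXIS[i + 1] - SLEW_AXIS[i])
    tc = (ceff - LOAD_AXIS[j]) / (LOAD_AXIS[j + 1] - LOAD_AXIS[j])
    # Blend the four surrounding table entries.
    d00, d01 = DELAY_LUT[i][j], DELAY_LUT[i][j + 1]
    d10, d11 = DELAY_LUT[i + 1][j], DELAY_LUT[i + 1][j + 1]
    return (1 - ts) * ((1 - tc) * d00 + tc * d01) + ts * ((1 - tc) * d10 + tc * d11)

print(lut_delay(0.05, 0.005))  # at a grid point: reproduces the stored 0.020
print(lut_delay(0.03, 0.003))  # between grid points: blend of the four neighbors
```

Note that the whole gate is reduced to this two-input function: nothing about the input waveform shape, the interconnect topology, or simultaneous switching on other pins enters the lookup, which is exactly the black-box limitation discussed next.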
In nanometer technologies, however, the intrinsic limitations of GLMs significantly affect S/STA accuracy and efficiency. Firstly, simple saturated ramps can no longer represent the input signals, since the waveform shape is affected by process variations and other variabilities such as crosstalk noise. Secondly, GLMs fail to work with a multi-port coupled interconnect load, because the load is simplified and modeled only by C_eff. Additionally, GLMs fail to efficiently capture multi-input switching (MIS) and internal-charge effects for high-stack and complex cells. Not modeling MIS can result in as much as 100% error in stage delay and slew calculation [3].

Lately, intensive research efforts have been directed at these issues. With recent proposals of optimized GLMs, there is a clear trend to sacrifice some performance, mostly by adding complexity, to improve accuracy. In [3] and [4], gate delay and output slew are modeled as functions of node voltages to capture full waveforms and the MIS scenario. The work in [3] has been extended in [5] by exposing internal nodes as virtual ports to model the internal states of the cell. All of these works attempt to optimize GLMs to maintain acceptable accuracy for all types of gates. Unfortunately, the fact that GLMs are black-box models, in which the internal structure of the gate is hidden, is the essential root of all these issues.

Clearly, an efficient modeling and waveform evaluation approach that is accurate to within a few percent of Spice/Spectre, uniformly for all gate types and arcs, is required for nanometer technologies. Combined with advanced algorithms and proper utilization of available computer resources, it becomes practical to use transistor-level cell models in multi-million-gate STA runs [6]. In nanometer technologies, however, the evaluation of sophisticated transistor models (e.g. BSIM4) dominates and dramatically slows down Spice/Spectre simulation, which makes it impractical for timing analysis. Therefore, an efficient and accurate transistor model is a key component of transistor-level waveform evaluation for timing analysis. In this well-studied field, it has been recognized that LUT-based models combined with advanced interpolation methods can provide both accuracy and speed.

The 3-D tabular drain-current model for a single transistor for precise circuit simulation [7] and the corresponding monotonic piecewise cubic interpolation method demonstrate the speed [8] and accuracy [9] advantages of the LUT-based model, in both digital and analog applications, in comparison to conventional analytical models. LUT-based models have also been used for RF circuit simulation [10], SOI devices [11] and timing analysis [6], [12], [13]. In the LUT approach, the exact behavior of the device is accounted for without any approximations, and thus the long and difficult analytical model development phase is avoided. Furthermore, the LUT approach is independent of technology, since the data are measured or simulated from sufficiently accurate models. In general, a transistor model requires an accurate representation of both the transistor's current sources and its intrinsic capacitances. Nowadays, analytical models [12] or a single value [6] are still the dominant methods for capacitance modeling in transistor-level timing analysis, which are either too complicated or too simplified. The LUT approach is a potential alternative for capacitance modeling, offering a favorable accuracy/computation-time trade-off [14]. In most well-known circuit simulation programs (e.g. Spice), a numerical integration method is used to convert