Initial placement helps in the ASIC world because it is based on a proximity model: the closer two cells are, the easier it is to build the routing tracks that justify that placement. In FPGAs this does not necessarily hold, because FPGA routing is segmented and hierarchical. Many physical synthesis solutions borrowed from the ASIC world fail in the FPGA space because they do not look at real wire delays and topologies. To be truly effective for complex FPGA designs, new physical synthesis tools should combine the benefits of ASIC-strength algorithms with the advantages of using post-P&R data up front. Using real wire delays in this manner to drive the mapping is a huge benefit in physical synthesis for FPGAs. It is very difficult to correlate the post-layout data of the vendor tool’s timing engine with that of the physical synthesis tool; any tool that can provide this functionality can truly make use of actual physical data in real time and ensure the highest accuracy.
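To make the contrast concrete, here is a minimal sketch (all numbers and function names are invented for illustration, not taken from any vendor tool) of why a proximity model that works for ASIC placement mis-predicts delay on segmented FPGA routing: ASIC wire delay grows roughly continuously with distance, while FPGA delay is a step function of how many fixed-length routing segments a connection crosses.

```python
# Hypothetical delay models; constants are illustrative only.

def asic_proximity_delay(dist, ps_per_unit=5.0):
    """ASIC-style estimate: delay grows roughly linearly with distance."""
    return dist * ps_per_unit

def fpga_segmented_delay(dist, seg_len=6, ps_per_segment=120.0):
    """FPGA-style estimate: each routing segment (plus the programmable
    switch entering it) adds a fixed delay, so delay is a step function
    of the number of segments crossed."""
    segments = -(-dist // seg_len)  # ceiling division
    return segments * ps_per_segment

# Moving a cell one unit closer always improves the ASIC estimate a little,
# but changes nothing on the FPGA unless it reduces the segment count.
for d in (5, 6, 7, 12, 13):
    print(d, asic_proximity_delay(d), fpga_segmented_delay(d))
```

Under this model, pulling two cells from distance 7 to 6 saves a whole routing segment, while pulling them from 6 to 5 saves nothing, which is why proximity-driven placement alone is a poor guide on segmented fabrics.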
As device complexities rise, we must review traditional approaches to timing convergence to see where they fall short. Current solutions built on standalone logical synthesis are iterative and non-deterministic by nature. Designers typically write and rewrite RTL code, guide the place-and-route (P&R) tools by grouping cells together, and possibly attempt some floorplanning. An alternative is simply to do numerous P&R runs. None of these can be considered “solutions”, since they treat timing as an afterthought. Even then, the HDL code or constraints are modified without the faintest notion of whether timing will actually improve in the next iteration. Designers must needlessly iterate through P&R (the most time-consuming step in FPGA design) before gaining any visibility into whether the changes made were a step in the right direction or only served to exacerbate the problem. This unpredictability impacts the bottom line, negating the reduced costs and time-to-market advantages of using programmable logic in the first place.
Mr. Cummings has been a presenter, panelist, moderator and featured speaker at conferences and seminars world-wide, including the 2003-2004 world-wide "SystemVerilog NOW!" Seminars and the 2010-2011 world-wide Mentor-sponsored "SystemVerilog Assertions for Verification" Seminars.
Mr. Cummings was previously employed by Tektronix, Floating Point Systems, and IBM, where his duties included high-speed ECL and CMOS ASIC design, FPGA design, board design and hardware verification. Mr. Cummings received several awards in recognition of his exceptional performance while at Tektronix.
The gate-level netlist from the synthesis tool is imported into the place-and-route tool in Verilog netlist format. All the gates and flip-flops are placed, clock-tree synthesis is performed, and the reset network is routed; after this, each block is routed. The P&R tool’s output is a GDS file, which the foundry uses to fabricate the ASIC. The backend team normally dumps out SPEF (standard parasitic exchange format), RSPF (reduced parasitic exchange format), or DSPF (detailed parasitic exchange format) files from layout tools such as Astro for the frontend team, who then use the read_parasitics command in tools such as PrimeTime to write out SDF (standard delay format) for gate-level simulation purposes.
However, time and market forces have taken their toll. Xilinx and Altera now both have very capable FPGA implementation tools that are well integrated into solid design flows. Although the Synplicity identity did, in fact, fade into the great purple fog, Synopsys continues to market updated versions of the former Synplicity tools, and they are also solid, vendor-independent, high-performance tools that are used successfully by a number of companies. They are nowhere near the “must have” solutions they were ten years ago, however, and I think you’ll find that much of Synopsys’s effort in that space is now focused on FPGA prototyping (at which they excel). They are no longer the de facto standard for FPGA synthesis. Most Xilinx and Altera customers today use the vendors’ own tools.
First-generation ASIC synthesis tools used the now primitive fanout-based approach, which worked fine since most of the delay from the cell/wire combination came from the cell. Moving into deep submicron (DSM) ASIC technologies (130 nm and below), however, the traditional separation between logical (synthesis) and physical (place and route) design methods created a critical problem. Designs no longer met their performance goals, giving rise to what is now notoriously known as the “timing closure” problem. As geometries kept shrinking, circuit delays were increasingly influenced by net delays and the wire topology, so that floorplanning and cell placement drastically affected the circuit timing. The traditional fanout-based wire load models used for estimating interconnect delay during synthesis were rendered inaccurate and eventually broke down. This is still the key factor driving the lack of timing predictability between post-synthesis and post-layout results. Timing closure is still one of the biggest areas of concern for ASIC performance-oriented designs.
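The breakdown described above can be sketched numerically. In the hypothetical example below (all capacitance and resistance values are invented for illustration), a fanout-based wire-load model looks up estimated wire capacitance from fanout alone, and a first-order RC (Elmore-style) calculation converts it to delay. At older geometries the cell capacitance dominates, so a crude wire estimate is tolerable; at DSM geometries the routed wire dominates, and the same fanout-based guess is badly wrong.

```python
# Hypothetical fanout-based wire-load model; numbers are illustrative only.
WIRE_LOAD_TABLE = {1: 2.0, 2: 3.5, 3: 5.0, 4: 6.5}  # fanout -> est. wire cap (fF)

def estimated_wire_cap(fanout):
    """Pre-layout estimate: wire capacitance as a function of fanout only."""
    return WIRE_LOAD_TABLE.get(fanout, 6.5 + 1.5 * (fanout - 4))

def elmore_delay(drive_res_kohm, cell_cap_ff, wire_cap_ff):
    """First-order RC delay: drive resistance times total load, in ps."""
    return drive_res_kohm * (cell_cap_ff + wire_cap_ff)

# Older process: cell pins dominate the load, so the estimate is close enough.
pre_dsm = elmore_delay(1.0, cell_cap_ff=20.0, wire_cap_ff=estimated_wire_cap(2))

# DSM process: pin caps shrink, but the routed wire's actual capacitance
# (known only after layout) dwarfs them, so the fanout-based guess is far off.
dsm_est    = elmore_delay(1.0, cell_cap_ff=2.0, wire_cap_ff=estimated_wire_cap(2))
dsm_actual = elmore_delay(1.0, cell_cap_ff=2.0, wire_cap_ff=40.0)  # post-route

print(pre_dsm, dsm_est, dsm_actual)
```

In this toy scenario the pre-layout DSM estimate is off by many times the actual post-route delay on the same net, while the older-process estimate is within a few percent: precisely the predictability gap between post-synthesis and post-layout results that the text describes.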
ASIC design consists of many disparate design tasks that are not part of an FPGA design flow. For example, the FPGA vendor has already taken care of clock-tree synthesis and boundary scan. FPGA designers also need not perform silicon verification or scan-chain insertion for test. Since most FPGAs power up in a known state, FPGA designers do not have to initialize memory bits, latches or flip-flops. To their advantage, FPGAs can also have embedded logic analysis capability for debugging a design.
Ideally, interaction between logic synthesis and physical implementation would be contained in one environment. This allows synthesis-based optimizations, including restructuring of the logic and optimization of critical paths, to occur with real wire topologies, so that the associated effects can be considered simultaneously. It would greatly reduce the current dependence on wire load models, since accurate wire information would be available from the very beginning of the timing closure cycle. To reiterate, ASICs and FPGAs require different implementation strategies, and this becomes paramount when addressing the growing, ASIC-like physical synthesis challenges in the FPGA world.
Unfortunately, in studying the road map of Design Compiler’s voyage to success, there were a few key landmarks that the industry experts missed. In the ASIC market, third-party synthesis (tools supplied by an independent EDA company rather than by the chip makers themselves) succeeded largely because of economy of scale. The EDA company could invest the resources to create a world-class synthesis tool, and those engineering costs could be easily amortized over a large constellation of ASIC vendors and technologies. No single ASIC vendor had the resources to create their own tool that could match the sophistication of those created by the EDA experts. And there was a knock-on benefit: by standardizing the industry behind one synthesis tool, your design could become largely technology- and vendor-independent. If you needed to switch to a different ASIC supplier, or wanted to target your design so that multiple suppliers could build it, the synthesis tool provided the firewall that kept any chip vendor from locking you in. Furthermore (and this will become important later), it had already been established that place-and-route, the next step in ASIC design after synthesis, was the domain of third-party EDA suppliers quietly working behind the scenes, rather than of the ASIC vendors themselves.
The business success Synopsys achieved with Design Compiler was the envy of the EDA industry. Every EDA executive and entrepreneur on Earth began a desperate quest to recreate that magic – to find that one franchise-maker tool that could propel their five-person project team to Fortune 500 status on the coattails of one mission-critical solution for the entire electronics industry. Design Compiler’s success was in ASIC design, so when FPGAs arrived on the scene and became the heirs apparent to the custom chip crown, the answer seemed obvious. Whoever could create the best FPGA synthesis tool would win. Players large and small threw their hats into the ring. Synopsys looked like a favorite as they created the FPGA equivalent of the wildly successful Design Compiler. Mentor Graphics set out to save their struggling synthesis program by creating an FPGA version of their AutoLogic tool, and a whole wave of startups appeared on the scene, all aiming to claim the prize, for one lesson the world had learned from Design Compiler was, “There is no second place.” Once a critical tool achieves a winning position, the rest is like the end of a game of Monopoly: the rich get steadily richer, and the poor gradually fail, until finally only one player is left standing.