At PC-Doctor we’ve been busy with some hardware projects for some time now, and so we’ve been experiencing the daily excitement that only Hardware Description Languages like Verilog can provide.

Working with Verilog provides a virtually endless stream of insights to someone who has mostly focused on software in the past. Where else is the parallelism of hardware execution so glaringly exposed and laid bare for unsuspecting programmers to trip up on?

For instance, by omitting a single “<” character, I can change a previously happily humming expansion board into one that likes to eat motherboards for breakfast. Trust me, it’s not a pretty sight when a motherboard starts popping capacitors in response to a firmware bug that the simulation test bench missed.
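To illustrate (this is a hypothetical sketch, not the actual bug from our board): the classic single-character hazard in Verilog is dropping the “<” from a nonblocking assignment `<=`, which silently turns it into a blocking assignment `=` and changes when values update within a clock cycle.

```verilog
// Hypothetical example: a register swap that depends on nonblocking
// assignment semantics.
module swap_demo (
    input  wire       clk,
    output reg  [7:0] a,
    output reg  [7:0] b
);
  always @(posedge clk) begin
    // Intended: both right-hand sides are sampled before either
    // register updates, so a and b swap every cycle.
    a <= b;
    b <= a;
    // With '=' instead of '<=' the statements execute sequentially:
    // 'a = b; b = a;' copies b into BOTH registers -- a very different
    // circuit, produced by the loss of one character.
  end
endmodule
```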

Some might say that VHDL is also an HDL and provides its users with the same capability for creating excitement, but I disagree. The formal definitions and verbose expression of VHDL hide the raw nature of the work. Reading VHDL gives me the same sort of creepies as ADA would to any honest C programmer (ADA, as in the programming language, not the disabilities act, diabetes association or dental association). It’s possible to get in big trouble with VHDL, too, but it takes a lot more typing on the keyboard, just like it takes a lot of trying in ADA to cause the damage that a single careless pointer reference will do in C.

In case you didn’t know, Verilog began its life as a language for simulating and modeling hardware designs. It was developed by Automated Integrated Design Systems in the early 80’s. The company soon renamed itself to Gateway Design Automation, perhaps due to an obvious conflict with the initials of the company name. Gateway was later acquired by Cadence, who made Verilog an open standard in the mid 90’s.

The cool part about Verilog is that it was intended to model and simulate hardware. This means that it makes twiddling bits and defining hardware features incredibly easy. Sort of what C did for software when it was introduced.

A major difference between software and hardware design is the difficulty of validating a hardware design. While a good software design always includes an associated test suite, it’s possible to design quite extensive software projects based on manual execution and “debugging on error”. Needless to say, this is not a wise approach for hardware designs that can experience a wide range of problems, including self-destruction in the right circumstances.

A particular aspect of modern hardware design is the clocking that’s used to synchronize events within the design. Clocks are forced on hardware designers due to difficulties in reliably predicting the switching characteristics of gates, transmission lines, etc., that make up an integrated circuit. Rather than trying to figure out switching times for every possible gate transition, and adding circuitry to make sure that outputs only change from one final state to the next, it’s far easier to agree on a time limit within which every part of a design will respond to a change in the state of inputs, and allow outputs to toggle as they may in-between.

This is where the infamous @(posedge clock) steps in. It is the event control that suspends execution until the next positive edge of the agreed-upon synchronization signal, the clock. A design without at least one instance of that magic phrase will often mean bad things for the implementation of the design. (Volumes have been published on this topic; just search for “Verilog” and you’ll get more details if you want them.)
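As a minimal sketch of the idiom, here is a clocked counter: everything inside the always block is evaluated only when the clock transitions from 0 to 1, which is exactly the agreed-upon time limit described above.

```verilog
// A minimal synchronous design: the counter state changes only on the
// rising edge of clk, so all intermediate output glitches between
// edges are ignored by downstream logic.
module counter (
    input  wire       clk,
    input  wire       reset,
    output reg  [7:0] count
);
  always @(posedge clk) begin
    if (reset)
      count <= 8'd0;
    else
      count <= count + 8'd1;
  end
endmodule
```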

I used to not like the idea of the clock. Hardware switching is incredibly fast, yet clock synchronization forces an effective wait state on all but the very slowest logic sequences. Why would I want to give up significant processing bandwidth to a synchronization scheme?

I was not alone in being unhappy about clocks. Just read http://www.eetimes.com/in_focus/embedded_systems/OEG20030606S0034 and http://www.eetimes.com/story/OEG20011025S0074.

To add to the allure of clock-less design, it is not difficult to write such designs in Verilog; it merely requires a switch to event-based flow instead of clock synchronization. Simulating clock-less designs is no different than simulating clocked ones.
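As a hedged sketch of what “event-based flow” can look like (a hypothetical two-phase handshake, not a design from our boards), the receiving block simply wakes on changes of a request line rather than on a global clock edge:

```verilog
// Hypothetical event-driven (clock-less) style: this block is
// triggered by any transition on req, not by a clock. It simulates
// exactly like clocked code, but synthesis treats it very differently.
module handshake_rx (
    input  wire       req,      // toggles once per transfer
    input  wire [7:0] data,
    output reg        ack,      // echoes req to acknowledge
    output reg  [7:0] latched
);
  always @(req) begin
    latched = data;   // capture the payload on the request event
    ack     = req;    // two-phase acknowledge: mirror the request level
  end
endmodule
```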

Unfortunately the implementation on actual hardware is a can of worms. The designs that we create are implemented on CPLD and FPGA parts. These are essentially collections of logic gates with interconnections. The interconnections and the logic gate functions are programmable, which makes the whole chip do what we want. What I found out is that trying to synthesize a clock-less design will eat up a significantly higher number of chip resources than a clocked one, and will in many instances have bad timing characteristics.

The problem seems to originate with the switching and signal propagation delays inherent with all electronics, and the implications this has on clock signals. In order to minimize clock “skew” to a large number of gates, CPLD and FPGA vendors implement additional propagation circuitry for clock traces, and hence designate only a relatively small number of traces as clocks. Some parts have as few as 4 of them, others 60 or more, but these are always a relatively small proportion of the available traces.

The small number of low-skew clock traces becomes a problem in a clock-less design, where an implied clock is carried from one processing block to the next. Once the number of implied clocks exceeds the number of available low-skew clock traces, the design will incur significant bloat as synthesizing and fitting tools attempt to make the limited resources do what the designer wants.

Things are made worse by the difficulty of predicting behavior across clock domains, which means that fewer of the synthesis tools’ size- and performance-enhancing optimizations can be applied.

Perhaps things would be different if we were working on ASICs that allow the designer more freedom in stating what takes place where. Things would also be different if FPGA vendors were to introduce additional “local” or “mini” clocks that could be used to implement a large number of implied clock domains.

But since we don’t work on ASICs, and the FPGAs we work with don’t have cool clock domain features, we are stuck with the clock. We just better love it, because hating it won’t change a thing.