Like most strongly typed languages, C++ has a way to group a set of related constants together as their own type: the enum. Enums are extremely useful in a wide variety of circumstances. However, enums in C++ have a lot of problems, and, in fact, they're really a mess. I'm certainly not the only person to complain about this, either.
Enums don't fit in with the rest of the language. To me, they feel like something that was tacked on. This is purely an aesthetic issue, and the fact that they're useful in a wide variety of circumstances probably negates it.
More practically, you can't control the conversion of an enum to and from integers. For example, you can compare an enum with an integer using the less-than operator without a cast. This invites accidental conversions that don't make sense.
Perhaps the worst problem is the scope of the constants defined by the enum. They are enclosed in the same scope as the enum itself. I've seen a lot of code where people prepend an abbreviation of the enum's type to each of the enum's constants to avoid this problem. Adding the type to the name of a constant is always a good sign that something bad is happening.
In addition, you can't specify ahead of time what the size of your enum's underlying type is. C++ normally tries to give the programmer as much control as possible, but in the case of enums the compiler is free to store your enum in whatever integral type it wants. Frequently, this doesn't matter, but when it does matter, you'll end up copying the value into an integer type that's less expressive than the enum.
After the break, I'll explain what other languages are doing about it, what the next iteration of the C++ standard will do about it, and what you can do about it now.
PC-Doctor delivers an enormous number of different products to different customers. Each customer gets a different product, and they get frequent updates to that product as well. Delivering these products requires complex synchronization between dozens of engineers. We've gotten great at scheduling the most important work. Our clients love us for that.
However, the low priority projects get released significantly less reliably. Until recently, I'd assumed that this problem was unique to PC-Doctor. Based on some extremely sketchy evidence from another company, I'm going to release my Theory Of Scheduling Low priOrity Work (TOSLOW).
Regular expressions are extremely powerful. They have a tendency, however, to grow and turn into unreadable messes. What have people done to try to tame them?
Perl is often at the forefront of regex technology. It allows multiline regexes with ignored whitespace and comments. That's nice, and it's a great step in the right direction. If your regex grows much beyond that, though, you'll still have a mess.
Lambda expressions and anonymous methods in C# are more complicated than you probably think. Microsoft points out that an incomplete understanding of them can result in "subtle programming errors". After running into exactly that, I'd agree. While I haven't tried them, lambda expressions in C# 3 are supposed to do exactly the same thing.
This post is a bit of a change for me: I'm actually going to write about my work for PC-Doctor! I'm a bit embarrassed at how rare that's been.
I want to talk about how to design a brand new framework. It's not something that everyone has to do, and it's not something that anyone does frequently. However, there's very little information on the web about the differences between creating a library and a framework.
The next C++ standard (C++0x) will include lambda expressions. N2550 introduces them. It's a short document, and it's not too painful to read. Go ahead and click it.
Like many new C++ features, it's not clear yet how this one is going to be used. Michael Feathers has already decided not to use them. At least one other person seems to mostly agree. I, on the other hand, am with Herb Sutter, who seems excited enough about the feature to imply that MSVC10 will have support for it. This is going to be a great feature. Incidentally, Sutter has mentioned an addition to C++/CLI in the past that would add less sophisticated lambda support for concurrency. I suspect he's serious about adding the support soon.
There have been many times when I've desperately wanted to avoid defining a one-off functor or function in my code. In fact, there have been times when I've been desperate enough to actually use Boost.Lambda! This standard is a clear win over Boost's attempts to deal with the limitations of C++03.
Let's go through the major features that the committee thinks I'll need as our technology gets bigger...
This article is going to have more questions in it than answers. It's about a problem in software development that I'm not sure I've worried about enough. I've certainly thought about it for specific cases, but this is the first time I've tried to think about the problem in general.
My main question revolves around the cost of complexity in software. There is certainly a large cost in making software more complex. Maintenance becomes more difficult. Teaching new employees about the project becomes harder. In the end, you will get fewer engineers who understand a complex project than a simple one.
Unfortunately, almost any non-refactoring work will add to the complexity of a project. However, some changes can have a large effect on the complexity in a short period of time. Adding a new library or technique to the code base, for example, forces everyone working on the project to understand that new technology.
What I really want to know is how much of this cost of complexity can be mitigated. Beyond choosing which libraries to add, what can be done to decrease the cost? My question is based on the assumption that some complexity is essential. So, given that you're going to add a new library to the code base, for example, what can be done to reduce the cost?
As the company's Rails evangelist, one of my challenges has been working out a consistent and understandable deployment strategy. One of the biggest challenges is that I may not have access to the root account. Additionally, we are generally required to stay within the Etch distro; going with Lenny (testing) requires special approval. A final challenge is that compilers are not allowed on the production server.
The "Ruby gem problem" is the result of not having access to the root account. On development servers, Ruby gems are easily managed using the root account with the "gem" command. But without root on the production server, how do we get our gems installed? Well, you might think we could just ask the owner of the root account to install gems, but not so fast - the gem command does not place files in accordance with the Linux FHS (see http://www.pathname.com/fhs/pub/fhs-2.3.html). Furthermore, the manager of the server has no interest in keeping track of Ruby gems and managing them separately. If it's not related to "apt", you've got some explaining to do.
The visitor pattern from the GoF is frequently overlooked by programmers who are used to object oriented programming. However, in some cases, it is significantly cleaner and easier to use than an overridden function. Unfortunately, it's easier to misuse as well, and, when it is used poorly, it can be a real mess.
I was going to tell you about my static analysis project and how I'm using the visitor pattern there. Then I took a glance at the Wikipedia article on the visitor pattern. It's clearly written by an OOP fanatic who's never seen the alternatives, so I'm going to contrast my implementation of the visitor with the one there.
The contrast is useful because Wikipedia's implementation is written using object-oriented principles. Part of my goal with this post is to explore alternatives to OO. My implementation is written using compile-time polymorphism rather than runtime polymorphism. As we'll see, this is significantly prettier and more flexible than runtime polymorphism.