On ease of IIS vs Apache configuration

June 27, 2009

Recently I ran across what felt like a relatively straightforward task: make IIS 6 and Apache run side by side on the same Windows 2003 server, using the same port but different network interfaces. How difficult could this be? Apache was installed and configured first. IIS was installed, configured and started next. Then the computer was rebooted to test automatic application restart, and all hell broke loose…

1) First Horror – Apache fails to start because IIS is hogging all ports (well described in blogs all over the internet)

After wasting time in fruitless attempts to use the nice-looking IIS menus, I had to switch to googling for something much more obscure. That established the fact that IIS consumes all network interfaces even if you explicitly specify which one to listen on in its menus. That is default behavior from Microsoft, which assumes that no one in their right mind would ever install another web server next to IIS (they turned out to be right). A variety of posts argued the merits of editing the DisableSocketPooling registry setting vs. running the optional command-line httpcfg utility. An article from Microsoft (http://support.microsoft.com/kb/813368/EN-US) resolved the confusion: the first approach was applicable to IIS 5 (hope no one uses it anymore) and has no effect on IIS 6; the second applies to IIS 6 only.

Per the instructions, I ran the command: httpcfg set iplisten -i IIS_DESIGNATED_IP_ADDRESS. Now Apache and IIS could be started in the reverse order. The perceived man-over-machine victory was very short-lived, because the subsequent reboot test showed a 20-30 minute start-up delay that did not exist before.
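For reference, here is a sketch of the httpcfg sequence from the Microsoft KB article mentioned above. The IP address is a hypothetical placeholder for whatever interface IIS should own; httpcfg ships with the Windows Server 2003 Support Tools and must be run from an administrative command prompt:

```shell
rem Show which addresses HTTP.sys is currently listening on
httpcfg query iplisten

rem Restrict IIS 6 (HTTP.sys) to a single designated interface
rem (192.168.1.10 is a placeholder for IIS_DESIGNATED_IP_ADDRESS)
httpcfg set iplisten -i 192.168.1.10

rem Restart the HTTP stack so the new listen list takes effect
net stop http /y
net start w3svc

rem To roll back later, remove the entry again:
rem httpcfg delete iplisten -i 192.168.1.10
```

Once the listen list contains only the designated address, Apache is free to bind the same port on the other interfaces.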

2) Second Horror – incredibly slow, failing IIS start-up (no answer found on Google)

The new non-UI configuration change broke the machine beyond the wildest expectations. The services kept trying to start, timing out and retrying, causing the very slow start-up behavior. Inspecting the event log yielded the following messages.

The IIS Admin Service service hung on starting.
The FTP Publishing Service service depends on the IIS Admin Service service which failed to start because of the following error:
After starting, the service hung in a start-pending state.

I tried to roll back the changes using httpcfg delete commands – no help; the server seemed to be permanently broken and would not boot normally anymore. A few more people on the internet reported a similar problem, but no answers were published.

After a lot of experimentation I found a solution that definitely qualifies for the top-lunacy category: the IIS configuration needs to be backed up and then, without any changes, immediately restored. I guess httpcfg leaves the registry or IIS metabase in a somewhat inconsistent state, and the IIS restore does more than just restore. Otherwise, how could restoring a configuration that is known to break a computer fix anything?
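For anyone hitting the same wall, the backup-and-immediate-restore trick can be done with the iisback.vbs helper that ships with IIS 6 on Windows Server 2003. This is a sketch of the sequence, not a guaranteed fix; the backup name is arbitrary, and the exact flags are worth double-checking against the iisback.vbs built-in help:

```shell
rem Back up the IIS metabase under a throwaway name
cscript %SystemRoot%\system32\iisback.vbs /backup /b FixupBackup

rem Immediately restore that same backup, unchanged
cscript %SystemRoot%\system32\iisback.vbs /restore /b FixupBackup /v HIGHEST_VERSION

rem Restart IIS so the rewritten metabase is picked up
iisreset
```

After this cycle the services on my machine started normally again on reboot.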

So the next time someone complains about not having a visual editor for Apache configuration, compare that with the combined registry/metabase nonsense of IIS. It takes less time to learn advanced Apache configuration than to deal with this.

Not so clear case

June 26, 2009

Some software systems inspire strong love-hate feelings. ClearCase falls into this category: it has a devoted set of admirers and a seemingly much larger community that loathes it. How come it is so widespread given the huge “hate it” crowd? The answer probably lies in the placement of the communities: senior architects and administrators whose experience reaches back into the horrors of computing’s dark ages, versus a younger developer group used to fast, low-overhead systems.

ClearCase was a novel and feature-rich product when it came out. In addition to offering a good set of source control features, the system encompassed a lot of interesting design ideas aimed at the main problems of the mid-to-late 80’s, slow computers and small, expensive local disk space, via a network-based virtual file system architecture. The system could act as a hybrid of source control and build system, retaining artifacts built by other team members so that components did not have to be recompiled. This was useful when compiling something the size of a BSD kernel on a single machine could take all night. Thus a lot of senior developers invested time into learning the depths of ClearCase to make complex project development easier to manage and to save everyone else time. It would often become an almost full-time job for 1-2 smart people on every moderately sized team.

Fast forward a few years from the original release to the late 90’s: most of the issues the design was addressing started to disappear. What also most likely disappeared is the original development team that created ClearCase (I am guessing here, but this almost always happens in the endless chain of product acquisitions). What was left is the old fame, the marketing team and a lack of serious competition on the market. It is all downhill from that point…

The system’s Achilles heel lies in the very spot loved by the purists: its network file system (MVFS). The performance of the system outside a small local LAN is beyond hideous; it would be difficult to write something this slow on purpose. This is probably attributable much more to implementation decisions and talent loss on the CC team than to the design itself.

I have not recently seen anyone who used ClearCase successfully. In all of the installations I observed, it was hampering basic developer productivity, and none were relying on any build “reuse” features.

Here is a short summary of ClearCase usage in groups where I worked (it does not apply to the entire companies):

Oracle (1998): CC-based source/build management abandoned (its check-in performance was unfavorably compared to a 2400 bps dial-up modem located somewhere in Germany)

First Jewelry (1999): the contents of the entire repository were lost due to an administrator trying to tune performance. It had to be restored from the current versions sitting in static copies on developer workstations.

Bank of America, group 1 (2004): abandoned and successfully replaced by CVS due to horrible distributed network performance (the check-out/check-in cycle was taking 40 minutes to 1 hour for every developer daily)

Bank of America, group 2 (2009): still used, making development miserable for anyone located more than a few network hops away from the repository. Remote developers do a snapshot view check-out once a day, make a local copy (to avoid the overhead of ClearCase’s super-slow file system), and at the end of the day figure out which files they changed using diff, copy those over to the view folder and check them in.
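The daily workaround just described can be sketched roughly as follows. The view and file paths are hypothetical; the cleartool subcommands are the standard snapshot-view ones, though the exact flags teams use will vary:

```shell
# Morning: refresh the snapshot view once, then work on a plain copy
cleartool update my_snapshot_view
cp -r my_snapshot_view ~/work_copy

# ... edit files in ~/work_copy all day, untouched by MVFS ...

# Evening: find what changed, then push it back through ClearCase
diff -rq my_snapshot_view ~/work_copy

# For each changed file (src/changed_file.c is a placeholder):
cleartool checkout -nc my_snapshot_view/src/changed_file.c
cp ~/work_copy/src/changed_file.c my_snapshot_view/src/changed_file.c
cleartool checkin -nc my_snapshot_view/src/changed_file.c
```

That an entire team considers this ritual faster than just working in the view says everything about MVFS performance over a WAN.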

Most ClearCase defenders say something apologetic like: “you did not have the right administrator”, “you did not try feature A? then you did not have a good administrator”, “worked for me”. Maybe there is a way to make it work, but that sounds more like the exception than the rule. Thus I would hope the system’s usage becomes more of an exception too, as it has clearly outlived its usefulness.

I am typing this post in San Francisco while checking out a project from Chicago over a fast internal network. For ClearCase, however, it takes 1 hour to get 2000 files with about 30MB of total data. That’s only slightly better than that 2400 bps modem; they must have gotten a newer 56 kbps model…
