To Speed Up, Slow Down and Leverage – Especially in Software Development

In leading software development teams for almost two decades, I have learned that good software development is notoriously difficult. A 2005 IEEE Spectrum article entitled “Why Software Fails” discussed this problem, highlighting several reasons why software projects fail and the costs of those failures. Here is one quote that brings the issue of software development failures into focus:

“When you add up all these extra costs, the yearly tab for failed and troubled software conservatively runs somewhere from $60 billion to $70 billion in the United States alone. For that money, you could launch the space shuttle 100 times, build and deploy the entire 24-satellite Global Positioning System, and develop the Boeing 777 from scratch—and still have a few billion left over.”

The article goes on to highlight several valid reasons for software failure, including unrealistic goals, badly defined requirements, sloppy development practices, poor project management, and politics, among others.

Of the many reasons for software failures, I would like to focus on “sloppy development practices” and suggest one over-arching approach to removing the “sloppy” adjective from software development practices — an approach that has worked well for us at DMMD.

First, it is important to understand why software development has an innate tendency to be sloppy. Chief among the reasons is that software development can be done with a minimal amount of resources and training. All you need is a computer. At the entry level, most programming languages are approachable, and almost anyone with a computer can program. This makes software development a “high velocity engineering discipline.” How many times have you heard a developer say: “Ah, that’s easy! I’ll have it implemented in no time!”?

Compare this with other engineering disciplines, such as designing circuit boards or building bridges. Construction in civil engineering requires extensive system design and analysis, diagrams and paper plans, before a single brick is laid or a cable is strung. Bridges, unlike a lot of software, are carefully pre-planned on paper and then built following a clear plan. I would venture to say that almost all other engineering practices have a “lower development velocity” than software development. In other words, building the simplest project in most other engineering practices takes longer than building a simple software project.

Second, if our diagnosis of the cause of “sloppiness” in software development is its “high velocity,” then an intuitive solution is to slow things down, so that software development has a velocity more in line with other engineering practices. This is a good intuition. The art, however, is in how to slow down development in a meaningful way. To answer the “how,” I will turn to Archimedes and one of his quotes:

“Give me a lever long enough and a fulcrum on which to place it, and I shall move the world.”

Combining Archimedes’ wisdom with software development, my recommendation for removing sloppiness from software development is this:

“To speed up, slow down and leverage.”

To me, leverage in software development means designing small modules, with very concise and clear responsibilities and interfaces, in such a way that these modules can be reused many times. Writing reusable modules slows down the short-term development process, bringing it more in line with other engineering disciplines, but it has a huge long-term speed-up. Not only does it remove the sloppiness from software development, it also improves the efficiency of all future development.
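As a minimal sketch of what a leverageable module can look like (the names `stats`, `averageBrightness` and `averageNoiseEstimate` are hypothetical, chosen only for illustration): one small namespace with a single, clearly stated responsibility, reused by unrelated features instead of being re-implemented in each.

```cpp
#include <numeric>
#include <vector>

// Hypothetical example of a small, reusable module: one concise
// responsibility (summarizing a buffer of values) behind one clear
// interface.
namespace stats {

// Mean of a buffer; the module's single, clearly named entry point.
double mean(const std::vector<double>& values) {
    if (values.empty()) return 0.0;
    return std::accumulate(values.begin(), values.end(), 0.0) /
           static_cast<double>(values.size());
}

}  // namespace stats

// Two unrelated features leverage the same module rather than each
// writing its own averaging loop.
double averageBrightness(const std::vector<double>& pixels) {
    return stats::mean(pixels);
}

double averageNoiseEstimate(const std::vector<double>& residuals) {
    return stats::mean(residuals);
}
```

Designing and documenting such a module takes longer up front than inlining the loop, which is exactly the deliberate slow-down; the payoff is that every future caller gets it for free.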

In conclusion, good software development can be notoriously challenging, and billions of dollars are wasted each year on failed software projects. Software projects fail for many reasons, one of which is sloppy development practices. A software development mentality of “slowing down to leverage” can have a huge impact on removing sloppiness and “speeding up” the long-term goals of any project. This approach has worked well for us at DMMD.

Naming Strategies in Software Development

Often underestimated and rarely understood, naming strategies in software development can make a huge difference in how well and how easily code is understood. In my opinion, an intuitive naming strategy can be the single most significant benefit to a software project. A good naming strategy will not only help programmers communicate better and more efficiently, it can also highlight weaknesses in the software architecture. When naming variables, functions and classes (in the case of C++), a naming strategy that addresses the following questions is paramount:

  1. Can the names be remembered without flipping back and forth to their definitions?
  2. Are the names reasonable in length?
  3. When reading a program, do the function and variable names tell the logic and sequence, without reading comments?
  4. Do the names imply a broad to specific meaning?
  5. Is there a naming strategy common for the entire project?
  6. Will two different programmers arrive at the same variable name, given the scope and responsibility?

Let’s analyze each of these topics independently now.

Remembering Names

First, when writing new code, if you find yourself flipping back and forth to the variable declarations, or you keep a header file always at hand, this is a strong indication that the naming strategy you’re using can be improved. To help with remembering names, try the following:

  1. Do not use short names for common words. For example, avoid img instead of image, num instead of number, lst instead of list, and so on. Instead, use the full name. It does not make the variable that much longer, but it helps with memory and avoids confusion.
  2. It is OK to use short naming (acronyms) when the name refers to a namespace or to a well-known acronym. For example: OpenGL instead of Open Graphics Library, VTK instead of Visualization Toolkit, and others. The expectation is that an acronym can be used when it is well understood by programmers in the particular field of work.

Name Length and Program Readability

In the old days of programming, identifiers were limited to a certain length (such as 8 characters), which forced programmers into naming strategies where variable names looked more like encrypted machine code than readable names. Those days are long gone, and all modern programming languages allow long variable names. Do not hesitate to use names of reasonable length, such as two to three words strung together. This usually helps the readability of the code, to the point that comments can sometimes be skipped because the longer names express the purpose and responsibility of their variables.

Broad to Specific

When naming variables, functions, classes, or anything in general, I prefer the logic of broad to specific. This is best shown by an example. Assume we have a variable that keeps track of the number of tomatoes a farmer puts into his bucket during tomato harvesting season. A first try at the name might be: i. Yes, a generic loop variable that represents the number of tomatoes. A better name would be number, and better still numberOfTomatoes. The broad part number clues the reader that this is a counter, and the specific part tells us that the counter tracks tomatoes. The name numberTomatoesBucket adds more specificity by telling the reader that the variable tracks only the tomatoes in the bucket. Finally, numberTomatoesBucketHarvest adds even more specificity, and so on. How far one goes with the name length is a bit of an art, but the point is to think of variable names as beacons that hone the reader in on each variable’s responsibility.

Another nice benefit of this naming strategy is that when multiple names are listed in alphabetical order, the variables with similar responsibilities stay clustered together. The names also naturally guide the reader to the sub-classing structure of the code.
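The clustering effect is easy to demonstrate. In this sketch (the names besides the tomato examples are hypothetical), sorting a mixed set of broad-to-specific names alphabetically leaves all the numberTomatoes… variables adjacent:

```cpp
#include <algorithm>
#include <string>
#include <vector>

// Broad-to-specific names, as in the tomato-harvest example above.
// Sorting them alphabetically clusters the names that share the same
// broad responsibility (here, everything counting tomatoes).
std::vector<std::string> sortedVariableNames() {
    std::vector<std::string> names = {
        "weightBucket",                   // hypothetical unrelated variable
        "numberTomatoesField",
        "numberTomatoesBucket",
        "numberTomatoesBucketHarvest",
    };
    std::sort(names.begin(), names.end());
    return names;  // all numberTomatoes... names end up next to each other
}
```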

Defined Naming Strategies

For a project to successfully use a good naming strategy, the strategy needs to be published and clearly communicated across the entire team. Naming strategies should be as clearly marked as the exit signs in a movie theater. Enforcement can be done with a code review tool or other automated tools. The most important feature of a good naming strategy is not whether it is the best strategy, or better than another, but whether it brings consistency to the code.

Naming Consistency

The ultimate test of consistency is whether or not two different programmers will come up with the same variable name given the same variable responsibility.

Darian Muresan

Photoshop Host SDK

At DMMD we have developed a Photoshop Host SDK that allows you to run any Photoshop Plugin without Photoshop. This can be a very powerful tool, allowing you to integrate existing Photoshop Filters into your own application without needing Photoshop itself. In theory, DMMD’s Photoshop Host can even be used on Linux; however, there are no Linux Photoshop Plugins! (If you would like to port your own plugin to Linux, please contact us and we can discuss.)

OK, so how do you use the SDK? Well, it’s very simple, just follow these steps:

1. Download the SDK.
2. From the DOS prompt, go to the “vsrPp_sdk\bin\Win32” directory.
3. Copy your Photoshop Plugin to the “vsrPp_sdk\bin\Win32\Photoshop” directory.
4. Run the command: vsrPp_app.exe “myFileName.jpg” (please note that the exe supports only JPG files).
5. If you like what you see and you want to incorporate the SDK into your own application, take a look at the files:

  1. vsrPp_sdk\example\vsrPp_app.cpp
  2. vsrPp_sdk\include\vsrPp_sdk.h

That’s about it! Take a look at how the SDK is being used in the example application and then use it in the same fashion in your own application. If you’re all set, then Enjoy! If you need further help, please do not hesitate to contact our team at

The DMMD Development Team


Visere Medical Processing Pipeline

Visere Medical provides several different algorithms for enhancing medical XRay images. I will discuss several of the filters available in Visere Medical and their effect on the final image. You can evaluate these algorithms with your own images by downloading a copy of our Visere Medical Viewer software, or you can download a copy of our Medical Photoshop Plugin. The discussion in this blog assumes that you are using our free Visere Medical Viewer.

To access all the filters available within the Visere Medical application (also called WhiteCap Viewer), you need to enable the Process Toolbar. This is done by clicking on the Process menu item. The Process toolbar starts off un-docked, and it can be moved around or docked by simply dragging it to the desired location. Once the Process toolbar is visible, the list of available processes can be accessed by clicking the down arrow in the upper left corner of the toolbar. A few of the available filters are discussed next.

Processing Pipeline

The most versatile filter is the Pipeline Filter. The Pipeline filter can create new filters by stringing together a list of existing base filters. It can be used to define different pipelines, which can use different lists of filters, or the same list of filters with different settings for each individual filter.
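Conceptually, a pipeline is just an ordered list of base filters applied in sequence. The following sketch illustrates that idea only; the type and function names are hypothetical and are not the Visere Medical API:

```cpp
#include <functional>
#include <vector>

// Illustrative sketch of the pipeline concept: an image is a flattened
// pixel buffer, a filter is any image-to-image transform, and a
// pipeline is an ordered list of filters applied one after another.
using Image = std::vector<double>;
using Filter = std::function<Image(const Image&)>;

Image runPipeline(const std::vector<Filter>& pipeline, Image image) {
    for (const Filter& filter : pipeline) {
        image = filter(image);  // output of one filter feeds the next
    }
    return image;
}
```

Each filter’s settings can be captured inside the filter object itself, which is why the same list of base filters with different settings yields a different named pipeline.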

For example, when processing XRay films of a large dog and a small cat, different filter settings need to be used for the two animals. For the large dog an increased level of contrast might be necessary, whereas for the cat a lower contrast level might be more appropriate. To see how to configure and use the Pipeline filter, we’ll look at an example. We want to compare two images, one processed with DMMD’s algorithms and one processed with a 3rd party algorithm. The steps are as follows:


  1. Open up the two images side by side, with the unprocessed image on the left and the 3rd party processed image on the right. From the Process window select Edit Pipeline.

  2. A dialog similar to the image shown below will pop up. Press the New button and a new entry titled Pipeline Name will appear. Double click on the cell with the Pipeline Name and enter the name of the new pipeline. In our example, the name used for this pipeline is Genesis. Up to 20 processes can be strung together to generate a new pipeline called Genesis.

  3. Each of the individual filters can be edited by clicking the three-dots button [...], which opens that filter’s configuration dialog. In the image below, the Adaptive Histogram Equalization configuration is shown as an example.

  4. After all the filters are selected and configured, click OK to save the pipeline. In the process list, the new pipeline will now be listed, just above the Edit Pipeline filter. In this case, the new entry is Genesis.

  5. Select the new process — Genesis. When the process is selected a list of all the different filters that are part of the new process is shown. This allows you to confirm that the pipeline is indeed properly defined. The pipeline can be edited at any time by going back to Edit Pipeline.

  6. Make sure that the active window is the window on the left (the raw image) and then hit the Apply button at the bottom of the filter. The process is then applied to the left side image and the results are shown once the process is complete.

For processing XRays there are many options, but some processes are more useful than others.

Median And XRay Denoise Processes

The median filter is good for salt and pepper noise, as we’ve discussed here. In the current implementation we only process windows of size 3×3. This is intentional, both for speed and because larger windows can create too much edge shift in the final images. Since we are interested in removing only the strongest noise, the 3×3 default works well.
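A minimal sketch of a 3×3 median filter (illustrative only, not DMMD’s implementation; border pixels are left untouched here for simplicity):

```cpp
#include <algorithm>
#include <array>
#include <vector>

// 3x3 median filter on a grayscale image stored row-major. Each interior
// pixel is replaced by the median of its 3x3 neighborhood, which removes
// isolated salt-and-pepper outliers.
std::vector<int> medianFilter3x3(const std::vector<int>& in,
                                 int width, int height) {
    std::vector<int> out = in;  // borders are simply copied through
    for (int y = 1; y < height - 1; ++y) {
        for (int x = 1; x < width - 1; ++x) {
            std::array<int, 9> window;
            int k = 0;
            for (int dy = -1; dy <= 1; ++dy)
                for (int dx = -1; dx <= 1; ++dx)
                    window[k++] = in[(y + dy) * width + (x + dx)];
            // The median is the middle (5th) value of the 9-pixel window.
            std::nth_element(window.begin(), window.begin() + 4, window.end());
            out[y * width + x] = window[4];
        }
    }
    return out;
}
```

Note how a single bright outlier surrounded by uniform pixels is replaced by the neighborhood value, which is exactly why the filter suits salt-and-pepper noise.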

XRay Denoise is a more advanced version of the median filter. Unlike the median filter, this filter analyzes each pixel in the image and modifies it only if the algorithm considers it a noisy pixel. This tends to minimize the amount of correction and, most importantly, eliminates the edge shift usually introduced by median filtering. The filter is mostly recommended for sensors that have a few bad pixels.

AQua Denoise

This filter removes white Gaussian noise, as discussed here. Do not use a window size larger than 3; otherwise the filter will be VERY slow. This is a limitation of the current implementation, and in the future we will optimize the filter for larger window sizes. For now, leave the window size at 3. The first parameter, Sigma, is the noise level. Use a noise level of at most 15. Anything higher tends to denoise the image too much, and since the filter denoises local patches, over-denoising manifests itself as “blocky” regions in locations where there is less noise.

Adaptive Histogram Equalization

This filter is DMMD’s most effective filter for enhancing image contrast. Histogram equalization is a well-known technique for enhancing contrast over an entire image, and we have discussed it previously. This filter is the adaptive version of histogram equalization. The parameters required by this filter are as follows:

  1. Window and Border Size:  The overall image is broken up into regions that are Window Size large, and histogram equalization is applied to each region independently. Since the regions are independent, this can introduce border artifacts. To minimize these artifacts, the Border Size controls the amount of overlap between neighboring patches. Thus, the histogram equalization patch size is Window Size plus Border Size, with Border Size of overlap.
  2. Distribution Type: In the Histogram Equalization blog we discussed how the re-mapping of the pixel values can be done such that their distribution (or the cumulative distribution function) can look like any desired distribution.  Currently, the algorithm has three different distribution types.  Experimentation with each distribution type is highly encouraged!  The performance of the filter can vary significantly from one distribution type to another.
  3. Distribution Constant: this is a constant value used for controlling the distribution spread. For the Exponential and Rayleigh distributions it corresponds to the standard deviation. For the Uniform distribution it has no effect.
  4. Clip Limit:  This variable (between 0 and 1) controls the amount of histogram equalization that is to be applied to each patch.  A value of zero means no histogram equalization.
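To make the per-patch step concrete, here is a sketch of plain histogram equalization on a single 8-bit patch with the Uniform distribution type. This is illustrative only: the adaptive filter applies this idea per Window Size region, and the Border Size overlap and Clip Limit are omitted here for brevity.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Histogram-equalize one 8-bit patch: remap pixel values through the
// patch's cumulative distribution function (CDF) so the output values
// spread across the full [0, 255] range (the "Uniform" target).
std::vector<std::uint8_t> equalizePatch(const std::vector<std::uint8_t>& patch) {
    if (patch.empty()) return patch;
    // 1. Histogram of the patch's pixel values.
    std::vector<int> hist(256, 0);
    for (std::uint8_t p : patch) ++hist[p];
    // 2. Cumulative distribution function.
    std::vector<int> cdf(256, 0);
    int running = 0;
    for (int v = 0; v < 256; ++v) {
        running += hist[v];
        cdf[v] = running;
    }
    // 3. Remap each pixel through the CDF, scaled to [0, 255].
    std::vector<std::uint8_t> out(patch.size());
    const double scale = 255.0 / static_cast<double>(patch.size());
    for (std::size_t i = 0; i < patch.size(); ++i) {
        out[i] = static_cast<std::uint8_t>(cdf[patch[i]] * scale);
    }
    return out;
}
```

The adaptive version runs this on every window independently, which is precisely why overlapping borders are needed to hide the seams between neighboring patches.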

Unsharp Masking

Unsharp masking boosts the high frequencies in an image by a factor proportional to the Gain. The high-frequency component is obtained by subtracting from the image a low-frequency version of it; the low-frequency image is obtained by convolving the original image with a Gaussian of radius R. Thus R is the radius of the Gaussian and Gain is a multiplicative factor that enhances the high frequencies. A higher Gain implies enhanced edges, and also enhanced noise, since noise is high frequency.
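The out = in + Gain × (in − blur(in)) recipe can be sketched in one dimension, with a simple 3-tap [1 2 1]/4 kernel standing in for the Gaussian of radius R (an assumption made here purely to keep the example short):

```cpp
#include <cstddef>
#include <vector>

// 1D unsharp masking: the low-pass signal is a 3-tap weighted average,
// the high-pass signal is the residual (in - low), and the output adds
// the boosted residual back onto the original. Endpoints are copied.
std::vector<double> unsharpMask(const std::vector<double>& in, double gain) {
    std::vector<double> out = in;
    for (std::size_t i = 1; i + 1 < in.size(); ++i) {
        const double low = 0.25 * in[i - 1] + 0.5 * in[i] + 0.25 * in[i + 1];
        const double high = in[i] - low;   // high-frequency component
        out[i] = in[i] + gain * high;      // boost edges (and noise)
    }
    return out;
}
```

Running this on a step edge produces the characteristic overshoot and undershoot on either side of the edge, which is what the eye perceives as added sharpness.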

– Darian Muresan


Side By Side Image Comparison

Visere and Visere Medical provide side by side image comparison to allow you to easily evaluate and compare two separate images. The side by side comparison works best when the two images are the same size.

To enable side-by-side comparison, take the following two steps, as shown in the Figure above:

  1. First, enable two side viewing from the menu: View->Two Split View (Ctrl+V)
  2. Second, enable zoom-lock from the menu: View->Zoom->Lock Zoom & Scroll (Ctrl+L). This will force whatever zooming and scrolling you do in one window to be applied to the other window. The active window has a red border around it.

To test the new setting, scroll and zoom in the active image. You will notice that the other image scrolls and zooms the same way. The images are tied together at the origin. This means that if the two images are not the same size, then the second image will zoom and scroll in such a way that its origin, the upper-left corner of the image, remains locked to the origin of the first image. An example of locked zoom and scroll is shown in the two images on the left.
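Origin-locking amounts to both views sharing one zoom factor and one scroll offset, so a given image coordinate lands at the same screen position in each view. A small sketch (the struct and function names are hypothetical, not the Visere API):

```cpp
#include <utility>

// Sketch of origin-locked zoom and scroll: the active view's state is
// simply copied to the other view, so any image pixel (x, y) maps to
// the same screen position in both views.
struct ViewState {
    double zoom;     // magnification factor
    double scrollX;  // scroll offset, in image pixels
    double scrollY;
};

// Screen position of an image pixel under a given view state.
std::pair<double, double> toScreen(const ViewState& v, double x, double y) {
    return {(x - v.scrollX) * v.zoom, (y - v.scrollY) * v.zoom};
}

// Locking copies the active view's state to the other view verbatim.
ViewState lockTo(const ViewState& active) { return active; }
```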

The red border (barely visible) is on the left-side image, which means the left-side image is the active image. Any zoom-scroll is applied to the active image first and then to the right-side image. Scroll and zoom into the left image, and the right image view will follow the left image view around.

(Also note: all image processes used are applied only to the active image, the image with the red border around it.)
