Dr. Pierce's reply to the response of authors Dr. Wilson and Dr. Aranda.
Their response states that line counts are "as good a measure of complexity...". I agree that line counts are not good for code optimization, which raises the question of how we define "good." Programming is a practical field, so what is useful is good. Complexity metrics such as McCabe's are useful for optimization and more useful than line counts for comparing programs. For example, take two subprograms, each 100 lines long with equally difficult computations: the first involves 6 variables and straight linear logic, while the second has 6 variables, 8 logical operators, and logic nested 5 levels deep. McCabe-like metrics make clear that the second will cost more labor and time to debug, to construct test cases for, to maintain in an operational environment, and to modify, and they will alert personnel other than the original programmer to a potential problem area; line counts have none of this usefulness.
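To make the contrast concrete, here is a minimal sketch (my illustration, not from the authors) of how a McCabe-style metric can be approximated: count the decision points in a subprogram and add one. The function name and the two sample snippets are hypothetical; the sketch uses only Python's standard-library `ast` module.

```python
# Hypothetical sketch: approximating McCabe's cyclomatic complexity by
# counting decision points in parsed Python source, then adding one
# (the rule of thumb for a single-entry, single-exit subprogram).
import ast

# Node types that introduce a branch in the control-flow graph.
DECISION_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler)

def cyclomatic_complexity(source: str) -> int:
    """Return decision points + 1 for the given source text."""
    tree = ast.parse(source)
    decisions = 0
    for node in ast.walk(tree):
        if isinstance(node, ast.BoolOp):
            # Each extra operand of an and/or chain adds one decision.
            decisions += len(node.values) - 1
        elif isinstance(node, DECISION_NODES):
            decisions += 1
    return decisions + 1

# Two same-length fragments: straight-line logic vs. nested branching.
straight_line = "a = x + y\nb = a * 2\n"
nested = "if a and b:\n    if c:\n        d = 1\n    else:\n        d = 2\n"

print(cyclomatic_complexity(straight_line))  # prints 1
print(cyclomatic_complexity(nested))         # prints 4
```

Both fragments could have similar line counts, yet the metric immediately distinguishes the branching one, which is exactly the signal a tester or maintainer needs.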
I agree with the authors' response that there are some coders who are "plunging into the code with constant refactoring and a minimum of design...", but is this a good thing to do? I suggest the authors expand their horizons beyond the coding stage, to the time before and after coding, and talk to personnel associated with software other than the coders. These include:
the customer ensuring that the program satisfies his requirements,
the development supervisor needing to evaluate coding progress to meet budgets and schedules,
the analyst ensuring that the problem solution is being implemented properly,
the verification people needing to understand the program well enough to construct tests,
the operation people needing to know if, when, and how this program should be used, and
the maintenance people needing to understand the program structure so they can prepare it for operations, increase its functionality and know when to send it back for another development cycle.
We have known since the 1970s that inadequate design documentation creates scheduling problems and wasted labor for those other than the original coder. It throws a proverbial monkey wrench into the gears of the lifecycle, because none of the people in these later stages have the time or money to pore over thousands of lines of code in order to accomplish their tasks. This is not being "dogmatic"; it is dealing with the facts of life[cycles].
I agree that evidence must be collected before conclusions are drawn, so I suggest the authors investigate the ACM Software Engineering Group and IEEE Computer Society efforts to improve the lifecycle. These groups have a long history of gathering evidence and reaching conclusions, and they have now jointly issued curricula recommendations that place heavy emphasis on design and documentation, including minimizing complexity [www.acm.org/education/curricula-recommendations]. While creating our university's Computer Science Department, my committee also surveyed local businesses and received the unanimous response that design and documentation should be part of our curricula. As cognizant engineer over NASA's tracking software, I found it too labor-intensive, and a cause of mission-critical schedule delays, to debug or add functionality until I enjoined my programmers to reverse engineer the software and produce the design documentation that the initial programmers should have produced instead of just "plunging into the code." While optimizing the design of the GPS constellation, after creating optimized algorithms for solving ordinary differential equations, I found it relatively easy to add needed functionality to the software because I could refer to well-documented design.
As Newton said, "If I have seen further it is by standing on ye sholders of Giants," and so we can carry on further through the software lifecycle when we can stand on the shoulders of quality design documentation.
posted by Sam Pierce
February 24, 2012 @ 12:46 AM