Using Function Points Effectively
This article describes common uses of the function point metric in the software development organization. To demonstrate the versatility of the function point metric, we have selected two scenarios, each representing the use of the metric at a different level of the organization. First, function points are used at the IT management level as the key normalizing metric in establishing performance benchmarks used to identify and track improvements. Second, function points are often used at the organizational level as the base metric for establishing quantifiable service levels (seen primarily in outsourcing arrangements).
In each of the two scenarios, function point analysis has its greatest value when used in conjunction with other metrics. For example, simply knowing the average size of your software deliverable (expressed in function points) is of little value; however, by incorporating other data points, such as deliverable cost, duration, and defects, you can create and report on value-added measures such as the cost per function point, time to market, and defect density.
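The derived measures named above are simple ratios over raw project data. The sketch below illustrates the arithmetic; the field names and the project values are hypothetical, chosen only to show how size acts as the normalizing denominator.

```python
# Sketch of deriving value-added measures from raw project data.
# All field names and values are hypothetical, for illustration only.

project = {
    "function_points": 400,   # functional size of the deliverable
    "cost": 520_000,          # total project cost in dollars
    "duration_months": 9,     # elapsed calendar time (time to market)
    "defects": 32,            # defects recorded against the deliverable
}

# Size alone says little; dividing cost and defects by size yields
# comparable, normalized measures.
cost_per_fp = project["cost"] / project["function_points"]
defect_density = project["defects"] / project["function_points"]

print(f"Cost per function point: ${cost_per_fp:,.2f}")      # $1,300.00
print(f"Defect density: {defect_density:.3f} defects/FP")   # 0.080
```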
It is true that the function point metric is not a panacea for software development ills; however, it does afford the development team and IT management an opportunity to measure and evaluate key elements in the development environment to make more informed decisions.
IT Management Level: Establishing Performance Benchmarks
Baselining an organization's performance level has become a standard industry practice, particularly in companies whose IT organizations are required to track and improve their delivery of products and services relative to improved time to market, cost reduction, and customer satisfaction. Creation of an IT performance baseline (often referred to as benchmarking) gives an organization the information it needs to properly direct its improvement initiatives and mark progress.
Performance levels are commonly discussed in terms of delivery: for example, productivity, quality, cost, and effort. In each of these categories, function points are used as the denominator in the metrics equation. By using function points as the base measure, the organization benefits in two ways. First, because function points are applied in a consistent and logical (not physical) fashion, they are considered a normalizing metric, thus allowing for comparisons across technologies, business divisions, and organizations, all on a level playing field. Second, there is an extraordinary amount of industry baseline data, which can be used to compare performance levels among various technologies and industries and to compare internal baseline levels of performance to best-practice performance levels.
Table 1 presents examples of industry data points for productivity levels, expressed in hours per function point as a rate of delivery. These data points come from the International Software Benchmarking Standards Group (ISBSG), one of numerous sources of industry benchmark data. The ISBSG data in Table 2 shows similar rates by business area.
Table 1
ISBSG Industry Data
Function Point Size | Mainframe | Client-Server | Packaged Software | Object-Oriented
225  |  9.1 | 11.4 |  7.6 | 12.5
400  | 10.4 | 13.0 |  8.7 | 14.3
625  | 12.2 | 15.2 | 10.1 | 16.7
875  | 14.6 | 18.3 | 12.2 | 20.1
1125 | 18.3 | 22.8 | 15.2 | 25.1
Note: All values are expressed in hours per function point as a rate of delivery.
Table 2
Rates of Delivery by Business Area
Business Area | Rate of Delivery*
Accounting | 11.3
Manufacturing | 6.3
Banking | 12.0
Telecommunications | 2.0
Insurance | 27.4
Engineering | 8.1
*Expressed in hours per function point.
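Rates like those in Table 1 are often used to turn an estimated functional size into an effort estimate. The sketch below is our own illustration of that arithmetic, using the Table 1 mainframe column; the lookup structure and the linear interpolation between published sizes are assumptions, not part of the ISBSG data set.

```python
# Sketch: estimating effort from Table 1 delivery rates (hours per
# function point). Interpolation between the published sizes is our
# own assumption, for illustration only.

RATES_MAINFRAME = {225: 9.1, 400: 10.4, 625: 12.2, 875: 14.6, 1125: 18.3}

def estimate_effort_hours(size_fp: float, rates: dict) -> float:
    """Interpolate a delivery rate for size_fp and return total hours."""
    sizes = sorted(rates)
    if size_fp <= sizes[0]:
        rate = rates[sizes[0]]            # clamp below the smallest size
    elif size_fp >= sizes[-1]:
        rate = rates[sizes[-1]]           # clamp above the largest size
    else:
        for lo, hi in zip(sizes, sizes[1:]):
            if lo <= size_fp <= hi:
                frac = (size_fp - lo) / (hi - lo)
                rate = rates[lo] + frac * (rates[hi] - rates[lo])
                break
    return size_fp * rate

# A 400-FP mainframe project at 10.4 hours/FP -> 4,160 hours.
print(estimate_effort_hours(400, RATES_MAINFRAME))
```

Note that this treats the published rate as a point estimate; in practice an organization would calibrate against its own baseline data before relying on such a figure.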
The data points shown in tables 1 and 2 make the advantage of using function points clear: representative industry performance data based on function point measures is available for organizations to use as the basis for comparing their cost and performance to industry averages and best practices.
For an organization that has been engaged in the collection and analysis of its own metrics data, creating baseline information similar to the industry views displayed in tables 1 and 2 is a relatively easy task. However, most organizations do not have the advantage of readily available metrics data; therefore, they need to create a baseline of performance data from the ground up. Fortunately, a baseline can be developed relatively economically, depending on the level of systems documentation and project management data available.
The baselining process includes the quantifiable measurement of productivity and quality levels. Performance is determined according to measurements collected from a representative sampling of projects. The selection of projects is commonly based on criteria such as the following:
- The project was completed or was undergoing development during the previous 18 months.
- The labor effort to complete the project amounted to more than six staff-months.
- The project represents similar types of projects planned for future development.
- The primary technical platforms are represented.
- The project selection includes a mix of technologies and languages.
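The first two criteria are quantitative and can be screened mechanically; the remaining three (representativeness, platform coverage, technology mix) still require judgment. A minimal sketch of the mechanical screen, with a hypothetical record layout:

```python
# Sketch of applying the quantitative sampling criteria. The record
# layout and helper name are hypothetical, for illustration only.

from datetime import date

def qualifies(project: dict, today: date) -> bool:
    """Screen on recency (<= 18 months) and minimum size (> 6 staff-months)."""
    months_since_end = (today.year - project["end_date"].year) * 12 \
        + (today.month - project["end_date"].month)
    recent = months_since_end <= 18
    big_enough = project["staff_months"] > 6
    return recent and big_enough

candidates = [
    {"name": "billing-rewrite", "end_date": date(2024, 11, 1), "staff_months": 14},
    {"name": "report-tweak",    "end_date": date(2023, 2, 1),  "staff_months": 2},
]
sample = [p["name"] for p in candidates if qualifies(p, date(2025, 6, 1))]
print(sample)  # only billing-rewrite passes both screens
```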
Project data is collected (when available) on the function point size of the deliverable, level of effort, project duration, and number of defects. These measures are analyzed, and performance levels are established on a project-by-project basis. These data points can then be used to create a quantitative baseline of performance (see Figure 1). In Figure 1, data points for all projects are recorded during the baselining process. These data points create one view of an organizational baseline. The data points include an expression of functional size and rate of delivery. For our purposes, rate of delivery is expressed in terms of function points per person-month.
Figure 1. Rate of delivery during baselining
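The rate-of-delivery data point plotted for each baseline project is simply functional size divided by effort. A brief sketch, with illustrative project values:

```python
# Sketch: computing rate of delivery (function points per person-month)
# for each baseline project. The project data is illustrative only.

projects = [
    {"name": "A", "function_points": 400, "effort_person_months": 40},
    {"name": "B", "function_points": 625, "effort_person_months": 50},
]

rates = {
    p["name"]: p["function_points"] / p["effort_person_months"]
    for p in projects
}
print(rates)  # A delivers 10.0 FP/person-month, B delivers 12.5
```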
Figures 2 and 3 sort the baseline projects by type of development: Figure 2 shows all enhancement projects; Figure 3 shows all new development projects. Note the difference among the various views for a baseline project of 400 function points. The advantage of looking at the baseline data from different viewpoints is a better understanding of the impact of development type on performance levels. It would not be reasonable to expect future enhancement projects to perform at the same rate of delivery as new development projects.
Figure 2. Rate of delivery for enhancement projects
Figure 3. Rate of delivery for new development projects
Obviously, project size and complexity are contributing factors that influence productivity. However, numerous other factors also affect the capacity of an organization to define, develop, and deploy software. Assessing an organization's capacity to deliver represents the qualitative portion of the benchmarking process. A capacity analysis reveals the influence of current software practices on performance levels. Using the information from the capacity analysis, it is possible to recommend improvements in current practices, to suggest new practices, and to emphasize existing practices that have already demonstrated positive influence on productivity.
For example, we can observe from figures 1 through 3 that our capacity to deliver is influenced by size and type of development. If we hold these two variables constant while analyzing our baseline data, we can observe that there are still variations in performance data.
Figure 4 shows data from several projects that are closely related in size. Four data points fall in the range of 400 to 550 function points, yet their corresponding rates of delivery range from 8 to 18 function points per person-month. That is a significant difference in performance. The challenge now becomes one of determining the contributing factors that caused these projects to perform at different levels.
Figure 4. Rate of delivery by functional size
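Before hunting for contributing factors, it helps to quantify how large the spread actually is. A minimal sketch, using four illustrative rates within the 8-to-18 range cited above:

```python
# Sketch: quantifying the performance spread in the 400-550 FP band.
# The four rates are illustrative values within the cited 8-18 range.

import statistics

rates = [8.0, 11.5, 14.0, 18.0]  # FP per person-month, hypothetical

spread = max(rates) - min(rates)
mean = statistics.mean(rates)
cv = statistics.stdev(rates) / mean  # coefficient of variation

print(f"spread: {spread} FP/pm, mean: {mean:.3f} FP/pm, CV: {cv:.2f}")
```

A spread larger than the mean rate itself is a strong signal that factors other than size and development type are driving performance.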
We have completed many such process performance assessments using our own proprietary assessment method. Data is collected through interview and team survey sessions with each project team, covering key influence factors such as project management capabilities, requirements and design effectiveness, build and test practices, skill levels, and other environmental factors. Individual projects are analyzed, and project profiles are created. The data is also analyzed in aggregate and may be used as the basis for determining organization-wide process improvements.
The results are somewhat predictable. Typical influence factors include skill levels, effective use of front-end life cycle quality practices, tool utilization, and project management. These findings parallel, in part, those of the ISBSG database analysis, which revealed that language complexity, development platform, methodology, and application type were significant factors influencing productivity.
Analysis of the qualitative data leads an organization to the discovery of certain process strengths and weaknesses. As performance profiles are developed and contrasted with quantitative levels of performance, key software practices that are present in the higher-performing projects tend to be missing from the lower-performing projects. These practices differentiate between success and failure.
The real value of the benchmarking activity results from identifying performance levels, analyzing process strengths and weaknesses, monitoring process improvements, and comparing to industry data points. There is much to be learned from comparisons to industry data. An IT organization can see at a glance the overall effectiveness of its performance in contrast to industry benchmarks. In addition, there is an opportunity to identify industry best practices and to evaluate how these best practices will affect your organization's performance levels.