Metrics Estimation Analysis and Team Assignment


Introduction

In project management, project measures require careful identification, monitoring and evaluation. Inadequate measures often result in incomplete and substandard projects, while applying too many measures creates project complexity and mismatched analyses. Organizations adopt the specific measures that serve their goals; failure to measure project progress and performance weakens monitoring and evaluation. Measurements are crucial for identifying problems and establishing the current state of a program and its processes. Mistakes and errors are best pinpointed at the initial stages with an appropriate measurement tool, which yields more quantifiable accuracy for complex projects (Pressman 2006). Metrics can identify critical risks and provide resolutions before problems occur. Measurement is therefore important at the strategic, technical and project levels. An organization's goals are established first, before the questions are listed and the measures identified. For successful project development, each measure should have attributes, an evaluation, a unit and a counting rule. The measures are:

1. Support

Definition: The supportability of a system such as software can be measured by tracking specific supportability features. The developer and acquirer thereby obtain knowledge that can be directed to supportability control. System support can be described in terms of memory size, I/O (input and output), throughput, average module size, module complexity, error rate, supportability and lines of code changed.

Counting and measurement: The metric can track spare memory over time, which should not fall below the specification requirement. It can likewise track the amount of I/O held in reserve as a function of time; again, capacity should not fall below the requirement. For the process, throughput capacity is measured over time and should not fall below the specification, while average module size should not exceed it. Similar tracking applies to the number of errors, the average repair time and the average lines of code changed per deficiency.

Estimation: Measurement should start at the project level and should include project planning and monitoring, which depend entirely on the information gathered through the measurement process (Pressman 2006).

Analysis: Metrics are representations of the software and of the process that produces it. More advanced process metrics result from a more mature software development process, and accurate data are required for a good metrics process. Measured data yield indicators, and indicator quality influences the analysis, since both objective and subjective measures are needed to determine the current state of a program. Objective data include staff hours, software lines of code, current function points, existing components, the list of items to be tested, the number of coded units and the expected changes and errors. Subjective data, on the other hand, may be based on the feelings of individuals or a group's comprehension of certain features.
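As a minimal sketch of this kind of supportability tracking, the Python fragment below checks observations of each tracked feature against its specification limit over time. The metric names and threshold values are illustrative assumptions, not figures from this paper.

```python
# A sketch of tracking supportability metrics against specification limits.
from dataclasses import dataclass

@dataclass
class SpecLimit:
    threshold: float
    must_exceed: bool  # True: value must stay above the threshold; False: must stay at or below it

# Hypothetical specification limits for the tracked supportability features.
SPEC = {
    "spare_memory_mb": SpecLimit(64.0, True),    # spare memory must stay above 64 MB
    "reserved_io_pct": SpecLimit(20.0, True),    # reserved I/O capacity must stay above 20%
    "throughput_tps":  SpecLimit(500.0, True),   # throughput must stay above 500 tx/s
    "avg_module_sloc": SpecLimit(200.0, False),  # average module size must not exceed 200 SLOC
}

def violations(series):
    """Return, per metric, the indices of observations over time that break the spec."""
    out = {}
    for metric, values in series.items():
        limit = SPEC[metric]
        out[metric] = [
            i for i, v in enumerate(values)
            if not (v > limit.threshold if limit.must_exceed else v <= limit.threshold)
        ]
    return out

# Spare memory sampled at four successive builds; build 2 dips below spec.
print(violations({"spare_memory_mb": [128.0, 96.0, 48.0, 70.0]}))
# -> {'spare_memory_mb': [2]}
```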
Collected data must point to the issues to be addressed, which requires understanding what each metric means: performing multiple data sourcing, studying the data collection process at a lower level, separating the collected data, weighting the different data sources and understanding the development process.

2. Risk

Definition: To run projects effectively, risks have to be identified and solutions provided appropriately. Users should be aware of existing and potential limitations and act accordingly. The levels of risk that can occur in software development environments are well understood. Avoiding risk requires understanding all phases and data measurements, and one should look beyond the immediate circumstances that produce potential risks.

Evaluation: Risks are evaluated by the impact they have on overall performance, phasing and the project. Some risks occur frequently but have low impact, while others rarely happen yet can stall or impede project progress. Tabulating risks is commonplace in project management, and such documentation helps the project manager anticipate both the risks likely to occur and the merely potential ones.

Analysis and measurement: Risks are assessed by rating their likelihood of occurrence as low (L), medium (M) or high (H), and their degree of impact on the same scale. Risks such as software failure arising from poor software size estimation have implications for cost and schedule. To avert them, each risk should be assigned to a responsible person who will track it and take appropriate action (Humphrey 2006).

3. Productivity

Definition: Productivity is a function of the amount of output a unit produces given the resources available. A team in a unit is regarded as productive when teamwork and output are high, and companies often try to bring optimal teams on board. Teams can spend time consulting and troubleshooting problems, which affects the learning curve, especially when people have little knowledge of what is going on. The company should be prepared for change and have proper structures to sustain productivity; no change should hamper the efficiency of its systems.

Evaluation: Productivity has implications for cost driver multipliers, the amount of code and any scale factors relating instruction counts to man-months or money. Software cost attributes can be assessed in terms of language experience, cost constraints, database size, turnaround time and virtual machine experience. Others include virtual machine volatility, the quantity and nature of software tools, modern programming practices and storage constraints. Known factors that influence productivity include application experience, timing constraints, reliability requirements and product complexity; the factor with the greatest impact is personal or team capability. All of these factors influence the software productivity range to varying degrees (Humphrey 2006).

Unit and counting rule: Software productivity is measured in the number of lines of code, expressed as source lines of code (SLOC), that have been produced, tested and documented. This is rated per staff-month and must yield an acceptable, usable system.
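A minimal sketch of the L/M/H risk tabulation just described follows below: each risk carries a likelihood rating, an impact rating and an assigned owner, and the register is sorted by a simple combined score. The example risks, scores and owner names are illustrative assumptions.

```python
# A sketch of an L/M/H risk register with assigned owners.
RATING = {"L": 1, "M": 2, "H": 3}

def risk_score(likelihood, impact):
    """Combine likelihood and impact into a simple ordinal exposure score (1-9)."""
    return RATING[likelihood] * RATING[impact]

risks = [
    # (description, likelihood, impact, owner responsible for tracking)
    ("Poor software size estimation inflates cost/schedule", "M", "H", "Project manager"),
    ("Key developer leaves mid-project", "L", "H", "HR lead"),
    ("Requirements creep", "H", "M", "Analyst"),
]

# Tabulate the register, highest exposure first.
for desc, lik, imp, owner in sorted(risks, key=lambda r: -risk_score(r[1], r[2])):
    print(f"score {risk_score(lik, imp)}  [{lik}/{imp}]  {desc}  -> {owner}")
```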
The formula is:

Productivity = Size / Effort, where Effort = c × Size^σ × ∏(cost driver multipliers) (Boehm 2007)

Estimation: The multipliers are factors such as support tool efficiency. Tools must perform within the limits of the available hardware, the prevailing skills and personal experience. The factors discussed above are cost multipliers and reflect the effect each factor exerts on the total development cost. Productivity can be enhanced by employing the right people; the most important multipliers are reliability and complexity.

Analysis: Doubling the software size more than doubles the cost: the base cost doubles and roughly a 30% size penalty is added. The penalty reflects the inefficiency that grows with product size: integration issues grow as the software grows, and the larger teams needed to complete the project become less efficient as their numbers increase.

4. Cost and schedule

Definition: Cost and schedule can be estimated by analogy, expert or engineering opinion, parametric models, engineering build-up, cost performance report analysis and cost estimating relationships. Programs rely on the available data, which in turn depends on the current lifecycle position and the definition of scope.

Counting: A scaling factor is applied to analogous effort according to expert opinion. Effort data are preferable to cost data; if cost data are the only option, they should be normalized to the same base year using current and inflation indices. Expert opinion is used to count low-cost, lower-level and large cost elements. Statistical formulas treat cost, schedule and resources as dependent variables; the independent variables, called cost drivers, produce the changes in cost, schedule and resources. As a rule of thumb, parametric models should be applied through multiple models that act as checks and balances on one another, since no parametric model yields 100% accuracy.

Evaluation: Expert opinion relies on experience of similar programs, while parametric models are stratified against internal databases that can simulate environments drawn from many analogous programs. Cost and schedule estimation relies on data from similar completed efforts. Analogous efforts at the total system level can be difficult to obtain, though analogous efforts at subsystem and lower levels are easier to find. The required effort can be determined from the inputs of experts with extensive experience of similar programs, using inputs from several independent sources. Effort data are more valuable than cost data, since costs may have been estimated under dissimilar contracting situations.

Estimation: Effort is estimated by summation over broad functional breakouts of tasks, down to the lowest work level. Effort can also be estimated from algebraic equations relating effort and cost to other independent variables. Simple factors such as cost per line of code are used for the same contractors and programs. Various statistical packages exist for developing cost estimating relationships (CERs), and they may be a worthwhile investment for both current and future cost estimates as well as task schedules.
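To make the effort relationship concrete, here is a minimal Python sketch of Effort = c × Size^σ × ∏(multipliers). The coefficients c = 2.4 and σ = 1.05 are the published basic-COCOMO organic-mode values (Boehm); the reliability and complexity multiplier values in the demo are illustrative assumptions, not figures from this paper.

```python
# A sketch of the COCOMO-style effort and productivity relationship.
import math

def effort_person_months(ksloc, c=2.4, sigma=1.05, multipliers=()):
    """Effort = c * Size^sigma * product of cost driver multipliers (size in KSLOC)."""
    return c * (ksloc ** sigma) * math.prod(multipliers, start=1.0)

def productivity(ksloc, effort_pm):
    """Productivity = Size / Effort, in KSLOC per person-month."""
    return ksloc / effort_pm

if __name__ == "__main__":
    # 32 KSLOC product with hypothetical reliability (1.15) and complexity (1.30) multipliers.
    e1 = effort_person_months(32, multipliers=(1.15, 1.30))
    e2 = effort_person_months(64, multipliers=(1.15, 1.30))
    print(f"32 KSLOC: {e1:.0f} PM, productivity {productivity(32, e1):.3f} KSLOC/PM")
    # sigma > 1 makes the model superlinear: doubling size more than doubles effort.
    print(f"64 KSLOC: {e2:.0f} PM -> effort ratio {e2 / e1:.2f} (over 2x)")
```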
Analysis: Cost, schedule and effort performance are compared at the unit level against similar units. The method derives from the government's most-probable-cost estimating approach and is usually applied in source selections before any contractor solutions have been determined. It is labor intensive and requires engineering support, but it provides better assurance than other methods because the whole development scope feeds into the resulting estimates.

Counting rule: A rule of thumb is essential when parametric models are used to estimate programs: apply multiple models so that they check and balance one another (Boehm 2007). A model should come within 20% of actual costs 70% of the time, for projects within its class.

5. Requirements

Definition: Any change of requirements is a potential risk to the software. Requirements need control and baselining to limit cost impacts, schedule slips and defects, and the evolution of requirements should stay in step with the evolution of the software. Good software definition requires adequately defined requirements, clarification of late requirements, traced requirement changes, controlled creep and baselined late requirements (Lyons 2005).

Counting: Counting should be done through the system requirement specification, and requirements need tracking through the phases of development. The design process translates user-specified requirements into the implicit requirements needed for code to be derived from the solutions (Glass 2004).

Analysis and evaluation: Implicit requirements should be capable of being fulfilled, traced to the explicit needs and resolved during design and testing. The configuration lead thereby guarantees that the final system fulfils the original user's requirements. As requirements evolve through design into the software solution, a snowball effect occurs during the conversion of the original requirements into design and finally code. Requirements can fail to be integrated and drop through developmental holes, leading to system performance malfunctions.

6. Scrap and rework

Definition: Scrap and rework is a major cost factor in software development, as work products that do not meet requirements are scrapped or reworked. Conformance costs are the normal costs of preventing defects or reducing the conditions that lead to scrapping or reworking software. Non-conformance costs arise from the frequency of redoing tasks because of errors, defects and initial failures.

Counting and measurement: Rework and scrap are measured by the costs incurred in the rework cycle. Rework constitutes a large part of program work content and cost; it is measured as the cost of fixing failures that occur after systems are in operation, in the form of scrap and rework cost.

Analysis and evaluation: The costs associated with rework are very high, amounting to about 40% of all software development costs. Defects present risks to cost, schedule and performance. To limit the amount of rework, procedures must be implemented to detect defects early, the causes of defects must be examined properly, and contractors and developers should be given incentives to detect and prevent defects early.
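The split between conformance and non-conformance cost can be tallied with a few lines of arithmetic. The sketch below is illustrative: the category names and dollar figures are assumptions chosen to land near the ~40% rework share quoted above.

```python
# A sketch of cost-of-quality accounting: conformance vs non-conformance costs.
costs = {
    "prevention": 120_000,  # conformance: training, process, tooling
    "appraisal":   80_000,  # conformance: reviews, inspections, testing
    "rework":     260_000,  # non-conformance: redoing defective work
    "scrap":       40_000,  # non-conformance: discarded work products
}

development_total = 800_000  # hypothetical total development cost

nonconformance = costs["rework"] + costs["scrap"]
print(f"Non-conformance cost: ${nonconformance:,}")
print(f"Share of development cost: {nonconformance / development_total:.0%}")
# With these illustrative numbers the share is ~38%, close to the ~40%
# figure the text quotes for typical software development.
```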
7. Project duration days

Definition: Project days are the number of days required to conceptualize and roll out the project. The duration is obtained by summing the days required to complete all major tasks and sub-tasks. The days assigned to tasks take into account the work breakdown structure and task determination. The project manager can consult a business case to establish how much time each activity takes. The logical sequencing of activities draws on scheduling methods such as Gantt charts, network diagrams and PERT to complete the project.

Measurement: Project days are measured by identifying the specific activities and ordering them logically or sequentially, meaning that certain activities must be done before others. Times in days, weeks or months are assigned to the specific activities and generated in Microsoft Project 2007 or Excel 2007.

Analysis: The analysis determines which method is most applicable to completing the project. Gantt charts, critical path and network diagrams apply to different project phases: tools such as brainstorming and Gantt charts are valuable for cost estimation, while critical path and network diagrams apply to estimating project duration days.

8. Size

Definition: Software size is important, and poor sizing is one of the reasons programs fail. Size estimates that are too low lead to inadequate funding and insufficient time, and poor size estimates therefore result in cost and schedule overruns. A variety of software sizing techniques should be used, not a single source or method; relying on a single source raises costs and risks.

Evaluation: The common size inaccuracies are ordinary statistical inaccuracies, which can be addressed by using many data sources and estimation methodologies. Multiple organizations likewise help with estimation, checks and analysis. The earlier the estimates are made, the vaguer the software definition and hence the larger the errors. The two main sizing approaches compare as follows:

Function points                    Source lines of code
Based on specification             Based on analogy
Does not rely on language          Depends on language
Leans on users                     Leans on design
Varies with counting conventions   Varies with language
Expands to SLOC                    Converts to function points

Table 1: Software size estimation

Analysis: It is good to base estimates on multiple sources to correct discrepancies, and accuracy can be improved by estimating at the smallest unit of each component (Humphrey 2006), as sketched below. It is crucial to measure, track and control the growth of software size at all stages; analysis shows trends in software size and functionality.

Measurement: Software lines of code (SLOC), function points and feature points are common measurement metrics. Actual software size is tracked incrementally against the original estimates across the total build, as stated in the contract data requirements list (CDRL).
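Here is a minimal sketch of that smallest-unit, multi-source sizing: each component receives a three-point (low / most likely / high) SLOC estimate, combined with the PERT formula also used for duration estimation, and the component estimates are summed. The component names and numbers are hypothetical.

```python
# A sketch of PERT-style three-point size estimation per component.
components = {
    # name: (low, most likely, high) SLOC estimates from different sources
    "parser":    (800, 1200, 2200),
    "scheduler": (1500, 2000, 3600),
    "ui":        (2000, 3500, 7000),
}

def pert_mean(low, likely, high):
    """Beta-PERT expected value: (low + 4*likely + high) / 6."""
    return (low + 4 * likely + high) / 6

def pert_sd(low, high):
    """Beta-PERT standard deviation approximation: (high - low) / 6."""
    return (high - low) / 6

total_mean = sum(pert_mean(*est) for est in components.values())
# Treating component estimates as independent, variances add, so the
# total standard deviation is the root sum of squares.
total_sd = sum(pert_sd(lo, hi) ** 2 for lo, _, hi in components.values()) ** 0.5

print(f"Expected size: {total_mean:,.0f} SLOC (+/- {total_sd:,.0f})")
```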
9. Complexity

Definition: Complexity derives its meaning from the description of the overall design and the actual code, on the assumption that design complexity correlates directly with design errors, and code complexity with latent defects. High-risk applications are easy to identify from the properties of each correlation, and revision or additional testing can be directed accordingly.

Evaluation and measurement: Complexity is determined, first, by the number of modules invoking a given application, called the fan-in, and, second, through the structure, meaning the number of paths within a module. Other accepted techniques determine complexity through automated tools.

Analysis: The intrinsic and extrinsic properties of software mostly correlate with its size and its module interfaces. These can be adjusted by changing the number of modules invoking a given application and the number of modules it invokes. Complexity metrics help establish the number and types of tests required to cover the design, specifically the interfaces and calls, together with the coded logic, meaning the branches and statements (Humphrey 2006).

10. Quality assurance

Definition: Quality is the most crucial aspect of customer or user satisfaction. Product quality involves reliability, serviceability, cost, dependability, functionality and aesthetic appeal, among others. Quality determines the credibility of inputs, processes and outputs, and almost all companies have adopted various certifications to meet customer or user requirements and satisfaction (Leitner 2007).

Evaluation: Quality can be difficult to measure because definitions of quality vary among quality gurus such as Joseph Juran, W. Edwards Deming and Kaoru Ishikawa. Software as a discipline takes its definition of quality from the IEEE: the degree to which a system, its components and its process meet specified requirements and exceed customer needs or expectations (IEEE 2005).

Measurement: Owing to the varied definitions of quality, programs should be measured against measurable attributes, for instance a low failure density rate; other users may base measurement on serviceability and maintenance. Definitions therefore rest on measurable quality attributes that will consistently satisfy and fulfil customer expectations and demands.

Analysis: Quality can be assessed by surveying customers for their opinions on the cost, delivery and functionality of the product. If customers are delighted, the product can be inferred to be of superior quality. Customer satisfaction leads to more sales and referrals, increased demand for the product, employment assurance for workers and increased market share for the company. Quality can be improved through planning, continual improvement and control, using statistical control charts and other tools; the aim should be to optimize inputs, process, materials, human resources and time. Control charts can be used as in the chart below (Leitner 2007).

[Figure 1: Software quality control using control charts]
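As a minimal sketch of the control-chart idea, the fragment below computes c-chart limits (center ± 3√center) for defect counts per build, a common software-quality use of statistical process control. The defect data are illustrative assumptions.

```python
# A sketch of a c-chart for defect counts per build.
import math

defects_per_build = [7, 4, 6, 9, 5, 8, 21, 6, 5, 7]  # build 7 looks suspect

c_bar = sum(defects_per_build) / len(defects_per_build)   # chart center line
ucl = c_bar + 3 * math.sqrt(c_bar)                        # upper control limit
lcl = max(0.0, c_bar - 3 * math.sqrt(c_bar))              # lower control limit (floored at 0)

print(f"center={c_bar:.1f}, LCL={lcl:.1f}, UCL={ucl:.1f}")
for i, c in enumerate(defects_per_build, start=1):
    flag = "OUT OF CONTROL" if not (lcl <= c <= ucl) else ""
    print(f"build {i:2d}: {c:3d} {flag}")
# Build 7 (21 defects) falls above the UCL and would trigger investigation.
```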
11. Function points

Definition: Function points are the weighted sum of different factors related to user requirements: inputs, outputs, logic files, inquiries and interfaces. The definition has been extended to cover software functionality, which relates to feature points.

Counting: Function points follow a specific counting procedure. First, the number of each function type (inputs, outputs, logic files, inquiries, interfaces) is tallied. The unadjusted total is obtained by applying complexity weights to each function point type; summing the complexity-weighted counts across all types gives the function point count, which is then adjusted. The final figure can be converted into a viable estimate of the development resources required (IEEE 2005).

Analysis: The analysis counts the numbers of inputs, outputs, logic files, inquiries and interfaces, multiplies the counts by conventional weights, and then adjusts the totals for complexity according to the estimator's judgment of the software. These judgments are domain-specific and cover issues such as distributed data processing, data communications, performance level, transaction rates, the amount of on-line data entry, end-user efficiency, the potential for reuse, ease of installation, level of operations, change, and the possibility of multiple-site applications. This is shown below:

             Simple   Average   Complex   Total
Inputs       3 × 1    5 × 3     7 × 2     32
Outputs      5 × 2    6 × 4     8 × 1     42
Inquiries    4 × 1    8 × 1     9 × 1     21
Files        6 × 1    9 × 1     12 × 1    27
Interfaces   2 × 3    6 × 1     10 × 1    22
Unadjusted function points = 144

Table 2: Determination of function points

Estimates: Feature points are meant to measure real-time systems software with higher algorithmic complexity. Feature points add an algorithm parameter that can be assigned a default weight, and the method lowers the empirical weights given to logical data files, reflecting the reduced importance of logical files in real-time systems. For applications with similar numbers of algorithms and logical data files, the function point and feature point counts give similar values; where there are more algorithms than files, the feature point count exceeds the function point count.

Evaluation: Function points are good for early estimates, though they are affected by changes in system or software requirements. A few important databases have been developed to estimate function points by analogy with earlier software applications. Function points are difficult to estimate at the initial stage of systems development, and the complexity factors used in the equations can be subjective, being left to the analyst's discretion. Few automated tools can count function points, whether adjusted or unadjusted, which makes comparison between or among programs difficult. Inconsistency can arise when the function points of a single program are calculated by analysts with differing points of view.

12. Effort hours

Definition: Effort is estimated from a range of lines of code with an assigned constant, relating lines of code to months of effort. The data collected should relate output to input effort in months and durations. Effort and work duration are largely interchangeable, so their definitions are intertwined.

Measurement: Effort hours are measured by assigning durations to tasks, followed by indicators. If an indicator calls for cutting the completion duration in half, roughly double the effort is required, meaning twice as many people perform the operation.

Analysis: The effort required is learned through lessons, hard knocks, failures and program measures. Relationships between duration and effort are complex and mostly non-linear: manpower demonstrates effort and follows complex power functions of software size in duration analysis.
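The unadjusted function point count in Table 2 reduces to a weighted sum, sketched below in Python. The weights are those shown in the table; the per-level item counts are inferred from the row totals (some were garbled in the source), so treat them as a reconstruction.

```python
# A sketch of the unadjusted function point calculation behind Table 2.
WEIGHTS = {  # (simple, average, complex) weights per function type
    "inputs":     (3, 5, 7),
    "outputs":    (5, 6, 8),
    "inquiries":  (4, 8, 9),
    "files":      (6, 9, 12),
    "interfaces": (2, 6, 10),
}

COUNTS = {  # number of simple / average / complex items of each type
    "inputs":     (1, 3, 2),
    "outputs":    (2, 4, 1),
    "inquiries":  (1, 1, 1),
    "files":      (1, 1, 1),
    "interfaces": (3, 1, 1),
}

def unadjusted_fp(weights, counts):
    """Sum weight * count over every function type and complexity level."""
    return sum(w * c for t in weights for w, c in zip(weights[t], counts[t]))

print(f"Unadjusted function points: {unadjusted_fp(WEIGHTS, COUNTS)}")  # 144, matching Table 2
```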
Conclusion

Project management metrics and estimation can pinpoint mistakes and errors at the initial stages through appropriate measurement tools, providing more quantifiable accuracy for complex projects. Metrics identify critical risks and provide resolutions before problems occur. Measurement is therefore important at the strategic, technical and project levels. An organization's goals are established first, before the questions are listed and the measures identified. For successful project development, each measure should have attributes, an evaluation, a unit and a counting rule.

References

Albrecht, AJ 1979, 'Measuring Application Development Productivity', Proceedings of the IBM Applications Development Symposium, California.
Boehm, BW 2007, Software Engineering Economics, Prentice-Hall, Inc, New Jersey.
Campbell, L & Koster, B 2004, Software Metrics: Adding Engineering Rigor to a Currently Ephemeral Process, McGrummwell.
DeMarco, T 2006, Controlling Software Projects, Yourdon Press.
Ferens, DV & Christensen, DS 2005, Does Calibration Improve the Predictive Accuracy of Software Cost Models, CrossTalk.
Glass, RL 2004, Building Quality Software, Prentice-Hall, Inc, Englewood Cliffs.
Hetzel, B 2003, Making Software Measurement Work: Building an Effective Measurement Program, QED Publishing Group, Boston.
Humphrey, WS 2006, Managing the Software Process, Addison-Wesley Publishing Company, Inc.
IEEE 2005, 'IEEE Standard Glossary of Software Engineering Terminology', Institute of Electrical and Electronics Engineers, Inc, IEEE Std 610.12-1990.
Kerzner, H 2009, Project Management: A Systems Approach to Planning, Scheduling, and Controlling, John Wiley & Sons.
Leitner, A 2007, Statistical Process Control, GRIN Verlag.
Lock, D 2007, Project Management, Gower Publishing, Ltd.
Lyonnet, P 2001, Tools of Total Quality: An Introduction to Statistical Process Control, Springer.
Lyons, RP 2005, 'Acquisition Perspectives: F-22 Advanced Tactical Fighter', Boldstroke Senior Executive Forum on Software Management.
Pressman, RS 2006, Software Engineering: A Practitioner's Approach, McGraw-Hill, Inc, New York.
Shim, J & Siegel, J 2008, Operations Management, Barron's Educational Series.
