Quantitative Analysis of Fault and Failure Using Software Metrics

Shital V. Tate, S. Z. Gawali

Bharati Vidyapeeth Deemed University


Article Fingerprint

ResearchID: CSTSDE58VLR


Abstract

Program verification tools require formal specifications of correct program behaviour, but writing such specifications by hand is difficult and laborious. Automatic specification-mining techniques have been proposed to fill this gap, yet they suffer from 90-99% false positive rates. Accurate specifications are nonetheless valuable: they support program testing, optimization, refactoring, documentation, and, most importantly, debugging and repair. To address this problem, we propose to augment a temporal-property miner by incorporating code quality metrics. We measure code quality by extracting additional information from the software engineering process, drawing on code that is more likely to be correct as well as code that is less likely to be correct. When used as a preprocessing step for an existing specification miner, our technique identifies which inputs are most indicative of correct program behaviour, allowing off-the-shelf miners to learn the same number of specifications from only 45% of their original input.
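The preprocessing step the abstract describes (rank the miner's input traces by the quality of the code they come from, then keep only the most trustworthy fraction) could be sketched roughly as follows. This is an illustrative Python sketch only: the metric names (`churn`, `complexity`, `clone_density`), the scoring formula, and its weights are assumptions for demonstration, not the authors' actual quality model.

```python
from dataclasses import dataclass, field

@dataclass
class Trace:
    """A program trace paired with quality metrics of its source code (hypothetical fields)."""
    events: list = field(default_factory=list)  # sequence of observed API events
    churn: float = 0.0          # recent-change rate of the source file (higher = riskier)
    complexity: float = 0.0     # cyclomatic complexity of the enclosing function
    clone_density: float = 0.0  # fraction of duplicated code nearby

def quality_score(t: Trace) -> float:
    # Illustrative heuristic: lower churn, complexity, and clone density
    # suggest the code (and hence the trace) is more likely to be correct.
    return 1.0 / (1.0 + t.churn + 0.1 * t.complexity + t.clone_density)

def select_inputs(traces: list, keep_fraction: float = 0.45) -> list:
    """Keep the highest-quality fraction of traces as input for an off-the-shelf miner."""
    ranked = sorted(traces, key=quality_score, reverse=True)
    k = max(1, int(len(ranked) * keep_fraction))
    return ranked[:k]

# Example: three traces from code of varying quality.
traces = [
    Trace(churn=0.0, complexity=1.0, clone_density=0.0),   # stable, simple code
    Trace(churn=5.0, complexity=10.0, clone_density=1.0),  # churned, complex, cloned
    Trace(churn=1.0, complexity=2.0, clone_density=0.5),
]
selected = select_inputs(traces)  # keeps the single highest-quality trace
```

The 45% figure from the abstract appears here as the default `keep_fraction`; in practice such a threshold would be tuned empirically against the miner's false positive rate.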


Funding

No external funding was declared for this work.

Conflict of Interest

The authors declare no conflict of interest.

Ethical Approval

No ethics committee approval was required for this article type.

Data Availability

Not applicable for this article.

How to Cite This Article

Shital V. Tate. 2012. “Quantitative Analysis of Fault and Failure Using Software Metrics”. Global Journal of Computer Science and Technology - C: Software & Data Engineering (GJCST-C), Volume 12, Issue C12.


Journal Specifications

Crossref Journal DOI 10.17406/gjcst

Print ISSN 0975-4350

e-ISSN 0975-4172

Version of record: v1.2

Issue date: August 21, 2012

Language: en

Article Metrics

Total Views: 10174
Total Downloads: 2655

