Using Latency to Evaluate Computer System Performance

Olawuyi J.O. ¹, Olawuyi J.O. ², Fagbohunmi S.G. ³, Olawuyi O.M. ⁴, Mgbole F. ⁵

¹ Abia State Polytechnic

GJCST Volume 14 Issue G5


Building high-performance computer systems requires an understanding of the behaviour of systems and what makes them fast or slow. In addition to our file system performance analysis, we have a number of projects in measuring, evaluating, and understanding system performance. The conventional methodology for system performance measurement, which relies primarily on throughput-sensitive benchmarks and throughput metrics, has major limitations when analyzing the behaviour and performance of interactive workloads. The increasingly interactive character of personal computing demands new ways of measuring and analyzing system performance. In this paper, we present a combination of measurement techniques and benchmark methodologies that address these problems. We use simple methods for making direct and precise measurements of event-handling latency in the context of a realistic interactive application. We analyze how results from such measurements can be used to understand the detailed behaviour of latency-critical events. We demonstrate our techniques in an analysis of the performance of two Windows releases, Windows 9x and Windows XP Professional. Our experience indicates that latency can be measured for a class of interactive workloads, providing a substantial improvement in the accuracy and detail of performance information over measurements based strictly on throughput.
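The core idea in the abstract, timestamping an event when it is posted and again when its handler completes, can be illustrated with a minimal sketch. This is not the authors' instrumentation; it is a hypothetical Python event queue (the names `post_event` and `handle_events` are invented for illustration) showing how per-event handling latency, rather than aggregate throughput, can be recorded:

```python
import time
from collections import deque

# Simple event queue: each entry records the moment it was posted.
events = deque()

def post_event(name):
    """Enqueue an event together with a high-resolution posting timestamp."""
    events.append((name, time.perf_counter()))

def handle_events():
    """Drain the queue; return (event, latency-in-seconds) pairs.

    Latency here is the time from posting to the end of handling,
    which is what an interactive user actually perceives.
    """
    latencies = []
    while events:
        name, posted = events.popleft()
        # ... the real work for the event would happen here ...
        latencies.append((name, time.perf_counter() - posted))
    return latencies

post_event("keypress")
post_event("mouse_move")
for name, lat in handle_events():
    print(f"{name}: {lat * 1e6:.1f} microseconds")
```

A throughput benchmark would report only how many such events complete per second; recording each event's individual latency, as above, exposes the long-tail delays that dominate perceived interactive performance.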


Funding

No external funding was declared for this work.

Conflict of Interest

The authors declare no conflict of interest.

Ethical Approval

No ethics committee approval was required for this article type.

Data Availability

Not applicable for this article.

Olawuyi J.O. 2015. "Using Latency to Evaluate Computer System Performance". Global Journal of Computer Science and Technology - G: Interdisciplinary, Volume 14, Issue G5.


Journal Specifications

Crossref Journal DOI 10.17406/gjcst

Print ISSN 0975-4350

e-ISSN 0975-4172

Version of record

v1.2

Issue date

February 5, 2015

Language

English


Article Metrics
Total Views: 8662
Total Downloads: 2294

