Analyzer

The Analyzer file extension is “.lra”.

1. Cross results option: Allows you to compare two “.lrr” result files as part of a benchmarking test.

2. Session explorer: Contains the “.lrr” path, test period, duration, average throughput, total throughput, average hits per second, total hits, transaction response times, and transaction status.

3. Graphs: Allows you to add and delete graphs.

4. Properties: Allows you to include/exclude think time and generate percentile response times.

5. Controller output messages: Displays Controller error messages, which are helpful during analysis.

6. User data: Allows you to record your own notes about the test.

7. Raw data: On request, you can export the raw data and send it to the architecture team for analysis.

8. Graph data: Provides the underlying data points for a graph.

9. Legend: Identifies which color represents which measurement.
Scale: The scaling factor applied to a measurement so that all measurements fit on the same graph.

10. Granularity: The time interval between two data points on a graph.
NOTE: The minimum granularity for the Throughput and Hits per Second graphs is 5 seconds.
For all remaining graphs it is 1 second.
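To see the effect of granularity, here is a minimal sketch (illustrative only, not Analyzer's actual implementation) that averages per-second hit counts into 5-second buckets; a coarser granularity smooths out short spikes:

```python
def aggregate(samples, granularity):
    """Average per-second samples into buckets of `granularity` seconds."""
    return [
        sum(samples[i:i + granularity]) / len(samples[i:i + granularity])
        for i in range(0, len(samples), granularity)
    ]

# Per-second hits with a spike at t=3; a 5-second granularity hides it.
hits_per_second = [10, 12, 11, 50, 9, 10, 11, 12, 10, 11]
print(aggregate(hits_per_second, 5))  # → [18.4, 10.8]
```

Note how the spike of 50 hits/second disappears into the 18.4 average, which is why drilling down to a finer granularity matters when analyzing anomalies.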

11. 90th percentile:
90 percent of the transactions complete within this response time.
Step 1: Sort all the response times in ascending order.
Step 2: Discard the top 10% of values.
Step 3: The highest remaining value is the 90th percentile response time.
Note 1: Typically only the 90th percentile response time is reported to the client.
Note 2: Based on client requirements, you can also generate the 80th, 85th, 90th, etc.
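The steps above can be sketched as follows. This is a simple nearest-rank approach; Analyzer's exact calculation may use interpolation and differ slightly on small samples:

```python
def percentile(response_times, pct=90):
    """Nearest-rank percentile, following the three steps:
    sort ascending, discard the top (100 - pct)%, take the
    highest remaining value."""
    ordered = sorted(response_times)               # Step 1: ascending order
    keep = max(1, int(len(ordered) * pct / 100))   # Step 2: drop top 10%
    return ordered[keep - 1]                       # Step 3: highest remaining

times = [1.2, 0.8, 3.5, 2.1, 1.9, 0.7, 2.8, 1.1, 4.2, 1.5]
print(percentile(times, 90))  # → 3.5 (the outlier 4.2 is discarded)
```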

12. Reports:
By default you can generate Word (doc), HTML, Crystal, and PDF reports.
Reporting:
Once the test is completed, I export the response times to Excel and prepare a comparison report.

Comparison report:
It compares the 90th percentile response times with the baseline response times from previous test results, and I maintain a RAG (Red/Amber/Green) status. In another tab, I copy the merged graphs to help interpret the test results. Apart from the comparison report, we prepare a quick analysis summary which contains the objective, scope, scenario design, test environment, and observations in terms of resource utilization, high response times, and Controller and web server logs.
I send a mail requesting the AWR and NMON reports for further analysis.
I prepare a PPT by analyzing all the supporting files (AWR, NMON, etc.), covering the objective, observations, environment comparison, high-response-time transactions, and root causes, to present to the stakeholders.
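A RAG status can be assigned with a simple threshold rule. The sketch below is illustrative: the 10%/25% thresholds are assumptions, not a standard; real projects set these with the client:

```python
def rag_status(current, baseline, amber_pct=10, red_pct=25):
    """Compare a current 90th percentile response time against baseline.
    Illustrative thresholds (assumed, project-specific in practice):
      Green -> no more than amber_pct% slower than baseline
      Amber -> between amber_pct% and red_pct% slower
      Red   -> more than red_pct% slower
    """
    change = (current - baseline) / baseline * 100  # % degradation
    if change <= amber_pct:
        return "Green"
    if change <= red_pct:
        return "Amber"
    return "Red"

# Example: baseline of 2.0 s against three current runs
for current in (2.1, 2.4, 2.8):
    print(current, rag_status(current, 2.0))
# → 2.1 Green / 2.4 Amber / 2.8 Red
```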

What is your approach to analyzing the statistics?
(or)
What process do you follow to identify a bottleneck?
A:
Once the test has finished, I compare the derived statistics with the expected statistics. If they are not comparable, I start the process of finding the root cause:
- Client-side statistics analysis (Analyzer: throughput, hits per second, response times).
- Server-side statistics analysis (hardware- and OS-level statistics).
- Application-side statistics (methods, I/O operations, DB, EJB, packages, etc.).
- Configuration settings analysis (concurrency limits, connection limits, thread limits, etc.).
