Web click & script protocol


The Web Click & Script protocol records the actions performed against the browser; in other words, it records only browser-specific actions. There are no correlation values in the VuGen script.
Functions (a minimal sketch follows below):
1. web_browser(): Performs an action on the browser.
2. web_edit_field(): Enters data into a text field / input control.
3. web_image_submit(): Emulates a user clicking an image that fires a submit request.
4. web_image_link(): Emulates a user clicking an image that is a hypertext link.
5. web_list(): Selects an item from a list control / drop-down list.
6. web_radio_group(): Selects one button from a radio button group.
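A minimal Click & Script sketch, assuming a hypothetical login form; the URL, object names, snapshot files and values are illustrative, and the exact DESCRIPTION/ACTION arguments depend on what VuGen records for your application:

Action()
{
    // Navigate to the application home page (hypothetical URL).
    web_browser("Home",
        DESCRIPTION,
        ACTION,
        "Navigate=http://myapp.example.com/login",
        LAST);

    // Type a value into the user-name edit field (hypothetical object name).
    web_edit_field("username",
        "Snapshot=t1.inf",
        DESCRIPTION,
        "Type=text",
        "Name=username",
        ACTION,
        "SetValue=jojo",
        LAST);

    // Click the image that submits the login form.
    web_image_submit("Login",
        "Snapshot=t2.inf",
        DESCRIPTION,
        "Alt=Login",
        ACTION,
        "UserAction=Click",
        LAST);

    return 0;
}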
Click & Script:
1. Records browser-specific actions.
2. Correlation is not required.
3. Recommended for GUI-based apps.
4. Records objects in terms of X,Y co-ordinates.
5. Some deviations in response times were identified when compared with the HTTP protocol.
HTTP/HTML:
1. Records the communication between client & server.
2. Correlation is required.
3. Recommended for any web app that communicates over the HTTP protocol.
4. Records objects in terms of GET & POST requests.
5. It gives accurate response times.
Recording-time options:
1. GUI Level: Generates a step for every user action.
2. HTML Level
3. URL Level
Challenges and Enhancements:
While recording, the script password may be captured in encrypted format; we have to change it to plain format. Ex: set value=jojo; set value=bean
All clicks are recorded as screen co-ordinates; we have to convert them to action format.
NOTE: If the image co-ordinates change, the script fails.

Advantages:
1. Correlation is not required
2. Easy to understand & easy to maintain the script

Disadvantages:
1. Response times are slightly different from the actual values.
2. It generates an object for every user action.

Prerequisites (or) precautions while recording the script:
Whichever objects (fields) you would like to parameterize, those fields should be modified while recording.
Case study 1:
I have an application developed in Java; my business scenario has 10 requests. Every JSP page has 100 fields which are filled based on my previous input.
Solution: In the above scenario, correlating 100s of values/fields for every page is a difficult process. To avoid implementing correlation, I switched from the HTTP/HTML protocol to the Click & Script protocol.
Case study 2:
I have an application with multiple tabs. As per the business flow I have to move from Tab1 to Tab2, which is not a server call.
Solution: In the above scenario the Tab2 action is not a server call. To click the Continue button I have to navigate from Tab1 to Tab2, which is not possible in the HTTP/HTML protocol, so I moved to the Click & Script protocol.
Case study 3: If the application has a GUI interface, we can use the Click & Script protocol.

Oracle NCA Protocol


I have an application whose web interface interacts with the database via Oracle Forms; for this scenario I used the Web + Oracle NCA protocol.
Functions (a minimal flow sketch follows below):
1. nca_connect_server(): Establishes the connection to the Oracle NCA server.
2. nca_set_window(): Indicates the name of the active window.
3. nca_obj_type(): Sends keyboard input to an object.
4. nca_edit_set(): Sets the contents of an edit object.
5. nca_button_press(): Activates the specified push button.
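A minimal sketch of an Oracle NCA logon flow built from the functions above; the server name, port, module path and object names are hypothetical placeholders for whatever VuGen records from your Forms environment:

Action()
{
    // Connect to the Oracle Forms (NCA) server - host, port and module are placeholders.
    nca_connect_server("forms.example.com", "9002", "module=/apps/forms/US/LOGON userid=");

    // Wait for the logon window to become the active window.
    nca_set_window("Logon");

    // Fill the user name and password fields (hypothetical object names).
    nca_edit_set("LOGON.USERNAME_0", "vision");
    nca_edit_set("LOGON.PASSWORD_0", "welcome");

    // Press the Connect button to log on.
    nca_button_press("LOGON.CONNECT_BUTTON_0");

    return 0;
}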
Challenges:
1. Correlation for web objects.
2. NCA objects are recorded in the form of IDs instead of object names.
Solution:
1. Append record=names to the URL so that objects are recorded as names.
2. Alternatively, set record=names in the startup file. On the Oracle Forms server, find the startup file called base.html.
3. Set the value in the Forms web configuration file and in the HTML startup file.

RDP PROTOCOL



RDP was developed by Microsoft. It is an extension of an ITU-T (International Telecommunication Union) protocol and is the protocol used to connect a local client to a Terminal Server.
It follows the OSI model. RDP uses TCP port 3389 and UDP port 3389. By default, virtual channels are created by Microsoft to transfer various media types. RDP follows a bitmap cache mechanism and can create a maximum of 64,000 virtual channels. RDP uses proprietary security (RSA Security's RC4 cipher) and can also use SSL.
Versions of RDP:
So far we have used versions 6.0 and 6.1.
Version 7.0.
Versions 8.0 and 8.1.
Current version: 10.0.
NOTE: Any kind of application can be recorded with the RDP protocol.
To create an RDP script:
Select File > New or click the New button. The New Virtual User dialog box opens.
Select Microsoft Remote Desktop Protocol (RDP). The Start Recording dialog box opens.
Click Options to set the recording options.
Select the RDP: Login node. Select one of the session options: Run Client, Connection File, or Default Connection File.
Select the RDP: Code Generation node and enable the desired options.
During recording, to select a screen region for synchronization, click the Sync on Image button on the recording toolbar and indicate an area for synchronization.
Stop recording and save the script.


Correlating parameters:
If the client sends the server the same data as it received, VuGen replaces the sent data with a parameter during code generation.
To run an RDP script:
Open the Run-Time Settings dialog box: click the Run-Time Settings button on the toolbar, or select Vuser > Run-Time Settings.
Resolution (640x480) and colour depth should be the same (Run-Time Settings > Configuration node).
Select the Synchronization node and select the desired settings.
Click OK to accept the run-time settings and close the dialog box.
Click the Run button or select Vuser > Run.




RDP Recording Options:
Allow you to open a popup, an existing RDP connection, or the default connection.
Allow you to generate mouse clicks, raw mouse clicks, raw keyboard calls, connection names and the synchronization radius.
Allow you to provide the double-click timeout and snapshot prefix names.
The Agent options allow you to record the information gathered by the RDP agent, such as window name, window position, etc.; this information will be available in the generation log.
Pre-requisites:
The RDP agent has to be installed to get this information. The RDP agent is available in the additional components that ship with the LoadRunner installation.
The resolution should be the same in VuGen and on the LGs.
Colour depth should be the same; try to use keyboard shortcuts.
In order to execute rdp_get_text and sync-on-text steps, the RDP agent has to be installed.
For every user action we have to add an image-sync or window-sync function.

Challenges in RDP Scripts:
1) How to add sync_on_image?
Def: It waits until an image appears (or) disappears.
Steps: Ctrl+Alt+P (snapshot view) --> right click --> Add sync_on_image function.
Ex: rdp_sync_on_image("StepDescription= ",
                      "WaitFor=appear",
                      "AddOffsetToInput=Default",
                      item data, RDP_LAST);
NOTE: This function can work without agent installation.

2) How to add sync_on_window?
Def: It waits for a window to match the specified state.
In order to execute this function, the RDP agent has to be downloaded and installed.
Syntax:
int madhu;
madhu = rdp_sync_on_window(RDP_LAST);
lr_output_message("%d", madhu);
if (madhu == 0)
{
    lr_output_message("pass");
}
else
{
    lr_output_message("fail");
}
3) How to save an image?
Sol: It saves a screen area to a specified location.
Syntax: rdp_save_image("StepDescription=Mystep",
                       "Filename=C:\\path",
                       "ImageTop=0",
                       "ImageLeft=0",
                       "ImageHeight=600",
                       "ImageWidth=400",
                       "Origin=Default", RDP_LAST);
4) How to capture the window position?
Sol: As per my requirement, I may need to capture the main window position or a child window position.
Syntax: rdp_get_window_position("StepDescription=Mystep",
                                "Snapshot=snapshot_18.inf",
                                "WindowTitle/RE=editing*",
                                "WindowLeft=paramLeft",
                                "WindowTop=paramTop",
                                "WindowWidth=paramWidth",
                                "WindowHeight=paramHeight", RDP_LAST);
5) How to capture the window title?
Sol:
rdp_get_window_title("StepDescription=Get",
                     "Snapshot=snapshot_\aint",
                     "WindowTitle/RE=pavan",
                     "WindowOrdinal=paramWindowOrdinal", RDP_LAST);
6) How to capture on-screen text from the main window?
Sol: Using rdp_get_text() we can capture the text. Always use X,Y co-ordinates from the last point.
7) How to capture the text from a child window?
Sol: --> Get the child window position.
--> Get the text window position.
--> Subtract the child window left position from the text "X" position, and the top position from the text "Y" position.
rdp_get_text("StepDescription=GetText",
             "Snapshot=snapshot_89.inf",
             "WindowTitle/RE={pavan}",
             "TextX=320",
             "TextY=75",
             "Text=chiru", RDP_LAST);
lr_output_message("%s", lr_eval_string("{chiru}"));
8) How to handle synchronization issues in the RDP protocol?
1) Use rdp_sync_on_window.
2) Use rdp_sync_on_image.
3) Use rdp_sync_on_text.
9) How to write a sync-on-text function?
Syntax:
singh = rdp_sync_on_text(" ", "Snapshot=sync_90.inf", "WindowTitle/RE={pavan}", "TextX=320", "TextY=75", "Text=company", "FailStepIfNotFound=No", RDP_LAST);
lr_output_message("%d", singh);

10) How to get object information?
Sol: Retrieves the information about an object.
Syntax:
rdp_get_object_info("  ", "Snapshot=sync_90.inf", "WindowTitle/RE={pavan}",
RDP_LAST);

EXECUTION TIPS:
1) Use proper think time.
2) Use a proper synchronization timeout.
3) Use the same colour depth, themes and bitmap caching mechanism.
4) Use a proper buffer cache size.
5) Enable extended log for the agent.
ISSUES:
1) Connection reset by the server and the user terminated.
Sol: --> Reduce think time & pacing.
--> Use image sync / window sync for every user request.
FRAMEWORK:
Prepare a logoff action which allows you to log off from the application and disconnect from RDP.

Citrix ICA Protocol


Citrix ICA Protocol: If the application is deployed in a Citrix environment we have to use the Citrix ICA protocol. My current project is deployed in Citrix; before accessing the application we have to connect to the Citrix environment through the Citrix protocol.
VDA - Virtual Desktop Access.
VDI - Virtual Desktop Infrastructure.
These are the two ways we can access the application.

1. Through web:
In this case we need to use a multi-protocol script (Web + Citrix ICA).

2. Through the Citrix client:
Citrix is software that allows you to access a remote profile.
XenApp: The application is shared between the users.
XenDesktop: The application is published individually.
Steps to access the application:
Access the Citrix environment through the URL.
Provide credentials & access the application which is published in the Citrix environment.
Perform the business scenario against the application.
Log off from the application and from Citrix.

Prerequisites:
Install the Citrix ICA agent on the VuGen as well as the LG machines (same version of Citrix).
Seamless mode should be off.
Try to use only keyboard strokes.
Create virtual channels.
Use the same resolution for the VuGen machine as well as the LGs.
Use proper think time for every request.
DEP settings should be disabled.
Colour depth should be the same on the LGs as well as the VuGen machine.

Functions:
ctrx_nfuse_connect(): Connects to a Citrix server via an NFuse portal.
ctrx_sync_on_window(): Waits until a window is created (or) becomes active.
ctrx_mouse_click(): Emulates a mouse click sent from a Citrix client to a Citrix server.
ctrx_wait_for_event(): A synchronization function that waits for an event to occur.
ctrx_get_window_name(): Retrieves the name of the active window.
ctrx_type(): Emulates typing alphanumeric keys.
ctrx_sync_on_bitmap(): Waits until a bitmap appears.
ctrx_sync_on_text(): Waits until the specified text appears.
ctrx_get_text(): Captures text from a rectangular area.
ctrx_key(): Emulates a non-alphanumeric key press.
ctrx_win_exist(): Checks whether a window is visible or not.

Challenges:
1) My test failed in the Controller.
Cause: The Citrix ICA agent version is different on the LG & VuGen machines.
2) My mouse-click objects failed in the Controller.

Cause: The resolution is different on the LG & VuGen machines.

Synchronization issue: I have to instruct the vusers to wait until the response is received from the server; this is called a synchronization issue.

Case study 1: Synchronization is the biggest issue in the Citrix protocol. To overcome synchronization issues we have 4 ways:
Based on window name
Based on bitmap value
Based on the wait-for-event function
Based on sync text

i) Based on window name:
When I recorded the script, for every action it generated a ctrx_sync_on_window() function, which waits for the expected window for a specified time.
Before performing a new action we have to verify whether the correct window has appeared or not. For this verification I used the function ctrx_get_window_name() to capture the active window name.
If the active window name is the expected window name, we can continue to the next action; otherwise we instruct the vuser to wait until the specified window appears.
shiva:   // label
ctrx_sync_on_window("Editing a Customer", ACTIVATE, 232, 244, 536, "snapshot6",
CONTINUE_ON_ERROR, CTRX_LAST);
ctrx_get_window_name(window_name, CTRX_LAST);
if (strcmp("Editing a Customer", window_name) == 0)
{
    lr_output_message("pass");
}
else
{
    lr_output_message("fail");
    goto shiva;
}

ii) Based on the bitmap value:
Using the bitmap hash value we can overcome the synchronization issue.
Using ctrx_get_bitmap_value() we can capture the hash value of any image, which can be compared with the expected hash value.
ctrx_get_bitmap_value(672, 280, 97, 18, text_buffer, "snapshot_1", CONTINUE_ON_ERROR, CTRX_LAST);
if (strcmp("04308587...", text_buffer) == 0)
{
    lr_output_message("pass");
}
else
{
    lr_output_message("fail");
}

iii) Based on the wait-for-event function:
ctrx_wait_for_event() waits for a specified event to occur.
iv) Based on sync text:
Waits until the specified text is displayed at the specified position.
vijay = ctrx_sync_on_text_ocr(514, 313, 50, 16, "company", "NULL=snapshot_3",
CONTINUE_ON_ERROR, CTRX_LAST);
lr_output_message("vijay value is %d", vijay);

Challenge 4: To capture specific text from the screen (or) capture a dynamic value from the screen.
Solution: Use the ctrx_get_text() function to capture on-screen text, to handle text verification points (or) to capture a dynamic value.



Q2: How to capture on-screen text?
                          OR
While navigating, one of the screens displays a dynamic number which has to be captured and passed to the next request.
Sol: ctrx_get_text_ocr(NULL, 515, 314, 50, 14, "snapshot_4", text_buffer, CONTINUE_ON_ERROR, CTRX_LAST);
vijay = ctrx_sync_on_text_ocr(514, 313, 50, 16, "company", "NULL=snapshot_3", CONTINUE_ON_ERROR, CTRX_LAST);

lr_output_message("vijay value is %d", vijay);

Q3: How to save an image to a folder?
Sol: ctrx_sync_on_window("KW Hotels Free (0.46.129) - [Calendar]", ACTIVATE, ..., CTRX_LAST);
     ctrx_save_bitmap(10, 50, 100, 200, "shiva.bmp", CTRX_LAST);

SCRIPT DEVELOPMENT TECHNIQUES:
Use proper think time.
Create a recovery scenario using a modular approach.
Avoid mouse clicks.
Develop a session-cleaning mechanism.
Remember the following before going for execution:
Session overlapping should be avoided.
If the first user fails at the 10th page, the second user should start from the 1st page.
The window position should be the same for all users.
Users have to run as a process.



The following example will help you.

Action()
{
 char window_name[100];
 char buffer[50];
 char text_buffer[100];
 int text;
 
 web_add_cookie("CtxsDeviceId=WR_r4uoM4xicm2E; DOMAIN=ctx.shaft.com");

 web_url("storeweb", 
  "URL=http://ctx.shaft.com/citrix/storeweb", 
  "Resource=0", 
  "RecContentType=text/html", 
  "Referer=", 
  "Snapshot=t10.inf", 
  "Mode=HTML", 
  EXTRARES, 
  "Url=../Citrix/StoreWeb/receiver/js/external/velocity.min_B218502A82F66680.js", "Referer=http://ctx.shaft.com/Citrix/StoreWeb/", ENDITEM, 
  "Url=../Citrix/StoreWeb/receiver/js/external/hammer.v2.0.8.min_F699A1E56189259A.js", "Referer=http://ctx.shaft.com/Citrix/StoreWeb/", ENDITEM, 
  "Url=../Citrix/StoreWeb/receiver/js/external/jquery.dotdotdot.min_08EE54CBA886AD0A.js", "Referer=http://ctx.shaft.com/Citrix/StoreWeb/", ENDITEM, 
  "Url=../Citrix/StoreWeb/receiver/js/external/slick.min_FEB62CC230E2BA2A.js", "Referer=http://ctx.shaft.com/Citrix/StoreWeb/", ENDITEM, 
  "Url=../Citrix/StoreWeb/receiver/js/ctxs.core.min_913780ECE5947BE4.js", "Referer=http://ctx.shaft.com/Citrix/StoreWeb/", ENDITEM, 
  "Url=../Citrix/StoreWeb/receiver/js/ctxs.webui.min_41CC5860D625BCD9.js", "Referer=http://ctx.shaft.com/Citrix/StoreWeb/", ENDITEM, 
  "Url=../Citrix/StoreWeb/custom/style.css", "Referer=http://ctx.shaft.com/Citrix/StoreWeb/", ENDITEM, 
  "Url=../Citrix/StoreWeb/receiver/css/ctxs.large-ui.min_647DF07BE00D295E.css", "Referer=http://ctx.shaft.com/Citrix/StoreWeb/", ENDITEM, 
  "Url=../Citrix/StoreWeb/receiver/images/1x/folder_template_C13BB96DEBC9F30F.png", "Referer=http://ctx.shaft.com/Citrix/StoreWeb/", ENDITEM, 
  "Url=../Citrix/StoreWeb/receiver/images/1x/actionSprite_531B7A6FF85CA98E.png", "Referer=http://ctx.shaft.com/Citrix/StoreWeb/", ENDITEM, 
  "Url=../Citrix/StoreWeb/receiver/images/1x/CitrixReceiver_WebScreen_CBE548FB8FEE049E.png", "Referer=http://ctx.shaft.com/Citrix/StoreWeb/", ENDITEM, 
  "Url=../Citrix/StoreWeb/receiver/images/common/ReceiverFullScreenBackground_46E559C0E6B5A27B.jpg", "Referer=http://ctx.shaft.com/Citrix/StoreWeb/", ENDITEM, 
  "Url=../Citrix/StoreWeb/custom/script.js", "Referer=http://ctx.shaft.com/Citrix/StoreWeb/", ENDITEM, 
  "Url=../Citrix/StoreWeb/custom/strings.en.js", "Referer=http://ctx.shaft.com/Citrix/StoreWeb/", ENDITEM, 
  "Url=../Citrix/StoreWeb/receiver/images/1x/viewSprite_B2F322BDCB824FAF.png", "Referer=http://ctx.shaft.com/Citrix/StoreWeb/", ENDITEM, 
  "Url=../Citrix/StoreWeb/receiver/images/common/authspinner_B0BCD339560CA593.gif", "Referer=http://ctx.shaft.com/Citrix/StoreWeb/", ENDITEM, 
  "Url=../Citrix/StoreWeb/receiver/images/1x/CitrixStoreFront_auth_14B96BFF2B0A6FF8.png", "Referer=http://ctx.shaft.com/Citrix/StoreWeb/", ENDITEM, 
  "Url=../Citrix/StoreWeb/receiver/images/1x/spinner_white_auth_button_53FD5A337A529DA7.gif", "Referer=http://ctx.shaft.com/Citrix/StoreWeb/", ENDITEM, 
  "Url=../Citrix/StoreWeb/receiver/images/common/icon_loading_9A0623127A028FEB.png", "Referer=http://ctx.shaft.com/Citrix/StoreWeb/", ENDITEM, 
  "Url=../Citrix/StoreWeb/Resources/Icon/L0NpdHJpeC9TdG9yZS9yZXNvdXJjZXMvdjIvU213ek1WTXJNVXBtYjI1MFowTjZZa2N5UkZwNmIxUlBhMDQ0UFEtLS9pbWFnZQ--?size=128", "Referer=http://ctx.shaft.com/Citrix/StoreWeb/", ENDITEM, 
  "Url=../Citrix/StoreWeb/receiver/images/1x/search_close_BC5A22358E58905F.png", "Referer=http://ctx.shaft.com/Citrix/StoreWeb/", ENDITEM, 
  "Url=../Citrix/StoreWeb/receiver/images/1x/CitrixReceiverLogo_Home_5C24BCEC5A182425.png", "Referer=http://ctx.shaft.com/Citrix/StoreWeb/", ENDITEM, 
  "Url=../Citrix/StoreWeb/receiver/images/1x/spinner_5CF0D1C8A76AAC8E.png", "Referer=http://ctx.shaft.com/Citrix/StoreWeb/", ENDITEM, 
  "Url=../Citrix/StoreWeb/receiver/images/1x/ico_search_E84E3D63D821F80D.png", "Referer=http://ctx.shaft.com/Citrix/StoreWeb/", ENDITEM, 
  "Url=../Citrix/StoreWeb/receiver/images/1x/ico_desktop_ready_482FD91B201F2A55.png", "Referer=http://ctx.shaft.com/Citrix/StoreWeb/", ENDITEM, 
  LAST);

 web_add_auto_header("X-Citrix-IsUsingHTTPS", 
  "No");

 web_add_auto_header("X-Requested-With", 
  "XMLHttpRequest");

/*Correlation comment: Automatic rules - Do not change!  
Original value='CAB598DC40F9DF729BA594970033E077' 
Name ='CitrixXenApp_CsrfToken' 
Type ='Rule' 
AppName ='Citrix_XenApp' 
RuleName ='CsrfToken'*/
 web_reg_save_param_ex(
  "ParamName=CitrixXenApp_CsrfToken",
  "LB/IC=CsrfToken=",
  "RB/IC=;",
  SEARCH_FILTERS,
  "Scope=Cookies",
  "RequestUrl=*/Configuration*",
  LAST);

 web_custom_request("Configuration", 
  "URL=http://ctx.shaft.com/Citrix/StoreWeb/Home/Configuration", 
  "Method=POST", 
  "Resource=0", 
  "RecContentType=application/xml", 
  "Referer=http://ctx.shaft.com/Citrix/StoreWeb/", 
  "Snapshot=t11.inf", 
  "Mode=HTML", 
  "EncType=", 
  LAST);

 web_add_cookie("CtxsPluginAssistantState=Done; DOMAIN=ctx.shaft.com");

 web_add_auto_header("Csrf-Token",
  "{CitrixXenApp_CsrfToken}");

 web_submit_data("List", 
  "Action=http://ctx.shaft.com/Citrix/StoreWeb/Resources/List", 
  "Method=POST", 
  "RecContentType=text/plain", 
  "Referer=http://ctx.shaft.com/Citrix/StoreWeb/", 
  "Snapshot=t12.inf", 
  "Mode=HTML", 
  ITEMDATA, 
  "Name=format", "Value=json", ENDITEM, 
  "Name=resourceDetails", "Value=Default", ENDITEM, 
  LAST);

 web_custom_request("GetAuthMethods", 
  "URL=http://ctx.shaft.com/Citrix/StoreWeb/Authentication/GetAuthMethods", 
  "Method=POST", 
  "Resource=0", 
  "RecContentType=application/xml", 
  "Referer=http://ctx.shaft.com/Citrix/StoreWeb/", 
  "Snapshot=t13.inf", 
  "Mode=HTML", 
  "EncType=", 
  LAST);
//
// web_revert_auto_header("Csrf-Token");

// web_revert_auto_header("X-Citrix-IsUsingHTTPS");
//
// web_revert_auto_header("X-Requested-With");
//
// web_add_auto_header("Csrf-Token",
//  "{CitrixXenApp_CsrfToken}");

 web_custom_request("Login", 
  "URL=http://ctx.shaft.com/Citrix/StoreWeb/ExplicitAuth/Login", 
  "Method=POST", 
  "Resource=0", 
  "RecContentType=application/vnd.citrix.authenticateresponse-1+xml", 
  "Referer=http://ctx.shaft.com/Citrix/StoreWeb/", 
  "Snapshot=t14.inf", 
  "Mode=HTML", 
  "EncType=", 
  EXTRARES, 
  "Url=https://www.bing.com/favicon.ico", "Referer=", ENDITEM, 
  LAST);

 web_set_sockets_option("SSL_VERSION", "TLS1.2");

 web_add_auto_header("X-Citrix-IsUsingHTTPS", 
  "No");

 web_add_auto_header("X-Requested-With", 
  "XMLHttpRequest");

 web_submit_data("LoginAttempt", 
  "Action=http://ctx.shaft.com/Citrix/StoreWeb/ExplicitAuth/LoginAttempt", 
  "Method=POST", 
  "RecContentType=application/xml", 
  "Referer=http://ctx.shaft.com/Citrix/StoreWeb/", 
  "Snapshot=t15.inf", 
  "Mode=HTML", 
  "EncodeAtSign=YES", 
  ITEMDATA, 
  "Name=username", "Value=shaft\\shaftctx20", ENDITEM, 
  "Name=password", "Value=Sh@ft123", ENDITEM, 
  "Name=saveCredentials", "Value=false", ENDITEM, 
  "Name=loginBtn", "Value=Log On", ENDITEM, 
  "Name=StateContext", "Value=", ENDITEM, 
  LAST);

 web_submit_data("List_2", 
  "Action=http://ctx.shaft.com/Citrix/StoreWeb/Resources/List", 
  "Method=POST", 
  "RecContentType=application/json", 
  "Referer=http://ctx.shaft.com/Citrix/StoreWeb/", 
  "Snapshot=t16.inf", 
  "Mode=HTML", 
  ITEMDATA, 
  "Name=format", "Value=json", ENDITEM, 
  "Name=resourceDetails", "Value=Default", ENDITEM, 
  LAST);

 web_custom_request("GetUserName", 
  "URL=http://ctx.shaft.com/Citrix/StoreWeb/Authentication/GetUserName", 
  "Method=POST", 
  "Resource=0", 
  "Referer=http://ctx.shaft.com/Citrix/StoreWeb/", 
  "Snapshot=t17.inf", 
  "Mode=HTML", 
  "EncType=", 
  LAST);

 web_add_cookie("SRCHD=AF=NOFORM; DOMAIN=iecvlist.microsoft.com");

 web_add_cookie("SRCHUID=V=2&GUID=AA30ACF476A441C3A0D8D63E620AED81&dmnchg=1; DOMAIN=iecvlist.microsoft.com");

 web_add_cookie("SRCHUSR=DOB=20181207; DOMAIN=iecvlist.microsoft.com");

 web_custom_request("AllowSelfServiceAccountManagement", 
  "URL=http://ctx.shaft.com/Citrix/StoreWeb/ExplicitAuth/AllowSelfServiceAccountManagement", 
  "Method=POST", 
  "Resource=0", 
  "RecContentType=application/xml", 
  "Referer=http://ctx.shaft.com/Citrix/StoreWeb/", 
  "Snapshot=t18.inf", 
  "Mode=HTML", 
  "EncType=", 
  EXTRARES, 
  "Url=https://iecvlist.microsoft.com/IE11/1479242656000/iecompatviewlist.xml", "Referer=", ENDITEM, 
  LAST);

 ctrx_nfuse_connect("http://ctx.shaft.com/Citrix/StoreWeb/Resources/LaunchIca/U2hhZnQuS1dIb3RlbA--.ica?CsrfToken={CitrixXenApp_CsrfToken}&IsUsingHttps=No&launchId=1544174851766", CTRX_LAST);

 ctrx_wait_for_event("LOGON", CTRX_LAST);

 lr_think_time(4);

 ctrx_sync_on_window("User login", ACTIVATE, 194, 175, 412, 248, "snapshot2101", CTRX_LAST);

 ctrx_type("admin", "", CTRX_LAST);

 ctrx_key("TAB_KEY", 0, "", CTRX_LAST);

 ctrx_type("S", "", CTRX_LAST);

// ctrx_key("NO_KEY", MODIF_SHIFT, "", CTRX_LAST);

 lr_think_time(7);

 ctrx_key("BACKSPACE_KEY", 0, "", CTRX_LAST);

// ctrx_type("S", "", CTRX_LAST);

// ctrx_key("NO_KEY", MODIF_SHIFT, "", CTRX_LAST);

 ctrx_type("Sh@ft123", "", CTRX_LAST);

 ctrx_key("TAB_KEY", 0, "", CTRX_LAST);

 ctrx_key("TAB_KEY", 0, "", CTRX_LAST);

 ctrx_key("ENTER_KEY", 0, "", CTRX_LAST);

 lr_think_time(5);

 ctrx_sync_on_window("KWHotel Free (0.47.108) - [Calendar]", ACTIVATE, 0, 0, 801, 601, "snapshot2110", CTRX_LAST);

 
// ctrx_sync_on_bitmap(222, 56, 37, 37, "39e4123d16312e3980861cf8fea72dc8", CTRX_LAST);
 
 ctrx_get_window_name(window_name, CTRX_LAST);

if(strcmp(window_name, "KWHotel Free (0.47.108) - [Calendar]")==0)
{
//     ctrx_get_window_position(window_name, 0, 0, 801, 601, CTRX_LAST);
 lr_output_message("the window is available : pass (%s)", window_name);
}
 else
 {
  lr_output_message("the window is not available : fail (%s)", window_name);
 }
 
  
// ctrx_sync_on_bitmap(222, 56, 37, 37, "39e4123d16312e3980861cf8fea72dc8", CTRX_LAST);

 ctrx_get_bitmap_value(222, 56, 37, 37, buffer, CTRX_LAST );
 
 if(strcmp(buffer, "39e4123d16312e3980861cf8fea72dc8")==0)
{
//     ctrx_get_window_position(window_name, 0, 0, 801, 601, CTRX_LAST);
 lr_output_message("the image is available : pass (%s)", buffer);
}
 else
 {
  lr_output_message("the image is not available : fail (%s)", buffer);
 } 
  
 ctrx_get_text("About Notepad", 264, 68, 44, 13, "snapshot8", text_buffer, CONTINUE_ON_ERROR,CTRX_LAST );
  
  text=ctrx_sync_on_text_ocr(264, 68, 44, 13, "Services", "NULL=snapshot_1", CONTINUE_ON_ERROR, CTRX_LAST);

  if(text == 0)
  {
  lr_output_message("the text is available : pass (%d)", text);
  }
 else
  {
  lr_output_message("the text is not available : fail (%d)", text);
  } 
   
    
    
  
 ctrx_key("t", MODIF_ALT, "", CTRX_LAST);


 ctrx_key("DOWN_ARROW_KEY", 0, "", CTRX_LAST);

 ctrx_key("ENTER_KEY", 0, "", CTRX_LAST);

 return 0;
}

SAP Web Protocol

The SAP Web protocol is similar to the Web HTTP/HTML protocol. If the application is developed as an ECC portal, NetWeaver portal or Web Dynpro portal, we have to use the SAP Web protocol. If LoadRunner fails to record the objects using Web HTTP/HTML, only then do we use the SAP Web protocol.
The SAP Web Vuser script typically contains several SAP transactions which make up a business process. The business process consists of functions that emulate user actions. For information about these functions, see the Web functions in the Function Reference.
Note: You can generate a SAP Web Vuser script by analyzing an existing network traffic file (capture file). This method may be useful for creating Vuser scripts that emulate activity on mobile applications.
Common correlation values in the SAP Web protocol:
1. SAP_exit_sid
2. SAP_context_id
3. SAP_securid
4. Window_id
5. Event queue [WD1101]

How to handle the window ID?
The window ID is a 13-digit timestamp generated in milliseconds.
Ex: window_id=144333121987
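A minimal sketch of generating such a 13-digit millisecond window ID at run time instead of replaying the recorded value (the parameter name window_id is just an example):

// Save the current timestamp (milliseconds since epoch, i.e. 13 digits)
// into a parameter and substitute it wherever the recorded window ID appeared.
web_save_timestamp_param("window_id", LAST);
lr_output_message("Generated window id: %s", lr_eval_string("{window_id}"));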

Challenges (or) scripting techniques:
In one of the ECC portals, SAP_exit_sid was captured as ABCD11234566789PLNNO. Whenever I tried to convert the above value from HTML to URL encoding, or URL to HTML, the conversion did not happen properly.
To overcome this scenario I have to search for a character in the captured value and replace it with the expected character.
To find and replace a character, we wrote C code which automatically searches for a string and replaces it with another string. Once it is replaced, we convert it to an LR parameter & substitute it wherever required (see the sketch below).
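A minimal sketch of such a find-and-replace, assuming the captured value is already in a parameter called exit_sid and that every '+' has to be replaced with "%2B"; the parameter names and characters are illustrative only:

Action()
{
    char fixed[256];
    char *src, *pos;

    // Evaluate the correlated parameter captured earlier in the script.
    src = lr_eval_string("{exit_sid}");

    fixed[0] = '\0';
    // Walk through the captured value and replace every '+' with "%2B".
    while ((pos = (char *)strchr(src, '+')) != NULL) {
        strncat(fixed, src, pos - src);   // copy the part before the '+'
        strcat(fixed, "%2B");             // append the replacement
        src = pos + 1;                    // continue after the '+'
    }
    strcat(fixed, src);                   // copy the remaining part

    // Save the cleaned value back into an LR parameter and substitute it where required.
    lr_save_string(fixed, "exit_sid_fixed");
    lr_output_message("Fixed sid: %s", lr_eval_string("{exit_sid_fixed}"));

    return 0;
}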

The following example shows a typical recording for an SAP Portal client:

Example:
vuser_init()
{
 web_reg_find("Text=SAP Portals Enterprise Portal 5.0",
 LAST);
 web_set_user("junior{UserNumber}",
 lr_decrypt("3ed4cfe457afe04e"),
 "sonata.hplab.com:80");
 web_url("sapportal",
 "URL=http://sonata.hplab.com/sapportal",
 "Resource=0",
 "RecContentType=text/html",
 "Snapshot=t1.inf",
 "Mode=HTML",
 EXTRARES,
 "Url=/SAPPortal/IE/Media/sap_mango_polarwind/images/header/branding_image.jpg",

"Referer=http://sonata.hplab.com/hrnp$30001/sonata.hplab.coml:80/Action/26011[header]"
 , ENDITEM,
 "Url=/SAPPortal/IE/Media/sap_mango_polarwind/images/header/logo.gif",

"Referer=http://sonata.hplab.com/hrnp$30001/sonata.hplab.com:80/Action/26011[header]",
 ENDITEM,
...
 LAST);
The following section illustrates an SAP Web and SAP GUI multi-protocol recording in which
the Portal client opens an SAP control. Note the switch from web_xxx to sapgui_xxx functions.
Example:
web_url("dummy",
 "URL=http://sonata.hplab.com:1000/hrnp$30000/sonata.hplab.com:
 1000/Action/dummy?PASS_PARAMS=YES=;dummyComp=dummy=;
Tcode=VA01=;draggable=0=;CompFName=VA01=;Style=sap_mango_polarwind",
 "Resource=0",
 "RecContentType=text/html",
 "Referer=http://sonata.hplab.com/sapportal",
 "Snapshot=t9.inf",
 "Mode=HTML",
 LAST);
 sapgui_open_connection_ex(" /H/Protector/S/3200 /WP",
 "",
"con[0]");
 sapgui_select_active_connection("con[0]");
 sapgui_select_active_session("ses[0]");
 /*Before running script, enter password in place of asterisks in logon function*/
 sapgui_logon("JUNIOR{UserNumber}",
 "ides",
 "800",
 "EN",
 BEGIN_OPTIONAL,
 "AdditionalInfo=sapgui102",
 END_OPTIONAL);

ALM

Launch the ALM application using the URL and upload the scripts into ALM, either by zipping them or using the Save As option on My Computer.
Navigate to the Test Plan under Testing.
Move the script to the test plan by creating a folder.
Click on Edit Test to provide: by number/percentage, manual/goal oriented, by group/by schedule.
Push the script into a scenario; provide RTS, ramp-up, ramp-down, duration, LGs and load distribution.
Click on the Submit button.
Go to Time Slot and book or block a specific time slot with LGs by providing the number of users, date, start time and end time.
Note: you can manually specify the LGs or you can choose the automatic option.
You can start the test automatically or manually.
Go to Test Results, choose your run ID, and download the RAW results or zip file.
Menu:
Dashboard
Management
Requirements
Cloud Settings
Testing
Resources
Defects
Performance Center

Defect: When there is a deviation between the actual and expected results, we have to raise a ticket under the Defects tab, providing severity, priority, expected and actual results, and attachments.

Testing hosts: Allows you to verify how many LGs and Controllers are configured, whether they are operational or not, and in which location they are hosted.

Performance Center

Performance Center is a web-based application which is the web interface of the Controller. Using PC we can design, execute, and download the results from anywhere, any time. Using PC you can manage your resources (Controller, LGs and number of users) in a proper manner by looking at the time slots.
Note: You have to buy a PC license as well as a Controller license.
Versions: 9.1/9.5 and 11.0 (integrated with ALM).

Activities in Performance Center:
User-level access.
Project-level access.
Time slot bookings.
Test design and test execution.
Upload & download the scripts.
Download the results from anywhere. You can even monitor the servers by integrating SiteScope or another tool.

Advantages:
1. It is a web-based application.
2. You can monitor and share the resources among your team members.

Process to connect to the PC:
 Launch the PC URL.
 Enter credentials.
 Choose your project.
 Push the scripts into controller.

Options or Tabs:

1. Testing dashboard
2. Assets dashboard
3. Resources dashboard
4. App dashboard

1. Testing dashboard: Here we design the scenarios.
To design a scenario follow the procedure below:
Navigate to the testing dashboard.
Create a new folder.
Create a new test.
Create a new test suite.
Inject the script.
Add the users and select the Controller and LGs.
Provide ramp-up, ramp-down and duration.

There are two ways to inject (upload) the script into PC:

1. Zip the script folder & upload it to PC.

2. Manual procedure:
VuGen.
Tools.
HP ALM (11.0).
Connection (or) PC connection (based on version).
Provide the PC URL.
Click the Connect button.
Provide the credentials of PC.

2. Assets dashboard: Here we add the monitoring tools.

3. Resources dashboard: Once we have designed the scenario, we have to book a slot to kick off the test: go to the Timeslot option and provide the start and end date and time.

4. App dashboard: Here we download the .lrr files.

Fiddler

Whenever we are unable to record the script using VuGen, we use the Fiddler tool to record the statements and then convert those statements into VuGen statements.

Fiddler is a web debugging / traffic-capturing tool.
Fiddler versions:
2.2, 2.4, 2.5, 2.6, 4.4

Q: Why do we fail to launch the application in VuGen sometimes?
Reasons:
DEP settings
Protocol selection
Browser compatibility
WinInet level / socket level capture
Fiddler
When we fail to record some objects using VuGen, we can use Fiddler to record the communication.
The Fiddler file extension is ".saz".

Q: In one of my applications, we failed to record some of the GET and POST requests.
Solution: We used Fiddler to develop the script.
Scenario 1: How to convert a Fiddler GET request into a VuGen GET request?
Solution:
Select the request in Fiddler.
Right click and choose Copy > Just Url.
Write a web_url step in VuGen and construct the request.

Scenario 2: How to convert POST requests?
Solution:
Select the request in Fiddler.
Right click and choose Copy > Just Url.
Go to the Inspectors panel.
Choose TextView and copy the request body.
Using the copied URL and body, construct a web_custom_request (see the sketch below).
In the current version of LoadRunner, we can directly open the Fiddler file.
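A minimal sketch of turning a POST captured in Fiddler into a web_custom_request; the URL, referer and body shown here are hypothetical placeholders for whatever was copied from Fiddler:

// Hand-built POST request using the URL copied via "Copy > Just Url"
// and the body copied from the Inspectors > TextView pane in Fiddler.
web_custom_request("login_post",
    "URL=http://myapp.example.com/account/login",
    "Method=POST",
    "Resource=0",
    "Referer=http://myapp.example.com/",
    "EncType=application/x-www-form-urlencoded",
    "Body=username=jojo&password=secret&submit=Log+On",
    LAST);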

Counters


1. JVM COUNTERS:
Process CPU time:
Indicates the total amount of CPU time consumed by the JVM.
Garbage collection time:
Indicates the cumulative time spent on garbage collection and the total number of invocations.
Current heap size:
Indicates the number of kilobytes (KB) occupied by objects.
Free memory:
Available memory in the heap.
Garbage collector interval time: time difference between garbage collection occurrences.

2. CLR (Common Language Runtime) Counters:
Exceptions thrown per second: Indicates the number of managed code exceptions thrown per second.
Time in GC: Time spent on garbage collection.
.NET CLR memory heap size
.NET CLR total committed bytes
.NET CLR large object heap size

3. WEB SERVER COUNTERS:
Apache:
CPU load: Percentage of the CPU consumed by the Apache server.
Requests per second: The total number of requests per second served by Apache.
Bytes per second (throughput)
Busy workers: Number of active threads serving requests.
Idle workers: Number of inactive threads in Apache.
IIS SERVER (INTERNET INFORMATION SERVICES):
NOTE: IIS is a web/app server for .NET-based applications.
Bytes sent per second.
Bytes received per second.
Current connections.
Requests per second, disconnection ratio.
Number of requests queued.
Number of requests rejected.
Anonymous users:
Indicates anonymous HTTP connections on the particular site.


4. APPLICATION SERVER COUNTERS:

WebLogic server:

Execute thread total count: Indicates the total number of threads assigned to the queue.
Pending request current count: Indicates the number of pending requests in the queue.
Queue length: Number of requests in the priority queue.
Throughput: Number of bytes received per second.
Exception count: It should not cross 20.
Connections current count.
Transactions rolled back total count: It should not cross 5.

WEBSPHERE (IBM WAS) COUNTERS:

Concurrent requests: Number of requests being processed concurrently.
Service time: Response time for a servlet request.
Active count: Number of active threads in the system.
Connection pool size: Number of threads in the pool.

5. Network Counters:
Connections established: indicates the connection success ratio.
Connection failures: percentage of connection failures.
Throughput
Network latency (delay)
Packet loss

6. Disk Counters:
Disk reads per second.
Disk writes per second: rate of write operations on the disk.
Average disk queue length: Number of read and write requests that were queued for the selected disk during the sample interval.
Disk time: Percentage of the elapsed time that the selected disk was busy serving read and write requests.
Note:
Average disk queue length should not cross two for any disk.
Split I/Os per second: Measures the rate of I/Os split due to file fragmentation.
Free space: Displays the percentage of total available space.

7. Database Counters:
We can monitor Oracle using Statspack (DB stats) reports up to 10g; above 10g, we use AWR reports.

Note:
The DBA can also generate a DB trace (or) Oracle trace report to identify deadlocks and full table scans.

Oracle counters (for all Java-based applications):
Buffer hit ratio
Full table scans
Indexing
DB time
DB CPU time
Hard parses and soft parses
Top 5 timed events (for a particular duration)
Physical reads
Physical writes
High CPU utilization queries
High memory utilization queries
High I/O utilization queries

SQL Server Counters (.NET-based applications):
Using SQL Server Profiler we can monitor DB activities.
Navigation:
Open SQL Server.
Choose the New option.
Choose Create New Profile.
Counters:
Buffer cache hit ratio.
Transactions per second.
Log cache hit ratio.
Page reads per second.
Page writes per second.

SAP HANA Counters:
Using SAP HANA Studio we can monitor SAP HANA.
SAP HANA is built on column- and row-based technology, but works primarily on a column basis.

Database side statistics analysis

To analyse the database bottlenecks we have to generate the AWR report.

How to generate a report?
Sol:
Select any monitor which we added.
Right click on the monitor.
Select "Reports" and select "Quick"; it displays the Quick Report pop-up window.
Select "Thresholds" and select "Options" according to our requirement.
Select "General" -> Graph.
Navigate to "Filter" and schedule settings.
Select the report period.
Select the report type (html/text/xml).
Click on Generate Report File.

AWR report (Automatic Workload Repository):

To communicate with the remote server we need to install PuTTY.
In PuTTY we need to provide the remote server host name (or) IP address and the remote server credentials.
We have to enter the command: sqlplus / as sysdba
Press Enter, which takes you to the SQL prompt.
At the SQL prompt we need to enter the following:
SQL> @$ORACLE_HOME/rdbms/admin/awrrpt.sql
Press Enter; it will ask for "report_type".
Provide the report type as html.
Press Enter; it will ask for the number of days.
Provide the number of days and press Enter.
It will show the snapshot IDs and ask for the begin snap ID.
Provide the begin snap ID and press Enter.
It will ask for the end snap ID.
Provide the end snap ID and press Enter; it then asks for the report name.
Provide the report name with the extension (.html), press Enter, and exit.

Database bottlenecks:


1. Low buffer hit ratio:
Process: Generated the AWR report and found a low buffer hit ratio, reported at 60%. BHR should be more than 95%.
Cause: Allocation of too little memory to the buffer cache triggers a low BHR.
Recommendation: Recommended the DB architects increase the buffer memory allocation by redesigning the Oracle SGA.

2. Low index utilization
Low index utilization can't be monitored through AWR. With the help of the DBA, we found index utilization issues and asked them to provide tuning opportunities in terms of index utilization.
Cause: Low index utilization causes delayed response times.
Recommendation: Redesign the indexes.

3. Full table scans
With the help of the DBA, we found there are a lot of full table scans.
Cause: Full table scans cause delayed response times.
Recommendation: Redesign the indexes and implement indexes on the tables which are frequently accessed.

4. High I/O operation queries

Drill down: Why are queries utilizing more CPU for I/O operations? How much CPU is utilized by the I/O operations?
Recommendation: Recommended fine-tuning the queries performing the I/O operations.

5. High elapsed-time queries
Cause: AWR reported queries with high elapsed times, which cause delayed response times.
Process: We went through the queries, which contain a lot of bind variables and inner queries that cause the delayed response times.
Recommendation: Recommended the DB people rewrite the queries, tuning the inner queries and bind variables.

6. High CPU utilization queries
Cause: AWR reported high CPU utilization queries which cause delayed response times.
Process: Once we went through the AWR report, which contained a number of high CPU utilization queries, we segregated them into DB level and machine level.
Recommendation: Recommended the DB people tune the queries which cause high CPU utilization.

7. DB time and DB CPU time
Cause: AWR reported high DB time and DB CPU time, which cause delayed response times. They should be minimal.
Recommendation: Recommended the DB people redesign the DB architecture.

Configuration side statistics analysis


OACore issue: OACore processes the requests and consumes memory from the JVM heap.
Scenario 1: I have an application called Oracle R12/Oracle Apps/EBiz which has to support 300 users, with expected response times below 5 seconds. While ramping up the users, application performance degraded at a 100-user load, and the application itself crashed at a 150-user load.

Process: Initially we started analyzing client-side statistics and found the issue was on the server side. We then analyzed OS/hardware/code but did not find anything, so we moved to the DB layer; finally we found an issue with the JVM settings, which is an OACore setting (memory management configuration).
After analyzing the OACore settings, we recommended tweaking that setting from oacore = 1 to oacore = 3, which resolved the issue.

Cursor limit issue:
We have an application which has to support 500 users.
Solution: We started the test with 500 users, but the application crashed at a 150-user load and application performance degraded at a 100-user load.
We started analyzing client-side, OS, hardware, memory and method-level statistics, which didn't provide any clue. Then we started analyzing the configuration settings in the HTTPD file of the Apache/Tomcat server and the database statistics, and then moved to the database configuration settings.
We found an issue with the cursor limit which was causing the application crash.
Process: We used the Dynatrace profiling tool to monitor all the layers of the application, including configuration settings, as part of the testing. While ramping up users, cursor limit usage reported 100% at a 120-user load. We then asked the DB team to provide the cursor limit setting, which was reported as 50. By default the cursor limit is 50, which can accommodate only 100 users. We recommended them to change it from 50; as per the Oracle SGA they could tweak it to 1000 (up to 2000) for that application. After deploying the new build with a 1000 cursor limit, the application supported 500 users without any performance degradation.

Thread limit:
We found in the server log that the server had reached the maximum number of simultaneous connections.
Cause: Not enough threads in Apache.
Statistics: Due to the connection pool settings, the application was not able to handle the simultaneous connections created by 150 users.
Recommendation: Increase the connection pool setting from 10 to 16.

Private byte issue:
Process: In one of my SAP applications, users were causing memory dumps. As per the SAP architecture, they configured 4 MB of private-byte memory. If a user tries to retrieve more than 4 MB of memory, the user is pushed from run mode to debug mode. If the SAP system allocates more memory, the user comes back to run mode. If the system does not allocate the memory needed to execute the report, the user is pushed to error mode, which causes the memory dump.
Recommendation: Recommended the SAP people reconfigure the private byte settings.


There are DOP settings and many other settings. Based on the scenario and our requirement, we make the changes on the configuration side.


Application side statistics analysis

From the application side we monitor code-level issues like threads, classes, packages, memory leakages, etc.
To monitor code-level issues we may use JConsole, JVisualVM, JMC, Dynatrace or AppDynamics.


JVisualVM:
JVisualVM is the default profiling tool for the JVM. To leverage its services we have to install the JDK.
Steps to invoke JVisualVM:
My Computer
Program Files
Java
bin
jvisualvm

We can Monitor
1. Local Machine
2. Remote Machine
Steps to invoke Remote Machine:
 Go to Remote tab
 Add ProcessID
 Add JMX connections by providing Port Numbers

1. Overview:
Here we can view how much memory is allocated, the Xms & Xmx settings, the JVM version and the JRE version. Here we can also check the JVM arguments & system properties.

2. Monitor:
By default we can view the CPU, memory, classes and threads graphs.
Here we can perform a GC and take a heap dump. The heap dump extension is ".hprof".
Here we can check for memory leakage with the help of the heap/metaspace graph.

3. Threads:
Here we can view thread statuses like Running, Sleeping, Park and Wait,
and we can take a thread dump for analysis purposes.
Note 1: For thread dump analysis, copy the thread dump and paste it into any online thread analyzer tool.
Note 2: We can't copy a thread dump from a remote machine to the local machine and vice versa.

4. Sampler:
Here we can view how much CPU & memory is utilized by each and every thread as part of JVisualVM.

Thread Dump:
Whenever an application is not performing well, we analyze a thread dump.
A thread dump is a snapshot of thread statuses.

There are two types of threads:
1. Daemon threads: Which are invoked at the OS and hardware level.
2. Non-daemon threads: Which are created by the program.

Thread contention:
Thread contention is a state in which one thread is waiting for a lock which is held by some other thread.

Deadlock:
A deadlock is a situation where one or more threads are waiting for resources which are locked by other threads, which are in turn waiting on them.

Thread synchronization:
Synchronization allows multiple resources to be used safely by multiple threads.
In Java, every object has one monitor. At any point in time, only one thread can hold the lock on a monitor; other threads will wait until the monitor is released.
Note: Thread dumps can be produced with the help of jstack or JVisualVM.
As part of the analysis, we identify which threads are in blocked status and waiting status.
If you find any blocked threads, verify which lock they are waiting for and who holds that lock, copy these details and post them to the developer.
If you find waiting threads, we have to find which methods are executing for the particular thread (wait(), park(), sleep()), copy the details and post them to the developer.

If multiple threads are trying to get locks which are held by other threads, this causes deadlocked threads. Apart from the above analysis, we do have internal thread analyzer tools. Using these tools, we detect deadlocks and thread-level issues. For infrastructure-level analysis, we take CPU sampling, find which thread is utilizing more CPU and report the same.


Memory Dump:
A memory dump is a snapshot of memory utilization statistics at a particular point in time.
Whenever you receive an OOM (Out Of Memory) exception or see memory leakage, we have to take a memory dump to find out the root cause.

We have two types of memory:
1. Stack memory: Static variables are loaded into stack memory.
2. Heap memory: Dynamic variables are loaded into heap memory.
A memory dump contains the following information:
 Objects
 Classes
 Class Names
 Class loader information
 Fields
 Primitive fields
 Garbage collection roots
 Thread level data
 Stacks
To understand the memory dump, we should drill down into how many threads are required for our application and how much memory is required by each and every thread; this gives a detailed idea about the (static) memory footprint.
A memory dump gives static memory and dynamic memory (user session) information for analysis purposes.
Using the memory dump, we can identify which objects and classes are live for a long time; we can copy them and send them to the developer.
If you are not able to analyze the memory dump manually, you can use memory dump analyzer tools to identify the issue.
The memory dump extension is ".phd" or ".hprof".

Note: Using Dynatrace we can drill down to the exact place which is causing the delay in response times, but a Dynatrace license is required.

Sitescope


It is an agentless (online) monitoring tool.
It is a server monitoring tool, used to monitor any kind of server.
At a time, we can monitor 120 servers.
We have to configure the web, app and DB servers in SiteScope.
We can monitor at any time interval, at any point of time, from anywhere in the world.
By using this, we can monitor Windows, UNIX and AIX boxes.
We need to install SiteScope on your machine.
We need to configure the servers in SiteScope.
We can add monitoring profiles in SiteScope.
It is a product of HP, and the latest version is 11.52.
Launch the SiteScope URL and provide credentials; after that we are able to see the following tabs:
Monitor - Here we can add counters to monitor whichever server you want.
Remote Server - In this, we can configure Windows and UNIX servers.
Templates
Preferences
Server Statistics
Tools

Configuring server in site scope

1. Go to the Remote Server tab - select Windows or UNIX.
2. For Windows, provide the following details:
Name
Description
Server IP or server name
Credentials, i.e. username & password
Method - WMI (Windows Management Instrumentation)
Click on Save

3. For UNIX, provide the following details:
Name
Description
Server IP or server name
Credentials, i.e. username and password
OS
Method - SSH (Secure Shell)
Click on Save
In the Monitor tab we can add counters and view quick reports, current status and history.

Adding servers:
 Open Site Scope
 Monitor 
 Remote servers
 UNIX remote systems

Performance monitor(Perfmon)

Performance Monitor is usually called Perfmon by testers. It is a tool available by default on Windows machines.
To monitor CPU, memory and disk utilization we use Perfmon.
Using this we can monitor the local system as well as remote systems, but both should be Windows operated.

✓ How to monitor the local machine?

Open Perfmon; you can then see the default graph.

To analyse the graph, just go to the Reports tab (on the left side of your system screen).
There you can double-click on a report to see the graph; right-click on the graph and save the graph details with the file extension ".csv".
Then open the file, select the graph data, choose a graph type, and you can see the graph.

Note: The Perfmon result extension is ".blg".

✓ How to configure a remote server?

Navigate to Data Collector Sets.
Right-click on it and create a new data collector set by providing details like name, URL, credentials, etc.
Also add the counters to be monitored during the test.

Then you can see the same data collector set under the User Defined node, and you can also see it in the Reports tab.

To save the result and analyse the graph, just go to the Reports tab (on the left side of your system screen). There you can double-click on the report to see the data collector name you provided; right-click on the graph and save the graph details with the file extension ".csv".
Then open the file, select the graph data, choose a graph type, and you can see the graph.

Note:

It is the default monitoring tool to monitor Windows-based servers.
Start
Run
Type perfmon
Click OK

Process:
Right-click on Counter Logs.
New log settings.
Provide the output file name.
Add servers & objects along with counters.
Define the time interval.
Provide the output file name and schedule settings.

Windows resources

Windows Resources is a default graph in the Controller.
✓ Drag and drop the Windows Resources graph, then right-click on it and choose Add Measurement; there, add the counters that you are going to monitor during the test.
Note: we can monitor remote machines also by configuring the credentials of the remote machine.
✓ Once the test has completed, right-click on the Windows Resources graph, choose Save As HTML and save it.
✓ Using Windows Resources we can monitor counters such as the following:

From CPU level
Processor time
User time
Idle time
CPU interrupts, etc.

From memory level
Available bytes
Committed bytes
Page faults read per sec
Page faults write per sec
Cache bytes, etc.

From disk level
Disk read time
Avg disk bytes read per sec
Avg disk bytes write per sec
Current disk queue length
Avg disk queue length etc........

From system level
Processor queue length
Context switching
Threads, etc.

From server level
Hits per sec
Requests per sec
Idle workers
Active workers
Bytes per sec, etc.

From network level
Connections established
Connections failed
Throughput
Network delay, etc.

Note: for UNIX resources we follow the same procedure as for Windows resources.

Server side Statistics analysis


Monitoring CPU, memory, and disk utilization is called server-side analysis.
Note: For all Windows OS based machines, CPU and memory utilization should not cross 80%. For all UNIX, Linux and Red Hat OS based machines, it should not cross 90%.

To monitor these we may use the default graphs available with the Controller, like Windows Resources (for Windows machines) and UNIX Resources.
✓ For monitoring Windows machines we can use the Perfmon tool, which is available in Windows by default.
Using this we can monitor local as well as remote machines, but both machines should be Windows operated.
✓ For monitoring UNIX machines we can use UNIX commands like top, iostat, vmstat, netstat and nmon.
Note: using Windows Resources and UNIX Resources we can monitor the counters by configuring local and remote machines.

Network side statistics analysis

Client side statistics analysis:

To analyse the client-side statistics we have to merge the following graphs in order to identify the bottleneck:

Running Vusers.
Connections.
Hits per second.
Throughput.
Errors per second.
Response time.

Merging: We can merge the graphs in 3 ways:
Overlay graph
Tile graph
Correlate graph

Scenario 1:
Relation between hits per second and throughput.
Both should be directly proportional; if not:
Cause 1: It could be a network bandwidth issue.
Cause 2: The web server might have an issue.
Cause 3: The application itself has an issue.

Note: If hits are increasing and throughput is not increasing due to application issues, we are receiving an exception page, which shows up as high hits with low throughput.

Scenario 2:
Relation between running users and hits per second.
Both should be directly proportional; if not, the application itself has a problem (or) the application is not responding well.

Scenario 3:
Relation between throughput and response time.
Both should be inversely proportional.
Note: As per the market standard, both should be inversely proportional within boundaries (if you are testing pages).

Scenario 4:
Relation between running users and connections.
Both should be directly proportional; if not:
Cause 1: Connection limit issues in the web server.
Cause 2: The thread limit reached its threshold point in the web server.
Controller output messages for the above issue:
1. Users permanently or prematurely shut down.
2. Web server logs a max clients error.


WEB PAGE DIAGNOSTICS:
Using the Web Page Diagnostics graph we drill down into issues at the component, network, and server level (breakdown drill-down).

Component breakdown graph:
Allows you to analyze component-level issues; if any one of the components takes too long to download, it is reported as an issue.

Time To First Buffer graph (TTFB):
If the TTFB is high, then the problem is with the server or application.
If the TTFB is low and the page response time is very high, then it is a network issue.

Note: After this we have to move to server-side analysis.

Analyzer

The Analyzer file extension is ".lra".

1. Cross results option: Allows you to compare two ".lrr" result files as part of a benchmarking test.

2. Session explorer: Contains the ".lrr" path, period, duration, average throughput, hits per second, total throughput, transaction response times and status codes.

3. Graphs: Allow you to add and delete graphs.

4. Properties: Allow you to exclude/include think time and generate percentile response times.

5. Controller output messages: Controller error messages are displayed, which is helpful for analysis.

6. User data: Allows you to write something yourself.

7. Raw data: Based on the request, we can pull the raw data and send it to the architecture people for analysis purposes.

8. Graph data: Gives the raw data for a graph.

9. Legend: Helps you understand which colour indicates which measurement.
Scale: Indicates the scale factor applied to each measurement in the graph.

10. Granularity: Time difference between two consecutive data points.
NOTE: The minimum granularity for throughput and hits per second is 5 seconds;
for all remaining graphs it is 1 second.

11. 90th percentile:
90 percent of the transactions complete within this limit.
Step 1: Sort all the response times in ascending order.
Step 2: Remove the top 10% of values.
Step 3: The highest remaining value is the 90th percentile response time.
Note 1: We report only the 90th percentile response time to the client.
Note 2: Based on the client requirement we can also generate the 80th, 85th, 90th, etc. percentiles.
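The same procedure can be written as a small C program; the response-time values below are made up purely for illustration.

    #include <stdio.h>
    #include <stdlib.h>

    /* Sort ascending, drop the top 10% of values, and take the highest remaining one. */
    static int cmp(const void *a, const void *b)
    {
        double x = *(const double *)a, y = *(const double *)b;
        return (x > y) - (x < y);
    }

    int main(void)
    {
        double rt[] = { 1.2, 0.8, 2.5, 1.1, 0.9, 3.0, 1.4, 1.0, 2.2, 1.3 }; /* sample times in seconds */
        int    n    = sizeof(rt) / sizeof(rt[0]);
        int    idx;

        qsort(rt, n, sizeof(double), cmp);

        idx = (int)(0.90 * n) - 1;        /* index of the 90th percentile value */
        if (idx < 0) idx = 0;

        printf("90th percentile response time: %.2f s\n", rt[idx]);
        return 0;
    }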

12. Reports:
By default we can generate Word, HTML, Crystal Reports and PDF reports.
Reporting:
Once the test is completed, I export the response times to Excel and prepare a comparison report.

Comparison report:
It compares the 90th percentile response times with the baseline response times of the previous test results, and I maintain a RAG (Red/Amber/Green) status. In another tab, I copy the merged graphs to help interpret the test results. Apart from the comparison report, we prepare a quick analysis summary which contains the objective, scope, how we designed the scenario, the test environment, and observations in terms of resource utilization, high response times, and the Controller and web server logs.
I send a mail to get the AWR and NMON reports for further analysis.
I prepare a PPT by analyzing all the supporting files (AWR, NMON, ...) covering the objective, observations, environment comparison, high response time transactions and root causes, to present to the stakeholders.

What is your approach to analyze the statistics?
(or)
What is the process you are following to identify the bottleneck?
A:
Once the test has finished, I compare the derived statistics with the expected statistics. If they do not match, I start the process of finding the root cause.
 Client side statistics analysis (analyzer, throughput, Hits per second, Response time).
 Server side statistics analysis (Hardware and OS level statistics).
 Application side statistics (Methods, I/O operations, DB, EJB, Packages etc..).
 Configuration setting analysis (Current limit, Connection limit, Thread limits..etc..).

Types of testings

1. Warm up test/Dry-run test/Discovery test:
This is not the actual test; it ensures that all scripts, test data, the environment and the application are running fine and stable.
Note: The warm-up test is conducted with 10% or 100% of the actual load for a short duration.

2. Performance test/Baseline test:
Whenever you do not have SLAs, you have to conduct a baseline test with a single user, a single script and a single iteration, execute the script in standalone mode, get the response times, and consider them the baseline response times.
Step 1: Choose the schedule by group option.
Step 2: Choose the option to start a group after the previous group finishes, so the scripts run one by one.
Note: Under load, the application should behave the same way it behaved with a single user.

3. Load Test:
Load testing verifies the application behaviour under the expected load.
We design the scenario with 100% of the anticipated load.

4. Endurance Test/Soak Test/Longevity Test:
Verifies whether the application remains available for a long duration.
We have to design the endurance test with normal load (50-60% of peak load) for a longer duration (12h, 18h, 24h).
Note 1: The objective of this test is to identify memory leaks.
Note 2: In some cases, we might need to design the scenario with normal load by increasing the think time and pacing time.

5. Stress Test:
Stress test is the test to identify the breaking point or performance degradation point of
application.
We can stress the application in two ways:
1. By increasing the number of users
2. By reducing the pacing and think time

Note: We can increase the number of transactions by reducing the pacing and think time without adding the users.

6. Failover Test (based on client request):
In the absence of the first data cord, verifying whether the second data cord can take the load without any failed transactions or performance degradation.
Process:
In one of my tests, we designed a failover test of 2 hours duration. After one hour, we asked the IT admin and architect teams to unplug the first data cord from the network. We then verified whether the test reported any failed transactions or performance degradation, and monitored whether the second data cord was able to take the entire load in the absence of the first.
In the above scenario, my role is very minimal; plugging and unplugging are performed by the network team.

7. Benchmark Test:
A benchmark test gives a repeatable set of quantifiable results from the current release against which future releases can be measured. These results have to be compared with the baseline test results.

8. Capacity Planning Test:
By forecasting the future usage, whatever sequence of tests we conduct for that purpose is called capacity planning testing.
Process:
We conduct load and stress tests to identify the application's breaking point, then speak with the BA team to understand the expected business growth and plan the capacity accordingly.

9. Spike Test:
We test the application behaviour under sudden, abnormal spikes in load.

10. Volume Test:
Volume test is the test to verify the application behavior under huge amount of load.
Example: Interfaces and Batches

11. Scalability Test:
Scalability testing measures an application's capacity to scale up or scale down using horizontal and vertical techniques.
We can scale the application in two ways:
1. Vertical scaling: adding resources (CPU, memory, cores) to the same node.
2. Horizontal scaling: adding more nodes to the existing system.

12. Network Latency/Wan Emulation Test:
To simulate network latency (delay) we use HP Shunra (Network Virtualization) and conduct a network virtualization test.

Memory footprint test: Testing with a single user, then 20 users, then 50 users to determine how many users a Load Generator can sustain is called a memory footprint test. The current version of LR sends a notification whenever the threshold point is reached.
Memory footprint in Load Generators:
The number of Load Generators required depends on the items below.
1. RAM size of the Load Generator.
2. Number of variables and the memory allocated to variables in the LR script.
3. Whether you run the Vuser as a process or as a thread.



Pacing Calculations

Pacing Calculations:
(or)
Little’s Law:
(or)
How to calculate TPH (Transactions per hour)?

Q : Target 1800 transactions per 1hr
 1 Script contain 30 transactions
 1 Iteration is taking 30 seconds
Calculate pacing?
A:
Step 1:
Total number of iterations = target transactions / transactions per script
                           = 1800 / 30
                           = 60 iterations
Step 2:
Time for the target iterations = target iterations * time per iteration
                               = 60 * 30
                               = 1800 sec
Step 3:
Remaining time = target time - target iteration time
               = 3600 - 1800
               = 1800 sec
Step 4:
Pacing = remaining time / target iterations
       = 1800 / 60
       = 30 sec
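As a cross-check, the same arithmetic can be written as a small C program. The inputs are the values from the question above, and it assumes a single Vuser running the script.

    #include <stdio.h>

    int main(void)
    {
        int target_tph       = 1800;   /* target transactions per hour           */
        int tx_per_iteration = 30;     /* transactions in one script iteration   */
        int iteration_time   = 30;     /* seconds one iteration takes to execute */
        int test_duration    = 3600;   /* one hour, in seconds                   */

        int iterations     = target_tph / tx_per_iteration;   /* 60           */
        int busy_time      = iterations * iteration_time;     /* 1800 seconds */
        int remaining_time = test_duration - busy_time;       /* 1800 seconds */
        int pacing         = remaining_time / iterations;     /* 30 seconds   */

        printf("Iterations: %d, Pacing: %d seconds\n", iterations, pacing);
        return 0;
    }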

RTS

Runtime settings:
Extension of RTS is “.cfg” & “.usp”.
Note: RTS will be transferred to controller from vugen script but not vice versa.

1. Run Logic: Indicates number of Iterations.
Note1: Test duration setting will override the Run Logic.
Note2: To make the script iterate for a specific number of iterations, we have to choose "Run until completion".

2. Pacing: Time delay between the Iterations
We have 4 types of pacing.
1. No pacing/no delay: start a new iteration as soon as the previous one ends.
2. Fixed pacing: wait a fixed time after the previous iteration ends.
3. Random pacing: generate a random value and wait for that time.
4. Interval pacing: instructs the Vuser to start each new iteration within a fixed interval, which includes the iteration time itself.
Note1: Pacing is the time delay before starting a new iteration after finishing the previous iteration.
Note2: Pacing allows you to control the number of iterations and the number of transactions.
Note3: The pacing calculation is very important while preparing the workload model.

3. Log: Logs will help you to debug the script.
 Enable logging: You will receive log messages based on the settings.
 Disable logging: Use this option while running the test to avoid unnecessary logging overhead.

4. Think time:
Think Time is the time to choose new action after getting previous response.
 (or)
Time delay b/w the User actions.
Note: You can even pass floating-point values as think time.

Q: Why think time is required?
A:
In a realistic environment, end users take some time to choose the next action after receiving the previous response.
But in my script the virtual users do not wait to choose the next action; they fire requests back to back.
To simulate the realistic environment we instruct the users to pause between requests with the help of think time.

Who is going to provide the think time?
A:
As performance testers we have to calculate the think time: how long a normal user waits on each page before choosing the next action.
Note: We should not use the recorded think time.
Think time options:
1. Ignore think time: ignores the lr_think_time() function and fires the requests back to back.
2. As recorded: pauses the script execution for the time specified in the function.
3. Multiply recorded think time: multiplies the recorded think time by a factor.
4. Random think time: generates a random value between the minimum and maximum percentages of the recorded think time and uses that.
5. Limit think time: caps the think time at a maximum value.

Q: Where do you place the think time?
A: We should not place think time between the start and end transaction functions; otherwise the pause is included in the measured transaction response time.
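A minimal sketch of the placement; the transaction name and URL are hypothetical, and the think time stays outside the start/end transaction pair.

    Action()
    {
        lr_think_time(10);                        /* pause BEFORE the transaction   */

        lr_start_transaction("01_Login");         /* hypothetical transaction name  */
        web_url("login",
            "URL=http://example.com/login",       /* hypothetical URL               */
            LAST);
        lr_end_transaction("01_Login", LR_AUTO);  /* stop timing before next pause  */

        lr_think_time(10);                        /* pause AFTER the transaction    */
        return 0;
    }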

Q: What is the impact of think time on the response time?
A:
Scenario1:
If you reduce the think time you will see higher response times. Because of the lower think time, users perform more iterations and transactions, which puts more load on the server.
 Lower think time gives the server less breathing time, which impacts the transaction response times.
Scenario2:
 Higher think time gives better response times.
 Due to the higher think time the server gets more breathing time, so it can process the requests faster.

Global think time:
Define the variable in global.h.
Ex: int x = 10; (in global.h)
lr_think_time(x); (in Action)
Note: If we forget to take the think time out of a start/end transaction pair, we can filter those think times out of the response times in the .lra file with the help of the Properties option.
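A slightly fuller sketch of the same pattern, assuming the variable lives in global.h and the URL is hypothetical:

    /* global.h -- visible to every action in the script */
    int g_think_time = 10;      /* one global think-time value, in seconds */

    /* Action.c */
    Action()
    {
        web_url("home",
            "URL=http://example.com/",     /* hypothetical URL */
            LAST);

        lr_think_time(g_think_time);       /* same pause reused in every action */
        return 0;
    }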

5. ADDITIONAL ATTRIBUTES:
To declare the Environment variables.

Q: How to create environment variables in LR?
(or)
How to pass a new value in to the script without opening the script?
A:
char *server;
server = lr_get_attrib_string("host");
lr_save_string(server, "url");
 Write the above statements in the script.
 Set the value in the RTS and pass varying arguments at run time.
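A minimal sketch of how the saved attribute can then be used in a request; the attribute name "host", the parameter name "url" and the URL come from the snippet above and are otherwise assumptions.

    Action()
    {
        char *server;

        /* "host" is an additional attribute defined under RTS > Additional Attributes;
           its value can be changed per run without opening the script.                */
        server = lr_get_attrib_string("host");

        /* Save it as a parameter so it can be referenced as {url} in requests. */
        lr_save_string(server, "url");

        web_url("home",
            "URL=http://{url}/",           /* parameter substituted at run time */
            LAST);

        return 0;
    }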

6. Miscellaneous:
Continue on Error:
Continue script execution even when an Error Occurs.
Generate snapshot on error:
Generates a snapshot for every error; you can verify them in the result file or in the Controller Vuser log (by clicking the camera symbol).

Q: Where can we find the screenshot?
A: Go to the Vuser log and click on the camera symbol, or navigate to the LG results path, where you can find an HTML page for every error.

Multithreading:
Run Vuser as a process:
For all client/server applications (desktop-based apps, e.g. SAP GUI or Calculator) you have to run the Vuser as a process. If we run Vusers as processes, every Vuser requires its own MDRV (Multi Driver) engine.
Every MDRV engine requires about 5 MB of memory on the LG machine.

Running Vuser as a thread:
For all web-based applications you have to run the Vuser as a thread. If you run Vusers as threads, multiple users share one MDRV engine.
NOTE: Approximately 50 Vusers share one MDRV engine (for example, 500 Vusers as processes need roughly 500 x 5 MB of memory, whereas as threads they need only about 10 MDRV engines).

Automatic Transaction:
Allow you to generate the automatic transactions to measure the response time.

7. Network:
Speed simulation:
Specifies whether to use the maximum, a predefined, or a custom bandwidth for your test.
Usually we use the maximum bandwidth option; unless there is a specific requirement, we do not use custom or advanced bandwidth.
If you want to test your application with a specific network bandwidth, then use the custom or advanced bandwidth option.

8. Browser Emulation:
It will allow you to use multiple browsers for test.
Note:
Prerequisites:
We should install the browser in the load generator.
1. Simulate browser cache: Enabling this option instructs the Vusers to simulate and use the browser cache files.
If this option is disabled, cache files are not simulated.
2. Simulate a new user on each iteration: If the Vusers iterate multiple times, enabling this option makes each Vuser behave like a new user on every iteration.

9. Internet Protocol:
Content check:
It is a global text verification option; it verifies the text on every page.
Note: web_reg_find is the corresponding local (per-request) verification point; see the example after the steps below.
Steps:
1. Create an application by clicking New application.
2. Create a rule under the application.
3. Provide the text and the match criteria.
4. You can export or import the rule; the extension is ".xml".
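A minimal sketch of the local verification mentioned above; the search text, step name and URL are hypothetical.

    /* Registered BEFORE the request whose response it verifies. */
    web_reg_find("Text=Welcome to My Account",   /* text expected in the response   */
                 "Fail=NotFound",                /* fail the step if text not found */
                 LAST);

    web_url("account",
        "URL=http://example.com/account",        /* hypothetical URL */
        LAST);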

10. Proxy:
It will allow you to configure proxy settings for all requests. It will redirect all the requests to proxy server.
Options are
1) No proxy
2) Obtain proxy settings from browser
3) Custom proxy

11. Preferences:
1. Enable image and text check:
This option has to be enabled for web_find(), and web_image_check().
2. WinInet replay instead of sockets:
For NTLM-based or SSL-based applications, you can use WinInet replay.
If you do not want to use the web_set_sockets_option settings, then use WinInet replay.
Options:
 HTTP request connection timeout: The time within which the connection operation should finish. Default is 120 seconds.
 HTTP request receive timeout: The time within which the receive operation should finish. Default is 120 seconds.
 Step download timeout: The time within which the entire step has to finish. Default is 120 seconds.
These defaults can also be overridden from the script, as shown in the sketch below.
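A minimal sketch of overriding the defaults from the script with web_set_timeout(); the values below are arbitrary examples.

    web_set_timeout("CONNECT", "240");   /* HTTP request connection timeout, seconds */
    web_set_timeout("RECEIVE", "240");   /* HTTP request receive timeout, seconds    */
    web_set_timeout("STEP",    "300");   /* whole-step download timeout, seconds     */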

12. Download filters:
To exclude or include specific URL, use download filter option.

Q: How to design shared RTS? (or) How to configure RTS for multiple scripts?
A: We can configure shared RTS in two ways:
1. Vugen level: Configure the RTS in one script, copy the ".cfg" and ".usp" files and paste them into the remaining script folders.
2. Controller level: Select all the scripts, choose shared RTS and configure the settings once.
Note: Whenever scripts have a different number of actions or different action names, you should not share RTS or copy the ".cfg" and ".usp" files into other scripts.


SLA configuration:
SLA configuration will allow you to compare derived statistics with expected statistics.
Step 1: Click on new under SLA
Step 2: Choose SLA measurement like total, average, throughput, hits per second, response time
and errors.
Step 3: Provide threshold point and click on finish.

Rendezvous test:
A rendezvous point instructs Vusers to wait at a certain location; once the specified number of users has arrived at that point, they all execute the subsequent request together.
Syntax:
lr_rendezvous(“xyz”);
Step 1: Write the function in the script.
Step 2: Go to controller and select rendezvous under scenario.
Step 3: Configure the policy by providing number of users and timeout.
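A minimal sketch of the script side; the rendezvous name, transaction name and URL are hypothetical.

    Action()
    {
        /* All Vusers wait here until the rendezvous policy (number of users /
           timeout configured in the Controller) is satisfied, then the request
           below fires concurrently.                                            */
        lr_rendezvous("submit_order");

        lr_start_transaction("02_Submit_Order");
        web_url("submit",
            "URL=http://example.com/order/submit",   /* hypothetical URL */
            LAST);
        lr_end_transaction("02_Submit_Order", LR_AUTO);

        return 0;
    }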

Controller

CONTROLLER

Note: Controller extension is “.lrs”.

Manual scenario: Manually we have to design the scenario by providing ramp up, ramp down, duration, number of users to generate the anticipated load against the application.

Percentage mode: It will distribute the load in between use cases in terms of percentage.

Goal oriented scenario: Controller itself will design the scenario as per the goal.
 The Controller has 3 tabs:
1. Design tab.
2. Run tab.
3. Diagnostics tab.
1. Design tab: where you can design the scenario.
Note: scenario extension is “.lrs”.

 Ramp up: gradually increasing the load against the application.

 Duration: indicates the test duration excluding ramp-up and ramp-down time; this is also called the "duration", "standalone" or "steady state" time.

 Ramp down: gradually reducing the load from the application.

 Elapsed time: the total test time, which includes ramp-up, steady state and ramp-down.

 Throughput: the amount of data received from the server per second.

 Hits per second: the number of hits (HTTP requests) sent to the server per second.

 Schedule by scenario: treats all the scripts as one scenario and configures a common ramp-up, ramp-down and duration.
 (or)
Considers everything as one scenario sharing the same ramp-up, ramp-down and duration.

 Schedule by group: every script is treated as a separate scenario with its own ramp-up, ramp-down and duration.

 Real-world schedule: allows you to create multiple schedule actions (ramp up, ramp down, duration).

 Basic schedule: you have only the basic actions under this option (one ramp up, one ramp down).

2. Run tab:
 User status:
Down
Pending
Initialization
Ready
Run
Rendezvous
Passed
Failed
Error
Gradual exiting
Exiting
Stopped

 Gradual exiting: when the test duration completes while users are in the middle of an iteration, those users move into the gradual exiting state; once the iteration finishes, they come out of the test.
Note: The elapsed time starts as soon as you hit the start button.

3. Diagnostics tab: By default, web page diagnostics are available free of cost. We have to buy a license for Java, Siebel, SAP, Oracle, etc. diagnostics.

Manual scenario:
 Check list:
1. Choose manual scenario.
2. Push the script into controller.
3. Choose schedule by scenario (or) schedule by group.
4. Choose real world (or) basic schedule.
5. Assign the quantity.
6. Assign load generators.
7. Verify connectivity with LG’s.
8. Configure SLA’s (if required).
9. Configure runtimes settings for every script.
10. Set the result path.
11. Provide ramp up, ramp down, duration.

Note: Whenever the situation demands that every script run for a different duration with its own ramp-up, choose schedule by group.

Difference between a request and a hit:
A user action is a request.
A request that successfully reaches the web server is counted as a hit.
Note: One request may contain multiple hits.



 Goal oriented scenario: controller itself will design the scenario according to goal.
 Check list:
1. Choose goal oriented scenario.
2. Push the scripts into controller.
3. Click on edit scenario goal.
4. Provide profile name.
5. Provide goal type and threshold point.
6. Provide max and min users.
7. Provide duration once it reaches the goal.
8. Configure notifications if it is not able to reach goal.
9. Distribute load in terms of percentage.
10. Assign LG and check the connectivity.
11. Configure SLA’s (if required).
12. Set the result path.

Note: Goal types
1. Virtual users.
2. Transaction per second.
3. Hits per second.
4. Response time.
5. Pages per minute.

IP Spoofing:
Masking the original IP address and using different IP addresses is called IP spoofing. Whenever the load balancer distributes requests based on the source IP address, we have to mask the original IP address and make sure that every user uses a different IP address.
Process: In a realistic environment every end user accesses the application from a different IP address, but in the LR environment all the users are invoked from the same LG with the same IP, which is not realistic.
Step 1: Request the IT infrastructure team to provide dynamic IP addresses.
Step 2: Ask them to configure these IP addresses in the DHCP server.
Step 3: Configure these IP addresses on the load generator using the IP Wizard option.
Step 4: Enable the IP spoofing option in the Controller.

Load balancing: Load balancer is a URL, which will distribute load in between web instances.
As part of performance testing you will receive two kinds of URL.
1. Direct URL: It will access web instance directly.
2. Load balancer URL: which will distribute the load in between web instances.
We do have two types of load balancer
1. Hardware load balancer
2. Software load balancer
Among the above, the hardware load balancer is more accurate and performs better.
Load balancer URL will distribute the load based on below algorithms.
1. IP sticky
2. Least connections
3. Round robin
4. Round passion
5. Least load


TestData

Test Data:
Usually you will get test data from the DBAs. While developing the script itself, we have to prepare a test data requirement sheet that lists, for every use case, how much test data is required, and segregates which data is reusable and which is not. Sometimes you can generate test data using the LR script itself.
Example: You can create username and password if the signup functionality is available in the
application. If the functionality is not available to generate test data, we have to request DBA to provide it.
If the test data is not reusable, we have to request DBA to create database restoration point or take the flashback of database.
Scenario 1: Once the test is completed, we have to request DBA to change DB to previous restoration point. So that data will be available for next test.
Scenario 2: Once the test is completed, we have to request DBA, flashback the database or load previous database instance.

Scenario 3:
Q: One of the script is generating purchase order number and second script is processing same purchase order number.

What is your approach to design the script?
Or
How to pass value from one script to another?

Solution 1: Create two actions in one script for both of the scenarios. Capture the purchase order number from the first action and pass it into the second action, as in the sketch below.
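A minimal sketch of Solution 1; the boundaries, parameter name and URLs are hypothetical.

    /* Action1 -- capture the purchase order number from the create-order response. */
    Action1()
    {
        web_reg_save_param("PO_Number",
            "LB=order_id=",               /* hypothetical left boundary  */
            "RB=&",                       /* hypothetical right boundary */
            LAST);

        web_url("create_order",
            "URL=http://example.com/order/create",
            LAST);

        return 0;
    }

    /* Action2 -- reuse the captured value; the parameter persists across actions. */
    Action2()
    {
        web_url("process_order",
            "URL=http://example.com/order/process?id={PO_Number}",
            LAST);

        return 0;
    }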

Solution 2:
Data Staging:
Execute the first script with multiple users in the Controller and write the purchase order numbers into a local file before starting the actual test. Then load the purchase order numbers from the local file into the second script as parameter data, so that both scripts can be executed simultaneously.
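A minimal sketch of the data-staging write step; the file path and parameter name are hypothetical, and it uses the usual VuGen idiom of holding the file pointer in a long.

    Action()
    {
        long fp;

        /* Append each captured purchase order number to a local file so that a
           second script can later load the file as parameter data.             */
        fp = fopen("C:\\temp\\po_numbers.dat", "a");
        if (fp == NULL) {
            lr_error_message("Unable to open the PO data file");
            return -1;
        }

        fprintf(fp, "%s\n", lr_eval_string("{PO_Number}"));
        fclose(fp);

        return 0;
    }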

Solution 3:
VTS (Virtual Table Server):
Virtual Table Server is a tool that enables sharing of test data and parameters between LoadRunner virtual users (Vusers). Usually performance testers use file-based parameterization in their scripts; Virtual Table Server instead acts as a centralized repository to store data that can be shared between Vusers during test execution.

Recording Options

Recording Options:
Shortcut (Ctrl + F7)

1. Recording mode :

We do have two modes in HTTP/HTML protocol
1) HTML Mode
2) URL Mode

HTML Mode: Generates a separate step for every user action; you can also record non-HTML resources.
HTML Advanced: Using the advanced options you can choose whether forms are recorded as web_submit_form or web_submit_data.
Usually we use web_submit_data together with the "record within the current script step" option (see the sketch after the advantages list below).
Advantages:
 Easy to understand the script.
 Easy to maintain the script.
 Very less number of lines of code.
 Generate script for HTML and non HTML resources.
 The script never fails because of non-HTML resources. If any non-HTML resource is unavailable, it only throws a warning such as "resource unavailable" (HTTP status code 403).
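A sketch of the kind of step HTML (advanced) mode generates for a form submission; the step name, field names, values and URL are all hypothetical.

    web_submit_data("login.do",
        "Action=http://example.com/login.do",
        "Method=POST",
        "Mode=HTML",
        ITEMDATA,
        "Name=username", "Value=user1", ENDITEM,
        "Name=password", "Value={pwd}", ENDITEM,   /* parameterized field */
        LAST);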

URL Mode:
Records not only the user actions but also the server-side resources. It generates web_custom_request and web_url steps.
 You will find a huge number of lines of code.
 It is very difficult to maintain the script.
 It records non-HTML resources in the form of concurrent groups.
 URL mode is preferred for non-browser applications.
If any non-HTML resource is unavailable or fails to download, the script is aborted with an error.
If the application has JavaScript files, prefer URL mode.

2. Script:
The script options allow you to generate think time automatically and to limit the number of lines in the script.

3. Protocols:
Displays opted/chosen protocol.

4. Code generation:
Allow you to conduct auto scan for dynamic values.

5. Configuration:
Deals with auto-correlation: you can specify record scan, replay scan, rules scan, the correlation function, and the minimum and maximum length of the dynamic value.

6. Rules or Correlation Studio (till 11.0):
We can create our own rules for common dynamic values. Those rules will be used across the
project.
Step 1: Create new application.
Step 2: Create new rule by providing LB, RB, param.
Note: We have to select boundary based scan type
Step 3: Test the rule.
Step 4: You can export the rule by clicking export button.
Step 5: You can import the rule by clicking import button.
Note: Correlation rule extension is “.cor”

7. Advanced:
We can generate auto text verification function using this option.
We can generate auto headers.

8. Mapping and Filtering:
Q: I am recording a business scenario but I failed to record the events.
Solution:
If you fail to launch the application, then change the DEP (Data Execution Prevention) settings in My Computer properties:
My Computer > Properties > Advanced System Settings > Advanced > Performance Settings > DEP tab > choose "Turn on DEP for all programs".