These questions are similar to the ones asked in the actual test.
How would I know? Because although I earned my Basis certification five years ago, I have since re-certified with the
latest version of the Associate certification test.
Before you start, here are some key features of the Basis Associate certification exam:
- The exam is computer-based and you have three hours to answer 80 questions.
- The questions are (mostly) multiple choice, and there is NO penalty for an incorrect answer.
- Some questions have more than one correct answer. You must select ALL the correct options for the answer to be counted as correct.
- The official pass mark is 65% (but this can vary). You will be told the exact passing percentage before you begin your test.
1. Which RFC will be used to “Read system data remote” from transaction SMSY?
RFC destinations within the Solution Manager are as below:
• SM_<SID>CLNT<Client>_READ: This RFC is used to read system data remotely from transaction SMSY and to read the "Data Collection Method" for system monitoring.
• SM_<SID>CLNT<Client>_LOGON: This RFC is used by System Monitoring for accessing the "Analysis Method".
• SM_<SID>CLNT<Client>_TRUSTED: This RFC is used by the Customizing Synchronization functionality and by System Monitoring for accessing the "Analysis Method". It exists in both the Solution Manager and the satellite system.
• SM_<SID>CLNT<Client>_BACK: This RFC is used to send service data and Support Desk messages from the satellite system to the SAP Solution Manager system. This RFC destination is required in the satellite system only.
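The naming pattern behind the four destinations above is mechanical, so a short sketch can make it concrete. This is purely illustrative (a made-up helper, not an SAP API): it builds the destination names from a system ID and client.

```python
# Illustrative helper (not an SAP API): build the Solution Manager RFC
# destination names listed above from a system ID and client number.
def rfc_destinations(sid, client):
    suffixes = ["READ", "LOGON", "TRUSTED", "BACK"]
    return [f"SM_{sid}CLNT{client}_{suffix}" for suffix in suffixes]

# Example: a hypothetical satellite system PRD, client 100
names = rfc_destinations("PRD", "100")
```

For a system PRD with client 100, the READ destination would come out as `SM_PRDCLNT100_READ`.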
2. What is the common monitoring system used for Business Process Monitoring in the Solution Manager?
d) None of the above
Business Process Monitoring in SAP Solution Manager is based on CCMS (transaction RZ20). This
means that with the setup of the monitoring so-called monitoring tree elements (MTEs) will be created.
Both the CCMS on the SAP Solution Manager system and the satellite system are used.
Most monitoring functionalities use the CCMS of the SAP Solution Manager system.
- Application monitors:
  - Document volume monitoring
  - Application log monitoring
  - Due list log monitoring
3. In your company, data backups were scheduled as follows: t1, t2, t3, t4, and t5, and the duration of the backup cycle is 28
days. A disk crash occurred at point t5, and the backup file t3 is defective. What action is required to recover the
database without any data loss? (More than one option is correct)
a) You must have all log info backups
b) T4 backup file is sufficient
c) T2 and T4 are required
d) T3 and T4 are required
Answer: a & c
To recover the database without data loss, you must have all log information backups made since the data backup (in this
case, t2 and t4 following the data backup t1). You must therefore keep older data and log information backups.
Some databases also require log information to be able to reset the database. You should therefore ensure that you
perform data and log backups regularly. It is recommended that the duration of the backup cycle is 28 days.
This means that the backup media for the data and log information backups are not overwritten for at least 28 days.
Some databases allow you to import only the data files that are missing. The system then applies all the consecutive log
information written since t1 (t2, t3, …). No data is lost if all log information since the data backup is available with
no gaps. To prevent data loss, it is advisable to perform complete data backups every day.
Some databases offer the option of differential or incremental data backups that do not back up the complete data of the
database, but only the data that has been changed.
If you use incremental data backups, you must perform a complete data backup at least once a week. There must be at
least four complete data backups contained in the backup cycle. The log information should be backed up at least once
a day. It is always better to have two different backup media containing the same information.
Perform a data and log information backup with verification of the backup media at least once in the backup cycle.
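The recovery reasoning above can be made concrete with a toy model. This is a sketch, not a database tool: it assumes backups are labeled with integer time points and that each log backup covers an interval, and it checks whether some usable data backup can be followed by an unbroken chain of log backups up to the crash.

```python
# Toy model (not a real database utility): recovery without data loss needs
# one usable data backup plus an unbroken chain of log backups from that
# backup up to the crash point.
def can_recover(full_backups, defective, log_backups, crash_time):
    """full_backups: times of data backups; defective: set of unusable ones;
    log_backups: set of covered (start, end) intervals; crash_time: int."""
    usable = sorted(t for t in full_backups if t not in defective)
    for start in reversed(usable):      # try the newest usable data backup first
        t = start
        while t < crash_time:           # walk the log chain forward
            covering = [e for (s, e) in log_backups if s <= t < e]
            if not covering:
                break                   # gap in the log chain
            t = max(covering)
        if t >= crash_time:
            return True
    return False

# Data backups at t1 and t3, t3 defective; log backups t2 and t4 cover
# everything from t1 up to the crash at t5 -> recovery without data loss.
ok = can_recover([1, 3], {3}, {(1, 2), (2, 4), (4, 5)}, 5)
```

With a gap in the log chain (say the t2 log backup were lost as well), the same function returns `False`, which is exactly why the text insists on keeping older backups until the cycle completes.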
4. How do you estimate the amount of DB space freed by archiving? (more than one option is correct)
a) Transaction SARA
Answer: a & c
When writing, deleting, reading, or reloading, statistical data on each archiving run (such as the storage space that has
been freed in the database by deletions or the number of deleted data objects) is automatically generated and is
persistently stored in the database.
The data archiving administrators can analyze these figures so that they can better plan future archiving projects and
request the necessary resources. Statistics also provide pertinent information on the role of data archiving in reducing
the data volume in the database.
Data archiving is a three-step process:
1. Creating the archive file: In the first step, the write program creates one (or more) archive file(s). The data to be
archived is read from the database and written to the archive file(s).
2. Storing the archive file(s): After the write program has finished creating the archive file(s), they can be stored, for
example in an external storage system or in the file system.
3. Deleting the data: The delete program first reads the data in the archive file and then deletes the corresponding
records from the database.
You can follow the steps below to analyze the space gained:
1. Go to transaction SARA.
2. Click Statistics.
3. On the selection screen, enter the archiving objects that you used when launching the archiving runs.
4. You can also enter a date range covering the dates on which you ran the archiving jobs.
5. Restrict the status to "complete" only.
6. Execute to display the statistics.
7. In the final report, look at the total for "DB space deleted". Keep in mind that there are both "DB space written" and
"DB space deleted" figures; the deleted figure tells you how much space you have freed up in the SAP R/3 database, i.e. the
space gained by the archiving runs.
You can also use transaction SAR_DA_STAT_ANALYSIS. If you use this transaction, you must enter the client and the archiving
object.
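The aggregation performed in step 7 above is a simple sum over completed runs, which a short sketch can illustrate. The record layout and field names here ("db_space_deleted_mb", "status") are invented for illustration; they are not the real SARA statistics fields.

```python
# Hedged sketch of step 7 above: total the space freed across archiving
# runs, counting only runs with status "complete" (step 5). Field names
# are made up for illustration, not the real SARA statistics fields.
runs = [
    {"object": "FI_DOCUMNT", "status": "complete", "db_space_deleted_mb": 850},
    {"object": "FI_DOCUMNT", "status": "complete", "db_space_deleted_mb": 430},
    {"object": "FI_DOCUMNT", "status": "incomplete", "db_space_deleted_mb": 120},
]

def freed_space(runs):
    # only completed runs count toward the "DB space deleted" total
    return sum(r["db_space_deleted_mb"] for r in runs if r["status"] == "complete")

total = freed_space(runs)
```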
5. After an archiving session you notice that, because of a selection error, the wrong data has been archived. Is it
possible to reload the archived data without inconsistencies? (more than one option is correct)
a) No, it is not possible to reload the data
b) If the archiving object has a reload function
c) It has to be reloaded immediately to avoid inconsistencies
d) Only Customizing data can be reloaded
Answer: b & c
From a technical point of view, archived data can be reloaded into the database. However, because this data is
essentially historical data that has not been adjusted to changes in the database and its contents, reloading it
risks causing inconsistencies in the system. For this reason, SAP strongly recommends that you do not reload archived data.
The only exception is where the data is reloaded directly after the archiving process. This function is only to be used
when absolutely necessary, for example if you archived data incorrectly (because the Customizing settings were wrong).
The reload function is not available for all archiving objects.
Data archiving requires close cooperation between the user and IT departments. Archiving master data is essentially the
same as deleting it from the system, so you cannot reload archived master data at any chosen time. You can only do this
if the same objects have not been recreated in the system, and only if you archived the data incorrectly.
There is only one situation in which data should be reloaded into the database: if, immediately after archiving data, you
realize that, as the result of a selection error, too much data was archived, you can reload this data back into the
database. Reloading historical data into the live system's database always carries certain risks. You should ask the
following questions:
1. Can the data be completely reconstructed?
2. Does the data still comply with the current Customizing?
3. Has the data in the database been changed in the intervening period (for example, as a result of a currency conversion)?
4. Has there been a release upgrade since the data was archived?
In addition to these questions, there is the more general question of how practicable archiving is. Usually,
data is archived to make space for new data in the database. The greater the growth of data, the more frequently
archiving has to be carried out.
The more frequently data is archived, the more likely it becomes that access is required to archived data. It is also more
likely that there will be sufficient space in the database to accommodate the required data.
The reload procedure (here for the archiving object FI_BANKS) is as follows:
1. Use transaction code SARA to open the Archive Administration: Initial Screen window.
2. Choose FI_BANKS as the object name.
3. Press the Enter key.
4. Click Goto > Reload on the main menu of the SAP GUI.
5. Click the Continue icon in the Information window to confirm the message.
6. In the Variant field, specify or select a variant.
7. Press the Maintain button.
10. Select Detail Log.
11. Click the Save icon.
12. Click the Back icon.
13. Click the Archive Selection button. A list of reloadable archive runs is displayed.
14. Mark the run to be reloaded.
15. Click the Continue icon at the bottom of the window.
16. Click the Start Date button.
17. Click the Immediate button.
18. Click the Save icon at the bottom of the window.
21. Press F8 or click the Execute icon (clock symbol with green check mark)
22. Press Shift+F4 or click the Job Overview icon. You can see that a reload job (REL) has been started for this archiving run.
6. What is the transaction used to import SAP-provided Support Packages into your system?
The SAP Patch Manager (SPAM) is the customer side of the Online Correction Support (OCS). Transaction SPAM lets
you efficiently and easily import SAP-provided Support Packages into your system.
You can call transaction SPAM in one of the following two ways:
- Go to the SAP menu and choose Tools → ABAP Workbench → Utilities → Maintenance.
- Enter transaction SPAM.
The SAP Patch Manager offers you the following functions:
- Loading Support Packages: You can load the Support Packages you need from the SAPNet Web Frontend, the SAPNet R/3
Frontend, or from Collection CDs into your system.
- Importing Support Packages.
- Restart capability: When you import a Support Package into your system, SPAM follows a predefined sequence of steps.
If the import process terminates, it can be resumed at a later point in time; processing restarts at the step at which
it terminated.
- Displaying the import status: You can check the import status in your system at any time using transaction SPAM.
Transaction SPAM is integrated into the SAP upgrade procedure.
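The restart capability described above boils down to persisting how far a fixed step sequence has progressed, so a failed run can resume where it stopped. The sketch below illustrates that pattern only; the step names are invented and are not SPAM's real phases.

```python
# Illustrative sketch of restart capability: a fixed step sequence with the
# index of the next step persisted, so a failed import resumes at the step
# that terminated. Step names are invented, not SPAM's real phases.
STEPS = ["check", "disassemble", "import", "generate", "confirm"]

def run_import(state, execute):
    """state: dict holding 'done' (index of next step); execute(step) may raise."""
    while state["done"] < len(STEPS):
        step = STEPS[state["done"]]
        execute(step)              # raises on failure -> progress is kept
        state["done"] += 1         # advance only after the step succeeds
    return "imported"

state = {"done": 0}
log = []

def flaky(step):
    log.append(step)
    if step == "import" and log.count("import") == 1:
        raise RuntimeError("terminated")   # simulate a failed first attempt

try:
    run_import(state, flaky)
except RuntimeError:
    pass
result = run_import(state, flaky)  # resumes at the "import" step, not from scratch
```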
You need the following authorizations to use all SAP Patch Manager functions; you can find both of them in the
authorization profile S_A.SYSTEM. If you log on in client 000 and your user master record contains the corresponding
authorization profile, you can use all the functions of the SAP Patch Manager. If you log on in another client, or
without the correct user profile, you can only use the display functions.
Only assign this authorization profile to the system administrator. Only the system administrator should have
authorization for:
- downloading Support Packages
- confirming successfully imported Support Packages
- resetting the status of a Support Package
7. You are assigned to run an ABAP program with more than one selection screen as a background job. How do
you supply input to the selection screens and run the program as a background job?
a) SPA/GPA Parameter
b) ABAP Memory
d) Function Modules
Every ABAP program can be scheduled as a step of a job. If the ABAP program has one or more selection screens, you
must create the input required there in advance in the form of a variant. A variant makes it possible to run an ABAP
program in the background even though the program requires input. The values stored in the variant are then used
during the execution of the program.
If an ABAP program produces screen output as its result, this output is directed to a spool list. You can specify an
email recipient for this spool list when defining the job; the recipient then receives the job output by email after
the job has been executed. You must also specify a printer for the creation of spool lists even though, with background
processing, there is not necessarily any direct output to a printer (this depends on the printer's access method);
printing may have to be started explicitly later.
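The variant mechanism above can be sketched in a few lines: a variant is a named, stored set of selection-screen values that lets a report run with no interactive input. All names here are illustrative, not an SAP API.

```python
# Minimal sketch of the variant idea (illustrative names, not an SAP API):
# a named, stored set of selection-screen values supplies the input that a
# background run cannot collect interactively.
variants = {}

def save_variant(name, values):
    variants[name] = dict(values)

def run_in_background(report, variant_name):
    values = variants[variant_name]   # input comes from the stored variant,
    return report(**values)           # not from a user-facing screen

def stock_report(plant, year):       # stands in for an ABAP report
    return f"stock list for plant {plant}, year {year}"

save_variant("MONTH_END", {"plant": "1000", "year": 2024})
spool = run_in_background(stock_report, "MONTH_END")
```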
Screen variants allow you to simplify screen editing by:
• Inserting default values in fields
• Hiding fields and changing their ready-for-input status
• Hiding table control columns and changing their attributes
A screen variant contains field values and attributes for exactly one screen. A screen variant may, however, be assigned
to multiple transaction variants. Screen variants are always cross-client; they may, however, be assigned to a client-
specific transaction. They can also be called at runtime by a program. The different ways of calling screen variants
guarantee great flexibility of use.
A specific namespace has been designated for screen variants, and they are automatically attached to the Change and
Transport System.
8. What is the scheduler that checks the job scheduling table in the database for jobs that are waiting for processing and
transfers them to free background work processes in accordance with their priority?
a) Back ground job scheduler
b) Time dependent job scheduler
c) Dialog scheduler
d) Standard job scheduler
Answer: b
The profile parameter rdisp/btctime specifies the interval at which the time-dependent job scheduler runs.
Executing a job with the start condition "immediate" usually bypasses the time-dependent job scheduler; in this case,
the dialog work process of the scheduling user performs the job scheduling.
Only if no free resources are found is the job scheduled in a time-based way. The scheduled time then
corresponds to the point at which it should have started.
Background work processes can be configured on every instance of the SAP system using the profile parameter
rdisp/wp_no_btc. The number of background processes required in the SAP system depends on the number of tasks to
be performed in the background.
If the transport system is used, there must be at least two background work processes in the system. The combination of
the job ID and the job name uniquely identifies a job in the system. On every SAP instance on which background work
processes are defined, the time-dependent job scheduler runs every rdisp/btctime seconds (default value: 60 seconds);
the scheduler is an ABAP program that runs automatically in a dialog work process.
The time-dependent job scheduler checks the job scheduling table in the database for jobs that are waiting for
processing. These jobs are transferred to free background work processes in the SAP instance, in accordance with their
priority and execution target.
1. Jobs that are not assigned any particular execution target can be executed by any free background work
process. This means that the workload is automatically distributed between the SAP instances.
2. If a job is explicitly assigned an execution target (such as a selected instance or a group of instances), the
special properties of the execution target can be used (for example, you can ensure that a job is performed on a
particular operating system, or that the job is executed by a background work process that is running on the same host
as the database).
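The scheduling pass described above can be sketched as a single scan over the job table. This is a toy model: the data shapes are invented, and "lower number = more urgent" is an assumption made here purely for illustration.

```python
# Toy sketch of the scheduling pass: scan waiting jobs, most urgent first
# (here: lower 'priority' number, an assumption for illustration), and hand
# each to a free background work process, honoring an optional execution
# target. Data shapes are invented, not SAP structures.
def schedule(job_table, free_wps):
    """job_table: list of dicts with 'name', 'priority', optional 'target';
    free_wps: dict instance -> number of free background work processes."""
    assigned = []
    for job in sorted(job_table, key=lambda j: j["priority"]):
        # a targeted job may only run on its target; others may run anywhere
        candidates = [job["target"]] if job.get("target") else list(free_wps)
        for inst in candidates:
            if free_wps.get(inst, 0) > 0:
                free_wps[inst] -= 1
                assigned.append((job["name"], inst))
                break
    return assigned

jobs = [
    {"name": "PAYROLL", "priority": 1, "target": "app2"},
    {"name": "CLEANUP", "priority": 3},
    {"name": "BILLING", "priority": 2},
]
result = schedule(jobs, {"app1": 1, "app2": 1})
```

CLEANUP stays in the table for the next pass because both instances are busy, which mirrors the "only if no free resources are found" behavior described above.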
9. What is the memory area in the Java VM where objects that have been required for a longer period of time by an
application are stored?
a) Permanent generation
b) Young generation
c) Tenured generation
d) Persistent generation
Answer: c
The three main memory areas of the VM, the "young", "tenured", and "permanent" generations, differ from one another
in the data stored in them. Objects that have been newly created by the applications are stored in the young
generation. Objects that have been required for a longer period of time by an application are automatically moved to the
tenured generation: the newer objects are in the "young generation" and the older objects are in the "tenured
generation". Objects that are permanently required by the VM, such as classes and methods, are stored in the permanent
generation. Objects that are no longer required by the applications are automatically removed from their generation.
This process is known as garbage collection. For the "young generation", you can define the initial size with the
parameter -XX:NewSize and the maximum size with the parameter -XX:MaxNewSize; for the "permanent generation", the
corresponding parameters are -XX:PermSize and -XX:MaxPermSize. You cannot directly define the initial and maximum
sizes of the "tenured generation". These are calculated from the parameters for the "young generation" and the
parameters -Xmx and -Xms. The parameter -Xmx sets the maximum heap size and defines the total size of the "young" and
"tenured" generations; the parameter -Xms sets the start (initial) heap size and defines the total initial size of the
"young" and "tenured" generations.
In addition to the memory area for the "generations", the VM also reserves space for its processes and threads.
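The promotion behavior described above (young objects surviving long enough get moved to the tenured generation, unreachable objects get collected) can be illustrated with a conceptual sketch. This is not the JVM's actual algorithm; the promotion threshold is an invented value.

```python
# Conceptual sketch, not the JVM itself: objects start in the young
# generation and are promoted to the tenured generation once they survive
# enough collections; unreachable objects are removed (garbage collected).
PROMOTE_AFTER = 2  # illustrative threshold, not a real JVM default

def collect(young, tenured, live):
    """young/tenured: dicts name -> collections survived; live: names still
    reachable. Returns the updated (young, tenured) dicts."""
    new_young = {}
    for name, age in young.items():
        if name not in live:
            continue                      # unreachable: collected
        if age + 1 >= PROMOTE_AFTER:
            tenured[name] = age + 1       # long-lived: promoted to tenured
        else:
            new_young[name] = age + 1     # survives, stays young for now
    tenured = {n: a for n, a in tenured.items() if n in live}
    return new_young, tenured

young, tenured = {"a": 0, "b": 1, "c": 0}, {"old": 5}
young, tenured = collect(young, tenured, live={"a", "b", "old"})
```

After one collection, "c" is gone (unreachable), "b" has survived long enough to be promoted, and "a" remains young.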
10. What is the repository that provides versioned source code administration in the context of the SAP NetWeaver
Development Infrastructure, thereby allowing the distributed development of software in teams as well as the transport
and replication of sources?
a) Design Time Repository
b) Version Management
c) Application Repository
d) Database Repository
Answer: a
The Design Time Repository (DTR) provides versioned source code administration in the context of the SAP NetWeaver
Development Infrastructure, thereby allowing the distributed development of software in teams as well as the transport
and replication of sources.
At the start of development, you must make the repository aware of the intended change and create a change list
(activity) to record the changes. The files are then checked out of the DTR and changed locally ("offline", as it were).
After the changes have been made successfully, the sources are checked back into the DTR. Through this check-in
mechanism, the changes to the components take effect when the activities are released.
The DTR consists of two parts, the DTR client and the DTR server. The main activities of the individual developers, such
as checking files in and out and creating sources, are performed in the SAP NetWeaver Developer Studio; the communication
with the DTR server is, in turn, performed automatically. The DTR server manages the data versioning.
The resources are accessed in the context of a workspace, and versioning is administered in the context of activities. Put
another way: a workspace refers to a set of resources, each in exactly one version. This also means a resource can be
referenced in multiple workspaces.
If a versioned resource is changed or deleted, a new version is created for this resource. Each version of the resource
created in a specific workspace receives a unique sequence number. The sequence number specifies the order in which
the versions were created in this workspace. The DTR displays the relationship between individual versions of a
versioned resource graphically as a version graph.
(Figure not reproduced: structure of the DTR.)
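The versioning model described above (activities grouping changes, and each new version getting a unique sequence number per workspace) can be illustrated with a toy sketch. This is an illustration of the idea only, not the DTR API.

```python
# Toy sketch of the DTR model described above (not the DTR API): changes are
# grouped into activities (change lists), and every new version of a
# resource receives a unique, ordered sequence number per workspace.
class Workspace:
    def __init__(self):
        self.head = {}       # resource name -> current version number
        self.history = []    # (activity, resource, version) in creation order

    def check_in(self, activity, changed_files):
        for name in changed_files:
            version = self.head.get(name, 0) + 1   # next sequence number
            self.head[name] = version
            self.history.append((activity, name, version))

ws = Workspace()
ws.check_in("act-1", ["Calculator.java"])
ws.check_in("act-2", ["Calculator.java", "README.txt"])
```

The history list plays the role of the version graph: it records, per workspace, the order in which each resource's versions were created.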
11. What is the thread pool that is responsible for system activities such as backup and background optimization for
loaded and held data?
a) Application thread pool
b) System thread pool
c) Cluster Manager
d) HTTP requests pool
Answer: b
The System Thread Pool exists both for the dispatcher and for the server. The Thread Manager transfers values to the
System Thread Pool. The System Thread Pool is responsible for system activities such as backup and background
optimization for loaded and held data. You do not usually need to adjust the server threads, since the load duration is
very short. By default, the value 100 is set.
The Application Thread Manager supplies the threads in which application source code is executed. When an HTTP request
reaches SAP NetWeaver AS Java, it is passed on to an application thread. The application thread pool therefore handles
the requests that arrive in the system.
The Cluster Manager is responsible for the communication within the SAP NetWeaver AS Java cluster. It is used to
exchange messages between the cluster elements. Every cluster node has a connection to the message server, over which
short messages can be exchanged. Special services require server-to-server communication, which is always
performed via the message server for small quantities of data. If the quantity of data is larger, a direct connection is
used instead. The "lazy" threshold parameter determines which type of communication is used.
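The routing rule at the end of the paragraph above is a simple size comparison, sketched below. The threshold value is invented for illustration; it is not the parameter's real default.

```python
# Sketch of the routing rule described above: small payloads between cluster
# nodes go via the message server; payloads above the "lazy" threshold use a
# direct node-to-node connection. The threshold value is illustrative only.
LAZY_THRESHOLD = 1024  # bytes, made-up value for the example

def choose_channel(payload_size):
    return "direct" if payload_size > LAZY_THRESHOLD else "message server"

small = choose_channel(200)      # short message -> via the message server
large = choose_channel(50_000)   # bulk data -> direct connection
```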
12. How can you obtain information related to individual (user) activities that run distributed across multiple components?
a) Single Activity Trace
b) Single User Trace
c) Individual User Trace
d) JRM User Trace
Answer: a
The Single Activity Trace (SAT) is used to trace individual (user) activities, which are running distributed across multiple
components. If a performance problem occurs, use SAT to start more detailed analysis in a component. The Single
Activity Trace is based on data that is provided by Java Application Response Measurement (JARM).
This means in practice that all SAP Java applications that are instrumented with JARM can write an SAT.
A separate Single Activity Trace is written on each component. The traces are combined using passports: each request
receives a passport, which is transferred to all of the components involved.
All user actions that are processed within a called application are recorded. If, for example, you create users in the User
Management Engine (UME), performance data is recorded for the logon process for the UME, for the call of the "Create
User" application, and for the saving of the details.
The SAT data is automatically written to the trace file for every request and component using the SAP Logging API. You
can display this trace file using the Log Viewer.
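The passport mechanism above is essentially correlation-ID propagation, which a short sketch can illustrate. This is a toy model, not the JARM or SAT API: one request ID travels with the call through each component, each component writes its own trace record, and the records are later joined on the passport.

```python
# Toy model of passport propagation (not the JARM/SAT API): one request ID
# accompanies the call through every component; each component writes its
# own trace record, and the records are joined on the passport afterwards.
import uuid

traces = []  # one shared log standing in for the per-component trace files

def traced(component, passport, action):
    traces.append({"passport": passport, "component": component, "action": action})

def handle_request():
    passport = str(uuid.uuid4())          # issued once, at the entry point
    traced("UME", passport, "logon")      # mirrors the UME example above
    traced("UME", passport, "create user")
    traced("DB", passport, "save details")
    return passport

p = handle_request()
combined = [t for t in traces if t["passport"] == p]  # joined on the passport
```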
More questions? Have a look at:
SAP Certified Technology Associate - System Administration with SAP NetWeaver 7.0: Questions, Answers & Explanations