
100+ SAP BODS Interview Questions & Answers

Last updated on 07th Dec 2022, Blog, Interview Question

About author

Sanjay (Sr Big Data DevOps Engineer)

Highly experienced in his industry domain, with 7+ years of experience. He has also been a technical blog writer for the past 4 years, delivering informative content for job seekers.


1. How does the statement 'single point of integration' suit Data Integrator?

Ans:

DI combines batch data movement and management with caching to provide a single data integration platform for information management from any information source, for any information use.

2. State and explain the key functions of Data Integrator.

Ans:

Loading data: Loads ERP or enterprise application data into an operational datastore and updates it in batch mode.

Routing requests: Creates information requests to the DW or ERP system using complex rules.

Applying transactions: DI can apply data changes in a variety of data formats and any custom format.

3. What are the various Data Integrator components?

Ans:

  • Designer
  • Repository
  • Local repository
  • Central repository
  • Service
  • Component
  • Metadata Reporting

4. Explain the process of running a job from the Designer.

Ans:

The Designer tells the Job Server to run the job. The Job Server then retrieves the job from the associated repository and starts an engine to process it.

5. Explain the Job Server and engine.

Ans:

When jobs are executed, DI starts a data movement engine that integrates data from heterogeneous sources, performs complex data transformations, and manages extraction and transactions in the ETL process.

6. State the various functions of the Administrator.

Ans:

  • Scheduling, monitoring, and executing batch jobs.
  • Managing users.
  • Configuring, starting, and stopping real-time services.
  • Configuring and managing adapters.

7. What are the various analyses available with Data Integrator and BO Enterprise?

Ans:

Datastore analysis: Use reports to see which BI reports use data from tables contained in Business Views, Crystal Reports, Universes, etc.

Dependency analysis: Search for specific objects in a repository and understand whether they impact, or are impacted by, other DI or BO universes or reports.

Universe analysis: View universe class and object lineage.

Business View analysis: View the data sources for Business Views in the CMS.

Report analysis: View the data sources for reports in the CMS.

8. State and explain the various management tools in Data Integrator.

Ans:

Repository Manager: Allows creating, upgrading, and checking the versions of local and central repositories.

Server Manager: Allows adding, deleting, or editing the properties of a Job Server.

9. Name a few common DI objects.

Ans:

Projects, jobs, work flows, data flows, scripts, and transforms.

10. Distinguish between single-use objects and reusable objects.

Ans:

Single-use objects: Appear only as components of other objects, cannot be copied, and operate only in the context in which they are created.

Reusable objects: Have a single definition; all calls to the object refer to that definition, and if the definition is changed in one place, the change is reflected everywhere the object is called.

11. How can the behavior of objects be changed?

Ans:

Options: Control the object.

Properties: Describe the object.

Classes: Objects fall into two classes: single-use and reusable.

12. State the relationship between a work flow and a data flow.

Ans:

A work flow is an incorporation of several data flows into a coherent flow of work for an entire job. A data flow is the process by which source data is transformed into target data.

13. State the common characteristics of a project.

Ans:

  • They are listed in the local object library.
  • Only one project can be open at a time.
  • They cannot be shared among multiple users.

14. State and explain the various phases of the DI development process.

Ans:

Design: Define objects and build diagrams that instruct DI in your data movement requirements.

Test: Use DI to test the execution of the application. You can test for errors and trace the flow of execution.

Production: Set up a schedule in DI to run the application as a job. You can return to the design phase at any time.

15. What is the use of BusinessObjects Data Services?

Ans:

BusinessObjects Data Services provides a graphical interface that allows you to easily create jobs that extract data from heterogeneous sources, transform that data to meet the business requirements of your organization, and load the data into a single location.

16. List the Data Services components.

Ans:

  • Designer
  • Repository
  • Job Server
  • Engines
  • Access Server
  • Adapters
  • Real-time Services
  • Address Server
  • Cleansing Packages, Dictionaries, and Directories
  • Management Console

17. What are the steps included in the data integration process?

Ans:

  • Stage data in an operational datastore, data warehouse, or data mart.
  • Update staged data in batch or real-time modes.
  • Create a single environment for developing, testing, and deploying the entire data integration platform.
  • Manage a single metadata repository to capture the relationships between different extraction and access methods and provide integrated lineage and impact analysis.

18. Define the terms job, work flow, and data flow.

Ans:

A job is the smallest unit of work that can be scheduled independently for execution.

A work flow defines the decision-making process for executing data flows.

Data flows extract, transform, and load data. Everything having to do with data, including reading sources, transforming data, and loading targets, occurs inside a data flow.

19. How many types of datastores are there in Data Services?

Ans:

Database datastores: Provide a simple way to import metadata directly from an RDBMS.

Application datastores: Let users easily import metadata from most Enterprise Resource Planning (ERP) systems.

Adapter datastores: Can provide access to an application's data and metadata, or to the metadata alone.

20. What are memory datastores?

Ans:

Data Services also allows you to create a database datastore using Memory as the database type. Memory datastores are designed to enhance the processing performance of data flows executing in real-time jobs.

21. What are file formats?

Ans:

A file format is a set of properties describing the structure of a flat file (ASCII). File formats describe the metadata structure. File format objects can describe files in:

Delimited format: Characters such as commas or tabs separate each field.

Fixed width format: The column width is specified by the user.

SAP ERP and R/3 format.

22. What is a repository? List the types of repositories.

Ans:

A repository is a set of tables that holds user-created and predefined system objects, source and target metadata, and transformation rules.

  • A local repository.
  • A central repository.
  • A profiler repository.

23. What is the difference between a repository and a datastore?

Ans:

A repository is a set of tables that holds system objects, source and target metadata, and transformation rules. A datastore is an actual connection to the database that holds the data.

24. What is the difference between a parameter and a variable?

Ans:

A parameter is an expression that passes a piece of information to a work flow, data flow, or custom function when it is called in a job. A variable is a symbolic placeholder for values.

25. When would you use a global variable instead of a local variable?

Ans:

  • When a variable will need to be used multiple times within a job.
  • When reducing the development time required for passing values between job components.
  • When creating a dependency between the job-level global variable name and job components (see the sketch below).
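
For illustration, here is a minimal job-initialization sketch in the Data Services scripting language. The variable names are hypothetical, and the globals would first be declared at the job level (e.g. as datetime and varchar) in the Variables and Parameters window:

    # Set job-level globals once, in a script at the start of the job
    $G_LOAD_DATE = sysdate();
    $G_RUN_LABEL = job_name() || '_' || to_char($G_LOAD_DATE, 'YYYYMMDD');
    print('Starting run [$G_RUN_LABEL] for load date [$G_LOAD_DATE]');

Because the variables are global, every work flow and data flow in the job can read them at run time without any parameter passing.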

26. List some reasons why a job might fail to execute.

Ans:

Incorrect syntax, the Job Server not running, or the port numbers for the Designer and the Job Server not matching.

27. List the factors to consider when determining whether to run work flows or data flows serially or in parallel.

Ans:

  • Whether or not the flows are independent of each other.
  • Whether or not the server can handle the processing requirements of the flows running at the same time (in parallel).

28. What are adapters?

Ans:

Adapters are additional Java-based programs that can be installed on the Job Server to provide connectivity to other systems, such as Salesforce.com or a Java messaging queue. There is also a Software Development Kit (SDK) that allows customers to create adapters for custom applications.

29. List the Data Integrator transforms.

Ans:

  • Data_Transfer
  • Date_Generation
  • Effective_Date
  • Hierarchy_Flattening
  • History_Preserving
  • Key_Generation
  • Map_CDC_Operation
  • Pivot
  • Reverse_Pivot
  • Table_Comparison
  • XML_Pipeline

30. List the Data Quality transforms.

Ans:

  • Global_Address_Cleanse
  • Data_Cleanse
  • Match
  • Associate
  • Country_ID
  • USA_Regulatory_Address_Cleanse

31. What are Cleansing Packages?

Ans:

These are packages that enhance the ability of Data Cleanse to accurately process various forms of global data by including language-specific reference data and parsing rules.

32. What is Data Cleanse?

Ans:

The Data Cleanse transform identifies and isolates specific parts of mixed data, and standardizes the data based on information stored in the parsing dictionary, business rules defined in the rule file, and expressions defined in the pattern file.

33. What is the difference between a dictionary and a directory?

Ans:

Directories provide information on addresses from postal authorities. Dictionary files are used to identify, parse, and standardize data such as names, titles, and firm data.

34. Give some examples of how data can be enhanced through the Data Cleanse transform, and describe the benefit of those enhancements.

Ans:

  • Gender codes (enhancement): Determine gender distributions and target marketing campaigns (benefit).
  • Match standards (enhancement): Provide fields for improving matching results (benefit).

35. A project requires parsing names into given and family, validating address information, and finding duplicates across several systems. Name the transforms needed and the task each will perform.

Ans:

Data Cleanse: Parses names into given and family.

Address Cleanse: Validates address information.

Match: Finds duplicates.

36. Describe when to use the USA Regulatory and Global Address Cleanse transforms.

Ans:

Use the USA Regulatory Address Cleanse transform if USPS certification and/or additional options such as DPV and Geocode are required. The Global Address Cleanse transform should be used when processing multi-country data.

37. What strategies can you use to avoid duplicate rows of data when re-loading a job?

Ans:

  • Using the auto-correct load option on the target table.
  • Including a Table_Comparison transform in the data flow.
  • Designing the data flow to completely replace the target table during each execution.
  • Including a preload SQL statement to execute before the table loads.

38. What is the use of auto-correct load?

Ans:

It prevents duplicate data from entering the target table. It works like SCD Type 1: rows are inserted or updated based on non-matching and matching data, respectively.
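
Conceptually, auto-correct load is an upsert. The following is a rough sketch only: the DS_TGT datastore and the table and column names are made up, and in a real job auto-correct load is simply a target-table option rather than hand-written SQL. Expressed through the Data Services sql() script function, the equivalent logic would be:

    # Rough equivalent of auto-correct load: update matching rows,
    # insert non-matching ones (all names below are illustrative)
    sql('DS_TGT',
        'MERGE INTO CUSTOMER_DIM t USING STAGE_CUSTOMER s ON (t.CUST_ID = s.CUST_ID) '
     || 'WHEN MATCHED THEN UPDATE SET t.CUST_NAME = s.CUST_NAME '
     || 'WHEN NOT MATCHED THEN INSERT (CUST_ID, CUST_NAME) VALUES (s.CUST_ID, s.CUST_NAME)');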

39. What is the use of array fetch size?

Ans:

Array fetch size indicates the number of rows retrieved in a single request to a source database. The default value is 1000. Higher numbers reduce the number of requests, lowering network traffic and possibly improving performance. The maximum value is 5000.

40. What is the difference between row-by-row select, cached comparison table, and sorted input in the Table_Comparison transform?

Ans:

Row-by-row select: Looks up the target table using SQL every time it receives an input row. This option is best if the target table is large.

Cached comparison table: Loads the comparison table into memory. This option is best when the table fits into memory and you are comparing the entire target table.

Sorted input: Reads the comparison table in the order of the primary key column(s) using a sequential read. This option improves performance because Data Integrator reads the comparison table only once. Add a query between the source and the Table_Comparison transform; then, from the query's input schema, drag the primary key columns into the Order By box of the query.

41. What is the use of the Number of Loaders option on a target table?

Ans:

Loading with one loader is known as single-loader loading; loading with more than one loader is known as parallel loading. The default number of loaders is 1, and the maximum is 5.

42. What is the difference between lookup(), lookup_ext(), and lookup_seq()?

Ans:

lookup(): Returns a single value based on a single condition.

lookup_ext(): Returns multiple values based on one or more conditions.

lookup_seq(): Returns multiple values based on a sequence number.
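
As a minimal sketch of how a lookup_ext() call might look in a Query transform mapping (the datastore, table, and column names here are all hypothetical):

    # Return CUST_NAME from DS_SRC.DBO.CUSTOMER for the matching CUST_ID,
    # caching the lookup table up front and returning 'none' on no match
    lookup_ext(
        [DS_SRC.DBO.CUSTOMER, 'PRE_LOAD_CACHE', 'MAX'],
        [CUST_NAME],
        ['none'],
        [CUST_ID, '=', QUERY_IN.CUST_ID])

The third argument list supplies the default values used when no match is found, and the 'MAX' policy picks the greatest value if several rows satisfy the condition.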

43. What is the use of the History_Preserving transform?

Ans:

The History_Preserving transform allows you to produce a new row in the target rather than updating an existing row. You can indicate the columns in which the transform identifies changes to be preserved. If the value of certain columns changes, this transform creates a new row for each row flagged as UPDATE in the input data set.

44. What is the use of the Map_Operation transform?

Ans:

The Map_Operation transform allows you to change the operation codes on data sets to produce the desired output. The operation codes are: INSERT, UPDATE, DELETE, NORMAL, and DISCARD.

45. What is hierarchy flattening?

Ans:

  • Constructs a complete hierarchy from parent/child relationships, and then produces a description of the hierarchy in a vertically or horizontally flattened format.
  • Parent column, child column.
  • Parent attributes, child attributes.

46. What is the use of the Case transform?

Ans:

Use the Case transform to simplify branch logic in data flows by consolidating case or decision-making logic into one transform. The transform allows you to split a data set into smaller sets based on logical branches.

47. How do you improve BODS performance?

Ans:

BODS recommends setting the Rows per commit value between 500 and 2000. The default value for regular loading is 1000. The best value for Rows per commit depends on the number of columns in the target table.

48. How do you compare two tables in BODS?

Ans:

Row-by-row select: Looks up the target table using SQL every time it receives an input row.

Cached comparison table: Loads the comparison table into memory.

Sorted input: Reads the comparison table in the order of the primary key columns using a sequential read.

49. What are reusable objects in SAP BODS?

Ans:

You can reuse and replicate most objects defined in the software. After you define and save a reusable object, SAP Data Services stores the definition in the repository. You can then reuse the definition as often as necessary by creating calls to it.

50. How do you join three tables in SAP BODS?

Ans:

Right-click the 'From' option; 'Input Schema', 'Join Pairs', and 'From Clause' appear under Schema Remapping. Select the tables you want to join in the input schema of the 'From' option, then move to 'Join Pairs' below the input schema and select the left table as the requirement dictates.

51. How do you add a datastore in BODS?

Ans:

Enter the user name and password, then click the Advanced tab and enter the system number and client number. Click OK and the datastore is added to the Local Object Library list. If you expand the datastore, there are no tables to display yet.

52. What is staging in BODS?

Ans:

Staging means taking data from the database as a source, applying various transformations, and materializing the result as a temporary table. This temporary table is stored in the staging area.

53. What is a repository in BODS?

Ans:

A repository is used to store the metadata of objects used in BO Data Services. Every repository should be registered in the Central Management Console (CMC) and is linked with one or more Job Servers, which are responsible for executing the jobs that are created.

54. How do you replicate a data flow in BODS?

Ans:

Click New Job in the same project to create a new job, reuse the EDF_SOURCE_OUTPUT object, and give it two template tables, one for pass and the other for fail. If you are using BODS 4.0, you can copy and paste the columns, as the tool automatically maps the copied columns.

55. How do you import metadata into a BODS datastore?

Ans:

  • Open the Datastores tab in the object library and right-click the name of the applicable SAP BW target datastore.
  • Select Import By Name.
  • In the Import By Name dialog box, specify the Source System and DataSource names.
  • Click OK.

56. Can a single user own multiple schemas?

Ans:

Yes, a single user can own multiple schemas. Every user has a default schema. Objects created in a schema are owned by the schema owner by default, not by the user who created the object.

57. What is ETL in BODS?

Ans:

ETL stands for Extract-Transform-Load; it is the process by which data is loaded from source systems into a data warehouse.

58. What is auto-correct load in SAP BODS?

Ans:

Auto-correct load is used to avoid loading duplicate data into a target table in SAP BODS. Basically, auto-correct load is used to implement SCD Type 1 when you are not using the Table_Comparison feature of SAP BODS.

59. How many VMs can a (VMware) datastore hold?

Ans:

VMware currently supports a maximum of 2,048 powered-on virtual machines per VMFS datastore. However, in most circumstances and environments, a target of 15 to 25 virtual machines per datastore is a conservative recommendation.

60. What are the advantages of a Bod Pod?

Ans:

Advantages: a high level of accuracy, ease of use, and a fast test time. Compared to underwater weighing, the Bod Pod does not require getting wet, and it is well suited for special populations such as children and obese, elderly, or disabled persons.

61. What is a slowly changing dimension?

Ans:

SCDs are dimensions that contain data that changes over time.

62. Is a file format in Data Services a type of datastore?

Ans:

No, a file format is not a datastore type.

63. What is a real-time job?

Ans:

Real-time jobs 'extract' data from the body of the real-time message received and from any secondary sources used in the job.

64. What is an embedded data flow?

Ans:

An embedded data flow is a data flow that is called from inside another data flow.

65. What is the use of Compact Repository?

Ans:

It removes redundant and obsolete objects from the repository tables.

66. Which is NOT a datastore type?

Ans:

A file format.

67. What is the use of the Query transform?

Ans:

  • Filtering data from the sources.
  • Joining data from multiple sources.
  • Performing functions and transformations on data.
  • Mapping columns from input to output schemas.
  • Assigning primary keys.
  • Adding new columns, schemas, and function results to the output schema.

As the Query transform is the most commonly used transform, a shortcut for it is provided in the tool palette.

68. What is the Text Data Processing transform?

Ans:

  • It allows you to extract specific information from a large volume of text. You can search for facts and entities such as customer, product, and financial facts specific to an organization.
  • This transform also checks the relationships between entities and allows their extraction.
  • The data extracted using text data processing can be used in business intelligence, reporting, query, and analytics.

69. What is the difference between text data processing and data cleansing?

Ans:

Text data processing is used for finding relevant information in unstructured text data, whereas data cleansing is used for standardizing and cleansing structured data.

70. What is a real-time job in Data Services?

Ans:

  • You can create real-time jobs to process real-time messages in the Data Services Designer. Like a batch job, a real-time job extracts data, transforms it, and loads it.
  • Each real-time job can extract data from a single message, and it can also extract data from other sources such as tables or files.

71. Explain the difference between real-time and batch jobs in Data Services.

Ans:

Transforms such as branches and control logic are used more often in real-time jobs than in batch-job design. Unlike batch jobs, real-time jobs are not executed in response to a schedule or an internal trigger.

72. What is an embedded data flow?

Ans:

An embedded data flow is a data flow that is called from another data flow in the design. An embedded data flow can contain multiple sources and targets, but only one input or output passes data to the main data flow.

73. What are the different types of embedded data flows?

Ans:

One input: The embedded data flow is added at the end of a data flow.

One output: The embedded data flow is added at the beginning of a data flow.

No input or output: Replicates an existing data flow.

74. What are local and global variables in a Data Services job?

Ans:

Local variables in Data Services are restricted to the object in which they are created.

Global variables are restricted to the job in which they are created. Using global variables, you can change the values of default global variables at run time.

75. How are variables different from parameters in a Data Services job?

Ans:

  • Expressions that are used in a work flow or data flow are called parameters.
  • All variables and parameters in work flows and data flows are shown in the Variables and Parameters window.

76. What are the recovery mechanisms that can be used for failed jobs?

Ans:

Automatic recovery: This allows you to run unsuccessful jobs in recovery mode.

Manual recovery: This allows you to rerun jobs without considering the partial rerun from the previous time.

77. What is the use of data profiling?

Ans:

The Data Services Designer provides a data profiling feature to ensure and improve the quality and structure of source data.

78. What can you examine using data profiling?

Ans:

  • Anomalies in the source data, for validation, corrective action, and assessing source data quality.
  • The structure and relationships of source data, for better execution of jobs, work flows, and data flows.
  • The content of the source and target systems, to determine that a job returns the expected results.

79. Explain the different performance optimization factors in BODS.

Ans:

The performance of an ETL job depends on the system on which the Data Services software is running, the number of moves, etc. Various other factors contribute to the performance of an ETL task:

  • Source database
  • Source operating system
  • Target database
  • Target operating system
  • Network
  • Job Server operating system
  • BODS repository database

80. What do you understand by multi-user development in BODS? How do you manage multi-user development?

Ans:

SAP BO Data Services supports multi-user development, where each user can work on an application in their own local repository. The team uses a central repository to save the main copy of the application and all versions of the objects in the application.

81. Suppose you have updated the version of the Data Services software. Is it required to update the repository version?

Ans:

If you update the version of SAP Data Services, you also need to update the version of the repository. The following points should be considered when migrating a central repository to an upgraded version:

Point 1: Take a backup of all central repository tables and objects.

Point 2: To maintain versions of objects in Data Services, maintain a central repository for each version. Create a new central history with the new version of the Data Services software and copy all objects into this repository.

Point 3: If you install a new version of Data Services, it is always recommended to upgrade the central repository to the new version of objects.

Point 4: Also upgrade the local repository to the same version, as different versions of the central and local repository may not work together.

Point 5: Before migrating the central repository, check in all objects.

82. How do you manage slowly changing dimensions? What fields are required for managing the different types of SCD?

Ans:

  • SCD Type 1: No history preservation; a natural consequence of normalization.
  • SCD Type 2: Preserves all history by generating new rows. New rows are generated for significant changes, a unique key is needed, new fields are generated to store history data, and an Effective_Date field has to be managed (see the sketch below).
  • SCD Type 3: Limited history preservation; only two states of the data are preserved, the current and the old.
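
As an illustrative sketch only (the datastore, dimension, and column names are hypothetical), a Type 2 dimension is typically queried for its 'current' rows through the effective-date fields described above. In Data Services scripting syntax:

    # Count the current rows of a Type 2 dimension, i.e. those whose
    # validity interval is still open (all names are illustrative)
    $G_CURRENT_ROWS = sql('DS_TGT',
        'SELECT COUNT(*) FROM CUSTOMER_DIM WHERE EFFECTIVE_TO = \'9999-12-31\'');
    print('CUSTOMER_DIM has [$G_CURRENT_ROWS] current rows');

In Data Services itself, the Table_Comparison, History_Preserving, Key_Generation, and Effective_Date transforms maintain these columns for you.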

83. If you want to find immediate answers to business questions, which tool would you go with?

Ans:

I would go with BusinessObjects Explorer in this case. It acts like the BusinessObjects search engine: you can search for anything through the keyword search box in Explorer, and all the information spaces where that specific keyword exists appear in the results.

84. In BO 3.1 we had the 'Dashboard Builder'. What is the nomenclature for the same in BO BI 4.0?

Ans:

It is called 'BI Workspaces' in the BO BI 4.0 platform.

85. Do you have any idea about Live Office?

Ans:

There are many users who are comfortable using only Microsoft Office products, and they need business reports inside Microsoft Office products such as Microsoft Excel. For those users, we have the SAP BO Live Office tool.

86. There are a lot of tools available under the BusinessObjects platform. Will a company have to purchase the entire suite, or does SAP provide a way for organizations to buy only the tools that are needed?

Ans:

No, it is not necessary for a company to buy all the tools that come under the SAP BO platform. SAP provides customization packages under which a company can buy only the tools it needs.

87. What can be the various data sources for BO reports?

Ans:

The various data sources are SAP BW, OLAP, application databases, customer databases, text files, XML files, and web services.

88. What is the difference between UDT and IDT?

Ans:

Universes designed in UDT are UNV universes, whereas universes designed in IDT are UNX universes. UDT does not have multi-source universes enabled, whereas IDT has this option. IDT is enhanced and better organized compared to UDT.

89. Can Crystal Reports be built on top of UDT?

Ans:

No. Tools like Crystal Reports for Enterprise, Dashboards, and BO Explorer do not support the UNV universe (designed with UDT); they support only the UNX universe (designed with IDT). Web Intelligence is the only tool that supports both universes, UNV and UNX.

90. What are the major benefits of reporting with BW over R/3?

Ans:

Business Warehouse uses data warehouse and OLAP concepts for storing and analyzing data, while R/3 was intended for transaction processing. You can get the same analysis out of R/3, but it is simpler from BW.

91. Mention the two types of services that are used to deal with communication.

Ans:

Message service: This service is used by application servers to exchange short internal messages.

Gateway service: This service allows communication between R/3 and external applications using the CPI-C protocol.

92. What are reason codes used in Accounts Receivable?

Ans:

'Reason codes' are tags that can be assigned to explain under/overpayments during the allocation of incoming customer payments. They should not be mixed up with 'void reason codes', which are used when outgoing cheques are produced.

93. What protocol does the SAP Gateway process use?

Ans:

The SAP Gateway process uses the TCP/IP protocol to communicate with clients.

94. What are pooled tables?

Ans:

Pooled tables are used to store control data. Several pooled tables can be combined to form a table pool. The table pool is a physical table on the database in which all the records of the allocated pooled tables are stored.

95. Explain what an update type is with reference to a matchcode ID.

Ans:

If data in one of the base tables of a matchcode ID changes, the matchcode data has to be updated. The update type stipulates when the matchcode is to be updated and how it is to be done. The update type also specifies which method is to be used for building matchcodes.

96. What are .sca files, and what is their importance?

Ans:

.sca stands for SAP Component Archive. It is used to deploy Java components, patches, and other Java developments in the form of .sca, .sda, .war, and .jar files.

97. What is meant by 'Business Content' in SAP?

Ans:

Business Content in SAP is a pre-configured and pre-defined model of the information contained in the SAP warehouse, which can be used directly or with the desired modifications in various industries.

98. What is a dispatcher?

Ans:

The dispatcher is the component that takes requests from client systems and stores them in a queue.

99. What is a transform?

Ans:

A transform enables you to control how data sets change in a data flow.

100. What is a script?

Ans:

A script is a single-use object that is used to call functions and assign values in a work flow.
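
For example, a short work-flow script might look like this (a sketch with a made-up datastore and table name):

    # Capture a row count and report it; abort the load if the table is empty
    $G_ROWS = sql('DS_TGT', 'SELECT COUNT(*) FROM SALES_FACT');
    if ($G_ROWS = 0)
    begin
        raise_exception('SALES_FACT is empty, aborting the load');
    end
    print('SALES_FACT row count: [$G_ROWS]');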
