The flashcards below were created by a user on FreezingBlue Flashcards.
What are the types of repositories created using the Informatica Repository Manager?
Informatica PowerCenter includes the following types of repositories:
- Standalone Repository, which functions individually, outside a domain.
- Global Repository, a centralized repository in a domain, which also contains objects shared across the repositories in the domain.
- Local Repository, a repository that resides within a domain.
- Versioned Repository, which can be either local or global and allows version control.
What is a code page?
A code page contains the encoding that specifies the characters in a set of one or more languages, and it is selected based on the source of the data. The code page setting refers to the specific set of characters the application recognizes, and it influences the way the application stores, receives, and sends character data.
What do you mean by code page compatibility?
When two code pages are compatible, the characters encoded in the two code pages are virtually identical, which ensures no data loss. This compatibility enables accurate data movement when the Informatica Server runs in Unicode data movement mode. One code page can be a subset or superset of another; for proper data movement, the target code page must be a superset of the source code page.
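The subset/superset rule can be illustrated with Python's built-in codecs (a standalone sketch, not Informatica code): Latin-1 plays the role of a smaller code page, and UTF-8 a superset-like target that can represent every character Latin-1 can.

```python
# Sketch of the subset/superset rule using Python codecs (illustrative only).
# UTF-8 can represent every character Latin-1 can, so a UTF-8 "target"
# is a superset of a Latin-1 "source": no data loss in that direction.
source_text = "café"                      # representable in Latin-1
utf8_bytes = source_text.encode("utf-8")  # always succeeds: superset target

# Moving data the other way can lose characters: Latin-1 cannot encode
# characters outside its repertoire, so it is not a superset of the source.
try:
    "日本語".encode("latin-1")
except UnicodeEncodeError:
    print("data loss: target code page is not a superset of the source")
```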
What is a transformation?
A transformation is a repository object that generates, modifies, or passes data. The Designer provides a set of transformations that perform specific functions, and each transformation has rules for how it is configured and connected in a mapping. A transformation can be created for use in a single mapping, or a reusable transformation can be created for use in multiple mappings. For example, the Aggregator transformation performs calculations on groups of data.
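The Aggregator's group-wise calculation can be sketched in plain Python (an illustration of the concept, with made-up column names, not PowerCenter internals):

```python
from collections import defaultdict

# Toy rows of (department, salary) — made-up data to illustrate grouping.
rows = [("sales", 100), ("sales", 200), ("hr", 50)]

# Group-wise SUM, as an Aggregator transformation with GROUP BY department
# and an output port SUM(salary) would compute.
totals = defaultdict(int)
for dept, salary in rows:
    totals[dept] += salary

print(dict(totals))  # {'sales': 300, 'hr': 50}
```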
What are the types of loading in Informatica?
There are two types of loading in Informatica: normal loading and bulk loading. In normal loading, records are loaded one by one and a log is written for each, so it takes longer to load data to the target. In bulk loading, a number of records are loaded to the target database at a time, so it takes less time to load data to the target than normal loading.
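The performance difference can be sketched abstractly in Python (a toy model, not Informatica internals): normal load performs one write plus one log entry per row, while bulk load writes many rows per operation and skips per-row logging.

```python
# Toy model of the two loading styles (illustrative only).

def normal_load(rows, target, log):
    """One write and one log entry per row — slower, but recoverable."""
    for row in rows:
        target.append(row)
        log.append(f"inserted {row}")

def bulk_load(rows, target, batch_size=2):
    """Many rows per operation, no per-row log — fewer, larger writes."""
    for i in range(0, len(rows), batch_size):
        target.extend(rows[i:i + batch_size])

target_normal, target_bulk, log = [], [], []
normal_load([1, 2, 3], target_normal, log)
bulk_load([1, 2, 3], target_bulk)
print(len(log))  # 3 — one log entry per row in normal load; bulk wrote none
```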
Why do we use the lookup transformation? How can we improve session performance in aggregator transformation?
A Lookup transformation is used to check for matching values in a source or target table, and to check whether a record already exists in the table. It is also used when updating slowly changing dimensions.
Session performance of an Aggregator transformation can be improved by enabling the Sorted Input option, since aggregating unsorted input forces the server to build larger aggregate caches; using Incremental Aggregation also improves performance by processing only new data in each run.
What is the difference between static cache and dynamic cache?
With a dynamic cache, when a new row arrives, the server checks the lookup cache to see whether it already exists; if not, it inserts the row into the target as well as the cache. With a static cache, the new row is written only to the target and not to the lookup cache.
A static lookup cache does not change during the session, whereas with a dynamic cache the server inserts and updates rows in the cache during the session.
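The two cache behaviours can be sketched in Python (a minimal illustration, not PowerCenter code): only the dynamic cache learns new keys during the session, so it can catch duplicates that a static cache misses.

```python
# Sketch of static vs. dynamic lookup cache behaviour (illustrative only).

def lookup_and_write(key, row, cache, target, dynamic):
    """Write a row to the target if its key is not in the cache; with a
    dynamic cache, also insert the key so later rows see it."""
    if key not in cache:
        target.append(row)
        if dynamic:
            cache[key] = row  # dynamic cache: keep cache in sync with target

static_cache, dynamic_cache = {}, {}
t_static, t_dynamic = [], []

for key, row in [(1, "a"), (1, "a"), (2, "b")]:
    lookup_and_write(key, row, static_cache, t_static, dynamic=False)
    lookup_and_write(key, row, dynamic_cache, t_dynamic, dynamic=True)

print(t_static)   # ['a', 'a', 'b'] — static cache never learned key 1
print(t_dynamic)  # ['a', 'b']      — dynamic cache caught the duplicate
```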
What is throughput in Informatica? Where can we find this option to check it? How does it work?
Throughput is the rate at which the PowerCenter Server reads rows (in bytes) from the source, or writes rows (in bytes) to the target, per second.
We can find this option in the Workflow Monitor. It works as follows: right-click the session, choose Properties, and open the Transformation Statistics tab, where throughput details appear for each source and target instance.
What is a Source Qualifier? What do you mean by Query Override?
The Source Qualifier represents the rows that the PowerCenter Server reads from a relational or flat file source when it runs a session. When the definition of a relational or flat file source is added to a mapping, it is connected to a Source Qualifier transformation. The default query is a SELECT statement containing all the source columns. The Source Qualifier can override this default query by changing the default settings of the transformation properties.
What is aggregate cache in aggregator transformation?
The Aggregator stores data in the aggregate cache until it completes its aggregate calculations. When we run a session that uses an Aggregator transformation, the Informatica server creates index and data caches in memory to process the transformation. If the Informatica server requires more space, it stores overflow values in cache files.
What are the two types of processes that run the session?
The two types of processes that run the session are the Load Manager and the DTM process. The Load Manager process starts the session, creates the DTM process, and sends a post-session email when the session completes. The DTM process creates threads to initialize the session, read, write, and transform data, and handle pre-session and post-session operations.
What are the differences between the Joiner transformation and the Source Qualifier transformation?
In a Joiner transformation, heterogeneous data sources can be joined, which cannot be achieved with a Source Qualifier transformation. We need matching keys to join two relational sources in a Source Qualifier transformation, whereas this is not needed with a Joiner. Two relational sources must come from the same data source to be joined in a Source Qualifier, while a Joiner can also join relational sources that come from different data sources.
What is Data Driven?
The Informatica server follows instructions coded into Update Strategy transformations within the session mapping, which determine how to flag records for insert, update, delete, or reject. If we do not choose the Data Driven option setting, the Informatica server ignores all Update Strategy transformations in the mapping.
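The flagging logic an Update Strategy expression encodes can be sketched in Python (a hypothetical illustration — the key set and helper are made up, though the DD_* values match Informatica's documented constants):

```python
# Sketch of Update Strategy flagging in Data Driven mode (illustrative).
# The DD_* values mirror Informatica's update-strategy flag constants.
DD_INSERT, DD_UPDATE, DD_DELETE, DD_REJECT = 0, 1, 2, 3

existing_keys = {101, 102}  # keys already present in the target (made up)

def flag_row(key, valid=True):
    """Mimic an expression like IIF(NOT valid, DD_REJECT,
    IIF(key exists in target, DD_UPDATE, DD_INSERT))."""
    if not valid:
        return DD_REJECT
    return DD_UPDATE if key in existing_keys else DD_INSERT

print(flag_row(101))             # 1 — update an existing row
print(flag_row(999))             # 0 — insert a new row
print(flag_row(5, valid=False))  # 3 — reject the row
```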
What are the types of mapping wizards provided in Informatica?
The Designer provides two mapping wizards:
- The Getting Started Wizard creates mappings to load static fact and dimension tables, as well as slowly growing dimension tables.
- The Slowly Changing Dimensions Wizard creates mappings to load slowly changing dimension tables, based on the amount of historical dimension data we want to keep and the method we choose for handling that historical data.
What are the differences between Connected and Unconnected Lookup?
- A connected lookup participates in the data flow and receives input directly from the pipeline; an unconnected lookup receives input values from the result of a :LKP expression in another transformation.
- A connected lookup can use both dynamic and static caches; an unconnected lookup cache cannot be dynamic.
- A connected lookup can return more than one column value (output ports); an unconnected lookup can return only one column value, the return port.
- A connected lookup caches all lookup columns; an unconnected lookup caches only the lookup output ports used in the lookup conditions and the return port.
- A connected lookup supports user-defined default values (values to return when the lookup condition is not satisfied); an unconnected lookup does not support user-defined default values.
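The two call patterns can be sketched in Python (an analogy only, with a made-up lookup table — not Informatica syntax): the unconnected lookup behaves like a function called from an expression that returns a single value, while the connected lookup sits in the pipeline and can add several columns to each row.

```python
# Sketch of connected vs. unconnected lookup call patterns (illustrative).
prices = {"A1": 10.0, "B2": 25.0}  # made-up lookup table

def unconnected_lookup(item_id):
    """Like :LKP.lkp_price(item_id): called from an expression and
    returning a single value (the return port), or a default."""
    return prices.get(item_id, 0.0)

def connected_lookup(row):
    """A connected lookup receives the row from the pipeline and can
    populate several output ports on it."""
    row["price"] = prices.get(row["item_id"], 0.0)
    row["found"] = row["item_id"] in prices
    return row

print(unconnected_lookup("A1"))             # 10.0
print(connected_lookup({"item_id": "Z9"}))  # price 0.0, found False
```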