Tuesday, March 12, 2019

Flat File Accounts Loader Reconciliation

Flat file account loading, or user loading, seems like a very basic feature of OIM, and something every admin should have a grasp on. Unfortunately, that often isn't the case.

Though mostly used for 'Disconnected' app instances, flat file account loading can also be used for any connected application, in case we're unable to do the recon with the OOTB recon schedulers.

The point here is that we're only writing the account footprint into the OIM database, so things like request data validators, adapters, or pre-populate plug-ins do not come into play.

The key components of this set-up are:

1. Reconciliation profile, created through the Design Console from the resource object:


  • Search for / create the 'Resource Object' in the Design Console, under the 'Resource Management' option.
  • Expand 'Reconciliation Fields', and make sure to add all the attributes of the object form, with their types.
  • Add child forms as 'Multi-Valued Attributes'. Add each attribute of the child form under it.
  • Mark as 'Required' only the attribute(s) which must be present in the object form.
  • Click the 'Create Reconciliation Profile' button in the upper right-hand corner --> This will create the RA_PROCESS_FORM_NAME table in the database.





2. Data mapping in the process definition:


  • Search for / create the 'Process Definition' in the Design Console, under the 'Process Management' option.
  • Expand the 'Reconciliation Field Mappings' section; click on 'Add Field Map' to add a new attribute mapping, and on 'Add Table Map' to add a new child form mapping.
  • The above mapping is between the Resource Object and the Process Form.
  • Mark as 'Key' the fields which are unique for a user and also present in the user profile in OIM.


3. Create the Reconciliation Rule:


  • Search for / create the 'Reconciliation Rules' entry in the Design Console, under the 'Development Tools' option.
  • Click on 'Add Rule Element', and map an attribute from 'User Profile Data' to 'Attribute'.
  • Make sure you map only the attribute(s) which you've marked as 'Key' in step 2.
  • We may use multiple rule elements based on multiple key attributes, but all of them can be combined only with either AND or OR.


4. Look-ups set-up:


  • Open / create 'Lookup.XYZ.UP.FF.Recon' --> This lookup will hold the mapping between the resource object fields and the 'FieldNames' in the schema file (Step 6), or the headers in the input CSV file (Step 5). A sample set of entries is sketched after this list.

  • Open 'Lookup.FlatFile.UM.Configuration' --> Put the lookup name (e.g. Lookup.XYZ.UP.FF.Recon) in the Decode column against the Code 'Recon Attribute Map'.

  • Find the lookup 'Lookup.FlatFile.Configuration'. In this lookup, define:

  • fieldDelimiter --> the character that separates one field from another
  • subFieldDelimiter --> the character that separates subfields (e.g. the Start Date, Role Code and Role Name subfields under one ROLES field)
  • multiValueDelimiter --> the character that separates multiple values of one field in one account
  • textQualifier --> the character that marks a string as one text value for a field; it comes in use when there's a space in the value
  • Put the user configuration lookup name (Lookup.FlatFile.UM.Configuration) in the Decode column against the Code 'User Configuration Lookup'.
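
A minimal sketch of what the 'Lookup.XYZ.UP.FF.Recon' entries could look like, assuming the four headers from the CSV example in step 5 below; the Code values are hypothetical reconciliation field names, not taken from a real profile, and multi-valued/child attributes have their own mapping entries as described in the connector documentation:

Code (reconciliation field)    Decode (FieldName / CSV header)
User Id                        header1
First Name                     header2
Last Name                      header3
Status                         header4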



5. Set up the input CSV file - 

In the format below (using the delimiters defined in 'Lookup.FlatFile.Configuration' above):
Copy and paste the input file directory name into the scheduler, and make sure there is only one CSV file in that directory.

header1,header2,header3,header4
"ABC","AB BC CA","0","XY";"YZ";"ZX"

6. Set up the schema file, generic_schema_APPNAME.properties -

The schema file is used for reading the input CSV file; this is where you define the structure of the input file.
We'll refer to the sample schema file from Oracle so that we can get a grasp on every aspect of it, even though not all of it may be useful to us.


#List of fields (Mandatory)
#FieldNames=UID,UserId,FirstName,LastName,email,Currency,Salary,status,JoiningDate,LastUpdated,Groups,Roles
FieldNames=header1,header2,header3,header4

#Unique ID Attribute (Mandatory) (This is the attribute that has the AccountId=true property in the form designer)
UidAttribute=UID

#Account Name attribute (Mandatory) (This is the attribute that has the AccountName=true property in the form designer)
NameAttribute=UserId

#Multivalued attributes
Groups.Multivalued=true
Roles.Multivalued=true

#Subfields for complex child form
Roles.Subfields=RoleName,Start_Date,End_Date

#Complex child form objectClass
Roles.EmbeddedObjectClass=MyROLES

#Datatypes (Default:String) (Optional)
Roles.Start_Date.DataType=Long
Roles.End_Date.DataType=Long
FirstName.DataType=String
JoiningDate.DataType=Long

#Incremental reconciliation attribute with datatype set to Long
LastUpdated.DataType=Long

#Parent and child form mandatory fields (Optional)
Roles.RoleName.Required=true

#Date format
SystemDateFormat=ddmmyy

#Account Status Attribute and Mapping
StatusAttribute=status
status.True=Enabled
status.False=Disabled

7. Configure the IT resource -

Find the IT resource that you're going to use for this recon. It can be the general 'Flat File Accounts' IT resource, or the specific IT resource for our application.
Change the value of the attribute 'Configuration Lookup' to 'Lookup.FlatFile.Configuration'.
Change the value of the attribute 'schemaFilep' to the full path of the schema file created in step 6 (e.g. /u01/oracle/admin/shared/schemaFiles/generic_schema_APPNAME.properties).
(In case of a cluster set-up, make sure your schema file is in a shared location, or that you're running the job from the managed server where the schema file is kept.)

8. Configure the Scheduler "Flat File Accounts Loader" - 




Run the scheduler. 

9. Do a blank search in 'Event Management'.

Check whether the events are getting created with the key fields from step 2; if not, check the logs.
For each event generated, under RECONCILIATION DATA, we can see the 'Attribute Name', 'Attribute Value', and 'OIM Mapped Field' columns.
Under 'Matched User', we can check which user is matched with the account.
Under the Roles section, we'll see the multi-valued attributes.



Thanks for Reading !


NOTES : 

If a delimiter containing more than one character (for example, $#) is specified in the flat file and in the main configuration lookup definition, then the following error is encountered:

Only single character delimiters are supported with the exception of "tab" and "space".



Configuring Fault Handling

Record level errors while parsing the file are logged in a separate file and will be saved in a directory named "failed" that the connector creates, within the flat file directory. The processed flat file will be saved in the following format:

FILENAME_dd-MM-yyyy_HH-mm-ss.EXT

In this format, FILENAME is the name of the flat file being archived. dd-MM-yyyy_HH-mm-ss is the date and time at which the connector started processing the file. EXT is the extension of the file.

For example, the filename will be saved in the following format:

acmeusers_29-08-2013_22-44-12.csv

Friday, March 8, 2019

Loggers and log_handler in OIM

We may often want to use our own loggers in our code, for ease of troubleshooting, instead of the normal OUT or diagnostic logs.

Oracle Diagnostic Logging (ODL) is the principal logging service used by Oracle Identity Manager. For ODL logging to work, both loggers and log handlers need to be configured. Loggers send messages to handlers, and handlers accept messages and output them to log files.

ODL recognizes five message types: INCIDENT_ERROR, ERROR, WARNING, NOTIFICATION, and TRACE. Each message type can also take a numeric value between 1 (highest severity) and 32 (lowest severity) that we can use to further restrict message output.
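
For reference, these message types (with their numeric levels) correspond to the Java logging level names that appear later in logging.xml; this is the standard ODL mapping, so double-check it against your version's documentation:

ERROR:1         -> SEVERE
WARNING:1       -> WARNING
NOTIFICATION:1  -> INFO
NOTIFICATION:16 -> CONFIG
TRACE:1         -> FINE
TRACE:16        -> FINER
TRACE:32        -> FINEST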

The configurations are stored in the logging.xml file, which we can find in the location :  DOMAIN_NAME/config/fmwconfig/servers/SERVER_NAME/

The logging.xml file has a <log_handlers> configuration section, followed by a <loggers> configuration section. Each log handler is defined within the <log_handlers> section, and each logger is defined within the <loggers> section.

When configuring a logger to write messages to either the console or a file, make configuration changes to both the logger and the handler. Setting the level attribute for the logger configures the amount of detail (and therefore, the volume of messages) that the logger sends to the handler. 
Similarly, setting the level attribute for the handler configures the amount of detail that the handler accepts from the logger.

An example of a log_handler will be :

This defines the path and file the logs will be written to, as well as the encoding, maximum sizes, format, etc.

<log_handler name='EXAMPLE-HANDLER-ONE' class='oracle.core.ojdl.logging.ODLHandlerFactory' level='FINEST'>
  <property name='logreader:' value='off'/>
  <property name='path' value='${domain.home}/servers/${weblogic.Name}/logs/EXAMPLE-HANDLER-ONE.log'/>
  <property name='format' value='ODL-Text'/>
  <property name='useThreadName' value='true'/>
  <property name='locale' value='en'/>
  <property name='maxFileSize' value='104857600'/>   
  <property name='maxLogSize' value='1048576000'/>
  <property name='encoding' value='UTF-8'/>
</log_handler>

An example of a logger will be :

<logging_configuration>
<loggers>
  <logger name='example.logger.one' level='FINEST' useParentHandlers='false'>
   <handler name='EXAMPLE-HANDLER-ONE'/>
   <handler name='EXAMPLE-HANDLER-TWO'/>
   <handler name="console-handler"/>
   <!--Additional logger elements defined here....-->
  </logger>
</loggers>
</logging_configuration>
  
NOTE  : 

  • A logger can inherit a parent logger's settings, including the parent's level setting and other attributes, as well as the parent logger's handlers. 
  • At the top of the logger inheritance tree is the root logger. The root logger is the logger with an empty name attribute.



Now, we're free to log our own messages in our code in the way shown below -

import java.util.logging.Logger;

private static final Logger LOGGER = Logger.getLogger("example.logger.one");
// java.util.logging has no debug(); FINE/FINER/FINEST map to the ODL TRACE levels
LOGGER.fine("WRITE_YOUR_OWN_MESSAGE");
LOGGER.info("WRITE_YOUR_OWN_MESSAGE");


We can also change the level of an existing logger from the EM console -
1. Log into the EM console
2. Navigate to “Identity and Access” -> OIM -> oim(11.1.2.0)

3. Right-click and navigate to Logs -> Log configuration.

Change the level of the specific logger from the drop-down -












Thanks for reading !

Thursday, March 7, 2019

Plug-in auto-registration in OIM

We usually register plug-ins in OIM through the plug-in registration utility; however, OIM also gives us the option of switching on auto-registration simply by keeping the plug-in in a specific folder or folders.

To configure it  - 

1. Search for the file 'oim-config.xml'; it should be in the location 'OIM_HOME/server/server/metadata/db/' (or you can take a metadata export from the EM console)
2. Add / modify the below tags:

<pluginConfig storeType="common">
<storeConfig reloadingEnabled="true"
reloadingInterval="20">
 <!--
 Plugins present in the OIM_HOME/server/plugins directory are added by default.
 For adding more plugins, specify the plugin directory as below:
 <registeredDirs>/scratch/oimplugins</registeredDirs>
 <registeredDirs>/scratch/custom</registeredDirs>
 -->
</storeConfig>
</pluginConfig>

3. Take a server restart cause it's OIM we're talking about ! ;) 

Thanks for reading !

RequestDataValidator Plug-in in OIM

The third article on the plug-in points will be based on Request data validators.
The relevant plugin point for my favorite plug-in will be  : oracle.iam.request.plugins.RequestDataValidator

Often, when submitting a request in APS, there can be a number of rules and conditions that the user has to abide by. These rules can be process related, they can very well be defined in the target's DB layer, or they can be validations in the exposed APIs.

OIM, being middleware, is not aware of all these rules by default, so it's important that we enforce these rules explicitly, and the validation executes BEFORE the user's request is submitted.

This is where the request data validator comes into the picture. This validator plug-in needs to be registered against each object form using the plug-in registration utility. The plugin.xml will look something like this -
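
A minimal sketch of such a plugin.xml is below; the class name 'com.example.validator.MyAppRequestDataValidator' and the form name 'MyAppForm' are placeholders for your own validator class and object form (the form name goes into the 'DataValidator' metadata value):

<?xml version="1.0" encoding="UTF-8"?>
<oimplugins xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <plugins pluginpoint="oracle.iam.request.plugins.RequestDataValidator">
    <plugin pluginclass="com.example.validator.MyAppRequestDataValidator"
            version="1.0" name="MyAppRequestDataValidator">
      <metadata name="DataValidator">
        <value>MyAppForm</value>
      </metadata>
    </plugin>
  </plugins>
</oimplugins>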




For the coding part of it, the basic approach will be -

1. Creating a class which implements RequestDataValidator
2. Overriding the method "validate(RequestData reqData)", which throws an InvalidRequestDataException
3. Finding out from the RequestData who the Beneficiaries are, and what the BeneficiaryEntities are



4. Hereafter, we can have two blocks to check the conditions, for both operations, provisioning and modifying (a minimal skeleton covering all four steps is sketched below) -



Now, the METHOD_NAME_WHICH_VALIDATES_THE_CONDITION(benEntity, benUser) functionality is limited only to our imagination ;) and the business requirement in question.
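
Pulling steps 1 to 4 together, a minimal skeleton could look like the below (a sketch only - the package and class names are the hypothetical ones from the plugin.xml above, and the real business checks go into the placeholder method at the bottom):

package com.example.validator;

import java.util.List;

import oracle.iam.request.exception.InvalidRequestDataException;
import oracle.iam.request.plugins.RequestDataValidator;
import oracle.iam.request.vo.Beneficiary;
import oracle.iam.request.vo.RequestBeneficiaryEntity;
import oracle.iam.request.vo.RequestBeneficiaryEntityAttribute;
import oracle.iam.request.vo.RequestData;

public class MyAppRequestDataValidator implements RequestDataValidator {

    @Override
    public void validate(RequestData requestData) throws InvalidRequestDataException {
        // Step 3 : find out who the beneficiaries are and what their beneficiary entities are
        List<Beneficiary> beneficiaries = requestData.getBeneficiaries();
        if (beneficiaries == null) {
            return;
        }
        for (Beneficiary benUser : beneficiaries) {
            List<RequestBeneficiaryEntity> targetEntities = benUser.getTargetEntities();
            if (targetEntities == null) {
                continue;
            }
            for (RequestBeneficiaryEntity benEntity : targetEntities) {
                // Step 4 : the checks can differ for the provisioning and the modifying operations
                validateTheCondition(benEntity, benUser);
            }
        }
    }

    // Placeholder for METHOD_NAME_WHICH_VALIDATES_THE_CONDITION(benEntity, benUser)
    private void validateTheCondition(RequestBeneficiaryEntity benEntity, Beneficiary benUser)
            throws InvalidRequestDataException {
        List<RequestBeneficiaryEntityAttribute> attributes = benEntity.getEntityData();
        if (attributes == null) {
            return;
        }
        for (RequestBeneficiaryEntityAttribute attribute : attributes) {
            // Business rules on attribute.getName() / attribute.getValue() go here
        }
    }
}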

A couple of examples can be  -

Method for checking an eMail domain provided by the user in the form :
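
A sketch of such a method (it would sit inside the validator class above; 'Email' is a hypothetical form field label and 'example.com' a placeholder domain - adapt both to your object form and policy):

    private void validateEmailDomain(RequestBeneficiaryEntity benEntity)
            throws InvalidRequestDataException {
        List<RequestBeneficiaryEntityAttribute> attributes = benEntity.getEntityData();
        if (attributes == null) {
            return;
        }
        for (RequestBeneficiaryEntityAttribute attribute : attributes) {
            // 'Email' is assumed to be the label of the form field holding the address
            if ("Email".equalsIgnoreCase(attribute.getName())) {
                String value = String.valueOf(attribute.getValue());
                if (!value.toLowerCase().endsWith("@example.com")) {
                    throw new InvalidRequestDataException(
                            "E-mail must belong to the example.com domain : " + value);
                }
            }
        }
    }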



Method for checking if a user already has an account provisioned for the same application instance :
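
A sketch under assumptions - it relies on the 11gR2 ProvisioningService API (oracle.iam.platform.Platform, oracle.iam.provisioning.api.ProvisioningService and oracle.iam.provisioning.vo.Account would need to be imported in the validator class as well), 'appInstanceName' is whichever application instance the validator is guarding, and the exact method signatures should be verified against your version's Javadoc:

    private void validateNoDuplicateAccount(Beneficiary benUser, String appInstanceName)
            throws InvalidRequestDataException {
        List<Account> accounts;
        try {
            ProvisioningService provService = Platform.getService(ProvisioningService.class);
            // List every account already provisioned to this beneficiary
            accounts = provService.getAccountsProvisionedToUser(benUser.getBeneficiaryKey());
        } catch (Exception e) {
            throw new InvalidRequestDataException(
                    "Unable to verify the existing accounts : " + e.getMessage());
        }
        if (accounts == null) {
            return;
        }
        for (Account account : accounts) {
            if (appInstanceName.equalsIgnoreCase(
                    account.getAppInstance().getApplicationInstanceName())) {
                throw new InvalidRequestDataException(
                        "User already has an account for the application instance " + appInstanceName);
            }
        }
    }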





Thanks for reading !

Tuesday, March 5, 2019

Creating an OIMClient

Often, for development purposes, we'd like to connect to OIM from a local IDE; this approach also comes in handy for unit testing our code.

Below is the code that'll help us create an OIMClient, which we can then refer to from our other classes to connect to OIM.

While deploying the code to the server in JAR format, replace the OIMClient reference with 'Platform'.

Make sure to include oimclient.jar from OIM_HOME/designconsole/lib.
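
A minimal sketch of such a class is below; the t3 URL, the authwl.conf path, and the class name are placeholders for your own environment (the URL normally points to the OIM managed server host and port):

import java.util.Hashtable;

import javax.security.auth.login.LoginException;

import oracle.iam.platform.OIMClient;

public class OIMClientFactory {

    // Placeholders - point these at your own environment
    private static final String OIM_URL = "t3://oimhost.example.com:14000";
    private static final String AUTHWL_CONF = "/path/to/designconsole/config/authwl.conf";

    public static OIMClient getClient(String userName, String password) throws LoginException {
        // The OIM client login uses the WebLogic JAAS configuration shipped with the design console
        System.setProperty("java.security.auth.login.config", AUTHWL_CONF);
        System.setProperty("APPSERVER_TYPE", "wls");

        Hashtable<String, String> env = new Hashtable<String, String>();
        env.put(OIMClient.JAVA_NAMING_FACTORY_INITIAL, "weblogic.jndi.WLInitialContextFactory");
        env.put(OIMClient.JAVA_NAMING_PROVIDER_URL, OIM_URL);

        OIMClient client = new OIMClient(env);
        client.login(userName, password.toCharArray());
        return client;
    }
}

Once the client is obtained, the OIM services can be looked up with client.getService(...), and the session is closed with client.logout().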


Monday, March 4, 2019

MDS, Sandboxes in OIM, and Reverting Sandboxes

Oracle Metadata Services (MDS) is an XML configuration store used by Oracle Identity Manager (OIM), as well as several other Oracle Middleware products such as SOA. Each of these applications gets its own MDS repository.
OIM first adopted MDS with the release of 11gR1. Prior to MDS, many Oracle Middleware products used files on the filesystem as configuration stores, in various formats (XML, Java properties files, etc.). One of the purposes of MDS is to create a standard configuration store across the Middleware stack.

Connector JARs and plug-in JARs in OIM are not stored in MDS; instead, they are stored as BLOBs inside database tables in the OIM database schema. Here too, the possibility of cluster inconsistencies which existed in 9.x (where these JARs lived on the filesystem) is eliminated.

The problem with storing configuration files on the filesystem (e.g. xlconfig.xml in OIM 9.x) is that in a clustered environment there is a risk of configuration inconsistencies between the nodes, which may have inimical effects. In OIM 11g, by storing the configuration in the MDS database schema, this possibility is eliminated.

OIM 11gR2 provides an easy way of customizing the UI through sandbox.
"...a sandbox is a temporary storage area to save a group of runtime page customization before they are either saved and published to other users, or discarded."

All customization and form management are performed in a sandbox. A sandbox allows us to isolate and experiment with customizations without affecting the environment of other users. Any changes made to a sandbox are visible only in the sandbox. We must create and activate a sandbox to begin using the customization and form management features. After customization and form extension are complete, we can publish the sandbox to make the customizations available to other users.


Though the user form, catalog entry, etc. are contained within the sandbox, as well as any modifications we may have made to them, the App Instance, Resource, etc. are not included.
Part of the reason for this division of objects is that sandboxes only store front-end objects and modifications.
The Resource, IT Resource, etc. are all back-end database objects. These are objects we can interact with in the Design Console, or view if we connect directly to the database and run queries.

These database objects can technically exist without the UI changes. We could create a sandbox with a new database object, publish it, and then in a subsequent sandbox delete all of the UI components of the database object, and we won't see any errors, assuming there are no remaining references to the object on any pages. If there are references remaining, we will see ADF Faces errors referencing the missing component.


Hence, moving a solution between different environments consists of two main components -

 1. The sandbox, with all the front-end UI customizations, ADF pages (as well as the create/modify/bulk pages for the object forms created) and PageDefinitions, including the VO and EO objects and their associations.
 2. A "Request Dataset" export from the Deployment Manager of the SYSADMIN console. The reason for taking the export of a request dataset is that it covers every aspect of a form, including the IT resource, IT resource definition, app instance, process forms (parent and child), resource object, process definition, lookups, and the adapters attached to the form.
 (An important point here: the request dataset export does NOT include artifacts that are not associated directly with the form but are necessary nevertheless, such as configuration lookups, schedulers, etc. We need to go back and "Add More" to the export for each of these.)

* This export needs to be imported into the next environment through the Deployment Manager.

For the sandbox part of it, it includes two files which hold entries very specific to an environment. If these files are not edited properly, they may well bring down the entire target environment until fixed.

The files are :


  • sandbox_XYZ\persdef\oracle\iam\ui\catalog\model\am\mdssys\cust\site\site\CatalogAM.xml.xml
  • sandbox_XYZ\xliffBundles\oracle\iam\ui\runtime\BizEditorBundle.xlf


The Catalog.xml file holds the VO (ViewObject) entries of each object form and the user form of an environment.
The BizEditorBundle file holds entries for each attribute of each object form and the user form of an environment.
Since not all the forms are the same across OIM environments, you simply cannot export a sandbox from one OIM environment and import it into another as-is.

We need to select the entry tags from both files which are relevant to us, and append them to the same files of the next environment.

A sample tag from BizEditorBundle will be as below  :
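
(Illustrative shape only - the real id strings are long, environment-generated keys, so always copy the actual trans-unit tags from your own sandbox export rather than from here.)

<trans-unit id="...form.model.MYFORM.entity.MYFORMEO.UD_MYFORM_MYFIELD__c_LABEL">
  <source>My Field Label</source>
  <target/>
</trans-unit>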


A sample tag from Catalog.xml will be as below  :
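
(Again, illustrative shape only - the entries are MDS customization inserts that register the form's view object usage; take the real tags from your own export.)

<mds:insert parent="CatalogAM" position="last">
  <ViewUsage xmlns="http://xmlns.oracle.com/bc4j"
             Name="MyFormVO"
             ViewObjectName="sessiondef.oracle.iam.ui.runtime.form.model.MyForm.view.MyFormVO"/>
</mds:insert>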




Reverting Sandboxes :


It may well happen that, after publishing a sandbox, we find out that the changes we have made are not correct; we can then revert that sandbox and bring OIM back to the same state as before. We cannot perform this action using the OIM GUI. However, we can revert to the state before the sandbox changes were published through the EM console, by following the below steps.




Thanks for Reading !

Friday, March 1, 2019

Schedulers in OIM


The second article on the plug-in points will be based on schedulers.
The relevant plugin point for the same is  : oracle.iam.scheduler.vo.TaskSupport

Oracle Identity Manager provides the capability of creating our own scheduled tasks. We can create scheduled tasks according to our requirements if none of the predefined scheduled tasks fit our needs.

OIM exposes scheduler APIs to perform long-running tasks, which may include heavy data exchange or data processing, and which can be triggered periodically or on demand without any code changes.

One quick point here: we can start or stop the entire scheduler service from the below portal -
http://OIM_HOST:OIM_PORT/SchedulerService-web/status

There are three parts to developing a custom scheduler:

  • Creating / Importing the Scheduled Task definition XML File
  • Developing the Scheduled Task Class
  • Configuring the Plug-in XML File


Creating / Importing the Scheduled Task definition XML File :

Configuring the scheduled task XML file involves updating the XML file that contains the definitions of custom scheduled tasks.

This file name must be the same as the scheduled task name, with the .xml extension. We must import the custom scheduled task file to the /db namespace of Oracle Identity Manager MDS schema.

The XML namespace is very important when we're deploying a custom scheduled task in the MDS schema. If we give the wrong namespace in the scheduledTasks tag, it will still deploy into the OIM MDS schema, but OIM won't recognise it as a scheduled task.

We can import this metadata through the "weblogicImportMetadata.sh" script; make sure our "weblogic.properties" file under the OIM_HOME/bin directory has the "metadata_from_loc" property updated with the location of the metadata to be imported.

The way I like to do it, though, is to take an export of one of the existing tasks (without the job JAR file), change the parameters accordingly, and re-import it through the Deployment Manager -
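
For reference, a task definition written from scratch follows roughly this shape (a sketch only - the task name, class, and parameter labels are placeholders that match the OUD-provisioning example further below):

<scheduledTasks xmlns="http://xmlns.oracle.com/oim/scheduler">
  <task>
    <name>Provision OUD Accounts Task</name>
    <class>com.example.scheduler.ProvisionOUDAccountsTask</class>
    <description>Provisions an OUD account to each user in the given list</description>
    <retry>0</retry>
    <parameters>
      <string-param required="true" helpText="Comma separated AD account UIDs">User Login List</string-param>
      <string-param required="true" helpText="Container DN for the OUD accounts">Container DN</string-param>
    </parameters>
  </task>
</scheduledTasks>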


Configuring the Plug-in XML File : 


Just like any other plug-in in OIM, we need a plug-in XML file to register the scheduler.
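
A minimal sketch, assuming the same hypothetical class name as in the task definition above:

<?xml version="1.0" encoding="UTF-8"?>
<oimplugins>
  <plugins pluginpoint="oracle.iam.scheduler.vo.TaskSupport">
    <plugin pluginclass="com.example.scheduler.ProvisionOUDAccountsTask"
            version="1.0" name="ProvisionOUDAccountsTask"/>
  </plugins>
</oimplugins>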


Developing the Scheduled Task Class : 


We'll read the users' AD account UIDs from an input parameter defined in the scheduler definition metadata, split them by ',', and provision an OUD account to each of them. Point to be noted - we're only passing the Container DN for the OUD account here; the rest of the attributes will be filled in by the pre-populate adapters mapped on the process form.

The basic idea will be to extend the 'TaskSupport' class, and to override the method 'execute' -
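
A minimal skeleton, assuming the hypothetical parameter labels from the task definition sketched earlier; the actual OUD provisioning call is left as a stub, since that part goes through the standard provisioning APIs (ApplicationInstanceService / ProvisioningService) and depends on your application instance set-up. The three methods shown are the ones a TaskSupport subclass typically implements:

import java.util.HashMap;

import oracle.iam.scheduler.vo.TaskSupport;

public class ProvisionOUDAccountsTask extends TaskSupport {

    public void execute(HashMap params) {
        // Read the job parameters defined in the task XML (hypothetical parameter labels)
        String uidList = (String) params.get("User Login List");
        String containerDN = (String) params.get("Container DN");

        if (uidList == null || uidList.trim().isEmpty()) {
            return;
        }

        // Split the comma separated AD account UIDs and provision an OUD account for each
        for (String uid : uidList.split(",")) {
            provisionOUDAccount(uid.trim(), containerDN);
        }
    }

    private void provisionOUDAccount(String uid, String containerDN) {
        // Stub: look up the user and call the provisioning API here, passing only the
        // Container DN - the remaining attributes are filled by the pre-populate adapters.
    }

    public HashMap getAttributes() {
        return null;
    }

    public void setAttributes() {
    }
}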


At the end  -
I'd like to encourage you to go through the below Oracle guides for a deeper understanding -

OIM admin guide  -  https://docs.oracle.com/cd/E37115_01/admin.1112/e27149/scheduler.htm#OMADM738 
OIM developer guide  - https://docs.oracle.com/cd/E37115_01/dev.1112/e27150/refsched.htm#OMDEV231