Saturday, April 18, 2020


FDMEE: System Maintenance Tasks: Maintain FDMEE Data Tables

Hi Friends,

As we know, Oracle Hyperion Financial Data Quality Management (FDMEE) data tables store archives of your exported data for audit and drill-through purposes. These tables grow over time and can end up holding vast amounts of data, consuming a lot of space on your database server as the size of your FDMEE schema increases. A very large FDMEE schema can not only cause performance issues during the data load process but can also make maintaining FDMEE schema backups difficult for disaster-recovery purposes, and you always need at least one healthy backup of the FDMEE schema to deal with issues like application corruption.

The good thing is that you can purge these FDMEE tables as per your requirements. Almost all organizations have a data retention policy defined for their applications. For example, some organizations keep only the past Financial Year (FY) of data in the live tables, in addition to the current FY, and delete the rest; however, that decision should rest entirely with your Business team.

You need to evaluate the impact of purging FDMEE data in terms of audit, drill-back requirements, and so on. However, if your business team is comfortable that they no longer require the data, there should be no adverse effect of removing it other than it no longer being available in FDMEE, because this processed data is also stored in the respective target applications' databases.

Oracle supplies some default purge scripts with FDMEE that you can execute, in line with your retention policies, to clean up runtime artifacts such as staging tables, process tables, and the Inbox/Outbox folders.

These scripts can be found in:

Workspace > Navigate > Administer > Data Management > Workflow tab > Other > Script Execution. Then select 'System Maintenance Tasks' from the 'Custom Script Group' drop-down menu.

[Screenshot]

System Maintenance Tasks group contains the following purge scripts to purge standard FDMEE tables and folder:

    1. Maintain Application Folder
    2. Maintain EBS GL Balances Table
    3. Maintain FDMEE Data Tables
    4. Maintain ODI Session Data
    5. Maintain PeopleSoft Ledger Table
    6. Maintain Process Table
In this first post, we will focus only on the 'Maintain FDMEE Data Tables' task and see how to execute this purge script manually. The other purge scripts of the FDMEE System Maintenance Tasks group are covered in subsequent posts of this blog series, which you can find at the links below:

FDMEE: System Maintenance Tasks: Maintain EBS GL Balances Table

Important Note: 
  • This post has been written and associated activities have been demonstrated on Oracle Hyperion Financial Data Quality Management (FDMEE) version 11.1.2.4.210.
  • The demonstration Hyperion environment uses Oracle Database 12.2.0.2 (18c) as the backend database.
  • There is no outage required on the FDMEE application to execute these purge scripts. But it is advisable not to run any data loads during the purge activity.
  • Don't forget to take complete FDMEE schema backup before attempting to run these purge scripts. 
Maintain FDMEE Data Tables

Maintain FDMEE Data Tables purge script deletes the data from the following tables:
  • TDATAMAPSEG
  • TDATASEG
  • TLOGPROCESS
  • TDATAARCHIVE
  • TDATACHECK
  • TDATAMEMOITEMS
Brief description of these FDMEE tables:

TDATAMAPSEG: When you import data, the mappings used to transform it are first processed in the TDATAMAP_T temporary table and then archived in the TDATAMAPSEG table for each point of view (POV). If you use a large number of maps, this table grows over time and can cause performance issues during the data import process.

TDATASEG: The TDATASEG table stores the actual data loaded by the user, along with the transformation between the source dimension members and the results of the mapping process. After the import and validate stages, the data is moved from the temporary table TDATASEG_T to TDATASEG; this copy happens only when the data rule execution is successful. A very large TDATASEG table can hurt FDMEE performance, because any query made against it during a data load run may take a long time to complete.
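Before purging, it can be useful to see which points of view contribute the most rows to TDATASEG. A minimal sketch, assuming the standard FDMEE columns PARTITIONKEY (location), CATKEY (category), and PERIODKEY (period) exist in your TDATASEG table:

-- Row counts per location/category/period combination (column names assumed from the standard FDMEE schema)
select partitionkey, catkey, periodkey, count(*) as row_count
from tdataseg
group by partitionkey, catkey, periodkey
order by row_count desc;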

TLOGPROCESS: This table is used to store the workflow process status for a location, category, and period.

TDATAARCHIVE: Information for various FDMEE files/documents/data is stored in table TDATAARCHIVE.

TDATACHECK: Results of Validation rules/Data load rules are stored in table TDATACHECK.

TDATAMEMOITEMS: This table stores your Memo items.

When you execute the 'Maintain FDMEE Data Tables' purge script, it prompts for the following input parameters:

  • Target Application
  • Start Period
  • End Period
  • Category
Here, Start Period and End Period specify the date range for which data will be deleted.

Before proceeding with the purging steps, I would suggest maintaining an Excel sheet to track the pre-purge and post-purge record counts of the various tables, as well as the FDMEE schema size, so you can see how much space has been freed up after the purge activity. Something like below:

[Screenshot]

[Screenshot]

Pre-purge and post-purge record counts of these FDMEE tables can be fetched by logging in to the FDMEE schema and running simple SELECT queries:

select count(*) from TDATAMAPSEG;
select count(*) from TDATASEG;
select count(*) from TLOGPROCESS;
select count(*) from TDATAARCHIVE;
select count(*) from TDATACHECK;
select count(*) from TDATAMEMOITEMS;
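To track the schema size itself, you (or your DBA) can sum the sizes of the segments owned by the FDMEE schema. A minimal sketch, assuming your schema is named FDMEE and you have access to DBA_SEGMENTS (USER_SEGMENTS works without the owner filter if you are logged in as the schema owner):

-- Total space consumed by the FDMEE schema, in GB (schema name is an assumption)
select owner, round(sum(bytes)/1024/1024/1024, 2) as size_gb
from dba_segments
where owner = 'FDMEE'
group by owner;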

Step-by-Step process to execute ‘Maintain FDMEE Data Tables’ purge script

1- Log in to Workspace with Admin credentials.

2- Open Oracle Hyperion Financial Data Quality Management (FDMEE) console (Data Management) from Workspace and on the Workflow tab, under Other, select Script Execution:

[Screenshot]

3- In Script Execution, from the Custom Script Group drop-down, select System Maintenance Tasks.

[Screenshot]

4- From the Scripts grid, select the Maintain FDMEE Data Tables script and click Execute.

[Screenshot]

5- It will prompt you to enter the following parameter values on the Execute Script screen:

  • Target Application
  • Start Period
  • End Period
  • Category
[Screenshot]

6- Click on the search icon for Target Application

Select the Target Application name for which you want to purge tables and click OK.

[Screenshot]

7- Target Application will be selected in the parameters tab.

Now click on the search icon for the Start Period.

[Screenshot]

8- Select the Start Period for which you want to purge tables and click OK.

For example, if you want to purge all data from Nov-2014 to Oct-2018, select the Start Period as NOV_2014, as shown below:

[Screenshot]

9- Start Period will be selected in the parameters tab.

Now click on the search icon for the End Period.

[Screenshot]

10- Select the End Period as OCT_2018 as per the data deletion range Nov-2014 to Oct-2018 and click OK.

[Screenshot]

11- End Period will be selected in the parameters tab.

Now click on the search icon for the Category.

[Screenshot]

12- Select the Category (Scenario) for which you want to purge tables and click OK.

[Screenshot]

13- Category will be selected in the parameters tab.

[Screenshot]

14- Keep the Execution Mode set to Online. The online mode processes the script immediately.

Execution modes:

online  — ODI processes the data in sync mode (immediate processing).
offline — ODI processes the data in async mode (runs in background).

[Screenshot]

15- Optionally, if you want to schedule the purge script rather than running it instantly, click the Schedule option and enter the scheduling details as per your requirements.

[Screenshot]

[Screenshot]

16- If you want to run the purge script instantly, click OK directly (instead of the Schedule option).

You will see a message showing the process ID of your script run, as shown below:


[Screenshot]

17- To see the status of your script execution, navigate to the Workflow tab > Monitor > Process Details.

[Screenshot]

18- In the screenshot above, we can see that our script with Process ID 7805 executed successfully.

Click on the Show button for Process ID 7805 to open the corresponding log file.


[Screenshot]

In the log file, look for the message "Maintain FDMEE Data Tables: Completed" to confirm that the script has run successfully without any errors. If you encounter errors, troubleshoot them based on the cause mentioned in the log file. In our case, there were no issues with the script execution.

Similarly, you need to run this purge script for the same application with the other categories. In other words, repeat all of the above steps for every combination of target application and category in which your FDMEE data loads run.

Once you are done running the Maintain FDMEE Data Tables purge script for the different target application and category combinations, check the post-purge record counts of the FDMEE tables to see how many records have been deleted. This will give you a fair idea of the growth rate of these tables, and accordingly you can plan and decide the frequency of running these purge scripts.

Now it's time to check your Oracle Hyperion Financial Data Quality Management (FDMEE) schema size to see how much space has been cleared as a result of purging the tables.

Are you surprised to see that FDMEE schema size is still the same despite deleting millions of records from FDMEE tables? 

Don't worry! You just need to inform your DBA and ask them to do table reorganization in order to reclaim freed space. 

When you delete data from Oracle tables, the freed data blocks go onto the free list for that table. That free space is reused by subsequent insert and update operations on the same table, but it is not released back to the tablespace. You need to perform a table reorganization to reclaim that free space.

Note that table reorganization requires an outage in your Hyperion environment, so you have to stop all your application services before your DBA performs the reorganization activity.
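For illustration only, a simple table reorganization on Oracle is often done with ALTER TABLE ... MOVE; this is just a sketch of the idea, and your DBA may prefer a different method (online redefinition, Data Pump export/import, etc.):

-- Rebuild the purged tables so the freed space is released back to the tablespace
alter table tdataseg move;
alter table tdatamapseg move;
-- Note: moving a table marks its indexes UNUSABLE, which is why the index rebuild discussed below is mandatory.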

Don't forget to ask your DBA team to rebuild the indexes in your Hyperion Financial Data Quality Management (FDMEE) database after performing the table reorganization activity.

If you miss rebuilding the indexes after the table reorganization, your FDMEE data load jobs will take longer than normal to complete.

Rebuilding indexes becomes absolutely necessary when you delete millions of records from the FDMEE tables through these purge scripts, because the table reorganization leaves the corresponding indexes in an UNUSABLE state. The bulk deletion also fragments the indexes, so queries that use them take longer to return results, which in turn delays your FDMEE data load jobs.

You may observe the following error during your FDMEE data load runs if you do not rebuild the database indexes after the table reorganization activity:
ORA-01502: index 'FDMEE_ODI.PK_TLOGPROCESS' or partition of such index is in unusable state.
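To locate and rebuild such indexes, your DBA can run something along these lines; a minimal sketch, assuming the FDMEE schema is named FDMEE_ODI as in the error message above:

-- List the indexes left UNUSABLE after the table reorganization
select owner, index_name, status
from dba_indexes
where owner = 'FDMEE_ODI'
and status = 'UNUSABLE';

-- Rebuild each affected index (repeat for every index returned by the query above)
alter index fdmee_odi.pk_tlogprocess rebuild;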
Once both DBA tasks (table reorganization and index rebuild) are completed, compare the pre- and post-purge size of your FDMEE schema to see how much it has reduced. You might see a significant reduction in FDMEE schema size if you are performing this purging activity for the very first time or for a data deletion range covering many Financial Years.

The Maintain FDMEE Data Tables purge script can also be automated using the Executescript batch file located in the folder:

<drive>\Oracle\Middleware\user_projects\epmsystem1_fdm\FinancialDataQuality.

We will cover the automation of Maintain FDMEE Data Tables purge script in a separate post of this blog series. 

That's all for this post. 

I hope this article has helped you. 
Your suggestions/feedback are most welcome.
Keep learning and Have a great day!!!

Monday, April 13, 2020


DRM: There are currently no applications available for login

Hi Friends,

Most of you would have faced the Oracle Data Relationship Management (DRM) error 'There are currently no applications available for login' while logging in to the DRM Web Client. I am no exception; I too faced one such instance recently.

In this post, we will see what to check in Oracle Data Relationship Management (DRM) when you encounter this error, and how the issue got fixed for me.

Note:
  • This post has been written and associated activities have been demonstrated on Oracle Data Relationship Management (DRM) version 11.1.2.4.344.
  • The demonstration Hyperion environment uses Oracle Database 12.2.0.2 (18c) as the backend database.
One fine morning, one of our DRM users reported that he was not able to log in to the DRM Web Client, encountering the following error message:

Error: There are currently no applications available for login 

[Screenshot]

I immediately opened the DRM Web Client to confirm whether it was working with ADMIN credentials. Like the user, I faced the same error even with the Data Relationship Management (DRM) ADMIN account.

When I checked the DRM Web services status in the service panel, they were running perfectly fine.

[Screenshot]

To cross-check the Data Relationship Management (DRM) Web Services status, I connected to Oracle Data Relationship Management (DRM) Configuration Console (as an Administrator):

[Screenshot]

In the Configuration Console, I noticed that, though the DRM Web services were up and running, the Data Relationship Management (DRM) application was stopped.

[Screenshot]

I thought, why not first validate the database connection here, and did just that.
The database connection was successful.

[Screenshot]

It's time to start the application.

[Screenshot]

But starting the application did not work. I could see that the application was still in the STOPPED state.

Alright, let's search for some errors now.

Open the Windows Event Viewer and Explore Windows Logs-->Application

I noticed a total of 3 DRM error messages recorded in the Event Viewer:

Error-1:
[Screenshot]

Error-2:
[Screenshot]


Error-3:
[Screenshot]

Going through the above DRM error messages, we learn that the DRM application engine could not be started because of:

DRM-63015: Read/Write engine for Application on DRM Server will not start because another Application is currently using the same repository.

Generally, the 'DRM-63015' error message indicates that your DRM schema might be locked because some other DRM application has already connected to it; in other words, when two DRM applications try to connect to the same DRM schema at the same time, we face the 'DRM-63015' error.

This happens when you have two DRM application servers (i.e., two DRM Configuration Consoles) in your Hyperion environment, the secondary DRM application server has been configured to connect to the same DRM schema as the primary one, and you try to start the DRM application on both servers at the same time.

Ideally, you need to ensure that only one DRM console has been configured to connect to any given DRM schema.

In my Hyperion environment, there is only one DRM Server and only one DRM application. So the above scenario is not applicable at least in my case.

Also when I checked, my DRM Schema was not Locked:

[Screenshot]
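If you want to perform a similar check from the database side, a couple of simple queries can help. A minimal sketch, assuming your DRM repository schema is named DRM and that you have access to DBA_USERS and GV$SESSION:

-- Is the DRM repository database account locked? (schema name is an assumption)
select username, account_status from dba_users where username = 'DRM';

-- Which machines and programs are currently connected to the DRM schema?
select inst_id, sid, serial#, machine, program, status
from gv$session
where username = 'DRM';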

So let's proceed further and try to list out possible causes along with troubleshooting steps to tackle this error. You can try these options in the given sequence and any of these may work for you. 

1- In a particular EPM environment, you might have DRM installed on two servers for load balancing, but you need to make sure that only one DRM Configuration Console (DRM installation) is configured to connect to your DRM schema. Sometimes, by mistake, the secondary DRM application server is also configured to connect to the same DRM application schema as the primary one (i.e., connection details for the same DRM schema are set up on two separate DRM servers), and an attempt is made to start the DRM application on both servers. When two different DRM Configuration Consoles in your EPM environment try to connect to the same DRM schema at the same time, you face the 'DRM-63015' error. Therefore, make sure this is not the case in your DRM setup.

2- Make sure your DRM service 'startup type' is set to Manual and not Automatic on all the DRM servers. This will prevent the auto-start of your DRM service during any unplanned server reboot. This is important especially when you have CUP (Common User Provisioning) enabled in your DRM application.

Also, change your DRM service 'log on as' account from Local system to EPM Admin domain account (which was used to install and configure DRM) on all the DRM servers.

3- Do 'iisreset' on Oracle Data Relationship Management (DRM) application server.

As DRM's primary web interface runs through IIS (Microsoft Internet Information Service), there's a chance bouncing IIS will free up whatever rogue session might have been locking things up for you.

To bounce IIS service, open a command prompt on your DRM application server and type "iisreset":


If this workaround works for you, well and good; if not, move on to the next troubleshooting step.

4- 'Apply Updates' in Oracle Data Relationship Management (DRM) Configuration console.

To 'Apply Updates' follow the below steps: 

1- Open Windows service panel and stop DRM Web services. 

2- Check the DRM Configuration console to ensure DRM services are stopped.

[Screenshot]

3- Now go to the DRM Configuration Console and select Application --> Apply Updates.

[Screenshot]

4- Click OK on the below message while Applying updates:

[Screenshot]

5- You will see the below CMD window open. As there were actually no updates to apply, we see the message: No updates were applied.

Press the SPACE key to continue; the CMD window will then close.

[Screenshot]

6- Go to the service panel and start the DRM Web services on all the servers.

7- Check the DRM Web services status in the DRM Configuration Console. Both DRM Web services are up and running fine. Also notice that our DRM application has been successfully started this time.

[Screenshot]

8- When I opened the DRM Web Client, my DRM application was available for login.

[Screenshot]

9- And application login too was working as expected.

[Screenshot]

So for me, the issue got fixed by following the 4th workaround (Apply Updates). :-)

We know that applying updates to an Oracle Data Relationship Management (DRM) application refreshes the database connection, in addition to making changes such as patch upgrades and configuration changes effective in your DRM application.

But exactly what caused this issue is still not known. This workaround fixed it for me in one go, so it can be used as a temporary fix.

Please note that this DRM error, "There are currently no applications available for login", can occur for many reasons, and the above-mentioned workarounds cover only a few of those scenarios.

You need to check the Windows Event Viewer on your DRM server in order to identify the exact root cause of the problem as Event Viewer will most likely have an error message recorded related to your DRM issue. 

That's all for this post.

I hope this article has helped you. 
Your suggestions/feedback are most welcome.
Keep learning and Have a great day!!!

Monday, April 6, 2020


HFM Housekeeping in Oracle Hyperion (EPM) 11.1.2.4: PART-2

Hello Friends!

This is PART-2 of our 'Oracle Hyperion Financial Management (HFM) Housekeeping' blog series. If you haven't read PART-1, I would suggest you first go through that post to have a clear understanding of this PART-2.

In PART-1, we covered stopping Oracle Hyperion Financial Management (HFM) services and processes, killing HFM database sessions, housekeeping of HFM schema tables, and so on.

Now, as part of Oracle Hyperion Financial Management (HFM) Housekeeping, we will look at HFM log archiving, deleting the HFM temp and cache files, and related activities.

NOTE: 
  • This post has been written and associated activities have been demonstrated on Oracle Hyperion Financial Management (HFM) version 11.1.2.4.204.
  • The demonstration Hyperion environment uses Oracle Database 12.2.0.2 (18c) as the backend database.
Oracle Hyperion Financial Management (HFM) logs archiving

To maintain good Hyperion Financial Management (HFM) application performance, it's recommended to archive the HFM application logs on a regular basis; if this is not done on time, the logs keep growing and can consume your server's resources, in addition to degrading application performance.

1- Log in to all your Hyperion Financial Management (HFM) application servers and perform the below steps on each of them.

2- Go to the path E:\apps\Oracle\Middleware\user_projects\epmsystem_hfm\diagnostics\logs\hfm

3- In the above folder, you will find the following log files:
  • xfm.odl.<APPLICATION_NAME>.log
  • oracle-epm-fm-hsx-server.log
  • oracle-epm-fm-bi-publisher.log
  • oracle-epm-fm-hsx-registry.log
  • oracle-epm-fm-lcm-client.log
  • SharedServices_Security.log
[Screenshot]

4- Archive all the LOG files under \hfm folder by zipping them to a separate backup folder. 

5- Now go to the path E:\apps\OracleEPM\Middleware\user_projects\domains\EPMSystem\servers\HFMWeb0\logs

[Screenshot]

6- Archive all the LOG files under \logs folder by zipping them to a separate backup folder. 

7- Go to the path E:\apps\OracleEPM\Middleware\user_projects\domains\EPMSystem\servers\HFMWeb0\logs\hfm

[Screenshot]
8- Archive all the LOG files under \hfm folder by zipping them to a separate backup folder. 

9- Now log in to your HFM Web servers and perform the below steps on each of them.

10- Go to the path E:\apps\OracleEPM\Middleware\EPMSystem11R1\logs\hfm

[Screenshot]

11- Archive all the LOG files under \hfm folder by zipping them to a separate backup folder. 

This completes the Hyperion Financial Management (HFM) logs archiving part. 

Deleting Oracle Hyperion Financial Management (HFM) temp and cache files

Note: The reason for including 'deleting the tmp and cache folders on Hyperion Financial Management (HFM) app servers' here is that it should be done once a quarter, or perhaps once every 6 months (not weekly/monthly like the truncation of the HFM DATA_AUDIT, TASK_AUDIT, and HFM_ERRORLOG tables), or whenever you need to troubleshoot a particular technical issue for which Oracle recommends it. Deleting the tmp and cache folders is mainly required when there are issues with the HFM Web pages/interfaces or after HFM patching. On some occasions, where frequent and intense data loads run from FDMEE to HFM applications, truncating the overgrown HFM audit and data tables along with clearing the tmp and cache folders has worked as a temporary fix for issues such as data not getting loaded into the HFM application even though the FDMEE DLR completes successfully.

The tmp folder on the Hyperion Financial Management (HFM) application managed server stores temporary files related to your HFM applications, and the cache is a set of temporary files used by the HFM application server.

Oracle recommends periodically clearing the tmp and cache directories to help your system run faster and reclaim disk space. You can delete the tmp and cache folders easily without damaging your applications.

1- Login to all your Hyperion Financial Management (HFM) application servers and perform the below tasks.

2- Go to the path E:\apps\OracleEPM\Middleware\user_projects\domains\EPMSystem\servers\HFMWeb0

3- Rename tmp and cache folders as shown below:

[Screenshot]

Note: Make sure you clear these backup folders during each subsequent housekeeping activity.

Rebooting Oracle Hyperion Financial Management (HFM) Application servers

Rebooting the Hyperion Financial Management (HFM) application servers is important in order to clear locking, blocking, hung, and orphaned processes/sessions running at the operating system level. Sometimes these sessions can interrupt the normal functioning of the HFM application.

Login to all your Hyperion Financial Management (HFM) application servers and reboot them. 

Note: It's also recommended to regularly monitor Hyperion Financial Management (HFM) application servers' space and especially memory utilization. 

Starting Oracle Hyperion Financial Management (HFM) application services

Once you complete all of the above tasks, you can go ahead and start HFM services on all Hyperion Financial Management (HFM) application servers.

[Screenshot]

Now log in to Workspace, open your Hyperion Financial Management (HFM) applications, and validate them thoroughly to ensure everything is working fine.

This completes your Hyperion Financial Management (HFM) Housekeeping process.

That's all for this post.

I hope this article has helped you. 
Your suggestions/feedback are most welcome.
Keep learning and Have a great day!!!

Friday, March 27, 2020


HFM Housekeeping in Oracle Hyperion (EPM) 11.1.2.4: PART-1


Hello Friends!

Many applications generate a lot of data transactions, movement, retrieval, refreshes, and updates as a result of day-to-day business activities, and in the process they create many log files, temporary files, audit files, database records, and so on. If not housekept on time, these files can cause severe performance issues in the application.

Oracle Hyperion Financial Management (HFM) is one of those apps where we need to regularly perform housekeeping in order to improve application performance, especially in the Production environment. 

In this blog series, we will explore in detail the various things that need to be regularly housekept in Oracle Hyperion Financial Management (HFM) and how to do it.

NOTE: 
  • This post has been written and associated activities have been demonstrated on Oracle Hyperion Financial Management (HFM) version 11.1.2.4.204.
  • The demonstration Hyperion environment uses Oracle Database 12.2.0.2 (18c) as the backend database.
The complete Hyperion Financial Management (HFM) Housekeeping activity will be covered in two parts.

In this first part, we will cover the prerequisites, stopping Hyperion Financial Management (HFM) services and processes, killing HFM database sessions, HFM schema table housekeeping, and so on. You can find the second part of the HFM Housekeeping blog series at the link below:

HFM Housekeeping in Oracle Hyperion 11.1.2.4: PART-2

Prerequisites:
  • As the Hyperion Financial Management (HFM) Housekeeping activity requires a complete HFM outage and involves some critical tasks in the database, it's recommended to plan this activity over a weekend if you are doing it in Production. Business users should be informed accordingly.
  • As housekeeping of the HFM audit tables is also involved, you need the login credentials of the HFM relational database schema. Needless to say, your HFM schema should have all the required privileges as recommended in the Oracle EPM guide.
  • It's recommended to have a database user with DBA-level privileges in order to check ACTIVE, INACTIVE, and KILLED sessions on your HFM schema. If you cannot have such a user, you can ask your application DBA to perform this activity, which is described later in this post.
  • The paths and folder names mentioned in this blog may vary slightly between Oracle Hyperion setups, but by and large they should be the same.
  • Please make sure you take Hyperion Financial Management (HFM) database schema backup before attempting HFM housekeeping steps mentioned below.
  • Never forget to take complete backup of EVERYTHING before deleting or changing anything. 
Step-by-step process to do Oracle Hyperion Financial Management (HFM) Housekeeping:

PART-A: Stopping HFM services and killing HFM processes

1- Stop the following two HFM services on all your HFM application servers:
  1. Oracle Hyperion Financial Management – Java Server
  2. Oracle Hyperion Financial Management - Web Tier 
[Screenshot]

Note: Open Task Manager and make sure the following two processes are no longer running:
  1. HyS9FinancialManagementJavaServer.exe
  2. HyS9FinancialManagementWeb.exe
[Screenshot]

2- Open Windows Task Manager and kill all XFMDataSource.exe processes running in all your Oracle HFM application servers. For each HFM application, there will be one XFMDataSource.exe process.

[Screenshot]

Note: When an HFM application is started the Java Server starts an application process with the name XFMDataSource.exe. 

PART-B: Killing any running “Active, Inactive, Killed” database sessions under HFM schema

1- Using SQL Developer, log in to your environment's database with a user that has DBA-level privileges (the user should have access to the GV$SESSION view) and run the following query to list any ACTIVE, INACTIVE, or KILLED sessions running under the HFM schema:

select sid, serial#, inst_id, status from gv$session where username = 'HFM' and status in ('INACTIVE','KILLED','ACTIVE');

[Screenshot]

2- Now, using the below command, kill all the ACTIVE, INACTIVE, or KILLED sessions listed above. You can take the help of your DBA if your user does not have sufficient privileges for this task.

ALTER SYSTEM KILL SESSION 'sid,serial#' IMMEDIATE;

Where sid and serial# are session ID and serial number of your Active, Inactive or Killed sessions listed above.

For example:

[Screenshot]

If your Oracle Hyperion environment has a RAC database setup, you can specify the INST_ID in your kill command. This way you will be able to kill the session on the respective RAC node.

ALTER SYSTEM KILL SESSION 'sid,serial#,@inst_id' IMMEDIATE;

For example:

[Screenshot]

Note: The KILL SESSION command doesn't actually kill the session. It only asks the session to kill itself. In some cases, like waiting for a reply from a remote database or rolling back transactions, the session will not kill itself immediately and will wait for the current operation to complete first. In such cases, the session will have a status of "marked for kill". It will then be killed as soon as possible.

Actually, adding the IMMEDIATE clause does not affect the work performed by the command, but it returns control back to the current session immediately, rather than waiting for confirmation of the kill.

If the marked session persists for some time, the underlying process should be killed at the operating system level; ask your DBA to kill that session at the OS level.
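To find the operating system process behind such a stubborn session, the session can be mapped to its server process ID (SPID). A minimal sketch using the standard GV$SESSION / GV$PROCESS join:

-- Map each HFM session to its OS process ID (spid) so it can be killed at the OS level
select s.inst_id, s.sid, s.serial#, p.spid, s.status
from gv$session s
join gv$process p on p.addr = s.paddr and p.inst_id = s.inst_id
where s.username = 'HFM';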

3- Once listed sessions are killed, re-run below query to ensure all sessions are gone:

select sid, serial#, inst_id, status from gv$session where username = 'HFM' and status in ('INACTIVE','KILLED','ACTIVE');

[Screenshot]

PART-C: HFM schema tables housekeeping

Here we will first archive the following three Hyperion Financial Management (HFM) tables and then delete entries from them, keeping only the past few months of data in the live tables:
  1. <appname>_DATA_AUDIT
  2. <appname>_TASK_AUDIT 
  3. HFM_ERRORLOG
The general recommendation for maintaining good application performance is to archive and delete the content of these tables in the HFM repository database before they reach 500,000 records.

If the Data Audit feature is not part of your business requirements, it is recommended to turn off data auditing for Hyperion Financial Management (HFM) applications. Performance degradation has been observed in HFM applications whose Data Audit table holds more than 10 GB of entries.

There is no built-in mechanism in Hyperion Financial Management (HFM) to monitor the size of these tables, so the Hyperion administrator should be tasked with their regular maintenance.
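As a simple way of keeping an eye on these tables, the queries below can be run periodically against the HFM schema. A minimal sketch; the application name HFMAPP1 is illustrative, and the DBA_SEGMENTS query may need to be run by your DBA:

-- Row counts of the audit and error log tables (application name is illustrative)
select count(*) from HFMAPP1_TASK_AUDIT;
select count(*) from HFMAPP1_DATA_AUDIT;
select count(*) from HFM_ERRORLOG;

-- Approximate on-disk size of the same tables, in GB
select segment_name, round(sum(bytes)/1024/1024/1024, 2) as size_gb
from dba_segments
where segment_name in ('HFMAPP1_TASK_AUDIT', 'HFMAPP1_DATA_AUDIT', 'HFM_ERRORLOG')
group by segment_name;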

For the above tables, it is recommended to implement the following housekeeping best practices:
  • Quarterly - Business to review the Audit logs, archive and delete.
  • Half-Yearly - Archive System Messages and truncate table.
1- Archiving Oracle Hyperion Financial Management (HFM) apps Audit tables

To perform this activity, log in to HFM schema using SQL Developer. Suppose you have the following two HFM applications in your Hyperion environment:
  1. HFMAPP1
  2. HFMAPP2
So to archive/backup the audit tables of these two HFM apps, run following queries:

Archive HFMAPP1_task_audit table with today's date (ddmmyyyy):
Create table HFMAPP1_task_audit_21032020 as select * from HFMAPP1_task_audit;

Archive HFMAPP1_data_audit table with today's date (ddmmyyyy):
Create table HFMAPP1_data_audit_21032020 as select * from HFMAPP1_data_audit;    

Archive HFMAPP2_task_audit table with today's date (ddmmyyyy):
  Create table HFMAPP2_task_audit_21032020 as select * from HFMAPP2_task_audit;  

Archive HFMAPP2_data_audit table with today's date (ddmmyyyy):
Create table HFMAPP2_data_audit_21032020 as select * from HFMAPP2_data_audit;  

2- Deleting Oracle Hyperion Financial Management (HFM) audit table entries, keeping the last 90 days of entries (you can decide the number of days of data to retain by checking with your business/team):

Delete all entries from the HFMAPP1_task_audit table except the past 90 days of entries, which remain in the live table:

Run below query to see how many records are going to be deleted:
select count(*)from HFMAPP1_task_audit where starttime < (select max(starttime)-90 from HFMAPP1_task_audit);     

Run below query to delete the records:                
delete from HFMAPP1_task_audit where starttime < (select max(starttime)-90 from HFMAPP1_task_audit);

Run below query again to see whether all the records have been deleted or not:
select count(*)from HFMAPP1_task_audit where starttime < (select max(starttime)-90 from HFMAPP1_task_audit);

Run below command to end your current transaction and make permanent all changes performed in the transaction:
commit;

Delete all entries from the HFMAPP1_data_audit table except the past 90 days of entries, which remain in the live table:

Run below query to see how many records are going to be deleted:
select count(*)from HFMAPP1_data_audit where dtimestamp < (select max(dtimestamp)-90 from HFMAPP1_data_audit);

Run below query to delete the records:
delete from HFMAPP1_data_audit where dtimestamp < (select max(dtimestamp)-90 from HFMAPP1_data_audit);

Run below query again to see whether all the records have been deleted or not:
select count(*)from HFMAPP1_data_audit where dtimestamp < (select max(dtimestamp)-90 from HFMAPP1_data_audit);

Run below command to end your current transaction and make permanent all changes performed in the transaction:
commit;

Delete all entries from the HFMAPP2_task_audit table except the past 90 days of entries, which remain in the live table:

Run below query to see how many records are going to be deleted:
select count(*)from HFMAPP2_task_audit where starttime < (select max(starttime)-90 from HFMAPP2_task_audit);  
                   
Run below query to delete the records:
delete from HFMAPP2_task_audit where starttime < (select max(starttime)-90 from HFMAPP2_task_audit);

Run below query again to see whether all the records have been deleted or not:
select count(*)from HFMAPP2_task_audit where starttime < (select max(starttime)-90 from HFMAPP2_task_audit);

Run below command to end your current transaction and make permanent all changes performed in the transaction:
commit;

Delete all entries from the HFMAPP2_data_audit table except the past 90 days of entries, which remain in the live table:

Run below query to see how many records are going to be deleted:
select count(*)from HFMAPP2_data_audit where dtimestamp < (select max(dtimestamp)-90 from HFMAPP2_data_audit);

Run below query to delete the records:
delete from HFMAPP2_data_audit where dtimestamp < (select max(dtimestamp)-90 from HFMAPP2_data_audit);

Run below query again to see whether all the records have been deleted or not:
select count(*)from HFMAPP2_data_audit where dtimestamp < (select max(dtimestamp)-90 from HFMAPP2_data_audit);

Run below command to end your current transaction and make permanent all changes performed in the transaction:
commit;

3- Deleting HFM_ERRORLOG table entries, keeping the last 30 days of entries (you can decide the number of days of data to retain by checking with your business/team):

Run below query to see how many records are going to be deleted:
select count(*)from HFM_ERRORLOG where dtimestamp < (select max(dtimestamp)-30 from HFM_ERRORLOG);  
                                 
Run below query to delete the records:
delete from HFM_ERRORLOG where dtimestamp < (select max(dtimestamp)-30 from HFM_ERRORLOG);

Run below query again to see whether all the records have been deleted or not:
select count(*)from HFM_ERRORLOG where dtimestamp < (select max(dtimestamp)-30 from HFM_ERRORLOG);  

Run below command to end your current transaction and make permanent all changes performed in the transaction:
commit;

4- Purging database Recyclebin:

purge recyclebin;

The PURGE RECYCLEBIN command removes items and their dependent objects from the database recycle bin and releases the storage space they were using. The main purpose here is to reclaim the space occupied by dropped objects lying in the recycle bin.
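If you want to see what is sitting in the recycle bin, and roughly how much space it is holding, before purging it, something like the following can be run from the HFM schema. A minimal sketch using the standard USER_RECYCLEBIN view (the size calculation assumes an 8 KB database block size):

-- Dropped objects currently in this schema's recycle bin, with their approximate size in MB
select original_name, object_name, type, droptime,
round(space * 8192 / 1024 / 1024, 2) as approx_mb
from user_recyclebin
order by droptime;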

Note: Keep an eye on the space utilization on the database server and Hyperion Financial Management (HFM) Schema.

That's all for this PART-1.

In the last PART-2 of this blog series, we will see Hyperion Financial Management (HFM) logs archiving, deleting HFM temp and cache files, etc.

I hope this article has helped you.
Your suggestions/feedback are most welcome.
Keep learning and Have a great day!!!