To realize the power of Oracle's Cloud applications, it is critical that the underlying data is clean and correct. Whether you are implementing ERP, HCM, SCM, Financials, Sales, or any other Oracle Cloud module, the data migration into those modules is a risky process. Data migration is the leading cause of delay, budget overrun, and overall pain on most implementations, and understanding the top challenges before you start will be critical to your project's success.
This article and presentation share real-world experiences that organizations encounter while migrating into Oracle Cloud. Use this information to prepare for your implementation, or contact us to discuss how Premier can address these challenges for you.
1. Oracle’s In Control
The biggest challenge we’ve seen customers struggle with: In the Cloud, Oracle’s in control.
This starts with a paradigm shift in how you think about requirements gathering. Previously, you might have customized your solution to fit your business practices – going into Oracle Cloud, you should be looking for ways to adapt to Oracle best practices and use configuration and personalization to make it your own.
As part of your transition, you need to ensure that you are reflecting these same changes and adaptations in your Data Conversion requirements, which should be driven by these business decisions.
Next, Oracle controls your pods (or environments). Pod refreshes are executed by Oracle and must be scheduled weeks in advance, subject to Oracle's schedule. Refreshes can take anywhere from 24 to 48 hours, depending on the volume of data in the pod – the more data, the longer it takes.
If you need to change a scheduled pod refresh, you have to give Oracle advance notice, and you may not be able to simply push it back a day – the next slot might be a week or two later than you had planned.
One of the major limitations of converting data to Oracle Cloud is the Pod refresh schedule. In a traditional on-prem scenario, you might ask your DBA to back up your target environment right before doing a data conversion – or even to back up a specific table “just in case”. Then, if something goes wrong during conversion, you can quickly restore the backup and try again without losing too much time.
In the Cloud, you can't do that. If something goes wrong, you're stuck with it until your next refresh is scheduled and configured, with the lead-time wait that entails. Because of this, it's imperative that as much validation as possible is done on the data prior to load, to really get the most out of your refresh cycles.
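To make this concrete, here is a minimal sketch of a pre-load check, validating a conversion file against a reference list of configured values before a load attempt is spent on it. The column name, file name, and payment-terms list are illustrative assumptions, not actual FBDI fields:

```python
# A minimal pre-load validation sketch: flag rows whose values don't
# match the configuration in the target pod. "PAYMENT_TERMS" and the
# valid set are hypothetical placeholders for your own fields.
import csv

VALID_TERMS = {"NET30", "NET60", "IMMEDIATE"}  # exported from the pod's setup

def validate(path):
    errors = []
    with open(path, newline="") as f:
        # Header is line 1, so data starts at line 2
        for line_no, row in enumerate(csv.DictReader(f), start=2):
            terms = (row.get("PAYMENT_TERMS") or "").strip()
            if terms not in VALID_TERMS:
                errors.append(f"line {line_no}: unknown payment terms {terms!r}")
    return errors

for problem in validate("ap_invoices.csv"):
    print(problem)
```

Every mismatch caught this way is one you don't have to discover from an error report after the load – or worse, after the data is already in.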
One of the key benefits of Oracle Cloud is that it is constantly evolving, with monthly and quarterly software patches. Your schedule needs to take into account when these patches are scheduled, as you won’t be able to use your system during that period. Additionally, these patches can introduce changes which impact the data migration process. These changes are not always communicated in the patch notes – sometimes FBDI templates may have additional fields added (sometimes in the middle of the file rather than at the end) or the actual interface load processes may change. You may not find this out until you review your converted data and notice something has changed.
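One way to catch this drift early is to snapshot the column list of every FBDI template you use and diff it, position by position, after each patch. A sketch, assuming you export each template's columns to a one-line CSV before and after the update (the file names are illustrative):

```python
# Diff two snapshots of an FBDI template's column list, position by
# position. Mid-file insertions show up as a cascade of changed
# positions; appended fields show up as new trailing columns.
import csv

def columns(path):
    with open(path, newline="") as f:
        return next(csv.reader(f))

def diff_columns(before_path, after_path):
    before, after = columns(before_path), columns(after_path)
    for pos, (old, new) in enumerate(zip(before, after), start=1):
        if old != new:
            print(f"column {pos} changed: {old!r} -> {new!r}")
    if len(after) > len(before):
        print(f"new trailing columns: {after[len(before):]}")
    elif len(before) > len(after):
        print(f"columns removed from the end: {before[len(after):]}")

diff_columns("po_template_before.csv", "po_template_after.csv")
```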
These patch-driven changes are becoming less frequent – earlier releases of Cloud saw them far more often, and the platform is much more stable now. That said, you need to be aware that anything could change at any time.
Finally, as we mentioned before, once your data is in – it's in. You can't back it out, you can't use SQL to mass update it as easily as you could in an on-prem world, and there may or may not be an ability to mass-upload changes, depending on the area.
When setting your schedule, try not to have UAT land right before a quarterly patch comes through – any unexpected changes caused by that patch won't be reflected in your conversions before you go into PROD.
2. Square Pegs, Round Hole
One of the first challenges you'll face when migrating data to Oracle Cloud is actually rooted in your current data landscape. Every company has a unique data landscape – how well do you really understand yours? Where are all the places data resides today? Do you know about the custom database the sales department built to manage a specific task? What about the department that decided to "innovate" and re-use fields for something other than their intended purpose? Is that documented anywhere? Have you acquired companies that might be running on different platforms than your main enterprise, which now need to be combined?
If you can't answer these questions, you're not alone! These may seem straightforward, but they're often anything but. We've lost count of the number of companies we've talked with who uncovered critical data sources – or even complete sets of data! – days after they turned on their new system and "went live".
At the extreme end, we had a client who had to bring an ex-employee back out of retirement to support a new ERP implementation, as this employee was the only one who knew anything about how their very old legacy system worked. Absolutely nothing was documented, and the legacy system was significantly customized – meaning there were no references publicly available to support the transformation exercise!
A slightly less extreme example we ran into more recently involved a client running on an antiquated mainframe system, where the one employee who knew the system was nearing retirement. It was a race against the clock to get off the system before that happened, as their 500-page user manual consisted of illegible handwritten notes.
Many companies aren’t running just one system today - if you grew through acquisition (or even if you didn’t), you may have a different ERP for each company. There are probably Access databases or Excel files in play across the business. What about third party add-ons or workarounds bolted on to your primary data source? All of these need to be wrangled into a single set of data as part of the migration. This might require merging records across sources, harmonizing duplicates or combining individual fields.
The sooner you start understanding your specific landscape, the better prepared you’ll be when you find those blind spots.
Locking down the data sources is only the first step. Once you know where your data is, you need to work out how it's being used. Every legacy system structures its data differently and is often heavily customized.
On top of that, unless you're going through an upgrade of Oracle EBS to Oracle Cloud, there is generally a major structural disconnect between what you have today and where you're going tomorrow. Even between EBS and Cloud there are areas which have been reworked, redesigned, or restructured, and that has to be taken into account as part of the migration. If you have multiple data sources, that's multiple sets of restructuring.
To take a simple example – let's consider Customer data. Each ERP system structures Customer data in a different way. SAP and PeopleSoft customers are stored and maintained in a completely different manner than Oracle Cloud customers, and every other platform you can think of will have its own way of structuring this data. As part of the migration, you need to transform what you have today into what you need tomorrow.
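To illustrate the shape of that restructuring, here is a simplified sketch of turning one flat legacy customer row into the layered party / account / site structure Oracle Cloud expects. The legacy column names are invented for the example, and the real customer import involves far more fields:

```python
# Split one flat legacy customer record across Oracle Cloud's layered
# customer structure (party -> account -> sites with usages).
legacy_row = {
    "CUST_NO": "C-1042",
    "NAME": "Acme Industrial",
    "BILL_ADDR": "12 Main St, Springfield",
    "SHIP_ADDR": "400 Dock Rd, Springfield",
}

def to_cloud_customer(row):
    party = {"party_name": row["NAME"], "orig_system_reference": row["CUST_NO"]}
    account = {"account_number": row["CUST_NO"], "party": party}
    sites = [
        {"account": account, "address": row["BILL_ADDR"], "site_use": "BILL_TO"},
        {"account": account, "address": row["SHIP_ADDR"], "site_use": "SHIP_TO"},
    ]
    return party, account, sites

party, account, sites = to_cloud_customer(legacy_row)
```

Now multiply that by every source system, each with its own shape, and the transformation effort adds up quickly.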
A lot of companies we work with are doing their first major implementation in years, meaning they are running on systems which have been around for a long time. In an on-prem world, this often means they've been customizing these systems for a long time. All of this has to be unraveled when it's time to move to the Cloud.
We have a long-term client we've been working with for years as they've deployed a single instance of Oracle to their locations around the world. Over the years, we've combined more than 58 legacy sources across 39 sites in 17 countries. With so many sources, a lot of work went into ensuring that the migration for each site consolidated and structured the disparate legacy data into the same format – regardless of what it looked like in legacy. Even when sites were running on the same legacy platform, there was a lot of variability in their conversion processes. Four of the 39 sites were running MFGPro, but each was using it differently and had a ton of unique customizations. Aligning them into Oracle took four completely separate processes to avoid unexpected results.
3. Stringent Load Criteria
With Cloud, Oracle does provide tools to try to make the loading of data more 'user friendly', with the aim of allowing organizations to handle this task on their own. If only it were that simple. There are two main load utilities to leverage – FBDI (File Based Data Import) and ADFdi (ADF Desktop Integration). The templates tell you what to do once they're populated – the trick is getting legacy data to match the format, content, and structure expected by Oracle Cloud.
Both FBDI and ADFdi enforce basic data integrity validation and structural cross-checks to prevent invalid data scenarios. If these rules are violated, the data will not load – leaving you with a sometimes cryptic error report to decipher and correct for. If you’re used to fixing things “live” – or pushing the data in with the plan to deal with data quality issues later – you may find yourself with load files which won’t load.
Even if your data is perfectly clean, you may still encounter issues with your FBDIs. Any configuration which doesn't match your file will result in a load failure (trailing spaces in configuration are fun to track down!). Required fields which aren't populated on the file will result in a load failure, even if they're something you haven't used in the past. Files with too many records sometimes cause loads to fail or time out (though the threshold differs from file to file), and performance can degrade as row counts grow.
FBDIs may seem like they make data migration a no-brainer – but if you haven't done the work in advance to make sure that the data in the templates is clean and consistent, you can end up going through multiple iterations of this process to get a file that will load successfully. It is extremely easy to burn a ton of hours caught in the load process, fixing issue after issue – which is where up-front checks like the sketch below earn their keep.
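Here is a minimal sketch of those file-level checks – stray whitespace, empty required columns, oversized files. The column names, the header-row assumption, and the 5,000-row threshold are all illustrative; tune them per template:

```python
# File-level FBDI sanity checks: whitespace, required columns, volume.
# Assumes the CSV carries a header row for readability; raw FBDI files
# are positional, so you would map by column index instead.
import csv

REQUIRED = ["INVOICE_NUM", "INVOICE_AMOUNT"]  # varies by template
MAX_ROWS = 5000                               # illustrative batch limit

def check_fbdi(path):
    issues = []
    with open(path, newline="") as f:
        rows = list(csv.DictReader(f))
    for line_no, row in enumerate(rows, start=2):
        for col, val in row.items():
            if val and val != val.strip():
                issues.append(f"line {line_no}, {col}: leading/trailing whitespace")
        for col in REQUIRED:
            if not (row.get(col) or "").strip():
                issues.append(f"line {line_no}: required column {col} is empty")
    if len(rows) > MAX_ROWS:
        issues.append(f"{len(rows)} rows – consider splitting into batches of {MAX_ROWS}")
    return issues
```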
4. Incomplete Functionality
At some point as you're starting to plan and prepare for your implementation, you'll find yourself face to face with the next challenge: missing (or incomplete) functionality. Oracle Cloud is very much a work in progress.
New features and functionality are constantly being released as part of Oracle's quarterly patches. While the constant enhancements help keep the solution fresh, they do pose a challenge when something you need isn't yet available. We're not going to talk about every feature which might not exist yet – but we will highlight a few things which impact data migration.
While many areas are supported by FBDI loads, others are not. For example, Price Lists can only be loaded one at a time, making it very difficult for clients with large volumes of price lists, especially those with lots of configured items. If you have attachments (be they item, order, or invoice attachments), you will likely find the process to load them involved and complicated. If you're implementing Oracle Cloud PLM, you may find that while there is a Change Order FBDI process, it doesn't work as intended without changes to the embedded FBDI macros.
There are also areas where the functionality may seem to exist – but isn't actually functional. An example here is Item Revision attributes – that is, attributes defined at the Item Revision level. While the FBDI makes it seem like loading these is possible during conversion, the values don't actually do anything! Our client ended up having to assign these attributes at the master item level, not the revision level as they had wanted.
Another client ran into a challenge trying to migrate their serialized items: the Oracle FBDI only allows you to load ranges of sequential serial numbers, with no direct way to load serial numbers that aren't sequential.
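One workaround, when your legacy serials aren't contiguous, is to collapse them into runs of consecutive values so each run becomes its own from/to range in the load file. A sketch, assuming serials share a prefix and a fixed-width numeric suffix:

```python
# Collapse a set of serial numbers into (from, to) runs of consecutive
# values, one run per FBDI range row. Assumes a common prefix plus a
# fixed-width numeric suffix (e.g. "SN-00042").
import re

def serial_ranges(serials):
    def parse(s):
        m = re.match(r"^(.*?)(\d+)$", s)
        if not m:
            raise ValueError(f"no numeric suffix: {s!r}")
        prefix, digits = m.groups()
        return prefix, len(digits), int(digits)

    runs = []
    for prefix, width, num in sorted(parse(s) for s in set(serials)):
        last = runs[-1] if runs else None
        if last and last[0] == prefix and last[1] == width and num == last[3] + 1:
            last[3] = num                           # extend the current run
        else:
            runs.append([prefix, width, num, num])  # start a new run
    return [(f"{p}{lo:0{w}d}", f"{p}{hi:0{w}d}") for p, w, lo, hi in runs]

print(serial_ranges(["SN-001", "SN-002", "SN-005"]))
# [('SN-001', 'SN-002'), ('SN-005', 'SN-005')]
```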
Finally – and maybe weirdest of all – is a challenge we encountered recently, where the customer load process simply stopped working one day. Checking the load files, we confirmed everything was as it should be and reached out to Oracle to understand what was causing the failure. Oracle didn't have a definitive answer as to what we were seeing, and suggested a reboot of the Oracle server. This worked – temporarily. When we ran into the same issue on another pod, a reboot didn't resolve the issue. We're still not sure what the underlying issue was, or if the recent quarterly patch might have addressed it.
As we said, Oracle Cloud is a work in progress, and as with any software of this magnitude there are going to be issues and kinks to work out. Service Requests (SRs) are key – working with Oracle to troubleshoot and determine how to address the issues you encounter. If you’re loading inventory and the load results in errors, an SR is the only way you’re going to get the interface table purged to try the load again.
Some of the FBDIs are more 'solid' than others. In general, the Financials and HR FBDIs are currently more stable than the manufacturing and project-based templates. As more companies implement those modules, those FBDIs should evolve and become more stable. That said, even 'stable' processes have been known to have issues.
5. Cryptic Error Messages
The first time you set out to load your data to Oracle Cloud, you will very likely have some (or all) of your records fail, and find yourself faced with an Oracle error report. Depending on what type of data you're loading, the error reports can look very different in both format and content. Some error reports are straightforward and easy to read and understand. Others…not so much.
At times, Oracle error reports contain cryptic error messages that are hard, if not impossible, to understand and resolve. We've even run into cases where the import shows the load was successful, but no data was actually loaded. It turns out some load files have 'silent' errors – showstopping errors which are never reported or communicated, but which stop the loads in their tracks.
One of our favorite error messages, which we ran into during a customer load, was “Error encountered. Unable to determine cause of error.” That one was super fun to troubleshoot!
Also fun are misleading error messages – that is, error messages that are completely unrelated to the actual problem. We had a purchase order conversion failing, which the error reported as being due to a currency conversion error. This was a bit perplexing, as all the POs and the financial ledgers were in USD – there shouldn't have been any currency to convert! We reached out to Oracle for guidance, and they had no idea what the issue was. Eventually, we determined that the issue was caused by the external FX loader being used to load currency conversion rates. Once that was paused, the POs loaded without a hitch. The problem wasn't with the PO load – it was a conflict with the FX loader.
Even when errors are reported more clearly, they don’t always point to the true underlying issue in the most direct way. For instance, when loading projects, you may run into issues if the project manager was not active on the HCM side as of the project start date. If the current project manager was assigned after the inception of the project, this is always going to lead to a load failure. The message you’ll see reported is “The project manager is not valid”. If you know to check the dates, you might get to the bottom of the issue quickly. If you don’t, and see that the project manager exists and IS valid today, it may take you a bit to realize that the issue is truly that they weren’t valid as of the project start date.
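This is exactly the kind of rule worth pre-validating before the load. A minimal sketch, using illustrative field names rather than the actual FBDI columns:

```python
# Pre-validate that each project manager's HCM assignment was active
# as of the project start date, not just today. Field names are
# hypothetical placeholders.
from datetime import date

def pm_valid_at_start(project, assignments):
    """assignments: (person_id, effective_start, effective_end or None)."""
    return any(
        person_id == project["pm_person_id"]
        and start <= project["start_date"] <= (end or date.max)
        for person_id, start, end in assignments
    )

project = {"pm_person_id": 1001, "start_date": date(2020, 1, 15)}
assignments = [(1001, date(2021, 6, 1), None)]  # PM assigned after project began
print(pm_valid_at_start(project, assignments))  # False -> load would fail
```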
6. Cloud Database Access
As you're working through the errors, you're likely going to run into this next challenge: accessing the Oracle Cloud database. If you are coming from a background of on-prem data migration, this might be a shock – you can't run SQL directly against the database to quickly look into specific data concerns, mass update records, check record counts, and so on.
There are ways around this, primarily by leveraging custom BI Publisher reports to look at specific query conditions. Another option is PVOs, or Public View Objects, which are pre-built view queries for key areas. However, they don't pull all rows and fields from the underlying base tables, so if you need something other than what's pre-defined, you may be left building a custom report that still doesn't show you the full picture.
If you're working through error reports to address load failures, keep in mind that it is very difficult – often impossible – to update, correct, or back out data once it's loaded. Some areas can be updated via FBDI, but not all. To get around this challenge, our company built hundreds of pre-validation checks to run on the data before attempting a load, plus a utility that can query the Cloud from outside the application. This helps ensure the load will succeed with minimal errors, and with no need to back out or mass-update data.
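For the record-count piece specifically, Oracle Cloud's REST APIs offer one way to verify loads from outside the application. A sketch, assuming your pod exposes the standard Fusion REST endpoints (the host name, resource, and credentials are placeholders):

```python
# Count loaded records via the Fusion REST framework's totalResults
# parameter, then compare against your source extract.
import requests

BASE = "https://your-pod.fa.ocs.oraclecloud.com"  # placeholder host

def loaded_count(resource, auth):
    url = f"{BASE}/fscmRestApi/resources/latest/{resource}"
    resp = requests.get(
        url, params={"totalResults": "true", "limit": 1}, auth=auth, timeout=60
    )
    resp.raise_for_status()
    return resp.json()["totalResults"]

# e.g. after an invoice load:
# assert loaded_count("invoices", ("integration.user", "password")) == len(source_rows)
```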
7. Scope Creep
Once you think you're done and all your data is loaded into your test pod, the business will start to test and work with the data. As the business starts to discover the power of Oracle Cloud applications, new business requirements are discovered that impact the original specifications and scope.
For example, one of our clients had a multi-country tax reporting requirement for a single legal entity. They designed a solution to handle this scenario, but through their testing they determined a more efficient way to structure their tax information. This resulted in changes not just to how their customer data was structured – moving attributes from the account level to the site level – but also to all their downstream conversions, including Customer Item Relationships, Open AR and Receipts, and Sales Orders and Contracts, just to name a few.
This small change had a big impact and is just one example.
There are additional complexities to consider when multiple Oracle Cloud applications are being implemented. On one engagement, there were multiple related initiatives happening on separate schedules – Oracle Sales Cloud and Oracle ERP. The Sales Cloud prospects were implemented without considering the ERP parties, so when ERP users logged into the system for the first time, thousands of duplicate parties existed due to those prospects. Once schedules aligned, complex cross-team deduplication processes had to be designed late in the project lifecycle, increasing the data team's workload.
Sometimes scope is impacted by areas beyond the core Oracle Cloud solution. Third party interfaces can come with their own requirements based on what they consume from your Oracle data hub.
At one client, we converted additional items to support an Oracle Cloud to Wavemark integration. However, the initial integration failed. It was determined that Oracle needed to store intraclass UoM data for the Wavemark integration to be functional. This entirely new conversion object was not part of the client's original data migration scope.
These three examples are just the tip of the iceberg, and every project brings its own unique challenges. You don't always know what you don't know.