Creating a data strategy for finance
Originally written by Nathan while he was at Deloitte, published on deloitte.com at https://www2.deloitte.com/ch/en/pages/strategy-operations/articles/data-strategy-for-finance.html
Are you thinking end-to-end about data for more efficient and accurate accounting, reporting and planning? Finance is, in most instances, a consumer of data produced in other parts of the organisation.
The data used ‘within’ Finance is usually created in the transaction processing and accounting systems which record events initiated in business units and markets. While an insights or AI strategy for Finance is important, a solid data foundation is fundamental if these strategies are to succeed in powering insights or artificial intelligence (AI).
‘Record to report’ processes originate the data on which financial reporting and analytics depend.
In order to cash (O2C), orders become invoices, then receivables, and finally cash receipts. Similarly, in source to pay (S2P), purchase orders become supplier invoices, then payables, and then cash disbursements. Each process step generates finance data in the form of accounting entries.
This means a Finance team’s data strategy should be focused on ensuring that the quality of the data is high enough to drive touchless transactional accounting and enable the strategic reporting, planning, budgeting, forecasting and analytics processes needed to steer the business.
Any Finance team looking to improve financial operations, planning and transactional accounting, reduce the cost of Finance, improve the quality of reporting for decision-making, or reduce the time to report needs to take an end-to-end approach to data.
In our experience, Finance’s problems with data are more often triggered by poor master data, which leads to incomplete or incorrect transactional data.
However, there is also the risk that transactional data is incorrectly captured at source. The end-to-end approach also means working with the Order to Cash or Source to Pay teams as they will be affected operationally themselves when data is entered incorrectly at source.
The CFO can engage at board level and drive through the company the view that it is not really Finance’s data but the organisation’s data.
Strategy – advise on long-term strategic direction and M&A; identify what you need to get there and how it will impact the P&L in the short and long term.
Analysis and insight – identify what drives your profitability; give insight into revenue-generating or cost-saving opportunities.
Budgeting and forecasting – create an annual budget; run quarterly or rolling monthly forecasts; review performance against budget/forecast regularly.
Reporting – produce accurate, regular, timely management accounts; file all your statutory returns accurately and on time; stay compliant with the law.
Transaction processing – pay staff and suppliers on time; collect cash from customers; don’t run out of money; enter data properly into your ERP.
Data for planning, budgeting, forecasting, and reporting
Data inconsistencies between different bespoke planning and reporting applications have always been a challenge. High quality data is a prerequisite for advanced analytics such as predictive modelling and AI.
Spreadsheets, while powerful tools for Finance, are often a cause of data quality problems which undermine effective decision-making. Using spreadsheets for planning and budgeting without validation rules or checks can compromise data quality by allowing inaccurate manual entries, incorrect master data values or inconsistent aggregation hierarchies.
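A minimal sketch of the kind of validation such spreadsheet-fed planning inputs often lack – checking entries against master data before they are aggregated. The cost-centre codes and field names are invented for illustration:

```python
# Sketch: validate planning/budget input rows against master data before
# aggregation. VALID_COST_CENTRES and the field names are hypothetical.

VALID_COST_CENTRES = {"CC100", "CC200", "CC300"}  # would come from the master data system

def validate_row(row: dict) -> list:
    """Return a list of data-quality problems found in one input row."""
    problems = []
    # Incorrect master data value: code not in the master list
    if row.get("cost_centre") not in VALID_COST_CENTRES:
        problems.append(f"unknown cost centre: {row.get('cost_centre')}")
    # Inaccurate manual entry: amount is not numeric (e.g. '12OO' typed for 1200)
    amount = row.get("amount")
    if not isinstance(amount, (int, float)):
        problems.append(f"non-numeric amount: {amount!r}")
    return problems
```

Running such checks at the point of entry, rather than at period end, keeps bad values out of the aggregation hierarchies.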
Data for transactional accounting
The goal of accounting teams is usually to automate as much of the accounting process as possible, enabling high quality with few human touchpoints.
‘Duelling Spreadsheets’ has long been a topic in reporting – executives come to the same meeting with different views of what should be the same data but which contain differences in inclusions and exclusions, effective dates, or hierarchies, leading to different bases for decision-making.
The quality of the data passing through automated processes can be a major driver of breaks in the process. Any time the process breaks, and data needs to be touched – whether in Enterprise Resource Planning (ERP) or in Excel – it causes friction, delays and cost in terms of human effort.
Costs and impact of data quality
Poor data quality in a process will directly affect its duration. Finance most often sees this if data needs to be fixed and reloaded multiple times, potentially delaying the close by days compared to a straight through process.
The amount of time wasted by the need for multiple updates, reconciliations, checks, reloads, business reviews and other remediation activities has a direct impact on the effectiveness of the Finance function. Data may be of poor quality for many different reasons:
Incomplete: “we haven’t received data from a specific market”
Out of date: “the data is from day 1 but we need the update and must adjust”
Incorrectly coded: “the wrong lookup is used for a hierarchy”
Missing: “missing critical data for reporting, such as the correct FX rate”
Inconsistent: “one team has updated a data structure but hasn’t informed all downstream systems”
Heterogeneous: “data is coded one way at local level but means something else at group level”
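Several of these failure modes can be caught by cheap automated checks before data reaches reporting. A minimal sketch, with the expected markets, FX table and field names assumed purely for illustration:

```python
# Sketch: automated checks for two of the failure modes above (incomplete
# submissions and missing FX rates). Markets, rates and fields are invented.

EXPECTED_MARKETS = {"UK", "DE", "CH"}
FX_RATES = {"GBP": 1.27, "EUR": 1.08}  # CHF deliberately absent

def check_submissions(submissions: dict) -> list:
    issues = []
    # Incomplete: "we haven't received data from a specific market"
    for market in sorted(EXPECTED_MARKETS - submissions.keys()):
        issues.append(f"incomplete: no data received from {market}")
    # Missing: "missing critical data for reporting, such as the correct FX rate"
    for market, rows in submissions.items():
        for row in rows:
            if row["currency"] not in FX_RATES:
                issues.append(f"missing: no FX rate for {row['currency']} ({market})")
    return issues

issues = check_submissions({
    "UK": [{"currency": "GBP", "amount": 100}],
    "CH": [{"currency": "CHF", "amount": 250}],
})
```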
Priorities for Finance data teams
If Finance teams are looking to improve their processes, addressing data is an imperative and data quality, definitions and automating pipelines are priorities. There must be a focus – data programmes are often too broad – and Finance should sponsor and focus on those data elements that are critical to automating their processes or decision-making.
With one recent client it was crucial for the Finance data team to work closely with the order to cash process team to improve the quality of the data entered into systems during the O2C process. Finance must care about data from the start.
What data do we mean?
Often, when Finance people think about data, they think about two things: the data coming in from local ERPs to their finance ERPs, consolidation, or planning and reporting solutions; and the main master data and hierarchies.
Both are critical. The organisation may be in some sort of ERP-led transformation and have achieved some straight-through processing. But countries and markets that haven’t transitioned may still need manual mapping for financial processing – even for the core ERPs, as the data architecture is fragmented.
At a minimum, the Finance department needs a data team to drive improvements. Its role is to focus on enabling all upstream stakeholders and systems to have the right data to enable faster, more accurate processing and thereby improve the time to report, trust in the reports, and financial planning, budgeting and forecasting. The Finance data team needs to engage both IT and the business.
Implementing the strategy involves a combination of maintaining business as usual and efforts to create the new data management, structure and architecture.
Should the Finance data team be the central data team for the whole organisation? Probably not. Finance has a clear and distinct role, and its data needs are specific. A Chief Data Officer (CDO) reporting into the CEO or Chief Operating Officer (COO) provides more of a wider business and commercial view than Finance. A Finance data team has enough to do as it ensures that data affecting Finance across the organisation is well managed and of high quality. A CDO reporting into Finance may have conflicting priorities between Finance and the goals of the business. It is better for the group CDO and the Finance data team to work together.
The time, effort and cost of remediating data quality are visible in some processes, such as the close, but not always. Teams ignore it at their peril.
A rule-of-thumb guide to the cost of data quality is:
If it costs $1 to get data right at source first time,
it costs about 10x as much to fix it at source after the fact,
and about 100x as much to deal with it in downstream systems – particularly Finance, reporting and analytics, where manual interventions and adjustments are needed.
It could cost multiples of this in terms of wrong decisions made, regulatory impact, pricing impact and countless other consequences.
Rarely do organisations see this end-to-end view of their data as a crucial part of their transactional finance operations. Those who don’t are missing opportunities to reduce cost and the time to complete reports or close the books – and to improve the quality of reporting and thereby better inform decision-making.
Based on our experience, we recommend the following as a clear set of priorities for Finance with regard to data: first, reference data; then taxonomy, master data, data sourcing and scheduling, in that order.
Priority 1 – Reference data
Get the codes and mappings right
Reference data means all the codes and the mappings between codes that are used in systems.
Management of reference data is all too often decentralised, leading to many inconsistencies. Addressing reference data inconsistencies is a good starting point. These initiatives have a short time to value and can create an impact with little IT spend.
Reference data can be easy to standardise – e.g., ISO codes for currencies or countries. At one recent client, however, the difference between countries and markets proved a big challenge.
Certain teams referred to countries when they meant what Finance called markets and vice versa. There were countries that contained multiple markets, and markets consisting of multiple countries. It is clearly problematic when teams use terms interchangeably.
How should we deal with differences? At one life sciences company, regulatory GxP (good practice) data needed to be allocated at country level, which was important for manufacturing and supply. But Finance had to manage at market level, where some sort of aggregation was needed. In order to resolve this challenge, we encouraged different data owners and teams (from Finance, manufacturing and regulatory) to work together to define which mappings were right.
Different ERPs may also have different mappings and codes. The mappings, for example between countries and markets, become reference data.
Bringing all this data under unified management is a must if Finance teams want to enable touchless transactional accounting and accurate planning, budgeting and forecasting. Other reference data includes classifications of, for example, customers, suppliers and products; the location of plants, offices, distribution centres and retail stores; and IDs that are needed to ensure payments.
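As a sketch of what centrally managed reference data can look like in practice, the mapping below resolves country codes to Finance markets and fails loudly on gaps. The codes and market groupings are invented examples:

```python
# Sketch: country-to-market mapping held as centrally managed reference data.
# The ISO-style country codes and market groupings below are invented examples.

COUNTRY_TO_MARKET = {
    "CH": "DACH",
    "DE": "DACH",
    "AT": "DACH",          # one market spanning several countries
    "GB": "UK & Ireland",
    "IE": "UK & Ireland",
}

def market_for(country_code: str) -> str:
    """Resolve a country code to its Finance market, failing loudly on gaps."""
    try:
        return COUNTRY_TO_MARKET[country_code]
    except KeyError:
        raise ValueError(f"no market mapping for country {country_code!r}")
```

Failing on an unmapped code, rather than silently defaulting, surfaces reference-data gaps at the point of entry instead of in the reports.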
Priority 2 – Data definitions and taxonomy
Speak the same language
Put simply, data (both in source systems and in results, KPIs and metrics) requires uniformity: the same thing must be called the same thing – and anything else must be excluded.
For Finance-owned metrics, Finance should proactively align the definitions and the language across the business.
Take a metric: ‘Total Sales’. For a Finance executive, that probably means one thing: ‘invoices booked with an invoice date during the period but excluding internal sales or samples.’ This might seem obvious.
But Distribution may be using the same title for “sales orders based on the order date during the period, for which there is a need for a distribution order, possibly including internal sales and samples”. And to those in Distribution, this definition may also seem obvious.
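To make the divergence concrete, the sketch below computes ‘Total Sales’ under both definitions over the same records and gets different numbers. The record layout and flags are illustrative assumptions:

```python
# Sketch: the same metric name, 'Total Sales', computed under the Finance and
# Distribution definitions described above. Records and flags are invented.

from datetime import date

records = [
    {"order_date": date(2024, 3, 28), "invoice_date": date(2024, 4, 2),
     "amount": 500.0, "internal": False, "sample": False},
    {"order_date": date(2024, 4, 5), "invoice_date": date(2024, 4, 6),
     "amount": 200.0, "internal": True, "sample": False},
]

period = (date(2024, 4, 1), date(2024, 4, 30))

def finance_total_sales(recs):
    # Finance: invoice date in period, excluding internal sales and samples
    return sum(r["amount"] for r in recs
               if period[0] <= r["invoice_date"] <= period[1]
               and not r["internal"] and not r["sample"])

def distribution_total_sales(recs):
    # Distribution: order date in period, internal sales and samples included
    return sum(r["amount"] for r in recs
               if period[0] <= r["order_date"] <= period[1])
```

Both teams report a correct number under their own definition; the conflict only disappears once a single definition is agreed and owned.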
The same problem can also arise geographically, or by line of business.
These difficulties occur often in organisations that do not have control over their data definitions and which speak different dialects of the same language. It is crucial that wider stakeholders are engaged in data governance and data quality discussions, for example through working groups and councils.
We often find that only direct stakeholders are engaged. For example, sales and marketing or account managers have the strongest view on customer data despite Finance and operations being stakeholders.
Getting Finance and O2C to work closely together on customer master data, product master data and the related hierarchies proved critical for a recent client.
Priority 3 – Master data
The who, when and what of the business
Finance teams think a lot about Finance master data, consisting of the financial reporting hierarchies, organisation charts and other data structures owned by them.
Dealing with finance hierarchies in their own right can be a challenge: it is painful to map and manage different hierarchies and views implemented in different markets or business lines.
There is often a need for different reporting views, but Finance should aim to simplify this as much as possible.
Finance processes, however, are also affected by other master data – such as product, material, location, customer and supplier masters. These are usually not owned by the Finance team, and mostly should not be.
However, Finance needs to be actively involved in the governance of data domains that may originally seem to be outside their scope. Finance data teams must be involved in working groups, decision bodies and approval processes so that the creators and users of these master data are fully aligned.
There must also be processes to deal with data quality issues or conflicts, including clear paths for resolution with clear ownership. At a recent client the owning process (in this case O2C) had set up data quality and completeness reporting on the data important to them, but Finance had not provided sufficiently clear requirements for other data that was important for financial reporting. This resulted in problems when filtering and aggregating data into reports.
How do we get a view when it doesn’t exist? At a recent client, there was previously no way to get a view of strategic customers and customer groups. For example, Amazon and Walmart, as distributors, were represented by different customer IDs across multiple ERPs in different countries; but Finance needed to report, plan and forecast on the basis of total sales to these companies.
We (Deloitte) helped Finance and O2C to set up a strategic customer group hierarchy and then roll these fields out across the ERPs. Codes and mappings were set up in a reference data management system, with quality checks in the data warehouse, so that data was entered correctly in the country or market and linked to the hierarchy owned by Finance.
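A minimal sketch of the underlying mechanism – a reference mapping from local ERP customer IDs to a Finance-owned strategic customer group, with unmapped IDs surfaced rather than silently dropped. All IDs and group names are invented:

```python
# Sketch: map local (ERP, customer ID) pairs onto a strategic customer group
# owned by Finance, then aggregate sales by group. IDs/names are invented.

LOCAL_TO_GROUP = {
    ("ERP_US", "C-1001"): "AMAZON",
    ("ERP_DE", "K-778"):  "AMAZON",
    ("ERP_US", "C-2040"): "WALMART",
}

def sales_by_group(invoices):
    """Aggregate invoice amounts by strategic customer group."""
    totals = {}
    for inv in invoices:
        key = (inv["erp"], inv["customer_id"])
        group = LOCAL_TO_GROUP.get(key, "UNMAPPED")  # surface mapping gaps
        totals[group] = totals.get(group, 0.0) + inv["amount"]
    return totals

totals = sales_by_group([
    {"erp": "ERP_US", "customer_id": "C-1001", "amount": 100.0},
    {"erp": "ERP_DE", "customer_id": "K-778",  "amount": 50.0},
    {"erp": "ERP_US", "customer_id": "C-9999", "amount": 10.0},
])
```

The ‘UNMAPPED’ bucket is deliberate: it makes gaps in the reference data visible as a data quality measure rather than quietly misstating group totals.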
Priority 4 – Automation of data sourcing
Create trusted pipelines
In the automation of data sourcing, Finance needs to work very closely with lines of business and IT, including ERP, business process and data/analytics teams. Establishing the trusted source for each data set that feeds Finance is crucial. This trusted source is where the investment and effort needed to fix data quality should be targeted.
One thing we often see discussed is the death of the data warehouse and the birth of the data lake. But there is no such thing as a finance data lake. Data lakes, by definition, store huge amounts of both structured and unstructured data and allow so-called ‘schema on read’, so that the analyst chooses how to define the data. This is the opposite of what the Finance department needs.
Finance needs an integrated, historically tracked, subject-oriented database that enables it to drive management decision-making. This, essentially, is still the definition of the data warehouse. The data must be properly structured, well documented and controlled – signifying what it is meant to signify. This demands sourcing, controls, data models, business rules and data catalogues: a data warehouse. Interfaces should be sourced from the integrated layer and extracted at a particular time.
Planning, budgeting and forecasting must use the same sources (at the same time) as reporting. At a recent client we integrated eight different ERPs and their subledgers into one consolidated data hub, enabling single data pipelines to be reused for many purposes.
Conflicting data warehouses should be addressed, working together with IT and the Chief Data Officer. At a recent client, while a common data warehouse consolidated finance data across the organisation, each division had a local data warehouse used for local management reporting and other non-financial data. This led to conflicts, data redundancy and, probably, to different views of the same KPIs, since the data for reporting was not in one common data platform.
A combination of financial and non-financial data in reporting is important in order to have a full picture of customer, products and the organisation; this creates better decision-making.
Local warehouses and data marts are fine if they are leveraging trusted data, but the advice around common master data, definitions and hierarchies still applies.
Priority 5 – Scheduling and latency
When results are needed and how long processing takes
An often overlooked priority from the Finance side is the turnaround time and latency between a data issue being found, fixed and updated, and then reports or applications being reloaded.
This can be a problem in day-to-day business decision support, planning and forecasting, and particularly during the close. The time between reporting a source or intermediate data issue, resolving it and then reloading it for accurate decision-making is a common concern for Finance teams.
What about if we are ‘in-memory’? Taking latency and scheduling into account is important, even if you are using in-memory systems. Many organisations are still encumbered by large legacy environments around an in-memory finance core. Large volumes of data often still need to be moved between systems. Near real time processing may be achieved on an in-memory platform but getting the data there still takes time.
Latency is particularly important when making technology choices with your IT teams, and then in designing the data flows into Finance.
Finance can help IT here by clearly identifying which data flows can run more often, perhaps on partial data, and which need a full set of data to complete processing.
IT teams will want to know how up-to-date your data needs to be, and it might make sense to extract data from source systems more frequently than reporting needs (e.g. obtain changes from the source every hour) particularly if you need data from multiple systems.
The type of data matters: incremental data, like orders or invoices, can be extracted almost continuously. For other data e.g., inventory or balances, the time of day of the extract is important for measurement and comparison. Different data elements even within one system can be designed to be scheduled differently.
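The two extraction patterns can be sketched as follows – an incremental pull driven by a last-change watermark for transactional data, and a timestamped full snapshot for stock-like data. The record shapes and source data are illustrative assumptions:

```python
# Sketch: two extraction patterns. Incremental (orders, invoices) pulls only
# rows changed since the last run; snapshot (inventory, balances) takes a full
# copy labelled with the extract time. Record shapes are invented.

from datetime import datetime

def extract_incremental(rows, last_watermark):
    """Pull only rows changed since the previous run, and advance the watermark."""
    new_rows = [r for r in rows if r["changed_at"] > last_watermark]
    new_watermark = max((r["changed_at"] for r in new_rows), default=last_watermark)
    return new_rows, new_watermark

def extract_snapshot(balances, as_of):
    """Take a full point-in-time copy, stamped so comparisons are meaningful."""
    return [dict(r, snapshot_at=as_of) for r in balances]
```

The watermark makes incremental extraction safe to run almost continuously, while the snapshot stamp records exactly which point in time a balance represents.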
Also think about whether decision-makers using reports and systems have a time of day or day in the month that is particularly important. Mornings are often best for daily reports – but, in a global company, which morning?
Other patterns include: morning/noon/evening for trading summaries; end of week/beginning of week; the 4-5-4 retail calendar or calendar months. There may be more patterns to consider in your organisation.
One global client recently had to balance multiple, possibly conflicting, schedules between sales reporting, demand and supply planning, financial planning and management information (MI) reporting.
By separating the schedules and processing, and addressing process optimisation and notifications, we were able to help them achieve the necessary reporting timescales.
Things to do
Finance teams who want to improve how they use their data for reporting, closing and forecasting should start by addressing the priorities we outline. Other things like better predictive modelling, use of artificial intelligence and machine learning, etc., should be on the roadmap but, without good data, you are likely to be throwing money away.
Nominate Data Owners
Data owners need to be defined before starting a data project. For each key master data domain – e.g. material, product/services, business partners, markets, organisational data, chart of accounts (CoA) – a data owner needs to be appointed. Ideally, such data owners have a global remit, but global responsibility may also need to be split between regional data owners.
Nominate the right people in your team
Also before starting, find out who is currently managing data within Finance and free them up to become engaged in the improvement initiative. The team needs to include subject matter expertise – someone who understands the data quality issues and current hierarchies, knows where data is located in systems, and can work closely with stakeholders across the business, not just within Finance. Also connect with your company’s data organisation – the CDO, or similar.
Define what data is critical for business decision-making or to drive the automation agenda
Finance needs to assess clearly what data should be taken into scope. There are two key criteria: select data that is necessary for driving decision-making (including automated decision support such as predictive analytics), and select data that further automates finance processes – not simply for cost efficiency but also to set a foundation for self-service reporting.
Collate a list of issues
List the issues currently facing Finance from a data perspective, then categorise them in relation to finance processing – for example close, payments, planning, reporting or budgeting. Prioritise categories based on the scale of the issue and the potential benefit, and look for synergies. Write out problem/fix statements for each issue.
Map out critical data flows coming into Finance
Finance data teams should be asking themselves, “how do we get enough right information at the start of a flow so that, by the time the data comes through to us, we know we have the right information?” For example, in O2C or S2P we usually have the following flow: quote → order → invoice → AR → journal entry. Getting good data might mean going all the way back to the quote or order.
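One way to catch breaks in such a flow early is to check that each step references the step before it, so lineage problems are found before they reach the ledger. A minimal sketch, with hypothetical record shapes:

```python
# Sketch: check that each step of the O2C flow references the previous step.
# The record fields (id, order_id, invoice_id, source_id) are invented.

def check_lineage(order, invoice, receivable, journal_entry):
    """Return the lineage breaks found along one O2C chain."""
    problems = []
    if invoice["order_id"] != order["id"]:
        problems.append("invoice does not reference its order")
    if receivable["invoice_id"] != invoice["id"]:
        problems.append("receivable does not reference its invoice")
    if journal_entry["source_id"] != receivable["id"]:
        problems.append("journal entry does not reference its receivable")
    return problems
```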
Determine the most important data elements
Don’t treat every data item as of equal importance. It is much more likely that certain data items have a disproportionate impact. Address the data behind the most important and urgent issues first. Find authoritative sources for each data flow, as granular as you need and prioritised by importance.
For each data item fed to Finance (orders, invoices, receivables, journals etc) a single source system needs to be specified. Usually in multinationals at least the Country, Line of Business (LOB), Function, and type of Account need to be clearly specified and governed. Depending on your needs you may need more dimensions, but you should aim to minimise them.
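A sketch of enforcing that minimum set of dimensions on journal lines before they are accepted; the dimension names follow the text, while the example field values are invented:

```python
# Sketch: enforce the minimum coding-block dimensions on a journal line.
# REQUIRED_DIMENSIONS follows the text; a line lacking any of them is flagged.

REQUIRED_DIMENSIONS = ("country", "line_of_business", "function", "account")

def missing_dimensions(journal_line):
    """Return the required dimensions that are absent or empty on a line."""
    return [d for d in REQUIRED_DIMENSIONS if not journal_line.get(d)]
```

Keeping the required set small, as the text advises, keeps this check cheap and the coding burden on upstream teams low.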
Where there are authoritative sources, invest data remediation efforts and budget into that system.
Decide what ‘good enough’ looks like
While a transformational approach could involve a major programme of work, a view on what is not perfect but good enough will help you build a foundation without a multi-year investment plan. Once ‘good enough’ is achieved, you can start making bigger changes, based on experience. Enriching the prioritised authoritative sources is part of this effort.
Determine the technology needed to support data quality, automation and data management
Finance should work with IT to determine which technology to use to drive automation and to manage transactional data quality and master data. Today’s tools and platforms can help identify, fix and monitor many data issues, and a combination of tools is likely to be needed.
Work with specialists to set out your requirements, define a functional architecture (what needs to happen where and when) and then choose and implement technologies to address the priorities.
Create a plan
Define the roadmap taking into consideration both fixes to existing processes, organisations and technology, and redesign and implementation of new capabilities. Timing is important. Companies normally need to take into consideration in-year business cases as well as investment requirements for a budgetary cycle.