Posts Tagged ‘Best Practice’

IoRIT, Internet of Really Important Things

March 1, 2016


Just recently I published a post entitled “The Three A’s of Predictive Maintenance” (https://www.linkedin.com/pulse/three-predictive-maintenance-graham-smith-phd?trk=prof-post), which discussed the importance of maintaining assets in these economically volatile times. The post contains some references to the IoT (Internet of Things), but here I want to concentrate on what is really important, so I am going to borrow a phrase from Mr Harel Kodesh, Vice President and Chief Technology Officer at GE, who introduced it in his keynote speech at the Cloud Foundry Summit in May 2015 (http://youtu.be/cvIjvbjB7qo).

We build huge assets to support our way of living, and these assets are the REALLY important things that, left to a “fix it when it breaks” mentality, will disrupt everything without maintenance. Mr Kodesh uses two examples, which I have explained in the table below: the Commercial Internet and the Industrial Internet. Both are equally important, but the impacts on business and the environment are much greater for the Industrial Internet and could have far-reaching consequences.

[Table: the Commercial Internet compared with the Industrial Internet]

When we wake in the morning we tend to think about having a shower, getting ready for work and cooking our breakfast, whether by electricity or gas. We don’t think about the water distribution system. We don’t think about power generation or its distribution, and we certainly don’t think about gas extraction or its distribution. We don’t think about the fuel, or where it was made, for the flight across the world that lets us do business in another country. We are not sure where the petrol or diesel that powers our cars and trucks comes from.

Well, it’s reasonably simple to define: all of these commodities come from huge assets that may power other assets and have to be maintained. We are talking here about oil and gas drilling and production platforms, oil refineries, and power stations. All of these assets include other assets which also have to be maintained.

[Image: the Beryl Alpha platform]

Above is a good example of what we are talking about, and one that I was intimately involved with. Some 195 miles out to sea, the first concrete platform (a Condeep, built by Aker in Stavanger, Norway), the Beryl Alpha, was given a life expectancy of 20 years when it was installed by Mobil (now part of ExxonMobil) on the Beryl oilfield (Block 9/13-1) in 1975. Now, 41 years on and having been purchased from ExxonMobil by the Apache Corporation, there is no sign of it being decommissioned, and the addition in 2001 of facilities to process gas from the nearby Skene gas field has given it a new lease of life.

At its peak in 1984, Beryl Alpha was producing some 120,000 bpd. It is still pumping an average of 90,000 to 100,000 barrels of Beryl, a high-quality crude named after Beryl Solomon, wife of Mobil Europe president Charles Solomon. Gas production is around 450 million cubic feet per day, representing nearly 5% of total British gas demand, or the needs of 3.2 million households. Today, “the challenge is the interface between technology 41 years old and new technology.”

So here we are, thinking now about “The Internet of Really Important Things” and how we can use the technology of today with the technology of yesteryear. Doing more with less, sweating the assets, to coin a phrase! Compliance with specifications, rules and regulations is where we need tools and techniques such as Predictive Maintenance (PdM). The linked specifications are a snapshot of the specifications for the Beryl; monitors and sensors ensure that data is captured in real time, which can then be used to highlight problems before they occur.

To achieve what is called World Class Maintenance (WCM), it is necessary to improve the adopted maintenance processes. Various tools available today have adopted the word maintenance. It is important to note that these are not new types of maintenance but tools that allow the application of the main types of maintenance.

 

 


Process Mining, Bridging the gap between BPM and BI

February 29, 2016

Later this year I will be involved in a MOOC entitled “Introduction to Process Mining with ProM” from FutureLearn. Unfortunately it has just been delayed from April until July, but being interested in BPM and BI, I thought I would start my own research into the subject and publish my findings.

Prof. Dr. Ir. Wil van der Aalst, of the Department of Mathematics and Computer Science (Information Systems WSK&I) at the Data Science Center Eindhoven in the Netherlands, is the founding father of “Process Mining”. You will find many quotes attributed to him in this post.

Introduction

Today a tremendous amount of  information about business processes is recorded by information systems in the form of  “event logs”. Despite the omnipresence of such data, most organisations diagnose problems based on fiction rather than facts. Process mining is an emerging discipline based on process model-driven approaches and data mining. It not only allows organisations to fully benefit from the information stored in their systems, but it can also be used to check the conformance of processes, detect bottlenecks, and predict execution problems.

So let’s see what it is all about.

Companies use information systems to enhance the processing of their business transactions. Enterprise resource planning (ERP) and workflow management systems (WFMS) are the predominant information system types used to support and automate the execution of business processes. Business processes like procurement, operations, logistics, sales and human resources can hardly be imagined without the integration of information systems that support and monitor relevant activities in modern companies. The increasing integration of information systems not only provides the means to increase effectiveness and efficiency; it also opens up new possibilities for data access and analysis. When information systems are used to support and automate the processing of business transactions, they generate data. This data can be used to improve business decisions.

The application of techniques and tools for generating information from digital data is called business intelligence (BI). Prominent BI approaches are online analytical processing (OLAP) and data mining (Kemper et al. 2010, pp. 1–5). OLAP tools allow analysing multidimensional data using operators like roll-up and drill-down, slice and dice, or split and merge (Kemper et al. 2010, pp. 99–106). Data mining is primarily used for discovering patterns in large data sets (Kemper et al. 2010, p. 113).

However, the availability of data is not only a blessing as a new source of information; it can also become a curse. The phenomena of information overflow (Krcmar 2010, pp. 54–57), data explosion (Van der Aalst 2011, pp. 1–3) and big data (Chen et al. 2012) illustrate several problems that arise from the availability of enormous amounts of data. Humans are only able to handle a certain amount of information in a given time frame. When more and more data is available, how can it actually be used in a meaningful manner without overstraining the human recipient?

Data mining is the analysis of data for finding relationships and patterns. The patterns are an abstraction of the analysed data. Abstraction reduces complexity and makes information available to the recipient. The aim of process mining is the extraction of information about business processes (Van der Aalst 2011, p. 1). Process mining encompasses “techniques, tools and methods to discover, monitor and improve real processes by extracting knowledge from event logs” (Van der Aalst et al. 2012, p. 15). The data that is generated during the execution of business processes in information systems is used for reconstructing process models. These models are useful for analysing and optimising processes. Process mining is an innovative approach that builds a bridge between data mining (BI) and business process management (BPM).

Process mining evolved in the context of analysing software engineering processes by Cook and Wolf in the late 1990s (Cook and Wolf 1998). Agrawal and Gunopulos (Agrawal et al. 1998) and Herbst and Karagiannis (Herbst and Karagiannis 1998) introduced process mining to the context of workflow management. Major contributions to the field have been added during the last decade by van der Aalst and other research colleagues, who developed mature mining algorithms and addressed a variety of related challenges (Van der Aalst 2011). This has led to a well-developed set of methods and tools that are available to scientists and practitioners.

Introduction to the basic concepts of process mining

The aim of process mining is the construction of process models based on available event log data. In the context of information systems science, a model is an immaterial representation of its real-world counterpart used for a specific purpose (Becker et al. 2012, pp. 1–3). Models can be used to reduce complexity by representing characteristics of interest and omitting other characteristics. A process model is a graphical representation of a business process that describes the dependencies between activities that need to be executed collectively to realise a specific business objective. It consists of a set of activity models and constraints between them (Weske 2012, p. 7).

Process models can be represented in different process modelling languages. BPMN provides more intuitive semantics that are easier to understand for recipients who do not possess a theoretical background in informatics, so I am going to use BPMN models for the examples in this post.

Above is a business process model of a simple procurement process. It starts with the definition of requirements. The goods or service are ordered, and at some point in time the ordered goods or service are delivered. After the goods or service have been received, the supplier issues an invoice which is finally settled by the company that ordered the goods or service.

Each of the events depicted in the process above will have an entry in an event log. An event log is basically a table. It contains all recorded events that relate to executed business activities. Each event is mapped to a case. A process model is an abstraction of the real-world execution of a business process. A single execution of a business process is called a process instance. Instances are reflected in the event log as a set of events that are mapped to the same case. The sequence of recorded events in a case is called a trace. The model that describes the execution of a single process instance is called a process instance model. A process model abstracts from the individual behaviour of process instances and provides a model that reflects the behaviour of all instances that belong to the same process. Cases and events are characterised by classifiers and attributes. Classifiers ensure the distinctness of cases and events by mapping unique names to each case and event. Attributes store additional information that can be used for analysis purposes.
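The mapping from events to cases and traces can be sketched in a few lines of Python. The log below is hypothetical, and the column names (case, activity, timestamp) are illustrative assumptions rather than a fixed standard:

```python
from collections import defaultdict

# A hypothetical event log for the procurement process described above.
# Each event records a case id, an activity name, and a timestamp.
event_log = [
    {"case": 1, "activity": "define requirements", "timestamp": "2016-02-01T09:00"},
    {"case": 1, "activity": "order goods",         "timestamp": "2016-02-01T10:30"},
    {"case": 2, "activity": "define requirements", "timestamp": "2016-02-01T11:00"},
    {"case": 1, "activity": "receive goods",       "timestamp": "2016-02-03T14:00"},
    {"case": 1, "activity": "receive invoice",     "timestamp": "2016-02-04T08:15"},
    {"case": 2, "activity": "order goods",         "timestamp": "2016-02-02T09:45"},
    {"case": 1, "activity": "settle invoice",      "timestamp": "2016-02-10T16:00"},
]

def traces_by_case(log):
    """Group events by case id and order each trace by timestamp."""
    cases = defaultdict(list)
    for event in log:
        cases[event["case"]].append(event)
    return {
        case: [e["activity"] for e in sorted(events, key=lambda e: e["timestamp"])]
        for case, events in cases.items()
    }

print(traces_by_case(event_log))
```

Real logs (for example in the XES format used by ProM) carry the same three essentials: a case classifier, an activity classifier, and a timestamp.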

The Mining Process

The process above provides an overview of the different process mining activities. Before being able to apply any process mining technique it is necessary to have access to the data, which needs to be extracted from the relevant information systems. This step is far from trivial. Depending on the type of source system, the relevant data can be distributed over different database tables, and data entries might need to be composed in a meaningful manner during extraction. Another obstacle is the amount of data: depending on the objective of the process mining, up to millions of data entries might need to be extracted, which requires efficient extraction methods. A further important aspect is confidentiality. Extracted data might include personalised information, and depending on legal requirements anonymisation or pseudonymisation might be necessary.

Before the extracted event log can be used, it needs to be filtered and loaded into the process mining software. There are different reasons why filtering is necessary. Information systems are not free of errors: data may be recorded that does not reflect real activities. Errors can result from malfunctioning programs, but also from user disruption or hardware failures that lead to erroneous records in the event log.

Process Mining Algorithms

The main component in process mining is the mining algorithm. It determines how the process models are created. A broad variety of mining algorithms exists. The following three categories will be discussed, though not in great detail.

  • Deterministic mining algorithms
  • Heuristic mining algorithms
  • Genetic mining algorithms

Determinism means that an algorithm only produces defined and reproducible results: it always delivers the same result for the same input. A representative of this category is the α-algorithm (Van der Aalst et al. 2002). It was one of the first algorithms able to deal with concurrency. It takes an event log as input and calculates the ordering relations of the events contained in the log.
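The first step of the α-algorithm, deriving the ordering relations, is simple to illustrate. This sketch computes only the directly-follows relation, not the full algorithm, and the traces are hypothetical:

```python
def directly_follows(traces):
    """Collect all pairs (a, b) where activity b immediately follows a
    in at least one trace -- the raw ordering relation that the
    alpha-algorithm refines into causality, parallelism and independence."""
    relation = set()
    for trace in traces:
        for a, b in zip(trace, trace[1:]):
            relation.add((a, b))
    return relation

traces = [
    ["order", "receive goods", "receive invoice", "pay"],
    ["order", "receive invoice", "receive goods", "pay"],
]
relation = directly_follows(traces)

# Pairs observed in both directions are interpreted as parallel activities.
parallel = {(a, b) for (a, b) in relation if (b, a) in relation}
```

Because “receive goods” and “receive invoice” each follow the other in some trace, the algorithm would treat them as concurrent, which is exactly the situation described for the mined procurement model later in this post.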

Heuristic mining also uses deterministic algorithms, but they incorporate frequencies of events and traces when reconstructing a process model. A common problem in process mining is that real processes are highly complex, and their discovery leads to complex models. This complexity can be reduced by disregarding infrequent paths in the models.
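Frequency-based filtering, the core idea behind heuristic mining, can be sketched as follows; the threshold and the traces are illustrative assumptions:

```python
from collections import Counter

def frequent_directly_follows(traces, min_count=2):
    """Count how often each directly-follows pair occurs across all traces
    and keep only pairs seen at least min_count times, so that rare
    (possibly noisy) paths are disregarded."""
    counts = Counter()
    for trace in traces:
        counts.update(zip(trace, trace[1:]))
    return {pair for pair, n in counts.items() if n >= min_count}

traces = [
    ["order", "receive", "pay"],
    ["order", "receive", "pay"],
    ["order", "pay"],  # infrequent shortcut, dropped as noise
]
kept = frequent_directly_follows(traces)
```

Raising or lowering `min_count` trades completeness of the mined model against its readability, which is precisely the complexity reduction described above.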

Genetic mining algorithms use an evolutionary approach that mimics the process of natural evolution. They are not deterministic. Genetic mining algorithms follow four steps: initialisation, selection, reproduction and termination. The idea behind these algorithms is to generate a random population of process models and to find a satisfactory solution by iteratively selecting individuals and reproducing them by crossover and mutation over different generations. The initial population of process models is generated randomly and might have little in common with the event log. However, thanks to the large number of models in the population and the repeated selection and reproduction, better-fitting models are created in each generation.

The process above shows a mined process model that was reconstructed from an event log by applying the α-algorithm. It was translated into a BPMN model for better comparability. Obviously this model is not the same as the model in the first process diagram above. The reason for this is that the mined event log includes cases that deviate from the ideal linear process execution that was assumed when modelling the first process depiction. In case 4 the invoice is received before the goods or service. Because both orderings are included in the event log (goods or service received before the invoice in cases 1, 2, 3 and 5, and the invoice received before the ordered goods in case 4), the mining algorithm assumes that these activities can be carried out concurrently.

Process Discovery and Enhancement

A major area of application for process mining is the discovery of formerly unknown process models for the purpose of analysis or optimisation (Van der Aalst et al. 2012, p. 13). Business process reengineering and the implementation of ERP systems in organisations gained strong attention starting in the 1990s. Practitioners have since primarily focused on designing and implementing processes and getting them to work. With the maturing integration of information systems into the execution of business processes and the evolution of new technical possibilities, the focus shifts to analysis and optimisation.

Actual executions of business processes can now be described and made explicit. The discovered processes can be analysed for performance indicators like average processing time or cost, for improving or reengineering the process. The major advantage of process mining is that it uses reliable data. The data generated in the source systems is generally hard for the average system user to manipulate. For traditional process modelling, the necessary information is primarily gathered through interviews, workshops or similar manual techniques that require the interaction of people. This leaves room for interpretation, and a tendency for ideal models to be created based on often overly optimistic assumptions.

Analysis and optimisation are not limited to post-runtime inspections. Process mining can also provide operational support by detecting running traces that do not follow the intended process model, and it can be used to predict the behaviour of traces under execution. An example of runtime analysis is the prediction of the expected completion time by comparing the instance under execution with similar, already processed instances. Another feature is the provision of recommendations to the user for selecting the next activities in the process. Process mining can also be used to derive information for the design of business processes before they are implemented.

Summary

Process mining builds the bridge between data mining (BI)  and business process management (BPM). The increasing integration of information systems for supporting and automating the execution of business transactions provides the basis for novel types of data analysis. The data that is stored in the information systems can be used to mine and reconstruct business process models. These models are the foundation for a variety of application areas including process analysis and optimisation or conformance and compliance checking. The basic constructs for process mining are event logs, process models and mining algorithms. I have summarised essential concepts of process mining in this post, illustrating the main application areas and one of the available tools, namely ProM.

Process mining is still a young research discipline, and limitations concerning noise, adequate representation and competing quality criteria should be taken into account when using it. Although some areas, such as the labelling of events, complexity reduction in mined models and phenomena like concept drift, still need to be addressed by further research, the available set of methods and tools provides a rich and innovative resource for effective and efficient business process management.

The Three “A’s” of Predictive Maintenance

February 25, 2016

Again today in the news is another oil and gas company posting a loss, a rig operator scrapping two rigs, predictions of shortfalls in supply by 2020, and major retrenchments of staff across the globe. With all of this going on, the signs are that we are going to have to sweat the assets and do more with less. How then are we going to do more with less?


This post is going to focus on the use of predictive analytics for the maintenance process, or PdM (Predictive Maintenance). Organisations are looking at their operations and how to reduce costs more than ever before. They are experiencing increased consumer empowerment, global supply chains, ageing assets, raw material price volatility, increased compliance requirements, and an ageing workforce. A huge opportunity for many organisations is a focus on their assets.

Organisations lack not only visibility into their assets’ health and performance but also predictability, yet maximising asset productivity and ensuring that the associated processes are as efficient as possible are key for organisations striving to deliver strong financial returns.

In order for a physical asset to be productive, it has to be up, running, and functioning properly. Maintenance is a necessary evil that directly affects the bottom line. If the asset fails or doesn’t work properly, it takes a lot of time, effort, and money to get it back up and running. If the asset is down, you can’t use it: you can’t manufacture products, mine for minerals, drill for oil, refine lubricants, process gas or generate power.

Maintenance has evolved with technology, organisational processes, and the times. Predictive maintenance (PdM) technology has become more popular and mainstream for organisations, but in many cases its adoption remains inconsistent.

There are many reasons for this, including the items below:

  • Availability of large amounts of data from instrumented and connected assets (IoT)
  • Increased coupling of technology within businesses (MDM, ECM, SCADA)
  • Requirements to do more with less, for example stretching the useful life of an asset (EOR)
  • Relative ease of garnering insights from raw data (SAP HANA)
  • Reduced cost of computing, network, and storage technology (cloud storage, SaaS, in-memory computing)
  • Convergence of information technology with operational technology (EAM, ECM)

PdM will assist organisations with key insights regarding asset failure and product quality, enabling them to optimise their assets, processes, and employees. Organisations are realising the value of PdM and how it can be a competitive advantage, given the economic climate and the pressure on everyone to do more with less.

Operations budgets are always the first to be cut, so it no longer makes sense to employ a wait-for-it-to-break mentality. Executives say that the biggest impact on operations is failure of critical assets. In this post I am going to show how predictive analytics, or PdM, will assist organisations.

Predictive Maintenance Definition.

We have all understood what preventive maintenance is; it was popular in the 20th century, but PdM is very much focused on the 21st. PdM is an approach based upon various types of information that allows maintenance, quality and operational decision makers to predict when an asset needs maintenance. There is a myth that PdM is focused purely on asset data; however, it is much more. It includes information from the surrounding environment in which the asset operates and the associated processes and resources that interact with the asset.

PdM leverages various analytical techniques to provide decision makers with better visibility of the asset, and analyses various types of data. It is important to understand the data that is being analysed. PdM is usually based upon usage and wear characteristics of the asset, as well as other asset condition information. As we know, data comes in many different formats. The data can be at rest (data that is fixed and does not move over time) or streaming (data that is constantly on the move).

Types of Data.

From my previous posts on the subject of Big Data you will know by now that there are basically two types of data; however, in the 21st century there is a third. The first is structured data, the second is unstructured data, and the third is streaming data. The most common, of course, is structured data, which is collected from various systems and processes: CRM, ERP, industrial control systems such as SCADA, HR and financial systems, information and data warehouses, and so on. All of these systems contain datasets in tables. Examples include inventory information, production information, financial information and, specifically, asset information such as name, location, history, usage and type.

Unstructured data comes in the form of text, such as e-mails, maintenance and operator logs, social media data, and other free-form data that is available today in limitless quantities. Most organisations are still trying to fathom how to utilise this data. To accommodate it, a text analytics program must be in place to make the content usable.

Streaming data is information that needs to be collected and analysed in real time. It includes information from sensors, satellites, drones and programmable logic controllers (PLCs), which are digital computers used for the automation of electromechanical processes, such as the control of machinery on factory assembly lines, amusement rides, or light fixtures. Examples of streaming data include telematic, measurement, and weather information. This format is currently gaining the most traction as the need for quick decision making grows.

Why use PdM?

There are a number of major reasons to employ PdM, and there is a growing recognition that the ability to predict asset failure has great long-term value to the organisation:

  • Optimise maintenance intervals
  • Minimise unplanned downtime
  • Uncover the root causes of failures in depth
  • Enhance equipment and process diagnostic capabilities
  • Determine optimum corrective action procedures

Many Industries Benefit from PdM

For PdM to be of benefit to organisations, the assets must have information about them as well as around them. Here are a couple of examples from my own recent history; however, any industry that has access to instrumented streaming data has the ability to deploy PdM.

Energy Provider

Keeping the lights on for an entire state in Australia is no small feat. Complex equipment, volatile demand, unpredictable weather and other factors can combine in unexpected ways to cause power outages. An energy provider used PdM to understand when and why outages occurred so that it could take steps to prevent them. Streaming meter data helped the provider analyse enormous volumes of historical data to uncover usage patterns. PdM helped define the parameters of normal operation for any given time of day, day of the week, holiday, or season, and detected anomalies that signal a potential failure.

Historical patterns showed that multiple factors in combination increased the likelihood of an outage. When national events caused a spike in energy demand and certain turbines were nearing the end of their life cycle, there was a higher likelihood of an outage. This foresight helped the company take immediate action to avoid an imminent outage and schedule maintenance for long-term prevention. With PdM, this energy provider:

  • Reduced costs by up to 20 percent (based on similar previous cases) by avoiding the expensive process of reinitiating a power station after an outage
  • Predicted turbine failure 30 hours before occurrence, while previously only able to predict 30 minutes before failure
  • Saved approximately A$100,000 in combustion costs by preventing the malfunction of a turbine component
  • Increased the efficiency of maintenance schedules, costs and resources, resulting in fewer outages and higher customer satisfaction

Oil & Gas Exploration & Production Company

A large multinational company that explores and produces oil and gas conducts exploration in the Arctic Circle. Drilling locations are often remote, and landfall can be more than 100 miles away. Furthermore, the drilling season is short, typically between July and October.

The most considerable dangers that put people, platforms, and structures at risk are colliding with or being crushed by ice floes, flat expanses of moving ice that can measure up to six miles across. Should a particularly thick and large ice floe threaten a rig, companies typically have less than 72 hours to evacuate personnel and flush all pipelines to protect the environment. Although most rigs and structures are designed to withstand some ice-floe collisions, oil producers often deploy tugboats and icebreakers to manage the ice and protect their rigs and platform investments. This is easily warranted: a single oil rig costs $350 million and has a life cycle that can span decades. To better safeguard its oil rigs, personnel, and resources, the company had to track the courses of thousands of moving potential hazards. The company utilised PdM by analysing the direction, speed, and size of floes using satellite imagery to detect, track, and forecast floe trajectories. In doing so, the company:

  • Saved roughly $300 million per season by reducing mobilisation costs associated with needing to drill a second well should the first well be damaged or evacuated
  • Saved $1 billion per production platform by easing design requirements, optimising rig placement, and improving ice management operations
  • Efficiently deployed icebreakers when and where they were needed most

Workforce Planning, Management & Logistics and PdM

The focus of predictive maintenance (PdM) is physical asset performance and failure and the associated processes. One key aspect that tends to be overlooked, but is critical to ensuring PdM’s sustainability, is human resources. Every asset is managed, maintained, and run by an operator or employee. PdM enables organisations to ensure that they have the right employee or service contractor assigned to the right asset, at the right time, with the right skill set.

Many organisations already have enough information about employees either in their HR, ERP, or manufacturing databases. They just haven’t analysed the information in coordination with other data they may have access to.

Some typical types of operator information include:

  • Name
  • Work duration
  • Previous asset experience
  • Training courses taken
  • Safety Courses
  • Licences
  • Previous asset failures and corrective actions taken

The benefits of using PdM in the Workforce Planning, Management & Logistics (WPML) process include the following:

  • Workforce optimisation: Accurately allocate employees’ time and tasks within a workgroup, minimising costly overtime
  • Best employee on task: Ensure that the right employee is performing the most valuable tasks
  • Training effectiveness: Know which training will benefit the employee and the organisation
  • Safety: Maintain high standards of safety in the plant
  • Reduction in management time: Fewer management hours needed to plan and supervise employees
  • A more satisfied, stable workforce: Make people feel they are contributing to the good of the organisation and feel productive.

The key for asset-intensive companies is to ensure that their assets are safe, reliable, and available to support their business. Companies have found that simply adding more people or scheduling more maintenance sessions doesn’t produce cost-effective results. For organisations to utilise predictive maintenance (PdM) effectively, they must understand the analytical process, how it works, its underlying techniques, and its integration with existing operational processes; otherwise, the work to incorporate PdM will be for nothing.

The Analytical Process, the three “A” approach.

As organisations find themselves with more data, fewer resources to manage it, and a lack of knowledge about how to quickly gain insight from it, the need for PdM becomes evident. The world is more instrumented and interconnected, which yields a large amount of potentially useful data. Analytics transforms data to quickly create actionable insights that help organisations run their businesses more cost-effectively.

First A = Align

The align process is all about the data: you establish what data sources exist, where they are located, what additional data may be needed or can be acquired, and how the data is or can be integrated into operational processes. With PdM, it doesn’t matter whether your data is structured or unstructured, streaming or at rest; you just need to know which type it is so you can integrate and analyse it appropriately.

Second A = Anticipate

In this phase, you leverage PdM to gain insights from your data. You can utilise several capabilities and technologies to analyse the data and predict outcomes:

1). Descriptive analytics provides simple summaries and observations about the data. Basic statistical analyses, for which most people utilise Microsoft Excel, are included in this category. For example, a manufacturing machine failed three times yesterday for a total downtime of one hour.
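The machine-failure example above could be summarised like this; the downtime figures are invented for illustration:

```python
from statistics import mean

# Hypothetical downtime per failure, in minutes, for one machine yesterday.
downtimes = [25, 15, 20]

summary = {
    "failures": len(downtimes),            # the machine failed three times
    "total_downtime_min": sum(downtimes),  # one hour of downtime in total
    "mean_downtime_min": mean(downtimes),
}
print(summary)
```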

2). Data mining is the analysis of large quantities of data to extract previously unknown interesting patterns and dependencies. There are several key data mining techniques:

Anomaly detection: Discovers records and patterns that are outside the norm or unusual. This can also be called outlier, change, or deviation detection. For example, out of 100 components, component #23 and #47 have different sizes than the other 98.

Association rules: Searches for relationships, dependencies, links, or sequences between variables in the data. For example, a drill tends to fail when the ambient temperature is greater than 100 degrees Fahrenheit, it’s 1700 hrs, and it’s been functioning for more than 15 hours.

Clustering: Groups a set of objects in such a way that objects in the same group are more similar to each other than to those in other groups. For example, offshore oil platforms that are located in North America and Europe are grouped together because they tend to be surrounded by cooler air temperatures, while those in South America and Australia are grouped separately because they tend to be surrounded by warmer air temperatures.

Classification: Identifies which of a set of categories a new data point belongs to. For example, a turbine may be classified simply as “old” or “new.”

Regression: Estimates the relationships between variables and determines how much a variable changes when another variable is modified. For example, plant machinery tends to fail as the age of the asset increases.
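To make a couple of these techniques concrete, here is a minimal Python sketch of the anomaly-detection and regression examples above; the component sizes, ages and failure counts are invented for illustration:

```python
# Toy illustration of two data mining techniques described above.
# All sample data and thresholds are made up for the example.
import statistics

# Anomaly detection: flag component sizes more than 3 standard
# deviations from the mean (a simple z-score outlier test).
sizes = [10.0] * 98 + [14.5, 6.2]          # the last two components are off-spec
mean = statistics.mean(sizes)
stdev = statistics.stdev(sizes)
outliers = [i for i, s in enumerate(sizes) if abs(s - mean) > 3 * stdev]

# Regression: fit failures-per-year against asset age with a
# least-squares line, to see how failure rate grows with age.
ages     = [1, 2, 3, 4, 5, 6]              # asset age in years
failures = [0, 1, 1, 2, 3, 4]              # observed failures per year
n = len(ages)
slope = (n * sum(a * f for a, f in zip(ages, failures))
         - sum(ages) * sum(failures)) / (n * sum(a * a for a in ages) - sum(ages) ** 2)

print(outliers)   # indices of the anomalous components
print(slope)      # positive slope: failures rise as assets age
```

A real PdM pipeline would of course draw these figures from historian or EAM data rather than hard-coded lists.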

Text mining derives insights and identifies patterns in text data via natural language processing, which lets computers interpret human language. For example, from maintenance logs you may determine that the operator always cleans the gasket in the morning before starting, which leads to an extended asset life.

Machine learning enables the software to learn from the data. For example, when an earthmover fails, there are three or four factors that come into play. The next time those factors are evident, the software will predict that the earthmover will fail. You may also come across predictive analytics, a category of analytics that utilises machine learning and data mining techniques to predict future outcomes.
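As a rough sketch of the earthmover example (the factor names and history below are invented), the “learning” step can be as simple as remembering which factor combinations preceded past failures:

```python
# Toy 'learning from history' sketch: remember which factor
# combinations preceded failures, then predict on new readings.
# Factor names and records are invented for illustration.

history = [
    ({"high_temp": True,  "long_shift": True,  "low_oil": True},  "failed"),
    ({"high_temp": False, "long_shift": True,  "low_oil": False}, "ok"),
    ({"high_temp": True,  "long_shift": True,  "low_oil": True},  "failed"),
    ({"high_temp": False, "long_shift": False, "low_oil": False}, "ok"),
]

# 'Training': collect the factor combinations seen before failures.
failure_patterns = {tuple(sorted(f.items())) for f, outcome in history
                    if outcome == "failed"}

def predict(factors):
    """Predict failure when today's factors match a past failure pattern."""
    return "failed" if tuple(sorted(factors.items())) in failure_patterns else "ok"

print(predict({"high_temp": True, "long_shift": True, "low_oil": True}))   # failed
print(predict({"high_temp": False, "long_shift": True, "low_oil": False})) # ok
```

A production system would use a proper statistical model rather than exact pattern matching, but the idea of generalising from past failures is the same.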

Simulation enables what-if scenarios for a specific asset or process. For example, you may want to know how running the production line for 24 continuous hours will impact the likelihood of failure.
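A what-if scenario like this can be approximated with a small Monte Carlo simulation; the 1% per-hour failure probability here is an assumed figure for illustration, not a real statistic:

```python
# Toy what-if simulation: likelihood of at least one failure during a
# 24-hour continuous run, given an assumed 1% failure chance per hour.
import random

random.seed(42)                 # make the run reproducible
P_FAIL_PER_HOUR = 0.01          # assumed; take from your own failure data
HOURS, TRIALS = 24, 100_000

failures = sum(
    any(random.random() < P_FAIL_PER_HOUR for _ in range(HOURS))
    for _ in range(TRIALS)
)
estimate = failures / TRIALS
analytic = 1 - (1 - P_FAIL_PER_HOUR) ** HOURS   # closed-form check

print(round(estimate, 3), round(analytic, 3))
```

With 100,000 trials the simulated estimate lands very close to the closed-form answer; in practice the per-hour probability itself would come from the predictive model, not a constant.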

Prescriptive analytics goes beyond predicting future outcomes by also suggesting actions and showing the implications of each decision option. For example, based on the data, organisations can predict when a water pipe is likely to burst. Additionally, the municipality can have an automated decision where for certain pipes, certain valves must be replaced by a Level-3 technician. Such an output provides the operations professional with the predictive outcome, the action, and who needs to conduct the action. A decision management framework that aligns and optimises decisions based on analytics and organisational domain knowledge can automate prescriptive analytics.
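The water-pipe example can be sketched as a simple decision rule layered on top of a prediction; the risk score, threshold and roles below are all invented for illustration:

```python
# Toy prescriptive-analytics sketch: a predicted outcome is mapped to
# an action and a responsible role. Scores, thresholds and role names
# are invented for the example.

def predict_burst_risk(pipe):
    """Stand-in for a real predictive model: score risk from age and material."""
    score = pipe["age_years"] * (2.0 if pipe["material"] == "cast_iron" else 1.0)
    return "high" if score > 60 else "low"

def prescribe(pipe):
    """Attach an action and an owner to the prediction."""
    risk = predict_burst_risk(pipe)
    if risk == "high":
        return {"risk": risk, "action": "replace inlet valve",
                "assigned_to": "Level-3 technician"}
    return {"risk": risk, "action": "monitor", "assigned_to": "operations"}

print(prescribe({"age_years": 40, "material": "cast_iron"}))
# high risk -> valve replacement assigned to a Level-3 technician
```

In a decision management framework the rule table would be maintained by the business, not hard-coded, but the shape of the output (prediction, action, owner) is the point.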

The Final A = Act

In the final A, you want to act at the point of impact, with confidence, on the insights that your analysis provided. This is typically done using a variety of channels, including e-mail, mobile, reports, dashboards, Microsoft Excel, and enterprise asset management (EAM) systems: essentially, however your organisation makes decisions within your operational processes. A prominent aspect of the act phase is being able to view the insights from the anticipate phase so employees can act on them. There are three common outputs:

Reports: Display results, usually in list format

Scorecards: Also known as balanced scorecards; automatically track the execution of staff activities and monitor the consequences arising from these actions; primarily utilised by management

Dashboards: Exhibit an organisation’s key performance indicators in a graphical format; primarily utilised by Senior Management

Organisations that utilise as many analytical capabilities of PdM as possible will be able to match the appropriate analytics to the data. Ultimately, those organisations will have better insights and make better decisions than those that don’t. It may be easier for you to leverage a single software vendor that can provide all of these capabilities and integrate all three phases into your operational processes so you can maximise PdM’s benefits. Here are a few names to be going on with: TROBEXIS, OPENTEXT, SAP, MAXIMO.

Five Major Challenges that Big Data Presents

February 22, 2016

The “big data” phrase is thrown around in the analytics industry to mean many things. In essence, it refers not only to the massive, nearly inconceivable, amount of data that is available to us today but also to the fact that this data is rapidly changing. People create trillions of bytes of data per day. More than 95% of the world’s data has been created in the past five years alone, and this pace isn’t slowing. Web pages. Social media. Text messages. Instagram. Photos. There is an endless amount of information available at our fingertips, but how to harness it, make sense of it, and monetise it are huge challenges. So let’s narrow the challenges down a little and put some perspective on them. After fairly extensive reading and research, I believe there are five major challenges in big data at the moment.

1). Volume

How do you deal with the massive volumes of rapidly changing data coming from multiple source systems in a heterogeneous environment?

Technology is ever-changing. However, the one thing IT teams can count on is that the amount of data coming their way to manage will only continue to increase. The numbers can be staggering: in a report published last December, market research company IDC estimated that the total amount of data created or replicated worldwide in 2012 would add up to 2.8 zettabytes (ZB). For the uninitiated, a zettabyte is 1,000 exabytes, 1 million petabytes or 1 billion terabytes, or, in more informal terms, lots and lots and lots of data. By 2020, IDC expects the annual data creation total to reach 40 ZB, which would amount to a 50-fold increase from where things stood at the start of 2010.

Corporate data expansion often starts with higher and higher volumes of transaction data. However, in many organisations, unstructured and semi-structured information, the hallmark of big data environments, is taking things to a new level altogether. This type of data typically isn’t a good fit for relational databases and comes partly from external sources. Big data growth also adds to the data integration workload, and to the challenges for IT managers and their staff.

2). Scope

How do you determine the breadth, depth and span of data to be included in cleansing, conversion and migration efforts?

Big Data is changing the way we perceive our world, and its impact can ripple through all facets of our lives. Global data is on the rise: by 2020 we will have quadrupled the data we generate every day. This data will be generated through the wide array of sensors we are continuously incorporating into our lives, and its collection will be aided by what is today dubbed the “Internet of Things”. From smart bulbs to smart cars, everyday devices are generating more data than ever before. These smart devices carry not only sensors to collect data from their surroundings; they are also connected to a grid of other devices. A smart home today consists of an all-encompassing architecture of devices that can interact with each other via the vast internet network. Bulbs that dim automatically, aided by ambient light sensors, and cars that can glide through heavy traffic using proximity sensors are examples of the sensor-technology advancements we have seen over the years.

Big Data is also changing things in the business world. Companies are using big data analysis to target marketing at very specific demographics. Focus groups are becoming increasingly redundant as analytics firms such as McKinsey run analyses on very large sample bases, made possible today by advancements in Big Data. The potential value of global personal location data is estimated to be $700 billion to end users, and it can result in up to a 50% decrease in product development and assembly costs, according to a recent McKinsey report.

Big Data does not arise out of a vacuum: it is recorded from some data-generating source. For example, consider our ability to sense and observe the world around us, from the heart rate of an elderly citizen and the presence of toxins in the air we breathe, to the planned Square Kilometre Array telescope, which will produce up to 1 million terabytes of raw data per day. Similarly, scientific experiments and simulations can easily produce petabytes of data today. Much of this data is of no interest, and it can be filtered and compressed by orders of magnitude. There is immense scope in Big Data, and huge scope for research and development.

3). 360 Degree View

With all the information that is now available, how do you achieve a 360-degree view of all your customers and harness the kind of detailed information that is available to you: WHO they are, WHAT they are interested in, HOW they are going to purchase, and WHEN?

Every brand has its own version of the perfect customer. For most, these are the brand advocates that purchase regularly and frequently, take advantage of promotions and special offers, and engage across channels. In short, they’re loyal customers.

For many brands and retailers, these loyal customers make up a smaller percentage of their overall customer base than they would prefer. Most marketers know that one loyal customer can be worth five times as much as a newly acquired one, but it’s often easier to attract that first-time buyer with generic messaging and offers. In order to bring someone back time and time again, marketers must craft meaningful and relevant experiences for the individual. So how can brands go about building loyalty for their businesses? Let’s start with what we know.

We all know that customers aren’t one dimensional. They have thousands of interests, rely on hundreds of sources to make decisions, turn to multiple devices throughout the day and are much more complex than any audience model gives them credit for. It’s imperative for marketers to make knowing their customers a top priority, but it isn’t easy.

In the past, knowing a customer meant one of two things: you knew nothing about them, or you knew whatever they chose to tell you. Think about it. In the brick-and-mortar world you would often deal with a first-time customer about whom you knew next to nothing beyond age and gender; the rest was based on assumptions. Over time, a regular customer might become better understood. You’d see them come in with children, or they’d make the same purchase regularly.

Now, thanks to technology, you can know an incredible amount about your customers (some might even say too much). Amassing data is one thing, but increasingly the challenge has become how to make sense of the data you already have to create a rich, accurate and actionable view of your customer: a 360-degree view.

Building and leveraging a 360-degree view of your customer is critical to helping you drive brand loyalty. Your customers need to be at the center of everything your business does. Their actions, intentions and preferences need to dictate your strategies, tactics and approach. They aren’t a faceless mass to whom something is done; they are a group of individuals that deserve personalised attention, offers and engagement. Your goal as a marketer is to make the marketing experience positive for your customers, which, in turn, will be positive for your business.

How can marketers go about establishing that 360-degree view and creating that positive customer relationship? It must be built on insights, but that doesn’t mean simply more data. In fact, more data can make it more difficult to come to a solid understanding of your customer. On top of that, it can also clearly raise privacy concerns. Marketers need to know how to make good inferences based on smart data.

Let’s look at some of the key types of data and how they can be used:

First and most valuable is an organisation’s own (“first-party”) data. This should be obvious, but the diversity of this data – past purchase history, loyalty program participation, etc. – can cause some potentially valuable information to be overlooked.

Next is the third-party data so readily available for purchase today. This can be useful for targeting new audiences or finding look-alikes of existing customers, but it often comes at a prohibitive price and with fewer guarantees of quality and freshness than first-party data.

Finally, there is real-time data about customers or prospects. While real-time data can, in theory, come from either a first- or third-party source, it functions differently from the historical data sources described above. Certainly it can be used to help shape a customer profile, but in its raw form, in the moment, it acts as a targeting beacon for buying and personalising impressions at the perfect instant.

How can you as a marketer use these three data types to come up with the most accurate view of your customer?

First, you need to understand the scope and diversity of your own data. There is valuable information lurking in all kinds of enterprise systems: CRM, merchandising, loyalty, revenue management, inventory and logistics and more. Be prepared to use data from a wide array of platforms, channels and devices.

From there, you can start answering questions about your customers. What are they saying about my products? When are they thinking about purchasing a product from me (or a competitor)? How frequently have they done business with me? How much do they spend? The faster and more fully I can answer these questions, the more prepared I am to connect with my customer in the moment.

Integrating and analysing all of this information in a single manageable view is the next challenge for marketers, allowing them to recognise, rationalise and react to the fantastic complexity that exists within their data. Doing this is no small task, but a holistic view will enable marketers to differentiate meaningful insights from the noise.

The bottom line is that customers want brand experiences that are relevant and engaging, and offers that are custom-tailored for them, not for someone an awful lot like them. This is exactly what the 360-degree approach is designed to make possible: highly personalised, perfectly-timed offers that can be delivered efficiently and at scale.

In order to deliver those experiences, marketers must think about customer engagement from the 360-degree perspective, in which every touch-point informs the others. This cannot be achieved with a hodgepodge of disconnected data. It can only be achieved when all of the available resources (insights, technology and creative) are working together in perfect harmony. Over time, personalised customer experiences drive long-term loyalty for brands and retailers, ultimately creating even more of those “perfect” customers.

4). Data Integrity

How do you establish the desired integrity level across multiple functional teams and business processes? Is it merely about complete data (something in every required field)? Or does it include accurate data, that is, information within those fields that is both correct and logical? And what about unstructured data?

In the previous sections, we saw what Big Data means for the search and social marketer. Now, let’s spend some time on how we can make sure that the Big Data we have actually works for us.
Specifically, it’s my belief that there are four key factors determining our return from Big Data:

  • Is our Big Data accurate?
  • Is our Big Data secure?
  • Is our Big Data available at all times?
  • Does our Big Data scale?

Collating and creating big, valuable data is a very expensive process that requires significant investment and massive engineering resources to create a rigorous, high-quality set of data streams. Currently, 75% of Fortune 500 companies use cloud-based solutions, and IDC predicts that 80% of new commercial enterprise apps will be deployed on cloud platforms.
Given these numbers, let’s address the 4 factors above in a specific context, using a cloud-based digital marketing technology platform for your Big Data needs.
1. Ensure Your Data Is As Accurate As Possible
As a search marketer, you are among the most data-driven people on this planet. You make important decisions around keywords, pages, content, link building and social media activity based on the data you have on hand.
Before gaining insight and building a plan of action based on Big Data, it’s important to know that you can trust this data to make the right decisions. While this might seem like a daunting exercise, there are a few fairly achievable steps you can take.
a. Source data from trusted sources: trust matters. Be sure that the data you, or your technology vendor, collect is from reliable sources. For example, use backlink data from credible and reputable backlink providers such as Majestic SEO, which provides accurate and up-to-date information to help you manage successful backlinking campaigns.
b. Rely on data from partnerships: this is a corollary to the previous point. Without getting into the business and technical benefits of partnerships, I strongly recommend that you seek data acquired through partnerships with trusted data sources so that you have access to the latest and greatest data from these sources.
For example, if you need insight into Twitter activity, consider accessing the Twitter fire hose directly from Twitter and/or partner with a company who already has a tie-up with Twitter. For Facebook insight, use data that was acquired through the Facebook Preferred Developer Program certification. You need not go out and seek these partnerships, just work with someone who already has these relationships.
c. Avoid anything black hat: build your SEO insights and programme with a white hat approach that relies on trusted partnerships like the ones mentioned above.
If and when in doubt, ask around and look for validation that your technology provider has partnerships, and verify this on social media sources such as Facebook and Twitter. Do not be shy about getting more information from your technology vendors, and check that their origins do not tie back to black hat approaches.
2. Ensure Your Data Is Secure
You have, on your hands, unprecedented amounts of data on users and their behavior. You also have precious marketing data that has a direct impact on your business results.
With great amounts of knowledge comes even greater responsibility to guarantee the security of this data. Remember, you and your technology provider together are expected to be the trusted guardians of this data. In many geographies, you have a legal obligation to safeguard this data.
During my readings and research, I have learned a lot about the right way to securely store data. Here are a few best practices that, hopefully, your technology provider follows:

  1. ISO/IEC 27001 standard compliance for greater data protection
  2. Government-level encryption
  3. Flexible password policies
  4. Adherence to European Union and Swiss Safe Harbor guidelines to comply with stringent data privacy laws

3. Ensure Your Data Is Available
Having access to the most valuable Big Data is great, but not enough; you need access to this data at all times. Another valuable lesson I learned is how to deliver high availability and site performance to customers.
To achieve this, implement industry leading IT infrastructure including multiple layers of replication in data centres for a high level of redundancy and failover reliability, and datacenter backup facilities in separate locations for disaster recovery assurance and peace of mind. If you work with a marketing technology provider, be sure to ask them what measures they take to guarantee data availability at all times.
4. Ensure Your Data Scales With User Growth
This is the part that deals with the “Big” in Big Data. Earlier in the post we saw how zettabytes of data already exist, and more data is being generated at an astounding pace by billions of Internet users and transactions every day. For you to understand these users and transactions, your technology should be able to process such huge volumes of data across channels and keep up with the growth of the Internet.
Scale should matter even if you are not a huge enterprise. Consider this analogy: even when you search for a simple recipe on Google, Google has to parse through huge volumes of data to serve the right results.
Similarly, your technology should be able to track billions of keywords and pages, large volumes of location-specific data and social signals to give you the right analytics. Be sure the technology you rely on is made for scale.

5). Governance Process.

How do you establish Procedures across people, processes and technology to maintain a desired state of Governance? Who sets the rules? Are you adding a level of Administration here?

Big Data has many definitions, but all of them come down to these main points: It consists of a high volume of material, it comes from many different sources, it comes in a variety of formats, it arrives at high speeds and it requires a combination of analytical or other actions to be performed against it. But at heart, it’s still some form of data or content, though slightly different than what has been seen in the past at most organizations. However, because it is a form of data or content, business-critical big data needs to be included in Data Governance processes.

Do remember that not all data must be governed. Only data that is of critical importance to an organisation’s success (involved in decision making, for example) should be governed. For most companies, that translates to about 25% to 30% of all the data that is captured.

What Governance best practices apply to big data? The same best practices that apply to standard data governance programmes, enlarged to handle the particular aspects of Big Data:

  1. Take an enterprise approach to big data governance. All Data Governance Programmes should start with a strategic view and be implemented iteratively. Governance of big data is no different.
  2. Balance the people, processes and technologies involved in big data applications to ensure that they’re aligned with the rest of the data governance programme. Big data is just another part of enterprise data governance, not a separate programme.
  3. Appoint Business Data Stewards for the areas of your company that are using big data and ensure that they receive the same training as other data stewards do, with special focus on big data deemed necessary due to the technology in use at your organisation.
  4. Include the Value of Big Data Governance in the business case for overall data governance.
  5. Ensure that the metrics that measure the success of your data governance programme include those related to big data management capabilities.
  6. Offer incentives for participating in the data governance programme to all parts of the business using big data to encourage full participation from those areas.
  7. Create data governance policies and standards that include sets of big data and the associated metadata, or that are specific to them, depending on the situation.

It has to be said that there are many more challenges in Big Data, but researching and reading these are basically the top five that come out every time and are referenced by any and all that are venturing into this world. If there are any different aspects that have been encountered please let me know and perhaps together we can formulate a global checklist for all to follow.

Continuous Controls Monitoring, Three Key Considerations

March 10, 2014

Increasing complexity and challenging new business risks pervade today’s global environments. To address these risks and meet regulatory requirements, organizations must establish effective internal controls, along with processes to make sure these controls remain repeatable, sustainable, and cost-effective. Therefore, as part of their overall governance, risk, and compliance (GRC) strategies, organizations are building continuous controls monitoring (CCM) programmes to improve efficiencies, avoid controls deficiencies and focus resources on managing critical risks. With an effective and sustainable CCM programme that’s designed, managed, and optimized to account for changes such as regulatory shifts, mergers and acquisitions, and system upgrades an organization can meet its compliance objectives, reduce risk exposures, and meet the expectations of key stakeholders. Over time, as their CCM processes mature, companies can transition from manual risk detection efforts to automated prevention measures. Organizations considering CCM must first focus on their control objectives and establish sound processes.

1. Create a Foundation for Your CCM Programme.

A CCM programme should include risk detection, prevention, remediation, and compliance components, all focusing on people, processes, and technology. Using CCM to evaluate and monitor key business processes against predetermined business rules enables an  organization to identify patterns and anomalies to help minimize potential risk exposures.

When a company embarks upon a CCM initiative, the automation or technical aspects often become the primary focus. Although automating the controls can be very beneficial to the organization, it is recommended that companies focus initially on the following control objectives:

1). Application access controls and segregation of duties (SoD) can reduce opportunities for fraud or for material errors by ensuring that financial and operational transactions are properly authorized and approved. A CCM strategy should drive the development and enforcement of effective user and role governance processes, practical SoD rules, and sustainable access controls.

2). Business process controls help users evaluate system configuration settings to identify events that occur outside of set control limits.

3). Master and transactional data controls are used to analyse sensitive fields and transactional data against predefined control criteria. The analysis of this data supports the detection of potential controls violations, such as changes to vendor addresses or terms, duplicate payments, timing issues, and other anomalies. Additionally, the transactional data analysis can facilitate business efficiency improvements.
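As a rough illustration, the SoD and transactional-data controls above reduce to simple rule checks; all role names, vendors and payment records in this Python sketch are invented:

```python
# Toy sketches of two CCM control checks described above.
# Role names, vendors and records are all invented for illustration.
from collections import Counter

# Segregation of duties: flag users holding conflicting role pairs.
CONFLICTS = [
    ("create_vendor", "approve_payment"),
    ("enter_invoice", "approve_invoice"),
]

users = {
    "alice": {"create_vendor", "approve_payment"},   # violates the first rule
    "bob":   {"enter_invoice", "run_reports"},
}

def sod_violations(user_roles):
    """Return (user, conflicting pair) for every SoD rule a user breaks."""
    return [(user, pair) for user, roles in sorted(user_roles.items())
            for pair in CONFLICTS if set(pair) <= roles]

# Transactional data control: flag potential duplicate payments,
# keyed on vendor, invoice number and amount.
payments = [
    {"vendor": "ACME", "invoice": "INV-100", "amount": 500.00},
    {"vendor": "ACME", "invoice": "INV-100", "amount": 500.00},  # duplicate
    {"vendor": "Beta", "invoice": "INV-201", "amount": 75.50},
]

counts = Counter((p["vendor"], p["invoice"], p["amount"]) for p in payments)
duplicates = [key for key, n in counts.items() if n > 1]

print(sod_violations(users))   # alice holds a conflicting role pair
print(duplicates)              # the ACME invoice paid twice
```

A real CCM tool applies rules like these continuously against live ERP data; the point here is only that each control is, at heart, a business rule evaluated against records.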

2. Manage the CCM Life Cycle

To create and sustain an effective CCM programme, an organization must understand and manage the entire CCM life cycle, which includes:

Process design. This begins with a clear vision based on operational objectives (e.g., achieve compliance, reduce risk). It is impractical to monitor all of a company’s controls, and therefore it’s essential to first identify the controls most in need of monitoring, based on business objectives. It is also recommended to establish a CCM governance body to lead the process design effort and to help ensure that business objectives are met.

Business rule development. A CCM programme is only as effective as the business rules used to evaluate the control data. Business rules for SoD, master and transactional data, and automated application controls are used as filters and applied against data sources to identify potential control anomalies.

Controls optimization. Once significant risks have been identified within business process areas, appropriate controls must be established to mitigate them. A vital step in achieving control optimization is establishing controls that cover multiple risk areas and eliminate redundant or ineffective controls.

Exception validation and rationalization. Organizations often become overwhelmed by the volume of control exceptions. Since some exceptions are legitimate, organizations can manage risks and reduce the number of reported exceptions (and therefore the cost of compliance) by filtering out those legitimate business exceptions.
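Filtering out accepted business exceptions before reporting can be sketched like this; the rule codes and vendors are invented for illustration:

```python
# Toy exception-rationalization sketch: drop control exceptions that a
# reviewer has already accepted as legitimate. Codes and vendors are
# invented for the example.

exceptions = [
    {"id": 1, "rule": "late_posting",   "vendor": "ACME"},
    {"id": 2, "rule": "price_variance", "vendor": "Beta"},
    {"id": 3, "rule": "late_posting",   "vendor": "Gamma"},
]

# Accepted business exceptions: ACME is contractually allowed to post late.
accepted = {("late_posting", "ACME")}

to_review = [e for e in exceptions
             if (e["rule"], e["vendor"]) not in accepted]

print([e["id"] for e in to_review])   # only the unexplained exceptions remain
```

In practice the accepted list would live in the CCM tool with an audit trail of who approved each exception and why.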

Resolution reporting. To successfully manage and mitigate business risk, and to ensure timely resolution of compliance violations, it is important to set up a process that allows the company to diligently review and resolve reported violations.

Process optimization. The processes that make up the CCM programme should be flexible and allow the company to dynamically react to change. They also should be continually adjusted to meet business needs and sustain the CCM investment.

3. Automate CCM with SAP Functionality

Companies running SAP have a significant advantage when enabling and automating CCM because integrated business disciplines such as financial accounting and asset management can be built into a centralized CCM programme. A CCM programme that encompasses well designed controls, appropriate business rules, and the diligent management of the CCM life cycle, allows organizations to focus on their enhancement and automation efforts, reducing time and resources that would otherwise be spent manually monitoring controls.

As companies move toward automation, they should make managing configurable controls through benchmarking a part of their testing strategy, since it is a mechanism that ensures configurable controls remain unchanged. SAP provides this capability through table logging, which can help reduce year-to-year control testing.

SAP also provides a number of tools embedded in its GRC solution suite, which can be used to automate the CCM process. These tools include SAP GRC Access Control, SAP GRC Process Control, and SAP GRC Global Trade Services. An organization can leverage these tools, combined with the functionality already embedded within SAP systems, to gain a clear advantage in creating an effective end-to-end solution for managing risk and compliance.

Make CCM a Priority

Having a GRC strategy and making an effective CCM programme a priority can help companies drive their compliance efforts, identify potential processing errors, and proactively detect fraud. It is also critical to design practical processes as you develop your GRC strategy and CCM programme. Many companies hold the misconception that an automated controls solution will solve all compliance needs. However, an automated solution is only effective after a successful CCM programme has been established, based on well-designed controls, appropriate business rules, and ongoing management of the CCM programme.

Strategic Sourcing, what is it all about?

December 17, 2013

Strategic sourcing processes introduced in the mid-nineties have proven to be so robust that even today they remain broadly similar.

This quick overview is not an absolute step-by-step template, because each organisation is unique and each deployment, although broadly similar, will be unique. It is not designed as a one-size-fits-all approach as this will not align your sourcing strategies with what your organisation wants to achieve. One thing that has been learnt from multiple deployments is that successful organisations drive deployment of strategic sourcing in their own way. 

Definitions

stra·te·gic [struh-tee-jik] – adjective

1. Helping to achieve a plan, for example in business or politics;

2. Pertaining to, characterised by, or of the nature of strategy: strategic movements;

3. Of an action, as a military operation or a move in a game, forming an integral part of a stratagem: a strategic move in a game of chess.

sourc·ing [sawr-sing, sohr-] – noun

1. Buying of components of a product or service delivery from a supplier.

Strategic sourcing is an integral part of a wider business strategy to improve profitability and, in turn, shareholder value. It is directly linked and specific to the business, and illustrates opportunities within the supply base to either reduce cost or increase the value of products or services required by the business. Typically, it includes demand management and supplier management. However, increasingly it is becoming important to factor in total cost of ownership (TCO) and sustainability. 

Demand management

Understanding the specification and volume requirements from the business ensures that needs can be appropriately met and that resources are not being wasted. Demand management is not about reducing contract volumes. Rather, it is about ensuring that contract volumes are appropriate for meeting the needs and objectives of the organisation. A core process that will contribute to the strategic sourcing plan is the sales and operations planning process (S&OP).

The S&OP is an integrated business management process through which the business continually achieves alignment and synchronisation between all functions of the organisation. It generally includes:

• an updated sales plan;

• a production or delivery plan;

• inventory holdings;

• customer lead times and commitments;

• a new product development plan;

• a strategic initiative plan;

• a financial plan.

The strategic sourcing team would ultimately be involved in several of these areas, to contribute towards capacity planning and to understand how each feeds into the overall plan and influences demand profiles.

Supplier Management

Understanding the capability, costs and capacity within the supply base ensures that business requirements can be appropriately matched without incurring higher costs. Systematic improvements in supplier management not only improve cost of goods and services but can also improve relationships with suppliers. This can lead to supplier relationship management (SRM) – tools and processes that enable the proactive management of an ongoing business relationship to secure a competitive advantage for your organisation.

To deploy SRM, an organisation needs to decide on a segmentation approach that considers the internal needs of the business and its spend, and that also accounts for risk to the business. Broadly speaking there are four high-level categories of suppliers.

Transactional suppliers are where little or no relationship or performance management activity is undertaken. Either the suppliers are utilised infrequently or the supplier is of low value to the business. These suppliers can be easily switched for another if required.

Performance-managed suppliers focus on ensuring delivery of the contracted goods and services to the required cost and service levels, rather than on building a collaborative long-term relationship.

Relationship-managed suppliers have some strategic value, so elements of SRM need to be applied here.

Strategic suppliers are typically either business critical suppliers, or high spend suppliers. Generally the most effort is expended on this category to drive a mutually beneficial collaborative relationship. This is an effective route to improving costs through the Value Add or Value Engineering (VA/VE) process. A close working relationship with strategic suppliers also leads to a greater understanding (and reduction) of the TCO of products or services. 
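The four-tier segmentation above can be sketched as simple decision logic. This is an illustrative sketch only: the spend and risk thresholds are hypothetical assumptions, not values the post prescribes, and each organisation would tune them to its own spend profile.

```python
# Hypothetical sketch of the four-tier supplier segmentation described above.
# Thresholds are illustrative assumptions, not prescribed values.

def segment_supplier(annual_spend: float, business_risk: str,
                     is_business_critical: bool) -> str:
    """Classify a supplier into one of the four high-level SRM categories."""
    HIGH_SPEND = 1_000_000   # assumed 'high spend' threshold
    MANAGED_SPEND = 100_000  # assumed threshold for active performance management

    if is_business_critical or annual_spend >= HIGH_SPEND:
        return "Strategic"            # business critical or high spend
    if business_risk == "high":
        return "Relationship-managed" # some strategic value, needs SRM elements
    if annual_spend >= MANAGED_SPEND:
        return "Performance-managed"  # manage delivery to cost/service levels
    return "Transactional"            # low value, easily switched

print(segment_supplier(annual_spend=2_500_000, business_risk="low",
                       is_business_critical=False))  # Strategic
```

In practice the segmentation inputs would come from spend analysis and a risk register rather than being passed in by hand, but the decision order (criticality and spend first, then risk) reflects the priorities described above.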

Total Cost of Ownership

Understanding TCO is becoming increasingly important to procurement. Legislation concerning the environment is affecting the way we do business either through EU directives such as the Waste Electrical and Electronic Equipment (WEEE) Regulations or through corporate social responsibility programmes that drive different behaviours from the business. It is important to factor in not just the acquisition costs but also the cost of doing business with the supply base and any return flows or on-cost from recycling. 

Sustainability

The fourth element of strategic sourcing also provides part of the rationale for driving it within the organisation. Sustaining the supply of goods and services while de-risking the supply chain and balancing total costs is ultimately the responsibility of procurement.

A coherent approach

Tying all activities together into a coherent plan will transform the business, as only the procurement team can do. Internal ‘silos’ are built as a company grows. Although each silo represents the company’s acquisition of knowledge and improves the ability to deliver value to the customer, they can also create inefficiencies in the business, leading to organisational inertia. This can slow the pace of change and reduce the capability for innovation. Creating a plan balanced across the four areas ensures you will engage with the business and supply base.

When creating a communications plan, consider each of the four areas and how they might affect the stakeholder. Simple, bite-sized statements work well for those in more senior levels of the organisation. However, greater detail will be needed for others, especially where they perceive they might have to change what they do. Build in the wider plan, so each stakeholder can see all issues and organisational levels have been considered.

Develop your plan and highlight the best solutions for each area of the business. Consider using a SWOT analysis to develop the ideal outcomes.

 

Risks to Avoid or Manage before they become issues on Business Systems Implementations (SAP Specific)

November 28, 2013

Specifically, the SAP system has been implemented successfully at more than 50,000 customers globally. The majority of project failures are not related to the product or software but are tied to the project execution, the software implementation partner or the people themselves.

This brief will help you maximize your chances of a successful SAP implementation and you are more than welcome to discuss with me any specific questions you may have about managing risks on your SAP project.

Risks and issues are part of every major business transformation project. On large transformation projects involving the major ERPs (SAP and Oracle, but not limited to them), these risks and issues can be huge and can destroy the entire project if not managed and mitigated in a timely manner. Most of the risks and issues discussed here are applicable to any business system implementation project. However, this brief is based on my several years of experience in leading, overseeing and participating in large ERP projects, including some that have not gone entirely to plan.

Most Common Risks on SAP Projects
The risks listed below are the ones that occur most frequently on a challenged or failed SAP project. There will be no reference to any specific project, but once you read through the risks you will probably be able to guess who and where the project may have been. The ERP project world has become such a small place that everybody knows everybody's business.

Risk #1: SAP System not producing correct output or not working properly during UAT or post go-live
During UAT or post go-live, your organization realizes that the SAP system works correctly for certain business scenarios but produces inaccurate results for scenarios with slight deviations. You may also see unexpected system behaviour, such as an inability to execute an end-to-end business process, or system short dumps. This risk is common on SAP projects where business requirements have not been captured in detail, where system design during the realization phase was of poor quality, or where the business does not really understand its own processes. It may also indicate inadequate testing of your business scenarios.

Risk #1 Mitigation
 Ensure that business requirements gathered during the blueprint phase are at a sufficient level of detail to clearly describe your business process, with examples. Verify that all business requirements are reviewed by the subject matter experts (SME) and approved by the business lead and/or business process owner.
 Ensure end-to-end business processes are clearly documented in BPRD documents and process flows, covering the most common business scenario as well as all the process variants. Verify that each BPRD document is reviewed and accepted by the SME(s).
 Validate that a proper SAP software fit-gap analysis is performed on each and every business requirement. Each business requirement classified as a “fit” should have corresponding standard SAP functionality covering it, and each “gap” should have a RICEFW object to be developed to address the gap in standard SAP functionality. This means you have 100% coverage of business requirements with either a standard SAP solution or a RICEFW object.
 Integration and User Acceptance Testing (UAT) should be very thorough and cover all end-to-end business processes. Each test case should be driven by business requirements and business process; in other words, each business requirement should be tested with at least one test case. Most projects have test cases that cover only the most common business scenarios, which can lead to system malfunction when there are variations in business process inputs or process steps. It is very important that the test cases cover the most common business scenarios and all the variations that represent your day-to-day business operations.
 Every SAP project should have a high-quality requirements traceability matrix (RTM) that ensures your RICEFW functional and technical designs trace back to all business requirements associated with a specific software gap. The RTM will also assure the business that each business requirement has a standard SAP solution or a RICEFW object, and further on a test case to test each of these requirements.
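The traceability idea in the mitigation steps above can be made concrete with a minimal in-memory RTM. This is a hypothetical sketch: the field names, requirement IDs and RICEFW object names are invented for illustration, and a real RTM would live in a tool or spreadsheet, but the coverage check is the same: every requirement must map to a solution (standard SAP or a RICEFW object) and at least one test case.

```python
# Minimal sketch of a requirements traceability matrix (RTM).
# IDs and field names are hypothetical; the check mirrors the rule above:
# 100% of requirements need a solution mapping and at least one test case.

rtm = [
    {"req_id": "REQ-001", "classification": "fit",
     "solution": "Standard SD pricing", "test_cases": ["TC-101"]},
    {"req_id": "REQ-002", "classification": "gap",
     "solution": "RICEFW-E-014 (enhancement)", "test_cases": ["TC-205", "TC-206"]},
    {"req_id": "REQ-003", "classification": "gap",
     "solution": None, "test_cases": []},  # uncovered: no object or test yet
]

def coverage_gaps(rtm):
    """Return requirement IDs lacking a solution mapping or a test case."""
    return [r["req_id"] for r in rtm
            if not r["solution"] or not r["test_cases"]]

print(coverage_gaps(rtm))  # ['REQ-003']
```

Running a check like this at the end of blueprint, and again before UAT, gives leadership an objective answer to "do we have 100% coverage?" rather than relying on the SI's assertion.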

Risk #2: Project experiencing frequent delays in deliverable completions and slippage of deadlines by the Systems Integrator (SI)
Is your project experiencing severe delays, with deliverables taking longer than expected? Are project deliverables submitted as complete that are not truly finished and lack detail? One common risk on SAP implementations is that your Systems Integrator (SI) may take longer than anticipated to complete project activities and deliverables, thereby missing your project deadlines. If this happens, it can delay your project phase completion and result in cost overruns. I have noticed that this risk mostly occurs when the project work effort is incorrectly estimated, or when the SAP skilled resources from your systems integrator are inadequate and do not possess the required solution expertise or experience. From a programme governance perspective, this risk can be accurately monitored by having a good project plan with a well-defined work breakdown structure that provides visibility into the key activities associated with the production of essential project deliverables.

Risk #2 Mitigation
 First and foremost, I recommend that your SAP project leadership and PMO have clear visibility of the progress of every key activity and deliverable on the project. In order to achieve this, it is important to have a project plan with a good work breakdown structure. This project plan should also be adequately resource-levelled to ensure that SAP skilled resources, SMEs and project architects are not over-allocated.

Example: Blueprint phase work breakdown structure for a sample business process ABC may look like the following:
ABC Business Process

a). Requirement Gathering work sessions (AS-IS including a SWOT Analysis)
b). To-Be level 2 and level 3 business process design and Workshops
c). ABC BPRD (Business Process Requirements & Design Document), sign off to commence
d). Fit Gap Analysis
e). High level SAP Solution
f). BPRD and Requirement Review and Approval by the Business Stakeholders

PMO and programme managers should review the weekly progress and identify any work streams that are facing delays and likely to become a bottleneck to overall project progress. Attempt to resolve the delays by increasing the participation of SMEs or project architects. It may also be helpful to de-prioritize any outstanding unresolved items that are not critical to business operations.
 Projects may also have SAP skilled consultants who lack the required experience with the specific SAP module being implemented. SAP customer leadership is often unable to identify these kinds of issues because the leadership assumes that your SAP systems integrator is bringing the best SAP consultants to the project. To mitigate this risk, it is extremely important that your project leadership, with the help of your SAP project advisor (a third-party SAP project leadership expert), interviews all systems integrator resources, especially the business leads, solution architects, SAP consultants and team leads provided by the SI.
 Most often, delays in SAP implementations are caused by under-allocation of SAP skilled resources on the project. Ensure that your project has well-balanced teams of SMEs (subject matter experts) from your business and SAP solution experts. Keep SI resources without prior SAP implementation experience, such as generalist business or systems integration analysts, to a minimum. You are better off investing this money in additional SAP skilled resources on project work streams to produce deliverables quicker.

Risk #3: Ineffective use of standard delivered SAP functionality due to lack of knowledge within consulting organizations

Predominantly, all SAP modules and industry solutions are proven to meet 60-90% of business requirements within a specific industry. This percentage of standard SAP package fit is higher with products that have gone through multiple release cycles compared to those that have just launched. There have been projects where systems integrators lack in-depth expertise in the modules being implemented. This results in a poor quality SAP software fit-gap analysis during blueprint, and in turn a higher number of RICEFW objects, especially enhancements. A few large enhancements can easily transform into custom development projects, blowing the project budget through the roof. I strongly recommend having a project solution architect with in-depth module knowledge, or a senior expert consultant from SAP or your local division of SAP, to assure that your project is leveraging the maximum standard delivered functionality.

Risk #3 Mitigation
The best way to mitigate this risk is to have representation of at least one senior consultant with product expertise from SAP. As a Project Manager I usually recommend this on most projects that are implementing a new industry solution of SAP. If the project does not allow extra budget for this resource, then one thing every SAP customer should do is to review the fit gap analysis output with SAP and solicit their feedback. This will help your project eliminate RICEFW objects where SAP might have standard alternative solutions.

Risk #4: Lack of business subject matter experts causing project delays
Business team members from the company implementing SAP play a very crucial role on the project. Each major business process or operational area should be represented by at least one subject matter expert who understands how the end-to-end business process is handled today and how it needs to work in the future. Inadequate business SME coverage can directly impact the quality and progress of requirements gathering, the review and approval of to-be business process designs, and the verification of project deliverables. It is very important to ensure that your business provides the required number of SMEs without jeopardizing your current daily production operations.

Risk #4 Mitigation
 In the project planning or blueprint phase, meet with business stakeholders to ensure that each business process or operational area is represented by an experienced SME. If the required number of resources is not available, then it may be wise to split the project into multiple releases.
 Do not supplement your business SME needs with business analysts from your SAP systems integrator. Only your SMEs and business process owners understand business requirements. It may not be effective for an external consultant to fulfil this role without understanding your internal business operations.

Risk #5: Lack of confidence of business team in understanding and acceptance of blueprint and overall solution in SAP system
To me this is one risk that every executive and project sponsor of an SAP project should pay close attention to. The ultimate goal of any SAP implementation is to transform current business operations into the new SAP system. It is very important that business subject matter experts, analysts and process owners understand the future-state business requirements, the new to-be business process flows, the solution design in SAP and the functional documents. If the business team is not on board with the requirements gathered during blueprint and the solution design in SAP, then your project is running a very high risk of business operations not working as anticipated upon go-live. I recommend that the SAP project leadership, especially the executive project sponsor and the overall project business lead, verify that the business requirements and blueprint documents (process flows, BPRD documents, solution design and solution architecture) are reviewed and approved by SMEs, process owners and the business lead. This will ensure a high quality blueprint and its realization in the design & build phase. Ultimately, I also suggest that test cases cover all of your business processes and their variations.

Risk #5 Mitigation
 Key business team members especially SMEs should be part of business requirements gathering work sessions. SME should have clear understanding of to-be detailed business requirements, business process design and solution design in SAP.
 SMEs should review and approve each business requirement associated with their business process.
 SMEs and business process owners should review and approve the process flows, the business process requirements and design document (BPRD) and the high-level solution design created during the blueprint phase.
 Client Solution Architect (not from your SI) should review, validate and approve the SAP software fit gap analysis and overall solution architecture. This task can be accomplished together by Client Solution Architect and your leadership QA advisor if you have one on the project.
 DO NOT approve any blueprint document or deliverable which is not 100% complete. Do not let your systems integrator invent the definition of “Complete” in order to meet project deadlines. Complete means the document is fully finished and needs no re-work unless there is a change request.
 Business team and stakeholders should have a full buy-in of the solution that is being designed and developed. All the steps above will ensure that you are heading towards a successful path rather than a mysterious avenue of uncertainties during realization and go-live.

Risk #6: Ineffective, rigid and political project leadership
On a very large undertaking like an SAP implementation, the project leadership plays a crucial role in the success of your project. It is not uncommon to see corporate executives (at the level of vice president or senior director) in the project leadership who are slow decision makers, who enforce cumbersome decision-making processes when they are not needed, and who create an unnecessary political environment, thereby causing bottlenecks and impeding project progress. I treat an SAP transformation initiative as a fast moving train: every project leader should adjust to and cope with the pace of this train rather than slowing it. What I mean precisely is that a lot of decisions on an SAP project need to be made very quickly, and issues should be resolved in an expedited manner. This helps tasks and deliverables on the critical path to be completed in a timely manner without affecting dependent activities. I use this train example very often and suggest that every leader on an SAP project should be flexible, adaptive and work collaboratively with project leadership to meet the one common goal of the project: a successful on-time and on-budget go-live. Key safeguards include individual decision-making authority, leadership backups to expedite decision making, escalation to leadership and the steering committee when bottlenecks cause project delays, and engaging an independent leadership QA advisor to monitor, resolve and escalate these issues without any influence from the project or corporate environment.

Risk #6 Mitigation
 Select project leadership (executive sponsor, business lead, IT lead, change/training lead and project management lead) from your internal organization that have proven track record of successful IT transformation implementation. Make sure these executives are flexible, capable of handling complex project challenges and able to make decision without causing bottlenecks on the project.
 One ineffective and political leader can bring the whole project down. Make sure that you have leaders that are excellent team workers and not the ones that are eager to demonstrate power and authority.
 Decision making on an SAP project (whether a business decision, a deliverable approval or an issue resolution) should not be solely in the hands of one leader. Depending on the workload, a project leader may have a backlog of several key project decisions that need to be made, which can ultimately become a show stopper for the whole project. Project leadership decision makers should have backup individuals who can evaluate situations and make decisions in scenarios where the primary decision maker is not available or lacks bandwidth.
 Critical project risks and issues should be proactively escalated to the project sponsor and the steering committee. Steering committee should also be presented with analysis, alternatives and possible solutions to risks and issues that are escalated.
 Communication is the key between the project leadership and it is important that there is full transparency about project key decisions, risks, issues and status between these leaders. On large SAP projects, I recommend that project leadership should meet at least once every week.
 Project leadership should keep an “Anonymous Project Feedback & Suggestion Box” for project team members to provide project feedback, express concerns, raise potential problems and suggest avenues of improvement. This ensures that major unforeseen project issues and challenges are reviewed and addressed in a timely manner, and gives every project team member, irrespective of role and title, the opportunity to voice their concerns or suggest improvements on the project.

Risk #7: Offshoring SAP design and build effort: is it cost saving or risk doubling? Tighter control is needed on resource skills, work quality and on-time delivery capability
Offshore development centres of the big five and other SAP systems integrators have proven to be a cost-effective option to lower the overall cost of an SAP implementation. The same SAP skilled resources that cost between $200-300 per hour onsite in the US can be available offshore, in countries such as India or the Philippines, or in parts of Europe, for $30-70 per hour. This is a no-brainer cost saving initiative for any SAP project. But offshoring your SAP project work comes with its own set of risks and challenges, which clients in the US are often not aware of, or which stay hidden due to a lack of visibility thousands of miles away. Some of the major risks with offshore development centres are the following:
 Under-qualified resources in design and build teams
 Major discrepancies between actual deliverable progress versus the one reported in weekly project leadership meetings
 Lack of key senior SAP functional and technical members on the offshore team, which leads to critical solution quality risk
 Language and cultural barriers leading to project work ethics being compromised, which can cause major project delivery issues to go unreported and unescalated until the very last minute
 Incorrect project progress reporting by offshore leadership to alleviate concerns and anxiety of project leadership.
 “Lost in translation”: business requirements are often missed or misinterpreted, as are functional designs and so on
SAP customers such as your company do not have visibility into, or leadership control of, what happens in the offshore development centre for your project. We have seen this as a huge risk on many SAP implementations. Recently our company launched a new practice of “Offshore SAP QA & Advisory Leadership”, which allows one of our experienced SAP senior executives to be your exclusive independent QA representative, working onsite at the project's offshore delivery centre. This is not the focus here, so if you need more information you can reach our company.

Risk #7 Mitigation
The best way to make sure your project offshore team is transparent, effective and well qualified to deliver your project on time is to engage a third-party SAP project QA advisor (one of our offerings): a senior SAP industry leader who serves as your exclusive representative at the offshore location, working closely with the offshore leadership and the entire offshore project team, and collaborating with your internal US project leadership. Remember that this resource does not work for your SAP systems integrator but for the SAP customer leadership. An Offshore SAP QA Project Advisor will mitigate all the risks mentioned above by doing the following:
 Interview and select each offshore project leader and team member by conducting project management, SAP functional and technical interviews.
 Ensure balanced SAP design and build teams with adequate number of architects, senior designers, developers with some room for junior SAP resources.
 Review project and deliverables progress with the offshore SAP project manager, and conduct independent verification of these project deliverables.
 Ensure end to end SAP solution integration with collaboration between project teams across various work streams.
 Independently report project progress to project sponsor and leadership. Also proactively report offshore project risks, issues and recommend areas of improvements to deliver high quality SAP solution.
 Ensure that level of communication between business SME and onsite team members is appropriate so that business and SAP functional requirements are clearly understood.

Risk #8: Inaccurate or incomplete work estimation on SAP projects resulting in cost overruns and schedule delays
Several projects fail or end up being prolonged due to cost overruns as a result of work effort being inaccurately or incompletely estimated. SAP projects have been no exception. Work estimations should be done at various points on an SAP implementation: blueprint work estimation should be done during the project planning phase, and once the SAP software fit-gap is complete and the RICEFW inventory is finalized, the work effort should be estimated for the design, build, testing and deployment of your SAP system.

So where do these estimation risks surface? Often, project estimators from the systems integrator do not include the client employees (SMEs, analysts, etc.) who need to be consulted or who help complete the deliverable. Estimates for producing a deliverable should include the time required from SI as well as client resources. The work effort for SAP solution related items should be tied directly to a RICEFW object or SAP configuration object. The effort to remediate legacy or external systems so that they integrate with SAP should be added to the above estimates. Work effort associated with mandatory SAP work streams such as hardware setup, security, systems administration (SAP BASIS) and network administration is also often missed.

Note: re-estimation should be done in the early realization phase if you find that RICEFW objects are taking longer than expected due to project cultural or operational barriers. This should ideally be resolved by project leadership; if not addressed, it can delay the entire project. There may also be some RICEFW objects, especially a select few enhancements, that are super complex; these should be estimated separately, because they may take much longer to design and develop.

Risk #8 Mitigation
 Make sure each RICEFW, configuration and other work object is classified as “High”, “Medium” or “Low”, and that the work effort includes design, build and testing effort from the systems integrator as well as client resources.
 It may be a good idea to build parallel prototypes of 2-3 RICEFW objects to demonstrate to the project leadership that RICEFW development can realistically be delivered as per the estimates.
 Verify that project estimates include hardware setup, network administration, security and SAP systems administration effort.
 Estimates should also include duration based work effort components such as PMO, OCM/Training and testing.
 It is very important that “super complex” enhancements are estimated separately and not by the SI estimation tool. This will allow for accurate reflection of work effort for completion of these complex enhancements on the project plan.
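The estimation approach in the bullets above can be illustrated with a small roll-up. This is a hedged sketch: the hours per complexity band and the sample object names are invented assumptions for illustration, and the point is the structure, namely standard bands for ordinary objects, with super-complex enhancements estimated individually outside the bands.

```python
# Illustrative roll-up of RICEFW work-effort estimates. The complexity-to-hours
# figures and object names are assumptions for this sketch, not real benchmarks.
# Super-complex enhancements carry their own manual estimate, per the advice above.

HOURS = {"Low": 40, "Medium": 80, "High": 160}  # design + build + test, SI and client

ricefw_inventory = [
    # (object, complexity band, manual estimate for "super" objects)
    ("R-001 Report: open orders",        "Low",   None),
    ("I-002 Interface: legacy billing",  "High",  None),
    ("E-003 Enhancement: pricing engine", "super", 600),  # estimated separately
]

def total_effort(inventory):
    """Sum banded estimates, using the manual figure for super-complex objects."""
    total = 0
    for name, complexity, manual_hours in inventory:
        total += manual_hours if complexity == "super" else HOURS[complexity]
    return total

print(total_effort(ricefw_inventory))  # 800
```

A roll-up like this also makes the omissions discussed earlier visible: if hardware setup, security, BASIS and network administration streams are not in the inventory, their effort is simply absent from the total.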

Risk #9: Choosing an incorrect Systems Integrator with limited track record of successful SAP systems delivery in “specific SAP industry solution” can lead to project failure on multiple fronts
This is one risk that can be avoided if you follow the principles on which I operate on any SAP project. During early blueprint and thereafter, your project leadership may realize that you have not chosen the best SAP systems integrator, for a variety of reasons. These may include poor quality resources that lack proper SAP knowledge, project delays, poor project execution and so on. It can be very painful and cost-prohibitive to change your SAP systems integrator at the end of a phase, and more so in the middle of one. As such, it is very important to carefully evaluate, verify and strategically engage an SAP systems integrator for your project during the pre-planning phase.

Risk #9 Mitigation
 Verify that SAP systems integrators (vendors) bidding for your SAP implementation have implemented specific SAP solutions at two or more customers in your specific industry.
 Conduct reference calls with these customers. Check how these vendors have performed on these other SAP projects. Was the delivery in line with original project budget and timeline?
 Ensure that SI or vendor partner and senior executives that will be part of your project also have been part of at least one of these prior SAP implementations. It is important that senior executives and client partners have successful track record in delivering SAP transformation projects in your industry.
 Include financial and corrective penalties in the Statement of Work (SOW) in case the project milestones or Q-gates are delayed.
 It is absolutely crucial to include clear and detailed scope of work in the SOW. SOW should not have any ambiguities that can compromise the successful delivery of project.
 Engage an independent SAP project advisor right from the beginning of the project if your budget allows.

Risk #10: Inefficient Project Management Office (PMO) with poor project visibility, deliverable tracking, issues/risks management and communication shortfalls

This is one area where I have hardly compromised when setting my expectations of the PMO on the SAP projects I have served. The PMO is the backbone of any IT transformation project, and most of what is said here about the PMO applies to SAP as well as non-SAP projects. The PMO should serve as the single source of truth, able to present an accurate project status at any point in time. It should provide full visibility of project status by presenting a clear picture of the progress of work activities, tasks and deliverables. I expect the SAP project manager and PMO team to work with the individual business, IT and other teams and their underlying work streams to gather correct work progress and reflect it in the project management tool, such as MS Project. A highly effective PMO deploys, monitors and enforces the proper usage of tools and methods and properly manages the time spent by project resources to deliver tasks as per plan. The PMO should ascertain that all project risks and issues are entered into the risks and issues management tools, and ensure resolution of these items in a timely manner as set out in the project charter.

Risk #10 Mitigation
 Verify that the PMO is working from a sound project plan, with a well-defined work breakdown structure that depicts the accurate progress of tasks and deliverables.
 Every week the PMO team should work with the team leads to update the project plan. Any delay in the completion of tasks and deliverables should be reflected in the team leads' weekly reports and highlighted in the weekly PMO meeting.
 The SAP Project Manager (also referred to as the PMO Lead) should work with the Programme Manager or Project Director, the project sponsor and the independent project advisors to discuss project progress, and seek recommendations to bring the project back on track in case of delays.
 In the blueprint phase, the PMO must ensure that all business requirements, BPRD documents, the SAP solution design, the SAP solution architecture, the organization change management strategy, etc. are reviewed and approved by the business or IT leads and other corporate stakeholders.
 In the realization phase, the PMO must ensure that each RICEFW functional and technical design, the unit tests and UAT are approved by the customer's business and IT teams.
 No deliverable should be marked "complete" in the project plan unless it has been reviewed and fully accepted by the business leads.
 Project capital and expenses should be accurately tracked as per the guidelines from leadership and the CFO (Chief Financial Officer). Total project costs incurred should be reported weekly in the project leadership meeting.
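The "no deliverable is complete until the business accepts it" rule lends itself to a simple guard in whatever tracking tool the PMO uses. The sketch below is purely illustrative (the Deliverable class, its fields and the example names are my own assumptions, not part of MS Project or any PMO product); it shows how a status field can be made impossible to set without recorded sign-off.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Deliverable:
    """A tracked project deliverable (e.g. a BPRD document or a RICEFW design)."""
    name: str
    owner: str
    due: date
    accepted_by: list[str] = field(default_factory=list)  # leads who signed off
    status: str = "in progress"

    def accept(self, lead: str) -> None:
        """Record a business or IT lead's acceptance."""
        if lead not in self.accepted_by:
            self.accepted_by.append(lead)

    def mark_complete(self) -> None:
        """Refuse to set 'complete' unless at least one lead has accepted."""
        if not self.accepted_by:
            raise ValueError(f"'{self.name}' cannot be complete without acceptance")
        self.status = "complete"

# The PMO cannot close a deliverable that lacks sign-off:
d = Deliverable("Order-to-Cash BPRD", owner="Finance workstream", due=date(2016, 6, 30))
try:
    d.mark_complete()          # rejected: no acceptance recorded yet
except ValueError:
    d.accept("Business Lead - Finance")
    d.mark_complete()          # now allowed
print(d.status)                # complete
```

The same idea applies equally to risk and issue items: make the "closed" state unreachable in the tool until the resolution evidence is attached.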

The Sardine Strategy 2013 for 2014

November 19, 2013

sardine-schooling-edpdiver

Once again I have resurrected an older post which I believe is just as relevant this year as it was towards the end of last year. What disappoints me a little is that there are still people who have not heard what is being said! Before you embark upon any business improvement in 2014, make sure that you know where you want to go and understand the implications of your actions. Without moving as one, you will surely fail. Don't lose direction!

"What is the latest buzz in the supply chain world?" I was asked at a recent SAP conference. I replied that there is plenty of buzz in the SCM world, but all this hype needs to be underpinned by a good, solid supply chain strategy.

"Have you heard of the Sardine Strategy?" The questioner looked perplexed at my question! I will elaborate: for schooling fish, staying together is a way of life. Fish in a school move together as one; for schooling fish the "move as one" trait is innate. Separation means likely death!

For a global supply chain, misalignment (failure to move as one) means poor service, high inventory, unexpected cost, and constrained growth and profits, finally resulting in loss of market share and possibly reputation. Once market share and reputation have been damaged, they are difficult to repair.

So what are the common causes of misalignment, of supply chains failing to "move as one"?

I offer a list of 15 common causes that have plagued companies for many years and still do today. I am sure the list is not exhaustive, but I am doubly sure that readers will recognise one or more of these in businesses today.
1) Lack of a technology investment plan.
2) Little or no return on investment (ROI).
3) Isolated supply chain strategies.
4) Competing supply chain business improvement projects.
5) Faulty sales and operations planning.
6) Failure to meet financial commitments.
7) Lack of support and specialized expertise.
8) Mismatch between corporate culture and ERP.
9) Under-utilization of existing technology.
10) Vaguely defined goals.
11) Impact of mergers and acquisitions.
12) Mismanagement and poor standardization of business processes.
13) Extension from the supply chain to the value chain.
14) Running out of ideas for new improvement projects.
15) An organisation that defies an effective and efficient supply chain.

We could discuss each of these in great depth, but space and time are limited; however, if you want to discuss any of the causes you have identified, just drop me a line.

Where does ERP failure really come from?

November 18, 2013

Here is a question that is asked over and over again: where does ERP failure really come from? Ultimately, most problems can be summed up in one word: people.

Most projects begin life with the hopeful enthusiasm of anticipated triumph and success. However, success requires planning for details that do not become relevant until much later in the project; training is a perfect example.

In a number of cases, companies have blamed their losses, at least in part, on employees not understanding how to use a newly installed SAP ERP system which, they said, worked just fine.

So, why is this? Well, basically, at the beginning of any big, shiny new project, what you have is a great deal of excitement over the benefits it will bestow upon your organization: streamlined processes, bottom-line savings, top-line growth, more efficiency, reduced waste, better customer service, etc. The problem arises when this leads to a "hurry-up-and-get-things-done" approach.

Too much of the time, companies are overly aggressive when they set their initial timelines. They see the statistics showing that many projects go over budget or take longer than expected, so they end up wanting a very aggressive project plan just so they can manage the time and budget.

Maybe your company has a history of talking the talk, but walking the walk? Different story! "This time it's going to be different!" exclaims the CEO, slapping the table for emphasis, and everyone's on board. So you go charging ahead, selecting a vendor, shunning due diligence, and failing to define clear business requirements and goals. Perhaps you even know you don't understand your business processes as well as you need to, but, "well now, ERPs are designed to fix all that, correct?"

So how do you combat this? It's simple: don't do it! Rushing an ERP project to save time and money will cost you more in the end than you will ever save up front.

Done properly ERP can and will transform your business by automating and re-engineering its beating heart: its business processes. It is, therefore, in your best interest to take the time to understand how your business actually runs.

It is absolutely critical to understand the level of resource commitment the project will take.
So the objective is to understand what you do as a business, understand what your systems currently support or don't support, and then have the vendors or integrators show you how their system can best support what you've brought to the table.

One of the biggest elements of any implementation, and one where executives often fail, is underestimating the time it will take to get the project done. Things like time-to-value, change management, adoption and employee training are all down-played in favour of the perceived benefits the software will bring.

Here are a few suggestions of do’s and don’ts that will help:

Develop your own benchmarks; don't rely on the vendor's. Vendors can supply you with templates and best practices that can take you a good part of the way, but you still need to define what constitutes success and failure, progress and set-backs, deadlines and must-haves, your "as-is" state versus your ideal "to-be" state.

Don’t rely on your vendor or SI to handle change management. It’s your company, so it’s your culture that has to change, not theirs. Change management is up to you, and the difficulty or ease of this process is directly affected by the expectations set early on. If you think it’s going to be easy and it’s not, then you are in for some sleepless nights.

Define your business processes up front. Don’t let a vendor’s software define them for you. Most companies have no real idea how their business processes work in practice until someone resigns. Suddenly all of that problem-solving and expediting wisdom is gone. No software can replicate that knowledge. Find that someone! Talk to that someone, preferably at the beginning of the process, not when you’re trying to get that someone back.

All of this will be more time consuming up front; however, it will save a lot of heartache and money at the back end, when you would otherwise find yourself fixing what should never have been broken in the first place.

It’s my belief that the essential ingredients start with a strategy and a direction. There has to be an earmarked plan, and with that plan comes budgeting of the time, resources and dollars to invest in it, because a lot of corporations take a real penny-pinching approach and, as a result, it ends up costing them many times more than they ever anticipated.

Ten Tips on how to sell Compliance in the Organisation

October 28, 2013

It probably seems to you like every time you want to talk about compliance, everyone runs away and hides, ignores you and hopes you go away, or fusses and moans. Compliance, however, is a fact of business life. Your company must comply with:

Your customers’ requirements (quality, safety, performance specifications, quantity, price, prompt delivery, etc.);
Industry or other standards and guidelines (ISO 9001, IFRS, etc.); and/or
Regulations (e.g., the 8th EU Directive, the Food Safety Modernization Act)

in order to get or to keep business. Therein lies the problem: compliance is like healthy eating or exercise. We know we have to, but, well, it’s so hard to either make the time or get enthusiastic about it! Why is it that “have to” and “want to” always seem to be inversely proportional to one another?

How do you sell yourself and your employees on the notion that compliance is something you want, not something you merely put up with? How do you turn “got to” into “want to”?

First, you have to…

Sell yourself on the idea. You’ll find in life (if you haven’t already) that if you don’t have a deep and firmly held belief in your company, your product, or your people, you won’t sell your product or your service. If you lack enthusiasm, conviction, self-discipline, vision, perspective, and some of the other characteristics that define leadership, you won’t have many followers.

Your customers are your ultimate critics. If you don’t meet their requirements, you’re out of business, and it won’t matter what other requirements you meet if you fail to meet your customers’. Have your priorities in order: listen to your customers first.

Include your staff in the development of Policies and Procedures that will ensure your company’s compliance, because: (a) you can’t do it all by yourself; (b) they know more of the day-to-day tasks, operations, and processes than you; and (c) you need to show that you value and trust their judgement if they’re to grow (i.e., micromanagers never win).

Give everyone in your firm the resources they need to do their jobs effectively.

Ensure that your employees are more than adequately trained and experienced.

Make sure they know what they’re doing and more importantly why they’re doing it.

Keep the lines of communication open all the time. Communicate effectively and continually with all levels of your organization.

Get out of your office! Regularly address your employees first hand, directly and openly.

Listen, and then turn what you’re hearing into something your employees, and your customers, want to act upon.

Make a habit of meeting with suppliers, subcontractors, and everyone who has a hand in getting your product or service into the hands of your customers. You might not be able to do this often but you shouldn’t let a year go by without visiting with your valuable partners. Communication is key!

Look at failures as opportunities for improvement. Don’t go looking for the guilty party every time something doesn’t go according to plan! You want to keep failure to a minimum, yes, but keep things in perspective. Not every mistake requires Draconian countermeasures!

Share success. Compliance goes beyond merely observing standards or laws, compliance can help you win business! When it does, spread the wealth. Acknowledge the part everyone played in making your company a success, especially those who had a direct hand in your victory.

Sell yourself, then sell everyone else on the importance and value of compliance.

Make them want it! Your customers do.