Posts Tagged ‘Processes’

Process Mining, Bridging the gap between BPM and BI

February 29, 2016

Later this year I will be involved in a MOOC entitled “Introduction to Process Mining with ProM”, from FutureLearn. Unfortunately it has just been delayed from April until July, but being interested in BPM and BI, I thought that I would start my own research into the subject and publish my findings.

Prof. Dr. Ir. Wil van der Aalst of the Department of Mathematics and Computer Science (Information Systems, WSK&I), based at the Data Science Center Eindhoven in the Netherlands, is the founding father of “Process Mining”. You will find many quotes attributed to him in this post.


Today a tremendous amount of information about business processes is recorded by information systems in the form of “event logs”. Despite the omnipresence of such data, most organisations diagnose problems based on fiction rather than facts. Process mining is an emerging discipline based on process model-driven approaches and data mining. It not only allows organisations to fully benefit from the information stored in their systems, but it can also be used to check the conformance of processes, detect bottlenecks, and predict execution problems.

So let's see what it is all about.

Companies use information systems to enhance the processing of their business transactions. Enterprise resource planning (ERP) and workflow management systems (WFMS) are the predominant information system types that are used to support and automate the execution of business processes. Business processes like procurement, operations, logistics, sales and human resources can hardly be imagined without the integration of information systems that support and monitor the relevant activities in modern companies. The increasing integration of information systems not only provides the means to increase effectiveness and efficiency; it also opens up new possibilities for data access and analysis. When information systems are used for supporting and automating the processing of business transactions they generate data. This data can be used for improving business decisions.

The application of techniques and tools for generating information from digital data is called business intelligence (BI). Prominent BI approaches are online analytical processing (OLAP) and data mining (Kemper et al. 2010 pp. 1–5). OLAP tools allow analysing multidimensional data using operators like roll-up and drill-down, slice and dice, or split and merge (Kemper et al. 2010 pp. 99–106). Data mining is primarily used for discovering patterns in large data sets (Kemper et al. 2010 p. 113).

However, the availability of data is not only a blessing as a new source of information; it can also become a curse. The phenomena of information overflow (Krcmar 2010 pp. 54–57), data explosion (Van der Aalst 2011 pp. 1–3) and big data (Chen et al. 2012) illustrate several problems that arise from the availability of enormous amounts of data. Humans are only able to handle a certain amount of information in a given time frame. When more and more data is available, how can it actually be used in a meaningful manner without overstraining the human recipient?

Data mining is the analysis of data for finding relationships and patterns. The patterns are an abstraction of the analysed data. Abstraction reduces complexity and makes information available to the recipient. The aim of process mining is the extraction of information about business processes (Van der Aalst 2011 p. 1). Process mining encompasses “techniques, tools and methods to discover, monitor and improve real processes by extracting knowledge from event logs” (Van der Aalst et al. 2012 p. 15). The data that is generated during the execution of business processes in information systems is used for reconstructing process models. These models are useful for analysing and optimising processes. Process mining is an innovative approach and builds a bridge between data mining (BI) and business process management (BPM).

Process mining evolved from the analysis of software engineering processes by Cook and Wolf in the late 1990s (Cook and Wolf 1998). Agrawal and Gunopulos (Agrawal et al. 1998) and Herbst and Karagiannis (Herbst and Karagiannis 1998) introduced process mining to the context of workflow management. Major contributions to the field have been added during the last decade by van der Aalst and other research colleagues, who developed mature mining algorithms and addressed a variety of topic-related challenges (Van der Aalst 2011). This has led to a well-developed set of methods and tools that are available to scientists and practitioners.

Introduction to the Basic Concepts of Process Mining

The aim of process mining is the construction of process models based on available event log data. In the context of information systems science, a model is an immaterial representation of its real-world counterpart used for a specific purpose (Becker et al. 2012 pp. 1–3). Models can be used to reduce complexity by representing characteristics of interest and by omitting other characteristics. A process model is a graphical representation of a business process that describes the dependencies between activities that need to be executed collectively for realising a specific business objective. It consists of a set of activity models and constraints between them (Weske 2012 p. 7).

Process models can be represented in different process modelling languages. BPMN provides more intuitive semantics that are easier to understand for recipients who do not possess a theoretical background in informatics, so I am going to use BPMN models for the examples in this post.

Above is a business process model of a simple procurement process. It starts with the definition of requirements. The goods or service get ordered, and at some point in time the ordered goods or service get delivered. After the goods or service have been received, the supplier issues an invoice, which is finally settled by the company that ordered the goods or service.

Each of the events depicted in the process above will have an entry in an event log. An event log is basically a table. It contains all recorded events that relate to executed business activities. Each event is mapped to a case. A process model is an abstraction of the real-world execution of a business process. A single execution of a business process is called a process instance. Process instances are reflected in the event log as sets of events that are mapped to the same case. The sequence of recorded events in a case is called a trace. The model that describes the execution of a single process instance is called a process instance model. A process model abstracts from the individual behaviour of process instances and reflects the behaviour of all instances that belong to the same process. Cases and events are characterised by classifiers and attributes. Classifiers ensure the distinctness of cases and events by mapping unique names to each case and event. Attributes store additional information that can be used for analysis purposes.
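To make these concepts concrete, here is a minimal Python sketch of how an event log can be grouped into traces per case. The field names, activities and timestamps are purely illustrative, not a standard log format.

```python
from collections import defaultdict

# A hypothetical event log: each event records a case id, an activity
# name and a timestamp (the attribute names here are made up).
event_log = [
    {"case": 1, "activity": "Define requirements", "timestamp": "2016-01-04 09:00"},
    {"case": 1, "activity": "Order goods",         "timestamp": "2016-01-04 10:30"},
    {"case": 2, "activity": "Define requirements", "timestamp": "2016-01-05 08:15"},
    {"case": 1, "activity": "Receive goods",       "timestamp": "2016-01-06 14:00"},
    {"case": 2, "activity": "Order goods",         "timestamp": "2016-01-05 11:45"},
]

# Group events by case: the ordered sequence of activities in a case is its trace.
traces = defaultdict(list)
for event in sorted(event_log, key=lambda e: e["timestamp"]):
    traces[event["case"]].append(event["activity"])

for case, trace in sorted(traces.items()):
    print(case, trace)
```

The case identifier acts as the classifier here; the timestamp is an attribute used to order the events within each trace.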

The Mining Process

The process above provides an overview of the different process mining activities. Before being able to apply any process mining technique it is necessary to have access to the data. It needs to be extracted from the relevant information systems. This step is far from trivial. Depending on the type of source system, the relevant data can be distributed over different database tables. Data entries might need to be composed in a meaningful manner for the extraction. Another obstacle is the amount of data. Depending on the objective of the process mining, up to millions of data entries might need to be extracted, which requires efficient extraction methods. A further important aspect is confidentiality. Extracted data might include personalised information and, depending on legal requirements, anonymisation or pseudonymisation might be necessary.

Before the extracted event log can be used it needs to be filtered and loaded into the process mining software. There are different reasons why filtering is necessary. Information systems are not free of errors. Data may be recorded that does not reflect real activities. Errors can result from malfunctioning programs, but also from user disruption or hardware failures that lead to erroneous records in the event log.

Process Mining Algorithms

The main component in process mining is the mining algorithm. It determines how the process models are created. A broad variety of mining algorithms exists. The following three categories will be discussed, but not in great detail.

  • Deterministic mining algorithms
  • Heuristic mining algorithms
  • Genetic mining algorithms

Determinism means that an algorithm only produces defined and reproducible results: it always delivers the same result for the same input. A representative of this category is the α-Algorithm (Van der Aalst et al. 2002). It was one of the first algorithms able to deal with concurrency. It takes an event log as input and calculates the ordering relations of the events contained in the log.
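As a rough illustration (not the full algorithm), the first step of the α-Algorithm can be sketched in Python: derive the directly-follows relation from the traces, then infer causal and parallel relations from it. The traces below are hypothetical.

```python
# Two hypothetical traces over activities a, b, c, d.
traces = [
    ["a", "b", "c", "d"],
    ["a", "c", "b", "d"],
]

# Directly-follows relation: (x, y) is in it if y directly follows x in some trace.
directly_follows = set()
for trace in traces:
    for x, y in zip(trace, trace[1:]):
        directly_follows.add((x, y))

# Causality: x -> y if x is directly followed by y but never the reverse.
causal = {(x, y) for (x, y) in directly_follows if (y, x) not in directly_follows}
# Parallelism: x || y if each can directly follow the other.
parallel = {(x, y) for (x, y) in directly_follows if (y, x) in directly_follows}

print(sorted(causal))    # b and c each follow a and precede d
print(sorted(parallel))  # b and c appear in both orders, so they look concurrent
```

Because b and c occur in both orders across the traces, the algorithm treats them as concurrent, which is exactly the effect discussed for the mined procurement model below.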

Heuristic mining also uses deterministic algorithms, but they incorporate the frequencies of events and traces when reconstructing a process model. A common problem in process mining is the fact that real processes are highly complex, and their discovery leads to complex models. This complexity can be reduced by disregarding infrequent paths in the models.
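The core idea of disregarding infrequent paths can be sketched as follows; the traces and the threshold are made up for illustration.

```python
from collections import Counter

# 95 traces follow the main path a -> b -> d; 5 follow a rare variant a -> c -> d.
traces = [["a", "b", "d"]] * 95 + [["a", "c", "d"]] * 5

# Count how often each directly-follows pair occurs across all traces.
edge_counts = Counter()
for trace in traces:
    edge_counts.update(zip(trace, trace[1:]))

threshold = 10  # keep only paths observed at least this often
frequent_edges = {edge for edge, n in edge_counts.items() if n >= threshold}
print(sorted(frequent_edges))  # only the main path survives
```

The rare a → c → d variant falls below the threshold and is dropped, yielding a simpler model at the cost of ignoring exceptional behaviour.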

Genetic mining algorithms use an evolutionary approach that mimics the process of natural evolution. They are not deterministic. Genetic mining algorithms follow four steps: initialisation, selection, reproduction and termination. The idea behind these algorithms is to generate a random population of process models and to find a satisfactory solution by iteratively selecting individuals and reproducing them by crossover and mutation over different generations. The initial population of process models is generated randomly and might have little in common with the event log. However, due to the high number of models in the population and the repeated selection and reproduction, better-fitting models are created in each generation.
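Here is a toy sketch of the four steps, under a heavily simplified assumption: a "model" is just an ordering of activities, fitness is agreement with one observed trace, and reproduction uses mutation only. Real genetic miners evolve much richer model structures and also use crossover.

```python
import random

random.seed(42)  # not deterministic in general; seeded here for repeatability
observed = ["a", "b", "c", "d", "e"]  # a single hypothetical log trace

def fitness(model):
    # Count positions where the candidate agrees with the observed trace.
    return sum(1 for x, y in zip(model, observed) if x == y)

# 1. Initialisation: a random population of candidate "models" (orderings).
population = [random.sample(observed, len(observed)) for _ in range(30)]

for generation in range(200):  # 4. Termination: generation cap or perfect fit
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(observed):
        break
    # 2. Selection: keep the fitter half of the population.
    survivors = population[: len(population) // 2]
    # 3. Reproduction: mutate each survivor by swapping two activities.
    children = []
    for parent in survivors:
        child = parent[:]
        i, j = random.sample(range(len(child)), 2)
        child[i], child[j] = child[j], child[i]
        children.append(child)
    population = survivors + children

print(population[0], fitness(population[0]))
```

Because the fittest individuals always survive, the best fitness never decreases, which is the mechanism behind better-fitting models appearing in each generation.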

The process above shows a mined process model that was reconstructed by applying the α-Algorithm to an event log. It was translated into a BPMN model for better comparability. Obviously this model is not the same as the model in the first process diagram above. The reason for this is that the mined event log includes cases that deviate from the ideal linear process execution that was assumed for modelling in the first process depiction. In case 4 the invoice is received before the goods or service. Because both possibilities are included in the event log (goods or service received before the invoice in cases 1, 2, 3 and 5, and the invoice received before the ordered goods in case 4), the mining algorithm assumes that these activities can be carried out concurrently.

Process Discovery and Enhancement

A major area of application for process mining is the discovery of formerly unknown process models for the purpose of analysis or optimisation (Van der Aalst et al. 2012 p. 13). Business process reengineering and the implementation of ERP systems in organisations gained strong attention starting in the 1990s. Practitioners have since primarily focused on designing and implementing processes and getting them to work. With the maturing integration of information systems into the execution of business processes and the evolution of new technical possibilities, the focus shifts to analysis and optimisation.

Actual executions of business processes can now be described and made explicit. The discovered processes can be analysed for performance indicators like average processing time or costs, for improving or reengineering the process. The major advantage of process mining is the fact that it uses reliable data. The data that is generated in the source systems is generally hard for the average system user to manipulate. For traditional process modelling, the necessary information is primarily gathered through interviews, workshops or similar manual techniques that require the interaction of persons. This leaves room for interpretation and the tendency for ideal models to be created based on often overly optimistic assumptions.

Analysis and optimisation are not limited to post-runtime inspections. Process mining can also be used for operational support, by detecting traces under execution that do not follow the intended process model, and for predicting the behaviour of traces under execution. An example of runtime analysis is the prediction of the expected completion time by comparing the instance under execution with similar, already processed instances. Another feature can be the provision of recommendations to the user for selecting the next activities in the process. Process mining can also be used to derive information for the design of business processes before they are implemented.


Process mining builds a bridge between data mining (BI) and business process management (BPM). The increasing integration of information systems for supporting and automating the execution of business transactions provides the basis for novel types of data analysis. The data that is stored in the information systems can be used to mine and reconstruct business process models. These models are the foundation for a variety of application areas, including process analysis and optimisation or conformance and compliance checking. The basic constructs for process mining are event logs, process models and mining algorithms. I have summarised the essential concepts of process mining in this post, illustrating the main application areas and one of the available tools, namely ProM.

Process mining is still a young research discipline, and limitations concerning noise, adequate representation and competing quality criteria should be taken into account when using it. Although some areas, like the labelling of events, complexity reduction in mined models and phenomena like concept drift, still need to be addressed by further research, the available set of methods and tools provides a rich and innovative resource for effective and efficient business process management.

The Three “A’s” of Predictive Maintenance

February 25, 2016

Again today in the news is another Oil & Gas company posting a loss, a rig operator scrapping two rigs, and predictions of shortfalls in supply by 2020, plus major retrenchments of staff across the globe. With all of this going on, the signs are that we are going to have to sweat the assets and do more with less. How, then, are we going to do more with less?


This post is going to focus on the use of predictive analytics for the maintenance process, or PdM (predictive maintenance). Organisations are looking at their operations, and at how to reduce costs, more than ever before. They are experiencing increased consumer empowerment, global supply chains, ageing assets, raw material price volatility, increased compliance, and an ageing workforce. A huge opportunity for many organisations is a focus on their assets.

Organisations often lack both visibility into and predictability of their assets' health and performance, yet maximising asset productivity and ensuring that the associated processes are as efficient as possible are key aspects for organisations striving to achieve strong financial returns.

In order for your physical asset to be productive, it has to be up, running, and functioning properly. Maintenance is a necessary evil that directly affects the bottom line. If the asset fails or doesn’t work properly, it takes a lot of time, effort, and money to get it back up and running. If the asset is down, you can’t use it. For example, you can’t manufacture products, mine for minerals, drill for oil, refine lubricants, process gas, generate power etc, etc.

Maintenance has evolved with technology, organisational processes, and the times. Predictive maintenance (PdM) technology has become more popular and mainstream for organisations, but in many cases its adoption remains inconsistent.

There are many reasons for this, including the items below:

  • Availability of large amounts of data due to instrumented and connected assets (IoT)
  • Increased coupling of technology within businesses (MDM, ECM, SCADA)
  • Requirements to do more with less. For example, stretching the useful life of an asset (EOR)
  • Relative ease of use of garnering insights from raw data (SAP HANA)
  • Reduced cost of computing, network, and storage technology (Cloud Storage, SaaS, In Memory Computing)
  • Convergence of Information Technology with Operational technology (EAM, ECM)

PdM will assist organisations with key insights regarding asset failure and product quality, enabling them to optimise their assets, processes, and employees. Organisations are realising the value of PdM and how it can be a competitive advantage, given the economic climate and the pressure on everyone to do more with less.

Operations budgets are always the first to be cut, and it no longer makes sense to employ a wait-for-it-to-break mentality. Executives say that the biggest impact on operations is the failure of critical assets. In this post I am going to show how predictive analytics, or PdM, will assist organisations.

Predictive Maintenance Definition

We have all understood preventive maintenance, which was popular in the 20th century, but PdM is very much focused on the 21st. PdM is an approach based upon various types of information that allows maintenance, quality and operational decision makers to predict when an asset needs maintenance. There is a myth that PdM is focused purely on asset data; however, it is much more. It includes information from the surrounding environment in which the asset operates and the associated processes and resources that interact with the asset.

PdM leverages various analytical techniques to provide better visibility of the asset to decision makers, and analyses various types of data. It is important to understand the data that is being analysed. PdM is usually based upon the usage and wear characteristics of the asset, as well as other asset condition information. As we know, data comes in many different formats. The data can be at rest (data that is fixed and does not move over time) or streaming (data that is constantly on the move).

Types of Data

From my previous posts on the subject of Big Data you will know by now that there are basically two types of data; however, in the 21st century there is a third. The first is structured data, the second is unstructured data and the third is streaming data. The most common, of course, is structured data, which is collected from various systems and processes: CRM, ERP, industrial control systems such as SCADA, HR and financial systems, information and data warehouses, etc. All of these systems contain datasets in tables. Examples include inventory information, production information, financial information and, specifically, asset information such as name, location, history, usage, type, etc.

Unstructured data comes in the form of text data such as e-mails, maintenance and operator logs, social media data, and other free-form data that is available today in limitless quantities. Most organisations are still trying to fathom how to utilise this data. To accommodate it, a text analytics program must be in place to make the content useable.

Streaming data is information that needs to be collected and analysed in real time. It includes information from sensors, satellites, drones and programmable logic controllers (PLCs), which are digital computers used for the automation of electromechanical processes, such as the control of machinery on factory assembly lines, amusement rides, or light fixtures. Examples of streaming data include telematics, measurement, and weather information. This format is currently gaining the most traction as the need for quick decision making grows.

Why use PdM?

There are a number of major reasons to employ PdM, and there is growing recognition that the ability to predict asset failure has great long-term value to the organisation:

  • Optimise maintenance intervals
  • Minimise unplanned downtime
  • Uncover in depth root cause analysis of failures
  • Enhance equipment and process diagnostics capabilities
  • Determine optimum corrective action procedures

Many Industries Benefit from PdM

For PdM to be of benefit to organisations, there must be information about the assets as well as about their surroundings. Here are a couple of examples from my own recent history. However, any industry that has access to instrumented streaming data has the ability to deploy PdM.

Energy Provider

Keeping the lights on for an entire State in Australia is no small feat. Complex equipment, volatile demand, unpredictable weather, plus other factors can combine in unexpected ways to cause power outages. An energy provider used PdM to understand when and why outages occurred so it could take steps to prevent them. Streaming meter data helped the provider analyse enormous volumes of historical data to uncover usage patterns. PdM helped define the parameters of normal operation for any given time of day, day of the week, holiday, or season, and detected anomalies that signal a potential failure.

Historical patterns showed that multiple factors in combination increased the likelihood of an outage. When national events caused a spike in energy demand and certain turbines were nearing the end of their life cycle, there was a higher likelihood of an outage. This foresight helped the company take immediate action to avoid an imminent outage and schedule maintenance for long-term prevention. With PdM, this energy provider:

  • Reduced costs by up to 20 percent (based on similar previous cases) by avoiding the expensive process of reinitiating a power station after an outage
  • Predicted turbine failure 30 hours before occurrence, while previously only able to predict 30 minutes before failure
  • Saved approximately A$100,000 in combustion costs by preventing the malfunction of a turbine component
  • Increased the efficiency of maintenance schedules, costs and resources, resulting in fewer outages and higher customer satisfaction

Oil & Gas Exploration & Production Company

A large multinational company that explores and produces oil and gas conducts exploration in the Arctic Circle. Drilling locations are often remote, and landfall can be more than 100 miles away. Furthermore, the drilling season is short, typically between July and October.

The most considerable dangers that put people, platforms, and structures at risk are colliding with or being crushed by ice floes, which are flat expanses of moving ice that can measure up to six miles across. Should a particularly thick and large ice floe threaten a rig, companies typically have less than 72 hours to evacuate personnel and flush all pipelines to protect the environment. Although most rigs and structures are designed to withstand some ice-floe collisions, oil producers often deploy tugboats and icebreakers to manage the ice and protect their rigs and platform investments. This is easily warranted: a single oil rig costs $350 million and has a life cycle that can span decades. To better safeguard its oil rigs, personnel, and resources, the company had to track the courses of thousands of moving potential hazards. The company utilised PdM by analysing the direction, speed, and size of floes, using satellite imagery to detect, track, and forecast floe trajectories. In doing so, the company:

  • Saved roughly $300 million per season by reducing mobilisation costs associated with needing to drill a second well should the first well be damaged or evacuated
  • Saved $1 billion per production platform by easing design requirements, optimising rig placement, and improving ice management operations
  • Efficiently deployed icebreakers when and where they were needed most

Workforce Planning, Management & Logistics and PdM

The focus of predictive maintenance (PdM) is physical asset performance and failure and its associated processes. One key aspect that tends to be overlooked, but is critical to ensure PdM sustainability, is human resources. Every asset is managed, maintained, and run by an operator or employee. PdM enables organisations to ensure that they have the right employee or service contractor assigned to the right asset, at the right time, with the right skill set.

Many organisations already have enough information about employees either in their HR, ERP, or manufacturing databases. They just haven’t analysed the information in coordination with other data they may have access to.

Some typical types of operator information include:

  • Name
  • Work duration
  • Previous asset experience
  • Training courses taken
  • Safety Courses
  • Licences
  • Previous asset failures and corrective actions taken

The benefits of using PdM in the WPML process include the following:

  • Workforce optimisation: Accurately allocate employees' time and tasks within a workgroup, minimising costly overtime
  • Best employee on task: Ensure that the right employee is performing the most valuable tasks
  • Training effectiveness: Know which training will benefit the employee and the organisation
  • Safety: Maintain high standards of safety in the plant
  • Reduction in management time: Fewer management hours needed to plan and supervise employees
  • A more satisfied, stable workforce: Make people feel they are contributing to the good of the organisation and feel productive.

The key for asset-intensive companies is to ensure that their assets are safe, reliable, and available to support their business. Companies have found that simply adding more people or scheduling more maintenance sessions doesn't produce cost-effective results. In order for organisations to effectively utilise predictive maintenance (PdM), they must understand the analytical process, how it works, its underlying techniques, and its integration with existing operational processes; otherwise, the work to incorporate PdM will be for nothing.

The Analytical Process: the Three “A” Approach

As organisations find themselves with more data, fewer resources to manage it, and a lack of knowledge about how to quickly gain insight from it, the need for PdM becomes evident. The world is more instrumented and interconnected, which yields a large amount of potentially useful data. Analytics transforms data to quickly create actionable insights that help organisations run their businesses more cost effectively.

First A = Align

The align process is all about the data. You understand what data sources exist, where they are located, what additional data may be needed or can be acquired, and how the data is integrated or can be integrated into operational processes. With PdM, it doesn’t matter if your data is structured or unstructured, streaming or at rest. You just need to know which type it is so you can integrate and analyse the data appropriately.

Second A = Anticipate

In this phase, you leverage PdM to gain insights from your data. You can utilise several capabilities and technologies to analyse the data and predict outcomes:

1). Descriptive analytics provides simple summaries and observations about the data. Basic statistical analyses, for which most people utilise Microsoft Excel, are included in this category. For example, a manufacturing machine failed three times yesterday for a total downtime of one hour.

2). Data mining is the analysis of large quantities of data to extract previously unknown interesting patterns and dependencies. There are several key data mining techniques:

Anomaly detection: Discovers records and patterns that are outside the norm or unusual. This can also be called outlier, change, or deviation detection. For example, out of 100 components, components #23 and #47 have different sizes from the other 98.
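A minimal, hypothetical sketch of this idea: flag components whose size lies more than two standard deviations from the mean. The measurements are made up.

```python
# Hypothetical component sizes; most cluster around 10.0, two do not.
sizes = [10.0, 10.1, 9.9, 10.0, 10.2, 9.8, 14.5, 10.1, 5.2, 10.0]

# Population mean and standard deviation, computed by hand.
mean = sum(sizes) / len(sizes)
std = (sum((x - mean) ** 2 for x in sizes) / len(sizes)) ** 0.5

# Flag anything more than two standard deviations from the mean.
outliers = [i for i, x in enumerate(sizes) if abs(x - mean) > 2 * std]
print(outliers)  # indices 6 and 8 stand out
```

Real condition-monitoring systems use far more robust detectors, but the principle of defining "normal" statistically and flagging deviations is the same.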

Association rules: Searches for relationships, dependencies, links, or sequences between variables in the data. For example, a drill tends to fail when the ambient temperature is greater than 100 degrees Fahrenheit, it’s 1700 hrs, and it’s been functioning for more than 15 hours.
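One simple, illustrative way to evaluate such a rule is to measure its confidence: of the records that satisfy the rule's conditions, how many ended in failure? The records below are invented.

```python
# Hypothetical operating records for a drill.
records = [
    {"temp_f": 105, "runtime_h": 16, "failed": True},
    {"temp_f": 102, "runtime_h": 17, "failed": True},
    {"temp_f": 95,  "runtime_h": 8,  "failed": False},
    {"temp_f": 101, "runtime_h": 16, "failed": False},
    {"temp_f": 88,  "runtime_h": 20, "failed": False},
]

# Rule conditions: ambient temperature above 100°F and more than 15 h runtime.
matching = [r for r in records if r["temp_f"] > 100 and r["runtime_h"] > 15]
confidence = sum(r["failed"] for r in matching) / len(matching)
print(confidence)  # 2 of the 3 matching records ended in failure
```

Association-rule mining automates the search for such condition combinations rather than testing one hand-written rule, but the confidence measure is the same.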

Clustering: Groups a set of objects in such a way that objects in the same group are more similar to each other than to those in other groups. For example, offshore oil platforms that are located in North America and Europe are grouped together because they tend to be surrounded by cooler air temperatures, while those in South America and Australia are grouped separately because they tend to be surrounded by warmer air temperatures.
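A minimal one-dimensional k-means sketch (k = 2) illustrates the idea; the air temperatures are invented.

```python
# Hypothetical average air temperatures (°C) around six offshore platforms:
# three in cooler waters, three in warmer waters.
temps = [4.0, 6.5, 5.2, 24.0, 27.5, 26.1]

centroids = [temps[0], temps[-1]]  # naive initialisation: first and last points
for _ in range(10):
    # Assignment step: each platform joins the cluster with the nearest centroid.
    clusters = [[], []]
    for t in temps:
        nearest = min(range(2), key=lambda i: abs(t - centroids[i]))
        clusters[nearest].append(t)
    # Update step: each centroid moves to the mean of its cluster.
    centroids = [sum(c) / len(c) for c in clusters]

print(clusters)  # [[4.0, 6.5, 5.2], [24.0, 27.5, 26.1]]
```

The two groups emerge from the data alone; no one labelled the platforms "cool" or "warm" in advance, which is what makes clustering an unsupervised technique.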

Classification: Identifies which of a set of categories a new data point belongs to. For example, a turbine may be classified simply as “old” or “new.”

Regression: Estimates the relationships between variables and determines how much a variable changes when another variable is modified. For example, plant machinery tends to fail as the age of the asset increases.
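A minimal ordinary-least-squares sketch with invented data: estimate how the failure rate changes with asset age.

```python
# Hypothetical data: asset age in years vs. failures observed per year.
ages = [1, 2, 3, 4, 5]
failures = [2, 3, 5, 6, 9]

n = len(ages)
mean_x = sum(ages) / n
mean_y = sum(failures) / n

# Least-squares slope: covariance of x and y divided by variance of x.
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(ages, failures)) / \
        sum((x - mean_x) ** 2 for x in ages)
intercept = mean_y - slope * mean_x

print(round(slope, 2), round(intercept, 2))  # 1.7 -0.1
```

A slope of about 1.7 says that, in this invented data set, each additional year of age adds roughly 1.7 failures per year, quantifying the "tends to fail as the asset ages" claim.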

3). Text mining derives insights and identifies patterns from text data via natural language processing, which enables understanding of and alignment between computer and human languages. For example, from maintenance logs you may determine that the operator always cleans the gasket in the morning before starting, which leads to an extended asset life.

4). Machine learning enables the software to learn from the data. For example, when an earthmover fails, there are three or four factors that come into play. The next time those factors are evident, the software will predict that the earthmover will fail. You may also come across predictive analytics, a category of analytics that utilises machine learning and data mining techniques to predict future outcomes.

5). Simulation enables what-if scenarios for a specific asset or process. For example, you may want to know how running the production line for 24 continuous hours will impact the likelihood of failure.

6). Prescriptive analytics goes beyond predicting future outcomes by also suggesting actions and showing the implications of each decision option. For example, based on the data, organisations can predict when a water pipe is likely to burst. Additionally, a municipality can automate the decision that, for certain pipes, certain valves must be replaced by a Level-3 technician. Such an output provides the operations professional with the predictive outcome, the action, and who needs to conduct the action. A decision management framework that aligns and optimises decisions based on analytics and organisational domain knowledge can automate prescriptive analytics.

The Final A = Act

In the final A, you want to act at the point of impact, with confidence, on the insights that your analysis provided. This is typically done using a variety of channels, including e-mail, mobile, reports, dashboards, Microsoft Excel, and enterprise asset management (EAM) systems; essentially, however your organisation makes decisions within its operational processes. A prominent aspect of the act phase is being able to view the insights from the anticipate phase so employees can act on them. There are three common outputs:

Reports: Display results, usually in list format

Scorecards: Also known as balanced scorecards; automatically track the execution of staff activities and monitor the consequences arising from these actions; primarily utilised by management

Dashboards: Exhibit an organisation’s key performance indicators in a graphical format; primarily utilised by Senior Management

Organisations that utilise as many analytical capabilities of PdM as possible will be able to match the appropriate analytics with the data. Ultimately, they will have better insights and make better decisions than those organisations that don't. It may be easier for you to leverage a single software vendor that can provide all of these capabilities and integrate all three phases into your operational processes so you can maximise PdM's benefits. Here are a few names to be going on with: TROBEXIS, OPENTEXT, SAP, MAXIMO.

The Business Intelligence Puzzle

February 21, 2016

The Data Warehousing Institute, a provider of education and training in the data warehousing and BI industry, defines Business Intelligence as: “The processes, technologies, and tools needed to turn data into information, information into knowledge, and knowledge into plans that drive profitable business action”. Business intelligence has also been described as an “active, model-based, and prospective approach to discover and explain hidden decision-relevant aspects in large amounts of business data to better inform the business decision process” (KMBI, 2005).

Defining Business Intelligence has not been a straightforward task, given the multifaceted nature of the data processing techniques involved and the managerial output expected. Williams & Williams (2007) describe it as “business information and business analyses within the context of key business processes that lead to decisions and actions and that result in improved business performance”. BI is “both a process and a product. The process is composed of methods that organisations use to develop useful information, or intelligence, that can help organisations survive and thrive in the global economy. The product is information that will allow organisations to predict the behaviour of their competitors, suppliers, customers, technologies, acquisitions, markets, products and services and the general business environment” with a degree of certainty (Vedder, et al., 1999). “Business intelligence is neither a product nor a system; it is an architecture and a collection of integrated operational as well as decision-support applications and databases that provide the business community easy access to business data” (Moss & Atre, 2003). A “Business Intelligence environment is quality information in well-designed data stores, coupled with business-friendly software tools that provide knowledge workers timely access, effective analysis and intuitive presentation of the right information, enabling them to take the right actions or make the right decisions” (Popovic, et al., 2012).

The aim of a business intelligence solution is to collect data from heterogeneous sources and to maintain and organise knowledge. Analytical tools present this information to users in order to support the decision-making process within the organisation. The objective is to improve the quality and timeliness of inputs to the decision process. BI systems have the potential to maximise the use of information by improving a company’s capacity to structure a large volume of information and make it accessible, thereby creating competitive advantage, what Davenport calls “competing on analytics” (Davenport, 2005). Business intelligence refers to computer-based techniques used in identifying, extracting, and analysing business data, such as sales revenue by product or customer, or by associated costs and incomes.

Business Intelligence encompasses data warehousing, business analytic tools and content/knowledge management. BI systems comprise specialised tools for data analysis, query, and reporting, such as Online Analytical Processing (OLAP) systems and dashboards, that support organisational decision making, which in turn enhances the performance of a range of business processes. The general functions of BI technologies are reporting, online analytical processing (OLAP), analytics, business performance management, benchmarking, text mining, data mining and predictive analysis:

Online Analytical Processing (OLAP) includes software enabling multidimensional views of enterprise information, consolidated and processed from raw data, with the possibility of both current and historical analysis.

Analytics helps make predictions and forecast trends, relying heavily on statistical and quantitative analysis to enable decision making concerned with future business performance.

Business Performance Management tools are concerned with setting appropriate metrics and monitoring organisational performance against these identifiers.

Benchmarking tools provide organisational and performance metrics which help compare enterprise performance against benchmark data, such as the industry average.

Text Mining software helps analyse unstructured data, such as written material in natural language, in order to draw conclusions for decision making.

Data Mining involves large-scale data analysis based on such techniques as cluster analysis and anomaly and dependency discovery, in order to establish previously unknown patterns in business performance or make predictions of future trends.

Predictive Analysis deals with data analysis, turning it into actionable insights and helping anticipate business change with effective forecasting.
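The OLAP idea above can be illustrated with a small sketch. The sales figures are invented for illustration, and plain Python stands in for a real OLAP engine: raw transactions are consolidated into a multidimensional view that can then be sliced by region or year.

```python
from collections import defaultdict

# Hypothetical sales transactions (raw data): (region, product, year, revenue).
sales = [
    ("North", "Widget", 2015, 100),
    ("North", "Gadget", 2015, 150),
    ("South", "Widget", 2015, 120),
    ("South", "Gadget", 2016, 180),
    ("North", "Widget", 2016, 110),
    ("South", "Widget", 2016, 130),
]

# Consolidate the raw facts into a small "cube" keyed by (region, year),
# one of the multidimensional views OLAP tools expose for slicing and drilling.
cube = defaultdict(int)
for region, product, year, revenue in sales:
    cube[(region, year)] += revenue

# "Slice" the cube: revenue for the North region across years
# (historical as well as current analysis from the same consolidated view).
north_by_year = {year: total for (region, year), total in cube.items()
                 if region == "North"}
print(north_by_year)  # {2015: 250, 2016: 110}
```

A real OLAP system adds pre-aggregation, drill-down and many more dimensions, but the core operation is this same consolidation of raw facts along chosen dimensions.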

Specialised IT infrastructure such as data warehouses, data marts, and extract, transform & load (ETL) tools are necessary for the deployment and effective use of BI systems. Business intelligence systems are widely adopted in organisations to provide enhanced analytical capabilities on the data stored in Enterprise Resource Planning (ERP) and other systems. ERP systems are commercial software packages with seamless integration of all the information flowing through an organisation: financial and accounting information, human resource information, supply chain information and customer information (Davenport, 1998). ERP systems provide a single vision of data throughout the enterprise and focus on management of financial, product, human capital, procurement and other transactional data. BI initiatives in conjunction with ERP systems dramatically increase the value derived from enterprise data.
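As a rough illustration of the ETL step mentioned above, the sketch below uses invented order rows as they might arrive from an operational export, and an in-memory SQLite database stands in for the data warehouse:

```python
import sqlite3

# -- Extract: raw rows as they might arrive from an operational (e.g. ERP) export.
source_rows = [
    "1001,widget, 250.0",
    "1002,gadget, 99.5",
]

# -- Transform: parse the fields, strip stray whitespace, normalise product names.
cleaned = []
for line in source_rows:
    order_id, product, amount = line.split(",")
    cleaned.append((int(order_id), product.strip().upper(), float(amount)))

# -- Load: write the cleaned rows into a warehouse fact table.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE fact_orders (order_id INTEGER, product TEXT, amount REAL)")
db.executemany("INSERT INTO fact_orders VALUES (?, ?, ?)", cleaned)

# The warehouse can now answer analytical queries over the consolidated data.
total = db.execute("SELECT SUM(amount) FROM fact_orders").fetchone()[0]
print(total)  # 349.5
```

Production ETL adds scheduling, error handling and incremental loads, but the extract/transform/load separation is the same.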

While many organisations have an information strategy in operation, an effective business intelligence strategy is only as good as the process of accumulating and processing corporate information. Intelligence can be categorised in a hierarchy, which is useful for understanding its formation and application. The traditional intelligence hierarchy, shown below, comprises the data, information, knowledge, expertise and, ultimately, wisdom levels of intelligence.


Data is associated with discrete elements, raw facts and figures; once the data is patterned in some form and contextualised, it becomes information. Information combined with insights and experience becomes knowledge. Knowledge in a specialised area becomes expertise. Expertise morphs into the ultimate state of wisdom after many years of experience and lessons learned (Liebowitz, 2006). For small businesses, processing data is a manageable task. However, for organisations that collect and process data from millions of customer interactions per day, identifying trends in customer behaviour and accurately forecasting sales targets are far more challenging.
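The first step of that hierarchy, turning raw data into contextualised information, can be sketched as follows (the transaction counts and the deviation threshold are invented for illustration):

```python
# Raw data: discrete, uncontextualised facts (daily transaction counts).
data = [("2016-02-01", 120), ("2016-02-02", 135), ("2016-02-03", 310)]

# Information: the same data patterned and contextualised, here by flagging
# days whose volume deviates sharply from the average (threshold is arbitrary).
average = sum(count for _, count in data) / len(data)
information = [(day, count, count > 1.5 * average) for day, count in data]

# Knowledge would come from combining this with experience, e.g. knowing that
# 2016-02-03 was a promotion day, so the flagged spike is expected, not an error.
print(information)
```

The point of the sketch is only that the same facts become more useful at each level: the raw counts say nothing on their own, the flags say something, and experience says what to do about it.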

How data is used depends on the context of each use as it pertains to the exploitation of information. At a high level, data use can be categorised into operational and strategic. Both are valuable for any business: without operational use the business could not survive, but it is up to the information consumer to derive value from a strategic perspective. Some of the strategic uses of information through BI applications include:

Customer Analytics, which aims to maximise the value of each customer and enhance the customer’s experience;

Human Capital Productivity Analytics, which provides insight into how to streamline and optimise human resources within the organisation;

Business Productivity Analytics, which refers to the process of differentiating between forecasted and actual figures for the enterprise’s input/output conversion ratio;

Sales Channel Analytics, which aims to optimise the effectiveness of various sales channels and provides valuable insight into sales metrics and conversion rates;

Supply Chain Analytics, which offers the ability to sense and respond to business changes in order to optimise an organisation’s supply chain planning and execution capabilities, alleviating the limitations of historical supply chain models and algorithms;

Behaviour Analytics, which helps predict trends and identify patterns in specific kinds of behaviour.

Organisations accumulate, process and store data continuously and rely on their information-processing capabilities to stay ahead of competitors. According to the PricewaterhouseCoopers Global Data Management Survey of 2001, companies that manage their data as a strategic resource and invest in its quality are far ahead of their competitors in profitability and reputation. A proper Business Intelligence system implemented for an organisation can lead to benefits such as increased profitability, decreased cost, improved customer relationship management and decreased risk (Loshin, 2003). Within the context of business processes, BI enables business analysis using business information that leads to decisions and actions that result in improved business performance. BI investments are wasted unless they are connected to specific business goals (Williams & Williams, 2007).

As the competitive value of BI systems and analytics solutions is increasingly recognised across industry, many organisations are initiating BI programmes to improve their competitiveness, though not as quickly as they could.

Business Process Re Engineering

January 8, 2013

Business process reengineering (often referred to by the acronym BPR) is one of the main ways in which organizations become more efficient and modernize. Business process reengineering transforms an organization in ways that directly affect performance.

The Impact Of BPR On Organizational Performance
The two cornerstones of any organization are its people and its processes. If individuals are motivated and working hard, yet the business processes are cumbersome and non-essential activities remain, organizational performance will be poor. Business Process Reengineering is the key to transforming how people work. What appear to be minor changes in processes can have dramatic effects on cash flow, service delivery and customer satisfaction. Even the act of documenting business processes alone can yield a meaningful improvement in organizational efficiency, often put at around 10%.

How To Implement A BPR Project
The best way to map and improve the organization’s procedures is to take a top-down approach, and not to undertake a project in isolation. That means:
• Starting with mission statements that define the purpose of the organization and describe what sets it apart from others in its sector or industry.
• Producing vision statements which define where the organization is going, to provide a clear picture of the desired future position.
• Building these into a clear business strategy, thereby deriving the project objectives.
• Defining behaviors that will enable the organization to achieve its aims.
• Producing key performance measures to track progress.
• Relating efficiency improvements to the culture of the organization.
• Identifying initiatives that will improve performance.
Once these building blocks are in place, the BPR exercise can begin.

Tools To Support BPR
When a BPR project is undertaken across the organization, it can require managing a massive amount of information about processes, data and systems. Without a good tool to support BPR, managing this information can become an impossible task; the use of a good BPR/documentation tool is therefore vital in any BPR project.
The types of attributes you should look for in BPR software are:
• Graphical interface for fast documentation.
• “Object oriented” technology, so that changes to data (e.g. job titles) only need to be made in one place, and the change automatically appears throughout all the organization’s procedures and documentation.
• Drag-and-drop facility, so you can easily relate organizational and data objects to each step in the process.
• Customizable metadata fields, so that you can include information relating to your industry, business sector or organization in your documentation.
• Analysis features, such as swim-lanes to show visually how responsibilities in a process are transferred between different roles, or where data items or computer applications are used.
• Support for Value Stream mapping.
• CRUD or RACI reports, to provide evidence for process improvement.
• The ability to assess processes against agreed international standards.
• Simulation software to support ‘what-if’ analyses during the design phase of the project to develop LEAN processes.
• The production of Word documents or web-site versions of the procedures at the touch of a single button, so that the information can be easily maintained and updated.
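As a small illustration of the kind of output a RACI report provides, the sketch below uses a hypothetical procurement process with invented role assignments (R = Responsible, A = Accountable, C = Consulted, I = Informed) and derives, for each role, the steps it is responsible for:

```python
# Hypothetical process steps mapped to roles with RACI codes.
process = {
    "Raise requisition":    {"Buyer": "R", "Procurement Manager": "A"},
    "Approve requisition":  {"Procurement Manager": "R", "Finance": "C"},
    "Issue purchase order": {"Buyer": "R", "Supplier": "I"},
}

# A simple RACI report: for each role, the steps it is Responsible for.
# A real BPR tool would render this as a matrix and flag gaps (e.g. a step
# with no "R", or two "A"s) as evidence for process improvement.
responsible = {}
for step, assignments in process.items():
    for role, code in assignments.items():
        if code == "R":
            responsible.setdefault(role, []).append(step)

print(responsible)
```

The value of such a report in a BPR exercise is exactly this cross-check: every step should have one responsible role, and roles overloaded with responsibilities stand out immediately.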

To be successful, business process reengineering projects need to be top down, taking in the complete organization and the full end-to-end processes. They need to be supported by tools that make processes easy to track and analyze. If you would like help with your BPR project, please contact Manage to Supply.
• Business process reengineering is a huge step for any company, though one that can bring equally significant rewards when properly implemented. Be sure to think your decision through thoroughly and proceed only after you’ve done sufficient research.
• Should you decide to act as your own business process engineer, realize that you’ll need adequate BPR training and excellent business process engineering software to successfully pull it off. You’ll need to develop the skills necessary for creating a business process map redesign that not only meets your company’s unique needs, but also adequately addresses your prior business process problems.

Adoption of sourcing technology – ease of use.

January 3, 2013

Organisations spend millions of dollars on technology implementations, yet many projects fail within one year of implementation. In the recently issued World Economic Forum study report for 2010-2011, Sweden and Singapore continue to dominate the rankings, whereas Malaysia ranks 28th and Oman only 41st among technologically savvy nations. One of the reasons for this could be a lack of adoption of new technologies within organisations.

Employees using a new software system exhibit steep learning curves and resistance to change, which is evident from the large percentage of organisations that rate their ability to deal with change as poor. Most of the time this failure can be attributed to a lack of communication between the decision maker (in our case, the CPO or VP of procurement) and the end user (buying manager, buyers, etc.). The point being that such an environment is not conducive to effective software implementation.

Procurement technology solutions have also not been immune to adoption failure. Let’s take a look at a case study.

An $11 billion organisation had in place an existing eSourcing solution from a major solution provider. The investment was close to $100,000, and 100 user licences had been purchased. A post-implementation review found only 5 active users of the application, while the remaining 95% of licences sat inactive, indicating a lack of adoption amongst the users. More importantly, the comfort level of the suppliers, an important end user of any sourcing solution, had not been considered. Suppliers would not respond to RFIs created within the tool, citing it as too complex, and would instead send in quotes as Excel documents, making evaluation tedious and almost impossible.

This post will explore the major challenges involved in adoption and how an organisation can use four strategies to overcome the adoption challenges and ensure acceptance of the eSourcing solution by the end users.

Challenges Faced!

Before I discuss the ways to increase procurement technology adoption within organisations, let us look into what are the major challenges that organisations face with respect to procurement technology adoption.

The first and foremost challenge is to deal with the resistance to change. Even when organisational members recognise that a specific change would be beneficial, they often fall prey to the gap between knowing something and actually doing it.

The second reason can be attributed to the complexity of the technology, which deters the end user since it requires acquiring new technological knowledge and skills. Complex features may sound great in product demonstrations and data sheets but become a bane to adoption at the ground level.

The third reason could be a lack of visibility into the benefits of the software post-implementation. It’s important to note that a benefit needs to be expressed in the parlance of the end users: they need to see how the technology will help them in their job. In short, what is the take-away for them?

So how does one overcome these challenges? Here I would like to draw your attention to what I call the “Four Ease” strategies of efficient user adoption. These are: ease of use, ease of user involvement during evaluation, ease of training and adoption and, finally, ease of metrics & incentives. Let us look into each of these “ease strategies” and the role they play in overcoming the challenges.

Overcoming challenges to procurement technology adoption is the key to ensure that an organisation reaps the benefits from their implementation. In this section I will discuss the importance of having the right strategy to overcome the adoption challenges.

Strategy 1. Ease of Use

As discussed earlier, complexity of technology was one of the major reasons for lack of adoption. This is where having a technology which is easy to use goes a long way in fostering acceptance among the end users. Let’s consider a very simple example here.
Consider the iPhone or iPad: an innovation which, although loaded with sophisticated features, is extremely easy for end users to use, leading to quicker and higher adoption. Ease of use, of course, should not come at the cost of functionality.

Organisations should work on achieving a balance between satisfying all key core requirements and enhancing the user experience. When talking of ease of use, it is of utmost importance to speak from the perspective of the end users. Technology vendors and decision makers often mistake what is naturally easy for them for ease of use when discussing software.

Organisations must ensure that the new technology they are planning to implement will be easy to use not just for the stakeholders but for the eventual users of the solution, who will work with it day in and day out. Technology must make things simpler for the end user.

Features need to be mapped to the needs within the organisation, rather than choosing solutions with the maximum number of features that don’t really satisfy the inherent needs of the process.

Strategy 2. User involvement

User involvement goes a long way towards overcoming adoption challenges. It can be accomplished by involving the end user in the initial stages of the software selection process. Users can be involved in the product demonstration process, which helps convey the benefits of the product. For example, the ‘drag & drop’ feature within eSourcing can be used to set up complex events in just minutes, ensuring 100% category coverage; demonstrating this to end users will help convince them to create all events within the solution.

This can then be followed up by a pilot involving the end users, further convincing them of the benefits by showing how a particular feature directly helps in their work process.

Once users have received a hands-on demonstration of the tool’s capabilities, make sure to gather feedback about the experience. Such an activity ensures greater buy-in from end users and considerably reduces the objections that arise post-implementation.

Strategy 3. Training

Training should be arranged both pre- and post-implementation, and can be conducted by a variety of means. Combining periodic on-site training with regular feature-level training provided online in the form of user sessions, webinars, etc. is the most effective way of achieving user adoption goals. It is recommended to have the vendors/suppliers involved at every stage of training to ensure constant communication between the end user and the trainer.

Ideally, a training council should be formed comprising members from both the vendor/supplier and the organisation. Once the training has been conducted, organisations can also look at running product knowledge tests and quizzes. This has dual benefits:

1. Makes the end user more responsible
2. Helps in judging the effectiveness of the training sessions

Strategy 4. Metrics & Incentives

Once the technology has been implemented, top management needs to sit down with the end users and decide how their performance will be measured. Including end users in setting performance goals instils a strong sense of responsibility and accountability. Organisations must set fair, consistent and rigorous goals which are transparent in every sense. I offer an example of how this could be accomplished.

Example 1. Consider an organisation that has just implemented an eSourcing solution and has 50 sourcing events scheduled for the year. One check point could be to see how many of these sourcing events were channelled through the eSourcing platform. After deliberation, the organisation might set a goal of 80% of sourcing events to be conducted through the platform.

Or consider another example:

Example 2. If an organisation enters into, say, 100 contracts in a year, one objective could be to have, say, 90% of contracts under the contract management system.

Once the objectives have been set in deliberation with the end users, the next logical step is to link the users’ incentives to the objectives. A simple incentive is a percentage share of the savings achieved from implementing the solution. These savings can be benchmarked against similar numbers from before the software was implemented to derive the direct benefits of the implementation.
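The two examples above amount to a simple adoption-rate metric. A minimal sketch, with the counts of 40 events and 92 contracts invented to match the targets:

```python
def adoption_rate(through_system, total):
    """Share of transactions channelled through the new system."""
    return through_system / total

# Example 1: 40 of the 50 scheduled sourcing events ran on the eSourcing platform.
esourcing = adoption_rate(40, 50)

# Example 2: 92 of the 100 contracts were captured in the contract management system.
contracts = adoption_rate(92, 100)

# Check the rates against the agreed goals (80% of events, 90% of contracts).
goals_met = esourcing >= 0.80 and contracts >= 0.90
print(esourcing, contracts, goals_met)  # 0.8 0.92 True
```

Tracked period by period, the same numbers give the benchmark against the pre-implementation baseline from which incentive payments could be derived.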

Managing Change.

A learning orientation is critical during the implementation stages. This brings us to the next point, which is concerned with managing change. In order to successfully manage the change process, I recommend the following four steps:

1. Brief the end user about the new technology and involve the end user in the evaluation stages.

2. Educate the end user about the product in the form of product training, workshops (video, on-site, etc.) and webinars.

3. Devise mutually accepted metrics for measuring the performance of the end user post-implementation.

4. Link the objectives with incentives, with disbursement of incentives tied to the objectives met.
Companies must do away with persuasion and edict as modes of technology implementation and adoption, since both involve little or no user input in decisions regarding implementation and adoption. Change management is the key to ensuring buy-in from the various stakeholders and thus securing the benefits of technology implementations.

Supply Chain Management and “The Wisdom of Bees”

November 10, 2012

Michael O’Malley’s intriguing analogy between business organizations and beehives provides delightful entertainment and clear instruction, which can be appreciated by business people and laymen alike. In twenty-five brief chapters, the reader comes to understand how and why an enterprise succeeds or fails, using the imagination and science of bees at work for guidance.

On closer evaluation, bees were working on the very same kinds of problems we are trying to solve: how can large, diverse groups work together harmoniously and productively? It seems we could take what the bees do so well and apply it to our enterprises. When Michael O’Malley first took up beekeeping, he thought it would be a nice hobby to share with his ten-year-old son. But as he started to observe these industrious insects, he noticed that they do a lot more than just make honey. Bees not only work together to achieve a common goal but in the process create a highly coordinated, efficient and remarkably productive organization. The hive behaved like a miniature but incredibly successful business. O’Malley also realized that bees can actually teach managers a lot about how to run their organizations. He identified twenty-five powerful insights, such as:

• Distribute authority: the queen bee delegates relentlessly, and worker bees make daily decisions based on local cues and requirements.

• Keep it simple: bees exchange only relevant information, operate under clear standards and use straightforward measures and feedback to guide their actions.

• Protect the future: when a lucrative vein of nectar is discovered, the entire colony doesn’t rush off to mine it, no matter how enriching the short-term benefits.

Blending practical advice with interesting facts about the hive, The Wisdom of Bees is a useful and entertaining guide for any manager looking to get the most out of his or her organization.

In subsequent posts I will try to give more examples of how this book, in its 25 chapters, has influenced me in my work and in the organisations I have been associated with in the field of supply chain management. As I mention in the previous post about keeping things simple, removing complexity from our business is one of my own key challenges.

Keep Processes Simple, reduce complexity – The Wisdom of Bees!

November 5, 2012

For the last couple of years I have constantly carried a book with me that has fascinated me since I first read a review of it and subsequently bought it. The book in question is “The Wisdom of Bees”, written by Michael O’Malley, Ph.D. So what has this book got to do with supply chain management, and in particular with my blog about the supply chain management world? Basically, the book is about what the hive can teach business about leadership, efficiency and growth! Every time I pick up the book, which I have now read cover to cover many times, I find something new or related to my work within the supply chain management field.

For the last twenty-plus years I have been involved in implementations of SAP ERP, and over the last decade in eBusiness implementations, which are about streamlining, standardising and simplifying the business. However, the message of “keeping things simple” still seems to elude enterprises.

All too often, enterprises introduce a level of complexity into their processes or products that inadvertently undermines efficiency and effectiveness. That is, over time, corporate operations and goods become “Rube Goldberg” creations. Rube Goldberg was an American cartoonist best known for his drawings of complicated machines that performed simple tasks. Even today we honour him through the many engineering contests held in his name: individuals or teams receive awards for building the most convoluted contraptions to achieve simple outcomes, such as turning on a light or breaking an egg. Although we might not all win a Rube Goldberg award, we unnecessarily complicate our work in many ways. I will briefly describe three of the major ones here, but keep in mind that this by no means exhausts the possibilities.

One common way that we over-engineer products, processes or services is by feeling compelled to incorporate the ideas of every conceivable constituency into the final design. The many quips about committees (for example, “the unwilling, picked from the unfit, to do the unnecessary”) are nods to the ineffective solutions that many hours of group discussion can produce.

A second way to do ourselves in is by building on existing platforms that have inherent limitations, making it hard to do what we want. For example, where I live, plumbers were working in the condominium: good people and true professionals. Some time ago there was a requirement to change pipe work (originally done by different plumbers), and certain parts of the system had to be changed. The supervisor explained that the original system would not be entirely compatible with the new requirements and new piping would have to be installed. Building on the old system would have introduced new routes of piping, some of which would have to be exposed, with different mechanisms to maintain the water pressure, and the work would have stretched over weeks of disruption and fussing around to get everything working in harmony. Ultimately, a decision was taken to replace the old system rather than build on top of it and wind up with something that would not function properly. The costs were a little higher for the new system, but there will be no extra to pay for endless workarounds and inevitable fixes, and we have a system that runs smoothly.

Finally, we sometimes try to design for the exceptions: rather than create a product or process for the 99% of people who are good, law-abiding citizens, we incorporate features that try to exclude or capture the 1% of users (abusers) who will circumvent our intent. I read a story a while ago about the State Liquor Control Board of Pennsylvania, which was thinking of introducing wine kiosks at selected sites. It’s true that we do not want the intoxicated or underage buying alcohol, so purchasers would have to insert their driving licence and breathe into a breathalyser to complete the transaction. The intentions are good, but the result is that you irritate the people you want as customers while presenting underage buyers with an easy challenge.

So how is this relevant to supply chain management, and what can we learn from the honey bee? When it comes to the hive, honey bees keep it simple. They get right to the point: concisely, clearly and without undue complications. There are several aspects of what they do from which we should learn. These will not overcome all potential barriers to simplicity, but keeping the following three maxims in mind will help.

Firstly, information exchange among bees is relevant. When a bee receives a signal, it knows it means something important; bees do not communicate any more or less than is necessary. For example, there is no feedback signal in the hive that tells bees to abandon a poor flower patch. That information would not help anyone: bees working in the same patch already know the quality is poor, and the information is not pertinent to foragers who work at different patches. In addition, since bees recruit unemployed foragers to good patches, it would not make sense to tell them about all the places not to go.

Secondly, bees have clear standards that regulate their behaviour. The standards keep their mission on track and protect against wrongheaded commitments. When foragers return to the hive, they express their enthusiasm for the quality (for example, the sugar concentration) of the nectar they have found through their waggle dances. As you can well imagine, if all the bees had a different idea of what constitutes a “good” flower patch, the colony could mobilise to the wrong place: an excited bee returns to the hive and recruits others to the site, some of whom then return to the hive to recruit more bees, and so on. Thus it would take only an errant few to get the hive going in the wrong direction. I have personally witnessed eager and enthusiastic authorities in enterprises mobilise people and resources around new processes that made absolutely no sense at all and, in retrospect, would have been difficult to justify had the definition of desirable patches of business “nectar” been clearly established from the start. The integrity of communication is possible because the entire foraging force shares common criteria and a common understanding of the true value of one of its chief products.

Third, there is an elegance and parsimony to what honey bees do. At times, this involves dodging solutions that seem logical but may not be. Honey bees, for example, do not cross-train foragers and receivers so that one may take the place of the other. That is, they do not employ task switching to try to balance their work capacities, minimise queuing delays and maximise the import of nectar. Instead they elect to pull from a reserve workforce of foragers. For honey bees, bringing in more workers to exploit a ripe situation is more important than keeping a fixed resource, such as the proportion of foragers to receivers, perfectly balanced. It is fairly safe to say that where trade-offs exist in the hive, the colony will favour revenue intake over nifty accounting and administrative rigour.

The best plans are ultimately the simplest ones, involving clear, direct and uncomplicated communications and actions. Colonies execute with as little waste in resources, communications and personal energy as possible. I grant you that the honey bee has had a long time to work things out, but their wonderfully successful society remains brilliantly straightforward.

Simplicity is a consequence of knowing what you are talking about, doing and want. In part, achieving clarity of perspective and direction depends on the use of a common method of analysis to examine problems. A standard approach provides organisational members with a mutual vocabulary and framework for defining concepts, proposing relationships, conducting tests, assessing consequences, determining goals and, in turn, putting the proper mechanisms in place (for example, feedback and tracking).

eTransactions assist in combating corrupt practices.

October 26, 2012

Even today corrupt practices still exist within the procure-to-pay (P2P) process of many businesses; however, eTransactions with visibility and electronic audit trails help to alleviate the risk of corruption.

Before we had the electronic transaction, and by this I mean the ones that are incorporated into a business system such as SAP, Oracle or even Great Plains, the risk of corruption in the procurement process was extremely high. Collusion between buyers and sellers was almost an accepted practice, until they got caught, that is! Although eBusiness transactions have been with us for quite some time, the early transactions were still fraught with opportunities for corrupt practices, and audit controls and compliance monitoring were still in their infancy.

Today eTransactions are much more highly developed and actually offer both the buyer and seller a certain degree of protection against being accused of corrupt practices. This is not to say that corrupt practices have been totally eradicated; they have not. However, it is much harder for the corrupt buyer or seller to get around the process.

Business processes are much more visible today (end to end) with changes and updates being logged almost every step of the way. Audits and compliance monitoring tools and techniques have developed considerably to assist in weeding out corruption in the business process.
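To make the idea of an electronic audit trail concrete, here is a minimal sketch of a tamper-evident log in Python. It is not how any particular ERP implements logging; the field names and helper functions are illustrative. Each entry embeds a hash of the previous entry, so a retrospective edit anywhere in the chain is detectable:

```python
import hashlib
import json
from datetime import datetime, timezone

def append_entry(trail, actor, action, details):
    """Append a tamper-evident entry; each entry chains to the previous hash."""
    prev_hash = trail[-1]["hash"] if trail else "0" * 64
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "actor": actor,
        "action": action,
        "details": details,
        "prev_hash": prev_hash,
    }
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["hash"] = hashlib.sha256(payload).hexdigest()
    trail.append(entry)
    return entry

def verify(trail):
    """Return True only if no entry has been altered since it was written."""
    for i, entry in enumerate(trail):
        expected_prev = trail[i - 1]["hash"] if i else "0" * 64
        if entry["prev_hash"] != expected_prev:
            return False
        body = {k: v for k, v in entry.items() if k != "hash"}
        payload = json.dumps(body, sort_keys=True).encode()
        if hashlib.sha256(payload).hexdigest() != entry["hash"]:
            return False
    return True

trail = []
append_entry(trail, "buyer01", "PO_CREATED", {"po": "4500012345", "value": 9800})
append_entry(trail, "approver02", "PO_APPROVED", {"po": "4500012345"})
print(verify(trail))             # True: chain intact
trail[0]["details"]["value"] = 12000  # a retrospective "correction"...
print(verify(trail))             # False: tampering detected
```

The point is not the cryptography itself, but that a corrupt buyer can no longer quietly rewrite history: the change either shows up in a new log entry or breaks the chain.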

As an early adopter of both eTransactions and eAuctions in the procurement process in a variety of companies and industries, I have seen the reduction in corrupt practices. I frequently visit the Transparency International website and monitor the Corruption Perception Index for those countries I have worked in over the years that have adopted eBusiness into their business portfolios. What I see is a direct relationship between a country's adoption of eBusiness transactions and its position in the Corruption Perception Index.

Electronic transactions are not the ultimate cure for all evils; however, they certainly help in the battle against corruption, specifically within the procure-to-pay process.

Business Process Analysis – Pitfalls to avoid

October 20, 2012

Successful implementation of BPA will mean you have successfully documented, standardized, harmonized and managed, as well as analyzed and improved, your business processes. Process improvements are aligned with optimization goals, such as cost savings, time savings and quality.
With BPA, you’ll be able to:
Understand the business environment
Identify the strategy and key objectives
Analyze critical success factors
Define and follow standards
Record an enterprise process landscape
Define end-to-end processes
Identify improvement opportunities
Develop to-be concept and processes
Transform the organization
Implement BPM governance model

So what are some of the pitfalls to avoid?

No standards
A variety of process modeling tools are available. Some use Visio®, others ARIS, and some describe their processes in Microsoft® PowerPoint®. Process models are stored on local hard disks; some are on file servers; others cannot be found at all. Everyone uses different objects/shapes to describe the same thing. This is indeed the worst case.

Strategy is strategy and process is process
Management knows that a corporate strategy is important. It takes several meetings to agree on one, but then it stays in the boardroom. If you ask employees what the corporate strategy looks like, you barely get an answer. It's even harder for employees to understand how they contribute to the strategy.

Modeling only the “happy path”
It’s tempting to model only the processes where everything runs smoothly. But if you do this, you can’t find the potential for improvement that lies in the exceptions.

Keeping models secret
Processes are for everyone. Don’t keep them secret in your repository. Share them with your organization or even beyond. But don’t forget the Five Ws.

Why you are modeling? You must ensure the benefits of your model align with corporate objectives.
Who are the customers for the models? An IT designer will have different expectations than a business analyst.
What are you modeling? Is it a sales process, and where does it start and end? What products does it handle?
When will the models be relevant? Distinguish between as-is and to-be processes and consider the lifetime of models.
Where will the models be used? Models published on the intranet need to be visual and fully linked so that people can easily navigate them. Models that will be used for documentation need to rely more on information defined in model/object attributes.

Forgetting input and output
A process consumes input and transforms it into output, hopefully adding value along the way. If you design a process or a process step, make sure you also document the input and the output.
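One benefit of documenting inputs and outputs explicitly is that the flow can then be checked mechanically. The sketch below is illustrative only (the step names and the `check_flow` helper are invented, not part of any BPA tool): it flags any step whose input is neither an external input to the process nor the output of an earlier step.

```python
from dataclasses import dataclass

@dataclass
class ProcessStep:
    name: str
    inputs: set
    outputs: set

def check_flow(steps, external_inputs=frozenset()):
    """Report inputs that are neither external nor produced by an earlier step."""
    available = set(external_inputs)
    undocumented = {}
    for step in steps:
        missing = step.inputs - available
        if missing:
            undocumented[step.name] = missing
        available |= step.outputs
    return undocumented

steps = [
    ProcessStep("Create requisition", {"demand"}, {"requisition"}),
    ProcessStep("Create purchase order", {"requisition", "vendor master"}, {"purchase order"}),
    ProcessStep("Post goods receipt", {"purchase order"}, {"goods receipt"}),
]
# "vendor master" was never declared as external input or produced upstream:
print(check_flow(steps, external_inputs={"demand"}))
# {'Create purchase order': {'vendor master'}}
```

A gap like this in a model usually means an input was taken for granted rather than documented, which is exactly the pitfall described above.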

Not differentiating between model designer and consumer
The person creating a process model should always keep in mind who the consumer will be. A business person has different requirements than an IT person. The best approach is to have one model with different views of it.

Everyone can model everything—no governance
Process transformation needs a process of process management. You need to set up a governance structure around rights and roles; not everyone should have the right to model or change every process. Don’t underestimate the effort of developing and implementing governance, and it is strongly recommended to use technology to support it.
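At its simplest, such a governance structure is an explicit mapping of roles to rights. The sketch below is a hedged illustration only; the role names and actions are invented and will differ in any real BPA tool:

```python
# Illustrative role-to-rights table for process-model governance.
PERMISSIONS = {
    "process_owner": {"view", "create", "edit", "approve", "publish"},
    "modeler":       {"view", "create", "edit"},
    "viewer":        {"view"},
}

def is_allowed(role: str, action: str) -> bool:
    """Return True if the role is granted the action; unknown roles get nothing."""
    return action in PERMISSIONS.get(role, set())

print(is_allowed("modeler", "edit"))     # True
print(is_allowed("modeler", "publish"))  # False: only the process owner publishes
```

However the table is implemented, the point stands: the rights exist in one governed place, rather than in everyone’s head.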