3 Contract Requirements for Automated Warehouse and Logistics Systems


Posted on November 12, 2020 by Esoteric Staffing


Automation systems are often custom designed for customer-specific applications with proprietary equipment, controls, and software. When issues arise, problems related to system performance and software cannot easily be resolved by engaging another supplier.

When purchasing an automated system, there are 3 essential contract requirements that must be defined. Agreeing to what happens when things go wrong is an essential part of smart business and project planning. The essential contract requirements for automated systems are:

  1. Defined performance, reliability, and availability specifications.
  2. Defined testing requirements, including the human factor.
  3. Defined schedule and performance liquidated damages.

Defined System Performance:

Performance, reliability, and availability specifications are critical to define clearly and explicitly. These metrics are challenging to measure when system operation, and hence system performance, relies on human input. Defining the interplay of automation and human factors as they relate to system performance is critical to project success. Human performance dictates system performance. Packing orders. Stacking boxes. Picking parts. Delivering to or taking away from the automation system. Just because it's automated doesn't mean there aren't people!

Performance specifications will indicate metrics such as "picks per hour" or "moves per hour". It is important to define the duration of peak volumes, whether 15 minutes or 15 hours. Accumulation may be required to accommodate production surges. Defining the magnitude and duration of the production surges will dictate system design or set operational constraints. Downstream interruptions of seconds, minutes, or hours may affect upstream operations. Define, document, and understand the interplay of equipment and operations.
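To make the surge arithmetic concrete, here is a minimal sketch of sizing an accumulation buffer. All rates and durations are illustrative assumptions, not figures from any particular system:

```python
# Sketch: sizing an accumulation buffer for a production surge.
# All rates and durations below are illustrative assumptions.

surge_rate = 400         # cartons/hour arriving during a peak
nominal_rate = 250       # cartons/hour downstream can absorb
surge_duration_min = 15  # how long the peak lasts, in minutes

# Cartons that must accumulate because arrival outpaces consumption.
excess_per_hour = surge_rate - nominal_rate
buffer_needed = excess_per_hour * (surge_duration_min / 60)

print(f"Accumulation required: {buffer_needed:.0f} cartons")  # -> 38 cartons
```

The same arithmetic, run against a 15-hour peak instead of a 15-minute one, yields a buffer 60 times larger, which is why the duration of the surge matters as much as its magnitude.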

Reliability and availability metrics should be defined and reasonable. Defining what is reasonable for each application means playing out the numbers in sample scenarios to determine whether the promised performance meets business needs. A reliability value of 99% is commonly used for automation systems. In some systems, that could mean a fault every few minutes. Reliability values may apply to the completion of a process from start to finish or to a single step within a multi-step operation. If there are 5 sequential steps and each step is 99% reliable, the whole operation is NOT 99% reliable... it is 95%.
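The compounding math is easy to check. The short sketch below reproduces the 5-step example from above and shows what a 99% figure can mean in practice; the 360 picks-per-hour rate is an illustrative assumption:

```python
# Sketch: compounding reliability across sequential steps.
steps = 5
step_reliability = 0.99

overall = step_reliability ** steps
print(f"Overall reliability: {overall:.3%}")  # ~95.099%

# What 99% reliability can mean in practice: at a modest pick rate,
# a 1% fault rate produces faults every few minutes.
picks_per_hour = 360           # illustrative assumption
faults_per_hour = picks_per_hour * (1 - step_reliability)
print(f"Expected faults per hour: {faults_per_hour:.1f}")  # 3.6, one every ~17 min
```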

Availability (again, 99% is common) is frequently used to ensure system up-time, and by extension is a barometer of reliability. Reliability and availability metrics typically include exclusions such as response times to arrive at the fault location and downtime due to load condition (pallets, cartons, products, etc.), factors that are usually outside the automation supplier's control. Items such as non-conforming boxes, pallets, stretch wrap, or labels will cause automation system faults. These faults will drive down reliability and availability but aren't included in the supplier guarantees. Will the system still provide benefit for the user and meet operational requirements when these factors are considered? Considering these items will influence system design and investment decisions.
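The gap between guaranteed availability and the availability the operation actually experiences is worth quantifying up front. A minimal sketch, with all durations as illustrative assumptions:

```python
# Sketch: contractual exclusions separate "guaranteed" availability
# from the availability the operation actually experiences.
# All durations below are illustrative assumptions.

scheduled_hours = 160.0     # monthly scheduled run time
supplier_fault_hours = 1.5  # downtime attributable to the supplier
excluded_hours = 4.0        # response time, bad pallets/labels, etc.

# Availability measured against the supplier guarantee (exclusions removed).
contract_availability = 1 - supplier_fault_hours / (scheduled_hours - excluded_hours)

# Availability the operation actually sees (all downtime counts).
operational_availability = 1 - (supplier_fault_hours + excluded_hours) / scheduled_hours

print(f"Contractual availability: {contract_availability:.2%}")   # ~99.04%
print(f"Operational availability: {operational_availability:.2%}") # ~96.56%
```

A system that honors its 99% guarantee on paper may still deliver well under 97% to the operation once excluded downtime is counted, and it is the lower number the business has to live with.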

Defined System Testing & Human Factor:

Defining how humans affect performance is critical. A slow-down in one area may affect an entirely different area of the system. A performance testing strategy needs to be defined that considers the human-machine interface. When humans are involved, how will their performance be measured and integrated into the results that determine success or failure?

One of the challenges when testing automated systems is that they are typically designed to accommodate performance requirements 3, 5, or 7 years into the future. The production capacity with which to test these systems may not exist at go-live. When testing complex systems that utilize autonomous equipment (AMRs, AGVs, Shuttle Systems, etc.), simple time-based measurements cannot be used to extrapolate system performance. A testing strategy should be developed that simulates future-state performance for sufficient duration to provide confidence the system is "future ready".
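Even a back-of-envelope calculation reveals the staging problem. A minimal sketch, assuming hypothetical design-year and current rates:

```python
# Sketch: sizing a future-state test from design-year requirements.
# All rates and durations below are illustrative assumptions.

design_rate = 900   # totes/hour the system must sustain in year 5
current_rate = 500  # totes/hour the live operation produces today
test_hours = 8      # duration chosen to prove sustained performance

totes_required = design_rate * test_hours
shortfall = totes_required - current_rate * test_hours

print(f"Totes needed for the test: {totes_required}")              # 7200
print(f"Totes beyond live volume (staged/recirculated): {shortfall}")  # 3200
```

The shortfall has to come from somewhere: staged inventory, recirculated loads, or simulated orders. Whichever method is chosen, it should be agreed in the contract, not improvised at go-live.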

Define the duration of tests, whether conducted for 1 minute, 1 hour, or 1 day. The documentation should define a testing duration sufficient to prove that the system meets operational requirements. If tests are long in duration, ensure that people and processes can support the testing schedule. What if a human-caused mistake occurs midway through a 24-hour test, causing a ripple effect on performance for several hours? Does the test get repeated? Who pays for the repeated testing?

There are many variables at play. Start with a supplier you can trust. Then document. And verify.

Liquidated Damages for Schedule and Performance:

Damages for delay are normally capped at up to 5% of the contract value for delays caused by the supplier. Delay penalties are common in Europe and uncommon in the USA; in either case, they are rarely enforced even when included. Agreeing on who is at fault in many scenarios is difficult, since delays are frequently related to external factors or to design and scope changes. Damages related to delay help hold parties accountable, particularly when resource constraints arise from the limited availability of specialized labor.
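For illustration, a delay liquidated-damages clause often accrues weekly up to a cap. The weekly rate and contract value below are illustrative assumptions, anchored only to the roughly-5% cap noted above:

```python
# Sketch: a typical delay liquidated-damages schedule.
# Rate and contract value are illustrative assumptions;
# the ~5% cap reflects the norm discussed above.

contract_value = 3_000_000  # USD
ld_rate_per_week = 0.005    # 0.5% of contract value per week of delay
ld_cap = 0.05               # capped at 5% of contract value

def delay_damages(weeks_late: int) -> float:
    """Liquidated damages owed for supplier-caused delay, capped."""
    return min(weeks_late * ld_rate_per_week, ld_cap) * contract_value

for weeks in (2, 6, 12):
    print(f"{weeks} weeks late -> ${delay_damages(weeks):,.0f}")
# 2 weeks -> $30,000; 6 weeks -> $90,000; 12 weeks -> $150,000 (capped)
```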

The less common type of penalty clause covers damages related to system performance. The expectation is that promised performance levels will be met and that the supplier will continue working until the system meets its performance obligations. Normally, a final payment of the last 10% or 15% of the contract value is linked to successful completion of performance tests. It is rarely so cut and dried.

What happens if performance targets can't be achieved no matter the effort expended? What happens if, despite best efforts, reliability cannot be improved beyond 98%? What happens if the contracted performance, whatever the metric, cannot be achieved? Does the failure to meet performance targets mean the final payment is withheld indefinitely? Is 10% or 15% a fair deduction for a system that may not meet customer needs? Is a 15% deduction fair when the system performs at 90% of contracted rates? All of these questions must be considered if suppliers are to be held accountable for delivering system performance.
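To put numbers on that last question, here is a minimal sketch comparing a flat holdback to a pro-rata deduction, using the 10-15% holdback and 90%-of-rate figures from the discussion above; the contract value is an illustrative assumption:

```python
# Sketch: flat holdback vs. pro-rata performance deduction.
# Contract value is an illustrative assumption; the percentages
# come from the 15% holdback and 90%-of-rate scenario above.

contract_value = 3_000_000  # USD
holdback_pct = 0.15         # final payment tied to performance tests
achieved_ratio = 0.90       # system runs at 90% of contracted rate

flat_deduction = holdback_pct * contract_value
pro_rata_deduction = (1 - achieved_ratio) * contract_value

print(f"Flat 15% holdback withheld:   ${flat_deduction:,.0f}")      # $450,000
print(f"Pro-rata 10% shortfall value: ${pro_rata_deduction:,.0f}")  # $300,000
```

The two approaches diverge by $150,000 in this scenario, and neither figure necessarily reflects the business impact of the shortfall. That is exactly why the mechanism should be negotiated explicitly rather than left to a default holdback.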

Perhaps a $3 million automated system that doesn't work as intended compromises a $75 million investment. Does $300,000 withheld from the automation supplier make up for a $75 million investment that can't be fully utilized? There are risk/reward scenarios that are untenable for both customer and supplier. Those risks and rewards need to be reconciled during the contract phase.

Perhaps despite best efforts, the system can only deliver 75% of the promised performance. Does that mean the client only pays 75% of the contract? Or does it mean the system is unusable and needs to be removed? What if the non-performing system is inextricably linked to a larger investment provided by a different supplier? What if the system is so unique there are no other solutions that fit the available space, operational needs, etc.? There isn't a boilerplate answer to suit every situation. A lawsuit rarely satisfies anyone but the attorneys.

Surprisingly, many of these questions are not detailed in purchase contracts. Sometimes due to time constraints. Sometimes due to a lack of experience or an incomplete understanding of the risks. More likely, human optimism gets in the way of rational thought. As the saying goes, "marry in haste, repent at leisure".

Contracts usually indicate that non-conforming systems must be repaired. Rarely do contracts define what happens when a system doesn't meet all the business needs it set out to accomplish. Indeed, all the equipment may be provided and in good condition. Software may function as agreed. And yet, the system fails to deliver the required performance. The answer is rarely to "rip it out" and request a refund; that almost never happens. Usually the system is installed and usable but doesn't meet the intended goals. What happens when the solution falls short of the finish line needs to be answered before placing an order. Don't count on a mediator to solve it equitably after the fact.

Contract Requirements Require Time and Expertise:

The essential steps of defining performance, documenting testing requirements, and sorting out what happens when things go wrong take time. They also take expertise and experience.

You'll need an expert. An independent 3rd party. Perhaps a single expert or a small company. Someone with the experience to know what details are missing. Someone with expert intuition who can look at operations, designs, and proposed solutions and judge the likelihood of success. That expert likely has 20+ years of experience. They won't be cheap, but they'll keep your project on the rails. In the best scenario, you'll never know your project was at risk of derailment. Don't gamble with your company or career. Find your expert here.
