Simple Advanced Controls in the Distributed Control System

July 8, 2008

By Greg Martin

Over the past 30 years several simple advanced control methodologies have been invented out of necessity by process control consultants. In the early days these applications were implemented in a supervisory computer. Now they can be implemented in the distributed control system (DCS). Applications using these methodologies are easy to understand, quick to implement and commission, and are virtually maintenance-free. Moreover, they are readily accepted by management and operations.

Four examples are provided with applications in a fuel ethanol plant.

Figure 1. DPC, indicated by the top pair of lines in each graph, versus PID robustness

Nonlinear Level Control
Nonlinear level control is used when the flow that is adjusted to control level is the feed to a downstream process, and it is desired to maintain the flow as constant as possible to avoid upsetting the downstream process.

This method uses a variation of the classic proportional, integral, derivative (PID) algorithm. While the controller error stays inside a designated "gap," the reset tuning parameter is reduced. The effect is that the level is allowed to wander inside the gap, with only small changes to the flow, and reset action is increased whenever the error wanders outside the gap.
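To make the gap mechanism concrete, here is a minimal sketch in Python of a velocity-form PI calculation that scales down the reset contribution while the level error is inside the gap. The gains, gap width, and scaling factor are illustrative assumptions, not values from any particular DCS configuration.

# A minimal sketch of gap-action level control using a velocity-form PI loop.
def gap_pi_output(error, error_prev, output_prev,
                  kp=0.5, reset_per_min=0.2, dt_min=0.1,
                  gap=5.0, gap_reset_factor=0.1):
    # Weak reset inside the +/- gap band, full reset outside it.
    reset = reset_per_min * (gap_reset_factor if abs(error) <= gap else 1.0)
    # Incremental PI calculation: small flow moves while the error is inside the gap.
    delta = kp * (error - error_prev) + reset * error * dt_min
    return output_prev + delta

# A 3 percent level error inside a 5 percent gap barely moves the bottoms flow
# setpoint; an 8 percent error outside the gap drives a much larger correction.
print(gap_pi_output(error=3.0, error_prev=3.0, output_prev=50.0))
print(gap_pi_output(error=8.0, error_prev=8.0, output_prev=50.0))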

An example application of nonlinear level control would be the bottoms level control on the second column of the distillation section, since the bottoms flow that is adjusted to control that level is the feed to the molecular sieve section. A more constant feed to the molecular sieve section makes that part of the plant easier to manage.

For PID control, when the disturbance is introduced, the level rises somewhat and the controller adjusts the bottoms flow to compensate and bring the level back to its setpoint. In the simulation the bottoms flow is a negative number, so an increase in flow appears as a decrease on the plot trend. While the PID control action is effective, it may be preferable to let the level wander farther from the setpoint and adjust the bottoms flow less, since the bottoms flow in this example is the feed to the molecular sieve section.

Under nonlinear level control, the same disturbance is introduced and the level gradually rises, with the bottoms flow adjusted very little. When the level finally exceeds the high side of the gap, the reset is increased, which is apparent in the level turning back toward setpoint and, even more clearly, in the bottoms flow being adjusted at an increased rate. The objective of nonlinear level control is achieved: the bottoms flow is adjusted less severely and the level is allowed to wander more.

Constraint Projection
Constraint projection is effective when pushing constraints. It improves profits by keeping the process at the most limiting of a number of possible constraints.

Constraint projection applies to a situation where one manipulated variable influences more than one potential controlled variable (constraint variable). The method calculates the manipulated variable move to place the most limiting of the constraint variables at its constraint limit.

An example would be molecular sieve gallons as the manipulated variable, with inlet temperature and back pressure as potential constraint variables. The constraint projection application maximizes gallons to the most limiting of the minimum inlet temperature (e.g., 300 degrees Fahrenheit) and the maximum back pressure (e.g., 30 inches of water).

Each potential constraint variable is compared with its constraint limit, and the corresponding manipulated variable move that would put that constraint variable at its limit is calculated. The minimum of the calculated moves in gallons is the output of the constraint projection calculation.

The reason this method is called constraint projection is that each constraint variable move is "projected" back onto the manipulated variable using the corresponding steady-state gain.
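As a concrete illustration, the Python sketch below computes the projection for the molecular sieve example; the limit values and steady-state gains are hypothetical numbers chosen only to show the arithmetic.

# A minimal sketch of the constraint projection calculation.
def constraint_projection(current_mv, constraints):
    # constraints: list of (current value, limit value, steady-state gain per unit of MV)
    moves = []
    for cv_value, cv_limit, gain in constraints:
        # MV move that would place this constraint variable exactly at its limit.
        moves.append((cv_limit - cv_value) / gain)
    # The most limiting (smallest) allowable move wins.
    return current_mv + min(moves)

# Hypothetical example: maximize molecular sieve gallons against a 300 F minimum
# inlet temperature and a 30 inches-of-water maximum back pressure.
constraints = [
    (310.0, 300.0, -0.8),   # inlet temperature falls 0.8 F per additional gallon
    (26.0, 30.0, 0.5),      # back pressure rises 0.5 inch per additional gallon
]
print(constraint_projection(current_mv=100.0, constraints=constraints))

In this hypothetical case the back pressure limit allows the smaller gallons increase, so it sets the move.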

Constraint projection moves are always based on the most limiting constraint.

Evolutionary Optimization
Evolutionary optimization is applicable when an operating objective can be expressed as a function that includes the manipulated variable. This method minimizes (or maximizes, as desired) the objective by repeatedly making moves to the manipulated variable that improve the objective function value.

Evolutionary optimization does not use a complicated optimization algorithm. It simply makes a move to the manipulated variable in one direction, waits for the process to line-out, and checks to see if the objective function value improved. If it did, the method makes another move to the manipulated variable in the same direction as the first move. If the objective function value did not improve, it reverses direction and makes a move to the manipulated variable in the direction opposite that of the first move.

The method proceeds in this way until a valley (or peak) is confirmed by objective function values. Evolutionary optimization then uses a simple rule to make a final move to approximate optimization of the objective function.
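The search logic fits in a few lines. The Python sketch below follows the move, wait for line-out, evaluate, and reverse pattern described above; the step size, move limit, and the simple settle-on-the-best-point ending are assumptions standing in for whatever final-move rule a real application would use.

# A minimal sketch of the evolutionary optimization search.
def evolutionary_optimize(evaluate, mv_start, step, max_moves=20):
    mv = mv_start
    best = evaluate(mv)
    direction = 1.0
    improved_once = False
    for _ in range(max_moves):
        trial = mv + direction * step
        value = evaluate(trial)      # in practice, wait for the process to line out first
        if value < best:             # objective improved: keep moving in this direction
            mv, best = trial, value
            improved_once = True
        elif not improved_once:      # first move was a bad guess: reverse direction
            direction = -direction
        else:                        # valley passed: settle on the best point found
            break
    return mv

# Example with a known quadratic objective whose minimum is at 60.0.
print(evolutionary_optimize(lambda x: (x - 60.0) ** 2, mv_start=55.0, step=1.0))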

An example objective function would involve the calculation of slurry solids percent based on slurry density, dry milled corn, water, backset and alpha amylase enzymes. The function used for evolutionary optimization is the squared error between the slurry solids percent setpoint and the calculated slurry solids percent. The manipulated variable is dry milled corn. The evolutionary optimization application adjusts dry milled corn up or down until the calculated slurry solids percent approximates the setpoint, thus minimizing the objective function.
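A hypothetical version of such an objective is sketched below; the solids calculation is a stand-in for the plant-specific calculation from density, corn, water, backset and enzyme flows, and the setpoint and flow numbers are invented for illustration. An objective of this form could be handed directly to the search sketch shown earlier.

# Hypothetical stand-in for the plant's slurry solids calculation.
def estimate_slurry_solids(dry_milled_corn, water=500.0):
    return 100.0 * dry_milled_corn / (dry_milled_corn + water)

# Error-squared objective: smallest when the calculated solids match the setpoint.
def slurry_objective(dry_milled_corn, solids_setpoint=32.0):
    return (solids_setpoint - estimate_slurry_solids(dry_milled_corn)) ** 2

# The objective shrinks as the corn rate approaches the rate that satisfies the
# solids setpoint, so repeated up or down moves can home in on it.
for corn in (200.0, 220.0, 235.0):
    print(corn, round(slurry_objective(corn), 3))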

The name "evolutionary" comes from the fact that the method simply evolves the manipulated variable value by repeated moves to find the optimum.

The first move made by evolutionary optimization is just a guess: the manipulated variable is increased by a specified amount, x. The objective function is recalculated and the error is found to increase. Evolutionary optimization then reverses direction and decreases the manipulated variable by 2x. The error is found to decrease, and another move is made to decrease the manipulated variable by x. This is followed by a series of moves to decrease the manipulated variable, each of which further decreases the objective function.

Finally, the minimum of the objective function is passed and evolutionary optimization reverses again. This sequence of moves adjusts dry milled corn to drive the calculated slurry solids percent to the desired setpoint.

Distributed Predictive Control
Distributed predictive control (DPC) is a single-variable model predictive controller that is effective for control loops with significant time delay, such as analyzer loops or some temperature control loops. The performance of DPC in these cases is two to four times better than that of a PID application. DPC can increase profit accordingly when the application is to maintain an economically important setpoint or push an economically important constraint.

DPC is designed specifically for application in a microprocessor, such as is common in modern DCSs. It uses minimum storage for the predictions, since all response elements and predictions are calculated on-line. This gives DPC adaptive and nonlinear capability.
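The general idea of single-loop, model-based predictive control with dead time compensation can be illustrated with the Python sketch below. This is not the DPC algorithm itself; it is a generic sketch that assumes a first-order-plus-dead-time process model, with illustrative parameter names and a simple detuning factor in place of DPC's actual calculations.

import math
from collections import deque

# A minimal single-loop predictive controller on a first-order plus dead time model.
class SimplePredictiveLoop:
    def __init__(self, gain, tau, deadtime_steps, dt, detune=0.3):
        self.gain = gain                              # model steady-state gain
        self.alpha = math.exp(-dt / tau)              # first-order lag per control interval
        self.past_mv = deque([0.0] * deadtime_steps)  # MV values still inside the dead time
        self.model_cv = 0.0                           # model prediction of the controlled variable
        self.mv = 0.0
        self.detune = detune                          # fraction of the target move taken per scan

    def update(self, setpoint, measured_cv):
        # Advance the internal model: the MV value emerging from the dead time drives the lag.
        delayed_mv = self.past_mv.popleft()
        self.past_mv.append(self.mv)
        self.model_cv = self.alpha * self.model_cv + (1.0 - self.alpha) * self.gain * delayed_mv
        # Feedback correction: difference between the measurement and the model prediction.
        bias = measured_cv - self.model_cv
        # MV value whose predicted steady state sits exactly at the setpoint.
        mv_target = (setpoint - bias) / self.gain
        # Take only a fraction of the move each scan, which trades speed for robustness.
        self.mv += self.detune * (mv_target - self.mv)
        return self.mv

At steady state, when the measurement sits at the setpoint, the bias correction makes the target equal to the current output and no further move is called for, so this kind of loop holds setpoint without offset even when the model gain is imperfect.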

An example of a suitable application for DPC would be control of the water content of ethanol in the distillation section, using analyzer feedback.

DPC has another advantage over PID: robustness. This means that the estimated model used in the DPC can be significantly different from the actual process model without DPC performance being significantly degraded. In other words, the engineer's estimate of the process response need not be as accurate for DPC as for an equivalent PID application. This is demonstrated in Figure 1, where the model gain is underestimated by a factor of 2.5. On both sides of Figure 1, the upper plot is DPC and the lower is PID.

On the left the model gain is 1 and both DPC and PID have been tuned to respond similarly to a step change in the setpoint. The controlled variable is blue and the manipulated variable is red. On the right side of Figure 1 the tuning for both DPC and PID is held constant and the process gain is increased to 2.5. This gives the DPC application an oscillation that damps out after two cycles, an acceptable result. The PID application, however, is oscillatory; in fact, it borders on instability, which is definitely an undesirable result.

This demonstrates the robustness property of a DPC application relative to an equivalent PID application.
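For readers who want to experiment with the idea behind Figure 1, the following self-contained Python sketch runs a PI controller tuned for a unity-gain, first-order-plus-dead-time process against a process whose gain is 2.5 times larger, with the tuning held constant. The process and tuning numbers are illustrative assumptions, not the values behind Figure 1.

from collections import deque

# Closed-loop setpoint response of a PI controller on a first-order plus dead time process.
def simulate_pi(process_gain, kc=0.8, ti=5.0, tau=5.0, deadtime=3, dt=1.0, steps=80):
    cv, integral = 0.0, 0.0
    delayed = deque([0.0] * deadtime)          # MV values still inside the dead time
    setpoint, trajectory = 1.0, []
    for _ in range(steps):
        error = setpoint - cv
        integral += error * dt / ti
        mv = kc * (error + integral)           # positional PI calculation
        delayed.append(mv)
        u = delayed.popleft()                  # MV value acting on the process now
        cv += dt / tau * (process_gain * u - cv)   # first-order process update
        trajectory.append(cv)
    return trajectory

# Same tuning in both cases; only the process gain changes.
print("peak CV, matched gain:     ", round(max(simulate_pi(process_gain=1.0)), 3))
print("peak CV, 2.5x process gain:", round(max(simulate_pi(process_gain=2.5)), 3))

Comparing the two peak values gives a feel for how quickly a fixed PID tuning loses margin as the process gain drifts away from the gain it was tuned for.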

Summary
Four examples have been provided of simple advanced control and optimization applications: three control and one optimization. These applications can be implemented in the DCS. They are easy to understand and quick to implement and commission, and are virtually maintenance-free. Moreover, they are readily accepted by management and operations.

Greg Martin is an independent consultant in the automation industry. Reach him at greg@gregmartinconsulting.com or (512) 864-3822. Derek Peine, vice president of Heartland Ethanol LLC, provided the author with ethanol industry specific examples.
