PROCESS OPTIMIZATION AND MODEL BASED
CONTROL IN PULP INDUSTRY

Authors

Johan Jansson *) **), Tomas Lindberg *) **), Erik Dahlquist *), Ulf Persson **)

Companies

*) Mälardalen University, Box 883, SE-721 23 Västerås, Sweden
**) ABB Automation, SE-721 67 Västerås, Sweden

Keywords

process control, pulp mill, optimisation, simulation, diagnostics, models

ABSTRACT

The paper covers the use of an overall mathematical model of the whole pulp mill for on-line simulation and optimization, together with more advanced physical models of specific process sections for control and on-line diagnostics.

BACKGROUND

Today ABB uses the product AutoCook to control digesters. AutoCook is an advanced, process-oriented control system developed to control, monitor and report the performance of continuous and batch digesters in kraft pulping, where lignin is dissolved from the fibers in wood chips. The system enables mill personnel to achieve efficient, high-quality pulp production. AutoCook uses tables in which the operator can see suitable temperatures and flows for all loops around the digester for a given production rate and quality. Behind these tables, models calculate the values. Models are also used, for example, to track chips and pulp throughout the whole fiber line. The tracking calculates the movement of chip/pulp "packages" downwards through the impregnation and digester vessels, and is used by many of the control functions to make sure that the right action is taken at the right time. During species or grade changes, the quality-dependent nominal set points are not changed until the new quality has reached the level in the digester where the control action is to take place.

One example of where the tables are used is when AutoCook changes the production rate or the wood species without producing more off-spec pulp than necessary. Each wood species and pulp grade has its own set of pulp-tracking parameters.

The trend today is to optimize the whole mill, not only towards production and quality but also towards, for example, minimization of energy and chemical consumption and of effluents. Most of the models are first-principles (physical) process models. These are used to simulate the process as part of the optimization. With a good process model it is possible to implement more advanced control such as model predictive control (MPC), and to use 2-D or 3-D dynamic on-line models for diagnostic purposes; hang-ups and channeling, for example, are of interest to identify in a continuous digester.

For on-line simulation of the whole mill, ABB has developed a Pulp and Paper On-line Production Optimizer. This optimizer works like an "overall mill management system", giving set points to each process section. The models will be on the same software platform as the more advanced models, for example the digester model presented in this paper.

Process Optimization and Control

In this paper we focus on the use of adaptive physical models both for on-line control and for production planning over the next 24 hours.

Mill Optimization

At the overall mill level we want to optimize and control the chemical balance in a complex pulp mill. The first application is the control of sulfur in an integrated kraft and NSSC (neutral sulfite semi-chemical) pulp mill. This type of integrated recovery system is called "cross recovery", as sulfur is passed between the two lines. It is important to adjust the dosage of sulfur to control the sulfidity, so that there is enough but not too much, and to keep the balance between different parts of the plant. This has been done with physical models that have been combined into a plant model, which is then used to make a production plan that keeps the sulfur balance at its optimum. The system has recently been implemented at Billerud's Gruvön mill in Sweden, which typically produces 630 000 metric tonnes of paper a year.

Modeling

The modeling is based on a description of all components in the system as objects. The object type groups are streams, production units, measurements and calculated properties in streams.

Some examples of production units are digester, pulp tank, black liquor tank, limekiln and paper machine. Stream types may for example be pulp, white liquor, black liquor or wash liquor. Examples of measurements are pulp flow, tank level, sulfidity, effective alkali, reduction efficiency, black liquor density or alkali charge.

The streams are described by the total flow and by the concentrations of the different components that are relevant for each stream. Examples of stream components are fiber, dissolved organic solids, hydroxide, sulfide, carbonate, sulfate and sulfur. These relations may be considered as a dynamic mass balance and form a system of differential and algebraic equations of the form

    dx/dt = f(x, u, w)        (1)

where

dx/dt denotes the time derivative of the state x,

w denotes unmeasurable disturbances and accounts for modelling uncertainty, and

u denotes measurable disturbances, which in our case are equivalent to the variables that can be manipulated by the operators or the control system.

The production units are connected with streams in a flow sheet network. Measurements are connected to both production units and to streams. The pilot mill system with production units, streams and measurements has the following size:

  • 25 production units
  • 38 buffer tanks
  • Approx. 250 streams or pipes
  • Approx. 250 measurements or observations
  • Approx. 2500 variables

The levels in buffer volumes change as a function of the flows in and out. Tanks are rigorously modelled assuming ideal mixing as

    dM/dt = Fin - Fout        (2)
    d(M*c)/dt = Fin*cin - Fout*cout

where Fin and Fout denote the incoming and outgoing total flows, M denotes the total mass in the tank, and cin and cout denote the incoming and outgoing concentrations; with ideal mixing, cout equals the concentration in the tank. One equation of the second type is used for each component. More about this is described in Persson et al., 2003 [1].
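As a minimal sketch, such a tank balance can be stepped forward in time with an explicit Euler scheme (the function and variable names here are ours, for illustration only):

    import numpy as np

    def step_tank(M, c, F_in, c_in, F_out, dt):
        """One forward-Euler step of an ideally mixed tank (equation 2).

        M     : total mass in tank
        c     : array of component concentrations in tank (= outgoing, ideal mixing)
        F_in  : incoming total flow
        c_in  : array of incoming concentrations
        F_out : outgoing total flow
        dt    : time step
        """
        # Total mass balance: dM/dt = Fin - Fout
        M_new = M + dt * (F_in - F_out)
        # Component balance: d(M*c)/dt = Fin*cin - Fout*c (cout = c for ideal mixing)
        Mc_new = M * c + dt * (F_in * c_in - F_out * c)
        return M_new, Mc_new / M_new

    # Example: a tank of 100 units mass, two components, one simulation step
    M, c = step_tank(100.0, np.array([0.4, 0.1]), 5.0, np.array([0.5, 0.0]), 4.0, 0.1)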

The optimization of the plant is aiming at minimizing:

  • Variation in sulfidity
  • Cost of sodium hydroxide and sulfur usage
  • Production losses due to up and down stream disturbances
  • Variation in sodium stock
  • Variation of the distribution of sodium between white and black liquor

The optimization estimates what value each variable should have during each time step over the time horizon we want to control. It is not allowed to violate limits such as high or low levels in tanks, feasible flows in pumps and valves, etc.; this is where a computerized system is really needed to keep track of all these constraints. The optimization schedule is then updated on a frequent basis to handle new situations, such as a need to change priorities in the production at the paper mill, or equipment having to be replaced. Each time a new optimization calculation is made, a starting point for where the process is right now is needed. This is not as clear-cut as one might believe, as the process is seldom as stable as one would like. To handle this, a technique called moving-horizon estimation (MHE) is used [2].

This means that we include a number of time steps and make an average over these. For the next time step we remove the oldest sample, add the newest, and compute a new average, and this is repeated at every new time step. In this way we can filter out noise from sensors as well as actual upsets in the process. This is normally positive, as the optimization is not used to make changes on a very short time scale. Instead we get a better view of "stable values", so that the optimization starts from more correct values than if only the momentary measurements were used. If the data are processed further, this calculation can also be used to detect process upsets and faulty sensors.
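A minimal sketch of the windowing idea (a simplified stand-in for full moving-horizon estimation [2]; the class and names are ours):

    from collections import deque

    class MovingWindowEstimate:
        """Moving-window average used as a stable starting point for
        re-optimization. Appending beyond maxlen automatically drops
        the oldest sample, exactly as described in the text."""

        def __init__(self, n_steps):
            self.window = deque(maxlen=n_steps)

        def update(self, measurement):
            self.window.append(measurement)           # newest in, oldest out
            return sum(self.window) / len(self.window)  # filtered estimate

    # Example: a 6-step window over noisy sulfidity measurements
    est = MovingWindowEstimate(6)
    for y in [34.8, 35.3, 34.9, 36.0, 35.1, 34.7, 35.2]:
        x0 = est.update(y)   # x0 is the starting point for the next optimization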

Optimization

The production-planning problem is formulated similarly to a model predictive control problem [3]. The optimization criterion consists of different quadratic and linear terms, and the following functions are included in the objective function:

1. Minimization of the deviation from set-point trajectories of any of the manipulated variables. One example is the paper machine production plan that is incorporated in the optimization with production rates and recipes. These set points are automatically imported from the paper machine scheduling system. The pilot mill very often replans the paper machine, typically every second hour.

2. Minimization of the deviation from set-points trajectories of any of the measurement or soft-point variables. One example is the white liquor sulfidity.

3. Minimization of the deviation from set-point trajectories of any of the state variables. One example is the total sodium stock in the recovery cycle and the part of the sodium stock distributed on the white side. Another common example is a tank level set-point.

4. Minimization of the rate of change of the manipulated variables.

5. Minimization of linear costs. One example is make-up chemical cost.

6. Maximization of linear revenues associated with a state variable. One example is revenue by producing pulp during periods of overcapacity in pulp production compared to paper production.

This optimization criterion is minimized subject to the following constraints:

1. Process model

2. Sensor model

3. Upper and lower bounds on state variables. (may be a time dependent trajectory)

4. Upper and lower bounds on manipulated variables. A maintenance stop is introduced in the optimization by adjusting the upper bound of the production rate during the stop time for a production object e.g. digester.

5. Maximum and minimum sensor values.

6. Upper and lower bounds on changes of the manipulated variables. Changes in the manipulated variables are sometimes bounded by hard constraints to prevent the resulting trajectories from being too aggressive. One example is that a digester should not change its production rate faster than a certain rate per hour.

7. The production planning should start from the current estimated value from the state estimation.

The result is set points for all important controlled variables for the coming hours; a minimal sketch of such a planning problem is given after this paragraph.
During steady-state periods, the buffer tank volumes are most likely not needed to even out disturbances. It is then preferable to adjust tank levels in a way that statistically minimizes the production loss around each tank, by considering the intensities of stops that propagate through buffer tanks to adjacent production units. Long-term set points for all buffer tank levels are computed that minimize the production loss, considering the time between failures and the time to repair for each process section.
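To make the structure concrete, the following is a minimal sketch of how such a finite-horizon planning problem can be posed as a quadratic program, here for a single buffer tank using the cvxpy modelling library. All numbers, variable names and the one-tank model are illustrative assumptions, not the mill system's actual formulation.

    import cvxpy as cp
    import numpy as np

    T = 24                      # planning horizon, hours
    F_in = cp.Variable(T)       # manipulated variable: flow into a buffer tank
    L = cp.Variable(T + 1)      # state: tank level
    F_out = np.full(T, 50.0)    # downstream demand from the paper machine plan
    L0, L_ref = 60.0, 50.0      # current estimated level and level set point
    A = 10.0                    # tank cross-section (level units per flow unit)

    constraints = [L[0] == L0]  # start from the current state estimate
    for t in range(T):
        # Process model: tank mass balance (equation 2, discretized)
        constraints += [L[t + 1] == L[t] + (F_in[t] - F_out[t]) / A]
    constraints += [L >= 10, L <= 90,                 # level bounds
                    F_in >= 0, F_in <= 80,            # pump/valve bounds
                    cp.abs(cp.diff(F_in)) <= 5]       # rate-of-change bound

    # Quadratic penalties on set-point deviation and moves, plus a linear cost
    objective = cp.Minimize(cp.sum_squares(L - L_ref)
                            + 0.1 * cp.sum_squares(cp.diff(F_in))
                            + 0.01 * cp.sum(F_in))
    cp.Problem(objective, constraints).solve()
    print(F_in.value)           # recommended flow schedule for the coming hours

The real problem stacks terms of this kind for roughly 2500 variables over all units and streams, but the pattern of quadratic set-point terms, linear cost terms and bound constraints is the same.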

Simulation

To check the feasibility of the way the process is currently operated, a simulation is made to see how long it can be operated before some limit is violated. This is used as a support for the operators.

Results

An example of a process display from the mill is seen in figure 1. Here we can see, to the left, the history of the levels in the tanks in the fiber line; to the right of this graph, the optimized values recommended for the future production strategy; and, far to the right, the actual level right now. This display gives the operator a quick and good overview of the process status and recommended operations.

Figure 1

Figure 1. Historic trend of levels, optimized values for the future, and the actual level right now, seen from left to right, for the fiber line.

In figure 2 we can see a bottleneck analysis. This gives the operator a quick overview of where the bottlenecks will appear if operations proceed without any changes to the set points.
Finally we look at a single variable, in this case the sulfidity of the white liquor, in figure 3. To the left of the dotted vertical line we have the estimated value of the sulfidity. The horizontal line at 35 % is the actual last measurement of the sulfidity. The difference arises because the concentration varies in different parts of the system.

Figure 2

Figure 2. Bottleneck analysis.

To the right we first have a straight, dark line, which predicts what the value would be without any actions. The brighter curve shows the optimized value if actions are taken for control.

Figure 3

Figure 3. Actually measured value of the sulfidity, 35 %, at one spot. To the left is the estimated real sulfidity in the system; to the right, the predicted value if no action is taken (dark) and the optimized value (bright).

Digester performance optimization

The next level of optimization is to look inside each process section, in our case a continuous digester. The overall set points for the digester are obtained from the mill optimization schedule. For digester operations we have been working with physical modeling of the digester in 1-D and 2-D, including pressure drops inside the digester in the 2-D case.

Model predictive control, 1-D model

The 1-D model has been used for model predictive control (MPC) of the digester, by optimizing the flow in each circulation loop. This keeps control over temperature and chemical addition according to the real demands.
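A minimal receding-horizon sketch of the idea follows, using a made-up linear kappa response to circulation flow; the toy model and its coefficients are our assumptions, not the actual 1-D digester model:

    import numpy as np
    from scipy.optimize import minimize

    def simulate(kappa0, u_seq, a=0.9, b=-2.0):
        """Toy stand-in for the 1-D model: kappa responds first order to
        the circulation-loop flow u (illustrative coefficients)."""
        kappa, out = kappa0, []
        for u in u_seq:
            kappa = a * kappa + b * u + 5.0
            out.append(kappa)
        return np.array(out)

    def mpc_move(kappa0, kappa_ref, horizon=6, u_max=2.0):
        """Solve a finite-horizon problem, return only the first move."""
        cost = lambda u: np.sum((simulate(kappa0, u) - kappa_ref) ** 2) \
                         + 0.1 * np.sum(np.diff(u, prepend=u[0]) ** 2)
        res = minimize(cost, x0=np.zeros(horizon),
                       bounds=[(0.0, u_max)] * horizon)
        return res.x[0]

    # Each control interval: re-estimate kappa, solve, apply the first move
    # as a flow set point to the circulation-loop PID, then repeat.
    u = mpc_move(kappa0=35.0, kappa_ref=30.0)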

The main advantage of MPC is its ability to handle multivariable processes with strong interaction between process variables, and with constraints involving both process and state variables. An example of the results with MPC compared to operating according to "normal practice" is shown in table 1 below.

Table 1

Table 1. Optimized set-points from MPC compared to "normal practice" operations

2-D model for diagnostics purposes

The 2-D physical model of the digester takes into consideration the pressure drop inside the digester due to the channels between the wood chips. When the chips are mostly large, the channels between them are large, with a low pressure drop for fluid flowing between them. When there is a large amount of fines, pins etc., the channels between the chips shrink and the pressure drop increases. This is what happens in reality when the chip size distribution or the packing in the digester varies. One reason may be that the chip size distribution is inhomogeneous, or that the chip screw in the stack does not feed in a constant way. When there are many flakes, these may adhere to the screens and cause hang-ups. Aside from causing an increased pressure drop over the screens, the chips will also get different residence times and contact with liquors of different concentrations of both chemicals and dissolved organics. This may cause significant variation in the kappa number of the final fibers.

By identifying pressure drops, residual concentrations of chemicals in the liquors, temperatures and flows, and comparing actual results to those predicted by the model, we can tune the model to match reality. This assumes that a good process performance has first been achieved. The model can then be used both to optimize performance, by adjusting e.g. temperature and chemical dosage, and to back-flush screens to avoid hang-ups before the problems become severe. There is also a potential for detecting channeling in time to go in and adjust, although this demands regular measurements as well.
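The paper does not state which pressure-drop correlation is used; one common choice for flow through a packed bed of chips is the Ergun equation, sketched here with illustrative property values:

    def ergun_dp(u, d_p, eps, mu=1.0e-3, rho=1000.0, L=1.0):
        """Pressure drop [Pa] over a bed of length L [m] by the Ergun equation.
        u: superficial liquor velocity [m/s], d_p: effective chip size [m],
        eps: bed void fraction. More fines -> smaller d_p and eps -> higher dP."""
        visc = 150.0 * mu * u * (1 - eps) ** 2 / (d_p ** 2 * eps ** 3)
        inert = 1.75 * rho * u ** 2 * (1 - eps) / (d_p * eps ** 3)
        return L * (visc + inert)

    # Halving the effective chip size roughly quadruples the viscous pressure drop:
    print(ergun_dp(0.01, 0.02, 0.45), ergun_dp(0.01, 0.01, 0.45))

This captures the qualitative behaviour described above: finer material packs tighter and raises the pressure drop across the bed and the screens.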
Both the 1-D and the 2-D model need to be tuned with real process data. The border between statistical models and physical models may not be that clear: if we introduce a number of parameters into a physical model and these have to be tuned with plant data, it is in reality a combined physical and statistical model. The advantage is that we get the robustness of the physical model but can make use of statistics actually relating to the process at hand.

To do this we first have to consider what input data to use to update the model. Which values are important for the model? Which data are reliable, so that we do not feed in "scrap information"?

When it comes to the digester application, we need information on the kappa number of the fibers as a function of operational data, as well as data on what fibers are introduced (chip size, type of wood, etc.). The operational data primarily include capacity, concentrations of chemicals and temperatures. From these data we can calculate, among other things, the residence time of the chips.

We can get the kappa number of what is coming out of the digester, but normally not from inside the reactor (although some digesters do have sample points inside the actual digester as well).

Liquors from the extraction lines are still available, and their composition with respect to residual NaOH, NaHS and dissolved lignin may be measured on-line or in the lab.

Averaging gives a filtered value but also the standard deviation, which is of great value for determining the process performance.

When we want to update the reaction rates with respect to the dissolution of lignin from the fibers, we have to simplify some factors, as it would otherwise be too complex. First we group the collected data into classes. The first class level is the quality of the pulp: if we run both short fiber and long fiber, we keep these two as the primary classes, and if we also have different qualities of short and long fibers, we make subgroups for these as well. In the long run it may hopefully be possible to determine on-line what type of fibers is fed, by measuring e.g. the NIR spectrum of the chips on the conveyor belt feeding the digester. A lot of research has been performed to find good correlations between NIR spectra and chip quality (e.g. Axrup [4]) as well as between NIR spectra and pulp quality (e.g. Liljenberg [5]). Principally we calculate the H-factor:

    H = integral of exp(43.2 - 16115/T) dt        (3)

where T is the cooking temperature in Kelvin and the integration is over the cooking time.
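As a minimal numerical sketch, the H-factor can be evaluated by trapezoidal integration of the relative reaction rate over the recorded temperature profile (the standard Vroom constants are assumed here):

    import numpy as np

    def h_factor(t_hours, T_kelvin):
        """H-factor by trapezoidal integration of the relative reaction
        rate exp(43.2 - 16115/T) over the cooking time (equation 3)."""
        rate = np.exp(43.2 - 16115.0 / np.asarray(T_kelvin))
        return np.trapz(rate, t_hours)

    # Example: 2.5 h at a constant 443 K (170 degC) gives H of roughly 2300
    print(h_factor([0.0, 2.5], [443.0, 443.0]))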

Still, this does not include the effects of varying concentrations of chemicals, and thus we use the following expression instead:

    dL/dt = a * [NaOH]^b * [NaHS]^c * exp(d - f/T)        (4)

Here we note that there is normally approximately 21-27 % lignin in the wood chips (see table 2 below). The constant "a" represents the wood properties, including density and size; a large chip will have a lower "a" than a small one.

By integrating over the whole digester we principally get the total lignin dissolution, assuming the washing is perfect. If the washing is less good we have to adjust for that as well. Alternatively, we can just make a binary notation of whether the washing is working as it should or not, and thus avoid treating it in the model as an analogue feature that needs to be updated.

Table 2

Table 2. Wood composition


From this we can estimate the relation between kappa number and lignin content as:

    % lignin in dry pulp = kappa number * 0.18

Softwood pulp that is to be bleached further is normally cooked to approximately kappa 30 (corresponding to about 30 * 0.18 = 5.4 % residual lignin), with a yield of approximately 40 %, while unbleached kraft for liner is cooked to kappa 90-110, with a yield around 58-60 %. For sack paper and some other applications, kappa 40-50 is the target, and the yield becomes around 50 %. These are rough values.

What we do now is to calculate the lignin dissolution rate over the whole digester and divide by the residence time in the cook zone of the reactor. From this we can calculate the constants a, b, c, d and f. In practice this can be done iteratively, to keep the constants updated.

For a pine (Pinus) species we got a reduction of lignin plus hemicellulose by 40 percentage points from the original 55 %, with a residence time of 3.3 hours at 145 °C. This corresponds to a reduction rate of 12.0 %/hour, which we use as dL/dt = 0.120. In this case we started with 0.9 moles/l of NaOH and 0.45 moles/l of NaHS.

The constants b = 0.5 and c = 0.4 are taken from the literature as suitable values; d = 18.33 and f = 5000 are calculated from process data where operation was done at different temperatures. The constants here are for a fast-reacting wood species. From this we can calculate a = 2.986*10^-4, and we have the formula:

    dL/dt = 2.986*10^-4 * [0.95]^0.5 * [0.45]^0.4 * exp(18.33 - (5000/T)),

where T is in Kelvin (K).

This is fed into each volume element of the model and calculated over the whole digester, using the actual temperatures as well as the actual calculated concentrations. If the simulated overall rate is 0.125 instead of 0.12, we reduce the constant "a" at the temperature we have decided should be the normal set point; a sketch of this updating is given below. Once process data for a number of different temperatures have been collected, we recalculate the constants d and f. The constants b and c are kept fixed.
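A minimal sketch of this volume-element calculation and the rescaling of "a" follows; the concentration and temperature profiles below are made-up illustrations, not mill data:

    import numpy as np

    def dL_dt(naoh, nahs, T_kelvin, a=2.986e-4, b=0.5, c=0.4, d=18.33, f=5000.0):
        """Lignin dissolution rate per volume element (equation 4)."""
        return a * naoh**b * nahs**c * np.exp(d - f / T_kelvin)

    n = 20
    naoh = np.linspace(0.9, 0.5, n)    # mol/l, NaOH consumed down the digester
    nahs = np.linspace(0.45, 0.30, n)  # mol/l
    T = np.full(n, 418.15)             # 145 degC in Kelvin
    res_time = 3.3                     # hours in the cook zone
    dt = res_time / n                  # residence time per volume element

    # Overall simulated dissolution rate = total dissolution / residence time
    sim_rate = np.sum(dL_dt(naoh, nahs, T) * dt) / res_time

    # If the simulated rate disagrees with the measured 0.120 (12.0 %/hour),
    # rescale the wood constant "a" as described in the text
    meas_rate = 0.120
    a_new = 2.986e-4 * meas_rate / sim_rate
    print(sim_rate, a_new)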

What we have to be careful about is the species going into the digester while collecting tuning data. Preferably we would like an on-line measurement of the species, by e.g. NIR spectra, correlating spectrum characteristics to the final kappa result under given conditions. We then just keep the sets of constants and switch to the correct one according to the NIR spectrum measured at the feed point. A time delay is used to compensate for the time from measurement until the fibers have passed the different sections of the digester. When on-line NIR measurement is not available, we need at least a good "guesstimate" of the mix as manual input by the operators. The difference between the "guesstimate" and the resulting kappa number can then be used by the operators to train on making better "guesstimates" from the information available.

In reality we tune to the kappa number and not to lignin, as the kappa number is the measured variable. The kappa number actually reflects a mix of lignin and hemicellulose, since the standard method simply measures material that is oxidized by an oxidant, and it is affected by many factors that are not always well controlled.

By storing operational data from different runs we can calculate both averages for each run and an average over many runs during a longer time period.

From these calculations we thus get a tool to predict what value the lignin should have both at the fiber surface and in the liquor. We can then compare with measured values, and the difference between predicted and measured values can be used for both control and diagnostics.

If we combine the physical model of the digester with an optimization algorithm, we can operate the digester in such a way that the yield is maximized, chemical and energy consumption are minimized and quality variations are minimized. The MPC then gives set points for the different circulation loops, which may be controlled by PI or PID controllers with respect to flow rates. Under the assumption that the other parts of the plant are also coordinated with the production in the digester, we can get a significant improvement of the economy.

Castro and Doyle, 2004 [6], [7] found that just replacing PID loops with MPCs, without coordinating the operation of the different equipment, made little difference; they therefore recommended that production be coordinated across the whole plant.

Figure 4

Figure 4. Typical variations during a week of kappa number and actual production rates for a digester.

It is not enough to control only flows, temperatures and pressures; measurements of quality variables are also needed. Still, in practice all these variables move around all the time, and steady-state conditions are seldom reached. For example, screens clog and have to be back-flushed; because of this, flow rates change, and with them chemical concentrations and temperatures.

Typical variations in kappa number and production rate are shown in figure 4, with the kappa trends shown as solid lines. Temperature and flow change in a similar way. To use these values for tuning the model, we test different averages. A problem is that lab measurements often come no more than every two hours, while on-line measurements may come every 5-10 minutes up to every 30 minutes; these on-line measurements may be usable. We then have to fit the measured kappa number to the average digestion temperature, the time and the average concentrations of hydroxide and sulfide.

As can be seen the variations can be significant, although the wood is principally of the same quality all the time!

The flow rates often drop as the screens clog; when they are back-flushed the flow goes up again. The problem is that the pumps cannot fully overcome the pressure drop over the screens. The temperatures are collected in each zone and the average is multiplied by the residence time, which is calculated as the volume of the segment divided by the flow rate of chips including liquor per second.

So far we have tested this in a simulation environment with good results; the next step is to make real application tests. Results from simulations are shown in figure 6 below. In the curves we see the predicted values of NaOH and dissolved lignin (DL) compared with the measured values. In the beginning the predicted and measured values follow each other. As channeling occurs, the dissolved lignin goes down, since the liquid is to some extent not used for chemical reaction but passes in the channels between the chips. The NaOH concentration then also goes up, as it is not consumed.

Figure 6

Figure 6. Channeling detected by comparing predicted versus measured NaOH and dissolved lignin (DL) in the extraction line.
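A minimal sketch of such a residual check (the tolerances are illustrative and would have to be tuned from mill data):

    def detect_channeling(naoh_meas, naoh_pred, dl_meas, dl_pred,
                          naoh_tol=0.05, dl_tol=5.0):
        """Flag channeling when measured extraction-line NaOH is clearly
        higher and dissolved lignin clearly lower than the model predicts,
        as in figure 6. Tolerances are illustrative assumptions."""
        return (naoh_meas - naoh_pred) > naoh_tol and (dl_pred - dl_meas) > dl_tol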

In a simulation of process variations, variations can be introduced in the feed chips with respect to reactivity (lignin dissolution rate), moisture content and chip size distribution. The variation depends on where in the stack the material is taken: a heavy rainfall may decrease the dry solids (DS) content considerably at the surface and some way into the stack, and the chip density may vary depending on how the screw moves in the stack; at the end positions the bulk density may go up. The wood quality depends not only on the species, but also on how long ago the trees were cut, whether they grew in a valley or on a hillside, etc.

When it comes to digester performance, we have to consider these variations. We will also often have clogging of the screens over time, and thus need back-flushing now and then; the frequency needed depends on chip size and type. When a screen starts to clog, the recirculation flow goes down as the pressure drop goes up, which may affect both temperatures and concentrations in the liquor. We may also have inhomogeneous reactions due to unequal distribution of chips, with different size distributions both vertically and horizontally in the digester.

CONCLUSIONS

We have presented a discussion of how physical models tuned with process data can be used for several different purposes. First we discussed overall mill optimization; thereafter optimization and diagnostics in digester operations were discussed. It is important to fit process data to the models in a good way, and how this can be done automatically was discussed. An on-line measurement of the quality of the chips fed to the digester, with e.g. NIR, would principally have a major impact on overall process performance, including the yield. A 1-D model was used for MPC and a 2-D model for diagnostics of hang-ups and channeling. There are advantages to having different degrees of complexity in the models depending on what they are to be used for. In the long term, when computer power is significantly greater than today, it may become possible to use the same model for many purposes.

REFERENCES

[1]Ulf Persson, Lars Ledung, Tomas Lindberg, Jens Pettersson, Per-Olof Sahlin and Åke Lindberg:
"On-line Optimization of Pulp & Paper Production", in proceedings from TAPPI conference in Atlanta, 2003.

[2] Rao, Christopher V.: Moving Horizon Strategies for the Constrained Monitoring and Control of Nonlinear Discrete-Time Systems, PhD thesis, University of Wisconsin-Madison, 2000.

[3] Maciejowski, J.M., Predictive control with constraints, Prentice Hall, 2001

[4] Lars Axrup, Karin Markides and Torsten Nilsson: "Using miniature diode array NIR spectrometers for analysing wood chips and bark samples in motion", Journal of Chemometrics 2000;14:561-572.

[5] T. Liljenberg, S. Backa, J. Lindberg, E. Dahlquist: "On-line NIR characterization of pulp", paper presented at ISWPC99, Japan, 1999.

[6] Castro J.J. and Doyle F.J. III: "A pulp mill benchmark problem for control: problem description", Journal of Process Control 14, pp. 17-29, 2004.

[7] Castro J.J. and Doyle F.J. III: "A pulp mill benchmark problem for control: application of plant wide control design", Journal of Process Control 14, pp. 329-347, 2004.
