Published: 31 August 2016

Building and optimizing a low-level model of two-phase filtration

K. A. Sidelnikov1
A. M. Gubanov2
V. A. Tenenev3
M. A. Sharonov4
1JSC “Izhevsk Petroleum Research Center”, Izhevsk, Russia
2Institute of Oil and Gas, Udmurt State University, Izhevsk, Russia
3, 4Kalashnikov Izhevsk State Technical University, Izhevsk, Russia
Corresponding Author:
K. A. Sidelnikov

Abstract

The paper proposes a method for constructing low-level models of two-phase flow based on a combination of finite-difference solutions and a detailed spatial approximation, which makes it possible to use approximate models for solving optimal control problems for the operating parameters of an oil reservoir.

1. Introduction

The choice of numerical methods for solving optimization problems is determined by the type of problem. For the simple class of optimization problems with a linear objective function and linear constraints (linear programming), methods have been developed that find an optimal solution (or establish the insolubility of the problem) in a finite number of computational steps. Introducing discreteness conditions on the variables complicates finding a solution. Arbitrary forms of objective functions and constraints limit the capabilities of existing computational optimization methods. The biggest challenge in solving optimization problems is the presence of several extrema. After obtaining a solution, without additional study of the behavior of the objective function it is impossible to establish unequivocally to which of the extrema the process has converged. Therefore, global optimization remains an unsolved problem. The result of applying classical optimization methods, both those requiring the calculation of derivatives and direct methods, depends strongly on the chosen initial feasible approximation. In [1, 2], modern concepts and approaches to the implementation of genetic algorithms are presented in a fairly complete and detailed manner. Existing crossover, selection and mutation operators are analyzed, and some new ones are suggested. The focus there is on algorithms for solving unconstrained optimization problems or problems with “corridor”-type restrictions. Therefore, we consider below the development of a genetic algorithm adapted for solving problems with constraints of any kind.

2. The model of the system and the optimization strategy

The use of real coding can improve the accuracy of the solutions found and increase the speed of finding the global minimum or maximum. The speed is increased due to the absence of chromosome encoding and decoding at each step of the algorithm. The only transformation that it is advisable to carry out is the reduction of the variables to dimensionless form in the range [0, 1]. The literature describes the following most common types of crossover operators [2-5].

The BLX-α operator. For crossing, two individuals are selected:

1
$$x^{(1)} = \left(x_1^{(1)},\dots,x_i^{(1)},\dots,x_n^{(1)}\right),\qquad x^{(2)} = \left(x_1^{(2)},\dots,x_i^{(2)},\dots,x_n^{(2)}\right).$$

The value of the new gene is defined as the linear combination $x_i = a_{BLX} x_i^{(1)} + b_{BLX} x_i^{(2)}$. The coefficients $a_{BLX}$, $b_{BLX}$ are defined by the relations $a_{BLX} = 1 + \alpha - u(1 + 2\alpha)$, $b_{BLX} = u(1 + 2\alpha) - \alpha$, where $\alpha \in (0, 1)$ and $u \in (0, 1)$ is a random number.
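For illustration, a minimal Python sketch of the BLX-α operator is given below; the function name and the drawing of a fresh random $u$ for every gene are conventions of the sketch, not prescribed by the text:

```python
import random

def blx_alpha(x1, x2, alpha=0.5):
    """BLX-alpha crossover: each offspring gene is a linear combination
    x_i = a*x1_i + b*x2_i with a = 1 + alpha - u*(1 + 2*alpha),
    b = u*(1 + 2*alpha) - alpha, so that a + b = 1."""
    child = []
    for g1, g2 in zip(x1, x2):
        u = random.random()                       # random u in (0, 1)
        a = 1.0 + alpha - u * (1.0 + 2.0 * alpha)
        b = u * (1.0 + 2.0 * alpha) - alpha
        child.append(a * g1 + b * g2)
    return child
```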

The B1 operator, simulating crossover with binary coding:

2
$$x_i = a_{Bin1} x_i^{(1)} + b_{Bin1} x_i^{(2)},\qquad a_{Bin1} = \frac{1 + z}{2},\qquad b_{Bin1} = \frac{1 - z}{2},$$
3
$$z = \begin{cases} \left(2u\right)^{\frac{1}{1+\beta}}, & u \le 0.5, \\ \left(2(1 - u)\right)^{-\frac{1}{1+\beta}}, & u > 0.5, \end{cases}\qquad \beta > 1.$$
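A sketch of the B1 operator under the same conventions (one offspring per call, a fresh $u$ for every gene):

```python
import random

def b1_crossover(x1, x2, beta=2.0):
    """B1 operator: x_i = a*x1_i + b*x2_i with a = (1 + z)/2, b = (1 - z)/2,
    where z = (2u)^(1/(1+beta)) for u <= 0.5 and (2(1-u))^(-1/(1+beta)) otherwise."""
    child = []
    for g1, g2 in zip(x1, x2):
        u = random.random()
        if u <= 0.5:
            z = (2.0 * u) ** (1.0 / (1.0 + beta))
        else:
            z = (2.0 * (1.0 - u)) ** (-1.0 / (1.0 + beta))
        a = (1.0 + z) / 2.0
        b = (1.0 - z) / 2.0
        child.append(a * g1 + b * g2)
    return child
```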

The crossover operator B2.

Just as in binary coding, the real number $r_i$ on the interval $[A_i, B_i]$ is transformed into the integer $g_i = (2^L - 1)(r_i - A_i)/(B_i - A_i)$. As in crossover with the binary representation, the position $k$ of the chromosome section for crossing is determined randomly, after which the selected parts of the chromosomes are exchanged. If the two numbers are $g^{(1)} = \sum_{j=0}^{L} l_j^{(1)} 2^j$ and $g^{(2)} = \sum_{j=0}^{L} l_j^{(2)} 2^j$, the result of crossing is obtained by exchanging their binary digits above and below the position $k$.

The result of simulating this operation in the real-coded version is the expression:

4
$$x_i = a_{Bin2} x_i^{(1)} + b_{Bin2} x_i^{(2)}.$$

The coefficients $a_{Bin2}$, $b_{Bin2}$ are defined as follows:

5
$$b_{Bin2} = \frac{2^{\xi}}{2^{L}} = 2^{\xi - L},\qquad a_{Bin2} = 1 - b_{Bin2},$$

where $\xi \in [0, L]$ is a random number corresponding to the crossing position, generated from the random number $u \in (0, 1)$.

The crossover operators B1, B2 and BLX are probabilistic mechanisms owing to the random choice of $u$. With the operators Bin1 and Bin2, crossing most often occurs in the lower-order digits of the numbers, whereas with the BLX operator crossing occurs uniformly over the range of the real numbers. For all crossover operators the condition $a + b = 1$ holds.

A random mutation operator is used, in which the gene $g_i$ subject to change takes a random value within its admissible range $[A_i, B_i]$.
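A sketch of this random mutation; selecting a single random gene per call, with the mutation probability handled by the caller, is an assumption of the sketch:

```python
import random

def random_mutation(genes, bounds):
    """Random mutation: a randomly chosen gene g_i is replaced by a uniform
    random value from its admissible range [A_i, B_i]."""
    i = random.randrange(len(genes))              # gene subject to change
    a_i, b_i = bounds[i]
    mutated = list(genes)
    mutated[i] = random.uniform(a_i, b_i)
    return mutated
```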

Besides the mutation operator, an inversion operator is applied, which for the real code has the form:

6
$$\bar{g}_i = \begin{cases} g_{i+\xi}, & i = \overline{1, N - \xi}, \\ g_{i - N + \xi}, & i = \overline{N - \xi + 1, N}, \end{cases}\qquad \xi \in [1, N].$$

The sequence of operations in the real-coded algorithm is the same as in the standard binary-coded genetic algorithm; the difference lies in the form of the genetic operators. Before starting the process, at $t = 0$, a population consisting of $m$ individuals is formed. An individual, or chromosome, is represented as:

7
$$G = x^{\psi} = \left\{g_i,\ i = \overline{1, N}\right\},$$

where $x = \{x_i,\ i = \overline{1, N}\}$ is the vector of function arguments and $\psi$ is the transformation from the vector $x$ (phenotype) to the encoded representation (genotype).

The transformation is carried out by bringing the arguments to dimensionless form:

8
$$\psi:\quad y_i = \frac{x_i - \min x_i}{\max x_i - \min x_i},\qquad i = \overline{1, N}.$$

Inverse transformation:

9
$$\psi^{-1}:\quad x_i = \min x_i + y_i\left(\max x_i - \min x_i\right),\qquad i = \overline{1, N}.$$
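A sketch of the transformations (8) and (9), assuming that the min/max in the formulas denote the per-variable bounds $[A_i, B_i]$:

```python
def to_dimensionless(x, bounds):
    """psi: map each argument x_i from [A_i, B_i] to y_i in [0, 1]."""
    return [(xi - a) / (b - a) for xi, (a, b) in zip(x, bounds)]

def from_dimensionless(y, bounds):
    """psi^{-1}: recover x_i = A_i + y_i * (B_i - A_i)."""
    return [a + yi * (b - a) for yi, (a, b) in zip(y, bounds)]
```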

The evolution of the population is carried out at each iteration $t = t + 1$. The selection of individuals for crossing is performed by tournament: two individuals are selected at random, and the individual with the better fitness takes part in mating with a given crossover probability.

One way to improve the performance of optimization techniques is the use of hybrid algorithms that combine the properties of gradient and evolutionary methods. Usually an initial approximation localized near an extremum is found by a genetic algorithm, and the position of the extremum is then refined by a gradient method. In this case convergence is also accelerated, but the extremum found is not necessarily global.

In [1, 6], a hybrid algorithm is proposed that is based on the parallel work of the genetic operators and an additional gradient or direct method. A leader, the best individual of the population created by the genetic algorithm, is selected. This leader is trained separately by the additional method. If its quality indicator thereby becomes better than that of all other individuals in the population, it is introduced into the population and takes part in the reproduction of progeny. If an individual with a better indicator appears in the population as a result of evolution, it becomes the leader.

As the additional method, the conjugate gradient method of Fletcher and Reeves (CGM) shows good efficiency. In the Fletcher-Reeves method the extremum search directions are conjugate. A system of linearly independent directions $z_i$, $i = \overline{0, k-1}$, is said to be conjugate with respect to a positive definite matrix $H$ if $z_i^T H z_j = 0$, $i \ne j$. It is necessary to construct a sequence of $k$ conjugate directions $z_i$, $i = \overline{1, k}$, that are linear combinations of $\nabla F(x_k)$ and the previous directions $z_i$, $i = \overline{0, k-1}$. The starting direction is taken as $z_0 = -\nabla F(x_0)$. The direction $z_1$ is chosen conjugate to $z_0$ with respect to $H = \nabla^2 F(x)$, so that $z_0^T H z_1 = 0$ and $z_1 = -\nabla F(x_1) + w_1 z_0$. The coefficient is $w_1 = \nabla F(x_1)^T \nabla F(x_1) / \nabla F(x_0)^T \nabla F(x_0)$. For the direction $k$ we have $w_k = \nabla F(x_k)^T \nabla F(x_k) / \nabla F(x_{k-1})^T \nabla F(x_{k-1})$. Having built the sequence of search directions, we obtain the following algorithm: 1) $k = 0$; at the point $x_0$ the direction $z_0 = -\nabla F(x_0)$ is determined; 2) $k = k + 1$; determine $x_k = x_{k-1} + \delta_{k-1} z_{k-1}$, where $\delta_{k-1} = \arg\min_{\delta > 0} F(x_{k-1} + \delta z_{k-1})$; compute $F(x_k)$, $\nabla F(x_k)$ and the direction $z_k = -\nabla F(x_k) + w_k z_{k-1}$. When $k = N$, $x_0$ is replaced by $x_N$ and the iterative process is cyclically repeated until the condition $\|\nabla F(x_k)\| < \varepsilon$ is satisfied.

The conjugate gradient method has a high, close to quadratic, rate of convergence to the optimal solution. In addition, since the method neither computes nor stores the Hessian matrix, it can be used to solve optimization problems of large dimension.
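A compact sketch of the Fletcher-Reeves scheme described above. The numerical details are ours: the one-dimensional minimization over $\delta$ is replaced by a crude sampling line search, and the gradient function `grad` is assumed to be supplied by the caller.

```python
import numpy as np

def fletcher_reeves(f, grad, x0, eps=1e-6, max_cycles=100):
    """Fletcher-Reeves conjugate gradient method with restart every N steps."""
    x = np.asarray(x0, dtype=float)
    n = x.size
    for _ in range(max_cycles):
        g = grad(x)
        z = -g                                    # z_0 = -grad F(x_0)
        for _ in range(n):                        # restart after N directions
            # line search: delta = argmin_{delta > 0} f(x + delta * z)
            deltas = np.linspace(1e-6, 1.0, 200)
            delta = min(deltas, key=lambda d: f(x + d * z))
            x = x + delta * z
            g_new = grad(x)
            if np.linalg.norm(g_new) < eps:       # stopping condition
                return x
            w = g_new.dot(g_new) / g.dot(g)       # Fletcher-Reeves coefficient
            z = -g_new + w * z                    # z_k = -grad F(x_k) + w_k z_{k-1}
            g = g_new
    return x
```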

The hybrid algorithm.

1) $k = 0$. A population consisting of $m$ individuals $\{C_s,\ s = \overline{1, m}\}_k$ is formed by the VGA or RGA method. The individual $C_1$ with the best result (the minimum value of the objective function) is taken as the leader. Applying the transformation $\psi^{-1}$, we obtain a vector $x_b^k$ and set $x^k = x_b^k$.

2) $k = k + 1$. Using the conjugate gradient method, the next approximation vector $x^k$ is calculated. The genetic algorithm creates the next population $\{C_s,\ s = \overline{1, m}\}_k$, and its best specimen defines the next vector $x_b^k$.

3) If $F(x_b^k) < F(x^k)$, then $x^k = x_b^k$.

4) If $F(x_b^k) \ge F(x^k)$, then $C_1 = [x^k]^{\psi}$, i.e. the leader $x^k$ is encoded and introduced into the population.

5) If the stopping condition is satisfied, the algorithm terminates; otherwise, go to step 2.

As the additional method, it is also advisable to use a genetic algorithm, but over a narrower search region. The boundaries of the narrowed search region are determined as follows. If $x' = \{x_i',\ i = \overline{1, N}\}$ is the best individual in the population, then the boundaries of the additional search region are $A_i' = x_i' - \Delta_i$, $B_i' = x_i' + \Delta_i$, where $\Delta_i = \delta(B_i - A_i)$. Test results showed that a suitable value is $\delta \in [10^{-4}, 10^{-3}]$.
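A sketch of the construction of the narrowed search region; clipping the narrowed bounds to the original box $[A_i, B_i]$ is an addition of the sketch, not stated in the text:

```python
def narrow_search_region(best_x, bounds, delta=5e-4):
    """Narrowed region around the best individual x':
    A'_i = x'_i - Delta_i, B'_i = x'_i + Delta_i, Delta_i = delta*(B_i - A_i),
    with delta in [1e-4, 1e-3] according to the test results."""
    region = []
    for xi, (a, b) in zip(best_x, bounds):
        d = delta * (b - a)
        region.append((max(a, xi - d), min(b, xi + d)))  # clip to the original box
    return region
```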

Optimization problems must also be solved in the presence of constraints on the functions. As a rule, the constrained minimization problem is reduced to an unconstrained minimization problem through the use of penalty functions, and a solution is then found using genetic algorithms and gradient methods. When penalty functions are used, the problem of determining the weighting factors arises, and the resulting function often has a ravine-like character. Different objective functions and constraints require a careful selection of these factors, which cannot always be done successfully, and a new selection is needed whenever the conditions of the original problem change.

Two approaches, based on the selection operation and the formation of new populations, are sufficiently universal. In one approach [7], the idea of parallel populations with feasible and infeasible specimens is used. In the other [8], infeasible individuals also take part in crossing, with selection based on dominance. For the numerical solution of problems with constraints we apply the basic genetic algorithm described in [6] with the additional selection described in [8]. The genetic algorithm of [6] applies the crossover operators BLX, B1, B2 and the classical Holland operator. These operators are selected at random in each crossing procedure, which broadens their capabilities.

In the tournament selection method, the following rules are applied: 1. If both compared individuals are feasible, i.e. satisfy the constraints, the one with the better value of the objective function is selected. 2. If one individual is feasible and the other is not, the feasible one is selected. 3. If both individuals are infeasible, the one with the smaller number of violated constraints is chosen. 4. If both individuals are infeasible and have the same number of violated constraints, the one with the smaller constraint violation is chosen.

These rules are easy to implement algorithmically and do not require significant modification of the basic genetic algorithm; a sketch of the comparison is given below.
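The sketch implements rules 1-4; representing an individual as a pair (objective value, list of violation magnitudes) is an assumption, and rule 4 is interpreted here as choosing the smaller maximum violation:

```python
def better(ind_a, ind_b):
    """Constrained tournament between two individuals.
    Each individual is a tuple (f_value, violations), where `violations`
    holds the magnitude of each violated constraint (empty if feasible)."""
    f_a, viol_a = ind_a
    f_b, viol_b = ind_b
    feasible_a, feasible_b = not viol_a, not viol_b
    if feasible_a and feasible_b:                 # rule 1: better objective wins
        return ind_a if f_a < f_b else ind_b
    if feasible_a != feasible_b:                  # rule 2: the feasible one wins
        return ind_a if feasible_a else ind_b
    if len(viol_a) != len(viol_b):                # rule 3: fewer violated constraints
        return ind_a if len(viol_a) < len(viol_b) else ind_b
    # rule 4: equal number of violations -> smaller maximum violation wins
    return ind_a if max(viol_a) < max(viol_b) else ind_b
```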

3. Computational experiments

To test the efficiency of the method, consider some examples that are commonly used for testing optimization techniques. In all of the tasks the objective functions are minimized.

Objective 1.

Rosenbrock objective function:

10
$$F(x) = \sum_{i=1}^{N-1}\left[100\left(x_{i+1} - x_i^2\right)^2 + \left(1 - x_i\right)^2\right],\qquad x_i \in [-5, 5],\quad i = \overline{1, N}.$$

The global minimum without restrictions: $x_i^{opt} = 1$, $i = \overline{1, N}$; $F(x^{opt}) = 0$.

As equality constraints, a system of equations is used that defines a circle of radius $R$ for each pair of adjacent variables:

$$x_i^2 + x_{i+1}^2 = R^2,\qquad i = \overline{1, N-1}.$$

As inequality constraints, a system of inequalities is used that defines a multi-dimensional ring with inner radius $R_1$ and outer radius $R_2$:

11
$$R_1^2 \le x_i^2 + x_{i+1}^2 \le R_2^2,\qquad i = \overline{1, N-1}.$$

For $n = 100$, if the unconstrained minimum lies inside the ring, the best solution is reached in 400 iterations. When $R_1 = 2$, $R_2 = 4$, the solution $x_i \in [1.4141, 1.4143]$, $F = 3414.227$ is reached in 2000 iterations. When the equality constraints are specified in the form $R_1 = 1.999$, $R_2 = 2.001$, the same solution was obtained with a significantly greater number of iterations, 32000.
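As an illustration of how Objective 1 can be coded, a sketch of the Rosenbrock function (10) and of the violation magnitudes of the ring constraints (11) is given below; returning a list of violations for use in the constrained tournament is a convention of the sketch:

```python
def rosenbrock(x):
    """Rosenbrock function (10)."""
    return sum(100.0 * (x[i + 1] - x[i] ** 2) ** 2 + (1.0 - x[i]) ** 2
               for i in range(len(x) - 1))

def ring_violations(x, r1, r2):
    """Violations of the ring constraints (11): R1^2 <= x_i^2 + x_{i+1}^2 <= R2^2."""
    viol = []
    for i in range(len(x) - 1):
        s = x[i] ** 2 + x[i + 1] ** 2
        if s < r1 ** 2:
            viol.append(r1 ** 2 - s)
        elif s > r2 ** 2:
            viol.append(s - r2 ** 2)
    return viol
```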

Objective 2.

The multi-extremal Rastrigin function:

12
$$F(x) = \sum_{i=1}^{N}\left[10\left(1 - \cos 2\pi x_i\right) + x_i^2\right],\qquad x_i \in [-5, 5],\quad i = \overline{1, N}.$$

The global minimum without restrictions: $x_i^{opt} = 0$, $i = \overline{1, N}$; $F(x^{opt}) = 0$.
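For completeness, a sketch of the Rastrigin function (12) under the same conventions:

```python
import math

def rastrigin(x):
    """Rastrigin function (12)."""
    return sum(10.0 * (1.0 - math.cos(2.0 * math.pi * xi)) + xi ** 2 for xi in x)
```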

The restrictions are the same as in Objective 1.

The large number of extrema of this function can be seen even in the plot for two variables (Fig. 1).

When $R_1 = 0$, $R_2 = 1$, the solution is reached in 250 iterations.

When $R_1 = 0.9999$, $R_2 = 1.0001$, the following solution was obtained: even-numbered variables $x_i = -0.9999$, odd-numbered variables $x_i = 0$, $F = 49.9901$, with 2300 iterations.

Objective 3 (Welded beam design) [8].

The mathematical statement of the problem [8]:

$$f_w(x) = 1.10471 h^2 l + 0.04811\,t b\,(14.0 + l),$$
$$g_1(x) \equiv 13600 - \tau(x) \ge 0,\qquad g_2(x) \equiv 30000 - \sigma(x) \ge 0,$$
$$g_3(x) \equiv b - h \ge 0,\qquad g_4(x) \equiv P_c(x) - 6000 \ge 0,$$
$$g_5(x) \equiv 0.25 - \delta(x) \ge 0,\qquad 0.125 \le h \le 10,\qquad 0.1 \le l, t, b \le 10,$$
$$\tau(x) = \sqrt{\left(\tau'(x)\right)^2 + \left(\tau''(x)\right)^2 + \frac{l\,\tau'(x)\,\tau''(x)}{\sqrt{0.25\left(l^2 + (h + t)^2\right)}}},$$
$$\sigma(x) = \frac{504000}{t^2 b},\qquad P_c(x) = 64746.022\left(1 - 0.0282346\,t\right) t b^3,$$
$$\delta(x) = \frac{2.1952}{t^3 b},\qquad \tau'(x) = \frac{6000}{\sqrt{2}\,h l},\qquad \tau''(x) = \frac{6000\,(14 + 0.5 l)\sqrt{0.25\left(l^2 + (h + t)^2\right)}}{2\left\{0.707\,h l\left(l^2/12 + 0.25 (h + t)^2\right)\right\}}.$$
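A minimal Python sketch of the welded beam model in the notation above; representing the design vector as $x = (h, l, t, b)$ and returning the objective together with the list of constraint values $g_1, \dots, g_5$ (nonnegative when satisfied) are conventions of the sketch:

```python
import math

def welded_beam(x):
    """Welded beam design: objective f_w and constraints g_1..g_5 (>= 0 is feasible)."""
    h, l, t, b = x
    tau_p = 6000.0 / (math.sqrt(2.0) * h * l)                        # tau'
    r = math.sqrt(0.25 * (l ** 2 + (h + t) ** 2))
    tau_pp = (6000.0 * (14.0 + 0.5 * l) * r /
              (2.0 * (0.707 * h * l * (l ** 2 / 12.0 + 0.25 * (h + t) ** 2))))  # tau''
    tau = math.sqrt(tau_p ** 2 + tau_pp ** 2 + l * tau_p * tau_pp / r)
    sigma = 504000.0 / (t ** 2 * b)
    p_c = 64746.022 * (1.0 - 0.0282346 * t) * t * b ** 3
    delta = 2.1952 / (t ** 3 * b)
    f = 1.10471 * h ** 2 * l + 0.04811 * t * b * (14.0 + l)
    g = [13600.0 - tau, 30000.0 - sigma, b - h, p_c - 6000.0, 0.25 - delta]
    return f, g
```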

The optimal solution known in the literature is: $h = 0.2444$; $l = 6.2187$; $t = 8.2915$; $b = 0.2444$; $F = 2.3816$.

We obtained $h = 0.244369$; $l = 6.2186069$; $t = 8.2914718$; $b = 0.244369$; $F = 2.3811304$ in 1200 iterations.

Objective 4 [9] (38 restrictions).

$$f_2(x) = -0.1365 - 5.843\cdot10^{-7} y_{17} + 1.17\cdot10^{-4} y_{14} + 2.358\cdot10^{-5} y_{13} + 1.502\cdot10^{-6} y_{16} + 0.0321 y_{12} + 0.004324 y_{5} + 1.0\cdot10^{-4}\,c_{15}/c_{16} + 37.48\,y_{2}/c_{12},$$
$$g_1(x) \equiv 1.5 x_2 - x_3 \ge 0,\qquad g_2(x) \equiv y_1(x) - 213.1 \ge 0,\qquad g_3(x) \equiv 405.23 - y_1(x) \ge 0,$$
$$g_{j+2}(x) \equiv y_j(x) - a_j \ge 0,\qquad g_{j+18}(x) \equiv b_j - y_j(x) \ge 0,\qquad j = 2, \dots, 17,$$
$$g_{36}(x) \equiv y_4(x) - \frac{0.28}{0.72}\,y_5(x) \ge 0,\qquad g_{37}(x) \equiv 21 - 3496.0\,y_2(x)/c_{12}(x) \ge 0,$$
$$g_{38}(x) \equiv 62212.0/c_{17}(x) - 110.6 - y_1(x) \ge 0,$$
$$704.4148 \le x_1 \le 906.3855,\quad 68.6 \le x_2 \le 288.88,\quad 0 \le x_3 \le 134.75,$$
$$193 \le x_4 \le 287.0966,\quad 25 \le x_5 \le 84.1988,$$
$$y_1(x) = x_2 + x_3 + 41.6,\quad c_1(x) = 0.024 x_4 - 4.62,\quad y_2(x) = 12.5/c_1(x) + 12.0,$$
$$c_2(x) = 0.0003535 x_1^2 + 0.5311 x_1 + 0.08705\,y_2(x)\,x_1,$$
$$c_3(x) = 0.052 x_1 + 78.0 + 0.002377\,y_2(x)\,x_1,$$
$$y_3(x) = c_2(x)/c_3(x),\qquad y_4(x) = 19.0\,y_3(x),$$
$$c_4(x) = 0.04782\left(x_1 - y_3(x)\right) + 0.1956\left(x_1 - y_3(x)\right)^2/x_2 + 0.6376\,y_4(x) + 1.594\,y_3(x),$$
$$c_5(x) = 100.0 x_2,\quad c_6(x) = x_1 - y_3(x) - y_4(x),\quad c_7(x) = 0.95 - c_4(x)/c_5(x),$$
$$y_5(x) = c_6(x)\,c_7(x),\qquad y_6(x) = x_1 - y_5(x) - y_4(x) - y_3(x),$$
$$c_8(x) = 0.995\left(y_4(x) + y_5(x)\right),\qquad y_7(x) = c_8(x)/y_1(x),$$
$$y_8(x) = c_8(x)/3798.0,\qquad c_9(x) = y_7(x) - 0.0663\,y_7(x)/y_8(x) - 0.3153,$$
$$y_9(x) = 98.82/c_9(x) + 0.321\,y_1(x),$$
$$y_{10}(x) = 1.29\,y_5(x) + 1.258\,y_4(x) + 2.29\,y_3(x) + 1.71\,y_6(x),$$
$$y_{11}(x) = 1.71 x_1 - 0.452\,y_4(x) + 0.58\,y_3(x),\qquad c_{10}(x) = 12.3/752.3,$$
$$c_{11}(x) = 1.75\,y_2(x)\cdot 0.995 x_1,\qquad c_{12}(x) = 0.995\,y_{10}(x) + 1998.0,$$
$$y_{12}(x) = c_{10}(x)\,x_1 + c_{11}(x)/c_{12}(x),\qquad y_{13}(x) = c_{12}(x) - 1.75\,y_2(x),$$
$$y_{14}(x) = 3623.0 + 64.4 x_2 + 58.4 x_3 + 146312.0/\left(y_9(x) + x_5\right),$$
$$c_{13}(x) = 0.995\,y_{10}(x) + 60.8 x_2 + 48.0 x_4 - 0.1121\,y_{14}(x) - 5095.0,$$
$$y_{15}(x) = y_{13}(x)/c_{13}(x),$$
$$y_{16}(x) = 148000.0 - 331000.0\,y_{15}(x) + 40\,y_{13}(x) - 61.0\,y_{15}(x)\,y_{13}(x),$$
$$c_{14}(x) = 2324.0\,y_{10}(x) - 28740000.0\,y_2(x),\qquad c_{15}(x) = y_{13}(x)/y_{15}(x) - y_{13}(x)/0.52,$$
$$y_{17}(x) = 14130000.0 - 1328.0\,y_{10}(x) - 531.0\,y_{11}(x) + c_{14}(x)/c_{12}(x),$$
$$c_{16}(x) = 1.104 - 0.72\,y_{15}(x),\qquad c_{17}(x) = y_9(x) + x_5,$$
a = {0.0, 17.505, 11.275, 214.228, 7.458, 0.961, 1.612, 0.146, 107.99, 922.693, 926.832, 18.766, 1072.163, 8961.448, 0.063, 71084.33, 2802713.0},
b = {0.0, 1053.6667, 35.03, 665.585, 584.463, 265.916, 7.046, 0.222, 273.366, 1286.105, 1444.046, 537.141, 3247.039, 26844.086, 0.386, 140000.0, 12146108.0},
$$x^* = (705.1803,\ 68.60005,\ 102.90001,\ 282.324999,\ 37.5850413),\qquad f^* = -1.90513.$$

The solution obtained in 1200 iterations: $x^{opt} = (705.17454;\ 68.6;\ 102.9;\ 282.3249;\ 37.5841)$, $F = -1.905155$. Active constraints: 1, 2, 34, 37, 38. Fig. 2 shows the proportion of feasible solutions in a population of size 1000.

Fig. 1. Rastrigin function, $n = 2$

Fig. 2. The proportion of feasible solutions

4. Conclusions

The results of solving the test problems show that the proposed algorithm handles all the tasks discussed in [8, 9] with arbitrary restrictions and objective functions. This algorithm is further applied to solving problems of the optimal operation of oil wells.

References

  • Tenenev V. A., Shaura A. S., Jakimovich B. A. Structural-Parametric Optimization and Management. Izhevsk State Technical University Publishing House, Izhevsk, 2014, p. 235, (in Russian).
  • Eshelman L. J., Schaffer J. D. Real-Coded Genetic Algorithms and Interval-Schemata. Foundations of Genetic Algorithms 2. Morgan Kaufman Publishers, San Mateo, 1993, p. 187-202.
  • Herrera F., Lozano M., Verdegay J. L. Tackling real-coded genetic algorithms: operators and tools for the behavior analysis. Artificial Intelligence Review, Vol. 12, Issue 4, 1998, p. 265-319.
  • Haupt Randy L., Haupt Sue Ellen Practical Genetic Algorithms. 2nd Edition. John Wiley and Sons, Hoboken, NJ, 2004, p. 261.
  • Poli Riccardo, Langdon William B., McPhee Nicholas F. A Field Guide to Genetic Programming. Lulu Enterprises, UK, 2008, p. 235.
  • Tenenev V. A., Jakimovich B. A. Genetic Algorithms in Modeling Systems. Izhevsk State Technical University Publishing House, Izhevsk, 2010, p. 308, (in Russian).
  • Prokhorovskaya E. V., Tenenev V. A., Shaura A. S. Real-coded genetic algorithms for solving constrained optimization problems. Intelligent Systems in Production, Vol. 2, 2008, p. 46-55, (in Russian).
  • Deb K. An efficient constraint handling method for genetic algorithms. Computer Methods in Applied Mechanics and Engineering, Vol. 186, 2000, p. 311-338.
  • Michalewicz Z. Genetic algorithms, numerical optimization, and constraints. Proceedings of the 6th International Conference on Genetic Algorithms, Morgan Kauffman, San Mateo, 1995, p. 151-158.

About this article

Received
03 August 2016
Accepted
08 August 2016
Published
31 August 2016
SUBJECTS
Flow induced structural vibrations
Keywords
models of low spatial approximation
optimal control
two-phase filtration
oil reservoir