## SIMULATION OF RANDOM IMPACT ON SYSTEMS

When modeling a system S by simulation, in particular by statistical modeling on a computer, considerable attention is paid to accounting for random factors and effects on the system. For their formalization, random events, discrete and continuous random variables, vectors, and processes are used. Forming computer realizations of random objects of any of these kinds reduces to generating and transforming sequences of random numbers. The generation of base sequences of pseudo-random numbers {x_i} having a uniform distribution in the interval (0, 1) was considered in § 4.2; here we consider the transformation of the sequences {x_i} into sequences {y_j} used to simulate the effects on the modeled system S.

These tasks are very important in the practice of simulating computer systems, since a significant share of the operations, and hence of the computer time, is spent on actions with random numbers. Thus, the existence of effective methods, algorithms, and programs for forming the sequences of random numbers {y_j} needed for modeling specific systems largely determines the possibilities of practical use of computer simulation for system research and design [37, 46].

**Simulation of random events.**

The simplest random objects in the statistical simulation of systems are *random events*. Consider the features of their simulation [4].

Let there be random numbers x_i, i.e., possible values of a random variable ξ uniformly distributed in the interval (0, 1). It is necessary to implement a random event A occurring with a given probability p. Define A as the event that a selected value x_i of the random variable ξ satisfies the inequality

x_i ≤ p.   (4.16)

Then the probability of the event A is P(A) = ∫₀^p dx = p. The opposite event Ā is that x_i > p, so P(Ā) = 1 − p.

The simulation procedure in this case consists in selecting values x_i and comparing them with p. If condition (4.16) is satisfied, the outcome of the trial is the event A.
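The comparison (4.16) is easy to express in code. The sketch below is a minimal Python illustration (the function name is ours, not the book's): a base number x_i is drawn with `random.random()` and tested against p; over many trials the relative frequency of A approaches p.

```python
import random

def event_occurs(p, u=None):
    """Test (4.16): event A occurs when the uniform draw x_i <= p."""
    x = random.random() if u is None else u
    return x <= p

# Frequency check: the share of trials in which A occurs approaches p.
random.seed(1)
p = 0.3
trials = 100_000
freq = sum(event_occurs(p) for _ in range(trials)) / trials
```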

In the same way, we can consider a group of events. Let A_1, A_2, …, A_s be a full group of events occurring with the probabilities p_1, p_2, …, p_s respectively. Define A_m as the event that a selected value x_i of the random variable ξ satisfies the inequality

l_{m−1} < x_i ≤ l_m,   (4.17)

where

l_m = p_1 + p_2 + … + p_m,   l_0 = 0.

Then

P(A_m) = l_m − l_{m−1} = p_m.

The trial simulation procedure in this case consists in sequentially comparing the random numbers x_i with the values l_m. The outcome of the trial is the event A_m if condition (4.17) is satisfied. This procedure is called determining the outcome of the trial *by lot* in accordance with the probabilities p_1, p_2, …, p_s.
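Drawing lots per (4.17) amounts to finding the first cumulative sum l_m that is not less than x_i. A minimal Python sketch (the function name is ours) using a binary search over the precomputed sums:

```python
import random
from bisect import bisect_left
from itertools import accumulate

def draw_lot(probs, u=None):
    """Determine the trial outcome 'by lot' per (4.17): return the 0-based
    index m of the first cumulative sum l_m with x_i <= l_m."""
    bounds = list(accumulate(probs))       # l_1, l_2, ..., l_s
    x = random.random() if u is None else u
    return bisect_left(bounds, x)          # smallest m with x <= l_m

# With probabilities (0.5, 0.3, 0.2) the sums are (0.5, 0.8, 1.0);
# x_i = 0.62 falls in (0.5, 0.8], so the outcome is the second event.
```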

These modeling procedures were considered on the assumption that random numbers x_i with a uniform distribution in the interval (0, 1) are used for the trials. In computer simulation, pseudo-random numbers with a quasi-uniform distribution are used, which leads to some error. Let us estimate it.

Example 4.7. Suppose that there are n-bit random numbers with the possible values x_j* = j/(2^n − 1), j = 0, 1, …, 2^n − 1. Substituting the number x_j* into (4.16) instead of x_i, we define A* as the event that x_j* ≤ p.

The probability of occurrence of the event A* can be defined as P(A*) = m/2^n, where m is the number of possible values x_j* less than or equal to p. It follows that using x_j* instead of x_i leads to an error in determining the probability of the event, Δp = |P(A*) − p|.

Obviously, the maximum value of this error does not exceed 1/(2^n − 1). Thus, to reduce the effect of such errors, one can increase the bit length n of the random numbers.

When modeling systems, it is often necessary to carry out trials in which the desired result is a complex event that depends on two (or more) simple events. Let, for example, the independent events A and B have the probabilities of occurrence p_A and p_B. The possible outcomes of joint trials in this case are the events AB, ĀB, AB̄, ĀB̄ with the probabilities p_A p_B, (1 − p_A)p_B, p_A(1 − p_B), (1 − p_A)(1 − p_B).

To simulate joint trials, two variants of the procedure can be used: 1) sequential checking of condition (4.16); 2) determining one of the outcomes AB, ĀB, AB̄, ĀB̄ by lot with the corresponding probabilities, i.e., by analogy with (4.17). The first variant requires two numbers x_i and two comparisons to check condition (4.16). In the second variant, one can do with a single number x_i, but more comparisons may be needed. In terms of the convenience of constructing the modeling algorithm and of saving operations and computer memory, the first variant is preferable.
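Both variants of the joint-trial procedure can be sketched as follows (Python; function names are ours, and the outcome is returned as a pair of booleans for A and B):

```python
import random

def joint_sequential(p_a, p_b, u1=None, u2=None):
    """Variant 1: two draws, condition (4.16) checked for each event."""
    x1 = random.random() if u1 is None else u1
    x2 = random.random() if u2 is None else u2
    return x1 <= p_a, x2 <= p_b

def joint_by_lot(p_a, p_b, u=None):
    """Variant 2: one draw decides among AB, A~B, ~AB, ~A~B by lot (4.17)."""
    x = random.random() if u is None else u
    l1 = p_a * p_b                       # P(AB)
    l2 = l1 + p_a * (1 - p_b)            # + P(A~B)
    l3 = l2 + (1 - p_a) * p_b            # + P(~AB)
    if x <= l1:
        return True, True
    if x <= l2:
        return True, False
    if x <= l3:
        return False, True
    return False, False
```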

Now consider the case when the events A and B are dependent and occur with the probabilities p_A and p_B. We denote by P(B/A) the conditional probability of occurrence of the event B under the condition that the event A has occurred. We assume that the conditional probability P(B/A) is given.

Consider one of the variants of building the model. From the sequence of random numbers {x_i}, the next number x_m is extracted and the inequality x_m ≤ p_A is checked. If this inequality is true, the event A has occurred. Then, for the trial associated with the event B, the probability P(B/A) is used: the next number x_{m+1} is taken from the sequence {x_i} and the condition x_{m+1} ≤ P(B/A) is checked. Depending on whether or not this inequality is met, the outcome of the trial is AB or AB̄.

If the inequality x_m ≤ p_A does not hold, the event Ā has occurred. Therefore, for the trial associated with the event B, it is necessary to use the probability

P(B/Ā) = (p_B − p_A P(B/A)) / (1 − p_A).

We choose from the set {x_i} the number x_{m+1} and check the validity of the inequality x_{m+1} ≤ P(B/Ā). Depending on whether it is satisfied or not, we get the trial outcome ĀB or ĀB̄.

The logical scheme of the algorithm implementing this version of the model is shown in Fig. 4.13. Here INPUT[...] is the procedure for entering the initial data; GEN[...] is the generator of uniformly distributed random numbers; XM = x_m; XM1 = x_{m+1}; PA = p_A; PB = p_B; PBA = P(B/A); PBNA = P(B/Ā); KA, KAB, KNAB, KANB, KNANB are the counters of the events A, AB, ĀB, AB̄, ĀB̄ respectively; OUT[...] is the procedure for issuing the simulation results.
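The branching logic of this model can be sketched in Python as follows (a minimal illustration under our own naming; P(B/Ā) is derived from the total-probability formula p_B = p_A·P(B/A) + (1 − p_A)·P(B/Ā)):

```python
import random

def dependent_trial(p_a, p_b, p_b_given_a, u1=None, u2=None):
    """One trial over dependent events A and B: draw for A first, then for B
    with the conditional probability matching the observed outcome of A."""
    x1 = random.random() if u1 is None else u1
    a = x1 <= p_a
    if a:
        threshold = p_b_given_a                             # P(B/A)
    else:
        threshold = (p_b - p_a * p_b_given_a) / (1 - p_a)   # P(B/~A)
    x2 = random.random() if u2 is None else u2
    return a, x2 <= threshold

# Sanity check: over many trials the marginal frequency of B approaches p_B.
random.seed(2)
n = 100_000
freq_b = sum(dependent_trial(0.4, 0.5, 0.8)[1] for _ in range(n)) / n
```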

Let us now consider the features of computer modeling of Markov chains, which serve, for example, for the formalization of processes in P-schemes (see § 2.4).

**Fig. 4.13. Scheme of the modeling algorithm with dependent events**

A simple homogeneous Markov chain is determined by the transition matrix

P = ||p_ij||, i, j = 1, …, k,

where p_ij is the probability of transition from the state z_i to the state z_j.

The transition matrix P completely describes the Markov process. Such a matrix is stochastic, that is, the sum of the elements of each row is equal to one:

Σ_{j=1}^{k} p_ij = 1,   i = 1, …, k.

We denote by p_j(n), j = 1, …, k, the probability that the system will be in the state z_j after n transitions; p_j(0), j = 1, …, k, are the initial probabilities of the states.

Using the event approach, one can approach the modeling of the Markov chain in the following way. Suppose that the possible outcomes of the trials are the events A_1, A_2, …, A_k. The probability p_ij is the conditional probability of occurrence of the event A_j in a given trial, provided that the outcome of the previous trial was the event A_i. The simulation of such a Markov chain consists in the sequential selection of the events A_j by lot with the probabilities p_ij.

First, the initial state z_{m₀} is chosen, given by the initial probabilities p_1(0), p_2(0), …, p_k(0). For this, the number x_m is taken from the sequence {x_i} and compared with the values l_r from (4.17), where p_1(0), p_2(0), …, p_k(0) are used as the probabilities. Thus the number m₀ is found for which inequality (4.17) holds, and the initial event of this realization of the chain is the event A_{m₀}. Then the next random number x_{m+1} is selected and compared with l_r, where now the probabilities p_{m₀1}, p_{m₀2}, …, p_{m₀k} are used. The number m₁ is determined, and the next event of this realization of the chain is the event A_{m₁}, and so on. It is obvious that each m_t determines not only the next event A_{m_t} of the generated realization, but also the probability distribution p_{m_t1}, p_{m_t2}, …, p_{m_tk} used to select the next number m_{t+1}. For ergodic Markov chains, the effect of the initial probabilities decreases rapidly as the trial number grows. A Markov process is called ergodic if the limit probability distribution p_j(n), j = 1, …, k, does not depend on the initial conditions p_j(0). Therefore, in modeling such chains, the initial state can be chosen arbitrarily.
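The chain-by-lot procedure — initial state drawn from p_j(0), each next state drawn from the row of the transition matrix selected by the current state — can be sketched as follows (Python; the two-state matrix below is our own illustrative example):

```python
import random
from bisect import bisect_left
from itertools import accumulate

def simulate_chain(p0, P, steps, rng=random.random):
    """One realization of a homogeneous Markov chain: the initial state is
    drawn by lot from p0, each next state by lot from the row P[state]."""
    rows = [list(accumulate(row)) for row in P]   # cumulative sums l_m per row
    state = bisect_left(list(accumulate(p0)), rng())
    path = [state]
    for _ in range(steps):
        state = bisect_left(rows[state], rng())
        path.append(state)
    return path

# Ergodicity in action: the long-run share of time spent in state 0 approaches
# the stationary value 2/3 regardless of the initial distribution p0.
random.seed(3)
P = [[0.9, 0.1],
     [0.2, 0.8]]
path = simulate_chain([0.0, 1.0], P, 200_000)
share0 = path.count(0) / len(path)
```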

Similarly, more complex algorithms can be constructed, for example, for modeling inhomogeneous Markov chains [29].

The considered ways of modeling realizations of random objects give a general idea of the most typical procedures for the formation of realizations in models of processes of functioning of stochastic systems, but do not exhaust all the techniques used in the practice of statistical computer modeling.

To form possible values of *random variables* with a given distribution law, the source material is the base sequence of random numbers {x_i} having a uniform distribution in the interval (0, 1). In other words, the random numbers x_i, as possible values of a random variable uniformly distributed in (0, 1), can be transformed into possible values y_j of a random variable η whose distribution law is given [4].

**Simulation of discrete random variables.** Let us consider the peculiarities of the transformation for the case of obtaining discrete random variables. A discrete random variable η takes the values y_1, y_2, …, y_k with the probabilities p_1, p_2, …, p_k, which constitute the differential probability distribution.   (4.18)

The integral distribution function is

F_η(y) = Σ_{y_m ≤ y} p_m.   (4.19)

To obtain discrete random variables, one can use the inverse function method [10, 29, 53]. If ξ is a random variable uniformly distributed on the interval (0, 1), then the required random variable η is obtained using the transformation

η = F_η^{−1}(ξ),   (4.20)

where F_η^{−1} is the function inverse to F_η.

The calculation algorithm according to (4.19) and (4.20) reduces to the following: the random number x_i is compared sequentially with the cumulative sums l_m = p_1 + p_2 + … + p_m, and the value y_j = y_m is taken for the smallest m for which

x_i ≤ l_m.   (4.21)

Counting the comparisons in (4.21), the average number of comparison cycles is Σ_{m=1}^{k−1} m p_m + (k − 1) p_k.

**Example 4.8.** It is necessary to obtain, on the basis of the base sequence of random numbers {x_i} uniformly distributed in the interval (0, 1), a sequence of numbers {y_j} having a binomial distribution, which gives the probability of y successful outcomes in N realizations of some experiment:

P(y) = C_N^y p^y (1 − p)^{N−y},

where p = 0.5 and N = 6; C_N^y = N!/(y!(N − y)!).

The mathematical expectation and variance of the binomial distribution are, respectively, M[y] = Np and D[y] = Np(1 − p).

Using the notation of (4.21), we calculate the probabilities P(y) and the cumulative sums l_y for y = 0, 1, …, 6:

P(y): 0.01562, 0.09375, 0.23438, 0.31250, 0.23438, 0.09375, 0.01562;
l_y: 0.01562, 0.10938, 0.34375, 0.65625, 0.89062, 0.98438, 1.00000.

For example, if we obtain from the uniform distribution the number x_i = 0.85393 and make a comparison by algorithm (4.21), we find that 0.65625 < 0.85393 ≤ 0.89062, i.e., y_j = 4.

In this case, the average number of comparison cycles is 1 · 0.01562 + 2 · 0.09375 + 3 · 0.23438 + 4 · 0.31250 + 5 · 0.23438 + 6 · (0.09375 + 0.01562) ≈ 3.98.
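Example 4.8 can be reproduced directly. The sketch below (Python; variable names are ours) builds the cumulative sums l_y for N = 6, p = 0.5, applies algorithm (4.21) to the book's number x_i = 0.85393, and evaluates the average number of comparisons (the last two outcomes both need N = 6 comparisons):

```python
from bisect import bisect_left
from itertools import accumulate
from math import comb

N, p = 6, 0.5
probs = [comb(N, y) * p**y * (1 - p)**(N - y) for y in range(N + 1)]
bounds = list(accumulate(probs))               # l_0, l_1, ..., l_6

def binomial_by_inverse(x):
    """Algorithm (4.21): the smallest y with x <= l_y."""
    return bisect_left(bounds, x)

y = binomial_by_inverse(0.85393)               # 0.65625 < 0.85393 <= 0.89062

# Average number of comparison cycles: outcome y needs y + 1 comparisons,
# except that the last outcome is already known after N comparisons.
avg_cmp = sum(min(m + 1, N) * pm for m, pm in enumerate(probs))
```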

**Example 4.9.** It is necessary to check the stochasticity of a sequence of L random numbers {y_j} obtained by simulating the binomial distribution with given parameters p and N. The simplest check is to evaluate the fulfillment of the conditions M[y] = Np and M[y²] = Np(1 + Np − p).

Let us check the correspondence to the binomial distribution with the parameters N = 5 and p = 0.1 of the sequence of random numbers {y_j} = 0, 0, 1, 0, 1, 0, 2, 0, 1, 0, L = 10.

We compute Np = 0.5 and Np(1 + Np − p) = 0.7; the sample estimates are (1/L) Σ y_j = 5/10 = 0.5 and (1/L) Σ y_j² = 7/10 = 0.7.

As can be seen from the estimates, in the conditions of this example the sequence of numbers {y_j} agrees well with the binomial distribution with the given parameters.

There are other examples of algorithms and programs for obtaining discrete random variables with a given distribution law, which find application in the practice of computer simulation.

**Simulation of continuous random variables.** Let us consider the features of computer generation of continuous random variables. A continuous random variable η is given by the integral distribution function

F_η(y) = ∫_{−∞}^{y} f_η(y) dy,

where f_η(y) is the probability density.

To obtain continuous random variables with a given distribution law, as for discrete variables, one can use the inverse function method. The one-to-one monotone function η = F_η^{−1}(ξ), obtained by solving the equation F_η(y) = ξ with respect to y, transforms the variable ξ, uniformly distributed on the interval (0, 1), into η with the required density f_η(y).

Indeed, if a random variable η has the distribution density f_η(y), then the random variable

ξ = ∫_{−∞}^{η} f_η(y) dy

is uniformly distributed in the interval (0, 1) [4, 29]. Based on this, we can draw the following conclusion: in order to obtain a number y_j belonging to the sequence of random numbers {y_j} with the density f_η(y), it is necessary to solve for y_j the equation

∫_{−∞}^{y_j} f_η(y) dy = x_i.   (4.22)

Let us consider some examples of obtaining continuous random variables with a given distribution law on the basis of random numbers having a uniform distribution in the interval (0, 1) by the inverse function method.

**Example 4.10.** It is necessary to obtain random numbers y_j with the exponential distribution law

f_η(y) = λe^{−λy}, y ≥ 0.

In view of relation (4.22), we obtain

∫₀^{y_j} λe^{−λy} dy = 1 − e^{−λy_j} = x_i,

where x_i is a random number having a uniform distribution in the interval (0, 1). Then

y_j = −(1/λ) ln(1 − x_i).

Given that the random variable 1 − x_i also has a uniform distribution in the interval (0, 1), we can write y_j = −(1/λ) ln x_i.
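The exponential formula is one of the few closed-form cases of (4.22). A minimal Python sketch (our own naming), with a check that the sample mean approaches 1/λ:

```python
import math
import random

def exponential(lam, u=None):
    """Inverse-function method: y_j = -(1/lam) * ln(x_i), using the fact
    that x_i and 1 - x_i are both uniform on (0, 1)."""
    x = u if u is not None else 1.0 - random.random()  # in (0, 1], avoids log(0)
    return -math.log(x) / lam

random.seed(7)
lam = 2.0
sample = [exponential(lam) for _ in range(100_000)]
mean = sum(sample) / len(sample)   # the exponential mean is 1/lam = 0.5
```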

**Example 4.11.** It is necessary to obtain random numbers y_j with the uniform distribution law f_η(y) = λ on the interval (a, a + 1/λ). Using (4.22), we obtain λ(y_j − a) = x_i. Hence

y_j = a + x_i/λ.

Other examples of the use of relation (4.22) can be given. But this method of obtaining random numbers with a given distribution law has a limited scope in the practice of computer simulation, for the following reasons: 1) for many distribution laws encountered in practical modeling problems, the integral in (4.22) cannot be evaluated in closed form, so numerical methods of solution must be used, which increases the computer time spent on obtaining each random number; 2) even when the integral in (4.22) can be evaluated in closed form, the resulting formulas contain operations such as taking logarithms and extracting roots, which are performed by standard computer routines involving many elementary operations (addition, multiplication, etc.), and this also sharply increases the computer time spent on obtaining each random number.

Therefore, in the practice of modeling systems, approximate methods of converting random numbers are used, which can be classified as follows: a) universal methods, by which it is possible to obtain random numbers with a distribution law of any kind; b) non-universal methods, suitable for obtaining random numbers with a specific distribution law [4, 46].

Consider an approximate universal method of obtaining random numbers, based on piecewise approximation of the density function. Let it be required to obtain a sequence of random numbers {y_j} with the density function f_η(y), whose possible values lie in the interval (a, b). We represent f_η(y) as a piecewise constant function, that is, we divide the interval (a, b) into m subintervals, as shown in Fig. 4.14, and assume that f_η(y) is constant on each of them. Then the random variable η can be represented in the form η = a_k + η_k*, where a_k is the abscissa of the left boundary of the k-th subinterval, and η_k* is a random variable whose possible values are located uniformly within the k-th subinterval, that is, on each segment (a_k, a_{k+1}) the value η_k* is considered to be uniformly distributed. To approximate f_η(y) in the way most convenient for practical purposes, it is advisable to divide (a, b) into subintervals so that the probability of the random variable η falling into any subinterval (a_k, a_{k+1}) is constant, i.e., does not depend on the number k of the subinterval. Thus, to calculate a_k we use the relation

∫_{a_k}^{a_{k+1}} f_η(y) dy = 1/m, k = 0, 1, …, m − 1.   (4.23)

The algorithm of computer implementation of this method reduces to the sequential execution of the following actions: 1) a uniformly distributed number x_i from the interval (0, 1) is generated; 2) with the help of this number, the subinterval (a_k, a_{k+1}) is chosen by lot; 3) the next number x_{i+1} is generated and scaled to the interval (a_k, a_{k+1}), i.e., multiplied by the coefficient (a_{k+1} − a_k); 4) the random number y_j = a_k + (a_{k+1} − a_k) x_{i+1} with the required distribution law is calculated.

**Fig. 4.14. Piecewise approximation of the density function**

Let us consider in more detail the process of selecting the subinterval (a_k, a_{k+1}) with the help of the random number x_i. For this purpose it is advisable to construct a table (to form an array) in which the subinterval numbers k, the left boundaries a_k, and the scaling factors (a_{k+1} − a_k) determined from relation (4.23) are placed in advance. Having obtained a random number x_i from the generator, we use this table to immediately determine the abscissa of the left boundary a_k and the scaling factor (a_{k+1} − a_k).
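The method can be sketched end to end for a density whose inverse distribution function is known, used here only once to tabulate the boundaries a_k from (4.23) (each subinterval then carries probability 1/m). We take the triangular density f(y) = 2y on (0, 1) as our own illustrative example; the sampling step itself needs no transcendental operations:

```python
import math
import random

def make_table(inv_cdf, m):
    """Boundaries a_0..a_m from (4.23): a_k = F^{-1}(k/m), computed once."""
    return [inv_cdf(k / m) for k in range(m + 1)]

def piecewise_sample(bounds, u1=None, u2=None):
    """Steps 1-4: pick an equiprobable subinterval with one number, then
    place a point uniformly inside it with a second number."""
    m = len(bounds) - 1
    x1 = random.random() if u1 is None else u1
    k = min(int(x1 * m), m - 1)            # subinterval chosen by lot
    x2 = random.random() if u2 is None else u2
    return bounds[k] + (bounds[k + 1] - bounds[k]) * x2

# For f(y) = 2y on (0, 1): F(y) = y^2, so F^{-1}(q) = sqrt(q).
bounds = make_table(math.sqrt, 64)
random.seed(11)
sample = [piecewise_sample(bounds) for _ in range(100_000)]
mean = sum(sample) / len(sample)           # true mean of f(y) = 2y is 2/3
```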

The advantages of this approximate method of converting random numbers are that, when it is implemented on a computer, a small number of operations is required to obtain each random number, since the scaling operation (4.23) is performed only once before modeling, and the number of operations does not depend on the accuracy of the approximation, i.e., on the number of subintervals m.

Let us consider ways of converting a sequence of uniformly distributed random numbers {x_i} into a sequence {y_j} with a given distribution law on the basis of the limit theorems of probability theory. Such methods are oriented toward obtaining sequences of numbers with a specific distribution law, that is, they are not universal [29, 43]. Let us explain this with examples.

**Example 4.12.** Let it be required to obtain a sequence of random numbers {y_j} having a normal distribution with a given mathematical expectation and standard deviation.

We will form the random numbers y_j as sums of random numbers x_i having a uniform distribution in the interval (0, 1). Since the base sequence {x_i} used for summation consists of independent identically distributed numbers, we can use the central limit theorem for identically distributed random variables (see § 4.1): if the independent identically distributed random variables x_1, …, x_N each have the expectation a and the standard deviation σ, then their sum Σ_{i=1}^{N} x_i is asymptotically normal with the expectation Na and the standard deviation σ√N. For a uniform distribution in (0, 1), a = 1/2 and σ = 1/√12.

As calculations show, the sum Σ x_i has a distribution close to normal even at relatively small N. In practice, to obtain a sequence of normally distributed random numbers, one can use the values N = 8–12, and in the simplest cases even N = 4–5 [4].
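A sketch of this summation method in Python (the function name is ours). With N = 12 the sum of uniforms has expectation 6 and variance 1, so the centering and scaling below are exact; for other N the factor √(N/12) applies:

```python
import random

def normal_clt(mean, std, n=12, rng=random.random):
    """Approximately normal number as a centered, scaled sum of n uniforms:
    the sum has expectation n/2 and standard deviation sqrt(n/12)."""
    s = sum(rng() for _ in range(n))
    z = (s - n / 2) / (n / 12) ** 0.5      # approximately N(0, 1)
    return mean + std * z

random.seed(5)
sample = [normal_clt(10.0, 2.0) for _ in range(50_000)]
m = sum(sample) / len(sample)                          # should approach 10
v = sum((y - m) ** 2 for y in sample) / len(sample)    # should approach 4
```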

**Example 4.13.** Let it be necessary to obtain random numbers having the Poisson distribution law

P(m) = (λ^m / m!) e^{−λ}, m = 0, 1, 2, ….

For this we use the Poisson limit theorem (see § 4.1): if p is the probability of occurrence of the event A in one trial, then the probability of m occurrences of the event in N independent trials, as N → ∞, p → 0, Np = λ, is asymptotically equal to P(m).

We choose a sufficiently large N such that p = λ/N ≪ 1, carry out a series of N independent trials in each of which the event A occurs with the probability p, and count the number y_j of actual occurrences of the event A in the series with the number j. The numbers y_j will follow approximately Poisson's law, and the more accurately, the larger N. N should be chosen so that p ≤ 0.1–0.2 [4].

The algorithm for forming a sequence of random numbers y_j having a Poisson distribution by this method is shown in Fig. 4.15. Here L = λ; N and P = p are the parameters of the series of trials; XI = x_i are random numbers of a sequence uniformly distributed in the interval (0, 1); YJ = y_j; NO is an auxiliary counter; INPUT(...) is the procedure for entering the initial data; CALC(...) is the calculation procedure; GEN(...) is the procedure for generating random numbers; OUT(...) is the procedure for issuing the simulation results.
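The trial-series scheme above can be sketched in Python (names are ours): each Poisson number is the count of successes in N Bernoulli trials with p = λ/N. The sample mean and variance should both approach λ, since they coincide for the Poisson law:

```python
import random

def poisson_by_trials(lam, n=500, rng=random.random):
    """Count of occurrences of the event in n independent trials with
    p = lam / n (Poisson limit theorem); p must stay well below 0.1."""
    p = lam / n
    return sum(rng() <= p for _ in range(n))

random.seed(9)
lam = 3.0
sample = [poisson_by_trials(lam) for _ in range(5_000)]
mean = sum(sample) / len(sample)                          # approaches lam
var = sum((y - mean) ** 2 for y in sample) / len(sample)  # approaches lam
```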
