Formulas for calculating the probability of events


To date, the open bank of Unified State Exam problems in mathematics (mathege.ru) contains problems whose solution rests on a single formula: the classical definition of probability.

The easiest way to understand the formula is with examples.
Example 1. There are 9 red balls and 3 blue balls in the basket. The balls differ only in color. We take out one of them at random (without looking). What is the probability that the ball chosen in this way will be blue?

A comment. In probability problems something happens (here, our act of drawing a ball) that can end in different results, called outcomes. A result can be viewed with different levels of generality. "We drew some ball" is a result. "We drew a blue ball" is a result. "We drew exactly this particular ball out of all the possible balls" is the least generalized view of a result, and it is called an elementary outcome. It is the elementary outcomes that are meant in the formula for calculating probability.

Solution. Now let's calculate the probability of choosing the blue ball.
Event A: “the selected ball turned out to be blue”
Total number of all possible outcomes: 9+3=12 (the number of all balls that we could draw)
Number of outcomes favorable for event A: 3 (the number of such outcomes in which event A occurred - that is, the number of blue balls)
P(A)=3/12=1/4=0.25
Answer: 0.25
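The classical formula fits in a few lines of code. A minimal sketch in Python (the helper name `classical_probability` is ours, not from the text), using exact fractions to avoid rounding:

```python
from fractions import Fraction

def classical_probability(favorable, total):
    """Classical definition: P(A) = favorable outcomes / total outcomes."""
    return Fraction(favorable, total)

# Example 1: 3 blue balls out of 9 + 3 = 12 balls in total
p_blue = classical_probability(3, 9 + 3)
print(p_blue, float(p_blue))  # 1/4 0.25
```

The same call with 9 favorable outcomes gives the probability of a red ball, 3/4 = 0.75.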

For the same problem, let's calculate the probability of choosing a red ball.
The total number of possible outcomes will remain the same, 12. Number of favorable outcomes: 9. Probability sought: 9/12=3/4=0.75

The probability of any event always lies between 0 and 1.
Sometimes in everyday speech (but not in probability theory!) probabilities are stated as percentages. Converting between the mathematical and the everyday scale amounts to multiplying or dividing by 100%.
So:
The probability is 0 for events that cannot happen, called impossible events. In our example, this would be the probability of drawing a green ball from the basket. (The number of favorable outcomes is 0, so by the formula P(A) = 0/12 = 0.)
The probability is 1 for events that are certain to happen, with no alternatives. For example, the probability that "the selected ball will be either red or blue" equals 1 in our problem. (Number of favorable outcomes: 12, P(A) = 12/12 = 1.)

We looked at a classic example illustrating the definition of probability. All similar problems of the Unified State Exam in probability theory are solved by using this formula.
In place of the red and blue balls there may be apples and pears, boys and girls, learned and unlearned exam tickets, tickets that do or do not contain a question on a certain topic, or defective and high-quality bags or garden pumps; the principle remains the same.

Slightly different in formulation are the Unified State Exam probability problems in which you need to calculate the probability of some event falling on a certain day. As in the previous problems, you need to identify the elementary outcome and then apply the same formula.

Example 2. The conference lasts three days. On the first and second days there are 15 speakers, on the third day - 20. What is the probability that Professor M.’s report will fall on the third day if the order of reports is determined by drawing lots?

What is the elementary outcome here? Assigning the professor's report one of all the possible serial numbers for a talk. 15+15+20 = 50 people participate in the draw. Thus Professor M.'s report can receive one of 50 numbers, so there are 50 elementary outcomes.
What are the favorable outcomes? - Those in which it turns out that the professor will speak on the third day. That is, the last 20 numbers.
By the formula, the probability P(A) = 20/50 = 2/5 = 0.4
Answer: 0.4

The drawing of lots here establishes a random correspondence between people and ordered places. In Example 2, the matching was considered from the point of view of which place a particular person could occupy. You can approach the same situation from the other side: which of the people could, with what probability, end up in a specific place:

Example 3. The draw includes 5 Germans, 8 French and 3 Estonians. What is the probability that the first speaker (or the second, or the seventh, or the last; it does not matter) will be a Frenchman?

The number of elementary outcomes is the number of all possible people who could get into a given place by drawing lots. 5+8+3=16 people.
Favorable outcomes - French. 8 people.
Required probability: 8/16=1/2=0.5
Answer: 0.5

Slightly different are the problems about coins and dice, which are somewhat more creative.

Here are a few examples of tossing a coin or dice.

Example 4. When we toss a coin, what is the probability of landing on heads?
There are 2 outcomes: heads or tails (it is assumed that the coin never lands on its edge). The favorable outcome is heads, so there is 1 favorable outcome.
Probability 1/2=0.5
Answer: 0.5.

Example 5. What if we toss a coin twice? What is the probability of getting heads both times?
The main thing is to decide what elementary outcomes we consider when tossing two coins. After two tosses, one of the following results can occur:
1) HH: heads both times
2) HT: heads the first time, tails the second
3) TH: tails the first time, heads the second
4) TT: tails both times
There are no other options. This means there are 4 elementary outcomes, and only the first one is favorable.
Probability: 1/4=0.25
Answer: 0.25
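Enumerating the elementary outcomes by hand scales badly, so it helps to see the same count done mechanically. A small sketch (the H/T labels are ours), assuming a fair coin so every listed outcome is equally likely:

```python
from itertools import product

# All elementary outcomes of two tosses: H = heads, T = tails
outcomes = list(product("HT", repeat=2))  # [('H','H'), ('H','T'), ('T','H'), ('T','T')]
p_two_heads = sum(1 for o in outcomes if o == ("H", "H")) / len(outcomes)
p_one_tail = sum(1 for o in outcomes if o.count("T") == 1) / len(outcomes)
print(len(outcomes), p_two_heads, p_one_tail)  # 4 0.25 0.5
```

The second quantity anticipates the next question in the text: exactly one tails in two tosses.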

What is the probability that two coin tosses produce tails exactly once?
The number of elementary outcomes is the same, 4. The favorable outcomes are the second and third, so there are 2.
Probability of getting exactly one tails: 2/4 = 0.5

In such problems, another formula may be useful.
If a single coin toss has 2 possible outcomes, then two tosses give 2·2 = 2² = 4 results (as in Example 5), three tosses give 2·2·2 = 2³ = 8, four give 2·2·2·2 = 2⁴ = 16, and in general N tosses give 2·2·…·2 = 2^N possible results.

So, you can find the probability of getting 5 heads out of 5 coin tosses.
Total number of elementary outcomes: 2⁵ = 32.
Favorable outcomes: 1 (HHHHH, heads all 5 times).
Probability: 1/32=0.03125
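The 2^N rule above turns directly into a one-line function. A minimal sketch (the function name is ours), assuming a fair coin so only the single all-heads outcome out of 2^N is favorable:

```python
# 2**n equally likely outcomes for n fair-coin tosses; only one is "all heads"
def p_all_heads(n):
    return 1 / 2 ** n

print(2 ** 5, p_all_heads(5))  # 32 0.03125
```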

The same holds for dice. One throw has 6 possible results, so two throws give 6·6 = 36, three give 6·6·6 = 216, and so on.

Example 6. We throw the dice. What is the probability that an even number will be rolled?

Total outcomes: 6, according to the number of sides.
Favorable: 3 outcomes. (2, 4, 6)
Probability: 3/6=0.5

Example 7. We throw two dice. What is the probability that the total will be 10? (round to the nearest hundredth)

For one die there are 6 possible outcomes. This means that for two, according to the above rule, 6·6=36.
What outcomes will be favorable for the total to roll 10?
10 must be decomposed into a sum of two numbers from 1 to 6. This can be done in two ways: 10 = 6+4 and 10 = 5+5. This means that the following options are possible for the dice:
(6 on the first and 4 on the second)
(4 on the first and 6 on the second)
(5 on the first and 5 on the second)
Total: 3 options. Required probability: 3/36 = 1/12 ≈ 0.08
Answer: 0.08
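Counting the favorable pairs by hand is easy to get wrong (ordered pairs must be counted, so (6,4) and (4,6) are distinct). A mechanical check of Example 7, assuming two fair dice:

```python
from itertools import product

rolls = list(product(range(1, 7), repeat=2))    # 36 equally likely ordered pairs
favorable = [r for r in rolls if sum(r) == 10]  # (4,6), (5,5), (6,4)
p = len(favorable) / len(rolls)
print(len(favorable), round(p, 2))  # 3 0.08
```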

Other types of B6 problems will be discussed in a future How to Solve article.

Everything in the world happens deterministically or by chance...
Aristotle

Probability: Basic Rules

Probability theory calculates the probabilities of various events. Fundamental to probability theory is the concept of a random event.

For example, you throw a coin, it randomly lands on a head or a tail. You don't know in advance which side the coin will land on. You enter into an insurance contract; you do not know in advance whether payments will be made or not.

In actuarial calculations one must be able to estimate the probabilities of various events, so probability theory plays a key role: no other branch of mathematics studies the probabilities of events.

Let's take a closer look at tossing a coin. There are 2 mutually exclusive outcomes: heads or tails. The outcome of the toss is random, since the observer cannot analyze and account for all the factors that influence the result. What is the probability of heads? Most will answer ½, but why?

Formally, let A denote the event "heads comes up", and let the coin be tossed n times. Then the probability of the event A can be defined as the proportion of tosses that result in heads:

ν(A) = n(A) / n,   (1)

where n is the total number of tosses and n(A) is the number of times heads came up.

Relation (1) is called the frequency of the event A in a long series of trials.

It turns out that in various series of trials the corresponding frequency, for large n, clusters around some constant value P(A). This quantity is called the probability of the event A and is denoted by the letter P, an abbreviation of the English word probability.

Formally we have:

P(A) = lim n(A)/n as n → ∞.   (2)

This statement is called the law of large numbers.

If the coin is fair (symmetric), then the probability of getting heads equals the probability of getting tails, and both equal ½.

Let A and B be some events, for example whether or not an insured event occurred. The union of the two events is the event consisting of the occurrence of event A, of event B, or of both together. The intersection of the events A and B is the event consisting of the occurrence of both event A and event B.

The basic rules for calculating event probabilities are as follows:

1. The probability of any event lies between zero and one: 0 ≤ P(A) ≤ 1.

2. Let A and B be two events; then

P(A ∪ B) = P(A) + P(B) − P(A ∩ B).   (3)

It reads like this: the probability of the union of two events equals the sum of their probabilities minus the probability of their intersection. If the events are incompatible (non-overlapping), the probability of the union (sum) of the two events equals the sum of the probabilities. This law is called the addition law of probabilities.
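The addition law can be verified on a finite example where events are just sets of equally likely outcomes. A sketch with hypothetical events on one fair die (the events A and B below are our illustration, not from the text):

```python
from fractions import Fraction

# One roll of a fair die; outcomes 1..6 are equally likely
omega = set(range(1, 7))
A = {2, 4, 6}   # "an even number comes up"
B = {5, 6}      # "at least five comes up"

def P(event):
    return Fraction(len(event), len(omega))

lhs = P(A | B)                  # probability of the union
rhs = P(A) + P(B) - P(A & B)    # addition law
print(lhs, rhs)  # 2/3 2/3
```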

We say that an event is certain if its probability equals 1. When analyzing certain phenomena the question arises of how the occurrence of an event B affects the occurrence of an event A. For this, the conditional probability is introduced:

P(A | B) = P(A ∩ B) / P(B).   (4)

It reads like this: the probability of A occurring, given that B occurred, equals the probability of the intersection of A and B divided by the probability of the event B.
Formula (4) assumes that the probability of the event B is above zero.

Formula (4) can also be written as:

P(A ∩ B) = P(A | B) · P(B).   (5)

This is the multiplication formula for probabilities.
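Formulas (4) and (5) can also be checked on a finite example. A sketch with hypothetical events on one fair die (the events are ours, chosen for illustration):

```python
from fractions import Fraction

# One roll of a fair die
omega = set(range(1, 7))
B = {2, 4, 6}   # "the number is even"
A = {6}         # "a six comes up"

def P(event):
    return Fraction(len(event), len(omega))

p_A_given_B = P(A & B) / P(B)          # formula (4)
print(p_A_given_B)                     # 1/3
assert P(A & B) == p_A_given_B * P(B)  # formula (5) holds
```

Knowing the roll was even raises the chance of a six from 1/6 to 1/3.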

The conditional probability is also called the a posteriori probability of the event A: the probability of A occurring after B has occurred.

In this case, the probability P(A) itself is called the a priori probability. There are several other important formulas that are used intensively in actuarial calculations.

Total Probability Formula

Suppose an experiment is carried out whose conditions can be described in advance by mutually exclusive assumptions (hypotheses) H1, H2, …, Hn.

We assume that exactly one of the hypotheses occurs; their probabilities P(H1), …, P(Hn) are known.

Then the total probability formula holds:

P(A) = P(A | H1)·P(H1) + P(A | H2)·P(H2) + … + P(A | Hn)·P(Hn).   (6)

The probability of the event A equals the sum, over all hypotheses, of the probability of A under each hypothesis multiplied by the probability of that hypothesis.
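The sum in formula (6) is a one-line computation. A minimal sketch (the function name and the three-hypothesis numbers are our illustration, not from the text):

```python
# Total probability formula (6) with hypothetical numbers
def total_probability(priors, conditionals):
    """P(A) = sum of P(A | H_i) * P(H_i) over all hypotheses H_i."""
    assert abs(sum(priors) - 1) < 1e-9  # hypotheses form a complete group
    return sum(c * p for c, p in zip(conditionals, priors))

p_A = total_probability([0.5, 0.3, 0.2], [0.9, 0.5, 0.1])
print(round(p_A, 2))  # 0.62
```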

Bayes formula

Bayes' formula,

P(Hi | A) = P(A | Hi) · P(Hi) / P(A),

allows the probabilities of the hypotheses to be recalculated in light of the new information provided by the occurrence of the event A.

Bayes' formula in a certain sense is the inverse of the total probability formula.

Consider the following practical problem.

Problem 1

Suppose there has been a plane crash and experts are investigating its causes. Four possible causes of the disaster are known in advance. According to the available statistics, these causes have the following probabilities:



When examining the crash site, traces of fuel ignition were found; according to the statistics, the probability of this event under each of the four causes is as follows:




Question: what is the most likely cause of the disaster?

Let us calculate, using Bayes' formula, the probabilities of the causes given that the event A occurred.



From this it can be seen that the most likely reason is the first one, since its probability is maximum.

Problem 2

Consider an airplane landing at an airfield.

During landing, the weather conditions may be: no low clouds, or low clouds. In the first case the probability of a safe landing is P1; in the second, P2. Clearly P1 > P2.

The instruments that provide a blind landing operate without failure with probability R. If there is low cloud cover and the blind-landing instruments have failed, the probability of a successful landing is P3, with P3 < P2. The proportion of days in the year with low clouds at the given airfield is known.

Find the probability of the plane landing safely.

We need to find the probability.

There are two mutually exclusive cases: the blind-landing instruments are working, or the blind-landing instruments have failed; so we have:

Hence, according to the total probability formula:

Problem 3

An insurance company provides life insurance. 10% of those insured by this company are smokers. If the insured person does not smoke, the probability of his death during the year is 0.01. If he is a smoker, then this probability is 0.05.

What is the proportion of smokers among those insured who died during the year?

Possible answers: (A) 5%, (B) 20%, (C) 36%, (D) 56%, (E) 90%.

Solution

Let's enter the events:

The condition of the problem means that

In addition, since the events form a complete group of pairwise incompatible events, then .
The probability we are interested in is .

Using Bayes' formula, we have:

therefore the correct answer is (C).
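Problem 3 can be checked numerically with the total probability formula followed by Bayes' formula:

```python
# Problem 3: share of smokers among insured who died within the year
p_smoker = 0.10
p_death_given_smoker = 0.05
p_death_given_nonsmoker = 0.01

# Total probability of death, then Bayes' formula
p_death = p_smoker * p_death_given_smoker + (1 - p_smoker) * p_death_given_nonsmoker
p_smoker_given_death = p_smoker * p_death_given_smoker / p_death
print(round(p_smoker_given_death, 2))  # 0.36, i.e. answer (C) 36%
```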

Problem 4

The insurance company sells life insurance contracts in three categories: standard, preferred and ultra-privileged.

50% of all insured are standard, 40% are preferred and 10% are ultra-privileged.

The probability of death within a year for a standard insured is 0.010, for a preferred one 0.005, and for an ultra-privileged one 0.001.

What is the probability that the deceased insured is ultra-privileged?

Solution

Let us introduce the following events into consideration:

In terms of these events, the probability we are interested in is . By condition:

Since the events , , form a complete group of pairwise incompatible events, using Bayes' formula we have:

Random variables and their characteristics

Let ξ be some random variable, for example the damage from a fire or the amount of insurance payments.
A random variable is completely characterized by its distribution function.

Definition. The function F(x) = P(ξ < x) is called the distribution function of the random variable ξ.

Definition. If there is a function f(x) such that for arbitrary a

F(a) = ∫ from −∞ to a of f(x) dx,

then we say that the random variable ξ has probability density function f(x).

Definition. Let 0 < α < 1. For a continuous distribution function F, the theoretical α-quantile is a solution x of the equation F(x) = α.

This solution may not be unique.

The quantile of level ½ is called the theoretical median, and the quantiles of levels ¼ and ¾ are the lower and upper quartiles, respectively.

Chebyshev's inequality plays an important role in actuarial applications:

P(|ξ| ≥ t) ≤ E|ξ| / t   for any t > 0,

where E denotes mathematical expectation.

It reads like this: the probability that the modulus of ξ is greater than or equal to t does not exceed the mathematical expectation of the modulus divided by t.
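For a discrete random variable both sides of the inequality can be computed exactly. A sketch with a toy distribution of our own invention (the values and probabilities below are illustrative, not from the text):

```python
from fractions import Fraction

# A toy discrete random variable: value -> probability (hypothetical numbers)
dist = {0: Fraction(1, 2), 1: Fraction(1, 4), 4: Fraction(1, 4)}

E_abs = sum(abs(x) * p for x, p in dist.items())  # E|xi| = 0/2 + 1/4 + 4/4 = 5/4

for t in (1, 2, 3):
    p_tail = sum(p for x, p in dist.items() if abs(x) >= t)
    print(t, p_tail, E_abs / t)
    assert p_tail <= E_abs / t  # the inequality holds for every t > 0
```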

Lifetime as a random variable

The uncertainty of the moment of death is a major risk factor in life insurance.

Nothing definite can be said about the moment of death of an individual. However, if we are dealing with a large homogeneous group of people and are not interested in the fate of individual people from this group, then we are within the framework of probability theory as the science of mass random phenomena that have the property of frequency stability.

Accordingly, we can speak of the lifetime as a random variable T.

Survival function

Probability theory describes the stochastic nature of a random variable T by its distribution function F(x), defined as the probability that the random variable T is less than the number x:

F(x) = P(T < x).

In actuarial mathematics it is convenient to work not with the distribution function but with the complementary distribution function s(x) = 1 − F(x). In terms of longevity, it is the probability that a person will live to age x.

The function s(x) is called the survival function:

The survival function has the following properties:

In life tables it is usually assumed that there is some age limit ω (the limiting age) and, accordingly, s(x) = 0 for x > ω.

When describing mortality by analytical laws, it is usually assumed that life time is unlimited, but the type and parameters of the laws are selected so that the probability of life beyond a certain age is negligible.

The survival function has a simple statistical meaning.

Suppose we observe a group of l_0 newborns, recording the moments of their deaths.

Let us denote by l_x the number of living members of this group at age x. Then:

s(x) = E l_x / l_0.

Symbol E here and below is used to denote mathematical expectation.

So, the survival function equals the average proportion of those who survive to age x in some fixed group of newborns.

In actuarial mathematics one often works not with the survival function but with the quantity l_x just introduced (for a fixed initial group size l_0).

The survival function can be recovered from the density:

s(x) = ∫ from x to ∞ of f(t) dt.
Lifespan Characteristics

From a practical point of view, the following characteristics are important:

1. The average lifetime

E T = ∫ from 0 to ∞ of x f(x) dx = ∫ from 0 to ∞ of s(x) dx,

2. The variance (dispersion) of the lifetime

Var T = E (T − E T)² = E T² − (E T)².

When assessing the probability of the occurrence of any random event, it is very important to understand in advance whether the probability of the event of interest depends on how other events unfold.

In the case of the classical scheme, when all outcomes are equally probable, we can already estimate the probability of an individual event of interest on its own. We can do this even if the event is a complex collection of several elementary outcomes. But what if several random events occur simultaneously or sequentially? How does that affect the probability of the event we care about?

If I roll a die several times hoping for a six and keep being unlucky, does that mean I should increase my bet because, according to probability theory, I am about to get lucky? Alas, probability theory states nothing of the kind. Dice, cards, and coins cannot remember what they showed us last time. It does not matter to them at all whether it is the first or the tenth time today that I try my luck. Every time I repeat the roll, I know only one thing: this time, again, the probability of rolling a six is one sixth. Of course, this does not mean that the number I need will never come up. It only means that my result after the first throw and after every other throw are independent events.

Events A and B are called independent if the occurrence of one of them does not affect the probability of the other. For example, the probability of hitting a target with the first of two guns does not depend on whether the target was hit by the other gun, so the events "the first gun hit the target" and "the second gun hit the target" are independent.

If two events A and B are independent, and the probability of each of them is known, then the probability of the simultaneous occurrence of both event A and event B (denoted AB) can be calculated using the following theorem.

Probability multiplication theorem for independent events

P(AB) = P(A)·P(B): the probability of the simultaneous occurrence of two independent events equals the product of their probabilities.

Example. The probabilities of hitting the target when firing the first and second guns are, respectively, p1 = 0.7 and p2 = 0.8. Find the probability of both guns hitting simultaneously with one salvo.

Solution: since the events are independent, P(AB) = p1 · p2 = 0.7 · 0.8 = 0.56.
What happens to our estimates if the initial events are not independent? Let's change the previous example a little.

Example.Two shooters shoot at targets at a competition, and if one of them shoots accurately, the opponent begins to get nervous and his results worsen. How to turn this everyday situation into a mathematical problem and outline ways to solve it? It is intuitively clear that it is necessary to somehow separate the two options for the development of events, to essentially create two scenarios, two different tasks. In the first case, if the opponent missed, the scenario will be favorable for the nervous athlete and his accuracy will be higher. In the second case, if the opponent has taken his chance decently, the probability of hitting the target for the second athlete decreases.


To separate the possible scenarios (often called hypotheses) for the development of events, we will often use a "probability tree" diagram. This diagram is similar in meaning to the decision tree you have probably already dealt with. Each branch represents a separate scenario, only now it carries its own value of the so-called conditional probability (q1, q2, 1 − q1, 1 − q2).

This scheme is very convenient for analyzing sequential random events. One more important question remains: where do the initial probability values come from in real situations? After all, probability theory does not work only with coins and dice. Usually such estimates are taken from statistics, and when statistical information is not available we conduct our own research, often starting not with collecting data but with asking what information we actually need.

Example. Suppose we need to estimate, in a city of one hundred thousand inhabitants, the market volume for a new non-essential product, for example a balm for the care of colored hair. Consider the "probability tree" diagram. In this case we need to roughly estimate the probability on each "branch".

So, our estimates of market capacity:

1) of all city residents, 50% are women,

2) of all women, only 30% dye their hair often,

3) of them, only 10% use balms for colored hair,

4) of them, only 10% can muster the courage to try a new product,




By the law of multiplication of probabilities, we determine the probability of the event of interest: A = {a city resident buys this new balm from us} = 0.00045.

Let's multiply this probability value by the number of city residents. As a result, we have only 45 potential customers, and considering that one bottle of this product lasts for several months, the trade is not very lively.

And yet there is some benefit from our assessments.

Firstly, we can compare forecasts of different business ideas; they will have different “forks” in the diagrams, and, of course, the probability values ​​will also be different.

Secondly, as we have already said, a random variable is not called random because it does not depend on anything at all; it is just that its exact value is not known in advance. We know that the average number of buyers can be increased (for example, by advertising the new product). So it makes sense to focus our efforts on the "forks" where the probability distribution does not suit us, on the factors we are able to influence.

Let's look at another quantitative example of consumer behavior research.

Example. On average, 10,000 people visit the food market per day. The probability that a market visitor enters the dairy products pavilion is 1/2.

It is known that this pavilion sells an average of 500 kg of various products per day.

Can we say that the average purchase in the pavilion weighs only 100 g? Discussion.




Of course not. It is clear that not everyone who entered the pavilion ended up buying something there.

As shown in the diagram, to answer the question about the average weight of a purchase, we must find an answer to the question, what is the probability that a person entering the pavilion will buy something there. If we do not have such data at our disposal, but we need it, we will have to obtain it ourselves by observing the visitors to the pavilion for some time. Let’s say our observations showed that only a fifth of pavilion visitors buy something.

Once we have obtained these estimates, the task becomes simple. Out of 10,000 people who come to the market, 5,000 will go to the dairy products pavilion; there will be only 1,000 purchases. The average purchase weight is 500 grams. It is interesting to note that in order to build a complete picture of what is happening, the logic of conditional “branching” must be defined at each stage of our reasoning as clearly as if we were working with a “specific” situation, and not with probabilities.
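The chain of estimates in the dairy-pavilion example reduces to multiplying the counts down the branches of the tree. A sketch using the numbers from the text (integer arithmetic keeps the counts exact):

```python
visitors = 10_000            # market visitors per day
entered = visitors // 2      # probability 1/2 of entering the pavilion
buyers = entered // 5        # observed: only one in five visitors buys
grams_sold = 500 * 1000      # 500 kg of products sold per day

avg_purchase_g = grams_sold // buyers
print(entered, buyers, avg_purchase_g)  # 5000 1000 500
```

The naive answer of 100 g divided 500 kg among all 5000 who entered; conditioning on actually buying gives 500 g.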

1. Let there be an electrical circuit consisting of n elements connected in series, each of which operates independently of the others.




The probability p of failure of each element is known. Determine the probability of proper operation of the entire section of the circuit (event A).

2. The student knows 20 out of 25 exam questions. Find the probability that the student knows the three questions given to him by the examiner.

3. Production consists of four successive stages, at each of which equipment operates, for which the probabilities of failure over the next month are equal to p 1, p 2, p 3 and p 4, respectively. Find the probability that there will be no production stoppages due to equipment failure in a month.
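Sketches of the three exercises above, under the independence assumptions each exercise states (the function names are ours; Exercise 2 uses combinations, since the order of the three drawn questions does not matter):

```python
from fractions import Fraction
from math import comb

# Exercise 1: n elements in series, each fails independently with probability p;
# the section works only if every element works.
def p_circuit_works(n, p):
    return (1 - p) ** n

# Exercise 2: the student knows 20 of 25 questions and is asked 3;
# favorable: all three questions among the 20 known ones.
p_all_known = Fraction(comb(20, 3), comb(25, 3))
print(p_all_known)  # 57/115

# Exercise 3: no stoppage means all four stages survive the month.
def p_no_stoppage(failure_probs):
    result = 1.0
    for p in failure_probs:
        result *= 1 - p
    return result
```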

It is clear that each event has a varying degree of possibility of its occurrence (its realization). To compare events quantitatively by their degree of possibility, we obviously need to associate with each event a number that is larger the more possible the event is. This number is called the probability of the event.

Probability of event– is a numerical measure of the degree of objective possibility of the occurrence of this event.

Consider a stochastic experiment and a random event A observed in this experiment. Let's repeat this experiment n times and let m(A) be the number of experiments in which event A occurred.

The ratio

ν(A) = m(A) / n   (1.1)

is called the relative frequency of the event A in the series of experiments performed.

It is easy to verify the validity of the properties:

if A and B are incompatible (AB = ∅), then ν(A+B) = ν(A) + ν(B) (1.2)

The relative frequency is determined only after a series of experiments and, generally speaking, can vary from series to series. However, experience shows that in many cases, as the number of experiments increases, the relative frequency approaches a certain number. This fact of stability of the relative frequency has been repeatedly verified and can be considered experimentally established.

Example 1.19. If you toss one coin, no one can predict which side it will land on. But if you throw two tons of coins, everyone will say that about one ton will land heads up; that is, the relative frequency of heads is approximately 0.5.

If, with an increase in the number of experiments, the relative frequency of the event ν(A) tends to a certain fixed number, then it is said that event A is statistically stable, and this number is called the probability of event A.

The probability of the event A is a fixed number P(A) to which the relative frequency ν(A) of this event tends as the number of experiments increases:

P(A) = lim ν(A) as n → ∞.

This definition is called statistical determination of probability .
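The statistical definition suggests estimating a probability by simulation. A minimal sketch, assuming a fair coin modeled with Python's `random` module and an arbitrary fixed seed for reproducibility:

```python
import random

random.seed(42)  # fixed seed so the run is reproducible

n = 100_000
heads = sum(random.random() < 0.5 for _ in range(n))
freq = heads / n
print(freq)  # close to 0.5
```

With 100,000 tosses the typical deviation of the frequency from 0.5 is on the order of 1/√n ≈ 0.003, illustrating the stability property described below.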

Let us consider a stochastic experiment whose space of elementary events consists of a finite or countably infinite set of elementary events ω1, ω2, …, ωi, …. Assume that each elementary event ωi is assigned a number pi, characterizing the degree of possibility of its occurrence and satisfying the properties

pi ≥ 0,  p1 + p2 + … = 1.

This number pi is called the probability of the elementary event ωi.

Now let A be a random event observed in this experiment, corresponding to a certain set of elementary events.

In this setting, the probability of the event A is the sum of the probabilities of the elementary events favoring A (belonging to the corresponding set A):

P(A) = sum of pi over all ωi in A.   (1.4)

The probability introduced in this way has the same properties as the relative frequency, namely:

and if AB = ∅ (A and B are incompatible),

then P(A+B) = P(A) + P(B)

Indeed, according to (1.4)

In the last relation we took advantage of the fact that not a single elementary event can favor two incompatible events at the same time.

We especially note that probability theory does not indicate methods for determining p i; they must be sought for practical reasons or obtained from a corresponding statistical experiment.

As an example, consider the classical scheme of probability theory: a stochastic experiment whose space of elementary events consists of a finite number n of elements. Assume additionally that all these elementary events are equally possible, that is, the probabilities of the elementary events are equal: p(ωi) = pi = p. It follows that p = 1/n.

Example 1.20. When throwing a symmetrical coin, getting heads and tails are equally possible, their probabilities are equal to 0.5.

Example 1.21. When throwing a symmetrical die, all faces are equally possible, their probabilities are equal to 1/6.

Now let the event A be favored by m elementary events; these are usually called the outcomes favorable to the event A. Then

P(A) = m/n.   (1.5)

We have obtained the classical definition of probability: the probability P(A) of the event A equals the ratio of the number of outcomes favorable to A to the total number of outcomes.

Example 1.22. The urn contains m white balls and n black balls. What is the probability of drawing a white ball?

Solution. The total number of elementary events is m+n, and they are all equally probable. Of them, m favor the event A. Hence P(A) = m/(m+n).

The following properties follow from the definition of probability:

Property 1. The probability of a certain event is equal to one.

Indeed, if an event is certain, every elementary outcome of the trial favors it. In this case m = n, hence

P(A) = m/n = n/n = 1.   (1.6)

Property 2. The probability of an impossible event is zero.

Indeed, if an event is impossible, none of the elementary outcomes of the trial favors it. In this case m = 0, therefore P(A) = m/n = 0/n = 0. (1.7)

Property 3. The probability of a random event is a number between zero and one.

Indeed, a random event is favored by only part of the total number of elementary outcomes of the trial. That is, 0 ≤ m ≤ n, so 0 ≤ m/n ≤ 1; therefore the probability of any event satisfies the double inequality 0 ≤ P(A) ≤ 1. (1.8)

Comparing the definitions of probability (1.5) and relative frequency (1.1), we conclude: the definition of probability does not require trials to actually be carried out, while the definition of relative frequency assumes that trials were actually performed. In other words, the probability is computed before the experiment and the relative frequency after it.

However, calculating probability requires preliminary information about the number or probabilities of elementary outcomes favorable for a given event. In the absence of such preliminary information, empirical data are used to determine the probability, that is, the relative frequency of the event is determined based on the results of a stochastic experiment.

Example 1.23. The technical control department discovered 3 non-standard parts in a batch of 80 randomly selected parts. The relative frequency of occurrence of non-standard parts is r(A) = 3/80 ≈ 0.038.

Example 1.24. 24 shots were fired at a target, and 19 hits were recorded. The relative frequency of hitting the target is r(A) = 19/24 ≈ 0.79.

Long-term observations have shown that if experiments are carried out under identical conditions, and the number of trials in each is sufficiently large, then the relative frequency exhibits the property of stability: in different experiments it changes little (the less, the more trials are performed), fluctuating around a certain constant number. It turns out that this constant number can be taken as an approximate value of the probability.

The relationship between relative frequency and probability will be described in more detail and more precisely below. Now let us illustrate the property of stability with examples.

Example 1.25. According to Swedish statistics, the relative frequency of births of girls in 1935, by month, is characterized by the following numbers (arranged in order of months, starting with January): 0.486; 0.489; 0.490; 0.471; 0.478; 0.482; 0.462; 0.484; 0.485; 0.491; 0.482; 0.473.

The relative frequency fluctuates around the number 0.481, which can be taken as an approximate value for the probability of having girls.
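The fluctuation around 0.481 can be checked directly from the twelve monthly values given in Example 1.25. A minimal sketch:

```python
# Monthly relative frequencies of girl births, Sweden, 1935 (Example 1.25)
freqs = [0.486, 0.489, 0.490, 0.471, 0.478, 0.482,
         0.462, 0.484, 0.485, 0.491, 0.482, 0.473]

mean_freq = sum(freqs) / len(freqs)   # average relative frequency
spread = max(freqs) - min(freqs)      # total range of the fluctuation

print(round(mean_freq, 3))  # 0.481
print(round(spread, 3))     # 0.029
```

The mean rounds to 0.481, the value the text takes as an approximate probability, and the whole year's fluctuation stays within about 0.03.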

Note that statistical data from different countries give approximately the same relative frequency value.

Example 1.26. Coin-tossing experiments were carried out many times, in which the number of times heads (the "coat of arms" side) appeared was counted. The results of several such experiments are shown in the table.
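The table of historical coin-tossing results is not reproduced here, but the stability property it illustrates is easy to observe in simulation. A sketch (the function name and the fixed seed are choices for reproducibility, not part of the original experiments):

```python
import random

def heads_frequency(n_tosses, rng):
    """Relative frequency of heads in n_tosses simulated fair-coin tosses."""
    heads = sum(rng.random() < 0.5 for _ in range(n_tosses))
    return heads / n_tosses

rng = random.Random(42)  # fixed seed so the run is reproducible
for n in (100, 1000, 10000, 100000):
    # frequencies fluctuate around 0.5, less and less as n grows
    print(n, heads_frequency(n, rng))
```

As n grows, the printed frequencies settle ever closer to the probability 0.5.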

Knowing that probability can be measured, let's try to express it in numbers. There are three possible ways.

Fig. 1.1. Measuring Probability

PROBABILITY DETERMINED BY SYMMETRY

There are situations in which possible outcomes are equally probable. For example, when tossing a coin once, if the coin is standard, the probability of “heads” or “tails” appearing is the same, i.e. P("heads") = P("tails"). Since only two outcomes are possible, then P(“heads”) + P(“tails”) = 1, therefore, P(“heads”) = P(“tails”) = 0.5.

In experiments where all outcomes have equal chances of occurring, the probability of event E is:

P(E) = (number of outcomes favorable to E) / (total number of equally likely outcomes)

Example 1.1. The coin is tossed three times. What is the probability of two heads and one tail?

First, let us find all possible outcomes. To make sure that we have found all possible options, we use a tree diagram (see Chapter 1, Section 1.3.1): HHH, HHT, HTH, HTT, THH, THT, TTH, TTT.

So, there are 8 equally likely outcomes, and each has probability 1/8. Event E (two heads and one tail) occurs in three of them: HHT, HTH, THH. Therefore:

P(E) = 3/8
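The tree-diagram enumeration in Example 1.1 can be mirrored with a brute-force listing of all equally likely outcomes. A minimal sketch:

```python
from itertools import product

# All 2**3 = 8 equally likely outcomes of tossing a coin three times
outcomes = list(product("HT", repeat=3))

# Event E: exactly two heads and one tail
favorable = [o for o in outcomes if o.count("H") == 2]

print(len(outcomes))                   # 8
print(len(favorable))                  # 3  -> HHT, HTH, THH
print(len(favorable) / len(outcomes))  # 0.375, i.e. 3/8
```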

Example 1.2. A standard die is rolled twice. What is the probability that the score is 9 or more?

Let's find all possible outcomes.

Table 1.2. The total number of points obtained by rolling a die twice

So, in 10 out of 36 possible outcomes the sum of points is 9 or more; therefore:

P(E) = 10/36 = 5/18
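The 36-cell table of Example 1.2 can likewise be generated and counted directly:

```python
from fractions import Fraction
from itertools import product

# All 6 * 6 = 36 equally likely outcomes of rolling a die twice
rolls = list(product(range(1, 7), repeat=2))

# Event E: the total score is 9 or more
favorable = [r for r in rolls if sum(r) >= 9]

print(len(rolls))                             # 36
print(len(favorable))                         # 10
print(Fraction(len(favorable), len(rolls)))   # 5/18
```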

EMPIRICALLY DETERMINED PROBABILITY

The coin example from Table 1.1 clearly illustrates the mechanism for determining probability empirically.

Given the total number of trials and the number of them that were successful, the probability of the required result is estimated as follows:

P(E) ≈ (number of successful trials) / (total number of trials)

This ratio is the relative frequency of occurrence of a certain result over a sufficiently long series of trials. The probability is estimated either from the data of an experiment performed specifically for this purpose, or from past data.

Example 1.3. Of the five hundred electric lamps tested, 415 worked for more than 1000 hours. Based on the data from this experiment, we can conclude that the probability of normal operation of a lamp of this type for more than 1000 hours is:

P ≈ 415/500 = 0.83

Note. Such testing is destructive, so not all lamps can be tested. If only one lamp were tested, the estimated probability would be 1 or 0 (that is, the lamp either lasts 1000 hours or it does not). Hence the need to repeat the experiment many times.
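The empirical estimate of Examples 1.3 is a single division; a minimal sketch (the function name is illustrative):

```python
def empirical_probability(successes, trials):
    """Relative-frequency estimate of a probability: r = successes / trials."""
    if trials <= 0:
        raise ValueError("need at least one trial")
    return successes / trials

# Example 1.3: 415 of 500 tested lamps lasted more than 1000 hours
p_lamp = empirical_probability(415, 500)
print(p_lamp)  # 0.83
```

The same helper applied to the note's single-lamp case returns only 0.0 or 1.0, which is why many trials are needed.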

Example 1.4. Table 1.3 shows data on the length of service of men working in the company:

Table 1.3. Men's work experience

What is the probability that the next person hired by the company will work for more than two years?

Solution.

The table shows that 38 out of 100 employees have been with the company for more than two years. The empirical probability that the next employee will remain with the company for more than two years is:

P ≈ 38/100 = 0.38

Here we assume that the new employee is "typical" and that the working conditions are unchanged.

SUBJECTIVE PROBABILITY ASSESSMENT

In business, situations often arise in which there is neither symmetry nor experimental data. In such cases, the probability of a favorable outcome is assessed subjectively, based on the views and experience of the researcher.

Example 1.5.

1. An investment expert estimates that the probability of making a profit in the first two years is 0.6.

2. Marketing manager's forecast: the probability of selling 1000 units of a product in the first month after its appearance on the market is 0.4.