Probability means possibility. It is a branch of mathematics that deals with the occurrence of a random event. The value is expressed from zero to one. Probability has been introduced in Maths to predict how likely events are to happen. The meaning of probability is basically the extent to which something is likely to happen.
Signals can be divided into two main categories: deterministic and random. The
term random signal is used primarily to denote signals whose source is random
in nature.
As an example we can mention thermal noise, which is created by the
random movement of electrons in an electric conductor.
Apart from this, the term random signal is also used for signals falling into other
categories, such as periodic signals in which one or more parameters behave
randomly. An example is a periodic sinusoidal signal with a random phase or
amplitude.
There is one other class of signals whose behaviour cannot be predicted.
Such signals are called random signals, because their precise values cannot be
predicted in advance, before they actually occur.
An example of a random signal is the noise interference in communication systems:
the noise interference during transmission is totally unpredictable.
In the same way, the noise generated by the receiver itself is random. Even some
signals which are not noise signals are also random signals. These signals cannot be
modelled mathematically. In practice, electromagnetic interference is the major source
of random noise.
An expected subset of the sample space, i.e., a set of outcomes of interest, is called an event.
As an example, let us consider an experiment of throwing a cubic die. In this case, the
sample space S will be
S = {1, 2, 3, 4, 5, 6}
Now, if we are interested in the number ‘3’ as an outcome, or in an even number, i.e.,
the subset {2, 4, 6}, then this subset is called an event.
This is denoted by letter ‘E’. Hence event E is a subset of the sample space ‘S’.
If event E has only one outcome, then it is called an elementary event.
On the other hand, if event E does not contain any outcome, then it is called a null
event.
If E = S, then the event contains all the outcomes. Such an event is called a certain
event.
It always occurs, whatever the outcome of the experiment.
An experiment is defined as the process which is conducted to get some results. If the same experiment is performed repeatedly under the same conditions, similar results are expected. An experiment is sometimes called a trial. As an example, the throw of a coin is an experiment or trial. This trial results in two outcomes, namely Head and Tail.
Conditional probability is defined as the likelihood of an event or outcome occurring, based on the occurrence of a previous event or outcome. Conditional probability is calculated by multiplying the probability of the preceding event by the updated probability of the succeeding, or conditional, event.
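As a small sketch of this definition (using only the standard library, with a fair-die example chosen for illustration), the conditional probability P(A|B) = P(A ∩ B) / P(B) can be computed by counting outcomes:

```python
from fractions import Fraction

# Sample space for one roll of a fair die.
S = {1, 2, 3, 4, 5, 6}
A = {2, 4, 6}        # event A: the roll is even
B = {4, 5, 6}        # event B: the roll is greater than 3

def prob(event, space):
    """Classical probability: favourable outcomes / total outcomes."""
    return Fraction(len(event & space), len(space))

p_b = prob(B, S)                 # P(B) = 1/2
p_a_and_b = prob(A & B, S)       # P(A ∩ B) = 1/3
p_a_given_b = p_a_and_b / p_b    # P(A|B) = 2/3

print(p_a_given_b)   # 2/3
```

Note that multiplying back, P(A ∩ B) = P(B) · P(A|B), which is the multiplication rule stated above.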
A random variable that takes on an infinite number of values is called a
continuous random variable.
Actually, there are several physical systems (experiments) that generate
continuous outputs or outcomes.
Such systems generate an infinite number of outputs or outcomes within a finite
period.
Continuous random variables may be used to define the outputs of such
systems.
As an example, the noise voltage generated by an electronic amplifier has a
continuous amplitude. This means that sample space S of the noise voltage
amplitude is continuous. Therefore, in this case, the random variable X has a
continuous range of values.
Joint probability is a statistical measure that calculates the likelihood of two events
occurring together and at the same point in time. Joint probability is the probability of
event Y occurring at the same time that event X occurs.
The Formula for Joint Probability Is
Notation for joint probability can take a few different forms. The following formula
represents the probability of events intersection:
P(X ∩ Y)
where:
X, Y = two different events that intersect
P(X and Y), P(XY) = the joint probability of X and Y
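As a brief illustration (an example of my own choosing, using only the standard library): for independent events the joint probability factors as P(X ∩ Y) = P(X) · P(Y), which we can verify by counting on the joint sample space of a coin toss and a die roll:

```python
from fractions import Fraction
from itertools import product

# Joint sample space: (coin, die) pairs for one coin toss and one die roll.
space = set(product(["H", "T"], [1, 2, 3, 4, 5, 6]))

X = {s for s in space if s[0] == "H"}      # event X: coin shows heads
Y = {s for s in space if s[1] % 2 == 0}    # event Y: die shows an even number

def p(event):
    return Fraction(len(event), len(space))

# Joint probability of the intersection.
p_joint = p(X & Y)
print(p_joint)   # 1/4, which equals P(X) * P(Y) since the events are independent
```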
In Probability, the set of outcomes of an experiment is called events. There are different
types of events such as independent events, dependent events, mutually exclusive
events, and so on.
If the probability of occurrence of an event A is not affected by the occurrence of
another event B, then A and B are said to be independent events.
Property 1: The probability of a certain event is unity i.e.,
P (A) = 1
Property 2: The probability of any event is always nonnegative and less than or equal to 1. Mathematically,
0 ≤ P(A) ≤ 1
Property 3: If A and B are two mutually exclusive events, then
P(A + B) = P(A) + P(B)
Property 4: If A is any event, then the probability of not happening of A is
P (Ā) = 1 – P (A)
where Ā represents the complement of event A.
Property 5: If A and B are any two events (not mutually exclusive events), then
P (A + B) = P (A) + P (B) – P (AB)
where P (AB) is called the probability of events A and B both occurring simultaneously.
Such an event is called joint event of A and B, and the probability P (AB) is called the
joint probability. Now, if events A and B are mutually exclusive, then the joint probability,
P(AB) = 0.
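Properties 3 and 5 can be checked numerically on a small example (a die roll chosen for illustration, standard library only):

```python
from fractions import Fraction

S = {1, 2, 3, 4, 5, 6}            # one roll of a fair die
A = {1, 2, 3}                     # event A: the roll is at most 3
B = {3, 4}                        # event B: the roll is 3 or 4 (not mutually exclusive with A)

def p(event):
    return Fraction(len(event), len(S))

# Property 5: P(A + B) = P(A) + P(B) - P(AB)
lhs = p(A | B)                    # probability of A or B
rhs = p(A) + p(B) - p(A & B)      # addition rule with the joint probability subtracted
print(lhs, rhs)   # 2/3 2/3
```

For mutually exclusive events the joint term P(AB) vanishes and the formula reduces to Property 3.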
In statistics and probability theory, the Bayes’ theorem (also known as the Bayes’ rule)
is a mathematical formula used to determine the conditional probability of events.
Essentially, the Bayes’ theorem describes the probability of an event based on prior
knowledge of the conditions that might be relevant to the event.
The Bayes’ theorem is expressed in the following formula:
P(A|B) = P(B|A) × P(A) / P(B)
Where:
P(A|B) – the probability of event A occurring, given event B has occurred
P(B|A) – the probability of event B occurring, given event A has occurred
P(A) – the probability of event A
P(B) – the probability of event B
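A minimal sketch of the formula in use (the numbers below are hypothetical, chosen only to illustrate the calculation):

```python
def bayes(p_b_given_a, p_a, p_b):
    """Bayes' theorem: P(A|B) = P(B|A) * P(A) / P(B)."""
    return p_b_given_a * p_a / p_b

# Hypothetical numbers: a test detects a condition with P(B|A) = 0.9,
# the condition has prior probability P(A) = 0.01, and the test comes
# back positive with overall probability P(B) = 0.05.
posterior = bayes(0.9, 0.01, 0.05)
print(posterior)   # ≈ 0.18
```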
Random variables are values which vary without following any pattern, that is, they change randomly. For example, if an experiment (flipping of a coin, or rolling of a die) has an outcome bounded to be from a given set of values, but not fixed, the result will change every time the experiment is conducted. Such an outcome is termed a random variable.
The possible outcomes of an experiment have varied chances. Understanding this distribution of chances/probabilities among the possible outcomes is known as Probability Distribution.
• The Bernoulli Distribution
• The Binomial Distribution
• The Geometric Distribution
• The Hypergeometric Distribution
• The Multinomial Distribution
• The Poisson Distribution
• The Negative Binomial Distribution
• Discrete uniform distribution
Probability is calculated for the possible outcomes of an experiment. Data can be classified as discrete or continuous, and the probability distribution is broadly classified into two corresponding types: the discrete probability distribution and the continuous probability distribution.
Since random variables simply assign values to outcomes in a sample space and we have defined probability measures on sample spaces, we can also talk about probabilities for random variables. Specifically, we can compute the probability that a discrete random variable equals a specific value (probability mass function) and the probability that a random variable is less than or equal to a specific value (cumulative distribution function).
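As a concrete sketch of these two functions (a fair die chosen for illustration, standard library only), the probability mass function and the cumulative distribution function of a discrete random variable can be written as:

```python
from fractions import Fraction

# PMF of a fair six-sided die: P(X = x) = 1/6 for x in 1..6.
pmf = {x: Fraction(1, 6) for x in range(1, 7)}

def cdf(x):
    """P(X <= x): sum the PMF over all values not exceeding x."""
    return sum((p for v, p in pmf.items() if v <= x), Fraction(0))

print(pmf[3])   # 1/6
print(cdf(4))   # 2/3
```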
A Poisson distribution is a tool that helps to predict the probability of certain events
happening when you know how often the event has occurred. It gives us
the probability of a given number of events happening in a fixed interval of time.
Poisson distributions are valid only for integer values on the horizontal axis. λ (also written as μ)
is the expected number of event occurrences.
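The Poisson probability mass function is P(X = k) = e^(−λ) λ^k / k!, which can be sketched directly (standard library only; the value λ = 2 is an arbitrary illustration):

```python
import math

def poisson_pmf(k, lam):
    """P(X = k) = exp(-lam) * lam**k / k! for k = 0, 1, 2, ..."""
    return math.exp(-lam) * lam**k / math.factorial(k)

# If an event occurs on average lam = 2 times per interval, the
# probability of seeing exactly 3 occurrences in one interval:
p3 = poisson_pmf(3, 2.0)
print(round(p3, 4))   # 0.1804
```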
A binomial distribution can be thought of as simply the probability of a SUCCESS or
FAILURE outcome in an experiment or survey that is repeated multiple times. The
binomial is a type of distribution that has two possible outcomes (the prefix “bi” means
two, or twice). For example, a coin toss has only two possible outcomes, heads or tails,
and taking a test could have two possible outcomes: pass or fail.
A Binomial Distribution shows either (S)uccess or (F)ailure.
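The binomial probability of k successes in n trials is C(n, k) p^k (1 − p)^(n − k); a minimal sketch for the coin-toss example above (standard library only):

```python
import math

def binomial_pmf(k, n, p):
    """P(exactly k successes in n independent trials with success probability p)."""
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

# Probability of exactly 5 heads in 10 tosses of a fair coin:
p5 = binomial_pmf(5, 10, 0.5)
print(round(p5, 4))   # 0.2461
```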
The Rayleigh distribution is a continuous probability distribution named after the
English Lord Rayleigh. It is a special case of the Weibull distribution with a shape
parameter of 2. When a Rayleigh distribution is set with a scale parameter (σ) of 1, it is
equal to a chi distribution with 2 degrees of freedom.
The notation X ~ Rayleigh(σ) means that the random variable X has a Rayleigh
distribution with scale parameter σ. The probability density function (x ≥ 0) is:
f(x; σ) = (x / σ²) e^(−x² / (2σ²))
where e is Euler’s number.
As the scale parameter increases, the distribution gets wider and flatter.
(Figure: Rayleigh densities for several different values of the scale parameter σ.)
• Normal Distribution
• Chi-Square Distribution
• Fisher’s F Distribution
• Student’s t Distribution
• The Gamma Distribution
• The Exponential Distribution
• The Beta Distribution
• The Weibull Distribution
A continuous random variable is a random variable where the data can take infinitely
many values. For example, a random variable measuring the time taken for something
to be done is continuous since there are an infinite number of possible times that can be
taken.
For any continuous random variable with probability density function f(x), we have that:
f(x) ≥ 0 for all x, and ∫ f(x) dx = 1, the integral being taken over the whole real line.
A random variable is called discrete if it has either a finite or a countable number of
possible values.
These are random variables that are neither discrete nor continuous, but are a
mixture of both. In particular, a mixed random variable has a continuous part and a
discrete part.
• Gaussian random variables are completely defined through only their first and second order moments, i.e.,
by their means, variances, and covariances.
• If jointly Gaussian random variables are uncorrelated, they are also statistically independent (a property
that does not hold for random variables in general).
• Random variables produced by a linear transformation of X1, …, Xn will also be Gaussian.
The variance of a random variable X is often written as Var(X), σ², or σX².
For a discrete random variable, the variance is calculated by summing, over all values of the random
variable, the product of the squared difference between the value and the expected value and the
probability of that value. In symbols,
Var(X) = Σ (x − µ)² P(X = x)
An equivalent formula is Var(X) = E(X²) – [E(X)]²
The square root of the variance is equal to the standard deviation.
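Both variance formulas can be checked against each other on a small example (a fair die, standard library only):

```python
from fractions import Fraction

# PMF of a fair six-sided die.
pmf = {x: Fraction(1, 6) for x in range(1, 7)}

mean = sum(x * p for x, p in pmf.items())                  # E(X) = 7/2

# Definition: sum of (x - mu)^2 * P(X = x) over all values.
var_def = sum((x - mean)**2 * p for x, p in pmf.items())

# Equivalent formula: E(X^2) - [E(X)]^2.
var_alt = sum(x**2 * p for x, p in pmf.items()) - mean**2

print(var_def, var_alt)   # 35/12 35/12
```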
The “moments” of a random variable (or of its distribution) are expected values of powers or related functions of the random variable.
The Markov inequality applies to random variables that take only nonnegative values. It can be stated as
follows:
If X is a random variable that takes only nonnegative values, then for any a > 0,
P(X ≥ a) ≤ E(X) / a
Chebyshev’s inequality is a result in probability theory which guarantees that, for a large range of probability distributions, no more than a specific fraction of values can lie beyond a specified distance from the mean. In other words, only a definite fraction of values will be found outside a specific distance from the mean of a distribution. Formally, for any k > 0,
P(|X − µ| ≥ kσ) ≤ 1 / k²
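Both inequalities can be verified exactly on a small distribution (a fair die chosen for illustration; Chebyshev is used here in the equivalent form P(|X − µ| ≥ c) ≤ Var(X)/c²):

```python
from fractions import Fraction

pmf = {x: Fraction(1, 6) for x in range(1, 7)}   # fair die: nonnegative values

mean = sum(x * p for x, p in pmf.items())                  # 7/2
var = sum((x - mean)**2 * p for x, p in pmf.items())       # 35/12

# Markov: P(X >= a) <= E(X)/a, checked here with a = 5.
p_ge_5 = sum(p for x, p in pmf.items() if x >= 5)          # 1/3
assert p_ge_5 <= mean / 5

# Chebyshev: P(|X - mu| >= c) <= Var(X)/c^2, checked here with c = 2.
p_far = sum(p for x, p in pmf.items() if abs(x - mean) >= 2)   # 1/3
assert p_far <= var / 4

print(p_ge_5, p_far)   # 1/3 1/3
```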
A pair of variables whose values are determined by a random experiment is called
a pair of random variables. There are two types of pairs:
• A pair of random variables is discrete if the set of values taken by each of the random variables is a
finite or countably infinite set.
• A pair of random variables is continuous if the set of values taken by each of the random variables is
an uncountably infinite set.
The joint moments of a pair of random variables (X, Y) are expected values of products of powers of the
two variables:
m_jk = E[X^j Y^k]
The most important joint moments are the correlation E[XY] and the covariance
E[(X − µX)(Y − µY)].
The Central Limit Theorem states that the sampling distribution of the sample means approaches a normal
distribution as the sample size gets larger — no matter what the shape of the population distribution. This
fact holds especially true for sample sizes over 30.
All this is saying is that as you take more samples, especially large ones, your graph of the sample
means will look more like a normal distribution.
Here’s what the Central Limit Theorem is saying, graphically. The picture below shows one of the simplest
types of test: rolling a fair die. The more times you roll the die, the more likely the shape of the
distribution of the means tends to look like a normal distribution graph.
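The die-rolling picture can be reproduced with a short simulation (standard library only; the sample size 30 and the number of repetitions are arbitrary illustrative choices):

```python
import random
import statistics

random.seed(0)  # fixed seed so the sketch is reproducible

def sample_mean(n):
    """Mean of n rolls of a fair die."""
    return statistics.mean(random.randint(1, 6) for _ in range(n))

# Collect the means of many samples of size 30.
means = [sample_mean(30) for _ in range(2000)]

# Per the Central Limit Theorem, these means cluster around the
# population mean of 3.5, in a roughly normal-looking histogram.
print(round(statistics.mean(means), 2))
```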
• A random process is a time-varying function that assigns the outcome of a random experiment to each time
instant: X(t).
• For a fixed outcome (sample path): a random process is a time-varying function, e.g., a signal.
• For a fixed t: a random process is a random variable.
• If one scans all possible outcomes of the underlying random experiment, we shall get an ensemble of
signals.
• Random Process can be continuous or discrete
• A real random process is also called a stochastic process. Example: a noise source (noise can often be
modeled as a Gaussian random process).
• The computation of statistical averages (e.g., mean and autocorrelation function) of a random process
requires an ensemble of sample functions (data records) that may not always be feasible.
• In many real-life applications, it would be very convenient to calculate the averages from a single
data record.
• This is possible in certain random processes called ergodic processes.
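As a sketch of the ergodic idea (Gaussian noise chosen for illustration, standard library only): for an ergodic process the time average taken over a single data record converges to the ensemble mean, so one record suffices:

```python
import random
import statistics

random.seed(1)  # fixed seed so the sketch is reproducible

# One sample function of a zero-mean Gaussian noise process,
# observed at 10,000 time instants.
record = [random.gauss(0.0, 1.0) for _ in range(10_000)]

# Time average over the single record; for an ergodic process this
# estimates the ensemble mean (0 in this model).
time_avg = statistics.mean(record)
print(round(time_avg, 2))
```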
A Poisson process is a model for a series of discrete events where the average time between events is known, but the exact timing of events is random.
Regression is a statistical method used in finance, investing, and other disciplines that attempts to determine the strength and character of the relationship between one dependent variable (usually denoted by Y) and a series of other variables (known as independent variables).
Normally, linear regression is divided into two types: multiple linear regression and simple linear
regression. For better clarity, we will discuss these types in detail.
Multiple Linear Regression
In this type of linear regression, we attempt to discover the relationship between two or more
independent variables (inputs) and the corresponding dependent variable (output); the independent
variables can be either continuous or categorical.
Simple Linear Regression
In simple linear regression, we aim to reveal the relationship between a single independent variable
(input) and a corresponding dependent variable (output). We can express this as the line
y = β0 + β1x + ε
Here, y represents the output or dependent variable, β0 and β1 are two unknown constants that represent
the intercept and the coefficient (slope) respectively, and ε (epsilon) is the error term.
The least squares method is the process of finding the best-fitting curve or line of best fit for a set of data points by minimizing the sum of the squares of the offsets (residuals) of the points from the curve. During the process of finding the relation between two variables, the trend of outcomes is estimated quantitatively; this process is termed regression analysis. The method of curve fitting is an approach to regression analysis, and the least squares method fits equations that approximate the curves to the given raw data.
1. Marks scored by students based on number of hours studied (ideally): here the marks scored in exams
are the dependent variable and the number of hours studied is the independent variable.
2. Predicting crop yields based on the amount of rainfall: yield is the dependent variable while
the measure of precipitation is the independent variable.
3. Predicting the salary of a person based on years of experience: experience is
the independent variable while salary is the dependent variable.
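The third example can be sketched end to end with the closed-form least-squares estimates β1 = Sxy/Sxx and β0 = ȳ − β1·x̄ (the data below are hypothetical, invented purely for illustration):

```python
# Hypothetical data: years of experience (x) vs. salary in thousands (y).
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [30.0, 35.0, 42.0, 48.0, 55.0]

n = len(xs)
x_mean = sum(xs) / n
y_mean = sum(ys) / n

# Least-squares estimates for y = beta0 + beta1 * x:
# beta1 = Sxy / Sxx, beta0 = y_mean - beta1 * x_mean.
s_xy = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys))
s_xx = sum((x - x_mean) ** 2 for x in xs)
beta1 = s_xy / s_xx
beta0 = y_mean - beta1 * x_mean

print(round(beta0, 2), round(beta1, 2))   # 23.1 6.3
```

These estimates minimize the sum of squared residuals described above, so each extra year of (hypothetical) experience adds about 6.3 to the predicted salary.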