Numerical and Statistical Methods

Unit 2

Unit 3

Unit 4

Unit 5
 Introduction to Probability Distribution
 Discrete Random Variable in Probability Distribution
 Distribution function of Discrete Random Variable
 Continuous Random Variable in Probability Distribution
 Continuous Distribution Function in Probability Distribution
 Expectation in Probability Distribution
 Mean and Variance Part 1 in Probability Distribution
 Mean and Variance Part 2 in Probability Distribution
 Moments and Moment Generating Function Part 1 in Probability Distribution
 Moments and Moment Generating Function Part 2 in Probability Distribution
 Binomial Probability Distribution
 Poisson Distribution with Solved Example
 Normal Distribution Full Basic Concept
 Normal Distribution with Solved Example
Numerical and Statistical Methods is a semester 2 subject of BSc IT offered by Mumbai University. The course contains the following units and subtopics.
Unit 1: Mathematical Modeling and Engineering Problem Solving covers A Simple Mathematical Model, and Conservation Laws and Engineering Problems. Approximations and Round-Off Errors covers Significant Figures, Accuracy and Precision, Error Definitions, and Round-Off Errors. Truncation Errors and the Taylor Series covers The Taylor Series, Error Propagation, Total Numerical Errors, and Formulation Errors and Data Uncertainty.
Unit 2: Solutions of Algebraic and Transcendental Equations covers the Bisection Method, the Newton-Raphson Method, the Regula-Falsi Method, and the Secant Method. Interpolation covers Forward Difference, Backward Difference, Newton's Forward Difference Interpolation, Newton's Backward Difference Interpolation, and Lagrange's Interpolation.
Unit 3: Solution of Simultaneous Algebraic Equations (Linear) Using Iterative Methods covers the Gauss-Jordan Method and the Gauss-Seidel Method. Numerical Differentiation and Integration covers numerical differentiation, and numerical integration using the Trapezoidal Rule and Simpson's 1/3rd and 3/8th Rules. Numerical Solution of 1st and 2nd Order Differential Equations covers the Taylor series, Euler's Method, the Modified Euler's Method, and the Runge-Kutta Method for 1st and 2nd order differential equations.
Unit 4: Least-Squares Regression covers Linear Regression, Polynomial Regression, Multiple Linear Regression, General Linear Least Squares, and Nonlinear Regression. Linear Programming covers the linear optimization problem, formulation and graphical solution, and basic and feasible solutions.
Unit 5: Random Variables covers discrete and continuous random variables, the probability density function, probability distributions of random variables, expected value, and variance. Distributions covers discrete distributions (Uniform, Binomial, Poisson, Bernoulli) and continuous distributions (uniform, exponential), with derivation of mean and variance only, stating their other properties and discussing their applications; for the Normal distribution, all properties and applications are stated.
In numerical linear algebra, the Gauss–Seidel method, also known as the Liebmann method or the method of successive displacement, is an iterative method used to solve a system of linear equations. It is named after the German mathematicians Carl Friedrich Gauss and Philipp Ludwig von Seidel, and is similar to the Jacobi method. Though it can be applied to any matrix with nonzero elements on the diagonal, convergence is only guaranteed if the matrix is either strictly diagonally dominant, or symmetric and positive definite. The method was first mentioned in a private letter from Gauss to his student Gerling in 1823; Seidel did not publish it until 1874.
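As a sketch, the Gauss–Seidel iteration can be written in plain Python. The key point is that each component update immediately uses the newest values already computed in the same sweep; the matrix and tolerances below are illustrative examples, not part of any particular syllabus problem:

```python
def gauss_seidel(A, b, x0=None, tol=1e-10, max_iter=100):
    """Solve Ax = b by Gauss-Seidel iteration.

    Convergence is guaranteed when A is strictly diagonally
    dominant or symmetric positive definite.
    """
    n = len(b)
    x = list(x0) if x0 is not None else [0.0] * n
    for _ in range(max_iter):
        max_diff = 0.0
        for i in range(n):
            # Sum over all j != i, using the newest values already computed
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            new_xi = (b[i] - s) / A[i][i]
            max_diff = max(max_diff, abs(new_xi - x[i]))
            x[i] = new_xi  # update in place so later rows see it
        if max_diff < tol:
            break
    return x

# Strictly diagonally dominant example system; exact solution is (1, 1, 1)
A = [[4.0, 1.0, 1.0],
     [1.0, 5.0, 2.0],
     [1.0, 2.0, 6.0]]
b = [6.0, 8.0, 9.0]
x = gauss_seidel(A, b)
```

Using `x[i]` as soon as it is updated is what distinguishes Gauss–Seidel from the Jacobi method, which would keep a separate copy of the old vector for the whole sweep.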
In numerical analysis, the Runge–Kutta methods are a family of implicit and explicit iterative methods, which include the well-known routine called the Euler method, used in temporal discretization for the approximate solutions of ordinary differential equations. These methods were developed around 1900 by the German mathematicians Carl Runge and Wilhelm Kutta.
Gaussian elimination, also known as row reduction, is an algorithm in linear algebra for solving a system of linear equations. It is usually understood as a sequence of operations performed on the corresponding matrix of coefficients. This method can also be used to find the rank of a matrix, to calculate the determinant of a matrix, and to calculate the inverse of an invertible square matrix. The method is named after Carl Friedrich Gauss (1777–1855). Some special cases of the method, albeit presented without proof, were known to Chinese mathematicians as early as circa 179 CE. To perform row reduction on a matrix, one uses a sequence of elementary row operations to modify the matrix until the lower left-hand corner of the matrix is filled with zeros, as much as possible. There are three types of elementary row operations: swapping two rows, multiplying a row by a nonzero number, and adding a multiple of one row to another row.
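The elimination-plus-back-substitution procedure can be sketched as follows; partial pivoting (the row swap) is added for numerical stability, and the 2×2 system at the end is an illustrative example only:

```python
def gaussian_elimination(A, b):
    """Solve Ax = b by row reduction to upper-triangular form,
    then back substitution. Uses partial pivoting."""
    n = len(b)
    # Work on an augmented copy [A | b] so the inputs are not modified
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        # Elementary row operation 1: swap in the row with the largest pivot
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            # Elementary row operation 3: add a multiple of the pivot row
            # to zero out the entry below the pivot
            for c in range(col, n + 1):
                M[r][c] -= factor * M[col][c]
    # Back substitution on the upper-triangular system
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

# 2x + y = 5 and x + 3y = 10, whose solution is x = 1, y = 3
solution = gaussian_elimination([[2.0, 1.0], [1.0, 3.0]], [5.0, 10.0])
```

Note that only two of the three elementary row operations appear here; scaling a row by a nonzero number is not needed when back substitution divides by the pivot at the end.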
In mathematics, the Taylor series of a function is an infinite sum of terms that are expressed in terms of the function's derivatives at a single point. For most common functions, the function and the sum of its Taylor series are equal near this point. Taylor series are named after Brook Taylor, who introduced them in 1715. If zero is the point where the derivatives are considered, a Taylor series is also called a Maclaurin series, after Colin Maclaurin, who made extensive use of this special case of Taylor series in the 18th century.

The partial sum formed by the first n + 1 terms of a Taylor series is a polynomial of degree n that is called the nth Taylor polynomial of the function. Taylor polynomials are approximations of a function, which generally become better as n increases. Taylor's theorem gives quantitative estimates on the error introduced by the use of such approximations. If the Taylor series of a function is convergent, its sum is the limit of the infinite sequence of the Taylor polynomials. A function may differ from the sum of its Taylor series, even if its Taylor series is convergent. A function is analytic at a point x if it is equal to the sum of its Taylor series in some open interval (or open disk in the complex plane) containing x; this implies that the function is analytic at every point of the interval.
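To make the "approximations become better as n increases" claim concrete, here is a small sketch evaluating the nth Maclaurin polynomial of e^x, whose kth term is x^k / k!; the choice of e^x and the degrees 5 and 10 are illustrative:

```python
import math

def exp_taylor(x, n):
    """nth Taylor (Maclaurin) polynomial of e^x: sum of x^k / k! for k = 0..n."""
    return sum(x**k / math.factorial(k) for k in range(n + 1))

# Approximations of e = exp(1); the error shrinks as the degree n grows,
# since the remainder for e^x behaves like x^(n+1) / (n+1)!
approx5 = exp_taylor(1.0, 5)
approx10 = exp_taylor(1.0, 10)
```

For x = 1 the degree-10 polynomial already agrees with e to about seven decimal places, illustrating Taylor's theorem's factorial decay of the remainder.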
Prepare For Your Placements: https://lastmomenttuitions.com/courses/placementpreparation/
YouTube Channel: https://www.youtube.com/channel/UCGFNZxMqKLsqWERX_N2f08Q
Follow for the latest updates, study tips, and more content!
Course Features
 Lectures 26
 Quizzes 0
 Duration 50 hours
 Skill level All levels
 Language English
 Students 86
 Certificate No
 Assessments Yes