Discussion about math, puzzles, games and fun. Useful symbols: ÷ × ½ √ ∞ ≠ ≤ ≥ ≈ ⇒ ± ∈ Δ θ ∴ ∑ ∫ π -¹ ² ³ °


**Jai Ganesh****Administrator**- Registered: 2005-06-28
- Posts: 44,504

**Angle**

In Euclidean geometry, an angle is the figure formed by two rays, called the sides of the angle, sharing a common endpoint, called the vertex of the angle. Angles formed by two rays lie in the plane that contains the rays. Angles are also formed by the intersection of two planes. These are called dihedral angles. Two intersecting curves may also define an angle, which is the angle of the rays lying tangent to the respective curves at their point of intersection.

Angle is also used to designate the measure of an angle or of a rotation. This measure is the ratio of the length of a circular arc to its radius. In the case of a geometric angle, the arc is centered at the vertex and delimited by the sides. In the case of a rotation, the arc is centered at the center of the rotation and delimited by any other point and its image by the rotation.
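The arc-to-radius ratio described above is exactly the radian measure of an angle; a small Python sketch (the function name `angle_radians` is my own, not from the post):

```python
import math

def angle_radians(arc_length, radius):
    """Measure of an angle in radians: the ratio of the circular arc it
    subtends (centered at the vertex) to the radius of that arc."""
    return arc_length / radius

# A semicircular arc of radius r has length pi*r, so it subtends
# a straight angle: pi radians, i.e. 180 degrees.
straight = angle_radians(math.pi * 2.0, 2.0)
```

Doubling both the arc and the radius leaves the measure unchanged, which is why the ratio is a property of the angle alone.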

It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.


**Jai Ganesh****Administrator**- Registered: 2005-06-28
- Posts: 44,504

**Sine, Cosine, and Tangent**


**Jai Ganesh****Administrator**- Registered: 2005-06-28
- Posts: 44,504

**Pi**

The number π (spelled out as "pi") is a mathematical constant that is the ratio of a circle's circumference to its diameter, approximately equal to 3.14159. The number π appears in many formulas across mathematics and physics. It is an irrational number, meaning that it cannot be expressed exactly as a ratio of two integers, although fractions such as 22/7 are commonly used to approximate it. Consequently, its decimal representation never ends, nor enters a permanently repeating pattern. It is a transcendental number, meaning that it cannot be a solution of an equation involving only sums, products, powers, and integers. The transcendence of π implies that it is impossible to solve the ancient challenge of squaring the circle with a compass and straightedge. The decimal digits of π appear to be randomly distributed, but no proof of this conjecture has been found.

For thousands of years, mathematicians have attempted to extend their understanding of π, sometimes by computing its value to a high degree of accuracy. Ancient civilizations, including the Egyptians and Babylonians, required fairly accurate approximations of π for practical computations. Around 250 BC, the Greek mathematician Archimedes created an algorithm to approximate π with arbitrary accuracy. In the 5th century AD, Chinese mathematicians approximated π to seven digits, while Indian mathematicians made a five-digit approximation, both using geometrical techniques. The first computational formula for π, based on infinite series, was discovered a millennium later. The earliest known use of the Greek letter π to represent the ratio of a circle's circumference to its diameter was by the Welsh mathematician William Jones in 1706.

The invention of calculus soon led to the calculation of hundreds of digits of π, enough for all practical scientific computations. Nevertheless, in the 20th and 21st centuries, mathematicians and computer scientists have pursued new approaches that, when combined with increasing computational power, extended the decimal representation of π to many trillions of digits. These computations are motivated by the development of efficient algorithms to calculate numeric series, as well as the human quest to break records. The extensive computations involved have also been used to test supercomputers.

Because its definition relates to the circle, π is found in many formulae in trigonometry and geometry, especially those concerning circles, ellipses and spheres. It is also found in formulae from other topics in science, such as cosmology, fractals, thermodynamics, mechanics, and electromagnetism. In modern mathematical analysis, it is often instead defined without any reference to geometry; therefore, it also appears in areas having little to do with geometry, such as number theory and statistics. The ubiquity of π makes it one of the most widely known mathematical constants inside and outside of science. Several books devoted to π have been published, and record-setting calculations of the digits of π often result in news headlines.
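One of the simplest infinite-series formulas of the kind alluded to above is the Leibniz series π/4 = 1 − 1/3 + 1/5 − 1/7 + ⋯; a short Python sketch (the function name is my own, and the series converges far too slowly for serious computation, so it is shown only as an illustration):

```python
import math

def pi_leibniz(terms):
    """Partial sum of the Leibniz series: 4 * (1 - 1/3 + 1/5 - 1/7 + ...)."""
    total = 0.0
    for k in range(terms):
        total += (-1) ** k / (2 * k + 1)
    return 4 * total

approx = pi_leibniz(100_000)  # agrees with pi to roughly 4-5 decimal places
```

Modern record-setting computations use far faster-converging series, but the principle (sum more terms, get more digits) is the same.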


**Jai Ganesh****Administrator**- Registered: 2005-06-28
- Posts: 44,504

**Euler Number**

**e (mathematical constant)**

The number e, also known as Euler's number, is a mathematical constant approximately equal to 2.71828 which can be characterized in many ways. It is the base of the natural logarithms. It is the limit of (1 + 1/n)^n as n approaches infinity, an expression that arises in the study of compound interest. It can also be calculated as the sum of the infinite series

e = 1/0! + 1/1! + 1/2! + 1/3! + ⋯

It is also the unique positive number a such that the graph of the function y = a^x has a slope of 1 at x = 0.

The (natural) exponential function f(x) = e^x is the unique function f that equals its own derivative and satisfies the equation f(0) = 1; hence one can also define e as f(1). The natural logarithm, or logarithm to base e, is the inverse function to the natural exponential function. The natural logarithm of a number k > 1 can be defined directly as the area under the curve y = 1/x between x = 1 and x = k, in which case e is the value of k for which this area equals one. There are various other characterizations.

e is sometimes called Euler's number (not to be confused with Euler's constant γ), after the Swiss mathematician Leonhard Euler, or Napier's constant, after John Napier. The constant was discovered by the Swiss mathematician Jacob Bernoulli while studying compound interest.

The number e is of great importance in mathematics, alongside π and i. Its decimal expansion begins

2.71828182845904523536028747135266249775724709…
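Both characterizations above (the compound-interest limit and the factorial series) are easy to check numerically; a hedged Python sketch (function names mine):

```python
import math

def e_limit(n):
    """(1 + 1/n)**n, which approaches e as n grows (compound interest)."""
    return (1 + 1 / n) ** n

def e_series(terms):
    """Partial sum of 1/0! + 1/1! + 1/2! + ..., which converges to e rapidly."""
    return sum(1 / math.factorial(k) for k in range(terms))
```

The series needs only about 20 terms to reach full double precision, while the limit closes in very slowly; this is why the series is the practical route to digits of e.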


**Jai Ganesh****Administrator**- Registered: 2005-06-28
- Posts: 44,504

**Phi - Golden Ratio**

In mathematics, two quantities are in the golden ratio if their ratio is the same as the ratio of their sum to the larger of the two quantities. Expressed algebraically, for quantities a and b with a > b > 0,

(a + b)/a = a/b = φ,

where the Greek letter phi (φ) represents the golden ratio. It is an irrational number that is a solution to the quadratic equation x² = x + 1, with a value of

φ = (1 + √5)/2 = 1.618033988749….

The golden ratio is also called the golden mean or golden section (Latin: sectio aurea). Other names include extreme and mean ratio, medial section, divine proportion (Latin: proportio divina), divine section (Latin: sectio divina), golden proportion, golden cut, and golden number.

Mathematicians since Euclid have studied the properties of the golden ratio, including its appearance in the dimensions of a regular pentagon and in a golden rectangle, which may be cut into a square and a smaller rectangle with the same aspect ratio. The golden ratio has also been used to analyze the proportions of natural objects as well as man-made systems such as financial markets, in some cases based on dubious fits to data. The golden ratio appears in some patterns in nature, including the spiral arrangement of leaves and other parts of vegetation.

Some 20th-century artists and architects, including Le Corbusier and Salvador Dalí, have proportioned their works to approximate the golden ratio, believing this to be aesthetically pleasing. These often appear in the form of the golden rectangle, in which the ratio of the longer side to the shorter is the golden ratio.
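The quadratic characterization x² = x + 1 pins φ down numerically, and ratios of consecutive Fibonacci numbers converge to it; a brief Python check (nothing here is from the post itself):

```python
import math

# Positive root of x**2 = x + 1, i.e. the golden ratio.
phi = (1 + math.sqrt(5)) / 2

def fib_ratio(n):
    """Ratio of consecutive Fibonacci numbers, which converges to phi."""
    a, b = 1, 1
    for _ in range(n):
        a, b = b, a + b
    return b / a
```

The Fibonacci connection mirrors the golden rectangle's decomposition into a square plus a smaller rectangle of the same aspect ratio.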


**Jai Ganesh****Administrator**- Registered: 2005-06-28
- Posts: 44,504

**Solution of a Quadratic Equation**

For a quadratic equation in standard form, the sum of the roots is −b/a and the product of the roots is c/a.

In algebra, a quadratic equation (from Latin quadratus 'square') is any equation that can be rearranged in standard form as

ax² + bx + c = 0,

where x represents an unknown, and a, b, and c represent known numbers, where a ≠ 0. If a = 0, then the equation is linear, not quadratic, as there is no x² term. The numbers a, b, and c are the coefficients of the equation and may be distinguished by calling them, respectively, the quadratic coefficient, the linear coefficient and the constant or free term.

The values of x that satisfy the equation are called solutions of the equation, and roots or zeros of the expression on its left-hand side. A quadratic equation has at most two solutions. If there is only one solution, one says that it is a double root. If all the coefficients are real numbers, there are either two real solutions, or a single real double root, or two complex solutions. A quadratic equation always has two roots, if complex roots are included and a double root is counted twice. A quadratic equation can be factored into an equivalent equation

a(x − r)(x − s) = 0,

where r and s are the solutions for x.

The quadratic formula

x = (−b ± √(b² − 4ac)) / (2a)

expresses the solutions in terms of a, b, and c. Completing the square is one of several ways of deriving it.
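The quadratic formula translates directly into code; a minimal Python sketch using `cmath` so that complex roots are handled too (function name mine):

```python
import cmath

def quadratic_roots(a, b, c):
    """Solutions of a*x**2 + b*x + c = 0 by the quadratic formula (a != 0)."""
    if a == 0:
        raise ValueError("a must be nonzero for a quadratic equation")
    sqrt_disc = cmath.sqrt(b * b - 4 * a * c)  # square root of the discriminant
    return (-b + sqrt_disc) / (2 * a), (-b - sqrt_disc) / (2 * a)

r, s = quadratic_roots(1, -3, 2)  # x**2 - 3x + 2 = (x - 1)(x - 2)
```

The returned roots satisfy r + s = −b/a and r·s = c/a, matching the sum and product of roots stated above.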

Solutions to problems that can be expressed in terms of quadratic equations were known as early as 2000 BC.

Because the quadratic equation involves only one unknown, it is called "univariate". The quadratic equation contains only powers of x that are non-negative integers, and therefore it is a polynomial equation. In particular, it is a second-degree polynomial equation, since the greatest power is two.


**Jai Ganesh****Administrator**- Registered: 2005-06-28
- Posts: 44,504

**Root of unity**

In mathematics, a root of unity, occasionally called a de Moivre number, is any complex number that yields 1 when raised to some positive integer power n. Roots of unity are used in many branches of mathematics, and are especially important in number theory, the theory of group characters, and the discrete Fourier transform.

Roots of unity can be defined in any field. If the characteristic of the field is zero, the roots are complex numbers that are also algebraic integers. For fields with a positive characteristic, the roots belong to a finite field, and, conversely, every nonzero element of a finite field is a root of unity. Any algebraically closed field contains exactly n nth roots of unity, except when n is a multiple of the (positive) characteristic of the field.

**General definition**

An nth root of unity, where n is a positive integer, is a number z satisfying the equation

z^n = 1.

Unless otherwise specified, the roots of unity may be taken to be complex numbers (including the number 1, and the number −1 if n is even, which are complex with a zero imaginary part), and in this case, the nth roots of unity are

e^(2kπi/n) = cos(2kπ/n) + i sin(2kπ/n),  k = 0, 1, …, n − 1.

However, the defining equation of roots of unity is meaningful over any field (and even over any ring) F, and this allows considering roots of unity in F. Whichever is the field F, the roots of unity in F are either complex numbers, if the characteristic of F is 0, or, otherwise, belong to a finite field. Conversely, every nonzero element in a finite field is a root of unity in that field. See Root of unity modulo n and Finite field for further details.

An nth root of unity is said to be primitive if it is not an mth root of unity for some smaller m, that is if

z^n = 1 and z^m ≠ 1 for m = 1, 2, …, n − 1.

If n is a prime number, then all nth roots of unity, except 1, are primitive.

In the above formula in terms of exponential and trigonometric functions, the primitive nth roots of unity are those for which k and n are coprime integers.

The remaining sections of this article concern complex roots of unity.

**Trigonometric expression**

De Moivre's formula, which is valid for all real x and integers n, is

(cos x + i sin x)^n = cos nx + i sin nx.

Setting x = 2π/n gives a primitive nth root of unity: one gets

(cos(2π/n) + i sin(2π/n))^n = cos 2π + i sin 2π = 1,

but

(cos(2π/n) + i sin(2π/n))^k = cos(2kπ/n) + i sin(2kπ/n) ≠ 1

for k = 1, 2, …, n − 1. In other words, cos(2π/n) + i sin(2π/n) is a primitive nth root of unity.

This formula shows that in the complex plane the nth roots of unity are at the vertices of a regular n-sided polygon inscribed in the unit circle, with one vertex at 1 (see the plots for n = 3 and n = 5 on the right.) This geometric fact accounts for the term "cyclotomic" in such phrases as cyclotomic field and cyclotomic polynomial; it is from the Greek roots "cyclo" (circle) plus "tomos" (cut, divide).
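The polygon picture is easy to reproduce; a short Python sketch (function names mine) generating the nth roots and testing primitivity via the gcd criterion:

```python
import cmath
from math import gcd

def roots_of_unity(n):
    """All n complex solutions of z**n = 1: exp(2*pi*i*k/n) for k = 0..n-1."""
    return [cmath.exp(2j * cmath.pi * k / n) for k in range(n)]

def is_primitive(k, n):
    """exp(2*pi*i*k/n) is a primitive nth root of unity iff gcd(k, n) == 1."""
    return gcd(k, n) == 1
```

Every root has absolute value 1, so plotting them shows the vertices of the regular n-gon inscribed in the unit circle.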

Euler's formula

e^(ix) = cos x + i sin x,

which is valid for all real x, can be used to put the formula for the nth roots of unity into the form

e^(2kπi/n),  k = 0, 1, …, n − 1.

It follows from the discussion in the previous section that this is a primitive nth root if and only if the fraction k/n is in lowest terms; that is, that k and n are coprime. An irrational number that can be expressed as the real part of a root of unity, that is, as cos(2kπ/n), is called a trigonometric number.


**Jai Ganesh****Administrator**- Registered: 2005-06-28
- Posts: 44,504

**Pascal's Triangle**

In mathematics, Pascal's triangle is a triangular array of the binomial coefficients that arises in probability theory, combinatorics, and algebra. In much of the Western world, it is named after the French mathematician Blaise Pascal, although other mathematicians studied it centuries before him in India, Persia, China, Germany, and Italy.

The rows of Pascal's triangle are conventionally enumerated starting with row n=0 at the top (the 0th row). The entries in each row are numbered from the left beginning with k=0 and are usually staggered relative to the numbers in the adjacent rows. The triangle may be constructed in the following manner: In row 0 (the topmost row), there is a unique nonzero entry 1. Each entry of each subsequent row is constructed by adding the number above and to the left with the number above and to the right, treating blank entries as 0. For example, the initial number in the first (or any other) row is 1 (the sum of 0 and 1), whereas the numbers 1 and 3 in the third row are added to produce the number 4 in the fourth row.
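The additive construction just described can be sketched directly in Python (function name mine):

```python
import math

def pascal_rows(n):
    """First n rows of Pascal's triangle; each interior entry is the sum
    of the two entries above it, and blank neighbours count as 0."""
    rows = [[1]]
    for _ in range(n - 1):
        prev = rows[-1]
        rows.append([1] + [prev[i] + prev[i + 1] for i in range(len(prev) - 1)] + [1])
    return rows
```

Entry k of row n equals the binomial coefficient C(n, k), which is why the triangle appears throughout probability and combinatorics.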



**Jai Ganesh****Administrator**- Registered: 2005-06-28
- Posts: 44,504

**Prime Numbers - Advanced Concepts**


In mathematics, a Mersenne prime is a prime number that is one less than a power of two. That is, it is a prime number of the form 2^n − 1 for some integer n. They are named after Marin Mersenne, a French Minim friar, who studied them in the early 17th century. If n is a composite number then so is 2^n − 1. Therefore, an equivalent definition of the Mersenne primes is that they are the prime numbers of the form 2^p − 1 for some prime p.

The exponents n which give Mersenne primes are 2, 3, 5, 7, 13, 17, 19, 31, … and the resulting Mersenne primes are 3, 7, 31, 127, 8191, 131071, 524287, 2147483647, ….

Numbers of the form 2^n − 1 without the primality requirement may be called Mersenne numbers. Sometimes, however, Mersenne numbers are defined to have the additional requirement that n be prime. The smallest composite Mersenne number with prime exponent n is 2^11 − 1 = 2047 = 23 × 89.

Mersenne primes were studied in antiquity because of their close connection to perfect numbers: the Euclid–Euler theorem asserts a one-to-one correspondence between even perfect numbers and Mersenne primes. Many of the largest known primes are Mersenne primes because Mersenne numbers are easier to check for primality.

As of October 2020, 51 Mersenne primes are known. The largest known prime number, 2^82,589,933 − 1, is a Mersenne prime.[1] Since 1997, all newly found Mersenne primes have been discovered by the Great Internet Mersenne Prime Search, a distributed computing project. In December 2020, a major milestone in the project was passed after all exponents below 100 million were checked at least once.
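The reason Mersenne numbers are "easier to check for primality" is the Lucas–Lehmer test, the test behind GIMPS; a minimal Python sketch of it (not the project's optimized code):

```python
def lucas_lehmer(p):
    """For an odd prime p, 2**p - 1 is prime iff the Lucas-Lehmer residue
    s ends at 0 after p - 2 squarings modulo 2**p - 1. p = 2 is special-cased."""
    if p == 2:
        return True  # 2**2 - 1 = 3 is prime
    m = 2 ** p - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0
```

Running it over small prime exponents reproduces the list above: 2, 3, 5, 7, 13, … yield Mersenne primes, while 11 does not (2^11 − 1 = 2047 = 23 × 89).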


**Jai Ganesh****Administrator**- Registered: 2005-06-28
- Posts: 44,504

**Derivative**

In mathematics, the derivative of a function of a real variable measures the sensitivity to change of the function value (output value) with respect to a change in its argument (input value). Derivatives are a fundamental tool of calculus. For example, the derivative of the position of a moving object with respect to time is the object's velocity: this measures how quickly the position of the object changes when time advances.

The derivative of a function of a single variable at a chosen input value, when it exists, is the slope of the tangent line to the graph of the function at that point. The tangent line is the best linear approximation of the function near that input value. For this reason, the derivative is often described as the "instantaneous rate of change", the ratio of the instantaneous change in the dependent variable to that of the independent variable.
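That "instantaneous rate of change" can be estimated by the slope of a short secant; a hedged Python sketch (function name and step size are my own choices):

```python
import math

def derivative(f, x, h=1e-6):
    """Central-difference estimate of f'(x): the slope of the secant
    through (x - h, f(x - h)) and (x + h, f(x + h))."""
    return (f(x + h) - f(x - h)) / (2 * h)
```

Shrinking h tightens the secant toward the tangent line described above, until floating-point round-off takes over.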

Derivatives can be generalized to functions of several real variables. In this generalization, the derivative is reinterpreted as a linear transformation whose graph is (after an appropriate translation) the best linear approximation to the graph of the original function. The Jacobian matrix is the matrix that represents this linear transformation with respect to the basis given by the choice of independent and dependent variables. It can be calculated in terms of the partial derivatives with respect to the independent variables. For a real-valued function of several variables, the Jacobian matrix reduces to the gradient vector.

The process of finding a derivative is called differentiation. The reverse process is called antidifferentiation. The fundamental theorem of calculus relates antidifferentiation with integration. Differentiation and integration constitute the two fundamental operations in single-variable calculus.


**Jai Ganesh****Administrator**- Registered: 2005-06-28
- Posts: 44,504

**Limit (mathematics)**

In mathematics, a limit is the value that a function (or sequence) approaches as the input (or index) approaches some value. Limits are essential to calculus and mathematical analysis, and are used to define continuity, derivatives, and integrals.

The concept of a limit of a sequence is further generalized to the concept of a limit of a topological net, and is closely related to limit and direct limit in category theory.

In formulas, a limit of a function is usually written as

lim_{x→c} f(x) = L

(although a few authors may use "Lt" instead of "lim") and is read as "the limit of f of x as x approaches c equals L". The fact that a function f approaches the limit L as x approaches c is sometimes denoted by a right arrow (→), as in

f(x) → L as x → c,

which reads "f of x tends to L as x tends to c".
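A limit can be probed numerically by evaluating the function ever closer to c; a small Python illustration using the classic limit sin(x)/x → 1 as x → 0 (the function need not even be defined at c itself):

```python
import math

def approach(f, c, steps=6):
    """Evaluate f at points c + 0.1, c + 0.01, ... closing in on c from the right."""
    return [f(c + 10.0 ** -k) for k in range(1, steps + 1)]

values = approach(lambda x: math.sin(x) / x, 0.0)
```

The values creep toward 1 even though sin(x)/x is undefined at x = 0, which is exactly what the limit notation asserts.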


**Jai Ganesh****Administrator**- Registered: 2005-06-28
- Posts: 44,504

**Integral Calculus**

In mathematics, an integral assigns numbers to functions in a way that describes displacement, area, volume, and other concepts that arise by combining infinitesimal data. The process of finding integrals is called integration. Along with differentiation, integration is a fundamental, essential operation of calculus, and serves as a tool to solve problems in mathematics and physics involving the area of an arbitrary shape, the length of a curve, and the volume of a solid, among others.

The integrals enumerated here are those termed definite integrals, which can be interpreted as the signed area of the region in the plane that is bounded by the graph of a given function between two points in the real line. Conventionally, areas above the horizontal axis of the plane are positive while areas below are negative. Integrals also refer to the concept of an antiderivative, a function whose derivative is the given function. In this case, they are called indefinite integrals. The fundamental theorem of calculus relates definite integrals with differentiation and provides a method to compute the definite integral of a function when its antiderivative is known.
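The "signed area" reading of a definite integral suggests a direct numerical check: chop [a, b] into thin slabs and add their areas, as in this Python sketch (a midpoint Riemann sum; the function name is mine):

```python
import math

def riemann_sum(f, a, b, n=100_000):
    """Midpoint Riemann sum: total signed area of n thin rectangles under f."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h
```

Refining n reproduces textbook values: the area under x² on [0, 1] tends to 1/3, and the area under sin on [0, π] tends to 2.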

Although methods of calculating areas and volumes dated from ancient Greek mathematics, the principles of integration were formulated independently by Isaac Newton and Gottfried Wilhelm Leibniz in the late 17th century, who thought of the area under a curve as an infinite sum of rectangles of infinitesimal width. Bernhard Riemann later gave a rigorous definition of integrals, which is based on a limiting procedure that approximates the area of a curvilinear region by breaking the region into infinitesimally thin vertical slabs. In the early 20th century, Henri Lebesgue generalized Riemann's formulation by introducing what is now referred to as the Lebesgue integral; it is more robust than Riemann's in the sense that a wider class of functions are Lebesgue-integrable.

Integrals may be generalized depending on the type of the function as well as the domain over which the integration is performed. For example, a line integral is defined for functions of two or more variables, and the interval of integration is replaced by a curve connecting the two endpoints of the interval. In a surface integral, the curve is replaced by a piece of a surface in three-dimensional space.


**Jai Ganesh****Administrator**- Registered: 2005-06-28
- Posts: 44,504


**Volume integral**

In mathematics (particularly multivariable calculus), a volume integral (∭) refers to an integral over a 3-dimensional domain; that is, it is a special case of multiple integrals. Volume integrals are especially important in physics for many applications, for example, to calculate flux densities.
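As a toy example of integrating over a 3-dimensional domain, one can estimate the volume of the unit ball by Monte Carlo sampling of the enclosing cube; a Python sketch (everything here is my own illustration, not from the post):

```python
import random

def unit_ball_volume(samples=200_000, seed=1):
    """Monte Carlo estimate of the triple integral of 1 over x**2+y**2+z**2 <= 1:
    the fraction of random points of the cube [-1, 1]^3 landing inside the
    ball, times the cube's volume 8."""
    rng = random.Random(seed)
    inside = 0
    for _ in range(samples):
        x, y, z = (rng.uniform(-1, 1) for _ in range(3))
        if x * x + y * y + z * z <= 1:
            inside += 1
    return 8 * inside / samples
```

The true value is 4π/3 ≈ 4.18879; the estimate wobbles around it with error shrinking like 1/√samples.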


**Jai Ganesh****Administrator**- Registered: 2005-06-28
- Posts: 44,504

**Summation**

In mathematics, summation is the addition of a sequence of any kind of numbers, called addends or summands; the result is their sum or total. Beside numbers, other types of values can be summed as well: functions, vectors, matrices, polynomials and, in general, elements of any type of mathematical objects on which an operation denoted "+" is defined.

Summations of infinite sequences are called series. They involve the concept of limit, and are not considered here.

The summation of an explicit sequence is denoted as a succession of additions. For example, summation of [1, 2, 4, 2] is denoted 1 + 2 + 4 + 2, and results in 9, that is, 1 + 2 + 4 + 2 = 9. Because addition is associative and commutative, there is no need of parentheses, and the result is the same irrespective of the order of the summands. Summation of a sequence of only one element results in this element itself. Summation of an empty sequence (a sequence with no elements), by convention, results in 0.

Very often, the elements of a sequence are defined, through a regular pattern, as a function of their place in the sequence. For simple patterns, summation of long sequences may be represented with most summands replaced by ellipses. For example, summation of the first 100 natural numbers may be written as 1 + 2 + 3 + 4 + ⋯ + 99 + 100. Otherwise, summation is denoted by using Σ notation, where Σ is an enlarged capital Greek letter sigma. For example, the sum of the first n natural numbers can be denoted as

∑_{i=1}^{n} i.

For long summations, and summations of variable length (defined with ellipses or Σ notation), it is a common problem to find closed-form expressions for the result. For example,

∑_{i=1}^{n} i = n(n + 1)/2.
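The closed-form problem mentioned above has the classic answer n(n + 1)/2 for the first n natural numbers; a two-line Python check (function names mine):

```python
def sum_first(n):
    """Direct summation 1 + 2 + ... + n."""
    return sum(range(1, n + 1))

def closed_form(n):
    """Gauss's closed form n*(n+1)/2 for the same sum."""
    return n * (n + 1) // 2
```

The closed form evaluates in constant time regardless of n, which is precisely the point of seeking such expressions.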


**Jai Ganesh****Administrator**- Registered: 2005-06-28
- Posts: 44,504

**Differential Equation**

In mathematics, a differential equation is an equation that relates one or more unknown functions and their derivatives. In applications, the functions generally represent physical quantities, the derivatives represent their rates of change, and the differential equation defines a relationship between the two. Such relations are common; therefore, differential equations play a prominent role in many disciplines including engineering, physics, economics, and biology.

Mainly the study of differential equations consists of the study of their solutions (the set of functions that satisfy each equation), and of the properties of their solutions. Only the simplest differential equations are solvable by explicit formulas; however, many properties of solutions of a given differential equation may be determined without computing them exactly.

Often when a closed-form expression for the solutions is not available, solutions may be approximated numerically using computers. The theory of dynamical systems puts emphasis on qualitative analysis of systems described by differential equations, while many numerical methods have been developed to determine solutions with a given degree of accuracy.

**Example**

In classical mechanics, the motion of a body is described by its position and velocity as the time value varies. Newton's laws allow these variables to be expressed dynamically (given the position, velocity, acceleration and various forces acting on the body) as a differential equation for the unknown position of the body as a function of time.

In some cases, this differential equation (called an equation of motion) may be solved explicitly.

An example of modeling a real-world problem using differential equations is the determination of the velocity of a ball falling through the air, considering only gravity and air resistance. The ball's acceleration towards the ground is the acceleration due to gravity minus the deceleration due to air resistance. Gravity is considered constant, and air resistance may be modeled as proportional to the ball's velocity. This means that the ball's acceleration, which is a derivative of its velocity, depends on the velocity (and the velocity depends on time). Finding the velocity as a function of time involves solving a differential equation and verifying its validity.
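The falling-ball model above, acceleration equal to gravity minus a drag term proportional to velocity, can be integrated step by step with Euler's method; a Python sketch (the constants g and k are illustrative, not from the post):

```python
def falling_velocity(t_end, g=9.81, k=0.5, dt=1e-3):
    """Euler's method for v' = g - k*v with v(0) = 0: at each small time step,
    the velocity grows by (gravity minus drag) times dt."""
    v, t = 0.0, 0.0
    while t < t_end:
        v += (g - k * v) * dt
        t += dt
    return v
```

The exact solution is v(t) = (g/k)(1 − e^(−kt)); as t grows, the numerical velocity levels off at the terminal value g/k, just as the physics predicts.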


**Jai Ganesh****Administrator**- Registered: 2005-06-28
- Posts: 44,504

**Separation of variables**

In mathematics, separation of variables (also known as the Fourier method) is any of several methods for solving ordinary and partial differential equations, in which algebra allows one to rewrite an equation so that each of two variables occurs on a different side of the equation.

**Ordinary differential equations (ODE)**

Suppose a differential equation can be written in the form

(d/dx) f(x) = g(x) h(f(x)),

which we can write more simply by letting y = f(x):

dy/dx = g(x) h(y).

As long as h(y) ≠ 0, we can rearrange terms to obtain

dy/h(y) = g(x) dx,

so that the two variables x and y have been separated. dx (and dy) can be viewed, at a simple level, as just a convenient notation, which provides a handy mnemonic aid for assisting with manipulations. A formal definition of dx as a differential (infinitesimal) is somewhat advanced.
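As a concrete check of the method, take g(x) = x and h(y) = y: separating dy/y = x dx and integrating gives y = e^(x²/2) for y(0) = 1. The Python sketch below (names mine) marches the original equation forward numerically and compares with that separated-and-integrated solution:

```python
import math

def march(x_end, dx=1e-4):
    """Euler steps for dy/dx = x*y starting from y(0) = 1."""
    x, y = 0.0, 1.0
    while x < x_end:
        y += x * y * dx
        x += dx
    return y

exact_at_1 = math.exp(0.5)  # the separated solution e**(x**2/2) at x = 1
```

Agreement to several decimal places confirms that the separated form integrates to the same solution the original equation defines.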


**Jai Ganesh****Administrator**- Registered: 2005-06-28
- Posts: 44,504

**Linear differential equation**

In mathematics, a linear differential equation is a differential equation that is defined by a linear polynomial in the unknown function and its derivatives, that is an equation of the form

a_0(x)y + a_1(x)y' + a_2(x)y'' + ⋯ + a_n(x)y^(n) = b(x),

where a_0(x), …, a_n(x) and b(x) are arbitrary differentiable functions that do not need to be linear, and y', …, y^(n) are the successive derivatives of an unknown function y of the variable x.

Such an equation is an ordinary differential equation (ODE). A linear differential equation may also be a linear partial differential equation (PDE), if the unknown function depends on several variables, and the derivatives that appear in the equation are partial derivatives.

A linear differential equation or a system of linear equations such that the associated homogeneous equations have constant coefficients may be solved by quadrature, which means that the solutions may be expressed in terms of integrals. This is also true for a linear equation of order one, with non-constant coefficients. An equation of order two or higher with non-constant coefficients cannot, in general, be solved by quadrature. For order two, Kovacic's algorithm allows deciding whether there are solutions in terms of integrals, and computing them if any.

The solutions of homogeneous linear differential equations with polynomial coefficients are called holonomic functions. This class of functions is stable under sums, products, differentiation, and integration, and contains many usual functions and special functions such as the exponential function, logarithm, sine, cosine, inverse trigonometric functions, error function, Bessel functions and hypergeometric functions. Their representation by the defining differential equation and initial conditions makes most operations of calculus on these functions algorithmic, such as computation of antiderivatives, limits, asymptotic expansion, and numerical evaluation to any precision, with a certified error bound.


**Jai Ganesh****Administrator**- Registered: 2005-06-28
- Posts: 44,504

**Differential Equations - Types**

Differential equations can be divided into several types. Apart from describing the properties of the equation itself, these classes of differential equations can help inform the choice of approach to a solution. Commonly used distinctions include whether the equation is ordinary or partial, linear or non-linear, and homogeneous or heterogeneous. This list is far from exhaustive; there are many other properties and subclasses of differential equations which can be very useful in specific contexts.

**Ordinary differential equations**

An ordinary differential equation (ODE) is an equation containing an unknown function of one real or complex variable x, its derivatives, and some given functions of x. The unknown function is generally represented by a variable (often denoted y), which, therefore, depends on x. Thus x is often called the independent variable of the equation. The term "ordinary" is used in contrast with the term partial differential equation, which may be with respect to more than one independent variable.

Linear differential equations are the differential equations that are linear in the unknown function and its derivatives. Their theory is well developed, and in many cases one may express their solutions in terms of integrals.

Most ODEs that are encountered in physics are linear. Therefore, most special functions may be defined as solutions of linear differential equations.

As, in general, the solutions of a differential equation cannot be expressed by a closed-form expression, numerical methods are commonly used for solving differential equations on a computer.

**Partial differential equations**

A partial differential equation (PDE) is a differential equation that contains unknown multivariable functions and their partial derivatives. (This is in contrast to ordinary differential equations, which deal with functions of a single variable and their derivatives.) PDEs are used to formulate problems involving functions of several variables, and are either solved in closed form, or used to create a relevant computer model.

PDEs can be used to describe a wide variety of phenomena in nature such as sound, heat, electrostatics, electrodynamics, fluid flow, elasticity, or quantum mechanics. These seemingly distinct physical phenomena can be formalized similarly in terms of PDEs. Just as ordinary differential equations often model one-dimensional dynamical systems, partial differential equations often model multidimensional systems. Stochastic partial differential equations generalize partial differential equations for modeling randomness.

**Non-linear differential equations**

A non-linear differential equation is a differential equation that is not a linear equation in the unknown function and its derivatives (the linearity or non-linearity in the arguments of the function are not considered here). There are very few methods of solving nonlinear differential equations exactly; those that are known typically depend on the equation having particular symmetries. Nonlinear differential equations can exhibit very complicated behaviour over extended time intervals, characteristic of chaos. Even the fundamental questions of existence, uniqueness, and extendability of solutions for nonlinear differential equations, and well-posedness of initial and boundary value problems for nonlinear PDEs are hard problems and their resolution in special cases is considered to be a significant advance in the mathematical theory (cf. Navier–Stokes existence and smoothness). However, if the differential equation is a correctly formulated representation of a meaningful physical process, then one expects it to have a solution.

Linear differential equations frequently appear as approximations to nonlinear equations. These approximations are only valid under restricted conditions. For example, the harmonic oscillator equation is an approximation to the nonlinear pendulum equation that is valid for small amplitude oscillations (see below).

**Equation order**

Differential equations are described by their order, determined by the term with the highest-order derivative. An equation containing only first derivatives is a first-order differential equation, an equation containing the second derivative is a second-order differential equation, and so on. Differential equations that describe natural phenomena almost always have only first- and second-order derivatives in them, but there are some exceptions, such as the thin film equation, which is a fourth-order partial differential equation.

**Examples**

In the first group of examples u is an unknown function of x, and c and ω are constants that are supposed to be known. Two broad classifications of both ordinary and partial differential equations consist of distinguishing between linear and nonlinear differential equations, and between homogeneous differential equations and heterogeneous ones.

Heterogeneous first-order linear constant coefficient ordinary differential equation:

du/dx = cu + x²

Homogeneous second-order linear ordinary differential equation:

d²u/dx² − x du/dx + u = 0

Homogeneous second-order linear constant coefficient ordinary differential equation describing the harmonic oscillator:

d²u/dx² + ω²u = 0

Heterogeneous first-order nonlinear ordinary differential equation:

du/dx = u² + 4

Second-order nonlinear (due to the sine function) ordinary differential equation describing the motion of a pendulum of length L, with g the acceleration due to gravity:

L d²u/dx² + g sin(u) = 0

In the next group of examples, the unknown function u depends on two variables x and t or x and y.

Homogeneous first-order linear partial differential equation:

∂u/∂t + t ∂u/∂x = 0

Homogeneous second-order linear constant coefficient partial differential equation of elliptic type, the Laplace equation:

∂²u/∂x² + ∂²u/∂y² = 0

Homogeneous third-order non-linear partial differential equation, the Korteweg–de Vries equation:

∂u/∂t = 6u ∂u/∂x − ∂³u/∂x³
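As an illustrative sanity check (not part of the original text), one can verify numerically that u(x) = sin(ωx) satisfies the harmonic oscillator equation u'' + ω²u = 0, using a central finite-difference approximation of the second derivative:

```python
import math

def second_derivative(f, x, h=1e-4):
    """Central-difference approximation of f''(x)."""
    return (f(x + h) - 2 * f(x) + f(x - h)) / (h * h)

omega = 2.0
u = lambda x: math.sin(omega * x)  # candidate solution u(x) = sin(ωx)

# Residual of u'' + ω²u at an arbitrary point; it should be near zero.
residual = second_derivative(u, 0.7) + omega**2 * u(0.7)
```

The residual is dominated by truncation and rounding error, so it is tiny but not exactly zero.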

Homogeneous Differential Equations

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

**Jai Ganesh****Administrator**- Registered: 2005-06-28
- Posts: 44,504

**Partial Derivative**

In mathematics, a partial derivative of a function of several variables is its derivative with respect to one of those variables, with the others held constant (as opposed to the total derivative, in which all variables are allowed to vary). Partial derivatives are used in vector calculus and differential geometry.

The partial derivative of a function f(x, y, …) with respect to the variable x is variously denoted by ∂f/∂x, fₓ, ∂ₓf, or Dₓf. It can be thought of as the rate of change of the function in the x-direction.

Sometimes, for z = f(x, y, …), the partial derivative of z with respect to x is denoted as ∂z/∂x. Since a partial derivative generally has the same arguments as the original function, its functional dependence is sometimes explicitly signified by the notation, such as in fₓ(x, y, …) or ∂f(x, y, …)/∂x.

The symbol used to denote partial derivatives is ∂. One of the first known uses of this symbol in mathematics is by the Marquis de Condorcet from 1770, who used it for partial differences. The modern partial derivative notation was created by Adrien-Marie Legendre (1786), although he later abandoned it; Carl Gustav Jacob Jacobi reintroduced the symbol in 1841.
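As a small illustration, a partial derivative can be approximated numerically by varying one argument while holding the others constant. The function f and the step size below are illustrative choices:

```python
def partial_x(f, x, y, h=1e-6):
    """Central-difference approximation of ∂f/∂x with y held constant."""
    return (f(x + h, y) - f(x - h, y)) / (2 * h)

f = lambda x, y: x**2 * y + y**3  # an illustrative function of two variables

# Exact partial derivative is ∂f/∂x = 2xy, so at (3, 2) it equals 12.
approx = partial_x(f, 3.0, 2.0)
```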

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

**Jai Ganesh****Administrator**- Registered: 2005-06-28
- Posts: 44,504

**Matrix**

A matrix is a rectangular array of numbers (or other mathematical objects), called the entries of the matrix. Matrices are subject to standard operations such as addition and multiplication. Most commonly, a matrix over a field F is a rectangular array of elements of F. A real matrix and a complex matrix are matrices whose entries are respectively real numbers or complex numbers. More general types of entries are discussed below. For instance, this is a real matrix:

A = [−1.3 0.6; 20.4 5.5; 9.7 −6.2]

(rows are separated by semicolons, so A has three rows and two columns)

The numbers, symbols, or expressions in the matrix are called its entries or its elements. The horizontal and vertical lines of entries in a matrix are called rows and columns, respectively.

**Size**

The size of a matrix is defined by the number of rows and columns it contains. There is no limit to the numbers of rows and columns a matrix (in the usual sense) can have as long as they are positive integers. A matrix with m rows and n columns is called an m × n matrix, or m-by-n matrix, while m and n are called its dimensions. For example, the matrix A above is a 3 × 2 matrix.

Matrices with a single row are called row vectors, and those with a single column are called column vectors. A matrix with the same number of rows and columns is called a square matrix. A matrix with an infinite number of rows or columns (or both) is called an infinite matrix. In some contexts, such as computer algebra programs, it is useful to consider a matrix with no rows or no columns, called an empty matrix.

**Overview of matrix sizes**

| Name | Size | Example | Description |
| --- | --- | --- | --- |
| Row vector | 1 × n | [3 7 2] | A matrix with one row, sometimes used to represent a vector |
| Column vector | n × 1 | [4; 1; 8] | A matrix with one column, sometimes used to represent a vector |
| Square matrix | n × n | [9 13 5; 1 11 7; 2 6 3] | A matrix with the same number of rows and columns, sometimes used to represent a linear transformation from a vector space to itself, such as reflection, rotation, or shearing |
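A minimal Python sketch of these sizes, storing a matrix as a list of row lists (the helper name `size` is an illustrative choice):

```python
def size(matrix):
    """Return (rows, columns) of a matrix stored as a list of row lists."""
    return (len(matrix), len(matrix[0]) if matrix else 0)

row_vector = [[3, 7, 2]]                       # 1 × 3
column_vector = [[4], [1], [8]]                # 3 × 1
square = [[9, 13, 5], [1, 11, 7], [2, 6, 3]]   # 3 × 3

assert size(row_vector) == (1, 3)
assert size(column_vector) == (3, 1)
assert size(square) == (3, 3)
```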

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

**Jai Ganesh****Administrator**- Registered: 2005-06-28
- Posts: 44,504

**Matrix - Part II**

In linear algebra, a minor of a matrix A is the determinant of some smaller square matrix, cut down from A by removing one or more of its rows and columns. Minors obtained by removing just one row and one column from square matrices (first minors) are required for calculating matrix cofactors, which in turn are useful for computing both the determinant and inverse of square matrices.
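A small sketch of a first minor in Python (the example matrix and helper names are illustrative; indices are 0-based):

```python
def delete_row_col(a, i, j):
    """Submatrix of a with row i and column j removed (0-based indices)."""
    return [[a[r][c] for c in range(len(a)) if c != j]
            for r in range(len(a)) if r != i]

def det2(a):
    """Determinant of a 2 × 2 matrix."""
    return a[0][0] * a[1][1] - a[0][1] * a[1][0]

a = [[1, 4, 7],
     [3, 0, 5],
     [-1, 9, 11]]

# First minor M(0, 1): delete row 0 and column 1, then take the determinant.
minor_01 = det2(delete_row_col(a, 0, 1))  # det [[3, 5], [-1, 11]] = 38
```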

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

**Jai Ganesh****Administrator**- Registered: 2005-06-28
- Posts: 44,504

**Determinants**

In mathematics, the determinant is a scalar value that is a function of the entries of a square matrix. It allows characterizing some properties of the matrix and the linear map represented by the matrix. In particular, the determinant is nonzero if and only if the matrix is invertible and the linear map represented by the matrix is an isomorphism. The determinant of a product of matrices is the product of their determinants (the preceding property is a corollary of this one). The determinant of a matrix A is denoted det(A), det A, or |A|.

In the case of a 2 × 2 matrix, the determinant can be defined as

det [a b; c d] = ad − bc

Similarly, for a 3 × 3 matrix A with rows (a, b, c), (d, e, f), (g, h, i), its determinant is

det A = a · det [e f; h i] − b · det [d f; g i] + c · det [d e; g h] = a(ei − fh) − b(di − fg) + c(dh − eg)

Each determinant of a 2 × 2 matrix in this equation is called a minor of the matrix A. This procedure can be extended to give a recursive definition for the determinant of an n × n matrix, known as Laplace expansion.

Determinants occur throughout mathematics. For example, a matrix is often used to represent the coefficients in a system of linear equations, and determinants can be used to solve these equations (Cramer's rule), although other methods of solution are computationally much more efficient. Determinants are used for defining the characteristic polynomial of a matrix, whose roots are the eigenvalues. In geometry, the signed n-dimensional volume of a n-dimensional parallelepiped is expressed by a determinant. This is used in calculus with exterior differential forms and the Jacobian determinant, in particular for changes of variables in multiple integrals.
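The recursive Laplace expansion described above can be sketched in a few lines of Python (function name and examples are illustrative; for large matrices this is far slower than Gaussian elimination):

```python
def det(a):
    """Determinant by Laplace expansion along the first row."""
    n = len(a)
    if n == 1:
        return a[0][0]
    total = 0
    for j in range(n):
        sub = [row[:j] + row[j + 1:] for row in a[1:]]  # minor: drop row 0, column j
        total += (-1) ** j * a[0][j] * det(sub)
    return total

assert det([[1, 2], [3, 4]]) == -2   # ad − bc = 4 − 6
```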

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

**Jai Ganesh****Administrator**- Registered: 2005-06-28
- Posts: 44,504

**Vectors**

In mathematics and physics, vector is a term that refers colloquially to some quantities that cannot be expressed by a single number, or to elements of some vector spaces.

Historically, vectors were introduced in geometry and physics (typically in mechanics) for quantities that have both a magnitude and a direction, such as displacements, forces and velocities. Such quantities are represented by geometric vectors in the same way as distances, masses and time are represented by real numbers.

The term vector is also used, in some contexts, for tuples, which are finite sequences of numbers of a fixed length.

Both geometric vectors and tuples can be added and scaled, and these vector operations led to the concept of a vector space, which is a set equipped with a vector addition and a scalar multiplication that satisfy some axioms generalizing the main properties of operations on the above sorts of vectors. A vector space formed by geometric vectors is called a Euclidean vector space, and a vector space formed by tuples is called a coordinate vector space.

There are many vector spaces that are considered in mathematics, such as extension fields, polynomial rings, algebras and function spaces. The term vector is generally not used for elements of these vector spaces, and is generally reserved for geometric vectors, tuples, and elements of unspecified vector spaces (for example, when discussing general properties of vector spaces).

**Euclidean vector**

In mathematics, physics and engineering, a Euclidean vector or simply a vector (sometimes called a geometric vector or spatial vector) is a geometric object that has magnitude (or length) and direction. Vectors can be added to other vectors according to vector algebra. A Euclidean vector is frequently represented by a directed line segment, or graphically as an arrow connecting an initial point A with a terminal point B, and denoted by →AB.

A vector is what is needed to "carry" the point A to the point B; the Latin word vector means "carrier". It was first used by 18th-century astronomers investigating planetary revolution around the Sun. The magnitude of the vector is the distance between the two points, and the direction refers to the direction of displacement from A to B. Many algebraic operations on real numbers such as addition, subtraction, multiplication, and negation have close analogues for vectors, operations which obey the familiar algebraic laws of commutativity, associativity, and distributivity. These operations and associated laws qualify Euclidean vectors as an example of the more generalized concept of vectors defined simply as elements of a vector space.

Vectors play an important role in physics: the velocity and acceleration of a moving object and the forces acting on it can all be described with vectors. Many other physical quantities can be usefully thought of as vectors. Although most of them do not represent distances (except, for example, position or displacement), their magnitude and direction can still be represented by the length and direction of an arrow. The mathematical representation of a physical vector depends on the coordinate system used to describe it. Other vector-like objects that describe physical quantities and transform in a similar way under changes of the coordinate system include pseudovectors and tensors.
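A minimal sketch of the vector operations described above, using plain Python tuples for 3-dimensional vectors (all names are illustrative):

```python
import math

def add(u, v):
    """Componentwise vector addition."""
    return tuple(a + b for a, b in zip(u, v))

def scale(c, v):
    """Multiply a vector by a scalar c."""
    return tuple(c * a for a in v)

def magnitude(v):
    """Euclidean length of the vector."""
    return math.sqrt(sum(a * a for a in v))

u = (1.0, 2.0, 2.0)
v = (3.0, 0.0, 4.0)
w = add(u, scale(2.0, v))   # componentwise: (1 + 6, 2 + 0, 2 + 8)
length = magnitude(u)       # √(1 + 4 + 4) = 3.0
```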

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

**Jai Ganesh****Administrator**- Registered: 2005-06-28
- Posts: 44,504

**Hermitian matrix**

In mathematics, a Hermitian matrix (or self-adjoint matrix) is a complex square matrix that is equal to its own conjugate transpose, that is, the element in the i-th row and j-th column is equal to the complex conjugate of the element in the j-th row and i-th column, for all indices i and j:

aᵢⱼ = conj(aⱼᵢ)

or in matrix form:

A = conj(Aᵀ)

Hermitian matrices can be understood as the complex extension of real symmetric matrices.

If the conjugate transpose of a matrix A is denoted by Aᴴ, then the Hermitian property can be written concisely as A = Aᴴ.

Hermitian matrices are named after Charles Hermite, who demonstrated in 1855 that matrices of this form share a property with real symmetric matrices of always having real eigenvalues. Other, equivalent notations in common use are Aᴴ = A† = A*, although note that in quantum mechanics, A* typically means the complex conjugate only, and not the conjugate transpose.

**Alternative characterizations**

Hermitian matrices can be characterized in a number of equivalent ways, some of which are listed below:

*Equality with the adjoint*

A square matrix A is Hermitian if and only if it is equal to its adjoint, that is, it satisfies

⟨w, Av⟩ = ⟨Aw, v⟩

for any pair of vectors v, w, where ⟨·, ·⟩ denotes the inner product operation.

This is also the way that the more general concept of self-adjoint operator is defined.

*Reality of quadratic forms*

A square matrix A is Hermitian if and only if the quadratic form ⟨v, Av⟩ is real for every complex vector v.

**Spectral properties**

A square matrix A is Hermitian if and only if it is unitarily diagonalizable with real eigenvalues.
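A small sketch of checking the Hermitian condition aᵢⱼ = conj(aⱼᵢ) elementwise in Python (names and tolerance are illustrative choices):

```python
def is_hermitian(a, tol=1e-12):
    """Check a[i][j] == conj(a[j][i]) for every pair of indices."""
    n = len(a)
    return all(abs(a[i][j] - a[j][i].conjugate()) <= tol
               for i in range(n) for j in range(n))

h = [[2 + 0j, 1 - 1j],
     [1 + 1j, 3 + 0j]]   # real diagonal, conjugate off-diagonal pair

assert is_hermitian(h)
assert not is_hermitian([[0j, 1j], [1j, 0j]])  # 1j ≠ conj(1j)
```

The condition forces every diagonal entry to be real, since aᵢᵢ must equal its own conjugate.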

**Applications**

Hermitian matrices are fundamental to quantum mechanics because they describe operators with necessarily real eigenvalues. An eigenvalue a of an operator Â on some quantum state is one of the possible measurement outcomes of the operator, which requires the operators to have real eigenvalues.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline