Discussion about math, puzzles, games and fun. Useful symbols: ÷ × ½ √ ∞ ≠ ≤ ≥ ≈ ⇒ ± ∈ Δ θ ∴ ∑ ∫ π -¹ ² ³ °


**Jai Ganesh**, **Administrator** - Registered: 2005-06-28
- Posts: 48,124

18) **Snell's law**

Snell's law (also known as the Snell–Descartes law, the ibn Sahl law, and the law of refraction) is a formula used to describe the relationship between the angles of incidence and refraction when referring to light or other waves passing through a boundary between two different isotropic media, such as water, glass, or air. The law is named after the Dutch astronomer and mathematician Willebrord Snellius (also called Snell).

In optics, the law is used in ray tracing to compute the angles of incidence or refraction, and in experimental optics to find the refractive index of a material. The law is also satisfied in meta-materials, which allow light to be bent "backward" at a negative angle of refraction with a negative refractive index.

Snell's law states that, for a given pair of media, the ratio of the sines of the angle of incidence θ1 and angle of refraction θ2 is equal to the ratio of phase velocities (v1 / v2) in the two media, or equivalently, to the refractive indices (n2 / n1) of the two media.
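In code form, the ratio relation sin θ1 / sin θ2 = n2 / n1 can be rearranged to compute the refraction angle directly; a minimal Python sketch (the indices and the 30° angle are just illustrative values):

```python
import math

def refraction_angle(n1, n2, theta1_deg):
    """Angle of refraction from Snell's law: n1*sin(theta1) = n2*sin(theta2)."""
    s = n1 * math.sin(math.radians(theta1_deg)) / n2
    if abs(s) > 1:
        return None  # total internal reflection: no refracted ray exists
    return math.degrees(math.asin(s))

# Light passing from air (n = 1.00) into water (n = 1.333) at 30° incidence
# bends toward the normal, to about 22°:
theta2 = refraction_angle(1.00, 1.333, 30.0)
```

Going the other way (water into air at a steep angle) makes the sine exceed 1, which is the total-internal-reflection case the function reports as `None`.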

The law follows from Fermat's principle of least time, which in turn follows from the propagation of light as waves.

It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.


19) **Stefan–Boltzmann law**

The Stefan–Boltzmann law describes the power radiated from a black body in terms of its temperature. Specifically, the Stefan–Boltzmann law states that the total energy radiated per unit surface area of a black body across all wavelengths per unit time, M° (also known as the black-body radiant emittance), is directly proportional to the fourth power of the black body's thermodynamic temperature T:

M° = σT⁴

The constant of proportionality σ, called the Stefan–Boltzmann constant, is derived from other known physical constants. Since 2019, the value of the constant is

σ = 2π⁵k⁴ / (15h³c²) ≈ 5.670374419 × 10⁻⁸ W·m⁻²·K⁻⁴

where k is the Boltzmann constant, h is Planck's constant, and c is the speed of light in a vacuum. The radiance from a specified angle of view (watts per square metre per steradian) is given by

L = M° / π = (σ/π) T⁴
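Since 2019 the constants k, h and c have exact defined values in the SI, so σ can be checked numerically from the relation σ = 2π⁵k⁴ / (15h³c²); a minimal sketch:

```python
import math

# Exact defining constants of the 2019 SI redefinition
k = 1.380649e-23      # Boltzmann constant, J/K
h = 6.62607015e-34    # Planck constant, J·s
c = 299792458.0       # speed of light in vacuum, m/s

# Stefan–Boltzmann constant from sigma = 2*pi^5*k^4 / (15*h^3*c^2)
sigma = 2 * math.pi**5 * k**4 / (15 * h**3 * c**2)
print(sigma)  # ≈ 5.670374419e-08 W·m⁻²·K⁻⁴
```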

A body that does not absorb all incident radiation (sometimes known as a grey body) emits less total energy than a black body and is characterized by an emissivity ε < 1:

M = εσT⁴

The radiant emittance M has dimensions of energy flux (energy per unit time per unit area), and the SI units of measure are joules per second per square metre, or equivalently, watts per square metre. The SI unit for absolute temperature T is the kelvin. ε is the emissivity of the grey body; if it is a perfect black body, ε = 1. In the still more general (and realistic) case, the emissivity depends on the wavelength, ε = ε(λ). To find the total power P radiated from an object, multiply the radiant emittance by its surface area A:

P = AεσT⁴
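The total radiated power of a grey body can then be evaluated directly; a small sketch (the area, emissivity and temperature below are illustrative assumptions, roughly those of human skin):

```python
sigma = 5.670374419e-8  # Stefan–Boltzmann constant, W·m⁻²·K⁻⁴

def radiated_power(area_m2, emissivity, temp_k):
    """Total power radiated by a grey body: P = A·ε·σ·T⁴."""
    return area_m2 * emissivity * sigma * temp_k**4

# Illustrative values: 1.7 m² of surface at 305 K with ε ≈ 0.97
p = radiated_power(1.7, 0.97, 305.0)  # ≈ 809 W radiated
```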

Wavelength- and subwavelength-scale particles, metamaterials, and other nanostructures are not subject to ray-optical limits and may be designed to exceed the Stefan–Boltzmann law.


20) **Pauli exclusion principle**

In quantum mechanics, the Pauli exclusion principle states that two or more identical particles with half-integer spins (i.e. fermions) cannot occupy the same quantum state within a quantum system simultaneously. This principle was formulated by Austrian physicist Wolfgang Pauli in 1925 for electrons, and later extended to all fermions with his spin–statistics theorem of 1940.

In the case of electrons in atoms, it can be stated as follows: it is impossible for two electrons of a poly-electron atom to have the same values of the four quantum numbers: n, the principal quantum number; ℓ, the azimuthal quantum number; mℓ, the magnetic quantum number; and ms, the spin quantum number. For example, if two electrons reside in the same orbital, then their n, ℓ, and mℓ values are the same; therefore their ms must be different, and thus the electrons must have opposite half-integer spin projections of +1/2 and −1/2.

Particles with an integer spin, or bosons, are not subject to the Pauli exclusion principle: any number of identical bosons can occupy the same quantum state, as with, for instance, photons produced by a laser or atoms in a Bose–Einstein condensate.

A more rigorous statement is that, concerning the exchange of two identical particles, the total (many-particle) wave function is antisymmetric for fermions, and symmetric for bosons. This means that if the space and spin coordinates of two identical particles are interchanged, then the total wave function changes its sign for fermions and does not change for bosons.

If two fermions were in the same state (for example the same orbital with the same spin in the same atom), interchanging them would change nothing and the total wave function would be unchanged. The only way the total wave function can both change sign as required for fermions and also remain unchanged is that this function must be zero everywhere, which means that the state cannot exist. This reasoning does not apply to bosons because the sign does not change.
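The antisymmetry argument above can be made concrete with a toy two-particle wave function in Slater-determinant form; a sketch (the single-particle "orbitals" are arbitrary illustrative functions, not real atomic states):

```python
def fermion_wavefunction(phi_a, phi_b, x1, x2):
    """Antisymmetrized two-particle amplitude: phi_a(x1)phi_b(x2) - phi_a(x2)phi_b(x1)."""
    return phi_a(x1) * phi_b(x2) - phi_a(x2) * phi_b(x1)

def phi1(x):  # toy single-particle state, for illustration only
    return x

def phi2(x):  # a different toy state
    return x * x

# Exchanging the two particles flips the sign (antisymmetry)...
a = fermion_wavefunction(phi1, phi2, 2.0, 3.0)
b = fermion_wavefunction(phi1, phi2, 3.0, 2.0)  # equals -a

# ...and putting both fermions in the SAME state makes the amplitude vanish,
# exactly the "state cannot exist" conclusion of the exclusion principle:
zero = fermion_wavefunction(phi1, phi1, 2.0, 3.0)  # 0.0
```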


21) **Mass–energy equivalence**

In physics, mass–energy equivalence is the relationship between mass and energy in a system's rest frame, where the two values differ only by a constant and the units of measurement. The principle is described by the physicist Albert Einstein's famous formula:

E = mc²

The formula defines the energy E of a particle in its rest frame as the product of its mass m and the speed of light squared (c²).

Because the speed of light is a large number in everyday units (approximately 300,000 km/s or 186,000 mi/s), the formula implies that a small amount of rest mass corresponds to an enormous amount of energy, which is independent of the composition of the matter. Rest mass, also called invariant mass, is the mass that is measured when the system is at rest. It is a fundamental physical property that is independent of momentum, even at extreme speeds approaching the speed of light (i.e., its value is the same in all inertial frames of reference). Massless particles such as photons have zero invariant mass, but massless free particles have both momentum and energy. The equivalence principle implies that when energy is lost in chemical reactions, nuclear reactions, and other energy transformations, the system will also lose a corresponding amount of mass. The energy and mass can be released to the environment as radiant energy, such as light, or as thermal energy. The principle is fundamental to many fields of physics, including nuclear and particle physics.

Mass–energy equivalence arose from special relativity as a paradox described by the French polymath Henri Poincaré. Einstein was the first to propose the equivalence of mass and energy as a general principle and a consequence of the symmetries of space and time. The principle first appeared in "Does the inertia of a body depend upon its energy-content?", one of his *annus mirabilis* papers, published on 21 November 1905. The formula and its relationship to momentum, as described by the energy–momentum relation, were later developed by other physicists.
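As a numeric illustration of the scale involved (the one-gram mass is just an example):

```python
c = 299792458.0  # speed of light in vacuum, m/s

def rest_energy(mass_kg):
    """Mass–energy equivalence: E = m·c², in joules."""
    return mass_kg * c**2

# One gram of matter is equivalent to roughly 9×10¹³ J,
# on the order of a large fission bomb's total energy release:
e = rest_energy(0.001)
```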

**Description**

Mass–energy equivalence states that all objects having mass, or massive objects, have a corresponding intrinsic energy, even when they are stationary. In the rest frame of an object, where by definition it is motionless and so has no momentum, the mass and energy are equivalent and they differ only by a constant, the speed of light squared (c²). In Newtonian mechanics, a motionless body has no kinetic energy, and it may or may not have other amounts of internal stored energy, like chemical energy or thermal energy, in addition to any potential energy it may have from its position in a field of force. These energies tend to be much smaller than the mass of the object multiplied by c², which is on the order of 10¹⁷ joules for a mass of one kilogram. Due to this principle, the mass of the atoms that come out of a nuclear reaction is less than the mass of the atoms that go in, and the difference in mass shows up as heat and light with the same equivalent energy as the difference. In analyzing such reactions, Einstein's formula can be used with E as the energy released and removed, and m as the change in mass.

In relativity, all the energy that moves with an object (i.e., the energy as measured in the object's rest frame) contributes to the total mass of the body, which measures how much it resists acceleration. If an isolated box of ideal mirrors could contain light, the individually massless photons would contribute to the total mass of the box, by an amount equal to their energy divided by c². For an observer in the rest frame, removing energy is the same as removing mass, and the formula indicates how much mass is lost when energy is removed. In the same way, when any energy is added to an isolated system, the increase in the mass is equal to the added energy divided by c².


22) **Laws of Reflection**

If the reflecting surface is very smooth, the reflection of light that occurs is called specular or regular reflection. The laws of reflection are as follows:

* The incident ray, the reflected ray and the normal to the reflection surface at the point of the incidence lie in the same plane.

* The angle which the incident ray makes with the normal is equal to the angle which the reflected ray makes with the same normal.

* The reflected ray and the incident ray are on the opposite sides of the normal.

These three laws can all be derived from the Fresnel equations.
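For ray tracing, the three laws are captured by the vector identity r = d − 2(d·n)n, where d is the incident direction and n the unit normal; a minimal 2-D sketch:

```python
def reflect(d, n):
    """Specular reflection of direction vector d off a surface with unit normal n:
    r = d - 2*(d·n)*n, which keeps d, r and n in one plane with equal angles."""
    dot = d[0] * n[0] + d[1] * n[1]
    return (d[0] - 2 * dot * n[0], d[1] - 2 * dot * n[1])

# A ray travelling down-and-right hits a horizontal mirror (normal pointing up);
# it leaves up-and-right, making the same angle on the other side of the normal:
r = reflect((1.0, -1.0), (0.0, 1.0))  # → (1.0, 1.0)
```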

**Mechanism**

In classical electrodynamics, light is considered as an electromagnetic wave, which is described by Maxwell's equations. Light waves incident on a material induce small oscillations of polarisation in the individual atoms (or oscillation of electrons, in metals), causing each particle to radiate a small secondary wave in all directions, like a dipole antenna. All these waves add up to give specular reflection and refraction, according to the Huygens–Fresnel principle.

In the case of dielectrics such as glass, the electric field of the light acts on the electrons in the material, and the moving electrons generate fields and become new radiators. The refracted light in the glass is the combination of the forward radiation of the electrons and the incident light. The reflected light is the combination of the backward radiation of all of the electrons.

In metals, electrons with no binding energy are called free electrons. When these electrons oscillate with the incident light, the phase difference between their radiation field and the incident field is π (180°), so the forward radiation cancels the incident light and the backward radiation is just the reflected light.

Light–matter interaction in terms of photons is a topic of quantum electrodynamics, and is described in detail by Richard Feynman in his popular book *QED: The Strange Theory of Light and Matter.*


23) **Faraday's law of induction**

Faraday's law of induction (briefly, Faraday's law) is a basic law of electromagnetism predicting how a magnetic field will interact with an electric circuit to produce an electromotive force (emf)—a phenomenon known as electromagnetic induction. It is the fundamental operating principle of transformers, inductors, and many types of electrical motors, generators and solenoids.

The Maxwell–Faraday equation (listed as one of Maxwell's equations) describes the fact that a spatially varying (and also possibly time-varying, depending on how a magnetic field varies in time) electric field always accompanies a time-varying magnetic field, while Faraday's law states that there is emf (electromotive force, defined as electromagnetic work done on a unit charge when it has traveled one round of a conductive loop) on the conductive loop when the magnetic flux through the surface enclosed by the loop varies in time.

Faraday's law was discovered first; one aspect of it (the transformer emf) was later formulated as the Maxwell–Faraday equation. The equation of Faraday's law can be derived from the Maxwell–Faraday equation (describing transformer emf) and the Lorentz force (describing motional emf). The integral form of the Maxwell–Faraday equation describes only the transformer emf, while the equation of Faraday's law describes both the transformer emf and the motional emf.

**History**

Electromagnetic induction was discovered independently by Michael Faraday in 1831 and Joseph Henry in 1832. Faraday was the first to publish the results of his experiments. In Faraday's first experimental demonstration of electromagnetic induction (August 29, 1831), he wrapped two wires around opposite sides of an iron ring (torus) (an arrangement similar to a modern toroidal transformer). Based on his assessment of recently discovered properties of electromagnets, he expected that when current started to flow in one wire, a sort of wave would travel through the ring and cause some electrical effect on the opposite side. He plugged one wire into a galvanometer, and watched it as he connected the other wire to a battery. Indeed, he saw a transient current (which he called a "wave of electricity") when he connected the wire to the battery, and another when he disconnected it. This induction was due to the change in magnetic flux that occurred when the battery was connected and disconnected. Within two months, Faraday had found several other manifestations of electromagnetic induction. For example, he saw transient currents when he quickly slid a bar magnet in and out of a coil of wires, and he generated a steady (DC) current by rotating a copper disk near the bar magnet with a sliding electrical lead ("Faraday's disk").

Michael Faraday explained electromagnetic induction using a concept he called lines of force. However, scientists at the time widely rejected his theoretical ideas, mainly because they were not formulated mathematically. An exception was James Clerk Maxwell, who in 1861–62 used Faraday's ideas as the basis of his quantitative electromagnetic theory. In Maxwell's papers, the time-varying aspect of electromagnetic induction is expressed as a differential equation which Oliver Heaviside referred to as Faraday's law even though it is different from the original version of Faraday's law, and does not describe motional emf. Heaviside's version (see Maxwell–Faraday equation below) is the form recognized today in the group of equations known as Maxwell's equations.

Lenz's law, formulated by Emil Lenz in 1834, describes "flux through the circuit", and gives the direction of the induced emf and current resulting from electromagnetic induction.

**Faraday's law**

The most widespread version of Faraday's law states:

*The electromotive force around a closed path is equal to the negative of the time rate of change of the magnetic flux enclosed by the path.*

**Mathematical statement**

The definition of surface integral relies on splitting the surface Σ into small surface elements. Each element is associated with a vector dA of magnitude equal to the area of the element and with direction normal to the element and pointing "outward" (with respect to the orientation of the surface).

For a loop of wire in a magnetic field, the magnetic flux ΦB is defined for any surface Σ whose boundary is the given loop. Since the wire loop may be moving, we write Σ(t) for the surface. The magnetic flux is the surface integral:

ΦB = ∫∫Σ(t) B(t) · dA

where dA is an element of surface area of the moving surface Σ(t), B is the magnetic field, and B · dA is a vector dot product representing the element of flux through dA. In more visual terms, the magnetic flux through the wire loop is proportional to the number of magnetic field lines that pass through the loop.

When the flux changes—because B changes, or because the wire loop is moved or deformed, or both—Faraday's law of induction says that the wire loop acquires an emf, defined as the energy available from a unit charge that has traveled once around the wire loop. (Some sources state the definition differently. This expression was chosen for compatibility with the equations of special relativity.) Equivalently, it is the voltage that would be measured by cutting the wire to create an open circuit, and attaching a voltmeter to the leads.

Faraday's law states that the emf is also given by the rate of change of the magnetic flux:

ε = −dΦB/dt

where ε is the electromotive force (emf) and ΦB is the magnetic flux.

The direction of the electromotive force is given by Lenz's law.

The laws of induction of electric currents in mathematical form were established by Franz Ernst Neumann in 1845.

Faraday's law contains the information about the relationships between both the magnitudes and the directions of its variables. However, the relationships between the directions are not explicit; they are hidden in the mathematical formula.

A Left Hand Rule for Faraday's Law. The sign of ΔΦB, the change in flux, is found based on the relationship between the magnetic field B, the area of the loop A, and the normal n to that area, as represented by the fingers of the left hand. If ΔΦB is positive, the direction of the emf is the same as that of the curved fingers (yellow arrowheads). If ΔΦB is negative, the direction of the emf is against the arrowheads.

It is possible to find out the direction of the electromotive force (emf) directly from Faraday's law, without invoking Lenz's law. A left-hand rule helps to do that, as follows:

* Align the curved fingers of the left hand with the loop (yellow line).

* Stretch your thumb. The stretched thumb indicates the direction of n (brown), the normal to the area enclosed by the loop.

* Find the sign of ΔΦB, the change in flux. Determine the initial and final fluxes (whose difference is ΔΦB) with respect to the normal n, as indicated by the stretched thumb.

* If the change in flux, ΔΦB, is positive, the curved fingers show the direction of the electromotive force (yellow arrowheads).

* If ΔΦB is negative, the direction of the electromotive force is opposite to the direction of the curved fingers (opposite to the yellow arrowheads).

For a tightly wound coil of wire, composed of N identical turns, each with the same ΦB, Faraday's law of induction states that

ε = −N dΦB/dt

where N is the number of turns of wire and ΦB is the magnetic flux through a single loop.
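For a finite flux change, the coil form of the law gives the average emf over the interval; a minimal sketch (the turn count, flux values and time below are illustrative, not from the post):

```python
def induced_emf(n_turns, flux_initial, flux_final, dt):
    """Average emf in a coil of N turns: emf = -N * (change in flux) / (time taken).
    Flux in webers, time in seconds, result in volts."""
    return -n_turns * (flux_final - flux_initial) / dt

# Flux through each of 100 turns rises from 0.01 Wb to 0.03 Wb over 0.5 s:
emf = induced_emf(100, 0.01, 0.03, 0.5)  # ≈ -4 V
```

The minus sign carries Lenz's law: the induced current opposes the increase in flux.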


24) **Sabine's Formula**

Wallace Clement Sabine (June 13, 1868 – January 10, 1919) was an American physicist who founded the field of architectural acoustics. Sabine was the architectural acoustician of Boston's Symphony Hall, widely considered one of the two or three best concert halls in the world for its acoustics.

**Career**

After graduating, Sabine became an assistant professor of physics at Harvard in 1889. He became an instructor in 1890 and a member of the faculty in 1892. In 1895, he became an assistant professor and in 1905, he was promoted to professor of physics. In October 1906, he became dean of the Lawrence Scientific School, succeeding Nathaniel Shaler.

Sabine's career is the story of the birth of the field of modern architectural acoustics. In 1895, acoustically improving the Fogg Lecture Hall, part of the recently constructed Fogg Art Museum, was considered an impossible task by the senior staff of the physics department at Harvard. (The original Fogg Museum was designed by Richard Morris Hunt and constructed in 1893. After the completion of the present Fogg Museum the building was repurposed for academic use and renamed Hunt Hall in 1935.) The assignment was passed down until it landed on the shoulders of a young physics professor, Sabine. Although considered a popular lecturer by the students, Sabine had never received his Ph.D. and did not have any particular background dealing with sound.

Sabine tackled the problem by trying to determine what made the Fogg Lecture Hall different from other, acoustically acceptable facilities. In particular, the Sanders Theater was considered acoustically excellent. For the next several years, Sabine and his assistants spent each night moving materials between the two lecture halls and testing the acoustics. On some nights they would borrow hundreds of seat cushions from the Sanders Theater. Using an organ pipe and a stopwatch, Sabine performed thousands of careful measurements (though inaccurate by present standards) of the time required for different frequencies of sounds to decay to inaudibility in the presence of the different materials. He tested reverberation time with several different types of Oriental rugs inside Fogg Lecture Hall, and with various numbers of people occupying its seats, and found that the body of an average person decreased reverberation time by about as much as six seat cushions. Once the measurements were taken and before morning arrived, everything was quickly replaced in both lecture halls, in order to be ready for classes the next day.

Sabine was able to determine, through the experiments, that a definitive relationship exists between the quality of the acoustics, the size of the chamber, and the amount of absorption surface present. He formally defined the reverberation time, which is still the most important characteristic currently in use for gauging the acoustical quality of a room, as the number of seconds required for the intensity of the sound to drop from the starting level by 60 dB (decibels).

His formula is

T = 0.161 V/A (in SI units)

where

T = the reverberation time (in seconds)

V = the room volume (in m³)

A = the effective absorption area (in m² sabins)
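In SI units the Sabine equation can be evaluated directly; a small sketch (0.161 s/m is the usual metric coefficient; the room figures are illustrative):

```python
def sabine_reverberation_time(volume_m3, absorption_m2_sabins):
    """Sabine's formula in SI units: T = 0.161 * V / A, in seconds."""
    return 0.161 * volume_m3 / absorption_m2_sabins

# Illustrative concert hall: 5000 m³ with 400 m² sabins of total absorption
t60 = sabine_reverberation_time(5000.0, 400.0)  # ≈ 2.0 s
```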

By studying various rooms judged acoustically optimal for their intended uses, Sabine determined that acoustically appropriate concert halls had reverberation times of 2-2.25 seconds (with shorter reverberation times, a music hall seems too "dry" to the listener), while optimal lecture hall acoustics featured reverberation times of slightly under 1 second. Regarding the Fogg Museum lecture room, Sabine noted that a spoken word remained audible for about 5.5 seconds, or about an additional 12-15 words if the speaker continued talking. Listeners thus contended with a very high degree of resonance and echo.

Using what he discovered, Sabine deployed sound absorbing materials throughout the Fogg Lecture Hall to cut its reverberation time and reduce the "echo effect." This accomplishment cemented Wallace Sabine's career, and led to his hiring as the acoustical consultant for Boston's Symphony Hall, the first concert hall to be designed using quantitative acoustics. His acoustic design was successful and Symphony Hall is generally considered one of the best symphony halls in the world.

The unit of sound absorption, the sabin, was named in his honor.


25) **Bernoulli's principle**

In fluid dynamics, Bernoulli's principle states that an increase in the speed of a fluid occurs simultaneously with a decrease in static pressure or a decrease in the fluid's potential energy. The principle is named after Daniel Bernoulli who published it in his book Hydrodynamica in 1738. Although Bernoulli deduced that pressure decreases when the flow speed increases, it was Leonhard Euler in 1752 who derived Bernoulli's equation in its usual form. The principle is only applicable for isentropic flows: when the effects of irreversible processes (like turbulence) and non-adiabatic processes (e.g. heat radiation) are small and can be neglected.

Bernoulli's principle can be applied to various types of fluid flow, resulting in various forms of Bernoulli's equation. The simple form of Bernoulli's equation is valid for incompressible flows (e.g. most liquid flows and gases moving at low Mach number). More advanced forms may be applied to compressible flows at higher Mach numbers (see the derivations of the Bernoulli equation).
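For the simple incompressible, horizontal case the equation reads p + ½ρv² = constant along a streamline, so the downstream pressure can be solved for directly; a minimal sketch (the pipe values are illustrative):

```python
def downstream_pressure(p1, v1, v2, rho):
    """Incompressible, horizontal Bernoulli equation:
    p1 + ½·ρ·v1² = p2 + ½·ρ·v2², solved for p2 (pascals)."""
    return p1 + 0.5 * rho * (v1**2 - v2**2)

# Water (ρ = 1000 kg/m³) speeding up from 2 m/s to 4 m/s in a narrowing pipe:
p2 = downstream_pressure(101325.0, 2.0, 4.0, 1000.0)  # → 95325.0 Pa: pressure drops
```

As the principle states, the faster section is the lower-pressure section.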

Bernoulli's principle can be derived from the principle of conservation of energy. This states that, in a steady flow, the sum of all forms of energy in a fluid along a streamline is the same at all points on that streamline. This requires that the sum of kinetic energy, potential energy and internal energy remains constant. Thus an increase in the speed of the fluid – implying an increase in its kinetic energy (dynamic pressure) – occurs with a simultaneous decrease in (the sum of) its potential energy (including the static pressure) and internal energy. If the fluid is flowing out of a reservoir, the sum of all forms of energy is the same on all streamlines because in a reservoir the energy per unit volume (the sum of the pressure and the gravitational potential energy ρgh) is the same everywhere.

Bernoulli's principle can also be derived directly from Isaac Newton's Second Law of Motion. If a small volume of fluid is flowing horizontally from a region of high pressure to a region of low pressure, then there is more pressure behind than in front. This gives a net force on the volume, accelerating it along the streamline.

Fluid particles are subject only to pressure and their own weight. If a fluid is flowing horizontally and along a section of a streamline, where the speed increases it can only be because the fluid on that section has moved from a region of higher pressure to a region of lower pressure; and if its speed decreases, it can only be because it has moved from a region of lower pressure to a region of higher pressure. Consequently, within a fluid flowing horizontally, the highest speed occurs where the pressure is lowest, and the lowest speed occurs where the pressure is highest.


26) **Hubble's Law**

Hubble's law, also known as the Hubble–Lemaître law or Lemaître's law, is the observation in physical cosmology that galaxies are moving away from Earth at speeds proportional to their distance. In other words, the farther they are, the faster they are moving away from Earth. The velocity of the galaxies has been determined by their redshift, a shift of the light they emit toward the red end of the visible spectrum.

Hubble's law is considered the first observational basis for the expansion of the universe, and today it serves as one of the pieces of evidence most often cited in support of the Big Bang model. The motion of astronomical objects due solely to this expansion is known as the Hubble flow. It is described by the equation

v = H0 D

with H0 the constant of proportionality—the Hubble constant—between the "proper distance" D to a galaxy, which can change over time, unlike the comoving distance, and its speed of separation v, i.e. the derivative of proper distance with respect to the cosmological time coordinate.

The Hubble constant is most frequently quoted in (km/s)/Mpc, thus giving the speed in km/s of a galaxy 1 megaparsec away, and its value is about 70 (km/s)/Mpc. However, the SI unit of H0 is simply s⁻¹, and the SI unit for the reciprocal of H0 is simply the second. The reciprocal of H0 is known as the Hubble time. The Hubble constant can also be interpreted as the relative rate of expansion. In this form H0 = 7%/Gyr, meaning that at the current rate of expansion it takes a billion years for an unbound structure to grow by 7%.

Although widely attributed to Edwin Hubble, the notion of the universe expanding at a calculable rate was first derived from general relativity equations in 1922 by Alexander Friedmann. Friedmann published a set of equations, now known as the Friedmann equations, showing that the universe might be expanding, and presenting the expansion speed if that were the case. Then Georges Lemaître, in a 1927 article, independently derived that the universe might be expanding, observed the proportionality between recessional velocity of, and distance to, distant bodies, and suggested an estimated value for the proportionality constant; this constant, when Edwin Hubble confirmed the existence of cosmic expansion and determined a more accurate value for it two years later, came to be known by his name as the Hubble constant. Hubble inferred the recession velocity of the objects from their redshifts, many of which were earlier measured and related to velocity by Vesto Slipher in 1917. Though the Hubble constant H0 is roughly constant in the velocity–distance space at any given moment in time, the Hubble parameter H, of which the Hubble constant is the current value, varies with time, so the term constant is sometimes thought of as somewhat of a misnomer.
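A small numeric sketch of v = H0·D and of the Hubble time, taking H0 ≈ 70 (km/s)/Mpc as quoted above (the 100 Mpc distance is just an example):

```python
H0 = 70.0  # Hubble constant, (km/s)/Mpc — approximate present-day value

def recession_velocity(distance_mpc):
    """Hubble's law: v = H0 * D, returning km/s for a distance in Mpc."""
    return H0 * distance_mpc

v = recession_velocity(100.0)  # a galaxy 100 Mpc away recedes at 7000 km/s

# Converting H0 to SI (s⁻¹) and inverting gives the Hubble time, ≈ 14 billion years:
KM_PER_MPC = 3.0857e19       # kilometres in one megaparsec
SECONDS_PER_YEAR = 3.156e7
hubble_time_yr = KM_PER_MPC / H0 / SECONDS_PER_YEAR
```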


27) **Stokes' law**

In 1851, George Gabriel Stokes derived an expression, now known as Stokes' law, for the frictional force – also called drag force – exerted on spherical objects with very small Reynolds numbers in a viscous fluid. Stokes' law is derived by solving the Stokes flow limit for small Reynolds numbers of the Navier–Stokes equations.

**Statement of the law**

The force of viscosity on a small sphere moving through a viscous fluid is given by:

Fd = 6πμRv

where:

Fd is the frictional force – known as Stokes' drag – acting on the interface between the fluid and the particle

μ is the dynamic viscosity (some authors use the symbol η)

R is the radius of the spherical object

v is the flow velocity relative to the object.

In SI units, Fd is given in newtons, μ in Pa·s, R in meters, and v in m/s.

Stokes' law makes the following assumptions for the behavior of a particle in a fluid:

* Laminar flow

* Spherical particles

* Homogeneous (uniform in composition) material

* Smooth surfaces

* Particles do not interfere with each other.


For molecules Stokes' law is used to define their Stokes radius and diameter.

The CGS unit of kinematic viscosity was named "stokes" after his work.

**Applications**

Stokes' law is the basis of the falling-sphere viscometer, in which the fluid is stationary in a vertical glass tube. A sphere of known size and density is allowed to descend through the liquid. If correctly selected, it reaches terminal velocity, which can be measured by the time it takes to pass two marks on the tube. Electronic sensing can be used for opaque fluids. Knowing the terminal velocity, the size and density of the sphere, and the density of the liquid, Stokes' law can be used to calculate the viscosity of the fluid. A series of steel ball bearings of different diameters is normally used in the classic experiment to improve the accuracy of the calculation. The school experiment uses glycerine or golden syrup as the fluid, and the technique is used industrially to check the viscosity of fluids used in processes. School experiments often vary the temperature and/or concentration of the substances used in order to demonstrate the effect this has on viscosity. Industrial methods cover many different oils and polymer solutions.
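The falling-sphere viscometer works because, at terminal velocity, Stokes' drag 6πμRv balances the sphere's buoyancy-corrected weight (4/3)πR³(ρs − ρf)g, so μ = 2(ρs − ρf)gR²/(9v); a minimal sketch (the ball and fluid figures below are hypothetical, chosen only for illustration):

```python
def viscosity_falling_sphere(radius_m, rho_sphere, rho_fluid, v_terminal, g=9.81):
    """Falling-sphere viscometer: at terminal velocity, Stokes drag 6*pi*mu*R*v
    balances the net weight (4/3)*pi*R^3*(rho_s - rho_f)*g, giving
    mu = 2*(rho_s - rho_f)*g*R^2 / (9*v), in Pa·s."""
    return 2 * (rho_sphere - rho_fluid) * g * radius_m**2 / (9 * v_terminal)

# Hypothetical run: a 1 mm steel ball (7800 kg/m³) in glycerine (1260 kg/m³),
# timed at a terminal velocity of 0.05 m/s:
mu = viscosity_falling_sphere(1e-3, 7800.0, 1260.0, 0.05)  # ≈ 0.285 Pa·s
```

In a real measurement one would also check that the Reynolds number stays small, since Stokes' law assumes laminar flow.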

The importance of Stokes' law is illustrated by the fact that it played a critical role in the research leading to at least three Nobel Prizes.

Stokes' law is important for understanding the swimming of microorganisms and sperm; also, the sedimentation of small particles and organisms in water, under the force of gravity.

In air, the same theory can be used to explain why small water droplets (or ice crystals) can remain suspended in air (as clouds) until they grow to a critical size and start falling as rain (or snow and hail). Similar use of the equation can be made in the settling of fine particles in water or other fluids.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

**Jai Ganesh****Administrator**- Registered: 2005-06-28
- Posts: 48,124

28) **Wien's displacement law**

Wien's displacement law states that the black-body radiation curve for different temperatures will peak at different wavelengths that are inversely proportional to the temperature. The shift of that peak is a direct consequence of the Planck radiation law, which describes the spectral brightness of black-body radiation as a function of wavelength at any given temperature. However, it had been discovered by Wilhelm Wien several years before Max Planck developed that more general equation, and describes the entire shift of the spectrum of black-body radiation toward shorter wavelengths as temperature increases.

Formally, Wien's displacement law states that the spectral radiance of black-body radiation per unit wavelength peaks at the wavelength λ_peak given by:

λ_peak = b/T

where T is the absolute temperature and b is a constant of proportionality called Wien's displacement constant, equal to 2.897771955×10⁻³ m·K (≈ 2898 μm·K). This is an inverse relationship between wavelength and temperature: the higher the temperature, the shorter the wavelength of the thermal radiation; the lower the temperature, the longer the wavelength. For visible radiation, hot objects emit bluer light than cool objects. If one is considering the peak of black body emission per unit frequency or per proportional bandwidth, one must use a different proportionality constant. However, the form of the law remains the same: the peak wavelength is inversely proportional to temperature, and the peak frequency is directly proportional to temperature.

Wien's displacement law may be referred to as "Wien's law", a term which is also used for the Wien approximation.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

**Jai Ganesh****Administrator**- Registered: 2005-06-28
- Posts: 48,124

29) **Dalton's Law**

Dalton's law (also called Dalton's law of partial pressures) states that in a mixture of non-reacting gases, the total pressure exerted is equal to the sum of the partial pressures of the individual gases. This empirical law was observed by John Dalton in 1801 and published in 1802. Dalton's law is related to the ideal gas laws.

**Formula**

Mathematically, the pressure of a mixture of non-reactive gases can be defined as the summation:

p_total = p₁ + p₂ + ⋯ + pₙ

where p₁, p₂, …, pₙ represent the partial pressures of each component.

The partial pressure of each component can be written p_i = p_total·x_i, where x_i is the mole fraction of the ith component in the total mixture of n components.

**Volume-based concentration**

The relationship below provides a way to determine the volume-based concentration of any individual gaseous component:

p_i = p_total·c_i

where c_i is the concentration of component i.
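A minimal sketch of both relations (the partial pressures below are illustrative values for a dry-air-like mixture):

```python
def total_pressure(partials):
    """Dalton's law: the total pressure is the sum of the partial pressures."""
    return sum(partials)

def mole_fractions(partials):
    """x_i = p_i / p_total for a non-reacting ideal-gas mixture."""
    p_tot = total_pressure(partials)
    return [p / p_tot for p in partials]

# Illustrative partial pressures (kPa) for N2, O2 and Ar.
p = [79.0, 21.2, 1.125]
print(total_pressure(p))   # total pressure in kPa
print(mole_fractions(p))   # fractions summing to 1
```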

Dalton's law is not strictly followed by real gases, with the deviation increasing with pressure. Under such conditions the volume occupied by the molecules becomes significant compared to the free space between them. In particular, the short average distances between molecules increase intermolecular forces between gas molecules enough to substantially change the pressure exerted by them, an effect not included in the ideal gas model.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

**Jai Ganesh****Administrator**- Registered: 2005-06-28
- Posts: 48,124

30) **Poiseuille's Law**

**Hagen–Poiseuille equation**

In nonideal fluid dynamics, the Hagen–Poiseuille equation, also known as the Hagen–Poiseuille law, Poiseuille law or Poiseuille equation, is a physical law that gives the pressure drop in an incompressible and Newtonian fluid in laminar flow flowing through a long cylindrical pipe of constant cross section. It can be successfully applied to air flow in lung alveoli, or the flow through a drinking straw or through a hypodermic needle. It was experimentally derived independently by Jean Léonard Marie Poiseuille in 1838 and Gotthilf Heinrich Ludwig Hagen, and published by Poiseuille in 1840–41 and 1846. The theoretical justification of the Poiseuille law was given by George Stokes in 1845.

The assumptions of the equation are that the fluid is incompressible and Newtonian; the flow is laminar through a pipe of constant circular cross-section that is substantially longer than its diameter; and there is no acceleration of fluid in the pipe. For velocities and pipe diameters above a threshold, actual fluid flow is not laminar but turbulent, leading to larger pressure drops than calculated by the Hagen–Poiseuille equation.

Poiseuille's equation describes the pressure drop due to the viscosity of the fluid; other types of pressure drops may still occur in a fluid. For example, the pressure needed to drive a viscous fluid up against gravity would contain both that as needed in Poiseuille's law plus that as needed in Bernoulli's equation, such that any point in the flow would have a pressure greater than zero (otherwise no flow would happen).

Another example is when blood flows into a narrower constriction, its speed will be greater than in a larger diameter (due to continuity of volumetric flow rate), and its pressure will be lower than in a larger diameter (due to Bernoulli's equation). However, the viscosity of blood will cause additional pressure drop along the direction of flow, which is proportional to length traveled (as per Poiseuille's law). Both effects contribute to the actual pressure drop.

**Equation**

In standard fluid-kinetics notation:

Δp = 8μLQ/(πR⁴) = 8πμLQ/A²

where:

Δp is the pressure difference between the two ends,

L is the length of pipe,

μ is the dynamic viscosity,

Q is the volumetric flow rate,

R is the pipe radius,

A is the cross-sectional area of pipe.

The equation does not hold close to the pipe entrance.
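As a numeric illustration of the Hagen–Poiseuille relation (the fluid properties and pipe dimensions are hypothetical example values):

```python
import math

def pressure_drop(mu, L, Q, R):
    """Hagen-Poiseuille: delta_p = 8*mu*L*Q / (pi*R**4), laminar flow only."""
    return 8.0 * mu * L * Q / (math.pi * R**4)

# Example: water (mu ~ 1.0e-3 Pa·s) flowing at 1 mL/s through a 1 m pipe
# of 5 mm radius.
dp = pressure_drop(1.0e-3, 1.0, 1.0e-6, 5.0e-3)
print(f"pressure drop: {dp:.2f} Pa")
```

Note the strong R⁴ dependence: halving the radius multiplies the pressure drop by sixteen at the same flow rate.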

The equation fails in the limit of low viscosity, wide and/or short pipe. Low viscosity or a wide pipe may result in turbulent flow, making it necessary to use more complex models, such as the Darcy–Weisbach equation. The ratio of length to radius of a pipe should be greater than one forty-eighth of the Reynolds number for the Hagen–Poiseuille law to be valid. If the pipe is too short, the Hagen–Poiseuille equation may result in unphysically high flow rates; the flow is bounded by Bernoulli's principle, under less restrictive conditions, by

Q_max = πR²√(2Δp/ρ)

because it is impossible to have negative (absolute) pressure (not to be confused with gauge pressure) in an incompressible flow.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

**Jai Ganesh****Administrator**- Registered: 2005-06-28
- Posts: 48,124

31) **Lambert's cosine law**

In optics, Lambert's cosine law says that the radiant intensity or luminous intensity observed from an ideal diffusely reflecting surface or ideal diffuse radiator is directly proportional to the cosine of the angle θ between the direction of the incident light and the surface normal: I = I₀ cos θ. The law is also known as the cosine emission law or Lambert's emission law. It is named after Johann Heinrich Lambert, from his Photometria, published in 1760.

A surface which obeys Lambert's law is said to be Lambertian, and exhibits Lambertian reflectance. Such a surface has the same radiance when viewed from any angle. This means, for example, that to the human eye it has the same apparent brightness (or luminance). It has the same radiance because, although the emitted power from a given area element is reduced by the cosine of the emission angle, the solid angle subtended by the surface area visible to the viewer is reduced by the very same amount. Because the ratio between power and solid angle is constant, radiance (power per unit solid angle per unit projected source area) stays the same.
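A minimal sketch of the cosine dependence:

```python
import math

def lambertian_intensity(I0, theta):
    """Intensity from an ideal diffuse radiator at angle theta (radians)
    to the surface normal: I = I0 * cos(theta)."""
    return I0 * math.cos(theta)

# At 60° from the normal the intensity falls to half its on-axis value.
print(lambertian_intensity(1.0, math.radians(60.0)))
```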

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

**Jai Ganesh****Administrator**- Registered: 2005-06-28
- Posts: 48,124

32) **Graham's law**

Graham's law of effusion (also called Graham's law of diffusion) was formulated by Scottish physical chemist Thomas Graham in 1848. Graham found experimentally that the rate of effusion of a gas is inversely proportional to the square root of the molar mass of its particles. This formula can be written as:

Rate₁/Rate₂ = √(M₂/M₁)

where:

Rate₁ is the rate of effusion for the first gas (volume or number of moles per unit time),

Rate₂ is the rate of effusion for the second gas,

M₁ is the molar mass of gas 1,

M₂ is the molar mass of gas 2.

Graham's law states that the rate of diffusion or of effusion of a gas is inversely proportional to the square root of its molecular weight. Thus, if the molecular weight of one gas is four times that of another, it would diffuse through a porous plug or escape through a small pinhole in a vessel at half the rate of the other (heavier gases diffuse more slowly). A complete theoretical explanation of Graham's law was provided years later by the kinetic theory of gases. Graham's law provides a basis for separating isotopes by diffusion—a method that came to play a crucial role in the development of the atomic bomb.
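As a numeric check of the statement above that a gas with four times the molecular weight effuses at half the rate:

```python
import math

def effusion_rate_ratio(M1, M2):
    """Graham's law: Rate1/Rate2 = sqrt(M2/M1) for molar masses M1, M2."""
    return math.sqrt(M2 / M1)

# A gas four times heavier effuses at half the rate, as stated above.
print(effusion_rate_ratio(4.0, 1.0))  # rate of the heavier gas relative to the lighter
```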

Graham's law is most accurate for molecular effusion which involves the movement of one gas at a time through a hole. It is only approximate for diffusion of one gas in another or in air, as these processes involve the movement of more than one gas.

In the same conditions of temperature and pressure, the molar mass is proportional to the mass density. Therefore, the rates of diffusion of different gases are inversely proportional to the square roots of their mass densities.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

**Jai Ganesh****Administrator**- Registered: 2005-06-28
- Posts: 48,124

33) **Curie–Weiss law**

The Curie–Weiss law describes the magnetic susceptibility χ of a ferromagnet in the paramagnetic region above the Curie point:

χ = C/(T − T_C)

where C is a material-specific Curie constant, T is the absolute temperature, and T_C is the Curie temperature, both measured in kelvin. The law predicts a singularity in the susceptibility at T = T_C. Below this temperature, the ferromagnet has a spontaneous magnetization. The name is given after Pierre Curie and Pierre-Ernest Weiss.

**Brief summary of related concepts**

The magnetic moment of a magnet is a quantity that determines the torque it will experience in an external magnetic field. A loop of electric current, a bar magnet, an electron, a molecule, and a planet all have magnetic moments.

The magnetization or magnetic polarization of a magnetic material is the vector field that expresses the density of permanent or induced magnetic moments. The magnetic moments can originate from microscopic electric currents caused by the motion of electrons in individual atoms, or the spin of the electrons or the nuclei. Net magnetization results from the response of a material to an external magnetic field, together with any unbalanced magnetic moment that may be present even in the absence of the external magnetic field, for example, in sufficiently cold iron. The latter is called spontaneous magnetization. Other materials that share this property with iron, like nickel and magnetite, are called ferromagnets. The threshold temperature below which a material is ferromagnetic is called the Curie temperature and varies between materials.
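Returning to the Curie–Weiss law itself, a minimal numerical sketch (the Curie constant here is an arbitrary illustrative value; ~1043 K is approximately iron's Curie temperature):

```python
def curie_weiss_susceptibility(T, C, Tc):
    """chi = C / (T - Tc); valid only in the paramagnetic region T > Tc."""
    if T <= Tc:
        raise ValueError("Curie-Weiss law applies only above the Curie temperature")
    return C / (T - Tc)

Tc, C = 1043.0, 1.0
# The susceptibility grows without bound as T approaches Tc from above.
for T in (1143.0, 1053.0, 1044.0):
    print(T, curie_weiss_susceptibility(T, C, Tc))
```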

**Limitations**

In many materials, the Curie–Weiss law fails to describe the susceptibility in the immediate vicinity of the Curie point, since it is based on a mean-field approximation. Instead, there is a critical behavior of the form

χ ∝ 1/(T − T_C)^γ

with the critical exponent γ. However, at temperatures T ≫ T_C the expression of the Curie–Weiss law still holds true, but with T_C replaced by a temperature Θ that is somewhat higher than the actual Curie temperature. Some authors call Θ the Weiss constant to distinguish it from the temperature of the actual Curie point.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

**Jai Ganesh****Administrator**- Registered: 2005-06-28
- Posts: 48,124

34) **Gauss's law**

In physics and electromagnetism, Gauss's law, also known as Gauss's flux theorem, (or sometimes simply called Gauss's theorem) is a law relating the distribution of electric charge to the resulting electric field. In its integral form, it states that the flux of the electric field out of an arbitrary closed surface is proportional to the electric charge enclosed by the surface, irrespective of how that charge is distributed. Even though the law alone is insufficient to determine the electric field across a surface enclosing any charge distribution, this may be possible in cases where symmetry mandates uniformity of the field. Where no such symmetry exists, Gauss's law can be used in its differential form, which states that the divergence of the electric field is proportional to the local density of charge.

The law was first formulated by Joseph-Louis Lagrange in 1773, followed by Carl Friedrich Gauss in 1835, both in the context of the attraction of ellipsoids. It is one of Maxwell's four equations, which forms the basis of classical electrodynamics. Gauss's law can be used to derive Coulomb's law, and vice versa.

**Qualitative description**

In words, Gauss's law states that

*The net electric flux through any hypothetical closed surface is equal to 1/ε₀ times the net electric charge enclosed within that closed surface.*

Gauss's law has a close mathematical similarity with a number of laws in other areas of physics, such as Gauss's law for magnetism and Gauss's law for gravity. In fact, any inverse-square law can be formulated in a way similar to Gauss's law: for example, Gauss's law itself is essentially equivalent to the inverse-square Coulomb's law, and Gauss's law for gravity is essentially equivalent to the inverse-square Newton's law of gravity.

The law can be expressed mathematically using vector calculus in integral form and differential form; both are equivalent since they are related by the divergence theorem, also called Gauss's theorem. Each of these forms in turn can also be expressed two ways: In terms of a relation between the electric field E and the total electric charge, or in terms of the electric displacement field D and the free electric charge.
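For the symmetric case of a point charge, the integral form can be checked directly: the flux through a concentric sphere is q/ε₀ regardless of the sphere's radius.

```python
import math

EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

def flux_through_sphere(q, r):
    """Flux of E from a point charge q through a concentric sphere of radius r.
    E(r)*area = q/(4*pi*eps0*r**2) * 4*pi*r**2, which reduces to q/eps0."""
    E = q / (4.0 * math.pi * EPS0 * r**2)
    return E * 4.0 * math.pi * r**2

q = 1.0e-9  # 1 nC
# Identical flux for any radius, as Gauss's law requires.
print(flux_through_sphere(q, 0.1), flux_through_sphere(q, 10.0))
```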

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

**Jai Ganesh****Administrator**- Registered: 2005-06-28
- Posts: 48,124

35) **Fermi–Dirac statistics**

Fermi–Dirac statistics (F–D statistics) is a type of quantum statistics that applies to the physics of a system consisting of many non-interacting, identical particles that obey the Pauli exclusion principle. A result is the Fermi–Dirac distribution of particles over energy states. It is named after Enrico Fermi and Paul Dirac, each of whom derived the distribution independently in 1926 (although Fermi derived it before Dirac). Fermi–Dirac statistics is a part of the field of statistical mechanics and uses the principles of quantum mechanics.

F–D statistics applies to identical and indistinguishable particles with half-integer spin (1/2, 3/2, etc.), called fermions, in thermodynamic equilibrium. For the case of negligible interaction between particles, the system can be described in terms of single-particle energy states. A result is the F–D distribution of particles over these states where no two particles can occupy the same state, which has a considerable effect on the properties of the system. F–D statistics is most commonly applied to electrons, a type of fermion with spin 1/2.

A counterpart to F–D statistics is Bose–Einstein statistics (B–E statistics), which applies to identical and indistinguishable particles with integer spin (0, 1, 2, etc.) called bosons. In classical physics, Maxwell–Boltzmann statistics (M–B statistics) is used to describe particles that are identical and treated as distinguishable. For both B–E and M–B statistics, more than one particle can occupy the same state, unlike F–D statistics.
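The resulting Fermi–Dirac distribution can be sketched directly (the energies in eV and kT ≈ 0.025 eV, roughly room temperature, are illustrative values):

```python
import math

def fermi_dirac(E, mu, kT):
    """Mean occupancy of a single-particle state at energy E:
    f(E) = 1 / (exp((E - mu)/kT) + 1), always between 0 and 1."""
    return 1.0 / (math.exp((E - mu) / kT) + 1.0)

# At E = mu the occupancy is exactly 1/2, at any temperature.
print(fermi_dirac(1.0, 1.0, 0.025))
# Well below mu, states are nearly full; well above, nearly empty.
print(fermi_dirac(0.5, 1.0, 0.025), fermi_dirac(1.5, 1.0, 0.025))
```

The cap at one particle per state is the Pauli exclusion principle at work; Bose–Einstein statistics, by contrast, allows occupancies above one.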

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

**Jai Ganesh****Administrator**- Registered: 2005-06-28
- Posts: 48,124

36) **Bell's theorem**

Bell's theorem is a term encompassing a number of closely related results in physics, all of which determine that quantum mechanics is incompatible with local hidden-variable theories given some basic assumptions about the nature of measurement. "Local" here refers to the principle of locality, the idea that a particle can only be influenced by its immediate surroundings, and that interactions mediated by physical fields can only occur at speeds no greater than the speed of light. "Hidden variables" are hypothetical properties possessed by quantum particles, properties that are undetectable but still affect the outcome of experiments. In the words of physicist John Stewart Bell, for whom this family of results is named, "If [a hidden-variable theory] is local it will not agree with quantum mechanics, and if it agrees with quantum mechanics it will not be local."

The term is broadly applied to a number of different derivations, the first of which was introduced by John Stewart Bell in a 1964 paper titled "On the Einstein Podolsky Rosen Paradox". Bell's paper was a response to a 1935 thought experiment that Albert Einstein, Boris Podolsky and Nathan Rosen proposed, arguing that quantum physics is an "incomplete" theory. By 1935, it was already recognized that the predictions of quantum physics are probabilistic. Einstein, Podolsky and Rosen presented a scenario that involves preparing a pair of particles such that the quantum state of the pair is entangled, and then separating the particles to an arbitrarily large distance. The experimenter has a choice of possible measurements that can be performed on one of the particles. When they choose a measurement and obtain a result, the quantum state of the other particle apparently collapses instantaneously into a new state depending upon that result, no matter how far away the other particle is. This suggests that either the measurement of the first particle somehow also interacted with the second particle at faster than the speed of light, or that the entangled particles had some unmeasured property which pre-determined their final quantum states before they were separated. Therefore, assuming locality, quantum mechanics must be incomplete, as it cannot give a complete description of the particle's true physical characteristics. In other words, quantum particles, like electrons and photons, must carry some property or attributes not included in quantum theory, and the uncertainties in quantum theory's predictions would then be due to ignorance or unknowability of these properties, later termed "hidden variables".

Bell carried the analysis of quantum entanglement much further. He deduced that if measurements are performed independently on the two separated particles of an entangled pair, then the assumption that the outcomes depend upon hidden variables within each half implies a mathematical constraint on how the outcomes on the two measurements are correlated. This constraint would later be named the Bell inequality. Bell then showed that quantum physics predicts correlations that violate this inequality. Consequently, the only way that hidden variables could explain the predictions of quantum physics is if they are "nonlocal", which is to say that somehow the two particles were able to interact instantaneously no matter how widely the two particles are separated.

Multiple variations on Bell's theorem were put forward in the following years, introducing other closely related conditions generally known as Bell (or "Bell-type") inequalities. The first rudimentary experiment designed to test Bell's theorem was performed in 1972 by John Clauser and Stuart Freedman. More advanced experiments, known collectively as Bell tests, have been performed many times since. Often, these experiments have had the goal of "closing loopholes", that is, ameliorating problems of experimental design or set-up that could in principle affect the validity of the findings of earlier Bell tests. To date, Bell tests have consistently found that physical systems obey quantum mechanics and violate Bell inequalities, which is to say that the results of these experiments are incompatible with any local hidden-variable theory.

The exact nature of the assumptions required to prove a Bell-type constraint on correlations has been debated by physicists and by philosophers. While the significance of Bell's theorem is not in doubt, its full implications for the interpretation of quantum mechanics remain unresolved.

**Additional Information**

According to Bell, no physical theory of local hidden variables can reproduce all of the predictions of quantum mechanics.

The physicist John Stewart Bell proposed this theorem, and it was later named after him. "Hidden variables" are hypothetical microscopic properties carried inside a particle.

With existing instruments it is very difficult to observe such variables directly and study their behavior.

The physicist Heisenberg stated the uncertainty principle, according to which such variables have no definite existence outside the context of observation.

After EPR, quantum mechanics was left in an unsatisfactory position: the EPR argument held that quantum mechanics was incomplete, in the sense that it failed to account for some elements of physical reality, or else that it violated the principle of the finite propagation speed of physical effects.

**Bell's Theorem Formula**

John Stewart Bell developed a formula to test such subatomic phenomena.

The formula is:

P (X = Y) + P (Y = Z) + P (Z = X) ≥ 1

Here,

X, Y, and Z = Photon measurement variables

Mathematically,

P (X = Y) is the probability that the measurements X and Y give the same result.
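The inequality can be checked exhaustively for deterministic hidden variables: with only two possible outcomes, three variables can never disagree pairwise all at once, so every hidden-variable assignment satisfies it. (This brute-force check is a sketch of that counting argument, not Bell's original derivation.)

```python
from itertools import product

def min_pairwise_agreements():
    """Smallest number of agreeing pairs over all assignments
    of X, Y, Z taking values in {0, 1}."""
    return min((x == y) + (y == z) + (z == x)
               for x, y, z in product((0, 1), repeat=3))

# At least one pair always agrees, so P(X=Y) + P(Y=Z) + P(Z=X) >= 1
# for any probability mixture of such deterministic assignments.
print(min_pairwise_agreements())
```

Quantum mechanics predicts measurement settings for which the left-hand side falls below 1, which is how experiments rule out local hidden variables.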

**Bell’s Theorem Experiment**

Two observers, commonly known as Alice and Bob, take part in the EPR thought experiment. They perform independent measurements of spin on a pair of electrons, prepared at a source in a special state known as a spin-singlet state.

Bell found a definite consequence of EPR: if Alice measures the spin in the x-direction, Bob's measurement in the x-direction is then predicted with certainty, whereas immediately before Alice's measurement, Bob's result was determined only statistically.

From this it follows that either the spin in the x-direction is not an element of physical reality, or the effects of Alice's measurement pass to Bob instantaneously.

According to Bell, entangled particles would then appear to communicate with each other faster than light. However, as far as scientists know, nothing travels faster than light.

The proof of Bell's theorem proceeds by deriving an inequality that any local hidden-variable theory must satisfy, and showing that entangled particles violate it.

**What is Bell's Inequality?**

We can illustrate Bell's inequality with the help of quantum mechanics, by considering the behavior of electrons inside a magnetic field.

When a beam of electrons passes through the magnetic field, it splits apart: half of the electrons are deflected to the right, and the other half to the left.

The electrons deflected to the right are then sent through another magnetic field, oriented perpendicular to the first.

They again separate in a different way: some are deflected up and some down. This randomness of electron behavior is what Bell's theorem constrains.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

**Jai Ganesh****Administrator**- Registered: 2005-06-28
- Posts: 48,124

37) **Huygens–Fresnel principle**

The Huygens–Fresnel principle (named after Dutch physicist Christiaan Huygens and French physicist Augustin-Jean Fresnel) states that every point on a wavefront is itself the source of spherical wavelets, and the secondary wavelets emanating from different points mutually interfere. The sum of these spherical wavelets forms a new wavefront. As such, the Huygens-Fresnel principle is a method of analysis applied to problems of luminous wave propagation both in the far-field limit and in near-field diffraction as well as reflection.

**History**

In 1678, Huygens proposed that every point reached by a luminous disturbance becomes a source of a spherical wave; the sum of these secondary waves determines the form of the wave at any subsequent time. He assumed that the secondary waves travelled only in the "forward" direction, and the theory does not explain why this is the case. He was able to provide a qualitative explanation of linear and spherical wave propagation, and to derive the laws of reflection and refraction using this principle, but could not explain the deviations from rectilinear propagation that occur when light encounters edges, apertures and screens, commonly known as diffraction effects. This discrepancy was finally resolved by David A. B. Miller in 1991. The resolution is that the source is a dipole (not the monopole assumed by Huygens), which cancels in the reflected direction.

In 1818, Fresnel showed that Huygens's principle, together with his own principle of interference could explain both the rectilinear propagation of light and also diffraction effects. To obtain agreement with experimental results, he had to include additional arbitrary assumptions about the phase and amplitude of the secondary waves, and also an obliquity factor. These assumptions have no obvious physical foundation but led to predictions that agreed with many experimental observations, including the Poisson spot.

Poisson was a member of the French Academy, which reviewed Fresnel's work. He used Fresnel's theory to predict that a bright spot ought to appear in the center of the shadow of a small disc, and deduced from this that the theory was incorrect. However, Arago, another member of the committee, performed the experiment and showed that the prediction was correct. (Lisle had observed this fifty years earlier.) This was one of the investigations that led to the victory of the wave theory of light over the then-predominant corpuscular theory.

In antenna theory and engineering, the reformulation of the Huygens–Fresnel principle for radiating current sources is known as the surface equivalence principle.

**Huygens' principle as a microscopic model**

The Huygens–Fresnel principle provides a reasonable basis for understanding and predicting the classical wave propagation of light. However, there are limitations to the principle, namely the same approximations done for deriving the Kirchhoff's diffraction formula and the approximations of near field due to Fresnel. These can be summarized in the fact that the wavelength of light is much smaller than the dimensions of any optical components encountered.

Kirchhoff's diffraction formula provides a rigorous mathematical foundation for diffraction, based on the wave equation. The arbitrary assumptions made by Fresnel to arrive at the Huygens–Fresnel equation emerge automatically from the mathematics in this derivation.

A simple example of the operation of the principle can be seen when an open doorway connects two rooms and a sound is produced in a remote corner of one of them. A person in the other room will hear the sound as if it originated at the doorway. As far as the second room is concerned, the vibrating air in the doorway is the source of the sound.

**Modern physics interpretations**

Not all experts agree that the Huygens' principle is an accurate microscopic representation of reality. For instance, Melvin Schwartz argued that "Huygens' principle actually does give the right answer but for the wrong reasons".

This can be reflected in the following facts:

* The microscopic mechanism of photon creation, and of emission in general, is essentially the acceleration of electrons.

* The original analysis of Huygens included amplitudes only. It includes neither phases nor waves propagating at different speeds (due to diffraction within continuous media), and therefore does not take into account interference.

* The Huygens analysis also does not include polarization of light, which implies a vector potential, whereas sound waves can be described with a scalar potential, and there is no unique and natural translation between the two.

* In the Huygens description, there is no explanation of why we choose only the forward-going (retarded wave or forward envelope of wave fronts) versus the backward-propagating advanced wave (backward envelope).

* In the Fresnel approximation there is a concept of non-local behavior, due to the sum of spherical waves with different phases coming from different points of the wave front; non-local theories are the subject of many debates (e.g., not being Lorentz covariant) and of active research.

* The Fresnel approximation can be interpreted in a quantum probabilistic manner, but it is unclear whether this sum of states (i.e., wavelets on the wavefront) is a complete list of physically meaningful states or rather an approximation on a generic basis, as in the linear combination of atomic orbitals (LCAO) method.

Huygens' principle is essentially compatible with quantum field theory in the far-field approximation, considering effective fields in the center of scattering and small perturbations, in the same sense that quantum optics is compatible with classical optics; other interpretations are the subject of debate and active research.

The Feynman model, in which every point on an imaginary wave front as large as the room generates a wavelet, should also be interpreted within these approximations and in a probabilistic context; in this context remote points can contribute only minimally to the overall probability amplitude.

Quantum field theory does not include any microscopic model for photon creation and the concept of single photon is also put under scrutiny on a theoretical level.

**Mathematical expression of the principle**

Consider the case of a point source located at a point P₀, vibrating at a frequency f. The disturbance may be described by a complex variable U₀ known as the complex amplitude. It produces a spherical wave with wavelength λ and wavenumber k = 2π/λ. Within a constant of proportionality, the complex amplitude of the primary wave at the point Q located at a distance r₀ from P₀ is:

U(r₀) = (U₀ e^(i k r₀))/r₀

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

**Jai Ganesh****Administrator**- Registered: 2005-06-28
- Posts: 48,124

38) **Brewster’s Angle**

Brewster's angle (also known as the polarization angle) is an angle of incidence at which light with a particular polarization is perfectly transmitted through a transparent dielectric surface, with no reflection. When unpolarized light is incident at this angle, the light that is reflected from the surface is therefore perfectly polarized. This special angle of incidence is named after the Scottish physicist Sir David Brewster (1781–1868).

**Explanation**

When light encounters a boundary between two media with different refractive indices, some of it is usually reflected as shown in the figure above. The fraction that is reflected is described by the Fresnel equations, and depends on the incoming light's polarization and angle of incidence.

The Fresnel equations predict that light with the p polarization (electric field polarized in the same plane as the incident ray and the surface normal at the point of incidence) will not be reflected if the angle of incidence is

θB = arctan(n2/n1),

where n1 is the refractive index of the initial medium through which the light propagates (the "incident medium"), and n2 is the index of the other medium. This equation is known as Brewster's law, and the angle defined by it is Brewster's angle.

The physical mechanism for this can be qualitatively understood from the manner in which electric dipoles in the media respond to p-polarized light. One can imagine that light incident on the surface is absorbed, and then re-radiated by oscillating electric dipoles at the interface between the two media. The polarization of freely propagating light is always perpendicular to the direction in which the light is travelling. The dipoles that produce the transmitted (refracted) light oscillate in the polarization direction of that light. These same oscillating dipoles also generate the reflected light. However, dipoles do not radiate any energy in the direction of the dipole moment. If the refracted light is p-polarized and propagates exactly perpendicular to the direction in which the light is predicted to be specularly reflected, the dipoles point along the specular reflection direction and therefore no light can be reflected.

With simple geometry this condition can be expressed as

θ1 + θ2 = 90°,

where θ1 is the angle of reflection (or incidence) and θ2 is the angle of refraction. Using Snell's law,

n1 sin θ1 = n2 sin θ2,

one can calculate the incident angle θ1 = θB at which no light is reflected:

n1 sin θB = n2 sin(90° − θB) = n2 cos θB.

Solving for θB gives

θB = arctan(n2/n1).

For a glass medium (n2 ≈ 1.5) in air (n1 ≈ 1), Brewster's angle for visible light is approximately 56°, while for an air-water interface (n2 ≈ 1.33), it is approximately 53°. Since the refractive index for a given medium changes depending on the wavelength of light, Brewster's angle will also vary with wavelength.
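These two angles can be checked in a few lines of Python (a minimal sketch; the refractive indices are the approximate values quoted above):

```python
import math

def brewster_angle(n1, n2):
    """Brewster's angle theta_B = arctan(n2/n1), in degrees."""
    return math.degrees(math.atan2(n2, n1))

# Air (n1 ~ 1) to glass (n2 ~ 1.5) and air to water (n2 ~ 1.33).
print(round(brewster_angle(1.0, 1.5)))    # -> 56
print(round(brewster_angle(1.0, 1.33)))   # -> 53
```

Because arctan is monotonic, a larger index contrast always means a steeper Brewster angle.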

The phenomenon of light being polarized by reflection from a surface at a particular angle was first observed by Étienne-Louis Malus in 1808. He attempted to relate the polarizing angle to the refractive index of the material, but was frustrated by the inconsistent quality of glasses available at that time. In 1815, Brewster experimented with higher-quality materials and showed that this angle was a function of the refractive index, defining Brewster's law.

Brewster's angle is often referred to as the "polarizing angle", because light that reflects from a surface at this angle is entirely polarized perpendicular to the plane of incidence ("s-polarized"). A glass plate or a stack of plates placed at Brewster's angle in a light beam can, thus, be used as a polarizer. The concept of a polarizing angle can be extended to the concept of a Brewster wavenumber to cover planar interfaces between two linear bianisotropic materials. In the case of reflection at Brewster's angle, the reflected and refracted rays are mutually perpendicular.

For magnetic materials, Brewster's angle can exist for only one of the incident wave polarizations, as determined by the relative strengths of the dielectric permittivity and magnetic permeability. This has implications for the existence of generalized Brewster angles for dielectric metasurfaces.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

**Jai Ganesh****Administrator**- Registered: 2005-06-28
- Posts: 48,124

39) **Carnot's Theorem**

In thermodynamics, Carnot's theorem, developed in 1824 by Nicolas Léonard Sadi Carnot, also called Carnot's rule, is a principle that specifies limits on the maximum efficiency that any heat engine can obtain.

Carnot's theorem states that all heat engines operating between the same two thermal or heat reservoirs can't have efficiencies greater than a reversible heat engine operating between the same reservoirs. A corollary of this theorem is that every reversible heat engine operating between a pair of heat reservoirs is equally efficient, regardless of the working substance employed or the operation details. Since a Carnot heat engine is also a reversible engine, the efficiency of all the reversible heat engines is determined as the efficiency of the Carnot heat engine that depends solely on the temperatures of its hot and cold reservoirs.

The maximum efficiency (i.e., the Carnot heat engine efficiency) of a heat engine operating between hot and cold reservoirs, at temperatures T_H and T_C respectively, is the ratio of the temperature difference between the reservoirs to the hot reservoir temperature, expressed in the equation

η_max = (T_H − T_C) / T_H = 1 − T_C/T_H,

where T_H and T_C are the absolute temperatures of the hot and cold reservoirs, respectively, and the efficiency η is the ratio of the work done by the engine (to the surroundings) to the heat drawn out of the hot reservoir (to the engine). η_max is greater than zero if and only if there is a temperature difference between the two thermal reservoirs. Since η_max is the upper limit of all reversible and irreversible heat engine efficiencies, it is concluded that work from a heat engine can be produced if and only if there is a temperature difference between two thermal reservoirs connected to the engine.

Carnot's theorem is a consequence of the second law of thermodynamics. Historically, it was based on contemporary caloric theory, and preceded the establishment of the second law.
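As a quick numerical sketch of the Carnot limit η_max = 1 − T_C/T_H (the 500 K and 300 K reservoir temperatures are invented for the example):

```python
def carnot_efficiency(t_hot, t_cold):
    """Maximum (Carnot) efficiency 1 - T_C/T_H for absolute temperatures."""
    if t_cold <= 0 or t_hot <= t_cold:
        raise ValueError("require 0 < T_C < T_H (absolute temperatures)")
    return 1.0 - t_cold / t_hot

# An engine between a 500 K hot reservoir and a 300 K cold reservoir
# can convert at most 40% of the absorbed heat into work.
print(carnot_efficiency(500.0, 300.0))   # -> 0.4
```

The guard clause reflects the theorem's content: with no temperature difference (T_H = T_C) no work can be extracted, and temperatures must be absolute (kelvins), not Celsius.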

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

**Jai Ganesh****Administrator**- Registered: 2005-06-28
- Posts: 48,124

40) **Fick's laws of diffusion**

Fick's laws of diffusion describe diffusion and were derived by Adolf Fick in 1855. They can be used to solve for the diffusion coefficient, D. Fick's first law can be used to derive his second law, which in turn is identical to the diffusion equation.

A diffusion process that obeys Fick's laws is called normal or Fickian diffusion; otherwise, it is called anomalous diffusion or non-Fickian diffusion.

**History**

In 1855, physiologist Adolf Fick first reported his now well-known laws governing the transport of mass through diffusive means. Fick's work was inspired by the earlier experiments of Thomas Graham, which fell short of proposing the fundamental laws for which Fick would become famous. Fick's law is analogous to the relationships discovered at the same epoch by other eminent scientists: Darcy's law (hydraulic flow), Ohm's law (charge transport), and Fourier's Law (heat transport).

Fick's experiments (modeled on Graham's) dealt with measuring the concentrations and fluxes of salt, diffusing between two reservoirs through tubes of water. It is notable that Fick's work primarily concerned diffusion in fluids, because at the time, diffusion in solids was not considered generally possible. Today, Fick's Laws form the core of our understanding of diffusion in solids, liquids, and gases (in the absence of bulk fluid motion in the latter two cases). When a diffusion process does not follow Fick's laws (which happens in cases of diffusion through porous media and diffusion of swelling penetrants, among others), it is referred to as non-Fickian.
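Fick's second law, ∂φ/∂t = D ∂²φ/∂x² (stated here for context; the post above does not write it out), can be integrated with a simple explicit finite-difference scheme. The grid size, time step, and diffusion coefficient below are made-up example values:

```python
def diffuse_1d(phi, d_coeff, dx, dt, steps):
    """Explicit (FTCS) integration of Fick's second law in 1-D:
    dphi/dt = D * d2phi/dx2, with boundaries held at zero concentration.
    The scheme is stable when r = D*dt/dx**2 <= 0.5."""
    r = d_coeff * dt / dx ** 2
    assert r <= 0.5, "time step too large for stability"
    for _ in range(steps):
        new = phi[:]  # endpoints stay fixed
        for i in range(1, len(phi) - 1):
            new[i] = phi[i] + r * (phi[i + 1] - 2 * phi[i] + phi[i - 1])
        phi = new
    return phi

# An initial spike of concentration spreads out over time: the peak
# drops while neighbouring cells fill in.
phi0 = [0.0] * 51
phi0[25] = 1.0
phi = diffuse_1d(phi0, d_coeff=1.0, dx=1.0, dt=0.25, steps=100)
print(phi[25] < 1.0, phi[20] > 0.0)   # -> True True
```

Diffusion processes that violate this picture (e.g. in porous media or with swelling penetrants, as noted above) would need a different, non-Fickian model.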

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

**Jai Ganesh****Administrator**- Registered: 2005-06-28
- Posts: 48,124

41) **Moseley's Law**

Moseley's law is an empirical law concerning the characteristic x-rays emitted by atoms. The law was discovered and published by the English physicist Henry Moseley in 1913–1914. Until Moseley's work, "atomic number" was merely an element's place in the periodic table and was not known to be associated with any measurable physical quantity. In brief, the law states that the square root of the frequency of the emitted x-ray is approximately proportional to the atomic number.

**History**

The historic periodic table was roughly ordered by increasing atomic weight, but in a few famous cases the physical properties of two elements suggested that the heavier ought to precede the lighter. An example is cobalt, with an atomic weight of 58.9, which precedes the lighter nickel, with an atomic weight of 58.7.

Henry Moseley and other physicists used x-ray diffraction to study the elements, and the results of their experiments led to organizing the periodic table by proton count.

**Apparatus**

Since the spectral emissions for the lighter elements would be in the soft X-ray range (absorbed by air), the spectrometry apparatus had to be enclosed inside a vacuum. Details of the experimental setup are documented in the journal articles "The High-Frequency Spectra of the Elements" Part I and Part II.

**Results**

Moseley found that the Kα lines (in Siegbahn notation) were indeed related to the atomic number, Z.

Following Bohr's lead, Moseley found that for the spectral lines, this relationship could be approximated by a simple formula, later called Moseley's law:

ν = A·(Z − b)²

where:

* ν is the frequency of the observed x-ray emission line
* A and b are constants that depend on the type of line (that is, K, L, etc. in x-ray notation)
* A = (3/4)·(Rydberg frequency) and b = 1 for Kα lines, and A = (5/36)·(Rydberg frequency) and b = 7.4 for Lα lines.
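As a sanity check of the Kα relation ν = (3/4)·R·(Z − 1)² (a sketch; the choice of copper, Z = 29, and the constant values are assumptions of this example, not from the post):

```python
RYDBERG_FREQ = 3.2898e15   # Rydberg frequency, Hz
PLANCK_EV = 4.1357e-15     # Planck constant, eV*s

def k_alpha_frequency(z):
    """Moseley's law for K-alpha lines: nu = (3/4) * R * (Z - 1)**2."""
    return 0.75 * RYDBERG_FREQ * (z - 1) ** 2

def k_alpha_energy_kev(z):
    """Photon energy E = h * nu, converted to keV."""
    return PLANCK_EV * k_alpha_frequency(z) / 1000.0

# Copper (Z = 29): the formula predicts roughly 8.0 keV, close to the
# measured Cu K-alpha line near 8.05 keV.
print(round(k_alpha_energy_kev(29), 1))   # -> 8.0
```

The (Z − 1) screening term is what made the frequencies line up with proton count rather than atomic weight.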

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

**Jai Ganesh****Administrator**- Registered: 2005-06-28
- Posts: 48,124

42) **Casimir effect**

In quantum field theory, the Casimir effect is a physical force acting on the macroscopic boundaries of a confined space which arises from the quantum fluctuations of the field. It is named after the Dutch physicist Hendrik Casimir, who predicted the effect for electromagnetic systems in 1948.

In the same year, Casimir together with Dirk Polder described a similar effect experienced by a neutral atom in the vicinity of a macroscopic interface, which is referred to as the Casimir–Polder force. Their result is a generalization of the London–van der Waals force and includes retardation due to the finite speed of light. Since the fundamental principles leading to the London–van der Waals force, the Casimir force, and the Casimir–Polder force can be formulated on the same footing, the distinction in nomenclature nowadays mostly serves a historical purpose and usually refers to the different physical setups.

It was not until 1997 that a direct experiment by S. Lamoreaux quantitatively measured the Casimir force to within 5% of the value predicted by the theory.

The Casimir effect can be understood by the idea that the presence of macroscopic material interfaces, such as conducting metals and dielectrics, alters the vacuum expectation value of the energy of the second-quantized electromagnetic field. Since the value of this energy depends on the shapes and positions of the materials, the Casimir effect manifests itself as a force between such objects.

Any medium supporting oscillations has an analogue of the Casimir effect. For example, beads on a string as well as plates submerged in turbulent water or gas illustrate the Casimir force.

In modern theoretical physics, the Casimir effect plays an important role in the chiral bag model of the nucleon; in applied physics it is significant in some aspects of emerging microtechnologies and nanotechnologies.

**Physical properties**

The typical example is of two uncharged conductive plates in a vacuum, placed a few nanometers apart. In a classical description, the lack of an external field means that there is no field between the plates, and no force would be measured between them. When this field is instead studied using the quantum electrodynamic vacuum, it is seen that the plates do affect the virtual photons which constitute the field, and generate a net force – either an attraction or a repulsion depending on the specific arrangement of the two plates. Although the Casimir effect can be expressed in terms of virtual particles interacting with the objects, it is best described and more easily calculated in terms of the zero-point energy of a quantized field in the intervening space between the objects. This force has been measured and is a striking example of an effect captured formally by second quantization.

The treatment of boundary conditions in these calculations has led to some controversy. In fact, "Casimir's original goal was to compute the van der Waals force between polarizable molecules" of the conductive plates. Thus it can be interpreted without any reference to the zero-point energy (vacuum energy) of quantum fields.

Because the strength of the force falls off rapidly with distance, it is measurable only when the distance between the objects is extremely small. On a submicron scale, this force becomes so strong that it becomes the dominant force between uncharged conductors. In fact, at separations of 10 nm – about 100 times the typical size of an atom – the Casimir effect produces the equivalent of about 1 atmosphere of pressure (the precise value depending on surface geometry and other factors).
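The 10 nm figure is easy to verify with the standard ideal-parallel-plate result P = π²ħc/(240 d⁴) (this formula is not quoted in the post above; it is the textbook expression for perfectly conducting plates):

```python
import math

HBAR = 1.054571817e-34   # reduced Planck constant, J*s
C = 2.99792458e8         # speed of light, m/s
ATM = 101325.0           # standard atmosphere, Pa

def casimir_pressure(d):
    """Attractive pressure between ideal parallel plates separated by d:
    P = pi**2 * hbar * c / (240 * d**4)."""
    return math.pi ** 2 * HBAR * C / (240 * d ** 4)

# At a 10 nm separation the pressure is on the order of one atmosphere.
print(round(casimir_pressure(10e-9) / ATM, 1))   # -> 1.3
```

The d⁻⁴ scaling is why the force is negligible at everyday separations yet dominant between uncharged conductors on the submicron scale.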

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline