Gist
Collagen is the most abundant protein in the body. Its fiber-like structure is used to make connective tissue. As the name implies, this type of tissue connects other tissues and is a major component of bone, skin, muscles, tendons, and cartilage.
Summary
Collagen is any of a group of proteins that are components of whitish, rather inelastic fibres of great tensile strength present in tendon and ligament and in the connective tissue layer of the skin—dermis—and in dentin and cartilage. Collagenous fibres occur in bundles up to several hundred microns wide, and the individual fibres can be separated into fine fibrils; the fibrils, furthermore, consist of even finer filaments with a periodic banded structure.
Collagen is a scleroprotein, being one of a family of proteins marked by low solubility in water. Collagen is especially rich in the amino acid glycine, and it is the only protein known to contain a substantial proportion of hydroxyproline. Upon exposure to boiling water, collagen is converted to gelatin.
Details
Collagen is a type of protein. Certain foods, such as animal skin and ligaments, are rich in collagen. Collagen is also available as a supplement.
Many people hoping to support the health of their skin, joints, and hair pop collagen supplements daily or add collagen powder to their morning coffee, tea, or smoothie.
Even though the use of collagen supplements and other collagen products is on the rise, most people don’t know what collagen actually is or what it does in the body.
This article tells you everything you need to know about collagen, including what it is, what it does in your body, and whether collagen supplements are worth it.
What is collagen and why is it important?
Collagen is a type of protein. In fact, it’s the most abundant structural protein in animals. A structural protein is one that makes up the structure or framework of your cells and tissues.
There are 28 known types of collagen, with type I collagen accounting for 90% of the collagen in the human body.
Collagen is composed mainly of the amino acids glycine, proline, and hydroxyproline. These amino acids form three strands, which make up the triple-helix structure characteristic of collagen.
Collagen is found in connective tissue, skin, tendons, bones, and cartilage. It provides structural support to tissues and plays important roles in cellular processes, including:
* tissue repair
* immune response
* cellular communication
* cellular migration, a process necessary for tissue maintenance
Connective tissue cells called fibroblasts produce and maintain collagen. As people grow older, their collagen becomes fragmented, fibroblast function becomes impaired, and collagen production slows.
These changes, along with the loss of another key structural protein called elastin, lead to signs of aging such as sagging skin and wrinkles.
Collagen uses
Your body naturally produces collagen, and you can consume it through dietary sources such as chicken skin and fish skin as well as collagen supplements.
Oral and topical collagen products like supplements and face creams are popular for treating signs of aging such as wrinkles, loss of skin hydration, and joint pain.
You can buy collagen in powder, capsule, and liquid form.
You can take it as a supplement or add it to beverages — both hot and cold — and foods such as oatmeal, yogurt, and energy balls.
Healthcare professionals also use collagen and collagen-based materials in the medical field, including in treating wounds, burns, and diabetic ulcers.
Additionally, cosmetics companies use collagen in products like moisturizers and serums because of its moisturizing and humectant properties.
SUMMARY
Your body makes collagen naturally. Collagen is found in connective tissue, skin, tendon, bone, and cartilage and has many functions. It’s also present in some foods, and you can take it as a supplement.
What causes collagen loss?
As you age, your collagen production naturally declines. Additionally, collagen becomes fragmented and more loosely distributed.
These changes lead to the characteristic signs of aging, such as wrinkles and dry, sagging skin. The integrity of the collagen found in the skeletal system decreases with age as well, leading to reductions in bone strength.
While collagen loss and damage as you age are inevitable, certain dietary and lifestyle factors can accelerate this process.
For example, smoking cigarettes is known to degrade collagen and cause skin aging, wrinkles, and loss of elasticity.
Excessive drinking has also been shown to accelerate skin aging by reducing collagen production and damaging skin repair mechanisms.
Additionally, following a diet high in added sugar and ultra-processed foods can lead to premature aging by contributing to a process called glycation, which reduces collagen turnover and interferes with collagen’s ability to interact with surrounding cells and proteins.
Excessive sun exposure degrades collagen production as well, so wearing sunscreen and avoiding excessive sun exposure can help prevent signs of premature skin aging.
SUMMARY
Age-related collagen loss is unavoidable, but dietary and lifestyle factors such as smoking and excessive alcohol intake can speed up this process.
What foods are rich in collagen?
Collagen is found in all animals, and it’s concentrated in some parts of an animal, such as skin and joints.
Here are a few examples of collagen-rich foods:
* bones, skin, and ligaments of animals, such as chicken skin and pig knuckle
* certain types of seafood, such as fish skin and jellyfish
* products made from animal parts such as bones and ligaments, including bone broth
Because your body naturally produces collagen from amino acids, you can support collagen production by ensuring that you’re eating adequate amounts of protein from foods like poultry, fish, beans, and eggs.
In addition to amino acids, your body needs other dietary components for collagen production and maintenance.
For example, vitamin C is necessary for collagen synthesis, so having low or deficient levels of vitamin C can lead to impaired collagen production.
Therefore, consuming plenty of vitamin C-rich foods can help support healthy collagen production. For example, try citrus fruits, peppers, greens, and berries.
What’s more, consuming a diet high in beneficial plant compounds could also help improve skin health by reducing inflammation and protecting against collagen degradation.
SUMMARY
Certain foods, such as animal skin and ligaments, are rich in collagen. A collagen-supportive diet should include protein-rich foods as well as fruits and vegetables, which are rich in vitamin C and other antioxidant and anti-inflammatory compounds.
What are the benefits of taking collagen?
Studies have shown that taking collagen supplements may offer a few benefits.
Potential skin benefits
One of the most popular uses of collagen supplements is to support skin health. Research suggests that taking collagen supplements may improve certain aspects of skin health and appearance.
A review of 19 studies that included 1,125 participants (95% women) between the ages of 20 and 70 found that taking hydrolyzed collagen improved skin hydration, elasticity, and wrinkles compared with placebo treatments.
Hydrolyzed collagen is a common type of collagen used in supplements that is created using a process called hydrolysis. This process breaks down the protein into smaller pieces, making it easier for the body to absorb.
A number of studies have shown that taking collagen supplements may improve skin hydration and elasticity and reduce the appearance of wrinkles.
However, keep in mind that many of these studies were funded by companies that manufacture collagen products, which could have influenced the study results.
The doses of collagen shown to be effective for improving skin health in research studies vary, though most studies have used 2.5–15 grams per day for 8 weeks or longer.
Potential benefits for bones
In addition to improving some aspects of skin health and appearance, collagen supplements may offer a few other benefits.
One study looked at the effects of taking collagen supplements in 102 women in postmenopause who had reduced bone mineral density (BMD).
Those who took 5 grams of collagen peptides per day for 1 year had significant increases in BMD in their spine and femur (the thigh bone) compared with participants who took a placebo.
A follow-up study in 31 of these women found that taking 5 grams of collagen daily for a total of 4 years was associated with a progressive increase in BMD.
The researchers found that participants’ BMD increased by 5.79–8.16% in the spine and by 1.23–4.21% in the femur during the follow-up period.
These findings suggest that taking collagen supplements long-term may help increase bone mineral density in people in postmenopause, who are at a greater risk of developing osteopenia and osteoporosis.
What’s more, one review article concluded that taking oral collagen supplements reduced participants’ symptoms related to osteoarthritis, including stiffness.
Collagen supplements may provide other health benefits as well, such as improving body composition in certain populations when combined with resistance training.
It’s important to note that studies observed these beneficial effects of taking collagen mainly in older women with low bone mineral density.
Therefore, collagen supplements may not have the same effects in other populations, such as men, those who are younger, or those who don’t have low bone mineral density.
Additional Information
Collagen is the main structural protein in the extracellular matrix found in the body's various connective tissues. As the main component of connective tissue, it is the most abundant protein in mammals, making up from 25% to 35% of the whole-body protein content. Collagen consists of amino acids bound together to form a triple helix of elongated fibrils known as a collagen helix. It is mostly found in connective tissue such as cartilage, bones, tendons, ligaments, and skin. Vitamin C is vital for collagen synthesis, and vitamin E improves the production of collagen.
Depending upon the degree of mineralization, collagen tissues may be rigid (bone) or compliant (tendon) or have a gradient from rigid to compliant (cartilage). Collagen is also abundant in corneas, blood vessels, the gut, intervertebral discs, and the dentin in teeth. In muscle tissue, it serves as a major component of the endomysium. Collagen constitutes one to two percent of muscle tissue and accounts for 6% of the weight of the skeletal muscle tissue. The fibroblast is the most common cell that creates collagen. Gelatin, which is used in food and industry, is collagen that has been irreversibly hydrolyzed using heat, basic solutions or weak acids.
Hi AnthonyRBrown;
AnthonyRBrown (post #1) wrote: FERMATS LAST THEOREM DEMONSTRATION! BY, ANTHONY.R.BROWN (1998 SOLVED)
THE INFINITE MATHEMATICAL PATTERN OF CUBE NUMBERS!
BELOW IS MY DEMONSTRATION OF FERMATS LAST THEOREM, IT IS BASED ON THE INFINITE MATHEMATICAL PATTERN OF THE FIRST (10) CUBE NUMBERS AS A GROUP! YOU WILL SEE THAT THE LAST OR SINGLE NUMBER FROM EACH OF THE (10) CUBE NUMBERS REPEATS ITSELF AT THE END OF EACH CUBE NUMBER, IN THE COLUMN GIVEN IN BRACKETS (). THIS PATTERN REPEATS ITSELF NO MATTER HOW LARGE OR SMALL THE CUBE NUMBERS ARE....
AnthonyRBrown (post #2) wrote: Below is a BASIC program to show my FLT Demonstration...
Lines 1-37: Lots of code
Lines 38-50:
PRINT TAB(95); " { FERMATS LAST THEOREM } DEMONSTRATION PROGRAM "
PRINT TAB(95); " BY,Anthony.R.Brown V.01/01/1998 "
PRINT TAB(14); "*******************************************************"
PRINT " THE PROBLEM IS AS FOLLOWS! "
PRINT " are there any whole numbers e.g (x,y,z) cube numbers "
PRINT " where x3 + y3 = z3 NOTICE = (Xn,Yn,Zn) n Must be greater than (2) "
PRINT " an example that does not work is given below "
PRINT " x = 64 cube y = 64 cube Z = 125 cube X + Y = 128 (+ 3) > Z "
PRINT " if you could use zero?? then the answer would be, "
PRINT " x = 0 y = 0 z = 0 simple! X + Y = Z "
Lines 51-345: Lots more code
From what you said in post #2 ("Below is a BASIC program to show my FLT Demonstration...{ FERMATS LAST THEOREM } DEMONSTRATION PROGRAM"), I thought that the program would use the "FERMATS LAST THEOREM DEMONSTRATION!" pattern concepts that you described in post #1 and subsequent posts.
However, it doesn't.
Hmm...
Hi,
The program does; you have to run the "INPUT " ENTER (Y) TO RUN MAKE CUBE NUMBER PROGRAM! "; RUNMAKECUBES" step.
When it starts, it's the 2nd option, which I agree some do not find!
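For anyone who wants to check the repeating last-digit pattern claimed in post #1 without running the full BASIC listing, here is a minimal Python sketch of the same observation (Python is used only for brevity; it mirrors the pattern claim, not the original program). The last digit of n^3 depends only on the last digit of n, so the final digits of consecutive cubes repeat in blocks of ten:

```python
# The last digit of n^3 depends only on the last digit of n, so the
# final digits of ten consecutive cubes form a fixed repeating block:
# 1 8 7 4 5 6 3 2 9 0 (for n ending in 1..9, 0).

for start in (1, 11, 101):
    digits = [str((n ** 3) % 10) for n in range(start, start + 10)]
    print(f"n = {start}..{start + 9}: last digits of n^3 -> {' '.join(digits)}")
```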
Gist
Astronomy is the study of everything in the universe beyond Earth's atmosphere. That includes objects we can see with our naked eyes, like the Sun , the Moon , the planets, and the stars . It also includes objects we can only see with telescopes or other instruments, like faraway galaxies and tiny particles.
Summary
Astronomy is the science that encompasses the study of all extraterrestrial objects and phenomena. Until the invention of the telescope and the discovery of the laws of motion and gravity in the 17th century, astronomy was primarily concerned with noting and predicting the positions of the Sun, Moon, and planets, originally for calendrical and astrological purposes and later for navigational uses and scientific interest. The catalog of objects now studied is much broader and includes, in order of increasing distance, the solar system, the stars that make up the Milky Way Galaxy, and other, more distant galaxies. With the advent of scientific space probes, Earth also has come to be studied as one of the planets, though its more-detailed investigation remains the domain of the Earth sciences.
The scope of astronomy
Since the late 19th century, astronomy has expanded to include astrophysics, the application of physical and chemical knowledge to an understanding of the nature of celestial objects and the physical processes that control their formation, evolution, and emission of radiation. In addition, the gases and dust particles around and between the stars have become the subjects of much research. Study of the nuclear reactions that provide the energy radiated by stars has shown how the diversity of atoms found in nature can be derived from a universe that, following the first few minutes of its existence, consisted only of hydrogen, helium, and a trace of lithium. Concerned with phenomena on the largest scale is cosmology, the study of the evolution of the universe. Astrophysics has transformed cosmology from a purely speculative activity to a modern science capable of predictions that can be tested.
Its great advances notwithstanding, astronomy is still subject to a major constraint: it is inherently an observational rather than an experimental science. Almost all measurements must be performed at great distances from the objects of interest, with no control over such quantities as their temperature, pressure, or chemical composition. There are a few exceptions to this limitation—namely, meteorites (most of which are from the asteroid belt, though some are from the Moon or Mars), rock and soil samples brought back from the Moon, samples of comet and asteroid dust returned by robotic spacecraft, and interplanetary dust particles collected in or above the stratosphere. These can be examined with laboratory techniques to provide information that cannot be obtained in any other way. In the future, space missions may return surface materials from Mars, or other objects, but much of astronomy appears otherwise confined to Earth-based observations augmented by observations from orbiting satellites and long-range space probes and supplemented by theory.
Determining astronomical distances
A central undertaking in astronomy is the determination of distances. Without a knowledge of astronomical distances, the size of an observed object in space would remain nothing more than an angular diameter and the brightness of a star could not be converted into its true radiated power, or luminosity. Astronomical distance measurement began with a knowledge of Earth’s diameter, which provided a base for triangulation. Within the inner solar system, some distances can now be better determined through the timing of radar reflections or, in the case of the Moon, through laser ranging. For the outer planets, triangulation is still used. Beyond the solar system, distances to the closest stars are determined through triangulation, in which the diameter of Earth’s orbit serves as the baseline and shifts in stellar parallax are the measured quantities. Stellar distances are commonly expressed by astronomers in parsecs (pc), kiloparsecs, or megaparsecs. (1 pc = 3.086 × 10^18 cm, or about 3.26 light-years [1.92 × 10^13 miles].) Distances can be measured out to around a kiloparsec by trigonometric parallax (see star: Determining stellar distances). The accuracy of measurements made from Earth’s surface is limited by atmospheric effects, but measurements made from the Hipparcos satellite in the 1990s extended the scale to stars as far as 650 parsecs, with an accuracy of about a thousandth of an arc second. The Gaia satellite is expected to measure stars as far away as 10 kiloparsecs to an accuracy of 20 percent. Less-direct measurements must be used for more-distant stars and for galaxies.
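As a worked illustration of the triangulation described above, the distance in parsecs is simply the reciprocal of the annual parallax angle in arcseconds. The sketch below applies this relation; the Proxima Centauri parallax is an approximate assumed value, not a figure from the text:

```python
# Trigonometric parallax sketch: d (parsecs) = 1 / p (arcseconds),
# with 1 pc = 3.086e18 cm, or about 3.26 light-years, as given above.

def distance_from_parallax(parallax_arcsec: float) -> float:
    """Distance in parsecs from an annual parallax in arcseconds."""
    return 1.0 / parallax_arcsec

PC_IN_LIGHT_YEARS = 3.26

# Proxima Centauri's parallax is roughly 0.77 arcseconds (assumed example).
d_pc = distance_from_parallax(0.77)
print(f"{d_pc:.2f} pc = {d_pc * PC_IN_LIGHT_YEARS:.2f} light-years")
```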
Two general methods for determining galactic distances are described here. In the first, a clearly identifiable type of star is used as a reference standard because its luminosity has been well determined. This requires observation of such stars that are close enough to Earth that their distances and luminosities have been reliably measured. Such a star is termed a “standard candle.” Examples are Cepheid variables, whose brightness varies periodically in well-documented ways, and certain types of supernova explosions that have enormous brilliance and can thus be seen out to very great distances. Once the luminosities of such nearer standard candles have been calibrated, the distance to a farther standard candle can be calculated from its calibrated luminosity and its actual measured intensity.
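The standard-candle step follows from the inverse-square law of light: a source of luminosity L observed at distance d produces flux F = L / (4πd²), so d = sqrt(L / (4πF)). The sketch below inverts that relation; the luminosity and flux figures are invented purely for illustration:

```python
import math

# Standard-candle sketch: given a calibrated luminosity L and a measured
# flux F, the inverse-square law F = L / (4 * pi * d^2) gives the distance.

def candle_distance(luminosity_watts: float, flux_w_per_m2: float) -> float:
    """Distance in metres from calibrated luminosity and measured flux."""
    return math.sqrt(luminosity_watts / (4.0 * math.pi * flux_w_per_m2))

# Illustrative assumed numbers: a candle radiating ~1e30 W seen at 1e-12 W/m^2.
d_m = candle_distance(1e30, 1e-12)
print(f"distance ≈ {d_m:.2e} m ≈ {d_m / 3.086e16:.0f} pc")
```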
The second method for galactic distance measurements makes use of the observation that the distances to galaxies generally correlate with the speeds with which those galaxies are receding from Earth (as determined from the Doppler shift in the wavelengths of their emitted light). This correlation is expressed in the Hubble law: velocity = H × distance, in which H denotes Hubble’s constant, which must be determined from observations of the rate at which the galaxies are receding. There is widespread agreement that H lies between 67 and 73 kilometres per second per megaparsec (km/sec/Mpc). H has been used to determine distances to remote galaxies in which standard candles have not been found.
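Because the Hubble law is a simple proportionality, the distance calculation can be sketched directly; the value of H below is an assumption chosen from inside the quoted 67-73 km/sec/Mpc range:

```python
# Hubble law sketch: velocity = H * distance, so distance = velocity / H.

H = 70.0  # Hubble's constant in km/sec per megaparsec (assumed value)

def hubble_distance(velocity_km_s: float) -> float:
    """Distance in megaparsecs inferred from a recession velocity in km/sec."""
    return velocity_km_s / H

# A galaxy receding at 7,000 km/sec lies at roughly 100 Mpc.
print(f"{hubble_distance(7000.0):.0f} Mpc")
```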
Details
Astronomy is a natural science that studies celestial objects and the phenomena that occur in the cosmos. It uses mathematics, physics, and chemistry in order to explain their origin and their overall evolution. Objects of interest include planets, moons, stars, nebulae, galaxies, meteoroids, asteroids, and comets. Relevant phenomena include supernova explosions, gamma ray bursts, quasars, blazars, pulsars, and cosmic microwave background radiation. More generally, astronomy studies everything that originates beyond Earth's atmosphere. Cosmology is a branch of astronomy that studies the universe as a whole.
Astronomy is one of the oldest natural sciences. The early civilizations in recorded history made methodical observations of the night sky. These include the Egyptians, Babylonians, Greeks, Indians, Chinese, Maya, and many ancient indigenous peoples of the Americas. In the past, astronomy included disciplines as diverse as astrometry, celestial navigation, observational astronomy, and the making of calendars.
Professional astronomy is split into observational and theoretical branches. Observational astronomy is focused on acquiring data from observations of astronomical objects. This data is then analyzed using basic principles of physics. Theoretical astronomy is oriented toward the development of computer or analytical models to describe astronomical objects and phenomena. These two fields complement each other. Theoretical astronomy seeks to explain observational results and observations are used to confirm theoretical results.
Astronomy is one of the few sciences in which amateurs play an active role. This is especially true for the discovery and observation of transient events. Amateur astronomers have helped with many important discoveries, such as finding new comets.
Etymology
Astronomy means "law of the stars" (or "culture of the stars" depending on the translation). Astronomy should not be confused with astrology, the belief system which claims that human affairs are correlated with the positions of celestial objects. Although the two fields share a common origin, they are now entirely distinct.
Use of terms "astronomy" and "astrophysics"
"Astronomy" and "astrophysics" are synonyms. Based on strict dictionary definitions, "astronomy" refers to "the study of objects and matter outside the Earth's atmosphere and of their physical and chemical properties", while "astrophysics" refers to the branch of astronomy dealing with "the behavior, physical properties, and dynamic processes of celestial objects and phenomena". In some cases, as in the introduction of the introductory textbook The Physical Universe by Frank Shu, "astronomy" may be used to describe the qualitative study of the subject, whereas "astrophysics" is used to describe the physics-oriented version of the subject. However, since most modern astronomical research deals with subjects related to physics, modern astronomy could actually be called astrophysics. Some fields, such as astrometry, are purely astronomy rather than also astrophysics. Various departments in which scientists carry out research on this subject may use "astronomy" and "astrophysics", partly depending on whether the department is historically affiliated with a physics department, and many professional astronomers have physics rather than astronomy degrees. Some titles of the leading scientific journals in this field include The Astronomical Journal, The Astrophysical Journal, and Astronomy & Astrophysics.
History
Ancient times
In early historic times, astronomy only consisted of the observation and predictions of the motions of objects visible to the naked eye. In some locations, early cultures assembled massive artifacts that may have had some astronomical purpose. In addition to their ceremonial uses, these observatories could be employed to determine the seasons, an important factor in knowing when to plant crops and in understanding the length of the year.
Before tools such as the telescope were invented, early study of the stars was conducted using the naked eye. As civilizations developed, most notably in Egypt, Mesopotamia, Greece, Persia, India, China, and Central America, astronomical observatories were assembled and ideas on the nature of the Universe began to develop. Most early astronomy consisted of mapping the positions of the stars and planets, a science now referred to as astrometry. From these observations, early ideas about the motions of the planets were formed, and the nature of the Sun, Moon and the Earth in the Universe were explored philosophically. The Earth was believed to be the center of the Universe with the Sun, the Moon and the stars rotating around it. This is known as the geocentric model of the Universe, or the Ptolemaic system, named after Ptolemy.
A particularly important early development was the beginning of mathematical and scientific astronomy, which began among the Babylonians, who laid the foundations for the later astronomical traditions that developed in many other civilizations. The Babylonians discovered that lunar eclipses recurred in a repeating cycle known as a saros.
Following the Babylonians, significant advances in astronomy were made in ancient Greece and the Hellenistic world. Greek astronomy is characterized from the start by seeking a rational, physical explanation for celestial phenomena. In the 3rd century BC, Aristarchus of Samos estimated the size and distance of the Moon and Sun, and he proposed a model of the Solar System where the Earth and planets rotated around the Sun, now called the heliocentric model. In the 2nd century BC, Hipparchus discovered precession, calculated the size and distance of the Moon and invented the earliest known astronomical devices such as the astrolabe. Hipparchus also created a comprehensive catalog of 1020 stars, and most of the constellations of the northern hemisphere derive from Greek astronomy. The Antikythera mechanism (c. 150–80 BC) was an early analog computer designed to calculate the location of the Sun, Moon, and planets for a given date. Technological artifacts of similar complexity did not reappear until the 14th century, when mechanical astronomical clocks appeared in Europe.
Middle Ages
Medieval Europe housed a number of important astronomers. Richard of Wallingford (1292–1336) made major contributions to astronomy and horology, including the invention of the first astronomical clock; the Rectangulus, which allowed for the measurement of angles between planets and other astronomical bodies; and an equatorium called the Albion, which could be used for astronomical calculations such as lunar, solar and planetary longitudes and could predict eclipses. Nicole Oresme (1320–1382) and Jean Buridan (1300–1361) first discussed evidence for the rotation of the Earth; furthermore, Buridan also developed the theory of impetus (predecessor of the modern scientific theory of inertia), which was able to show that planets were capable of motion without the intervention of angels. Georg von Peuerbach (1423–1461) and Regiomontanus (1436–1476) helped make astronomical progress instrumental to Copernicus's development of the heliocentric model decades later.
Astronomy flourished in the Islamic world and other parts of the world. This led to the emergence of the first astronomical observatories in the Muslim world by the early 9th century. In 964, the Andromeda Galaxy, the largest galaxy in the Local Group, was described by the Persian Muslim astronomer Abd al-Rahman al-Sufi in his Book of Fixed Stars. The SN 1006 supernova, the brightest apparent magnitude stellar event in recorded history, was observed by the Egyptian Arabic astronomer Ali ibn Ridwan and Chinese astronomers in 1006. Iranian scholar Al-Biruni observed that, contrary to Ptolemy, the Sun's apogee (highest point in the heavens) was mobile, not fixed. Some of the prominent Islamic (mostly Persian and Arab) astronomers who made significant contributions to the science include Al-Battani, Thebit, Abd al-Rahman al-Sufi, Biruni, Abū Ishāq Ibrāhīm al-Zarqālī, Al-Birjandi, and the astronomers of the Maragheh and Samarkand observatories. Astronomers during that time introduced many Arabic names now used for individual stars.
It is also believed that the ruins at Great Zimbabwe and Timbuktu may have housed astronomical observatories. In post-classical West Africa, astronomers studied the movement of stars and their relation to the seasons, crafting charts of the heavens as well as precise diagrams of orbits of the other planets based on complex mathematical calculations. Songhai historian Mahmud Kati documented a meteor shower in August 1583. Europeans had previously believed that there had been no astronomical observation in sub-Saharan Africa during the pre-colonial Middle Ages, but modern discoveries show otherwise.
For over six centuries (from the recovery of ancient learning during the late Middle Ages into the Enlightenment), the Roman Catholic Church gave more financial and social support to the study of astronomy than probably all other institutions. Among the Church's motives was finding the date for Easter.
Scientific revolution
During the Renaissance, Nicolaus Copernicus proposed a heliocentric model of the solar system. His work was defended by Galileo Galilei and expanded upon by Johannes Kepler. Kepler was the first to devise a system that correctly described the details of the motion of the planets around the Sun. However, Kepler did not succeed in formulating a theory behind the laws he wrote down. It was Isaac Newton, with his invention of celestial dynamics and his law of gravitation, who finally explained the motions of the planets. Newton also developed the reflecting telescope.
Improvements in the size and quality of the telescope led to further discoveries. The English astronomer John Flamsteed catalogued over 3000 stars. More extensive star catalogues were produced by Nicolas Louis de Lacaille. The astronomer William Herschel made a detailed catalog of nebulosity and clusters, and in 1781 discovered the planet Uranus, the first new planet found.
During the 18–19th centuries, the study of the three-body problem by Leonhard Euler, Alexis Claude Clairaut, and Jean le Rond d'Alembert led to more accurate predictions about the motions of the Moon and planets. This work was further refined by Joseph-Louis Lagrange and Pierre Simon Laplace, allowing the masses of the planets and moons to be estimated from their perturbations.
Significant advances in astronomy came about with the introduction of new technology, including the spectroscope and photography. Joseph von Fraunhofer discovered about 600 bands in the spectrum of the Sun in 1814–15, which, in 1859, Gustav Kirchhoff ascribed to the presence of different elements. Stars were proven to be similar to the Earth's own Sun, but with a wide range of temperatures, masses, and sizes.
The existence of the Earth's galaxy, the Milky Way, as its own group of stars was only proved in the 20th century, along with the existence of "external" galaxies. The observed recession of those galaxies led to the discovery of the expansion of the Universe. Theoretical astronomy led to speculations on the existence of objects such as black holes and neutron stars, which have been used to explain such observed phenomena as quasars, pulsars, blazars, and radio galaxies. Physical cosmology made huge advances during the 20th century. In the early 1900s the model of the Big Bang theory was formulated, heavily evidenced by cosmic microwave background radiation, Hubble's law, and the cosmological abundances of elements. Space telescopes have enabled measurements in parts of the electromagnetic spectrum normally blocked or blurred by the atmosphere. In February 2016, it was revealed that the LIGO project had detected evidence of gravitational waves in the previous September.
Observational astronomy
The main source of information about celestial bodies and other objects is visible light, or more generally electromagnetic radiation. Observational astronomy may be categorized according to the corresponding region of the electromagnetic spectrum on which the observations are made. Some parts of the spectrum can be observed from the Earth's surface, while other parts are only observable from either high altitudes or outside the Earth's atmosphere. Specific information on these subfields is given below.
Radio astronomy
Radio astronomy uses radiation with wavelengths greater than approximately one millimeter, outside the visible range. Radio astronomy is different from most other forms of observational astronomy in that the observed radio waves can be treated as waves rather than as discrete photons. Hence, it is relatively easier to measure both the amplitude and phase of radio waves, whereas this is not as easily done at shorter wavelengths.
Although some radio waves are emitted directly by astronomical objects, a product of thermal emission, most of the radio emission that is observed is the result of synchrotron radiation, which is produced when electrons orbit magnetic fields. Additionally, a number of spectral lines produced by interstellar gas, notably the hydrogen spectral line at 21 cm, are observable at radio wavelengths.[9][47]
A wide variety of other objects are observable at radio wavelengths, including supernovae, interstellar gas, pulsars, and active galactic nuclei.
Infrared astronomy
Infrared astronomy is founded on the detection and analysis of infrared radiation, wavelengths longer than red light and outside the range of our vision. The infrared spectrum is useful for studying objects that are too cold to radiate visible light, such as planets, circumstellar disks or nebulae whose light is blocked by dust. The longer wavelengths of infrared can penetrate clouds of dust that block visible light, allowing the observation of young stars embedded in molecular clouds and the cores of galaxies. Observations from the Wide-field Infrared Survey Explorer (WISE) have been particularly effective at unveiling numerous galactic protostars and their host star clusters. With the exception of infrared wavelengths close to visible light, such radiation is heavily absorbed by the atmosphere, or masked, as the atmosphere itself produces significant infrared emission. Consequently, infrared observatories have to be located in high, dry places on Earth or in space. Some molecules radiate strongly in the infrared. This allows the study of the chemistry of space; more specifically it can detect water in comets.
Optical astronomy
Historically, optical astronomy, also called visible light astronomy, is the oldest form of astronomy. Images of observations were originally drawn by hand. In the late 19th century and most of the 20th century, images were made using photographic equipment. Modern images are made using digital detectors, particularly charge-coupled devices (CCDs), and recorded on modern media. Although visible light itself extends from approximately 4000 Å to 7000 Å (400 nm to 700 nm), that same equipment can be used to observe some near-ultraviolet and near-infrared radiation.
Ultraviolet astronomy
Ultraviolet astronomy employs ultraviolet wavelengths between approximately 100 and 3200 Å (10 to 320 nm). Light at those wavelengths is absorbed by the Earth's atmosphere, requiring observations at these wavelengths to be performed from the upper atmosphere or from space. Ultraviolet astronomy is best suited to the study of thermal radiation and spectral emission lines from hot blue stars (OB stars) that are very bright in this wave band. This includes the blue stars in other galaxies, which have been the targets of several ultraviolet surveys. Other objects commonly observed in ultraviolet light include planetary nebulae, supernova remnants, and active galactic nuclei. However, as ultraviolet light is easily absorbed by interstellar dust, an adjustment of ultraviolet measurements is necessary.
X-ray astronomy
X-ray astronomy uses X-ray wavelengths. Typically, X-ray radiation is produced by synchrotron emission (the result of electrons orbiting magnetic field lines), thermal emission from thin gases above 10^7 (10 million) kelvins, and thermal emission from thick gases above 10^7 kelvins. Since X-rays are absorbed by the Earth's atmosphere, all X-ray observations must be performed from high-altitude balloons, rockets, or X-ray astronomy satellites. Notable X-ray sources include X-ray binaries, pulsars, supernova remnants, elliptical galaxies, clusters of galaxies, and active galactic nuclei.
Gamma-ray astronomy
Gamma ray astronomy observes astronomical objects at the shortest wavelengths of the electromagnetic spectrum. Gamma rays may be observed directly by satellites such as the Compton Gamma Ray Observatory or by specialized telescopes called atmospheric Cherenkov telescopes.[47] The Cherenkov telescopes do not detect the gamma rays directly but instead detect the flashes of visible light produced when gamma rays are absorbed by the Earth's atmosphere.
Most gamma-ray emitting sources are actually gamma-ray bursts, objects which only produce gamma radiation for a few milliseconds to thousands of seconds before fading away. Only 10% of gamma-ray sources are non-transient sources. These steady gamma-ray emitters include pulsars, neutron stars, and black hole candidates such as active galactic nuclei.
Fields not based on the electromagnetic spectrum
In addition to electromagnetic radiation, a few other events originating from great distances may be observed from the Earth.
In neutrino astronomy, astronomers use heavily shielded underground facilities such as SAGE, GALLEX, and Kamioka II/III for the detection of neutrinos. The vast majority of the neutrinos streaming through the Earth originate from the Sun, but 24 neutrinos were also detected from supernova 1987A. Cosmic rays, which consist of very high energy particles (atomic nuclei) that can decay or be absorbed when they enter the Earth's atmosphere, result in a cascade of secondary particles which can be detected by current observatories. Some future neutrino detectors may also be sensitive to the particles produced when cosmic rays hit the Earth's atmosphere.
Gravitational-wave astronomy is an emerging field of astronomy that employs gravitational-wave detectors to collect observational data about distant massive objects. A few observatories have been constructed, such as the Laser Interferometer Gravitational-Wave Observatory (LIGO). LIGO made its first detection on 14 September 2015, observing gravitational waves from a binary black hole.[56] A second gravitational wave was detected on 26 December 2015, and additional observations should continue, but gravitational waves require extremely sensitive instruments.
The combination of observations made using electromagnetic radiation, neutrinos or gravitational waves and other complementary information, is known as multi-messenger astronomy.
Astrometry and celestial mechanics
One of the oldest fields in astronomy, and in all of science, is the measurement of the positions of celestial objects. Historically, accurate knowledge of the positions of the Sun, Moon, planets and stars has been essential in celestial navigation (the use of celestial objects to guide navigation) and in the making of calendars.
Careful measurement of the positions of the planets has led to a solid understanding of gravitational perturbations, and an ability to determine past and future positions of the planets with great accuracy, a field known as celestial mechanics. More recently, the tracking of near-Earth objects allows for predictions of close encounters or potential collisions of the Earth with those objects.
The measurement of stellar parallax of nearby stars provides a fundamental baseline in the cosmic distance ladder that is used to measure the scale of the Universe. Parallax measurements of nearby stars provide an absolute baseline for the properties of more distant stars, as their properties can be compared. Measurements of the radial velocity and proper motion of stars allow astronomers to plot the movement of these systems through the Milky Way galaxy. Astrometric results are the basis used to calculate the distribution of speculated dark matter in the galaxy.
During the 1990s, the measurement of the stellar wobble of nearby stars was used to detect large extrasolar planets orbiting those stars.
Additional Information
Astronomy is one of the oldest scientific disciplines that has evolved from the humble beginnings of counting stars and charting constellations with the naked eye to the impressive showcase of humankind's technological capabilities that we see today.
Despite the progress astronomy has made over millennia, astronomers are still working hard to understand the nature of the universe and humankind's place in it. That question has only gotten more complex as our understanding of the universe grew with our expanding technical capabilities.
As the depths of the sky opened in front of our increasingly sophisticated telescopes, and sensitive detectors enabled us to spot the weirdest types of signals, the star-studded sky that our ancestors gazed at turned into a zoo of mind-boggling objects including black holes, white dwarfs, neutron stars and supernovas.
At the same time, the two-dimensional constellations that inspired the imagination of early sky-watchers were reduced to an optical illusion, behind which the swirling of galaxies hurtling through spacetime reveals a story that began with the Big Bang some 13.8 billion years ago.
What is astronomy?
Astronomy uses mathematics, physics and chemistry to study celestial objects and phenomena.
What are the four types of astronomy?
Astronomy cannot be divided solely into four types. It is a broad discipline encompassing many subfields including observational astronomy, theoretical astronomy, planetary science, astrophysics, cosmology and astrobiology.
What do you study in astronomy?
Those who study astronomy explore the structure and origin of the universe including the stars, planets, galaxies and black holes that reside in it. Astronomers aim to answer fundamental questions about our universe through theory and observation.
What's the difference between astrology and astronomy?
Astrology is widely considered to be a pseudoscience that attempts to explain how the position and motion of celestial objects such as planets affect people and events on Earth. Astronomy is the scientific study of the universe using mathematics, physics, and chemistry.
Most of today's citizens of planet Earth live surrounded by the inescapable glow of modern urban lighting and can hardly imagine the awe-inspiring presence of the pristine star-studded sky that illuminated the nights for ancient tribes and early civilizations. We can guess how drawn our ancestors were to that overwhelming sight from the role that sky-watching played in their lives.
Ancient monuments, such as the 5,000-year-old Stonehenge in the U.K., were built to reflect the journey of the sun in the sky, which helped keep track of time and organize life in an age that depended solely on the seasons. Art pieces depicting the moon and stars have been discovered dating back several thousand years, such as the "world's oldest star map," the bronze-age Nebra disk.
Ancient Assyro-Babylonians around 1,000 B.C. systematically observed and recorded periodical motions of celestial bodies, according to the European Space Agency (ESA), and similar records exist also from early China. In fact, according to the University of Oregon, astronomy can be considered the first science as it's the one for which the oldest written records exist.
Ancient Greeks elevated sky-watching to a new level. Aristarchus of Samos made the first (highly inaccurate) attempt to calculate the distance of Earth to the sun and moon, and Hipparchus, sometimes considered the father of empirical astronomy, cataloged the positions of over 800 stars using just the naked eye. He also developed the brightness scale that is still in use today, according to ESA.
How BIG! Can a Hole be?
If we cut a Hole in paper, the Hole can be any shape, as I do not think it's important for this problem?
When cutting the Hole in paper, I imagine we are restricted by what paper is made of? Or maybe not?
So if we have some material that we cut the Hole in, with no restrictions regarding what the material is made of, can the Hole be an infinite size?
Or is there some problem regarding the size the outer part of the Hole can be?
Then we have the problem when we have made the Hole: we have the Hole = H and the outer material = M, and one final variable, the outside of both H and M, which equals O
Which comes to the final Question: can H, M, O all be of Infinite size?
A.R.B
Mathematicians and scientists have always been intrigued by the mysteries behind ‘infinity’. Hilbert’s paradox of the infinite hotel is one such thought experiment that arouses curiosity and intrigues people to this day.
Imagine that your boss asked you to go on an urgent business trip to one of the busiest places in the country. The first thing you would do is book your tickets and look up a hotel with available rooms. Alas! After two hours of searching, you find that all the hotels are full.
Suddenly, an ‘Infinite Hotel’ advertisement comes up, so you visit the website and manage to find a room for yourself, even if the hotel is full.
Now, you might begin to wonder… what is this infinite hotel exactly? How is it possible that, despite having no vacancy, vacant rooms can still be arranged for guests? It’s contradictory to logic!
Infinity Is Vague
The concept of infinity is interesting and requires you to have an open mind while exploring various aspects of it. Mathematics is such that it needs you to give up your pre-assumed notions and dive into the unknown.
Infinity is a vague concept that implies endless possibilities, giving rise to a huge number of paradoxes. It could be appropriate to say that mathematicians have made this concept even more complex by introducing terms like ‘countably infinite’ and ‘uncountably infinite’.
On the one hand, we say that infinite things are unbounded and unfathomable, implying ‘uncountably infinite things’, and on the other hand, we consider a very large collection of things as ‘countably infinite’, which can be counted one at a time, even though the counting won’t come to an end. Isn’t that strange?
The ‘Infinite Hotel Paradox’
The ‘Infinite Hotel Paradox’ is one such thought experiment proposed by David Hilbert in 1924 that explores the infinite nature of numbers and the properties of an infinite set.
Let’s assume that there is a grand hotel called the ‘Infinite Hotel,’ which has a countably infinite number of occupied rooms. The rooms correspond to the natural numbers in the number series. Using common sense, one would conclude that since the hotel is completely occupied, no more guests can be accommodated, but here comes the twist with “infinity”.
Infinity is unbounded, so no matter how big the number is, there is always a bigger number. Thus, if you want a room there, the hotel manager can easily arrange it for you.
The Tricky Solution
There is no last room, but there always exists the next room in this Infinite Hotel. And therein lies the solution to accommodating extra guests into new rooms.
The clever manager can shift each guest to the immediate next room, simultaneously. For example, move the guest from room #1 to room #2, room #2 to room #3, room #3 to room #4, and so on. In this way, the guest in room #n is shifted to room #n+1.
In this way, after shifting every guest, room #1 is empty, where one more guest can be accommodated.
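The room-shifting argument can be mirrored in a few lines of code. The finite Python sketch below stands in for the infinite hotel, showing the n → n+1 mapping on a handful of rooms (with infinitely many rooms, the same map frees room #1 for any fully occupied hotel):

```python
# Finite sketch of the manager's trick: every guest in room n moves to
# room n + 1, freeing room 1 for the new arrival. Rooms 1..5 stand in
# for the hotel's infinitely many rooms.

occupied = {n: f"guest {n}" for n in range(1, 6)}

shifted = {room + 1: guest for room, guest in occupied.items()}
shifted[1] = "new guest"  # room 1 is now free and can be reassigned

for room in sorted(shifted):
    print(room, "->", shifted[room])
```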
Gist
Heavy water is a compound that is made up of oxygen and deuterium, a heavier isotope of hydrogen which is denoted by '2H' or 'D'. Heavy water is also called deuterium oxide and is denoted by the chemical formula D2O.
Summary
Heavy water (D2O) is water composed of deuterium, the hydrogen isotope with a mass double that of ordinary hydrogen, and oxygen. (Ordinary water has a composition represented by H2O.) Thus, heavy water has a molecular weight of about 20 (the sum of twice the atomic weight of deuterium, which is 2, plus the atomic weight of oxygen, which is 16), whereas ordinary water has a molecular weight of about 18 (twice the atomic weight of ordinary hydrogen, which is 1, plus oxygen, which is 16).
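The molecular-weight arithmetic above is simple enough to spell out as a check:

```python
# The molecular-weight arithmetic from the paragraph above, using the
# integer atomic weights it quotes (H = 1, D = 2, O = 16).

H, D, O = 1, 2, 16

mw_light_water = 2 * H + O   # H2O -> 18
mw_heavy_water = 2 * D + O   # D2O -> 20

print(f"H2O: {mw_light_water}, D2O: {mw_heavy_water}")
```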
Ordinary water as obtained from most natural sources contains about one deuterium atom for every 6,760 ordinary hydrogen atoms. When water is electrolyzed, the hydrogen gas evolved contains proportionally less deuterium, and the residual water is thus enriched in deuterium content. Continued electrolysis of hundreds of litres of water until only a few millilitres remain yields practically pure deuterium oxide. This operation, until 1943 the only large-scale method used, has been superseded by less expensive processes, such as fractional distillation (D2O becomes concentrated in the liquid residue because it is less volatile than H2O). The heavy water produced is used as a moderator of neutrons in nuclear power plants. In the laboratory heavy water is employed as an isotopic tracer in studies of chemical and biochemical processes.
Details
Heavy water (deuterium oxide, 2H2O or D2O) is a form of water whose hydrogen atoms are all deuterium (2H or D, also known as heavy hydrogen) rather than the common hydrogen-1 isotope (1H or H, also called protium) that makes up most of the hydrogen in normal water. The presence of the heavier hydrogen isotope gives the water different nuclear properties, and the increase in mass gives it slightly different physical and chemical properties when compared to normal water.
Deuterium is a heavy hydrogen isotope. Heavy water contains deuterium atoms and is used in nuclear reactors. Semiheavy water (HDO) is more common than pure heavy water, while heavy-oxygen water is denser but lacks unique properties. Tritiated water is radioactive due to tritium content.
Heavy water (D2O) has different physical properties than regular water, such as being 10.6% denser and having a higher melting point. Heavy water is less dissociated at a given temperature, and it does not have the slightly blue color of regular water. While it has no significant taste difference, it can taste slightly sweet. Heavy water affects biological systems by altering enzymes, hydrogen bonds, and cell division in eukaryotes. It can be lethal to multicellular organisms at concentrations over 50%. However, some prokaryotes like bacteria can survive in a heavy hydrogen environment. Heavy water can be toxic to humans, but a large amount would be needed for poisoning to occur.
Deuterated water (HDO) occurs naturally in normal water and can be separated through distillation, electrolysis, or chemical exchange processes. The most cost-effective process for producing heavy water is the Girdler sulfide process. Heavy water is used in various industries and is sold in different grades of purity. Some of its applications include nuclear magnetic resonance, infrared spectroscopy, neutron moderation, neutrino detection, metabolic rate testing, neutron capture therapy, and the production of radioactive materials such as plutonium and tritium.
Composition
Deuterium is a hydrogen isotope with a nucleus containing a neutron and a proton; the nucleus of a protium (normal hydrogen) atom consists of just a proton. The additional neutron makes a deuterium atom roughly twice as heavy as a protium atom.
A molecule of heavy water has two deuterium atoms in place of the two protium atoms of ordinary "light" water. The term heavy water as defined by the IUPAC Gold Book can also refer to water in which a higher than usual proportion of hydrogen atoms are deuterium rather than protium. For comparison, ordinary water (the "ordinary water" used for a deuterium standard) contains only about 156 deuterium atoms per million hydrogen atoms, meaning that 0.0156% of the hydrogen atoms are of the heavy type. Thus heavy water as defined by the Gold Book includes hydrogen-deuterium oxide (HDO) and other mixtures of D2O, H2O, and HDO in which the proportion of deuterium is greater than usual. For instance, the heavy water used in CANDU reactors is a highly enriched water mixture that contains mostly deuterium oxide D2O, but also some hydrogen-deuterium oxide and a smaller amount of ordinary hydrogen oxide H2O. It is 99.75% enriched by hydrogen atom-fraction—meaning that 99.75% of the hydrogen atoms are of the heavy type; however, heavy water in the Gold Book sense need not be so highly enriched. The weight of a heavy water molecule, however, is not substantially different from that of a normal water molecule, because about 89% of the molecular weight of water comes from the single oxygen atom rather than the two hydrogen atoms.
Heavy water is not radioactive. In its pure form, it has a density about 11% greater than water, but is otherwise physically and chemically similar. Nevertheless, the various differences in deuterium-containing water (especially affecting the biological properties) are larger than in any other commonly occurring isotope-substituted compound because deuterium is unique among heavy stable isotopes in being twice as heavy as the lightest isotope. This difference increases the strength of water's hydrogen–oxygen bonds, and this in turn is enough to cause differences that are important to some biochemical reactions. The human body naturally contains deuterium equivalent to about five grams of heavy water, which is harmless. When a large fraction of water (> 50%) in higher organisms is replaced by heavy water, the result is cell dysfunction and death.
Heavy water was first produced in 1932, a few months after the discovery of deuterium. With the discovery of nuclear fission in late 1938, and the need for a neutron moderator that captured few neutrons, heavy water became a component of early nuclear energy research. Since then, heavy water has been an essential component in some types of reactors, both those that generate power and those designed to produce isotopes for nuclear weapons. These heavy water reactors have the advantage of being able to run on natural uranium without using graphite moderators that pose radiological[8] and dust explosion[9] hazards in the decommissioning phase. The graphite moderated Soviet RBMK design tried to avoid using either enriched uranium or heavy water (being cooled with ordinary "light" water instead) which produced the positive void coefficient that was one of a series of flaws in reactor design leading to the Chernobyl disaster. Most modern reactors use enriched uranium with ordinary water as the moderator.
Additional Information
What is heavy water?
Heavy water, in simple terms, is water whose hydrogen atoms are exchanged for deuterium (the heavy hydrogen isotope). Heavy water therefore has a higher boiling point and freezing point than ordinary water. The hydrogen nucleus has one proton, while the deuterium nucleus has one proton and one neutron; this extra neutron increases its mass. One mole of ordinary water is 18 grams, and one mole of heavy water is 20 grams. Therefore, a litre of heavy water has a greater mass than a litre of light water. Because deuterium and hydrogen differ in nuclear properties such as angular momentum and magnetic moment, heavy water and deuterium are also used in various research fields.
History of heavy water
For the first time, Russell predicted the existence of deuterium using a helical periodic table. Six years later, in 1931, Harold Urey of Columbia University discovered it. In 1933, Gilbert N. Lewis prepared a sample of pure heavy water by electrolysis. Hevesy and Hofer used heavy water in 1934 to conduct tracer experiments investigating the rate at which water is transported through the human body.
How is heavy water produced?
There are various methods for producing heavy water, including chemical (isotopic) exchange, distillation, electrolysis, membrane processes, thermal diffusion, laser methods, photochemistry, and adsorption. Each method has its own advantages and disadvantages, and so far only the first three have been implemented on an industrial scale. The electrolysis method was the first to be used to produce heavy water. Since the boiling point of heavy water is higher than that of ordinary water, evaporation and distillation can also be used to produce it: the considerable difference in mass between heavy and light water, and the resulting difference in boiling points, facilitates the separation and purification of heavy water. Normally, there is one heavy water molecule for every 6,400 to 7,000 ordinary water molecules, and this naturally occurring fraction is concentrated physically and chemically to produce it.
Heavy water and deuterium applications
The use of heavy water mainly includes two parts: nuclear application and research application (scientific research in the fields of geology, biology, medicine, physics, chemistry, engineering), which we will examine in the following:
Nuclear applications
Neutron moderator: Heavy water is used in some nuclear reactors as a neutron moderator. Light water can also be used as a moderator, but because light water also absorbs thermal neutrons, reactors moderated with it must use enriched uranium as fuel; a heavy water reactor can instead use natural (unenriched) uranium. For this reason, heavy water production is relevant to discussions of preventing the proliferation of nuclear weapons. In addition, deuterium and tritium gas are used as fuel for energy production by nuclear fusion; their fusion into helium releases an enormous amount of energy.
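For a sense of the energy scale, the yield of a single deuterium-tritium fusion can be estimated from the mass defect. This is a standard textbook calculation, sketched here with published atomic masses:

```python
# Energy released in D + T -> He-4 + n, from the mass defect.
u_to_MeV = 931.494  # energy equivalent of one atomic mass unit, in MeV

m_D, m_T = 2.014102, 3.016049    # reactant masses in u
m_He4, m_n = 4.002602, 1.008665  # product masses in u

delta_m = (m_D + m_T) - (m_He4 + m_n)
energy_MeV = delta_m * u_to_MeV

print(f"mass defect: {delta_m:.6f} u")
print(f"energy released: {energy_MeV:.1f} MeV per reaction")  # ~17.6 MeV
```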
Non-nuclear applications of heavy water and deuterium
1. Nuclear Magnetic Resonance Spectroscopy: Nuclear magnetic resonance (NMR) spectroscopy is widely used today to identify and study molecular structure. In this technique, because the signals of ordinary hydrogen atoms would swamp the NMR spectrum, heavy water or deuterated solvents are used when studying the structure of molecules. Thousands of new compounds are synthesized around the world every day, and NMR is usually the first step in identifying them; with the widespread use of deuterated solvents, heavy water is ultimately the source of the deuterium they contain.
2. Pharmacy
According to researchers, replacing hydrogen with deuterium can improve the properties of drugs and significantly reduce their toxicity. Many companies around the world now produce a variety of deuterated drugs.
3. Use of heavy water in the oil and gas industries
Today, heavy water is widely used to determine the best location for drilling oil and gas wells and the saturation of such reservoirs.
4. Optical fibers and semiconductors
Another very important application of heavy water is in the production of optical fibers and deuterated semiconductors. Replacing hydrogen atoms with deuterium in these materials increases their lifespan by up to 10 times and also greatly enhances their electrical properties. These effects have been significant enough that leading electronics companies have begun producing this technology commercially.
5. Deuterium lamp manufacturing industry
These lamps are widely used in spectroscopy instruments, in which the deuterium lamp provides a continuous source of radiation. A monochromator separates this radiation, and a narrow range of wavelengths is delivered to the sample cell by optical components. The amount of deuterium consumed in producing these lamps is very small, but the lamps are expensive because of the high technology involved.
6. Neutrino detection. A neutrino detector has been installed deep in the ground in an old mine to prevent cosmic rays from reaching it. The observatory's main goal is to answer the question of whether electron neutrinos produced by fusion in the sun are converted to other types of neutrinos on the way to Earth. Heavy water is essential for these experiments because it provides the deuterium needed to detect a variety of neutrinos.
7. Investigating the energy consumption of living organisms
A mixture of heavy water and water enriched with the 18O isotope of oxygen is used in experiments that measure the metabolic rate of humans and animals. This metabolic test is called the doubly labeled water (DLW) test. In this method, the subject first drinks some water labeled with deuterium and oxygen-18 at a specified isotopic concentration.
Then, at specified time intervals, isotopic analysis of the D/H and 18O/16O ratios in urine or saliva samples is performed. Oxygen-18 leaves the body both as water and as carbon dioxide, while deuterium leaves only as water. The decline in oxygen-18 over a period of time therefore reflects carbon dioxide production, which in turn reflects the oxidation of fat, carbohydrate, and protein; so the difference in oxygen-18 levels indicates the speed of the body's metabolism.
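In its simplest textbook form, the DLW calculation turns the two elimination rates into a CO2 production rate. The numbers below are purely illustrative, and real protocols apply corrections (for example, for isotope fractionation) that are omitted here:

```python
# Doubly labeled water (DLW), simplest form: CO2 production is proportional
# to the difference between the elimination rates of 18O (lost as water and
# CO2) and deuterium (lost as water only). Values are hypothetical.
N = 2500    # total body water in mol (illustrative subject)
k_O = 0.12  # 18O elimination rate, 1/day (fitted from samples)
k_D = 0.10  # deuterium elimination rate, 1/day

r_CO2 = (N / 2) * (k_O - k_D)  # mol CO2/day; the /2 reflects the two
                               # oxygen atoms carried per CO2 molecule
print(f"estimated CO2 production: {r_CO2:.0f} mol/day")  # ~25 mol/day
```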
8. Application of heavy water as a tracer in hydrology
Because the deuterium isotope is stable and poses no environmental hazard, modern hydrological techniques (stable isotope analysis) can inject heavy water into groundwater sources to obtain accurate information about the origin, velocity, and direction of currents in aquifers. This method is used in groundwater aquifers to trace the path of water movement through porous media, evaluate groundwater flow velocity, estimate the permeability coefficient of aquifers, identify the origin and recharge areas of groundwater, establish the relationships between aquifers, and study contaminants.
9. Diagnostic studies in medical science
In general, nuclear medicine is based on the use of radioisotopes and radiopharmaceuticals, and heavy water can be used as a neutron target in producing these materials. Radiopharmaceuticals are used to diagnose and treat diseases such as cancer, benign and malignant tumors, heart failure, and coronary heart disease. For example, the PET scan technique makes wide use of glucose labeled with fluorine-18, and heavy water is used in labeling the glucose.
Gist
Exponential growth is a process that increases quantity over time at an ever-increasing rate. It occurs when the instantaneous rate of change (that is, the derivative) of a quantity with respect to time is proportional to the quantity itself.
Details
Exponential growth is a process that increases quantity over time at an ever-increasing rate. It occurs when the instantaneous rate of change (that is, the derivative) of a quantity with respect to time is proportional to the quantity itself. Described as a function, a quantity undergoing exponential growth is an exponential function of time, that is, the variable representing time is the exponent (in contrast to other types of growth, such as quadratic growth). Exponential growth is the inverse of logarithmic growth.
If the constant of proportionality is negative, then the quantity decreases over time, and is said to be undergoing exponential decay instead. In the case of a discrete domain of definition with equal intervals, it is also called geometric growth or geometric decay since the function values form a geometric progression.
The formula for exponential growth of a variable x at the growth rate r, as time t goes on in discrete intervals (that is, at integer times 0, 1, 2, 3, ...), is

x_t = x0 (1 + r)^t
where x0 is the value of x at time 0. The growth of a bacterial colony is often used to illustrate it. One bacterium splits itself into two, each of which splits itself resulting in four, then eight, 16, 32, and so on. The amount of increase keeps increasing because it is proportional to the ever-increasing number of bacteria. Growth like this is observed in real-life activity or phenomena, such as the spread of virus infection, the growth of debt due to compound interest, and the spread of viral videos. In real cases, initial exponential growth often does not last forever, instead slowing down eventually due to upper limits caused by external factors and turning into logistic growth.
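A few lines of Python make the doubling behaviour of the formula concrete:

```python
# Discrete exponential growth: x_t = x0 * (1 + r)**t.
# With r = 1 (each bacterium splits at every step), the count doubles
# each time interval.
x0, r = 1, 1

for t in range(6):
    print(t, x0 * (1 + r) ** t)  # 1, 2, 4, 8, 16, 32
```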
Terms like "exponential growth" are sometimes incorrectly interpreted as "rapid growth". Indeed, something that grows exponentially can in fact be growing slowly at first.
Examples
Bacteria exhibit exponential growth under optimal conditions.
Biology
The number of microorganisms in a culture will increase exponentially until an essential nutrient is exhausted, so there is no more of that nutrient for more organisms to grow. Typically the first organism splits into two daughter organisms, which then each split to form four, which split to form eight, and so on. Because exponential growth indicates a constant growth rate, it is frequently assumed that exponentially growing cells are at a steady state. However, cells can grow exponentially at a constant rate while remodeling their metabolism and gene expression.
A virus (for example COVID-19, or smallpox) typically will spread exponentially at first, if no artificial immunization is available. Each infected person can infect multiple new people.
Physics
Avalanche breakdown within a dielectric material. A free electron becomes sufficiently accelerated by an externally applied electrical field that it frees up additional electrons as it collides with atoms or molecules of the dielectric media. These secondary electrons also are accelerated, creating larger numbers of free electrons. The resulting exponential growth of electrons and ions may rapidly lead to complete dielectric breakdown of the material.
Nuclear chain reaction (the concept behind nuclear reactors and nuclear weapons). Each uranium nucleus that undergoes fission produces multiple neutrons, each of which can be absorbed by adjacent uranium atoms, causing them to fission in turn. If the probability of neutron absorption exceeds the probability of neutron escape (a function of the shape and mass of the uranium), the production rate of neutrons and induced uranium fissions increases exponentially, in an uncontrolled reaction. "Due to the exponential rate of increase, at any point in the chain reaction 99% of the energy will have been released in the last 4.6 generations. It is a reasonable approximation to think of the first 53 generations as a latency period leading up to the actual explosion, which only takes 3–4 generations."
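The quoted 99%-in-4.6-generations figure can be checked directly: if each generation multiplies the neutron population by a factor f, the last n generations account for roughly a fraction 1 - f^(-n) of the total, which suggests the quote assumes a multiplication factor of about e (roughly 2.7) per generation. A short sketch:

```python
import math

# Fraction of the total energy released in the last n generations of a
# long chain reaction, if each generation multiplies the population by f:
# approximately 1 - f**(-n).
def fraction_in_last(n, f):
    return 1 - f ** (-n)

# "99% in the last 4.6 generations" matches a growth factor of about e:
print(f"{fraction_in_last(4.6, math.e):.3f}")  # ~0.990
# With f = 2 instead, 99% would take about log2(100) ~ 6.6 generations:
print(f"{math.log(100, 2):.1f}")
```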
Positive feedback within the linear range of electrical or electroacoustic amplification can result in the exponential growth of the amplified signal, although resonance effects may favor some component frequencies of the signal over others.
Economics
Economic growth is expressed in percentage terms, implying exponential growth.
Finance
Compound interest at a constant interest rate provides exponential growth of the capital.
Pyramid schemes or Ponzi schemes also show this type of growth resulting in high profits for a few initial investors and losses among great numbers of investors.
Computer science
Processing power of computers.
In computational complexity theory, computer algorithms of exponential complexity require an exponentially increasing amount of resources (e.g. time, computer memory) for only a constant increase in problem size. So for an algorithm of time complexity 2^x, if a problem of size x = 10 requires 10 seconds to complete, and a problem of size x = 11 requires 20 seconds, then a problem of size x = 12 will require 40 seconds. This kind of algorithm typically becomes unusable at very small problem sizes, often between 30 and 100 items (most computer algorithms need to be able to solve much larger problems, up to tens of thousands or even millions of items in reasonable times, something that would be physically impossible with an exponential algorithm). Also, the effects of Moore's law do not help the situation much, because doubling processor speed merely increases the feasible problem size by a constant: if a slow processor can solve problems of size x in time t, then a processor twice as fast could only solve problems of size x + constant in the same time t. So exponentially complex algorithms are most often impractical, and the search for more efficient algorithms is one of the central goals of computer science today.
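The "constant increase" effect of a hardware speed-up can be made concrete: if a machine executes S steps per second, the largest size solvable by a 2^x-step algorithm in time t satisfies x = log2(S * t), so doubling S adds exactly 1 to x. A small illustration (the speeds are arbitrary):

```python
import math

def max_size(steps_per_sec, seconds):
    """Largest x such that a 2**x-step algorithm finishes in the time given."""
    return math.log2(steps_per_sec * seconds)

t = 3600  # one hour
for speed in (1e9, 2e9, 4e9):  # each row doubles processor speed
    print(f"{speed:.0e} steps/s -> x = {max_size(speed, t):.1f}")
# Each doubling of speed raises the feasible problem size by exactly 1.
```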
Internet phenomena
Internet contents, such as internet memes or videos, can spread in an exponential manner, often said to "go viral" as an analogy to the spread of viruses. With media such as social networks, one person can forward the same content to many people simultaneously, who then spread it to even more people, and so on, causing rapid spread. For example, the video Gangnam Style was uploaded to YouTube on 15 July 2012, reaching hundreds of thousands of viewers on the first day, millions on the twentieth day, and was cumulatively viewed by hundreds of millions in less than two months.
Gist
Sonic boom is a shock wave that is produced by an aircraft or other object flying at a speed equal to or exceeding the speed of sound and that is heard on the ground as a sound like a clap of thunder.
When an aircraft travels at subsonic speed, the pressure disturbances, or sounds, that it generates extend in all directions. Because this disturbance is transmitted earthward continuously to every point along the path, there are no sharp disturbances or changes of pressure. At supersonic speeds, however, the pressure field is confined to a region extending mostly to the rear and extending from the craft in a restricted widening cone (called a Mach cone). As the aircraft proceeds, the trailing parabolic edge of that cone of disturbance intercepts the Earth, producing on Earth a sound of a sharp bang or boom. When such an aircraft flies at a low altitude, the shock wave may be of sufficient intensity to cause glass breakage and other damage. The intensity of the sonic boom is determined not only by the distance between the craft and the ground but also by the size and shape of the aircraft, the types of maneuvers that it makes, and the atmospheric pressure, temperature, and winds. If the aircraft is especially long, double sonic booms might be detected, one emanating from the leading edge of the plane and one from the trailing edge.
Summary
Sonic boom is a common name for the loud noise created by the shock wave produced by an airplane traveling at speeds greater than that of sound (the speed of sound is approximately 332 m/s, or 1,195 km/h, or 743 miles per hour). These speeds are called supersonic speeds, hence this phenomenon is sometimes called the supersonic boom.
Normally, for a plane going at subsonic speeds (lower than that of sound), the sound of the plane is radiated in all directions. However, because of the plane's forward speed, the individual sound wavelets are compressed in front of the plane and spread out behind it. This effect is known as the Doppler effect and accounts for the change in pitch of the plane's sound as it passes us: when the plane is approaching, its sound has a higher pitch than when it is moving away.
Now, if the plane is traveling at supersonic speeds (greater than that of sound), it is going faster than its own sound. As a result, a pressure wave (sound is a variation in pressure) is produced in the shape of a cone whose vertex is at the nose of the plane and whose base is behind it. The opening angle of the cone depends on the actual speed of the plane. All of the sound pressure is contained in this cone.
So imagine this plane in level flight. Before the plane passes you, you can see it but you cannot hear anything; the pressure cone is trailing behind the plane. Once your ears intersect the edge of this cone, you will hear a very loud sound, the sonic boom. In other words, you hear the sonic boom when your ears intersect this cone, not when the plane breaks the sound barrier (as is commonly misunderstood).
The sonic booms can be sometimes quite loud. For a commercial supersonic transport plane (SST), it can be as loud as 136 decibels, or 120 Pa (in units of pressure).
Details
A sonic boom is a sound associated with shock waves created when an object travels through the air faster than the speed of sound. Sonic booms generate enormous amounts of sound energy, sounding similar to an explosion or a thunderclap to the human ear.
The crack of a supersonic bullet passing overhead or the crack of a bullwhip are examples of a sonic boom in miniature.
Sonic booms due to large supersonic aircraft can be particularly loud and startling, tend to awaken people, and may cause minor damage to some structures. This led to the prohibition of routine supersonic flight overland. Although they cannot be completely prevented, research suggests that with careful shaping of the vehicle, the nuisance due to the sonic booms may be reduced to the point that overland supersonic flight may become a feasible option.
A sonic boom does not occur only at the moment an object crosses the sound barrier and neither is it heard in all directions emanating from the supersonic object. Rather, the boom is a continuous effect that occurs while the object is traveling at supersonic speeds and affects only observers that are positioned at a point that intersects a region in the shape of a geometrical cone behind the object. As the object moves, this conical region also moves behind it and when the cone passes over the observer, they will briefly experience the "boom".
Causes
When an aircraft passes through the air, it creates a series of pressure waves in front of the aircraft and behind it, similar to the bow and stern waves created by a boat. These waves travel at the speed of sound and, as the speed of the object increases, the waves are forced together, or compressed, because they cannot get out of each other's way quickly enough. Eventually, they merge into a single shock wave, which travels at the speed of sound, a critical speed known as Mach 1, which is approximately 1,235 km/h (767 mph) at sea level and 20 °C (68 °F).
In smooth flight, the shock wave starts at the nose of the aircraft and ends at the tail. Because the different radial directions around the aircraft's direction of travel are equivalent (given the "smooth flight" condition), the shock wave forms a Mach cone, similar to a vapour cone, with the aircraft at its tip.
There is a rise in pressure at the nose, decreasing steadily to a negative pressure at the tail, followed by a sudden return to normal pressure after the object passes. This "overpressure profile" is known as an N-wave because of its shape. The "boom" is experienced when there is a sudden change in pressure; therefore, an N-wave causes two booms – one when the initial pressure rise reaches an observer, and another when the pressure returns to normal. This leads to a distinctive "double boom" from a supersonic aircraft. When the aircraft is maneuvering, the pressure distribution changes into different forms, with a characteristic U-wave shape.
Since the boom is being generated continually as long as the aircraft is supersonic, it fills out a narrow path on the ground following the aircraft's flight path, a bit like an unrolling red carpet, and hence known as the boom carpet. Its width depends on the altitude of the aircraft. The distance from the point on the ground where the boom is heard to the aircraft depends on its altitude and the angle of the Mach cone.
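The underlying geometry is simple: the Mach cone's half-angle satisfies sin(θ) = 1/M. Under an idealized still, uniform atmosphere (ignoring the refraction that actually limits the boom carpet's width), a short sketch gives the delay between an aircraft passing overhead and the boom arriving below; the altitude and speed chosen are illustrative:

```python
import math

def mach_angle(M):
    """Half-angle of the Mach cone: sin(theta) = 1/M (supersonic only)."""
    return math.asin(1 / M)

# Illustrative values: Mach 2 at 15 km altitude, with a uniform speed of
# sound of 300 m/s. Real atmospheres refract the shock, which is what
# bounds the width of the boom carpet.
M, h, a = 2.0, 15_000.0, 300.0

theta = mach_angle(M)
x = h / math.tan(theta)  # how far past overhead the aircraft has flown
delay = x / (M * a)      # time between flyover and the boom below

print(f"Mach angle: {math.degrees(theta):.1f} deg")  # 30.0 deg for M = 2
print(f"boom arrives ~{delay:.0f} s after the aircraft passes overhead")
```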
For today's supersonic aircraft in normal operating conditions, the peak overpressure varies from less than 50 to 500 Pa (1 to 10 psf (pound per square foot)) for an N-wave boom. Peak overpressures for U-waves are amplified two to five times the N-wave, but this amplified overpressure impacts only a very small area when compared to the area exposed to the rest of the sonic boom. The strongest sonic boom ever recorded was 7,000 Pa (144 psf) and it did not cause injury to the researchers who were exposed to it. The boom was produced by an F-4 flying just above the speed of sound at an altitude of 100 feet (30 m). In recent tests, the maximum boom measured during more realistic flight conditions was 1,010 Pa (21 psf). There is a probability that some damage—shattered glass, for example—will result from a sonic boom. Buildings in good condition should suffer no damage by pressures of 530 Pa (11 psf) or less. And, typically, community exposure to sonic boom is below 100 Pa (2 psf). Ground motion resulting from the sonic boom is rare and is well below structural damage thresholds accepted by the U.S. Bureau of Mines and other agencies.
The power, or volume, of the shock wave depends on the quantity of air that is being accelerated, and thus the size and shape of the aircraft. As the aircraft increases speed, the shock cone gets tighter around the craft and becomes weaker, to the point that at very high speeds and altitudes no boom is heard. The "length" of the boom from front to back depends on the length of the aircraft to a power of 3/2. Longer aircraft therefore "spread out" their booms more than smaller ones, which leads to a less powerful boom.
Several smaller shock waves can and usually do form at other points on the aircraft, primarily at any convex points, or curves, the leading wing edge, and especially the inlet to engines. These secondary shockwaves are caused by the air being forced to turn around these convex points, which generates a shock wave in supersonic flow.
The later shock waves are somewhat faster than the first one, so they catch up and merge with the main shock wave at some distance from the aircraft, creating a much more defined N-wave shape. This maximizes both the magnitude and the "rise time" of the shock, which makes the boom seem louder. On most aircraft designs the characteristic distance is about 40,000 feet (12,000 m), meaning that below this altitude the sonic boom will be "softer". However, the drag at or below this altitude makes supersonic travel particularly inefficient, which poses a serious problem.
Supersonic aircraft
Supersonic aircraft are any aircraft that can achieve flight faster than Mach 1, the speed of sound. "Supersonic includes speeds up to five times faster than the speed of sound, or Mach 5." (Dunbar, 2015) The top speed of a supersonic aircraft normally ranges from 700 to 1,500 miles per hour (1,100 to 2,400 km/h), and most aircraft do not exceed 1,500 mph (2,414 km/h). There are many variations of supersonic aircraft. Some models rely on finely tuned aerodynamics and sacrifice some engine power, while others use the efficiency and power of their engines to let a less aerodynamic airframe achieve greater speeds. A typical model in United States military use costs from an average of $13 million to $35 million U.S. dollars.
Gist
Physical education provides cognitive content and instruction designed to develop motor skills, knowledge, and behaviors for physical activity and physical fitness.
Summary
Physical education is the foundation of a Comprehensive School Physical Activity Program. It is an academic subject characterized by a planned, sequential K–12 curriculum (course of study) that is based on the national standards for physical education. Physical education provides cognitive content and instruction designed to develop motor skills, knowledge, and behaviors for physical activity and physical fitness. Supporting schools to establish physical education daily can provide students with the ability and confidence to be physically active for a lifetime.
There are many benefits of physical education in schools. When students get physical education, they can:
* Increase their level of physical activity.
* Improve their grades and standardized test scores.
* Stay on-task in the classroom.
Increased time spent in physical education does not negatively affect students’ academic achievement.
Details
Physical education, often abbreviated to Phys. Ed. or PE, is a subject taught in schools around the world. PE is taught during primary and secondary education and encourages psychomotor, cognitive, and affective learning through physical activity and movement exploration to promote health and physical fitness. When taught correctly and in a positive manner, children and teens receive a wealth of health benefits. These include reduced metabolic disease risk, improved cardiorespiratory fitness, and better mental health. In addition, PE classes can produce positive effects on students' behavior and academic performance. Research has shown that there is a positive correlation between brain development and exercise. Researchers in 2007 found a profound gain in English language arts standardized test scores among students who had 56 hours of physical education in a year, compared to those who had 28 hours of physical education a year.
Many physical education programs also include health education as part of the curriculum. Health education is the teaching of information on the prevention, control, and treatment of diseases.
Curriculum in Physical Education
A highly effective physical education program aims to develop physical literacy through the acquisition of skills, knowledge, physical fitness, and confidence. Physical education curricula promote healthy development of children, encourage interest in physical activity and sport, improve learning of health and physical education concepts, and accommodate for differences in student populations to ensure that every child receives health benefits. These core principles are implemented through sport participation, sports skill development, knowledge of physical fitness and health, as well as mental health and social adaptation.
Physical education curriculum at the secondary level includes a variety of team and individual sports, as well as leisure activities. Some examples of physical activities include basketball, soccer, volleyball, track and field, badminton, tennis, walking, cycling, and swimming. Chess is another activity that is included in the PE curriculum in some parts of the world. Chess helps students to develop their cognitive thinking skills and improves focus, while also teaching about sportsmanship and fair play. Gymnastics and wrestling activities offer additional opportunities for students to improve the different areas of physical fitness including flexibility, strength, aerobic endurance, balance, and coordination. Additional activities in PE include football, netball, hockey, rounders, cricket, four square, racing, and numerous other children's games. Physical education also teaches nutrition, healthy habits, and individuality of needs.
Pedagogy
The main goals in teaching modern physical education are:
* To expose children and teens to a wide variety of exercise and healthy activities. Because P.E. can be accessible to nearly all children, it is one of the only opportunities that can guarantee beneficial and healthy activity in children.
* To teach skills to maintain a lifetime of fitness as well as health.
* To encourage self-reporting and monitoring of exercise.
* To individualize duration, intensity, and type of activity.
* To focus feedback on the work, rather than the result.
* To provide active role models.
It is critical for physical educators to foster and strengthen developing motor skills and to provide children and teens with a basic skill set that builds their movement repertoire, which allows students to engage in various forms of games, sports, and other physical activities throughout their lifetime.
These goals can be achieved in a variety of ways. National, state, and local guidelines often dictate which standards must be taught in regards to physical education. These standards determine what content is covered, the qualifications educators must meet, and the textbooks and materials which must be used. These various standards include teaching sports education, or the use of sports as exercise; fitness education, relating to overall health and fitness; and movement education, which deals with movement in a non-sport context.
These approaches and curricula are based on pioneers in PE, namely Francois Delsarte, Liselott Diem, and Rudolf von Laban, who, in the 1800s, focused on using a child's ability to use their body for self-expression. This, in combination with approaches in the 1960s (which featured the use of the body, spatial awareness, effort, and relationships), gave birth to the modern teaching of physical education.
Recent research has also explored the role of physical education for moral development in support of social inclusion and social justice agendas, where it is under-researched, especially in the context of disability, and the social inclusion of disabled people.
Technology use in physical education
Many physical education classes utilize technology to assist their pupils in effective exercise. One of the most affordable and popular tools is a simple video recorder. With this, students record themselves, and, upon playback, can see mistakes they are making in activities like throwing or swinging. Studies show that students find this more effective than having someone try to explain what they are doing wrong, and then trying to correct it.
Educators may also use technology such as pedometers and heart rate monitors to make step and heart rate goals for students. Implementing pedometers in physical education can improve physical activity participation, motivation and enjoyment.
Other technologies that can be used in a physical education setting include video projectors and GPS systems. Gaming systems and their associated games, such as the Kinect, Wii, and Wii Fit, can also be used. Projectors are used to show students proper form or how to play certain games. GPS systems can be used to get students active in an outdoor setting, and exergames can be used by teachers to show students a good way to stay fit in and out of the classroom. Exergames, or digital games that require the use of physical movement to participate, can be used as a tool to encourage physical activity and health in young children.
Technology integration can increase student motivation and engagement in the Physical Education setting. However, the ability of educators to effectively use technology in the classroom is reliant on a teacher's perceived competence in their ability to integrate technology into the curriculum.
Beyond traditional tools, recent AI advancements are introducing new methods for personalizing physical education, especially for adolescents. AI applications like adaptive coaching are starting to show promise in enhancing student motivation and program effectiveness in physical education settings.
Additional Information
Physical education is training in physical fitness and in skills requiring or promoting such fitness. Many traditional societies included training in hunting, ritual dance, and military skills, while others—especially those emphasizing literacy—often excluded physical skills.
The spread of literacy in the West between 1500 and 1800 coincided with a new awareness that fitness helps the mind. Gymnasiums opened across Europe, the first in Copenhagen in 1799. The German Turnverein movement grew, expanding to the United States with immigration. Per Henrik Ling developed a teaching system for physical education in Stockholm in 1814, and Adolf Spiess (1810–1858) popularized another system in Germany. As public schools in Germany, Denmark, and the United States tried these systems, physical education joined baccalaureate curricula, becoming a major at Columbia University in 1901 and elsewhere later.
Japan’s schools have linked physical and mental training since the 17th century. Public schools with compulsory physical education were founded in 1872; the trend since 1945 has been toward individual physical and mental development. The Soviet Union, after 1917, placed great emphasis on physical education, both in schools and in special physical education institutes.
Today, physical education is a required course in many primary and secondary schools in countries with compulsory education. Most teaching takes place inside gymnasiums or other facilities built specifically for physical education activities, although outdoor sports are also emphasized.
Gist
Geometry is the branch of mathematics that deals with shapes, angles, dimensions and sizes of a variety of things we see in everyday life. Geometry is derived from Ancient Greek words – 'Geo' means 'Earth' and 'metron' means 'measurement'.
Summary
Geometry is the branch of mathematics concerned with the shape of individual objects, spatial relationships among various objects, and the properties of surrounding space. It is one of the oldest branches of mathematics, having arisen in response to such practical problems as those found in surveying, and its name is derived from Greek words meaning “Earth measurement.” Eventually it was realized that geometry need not be limited to the study of flat surfaces (plane geometry) and rigid three-dimensional objects (solid geometry) but that even the most abstract thoughts and images might be represented and developed in geometric terms.
Details
Geometry (from Ancient Greek (geōmetría) 'land measurement'; from (gê) 'earth, land', and (métron) 'a measure') is a branch of mathematics concerned with properties of space such as the distance, shape, size, and relative position of figures. Geometry is, along with arithmetic, one of the oldest branches of mathematics. A mathematician who works in the field of geometry is called a geometer. Until the 19th century, geometry was almost exclusively devoted to Euclidean geometry, which includes the notions of point, line, plane, distance, angle, surface, and curve, as fundamental concepts.
Originally developed to model the physical world, geometry has applications in almost all sciences, and also in art, architecture, and other activities that are related to graphics. Geometry also has applications in areas of mathematics that are apparently unrelated. For example, methods of algebraic geometry are fundamental in Wiles's proof of Fermat's Last Theorem, a problem that was stated in terms of elementary arithmetic, and remained unsolved for several centuries.
During the 19th century several discoveries enlarged dramatically the scope of geometry. One of the oldest such discoveries is Carl Friedrich Gauss' Theorema Egregium ("remarkable theorem") that asserts roughly that the Gaussian curvature of a surface is independent from any specific embedding in a Euclidean space. This implies that surfaces can be studied intrinsically, that is, as stand-alone spaces, and has been expanded into the theory of manifolds and Riemannian geometry. Later in the 19th century, it appeared that geometries without the parallel postulate (non-Euclidean geometries) can be developed without introducing any contradiction. The geometry that underlies general relativity is a famous application of non-Euclidean geometry.
Since the late 19th century, the scope of geometry has been greatly expanded, and the field has been split into many subfields that depend on the underlying methods (differential geometry, algebraic geometry, computational geometry, algebraic topology, discrete geometry (also known as combinatorial geometry), etc.) or on the properties of Euclidean spaces that are disregarded: projective geometry, which considers only the alignment of points but not distance and parallelism; affine geometry, which omits the concepts of angle and distance; finite geometry, which omits continuity; and others. This enlargement of the scope of geometry led to a change of meaning of the word "space", which originally referred to the three-dimensional space of the physical world and its model provided by Euclidean geometry; presently a geometric space, or simply a space, is a mathematical structure on which some geometry is defined.
History
The earliest recorded beginnings of geometry can be traced to ancient Mesopotamia and Egypt in the 2nd millennium BC. Early geometry was a collection of empirically discovered principles concerning lengths, angles, areas, and volumes, which were developed to meet some practical need in surveying, construction, astronomy, and various crafts. The earliest known texts on geometry are the Egyptian Rhind Papyrus (2000–1800 BC) and Moscow Papyrus (c. 1890 BC), and the Babylonian clay tablets, such as Plimpton 322 (1900 BC). For example, the Moscow Papyrus gives a formula for calculating the volume of a truncated pyramid, or frustum. Later clay tablets (350–50 BC) demonstrate that Babylonian astronomers implemented trapezoid procedures for computing Jupiter's position and motion within time-velocity space. These geometric procedures anticipated the Oxford Calculators, including the mean speed theorem, by 14 centuries. South of Egypt the ancient Nubians established a system of geometry including early versions of sun clocks.
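In modern notation the Moscow Papyrus frustum rule reads V = (h/3)(a² + ab + b²) for a truncated square pyramid with base side a, top side b, and height h; a quick Python check reproduces the papyrus's own worked example:

```python
def frustum_volume(a, b, h):
    """Volume of a square frustum (truncated pyramid), as in the Moscow
    Papyrus: V = h/3 * (a**2 + a*b + b**2)."""
    return h / 3 * (a * a + a * b + b * b)

# The papyrus's worked example: base 4, top 2, height 6 -> volume 56.
print(frustum_volume(4, 2, 6))  # 56.0
```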
In the 7th century BC, the Greek mathematician Thales of Miletus used geometry to solve problems such as calculating the height of pyramids and the distance of ships from the shore. He is credited with the first use of deductive reasoning applied to geometry, by deriving four corollaries to Thales's theorem. Pythagoras established the Pythagorean School, which is credited with the first proof of the Pythagorean theorem, though the statement of the theorem has a long history. Eudoxus (408–c. 355 BC) developed the method of exhaustion, which allowed the calculation of areas and volumes of curvilinear figures, as well as a theory of ratios that avoided the problem of incommensurable magnitudes, which enabled subsequent geometers to make significant advances. Around 300 BC, geometry was revolutionized by Euclid, whose Elements, widely considered the most successful and influential textbook of all time, introduced mathematical rigor through the axiomatic method and is the earliest example of the format still used in mathematics today, that of definition, axiom, theorem, and proof. Although most of the contents of the Elements were already known, Euclid arranged them into a single, coherent logical framework. The Elements was known to all educated people in the West until the middle of the 20th century and its contents are still taught in geometry classes today. Archimedes (c. 287–212 BC) of Syracuse, Italy used the method of exhaustion to calculate the area under the arc of a parabola with the summation of an infinite series, and gave remarkably accurate approximations of pi. He also studied the spiral bearing his name and obtained formulas for the volumes of surfaces of revolution.
Indian mathematicians also made many important contributions in geometry. The Shatapatha Brahmana (3rd century BC) contains rules for ritual geometric constructions that are similar to the Sulba Sutras. According to (Hayashi 2005, p. 363), the Śulba Sūtras contain "the earliest extant verbal expression of the Pythagorean Theorem in the world, although it had already been known to the Old Babylonians". They contain lists of Pythagorean triples, which are particular cases of Diophantine equations. In the Bakhshali manuscript, there are a handful of geometric problems (including problems about volumes of irregular solids). The Bakhshali manuscript also "employs a decimal place value system with a dot for zero." Aryabhata's Aryabhatiya (499) includes the computation of areas and volumes. Brahmagupta wrote his astronomical work Brāhmasphuṭasiddhānta in 628. Chapter 12, containing 66 Sanskrit verses, was divided into two sections: "basic operations" (including cube roots, fractions, ratio and proportion, and barter) and "practical mathematics" (including mixture, mathematical series, plane figures, stacking bricks, sawing of timber, and piling of grain). In the latter section, he stated his famous theorem on the diagonals of a cyclic quadrilateral. Chapter 12 also included a formula for the area of a cyclic quadrilateral (a generalization of Heron's formula), as well as a complete description of rational triangles (i.e. triangles with rational sides and rational areas).
In the Middle Ages, mathematics in medieval Islam contributed to the development of geometry, especially algebraic geometry. Al-Mahani (b. 853) conceived the idea of reducing geometrical problems such as duplicating the cube to problems in algebra. Thābit ibn Qurra (known as Thebit in Latin) (836–901) dealt with arithmetic operations applied to ratios of geometrical quantities, and contributed to the development of analytic geometry. Omar Khayyam (1048–1131) found geometric solutions to cubic equations. The theorems of Ibn al-Haytham (Alhazen), Omar Khayyam and Nasir al-Din al-Tusi on quadrilaterals, including the Lambert quadrilateral and Saccheri quadrilateral, were early results in hyperbolic geometry, and along with their alternative postulates, such as Playfair's axiom, these works had a considerable influence on the development of non-Euclidean geometry among later European geometers, including Vitello (c. 1230 – c. 1314), Gersonides (1288–1344), Alfonso, John Wallis, and Giovanni Girolamo Saccheri.
In the early 17th century, there were two important developments in geometry. The first was the creation of analytic geometry, or geometry with coordinates and equations, by René Descartes (1596–1650) and Pierre de Fermat (1601–1665). This was a necessary precursor to the development of calculus and a precise quantitative science of physics. The second geometric development of this period was the systematic study of projective geometry by Girard Desargues (1591–1661). Projective geometry studies properties of shapes which are unchanged under projections and sections, especially as they relate to artistic perspective.
Two developments in geometry in the 19th century changed the way it had been studied previously. These were the discovery of non-Euclidean geometries by Nikolai Ivanovich Lobachevsky, János Bolyai and Carl Friedrich Gauss and of the formulation of symmetry as the central consideration in the Erlangen programme of Felix Klein (which generalized the Euclidean and non-Euclidean geometries). Two of the master geometers of the time were Bernhard Riemann (1826–1866), working primarily with tools from mathematical analysis, and introducing the Riemann surface, and Henri Poincaré, the founder of algebraic topology and the geometric theory of dynamical systems. As a consequence of these major changes in the conception of geometry, the concept of "space" became something rich and varied, and the natural background for theories as different as complex analysis and classical mechanics.
Additional Information
Geometry is one of the oldest branches of mathematics that is concerned with the shape, size, angles, and dimensions of objects in our day-to-day life. Geometry in mathematics plays a crucial role in understanding the physical world around us and has a wide range of applications in various fields, from architecture and engineering to art and physics.
There are two types of shapes in Euclidean geometry: two-dimensional and three-dimensional. Flat shapes are 2D shapes in plane geometry and include triangles, squares, rectangles, and circles. 3D shapes in solid geometry, such as cubes, cuboids, and cones, are also known as solids.
Fundamental geometry is based on points, lines, and planes, as described in coordinate geometry.
Geometry is the study of different varieties of shapes, figures, and sizes. It gives us knowledge about distances, angles, patterns, areas, and volumes of shapes.
The principles of geometry depend on points, lines, angles, and planes. All the geometrical shapes are based on these geometrical concepts.
The word Geometry is made up of two Ancient Greek words- ‘Geo’ means ‘Earth’ and ‘metron’ means ‘measurement’.
Geometry Definition in Maths
Geometry is a branch of mathematics that studies the properties, measurement, and relationships of points, lines, angles, surfaces, and solids.
Branches of Geometry
Geometry can be divided into several branches:
* Algebraic Geometry
* Discrete Geometry
* Differential Geometry
* Euclidean Geometry
* Non-Euclidean Geometry(Elliptical Geometry and Hyperbolic Geometry)
* Convex Geometry
* Topology
Gist
Neutrons, along with protons, are subatomic particles found inside the nucleus of every atom. The only exception is hydrogen, where the nucleus contains only a single proton. Neutrons have a neutral electric charge (neither negative nor positive) and have slightly more mass than positively charged protons.
Summary
A Neutron is a neutral subatomic particle that, in conjunction with protons, makes up the nucleus of every atom except ordinary hydrogen (whose nucleus has one proton and no neutrons). Along with protons and electrons, it is one of the three basic particles making up atoms, the basic building blocks of all matter and chemistry.
The neutron has no electric charge and a rest mass equal to 1.67492749804 × 10^-27 kg, marginally greater than that of the proton but 1,838.68 times greater than that of the electron. Neutrons and protons, commonly called nucleons, are bound together in the dense inner core of an atom, the nucleus, where they account for 99.9 percent of the atom’s mass. Developments in high-energy particle physics in the 20th century revealed that neither the neutron nor the proton is a true elementary particle. Rather, they are composites of extremely small elementary particles called quarks. The neutron is composed of two down quarks, each with a charge of −1/3 of the elementary charge, and one up quark, with a charge of +2/3. The nucleus is bound together by the residual effect of the strong force, a fundamental interaction that governs the behaviour of the quarks that make up the individual protons and neutrons.
A free neutron—one that is not incorporated into a nucleus—is subject to radioactive decay of a type called beta decay. It breaks down into a proton, an electron, and an antineutrino (the antimatter counterpart of the neutrino, a particle with no charge and little or no mass); the half-life for this decay process is 611 seconds. Because it readily disintegrates in this manner, the neutron does not exist in nature in its free state, except among other highly energetic particles in cosmic rays. Since free neutrons are electrically neutral, they pass unhindered through the electrical fields within atoms and so constitute a penetrating form of radiation, interacting with matter almost exclusively through relatively rare collisions with atomic nuclei.
Neutrons and protons are classified as hadrons, subatomic particles that are subject to the strong force. Hadrons, in turn, have been shown to possess internal structure in the form of quarks, fractionally charged subatomic particles that are thought to be among the fundamental components of matter. Like the proton and other baryon particles, the neutron consists of three quarks. In fact, the neutron possesses a magnetic dipole moment; i.e., it behaves like a minute magnet in ways that suggest that it is an entity of moving electric charges.
Details
The neutron is a subatomic particle, symbol n or n0, which has a neutral (not positive or negative) charge, and a mass slightly greater than that of a proton. Protons and neutrons constitute the nuclei of atoms. Since protons and neutrons behave similarly within the nucleus, they are both referred to as nucleons. Nucleons have a mass of approximately one atomic mass unit, or dalton, symbol Da. Their properties and interactions are described by nuclear physics. Protons and neutrons are not elementary particles; each is composed of three quarks.
The chemical properties of an atom are mostly determined by the configuration of electrons that orbit the atom's heavy nucleus. The electron configuration is determined by the charge of the nucleus, which is determined by the number of protons, or atomic number. The number of neutrons is the neutron number. Neutrons do not affect the electron configuration.
Atoms of a chemical element that differ only in neutron number are called isotopes. For example, carbon, with atomic number 6, has an abundant isotope carbon-12 with 6 neutrons and a rare isotope carbon-13 with 7 neutrons. Some elements occur in nature with only one stable isotope, such as fluorine. Other elements occur with many stable isotopes, such as tin with ten stable isotopes, or with no stable isotope, such as technetium.
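The bookkeeping behind such statements is simply N = A - Z (neutron number equals mass number minus atomic number), as a tiny sketch shows:

```python
# Neutron number N = mass number A - atomic number Z.
isotopes = {"carbon-12": (12, 6), "carbon-13": (13, 6),
            "hydrogen-1": (1, 1), "lead-208": (208, 82)}

for name, (A, Z) in isotopes.items():
    print(f"{name}: Z = {Z} protons, N = {A - Z} neutrons")
```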
The properties of an atomic nucleus depend on both atomic and neutron numbers. With their positive charge, the protons within the nucleus are repelled by the long-range electromagnetic force, but the much stronger, but short-range, nuclear force binds the nucleons closely together. Neutrons are required for the stability of nuclei, with the exception of the single-proton hydrogen nucleus. Neutrons are produced copiously in nuclear fission and fusion. They are a primary contributor to the nucleosynthesis of chemical elements within stars through fission, fusion, and neutron capture processes.
The neutron is essential to the production of nuclear power. In the decade after the neutron was discovered by James Chadwick in 1932, neutrons were used to induce many different types of nuclear transmutations. With the discovery of nuclear fission in 1938, it was quickly realized that, if a fission event produced neutrons, each of these neutrons might cause further fission events, in a cascade known as a nuclear chain reaction. These events and findings led to the first self-sustaining nuclear reactor (Chicago Pile-1, 1942) and the first nuclear weapon (Trinity, 1945).
Dedicated neutron sources like neutron generators, research reactors and spallation sources produce free neutrons for use in irradiation and in neutron scattering experiments. A free neutron spontaneously decays to a proton, an electron, and an antineutrino, with a mean lifetime of about 15 minutes. Free neutrons do not directly ionize atoms, but they do indirectly cause ionizing radiation, so they can be a biological hazard, depending on dose. A small natural "neutron background" flux of free neutrons exists on Earth, caused by cosmic ray showers, and by the natural radioactivity of spontaneously fissionable elements in the Earth's crust.
Neutrons in an atomic nucleus
An atomic nucleus is formed by a number of protons, Z (the atomic number), and a number of neutrons, N (the neutron number), bound together by the nuclear force. Protons and neutrons each have a mass of approximately one dalton. The atomic number determines the chemical properties of the atom, and the neutron number determines the isotope or nuclide. The terms isotope and nuclide are often used synonymously, but they refer to chemical and nuclear properties, respectively. Isotopes are nuclides with the same atomic number, but different neutron number. Nuclides with the same neutron number, but different atomic number, are called isotones. The atomic mass number, A, is equal to the sum of atomic and neutron numbers. Nuclides with the same atomic mass number, but different atomic and neutron numbers, are called isobars. The mass of a nucleus is always slightly less than the sum of its proton and neutron masses: the difference in mass represents the mass equivalent to nuclear binding energy, the energy which would need to be added to take the nucleus apart.
The nucleus of the most common isotope of the hydrogen atom (with the chemical symbol 1H) is a lone proton. The nuclei of the heavy hydrogen isotopes deuterium (D or 2H) and tritium (T or 3H) contain one proton bound to one and two neutrons, respectively. All other types of atomic nuclei are composed of two or more protons and various numbers of neutrons. The most common nuclide of the common chemical element lead, 208Pb, has 82 protons and 126 neutrons, for example.
Protons and neutrons behave almost identically under the influence of the nuclear force within the nucleus. They are therefore both referred to collectively as nucleons. The concept of isospin, in which the proton and neutron are viewed as two quantum states of the same particle, is used to model the interactions of nucleons by the nuclear or weak forces. Because of the strength of the nuclear force at short distances, the nuclear energy binding nucleons is more than seven orders of magnitude larger than the electromagnetic energy binding electrons in atoms. Nuclear reactions (such as nuclear fission) therefore have an energy density that is more than ten million times that of chemical reactions. Ultimately, the ability of the nuclear force to store energy arising from the electromagnetic repulsion of nuclear components is the basis for most of the energy that makes nuclear reactors or bombs possible. In nuclear fission, the absorption of a neutron by a heavy nuclide (e.g., uranium-235) causes the nuclide to become unstable and break into light nuclides and additional neutrons. The positively charged light nuclides then repel, releasing electromagnetic potential energy.
Additional Information
Neutrons are tiny subatomic particles that — along with protons — form the nucleus of an atom.
While the number of protons defines what element an atom is, the number of neutrons in the nucleus can vary, resulting in different isotopes of an element. For example, ordinary hydrogen contains one proton and no neutrons, but the isotopes of hydrogen, deuterium and tritium, have one and two neutrons, respectively, alongside the proton.
Neutrons are composite particles made up of three smaller, elementary particles called quarks, held together by the Strong Force. Specifically, a neutron contains one 'up' and two 'down' quarks. Particles made from three quarks are called baryons, and hence baryons contribute to all the baryonic 'visible' matter in the universe.
After Ernest Rutherford (with help from Ernest Marsden and Hans Geiger's gold-foil experiment) had discovered in 1911 that atoms have a nucleus, and then nine years later discovered that atomic nuclei are made, at least in part, of protons, the discovery of the neutron in 1932 by James Chadwick naturally followed.
The idea that there must be something else in an atom's nucleus came from the fact that the number of protons didn't match an atom's atomic weight. For example, an oxygen atom contains 8 protons, but has an atomic weight of 16, suggesting that it contains 8 other particles. However, these mystery particles would have to be electrically neutral, since atoms normally have no overall electric charge (the negative charge of the electrons cancels out the positive charge of the protons).
At the time, various scientists were experimenting with alpha particles, which are another name for helium nuclei, bombarding a material made from the element beryllium with an alpha particle stream. When the alpha particles impacted beryllium atoms, they produced mysterious particles that appeared to originate from within the beryllium atoms. Chadwick took these experiments one step further and saw that when the mystery particles hit a target made of paraffin wax, they would knock loose protons at high energy. In order to do this, Chadwick reasoned, the mystery particles must have more or less the same mass as a proton. Chadwick proclaimed this mystery particle to be the neutron, and in 1935 he won a Nobel Prize for his discovery.
As their name suggests, neutrons are electrically neutral, so they have no charge. Their mass is about 1.0014 times the mass of the proton; in other words, a neutron is approximately 0.1% heavier.
Neutrons don't like to exist on their own outside the nucleus. The binding energy of the Strong Force between them and protons in the nucleus keeps them stable, but when out on their own they undergo beta decay after about 15 minutes, transforming into a proton, an electron and an antineutrino.
Albert Einstein, in his famous equation E = mc^2, said that mass and energy are equivalent. Although the mass of a neutron and a proton are only slightly different, this slight difference means that a neutron has more mass, and therefore more energy, than a proton and an electron combined. That's why, when a neutron decays, it produces a proton and an electron.
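That mass budget can be checked with the measured rest energies (standard particle-data values, in MeV): the neutron outweighs the proton plus the electron by about 0.78 MeV, which is the energy available to the decay products:

```python
# Rest energies in MeV (standard particle-data values).
E_neutron = 939.565
E_proton = 938.272
E_electron = 0.511

# Energy available in neutron beta decay, n -> p + e + antineutrino.
# The surplus is shared as kinetic energy between the electron and the
# (nearly massless) antineutrino.
surplus = E_neutron - (E_proton + E_electron)
print(f"energy released: {surplus:.3f} MeV")  # ~0.782 MeV
```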
An isotope is a variation of an element that has a different number of neutrons. For instance, at the top of this article, we gave the example of the hydrogen isotopes deuterium and tritium, which have 1 and 2 extra neutrons, respectively. Some isotopes are stable, deuterium for instance. Others are unstable and inevitably undergo radioactive decay. Tritium is unstable; it has a half-life of about 12 years (a half-life is the time it takes on average for half of a given amount of an isotope like tritium to decay), but other isotopes decay far more rapidly, in a matter of minutes, seconds, or even fractions of a second.
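Half-life arithmetic follows the usual decay law N(t) = N0 * (1/2)^(t / t_half); for tritium's roughly 12-year half-life:

```python
def remaining_fraction(t_years, half_life_years):
    """Fraction of an isotope left after t years: (1/2) ** (t / half_life)."""
    return 0.5 ** (t_years / half_life_years)

# Tritium, half-life ~12.3 years:
for t in (12.3, 24.6, 50):
    print(f"after {t:5.1f} y: {remaining_fraction(t, 12.3):.3f} remaining")
# 0.500, 0.250, ~0.060
```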
Neutrons are also essential tools in nuclear reactions, in particular when inducing a chain reaction. Neutrons absorbed by atomic nuclei create unstable isotopes that then undergo nuclear fission (splitting into two smaller daughter nuclei of other elements). For example, when uranium-235 absorbs an extra neutron, it becomes unstable and breaks apart, releasing energy in the process.
Neutrons are also instrumental in the creation of heavy elements in massive stars, through a mechanism known as the r-process, with "r" meaning "rapid". This process was first detailed in the famous, Nobel Prize-winning B2FH paper by Margaret and Geoffrey Burbidge, William Fowler and Fred Hoyle that described the origins of the elements through stellar nucleosynthesis — the forging of elements by stars.
Stars like the sun can produce elements such as carbon, nitrogen and oxygen through nuclear fusion reactions. More massive stars can keep going, creating shells of increasingly heavier elements all the way up to iron-56 in the star's core. At that point, fusing elements heavier than iron consumes more energy than the reactions release, so fusion ceases, energy production grinds to a halt and the core of the star collapses, instigating a supernova. And it's in the incredibly violent blast of a supernova that conditions become extreme enough to liberate lots of free neutrons in a short space of time.
In the supernova blast, atomic nuclei are able to sweep up these free neutrons before they decay (this is why the process is described as rapid), instigating r-process nucleosynthesis. Once the nuclei are saturated with neutrons they become unstable and undergo beta decay, transforming the extra neutrons into protons. Adding protons changes which element a nucleus is, so this is a way of creating new, heavy elements such as gold, platinum and other precious metals. The gold in your jewelry was made billions of years ago by rapid neutron capture in a supernova!
Gist
An electron is an elementary particle consisting of a charge of negative electricity equal to about 1.602 × 10⁻¹⁹ coulomb and having a mass when at rest of about 9.109 × 10⁻³¹ kilogram, or about ¹/₁₈₃₆ that of a proton.
Summary
Electron, one of the three basic subatomic particles—along with protons and neutrons—that make up atoms, the basic building blocks of all matter and chemistry. The negatively charged electrons circle an atom’s central nucleus, which is formed by positively charged protons and the electrically neutral particles called neutrons. (The nucleus of the ordinary hydrogen atom is an exception, containing only one proton and no neutrons.) Like opposite ends of a magnet that attract one another, the negative electrons are attracted to a positive force, which binds them to the nucleus. The nucleus is small and dense compared with the electrons, which are the lightest charged particles in nature. The electrons circle the nucleus in orbital paths called shells, each of which holds only a certain number of electrons.
The electron was discovered in 1897 by the English physicist J.J. Thomson during investigations of cathode rays. His discovery of electrons, which he initially called corpuscles, played a pivotal role in revolutionizing knowledge of atomic structure. Under ordinary conditions electrons are bound to the positively charged nuclei of atoms by the attraction between opposite electric charges. In a neutral atom the number of electrons is identical to the number of positive charges on the nucleus. Any atom, however, may have more or fewer electrons than positive charges and thus be negatively or positively charged as a whole; these charged atoms are known as ions. Not all electrons are associated with atoms; some occur in a free state with ions in the form of matter known as plasma.
Within any given atom, electrons move about the nucleus in an orderly arrangement of orbitals, the attraction between electrons and nucleus overcoming repulsion among the electrons that would otherwise cause them to fly apart. These orbitals are organized in concentric shells proceeding outward from the nucleus with an increasing number of subshells. The electrons in orbitals closest to the nucleus are held most tightly; those in the outermost orbitals are shielded by intervening electrons and are the most loosely held by the nucleus. As the electrons move about within this structure, they form a diffuse cloud of negative charge that occupies nearly the entire volume of the atom. The arrangement of electrons in orbitals and shells around the nucleus is referred to as the electronic configuration of the atom. This electronic configuration determines not only the size of an individual atom but also the chemical activity of the atom. The classification of elements within groups of similar elements in the periodic table, for example, is based on the similarity in their electron structures.
Within the field of particle physics, there are two ways of classifying electrons. The electron is a fermion, a type of particle named after the Fermi-Dirac statistics that describe its behaviour. All fermions are characterized by half-integer values of their spin, where spin corresponds to the intrinsic angular momentum of the particle. The concept of spin is embodied in the wave equation for the electron formulated by P.A.M. Dirac. The Dirac wave equation also predicts the existence of the antimatter counterpart of the electron, the positron. Within the fermion group of subatomic particles, the electron can be further classified as a lepton. A lepton is a subatomic particle that reacts only by the electromagnetic, weak, and gravitational forces; it does not respond to the short-range strong force that acts between quarks and binds protons and neutrons in the atomic nucleus.
The lightest stable subatomic particle known, the electron carries a negative charge of 1.602176634 × 10⁻¹⁹ coulomb, which is considered the basic unit of electric charge. The rest mass of the electron is 9.1093837015 × 10⁻³¹ kg, which is only 1/1,836 the mass of a proton. An electron is therefore considered nearly massless in comparison with a proton or a neutron, and the electron mass is not included in calculating the mass number of an atom.
Details
The electron is a subatomic particle with a negative one elementary electric charge. Electrons belong to the first generation of the lepton particle family, and are generally thought to be elementary particles because they have no known components or substructure. The electron's mass is approximately 1/1836 that of the proton. Quantum mechanical properties of the electron include an intrinsic angular momentum (spin) of a half-integer value, expressed in units of the reduced Planck constant, ħ. Being fermions, no two electrons can occupy the same quantum state, per the Pauli exclusion principle. Like all elementary particles, electrons exhibit properties of both particles and waves: They can collide with other particles and can be diffracted like light. The wave properties of electrons are easier to observe with experiments than those of other particles like neutrons and protons because electrons have a lower mass and hence a longer de Broglie wavelength for a given energy.
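The last claim is easy to quantify. This minimal sketch (plain Python; the masses and constants are standard values hard-coded for illustration, not taken from the article) compares the de Broglie wavelength λ = h / √(2mE) of an electron and a proton carrying the same kinetic energy:

    import math

    # de Broglie wavelength at a given kinetic energy (non-relativistic):
    # lambda = h / p, with momentum p = sqrt(2 * m * E).
    H = 6.62607015e-34       # Planck constant, J*s
    EV = 1.602176634e-19     # one electronvolt in joules

    def de_broglie_nm(mass_kg, energy_ev):
        momentum = math.sqrt(2 * mass_kg * energy_ev * EV)
        return H / momentum * 1e9   # metres -> nanometres

    m_electron = 9.1093837015e-31   # kg
    m_proton   = 1.67262192369e-27  # kg

    for name, m in (("electron", m_electron), ("proton", m_proton)):
        print(f"{name}: {de_broglie_nm(m, 1.0):.4f} nm at 1 eV")

At the same energy the electron's wavelength comes out about 43 times longer (the square root of the mass ratio), which is why its wave behaviour is the easier one to observe.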
Electrons play an essential role in numerous physical phenomena, such as electricity, magnetism, chemistry, and thermal conductivity; they also participate in gravitational, electromagnetic, and weak interactions. Since an electron has charge, it has a surrounding electric field; if that electron is moving relative to an observer, the observer will observe it to generate a magnetic field. Electromagnetic fields produced from other sources will affect the motion of an electron according to the Lorentz force law. Electrons radiate or absorb energy in the form of photons when they are accelerated.
Laboratory instruments are capable of trapping individual electrons as well as electron plasma by the use of electromagnetic fields. Special telescopes can detect electron plasma in outer space. Electrons are involved in many applications, such as tribology or frictional charging, electrolysis, electrochemistry, battery technologies, electronics, welding, cathode-ray tubes, photoelectricity, photovoltaic solar panels, electron microscopes, radiation therapy, lasers, gaseous ionization detectors, and particle accelerators.
Interactions involving electrons with other subatomic particles are of interest in fields such as chemistry and nuclear physics. The Coulomb force interaction between the positive protons within atomic nuclei and the negative electrons outside them allows the two to combine into atoms. Ionization or differences in the proportions of negative electrons versus positive nuclei changes the binding energy of an atomic system. The exchange or sharing of the electrons between two or more atoms is the main cause of chemical bonding.
In 1838, British natural philosopher Richard Laming first hypothesized the concept of an indivisible quantity of electric charge to explain the chemical properties of atoms. Irish physicist George Johnstone Stoney named this charge 'electron' in 1891, and J. J. Thomson and his team of British physicists identified it as a particle in 1897 during the cathode-ray tube experiment.
Electrons participate in nuclear reactions, such as nucleosynthesis in stars, where they are known as beta particles. Electrons can be created through beta decay of radioactive isotopes and in high-energy collisions, for instance, when cosmic rays enter the atmosphere. The antiparticle of the electron is called the positron; it is identical to the electron, except that it carries electrical charge of the opposite sign. When an electron collides with a positron, both particles can be annihilated, producing gamma ray photons.
Additional Information
An electron is a negatively charged subatomic particle that can be either bound to an atom or free (not bound). An electron that is bound to an atom is one of the three primary types of particles within the atom -- the other two are protons and neutrons.
Together, protons and neutrons form an atom's nucleus. A proton has a positive charge that counters the electron's negative charge. When an atom has the same number of protons and electrons, it is in a neutral state.
Electrons differ from the other particles in multiple ways. They exist outside of the nucleus, are significantly smaller in mass and exhibit both wave-like and particle-like characteristics. An electron is also an elementary particle, which means that it is not made up of smaller components. Protons and neutrons are thought to be made up of quarks, so they are not elementary particles.
Shells, subshells and orbitals
In the early days of atomic study, scientists believed that an atom's electrons circled the nucleus in spherical orbits at specific distances, much like planets circle a sun. In this model -- referred to as the Bohr model -- the orbits furthest from the nucleus contain the greatest amount of energy. When an electron jumps from a higher energy orbit to a lower energy orbit, the atom releases electromagnetic radiation.
The Bohr model is no longer thought to be accurate, particularly as it pertains to how the electrons orbit the nucleus. While the model can still be useful in understanding the basics of electron distribution and different energy levels, it fails to consider the complexity of that distribution and how electrons inhabit the space around the nucleus, according to current quantum theory.
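Even though it has been superseded, the Bohr model still gives usable numbers for hydrogen. This minimal sketch (plain Python; the 13.6 eV ground-state energy and the hc constant are standard values, not taken from the article) computes the photon released when an electron drops between levels, using Eₙ = −13.6 eV / n²:

    # Bohr-model hydrogen levels: E_n = -13.6 eV / n**2.
    # A jump from n_high down to n_low emits the energy difference.
    RYDBERG_EV = 13.6
    HC_EV_NM = 1239.84   # (Planck constant * speed of light) in eV*nm

    def emitted_photon(n_high, n_low):
        energy_ev = RYDBERG_EV * (1 / n_low**2 - 1 / n_high**2)
        wavelength_nm = HC_EV_NM / energy_ev
        return energy_ev, wavelength_nm

    e, lam = emitted_photon(3, 2)   # the red Balmer-alpha line
    print(f"n=3 -> n=2: {e:.2f} eV, about {lam:.0f} nm")

The n=3 to n=2 jump yields a photon of about 1.89 eV, the familiar red 656 nm line of hydrogen.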
Electron movement is determined by calculating the probability of finding electrons in specific regions within the space that surrounds the atom's nucleus -- rather than by assuming fixed trajectories. The mathematically defined regions are based on three structural patterns:
* Shells. The concept of a shell originates with the Bohr model, although the theory around shells has evolved. Physicists now believe that a shell is a region of probability surrounding the nucleus. An atom can contain up to seven electron shells, depending on the type of atom. The shells exist at different levels around the nucleus. Shells furthest from the nucleus have the highest amounts of energy and those nearest have the lowest. Each shell is limited to a specific number of electrons, depending on its level and the configuration. A shell can contain one or more subshells, and a subshell can contain one or more orbitals.
* Subshells. A subshell is a collection of one or more orbitals of a specific type. There are four types of orbitals and subsequently four types of subshells -- designated as s, p, d and f, depending on their orbitals. An s subshell contains one s orbital, a p subshell contains three p orbitals, a d subshell contains five d orbitals, and an f subshell contains seven f orbitals. It has also been theorized that an atom can support a g subshell that contains nine g orbitals.
* Orbitals. An orbital is a specifically shaped region of space around the nucleus where an electron is most likely found. In other words, it is the region with the highest probability (over 90%) of containing the electron as it travels around the nucleus. An orbital might be shaped like a sphere (s orbital), a dumbbell (p orbital) or a more complex shape (d and f orbitals). Whatever its shape, an orbital can include a maximum of two electrons.
An atom's shells are numbered consecutively, starting at the nucleus and working out. A shell's number is often referred to as its n value. For example, the third shell might be referred to as n=3 or 3n. Letters are also sometimes used to refer to the shells. These include K, L, M, N, O, P and Q, again starting from the nucleus and working out. For instance, the third shell might be referred to as the M shell or 3m.
Each shell contains one or more specific types of subshells, which determine the maximum number of electrons that the shell can contain. For example, the first shell (K) contains a single s subshell that includes only one s orbital. As a result, the maximum number of electrons that the shell can contain is two. This means that an atom that has only a K shell is limited to two electrons. Only two elements, hydrogen and helium, have a single shell. Hydrogen contains only one electron and helium contains two.
The subshell/orbital configuration varies from one shell to the next, growing more complex until the fifth shell, at which point the complexity starts to taper off. For instance, the second shell (L) includes an s subshell and a p subshell. The s subshell contains one s orbital, and the p subshell contains three p orbitals. This means the shell can support up to eight electrons.
However, an atom with an L shell also contains a K shell. In fact, the L shell will start filling up after the K shell is filled. This means that an atom with an L shell can support up to 10 electrons because of the presence of both the K and L shells. For example, lithium and neon contain both K and L shells. A lithium atom has only three electrons, two in the K shell and one in the L shell, but a neon atom has 10 electrons, two in the K shell and eight in the L shell.
In general, this same pattern continues for all seven shells, with the inner shells filling up with electrons before the outer shells. However, this is only a tendency. Electrons gravitate toward the most stable configuration, which is usually the inner shells, but it's also possible for an outer shell to start filling up with electrons before the lower shell is completely full.
Regardless of the order in which shells fill with electrons, the shells themselves determine the maximum number of electrons they can support based on their subshells and orbitals. All but the first shell includes a p subshell, only the third through sixth shells contain d subshells, and only the fourth and fifth contain f subshells. All seven shells include an s subshell.
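Those capacity rules can be tabulated directly from the orbital counts given above. The following minimal sketch (plain Python; the mapping of shells to subshells is the simplified one described in this article) reproduces the familiar 2n² capacity pattern:

    # Electron capacity per shell, from subshell orbital counts.
    # Each orbital holds at most two electrons (Pauli exclusion).
    ORBITALS = {"s": 1, "p": 3, "d": 5, "f": 7}

    def shell_capacity(n):
        # Shell n contains the first n subshell types: s, p, d, f.
        subshells = list(ORBITALS)[:n]
        return sum(2 * ORBITALS[s] for s in subshells)

    for n, letter in enumerate("KLMN", start=1):
        print(f"Shell {n} ({letter}): up to {shell_capacity(n)} electrons")
        # Prints 2, 8, 18, 32 -- i.e., 2 * n**2

Note that, as explained above, these are capacities only; real atoms do not always fill the shells strictly in order.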
Electrons and electricity
In electrical conductors, current flows as a result of electrons jumping from atom to atom as they move from the negative to the positive electric pole. In semiconductor materials, current also results from electron movement; however, the movement is based on electron deficiencies in atoms. An electron-deficient atom in a semiconductor is called a hole. In this case, the current moves from the positive to the negative electric pole.
The charge of a single electron is referred to as the unit electrical charge. It carries a negative charge that is equal to but opposite the positive charge on a proton or hole. However, the amount of electrical charge is usually not measured on a single electron because that amount is so small.
Instead, the standard unit of electrical charge is the coulomb (symbolized by C). A coulomb contains about 6.24 × 10¹⁸ electrons. An electron's charge (symbolized by e) is about 1.60 × 10⁻¹⁹ C. The mass of an electron at rest (symbolized by mₑ) is approximately 9.11 × 10⁻³¹ kilograms (kg). If electrons are accelerated to nearly the speed of light, as in a particle accelerator, they will have greater mass because of relativistic effects.
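Those figures are mutually consistent: one coulomb divided by the charge of a single electron gives the electron count, and one ampere is one coulomb per second. A minimal sketch in Python (constants hard-coded; the one-ampere figure is an illustrative choice, not from the article):

    # Electrons per coulomb, and electrons per second at one ampere.
    E_CHARGE = 1.602176634e-19   # electron charge, coulombs

    electrons_per_coulomb = 1 / E_CHARGE
    print(f"Electrons in one coulomb: {electrons_per_coulomb:.3e}")  # ~6.24e18

    current_amps = 1.0           # 1 A = 1 C/s
    print(f"Electrons per second at {current_amps:.0f} A: "
          f"{current_amps / E_CHARGE:.3e}")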
Gist
A proton is an elementary particle that is identical with the nucleus of the hydrogen atom, that along with the neutron is a constituent of all other atomic nuclei, that carries a positive charge numerically equal to the charge of an electron, and that has a mass of 1.673 × 10⁻²⁷ kilogram.
Summary
A proton is one of the three basic subatomic particles—along with neutrons and electrons—that make up atoms, the basic building blocks of all matter and chemistry. It is the positively charged particle that, together with the electrically neutral particles called neutrons, makes up the nucleus of an atom. (The nucleus of the ordinary hydrogen atom is an exception; it contains one proton but no neutrons.) Neutrons are slightly heavier than protons, but the proton is 1,836 times as massive as the electron, the lightest charged particle in nature. The proton’s positive charge is equal and opposite to the negative charge on an electron, meaning a neutral atom has an equal number of protons and electrons.
More than 90 types of atoms exist in nature, and each kind of atom forms a different chemical element. Every nucleus of a given chemical element has the same number of protons, and this quantity of protons defines the atomic number of the element and determines the element’s position in the periodic table.
The discovery of the proton dates to the earliest investigations of atomic structure. While studying streams of ionized gaseous atoms and molecules from which electrons had been stripped, Wilhelm Wien (1898) and J.J. Thomson (1910) identified a positive particle equal in mass to the hydrogen atom. Ernest Rutherford showed (1919) that nitrogen under alpha-particle bombardment ejects what appear to be hydrogen nuclei. By 1920 he had accepted the hydrogen nucleus as an elementary particle, naming it proton.
High-energy particle-physics studies in the late 20th century refined the structural understanding of the nature of the proton within the group of subatomic particles. Protons and neutrons have been shown to be made up of smaller particles and are classified as baryons—particles composed of three elementary units of matter known as quarks.
Protons from ionized hydrogen are given high velocities in particle accelerators and are commonly used as projectiles to produce and study nuclear reactions. Protons are the chief constituent of primary cosmic rays and are among the products of some types of artificial nuclear reactions.
Details
A proton is a stable subatomic particle, symbol p⁺, with a positive electric charge of +1 e (elementary charge). Its mass is slightly less than that of a neutron and 1,836 times the mass of an electron (the proton-to-electron mass ratio). Protons and neutrons, each with a mass of approximately one atomic mass unit, are jointly referred to as "nucleons" (particles present in atomic nuclei).
One or more protons are present in the nucleus of every atom. They provide the attractive electrostatic central force that binds the atomic electrons. The number of protons in the nucleus is the defining property of an element, and is referred to as the atomic number (represented by the symbol Z). Since each element has a unique number of protons, each element has its own unique atomic number, which determines the number of atomic electrons and consequently the chemical characteristics of the element.
The word proton is Greek for "first", and this name was given to the hydrogen nucleus by Ernest Rutherford in 1920. In previous years, Rutherford had discovered that the hydrogen nucleus (known to be the lightest nucleus) could be extracted from the nuclei of nitrogen by atomic collisions. Protons were therefore a candidate to be a fundamental or elementary particle, and hence a building block of nitrogen and all other heavier atomic nuclei.
Although protons were originally considered to be elementary particles, in the modern Standard Model of particle physics, protons are now known to be composite particles, containing three valence quarks, and together with neutrons are now classified as hadrons. Protons are composed of two up quarks of charge +2/3 e and one down quark of charge −1/3 e. The rest masses of quarks contribute only about 1% of a proton's mass. The remainder of a proton's mass is due to quantum chromodynamics binding energy, which includes the kinetic energy of the quarks and the energy of the gluon fields that bind the quarks together. Because protons are not fundamental particles, they possess a measurable size; the root mean square charge radius of a proton is about 0.84–0.87 fm (1 fm = 10⁻¹⁵ m). In 2019, two different studies, using different techniques, found this radius to be 0.833 fm, with an uncertainty of ±0.010 fm.
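Those fractional quark charges sum exactly to the observed charges, which a few lines of Python can verify with exact rational arithmetic. The proton's uud composition is stated in the text; the neutron's udd composition is the standard one, though not spelled out here, and the code itself is just an illustration:

    from fractions import Fraction

    # Quark electric charges, in units of the elementary charge e.
    UP, DOWN = Fraction(2, 3), Fraction(-1, 3)

    proton_charge  = 2 * UP + DOWN    # proton  = uud
    neutron_charge = UP + 2 * DOWN    # neutron = udd

    print("proton charge: ", proton_charge, "e")   # 1 e
    print("neutron charge:", neutron_charge, "e")  # 0 e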
Free protons occur occasionally on Earth: thunderstorms can produce protons with energies of up to several tens of MeV. At sufficiently low temperatures and kinetic energies, free protons will bind to electrons. However, the character of such bound protons does not change, and they remain protons. A fast proton moving through matter will slow by interactions with electrons and nuclei, until it is captured by the electron cloud of an atom. The result is a diatomic or polyatomic ion containing hydrogen. In a vacuum, when free electrons are present, a sufficiently slow proton may pick up a single free electron, becoming a neutral hydrogen atom, which is chemically a free radical. Such "free hydrogen atoms" tend to react chemically with many other types of atoms at sufficiently low energies. When free hydrogen atoms react with each other, they form neutral hydrogen molecules (H2), which are the most common molecular component of molecular clouds in interstellar space.
Free protons are routinely used in accelerators, for proton therapy and for various particle physics experiments, with the most powerful example being the Large Hadron Collider.
Description
Protons are spin-1/2 fermions and are composed of three valence quarks, making them baryons (a sub-type of hadrons). The two up quarks and one down quark of a proton are held together by the strong force, mediated by gluons. A modern perspective has a proton composed of the valence quarks (up, up, down), the gluons, and transitory pairs of sea quarks. Protons have a positive charge distribution, which decays approximately exponentially, with a root mean square charge radius of about 0.8 fm.
Protons and neutrons are both nucleons, which may be bound together by the nuclear force to form atomic nuclei. The nucleus of the most common isotope of the hydrogen atom (with the chemical symbol "H") is a lone proton. The nuclei of the heavy hydrogen isotopes deuterium and tritium contain one proton bound to one and two neutrons, respectively. All other types of atomic nuclei are composed of two or more protons and various numbers of neutrons.
History
The concept of a hydrogen-like particle as a constituent of other atoms was developed over a long period. As early as 1815, William Prout proposed that all atoms are composed of hydrogen atoms (which he called "protyles"), based on a simplistic interpretation of early values of atomic weights (see Prout's hypothesis), which was disproved when more accurate values were measured.
In 1886, Eugen Goldstein discovered canal rays (also known as anode rays) and showed that they were positively charged particles (ions) produced from gases. However, since particles from different gases had different values of charge-to-mass ratio (q/m), they could not be identified with a single particle, unlike the negative electrons discovered by J. J. Thomson. Wilhelm Wien in 1898 identified the hydrogen ion as the particle with the highest charge-to-mass ratio in ionized gases.
Following the discovery of the atomic nucleus by Ernest Rutherford in 1911, Antonius van den Broek proposed that the place of each element in the periodic table (its atomic number) is equal to its nuclear charge. This was confirmed experimentally by Henry Moseley in 1913 using X-ray spectra.
In experiments carried out beginning in 1917 (reported in 1919 and 1925), Rutherford proved that the hydrogen nucleus is present in other nuclei, a result usually described as the discovery of the proton. These experiments began after Rutherford observed that when alpha particles were shot into air, he could detect scintillations on a zinc sulfide screen at distances well beyond the normal range of travel of alpha particles, corresponding instead to the range of travel of hydrogen atoms (protons). After experimentation, Rutherford traced the reaction to the nitrogen in air and found that the effect was larger when alpha particles were introduced into pure nitrogen gas. In 1919, Rutherford assumed that the alpha particle merely knocked a proton out of nitrogen, turning it into carbon. After observing Blackett's cloud chamber images in 1925, Rutherford realized that the alpha particle must have been absorbed: had it not been, it would have knocked a proton off the nitrogen and produced three charged particles (a negatively charged carbon nucleus, a proton and an alpha particle), leaving three tracks in the cloud chamber, whereas only two tracks were observed. After capture of the alpha particle, a hydrogen nucleus is ejected, so that heavy oxygen, not carbon, is the result; i.e., the atomic number Z of the nucleus is increased rather than reduced.
Depending on one's perspective, either 1919 (when it was seen experimentally as derived from another source than hydrogen) or 1920 (when it was recognized and proposed as an elementary particle) may be regarded as the moment when the proton was 'discovered'.
Rutherford knew hydrogen to be the simplest and lightest element and was influenced by Prout's hypothesis that hydrogen was the building block of all elements. Discovery that the hydrogen nucleus is present in other nuclei as an elementary particle led Rutherford to give the hydrogen nucleus H+ a special name as a particle, since he suspected that hydrogen, the lightest element, contained only one of these particles. He named this new fundamental building block of the nucleus the proton, after the neuter singular of the Greek word for "first". However, Rutherford also had in mind the word protyle as used by Prout. Rutherford spoke at the British Association for the Advancement of Science at its Cardiff meeting beginning 24 August 1920. At the meeting, he was asked by Oliver Lodge for a new name for the positive hydrogen nucleus to avoid confusion with the neutral hydrogen atom. He initially suggested both proton and prouton (after Prout). Rutherford later reported that the meeting had accepted his suggestion that the hydrogen nucleus be named the "proton", following Prout's word "protyle". The first use of the word "proton" in the scientific literature appeared in 1920.
Gist
A biopsy is a procedure to remove a piece of tissue or a sample of cells from your body so that it can be tested in a laboratory. You may undergo a biopsy if you're experiencing certain signs and symptoms or if your health care provider has identified an area of concern.
Summary
A biopsy is a medical procedure or surgery that a doctor performs to obtain a sample of cells. This can help them diagnose cancer and other health conditions that can cause abnormalities.
In some cases, your doctor may decide that he or she needs a sample of your tissue or your cells to help diagnose an illness or identify a cancer. The removal of tissue or cells for analysis is called a biopsy.
While a biopsy may sound scary, it’s important to remember that most are entirely pain-free and low-risk procedures. Depending on your situation, a piece of skin, tissue, organ, or suspected tumor will be surgically removed and sent to a lab for testing.
Why a biopsy is done
If you have been experiencing symptoms normally associated with cancer, and your doctor has located an area of concern, he or she may order a biopsy to help determine if that area is cancerous.
A biopsy is the only sure way to diagnose most cancers. Imaging tests like CT scans and X-rays can help identify areas of concern, but they can’t differentiate between cancerous and noncancerous cells.
Biopsies are typically associated with cancer, but just because your doctor orders a biopsy, it doesn’t mean that you have cancer. Doctors use biopsies to test whether abnormalities in your body are caused by cancer or by other conditions.
For example, if a woman has a lump in her breast, an imaging test would confirm the lump, but a biopsy is the only way to determine whether it’s breast cancer or another noncancerous condition, such as a benign cyst.
Types of biopsies
There are several different kinds of biopsies. Your doctor will choose the type to use based on your condition and the area of your body that needs closer review.
Whatever the type, you’ll be given local anesthesia to numb the area where the incision is made.
Bone marrow biopsy
Inside some of your larger bones, like the hip or the femur in your leg, blood cells are produced in a spongy material called marrow.
If your doctor suspects that there are problems with your blood, you may undergo a bone marrow biopsy. This test can single out both cancerous and noncancerous conditions like leukemia, anemia, infection, or lymphoma. The test is also used to check if cancer cells from another part of the body have spread to your bones.
Bone marrow is most easily accessed using a long needle inserted into your hipbone. This may be done in a hospital or doctor’s office. The insides of your bones cannot be numbed, so some people feel a dull pain during this procedure. Others, however, only feel an initial sharp pain as the local anesthetic is injected.
Endoscopic biopsy
Endoscopic biopsies are used to reach tissue inside the body in order to gather samples from places like the bladder, colon, or lung.
During this procedure, your doctor uses a flexible thin tube called an endoscope. The endoscope has a tiny camera and a light at the end. A video monitor allows your doctor to view the images. Small surgical tools are also inserted into the endoscope. Using the video, your doctor can guide these to collect a sample.
The endoscope can be inserted through a small incision in your body, or through any opening in the body, including the mouth, nose, rectum, or urethra. Endoscopies normally take anywhere from five to 20 minutes.
This procedure can be done in a hospital or in a doctor’s office. Afterward, you might feel mildly uncomfortable, or have bloating, gas, or a sore throat. These will all pass in time, but if you are concerned, you should contact your doctor.
Needle biopsies
Needle biopsies are used to collect skin samples, or for any tissue that is easily accessible under the skin. The different types of needle biopsies include the following:
* Core needle biopsies use a medium-sized needle to extract a column of tissue, in the same way that core samples are taken from the earth.
* Fine needle biopsies use a thin needle that is attached to a syringe, allowing fluids and cells to be drawn out.
* Image-guided biopsies are guided with imaging procedures — such as X-ray or CT scans — so your doctor can access specific areas, such as the lung, liver, or other organs.
* Vacuum-assisted biopsies use suction from a vacuum to collect cells.
Skin biopsy
If you have a rash or lesion on your skin which is suspicious for a certain condition, does not respond to therapy prescribed by your doctor, or the cause of which is unknown, your doctor may perform or order a biopsy of the involved area of skin. This can be done by using local anesthesia and removing a small piece of the area with a razor blade, a scalpel, or a small, circular blade called a “punch.” The specimen will be sent to the lab to look for evidence of conditions such as infection, cancer, and inflammation of the skin structures or blood vessels.
Surgical biopsy
Sometimes a patient may have an area of concern that cannot be safely or effectively reached using the methods described above or the results of other biopsy specimens have been negative. An example would be a tumor in the abdomen near the aorta. In this case, a surgeon may need to get a specimen using a laparoscope or by making a traditional incision.
Details
A biopsy is a medical test commonly performed by a surgeon, interventional radiologist, or an interventional cardiologist. The process involves extraction of sample cells or tissues for examination to determine the presence or extent of a disease. The tissue is then fixed, dehydrated, embedded, sectioned, stained and mounted before it is generally examined under a microscope by a pathologist; it may also be analyzed chemically. When an entire lump or suspicious area is removed, the procedure is called an excisional biopsy. An incisional biopsy or core biopsy samples a portion of the abnormal tissue without attempting to remove the entire lesion or tumor. When a sample of tissue or fluid is removed with a needle in such a way that cells are removed without preserving the histological architecture of the tissue cells, the procedure is called a needle aspiration biopsy. Biopsies are most commonly performed for insight into possible cancerous or inflammatory conditions.
History
The Arab physician Abulcasis (c. 936–1013) developed one of the earliest diagnostic biopsies. He used a needle to puncture a goiter and then characterized the material.
Etymology
The term biopsy reflects the Greek words bios, "life," and opsis, "a sight."
The French dermatologist Ernest Besnier introduced the word biopsie to the medical community in 1879.
Medical use
Cancer
When cancer is suspected, a variety of biopsy techniques can be applied. An excisional biopsy is an attempt to remove an entire lesion. When the specimen is evaluated, in addition to making a diagnosis, the amount of uninvolved tissue around the lesion (the surgical margin of the specimen) is examined to see whether the disease has spread beyond the area biopsied. "Clear margins" or "negative margins" means that no disease was found at the edges of the biopsy specimen. "Positive margins" means that disease was found, and a wider excision may be needed, depending on the diagnosis.
When intact removal is not indicated for a variety of reasons, a wedge of tissue may be taken in an incisional biopsy. In some cases, a sample can be collected by devices that "bite" a sample. Needles of various sizes can collect a column of tissue within their bore (core biopsy), while smaller-diameter needles collect cells and cell clusters (fine needle aspiration biopsy).
Pathologic examination of a biopsy can determine whether a lesion is benign or malignant, and can help differentiate between different types of cancer. In contrast to a biopsy that merely samples a lesion, a larger excisional specimen called a resection may come to a pathologist, typically from a surgeon attempting to eradicate a known lesion from a patient. For example, a pathologist would examine a mastectomy specimen, even if a previous nonexcisional breast biopsy had already established the diagnosis of breast cancer. Examination of the full mastectomy specimen would confirm the exact nature of the cancer (subclassification of tumor and histologic "grading") and reveal the extent of its spread (pathologic "staging").
Liquid biopsy
There are two types of liquid biopsy (which is not really a biopsy, as these are blood tests that do not require sampling of tissue): circulating tumor cell assays and cell-free circulating tumor DNA tests. These methods provide a non-invasive alternative to repeat invasive biopsies to monitor cancer treatment, test available drugs against the circulating tumor cells, evaluate the mutations in cancer and plan individualized treatments. Because cancer is a heterogeneous genetic disease, and excisional biopsies provide only a snapshot in time of some of the rapid, dynamic genetic changes occurring in tumors, liquid biopsies provide some advantages over tissue biopsy-based genomic testing. Moreover, excisional biopsies are invasive, cannot be used repeatedly, and are ineffective in understanding the dynamics of tumor progression and metastasis. By detecting, quantifying and characterizing vital circulating tumor cells or genomic alterations in CTCs and cell-free DNA in blood, liquid biopsy can provide real-time information on the stage of tumor progression, treatment effectiveness, and cancer metastasis risk. This technological development could make it possible to diagnose and manage cancer from repeated blood tests rather than from a traditional biopsy.
Circulating tumor cell tests are already available from maintrac, though not yet covered by insurance, and are under development by many pharmaceutical companies. These tests analyze circulating tumor cells (CTCs). Analysis of individual CTCs has demonstrated a high level of heterogeneity at the single-cell level for both protein expression and protein localization, and the CTCs reflected both the primary biopsy and the changes seen in the metastatic sites.
Analysis of cell-free circulating tumor DNA (cfDNA) has an advantage over circulating tumor cell assays in that there is approximately 100 times more cell-free DNA than there is DNA in circulating tumor cells. These tests analyze fragments of tumor-cell DNA that are continuously shed by tumors into the bloodstream. Companies offering cfDNA next generation sequencing testing include Personal Genome Diagnostics and Guardant Health. These tests are moving into widespread use when a tissue biopsy has insufficient material for DNA testing or when it is not safe to do an invasive biopsy procedure, according to a recent report of results on over 15,000 advanced cancer patients sequenced with the Guardant Health test.
A 2014 study of the blood of 846 patients with 15 different types of cancer in 24 institutions was able to detect the presence of cancer DNA in the body. They found tumor DNA in the blood of more than 80 percent of patients with metastatic cancers and about 47 percent of those with localized tumors. The test does not indicate the tumor site(s) or other information about the tumor. The test did not produce false positives.
Such tests may also be useful to assess whether malignant cells remain in patients whose tumors have been surgically removed. Up to 30 percent are expected to relapse because some tumor cells remain. Initial studies identified about half the patients who later relapsed, again without false positives.
Another potential use is to track the specific DNA mutations driving a tumor. Many new cancer medications block specific molecular processes. Such tests could allow easier targeting of therapy to the tumor.
Precancerous conditions
For easily detected and accessed sites, any suspicious lesions may be assessed. Originally, this meant the skin or superficial masses. X-ray, then later CT, MRI, and ultrasound, along with endoscopy, extended the range.
Inflammatory conditions
A biopsy of the temporal arteries is often performed for suspected vasculitis. In inflammatory bowel disease (Crohn's disease and ulcerative colitis), frequent biopsies are taken to assess the activity of disease and to assess changes that precede malignancy.
Biopsy specimens are often taken from part of a lesion when the cause of a disease is uncertain or its extent or exact character is in doubt. Vasculitis, for instance, is usually diagnosed on biopsy.
* Kidney disease: Biopsy and fluorescence microscopy are key in the diagnosis of alterations of renal function. Immunofluorescence plays a vital role in the diagnosis of crescentic glomerulonephritis.
* Infectious disease: Lymph node enlargement may be due to a variety of infectious or autoimmune diseases.
* Metabolic disease: Some conditions affect the whole body, but certain sites are selectively biopsied because they are easily accessed. Amyloidosis is a condition where degraded proteins accumulate in body tissues. In order to make the diagnosis, an easily accessed site such as the gingiva (gum tissue) may be biopsied.
* Transplantation: Biopsies of transplanted organs are performed in order to determine that they are not being rejected or that the disease that necessitated transplant has not recurred.
* Fertility: A testicular biopsy is used to evaluate male fertility and to find the cause of possible infertility, e.g., when sperm quality is low but hormone levels are still within normal ranges.
Analysis of biopsied material
After the biopsy is performed, the sample of tissue that was removed from the patient is sent to the pathology laboratory. A pathologist specializes in diagnosing diseases (such as cancer) by examining tissue under a microscope. When the laboratory receives the biopsy sample, the tissue is processed and an extremely thin slice of tissue is removed from the sample and attached to a glass slide. Any remaining tissue is saved for use in later studies, if required.
The slide with the tissue attached is treated with dyes that stain the tissue, which allows the individual cells in the tissue to be seen more clearly. The slide is then given to the pathologist, who examines the tissue under a microscope, looking for any abnormal findings. The pathologist then prepares a report that lists any abnormal or important findings from the biopsy. This report is sent to the surgeon who originally performed the biopsy on the patient.