Math Is Fun Forum


#1151 2021-10-05 00:05:14

Registered: 2005-06-28
Posts: 34,916

Re: Miscellany

1128) Harvard University

Harvard University is the oldest institution of higher learning in the United States (founded 1636) and one of the nation’s most prestigious; it is one of the Ivy League schools. The main university campus lies along the Charles River in Cambridge, Massachusetts, a few miles west of downtown Boston. Harvard’s total enrollment is about 23,000.

Harvard’s history began when a college was established at New Towne, which was later renamed Cambridge for the English alma mater of some of the leading colonists. Classes began in the summer of 1638 with one master in a single frame house and a “college yard.” Harvard was named for a Puritan minister, John Harvard, who left the college his books and half of his estate.

At its inception Harvard was under church sponsorship, although it was not formally affiliated with any religious body. During its first two centuries the college was gradually liberated, first from clerical and later from political control, until in 1865 the university alumni began electing members of the governing board. During his long tenure as Harvard’s president (1869–1909), Charles W. Eliot made Harvard into an institution with national influence.

The alumni and faculty of Harvard have been closely associated with many areas of American intellectual and political development. By the end of the first decade of the 21st century, Harvard had educated seven U.S. presidents—John Adams, John Quincy Adams, Rutherford B. Hayes, Theodore Roosevelt, Franklin D. Roosevelt, John F. Kennedy, and Barack Obama—and a number of justices, cabinet officers, and congressional leaders. Literary figures among Harvard graduates include Ralph Waldo Emerson, Oliver Wendell Holmes, Henry David Thoreau, James Russell Lowell, Henry James, Henry Adams, T.S. Eliot, John Dos Passos, E.E. Cummings, Walter Lippmann, and Norman Mailer. Other notable intellectual figures who graduated from or taught at Harvard include the historians Francis Parkman, W.E.B. Du Bois, and Samuel Eliot Morison; the astronomer Benjamin Peirce; the chemist Wolcott Gibbs; and the naturalist Louis Agassiz. William James introduced the experimental study of psychology into the United States at Harvard in the 1870s.

Harvard’s undergraduate school, Harvard College, contains about one-third of the total student body. The core of the university’s teaching staff consists of the faculty of arts and sciences, which includes the graduate faculty of arts and sciences. The university has graduate or professional schools of medicine, law, business, divinity, education, government, dental medicine, design, and public health. The schools of law, medicine, and business are particularly prestigious. Among the advanced research institutions affiliated with Harvard are the Museum of Comparative Zoology (founded in 1859 by Agassiz), the Gray Herbarium, the Peabody Museum of Archaeology and Ethnology, the Arnold Arboretum, and the Fogg Art Museum. Also associated with the university are an astronomical observatory in Harvard, Massachusetts; the Dumbarton Oaks Research Library and Collection in Washington, D.C., a centre for Byzantine and pre-Columbian studies; and the Harvard-Yenching Institute in Cambridge for research on East and Southeast Asia. The Harvard University Library is one of the largest and most important university libraries in the world.

Radcliffe College, one of the Seven Sisters schools, evolved from informal instruction offered to individual women or small groups of women by Harvard University faculty in the 1870s. In 1879 a faculty group called the Harvard Annex made a full course of study available to women, despite resistance to coeducation from the university’s administration. Following unsuccessful efforts to have women admitted directly to degree programs at Harvard, the Annex, which had incorporated as the Society for the Collegiate Instruction of Women, chartered Radcliffe College in 1894. The college was named for the colonial philanthropist Ann Radcliffe, who established the first scholarship fund at Harvard in 1643.

Until the 1960s Radcliffe operated as a coordinate college, drawing most of its instructors and other resources from Harvard. Radcliffe graduates, however, were not granted Harvard degrees until 1963. Diplomas from that time on were signed by the presidents of both Harvard and Radcliffe. Women undergraduates enrolled at Radcliffe were technically also enrolled at Harvard College, and instruction was coeducational.

Although its 1977 agreement with Harvard University called for the integration of select functions, Radcliffe College maintained a separate corporate identity for its property and endowments and continued to offer complementary educational and extracurricular programs for both undergraduate and graduate students, including career programs, a publishing course, and graduate-level workshops and seminars in women’s studies.

In 1999 Radcliffe and Harvard formally merged, and a new school, the Radcliffe Institute for Advanced Study at Harvard University, was established. The institute focuses on Radcliffe’s former fields of study and programs and also offers such new ones as nondegree educational programs and the study of women, gender, and society.


It is no good to try to stop knowledge from going forward. Ignorance is never better than knowledge - Enrico Fermi. 

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.


#1152 2021-10-07 00:59:34


Re: Miscellany

1129) University of Munich

The University of Munich, in full Ludwig Maximilian University of Munich (German: Ludwig-Maximilians-Universität München), is an autonomous coeducational institution of higher learning supported by the state of Bavaria in Germany. It was founded in 1472 at Ingolstadt by the duke of Bavaria, who modeled it after the University of Vienna. During the Protestant Reformation, Johann Eck made the university a centre of Roman Catholic opposition to Martin Luther. In 1799 schools of economics and political science were established, and the following year King Maximilian Joseph moved the school to Landshut, giving it the name Ludwig Maximilian. The dukes of Bavaria continued their strong support for the school, and in 1826 King Louis I moved it to Munich. A technical school with courses in agriculture and forestry was founded in 1868. The university’s faculty of Catholic theology continues to be influential, although a faculty of Protestant theology has been added. Affiliated with the University of Munich are more than 200 constituent institutes, seminars, and clinics.

Ludwig Maximilian University of Munich (also referred to as LMU or the University of Munich) is a public research university located in Munich, Germany.

The University of Munich is Germany's sixth-oldest university in continuous operation. Originally established in Ingolstadt in 1472 by Duke Ludwig IX of Bavaria-Landshut, the university was moved in 1800 to Landshut by King Maximilian I of Bavaria when Ingolstadt was threatened by the French, before being relocated to its present-day location in Munich in 1826 by King Ludwig I of Bavaria. In 1802 the university was officially named Ludwig-Maximilians-Universität by King Maximilian I in honour of himself and of the university's original founder.

The University of Munich is associated with 43 Nobel laureates (as of October 2020), among them Wilhelm Röntgen, Max Planck, Werner Heisenberg, Otto Hahn, and Thomas Mann. Pope Benedict XVI was also a student and professor at the university. Its notable alumni, faculty, and researchers include Rudolf Peierls, Josef Mengele, Richard Strauss, Walter Benjamin, Joseph Campbell, Muhammad Iqbal, Marie Stopes, Wolfgang Pauli, Bertolt Brecht, Max Horkheimer, Karl Loewenstein, Carl Schmitt, Gustav Radbruch, Ernst Cassirer, Ernst Bloch, and Konrad Adenauer. The LMU has recently been conferred the title of "University of Excellence" under the German Universities Excellence Initiative.

LMU is currently the second-largest university in Germany by student population; in the winter semester of 2018/2019 it had a total of 51,606 matriculated students. Of these, 9,424 were freshmen, while international students totalled 8,875, or approximately 17 percent of the student population. As for operating budget, in 2018 the university recorded total funding of 734.9 million euros excluding the university hospital; including the hospital, total funding amounted to approximately 1.94 billion euros.




#1153 2021-10-08 00:40:41


Re: Miscellany

1130) University of Oxford

The University of Oxford is an autonomous English institution of higher learning at Oxford, Oxfordshire, and one of the world’s great universities. It lies along the upper course of the River Thames (called by Oxonians the Isis), 50 miles (80 km) north-northwest of London.

Sketchy evidence indicates that schools existed at Oxford by the early 12th century. By the end of that century, a university was well established, perhaps resulting from the barring of English students from the University of Paris around 1167. Oxford was modeled on the University of Paris, with initial faculties of theology, law, medicine, and the liberal arts.

In the 13th century the university gained added strength, particularly in theology, with the establishment of several religious orders, principally Dominicans and Franciscans, in the town of Oxford. The university had no buildings in its early years; lectures were given in hired halls or churches. The various colleges of Oxford were originally merely endowed boardinghouses for impoverished scholars. They were intended primarily for masters or bachelors of arts who needed financial assistance to enable them to continue study for a higher degree. The earliest of these colleges, University College, was founded in 1249. Balliol College was founded about 1263, and Merton College in 1264.

During the early history of Oxford, its reputation was based on theology and the liberal arts. But it also gave more-serious treatment to the physical sciences than did the University of Paris: Roger Bacon, after leaving Paris, conducted his scientific experiments and lectured at Oxford from 1247 to 1257. Bacon was one of several influential Franciscans at the university during the 13th and 14th centuries. Among the others were Duns Scotus and William of Ockham. John Wycliffe (c. 1330–84) spent most of his life as a resident Oxford doctor.

Beginning in the 13th century, the university gained charters from the crown, but the religious foundations in Oxford town were suppressed during the Protestant Reformation. During the Renaissance, Desiderius Erasmus carried the new learning to Oxford, and such scholars as William Grocyn, John Colet, and Sir Thomas More enhanced the university’s reputation. In the early 16th century professorships began to be endowed, and in 1571 an act of Parliament led to the incorporation of the university. The university’s statutes were codified by its chancellor, Archbishop William Laud, in 1636, and in the latter part of the 17th century interest in scientific studies increased substantially. Since the Renaissance, Oxford has traditionally held the highest reputation for scholarship and instruction in the classics, theology, and political science.

In the 19th century the university’s enrollment and its professorial staff were greatly expanded. The first women’s college at Oxford, Lady Margaret Hall, was founded in 1878, and women were first admitted to full membership in the university in 1920. In the 20th century Oxford’s curriculum was modernized. Science came to be taken much more seriously and professionally, and many new faculties were added, including ones for modern languages and economics. Postgraduate studies also expanded greatly in the 20th century.

Oxford houses two renowned scholarly institutions, the Bodleian Library and the Ashmolean Museum of Art and Archaeology, as well as the Museum of the History of Science (established 1924). The Oxford University Press, established in 1478, is one of the largest and most prestigious university publishers in the world.

Oxford has been associated with many of the greatest names in British history, from John Wesley and Cardinal Wolsey to Oscar Wilde, Sir Richard Burton, Cecil Rhodes, and Sir Walter Raleigh. The astronomer Edmond Halley studied at Oxford, and the physicist Robert Boyle performed his most important research there. Prime ministers who studied at Oxford include William Pitt the Elder, George Canning, Sir Robert Peel, William Gladstone, Lord Salisbury, H.H. Asquith, Clement Attlee, Anthony Eden, Harold Macmillan, Edward Heath, Harold Wilson, and Margaret Thatcher. Among the many notable writers associated with the university are Lewis Carroll, C.S. Lewis, and J.R.R. Tolkien; the latter two were members of the Inklings, an informal Oxford literary group in the mid-20th century.

The colleges and collegial institutions of the University of Oxford include All Souls (1438), Balliol (1263–68), Brasenose (1509), Christ Church (1546), Corpus Christi (1517), Exeter (1314), Green (1979), Harris Manchester (founded 1786; inc. 1996), Hertford (founded 1740; inc. 1874), Jesus (1571), Keble (founded 1868; inc. 1870), Kellogg (1990), Lady Margaret Hall (founded 1878; inc. 1926), Linacre (1962), Lincoln (1427), Magdalen (1458), Mansfield (founded 1886; inc. 1995), Merton (1264), New (1379), Nuffield (founded 1937; inc. 1958), Oriel (1326), Pembroke (1624), Queen’s (1341), St. Anne’s (founded 1879; inc. 1952), St. Antony’s (1950), St. Catherine’s (1962), St. Cross (1965), St. Edmund Hall (1278), St. Hilda’s (founded 1893; inc. 1926), St. Hugh’s (founded 1886; inc. 1926), St. John’s (1555), St. Peter’s (founded 1929; inc. 1961), Somerville (founded 1879; inc. 1926), Templeton (founded 1965; inc. 1995), Trinity (1554–55), University (1249), Wadham (1612), Wolfson (founded 1966; inc. 1981), and Worcester (founded 1283; inc. 1714). Among the university’s private halls are Blackfriars (founded 1921; inc. 1994), Campion (founded 1896; inc. 1918), Greyfriars (founded 1910; inc. 1957), Regent’s Park College (founded 1810; inc. 1957), St. Benet’s (founded 1897; inc. 1918), and Wycliffe (founded 1877; inc. 1996).




#1154 2021-10-09 00:07:29


Re: Miscellany

1131) Tungsten

Tungsten (W), also called wolfram, is a chemical element: an exceptionally strong refractory metal of Group 6 (VIb) of the periodic table, used in steels to increase hardness and strength and in lamp filaments.

Tungsten metal was first isolated (1783) by the Spanish chemists and mineralogists Juan José and Fausto Elhuyar by charcoal reduction of the oxide (WO3) derived from the mineral wolframite. Earlier (1781) the Swedish chemist Carl Wilhelm Scheele had discovered tungstic acid in a mineral now known as scheelite, and his countryman Torbern Bergman concluded that a new metal could be prepared from the acid. The names tungsten and wolfram have both been used for the metal since its discovery, though Jöns Jacob Berzelius’s symbol W prevails everywhere. In British and American usage tungsten is preferred; in Germany and a number of other European countries wolfram is accepted.

Element Properties

atomic number  :  74
atomic weight  :  183.85
melting point  :  3,410 °C (6,170 °F)
boiling point  :  5,660 °C (10,220 °F)
density  :  19.3 grams per cubic centimetre at 20 °C (68 °F)
oxidation states  :      +2, +3, +4, +5, +6

Occurrence, Properties, And Uses

The amount of tungsten in Earth’s crust is estimated to be 1.5 parts per million, or about 1.5 grams per ton of rock. China is the dominant producer of tungsten; in 2016 it produced over 80 percent of the tungsten mined worldwide and held nearly two-thirds of the world’s reserves. Vietnam, Russia, Canada, and Bolivia produce most of the remainder. Tungsten does not occur as a free metal. It is about as abundant as tin or molybdenum, which it resembles, and half as plentiful as uranium. Although tungsten occurs as tungstenite (tungsten disulfide, WS2), the most important ores are the tungstates, such as scheelite (calcium tungstate, CaWO4), stolzite (lead tungstate, PbWO4), and wolframite, a solid solution or a mixture (or both) of the isomorphous substances ferrous tungstate (FeWO4) and manganous tungstate (MnWO4).

For tungsten the ores are concentrated by magnetic and mechanical processes, and the concentrate is then fused with alkali. The crude melts are leached with water to give solutions of sodium tungstate, from which hydrous tungsten trioxide is precipitated upon acidification, and the oxide is then dried and reduced to metal with hydrogen.

Tungsten is rather resistant to attack by acids, except for mixtures of concentrated nitric and hydrofluoric acids, and it can be attacked rapidly by alkaline oxidizing melts, such as fused mixtures of potassium nitrate and sodium hydroxide or sodium peroxide; aqueous alkalies, however, are without effect. It is inert to oxygen at normal temperature but combines with it readily at red heat, to give the trioxides, and is attacked by fluorine at room temperature, to give the hexafluorides.

Tungsten metal has a nickel-white to grayish lustre. Among metals it has the highest melting point, at 3,410 °C (6,170 °F), the highest tensile strength at temperatures of more than 1,650 °C (3,002 °F), and the lowest coefficient of linear thermal expansion (4.43 × 10⁻⁶ per °C at 20 °C [68 °F]). Tungsten is ordinarily brittle at room temperature. Pure tungsten can, however, be made ductile by mechanical working at high temperatures and can then be drawn into very fine wire. Tungsten was first commercially employed as a lamp filament material and thereafter used in many electrical and electronic applications. It is used in the form of tungsten carbide for very hard and tough dies, tools, gauges, and bits. Much tungsten goes into the production of tungsten steels, and some has been used in the aerospace industry to fabricate rocket-engine nozzle throats and leading-edge reentry surfaces.
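
To put that expansion coefficient in perspective, here is a minimal worked sketch of the linear-expansion formula ΔL = αL₀ΔT; the 1-metre rod length and the 1,000 °C endpoint are illustrative assumptions, not figures from the text:

```python
# Linear thermal expansion: dL = alpha * L0 * dT, with tungsten's
# coefficient alpha = 4.43e-6 per degree C (from the text above).
alpha = 4.43e-6          # coefficient of linear expansion, per degree C
L0 = 1.0                 # illustrative rod length, metres (assumed)
dT = 1000.0 - 20.0       # illustrative heating from 20 C to 1000 C (assumed)
dL = alpha * L0 * dT     # change in length, metres
print(f"{dL * 1000:.2f} mm")  # a 1 m rod grows by only a few millimetres
```

Even near 1,000 °C the rod lengthens by roughly 4 mm, which is why tungsten filaments keep their shape at white heat.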

Natural tungsten is a mixture of five stable isotopes: tungsten-180 (0.12 percent), tungsten-182 (26.50 percent), tungsten-183 (14.31 percent), tungsten-184 (30.64 percent), and tungsten-186 (28.43 percent). Tungsten crystals are isometric and, by X-ray analysis, are seen to be body-centred cubic.
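
The tabulated atomic weight can be checked against these isotopic abundances. A quick sketch (the abundances are from the text; the isotopic masses in atomic mass units are standard values supplied here, not from the text):

```python
# Weighted-average atomic mass of tungsten from its five stable isotopes.
# Keys are isotopic masses (u, standard values); values are the
# fractional abundances quoted in the text.
isotopes = {
    179.9467: 0.0012,   # tungsten-180,  0.12 %
    181.9482: 0.2650,   # tungsten-182, 26.50 %
    182.9502: 0.1431,   # tungsten-183, 14.31 %
    183.9509: 0.3064,   # tungsten-184, 30.64 %
    185.9544: 0.2843,   # tungsten-186, 28.43 %
}
atomic_weight = sum(mass * frac for mass, frac in isotopes.items())
print(round(atomic_weight, 2))  # close to the tabulated 183.85
```

The weighted average lands within a few hundredths of the atomic weight listed in the properties table above.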


Chemically, tungsten is relatively inert. Compounds have been prepared, however, in which the element has oxidation states from 0 to +6. The states above +2, especially +6, are most common. In the +4, +5, and +6 states, tungsten forms a variety of complexes.

The most important tungsten compound is tungsten carbide (WC), which is noted for its hardness (9.5 on the Mohs scale, where the maximum, diamond, is 10). It is used alone or in combination with other metals to impart wear-resistance to cast iron and the cutting edges of saws and drills. Tungsten also forms hard, refractory, and chemically inert interstitial compounds with boron, nitrogen, and silicon upon direct reaction with those elements at high temperatures.




#1155 2021-10-10 00:04:26


Re: Miscellany

1132) Barium

Barium (Ba) is a chemical element, one of the alkaline-earth metals of Group 2 (IIa) of the periodic table. The element is used in metallurgy, and its compounds are used in pyrotechnics, petroleum production, and radiology.

Element Properties

atomic number  :  56
atomic weight  :  137.33
melting point  :  727 °C (1,341 °F)
boiling point  :  1,805 °C (3,281 °F)
specific gravity  :  3.51 (at 20 °C, or 68 °F)
oxidation state  :  +2

Occurrence, Properties, And Uses

Barium, which is slightly harder than lead, has a silvery white lustre when freshly cut. It readily oxidizes when exposed to air and must be protected from oxygen during storage. In nature it is always found combined with other elements. The Swedish chemist Carl Wilhelm Scheele discovered (1774) a new base (baryta, or barium oxide, BaO) as a minor constituent in pyrolusite, and from that base he prepared some crystals of barium sulfate, which he sent to Johan Gottlieb Gahn, the discoverer of manganese. A month later Gahn found that the mineral barite is also composed of barium sulfate, BaSO4. A particular crystalline form of barite found near Bologna, Italy, in the early 17th century, after being heated strongly with charcoal, glowed for a time after exposure to bright light. The phosphorescence of “Bologna stones” was so unusual that it attracted the attention of many scientists of the day, including Galileo. Only after the electric battery became available could Sir Humphry Davy finally isolate (1808) the element itself by electrolysis.

Barium minerals are dense (e.g., BaSO4, 4.5 grams per cubic centimetre; BaO, 5.7 grams per cubic centimetre), a property that was the source of many of their names and of the name of the element itself (from the Greek barys, “heavy”). Ironically, metallic barium is comparatively light, only 30 percent denser than aluminum. Its cosmic abundance is estimated as 3.7 atoms (on a scale where the abundance of silicon = 10⁶ atoms). Barium constitutes about 0.03 percent of Earth’s crust, chiefly as the minerals barite (also called barytes or heavy spar) and witherite. Between six and eight million tons of barite are mined every year, more than half of it in China. Lesser amounts are mined in India, the United States, and Morocco. Commercial production of barium depends upon the electrolysis of fused barium chloride, but the most effective method is the reduction of the oxide by heating with aluminum or silicon in a high vacuum. A mixture of barium monoxide and peroxide can also be used in the reduction. Only a few tons of barium are produced each year.

The metal is used as a getter in electron tubes to perfect the vacuum by combining with final traces of gases, as a deoxidizer in copper refining, and as a constituent in certain alloys. The alloy with nickel readily emits electrons when heated and is used for this reason in electron tubes and in spark plug electrodes. The detection of barium (atomic number 56) after uranium (atomic number 92) had been bombarded by neutrons was the clue that led to the recognition of nuclear fission in 1939.

Naturally occurring barium is a mixture of six stable isotopes: barium-138 (71.7 percent), barium-137 (11.2 percent), barium-136 (7.8 percent), barium-135 (6.6 percent), barium-134 (2.4 percent), and barium-132 (0.10 percent). Barium-130 (0.11 percent) is also naturally occurring but undergoes decay by double electron capture with an extremely long half-life (more than 4 × 10²¹ years). More than 30 radioactive isotopes of barium are known, with mass numbers ranging from 114 to 153. The isotope with the longest half-life (barium-133, 10.5 years) is used as a gamma-ray reference source.
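
As with tungsten, the atomic weight in the properties table can be recovered from these abundances. A sketch (abundances from the text, which total 99.91 percent and so are renormalized; the isotopic masses in atomic mass units are standard values supplied here, not from the text):

```python
# Weighted-average atomic mass of barium from its naturally occurring
# isotopes. Each pair is (isotopic mass in u, fractional abundance).
isotopes = [
    (137.9052, 0.717),    # barium-138
    (136.9058, 0.112),    # barium-137
    (135.9046, 0.078),    # barium-136
    (134.9057, 0.066),    # barium-135
    (133.9045, 0.024),    # barium-134
    (131.9051, 0.0010),   # barium-132
    (129.9063, 0.0011),   # barium-130 (primordial, extremely slow decay)
]
# The quoted abundances sum to 0.9991, so divide by the total to renormalize.
total = sum(frac for _, frac in isotopes)
atomic_weight = sum(mass * frac for mass, frac in isotopes) / total
print(round(atomic_weight, 2))  # close to the tabulated 137.33
```

The renormalization matters here: without it the rounded abundances would understate the average by about 0.1 u.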


In its compounds, barium has an oxidation state of +2. The Ba²⁺ ion may be precipitated from solution by the addition of carbonate (CO₃²⁻), sulfate (SO₄²⁻), chromate (CrO₄²⁻), or phosphate (PO₄³⁻) anions. All soluble barium compounds are toxic to mammals, probably by interfering with the functioning of potassium ion channels.

Barium sulfate (BaSO4) is a white, heavy insoluble powder that occurs in nature as the mineral barite. Almost 80 percent of world consumption of barium sulfate is in drilling muds for oil. It is also used as a pigment in paints, where it is known as blanc fixe (i.e., “permanent white”) or as lithopone when mixed with zinc sulfide. The sulfate is widely used as a filler in paper and rubber and finds an important application as an opaque medium in the X-ray examination of the gastrointestinal tract.

Most barium compounds are produced from the sulfate via reduction to the sulfide, which is then used to prepare other barium derivatives. About 75 percent of all barium carbonate (BaCO3) goes into the manufacture of specialty glass, either to increase its refractive index or to provide radiation shielding in cathode-ray and television tubes. The carbonate also is used to make other barium chemicals, as a flux in ceramics, in the manufacture of ceramic permanent magnets for loudspeakers, and in the removal of sulfate from salt brines before they are fed into electrolytic cells (for the production of chlorine and alkali). On heating, the carbonate forms barium oxide, BaO, which is employed in the preparation of cuprate-based high-temperature superconductors such as YBa2Cu3O7−x. Another complex oxide, barium titanate (BaTiO3), is used in capacitors, as a piezoelectric material, and in nonlinear optical applications.

Barium chloride (BaCl2·2H2O), consisting of colourless crystals that are soluble in water, is used in heat-treating baths and in laboratories as a chemical reagent to precipitate soluble sulfates. Although brittle, crystalline barium fluoride (BaF2) is transparent to a broad region of the electromagnetic spectrum and is used to make optical lenses and windows for infrared spectroscopy. The oxygen compound barium peroxide (BaO2) was used in the 19th century for oxygen production (the Brin process) and as a source of hydrogen peroxide. Volatile barium compounds impart a yellowish green colour to a flame, the emitted light falling mostly at two characteristic wavelengths. Barium nitrate, formed with the nitrate group NO₃⁻, and barium chlorate, formed with the chlorate group ClO₃⁻, are used for this effect in green signal flares and fireworks.




#1156 2021-10-11 01:01:39


Re: Miscellany

1133) Stanford University

Stanford University, official name Leland Stanford Junior University, private coeducational institution of higher learning at Stanford, California, U.S. (adjacent to Palo Alto), one of the most prestigious in the country. The university was founded in 1885 by railroad magnate Leland Stanford and his wife, Jane (née Lathrop), and was dedicated to their deceased only child, Leland, Jr.; it opened in 1891. The university campus largely occupies Stanford’s former Palo Alto farm. The buildings, conceived by landscape architect Frederick Law Olmsted and designed by architect Charles Allerton Coolidge, are of soft buff sandstone in a style similar to the old California mission architecture, being long and low with wide colonnades, open arches, and red-tiled roofs. The campus sustained heavy damage from earthquakes in 1906 and 1989 but was rebuilt each time. The university was coeducational from the outset, though between 1899 and 1933 enrollment of women students was limited to 500.

Stanford maintains overseas study centres in France, Italy, Germany, England, Argentina, Mexico, Chile, Japan, and Russia; about one-third of its undergraduates study at one of these sites for one or two academic quarters. A study and internship program is also offered in Washington, D.C. The university offers a broad range of undergraduate, graduate, and professional degree programs in schools of law, medicine, education, engineering, business, earth sciences, and humanities and sciences. Total enrollment exceeds 16,000.

Stanford is a national centre for research and is home to more than 120 research institutes. The Hoover Institution on War, Revolution and Peace—founded in 1919 by Stanford alumnus (and future U.S. president) Herbert Hoover to preserve documents related to World War I—contains more than 1.6 million volumes and 50 million documents dealing with 20th-century international relations and public policy. The Stanford Linear Accelerator Center (SLAC), established in 1962, is one of the world’s premier laboratories for research in particle physics. Other noted research facilities include the Stanford Institute for Economic Policy Research, the Institute for International Studies, and the Stanford Humanities Center.

The Stanford Medical Center, completed on the campus in 1959, is one of the top teaching hospitals in the country. Other notable campus locations are the Iris & B. Gerald Cantor Center for Visual Arts (housing the university museum) and its adjacent sculpture garden, containing works by Auguste Rodin, and Hanna House (1937), designed by architect Frank Lloyd Wright. Adjacent to the campus is the Stanford Research Park (1951), one of the world’s principal locations for the development of electronics and computer technology. The Hopkins Marine Station is maintained by the university at Pacific Grove on Monterey Bay, and a biological field station is located near the campus at Jasper Ridge Biological Preserve.

Stanford’s distinguished faculty has included many Nobel laureates, including Milton Friedman (economics), Arthur Kornberg (biochemistry), and Burton Richter (physics). Among the university’s many notable alumni are writers John Steinbeck and Ken Kesey, painter Robert Motherwell, U.S. Supreme Court Justices William Hubbs Rehnquist and Sandra Day O’Connor, astronaut Sally Ride, and golfer Tiger Woods.




#1157 2021-10-12 00:31:50


Re: Miscellany

1134) Thanatophobia

Fear of Death Phobia – Thanatophobia

The extreme and often irrational fear of death is known as Thanatophobia. Very severe cases often negatively impact the day-to-day functioning of the individual suffering from the condition; he or she may refuse to leave home owing to the fear, and talk or thoughts of death (or of what lies after death) can trigger panic attacks. Thanatophobia is also known by various other names, such as:

•    Fear of entombment or the fear of being buried
•    Dying phobia
•    Fear of cremation
•    Thantophobia
•    Fear of the unknown

Causes of the fear of death phobia

As is the case with several other fears and phobias, the fear of death results from external events (a traumatic past) or from the internalization of extreme ideas about death. As children, we learn that death is inevitable and unpredictable, but this knowledge can paralyze or overwhelm a person living with Thanatophobia.

Symptoms of Thanatophobia

The mere mention of death, or images or thoughts of it, can trigger crippling anxiety in the patient. Thanatophobic patients experience the following emotional, mental, and physical symptoms:

•    Physical Symptoms: Dizziness, dry mouth, sweating, palpitations, nausea, stomach pain, trembling, sensation of choking, chest pain or discomfort, hot or cold flashes, numbness and tingling sensations.
•    Mental Symptoms: loss of control (a feeling of going crazy, with automatic or uncontrollable reactions), repetition of gory thoughts, inability to distinguish between reality and unreality.
•    Emotional symptoms: Desire to flee and escape from current situation, extreme avoidance, persistent worry and terrifying or overwhelming thoughts. Additionally, anger, sadness and guilt may also be present.

Diagnosis and Treatment of Thanatophobia

Before considering the diagnosis of the fear of death, it is important to rule out a few conditions that are commonly mistaken for Thanatophobia. Depression, ADHD and bipolar disorders are often linked to this type of phobia. In other cases, undiagnosed conditions such as Alzheimer’s disease, migraines, concentration disorders, strokes, schizophrenia, and epilepsy may actually underlie what appears to be Thanatophobia.

Diagnosis of thanatophobia often begins with the patient. If the extreme fear of death is affecting one’s life to the point that one is unable to leave the home or maintain daily functioning, then s/he must discuss this with a medical doctor. After ruling out any physical conditions, the doctor might refer the patient to a mental health professional for further evaluation.

Many kinds of treatments and therapies are available today to help individuals cope with Thanatophobia.

•    Anti-anxiety medicines (as yet no scientific studies have proven their efficacy in treating the fear of death, and these medications can also have side effects)
•    Hypnotherapy
•    Religious counseling
•    Talk therapy
•    Neuro linguistic programming
•    Cognitive Behavior therapy and Behavior therapy
•    Relaxation techniques like imagery, meditation, controlled breathing and positive reaffirmations/visualizations
•    Exposure therapy or regression therapy wherein the patient is made to relive certain events, analyze them and interpret them correctly. This helps one resolve issues surrounding the event.
•    Self help techniques
•    Group therapies with other patients suffering from Thanatophobia

The goal of each of these therapies is to help the patient pinpoint the exact inciting factor of the fear of death. The therapists help the patient understand why the fear is unfounded and systematically and gradually help the patient cope with these thoughts. This, in turn, helps the patient control his/her physical and mental responses to the fear of death.

In conclusion

Thanatophobia is a complex phobia which, if left untreated, can touch every aspect of the individual’s life. However, one need not lose hope: treatments and therapies are available that can help him/her cope with it. Family and friends can also play a very important role in helping the individual deal with his/her fear of death.




#1158 2021-10-13 00:21:41


Re: Miscellany

1135) Massachusetts Institute of Technology

Massachusetts Institute of Technology (MIT), privately controlled coeducational institution of higher learning famous for its scientific and technological training and research. It was chartered by the state of Massachusetts in 1861 and became a land-grant college in 1863. William Barton Rogers, MIT’s founder and first president, had worked for years to organize an institution of higher learning devoted entirely to scientific and technical training, but the outbreak of the American Civil War delayed the opening of the school until 1865, when 15 students enrolled for the first classes, held in Boston. MIT moved to Cambridge, Massachusetts, in 1916; its campus is located along the Charles River.

Under the administration of president Karl T. Compton (1930–48), the institute evolved from a well-regarded technical school into an internationally known centre for scientific and technical research. During the Great Depression, its faculty established prominent research centres in a number of fields, most notably analog computing (led by Vannevar Bush) and aeronautics (led by Charles Stark Draper). During World War II, MIT administered the Radiation Laboratory, which became the nation’s leading centre for radar research and development, as well as other military laboratories. After the war, MIT continued to maintain strong ties with military and corporate patrons, who supported basic and applied research in the physical sciences, computing, aerospace, and engineering.

MIT offers both graduate and undergraduate education. There are five academic schools—the School of Architecture and Planning, the School of Engineering, the School of Humanities, Arts, and Social Sciences, the MIT Sloan School of Management, and the School of Science—and the Whitaker College of Health Sciences and Technology. While MIT is perhaps best known for its programs in engineering and the physical sciences, other areas—notably economics, political science, urban studies, linguistics, and philosophy—are also strong. Admission is extremely competitive, and undergraduate students are often able to pursue their own original research. Total enrollment is about 10,000.

MIT has numerous research centres and laboratories. Among its facilities are a nuclear reactor, a computation centre, geophysical and astrophysical observatories, a linear accelerator, a space research centre, wind tunnels, an artificial intelligence laboratory, a centre for cognitive science, and an international studies centre. MIT’s library system is extensive and includes a number of specialized libraries. There are also several museums.

The MIT community is driven by a shared purpose: to make a better world through education, research, and innovation. We are fun and quirky, elite but not elitist, inventive and artistic, obsessed with numbers, and welcoming to talented people regardless of where they come from.

Founded to accelerate the nation’s industrial revolution, MIT is profoundly American. With ingenuity and drive, our graduates have invented fundamental technologies, launched new industries, and created millions of American jobs. At the same time, and without the slightest sense of contradiction, MIT is profoundly global. Our community gains tremendous strength as a magnet for talent from around the world. Through teaching, research, and innovation, MIT’s exceptional community pursues its mission of service to the nation and the world.




#1159 2021-10-14 00:27:16


Re: Miscellany

1136) California Institute of Technology

California Institute of Technology, byname Caltech, private coeducational university and research institute in Pasadena, California, U.S., emphasizing graduate and undergraduate instruction and research in pure and applied science and engineering. The institute comprises six divisions: biology; chemistry and chemical engineering; engineering and applied science; geologic and planetary sciences; humanities and social sciences; and physics, mathematics, and astronomy. Total enrollment is approximately 2,000, of which more than half are graduate students.

Superbly equipped and staffed by a faculty of some 1,000 distinguished and creative scientists, Caltech is considered one of the world’s major research centres. Dozens of eminent scientists (including many Nobel Prize winners) have worked and taught there, including physicists Robert Andrews Millikan, Richard P. Feynman, and Murray Gell-Mann; astronomer George Ellery Hale; and chemist Linus Pauling. In 1958 the Jet Propulsion Laboratory at Caltech, operating in conjunction with the National Aeronautics and Space Administration, launched Explorer I, the first U.S. satellite, and it subsequently conducted other programs of space and lunar exploration. Caltech operates astronomical observatories at Owens Valley, Mount Palomar, and Big Bear Lake in California and at Mauna Kea in Hawaii. Other institute facilities include a seismological laboratory in Pasadena and a marine biological laboratory at Corona del Mar.

Caltech was established in 1891 as a school for arts and crafts. First called Throop University and later Throop Polytechnic Institute, it assumed its present name in 1920. The institute originally included curricula in business and education, but in 1907 it dropped several programs and began specializing in science and technology, with a focus on creativity and research.

The California Institute of Technology (or Caltech) is one of the foremost scientific and technical institutions in the United States. It is a private university and research institute located in Pasadena, California, about 12 miles (19 kilometers) from Los Angeles. Caltech traces its origins back to Throop University, a school of arts and crafts founded in 1891. It began specializing in science and technology in 1907 and took its present name in 1920. Caltech has been coeducational since 1970, but men still outnumber women. The university enrolls roughly 2,000 students, the majority of whom are graduate students.

Caltech conducts degree programs from the bachelor’s through the doctoral level. The university consists of six academic divisions: biology; chemistry and chemical engineering; engineering and applied science; geological and planetary sciences; physics, mathematics, and astronomy; and humanities and social sciences. Its programs in science and engineering are ranked among the best in the United States. Caltech was also the first scientific institution to require undergraduates to take at least 20 percent of their courses in humanities and cultural studies. The institute provides many opportunities for undergraduates to conduct research, including a summer research fellowship program. The students work on independent projects under the guidance of a faculty sponsor, and many have their work published in scientific journals. To encourage an atmosphere of trust in an intense environment, everyone at Caltech is expected to follow an honor code, which states that “no member of the Caltech community shall take unfair advantage of any other member of the Caltech community.”

Superbly equipped and staffed by a faculty of distinguished and creative scientists, Caltech is considered one of the world’s major research centers. Dozens of eminent scientists, including many Nobel prize winners, have worked and taught there. Among them have been physicists Robert Andrews Millikan, Richard P. Feynman, and Murray Gell-Mann, astronomer George Ellery Hale, and chemist Linus Pauling. Caltech operates the Jet Propulsion Laboratory (JPL) in conjunction with the National Aeronautics and Space Administration (NASA). In 1958 JPL launched Explorer I, the first U.S. satellite, and since then it has conducted many other space exploration programs. Caltech also operates astronomical observatories at Owens Valley, Mount Palomar, Big Bear Lake, and Cedar Flat in California and at Mauna Kea in Hawaii. Other institute facilities include a seismological laboratory in Pasadena and a marine biological laboratory at Corona del Mar, California.

Caltech’s Beavers, the school’s varsity sports teams, compete in Division III of the National Collegiate Athletic Association (NCAA). School colors are orange and white.




#1160 2021-10-15 01:04:03


Re: Miscellany

1137) Imperial College London

Imperial College London, institution of higher learning in London. It is one of the leading research universities in England. Its main campus is located in South Kensington (in Westminster), and its medical school is linked with several London teaching hospitals. Its three- to five-year courses of study lead to bachelor’s, master’s, and doctoral degrees. These degree programs include the biological and physical sciences, engineering, computing, geology, and preclinical and clinical medicine. Among its research centres are the Centre for Environmental Technology, National Heart and Lung Institute, Centre for Population Biology, Centre for Composite Materials, and Centre for the History of Science, Technology and Medicine. Total enrollment is approximately 12,000, including over 4,700 engineering students.

The Royal College of Science was founded in 1845 by Prince Albert, the consort of Queen Victoria. The Royal School of Mines was founded in 1851, and the City and Guilds College was founded in 1884. The institutions united to form the Imperial College of Science and Technology in 1907 and became a school of the University of London in 1908. An act of Parliament in 1988 made St. Mary’s Hospital Medical School, founded in 1854, the college’s fourth school. The National Heart and Lung Institute was joined with the college in 1995, creating with St. Mary’s the new Imperial College School of Medicine. The Charing Cross and Westminster Medical School, as well as the Royal Postgraduate Medical School, were merged with the institution in 1997, and in 2000 it also merged with Wye College. In 2006 Imperial College withdrew from the University of London in order to become an independent university.

Imperial College London has a 240-acre (97-hectare) site with wetland, farmland, parkland, and laboratories at Silwood Park near Ascot, Berkshire; it also owns a mine near Truro, Cornwall.

Ranked 7th in the world in the QS World University Rankings® 2022, Imperial College London is a one-of-a-kind institution in the UK, focusing solely on science, engineering, medicine and business. Imperial offers a research-led education, exposing you to real-world challenges with no easy answers, teaching that opens everything up to question, and opportunities to work in multicultural, multinational teams.

Imperial is based in South Kensington in London, in an area known as ‘Albertopolis’, Prince Albert and Sir Henry Cole’s 19th century vision for an area where science and the arts would come together. As a result, Imperial’s neighbours include a number of world leading cultural organizations including the Science, Natural History and Victoria and Albert museums; the Royal Colleges of Art and Music; and the Royal Albert Hall, where all of their students also graduate.

There is plenty of green space too, including two Royal Parks (Hyde Park and Kensington Gardens) within 10 minutes’ walk of campus. Travel to and from the area is also really easy as it’s served by three Tube lines and many bus routes.

One of the most distinctive elements of an Imperial education is that students join a community of world-class researchers. The cutting edge and globally influential nature of this research is what Imperial is best known for. It’s the focus on the practical application of their research – particularly in addressing global challenges – and the high level of interdisciplinary collaboration that makes their research so effective. Read more about their research impact on their research and innovation webpages.

The number of award winners, Nobel Prize holders and prestigious Fellowships (Royal Society, Royal Academy of Engineering, Academy of Medical Sciences) amongst their staff is a testament to the outstanding contributions they have made in their respective fields.

Imperial is one of the most international universities in the world, with 59% of its student body in 2019-20 being non-UK citizens and more than 140 countries currently represented on campus. Meanwhile, the College’s staff, like its students, are diverse in their cultural backgrounds, nationalities and experiences.




#1161 2021-10-16 03:28:25


Re: Miscellany

1138) Yale University

Founded in 1701, Yale University is a private Ivy League research university located in New Haven, Connecticut. It is one of the nine Colonial Colleges chartered before the American Revolution and the third-oldest institution of higher education in the USA. The 300-year-old institution traces its roots to the 1640s, when colonial clergymen took the initiative to lay the foundations of a local college in order to preserve the tradition of European liberal education in the New World. In 1701, Yale was established as a collegiate school near Saybrook, Connecticut. The collegiate school was renamed Yale College in 1718 in recognition of the donation of books and goods made by Welsh merchant Elihu Yale.

In 1750, Connecticut Hall, one of the oldest buildings in New Haven and a national historic landmark, was constructed. Yale Law School, established in 1824, is consistently ranked among the most prominent law schools in the nation. The law school has a long history of nurturing and training outstanding jurists, judges, practitioners, and government officials. In 1836, the Yale Literary Magazine was founded. Considered the oldest literary review in the country, it emerged at the forefront of novel ways of studying literature. In 1861, Yale became the first university in America to award Doctor of Philosophy degrees. The University currently consists of fourteen constituent schools, including the undergraduate liberal arts college, the Graduate School of Arts and Sciences, and twelve professional schools.

In 2018, the University delivered teaching to 13,433 full-time and part-time students, including 5,964 undergraduates and 7,469 graduate and professional students. In addition to more than 80 majors available to undergraduates, the University offers a number of supplementary programs intended to give students specialized knowledge across a variety of areas. Undergraduates can choose from over 2,000 courses offered each year. Yale’s student body, one of the most diverse in the world, includes students from different backgrounds and experiences. In 2018, 2,694 international students, nearly 20.7% of the student body, enrolled at Yale, representing 123 countries. The majority of the international student population comes from Canada, China, Germany, India, South Korea, and the United Kingdom.

Yale’s endowment generated an 11.3% return. Over the past ten-year period, the endowment grew from $22.5 billion to $30.31 billion. The endowment’s performance exceeded its benchmark and outpaced institutional fund indices, with annual returns of 6.6% over the past ten years.
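A side note on the arithmetic: the growth in the endowment’s market value implies a much lower annual rate than the quoted 6.6% investment return, which is to be expected since endowments also distribute spending to the university each year. A minimal sketch of the compound annual growth rate (CAGR) calculation, using only the figures quoted above:

```python
def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate as a fraction (0.03 means 3% per year)."""
    return (end / start) ** (1 / years) - 1

# Endowment value: $22.5 billion -> $30.31 billion over ten years.
growth = cagr(22.5, 30.31, 10)
print(f"Endowment value CAGR: {growth:.1%}")  # prints: Endowment value CAGR: 3.0%
```

The roughly 3% growth in value versus the 6.6% investment return illustrates the gap accounted for by annual spending distributions (and gifts flowing in), rather than any inconsistency in the figures.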

The University has graduated many notable alumni, including 61 Nobel laureates, 78 MacArthur Fellows, 247 Rhodes Scholars, 119 Marshall Scholars, 5 Fields Medalists, and 3 Turing Award winners. In addition, five U.S. Presidents (George W. Bush, Bill Clinton, George H.W. Bush, William Howard Taft, and Gerald Ford), 19 U.S. Supreme Court Justices, and a number of billionaires have been affiliated with Yale University. Some royals have also attended Yale, including Crown Princess Victoria of Sweden, Prince Rostislav Romanov, and Prince Akiiki Hosea Nyabongo.

Yale University, private university in New Haven, Connecticut, one of the Ivy League schools. It was founded in 1701 and is the third oldest university in the United States. Yale was originally chartered by the colonial legislature of Connecticut as the Collegiate School and was held at Killingworth and other locations. In 1716 the school was moved to New Haven, and in 1718 it was renamed Yale College in honour of a wealthy British merchant and philanthropist, Elihu Yale, who had made a series of donations to the school. Yale’s initial curriculum emphasized classical studies and strict adherence to orthodox Puritanism.

Yale’s medical school was organized in 1810. The divinity school arose from a department of theology created in 1822, and a law department became affiliated with the college in 1824. The geologist Benjamin Silliman, who taught at Yale between 1802 and 1853, did much to make the experimental and applied sciences a respectable field of study in the United States. While at Yale he founded the American Journal of Science and Arts (later shortened to American Journal of Science), which was one of the great scientific journals of the world in the 19th century. Yale’s Sheffield Scientific School, begun in the 1850s, was one of the leading scientific and engineering centres until 1956, when it merged with Yale College and ceased to exist.

A graduate school of arts and sciences was organized in 1847, and a school of art was created in 1866. Music, forestry and environmental studies, nursing, drama, management, architecture, physician associate, and public health professional school programs were subsequently established. The college was renamed Yale University in 1864. Women were first admitted to the graduate school in 1892, but the university did not become fully coeducational until 1969. A system of residential colleges was instituted in the 1930s.

Yale is highly selective in its admissions and is among the nation’s most highly rated schools in terms of academic and social prestige. It includes Yale College (undergraduate), the Graduate School of Arts and Sciences, and 12 professional schools.

The Yale University Library, with more than 15 million volumes, is one of the largest in the United States. Yale’s extensive art galleries, the first in an American college, were established in 1832 when John Trumbull donated a gallery to house his paintings of the American Revolution. Yale’s Peabody Museum of Natural History houses important collections of paleontology, archaeology, and ethnology.

Yale’s graduates have included U.S. Presidents William Howard Taft, Gerald Ford, George H.W. Bush, Bill Clinton, and George W. Bush; antebellum statesman John C. Calhoun; theologian Jonathan Edwards; inventors Eli Whitney and Samuel F.B. Morse; and lexicographer Noah Webster. After several years of debate, in 2017 the university announced that the name of Calhoun College, one of the original residential colleges, would be changed to Hopper College, after the 20th-century mathematician, naval officer, and Yale alumna Grace Hopper. Advocates of the renaming had argued that it was inappropriate for the university to honour Calhoun, who had been an ardent proponent of slavery and a white supremacist.




#1162 2021-10-17 00:06:06


Re: Miscellany

1139) Ophthalmology

Ophthalmology, medical specialty dealing with the diagnosis and treatment of diseases and disorders of the eye. The first ophthalmologists were oculists. These paramedical specialists practiced on an itinerant basis during the Middle Ages. Georg Bartisch, a German physician who wrote on eye diseases in the 16th century, is sometimes credited with founding the medical practice of ophthalmology. Many important eye operations were first developed by oculists, as, for example, the surgical correction of strabismus, first performed in 1738. The first descriptions of visual defects included those of glaucoma (1750), night blindness (1767), colour blindness (1794), and astigmatism (1801).

The first formal course in ophthalmology was taught at the medical school of the University of Göttingen in 1803, and the first medical eye clinic with an emphasis on teaching, the London Eye Infirmary, was opened in 1805, initiating the modern specialty. Advances in optics by the Dutch physician Frans Cornelis Donders in 1864 established the modern system of prescribing and fitting eyeglasses to a particular vision problem. The invention of the ophthalmoscope for looking at the interior of the eye created the possibility of relating eye defects to internal medical conditions.

In the 20th century, advances in the field have chiefly involved the prevention of eye disease through regular eye examinations and the early treatment of congenital eye defects. Another major development was the eye bank, first established in 1944 in New York, which made corneal tissues for transplantation more generally available.

The function of the human eye is to receive visual images. Whatever adversely affects vision is the concern of the ophthalmologist, whether it is caused by faulty development of the eye, disease, injury, degeneration, senescence, or refraction. The ophthalmologist tests visual function and examines the interior of the eye as part of a general physical examination for symptoms of systemic or neurologic diseases, prescribes medical treatment for eye disease and glasses for refraction, and performs surgical operations where indicated.

An ophthalmologist is a medical or osteopathic doctor who specializes in eye and vision care. Ophthalmologists differ from optometrists and opticians in their levels of training and in what they can diagnose and treat.

When it's time to get your eyes checked, make sure you are seeing the right eye care professional for your needs. Each member of the eye care team plays an important role in providing eye care, but many people confuse the different providers and their roles in maintaining your eye health. The levels of training and expertise—and what they are allowed to do for you—are the major difference between the types of eye care provider.

Ophthalmologists are eye physicians with advanced medical and surgical training

Ophthalmologists complete 12 to 13 years of training and education, and are licensed to practice medicine and surgery. This advanced training allows ophthalmologists to diagnose and treat a wider range of conditions than optometrists and opticians. Typical training includes a four-year college degree followed by at least eight years of additional medical training.

An ophthalmologist diagnoses and treats all eye diseases, performs eye surgery and prescribes and fits eyeglasses and contact lenses to correct vision problems. Many ophthalmologists are also involved in scientific research on the causes and cures for eye diseases and vision disorders. Because they are medical doctors, ophthalmologists can sometimes recognize other health problems that aren't directly related to the eye, and refer those patients to the right medical doctors for treatment.

Some ophthalmologists have specialized expertise in specific eye conditions

While ophthalmologists are trained to care for all eye problems and conditions, some specialize further in a specific area of medical or surgical eye care. Such a person is called a subspecialist. He or she usually completes one or two years of additional, more in-depth training (called a fellowship) in one of the main subspecialty areas, such as glaucoma, retina, cornea, pediatrics, neuro-ophthalmology, or oculoplastic surgery. This added training and knowledge prepares an ophthalmologist to take care of more complex or specific conditions in certain areas of the eye or in certain groups of patients.

Optometrists provide vision tests, prescribe lenses and treat certain eye conditions

Optometrists are healthcare professionals who provide primary vision care ranging from vision testing and correction to the diagnosis, treatment, and management of vision changes. An optometrist is not a medical doctor. An optometrist receives a doctor of optometry (OD) degree after completing two to four years of college-level education followed by four years of optometry school. Optometrists are licensed to practice optometry, which primarily involves performing eye exams and vision tests, prescribing and dispensing corrective lenses, detecting certain eye abnormalities, and prescribing medications for certain eye diseases. Many ophthalmologists and optometrists work together in the same offices, as a team. In the United States, what optometrists are licensed to do for patients can vary from state to state.

Opticians fit eyeglasses and contact lenses

Opticians are technicians trained to design, verify and fit eyeglass lenses and frames, contact lenses and other devices to correct eyesight. They use prescriptions supplied by ophthalmologists or optometrists, but do not test vision or write prescriptions for visual correction. Opticians are not permitted to diagnose or treat eye diseases.

Ophthalmic medical assistants help physicians examine and treat patients

These technicians work in the ophthalmologist's office and are trained to perform a variety of tests and help the physician with examining and treating patients.

Ophthalmic technicians/technologists assist with medical tests and minor surgeries

These are highly trained or experienced medical assistants who assist the physician with more complicated or technical medical tests and minor office surgery.

Ophthalmic registered nurses deliver medications and assist with surgeries

These clinicians have undergone special nursing training and may have additional training in ophthalmic nursing. They may assist the physician in more technical tasks, such as injecting medications or assisting with hospital or office surgery. Some ophthalmic registered nurses also serve as clinic or hospital administrators.

Ophthalmic photographers use cameras to document a patient's eyes

These individuals use specialized cameras and photographic methods to document patients' eye conditions in photographs.

See the right eye care provider at the right time

Without healthy vision it can be hard to work, play, drive or even recognize a face. Many factors can affect eyesight, including other health problems like high blood pressure or diabetes. Having a family member with eye disease can make you more prone to having that condition. Sight-stealing eye disease can appear at any time. Often vision changes are unnoticeable at first and difficult to detect.

If you've never had a complete, dilated eye exam, the American Academy of Ophthalmology recommends that everyone have a complete medical eye exam by age 40, and then as often as recommended by your ophthalmologist. Even if you're healthy, it's important to have a baseline eye exam, to compare against in the future and help spot changes or problems.

There are many possible symptoms of eye disease. If you have any concerns about your eyes or vision, visit an ophthalmologist. A complete, medical eye exam by an ophthalmologist could be the first step toward saving your sight.

Ophthalmology Subspecialists

From Aqueous to Zonules: Subspecialists Focus on Different Parts of the Eye

When you visit an ophthalmologist, you are seeing the only kind of doctor who is trained in all aspects of eye care. A comprehensive ophthalmologist (also known as a general ophthalmologist) can diagnose and treat eye diseases, perform eye surgery and prescribe and fit eyeglasses and contact lenses. Many comprehensive ophthalmologists have additional training to treat specific eye conditions, such as glaucoma or cataracts. But if your comprehensive ophthalmologist finds you have a condition that requires more specific care for a certain part of the eye, he or she will have you see a subspecialist.

More Training for More Focus

Subspecialists have intensive training in a particular area of the eye. To become subspecialists, ophthalmologists add a fellowship to their years of medical training. A fellowship prepares an ophthalmologist to treat more specific or complex conditions in certain parts of the eye or in certain types of patients. Fellowship-trained ophthalmologists have a total of 9 to 10 years of training after they finish college.

What Subspecialist Should You See?

If your comprehensive ophthalmologist decides you need to see a subspecialist for your condition, he or she may refer you to one for follow-up care.


Cornea

The cornea is the clear, dome-shaped covering in front of the iris and pupil. A cornea subspecialist diagnoses and manages corneal eye disease, including Fuchs’ dystrophy and keratoconus. Many cornea subspecialists also perform refractive surgery (such as LASIK), as well as corneal transplants. They also handle corneal trauma and complicated contact lens fittings.


Retina

The retina is the light-sensitive tissue lining the back of the eye. The macula is a small area of the retina responsible for your central, detail vision. A retina specialist diagnoses and manages retinal diseases, including macular degeneration and diabetic eye disease. They surgically repair torn and detached retinas and treat problems with the vitreous, the gel-like substance in the middle of the eyeball.


Glaucoma

Glaucoma is a disease that affects the optic nerve, which connects the eye to the brain. If fluid does not circulate properly inside the eye, pressure builds and damages the optic nerve. Glaucoma subspecialists use medicine, laser and surgery to manage eye pressure.


Pediatric Ophthalmology and Strabismus

Pediatric ophthalmologists treat eye conditions in infants and children. They diagnose and treat misalignment of the eyes, uncorrected refractive errors and vision differences between the two eyes, as well as childhood eye diseases and other conditions. Strabismus specialists also treat adults with eyes that do not work properly together.


Oculoplastics

Oculoplastic surgeons repair damage to or problems with the eyelids, bones and other structures around the eyeball, and in the tear drainage system. They perform medical injections around the eyes and face to improve the look and function of facial structures.


Neuro-Ophthalmology

Neuro-ophthalmologists take care of vision problems related to how the eyes interact with the brain, nerves and muscles. Among other conditions, they diagnose and treat optic nerve problems, various types of vision loss, double vision, abnormal eye movements, unequal pupil size, and eyelid abnormalities. Diseases which can cause these problems include strokes, brain tumors, multiple sclerosis, and thyroid eye disease.


It is no good to try to stop knowledge from going forward. Ignorance is never better than knowledge - Enrico Fermi. 

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.


#1163 2021-10-18 00:09:02

Registered: 2005-06-28
Posts: 34,916

Re: Miscellany

1140) Neutron star

Neutron star, any of a class of extremely dense, compact stars thought to be composed primarily of neutrons. Neutron stars are typically about 20 km (12 miles) in diameter. Their masses range between 1.18 and 1.97 times that of the Sun, but most are 1.35 times that of the Sun. Thus, their mean densities are extremely high—about {10}^{14} times that of water. This approximates the density inside the atomic nucleus, and in some ways a neutron star can be conceived of as a gigantic nucleus. It is not known definitively what is at the centre of the star, where the pressure is greatest; theories include hyperons, kaons, and pions. The intermediate layers are mostly neutrons and are probably in a “superfluid” state. The outer 1 km (0.6 mile) is solid, in spite of the high temperatures, which can be as high as 1,000,000 K. The surface of this solid layer, where the pressure is lowest, is composed of an extremely dense form of iron.
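The density figure above follows directly from the quoted mass and diameter. A quick back-of-the-envelope check (a hypothetical sketch, assuming a 1.35-solar-mass star with a 10 km radius, i.e. the 20 km diameter in the text):

```python
import math

M_SUN = 1.989e30          # solar mass in kg
M = 1.35 * M_SUN          # typical neutron-star mass quoted above
R = 10e3                  # 10 km radius (20 km diameter), in metres

volume = (4 / 3) * math.pi * R**3
density = M / volume      # mean density in kg per cubic metre

# Water is 1,000 kg/m^3; the ratio should land within an order of
# magnitude of the 10^14 figure quoted in the text.
ratio_to_water = density / 1000
print(f"mean density: {density:.2e} kg/m^3")
print(f"times denser than water: {ratio_to_water:.1e}")
```

The result is several times 10^14, consistent with the order-of-magnitude figure in the article.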

Another important characteristic of neutron stars is the presence of very strong magnetic fields, upward of {10}^{12} gauss (Earth’s magnetic field is 0.5 gauss), which causes the surface iron to be polymerized in the form of long chains of iron atoms. The individual atoms become compressed and elongated in the direction of the magnetic field and can bind together end-to-end. Below the surface, the pressure becomes much too high for individual atoms to exist.

The discovery of pulsars in 1967 provided the first evidence of the existence of neutron stars. Pulsars are neutron stars that emit pulses of radiation once per rotation. The radiation emitted is usually radio waves, but pulsars are also known to emit in optical, X-ray, and gamma-ray wavelengths. The very short periods of, for example, the Crab (NP 0532) and Vela pulsars (33 and 89 milliseconds, respectively) rule out the possibility that they might be white dwarfs. The pulses result from electrodynamic phenomena generated by their rotation and their strong magnetic fields, as in a dynamo. In the case of radio pulsars, neutrons at the surface of the star decay into protons and electrons. As these charged particles are released from the surface, they enter the intense magnetic field that surrounds the star and rotates along with it. Accelerated to speeds approaching that of light, the particles give off electromagnetic radiation by synchrotron emission. This radiation is released as intense radio beams from the pulsar’s magnetic poles.

Many binary X-ray sources, such as Hercules X-1, contain neutron stars. Cosmic objects of this kind emit X-rays by compression of material from companion stars accreted onto their surfaces.

Neutron stars are also seen as objects called rotating radio transients (RRATs) and as magnetars. The RRATs are sources that emit single radio bursts but at irregular intervals ranging from four minutes to three hours. The cause of the RRAT phenomenon is unknown. Magnetars are highly magnetized neutron stars that have a magnetic field of between {10}^{14} and {10}^{15} gauss.

Most investigators believe that neutron stars are formed by supernova explosions in which the collapse of the central core of the supernova is halted by rising neutron pressure as the core density increases to about {10}^{15} grams per cubic cm. If the collapsing core is more massive than about three solar masses, however, a neutron star cannot be formed, and the core would presumably become a black hole.

What Is a Neutron Star?

Neutron stars are the remnants of giant stars that died in a fiery explosion known as a supernova. After such an outburst, the cores of these former stars compact into an ultradense object with the mass of the sun packed into a ball the size of a city.

How do neutron stars form?

Ordinary stars maintain their spherical shape because the crushing gravity of their gigantic mass tries to pull their gas toward a central point but is balanced by the energy from nuclear fusion in their cores, which exerts an outward pressure, according to NASA. At the end of their lives, stars roughly eight to 20 times the sun's mass burn through their available fuel and their internal fusion reactions cease. The stars' outer layers rapidly collapse inward, bouncing off the dense core and then blasting out again as a violent supernova.

But the dense core continues to collapse, generating pressures so high that protons and electrons are squeezed together into neutrons, as well as lightweight particles called neutrinos that escape into the distant universe. The end result is a star whose mass is about 90% neutrons; because neutrons can't be squeezed any tighter, the star can't collapse any further.

Characteristics of a neutron star

Astronomers first theorized about the existence of these bizarre stellar entities in the 1930s, shortly after the neutron was discovered. But it wasn't until 1967 that scientists had good evidence for neutron stars in reality. A graduate student named Jocelyn Bell at the University of Cambridge in England noticed strange pulses in her radio telescope, arriving so regularly that at first she thought they might be a signal from an alien civilization, according to the American Physical Society. The patterns turned out not to be E.T. but rather radiation emitted by rapidly spinning neutron stars.

The supernova that gives rise to a neutron star imparts a great deal of energy to the compact object, causing it to rotate on its axis anywhere from 0.1 to 60 times per second, and in extreme cases up to 700 times per second. The formidable magnetic fields of these entities produce high-powered columns of radiation, which can sweep past the Earth like lighthouse beams, creating what's known as a pulsar.

The properties of neutron stars are utterly out of this world — a single teaspoon of neutron-star material would weigh a billion tons. If you were to somehow stand on their surface without dying, you'd experience a force of gravity 2 billion times stronger than what you feel on Earth.
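The teaspoon claim is easy to sanity-check from the mean density implied by the figures earlier in the post (a rough sketch with assumed round values, not a precise calculation):

```python
import math

# Assumed figures from the text: 1.35 solar masses in a 10 km radius sphere.
M = 1.35 * 1.989e30                        # mass in kg
R = 10e3                                   # radius in metres
density = M / ((4 / 3) * math.pi * R**3)   # roughly 6e17 kg/m^3

teaspoon = 5e-6                            # a teaspoon is about 5 mL = 5e-6 m^3
mass_kg = density * teaspoon
mass_tons = mass_kg / 1000                 # metric tons
print(f"teaspoon of neutron-star matter: {mass_tons:.1e} tons")
```

This gives a few billion metric tons per teaspoon, matching the "billion tons" order of magnitude in the text.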

An ordinary neutron star's magnetic field might be trillions of times stronger than Earth's. But some neutron stars have even more extreme magnetic fields, a thousand or more times stronger than the average neutron star's. Such an object is known as a magnetar.

Starquakes on the surface of a magnetar — the equivalent of crustal movements on Earth that generate earthquakes — can release tremendous amounts of energy. In one-tenth of a second, a magnetar might produce more energy than the sun has emitted in the last 100,000 years, according to NASA.

Research on neutron stars

Researchers have considered using the stable, clock-like pulses of neutron stars to aid in spacecraft navigation, much like GPS beams help guide people on Earth. An experiment on the International Space Station called Station Explorer for X-ray Timing and Navigation Technology (SEXTANT) was able to use the signal from pulsars to calculate the ISS’s location to within 10 miles (16 km).

But a great deal remains to be understood about neutron stars. For instance, in 2019, astronomers spotted the most massive neutron star ever seen — with about 2.14 times the mass of our sun packed into a sphere most likely around 12.4 miles (20 km) across. At this size, the object is just at the limit where it should have collapsed into a black hole, so researchers are examining it closely to better understand the odd physics potentially at work holding it up.

Researchers are also gaining new tools to better study neutron-star dynamics. Using the Laser Interferometer Gravitational-Wave Observatory (LIGO), physicists have been able to observe the gravitational waves emitted when two neutron stars circle one another and then collide. These powerful mergers might be responsible for making many of the precious metals we have on Earth, including platinum and gold, and radioactive elements, such as uranium.




#1164 2021-10-19 00:07:36

Registered: 2005-06-28
Posts: 34,916

Re: Miscellany

1141) Red Giant Star

Red giant stars: Facts, definition & the future of the sun

A red giant star is a dying star in the last stages of stellar evolution. In only a few billion years, our own sun will turn into a red giant star, expand and engulf the inner planets, possibly even Earth. What does the future hold for the light of our solar system and others like it?

Forming a giant

Most of the stars in the universe are main sequence stars — those converting hydrogen into helium via nuclear fusion. A main sequence star may have a mass between a third and eight times that of the sun and will eventually burn through the hydrogen in its core. Over its life, the outward pressure of fusion has balanced against the inward pull of gravity. Once fusion stops, gravity takes the lead and compresses the star into a smaller, denser object.

Temperatures increase with the contraction, eventually reaching levels where helium is able to fuse into carbon. Depending on the mass of the star, the helium burning might be gradual or might begin with an explosive flash.

"Although fusion is no longer taking place in the core, the rise in temperature heats up the shell of hydrogen surrounding the core until it is hot enough to start hydrogen fusion, producing more energy than when it was a main sequence star," the Australia Telescope National Facility says on their website.

Red giant stars reach sizes of 100 million to 1 billion kilometers in diameter (62 million to 621 million miles), 100 to 1,000 times the size of the sun today. Because the energy is spread across a larger area, surface temperatures are actually cooler, reaching only 2,200 to 3,200 degrees Celsius (4,000 to 5,800 degrees Fahrenheit), a little over half as hot as the sun. This temperature change causes stars to shine in the redder part of the spectrum, leading to the name red giant, though they are often more orangish in appearance.
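The Stefan-Boltzmann law ties these numbers together: luminosity scales as R²T⁴, so a star 100 times the sun's radius remains far more luminous even at roughly half the surface temperature. A short sketch with assumed illustrative values from the ranges quoted above:

```python
# Luminosity scaling L ∝ R^2 * T^4 (Stefan-Boltzmann law).
# Assumed values: a giant at the low end of the text's size range,
# with a surface temperature inside the quoted 2,200-3,200 C band.
R_ratio = 100            # giant radius / solar radius
T_giant = 3000           # K, surface temperature of the giant
T_sun = 5772             # K, the sun's effective temperature

L_ratio = R_ratio**2 * (T_giant / T_sun) ** 4
print(f"luminosity relative to the sun: {L_ratio:.0f}x")
```

Despite the cooler, redder surface, the giant comes out hundreds of times more luminous than the sun, because surface area grows faster than the T⁴ term shrinks.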

In 2017, an international team of astronomers identified the surface of the red giant π Gruis in detail using the European Southern Observatory's Very Large Telescope. They found that the red giant's surface has just a few convective cells, or granules, that are each about 75 million miles (120 million kilometers) across. By comparison, the sun has about two million convective cells about 930 miles (1,500 km) across.

Stars spend from a few thousand up to a billion years as red giants. Eventually, the helium in the core runs out and fusion stops. The star shrinks again until a new helium shell reaches the core. When the helium ignites, the outer layers of the star are blown off in huge clouds of gas and dust known as planetary nebulae. These shells are much larger and fainter than their parent stars.

The core continues to collapse in on itself. Smaller stars such as the sun end their lives as compact white dwarfs. The material of larger, more massive stars falls inward until the star eventually becomes a supernova, blowing off gas and dust in a dramatic fiery death.

The future of the sun

In approximately 5 billion years, the sun will begin the helium-burning process, turning into a red giant star. When it expands, its outer layers will consume Mercury and Venus, and reach Earth. Scientists are still debating whether or not our planet will be engulfed, or whether it will orbit dangerously close to the dimmer star. Either way, life as we know it on Earth will cease to exist.

"A similar fate may await the inner planets in our solar system, when the sun becomes a red giant and expands all the way out to Earth's orbit some five billion years from now," astronomer Alex Wolszczan, an astronomer at Pennsylvania State University, said in a statement.

"The future of the Earth is to die with the sun boiling up the oceans, but the hot rock will survive," astrophysicist Don Kurtz, of the University of Central Lancashire, told Reuters.

The changing sun may provide hope to other planets, however. When stars morph into red giants, they change the habitable zones of their system. The habitable zone is the region where liquid water can exist, considered by most scientists to be the area ripe for life to evolve. Because a star remains a red giant for approximately a billion years, it may be possible for life to arise on bodies in the outer solar system, which will be closer to the sun.

"When a star ages and brightens, the habitable zone moves outward and you're basically giving a second wind to a planetary system," exoplanet scientist Ramses M. Ramirez, a researcher at Cornell's Carl Sagan Institute, said in a statement. "Currently objects in these outer regions are frozen in our own solar system, like Europa and Enceladus — moons orbiting Jupiter and Saturn."

The window of opportunity will only be open briefly, however. When the sun and other smaller stars shrink back down to white dwarfs, the life-giving light will dissipate. And supernovae from larger stars could present other habitability issues.

A red giant is a luminous giant star of low or intermediate mass (roughly 0.3–8 solar masses (M☉)) in a late phase of stellar evolution. The outer atmosphere is inflated and tenuous, making the radius large and the surface temperature around 5,000 K (4,700 °C; 8,500 °F) or lower. Red giants range in appearance from yellow-orange to red, spanning the spectral types K and M, but also including class S stars and most carbon stars.

Red giants vary in the way by which they generate energy:

* most common red giants are stars on the red-giant branch (RGB) that are still fusing hydrogen into helium in a shell surrounding an inert helium core
* red-clump stars in the cool half of the horizontal branch, fusing helium into carbon in their cores via the triple-alpha process
* asymptotic-giant-branch (AGB) stars with a helium burning shell outside a degenerate carbon–oxygen core, and a hydrogen-burning shell just beyond that.

Many of the well-known bright stars are red giants, because they are luminous and moderately common. The K0 RGB star Arcturus is 36 light-years away, and Gamma Crucis is the nearest M-class giant at 88 light-years' distance.




#1165 2021-10-20 00:02:34

Registered: 2005-06-28
Posts: 34,916

Re: Miscellany

1142) White dwarf star

White dwarf star, any of a class of faint stars representing the endpoint of the evolution of intermediate- and low-mass stars. White dwarf stars, so called because of the white colour of the first few that were discovered, are characterized by a low luminosity, a mass on the order of that of the Sun, and a radius comparable to that of Earth. Because of their large mass and small dimensions, such stars are dense and compact objects with average densities approaching 1,000,000 times that of water.

Unlike most other stars that are supported against their own gravitation by normal gas pressure, white dwarf stars are supported by the degeneracy pressure of the electron gas in their interior. Degeneracy pressure is the increased resistance exerted by electrons composing the gas, as a result of stellar contraction (see degenerate gas). The application of the so-called Fermi-Dirac statistics and of special relativity to the study of the equilibrium structure of white dwarf stars leads to the existence of a mass-radius relationship through which a unique radius is assigned to a white dwarf of a given mass; the larger the mass, the smaller the radius. Furthermore, the existence of a limiting mass is predicted, above which no stable white dwarf star can exist. This limiting mass, known as the Chandrasekhar limit, is on the order of 1.4 solar masses. Both predictions are in excellent agreement with observations of white dwarf stars.
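In the non-relativistic limit, the mass-radius relationship described above reduces to the approximate scaling R ∝ M^(-1/3): the heavier the white dwarf, the smaller it is. The sketch below illustrates the scaling only; the reference point (0.6 solar masses at about 8,900 km) is an assumed typical value, not a figure from the text:

```python
# Approximate white-dwarf mass-radius scaling, R ∝ M^(-1/3), from the
# non-relativistic degenerate-electron equation of state. The reference
# values below are assumptions chosen for illustration.
M_ref, R_ref = 0.6, 8900.0   # solar masses, km

def wd_radius_km(mass_solar):
    """Estimate a white dwarf's radius by scaling from the reference point."""
    return R_ref * (mass_solar / M_ref) ** (-1 / 3)

for m in (0.4, 0.6, 1.0, 1.3):
    print(f"M = {m} Msun -> R ~ {wd_radius_km(m):,.0f} km")
```

The printed radii shrink as mass grows, in line with "the larger the mass, the smaller the radius"; near the Chandrasekhar limit the relativistic corrections omitted here make the true radius shrink even faster.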

The central region of a typical white dwarf star is composed of a mixture of carbon and oxygen. Surrounding this core is a thin envelope of helium and, in most cases, an even thinner layer of hydrogen. A very few white dwarf stars are surrounded by a thin carbon envelope. Only the outermost stellar layers are accessible to astronomical observations.

White dwarfs evolve from stars with an initial mass of up to three or four solar masses or even possibly higher. After quiescent phases of hydrogen and helium burning in its core—separated by a first red-giant phase—the star becomes a red giant for a second time. Near the end of this second red-giant phase, the star loses its extended envelope in a catastrophic event, leaving behind a dense, hot, and luminous core surrounded by a glowing spherical shell. This is the planetary-nebula phase. During the entire course of its evolution, which typically takes several billion years, the star will lose a major fraction of its original mass through stellar winds in the giant phases and through its ejected envelope. The hot planetary-nebula nucleus left behind has a mass of 0.5–1.0 solar mass and will eventually cool down to become a white dwarf.

White dwarfs have exhausted all their nuclear fuel and so have no residual nuclear energy sources. Their compact structure also prevents further gravitational contraction. The energy radiated away into the interstellar medium is thus provided by the residual thermal energy of the nondegenerate ions composing its core. That energy slowly diffuses outward through the insulating stellar envelope, and the white dwarf slowly cools down. Following the complete exhaustion of this reservoir of thermal energy, a process that takes several additional billion years, the white dwarf stops radiating and has by then reached the final stage of its evolution and becomes a cold and inert stellar remnant. Such an object is sometimes called a black dwarf.

White dwarf stars are occasionally found in binary systems, as is the case for the white dwarf companion to the brightest star in the night sky, Sirius. White dwarf stars also play an essential role in Type Ia supernovae and in the outbursts of novae and of other cataclysmic variable stars.

White dwarfs are the hot, dense remnants of long-dead stars. They are the stellar cores left behind after a star has exhausted its fuel supply and blown its bulk of gas and dust into space. These exotic objects mark the final stage of evolution for most stars in the universe – including our sun – and light the way to a deeper understanding of cosmic history.

A single white dwarf contains roughly the mass of our sun in a volume no bigger than our planet. Their small size makes white dwarfs difficult to find. No white dwarfs can be seen with the unaided eye.

The light they generate comes from the slow, steady release of prodigious amounts of energy stored up after billions of years spent as a star’s nuclear powerhouse.

White dwarfs are born when a star shuts down. A star spends most of its life in a precarious balance between gravity and outward gas pressure. The weight of a couple octillion tons of gas pressing down on the stellar core drives densities and temperatures high enough to ignite nuclear fusion: the fusing together of hydrogen nuclei to form helium. The steady release of thermonuclear energy prevents the star from collapsing on itself.

Once the star runs out of hydrogen in its center, the star shifts to fusing helium into carbon and oxygen. Hydrogen fusion moves to a shell surrounding the core. The star inflates and becomes a red giant. For most stars – our sun included – this is the beginning of the end. As the star expands and the stellar winds blow at an increasingly ferocious rate, the star’s outer layers escape the relentless pull of gravity.

As the red giant star evaporates, it leaves behind its core. The exposed core is a newly born white dwarf.

The white dwarf consists of an exotic stew of helium, carbon, and oxygen nuclei swimming in a sea of highly energetic electrons. The combined pressure of the electrons holds up the white dwarf, preventing further collapse towards an even stranger entity like a neutron star or black hole.

The infant white dwarf is incredibly hot and bathes the surrounding space in a glow of ultraviolet light and X-rays. Some of this radiation is intercepted by the outflows of gas that have left the confines of the now dead star. The gas responds by fluorescing with a rainbow of colors called a planetary nebula. These nebulae – like the Ring Nebula in the constellation Lyra the Harp – give us a peek into our sun’s future.

The white dwarf now has before it a long, quiet future. As the trapped heat trickles out, it slowly cools and dims. Eventually it will become an inert lump of carbon and oxygen floating invisibly in space: a black dwarf. But the universe isn’t old enough for any black dwarfs to have formed. The first white dwarfs born in the earliest generations of stars are still, 14 billion years later, cooling off. The coolest white dwarfs we know of, with temperatures around 4,000 degrees Celsius (7,000 degrees Fahrenheit), may also be some of the oldest relics in the cosmos.

But not all white dwarfs go quietly into the night. White dwarfs that orbit other stars lead to highly explosive phenomena. The white dwarf starts things off by siphoning gas off its companion. Hydrogen is transferred across a gaseous bridge and spilled onto the white dwarf’s surface. As the hydrogen accumulates, its temperature and density reach a flash point where the entire shell of newly acquired fuel violently fuses, releasing a tremendous amount of energy. This flash, called a nova, causes the white dwarf to briefly flare with the brilliance of 50,000 suns and then slowly fade back into obscurity.

If the gas collects fast enough, however, it can push the entire white dwarf past a critical point. Rather than a thin shell of fusion, the whole star can suddenly come back to life. Unregulated, the violent release of energy detonates the white dwarf. The entire stellar core is obliterated in one of the most energetic events in the universe: a Type Ia supernova. In one second, the white dwarf releases as much energy as the sun does in its entire 10-billion-year lifetime. For weeks or months, it can even outshine an entire galaxy.
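The energy comparison in the last paragraph checks out to order of magnitude. The sketch below multiplies the sun's luminosity over 10 billion years (assumed round figures):

```python
# Order-of-magnitude check: the sun's total energy output over its
# ~10-billion-year lifetime, for comparison with a Type Ia supernova's
# canonical ~10^44 J (~10^51 erg) released in about a second.
L_SUN = 3.8e26                  # W, the sun's luminosity (rounded)
SECONDS_PER_YEAR = 3.15e7
lifetime_output = L_SUN * 10e9 * SECONDS_PER_YEAR   # joules over 10 Gyr

print(f"sun's lifetime energy output: {lifetime_output:.1e} J")
```

The result is roughly 10^44 J, which is indeed the energy scale usually quoted for a Type Ia detonation.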

Such brilliance makes Type Ia supernovae visible from across the universe. Astronomers use them as “standard candles” to measure distances to the farthest reaches of the cosmos. Observations of detonating white dwarfs in distant galaxies led to a discovery that netted the 2011 Nobel Prize in Physics: the expansion of the universe is accelerating. Dead stars have breathed life into our most fundamental assumptions about the nature of time and space.

White dwarfs – the cores left behind after a star has exhausted its fuel supply – are sprinkled throughout every galaxy. Like a stellar graveyard, they are the tombstones of nearly every star that lived and died. Once the sites of stellar furnaces where new atoms were forged, these ancient stars have been repurposed as an astronomer’s tool that have upended our understanding of the evolution of the universe.

Bottom line: White dwarfs are the remnants of dead stars. They are the dense stellar cores left behind after a star has exhausted its fuel supply and blown its gases into space.




#1166 2021-10-21 00:09:17

Registered: 2005-06-28
Posts: 34,916

Re: Miscellany

1143) Nebula

Nebula, (Latin: “mist” or “cloud”) plural nebulae or nebulas, any of the various tenuous clouds of gas and dust that occur in interstellar space. The term was formerly applied to any object outside the solar system that had a diffuse appearance rather than a pointlike image, as in the case of a star. This definition, adopted at a time when very distant objects could not be resolved into great detail, unfortunately includes two unrelated classes of objects: the extragalactic nebulae, now called galaxies, which are enormous collections of stars and gas, and the galactic nebulae, which are composed of the interstellar medium (the gas between the stars, with its accompanying small solid particles) within a single galaxy. Today the term nebula generally refers exclusively to the interstellar medium.

In a spiral galaxy the interstellar medium makes up 3 to 5 percent of the galaxy’s mass, but within a spiral arm its mass fraction increases to about 20 percent. About 1 percent of the mass of the interstellar medium is in the form of “dust”—small solid particles that are efficient in absorbing and scattering radiation. Much of the rest of the mass within a galaxy is concentrated in visible stars, but there is also some form of dark matter that accounts for a substantial fraction of the mass in the outer regions.

The most conspicuous property of interstellar gas is its clumpy distribution on all size scales observed, from the size of the entire Milky Way Galaxy (about {10}^{20} metres, or hundreds of thousands of light-years) down to the distance from Earth to the Sun (about {10}^{11} metres, or a few light-minutes). The large-scale variations are seen by direct observation; the small-scale variations are observed by fluctuations in the intensity of radio waves, similar to the “twinkling” of starlight caused by unsteadiness in the Earth’s atmosphere. Various regions exhibit an enormous range of densities and temperatures. Within the Galaxy’s spiral arms about half the mass of the interstellar medium is concentrated in molecular clouds, in which hydrogen occurs in molecular form (H2) and temperatures are as low as 10 kelvins (K). These clouds are inconspicuous optically and are detected principally by their carbon monoxide (CO) emissions in the millimetre wavelength range. Their densities in the regions studied by CO emissions are typically 1,000 H2 molecules per cubic cm. At the other extreme is the gas between the clouds, with a temperature of 10 million K and a density of only 0.001 H+ ion per cubic cm. Such gas is produced by supernovae, the violent explosions of unstable stars.

This article surveys the basic varieties of galactic nebulae distinguished by astronomers and their chemical composition and physical properties.

Classes of nebulae

All nebulae observed in the Milky Way Galaxy are forms of interstellar matter—namely, the gas between the stars that is almost always accompanied by solid grains of cosmic dust. Their appearance differs widely, depending not only on the temperature and density of the material observed but also on how the material is spatially situated with respect to the observer. Their chemical composition, however, is fairly uniform; it corresponds to the composition of the universe in general in that approximately 90 percent of the constituent atoms are hydrogen and nearly all the rest are helium, with oxygen, carbon, neon, nitrogen, and the other elements together making up about two atoms per thousand. On the basis of appearance, nebulae can be divided into two broad classes: dark nebulae and bright nebulae. Dark nebulae appear as irregularly shaped black patches in the sky and blot out the light of the stars that lie beyond them. Bright nebulae appear as faintly luminous glowing surfaces; they either emit their own light or reflect the light of nearby stars.

Dark nebulae are very dense and cold molecular clouds; they contain about half of all interstellar material. Typical densities range from hundreds to millions (or more) of hydrogen molecules per cubic centimetre. These clouds are the sites where new stars are formed through the gravitational collapse of some of their parts. Most of the remaining gas is in the diffuse interstellar medium, relatively inconspicuous because of its very low density (about 0.1 hydrogen atom per cubic cm) but detectable by its radio emission of the 21-cm line of neutral hydrogen.

Bright nebulae are comparatively dense clouds of gas within the diffuse interstellar medium. They have several subclasses: (1) reflection nebulae, (2) H II regions, (3) diffuse ionized gas, (4) planetary nebulae, and (5) supernova remnants.

Reflection nebulae reflect the light of a nearby star from their constituent dust grains. The gas of reflection nebulae is cold, and such objects would be seen as dark nebulae if it were not for the nearby light source.

H II regions are clouds of hydrogen ionized (separated into positive H+ ions and free electrons) by a neighbouring hot star. The star must be of stellar type O or B, the most massive and hottest of normal stars in the Galaxy, in order to produce enough of the radiation required to ionize the hydrogen.

Diffuse ionized gas, so pervasive among the nebular clouds, is a major component of the Galaxy. It is observed by faint emissions of positive hydrogen, nitrogen, and sulfur ions (H+, N+, and S+) detectable in all directions. These emissions collectively require far more power than the much more spectacular H II regions, planetary nebulae, or supernova remnants that occupy a tiny fraction of the volume.

Planetary nebulae are ejected from stars that are dying but are not massive enough to become supernovae—namely, red giant stars. That is to say, a red giant has shed its outer envelope in a less-violent event than a supernova explosion and has become an intensely hot star surrounded by a shell of material that is expanding at a speed of tens of kilometres per second. Planetary nebulae typically appear as rather round objects of relatively high surface brightness. Their name is derived from their superficial resemblance to planets—i.e., their regular appearance when viewed telescopically as compared with the chaotic forms of other types of nebula.

Supernova remnants are the clouds of gas expanding at speeds of hundreds or even thousands of kilometres per second from comparatively recent explosions of massive stars. If a supernova remnant is younger than a few thousand years, it may be assumed that the gas in the nebula was mostly ejected by the exploded star. In older remnants, the nebula consists chiefly of interstellar gas that has been swept up by the expanding shell.

Historical survey of the study of nebulae

Pre-20th-century observations of nebulae

In 1610, two years after the invention of the telescope, the Orion Nebula, which looks like a star to the naked eye, was discovered by the French scholar and naturalist Nicolas-Claude Fabri de Peiresc. In 1656 Christiaan Huygens, the Dutch scholar and scientist, using his own greatly superior instruments, was the first to describe the bright inner region of the nebula and to determine that its inner star is not single but a compact quadruple system.


Early 18th-century observational astronomers gave high priority to comet seeking. A by-product of their search was the discovery of many bright nebulae. Several catalogs of special objects were compiled by comet researchers; by far the best known is that of the Frenchman Charles Messier, who in 1781 compiled a catalog of 103 nebulous, or extended, objects in order to prevent their confusion with comets. Most are clusters of stars, 35 are galaxies, and 11 are nebulae. Even today many of these objects are commonly referred to by their Messier catalog number; M20, for instance, is the great Trifid Nebula, in the constellation Sagittarius.

The work of the Herschels

By far the greatest observers of the early and middle 19th century were the English astronomers William Herschel and his son John. Between 1786 and 1802 William Herschel, aided by his sister Caroline, compiled three catalogs totaling about 2,500 clusters, nebulae, and galaxies. John Herschel later added to the catalogs 1,700 other nebulous objects in the southern sky visible from the Cape Observatory in South Africa but not from London and 500 more objects in the northern sky visible from England.

The catalogs of the Herschels formed the basis for the great New General Catalogue (NGC) of J.L. Dreyer, published in 1888. It contains the location and a brief description of 7,840 nebulae, galaxies, and clusters. In 1895 and 1908 it was supplemented by two Index Catalogues (IC) of 5,386 additional objects. The list still included galaxies as well as true nebulae, for they were often at this time still indistinguishable. Most of the brighter galaxies are still identified by their NGC or IC numbers according to their listing in the New General Catalogue or Index Catalogues.

Advances brought by photography and spectroscopy

The advent of photography, which allows the recording of faint details invisible to the unaided eye and provides a permanent record of the observation for study of fine details at leisure, caused a revolution in the understanding of nebulae. In 1880 the first photograph of the Orion Nebula was made, but really good ones were not obtained until 1883. These early photographs showed a wealth of detail extending out to distances unsuspected by visual observers.

Much can be learned about the physical nature of an astronomical object by studying its spectrum—i.e., the resolution of its light into different wavelengths (or colours). Study of the spectrum of an object provides a decisive test as to whether it is composed of unresolved stars (as are galaxies) or glowing gas. Stars radiate at all wavelengths, almost always with dark absorption lines superimposed, while hot, transparent gas clouds radiate only emission lines at certain wavelengths characteristic of their constituent gases. In 1864 observation of the spectrum of the Orion Nebula showed bright emission lines of glowing gases, with conspicuous hydrogen lines and some green lines even brighter. By contrast, the spectrum of galaxies was found to be stellar, so a distinction between galaxies and nebulae—that nebulae are gaseous and galaxies are stellar—was appreciated at this time, although the true sizes and distances of galaxies were not demonstrated until the 20th century.

20th-century discoveries

The 20th century witnessed enormous advances in observational techniques as well as in the scientific understanding of the physical processes that operate in interstellar matter. In 1930 a German optical worker, Bernhard Schmidt, invented an extremely fast wide-angled camera ideal for photographing faint extended nebulae. Photographic plates became progressively more sensitive to an ever-widening range of colours, but photography has been completely replaced by photoelectric devices. Most images are now recorded with so-called charge-coupled devices (CCDs) that act as arrays of tiny photoelectric cells, each recording the light from a small patch of sky. Modern CCDs consist of square arrays of up to 4,000 cells on each side, or 16 million independent photocells, capable of observing the sky simultaneously. Electronic detectors are up to 100 times more sensitive than photography, can record a much wider range of light levels, and are sensitive to a much wider range of wavelengths, from 0.1 micrometre in the ultraviolet (accessible only from satellites orbiting above Earth’s atmosphere) to more than 1.2 micrometres in the infrared.

Spacecraft allow the observation of radiation normally absorbed by Earth’s atmosphere: gamma and X-rays (which have very short wavelengths), far-ultraviolet radiation (with wavelengths shorter than about 0.3 micrometre, below which atmospheric ozone is strongly absorbing), and infrared (from about 3 micrometres to 1 mm), strongly absorbed by atmospheric water vapour and carbon dioxide. Gamma rays, X-rays, and ultraviolet radiation reveal the physical conditions in the hottest regions in space (extending to some 100 million kelvins in shocked supernova gas). Infrared radiation reveals the conditions within dark cold molecular clouds, into which starlight cannot penetrate because of absorbing dust layers.

The primary means of studying nebulae is not by images but by spectra, which show the relative distribution of the radiation among various wavelengths (or colours for optical radiation). Spectra can be obtained by means of prisms (as in the earlier part of the 20th century), diffraction gratings, or crystals, in the case of X-rays. A particularly useful instrument is the echelle spectrograph, in which one coarsely ruled grating spreads the electromagnetic radiation in one direction, while another finely ruled grating disperses it in the perpendicular direction. This device, often used both in spacecraft and on the ground, allows astronomers to record simultaneously a wide range of wavelengths with very high spectral resolution (i.e., to distinguish slightly differing wavelengths). For even higher spectral resolution astronomers employ Fabry-Pérot interferometers. Spectra provide powerful diagnostics of the physical conditions within nebulae. Images and spectra provided by Earth-orbiting satellites, especially the Hubble Space Telescope, have yielded data of unprecedented quality.

Ground-based observations also have played a major role in recent advances in scientific understanding of nebulae. The emission of gas in the radio and submillimetre wavelength ranges provides crucial information regarding physical conditions and molecular composition. Large radio telescope arrays, in which several individual telescopes function collectively as a single enormous instrument, give spatial resolutions in the radio regime far superior to any yet achieved by optical means.

Chemical composition and physical processes

Many characteristics of nebulae are determined by the physical state of their constituent hydrogen, by far the most abundant element. For historical reasons, nebulae in which hydrogen is mainly ionized (H+) are called H II regions, or diffuse nebulae; those in which hydrogen is mainly neutral are designated H I regions; and those in which the gas is in molecular form (H2) are referred to as molecular clouds. The distinction is important because of major differences in the radiation that is present in the various regions and consequently in the physical conditions and processes that are important. Radiation is a wave but is carried by packets called photons. Each photon has a specified wavelength and precise energy that it carries, with gamma rays (short wavelengths) carrying the most and X-rays, ultraviolet, optical, infrared, microwave, and radio waves following in order of decreasing energies (or increasing wavelengths). Neutral hydrogen atoms are extremely efficient at absorbing ionizing radiation—that is, an energy per photon of at least 13.6 electron volts (or, equivalently, a wavelength of less than 0.0912 micrometre). If the hydrogen is mainly neutral, no radiation with energy above this threshold can penetrate except for photons with energies in the X-ray range and above (thousands of electron volts or more), in which case the hydrogen becomes somewhat transparent. The absorption by neutral hydrogen abruptly reduces the radiation field to almost zero for energies above 13.6 electron volts. This dearth of hydrogen-ionizing radiation implies that no ions requiring more ionizing energy than hydrogen can be produced, and the ionic species of all elements are limited to the lower stages of ionization. Within H II regions, with almost all the hydrogen ionized and thereby rendered nonabsorbing, photons of all energies propagate, and ions requiring energetic radiation for their production (e.g., O++) occur.

Ultraviolet photons with energies of more than 11.2 electron volts can dissociate molecular hydrogen (H2) into two H atoms. In H I regions there are enough of these photons to prevent the amount of H2 from becoming large, but the destruction of H2 as fast as it forms takes its toll on the number of photons of suitable energies. Furthermore, interstellar dust is a fairly efficient absorber of photons throughout the optical and ultraviolet range. In some regions of space the number of photons with energies higher than 11.2 electron volts is reduced to the level where H2 cannot be destroyed as fast as it is produced on grain surfaces. In this case, H2 becomes the dominant form of hydrogen present. The gas is then part of a molecular cloud. The role of interstellar dust in this process is crucial because H2 cannot be formed efficiently in the gas phase.
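The photon-energy thresholds quoted above (13.6 electron volts for ionizing hydrogen, 11.2 for dissociating H2) correspond to the quoted wavelengths via E = hc/λ. A minimal Python check, using the standard value hc ≈ 1239.84 eV·nm (the function name is my own, for illustration):

```python
# Convert a photon energy in electron volts to its wavelength in micrometres,
# using E = hc / wavelength with hc ≈ 1239.84 eV·nm.
HC_EV_NM = 1239.84  # Planck constant times speed of light, in eV·nm

def threshold_wavelength_um(energy_ev):
    """Wavelength (micrometres) of a photon carrying energy_ev electron volts."""
    return HC_EV_NM / energy_ev / 1000.0  # nm -> micrometres

print(round(threshold_wavelength_um(13.6), 4))  # 0.0912 -- the hydrogen-ionization limit quoted earlier
print(round(threshold_wavelength_um(11.2), 4))  # 0.1107 -- the H2-dissociation threshold
```

Any photon with a wavelength shorter than about 0.0912 micrometre can ionize hydrogen; photons between the two thresholds can dissociate H2 but not ionize H.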

Interstellar dust

Only about 0.7 percent of the mass of the interstellar medium is in the form of solid grains, but these grains have a profound effect on the physical conditions within the gas. Their main effect is to absorb stellar radiation; for photons unable to ionize hydrogen and for wavelengths outside absorption lines or bands, the dust grains are much more opaque than the gas. The dust absorption increases with photon energy, so long-wavelength radiation (radio and far-infrared) can penetrate dust freely, near-infrared rather well, and ultraviolet relatively poorly. Dark, cold molecular clouds, within which all star formation takes place, owe their existence to dust. Besides absorbing starlight, the dust acts to heat the gas under some conditions (by ejecting electrons produced by the photoelectric effect, following the absorption of a stellar photon) and to cool the gas under other conditions (because the dust can radiate energy more efficiently than the gas and so in general is colder). The largest chemical effect of dust is to provide the only site of molecular hydrogen formation on grain surfaces. It also removes some heavy elements (especially iron and silicon) that would act as coolants to the gas. The optical appearance of most nebulae is significantly modified by the obscuring effects of the dust.

The chemical composition of the gas phase of the interstellar medium alone, without regard to the solid dust, can be determined from the strength of narrow absorption lines that are produced by the gas in the spectra of background stars. Comparison of the composition of the gas with cosmic (solar) abundances shows that almost all the iron, magnesium, and silicon, much of the carbon, and only some of the oxygen and nitrogen are contained in the dust. The absorption and scattering properties of the dust reveal that the solid grains are composed partially of silicaceous material similar to terrestrial rocks, though of an amorphous rather than crystalline variety. The grains also have a carbonaceous component. The carbon dust probably occurs in at least two forms: (1) grains, either free-flying or as components of composite grains that also contain silicates, and (2) individual, freely floating aromatic hydrocarbon molecules, containing from about 70 to several hundred carbon atoms and some hydrogen atoms that dangle from the outer edges of the molecule or are trapped in the middle of it. It is merely convention that these molecules are referred to as dust, since the smallest may be only somewhat larger than the largest molecules observed with a radio telescope. Both of the dust components are needed to explain spectroscopic features arising from the dust. In addition, there are probably mantles of hydrocarbon on the surfaces of the grains. The size of the grains ranges from perhaps as small as 0.0003 micrometre for the tiniest hydrocarbon molecules to a substantial fraction of a micrometre; there are many more small grains than large ones.

The dust cannot be formed directly from purely gaseous material at the low densities found even in comparatively dense interstellar clouds, which would be considered an excellent laboratory vacuum. For a solid to condense, the gas density must be high enough to allow a few atoms to collide and stick together long enough to radiate away their energy to cool and form a solid. Grains are known to form in the outer atmospheres of cool supergiant stars, where the gas density is comparatively high (perhaps 10⁹ times what it is in typical nebulae). The grains are then blown out of the stellar atmosphere by radiation pressure (the mechanical force of the light they absorb and scatter). Calculations indicate that refractory materials, such as the constituents of the grains proposed above, should condense in this way.

There is clear indication that the dust is heavily modified within the interstellar medium by interactions with itself and with the interstellar gas. The absorption and scattering properties of dust show that there are many more smaller grains in the diffuse interstellar medium than in dense clouds; apparently in the dense medium the small grains have coagulated into larger ones, thereby lowering the ability of the dust to absorb radiation with short wavelengths (namely, ultraviolet, near 0.1 micrometre). The gas-phase abundances of some elements, such as iron, magnesium, and nickel, also are much lower in the dense regions than in the diffuse gas, although even in the diffuse gas most of these elements are missing from the gas and are therefore condensed into dust. These systematic interactions of gas and dust show that dust grains collide with gas atoms much more rapidly than one would expect if the dust and gas simply drifted together. There must be disturbances, probably magnetic in nature, that keep the dust and gas moving with respect to each other.

The motions of gas within nebulae of all types are clearly chaotic and complicated. There are sometimes large-scale flows, such as when a hot star forms on the outer edge of a cold, quiescent dark molecular cloud and ionizes an H II region in its vicinity. The pressure strongly increases in the newly ionized zone, so the ionized gas flows out through the surrounding material. There are also expanding structures resembling bubbles surrounding stars that are ejecting their outer atmospheres into stellar winds.


Besides these organized flows, nebulae of all types always show chaotic motions called turbulence. This is a well-known phenomenon in gas dynamics that results when there is low viscosity in flowing fluids, so the motions become chaotic eddies that transfer kinetic and magnetic energy and momentum from large scales down to small sizes. On small-enough scales viscosity always becomes important, and the energy is converted into heat, which is kinetic energy on a molecular scale. Turbulence in nebulae has profound, but poorly understood, effects on their energy balance and pressure support.

Turbulence is observed by means of the widths of the emission or absorption lines in a nebular spectrum. No line can be precisely sharp in wavelength, because the energy levels of the atom or ion from which it arises are not precisely sharp. Actual lines are usually much broader than this intrinsic width because of the Doppler effect arising from motions of the atoms along the line of sight. The emission line of an atom is shifted to longer wavelengths if it is receding from the observer and to shorter wavelengths if it is approaching. Part of the observed broadening is easily explained by thermal motions, since v², the average squared speed, is proportional to T/m, where T is the temperature and m is the mass of the atom. Thus, hydrogen atoms move the fastest at any given temperature. Observations show that in fact hydrogen lines are broader than those of other elements but not as much as expected from thermal motions alone. Turbulence represents bulk motions, independent of the mass of the atoms. This chaotic motion of gas atoms of all masses would explain the observations. The physical question, though, is what maintains the turbulence. Why do the turbulent cascades not carry kinetic energy from large-size scales into ever-shorter-size scales and finally into heat?
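The v² ∝ T/m scaling can be made concrete with a small numerical sketch. The 10,000 K temperature below is an illustrative assumption (typical of ionized nebulae), not a figure from the text:

```python
import math

K_B = 1.380649e-23    # Boltzmann constant, J/K
AMU = 1.66053907e-27  # atomic mass unit, kg

def thermal_speed_kms(temp_k, mass_amu):
    """Most-probable thermal speed, sqrt(2kT/m), in km/s."""
    return math.sqrt(2.0 * K_B * temp_k / (mass_amu * AMU)) / 1000.0

v_h = thermal_speed_kms(10_000, 1.008)    # hydrogen
v_o = thermal_speed_kms(10_000, 15.999)   # oxygen
print(f"H: {v_h:.1f} km/s, O: {v_o:.1f} km/s")  # hydrogen moves ~4x faster
```

Because thermal speed scales as 1/√m, hydrogen lines come out roughly four times wider than oxygen lines at the same temperature; any extra width shared equally by all elements is attributed to turbulent bulk motion.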

The answer is that energy is continuously injected into the gases by a variety of processes. One involves strong stellar winds from hot stars, which are blown off at speeds of thousands of kilometres per second. Another arises from the violently expanding remnants of supernova explosions, which sometimes start at 20,000 km (12,000 miles) per second and gradually slow to typical cloud speeds (10 km [6 miles] per second). A third process is the occasional collision of clouds moving in the overall galactic gravitational potential. All these processes inject energy on large scales that can undergo turbulent cascading to heat.

Galactic magnetic field

There is a pervasive magnetic field that threads the spiral arms of the Milky Way Galaxy and extends to thousands of light-years above the galactic plane. The evidence for the existence of this field comes from radio synchrotron emission produced by very energetic electrons moving through it and from the polarization of starlight that is produced by elongated dust grains that tend to be aligned with the magnetic field. The magnetic field is very strongly coupled to the gas because it acts upon the embedded electrons, even the few in H I regions, and the electrons impart some of this motion to the other constituents by means of collisions. The gas and field are effectively confined to moving together, even though the gas can slip along the field freely. The field has an important influence upon the turbulence because it exerts a pressure similar to gas pressure, thereby influencing the motions of the gas. The resulting complex interactions and wave motions have been studied in extensive numerical calculations.


It is no good to try to stop knowledge from going forward. Ignorance is never better than knowledge - Enrico Fermi. 

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.


#1167 2021-10-23 00:52:17

Registered: 2005-06-28
Posts: 34,916

Re: Miscellany

1144) Brown dwarf

Brown dwarf, astronomical object that is intermediate between a planet and a star. Brown dwarfs usually have a mass less than 0.075 that of the Sun, or roughly 75 times that of Jupiter. (This maximum mass is a little higher for objects with fewer heavy elements than the Sun.) Many astronomers draw the line between brown dwarfs and planets at the lower fusion boundary of about 13 Jupiter masses. The difference between brown dwarfs and stars is that, unlike stars, brown dwarfs do not reach stable luminosities by thermonuclear fusion of normal hydrogen. Both stars and brown dwarfs produce energy by fusion of deuterium (a rare isotope of hydrogen) in their first few million years. The cores of stars then continue to contract and get hotter until they fuse hydrogen. However, brown dwarfs prevent further contraction because their cores are dense enough to hold themselves up with electron degeneracy pressure. (Those brown dwarfs above 60 Jupiter masses begin to fuse hydrogen, but they then stabilize, and the fusion stops.)

Brown dwarfs are not actually brown but appear from deep red to magenta depending on their temperature. Objects below about 2,200 K do have mineral grains in their atmospheres. The surface temperatures of brown dwarfs depend on both their mass and their age. The most massive and youngest brown dwarfs have temperatures as high as 2,800 K, which overlaps with the temperatures of very low-mass stars, or red dwarfs. (By comparison, the Sun has a surface temperature of 5,800 K.) All brown dwarfs eventually cool below the minimum main-sequence stellar temperature of about 1,800 K. The oldest and smallest can be as cool as about 300 K.

Brown dwarfs were first hypothesized in 1963 by American astronomer Shiv Kumar, who called them “black” dwarfs. American astronomer Jill Tarter proposed the name “brown dwarf” in 1975; although brown dwarfs are not brown, the name stuck because these objects were thought to have dust, and the more accurate “red dwarf” already described a different type of star. Searches for brown dwarfs in the 1980s and 1990s found several candidates; however, none was confirmed as a brown dwarf. In order to distinguish brown dwarfs from stars of the same temperature, one can search their spectra for evidence of lithium (which stars destroy when hydrogen fusion begins). Alternatively, one can look for (fainter) objects below the minimum stellar temperature. In 1995 both methods paid off. Astronomers at the University of California, Berkeley, observed lithium in an object in the Pleiades, but this result was not immediately and widely embraced. This object, however, was later accepted as the first binary brown dwarf. Astronomers at Palomar Observatory and Johns Hopkins University found a companion, Gliese 229 B, to a low-mass star called Gliese 229. The detection of methane in its spectrum showed that it has a surface temperature less than 1,200 K. Its extremely low luminosity, coupled with the age of its stellar companion, implies that it is about 50 Jupiter masses. Hence, Gliese 229 B was the first object widely accepted as a brown dwarf. Infrared sky surveys and other techniques have now uncovered hundreds of brown dwarfs. Some of them are companions to stars; others are binary brown dwarfs; and many of them are isolated objects. They seem to form in much the same way as stars, and there may be 1–10 percent as many brown dwarfs as stars.

The amount of mass a star is born with is what determines its fate. Stars are objects born with large masses – and therefore strong self-gravity – so that the star squeezes in on itself, creating high internal temperatures. The high temperatures spark thermonuclear fusion reactions, which enable stars to shine. Planets, on the other hand, have much smaller masses and consequently weaker gravity and no internal fusion; they shine mainly with light reflected from their stars. Brown dwarfs fall somewhere between the masses of giant planets like Saturn and Jupiter, and the smallest stars.

We could speak of brown dwarf masses as fractions of our sun’s mass, but astronomers typically use Jupiter’s mass as a standard measure. A value of 13 Jupiter-masses is considered to be the upper limit for gas giant planets. If the mass of the gas planet is larger than 13 Jupiter masses, a thermonuclear burning (fusion) of deuterium – a rare isotope of hydrogen left over from the Big Bang – can occur in the object’s interior. Deuterium is another name for ‘heavy hydrogen,’ which is hydrogen with a neutron attached to the proton in the nucleus of the atom (rather than only a proton alone). A value of greater than 80 Jupiter-masses is the lower limit for burning normal hydrogen – the process by which stars are able to shine – and therefore for enabling an object to qualify as a full-fledged star.

Thus a brown dwarf is typically defined as any body lying in the mass range of 13 to 80 Jupiter-masses.
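The mass boundaries above can be summarized in a minimal sketch. The function name is my own, and the thresholds follow the 13- and 80-Jupiter-mass conventions given in the text (the exact hydrogen-fusion limit varies somewhat with composition):

```python
def classify_by_mass(mass_jupiters):
    """Rough classification of an object by its mass in Jupiter masses."""
    if mass_jupiters < 13:
        return "planet"        # too light even for deuterium fusion
    elif mass_jupiters < 80:
        return "brown dwarf"   # deuterium fusion only, no sustained hydrogen fusion
    else:
        return "star"          # massive enough to fuse normal hydrogen

print(classify_by_mass(1))    # planet (Jupiter itself)
print(classify_by_mass(50))   # brown dwarf (roughly the mass inferred for Gliese 229 B)
print(classify_by_mass(100))  # star
```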

What is a star?

A star is a large collection of dust and gas that has condensed from a primordial cloud that was disturbed in some way. Various mechanisms can cause the disturbance. For example, the shock wave from a distant supernova – exploding star – might disturb a primordial cloud in space, centuries or millennia later and many light-years away. The cloud loses its uniformity, and areas with slightly higher density (and thus more gravity) start to attract lighter molecules.

As matter keeps clumping tighter and tighter due to its now-increasing gravity, it gets hotter and hotter, just like a bicycle tire gets hotter to the touch as you inflate it and thereby compress the air molecules inside. Eventually the matter reaches a critical mass; the star starts to fuse deuterium with regular hydrogen, making helium-3 nuclei. This occurs at a relatively low temperature (slightly less than 1,000,000 kelvins, or about 1,800,000 degrees Fahrenheit).

At the point where fusion begins, we can describe a star differently. Now the star is an object in perfect balance (however temporarily) between the outward-pushing force caused by the fusion reactions in its core, and inward-pushing force of its own self-gravity. Gravity wants to crush a star further, but fusion prevents that from happening. Fusion wants to expand the star, but gravity won’t let it. The result is a fine balance: a star.

If deuterium fusion didn’t take place there would be very few stars in the universe with more than three times the mass of our sun. That’s because – if hydrogen fusion started as soon as the mass and temperature were high enough – the star wouldn’t yet have enough mass for its own self-gravity to resist the outward-pushing pressure of the hydrogen fusion reactions. The star would expand, and this expansion would cause its internal temperature to drop, thus slowing and ultimately ending the hydrogen fusion reactions stars require in order to shine.

Deuterium fusion keeps a star cool enough to allow time for it to accumulate sufficient mass so that when hydrogen fusion actually starts (around 13,000,000 K, or 23,000,000 °F), it can continue. By that time, the star is dense enough to have enough self-gravity to resist expansion, so that temperatures stay high in its interior.
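The Kelvin-to-Fahrenheit figures quoted in this and the earlier paragraph follow the standard conversion °F = K × 9/5 − 459.67; a quick check of the rounded values:

```python
def kelvin_to_fahrenheit(k):
    """Standard conversion from kelvins to degrees Fahrenheit."""
    return k * 9.0 / 5.0 - 459.67

print(round(kelvin_to_fahrenheit(1_000_000)))   # 1799540 -- quoted above as roughly 1,800,000 F
print(round(kelvin_to_fahrenheit(13_000_000)))  # 23399540 -- quoted (loosely) as roughly 23,000,000 F
```

At temperatures in the millions of kelvins, the 459.67-degree offset is negligible and Fahrenheit values are essentially 1.8 times the Kelvin values.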

In most cases, you are left with a single major accretion that forms a hydrogen-fusion-powered star. It is also possible that, in dense clouds, multiple stars can form. And thus we have double-star systems (called binary systems), triple stars (trinary), and quadruple stars (quaternary).

Indeed, there are examples of very complex systems with five, six, and seven stars, called quintuple, sextuple, and septuple systems, respectively. These can fall into orbits around each other that (although very complex) can still be stable enough to allow planetary formation.

General size comparison between a low mass star, a brown dwarf, and the planet Jupiter. Although brown dwarfs are up to 80 times more massive than Jupiter, their size would only be about 10-15% larger. Image via Wikimedia Commons.

What is a planet?

After stellar formation and the beginning of hydrogen fusion, a solar wind spawns and sweeps the remaining gas out of the system. There will be several minor accretions too bulky to be pushed away by the outward pressure of the solar wind. They will, in fact, fall inward, towards the star.

Since everything in the universe has angular momentum – in other words, since the cloud is rotating or spinning – particles in the initial cloud collecting to form the star will have a tendency to fall in toward the star in a long spiral path. This increases their fall time and thus angular speed, which is why planets end up themselves rotating (spinning) and orbiting their stars generally all in the same direction.

Due to collisions and mutual attractions altering the orbits of the newly forming protoplanets, many will reach an equilibrium point and settle into a stable orbit. These will eventually become true planets – either rocky worlds like Earth or Mars, or gas giants like Jupiter or Saturn – by accreting the remaining small leftovers of the original primordial cloud via their own gravity.


What’s the difference between stars and planets?

Stars form from the collapse of gas and dust in a primordial cloud. Consequently, they have a relatively low amount of what astronomers call metals (to astronomers, metallicity refers to any element heavier than hydrogen and helium).

The star collects up the majority of gaseous material in the primordial cloud and its planets form by accreting the leftovers. Planets form with much, much less mass than stars, and thus have much weaker gravity. The lighter elements like hydrogen and helium – so common in stars – tend to escape a planet’s weaker gravitational pull. Thus – relative to stars – planets have high metal content. Planets typically orbit stars. By astronomers’ most recent definition of the word planet, they clear their own orbits of debris.

Of course, we don’t really know what brown dwarfs look like. They’re far away, and we’ve never seen one up close. But here’s an artist’s concept of the brown dwarf called Luhman 16A, based on recent evidence of Jupiter-like bands on its surface. Image via Caltech/ R. Hurt (IPAC).

Where does that leave brown dwarfs?

Brown dwarfs accumulate material like a star, not like a planet. They condense from a gaseous cloud – and are higher in mass than planets and so have stronger gravity – and thus they hold onto their lighter elements (hydrogen and helium) more effectively than planets, giving them a relatively low metal content. Their only failing is that they didn’t collect enough material to begin normal stellar fusion. They can, however, sustain deuterium fusion until the deuterium is gone – a stage that, as explained earlier, is essential to the formation of stars with larger masses.

Brown dwarfs have been found orbiting other suns at distances of 1,000 astronomical units (AU) or more. One AU = one Earth-sun distance. Not all brown dwarfs orbit far from their stars, however; some have been found orbiting at closer distances, and a few rogue brown dwarfs have been spotted not orbiting any star, although, of course, these are tough to find!

As a comparison, of the known planets in our own solar system, Neptune is the major planet orbiting farthest from our sun at 30 AU.

So brown dwarfs are not planets, and they are failed stars, not massive enough to power hydrogen fusion reactions. Thus they get their own classification.

Why brown?

What we now call brown dwarfs were first proposed to exist in the 1960s by astronomer Shiv S. Kumar, who originally called these objects black dwarfs. He pictured them as dark substellar objects floating freely in space that were not massive enough to sustain hydrogen fusion. The name brown dwarf was later coined by astronomer and SETI researcher Jill Tarter in her Ph.D. dissertation. She was looking to define an upper limit on the mass an object could possess before beginning hydrogen fusion, and thus becoming a full-fledged star.

Stars are clearly not “brown,” and many such objects are in the temperature range of 300 to 500 Kelvin (80 to 440 F, or roughly human body temperature and upwards), so they radiate only in the infrared portion of the electromagnetic spectrum. Since black dwarf was already taken, describing objects at the end point of stellar evolution (cold white dwarfs), and red dwarf also had a role to fulfill as the name for small, cool stars, brown must have seemed an appropriate compromise.
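The Kelvin-to-Fahrenheit figures quoted above can be checked with the standard conversion, F = (K − 273.15) × 9/5 + 32. A minimal sketch (the function name is just for illustration):

```python
def kelvin_to_fahrenheit(k):
    # Standard conversion: Kelvin -> Celsius -> Fahrenheit.
    return (k - 273.15) * 9 / 5 + 32

# The 300-500 K range quoted above works out to roughly 80-440 F:
print(round(kelvin_to_fahrenheit(300)))  # -> 80
print(round(kelvin_to_fahrenheit(500)))  # -> 440
```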

Bottom line: Brown dwarfs are objects whose masses range between those of the heaviest gas planets and the lightest stars, which makes them distinct enough to qualify for their own classification. They are typically defined as bodies of greater than 13 and less than 80 Jupiter masses. They can be found orbiting stars or other brown dwarfs, or traveling around the galaxy all on their own.
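The mass boundaries quoted in this post lend themselves to a simple classification rule. Here is a minimal sketch, assuming the 13- and 80-Jupiter-mass cutoffs given above; the function name and the handling of the exact boundary values are illustrative, not any formal IAU definition:

```python
def classify_by_mass(jupiter_masses):
    """Rough classification of a body by its mass, in Jupiter masses."""
    if jupiter_masses < 13:
        return "planet"       # too light even for deuterium fusion
    elif jupiter_masses <= 80:
        return "brown dwarf"  # deuterium fusion, but no hydrogen fusion
    else:
        return "star"         # massive enough for normal hydrogen fusion

print(classify_by_mass(1))     # Jupiter itself: planet
print(classify_by_mass(40))    # brown dwarf
print(classify_by_mass(1048))  # roughly one solar mass: star
```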


It is no good to try to stop knowledge from going forward. Ignorance is never better than knowledge - Enrico Fermi. 

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.


#1168 2021-10-24 00:31:01

Registered: 2005-06-28
Posts: 34,916

Re: Miscellany

1145) Mysophobia

Mysophobia, also known as verminophobia, germophobia, germaphobia, bacillophobia and bacteriophobia, is a pathological fear of contamination and germs. The term was coined by William A. Hammond in 1879 when describing a case of obsessive–compulsive disorder (OCD) exhibited in repeatedly washing one's hands. Mysophobia has long been related to compulsive hand washing. Names pertaining directly to the abnormal fear of dirt and filth include molysmophobia or molysomophobia, rhypophobia, and rupophobia, whereas the terms bacillophobia and bacteriophobia specifically refer to the fear of bacteria and microbes in general.

Signs and symptoms

People who suffer from mysophobia usually display signs including:

* excessive hand washing
* an avoidance of locations that might contain a high presence of germs
* a fear of physical contact, especially with strangers
* excessive effort dedicated to cleaning and sanitizing one's environment
* a refusal to share personal items
* a fear of becoming ill

Mysophobia greatly affects the everyday life of individuals; symptoms range in severity from difficulty breathing, excessive perspiration, and increased heart rate to states of panic when exposed to conditions perceived as germ-ridden.


There are many underlying factors and reasons that a person may develop mysophobia, such as anxiety, depression, or a traumatic situation. Developing in a culture where hygiene is heavily integrated into society (use of hand sanitizers, toilet seat covers, and antibacterial wipes for commonly used items such as grocery carts) can also be a main driving force for the development of mysophobia.

Mysophobia is a type of phobia that centers on an extreme and irrational fear of germs, dirt, or contamination. It is normal and prudent to be concerned about issues such as cross-contamination of foods, exposure to the bodily fluids of others, and maintaining good hygiene. However, if you have mysophobia, these normal concerns become overblown and disruptive to everyday life.

The condition is also known by other names including:

* Germophobia
* Bacillophobia
* Bacteriophobia
* Verminophobia

The phobia is often linked to obsessive-compulsive disorder (OCD), but people who don't have OCD can have it as well. The phobia is believed to be fairly common and can affect people from all walks of life.

This article discusses the symptoms, diagnosis, causes, and treatments for mysophobia. It also covers some of the things that you can do to cope with this type of phobia.


Common symptoms of mysophobia include behaviors that are used to avoid exposure to germs or contamination. These symptoms may include:

* Avoiding places that are thought to contain a lot of germs or dirt
* Extreme fear of becoming contaminated
* Excessive hand washing
* Obsessing over cleanliness
* Overusing cleaning or sanitizing products

If you have mysophobia, you may experience certain symptoms when you are exposed to dirt or bacteria. Such symptoms can include:

* Crying
* Heart palpitations
* Shaking
* Sweating

These symptoms may occur only when the object of your phobia is visible, as is the case when digging in a garden, or when you believe that germ contact may have occurred, such as when shaking hands with someone or using a doorknob.

You may take multiple showers each day. You might carry and use hand sanitizer frequently. You may be unwilling to use public restrooms, share food, or take public transportation.


Mysophobia can lead to a number of behavioral and emotional symptoms such as avoidance, anxiety, and physiological signs of fear and panic.


It is important to note that mysophobia is not recognized as a distinct condition in the Diagnostic and Statistical Manual of Mental Disorders (DSM-5). Instead, it would be considered a specific phobia if the symptoms meet a specific set of diagnostic criteria.

To be diagnosed with a specific phobia, symptoms must lead to:

* Avoidance or extreme distress
* Immediate anxiety response
* Unreasonable or excessive fear

Additionally, these symptoms must affect a person's ability to function normally in different areas of their life. They must not be caused by another mental disorder and must be present for six months or longer.


Because people with mysophobia fear germs carried by others, the condition can lead people to avoid social situations. You might avoid expected gatherings such as work parties, holiday get-togethers, and meetings. When you do participate, you may find yourself avoiding physical contact and sanitizing your hands more frequently.

Over time, these behaviors can lead to isolation. Your friends and relatives might not understand, and they could perceive you as hostile or even paranoid. You could develop social phobia, in which you begin to fear contact with others.


The exact causes of mysophobia are not entirely clear, although a number of different factors are believed to play a role. Some things that can increase the risk of developing a phobia such as mysophobia include:

* A family history of anxiety, depression, or other phobias
* Experiencing a traumatic event that causes a person to become overly focused on germs, dirt, or contamination
* Having obsessive-compulsive disorder (OCD)

Some people believe that the increased availability and use of products such as hand sanitizer and other cleaning products may also play a role in causing mysophobia.

Mysophobia and OCD

Mysophobia is thought to be related to obsessive-compulsive disorder (OCD). OCD obsessions are repeated, persistent, and unwanted urges or images that cause distress or anxiety. These obsessions may intrude when you're trying to think of or do other things.

Obsessions often have themes, such as:

* A fear of contamination
* A need to have things orderly and symmetrical
* Aggressive or horrific thoughts about harming yourself or others
* Unwanted thoughts, including aggression

One of the most common symptoms of mysophobia is frequent hand washing, which is also a common symptom of OCD. However, the motivation for handwashing is different.

Mysophobia vs. OCD

People with OCD are compelled to relieve the distress they experience as a result of the non-completion of the act itself, while people with mysophobia are compelled to complete the act specifically to remove germs. The difference is subtle, and many people experience both conditions, so it is important to see a mental health professional for an accurate diagnosis.


Fortunately, mysophobia can be successfully managed. It is important to visit a mental health professional as soon as possible since the condition tends to worsen over time. Treatments that your therapist may recommend include medication, psychotherapy, or a combination of the two.


Medications are not usually prescribed on their own to address specific phobias such as mysophobia. However, sometimes medications may be prescribed to help manage some symptoms or to treat co-occurring mental health conditions. Medications are most effective when they are used in combination with psychotherapy.


Depending on your therapist’s orientation, you may be encouraged to explore the root of the phobia, or you may simply be taught how to manage the symptoms.

There are a number of types of therapy that can be used to help treat phobias, but two of the most effective approaches are cognitive behavioral therapy (CBT) and exposure therapy.

Cognitive behavioral therapy involves identifying and changing the negative thought patterns that contribute to the phobia.

Exposure therapy focuses on gradually and progressively exposing people to the source of their fear. Over time, people are able to learn to relax and the fear response begins to lessen.

Online therapy may be another option you might want to consider. Online therapy has been found to be effective in the treatment of a number of mental health conditions. Studies also suggest that virtual reality exposure therapy can be just as effective as real-world exposure therapy.



In addition to getting professional treatment, there are other self-help strategies you can use to help find relief. Some techniques you might want to try include:

* Deep breathing
* Getting regular exercise
* Getting enough sleep
* Gradually exposing yourself to your fear
* Lowering caffeine intake
* Meditation
* Mindfulness practices
* Yoga

You may also find it helpful to join a phobia support group where you can discuss resources and coping strategies with people who have had similar experiences. Check with local resources to see if there are any groups in your area or look online for available resources.

Myso comes from the Greek for uncleanness, and phobos means fear. Thus, mysophobia is the excessive and often irrational fear of microbes or of getting contaminated with germs. Mysophobia is also known as germophobia.

Fear of Germs Phobia - Mysophobia

People with an excessive fear of germs believe the world to be a ‘filthy place’ and may develop obsessive-compulsive disorders. As a result, they are always washing or cleaning, going well beyond an ordinary concern with cleanliness. They are known to spend major parts of their day doing these activities over and over.

Mysophobics may also spend vast amounts of money on cleaning products, exposing themselves more than necessary to the harmful chemicals which many of them contain.

It is important to note the difference between being tidy or orderly and being a mysophobe. A mysophobic individual is mainly concerned with contamination and sterilization, unlike a tidy person, who would clean surfaces only to ensure there is no dust.

Many people with an extreme fear of germs also tend to think about microbes all the time. They fear getting contaminated from dirt, dust, grime, or people who are sneezing or coughing. The more often a mysophobe falls sick, the more firmly he or she believes in the need to clean. This can severely impact one’s daily functioning.

Causes of the fear of germs phobia

Mysophobia usually stems from obsessive-compulsive disorder, or OCD. The sufferer feels the need to wash his or her hands frequently, which is one of the characteristics of OCD. In mysophobia, however, the motivation to wash frequently stems from the fear of microbes, whereas in OCD it is more a matter of following a routine. That said, most patients are known to suffer from both conditions, so a thorough medical evaluation is necessary to determine whether it is mysophobia or OCD.

Heredity and genetics are believed to have a strong link to the fear of germs phobia. Children with an obsessive-compulsive parent or caregiver are more likely to become Mysophobes.

Additionally, a traumatic (personal or witnessed) event in the past, or sometimes even a random event, can trigger mysophobia.

Media, learning about germs at school, or getting sick after coming in contact with germs can reinforce one’s beliefs about microbes to the extent that the individual learns to excessively fear germs.

Symptoms of Mysophobia

Depending on the level of fear, different symptoms may be seen in the individual:

Physical symptoms of a panic attack (in what is perceived to be the presence of germs), such as shaking, dry mouth, sweating, nausea, and rapid or irregular heartbeat, are seen in people suffering from an excessive fear of germs. The patient is also likely to engage in unreasonable behaviors or actions like:

* Washing frequently and excessively.
* Refusing to use public bathrooms.
* Avoiding all kinds of social activities or places that include coming in contact with ‘germy’ people or animals.
* Refusing to share personal items like combs, brushes, or food with anyone.

Gradually, the individual may impose many restrictions upon himself, including refusing to touch doorknobs directly or to shake hands with anyone, as well as constantly using products like hand sanitizers or soaps, which, in large quantities, are (paradoxically) known to make one more prone to infections. Thus, mysophobia can severely impact one’s occupational, social, and familial activities.

Treatment for fear of germs

A combination of therapies is recommended for treating phobias like Mysophobia and anxiety disorders like OCD. These include drugs, cognitive behavior therapy, exposure and gradual desensitization therapies as well as relaxation training.

Exposure therapy consists of helping the phobic relearn how to encounter germs gradually until he is able to refrain from washing his hands. The individuals also learn to focus on calming techniques and develop the ability to remain in a ‘contaminated environment’ without having a panic attack.

Cognitive behavior therapies help the person with a fear of germs change his attitude and thoughts about them. This involves writing down negative and positive thoughts such as “I fear I will die from germs” to “Germs are sometimes healthy and useful to us” and so on. The patients are then asked to decide on beliefs that are healthier and useful to them.

Germs are a necessary part of our lives and for a person with excessive fear of germs; life can be very stressful and complicated. However, there is hope and many treatment options that can help one heal completely from Mysophobia.




#1169 2021-10-25 00:27:02

Registered: 2005-06-28
Posts: 34,916

Re: Miscellany

1146) Trypophobia

Fear of Holes Phobia – Trypophobia

The fear of holes, or trypophobia, is an irrational and persistent fear of holes – generally not huge ones, but the tiny holes seen in asymmetrical clusters. It is an unusual yet fairly common type of phobia, wherein sufferers report having an adverse reaction to images of holes or objects with holes.

Typical symptoms of trypophobia

According to researchers Geoff Cole and Arnold Wilkins of the University of Essex, the brains of trypophobic individuals associate the holes with some kind of danger. The kind of danger one senses or imagines is yet to be established.

The fear of holes not only covers holes in the form of images; the individual may also fear holes in meat, clusters or pores on the skin, on vegetables or fruits, or even those in sponges, wood, honeycombs, etc. For some people, even the mere verbal mention of a “fear of small holes” is enough to trigger trembling and shuddering.

The reaction displayed by each trypophobic individual is different: some feel their skin ‘crawling’, others may shudder, a few report feeling itchy, while still others report feeling physically sickened or disgusted. Some phobics also report that thoughts of falling into the holes trigger major panic attacks.

Causes of trypophobia

A group dedicated to the fear of holes on a popular social media site has tried to establish the causes behind this as yet unexplored phobia. Often people are completely unaware that they have a latent form of trypophobia until they actually see images of holes. Individuals in the group have volunteered the following probable causes behind this unusual fear:

* A deep-rooted emotional problem: some object associated with childhood that triggers traumatic memories involving holes – for example, past bee stings that led to a swelling in which the swollen skin displayed every pore.

* Scientists have also reported that evolution may be one of the major causes behind the fear of holes. They point to “pockmarked objects” that do not seem “quite right or completely normal”: some primitive portion of the brain perceives or associates these pockmarks with something dangerous.

* Holes also tend to be associated with organic objects like rashes or skin blisters that typically follow an episode of measles or chicken pox.

Treating fear of holes

Facing your fear of holes is the best way to overcome it. Trypophobia is as yet largely unexplored; however, the same methods of treatment used for overcoming other kinds of anxiety and phobias can be used for treating trypophobia:

Cognitive behavior therapy: This therapy focuses on altering a person’s thinking. This includes converting harmful or unproductive thought patterns into controlled and positive ones. It eventually helps the trypophobic individual distinguish between reality and imagination.

If a deep rooted emotional problem is the likely cause behind the fear of holes, then behavior therapy, counseling and hypnosis may also prove to be very effective in treating trypophobia.

Neuro Linguistic programming therapy is also being used for treating trypophobia. This includes exposing the subject to his/her fears and altering or reprogramming them in order to diminish the phobia.

In conclusion

As can be seen, a lot needs to be studied in order to determine the origin and exact cause of the fear of holes. There are no diagnostic tests for determining if one has the fear of holes. However, if an image of holes or the mere thought thereof is taking an extreme form or affecting your day to day life, it is best to undergo one of the therapies explained above in order to regain control over this fear.




#1170 2021-10-26 00:25:23

Registered: 2005-06-28
Posts: 34,916

Re: Miscellany

1147) Flare star

A flare star is a variable star that can undergo unpredictable dramatic increases in brightness for a few minutes. It is believed that the flares on flare stars are analogous to solar flares in that they are due to the magnetic energy stored in the stars' atmospheres.

A flare star, also called a UV Ceti star, is any star that varies in brightness, sometimes by more than one magnitude, within a few minutes. The cause is thought to be the eruption of flares much larger than, but otherwise similar to, those observed on the Sun. Flare stars are sometimes called UV Ceti stars, from a prototype star in the constellation Cetus. Proxima Centauri, the closest star to the Sun, is a flare star. All known flare stars are red dwarfs; flares in intrinsically brighter stars are not presently detectable. In UV Ceti and a few others, radio flares have been observed, often corresponding to the optical outbursts.

UV Ceti and the flare stars

Our galaxy is filled with billions of red dwarf stars, all of which are too dim to see with the naked eye. Lying at the faint, red end of the Hertzsprung-Russell diagram, their small masses -- a few tenths that of the Sun -- make them much cooler and dimmer than our own Sun. In fact, few of these stars have been detected beyond a dozen or so parsecs of our solar system. However, some of these stars belong to the spectacular class of variables known as the flare stars or the UV Ceti variables. At irregular and unpredictable intervals, they can dramatically increase in brightness over a broad wavelength range from X-rays to radio waves for anywhere from a few minutes to a few hours. The fact that such small, unassuming stars can suddenly undergo incredibly energetic events makes the flare stars one of the more intriguing targets for variable star observers.

A short history of flare stars

Although flare stars may have been detected as early as 1924, the earliest confirmed observations are attributed to W.J. Luyten, who discovered strongly variable spectra in two high proper-motion dwarf stars now known as V1396 Cyg and AT Mic. In particular, Luyten noted that the emission lines of hydrogen were first observed in a very bright state, but then rapidly faded. Several similar stars were found in short order: V371 Ori, WX UMa, YZ CMi, and DO Cep were all discovered in the late 1930s and early 1940s.

However, the field of flare star research really took off with the discovery of flares on Luyten 726-8, another high proper-motion binary, in September of 1948 (Joy & Humason 1949). Astronomers observing this star at Mount Wilson discovered a huge increase in brightness over a very short time. Later analysis of the spectra taken during the observation revealed a change in brightness of over four magnitudes, and a rise in effective temperature to well over 10,000 K. These incredible changes faded just as rapidly, with the star returning to its cool, quiescent state in less than a day. Today, the star is known as UV Ceti, the class prototype for the flare stars.

Since their initial detection and characterization in visible light, the UV Ceti stars have also been detected over a wide wavelength range, from X-rays to radio. The coincidence between radio and optical flaring in the UV Ceti stars was noticed as early as 1966 (Lovell & Solomon), and X-ray flares were first detected in 1975 (Heise et al.). Many multiwavelength campaigns have been conducted on various flare stars, and as a result, we now have a reasonably good physical picture of how flares work. The number of known flare stars is also increasing with time: the GCVS currently lists 1620 stars of UV Ceti (UV) or UV Ceti + Nebular (UVN) type. Recently, variable emission lines have been detected in young brown dwarfs (Liebert 2003), raising the exciting possibility that brown dwarfs may also exhibit flaring activity.

The Characteristics and Physics of Flare stars

As a class, the known flare stars have spectral types of late M through late-K, corresponding to temperatures between about 2500 to 4000 K. Often, they have detectable emission lines of hydrogen and calcium in their spectra, indicating chromospheric activity. They have masses between 0.1 and 0.6 times that of the Sun; some brown dwarfs may exhibit flaring activity, though the study of these stars is still very much in its infancy. Many of the known flare stars are members of young stellar associations (e.g. the Orion and Taurus star-forming regions), though some older flare stars are known. Many are also known to be binary stars, and this may correspond to an increased likelihood of activity. Some of the UV Ceti flare stars are also members of the BY Draconis class of spotted variables.

Variability in the flare stars is characterized by rapid, irregular, large-amplitude increases in stellar brightness, followed by a much slower decay (from minutes to hours) back to a quiescent level. The strongest variations occur in the blue end of the optical spectrum: a flare may cause a one-magnitude change in brightness in the V-band, but five magnitudes in the U-band. Flares are typically accompanied by brightening of the emission line spectra of the star, particularly of the Balmer series of hydrogen, and the appearance of ionized helium lines as well. Flares have also been observed in the radio and X-ray regions of the spectrum, though they are not necessarily coincident with optical flares.

It is now believed that flares on the UV Ceti stars are analogous to solar flares in nearly all respects. On the Sun, flares are caused by the sudden release of magnetic energy via magnetic reconnection events. The solar photosphere is threaded with magnetic fields that move and change in strength over time. Solar material and the magnetic fields are coupled together, and one of the effects of this can be sunspots. In sunspots, the magnetic field prevents convection which transports heat from the interior to the surface -- when the heat transfer is blocked, material at the surface cools down. Another effect of this coupling can be a solar flare -- if the magnetic field can rearrange itself to a lower-energy configuration, the excess energy gets transferred to the plasma within and around the magnetic field. When this happens, the solar plasma is rapidly heated, and can even be accelerated to relativistic speeds. The super-heated plasma radiates vigorously in ultraviolet (and even X-ray) light, producing a flare -- a rapid spike in brightness. In addition to thermal heating, particle acceleration also results in the emission of non-thermal radiation, including gamma-rays from collision-induced nuclear reactions. As the gas cools and the energetic particles dissipate, the brightness of the flare decays with an exponential timescale.

Similar things are believed to happen on flare stars, with a few important differences. One difference is that flare stars are intrinsically faint in visible light, particularly at shorter wavelengths. Thus the flare drastically raises the ultraviolet-blue continuum of the star, along with emission lines atypical of cool stars (like ionized helium). Another difference is that the absolute sizes of flares on flare stars may be a significant fraction of the size of the star itself (perhaps as much as one fifth of the circumference!) rather than being limited to a few thousand kilometers as on the Sun. These two things combine to cause very large luminosity changes in the flare stars, particularly in the blue end of the spectrum.

The flare stars are known to be bright at X-ray and radio wavelengths as well. The physics of flare star radio flares is likely the same as on the Sun: a magnetic event accelerates charged particles that interact with magnetic fields to produce cyclotron and synchrotron radiation. X-ray flares have also been observed, but the flare stars are also known to have very large quiescent X-ray luminosities, most likely from a large, bright corona. Their X-ray luminosities can be on the order of one percent of the total bolometric luminosity -- far, far greater than would be expected for a low-mass, non-interacting star.

The flare stars spend relatively little time at their brightest -- perhaps a few minutes per flare -- and the occurrences of flares are unpredictable. Therefore, flares are difficult to catch. The best way to observe one is to devote one night to a single flare star, and monitor it once every few minutes, much like one would observe a short-period eclipsing binary. In the ideal (but unlikely) case, these observations would allow you to catch the quiescent pre-flare brightness level, the rapid rise of a flare, and its decay to quiescence. Unfortunately, flares are unpredictable, so it may be a while before you catch a flare star in the act. Flare stars are ideally suited to CCD and (especially) photoelectric observations because of the high time-resolution needed to catch all phases of the flare, and photomultipliers have the additional benefit of being very blue-sensitive. If you are using a CCD or photomultiplier, U, B, or V filters or their equivalent are recommended, since the flare amplitudes are larger at bluer wavelengths. However, many flare events have been detected by visual observers, so persistent visual observers should have no problem observing flare stars.




#1171 2021-10-27 00:35:59

Registered: 2005-06-28
Posts: 34,916

Re: Miscellany

1148) Magnitude

Magnitude, in astronomy, measure of the brightness of a star or other celestial body. The brighter the object, the lower the number assigned as a magnitude. In ancient times, stars were ranked in six magnitude classes, the first magnitude class containing the brightest stars. In 1856 the English astronomer Norman Robert Pogson proposed the system presently in use. One magnitude is defined as a ratio of brightness of 2.512 times; e.g., a star of magnitude 5.0 is 2.512 times as bright as one of magnitude 6.0. Thus, a difference of five magnitudes corresponds to a brightness ratio of 100 to 1. After standardization and assignment of the zero point, the brightest class was found to contain too great a range of luminosities, and negative magnitudes were introduced to spread the range.

Apparent magnitude is the brightness of an object as it appears to an observer on Earth. The Sun’s apparent magnitude is -26.7, that of the full Moon is about -11, and that of the bright star Sirius, -1.5. The faintest objects visible through the Hubble Space Telescope are of (approximately) apparent magnitude 30. Absolute magnitude is the brightness an object would exhibit if viewed from a distance of 10 parsecs (32.6 light-years). The Sun’s absolute magnitude is 4.8.
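The 10-parsec definition above, combined with the inverse-square law and the 2.512-per-magnitude ratio, gives the standard relation M = m - 5 log10(d / 10 pc). A quick sanity check in Python, using the Sun's apparent magnitude of -26.7 and its distance of one astronomical unit (about 1/206,265 of a parsec), recovers an absolute magnitude near the 4.8 quoted above (the function name is illustrative):

```python
import math

def absolute_magnitude(apparent_m, distance_pc):
    # M = m - 5 * log10(d / 10 pc): the magnitude the object would have
    # if viewed from the standard distance of 10 parsecs.
    return apparent_m - 5 * math.log10(distance_pc / 10)

AU_IN_PARSECS = 1 / 206265  # one Earth-sun distance, in parsecs (approx.)
m_sun = absolute_magnitude(-26.7, AU_IN_PARSECS)
print(round(m_sun, 2))  # about 4.87, consistent with the ~4.8 above
```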

Bolometric magnitude is that measured by including a star’s entire radiation, not just the portion visible as light. Monochromatic magnitude is that measured only in some very narrow segment of the spectrum. Narrow-band magnitudes are based on slightly wider segments of the spectrum and broad-band magnitudes on areas wider still. Visual magnitude may be called yellow magnitude because the eye is most sensitive to light of that colour.

What is stellar magnitude?

Sometimes you’ll read how bright a star or planet appears from Earth at a particular magnitude. The word magnitude in astronomy, unless stated otherwise, usually refers to a celestial object’s apparent brightness or apparent visual magnitude. The intrinsic brightness of stars, on the other hand, is called luminosity or absolute magnitude. For the remainder of this post, we’ll be using the word magnitude to talk about a star’s apparent visual magnitude.

The magnitude scale dates back to the ancient astronomers Hipparchus and Ptolemy, whose star catalogs listed stars by their magnitudes.

According to this ancient scale, the brightest stars in our sky are 1st magnitude, and the very dimmest stars to the eye alone are 6th magnitude. A 2nd-magnitude star is still modestly bright but fainter than a 1st-magnitude star, and a 5th-magnitude star is still pretty faint but brighter than a 6th-magnitude star.

This system remains intact to this day, though with some modification.

Of course, most stars aren’t like Spica (m = 1.0), whose brightness falls almost precisely at a whole number on the magnitude scale. That’s why – for astronomers – any star with a magnitude between 0.50 and 1.50 is considered to be of 1st-magnitude brightness.

Consider the 1st-magnitude star Aldebaran, which has an apparent magnitude of 0.87. Meanwhile, the 1st-magnitude star Regulus has a magnitude of 1.36. Both are considered 1st magnitude stars – among the sky’s brightest stars – although their brightnesses are not exactly equal.

Okay, let’s talk about some astronomical shorthand. In astronomy, the intrinsic or true brightness of a star – sometimes called its absolute magnitude – is represented by a capital letter M. Meanwhile, apparent magnitude – or how bright a star appears from Earth – is represented by a lowercase letter m.

This system of numbering for apparent magnitude confuses some people. Just remember, the dimmer the star, the higher the magnitude number. Regulus (m = 1.36) is actually fainter than Spica (m = 1), yet Aldebaran (m = 0.87) is brighter than Spica.

The magnitude scale is much like golf: a lower number means greater brightness, just as a lower score means a better game.

Loosely speaking, the 21 stars that are brighter than magnitude 1.50 are called 1st-magnitude stars.

However, the 0-magnitude star Vega (m = 0.00) is actually one magnitude brighter than Spica (m = 1.00), and the star Sirius with a negative magnitude (m = -1.44) is nearly two and one-half magnitudes brighter than Spica.

One magnitude corresponds to a brightness factor of 2.512 times

Modern astronomy has added precision to the magnitude scale. A difference of 5 magnitudes corresponds to a brightness factor of a hundredfold. In other words, a 1st-magnitude star is 100 times brighter than a 6th-magnitude star – or conversely, a 6th-magnitude star is 100 times dimmer than a 1st-magnitude star. The fifth root of 100 approximately equals 2.512, so a difference of one magnitude corresponds to a brightness factor of about 2.512 times.

1m: brightness factor of 2.512
2m: brightness factor of 2.512 x 2.512 = 6.31
3m: brightness factor of 2.512 x 2.512 x 2.512 = 15.85
4m: brightness factor of 2.512 x 2.512 x 2.512 x 2.512 = 39.81
5m: brightness factor of 2.512 x 2.512 x 2.512 x 2.512 x 2.512 = 100
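The table above follows from a single formula, and it can be checked with a few lines of code (a quick Python sketch; the names here are my own, not part of the post):

```python
def brightness_factor(delta_m):
    """Brightness ratio corresponding to a magnitude difference delta_m.

    A difference of 5 magnitudes is defined as a factor of exactly 100,
    so one magnitude is the fifth root of 100, about 2.512.
    """
    return 100 ** (delta_m / 5)

# Reproduce the table: 2.512, 6.310, 15.851, 39.811, 100.000
for dm in range(1, 6):
    print(f"{dm}m: brightness factor of {brightness_factor(dm):.3f}")
```

Note that the table’s entries are just successive powers of 2.512, so small rounding differences (6.31 vs. 6.310) are expected.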

A higher positive number means a fainter celestial object; whereas a higher negative number means a brighter celestial object. For instance, Venus at its brightest has a magnitude of -4.6 and the faintest star visible to the unaided eye has a magnitude of +6.0.

Extending the magnitude scale

However, there’s a far larger range of brightness than just 5 magnitudes (a brightness factor of one hundredfold) in our sky. The sun, the moon, and the planets Venus and Jupiter are much, much brighter than 1st magnitude; and telescopes let us see stars that are millions of times fainter than 6th magnitude.

Nowadays, the magnitude system includes not just stars but also the sun, moon, planets, asteroids and comets within the solar system, and star clusters and galaxies that reside outside the solar system. Astronomers even list the magnitudes of man-made satellites circling our planet.

Because a difference of 5 magnitudes corresponds to a brightness factor of 100 times, then a difference of 10 magnitudes corresponds to a brightness factor of 10,000 times (100 x 100 = 10,000). In addition, a difference of 15 magnitudes corresponds to a brightness factor of 1,000,000 times, and a difference of 20 magnitudes corresponds to a brightness factor of 100,000,000 times.

10m = 100 x 100 = brightness factor of 10,000 times
15m = 100 x 100 x 100 = brightness factor of 1,000,000 times
20m = 100 x 100 x 100 x 100 = brightness factor of 100,000,000 times

How much brighter is the sun than the full moon?

Looking at the table above, we find the magnitude of the sun at -26.74 and the full moon at -12.74. Those numbers may be abstract to the point of meaninglessness for many of us, but let’s see if we can bring this arcane magnitude system down to Earth. First of all, we find that the magnitude difference between the sun and moon equals 14 magnitudes: -12.74 - (-26.74) = -12.74 + 26.74 = 14.00.

Magnitude difference of sun and full moon: -12.74 -(-26.74) = -12.74 + 26.74 = 14.00

Or if you prefer:

Magnitude difference of sun and full moon: -26.74 -(-12.74) = -26.74 + 12.74 = -14.00

We can divide this magnitude difference between the sun and moon into 10m and 4m. Looking at our chart above, we see 10m = a brightness factor of 10,000 and 4m = a brightness factor of 39.81. We then multiply 10,000 by 39.81 to find that the sun is nearly 400,000 times brighter than the full moon.

Brightness variation of sun and full moon: 10,000 x 39.81 = 398,100 times brighter than the full moon.
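The same arithmetic can be done in one step, without splitting 14 magnitudes into 10m and 4m (again a Python sketch with my own function name, not something from the post):

```python
def brightness_ratio(m_brighter, m_fainter):
    """How many times brighter the first object is, given two apparent magnitudes."""
    return 100 ** ((m_fainter - m_brighter) / 5)

sun, full_moon = -26.74, -12.74
# 100 ** (14 / 5) is about 398,107 -- close to the 10,000 x 39.81 = 398,100 estimate,
# which differs only because 39.81 is a rounded value of 100 ** (4 / 5).
print(brightness_ratio(sun, full_moon))
```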

Bottom line: The stellar magnitude system devised by the ancients was much less confusing when it only applied to those stars visible to the unaided eye. The brightest stars were 1st-magnitude and the faintest stars were 6th-magnitude. However, modern astronomy has expanded the magnitude scale in one direction to include brighter celestial objects (such as the sun, moon and Venus) and in the other direction to telescopic objects that lie beyond the limit of the naked eye. Therefore, the brightest celestial objects have the largest negative numbers, while the faintest have the largest positive numbers.


It is no good to try to stop knowledge from going forward. Ignorance is never better than knowledge - Enrico Fermi. 

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.


#1172 2021-10-28 00:36:44


Re: Miscellany

1149) Constellation

Constellation, in astronomy, any of certain groupings of stars that were imagined—at least by those who named them—to form conspicuous configurations of objects or creatures in the sky. Constellations are useful in assisting astronomers and navigators to locate certain stars.

From the earliest times the star groups known as constellations, the smaller groups (parts of constellations) known as asterisms, and also individual stars have received names connoting some meteorological phenomena or symbolizing religious or mythological beliefs. At one time it was held that the constellation names and myths were of Greek origin; this view has now been disproved. An examination of the Hellenic myths associated with the stars and star groups, in the light of the records revealed by the deciphering of Mesopotamian cuneiform texts, leads to the conclusion that in many, if not all, cases the Greek myth has a Mesopotamian parallel.

The earliest Greek work that purported to treat the constellations as constellations, of which there is certain knowledge, is the Phainomena of Eudoxus of Cnidus (c. 395–337 BCE). The original is lost, but a versification by Aratus (c. 315–245 BCE), a poet at the court of Antigonus II Gonatas, king of Macedonia, is extant, as is a commentary by Hipparchus (mid-2nd century BCE).

Three hundred years after Hipparchus, the Alexandrian astronomer Ptolemy (100–170 CE) adopted a very similar scheme in his Uranometria, which appears in the seventh and eighth books of his Almagest, the catalog being styled the “accepted version.” The names and orientation of the 48 constellations therein adopted are, with but few exceptions, identical with those used at the present time.

The majority of the remaining 40 constellations that are now accepted were added by European astronomers in the 17th and 18th centuries. In the 20th century the delineation of precise boundaries for all the 88 constellations was undertaken by a committee of the International Astronomical Union. By 1930 it was possible to assign any star to a constellation.

Origin of the Constellations

Ever since people first wandered the Earth, great significance has been given to the celestial objects seen in the sky. Throughout human history and across many different cultures, names and mythical stories have been attributed to the star patterns in the night sky, thus giving birth to what we know as constellations.

When were the first constellations recorded? Archaeological studies have identified possible astronomical markings painted on the walls in the cave system at Lascaux in southern France. Our ancestors may have recorded their view of the night sky on the walls of their cave some 17,300 years ago. It is thought that the Pleiades star cluster is represented alongside the nearby cluster of the Hyades. Was the first ever depiction of a star pattern made over seventeen millennia ago?

Over half of the 88 constellations the IAU recognizes today are attributed to the ancient Greeks, who consolidated the earlier works of the ancient Babylonians, Egyptians and Assyrians. Forty-eight of the constellations we know were recorded in the seventh and eighth books of Claudius Ptolemy’s Almagest, although the exact origin of these constellations still remains uncertain. Ptolemy’s descriptions were probably strongly influenced by the work of Eudoxus of Knidos in around 350 BC. Between the 16th and 17th centuries AD, European astronomers and celestial cartographers added new constellations to the 48 previously described by Ptolemy; these new constellations were mainly “new discoveries” made by the Europeans who first explored the southern hemisphere. Those who made particular contributions to the “new” constellations include the Polish-born German astronomer Johannes Hevelius; three Dutch cartographers, Frederick de Houtman, Pieter Dirksz Keyser and Gerard Mercator; the French astronomer Nicolas Louis de Lacaille; the Flemish mapmaker Petrus Plancius and the Italian navigator Amerigo Vespucci.

IAU and the 88 Constellations

Originally the constellations were defined informally by the shapes made by their star patterns, but, as the pace of celestial discoveries quickened in the early 20th century, astronomers decided it would be helpful to have an official set of constellation boundaries. One reason was to aid in the naming of new variable stars, which brighten and fade rather than shine steadily. Such stars are named for the constellation in which they reside, so it is important to agree where one constellation ends and the next begins.

Eugène Delporte originally listed the 88 “modern” constellations on behalf of the IAU Commission 3 (Astronomical Notations), in Délimitation scientifique des constellations.

Constellation Figures

In star maps it is common to mark line “patterns” that represent the shapes that give the constellations their names. However, the IAU defines a constellation by its boundary (indicated by sky coordinates) and not by its pattern, and the same constellation may have several variants in its representation.

The constellations should be differentiated from asterisms. Asterisms are patterns or shapes of stars that are not related to the known constellations but nonetheless are widely recognised by laypeople or in the amateur astronomy community. Examples of asterisms include the seven bright stars in Ursa Major known as “the Plough” in Europe or “the Big Dipper” in America, as well as “the Summer Triangle”, a large triangle seen in the summer night sky in the northern hemisphere and composed of the bright stars Altair, Deneb and Vega. Whilst a grouping of stars may be officially designated a constellation by the IAU, this does not mean that the stars in that constellation are necessarily grouped together in space. Sometimes stars will be physically close to each other, like the Pleiades, but constellations are really a matter of perspective. They are simply our Earth-based interpretation of two-dimensional star patterns on the sky made up of stars of many differing brightnesses and distances from Earth.

Constellation Names

Each Latin constellation name has two forms: the nominative, for use when talking about the constellation itself, and the genitive, or possessive, which is used in star names. For instance, Hamal, the brightest star in the constellation Aries (nominative form), is also called Alpha Arietis (genitive form), meaning literally “the alpha of Aries”.

The Latin names of all the constellations, their abbreviated names and boundaries can be found in the table below. They are a mix of the ancient Greek patterns recorded by Ptolemy as well as some more “modern” patterns observed later by more modern astronomers.

The IAU adopted three-letter abbreviations of the constellation names at its inaugural General Assembly in Rome in 1922. So, for instance, Andromeda is abbreviated to And whilst Draco is abbreviated to Dra.
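As a toy illustration of the naming scheme (only a few sample entries are shown; a real program would load the full IAU table), the nominative name, genitive form and three-letter abbreviation can be kept together in a small lookup:

```python
# A few sample entries: nominative name -> (genitive form, IAU abbreviation).
CONSTELLATIONS = {
    "Aries": ("Arietis", "Ari"),
    "Andromeda": ("Andromedae", "And"),
    "Draco": ("Draconis", "Dra"),
}

def bayer_name(greek_letter, constellation):
    """Build a Bayer-style designation from a Greek letter and a constellation.

    Star names use the genitive, e.g. the brightest star of Aries is
    'Alpha Arietis' ("the alpha of Aries").
    """
    genitive, _abbrev = CONSTELLATIONS[constellation]
    return f"{greek_letter} {genitive}"

print(bayer_name("Alpha", "Aries"))  # prints "Alpha Arietis"
```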

Pronunciation of Constellation Names

Experienced astronomers, both professional and amateur, pronounce constellation names in many different ways, but have no trouble understanding each other.  There is no single correct way of pronouncing a constellation name, and there are several sources that address the issue.




#1173 2021-10-29 00:12:58


Re: Miscellany

1150) Dinosaur

A brief history of dinosaurs

Dinosaurs ruled the Earth for about 174 million years. Here's what we know about their history.

Dinosaurs were a successful group of animals that emerged between 240 million and 230 million years ago and came to rule the world until about 66 million years ago, when a giant asteroid slammed into Earth. During that time, dinosaurs evolved from a group of mostly dog- and horse-size creatures into the most enormous beasts that ever existed on land.

Some meat-eating dinosaurs shrank over time and evolved into birds. So, in that sense, only the non-avian dinosaurs went extinct. (For the purposes of this article, "dinosaurs" will refer to non-avian dinosaurs, unless otherwise stated.)

During the roughly 174 million years that dinosaurs existed, the world changed greatly. When dinosaurs first appeared in the Triassic period (252 million to 201 million years ago), they roamed the supercontinent of Pangea. But by the time the asteroid hit at the end of the Cretaceous period (145 million to 66 million years ago), the continents were in approximately the same place they are today.

What Are Dinosaurs?

The oldest unequivocal dinosaur fossils, dating to about 231 million years ago, are from Ischigualasto Provincial Park in northwestern Argentina, and include the genera Herrerasaurus, Eoraptor and Eodromaeus. Scientists are still debating whether Nyasasaurus, a genus found in Tanzania that dates to about 240 million years ago, is also an early dinosaur or a dinosauromorph, a group that includes dinosaurs and their close relatives, said Steve Brusatte, a paleontologist at the University of Edinburgh in Scotland.

Whenever they first appeared, the dinosaurs’ unique anatomy set them apart from other animal groups. Dinosaurs are archosaurs, a clade (a group comprising a common ancestor and all of its descendants) that includes crocodilians, pterosaurs, dinosaurs and birds. The archosaurs emerged after the end-Permian extinction about 252 million years ago. Over time, some archosaurs, including dinosauromorphs, adopted an upright posture, meaning they had legs under their bodies, rather than out to their sides.

"Sprawling is all well and good for cold-blooded critters that don't need to move very fast. Tucking your limbs under your body, however, opens up a new world of possibilities," Brusatte wrote in "The Rise and Fall of the Dinosaurs: A New History of a Lost World" (William Morrow, 2018). As archosaur evolution progressed, dinosauromorphs gained long tails, big leg muscles and extra hip bones that enabled them to move quickly and efficiently, Brusatte wrote.

Some dinosauromorphs evolved into dinosaurs. The differences between the two are small, but dinosaurs' anatomy offered increased benefits, including arms that could move in and out, neck vertebrae that could support stronger muscles than before, and a joint where the thigh bone meets the pelvis, Brusatte wrote.

This unique anatomy helped dinosaurs become successful. Having an upright posture also freed the hands, allowing dinosaurs such as iguanodonts to grasp branches and carnivorous dinosaurs to claw and kill prey, noted Gregory Erickson, a paleobiologist at Florida State University. Ultimately, having free arms "allowed gliding then flight in birds," he said.

Moreover, dinosaurs were likely warm blooded, according to research on their growth rates. "When you become a warm blooded animal, you can operate 24/7," Erickson told Live Science. "You're not at the whims of the environment in terms of being active."

Initially, dinosaurs were not as diverse as the crocodile-like archosaurs they were living alongside, Brusatte noted. In fact, dinosaurs "didn't become too successful right away; the crocs ruled the Triassic, then the end-Triassic extinction hit and the dinosaurs survived and took over."

The clade Dinosauria (which means “terrible lizard” in Greek) was coined in 1842 by the English paleontologist Richard Owen, who included the meat-eating theropod Megalosaurus, the long-necked sauropodomorph Cetiosaurus and the ornithischian Iguanodon as the first known species in the clade, according to the book “Dinosaurs Rediscovered” (Thames & Hudson, 2019).

Each of these dinosaurs, it turns out, represents one of the three major dinosaur groups.

Types Of Dinosaurs

As of 2021, there were 1,545 scientifically described dinosaur species, according to the Paleobiology Database. About 50 previously unknown species are described each year, meaning there's roughly one newfound species described each week, Brusatte said.

All of these dinosaurs fit into one of three groups: Ornithischia, Sauropodomorpha and Theropoda.

Ornithischia dinosaurs include beaked plant-eaters, such as Stegosaurus, duck-billed dinosaurs (also called hadrosaurs), as well as horned dinosaurs like Triceratops and armored dinosaurs like Ankylosaurus. Some ornithischians walked on four legs, while others walked on two.

Sauropodomorpha dinosaurs were long-necked, pot-bellied dinosaurs that had tiny heads and column-like limbs. This group includes sauropods (such as Diplodocus), their smaller antecedents (including Chromogisaurus) and extra-large sauropods known as titanosaurs (such as Dreadnoughtus and Argentinosaurus), which are among the largest land animals that have ever existed.

Theropoda is a group of meat-eating dinosaurs, although some (such as Chilesaurus diegosuarezi) changed their diet to be herbivorous or omnivorous. Theropods include Tyrannosaurus rex and Velociraptor, as well as birds, which evolved from small theropods.

So, how are these groups related? It's up for debate. Ornithischian dinosaurs have a backward-pointing pubis bone in the hip, earning them the name bird-hipped dinosaurs. (However, they are not the ancestors of birds; theropods are.) Meanwhile, theropods and sauropodomorphs have saurischian or "reptile hips," which are also seen in modern crocodiles and lizards, according to the book "Dinosaurs Rediscovered."

Historically, it was thought that the reptile-hipped theropods and sauropodomorphs were more closely related to each other than to ornithischians. However, a 2017 study in the journal Nature uprooted the dinosaur family tree by suggesting that ornithischians and theropods were more closely related, based on analyses of 74 dinosaur species, Live Science previously reported. Shortly after, another 2017 study in the journal Nature found that neither arrangement – nor a third that is rarely considered – is statistically favored over the others, meaning all the suggested family trees are equally plausible until more evidence comes forth.

When Did Dinosaurs Live?

Dinosaurs lived during most of the Mesozoic era, a geological age that lasted from 252 million to 66 million years ago. The Mesozoic era includes the Triassic, Jurassic and Cretaceous periods.

Dinosaurs arose from small dinosauromorph ancestors in the Triassic period, when the climate was harsh and dry. They faced "competition from the croc-line archosaurs for tens of millions of years, [but] finally prevailed when Pangea began to split," Brusatte told Live Science. At this time, volcanoes erupted along the cracks of the supercontinent, causing global warming and mass extinction, he said.

During the Jurassic period (201 million to 145 million years ago), dinosaurs rose to dominance and some grew to huge sizes. For example, Vouivria damparisensis, the earliest titanosaur, dates to 160 million years ago. It weighed about 33,000 lbs. (15,000 kilograms) and measured more than 50 feet (15 meters) long. Iconic dinosaurs from this period include Brontosaurus, Brachiosaurus, Diplodocus and Stegosaurus. During the Jurassic, flowering plants evolved and birds, including Archaeopteryx, first appeared. There was "a small extinction at the end of the Jurassic that we still know little about," Brusatte said.

In the Cretaceous period, dinosaur dominance continued as the continents moved farther apart. Famous dinosaurs from this period include T. rex, Triceratops, Spinosaurus and Velociraptor. The largest dinosaurs on record, including Argentinosaurus, date to the Cretaceous. The Cretaceous period ended with the Cretaceous–Paleogene (K-Pg) extinction event, when a 6-mile-wide (10 kilometers) asteroid collided with Earth, leaving an impact crater more than 110 miles (180 km) in diameter in the Yucatan Peninsula of what is now Mexico.

The impact area, known as the Chicxulub (CHEEK-sheh-loob) crater, has evidence of “shocked quartz” and small glass-like spheres known as tektites, which form when rock is rapidly vaporized and cooled — geologic clues that a space rock struck there with incredible force, Betsy Kruk, an associate paleontologist with Paleo Solutions, a paleontological consulting company based in California, previously told Live Science. Chemical analyses indicate that the sedimentary rock at Chicxulub melted and mixed together at temperatures on par with an asteroid strike about 66 million years ago, she added.

What Is The Largest Dinosaur? The Smallest Dinosaur?

Some dinosaurs were enormous, but others were pipsqueaks. The smallest dinosaur on record is an avian dinosaur that's alive today: the bee hummingbird (Mellisuga helenae) from Cuba, which measures just over 2 inches (5 centimeters) long and weighs less than 0.07 ounce (2 grams). As for extinct, non-avian dinosaurs, there are a few contenders for smallest beast, including a bat-like dinosaur from China named Ambopteryx longibrachium that measured 13 inches (32 cm) long and weighed about 11 oz (306 g), according to a 2019 study in the journal Nature.

Titanosaurs were the largest dinosaurs. However, because paleontologists rarely find an entire skeleton, and because soft tissues, such as organs and muscles, rarely fossilize, it’s challenging to determine dinosaur mass. Even so, contenders for the title of world’s largest dinosaur include Argentinosaurus, which weighed up to 110 tons (100 metric tons), an unnamed 98 million-year-old titanosaur from Argentina that weighed upward of 69 tons (63 metric tons), and Patagotitan, which also weighed in at 69 tons.

The longest dinosaur is likely Diplodocus or Mamenchisaurus — long and slender sauropod dinosaurs that were about 115 feet (35 m) long. The tallest dinosaur is likely Giraffatitan, a 40-foot-tall (12 m) sauropod dinosaur from the late Jurassic, about 150 million years ago, which lived in what is now Tanzania.

Pterosaurs Not Dinosaurs

Many amazing animals lived during the dinosaur age, and some are confused with dinosaurs. The most common misconception is calling pterosaurs dinosaurs: they are not. Pterosaurs are winged reptiles and archosaurs, meaning they are relatives of dinosaurs, but they are not dinosaurs.

The order Crocodilia includes extinct and living crocodiles and their close relatives. Crocodilians are archosaurs, but they are not dinosaurs. Living crocodilians and birds (which are dinosaurs) are the only surviving members of the Archosauria clade.

The Mesozoic oceans teemed with sea life, including predatory reptiles known as mosasaurs (such as Mosasaurus), plesiosaurs and ichthyosaurs. However, none of these reptiles are dinosaurs.

Did Dinosaurs Have Feathers?

Yes, some dinosaurs flaunted feathers, as do their bird descendants. Feathers don't fossilize well, but some remarkable fossils, especially those from Liaoning province in China that were buried in the aftermath of a volcanic eruption, have preserved feathers. Here are a few examples: Zhenyuanlong suni, Yutyrannus huali and Jianianhualong tengi.

It's unclear why dinosaurs first evolved feathers, but they could have been used for the following: as insulation to keep dinosaurs and their incubated eggs warm; for display to use for communication between dinosaurs, such as courtship displays; and for gliding or powered flight, Michael Habib, a research associate at the Dinosaur Institute at the Natural History Museum of Los Angeles County, previously told Live Science.

Initially, it was thought that only theropods and their descendants sported feathers, but researchers have also found downy feathers on the plant-eating ornithischian dinosaur Kulindadromeus zabaikalicus, suggesting that feathers were more widespread than previously thought, a 2014 study in the journal Science found.

It's possible that pterosaurs had feathers, according to a 2018 study in the journal Nature Ecology & Evolution, but more feathered specimens need to be found and analyzed to say so for sure.

Notably, even T. rex had feathers. However, depictions of dinosaurs rarely have feathers in popular culture, including the "Jurassic Park" movies. Paleontologist Jack Horner, who served as a scientific adviser on some of the "Jurassic Park" movies, remembers telling director Steven Spielberg that the dinosaurs should have feathers.

"Even when 'Jurassic Park' came out [in 1993], we knew that Velociraptors should have feathers, but at that time, it would have been technically difficult to do it, just from a CG [computer-generated] point of view. And Steven wasn't really too excited about it, anyway. When I told him they should be colorful and they should be feathered, and he said, 'Feathered Technicolor dinosaurs aren't scary enough,'" Horner previously told Live Science.

Could Dinosaurs Fly?

Some dinosaurs could fly, including the earliest known bird — Archaeopteryx — discovered in Germany and dating to about 150 million years ago, during the late Jurassic.

However, unlike most birds today, extinct dinosaurs likely flew only short distances. Research shows that powerful leg muscles, big wings and a relatively small body size were needed for takeoff and flight in ancient birds and bird-like dinosaurs, Habib previously told Live Science. His research suggests that the bird-like dinosaurs Microraptor, Rahonavis, and five avian genera — Archaeopteryx, Sapeornis, Jeholornis, Eoconfuciusornis and Confuciusornis — would have been able to launch (without running) from the ground to initiate flight.

The bat-like dinosaur Yi qi, dating to China's Jurassic period, could likely glide, according to a 2015 study in the journal Nature.

Why Did Dinosaurs Go Extinct?

It's up for debate how well the dinosaurs were doing before the asteroid crashed into Earth. A handful of studies suggest that in the late Cretaceous, dinosaur extinctions were rising and diversity was declining, especially among herbivorous dinosaurs. But these studies rely on incomplete fossil data and models that may not tell the whole story, Live Science previously reported.

Even if dinosaur diversity was dropping, it's possible they could have bounced back had the asteroid not hit, Brusatte told Live Science. Dinosaurs lived on every continent, including Antarctica, and they filled different rungs in various ecosystems, from plant-eater to apex carnivore. "Dinosaurs had experienced many rises and falls in diversity over their 150-plus million year evolutionary history," he said. If the mass extinction hadn't happened, it's possible "They would still be thriving today as more than birds."

In the aftermath of the asteroid collision, long-term pain followed chaos. The collision caused massive destruction, including a shockwave, heat pulse, wildfires, tsunamis (including an immediate mile-high tsunami), volcanic eruptions, lethal acid rain and earthquakes. Dust and grime that the asteroid kicked up hovered in the air. "This rain of hot dust raised global temperatures for hours after the impact, and cooked alive animals that were too large to seek shelter," according to Kruk. "Small animals that could shelter underground, underwater, or perhaps in caves or large tree trunks, may have been able to survive this initial heat blast."

The dust and particles remained in the air, blocking the sun for several years afterward and causing a nuclear winter that cooled the planet and led to the deaths of countless plants and animals, Brusatte and Kruk said.

"Smaller, omnivorous terrestrial animals, like mammals, lizards, turtles, or birds, may have been able to survive as scavengers feeding on the carcasses of dead dinosaurs, fungi, roots and decaying plant matter, while smaller animals with lower metabolisms were best able to wait the disaster out," Kruk previously said. Moreover, the asteroid also pulverized carbon-rich rocks, which released carbon into the atmosphere and led to "global warming for a few thousand years," after the nuclear winter ended, Brusatte said.

Scientists used to wonder if the Deccan Traps volcanic eruptions in what is now India played a role in the mass extinction. But recent studies "show that the Deccan probably had very little impact," Brusatte said. It was "most likely an innocent bystander" — the asteroid is what caused the extinction.

Can Dinosaurs Be Brought Back?

In the popular movie franchise "Jurassic Park," scientists find dinosaur DNA preserved in an ancient mosquito caught in amber, and then fill in the DNA gaps with frog DNA. It's an entertaining plot, but the science is far from sound. For instance, amber does not preserve DNA well, and frogs are not at all closely related to dinosaurs; they're not archosaurs, and a 2017 study in the journal Proceedings of the National Academy of Sciences even found that frog evolution took off after the asteroid impact.

For myriad reasons, it's currently impossible to bring extinct dinosaurs back. While dinosaur proteins and blood vessels have been found, scientists have yet to rigorously identify DNA from an extinct dinosaur. DNA begins decaying the moment an organism dies, but parts of it can be preserved in the right circumstances. That said, the oldest sequenced DNA on record belongs to a roughly 1 million-year-old mammoth, and dinosaurs went extinct about 66 million years ago.

Some scientists are studying how to reverse-engineer birds into dinosaurs, including the so-called "dino-chicken," which would have a lengthened tail, teeth, arms and fingers. One group even gave chicken embryos dinosaur-like snouts. However, the "chickenosaurus" wouldn't be a replica of an ancient dinosaur, but rather a dinosaur-like bird, the researchers told Live Science.




#1174 2021-10-30 03:11:03


Re: Miscellany

1151) Lung

Lung, in air-breathing vertebrates, either of the two large organs of respiration located in the chest cavity and responsible for adding oxygen to and removing carbon dioxide from the blood. In humans each lung is encased in a thin membranous sac called the pleura, and each is connected with the trachea (windpipe) by its main bronchus (large air passageway) and with the heart by the pulmonary arteries. The lungs are soft, light, spongy, elastic organs that normally, after birth, always contain some air. If healthy, they will float in water and crackle when squeezed; diseased lungs sink.

On the inner side of each lung, about two-thirds of the distance from its base to its apex, is the hilum, the point at which the bronchi, pulmonary arteries and veins, lymphatic vessels, and nerves enter the lung. The main bronchus subdivides many times after entering the lung; the resulting system of tubules resembles an inverted tree. The diameters of the bronchi diminish eventually to less than 1 mm (0.04 inch). The branches 3 mm and less in diameter are known as bronchioles, which lead to minute air sacs called alveoli (see pulmonary alveolus), where the actual gas molecules of oxygen and carbon dioxide are exchanged between the respiratory spaces and the blood capillaries.

Each lung is divided into lobes separated from one another by a tissue fissure. The right lung has three major lobes; the left lung, which is slightly smaller because of the asymmetrical placement of the heart, has two lobes. Internally, each lobe further subdivides into hundreds of lobules. Each lobule contains a bronchiole and affiliated branches, a thin wall, and clusters of alveoli.

In addition to respiratory activities, the lungs perform other bodily functions. Through them, water, alcohol, and pharmacologic agents can be absorbed and excreted. Normally, almost a quart of water is exhaled daily; anesthetic gases such as ether and nitrous oxide can be absorbed and removed by the lungs. The lung is also a true metabolic organ. It is involved in the synthesis, storage, transformation, and degradation of a variety of substances, including pulmonary surfactant, fibrin, and other functionally diverse molecules (e.g., histamine, angiotensin, and prostaglandins).

A person not engaged in vigorous physical activity uses only about one-twentieth of the total available gaseous-exchange surface of the lung. Pressure inside the lungs is equal to that of the surrounding atmosphere. The lungs always remain somewhat inflated because of a partial vacuum between the membrane covering the lung and that which lines the chest. Air is drawn into the lungs when the diaphragm (the muscular portion between the abdomen and the chest) and the intercostal muscles contract, expanding the chest cavity and lowering the pressure between the lungs and chest wall as well as within the lungs. This drop in pressure inside the lungs draws air in from the atmosphere.

The lungs are frequently involved in infections and injuries. Some infections can destroy vast areas of a lung, rendering it useless. Inflammation from toxic substances, such as tobacco smoke, asbestos, and environmental dusts, can also produce significant damage to the lung. Healed lung tissue becomes a fibrous scar unable to perform respiratory duties. There is no functional evidence that lung tissue, once destroyed, can be regenerated.

Lungs are sacs of tissue located just below the rib cage and above the diaphragm. They are an important part of the respiratory system and of the body's removal of waste gases.


A person's lungs are not the same size. The right lung is a little wider than the left lung, but it is also shorter. According to York University, the right lung is shorter because it has to make room for the liver, which is right beneath it. The left lung is narrower because it must make room for the heart.

Typically, a man's lungs can hold more air than a woman's. At rest, a man's lungs can hold around 750 cubic centimeters (about 1.5 pints) of air, while a woman's can hold around 285 to 393 cc (0.6 to 0.8 pints) of air, according to York University. "The lungs are over-engineered to accomplish the job that we ask them to do," said Dr. Jonathan P. Parsons, a professor of internal medicine, associate director of Clinical Services, and director of the Division of Pulmonary, Allergy, Critical Care and Sleep Medicine at the OSU Asthma Center at The Ohio State University. "In healthy people without chronic lung disease, even at maximum exercise intensity, we only use 70 percent of the possible lung capacity."


According to the American Lung Association, adults typically take 15 to 20 breaths a minute, which comes to around 20,000 breaths a day. Babies tend to breathe faster than adults. For example, a newborn's normal breathing rate is about 40 breaths per minute, while the average resting respiratory rate for adults is 12 to 16 breaths per minute.
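As a quick sanity check, the daily figure follows from scaling the per-minute resting rate. A minimal Python sketch, using the 12-to-16-breaths-per-minute adult resting range cited above:

```python
# Scale resting respiratory rates (breaths per minute) up to breaths per day.
MINUTES_PER_DAY = 24 * 60  # 1,440 minutes

low_rate, high_rate = 12, 16  # adult resting range cited above
breaths_low = low_rate * MINUTES_PER_DAY
breaths_high = high_rate * MINUTES_PER_DAY

print(breaths_low, breaths_high)  # 17280 23040, bracketing the ~20,000/day figure
```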

Though breathing seems simple, it is a very complex process.

The right lung is divided into three different sections, called lobes. The left lung has just two lobes. The lobes are made of sponge-like tissue surrounded by a membrane called the pleura, which separates the lungs from the chest wall. Each lung has its own pleural sac. This is why, when one lung is punctured, the other can go on working.

The lungs are like bellows. When they expand, they pull air into the body. When they compress, they expel carbon dioxide, a waste gas that bodies produce. Lungs do not have muscles to pump air in and out, though. The diaphragm and rib cage essentially pump the lungs.

As a person breathes, air travels down the throat and into the trachea, also known as the windpipe. The trachea divides into smaller passages called the bronchial tubes. The bronchial tubes enter each lung and branch out into smaller subdivisions throughout each side of the lung. The smallest branches are called bronchioles, and each bronchiole ends in tiny air sacs called alveoli. There are around 480 million alveoli in the human lungs, according to the Department of Anatomy of the University of Göttingen.

The walls of the alveoli are lined with many tiny blood vessels called capillaries. Oxygen passes through the alveoli into the capillaries and into the blood. It is carried to the heart and then pumped throughout the body to the tissues and organs.

As oxygen is going into the bloodstream, carbon dioxide passes from the blood into the alveoli and then makes its journey out of the body. This process is called gas exchange. When a person breathes shallowly, carbon dioxide accumulates inside the body. This accumulation causes yawning, according to York University.

The lungs have a special way to protect themselves. Cilia, which look like a coating of very small hairs, line the bronchial tubes. The cilia wave back and forth, moving mucus up into the throat so that it can be expelled by the body. Mucus cleans out the lungs and rids them of dust, germs and any other unwanted items that end up in the lungs.

Diseases & conditions

The lungs can have a wide range of problems that can stem from genetics, bad habits, an unhealthy diet and viruses. "The most common lung related conditions I see are reactive airways or asthma, as well as smoking-related emphysema, in my general practice," Dr. Jack Jacoub, a medical oncologist and director of thoracic oncology at Memorial Care Cancer Institute at Orange Coast Memorial Medical Center in Fountain Valley, California, told Live Science.

Asthma, also called reactive airway disease before a diagnosis of asthma, is a lung disease in which the air passageways in the lungs become inflamed and narrowed, making it hard to breathe. In the United States, more than 25 million people, including 7 million children, have asthma, according to the National Heart, Lung, and Blood Institute.

Lung cancer is cancer that originates in the lungs. It is the No. 1 cause of deaths from cancer in the United States for both men and women, according to the Mayo Clinic. Symptoms of cancer include coughing up blood, a cough that doesn't go away, shortness of breath, wheezing, chest pain, headaches, hoarseness, weight loss and bone pain.

Chronic obstructive pulmonary disease (COPD) is long-term lung disease that prevents a person from breathing properly due to excess mucus or the degeneration of the lungs. Chronic bronchitis and emphysema are considered COPD diseases. About 11.4 million people in the United States suffer from COPD, with about 80 to 90 percent of COPD deaths attributed to smoking, according to the American Cancer Society.

Sometimes, those with COPD get lung transplants, replacement lungs obtained from organ donors, to save their lives. Research is also being done on growing new lungs from stem cells. Currently, stem cells extracted from the patient's blood or bone marrow are being used as a treatment to heal damaged lung tissue.

Lung infections, such as bronchitis or pneumonia, are usually caused by viruses, but can also be caused by fungal organisms or bacteria, according to Ohio State University. Some severe or chronic lung infections can cause fluid in the lungs and other symptoms such as swollen lymph nodes, coughing up blood and a persistent fever.

Being overweight can also affect the lungs. "Yes, being overweight does adversely affect the lungs because it increases the work and energy expenditure to breathe," said Jacoub. "In the most extreme form, it acts like a constricting process or vest around the chest, such as that seen in the 'Pickwickian syndrome.'"

Promoting good lung health

One of the best ways to promote good lung health is to avoid cigarette smoke, because at least 70 of the 7,000 chemicals in cigarette smoke damage the cells within the lungs. According to the Mayo Clinic, people who smoke have the greatest risk of lung cancer, and the more a person smokes, the greater the risk. Those who smoke are 15 to 30 times more likely to get lung cancer, according to the Centers for Disease Control and Prevention. If a person quits, their lungs can heal from much of the damage, said Dr. Norman Edelman, a senior scientific adviser for the American Lung Association and a specialist in pulmonary medicine.

The Rush University Medical Center also suggests practicing deep breathing exercises, staying hydrated and regular exercise to keep the lungs healthy. Parsons also recommends having homes tested for radon. "Radon is a naturally occurring radioactive gas produced by the breakdown of uranium in the ground. It typically leaks into a house through cracks in the foundation and walls. Radon is the main cause of lung cancer in nonsmokers, and the second-leading cause of the disease after smoking," said Parsons.


It is no good to try to stop knowledge from going forward. Ignorance is never better than knowledge - Enrico Fermi. 

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.


#1175 2021-10-31 00:09:15

Registered: 2005-06-28
Posts: 34,916

Re: Miscellany

1152) Calorimeter

Calorimeter, device for measuring the heat developed during a mechanical, electrical, or chemical reaction, and for calculating the heat capacity of materials.

Calorimeters have been designed in great variety. One type in widespread use, called a bomb calorimeter, basically consists of an enclosure in which the reaction takes place, surrounded by a liquid, such as water, that absorbs the heat of the reaction and thus increases in temperature. Measurement of this temperature rise, together with knowledge of the weight and heat characteristics of the container and liquid, permits the total amount of heat generated to be calculated.

The design of a typical bomb calorimeter is shown in the Figure. The material to be analyzed is deposited inside a steel reaction vessel called a bomb. The steel bomb is placed inside a bucket filled with water, which is kept at a constant temperature relative to the entire calorimeter by use of a heater and a stirrer. The temperature of the water is monitored with a thermometer fitted with a magnifying eyepiece, which allows accurate readings to be taken. Heat losses are minimized by inserting an air space between the bucket and an exterior insulating jacket. Slots at the top of the steel bomb allow ignition wires and an oxygen supply to enter the vessel, both of which are critical in starting the chemical reaction. When an electric current passes through the ignition coil, a combustion reaction occurs. The heat released from the sample is largely absorbed by the water, which results in an increase in temperature. Bomb calorimeters have been developed to the point that heats of combustion of organic materials can be measured with results reproducible within 0.01 percent.
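As a sketch of the arithmetic behind that temperature measurement, the heat released can be computed as q = (m·c_water + C_cal)·ΔT, where C_cal lumps together the heat characteristics of the bomb, bucket, and fittings. The numeric values below are hypothetical, chosen only to illustrate the calculation:

```python
# Bomb-calorimeter heat balance (illustrative values, not from the text):
#   q = (m_water * c_water + C_cal) * delta_T
c_water = 4.184    # J/(g*K), specific heat of water
m_water = 2000.0   # g of water in the bucket (hypothetical)
C_cal = 850.0      # J/K, combined heat capacity of bomb and bucket (hypothetical)
delta_T = 1.75     # K, observed temperature rise (hypothetical)

q = (m_water * c_water + C_cal) * delta_T
print(f"heat released: {q:.1f} J")  # prints "heat released: 16131.5 J"
```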

If you’re looking to outfit your lab with a variety of analytical instruments, a calorimeter may be something you add to your list.  A calorimeter is a device used for calorimetry, or measuring heat capacity or the heat of physical changes or chemical reactions. In pharmaceuticals, they are used in drug design. In the chemical industry, they are used for quality control, and in biological studies, they are used for metabolic rate examination.

What Does A Calorimeter Do?

A calorimeter measures the change in heat. Simple calorimeters are made with a metal container of water positioned above a combustion chamber. A thermometer is used to measure the temperature change of the water. The simplest versions of the device can be made at home using two nested coffee cups or Styrofoam cups, though these are not as accurate as lab equipment. There are, however, several other types that are much more complex.

The temperature of a liquid changes when it loses or gains energy. The calorimeter uses the mass of the liquid along with the temperature change to determine the amount of energy change.

It is different from a thermal analysis in that thermal analyzers measure properties of a material at various temperatures.

How Does It Work?

Calorimeters are made with two vessels – an outer and an inner. The air between the two serves as a heat insulator, so there is little to no heat exchange between the contents of the inner vessel and the outside environment. Lab calorimeters use a fiber ring made with insulating material to keep the inner vessel in the center of the outer vessel. There is a thermometer to measure the temperature of the liquid in the inner container, and a stirrer to stir the liquid to distribute the heat throughout the container.

If there is an exothermic reaction, one that releases thermal energy through heat or light, in the solution in the calorimeter, the temperature rises. If there is an endothermic reaction, one that absorbs thermal energy from the surroundings, heat is lost and the temperature decreases. If the two liquids do not transfer energy between them, the substances are in thermal equilibrium. The difference in temperature, along with the mass and specific heat of the solution, makes it possible to determine how much heat the reaction uses.

The temperature change is used to calculate the enthalpy change per mole of substance A when substances A and B are reacted. The formula is:

q = Cv(Tf – Ti)

q is the amount of heat in joules.

Cv is the calorimeter’s heat capacity in joules per kelvin (J/K).

Tf is the final temperature, and Ti is the initial temperature.
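Expressed as code, that formula is a one-liner. A minimal Python sketch, with hypothetical values for the heat capacity and temperatures (made up for illustration, not from the text):

```python
def calorimeter_heat(Cv, T_initial, T_final):
    """q = Cv * (Tf - Ti): heat in joules, given the calorimeter's
    heat capacity Cv (J/K) and temperatures in kelvins."""
    return Cv * (T_final - T_initial)

# Hypothetical run: Cv = 10.2 J/K, temperature rising from 298.15 K to 301.40 K.
q = calorimeter_heat(10.2, 298.15, 301.40)
print(round(q, 2))  # 33.15 (joules absorbed; a negative q would mean heat lost)
```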

Types of Calorimeters Used in the Lab

Adiabatic: Adiabatic calorimeters are used to study runaway reactions. Despite the name, some heat is always lost with this type in practice, so a correction factor is applied to compensate for that heat loss.

Reaction: With this type, chemical processes occur within a closed, insulated container. The heat flow vs. time is measured to determine the reaction heat. It’s used to find the maximum heat a reaction releases, or for reactions that need to run at a constant temperature.

Heat flow: With this type, a heating/cooling jacket controls the temperature of the physical process or the temperature of the jacket. The heat of the reaction is determined by measuring the temperature difference between the heat transfer fluid and the process fluid. It is necessary to know the fill volumes, specific heat, and heat transfer coefficient before a correct answer can be found.

Heat balance: With this type of calorimeter, the heating/cooling jacket controls the temperature of the process. Heat is measured by monitoring the heat that’s gained or lost by the transfer fluid.

Power compensation: This uses a heater added to the vessel so it maintains a constant temperature. The energy for the heater can be adjusted as the reaction requires. The calorimetry signal comes from electrical power.

Constant flux: This comes from heat balance calorimetry, but has a specialized control to maintain a constant flow of heat across the container wall.

Bomb Calorimeter: Also known as a constant-volume calorimeter, this is built to withstand the pressure that builds up as a result of a reaction as air heats in the reaction vessel. The change in water temperature is used to calculate the heat of combustion.

Calvet-type: This type uses a 3D fluxmeter sensor made of a series of thermocouple rings. It is well suited to larger sample sizes because it allows for a larger reaction container without affecting measurement accuracy.

Constant-pressure: The coffee cup calorimeter you can make at home is an example of a constant-pressure calorimeter. It measures the thermodynamic change in a solution, under constant pressure.

Differential Scanning: With DSC, there are typically two pans: a sample pan and a reference pan. The sample pan contains the sample, while the reference pan remains empty. Each pan is heated separately at a specific rate, which is maintained throughout the experiment; a computer system ensures that both pans heat up at the same rate so that a measurement can be taken. The heater underneath the sample pan has to work harder than the one under the empty reference pan, meaning it puts out more heat. The difference in heat output is how the measurement is made.

Isothermal titration: In this type, the heat of reaction is used to follow a titration experiment. It’s possible to determine the midpoint of the reaction, its enthalpy, and its binding affinity. It’s helpful in the pharmaceutical industry for classifying potential drug candidates.
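For the constant-pressure (coffee-cup) case in the list above, the working formula is q = m·c·ΔT applied to the solution itself. A minimal Python sketch; the mass, specific heat, and temperature change below are hypothetical, chosen only for illustration:

```python
def solution_heat(mass_g, specific_heat, delta_T):
    """q = m * c * delta_T: heat in joules for a solution of given
    mass (g), specific heat (J/(g*K)), and temperature change (K)."""
    return mass_g * specific_heat * delta_T

# Hypothetical: 100 g of dilute aqueous solution (specific heat taken
# as water's, 4.184 J/(g*K)) warming by 3.0 K during a reaction.
q = solution_heat(100.0, 4.184, 3.0)
print(round(q, 1))  # 1255.2 (joules absorbed by the solution)
```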


It is no good to try to stop knowledge from going forward. Ignorance is never better than knowledge - Enrico Fermi. 

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

