Math Is Fun Forum

  Discussion about math, puzzles, games and fun.   Useful symbols: ÷ × ½ √ ∞ ≠ ≤ ≥ ≈ ⇒ ± ∈ Δ θ ∴ ∑ ∫ • π ƒ -¹ ² ³ °

You are not logged in.

#2326 2024-09-29 00:10:34

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,096

Re: Miscellany

2326) Sand clock

Gist

An hourglass (or sandglass, sand timer, or sand clock) is a device used to measure the passage of time. It comprises two glass bulbs connected vertically by a narrow neck that allows a regulated flow of a substance (historically sand) from the upper bulb to the lower one due to gravity.

A sand clock works on the principle that all the sand from the upper chamber falls into the lower chamber in a fixed amount of time.

Which country invented sand clock?

Often referred to as the 'sand clock', the hourglass is not just a pretty ancient ornament tucked away on a modern shelf; it was actually used as a timekeeping device. Its invention is often credited to an 8th-century French monk called Liutprand, though the earliest firm evidence for the hourglass dates only from the 14th century.


Summary

The hourglass is an early device for measuring intervals of time. It is also known as a sandglass, or a log glass when used in conjunction with the common log for ascertaining the speed of a ship. It consists of two pear-shaped bulbs of glass, united at their apexes and having a minute passage formed between them. A quantity of sand (or occasionally mercury) is enclosed in the bulbs, and the size of the passage is so proportioned that this medium will completely run through from one bulb to the other in the time it is desired to measure, e.g., an hour or a minute. Instruments of this kind, which have no great pretensions to accuracy, were formerly common in churches.

Details

An hourglass (or sandglass, sand timer, or sand clock) is a device used to measure the passage of time. It comprises two glass bulbs connected vertically by a narrow neck that allows a regulated flow of a substance (historically sand) from the upper bulb to the lower one due to gravity. Typically, the upper and lower bulbs are symmetric so that the hourglass will measure the same duration regardless of orientation. The specific duration of time a given hourglass measures is determined by factors including the quantity and coarseness of the particulate matter, the bulb size, and the neck width.

Depictions of an hourglass as a symbol of the passage of time are found in art, especially on tombstones or other monuments, from antiquity to the present day. The form of a winged hourglass has been used as a literal depiction of the Latin phrase tempus fugit ("time flies").

History:

Antiquity

The origin of the hourglass is unclear. Its predecessor the clepsydra, or water clock, is known to have existed in Babylon and Egypt as early as the 16th century BCE.

Middle Ages

There are no records of the hourglass existing in Europe prior to the Late Middle Ages; the first documented example dates from the 14th century, a depiction in the 1338 fresco Allegory of Good Government by Ambrogio Lorenzetti.

Use of the marine sandglass has been recorded since the 14th century. The written records about it come mostly from the logbooks of European ships; in the same period it appears in other records and lists of ships' stores. The earliest recorded reference that can be said with certainty to refer to a marine sandglass dates from c. 1345, in a receipt of Thomas de Stetesham, clerk of the King's ship La George, in the reign of Edward III of England; translated from the Latin, the receipt says:

The same Thomas accounts to have paid at Lescluse, in Flanders, for twelve glass horologes (" pro xii. orlogiis vitreis "), price of each 4½ gross', in sterling 9s. Item, For four horologes of the same sort (" de eadem secta "), bought there, price of each five gross', making in sterling 3s. 4d.

Marine sandglasses were popular aboard ships, as they were the most dependable measurement of time while at sea. Unlike the clepsydra, hourglasses using granular materials were not affected by the motion of a ship and less affected by temperature changes (which could cause condensation inside a clepsydra). While hourglasses were insufficiently accurate to be compared against solar noon for the determination of a ship's longitude (as an error of just four minutes would correspond to one degree of longitude), they were sufficiently accurate to be used in conjunction with a chip log to enable the measurement of a ship's speed in knots.
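
The four-minutes-per-degree figure is simple arithmetic: the Earth turns through 360 degrees of longitude in one 24-hour day. A quick check (illustrative Python, not from the source):

```python
# Earth rotates through 360 degrees of longitude in one 24-hour day,
# so each degree corresponds to a fixed amount of clock time at local noon.
MINUTES_PER_DAY = 24 * 60      # 1440 minutes per full rotation
DEGREES_PER_ROTATION = 360

minutes_per_degree = MINUTES_PER_DAY / DEGREES_PER_ROTATION
print(minutes_per_degree)      # 4.0: a four-minute timing error shifts
                               # the computed longitude by one full degree
```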

The hourglass also found popularity on land as an inexpensive alternative to mechanical clocks. Hourglasses were commonly seen in use in churches, homes, and workplaces to measure sermons, cooking time, and time spent on breaks from labor. Because they were being used for more everyday tasks, hourglass models began to shrink. The smaller models were more practical and very popular, as they made timing more discreet.

After 1500, the hourglass was not as widespread as it had been, owing to the development of the mechanical clock, which became more accurate, smaller, and cheaper, and made keeping time easier. The hourglass did not disappear entirely, however. Although hourglasses became relatively less useful as clock technology advanced, they remained desirable for their design. The oldest known surviving hourglass resides in the British Museum in London.

Not until the 18th century did John Harrison come up with a marine chronometer that significantly improved on the stability of the hourglass at sea. Taking elements from the design logic behind the hourglass, he made a marine chronometer in 1761 that measured the journey from England to Jamaica to within five seconds.

Design

Little written evidence exists to explain why its external form is the shape that it is. The glass bulbs used, however, have changed in style and design over time. While the main designs have always been ampoule in shape, the bulbs were not always connected. The first hourglasses were two separate bulbs with a cord wrapped at their union that was then coated in wax to hold the piece together and let sand flow in between. It was not until 1760 that both bulbs were blown together to keep moisture out of the bulbs and regulate the pressure within the bulb that varied the flow.

Material

While some early hourglasses actually did use silica sand as the granular material to measure time, many did not use sand at all. The material used in most bulbs was "powdered marble, tin/lead oxides, [or] pulverized, burnt eggshell". Over time, different textures of granular matter were tested to see which gave the most constant flow within the bulbs. It was later discovered that, for the best flow, the diameter of the granules needed to be at least 1/12 but no more than 1/2 of the width of the bulb's neck.
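
That rule of thumb is easy to express as a check. The helper below is purely illustrative (the function name is hypothetical, not a historical formula):

```python
def neck_ratio_ok(grain_diameter, neck_diameter):
    """Check the grain-to-neck rule described above: for steady flow,
    the grain diameter should be at least 1/12, but no more than 1/2,
    of the neck width.  Illustrative helper; units just need to match."""
    ratio = grain_diameter / neck_diameter
    return 1/12 <= ratio <= 1/2

print(neck_ratio_ok(0.2, 1.5))   # 0.2 mm grains, 1.5 mm neck -> True
print(neck_ratio_ok(0.05, 1.5))  # grains far too fine -> False
```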

Practical uses

Hourglasses were an early dependable and accurate measure of time. The rate of flow of the sand is independent of the depth in the upper reservoir, and the instrument will not freeze in cold weather. From the 15th century onwards, hourglasses were being used in a range of applications at sea, in the church, in industry, and in cookery.
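
The independence of flow rate from the depth of sand above the neck is what sets granular flow apart from the water clock. In the modern granular-flow literature this behaviour is usually described by the Beverloo correlation, in which the head of material does not appear at all. A rough sketch, with typical empirical constants assumed for illustration:

```python
import math

def beverloo_rate(bulk_density, neck_d, grain_d, C=0.58, k=1.5):
    """Approximate mass discharge rate (kg/s) of granular material
    through a circular orifice, per the Beverloo correlation.
    C and k are empirical constants; the values here are typical
    textbook figures, assumed for illustration.  Note that the depth
    of sand above the neck appears nowhere: the rate is independent
    of it, unlike the pressure-driven flow of a water clock."""
    g = 9.81  # m/s^2
    return C * bulk_density * math.sqrt(g) * (neck_d - k * grain_d) ** 2.5

# A toy timer: 50 g of 0.2 mm grains running through a 1.5 mm neck.
rate = beverloo_rate(bulk_density=1500.0, neck_d=1.5e-3, grain_d=2.0e-4)
duration_s = 0.050 / rate  # total mass / discharge rate
print(f"runs for about {duration_s:.0f} seconds")
```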

During Ferdinand Magellan's voyage around the globe, authorized by King Charles I of Spain, 18 hourglasses from Barcelona were in the ship's inventory. It was the job of a ship's page to turn the hourglasses and thus provide the times for the ship's log. Noon was the reference time for navigation, which did not depend on the glass, as the sun would be at its zenith. A number of sandglasses could be fixed in a common frame, each with a different operating time, as in a four-way Italian sandglass, likely from the 17th century, in the collections of the Science Museum in South Kensington, London, which could measure intervals of a quarter, half, three-quarters, and one hour. Such sets were used in churches, for priests and ministers to measure the lengths of sermons.

Modern practical uses

While hourglasses are no longer widely used for keeping time, some institutions do maintain them. Both houses of the Australian Parliament use three hourglasses to time certain procedures, such as divisions.

Sand timers are sometimes included with boardgames such as Pictionary and Boggle that place time constraints on rounds of play.

Symbolic uses

Unlike most other methods of measuring time, the hourglass concretely represents the present as being between the past and the future, and this has made it an enduring symbol of time as a concept.

The hourglass, sometimes with the addition of metaphorical wings, is often used as a symbol that human existence is fleeting, and that the "sands of time" will run out for every human life. It was used thus on pirate flags, to evoke fear through imagery associated with death. In England, hourglasses were sometimes placed in coffins, and they have graced gravestones for centuries. The hourglass was also used in alchemy as a symbol for hour.

The former Metropolitan Borough of Greenwich in London used an hourglass on its coat of arms, symbolising Greenwich's role as the origin of Greenwich Mean Time (GMT). The district's successor, the Royal Borough of Greenwich, uses two hourglasses on its coat of arms.

Modern symbolic uses

Recognition of the hourglass as a symbol of time has survived its obsolescence as a timekeeper. For example, the American television soap opera Days of Our Lives (1965–present) displays an hourglass in its opening credits, with narration by Macdonald Carey: "Like sands through the hourglass, so are the days of our lives."

Various computer graphical user interfaces may change the pointer to an hourglass while the program is in the middle of a task, and may not accept user input. During that period of time, other programs, such as those open in other windows, may work normally. When such an hourglass does not disappear, it suggests a program is in an infinite loop and needs to be terminated, or is waiting for some external event (such as the user inserting a CD).

Unicode has an HOURGLASS symbol at U+231B.
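
The character itself, and its companion HOURGLASS WITH FLOWING SAND at U+23F3, can be produced directly:

```python
# U+231B is the Unicode HOURGLASS character.
hourglass = "\u231B"
print(hourglass, hex(ord(hourglass)))   # ⌛ 0x231b

# U+23F3 is HOURGLASS WITH FLOWING SAND, often used for "in progress":
print("\u23F3")                         # ⏳
```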

In the 21st century, the Extinction symbol came into use as a symbol of the Holocene extinction and climate crisis. The symbol features an hourglass to represent time "running out" for extinct and endangered species, and also to represent time "running out" for climate change mitigation.

Hourglass motif

Because of its symmetry, graphic signs resembling an hourglass are seen in the art of cultures which never encountered such objects. Vertical pairs of triangles joined at the apex are common in Native American art; both in North America, where it can represent, for example, the body of the Thunderbird or (in more elongated form) an enemy scalp, and in South America, where it is believed to represent a Chuncho jungle dweller. In Zulu textiles they symbolise a married man, as opposed to a pair of triangles joined at the base, which symbolise a married woman. Neolithic examples can be seen among Spanish cave paintings. Observers have even given the name "hourglass motif" to shapes which have more complex symmetry, such as a repeating circle and cross pattern from the Solomon Islands. Both the members of Project Tic Toc, from the television series The Time Tunnel, and the Challengers of the Unknown use hourglass symbols representing either time travel or time running out.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2327 2024-09-30 00:02:05

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,096

Re: Miscellany

2327) Asteroid

Gist

Asteroids are small, rocky objects that orbit the Sun. Although asteroids orbit the Sun like planets, they are much smaller than planets.

The C-types ("carbonaceous") probably consist of clay and silicate rocks, and are dark in appearance; they are among the most ancient objects in the solar system. The S-types ("stony") are made up of silicate materials and nickel-iron. The M-types are metallic (nickel-iron).

Most asteroids can be found orbiting the Sun between Mars and Jupiter within the main asteroid belt. Asteroids range in size from Vesta – the largest at about 329 miles (530 kilometers) in diameter – to bodies that are less than 33 feet (10 meters) across.

Summary

Asteroids come in a variety of shapes and sizes and teach us about the formation of the solar system.

Asteroids are the rocky remnants of material leftover from the formation of the solar system and its planets approximately 4.6 billion years ago.

The majority of asteroids originate from the main asteroid belt located between Mars and Jupiter, according to NASA. NASA's current asteroid count is over 1 million.

Asteroids orbit the sun on highly flattened, or "elliptical", paths, often rotating erratically, tumbling and falling through space.

Many large asteroids have one or more small companion moons. An example of this is Didymos, a half-mile (780 meters) wide asteroid that is orbited by the moonlet Dimorphos which measures just 525 feet (160 m) across.

Asteroids are also often referred to as "minor planets" and can range in size from the largest known example, Vesta, which has a diameter of around 326 miles (525 kilometers), to bodies that are less than 33 feet (10 meters) across.

Vesta recently snatched the "largest asteroid title" from Ceres, which NASA now classifies as a dwarf planet. Ceres is the largest object in the main asteroid belt while Vesta is the second largest.

As well as coming in a range of sizes, asteroids come in a variety of shapes from near spheres to irregular double-lobed peanut-shaped asteroids like Itokawa. Most asteroid surfaces are pitted with impact craters from collisions with other space rocks.

Though a majority of asteroids lurk in the asteroid belt, NASA says, the massive gravitational influence of Jupiter, the solar system's largest planet, can send them hurtling through space in random directions, including into the inner solar system and thus towards Earth. But don't worry, NASA's Planetary Defense Coordination Office is keeping a watchful eye on near-Earth objects (NEOs), including asteroids, to assess the impact hazard and aid the U.S. government in planning for a response to a possible impact threat.

What is an asteroid?

Using NASA definitions, an asteroid is "A relatively small, inactive, rocky body orbiting the sun," while a comet is a "relatively small, at times active, object whose ices can vaporize in sunlight forming an atmosphere (coma) of dust and gas and, sometimes, a tail of dust and/or gas."

Additionally, a meteorite is a "meteoroid that survives its passage through the Earth's atmosphere and lands upon the Earth's surface" and a meteor is defined as a "light phenomenon which results when a meteoroid enters the Earth's atmosphere and vaporizes; a shooting star."

What are asteroids made of?

Before the formation of the planets of the solar system, the infant sun was surrounded by a disk of dust and gas, called a protoplanetary disk. While most of this disk collapsed to form the planets, some material was left over.

"Simply put, asteroids are leftover rocky material from the time of the solar system's formation. They are the initial bricks that built the planets," Fred Jourdan, a planetary scientist at Curtin University, told Space.com in an email. "So all the material that formed all those asteroids is about 4.55 billion years old."

In the chaotic conditions of the early solar system, this material repeatedly clashed together, with small grains clustering to form small rocks, which clustered to form larger rocks and eventually planetesimals, bodies that don't grow large enough to form planets. Further collisions shattered apart these planetesimals, with these fragments and rocks forming the asteroids we see today.

"All that happened 4.5 billion years ago but the solar system has remained a very dynamic place since then," Jourdan added. "During the next few billions of years until the present, some asteroids smashed into each other and destroyed each other, and the debris recombined and formed what we call rubble pile asteroids."

This means asteroids can also differ by how solid they are. Some asteroids are one solid monolithic body, while others like Bennu are essentially floating rubble piles, made of smaller bodies loosely bound together gravitationally.

"I would say there are three types of asteroids. The first one is the monolith chondritic asteroid, so that's the real brick of the solar system," Jourdan explained. "These asteroids remained relatively unchanged since their formation. Some of them are rich in silicates, and some of them are rich in carbon with different tales to tell."

The second type is the differentiated asteroids, which for a while behaved like tiny planets, forming a metallic core, a mantle, and a volcanic crust. Jourdan said these asteroids would look layered like an egg if cut from the side, with the best example of this being Vesta, which he calls his "favorite asteroid."

"The last type is the rubble pile asteroids, so it's just when asteroids smashed into each other and the fragments that are ejected reassemble together," Jourdan continued. "These asteroids are made of boulders, rocks, pebbles, dust, and a lot of void spacing, which makes them so resistant to any further impacts. In that regard, rubble piles are a bit like giant space cushions."

How often do asteroids hit Earth?

Asteroids large enough to cause damage on the ground hit Earth about once per century. At the extremely small end, desk-sized asteroids hit Earth about once a month, but they just produce bright fireballs as they burn up in the atmosphere. As you go to larger and larger asteroids, the impacts are increasingly infrequent.

What's the difference between asteroids, meteorites and comets?

Asteroids are the rocky/dusty small bodies orbiting the sun. Meteorites are pieces on the ground left over after an asteroid breaks up in the atmosphere. Most asteroids are not strong, and when they disintegrate in the atmosphere they often produce a shower of meteorites on the ground.

Comets are also small bodies orbiting the sun, but they also contain ices that produce a gas and dust atmosphere and tail when they get near the sun and heat up.

Details

An asteroid is a minor planet (an object that is neither a true planet nor an identified comet) that orbits within the inner Solar System. Asteroids are rocky, metallic, or icy bodies with no atmosphere, classified as C-type (carbonaceous), M-type (metallic), or S-type (silicaceous). The size and shape of asteroids vary significantly, ranging from small rubble piles under a kilometer across (though larger than meteoroids) to Ceres, a dwarf planet almost 1000 km in diameter. A body is classified as a comet, not an asteroid, if it shows a coma (tail) when warmed by solar radiation, although recent observations suggest a continuum between these types of bodies.

Of the roughly one million known asteroids, the greatest number are located between the orbits of Mars and Jupiter, approximately 2 to 4 AU from the Sun, in a region known as the main asteroid belt. The total mass of all the asteroids combined is only 3% that of Earth's Moon. The majority of main belt asteroids follow slightly elliptical, stable orbits, revolving in the same direction as the Earth and taking from three to six years to complete a full circuit of the Sun.
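
Those three-to-six-year periods follow directly from Kepler's third law, T² = a³, with T in years and a in astronomical units. A minimal check:

```python
def orbital_period_years(a_au):
    """Kepler's third law for a body orbiting the Sun:
    T^2 = a^3, with T in years and a in astronomical units."""
    return a_au ** 1.5

# The bulk of the main belt lies at roughly 2.1-3.3 AU, which
# reproduces the three-to-six-year periods quoted above:
for a in (2.1, 2.7, 3.3):
    print(f"a = {a} AU -> T = {orbital_period_years(a):.1f} yr")
```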

Asteroids have historically been observed from Earth. The first close-up observation of an asteroid was made by the Galileo spacecraft. Several dedicated missions to asteroids were subsequently launched by NASA and JAXA, with plans for other missions in progress. NASA's NEAR Shoemaker studied Eros, and Dawn observed Vesta and Ceres. JAXA's missions Hayabusa and Hayabusa2 studied and returned samples of Itokawa and Ryugu, respectively. OSIRIS-REx studied Bennu, collecting a sample in 2020 which was delivered back to Earth in 2023. NASA's Lucy, launched in 2021, is tasked with studying ten different asteroids, two from the main belt and eight Jupiter trojans. Psyche, launched October 2023, aims to study the metallic asteroid Psyche.

Near-Earth asteroids have the potential for catastrophic consequences if they strike Earth, with a notable example being the Chicxulub impact, widely thought to have induced the Cretaceous–Paleogene mass extinction. As an experiment to meet this danger, in September 2022 the Double Asteroid Redirection Test spacecraft successfully altered the orbit of the non-threatening asteroid Dimorphos by crashing into it.

Terminology

In 2006, the International Astronomical Union (IAU) introduced the currently preferred broad term small Solar System body, defined as an object in the Solar System that is neither a planet, a dwarf planet, nor a natural satellite; this includes asteroids, comets, and more recently discovered classes. According to the IAU, "the term 'minor planet' may still be used, but generally, 'Small Solar System Body' will be preferred."

The first discovered asteroid, Ceres, was at first considered a new planet. It was followed by the discovery of other similar bodies, which with the equipment of the time appeared to be points of light like stars, showing little or no planetary disc, though readily distinguishable from stars due to their apparent motions. This prompted the astronomer Sir William Herschel to propose the term asteroid, coined from the Greek asteroeidēs, meaning 'star-like, star-shaped', and derived from the Ancient Greek astēr 'star, planet'. In the early second half of the 19th century, the terms asteroid and planet (not always qualified as "minor") were still used interchangeably.

Traditionally, small bodies orbiting the Sun were classified as comets, asteroids, or meteoroids, with anything smaller than one meter across being called a meteoroid. The term asteroid has never been officially defined, but it can be informally used to mean "an irregularly shaped rocky body orbiting the Sun that does not qualify as a planet or a dwarf planet under the IAU definitions". The main difference between an asteroid and a comet is that a comet shows a coma (tail) due to sublimation of its near-surface ices by solar radiation. A few objects were first classified as minor planets but later showed evidence of cometary activity. Conversely, some (perhaps all) comets are eventually depleted of their surface volatile ices and become asteroid-like. A further distinction is that comets typically have more eccentric orbits than most asteroids; highly eccentric asteroids are probably dormant or extinct comets.

The minor planets beyond Jupiter's orbit are sometimes also called "asteroids", especially in popular presentations. However, it is becoming increasingly common for the term asteroid to be restricted to minor planets of the inner Solar System. Therefore, this article will restrict itself for the most part to the classical asteroids: objects of the asteroid belt, Jupiter trojans, and near-Earth objects.

For almost two centuries after the discovery of Ceres in 1801, all known asteroids spent most of their time at or within the orbit of Jupiter, though a few, such as 944 Hidalgo, ventured farther for part of their orbit. Starting in 1977 with 2060 Chiron, astronomers discovered small bodies that permanently resided further out than Jupiter, now called centaurs. In 1992, 15760 Albion was discovered, the first object beyond the orbit of Neptune (other than Pluto); soon large numbers of similar objects were observed, now called trans-Neptunian objects. Further out are Kuiper-belt objects, scattered-disc objects, and the much more distant Oort cloud, hypothesized to be the main reservoir of dormant comets. They inhabit the cold outer reaches of the Solar System where ices remain solid and comet-like bodies exhibit little cometary activity; if centaurs or trans-Neptunian objects were to venture close to the Sun, their volatile ices would sublimate, and traditional approaches would classify them as comets.

The Kuiper-belt bodies are called "objects" partly to avoid the need to classify them as asteroids or comets. They are thought to be predominantly comet-like in composition, though some may be more akin to asteroids. Most do not have the highly eccentric orbits associated with comets, and the ones so far discovered are larger than traditional comet nuclei. Other recent observations, such as the analysis of the cometary dust collected by the Stardust probe, are increasingly blurring the distinction between comets and asteroids, suggesting "a continuum between asteroids and comets" rather than a sharp dividing line.

In 2006, the IAU created the class of dwarf planets for the largest minor planets—those massive enough to have become ellipsoidal under their own gravity. Only the largest object in the asteroid belt has been placed in this category: Ceres, at about 975 km (606 mi) across.

Additional Information

Asteroids, sometimes called minor planets, are rocky, airless remnants left over from the early formation of our solar system about 4.6 billion years ago.

The total mass of all the asteroids combined is less than that of Earth's Moon.

During the 18th century, astronomers were fascinated by a mathematical expression called Bode's law. It appeared to predict the locations of the known planets, but with one exception...

Bode's law suggested there should be a planet between Mars and Jupiter. When Sir William Herschel discovered Uranus, the seventh planet, in 1781, at a distance that corresponded to Bode's law, scientific excitement about the validity of this mathematical expression reached an all-time high. Many scientists were absolutely convinced that a planet must exist between Mars and Jupiter.
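
Bode's law (the Titius-Bode relation) gives the predicted semi-major axis as a = 0.4 + 0.3 * 2^n AU, with a special first term for Mercury; the missing-planet slot is n = 3. A short sketch (the function name is illustrative):

```python
def titius_bode(n):
    """Titius-Bode distance in AU: a = 0.4 + 0.3 * 2**n.
    Mercury is the special first term (pass n=None, i.e. the
    2**n term is taken as zero), then n = 0, 1, 2, ..."""
    return 0.4 if n is None else 0.4 + 0.3 * 2 ** n

slots = [(None, "Mercury"), (0, "Venus"), (1, "Earth"), (2, "Mars"),
         (3, "(gap)"), (4, "Jupiter"), (5, "Saturn"), (6, "Uranus")]
for n, name in slots:
    print(f"{name:8s} {titius_bode(n):5.1f} AU")
# The n = 3 slot predicts a body at 2.8 AU, between Mars and Jupiter;
# Ceres was later found orbiting at about 2.77 AU.
```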

By the end of the century, a group of astronomers had banded together to use the observatory at Lilienthal, Germany, owned by Johann Hieronymous Schröter, to hunt down this missing planet. They called themselves, appropriately, the 'Celestial Police'.

Despite their efforts, they were beaten by Giuseppe Piazzi, who discovered what he believed to be the missing planet on New Year's Day, 1801, from the Palermo Observatory.

The new body was named Ceres, but subsequent observations swiftly established that it could not be classed as a major planet, as its diameter is just 940 kilometres (Pluto, by comparison, has a diameter of just over 2,300 kilometres). Instead, it was classified as a 'minor planet' and the search for the 'real' planet continued.

Between 1801 and 1808, astronomers tracked down a further three minor planets within this region of space: Pallas, Juno and Vesta, each smaller than Ceres. It became obvious that there was no single large planet out there and enthusiasm for the search waned.

A fifth asteroid, Astraea, was discovered in 1845 and interest in the asteroids as a new 'class' of celestial object began to build. In fact, since that time new asteroids have been discovered almost every year.

It soon became obvious that a 'belt' of asteroids existed between Mars and Jupiter. This collection of space debris was the 'missing planet'. It was almost certainly prevented from forming by the large gravitational field of adjacent Jupiter.

Now there are a number of telescopes dedicated to the task of finding new asteroids. Specifically, these instruments are geared towards finding any asteroids that cross the Earth's orbit and may therefore pose an impact hazard.




#2328 2024-10-01 00:04:25

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,096

Re: Miscellany

2328) Table

Gist

A table is an item of furniture with a raised flat top and is supported most commonly by 1 to 4 legs (although some can have more). It is used as a surface for working at, eating from or on which to place things.

Summary

Table, basic article of furniture, known and used in the Western world since at least the 7th century BCE, consisting of a flat slab of stone, metal, wood, or glass supported by trestles, legs, or a pillar.

Egyptian tables were made of wood, Assyrian of metal, and Grecian usually of bronze. Roman tables took on quite elaborate forms, the legs carved in the shapes of animals, sphinxes, or grotesque figures. Cedar and other exotic woods with a decorative grain were employed for the tops, and the tripod legs were made of bronze or other metals.

Early medieval tables were of a fairly basic type, but there were certain notable exceptions; Charlemagne, for example, possessed two tables of silver and one of gold, probably constructed of wood covered with thin sheets of metal. With the growing formality of life in the feudal period, tables took on a greater social significance. Although small tables were used in private apartments, in the great hall of a feudal castle the necessity of feeding a host of retainers stimulated the development of an arrangement whereby the master and his guests sat at a rectangular table on a dais surmounted by a canopy, while the rest of the household sat at tables placed at right angles to this one.

One of the few surviving examples of a large (and much restored) round table dating from the 15th century is at Winchester Castle in Hampshire, England. For the most part, circular tables were intended for occasional uses. The most common type of large medieval dining table was of trestle construction, consisting of massive boards of oak or elm resting on a series of central supports to which they were affixed by pegs, which could be removed and the table dismantled. Tables with attached legs, joined by heavy stretchers fixed close to the floor, appeared in the 15th century. They were of fixed size and heavy to move, but in the 16th century an ingenious device known as a draw top made it possible to double the length of the table. The top was composed of three leaves, two of which could be placed under the third and extended on runners when required. Such tables were usually made of oak or elm but sometimes of walnut or cherry. The basic principle involved is still applied to some extending tables.

Growing technical sophistication meant that from the middle of the 16th century onward tables began to reflect far more closely than before the general design tendencies of their period and social context. The typical Elizabethan draw table, for instance, was supported on four vase-shaped legs terminating in Ionic capitals, reflecting perfectly the boisterous decorative atmosphere of the age. The despotic monarchies that yearned after the splendours of Louis XIV’s Versailles promoted a fashion for tables of conspicuous opulence. Often made in Italy, these tables, which were common between the late 17th and mid-18th century, were sometimes inlaid with elaborate patterns of marquetry or rare marbles; others, such as that presented by the City of London to Charles II on his restoration as king of England, were entirely covered in silver or were made of ebony with silver mountings.

Increasing contact with the East in the 18th century stimulated a taste for lacquered tables for occasional use. Indeed, the pattern of development in the history of the table that became apparent in this century was that, whereas the large dining table showed few stylistic changes, growing sophistication of taste and higher standards of living led to an increasing degree of specialization in occasional-table design. A whole range of particular functions was now being catered to, a tendency that persisted until at least the beginning of the 20th century. Social customs such as tea-drinking fueled the development of these specialized forms. The exploitation of man-made materials in the second half of the 20th century produced tables of such materials as plastic, metal, fibreglass, and even corrugated cardboard.

Details

A table is an item of furniture with a raised flat top, most commonly supported by one to four legs (though some have more). It is used as a surface for working at, eating from, or placing things on. Common types include the dining room table, used for seated meals; the coffee table, a low table used in living rooms to display items or serve refreshments; and the bedside table, commonly used to hold an alarm clock and a lamp. There is also a range of specialized tables, such as drafting tables, used for architectural drawings, and sewing tables.

Common design elements include:

* Top surfaces of various shapes, including rectangular, square, rounded, semi-circular or oval
* Legs arranged in two or more similar pairs, most commonly four in total; some tables have three legs, rest on a single heavy pedestal, or are attached to a wall
* Several geometries of folding table that can be collapsed into a smaller volume (e.g., a TV tray, which is a portable, folding table on a stand)
* Heights ranging up and down from the most common 18–30 inch (46–76 cm) range, often reflecting the height of the chairs or bar stools used as seating for eating or working at the table
* A huge range of sizes, from small bedside tables to large dining room tables and huge conference room tables
* Presence or absence of drawers, shelves or other areas for storing items
* Expansion of the table surface by insertion of leaves or locking hinged drop leaf sections into a horizontal position (this is particularly common for dining tables)

Etymology

The word table is derived from Old English tabele, derived from the Latin word tabula ('a board, plank, flat top piece'), which replaced the Old English bord; its current spelling reflects the influence of the French table.

History

Some very early tables were made and used by the Ancient Egyptians around 2500 BC, using wood and alabaster. They were often little more than stone platforms used to keep objects off the floor, though a few examples of wooden tables have been found in tombs. Food and drinks were usually put on large plates placed on a pedestal for eating. The Egyptians made use of various small tables and elevated playing boards. The Chinese also created very early tables in order to pursue the arts of writing and painting, as did people in Mesopotamia, where various metals were used.

The Greeks and Romans made more frequent use of tables, notably for eating, although Greek tables were pushed under a bed after use. The Greeks invented a piece of furniture very similar to the guéridon. Tables were made of marble or wood and metal (typically bronze or silver alloys), sometimes with richly ornate legs. Later, the larger rectangular tables were made of separate platforms and pillars. The Romans also introduced a large, semicircular table to Italy, the mensa lunata. Plutarch mentions use of "tables" by Persians.

Furniture during the Middle Ages is not as well known as that of earlier or later periods, and most sources show the types used by the nobility. In the Eastern Roman Empire, tables were made of metal or wood, usually with four feet and frequently linked by x-shaped stretchers. Tables for eating were large and often round or semicircular. A combination of a small round table and a lectern seems to have been popular as a writing table.

In western Europe, although there was a variety of forms (circular, semicircular, oval, and oblong were all in use), tables appear to have been portable and supported upon trestles, fixed or folding, which were cleared out of the way at the end of a meal. Thus Charlemagne possessed three tables of silver and one of gold, probably made of wood and covered with plates of the precious metals. The custom of serving dinner at several small tables, which is often supposed to be a modern refinement, was followed in the French châteaux, and probably also in the English castles, as early as the 13th century.

Refectory tables first appeared at least as early as the 17th century, as an advancement of the trestle table; these tables were typically quite long and wide and capable of supporting a sizeable banquet in the great hall or other reception room of a castle.

Shape, height, and function

Tables come in a wide variety of materials, shapes, and heights dependent upon their origin, style, intended use and cost. Many tables are made of wood or wood-based products; some are made of other materials including metal and glass. Most tables are composed of a flat surface and one or more supports (legs). A table with a single, central foot is a pedestal table. Long tables often have extra legs for support.

Table tops can be in virtually any shape, although rectangular, square, round (e.g. the round table), and oval tops are the most frequent. Some tables have higher surfaces for personal use while either standing or sitting on a tall stool.

Many tables have tops that can be adjusted to change their height, position, shape, or size, with folding, sliding, or extension parts that can alter the shape of the top. Some tables are entirely foldable for easy transportation (e.g. for camping) or storage (e.g. TV trays). Small tables in trains and aircraft may be fixed or foldable, although they are sometimes considered simply convenient shelves rather than tables.

Tables can be freestanding or designed for placement against a wall. Tables designed to be placed against a wall are known as pier tables or console tables (French: console, "support bracket") and may be bracket-mounted (traditionally), like a shelf, or have legs, which sometimes imitate the look of a bracket-mounted table.

Types

Tables of various shapes, heights, and sizes are designed for specific uses:

* Dining room tables are designed to be used for formal dining.
* Bedside tables, nightstands, or night tables are small tables used in a bedroom. They are often used for convenient placement of a small lamp, alarm clock, glasses, or other personal items.
* Drop-leaf tables have a fixed section in the middle and a hinged section (leaf) on either side that can be folded down.
* Gateleg tables have one or two hinged leaves supported by hinged legs.
* Coffee tables are low tables designed for use in a living room, in front of a sofa, for convenient placement of drinks, books, or other personal items.
* Refectory tables are long tables designed to seat many people for meals.
* Drafting tables usually have a top that can be tilted for making a large or technical drawing. They may also have a ruler or similar element integrated.
* Workbenches are sturdy tables, often elevated for use with a high stool or while standing, which are used for assembly, repairs, or other precision handwork.
* Nested tables are a set of small tables of graduated size that can be stacked together, each fitting within the one immediately larger. They are for occasional use (such as a tea party), hence the stackable design.

Specialized types

Historically, various types of tables have become popular for specific uses:

* Loo tables were very popular in the 18th and 19th centuries as candlestands, tea tables, or small dining tables, although they were originally made for the popular card game loo or lanterloo. Their typically round or oval tops have a tilting mechanism, which enables them to be stored out of the way (e.g. in room corners) when not in use. A further development in this direction was the "birdcage" table, the top of which could both revolve and tilt.

* Pembroke tables, first introduced during the 18th century, were popular throughout the 19th century. Their main characteristic was a rectangular or oval top with folding or drop leaves on each side. Most examples have one or more drawers and four legs, sometimes connected by stretchers. Their design meant they could easily be stored or moved about and conveniently opened for serving tea, dining, writing, or other occasional uses. One account attributes the design of the Pembroke table to Henry Herbert, 9th Earl of Pembroke (1693-1751).

* Sofa tables are similar to Pembroke tables and usually have longer and narrower tops. They were specifically designed for placement directly in front of sofas for serving tea, writing, dining, or other convenient uses. Generally speaking, a sofa table is a tall, narrow table used behind a sofa to hold lamps or decorative objects.

* Work tables were small tables designed to hold sewing materials and implements, providing a convenient work place for women who sewed. They appeared during the 18th century and were popular throughout the 19th century. Most examples have rectangular tops, sometimes with folding leaves, and usually one or more drawers fitted with partitions. Early examples typically have four legs, often standing on casters, while later examples sometimes have turned columns or other forms of support.

* Drum tables are round tables introduced for writing, with drawers around the platform.

* End tables are small tables typically placed beside couches or armchairs. Often lamps will be placed on an end table.

* Overbed tables are narrow rectangular tables whose top is designed for use above the bed, especially for hospital patients.

* Billiards tables are bounded tables on which billiards-type games are played. All provide a flat surface, usually composed of slate and covered with cloth, elevated above the ground.

* Chess tables are a type of games table that integrates a chessboard.

* Table tennis tables are usually made of Masonite or a similar wood, layered with a smooth low-friction coating. They are divided into two halves by a low net, which separates opposing players.

* Poker tables or card tables are used to play poker or other card games.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.


#2329 2024-10-02 00:05:44

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,096

Re: Miscellany

2329) Calculus (Medicine)

Gist

A calculus ( pl. : calculi), often called a stone, is a concretion of material, usually mineral salts, that forms in an organ or duct of the body.

Calculus, renal: A stone in the kidney (or lower down in the urinary tract). Also called a kidney stone. The stones themselves are called renal calculi. The word "calculus" (plural: calculi) is the Latin word for pebble. Renal stones are a common cause of blood in the urine and pain in the abdomen, flank, or groin.

Treating a staghorn calculus usually means surgery. The entire stone, even small pieces, must be removed so they can't lead to infection or the formation of new stones. One way to remove staghorn stones is with a percutaneous nephrolithotomy (PCNL).

A urologist can remove the kidney stone or break it into small pieces with the following treatments: Shock wave lithotripsy. The doctor can use shock wave lithotripsy link to blast the kidney stone into small pieces. The smaller pieces of the kidney stone then pass through your urinary tract.

What is a calculi in the kidneys?

Kidney stones (also called renal calculi, nephrolithiasis or urolithiasis) are hard deposits made of minerals and salts that form inside your kidneys. Diet, excess body weight, some medical conditions, and certain supplements and medications are among the many causes of kidney stones.

Summary

Kidney stone is a common clinical problem faced by clinicians. The prevalence of the disease is increasing worldwide. As the affected population is getting younger and recurrence rates are high, dietary modifications, lifestyle changes, and medical management are essential. Patients with recurrent stone disease need careful evaluation for underlying metabolic disorder. Medical management should be used judiciously in all patients with kidney stones, with appropriate individualization. This chapter focuses on medical management of kidney stones.

Urinary tract stones (urolithiasis) have been known to mankind since antiquity. Kidney stone is not a true diagnosis; rather, it suggests a broad variety of underlying diseases. Kidney stones are mainly composed of calcium salts, uric acid, cystine, and struvite. Calcium oxalate and calcium phosphate are the most common types, accounting for >80% of stones, followed by uric acid (8–10%), with cystine and struvite stones making up the remainder.

The incidence of urolithiasis is increasing globally, with geographic, racial, and gender variation in its occurrence. An epidemiological study (1979) of Western populations found an incidence of urolithiasis of 124 per 100,000 in males and 36 per 100,000 in females. The lifetime risk of urolithiasis is higher in the Middle East (20–25%) and Western countries (10–15%) and lower in African and Asian populations. Stone disease carries a high risk of recurrence after the initial episode, of around 50% at 5 years and 70% at 9 years.

Positive family history of stone disease, young age at onset, recurrent urinary tract infections (UTIs), and underlying diseases like renal tubular acidosis (RTA) and hyperparathyroidism are the major risk factors for recurrence. High incidence and recurrence rate add enormous cost and loss of work days.

Though the pathogenesis of stone disease is not fully understood, systematic metabolic evaluation, medical treatment of underlying conditions, and patient-specific modification in diet and lifestyle are effective in reducing the incidence and recurrence of stone disease.

Details

A calculus (pl.: calculi), often called a stone, is a concretion of material, usually mineral salts, that forms in an organ or duct of the body. Formation of calculi is known as lithiasis. Stones can cause a number of medical conditions.

Some common principles (below) apply to stones at any location, but for specifics see the particular stone type in question.

Calculi are not to be confused with gastroliths, which are ingested rather than grown endogenously.

Types

* Calculi in the inner ear are called otoliths
* Calculi in the urinary system are called urinary calculi and include kidney stones (also called renal calculi or nephroliths) and bladder stones (also called vesical calculi or cystoliths). They can have any of several compositions, including mixed. Principal compositions include oxalate and urate.
* Calculi of the gallbladder and bile ducts are called gallstones and are primarily developed from bile salts and cholesterol derivatives.
* Calculi in the nasal passages (rhinoliths) are rare.
* Calculi in the gastrointestinal tract (enteroliths) can be enormous. Individual enteroliths weighing many pounds have been reported in horses.
* Calculi in the stomach are called gastric calculi (not to be confused with gastroliths which are exogenous in nature).
* Calculi in the salivary glands are called salivary calculi (sialoliths).
* Calculi in the tonsils are called tonsillar calculi (tonsilloliths).
* Calculi in the veins are called venous calculi (phleboliths).
* Calculi in the skin, such as in sweat glands, are not common but occasionally occur.
* Calculi in the navel are called omphaloliths.

Calculi are usually asymptomatic, and large calculi may have required many years to grow to their large size.

Cause

* From an underlying abnormal excess of the mineral, e.g., elevated levels of calcium (hypercalcaemia), which may cause kidney stones, or dietary factors in the case of gallstones.
* Local conditions at the site in question that promote their formation, e.g., local bacteria action (in kidney stones) or slower fluid flow rates, a possible explanation of the majority of salivary duct calculus occurring in the submandibular salivary gland.
* Enteroliths are a type of calculus found in the intestines of animals (mostly ruminants) and humans, and may be composed of inorganic or organic constituents.
* Bezoars are lumps of indigestible material in the stomach and/or intestines; most commonly, they consist of hair (in which case they are also known as hairballs). A bezoar may form the nidus of an enterolith.
* In kidney stones, calcium oxalate is the most common mineral type (see nephrolithiasis). Uric acid is the second most common mineral type, but an in vitro study showed uric acid stones and crystals can promote the formation of calcium oxalate stones.

Pathophysiology

Stones can cause disease by several mechanisms:

* Irritation of nearby tissues, causing pain, swelling, and inflammation
* Obstruction of an opening or duct, interfering with normal flow and disrupting the function of the organ in question
* Predisposition to infection (often due to disruption of normal flow)

A number of important medical conditions are caused by stones:

* Nephrolithiasis (kidney stones)
** Can cause hydronephrosis (swollen kidneys) and kidney failure
** Can predispose to pyelonephritis (kidney infections)
** Can progress to urolithiasis
* Urolithiasis (urinary bladder stones)
** Can progress to bladder outlet obstruction
* Cholelithiasis (gallstones)
** Can predispose to cholecystitis (gall bladder infections) and ascending cholangitis (biliary tree infection)
** Can progress to choledocholithiasis (gallstones in the bile duct) and gallstone pancreatitis (inflammation of the pancreas)
* Gastric calculi can cause colic, obstruction, torsion, and necrosis.

Diagnosis

Diagnostic workup varies by the stone type, but in general:

* Clinical history and physical examination
* Imaging studies:
** Some stone types (mainly those with substantial calcium content) can be detected on X-ray and CT scan
** Many stone types can be detected by ultrasound

Factors contributing to stone formation (see Cause above) are often tested:
* Laboratory testing can give levels of relevant substances in blood or urine
* Some stones can be directly recovered (at surgery, or when they leave the body spontaneously) and sent to a laboratory for analysis of content

Treatment

Modification of predisposing factors can sometimes slow or reverse stone formation. Treatment varies by stone type, but, in general:

* Healthy diet and exercise
* Drinking fluids, including water and sources of dilute acids such as lemon juice or diluted vinegar (e.g. in pickles, salad dressings, sauces, soups, or shrub drinks)
* Surgery (lithotomy)
* Medication / antibiotics
* Extracorporeal shock wave lithotripsy (ESWL) for removal of calculi

History

The earliest operation for removing stones is described in the Sushruta Samhita (6th century BCE). The operation involved exposure of the bladder and entry through its floor.

The care of this disease was forbidden to physicians who had taken the Hippocratic Oath because:

* There was a high probability of intraoperative and postoperative complications such as infection or bleeding
* Physicians did not perform surgery, since in ancient cultures medicine and surgery were two different professions

Etymology

The word comes from Latin calculus "small stone", from calx "limestone, lime", probably related to Greek chalix "small stone, pebble, rubble", which many trace to a Proto-Indo-European language root for "split, break up". Calculus was a term used for various kinds of stones. In the 18th century it came to be used for accidental or incidental mineral buildups in human and animal bodies, like kidney stones and minerals on teeth.

Additional Information

If your doctor suspects that you have a kidney stone, you may have diagnostic tests and procedures, such as:

* Blood testing. Blood tests may reveal too much calcium or uric acid in your blood. Blood test results help monitor the health of your kidneys and may lead your doctor to check for other medical conditions.
* Urine testing. The 24-hour urine collection test may show that you're excreting too many stone-forming minerals or too few stone-preventing substances. For this test, your doctor may request that you perform two urine collections over two consecutive days.
* Imaging. Imaging tests may show kidney stones in your urinary tract. High-speed or dual energy computerized tomography (CT) may reveal even tiny stones. Simple abdominal X-rays are used less frequently because this kind of imaging test can miss small kidney stones.

Ultrasound, a noninvasive test that is quick and easy to perform, is another imaging option to diagnose kidney stones.

* Analysis of passed stones. You may be asked to urinate through a strainer to catch stones that you pass. Lab analysis will reveal the makeup of your kidney stones. Your doctor uses this information to determine what's causing your kidney stones and to form a plan to prevent more kidney stones.

[Image: right renal pelvic cystine stone]



#2330 2024-10-02 20:04:33

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,096

Re: Miscellany

2330) Cafeteria

Gist

A cafeteria is a restaurant in which the customers serve themselves or are served at a counter but carry their own food to their tables.

A cafeteria is a self-service restaurant in a large shop or workplace.

It is a restaurant, especially one for staff or workers, where people collect their meals themselves and carry them to their tables.

Summary

A cafeteria is a self-service restaurant in which customers select various dishes from an open-counter display. The food is usually placed on a tray, paid for at a cashier’s station, and carried to a dining table by the customer. The modern cafeteria, designed to facilitate a smooth flow of patrons, is particularly well adapted to the needs of institutions—schools, hospitals, corporations—attempting to serve large numbers of people efficiently and inexpensively. In addition to providing quick service, the cafeteria requires fewer service personnel than most other commercial eating establishments.

Early versions of self-service restaurants began to appear in the late 19th century in the United States. In 1891 the Young Women’s Christian Association (YWCA) of Kansas City, Missouri, established what some food-industry historians consider the first cafeteria. This institution, founded to provide low-cost meals for working women, was patterned after a Chicago luncheon club for women where some aspects of self-service were already in practice. Cafeterias catering to the public opened in several U.S. cities in the 1890s, but cafeteria service did not become widespread until shortly after the turn of the century, when it became the accepted method of providing food for employees of factories and other large businesses.

Details

A cafeteria, sometimes called a canteen outside the U.S. and Canada, is a type of food service location in which there is little or no waiting staff table service, whether in a restaurant or within an institution such as a large office building or school; a school dining location is also referred to as a dining hall or lunchroom (in American English). Cafeterias are different from coffeehouses, although the English term came from the Spanish term cafetería, which carries the same meaning.

Instead of table service, there are food-serving counters/stalls or booths, either in a line or allowing arbitrary walking paths. Customers take the food that they desire as they walk along, placing it on a tray. In addition, there are often stations where customers order food, particularly items such as hamburgers or tacos which must be served hot and can be immediately prepared with little waiting. Alternatively, the patron is given a number and the item is brought to their table. For some food items and drinks, such as sodas, water, or the like, customers collect an empty container, pay at check-out, and fill the container after check-out. Free unlimited second servings are often allowed under this system. For legal purposes (and the consumption patterns of customers), this system is rarely, if at all, used for alcoholic drinks in the United States.

Customers are either charged a flat rate for admission (as in a buffet) or pay at check-out for each item. Some self-service cafeterias charge by the weight of items on a patron's plate. In universities and colleges, some students pay for three meals a day by making a single large payment for the entire semester.
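The three pricing schemes described above (a flat admission rate, per-item payment at check-out, and charging by plate weight) can be sketched as simple billing functions. The following Python is purely illustrative; the function names, prices, and rates are hypothetical and not taken from any actual cafeteria.

```python
# Illustrative sketch of the three cafeteria pricing schemes.
# All prices and rates below are made-up example values.

FLAT_RATE = 12.00        # buffet-style: one admission price per guest
PRICE_PER_100G = 1.50    # pay-by-weight rate per 100 grams

def flat_rate_bill(num_guests: int) -> float:
    """Buffet model: a flat admission charge per guest."""
    return num_guests * FLAT_RATE

def per_item_bill(items: dict[str, float]) -> float:
    """Pay-at-check-out model: sum the price of each item on the tray."""
    return sum(items.values())

def by_weight_bill(plate_grams: float) -> float:
    """Pay-by-weight model: charge in proportion to the plate's weight."""
    return round(plate_grams / 100 * PRICE_PER_100G, 2)

tray = {"soup": 2.50, "entree": 6.00, "soda": 1.25}
print(per_item_bill(tray))    # 9.75
print(by_weight_bill(430))    # 6.45
print(flat_rate_bill(2))      # 24.0
```

The same tray can yield different bills under different schemes, which is why weigh-and-pay cafeterias place the scale at the cashier rather than at each station.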

As cafeterias require few employees, they are often found within a larger institution, catering to the employees or clientele of that institution. For example, schools, colleges and their residence halls, department stores, hospitals, museums, places of worship, amusement parks, military bases, prisons, factories, and office buildings often have cafeterias. Although some of such institutions self-operate their cafeterias, many outsource their cafeterias to a food service management company or lease space to independent businesses to operate food service facilities. The three largest food service management companies servicing institutions are Aramark, Compass Group, and Sodexo.

At one time, upscale cafeteria-style restaurants dominated the culture of the Southern United States, and to a lesser extent the Midwest. There were numerous prominent chains of them: Bickford's, Morrison's Cafeteria, Piccadilly Cafeteria, S&W Cafeteria, Apple House, Luby's, K&W, Britling, Wyatt's Cafeteria, and Blue Boar among them. Currently, two Midwestern chains still exist, Sloppy Jo's Lunchroom and Manny's, which are both located in Illinois. There were also several smaller chains, usually located in and around a single city. These institutions, except K&W, went into a decline in the 1960s with the rise of fast food and were largely finished off in the 1980s by the rise of all-you-can-eat buffets and other casual dining establishments. A few chains—particularly Luby's and Piccadilly Cafeterias (which took over the Morrison's chain in 1998)—continue to fill some of the gap left by the decline of the older chains. Some of the smaller Midwestern chains, such as MCL Cafeterias centered in Indianapolis, are still in business.

History

Perhaps the first self-service restaurant (not necessarily a cafeteria) in the U.S. was the Exchange Buffet in New York City, which opened September 4, 1885, and catered to an exclusively male clientele. Food was purchased at a counter and patrons ate standing up. This represents the predecessor of two formats: the cafeteria, described below, and the automat.

During the 1893 World's Columbian Exposition in Chicago, entrepreneur John Kruger built an American version of the smörgåsbords he had seen while traveling in Sweden. Emphasizing simplicity and light fare, he called it the 'Cafeteria', Spanish for 'coffee shop'. The exposition attracted over 27 million visitors (half the U.S. population at the time) in six months, and it was because of Kruger's operation that the United States first heard the term and experienced the self-service dining format.

Meanwhile, the chain of Childs Restaurants quickly grew from about 10 locations in New York City in 1890 to hundreds across the U.S. and Canada by 1920. Childs is credited with the innovation of adding trays and a "tray line" to the self-service format, introduced in 1898 at their 130 Broadway location. Childs did not change its format of sit-down dining, however. This was soon the standard design for most Childs Restaurants, and, ultimately, the dominant method for succeeding cafeterias.

It has been conjectured that the 'cafeteria craze' started in May 1905, when Helen Mosher opened a downtown L.A. restaurant where people chose their food at a long counter and carried their trays to their tables. California has a long history in the cafeteria format - notably the Boos Brothers Cafeterias, and the Clifton's and Schaber's. The earliest cafeterias in California were opened at least 12 years after Kruger's Cafeteria, and Childs already had many locations around the country. Horn & Hardart, an automat format chain (different from cafeterias), was well established in the mid-Atlantic region before 1900.

Between 1960 and 1981, the popularity of cafeterias was overtaken by fast food restaurants and fast casual restaurant formats.

Outside the United States, the development of cafeterias can be observed in France as early as 1881 with the passing of the Ferry Law. This law mandated that public school education be available to all children. Accordingly, the government also encouraged schools to provide meals for students in need, thus resulting in the conception of cafeterias or cantine (in French). According to Abramson, before the creation of cafeterias, only some students could bring home-cooked meals and be properly fed in schools.

As cafeterias in France became more popular, their use spread beyond schools and into the workforce. Thus, due to pressure from workers and eventually new labor laws, sizable businesses had to, at minimum, provide established eating areas for their workers. Support for this practice was also reinforced by the effects of World War II when the importance of national health and nutrition came under great attention.

Other names

A cafeteria in a U.S. military installation is known as a chow hall, a mess hall, a galley, a mess deck, or, more formally, a dining facility, often abbreviated to DF, whereas in common British Armed Forces parlance, it is known as a cookhouse or mess. Students in the United States often refer to cafeterias as lunchrooms, which also often serve school breakfast. Some school cafeterias in the U.S. and Canada have stages and movable seating that allow use as auditoriums. These rooms are known as cafetoriums or All Purpose Rooms. In some older facilities, a school's gymnasium is also often used as a cafeteria, with the kitchen facility being hidden behind a rolling partition outside non-meal hours. Newer rooms which also act as the school's grand entrance hall for crowd control and are used for multiple purposes are often called the commons.

Cafeterias serving university dormitories are sometimes called dining halls or dining commons. A food court is a type of cafeteria found in many shopping malls and airports featuring multiple food vendors or concessions. However, a food court could equally be styled as a type of restaurant as well, being more aligned with the public, rather than institutionalized, dining. Some institutions, especially schools, have food courts with stations offering different types of food served by the institution itself (self-operation) or a single contract management company, rather than leasing space to numerous businesses. Some monasteries, boarding schools, and older universities refer to their cafeteria as a refectory. Modern-day British cathedrals and abbeys, notably in the Church of England, often use the phrase refectory to describe a cafeteria open to the public. Historically, the refectory was generally only used by monks and priests. For example, although the original 800-year-old refectory at Gloucester Cathedral (the stage setting for dining scenes in the Harry Potter movies) is now mostly used as a choir practice area, the relatively modern 300-year-old extension, now used as a cafeteria by staff and public alike, is today referred to as the refectory.

A cafeteria located within a movie or TV studio complex is often called a commissary.

College cafeteria

In American English, a college cafeteria is a cafeteria intended for college students; in British English, it is often called the refectory. These cafeterias can be part of a residence hall or in a separate building. Many of these colleges employ their students to work in the cafeteria. The number of meals served to students varies from school to school but is normally around 21 per week. As in other cafeterias, a person takes a tray and selects the food they want, but at some campuses, instead of paying money at the counter, the student pays beforehand by purchasing a meal plan.

The method of payment for college cafeterias is commonly in the form of a meal plan, whereby the patron pays a certain amount at the start of the semester and details of the plan are stored on a computer system. Student ID cards are then used to access the meal plan. Meal plans can vary widely in their details and are often not necessary to eat at a college cafeteria. Typically, the college tracks students' plan usage by counting the number of predefined meal servings, points, dollars, or buffet dinners. The plan may give the student a certain number of any of the above per week or semester and they may or may not roll over to the next week or semester.
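The meal-plan accounting described above can be sketched as a small ledger. This is a hypothetical design; the class, its fields, and the no-rollover default are illustrative assumptions, with only the roughly-21-meals-per-week figure taken from the text.

```python
# Hypothetical sketch of a swipe-based meal plan; not any real campus system.
# Each plan tracks a weekly allowance of meal swipes and whether unused
# swipes roll over to the next week.

class MealPlan:
    def __init__(self, swipes_per_week, rolls_over=False):
        self.swipes_per_week = swipes_per_week
        self.rolls_over = rolls_over      # do unused swipes carry forward?
        self.balance = swipes_per_week

    def swipe(self):
        """Deduct one meal; reject the swipe if the balance is exhausted."""
        if self.balance <= 0:
            return False
        self.balance -= 1
        return True

    def reset_week(self):
        """Start a new week, keeping or discarding the unused balance."""
        if self.rolls_over:
            self.balance += self.swipes_per_week
        else:
            self.balance = self.swipes_per_week

plan = MealPlan(swipes_per_week=21)   # the "around 21 meals per week" case
plan.swipe()
print(plan.balance)   # 20
```

In practice the student ID card would key into a ledger of such records, and "points" or "dollars" plans would deduct variable amounts instead of one swipe per meal.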

Many schools offer several different options for using their meal plans. The main cafeteria is usually where most of the meal plan is used but smaller cafeterias, cafés, restaurants, bars, or even fast food chains located on campus, on nearby streets, or in the surrounding town or city may accept meal plans. A college cafeteria system often has a virtual monopoly on the students due to an isolated location or a requirement that residence contracts include a full meal plan.

Taiwanese cafeteria

There are many self-service bento shops in Taiwan. The shop lays out the dishes in a self-service area for customers to help themselves. After choosing, customers go to the cashier to check out; many shops have staff estimate the price by visually checking the amount of food, while some weigh the plate instead.

cafeteria-remodel.jpg


It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2331 2024-10-03 16:49:51

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,096

Re: Miscellany

2331) Antenna

Gist

An antenna is a metallic structure that captures and/or transmits radio electromagnetic waves. Antennas come in all shapes and sizes from little ones that can be found on your roof to watch TV to really big ones that capture signals from satellites millions of miles away.

An antenna is a device that is made out of a conductive, metallic material and has the purpose of transmitting and/or receiving electromagnetic waves, usually radio wave signals. The purpose of transmitting and receiving radio waves is to communicate or broadcast information at the speed of light.

Summary

An antenna is a metallic structure that captures and/or transmits radio electromagnetic waves. Antennas come in all shapes and sizes from little ones that can be found on your roof to watch TV to really big ones that capture signals from satellites millions of miles away.

The antennas that Space Communications and Navigation (SCaN) uses are a special bowl-shaped type that focuses signals at a single point, called a parabolic antenna. The bowl shape is what allows the antennas to both capture and transmit electromagnetic waves. These antennas move horizontally (measured in azimuth or hour angle) and vertically (measured in elevation or declination) in order to capture and transmit the signal.

SCaN has over 65 antennas that help capture and transmit data to and from satellites in space.

Details

In radio engineering, an antenna (American English) or aerial (British English) is an electronic device that converts an alternating electric current into radio waves (transmitting), or radio waves into an electric current (receiving). It is the interface between radio waves propagating through space and electric currents moving in metal conductors, used with a transmitter or receiver. In transmission, a radio transmitter supplies an electric current to the antenna's terminals, and the antenna radiates the energy from the current as electromagnetic waves (radio waves). In reception, an antenna intercepts some of the power of a radio wave in order to produce an electric current at its terminals, that is applied to a receiver to be amplified. Antennas are essential components of all radio equipment.

An antenna is an array of conductors (elements), electrically connected to the receiver or transmitter. Antennas can be designed to transmit and receive radio waves in all horizontal directions equally (omnidirectional antennas), or preferentially in a particular direction (directional, high-gain, or "beam" antennas). An antenna may include components not connected to the transmitter, such as parabolic reflectors, horns, or parasitic elements, which serve to direct the radio waves into a beam or other desired radiation pattern. Strong directivity and good efficiency when transmitting are hard to achieve with antennas whose dimensions are much smaller than a half wavelength.

The first antennas were built in 1888 by German physicist Heinrich Hertz in his pioneering experiments to prove the existence of electromagnetic waves predicted by the 1867 electromagnetic theory of James Clerk Maxwell. Hertz placed dipole antennas at the focal point of parabolic reflectors for both transmitting and receiving. Starting in 1895, Guglielmo Marconi began development of antennas practical for long-distance, wireless telegraphy, for which he received the 1909 Nobel Prize in physics.

Terminology

The words antenna and aerial are used interchangeably, though "aerial" is occasionally used specifically to mean an elevated horizontal wire antenna. The origin of the word antenna relative to wireless apparatus is attributed to Italian radio pioneer Guglielmo Marconi. In the summer of 1895, Marconi began testing his wireless system outdoors on his father's estate near Bologna and soon began to experiment with long wire "aerials" suspended from a pole. In Italian a tent pole is known as l'antenna centrale, and the pole with the wire was simply called l'antenna. Until then, wireless radiating transmitting and receiving elements were known simply as "terminals". Because of his prominence, Marconi's use of the word antenna spread among wireless researchers and enthusiasts, and later to the general public.

Antenna may refer broadly to an entire assembly including support structure, enclosure (if any), etc., in addition to the actual RF current-carrying components. A receiving antenna may include not only the passive metal receiving elements, but also an integrated preamplifier or mixer, especially at and above microwave frequencies.

Additional Information

antenna, component of radio, television, and radar systems that directs incoming and outgoing radio waves. Antennas are usually metal and have a wide variety of configurations, from the mastlike devices employed for radio and television broadcasting to the large parabolic reflectors used to receive satellite signals and the radio waves generated by distant astronomical objects.

The first antenna was devised by the German physicist Heinrich Hertz. During the late 1880s he carried out a landmark experiment to test the theory of the British mathematician-physicist James Clerk Maxwell that visible light is only one example of a larger class of electromagnetic effects that could pass through air (or empty space) as a succession of waves. Hertz built a transmitter for such waves consisting of two flat, square metallic plates, each attached to a rod, with the rods in turn connected to metal spheres spaced close together. An induction coil connected to the spheres caused a spark to jump across the gap, producing oscillating currents in the rods. The reception of waves at a distant point was indicated by a spark jumping across a gap in a loop of wire.

The Italian physicist Guglielmo Marconi, the principal inventor of wireless telegraphy, constructed various antennas for both sending and receiving, and he also discovered the importance of tall antenna structures in transmitting low-frequency signals. In the early antennas built by Marconi and others, operating frequencies were generally determined by antenna size and shape. In later antennas frequency was regulated by an oscillator, which generated the transmitted signal.

More powerful antennas were constructed during the 1920s by combining a number of elements in a systematic array. Metal horn antennas were devised during the subsequent decade following the development of waveguides that could direct the propagation of very high-frequency radio signals.

Over the years, many types of antennas have been developed for different purposes. An antenna may be designed specifically to transmit or to receive, although these functions may be performed by the same antenna. A transmitting antenna, in general, must be able to handle much more electrical energy than a receiving antenna. An antenna also may be designed to transmit at specific frequencies. In the United States, amplitude modulation (AM) radio broadcasting, for instance, is done at frequencies between 535 and 1,605 kilohertz (kHz); at these frequencies, a wavelength is hundreds of metres or yards long, and the size of the antenna is therefore not critical. Frequency modulation (FM) broadcasting, on the other hand, is carried out at a range from 88 to 108 megahertz (MHz). At these frequencies a typical wavelength is about 3 metres (10 feet) long, and the antenna must be adjusted more precisely to the electromagnetic wave, both in transmitting and in receiving. Antennas may consist of single lengths of wire or rods in various shapes (dipole, loop, and helical antennas), or of more elaborate arrangements of elements (linear, planar, or electronically steerable arrays). Reflectors and lens antennas use a parabolic dish to collect and focus the energy of radio waves, in much the same way that a parabolic mirror in a reflecting telescope collects light rays. Directional antennas are designed to be aimed directly at the signal source and are used in direction-finding.
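The relationship between broadcast frequency and wavelength quoted above follows from the standard formula wavelength = c / frequency. A short check of the AM and FM figures:

```python
# Free-space wavelength from frequency: lambda = c / f.
# Checks the figures in the text: AM (535-1,605 kHz) gives wavelengths of
# hundreds of metres, while FM (88-108 MHz) gives roughly 3 m.

C = 299_792_458  # speed of light in vacuum, m/s

def wavelength_m(freq_hz):
    return C / freq_hz

print(round(wavelength_m(535e3)))     # 560 m, low end of the AM band
print(round(wavelength_m(1605e3)))    # 187 m, high end of the AM band
print(round(wavelength_m(100e6), 2))  # 3.0 m, middle of the FM band
```

This is why AM antenna dimensions are not critical (the wavelength dwarfs any practical antenna) while FM antennas can be cut close to the wave's natural scale.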

More Information

An antenna or aerial is a metal device made to send or receive radio waves. Many electronic devices like radio, television, radar, wireless LAN, cell phone, and GPS need antennas to do their job. Antennas work both in air and outer space.

The word 'antenna' comes from Guglielmo Marconi's tests with wireless equipment in 1895. For the tests, he used a 2.5-metre-long pole antenna; since a tent pole was called l'antenna centrale in Italian, his antenna was simply called l'antenna. After that, the word 'antenna' became popular and took on the meaning it has today. The plural of antenna is either antennas or antennae (the U.S. and Canada tend to use antennas more than other places).

Types of antennas

Each type is made to work over a specific frequency range. An antenna's length or size usually depends on the wavelength it uses, which is inversely proportional to the frequency.

Different kinds of antenna serve different purposes. For example, the isotropic radiator is an imaginary antenna that sends signals equally in all directions. The dipole antenna is simply two wires, with one end of each wire connected to the radio and the other end standing free in space. It sends or receives signals in all directions except along the line of the wires. Some antennas are more directional: horn antennas are used where high gain is needed and the wavelength is short, while satellite television and radio telescopes mostly use dish antennas.
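As a concrete instance of sizing an antenna from its wavelength, a half-wave dipole is roughly half the free-space wavelength. The 5% shortening factor below is a common rule of thumb for end effects, not a value from the text; the figures are illustrative, not a construction guide.

```python
# Rough sizing of a half-wave dipole for a given frequency. The electrical
# half wavelength is c / (2f); a practical dipole is commonly cut a few
# percent shorter (here 5%, a typical rule of thumb) to account for end
# effects. Values are approximate sketches, not build specifications.

C = 299_792_458  # speed of light in vacuum, m/s

def dipole_length_m(freq_hz, shortening=0.95):
    return shortening * C / (2 * freq_hz)

print(round(dipole_length_m(100e6), 2))  # FM broadcast: about 1.42 m
print(round(dipole_length_m(1e6), 1))    # AM broadcast: about 142.4 m
```

The AM-band result shows why full-size dipoles are impractical at low frequencies and why tall masts or electrically shortened antennas are used instead.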

15M_Antenna.webp


It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2332 2024-10-04 16:27:11

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,096

Re: Miscellany

2332) Solar System

Gist

The Solar System is the gravitationally bound system of the Sun and the objects that orbit it. It formed about 4.6 billion years ago when a dense region of a molecular cloud collapsed, forming the Sun and a protoplanetary disc.

Summary

The solar system is the assemblage consisting of the Sun (an average star in the Milky Way Galaxy) and the bodies orbiting around it: 8 (formerly 9) planets with more than 210 known planetary satellites (moons); many asteroids, some with their own satellites; comets and other icy bodies; and vast reaches of highly tenuous gas and dust known as the interplanetary medium. The solar system is part of the "observable universe," the region of space that humans can actually or theoretically observe with the aid of technology. Unlike the observable universe, the universe is possibly infinite.

The Sun, Moon, and brightest planets were visible to the naked eyes of ancient astronomers, and their observations and calculations of the movements of these bodies gave rise to the science of astronomy. Today the amount of information on the motions, properties, and compositions of the planets and smaller bodies has grown to immense proportions, and the range of observational instruments has extended far beyond the solar system to other galaxies and the edge of the known universe. Yet the solar system and its immediate outer boundary still represent the limit of our physical reach, and they remain the core of our theoretical understanding of the cosmos as well. Earth-launched space probes and landers have gathered data on planets, moons, asteroids, and other bodies, and this data has been added to the measurements collected with telescopes and other instruments from below and above Earth’s atmosphere and to the information extracted from meteorites and from Moon rocks returned by astronauts. All this information is scrutinized in attempts to understand in detail the origin and evolution of the solar system—a goal toward which astronomers continue to make great strides.

Composition of the solar system

Located at the centre of the solar system and influencing the motion of all the other bodies through its gravitational force is the Sun, which in itself contains more than 99 percent of the mass of the system. The planets, in order of their distance outward from the Sun, are Mercury, Venus, Earth, Mars, Jupiter, Saturn, Uranus, and Neptune. Four planets—Jupiter through Neptune—have ring systems, and all but Mercury and Venus have one or more moons. Pluto had been officially listed among the planets since it was discovered in 1930 orbiting beyond Neptune, but in 1992 an icy object was discovered still farther from the Sun than Pluto. Many other such discoveries followed, including an object named Eris that appears to be at least as large as Pluto. It became apparent that Pluto was simply one of the larger members of this new group of objects, collectively known as the Kuiper belt. Accordingly, in August 2006 the International Astronomical Union (IAU), the organization charged by the scientific community with classifying astronomical objects, voted to revoke Pluto’s planetary status and place it under a new classification called dwarf planet. For a discussion of that action and of the definition of planet approved by the IAU, see planet.

Any natural solar system object other than the Sun, a planet, a dwarf planet, or a moon is called a small body; these include asteroids, meteoroids, and comets. Most of the more than one million asteroids, or minor planets, orbit between Mars and Jupiter in a nearly flat ring called the asteroid belt. The myriad fragments of asteroids and other small pieces of solid matter (smaller than a few tens of metres across) that populate interplanetary space are often termed meteoroids to distinguish them from the larger asteroidal bodies.

The solar system’s several billion comets are found mainly in two distinct reservoirs. The more-distant one, called the Oort cloud, is a spherical shell surrounding the solar system at a distance of approximately 50,000 astronomical units (AU)—more than 1,000 times the distance of Pluto’s orbit. The other reservoir, the Kuiper belt, is a thick disk-shaped zone whose main concentration extends 30–50 AU from the Sun, beyond the orbit of Neptune but including a portion of the orbit of Pluto. (One astronomical unit is the average distance from Earth to the Sun—about 150 million km [93 million miles].) Just as asteroids can be regarded as rocky debris left over from the formation of the inner planets, Pluto, its moon Charon, Eris, and the myriad other Kuiper belt objects can be seen as surviving representatives of the icy bodies that accreted to form the cores of Neptune and Uranus. As such, Pluto and Charon may also be considered to be very large comet nuclei. The Centaur objects, a population of comet nuclei having diameters as large as 200 km (125 miles), orbit the Sun between Jupiter and Neptune, probably having been gravitationally perturbed inward from the Kuiper belt. The interplanetary medium—an exceedingly tenuous plasma (ionized gas) laced with concentrations of dust particles—extends outward from the Sun to about 123 AU.
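The AU figures above translate into kilometres as follows, using the text's own approximation of 1 AU ≈ 150 million km:

```python
# Converting the distances quoted above into kilometres, using the
# article's approximation of 1 AU (mean Earth-Sun distance) = 150 million km.

AU_KM = 150e6  # km per astronomical unit (approximate)

kuiper_outer = 50 * AU_KM       # outer edge of the Kuiper belt's main zone
oort = 50_000 * AU_KM           # approximate distance of the Oort cloud
heliosphere = 123 * AU_KM       # extent of the interplanetary medium

print(kuiper_outer)   # 7.5 billion km
print(oort)           # 7.5 trillion km, 1,000 times farther out
```

The factor of 1,000 between the Kuiper belt's main zone and the Oort cloud is what makes the comparison with Pluto's orbit in the text work out.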

The solar system even contains objects from interstellar space that are just passing through. Two such interstellar objects have been observed. ‘Oumuamua had an unusual cigarlike or pancakelike shape and was possibly composed of nitrogen ice. Comet Borisov was much like the comets of the solar system but with a much higher abundance of carbon monoxide.

Details

The Solar System is the gravitationally bound system of the Sun and the objects that orbit it. It formed about 4.6 billion years ago when a dense region of a molecular cloud collapsed, forming the Sun and a protoplanetary disc. The Sun is a typical star that maintains a balanced equilibrium by the fusion of hydrogen into helium at its core, releasing this energy from its outer photosphere. Astronomers classify it as a G-type main-sequence star.

The largest objects that orbit the Sun are the eight planets. In order from the Sun, they are four terrestrial planets (Mercury, Venus, Earth and Mars); two gas giants (Jupiter and Saturn); and two ice giants (Uranus and Neptune). All terrestrial planets have solid surfaces; conversely, the giant planets do not have a definite surface, as they are mainly composed of gases and liquids. Over 99.86% of the Solar System's mass is in the Sun and nearly 90% of the remaining mass is in Jupiter and Saturn.

There is a strong consensus among astronomers that the Solar System has at least nine dwarf planets: Ceres, Orcus, Pluto, Haumea, Quaoar, Makemake, Gonggong, Eris, and Sedna. There are a vast number of small Solar System bodies, such as asteroids, comets, centaurs, meteoroids, and interplanetary dust clouds. Some of these bodies are in the asteroid belt (between Mars's and Jupiter's orbit) and the Kuiper belt (just outside Neptune's orbit). Six planets, seven dwarf planets, and other bodies have orbiting natural satellites, which are commonly called 'moons'.

The Solar System is constantly flooded by the Sun's charged particles, the solar wind, forming the heliosphere. Around 75–90 astronomical units from the Sun, the solar wind is halted, resulting in the heliopause. This is the boundary of the Solar System to interstellar space. The outermost region of the Solar System is the theorized Oort cloud, the source for long-period comets, extending to a radius of 2,000–200,000 AU. The closest star to the Solar System, Proxima Centauri, is 4.25 light-years (269,000 AU) away. Both stars belong to the Milky Way galaxy.
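The quoted figure of 4.25 light-years ≈ 269,000 AU for Proxima Centauri can be cross-checked with standard conversion constants (assumed here, as the text does not give them):

```python
# Cross-checking "4.25 light-years (269,000 AU)" for Proxima Centauri.
# Standard values assumed: 1 ly = 9.4607e12 km, 1 AU = 1.496e8 km.

LY_KM = 9.4607e12   # kilometres per light-year
AU_KM = 1.496e8     # kilometres per astronomical unit

au_per_ly = LY_KM / AU_KM     # about 63,240 AU per light-year
proxima_au = 4.25 * au_per_ly

print(round(au_per_ly))       # 63240
print(round(proxima_au))      # about 269,000, matching the text
```

The same conversion puts the heliopause (75 to 90 AU) at barely a thousandth of a percent of the distance to the nearest star.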

Formation and evolution:

Past

The Solar System formed at least 4.568 billion years ago from the gravitational collapse of a region within a large molecular cloud. This initial cloud was likely several light-years across and probably birthed several stars. As is typical of molecular clouds, this one consisted mostly of hydrogen, with some helium, and small amounts of heavier elements fused by previous generations of stars.

As the pre-solar nebula collapsed, conservation of angular momentum caused it to rotate faster. The center, where most of the mass collected, became increasingly hotter than the surroundings. As the contracting nebula spun faster, it began to flatten into a protoplanetary disc with a diameter of roughly 200 AU and a hot, dense protostar at the center. The planets formed by accretion from this disc, in which dust and gas gravitationally attracted each other, coalescing to form ever larger bodies. Hundreds of protoplanets may have existed in the early Solar System, but they either merged or were destroyed or ejected, leaving the planets, dwarf planets, and leftover minor bodies.

Due to their higher boiling points, only metals and silicates could exist in solid form in the warm inner Solar System close to the Sun (within the frost line). They would eventually form the rocky planets of Mercury, Venus, Earth, and Mars. Because these refractory materials only comprised a small fraction of the solar nebula, the terrestrial planets could not grow very large.

The giant planets (Jupiter, Saturn, Uranus, and Neptune) formed further out, beyond the frost line, the point between the orbits of Mars and Jupiter where material is cool enough for volatile icy compounds to remain solid. The ices that formed these planets were more plentiful than the metals and silicates that formed the terrestrial inner planets, allowing them to grow massive enough to capture large atmospheres of hydrogen and helium, the lightest and most abundant elements. Leftover debris that never became planets congregated in regions such as the asteroid belt, Kuiper belt, and Oort cloud.

Within 50 million years, the pressure and density of hydrogen in the center of the protostar became great enough for it to begin thermonuclear fusion. As helium accumulates at its core, the Sun is growing brighter; early in its main-sequence life its brightness was 70% of what it is today. The temperature, reaction rate, pressure, and density increased until hydrostatic equilibrium was achieved: the thermal pressure counterbalancing the force of gravity. At this point, the Sun became a main-sequence star. Solar wind from the Sun created the heliosphere and swept away the remaining gas and dust from the protoplanetary disc into interstellar space.

Following the dissipation of the protoplanetary disk, the Nice model proposes that gravitational encounters between planetesimals and the gas giants caused each to migrate into different orbits. This led to dynamical instability of the entire system, which scattered the planetesimals and ultimately placed the gas giants in their current positions. During this period, the grand tack hypothesis suggests that a final inward migration of Jupiter dispersed much of the asteroid belt, leading to the Late Heavy Bombardment of the inner planets.

Present and future

The Solar System remains in a relatively stable, slowly evolving state by following isolated, gravitationally bound orbits around the Sun. Although the Solar System has been fairly stable for billions of years, it is technically chaotic, and may eventually be disrupted. There is a small chance that another star will pass through the Solar System in the next few billion years. Although this could destabilize the system and eventually lead millions of years later to expulsion of planets, collisions of planets, or planets hitting the Sun, it would most likely leave the Solar System much as it is today.

The Sun's main-sequence phase, from beginning to end, will last about 10 billion years, compared with around two billion years for all other subsequent phases of the Sun's pre-remnant life combined. The Solar System will remain roughly as it is known today until the hydrogen in the core of the Sun has been entirely converted to helium, which will occur roughly 5 billion years from now. This will mark the end of the Sun's main-sequence life. At that time, the core of the Sun will contract with hydrogen fusion occurring along a shell surrounding the inert helium, and the energy output will be greater than at present. The outer layers of the Sun will expand to roughly 260 times its current diameter, and the Sun will become a red giant. Because of its increased surface area, the surface of the Sun will be cooler (2,600 K (4,220 °F) at its coolest) than it is on the main sequence.
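As a rough consistency check on these red-giant figures, treating the Sun as a black body gives luminosity proportional to R^2 * T^4. The 260-times radius and 2,600 K temperature are from the text; the roughly 5,800 K present-day surface temperature is an assumed standard value.

```python
# Order-of-magnitude check: a black body's luminosity scales as R^2 * T^4,
# so even though the red-giant Sun is much cooler, its enormous radius
# makes it far more luminous than today.

R_RATIO = 260        # red-giant radius / current radius (from the text)
T_GIANT = 2600.0     # K, coolest red-giant surface temperature (from the text)
T_NOW = 5800.0       # K, approximate current surface temperature (assumed)

lum_ratio = R_RATIO**2 * (T_GIANT / T_NOW)**4
print(round(lum_ratio))   # on the order of a few thousand Suns
```

A luminosity in the low thousands of present-day Suns is consistent with the expanded Sun vaporizing Mercury and Venus and scorching Earth, as described above.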

The expanding Sun is expected to vaporize Mercury as well as Venus, and render Earth and Mars uninhabitable (possibly destroying Earth as well). Eventually, the core will be hot enough for helium fusion; the Sun will burn helium for a fraction of the time it burned hydrogen in the core. The Sun is not massive enough to commence the fusion of heavier elements, and nuclear reactions in the core will dwindle. Its outer layers will be ejected into space, leaving behind a dense white dwarf, half the original mass of the Sun but only the size of Earth. The ejected outer layers may form a planetary nebula, returning some of the material that formed the Sun—but now enriched with heavier elements like carbon—to the interstellar medium.

solar-system2.jpg


It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2333 2024-10-05 00:02:14

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,096

Re: Miscellany

2333) Solar Eclipse

Gist

A solar eclipse occurs when the moon “eclipses” the sun. This means that the moon, as it orbits the Earth, comes in between the sun and the Earth, thereby blocking the sun and preventing any sunlight from reaching us.

There are four types of solar eclipses

* Partial solar eclipse: The moon blocks the sun, but only partially. As a result, some part of the sun is visible, whereas the blocked part appears dark. A partial solar eclipse is the most common type of solar eclipse.

* Annular solar eclipse: The moon blocks out the sun in such a way that the periphery of the sun remains visible. The unobscured and glowing ring, or “annulus,” around the sun is also popularly known as the “ring of fire.” This is the second most common type of eclipse.

* Total solar eclipse: As the word "total" suggests, the moon totally blocks out the sun for a few minutes, leading to a period of darkness, and the resulting eclipse is called a total solar eclipse. During this period of darkness, one can witness the solar corona, which is usually too dim to notice when the sun is at its full glory. Also noticeable are "Baily's beads" and the related diamond ring effect, which occur when some sunlight still reaches us because the moon's edge is not perfectly smooth. These imperfections (in the form of craters and valleys) can allow sunlight to pass through, appearing like bright, shining diamonds.

* Hybrid solar eclipse: The rarest of all eclipses is a hybrid eclipse, which shifts between a total and annular eclipse. During a hybrid eclipse, some locations on Earth will witness the moon completely blocking the sun (a total eclipse), whereas other regions will observe an annular eclipse.

Summary

A solar eclipse occurs when the moon is positioned between Earth and the sun and casts a shadow over Earth.

Solar eclipses only occur during a new moon phase, usually about twice a year, when the moon aligns itself in such a way that it eclipses the sun, according to NASA.

The points where the moon's orbit crosses the ecliptic (Earth's orbital plane) are known as lunar nodes. The distance at which the new moon approaches a node will determine the type of solar eclipse. The type of solar eclipse is also affected by the moon's distance from Earth and the distance between Earth and the sun.

A total solar eclipse occurs when the moon passes between the sun and Earth, completely obscuring the face of the sun. These solar eclipses are possible because the diameter of the sun is about 400 times that of the moon, but also approximately 400 times farther away, says the Natural History Museum.

An annular solar eclipse occurs when the moon passes between the sun and Earth when it is near its farthest point from Earth. At this distance, the moon appears smaller than the sun and doesn't cover the entire face of the sun. Instead, a ring of light is created around the moon.

A partial solar eclipse occurs when the moon passes between the sun and Earth when the trio is not perfectly aligned. As a result, only the penumbra (the partial shadow) passes over you, and the sun will be partially obscured.

A rare hybrid solar eclipse occurs when the moon's distance from Earth is near its limits for the inner shadow — the umbra — to reach Earth and because the planet is curved. Hybrid solar eclipses are also called annular-total (A-T) eclipses. In most cases, a hybrid eclipse starts as an annular eclipse because the tip of the umbra falls just short of making contact with Earth; then it becomes total because the roundness of the planet reaches up and intercepts the shadow's tip near the middle of the path, then finally it returns to annular toward the end of the path.

Approximately twice a year we experience an eclipse season. This is when the new moon aligns itself in such a way that it eclipses the sun. Solar eclipses do not occur every time there is a new moon phase because the moon's orbit is tilted about 5 degrees relative to Earth's orbit around the sun. For this reason, the moon's shadow usually passes either above or below Earth.

The type of solar eclipse will affect what happens and what observers will be able to see. According to the educational website SpaceEdge Academy, 28% of solar eclipses are total, 35% are partial, 32% are annular and only 5% are hybrid.

During a total solar eclipse the sky will darken and observers, with the correct safety equipment, may be able to see the sun's outer atmosphere, known as the corona. This makes for an exciting skywatching target for solar observers as the corona is usually obscured by the bright face of the sun.

During an annular solar eclipse, the moon doesn't fully obscure the face of the sun, as is the case in a total eclipse. Instead, it dramatically appears as a dark disk obscuring a larger bright disk, giving the appearance of a ring of light around the moon. These eclipses are aptly known as "ring of fire" eclipses.

Partial solar eclipses appear as if the moon is taking a "bite" out of the sun. As the trio of the sun, Earth and moon is not perfectly lined up, only part of the sun will appear to be obscured by the moon. When a total or annular solar eclipse occurs, observers outside the area covered by the moon's umbra (the inner shadow) will see a partial eclipse instead.

During a hybrid solar eclipse observers will be able to see either an annular or total solar eclipse depending on where they are located.

Details

A solar eclipse occurs when the Moon passes between Earth and the Sun, thereby obscuring the view of the Sun from a small part of Earth, totally or partially. Such an alignment occurs approximately every six months, during the eclipse season in its new moon phase, when the Moon's orbital plane is closest to the plane of Earth's orbit. In a total eclipse, the disk of the Sun is fully obscured by the Moon. In partial and annular eclipses, only part of the Sun is obscured. Unlike a lunar eclipse, which may be viewed from anywhere on the night side of Earth, a solar eclipse can only be viewed from a relatively small area of the world. As such, although total solar eclipses occur somewhere on Earth every 18 months on average, they recur at any given place only once every 360 to 410 years.

If the Moon were in a perfectly circular orbit and in the same orbital plane as Earth, there would be total solar eclipses once a month, at every new moon. Instead, because the Moon's orbit is tilted at about 5 degrees to Earth's orbit, its shadow usually misses Earth. Solar (and lunar) eclipses therefore happen only during eclipse seasons, resulting in at least two, and up to five, solar eclipses each year, no more than two of which can be total. Total eclipses are rarer because they require a more precise alignment between the centers of the Sun and Moon, and because the Moon's apparent size in the sky is sometimes too small to fully cover the Sun.

An eclipse is a natural phenomenon. In some ancient and modern cultures, solar eclipses were attributed to supernatural causes or regarded as bad omens. Astronomers' predictions of eclipses began in China as early as the 4th century BC; eclipses hundreds of years into the future may now be predicted with high accuracy.

Looking directly at the Sun can lead to permanent eye damage, so special eye protection or indirect viewing techniques are used when viewing a solar eclipse. Only the total phase of a total solar eclipse is safe to view without protection. Enthusiasts known as eclipse chasers or umbraphiles travel to remote locations to see solar eclipses.

Types

The Sun's distance from Earth is about 400 times the Moon's distance, and the Sun's diameter is about 400 times the Moon's diameter. Because these ratios are approximately the same, the Sun and the Moon as seen from Earth appear to be approximately the same size: about 0.5 degree of arc in angular measure.
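This 400:400 coincidence can be checked with the small-angle formula θ ≈ diameter/distance. A minimal sketch, using rounded mean diameters and distances assumed for illustration (not taken from the text):

```python
import math

def angular_size_deg(diameter_km, distance_km):
    # Small-angle approximation: theta ≈ diameter / distance (radians),
    # converted to degrees.
    return math.degrees(diameter_km / distance_km)

# Rounded mean values, assumed for illustration
sun_deg = angular_size_deg(1_391_000, 149_600_000)   # Sun as seen from Earth
moon_deg = angular_size_deg(3_474, 384_400)          # Moon as seen from Earth

print(sun_deg, moon_deg)  # both come out at roughly half a degree
```

Both results land near 0.5 degree of arc, matching the figure quoted above.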

The Moon's orbit around Earth is slightly elliptical, as is Earth's orbit around the Sun. The apparent sizes of the Sun and Moon therefore vary. The magnitude of an eclipse is the ratio of the apparent size of the Moon to the apparent size of the Sun during an eclipse. An eclipse that occurs when the Moon is near its closest distance to Earth (i.e., near its perigee) can be a total eclipse because the Moon will appear to be large enough to completely cover the Sun's bright disk or photosphere; a total eclipse has a magnitude greater than or equal to 1.000. Conversely, an eclipse that occurs when the Moon is near its farthest distance from Earth (i.e., near its apogee) can be only an annular eclipse because the Moon will appear to be slightly smaller than the Sun; the magnitude of an annular eclipse is less than 1.
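The magnitude comparison can be sketched numerically. The perigee and apogee distances below are rounded typical values, assumed for illustration:

```python
def eclipse_magnitude(moon_dist_km, sun_dist_km=149_600_000,
                      moon_diam_km=3_474, sun_diam_km=1_391_000):
    # Magnitude = apparent size of the Moon / apparent size of the Sun.
    # With the small-angle approximation, the ratio of
    # (diameter / distance) for each body suffices.
    return (moon_diam_km / moon_dist_km) / (sun_diam_km / sun_dist_km)

print(eclipse_magnitude(363_300))  # near perigee: > 1, total eclipse possible
print(eclipse_magnitude(405_500))  # near apogee: < 1, only annular possible
```

The perigee case gives a magnitude slightly above 1, the apogee case slightly below 1, consistent with the total/annular distinction described above.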

Because Earth's orbit around the Sun is also elliptical, Earth's distance from the Sun similarly varies throughout the year. This affects the apparent size of the Sun in the same way, but not as much as does the Moon's varying distance from Earth. When Earth approaches its farthest distance from the Sun in early July, a total eclipse is somewhat more likely, whereas conditions favour an annular eclipse when Earth approaches its closest distance to the Sun in early January.

There are three main types of solar eclipses:

Total eclipse

A total eclipse occurs on average every 18 months when the dark silhouette of the Moon completely obscures the bright light of the Sun, allowing the much fainter solar corona to be visible. During an eclipse, totality occurs only along a narrow track on the surface of Earth. This narrow track is called the path of totality.

Annular eclipse

An annular eclipse, like a total eclipse, occurs when the Sun and Moon are exactly in line with Earth. During an annular eclipse, however, the apparent size of the Moon is not large enough to completely block out the Sun. Totality thus does not occur; the Sun instead appears as a very bright ring, or annulus, surrounding the dark disk of the Moon. Annular eclipses occur once every one or two years, not annually. The term derives from the Latin root word anulus, meaning "ring", rather than annus, for "year".

Partial eclipse

A partial eclipse occurs about twice a year, when the Sun and Moon are not exactly in line with Earth and the Moon only partially obscures the Sun. This phenomenon can usually be seen from a large part of Earth outside of the track of an annular or total eclipse. However, some eclipses can be seen only as a partial eclipse, because the umbra passes above Earth's polar regions and never intersects Earth's surface. Partial eclipses are virtually unnoticeable in terms of the Sun's brightness, as it takes well over 90% coverage to notice any darkening at all. Even at 99%, it would be no darker than civil twilight.

Terminology

Hybrid eclipse

A hybrid eclipse (also called annular/total eclipse) shifts between a total and annular eclipse. At certain points on the surface of Earth, it appears as a total eclipse, whereas at other points it appears as annular. Hybrid eclipses are comparatively rare.

A hybrid eclipse occurs when the magnitude of an eclipse changes during the event from less to greater than one, so the eclipse appears to be total at locations nearer the midpoint, and annular at other locations nearer the beginning and end, since the sides of Earth are slightly further away from the Moon. These eclipses are extremely narrow in their path width and relatively short in their duration at any point compared with fully total eclipses; the hybrid eclipse of April 20, 2023, for example, had over a minute of totality at various points along the path. Like a focal point, the width and duration of totality and annularity are near zero at the points where the changes between the two occur.

Central eclipse

Central eclipse is often used as a generic term for a total, annular, or hybrid eclipse. This is, however, not completely correct: the definition of a central eclipse is an eclipse during which the central line of the umbra touches Earth's surface. It is possible, though extremely rare, that part of the umbra intersects with Earth (thus creating an annular or total eclipse), but not its central line. This is then called a non-central total or annular eclipse. Gamma is a measure of how centrally the shadow strikes. The last non-central (but still umbral) solar eclipse was on April 29, 2014. This was an annular eclipse. The next non-central total solar eclipse will be on April 9, 2043.

Eclipse phases

The visual phases observed during a total eclipse are called:

* First contact—when the Moon's limb (edge) is exactly tangential to the Sun's limb.
* Second contact—starting with Baily's Beads (caused by light shining through valleys on the Moon's surface) and the diamond ring effect. Almost the entire disk is covered.
* Totality—the Moon obscures the entire disk of the Sun and only the solar corona is visible.
* Third contact—when the first bright light becomes visible and the Moon's shadow is moving away from the observer. Again a diamond ring may be observed.
* Fourth contact—when the trailing edge of the Moon ceases to overlap with the solar disk and the eclipse ends.

solar_eclipse.png


It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2334 2024-10-06 00:03:32

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,096

Re: Miscellany

2334) Lunar Eclipse

Gist

Lunar eclipses occur at the full moon phase. When Earth is positioned precisely between the Moon and Sun, Earth’s shadow falls upon the surface of the Moon, dimming it and sometimes turning the lunar surface a striking red over the course of a few hours. Each lunar eclipse is visible from half of Earth.

A lunar eclipse occurs when the Sun, the Earth and the Moon all fall in a straight line. The Earth comes in between the Sun and the Moon and blocks the sunlight from reaching the Moon.

Summary

Lunar eclipses occur when Earth moves between the sun and the moon, casting a shadow across the lunar surface.

Lunar eclipses can only take place during a full moon and are a popular event for skywatchers around the world, as they can be enjoyed without any special equipment, unlike solar eclipses.

The next lunar eclipse will be a total lunar eclipse on March 13-14, 2025.

A lunar eclipse is caused by Earth blocking sunlight from reaching the moon and creating a shadow across the lunar surface.

The sun-blocking Earth casts two shadows that fall on the moon during a lunar eclipse: The umbra is a full, dark shadow, and the penumbra is a partial outer shadow.

There are three types of lunar eclipses depending on how the sun, Earth and moon are aligned at the time of the event.   

* Total lunar eclipse: Earth's shadow is cast across the entire lunar surface.
* Partial lunar eclipse: During a partial lunar eclipse, only part of the moon enters Earth's shadow, which may look like it is taking a "bite" out of the lunar surface. Earth's shadow will appear dark on the side of the moon facing Earth. How much of a "bite" we see depends on how the sun, Earth and moon align, according to NASA.
* Penumbral lunar eclipse: The faint outer part of Earth's shadow is cast across the lunar surface. This type of eclipse is not as dramatic as the other two and can be difficult to see. 

During a total lunar eclipse, the lunar surface turns a rusty red color, earning the nickname "blood moon". The eerie red appearance is caused by sunlight interacting with Earth's atmosphere.

When sunlight reaches Earth, our atmosphere scatters and filters different wavelengths. Shorter wavelengths such as blue light are scattered outward, while longer wavelengths like red are bent — or refracted — into Earth's umbra, according to the Natural History Museum. When the moon passes through Earth's umbra during a total lunar eclipse, the red light reflects off the lunar surface, giving the moon its blood-red appearance.
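The wavelength dependence behind this is Rayleigh scattering, whose intensity scales as 1/λ⁴. A short sketch; the 450 nm and 700 nm wavelengths are illustrative choices for blue and red light:

```python
def rayleigh_ratio(lambda_short_nm, lambda_long_nm):
    # Rayleigh scattering intensity scales as 1 / wavelength^4,
    # so this returns how much more strongly the shorter
    # wavelength is scattered than the longer one.
    return (lambda_long_nm / lambda_short_nm) ** 4

print(rayleigh_ratio(450, 700))  # blue scatters several times more than red
```

The ratio comes out near 6, which is why the blue component is scattered out of the sunlight while the red component survives to be refracted into the umbra.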

"How gold, orange, or red the moon appears during a total lunar eclipse depends on how much dust, water, and other particles are in Earth's atmosphere" according to NASA scientists. Other atmospheric factors such as temperature and humidity also affect the moon's appearance during a lunar eclipse.

Details

A lunar eclipse is an astronomical event that occurs when the Moon moves into the Earth's shadow, causing the Moon to be darkened. Such an alignment occurs during an eclipse season, approximately every six months, during the full moon phase, when the Moon's orbital plane is closest to the plane of the Earth's orbit.

This can occur only when the Sun, Earth, and Moon are exactly or very closely aligned (in syzygy) with Earth between the other two, which can happen only on the night of a full moon when the Moon is near either lunar node. The type and length of a lunar eclipse depend on the Moon's proximity to the lunar node.

When the Moon is totally eclipsed by the Earth (a "deep eclipse"), it takes on a reddish color: because the planet completely blocks direct sunlight from reaching the Moon's surface, the only light reflected from the lunar surface is what has been refracted by the Earth's atmosphere. This light appears reddish due to the Rayleigh scattering of blue light, the same reason sunrises and sunsets are more orange than during the day.

Unlike a solar eclipse, which can only be viewed from a relatively small area of the world, a lunar eclipse may be viewed from anywhere on the night side of Earth. A total lunar eclipse can last up to nearly two hours, while a total solar eclipse lasts only a few minutes at any given place, because the Moon's shadow is smaller. Also, unlike solar eclipses, lunar eclipses are safe to view without any eye protection or special precautions.

Types of lunar eclipse

Earth's shadow can be divided into two distinctive parts: the umbra and penumbra. Earth totally occludes direct solar radiation within the umbra, the central region of the shadow. However, since the Sun's diameter appears to be about one-quarter of Earth's in the lunar sky, the planet only partially blocks direct sunlight within the penumbra, the outer portion of the shadow.
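The one-quarter figure can be checked with the same small-angle estimate used for Earth-based apparent sizes (rounded mean diameters and distances below are assumed for illustration):

```python
import math

# Apparent sizes as seen from the Moon, via theta ≈ diameter / distance
earth_deg = math.degrees(12_742 / 384_400)        # Earth in the lunar sky
sun_deg = math.degrees(1_391_000 / 149_600_000)   # Sun in the lunar sky

print(sun_deg / earth_deg)  # roughly one-quarter
```

The Sun's apparent diameter from the Moon works out to about 0.28 of Earth's, consistent with the "about one-quarter" statement above.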

Penumbral lunar eclipse

A penumbral lunar eclipse occurs when part or all of the Moon's near side passes into the Earth's penumbra. No part of the Moon is in the Earth's umbra during this event, meaning that on all or a part of the Moon's surface facing Earth, the sun is partially blocked. The penumbra causes a subtle dimming of the lunar surface, which is only visible to the naked eye when the majority of the Moon's diameter is immersed in Earth's penumbra. A special type of penumbral eclipse is a total penumbral lunar eclipse, during which the entire Moon lies exclusively within Earth's penumbra. Total penumbral eclipses are rare, and when these occur, the portion of the Moon closest to the umbra may appear slightly darker than the rest of the lunar disk.

Partial lunar eclipse

A partial lunar eclipse occurs when the Moon's near side penetrates only partially into the Earth's umbra, while a total lunar eclipse occurs when the entire Moon enters the Earth's umbra. During a partial eclipse, one part of the Moon is in the Earth's umbra, while the other part is in the Earth's penumbra. The Moon's average orbital speed is about 1.03 km/s (2,300 mph), or a little more than its diameter per hour, so totality may last up to nearly 107 minutes. Nevertheless, the total time between the first and last contacts of the Moon's limb with Earth's shadow is much longer and could last up to 236 minutes.
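The "a little more than its diameter per hour" claim checks out arithmetically; the mean lunar diameter below is an assumed figure, not from the text:

```python
moon_speed_km_s = 1.03      # average orbital speed, from the text above
moon_diameter_km = 3_474    # mean lunar diameter (assumed rounded value)

# Distance covered in one hour at the average orbital speed
distance_per_hour = moon_speed_km_s * 3600

print(distance_per_hour)  # ≈ 3708 km, a little more than one lunar diameter
```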

Total lunar eclipse

When the Moon's near side entirely passes into the Earth's umbral shadow, a total lunar eclipse occurs. Just prior to complete entry, the brightness of the lunar limb—the curved edge of the Moon still being hit by direct sunlight—will cause the rest of the Moon to appear comparatively dim. The moment the Moon enters a complete eclipse, the entire surface will become more or less uniformly bright, revealing the stars surrounding it. Later, as the Moon's opposite limb is struck by sunlight, the overall disk will again become obscured.

This is because, as viewed from the Earth, the brightness of a lunar limb is generally greater than that of the rest of the surface due to reflections from the many surface irregularities within the limb: sunlight striking these irregularities is always reflected back in greater quantities than that striking more central parts, which is why the edges of full moons generally appear brighter than the rest of the lunar surface. This is similar to the effect of velvet fabric over a convex curved surface, which, to an observer, will appear darkest at the center of the curve. The same will be true of any planetary body with little or no atmosphere and an irregular cratered surface (e.g., Mercury) when viewed opposite the Sun.

Central lunar eclipse

Central lunar eclipse is a total lunar eclipse during which the Moon passes near and through the centre of Earth's shadow, contacting the antisolar point. This type of lunar eclipse is relatively rare.

The relative distance of the Moon from Earth at the time of an eclipse can affect the eclipse's duration. In particular, when the Moon is near apogee, the farthest point from Earth in its orbit, its orbital speed is the slowest. The diameter of Earth's umbra does not decrease appreciably over the range of the Moon's orbital distances. Thus, a totally eclipsed Moon occurring near apogee will lengthen the duration of totality.

Selenelion

A selenelion or selenehelion, also called a horizontal eclipse, occurs where and when both the Sun and an eclipsed Moon can be observed at the same time. The event can only be observed just before sunset or just after sunrise, when both bodies will appear just above opposite horizons at nearly opposite points in the sky. A selenelion occurs during every total lunar eclipse—it is an experience of the observer, not a planetary event separate from the lunar eclipse itself. Typically, observers on Earth located on high mountain ridges undergoing false sunrise or false sunset at the same moment of a total lunar eclipse will be able to experience it. Although during a selenelion the Moon is completely within the Earth's umbra, both it and the Sun can be observed in the sky because atmospheric refraction causes each body to appear higher (i.e., more central) in the sky than its true geometric position.

Timing

The timing of total lunar eclipses is determined by what are known as its "contacts" (moments of contact with Earth's shadow):

* P1 (First contact): Beginning of the penumbral eclipse. Earth's penumbra touches the Moon's outer limb.
* U1 (Second contact): Beginning of the partial eclipse. Earth's umbra touches the Moon's outer limb.
* U2 (Third contact): Beginning of the total eclipse. The Moon's surface is entirely within Earth's umbra.
* Greatest eclipse: The peak stage of the total eclipse. The Moon is at its closest to the center of Earth's umbra.
* U3 (Fourth contact): End of the total eclipse. The Moon's outer limb exits Earth's umbra.
* U4 (Fifth contact): End of the partial eclipse. Earth's umbra leaves the Moon's surface.
* P4 (Sixth contact): End of the penumbral eclipse. Earth's penumbra no longer makes contact with the Moon.




Offline

#2335 2024-10-07 00:06:38

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,096

Re: Miscellany

2335) Flower

Gist

A flower, sometimes known as a bloom or blossom, is the reproductive structure found in flowering plants.

Flowers are composed of many distinct components: sepals, petals, stamens, and carpels. These components are arranged in whorls and attach to an area called the receptacle, which is at the end of the stem that leads to the flower. This stem is called the peduncle.

Flowers consist of a combination of vegetative organs – sepals that enclose and protect the developing flower, petals that attract pollinators, and reproductive organs that produce gametophytes, which in flowering plants produce gametes.

Summary

A flower, also known as a bloom or blossom, is the reproductive structure found in flowering plants (plants of the division Angiospermae). Flowers consist of a combination of vegetative organs – sepals that enclose and protect the developing flower, petals that attract pollinators, and reproductive organs that produce gametophytes, which in flowering plants produce gametes. The male gametophytes, which produce sperm, are enclosed within pollen grains produced in the anthers. The female gametophytes are contained within the ovules produced in the ovary.

Most flowering plants depend on animals, such as bees, moths, and butterflies, to transfer their pollen between different flowers, and have evolved to attract these pollinators by various strategies, including brightly colored, conspicuous petals, attractive scents, and the production of nectar, a food source for pollinators. In this way, many flowering plants have co-evolved with pollinators to be mutually dependent on services they provide to one another—in the plant's case, a means of reproduction; in the pollinator's case, a source of food.

When pollen from the anther of a flower is deposited on the stigma, this is called pollination. Some flowers may self-pollinate, producing seed using pollen from the same flower or from a different flower of the same plant, but others have mechanisms to prevent self-pollination and rely on cross-pollination, when pollen is transferred from the anther of one flower to the stigma of another flower on a different individual of the same species. Self-pollination happens in flowers where the stamen and carpel mature at the same time, and are positioned so that the pollen can land on the flower's stigma. This pollination does not require an investment from the plant to provide nectar and pollen as food for pollinators. Some flowers produce diaspores without fertilization (parthenocarpy). After fertilization, the ovary of the flower develops into fruit containing seeds.

Flowers have long been appreciated for their beauty and pleasant scents, and also hold cultural significance as religious, ritual, or symbolic objects, or sources of medicine and food.

Details

A flower is the characteristic reproductive structure of angiosperms. As popularly used, the term “flower” especially applies when part or all of the reproductive structure is distinctive in colour and form.

In their range of colour, size, form, and anatomical arrangement, flowers present a seemingly endless variety of combinations. They range in size from minute blossoms to giant blooms. In some plants, such as poppy, magnolia, tulip, and petunia, each flower is relatively large and showy and is produced singly, while in other plants, such as aster, snapdragon, and lilac, the individual flowers may be very small and are borne in a distinctive cluster known as an inflorescence. Regardless of their variety, all flowers have a uniform function, the reproduction of the species through the production of seed.

Form and types

Basically, each flower consists of a floral axis upon which are borne the essential organs of reproduction (stamens and pistils) and usually accessory organs (sepals and petals); the latter may serve to both attract pollinating insects and protect the essential organs. The floral axis is a greatly modified stem; unlike vegetative stems, which bear leaves, it is usually contracted, so that the parts of the flower are crowded together on the stem tip, the receptacle. The flower parts are usually arrayed in whorls (or cycles) but may also be disposed spirally, especially if the axis is elongate. There are commonly four distinct whorls of flower parts: (1) an outer calyx consisting of sepals; within it lies (2) the corolla, consisting of petals; (3) the androecium, or group of stamens; and in the centre is (4) the gynoecium, consisting of the pistils.

The sepals and petals together make up the perianth, or floral envelope. The sepals are usually greenish and often resemble reduced leaves, while the petals are usually colourful and showy. Sepals and petals that are indistinguishable, as in lilies and tulips, are sometimes referred to as tepals. The androecium, or male parts of the flower, comprise the stamens, each of which consists of a supporting filament and an anther, in which pollen is produced. The gynoecium, or female parts of the flower, comprises one or more pistils, each of which consists of an ovary, with an upright extension, the style, on the top of which rests the stigma, the pollen-receptive surface. The ovary encloses the ovules, or potential seeds. A pistil may be simple, made up of a single carpel, or ovule-bearing modified leaf; or compound, formed from several carpels joined together.

A flower having sepals, petals, stamens, and pistils is complete; lacking one or more of such structures, it is said to be incomplete. Stamens and pistils are not present together in all flowers. When both are present the flower is said to be perfect, or bisexual, regardless of a lack of any other part that renders it incomplete (see photograph). A flower that lacks stamens is pistillate, or female, while one that lacks pistils is said to be staminate, or male. When the same plant bears unisexual flowers of both sexes, it is said to be monoecious (e.g., tuberous begonia, hazel, oak, corn); when the male and female flowers are on different plants, the plant is dioecious (e.g., date, holly, cottonwood, willow); when there are male, female, and bisexual flowers on the same plant, the plant is termed polygamous.

A flower may be radially symmetrical, as in roses and petunias, in which case it is termed regular or actinomorphic. A bilaterally symmetrical flower, as in orchids and snapdragons, is irregular or zygomorphic.

Pollination

The stamens and pistils are directly involved with the production of seed. The stamen bears microsporangia (spore cases) in which are developed numerous microspores (potential pollen grains); the pistil bears ovules, each enclosing an egg cell. When a microspore germinates, it is known as a pollen grain. When the pollen sacs in a stamen's anther are ripe, they open and the pollen is shed. Fertilization can occur only if the pollen grains are transferred from the anther to the stigma of a pistil, a process known as pollination.

Self-pollination

There are two chief kinds of pollination: (1) self-pollination, the pollination of a stigma by pollen from the same flower or another flower on the same plant; and (2) cross-pollination, the transfer of pollen from the anther of a flower of one plant to the stigma of the flower of another plant of the same species. Self-pollination occurs in many species, but in the others, perhaps the majority, it is prevented by such adaptations as the structure of the flower, self-incompatibility, and the maturation of stamens and pistils of the same flower or plant at different times. Cross-pollination may be brought about by a number of agents, chiefly insects and wind. Wind-pollinated flowers generally can be recognized by their lack of colour, odour, or nectar, while animal-pollinated flowers (see photograph) are conspicuous by virtue of their structure, colour, or the production of scent or nectar.

After a pollen grain has reached the stigma, it germinates, and a pollen tube protrudes from it. This tube, containing two male gametes (sperms), extends into the ovary and reaches the ovule, discharging its gametes so that one fertilizes the egg cell, which becomes an embryo, and the other joins with two polar nuclei to form the endosperm. (Normally many pollen grains fall on a stigma; they all may germinate, but only one pollen tube enters any one ovule.) Following fertilization, the embryo is on its way to becoming a seed, and at this time the ovary itself enlarges to form the fruit.

Cultural significance

Flowers have been symbols of beauty in most civilizations of the world, and flower giving is still among the most popular of social amenities. As gifts, flowers serve as expressions of affection for spouses, other family members, and friends; as decorations at weddings and other ceremonies; as tokens of respect for the deceased; as cheering gifts to the bedridden; and as expressions of thanks or appreciation. Most flowers bought by the public are grown in commercial greenhouses or horticultural fields and then sold through wholesalers to retail florists.

Additional Information

Sexual reproductive structure of plants, especially of angiosperms (flowering plants).

Supplement

Flowers are plant structures involved in sexual reproduction. Thus, they typically comprise sexual reproductive structures (i.e. the androecium and gynoecium) in addition to nonessential parts such as sepals and petals. The presence or absence of these structures may be used to describe flowers and flowering plants (angiosperms).

Complete and incomplete flowers:

Flowers that have all four of these structures (sepals, petals, androecium, and gynoecium) are called complete; those lacking one or more of them are called incomplete. Many flowering plants produce conspicuous, colorful, scented petals in order to attract insect pollinators. There are plants, like grasses, that produce flowers that are less conspicuous and lacking in petals. These plants do not require insects but rely on other agents of pollination, such as wind.

Perfect (bisexual) and imperfect (unisexual) flowers:

Flowers that have both male and female reproductive structures are called bisexual or perfect. Flowers that bear either male (androecium) or female reproductive structures (gynoecium) are referred to as unisexual or imperfect. With only one reproductive organ present, imperfect flowers are also described as incomplete flowers.

Regular and irregular flowers:

Flowers that display radial symmetry are described as regular flowers, in contrast to irregular flowers, which are at most bilaterally symmetrical.

Monoecious and dioecious plants:

Plants may be described as monoecious or dioecious. A monoecious plant bears both male and female imperfect flowers. A dioecious plant is a plant producing only one type of imperfect flowers, i.e. male or female flowers. Therefore, a dioecious plant may either be a male or female plant depending on the flower they produce.




Offline

#2336 2024-10-08 00:03:05

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,096

Re: Miscellany

2336) Porter (Carrier)

Gist

Porters carry luggage for guests. Porters can be found in hotels and motels, on ships and in transport terminals such as wharves, airports and train stations. They not only look after luggage, but direct guests to their rooms, cabins or berths and provide concierge services when necessary.

Summary

According to the Britannica Dictionary, a porter is a person who carries luggage or bags, or performs other duties, at a hotel, airport, or train station:

* Hotel, airport, or train station: A porter's job is to carry luggage or bags for customers.
* Train: In the US, a porter's job is to assist passengers on a train.
* Hospital: In Britain, a porter's job is to move patients, equipment, and medical supplies around the hospital.
* College or hospital: In Britain, a porter's job is to let people into the college or hospital.

Details

A porter, also called a bearer, is a person who carries objects or cargo for others. The range of services conducted by porters is extensive, from shuttling luggage aboard a train (a railroad porter) to bearing heavy burdens at altitude in inclement weather on multi-month mountaineering expeditions. They can carry items on their backs, as with a backpack, or on their heads. The word "porter" derives from the Latin portare (to carry).

The use of humans to transport cargo dates to the ancient world, prior to domesticating animals and development of the wheel. Historically it remained prevalent in areas where slavery was permitted, and exists today where modern forms of mechanical conveyance are impractical or impossible, such as in mountainous terrain, or thick jungle or forest cover.

Over time, slavery diminished and technology advanced, but the role of porter for specialized transporting services remains strong in the 21st century. Examples include bellhops at hotels, redcaps at railway stations, skycaps at airports, and bearers on adventure trips engaged by foreign travelers.

Expeditions

Porters, frequently called Sherpas in the Himalayas (after the ethnic group most Himalayan porters come from), are also an essential part of mountaineering: they are typically highly skilled professionals who specialize in the logistics of mountain climbing, not merely people paid to carry loads (although carrying is integral to the profession). Frequently, porters work for companies who hire them out to climbing groups, to serve both as porters and as mountain guides; the term "guide" is often used interchangeably with "Sherpa" or "porter", but there are certain differences.

Porters are expected to prepare the route before and/or while the main expedition climbs, going up beforehand with tents, food, water, and equipment (enough for themselves and for the main expedition), which they place in carefully located deposits on the mountain. This preparation can take months of work before the main expedition starts, and involves numerous trips up and down the mountain, until the last and smallest supply deposit is planted shortly below the peak. When the route is prepared, either entirely or in stages ahead of the expedition, the main body follows. The last stage is often done without the porters, who remain at the last camp, a quarter mile or so below the summit, meaning only the main expedition is given the credit for reaching it. In many cases, since the porters are going ahead, they are forced to free-climb, driving spikes and laying safety lines for the main expedition to use as they follow.

Porters such as Sherpas are frequently drawn from local ethnic groups well adapted to living in the rarefied atmosphere and accustomed to life in the mountains. Although they receive little glory, porters or Sherpas are often considered among the most skilled of mountaineers, and are generally treated with respect, since the success of the entire expedition is only possible through their work.
They are also often called upon to mount rescue efforts when part of the party is endangered or injured; when a rescue succeeds, several porters are usually needed to carry the injured climber(s) back down the mountain so the expedition can continue. A well-known incident in which porters attempted to rescue stranded climbers, some dying in the attempt, is the 2008 K2 disaster. Sixteen Sherpas were killed in the 2014 Mount Everest ice avalanche, prompting the Sherpa guide community to refuse any further ascents for the remainder of the year, making further expeditions impossible.

History

Human adaptability and flexibility led to the early use of humans for transporting gear. Porters were commonly used as beasts of burden in the ancient world, when labor was generally cheap and slavery widespread. The ancient Sumerians, for example, enslaved women to shift wool and flax.

In the early Americas, where there were few native beasts of burden, all goods were carried by porters, called Tlamemes in the Nahuatl language of Mesoamerica. In colonial times, some areas of the Andes employed porters called silleros to carry persons, particularly Europeans, as well as their luggage across difficult mountain passes. Around the globe, porters served, and in some areas continue to serve, as such litter-bearers, particularly in crowded urban areas.

Many great works of engineering were created solely by muscle power in the days before machinery, or even wheelbarrows and wagons; massive workforces of labourers and bearers completed impressive earthworks by manually carrying the earth, stones, or bricks in baskets on their backs.

Porters were very important to the local economies of many large cities in Brazil during the 1800s, where they were known as ganhadores. In 1857, ganhadores in Salvador, Bahia, went on strike in the first general strike in the country's history.

Contribution to mountain climbing expeditions

The contributions of porters can often go overlooked. Amir Mehdi was a Pakistani mountaineer and porter known for being part of the team that managed the first successful ascent of Nanga Parbat in 1953, and of K2 in 1954 with an Italian expedition. He and the Italian mountaineer Walter Bonatti are also known for having survived a night in the highest open bivouac, at 8,100 metres (26,600 ft), on K2 in 1954. Fazal Ali, who was born in the Shimshal Valley in northern Pakistan, is, according to the Guinness Book of World Records, the only man ever to have scaled K2 (8,611 m) three times, in 2014, 2017 and 2018, all without oxygen, but his achievements have gone largely unrecognised.

Today

Porters are still paid to shift burdens in many third-world countries where motorized transport is impractical or unavailable, often alongside pack animals.

The Sherpa people of Nepal are so renowned as mountaineering porters that their ethnonym is synonymous with that profession. Their skill, knowledge of the mountains and local culture, and ability to perform at altitude make them indispensable for the highest Himalayan expeditions.

Porters at Indian railway stations are called coolies, a term for an unskilled Asian labourer, derived from the Chinese word for porter.

Mountain porters are also still in use in a handful of more developed countries, including Slovakia and Japan (bokka). These men (and more rarely women) regularly resupply mountain huts and tourist chalets at high-altitude mountain ranges.

In North America

Certain trade-specific terms are used for forms of porters in North America, including bellhop (hotel porter), redcap (railway station porter), and skycap (airport porter).

The practice of railroad station porters wearing red caps to distinguish them from blue-capped train personnel with other duties was begun on Labor Day of 1890 by an African-American porter who wanted to stand out from the crowds at Grand Central Terminal in New York City. The tactic immediately caught on and was over time adopted by other types of porters in their own specialties.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2337 2024-10-09 00:02:10

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,096

Re: Miscellany

2337) Electrum

Gist

Electrum is a naturally occurring alloy of gold and silver, with trace amounts of copper and other metals. Its color ranges from pale to bright yellow, depending on the proportions of gold and silver. It has been produced artificially and is also known as "green gold".

Summary

In ancient Egypt, electrum was called 'djaam'. It was believed to be a natural alloy of gold and silver, and the term is known only from ancient sources. It was brought from the mountains of the eastern desert and from Nubia. Electrum is one of the seven metals known since prehistoric times, along with gold, copper, mercury, tin, iron and lead. Aristotle suggested that the six non-noble metals would eventually become gold, attaining perfection by transmutation.

Electrum, from the Greek 'elektron' (a substance that develops electricity under friction), is a common name given to all intermediate varieties in the isomorphous Au-Ag series. The physico-chemical properties of electrum vary with the silver content. Physically, with increasing silver the colour changes from yellow to near-white, the metal becomes less dense, and the fineness decreases from about 800 to around 550. Chemically, with increasing silver, the lower-fineness alloy becomes less stable than higher-fineness gold and is thus more prone to alteration by weathering. Electrum is sometimes coated with halogen and sulphide compounds, which yield a thin film of native silver under suitably reducing conditions.

Additional Information

Electrum, natural or artificial alloy of gold with at least 20 percent silver, which was used to make the first known coins in the Western world. Most natural electrum contains copper, iron, palladium, bismuth, and perhaps other metals. The colour varies from white-gold to brassy, depending on the percentages of the major constituents and copper. In the ancient world the chief source was Lydia, in Asia Minor, where the alloy was found in the area of the Pactolus River, a small tributary of the Hermus (modern Gediz Nehri, in Turkey). The first Occidental coinage, possibly begun by King Gyges (7th century bc) of Lydia, consisted of irregular ingots of electrum bearing his stamp as a guarantee of negotiability at a predetermined value.

Details

Electrum is a naturally occurring alloy of gold and silver, with trace amounts of copper and other metals. Its color ranges from pale to bright yellow, depending on the proportions of gold and silver. It has been produced artificially and is also known as "green gold".

Electrum was used as early as the third millennium BC in the Old Kingdom of Egypt, sometimes as an exterior coating to the pyramidions atop ancient Egyptian pyramids and obelisks. It was also used in the making of ancient drinking vessels. The first known metal coins made were of electrum, dating back to the end of the 7th century or the beginning of the 6th century BC.

Etymology

The name electrum is the Latinized form of the Greek word ḗlektron, mentioned in the Odyssey, referring to a metallic substance consisting of gold alloyed with silver. The same word was also used for amber, likely because of the pale yellow color of certain varieties. (It is from amber's electrostatic properties that the modern English words electron and electricity are derived.) Electrum was often referred to as "white gold" in ancient times but could be more accurately described as pale gold, as it is usually pale yellow or yellowish-white in color. The modern term white gold usually refers to gold alloyed with one or more of nickel, silver, platinum and palladium to produce a silver-colored gold.

Composition

Electrum consists primarily of gold and silver but is sometimes found with traces of platinum, copper and other metals. The name is mostly applied informally to compositions between 20–80% gold and 80–20% silver, but these are strictly called gold or silver depending on the dominant element. Analysis of the composition of electrum in ancient Greek coinage dating from about 600 BC shows that the gold content was about 55.5% in the coinage issued by Phocaea. In the early classical period the gold content of electrum ranged from 46% in Phokaia to 43% in Mytilene. In later coinage from these areas, dating to 326 BC, the gold content averaged 40% to 41%. In the Hellenistic period electrum coins with a regularly decreasing proportion of gold were issued by the Carthaginians. In the later Eastern Roman Empire controlled from Constantinople the purity of the gold coinage was reduced.
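The informal 20–80% naming convention above can be illustrated with a minimal sketch (the function name and strict cutoffs are illustrative, not any standard classification):

```python
def classify_gold_silver_alloy(gold_pct: float) -> str:
    """Classify a binary gold-silver alloy by its gold percentage,
    following the informal 20-80% convention described above."""
    if not 0 <= gold_pct <= 100:
        raise ValueError("gold percentage must be between 0 and 100")
    if gold_pct > 80:
        return "gold"      # gold dominates; strictly called gold
    if gold_pct < 20:
        return "silver"    # silver dominates; strictly called silver
    return "electrum"      # intermediate Au-Ag composition

# Compositions reported above for ancient coinage:
print(classify_gold_silver_alloy(55.5))  # Phocaean coinage, c. 600 BC -> electrum
print(classify_gold_silver_alloy(40.0))  # later coinage, 326 BC -> electrum
```

Both ancient compositions quoted in the text fall inside the informal electrum range, which is why they are named as such despite their very different gold contents.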

History

Electrum is mentioned in an account of an expedition sent by Pharaoh Sahure of the Fifth Dynasty of Egypt. It is discussed by Pliny the Elder in his Naturalis Historia and is also mentioned in the Bible, in the first chapter of the Book of Ezekiel.

Early coinage

The earliest known electrum coins, Lydian coins and East Greek coins found under the Temple of Artemis at Ephesus, are currently dated to the last quarter of the 7th century BC (625–600 BC). Electrum is believed to have been used in coins c. 600 BC in Lydia during the reign of Alyattes.

Electrum was much better for coinage than gold, mostly because it was harder and more durable, but also because techniques for refining gold were not widespread at the time. The gold content of naturally occurring electrum in modern western Anatolia ranges from 70% to 90%, in contrast to the 45–55% of gold in electrum used in ancient Lydian coinage of the same geographical area. This suggests that the Lydians had already solved the refining technology for silver and were adding refined silver to the local native electrum some decades before introducing pure silver coins.

In Lydia, electrum was minted into coins weighing 4.7 grams (0.17 oz), each valued at 1⁄3 stater (meaning "standard"). Three of these coins—with a weight of about 14.1 grams (0.50 oz)—totaled one stater, about one month's pay for a soldier. To complement the stater, fractions were made: the trite (third), the hekte (sixth), and so forth, including 1/24 of a stater, and even down to 1/48 and 1/96 of a stater. The 1⁄96 stater was about 0.14 grams (0.0049 oz) to 0.15 grams (0.0053 oz). Larger denominations, such as a one stater coin, were minted as well.
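The denomination arithmetic above can be checked with a short sketch (the 14.1 g stater weight and the list of fractions are taken from the figures quoted; the code itself is only illustrative):

```python
# Approximate weights of Lydian electrum denominations, derived from
# the ~14.1 g stater quoted above (a trite is 1/3, a hekte 1/6, etc.).
STATER_GRAMS = 14.1

denominations = {
    "stater": 1,
    "trite (1/3)": 3,
    "hekte (1/6)": 6,
    "1/24 stater": 24,
    "1/48 stater": 48,
    "1/96 stater": 96,
}

for name, divisor in denominations.items():
    print(f"{name:>12}: {STATER_GRAMS / divisor:.3f} g")
# The 1/96 stater works out to roughly 0.147 g, consistent with the
# 0.14-0.15 g range given above.
```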

Because of variation in the composition of electrum, it was difficult to determine the exact worth of each coin. Widespread trading was hampered by this problem, as the intrinsic value of each electrum coin could not be easily determined. This suggests that one reason for the invention of coinage in that area was to increase the profits from seigniorage by issuing currency with a lower gold content than the commonly circulating metal.

These difficulties were eliminated circa 570 BC when the Croeseids, coins of pure gold and silver, were introduced. However, electrum currency remained common until approximately 350 BC. The simplest reason for this was that, because of the gold content, one 14.1 gram stater was worth as much as ten 14.1 gram silver pieces.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2338 2024-10-10 00:04:18

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,096

Re: Miscellany

2338) Electric shaver

Summary

Razor, keen-edged cutting implement for shaving or cutting hair. Prehistoric cave drawings show that clam shells, shark’s teeth, and sharpened flints were used as shaving implements. Solid gold and copper razors have been found in Egyptian tombs of the 4th millennium bce. According to the Roman historian Livy, the razor was introduced in Rome in the 6th century bce by Lucius Tarquinius Priscus, legendary king of Rome; but shaving did not become customary until the 5th century bce.

Steel razors with ornamented handles and individually hollow-ground blades were crafted in Sheffield, England, the centre of the cutlery industry, in the 18th and 19th centuries. The hard crucible steel produced there by Benjamin Huntsman in 1740 was first rejected and then, later, after its adoption in France, deemed superior by the local manufacturers.

In the United States a hoe-shaped safety razor, a steel blade with a guard along one edge, was produced in 1880, and, at the beginning of the 20th century, King Camp Gillette combined the hoe shape with the double-edged replaceable blade. In the early 1960s several countries began to manufacture stainless steel blades for safety razors, with the advantage of longer use.

The popularity of the long-wearing double-edged blade was greatly eclipsed by the development of inexpensive cartridge-style injector blades, designed to fit into disposable plastic handles. The cartridge had only one cutting edge, but many manufacturers produced a “double-edged” instrument by placing two blades on one side. By the early 21st century, safety razors with up to five blades were also common.

Electric razors were patented as early as 1900 in the United States, but the first to be successfully manufactured was that on which Jacob Schick, a retired U.S. Army colonel, applied for a patent in 1928 and that he placed on the market in 1931. Competitive models soon appeared. In the electric razor a shearing head, driven by a small motor, is divided into two sections: the outer consists of a series of slots to grip the hairs and the inner of a series of saw blades. Models vary in the number and design of the blades, in the shape of the shearing head (round or flat), and in auxiliary devices such as clippers for sideburns.

Details

An electric shaver (also known as the dry razor, electric razor, or simply shaver) is a razor with an electrically powered rotating or oscillating blade. The electric shaver usually does not require the use of shaving cream, soap, or water. The razor may be powered by a small DC motor, which is either powered by batteries or mains electricity. Many modern ones are powered using rechargeable batteries. Alternatively, an electro-mechanical oscillator driven by an AC-energized solenoid may be used. Some very early mechanical shavers had no electric motor and had to be powered by hand, for example by pulling a cord to drive a flywheel.

Electric shavers fall into two main categories: foil or rotary-style. Users tend to prefer one or the other. Many modern shavers are cordless; they are charged up with a plug charger or they are placed within a cleaning and charging unit.

History

The first person to receive a patent for a razor powered by electricity was John Francis O'Rourke, a New York civil engineer, with his US patent 616554 filed in 1898. The first working electric razor was invented in 1915 by the German engineer Johann Bruecker. Others followed, such as the American Col. Jacob Schick, considered the father of the modern electric razor, who patented his dry razor in 1930. The Remington Rand Corporation developed the electric razor further, first producing one in 1937. Another important inventor was Prof. Alexandre Horowitz of Philips Laboratories in the Netherlands, who designed one of the first rotary razors, with a shaving head consisting of cutters that slice off the hair entering the head at skin level. Roland Ullmann of Braun in Germany was also decisive for the development of the modern electric razor: he was the first to fuse rubber and metal elements on shavers, developed more than 100 electric razors for Braun, and over the course of his career filed well over 100 patents for dry-shaver innovations.

The major manufacturers introduce new improvements to the hair-cutting mechanisms of their products every few years. Each manufacturer sells several generations of cutting mechanism at the same time and, for each generation, several models with different features and accessories to reach various price points. Improvements to the cutting mechanisms tend to 'trickle down' to lower-priced models over time.

Early versions of electric razors were meant to be used on dry skin only. Many recent electric razors have been designed to allow for wet/dry use, which also allows them to be cleaned using running water or an included cleaning machine, reducing cleaning effort. Some patience is necessary when starting to use a razor of this type, as the skin usually takes some time to adjust to the way that the electric razor lifts and cuts the hairs. Moisturizers designed specifically for electric shaving are available.

Battery-powered electric razors

In the late 1940s, the first electric razors that were battery-powered entered the market. In 1960, Remington introduced the first rechargeable battery-powered electric razor. Battery-operated electric razors have been available using rechargeable batteries sealed inside the razor's case, previously nickel cadmium or, more recently, nickel metal hydride. Some modern shavers use Lithium-ion batteries (which do not suffer from memory effect). Sealed battery shavers either have built-in or external charging devices. Some shavers may be designed to plug directly into a wall outlet with a swing-out or pop-up plug, or have a detachable AC cord. Other shavers have recharging base units that plug into an AC outlet and provide DC power at the base contacts (eliminating the need for the AC-to-DC converter to be inside the razor, reducing the risk of electric shock). In order to prevent any risk of electric shock, shavers designed for wet use usually do not allow corded use and will not turn on until the charging adapter cord is disconnected or the shaver is removed from the charging base.

Razor vs. trimmer

An electric razor and an electric trimmer are essentially the same device in construction; the major differences lie in how they are used and in the blades they come with.

Electric razors are made specifically to provide a clean shave; a razor typically has less battery capacity but cuts hair more aggressively. Electric trimmers, on the other hand, are not meant for clean shaves: they come with special combs that aid in grooming and trimming beard stubble to the desired shape and length.

General

Some models, generally marketed as "travel razors" (or "travel shavers"), use removable rechargeable or disposable batteries, usually size AA or AAA. This offers the option of purchasing batteries while traveling instead of carrying a charging device.

Water-resistance and wet/dry electric shavers

Many modern electric shavers are water-resistant, allowing the user to clean the shaver in water. In order to ensure electrical safety, the charging/power cord for the shaver must be unplugged from it before the unit is cleaned using water.

Some shavers are labeled as "Wet/Dry", which means the unit can be used in wet environments, for wet shaving. Such models are always battery-powered, and usually the electronics will not allow turning the unit on while the charging adapter is plugged in. This is necessary to ensure electrical safety, as it would be unsafe to use a plugged-in shaver in a bathtub or shower.

Lady shaver

A lady shaver is a device designed to shave a woman's body hair. The design is usually similar to a man's foil shaver. A shaving head is often supplied as a separate attachment for an epilator (distinct from the epilating head).

Body hair shaver

Traditional men's shavers are designed for shaving facial hair. However, other shaver products are made specifically to facilitate shaving of body hair.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2339 2024-10-11 00:03:18

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,096

Re: Miscellany

2339) Fire Extinguisher

Gist

There are five main types of fire extinguisher:

* Water, water mist or water spray fire extinguishers.
* Foam fire extinguishers.
* Dry Powder – standard or specialist fire extinguishers.
* Carbon Dioxide ('CO2') fire extinguishers.
* Wet Chemical fire extinguishers.

Fire extinguishers apply an agent that will cool burning heat, smother fuel or remove oxygen so the fire cannot continue to burn. A portable fire extinguisher can quickly control a small fire if applied by an individual properly trained. Fire extinguishers are located throughout every building on campus.

Summary

A fire extinguisher is a portable or movable apparatus used to put out a small fire by directing onto it a substance that cools the burning material, deprives the flame of oxygen, or interferes with the chemical reactions occurring in the flame. Water performs two of these functions: its conversion to steam absorbs heat, and the steam displaces the air from the vicinity of the flame. Many simple fire extinguishers, therefore, are small tanks equipped with hand pumps or sources of compressed gas to propel water through a nozzle. The water may contain a wetting agent to make it more effective against fires in upholstery, an additive to produce a stable foam that acts as a barrier against oxygen, or an antifreeze. Carbon dioxide is a common propellant, brought into play by removing the locking pin of the cylinder valve containing the liquefied gas; this method has superseded the process, used in the soda-acid fire extinguisher, of generating carbon dioxide by mixing sulfuric acid with a solution of sodium bicarbonate.

Numerous agents besides water are used; the selection of the most appropriate one depends primarily on the nature of the materials that are burning. Secondary considerations include cost, stability, toxicity, ease of cleanup, and the presence of electrical hazard.

Small fires are classified according to the nature of the burning material. Class A fires involve wood, paper, and the like; Class B fires involve flammable liquids, such as cooking fats and paint thinners; Class C fires are those in electrical equipment; Class D fires involve highly reactive metals, such as sodium and magnesium. Water is suitable for putting out fires of only one of these classes (A), though these are the most common. Fires of classes A, B, and C can be controlled by carbon dioxide, halogenated hydrocarbons such as halons, or dry chemicals such as sodium bicarbonate or ammonium dihydrogen phosphate. Class D fires ordinarily are combated with dry chemicals.
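The class-to-agent pairings in the paragraph above can be summarized as a simple lookup (a sketch only; the class letters follow the US scheme described here, and the function name is hypothetical):

```python
# Agents the text above lists as suitable for each US fire class.
SUITABLE_AGENTS = {
    "A": {"water", "carbon dioxide", "halons", "dry chemical"},
    "B": {"carbon dioxide", "halons", "dry chemical"},
    "C": {"carbon dioxide", "halons", "dry chemical"},
    "D": {"dry chemical"},  # ordinarily combated with dry chemicals
}

def is_suitable(agent: str, fire_class: str) -> bool:
    """Return True if the agent is listed as suitable for the given class."""
    return agent in SUITABLE_AGENTS.get(fire_class.upper(), set())

print(is_suitable("water", "A"))  # True: water suits only class A
print(is_suitable("water", "C"))  # False: not for electrical fires
```

The asymmetry of the table reflects the text's point: water handles only class A (the most common class), while carbon dioxide, halons, and dry chemicals cover classes A through C.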

A primitive hand pump for directing water at a fire was invented by Ctesibius of Alexandria about 200 bce, and similar devices were employed during the Middle Ages. In the early 1700s, devices created independently by the English chemist Ambrose Godfrey and the French chemist C. Hoppfer used explosive charges to disperse fire-suppressing solutions. The English inventor Capt. George Manby introduced a handheld fire extinguisher, a three-gallon tank containing a pressurized solution of potassium carbonate, in 1817. Modern incarnations employing a variety of chemical solutions are essentially modifications of Manby's design.

Details

A fire extinguisher is a handheld active fire protection device usually filled with a dry or wet chemical used to extinguish or control small fires, often in emergencies. It is not intended for use on an out-of-control fire, such as one which has reached the ceiling, endangers the user (i.e., no escape route, smoke, explosion hazard, etc.), or otherwise requires the equipment, personnel, resources or expertise of a fire brigade. Typically, a fire extinguisher consists of a hand-held cylindrical pressure vessel containing an agent that can be discharged to extinguish a fire. Fire extinguishers manufactured with non-cylindrical pressure vessels also exist but are less common.

There are two main types of fire extinguishers: stored-pressure and cartridge-operated. In stored-pressure units, the expellant is stored in the same chamber as the firefighting agent itself. Depending on the agent used, different propellants are used: with dry chemical extinguishers, nitrogen is typical, while water and foam extinguishers typically use air. Stored-pressure fire extinguishers are the most common type. Cartridge-operated extinguishers contain the expellant gas in a separate cartridge that is punctured before discharge, exposing the propellant to the extinguishing agent. This type is not as common, being used primarily in areas such as industrial facilities, where extinguishers receive higher-than-average use. They have the advantage of simple and prompt recharge, allowing an operator to discharge the extinguisher, recharge it, and return to the fire in a reasonable amount of time. Unlike stored-pressure types, these extinguishers use compressed carbon dioxide instead of nitrogen, although nitrogen cartridges are used on low-temperature (–60 rated) models. Cartridge-operated extinguishers are available in dry chemical and dry powder types in the U.S. and in water, wetting agent, foam, dry chemical (classes ABC and BC), and dry powder (class D) types in the rest of the world.

Fire extinguishers are further divided into handheld and cart-mounted (also called wheeled extinguishers). Handheld extinguishers weigh from 0.5 to 14 kilograms (1.1 to 30.9 lb), and are hence, easily portable by hand. Cart-mounted units typically weigh more than 23 kilograms (51 lb). These wheeled models are most commonly found at construction sites, airport runways, heliports, as well as docks and marinas.

History

The first fire extinguisher of which there is any record was patented in England in 1723 by Ambrose Godfrey, a celebrated chemist at that time. It consisted of a cask of fire-extinguishing liquid containing a pewter chamber of gunpowder. This was connected with a system of fuses which were ignited, exploding the gunpowder and scattering the solution. This device was probably used to a limited extent, as Bradley's Weekly Messenger for November 7, 1729, refers to its efficiency in stopping a fire in London.

A portable pressurised fire extinguisher, the 'Extincteur', was invented by the British Captain George William Manby and demonstrated in 1816 to the 'Commissioners for the affairs of Barracks'; it consisted of a copper vessel holding 3 gallons (13.6 litres) of pearl ash (potassium carbonate) solution kept under compressed air. When operated, it expelled the liquid onto the fire.

One of the first fire extinguisher patents was issued to Alanson Crane of Virginia on Feb. 10, 1863.

Thomas J. Martin, an American inventor, was awarded a patent for an improvement in the Fire Extinguishers on March 26, 1872. His invention is listed in the U. S. Patent Office in Washington, DC under patent number 125,603.

The soda-acid extinguisher was first patented in 1866 by Francois Carlier of France, which mixed a solution of water and sodium bicarbonate with tartaric acid, producing the propellant carbon dioxide (CO2) gas. A soda-acid extinguisher was patented in the U.S. in 1880 by Almon M. Granger. His extinguisher used the reaction between sodium bicarbonate solution and sulfuric acid to expel pressurized water onto a fire. A vial of concentrated sulfuric acid was suspended in the cylinder. Depending on the type of extinguisher, the vial of acid could be broken in one of two ways. One used a plunger to break the acid vial, while the second released a lead stopple that held the vial closed. Once the acid was mixed with the bicarbonate solution, carbon dioxide gas was expelled and thereby pressurized the water. The pressurized water was forced from the canister through a nozzle or short length of hose.

The cartridge-operated extinguisher was invented by Read & Campbell of England in 1881, which used water or water-based solutions. They later invented a carbon tetrachloride model called the "Petrolex" which was marketed toward automotive use.

The chemical foam extinguisher was invented in 1904 by Aleksandr Loran in Russia, based on his previous invention of fire fighting foam. Loran first used it to extinguish a pan of burning naphtha. It worked and looked similar to the soda-acid type, but the inner parts were slightly different. The main tank contained a solution of sodium bicarbonate in water, whilst the inner container (somewhat larger than the equivalent in a soda-acid unit) contained a solution of aluminium sulphate. When the solutions were mixed, usually by inverting the unit, the two liquids reacted to create a frothy foam, and carbon dioxide gas. The gas expelled the foam in the form of a jet. Although liquorice-root extracts and similar compounds were used as additives (stabilizing the foam by reinforcing the bubble-walls), there was no "foam compound" in these units. The foam was a combination of the products of the chemical reactions: sodium and aluminium salt-gels inflated by the carbon dioxide. Because of this, the foam was discharged directly from the unit, with no need for an aspirating branchpipe (as in newer mechanical foam types). Special versions were made for rough service, and vehicle mounting, known as apparatus of fire department types. Key features were a screw-down stopper that kept the liquids from mixing until it was manually opened, carrying straps, a longer hose, and a shut-off nozzle. Fire department types were often private label versions of major brands, sold by apparatus manufacturers to match their vehicles. Examples are Pirsch, Ward LaFrance, Mack, Seagrave, etc. These types are some of the most collectable extinguishers as they cross into both the apparatus restoration and fire extinguisher areas of interest.

In 1910, The Pyrene Manufacturing Company of Delaware filed a patent for using carbon tetrachloride (CTC, or CCl4) to extinguish fires. The liquid vaporized and extinguished the flames by inhibiting the chemical chain reaction of the combustion process (it was an early 20th-century presupposition that the fire suppression ability of carbon tetrachloride relied on oxygen removal). In 1911, they patented a small, portable extinguisher that used the chemical. This consisted of a brass or chrome container with an integrated handpump, which was used to expel a jet of liquid towards the fire. It was usually of 1 imperial quart (1.1 L) or 1 imperial pint (0.57 L) capacity but was also available in up to 2 imperial gallons (9.1 L) size. As the container was unpressurized, it could be refilled after use through a filling plug with a fresh supply of CTC.

Another type of carbon tetrachloride extinguisher was the fire grenade. This consisted of a glass sphere filled with CTC, that was intended to be hurled at the base of a fire (early ones used salt-water, but CTC was more effective). Carbon tetrachloride was suitable for liquid and electrical fires and the extinguishers were fitted to motor vehicles. Carbon tetrachloride extinguishers were withdrawn in the 1950s because of the chemical's toxicity – exposure to high concentrations damages the nervous system and internal organs. Additionally, when used on a fire, the heat can convert CTC to phosgene gas, formerly used as a chemical weapon.

The carbon dioxide extinguisher was invented (at least in the US) by the Walter Kidde Company in 1924 in response to Bell Telephone's request for an electrically non-conductive agent for extinguishing the previously difficult-to-extinguish fires in telephone switchboards. It consisted of a tall metal cylinder containing 7.5 pounds (3.4 kg) of CO2 with a wheel valve and a woven brass, cotton-covered hose, with a composite funnel-like horn as a nozzle. CO2 is still popular today as it is an ozone-friendly clean agent and is used heavily in film and television production to extinguish burning stuntmen. Carbon dioxide extinguishes fire mainly by displacing oxygen; it was once thought to work by cooling, although that effect on most fires is negligible. An anecdotal report of carbon dioxide as a fire extinguisher was published in Scientific American in 1887, describing a basement fire at a Louisville, Kentucky pharmacy that melted a lead pipe charged with CO2 (called carbonic acid gas at the time) intended for a soda fountain; the escaping gas immediately extinguished the flames, saving the building. Also in 1887, carbonic acid gas was described as a fire extinguisher for engine chemical fires at sea and ashore.

In 1928, DuGas (later bought by ANSUL) came out with a cartridge-operated dry chemical extinguisher, which used sodium bicarbonate specially treated with chemicals to render it free-flowing and moisture-resistant. It consisted of a copper cylinder with an internal CO2 cartridge. The operator turned a wheel valve on top to puncture the cartridge and squeezed a lever on the valve at the end of the hose to discharge the chemical. This was the first agent available for large-scale three-dimensional liquid and pressurized gas fires, but it remained largely a specialty type until the 1950s, when small dry chemical units were marketed for home use. ABC dry chemical came over from Europe in the 1950s, with Super-K being invented in the early 1960s and Purple-K being developed by the United States Navy in the late 1960s. Manually applied dry agents such as graphite for class D (metal) fires had existed since World War II, but it was not until 1949 that Ansul introduced a pressurized extinguisher using an external CO2 cartridge to discharge the agent. Met-L-X (sodium chloride) was the first such extinguisher developed in the US, with graphite, copper, and several other types following later.

In the 1940s, Germany invented the liquid chlorobromomethane (CBM) for use in aircraft. It was more effective and slightly less toxic than carbon tetrachloride and was used until 1969. Methyl bromide was discovered as an extinguishing agent in the 1920s and was used extensively in Europe. It is a low-pressure gas that works by inhibiting the chain reaction of the fire and is the most toxic of the vaporizing liquids, used until the 1960s. The vapor and combustion by-products of all vaporizing liquids were highly toxic and could cause death in confined spaces.

In the 1970s, Halon 1211 came over to the United States from Europe, where it had been used since the late 1940s or early 1950s. Halon 1301 had been developed by DuPont and the United States Army in 1954. Both 1211 and 1301 work by inhibiting the chain reaction of the fire and, in the case of Halon 1211, cooling class A fuels as well. Halon is still in use today but is falling out of favor for many uses due to its environmental impact; Europe and Australia have severely restricted it since the Montreal Protocol of 1987. Less severe restrictions have been implemented in the United States, the Middle East, and Asia.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.


#2340 Yesterday 00:02:57

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,096

Re: Miscellany

2340) Ink Cartridge

Gist

An ink cartridge or inkjet cartridge is the component of an inkjet printer that contains the ink to be deposited onto paper during printing.

Is an ink cartridge refillable?

Yes, in many cases you can refill ink cartridges. However, if the cartridge has dried ink inside it, don't try refilling it as it will likely not work properly.

Details

An ink cartridge or inkjet cartridge is the component of an inkjet printer that contains the ink to be deposited onto paper during printing. It consists of one or more ink reservoirs and can include electronic contacts and a chip to exchange information with the printer.

Design:

Thermal

Most consumer inkjet printers use a thermal inkjet. Inside each partition of the ink reservoir is a heating element with a tiny metal plate or resistor. In response to a signal given by the printer, a tiny current flows through the metal or resistor making it warm, and the ink in contact with the heated resistor is vaporized into a tiny steam bubble inside the nozzle. As a result, an ink droplet is forced out of the cartridge nozzle onto the paper. This process takes a fraction of a millisecond.
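
As a rough back-of-envelope check on the physics described above, the heat pulse behind a single droplet can be estimated from the specific heat of water. The droplet volume and temperature rise below are assumed illustrative round numbers, not figures from this post:

```python
# Back-of-envelope estimate of the heat pulse behind one thermal-inkjet droplet.
# All inputs are illustrative assumptions, not manufacturer figures.

SPECIFIC_HEAT_WATER = 4.186   # J/(g*K); water-based ink approximated as water
DROPLET_VOLUME_PL = 3.0       # picolitres, a typical consumer drop size (assumed)
TEMP_RISE_K = 275.0           # ~25 degC ambient to ~300 degC at the resistor (assumed)

droplet_mass_g = DROPLET_VOLUME_PL * 1e-12 * 1000.0   # pL -> L -> g (1 L water ~ 1000 g)
energy_joules = droplet_mass_g * SPECIFIC_HEAT_WATER * TEMP_RISE_K

print(f"energy per droplet: {energy_joules * 1e6:.2f} microjoules")
```

Under these assumptions the pulse is only a few microjoules, delivered in well under a millisecond, which is why a microscopic resistor suffices to boil the ink.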

The printing depends on the smooth flow of ink, which can be hindered if the ink begins to dry at the print head, as can happen when the ink level becomes low. Dried ink can be cleaned from a cartridge print head using 91% isopropyl alcohol (not rubbing alcohol). Tap water contains contaminants that may clog the print head, so distilled water and a lint-free cloth are recommended.

The ink also acts as a coolant to protect the metal-plate heating elements; when the ink supply is depleted and printing is attempted, the heating elements in thermal cartridges often burn out, permanently damaging the print head. When the ink first begins to run low, the cartridge should be refilled or replaced to avoid overheating damage to the print head.

Piezoelectric

Piezoelectric printers use a piezoelectric crystal in each nozzle instead of a heating element. When current is applied, the crystal changes shape or size, increasing the pressure in the ink channel and thus forcing a droplet of ink from the nozzle. Two types of crystal are used: those that elongate when subjected to electricity, and bi-morphs, which bend. The ink channels in a piezoelectric inkjet print head can be formed using a variety of techniques, but one common method is lamination of a stack of metal plates, each of which includes precision micro-fabricated features of various shapes (an ink channel, orifice, reservoir, and crystal). Because the ink is never heated, piezoelectric heads can use inks that react badly to heat. In thermal inkjets, by contrast, roughly 1/1000 of the ink is vaporized by the intense heat, and the ink must be designed not to clog the printer with the products of thermal decomposition. Piezoelectric printers can in some circumstances produce a smaller ink drop than thermal inkjets.

Parts

Cartridge body

Stores the ink of the ink cartridge. May contain hydrophobic foam that prevents refilling.

Printhead

Some ink cartridges combine ink storage and printheads into one assembly with four main additional parts:

* Nozzle Plate: Expels ink onto the paper.
* Cover Plate: Protects the nozzles.
* Common Ink Chamber: A reservoir holding a small amount of ink prior to being 'jetted' onto the paper.
* Piezoelectric Substrate (in piezoelectric printers): Houses the piezoelectric crystal.
* Metallic Plate / Resistor (in thermal printers): Heats the ink with a small current.

Variants

* Color inkjets use the CMYK color model: cyan, magenta, yellow, and the key, black. Over the years, two distinct forms of black have become available: one that blends readily with other colors for graphical printing, and a near-waterproof variant for text.

* Most modern inkjets carry a black cartridge for text, and either a single CMYK combined or a discrete cartridge for each color; while keeping colors separate was initially rare, it has become common in more recent years. Some higher-end inkjets offer cartridges for extra colors.

* Some cartridges contain ink specially formulated for printing photographs.

* All printer suppliers produce their own type of ink cartridges. Cartridges for different printers are often incompatible — either physically or electrically.

* Some manufacturers incorporate the printer's head into the cartridge (examples include HP, Dell, and Lexmark), while others such as Epson keep the print head a part of the printer itself. Both sides make claims regarding their approach leading to lower costs for the consumer.

* In 2014, Epson introduced a range of printers that use refillable ink tanks, providing a major reduction in printing cost. These operate similarly to continuous ink system printers. Epson does not subsidize the cost of these printers, which it terms its "EcoTank" range.
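
The CMYK model mentioned above can be sketched with the standard naive RGB-to-CMYK conversion. This is a simplification for illustration only (real printer drivers use calibrated ICC colour profiles, and the function name here is hypothetical):

```python
def rgb_to_cmyk(r, g, b):
    """Naive RGB (0-255) to CMYK (0.0-1.0) conversion; real drivers use ICC profiles."""
    if (r, g, b) == (0, 0, 0):
        return 0.0, 0.0, 0.0, 1.0           # pure black: key (black) ink only
    r, g, b = r / 255.0, g / 255.0, b / 255.0
    k = 1.0 - max(r, g, b)                  # key (black) component
    c = (1.0 - r - k) / (1.0 - k)           # cyan absorbs red
    m = (1.0 - g - k) / (1.0 - k)           # magenta absorbs green
    y = (1.0 - b - k) / (1.0 - k)           # yellow absorbs blue
    return c, m, y, k

print(rgb_to_cmyk(255, 0, 0))   # pure red -> (0.0, 1.0, 1.0, 0.0)
```

The black (key) channel exists because mixing full cyan, magenta, and yellow yields a muddy dark brown in practice, which is why printers carry a dedicated black cartridge.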

Pricing

Ink cartridges are typically priced at $13 to $75 per US fl oz ($1,664 to $9,600/US gal; $440 to $2,536/L) of ink, meaning that a set of replacement cartridges can cost a substantial fraction of the price of the printer itself. To save money, many people use compatible ink cartridges from a vendor other than the printer manufacturer. A study by British consumer watchdog Which? found that in some cases printer ink from the manufacturer is more expensive than champagne. Others use aftermarket inks, refilling their own ink cartridges using a kit that includes bulk ink. The high cost of cartridges has also provided an incentive for counterfeiters to supply cartridges falsely claiming to be made by the original manufacturer. According to an International Data Corporation estimate, such counterfeiting cost the print cartridge industry $3 billion in 2009.
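
The unit conversions in the figures above can be checked directly, using 1 US gallon = 128 US fl oz and 1 US fl oz ≈ 29.5735 mL:

```python
# Verify the per-gallon and per-litre equivalents of the quoted ink prices.
US_FLOZ_PER_GALLON = 128
ML_PER_US_FLOZ = 29.5735

for price_per_floz in (13, 75):
    per_gallon = price_per_floz * US_FLOZ_PER_GALLON
    per_litre = price_per_floz / (ML_PER_US_FLOZ / 1000.0)
    print(f"${price_per_floz}/fl oz -> ${per_gallon:,}/gal, ${per_litre:,.0f}/L")
# -> $13/fl oz -> $1,664/gal, $440/L
# -> $75/fl oz -> $9,600/gal, $2,536/L
```

Both ends of the quoted range reproduce the bracketed figures in the text, so the post's arithmetic is internally consistent.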

Another alternative involves modifying an original cartridge to allow the use of continuous ink systems with external ink tanks. Some manufacturers, including Canon and Epson, have introduced new models featuring built-in continuous ink systems. Overall, this was seen as a welcome move by users.

Consumer exploitation lawsuits

It can sometimes be cheaper to buy a new printer than to replace the set of ink cartridges supplied with the printer. The major printer manufacturers (Hewlett Packard, Lexmark, Dell, Canon, Epson and Brother) use a "razor and blades" business model, often breaking even or losing money selling printers while expecting to make a profit by selling cartridges over the life of the printer. Since much of the printer manufacturers' profits come from ink and toner cartridge sales, some of these companies have taken various actions against aftermarket cartridges.

Some printer manufacturers set up their cartridges to interact with the printer, preventing operation when the ink level is low or when the cartridge has been refilled. One researcher with the magazine Which? overrode such an interlock and found that in one case he could print up to 38% more good-quality pages after the chip stated that the cartridge was empty. In the United Kingdom, the cost of ink was the subject of an Office of Fair Trading investigation in 2003, as Which? magazine had accused manufacturers of a lack of transparency about the price of ink and called for an industry standard for measuring ink cartridge performance. Which? stated that color HP cartridges cost over seven times more per milliliter than 1985 Dom Pérignon champagne.

In 2006, Epson lost a class action lawsuit that claimed their inkjet printers and ink cartridges stop printer operation due to "empty" cartridge notifications even when usable ink still remains. Epson settled the case by giving $45 e-coupons in their online stores for people who bought Epson inkjet printers and ink cartridges from April 8, 1999, to May 8, 2006.

In 2010, HP lost three class action lawsuits over (1) claims of HP inkjet printers giving false low-ink notifications, (2) claims of cyan ink being consumed when printing with black ink, and (3) claims of ink cartridges being disabled by printers upon being detected as "empty" even when they were not yet empty. HP paid $5 million in settlement.

In 2017, Halte à l'Obsolescence Programmée (HOP, "End Planned Obsolescence") filed a lawsuit and won against Brother, Canon, Epson, HP and other companies for intentionally shortening product life spans, inkjet printers and ink cartridges included. The companies were fined €15,000.

In September 2018, HP lost a class action lawsuit in which plaintiffs claimed that HP printer firmware updates caused fake error messages when third-party ink cartridges were used. HP settled the case for $1.5 million.

In October 2019, Epson had a class action complaint filed against it for printer firmware updates that allegedly prevented printer operation upon detection of third-party ink cartridges.


#2341 Today 00:03:59

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,096

Re: Miscellany

2341) Fruit

Gist

Fruits and vegetables contain important vitamins, minerals and plant chemicals. They also contain fibre. There are many varieties of fruit and vegetables available and many ways to prepare, cook and serve them. A diet high in fruit and vegetables can help protect you against cancer, diabetes and heart disease.

In a botanical sense, a fruit is the fleshy or dry ripened ovary of a flowering plant, enclosing the seed or seeds. Apricots, bananas, and grapes, as well as bean pods, corn grains, tomatoes, cucumbers, and (in their shells) acorns and almonds, are all technically fruits.

Fruits are an excellent source of essential vitamins and minerals, and they are high in fiber. Fruits also provide a wide range of health-boosting antioxidants, including flavonoids. Eating a diet high in fruits and vegetables can reduce a person's risk of developing heart disease, cancer, inflammation, and diabetes.

Summary

In botany, a fruit is the seed-bearing structure in flowering plants that is formed from the ovary after flowering.

Fruits are the means by which flowering plants (also known as angiosperms) disseminate their seeds. Edible fruits in particular have long propagated using the movements of humans and other animals in a symbiotic relationship that is the means for seed dispersal for the one group and nutrition for the other; humans and many other animals have become dependent on fruits as a source of food. Consequently, fruits account for a substantial fraction of the world's agricultural output, and some (such as the apple and the pomegranate) have acquired extensive cultural and symbolic meanings.

In common language usage, fruit normally means the seed-associated fleshy structures (or produce) of plants that typically are sweet or sour and edible in the raw state, such as apples, bananas, grapes, lemons, oranges, and strawberries. In botanical usage, the term fruit also includes many structures that are not commonly called 'fruits' in everyday language, such as nuts, bean pods, corn kernels, tomatoes, and wheat grains.

Details

A fruit is the fleshy or dry ripened ovary of a flowering plant, enclosing the seed or seeds. Thus, apricots, bananas, and grapes, as well as bean pods, corn grains, tomatoes, cucumbers, and (in their shells) acorns and almonds, are all technically fruits. Popularly, however, the term is restricted to the ripened ovaries that are sweet and either succulent or pulpy. For treatment of the cultivation of fruits, see fruit farming. For treatment of the nutrient composition and processing of fruits, see fruit processing.

Botanically, a fruit is a mature ovary and its associated parts. It usually contains seeds, which have developed from the enclosed ovule after fertilization, although development without fertilization, called parthenocarpy, is known, for example, in bananas. Fertilization induces various changes in a flower: the anthers and stigma wither, the petals drop off, and the sepals may be shed or undergo modifications; the ovary enlarges, and the ovules develop into seeds, each containing an embryo plant. The principal purpose of the fruit is the protection and dissemination of the seed.

Fruits are important sources of dietary fibre, vitamins (especially vitamin C), and antioxidants. Although fresh fruits are subject to spoilage, their shelf life can be extended by refrigeration or by the removal of oxygen from their storage or packaging containers. Fruits can be processed into juices, jams, and jellies and preserved by dehydration, canning, fermentation, and pickling. Waxes, such as those from bayberries (wax myrtles), and vegetable ivory from the hard fruits of a South American palm species (Phytelephas macrocarpa) are important fruit-derived products. Various drugs come from fruits, such as morphine from the fruit of the opium poppy.

Types of fruits

The concept of “fruit” is based on such an odd mixture of practical and theoretical considerations that it accommodates cases in which one flower gives rise to several fruits (larkspur) as well as cases in which several flowers cooperate in producing one fruit (mulberry). Pea and bean plants, exemplifying the simplest situation, show in each flower a single pistil (female structure), traditionally thought of as a megasporophyll or carpel. The carpel is believed to be the evolutionary product of an originally leaflike organ bearing ovules along its margin. This organ was somehow folded along the median line, with a meeting and coalescing of the margins of each half, the result being a miniature closed but hollow pod with one row of ovules along the suture. In many members of the rose and buttercup families, each flower contains a number of similar single-carpelled pistils, separate and distinct, which together represent what is known as an apocarpous gynoecium. In other cases, two to several carpels (still thought of as megasporophylls, although perhaps not always justifiably) are assumed to have fused to produce a single compound gynoecium (pistil), whose basal part, or ovary, may be uniloculate (with one cavity) or pluriloculate (with several compartments), depending on the method of carpel fusion.

Most fruits develop from a single pistil. A fruit resulting from the apocarpous gynoecium (several pistils) of a single flower may be referred to as an aggregate fruit. A multiple fruit represents the gynoecia of several flowers. When additional flower parts, such as the stem axis or floral tube, are retained or participate in fruit formation, as in the apple or strawberry, an accessory fruit results.

Certain plants, mostly cultivated varieties, spontaneously produce fruits in the absence of pollination and fertilization; such natural parthenocarpy leads to seedless fruits such as bananas, oranges, grapes, and cucumbers. Since 1934, seedless fruits of tomato, cucumber, peppers, holly, and others have been obtained for commercial use by administering plant growth substances, such as indoleacetic acid, indolebutyric acid, naphthalene acetic acid, and β-naphthoxyacetic acid, to the ovaries in flowers (induced parthenocarpy).

Classification systems for mature fruits take into account the number of carpels constituting the original ovary, dehiscence (opening) versus indehiscence, and dryness versus fleshiness. The properties of the ripened ovary wall, or pericarp, which may develop entirely or in part into fleshy, fibrous, or stony tissue, are important. Often three distinct pericarp layers can be identified: the outer (exocarp), the middle (mesocarp), and the inner (endocarp). All purely morphological systems (i.e., classification schemes based on structural features) are artificial. They ignore the fact that fruits can be understood only functionally and dynamically.

There are two broad categories of fruits: fleshy fruits, in which the pericarp and accessory parts develop into succulent tissues, as in eggplants, oranges, and strawberries; and dry fruits, in which the entire pericarp becomes dry at maturity. Fleshy fruits include (1) the berries, such as tomatoes, blueberries, and cherries, in which the entire pericarp and the accessory parts are succulent tissue, (2) aggregate fruits, such as blackberries and strawberries, which form from a single flower with many pistils, each of which develops into fruitlets, and (3) multiple fruits, such as pineapples and mulberries, which develop from the mature ovaries of an entire inflorescence. Dry fruits include the legumes, cereal grains, capsulate fruits, and nuts.

Brazil nut

As strikingly exemplified by the word nut, popular terms often do not properly describe the botanical nature of certain fruits. A Brazil nut, for example, is a thick-walled seed enclosed in a likewise thick-walled capsule along with several sister seeds. A coconut is a drupe (a stony-seeded fruit) with a fibrous outer part. A walnut is a drupe in which the pericarp has differentiated into a fleshy outer husk and an inner hard “shell”; the “meat” represents the seed—two large convoluted cotyledons, a minute epicotyl and hypocotyl, and a thin papery seed coat. A peanut is an indehiscent legume fruit. An almond is a drupe “stone”; i.e., the hardened endocarp usually contains a single seed. Botanically speaking, blackberries and raspberries are not true berries but aggregates of tiny drupes. A juniper “berry” is not a fruit at all but the cone of a gymnosperm. A mulberry is a multiple fruit made up of small nutlets surrounded by fleshy sepals. And a strawberry represents a much-swollen receptacle (the tip of the flower stalk bearing the flower parts) bearing on its convex surface an aggregation of tiny brown achenes (small single-seeded fruits).

Dispersal

Fruits play an important role in the seed dispersal of many plant species. In dehiscent fruits, such as poppy capsules, the seeds are usually dispersed directly from the fruits, which may remain on the plant. In fleshy or indehiscent fruits, the seeds and fruit are commonly moved away from the parent plant together. In many plants, such as grasses and lettuce, the outer integument and ovary wall are completely fused, so seed and fruit form one entity; such seeds and fruits can logically be described together as “dispersal units,” or diaspores. For further discussion on seed dispersal, see seed: agents of dispersal.

Animal dispersal

A wide variety of animals aid in the dispersal of seeds, fruits, and diaspores. Many birds and mammals, ranging in size from mice and kangaroo rats to elephants, act as dispersers when they eat fruits and diaspores. In the tropics, chiropterochory (dispersal by large bats such as flying foxes, Pteropus) is particularly important. Fruits adapted to these animals are relatively large and drab in colour with large seeds and a striking (often rank) odour. Such fruits are accessible to bats because of the pagoda-like structure of the tree canopy, fruit placement on the main trunk, or suspension from long stalks that hang free of the foliage. Examples include mangoes, guavas, breadfruit, carob, and several fig species. In South Africa a desert melon (Cucumis humifructus) participates in a symbiotic relationship with aardvarks—the animals eat the fruit for its water content and bury their own dung, which contains the seeds, near their burrows.

Additionally, furry terrestrial mammals are the agents most frequently involved in epizoochory, the inadvertent carrying by animals of dispersal units. Burlike fruits, or those diaspores provided with spines, hooks, claws, bristles, barbs, grapples, and prickles, are genuine hitchhikers, clinging tenaciously to their carriers. Their functional shape is achieved in various ways: in cleavers, or goose grass (Galium aparine), and in enchanter’s nightshade (Circaea lutetiana), the hooks are part of the fruit itself; in common agrimony (Agrimonia eupatoria), the fruit is covered by a persistent calyx (the sepals, parts of the flower, which remain attached beyond the usual period) equipped with hooks; and in wood avens (Geum urbanum), the persistent styles have hooked tips. Other examples are bur marigolds, or beggar’s-ticks (Bidens species); buffalo bur (Solanum rostratum); burdock (Arctium); Acaena; and many Medicago species. The last-named, with dispersal units highly resistant to damage from hot water and certain chemicals (dyes), have achieved wide global distribution through the wool trade. A somewhat different principle is employed by the so-called trample burrs, said to lodge themselves between the hooves of large grazing mammals. Examples are mule grab (Proboscidea) and the African grapple plant (Harpagophytum). In water burrs, such as those of the water chestnut Trapa, the spines should probably be considered as anchoring devices.

Birds, being preening animals, rarely carry burlike diaspores on their bodies. They do, however, transport the very sticky (viscid) fruits of Pisonia, a tropical tree of the four-o’clock family, to distant Pacific islands in this way. Small diaspores, such as those of sedges and certain grasses, may also be carried in the mud sticking to waterfowl and terrestrial birds.

Synzoochory, deliberate carrying of diaspores by animals, is practiced when birds carry diaspores in their beaks. The European mistle thrush (Turdus viscivorus) deposits the viscid seeds of mistletoe (Viscum album) on potential host plants when, after a meal of the berries, it whets its bill on branches or simply regurgitates the seeds. The North American (Phoradendron) and Australian (Amyema) mistletoes are dispersed by various birds, and the comparable tropical species of the plant family Loranthaceae by flower-peckers (of the bird family Dicaeidae), which have a highly specialized gizzard that allows seeds to pass through but retains insects. Plants may also profit from the forgetfulness and sloppy habits of certain nut-eating birds that cache part of their food but neglect to recover everything or that drop units on their way to a hiding place. Best known in this respect are the nutcrackers (Nucifraga), which feed largely on the “nuts” of beech, oak, walnut, chestnut, and hazelnut; the jays (Garrulus), which hide hazelnuts and acorns; the nuthatches; and the California woodpecker (Melanerpes formicivorus), which may embed literally thousands of acorns, almonds, and pecan nuts in bark fissures or holes of trees. Rodents may aid in dispersal by stealing the embedded diaspores and burying them. In Germany, an average jay may transport about 4,600 acorns per season, over distances of up to 4 km (2.5 miles).

Most ornithochores (plants with bird-dispersed seeds) have conspicuous diaspores attractive to such fruit-eating birds as thrushes, pigeons, barbets (members of the bird family Capitonidae), toucans (family Ramphastidae), and hornbills (family Bucerotidae), all of which either excrete or regurgitate the hard part undamaged. Such diaspores have a fleshy, sweet, or oil-containing edible part; a striking colour (often red or orange); no pronounced smell; protection against being eaten prematurely, in the form of acids and tannins that are present only in the green fruit; protection of the seed against digestion, afforded by bitterness, hardness, or the presence of poisonous compounds; permanent attachment; and, finally, absence of a hard outer cover. In contrast to bat-dispersed diaspores, they occupy no special position on the plant. Examples are rose hips, plums, dogwood fruits, barberry, red currant, mulberry, nutmeg fruits, figs, blackberries, and others. The natural and abundant occurrence of Euonymus, which is a largely tropical genus, in temperate Europe and Asia, can be understood only in connection with the activities of birds. Birds also contributed substantially to the repopulation with plants of the Krakatoa island group in Indonesia after the catastrophic volcanic eruption there in 1883. Birds have made Lantana (originally American) a pest in Indonesia and Australia; the same is true of black cherries (Prunus serotina) in parts of Europe, Rubus species in Brazil and New Zealand, and olives (Olea europaea) in Australia.

Many intact fruits and seeds can serve as fish bait—those of Sonneratia, for example, for the catfish Arius maculatus. Certain Amazon River fishes react positively to the audible “explosions” of the ripe fruits of Eperua rubiginosa. The largest freshwater wetlands in the world, found in Brazil’s Pantanal, become inundated with seasonal floods at a time when many plants are releasing their fruits. Pacu fish (Metynnis) feed on submerged and floating fruits and disperse the seeds when they defecate. It is thought that at least one plant species (Bactris glaucescens) relies exclusively on pacu for seed dispersal.

Fossil evidence indicates that saurochory, dispersal by reptiles, is very ancient. The giant Galapagos tortoise is important for the dispersal of local cacti and tomatoes, and iguanas are known to eat and disperse a number of smaller fruits, including the iguana hackberry (Celtis iguanaea). The name alligator apple, for Annona glabra, refers to its method of dispersal, an example of saurochory.

Wind dispersal

Winged fruits are most common in trees and shrubs, such as maple, ash, elm, birch, alder, and dipterocarps (a family of about 600 species of Old World tropical trees). The one-winged propeller type, as found in maple, is called a samara. When fruits have several wings on their sides, rotation may result, as in rhubarb and dock species. Sometimes accessory parts form the wings—for example, the bracts (small green leaflike structures that grow just below flowers) in linden (Tilia).

Many fruits form plumes, some derived from persisting and ultimately hairy styles, as in clematis, avens, and anemones; some from the perianth, as in the sedge family (Cyperaceae); and some from the pappus, a calyx structure, as in dandelion and Jack-go-to-bed-at-noon (Tragopogon). In woolly fruits and seeds, the pericarp or the seed coat is covered with cottonlike hairs—e.g., willow, poplar or cottonwood, cotton, and balsa. In some cases, the hairs may serve double duty in that they function in water dispersal as well as in wind dispersal.

Poppies have a mechanism in which the wind has to swing the slender fruitstalk back and forth before the seeds are thrown out through pores near the top of the capsule. The inflated indehiscent pods of Colutea arborea, a steppe plant, represent balloons capable of limited air travel before they hit the ground and become windblown tumbleweeds.

Other forms of dispersal

Geocarpy is defined as either the production of fruits underground, as in the arum lilies (Stylochiton and Biarum), in which the flowers are already subterranean, or the active burying of fruits by the mother plant, as in the peanut (Arachis hypogaea). In the American hog peanut (Amphicarpa bracteata), pods of a special type are buried by the plant and are cached by squirrels later on. Kenilworth ivy (Cymbalaria), which normally grows on stone or brick walls, stashes its fruits away in crevices after strikingly extending the flower stalks. Not surprisingly, geocarpy is most often encountered in desert plants; however, it also occurs in violet species, in subterranean clover (Trifolium subterraneum), and in begonias (Begonia hypogaea) of the African rainforest.

Barochory, the dispersal of seeds and fruits by gravity alone, is demonstrated by the heavy fruits of horse chestnut.

