Math Is Fun Forum

  Discussion about math, puzzles, games and fun.


#1951 2023-11-04 00:06:36

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,333

Re: Miscellany

1955) Ash

Gist

Ash is the grey or black powdery substance that is left after something is burnt. You can also refer to this substance as ashes.

It is the powdery residue of matter that remains after burning.

Summary

Ash is the solid, somewhat powdery substance that is left over after any fuel undergoes combustion. Broadly speaking, coal ash and wood ash are the two most talked about types of ash, although ash is created during any process of incomplete combustion. Due to the variety of potential fuels, the chemical composition and even appearance of ash can vary drastically.

Incomplete combustion means that there is not enough oxygen present when the material is burned to completely consume the fuel. Instead of only carbon dioxide and water vapour being created, incomplete combustion can result in the production of soot, smoke, and ash. Depending on what is burned, ash can consist of different chemical components, but its main component is unburned carbon, along with varying amounts of other elements such as calcium, magnesium, potassium, and phosphorus - none of which burn away when the fuel is used.

Details

Ash or ashes are the solid remnants of fires. Specifically, ash refers to all non-aqueous, non-gaseous residues that remain after something burns. In analytical chemistry, ash is the non-gaseous, non-liquid residue left after complete combustion, used to analyse the mineral and metal content of chemical samples.

Ashes as the end product of incomplete combustion are mostly mineral, but usually still contain an amount of combustible organic or other oxidizable residues. The best-known type of ash is wood ash, as a product of wood combustion in campfires, fireplaces, etc. The darker the wood ashes, the higher the content of remaining charcoal from incomplete combustion. Ashes vary in type: some contain natural compounds that make soil fertile, while others contain chemical compounds that can be toxic but may break down in soil through chemical changes and microorganism activity.

Like soap, ash is also a disinfecting agent (alkaline). The World Health Organization recommends ash or sand as an alternative for handwashing when soap is not available.

Natural occurrence

Ash occurs naturally from any fire that burns vegetation, and may disperse in the soil to fertilise it, or clump under it for long enough to carbonise into coal.

Specific types

* Wood ash
* Products of coal combustion

** Bottom ash
** Fly ash

* Cigarette or cigar ash
* Incinerator bottom ash, a form of ash produced in incinerators
* Volcanic ash, consisting of fragmented glass, rock, and minerals ejected during an eruption.

Cremation ashes

Cremation ashes, also called cremated remains or "cremains," are the bodily remains left from cremation. They often take the form of a grey powder resembling coarse sand. While often referred to as ashes, the remains primarily consist of powdered bone fragments due to the cremation process, which eliminates the body's organic materials. People often store these ashes in containers like urns, although they are also sometimes buried or scattered in specific locations.

Additional Information

Ash is the grayish-white to black powdery residue left when something is burned. It is also the general term used to describe the inorganic matter in a fuel. Ash of all fossil fuels, with the possible exception of natural gas, contains constituents that promote corrosion on the fire side of boiler components. Ash can cause corrosion in boilers or other heat exchangers.

Ash is the solid residue of combustion. Ash that does not rise is called bottom ash and ash that rises is known as fly ash.

In an industrial context, fly ash usually refers to ash produced during combustion of coal.

Coal ash may be used as an additive for the production of cement or concrete. However, many biomass ashes contain much higher concentrations of alkali than the standards which regulate the use of coal ashes allow.

The chemical composition of an ash depends on that of the substance burned. Wood ash contains metal carbonates and oxides formed from metals originally compounded in the wood. Coal ash usually has a high mineral content and is sometimes contaminated with rock; during combustion the mineral matter may become partially fused, forming cinders or clinker. Bone ash is largely made up of calcium phosphate. Seaweed ash contains sodium carbonate, potassium carbonate, and iodine that can be extracted.

In the case of furnace-wall corrosion, melting points between 635° and 770°F (335° and 410°C) have been reported for ash constituents on furnace walls experiencing severe coal-ash corrosion. Reducing conditions exacerbate fuel-ash corrosion, and the presence of carbon monoxide, unburned carbon, and hydrogen sulfide promotes the formation of metallic sulfides.

Tube spacers, used in some boilers, rapidly deteriorate when oil of greater than 0.02% ash content is fired. Because these spacers are not cooled and are near flue gas temperatures, they are in a liquid ash environment. Fuel oil additives can greatly prolong the life of these components. Most dry fuel additive preparations are used to treat fuels with high ash content, such as coal, bark or black liquor. Metal oxides are used for this purpose.

Did you know that every year fire departments respond to thousands of fires caused by the improper disposal of fireplace ashes? This may be surprising, especially for those who are unfamiliar with proper ash disposal.

Ashes themselves, even if not giving off any smoke, can stay hot for days after the fire goes out. Even if you can’t feel any heat radiating off of them, it’s still possible there are hot coals deep underneath the ash.

The reason is that ash acts as an insulator for these coals, helping them stay hot without burning out for a long period of time. These latent coals can be hot enough to ignite paper, wood, and vegetation, and even to melt through plastics.

Often this doesn't even require direct contact: just carrying an open container through a house can cause pieces of cinder to fall or spark out, igniting combustibles inside your home.

To help avoid damaging your fireplace, make sure not to burn anything that isn't approved for fireplace use. Avoid garbage and cardboard in particular, as they can ignite a fire within the chimney and cause irreparable damage to it.

When disposing of ashes, always avoid windy days, as wind can pick up even cooled embers and reignite them, turning them into a potentially devastating fire.

Keeping all of this in mind and following a proper ash-disposal procedure is simple, yet essential. Fireplaces can be a source of comfort and beauty, but failing to follow a set procedure can be devastating to your house and potentially life-threatening.

Fly ash

Fly ash, flue ash, coal ash, or pulverised fuel ash (in the UK) – plurale tantum: coal combustion residuals (CCRs) – is a coal combustion product that is composed of the particulates (fine particles of burned fuel) that are driven out of coal-fired boilers together with the flue gases. Ash that falls to the bottom of the boiler's combustion chamber (commonly called a firebox) is called bottom ash. In modern coal-fired power plants, fly ash is generally captured by electrostatic precipitators or other particle filtration equipment before the flue gases reach the chimneys. Together with bottom ash removed from the bottom of the boiler, it is known as coal ash.

Depending upon the source and composition of the coal being burned, the components of fly ash vary considerably, but all fly ash includes substantial amounts of silicon dioxide (SiO2) (both amorphous and crystalline), aluminium oxide (Al2O3) and calcium oxide (CaO), the main mineral compounds in coal-bearing rock strata.

The use of fly ash as a lightweight aggregate (LWA) offers a valuable opportunity to recycle one of the largest waste streams in the US. In addition, fly ash can offer many benefits, both economic and environmental, when utilized as an LWA.

The minor constituents of fly ash depend upon the specific coal bed composition but may include one or more of the following elements or compounds found in trace concentrations (up to hundreds of ppm): gallium, beryllium, boron, cadmium, chromium, hexavalent chromium, cobalt, lead, manganese, mercury, molybdenum, selenium, strontium, thallium, and vanadium, along with very small concentrations of dioxins, PAH compounds, and other trace carbon compounds.

In the past, fly ash was generally released into the atmosphere, but air pollution control standards now require that it be captured prior to release by fitting pollution control equipment. In the United States, fly ash is generally stored at coal power plants or placed in landfills. About 43% is recycled, often used as a pozzolan to produce hydraulic cement or hydraulic plaster and a replacement or partial replacement for Portland cement in concrete production. Pozzolans ensure the setting of concrete and plaster and provide concrete with more protection from wet conditions and chemical attack.

When fly (or bottom) ash is not produced from coal, for example when solid waste is incinerated in a waste-to-energy facility to produce electricity, the ash may contain higher levels of contaminants than coal ash and is then often classified as hazardous waste.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1952 2023-11-05 00:06:48

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,333

Re: Miscellany

1956) Box office

Gist

Box office is the place in a cinema or theatre where tickets are sold.

The box office in a theatre, cinema, or concert hall is the place where the tickets are sold.

When people talk about the box office, they are referring to the degree of success of a film or play in terms of the number of people who go to watch it or the amount of money it makes.

Details

A box office or ticket office is a place where tickets are sold to the public for admission to an event. Patrons may perform the transaction at a countertop, through a hole in a wall or window, or at a wicket. By extension, the term is frequently used, especially in the context of the film industry, as a metonym for the amount of business a particular production, such as a film or theatre show, receives. The term is also used to refer to a ticket office at an arena or a stadium.

Box office business can be measured in terms of the number of tickets sold or the amount of money raised by ticket sales (revenue). The projection and analysis of these earnings are very important for the creative industries and often a source of interest for fans, especially in the Hollywood movie industry.

To determine if a movie made a profit, it is not correct to directly compare the box office gross with the production budget, because the movie theater keeps nearly half of the gross on average. The split varies from movie to movie, and the percentage for the distributor is generally higher in early weeks. Usually the distributor gets a percentage of the revenue after first deducting a "house allowance" or "house nut". It is also common that the distributor gets either a percentage of the gross revenue, or a higher percentage of the revenue after deducting the nut, whichever is larger. The distributor's share of the box office gross is often referred to as the "distributor rentals", especially for box office reporting of older films.
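As a rough illustration of the split described above, the following Python sketch computes a distributor's share as the larger of a straight percentage of the gross and a higher percentage of the gross after deducting the house nut. The percentages and the house-allowance figure are hypothetical examples, not industry standards.

def distributor_rentals(gross, pct_of_gross=0.45, pct_after_nut=0.90, house_nut=50_000):
    """Distributor's share: the larger of a cut of the gross, or a bigger cut after the nut."""
    share_of_gross = pct_of_gross * gross
    share_after_nut = pct_after_nut * max(gross - house_nut, 0)
    return max(share_of_gross, share_after_nut)

# Example: a $200,000 gross against a $50,000 house allowance.
print(distributor_rentals(200_000))  # 135000.0 -- the after-nut split is larger here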

Etymology

The name box office was used at the Globe Theatre, part-owned by William Shakespeare, and also in wider Elizabethan theatre. Admission was collected in a Tudor money box by a ticket seller at the entrance to the theatre. While the name box office was used in Elizabethan theatre, there is disagreement about whether the term originates from this time.

The term box office was being widely used from at least 1786, deriving from the office from which theatre boxes were being sold. The term box office was being used to describe total sales from at least 1904.

Related terminology

The following is film-industry-specific terminology used by box office reporters such as Variety and Box Office Mojo. For films released in North America, box office figures are usually divided between domestic, meaning the United States and Canada, and foreign, which includes all other countries. Weekly box office figures are now normally taken to run from Friday through Thursday, to allow for the fact that most films are officially released in the United States on a Friday. Because Variety was published every Wednesday for many years, most weekly box office figures it reported from the 1920s to the 1990s were for the week from Thursday to Wednesday. A large component of the weekly gross is the weekend box office. Historically, this was reported as the box office receipts from around Friday through Sunday plus any public holidays close to the weekend, such as a 4-day Memorial Day weekend. However, with the increased regularity of box office reporting, a comparable 3-day Friday-to-Sunday figure is now also used. In particular, the weekend box office for the initial week of release, or opening weekend, is often widely reported.

Theaters is the number of theaters in which the movie is showing. Since a single theater may show a movie on multiple screens, the total number of screens or engagements is used as another measure. The theaters measure is used to classify whether a film is in wide release, meaning at least 600 theaters, or limited release which is less than 600 theaters. Occasionally, a film may achieve wide release after an initial limited release; Little Miss Sunshine is an example of this.

Gross refers to gross earnings. On average, the movie's distributor receives a little more than half of the final gross (often referred to as the rentals) with the remainder going to the exhibitor (i.e., movie theater).

Multiple is the ratio of a film's total gross to that of the opening weekend. A film that earns $20 million on its opening weekend and finishes with $80 million has a multiple of 4. From 2004 to 2014, films viewers graded as A+ on CinemaScore had a 4.8 multiple, while films graded as F had a 2.2 multiple.

Admissions refers to the number of tickets sold at the box office. In countries such as France, box office reporting was historically reported in terms of admissions, with rules regulated by the government and fines issued if exhibitors failed to report the data. Other countries which historically reported box office figures in terms of admissions include European countries such as Germany, Italy, and Spain, the Soviet Union, and South Korea. Box Office Mojo estimates the North American ticket sales by dividing the domestic box office gross by the average ticket price (ATP) of a given year, a method that Box Office India uses to estimate Indian footfalls (ticket sales).
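The following small Python sketch illustrates the estimation method just described: dividing a domestic gross by the year's average ticket price to approximate admissions. The gross and ticket-price figures are only illustrative.

def estimated_admissions(domestic_gross, average_ticket_price):
    """Approximate tickets sold: domestic gross divided by the average ticket price."""
    return round(domestic_gross / average_ticket_price)

# Example: a $300 million domestic gross with an assumed $9.16 average ticket price.
print(estimated_admissions(300_000_000, 9.16))  # roughly 32.8 million admissions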

Box office lists

For lists of films which are major box-office hits, see List of highest-grossing films, List of films by box office admissions and Lists of highest-grossing films. Films that are considered to have been very unsuccessful at the box office are called box office bombs or box office flops.

Box office reporting

There are numerous websites that monitor box-office receipts, such as BoxOffice, Box Office Mojo, The Numbers, Box Office India, Koimoi, and ShowBIZ Data. These sites provide box office information for hundreds of movies. Data for older movies is often incomplete due to the way box office reporting evolved, especially in the U.S., and the availability of information prior to the introduction of the internet.

History:

Rise of Hollywood
Variety started reporting box office results by theatre on March 3, 1922, to give exhibitors around the country information on a film's performance on Broadway, which was often where first-run showings of a film were held. In addition to New York City, the magazine aimed to eventually cover all of the key cities in the U.S., and initially also reported results for 10 other cities, including Chicago and Los Angeles.

In 1929, the first issue of The Motion Picture Almanac was released and included a list of the top 104 grossing films for the past year. In 1932, Variety published the studios' top-grossing films of the year and has maintained this tradition annually since. In 1937, BoxOffice magazine began publishing box office reports. Beginning in the 1930s, BoxOffice magazine published a Barometer issue in January, which reported the performance of movies for the year expressed as percentages.

Golden era of film

In 1946, Variety started to publish a weekly National Box Office survey on page 3 indicating the performance of the week's hits and flops based on the box office results of 25 key U.S. cities.

Later in 1946, Variety published a list of All-Time Top Grossers, comprising films that had achieved or gave promise of earning $4,000,000 or more in domestic (U.S. and Canada) theatrical rentals. This became a leading source of data for a film's performance. Variety would publish an updated all-time list annually for over 50 years, normally in its anniversary edition each January. The anniversary edition would also normally contain the list of the top-performing films of the year.

Dawn of modern film industry

In the late 1960s, Variety used an IBM 360 computer to collate the grosses from their weekly reports of 22 to 24 U.S. cities from January 1, 1968. The data came from up to 800 theatres, which represented around 5% of the U.S. cinema population at the time but around one-third of the total U.S. box office grosses. In 1969, Variety started to publish a list of the top 50 grossing films each week. "The Love Bug" was number one on the first chart, published for the week ending April 16, 1969. The chart was discontinued in 1990.

In 1974, Nat Fellman founded Exhibitor Relations Co., the first company set up to track box office grosses, which it collected from the studios. Two years later, Marcy Polier, an employee of the Mann theater chain, set up Centralized Grosses to collate U.S. daily box office data on a centralized basis from theaters rather than each theater chain collating their own numbers from other theater chains. The company later became National Gross Service then Entertainment Data, Inc. (EDI).

Except for disclosures by the studios on very successful films, total domestic (U.S. and Canada) box office gross information for films was not readily available until National Gross Service started to collate this data around 1981. The collation of grosses led to wider reporting of domestic box office grosses for films. Arthur D. Murphy, a former U.S. Navy lieutenant then at Variety, was one of the first to organize and chart that information and report it in a meaningful form. During the 1980s, Daily Variety started to publish a weekly chart of the domestic box office grosses of films collated from the studios, as compared to the Top 50 chart in Variety, which was based on a sample of theatre grosses from key markets.

Gradually the focus of a film's performance became its box office gross rather than the rentals that Variety continued to report annually. Because grosses were not tracked earlier, domestic or worldwide box office grosses are not available for many older films, so the only domestic or worldwide data available is still often the rental figures.

Murphy started to publish Art Murphy's Box Office Register annually from 1984 detailing U.S. box office grosses.

In 1984, EDI started to report Canadian grosses as well, and by 1985 it was reporting data for 15,000 screens. In 1987, EDI set up a database of box office information which included data on certain films back to 1970. By 1991, all U.S. studios had agreed to share their complete data reports with EDI. By then box office results were widely publicized, with Entertainment Tonight running segments on the weekend's top films, increasing public discussion of poorly performing films. In 1990, EDI opened an office in the UK, and it moved into Germany in 1993 and Spain in 1995, reporting box office data for those markets. EDI was acquired by ACNielsen Corporation in 1997 for $26 million and became Nielsen EDI.

By the 1990s, Daily Variety had started to report the studios' Sunday weekend estimates on Monday mornings, which led to other media reporting the data earlier. When Entertainment Weekly was launched in 1990, it started to publish the top 10 weekend box office lists from Exhibitor Relations, and that company was also supplying box office data to outlets such as the Los Angeles Times, CNN and the Associated Press.

In 1994, Variety published their first annual global box office chart showing the top 100 grossing films internationally for the prior year.

On August 7, 1998, Box Office Mojo was launched by Brandon Gray, and in 1999 he started posting the Friday grosses sourced from Exhibitor Relations so that they were publicly available for free online on Saturdays, and posted the Sunday estimates on Sundays. In July 2008, Box Office Mojo was purchased by Amazon.com through its subsidiary, IMDb.

Modern film industry

Rentrak started tracking box office data from point of sale in 2001 and started to rival EDI in providing the studios with data. In December 2009, Rentrak acquired Nielsen EDI for $15 million, and became the sole provider of worldwide box office ticket sales revenue and attendance information which is used by many of the websites noted above.

On October 23, 2019, Box Office Mojo unveiled a dramatic redesign resembling IMDb and was rebranded as "Box Office Mojo by IMDbPro", with some of the content moved to the subscription-based IMDbPro.

US box office reporting largely paused for the first time in 26 years in March 2020, as nearly all theaters nationwide were closed because of the coronavirus pandemic. Only drive-in theaters, which are typically not included in box office reporting, remained open.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1953 2023-11-06 00:07:03

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,333

Re: Miscellany

1957) Meteor shower

Gist

A Meteor shower is an occasion when a number of meteors (= pieces of matter in space that produce a bright light as they travel) move fast across the sky at night.

Summary

If it's time for a meteor shower, you won't need a telescope, binoculars, or a high mountain to have a "star gazing" party. You might need a warm sleeping bag and an alarm clock to wake you in the middle of the night. But then just lying down in your own back yard will put you in the perfect spot to enjoy a great show.

Meteors

A meteor is a space rock—or meteoroid—that enters Earth's atmosphere. As the space rock falls toward Earth, the resistance—or drag—of the air on the rock makes it extremely hot. What we see is a "shooting star." That bright streak is not actually the rock, but rather the glowing hot air as the hot rock zips through the atmosphere.

When Earth encounters many meteoroids at once, we call it a meteor shower.

Why would Earth encounter many meteoroids at once? Well, comets, like Earth and the other planets, also orbit the sun. Unlike the nearly circular orbits of the planets, the orbits of comets are usually quite lop-sided.

As a comet gets closer to the sun, some of its icy surface boils off, releasing lots of particles of dust and rock. This comet debris gets strewn out along the comet's path, especially in the inner solar system (where we live) as the sun's heat boils off more and more ice and debris. Then, several times each year as Earth makes its journey around the sun, its orbit crosses the orbit of a comet, which means Earth smacks into a bunch of comet debris.

But not to worry!

The meteoroids are usually small, from dust particle to boulder size. They are almost always small enough to quickly burn up in our atmosphere, so there's little chance any of them will strike Earth's surface. But there is a good chance that you can see a beautiful shooting star show in the middle of the night!

In the case of a meteor shower, the glowing streaks may appear anywhere in the sky, but their "tails" all seem to point back to the same spot in the sky. That's because all the meteors are coming at us at the same angle, and as they get closer to Earth the effect of perspective makes them seem to get farther apart. It's like standing in the middle of railroad tracks and seeing how the two tracks come together in the distance.

Meteor showers are named for the constellation the meteors appear to be coming from. So, for example, the Orionid meteor shower, which occurs in October each year, appears to originate near the constellation Orion the Hunter.

Details

A meteor shower is a celestial event in which a number of meteors are observed to radiate, or originate, from one point in the night sky. These meteors are caused by streams of cosmic debris called meteoroids entering Earth's atmosphere at extremely high speeds on parallel trajectories. Most meteors are smaller than a grain of sand, so almost all of them disintegrate and never hit the Earth's surface. Very intense or unusual meteor showers are known as meteor outbursts and meteor storms, which produce at least 1,000 meteors an hour, most notably from the Leonids. The Meteor Data Centre lists over 900 suspected meteor showers of which about 100 are well established. Several organizations point to viewing opportunities on the Internet. NASA maintains a daily map of active meteor showers.

Historical developments

A meteor shower in August 1583 was recorded in the Timbuktu manuscripts. In the modern era, the first great meteor storm was the Leonids of November 1833. One estimate is a peak rate of over one hundred thousand meteors an hour, but another, done as the storm abated, estimated more than two hundred thousand meteors during the 9 hours of the storm, over the entire region of North America east of the Rocky Mountains. American Denison Olmsted (1791–1859) explained the event most accurately. After spending the last weeks of 1833 collecting information, he presented his findings in January 1834 to the American Journal of Science and Arts, published in January–April 1834 and January 1836. He noted the shower was of short duration and was not seen in Europe, and that the meteors radiated from a point in the constellation of Leo. He speculated the meteors had originated from a cloud of particles in space. Work continued, and the annual nature of showers gradually came to be understood, though the occurrence of storms continued to perplex researchers.

The actual nature of meteors was still debated during the 19th century. Meteors were conceived as an atmospheric phenomenon by many scientists (Alexander von Humboldt, Adolphe Quetelet, Julius Schmidt) until the Italian astronomer Giovanni Schiaparelli ascertained the relation between meteors and comets in his work "Notes upon the astronomical theory of the falling stars" (1867). In the 1890s, Irish astronomer George Johnstone Stoney (1826–1911) and British astronomer Arthur Matthew Weld Downing (1850–1917) were the first to attempt to calculate the position of the dust at Earth's orbit. They studied the dust ejected in 1866 by comet 55P/Tempel-Tuttle before the anticipated Leonid shower return of 1898 and 1899. Meteor storms were expected, but the final calculations showed that most of the dust would be far inside Earth's orbit. The same results were independently arrived at by Adolf Berberich of the Königliches Astronomisches Rechen Institut (Royal Astronomical Computation Institute) in Berlin, Germany. Although the absence of meteor storms that season confirmed the calculations, the advance of much better computing tools was needed to arrive at reliable predictions.

In 1981, Donald K. Yeomans of the Jet Propulsion Laboratory reviewed the history of meteor showers for the Leonids and the history of the dynamic orbit of Comet Tempel-Tuttle. A graph from it was adapted and re-published in Sky and Telescope. It showed relative positions of the Earth and Tempel-Tuttle and marks where Earth encountered dense dust. This showed that the meteoroids are mostly behind and outside the path of the comet, but paths of the Earth through the cloud of particles resulting in powerful storms were very near paths of nearly no activity.

In 1985, E. D. Kondrat'eva and E. A. Reznikov of Kazan State University first correctly identified the years when dust was released which was responsible for several past Leonid meteor storms. In 1995, Peter Jenniskens predicted the 1995 Alpha Monocerotids outburst from dust trails. In anticipation of the 1999 Leonid storm, Robert H. McNaught, David Asher, and Finland's Esko Lyytinen were the first to apply this method in the West. In 2006 Jenniskens published predictions for future dust trail encounters covering the next 50 years. Jérémie Vaubaillon continues to update predictions based on observations each year for the Institut de Mécanique Céleste et de Calcul des Éphémérides (IMCCE).

Radiant point

Because meteor shower particles are all traveling in parallel paths and at the same velocity, they will appear to an observer below to radiate away from a single point in the sky. This radiant point is caused by the effect of perspective, similar to parallel railroad tracks converging at a single vanishing point on the horizon. Meteor showers are normally named after the constellation from which the meteors appear to originate. This "fixed point" slowly moves across the sky during the night due to the Earth turning on its axis, the same reason the stars appear to slowly march across the sky. The radiant also moves slightly from night to night against the background stars (radiant drift) due to the Earth moving in its orbit around the Sun. See IMO Meteor Shower Calendar 2017 (International Meteor Organization) for maps of drifting "fixed points."

Viewing is best when the moving radiant is at the highest point it will reach in the observer's sky that night; by then, however, the Sun will be just clearing the eastern horizon. For this reason, the best viewing time for a meteor shower is generally slightly before dawn, a compromise between the maximum number of meteors available for viewing and the brightening sky, which makes them harder to see.

Naming

Meteor showers are named after the nearest constellation, or bright star with a Greek or Roman letter assigned that is close to the radiant position at the peak of the shower, whereby the grammatical declension of the Latin possessive form is replaced by "id" or "ids." Hence, meteors radiating from near the star Delta Aquarii (declension "-i") are called the Delta Aquariids. The International Astronomical Union's Task Group on Meteor Shower Nomenclature and the IAU's Meteor Data Center keep track of meteor shower nomenclature and which showers are established.

Origin of meteoroid streams

A meteor shower results from an interaction between a planet, such as Earth, and streams of debris from a comet. Comets can produce debris by water vapor drag, as demonstrated by Fred Whipple in 1951, and by breakup. Whipple envisioned comets as "dirty snowballs," made up of rock embedded in ice, orbiting the Sun. The "ice" may be water, methane, ammonia, or other volatiles, alone or in combination. The "rock" may vary in size from a dust mote to a small boulder. Dust mote sized solids are orders of magnitude more common than those the size of sand grains, which, in turn, are similarly more common than those the size of pebbles, and so on. When the ice warms and sublimates, the vapor can drag along dust, sand, and pebbles.

Each time a comet swings by the Sun in its orbit, some of its ice vaporizes, and a certain number of meteoroids will be shed. The meteoroids spread out along the entire trajectory of the comet to form a meteoroid stream, also known as a "dust trail" (as opposed to a comet's "gas tail" caused by the tiny particles that are quickly blown away by solar radiation pressure).

Recently, Peter Jenniskens has argued that most of our short-period meteor showers are not from the normal water vapor drag of active comets, but the product of infrequent disintegrations, when large chunks break off a mostly dormant comet. Examples are the Quadrantids and Geminids, which originated from a breakup of asteroid-looking objects, (196256) 2003 EH1 and 3200 Phaethon, respectively, about 500 and 1000 years ago. The fragments tend to fall apart quickly into dust, sand, and pebbles and spread out along the comet's orbit to form a dense meteoroid stream, which subsequently evolves into Earth's path.

Dynamical evolution of meteoroid streams

Shortly after Whipple predicted that dust particles traveled at low speeds relative to the comet, Milos Plavec was the first to offer the idea of a dust trail, when he calculated how meteoroids, once freed from the comet, would drift mostly in front of or behind the comet after completing one orbit. The effect is simple celestial mechanics – the material drifts only a little laterally away from the comet while drifting ahead or behind the comet because some particles make a wider orbit than others. These dust trails are sometimes observed in comet images taken at mid infrared wavelengths (heat radiation), where dust particles from the previous return to the Sun are spread along the orbit of the comet.

The gravitational pull of the planets determines where the dust trail would pass by Earth orbit, much like a gardener directing a hose to water a distant plant. Most years, those trails would miss the Earth altogether, but in some years, the Earth is showered by meteors. This effect was first demonstrated from observations of the 1995 alpha Monocerotids, and from earlier not widely known identifications of past Earth storms.

Over more extended periods, the dust trails can evolve in complicated ways. For example, the orbits of some repeating comets, and of the meteoroids leaving them, are in resonant orbits with Jupiter or one of the other large planets, so that a whole number of revolutions of one equals a whole number of revolutions of the other. This creates a shower component called a filament.

A second effect is a close encounter with a planet. When the meteoroids pass by Earth, some are accelerated (making wider orbits around the Sun), others are decelerated (making shorter orbits), resulting in gaps in the dust trail in the next return (like opening a curtain, with grains piling up at the beginning and end of the gap). Also, Jupiter's perturbation can dramatically change sections of the dust trail, especially for short-period comets, when the grains approach the giant planet at their furthest point along their orbit around the Sun, where they move most slowly. As a result, the trail develops a clumping, braiding or tangling of crescents from each release of material.

The third effect is that of radiation pressure which will push less massive particles into orbits further from the Sun – while more massive objects (responsible for bolides or fireballs) will tend to be affected less by radiation pressure. This makes some dust trail encounters rich in bright meteors, others rich in faint meteors. Over time, these effects disperse the meteoroids and create a broader stream. The meteors we see from these streams are part of annual showers, because Earth encounters those streams every year at much the same rate.

When the meteoroids collide with other meteoroids in the zodiacal cloud, they lose their stream association and become part of the "sporadic meteors" background. Long since dispersed from any stream or trail, they form isolated meteors, not a part of any shower. These random meteors will not appear to come from the radiant of the leading shower.

Famous meteor showers:

Perseids and Leonids

In most years, the most visible meteor shower is the Perseids, which peaks on 12 August each year at over one meteor per minute. NASA has a tool to calculate how many meteors per hour are visible from one's observing location.
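As a rough sketch of how such per-hour estimates work, visual observers commonly scale a shower's quoted zenithal hourly rate (ZHR) by the radiant's altitude and by a sky-quality factor. The Python sketch below uses the standard corrections (the sine of the radiant altitude, and the population index raised to 6.5 minus the limiting magnitude); it is not NASA's tool, and the example values are assumptions.

import math

def expected_hourly_rate(zhr, radiant_altitude_deg, limiting_magnitude, population_index=2.5):
    """Rough visible meteors per hour, correcting the ZHR for radiant altitude and sky quality."""
    altitude_factor = math.sin(math.radians(radiant_altitude_deg))
    sky_factor = population_index ** (6.5 - limiting_magnitude)
    return zhr * max(altitude_factor, 0.0) / sky_factor

# Example: ZHR 100, radiant 50 degrees above the horizon, suburban sky (limiting magnitude 5.5).
print(round(expected_hourly_rate(100, 50, 5.5)))  # about 31 meteors per hour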

The Leonid meteor shower peaks around 17 November of each year. About every 33 years, the Leonid shower produces a meteor storm, peaking at rates of thousands of meteors per hour. Leonid storms gave birth to the term meteor shower when it was first realised that, during the November 1833 storm, the meteors radiated from near the star Gamma Leonis. The last Leonid storms were in 1999, 2001 (two), and 2002 (two). Before that, there were storms in 1767, 1799, 1833, 1866, 1867, and 1966. When the Leonid shower is not storming, it is less active than the Perseids.

Extraterrestrial meteor showers

Any other Solar System body with a reasonably transparent atmosphere can also have meteor showers. As the Moon is in the neighborhood of Earth, it can experience the same showers, but will have its own phenomena due to its lack of an atmosphere per se, such as a vast increase in its sodium tail. NASA now maintains an ongoing database of observed impacts on the Moon, run by the Marshall Space Flight Center, whether from a shower or not.

Many planets and moons have impact craters dating back large spans of time. But new craters, perhaps even related to meteor showers, are possible. Mars, and thus its moons, is known to have meteor showers. These have not yet been observed on other planets but may be presumed to exist. Martian showers differ from those seen on Earth because of the different orbits of Mars and Earth relative to the orbits of comets. The Martian atmosphere has less than one percent of the density of Earth's at ground level, but at the upper edges of the two atmospheres, where meteoroids strike, they are more similar. Because of the similar air pressure at the altitudes where meteors appear, the effects are much the same. Only the relatively slower motion of the meteoroids, due to their increased distance from the Sun, should marginally decrease meteor brightness. This is somewhat balanced by the slower descent, which means that Martian meteors have more time to ablate.

On March 7, 2004, the panoramic camera on Mars Exploration Rover Spirit recorded a streak which is now believed to have been caused by a meteor from a Martian meteor shower associated with comet 114P/Wiseman-Skiff. A strong display from this shower was expected on December 20, 2007. Other showers speculated about are a "Lambda Geminid" shower associated with the Eta Aquariids of Earth (i.e., both associated with Comet 1P/Halley), a "Beta Canis Major" shower associated with Comet 13P/Olbers, and "Draconids" from 5335 Damocles.

Isolated massive impacts have been observed at Jupiter: the 1994 impact of Comet Shoemaker–Levy 9, which also formed a brief trail, and successive events since then. Meteors or meteor showers have been discussed for most of the objects in the Solar System with an atmosphere: Mercury, Venus, Saturn's moon Titan, Neptune's moon Triton, and Pluto.

Additional Information

A meteor shower is a temporary rise in the rate of meteor sightings, caused by the entry into Earth’s atmosphere of a number of meteoroids (see meteor and meteoroid) at approximately the same place in the sky and the same time of year, traveling in parallel paths and apparently having a common origin. Most meteor showers are known or believed to be associated with active or defunct comets; they represent Earth’s passage through the orbits of these comets and its collision with the streams of debris (typically of sand-grain to pebble size) that have been left behind. The showers return annually, but, because the densities of meteoroids in the streams (commonly called meteor streams) are not uniform, the intensities of the showers can vary considerably from year to year.

A meteor shower’s name is usually derived from that of the constellation (or of a star therein) in which the shower’s radiant is situated—i.e., the point in the sky from which perspective makes the parallel meteor tracks seem to originate. Some showers have been named for an associated comet; e.g., the Andromedids were formerly called the Bielids, after Biela’s Comet. The Cyrillid shower of 1913 had no radiant (the meteoroids seemed to enter the atmosphere from a circular orbit around Earth) and was named for St. Cyril of Alexandria, on whose feast day (formerly celebrated on February 9) the shower was observed. The great Leonid meteor shower of Nov. 12, 1833, in which hundreds of thousands of meteors were observed in one night, was seen all over North America and initiated the first serious study of meteor showers. It was later established that very strong Leonid showers recur at 33–34-year intervals (the orbital period of its associated comet, Tempel-Tuttle), and occasional records of its appearances have been traced back to about AD 902. Since about 1945, radar observations have revealed meteor showers regularly occurring in the daylight sky, where they are invisible to the eye.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1954 2023-11-07 00:17:48

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,333

Re: Miscellany

1958) Bluetooth

Summary

Bluetooth is a universal standard for short-range wireless voice and data communication. It is a Wireless Personal Area Network (WPAN) technology used for exchanging data over short distances. The technology was invented by Ericsson in 1994. It operates in the unlicensed industrial, scientific, and medical (ISM) band from 2.4 GHz to 2.485 GHz. A maximum of seven devices can be connected at the same time, and the range is typically up to 10 meters. It provides data rates of up to 1 Mbps or 3 Mbps, depending on the version. The spreading technique it uses is FHSS (frequency-hopping spread spectrum). A Bluetooth network is called a piconet, and a collection of interconnected piconets is called a scatternet.
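As an illustration of the piconet idea in the summary above, here is a minimal Python sketch that models one primary device with at most seven active connected devices. It is purely illustrative and is not real Bluetooth stack code; the device names are made up.

class Piconet:
    MAX_ACTIVE = 7  # active connections allowed in addition to the primary device

    def __init__(self, primary):
        self.primary = primary
        self.active = []

    def connect(self, device):
        if len(self.active) >= self.MAX_ACTIVE:
            raise RuntimeError("a piconet supports at most 7 active devices")
        self.active.append(device)

phone = Piconet("phone")
for gadget in ["earbuds", "watch", "keyboard"]:
    phone.connect(gadget)
print(phone.active)  # ['earbuds', 'watch', 'keyboard']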

What is Bluetooth?

Bluetooth simply follows the principle of transmitting and receiving data using radio waves. A device can be paired with another Bluetooth-enabled device, but the two must be within communication range to connect. When two devices start to share data, they form a network called a piconet, which can accommodate up to eight devices in total (one primary device and up to seven active connected devices).

Points to remember for Bluetooth:

* Bluetooth transmission capacity is 720 kbps.
* Bluetooth is wireless.
* Bluetooth is a low-cost, short-distance radio communications standard.
* Bluetooth is robust and flexible.
* Bluetooth is a cable-replacement technology that can be used to connect almost any device to any other device.
* The basic architectural unit of Bluetooth is the piconet.

Details

You get that important call you’ve been waiting for, and you scramble for earphones in your bag. You groan as you find them, wires all a-tangle like yesterday’s spaghetti. And then, when you try to transfer photos from the phone to your computer, you can’t find that elusive USB cable in your desk.

Sound familiar? No? That’s because you’re using Bluetooth to connect your earphones, your phone, and your computer—no wires, no fuss. But do you know how Bluetooth makes your life so much easier?

Bluetooth—named for a 10th-century Danish king, incidentally—uses radio waves to transmit information between two devices directly. The radio waves used by Bluetooth are much weaker than those involved with Wi-Fi or cellular signals, two other common ways to connect devices. Weaker radio waves mean that less power is being used to generate them, which makes Bluetooth a particularly useful technology for battery-powered devices. Those weaker radio waves also mean that Bluetooth typically works only over short distances, of less than 30 feet, or about 9 meters. (Incidentally, long-range Bluetooth devices do exist, but they either require power not usually seen in the commercial domain or are products of precision engineering that exists only in prototypes.) But a Bluetooth connection between two devices will stay active as long as they remain within range, without the need for a router or any other intervening device.

When Bluetooth-enabled devices are close to each other, they automatically detect each other. Bluetooth uses 79 different radio frequencies in a small band around 2.4 GHz. This band is used by Wi-Fi too, but Bluetooth uses so little power that interference with Wi-Fi communication is negligible. When two devices are being paired, they pick one of the 79 available frequencies to make a connection, and, once that connection is established, they keep hopping across these frequencies many times a second. The connection will automatically break if the devices move too far apart, and they’ll reconnect once they come within range again. Security can be applied too: devices can be configured to accept connections only from “trusted devices,” and passwords can be used to block malicious actors.
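To make the hopping idea above concrete, here is a toy Python sketch over the 79 classic Bluetooth channels (1 MHz apart, starting at 2402 MHz). A real link derives its hop sequence from the primary device's address and clock rather than from a random choice, so this is only a sketch of the concept.

import random

CHANNELS_MHZ = [2402 + k for k in range(79)]  # the 79 channels: 2402 ... 2480 MHz

def toy_hop_sequence(hops):
    """Pick a channel per time slot; classic Bluetooth hops about 1600 times per second."""
    return [random.choice(CHANNELS_MHZ) for _ in range(hops)]

print(toy_hop_sequence(5))  # e.g. [2417, 2455, 2403, 2476, 2431]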

All of this means that you can think of Bluetooth as a bit like a duck swimming in a placid lake. There’s a lot of churning under the surface, as connections are made and broken and renewed so that data can flow, but on the surface everything looks calm and effortless. No drama. No wires.

Bluetooth is a technology standard used to enable short-range wireless communication between electronic devices. It was developed in the late 1990s and soon achieved massive popularity in consumer devices.

In 1998 Ericsson, the Swedish manufacturer of mobile telephones, assembled a consortium of computer and electronics companies to bring to the consumer market a technology they had been developing for several years that was aimed at freeing computers, phones, personal digital assistants (PDAs), and other devices from the wires required to transfer data between them. Because the protocol would operate on radio frequencies, rather than the infrared spectrum used by traditional remote controls, such devices would not have to maintain a line of sight to communicate. Bluetooth, named for Harald I Bluetooth, the 10th-century Danish king who unified Denmark and Norway, was developed to enable a wide range of devices to work together. Its other key features were low power usage—enabling simple battery operation—and relatively low cost.

The consortium, known as the Bluetooth SIG (Special Interest Group), released the Bluetooth 1.0 specifications in 1999. After a difficult initial launch period, in which it looked like the costlier but faster IEEE 802.11b, or Wi-Fi, protocol might render Bluetooth obsolete, it began to gain a market foothold. The technology first appeared in mobile phones and desktop computers in 2000 and spread to printers and mobile computers (laptops) the following year. By the middle of the decade, Bluetooth headsets for mobile phones had become near-ubiquitous, and the technology was being incorporated into television sets, wristwatches, sunglasses, picture frames, and many other consumer products. Within the first 10 years of the protocol, nearly two billion Bluetooth-enabled products were shipped. The steady growth in Bluetooth use continued, and in 2020 four billion Bluetooth-enabled products were shipped.

Additional Information

Bluetooth is a short-range wireless technology standard that is used for exchanging data between fixed and mobile devices over short distances and building personal area networks (PANs). In the most widely used mode, transmission power is limited to 2.5 milliwatts, giving it a very short range of up to 10 metres (33 ft). It employs UHF radio waves in the ISM bands, from 2.402 GHz to 2.48 GHz. It is mainly used as an alternative to wire connections, to exchange files between nearby portable devices and connect cell phones and music players with wireless headphones.

Bluetooth is managed by the Bluetooth Special Interest Group (SIG), which has more than 35,000 member companies in the areas of telecommunication, computing, networking, and consumer electronics. The IEEE standardized Bluetooth as IEEE 802.15.1, but no longer maintains the standard. The Bluetooth SIG oversees development of the specification, manages the qualification program, and protects the trademarks. A manufacturer must meet Bluetooth SIG standards to market a device as a Bluetooth device. A network of patents applies to the technology, and these are licensed to individual qualifying devices. As of 2021, 4.7 billion Bluetooth integrated circuit chips are shipped annually.

Etymology

The name "Bluetooth" was proposed in 1997 by Jim Kardach of Intel, one of the founders of the Bluetooth SIG. The name was inspired by a conversation with Sven Mattisson who related Scandinavian history through tales from Frans G. Bengtsson's The Long Ships, a historical novel about Vikings and the 10th-century Danish king Harald Bluetooth. Upon discovering a picture of the runestone of Harald Bluetooth in the book A History of the Vikings by Gwyn Jones, Kardach proposed Bluetooth as the codename for the short-range wireless program which is now called Bluetooth.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1955 2023-11-08 00:12:52

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,333

Re: Miscellany

1959) Bubble

Gist

A bubble is a floating ball of air. You can also blow a bubble with gum. If you’re in your own little bubble, you’re living in a fantasy, separated from the rest of the world by a thin layer of your imagination.

Whether you're blowing bubbles in the park or enjoying a bubble bath, those physical bubbles are made of liquid and gas. If something bubbles up, it comes to the surface. That can be an actual bubble or just a feeling. A bubble can also be an idea that won’t work either because it’s a fantasy or because it’s an uncontrollable scheme, like a housing or economic bubble. The thing about bubbles is that they burst.

Details:

Bubble (physics)

A bubble is a globule of gas in a liquid. In the opposite case, a globule of liquid in a gas is called a drop. Due to the Marangoni effect, bubbles may remain intact when they reach the surface of the immersing substance.

Common examples

Bubbles are seen in many places in everyday life, for example:

* As spontaneous nucleation of supersaturated carbon dioxide in soft drinks
* As vapor in boiling water
* As air mixed into agitated water, such as below a waterfall
* As sea foam
* As a soap bubble
* As given off in chemical reactions, e.g., baking soda + vinegar
* As a gas trapped in glass during its manufacture
* As the indicator in a spirit level

Physics and chemistry

Bubbles form and coalesce into globular shapes because those shapes are at a lower energy state. For the physics and chemistry behind it, see nucleation.

Appearance

Bubbles are visible because they have a different refractive index (RI) than the surrounding substance. For example, the RI of air is approximately 1.0003 and the RI of water is approximately 1.333. Snell's Law describes how electromagnetic waves change direction at the interface between two mediums with different RI; thus bubbles can be identified from the accompanying refraction and internal reflection even though both the immersed and immersing mediums are transparent.

The above explanation only holds for bubbles of one medium submerged in another medium (e.g. bubbles of gas in a soft drink); the volume of a membrane bubble (e.g. soap bubble) will not distort light very much, and one can only see a membrane bubble due to thin-film diffraction and reflection.
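For the refraction mentioned above, Snell's law relates the angles on either side of an interface by n1 sin(theta1) = n2 sin(theta2). The short Python sketch below works one example with the refractive indices quoted earlier (air about 1.0003, water about 1.333); the 45-degree incidence angle is just an illustrative choice.

import math

def refraction_angle(theta1_deg, n1=1.0003, n2=1.333):
    """Angle (in degrees) of a ray after crossing from medium 1 into medium 2."""
    sin_theta2 = n1 * math.sin(math.radians(theta1_deg)) / n2
    return math.degrees(math.asin(sin_theta2))

# A ray hitting a water surface at 45 degrees from the normal bends toward it:
print(round(refraction_angle(45), 1))  # about 32.1 degrees inside the water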

Applications

Nucleation can be intentionally induced, for example, to create a bubblegram in a solid.

In medical ultrasound imaging, small encapsulated bubbles called contrast agent are used to enhance the contrast.

In thermal inkjet printing, vapor bubbles are used as actuators. They are occasionally used in other microfluidics applications as actuators.

The violent collapse of bubbles (cavitation) near solid surfaces and the resulting impinging jet constitute the mechanism used in ultrasonic cleaning. The same effect, but on a larger scale, is used in focused energy weapons such as the bazooka and the torpedo. Pistol shrimp also uses a collapsing cavitation bubble as a weapon. The same effect is used to treat kidney stones in a lithotripter. Marine mammals such as dolphins and whales use bubbles for entertainment or as hunting tools. Aerators cause the dissolution of gas in the liquid by injecting bubbles.

Bubbles are used by chemical and metallurgical engineers in processes such as distillation, absorption, flotation and spray drying. The complex processes involved often require consideration of mass and heat transfer and are modeled using fluid dynamics.

The star-nosed mole and the American water shrew can smell underwater by rapidly breathing through their nostrils and creating a bubble.

Research on the origin of life on Earth suggests that bubbles may have played an integral role in confining and concentrating precursor molecules for life, a function currently performed by cell membranes.

Pulsation

When bubbles are disturbed (for example when a gas bubble is injected underwater), the wall oscillates. Although it is often visually masked by much larger deformations in shape, a component of the oscillation changes the bubble volume (i.e. it is pulsation) which, in the absence of an externally-imposed sound field, occurs at the bubble's natural frequency. The pulsation is the most important component of the oscillation, acoustically, because by changing the gas volume, it changes its pressure, and leads to the emission of sound at the bubble's natural frequency. For air bubbles in water, large bubbles (negligible surface tension and thermal conductivity) undergo adiabatic pulsations, which means that no heat is transferred either from the liquid to the gas or vice versa.
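The natural frequency mentioned above is often estimated, for a gas bubble that is not too small, with the Minnaert resonance formula. The Python sketch below evaluates it for an air bubble in water at atmospheric pressure; the property values (gamma, pressure, density) are typical assumed figures, not measurements.

import math

def minnaert_frequency(radius_m, gamma=1.4, p0=101_325.0, rho=998.0):
    """Approximate pulsation frequency (Hz): sqrt(3*gamma*p0/rho) / (2*pi*radius)."""
    return math.sqrt(3 * gamma * p0 / rho) / (2 * math.pi * radius_m)

# A 1 mm radius air bubble in water rings at roughly 3.3 kHz:
print(round(minnaert_frequency(0.001)))  # about 3287 Hz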

Excited bubbles trapped underwater are the major source of liquid sounds, such as inside our knuckles during knuckle cracking, and when a rain droplet impacts a surface of water.

Physiology and medicine

Injury by bubble formation and growth in body tissues is the mechanism of decompression sickness, which occurs when supersaturated dissolved inert gases leave the solution as bubbles during decompression. The damage can be due to mechanical deformation of tissues due to bubble growth in situ, or by blocking blood vessels where the bubble has lodged.

Arterial gas embolism can occur when a gas bubble is introduced to the circulatory system and lodges in a blood vessel that is too small for it to pass through under the available pressure difference. This can occur as a result of decompression after hyperbaric exposure, a lung overexpansion injury, during intravenous fluid administration, or during surgery.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1956 2023-11-09 00:03:32

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,333

Re: Miscellany

1960) USB Flash Drive

Gist

A flash drive is a small, ultra-portable storage device which, unlike an optical drive or a traditional hard drive, has no moving parts.

Flash drives connect to computers and other devices via a built-in USB Type-A or USB-C plug, making them a kind of combination USB device and cable.

Flash drives are often referred to as pen drives, thumb drives, or jump drives. The terms USB drive and solid-state drive (SSD) are also sometimes used but most of the time those refer to larger, not-so-mobile USB-based storage devices like external hard drives.

Summary

A USB flash drive (also called a thumb drive in the US, a memory stick in the UK, or a pen drive or pendrive in many countries) is a data storage device that includes flash memory with an integrated USB interface. It is typically removable, rewritable and much smaller than an optical disc. Most weigh less than 30 g (1 oz). Since first appearing on the market in late 2000, as with virtually all other computer memory devices, storage capacities have risen while prices have dropped. As of March 2016, flash drives with anywhere from 8 to 256 gigabytes (GB) were frequently sold, while 512 GB and 1 terabyte (TB) units were less common. As of 2023, 2 TB flash drives were the largest in production. Some allow up to 100,000 write/erase cycles, depending on the exact type of memory chip used, and are thought to last physically between 10 and 100 years under normal circumstances (shelf storage time).

Common uses of USB flash drives are for storage, supplementary back-ups, and transferring of computer files. Compared with floppy disks or CDs, they are smaller, faster, have significantly more capacity, and are more durable due to a lack of moving parts. Additionally, they are less vulnerable to electromagnetic interference than floppy disks, and are unharmed by surface scratches (unlike CDs). However, as with any flash storage, data loss from bit leaking due to prolonged lack of electrical power and the possibility of spontaneous controller failure due to poor manufacturing could make it unsuitable for long-term archiving of data. The ability to retain data is affected by the controller's firmware, internal data redundancy, and error correction algorithms.

Until about 2005, most desktop and laptop computers were supplied with floppy disk drives in addition to USB ports, but floppy disk drives became obsolete after the widespread adoption of USB ports, given the much larger capacity of USB drives compared to the "1.44 megabyte" (1440 kilobyte) 3.5-inch floppy disk.
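
As an aside on the quoted "1.44 megabyte" figure: the floppy's capacity is really 1,440 binary kilobytes, a mixed decimal/binary convention, so the disk holds neither 1.44 decimal megabytes nor 1.44 binary megabytes. The quick Python check below (an illustrative aside, not part of the original text) shows how the commonly cited numbers relate and how many such disks a typical flash drive replaces.

# The "1.44 MB" 3.5-inch floppy actually holds 1,440 KiB.
bytes_per_floppy = 1440 * 1024
print(bytes_per_floppy)                    # 1474560 bytes
print(bytes_per_floppy / 1_000_000)        # ≈ 1.47 decimal megabytes
print(bytes_per_floppy / (1024 * 1024))    # ≈ 1.41 binary megabytes (MiB)

# For scale: an 8 GB flash drive (decimal gigabytes, as marketed)
# replaces on the order of 5,400 such floppy disks.
print(8_000_000_000 // bytes_per_floppy)   # 5425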

USB flash drives use the USB mass storage device class standard, supported natively by modern operating systems such as Windows, Linux, macOS and other Unix-like systems, as well as many BIOS boot ROMs. USB drives with USB 2.0 support can store more data and transfer it faster than much larger optical disc drives like CD-RW or DVD-RW drives, and can be read by many other systems such as the Xbox One, PlayStation 4, DVD players, automobile entertainment systems, and a number of handheld devices such as smartphones and tablet computers. The electronically similar SD card, however, is better suited for those devices because of its standardized form factor, which allows the card to be housed inside a device without protruding.

A flash drive consists of a small printed circuit board carrying the circuit elements and a USB connector, insulated electrically and protected inside a plastic, metal, or rubberized case, which can be carried in a pocket or on a key chain, for example. Some are equipped with an I/O indication LED that lights up or blinks upon access. The USB connector may be protected by a removable cap or by retracting into the body of the drive, although it is not likely to be damaged if unprotected. Most flash drives use a standard type-A USB connection allowing connection with a port on a personal computer, but drives for other interfaces also exist (e.g. micro-USB and USB-C ports). USB flash drives draw power from the computer via the USB connection. Some devices combine the functionality of a portable media player with USB flash storage; they require a battery only when used to play music on the go.

Details

A USB flash drive -- also known as a USB stick, USB thumb drive or pen drive -- is a plug-and-play portable storage device that uses flash memory and is lightweight enough to attach to a keychain. A USB flash drive can be used in place of a compact disc. When a user plugs the flash memory device into the USB port, the computer's operating system (OS) recognizes the device as a removable drive and assigns it a drive letter.

A USB flash drive can store important files and data backups, carry favorite settings or applications, run diagnostics to troubleshoot computer problems or launch an OS from a bootable USB. The drives support Microsoft Windows, macOS, different flavors of Linux and many BIOS boot ROMs.

The first USB flash drive came on the market in 2000 with a storage capacity of 8 megabytes (MB). Drives now come in capacities ranging between 8 gigabytes (GB) and 1 terabyte (TB), depending on manufacturer, and future capacity levels are expected to reach 2 TB.

The memory within most USB flash drives is multi-level cell (MLC), which is good for 3,000 to 5,000 program-erase cycles. However, some drives are designed with single-level cell (SLC) memory that supports approximately 100,000 writes.

How a USB flash drive is used also affects its life expectancy. The more users delete and write new data on the device, the more likely it will degrade.
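
To put the program-erase figures above into perspective, here is a rough back-of-the-envelope endurance estimate in Python. It is only a sketch under strong simplifying assumptions (perfectly even wear across the drive and no write amplification, neither of which holds in practice), and the capacity and daily write volume used are made-up illustrative values, not figures from the text.

def rough_lifetime_years(capacity_gb, pe_cycles, gb_written_per_day):
    """Crude endurance estimate: total writable data = capacity * P/E cycles,
    assuming perfectly even wear and no write amplification."""
    total_writable_gb = capacity_gb * pe_cycles
    return total_writable_gb / gb_written_per_day / 365.0

# MLC (~3,000 cycles) vs SLC (~100,000 cycles) for a hypothetical 32 GB drive
# that sees 5 GB of new data per day.
for name, cycles in (("MLC", 3_000), ("SLC", 100_000)):
    years = rough_lifetime_years(32, cycles, 5)
    print(f"{name}: roughly {years:,.0f} years of writes in this idealised model")

Real drives wear out far sooner than such naive numbers suggest, which is why heavy rewriting is the main factor in the degradation described above.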

USB specifications

There are three main USB specifications under which USB flash drives can connect: 1.0, 2.0 and 3.0. Each successive specification allows for faster data transfer rates than the previous version. There have also been several prereleases and various updates in addition to these three versions.

USB 1.0 was released in January 1996. It was available in two versions:

* USB 1.0 low-speed: Provides a data transfer rate of 1.5 megabits per second (Mbps).
* USB 1.0 full-speed: Has a data transfer rate of 12 Mbps.

Version 1.1, an update that fixed various issues in 1.0, was released in September 1998 and was more widely adopted.

USB 2.0, also known as Hi-Speed USB, was released in April 2000. It was developed by the USB 2.0 Promoter Group, an organization led by Compaq, Hewlett-Packard (now Hewlett Packard Enterprise), Intel, Lucent Technologies, Microsoft, NEC Corp. and Philips. USB 2.0 features a maximum data transfer rate of 480 Mbps. This boosted performance by up to 40 times. It is backward-compatible so USB flash drives using original USB technology can easily transition.

USB 3.0, also known as SuperSpeed USB, was introduced in November 2008. The first 3.0-compatible USB storage began shipping in January 2010. SuperSpeed USB was developed by the USB Promoter Group to increase the data transfer rate and lower power consumption. With SuperSpeed USB, the data transfer rate increased 10 times from Hi-Speed USB to 5 Gigabits per second (Gbps). It features lower power requirements when active and idle, and is backward-compatible with USB 2.0. USB 3.1, known as SuperSpeed+ or SuperSpeed USB 10 Gbps, was released in July 2013. It bumped up the data transfer rate and improved data encoding for higher throughput.
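
The quoted signalling rates are easier to compare as theoretical transfer times for a fixed amount of data. The Python sketch below ignores protocol overhead and the speed of the flash memory itself (both of which reduce real-world throughput considerably), so the figures are best-case illustrations only.

# Nominal signalling rates from the specification history above, in megabits/s.
SPEEDS_MBPS = {
    "USB 1.0 low-speed": 1.5,
    "USB 1.0 full-speed": 12,
    "USB 2.0 Hi-Speed": 480,
    "USB 3.0 SuperSpeed": 5_000,
    "USB 3.1 SuperSpeed+": 10_000,
}

FILE_SIZE_GB = 1  # transfer one 1 GB (decimal) file

for name, mbps in SPEEDS_MBPS.items():
    seconds = FILE_SIZE_GB * 8_000 / mbps   # 1 GB = 8,000 megabits
    print(f"{name:22s} ~{seconds:8.1f} s")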

Pros and cons of USB flash drives

USB flash drives are small and light, use little power and have no moving parts. The devices, whether they are encased in plastic or rubber, are strong enough to withstand mechanical shocks, scratches and dust, and generally are waterproof.

Data on USB flash drives can be retained for long periods when the device is unplugged from a computer, or when the computer is powered-down with the drive left in. This makes a USB flash drive convenient for transferring data between a desktop computer and a notebook computer, or for personal backup needs.

Unlike most removable drives, a USB flash drive does not require rebooting after it is attached, does not require batteries or an external power supply, and is not platform dependent. Several manufacturers offer additional features such as password protection and downloadable drivers that allow the device to be compatible with older systems that do not have USB ports.

Drawbacks of USB flash drives include a limited number of write and erase cycles before the drive fails, data leakage and exposure to malware. Data leakage is a problem because the devices are portable and hard to track. A security breach due to malware can occur when the device is plugged into an infected system. However, encryption and routine scanning of the USB flash drive are common approaches to protecting against a security breach.

Major vendors

Examples of USB flash drive manufacturers include Hewlett Packard Enterprise, Kingston Technology Corp., Lexar Media Inc., SanDisk, Seagate Technology, Sony Corp., Toshiba Corp. and Verbatim Americas LLC.

Additional Information

A USB flash drive is a small portable data storage device that uses flash memory and has an integrated universal serial bus (USB) interface. Most flash drives have between 2 and 64 gigabytes (GB) of memory, but some drives can store as much as 2 terabytes (TB).

A flash drive consists of a small printed circuit board that syncs with a computer via a USB connector (usually a Type A, although other interfaces exist). One or more flash memory chips are fastened to the board. These memory chips determine the storage capacity of the drive. A small microcontroller chip manages the flash drive’s data and functions. These components are protected by an insulated case made of plastic, rubber, or metal. Some flash drives include a write-protect switch, which regulates whether data can be written to the drive’s memory. USB drives may also feature LEDs that inform the user when a connection has been made, a retractable connector or a protective cap for the connector, and a hole through which a user can thread a key chain or lanyard.

USB flash drives may be preferable to hard disk and optical drives for several reasons. First, USB flash drives use integrated circuit technology—a form of solid-state technology—and have no moving parts. Flash drives are therefore invulnerable to many types of damage that affect hard disk and optical drives, such as scratches, dust, and magnetic fields. In addition, flash memory is nonvolatile, meaning the drives do not need a battery backup and retain information even without a power source.

One of the major advantages of USB flash drives is their portability. Prior to the development of the USB flash drive and fast Internet connections, moving large amounts of data was an onerous process. Since CD-writing technology was largely unavailable on personal computers before the 1990s, most users relied on floppy disks, which had a maximum capacity of 1.44 megabytes (MB) each. Large files had to be either broken up across many floppy disks or written on high-density disks that required special, expensive, and sometimes unreliable hardware. The cheapest flash drive on the modern market can hold hundreds of times more information than a floppy disk or a high-density disk.

The question of who invented the USB flash drive is contentious. The Israeli company M-Systems, which was acquired by SanDisk in 2006, filed the first patent for a “USB-based PC flash disk” in April 1999. Later that year, however, IBM sent an invention disclosure by employee Shimon Shmueli to the U.S. Patent and Trademark Office. Pua Khein-Seng, CEO of Taiwan-based Phison Electronics Corporation, also claimed in 1999 to have invented the device. Trek 2000 International, a tech company based in Singapore, was the first to actually sell a USB flash drive, which it called a ThumbDrive. Netac, a company based in China, also claimed to have invented the USB flash drive in 1999. However, its claim is difficult to prove since its U Disk USB flash drive did not appear on the market until 2002. In comparison, IBM released its version in 2000 and Phison in 2001.

The controversy around the invention of the USB flash drive is driven mainly by the fact that, in the late 1990s, pairing a USB interface with flash memory was, according to industry experts, an obvious next step in technological development. Indeed, on those very grounds, many companies have fought others’ attempts to patent the USB flash drive. It is therefore highly likely that most of these claimants did independently develop their own USB flash drives.

Regardless of how it originated, the USB flash drive evolved with great speed. IBM began selling 8-MB flash drives, manufactured by M-Systems, in 2000. USB flash drives with 1 GB of space hit the market in 2004, and in 2010 those models were eclipsed by new versions boasting 128 GB. Kingston Technology released the first 1-TB flash drive in 2013 and the first 2-TB drive in 2017. Moreover, companies became creative with the design of the devices, competing not only in size and storage capacity but also by customizing their devices’ shells (e.g., making a flash drive that resembles an ice cream bar). Some businesses and other organizations had flash drives made with their logos on them as promotional tools.

Despite the recent migration of much data storage to cloud-based services, such as Google Drive, USB flash drives remain popular. The speed of the devices vastly outpaces that of any Internet connection, so they are more convenient for transferring large file sets, such as libraries of high-definition movies. Moreover, software applications can be run directly off USB flash drives, without installation on a computer’s hard drive. In addition, USB flash drives continue to be useful for safely backing up important data.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1957 2023-11-10 00:04:24

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,333

Re: Miscellany

1961) Whistle

Gist

A whistle is (i) a small wind instrument in which sound is produced by the forcible passage of breath through a slit in a short tube, or (ii) a device through which air or steam is forced into a cavity or against a thin edge to produce a loud sound.

Details

A whistle is an instrument which produces sound from a stream of gas, most commonly air. It may be mouth-operated, or powered by air pressure, steam, or other means. Whistles vary in size from a small slide whistle or nose flute type to a large multi-piped church organ.

Whistles have been around since early humans first carved out a gourd or branch and found they could make sound with it. In prehistoric Egypt, small shells were used as whistles. Many present day wind instruments are inheritors of these early whistles. With the rise of more mechanical power, other forms of whistles have been developed.

One characteristic of a whistle is that it creates a pure, or nearly pure, tone. The conversion of flow energy to sound comes from an interaction between a solid material and a fluid stream. The forces in some whistles are sufficient to set the solid material in motion. Classic examples are the Aeolian tones produced by wind blowing across power lines, which can set the lines "galloping", and the Tacoma Narrows Bridge (the so-called "Galloping Gertie" of popular media). Other examples are circular disks set into vibration.
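
The Aeolian tones mentioned above can be estimated from the standard Strouhal relation for vortex shedding behind a cylinder, f ≈ St·U/d with St ≈ 0.2. That relation is not given in the text, so the Python sketch below is only an order-of-magnitude illustration for a wire in the wind.

def aeolian_tone_hz(wind_speed_m_s, wire_diameter_m, strouhal=0.2):
    """Approximate vortex-shedding (Aeolian tone) frequency behind a cylinder:
    f = St * U / d, with St ≈ 0.2 over a wide range of Reynolds numbers."""
    return strouhal * wind_speed_m_s / wire_diameter_m

# A 10 mm power line in a 10 m/s wind "sings" at roughly 200 Hz.
for wind in (5.0, 10.0, 20.0):
    print(f"U = {wind:4.1f} m/s  ->  f ≈ {aeolian_tone_hz(wind, 0.010):5.0f} Hz")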

History:

Early whistles

Whistles made of bone or wood have been used for thousands of years. Whistles were used by the Ancient Greeks to keep the stroke of galley slaves. Archaeologists have found a terracotta whistle at the ruins of the ancient Greek city of Assos, most probably a child's toy placed in a child's grave as a burial gift. The English used whistles during the Crusades to signal orders to archers. Boatswain pipes were also used in the age of sail aboard naval vessels to issue commands and salute dignitaries.

Joseph Hudson

Joseph Hudson set up J Hudson & Co in Birmingham in 1870. With his younger brother James, he designed the "Acme City" brass whistle. This became the first referee whistle used at association football matches during the 1878–79 Football Association Cup match between Nottingham Forest and Sheffield. Prior to the introduction of the whistle, handkerchiefs were used by the umpires to signal to the players.

In 1883, he began experimenting with pea-whistle designs that could produce an intense sound that could grab attention from over a mile away. His invention was discovered by accident when he dropped his violin and it shattered on the floor. Observing how the discordant sound of the breaking strings travelled (trill effect), Hudson had the idea to put a pea in the whistle. Prior to this, whistles were much quieter and were only thought of as musical instruments or toys for children. After observing the problems that local police were having with effectively communicating with rattles, he realised that his whistle designs could be used as an effective aid to their work.

Hudson demonstrated his whistle to Scotland Yard and was awarded his first contract in 1884. Both rattles and whistles were used to call for back-up in areas where neighbourhood beats overlapped, and following their success in the Metropolitan Police of London, the whistle was adopted by most police forces in the United Kingdom.

World War I

During World War I, officers of the British Army and United States Army used whistles to communicate with troops, command charges and warn when artillery pieces were going to fire. Most whistles used by the British were manufactured by J Hudson & Co.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1958 2023-11-11 00:07:51

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,333

Re: Miscellany

1962) Fort

Gist

A fort is a military building designed to be defended from attack, consisting of an area surrounded by a strong wall, in which soldiers are based.

Details

A fortification (also called a fort, fortress, or stronghold) is a military construction designed for the defense of territories in warfare, and is used to establish rule in a region during peacetime. The term is derived from Latin fortis ("strong") and facere ("to make").

From very early history to modern times, defensive walls have often been necessary for cities to survive in an ever-changing world of invasion and conquest. Some settlements in the Indus Valley Civilization were the first small cities to be fortified. In ancient Greece, large stone walls had been built in Mycenaean Greece, such as at the ancient site of Mycenae (known for the huge stone blocks of its 'cyclopean' walls). A Greek phrourion was a fortified collection of buildings used as a military garrison, and is the equivalent of the Roman castellum or fortress. These constructions mainly served the purpose of a watch tower, guarding certain roads, passes, and borders. Though smaller than a real fortress, they acted as border guard posts rather than true strongpoints, watching and maintaining the border.

The art of setting out a military camp or constructing a fortification traditionally has been called "castrametation" since the time of the Roman legions. Fortification is usually divided into two branches: permanent fortification and field fortification. There is also an intermediate branch known as semi-permanent fortification. Castles are fortifications which are regarded as being distinct from the generic fort or fortress in that they are a residence of a monarch or noble and command a specific defensive territory.

Roman forts and hill forts were the main antecedents of castles in Europe, which emerged in the 9th century in the Carolingian Empire. The Early Middle Ages saw the creation of some towns built around castles.

Medieval-style fortifications were largely made obsolete by the arrival of cannons in the 14th century. Fortifications in the age of black powder evolved into much lower structures with greater use of ditches and earth ramparts that would absorb and disperse the energy of cannon fire. Walls exposed to direct cannon fire were very vulnerable, so the walls were sunk into ditches fronted by earth slopes to improve protection.

The arrival of explosive shells in the 19th century led to another stage in the evolution of fortification. Star forts did not fare well against the effects of high explosives, and the intricate arrangements of bastions, flanking batteries and the carefully constructed lines of fire for the defending cannon could be rapidly disrupted by explosive shells. Steel-and-concrete fortifications were common during the 19th and early 20th centuries. The advances in modern warfare since World War I have made large-scale fortifications obsolete in most situations.

Nomenclature

Many United States Army installations are known as forts, although they are not always fortified. During the pioneering era of North America, many outposts on the frontiers, even non-military outposts, were referred to generically as forts. Larger military installations may be called fortresses; smaller ones were once known as fortalices. The word fortification can refer to the practice of improving an area's defense with defensive works. City walls are fortifications but are not necessarily called fortresses.

The art of setting out a military camp or constructing a fortification traditionally has been called castrametation since the time of the Roman legions. The art of laying siege to a fortification and of destroying it is commonly called siegecraft or siege warfare and is formally known as poliorcetics. In some texts, this latter term also applies to the art of building a fortification.

Fortification is usually divided into two branches: permanent fortification and field fortification. Permanent fortifications are erected at leisure, with all the resources that a state can supply of constructive and mechanical skill, and are built of enduring materials. Field fortifications—for example breastworks—often known as fieldworks or earthworks, are extemporized by troops in the field, perhaps assisted by such local labour and tools as may be procurable and with materials that do not require much preparation, such as soil, brushwood, and light timber, or sandbags (see sangar). An example of field fortification was the construction of Fort Necessity by George Washington in 1754.

There is also an intermediate branch known as semi-permanent fortification. This is employed when in the course of a campaign it becomes desirable to protect some locality with the best imitation of permanent defences that can be made in a short time, ample resources and skilled civilian labour being available. An example of this is the construction of Roman forts in England and in other Roman territories where camps were set up with the intention of staying for some time, but not permanently.

Castles are fortifications which are regarded as being distinct from the generic fort or fortress in that they are the residence of a monarch or noble and command a specific defensive territory. An example of this is the massive medieval castle of Carcassonne.

Forts

Forts in modern American usage often refer to space set aside by governments for a permanent military facility; these often do not have any actual fortifications, and can have specializations (military barracks, administration, medical facilities, or intelligence).

However, there are some modern fortifications that are referred to as forts. These are typically small semi-permanent fortifications. In urban combat, they are built by upgrading existing structures such as houses or public buildings. In field warfare they are often log, sandbag or gabion type construction.

Such forts are typically only used in low-level conflicts, such as counterinsurgency conflicts or very low-level conventional conflicts, such as the Indonesia–Malaysia confrontation, which saw the use of log forts for use by forward platoons and companies. The reason for this is that static above-ground forts cannot survive modern direct or indirect fire weapons larger than mortars, RPGs and small arms.

Prisons and others

Fortifications designed to keep the inhabitants of a facility in, rather than attackers out, can also be found in prisons, concentration camps, and other such facilities. Those are covered in other articles, as most prisons and concentration camps are not primarily military forts (although forts, camps, and garrison towns have been used as prisons and/or concentration camps, such as Theresienstadt, the Guantanamo Bay detention camp and the Tower of London).



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1959 2023-11-12 00:08:59

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,333

Re: Miscellany

1963) Boutique

Gist

A boutique is a small shop that sells fashionable clothes, shoes, jewellery, etc.

Details

A boutique is a retail shop that deals in high-end fashionable clothing or accessories. The word is French for "shop", and derives ultimately from the Ancient Greek apothēkē ("storehouse").

The terms boutique and designer refer (with some differences) to both goods and services which contain some element that is claimed to justify an extremely high price.

Etymology and usage

The term boutique entered common English parlance in the late 1960s.

Some multi-outlet businesses (chain stores) can be referred to as boutiques if they target small, upscale niche markets. Although some boutiques specialize in hand-made items and other unique products, others simply produce T-shirts, stickers, and other fashion accessories in artificially small runs and sell them at high prices.

Lifestyle

In the late 1990s, some European retail traders developed the idea of tailoring a shop towards a lifestyle theme, in what they called "concept stores," which specialized in cross-selling without using separate departments. One of the first concept stores was 10 Corso Como in Milan, Italy, founded in 1990, followed by Colette in Paris and Quartier 206 in Berlin. Several well-known American chains such as Tiffany & Co., Urban Outfitters, Dash, and The Gap, Australian chain Billabong and, though less common, Lord & Taylor, adapted to the concept store trend after 2000.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1960 2023-11-13 00:08:15

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,333

Re: Miscellany

1964) Pilot

Gist

A pilot is a person who is trained to fly an aircraft.

In aeronautics, a pilot is a person duly qualified to operate an airplane, balloon, or other aircraft.

* A commercial pilot operates aircraft to transport cargo and passengers to their destinations safely.
* A commercial pilot has various skills and qualities to perform their work, including time management, coordination, navigation and verbal communication.
* Requirements to become a commercial pilot include a private pilot certificate, a satisfactory instrument rating, supervised flight hours and the successful completion of written and practical tests.

Summary

You can get into this job through:

* a university course
* applying directly
* a trainee scheme
* specialist courses run by private training organisations

University

You could do a university degree in air transport or aviation if you're 18 or over.

Your university degree will:

* include commercial pilot training with an approved flight training organisation
* lead to a 'frozen' Air Transport Pilot Licence (ATPL) which allows you to work as a co-pilot and build up the necessary flying hours to become a captain

Medical certificates

You'll need to have a minimum of a Class 2 medical certificate before you start a course.

You'll then need to apply for the higher level Class 1 medical certificate during your course to get your Commercial Pilot Licence. You could choose to apply for the Class 1 medical certificate before you start your course.

Fees and funding

As well as standard university fees, you'll need to fund the flight training part of your course. Your university can advise you about this.

Entry requirements

You'll usually need:

2 to 3 A levels, or equivalent, for a degree.

Direct Application

You could apply directly to the Civil Aviation Authority's Military Accreditation Scheme to become a commercial pilot if you have flying experience in the armed forces.

Other Routes

You could apply to join a pilot training programme with a passenger airline.

Private flying school

You could also train with a private flying school to get your Commercial Pilot Licence. Courses can take at least 18 months of full-time study.

You can find details about flight training schools from the Civil Aviation Authority (CAA).

Details

An aircraft pilot or aviator is a person who controls the flight of an aircraft by operating its directional flight controls. Some other aircrew members, such as navigators or flight engineers, are also considered aviators because they are involved in operating the aircraft's navigation and engine systems. Other aircrew members, such as drone operators, flight attendants, mechanics and ground crew, are not classified as aviators.

In recognition of the pilots' qualifications and responsibilities, most militaries and many airlines worldwide award aviator badges to their pilots.

Definition

The first recorded use of the term aviator (aviateur in French) was in 1887, as a variation of aviation, from the Latin avis (meaning bird), coined in 1863 by G. J. G. de La Landelle in Aviation Ou Navigation Aérienne ("Aviation or Air Navigation"). The now-archaic terms aviator and aviatrix (aviateur and aviatrice in French) were formerly used for male and female pilots respectively; people who operate aircraft today obtain a pilot licence, and aviation regulations refer to pilots. The older terms were used more in the early days of aviation, when airplanes were extremely rare, and connoted bravery and adventure. For example, a 1905 reference work described the Wright brothers' first airplane: "The weight, including the body of the aviator, is a little more than 700 pounds".

History

To ensure the safety of people in the air and on the ground, early aviation soon required that aircraft be under the operational control of a properly trained, certified pilot at all times, who is responsible for the safe and legal completion of the flight. The Aéro-Club de France delivered the first certificate to Louis Blériot in 1908—followed by Glenn Curtiss, Léon Delagrange, and Robert Esnault-Pelterie. The British Royal Aero Club followed in 1910 and the Aero Club of America in 1911 (Glenn Curtiss receiving the first).

Civilian

Civilian pilots fly aircraft of all types privately for pleasure, charity, or in pursuance of a business, or commercially for non-scheduled (charter) and scheduled passenger and cargo air carriers (airlines), corporate aviation, agriculture (crop dusting, etc.), forest fire control, law enforcement, etc. When flying for an airline, pilots are usually referred to as airline pilots, with the pilot in command often referred to as the captain.

Airline

There were 290,000 airline pilots in the world in 2017 and aircraft simulator manufacturer CAE Inc. forecasts a need for 255,000 new ones for a population of 440,000 by 2027, 150,000 for growth and 105,000 to offset retirement and attrition: 90,000 in Asia-Pacific (average pilot age in 2016: 45.8 years), 85,000 in Americas (48 years), 50,000 in Europe (43.7 years) and 30,000 in Middle East & Africa (45.7 years).

Boeing expects 790,000 new pilots in 20 years from 2018, 635,000 for commercial aviation, 96,000 for business aviation and 59,000 for helicopters: 33% in Asia Pacific (261,000), 26% in North America (206,000), 18% in Europe (146,000), 8% in the Middle East (64,000), 7% in Latin America (57,000), 4% in Africa (29,000) and 3% in Russia/Central Asia (27,000). By November 2017, due to a shortage of qualified pilots, some pilots were leaving corporate aviation to return to airlines. In one example, a Global 6000 pilot, making $250,000 a year for 10 to 15 flight hours a month, returned to American Airlines with full seniority. A Gulfstream G650 or Global 6000 pilot might earn between $245,000 and $265,000, and recruiting one may require up to $300,000. At the other end of the spectrum, constrained by the available pilots, some small carriers hire new pilots who need 300 hours to jump to airlines in a year. They may also recruit non-career pilots who have other jobs or airline retirees who want to continue to fly.

Automation

The number of airline pilots could decrease as automation replaces copilots and eventually pilots as well. In January 2017 Rhett Ross, CEO of Continental Motors said "my concern is that in the next two decades—if not sooner—automated and autonomous flight will have developed sufficiently to put downward pressure on both wages and the number and kind of flying jobs available. So if a kid asks the question now and he or she is 18, 20 years from now will be 2037 and our would-be careerist will be 38—not even mid-career. Who among us thinks aviation and especially for-hire flying will look like it does now?" Christian Dries, owner of Diamond Aircraft Austria said "Behind the curtain, aircraft manufacturers are working on a single-pilot cockpit where the airplane can be controlled from the ground and only in case of malfunction does the pilot of the plane interfere. Basically the flight will be autonomous and I expect this to happen in the next five to six years for freighters."

In August 2017 financial company UBS predicted pilotless airliners are technically feasible and could appear around 2025, offering around $35bn of savings, mainly in pilot costs: $26bn for airlines, $3bn for business jets and $2.1bn for civil helicopters; $3bn/year from lower pilot training and aviation insurance costs due to safer flights; $1bn from flight optimisation (1% of global airlines' $133bn jet fuel bill in 2016); not counting revenue opportunity from increased capacity utilization.

Regulations have to adapt, with air cargo likely at the forefront, but pilotless flights could be limited by consumer behaviour: 54% of 8,000 people surveyed said they were unwilling to fly without a pilot, while 17% were supportive, with acceptance forecast to grow progressively.

AVweb reporter Geoff Rapoport stated, "pilotless aircraft are an appealing prospect for airlines bracing for the need to hire several hundred thousand new pilots in the next decade. Wages and training costs have been rapidly rising at regional U.S. airlines over the last several years as the major airlines have hired pilots from the regionals at unprecedented rates to cover increased air travel demand from economic expansion and a wave of retirements".

Going to pilotless airliners could be done in one bold step or in gradual improvements, such as by reducing the cockpit crew for long-haul missions or allowing single-pilot cargo aircraft. The industry has not yet decided how to proceed. Present automated systems are not autonomous and must be monitored; their replacement could require artificial intelligence with machine learning, whereas presently certified software is deterministic. As the Airbus A350 would only need minor modifications, Air Caraibes and French Bee parent Groupe Dubreuil see two-pilot crews in long-haul operations, without a third pilot for rotation, happening around 2024–2025.

Single-pilot freighters could start with regional flights. The Air Line Pilots Association believes removing pilots would threaten aviation safety and opposes Section 744 of the April 2018 FAA Reauthorization Act, which establishes a research and development program to assist single-pilot cargo aircraft by remote and computer piloting.

For the French aerospace research centre Onera and avionics manufacturer Thales, artificial intelligence (AI) of the kind used in consumer neural networks, which learn from large datasets, cannot explain its operation and therefore cannot be certified for safe air transport. Progress towards 'explainable' AI can be expected in the next decade, and Onera expects "leads" towards a certifiable AI system as EASA standards evolve.

Africa and Asia

In some countries, such as Pakistan, Thailand and several African nations, there is a strong relationship between the military and the principal national airlines, and many airline pilots come from the military; however, that is no longer the case in the United States and Western Europe. While the flight decks of U.S. and European airliners do have ex-military pilots, many pilots are civilians. Military training and flying, while rigorous, is fundamentally different in many ways from civilian piloting.

Canada

Operating an aircraft in Canada is regulated by the Aeronautics Act of 1985 and the Canadian Aviation Regulations provide rules for Pilot licensing in Canada.

Retirement age is set by each airline, with some set at age 60, but changes to the Canadian Human Rights Act have restricted the retirement ages the airlines can impose.

United States

In the United States in 2020, there were 691,691 active pilot certificates. This was down from a high of over 800,000 active pilots in 1980. Of the active pilot certificate holders, there were 160,860 Private, 103,879 Commercial, 164,193 Airline Transport, and 222,629 Student.

In 1930, the Air Commerce Act established pilot licensing requirements for American civil aviation.

Commercial airline pilots in the United States have a mandatory retirement age of 65, having increased from age 60 in 2007.

Military

Military pilots fly with the armed forces, primarily the air forces, of a government or nation-state. Their tasks involve combat and non-combat operations, including direct hostile engagements and support operations. Military pilots undergo specialized training, often with weapons. Examples of military pilots include fighter pilots, bomber pilots, transport pilots, test pilots and astronauts.

Military pilots are trained with a different syllabus than civilian pilots, which is delivered by military instructors. This is due to the different aircraft, flight goals, flight situations and chains of responsibility. Many military pilots do transfer to civilian-pilot qualification after they leave the military, and typically their military experience provides the basis for a civilian pilot's license.

Unmanned aerial vehicles

Unmanned aerial vehicles (UAVs, also known as "drones") operate without a pilot on-board and are classed into two categories: autonomous aircraft that operate without active human control during flight and remotely piloted UAVs which are operated remotely by one or more persons. The person controlling a remotely piloted UAV may be referred to as its pilot or operator. Depending on the sophistication and use of the UAV, pilots/operators of UAVs may require certification or training, but are generally not subject to the licensing/certification requirements of pilots of manned aircraft.

Most jurisdictions have restrictions on the use of UAVs which have greatly limited their use in controlled airspace; UAVs have mostly been limited to military and hobbyist use. In the United States, use of UAVs is very limited in controlled airspace (generally, above 400 ft/122m and away from airports) and the FAA prohibits nearly all commercial use. Once regulations are made to allow expanded use of UAVs in controlled airspace, there is expected to be a large surge of UAVs in use and, consequently, high demand for pilots/operators of these aircraft.

Space

The general concept of an airplane pilot can be applied to human spaceflight, as well. The spacecraft pilot is the astronaut who directly controls the operation of a spacecraft. This term derives directly from the usage of the word "pilot" in aviation, where it is synonymous with "aviator".

Pilot certifications

Pilots are required to go through many hours of flight training and theoretical study, which differ depending on the country. The first step is acquiring the Private Pilot License (PPL), or Private Pilot Certificate. In the United States of America, this includes a minimum of 35 to 40 hours of flight training, the majority of it with a Certified Flight Instructor.

In the United States, an LSA (Light Sport Aircraft) license can be obtained in at least 20 hours of flight time.

Generally, the next step in a pilot's progression is an Instrument Rating (IR) or a Multi-Engine Piston (MEP) rating add-on. Pilots may also choose to pursue a Commercial Pilot License (CPL) after completing their PPL. This is required if the pilot desires to pursue a professional career as a pilot. To captain an airliner, one must obtain an Airline Transport Pilot License (ATPL). In the United States after 1 August 2013, an ATPL is required even when acting as a first officer. Some countries/carriers require/use a multi-crew cooperation (MCC) certificate.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1961 2023-11-14 00:04:07

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,333

Re: Miscellany

1965) Event Management

Gist

Event Management is the coordination, running and planning of all the people, teams and features that come together to create every kind of event.

Summary

Event management is the process of planning and hosting a variety of public and private events for social or business purposes. They may be large-scale or small-scale events and can include business conventions, training seminars, industry conferences, trade shows, ceremonies, parties, concerts, festivals and press conferences. Event managers must follow the clients' instructions and work within a specified budget and predetermined schedule. To set up the events, they must collaborate with various vendors.

The difference between event management and event planning is that while event planning concerns itself with coming up with workable event ideas and the activities that will take place during the events, event management leans more towards project management and deals with the organization and execution of the event plans. However, the roles do overlap often, and event managers may be involved with the creative planning aspects of the events as well.

For established and new companies, event management can be an essential aspect of their marketing strategy. Organizing small-scale or large-scale events can help promote a brand and further a business's interests. Events create opportunities for people who attend these events to learn about the hosting organization's products and services and may even convert attendees into loyal customers.

Additionally, along with making their brands better known, the events that the companies organize can provide marketing education and training to their employees. They can foster team-building exercises, improve relations between different departments, and boost networking across industries. The events may also celebrate business milestones and raise money for various causes.

Details

In all areas of business, there are some terms you’re never truly sure you know the definition of. However, when it comes to event management, the definition is easy. At its core, event management is the process of planning an event. This is any type of event, whether it be in-person, virtual, or hybrid. It’s synonymous with event planning and meeting planning. Just like those other terms, the scope of each project and the nitty-gritty details vary depending on the industry, company size, and more.

What is event management?

Event management is the process of creating and maintaining an event. This process spans from the very beginning of planning all the way to post-event strategizing.

At the start, an event manager makes planning decisions, such as the time, location, and theme of their event. During an event, event managers oversee the event live and make sure things run smoothly. After an event, event managers are tasked with reviewing event data, submitting KPI and ROI findings, and staying on the ball for any post-event offerings.

All different branches of planning go into event management, including various types of sourcing, designing, regulation checks, and on-site management. In event management, you could be in the process of creating a conference, a product launch, an internal sales kick-off, or even a wedding. Really, any event that requires considerable planning and execution is event management.

Event Management is Event Planning

Event management goes by many different names. Some event planners are called administrative assistants, some are called event coordinators, and others are called event technologists. What do all of these titles have in common? The individuals have some hand in planning an event. Whether the events are internal or external, large or small, in-person or virtual, they all have to be planned.

Virtual Event Management

In today's new environment, we have had to learn how to manage not only our in-person events but our virtual programs as well. Virtual event management requires the same steps as managing your in-person event, but with the added challenge of making sure that your content is twice as captivating. While in-person events have the added bonus of travel, networking, and free food, a virtual event largely relies on its content to keep attendees engaged. When managing a virtual event, make sure that your speakers are prepared to present their content virtually, and that your content is interesting and succinct.

Hybrid Event Management

Just as the industry is getting comfortable with virtual events, we're seeing a new event type emerging as a popular option: hybrid events. Hybrid events are a combination of virtual and in-person events. It offers all of the benefits of both but also comes with a  unique set of challenges. When hosting a hybrid event, you're managing two audiences – virtual and in-person – and you must decide which content and event programming will be available to each audience. If you're managing a hybrid event, make sure you consider the event from every angle when building your hybrid event strategy.

Different Aspects of Event Management

Building the Perfect Event

It starts simply. A theme. A plan. A goal. Your event has a purpose from the beginning, which will drive content, speakers, and the venue. Next, it’s time to set up the basics. You have to build a branded event website that entices visitors to attend your event. Nowadays, it’s easier than ever to build a beautifully designed website, just by understanding Event Website Basics. Then, you’ll need secure payment processing so attendees can pay for events easily.

Promotion Across Channels with Automation

If no one knows about your event, how will they register? That’s why promotion is so important. Check out The Best Ways to Promote Your Event for inspiration. Targeted email marketing is a great way to promote your events when you have a vast database. Other ways to promote? Social media continues to be one of the best free promotional channels.

Managing Attendee Information and Communication

The purpose of the event is always to make connections. Event management doesn’t just involve choosing linens or the right virtual technology provider but also managing contacts as well as you can. During the event, you’ll gather leads that will go to sales. These leads will be critical when it comes to proving your Event ROI.

Measuring Your Success to Prove Event ROI

Event management doesn’t end when the event does. Over the course of the entire event, it’s important to prove success and identify areas of improvement. Data gained throughout the process will help you do this. Live polling is a great way to find out how attendees felt about the event.

There's Tech for That

Event management is about pulling together an incredible experience, facilitating connections, adding leads to sales pipeline, and proving success. It’s a difficult job that involves spinning an endless number of plates and working around the clock to create an unforgettable moment for attendees. And, it’s one that can be made a little easier by taking advantage of technology, especially when you look to plan a virtual event or a hybrid event. While many planners rely on sticky notes and spreadsheets, there’s tech out there that will save hours and take events to the next level.

Additional Information

Event management is the application of project management to the creation and development of small and/or large-scale personal or corporate events such as festivals, conferences, ceremonies, weddings, formal parties, concerts, or conventions. It involves studying the brand, identifying its target audience, devising the event concept, and coordinating the technical aspects before actually launching the event.

The events industry now includes events of all sizes from the Olympics down to business breakfast meetings. Many industries, celebrities, charitable organizations, and interest groups hold events in order to market their label, build business relationships, raise money, or celebrate achievement.

The process of planning and coordinating the event is usually referred to as event planning and which can include budgeting, scheduling, site selection, acquiring necessary permits, coordinating transportation and parking, arranging for speakers or entertainers, arranging decor, event security, catering, coordinating with third-party vendors, and emergency plans. Each event is different in its nature so process of planning and execution of each event differs on basis of the type of event.

The event manager is the person who plans and executes the event, taking responsibility for the creative, technical, and logistical elements. This includes overall event design, brand building, marketing and communication strategy, audio-visual production, script writing, logistics, budgeting, negotiation, and client service.

Due to the complexities involved, the extensive body of knowledge required, and the rapidly changing environment, event management is frequently cited as one of the most stressful career paths, ranked alongside that of surgeons.

Strategic marketing and communication

Event management might be a tool for strategic marketing and communication, used by companies of every size. Companies can benefit from promotional events as a way to communicate with current and potential customers. For instance, these advertising-focused events can occur as press conferences, promotional events, or product launches.

Event managers may also use traditional news media in order to target their audience, hoping to generate media coverage which will reach thousands or millions of people. They can also invite their audience to their events and reach them at the actual event.

Event venue

An event venue may be an onsite or offsite location. The event manager is responsible for operations at a rented event or entertainment venue as they are coordinating directly with the property owner. An event manager will monitor all aspects of the event on-site. Some of the tasks listed in the introduction may pass to the venue, but usually at a cost.

Events present substantial liability risk to organizers and venues. Consequently, most venues require the organizers to obtain blanket or event-specific general liability insurance of an amount not less than $1,000,000 per occurrence and $2,000,000 aggregate, which is the industry standard.

Corporate event managers book event venues to host corporate meetings, conferences, networking events, trade shows, product launches, team-building retreats or training sessions in a more tailored environment.

Sustainability

Sustainable event management (also known as event greening) is the process used to produce an event with particular concern for environmental, economic, and social issues. Sustainability in event management incorporates socially and environmentally responsible decision making into the planning, organization and implementation of, and participation in, an event. It involves including sustainable development principles and practices in all levels of event organization, and aims to ensure that an event is hosted responsibly. It represents the total package of interventions at an event, and needs to be done in an integrated manner. Event greening should start at the inception of the project, and should involve all the key role players, such as clients, organizers, venues, sub-contractors, and suppliers. A recent study shows that the trend of moving events from in-person to virtual and hybrid modes can reduce the carbon footprint by 94% (virtual) and by 67% (hybrid mode with over 50% in-person participation rate due to trade-offs between the per capita carbon footprint and in-person participation level).

Technology

Event management software companies provide event planning with software tools to handle many common activities such as delegate registration, hotel booking, travel booking, or allocation of exhibition floor space.

A recent trend in event technology is the use of mobile apps for events. This technology is advancing and allowing event professionals to simplify and manage intricate and simple events more effectively. Mobile apps have a range of uses. They can be used to hold relatively static information such as the agenda, speaker biographies, and general FAQs. They can also encourage audience participation and engagement through interactive tools such as live voting/polling, submitting questions to speakers during Q&A, or building live interactive "word clouds". Mobile event apps can also be used by event organizers as a means of communication. Mobile apps help improve the overall outcome of events and also remove a lot of tedious work from event organizers. Organizers can communicate with participants through the use of alerts, notifications, and push messages. They can also be used to collect feedback from the participants through the use of in-app surveys. Some mobile event apps can help participants to engage with each other, with sponsors, and with the organizers through built-in networking functionality.

Education

There are an increasing number of universities which offer training in event management in the form of both certificates and undergraduate or graduate degrees.

The University of Central Florida's Rosen College of Hospitality Management offered the first ever Bachelor of Science degree in Event Management beginning in 2006. The program leverages core training in both hospitality, covering lodging operations, tourism, guest services, accounting, and marketing, as well as event management, including sales, promotion, technology, design, risk management, and catering, with electives available for specific interests, such as cruises, clubbing, wine, or trade shows. Other institutions that do not offer a full event management degree usually offer concentrations; New York University, for example, offers a Bachelor of Science degree in Hotel and Tourism Management with a concentration in event management. The University of Florida offers a similar program as well.

Because of the limited number of undergraduate degree programs available, it is not uncommon for event managers to earn their degrees in business administration, marketing, or public relations. To supplement their candidacy, persons interested in event management typically earn one or more certifications which offer specialization into particular fields. Certifications available include:

* Certified Meeting Professional (CMP)
* Certified in Exhibition Management (CEM)
* Certified Trade Show Marketer (CTSM)
* Certificate in Meeting Management (CMM)
* Certified Professional in Catering and Events (CPCE)
* Certified Event Designer (CED)
* Certified Special Event Professional (CSEP).



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1962 2023-11-15 00:50:03

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,333

Re: Miscellany

1966) Switch

Gist

A switch is a small button or similar device that you press or flip up or down in order to turn an electric current on or off.

Details

In electrical engineering, a switch is an electrical component that can disconnect or connect the conducting path in an electrical circuit, interrupting the electric current or diverting it from one conductor to another. The most common type of switch is an electromechanical device consisting of one or more sets of movable electrical contacts connected to external circuits. When a pair of contacts is touching, current can pass between them; when the contacts are separated, no current can flow.

Switches are made in many different configurations; they may have multiple sets of contacts controlled by the same knob or actuator, and the contacts may operate simultaneously, sequentially, or alternately. A switch may be operated manually, for example, a light switch or a keyboard button, or may function as a sensing element to sense the position of a machine part, liquid level, pressure, or temperature, such as a thermostat. Many specialized forms exist, such as the toggle switch, rotary switch, mercury switch, push-button switch, reversing switch, relay, and circuit breaker. A common use is control of lighting, where multiple switches may be wired into one circuit to allow convenient control of light fixtures. Switches in high-powered circuits must have special construction to prevent destructive arcing when they are opened.

Description

Electrical switches: The most familiar form of switch is a manually operated electromechanical device with one or more sets of electrical contacts, which are connected to external circuits. Each set of contacts can be in one of two states: either "closed", meaning the contacts are touching and electricity can flow between them, or "open", meaning the contacts are separated and the switch is nonconducting. The mechanism actuating the transition between these two states is usually either an "alternate action" type (flip the switch for continuous "on" or "off") or a "momentary" type (push for "on" and release for "off"), though other types of action exist.

A switch may be directly manipulated by a human as a control signal to a system, such as a computer keyboard button, or to control power flow in a circuit, such as a light switch. Automatically operated switches can be used to control the motions of machines, for example, to indicate that a garage door has reached its full open position or that a machine tool is in a position to accept another workpiece. Switches may be operated by process variables such as pressure, temperature, flow, current, voltage, and force, acting as sensors in a process and used to automatically control a system. For example, a thermostat is a temperature-operated switch used to control a heating process. A switch that is operated by another electrical circuit is called a relay. Large switches may be remotely operated by a motor drive mechanism. Some switches are used to isolate electric power from a system, providing a visible point of isolation that can be padlocked if necessary to prevent accidental operation of a machine during maintenance, or to prevent electric shock.

An ideal switch would have no voltage drop when closed, and would have no limits on voltage or current rating. It would have zero rise time and fall time during state changes, and would change state without "bouncing" between on and off positions.

Practical switches fall short of this ideal; as the result of roughness and oxide films, they exhibit contact resistance, limits on the current and voltage they can handle, finite switching time, etc. The ideal switch is often used in circuit analysis as it greatly simplifies the system of equations to be solved, but this can lead to a less accurate solution. Theoretical treatment of the effects of non-ideal properties is required in the design of large networks of switches, as for example used in telephone exchanges.
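A minimal sketch of the simplification: in a simple series circuit, an ideal switch is either a perfect conductor (closed) or an open circuit, while a practical closed switch adds a small contact resistance. The numbers below are illustrative only, not taken from the text.

# Series circuit: voltage source V, load resistor R_load, and a switch.
V = 12.0        # volts (illustrative)
R_load = 100.0  # ohms (illustrative)

def current(closed: bool, contact_resistance: float = 0.0) -> float:
    """Current through the loop; an open switch passes no current."""
    if not closed:
        return 0.0
    return V / (R_load + contact_resistance)

print(current(True))         # ideal closed switch: 0.12 A
print(current(True, 0.05))   # practical switch with 50 milliohm contact resistance
print(current(False))        # open switch: 0 A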

Contacts

In the simplest case, a switch has two conductive pieces, often metal, called contacts, connected to an external circuit, that touch to complete (make) the circuit and separate to open (break) the circuit. The contact material is chosen for its resistance to corrosion, because most metals form insulating oxides that would prevent the switch from working. Contact materials are also chosen on the basis of electrical conductivity, hardness (resistance to abrasive wear), mechanical strength, low cost, and low toxicity. The formation of oxide layers at the contact surface, along with surface roughness and contact pressure, determines the contact resistance and wetting current of a mechanical switch. Sometimes the contacts are plated with noble metals for their excellent conductivity and resistance to corrosion. They may be designed to wipe against each other to clean off any contamination. Nonmetallic conductors, such as conductive plastic, are sometimes used. To prevent the formation of insulating oxides, a minimum wetting current may be specified for a given switch design.

Contact terminology

In electronics, switches are classified according to the arrangement of their contacts. A pair of contacts is said to be "closed" when current can flow from one to the other. When the contacts are separated by an insulating air gap, they are said to be "open", and no current can flow between them at normal voltages. The terms "make" for closure of contacts and "break" for opening of contacts are also widely used.

The terms pole and throw are also used to describe switch contact variations. The number of "poles" is the number of electrically separate switches which are controlled by a single physical actuator. For example, a "2-pole" switch has two separate, parallel sets of contacts that open and close in unison via the same mechanism. The number of "throws" is the number of separate wiring path choices other than "open" that the switch can adopt for each pole. A single-throw switch has one pair of contacts that can either be closed or open. A double-throw switch has a contact that can be connected to either of two other contacts, a triple-throw has a contact which can be connected to one of three other contacts, etc.
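As a rough way to picture the pole/throw terminology, the sketch below models a switch as a set of poles moved in unison by one actuator, each pole connecting to one of its throw positions or to none when open; the representation is illustrative, not a standard library.

class Switch:
    """P electrically separate poles moved in unison by one actuator;
    each pole connects to one of T throw positions, or is open."""
    def __init__(self, poles: int, throws: int):
        self.poles, self.throws = poles, throws
        self.position = None  # None = open; otherwise a throw index 0..T-1

    def actuate(self, throw):
        """Move every pole to the same throw position (None to open)."""
        if throw is not None and not (0 <= throw < self.throws):
            raise ValueError("invalid throw position")
        self.position = throw

    def is_closed(self, pole: int, throw: int) -> bool:
        """True if the given pole currently connects to the given throw
        (all poles share one position, since one actuator moves them)."""
        return self.position == throw

spst = Switch(poles=1, throws=1)   # simple on/off
spdt = Switch(poles=1, throws=2)   # common terminal to either of two contacts
dpdt = Switch(poles=2, throws=2)   # two separate circuits switched together
spdt.actuate(1)
print(spdt.is_closed(0, 1))        # True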

In a switch where the contacts remain in one state unless actuated, such as a push-button switch, the contacts can either be normally open (abbreviated "n.o." or "no") until closed by operation of the switch, or normally closed ("n.c." or "nc") and opened by the switch action. A switch with both types of contact is called a changeover switch or double-throw switch. These may be "make-before-break" ("MBB" or shorting) which momentarily connects both circuits, or may be "break-before-make" ("BBM" or non-shorting) which interrupts one circuit before closing the other.
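A tiny illustration of the difference, assuming a single-pole changeover switch: the lists below are just the intermediate contact states during one actuation from throw A to throw B, not any standard notation.

# Sequence of closed circuits during one changeover (illustrative).
break_before_make = [{"A"}, set(), {"B"}]        # circuit A opens before B closes
make_before_break = [{"A"}, {"A", "B"}, {"B"}]   # both circuits momentarily connected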

These terms have given rise to abbreviations for the types of switch which are used in the electronics industry such as "single-pole, single-throw" (SPST) (the simplest type, "on or off") or "single-pole, double-throw" (SPDT), connecting either of two terminals to the common terminal. In electrical power wiring (i.e., house and building wiring by electricians), names generally involve the suffix "-way"; however, these terms differ between British English and American English (i.e., the terms two way and three way are used with different meanings).

Contact bounce

Contact bounce (also called chatter) is a common problem with mechanical switches and relays, which arises as the result of electrical contact resistance (ECR) phenomena at interfaces. Switch and relay contacts are usually made of springy metals. When the contacts strike together, their momentum and elasticity act together to cause them to bounce apart one or more times before making steady contact. The result is a rapidly pulsed electric current instead of a clean transition from zero to full current. The effect is usually unimportant in power circuits, but causes problems in some analogue and logic circuits that respond fast enough to misinterpret the on‑off pulses as a data stream. In the design of micro-contacts, controlling surface structure (surface roughness) and minimizing the formation of passivated layers on metallic surfaces are instrumental in inhibiting chatter.
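Because logic circuits can misread these pulses, designers either filter the signal in hardware or "debounce" it in software. Below is a minimal, generic software-debounce sketch in Python (the timing constant is a hypothetical choice), which accepts a new switch state only after the raw reading has stayed stable for a fixed interval.

import time

DEBOUNCE_SECONDS = 0.02   # hypothetical: ignore changes shorter than 20 ms

class Debouncer:
    """Accept a new raw reading only after it stays stable for DEBOUNCE_SECONDS."""
    def __init__(self, initial: bool = False):
        self.stable = initial          # last accepted (debounced) state
        self.candidate = initial       # most recent raw reading
        self.changed_at = time.monotonic()

    def update(self, raw: bool) -> bool:
        now = time.monotonic()
        if raw != self.candidate:
            self.candidate = raw       # raw input changed; start timing it
            self.changed_at = now
        elif raw != self.stable and now - self.changed_at >= DEBOUNCE_SECONDS:
            self.stable = raw          # stayed stable long enough: accept it
        return self.stable

A caller would feed update() with successive raw readings of the contact; bounces shorter than the interval never reach the accepted state.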

In the Hammond organ, multiple wires are pressed together under the piano keys of the manuals. Their bouncing and non-synchronous closing of the switches is known as Hammond Click and compositions exist that use and emphasize this feature. Some electronic organs have a switchable replica of this sound effect.

Light switches

In building wiring, light switches are installed at convenient locations to control lighting and occasionally other circuits. By use of multiple-pole switches, multiway switching control of a lamp can be obtained from two or more places, such as the ends of a corridor or stairwell. A wireless light switch allows remote control of lamps for convenience; some lamps include a touch switch which electronically controls the lamp if touched anywhere. In public buildings several types of vandal resistant switches are used to prevent unauthorized use.

Slide switches

Slide switches are mechanical switches using a slider that moves (slides) from the open (off) position to the closed (on) position.

Electronic switches

The term switch has since spread to a variety of solid state electronics that perform a switching function, but which are controlled electronically by active devices rather than purely mechanically. These are categorized in the article electronic switch. Electromechanical switches (such as the traditional relay, electromechanical crossbar, and Strowger switch) bridge the categorization.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1963 2023-11-16 00:10:03

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,333

Re: Miscellany

1967) Ink

Gist

Ink is a colored, usually liquid, material used for writing and printing.

Summary

Ink is a fluid or paste of various colours, but usually black or dark blue, used for writing and printing. It is composed of a pigment or dye dissolved or dispersed in a liquid called the vehicle.

Writing inks date from about 2500 BC and were used in ancient Egypt and China. They consisted of lampblack ground with a solution of glue or gums, molded into sticks, and allowed to dry. Before use, the sticks were mixed with water. Various coloured juices, extracts, and suspensions of substances from plants, animals, and minerals also have been used as inks, including alizarin, indigo, pokeberries, cochineal, and sepia. For many centuries, a mixture of a soluble iron salt with an extract of tannin was used as a writing ink and is the basis of modern blue-black inks. Modern inks of this type usually contain ferrous sulfate as the iron salt with a small amount of mineral or organic acid. The resulting solution is light bluish black and, if used alone on paper, appears only faintly. After standing, it becomes darker and insoluble in water, which gives it a permanent quality. To make the writing darker and more legible at the outset, dyes are added. Modern coloured inks and washable inks contain soluble synthetic dyes as the sole colouring matter. The writing fades in strong light and rinses out of washable fabrics but lasts for many years if not subjected to such effects.

India ink is a dispersion of carbon black in water; the suspension is stabilized with various substances, including shellac in borax solution, soap, gelatin, glue, gum arabic, and dextrin. It is used mainly for drawing.

Modern printing inks are usually less fluid than writing inks. The composition, viscosity, density, volatility, and diffusibility of ink are variable.

The Chinese experimented with printing at least as early as AD 500, with inks from plant substances mixed with coloured earths and soot or lampblack. When Johannes Gutenberg invented printing with movable type in Germany in about 1440, inks were made by mixing varnish or boiled linseed oil with lampblack. For more than 300 years such inks continued to be used with little modification in their composition.

In 1772 the first patent was issued in England for making coloured inks, and in the 19th century chemical drying agents appeared, making possible the use of a wide variety of pigments for coloured inks. Later, varnishes of varying stiffness were developed to make inks for different papers and presses. Varnish was replaced by mineral oil in inks when high-speed newspaper presses were introduced. The oil base penetrated rapidly into newsprint and dried quickly. Water-based inks are also used, especially for screen printing. It was not until the beginning of the 20th century that ink-making became a complicated chemical-industrial process.

The manufacture of modern inks takes into account the surface to be imprinted, the printing process, and special requirements for the job, such as colour, opacity, transparency, brilliance, lightfastness, surface hardness, pliability, wettability, purity, and odourlessness. Inks for low-speed letterpress printing—the process usually used in book production—are compounded of carbon black, a heavy varnish, and a drier to reduce the drying time. Many other vehicles, pigments, and modifiers may be used. Intaglio inks are composed of petroleum naphthas, resins, and coal-tar solvents. The intaglio printing process is used chiefly in printing rotogravure newspaper supplements and cartons, labels, and wrappers. Plastic materials are usually printed with aniline inks, which contain methyl alcohol, synthetic resins, and shellac.

Details

Ink is a gel, sol, or solution that contains at least one colorant, such as a dye or pigment, and is used to color a surface to produce an image, text, or design. Ink is used for drawing or writing with a pen, brush, reed pen, or quill. Thicker inks, in paste form, are used extensively in letterpress and lithographic printing.

Ink can be a complex medium, composed of solvents, pigments, dyes, resins, lubricants, solubilizers, surfactants, particulate matter, fluorescents, and other materials. The components of inks serve many purposes; the ink's carrier, colorants, and other additives affect the flow and thickness of the ink and its dry appearance.

History

Many ancient cultures around the world have independently discovered and formulated inks for the purposes of writing and drawing. The knowledge of the inks, their recipes and the techniques for their production comes from archaeological analysis or from written text itself. The earliest inks from all civilizations are believed to have been made with lampblack, a kind of soot, as this would have been easily collected as a by-product of fire.

Ink was used in Ancient Egypt for writing and drawing on papyrus from at least the 26th century BC. Egyptian red and black inks included iron and ocher as a pigment, in addition to phosphate, sulfate, chloride, and carboxylate ions; meanwhile, lead was used as a drier.

Chinese inks may go back as far as four millennia, to the Chinese Neolithic Period. These used plant, animal, and mineral inks based on such materials as graphite that were ground with water and applied with ink brushes. Direct evidence for the earliest Chinese inks, similar to modern inksticks, dates to around 256 BC at the end of the Warring States period; they were produced from soot and animal glue. The best inks for drawing or painting on paper or silk are produced from the resin of pine trees that are between 50 and 100 years old. The Chinese inkstick is produced with a fish glue, whereas Japanese glue is from cow or stag.

India ink was invented in China, though materials were often traded from India, hence the name. The traditional Chinese method of making the ink was to grind a mixture of hide glue, carbon black, lampblack, and bone black pigment with a pestle and mortar, then pour it into a ceramic dish to dry. To use the dry mixture, a wet brush would be applied until it reliquified. The manufacture of India ink was well-established by the Cao Wei dynasty (220–265 AD). Indian documents written in Kharosthi with ink have been unearthed in Xinjiang. The practice of writing with ink and a sharp pointed needle was common in early South India. Several Buddhist and Jain sutras in India were compiled in ink.

Colorants:

Pigments

Pigment inks are used more frequently than dyes because they are more color-fast, but they are also more expensive, less consistent in color, and have less of a color range than dyes. Pigments are solid, opaque particles suspended in ink to provide color. Pigment molecules typically link together in crystalline structures that are 0.1–2 µm in size and comprise 5–30 percent of the ink volume. Qualities such as hue, saturation, and lightness vary depending on the source and type of pigment.

Dyes

Dye-based inks are generally much stronger than pigment-based inks and can produce much more color of a given density per unit of mass. However, because dyes are dissolved in the liquid phase, they have a tendency to soak into paper, potentially allowing the ink to bleed at the edges of an image.

To circumvent this problem, dye-based inks are made with solvents that dry rapidly or are used with quick-drying methods of printing, such as blowing hot air on the fresh print. Other methods include harder paper sizing and more specialized paper coatings. The latter is particularly suited to inks used in non-industrial settings (which must conform to tighter toxicity and emission controls), such as inkjet printer inks. Another technique involves coating the paper with a charged coating. If the dye has the opposite charge, it is attracted to and retained by this coating, while the solvent soaks into the paper. Cellulose, the wood-derived material most paper is made of, is naturally charged, and so a compound that complexes with both the dye and the paper's surface aids retention at the surface. Such a compound is commonly used in ink-jet printing inks.

An additional advantage of dye-based ink systems is that the dye molecules can interact with other ink ingredients, potentially allowing greater benefit as compared to pigmented inks from optical brighteners and color-enhancing agents designed to increase the intensity and appearance of dyes.

Dye-based inks can be used for anti-counterfeit purposes and can be found in some gel inks, fountain pen inks, and inks used for paper currency. These inks react with cellulose to bring about a permanent color change. Dye-based inks are also used to color hair.

Health and environmental aspects

There is a misconception that ink is non-toxic even if swallowed. Once ingested, ink can be hazardous to one's health. Certain inks, such as those used in digital printers, and even those found in a common pen, can be harmful. Though ink does not easily cause death, repeated skin contact or ingestion can cause effects such as severe headaches, skin irritation, or nervous system damage. These effects can be caused by solvents, or by pigment ingredients such as p-Anisidine, which helps create some inks' color and shine.

Three main environmental issues with ink are:

* Heavy metals
* Non-renewable oils
* Volatile organic compounds

Some regulatory bodies have set standards for the amount of heavy metals in ink. In recent years there has been a trend toward vegetable oils rather than petroleum oils, in response to demand for better environmental sustainability.

Ink uses up non-renewable oils and metals, which has a negative impact on the environment.

Carbon

Carbon inks were commonly made from lampblack or soot and a binding agent such as gum arabic or animal glue. The binding agent keeps carbon particles in suspension and adhered to paper. Carbon particles do not fade over time even when bleached or when in sunlight. One benefit is that carbon ink does not harm paper. Over time, the ink is chemically stable and therefore does not threaten the paper's strength. Despite these benefits, carbon ink is not ideal for permanence and ease of preservation. Carbon ink tends to smudge in humid environments and can be washed off surfaces. The best method of preserving a document written in carbon ink is to store it in a dry environment (Barrow 1972).

Recently, carbon inks made from carbon nanotubes have been successfully created. They are similar in composition to traditional inks in that they use a polymer to suspend the carbon nanotubes. These inks can be used in inkjet printers and produce electrically conductive patterns.

Iron gall (common ink)

Iron gall inks became prominent in the early 12th century; they were used for centuries and were widely thought to be the best type of ink. However, iron gall ink is corrosive and damages paper over time (Waters 1940). Items containing this ink can become brittle and the writing fades to brown. The original scores of Johann Sebastian Bach are threatened by the destructive properties of iron gall ink. The majority of his works are held by the German State Library, and about 25% of those are in advanced stages of decay (American Libraries 2000). The rate at which the writing fades is based on several factors, such as proportions of ink ingredients, amount deposited on the paper, and paper composition (Barrow 1972:16). Corrosion is caused by acid catalyzed hydrolysis and iron(II)-catalysed oxidation of cellulose (Rouchon-Quillet 2004:389).

Treatment is a controversial subject. No treatment undoes damage already caused by acidic ink. Deterioration can only be stopped or slowed. Some think it best not to treat the item at all for fear of the consequences. Others believe that non-aqueous procedures are the best solution. Yet others think an aqueous procedure may preserve items written with iron gall ink. Aqueous treatments include distilled water at different temperatures, calcium hydroxide, calcium bicarbonate, magnesium carbonate, magnesium bicarbonate, and calcium phytate. There are many possible side effects from these treatments. There can be mechanical damage, which further weakens the paper. Paper color or ink color may change, and ink may bleed. Other consequences of aqueous treatment are a change of ink texture or formation of plaque on the surface of the ink (Reibland & de Groot 1999).

Iron gall inks require storage in a stable environment, because fluctuating relative humidity increases the rate at which formic acid, acetic acid, and furan derivatives form in the material the ink was used on. Sulfuric acid acts as a catalyst to cellulose hydrolysis, and iron(II) sulfate acts as a catalyst to cellulose oxidation. These chemical reactions physically weaken the paper, causing brittleness.

Indelible ink

Indelible means "un-removable". Some types of indelible ink have a very short shelf life because of the quickly evaporating solvents used. India, Mexico, Indonesia, Malaysia and other developing countries have used indelible ink in the form of electoral stain to prevent electoral fraud. Election ink based on silver nitrate was first applied in the 1962 Indian general election, after being developed at the National Physical Laboratory of India.

The election commission in India has used indelible ink for many elections. Indonesia used it in its last election in Aceh. In Mali, the ink is applied to the fingernail. Indelible ink itself is not infallible, as it can be used to commit electoral fraud by marking members of opposing parties before they have a chance to cast their votes. There are also reports of "indelible" ink washing off voters' fingers in Afghanistan.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1964 2023-11-17 00:08:41

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,333

Re: Miscellany

1968) Hip Fracture

Gist

A hip fracture is a break in the thighbone (femur) of your hip joint. Joints are areas where two or more bones meet. Your hip joint is a "ball and socket" joint, where your thighbone meets your pelvic bone. The ball part of your hip joint is the head of the thighbone.

Summary

A hip fracture is a break in the thighbone (femur) of your hip joint.

Joints are areas where two or more bones meet. Your hip joint is a "ball and socket" joint, where your thighbone meets your pelvic bone. The ball part of your hip joint is the head of the thighbone. The socket is a cup-like structure in your pelvic bone. This is called the acetabulum. Hip fracture is a serious injury and needs immediate medical attention.

Most hip fractures happen to people older than age 65. The incidence of hip fractures increases with age. Caucasians and Asians are more likely to be affected than others. This is primarily because of a higher rate of osteoporosis. Osteoporosis (loss of bone tissue) is a disease that weakens bones.

Women are more prone to osteoporosis than men, and hip fracture is more common among women. About 2 million Americans have fractures each year because of osteoporosis.

Either a single break or multiple breaks can happen in a bone. A hip fracture is classified by the specific area of the break and the type of break or breaks in your bone.

The most common types of hip fractures are:

Femoral neck fracture. A femoral neck fracture happens 1 to 2 inches from your hip joint. This type of fracture is common among older adults and can be related to osteoporosis. This type of fracture may cause a complication because the break usually cuts off the blood supply to the head of the thighbone, which forms the hip joint.

Intertrochanteric hip fracture. An intertrochanteric hip fracture happens 3 to 4 inches from your hip joint. This type of fracture does not usually interrupt the blood supply to your bone and may be easier to repair.

Most hip fractures fall into these two categories in relatively equal numbers. Another type of fracture, called a stress fracture of the hip, may be harder to diagnose. This is a hairline crack in the thighbone that may not involve your whole bone. Overuse and repetitive motion can cause a stress fracture. The symptoms of this injury may mimic those of tendonitis or muscle strain.

What causes a hip fracture?

A fall is the most common cause of a hip fracture among the elderly. In a few people, a hip fracture may occur spontaneously. If you are younger, a hip fracture is generally the result of a car accident, a fall from a great height, or severe trauma.

Hip fracture is more common in older people. This is because bones become thinner and weaker from calcium loss as a person ages. This is generally due to osteoporosis.

Bones affected by osteoporosis are more likely to break if you fall. Most hip fractures that older people get happen as a result of falling while walking on a level surface, often at home.

If you are a woman, you lose 30% to 50% of your bone density as you age. The loss of bone speeds up dramatically after menopause because you make less estrogen. Estrogen contributes to maintaining bone density and strength.

Who is at risk for hip fracture?

You are at risk for a hip fracture if you have osteoporosis. Older age also puts you at more risk. Other things that may raise your risk include:

* Excessive alcohol consumption
* Lack of physical activity
* Low body weight
* Poor nutrition, including a diet low in calcium and vitamin D
* Gender
* Tall stature
* Vision problems
* Thinking problems such as dementia
* Physical problems
* Medicines that cause bone loss
* Cigarette smoking
* Living in an assisted-care facility
* Increased risk for falls, related to conditions such as weakness, disability, or unsteady gait

There may be other risks, depending on your specific health condition.

Details

A hip fracture is a break that occurs in the upper part of the femur (thigh bone), at the femoral neck or (rarely) the femoral head. Symptoms may include pain around the hip, particularly with movement, and shortening of the leg. Usually the person cannot walk.

A hip fracture is usually a femoral neck fracture. Such fractures most often occur as a result of a fall. (Femoral head fractures are a rare kind of hip fracture that may also be the result of a fall but are more commonly caused by more violent incidents such as traffic accidents.) Risk factors include osteoporosis, taking many medications, alcohol use, and metastatic cancer. Diagnosis is generally by X-rays. Magnetic resonance imaging, a CT scan, or a bone scan may occasionally be required to make the diagnosis.

Pain management may involve opioids or a nerve block. If the person's health allows, surgery is generally recommended within two days. Options for surgery may include a total hip replacement or stabilizing the fracture with screws. Treatment to prevent blood clots following surgery is recommended.

About 15% of women break their hip at some point in life; women are more often affected than men. Hip fractures become more common with age. The risk of death in the year following a fracture is about 20% in older people.

Signs and symptoms

The classic clinical presentation of a hip fracture is an elderly patient who sustained a low-energy fall and now has groin pain and is unable to bear weight. Pain may be referred to the supracondylar knee. On examination, the affected extremity is often shortened and externally rotated compared to the unaffected leg.

Complications

Nonunion, failure of the fracture to heal, is common in fractures of the neck of the femur, but much rarer with other types of hip fracture. Avascular necrosis of the femoral head occurs frequently (20%) in intracapsular hip fractures, because the blood supply is interrupted.

Malunion, healing of the fracture in a distorted position, is very common. The thigh muscles tend to pull on the bone fragments, causing them to overlap and reunite incorrectly. Shortening, varus deformity, valgus deformity, and rotational malunion all occur often because the fracture may be unstable and collapse before it heals. This may not be as much of a concern in patients with limited independence and mobility.

Hip fractures rarely result in neurological or vascular injury.

Medical

Many people are unwell before breaking a hip; it is common for the break to have been caused by a fall due to some illness, especially in the elderly. Nevertheless, the stress of the injury, and a likely surgery, increases the risk of medical illness including heart attack, stroke, and chest infection.

Hip fracture patients are at considerable risk for thromboembolism, blood clots that dislodge and travel in the bloodstream. Deep venous thrombosis (DVT) is when the blood in the leg veins clots and causes pain and swelling. This is very common after hip fracture as the circulation is stagnant and the blood is hypercoagulable as a response to injury. DVT can occur without causing symptoms. A pulmonary embolism (PE) occurs when clotted blood from a DVT comes loose from the leg veins and passes up to the lungs. Circulation to parts of the lungs is cut off, which can be very dangerous. Fatal PE may have an incidence of 2% after hip fracture and may contribute to illness and mortality in other cases.

Mental confusion is extremely common following a hip fracture. It usually clears completely, but the disorienting experience of pain, immobility, loss of independence, moving to a strange place, surgery, and drugs combine to cause delirium or accentuate pre-existing dementia.

Urinary tract infection (UTI) can occur. Patients are immobilized and in bed for many days; they are frequently catheterised, commonly causing infection.

Prolonged immobilization and difficulty moving make it hard to avoid pressure sores on the sacrum and heels of patients with hip fractures. Whenever possible, early mobilization is advocated; otherwise, alternating pressure mattresses should be used.

Risk factors

Hip fracture following a fall is likely to be a pathological fracture. The most common causes of weakness in bone are:

* Osteoporosis.
* Other metabolic bone diseases such as Paget's disease, osteomalacia, osteopetrosis and osteogenesis imperfecta. Stress fractures may occur in the hip region with metabolic bone disease.
* Elevated levels of homocysteine, a toxic 'natural' amino acid.
* Benign or malignant primary bone tumors are rare causes of hip fractures.
* Metastatic cancer deposits in the proximal femur may weaken the bone and cause a pathological hip fracture.
* Infection in the bone is a rare cause of hip fracture.
* Tobacco smoking (associated with osteoporosis).

Mechanism:

Functional anatomy

The hip joint is a ball-and-socket joint. The femur connects at the acetabulum of the pelvis and projects laterally before angling medially and inferiorly to form the knee. Although this joint has three degrees of freedom, it is still stable due to the interaction of ligaments and cartilage. The labrum lines the circumference of the acetabulum to provide stability and shock absorption. Articular cartilage covers the concave area of acetabulum, providing more stability and shock absorption. Surrounding the entire joint itself is a capsule secured by the tendon of the psoas muscle and three ligaments. The iliofemoral, or Y, ligament is located anteriorly and serves to prevent hip hyperextension. The pubofemoral ligament is located anteriorly just underneath the iliofemoral ligament and serves primarily to resist abduction, extension, and some external rotation. Finally the ischiofemoral ligament on the posterior side of the capsule resists extension, adduction, and internal rotation. When considering the biomechanics of hip fractures, it is important to examine the mechanical loads the hip experiences during low energy falls.

Biomechanics

The hip joint is unique in that it experiences combined mechanical loads. An axial load along the shaft of the femur results in compressive stress. A bending load at the neck of the femur causes tensile stress along the upper part of the neck and compressive stress along the lower part of the neck. While osteoarthritis and osteoporosis are associated with bone fracture as we age, these diseases are not the cause of the fracture alone. Low-energy falls from standing are responsible for the majority of fractures in the elderly, but fall direction is also a key factor. The elderly tend to fall to the side instead of forward, so the lateral hip strikes the ground first. During a sideways fall, the chances of hip fracture increase 15-fold and 12-fold in elderly males and females, respectively.
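As a simplified illustration of these combined loads (treating the femoral neck as a uniform beam, which real bone is not, so this is only a sketch), the stress at a point in the neck can be written as

\[
\sigma \;=\; \frac{F}{A} \;\pm\; \frac{M\,c}{I},
\]

where F is the axial load and A the cross-sectional area of the neck (the compressive term), M is the bending moment, c the distance from the neutral axis, and I the second moment of area (the bending term, tensile along the upper side of the neck and compressive along the lower side).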

Neurological factors

Elderly individuals are also predisposed to hip fractures due to many factors that can compromise proprioception and balance, including medications, vertigo, stroke, and peripheral neuropathy.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1965 2023-11-18 00:11:11

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,333

Re: Miscellany

1969) Skeleton

Gist

The skeleton is the supporting framework of an organism. It is typically made out of hard, rigid tissue that supports the form of the animal’s body and protects vulnerable organs.

For land-dwelling animals, skeletons are also necessary to support movement, since walking and flying rely on the ability to exert force on rigid levers such as legs and wings.

Arthropods such as insects have an “exoskeleton” – an outer covering of a hard material called chitin that protects their internal tissues and allows them to walk, jump, and fly.

Vertebrates such as humans have internal skeletons, made of a tissue called bone that gives the limbs their rigidness and protects vital organs such as the heart and brain.

Summary

A skeleton is the structural frame that supports the body of most animals. There are several types of skeletons, including the exoskeleton, which is a rigid outer shell that holds up an organism's shape; the endoskeleton, a rigid internal frame to which the organs and soft tissues attach; and the hydroskeleton, a flexible internal structure supported by the hydrostatic pressure of body fluids.

Vertebrates are animals with an endoskeleton centered around an axial vertebral column, and their skeletons are typically composed of bones and cartilages. Invertebrates are other animals that lack a vertebral column, and their skeletons vary, including hard-shelled exoskeleton (arthropods and most molluscs), plated internal shells (e.g. cuttlebones in some cephalopods) or rods (e.g. ossicles in echinoderms), hydrostatically supported body cavities (most), and spicules (sponges). Cartilage is a rigid connective tissue that is found in the skeletal systems of vertebrates and invertebrates.

Vertebrate skeletons

In most vertebrates, the main skeletal component is bone. Bones compose a unique skeletal system for each type of animal. Another important component is cartilage which in mammals is found mainly in the joint areas. In other animals, such as the cartilaginous fishes, which include the sharks, the skeleton is composed entirely of cartilage. The segmental pattern of the skeleton is present in all vertebrates, with basic units being repeated, such as in the vertebral column and the ribcage.

Bones are rigid organs that form part of the endoskeleton of vertebrates. They provide structural support for the body, assist in movement by opposing muscular contraction, and create a protective wall around internal organs. Bones are primarily made of inorganic minerals, such as hydroxyapatite, while the remainder is made of an organic matrix and water. The hollow tubular structure of bones provides considerable resistance against compression while staying lightweight. Most cells in bones are either osteoblasts, osteoclasts, or osteocytes.

Bone tissue is a type of dense connective tissue. Part of bone is mineralized tissue, which gives it rigidity and a honeycomb-like three-dimensional internal structure. Bones also produce red and white blood cells and serve as calcium and phosphate storage at the cellular level. Other types of tissue found in bones include marrow, endosteum and periosteum, nerves, blood vessels, and cartilage.

During embryonic development, bones are developed individually from skeletogenic cells in the ectoderm and mesoderm. Most of these cells develop into separate bone, cartilage, and joint cells, and they are then articulated with one another. Specialized skeletal tissues are unique to vertebrates. Cartilage grows more quickly than bone, causing it to be more prominent earlier in an animal's life before it is overtaken by bone. Cartilage is also used in vertebrates to resist stress at points of articulation in the skeleton. Cartilage in vertebrates is usually encased in perichondrium tissue. Ligaments are elastic tissues that connect bones to other bones, and tendons are elastic tissues that connect muscles to bones.

Details

Skeleton is the supportive framework of an animal body. The skeleton of invertebrates, which may be either external or internal, is composed of a variety of hard nonbony substances. The more complex skeletal system of vertebrates is internal and is composed of several different types of tissues that are known collectively as connective tissues. This designation includes bone and the various fibrous substances that form the joints, connect bone to bone and bone to muscle, enclose muscle bundles, and attach the internal organs to the supporting structure.

Comparative study of skeletal systems

In addition to its supportive function, the animal skeleton may provide protection, facilitate movement, and aid in certain sensory functions. Support of the body is achieved in many protozoans by a simple stiff, translucent, nonliving envelope called a pellicle. In nonmoving (sessile) coelenterates, such as coral, whose colonies attain great size, it is achieved by dead structures, both internal and external, which form supporting axes. In the many groups of animals that can move, it is achieved either by external structures known as exoskeletons or by internal structures known as endoskeletons. Many animals remain erect or in their normal resting positions by means of a hydrostatic skeleton—i.e., fluid pressure in a confined space.

The skeleton's protective function alone may be provided by structures situated on the body surface—e.g., the lateral sclerites of centipedes and the shell (carapace) of crabs. These structures carry no muscle and form part of a protective surface armour. The scales of fish, the projecting spines of echinoderms (e.g., sea urchins), the minute needlelike structures (spicules) of sponges, and the tubes of hydroids, all raised from the body surface, are similarly protective. The bones of the vertebrate skull protect the brain. In the more advanced vertebrates and invertebrates, many skeletal structures provide a rigid base for the insertion of muscles as well as providing protection.

The skeleton facilitates movement in a variety of ways, depending on the nature of the animal. The bones of vertebrates and the exoskeletal and endoskeletal units of the cuticle of arthropods (e.g., insects, spiders, crabs) support opposing sets of muscles (i.e., extensors and flexors). In other animal groups the hydrostatic skeleton provides such support.

In a limited number of animals, the hard skeleton transmits vibrations that are sensed by the hearing mechanism. In some forms—e.g., bony fishes and fast-swimming squids—it aids in the formation of buoyancy mechanisms that enable the animal to adjust its specific gravity for traveling at different depths in the sea.

Principal types of skeletal elements

Certain types of skeletons usually characterize particular animal phyla, but there are a limited number of ways in which an animal can form its skeleton. Similar modes of skeleton formation have evolved independently in different groups to fulfill similar needs. The cartilaginous braincase of the octopus and the squid, which are invertebrates, has a microscopic structure similar to the cartilage of vertebrates. The calcareous (i.e., calcium-containing) internal skeleton of the echinoderms is simply constructed but is essentially not far different from the much more elaborate bones of vertebrates. Skeletal fibres of similar chemical composition occur in unrelated animal groups; for example, coiled shells of roughly similar chemical composition are present in gastropods (e.g., snails), brachiopods (e.g., lamp shells), and cephalopods (e.g., chambered nautilus). The mechanical properties of different skeletal types vary considerably according to the needs of animals of particular size ranges or habits (e.g., aquatic, terrestrial).

Skeletal elements are of six principal types: hard structures, semirigid structures, connective tissue, hydrostatic structures, elastic structures, and buoyancy devices.

Cuticular structures

Hard structures may be either internal or external. They may be composed of bone (calcareous or membranous structures that are rigid), crystals, cuticle, or ossicles (i.e., minute plates, rods, or spicules).

The scales of some fishes (e.g., sturgeon) may be heavy, forming a complete external jointed armour; calcareous deposits make them stiff. They grow at their margins, and their outer surfaces become exposed by disintegration of the covering cell layer, epithelium. Other fish scales—i.e., those of most modern bony fishes—are thin, membranous, and flexible.

Calcareous structures

The external shells of gastropods and bivalve mollusks (e.g., clams, scallops) are calcareous, stiff, and almost detached from the body. The laminated, or layered, shell grows by marginal and surface additions on the inner side. Muscles are inserted on part of the shell, and the body of the animal can be withdrawn into the protection of the shell. Chambered calcareous shells formed by cephalopods and by protozoans of the order Foraminifera become so large and so numerous that the broken remains of the shells may constitute a type of sand covering large areas of tropical beaches; the pieces may also consolidate into rock. Protozoans of the order Radiolaria form skeletons of silica in the form of very complicated bars. The body of the animal flows partly inside and partly outside among the bars.

Coral skeletons are also partly inside and partly outside the animal. Calcareous depositions below a young coral polyp (i.e., an individual member of the animal colony) are secreted by the ectoderm (generally, the outermost of three basic tissue layers), fixed to the surface to which the animal is attached, and thrown up into ridges, which form a cup into which the polyp can contract. A spreading of the base and the formation of more polyps on the base are followed by a central humping up of the soft tissue and further secretion of skeleton. An upright branch is thus formed, and, in time, large branching corals many feet high may arise from the seafloor. Most of the soft tissue is then external to an axial calcareous skeleton, but in rapidly growing corals the skeleton is perforate, and soft tissue lies both inside and outside it. Protection of the animal is provided by the skeletal cups into which each polyp can contract, but usually neither the whole colony nor a single animal has mobility.

The starfishes, brittlestars, and crinoids (Echinodermata) have many types of calcareous ossicles in the mesoderm (generally, the tissue layer between the gut and the outermost layer). These form units that articulate with each other along the arms, spines that project from the body covering and articulate with ossicles, and calcareous jaws (in sea urchins). Less well organized calcareous deposits stiffen the body wall between the arms of the starfish.

Crystals

Crystals form the basis of many skeletons, such as the calcareous triradiate (three-armed) and quadradiate (four-armed) spicules of calcareous sponges. The cellular components of the body of the sponge usually are not rigid and have no fixed continuity; cells from the outer, inner, and middle layers of a sponge are freely mobile. Spicules, which may be of silica, often extend far from the body. They can be shed at times and replaced by new spicules. Skeletal fibres are present in many sponges.

Calcareous spicules, large and small, form an important part of the skeleton of many coelenterates. Huge needlelike spicules, projecting beyond the soft tissue of sea pens (pennatulids), for example, both support the flanges that bear feeding polyps and hinder browsing by predators. Minute internal spicules may be jammed together to form a skeletal axis, as in the red coral. In some corals (Alcyonaria), spicules combine with fibres made of keratin (a protein also found in hair and feathers) or keratins with amorphous calcite (noncrystalline calcium carbonate) to form axial structures of great strength and size, enabling colonies to reach large bushlike proportions.

Skeletons consisting of cuticle but remote from the body surface give support and protection to other coelenterates, the colonial sedentary hydroids, and form tubes in which pogonophores (small threadlike aquatic animals) live. Exoskeletons that are superficially similar but quite different from hydroids and pogonophores in both manner of growth and internal support occur in the graptolites, an extinct group, and in the protochordates, Rhabdopleura and Cephalodiscus. Some graptolites, known only from fossil skeletal remains many millions of years old, had skeletons similar to those of Rhabdopleura.

In segmented and in many nonsegmented invertebrates, cuticle is secreted by the ectoderm and remains in contact with it. It is thin in annelid worms (e.g., the earthworm) and thicker in roundworms (nematodes) and arthropods. In many arthropods the cuticle is infolded to form endoskeletal structures of considerable complexity. Rigidity is imposed on parts of the cuticle of arthropods either by sclerotization or tanning, a process involving dehydration (as in crustaceans and insects), by calcification (as in millipedes), or by both, as in many crabs. In most arthropods the body and legs are clearly segmented. On the dorsal (upper) side of each segment is a so-called tergal sclerite of calcified or sclerotized cuticle, usually a ventral (lower) sternite and often lateral pleurites—i.e., side plates. There may be much fusion of sclerites on the same segment. Sometimes fusion occurs between dorsal sclerites of successive segments, to form rigid plates. Leg sclerotizations are usually cylindrical.

Internally, apodemes are hollow rods or flanges derived from the cuticle; they extend inward from the exoskeleton. Apodemes have a function similar to the bones of vertebrates, for they provide sites for muscle insertion, thereby allowing the leverage that can cause movement of other parts of the skeleton independent of hydrostatic forces. The apodemal system is most fully developed in the larger and more swiftly moving arthropods. The cuticle is a dead secretion and can only increase in thickness. At intervals an arthropod molts the entire cuticle, pulling out the apodemes. The soft body rapidly swells before secreting a new, stiff cuticle. The molting process limits the upper size of cuticle-bearing animals. Arthropods can never achieve the body size of the larger vertebrates, in which the bones grow with the body, because the mechanical difficulties of molting would be too great. The mechanical properties of bone limit terrestrial mammals to about the size of a 12-ton elephant. In water, however, bone can support a heavier animal, such as a blue whale weighing 100 tons. Bone is mechanically unsuited to support an animal as bulky as, for example, a large ship.

Semirigid structures

Flexible cuticular structures on the surface of unsegmented roundworms and arthropods are just as important in providing support as are the more rigid sclerites. Mobility between the sclerites of body and legs is maintained by regions of flexible cuticle, the arthrodial membranes. Some sclerites are stiffened by closely packed cones of sclerotization at their margins, forming structures that combine rigidity and flexibility.

The mesoglea layer, which lies between the ectoderm and the endoderm (the innermost tissue layer) of coelenterates, is thin in small species and massive in large ones. It forms a flexible skeleton, associated with supporting muscle fibres on both the ectodermal and endodermal sides. In many branched alcyonarians, or soft corals, the mesoglea is filled with calcareous spicules, which are not tightly packed and thus permit the axis of each coral branch to bend with the swell of the sea. As a result, soft corals, which are sessile and colonial, are very strong and can resist water movements without breaking. In this respect they are unlike the calcareous corals, which break in violent currents of water. The often beautifully coloured gorgonian corals, or sea fans, are supported by an internal horny axis of keratin. They too bend with the water movements, except when very large. In some forms the horny axis may be impregnated with lime. The horny axes are often orientated in complex branches set in one plane, so that the coral forms a feeding net across a prevailing current. Certain chordates also possess a flexible endoskeleton; the rodlike notochord occurs in adult lampreys and in most young fishes. Running just within the dorsal midline, it provides a mechanical basis for their swimming movements. In the higher vertebrates the notochord is surrounded by cartilage and finally replaced by bone. In many protochordates, however, the notochord remains unchanged. Cartilage too forms flexible parts of the endoskeletal system of vertebrates, such as between articulating bones and forming sections of ribs.

Connective tissue

Below the ectoderm of many animals, connective tissue forms sheets of varying complexity, existing as fine membranes or as complex superficial layers of fibres. Muscles inserted on the fibres form subepithelial complexes in many invertebrates; and vertebrate muscles are often inserted on firm sheets of connective tissue (fascia) deep in the body that are also formed by these fibres. Particular concentrations of collagen fibres, oriented in different directions, occur superficially in the soft-bodied Peripatus (a caterpillar-like terrestrial invertebrate). In coelenterates they also occur deep in the body. In many arthropods, collagen fibres form substantial endosternites (i.e., ridges on the inner surface of the exoskeleton in the region of the thorax) that are isolated from other skeletal structures. These fibres are not shed during molting, and the endosternites grow with the body. The fibres do not stretch, but their arrangement provides firm support for muscles and sometimes permits great changes in body shape.

The hydrostatic skeleton

The hydrostatic skeleton is made possible by closed fluid-filled internal spaces of the body. It is of great importance in a wide variety of animal groups because it permits the antagonistic action of muscles used in locomotion and other movements. The fluid spaces are part of the gastrovascular cavity in the Coelenterata, part of the coelomic cavity (between the gut and the body wall) in the worms, and hemocoelic (i.e., in a type of body cavity consisting of a complex of spaces between tissues and organs) or vascular in mollusks and arthropods. As the exoskeleton becomes more rigid and the apodemal endoskeleton more fully developed in arthropods, the importance of the hemocoele in promoting antagonistic muscle action decreases. In larger and more heavily sclerotized species, the hydrostatic skeleton is no longer of locomotory significance; the muscles work directly against the articulated skeleton, as in vertebrates.

Elastic structures

In the larger medusae, or jellyfishes (Coelenterata), the musculature is mainly circular. By contracting its bell-shaped body, the jellyfish narrows, ejecting water from under the bell; this pushes the animal in the opposite direction from that of the water. There are no antagonistic muscles to counteract the contracted circular muscles. A passive, slow return of the bell to its expanded shape is effected largely by the elasticity of the mesoglea layer, which crumples during the propulsive contraction. After the circular muscles relax, the distorted mesoglea fibres pull them out to expand the bell. In many of the larger mammals, elastic fibres are used more extensively. The elephant and the whale, for example, possess an abundance of elastic tissue in their musculature.

Elasticity of surface cuticle assists recovery movements in roundworms and arthropods, but the stresses and strains that cuticle can withstand are limited. Special sensory devices (chordotonal organs) convey the extent of stress in the cuticle to the animal’s nervous system, thus preventing the generation of stresses great enough to damage the structure. There are also elastic units in the base of the wings of some insects. These rather solid elastic structures alternately store and release energy. They have probably been important in the evolution of the extremely rapid wingbeat of some insects.


Buoyancy devices

Buoyancy devices are complex structures that involve both hard and soft parts of the animal. In vertebrates they may be closely associated with or form part of the auditory apparatus. A chain of auditory ossicles in mammals transmits vibrations from the tympanic membrane to the internal ear; simpler devices occur in the cold-blooded land vertebrates. In the roach fish, which has sensitive hearing, a chain of four Weberian ossicles connects the anterior, or forward, end of the swim bladder to the auditory organs of the head. Sound vibrations cause changes in volume in the anterior part of the bladder and are transmitted to the nervous system through the ossicles. The swim bladder of other fishes appears to be a buoyancy organ and not skeletal; however, cephalopods capable of swimming rapidly in both deep and shallow water possess air-filled buoyancy organs. The calcareous coiled shell of the bottom-dwelling Nautilus is heavy and chambered; the animal lives in the large chamber. The shell behind is coiled and composed of air-filled chambers that maintain the animal in an erect position. When the entire coiled, lightly constructed shell of Spirula sinks into the body, the animal has internal air spaces that can control its buoyancy and also its direction of swimming. In cuttlefish and squids, a shell that was originally chambered has become transformed into a laminated cuttlebone. Secretion and absorption of gases to and from the cuttlebone by the bloodstream provide a hydrostatic buoyancy mechanism that enables the squids to swim with little effort at various depths. This device has probably made it possible for some species to grow to a length of 18 metres (59 feet). Some siphonophores (Coelenterata) have a chambered gas-filled float, its walls stiffened with a chitinlike structure in Velella.
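The principle behind all such buoyancy devices is the ordinary condition for neutral buoyancy, stated here in general terms rather than as a description of any one species: the animal neither rises nor sinks when its weight equals the weight of the water it displaces,

\[
\rho_{\text{animal}}\, V g \;=\; \rho_{\text{water}}\, V g
\quad\Longleftrightarrow\quad
\rho_{\text{animal}} = \rho_{\text{water}},
\]

so by secreting gas into, or absorbing it from, its chambers the animal lowers or raises its average density until it matches the surrounding water at the chosen depth.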

Varieties of invertebrate skeletons:

Skeletomusculature of a mobile coelenterate

A sea anemone provides an example of the way in which a hydrostatic skeleton can act as the means by which simple sheets of longitudinal and circular muscle fibres can antagonize each other to produce contrasting movements. The fluid-filled space is the large digestive, or internal, cavity of the body. If the mouth is slightly open when both longitudinal and circular muscles of the trunk contract, fluid flows out of the internal space, and the body shrinks. If the mouth is closed, the internal fluid-filled space cannot be compressed; thus, the body volume remains constant, and contraction of the longitudinal muscles of the trunk both shortens and widens the body. Contraction of the circular muscles pulls out relaxed longitudinal muscles, and the body lengthens. Appropriate coordination of muscular action working against the hydrostatic skeleton can produce locomotion movements—such as burrowing in sand or stepping along a hard surface—by billowing out one side of the base of the animal while the other side of the base contracts, forcing fluid into the relaxed, dilated portion. The forward dilated part sticks to the surface, and its muscles contract, pulling the animal forward.

The circular muscles lie outside a substantial layer of skeletal mesoglea fibrils; longitudinal muscles are internal to the layer. The muscle fibres are attached at either end to the mesoglea fibres, which, like vertebrate bones, cannot stretch. Unlike bones, however, the mesoglea sheet is able to change its shape, because its components (fibrils) are set in layers at an angle to each other and to the long axis of the body. Alteration in length and width of the body is accompanied by changes in the angle between two sheets of mesoglea fibrils; thus, support for the muscles can vary greatly in position. The range in change of shape of the sea anemone is implemented by simple muscles and connective-tissue mesoglea fibrils. The movements are characteristically slow, often occurring so slowly as to be invisible to the naked eye. Faster movements would engender greater increases in internal pressures, thus placing a needless burden on the musculature. All coelenterates utilize this slow hydrostatic-muscular system, but, as described for the jellyfish, some faster movements are also possible.

Skeletomusculature of an earthworm

The hydrostatic skeleton of many other animals is provided by the body cavity, or coelom, which is situated outside the alimentary canal and inside the body wall. In an earthworm the body cavity of each segment of the trunk is separated from that of the next by a partition, so that the segmented body possesses a series of more or less isolated coelomic, fluid-filled spaces of fixed volume. The body wall contains circular and longitudinal muscles and some minor muscles. As in the sea anemone, skeletal connective-tissue fibres form the muscle insertions. As a worm crawls or burrows, a group of segments shorten and widen, their total volume remaining the same; contact with the ground is maintained by projection of bristlelike structures from the cuticle (setae). Groups of short, wide segments are formed at intervals along the body; the segments between these groups are longer, narrower, and not in contact with the ground. As the worm crawls, the thickened zones appear to travel backward along the body, because the segments just behind each zone thicken, widen, and cling to the ground, while the segments at the front end of each wide zone free themselves from the ground and become longer and narrower. Thus, the head end of the body intermittently progresses forward over the ground or enters a crevice as the longitudinally extending segments are continuously being lengthened outward from the front end of each thickened zone. It is therefore only the long, narrow segments that are moving forward. This mechanism of crawling by the alternate and antagonistic action of the longitudinal and circular muscles is made possible by the hydrostatic action of the incompressible coelomic spaces. The movements of most other annelid worms are also controlled by a hydrostatic skeleton.
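
The antagonism between circular and longitudinal muscles follows from simple geometry. If a segment is idealized as a cylinder of fixed volume (an assumption for illustration; real segments are only roughly cylindrical), then shortening it must widen it and narrowing it must lengthen it, since the radius varies as 1/√(length). A minimal Python sketch of that relationship:

from math import sqrt

def new_radius(old_radius, old_length, new_length):
    # For a cylinder of constant volume, pi * r^2 * l is fixed,
    # so the new radius is old_radius * sqrt(old_length / new_length).
    return old_radius * sqrt(old_length / new_length)

# A segment contracted to half its length widens by a factor of about 1.41:
print(new_radius(1.0, 2.0, 1.0))   # 1.414...
# Stretched to twice its length, it narrows to about 0.71 of its former radius:
print(new_radius(1.0, 1.0, 2.0))   # 0.707...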

Skeletomusculature of arthropods

In arthropods the skeleton is formed in part by the cuticle covering the body surface, by internal connective-tissue fibres, and by a hydrostatic skeleton formed by the hemocoele, or enlarged blood-filled spaces. The cuticle may be flexible or stiff, but it does not stretch. In the Onychophora (e.g., Peripatus) the cuticle is thin and much-folded, thus allowing great changes in the body shape. The muscular body wall, as in annelids, works against the hydrostatic skeleton in the hemocoele. Each leg moves in a manner similar to the body movement of a sea anemone or a Hydra. But a unique lateral isolating mechanism allows suitable hydrostatic pressures to be available for each leg. Muscles of a particular leg thus can be used independently, no matter what the other legs may be doing or what influence the body movements may be having on the general hemocoele.

In most adult arthropods the cuticle is less flexible than in the Onychophora: localized stiff sclerites are separated by flexible joints between them, and, as a result, the hydrostatic action of the hemocoele is of less importance. Cuticle, secreted by the ectodermal cells, may be stiffened by deposition of lime or by tanning (sclerotization). Muscle fibres or their connective-tissue supports are connected to the cuticle by tonofibrils within the cytoplasm of ectodermal cells.

The joints between the stiffened sclerites consist of undifferentiated flexible cuticle. Between the distal (i.e., away from the central body axis) leg segments of many arthropods, the flexible cuticle at the joint is relatively large ventrally (i.e., on the lower side) and very short dorsally (i.e., on the upper side), thus forming a dorsal hinge. Flexor muscles (for drawing the limb toward the body) span the joint and cause flexure of the distal part of the leg. There are no extensor muscles, however, and straightening of the leg when it is off the ground is effected by hydrostatic pressure of the general hemocoele and by proximal depressor muscles that open the joint indirectly. Between the proximal leg segments (i.e., those closer to the point of insertion of the limb into the body), pivot joints are usually present. They are composed of a pair of imbricating facets near the edges of the overlapping cylinders that cover the leg segments, with one pair on the anterior face of the leg and another on the posterior face. A pair of antagonistic muscles span the leg joint and move the distal segment up or down, without reference to hydrostatic pressure.

The more-advanced arthropods—those with the most elaborate sclerites and joints—are no longer dependent upon hydrostatic forces for skeletomuscular action. Evolution away from the hydrostatic skeleton has made possible faster and stronger movements of one cuticular unit upon another. The type of skeletomusculature appropriate for producing fast movements, such as rapid running, jumping, or flying, is quite different from those producing strong movements, such as those used by burrowing arthropods.

The flexible edges of the sclerites of burrowing centipedes (Geophilomorpha) enable them to change their shape in an earthwormlike manner while preserving a complete armour of surface sclerites at all times. The marginal zones of the sclerites bear cones of sclerotization that are set in the flexible cuticle, thus permitting flexure in any direction without impairing strength. The surface of the arthropodan cuticle is rendered waterproof, or hydrofuge, by a variety of structures, such as waxy layers, scales, and hairs. These features enable the animals not only to resist desiccation on land but to exist in damp places without uptake of water—a process that could cause swelling of the body and lead to death. The cuticular endoskeleton is formed by an infolding of surface cuticle. Sometimes a large surface sclerite called a carapace covers both the head and the thorax, as in crabs and lobsters.

Connective-tissue fibres form substantial endoskeletal units in arthropods. The fibres are not united to the cuticle and are not shed during molting; rather, they grow with the body. A massive and compact endosternite (internal sternite), formed by connective-tissue fibres, frequently lies below the gut and above the nerve cord. In Limulus, the horseshoe crab, muscles from the anterior margin of the coxa (the leg segment nearest the body) are inserted on the endosternite, as are other muscles from the posterior margin.

The jointed cuticular skeleton of arthropods enables them to attain considerable size, up to a few metres in length, and to move rapidly. These animals have solved most of the problems presented by life on dry land in a manner unequaled by any other group of invertebrates. They have also evolved efficient flight by means of wings derived from the cuticle. The arthropods can never achieve the body size of the larger vertebrates, although mechanically they perform as well as smaller vertebrates. As mentioned above, the major limiting factor to size increase is the need to molt the exoskeleton.

Skeleton of echinoderms

Among the invertebrates, only the echinoderms possess an extensive mesodermal skeleton that is stiffened by calcification—as in vertebrates—and also grows with the body. The five-rayed symmetry of echinoderms may be likened to the vertebral axis of vertebrates. It is similarly supported; a series of ambulacral ossicles in each ray roughly corresponds with the vertebrae of vertebrates. The ossicles articulate with each other in mobile echinoderms such as starfishes and form the basis of the rapid movements of the arms of crinoids, brittlestars, and similar forms. The ambulacral ossicles and, in many cases, the surface spines provide protection for superficial nerve cords, which extend along the arms and around the mouth. The ossicles also protect the tubes of the water-vascular system, a hydraulic apparatus peculiar to echinoderms. In sea urchins a spherical, rigid body is formed by the five arms coming together dorsally around the anus; the ambulacral ossicles are immobile, and the body wall between the ambulacra is made rigid by a layer of calcareous plates below the ectoderm, which completes the continuous spherical skeleton. Locomotion is carried out by extensible tube feet, soft structures operated by the water-vascular system. Mobile spines also serve for locomotion in many classes, the base of the spine articulating with a part of some stable ossicle. The fine internal structure of echinoderm sclerites bears no resemblance to that of bone.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1966 2023-11-19 00:04:40

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,333

Re: Miscellany

1970) Fuel Oil

Gist

Fuel oil is fuel consisting mainly of residues from crude-oil distillation. It is used primarily for steam boilers in power plants, aboard ships, and in industrial plants. Commercial fuel oils usually are blended with other petroleum fractions to produce the desired viscosity and flash point. Its flash point is usually higher than that of kerosene. The term fuel oil ordinarily does not include such fuels as kerosene.

Summary

Fuel oil is any of various fractions obtained from the distillation of petroleum (crude oil). Such oils include distillates (the lighter fractions) and residues (the heavier fractions). Fuel oils include heavy fuel oil (bunker fuel), marine fuel oil (MFO), furnace oil (FO), gas oil (gasoil), heating oils (such as home heating oil), diesel fuel, and others.

The term fuel oil generally includes any liquid fuel that is burned in a furnace or boiler to generate heat (heating oils), or used in an engine to generate power (as motor fuels). However, it does not usually include other liquid oils, such as those with a flash point of approximately 42 °C (108 °F), or oils burned in cotton- or wool-wick burners. In a stricter sense, fuel oil refers only to the heaviest commercial fuels that crude oil can yield, that is, those fuels heavier than gasoline (petrol) and naphtha.

Fuel oil consists of long-chain hydrocarbons, particularly alkanes, cycloalkanes, and aromatics. Small molecules, such as those in propane, naphtha, gasoline, and kerosene, have relatively low boiling points, and are removed at the start of the fractional distillation process. Heavier petroleum-derived oils like diesel fuel and lubricating oil are much less volatile and distill out more slowly.

Details

Uses

Oil has many uses; it heats homes and businesses and fuels trucks, ships, and some cars. A small amount of electricity is produced from diesel, but it is more polluting and more expensive than natural gas. Diesel is often used as a backup fuel for peaking power plants in case the supply of natural gas is interrupted, or as the main fuel for small electrical generators. In Europe, diesel is largely a road-transport fuel, powering about 40% of cars, about 90% of SUVs, and over 99% of trucks and buses. The market for home heating using fuel oil has decreased owing to the widespread penetration of natural gas and heat pumps; however, oil heating remains very common in some areas, such as the Northeastern United States.

Residual fuel oil is less useful because it is so viscous that it has to be heated with a special heating system before use and it may contain relatively high amounts of pollutants, particularly sulfur, which forms sulfur dioxide upon combustion. However, its undesirable properties make it very cheap. In fact, it is the cheapest liquid fuel available. Since it requires heating before use, residual fuel oil cannot be used in road vehicles, boats or small ships, as the heating equipment takes up valuable space and makes the vehicle heavier. Heating the oil is also a delicate procedure, which is impractical on small, fast moving vehicles. However, power plants and large ships are able to use residual fuel oil.

Use of residual fuel oil was more common in the past. It powered boilers, railroad steam locomotives, and steamships. Locomotives, however, have become powered by diesel or electric power; steamships are not as common as they were previously due to their higher operating costs (most LNG carriers use steam plants, as "boil-off" gas emitted from the cargo can be used as a fuel source); and most boilers now use heating oil or natural gas. Some industrial boilers still use it and so do some old buildings, including in New York City. In 2011 New York City estimated that the 1% of its buildings that burned fuel oils No. 4 and No. 6 were responsible for 86% of the soot pollution generated by all buildings in the city. New York made the phase out of these fuel grades part of its environmental plan, PlaNYC, because of concerns for the health effects caused by fine particulates, and all buildings using fuel oil No. 6 had been converted to less polluting fuel by the end of 2015.

Residual fuel's use in electrical generation has also decreased. In 1973, residual fuel oil produced 16.8% of the electricity in the US. By 1983, it had fallen to 6.2%, and as of 2005, electricity production from all forms of petroleum, including diesel and residual fuel, is only 3% of total production. The decline is the result of price competition with natural gas and environmental restrictions on emissions. For power plants, the costs of heating the oil, extra pollution control and additional maintenance required after burning it often outweigh the low cost of the fuel. Burning fuel oil, particularly residual fuel oil, produces uniformly higher carbon dioxide emissions than natural gas.

Heavy fuel oils continue to be used in the boiler "lighting up" facility in many coal-fired power plants. This use is approximately analogous to using kindling to start a fire. Without performing this act it is difficult to begin the large-scale combustion process.

The chief drawback to residual fuel oil is its high initial viscosity, particularly in the case of No. 6 oil, which requires a correctly engineered system for storage, pumping, and burning. Though it is usually still lighter than water (with a specific gravity usually ranging from 0.95 to 1.03), it is much heavier and more viscous than No. 2 oil, kerosene, or gasoline. No. 6 oil must, in fact, be stored at around 38 °C (100 °F) and heated to 65–120 °C (149–248 °F) before it can be easily pumped, and at cooler temperatures it can congeal into a tarry semisolid. The flash point of most blends of No. 6 oil is, incidentally, about 65 °C (149 °F). Attempting to pump high-viscosity oil at low temperatures was a frequent cause of damage to fuel lines, furnaces, and related equipment, which was often designed for lighter fuels.

For comparison, BS 2869 Class G heavy fuel oil behaves in similar fashion, requiring storage at 40 °C (104 °F), pumping at around 50 °C (122 °F) and finalizing for burning at around 90–120 °C (194–248 °F).
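
The Fahrenheit equivalents quoted alongside the Celsius figures in the two paragraphs above can be checked with the standard conversion °F = °C × 9/5 + 32. A minimal Python sketch reproducing the quoted values:

def c_to_f(celsius):
    # Convert degrees Celsius to degrees Fahrenheit.
    return celsius * 9 / 5 + 32

# Temperatures quoted above for No. 6 fuel oil and BS 2869 Class G heavy fuel oil:
for c in (38, 65, 120, 40, 50, 90):
    print(c, "degrees C =", round(c_to_f(c)), "degrees F")
# 38 C is about 100 F (storage), 65-120 C is 149-248 F (preheating for pumping),
# 40 C = 104 F, 50 C = 122 F, and 90-120 C = 194-248 F (burning).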

Most of the facilities which historically burned No. 6 or other residual oils were industrial plants and similar facilities constructed in the early or mid 20th century, or which had switched from coal to oil fuel during the same time period. In either case, residual oil was seen as a good prospect because it was cheap and readily available. Most of these facilities have subsequently been closed and demolished, or have replaced their fuel supplies with a simpler one such as gas or No. 2 oil. The high sulfur content of No. 6 oil—up to 3% by weight in some extreme cases—had a corrosive effect on many heating systems (which were usually designed without adequate corrosion protection in mind), shortening their lifespans and increasing the polluting effects. This was particularly the case in furnaces that were regularly shut down and allowed to go cold, because the internal condensation produced sulfuric acid.

Environmental cleanups at such facilities are frequently complicated by the use of asbestos insulation on the fuel feed lines. No. 6 oil is very persistent, and does not degrade rapidly. Its viscosity and stickiness also make remediation of underground contamination very difficult, since these properties reduce the effectiveness of methods such as air stripping.

When released into water, such as a river or ocean, residual oil tends to break up into patches or tarballs – mixtures of oil and particulate matter such as silt and floating organic matter – rather than form a single slick. An average of about 5-10% of the material will evaporate within hours of the release, primarily the lighter hydrocarbon fractions. The remainder will then often sink to the bottom of the water column.

Health impacts

Because of the low quality of bunker fuel, when burnt it is especially harmful to the health of humans, causing serious illnesses and deaths. Prior to the IMO's 2020 sulfur cap, shipping industry air pollution was estimated to cause around 400,000 premature deaths each year, from lung cancer and cardiovascular disease, as well as 14 million childhood asthma cases each year.

Even after the introduction of cleaner fuel rules in 2020, shipping air pollution is still estimated to account for around 250,000 deaths each year, and around 6.4 million childhood asthma cases each year.

The countries hardest hit by air pollution from ships are China, Japan, the UK, Indonesia, and Germany. In 2015, shipping air pollution killed an estimated 20,520 people in China, 4,019 people in Japan, and 3,192 people in the UK.

According to an ICCT study, countries located on major shipping lanes are particularly exposed and can see shipping account for a high percentage of overall deaths from transport-sector air pollution. In Taiwan, shipping accounted for 70% of all transport-attributable air pollution deaths in 2015, followed by Morocco at 51%, Malaysia and Japan both at 41%, Vietnam at 39%, and the UK at 38%.

As well as commercial shipping, cruise ships also emit large amounts of air pollution, damaging people's health. Up to 2019, it was reported that the ships of the single largest cruise company, Carnival Corporation & plc, emitted ten times more sulfur dioxide than all of Europe's cars combined.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1967 2023-11-20 00:03:56

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,333

Re: Miscellany

1971) Nursery

Gist

A place where young trees or other plants are raised for transplanting, for sale, or for experimental study.

Summary

Nursery is a place where plants are grown for transplanting, for use as stock for budding and grafting, or for sale. Commercial nurseries produce and distribute woody and herbaceous plants, including ornamental trees, shrubs, and bulb crops. While most nursery-grown plants are ornamental, the nursery business also includes fruit plants and certain perennial vegetables used in home gardens (e.g., asparagus, rhubarb). Some nurseries are kept for the propagation of native plants for ecological restoration. Greenhouses may be used for tender plants or to keep production going year round, but nurseries most commonly consist of shaded or exposed areas outside. Plants are commonly cultivated from seed or from cuttings and are often grown in pots or other temporary containers.

Details

A nursery is a place where plants are propagated and grown to a desired size. In other words, a nursery is a centre of seedling production where seedlings are raised and cared for until transplantation in the main field. The plants concerned are mostly for gardening, forestry, or conservation biology, rather than agriculture. Nurseries include retail nurseries, which sell to the general public; wholesale nurseries, which sell only to businesses such as other nurseries and commercial gardeners; and private nurseries, which supply the needs of institutions or private estates. Some also work in plant breeding.

A "nurseryman" is a person who owns or works in a nursery.

Some nurseries specialize in certain areas, which may include propagation and the sale of small or bare-root plants to other nurseries; growing plant material out to a saleable size; or retail sales. Nurseries may also specialize in one type of plant, e.g., groundcovers, shade plants, or rock garden plants. Some produce bulk stock, whether seedlings or grafted trees, of particular varieties for purposes such as fruit trees for orchards or timber trees for forestry. Some producers produce stock seasonally, ready in the spring for export to colder regions where propagation could not have been started so early, or to regions where seasonal pests prevent profitable growing early in the season.

Nurseries

There are a number of different types of nurseries, broadly grouped as wholesale or retail nurseries, with some overlap depending on the specific operation. Wholesale nurseries produce plants in large quantities which are sold to retail nurseries, landscapers, garden centers, and other retail outlets which then sell to the public.

Wholesale nurseries may be small operations that produce a specific type of plant using a small area of land, or very large operations covering many acres. They propagate plant material or buy plants from other nurseries, including rooted or unrooted cuttings, small rooted plants called plugs, or field-grown bare-root plants, which are planted and grown to a desired size. Some wholesale nurseries produce plants on contract for customers who place an order for a specific number and size of plant, while others produce a wide range of plants that are offered for sale to other nurseries and landscapers on a first-come, first-served basis.

Retail nurseries sell plants ready to be placed in the landscape or used in homes and businesses.

History

Nurseries as formal businesses can be traced back to the seventeenth century in England and were critical to the development of eighteenth-century landscape gardening. By the mid eighteenth century large nursery businesses had been established in England. By the early nineteenth century many nurseries had become highly specialised, with an 1839 guide to London nurseries identifying fourteen buildings in London and eight in the south or west of England that specialised in varieties of exotic plants, some being known for particular classes of plants. While customers at nurseries included professional gardeners, in the nineteenth century visiting the more upmarket plant nurseries had become a popular activity for the middle and upper classes. For instance, an advert in an 1842 tourist guidebook for Bristol boasted that Durdham Down Nurseries were 'a great attraction to the Nobility and Gentry', as well as serving experienced gardeners and farm bailiffs.

Methods

Propagation

Nurseries produce new plants from seeds, cuttings, tissue culture, grafting, or division. The plants are then grown out to a salable size and sold to other nurseries, which may either continue to grow them in larger containers or field-grow them to the desired size. Propagation nurseries may also grow plant material large enough for retail sale and thus sell directly to retail nurseries or garden centers (which rarely propagate their own plants).

Nurseries may produce plants for reforestation, zoos, parks, and cities; tree nurseries in the U.S. produce around 1.3 billion seedlings per year for reforestation.

Nurseries grow plants in open fields, on container fields, and in tunnels or greenhouses. In open fields, nurseries grow decorative trees, shrubs, and herbaceous perennials. On a container field, nurseries grow small trees, shrubs, and herbaceous plants, usually destined for sale in garden centers; tunnels and greenhouses provide controlled ventilation and light. Plants may be grown from seed, but the most common method is by planting cuttings, which can be taken from shoot tips or roots.

Conditioning

With the objective of making planting stock better able to withstand stresses after outplanting, various nursery treatments have been attempted or developed and applied to nursery stock. Buse and Day (1989), for instance, studied the effect of conditioning white spruce and black spruce transplants on their morphology, physiology, and subsequent performance after outplanting. The treatments applied were root pruning, wrenching, and fertilization with potassium at 375 kg/ha. Root pruning and wrenching modified stock in the nursery by decreasing height, root collar diameter, shoot:root ratio, and bud size, but did not improve survival or growth after planting. Fertilization reduced root growth in black spruce but not in white spruce.

Hardening off, frost hardiness

Seedlings vary in their susceptibility to injury from frost. Damage can be catastrophic if "unhardened" seedlings are exposed to frost. Frost hardiness may be defined as the minimum temperature at which a certain percentage of a random seedling population will survive or will sustain a given level of damage (Siminovitch 1963, Timmis and Worrall 1975). The term LT50 (lethal temperature for 50% of a population) is commonly used. Determination of frost hardiness in Ontario is based on electrolyte leakage from mainstem terminal tips 2 cm to 3 cm long in weekly samplings (Colombo and Hickie 1987). The tips are frozen, then thawed and immersed in distilled water; the electrical conductivity of the water depends on the degree to which cell membranes have been ruptured by freezing, releasing electrolyte. A −15 °C frost hardiness level has been used to determine the readiness of container stock to be moved outside from the greenhouse, and −40 °C has been the level determining readiness for frozen storage (Colombo 1997).
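
The LT50 idea can be made concrete with a small interpolation example. The sketch below is illustrative only: the injury figures are invented, and the calculation is a generic linear interpolation, not the actual electrolyte-leakage protocol described above.

def lt50(temps_c, injury_percent, threshold=50.0):
    # Estimate the temperature at which the injury index crosses the threshold
    # by linear interpolation between successive test temperatures.
    # temps_c is assumed to run from the warmest to the coldest test.
    points = list(zip(temps_c, injury_percent))
    for (t1, i1), (t2, i2) in zip(points, points[1:]):
        if i1 <= threshold <= i2:
            return t1 + (threshold - i1) * (t2 - t1) / (i2 - i1)
    return None  # the threshold was never crossed in the tested range

temps  = [-5, -10, -15, -20, -25]   # test temperatures, degrees C
injury = [ 5,  15,  40,  70,  95]   # hypothetical injury index, percent
print(lt50(temps, injury))          # about -16.7 degrees C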

In an earlier technique, potted seedlings were placed in a freezer chest and cooled to some level for some specific duration; a few days after removal, seedlings were assessed for damage using various criteria, including odour, general visual appearance, and examination of cambial tissue (Ritchie 1982).

Stock for fall planting must be properly hardened-off. Conifer seedlings are considered to be hardened off when the terminal buds have formed and the stem and root tissues have ceased growth. Other characteristics that in some species indicate dormancy are color and stiffness of the needles, but these are not apparent in white spruce.

Forest tree nurseries

Whether in the forest or in the nursery, seedling growth is fundamentally influenced by soil fertility, but nursery soil fertility is readily amenable to amelioration, much more so than is forest soil.

Nitrogen, phosphorus, and potassium are regularly supplied as fertilizers, and calcium and magnesium are supplied occasionally. Applications of fertilizer nitrogen do not build up in the soil to develop any appreciable storehouse of available nitrogen for future crops. Phosphorus and potassium, however, can be accumulated as a storehouse available for extended periods.

Fertilization permits seedling growth to continue longer through the growing season than in unfertilized stock; fertilized white spruce attained twice the height of unfertilized. High fertility in the rooting medium favours shoot growth over root growth and can produce top-heavy seedlings ill-suited to the rigors of the outplant site. Nutrients in oversupply can reduce growth or the uptake of other nutrients. As well, an excess of nutrient ions can prolong or weaken growth, interfering with the necessary development of dormancy and the hardening of tissues in time to withstand winter weather.

Stock types, sizes and lots

The size of nursery stock at lifting typically follows a normal distribution. The runts at the lower end of the scale are usually culled to an arbitrary limit, but, especially among bareroot stock, the range in size is commonly considerable. Dobbs (1976) and McMinn (1985a) examined how the performance of 2+0 bareroot white spruce related to differences in initial size of planting stock. The stock was regraded into large, medium, and small fractions according to fresh weight. The small fraction (20% of the original stock) had barely one-quarter of the dry-matter mass of the large fraction at the time of outplanting. Ten years later, on the blade-scarified site, seedlings of the large fraction had almost 50% greater stem volume than seedlings of the small fraction. Without site preparation, large stock were more than twice the size of small stock after 10 years.

Similar results were obtained with regraded 2+1 transplants sampled to determine root growth capacity. The large stock had higher RGC as well as greater mass than the small stock fraction.

The value of large size at the time of planting is especially apparent when outplants face strong competition from other vegetation, although high initial mass does not guarantee success. That the growth potential of planting stock depends on much more than size seems clear from the indifferent success of the transplanting of small 2+0 seedlings for use as 2+1 "reclaim" transplants. The size of bareroot white spruce seedlings and transplants also had a major influence on field performance.

The field performance among various stock types in Ontario plantations was examined by Paterson and Hutchison (1989): the white spruce stock types were 2+0, 1.5+0.5, 1.5+1.5, and 3+0. The nursery stock was grown at Midhurst Forest Tree Nursery, and carefully handled through lifting on 3 lift dates, packing, and hot-planting into cultivated weed-free loam. After 7 years, overall survival was 97%, with no significant differences in survival among stock types. The 1.5+1.5 stock, with a mean height of 234 cm, was significantly taller, by 18% to 25%, than the other stock types. The 1.5+1.5 stock also had significantly greater dbh (diameter at breast height) than the other stock types, by 30% to 43%. The best stock type was 57 cm taller and 1 cm greater in dbh than the poorest. Lifting date had no significant effect on growth or survival.

High-elevation sites in British Columbia's southern mountains are characterized by a short growing season, low air and soil temperatures, severe winters, and deep snow. The survival and growth of Engelmann spruce and subalpine fir outplanted in 3 silvicultural trials on such sites in gaps of various sizes were compared by Lajzerowicz et al. (2006). Survival after 5 or 6 years decreased with smaller gaps. Height and diameter also decreased with decreasing size of gap; mean heights were 50 cm to 78 cm after 6 years, in line with height expectations for Engelmann spruce in a high-elevation planting study in southeastern British Columbia. In the larger gaps (≥1.0 ha), height increment by year 6 ranged from 10 cm to 20 cm. Lajzerowicz et al. concluded that plantings of conifers in clearcuts at high elevations in the southern mountains of British Columbia are likely to be successful, even close to timberline, and that group-selection silvicultural systems based on gaps of 0.1 ha or larger are also likely to succeed. Gaps smaller than 0.1 ha do not provide suitable conditions for adequate survival or growth of outplanted conifers.

Planting stock

Planting stock, "seedlings, transplants, cuttings, and occasionally wildings, for use in planting out," is nursery stock that has been made ready for outplanting. The amount of seed used in white spruce seedling production and direct seeding varies with method.

A working definition of planting stock quality was accepted at the 1979 IUFRO Workshop on Techniques for Evaluating Planting Stock Quality in New Zealand: "The quality of planting stock is the degree to which that stock realizes the objectives of management (to the end of the rotation or achievement of specified sought benefits) at minimum cost. Quality is fitness for purpose." Clear expression of objectives is therefore prerequisite to any determination of planting stock quality. Not only does performance have to be determined, but performance has to be rated against the objectives of management. Planting stock is produced in order to give effect to the forest policy of the organization.

A distinction needs to be made between "planting stock quality" and "planting stock performance potential" (PSPP). The actual performance of any given batch of outplanted planting stock is determined only in part by the kind and condition, i.e., the intrinsic PSPP, of the planting stock.

The PSPP is impossible to estimate reliably by eye because outward appearance, especially of stock withdrawn from refrigerated storage, can deceive even experienced foresters, who would be offended if their ability to recognize good planting stock when they saw it were questioned. Prior to Wakeley's (1954) demonstration of the importance of the physiological state of planting stock in determining the ability of the stock to perform after outplanting, and to a considerable extent even afterwards, morphological appearance has generally served as the basis for estimating the quality of planting stock. Gradually, however, a realization developed that more was involved. Tucker et al. (1968), for instance, after assessing 10-year survival data from several experimental white spruce plantations in Manitoba, noted that "Perhaps the most important point revealed here is that certain lots of transplants performed better than others", even though all transplants were handled and planted with care. The intuitive "stock that looks good must be good" is a persuasive but potentially dangerous maxim. That greatest of teachers, Bitter Experience, has often enough demonstrated the fallibility of such assessment, even though the corollary "stock that looks bad must be bad" is likely to be well founded. The physiological qualities of planting stock are hidden from the eye and must be revealed by testing. The potential for survival and growth of a batch of planting stock may be estimated from various features, morphological and physiological, of the stock or a sample thereof.

The size and shape and general appearance of a seedling can nevertheless give useful indications of PSPP. In low-stress outplanting situations, and with a minimized handling and lifting-planting cycle, a system based on specification for nursery stock and minimum morphological standards for acceptable seedlings works tolerably well. In certain circumstances, benefits often accrue from the use of large planting stock of highly ranked morphological grades. Length of leading shoot, diameter of stem, volume of root system, shoot:root ratios, and height:diameter ratios have been correlated with performance under specific site and planting conditions. However, the concept that larger is better negates the underlying complexities. Schmidt-Vogt (1980), for instance, found that whereas mortality among large outplants is greater than among small in the year of planting, mortality in subsequent growing seasons is higher among small outplants than among large. Much of the literature on comparative seedling performance is clouded by uncertainty as to whether the stocks being compared share the same physiological condition; differences invalidate such comparisons.

Height and root-collar diameter are generally accepted as the most useful morphological criteria and are often the only ones used in specifying standards. Quantification of root system morphology is difficult but can be done, e.g. by using the photometric rhizometer to determine intercept area, or volume by displacement or gravimetric methods.

Planting stock is always subject to a variety of conditions that are never optimal in toto. The effect of sub-optimal conditions is to induce stress in the plants. The nursery manager aims to avoid, and is normally able to avoid, stresses greater than moderate, i.e., to restrict stresses to levels that can be tolerated by the plants without incurring serious damage. The adoption of nursery regimes to equip planting stock with characteristics conferring increased ability to withstand outplanting stresses, by managing stress levels in the nursery to "condition" planting stock to increase tolerance to various post-planting environmental stresses, has become widespread, particularly with containerized stock.

Outplanted stock that is unable to tolerate high temperatures occurring at soil surfaces will fail to establish on many forest sites, even in the far north. Factors affecting heat tolerance were investigated by Colombo et al. (1995); the production and roles of heat shock proteins (HSPs) are important in this regard. HSPs, present constitutively in black spruce and many other, perhaps most, higher plants, are important both for normal cell functioning and in a stress response mechanism following exposure to high, non-lethal temperature. In black spruce at least, there is an association between HSPs and increased levels of heat tolerance. Investigation of the diurnal variability in heat tolerance of roots and shoots in black spruce seedlings 14 to 16 weeks old found in all 4 trials that shoot heat tolerance was significantly greater in the afternoon than in the morning. The trend in root heat tolerance was similar to that found in the shoots; root systems exposed to 47 °C for 15 minutes in the afternoon averaged 75 new roots after a 2-week growth period, whereas only 28 new roots developed in root systems similarly exposed in the morning. HSP73 was detected in black spruce nuclear, mitochondrial, microsomal, and soluble protein fractions, while HSP72 was observed only in the soluble protein fraction. Seedlings exhibited constitutive synthesis of HSP73 at 26 °C in all except the nuclear membrane fraction in the morning; HSP levels at 26 °C in the afternoon were higher than in the morning in the mitochondrial and microsomal protein fractions. Heat shock affected the abundance of HSPs depending on protein fraction and time of day. Without heat shock, nuclear membrane-bound HSP73 was absent from plants in the morning and only weakly present in the afternoon; heat shock increased the abundance of nuclear membrane-bound HSP73 in the afternoon and caused HSP73 to appear in the morning. In the mitochondrial and microsomal protein fractions, an afternoon heat shock reduced HSP73, whereas a morning heat shock increased HSP73 in the mitochondrial but decreased it in the microsomal fraction. Heat shock increased soluble HSP72/73 levels in both the morning and afternoon. In all instances, shoot and root heat tolerances were significantly greater in the afternoon than in the morning.

Planting stock continues to respire during storage even if frozen. Temperature is the major factor controlling the rate, and care must be taken to avoid overheating. Navratil (1982) found that closed containers in cold storage averaged internal temperatures 1.5 °C to 2.0 °C above the nominal storage temperature. Depletion of reserves can be estimated from the decrease in dry weight. Cold-stored 3+0 white spruce nursery stock in northern Ontario had lost 9% to 16% of dry weight after 40 days of storage. Carbohydrates can also be determined directly.

The propensity of a root system to develop new roots or extend existing roots cannot be determined by eye, yet it is the factor that makes or breaks the outcome of an outplanting operation. The post-planting development of roots or root systems of coniferous planting stock is determined by many factors, some physiological, some environmental. Unsatisfactory rates of post-planting survival unrelated to the morphology of the stock led to attempts to test the physiological condition of planting stock, particularly to quantify the propensity to produce new root growth. New root growth can be assumed to be necessary for successful establishment of stock after planting, but although the thesis that RGC is positively related to field performance would seem to be reasonable, supporting evidence has been meager.

The physiological condition of seedlings is reflected by changes in root activity. This is helpful in determining the readiness of stock for lifting and storing and also for outplanting after storage. Navratil (1982) reported a virtually perfect (R² = 0.99) linear relationship between the frequency of white root tips longer than 10 mm on 3+0 white spruce and time in the fall at Pine Ridge Forest Nursery, Alberta, the frequency decreasing during a 3-week period to zero on October 13, 1982. Root-regeneration research with white spruce in Canada (Hambly 1973, Day and MacGillivray 1975, Day and Breunig 1997) followed lines similar to those of Stone's (1955) pioneering work in California.

Simpson and Ritchie (1997) debated the proposition that root growth potential of planting stock predicts field performance; their conclusion was that root growth potential, as a surrogate for seedling vigor, can predict field performance, but only where site conditions permit. Survival after planting is only partly a function of an outplant's ability to initiate roots in test conditions; root growth capacity is not the sole predictor of plantation performance.

Some major problems militate against greater use of RGC in forestry, including: unstandardized techniques; unstandardized quantification; uncertain correlation between quantified RGC and field performance; variability within given, nominally identical, kinds of planting stock; and the irrelevance of RGC test values determined on a sub-sample of a parent population that subsequently, before it is planted, undergoes any substantive physiological or physical change. In its present form, RGC testing is silviculturally useful chiefly as a means of detecting planting stock that, while visually unimpaired, is moribund.

Seedling moisture content can increase or decrease in storage, depending on various factors, especially the type of container and the kind and amount of moisture-retaining material present. When seedlings exceed 20 bars of PMS (plant moisture stress) in storage, survival after outplanting becomes problematical. The relative moisture content (RMC) of stock lifted during dry conditions can be increased gradually when the stock is stored in appropriate conditions. White spruce (3+0) packed in Kraft bags in northern Ontario increased RMC by 20% to 36% within 40 days.

Bareroot 1.5+1.5 white spruce were taken from cold storage and planted early in May on a clear-felled boreal forest site in northeastern Ontario. Similar plants were potted and kept in a greenhouse. In outplanted trees, maximum stomatal conductances (g) were initially low (<0.01 cm/s), and initial base xylem pressure potentials (PSIb) were -2.0 MPa. During the growing season, g increased to about 0.20 cm/s and PSIb to -1.0 MPa. Minimum xylem pressure potential (PSIm) was initially -2.5 MPa, increasing to -2.0 MPa on day 40, and about -1.6 MPa by day 110. During the first half of the growing season, PSIm was below the turgor loss point. The osmotic potential at the turgor loss point decreased after planting to -2.3 MPa 28 days later. In the greenhouse, minimum values of PSIm were -2.5 MPa in the first day after planting. The maximum bulk modulus of elasticity was greater in white spruce than in similarly treated jack pine and showed greater seasonal changes. Relative water content (RWC) at turgor loss was 80-87%. Available turgor (TA), defined as the integral of turgor over the range of RWC between PSIb and the xylem pressure potential at the turgor loss point, was 4.0% for white spruce at the beginning of the season compared with 7.9% for jack pine, but for the rest of the season TA for jack pine was only 2% to 3% of that of white spruce. Diurnal turgor (Td), the integral of turgor over the range of RWC between PSIb and PSIm, as a percentage of TA, was higher in field-planted white spruce than in jack pine until the end of the season.

The stomata of both white and black spruce were more sensitive to atmospheric evaporative demands and plant moisture stress during the first growing season after outplanting on 2 boreal sites in northern Ontario than were jack pine stomata, physiological differences that favoured growth and establishment more in jack pine than in the spruces.

With black spruce and jack pine, but not with white spruce, Grossnickle and Blake's (1987) findings warrant mention in relation to the bareroot-containerized debate. During the first growing season after outplanting, containerized seedlings of both species had greater needle conductance than bareroot seedlings over a range of absolute humidity deficits. Needle conductance of containerized seedlings of both species remained high during periods of high absolute humidity deficits and increasing plant moisture stress. Bareroot outplants of both species had a greater early season resistance to water-flow through the soil–plant–atmosphere continuum (SPAC) than had containerized outplants. Resistance to water flow through the SPAC decreased in bareroot stock of both species as the season progressed, and was comparable to containerized seedlings 9 to 14 weeks after planting. Bareroot black spruce had greater new-root development than containerized stock throughout the growing season.

The greater efficiency of water use in newly transplanted 3-year-old white spruce seedlings under low levels of absolute humidity difference in water-stressed plants immediately after planting helps explain the commonly observed favourable response of young outplants to the nursing effect of a partial canopy. Silvicultural treatments promoting higher humidity levels at the planting microsite should improve white spruce seedling photosynthesis immediately after planting.

Stock types (Seedling nomenclature)

Planting stock is grown under many diverse nursery culture regimes, in facilities ranging from sophisticated computerized greenhouses to open compounds. Types of stock include bareroot seedlings and transplants, and various kinds of containerized stock. For simplicity, both container-grown and bareroot stock are generally referred to as seedlings, and transplants are nursery stock that have been lifted and transplanted into another nursery bed, usually at wider spacing. The size and physiological character of stock vary with the length of growing period and with growing conditions. Until the technology of raising containerized nursery stock burgeoned in the second half of the twentieth century, bareroot planting stock classified by its age in years was the norm.

Classification by age

The number of years spent in the nursery seedbed by any particular lot of planting stock is indicated by the 1st of a series of numbers. The 2nd number indicates the years subsequently spent in the transplant line, and a zero is shown if indeed there has been no transplanting. A 3rd number, if any, would indicate the years subsequently spent after a second lifting and transplanting. The numbers are sometimes separated by dashes, but separation by plus sign is more logical inasmuch as the sum of the individual numbers gives the age of the planting stock. Thus 2+0 is 2-year-old seedling planting stock that has not been transplanted, and Candy's (1929) white spruce 2+2+3 stock had spent 2 years in the seedbed, 2 years in transplant lines, and another 3 years in transplant lines after a second transplanting. Variations have included self-explanatory combinations such as 1½+1½.
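
Because the sum of the component numbers gives the total age, such codes are easy to handle programmatically. A minimal sketch in Python, assuming the code is written with '+' separators and decimal fractions (e.g. '1.5+1.5' rather than '1½+1½'):

def stock_age(code):
    # Total age in years of planting stock described by a code such as
    # "2+0" or "2+2+3": simply the sum of the component numbers.
    return sum(float(part) for part in code.split("+"))

print(stock_age("2+0"))      # 2.0 -- two years in the seedbed, never transplanted
print(stock_age("2+2+3"))    # 7.0 -- Candy's (1929) white spruce stock
print(stock_age("1.5+1.5"))  # 3.0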

The class of planting stock to use on a particular site is generally selected on the basis of historical record of survival, growth, and total cost of surviving trees. In the Lake States, Kittredge concluded that good stock of 2+1 white spruce was the smallest size likely to succeed and was better than larger and more expensive stock when judged by final cost of surviving trees.

Classification by seedling description code

Because age alone is an inadequate descriptor of planting stock, various codes have been developed to describe such components of stock characteristics as height, stem diameter, and shoot:root ratio. A description code may include an indication of the intended planting season.

Physiological characteristics

Neither age classification nor seedling description code indicate the physiological condition of planting stock, though rigid adherence to a given cultural regime together with observation of performance over a number of years of planting can produce stock suitable for performing on a "same again" basis.

Classification by system

Planting stock is raised under a variety of systems, but these have devolved generally into 2 main groupings: bareroot and containerized. Manuals specifically for the production of bareroot and containerized nursery stock are valuable resources for the nursery manager. As well, much useful information about nursery stock specific to regional jurisdictions is presented by Cleary et al. (1978) for Oregon, Lavender et al. (1990) for British Columbia, and Wagner and Colombo (2001) for Ontario.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1968 2023-11-21 00:09:53

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,333

Re: Miscellany

1972) Park

Gist

A park is:
i) an area of land, usually in a largely natural state, for the enjoyment of the public, having facilities for rest and recreation, often owned, set apart, and managed by a city, state, or nation.
ii) an enclosed area or a stadium used for sports.
iii) a considerable extent of land forming the grounds of a country house.
iv) any area set aside for public recreation.

Summary

A park is an area of natural, semi-natural or planted space set aside for human enjoyment and recreation or for the protection of wildlife or natural habitats. Urban parks are green spaces set aside for recreation inside towns and cities. National parks and country parks are green spaces used for recreation in the countryside. State parks and provincial parks are administered by sub-national government states and agencies. Parks may consist of grassy areas, rocks, soil and trees, but may also contain buildings and other artifacts such as monuments, fountains or playground structures. Many parks have fields for playing sports such as baseball and football, and paved areas for games such as basketball. Many parks have trails for walking, biking and other activities. Some parks are built adjacent to bodies of water or watercourses and may comprise a beach or boat dock area. Urban parks often have benches for sitting and may contain picnic tables and barbecue grills.

The largest parks can be vast natural areas of hundreds of thousands of square kilometers (or square miles), with abundant wildlife and natural features such as mountains and rivers. In many large parks, camping in tents is allowed with a permit. Many natural parks are protected by law, and users may have to follow restrictions (e.g. rules against open fires or bringing in glass bottles). Large national and sub-national parks are typically overseen by a park ranger. Large parks may have areas for canoeing and hiking in the warmer months and, in some northern hemisphere countries, cross-country skiing and snowshoeing in colder months. There are also amusement parks that have live shows, fairground rides, refreshments, and games of chance or skill.

Active and passive recreation areas

(Image: Burnside Skatepark in Portland, Oregon, one of the world's most recognizable skateparks.)

Parks can be divided into active and passive recreation areas. Active recreation is that which has an urban character and requires intensive development. It often involves cooperative or team activity, including playgrounds, ball fields, swimming pools, gymnasiums, and skateparks. Active recreation such as team sports, due to the need to provide substantial space to congregate, typically involves intensive management, maintenance, and high costs. Passive recreation, also called "low-intensity recreation" is that which emphasizes the open-space aspect of a park and allows for the preservation of natural habitat. It usually involves a low level of development, such as rustic picnic areas, benches, and trails.

Many smaller neighborhood parks are receiving increased attention and valuation as significant community assets and places of refuge in heavily populated urban areas. Neighborhood groups around the world are joining to support local parks that have suffered from urban decay and government neglect.

Passive recreation typically requires less management and can be provided at lower cost than active recreation. Some open space managers provide trails for physical activity in the form of walking, running, horse riding, mountain biking, snowshoeing, or cross-country skiing; or activities such as observing nature, bird watching, painting, photography, or picnicking. Limiting park or open space use to passive recreation over all or a portion of the park's area eliminates or reduces the burden of managing active recreation facilities and developed infrastructure. Passive recreation amenities require routine upkeep and maintenance to prevent degradation of the environment.

Private parks

Private parks are owned by individuals or businesses and are used at the discretion of the owner. There are a few types of private parks, and some which once were privately maintained and used have now been made open to the public.

Hunting parks were originally areas maintained as open space where residences, industry and farming were not allowed, often originally so that nobility might have a place to hunt. These were known, for instance, as deer parks (deer being originally a term meaning any wild animal). Many country houses in Great Britain and Ireland still have parks of this sort, which since the 18th century have often been landscaped for aesthetic effect. They are usually a mixture of open grassland with scattered trees and sections of woodland, and are often enclosed by a high wall. The area immediately around the house is the garden. In some cases this will also feature sweeping lawns and scattered trees; the basic difference between a country house's park and its garden is that the park is grazed by animals, but they are excluded from the garden.

Details

Park is a large area of ground set aside for recreation. The earliest parks were those of the Persian kings, who dedicated many square miles to the sport of hunting; by natural progression such reserves became artificially shaped by the creation of riding paths and shelters until the decorative possibilities became an inherent part of their character. A second type of park derived from such open-air public meeting places as those in ancient Athens, where the functions of an exercising ground, a social concourse, and an athletes’ training ground were combined with elements of a sculpture gallery and religious centre.

In the parks of post-Renaissance times, there were extensive woods, rectilinear allées stretching between one vantage point and another, raised galleries, and, in many cases, elaborate aviaries and cages for wild beasts, attesting to the hunting proclivities of the lords. Later the concept of the public park was somewhat domesticated. An area devoted simply to green landscape, a salubrious and attractive breathing space as a relief from the densely populated and industrialized city of the mid-19th century, became important. Examples of this type of park include Birkenhead Park in England, designed by Sir Joseph Paxton; Jean Charles Alphand’s Bois de Boulogne, outside Paris; Central Park in New York City, designed by Frederick Law Olmsted and Calvert Vaux; the Botanic Gardens in Melbourne, Australia; and Akashi Park in Kōbe, Japan. The design was generally romantic in character. The primary purpose was to provide for passive recreation—walking and taking the air in agreeable surroundings reminiscent of the unspoiled country.

What primarily differentiates modern parks is their accommodation for active recreation. Park areas differ considerably from country to country, and their designs reflect differences in climate, cultural attitudes, social habits, and pastimes. In the gardens of the Generalife, a Spanish family may enjoy its holiday outing in a shaded bosque near a cool fountain. On an evening in Venice, a procession with banners and torches may sweep into one of the little piazzas. In the Buttes-Chaumont in Paris, children may reach out from wooden horses on the merry-go-round to seize a brass ring. During the bright summer weekends in Stockholm, residents cultivate vegetables in allotment gardens that are leased to them by the park department. In Israel, Iran, and Pakistan, basketball, football (soccer), and kabaddi (a game like rugby) are played in parks; in Japan, volleyball, tennis, and sumo (wrestling) may be seen. Almost universally, there is recognition of the creative possibilities of leisure and of community responsibility to provide space and facilities for recreation.

The facilities include outdoor theatres, zoos, concert shells, historical exhibits, concessions for dining and dancing, amusement areas, boating, and areas for sports of all kinds, such as fly-casting pools and skating rinks. There is always the danger that the original reason for creating the park—i.e., to bring a part of nature within reach of the city dweller—will be sacrificed to its specific recreational functions. It is difficult to keep the balance, because the tempo of urban life has mounted and with it the requirements for intensive use.

Another danger to the public park is the automobile. With the tremendous growth in automobile traffic and, consequently, increasing pressure from traffic authorities for more land, there has been hardly a major city that has not lost sections of its parks to highways. There has been a growing awareness, particularly in Europe, that large-scale urban planning should be carried out in such a way that traffic functions are clearly separate and do not encroach on other spheres. In the United States, there have been victories for the park user against the automobile; in San Francisco, the state freeway was halted at the city limits, and, in New York City, Washington Square was closed to traffic.

It is unfortunate that the word park has come to connote almost exclusively the “romantic” style park or English garden of the 19th century. In truth, there are other traditions whose influence has been equally vital. How different from the Parisian Buttes-Chaumont, for instance, are the Tuileries across the river. These were laid out under the supervision of Marie de Médicis in the style of the Boboli Gardens in Florence. The parks of Versailles, the Belvedere Park in Vienna, the Vatican Gardens in Rome, Hellbrunn in Salzburg, Blenheim in England, Drottningholm in Sweden, and Peterhof (Petrodvorets) in Russia were likewise planned in the Italian Baroque tradition. They were intended not to be a foil or escape from the oppressive city but rather to be its central dramatic focus—a display for the opulence of rulers, a piazza for the moving of great crowds, from the tournament and guild ceremonies of Florence in the 17th century to the formal pageantry of the court. It was in the Baroque park that the handling, control, and stimulation of crowds in the open air developed as one of the great arts of the urban designer.

Another park tradition that has had worldwide influence is that of Islam. In Tehran, Marrakech, Sevilla (Seville), Lahore, and Delhi, this tradition is the dominant one and, as with all parks, developed according to the climate, social custom, and religious ethos. The original Muslim idea was to think of the garden as a paradise, a symbol of the afterlife as an oasis of beauty blooming in the earthly desert. Water and the cypress are the two main elements. Within the park, then, are water, the symbol of purity, in the four-way river of paradise, and trees (above all the cypress, symbolizing life), surrounded by high walls to keep out the dry wind. Everywhere, in keeping with Muslim belief, the design pattern is abstract rather than figurative. The fundamental idea creates its own specific technical skills; nowhere is there more artful use of irrigation for plants, of jets of water to cool the air, of orchards for shade, of colour to break up the sun’s glare, or of the use of masonry patterns than in these Islamic gardens.

The Taj Mahal in India dates from the 17th century, when by the testament of Shah Jahan this area of 20 acres (8 hectares) was to be maintained as a public grounds in perpetuity, where the poor could walk and pick fruit. In China and Japan, a similar opening of the royal precinct for public enjoyment, as with the Winter Palace or the Katsura Imperial Villa Gardens in Kyōto, has been a more recent development. The great religious shrines, however, have always resembled Western parks. The Horimonji temple in Tokyo, the Mimeguri shrine, the great Buddhist temple at Ise, and the Inner (Shintō) shrine at Mieshima are examples of an age-old garden tradition in which humanity is but “one of a thousand things” and where nature is presented in an idealized and symbolic way as an object for contemplation and spiritual enjoyment. In their techniques of horticulture and in their use of stones, water, and surface textures, the gardens of East Asia are of a high level. This Eastern tradition had its effect on European park design in the 18th century and again in the 20th century, as in the grounds for the UNESCO building in Paris, designed by Isamu Noguchi.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1969 2023-11-22 00:06:16

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,333

Re: Miscellany

1973) Trophy

Gist

i) anything taken in war, hunting, competition, etc., especially when preserved as a memento; spoil, prize, or award.
ii) anything serving as a token or evidence of victory, valor, skill, etc.
iii) a symbol of success that is used to impress others.
iv) a carving, painting, or other representation of objects associated with or symbolic of victory or achievement.

Summary

Trophy (from Greek tropaion, from tropē, “rout”), in ancient Greece, is a memorial of victory set up on the field of battle at the spot where the enemy had been routed. It consisted of captured arms and standards hung upon a tree or stake in the semblance of a man and was inscribed with details of the battle along with a dedication to a god or gods. After a naval victory, the trophy, composed of whole ships or their beaks, was laid out on the nearest beach. To destroy a trophy was regarded as a sacrilege since, as an object dedicated to a god, it must be left to decay naturally. The Romans continued the custom but usually preferred to construct trophies in Rome, with columns or triumphal arches serving the purpose in imperial times. Outside Rome, there are remains of huge stone memorials, once crowned by stone trophies, built by Augustus in 7/6 BC at La Turbie (near Nice, Fr.) and by Trajan c. AD 109 at Adamclisi in eastern Romania.

Details

A trophy is a tangible, durable reminder of a specific achievement, serving as recognition or evidence of merit. Trophies are most commonly awarded for sporting events, ranging from youth sports to professional level athletics. Additionally, trophies are presented for achievements in Academic, Arts and Entertainment, Business, Military, Professional awards, Community Service, Hunting, and Environmental accomplishments. In many contexts, especially in sports, medals (or, in North America, rings) are often given out either as the trophy or along with more traditional trophies.

Originally the word trophy, derived from the Greek tropaion, referred to arms, standards, other property, or human captives and body parts (e.g., headhunting) captured in battle. These war trophies commemorated the military victories of a state, army or individual combatant. In modern warfare trophy taking is discouraged, but this sense of the word is reflected in hunting trophies and human trophy collecting by serial killers.

Etymology

Trophies have marked victories since ancient times. The word trophy, coined in English in 1550, was derived from the French trophée in 1513, "a prize of war", from Old French trophee, from Latin trophaeum, monument to victory, variant of tropaeum, which in turn is the latinisation of the Greek τρόπαιον (tropaion), "of defeat" or "for defeat", but generally "of a turning" or "of a change", from τροπή (tropē), "a turn, a change", and that from the verb τρέπω (trepo), "to turn, to alter".

In ancient Greece, trophies were made on the battlefields of victorious battles, from captured arms and standards, and were hung upon a tree or a large stake made to resemble a warrior. Often, these ancient trophies were inscribed with a story of the battle and were dedicated to various gods. Trophies commemorating naval victories sometimes consisted of entire ships (or what remained of them) laid out on the beach. To destroy a trophy was considered a sacrilege.

The ancient Romans kept their trophies closer to home. The Romans built magnificent trophies in Rome, including columns and arches atop a foundation. Most of the stone trophies that once adorned huge stone memorials in Rome have long since been stolen.

History

In ancient Greece, the winners of the Olympic games initially received no trophies except laurel wreaths. Later the winner also received an amphora with sacred olive oil. In local games, the winners received different trophies, such as a tripod vase, a bronze shield or a silver cup.

In ancient Rome, money usually was given to winners instead of trophies.

Chalices were given to winners of sporting events at least as early as the very late 1600s in the New World. For example, the Kyp Cup (made by silversmith Jesse Kyp), a small, two-handled, sterling cup in the Henry Ford Museum, was given to the winner of a horse race between two towns in New England in about 1699. Chalices, particularly, are associated with sporting events, and were traditionally made in silver. Winners of horse races, and later boating and early automobile races, were the typical recipients of these trophies. The Davis Cup, Stanley Cup, America's Cup and numerous World Cups are all now famous cup-shaped trophies given to sports winners.

Today, the most common trophies are much less expensive, and thus much more pervasive, thanks to mass-produced plastic/resin trophies.

The oldest sports trophies in the world are the Carlisle Bells, horse racing trophies dating back to 1559 and 1599, which were first awarded by Elizabeth I. The race has been run for over 400 years in Carlisle, Cumbria, United Kingdom. The bells are on show at the local museum, Tullie House, which houses a variety of historic artifacts from the area, from Roman legions to the present day.

Types

Contemporary trophies often depict an aspect of the event commemorated; for example, in basketball tournaments, the trophy takes the shape of a basketball player or a basketball. In the past, trophies have been objects of use such as two-handled cups, bowls, or mugs (all usually engraved), or representations such as statues of people, animals, and architecture displaying words, numbers or images. While trophies traditionally have been made with metal figures, wood columns, and wood bases, in recent years they have been made with plastic figures and marble bases. This is to retain the weight traditionally associated with a quality award and make them more affordable to use as recognition items. Trophies increasingly have used resin depictions.

The Academy Awards Oscar is a trophy with a stylized human; the Hugo Award for science fiction is a space ship; and the Wimbledon awards for its singles champions are a large loving cup for men and a large silver plate for women.

A loving-cup trophy is a common variety of trophy; it is a cup shape, usually on a pedestal, with two or more handles, and is often made from silver or silver plate.

Hunting trophies are reminders of successes from hunting animals, such as an animal's head mounted to be hung on a wall. Some hunters also have their animals preserved by taxidermy, either the head alone or the full animal, and put out for show.

Perpetual trophies are held by the winner until the next event, when the winner must compete again in order to keep the trophy. In some competitions winners of a certain number of consecutive or non-consecutive events receive the trophy or its copy in permanent ownership. This was particularly common in the late 19th and early 20th centuries, and led to the discontinuation of many trophy events when the trophy was won permanently and the event organizers could not or would not purchase a new one.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1970 2023-11-23 00:05:16

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,333

Re: Miscellany

1974) Pebble

Gist

A pebble is a small, rounded stone, especially one worn smooth by the action of water.

Details

A pebble is a clast of rock with a particle size of 4–64 mm (0.16–2.52 in) based on the Udden-Wentworth scale of sedimentology. Pebbles are generally considered larger than granules (2–4 mm (0.079–0.157 in) in diameter) and smaller than cobbles (64–256 mm (2.5–10.1 in) in diameter). A rock made predominantly of pebbles is termed a conglomerate. Pebble tools are among the earliest known man-made artifacts, dating from the Palaeolithic period of human history.
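
To make the Udden-Wentworth boundaries above concrete, here is a minimal Python sketch (my own illustration, not part of the scale's formal definition) that classifies a clast by its diameter in millimetres; the function name classify_clast and the restriction to the granule-pebble-cobble range are assumptions made for the example.

    def classify_clast(diameter_mm):
        # Udden-Wentworth boundaries quoted above:
        # granule 2-4 mm, pebble 4-64 mm, cobble 64-256 mm.
        if 2 <= diameter_mm < 4:
            return "granule"
        if 4 <= diameter_mm < 64:
            return "pebble"
        if 64 <= diameter_mm < 256:
            return "cobble"
        return "outside the granule-cobble range"

    # A 30 mm beach stone falls in the pebble class.
    print(classify_clast(30))  # pebble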

A beach composed chiefly of surface pebbles is commonly termed a shingle beach. This type of beach has armoring characteristics with respect to wave erosion, as well as ecological niches that provide habitat for animals and plants.

Inshore banks of shingle (large quantities of pebbles) exist in some locations, such as the entrance to the River Ore, England, where the moving banks of shingle give notable navigational challenges.

Pebbles come in various colors and textures and can have streaks, known as veins, of quartz or other minerals. Pebbles are mostly smooth but, dependent on how frequently they come in contact with the sea, they can have marks of contact with other rocks or other pebbles. Pebbles left above the high water mark may have growths of organisms such as lichen on them, signifying the lack of contact with seawater.

Location

Pebbles on Earth exist in two types of locations – on the beaches of various oceans and seas, and inland, where ancient seas used to cover the land; when those seas retreated, the rocks became landlocked. Inland pebbles also form in lakes, ponds and rivers, travelling into estuaries where the smoothing continues in the sea.

Beach pebbles and river pebbles (also known as river rock) are distinct in their geological formation and appearance.

Beach

Beach pebbles form gradually over time as the ocean water washes over loose rock particles. The result is a smooth, rounded appearance. The typical size range is from 2 mm to 50 mm. The colors range from translucent white to black, and include shades of yellow, brown, red and green. Some of the more plentiful pebble beaches are along the coast of the Pacific Ocean, beginning in Canada and extending down to the tip of South America in Argentina. Other pebble beaches are in northern Europe (particularly on the beaches of the Norwegian Sea), along the coast of the U.K. and Ireland, on the shores of Australia, and around the islands of Indonesia and Japan.

Inland

Inland pebbles (river pebbles or river rock) are usually found along the shores of large rivers and lakes. These pebbles form as the flowing water washes over rock particles on the bottom and along the shores of the river. The smoothness and color of river pebbles depends on several factors, such as the composition of the soil of the river banks, the chemical characteristics of the water, and the speed of the current. Because river currents are gentler than ocean waves, river pebbles are usually not as smooth as beach pebbles. The most common colors of river rock are black, grey, green, brown and white.

Human use

Beach pebbles and river pebbles are used for a variety of purposes, both outdoors and indoors. They can be sorted by colour and size, and they can also be polished to improve the texture and colour. Outdoors, beach pebbles are often used for landscaping, construction and as decorative elements. Beach pebbles are often used to cover walkways and driveways, around pools, in and around plant containers, on patios and decks. Beach and river pebbles are also used to create water-smart gardens in areas where water is scarce. Small pebbles are also used to create living spaces and gardens on the rooftops of buildings. Indoors, pebbles can be used as bookends and paperweights. Large pebbles are also used to create "pet rocks" for children.

Mars

On Mars, slabs of pebbly conglomerate rock have been found and have been interpreted by scientists as having formed in an ancient streambed. The gravels, which were discovered by NASA's Mars rover Curiosity, range from the size of sand particles to the size of golf balls. Analysis has shown that the pebbles were deposited by a stream that flowed at walking pace and was ankle- to hip-deep.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1971 2023-11-24 21:20:54

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,333

Re: Miscellany

1975) Language

Gist

i) Language is the system of communication in speech and writing that is used by people of a particular country.
ii) It is the system of sounds and writing that human beings use to express their thoughts, ideas and feelings.

Summary

Language is a structured system of communication that consists of grammar and vocabulary. It is the primary means by which humans convey meaning, both in spoken and written forms, and may also be conveyed through sign languages. The vast majority of human languages have developed writing systems that allow for the recording and preservation of the sounds or signs of language. Human language is characterized by its cultural and historical diversity, with significant variations observed between cultures and across time. Human languages possess the properties of productivity and displacement, which enable the creation of an infinite number of sentences, and the ability to refer to objects, events, and ideas that are not immediately present in the discourse. The use of human language relies on social convention and is acquired through learning.

Estimates of the number of human languages in the world vary between 5,000 and 7,000. Precise estimates depend on an arbitrary distinction (dichotomy) established between languages and dialects. Natural languages are spoken, signed, or both; however, any language can be encoded into secondary media using auditory, visual, or tactile stimuli – for example, writing, whistling, signing, or braille. In other words, human language is modality-independent, but written or signed language is the way to inscribe or encode the natural human speech or gestures.

Depending on philosophical perspectives regarding the definition of language and meaning, when used as a general concept, "language" may refer to the cognitive ability to learn and use systems of complex communication, or to describe the set of rules that makes up these systems, or the set of utterances that can be produced from those rules. All languages rely on the process of semiosis to relate signs to particular meanings. Oral, manual and tactile languages contain a phonological system that governs how symbols are used to form sequences known as words or morphemes, and a syntactic system that governs how words and morphemes are combined to form phrases and utterances.

The scientific study of language is called linguistics. Critical examinations of languages, such as philosophy of language, the relationships between language and thought, how words represent experience, etc., have been debated at least since Gorgias and Plato in ancient Greek civilization. Thinkers such as Jean-Jacques Rousseau (1712–1778) have argued that language originated from emotions, while others like Immanuel Kant (1724–1804) have argued that languages originated from rational and logical thought. Twentieth century philosophers such as Ludwig Wittgenstein (1889–1951) argued that philosophy is really the study of language itself. Major figures in contemporary linguistics of these times include Ferdinand de Saussure and Noam Chomsky.

Language is thought to have gradually diverged from earlier primate communication systems when early hominins acquired the ability to form a theory of mind and shared intentionality. This development is sometimes thought to have coincided with an increase in brain volume, and many linguists see the structures of language as having evolved to serve specific communicative and social functions. Language is processed in many different locations in the human brain, but especially in Broca's and Wernicke's areas. Humans acquire language through social interaction in early childhood, and children generally speak fluently by approximately three years old. Language and culture are codependent. Therefore, in addition to its strictly communicative uses, language has social uses such as signifying group identity, social stratification, as well as use for social grooming and entertainment.

Languages evolve and diversify over time, and the history of their evolution can be reconstructed by comparing modern languages to determine which traits their ancestral languages must have had in order for the later developmental stages to occur. A group of languages that descend from a common ancestor is known as a language family; in contrast, a language that has been demonstrated to not have any living or non-living relationship with another language is called a language isolate. There are also many unclassified languages whose relationships have not been established, and spurious languages may have not existed at all. Academic consensus holds that between 50% and 90% of languages spoken at the beginning of the 21st century will probably have become extinct by the year 2100.

Details

Language is a system of conventional spoken, manual (signed), or written symbols by means of which human beings, as members of a social group and participants in its culture, express themselves. The functions of language include communication, the expression of identity, play, imaginative expression, and emotional release.

Characteristics of language:

Definitions of language

Many definitions of language have been proposed. Henry Sweet, an English phonetician and language scholar, stated: “Language is the expression of ideas by means of speech-sounds combined into words. Words are combined into sentences, this combination answering to that of ideas into thoughts.” The American linguists Bernard Bloch and George L. Trager formulated the following definition: “A language is a system of arbitrary vocal symbols by means of which a social group cooperates.” Any succinct definition of language makes a number of presuppositions and begs a number of questions. The first, for example, puts excessive weight on “thought,” and the second uses “arbitrary” in a specialized, though legitimate, way.

A number of considerations (marked in italics below) enter into a proper understanding of language as a subject:

Every physiologically and mentally typical person acquires in childhood the ability to make use, as both sender and receiver, of a system of communication that comprises a circumscribed set of symbols (e.g., sounds, gestures, or written or typed characters). In spoken language, this symbol set consists of noises resulting from movements of certain organs within the throat and mouth. In signed languages, these symbols may be hand or body movements, gestures, or facial expressions. By means of these symbols, people are able to impart information, to express feelings and emotions, to influence the activities of others, and to comport themselves with varying degrees of friendliness or hostility toward persons who make use of substantially the same set of symbols.

Different systems of communication constitute different languages; the degree of difference needed to establish a different language cannot be stated exactly. No two people speak exactly alike; hence, one is able to recognize the voices of friends over the telephone and to keep distinct a number of unseen speakers in a radio broadcast. Yet, clearly, no one would say that they speak different languages. Generally, systems of communication are recognized as different languages if they cannot be understood without specific learning by both parties, though the precise limits of mutual intelligibility are hard to draw and belong on a scale rather than on either side of a definite dividing line. Substantially different systems of communication that may impede but do not prevent mutual comprehension are called dialects of a language. In order to describe in detail the actual different language patterns of individuals, the term idiolect, meaning the habits of expression of a single person, has been coined.

Typically, people acquire a single language initially—their first language, or native tongue, the language used by those with whom, or by whom, they are brought up from infancy. Subsequent “second” languages are learned to different degrees of competence under various conditions. Complete mastery of two languages is designated as bilingualism; in many cases—such as upbringing by parents using different languages at home or being raised within a multilingual community—children grow up as bilinguals. In traditionally monolingual cultures, the learning, to any extent, of a second or other language is an activity superimposed on the prior mastery of one’s first language and is a different process intellectually.

Language, as described above, is species-specific to human beings. Other members of the animal kingdom have the ability to communicate, through vocal noises or by other means, but the most important single feature characterizing human language (that is, every individual language), against every known mode of animal communication, is its infinite productivity and creativity. Human beings are unrestricted in what they can communicate; no area of experience is accepted as necessarily incommunicable, though it may be necessary to adapt one’s language in order to cope with new discoveries or new modes of thought. Animal communication systems are by contrast very tightly circumscribed in what may be communicated. Indeed, displaced reference, the ability to communicate about things outside immediate temporal and spatial contiguity, which is fundamental to speech, is found elsewhere only in the so-called language of bees. Bees are able, by carrying out various conventionalized movements (referred to as bee dances) in or near the hive, to indicate to others the locations and strengths of food sources. But food sources are the only known theme of this communication system. Surprisingly, however, this system, nearest to human language in function, belongs to a species remote from humanity in the animal kingdom. On the other hand, the animal performance superficially most like human speech, the mimicry of parrots and of some other birds that have been kept in the company of humans, is wholly derivative and serves no independent communicative function. Humankind’s nearest relatives among the primates, though possessing a vocal physiology similar to that of humans, have not developed anything like a spoken language. Attempts to teach sign language to chimpanzees and other apes through imitation have achieved limited success, though the interpretation of the significance of ape signing ability remains controversial.

In most accounts, the primary purpose of language is to facilitate communication, in the sense of transmission of information from one person to another. However, sociolinguistic and psycholinguistic studies have drawn attention to a range of other functions for language. Among these is the use of language to express a national or local identity (a common source of conflict in situations of multiethnicity around the world, such as in Belgium, India, and Quebec). Also important are the “ludic” (playful) function of language—encountered in such phenomena as puns, riddles, and crossword puzzles—and the range of functions seen in imaginative or symbolic contexts, such as poetry, drama, and religious expression.

Language interacts with every aspect of human life in society, and it can be understood only if it is considered in relation to society. This article attempts to survey language in this light and to consider its various functions and the purposes it can and has been made to serve. Because each language is both a working system of communication in the period and in the community wherein it is used and also the product of its history and the source of its future development, any account of language must consider it from both these points of view.

The science of language is known as linguistics. It includes what are generally distinguished as descriptive linguistics and historical linguistics. Linguistics is now a highly technical subject; it embraces, both descriptively and historically, such major divisions as phonetics, grammar (including syntax and morphology), semantics, and pragmatics, dealing in detail with these various aspects of language.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1972 2023-11-25 20:06:13

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,333

Re: Miscellany

1976) Intern

Gist

An intern is a doctor working in a hospital who is in their first year after completing their medical degree.

Details

Internship

An internship is a period of work experience offered by an organization for a limited period of time. Once confined to medical graduates, internship is now used for a wide range of placements in businesses, non-profit organizations and government agencies. They are typically undertaken by students and graduates looking to gain relevant skills and experience in a particular field. Employers benefit from these placements because they often recruit employees from their best interns, who have known capabilities, thus saving time and money in the long run. Internships are usually arranged by third-party organizations that recruit interns on behalf of industry groups. Rules vary from country to country about when interns should be regarded as employees. The system can be open to exploitation by unscrupulous employers.

Internships for professional careers are similar in some ways to apprenticeships, which transition students from vocational school into the workforce. The lack of standardization and oversight leaves the term "internship" open to broad interpretation. Interns may be high school students, college and university students, or post-graduate adults. These positions may be paid or unpaid and are temporary. Many large corporations, particularly investment banks, have "insights" programs that serve as a pre-internship event numbering a day to a week, either in person or virtually.

Typically, an internship consists of an exchange of services for experience between the intern and the organization. Internships are used to determine whether the intern still has an interest in that field after the real-life experience. In addition, an internship can be used to build a professional network that can assist with letters of recommendation or lead to future employment opportunities. The benefit of bringing an intern into full-time employment is that they are already familiar with the company, therefore needing little to no training. Internships provide current college students with the ability to participate in a field of their choice to receive hands-on learning about a particular future career, preparing them for full-time work following graduation.

Types

Internships exist in a wide variety of industries and settings. An internship can be paid, unpaid, or partially paid (in the form of a stipend). Internships may be part-time or full-time and are usually flexible with students' schedules. A typical internship lasts between one and four months, but can be shorter or longer, depending on the organization involved. The act of job shadowing may also constitute interning.

* Insights: Many large corporations, particularly investment banks, have "insights" programs that serve as a pre-internship event numbering a day to a week, either in person or virtually.
* Paid internships are common in professional fields including medicine, architecture, science, engineering, law, business (especially accounting and finance), technology, and advertising. Work experience internships usually occur during the second or third year of schooling. This type of internship is to expand an intern's knowledge both in their school studies and also at the company. The intern is expected to bring ideas and knowledge from school into the company.
* Work research, virtual research (graduation) or dissertation: This is mostly done by students who are in their final year of school. With this kind of internship, a student does research for a particular company. The company can have something that they feel they need to improve, or the student can choose a topic in the company themselves. The results of the research study will be put in a report and often will have to be presented.
* Unpaid internships are typically through non-profit charities and think tanks, which often have unpaid or volunteer positions. State law and state enforcement agencies may impose requirements on unpaid internship programs under minimum wage laws. A program must meet certain criteria to be properly classified as an unpaid internship; part of this requirement is proving that the intern is the primary beneficiary of the relationship. Unpaid interns perform work that is not routine and work that the company does not depend upon.
* Partially paid internships are those in which students are paid in the form of a stipend. Stipends are typically a fixed amount of money paid out on a regular basis. Usually, interns that are paid with stipends are paid on a set schedule associated with the organization.
* Virtual internships are internships done remotely via email, phone, and web communication. This offers flexibility, as physical presence isn't required, while still providing the capacity to gain job experience without the conventional requirement of being physically present in an office. Virtual interns generally have the opportunity to work at their own pace.
* International internships are internships done in a country other than one's country of residence. These internships can either be in person or done remotely. Van Mol analyzed employer perspectives on study abroad versus international internships in 31 European countries, finding that employers value international internships more than international study, while Predovic, Dennis and Jones found that international internships developed cognitive skills such as how new information is learned and the motivation to learn.
* Returnships are internships for experienced workers who are looking to return to the workforce after taking time away to care for parents or children.

Internship for a fee

Companies in search of interns often find and place students in mostly unpaid internships, for a fee. These companies charge students to assist with research, promising to refund the fee if no internship is found. The programs vary and aim to provide internship placements at reputable companies. Some companies may also provide controlled housing in a new city, mentorship, support, networking, weekend activities or academic credit. Some programs offer extra add-ons such as language classes, networking events, local excursions, and other academic options.

Some companies specifically fund scholarships and grants for low-income applicants. Critics of internships criticize the practice of requiring certain college credits to be obtained only through unpaid internships. Depending on the cost of the school, this is often seen as an unethical practice, as it requires students to exchange paid-for and often limited tuition credits to work an uncompensated job. Paying for academic credits is a way to ensure students complete the duration of the internship, since they can be held accountable by their academic institution. For example, a student may be awarded academic credit only after their university receives a positive review from the intern's supervisor at the sponsoring organization.

Secondary level work experience

Work experience in England was established in the 1970s by Jack Pidcock, Principal Careers Officer of Manchester Careers Service. The Service organized two weeks of work experience for all Year 10 pupils in Manchester Local Education Authority schools, including those for pupils with special educational needs. Ironically, it was initially resisted by trade unions, and at first he had a hard job convincing schools, until eventually he persuaded the L.E.A. and councillors to go ahead. It became highly valued by pupils, teachers, inspectors, employers and politicians. Work experience provided a taste of the requirements and disciplines of work and an insight into possible vocational choices. It ran alongside professional, individual, impartial, face-to-face careers guidance by local careers advisers.

A Conservative government introduced the Education (Work Experience) Act 1973, which enabled all education authorities ‘to arrange for children under school-leaving age to have work experience, as part of their education’. The Conservative–Liberal Democrat coalition government abolished compulsory work experience for students in England at key stage 4 (Years 10 to 11, for 14-16 year olds) in 2012. Recently a number of non-governmental and employer-led bodies have become critical of pupils and students not understanding the ‘world of work’. Work experience is no longer offered on the national curriculum for students in Years 10 and 11 in the United Kingdom, but it is available in Scotland (3rd and 4th year), Australia, New Zealand and the Republic of Ireland; every student who wishes to do so has a statutory right to take work experience. In 2011, however, the Wolf Review of Vocational Education proposed a significant policy change: since almost all students now stay past the age of 16, the requirement for pre-16 work experience in the UK should be removed.

Work experience in this context is when students in an adult working environment more or less act as an employee, but with the emphasis on learning about the world of work. Placements are limited by safety and security restrictions, insurance cover and availability, and do not necessarily reflect eventual career choice but instead allow a broad experience of the world of work.

Most students do not get paid for work experience. However, some employers pay students, as this is considered part of their education. The duration varies according to the student's course, and other personal circumstances. Most students go out on work experience for one or two weeks in a year. Some students work in a particular workplace, perhaps one or two days a week for extended periods of time throughout the year—either for vocational reasons and commitment to alternative curricula or because they have social or behavioral problems.

University level work experience

At university level, work experience is often offered between the second and final years of an undergraduate degree course, especially in the science, engineering and computing fields. Courses of this nature are often called sandwich courses, with the work experience year itself known as the sandwich year. During this time, the students on work placement have the opportunity to use the skills and knowledge gained in their first two years, and see how they are applied to real world problems. This offers them useful insights for their final year and prepares them for the job market once their course has finished. Some companies sponsor students in their final year at university with the promise of a job at the end of the course. This is an incentive for the student to perform well during the placement as it helps with two otherwise unwelcome stresses: the lack of money in the final year, and finding a job when the university course ends.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1973 2023-11-25 21:32:43

KerimF
Member
From: Aleppo-Syria
Registered: 2018-08-10
Posts: 238

Re: Miscellany

I would like to thank you for your great and continuous efforts in providing this collection of interesting info.

I wonder if you had the idea to make an index list of all your entries in this thread. Obviously, it will need to be updated continuously since it is related to a growing book (not a fixed one).

Best Regards,
Kerim


Every living thing has no choice but to execute its pre-programmed instructions embedded in it (known as instincts).
But only a human may have the freedom and ability to oppose his natural robotic nature.
But, by opposing it, such a human becomes no more of this world.

Offline

#1974 2023-11-25 21:39:06

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,333

Re: Miscellany

Hi KerimF,

I post on different subjects as far as possible, mostly science.

Thanks for the feedback!

Jai.


It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1975 2023-11-26 16:52:27

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,333

Re: Miscellany

1977) Sports Commentator

Gist

A commentator is a person who gives his/her opinion about something on the radio, on television or in a newspaper.

Summary:

In sports broadcasting, a commentator is the person who is saying what is happening in the game. In the case of television commentary, the commentator is usually only heard and not often seen. In North American English, a commentator is also called an announcer or sportscaster. Often, the main commentator (called a play-by-play in North America) works with a color commentator and sometimes a sideline reporter. With big events, many sideline reporters are used.

Types of sports broadcasters

* Play-by-play announcers are the primary speakers. It is important for them to be easy to understand. They also need a good ability to describe what is happening quickly during a fast-moving sport. Play-by-play announcers are more likely to be professional broadcast journalists. The name comes from the fact that they describe each play of the game as it happens.
* Color commentators bring experience and insight into the game. They are often asked questions by the play-by-play announcer to give them a topic to talk about. Color commentators were often players or coaches in the sport being talked about. The name comes from the fact that they provide more color to the broadcast. They provide extra interesting information that makes the broadcast more entertaining.
* Sideline reporters are reporters who are put in different spots around the venue. They give the color commentator more information to pass on to the audience. For example, some sideline reporters could be in the dressing room area while others could be at the team benches. They often interview players and coaches before, during and after the game. They also try to get more information on things such as injuries to players.

Most sports television broadcasts have one play-by-play announcer and one color commentator. An example is NBC Sunday Night Football in the United States. It is called by Cris Collinsworth, a former American football receiver, and Al Michaels, a professional announcer. In the United Kingdom, there is not as much of a difference between play-by-play and color commentary. Two-man commentary teams usually have a person with formal journalistic training but little or no sports experience leading the commentary, and an expert former (or current) competitor dealing with analysis. There are exceptions to this. All of the United Kingdom's major cricket and snooker commentators are former professionals in their sports. The well-known Formula One racing commentator Murray Walker had no formal journalistic training and only a small amount of racing experience of his own.

Using a play-by-play announcer and one or more color commentators is standard as of 2012. In the past it was much more common for a play-by-play announcer to work alone.

Details

In sports broadcasting, a sports commentator (also known as sports announcer or sportscaster) provides a real-time commentary of a game or event, usually during a live broadcast, traditionally delivered in the historical present tense. Radio was the first medium for sports broadcasts, and radio commentators must describe all aspects of the action to listeners who cannot see it for themselves. In the case of televised sports coverage, commentators are usually presented as a voiceover, with images of the contest shown on viewers' screens and sounds of the action and spectators heard in the background. Television commentators are rarely shown on screen during an event, though some networks choose to feature their announcers on camera either before or after the contest or briefly during breaks in the action.

Types of commentators:

Main/play-by-play commentator

The main commentator, also called the play-by-play commentator or announcer in North America, blow-by-blow in combat sports coverage, lap-by-lap for motorsports coverage, or ball-by-ball for cricket coverage, is the primary speaker on the broadcast. Broadcasters in this role are articulate and able to describe each play or event of an often fast-moving sporting event. The play-by-play announcer is meant to convey the event as it is carried out. Because of their skill level, commentators like Al Michaels, Brian Anderson, Ian Eagle, Kevin Harlan, Jim Nantz, and Joe Buck in the U.S., David Coleman in the UK and Bruce McAvaney in Australia may have careers in which they call several different sports at one time or another. Other main commentators may, however, only call one sport (Mike Emrick, for example, is known almost exclusively as an ice hockey broadcaster and Peter Drury for association football). The vast majority of play-by-play announcers are male; female play-by-play announcers had not seen sustained employment until the 21st century.

Radio and television play-by-play techniques involve slightly different approaches; radio broadcasts typically require the play-by-play host to say more to verbally convey the on-field activity that cannot be seen by the radio audience. It is unusual to have radio and television broadcasts share the same play-by-play commentator for the same event, except in cases of low production budgets or when a broadcaster is particularly renowned (Rick Jeanneret's hockey telecasts, for example, were simulcast on radio and television from 1997 until his 2022 retirement).

Analyst/color commentator

The analyst or color commentator provides expert analysis and background information, such as statistics, strategy on the teams and athletes, and occasionally anecdotes or light humor. They are usually former athletes or coaches in their respective sports, although there are some exceptions.

The term "color" refers to levity and insight provided by analyst. The most common format for a sports broadcast is to have an analyst/color commentator work alongside the main/play-by-play announcer. An example is NBC Sunday Night Football in the United States, which is called by color commentator Cris Collinsworth, a former NFL receiver, and play-by-play commentator Mike Tirico, a professional announcer. In the United Kingdom, however, there is a much less distinct division between play-by-play and color commentary, although two-man commentary teams usually feature an enthusiast with formal journalistic training but little or no competitive experience leading the commentary, and an expert former (or current) competitor following up with analysis or summary. There are however exceptions to this — most of the United Kingdom's leading cricket and snooker commentators are former professionals in their sports, while the former Formula One racing commentator Murray Walker had no formal journalistic training and only limited racing experience of his own (he had come from an advertising background and his initial hiring was more of a comic double act than a traditional sports commentary pairing). In the United States, Pat Summerall, a former professional kicker, spent most of his broadcasting career as a play-by-play announcer. Comedian Dennis Miller's short-lived run as part of the Monday Night Football booth in 2001 caused what Miller himself described as a "maelstrom" of perplexed reviews.

Although the combination of a play-by-play announcer and color commentator is now considered the standard, it was much more common for a broadcast to have only one play-by-play announcer working alone. Vin Scully, longtime announcer for the Los Angeles Dodgers, was one of the few examples of this practice lasting into the 21st century until he retired in 2016. The three-person booth is a format used on Monday Night Football, in which there are two color commentators, usually one being a former player or coach and the other being an outsider, such as a journalist (Howard Cosell was one long-running example) or a comedian (such as the aforementioned Dennis Miller).

Sideline reporter

A sideline reporter assists a sports broadcasting crew with sideline coverage of the playing field or court. The sideline reporter typically makes live updates on injuries and breaking news or conducts player interviews while players are on the field or court because the play-by-play broadcaster and color commentator must remain in their broadcast booth. Sideline reporters are often granted inside information about an important update, such as injury, because they have the credentials necessary to do so. In cases of big events, teams consisting of many sideline reporters are placed strategically so that the main commentator has many sources to turn to (for example some sideline reporters could be stationed in the dressing room area while others could be between the respective team benches). In the United States, sideline reporters are heavily restricted by NFL rules; in contrast, both the 2001 and 2020 incarnations of the XFL featured sideline reporters in a much more prominent role.

In motorsports, it is typical for there to be multiple pit reporters, covering the event from along pit road. Their responsibilities include covering breaking news trackside, probing crew chiefs and other team leaders about strategy, and commentating on pit stops from along the pit wall. On occasion in motorsport, the reporter on the sideline is an understudy to the lead commentator; Fox NASCAR has used this tactic numerous times based on the career of Cup lead Mike Joy, a former pit reporter. Those who made the switch included Steve Byrnes (Truck Series, 2014), Vince Welch (Truck Series since late 2015), and Adam Alexander (Cup for Fox-produced TNT broadcasts from 2010–14, Xfinity on Fox since 2015).

Sports presenter/studio host

In British sports broadcasting, the presenter of a sports broadcast is usually distinct from the commentator, and often based in a remote broadcast television studio away from the sports venue. In North America, the on-air personality based in the studio is called the studio host. During their shows, the presenter/studio host may be joined by additional analysts or pundits, especially when showing highlights of various other matches (e.g. in 1985, Jim Nantz was the studio host for The Prudential College Football Report in Studio 43 in New York for CBS Sports, and during his four-year tenure there [the 1985 through 1988 college football seasons], he had Pat Haden [in 1985] and Ara Parseghian [in 1987 and 1988] as his co-hosts/pundits).

Other roles

Various sports may have different commentator roles to cover situations unique to that sport. In the 2010s, as popularized by Fox, American football broadcasts began to increasingly employ rules analysts to explain penalties and controversial calls, and analyze instant replay reviews to predict whether a call will or will not be overturned. These analysts are typically former referees.

Sportscaster

In North American English, sportscaster is a general term for any type of commentator in a sports broadcast. It may also refer to a sports talk show host or a newscaster covering sports news.

Esports

In video games, and particularly esports, commentators are often called shoutcasters; this term is derived from Shoutcast, an internet audio streaming plugin and protocol associated with the Winamp media player. They are also sometimes referred to as simply casters.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline
