Math Is Fun Forum

  Discussion about math, puzzles, games and fun.   Useful symbols: ÷ × ½ √ ∞ ≠ ≤ ≥ ≈ ⇒ ± ∈ Δ θ ∴ ∑ ∫ • π ƒ -¹ ² ³ °


#1776 2023-05-19 15:08:47

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 46,703

Re: Miscellany

1779) Gastroenteritis

Gist

What is gastroenteritis? Gastroenteritis is an inflammation of the lining of the stomach and intestines. The main symptoms include vomiting and diarrhea. It is usually not serious in healthy people, but it can sometimes lead to dehydration or cause severe symptoms.

Details

Gastroenteritis, also known as infectious diarrhea and gastro, is an inflammation of the gastrointestinal tract including the stomach and intestine. Symptoms may include diarrhea, vomiting, and abdominal pain. Fever, lack of energy, and dehydration may also occur. This typically lasts less than two weeks. It is not related to influenza, even though in the U.S. it is sometimes called the "stomach flu".

Gastroenteritis is usually caused by viruses; however, gut bacteria, parasites, and fungi can also cause it. In children, rotavirus is the most common cause of severe disease. In adults, norovirus and Campylobacter are common causes. Eating improperly prepared food, drinking contaminated water, or close contact with an infected person can spread the disease. Treatment is generally the same with or without a definitive diagnosis, so testing to confirm the cause is usually not needed.

For young children in impoverished countries, prevention includes hand washing with soap, drinking clean water, breastfeeding babies instead of using formula, and proper disposal of human waste. The rotavirus vaccine is recommended as a prevention for children. Treatment involves getting enough fluids. For mild or moderate cases, this can typically be achieved by drinking oral rehydration solution (a combination of water, salts and sugar). In those who are breastfed, continued breastfeeding is recommended. For more severe cases, intravenous fluids may be needed. Fluids may also be given by a nasogastric tube. Zinc supplementation is recommended in children. Antibiotics are generally not needed. However, antibiotics are recommended for young children with a fever and bloody diarrhea.

In 2015, there were two billion cases of gastroenteritis, resulting in 1.3 million deaths globally. Children and those in the developing world are affected the most. In 2011, there were about 1.7 billion cases, resulting in about 700,000 deaths of children under the age of five. In the developing world, children less than two years of age frequently get six or more infections a year. It is less common in adults, partly due to the development of immunity.

Signs and symptoms

Gastroenteritis usually involves both diarrhea and vomiting. Sometimes, only one or the other is present. This may be accompanied by abdominal cramps. Signs and symptoms usually begin 12–72 hours after contracting the infectious agent. If due to a virus, the condition usually resolves within one week. Some viral infections also involve fever, fatigue, headache and muscle pain. If the stool is bloody, the cause is less likely to be viral and more likely to be bacterial. Some bacterial infections cause severe abdominal pain and may persist for several weeks.

Children infected with rotavirus usually make a full recovery within three to eight days. However, in poor countries treatment for severe infections is often out of reach, and persistent diarrhea is common. Dehydration is a common complication of diarrhea. Severe dehydration in children may be recognized if the skin color returns slowly when pressed ("prolonged capillary refill") or if a pinched skin fold flattens slowly ("poor skin turgor"). Abnormal breathing is another sign of severe dehydration. Repeat infections are typically seen in areas with poor sanitation and malnutrition. Stunted growth and long-term cognitive delays can result.

Reactive arthritis occurs in 1% of people following infections with Campylobacter species. Guillain–Barré syndrome occurs in 0.1%. Hemolytic uremic syndrome (HUS) may occur due to infection with Shiga toxin-producing Escherichia coli or Shigella species. HUS causes low platelet counts, poor kidney function, and low red blood cell count (due to their breakdown). Children are more predisposed to getting HUS than adults. Some viral infections may produce benign infantile seizures.

Cause

Viruses (particularly rotavirus in children and norovirus in adults) and the bacteria Escherichia coli and Campylobacter species are the primary causes of gastroenteritis. Many other infectious agents, including parasites and fungi, can also cause this syndrome. Non-infectious causes are seen on occasion, but they are less likely than a viral or bacterial cause. The risk of infection is higher in children due to their lack of immunity. Children are also at higher risk because they are less likely to practice good hygiene, and those living in areas without easy access to water and soap are especially vulnerable.

Viral

Rotaviruses, noroviruses, adenoviruses, and astroviruses are known to cause viral gastroenteritis. Rotavirus is the most common cause of gastroenteritis in children, and produces similar rates in both the developed and developing world. Viruses cause about 70% of episodes of infectious diarrhea in the pediatric age group. Rotavirus is a less common cause in adults due to acquired immunity. Norovirus is the cause in about 18% of all cases. Generally speaking, viral gastroenteritis accounts for 21–40% of the cases of infectious diarrhea in developed countries.

Norovirus is the leading cause of gastroenteritis among adults in America, accounting for about 90% of viral gastroenteritis outbreaks. These localized epidemics typically occur when groups of people spend time proximate to each other, such as on cruise ships, in hospitals, or in restaurants. People may remain infectious even after their diarrhea has ended. Norovirus is the cause of about 10% of cases in children.

Bacterial

In some countries, Campylobacter jejuni is the primary cause of bacterial gastroenteritis, with half of these cases associated with exposure to poultry. In children, bacteria are the cause in about 15% of cases, with the most common types being Escherichia coli, Salmonella, Shigella, and Campylobacter species. If food becomes contaminated with bacteria and remains at room temperature for a period of several hours, the bacteria multiply and increase the risk of infection in those who consume the food. Some foods commonly associated with illness include raw or undercooked meat, poultry, seafood, and eggs; raw sprouts; unpasteurized milk and soft cheeses; and fruit and vegetable juices. In the developing world, especially sub-Saharan Africa and Asia, cholera is a common cause of gastroenteritis. This infection is usually transmitted by contaminated water or food.

Toxigenic Clostridium difficile is an important cause of diarrhea that occurs more often in the elderly. Infants can carry these bacteria without developing symptoms. It is a common cause of diarrhea in those who are hospitalized and is frequently associated with antibiotic use. Staphylococcus aureus infectious diarrhea may also occur in those who have used antibiotics. Acute "traveler's diarrhea" is usually a type of bacterial gastroenteritis, while the persistent form is usually parasitic. Acid-suppressing medication appears to increase the risk of significant infection after exposure to a number of organisms, including Clostridium difficile, Salmonella, and Campylobacter species. The risk is greater in those taking proton pump inhibitors than with H2 antagonists.

Parasitic

A number of parasites can cause gastroenteritis. Giardia lamblia is the most common, but Entamoeba histolytica, Cryptosporidium spp., and other species have also been implicated. As a group, these agents cause about 10% of cases in children. Giardia occurs more commonly in the developing world, but this type of illness can occur nearly everywhere. It occurs more commonly in people who have traveled to areas of high prevalence, children who attend day care, men who have sex with men, and following disasters.

Transmission

Transmission may occur from drinking contaminated water or when people share personal objects. Water quality typically worsens during the rainy season, and outbreaks are more common at this time. In areas with four seasons, infections are more common in the winter. Worldwide, bottle-feeding of babies with improperly sanitized bottles is a significant cause. Transmission rates are also related to poor hygiene (especially among children), crowded households, and poor nutritional status. Adults who have developed immunity might still carry certain organisms without exhibiting symptoms; thus, adults can become natural reservoirs of certain diseases. While some agents (such as Shigella) only occur in primates, others (such as Giardia) may occur in a wide variety of animals.

Non-infectious

There are a number of non-infectious causes of inflammation of the gastrointestinal tract. Some of the more common include medications (like NSAIDs), certain foods such as lactose (in those who are intolerant), and gluten (in those with celiac disease). Crohn's disease is also a non-infectious cause of (often severe) gastroenteritis. Disease secondary to toxins may also occur. Some food-related conditions associated with nausea, vomiting, and diarrhea include: ciguatera poisoning, due to consumption of contaminated predatory fish; scombroid, associated with the consumption of certain types of spoiled fish; tetrodotoxin poisoning, from the consumption of puffer fish, among others; and botulism, typically due to improperly preserved food.

In the United States, rates of emergency department use for noninfectious gastroenteritis dropped 30% from 2006 until 2011. Of the twenty most common conditions seen in the emergency department, rates of noninfectious gastroenteritis had the largest decrease in visits in that time period.

Pathophysiology

Gastroenteritis is defined as vomiting or diarrhea due to inflammation of the small or large bowel, often due to infection. The changes in the small bowel are typically noninflammatory, while the ones in the large bowel are inflammatory. The number of pathogens required to cause an infection varies from as few as one (for Cryptosporidium) to as many as 10^8 (for Vibrio cholerae).

Diagnosis

Gastroenteritis is typically diagnosed clinically, based on a person's signs and symptoms. Determining the exact cause is usually not needed as it does not alter the management of the condition.

However, stool cultures should be performed in those with blood in the stool, those who might have been exposed to food poisoning, and those who have recently traveled to the developing world. Testing may also be appropriate in children younger than 5, older people, and those with poor immune function, and may be done for surveillance purposes. As hypoglycemia occurs in approximately 10% of infants and young children, measuring serum glucose in this population is recommended. Electrolytes and kidney function should also be checked when there is concern about severe dehydration.

Dehydration

Determining whether the person is dehydrated is an important part of the assessment, with dehydration typically divided into mild (3–5% fluid loss), moderate (6–9%), and severe (≥10%) cases. In children, the most accurate signs of moderate or severe dehydration are prolonged capillary refill, poor skin turgor, and abnormal breathing. Other useful findings (when used in combination) include sunken eyes, decreased activity, a lack of tears, and a dry mouth. Normal urinary output and oral fluid intake are reassuring. Laboratory testing is of little clinical benefit in determining the degree of dehydration, so urine testing and ultrasound are generally not needed.
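The severity bands above amount to a simple threshold classification. Here is a minimal sketch of that rule (the function name and the reading of the percentages as estimated fluid loss in percent of body weight are my own framing of the figures quoted, not a clinical tool):

```python
def classify_dehydration(fluid_loss_pct: float) -> str:
    """Classify dehydration severity from estimated fluid loss
    (% of body weight), using the bands quoted above:
    mild 3-5%, moderate 6-9%, severe >= 10%."""
    if fluid_loss_pct >= 10:
        return "severe"
    if fluid_loss_pct >= 6:
        return "moderate"
    if fluid_loss_pct >= 3:
        return "mild"
    return "minimal"

print(classify_dehydration(4))   # mild
print(classify_dehydration(12))  # severe
```

Note that the quoted bands leave values between 5% and 6% unassigned; the sketch simply treats anything below the next threshold as the lower band.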

Differential diagnosis

Other potential causes of signs and symptoms that mimic those seen in gastroenteritis that need to be ruled out include appendicitis, volvulus, inflammatory bowel disease, urinary tract infections, and diabetes mellitus. Pancreatic insufficiency, short bowel syndrome, Whipple's disease, coeliac disease, and laxative abuse should also be considered. The differential diagnosis can be complicated somewhat if the person exhibits only vomiting or diarrhea (rather than both).

Appendicitis may present with vomiting, abdominal pain, and a small amount of diarrhea in up to 33% of cases. This is in contrast to the large amount of diarrhea that is typical of gastroenteritis. Infections of the lungs or urinary tract in children may also cause vomiting or diarrhea. Classical diabetic ketoacidosis (DKA) presents with abdominal pain, nausea, and vomiting, but without diarrhea. One study found that 17% of children with DKA were initially diagnosed as having gastroenteritis.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.


#1777 2023-05-20 14:26:57

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 46,703

Re: Miscellany

1780) Microform

Summary

Microform, also called microcopy or microrecord, is any process, photographic or electronic, for reproducing printed matter or other graphic material at a much-reduced size, which can then be re-enlarged by an optical apparatus for reading or reproduction. Microform systems provide durable, extremely compact, and easily accessible file records.

The earliest large-scale commercial use of greatly reduced-size copying onto narrow rolls of film (microfilm) resulted from the introduction of the Recordak system by the Eastman Kodak Company in 1928. Continuous, automatic cameras photographed documents on 16-millimetre film, and the first use was for copying checks in bank transit or clearing work. But it soon spread to a great variety of other applications in business, government, and education; and 35-millimetre film was used as well as 16-millimetre.

The science of microphotography burgeoned in the late 20th century, with scores of processes and a variety of miniaturizations being introduced. Generally speaking, a microform may be in the form of continuous media, such as roll or cartridge microfilm, or in that of individual and physically separate records, such as film chips (microfilm containing coded microimages, to be used in automatic retrieval systems) or microfiche (a sheet of microfilm displaying at the top a title or code readable with the naked eye). Use of the microform permits considerable space saving. The microform usually utilizes photographic techniques; however, other methods such as video magnetic tape recording have been used. Most microform techniques also permit the document file to be duplicated easily for dissemination or filing in different places.

Automated equipment for unit records or continuous media is also available to handle microforms at high speeds. Such equipment typically stores the microform image with a corresponding address number in the machine file. Given a request for a copy, the equipment automatically selects the proper image from the file and prepares a copy, either a printed paper copy of the original or a microform copy, depending upon the equipment. Some automated microform systems also include the indexing information in machine-coded form with the image. This permits a mechanized index search to be made; in some cases it can result in rapid delivery of copies of those records selected in response to the index query.

Details

Microforms are scaled-down reproductions of documents, typically on film or paper, made for the purposes of transmission, storage, reading, and printing. Microform images are commonly reduced to about 4%, or 1⁄25, of the original document size. For special purposes, greater optical reductions may be used.
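The quoted reduction is easy to check with a little arithmetic. Taking the 1⁄25 figure as a linear reduction ratio (an assumption here; microfilm ratios are conventionally quoted per linear dimension), an A4 page shrinks as follows:

```python
def reduced_dimensions(width_mm: float, height_mm: float,
                       linear_ratio: float = 25) -> tuple:
    """Filmed image dimensions at a given linear reduction ratio."""
    return width_mm / linear_ratio, height_mm / linear_ratio

# An A4 page (210 x 297 mm) at 25x becomes 8.4 x 11.88 mm.
# The area shrinks by the square of the ratio: 1/625, or 0.16%.
w, h = reduced_dimensions(210, 297)
print(w, h)
```

This also shows why "4%" must be read as a linear figure: the corresponding area reduction is far greater.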

Three formats are common: microfilm (reels), microfiche (flat sheets), and aperture cards. Microcards, also known as "micro-opaques", a format no longer produced, were similar to microfiche, but printed on cardboard rather than photographic film.

Uses

Systems that mount microfilm images in punched cards have been widely used for archival storage of engineering information.

For example, when airlines demand archival engineering drawings to support purchased equipment (in case the vendor goes out of business, for example), they normally specify punch-card-mounted microfilm with an industry-standard indexing system punched into the card. This permits automated reproduction, as well as permitting mechanical card-sorting equipment to sort and select microfilm drawings.

Aperture card mounted microfilm is roughly 3% of the size and space of conventional paper or vellum engineering drawings. Some military contracts around 1980 began to specify digital storage of engineering and maintenance data because the expenses were even lower than microfilm, but these programs are now finding it difficult to purchase new readers for the old formats.

Microfilm first saw military use during the Franco-Prussian War of 1870–71. During the Siege of Paris, the only way for the provincial government in Tours to communicate with Paris was by pigeon post. As the pigeons could not carry paper dispatches, the Tours government turned to microfilm. Using a microphotography unit evacuated from Paris before the siege, clerks in Tours photographed paper dispatches and compressed them to microfilm, which were carried by homing pigeons into Paris and projected by magic lantern while clerks copied the dispatches onto paper.

Additionally, the US Victory Mail, and the British "Airgraph" system it was based on, were used for delivering mail between those at home and troops serving overseas during World War II. The systems worked by photographing large amounts of censored mail reduced to thumbnail size onto reels of microfilm, which weighed much less than the originals would have. The film reels were shipped by priority air freight to and from the home fronts, sent to their prescribed destinations for enlarging at receiving stations near the recipients, and printed out on lightweight photo paper. These facsimiles of the letter-sheets were reproduced about one-quarter the original size and the miniature mails were then delivered to the addressee. Use of these microfilm systems saved significant volumes of cargo capacity needed for war supplies. An additional benefit was that the small, lightweight reels of microfilm were almost always transported by air, and as such were delivered much more quickly than any surface mail service could have managed.

Libraries began using microfilm in the mid-20th century as a preservation strategy for deteriorating newspaper collections. Books and newspapers that were deemed in danger of decay could be preserved on film and thus access and use could be increased. Microfilming was also a space-saving measure. In his 1945 book, The Scholar and the Future of the Research Library, Fremont Rider calculated that research libraries were doubling in space every sixteen years. His suggested solution was microfilming, specifically with his invention, the microcard. Once items were put onto film, they could be removed from circulation and additional shelf space would be made available for rapidly expanding collections. The microcard was superseded by microfiche. By the 1960s, microfilming had become standard policy.

In 1948, the Australian Joint Copying Project began, with the aim of filming records and archives from the United Kingdom relating to Australia and the Pacific. Over 10,000 reels were produced, making it one of the largest projects of its kind.

Around the same time, Licensed Betting Offices in the UK began using microphotography as a means of keeping compact records of bets taken. Betting shop customers would sometimes amend their betting slip receipts in an attempt at fraud, so the microphotography camera (which generally contained its own independent timepiece) found use as a definitive means of recording the exact details of every bet taken. Microphotography has now largely been replaced by digital 'bet capture' systems, which also allow a computer to settle the returns for each bet once the details of the wager have been 'translated' into the system by an employee. The added efficiency of this digital system has ensured that very few betting offices, if any, continue to use microfilm cameras in the UK.

Visa and National City use microfilm (roll microfilm and fiche) to store financial, personal, and legal records.

Source code for computer programs was printed to microfiche during the 1970s and distributed to customers in this form.

Additionally, microfiche was used to publish the lengthy case analyses of some proofs, such as that of the four color theorem.

Additional Information

Microforms are microreproductions of documents for transmission, storage, reading, and printing. Microform images are commonly reduced to about 1⁄25 of the original document size. For special purposes, greater optical reductions may be used. All microform images may be provided as positives or negatives, more often the latter. Microforms commonly come in three formats: microfilm (reels), aperture cards, and microfiche (flat sheets). Microcards were similar to microfiche, but printed on cardboard rather than photographic film.

In the middle of the twentieth century, microforms became popular in library communities. Periodicals, books, and other collections were converted into microform. However, digital preservation has become increasingly popular, causing the decline of microform preservation.

Microform in the twentieth century

Microform is used primarily for the preservation of documents, images, architectural or technological drawings, maps, and other forms of information. While microform was a primary preservation method in the twentieth century, digital preservation has since become increasingly popular.

Despite the increasing popularity of digital preservation, microform is still in use today in various institutions such as libraries and archives for a number of reasons.

First, filmed images are analog copies of the original, reduced in size, and users can access the information with equipment as simple as a magnifier. Digital technologies, however, require much more complex devices. Furthermore, when computer programs are updated, the original digital data may become inaccessible, and the life span of CDs, DVDs, and other digital storage devices is still uncertain. For these reasons, digital preservation requires constant data migration. Microforms, on the other hand, are expected to last about 500 years if properly preserved.

While microform has a number of advantages, it lacks some functions that digital preservation offers: search capability, quick data transfer from one location to another, massive storage capacity, and easy manipulation of data.

To remedy some of those disadvantages, data preserved in microform are often also digitized. At some institutions, users can choose to access information stored in various formats. For example, the Library of Congress in the U.S. provides copies of its collection in various formats. Users can choose and request from the following:

* 35mm microfilm
* film to paper
* original filming
* photocopying
* cartographic scanning
* digital images to CD-ROM
* all formats of photographic reproduction

Today, libraries and archives use multiple preservation media, including paper, microform, and digital media. The same information is often preserved in multiple formats. Since each format has its own advantages and disadvantages, all of these formats are likely to remain in use for the foreseeable future.





#1778 2023-05-21 00:17:06

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 46,703

Re: Miscellany

1781) Concrete

Gist

Concrete is a composite material composed of fine and coarse aggregate bonded together with a fluid cement (cement paste) that hardens (cures) over time. Concrete is the second-most-used substance in the world after water, and is the most widely used building material.

Concrete is the most commonly used man-made material on earth. It is an important construction material used extensively in buildings, bridges, roads and dams. Its uses range from structural applications, to paviours, kerbs, pipes and drains.

Concrete is a composite material, consisting mainly of Portland cement, water and aggregate (gravel, sand or rock). When these materials are mixed together, they form a workable paste which then gradually hardens over time.

Summary

Concrete, in construction, is a structural material consisting of a hard, chemically inert particulate substance, known as aggregate (usually sand and gravel), that is bonded together by cement and water.

Among the ancient Assyrians and Babylonians, the bonding substance most often used was clay. The Egyptians developed a substance more closely resembling modern concrete by using lime and gypsum as binders. Lime (calcium oxide), derived from limestone, chalk, or (where available) oyster shells, continued to be the primary pozzolanic, or cement-forming, agent until the early 1800s. In 1824 an English inventor, Joseph Aspdin, burned and ground together a mixture of limestone and clay. This mixture, called portland cement, has remained the dominant cementing agent used in concrete production.

Aggregates are generally designated as either fine (ranging in size from 0.025 to 6.5 mm [0.001 to 0.25 inch]) or coarse (from 6.5 to 38 mm [0.25 to 1.5 inch] or larger). All aggregate materials must be clean and free from admixture with soft particles or vegetable matter, because even small quantities of organic soil compounds result in chemical reactions that seriously affect the strength of the concrete.

Concrete is characterized by the type of aggregate or cement used, by the specific qualities it manifests, or by the methods used to produce it. In ordinary structural concrete, the character of the concrete is largely determined by the water-to-cement ratio. The lower the water content, all else being equal, the stronger the concrete. The mixture must have just enough water to ensure that each aggregate particle is completely surrounded by the cement paste, that the spaces between the aggregate are filled, and that the concrete is liquid enough to be poured and spread effectively. Another durability factor is the amount of cement in relation to the aggregate (expressed as a three-part ratio of cement to fine aggregate to coarse aggregate). Where especially strong concrete is needed, there will be relatively less aggregate.
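The water-to-cement rule of thumb can be made quantitative with Abrams' law, an empirical relation of the form S = A / B^(w/c). The constants below (A = 14,000 psi and B = 7 for 28-day strength) are commonly cited textbook values, used here only as an illustrative assumption, not a design rule:

```python
def abrams_strength_psi(w_over_c: float,
                        a: float = 14000.0, b: float = 7.0) -> float:
    """Estimated 28-day compressive strength (psi) from the
    water-to-cement ratio, per Abrams' empirical law S = A / B**(w/c)."""
    return a / b ** w_over_c

# Lower water content -> stronger concrete, as the text states:
# w/c = 0.4 gives roughly 6,400 psi, while w/c = 0.6 gives roughly 4,400 psi.
print(round(abrams_strength_psi(0.4)), round(abrams_strength_psi(0.6)))
```

The point of the sketch is only the monotonic trend: halving the free water relative to cement raises the estimated strength substantially.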

The strength of concrete is measured in pounds per square inch or kilograms per square centimetre of force needed to crush a sample of a given age or hardness. Concrete's strength is affected by environmental factors, especially temperature and moisture. If it is allowed to dry prematurely, it can experience unequal tensile stresses that, in an imperfectly hardened state, cannot be resisted. In the process known as curing, the concrete is kept damp for some time after pouring to slow the shrinkage that occurs as it hardens. Low temperatures also adversely affect its strength. To compensate for this, an additive such as calcium chloride is mixed in with the cement. This accelerates the setting process, which in turn generates heat sufficient to counteract moderately low temperatures. Large concrete forms that cannot be adequately covered are not poured in freezing temperatures.

Concrete that has been hardened onto embedded metal (usually steel) is called reinforced concrete, or ferroconcrete. Its invention is usually attributed to Joseph Monier, a Parisian gardener who made garden pots and tubs of concrete reinforced with iron mesh; he received a patent in 1867. The reinforcing steel, which may take the form of rods, bars, or mesh, contributes tensile strength. Plain concrete does not easily withstand stresses such as wind action, earthquakes, vibrations, and other bending forces, and is therefore unsuitable in many structural applications. In reinforced concrete, the tensile strength of steel and the compressive strength of concrete render a member capable of sustaining heavy stresses of all kinds over considerable spans. The fluidity of the concrete mix makes it possible to position the steel at or near the point where the greatest stress is anticipated.

Another innovation in masonry construction is the use of prestressed concrete. It is achieved by either pretensioning or posttensioning processes. In pretensioning, lengths of steel wire, cables, or ropes are laid in the empty mold and then stretched and anchored. After the concrete has been poured and allowed to set, the anchors are released and, as the steel seeks to return to its original length, it compresses the concrete. In the posttensioning process, the steel is run through ducts formed in the concrete. When the concrete has hardened, the steel is anchored to the exterior of the member by some sort of gripping device. By applying a measured amount of stretching force to the steel, the amount of compression transmitted to the concrete can be carefully regulated. Prestressed concrete neutralizes the stretching forces that would rupture ordinary concrete by compressing an area to the point at which no tension is experienced until the strength of the compressed section is overcome. Because it achieves strength without using heavy steel reinforcements, it has been used to great effect to build lighter, shallower, and more elegant structures such as bridges and vast roofs.

In addition to its potential for immense strength and its initial ability to adapt to virtually any form, concrete is fire resistant and has become one of the most common building materials in the world.

Details

Concrete is a composite material composed of fine and coarse aggregate bonded together with a fluid cement (cement paste) that hardens (cures) over time. Concrete is the second-most-used substance in the world after water, and is the most widely used building material. Its usage worldwide, ton for ton, is twice that of steel, wood, plastics, and aluminium combined. Globally, the ready-mix concrete industry, the largest segment of the concrete market, is projected to exceed $600 billion in revenue by 2025. This widespread use results in a number of environmental impacts. Most notably, the production process for cement produces large volumes of greenhouse gases, accounting for about 8% of global emissions. Other environmental concerns include widespread illegal sand mining, impacts on the surrounding environment such as increased surface runoff or the urban heat island effect, and potential public health implications from toxic ingredients. Significant research and development is being done to reduce these emissions or make concrete a source of carbon sequestration, and to increase the recycled and secondary raw material content of the mix to achieve a circular economy. Concrete is expected to be a key material for structures resilient to climate disasters, as well as a solution to mitigate the pollution of other industries, capturing wastes such as coal fly ash or bauxite tailings and residue.

When aggregate is mixed with dry Portland cement and water, the mixture forms a fluid slurry that is easily poured and molded into shape. The cement reacts with the water through a process called hydration, hardening over several hours to form a hard matrix that binds the materials together into a durable stone-like material with many uses. This time allows concrete not only to be cast in forms, but also to have a variety of tooled finishing processes performed. The hydration process is exothermic, which means ambient temperature plays a significant role in how long concrete takes to set. Often, additives (such as pozzolans or superplasticizers) are included in the mixture to improve the physical properties of the wet mix, delay or accelerate the curing time, or otherwise change the finished material. Most concrete is poured with reinforcing materials (such as rebar) embedded to provide tensile strength, yielding reinforced concrete.

In the past, lime-based cement binders, such as lime putty, were often used, sometimes combined with other hydraulic (water-resistant) cements, such as calcium aluminate cement, or with Portland cement to form Portland cement concrete (named for its visual resemblance to Portland stone). Many other non-cementitious types of concrete exist with other methods of binding aggregate together, including asphalt concrete with a bitumen binder, which is frequently used for road surfaces, and polymer concretes that use polymers as a binder. Concrete is distinct from mortar: whereas concrete is itself a building material, mortar is a bonding agent that typically holds bricks, tiles and other masonry units together.

Etymology

The word concrete comes from the Latin word "concretus" (meaning compact or condensed), the perfect passive participle of "concrescere", from "con-" (together) and "crescere" (to grow).

Properties

Concrete has relatively high compressive strength, but much lower tensile strength. Therefore, it is usually reinforced with materials that are strong in tension (often steel). The elasticity of concrete is relatively constant at low stress levels but starts decreasing at higher stress levels as matrix cracking develops. Concrete has a very low coefficient of thermal expansion and shrinks as it matures. All concrete structures crack to some extent, due to shrinkage and tension. Concrete that is subjected to long-duration forces is prone to creep.

Tests can be performed to ensure that the properties of concrete correspond to specifications for the application.

The ingredients affect the strength of the material. Concrete strength values are usually specified as the lower-bound compressive strength of either a cylindrical or cubic specimen as determined by standard test procedures.

The strength of concrete is dictated by its function. Very low-strength concrete, 14 MPa (2,000 psi) or less, may be used when the concrete must be lightweight. Lightweight concrete is often achieved by adding air, foams, or lightweight aggregates, with the side effect that the strength is reduced. For most routine uses, 20 to 32 MPa (2,900 to 4,600 psi) concrete is often used. 40 MPa (5,800 psi) concrete is readily commercially available as a more durable, although more expensive, option. Higher-strength concrete is often used for larger civil projects. Strengths above 40 MPa (5,800 psi) are often used for specific building elements. For example, the lower-floor columns of high-rise concrete buildings may use concrete of 80 MPa (11,600 psi) or more, to keep the size of the columns small. Bridges may use long beams of high-strength concrete to lower the number of spans required. Occasionally, other structural needs may require high-strength concrete. If a structure must be very rigid, concrete of very high strength may be specified, even much stronger than is required to bear the service loads. Strengths as high as 130 MPa (18,900 psi) have been used commercially for these reasons.
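The metric and imperial strength grades quoted above are related by a plain unit conversion (1 MPa is about 145.038 psi), with the psi figures conventionally rounded to the nearest 100. As a minimal sketch of that arithmetic:

```python
# Convert a concrete strength grade from MPa to psi,
# rounding to the nearest 100 psi as in common usage.
PSI_PER_MPA = 145.038  # 1 MPa = 145.038 psi

def mpa_to_psi(mpa: float) -> int:
    """Return the strength in psi, rounded to the nearest 100."""
    return round(mpa * PSI_PER_MPA / 100) * 100

for grade in (14, 20, 32, 40, 80, 130):
    print(f"{grade} MPa is about {mpa_to_psi(grade)} psi")
```

Running this reproduces the paired figures in the paragraph (14 MPa gives 2,000 psi, 130 MPa gives 18,900 psi, and so on).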

Energy efficiency

The cement produced for making concrete accounts for about 8% of worldwide CO2 emissions per year (compared to, e.g., global aviation at 1.9%). The two largest sources of CO2 in the cement manufacturing process are (1) the decarbonation reaction of limestone in the cement kiln (T ≈ 950 °C), and (2) the combustion of fossil fuel to reach the sintering temperature (T ≈ 1450 °C) of cement clinker in the kiln. The energy required for extracting, crushing, and mixing the raw materials (construction aggregates used in the concrete production, and also limestone and clay feeding the cement kiln) is lower. The energy requirement for transportation of ready-mix concrete is also lower because it is produced near the construction site from local resources, typically manufactured within 100 kilometers of the job site. The overall embodied energy of concrete, at roughly 1 to 1.5 megajoules per kilogram, is therefore lower than for many structural and construction materials.

Once in place, concrete offers a great energy efficiency over the lifetime of a building. Concrete walls leak air far less than those made of wood frames. Air leakage accounts for a large percentage of energy loss from a home. The thermal mass properties of concrete increase the efficiency of both residential and commercial buildings. By storing and releasing the energy needed for heating or cooling, concrete's thermal mass delivers year-round benefits by reducing temperature swings inside and minimizing heating and cooling costs. While insulation reduces energy loss through the building envelope, thermal mass uses walls to store and release energy. Modern concrete wall systems use both external insulation and thermal mass to create an energy-efficient building. Insulating concrete forms (ICFs) are hollow blocks or panels made of either insulating foam or rastra that are stacked to form the shape of the walls of a building and then filled with reinforced concrete to create the structure.

Fire safety

Concrete buildings are more resistant to fire than those constructed using steel frames, since concrete has lower heat conductivity than steel and can thus last longer under the same fire conditions. Concrete is sometimes used as a fire protection for steel frames, for the same effect as above. Concrete as a fire shield, for example Fondu fyre, can also be used in extreme environments like a missile launch pad.

Options for non-combustible construction include floors, ceilings and roofs made of cast-in-place and hollow-core precast concrete. For walls, concrete masonry technology and Insulating Concrete Forms (ICFs) are additional options. ICFs are hollow blocks or panels made of fireproof insulating foam that are stacked to form the shape of the walls of a building and then filled with reinforced concrete to create the structure.

Concrete also provides good resistance against externally applied forces such as high winds, hurricanes, and tornadoes owing to its lateral stiffness, which results in minimal horizontal movement. However, this stiffness can work against certain types of concrete structures, particularly where a relatively higher flexing structure is required to resist more extreme forces.

Earthquake safety

As discussed above, concrete is very strong in compression, but weak in tension. Larger earthquakes can generate very large shear loads on structures. These shear loads subject the structure to both tensile and compressive loads. Concrete structures without reinforcement, like other unreinforced masonry structures, can fail during severe earthquake shaking. Unreinforced masonry structures constitute one of the largest earthquake risks globally. These risks can be reduced through seismic retrofitting of at-risk buildings (e.g. school buildings in Istanbul, Turkey).



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.


#1779 2023-05-21 18:16:05

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 46,703

Re: Miscellany

1782) Silver coin

Silver coins are considered the oldest mass-produced form of coinage. Silver has been used as a coinage metal since the times of the Greeks; their silver drachmas were popular trade coins. The ancient Persians used silver coins between 612–330 BC. Before 1797, British pennies were made of silver.

As with all collectible coins, many factors determine the value of a silver coin, such as its rarity, demand, condition and the number originally minted. Ancient silver coins coveted by collectors include the Denarius and Miliarense, while more recent collectible silver coins include the Morgan Dollar and the Spanish Milled Dollar.

Other than collector's silver coins, silver bullion coins are popular among people who desire a "hedge" against currency inflation or store of value. Silver has an international currency symbol of XAG under ISO 4217.

Origins and early development of silver coins

The earliest coins in the world were minted in the kingdom of Lydia in Asia Minor around 600 BC. The coins of Lydia were made of electrum, a naturally occurring alloy of gold and silver that was available within the territory of Lydia. The concept of coinage, i.e. stamped lumps of metal of a specified weight, quickly spread to adjacent regions, such as Aegina. In these neighbouring regions, inhabited by Greeks, coins were mostly made of silver. As Greek merchants traded with Greek communities (colonies) throughout the Mediterranean Sea, the Greek coinage concept soon spread through trade to the entire Mediterranean region. These early Greek silver coins were denominated in staters or drachmas and their fractions (obols).

More or less simultaneously with the development of the Lydian and Greek coinages, a coinage system was developed independently in China. The Chinese coins, however, were a different concept and they were made of bronze.

In the Mediterranean region, the silver and other precious metal coins were later supplemented with local bronze coinages that served as small change, useful for transactions involving small sums.

The coins of the Greeks were issued by a great number of city-states, and each coin carried an indication of its place of origin. The coinage systems were not entirely the same from one place to another; however, the so-called Attic standard, Corinthian standard, Aiginetic standard and other standards defined the proper weight of each coin. Each of these standards was used in multiple places throughout the Mediterranean region.

In the 4th century BC, the Kingdom of Macedonia came to dominate the Greek world. The most powerful of its kings, Alexander the Great, eventually launched an attack on the Persian Empire, defeating and conquering it. Alexander's empire fell apart after his death in 323 BC, and the eastern Mediterranean region and western Asia (previously Persian territory) were divided into a small number of kingdoms, replacing the city-state as the principal unit of Greek government. Greek coins were now issued by kings, and only to a lesser extent by cities. Greek rulers were now minting coins as far away as Egypt and central Asia. The tetradrachm (four drachms) was a popular coin throughout the region. This era is referred to as the Hellenistic era.

While much of the Greek world was being transformed into monarchies, the Romans were expanding their control throughout the Italian Peninsula. The Romans minted their first coins during the early 3rd century BC. The earliest coins were, like other coins in the region, silver drachms with a supplementary bronze coinage. They later adopted the silver denarius as their principal coin. The denarius remained an important Roman coin until the Roman economy began to crumble. During the 3rd century AD, the antoninianus was minted in quantity. This was originally a "silver" coin with low silver content, but it developed through stages of debasement (sometimes silver-washed) into a pure bronze coin.

Although many regions ruled by Hellenistic monarchs were brought under Roman control, this did not immediately lead to a unitary monetary system throughout the Mediterranean region. Local coinage traditions in the eastern regions prevailed, while the denarius dominated the western regions. The local Greek coinages are known as Greek Imperial coins.

Apart from the Greeks and the Romans, other peoples in the Mediterranean region also issued coins. These include the Phoenicians, the Carthaginians, the Jews, the Celts and various regions in the Iberian Peninsula and the Arab Peninsula.

In regions to the east of the Roman Empire that were formerly controlled by the Hellenistic Seleucids, the Parthians created an empire in Persia. The Parthians issued a relatively stable series of silver drachms and tetradrachms. After the Parthians were overthrown by the Sassanians in 226 AD, the new dynasty of Persia began minting their distinctive thin, spread-fabric silver drachms, which became a staple of their empire right up to the Arab conquest in the 7th century AD.

Evolution

Silver coins have evolved in many different forms through the ages; a rough timeline for silver coins is as follows:

* Silver coins circulated widely as money in Europe and later the Americas from before the time of Alexander the Great until the 1960s.
* 16th–19th centuries: World silver crowns, the most famous of which is arguably the Mexican 8 reales (also known as the Spanish dollar), minted in many different parts of the world to facilitate trade. Size was more or less standardized at around 38 mm, with many minor variations in weight and size among different issuing nations. These declined towards the end of the 19th century due to the introduction of secure printing of paper currency: it was no longer convenient to carry sacks of silver coins when they could be deposited in the bank for a certificate of deposit carrying the same value. Smaller denominations existed to complement currency usability by the public.
* 1870s–1930s: Silver trade dollars, a world standard of their era in weight and purity, following the example of the older Mexican 8 reales to facilitate trade in the Far East. Examples: French Indochina piastre, British trade dollar, US trade dollar, Japanese 1 yen, Chinese 1 dollar. Smaller denominations existed to complement currency usability by the public.
* 1930s–1960s: Silver alloyed in the circulating coins of many governments of the world. This period ended when it was no longer economical for governments to keep silver as an alloying element in their circulating coins.
* 1960s–1970s: Some circulating coins still used silver in their composition, such as the 1965–70 Kennedy half dollar, which was debased from 90% silver to 40% silver. However, as silver's metal value continued to increase, resulting in additional hoarding by the public, these coins were eventually debased entirely to cupronickel-clad coinage.
* 1960s–present: Modern crown-sized commemoratives, using the weight and size of the old world crowns.
* 1980–present: Modern silver bullion coins, mainly 39 mm–42 mm in diameter, containing 1 troy ounce (31.103 g) of pure silver, regardless of purity. Smaller and bigger sizes exist mainly to complement collectible sets for the numismatics market. Some are also purchased as a means for the masses to buy a standardized store of value, which in this case is silver.
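Since a bullion coin carries a fixed 1 troy ounce (31.103 g) of fine silver regardless of purity, its gross weight is simply the fine-silver weight divided by the fineness. A minimal sketch of that arithmetic (the finenesses below are illustrative, not taken from any particular issue):

```python
TROY_OUNCE_G = 31.103  # grams of pure silver in a 1 oz bullion coin

def gross_weight(fineness: float, fine_silver_g: float = TROY_OUNCE_G) -> float:
    """Gross coin weight in grams needed to carry the given fine-silver content."""
    return fine_silver_g / fineness

# A .999 fine coin vs a .900 fine coin, both containing 1 troy oz of silver:
print(round(gross_weight(0.999), 2))  # 31.13 g
print(round(gross_weight(0.900), 2))  # 34.56 g
```

The lower the fineness, the heavier the coin must be to hold the same ounce of silver.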

Advantages of silver coinage

Silver coins were among the first coins ever used, thousands of years ago. The silver standard was used for centuries in many places of the world. There were multiple reasons for using silver instead of other materials for coins:

* Silver has market liquidity, is easily tradable, and typically has a low spread between buy and sell prices.
* Silver is easily transportable. The elements silver and gold have a high value to weight ratio.
* Silver can be divided into small units without losing significant value; precious metals can be coined from bars, and later melted down into bars again.
* A silver coin is fungible: that is, one unit or piece of the same denomination and origin is equivalent to another.
* Most silver coins have a certain standard weight, or measure, making it easy to infer the weight of a number of coins from their number.
* A silver (alloy) coin is durable and long lasting (pure silver is relatively soft and subject to wear). A silver coin is not subject to decay.
* A silver coin has intrinsic value, although the price of silver bullion coins is subject to market swings and general inflation. Silver has always been a rare metal.
* Because silver is less valuable than gold, it is more practical for small, everyday transactions.

Cultural traditions

A silver coin or coins are sometimes placed under the mast or in the keel of a ship as a good luck charm. This tradition probably originated with the Romans. It continues in modern times: for example, officers of USS New Orleans placed 33 coins heads up under her foremast and mainmast before she was launched in 1933, and USS Higgins, commissioned in 1999, had 11 coins specially selected for her mast stepping.




#1780 2023-05-22 13:15:14

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 46,703

Re: Miscellany

1783) Smoke

Summary

Smoking is the act of inhaling and exhaling the fumes of burning plant material. A variety of plant materials are smoked, but the act is most commonly associated with tobacco as smoked in a cigarette, cigar, or pipe. Tobacco contains nicotine, an alkaloid that is addictive and can have both stimulating and tranquilizing psychoactive effects. The smoking of tobacco, long practiced by American Indians, was introduced to Europe by Christopher Columbus and other explorers. Smoking soon spread to other areas and today is widely practiced around the world despite medical, social, and religious arguments against it.

Details

Smoke is a suspension of airborne particulates and gases emitted when a material undergoes combustion or pyrolysis, together with the quantity of air that is entrained or otherwise mixed into the mass. It is commonly an unwanted by-product of fires (including stoves, candles, internal combustion engines, oil lamps, and fireplaces), but may also be used for pest control (fumigation), communication (smoke signals), defensive and offensive capabilities in the military (smoke screen), cooking, or smoking (tobacco, cannabis, etc.). It is used in rituals where incense, sage, or resin is burned to produce a smell for spiritual or magical purposes. It can also be a flavoring agent and preservative.

Smoke inhalation is the primary cause of death in victims of indoor fires. The smoke kills by a combination of thermal damage, poisoning and pulmonary irritation caused by carbon monoxide, hydrogen cyanide and other combustion products.

Smoke is an aerosol (or mist) of solid particles and liquid droplets that are close to the ideal range of sizes for Mie scattering of visible light.

Chemical composition

The composition of smoke depends on the nature of the burning fuel and the conditions of combustion. Fires with a high availability of oxygen burn at a high temperature and produce a small amount of smoke; the particles are mostly composed of ash, or, with large temperature differences, of condensed aerosol of water. High temperature also leads to production of nitrogen oxides. Sulfur content yields sulfur dioxide, or in the case of incomplete combustion, hydrogen sulfide. Carbon and hydrogen are almost completely oxidized to carbon dioxide and water. Fires burning with a lack of oxygen produce a significantly wider palette of compounds, many of them toxic. Partial oxidation of carbon produces carbon monoxide, while nitrogen-containing materials can yield hydrogen cyanide, ammonia, and nitrogen oxides. Hydrogen gas can be produced instead of water. Content of halogens such as chlorine (e.g. in polyvinyl chloride or brominated flame retardants) may lead to the production of hydrogen chloride, phosgene, dioxin, and chloromethane, bromomethane and other halocarbons. Hydrogen fluoride can be formed from fluorocarbons, whether fluoropolymers subjected to fire or halocarbon fire suppression agents. Phosphorus and antimony oxides and their reaction products can be formed from some fire retardant additives, increasing smoke toxicity and corrosivity. Pyrolysis of polychlorinated biphenyls (PCB), e.g. from burning older transformer oil, and to a lower degree also of other chlorine-containing materials, can produce 2,3,7,8-tetrachlorodibenzodioxin, a potent carcinogen, and other polychlorinated dibenzodioxins. Pyrolysis of fluoropolymers, e.g. Teflon, in the presence of oxygen yields carbonyl fluoride (which hydrolyzes readily to HF and CO2); other compounds may be formed as well, e.g. carbon tetrafluoride, hexafluoropropylene, and highly toxic perfluoroisobutene (PFIB).

Pyrolysis of burning material, especially incomplete combustion or smoldering without adequate oxygen supply, also results in production of a large amount of hydrocarbons, both aliphatic (methane, ethane, ethylene, acetylene) and aromatic (benzene and its derivatives, and polycyclic aromatic hydrocarbons such as benzopyrene, studied as a carcinogen, or retene), as well as terpenes. It also results in the emission of a range of smaller oxygenated volatile organic compounds (methanol, acetic acid, hydroxyacetone, methyl acetate and ethyl formate), which are formed as combustion by-products, as well as less volatile oxygenated organic species such as phenolics, furans and furanones. Heterocyclic compounds may also be present. Heavier hydrocarbons may condense as tar; smoke with significant tar content is yellow to brown. Combustion of solid fuels can result in the emission of many hundreds to thousands of lower-volatility organic compounds in the aerosol phase. The presence of such smoke, soot, and/or brown oily deposits during a fire indicates a possible hazardous situation, as the atmosphere may be saturated with combustible pyrolysis products at concentrations above the upper flammability limit, and a sudden inrush of air can cause flashover or backdraft.

The presence of sulfur can lead to formation of gases like hydrogen sulfide, carbonyl sulfide, sulfur dioxide, carbon disulfide, and thiols; thiols in particular tend to get adsorbed on surfaces and produce a lingering odor even long after the fire. Partial oxidation of the released hydrocarbons yields a wide palette of other compounds: aldehydes (e.g. formaldehyde, acrolein, and furfural), ketones, alcohols (often aromatic, e.g. phenol, guaiacol, syringol, catechol, and cresols), and carboxylic acids (formic acid, acetic acid, etc.).

The visible particulate matter in such smokes is most commonly composed of carbon (soot). Other particulates may be composed of drops of condensed tar, or solid particles of ash. The presence of metals in the fuel yields particles of metal oxides. Particles of inorganic salts may also be formed, e.g. ammonium sulfate, ammonium nitrate, or sodium chloride. Inorganic salts present on the surface of the soot particles may make them hydrophilic. Many organic compounds, typically the aromatic hydrocarbons, may also be adsorbed on the surface of the solid particles. Metal oxides can be present when metal-containing fuels are burned, e.g. solid rocket fuels containing aluminium. Depleted uranium projectiles ignite after impacting the target, producing particles of uranium oxides. Magnetic particles, spherules of magnetite-like ferrous ferric oxide, are present in coal smoke; their increase in deposits after 1860 marks the beginning of the Industrial Revolution. (Magnetic iron oxide nanoparticles can also be produced in the smoke from meteorites burning in the atmosphere.) Magnetic remanence, recorded in the iron oxide particles, indicates the strength of Earth's magnetic field when they were cooled beyond their Curie temperature; this can be used to distinguish magnetic particles of terrestrial and meteoric origin. Fly ash is composed mainly of silica and calcium oxide. Cenospheres are present in smoke from liquid hydrocarbon fuels. Minute metal particles produced by abrasion can be present in engine smokes. Amorphous silica particles are present in smokes from burning silicones; a small proportion of silicon nitride particles can be formed in fires with insufficient oxygen. The silica particles are about 10 nm in size, clumped into 70–100 nm aggregates and further agglomerated into chains. Radioactive particles may be present due to traces of uranium, thorium, or other radionuclides in the fuel; hot particles can be present in case of fires during nuclear accidents (e.g. the Chernobyl disaster) or nuclear war.

Smoke particulates, like other aerosols, are categorized into three modes based on particle size:

* nuclei mode, with geometric mean radius between 2.5 and 20 nm, likely forming by condensation of carbon moieties.
* accumulation mode, ranging between 75 and 250 nm and formed by coagulation of nuclei mode particles
* coarse mode, with particles in micrometer range

Most of the smoke material is primarily in coarse particles. Those undergo rapid dry precipitation, and the smoke damage in more distant areas outside of the room where the fire occurs is therefore primarily mediated by the smaller particles.
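The three size modes above can be expressed as a simple classification by geometric mean radius. A minimal sketch using the ranges quoted in the text (the treatment of radii falling between the quoted ranges is an assumption for illustration, since the ranges leave gaps):

```python
def smoke_mode(radius_nm: float) -> str:
    """Classify a smoke particle by its geometric mean radius in nanometres.

    Boundaries follow the ranges quoted in the text; radii falling in the
    gaps between quoted ranges are reported as 'between modes'.
    """
    if 2.5 <= radius_nm <= 20:
        return "nuclei"          # forms by condensation of carbon moieties
    if 75 <= radius_nm <= 250:
        return "accumulation"    # forms by coagulation of nuclei-mode particles
    if radius_nm >= 1000:        # micrometre range
        return "coarse"
    return "between modes"

print(smoke_mode(10))    # nuclei
print(smoke_mode(150))   # accumulation
print(smoke_mode(2500))  # coarse
```

Coarse-mode particles precipitate quickly, which is why (as the text notes) smoke damage far from the fire is dominated by the smaller modes.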

Aerosol of particles beyond visible size is an early indicator of materials in a preignition stage of a fire.

Burning of hydrogen-rich fuel produces water vapor; this results in smoke containing droplets of water. In the absence of other color sources (nitrogen oxides, particulates, etc.), such smoke is white and cloud-like.

Smoke emissions may contain characteristic trace elements. Vanadium is present in emissions from oil-fired power plants and refineries; oil plants also emit some nickel. Coal combustion produces emissions containing aluminium, arsenic, chromium, cobalt, copper, iron, mercury, selenium, and uranium.

Traces of vanadium in high-temperature combustion products form droplets of molten vanadates. These attack the passivation layers on metals and cause high temperature corrosion, which is a concern especially for internal combustion engines. Molten sulfate and lead particulates also have such effect.

Some components of smoke are characteristic of the combustion source. Guaiacol and its derivatives are products of pyrolysis of lignin and are characteristic of wood smoke; other markers are syringol and its derivatives, and other methoxyphenols. Retene, a product of pyrolysis of conifer trees, is an indicator of forest fires. Levoglucosan is a pyrolysis product of cellulose. Hardwood and softwood smokes differ in the ratio of guaiacols to syringols. Markers for vehicle exhaust include polycyclic aromatic hydrocarbons, hopanes, steranes, and specific nitroarenes (e.g. 1-nitropyrene). The ratio of hopanes and steranes to elemental carbon can be used to distinguish between emissions of gasoline and diesel engines.

Many compounds can be associated with particulates; whether by being adsorbed on their surfaces, or by being dissolved in liquid droplets. Hydrogen chloride is well absorbed in the soot particles.

Inert particulate matter can be disturbed and entrained into the smoke. Of particular concern are particles of asbestos.

Deposited hot particles of radioactive fallout and bioaccumulated radioisotopes can be reintroduced into the atmosphere by wildfires and forest fires; this is a concern in e.g. the Zone of alienation containing contaminants from the Chernobyl disaster.

Polymers are a significant source of smoke. Aromatic side groups, e.g. in polystyrene, enhance the generation of smoke. Aromatic groups integrated into the polymer backbone produce less smoke, likely due to significant charring. Aliphatic polymers tend to generate the least smoke, and are non-self-extinguishing. However, the presence of additives can significantly increase smoke formation. Phosphorus-based and halogen-based flame retardants decrease the production of smoke. A higher degree of cross-linking between the polymer chains has such an effect too.

Visible and invisible particles of combustion

The unaided eye detects particle sizes greater than 7 µm (micrometres). Visible particles emitted from a fire are referred to as smoke. Invisible particles are generally referred to as gas or fumes. This is best illustrated when toasting bread in a toaster. As the bread heats up, the products of combustion increase in size. The fumes initially produced are invisible but become visible if the toast is burnt.

An ionization chamber type smoke detector is technically a product of combustion detector, not a smoke detector. Ionization chamber type smoke detectors detect particles of combustion that are invisible to the naked eye. This explains why they may frequently false alarm from the fumes emitted from the red-hot heating elements of a toaster, before the presence of visible smoke, yet they may fail to activate in the early, low-heat smoldering stage of a fire.

Smoke from a typical house fire contains hundreds of different chemicals and fumes. As a result, the damage caused by the smoke can often exceed that caused by the actual heat of the fire. In addition to the physical damage caused by the smoke of a fire – which manifests itself in the form of stains – is the often even harder to eliminate problem of a smoky odor. Just as there are contractors that specialize in rebuilding/repairing homes that have been damaged by fire and smoke, fabric restoration companies specialize in restoring fabrics that have been damaged in a fire.




#1781 2023-05-23 14:44:02

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 46,703

Re: Miscellany

1784) Coach (sport)

Gist

A person who teaches and trains an athlete or performer.

Details

An athletic coach is a person coaching in sport, involved in the direction, instruction, and training of a sports team or athlete.

History

The original sense of the word coach is that of a horse-drawn carriage, deriving ultimately from the Hungarian city of Kocs where such vehicles were first made. Students at the University of Oxford in the early nineteenth century used the slang word to refer to a private tutor who would "drive" a less able student through his examinations, much as a coachman drives a carriage.

Britain took the lead in upgrading the status of sports in the 19th century. For sports to become professionalized, the role of "coacher" had to become established. It gradually professionalized in the Victorian era, and the role was well established by 1914. In the First World War, military units sought out coaches to supervise physical conditioning and develop morale-building teams.

Effectiveness

John Wooden had a philosophy of coaching that encouraged planning, organization, and understanding, and held that knowledge was important but not everything in being an effective coach. Traditionally, coaching expertise or effectiveness has been measured by win–loss percentage, satisfaction of players, or years of coaching experience, but as with teacher expertise, those metrics are highly ambiguous. Coaching expertise or effectiveness describes good coaching, which looks at coaching behaviour, dispositions, education, experience, and knowledge.

A widely used definition of effective coaching is "the consistent application of integrated professional, interpersonal, and intrapersonal knowledge, to improve athletes' competence, confidence, connection, and character in specific coaching contexts".

Knowledge

Coaches require descriptive knowledge and procedural knowledge relating to all aspects of coaching, with expert coaches using tacit knowledge more freely. Teachers' knowledge has been categorized, like coaches' knowledge, with various terms being used, many categories falling under content knowledge, pedagogical knowledge, and pedagogical-content knowledge. When considering the need to build relationships with others and with athletes, interpersonal knowledge has been included; and when considering professional development, which requires the skills to learn from experience while utilizing reflective practice, intrapersonal knowledge has been included.

It is rare in professional sport for a team not to hire a former professional player, but playing and coaching have different knowledge bases. The combination of professional, interpersonal, and intrapersonal knowledge can lead to good thinking habits, maturity, wisdom, and capacity to make reasonable judgements.

Professionalism

Subject, sport, curricular, and pedagogical knowledge all fall under the category of professional coaching knowledge. This includes the "ologies" of sports science, such as sport psychology, sport biomechanics, sport nutrition, exercise physiology, motor control, critical thinking, sociology, strength and conditioning, and sporting tactics, with all the associated sub-areas of knowledge. This category is what most coach education has focused on, but it alone is not enough to be an effective coach.

Coaching is not just about sport-specific skills and education, especially when taking a holistic approach. Keeping sportspeople safe and healthy while participating is a responsibility of the coach, as is awareness of social factors such as the relative age effect.

Interpersonality

Much of coaching involves interacting with players, staff, the community, the opposition, and, in youth sport, family members. The relationships built in a sports team influence the social interactions that can affect player performance and development, fan culture, and, in professional sport, financial backing. Effective coaches have knowledge that helps them make the best of each situation in all social contexts, with the coach–athlete relationship being one of the most crucial to get right.

Excellent communication skills are imperative for coaches in order to provide their athletes with adequate skills, knowledge, and mental as well as tactical ability.

Intrapersonality

A coach's ability to improve relies on professional development and continued learning, which combine evaluation and reflective practice. Recognition of one's ethical views and dispositions is also an element of intrapersonal knowledge. Understanding oneself and the ability to use introspection and reflection are skills that take time to develop, using deliberate practice in each changing context. Coaching expertise requires this knowledge much as teaching does, as each experience can confirm or contradict a prior belief about player performance. The internal and external framing of a coach's role can affect their reflection, suggesting that perspective can be a limitation and promoting the idea of a coaching community for feedback.

Athlete outcomes

The coaching behavior assessment system has been used to show that coaching knowledge and behavior significantly influence participants' psychological profiles, affecting self-esteem, motivation, satisfaction, attitudes, perceived competence, and performance. For a coach to be seen as effective, the people they work with should be improving, and expert coaches are able to sustain that improvement over an extended period of time. The various areas of development can be categorized; this was first done with a 5 C's model (competence, confidence, connection, character, and compassion), which was later shortened to a 4 C's model by combining character and compassion.

A person's competence can relate to their sport-specific technical and tactical skills, performance skills, improved health and fitness, and overall training habits. Their confidence relates to an internal sense of overall positive self-worth. Connection consists of the positive bonds and social relationships with people inside and outside of the sporting context. Character is respect for the sport and other participants, showing good levels of morality, integrity, empathy, and responsibility.

In the coaching context, a person's competence is linked to leadership and centered on becoming a self-reliant member of a sports team and of society. These competencies have guided much of the sport psychology supporting positive youth development.

Self-determination theory suggests that an environment supporting autonomous decision making can help develop competence, confidence, and connection to others, affecting motivation. Effective coaches therefore create supportive environments while building good relationships with the people they coach.

Support staff

In professional sports, a coach is usually supported by one or more assistant coaches and a specialist team including sports scientists. The staff may include coordinators, a strength and conditioning coach, sport psychologist, physiotherapist, nutritionist, biomechanist, or sports analyst.

Context

The sport, environment, and context of coaching change, just as in teaching. It is critical to understand the differences between sports; recreational, developmental, and elite have been suggested as three categories, which some have reduced to participation and performance. These different coaching contexts alter the trajectories of long-term athlete development, affecting the prescription of training patterns and the management of social influences.

When the suggested participation and performance contexts are integrated with age, four categories have been suggested to represent the various coaching contexts: the sampling years (participation coaches for children); the recreational years (participation coaches for adolescents and adults); the specializing years (performance coaches for young adolescents); and the investment years (performance coaches for older adolescents and adults).



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Online

#1782 2023-05-24 13:43:17

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 46,703

Re: Miscellany

1785) Teacher

Gist

A person whose job is to teach, especially in a school or college.

Summary

Teaching is the profession of those who give instruction, especially in an elementary school or a secondary school or in a university.

Measured in terms of its members, teaching is the world’s largest profession. In the 21st century it was estimated that there were about 80 million teachers throughout the world. Though their roles and functions vary from country to country, the variations among teachers are generally greater within a country than they are between countries. Because the nature of the activities that constitute teaching depends more on the age of the persons being taught than on any other one thing, it is useful to recognize three subgroups of teachers: primary-school, or elementary-school, teachers; secondary-school teachers; and university teachers. Elementary-school teachers are by far the most numerous worldwide, making up nearly half of all teachers in some developed countries and three-fourths or more in developing countries. Teachers at the university level are the smallest group.

The entire teaching corps, wherever its members may be located, shares most of the criteria of a profession, namely (1) a process of formal training, (2) a body of specialized knowledge, (3) a procedure for certifying, or validating, membership in the profession, and (4) a set of standards of performance—intellectual, practical, and ethical—that is defined and enforced by members of the profession. Teaching young children and even adolescents could hardly have been called a profession anywhere in the world before the 20th century. It was instead an art or a craft in which the relatively young and untrained women and men who held most of the teaching positions “kept school” or “heard lessons” because they had been better-than-average pupils themselves. They had learned the art solely by observing and imitating their own teachers. Only university professors and possibly a few teachers of elite secondary schools would have merited being called members of a profession in the sense that medical doctors, lawyers, or priests were professionals; in some countries even today primary-school teachers may accurately be described as semiprofessionals. The dividing line is imprecise. It is useful, therefore, to consider the following questions: (1) What is the status of the profession? (2) What kinds of work are done? (3) How is the profession organized?

Details

A teacher, also called a schoolteacher or formally an educator, is a person who helps students to acquire knowledge, competence, or virtue, via the practice of teaching.

Informally the role of teacher may be taken on by anyone (e.g. when showing a colleague how to perform a specific task). In some countries, teaching young people of school age may be carried out in an informal setting, such as within the family (homeschooling), rather than in a formal setting such as a school or college. Some other professions may involve a significant amount of teaching (e.g. youth worker, pastor).

In most countries, formal teaching of students is usually carried out by paid professional teachers. This article focuses on those who are employed, as their main role, to teach others in a formal education context, such as at a school or other place of initial formal education or training.

Duties and functions

A teacher's role may vary among cultures.

Teachers may provide instruction in literacy and numeracy, craftsmanship or vocational training, the arts, religion, civics, community roles, or life skills.

Formal teaching tasks include preparing lessons according to agreed curricula, giving lessons, and assessing pupil progress.

A teacher's professional duties may extend beyond formal teaching. Outside of the classroom teachers may accompany students on field trips, supervise study halls, help with the organization of school functions, and serve as supervisors for extracurricular activities. They also have the legal duty to protect students from harm, such as that which may result from bullying, sexual harassment, racism or abuse. In some education systems, teachers may be responsible for student discipline.

Competences and qualities required by teachers

Teaching is a highly complex activity. This is partially because teaching is a social practice that takes place in a specific context (time, place, culture, socio-political-economic situation, etc.) and is therefore shaped by the values of that specific context. Factors that influence what is expected (or required) of teachers include history and tradition, social views about the purpose of education, accepted theories about learning, etc.

Competences

The competences required by a teacher are affected by the different ways in which the role is understood around the world. Broadly, there seem to be four models:

* the teacher as manager of instruction;
* the teacher as caring person;
* the teacher as expert learner; and
* the teacher as cultural and civic person.

The Organisation for Economic Co-operation and Development has argued that it is necessary to develop a shared definition of the skills and knowledge required by teachers, in order to guide teachers' career-long education and professional development. Some evidence-based international discussions have tried to reach such a common understanding. For example, the European Union has identified three broad areas of competences that teachers require:

* Working with others
* Working with knowledge, technology and information, and
* Working in and with society.

Scholarly consensus is emerging that what is required of teachers can be grouped under three headings:

* knowledge (such as: the subject matter itself and knowledge about how to teach it, curricular knowledge, knowledge about the educational sciences, psychology, assessment etc.)
* craft skills (such as lesson planning, using teaching technologies, managing students and groups, monitoring and assessing learning etc.) and
* dispositions (such as essential values and attitudes, beliefs and commitment).

Qualities:

Enthusiasm

It has been found that teachers who show enthusiasm towards the course materials and students can create a positive learning experience. These teachers do not teach by rote but attempt to invigorate their teaching of the course materials every day. Teachers who cover the same curriculum repeatedly may find it challenging to maintain their enthusiasm, lest their boredom with the content bore their students in turn. Enthusiastic teachers are rated higher by their students than teachers who do not show much enthusiasm for the course materials.

Teachers who exhibit enthusiasm are more likely to have engaged, interested, and energetic students who are curious about learning the subject matter. Recent research has found a correlation between teacher enthusiasm and students' intrinsic motivation to learn and vitality in the classroom. Controlled experimental studies exploring the intrinsic motivation of college students have shown that nonverbal expressions of enthusiasm, such as demonstrative gesturing, varied dramatic movements, and emotional facial expressions, result in college students reporting higher levels of intrinsic motivation to learn. But even though a teacher's enthusiasm has been shown to improve motivation and increase task engagement, it does not necessarily improve learning outcomes or memory for the material.

There are various mechanisms by which teacher enthusiasm may facilitate higher levels of intrinsic motivation. Teacher enthusiasm may contribute to a classroom atmosphere of energy and enthusiasm which feeds student interest and excitement in learning the subject matter. Enthusiastic teachers may also lead to students becoming more self-determined in their own learning process. The concept of mere exposure indicates that the teacher's enthusiasm may contribute to the student's expectations about intrinsic motivation in the context of learning. Also, enthusiasm may act as a "motivational embellishment", increasing a student's interest by the variety, novelty, and surprise of the enthusiastic teacher's presentation of the material. Finally, the concept of emotional contagion may also apply: students may become more intrinsically motivated by catching onto the enthusiasm and energy of the teacher.

Interaction with learners

Research shows that student motivation and attitudes towards school are closely linked to student-teacher relationships. Enthusiastic teachers are particularly good at creating beneficial relations with their students. Their ability to create effective learning environments that foster student achievement depends on the kind of relationship they build with their students. Useful teacher-to-student interactions are crucial in linking academic success with personal achievement. Here, personal success is a student's internal goal of improving themselves, whereas academic success includes the goals they receive from their superior. A teacher must guide their student in aligning their personal goals with their academic goals. Students who receive this positive influence show stronger self-confidence and greater personal and academic success than those without these teacher interactions.

Students are likely to build stronger relations with teachers who are friendly and supportive and will show more interest in courses taught by these teachers. Teachers that spend more time interacting and working directly with students are perceived as supportive and effective teachers. Effective teachers have been shown to invite student participation and decision making, allow humor into their classroom, and demonstrate a willingness to play.

Teaching qualifications

In many countries, a person who wishes to become a teacher must first obtain specified professional qualifications or credentials from a university or college. These professional qualifications may include the study of pedagogy, the science of teaching. Teachers, like other professionals, may have to, or choose to, continue their education after they qualify, a process known as continuing professional development.

The issue of teacher qualifications is linked to the status of the profession. In some societies, teachers enjoy a status on a par with physicians, lawyers, engineers, and accountants; in others, the status of the profession is low. In the twentieth century, many intelligent women were unable to get jobs in corporations or governments, so many chose teaching as a default profession. As women become more welcome in corporations and governments today, it may be more difficult to attract qualified teachers in the future.

Teachers are often required to undergo a course of initial education at a college of education to ensure that they possess the necessary knowledge and competences and adhere to relevant codes of ethics.

There are a variety of bodies designed to instill, preserve and update the knowledge and professional standing of teachers. Around the world many teachers' colleges exist; they may be controlled by government or by the teaching profession itself.

They are generally established to serve and protect the public interest through certifying, governing, quality controlling, and enforcing standards of practice for the teaching profession.

Professional standards

The functions of the teachers' colleges may include setting out clear standards of practice, providing for the ongoing education of teachers, investigating complaints involving members, conducting hearings into allegations of professional misconduct and taking appropriate disciplinary action and accrediting teacher education programs. In many situations teachers in publicly funded schools must be members in good standing with the college, and private schools may also require their teachers to be college members. In other areas these roles may belong to the State Board of Education, the Superintendent of Public Instruction, the State Education Agency or other governmental bodies. In still other areas Teaching Unions may be responsible for some or all of these duties.

Professional misconduct

Misconduct by teachers, especially sexual misconduct, has been getting increased scrutiny from the media and the courts. A study by the American Association of University Women reported that 9.6% of students in the United States claim to have received unwanted sexual attention from an adult associated with education; be they a volunteer, bus driver, teacher, administrator or other adult; sometime during their educational career.

A study in England showed a 0.3% prevalence of sexual abuse by any professional, a group that included priests, religious leaders, and case workers as well as teachers. It is important to note, however, that this British study is the only one of its kind and consisted of "a random ... probability sample of 2,869 young people between the ages of 18 and 24 in a computer-assisted study" and that the questions referred to "sexual abuse with a professional," not necessarily a teacher. It is therefore logical to conclude that information on the percentage of abuses by teachers in the United Kingdom is not explicitly available and therefore not necessarily reliable. The AAUW study, however, posed questions about fourteen types of sexual harassment and various degrees of frequency and included only abuses by teachers. "The sample was drawn from a list of 80,000 schools to create a stratified two-stage sample design of 2,065 8th to 11th grade students". Its reliability was gauged at 95% with a 4% margin of error.

In the United States especially, several high-profile cases such as Debra LaFave, Pamela Rogers Turner, and Mary Kay Letourneau have caused increased scrutiny on teacher misconduct.

Chris Keates, the general secretary of the National Association of Schoolmasters Union of Women Teachers, said that teachers who have sex with pupils over the age of consent should not be placed on the sex offenders register and that prosecution for statutory rape "is a real anomaly in the law that we are concerned about." This has led to outrage from child protection and parental rights groups. Fear of being labelled a pedophile or hebephile has led several men who enjoy teaching to avoid the profession, which in some jurisdictions has reportedly led to a shortage of male teachers.




#1783 2023-05-25 13:09:42

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 46,703

Re: Miscellany

1786) Heaven

Gist

The abode of God, the angels, and the spirits of the righteous after death; the place or state of existence of the blessed after the mortal life.

Summary

Heaven, or the heavens, is a common religious cosmological or transcendent supernatural place where beings such as deities, angels, souls, saints, or venerated ancestors are said to originate, be enthroned, or reside. According to the beliefs of some religions, heavenly beings can descend to Earth or incarnate and earthly beings can ascend to Heaven in the afterlife or, in exceptional cases, enter Heaven without dying.

Heaven is often described as a "highest place", the holiest place, a Paradise, in contrast to hell or the Underworld or the "low places" and universally or conditionally accessible by earthly beings according to various standards of divinity, goodness, piety, faith, or other virtues or right beliefs or simply divine will. Some believe in the possibility of a heaven on Earth in a world to come.

Another belief is in an axis mundi or world tree which connects the heavens, the terrestrial world, and the underworld. In Indian religions, heaven is considered as Svarga loka, and the soul is again subjected to rebirth in different living forms according to its karma. This cycle can be broken after a soul achieves Moksha or Nirvana. Any place of existence, either of humans, souls or deities, outside the tangible world (Heaven, Hell, or other) is referred to as the otherworld.

At least in the Abrahamic faiths of Christianity, Islam, and some schools of Judaism, as well as Zoroastrianism, heaven is the realm of Afterlife where good actions in the previous life are rewarded for eternity (hell being the place where bad behavior is punished).

Details

Heaven, in many religions, is the abode of God or the gods, as well as of angels, deified humans, the blessed dead, and other celestial beings. It is often conceived as an expanse that overarches the earth, stretching overhead like a canopy, dome, or vault and encompassing the sky and upper atmosphere; the Sun, Moon, and stars; and the transcendent realm beyond.

Overview

In most cultures, heaven is synonymous with order: it contains the blueprints for creation, the mandate by which earthly rulers govern, and the standards by which to measure beauty, goodness, and truth. In religious thought and poetic fancy, heaven is not only a place but also a state of being. As such it is characterized negatively as freedom from hunger, thirst, pain, deprivation, disease, ignorance, and strife and positively as complete contentment, perfect knowledge, everlasting rest, ineffable peace, communion with God, and rapturous joy. Heaven is also understood as the reward for a life well lived, the fulfillment of the heart’s deepest desire, and the ultimate reference point for all human striving and hope.

In ancient cosmologies, heaven is situated in the extreme west or east, on a faraway island or mountaintop, or in astral realms. Plurality and even redundancy is the rule, as multiple heavens overlap with earthly paradises and astronomical spheres. Many myths of the origin of heaven recount that in the beginning heaven and earth were closely wedded; the present condition of estrangement, marked by the withdrawal of the gods and by suffering, sin, and death, is the result of a catastrophic event for which human ancestors or rival heavenly powers are to blame. A desire to recapture lost intimacy with heaven suffuses the literature of the world’s religions, but there is enormous variety in how different traditions conceive of the longed-for realization of human hopes.

World mythology abounds in stories of attempts to invade heaven, such as the flight of Icarus, the Hindu legend of the conquest of heaven by the asura (demon) king Bali, and countless variations on the story of Babel, a man-made tower reaching to heaven (Genesis 11:1–9). Such attempts almost always come to a bad end. Shamans, prophets, kings, and visionaries may visit heaven by way of a dream, trance, or extraordinary summons, but the usual route is by death. Most cultures see the road to heaven as fraught with dangers and trials, such as bridges that narrow to a razor’s edge, rivers filled with waters of death, and hostile powers who seek to block the soul’s ascent. All such ordeals are open to moral and psychological interpretation. In world literature the drama of the perilous journey to heaven has appeared in many forms, including epic, allegory, satire, science fiction, and fantasy. Notable examples are Dante’s masterpiece, The Divine Comedy (early 14th century), the 16th-century Chinese comic novel Xiyouji (“The Journey to the West”), John Bunyan’s The Pilgrim’s Progress (1678), Mark Twain’s Extract from Captain Stormfield’s Visit to Heaven (1909), and C.S. Lewis’s Perelandra (1943).

Earning a place in heaven usually requires meritorious activity, such as almsgiving, caring for the sick, performing sacrifices or other sacramental rites pleasing to the heavenly powers, exhibiting heroic virtue as a warrior, ascetic, or martyr, or enduring great suffering. Some traditions believe that merit can be transferred by means of pious actions performed on behalf of the dead. Yet many take the view that heaven is attainable only as the free gift of a divine being. Adherents of Pure Land Buddhism, for example, rely upon the vow of Amitabha Buddha to bring to Sukhavati (the Pure Land, or Western Paradise) all who sincerely call upon his name; the Lutheran trusts in justification by faith alone; and popular piety in general looks to the protection of powerful heavenly patrons.

Descriptions of heaven vie for superlatives, for here everything must be the best imaginable: from the delectable boar that feeds the crowds of Valhalla (the heavenly abode in Norse mythology), boiled every day and coming alive again every evening, to the perfumed paradise extolled by the Buddhist Sukhavati-vyuha sutras, which coruscates with precious stones, peacocks, parasols, and lotus blossoms, never knowing extremes of climate nor discord of any kind. Heaven may be characterized as a garden (nature perfected) or a city (society perfected) or both at once; it may be a realm of mystical tranquility or of heightened activity. In broad strokes the imagery is universal, with light being the privileged symbol; yet the details are often culture-specific, with the occupations most valued by a given society receiving pride of place, as in the hunters’ paradises of Australian Aboriginal mythology, the Platonic heaven for contemplatives, the bureaucratic heavens of imperial China, and the rabbinic Heavenly Academy.

Heaven in world religions and history:

Ancient Mesopotamia

Creation myths of ancient Mesopotamia typically begin with the separation of heaven and earth, giving rise to a three-story universe that includes heaven above, earth in the middle, and the underworld below. The high gods reign in the heavens as an assembly or council. Earth is the realm of mortal humans, whose purpose is to serve the gods by providing them with sacred dwellings, food, and tribute; it is also populated by minor gods and demons who play a role in magic. At death human beings descend to the underworld, a dreary land of no return; only a few exceptional human heroes are permitted to enter heaven.

In the epic of Gilgamesh, a cycle of Sumerian and Akkadian legends about the king of the Mesopotamian city-state Uruk, Gilgamesh searches unsuccessfully for immortality only to have the sober truth of human mortality brought home: “When the gods created mankind, death for mankind they allotted, life in their own hands retaining.” Good relations with heaven were nonetheless considered vital to the well-being of the living. The Gilgamesh epic suggests that the social order of Uruk was threatened not only by Gilgamesh’s unrealistic ambition to conquer death but also by his unwillingness to enter into sacred marriage with the goddess Ishtar (Sumerian: Inanna), whose temple was the centre of civic and cultic life. Concern for good relations with heaven is reflected as well in the massive body of Mesopotamian texts devoted to celestial observation, astronomical theory, and astrological lore, all of which served to discern and cope with the perceived influence of heaven on human affairs.

Egypt

An even greater emphasis on the ruler’s role as guarantor of right relations with heaven characterized ancient Egyptian civilization throughout its 3,000-year history. The king shared with the sun god Re and the sky god Horus responsibility for defending order against chaos, and he was granted the privilege of enjoying renewable life as part of the great cosmic circuit. That this renewable life depended on massive cultic support is evident from monumental tombs, grave goods, and elaborate mortuary rituals.

Heaven was visualized mythically as the divine cow on whose back the sun god withdrew from earth; as the falcon-headed god Horus whose glittering eyes formed the Sun and Moon; or as the goddess Nut arching over the earth. A happy afterlife, however, could take place in any number of locations: in the fertile Field of Reeds, as a passenger in the solar bark, in the extreme west or east, or among the circumpolar stars. The Pyramid Texts envision a happy afterlife for royalty alone; the dead king is identified with Osiris as well as with the triumphant rising sun. The Coffin Texts and the Book of the Dead, in which the afterlife is to some degree “democratized,” identify all the deceased with Osiris in his capacity as judge and ruler of the underworld.

Judaism

True to its Middle Eastern origins, ancient Judaism at first insisted on the separateness of heaven and earth and had little to say about the prospect of a heavenly afterlife: “The heavens are the LORD’s heavens, but the earth he has given to human beings” (Psalm 115:16). Heaven (in Hebrew, the plural šāmayim) was a vast realm above the earth, supported by a hard firmament of dazzling precious stone, which kept the upper waters from mingling with the waters beneath. The Sun, Moon, and stars were set in the firmament, and windows could open to let down rain, snow, hail, or dew from the celestial storehouses. God, the maker of heaven and earth, was enthroned in the highest reach of heaven; from there he intervened in the affairs of his creatures and revealed through Moses and the prophets his sovereignty, providential care, and cultic and moral demands. Surrounding the divine throne was a heavenly host of solar, astral, and angelic beings. These celestial beings shared many attributes with the gods and goddesses of Canaanite and Mesopotamian polytheism, but the emerging monotheism of the Hebrew Scriptures demanded exclusive commitment to one God, referred to as The Lord, to whom all powers in heaven and on earth were subject.

In ancient Judaism, as in other Middle Eastern religions of the period, the cosmos had a three-story structure. God dwelt in heaven and was also present in the Temple of Jerusalem, his palace on earth. The underworld (Hebrew: She’ōl), to which human beings were consigned at death, was seemingly outside God’s jurisdiction. This picture changed dramatically, however, in response to the Babylonian Exile and the destruction of the First Temple in 586 BCE, as the conviction began to take hold that there must be no limit to God’s power to vindicate his people even after death. During the postexilic period, the experience of foreign rule intensified longing for future deliverance, encouraged speculation influenced by Persian and Greco-Roman models of cosmology, angelology, and immortality, and produced martyrs whose claim on a heavenly afterlife seemed particularly strong. Thus the Book of Daniel, considered the latest composition in the Hebrew Bible, contains this prophecy:

Many of those who sleep in the dust of the earth shall awake, some to everlasting life, and some to shame and everlasting contempt. Those who are wise shall shine like the brightness of the sky, and those who lead many to righteousness, like the stars for ever and ever. (12:2–3)

While belief in a heavenly afterlife became widespread in the Hellenistic Age (323–30 BCE), no single model predominated, but rather a profusion of images and schemes, including resurrection of the dead, immortality of the soul, and transformation into an angel or star. Visionary journeys through the heavens (conceived as a hierarchy of spheres) became a staple of apocalyptic literature, and Jewish mystics produced a vast theosophical lore concerning heavenly palaces, angelic powers, and the dimensions of God’s body. Traces of this heaven mysticism can still be found in the Jewish prayer book (siddur).

Classical Rabbinic Judaism, which emerged after the destruction of the Second Temple (70 CE) and established the main lines on which Jewish eschatology would develop, admitted a plurality of images for heaven; the expression ʿolam ha-ba (“the world to come”) refers both to the messianic age and to the heavenly estate to which the righteous ascend at death. After death, righteous souls await the resurrection in the heavenly Garden of Eden or hidden under the divine throne. Jewish liturgy piles praise upon praise in exaltation of the name and kingship of God, who “rides the highest heavens,” blesses his people eternally, judges, redeems, and “maintains His faith to those asleep in the dust.” The Sabbath is understood to be a preview of heaven, anticipating the wedding feast at the end of time, when the work of creation will be complete and the captivity of Zion will end.

Christianity

Christianity began as one of many Jewish apocalyptic and reform movements active in Palestine in the 1st century CE. These groups shared an intense conviction that the new heavens and new earth prophesied by Isaiah (Isaiah 65:17) were close at hand. They believed that history would soon find its consummation in a world perfected, when the nations would be judged, the elect redeemed, and Israel restored.

Jewish and Christian conceptions of heaven developed side by side, drawing from shared biblical and Greco-Roman sources. The liturgy of Temple, synagogue, and eucharistic service informed images of heaven, for in worship the community symbolically ascends to the heavenly Jerusalem, a realm of perpetual adoration and intercession for the needs of the world, where angels never cease to sing “Holy, holy, holy is the LORD of hosts” (Isaiah 6:3).

Christians believe that the estrangement between heaven and earth ended with the Incarnation, Passion, Resurrection, and Ascension of Christ: “in Christ God was reconciling the world to himself” (2 Corinthians 5:19). Sharing in Christ’s deathless divine life are the members of his mystical body, the church (Greek: ekklēsia), which is the communion of saints both living and dead. The Virgin Mary, regarded as Queen of Heaven, tirelessly intercedes for the faithful, including sinners who seek her protection.

Traditional Christian theology teaches that communion with God is the chief end for which human beings were made and that those who die in a state of grace are immediately (or after a period of purification) admitted to the bliss of heaven, where they become like God (1 John 3:2), see God face to face (1 Corinthians 13:12), and see all things in God. With the resurrection of the dead, beatitude will embrace the whole person—body, soul, and spirit. The social dimension of this beatitude is expressed in the last book of the New Testament, Revelation to John, with its vision of the blessed multitudes adoring God, who dwells in their midst, in a city of bejeweled splendour (21–22). Worship, fellowship, and creative pursuits all form part of the composite Christian picture of heaven, but the emphasis on domestic happiness and never-ending spiritual progress in heaven is largely a modern innovation.

Islam

The Qurʾān, which according to Islamic tradition has its original in heaven, frequently calls attention to the heavens as a sign of God’s sovereignty, justice, and mercy. When the earth was just formed and the sky a mere vapour, God commanded them to join together, and they willingly submitted (sura 41:11–12). God then completed his creation by forming the sky into seven firmaments, adorning the lower firmament with lights, and assigning to everything its just measure. The seven heavens and the earth perpetually celebrate God’s praise (sura 17:44) and by their majestic design provide evidence that God indeed has the power to raise the dead and to judge them on the last day.

Before the resurrection, the souls of the dead are thought to dwell in an intermediate state, experiencing a preview of their future condition of misery or bliss. On the Day of Judgment, heaven will be split asunder, the mountains will crumble to dust, the earth will give up its dead, and each person will undergo a final test. The righteous, with faces beaming, will pass the test easily, crossing over hell unscathed. In gardens of bliss they will recline on royal couches, clothed in fine silk and shaded by fruit trees of every description. Immortal youths will serve them cool drinks and delicacies, and ever-virgin companions with lustrous eyes will join them. They will also be reunited with their faithful offspring, and peace will reign.

Being in God’s presence is the chief delight of paradise, according to Muslim philosophers and mystics, and the greater one’s degree of blessedness, the closer one will be to God. Accounts of the Prophet Muhammad’s ascent through the seven heavens to the very throne of God are taken as revealing his uniquely favoured status. Although Sufis (Islamic mystics) speak of an ecstatic “annihilation” (fanāʾ) in the presence of God, the emphasis within mainstream Islamic traditions on God’s transcendence has discouraged the development of eschatology focusing on divinization or beatific union with God.

Hinduism

In Hinduism (a comparatively modern term that covers manifold religious practices and worldviews of the peoples of South Asia), heaven is the perennial object of myth, ritual practice, and philosophical speculation. The most ancient religious texts, the Vedas (1500–1200 BCE), depict heaven as the domain of sky gods such as Indra, the thunder god; Surya, the Sun; Agni, the sacrificial fire; Soma, the heavenly elixir (embodied on earth as an intoxicating plant); Varuna, the overseer of cosmic order; and Yama, the first human to die. Ritual sacrifice was deemed essential for world maintenance, and funeral rites ensured that the spirit of the deceased would ascend to the “world of the fathers” on high. Rebirth in heaven depended upon having male householder descendants to sponsor the necessary rites.

During the period of the early Upanishads (800–500 BCE), a group of itinerant sages turned from the sacrificial ritualism of Vedic tradition to develop the rudiments of classical Hindu soteriology (the theological doctrine of salvation). These sages taught that the entire phenomenal world is caught up in an endless cycle of birth and death (samsara) propelled by desire. A person’s station in life is determined by actions performed in previous lives (karma). To be reborn in heaven (svarga) is pleasant but impermanent; even the gods must eventually die. The ultimate goal is to escape this perishing life and attain union with the infinite spirit (brahman).

The Upanishadic path of liberation required practicing spiritual disciplines beyond the capacity of ordinary householders. But by the beginning of the 2nd millennium, the mystical asceticism of the Upanishads had been absorbed into the great stream of devotional Hinduism. The result was the appearance of new forms of religious literature, such as the Bhagavadgita and the Puranas, in which salvation takes the form of personal union with the divine, thus opening a broad way to heaven (or, rather, to the heaven beyond all heavens) to those who entrust themselves to the protection of a deity.

Buddhism

Buddhism began in the early 5th century BCE in northeast India as a renunciant movement seeking liberation from samsara through knowledge and spiritual discipline. The Buddha Gotama, the founder of the religion, is the paradigm of an enlightened being who has entered parinibbana (complete nibbana [Sanskrit nirvana]), the state in which the causes of all future existence have been eliminated. Classical Buddhist cosmology describes six realms of rebirth within an incalculably vast system of worlds and eons. One may be reborn as an animal, a human, a hungry ghost, a demigod, a denizen of one of the horrific hell realms, or a god in one of the pleasurable heaven realms. All of these births partake of the impermanence that characterizes samsara. Thus, heaven, in the sense of a celestial realm, is not the goal of spiritual practice. Yet Buddhist tradition speaks of celestial beings of limitless wisdom and compassion, such as Amitabha Buddha and the bodhisattva Avalokiteshvara, who have dedicated their abundant merit to the cultivation of heavenlike Pure Lands for the salvation of sentient beings. Devotees reborn in these paradisiacal realms find there the ideal conditions for attaining enlightenment.

Other heavens

Most, if not all, cultures possess multiple images of heaven and paradise, which coexist in unsystematic profusion. Mount Olympus, the Elysian Fields, and the Isles of the Blessed in Greek and Roman mythology constitute just one example. In Chinese civilizations, conformity to the “way of heaven” (tiandao) is a perennial ideal that appears in a variety of traditions. It is evident in ancient practices of sacrifice and divination, in Confucian teachings on discerning the will of heaven (tianming; literally “heaven’s mandate”) within the nexus of social relations, in Daoist teachings on harmonizing with the way of heaven as manifest in nature, in popular Daoist legends of the Ba Xian (“Eight Immortals”), who travel to heaven by means of alchemy and yoga, and also in innumerable Chinese Buddhist and sectarian movements dedicated to the cult of heaven.

In some traditions, heaven seems to recede into the background. Native American cultures, for example, are oriented toward the totality of earth, sky, and the four directions rather than toward heaven alone. Although heaven is not typically the abode of the blessed dead in Native American mythology, the stars, Sun, Moon, clouds, mountaintops, and sky-dwelling creators figure significantly. The Christian-influenced prophetic visions characteristic of revitalization movements such as the 19th-century Ghost Dance and the religion of Handsome Lake are fervently millenarian, proclaiming the advent of an eschatological paradise to be accompanied by the return of the dead and the restoration of tribal life.

New models of heaven in the modern West have been influenced by ideals of progress, evolution, social equality, and domestic tranquility. The 19th-century Spiritualist movement, adapting the doctrines of the Swedish scientist and theologian Emanuel Swedenborg and of the German physician Franz Anton Mesmer, mixed clairvoyance with science to describe heavenly spheres, radiant with luminiferous ether, where the spirits worked for causes such as abolition, temperance, feminism, and socialism and pursued opportunities for self-improvement. Utopian communities sought to bring this progressive heaven to practical realization on earth. Consolation literature, epitomized in the United States by Elizabeth Stuart Phelps’s novel The Gates Ajar (1868), portrayed heaven as an intimate realm of family reunions.

Belief in heaven persists despite age-old criticisms: that it is an irrational, wish-fulfilling fantasy, a symptom of alienation, and an evasion of responsibility for bettering the real world. Defenders of the doctrine insist, on the contrary, that belief in heaven has a morally invigorating effect, endowing life with meaning and direction and inspiring deeds of heroic self-sacrifice. Whatever be the case, familiarity with the iconography of heaven is indispensable to understanding Western literature and art, including the poetry of Dante, Edmund Spenser, William Shakespeare, John Milton, John Donne, George Herbert, Henry Vaughan, Thomas Traherne, John Bunyan, and William Blake, as well as the paintings of Fra Angelico, Luca Signorelli, Sandro Botticelli, Correggio, Jan van Eyck, and Stefan Lochner. Much the same can be said for other cultures: in every historical period, depictions of heaven provide a revealing index of what a society regards as the highest good. Hence the study of heaven is, in its broadest application, the study of ultimate human ideals.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Online

#1784 2023-05-26 00:37:14

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 46,703

Re: Miscellany

1787) Walking

Gist

Walking isn’t always the quickest way to make it from point A to point B. But it’s not just about quick transport: The benefits of walking for your body and mind are numerous.

Summary

The Physical Activity Guidelines for Americans recommend that adults get at least 150 minutes of moderate-intensity aerobic physical activity or 75 minutes of vigorous-intensity physical activity, or an equivalent combination each week. The guidelines also recommend that children and adolescents be active for at least 60 minutes every day. Following these guidelines can contribute to overall health, and decrease the risk of chronic diseases such as heart disease, cancer or diabetes.

Walking is a great way to get the physical activity needed to obtain health benefits. Walking does not require any special skills. It also does not require a gym membership or expensive equipment. A single bout of moderate-to-vigorous physical activity can improve sleep, memory, and the ability to think and learn. It also reduces anxiety symptoms.

Details

Walking is an activity that ranges from a competitive sport, usually known as race walking, to a primary and popular form of outdoor recreation and mild aerobic exercise.

Racewalking

The technique followed in the track-and-field sport of racewalking requires that a competitor’s advancing foot touch the ground before the rear foot leaves the ground, and for this reason the sport is sometimes known as heel-and-toe racing. In all countries in the world—with the exception of England—and in the Olympic Games the advancing leg must also be straightened briefly while that foot is in contact with the ground.

Walking as a competitive sport dates from the latter half of the 19th century, although stories of individual walking feats were recorded much earlier. A 7-mile (11-km) walking event was introduced by the Amateur Athletic Club of England at its championships in 1866. During the 1870s and ’80s, professional races were held indoors in New York City in which athletes competed around the clock but were permitted to eat, rest, or nap. The winner was the contestant who covered the greatest distance in six days.

Walking races of 10 miles and 3,500 metres were added to the men’s Olympic program in 1908. Since 1956, however, the Olympic distances have been 20 and 50 km. A women’s 10-km walk was introduced at the 1992 Games; at the 2000 Games the women’s walking event was extended to 20 km.

Recreational and fitness walking

Organized noncompetitive walking is extremely popular in the United States and Europe. Millions participate for the relaxation and exercise it offers. Walking for recreation or fitness is differentiated from hiking by its shorter distances, less challenging settings, and the lack of need for specialized equipment. Walking can simply be an unorganized meander around a local park or trail for relaxation or a daily regimen of several miles that is undertaken for health benefits.

The shoes needed for comfortable recreational walking vary by conditions and the type of walk undertaken. While distance walkers often use conventional hiking boots, particularly in colder weather, shorter-distance recreational walking can comfortably be done in lighter shoes similar to those worn by runners.

Walking is the preferred exercise of a significant segment of the population of North America and Europe. Its health benefits are well documented, ranging from better overall cardiovascular health to the promotion of healthy weight loss, stress relief, and a reduction in the risk of some forms of cancer by up to 40 percent. One major attraction of fitness walking is its less strenuous nature, which reduces the likelihood of the types of injuries more commonly seen in such high-impact sports as running. Fitness walking is an ideal form of exercise for senior citizens and others who need to exercise but prefer a more gentle means of doing so.

Recreational walkers often utilize short sections of trails designed for long-distance hikers. However, not all walking trips are short. Some organized walks last for days and cover distances in excess of 50 miles. Through the efforts of organized walking associations, recreational walkers have established a sizeable trail network of their own in cities and rural areas. In the United States, the American Volkssport Federation is an umbrella organization made up of more than 500 local walking clubs nationwide that promote recreational walking and organize group walks or rambles. In Canada, the Canadian Volkssport Federation is home to another 100 local walking clubs that promote recreational walking in their areas. In Great Britain, the British Walking Federation oversees approximately 40 walking clubs nationwide.

Additional Information

Walking (also known as ambulation) is one of the main gaits of terrestrial locomotion among legged animals. Walking is typically slower than running and other gaits. Walking is defined by an 'inverted pendulum' gait in which the body vaults over the stiff limb or limbs with each step. This applies regardless of the usable number of limbs—even arthropods, with six, eight, or more limbs, walk.

Difference from running

The word walk is descended from the Old English wealcan, "to roll". In humans and other bipeds, walking is generally distinguished from running in that only one foot at a time leaves contact with the ground and there is a period of double support. In contrast, running begins when both feet are off the ground with each step. This distinction has the status of a formal requirement in competitive walking events. For quadrupedal species there are numerous gaits that may be termed walking or running, and distinctions based on the presence or absence of a suspended phase, or on the number of feet in contact with the ground at any time, do not yield a mechanically correct classification. The most effective way to distinguish walking from running is to measure the height of a person's centre of mass, using motion capture or a force plate, at mid-stance: during walking the centre of mass reaches its maximum height at mid-stance, whereas during running it reaches its minimum. This distinction, however, holds only for locomotion over level or approximately level ground; for walking up grades above 10%, it no longer holds for some individuals. Definitions based on the percentage of the stride during which a foot is in contact with the ground (averaged across all feet) correspond well with the identification of 'inverted pendulum' mechanics: greater than 50% contact indicates walking, for animals with any number of limbs. Even this definition is incomplete, though, since running humans and animals may have contact periods greater than 50% of a gait cycle when rounding corners, running uphill, or carrying loads.
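The duty-factor rule described above (a foot on the ground for more than 50% of the stride, averaged across all feet, indicates walking) can be sketched as a small program. This is not from the source text, just a minimal illustration of the heuristic; the function names and example numbers are my own.

```python
# Sketch: classifying a gait as walking or running using the duty-factor
# definition described above -- the fraction of the stride during which a
# foot is on the ground, averaged across all feet. A duty factor above 0.5
# suggests 'inverted pendulum' (walking) mechanics.

def duty_factor(contact_times, stride_time):
    """Average fraction of the stride each foot spends on the ground.

    contact_times: ground-contact duration per foot, in seconds.
    stride_time: duration of one full stride cycle, in seconds.
    """
    return sum(contact_times) / (len(contact_times) * stride_time)

def classify_gait(contact_times, stride_time):
    # As the text notes, this definition is incomplete (e.g. loaded or
    # uphill running can exceed 50% contact), so treat it as a heuristic.
    return "walking" if duty_factor(contact_times, stride_time) > 0.5 else "running"

# A biped walking: each foot on the ground ~60% of a 1-second stride.
print(classify_gait([0.6, 0.6], 1.0))    # walking
# A biped running: each foot on the ground ~35% of the stride.
print(classify_gait([0.35, 0.35], 1.0))  # running
```

Because it averages over any number of feet, the same heuristic applies to quadrupeds and arthropods, which is exactly why the text prefers it to suspended-phase definitions.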

Speed is another factor that distinguishes walking from running. Although walking speeds can vary greatly depending on many factors such as height, weight, age, terrain, surface, load, culture, effort, and fitness, the average human walking speed at crosswalks is about 5.0 kilometres per hour (km/h), or about 1.4 meters per second (m/s), or about 3.1 miles per hour (mph). Specific studies have found pedestrian walking speeds at crosswalks ranging from 4.51 to 4.75 km/h (2.80 to 2.95 mph) for older individuals and from 5.32 to 5.43 km/h (3.31 to 3.37 mph) for younger individuals; a brisk walking speed can be around 6.5 km/h (4.0 mph). In Japan, the standard measure for walking speed is 80 m/min (4.8 km/h). Champion racewalkers can average more than 14 km/h (8.7 mph) over a distance of 20 km (12 mi).
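The equivalences quoted above (5.0 km/h ≈ 1.4 m/s ≈ 3.1 mph, and Japan's 80 m/min standard) follow from straightforward unit arithmetic. A quick sketch, with helper names of my own invention:

```python
# Sketch: the unit conversions behind the walking-speed figures above.
# 1 km/h = 1000 m / 3600 s; 1 mile = 1.609344 km.

def kmh_to_ms(kmh):
    """Convert kilometres per hour to metres per second."""
    return kmh * 1000 / 3600

def kmh_to_mph(kmh):
    """Convert kilometres per hour to miles per hour."""
    return kmh / 1.609344

avg = 5.0  # average crosswalk walking speed, km/h
print(round(kmh_to_ms(avg), 1))   # 1.4 m/s
print(round(kmh_to_mph(avg), 1))  # 3.1 mph

# Japan's standard walking speed of 80 m/min expressed in km/h:
print(80 * 60 / 1000)             # 4.8 km/h
```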

An average human child achieves independent walking ability at around 11 months old.

Health benefits

Regular, brisk exercise of any kind can improve confidence, stamina, energy, weight control, and life expectancy, and can reduce stress. It can also decrease the risk of coronary heart disease, strokes, diabetes, high blood pressure, bowel cancer, and osteoporosis. Scientific studies have also shown that walking, besides its physical benefits, is beneficial for the mind, improving memory, learning ability, concentration, mood, creativity, and abstract reasoning. Sustained walking sessions of thirty to sixty minutes a day, five days a week, with correct walking posture, reduce health risks and have various overall health benefits, such as reducing the chances of cancer, type 2 diabetes, heart disease, anxiety disorder, and depression. Life expectancy is increased even for individuals suffering from obesity or high blood pressure. Walking also improves bone health, especially strengthening the hip bone, lowering harmful low-density lipoprotein (LDL) cholesterol, and raising useful high-density lipoprotein (HDL) cholesterol. Studies have found that walking may also help prevent dementia and Alzheimer's disease. Walking at a pace that raises one's heart rate to 70% of its maximum, sometimes called the "fat-burning heart rate", can cause the body to draw on its fat reserves for energy, leading to fat loss. An individual's maximum heart rate can be estimated by subtracting their age from 220.
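The heart-rate arithmetic above can be sketched in a few lines. Note that 220 minus age is only a rough population estimate of maximum heart rate, not an individual measurement; the function names here are my own.

```python
# Sketch: the rule-of-thumb heart-rate arithmetic described above.
# 220 - age gives a rough estimate of maximum heart rate (bpm), and the
# "fat-burning" pace is taken as 70% of that maximum.

def max_heart_rate(age):
    """Rough population estimate of maximum heart rate, in bpm."""
    return 220 - age

def fat_burning_rate(age):
    """Approximate 'fat-burning' target: 70% of estimated maximum."""
    return 0.70 * max_heart_rate(age)

for age in (30, 50):
    print(age, max_heart_rate(age), fat_burning_rate(age))
# age 30 -> max 190 bpm, fat-burning target 133 bpm
# age 50 -> max 170 bpm, fat-burning target 119 bpm
```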

The Centers for Disease Control and Prevention's fact sheet on the "Relationship of Walking to Mortality Among U.S. Adults with Diabetes" states that those with diabetes who walked for two or more hours a week lowered their mortality rate from all causes by 39 percent. Women who took 4,500 steps to 7,500 steps a day seemed to have fewer premature deaths compared to those who only took 2,700 steps a day. "Walking lengthened the life of people with diabetes regardless of age, gender, race, body mass index, length of time since diagnosis and presence of complications or functional limitations." It has been suggested that there is a relationship between the speed of walking and health, and that the best results are obtained with a speed of more than 2.5 mph (4.0 km/h).

Research published in 2022 led to the recommendation that a commuter walk at least 6,000 steps each weekday throughout the year to obtain optimal health effects.

Governments now recognize the benefits of walking for mental and physical health and are actively encouraging it. This growing emphasis on walking has arisen because people walk less nowadays than previously. In the UK, a Department of Transport report found that between 1995/97 and 2005 the average number of walk trips per person fell by 16%, from 292 to 245 per year. Many professionals in local authorities and the National Health Service are employed to halt this decline by ensuring that the built environment allows people to walk and that there are walking opportunities available to them. Professionals working to encourage walking come mainly from six sectors: health, transport, environment, schools, sport and recreation, and urban design.

One program to encourage walking is "The Walking the Way to Health Initiative", organized by the British walkers' association The Ramblers, which is the largest volunteer-led walking scheme in the United Kingdom. Volunteers are trained to lead free Health Walks from community venues such as libraries and doctors' surgeries. The scheme has trained over 35,000 volunteers and has over 500 schemes operating across the UK, with thousands of people walking every week. A new organization called "Walk England" launched a web site in June 2008 to provide these professionals with evidence, advice, and examples of success stories of how to encourage communities to walk more. The site has a social networking aspect to allow professionals and the public to ask questions, post news and events, and communicate with others in their area about walking, as well as a "walk now" option to find out what walks are available in each region. Similar organizations exist in other countries, and recently a "Walking Summit" was held in the United States. This "assembled thought-leaders and influencers from business, urban planning and real estate, (along with) physicians and public health officials", and others, to discuss how to make American cities and communities places where "people can and want to walk". Walking is more prevalent in European cities that have dense residential areas mixed with commercial areas and good public transportation.




#1785 2023-05-27 00:42:47

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 46,703

Re: Miscellany

1788) Jogging

Gist

Jogging is the activity of running at a slow, regular speed, especially as a form of exercise. (UK)

Jogging, form of running at an easy pace, particularly popular from the 1960s in the United States.

Summary

Jogging is a form of running at an easy pace, particularly popular from the 1960s in the United States. There, an estimated 7,000,000 to 10,000,000 joggers sought fitness, weight loss, grace, physical fulfillment, and relief from stress by jogging. Joggers expend from 10 to 13 calories per minute in this exercise (compared with approximately 7 to 9 calories per minute for tennis).

The popularity of this activity was given substantial impetus by the publication of the book Jogging (1967) by Bill Bowerman, a University of Oregon track coach, and W.E. Harris, a heart specialist. The practice of jogging originated in New Zealand, where the Olympic track coach Arthur Lydiard suggested it as a conditioning activity for retired Olympic runners; Bowerman observed the activity there and was impressed.

Jogging has been endorsed by many medical authorities for its value as a heart exercise and for general physical conditioning, usually to be practiced on alternate days. Other medical authorities, however, warn that fallen arches, shin splints, sweat miliaria, strained Achilles tendons, bruised heels, and knee and back ailments can result from jogging—usually done on hard surfaces with the feet striking the ground from 600 to 750 times per mile. Warm-up exercises before jogging, properly designed shoes, loose clothing, proper jogging technique, and general good health—as well as sensible objectives—are necessary for safe pursuit of the activity. The U.S. National Jogging Association was formed in 1968 to promote the pastime.

Details

Jogging is a form of trotting or running at a slow or leisurely pace. The main intention is to increase physical fitness with less stress on the body than from faster running but more than walking, or to maintain a steady speed for longer periods of time. Performed over long distances, it is a form of aerobic endurance training.

Definition

Jogging is running at a gentle pace; its definition, as compared with running, is not standard. In general, jogging speed is between 4 and 6 miles per hour (6.4 and 9.7 km/h). Running is sometimes defined as requiring a moment of no contact with the ground, whereas jogging often sustains contact.

History

The word jog originated in England in the mid-16th century. The etymology of the word is unknown, but it may be related to shog or have been a new invention. In 1593, William Shakespeare wrote in The Taming of the Shrew, "you may be jogging whiles your boots are green". At that point, the word usually meant to leave.

The term jog was often used in English and North American literature to describe short quick movements, either intentional or unintentional. It is also used to describe a quick, sharp shake or jar. Richard Jefferies, an English naturalist, wrote of "joggers", describing them as quickly moving people who brushed others aside as they passed. This usage became common throughout the British Empire, and in his 1884 novel My Run Home, the Australian author Rolf Boldrewood wrote, "Your bedroom curtains were still drawn as I passed on my morning jog."

In the United States jogging was called "roadwork" when athletes in training, such as boxers, customarily ran several miles each day as part of their conditioning. In New Zealand during the 1960s or 1970s, the word "roadwork" was mostly supplanted by the word "jogging", promoted by coach Arthur Lydiard, who is credited with popularizing jogging. The idea of jogging as an organised activity was mooted in a sports page article in The New Zealand Herald in February 1962, which told of a group of former athletes and fitness enthusiasts who would meet once a week to run for "fitness and sociability". Since they would be jogging, the newspaper suggested that the club "may be called the Auckland Joggers' Club"—which is thought to be the first use of the noun "jogger". University of Oregon track coach Bill Bowerman, after jogging with Lydiard in New Zealand in 1962, started a joggers' club in Eugene in early 1963. He published the book Jogging in 1966, popularizing jogging in the United States.

Exercise

Jogging may also be used as a warm-up or cool-down for runners, preceding or following a workout or race. It is often used by serious runners as a means of active recovery during interval training. For example, a runner who completes a fast 400-meter repetition at a sub-5-minute-mile pace (about 3 minutes per km) may drop to an 8-minute-mile jogging pace (about 5 minutes per km) for a recovery lap.

Jogging can be used as a method to increase endurance or to provide a means of cardiovascular exercise but with less stress on joints or demand on the circulatory system.

Benefits

According to a study by Stanford University School of Medicine, jogging is effective in increasing human lifespan, and decreasing the effects of aging, with benefits for the cardiovascular system. Jogging is useful for fighting obesity and staying healthy.

The National Cancer Institute has performed studies that suggest jogging and other types of aerobic exercise can reduce the risk of lung, colon, breast and prostate cancers, among others. It is suggested by the American Cancer Society that jogging for at least 30 minutes five days a week can help in cancer prevention.

While jogging on a treadmill provides health benefits such as cancer prevention and aids in weight loss, a study published in BMC Public Health reports that jogging outdoors can have the additional benefits of increased energy and concentration. Jogging outdoors is a better way to improve energy levels and mood than using a treadmill at the gym.

Jogging also helps prevent the muscle and bone damage that often occurs with age, improves heart performance and blood circulation, and helps maintain a healthy weight.

A Danish study released in 2015 reported that "light" and "moderate" jogging were associated with reduced mortality compared to both non-jogging and "strenuous" jogging. The optimal amount was 1 to 2.4 hours per week, the optimal frequency was at most 3 times per week, and the optimal speed was "slow" or "average". A recent meta-analysis on running/jogging and mortality, including more than 230,000 participants, found that runners were at 27% lower risk of death than non-runners over follow-up periods of 5.5 to 35 years.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.


#1786 2023-05-28 00:27:46

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 46,703

Re: Miscellany

1789) Running

Gist

Running is the act of a person, animal, or thing that runs.

Summary

Running is footracing over a variety of distances and courses and numbering among the most popular sports in nearly all times and places. Modern competitive running ranges from sprints (dashes), with their emphasis on continuous high speed, to grueling long-distance and marathon races, requiring great endurance.

Running is also a popular noncompetitive recreation that can produce important physiological benefits.

Details

Running is the way in which people or animals travel quickly on their feet. It is a method of travelling on land. It differs from walking in that both feet are regularly off the ground at the same time. Different terms are used to refer to running according to the speed: jogging is slow, and sprinting is running fast.

Running is a popular form of exercise. It is also one of the oldest forms of sport. The exercise is known to be good for health; it helps breathing and heartbeat, and burns excess calories. Running keeps a person fit and active. It also relieves stress. Running makes a person thirsty, so it is important to drink water when running.

Benefits of running

* Fitness – From wanting to lose weight to trying to fight disease and aging, there are lots of health benefits to running.
* Mental health – Whether to help depression or find some time to think, there are large mental benefits to running as well.
* Running is also fun and does not require special equipment. When running, the muscles, lungs, brain, heart and other organs are strengthened.
* Running is often used as cross-training for many sports, especially ones that require sustained endurance.

Ways to avoid injuries

Running injuries are quite common among runners. Many running injuries can be reduced through proper training, wearing of the correct gear and awareness of the running environment.

Physical

* Before going on a long run, do not forget to do a five-minute warm-up and some stretching exercises.
* After going on a long run, do not forget to do a ten-minute cool-down with some stretching exercises.
* In the short term, try running on a flat surface. In the long term, coaches recommend cross-country running.
* Run in a way that reduces the loading on the legs. One method is to use a treadmill; another is to run in minimal shoes. These allow the runner's feet to feel the ground and so not inadvertently overload the foot and ankle. Over-cushioned shoes prevent the foot from feeling what it is doing and can also damage one's shins and knees. This topic has been very controversial, however: many coaches, runners, and scientists argue that minimal shoes are a main culprit in running injuries and highly recommend shoes that offer stability and cushioning, and many runners with injuries have found that switching to cushioned shoes improved or cured their injuries.

Environmental

* In hot climates, it is better to run in the morning. This will help to reduce fatigue and heat stress.
* Do not run when pollution levels are high.
* Run in the shade if possible, wear sunglasses, apply sunscreen and try to avoid direct sun rays.
* Too much clothing can produce sweating, which causes the body to lose heat quickly. Dress in layers. Wear the correct footwear.
* When running in cold weather, wear a hat, gloves and clothing that covers your neck.

Racing

Running is a part of many forms of competitive racing. Most running races test speed, endurance or both. Track and field races are usually divided into sprints, middle-distance races and long-distance races. Races held off the track may be called cross-country races. A marathon is run over 42.195 kilometres.

Footraces have probably existed for most of human history. They were an important part of the ancient Olympic Games.

Additional Information

Running is a method of terrestrial locomotion allowing humans and other animals to move rapidly on foot. Running is a type of gait characterized by an aerial phase in which all feet are above the ground (though there are exceptions). This is in contrast to walking, where one foot is always in contact with the ground, the legs are kept mostly straight and the center of gravity vaults over the stance leg or legs in an inverted pendulum fashion. A feature of a running body from the viewpoint of spring-mass mechanics is that changes in kinetic and potential energy within a stride co-occur, with energy storage accomplished by springy tendons and passive muscle elasticity. The term running can refer to any of a variety of speeds ranging from jogging to sprinting.

Running in humans is associated with improved health and life expectancy.

It is hypothesized that the ancestors of humankind developed the ability to run for long distances about 2.6 million years ago, probably to hunt animals. Competitive running grew out of religious festivals in various areas. Records of competitive racing date back to the Tailteann Games in Ireland between 1171 BCE and 632 BCE, while the first recorded Olympic Games took place in 776 BCE. Running has been described as the world's most accessible sport.

Description

Running gait can be divided into two phases regarding the lower extremity: stance and swing. These can be further divided into absorption, propulsion, initial swing, and terminal swing. Due to the continuous nature of running gait, no certain point is assumed to be the beginning. However, for simplicity, it will be assumed that absorption and footstrike mark the beginning of the running cycle in a body already in motion.

Footstrike

Footstrike occurs when a plantar portion of the foot makes initial contact with the ground. Common footstrike types include forefoot, midfoot, and heel strike types. These are characterized by initial contact with the ball of the foot, with the ball and heel of the foot simultaneously, and with the heel of the foot, respectively. During this time, the hip joint is undergoing extension from being in maximal flexion from the previous swing phase. For proper force absorption, the knee joint should be flexed upon the footstrike, and the ankle should be slightly in front of the body. Footstrike begins the absorption phase as forces from initial contact are attenuated throughout the lower extremity. Absorption of forces continues as the body moves from footstrike to midstance due to vertical propulsion from the toe-off during a previous gait cycle.

Midstance

Midstance is when the lower extremity limb of focus is in knee flexion directly underneath the trunk, pelvis, and hips. At this point, propulsion begins to occur as the hips undergo hip extension, the knee joint undergoes extension, and the ankle undergoes plantar flexion. Propulsion continues until the leg is extended behind the body and toe-off occurs. This involves a maximal hip extension, knee extension, and plantar flexion for the subject, resulting in the body being pushed forward from this motion, and the ankle/foot leaves the ground as the initial swing begins.

Propulsion phase

Most recent research, particularly regarding the footstrike debate, has focused solely on the absorption phases for injury identification and prevention. The propulsion phase of running involves the movement beginning at midstance until toe off. From a full stride-length model, however, components of the terminal swing and footstrike can aid in propulsion. Set up for propulsion begins at the end of the terminal swing as the hip joint flexes, creating the maximal range of motion for the hip extensors to accelerate through and produce force. As the hip extensors change from reciprocal inhibitors to primary muscle movers, the lower extremity is brought back toward the ground, although aided greatly by the stretch reflex and gravity. Footstrike and absorption phases occur next with two types of outcomes. This phase can be only a continuation of momentum from the stretch reflex reaction to hip flexion, gravity, and light hip extension with a heel strike, which does little to provide force absorption through the ankle joint. With a mid/forefoot strike, loading of the gastro-soleus complex from shock absorption will serve to aid in plantar flexion from midstance to toe-off.

As the lower extremity enters the midstance, actual propulsion begins. The hip extensors continue contracting with help from the acceleration of gravity and the stretch reflex left over from maximal hip flexion during the terminal swing phase. Hip extension pulls the ground underneath the body, pulling the runner forward. During midstance, the knee should be in some degree of knee flexion due to elastic loading from the absorption and footstrike phases to preserve forward momentum. The ankle joint is in dorsiflexion at this point underneath the body, either elastically loaded from a mid/forefoot strike or preparing for stand-alone concentric plantar flexion. All three joints perform the final propulsive movements during toe-off.
The plantar flexors plantar flex, pushing off from the ground and returning from dorsiflexion in midstance. This can either occur by releasing the elastic load from an earlier mid/forefoot strike or concentrically contracting from a heel strike. With a forefoot strike, the ankle and knee joints release their stored elastic energy from the footstrike/absorption phase. The quadriceps group/knee extensors go into full knee extension, pushing the body off of the ground. At the same time, the knee flexors and stretch reflex pull the knee back into flexion, adding to a pulling motion on the ground and beginning the initial swing phase. The hip extensors extend to the maximum, adding the forces pulling and pushing off of the ground. The hip extensors' movement and momentum also contribute to knee flexion and the beginning of the initial swing phase.

Swing phase

Initial swing is the response of both stretch reflexes and concentric movements to the propulsion movements of the body. Hip flexion and knee flexion occur, beginning the return of the limb to the starting position and setting up for another foot strike. The initial swing ends at midswing, when the limb is again directly underneath the trunk, pelvis, and hip with the knee joint flexed and hip flexion continuing. Terminal swing then begins as hip flexion continues to the point of activation of the stretch reflex of the hip extensors. The knee begins to extend slightly as it swings to the anterior portion of the body. The foot then makes contact with the ground with a foot strike, completing the running cycle of one side of the lower extremity. Each limb of the lower extremity works opposite to the other. When one side is in toe-off/propulsion, the other side is in the swing/recovery phase preparing for footstrike. Following toe-off and the beginning of the initial swing of one side, there is a flight phase where neither extremity is in contact with the ground due to the opposite side finishing terminal swing. As the footstrike of one side occurs, the initial swing of the other continues. The opposing limbs meet with one in midstance and midswing, beginning the propulsion and terminal swing phases.

Upper extremity function

The upper extremity function serves mainly in providing balance in conjunction with the opposing side of the lower extremity. The movement of each leg is paired with the opposite arm, which serves to counterbalance the body, particularly during the stance phase. The arms move most effectively (as seen in elite athletes) with the elbow joint at approximately 90 degrees or less, the hands swinging from the hips up to mid-chest level with the opposite leg, the humerus moving from being parallel with the trunk to approximately 45 degrees shoulder extension (never passing the trunk in flexion) and with as little movement in the transverse plane as possible. The trunk also rotates in conjunction with arm swing. It mainly serves as a balance point from which the limbs are anchored. Thus trunk motion should remain mostly stable with little motion except for slight rotation, as excessive movement would contribute to transverse motion and wasted energy.

Footstrike debate

Recent research into various forms of running has focused on the differences in the potential injury risks and shock absorption capabilities between heel and mid/forefoot footstrikes. It has been shown that heel striking is generally associated with higher rates of injury and impact due to inefficient shock absorption and inefficient biomechanical compensations for these forces. This is due to pressures from a heel strike traveling through bones for shock absorption rather than being absorbed by muscles. Since bones cannot disperse forces easily, the forces are transmitted to other parts of the body, including ligaments, joints, and bones in the rest of the lower extremities up to the lower back. This causes the body to use abnormal compensatory motions in an attempt to avoid serious bone injuries. These compensations include internal rotation of the tibia, knee, and hip joints. Excessive compensation over time has been linked to a higher risk of injuries in those joints and the muscles involved in those motions. Conversely, a mid/forefoot strike has been associated with greater efficiency and lower injury risk due to the triceps surae being used as a lever system to absorb forces with the muscles eccentrically rather than through the bone. Landing with a mid/forefoot strike has also been shown to properly attenuate shock and allow the triceps surae to aid in propulsion via reflexive plantarflexion after stretching to absorb ground contact forces. Thus a mid/forefoot strike may aid in propulsion. However, even among elite athletes, there are variations in self-selected footstrike types. This is especially true in longer distance events, where there is a prevalence of heel strikers. There does tend however to be a greater percentage of mid/forefoot striking runners in the elite fields, particularly in the faster racers and the winning individuals or groups. 
While one could attribute the faster speeds of elite runners compared to recreational runners with similar footstrikes to physiological differences, the role of the hip and the joints in proper propulsion has been left out of the equation. This raises the question of how heel-striking elite distance runners can keep up such high paces with a supposedly inefficient and injurious foot strike technique.

Stride length, hip and knee function

Biomechanical factors associated with elite runners include increased hip function, use, and stride length over recreational runners. An increase in running speeds causes increased ground reaction forces, and elite distance runners must compensate for this to maintain their pace over long distances. These forces are attenuated through increased stride length via increased hip flexion and extension through decreased ground contact time and more energy being used in propulsion. With increased propulsion in the horizontal plane, less impact occurs from the decreased force in the vertical plane. Increased hip flexion allows for increased use of the hip extensors through midstance and toe-off, allowing for more force production. The difference even between world-class and national-level 1500-m runners has been associated with more efficient hip joint function. The increase in velocity likely comes from the increased range of motion in hip flexion and extension, allowing for greater acceleration and speed. The hip extensors and extension have been linked to more powerful knee extension during toe-off, contributing to propulsion. Stride length must be appropriately increased with some degree of knee flexion maintained through the terminal swing phases, as excessive knee extension during this phase along with footstrike has been associated with higher impact forces due to braking and an increased prevalence of heel striking. Elite runners tend to exhibit some degree of knee flexion at footstrike and midstance, which first serves to eccentrically absorb impact forces in the quadriceps muscle group. Secondly it allows for the knee joint to contract concentrically and provides significant aid in propulsion during toe-off as the quadriceps group is capable of producing large amounts of force. 
Recreational runners have been shown to increase stride length through increased knee extension rather than increased hip flexion, as exhibited by elite runners, which provides an intense braking motion with each step and decreases the rate and efficiency of knee extension during toe-off, slowing down speed. Knee extension, however, contributes to additional stride length and propulsion during toe-off and is seen more frequently in elite runners as well.

Good technique

Leaning forward places a runner's center of mass on the front part of the foot, which avoids landing on the heel and facilitates the use of the spring mechanism of the foot. It also makes it easier for the runner to avoid landing the foot in front of the center of mass and the resultant braking effect. While upright posture is essential, a runner should maintain a relaxed frame and use their core to keep posture upright and stable. This helps prevent injury as long as the body is neither rigid nor tense. The most common running mistakes are tilting the chin up and scrunching shoulders.

Stride rate and types

Exercise physiologists have found that the stride rates are extremely consistent across professional runners, between 185 and 200 steps per minute. The main difference between long- and short-distance runners is the length of stride rather than the rate of stride.

During running, the speed at which the runner moves may be calculated by multiplying the cadence (steps per minute) by the stride length. Running is often measured in terms of pace, expressed in units of minutes per mile or minutes per kilometer (the inverse of speed, in mph or km/h). Some coaches advocate training at a combination of specific paces related to one's fitness in order to stimulate various physiological improvements.
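The speed-pace relationship above can be sketched numerically. A minimal illustration; the cadence and stride-length figures below are hypothetical examples, not values from the text:

```python
# Speed = cadence (steps/min) x stride length (m/step); pace is its inverse.

def speed_kmh(cadence_spm: float, stride_m: float) -> float:
    """Metres covered per minute, converted to km/h."""
    metres_per_min = cadence_spm * stride_m
    return metres_per_min * 60 / 1000

def pace_min_per_km(cadence_spm: float, stride_m: float) -> float:
    """Pace in minutes per kilometre (the inverse of speed)."""
    metres_per_min = cadence_spm * stride_m
    return 1000 / metres_per_min

# Example: 190 steps/min (within the 185-200 professional range)
# with an assumed 1.2 m stride.
print(speed_kmh(190, 1.2))                  # 13.68 km/h
print(round(pace_min_per_km(190, 1.2), 2))  # 4.39 min/km
```

Since cadence is nearly constant among professionals, the sketch makes the text's point concrete: at a fixed 190 steps/min, speed changes only through stride length.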

Different types of stride are necessary for different types of running. When sprinting, runners stay on their toes bringing their legs up, using shorter and faster strides. Long-distance runners tend to have more relaxed strides that vary.




#1787 2023-05-29 00:23:33

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 46,703

Re: Miscellany

1790) Sprint

Gist

Sprint is to run or go at top speed especially for a short distance.

Sprint, also called dash, in athletics (track and field), a footrace over a short distance with an all-out or nearly all-out burst of speed, the chief distances being 100, 200, and 400 metres and 100, 220, and 440 yards.

Summary

Sprinting is running over a short distance at the top-most speed of the body in a limited period of time. It is used in many sports that incorporate running, typically as a way of quickly reaching a target or goal, or avoiding or catching an opponent. Human physiology dictates that a runner's near-top speed cannot be maintained for more than 30–35 seconds due to the depletion of phosphocreatine stores in muscles, and perhaps secondarily to excessive metabolic acidosis as a result of anaerobic glycolysis.

In athletics and track and field, sprints (or dashes) are races over short distances. They are among the oldest running competitions, being recorded at the Ancient Olympic Games. Three sprints are currently held at the modern Summer Olympics and outdoor World Championships: the 100 metres, 200 metres, and 400 metres.

At the professional level, sprinters begin the race by assuming a crouching position in the starting blocks before driving forward and gradually moving into an upright position as the race progresses and momentum is gained. The set position differs depending on the start. The use of starting blocks allows the sprinter to perform an enhanced isometric preload; this generates muscular pre-tension which is channeled into the subsequent forward drive, making it more powerful. Body alignment is of key importance in producing the optimal amount of force. Ideally, the athlete should begin in a 4-point stance and drive forwards, pushing off using both legs for maximum force production. Athletes remain in the same lane on the running track throughout all sprinting events, with the sole exception of the 400 metres indoors. Races up to 100 metres are largely focused upon acceleration to an athlete's maximum speed. All sprints beyond this distance increasingly incorporate an element of endurance.

Details

Sprint running races are short-distance races in which athletes try to run at their maximum speed throughout the entire distance of the race. Sprint races are part of the track and field discipline and are included in all events that feature track and field competitions.

The 400m oval running track is split into eight lanes, each 4 ft (1.22 m) wide. Up to eight athletes compete in a single race. Competitions are conducted in a heats format, where athletes in groups of up to eight take part in each race, with winners moving on to the next round, until the final winner is decided.

Sprint races can be of various distances from 50m to 400m. The three formats used for the Olympics are: 100m, 200m and 400m. The 100m and 400m races are also conducted in a relay format where a team of four athletes each run a leg and pass a baton from one runner to the next.

For the 100m race all runners line up on a straight section of the track, while for the 200m and 400m the start positions are staggered according to each athlete's lane. The first runner to cross the finish line is the winner. Each athlete's finishing time is also recorded for historical record keeping.

The course for sprint races is usually marked off in lanes within which each runner must remain for the entire race. Originally sprinters used a standing start, but after 1884 sprinters started from a crouched position using a device called a starting block (legalized in the 1930s) to brace their feet. Races are begun by a pistol shot; at 55 to 65 metres (60 to 70 yards), top sprinters attain maximum speed, more than 40 km per hour (25 miles per hour). After the 65-metre mark the runner begins to lose speed through fatigue.

All important international races at 200 metres and 220 yards, as well as 400 metres and 440 yards, are run on an oval track. The starts are staggered (the lanes farther from the centre begin progressively farther forward on the track) so that each runner will cover an equal distance. As a result, the competitors, particularly in the 400 metres and 440 yards, have no exact knowledge of their respective positions until they have completed the final turn. Great emphasis is therefore placed on an athlete’s ability to judge his own pace, as well as upon his speed and endurance.
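The stagger described above can be estimated from track geometry. A simplified sketch, assuming a standard 400 m track with 1.22 m lanes and two semicircular turns per lap, and ignoring the small measurement offset real tracks apply within each lane:

```python
import math

LANE_WIDTH_M = 1.22  # standard lane width on a 400 m track

def stagger_m(lane: int, turns: int = 2) -> float:
    """Extra distance a runner in `lane` would cover versus lane 1 over
    `turns` semicircular turns; the start line is moved forward by this
    amount so every lane covers the same total distance."""
    # Moving out one lane increases the turn radius by the lane width,
    # adding pi * width of extra distance per semicircular turn.
    return math.pi * LANE_WIDTH_M * (lane - 1) * turns

# 400 m (one full lap, two turns): lane 2 starts about 7.67 m ahead of lane 1.
print(round(stagger_m(2, turns=2), 2))
# 200 m (one turn): lane 2 starts about 3.83 m ahead.
print(round(stagger_m(2, turns=1), 2))
```

This is why outside-lane runners appear to lead early in a 200 m or 400 m race: the visible gap at the start is exactly the extra turn distance their lane would otherwise add.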

History

The first 13 editions of the Ancient Olympic Games featured only one event—the stadion race, which was a sprinting race from one end of the stadium to the other. The Diaulos ("double pipe") was a double-stadion race, c. 400 metres (1,300 feet), introduced in the 14th Olympiad of the ancient Olympic Games (724 BC).

Sprint races were part of the original Olympic Games in the 8th century B.C. as well as the first modern Olympic Games, which started in the late 19th century (Athens 1896) and featured the 100 meters and 400 meters. Athletes started both races from a crouched start (4-point stance). In both the original Olympics and the modern Olympics, only men were allowed to participate in track and field until the 1928 games in Amsterdam, Netherlands. The 1928 games were also the first Olympic Games to use a 400-meter track, which became the standard for track and field.

The modern sprinting events have their roots in races of imperial measurements which were later altered to metric: the 100 m evolved from the 100-yard dash, the 200 m distance came from the furlong (or 1⁄8 mile), and the 400 m was the successor to the 440-yard dash or quarter-mile race.

Technological advances have steadily improved sprint performances (e.g., starting blocks, synthetic track material, and shoe technology). In 1924, athletes used a small shovel to dig holes to start the race. The world record in the 100-meter dash was 10.4 seconds in 1924, 10.2 seconds in 1948 (the first Olympic use of starting blocks), and 10.1 seconds in 1956. The constant drive for faster athletes with better technology has brought the record from 10.4 seconds to 9.58 seconds in less than 100 years.

Track events were measured with the metric system, except in the United Kingdom and the United States, until 1965 and 1974 respectively. The Amateur Athletic Union (AAU) decided to switch track and field in the U.S. to the metric system to finally make track and field internationally equivalent.




#1788 2023-05-30 00:02:26

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 46,703

Re: Miscellany

1791) Mountaineering

Gist

Mountaineering, also called mountain climbing, is the sport of attaining, or attempting to attain, high points in mountainous regions, mainly for the pleasure of the climb.

Summary

Mountaineering, also called mountain climbing, is the sport of attaining, or attempting to attain, high points in mountainous regions, mainly for the pleasure of the climb. Although the term is often loosely applied to walking up low mountains that offer only moderate difficulties, it is more properly restricted to climbing in localities where the terrain and weather conditions present such hazards that, for safety, a certain amount of previous experience will be found necessary. For the untrained, mountaineering is a dangerous pastime.

Mountaineering differs from other outdoor sports in that nature alone provides the field of action—and just about all of the challenges—for the participant. Climbing mountains embodies the thrills produced by testing one’s courage, resourcefulness, cunning, strength, ability, and stamina to the utmost in a situation of inherent risk. Mountaineering, to a greater degree than other sports, is a group activity, with each member both supporting and supported by the group’s achievement at every stage. For most climbers, the pleasures of mountaineering lie not only in the “conquest” of a peak but also in the physical and spiritual satisfactions brought about through intense personal effort, ever-increasing proficiency, and contact with natural grandeur.

History

Early attempts to ascend mountain peaks were inspired by motives other than sport: to build altars or to see if spirits actually haunted once-forbidden heights, to get an overview of one’s own or a neighbouring countryside, or to make meteorological or geological observations. Before the modern era, history recorded few attempts to ascend mountain peaks for the mere sake of the accomplishment. During the 18th century a growing number of natural philosophers—the scientists of their day—began making field trips into the Alps of Europe to make scientific observations. The area around Chamonix, France, became a special attraction to those investigators because of the great glaciers on the Mont Blanc chain.

Mountaineering in a contemporary sporting sense was born when a young Genevese scientist, Horace-Bénédict de Saussure, on a first visit to Chamonix in 1760, viewed Mont Blanc (at 15,771 feet [4,807 metres] the tallest peak in Europe) and determined that he would climb to the top of it or be responsible for its being climbed. He offered prize money for the first ascent of Mont Blanc, but it was not until 1786, more than 25 years later, that his money was claimed—by a Chamonix doctor, Michel-Gabriel Paccard, and his porter, Jacques Balmat. A year later de Saussure himself climbed to the summit of Mont Blanc. After 1850 groups of British climbers with Swiss, Italian, or French guides scaled one after another of the high peaks of Switzerland. A landmark climb in the growth of the sport was the spectacular first ascent of the Matterhorn (14,692 feet [4,478 metres]) on July 14, 1865, by a party led by an English artist, Edward Whymper. In the mid-19th century the Swiss developed a coterie of guides whose leadership helped make mountaineering a distinguished sport as they led the way to peak after peak throughout central Europe.

By 1870 all of the principal Alpine summits had been scaled, and climbers began to seek new and more-difficult routes on peaks that had already been ascended. As the few remaining minor peaks of the Alps were overcome, by the end of the 19th century climbers turned their attention to the Andes Mountains of South America, the North American Rocky Mountains, the Caucasus at the western edge of Asia, Africa’s peaks, and finally the vastness of the Himalayas. Mount Aconcagua (22,831 feet [6,959 metres]), the highest peak of the Andes, was first climbed in 1897, and Grand Teton (13,770 feet [4,197 metres]) in North America’s Rocky Mountains was ascended in 1898. The Italian duke d’Abruzzi in 1897 made the first ascent of Mount St. Elias (18,008 feet [5,489 metres]), which stands athwart the international boundary of the U.S. state of Alaska and Yukon territory, Canada, and in 1906 successfully climbed Margherita Peak in the Ruwenzori Range (16,795 feet [5,119 metres]) in East Africa. In 1913 an American, Hudson Stuck, ascended Denali (Mount McKinley) in Alaska, which, at 20,310 feet (6,190 metres), is the highest peak in North America. The way was opening for greater conquests, but it would be mid-century before the final bastion, Mount Everest in the Himalayas, was ascended.

As the 20th century wore on, the truly international character of mountaineering began to reveal itself. Increasingly, Austrians, Chinese, English, French, Germans, Indians, Italians, Japanese, and Russians turned their attention to opportunities inherent in the largest mountain landmass of the planet, the Himalayas and neighbouring ranges. After World War I the British made Everest their particular goal. Meanwhile, climbers from other countries were making spectacularly successful climbs of other great Himalayan peaks. A Soviet team climbed Stalin Peak (24,590 feet [7,495 metres])—later renamed Communism Peak and then Imeni Ismail Samani Peak—in the Pamirs in 1933, a German party succeeded on Siniolchu (22,600 feet [6,888 metres]) in 1936, and the English climbed Nanda Devi (25,646 feet [7,817 metres]) the same year. In 1940–47, The Alpine Journal of London, a reliable chronicler of ascents, listed for the first time no peaks ascended—a reflection, of course, of the imperatives of World War II.

In the 1950s came a series of successful ascents of mountains in the Himalayas: a first climb by the French of Annapurna I (26,545 feet [8,091 metres]) in June 1950, Nanga Parbat (26,660 feet [8,126 metres]) by the Germans and Austrians in 1953, Kanchenjunga (28,169 feet [8,586 metres]) by the British in May 1955, and Lhotse I (27,940 feet [8,516 metres]) by the Swiss in 1956. In addition, K2 in the Karakoram Range, at 28,251 feet (8,611 metres) the world’s second highest mountain, was first scaled by two Italian climbers in July 1954. Beyond all those, however, the success of the British on Mount Everest (29,035 feet [8,850 metres])—when a New Zealand beekeeper, Edmund (later Sir Edmund) Hillary, and the Tibetan guide Tenzing Norgay stood on the top of the world on May 29, 1953—was a culminating moment. That expedition, which was led by Colonel John Hunt, was the eighth team in 30 years to attempt Everest, and there had also been three reconnaissance expeditions.

An Austrian party reached the summit of Cho Oyu (26,906 feet [8,201 metres]), just to the west of Everest, in October 1954. In May 1955 a French party succeeded in getting all its members and a Sherpa guide to the summit of Makalu I (27,766 feet [8,463 metres]), another neighbour of Everest. The British expedition that in May 1955 climbed Kanchenjunga, often considered one of the world’s most-difficult mountaineering challenges, was led by Charles Evans, who had been deputy leader of the first successful climb of Everest.

Beginning in the 1960s, mountaineering underwent several transformations. Once peaks were climbed, the emphasis moved to a search for increasingly difficult routes up the mountain face to the summit, as in the golden age of the Alpine ascents. A notable example was the 1963 ascent of the West Ridge of Everest by two members of the first American team to climb the mountain. Moreover, vertical or other so-called impossible rock faces were being scaled through the use of newly developed artificial aids and advanced climbing techniques. Smooth vertical faces of granite were overcome in climbs lasting days or even weeks at a time—for example, the 27-day conquest by American climbers in 1970 of the sheer 3,600-foot (1,100-metre) southeast face of the granite monolith El Capitan in Yosemite National Park in the North American Sierra Nevada range. Other notable developments included an increase in the “Alpine” style of climbing the highest peaks, where mountaineers carried a minimal amount of equipment and supplies and did not rely on porters and other outside support, and a rise in the number of people climbing at high elevations without the use of supplemental oxygen.

Techniques

While it is necessary for the complete mountaineer to be competent in all three phases of the sport—hiking, rock climbing, and snow and ice technique—each is quite different. There are wide variations within those categories, and even the most accomplished mountaineers will have varying degrees of competence in each. Good climbers will strike that balance that is consonant with their own physical and mental capabilities and approach.

Hiking is the essential element of all climbing, for in the end mountains are climbed by placing one foot in front of another over and over again. The most-arduous hours in mountaineering are those spent hiking or climbing slowly, steadily, hour after hour, on the trails of a mountain’s approach or lower slopes.

Rock climbing, like hiking, is a widely practiced sport in its own right. The essentials of rock climbing are often learned on local cliffs, where the teamwork of mountaineering, the use of the rope, and the coordinated prerequisites of control and rhythm are mastered. The rope, the artificial anchor, and carabiner (or snap link, a metal loop or ring that can be snapped into an anchor and through which the rope may be passed) are used primarily as safety factors. An exception occurs in tension climbing, in which the leader is supported by a judiciously placed series of anchors and carabiners through which the rope is passed. He or she is then supported on the rope by fellow climbers while slowly moving upward to place another anchor and repeat the process.

Anchors are used with discretion rather than in abundance. Anchors include the chock, which is a small piece of shaped metal that is attached to rope or wire cable and wedged by hand into a crack in the rock; the piton, which is a metal spike, with an eye or ring in one end, that is hammered into a crack; the bolt, which is a metal rod that is hammered into a hole drilled by the climber and to whose exposed, threaded end a hanger is then attached; and the “friend,” which is a form of chock with a camming device that automatically adjusts to a crack. Anchors are rarely used as handholds or footholds.

For the majority of rock climbers, hands and feet alone are the essential, with the feet doing most of the labour. The layperson’s notion that the climber must be extraordinarily strong in arms and shoulders is true only for such situations as the negotiation of serious overhangs. By and large, hands are used for balance, feet for support. Hands and arms are not used for dragging the climber up the cliff.

Balance is essential, and the body weight is kept as directly over the feet as possible, the climber remaining as upright as the rock will permit. An erect stance enables the climber to use that fifth element of climbing, the eyes. Careful observation while moving up a cliff will save many vain scrambles for footholds. Three points of contact with the rock are usually kept, either two hands and a foot or two feet and a hand. Jumping for holds is extremely dangerous because it allows no safety factor. Rhythmic climbing may be slow or fast according to the difficulty of the pitch. Rhythm is not easily mastered and, when achieved, becomes the mark of the truly fine climber.

The harder the climb, the more the hands are used for support. They are used differently in different situations. In a chimney, a pipelike, nearly cylindrical vertical shaft, they press on opposite sides in opposition to each other. On slabs, the pressure of the palms of the hand on smooth rock may provide the necessary friction for the hold.

Climbing down steep rock is usually harder than going up, because of the difficulty in seeing holds from above and the normal reluctance of climbers to reach down and work their hands low enough as they descend. The quick way down is via the doubled rope in the technique called rappelling. The rope, one end being firmly held or secured, is wrapped around the climber’s body in such a way that it can be fed out by one hand slowly or quickly as desired to lower the body gradually down the face of the rock.

Rope handling is a fine art that is equally essential on snow, ice, and rock. Sufficient rope for the pitch to be climbed and of sufficient length for rappelling is needed. As a lifeline, the rope receives the greatest care and respect. A good rope handler is a valued person on the climb. The techniques involved are not easily learned and are mastered primarily through experience. Anchors and carabiners must be so placed and the rope strung in such a way as to provide maximum safety and to minimize effort in ascending and descending. That includes keeping the rope away from cracks where it might jam and from places where it might become caught on rock outcrops or vegetation. A rope should not lie over rough or sharp-edged rock, where under tension it may be damaged from friction or cut by falling rock. The use of helmets while climbing, once a somewhat controversial issue (they may be uncomfortable or may limit vision or mobility), has become much more common, especially for technical climbs (e.g., up rock faces).

Constantly changing conditions of snow and ice are important hazards faced by mountaineers. Good mountaineers must have an intimate knowledge of snow conditions. They must be able to detect hidden crevasses, be aware of potential avalanches, and be able to safely traverse other tricky or dangerous concentrations of snow or ice. In snow-and-ice technique, the use of the ice ax is extremely important as an adjunct to high mountaineering. Consisting of a pick and an adze opposed at one end of a shaft and a spike at the other, it is used for cutting steps in ice, probing crevasses, obtaining direct aid on steep slopes, achieving balance as necessary, arresting a slide, and securing the rope (belaying). Crampons (sets of spikes that can be strapped on boot soles) are intended to preclude slipping and are useful on steep slopes of snow and ice and in steps that have been cut. By biting into the surface, they make progress possible where boots alone would not do. On many slopes, crampons also render unnecessary the cutting of steps. On extremely difficult snow and ice, ice pitons and carabiners are used. The pitons, when driven in, are allowed to freeze in place.

In climbing long snow slopes, a tedious task, it is necessary to strike a slow and rhythmic pace that can be sustained for a long time. It is desirable to make a start on the mountain early in the day when the snow is in hard condition. As in all phases of mountaineering, judgment is important when engaging in snow and ice climbing. The length of the climb, the nature of the weather, the effect of the sun’s heat on snow and ice, and the potential avalanche danger must all be considered.

The basic organization of the sport is the mountaineering or rock-climbing club. Every nation with mountaineers has its own clubs, among which the Alpine Club in Great Britain, founded in 1857, is perhaps the most venerable. The largest numbers of clubs are found in the Alpine countries, in the British Isles, and in North America. Major mountaineering clubs frequently participate financially in the sponsorship of major expeditions. Most of the clubs publish annual or periodic reports, journals, or bulletins.

Details

Mountaineering, mountain climbing, or alpinism, is a set of outdoor activities that involves ascending mountains. Mountaineering-related activities include traditional outdoor climbing, skiing, and traversing via ferratas, all of which have become sports in their own right. Indoor climbing, sport climbing, and bouldering are also considered variants of mountaineering by some, but they are part of a wider group of mountain sports.

Mountaineering activity, involving pursuits such as mountain climbing and trekking, has traditionally been dominated by men. Although women's participation in mountaineering has grown, the gender gap is still pronounced in terms of quantitative engagement in these forms of sport tourism. Yet in competitive mountaineering, the success rate of women is currently higher than that of men.

Unlike most sports, mountaineering lacks widely applied formal rules, regulations, and governance; mountaineers adhere to a large variety of techniques and philosophies when climbing mountains. Numerous local alpine clubs support mountaineers by hosting resources and social activities. A federation of alpine clubs, the International Climbing and Mountaineering Federation (UIAA), is the International Olympic Committee-recognized world organization for mountaineering and climbing. The consequences of mountaineering on the natural environment can be seen in terms of individual components of the environment (land relief, soil, vegetation, fauna, and landscape) and the location/zone of mountaineering activity (hiking, trekking, or climbing zone). Mountaineering impacts communities on economic, political, social and cultural levels, often leading to changes in people's worldviews influenced by globalization, specifically foreign cultures and lifestyles.

History:

Early mountaineering

Humans have been present in mountains since prehistory. The remains of Ötzi, who lived in the 4th millennium BC, were found in a glacier in the Ötztal Alps. However, the highest mountains were rarely visited early on, and were often associated with supernatural or religious concepts. Nonetheless, there are many documented examples of people climbing mountains prior to the formal development of the sport in the 19th century, although many of these stories are sometimes considered fictional or legendary.

The famous poet Petrarch describes his 26 April 1336 ascent of Mount Ventoux (1,912 m (6,273 ft)) in one of his epistolae familiares, claiming to be inspired by Philip V of Macedon's ascent of Mount Haemus.

For most of antiquity, climbing mountains was a practical or symbolic activity, usually undertaken for economic, political, or religious purposes. A commonly cited example is the 1492 ascent of Mont Aiguille (2,085 m (6,841 ft)) by Antoine de Ville, a French military officer and lord of Domjulien and Beaupré.

In the Andes, many ascents of extremely high peaks were made by the Incas and their subjects in the late 1400s and early 1500s. The highest peak they are known for certain to have climbed is Volcán Llullaillaco, at 6,739 metres (22,110 feet).

The Enlightenment and the Golden Age of Alpinism

The Age of Enlightenment and the Romantic era marked a change of attitudes towards high mountains. In 1757 Swiss scientist Horace-Bénédict de Saussure made the first of several unsuccessful attempts on Mont Blanc in France. He then offered a reward to anyone who could climb the mountain, which was claimed in 1786 by Jacques Balmat and Michel-Gabriel Paccard. The climb is usually considered an epochal event in the history of mountaineering, a symbolic mark of the birth of the sport.

By the early 19th century, many of the Alpine peaks had been reached, including the Grossglockner in 1800, the Ortler in 1804, the Jungfrau in 1811, the Finsteraarhorn in 1812, and the Breithorn in 1813. In 1808, Marie Paradis became the first woman to climb Mont Blanc, followed in 1838 by Henriette d'Angeville.

The beginning of mountaineering as a sport in the UK is generally dated to the ascent of the Wetterhorn in 1854 by English mountaineer Sir Alfred Wills, who made mountaineering fashionable in Britain. This inaugurated what became known as the Golden Age of Alpinism, with the first mountaineering club – the Alpine Club – being founded in 1857.

One of the most dramatic events was the spectacular first ascent of the Matterhorn in 1865 by a party led by English illustrator Edward Whymper, in which four of the party members fell to their deaths. By this point the sport of mountaineering had largely reached its modern form, with a large body of professional guides, equipment, and methodologies.

In the early years of the "golden age", scientific pursuits were intermixed with the sport, such as by the physicist John Tyndall. In the later years, it shifted to a more competitive orientation as pure sportsmen came to dominate the London-based Alpine Club and alpine mountaineering overall. The first president of the Alpine Club, John Ball, is considered to be the discoverer of the Dolomites, which for decades were the focus of climbers like Paul Grohmann and Angelo Dibona. At that time, the edelweiss also established itself as a symbol of alpinists and mountaineers.

Expansion around the world

In the 19th century, the focus of mountaineering turned towards mountains beyond the Alps, and by the turn of the 20th century, mountaineering had acquired a more international flavour.

In 1879–1880 the exploration of the highest Andes in South America began when English mountaineer Edward Whymper climbed Chimborazo (20,549 ft (6,263 m)) and explored the mountains of Ecuador. In 1897 Mount Saint Elias (18,008 ft (5,489 m)) on the Alaska–Yukon border was summited by the Duke of the Abruzzi and party. It took until the late 19th century for European explorers to penetrate Africa: Mount Kilimanjaro was climbed in 1889 by Austrian mountaineer Ludwig Purtscheller and German geologist Hans Meyer, and Mount Kenya in 1899 by Halford Mackinder.

The last frontier: The Himalayas

The last and greatest mountain range to be conquered was the Himalayas in South Asia. They had initially been surveyed by the British Empire for military and strategic reasons. In 1892 Sir William Martin Conway explored the Karakoram Himalayas, and climbed a peak of 23,000 ft (7,000 m). In 1895 Albert F. Mummery died while attempting Nanga Parbat, while in 1899 Douglas Freshfield took an expedition to the snowy regions of Sikkim.

In 1899, 1903, 1906, and 1908 American mountaineer Fanny Bullock Workman (one of the first professional female mountaineers) made ascents in the Himalayas, including one of the Nun Kun peaks (23,300 ft (7,100 m)). A number of Gurkha sepoys were trained as expert mountaineers by Charles Granville Bruce, and a good deal of exploration was accomplished by them.

In 1902 the Eckenstein–Crowley Expedition, led by English mountaineer Oscar Eckenstein and English occultist Aleister Crowley, was the first to attempt to scale K2. They reached 22,000 feet (6,700 m) before turning back because of weather and other mishaps. Undaunted, in 1905 Crowley led the first expedition to Kangchenjunga, the third-highest mountain in the world, in an attempt described as "misguided" and "lamentable".

Eckenstein was also a pioneer in developing new equipment and climbing methods. He started using shorter ice axes that could be wielded single-handed, designed modern crampons, and improved the nail patterns used on climbing boots.

By the end of the 1950s, all but two of the eight-thousanders had been climbed, starting with Annapurna in 1950 by Maurice Herzog and Louis Lachenal on the 1950 French Annapurna expedition. The highest of these peaks, Mount Everest, was climbed in 1953, after the British had made several attempts in the 1920s. The 1922 expedition reached 8,320 metres (27,300 ft) before being aborted on the third summit attempt, after an avalanche killed seven porters. The 1924 expedition set another height record but still failed to reach the summit with confirmation, as George Mallory and Andrew Irvine disappeared on the final attempt. The summit was finally reached on 29 May 1953 by Sir Edmund Hillary and Tenzing Norgay from the south side in Nepal.

Just a few months later, Hermann Buhl made the first ascent of Nanga Parbat (8,125 m) on the 1953 German–Austrian Nanga Parbat expedition, a siege-style expedition that culminated in Buhl climbing the last 1,300 metres alone, under the influence of drugs: Pervitin (based on the stimulant methamphetamine used by soldiers during World War II), padutin, and tea from coca leaves. K2 (8,611 m), the second-highest peak in the world, was first scaled in 1954 by Lino Lacedelli and Achille Compagnoni. The final eight-thousander to be climbed was Shishapangma (8,013 m), the lowest of all the 8,000-metre peaks, in 1964. Reinhold Messner, from the Dolomites (Italy), became the first to climb all fourteen eight-thousanders, completing the feat in 1986, and the first to do so without supplemental oxygen. In 1978 he and Peter Habeler had become the first men to climb Mount Everest without supplemental oxygen.

Today

Long the domain of the wealthy elite and their agents, mountaineering attracted mass interest with the emergence of the middle class in the 19th and 20th centuries. It became a popular pastime and hobby of many people. Some have come to criticize the sport as becoming too much of a tourist activity.

Organisation:

Activities

There are different activities associated with the sport.

* Traditional mountaineering involves identifying a specific mountain and route to climb, and executing the plan by whatever means appropriate. A mountain summit is almost always the goal. This activity is strongly associated with aid climbing and free climbing, as well as the use of ice axe and crampons on glaciers and similar terrain.
* Ski mountaineering involves skiing on mountainous terrain, usually in terrain much more rugged than typical cross-country skiing. Unlike traditional mountaineering, routes are less well-defined and summiting may not be the main goal.
* Peak bagging is the general activity of ascending peaks that are on a list of notable mountains, such as the 4000m peaks of the Alps.
* Enchainment is climbing more than one significant summit in one outing, usually on the same day.
* Climbing via ferratas involves traversing ladder-like paths on highly exposed terrain.
* Ice climbing involves ascending steep sections of bare ice with crampons and ice axes. Most mountaineers have to rely on ice-climbing skills to climb the higher peaks of the European Alps, the Himalayas, and the Canadian ranges.

Rules and governance

Mountaineering lacks formal rules; in theory, any person may climb a mountain and call themselves a mountaineer. In practice, the sport is defined by the safe and necessary use of technical skills in mountainous terrain: in particular, roped climbing and snow travel abilities. A variety of techniques have been developed to help people climb mountains that are widely applied among practitioners of the sport.

Despite its lack of defined rules and non-competitive nature, mountaineering has many of the trappings of an organized sport, with recognition by the International Olympic Committee and a prominent international sport federation, the UIAA, which counts numerous national alpine clubs as its members. There are also many notable mountaineering/alpine clubs unassociated with the UIAA, such as The Mountaineers and the French Federation of Mountaineering and Climbing.

The premier award in mountaineering is the Piolet d'Or. There are no "world championships" or other similar competitions for mountaineering.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Online

#1789 2023-05-31 00:02:21

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 46,703

Re: Miscellany

1792) Long-distance running

Summary

Long-distance running, in athletics (track and field), comprises footraces ranging from 3,000 metres through 10,000, 20,000, and 30,000 metres and up to the marathon, which is 42,195 metres (26 miles 385 yards). It includes cross-country races over similar distances. Olympic events are the 5,000- and 10,000-metre races, held on a track, and the marathon, contested on roads. Like the middle-distance races (800 and 1,500 metres in the Olympics), long-distance races are run at a strategic pace, but more often than not a final spurt, or kick, is needed by the winning racer.
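As a quick arithmetic check, the imperial marathon distance quoted above (26 miles 385 yards) does convert to 42,195 metres. A minimal sketch in Python, using the exact definitions 1 mile = 1,609.344 m and 1 yard = 0.9144 m:

```python
# Verify that 26 miles 385 yards equals the 42,195-metre marathon distance.
# Conversion factors are exact by definition.
MILE_M = 1609.344   # metres per mile
YARD_M = 0.9144     # metres per yard

marathon_m = 26 * MILE_M + 385 * YARD_M
print(round(marathon_m))  # 42195
```

The unrounded result is 42,194.988 m; the distance is conventionally stated as 42,195 m.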

Women rarely competed in races beyond 3,000 metres until the second half of the 20th century. The women’s 3,000-metre race and marathon were introduced to the Olympic Games in 1984. After 1992 the 3,000-metre race for women was discontinued, but the women’s 10,000- and 5,000-metre events were added in 1988 and 1996, respectively.

Details

Long-distance running, or endurance running, is a form of continuous running over distances of at least 3 km (1.9 mi). Physiologically, it is largely aerobic in nature and requires stamina as well as mental strength.

Within endurance running come two different types of respiration. The more prominent type, which runners experience more frequently, is aerobic respiration: it occurs when oxygen is present, and the body can use that oxygen to generate energy and power muscle activity. Anaerobic respiration, by contrast, occurs when the body is deprived of oxygen; this is common towards the final stretch of races, when there is a drive to speed up to a greater intensity. Endurance runners use both types of respiration quite often, but the two are very different from each other.

Among mammals, humans are well adapted for running significant distances, particularly so among primates. The capacity for endurance running is also found in migratory ungulates and a limited number of terrestrial carnivores, such as bears, dogs, wolves, and hyenas.

In modern human society, long-distance running has multiple purposes: people may engage in it for physical exercise, for recreation, as a means of travel, or for economic or cultural reasons. Long-distance running can also be used as a means to improve cardiovascular health.

Endurance running is often a component of physical military training. Long-distance running as a form of tradition or ceremony is known among the Hopi and Tarahumara people, among others.

In the sport of athletics, long-distance events are defined as races covering 3 km (1.9 mi) and above. The three most common types are track running, road running, and cross country running, all of which are defined by their terrain – all-weather tracks, roads, and natural terrain, respectively.

Physiology

Humans are considered among the best distance runners of all running animals: game animals are faster over short distances, but they have less endurance than humans. Unlike other primates, whose bodies are suited to walking on four legs or climbing trees, the human body evolved for upright walking and running around 2–3 million years ago. The human body can endure long-distance running through the following attributes:

* Bone and muscle structure: unlike quadruped mammals, which have their center of mass in front of the hind legs or limbs, in biped mammals including humans the center of mass lies right above the legs. This leads to different bone and muscular demands, especially in the legs and pelvis.

* Dissipation of metabolic heat: humans' ability to cool the body by sweating through the body surface provides many advantages over panting through the mouth or nose. These include a larger surface of evaporation and independence of the respiratory cycle.

One distinction between upright walking and running is energy consumption during locomotion. While walking, humans use about half the energy needed to run.

Factors:

Aerobic capacity

One's aerobic capacity, or VO2 max, is the ability to maximally take up and consume oxygen during exhaustive exercise. Long-distance runners typically perform at around 75–85% of peak aerobic capacity, while short-distance runners perform closer to 100% of peak.

Aerobic capacity depends on the transportation of large amounts of blood to and from the lungs to reach all tissues. This in turn is dependent on having a high cardiac output, sufficient levels of hemoglobin in blood and an optimal vascular system to distribute blood. A 20-fold increase of local blood flow within the skeletal muscle is necessary for endurance athletes, like marathon runners, to meet their muscles' oxygen demands at maximal exercise that are up to 50 times greater than at rest.

Elite long-distance runners often have larger hearts and decreased resting heart rates that enable them to achieve greater aerobic capacities. Increased dimensions of the heart enable an individual to achieve a greater stroke volume. A concomitant decrease in stroke volume occurs with the initial increase in heart rate at the onset of exercise. Despite an increase in cardiac dimensions, a marathoner's aerobic capacity is confined to this capped and ever-decreasing heart rate.

The amount of oxygen that blood can carry depends on blood volume, which increases during a race, and the amount of hemoglobin in the blood.

Other physiological factors affecting a marathon runner's aerobic capacity include pulmonary diffusion, mitochondria enzyme activity, and capillary density.

A long-distance runner's running economy is their steady-state requirement for oxygen at specific speeds; it helps explain differences in performance among runners with very similar aerobic capacities. It is often measured as the volume of oxygen consumed, in litres or millilitres, per kilogram of body weight per minute (L/kg/min or mL/kg/min). As of 2016 the physiological basis for running economy was uncertain, but it seemed to depend on cumulative years of running and to reach a cap that longer individual training sessions cannot overcome.
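Since running economy is essentially a unit conversion from whole-body oxygen uptake to a per-kilogram rate, the arithmetic can be sketched in a few lines of Python. The mass and VO2 figures below are hypothetical, chosen only to illustrate the mL/kg/min conversion, not measurements from the text:

```python
def running_economy(vo2_l_per_min: float, mass_kg: float) -> float:
    """Convert whole-body oxygen uptake (L/min) to mL per kg of body mass per minute."""
    return vo2_l_per_min * 1000 / mass_kg  # 1 L = 1000 mL

# A hypothetical 60 kg runner consuming 3.0 L of oxygen per minute at a given speed:
print(running_economy(3.0, 60))  # 50.0 (mL/kg/min)
```

At the same running speed, a lower mL/kg/min figure indicates better economy, since the runner covers the distance on less oxygen per kilogram.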

Lactate threshold

A long-distance runner's velocity at the lactate threshold is strongly correlated with their performance. The lactate threshold is the crossover point between predominantly aerobic energy usage and anaerobic energy usage, and it is considered a good indicator of the body's ability to efficiently process and transfer chemical energy into mechanical energy. For most runners, the aerobic zone does not begin until around 120 beats per minute. Lactate-threshold training involves tempo workouts that are meant to build strength and speed rather than improve the cardiovascular system's efficiency in absorbing and transporting oxygen. Running at the lactate threshold trains the body to become more efficient at clearing lactic acid and reusing it to fuel the muscles. Uncertainty remains about how the lactate threshold affects endurance performance.

Fuel

In order to sustain high-intensity running, a marathon runner must obtain sufficient glycogen stores. Glycogen is found in the skeletal muscles and liver. With low levels of glycogen stores at the onset of the marathon, premature depletion of these stores can reduce performance or even prevent completion of the race. ATP production via aerobic pathways can be further limited by glycogen depletion. Free fatty acids serve as a sparing mechanism for glycogen stores: artificially elevating these fatty acids, along with endurance training, demonstrates a marathon runner's ability to sustain higher intensities for longer periods of time. The prolonged sustenance of running intensity is attributed to a high turnover rate of fatty acids, which allows the runner to preserve glycogen stores later into the race.

Long-distance runners generally practice carbohydrate loading in their training and race preparation.

Thermoregulation and body fluid loss

The maintenance of core body temperature is crucial to a marathon runner's performance and health. An inability to reduce rising core body temperature can lead to hyperthermia. In order to reduce body heat, metabolically produced heat must be removed from the body via sweating, which in turn must be compensated for by rehydration. Fluid replacement is limited but can help keep the body's internal temperature cooler. It is physiologically challenging during exercise of this intensity because the stomach empties inefficiently. Partial fluid replacement can keep a marathon runner's body from overheating, but not enough to keep pace with the fluid lost through sweat evaporation. Environmental factors can especially complicate heat regulation.

Altitude

Since the late 1980s, Kenyans, Moroccans, and Ethiopians have dominated major international long-distance competitions. The high altitude of these countries is believed to help these runners achieve more success. High altitude, combined with endurance training, can lead to an increase in red blood cells, allowing increased oxygen delivery via the arteries. The majority of these successful East African runners come from three mountain districts along the Great Rift Valley. While altitude may be a contributing factor, a culture of hard work and teamwork, as well as an advanced institutional structure, also contributes to their success.

Impact on health

"… an evolutionary perspective indicates that we did not evolve to run long distances at fast speeds on a regular basis. As a result, it is unlikely there was a selection for the human body to cope with some of the extreme demands runners place on their bodies."

The impact of long-distance running on human health is generally positive. Various organs and systems in the human body are improved: bone mineral density is increased, and cholesterol is lowered.

However, beyond a certain point, negative consequences might occur. Older male runners (45–55 years) who run more than 40 miles (64 kilometres) per week face reduced testosterone levels, although these remain in the normal range. Running a marathon lowers testosterone levels by 50% in men and more than doubles cortisol levels for 24 hours. Low testosterone is thought to be a physiological adaptation to the sport: excess muscle may be shed through lower testosterone, yielding a more efficient runner. Veteran, lifelong endurance athletes have been found to have more heart scarring than control groups, but replication studies and larger studies are needed to firmly establish the link, which may or may not be causal. Some studies find that running more than 20 miles (32 kilometres) per week yields no lower risk of all-cause mortality than not running at all, although these studies conflict with large studies showing longer lifespans for any increase in exercise volume.

Elite-level long-distance running is associated with a 3 to 7 times higher risk of knee osteoarthritis later in life compared to non-runners.

The effectiveness of shoe inserts has been contested. Memory foam and similar shoe inserts may be comfortable, but they can make foot muscles weaker in the long term. Running shoes with special features, or lack thereof in the case of minimalist designs, do not prevent injury. Rather, comfortable shoes and standard running styles are safer.

In sport

Many sporting activities feature significant levels of running under prolonged periods of play, especially during ball sports like association football and rugby league. However, continuous endurance running is exclusively found in racing sports. Most of these are individual sports, although team and relay forms also exist.

The most prominent long-distance running sports are grouped within the sport of athletics, where running competitions are held on strictly defined courses, and the fastest runner to complete the distance wins. The foremost types are long-distance track running, road running and cross-country running. Other less popular variants such as fell running, trail running, mountain running, and tower running combine the challenge of distance with a significant incline or change of elevation as part of the course.

Multisport races frequently include endurance running. Triathlon, as defined by the International Triathlon Union, may feature running sections ranging from five kilometres (3.1 miles) to the marathon distance (42.195 kilometres, or 26 miles and 385 yards), depending on the race type. The related sport of duathlon is a combination of cycling and distance running. Previous versions of the modern pentathlon incorporated a three or four-kilometre (1.9–2.5 mi) run, but changes to the official rules in 2008 meant the running sections are now divided into three separate legs of one kilometre each (0.6 mi).
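The marathon distance quoted above is defined metrically, with the imperial figure derived from it. As a quick sanity check, a minimal Python sketch (the variable names are ours, not from the text) converts 42.195 kilometres into the stated 26 miles and 385 yards:

```python
MILE_KM = 1.609344     # one international mile in kilometres
YARDS_PER_MILE = 1760

marathon_km = 42.195
miles_total = marathon_km / MILE_KM          # about 26.219 miles
whole_miles = int(miles_total)               # 26
yards = round((miles_total - whole_miles) * YARDS_PER_MILE)  # 385

print(whole_miles, yards)  # 26 385
```

The yard figure is rounded because 42.195 km is itself a rounding of the exact 26 mi 385 yd distance (42.194988 km).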

Depending on the rules and terrain, navigation sports such as foot orienteering and rogaining may contain periods of endurance running within the competition. Variants of adventure racing may also combine navigational skills and endurance running in this manner.

Running competitions:

Track running

The history of long-distance track running events is tied to the track and field stadia where they are held. Oval circuits allow athletes to cover long distances in a confined area. Early tracks were usually on flattened earth or were simply marked areas of grass. Running tracks became refined during the 20th century: the oval tracks were standardised to 400 metres in circumference, and cinder tracks were replaced by synthetic all-weather tracks of asphalt and rubber from the mid-1960s onwards. It was not until the 1912 Stockholm Olympics that the standard long-distance track events of 5000 metres and 10,000 metres were introduced.

* The 3000 metres steeplechase is a race that involves not only running but also jumping over barriers and a water pit. While it can be considered a hurdling event, it is also widely regarded as a long-distance running event. The obstacles for the men are 914 millimetres (36.0 inches) high, and for the women 762 millimetres (30.0 inches).
* The world record for men is 7:53.63 by Saif Saaeed Shaheen of Qatar in Brussels, Belgium set on 3 September 2004.
* The world record for women is 8:44.32 by Beatrice Chepkoech of Kenya in Monaco, set on 20 July 2018.
* The 5000 metres is a premier event that requires tactics and superior aerobic conditioning. Training for such an event may consist of a total of 60–200 kilometers (37–124 miles) a week, although training regimens vary greatly. The 5000 is often a popular entry-level race for beginning runners.
* The world record for men is 12:35.36 (an average of 23.83 km/h) by Joshua Cheptegei of Uganda in Monaco, set on 14 August 2020.
* The world record for women is 14:06.62 (an average of 21.26 km/h) by Letesenbet Gidey of Ethiopia in Valencia, Spain, set on 7 October 2020.
* The 10,000 metres is the longest standard track event. Most of those running such races also compete in road races and cross country running events.
* The world record for men is 26:11.00 (22.915 km/h) by Joshua Cheptegei of Uganda in Valencia, Spain, set on 7 October 2020.
* The world record for women is 29:01.03 by Letesenbet Gidey of Ethiopia set on 8 June 2021.
* The one-hour run is an endurance race that is rarely contested, except in pursuit of world records.
* The 20,000 metres is also rarely contested; most world records in the 20,000 metres have been set during a one-hour run race.
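The average speeds quoted in the record list above follow directly from distance over time. A minimal Python sketch (the function name is ours, not from the text) reproduces the figures:

```python
def avg_kmh(distance_m: float, minutes: int, seconds: float) -> float:
    """Average speed in km/h for a timed track performance."""
    total_seconds = minutes * 60 + seconds
    return (distance_m / 1000) / (total_seconds / 3600)

# Men's 5000 m world record: 12:35.36
print(round(avg_kmh(5000, 12, 35.36), 2))   # 23.83 km/h
# Men's 10,000 m world record: 26:11.00
print(round(avg_kmh(10000, 26, 11.00), 3))  # 22.915 km/h
```

The same function applied to the women's 5000 m record (14:06.62) gives the quoted 21.26 km/h.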

Road running

Long-distance road running competitions are mainly conducted on courses of paved or tarmac roads, although major events often finish on the track of a main stadium. In addition to being a common recreational sport, the elite level of the sport – particularly marathon races – is one of the most popular aspects of athletics. Road racing events can be of virtually any distance, but the most common and well-known are the marathon, half marathon, and 10 km run.

The sport of road running finds its roots in the activities of footmen: male servants who ran alongside the carriages of aristocrats around the 18th century, and who also ran errands over distances for their masters. Foot racing competitions evolved from wagers between aristocrats, who pitted their footman against that of another aristocrat in order to determine a winner. The sport became professionalised as footmen were hired specifically for their athletic ability and began to devote their lives to training for gambling events. The amateur sports movement in the late 19th century marginalised competitions based on the professional, gambling model. The 1896 Summer Olympics saw the birth of the modern marathon, and the event led to the growth of road running competitions through annual public events such as the Boston Marathon (first held in 1897) and the Lake Biwa and Fukuoka Marathons, which were established in the 1940s. The 1970s running boom in the United States made road running a common pastime and also increased its popularity at the elite level.

The marathon is the only road running event featured at the World Athletics Championships and the Summer Olympics, although there is also the World Athletics Half Marathon Championships, held every two years. The marathon is also the only road running event featured at the World Para Athletics Championships and the Summer Paralympics. The World Marathon Majors series includes the six most prestigious marathon competitions at the elite level – the Berlin, Boston, Chicago, London, Tokyo, and New York City marathons. The Tokyo Marathon was most recently added to the World Marathon Majors in 2012.

* Ekiden contests – which originated in Japan and remain common there – are a relay race variation on the marathon, in contrast to the typically individual sport of road running.

Cross country running

Cross-country running is the most naturalistic form of long-distance running in athletics as competitions take place on open-air courses over surfaces such as grass, woodland trails, earth, or mountains. In contrast to the relatively flat courses in track and road races, cross country usually incorporates obstacles such as muddy sections, logs, and mounds of earth. As a result of these factors, weather can play an integral role in racing conditions. Cross country is both an individual and team sport, as runners are judged on an individual basis and a points-scoring method is used for teams. Competitions are typically races of 4 km (2.5 mi) or more which are usually held in autumn and winter. Cross country's most successful athletes often compete in long-distance track and road events as well.

The history of the sport is linked with the game of paper chase, or hare and hounds, where a group of runners would cover long distances to chase a leading runner, who left a trail of paper to follow. The Crick Run in England in 1838 was the first recorded instance of an organised cross-country competition. The sport gained popularity in British, then American schools in the 19th century and culminated in the creation of the first International Cross Country Championships in 1903. The annual World Athletics Cross Country Championships was inaugurated in 1973 and this remains the highest level of competition for the sport. A number of continental cross country competitions are held, with championships taking place in Africa, Asia, Europe, Oceania, North America and South America. The sport has retained its status at the scholastic level, particularly in the United Kingdom and the United States. At the professional level, the foremost competitions come under the banner of the World Athletics Cross Country Tour.

While cross country competitions are no longer held at the Olympics, having featured in the athletics programme from 1912 to 1924, the discipline has been present as one of the events within the modern pentathlon competition since the 1912 Summer Olympics.

Fell running, trail running, and mountain running can all be considered variations on the traditional cross country which incorporate significant uphill and/or downhill sections as an additional challenge to the course.

Adventure running

The term adventure running is loosely defined and can be used to describe any form of long-distance running in a natural setting, regardless of the running surface. It may include river crossing, scrambling, snow, extremely high or low temperatures, and high altitudes. It has both competitive and non-competitive forms, the latter being for individual recreation or social experience. As a result, courses are often set in scenic locations and feature obstacles designed to give participants a sense of achievement. It bears similarities to running sections of adventure racing.

Ultra-long distance: extended events and achievements

A number of events, records, and achievements exist for long-distance running, outside the context of track and field sports events. These include multiday races, ultramarathons, and long-distance races in extreme conditions or measuring hundreds or thousands of miles.

Beyond these, records and stand-alone achievements, rather than regular events, exist for individuals who have achieved running goals of a unique nature, such as running across or around continents or running around the world.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.


#1790 2023-06-01 01:19:24

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 46,703

Re: Miscellany

1793) Horse riding

Gist

Horse riding is the sport or activity of riding a horse.

Horsemanship is the art of riding, handling, and training horses. Good horsemanship requires that a rider control the animal's direction, gait, and speed with maximum effectiveness and minimum efforts.

Summary

Equestrianism (from Latin equester, equestr-, equus, 'horseman', 'horse'), commonly known as horse riding (Commonwealth English) or horseback riding (American English), includes the disciplines of riding, driving, and vaulting. This broad description includes the use of horses for practical working purposes, transportation, recreational activities, artistic or cultural exercises, and competitive sport.

Overview of equestrian activities

Horses are trained and ridden for practical working purposes, such as in police work or for controlling herd animals on a ranch. They are also used in competitive sports including dressage, endurance riding, eventing, reining, show jumping, tent pegging, vaulting, polo, horse racing, driving, and rodeo (see additional equestrian sports listed later in this article for more examples). Some popular forms of competition are grouped together at horse shows where horses perform in a wide variety of disciplines. Horses (and other equids such as mules) are used for non-competitive recreational riding, such as fox hunting, trail riding, or hacking. There is public access to horse trails in almost every part of the world; many parks, ranches, and public stables offer both guided and independent riding. Horses are also used for therapeutic purposes both in specialized para-equestrian competition as well as non-competitive riding to improve human health and emotional development.

Horses are also driven in harness racing, at horse shows, and in other types of exhibition such as historical reenactment or ceremony, often pulling carriages. In some parts of the world, they are still used for practical purposes such as farming.

Horses continue to be used in public service, in traditional ceremonies (parades, funerals), police and volunteer mounted patrols and for mounted search and rescue.

Riding halls enable training of horse and rider in all weathers as well as indoor competition riding.

Details

Horsemanship

Horsemanship is the art of riding, handling, and training horses. Good horsemanship requires that a rider control the animal’s direction, gait, and speed with maximum effectiveness and minimum efforts.

Horsemanship evolved, of necessity, as the art of riding with maximum discernment and a minimum of interference with the horse. Until the 20th century riding was a monopoly of the cavalry, of cowboys and others whose work required riding on horseback, and of the wealthy, who rode for sport. Although hunting and polo tend to remain sports of the wealthy and the role of the horse in battle has ended, special value is now placed on horse shows of a high standard, in which the most popular event is undoubtedly show jumping. Horsemanship has remained a valued social asset and symbol of prestige, but the opening of many new riding clubs and stables has made riding and horsemanship accessible to a much larger segment of the population.

History:

Origins and early history

From the 2nd millennium BCE, and probably even earlier, the horse was employed as a riding animal by fierce nomadic peoples of central Asia. One of these peoples, the Scythians, were accomplished horsemen and used saddles. It is also likely that they realized the importance of a firm seat and were the first to devise a form of stirrup. A saddled horse with straps hanging at the side and looped at the lower end is portrayed on a vase of the 4th century BCE found at Chertomlyk in Ukraine. This contrivance may have been used for mounting only, however, because of the danger of being unable to free the foot quickly in dismounting. The Greek historian Strabo said that the indocility of the Scythians’ wild horses made gelding necessary, a practice until then unknown in the ancient world. The Sarmatians, superb horsemen who superseded the Scythians, rode bareback, controlling their horses with knee pressure and distribution of the rider’s weight.

Among the earliest peoples to fight and hunt on horseback were the Hittites, the Assyrians, and the Babylonians; at the same time (about 1500 BCE) the Hyksos, or Shepherd Kings, introduced horses into Egypt and rode them in all their wars. In the 8th and 7th centuries BCE the Scythians brought horses to Greece, where the art of riding developed rapidly, at first only for pleasure. A frieze from the Parthenon in Athens shows Greeks riding bareback. Philip II of Macedon had a body of cavalry in his army, and the army of his son Alexander had separate, organized horse units. In the 4th century BCE another Greek historian, Xenophon, wrote his treatise Peri hippikēs (On Horsemanship), giving excellent advice on horsemanship. Many of his principles are still perfectly valid. He advocated the use of the mildest possible bits and disapproved of the use of force in training and in riding. The Roman mounted troops were normally barbarian archers who rode without stirrups and apparently without reins, leaving the hands free to use the bow and arrow.

As a general rule, almost every item of riding equipment used today originated among the horsemen of the Eurasian steppes and was adopted by the people of the lands they overran to the east, the south, and later the west.

Horseshoes of various types were used by migratory Eurasian tribes about the 2nd century BCE, but the nailed iron horseshoe as used today first appeared in Europe about the 5th century CE, introduced by invaders from the East. One, complete with nails, was found in the tomb of the Frankish king Childeric I at Tournai, Belgium.

Attila is said to have brought the stirrup to Europe. Round or triangular iron stirrups were used by the Avars in the 6th century CE, and metal stirrups were used by the Byzantine cavalry. They were in use in China and Japan by about 600 CE.

The principle of controlling a horse by exerting pressure on its mouth through a bit (a metal contrivance inserted in the mouth of the horse) and reins (straps attached to the bit held by the rider) was practiced from the earliest times, and bits made of bone and antlers have been found dating from before 1000 BCE. The flexible mouthpiece with two links and its variations have been in use down the centuries, leading directly to the jointed snaffle bit of the present day.

Early, stumpy prickspurs have been found in Bohemia on 4th-century-BCE Celtic sites.

Military horsemanship

The importance of cavalry increased in the early Middle Ages, and, in the thousand years that followed, mounted warriors became predominant in battle. Armour steadily became bulkier and heavier, forcing the breeding of ever more massive horses, until the combination rendered manoeuvrability nearly impossible.

Efforts to overcome this were made at a Naples riding academy in the early 16th century, when Federico Grisone and Giovanni Battista Pignatelli tried to combine Classical Greek principles with the requirements of medieval mounted combat. After Xenophon—except for a 14th-century treatise by Ibn Hudhayl, an Arab of Granada, Spain, and a 15th-century book on knightly combat by Edward, king of Portugal—apparently little notable literature on riding was produced until Grisone published his Gli ordini di cavalcare (“The Orders of Riding”) in 1550.

The development of firearms led to the shedding of armour, making it possible for some further modifications in methods and training under followers of the school of Pignatelli and Grisone, such as William Cavendish, duke of Newcastle. In 1733 François Robichon de la Guérinière published École de cavalerie (“School of Cavalry”), in which he explained how a horse can be trained without being forced into submission, the fundamental precept of modern dressage. Dressage is the methodical training of a horse for any of a wide range of purposes, excluding only racing and cross-country riding.

Meanwhile, the Spanish Imperial Riding School in Vienna and the French cavalry centre at Saumur aimed at perfecting the combined performance of horse and rider. Their technique and academic seat, a formal riding position or style in which the rider sits erectly, deep in the middle of the saddle, exerted considerable influence in Europe and America during the 18th and 19th centuries and are still used in modern dressage. The head riding master at Saumur, Comte Antoine d’Aure, however, promoted a bold, relaxed, and more natural, if less “correct,” style of riding across country, in disagreement with his 19th-century contemporary François Baucher, a horseman of great ability with formal haute école (“high school”) ideas. Classical exercises in the manège, or school for riding, had to make way for simplified and more rational riding in war and the hunt. During this period hunting riders jumped obstacles with their feet forward, their torso back on the horse’s haunches, and the horse’s head held up. The horse often leaped in terror.

At the turn of the 20th century, Capt. Federico Caprilli, an Italian cavalry instructor, made a thorough study of the psychology and mechanics of locomotion of the horse. He completely revolutionized the established system by innovating the forward seat, a position and style of riding in which the rider’s weight is centred forward in the saddle, over the horse’s withers. Caprilli wrote very little, but his pupil, Piero Santini, popularized his master’s fundamental principles. Except in dressage and showing, the forward seat is the one now most frequently used, especially for jumping.

The art of horsemanship

The basic principle of horsemanship is to obtain results in a humane way by a combination of balance, seat, hands, and legs.

Fundamentals

The horse’s natural centre of gravity shifts with its every movement and change of gait. Considering that a mounted horse also carries a comparatively unstable burden of approximately one-fifth of its own weight, it is up to the rider to conform to the movements of the horse as much as possible.

Before one mounts, the saddle is checked to be sure that it fits both the horse and its rider. Experienced riders position themselves in the saddle in such a way as to be able to stay on the horse and control it. The seat adopted depends on the particular task at hand. A secure seat is essential, giving riders complete independence and freedom to apply effectively the aids at their disposal. Good riders do not overrule the horse, but, firmly and without inflicting pain, they persuade it to submit to their wishes.

The horse’s movements

The natural gaits of the horse are the walk, the trot, the canter or slow gallop, and the gallop, although in dressage the canter and gallop are not usually differentiated. A riding horse is trained in each gait and in the change from one to another.

During the walk and the gallop the horse’s head moves down and forward, then up and back (only at the trot is it still); riders follow these movements with their hands.

Walk

The walk is a slow, four-beat, rhythmic pace of distinct successive hoof beats in an order such as near (left) hind, near fore, off (right) hind, off fore. Alternately two or three feet may be touching the ground simultaneously. It may be a free, or ordinary, walk in which relaxed extended action allows the horse freedom of its head and neck, but contact with the mouth is maintained; or it may be a collected walk, a short-striding gait full of impulsion, or vigour; or it may be an extended walk of long, unhurried strides.

Trot

The trot is a two-beat gait, light and balanced, the fore and hind diagonal pairs of legs following each other almost simultaneously—near fore, off hind, off fore, and near hind. Riders can either sit in the saddle and be bumped as the horse springs from one diagonal to the other, or they can rise to the trot, or post, by rising out of the saddle slightly and allowing more of their weight to bear on the stirrups when one or the other of the diagonal pairs of legs leaves the ground. Posting reduces the impact of the trot on both horse and rider.

Canter

As the horse moves faster, its gait changes into the canter, or ordinary gallop, in which the rider does not rise or bump. It is a three-beat gait, graceful and elegant, characterized by one or the other of the forelegs and both hindlegs leading—near hind, off hind, and near fore practically together, then off fore, followed briefly by complete suspension. Cantering can be on the near lead or the off, depending on which is the last foot to leave the ground. The rider’s body is more forward than at the trot, the weight taken by the stirrups.

Gallop

An accelerated canter becomes the gallop, in which the rider’s weight is brought sharply forward as the horse reaches speeds up to 30 miles (48 kilometres) an hour. The horse’s movements are the same as in the canter. To some authorities, the gallop is a four-beat gait, especially in an extended run.

Other gaits

There are a number of disconnected and intermediate gaits, some done only by horses bred to perform them. One is the rack, a four-beat gait, with each beat evenly spaced in perfect cadence and rapid succession. The legs on either side move together, the hindleg striking the ground slightly before the foreleg. The single foot is similar to the rack. In the pace, the legs on either side move and strike the ground together in a two-beat gait. The fox trot and the amble are four-beat gaits, the latter smoother and gliding.

Training and equipment

Depending on the abilities and inclinations of horse and trainer, training may include such elements as collection (controlled, precise, elevated movement) and extension (smooth, swift, reaching movement—the opposite of collection) at all paces; turns on the forehand (that part of the horse that is in front of the rider) and hindquarters; changing lead leg at the canter; change of speed; reining back, or moving backward; lateral movements; and finally the refinements of dressage, jumping, and cross-country riding.

Communication with the horse is rendered possible by the use of the bit and the aids. The rider signals intentions to the horse by a combination of recognized movements of hands and legs, using several articles of equipment. By repetition the horse remembers this language, understands what is required, and obeys.

Bits

There are several types of bits, including the snaffle, the double bridle, and the Pelham.

The simplest is the snaffle, also called the bridoon. It consists of a single straight or jointed mouthpiece with a ring at each end for the reins. The snaffle is used for racing and frequently for cross-country riding. It is appropriate for preliminary schooling.

The double bridle is used for advanced schooling. It consists of a jointed snaffle and a straight bit placed together in the mouth, first the snaffle, then the bit, both functioning independently and attached to separate reins. The mouthpiece of the bit can have a port or indentation in its centre to give more control. The slightest pull on the bit rein exerts pressure on the mouth.

The Pelham is a snaffle with a straight mouthpiece; cheekpieces with rings at the lower ends for curb action; and a curb chain, with which pressure may be applied to the lower outside of the mouth. The Pelham gives control with only slight discomfort and is popular for polo.

Bridles

The bridle is a set of straps that makes the bit secure in the animal’s mouth and thus ensures human control by means of the reins. The upper portion of the bridle consists of the headpiece passing behind the ears and joining the headband over the forehead; the cheek straps run down the sides of the head to the bit, to which they are fastened; in the blind type of driving bridle the blinkers, rectangular or round leather flaps that prevent the animal from seeing anything except what lies in front, are attached to the cheek straps; the noseband passes around the front of the nose just above the nostrils; and the throatlatch extends from the top of the cheek straps underneath the head.

Aids

The principal features of a horse’s mentality are acute powers of observation, innate timidity, and a good memory. To a certain extent the horse can also understand. Schooling is based on these faculties, and the rider’s aids are applied accordingly. The natural aids are the voice, the hands through the reins and the bit, the legs and heels, and the movement of the rider’s weight. The whip, the spur, and devices such as martingales, special nosebands, and reins are artificial aids, so termed in theory, as the horse does not discriminate between natural and artificial.

Horses are easily startled. A good horseman will approach them quietly, speaking to them and patting them to give them confidence. Silence on the part of the rider can even cause disquiet to some horses, but they should not be shouted at. The rider’s voice and its tone make a useful aid in teaching a horse in its early schooling to walk, trot, canter, and halt.

To keep the horse alert at all times, the rider’s hands keep a light, continual contact with its mouth, even at the halt. The hands are employed together with the legs to maintain contact, to urge the horse forward, to turn, to rein back, and generally to control the forehand. The horse is said to be collected and light in hand when the action of the bit can cause it to flex, or relax, its jaw with its head bent at the poll, or top.

When pressed simultaneously against the flanks, immediately after the hands ease the reins, the legs induce the forward movement of the horse. They are of the greatest importance in creating and maintaining impulsion, in controlling the hindquarters, and for lateral movement.

Riders achieve unity of balance by means of the weight aid, that is, by moving the body in harmony with the movements of the horse, forward, backward, or to the side. Thus, in cantering to the left, the rider leans to the left; or when about to descend a steep slope, the rider stays erect while the horse is feeling for the edge with its forefeet, but as soon as the descent starts the rider leans forward, leaving the hindquarters free to act as a brake and to prevent scraping the back of the horse’s rear legs on rough ground. Meanwhile the hands keep the horse headed straight to maintain its balance.

The whip is used chiefly to reinforce the leg aid for control, to command attention, and to demand obedience, but it can be used as a punishment in cases of deliberate rebellion. A horse may show resistance by gnashing its teeth and swishing its tail. Striking should always be on the quarters, behind the saddle girth, and must be immediate since a horse can associate only nearly simultaneous events. This applies equally to rewards. A friendly tone of voice or a pat on the neck are types of reward.

Although normally the leg or the heel, or both, should be sufficient, spurs, which should always be blunt, assist the legs in directing the precision movements of advanced schooling. Their use must be correctly timed.

Martingales are of three types: running, standing, or Irish. The running and standing martingales are attached to the saddle straps at one end and the bit reins or bridle at the other. The Irish martingale, a short strap below the horse’s chin through which the reins pass, is used for racing and stops the horse from jerking the reins over its head. As the horse cannot see below a line from the eye to the nostril, it should not be allowed to toss its head back, particularly near an obstacle, as it is liable to leap blindly. A martingale should not be necessary with a well-schooled horse.

The noseband, a strap of the bridle that encircles the horse’s nose, may be either a cavesson, with a headpiece and rings for attaching a long training rein, or a noseband with a headstrap, only necessary if a standing martingale is used. A variety of other nosebands are intended for horses that pull, or bear, on the reins unnecessarily.

Seats

The saddle, the length of the stirrup, and the rider’s seat, or style of riding, should suit the purpose for which the horse is ridden. The first use of the stirrup is to enable the rider to get on the horse, normally from the near (left) side. With the raised foot in the stirrup the rider should avoid digging the horse in the flank on springing up and should gradually slide into position without landing on the horse’s kidneys with a bump. With an excitable horse, the rider may wait, resting on knees and stirrups, until the horse moves forward.

Forward seat

The forward seat, favoured for show jumping, hunting, and cross-country riding, is generally considered to conform with the natural action of the horse. The rider sits near the middle of the saddle, his torso a trifle forward, even at the halt. The saddle is shaped with the flaps forward, sometimes with knee rolls for added support in jumping. The length of the stirrup leather is such that, with continual lower thigh and knee grip, the arch of the foot can press on the tread of the iron with the heel well down. A wide and heavy stirrup iron allows easy release of the foot in case of accidents. The line along the forearm from the elbow to the hands and along the reins to the bit is held straight. As the horse moves forward, so do the rider’s hands, to suit the horse’s comfort.

Dressage seat

In the show and dressage seat the rider sinks deep into the saddle, in a supple, relaxed but erect position above it. The saddle flaps are practically straight so as to show as much expanse of the horse’s front as possible. The stirrup leather is of sufficient length for the rider’s knee to bend at an angle of about 140 degrees and for the calf to make light contact with the horse’s flank, the heel well down, and the toes or the ball of the foot resting on the tread of the stirrup iron. The rider keeps continual, light contact with the horse’s mouth; and the intention is to convey an impression of graceful, collected action. In the past this type of saddle, with its straight-cut flaps, was used for hunting and polo, but the forward seat has become more popular for these activities.

Stock saddle

The stock saddle seat is appropriate for ranchers but is also used at rodeos and by many pleasure and trail riders. The saddle, which can weigh as much as 40 pounds (18 kilograms), is designed for rounding up cattle and is distinguished by a high pommel horn for tying a lariat. The rider employs long stirrups and a severe bit that he seldom uses because he rides with a loose rein, guiding his horse chiefly by means of shifting the weight of his body in the saddle. The gaucho roughriders of the Argentine Pampa have adopted a similar seat, using a saddle with a high pommel and cantle. Australian stockmen have used a saddle that has a short flap and is equipped with knee and thigh rolls, or props, which give an extremely secure seat.

Side saddle

Though now not so fashionable, the elegant and classical side-saddle seat was formerly favoured and considered correct by many horsewomen. On the near side the saddle has an upright pommel on which the rider’s right leg rests. There is a lower, or leaping, pommel, against which the left leg can push upward when grip is required, and a single stirrup. Although the rider sits with both legs on one side of the saddle, forward action to suit the movement of the horse is feasible in cross-country riding.

Bareback

Bareback means riding without saddle or blanket, the rider sitting in the hollow of the horse’s back and staying there chiefly by balance. It is an uncomfortable seat but less so at the walk and the slow canter. When suffering from saddle galls horses are sometimes ridden bareback for exercise.

Dressage

Originally intended for military use, dressage training was begun early in the 16th century. The international rules for dressage are based on the traditions and practice of the best riding schools in the world. The following is an extract from these rules of the Fédération Équestre Internationale:

Object and general principles.

The object of dressage is the harmonious development of the physique and the ability of the horse. As a result, it makes the horse calm, supple, and keen, thus achieving perfect understanding with its rider. These qualities are revealed by the freedom and regularity of the paces; the harmony, lightness, and ease of the movements; the lightening of the forehand, and the engagement of the hindquarters; the horse remaining absolutely straight in any movement along a straight line, and bending accordingly when moving on curved lines.

The horse thus gives the impression of doing of his own accord what is required of him. Confident and attentive, he submits generously to the control of his rider.

Campagne is the term used for elementary but thorough training, including work on the longeing rein. This long rein, also used for training young or difficult horses, is attached to a headpiece with a noseband called a cavesson. The horse is bitted and saddled and is schooled in circles at the end of the rein. It is an accessory to training from the saddle, which is always best. Basic to campagne is collection: teaching the horse to arch its neck, to shift its weight backward onto its hindquarters, and to move in a showy, animated manner. Other elements of campagne include riding in a straight line, turns, and lateral movements.

Haute école is the most elaborate and specialized form of dressage, reaching its ultimate development at the Vienna school with its traditional white Lipizzaner horses. Some characteristic haute école airs, or movements, are the pirouettes, which are turns on the haunches at the walk and the canter; the piaffe, in which the horse trots without moving forward, backward, or sideways, the impulse being upward; the passage, a high-stepping trot in which the impulse is more upward than forward; the levade, in which the horse stands balanced on its hind legs, its forelegs drawn in; the courvet, which is a jump forward in the levade position; and the croupade, ballotade, and capriole, a variety of spectacular airs in which the horse jumps and lands again in the same spot.

All of these movements are based, perhaps remotely in some instances, on those that the horse performs naturally.

Jumping

The most sensitive parts of the horse when ridden are the mouth and the loins, particularly in jumping. The rider’s hands control the forehand while the legs act on the hindquarters. As speed is increased the seat is raised slightly from the saddle, with the back straight and the trunk and hands forward, the lower thighs and the knees taking the weight of the body and gripping the saddle, leaving the legs from the knees down free for impulsion. Contact with the mouth is maintained evenly and continually, the rider conforming with every movement as the horse’s head goes forward after takeoff and as it is retracted on landing, the hands always moving in line with the horse’s shoulder. In order to give complete freedom to the hindquarters and to the hocks, the rider does not sit back in the saddle until at least two strides after landing.

The horse is a natural jumper, but, if ridden, schooling becomes necessary. Training is started in an enclosed level area by walking the horse, preferably in a snaffle, over a number of bars or poles laid flat on the ground. When the horse has become accustomed to this, its speed is increased. As the horse progresses, the obstacles are systematically raised, varied, and spaced irregularly. The object is to teach the horse: (1) to keep its head down; (2) to approach an obstacle at a quiet, collected, yet energetic pace; (3) to decide how and where to take off; and (4) after landing to proceed quietly to the next obstacle. The horse should be confident over every jump before it is raised and should be familiarized with a variety of obstacles.

Only thoroughly trained riders and horses compete. Very strenuous effort is required of the horse, as well as of the rider, who must not by any action give the horse the impression that something out of the ordinary is impending. If possible the horse is warmed up by at least a half-hour’s walking and trotting before entering the ring. The horse is guided toward the exact centre of every obstacle, the rider looking straight ahead and not looking around after takeoff for any reason, as that might unbalance the horse. The broader the obstacle, the greater the speed of approach. Although a few experienced riders can adjust the horse’s stride for a correct takeoff, this should not be necessary with a well-schooled horse. The rider always conforms with every action of the horse, the only assistance necessary being that of direction and of increasing or decreasing speed according to the obstacle.

Riding and shows

Racing on horseback probably originated soon after humans first mastered the horse. By the 7th century BCE, organized mounted games were held at Olympia. The Romans held race meetings, and in medieval Europe tournaments, jousting, and horse fairs were frequent and popular events. Played in Persia for centuries, polo was brought to England from India about 1870. In North America, Western ranch riding produced the rodeo.

Horse associations and pony clubs are today the mainstay of equine sport. They have improved the standards of riding instruction and the competitive activities of dressage, hunter trials, and show jumping. The latter became an important event from 1869, when what was probably the first “competition for leaping horses” was included in the program of an Agricultural Hall Society horse show in London. National organizations such as the British Horse Society, the American Horse Shows Association (AHSA), the Federazione Italiana Sports Equestri, the National Equestrian Federation of Ireland, the Fédération Française des Sports Équestres, and similar groups from about 50 other nations are affiliated with the Fédération Équestre Internationale (FEI), founded in 1921 with headquarters at Brussels, the official international governing body and the authority on the requirements of equitation.

Horse shows

Horse shows are a popular institution that evolved from the horse sections of agricultural fairs. Originally they were informal displays intended to attract buyers and encourage the improvement of every type of horse. Now they are organized and conducted by committees of experts and by associations that enforce uniform rules, appoint judges, settle disputes, maintain records, and disseminate information. Riding contests included in the program have become increasingly important.

Under the auspices of the Royal Dublin Society, an international horse show was first held at Dublin in 1864. It is an annual exhibition of every type of saddle horse, as well as broodmares and ponies. International jumping contests similar to Olympic competition, events for children, and auction sales are held during this five-day show.

The National Horse Show at New York, first held in 1883, is another great yearly event. Held at Madison Square Garden, it lasts several days and includes about 10 different events. Among the most important are the international jumping under FEI rules and the open jumping under AHSA rules. Other shows are held in many sections of the United States.

Horse and pony shows are held regularly in the United Kingdom, the most important being the Richmond Royal Horse Show, the Horse of the Year Show, and the Royal International Horse Show. The latter, an annual event first held in 1907, has flourished under royal patronage and includes international jumping, special items such as the visit of the Spanish Riding School with its Lipizzaners in 1953, and a Supreme Riding Horse competition.

In Canada the Royal Agricultural Winter Fair at Toronto, opened in 1922 and known in Canada as the “Royal,” is a major event, and in Australia the Royal Agricultural Society organizes horse shows annually in every state. Other events include the shows at Verona and at the Piazza di Siena in Rome; frequent horse shows in Belgium, France, Germany, and the Netherlands; the winter show in July in Buenos Aires; and the Exhibition of Economic Achievement in Moscow.

Olympic equestrian competition

The FEI organizes and controls the equestrian events at the Olympic Games. Included in each Olympics since the Games at Stockholm in 1912 (equestrian events were also held in 1900), these events are the occasion for keen rivalry and evoke high standards of horsemanship. They comprise a dressage grand prix, a three-day event, and a jumping grand prix, all open to team and individual competition.

The Grand Prix de Dressage involves performance of the walk, trot, canter, and collected paces and several conventional dressage figures and movements, as well as the correct rider’s position. Scoring on each item is from a maximum of 10 for excellent down to 1 for very bad.

The three-day event consists of tests in dressage, endurance or cross-country riding, and show jumping. Dressage is on the first day. On the second day there is an endurance test over a course 25 to 35 kilometres (16 to 22 miles) in length, covering swamp roads, tracks, steeplechase obstacles, and cross-country sections. Jumping tests, less strenuous than the Prix des Nations jumping event, are held on the third day.

The Prix des Nations jumping event is a competition involving 13 or 14 obstacles, heights varying between 1.30 and 1.60 metres (51 and 63 inches), and a water jump 4 metres (13 feet) across, over a course with 60 metres (200 feet) between obstacles. Penalties are scored for disobedience, knocking down or touching an obstacle, and for a fall. The rider with the lowest penalty score wins.
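The "lowest penalty score wins" rule can be sketched in a few lines of Python. The specific penalty values below are illustrative assumptions for the sketch, not the official FEI penalty table:

```python
# Sketch of Prix des Nations-style penalty scoring.
# Penalty values are illustrative assumptions, not the official FEI table.
PENALTIES = {
    "knockdown": 4,      # knocking down or touching an obstacle
    "disobedience": 3,   # refusal, run-out, etc.
    "fall": 8,           # fall of horse or rider
}

def round_score(faults):
    """Total penalties for one round, given a list of fault names."""
    return sum(PENALTIES[f] for f in faults)

def winner(rounds):
    """The rider with the lowest penalty score wins."""
    return min(rounds, key=lambda rider: round_score(rounds[rider]))

rounds = {
    "rider_a": ["knockdown", "disobedience"],  # 7 penalties
    "rider_b": ["knockdown"],                  # 4 penalties
}
print(winner(rounds))  # rider_b
```

In real competition, ties on penalties are typically resolved by a jump-off or by time, which this sketch omits.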

In addition to these competitions there is a riding section of the modern pentathlon, also conducted under FEI rules. Competitors must clear, riding a strange horse chosen by lot, 20 obstacles over a course 1,000 metres (3,000 feet) in length. Other international competitions began in the 1950s under the supervision of the FEI.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.


#1791 2023-06-02 00:08:17

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 46,703

Re: Miscellany

1794) Swimming (Sport)

Gist

Swimming is an individual or team racing sport that requires the use of one's entire body to move through water. The sport takes place in pools or open water (e.g., in a sea or lake).

Summary

Swimming, in recreation and sports, is the propulsion of the body through water by combined arm and leg motions and the natural flotation of the body. Swimming as an exercise is popular as an all-around body developer and is particularly useful in therapy and as exercise for physically handicapped persons. It is also taught for lifesaving purposes. For activities that involve swimming, see also diving, lifesaving, surfing, synchronized swimming, underwater diving, and water polo.

History

Archaeological and other evidence shows swimming to have been practiced as early as 2500 BCE in Egypt and thereafter in Assyrian, Greek, and Roman civilizations. In Greece and Rome swimming was a part of martial training and was, with the alphabet, also part of elementary education for males. In the Orient swimming dates back at least to the 1st century BCE, there being some evidence of swimming races then in Japan. By the 17th century an imperial edict had made the teaching of swimming compulsory in the schools. Organized swimming events were held in the 19th century before Japan was opened to the Western world. Among the preliterate maritime peoples of the Pacific, swimming was evidently learned by children about the time they walked, or even before. Among the ancient Greeks there is note of occasional races, and a famous boxer swam as part of his training. The Romans built swimming pools, distinct from their baths. In the 1st century BCE the Roman Gaius Maecenas is said to have built the first heated swimming pool.

The lack of swimming in Europe during the Middle Ages is explained by some authorities as having been caused by a fear that swimming spread infection and caused epidemics. There is some evidence of swimming at seashore resorts of Great Britain in the late 17th century, evidently in conjunction with water therapy. Not until the 19th century, however, did the popularity of swimming as both recreation and sport begin in earnest. When the first swimming organization was formed there in 1837, London had six indoor pools with diving boards. The first swimming championship was a 440-yard (400-metre) race, held in Australia in 1846 and annually thereafter. The Metropolitan Swimming Clubs of London, founded in 1869, ultimately became the Amateur Swimming Association, the governing body of British amateur swimming. National swimming federations were formed in several European countries from 1882 to 1889. In the United States swimming was first nationally organized as a sport by the Amateur Athletic Union (AAU) on its founding in 1888. The Fédération Internationale de Natation Amateur (FINA) was founded in 1908.

Competitive swimming

Internationally, competitive swimming came into prominence with its inclusion in the modern Olympic Games from their inception in 1896. Olympic events were originally only for men, but women’s events were added in 1912. Before the formation of FINA, the Games included some unusual events. In 1900, for instance, when the Games’ swimming events were held on the Seine River in France, a 200-metre obstacle race involved climbing over a pole and a line of boats and swimming under them. Such oddities disappeared after FINA took charge. Under FINA regulations, for both Olympic and other world competition, race lengths came increasingly to be measured in metres, and in 1969 world records for yard-measured races were abolished. The kinds of strokes allowed were reduced to freestyle (crawl), backstroke, breaststroke, and butterfly. All four strokes were used in individual medley races. Many nations have at one time or another dominated Olympic and world competition, including Hungary, Denmark, Australia, Germany, France, Great Britain, Canada, Japan, and the United States.

Instruction and training

The earliest instruction programs were in Great Britain in the 19th century, both for sport and for lifesaving. Those programs were copied in the rest of Europe. In the United States swimming instruction for lifesaving purposes began under the auspices of the American Red Cross in 1916. Instructional work done by the various branches of the armed forces during both World Wars I and II was very effective in promoting swimming. Courses taught by community organizations and schools, extending ultimately to very young infants, became common.

The early practice of simply swimming as much as possible at every workout was replaced by interval training and repeat training by the late 1950s. Interval training consists of a series of swims of the same distance with controlled rest periods. In slow interval training, used primarily to develop endurance, the rest period is always shorter than the time taken to swim the prescribed distance. Fast interval training, used primarily to develop speed, permits rest periods long enough to allow almost complete recovery of the heart and breathing rate.
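The slow/fast distinction above turns entirely on how rest compares with swim time, which can be illustrated with a trivial sketch (the function name and the strict threshold are simplifying assumptions for illustration):

```python
def classify_interval_set(swim_seconds, rest_seconds):
    """Classify an interval set per the distinction described above:
    slow interval training keeps the rest period shorter than the
    swim time (develops endurance); fast interval training allows
    longer rest for near-complete recovery (develops speed)."""
    return "slow" if rest_seconds < swim_seconds else "fast"

# e.g. repeat 100 m swims taking ~75 s each:
print(classify_interval_set(75, 30))   # slow (endurance-oriented)
print(classify_interval_set(75, 120))  # fast (speed-oriented)
```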

The increased emphasis on international competition led to the growing availability of 50-metre (164-foot) pools. Other adjuncts that improved both training and performance included wave-killing gutters for pools, racing lane markers that also reduce turbulence, cameras for underwater study of strokes, large clocks visible to swimmers, and electrically operated touch and timing devices. Since 1972 all world records have been expressed in hundredths of a second. Advances in swimsuit technology reached a head at the 2008 Olympic Games in Beijing, where swimmers—wearing high-tech bodysuits that increased buoyancy and decreased water resistance—broke 25 world records. After another round of record-shattering times at the 2009 world championships, FINA banned such bodysuits, for fear that they augmented a competitor’s true ability.

Strokes

The earliest strokes to be used were the sidestroke and the breaststroke. The sidestroke was originally used with both arms submerged. That practice was modified toward the end of the 19th century by bringing forward first one arm above the water, then the other, and then each in turn. The sidestroke was supplanted in competitive swimming by the crawl (see below) but is still used in lifesaving and recreational swimming. The body stays on its side and the arms propel alternately. The leg motion used in sidestroke is called the scissors kick, in which the legs open slowly, under leg backward, upper leg forward, both knees slightly bent, and toes pointed. The scissoring action of the legs coming smartly together after opening creates the forward propulsion of the kick.

The breaststroke is believed to be the oldest of strokes and is much used in lifesaving and recreational swimming as well as in competitive swimming. The stroke is especially effective in rough water. As early as the end of the 17th century, the stroke was described as consisting of a wide pull of the arms combined with a symmetrical action of the legs and simulating the movement of a swimming frog, hence the usual term frog kick. The stroke is performed lying face down in the water, the arms always remaining underwater. The early breaststroke featured a momentary glide at the completion of the frog kick. Later the competitive breaststroke eliminated the glide. In the old breaststroke, breath was taken in at the beginning of the arm stroke, but in the later style, breath was taken in near the end of the arm pull.

The butterfly stroke, used only in competition, differs from the breaststroke in arm action. In the butterfly the arms are brought forward above the water. The stroke was brought to the attention of U.S. officials in 1933 during a race involving Henry Myers, who used the stroke. He insisted that his stroke conformed to the rules of breaststroke as then defined. After a period of controversy, the butterfly was recognized as a distinct competitive stroke in 1953. The frog kick originally used was abandoned for a fishtail (dolphin) kick, depending only on up-and-down movement of the legs. Later swimmers used two dolphin kicks to one arm pull. Breathing is done in sprint competition by raising the head every second or third stroke.

The backstroke began to develop early in the 20th century. In that stroke, the swimmer’s body position is supine, the body being held as flat and streamlined as possible. The arms reach alternately above the head and enter the water directly in line with the shoulders, palm outward with the little finger entering the water first. The arm is pulled back to the thigh. There is a slight body roll. The kick was originally the frog kick, but it subsequently involved up-and-down leg movements as in the crawl. The backstroke is a competition stroke, but it is also used in recreational swimming as a rest from other strokes, frequently with minimum arm motion and only enough kick to maintain forward motion.

The crawl, the stroke used in competitive freestyle swimming, has become the fastest of all strokes. It is also the almost unanimous choice of stroke for covering any considerable distance. The stroke was in use in the Pacific at the end of the 19th century and was taken up by the Australian swimmer Henry Wickham about 1893. The brothers Syd and Charles Cavill of Australia popularized the stroke in Europe in 1902 and in the United States in 1903. The crawl was like the old sidestroke in its arm action, but it had a fluttering up-and-down leg action performed twice for each arm stroke. Early American imitators added an extra pair of leg actions, and later as many as six kicks were used. The kicks also varied in kind. In the crawl, the body lies prone, flat on the surface of the water, with the legs kept slightly under the water. The arms move alternately, timed so that one will start pulling just before the other has finished its pull, thus making propulsion continuous. Breathing is done by turning the head to either side during recovery of the arm from that side. Since 1896 the crawl has been used in more races than any other stroke.

Races

In competition there are freestyle races at distances of 50, 100, 200, 400, 800, and 1,500 metres; backstroke, breaststroke, and butterfly races at 100 metres and 200 metres; individual medley races at 200 metres and 400 metres; the freestyle relays, 4 × 100 metres and 4 × 200 metres; and the medley relay, 4 × 100 metres.

Starts are all (with the exception of the backstroke) from a standing or forward-leaning position, the object being to get the longest possible glide before the stroke begins. All races are in multiples of the pool length, so that the touch before turning, which is varied for different stroke races, is important for success. In relay races, a swimmer finishes his leg of the relay by touching the starting edge of the pool, upon which his next teammate dives into the water to begin his leg.

Distance swimming

Any swimming competition longer than 1,500 metres (1,640 yards) is considered distance swimming. Most long-distance races are in the 24- to 59-km (15- to 37-mile) range, though some, such as the Lake George marathon (67 km [41.5 miles]) and the Lake Michigan Endurance Swim (80 km [50 miles]), both in the United States, have been longer. FINA governs distance swimming for 5-km, 10-km, and 25-km (3.1-mile, 6.2-mile, and 15.5-mile) races. In 1954 a group of amateur and professional marathon swimmers formed the Fédération Internationale de Natation Longue Distance; and in 1963, after dissension between amateur and professional swimmers, the World Professional Marathon Swimming Federation was founded. Throughout the 1960s the latter group sanctioned about eight professional marathons annually, the countries most frequently involved being Canada, Egypt, Italy, Argentina, and the United States. The British Long Distance Swimming Association has sponsored races on inland waters of from 16.5 to 35.4 km (10.25 to 22 miles).

The first type of distance swimming to be regulated by FINA was English Channel swimming, which captured the popular imagination in the second half of the 19th century. Captain Matthew Webb of Great Britain was the first to make the crossing from Dover, England, to Calais, France, in 1875; his time was 21 hours 45 minutes. The map distance was 17.75 nautical miles (33 km), but the actual distance of a Channel Swim is frequently lengthened by tides and winds. No one matched Webb’s feat until 1911, when another Englishman, T.W. Burgess, made the crossing. In 1926 the American swimmer Gertrude Ederle became the first woman to swim the Channel, crossing from Cap Gris-Nez, France, to Dover in a record-setting time for man or woman of 14 hours 31 minutes. Since then, except for the World War II years, crossing swims have been made annually. Several swimmers have made 10 or more crossings. The Channel Swimming Association was formed in 1927 to control swims and verify times. By 1978 the record had been lowered to 7 hours 40 minutes by Penny Dean of the United States, and by the 1990s successful crossings had been made by swimmers as young as 12 and as old as 65. Various swimmers had crossed both ways with only brief rests between the swims. Open-water distance swimming events of 10 km (for men and women) were added to the Olympic program in 2008.
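The improvement in Channel crossing times is easy to quantify from the figures above. Using the quoted map distance of about 33 km (a lower bound, since tides and winds lengthen the actual swim):

```python
# Average pace for English Channel crossings, using the map distance
# of about 33 km quoted above. Actual swims are longer because of
# tides and winds, so these figures are lower bounds on true pace.
DISTANCE_KM = 33.0

def avg_speed_kmh(hours, minutes):
    """Average speed in km/h for a crossing of the map distance."""
    return DISTANCE_KM / (hours + minutes / 60)

webb_1875 = avg_speed_kmh(21, 45)  # Matthew Webb: 21 h 45 min
dean_1978 = avg_speed_kmh(7, 40)   # Penny Dean: 7 h 40 min
print(f"Webb: {webb_1875:.2f} km/h, Dean: {dean_1978:.2f} km/h")
```

On these figures Webb averaged roughly 1.5 km/h while Dean averaged over 4 km/h, nearly a threefold improvement over the century.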

Details

Swimming is an individual or team racing sport that requires the use of one's entire body to move through water. The sport takes place in pools or open water (e.g., in a sea or lake). Competitive swimming is one of the most popular Olympic sports, with varied distance events in butterfly, backstroke, breaststroke, freestyle, and individual medley. In addition to these individual events, four swimmers can take part in either a freestyle or medley relay. A medley relay consists of four swimmers who will each swim a different stroke, ordered as backstroke, breaststroke, butterfly and freestyle.

Swimming each stroke requires a set of specific techniques; in competition, there are distinct regulations concerning the acceptable form for each individual stroke. There are also regulations on what types of swimsuits, caps, jewelry and injury tape that are allowed at competitions. Although it is possible for competitive swimmers to incur several injuries from the sport, such as tendinitis in the shoulders or knees, there are also multiple health benefits associated with the sport.

History

Evidence of recreational swimming in prehistoric times has been found, with the earliest evidence dating to Stone Age paintings from around 10,000 years ago. Written references date from 2000 BC, with some of the earliest references to swimming including the Iliad, the Odyssey, the Bible, Beowulf, the Quran and others. In 1538, Nikolaus Wynmann, a Swiss–German professor of languages, wrote the earliest known complete book about swimming, Colymbetes, sive de arte natandi dialogus et festivus et iucundus lectu (The Swimmer, or A Dialogue on the Art of Swimming and Joyful and Pleasant to Read).

Swimming emerged as a competitive recreational activity in the 1830s in England. In 1828, the first indoor swimming pool, St George's Baths was opened to the public. By 1837, the National Swimming Society was holding regular swimming competitions in six artificial swimming pools, built around London. The recreational activity grew in popularity and by 1880, when the first national governing body, the Amateur Swimming Association was formed, there were already over 300 regional clubs in operation across the country.

In 1844 two Native American participants at a swimming competition in London introduced the front crawl to a European audience. Sir John Arthur Trudgen picked up the hand-over stroke from some South American natives and successfully debuted the new stroke in 1873, winning a local competition in England. The front crawl that evolved from it is still regarded as the most powerful stroke in use today.

Captain Matthew Webb was the first man to swim the English Channel (between England and France), in 1875. Using the breaststroke, he swam the 21.26-mile (34.21 km) crossing in 21 hours and 45 minutes. His feat was not replicated or surpassed for the next 36 years, until T.W. Burgess made the crossing in 1911.

Other European countries also established swimming federations; Germany in 1882, France in 1890 and Hungary in 1896. The first European amateur swimming competitions were in 1889 in Vienna. The world's first women's swimming championship was held in Scotland in 1892.

Men's swimming became part of the first modern Olympic Games in 1896 in Athens. In 1902, the Australian Richmond Cavill introduced freestyle to the Western world. In 1908, the world swimming association, Fédération Internationale de Natation (FINA), was formed. Women's swimming was introduced into the Olympics in 1912; the first international swim meet for women outside the Olympics was the 1922 Women's Olympiad. Butterfly was developed in the 1930s and was at first a variant of breaststroke, until it was accepted as a separate style in 1952. FINA renamed itself World Aquatics in December 2022.

Competitive swimming

Competitive swimming became popular in the 19th century. The goal of high level competitive swimming is to break personal or world records while beating competitors in any given event. Swimming in competition should create the least resistance in order to obtain maximum speed. However, some professional swimmers who do not hold a national or world ranking are considered the best in regard to their technical skills. Typically, an athlete goes through a cycle of training in which the body is overloaded with work in the beginning and middle segments of the cycle, and then the workload is decreased in the final stage as the swimmer approaches competition.

The practice of reducing exercise in the days just before an important competition is called tapering. Tapering is used to give the swimmer's body some rest without stopping exercise completely. A final stage is often referred to as "shave and taper": the swimmer shaves off all exposed hair for the sake of reducing drag and having a sleeker and more hydrodynamic feel in the water. Additionally, shaving removes the top layer of "dead skin", exposing the newer and richer skin underneath, which can shave fractions of a second off a swimmer's time.

Swimming is an event at the Summer Olympic Games, where male and female athletes compete in 16 of the recognized events each. Olympic events are held in a 50-meter pool, called a long course pool.

There are forty officially recognized individual swimming events in the pool; however, the International Olympic Committee recognizes only 32 of them. The international governing body for competitive swimming is World Aquatics, previously named the Fédération Internationale de Natation ("International Swimming Federation"), or more commonly, FINA, prior to 2023.

Open water

In open water swimming, where the events are swum in a body of open water (lake or sea), there are also 5 km, 10 km and 25 km events for men and women. However, only the 10 km event is included in the Olympic schedule, again for both men and women. Open-water competitions are typically separate from other swimming competitions, with the exception of the World Championships and the Olympics.

Swim styles

In competitive swimming, four major styles have been established. These have been relatively stable over the last 30–40 years with minor improvements. They are:

* Butterfly
* Backstroke
* Breaststroke
* Freestyle

In competition, only one of these styles may be used except in the case of the individual medley, or IM, which consists of all four. In this latter event, swimmers swim equal distances of butterfly, then backstroke, breaststroke, and finally, freestyle. In Olympic competition, this event is swum in two distances – 200 and 400 meters. Some short course competitions also include the 100-yard or 100-meter IM – particularly, for younger or newer swimmers (typically under 14 years) involved in club swimming, or masters swimming (over 18).

Dolphin kick

Since the 1990s, the most drastic change in swimming has been the addition of the underwater dolphin kick. This is used to maximize speed at the start and after the turns in all styles. The first successful use of it was by David Berkoff. At the 1988 Olympics, he swam most of the 100 m backstroke race underwater and broke the world record in the distance during the preliminaries. Another swimmer to use the technique was Denis Pankratov at the 1996 Olympics in Atlanta, where he completed almost half of the 100 m butterfly underwater to win the gold medal. In the past decade, American competitive swimmers have shown the most use of the underwater dolphin kick to gain advantage, most notably Olympic and World medal winners Michael Phelps and Ryan Lochte; however, swimmers are currently not allowed to travel more than fifteen metres underwater after the start and each turn due to rule changes by FINA. In addition, FINA announced in 2014 that a single dolphin kick may be added to the breaststroke pullout prior to the first breaststroke kick.

While the dolphin kick is mostly seen in middle-distance freestyle events and in all distances of backstroke and butterfly, it is not usually used to the same effect in freestyle sprinting. That changed with the addition of the so-called "technical" suits around the European Short Course Championships in Rijeka, Croatia in December 2008. There, Amaury Leveaux set new world records of 44.94 seconds in the 100 m freestyle, 20.48 seconds in the 50 m freestyle and 22.18 in the 50 m butterfly. Unlike the rest of the competitors in these events, he spent at least half of each race submerged using the dolphin kick.

Competition pools

World Championship pools must be 50 metres (160 ft) (long course) long and 25 metres (82 ft) wide, with ten lanes labelled zero to nine (or one to ten in some pools; zero and nine (or one and ten) are usually left empty in semi-finals and finals); the lanes must be at least 2.5 metres (8.2 ft) wide. They will be equipped with starting blocks at both ends of the pool and most will have Automatic Officiating Equipment, including touch pads to record times and sensors to ensure the legality of relay takeovers. The pool must have a minimum depth of two metres.

Other pools which host events under World Aquatics regulations are required to meet some but not all of these requirements. Many of these pools have eight, or even six, lanes instead of ten, and some are 25 metres (82 ft) long, making them short course. World records set in short course pools are kept separate from those set in long course pools because having more or fewer turns in a race can be an advantage or a disadvantage to swimmers.

Seasons

Competitive swimming, from the club through to international level, tends to have an autumn and winter season competing in short course (25 metres or yards) pools and a spring and summer season competing in long course (50-metre) pools and in open water.

In international competition and in club swimming in Europe, the short course (25m) season lasts from September to December, and the long course (50m) season from January to August with open water in the summer months. These regulations are slowly being brought to competition in North America.

Currently, in club, school, and college swimming in the United States and Canada, the short course (25 yards) season is much longer, running from September to March. The long course season takes place in 50-meter pools and lasts from April to the end of August, with open water in the summer months.

In club swimming in Australasia, the short course (25m) season lasts from April to September, and the long course (50m) season from October to March with open water in the summer months.

Outside the United States, meters are the standard in both short and long course swimming, with the same distances swum in all events. In the American short course season, the 500-yard, 1000-yard, and 1650-yard freestyle events are swum because a yard is shorter than a meter (100 yards equals 91.44 meters), while during the American long course season the 400-meter, 800-meter, and 1500-meter freestyle events are swum instead.
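The yard/metre arithmetic above can be checked with a short sketch (the helper name is illustrative, not from the text; the conversion factor 1 yd = 0.9144 m is exact by definition):

```python
# Hypothetical helper (not from the original text) illustrating the
# yard-to-metre comparison; the conversion 1 yd = 0.9144 m is exact.
YARD_IN_METRES = 0.9144

def yards_to_metres(yards):
    """Return the metric length of a race distance given in yards."""
    return yards * YARD_IN_METRES

# American short-course yard events and their rough metric counterparts
pairs = [(500, 400), (1000, 800), (1650, 1500)]
for yd, m in pairs:
    print(f"{yd} yd = {yards_to_metres(yd):.2f} m (long course event: {m} m)")
```

This confirms, for example, that the 1650-yard event works out to about 1508.8 m, slightly longer than the 1500 m metric event.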

Beginning each swimming season with short course racing allows shorter distance races for novice swimmers. For example, in the short course season a swimmer who wanted to compete in a newly learned stroke could enter a 25-yard/meter race, as opposed to the long course season, when they would need to swim at least 50 meters of that stroke in order to compete.

Records

The increase in accuracy and reliability of electronic timing equipment led to the introduction of hundredths of a second to the time records from 21 August 1972.

Records in short course (25 m) pools began to be officially approved as "short course world records" from 3 March 1991. Prior to this date, times in short course (25 m) pools were not officially recognised but were regarded as "world best times" (WBT). From 31 October 1994, times in the 50 m backstroke, breaststroke, and butterfly were added to the official record listings.

FINA currently recognises world records in the following events for both men and women.

* Freestyle: 50 m, 100 m, 200 m, 400 m, 800 m, 1500 m
* Backstroke: 50 m, 100 m, 200 m
* Breaststroke: 50 m, 100 m, 200 m
* Butterfly: 50 m, 100 m, 200 m
* Individual medley: 100 m (short course only), 200 m, 400 m
* Relays: 4×50 m freestyle relay (short course only), 4×100 m freestyle, 4×200 m freestyle, 4×50 m medley relay (short course only), 4×100 m medley
* Mixed relays (teams of two men and two women): 4×50 m mixed freestyle (short course only), 4×100 m mixed freestyle (long course only), 4×50 m mixed medley (short course only), 4×100 m mixed medley (long course only).



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Online

#1792 2023-06-03 00:01:51

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 46,703

Re: Miscellany

1795) Rowing (Sport)

Summary

Rowing is propulsion of a boat by means of oars. As a sport, it involves watercraft known as shells (usually propelled by eight oars) and sculls (two or four oars), which are raced mainly on inland rivers and lakes. The term rowing refers to the use of a single oar grasped in both hands, while sculling involves the use of two oars, one grasped in each hand.

In competitive rowing the oar is a shaft of wood with a rounded handle at one end and a shaped blade at the other. The shaft usually consists of two halves hollowed out and glued together in order to save weight and increase flexibility. The blade—a thin broadened surface—is either flat or slightly curved at the sides and tip to produce a firm grip of the water. The loom, or middle portion of the oar, rests either in a notch or oarlock (rowlock) or between thole pins on the gunwale (top edge) of the boat in order to serve as a fulcrum of the oar. The loom is protected against wear in this area of contact by a short sleeve of leather or plastic. Oars have fixed leather or adjustable metal or plastic collars, called buttons, to prevent slippage outboard. In sculling, the oars are called sculls.

History

Rowing began as a means of transportation. Galleys, used as war vessels and ships of state, prevailed in ancient Egypt (on the Nile River) and subsequently in the Roman Empire (on the Mediterranean) from at least the 25th century BCE to the 4th century CE. Rowing was also an important adjunct to sailing for the Anglo-Saxons, Danes, and Norwegians in their waterborne military forays. Rowing in England, of both small boats and barges, began on the River Thames as early as the 13th century and resulted in a company of watermen who transported passengers up, down, and across the Thames in and near London. Wagering by passengers in different boats by the 16th century led to races, at first impromptu and later organized. By the early 18th century there were more than 40,000 liveried watermen. Doggett’s Coat and Badge, an organized watermen’s race, has been held annually since 1715. The watermen were, of course, professionals, and the regattas, programs of racing, held throughout the 18th century were also professional. A similar form of racing by ferrymen in the United States began early in the 19th century.

Rowing in six- and eight-oar boats began as a club and school activity for amateurs about this time in England and somewhat later in the United States. Organized racing began at the universities of Oxford and Cambridge in the 1820s, culminating in 1839 in the Henley Regatta (from 1851 the Henley Royal Regatta), which has continued to the present. Rowing as sport developed from the 1830s to the ’60s in Australia and Canada and during the same period became popular throughout Europe and in the United States. (Harvard and Yale universities first raced in 1851; the first open regatta for amateurs was held in 1872.) Throughout the century professional sculling was a popular sport.

Local and national organizations, amateur and professional, were formed in this period, and in 1892 the Fédération Internationale des Sociétés d’Aviron (FISA; the International Rowing Federation) was founded. Events in rowing (for crews of eight, four, and two) and in sculling were established. In races for eights and for some fours and pairs, there is also a coxswain, who sits at the stern, steers, calls the stroke, and generally directs the strategy of the race. Rowing events in the Olympic Games have been held for men since 1900 and for women since 1976.

Under FISA rules, all races take place over a 2,000-metre (6,560-foot) straight course on still water, each crew or sculler racing in a separate, buoy-marked lane. Racing shells range in overall length from 18.9 metres (62 feet) for an eight, 13.4 metres (44 feet) for a four, and 10.4 metres (34 feet) for a pair, to 8.2 metres (27 feet) for a single scull. There are no specifications for weight, which varies according to materials used and ranges from 14 kilograms (30.8 pounds) for a scull to 96 kg (212 pounds) or more for a shell for eights. The size, shape, and weights of oars are also not specified, but they are generally about 4 metres (13 feet) in length and weigh about 3.6 kg (8 pounds).

Events classified as lightweight are for women rowers not exceeding 59 kg (130 pounds) and men rowers not exceeding 72.5 kg (160 pounds). All rowers must weigh in between one and two hours before a race.

Stroke and style of training

The racing stroke begins with the entry of the oar blade into the water (the catch). The stroke underwater follows, and then the travel of the blade out of the water (the recovery). Turning the blade horizontally by wrist motion as the oar handle is depressed to raise the blade clear of the water at the beginning of the recovery is called feathering. The extraction of the blade after driving the boat through the water is called the finish. Turning of the blade from horizontal to vertical in preparation for the catch is called squaring.

Early fixed-seat rowing used the English stroke: body swing produced most of the power, the arms being used mainly to transfer the weight of the body to the oar. With the introduction of the sliding seat (1857 in the United States; 1871 in England), leg drive was added. Later style changes introduced by Steve Fairbairn in 1881 emphasized leg drive and arm pull. The German coach Karl Adam in the 1950s produced good results when he introduced new training methods based on Fahrtspiel (“speed play”), originally used for training runners, and on interval training (short sprints alternated with long runs).

Details

Rowing, sometimes called crew in the United States, is the sport of racing boats using oars. It differs from paddling sports in that rowing oars are attached to the boat using oarlocks, while paddles are not connected to the boat. Rowing is divided into two disciplines: sculling and sweep rowing. In sculling, each rower holds two oars—one in each hand, while in sweep rowing each rower holds one oar with both hands. There are several boat classes in which athletes may compete, ranging from single sculls, occupied by one person, to shells with eight rowers and a coxswain, called eights. There are a wide variety of course types and formats of racing, but most elite and championship level racing is conducted on calm water courses 2 kilometres (1.2 mi) long with several lanes marked using buoys.

Modern rowing as a competitive sport can be traced to the early 17th century when professional watermen held races (regattas) on the River Thames in London, England. Often prizes were offered by the London Guilds and Livery Companies. Amateur competition began towards the end of the 18th century with the arrival of "boat clubs" at British public schools. Similarly, clubs were formed at colleges within Oxford and Cambridge in the early nineteenth century. Public rowing clubs were beginning at the same time in England, Germany, and the United States. The first American college rowing club was formed in 1843 at Yale College.

Rowing is one of the oldest Olympic sports. Though it was on the programme for the 1896 games, racing did not take place due to bad weather. Male rowers have competed since the 1900 Summer Olympics. Women's rowing was added to the Olympic programme in 1976. Today, there are fourteen boat classes which race at the Olympics. In addition, the sport's governing body, the World Rowing Federation, holds the annual World Rowing Championships with twenty-two boat classes.

Across six continents, 150 countries now have rowing federations that participate in the sport. Major domestic competitions take place in dominant rowing nations and include The Boat Race and Henley Royal Regatta in the United Kingdom, the Australian Rowing Championships in Australia, the Harvard–Yale Regatta and Head of the Charles Regatta in the United States, and the Royal Canadian Henley Regatta in Canada. Many other competitions often exist for racing between clubs, schools, and universities in each nation.

History

An Egyptian funerary inscription of 1430 BC records that the warrior Amenhotep (Amenophis) II was also renowned for his feats of oarsmanship, though there is some disagreement among scholars over whether there were rowing contests in ancient Egypt. In the Aeneid, Virgil mentions rowing forming part of the funeral games arranged by Aeneas in honour of his father. In the 13th century, Venetian festivals called regata included boat races among other events.

The first known "modern" rowing races began from competition among the professional watermen in the United Kingdom that provided ferry and taxi service on the River Thames in London. Prizes for wager races were often offered by the London Guilds and Livery Companies or wealthy owners of riverside houses. The oldest surviving such race, Doggett's Coat and Badge was first contested in 1715 and is still held annually from London Bridge to Chelsea. During the 19th century these races were to become numerous and popular, attracting large crowds. Prize matches amongst professionals similarly became popular on other rivers throughout Great Britain in the 19th century, notably on the Tyne. In America, the earliest known race dates back to 1756 in New York, when a pettiauger defeated a Cape Cod whaleboat in a race.

Amateur competition in England began towards the end of the 18th century. Documentary evidence from this period is sparse, but it is known that the Monarch Boat Club of Eton College and the Isis Club of Westminster School were both in existence in the 1790s. The Star Club and Arrow Club in London for gentlemen amateurs were also in existence before 1800. At the University of Oxford bumping races were first organised in 1815 when Brasenose College and Jesus College boat clubs had the first annual race while at Cambridge the first recorded races were in 1827. Brasenose beat Jesus to win Oxford University's first Head of the River; the two clubs claim to be the oldest established boat clubs in the world. The Boat Race between Oxford University and Cambridge University first took place in 1829, and was the second intercollegiate sporting event (following the first Varsity Cricket Match by 2 years). The interest in the first Boat Race and subsequent matches led the town of Henley-on-Thames to begin hosting an annual regatta in 1839.

Founded in 1818, Leander Club is the world's oldest public rowing club. The second oldest club which still exists is Der Hamburger und Germania Ruder Club, which was founded in 1836 and marked the beginning of rowing as an organized sport in Germany. During the 19th century, as in England, wager matches in North America between professionals became very popular, attracting vast crowds. Narragansett Boat Club was founded in 1838 exclusively for rowing. During an 1837 parade in Providence, R.I., a group of boatmen were pulling a longboat on wheels, which carried the oldest living survivor of the 1772 Gaspee Raid. They boasted to the crowd that they were the fastest rowing crew on the Bay. A group of Providence locals took issue with this and challenged them to a race, which the Providence group summarily won. The six-man core of that group went on in 1838 to found the Narragansett Boat Club. Detroit Boat Club was founded in 1839 and is the second oldest continuously operated rowing club in the U.S. In 1843, the first American college rowing club was formed at Yale University. The Harvard–Yale Regatta is the oldest intercollegiate sporting event in the United States, having been contested every year since 1852 (excepting interruptions for wars and the COVID-19 pandemic).

The Schuylkill Navy is an association of amateur rowing clubs of Philadelphia. Founded in 1858, it is the oldest amateur athletic governing body in the United States. The member clubs are all on the Schuylkill River where it flows through Fairmount Park in Philadelphia, mostly on the historic Boathouse Row. The success of the Schuylkill Navy and similar organizations contributed heavily to the extinction of professional rowing and the sport's current status as an amateur sport. At its founding, it had nine clubs; today, there are 12. At least 23 other clubs have belonged to the Navy at various times. Many of the clubs have a rich history, and have produced a large number of Olympians and world-class competitors.

The sport's governing body, Fédération Internationale des Sociétés d'Aviron, was founded in 1892, and is the oldest international sports federation in the Olympic movement.

FISA first organized a European Rowing Championships in 1893. An annual World Rowing Championships was introduced in 1962. Rowing has also been conducted at the Olympic Games since 1900 (cancelled at the first modern Games in 1896 due to bad weather).

History of women's rowing

Women row in all boat classes, from single scull to coxed eights, across the same age ranges and standards as men, from junior amateur through university-level to elite athlete. Typically men and women compete in separate crews, although mixed crews and mixed team events also take place. Coaching for women is similar to that for men. The world's first women's rowing team was formed in 1896 at the Furnivall Sculling Club in London. The club, whose signature colors were a distinctive myrtle and gold, began as a women's club but eventually admitted men in 1901.

The first international women's races were the 1954 European Rowing Championships. The introduction of women's rowing at the 1976 Summer Olympics in Montreal increased the growth of women's rowing because it created the incentive for national rowing federations to support women's events. Rowing at the 2012 Summer Olympics in London included six events for women compared with eight for men. In the US, rowing is an NCAA sport for women but not for men; though it is one of the country's oldest collegiate sports, the difference is in large part due to the requirements of Title IX.

At the international level, women's rowing traditionally has been dominated by Eastern European countries, such as Romania, Russia, and Bulgaria, although other countries such as Germany, Canada, the Netherlands, Great Britain and New Zealand often field competitive teams. The United States also has had very competitive crews, and in recent years these crews have become even more competitive given the surge in women's collegiate rowing. Today, rowing groups typically include roughly equal numbers of girls and boys.

Technique

While rowing, the athlete sits in the boat facing toward the stern and uses the oars (also interchangeably referred to as "blades"), which are held in place by oarlocks (also referred to as "gates"), to propel the boat forward (towards the bow). Rowing is distinguished from paddling in that the oar is attached to the boat using an oarlock, where in paddling there is no oarlock or attachment of the paddle to the boat.

The rowing stroke may be characterized by two fundamental reference points: the catch, which is placement of the oar spoon in the water, and the extraction, also known as the finish or release, when the rower removes the oar spoon from the water.

After the oar is placed in the water at the catch, the rower applies pressure to the oar levering the boat forward which is called the drive phase of the stroke. Once the rower extracts the oar from the water, the recovery phase begins, setting up the rower's body for the next stroke.

At the catch, the rower places the oar in the water and applies pressure to the oar by pushing the seat toward the bow of the boat by extending the legs, thus pushing the boat through the water. The point of placement of the spoon in the water is a relatively fixed point about which the oar serves as a lever to propel the boat. As the rower's legs approach full extension, the rower pivots the torso toward the bow of the boat and then finally pulls the arms towards his or her chest. The hands meet the chest right above the diaphragm.

At the end of the stroke, with the oar spoon still in the water, the hands drop slightly to unload the oar so that spring energy stored in the bend of the oar gets transferred to the boat which eases removing the oar from the water and minimizes energy wasted on lifting water above the surface (splashing).

The recovery phase follows the drive. The recovery starts with the extraction and involves coordinating the body movements with the goal of moving the oar back to the catch position. In extraction, the rower pushes down on the oar handle to quickly lift the spoon out of the water and rapidly rotates the oar so that the spoon is parallel to the water. This process is sometimes referred to as feathering the blade. Simultaneously, the rower pushes the oar handle away from the chest. The spoon should emerge from the water perpendicular, or square, and be feathered immediately once clear of the water. After feathering and extending the arms, the rower pivots the body forward. Once the hands are past the knees, the rower compresses the legs, which moves the seat towards the stern of the boat. The leg compression occurs relatively slowly compared to the rest of the stroke, which affords the rower a moment to recover and allows the boat to glide through the water. The gliding of the boat through the water during recovery is often called run.

A controlled slide is necessary to maintain momentum and achieve optimal boat run. However, various teaching methods disagree about the optimal relation in timing between drive and recovery. Near the end of the recovery, the rower squares the oar spoon into perpendicular orientation with respect to the water and begins another stroke.

Additional Information

What is Rowing?

Rowing involves propelling a boat using oars fixed to the vessel. It differs from other disciplines in that rowers sit with their backs to the direction of movement, therefore crossing the finish line backwards.

In the Olympics, rowers race against each other as individuals or in crews of two, four or eight.

By whom, where and when was Rowing invented?

Rowing was first used as a means of transport in ancient Egypt, Greece and Rome. As a sport, it probably began in England in the 17th and early 18th centuries; the Oxford-Cambridge university boat race in the United Kingdom was inaugurated in 1829.

By the 19th century, rowing was popular in Europe and had been exported to America.

What are the rules of Rowing?

Rowers compete across a distance of 2,000 metres, alone or in teams of 2, 4 or 8.

Sculling athletes hold one oar in each hand, while sweep rowing athletes hold a single oar with both hands.

Eight-person crews have a coxswain, who steers the boat and directs the crew. The boat is steered using a small rudder that is attached to the foot of one of the rowers by a cable.

Lanes are clearly marked by buoys every 10 to 12.5 metres, and the course must have a depth of at least three metres.

Crews committing a false start are first given a warning, and two false starts in the same race leads to a disqualification for that individual or team.

A boat’s final time is determined by when its bow crosses the finish line. In the case of a close finish, a photo finish will be consulted to determine the order that the bow of each boat crossed the line.

What are the two types of Rowing?

The races are divided into sculling and sweep oar. Sculling events use two oars, whilst in sweep, the rower holds one. The eight-person crews have a coxswain, who steers the boat and directs the crew, but in all other boats, one rower steers by controlling a small rudder with a foot pedal.

Rowing and the Olympics

Rowing has been staged at every edition of the Olympic Games, except in 1896 in Athens. It was on the programme for those Games, but a stormy sea compelled the organisers to cancel the events.

Women made their debut at Montreal 1976, while the Olympic Games Atlanta 1996 marked the introduction of lightweight events.

The USA initially dominated Olympic rowing, before the Soviet Union and Germany came to the forefront. However, Great Britain's Sir Steve Redgrave is considered by many to be the greatest rower of all time, winning gold medals at five Olympic Games, and six world titles.

His female counterpart on the gold medal count is Elisabeta Lipa of Romania, who also won five Olympic gold medals between 1984 and 2004.

Best rowers to watch

Double sculls Olympic champions from Tokyo 2020, Matthieu Androdias and Hugo Boucheron of France, took part in one race in 2022: the World Championships. Winning that race reminded their rivals that they are still the crew to beat in that boat class.

On the women's side, Romanian crews have been performing well recently, highlighted by Simona Radis and Ancuta Bodnar, who went on a formidable win streak over the past two years that included double sculls Olympic gold in Tokyo as well as World and European titles.

Radis was also part of the Romanian team that won the eight-woman race at the 2022 World Championships, while Great Britain picked up the men’s trophy.

Also keep an eye out for multiple world champion and Tokyo 2020 gold medallist Paul O’Donovan from Ireland, alongside his fellow world champion teammate Fintan McCarthy in the lightweight double sculls.

In the singles sculls, New Zealand Olympic champion from Tokyo 2020 Emma Twigg is a standout performer.

Rowing Competition Rules at Paris 2024

There will be 502 rowers competing at Paris 2024: 251 athletes of each gender. This figure includes host country quotas (one for each gender) and universality places (two for each gender), allocated to ensure broad participation during the Olympic Games regatta.




#1793 2023-06-03 20:58:51

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 46,703

Re: Miscellany

1796) Coastline

Gist

Coastline is a line that forms the boundary between the land and the ocean or a lake.

Summary

Coast, also called shore, is a broad area of land that borders the sea.

The coastlines of the world’s continents measure about 312,000 km (193,000 miles). They have undergone shifts in position over geologic time because of substantial changes in the relative levels of land and sea. Studies of glaciations during the Pleistocene Epoch (2.6 million to 11,700 years ago) indicate that drops in sea level caused by the removal of water from the oceans during glacial advances affected all coastal areas. During the last Pleistocene glacial period, the sea level is thought to have been almost 122 m (400 feet) lower than it is today, resulting in the exposure of large portions of what is now the continental shelf.

Such changes in sea level have also played an important role in shaping the coasts. Glacial ice descending from coastal mountains in Alaska, Norway, and certain other areas excavated deep U-shaped depressions in times of lowered sea level. When the glacial ice melted and the level of the sea rose again, these steep-sided valleys were inundated, forming fjords. Estuaries, formed by the flooding of coastal river valleys, also are found in regions where the sea level has risen significantly.

Other factors that are instrumental in molding the topography of coasts are destructive erosional processes (e.g., wave action and chemical weathering), deposition of rock debris by currents, and tectonic activity that causes an uplifting or sinking of the Earth’s crust. The configuration and distinctive landforms of any given coast result largely from the interaction of these processes and their relative intensity, though the type and structure of the rock material underlying the area also have a bearing. For example, coastal terrains of massive sedimentary rock that have been uplifted by tectonic forces and subjected to intense wave erosion are characterized by steep cliffs extending out into the water. These nearly vertical sea cliffs generally alternate with irregularly shaped bays and narrow inlets. By contrast, wide sandy beaches and relatively smooth plains of unconsolidated sediment prevail in areas of crustal subsidence where deposition is intense. Such coasts are characterized by sandbars paralleling the shoreline, as well as by tidal flats.

Details

The coast, also known as the coastline or seashore, is defined as the area where land meets the ocean, or as a line that forms the boundary between the land and the sea. Shores are influenced by the topography of the surrounding landscape, as well as by water-induced erosion, such as waves. The geological composition of rock and soil dictates the type of shore that is created. The Earth has around 620,000 kilometres (390,000 mi) of coastline.

Coasts are important zones in natural ecosystems, often home to a wide range of biodiversity. On land, they harbor important ecosystems such as freshwater or estuarine wetlands, which are important for bird populations and other terrestrial animals. In wave-protected areas they harbor saltmarshes, mangroves or seagrasses, all of which can provide nursery habitat for finfish, shellfish, and other aquatic species. Rocky shores are usually found along exposed coasts and provide habitat for a wide range of sessile animals (e.g. mussels, starfish, barnacles) and various kinds of seaweeds.

In physical oceanography, a shore is the wider fringe that is geologically modified by the action of the body of water past and present, while the beach is at the edge of the shore, representing the intertidal zone where there is one. Along tropical coasts with clear, nutrient-poor water, coral reefs can often be found between depths of 1–50 meters (3.3–164 feet).

According to a United Nations atlas, 44% of all people live within 150 km (93 mi) of the sea. Because of their importance in society and high concentration of population, the coast is important for major parts of the global food and economic system, and they provide many ecosystem services to humankind. For example, important human activities happen in port cities. Coastal fisheries (commercial, recreational, and subsistence) and aquaculture are major economic activities and create jobs, livelihoods, and protein for the majority of coastal human populations. Other coastal spaces like beaches and seaside resorts generate large revenues through tourism. Marine coastal ecosystems can also provide protection against sea level rise and tsunamis. In many countries, mangroves are the primary source of wood for fuel (e.g. charcoal) and building material. Coastal ecosystems like mangroves and seagrasses have a much higher capacity for carbon sequestration than many terrestrial ecosystems, and as such can play a critical role in the near-future to help mitigate climate change effects by uptake of atmospheric anthropogenic carbon dioxide.

However, the economic importance of coasts makes many of these communities vulnerable to climate change, which causes increases in extreme weather and sea level rise, and related issues such as coastal erosion, saltwater intrusion and coastal flooding. Other coastal issues, such as marine pollution, marine debris, coastal development, and marine ecosystem destruction, further complicate the human uses of the coast and threaten coastal ecosystems. The interactive effects of climate change, habitat destruction, overfishing and water pollution (especially eutrophication) have led to the demise of coastal ecosystems around the globe. This has resulted in population collapse of fisheries stocks, loss of biodiversity, increased invasion of alien species, and loss of healthy habitats. International attention to these issues has been captured in Sustainable Development Goal 14 "Life Below Water", which sets goals for international policy focused on preserving marine coastal ecosystems and supporting more sustainable economic practices for coastal communities. Likewise, the United Nations has declared 2021–2030 the UN Decade on Ecosystem Restoration, but restoration of coastal ecosystems has received insufficient attention.

Because coasts are constantly changing, a coastline's exact perimeter cannot be determined; this measurement challenge is called the coastline paradox. The term coastal zone is used to refer to a region where interactions of sea and land processes occur. Both the terms coast and coastal are often used to describe a geographic location or region located on a coastline (e.g., New Zealand's West Coast, or the East, West, and Gulf Coast of the United States.) Coasts with a narrow continental shelf that are close to the open ocean are called pelagic coasts, while more sheltered coasts lie in a gulf or bay. A shore, on the other hand, may refer to parts of land adjoining any large body of water, including oceans (sea shore) and lakes (lake shore).

Size

Somalia has the longest coastline in Africa.

The Earth has approximately 620,000 kilometres (390,000 mi) of coastline. Coastal habitats, which extend to the margins of the continental shelves, make up about 7 percent of the Earth's oceans, but at least 85% of commercially harvested fish depend on coastal environments during at least part of their life cycle. As of October 2010, about 2.86% of exclusive economic zones were part of marine protected areas.

The definition of coasts varies. Marine scientists think of the "wet" (aquatic or intertidal) vegetated habitats as being coastal ecosystems (including seagrass, salt marsh etc.), whilst some terrestrial scientists might only think of coastal ecosystems as purely terrestrial plants that live close to the seashore (see also estuaries and coastal ecosystems).

While there is general agreement in the scientific community regarding the definition of coast, in the political sphere the delineation of the extents of a coast differs according to jurisdiction. Government authorities in various countries may define coast differently for economic and social policy reasons.

Exact length of coastline

The coastline paradox is the counterintuitive observation that the coastline of a landmass does not have a well-defined length. This results from the fractal curve-like properties of coastlines; i.e., the fact that a coastline typically has a fractal dimension. Although the "paradox of length" was previously noted by Hugo Steinhaus, the first systematic study of this phenomenon was by Lewis Fry Richardson, and it was expanded upon by Benoit Mandelbrot.

The measured length of the coastline depends on the method used to measure it and the degree of cartographic generalization. Since a landmass has features at all scales, from hundreds of kilometers in size to tiny fractions of a millimeter and below, there is no obvious size of the smallest feature that should be taken into consideration when measuring, and hence no single well-defined perimeter to the landmass. Various approximations exist when specific assumptions are made about minimum feature size.
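Richardson's relation can be illustrated numerically. The sketch below (our own toy model, using an idealized Koch-curve "coastline" rather than real map data) shows that each refinement shrinks the measuring segment by a factor of 3 while the measured length grows by a factor of 4/3, so the length increases without bound as the ruler shrinks:

```python
import math

def koch(depth):
    """Vertices of a Koch-curve 'coastline' between (0, 0) and (1, 0)."""
    pts = [(0.0, 0.0), (1.0, 0.0)]
    for _ in range(depth):
        new = [pts[0]]
        for (x1, y1), (x2, y2) in zip(pts, pts[1:]):
            dx, dy = (x2 - x1) / 3.0, (y2 - y1) / 3.0
            a = (x1 + dx, y1 + dy)            # first third point
            b = (x1 + 2 * dx, y1 + 2 * dy)    # second third point
            # Peak of the equilateral bump between a and b.
            peak = (a[0] + dx / 2 - dy * math.sqrt(3) / 2,
                    a[1] + dy / 2 + dx * math.sqrt(3) / 2)
            new += [a, peak, b, (x2, y2)]
        pts = new
    return pts

def length(pts):
    """Total polyline length (sum of straight segments)."""
    return sum(math.dist(p, q) for p, q in zip(pts, pts[1:]))

# Finer ruler -> longer measured coastline; no single "true" length.
for d in range(6):
    print(f"ruler={3.0 ** -d:.5f}  length={length(koch(d)):.4f}")

# Fractal dimension from Richardson's relation L ~ ruler**(1 - D);
# for the Koch curve this is log 4 / log 3, about 1.262.
D = 1 - math.log(length(koch(5))) / math.log(3.0 ** -5)
print(f"estimated D = {D:.3f}")
```

This is why coastline figures such as "620,000 km" only make sense relative to a stated map resolution.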

Formation

Tides often determine the range over which sediment is deposited or eroded. Areas with high tidal ranges allow waves to reach farther up the shore, and areas with lower tidal ranges produce deposition at a smaller elevation interval. The tidal range is influenced by the size and shape of the coastline. Tides do not typically cause erosion by themselves; however, tidal bores can erode as the waves surge up the river estuaries from the ocean.

Geologists classify coasts on the basis of tidal range into macrotidal coasts with a tidal range greater than 4 m (13 ft); mesotidal coasts with a tidal range of 2 to 4 m (6.6 to 13 ft); and microtidal coasts with a tidal range of less than 2 m (6.6 ft). The distinction between macrotidal coasts and the two lower-range classes is the more important one. Macrotidal coasts lack barrier islands and lagoons, and are characterized by funnel-shaped estuaries containing sand ridges aligned with tidal currents. Wave action is much more important for determining bedforms of sediments deposited along mesotidal and microtidal coasts than along macrotidal coasts.
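The tidal-range thresholds above amount to a simple three-way classification, which can be sketched as a small helper (the function name and example values are our own):

```python
def classify_tidal_coast(tidal_range_m: float) -> str:
    """Classify a coast by tidal range, per the geologists' thresholds:
    under 2 m -> microtidal, 2-4 m -> mesotidal, over 4 m -> macrotidal."""
    if tidal_range_m < 2.0:
        return "microtidal"
    if tidal_range_m <= 4.0:
        return "mesotidal"
    return "macrotidal"

print(classify_tidal_coast(1.2))   # microtidal
print(classify_tidal_coast(3.0))   # mesotidal
print(classify_tidal_coast(11.0))  # macrotidal
```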

Waves erode coastline as they break on shore releasing their energy; the larger the wave the more energy it releases and the more sediment it moves. Coastlines with longer shores have more room for the waves to disperse their energy, while coasts with cliffs and short shore faces give little room for the wave energy to be dispersed. In these areas, the wave energy breaking against the cliffs is higher, and air and water are compressed into cracks in the rock, forcing the rock apart, breaking it down. Sediment deposited by waves comes from eroded cliff faces and is moved along the coastline by the waves. This forms an abrasion or cliffed coast.

Sediment deposited by rivers is the dominant influence on the amount of sediment along coastlines that have estuaries. Today, riverine deposition at the coast is often blocked by dams and other human regulatory devices, which remove the sediment from the stream by causing it to be deposited inland. Coral reefs are a provider of sediment for coastlines of tropical islands.

Like the ocean which shapes them, coasts are a dynamic environment with constant change. The Earth's natural processes, particularly sea level rises, waves and various weather phenomena, have resulted in the erosion, accretion and reshaping of coasts as well as flooding and creation of continental shelves and drowned river valleys (rias).

Additional Information

The coastline, that narrow strip of land that borders the sea along a continent or an island, is an ideal place to see a constantly-changing landscape. The nonstop wave action there means nothing ever stays the same. Breakers gnaw away at cliffs, shift sand to and fro, breach barriers, build walls, and sculpt bays. Even the gentlest of ripples constantly reshape coastlines in teeny, tiny ways—a few grains of sand at a time.

Glaciers, rivers, and streams deliver a steady supply of building material for nature's unending job. And not to be outdone, the tectonic forces that move giant pieces of Earth's crust will periodically bump the bedrock and squeeze fresh lava out.

Formed by the Ocean
Waves are the busiest sculptors on the coastline. Built up by winds far out at sea, they unleash their energy and go to work when they break on the shore. The upward rush of water, called swash, delivers sand and gravel to the beach. On the return, backwash carries sand and gravel out to sea. Since waves usually hit the beach from one side or the other but always return at a right angle to the beach, the motion moves sand and gravel along the shore.

The ebb and flow of the tides is an added partner in the dance of breaking waves and shifting sands, helping to sculpt an array of landforms for temporary display, such as narrow spits, barrier islands, and lofty dunes. The delivery of sediment from muddy rivers and streams keeps the coastal construction on the go.

Along much of the coastline, pounding waves slowly chip away the base of cliffs, forcing chunks of rock to crumble and slide into the sea. Where a band of solid rock gives way, waves claw at weaker clays behind to sculpt a cove or a bay. Headlands form where the coastline gives way on either side, leaving a lone rocky mass to get hammered by the sea.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.


#1794 2023-06-04 18:37:21

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 46,703

Re: Miscellany

1797) Umbrella

Details

An umbrella or parasol is a folding canopy supported by wooden or metal ribs that is usually mounted on a wooden, metal, or plastic pole. It is designed to protect a person against rain or sunlight. The term umbrella is traditionally used when protecting oneself from rain, with parasol used when protecting oneself from sunlight, though the terms continue to be used interchangeably. Often the difference is the material used for the canopy; some parasols are not waterproof, and some umbrellas are transparent. Umbrella canopies may be made of fabric or flexible plastic. There are also combinations of parasol and umbrella that are called en-tout-cas (French for "in any case").

Umbrellas and parasols are primarily hand-held portable devices sized for personal use. The largest hand-portable umbrellas are golf umbrellas. Umbrellas can be divided into two categories: fully collapsible umbrellas, in which the metal pole supporting the canopy retracts, making the umbrella small enough to fit in a handbag, and non-collapsible umbrellas, in which the support pole cannot retract and only the canopy can be collapsed. Another distinction can be made between manually operated umbrellas and spring-loaded automatic umbrellas, which spring open at the press of a button.

Hand-held umbrellas have a type of handle which can be made from wood, a plastic cylinder or a bent "crook" handle (like the handle of a cane). Umbrellas are available in a range of price and quality points, ranging from inexpensive, modest quality models sold at discount stores to expensive, finely made, designer-labeled models. Larger parasols capable of blocking the sun for several people are often used as fixed or semi-fixed devices, used with patio tables or other outdoor furniture, or as points of shade on a sunny beach.

Parasol may also be called sunshade, or beach umbrella (US English). An umbrella may also be called a brolly (UK slang), parapluie (nineteenth century, French origin), rainshade, gamp (British, informal, dated), or bumbershoot (rare, facetious American slang). When used for snow, it is called a paraneige. When used for sun it is called a parasol.

Modern use

National Umbrella Day is held on 10 February each year around the world.

The pocket (foldable) umbrella was invented in Uraiújfalu (Hungary) by the Balogh brothers, whose patent request was admitted in 1923 by the Royal Notary Public of Szombathely. Later on their patent was also approved in Austria, Germany, Belgium, France, Poland, Great Britain and the United States.

In 1928, Hans Haupt's pocket umbrellas appeared. In Vienna in 1928, Slawa Horowitz, a student studying sculpture at the Akademie der Bildenden Kunste Wien (Academy of Fine Arts), developed a prototype for an improved compact foldable umbrella for which she received a patent on 19 September 1929. The umbrella was called "Flirt" and manufactured by the Austrian company "Brüder Wüster" and their German associates "Kortenbach & Rauh". In Germany, the small foldable umbrellas were produced by the company "Knirps", which became a synonym in the German language for small foldable umbrellas in general. In 1969, Bradford E Phillips, the owner of Totes Incorporated of Loveland, Ohio, obtained a patent for his "working folding umbrella".

Umbrellas have also been fashioned into hats as early as 1880 and at least as recently as 1987.

Golf umbrellas, one of the largest sizes in common use, are typically around 62 inches (157 cm) across, but can range anywhere from 60 to 70 inches (150 to 180 cm).

Umbrellas are now a consumer product with a large global market. As of 2008, most umbrellas worldwide are made in China, mostly in the Guangdong, Fujian and Zhejiang provinces. The city of Shangyu alone had more than a thousand umbrella factories. In the US alone, about 33 million umbrellas, worth $348 million, are sold each year.

Umbrellas continue to be actively developed. In the US, so many umbrella-related patents are being filed that the U.S. Patent Office employs four full-time examiners to assess them. As of 2008, the office registered 3000 active patents on umbrella-related inventions. Nonetheless, Totes, the largest American umbrella producer, has stopped accepting unsolicited proposals. Its director of umbrella development was reported as saying that while umbrellas are so ordinary that everyone thinks about them, "it's difficult to come up with an umbrella idea that hasn’t already been done."

While the predominant canopy shape of an umbrella is round, canopy shapes have been streamlined to improve aerodynamic response to wind. Examples include the stealth-shaped canopy of Rizotti (1996), scoop-shaped canopy of Lisciandro (2004), and teardrop-shaped canopies of Hollinger (2004).

In 2005 Gerwin Hoogendoorn, a Dutch industrial design student at the Delft University of Technology in the Netherlands, invented an aerodynamically streamlined storm umbrella (shaped much like a stealth plane) that can withstand wind force 10 (winds of up to 100 km/h or 62 mph) and will not turn inside out like a regular umbrella; it is also equipped with so-called 'eyesavers', which protect others from being accidentally wounded by the tips. Hoogendoorn's storm umbrella was nominated for and won several design awards and was featured on Good Morning America. The umbrella is sold in Europe as the Senz umbrella and is sold under license by Totes in the United States.

Additional Information

An umbrella is a portable, hand-held device that is used for protection against rain and sunlight. The modern umbrella consists of a circular fabric or plastic screen stretched over hinged ribs that radiate from a central pole. The hinged ribs permit the screen to be opened and closed so that the umbrella can be carried with ease when not in use.

Umbrellas in ancient Egypt, Mesopotamia, China, and India were used to protect important persons from the sun. They were often large and held by bearers, and they served as marks of honour and authority for the owner. The ancient Greeks helped introduce umbrellas into Europe as sunshades, and the Romans used them to protect against rain. The use of umbrellas disappeared in Europe during the Middle Ages but had reappeared in Italy by the late 16th century, where they were regarded as marks of distinction for the pope and clergy. By the 17th century the use of the umbrella had spread to France, and by the 18th century umbrellas were common throughout Europe. A small, dainty umbrella used for shading women’s faces from the sun became known as a parasol and was a standard element of fashionable women’s outdoor attire in the 18th and 19th centuries. The traditional construction of umbrellas using cane ribs was replaced in the 1850s by modern umbrellas using a very light but strong steel frame. Men in the West began carrying umbrellas for personal use in the mid-19th century. Men’s umbrellas were generally black, but in the 20th century men’s as well as women’s umbrellas were made in a variety of bright and colourful designs.




#1795 2023-06-05 13:24:15

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 46,703

Re: Miscellany

1798) Skyline

Gist

The outline of buildings, mountains, etc., against the background of the sky.

Summary

A skyline is the outline or shape of buildings or landforms viewed against the sky near the horizon. It can be created by a city's overall structure, by human intervention in a rural setting, or by nature where the sky meets buildings or the land.

City skylines serve as a pseudo-fingerprint as no two skylines are alike. For this reason, news and sports programs, television shows, and movies often display the skyline of a city to set a location. The term The Sky Line of New York City was first introduced in 1896, when it was the title of a color lithograph by Charles Graham for the color supplement of the New York Journal. Paul D. Spreiregen, FAIA, has called a city skyline "a physical representation of a city's facts of life ... a potential work of art ... its collective vista."

Features

High-rise buildings

High-rise buildings, including skyscrapers, are the fundamental feature of urban skylines. Both contours and cladding (brick or glass) make an impact on the overall appearance of a skyline.

Towers

Towers from different eras make for contrasting skylines.

San Gimignano, in Tuscany, Italy, has been described as having an "unforgettable skyline" with its competitively built towers.

Remote locations

Some remote locations have striking skylines, created either by nature or by sparse human settlement in an environment not conducive to housing significant populations.

Architectural design

Norman Foster served as architect for the Gherkin in London and the Hearst Tower in Midtown Manhattan, and these buildings have added to their cities' skylines.

Use in media

Skylines are often used as backgrounds and establishing shots in film, television programs, news websites, and in other forms of media.

Subjective ranking

Several services rank skylines based on their own subjective criteria. Emporis is one such service, which uses height and other data to give point values to buildings and add them together for skylines. The three cities it ranks highest are Hong Kong, Seoul and Shenzhen.




#1796 2023-06-06 13:30:47

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 46,703

Re: Miscellany

1799) Lateral Thinking

Gist

Lateral thinking—a term first coined by Edward de Bono in 1967—refers to a person's capacity to address problems by imagining solutions that cannot be arrived at via deductive or logical means. Or, to put it in simpler terms: the ability to develop original answers to difficult questions.

Summary

Lateral thinking is a method for solving problems by making unusual or unexpected connections between ideas.

Lateral thinking (horizontal thinking) is a form of ideation where designers approach problems by using reasoning that is disruptive or not immediately obvious. They use indirect and creative methods to think outside the box and see problems from radically new angles, gaining insights to help find innovative solutions.

Lateral Thinking helps Break Out of the Box

Many problems (e.g., mathematical ones) require the vertical, analytical, step-by-step approach we’re so familiar with. Called linear thinking, it’s based on logic, existing solutions and experience: You know where to start and what to do to reach a solution, like following a recipe. However, many design problems—particularly, wicked problems—are too complex for this critical path of reasoning. They may have several potential solutions. Also, they won’t offer clues; unless we realize our way of thinking is usually locked into a tight space and we need a completely different approach.

That’s where lateral thinking comes in – essentially thinking outside the box. “The box” refers to the apparent constraints of the design space and our limited perspective from habitually meeting problems head-on and linearly. Designers often don’t realize what their limitations are when considering problems – hence why lateral thinking is invaluable in (e.g.) the design thinking process. Rather than be trapped by logic and assumptions, you learn to stand back and use your imagination to see the big picture when you:

* Focus on overlooked aspects of a situation/problem.

* Challenge assumptions – to break free from traditional ways of understanding a problem/concept/solution.

* Seek alternatives – not just alternative potential solutions, but alternative ways of thinking about problems.

When you do this, you tap into disruptive thinking and can turn an existing paradigm on its head. Notable examples include:

* The mobile defibrillator and mobile coronary care – Instead of trying to resuscitate heart-attack victims once they’re in hospital, treat them at the scene.

* Uber – Instead of investing in a fleet of taxicabs, have drivers use their own cars.

Rather than focus on channeling more resources into established solutions to improve them, these innovators assessed their problems creatively and uncovered game-changing (and life-changing) insights.

How to Get Fresh Perspectives with Lateral Thinking

For optimal results, use lateral thinking early in the divergent stages of ideation. You want to reframe the problem and:

* Understand what’s constraining you and why.

* Find new strategies to solutions and places/angles to start exploring.

* Find the apparent edges of your design space and push beyond them – to reveal the bigger picture.

You can use various methods. A main approach is provocations: namely, to make deliberately false statements about an aspect of the problem/situation. This could be to question the norms through contradiction, distortion, reversal (i.e., of assumptions), wishful thinking or escapism.

Details

Lateral thinking is a manner of solving problems using an indirect and creative approach via reasoning that is not immediately obvious. It involves ideas that may not be obtainable using only traditional step-by-step logic. The term was first used in 1967 by Maltese psychologist Edward de Bono in his book The Use of Lateral Thinking. De Bono cites the Judgment of Solomon as an example of lateral thinking, where King Solomon resolves a dispute over the parentage of a child by calling for the child to be cut in half, and making his judgment according to the reactions that this order receives. Edward de Bono also links lateral thinking with humour, arguing it entails a switch-over from a familiar pattern to a new, unexpected one. It is this moment of surprise, generating laughter and new insight, which facilitates the ability to see a different thought pattern which initially was not obvious. According to de Bono, lateral thinking deliberately distances itself from the standard perception of creativity as "vertical" logic, the classic method for problem solving.

Critics have characterized lateral thinking as a pseudo-scientific concept, arguing de Bono's core ideas have never been rigorously tested or corroborated.

Methods

Lateral thinking has to be distinguished from critical thinking. Critical thinking is primarily concerned with judging the true value of statements and seeking errors whereas lateral thinking focuses more on the "movement value" of statements and ideas. A person uses lateral thinking to move from one known idea to new ideas. Edward de Bono defines four types of thinking tools:

* idea-generating tools intended to break current thinking patterns—routine patterns, the status quo
* focus tools intended to broaden where to search for new ideas
* harvest tools intended to ensure more value is received from idea generating output
* treatment tools that promote consideration of real-world constraints, resources, and support

Random entry idea generation

The thinker chooses an object at random, or a noun from a dictionary and associates it with the area they are thinking about. De Bono exemplifies this through the randomly-chosen word, "nose", being applied to an office photocopier, leading to the idea that the copier could produce a lavender smell when it was low on paper.
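The random-entry procedure can be sketched in a few lines (a toy illustration; the word list and function name are our own, not de Bono's):

```python
import random

def random_entry_prompt(focus, words, seed=None):
    """Force an association between the focus area and a randomly
    chosen noun, as in de Bono's random-entry technique."""
    rng = random.Random(seed)  # seeded only to make examples reproducible
    return f"{focus} + {rng.choice(words)}"

# Hypothetical noun list; in practice any dictionary noun will do.
nouns = ["nose", "cloud", "anchor", "ladder", "mirror", "river"]
print(random_entry_prompt("office photocopier", nouns, seed=7))
```

The value lies not in the pairing itself but in the ideas the forced association provokes, as in the "nose"/photocopier example above.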

Provocation idea generation

A provocation is a statement that we know is wrong or impossible but used to create new ideas. De Bono gives an example of considering river pollution and setting up the provocation, "the factory is downstream of itself", causing a factory to be forced to take its water input from a point downstream of its output, an idea which later became law in some countries. Provocations can be set up by the use of any of the provocation techniques—wishful thinking, exaggeration, reversal, escape, distortion, or arising. The thinker creates a list of provocations and then uses the most outlandish ones to move their thinking forward to new ideas.

Movement techniques

The purpose of movement techniques is to produce as many alternatives as possible in order to encourage new ways of thinking about both problems and solutions. The production of alternatives tends to produce many possible solutions to problems that seemed to have only one possible solution. One can move from a provocation to a new idea through the following methods: extract a principle, focus on the difference, moment to moment, positive aspects or special circumstances.

Challenge

A tool which is designed to ask the question, "Why?", in a non-threatening way: why something exists or why it is done the way it is. The result is a very clear understanding of "Why?", which naturally leads to new ideas. The goal is to be able to challenge anything at all, not just those things that are problematic. For example, one could challenge the handles on coffee cups: The reason for the handle seems to be that the cup is often too hot to hold directly; perhaps coffee cups could be made with insulated finger grips, or there could be separate coffee-cup holders similar to beer holders, or coffee shouldn't be so hot in the first place.

Concept formation

Ideas carry out concepts. This tool systematically expands the range and number of concepts in order to end up with a very broad range of ideas to consider.

Disproving

Based on the idea that the majority is always wrong (as suggested by Henrik Ibsen and by John Kenneth Galbraith), take anything that is obvious and generally accepted as "goes without saying", question it, take an opposite view, and try to convincingly disprove it. This technique is similar to de Bono's "Black Hat" of Six Thinking Hats, which looks at identifying reasons to be cautious and conservative.

Fractionation

The purpose of fractionation is to create alternative perceptions of problems and solutions by taking the commonplace view of a situation and breaking it into multiple alternative situations, in order to break away from the fixed view and see the situation from different angles, thus generating multiple possible solutions that can be synthesized into more comprehensive answers.

Problem solving

When something creates a problem, the performance or the status quo of the situation drops. Problem-solving deals with finding out what caused the problem and then figuring out ways to fix the problem. The objective is to get the situation to where it should be. For example, a production line has an established run rate of 1000 items per hour. Suddenly, the run rate drops to 800 items per hour. Ideas as to why this happened and solutions to repair the production line must be thought of, such as giving the worker a pay raise. A study on engineering students' abilities to answer very open-ended questions suggests that students showing more lateral thinking were able to solve the problems much quicker and more accurately.

Lateral problem "solving"

Lateral thinking will often produce solutions whereby the problem appears as "obvious" in hindsight. It will often lead to problems that you never knew you had, or it will solve simple problems that have a huge potential. For example, if a production line produced 1000 books per hour, lateral thinking may suggest that a drop in output to 800 would lead to higher quality and more motivated workers. Students have shown lateral thinking in their application of a variety of individual, unique concepts in order to solve complex problems.




#1797 2023-06-07 14:49:26

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 46,703

Re: Miscellany

1800) Bungalow

Gist

A house that usually has only one storey (= level), sometimes with a smaller upper storey set in the roof and windows that come out from the roof.

Summary

A bungalow is a single-storied house with a sloping roof, usually small and often surrounded by a veranda. The name derives from a Hindi word meaning “a house in the Bengali style” and came into English during the era of the British administration of India. In Great Britain the name became a derisive one because of the spread of poorly built bungalow-type houses there. The style, however, gained popularity in housing developments of American towns during the 1920s. Its general design—with high ceilings, large doors and windows, and shade-giving eaves or verandas—makes it especially well suited for hot climates, and bungalows are still frequently built as summer cottages or as homes in warm regions such as southern California.

Details

A bungalow is an Indian style of houses and architecture originating in the Bengal region. It can be described as a small house or cottage that is either single-story or has a second story built into a sloping roof (usually with dormer windows), and may be surrounded by wide verandas. The style has evolved over time and is found around the world.

The first house in England that was classified as a bungalow was built in 1869. In America the style was initially used for vacation homes, and it was most popular between 1900 and 1918, especially within the Arts and Crafts movement.

The term bungalow is derived from the word bangla and used elliptically to mean "a house in the Bengal style".

Design considerations

Bungalows are very convenient for the homeowner in that all living areas are on a single story and there are no stairs between living areas. A bungalow is well suited to persons with impaired mobility, such as the elderly or those in wheelchairs.

Neighborhoods of only bungalows offer more privacy than similar neighborhoods with two-story houses. As bungalows are one or one and a half stories, strategically planted trees and shrubs are usually sufficient to block the view of neighbors. With two-story houses, the extra height requires much taller trees to accomplish the same, and it may not be practical to place such tall trees close to the building to obscure the view from the second floor of the next door neighbor. Bungalows provide cost-effective residences. On the other hand, even closely spaced bungalows make for quite low-density neighborhoods, contributing to urban sprawl. In Australia, bungalows have broad verandas to shade the interior from intense sun. But as a result they are often excessively dark inside, requiring artificial light even in daytime.

Cost and space considerations

On a per unit area basis (e.g. per square meter or per square foot), bungalows are more expensive to construct than two-story houses, because the same foundation and roof are required for a smaller living area.
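As a rough back-of-envelope illustration of this argument (all cost figures below are hypothetical, not taken from the text): foundation and roof costs scale with the building's footprint, while wall and interior costs scale with living area, so a second story spreads the fixed footprint costs over twice the floor space.

```python
# Hypothetical cost model: foundation and roof scale with the footprint,
# so adding a second story lowers the cost per square metre of living area.
FOUNDATION_AND_ROOF = 400   # assumed cost per m^2 of footprint
WALLS_AND_INTERIOR = 600    # assumed cost per m^2 of living area

def cost_per_m2(footprint_m2: float, storeys: int) -> float:
    """Total construction cost divided by total living area."""
    living_area = footprint_m2 * storeys
    total = FOUNDATION_AND_ROOF * footprint_m2 + WALLS_AND_INTERIOR * living_area
    return total / living_area

print(cost_per_m2(100, 1))  # bungalow: 1000.0 per m^2
print(cost_per_m2(100, 2))  # two-story house: 800.0 per m^2
```

With these assumed numbers, the bungalow costs 25% more per square metre of living space than a two-story house on the same foundation.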

Although the 'footprint' of a bungalow is often a simple rectangle, any foundation is theoretically possible. For bungalows with brick walls, the windows are often positioned high, and are close to the roof. This architectural technique avoids the need for special arches or lintels to support the brick wall above the windows. However, in two-story houses, there is no choice but to continue the brick wall above the window.

By region

Australia

From 1891 the Federation Bungalow style swept across Australia, first in Camberwell, Victoria, and through Sydney's northern suburbs after 1895. The developer Richard Stanton built in Federation Bungalow style first in Haberfield, New South Wales, the first Garden Suburb (1901), and then in Rosebery, New South Wales (1912). Beecroft, Hornsby and Lindfield contain many examples of Federation Bungalows built between 1895 and 1920.

From about 1908 to the 1930s, the California bungalow style was very popular in Australia with a rise of interest in single-family homes and planned urban communities. The style first saw widespread use in the suburbs of Sydney. It then spread throughout the Australian states and New Zealand.

In South Australia, the suburb of Colonel Light Gardens contains many well-preserved bungalow developments.

Bangladesh

In rural Bangladesh, the concept is often called Bangla ghar ("Bengali-style house") and remains popular. The main construction material is corrugated steel sheets or red clay tiles, while past generations used wood, bamboo, and khar straw. Straw roofs helped keep the house cooler during hot summer days.

Canada

In Canada, the term bungalow refers to a single-family dwelling that is one story high.

India

In India, the term bungalow or villa refers to any single-family unit, as opposed to an apartment building, which is the norm for Indian middle-class city living. The custom for an Indian bungalow is one story, but as time progressed many families built larger two-story houses to accommodate humans and pets. The area of bungalows built in the 1920s–1930s in New Delhi is now known as Lutyens' Bungalow Zone and is an architectural heritage area. In Bandra, a suburb of India's commercial capital Mumbai, numerous colonial-era bungalows exist; they are threatened with demolition and replacement by ongoing development.

In a separate usage, the dak bungalows formerly used by the British mail service have been adapted for use as centers of local government or as rural hostels.

Ireland

The bungalow is the most common type of house built in the Irish countryside. During the Celtic Tiger years of the late 20th century, single-story bungalows declined as a type of new construction, and residents built more two-story or dormer bungalows. There was a trend in both the Republic of Ireland and Northern Ireland of people moving into rural areas and buying their own plots of land. Often these plots were large, so a one-story bungalow was quite practical, particularly for retirees.

Germany

In Germany, a bungalow refers to a single-story house with a flat roof. This building style was most popular during the 1960s. Both criteria are mentioned in contemporary literature, e.g. Landhaus und Bungalow by Klara Trost (1961).

Singapore and Malaysia

"Moonlight" bungalow (now known as the Jim Thompson cottage) is a mock Tudor-styled mansion located in the Cameron Highlands, Pahang, Malaysia. The pre-war house is still a draw for the many who have had an interest in the mysterious disappearance of Jim Thompson in the Cameron Highlands.

Another heritage bungalow is located in the heart of Singapore's civic district; today it serves as part of an "urban plaza" where upmarket furnishings from Europe and America are promoted.

In Singapore and Malaysia, the term bungalow is sometimes used to refer to a house that was built during the colonial era. The structures were constructed "from the early 19th century until the end of World War II." They were built by the British to house their "military officers, High Court judges and other members of the colonial society's great and good."

At present, there is still a high demand for colonial-era bungalows in Singapore and Malaysia. Most of the units are used as residences. Over the years, some have been transformed into offices, hotels, galleries, spas and restaurants.

In the post-colonial period, the term bungalow has been adapted and used to refer to any stand-alone residence, regardless of size, architectural style, or era in which it was built. Calling a house a bungalow often carries with it connotations of the price and status of the residence, and thus the wealth of its owner. Local real estate lingo commonly includes the word "bungalow" when referring to residences that are more normally described as "detached", "single-family homes", or even "mansions" in other countries. The pervasiveness of the word in the local jargon has resulted in bungalow being imported into the Malay language as the word banglo with the same meaning.

South Africa

In South Africa, the term bungalow refers to a single story, detached house. It may be implied that it is a temporary residence, such as a holiday home or student housing.

United Kingdom

The first two bungalows in England were built in Westgate-on-Sea in 1869 or 1870. A bungalow was a prefabricated single-story building used as a seaside holiday home. Manufacturers included Boulton & Paul Ltd, who made corrugated iron bungalows as advertised in their 1889 catalogue, which were erected by their men on the purchaser's light brickwork foundation. Examples include Woodhall Spa Cottage Museum, and Castle Bungalow at Peppercombe, North Devon, owned by the Landmark Trust; it was built by Boulton and Paul in the 1920s. Construction of this type of bungalow peaked towards the end of the decade, to be replaced by brick construction.

Bungalows became popular in the United Kingdom between the two World Wars and very large numbers were built, particularly in coastal resorts, giving rise to the pejorative adjective "bungaloid", first found in the Daily Express in 1927: "Hideous allotments and bungaloid growth make the approaches to any city repulsive". Many villages and seaside resorts have large estates of 1960s bungalows, usually occupied by retired people. The typical 1930s bungalow is square in plan, with those of the 1960s more likely to be oblong. It is rare for the term "bungalow" to be used in British English to denote a dwelling having other than a single story, in which case the term "chalet bungalow" is used.

Styles

Airplane bungalow

Although stylistically related to others, the special characteristic of the Airplane Bungalow was its single room on a second story, surrounded by windows, designed as a sleeping room in summer weather with all-around access to breezes. This variant developed in California in the 1910s, had appeared in El Paso, Texas, by April 1916, and became most prevalent in the western half of the U.S., and southwestern and western Canada.

American Craftsman bungalow

The American Craftsman bungalow typified the styles of the American Arts and Crafts movement, with common features usually including low-pitched roof lines on a gabled or hipped roof, deeply overhanging eaves, exposed rafters or decorative brackets under the eaves, and a front porch or veranda beneath an extension of the main roof.

Sears Company and The Aladdin Company were two of the manufacturing companies that produced pre-fab kits and sold them from catalogues for construction on sites during the turn of the 20th century.

Bungalow colony

A special use of the term bungalow developed in the greater New York City area, between the 1930s and 1970s, to denote a cluster of small rental summer homes, usually in the Catskill Mountains in the area known as the Borscht Belt. First- and second-generation Jewish-American families were especially likely to rent such houses. The old bungalow colonies continue to exist in the Catskills, and are occupied today chiefly by Hasidic Jews.

California bungalow

The California bungalow was a widely popular one-and-a-half-story variation on the bungalow in the United States from 1910 to 1925. It was also widely popular in Australia during the period 1910–1940.

Chalet bungalow

A chalet bungalow is a bungalow with a second-story loft. The loft may be extra space over the garage. It is often space to the side of a great room with a vaulted ceiling area. The building is marketed as a bungalow with loft because the main living areas of the house are on one floor. All the convenience of single-floor living still applies and the loft is not expected to be accessed on a daily basis.

Some have extra bedrooms in the loft or attic area. Such buildings are really one-and-a-half stories and not bungalows, and are referred to in British English as "chalet bungalows" or "dormer bungalows". "Chalet bungalow" is also used in British English for houses where the area enclosed within the pitched roof contains rooms, even if this comprises a large part of the living area and is fully integrated into the fabric of the property.

True bungalows do not use the attic. Because the attic is not used, the roof pitch can be quite shallow, constrained only by snow load considerations.

Chicago bungalow

The majority of Chicago bungalows were built between 1910 and 1940. They were typically constructed of brick (some including decorative accents), with one and a half stories and a full basement. With more than 80,000 bungalows, the style represents nearly one-third of Chicago's single-family housing stock. One primary difference between the Chicago bungalow and other types is that the gables are parallel to the street, rather than perpendicular. Like many other local houses, Chicago bungalows are relatively narrow, being an average of 20 feet (6.1 m) wide on a standard 24-foot (7.3 m) or 25-foot (7.6 m) wide city lot. Their veranda (porch) may be either open or partially enclosed (if enclosed, it may further be used to extend the interior rooms).

Michigan bungalow

There are numerous examples of Arts and Crafts bungalows built from 1910 to 1925 in the metro-Detroit area, including Royal Oak, Pleasant Ridge, Hazel Park, Highland Park and Ferndale. Keeping in line with the principles of the Arts and Crafts movement, the bungalows were constructed using local building materials.

Milwaukee bungalow

A large fraction of the older residential buildings in Milwaukee, Wisconsin, are bungalows in a similar Arts and Crafts style to those of Chicago, but usually with the gable perpendicular to the street. Also, many Milwaukee bungalows have white stucco on the lower portion of the exterior.

Overwater bungalow

The overwater bungalow is a form of, mainly high end, tourist accommodation inspired by the traditional stilt houses of South Asia and the Pacific. The first overwater bungalows were constructed on the French Polynesian island of Ra’iātea in 1967 by three American hotel owners, Jay Carlisle, Donald McCallum and Hugh Kelley.

They had wanted to attract tourists to Ra’iātea, and to their hotel, but the island had no real beaches and so to overcome this handicap they decided to build hotel rooms directly on the water using large wooden poles. These structures they called overwater bungalows and they were an immediate success.

By the seventies tourism to French Polynesia and the Pacific Islands in general was booming and overwater bungalows, sometimes by then called water villas, became synonymous with the region, particularly for honeymoons and romantic getaways. Soon this new tradition spread to many other parts of Asia, the Maldives being the best example, and other parts of the world including, in the last twenty years, many parts of the Caribbean. The first overwater bungalow resort in Mexico opened in 2016.

Their proliferation would have been much greater but for the fact that overwater bungalows need certain conditions to be structurally viable, namely that the surrounding water be consistently very calm: ideally the kind of water found in the lagoons and atolls of the Maldives or Bora Bora or, at the very least, in an extremely sheltered bay. Therefore, despite their popularity, they remain something of a touristic novelty.

Raised bungalow

A raised bungalow is one in which the basement is partially above ground. The benefit is that more light can enter the basement with above ground windows in the basement. A raised bungalow typically has a foyer at ground level that is halfway between the first floor and the basement. Thus, it further has the advantage of creating a foyer with a very high ceiling without the expense of raising the roof or creating a skylight. Raised bungalows often have the garage in the basement. Because the basement is not that deep, and the ground must slope downwards away from the building, the slope of the driveway is quite shallow. This avoids the disadvantage of steep driveways found in most other basement garages. Bungalows without basements can still be raised, but the advantages of raising the bungalow are much less.

Ranch bungalow

A ranch bungalow is a bungalow organized so that bedrooms are on one side and "public" areas (kitchen, living/dining/family rooms) are on the other side. If there is an attached garage, the garage is on the public side of the building so that a direct entrance is possible, when this is allowed by legislation. On narrower lots, public areas are at the front of the building and such an organization is typically not called a "ranch bungalow". Such buildings are often smaller and have only two bedrooms in the back as required.

Ultimate bungalow

The term ultimate bungalow is commonly used to describe a very large and detailed Craftsman-style house in the United States. The design is usually associated with such California architects as Greene and Greene, Bernard Maybeck, and Julia Morgan.




#1798 2023-06-08 13:25:44

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 46,703

Re: Miscellany

1801) Boxing

Summary

Boxing (also known as "western boxing" or "pugilism") is a combat sport and a martial art in which two people, usually wearing protective gloves and other protective equipment such as hand wraps and mouthguards, throw punches at each other for a predetermined amount of time in a boxing ring.

Although the term boxing is commonly attributed to "western boxing", in which only fists are involved, the sport has developed in different ways in different geographical areas and cultures of the world. In global terms, "boxing" today is also a set of combat sports focused on striking, in which two opponents face each other in a fight using at least their fists, and possibly other actions such as kicks, elbow strikes, knee strikes, and headbutts, depending on the rules. Some of these variants are bare-knuckle boxing, kickboxing, muay thai, lethwei, savate, and sanda. Boxing techniques have been incorporated into many martial arts, military systems, and other combat sports.

Though humans have fought in hand-to-hand combat since the dawn of history, the origin of the sport of boxing is unknown. According to some sources, boxing has prehistoric origins in present-day Ethiopia, where it appeared in the sixth millennium BC. When the Egyptians invaded Nubia they learned the art of boxing from the local population and took the sport to Egypt, where it became popular. From Egypt, boxing spread to other countries, including Greece, eastward to Mesopotamia, and northward to Rome.

The earliest visual evidence of any type of boxing comes from Egypt and Sumer and can be seen in Sumerian carvings from the third and second millennia BC. The earliest evidence of boxing rules dates back to Ancient Greece, where boxing was established as an Olympic game in 688 BC. Boxing evolved from 16th- to 18th-century prizefights, largely in Great Britain, to the forerunner of modern boxing in the mid-19th century with the 1867 introduction of the Marquess of Queensberry Rules.

Amateur boxing is both an Olympic and Commonwealth Games sport and is a standard fixture in most international games—it also has its world championships. Boxing is overseen by a referee over a series of one-to-three-minute intervals called "rounds".

A winner can be resolved before the completion of the rounds when a referee deems an opponent incapable of continuing, disqualifies an opponent, or the opponent resigns. When the fight reaches the end of its final round with both opponents still standing, the judges' scorecards determine the victor. In case both fighters gain equal scores from the judges, a professional bout is considered a draw. In Olympic boxing, because a winner must be declared, judges award the contest to one fighter on technical criteria.

Details

Boxing is a sport, both amateur and professional, involving attack and defense with the fists. Boxers usually wear padded gloves and generally observe the code set forth in the Marquess of Queensberry rules. Matched in weight and ability, boxing contestants try to land blows hard and often with their fists, each attempting to avoid the blows of the opponent. A boxer wins a match either by outscoring the opponent (points can be tallied in several ways) or by rendering the opponent incapable of continuing the match. Bouts range from 3 to 12 rounds, each round normally lasting three minutes.

The terms pugilism and prizefighting in modern usage are practically synonymous with boxing, although the first term indicates the ancient origins of the sport in its derivation from the Latin pugil, “a boxer,” related to the Latin pugnus, “fist,” and derived in turn from the Greek pyx, “with clenched fist.” The term prizefighting emphasizes pursuit of the sport for monetary gain, which began in England in the 17th century.

History

Early years

Boxing first appeared as a formal Olympic event in the 23rd Olympiad (688 BCE), but fist-fighting contests must certainly have had their origin in mankind’s prehistory. The earliest visual evidence for boxing appears in Sumerian relief carvings from the 3rd millennium BCE. A relief sculpture from Egyptian Thebes (c. 1350 BCE) shows both boxers and spectators. The few extant Middle Eastern and Egyptian depictions are of bare-fisted contests with, at most, a simple band supporting the wrist; the earliest evidence of the use of gloves or hand coverings in boxing is a carved vase from Minoan Crete (c. 1500 BCE) that shows helmeted boxers wearing a stiff plate strapped to the fist.

The earliest evidence of rules for the sport comes from ancient Greece. These ancient contests had no rounds; they continued until one man either acknowledged defeat by holding up a finger or was unable to continue. Clinching (holding an opponent at close quarters with one or both arms) was strictly forbidden. Contests were held outdoors, which added the challenge of intense heat and bright sunlight to the fight. Contestants represented all social classes; in the early years of the major athletic festivals, a preponderance of the boxers came from wealthy and distinguished backgrounds.

The Greeks considered boxing the most injurious of their sports. A 1st-century-BCE inscription praising a pugilist states, “A boxer’s victory is gained in blood.” In fact, Greek literature offers much evidence that the sport caused disfigurement and, occasionally, even death. An amazingly bloody bout is recounted by Homer in the Iliad (c. 675 BCE).

By the 4th century BCE, the simple ox-hide thongs described in the Iliad had been replaced by what the Greeks called “sharp thongs,” which had a thick strip of hard leather over the knuckles that made them into lacerative weapons. Although the Greeks used padded gloves for practice, not dissimilar from the modern boxing glove, these gloves had no role in actual contests. The Romans developed a glove called the caestus (cestus) that is seen in Roman mosaics and described in their literature; this glove often had lumps of metal or spikes sewn into the leather. The caestus is an important feature in a boxing match in Virgil’s Aeneid (1st century BCE). The story of the match between Dares and Entellus is majestically told in this passage from the pugilism article in the 11th edition of Encyclopædia Britannica:

Further on we find the account of the games on the occasion of the funeral of Anchises, in the course of which Dares, the Trojan, receiving no answer to his challenge from the Sicilians, who stood aghast at his mighty proportions, claims the prize; but, just as it is about to be awarded him, Entellus, an aged but huge and sinewy Sicilian, arises and casts into the arena as a sign of his acceptance of the combat the massive cesti, all stained with blood and brains, which he has inherited from King Eryx, his master in the art of boxing. The Trojans are now appalled in their turn, and Dares, aghast at the fearful implements, refused the battle, which, however, is at length begun after Aeneas has furnished the heroes with equally matched cesti. For some time the young and lusty Dares circles about his gigantic but old and stiff opponent, upon whom he rains a torrent of blows which are avoided by the clever guarding and dodging of the Sicilian hero. At last Entellus, having got his opponent into a favourable position, raises his tremendous right hand on high and aims a terrible blow at the Trojan’s head; but the wary Dares deftly steps aside, and Entellus, missing his adversary altogether, falls headlong by the impetus of his own blow, with a crash like that of a falling pine. Shouts of mingled exultation and dismay break from the multitude, and the friends of the aged Sicilian rush forward to raise their fallen champion and bear him from the arena; but, greatly to the astonishment of all, Entellus motions them away and returns to the fight more keenly than before. The old man’s blood is stirred, and he attacks his youthful enemy with such furious and headlong rushes, buffeting him grievously with both hands, that Aeneas put an end to the battle, though barely in time to save the discomfited Trojan from being beaten into insensibility.

Roman boxing took place in both the sporting and gladiatorial arenas. Roman soldiers often boxed each other for sport and as training for hand-to-hand combat. The gladiatorial boxing contests usually ended only with the death of the losing boxer. With the rise of Christianity and the concurrent decline of the Roman Empire, pugilism as entertainment apparently ceased to exist for many centuries.

Boxing history picks up again with a formal bout recorded in Britain in 1681, and by 1698 regular pugilistic contests were being held in the Royal Theatre of London. The fighters performed for whatever purses were agreed upon plus stakes (side bets), and admirers of the combatants wagered on the outcomes. These matches were fought without gloves and, for the most part, without rules. There were no weight divisions; thus, there was just one champion, and lighter men were at an obvious disadvantage. Rounds were designated, but a bout was usually fought until one participant could no longer continue. Wrestling was permitted, and it was common to fall on a foe after throwing him to the ground. Until the mid 1700s it was also common to hit a man when he was down.

Although boxing was illegal, it became quite popular, and by 1719 the prizefighter James Figg had so captured the public’s imagination that he was acclaimed champion of England, a distinction he held for some 15 years. One of Figg’s pupils, Jack Broughton, is credited with taking the first steps toward boxing’s acceptance as a respectable athletic endeavour. One of the greatest bare-knuckle prizefighters in history, Broughton devised the modern sport’s first set of rules in 1743, and those rules, with only minor changes, governed boxing until they were replaced by the more detailed London Prize Ring rules in 1838. It is said that Broughton sought such regulations after one of his opponents died as a result of his fight-related injuries.

Broughton discarded the barroom techniques that his predecessors favoured and relied primarily on his fists. While wrestling holds were still permitted, a boxer could not grab an opponent below the waist. Under Broughton’s rules, a round continued until a man went down; after 30 seconds he had to face his opponent (square off), standing no more than a yard (about a metre) away, or be declared beaten. Hitting a downed opponent was also forbidden. Recognized as the “Father of Boxing,” Broughton attracted pupils to the sport by introducing “mufflers,” the forerunners of modern gloves, to protect the fighter’s hands and the opponent’s face. (Ironically, these protective devices would prove in some ways to be more dangerous than bare fists. When boxers wear gloves, they are more likely to aim for their opponent’s head, whereas, when fighters used their bare hands, they tended to aim for softer targets to avoid injuring the hand. Thus, the brain damage associated with boxing can be traced in part to the introduction of the padded boxing glove.)

After Jack Slack beat Broughton in 1750 to claim the championship, fixed fights (fights in which outcomes were predetermined) became common, and boxing again experienced a period of decline, though there were exceptions—pugilists Daniel Mendoza and Gentleman John Jackson were great fighters of the late 1700s. Mendoza weighed only 160 pounds (73 kg), and his fighting style therefore emphasized speed over brute strength. Jackson, who eventually defeated Mendoza to claim the championship, contributed to the transformation of boxing by interesting members of the English aristocracy in the sport, thus bringing it a degree of respectability. During the early to mid 1800s, some of the greatest British champions, including Jem Belcher, Tom Cribb, Ben Caunt, and Jem Mace, came to symbolize ideals of manliness and honour for the English.

After the British Pugilists’ Protective Association initiated the London Prize Ring rules in 1838, the new regulations spread quickly throughout Britain and the United States. First used in a championship fight in 1839 in which James (“Deaf”) Burke lost the English title to William Thompson (“Bendigo”), the new rules provided for a ring 24 feet (7.32 metres) square bounded by two ropes. When a fighter went down, the round ended, and he was helped to his corner. The next round would begin 30 seconds later, with each boxer required to reach, unaided, a mark in the centre of the ring. If a fighter could not reach that mark by the end of 8 additional seconds, he was declared the loser. Kicking, gouging, butting with the head, biting, and low blows were all declared fouls.

The era of Regency England was the peak of British boxing, when the champion of bare-knuckle boxing in Britain was considered to be the world champion as well. Britain’s only potential rival in pugilism was the United States. Boxing had been introduced in the United States in the late 1700s but began to take root there only about 1800 and then only in large urban areas such as Boston, New York City, Philadelphia, and to some extent New Orleans. Most of the fighters who fought in the United States had emigrated from either England or Ireland; because boxing was then considered to be the national sport of Britain, there were few American-born fighters of the time.

Boxing’s hold upon the British imagination is evidenced in the many idioms taken from pugilism that entered the English language during this period. Phrases such as come up to scratch (to meet the qualifications), start from scratch (to start over from the beginning), and not up to the mark (not up to the necessary level) all refer to the line that was scratched in the dirt to divide the ring. At the beginning of each round, both boxers were required to put their toes up against the line to prove they were fit enough for the bout. If they were unable to do so, they were said to be unable to come up to scratch, or to the mark. The term draw, meaning a tied score, derives from the stakes that held the rope surrounding the ring: when the match was over, the stakes were “drawn” out from the ground, and eventually the finality of taking down the ropes came to stand for the end of an inconclusive fight. Further, these stakes were also the basis behind the monetary meaning of stakes. In early prizefights a bag of money, which would go to the winner of the bout, was hung from one of the stakes—thus high stakes and stake money. As for the ropes held by the stakes, to be against the ropes connotes a posture of defense against an aggressive opponent. And any telling point in an argument is spoken of as being a knockout blow, and a beautiful woman as being a knockout.

The Queensberry rules

Though the London Prize Ring rules did much to help boxing, the brawling that distinguished old-time pugilism continued to alienate most of England’s upper class, and it became apparent that still more revisions were necessary to attract a better class of patron. John Graham Chambers of the Amateur Athletic Club devised a new set of rules in 1867 that emphasized boxing technique and skill. Chambers sought the patronage of John Sholto Douglas, the 9th marquess of Queensberry, who lent his name to the new guidelines. The Queensberry rules differed from the London rules in four major respects: contestants wore padded gloves; a round consisted of three minutes of fighting followed by a minute of rest; wrestling was illegal; and any fighter who went down had to get up unaided within 10 seconds—if a fighter was unable to get up, he was declared knocked out, and the fight was over. The first weight divisions were also introduced during this period.

The new rules at first were scorned by professionals, who considered them unmanly, and championship bouts continued to be fought under London Prize Ring rules. But many young pugilists preferred the Queensberry guidelines and fought accordingly. Prominent among these was James (“Jem”) Mace, who won the English heavyweight title under the London rules in 1861. Mace’s enthusiasm for gloved fighting did much to popularize the Queensberry rules.

In addition to the shift in rules, dominance in the ring began to slowly shift to American fighters. The change started, perhaps, with American fighters competing in Britain during the Regency era. Two such early fighters were former slaves—Bill Richmond and his protégé Tom Molineaux. Both Richmond and Molineaux fought against the top English pugilists of the day; indeed, Molineaux fought Tom Cribb twice for the championship title, in 1810 and 1811. Soon British champions began touring the United States and fighting American opponents.

Despite the change to the Queensberry rules, boxing was losing the social acceptability it had gained in England—partly because of changing middle-class values and an Evangelical religious revival intensely concerned about sinful pastimes. Boxing, after all, had close associations with such unsavoury practices as drinking and gambling. Further, the violence of boxing was not confined to the boxers—the spectators themselves, who often bet heavily on matches, were prone to crowd into the ring and fight as well. Large brawls frequently ensued.

This energy, conversely, suited the American scene and the millions of new immigrants. Bouts were frequently promoted and perceived as ethnic grudge matches—for instance, between fighters from Ireland and those of American birth—and violence between ethnic gang members frequently broke out during and after such bouts. This was the heyday of such fighters as Yankee Sullivan, Tom Hyer, John Morrissey, and John Heenan.

British ascendancy in boxing came to an end with the rise of the Irish-born American boxer John L. Sullivan. Sullivan was the first American champion to be considered world champion as well. For a hundred years after Sullivan’s ascendancy, boxing champions, especially in the heavyweight division, tended to reside in the United States. It was Sullivan who was also responsible for aligning professional fighters on the side of the Queensberry rules. He claimed the world heavyweight championship in 1882 under the London bare-knuckle rules, and in 1889 he defended his title against Jake Kilrain in the last heavyweight championship bare-knuckle fight in the United States. Legal problems followed the Kilrain match, because bare-knuckle boxing had by that time been made illegal in every state, and so when Sullivan went up against James J. Corbett in 1892, he fought under Queensberry rules.

Boxing’s legal status

Rule changes in British boxing took into account not only shifts in societal norms but the inescapable fact that the sport was illegal. The primary task of proponents was to reconcile a putatively barbaric activity with a civilizing impulse. According to English law, as reported in William Blackstone’s Commentaries on the Laws of England (1765–69), “a tilt or tournament, the martial diversion of our ancestors is an unlawful act: and so are boxing and sword playing, the succeeding amusements of their posterity.” Perceived by the courts as a throwback to a less-civilized past, prizefighting was classified as an affray, an assault, and a riot. However, widespread public support for boxing in England led to legal laxity and inconsistency of enforcement.

In the United States the response was different. There a combination of Puritan values and fears of lawlessness often produced heightened judicial vigilance. As the frequency of prizefights increased, various states moved beyond general and sometimes vague statutes concerning assault and enacted laws that expressly forbade fistfights. In 1876 the Massachusetts State Supreme Court confirmed its intention to maintain a lawful and ordered society by ruling that “prizefighting, boxing matches, and encounters of that kind serve no useful purpose, tend to breaches of the peace, and are unlawful even when entered into by agreement and without anger or ill will.” Boxing thus took a course of evasion by bringing a greater appearance of order to the sport through changes in rules and by relocation to more lenient environments. Matches were frequently held in remote backwaters and were not openly publicized in order that the fighters might avoid arrest; barges were also used as fight venues because they could be located in waters outside U.S. legal jurisdiction and fights could be held unimpeded.

Eventually the ever-growing popularity and profitability of the sport combined with its hero-making potential forced a reconsideration of boxing’s value by many state authorities. The fact that the heavyweight champion of boxing came to symbolize American might and resolve, even dominance, had a significant impact on the sport’s acceptance. Likewise, its role as a training tool in World War I left many with the impression that boxing, if conducted under proper conditions, lent itself to the development of skill, courage, and character. Thus, the very authorities who had fined and jailed pugilists came to sanction and regulate their activities through state boxing and athletic commissions. State regulation became the middle ground between outright prohibition and unfettered legalization.

The boxing world:

Economic impetus

By the early 20th century, boxing had become a path to riches and social acceptance for various ethnic and racial groups. It was at this time that professional boxing became centred in the United States, with its expanding economy and successive waves of immigrants. Famine had driven thousands of Irish to seek refuge in the United States, and by 1915 the Irish had become a major force in professional boxing, producing such standouts as Terry McGovern, Philadelphia Jack O’Brien, Mike (“Twin”) Sullivan and his brother Jack, Packey McFarland, Jimmy Clabby, and Jack Britton, among others. German, Scandinavian, and central European fighters also emerged. Outstanding Jewish fighters such as Joe Choynski, Abe Attell, Battling Levinsky, and Harry Lewis were active before 1915 and were followed by a second wave consisting of Barney Ross, Benny Leonard, Sid Terris, Lew Tendler, Al Singer, Maxie Rosenbloom, and Max Baer. Italian Americans to reach prominence included Tony Canzoneri, Johnny Dundee, Rocky Marciano, Rocky Graziano, Carmen Basilio, and Willie Pep.

African Americans also turned to boxing to “fight their way to the top,” and foreign-born Black boxers such as Peter Jackson, Sam Langford, and George Dixon went to the United States to capitalize on the opportunities offered by boxing. Of African American boxers, Joe Gans won the world lightweight championship in 1902, and Jack Johnson became the first Black heavyweight champion in 1908. Before and after Jack Johnson won his title, prejudice against Black boxers was great. Gans was frequently forced by promoters to lose to or underperform against less-talented white fighters. Other Black fighters found it difficult or impossible to contend for championships, as white boxers refused to face them. For instance, John L. Sullivan refused to accept the challenges of any Black, and Sullivan’s successor, Jim Corbett, refused to fight the Black Australian Peter Jackson, although Jackson had fought Corbett to a 63-round draw before Corbett became champion. Jack Dempsey continued the tradition by refusing to meet the African American Harry Wills. During Jack Johnson’s reign as champion, he was hounded so relentlessly that he was forced to leave the United States.

Blacks nevertheless continued to pursue fistic careers, particularly during the Great Depression. In 1936 African American fighter Joe Louis was matched against German Max Schmeling in a bout that was invested with both racial and political symbolism. Louis lost to Schmeling by a 12th-round knockout. In 1937 Louis captured the world heavyweight title from James Braddock, but stated he would not call himself a champion until he had beaten Schmeling in a rematch. The fight occurred on June 22, 1938, and was seen on both sides of the Atlantic as a confrontation between the United States and Nazi Germany; the American press made much of the contest between an African American and an athlete seen as a representative of Aryan culture. Both Adolf Hitler and Franklin D. Roosevelt had personal meetings with their nation’s pugilist. Louis’s sensational 1st-round victory over Schmeling in the rematch was a pivotal moment for African American athletes, as Louis in victory quickly became a symbol of the triumph of world democracy for Americans of all races.

Other African Americans followed Louis, with Sugar Ray Robinson, Archie Moore, Ezzard Charles, Henry Armstrong, Ike Williams, Sandy Saddler, Emile Griffith, Bob Foster, Jersey Joe Walcott, Floyd Patterson, Sonny Liston, Muhammad Ali, Joe Frazier, and George Foreman winning world championships in various weight divisions. By the turn of the 21st century, African Americans were a dominant force in professional boxing, producing stars such as Sugar Ray Leonard, Marvelous Marvin Hagler, Thomas Hearns, Aaron Pryor, Larry Holmes, Michael Spinks, Mike Tyson, Evander Holyfield, Riddick Bowe, Pernell Whitaker, Shane Mosley, Bernard Hopkins, Roy Jones, Jr., and Floyd Mayweather, Jr.

Amateur boxing

In 1867 the first amateur boxing championships took place under the Queensberry rules. In 1880 the Amateur Boxing Association (ABA), the sport’s first amateur governing body, was formed in Britain, and in the following year the ABA staged its first official amateur championships.

The Amateur Athletic Union (AAU) of the United States was formed in 1888 and instituted its annual championships in boxing the same year. In 1926 the Chicago Tribune started another amateur competition called the Golden Gloves. It grew into a national competition rivaling that of the AAU. The United States of America Amateur Boxing Federation (now USA Boxing), which governs American amateur boxing, was formed after the 1978 passage of a law forbidding the AAU to govern more than one Olympic sport.

Amateur boxing spread rapidly to other countries and resulted in several major international tournaments taking place annually, biennially, or, as in the case of the Olympic Games, every four years. Important events include the European Games, the Commonwealth Games, the Pan American Games, the African Games, and the World Military Games. All international matches are controlled by the Association Internationale de Boxe Amateur (AIBA), formed in 1946.

Although the Soviet Union did not permit professional boxing, it joined the AIBA in 1950, entered the Olympics in 1952, and became one of the world’s strongest amateur boxing nations, along with such other communist countries as East Germany, Poland, Hungary, and Cuba. Cuba, which had produced many excellent professional boxers before professional sports were banned by Fidel Castro’s government, became a dominating force in international amateur boxing. The Cuban heavyweight Teófilo Stevenson won Olympic gold medals in 1972, 1976, and 1980, a feat that was duplicated by his countryman Felix Savón in 1992, 1996, and 2000. African countries advanced in boxing after acquiring independence in the 1950s and ’60s, and by the end of the 20th century Nigeria, Ghana, Tanzania, Egypt, and South Africa had excellent amateur boxing programs.

In the late 20th century boxing began attracting participants from the general public—especially because of its conditioning benefits—and by the early 1990s the sport’s popularity among white-collar professionals had given rise to a new form of amateur boxing known as white-collar boxing. While many of the matches were held for charity and featured no decisions, several regulatory groups were formed, and they established rules, sanctioned events, and ranked competitors.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Online

#1799 2023-06-09 00:53:59

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 46,703

Re: Miscellany

1802) Cradle

Gist

Cradle. noun : a bed or cot for a baby usually on rockers or pivots. : a frame to keep the bedclothes from contact with an injured part of the body.

Summary

A cradle is an infant bed which rocks but is non-mobile. It is distinct from a typical bassinet which is a basket-like container on free-standing legs with wheels. A carbonized cradle was found in the remains of Herculaneum left from the destruction of the city by the eruption of Mount Vesuvius in 79 CE.

A cradle is a bed for a baby that is usually designed to rock back and forth when pushed gently.

Details

A cradle, in furniture, is an infant’s bed of wood, wicker, or iron, having enclosed sides and suspended from a bar, slung upon pivots, or mounted on rockers. The rocking motion of the cradle is intended to lull the infant to sleep. The cradle is an ancient type of furniture, and its origins are unknown. Early cradles developed from hollowed-out tree trunks to oblong, lidless wood boxes, originally with apparently detachable rockers. Later cradles were paneled and carved, supported on pillars, inlaid, or mounted in gilded bronze.

Every period of furniture style has produced a variety of cradle types, from simple boxes to the elaborate draped state cradles of 18th-century France. While peasant babies slept in light wooden or wickerwork cradles, royal and noble medieval infants were rocked in cradles decorated with gold, silver, and precious stones. The wood cradles mounted on rockers so popular from the 15th through the 17th century were gradually superseded in the 18th and 19th centuries by wicker cradles that were slung between end supports in order to raise them higher from the ground. Adult cradles for the elderly and infirm, presumably from the 18th and 19th centuries, also survive. In much of the world, cradles were gradually replaced by the barred crib in the early 20th century.

Additional Information

A bassinet, bassinette, or cradle is a bed specifically for babies from birth to about four months. Bassinets are generally designed to work with fixed legs or caster wheels, while cradles are generally designed to provide a rocking or gliding motion. Bassinets and cradles are distinguished from Moses baskets and carry cots, which are designed to be carried and sit directly on the floor or furniture. After four months, babies are often transferred to a crib (North American usage) or cot (UK usage). In the United States, however, the bedside sleeper is the prevalent option, since they are generally bigger, recommended up to 6 months, and often used up to a year.

Design

A bassinet is typically a basket-like structure on free-standing legs, often with castors. A cradle is typically set in a fixed frame, but with the ability to rock or glide.

Use

Bassinet usage in the United States nearly doubled between 1992 and 2006, reaching 20%. More than 45% of babies up to two months of age slept in a bassinet, but by 5–6 months fewer than 10% did. In a hospital environment, a special form of sealed bassinet is used in a neonatal intensive care unit.

On many long-haul flights, most airlines provide a bassinet (attached to a bulkhead) to adults travelling with an infant, i.e., a child under the age of two. Use of the bassinet is restricted by the infant's size and weight, and bassinets need to be requested in advance from the airline. However, most US and Canadian airlines allocate bassinets only at the airport gate.

Research has shown that the mattress influences SIDS outcomes; a firm mattress lowers SIDS risk.

Some bassinets are designed to rock or swing freely, with many carers finding their child calmed by this action. The process of lulling the child to sleep may be accompanied by prerecorded or live performance of lullabies.

Stationary or portable

Although there are many variations, they fall generally into two categories:

* light and portable types sometimes called Moses baskets
* sturdier but less portable cradles

In both cases, they are generally designed to allow the resting baby to be carried from place to place. Within the home, they are often raised on a stand or other surface to reduce back strain when bending over to tend the baby. Wheeled frames to convert a bassinet into a pram or baby carriage are common.

Smart bassinets

Bassinets that automatically soothe crying babies with sound and motion have recently become available, starting with the Snoo in October 2016. The Snoo has been criticized for its high price; Graco, 4Moms, and other companies have introduced cheaper competing products.

Rolling

At three or four months of age babies are able to roll over by themselves; this means they could tip a bassinet over, so for safety they must be moved to an infant bed or toddler bed.




#1800 2023-06-09 21:14:06

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 46,703

Re: Miscellany

1803) Airport

Gist

An airport, also called air terminal, aerodrome, or airfield, is a site and installation for the takeoff and landing of aircraft. An airport usually has paved runways and maintenance facilities and serves as a terminal for passengers and cargo.

Details

An airport is an aerodrome with extended facilities, mostly for commercial air transport. Airports usually consist of a landing area, which comprises an aerially accessible open space including at least one operationally active surface such as a runway for a plane to take off and to land or a helipad, and often includes adjacent utility buildings such as control towers, hangars and terminals, to maintain and monitor aircraft. Larger airports may have airport aprons, taxiway bridges, air traffic control centres, passenger facilities such as restaurants and lounges, and emergency services. In some countries, the US in particular, airports also typically have one or more fixed-base operators, serving general aviation.

Operating airports is extremely complicated, with a complex system of aircraft support services, passenger services, and aircraft control services contained within the operation. Thus airports can be major employers, as well as important hubs for tourism and other kinds of transit. Because they are sites of operation for heavy machinery, a number of regulations and safety measures have been implemented in airports in order to reduce hazards. Additionally, airports have major local environmental impacts, as large sources of air pollution, noise pollution, and other disturbances, making them sites that acutely experience the environmental effects of aviation. Airports are also infrastructure that is vulnerable to extreme weather, to sea level rise caused by climate change, and to other disasters.

Terminology

The terms aerodrome, airfield, and airstrip also refer to airports, and the terms heliport, seaplane base, and STOLport refer to airports dedicated exclusively to helicopters, seaplanes, and short take-off and landing aircraft.

In colloquial use in certain environments, the terms airport and aerodrome are often interchanged. However, in general, the term airport may imply or confer a certain stature upon the aviation facility that other aerodromes may not have achieved. In some jurisdictions, airport is a legal term of art reserved exclusively for those aerodromes certified or licensed as airports by the relevant civil aviation authority after meeting specified certification criteria or regulatory requirements.

That is to say, all airports are aerodromes, but not all aerodromes are airports. In jurisdictions where there is no legal distinction between aerodrome and airport, which term to use in the name of an aerodrome may be a commercial decision. In US technical/legal usage, landing area is used instead of aerodrome, and airport means "a landing area used regularly by aircraft for receiving or discharging passengers or cargo".

Types of airports

An airport solely serving helicopters is called a heliport. An airport for use by seaplanes and amphibious aircraft is called a seaplane base. Such a base typically includes a stretch of open water for takeoffs and landings, and seaplane docks for tying-up.

An international airport has additional facilities for customs and passport control as well as incorporating all the aforementioned elements. Such airports rank among the most complex and largest of all built typologies, with 15 of the top 50 buildings by floor area being airport terminals.

Management

Smaller or less-developed airfields, which represent the vast majority, often have a single runway shorter than 1,000 m (3,300 ft). Larger airports for airline flights generally have paved runways of 2,000 m (6,600 ft) or longer. Skyline Airport in Inkom, Idaho, has a runway that is only 122 m (400 ft) long.

In the United States, the minimum dimensions for dry, hard landing fields are defined by the Federal Aviation Regulations (FAR) landing and takeoff field lengths. These include safety margins for landing and takeoff.

The longest public-use runway in the world is at Qamdo Bamda Airport in China. It has a length of 5,500 m (18,045 ft). The world's widest paved runway is at Ulyanovsk Vostochny Airport in Russia and is 105 m (344 ft) wide.

As of 2009, the CIA stated that there were approximately 44,000 "airports or airfields recognizable from the air" around the world, including 15,095 in the US, the US having the most in the world.

Airport ownership and operation

The Berlin Brandenburg Airport is publicly financed by the states of Berlin and Brandenburg and the Federal Republic of Germany.

Most of the world's large airports are owned by local, regional, or national government bodies who then lease the airport to private corporations who oversee the airport's operation. For example, in the UK the state-owned British Airports Authority originally operated eight of the nation's major commercial airports – it was subsequently privatized in the late 1980s, and following its takeover by the Spanish Ferrovial consortium in 2006, has been further divested and downsized to operating just Heathrow. Germany's Frankfurt Airport is managed by the quasi-private firm Fraport. While in India GMR Group operates, through joint ventures, Indira Gandhi International Airport and Rajiv Gandhi International Airport. Bengaluru International Airport is controlled by Fairfax .Chhatrapati Shivaji International Airport, Chaudhary Charan Singh International Airport, Mangalore International Airport, Thiruvananthapuram International Airport, Lokpriya Gopinath Bordoloi International Airport, Jaipur International Airport, Sardar Vallabhbhai Patel International Airport are operated by Adani Group through a Public Private Partnership wherein Adani Group, the operator pays Airports Authority of India, the owner of the airports, a predetermined sum of money based on the number of passengers handled by the airports. The rest of India's airports are managed by the Airports Authority of India. In Pakistan nearly all civilian airports are owned and operated by the Pakistan Civil Aviation Authority except for Sialkot International Airport which has the distinction of being the first privately owned public airport in Pakistan and South Asia.

In the US, commercial airports are generally operated directly by government entities or government-created airport authorities (also known as port authorities), such as the Los Angeles World Airports authority that oversees several airports in the Greater Los Angeles area, including Los Angeles International Airport.

In Canada, the federal authority, Transport Canada, divested itself of all but the remotest airports in 1999/2000. Now most airports in Canada are operated by individual legal authorities, such as Vancouver International Airport Authority (although still owned by Transport Canada); some airports, such as Boundary Bay Airport and Pitt Meadows Airport, are municipally owned.

Many US airports still lease part or all of their facilities to outside firms, who operate functions such as retail management and parking. All US commercial airport runways are certified by the FAA under the Code of Federal Regulations Title 14 Part 139, "Certification of Commercial Service Airports" but maintained by the local airport under the regulatory authority of the FAA.

Despite the reluctance to privatize airports in the US (notwithstanding the FAA's sponsorship of a privatization program since 1996), the government-owned, contractor-operated (GOCO) arrangement is the standard for the operation of commercial airports in the rest of the world.

Airport funding

The Airport and Airway Trust Fund (AATF), created by the Airport and Airway Development Act of 1970, finances aviation programs in the United States. The Airport Improvement Program (AIP), Facilities and Equipment (F&E), and Research, Engineering, and Development (RE&D) are the three major Federal Aviation Administration accounts financed by the AATF, which also pays for the FAA's Operations and Maintenance (O&M) account. Funding for these accounts depends on the tax revenues the aviation system generates: taxes on passenger tickets, fuel, and cargo, paid by passengers and airlines, help fund these accounts.

Airport revenue

Airport revenues are divided into three major parts: aeronautical revenue, non-aeronautical revenue, and non-operating revenue. Aeronautical revenue makes up 56%, non-aeronautical revenue 40%, and non-operating revenue 4% of the total revenue of airports.

Aeronautical revenue

Aeronautical revenue is generated through airline rents and landing, passenger service, parking, and hangar fees. Landing fees are charged per aircraft for landing an airplane on airport property. They are typically calculated from the landing weight and the size of the aircraft; rates vary, but most airports charge a fixed rate plus an extra charge for additional weight. Passenger service fees are charged per passenger for the facilities used on a flight, such as water, food, Wi-Fi, and entertainment, and are paid as part of the airline ticket price. Aircraft parking is also a major revenue source for airports: aircraft are parked for a certain amount of time before or after takeoff, and operators must pay for that time. Every airport has its own parking rates; for example, John F. Kennedy International Airport in New York City charges $45 per hour for a plane of 100,000 pounds, and the price increases with weight.
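The weight-plus-time pricing described above can be sketched as a short calculation. The rate structure below (a $45 base hourly rate up to 100,000 lb, with a surcharge per additional 10,000 lb) is a simplified illustration of the general scheme, not any airport's published tariff:

```python
import math

def parking_fee(weight_lb: float, hours: float,
                base_rate: float = 45.0,
                base_weight_lb: float = 100_000,
                extra_rate_per_10k_lb: float = 4.0) -> float:
    """Estimate an aircraft parking charge.

    Mirrors the scheme described in the text: a fixed hourly rate up
    to a base weight, plus a surcharge for each additional 10,000 lb
    (the surcharge rate here is a made-up illustration).
    """
    extra_lb = max(0.0, weight_lb - base_weight_lb)
    surcharge = math.ceil(extra_lb / 10_000) * extra_rate_per_10k_lb
    return (base_rate + surcharge) * hours

# A 100,000 lb aircraft parked for 2 hours pays only the base rate:
print(parking_fee(100_000, 2))   # 90.0
```

With a heavier aircraft the hourly rate grows with weight, e.g. `parking_fee(150_000, 1)` adds five surcharge increments to the base rate.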

Non-aeronautical revenue

Non-aeronautical revenue is gained through things other than aircraft operations. It includes lease revenue from compatible land-use development, non-aeronautical building leases, retail and concession sales, rental-car operations, parking, and in-airport advertising. Concession revenue, which airports earn through duty-free shops, bookstores, restaurants, and currency exchange, is one large part of non-aeronautical revenue. Car parking is a growing source of revenue for airports, as more people use the parking facilities of the airport; O'Hare International Airport in Chicago, for example, charges $2 per hour for every car.

Price regulation

Many airports are local monopolies. To prevent them from abusing their market power, governments regulate how much airports may charge to airlines, using price-cap regulation.

Landside and airside areas

Airports are divided into landside and airside zones. The landside is subject to fewer special laws and is part of the public realm, while access to the airside zone is tightly controlled. Landside facilities may include publicly accessible airport check-in desks, shops and ground transportation facilities. The airside area includes all parts of the airport around the aircraft, and the parts of the buildings that are restricted to staff, and sections of these extended to travelling, airside shopping, dining, or waiting passengers. Depending on the airport, passengers and staff must be checked by security or border control before being permitted to enter the airside zone. Conversely, passengers arriving from an international flight must pass through border control and customs to access the landside area, in which they exit, unless in airside transit. Most multi-terminal airports have (variously termed) flight/passenger/air connections buses, moving walkways and/or people movers for inter-terminal airside transit. For connecting passengers, airlines can arrange for baggage to be routed directly to the final destination. Most major airports issue a secure keycard, an airside pass, to employees to assist in the reliable, standardized and efficient verification of identity.

Facilities

A terminal is a building with passenger facilities. Small airports have one terminal. Large ones often have multiple terminals, though some large airports, like Amsterdam Airport Schiphol, still have one terminal. The terminal has a series of gates, which provide passengers with access to the plane.

The following facilities are essential for departing passengers:

* Check-in facilities, including a baggage drop-off
* Security clearance gates
* Passport control (for some international flights)
* Gates
* Waiting areas

The following facilities are essential for arriving passengers:

* Passport control (international arrivals only)
* Baggage reclaim facilities, often in the form of a carousel
* Customs (international arrivals only)
* A landside meeting place

For both sets of passengers, there must be a link between the passenger facilities and the aircraft, such as jet bridges or airstairs. There also needs to be a baggage handling system, to transport baggage from the baggage drop-off to departing planes, and from arriving planes to the baggage reclaim.

The area where the aircraft parks to load passengers and baggage is known as an apron or ramp (or incorrectly, "the tarmac").

Airports with international flights have customs and immigration facilities. However, as some countries have agreements that allow travel between them without customs and immigration, such facilities are not a definitive need for an international airport. International flights often require a higher level of physical security, although in recent years many countries have adopted the same level of security for international and domestic travel.

"Floating airports" are being designed which could be located out at sea and which would use designs such as pneumatic stabilized platform technology.

Airport security

Airport security normally requires baggage checks, metal screening of individual persons, and rules against any object that could be used as a weapon. Since the September 11 attacks and the Real ID Act of 2005, airport security has become dramatically tighter and stricter than ever before.

Products and services

Most major airports provide commercial outlets for products and services. Most of these companies, many of which are internationally known brands, are located within the departure areas. These include clothing boutiques and restaurants; in the US, sales at such outlets amounted to $4.2 billion in 2015. Prices charged for items sold at these outlets are generally higher than those outside the airport. However, some airports now regulate costs to keep them comparable to "street prices". This term is misleading, as prices often match the manufacturer's suggested retail price (MSRP) but are almost never discounted.

Many new airports include walkthrough duty-free stores that require air passengers to enter a retail store upon exiting security. Airport planners sometimes incorporate winding routes within these stores such that passengers encounter more goods as they walk towards their gate. Planners also install artworks next to the airport's shops in order to draw passengers into the stores.

Apart from major fast food chains, some airport restaurants offer regional cuisine specialties for those in transit so that they may sample local food without leaving the airport.

Some airport structures include on-site hotels built within or attached to a terminal building. Airport hotels have grown popular due to their convenience for transient passengers and easy accessibility to the airport terminal. Many airport hotels also have agreements with airlines to provide overnight lodging for displaced passengers.

Major airports in such countries as Russia and Japan offer miniature sleeping units within the airport that are available for rent by the hour. The smallest type is the capsule hotel popular in Japan. A slightly larger variety is known as a sleep box. An even larger type is provided by the company YOTEL.

Premium and VIP services

Airports may also contain premium and VIP services. The premium and VIP services may include express check-in and dedicated check-in counters. These services are usually reserved for first and business class passengers, premium frequent flyers, and members of the airline's clubs. Premium services may sometimes be open to passengers who are members of a different airline's frequent flyer program. This can sometimes be part of a reciprocal deal, as when multiple airlines are part of the same alliance, or as a ploy to attract premium customers away from rival airlines.

Sometimes these premium services will be offered to a non-premium passenger if the airline has made a mistake in handling of the passenger, such as unreasonable delays or mishandling of checked baggage.

Airline lounges frequently offer free or reduced-cost food, as well as alcoholic and non-alcoholic beverages. Lounges themselves typically have seating, showers, quiet areas, televisions, computers, Wi-Fi and Internet access, and power outlets that passengers may use for their electronic equipment. Some airline lounges employ baristas, bartenders and gourmet chefs.

Airlines sometimes operate multiple lounges within one airport terminal, offering ultra-premium customers, such as first class passengers, additional services not available to other premium customers. Multiple lounges may also prevent overcrowding of the lounge facilities.

Cargo and freight service

In addition to people, airports move cargo around the clock. Cargo airlines often have their own on-site and adjacent infrastructure to transfer parcels between ground and air.

Cargo terminal facilities are areas where export cargo at international airports is stored after customs clearance and before loading onto the aircraft. Similarly, offloaded import cargo must be held in bond until the consignee takes delivery. Areas must also be set aside for examination of export and import cargo by the airport authorities. Designated areas or sheds may be assigned to airlines or freight forwarding agencies.

Every cargo terminal has a landside and an airside. The landside is where the exporters and importers through either their agents or by themselves deliver or collect shipments while the airside is where loads are moved to or from the aircraft. In addition, cargo terminals are divided into distinct areas – export, import, and interline or transshipment.

Access and onward travel

Airports require parking lots for passengers who may leave their cars at the airport for long periods of time. Large airports also have car-rental firms, taxi ranks, bus stops and sometimes a train station.

Many large airports are located near railway trunk routes for seamless connection of multimodal transport, for instance Frankfurt Airport, Amsterdam Airport Schiphol, London Heathrow Airport, Tokyo Haneda Airport, Tokyo Narita Airport, Hamad International Airport, London Gatwick Airport and London Stansted Airport. It is also common to connect an airport and a city with rapid transit, light rail lines or other non-road public transport systems. Examples include the AirTrain JFK at John F. Kennedy International Airport in New York, the Link light rail that runs from downtown Seattle to Seattle–Tacoma International Airport, and the Silver Line at Boston's Logan International Airport, operated by the Massachusetts Bay Transportation Authority (MBTA). Such a connection lowers the risk of missed flights due to traffic congestion. Large airports usually also have access via controlled-access highways ('freeways' or 'motorways'), from which motor vehicles enter either the departure loop or the arrival loop.

Internal transport

The distances passengers need to move within a large airport can be substantial. It is common for airports to provide moving walkways, buses, and rail transport systems. Some airports like Hartsfield–Jackson Atlanta International Airport and London Stansted Airport have a transit system that connects some of the gates to a main terminal. Airports with more than one terminal have a transit system to connect the terminals together, such as John F. Kennedy International Airport, Mexico City International Airport and London Gatwick Airport.

Airport operations

There are three types of surface that aircraft operate on:

* Runways, for takeoff and landing
* Taxiways, where planes "taxi" (transfer to and from a runway)
* Apron or ramp: a surface where planes are parked, loaded, unloaded or refuelled.

Air traffic control

Air traffic control (ATC) is the task of managing aircraft movements and making sure they are safe, orderly and expeditious. At the largest airports, air traffic control is a series of highly complex operations that requires managing frequent traffic that moves in all three dimensions.

A "towered" or "controlled" airport has a control tower where the air traffic controllers are based. Pilots are required to maintain two-way radio communication with the controllers, and to acknowledge and comply with their instructions. A "non-towered" airport has no operating control tower and therefore two-way radio communications are not required, though it is good operating practice for pilots to transmit their intentions on the airport's common traffic advisory frequency (CTAF) for the benefit of other aircraft in the area. The CTAF may be a Universal Integrated Community (UNICOM), MULTICOM, Flight Service Station (FSS), or tower frequency.

The majority of the world's airports are small facilities without a tower. Not all towered airports have 24/7 ATC operations. In those cases, non-towered procedures apply when the tower is not in use, such as at night. Non-towered airports come under area (en-route) control. Remote and virtual tower (RVT) is a system in which ATC is handled by controllers who are not present at the airport itself.

Air traffic control responsibilities at airports are usually divided into at least two main areas: ground and tower, though a single controller may work both stations. The busiest airports may subdivide responsibilities further, with clearance delivery, apron control, and/or other specialized ATC stations.

Ground control

Ground control is responsible for directing all ground traffic in designated "movement areas", except the traffic on runways. This includes planes, baggage trains, snowplows, grass cutters, fuel trucks, stair trucks, airline food trucks, conveyor belt vehicles and other vehicles. Ground control will instruct these vehicles on which taxiways to use, which runway they will use (in the case of planes), where they will park, and when it is safe to cross runways. When a plane is ready to take off, it will be turned over to tower control. Conversely, after a plane has landed it will depart the runway and be "handed over" from tower to ground control.

Tower control

Tower control is responsible for aircraft on the runway and in the controlled airspace immediately surrounding the airport. Tower controllers may use radar to locate an aircraft's position in 3D space, or they may rely on pilot position reports and visual observation. They coordinate the sequencing of aircraft in the traffic pattern and direct aircraft on how to safely join and leave the circuit. Aircraft which are only passing through the airspace must also contact tower control to be sure they remain clear of other traffic.

Traffic pattern

At all airports a traffic pattern (often called a traffic circuit outside the US) may be used. It helps ensure a smooth flow between departing and arriving aircraft. In modern commercial aviation there is no technical need to fly this pattern, provided there is no queue, and because of slot times, overall traffic planning tends to ensure that landing queues are avoided. If, for instance, an aircraft approaches runway 17 (which has a heading of approximately 170 degrees) from the north (flying a heading of about 180 degrees), it will land as quickly as possible by turning just 10 degrees to line up with the runway and following the glidepath, without flying a circuit around the runway, whenever this is possible. For smaller piston-engined airplanes at smaller airfields without ILS equipment, things are very different.
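
The runway-number arithmetic mentioned above (runway 17 corresponds to a heading of about 170 degrees) can be sketched in a few lines of Python. This is an illustrative helper only, not part of any aviation standard library:

```python
def runway_heading(runway_number: int) -> int:
    """Approximate magnetic heading in degrees for a runway number (1-36).

    Runway numbers are the magnetic heading rounded to the nearest
    10 degrees, divided by 10 (so runway 17 is about 170 degrees).
    """
    return runway_number * 10

def reciprocal_runway(runway_number: int) -> int:
    """The same strip used in the opposite direction differs by 18 (180 degrees)."""
    return ((runway_number + 18 - 1) % 36) + 1

print(runway_heading(17))     # 170
print(reciprocal_runway(17))  # 35  (runway 17/35 is one physical strip)
```

The reciprocal formula wraps around the 1-36 range, so runway 35's opposite end comes out as runway 17 again.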

Generally, this pattern is a circuit consisting of five "legs" that form a rectangle (two legs and the runway form one side, with the remaining legs forming three more sides). Each leg is named, and ATC directs pilots on how to join and leave the circuit. Traffic patterns are flown at one specific altitude, usually 800 or 1,000 ft (244 or 305 m) above ground level (AGL). Standard traffic patterns are left-handed, meaning all turns are made to the left. One of the main reasons for this is that pilots sit on the left side of the airplane, and a left-hand pattern improves their visibility of the airport and pattern. Right-handed patterns do exist, usually because of obstacles such as a mountain, or to reduce noise for local residents. The predetermined circuit helps traffic flow smoothly because all pilots know what to expect, and helps reduce the chance of a mid-air collision.

At controlled airports, a circuit can be in place but is not normally used. Rather, aircraft (usually only commercial with long routes) request approach clearance while they are still hours away from the airport; the destination airport can then plan a queue of arrivals, and planes will be guided into one queue per active runway for a "straight-in" approach. While this system keeps the airspace free and is simpler for pilots, it requires detailed knowledge of how aircraft are planning to use the airport ahead of time and is therefore only possible with large commercial airliners on pre-scheduled flights. The system has recently become so advanced that controllers can predict whether an aircraft will be delayed on landing before it even takes off; that aircraft can then be delayed on the ground, rather than wasting expensive fuel waiting in the air.

Navigational aids

There are a number of navigational aids, both visual and electronic, though not all airports have them. A visual approach slope indicator (VASI) helps pilots fly the approach for landing. Some airports are equipped with a VHF omnidirectional range (VOR) to help pilots find the direction to the airport. VORs are often accompanied by distance measuring equipment (DME) to determine the distance to the VOR. VORs are also located off airports, where they define airways for aircraft to navigate along. In poor weather, pilots will use an instrument landing system (ILS) to find the runway and fly the correct approach, even if they cannot see the ground. The number of instrument approaches based on the use of the Global Positioning System (GPS) is rapidly increasing, and these may eventually become the primary means for instrument landings.

Larger airports sometimes offer precision approach radar (PAR), but these systems are more common at military air bases than civilian airports. The aircraft's horizontal and vertical movement is tracked via radar, and the controller tells the pilot their position relative to the approach slope. Once the pilots can see the runway lights, they may continue with a visual landing.

Taxiway signs

Airport guidance signs provide direction and information to taxiing aircraft and airport vehicles. Smaller aerodromes may have few or no signs, relying instead on diagrams and charts.

Lighting

Airport lighting

Many airports have lighting that helps guide planes using the runways and taxiways at night or in rain or fog.

On runways, green lights indicate the beginning of the runway for landing, while red lights indicate the end of the runway. Runway edge lighting consists of white lights spaced out on both sides of the runway, indicating the edges. Some airports have more complicated lighting on the runways including lights that run down the centerline of the runway and lights that help indicate the approach (an approach lighting system, or ALS). Low-traffic airports may use pilot-controlled lighting to save electricity and staffing costs.

Along taxiways, blue lights indicate the taxiway's edge, and some airports have embedded green lights that indicate the centerline.

Weather observations

Automated weather system

Weather observations at the airport are crucial to safe takeoffs and landings. In the United States and Canada, the vast majority of airports, large and small, have either some form of automated airport weather station (an AWOS, ASOS, or AWSS), a human observer, or a combination of the two. These weather observations, predominantly in the METAR format, are available over the radio, through automatic terminal information service (ATIS), via ATC or the flight service station.
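
To make the METAR format mentioned above concrete, the sketch below extracts a few leading fields (station, observation time, wind) from a report with a regular expression. This is illustrative only; real METARs contain many more field types (gusts, variable winds, remarks, and so on), and the sample report is an invented example:

```python
import re

# Minimal, illustrative parser for the first few METAR fields.
METAR_RE = re.compile(
    r"^(?P<station>[A-Z0-9]{4}) "                        # ICAO station identifier
    r"(?P<day>\d{2})(?P<time>\d{4})Z "                   # day of month, UTC time
    r"(?P<wind_dir>\d{3}|VRB)(?P<wind_speed>\d{2,3})KT"  # wind direction and speed
)

def parse_metar(report: str) -> dict:
    m = METAR_RE.match(report)
    if not m:
        raise ValueError("unrecognized METAR prefix")
    return m.groupdict()

sample = "KSEA 121753Z 18010KT 10SM FEW035 15/09 A3002"
print(parse_metar(sample))
# {'station': 'KSEA', 'day': '12', 'time': '1753',
#  'wind_dir': '180', 'wind_speed': '10'}
```

Anchoring the pattern to the start of the report is what lets the later fields (visibility, clouds, temperature, altimeter) be ignored safely.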

Planes take off and land into the wind to achieve maximum performance. Because pilots need instantaneous information during landing, a windsock may also be kept in view of the runway. Aviation windsocks are made of lightweight material that withstands strong winds; some are lit after dark or in foggy weather. Because the visibility of a windsock is limited, multiple glow-orange windsocks are often placed on both sides of the runway.
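
The "into the wind" preference above comes down to simple trigonometry: the wind splits into a headwind component along the runway and a crosswind component across it. A small sketch (illustrative helper names, not an aviation library) computes both:

```python
import math

def wind_components(runway_heading_deg: float, wind_dir_deg: float,
                    wind_speed_kt: float) -> tuple:
    """Headwind and crosswind components for a given runway.

    A positive headwind favours that runway direction; the crosswind
    magnitude is what matters for an aircraft's crosswind limits.
    """
    angle = math.radians(wind_dir_deg - runway_heading_deg)
    headwind = wind_speed_kt * math.cos(angle)
    crosswind = abs(wind_speed_kt * math.sin(angle))
    return headwind, crosswind

# Wind from 210 degrees at 20 kt on runway 17 (heading 170 degrees):
hw, xw = wind_components(170, 210, 20)
print(round(hw, 1), round(xw, 1))  # 15.3 12.9
```

With the wind 40 degrees off the runway heading, most of the 20 kt still acts as headwind, which is why the runway most nearly aligned with the wind is normally the active one.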

Airport ground crew (ground handling)

Most airports have groundcrew handling the loading and unloading of passengers, crew, baggage and other services. Some groundcrew are linked to specific airlines operating at the airport.

Among the vehicles that serve an airliner on the ground are:

* A tow tractor to move the aircraft in and out of the berth.
* A jet bridge (in some airports) or stairs unit to allow passengers to embark and disembark.
* A ground power unit for supplying electricity. As the engines will be switched off, they will not be generating electricity as they do in flight.
* A cleaning service.
* A catering service to deliver food and drinks for a flight.
* A toilet waste truck to empty the tank which holds the waste from the toilets in the aircraft.
* A water truck to fill the water tanks of the aircraft.
* A refueling vehicle. The fuel may come from a tanker, or from underground fuel tanks.
* A conveyor belt unit for loading and unloading luggage.
* A vehicle to transport luggage to and from the terminal.

The length of time an aircraft remains on the ground between consecutive flights is known as "turnaround time". Airlines pay great attention to minimizing turnaround times in an effort to keep aircraft use (flying time) high, with turnarounds scheduled as short as 25 minutes for narrow-body jets operated by low-cost carriers.

Maintenance management

Like industrial equipment or facility management, airports require tailor-made maintenance management because of their complexity. With many tangible assets spread over a large area in different environments, airport operators must effectively monitor these assets and store spare parts to maintain them at an optimal level of service.

To manage these airport assets, several competing solutions exist. CMMSs (computerized maintenance management systems) predominate; they enable a company's maintenance activity to be monitored, planned, recorded and rationalized.

Safety management

Aviation safety is an important concern in the operation of an airport, and almost every airfield includes equipment and procedures for handling emergency situations. Airport crash tender crews are equipped for dealing with airfield accidents, crew and passenger extractions, and the hazards of highly flammable aviation fuel. The crews are also trained to deal with situations such as bomb threats, hijacking, and terrorist activities.

Hazards to aircraft include debris, nesting birds, and reduced friction levels due to environmental conditions such as ice, snow, or rain. Part of runway maintenance is airfield rubber removal which helps maintain friction levels. The fields must be kept clear of debris using cleaning equipment so that loose material does not become a projectile and enter an engine duct (see foreign object damage). In adverse weather conditions, ice and snow clearing equipment can be used to improve traction on the landing strip. For waiting aircraft, equipment is used to spray special deicing fluids on the wings.

Many airports are built near open fields or wetlands. These tend to attract bird populations, which can pose a hazard to aircraft in the form of bird strikes. Airport crews often need to discourage birds from taking up residence.

Some airports are located next to parks, golf courses, or other low-density uses of land. Other airports are located near densely populated urban or suburban areas.

An airport can have areas where collisions between aircraft on the ground tend to occur. Records are kept of any incursions where aircraft or vehicles are in an inappropriate location, allowing these "hot spots" to be identified. These locations then undergo special attention by transportation authorities (such as the FAA in the US) and airport administrators.

During the 1980s, a phenomenon known as microburst became a growing concern due to aircraft accidents caused by microburst wind shear, such as Delta Air Lines Flight 191. Microburst radar was developed as an aid to safety during landing, giving two to five minutes' warning to aircraft in the vicinity of the field of a microburst event.

Some airfields now have a special surface known as soft concrete at the end of the runway (stopway or blastpad) that behaves somewhat like styrofoam, bringing the plane to a relatively rapid halt as the material disintegrates. These surfaces are useful when the runway is located next to a body of water or other hazard, and prevent the planes from overrunning the end of the field.

Airports often have on-site firefighters to respond to emergencies. These use specialized vehicles, known as airport crash tenders. Most civil aviation authorities have required levels of on-site emergency response capabilities based on an airport's traffic. At airports where civil and military operations share a common set of runways and infrastructure, emergency response is often managed by the relevant military unit as part of their base's operations.

Environmental concerns and sustainability

Aircraft noise is a major cause of disturbance to residents living near airports. Sleep can be affected if airports operate night and early morning flights. Aircraft noise occurs not only from take-offs and landings but also from ground operations, including maintenance and testing of aircraft. Noise can have other health effects as well. Vehicle traffic to and from the airport also causes noise and pollution on access roads.

The construction of new airports, or the addition of runways to existing airports, is often resisted by local residents because of the effect on the countryside, historical sites, and local flora and fauna. Due to the risk of collision between birds and aircraft, large airports undertake population control programs in which they frighten or shoot birds.

The construction of airports has been known to change local weather patterns. For example, because they often flatten out large areas, they can be susceptible to fog in areas where fog rarely forms. In addition, because they generally replace trees and grass with pavement, they often change drainage patterns in agricultural areas, leading to more flooding, run-off and erosion in the surrounding land. Airports are often built on low-lying coastal land; globally, 269 airports are currently at risk of coastal flooding. A temperature rise of 2 °C (consistent with the Paris Agreement) would leave 100 airports below mean sea level and 364 at risk of flooding. If global mean temperature rise exceeds this, as many as 572 airports will be at risk by 2100, leading to major disruption without appropriate adaptation.

Some airport administrations prepare and publish annual environmental reports to show how they consider these environmental concerns in airport management and how they protect the environment from airport operations. These reports describe the environmental protection measures taken by the airport administration regarding water, air, soil and noise pollution, resource conservation and the protection of natural life around the airport.

A 2019 report from the Cooperative Research Programs of the US Transportation Research Board showed all airports have a role to play in advancing greenhouse gas (GHG) reduction initiatives. Small airports have demonstrated leadership by using their less complex organizational structure to implement newer technologies and to serve as a proving ground for their feasibility. Large airports have the economic stability and staff resources necessary to grow in-house expertise and fund comprehensive new programs.

A growing number of airports are installing solar photovoltaic arrays to offset their electricity use. The National Renewable Energy Lab has shown this can be done safely. Panels can also be installed on airport rooftops, where they have been found to work more effectively than residential panels.

The world's first airport to be fully powered by solar energy is located at Kochi, India. Another airport known for considering environmental concerns is Seymour Airport in the Galapagos Islands.

Military air base

An airbase, sometimes referred to as an air station or airfield, provides basing and support of military aircraft. Some airbases, known as military airports, provide facilities similar to their civilian counterparts. For example, RAF Brize Norton in the UK has a terminal that caters to passengers for the Royal Air Force's scheduled flights to the Falkland Islands. Some airbases are co-located with civilian airports, sharing the same ATC facilities, runways, taxiways and emergency services, but with separate terminals, parking areas and hangars. Bardufoss Airport, Bardufoss Air Station in Norway and Pune Airport in India are examples of this.

An aircraft carrier is a warship that functions as a mobile airbase. Aircraft carriers allow a naval force to project air power without having to depend on local bases for land-based aircraft. After their development in World War I, aircraft carriers replaced the battleship as the centrepiece of a modern fleet during World War II.

Airport designation and naming

Most airports in the United States are designated "private-use airports", meaning that, whether publicly or privately owned, the airport is not open or available for use by the public (although use of the airport may be made available by invitation of the owner or manager).

Airports are uniquely represented by their IATA airport code and ICAO airport code.
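
The two code systems mentioned above have simple shapes: IATA codes are three letters (e.g. JFK), while ICAO codes are four letters whose leading letter(s) indicate the region (e.g. KJFK, EGLL). A hedged sketch of format checks follows; matching the format does not mean a code is actually assigned to an airport:

```python
import re

# Format checks only: these accept any string of the right shape,
# not just codes that are really assigned.
IATA_RE = re.compile(r"^[A-Z]{3}$")
ICAO_RE = re.compile(r"^[A-Z]{4}$")

def looks_like_iata(code: str) -> bool:
    return bool(IATA_RE.match(code))

def looks_like_icao(code: str) -> bool:
    return bool(ICAO_RE.match(code))

print(looks_like_iata("JFK"), looks_like_icao("KJFK"))  # True True
print(looks_like_iata("KJFK"))  # False (four letters is the ICAO shape)
```

Because the two formats have different lengths, a code's length alone usually tells you which system it belongs to.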

Most airport names include the location. Many airport names honour a public figure, commonly a politician (e.g., Charles de Gaulle Airport, George Bush Intercontinental Airport, Lennart Meri Airport, O.R. Tambo International Airport, Soekarno–Hatta International Airport), a monarch (e.g. Chhatrapati Shivaji International Airport, King Shaka International Airport), a cultural leader (e.g. Liverpool John Lennon Airport, Leonardo da Vinci-Fiumicino Airport, Louis Armstrong New Orleans International Airport) or a prominent figure in aviation history of the region (e.g. Sydney Kingsford Smith Airport), sometimes even famous writers (e.g. Allama Iqbal International Airport) and explorers (e.g. Venice Marco Polo Airport).

Some airports have unofficial names, possibly so widely circulated that their official names are little used or even known.

Some airport names include the word "International" to indicate their ability to handle international air traffic. This includes some airports that do not have scheduled international airline services (e.g. Port Elizabeth International Airport).

History and development

The earliest aircraft takeoff and landing sites were grassy fields. The plane could approach at any angle that provided a favorable wind direction. A slight improvement was the dirt-only field, which eliminated the drag from grass. However, these functioned well only in dry conditions. Later, concrete surfaces would allow landings regardless of meteorological conditions.

The title of "world's oldest airport" is disputed. Toussus-le-Noble airport near Paris was established in 1907 and has been operating ever since. College Park Airport in Maryland, US, established in 1909 by Wilbur Wright, serves only general aviation traffic.

Beijing Nanyuan Airport in China, which was built to accommodate planes in 1904, and airships in 1907, opened in 1910. It was in operation until September 2019. Pearson Field Airport in Vancouver, Washington, United States, was built to accommodate planes in 1905 and airships in 1911, and is still in use as of January 2022.

Hamburg Airport opened in January 1911, making it the oldest commercial airport in the world still in operation. Bremen Airport opened in 1913 and remains in use, although it served as an American military field between 1945 and 1949. Amsterdam Airport Schiphol opened on September 16, 1916, as a military airfield, but has accepted civil aircraft only since December 17, 1920, allowing Sydney Airport—which started operations in January 1920—to claim to be one of the world's oldest continuously operating commercial airports. Minneapolis–Saint Paul International Airport in the US opened in 1920 and has been in continuous commercial service since. It serves about 35,000,000 passengers each year and continues to expand, recently opening a new 11,000-foot (3,355 m) runway. Of the airports constructed during this early period in aviation, it is one of the largest and busiest still operating. Don Mueang International Airport near Bangkok, Thailand, which opened in 1914, is also a contender, as is Rome Ciampino Airport, which opened in 1916. Increased aircraft traffic during World War I led to the construction of landing fields. Aircraft had to approach these from certain directions, and this led to the development of aids for directing the approach and landing slope.

Following the war, some of these military airfields added civil facilities for handling passenger traffic. One of the earliest such fields was Paris – Le Bourget Airport at Le Bourget, near Paris. The first airport to operate scheduled international commercial services was Hounslow Heath Aerodrome in August 1919, but it was closed and supplanted by Croydon Airport in March 1920. In 1922, the first permanent airport and commercial terminal solely for commercial aviation was opened at Flughafen Devau near what was then Königsberg, East Prussia. The airports of this era used a paved "apron", which permitted night flying as well as landing heavier aircraft.

The first lighting used on an airport was during the latter part of the 1920s; in the 1930s approach lighting came into use. These indicated the proper direction and angle of descent. The colours and flash intervals of these lights became standardized under the International Civil Aviation Organization (ICAO). In the 1940s, the slope-line approach system was introduced. This consisted of two rows of lights that formed a funnel indicating an aircraft's position on the glideslope. Additional lights indicated incorrect altitude and direction.

After World War II, airport design became more sophisticated. Passenger buildings were grouped together in an island, with runways arranged in groups around the terminal. This arrangement permitted expansion of the facilities, but it also meant that passengers had to travel further to reach their plane.

An improvement in the landing field was the introduction of grooves in the concrete surface. These run perpendicular to the direction of the landing aircraft and serve to draw off excess rainwater that could build up in front of the plane's wheels.

Airport construction boomed during the 1960s with the increase in jet aircraft traffic. Runways were extended out to 3,000 m (9,800 ft). The fields were constructed out of reinforced concrete using a slip-form machine that produces a continuous slab with no disruptions along the length. The early 1960s also saw the introduction of jet bridge systems to modern airport terminals, an innovation which eliminated outdoor passenger boarding. These systems became commonplace in the United States by the 1970s.

The malicious use of UAVs has led to the deployment of counter unmanned air system (C-UAS) technologies such as the Aaronia AARTOS which have been installed on major international airports.

Airports in entertainment

Airports have played major roles in films and television programs due to their very nature as a transport and international hub, and sometimes because of distinctive architectural features of particular airports. One such example of this is The Terminal, a film about a man who becomes permanently grounded in an airport terminal and must survive only on the food and shelter provided by the airport. They are also one of the major elements in movies such as The V.I.P.s, Speed, Airplane!, Airport (1970), Die Hard 2, Soul Plane, Jackie Brown, Get Shorty, Home Alone (1990), Home Alone 2: Lost in New York (1992), Liar Liar, Passenger 57, Final Destination (2000), Unaccompanied Minors, Catch Me If You Can, Rendition and The Langoliers. They have also played important parts in television series like Lost, The Amazing Race, America's Next Top Model (season 10), 90 Day Fiancé, Air Crash Investigation which have significant parts of their story set within airports. In other programmes and films, airports are merely indicative of journeys, e.g. Good Will Hunting.

Several computer simulation games put the player in charge of an airport. These include the Airport Tycoon series, SimAirport and Airport CEO.

Airport directories

Each civil aviation authority provides a source of information about airports in their country. This will contain information on airport elevation, airport lighting, runway information, communications facilities and frequencies, hours of operation, nearby NAVAIDs and contact information where prior arrangement for landing is necessary.

Australia

Information can be found online in the En Route Supplement Australia (ERSA), which is published by Airservices Australia, a government-owned corporation charged with managing Australian air traffic control.

Brazil

Infraero is responsible for the airports in Brazil.

Canada

Two publications, the Canada Flight Supplement (CFS) and the Water Aerodrome Supplement, published by Nav Canada under the authority of Transport Canada, provide equivalent information.

Europe

The European Organisation for the Safety of Air Navigation (EUROCONTROL) provides an Aeronautical Information Publication (AIP), aeronautical charts and NOTAM services for multiple European countries.

Germany

Airport information for Germany is provided by the Luftfahrt-Bundesamt (Federal Office for Civil Aviation of Germany).

France

Aviation Générale Delage, edited by Delville and published by Breitling.

The United Kingdom

The information is found in Pooley's Flight Guide, a publication compiled with the assistance of the United Kingdom Civil Aviation Authority (CAA). Pooley's also contains information on some continental European airports that are close to Great Britain. National Air Traffic Services, the UK's air navigation service provider and a public–private partnership, also publishes an online AIP for the UK.

The United States

The US uses the Airport/Facility Directory (A/FD) (now officially termed the Chart Supplement) published in seven volumes. DAFIF also includes extensive airport data but has been unavailable to the public at large since 2006.

Japan

The Aeronautical Information Publication (AIP) is provided by the Japan Aeronautical Information Service Center, under the authority of the Japan Civil Aviation Bureau, Ministry of Land, Infrastructure, Transport and Tourism of Japan.

[Image: A plane on takeoff from Amsterdam Airport]


It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.
