# Math Is Fun Forum

Discussion about math, puzzles, games and fun.   Useful symbols: ÷ × ½ √ ∞ ≠ ≤ ≥ ≈ ⇒ ± ∈ Δ θ ∴ ∑ ∫  π  -¹ ² ³ °


## #1826 2023-07-05 00:12:38

Jai Ganesh
Registered: 2005-06-28
Posts: 47,788

### Re: Miscellany

1829) Puppetry

Gist

Puppetry is a form of theatre or performance that involves the manipulation of puppets – inanimate objects, often resembling some type of human or animal figure, that are animated or manipulated by a human called a puppeteer. Such a performance is also known as a puppet production.

Details

Puppetry is a form of theatre or performance that involves the manipulation of puppets – inanimate objects, often resembling some type of human or animal figure, that are animated or manipulated by a human called a puppeteer. Such a performance is also known as a puppet production. The script for a puppet production is called a puppet play. Puppeteers use movements from hands and arms to control devices such as rods or strings to move the body, head, limbs, and in some cases the mouth and eyes of the puppet. The puppeteer sometimes speaks in the voice of the character of the puppet, while at other times they perform to a recorded soundtrack.

There are many different varieties of puppets, and they are made of a wide range of materials, depending on their form and intended use. They can be extremely complex or very simple in their construction. The simplest puppets are finger puppets, which are tiny puppets that fit onto a single finger, and sock puppets, which are formed from a sock and operated by inserting one's hand inside the sock, with the opening and closing of the hand simulating the movement of the puppet's "mouth". A hand puppet or glove puppet is controlled by one hand, which occupies the interior of the puppet and moves the puppet around; Punch and Judy puppets are familiar examples. Other hand or glove puppets are larger and require two puppeteers for each puppet; Japanese Bunraku puppets are an example of this. Marionettes are suspended and controlled by a number of strings, plus sometimes a central rod attached to a control bar held from above by the puppeteer. Rod puppets are made from a head attached to a central rod; over the rod is a body form with arms attached, controlled by separate rods. As a consequence, they have more movement possibilities than a simple hand or glove puppet.

Puppetry is a very ancient form of theatre, first recorded in the 5th century BC in Ancient Greece. Some forms of puppetry may have originated as long ago as 3000 BC. Puppetry takes many forms, but they all share the process of animating inanimate performing objects to tell a story. Puppetry occurs in almost all human societies, where puppets are used for entertainment through performance, as sacred objects in rituals, as symbolic effigies in celebrations such as carnivals, and as a catalyst for social and psychological change in transformative arts.

History

Puppetry is a very ancient art form, thought to have originated about 4000 years ago. Puppets have been used since the earliest times to animate and communicate the ideas and needs of human societies. Some historians claim that they pre-date actors in theatre. There is evidence that they were used in Egypt as early as 2000 BCE, when string-operated figures of wood were manipulated to perform the action of kneading bread. Wire-controlled, articulated puppets made of clay and ivory have also been found in Egyptian tombs. Hieroglyphs also describe "walking statues" being used in ancient Egyptian religious dramas. Puppetry was practiced in ancient Greece, and the oldest written records of puppetry can be found in the works of Herodotus and Xenophon, dating from the 5th century BC.

Contemporary era

From early in the 19th century, puppetry began to inspire artists from the 'high-art' traditions. In 1810, Heinrich von Kleist wrote an essay 'On the Marionette Theatre', admiring the "lack of self-consciousness" of the puppet. Puppetry developed throughout the 20th century in a variety of ways. Supported by the parallel development of cinema, television and other filmed media, it now reaches a larger audience than ever. Another development, starting at the beginning of the century, was the belief that puppet theatre, despite its popular and folk roots, could speak to adult audiences with an adult, experimental voice and reinvigorate the high art tradition of actors' theatre.

Sergei Obraztsov explored the concept of kukolnost ('puppetness'), despite Joseph Stalin's insistence on realism. Other pioneers, including Edward Gordon Craig and Erwin Piscator, were influenced by puppetry in their crusade to regalvanise the mainstream. Maeterlinck, Shaw, Lorca and others wrote puppet plays, and artists such as Picasso, Jarry, and Léger began to work in theatre. Craig's concept of the "übermarionette"—in which the director treats the actors like objects—has been highly influential on contemporary "object theatre" and "physical theatre". Tadeusz Kantor frequently substituted puppets for actors, or combined the two, and conducted each performance from the edge of the stage, in some ways similar to a puppeteer.

Kantor influenced a new formalist generation of directors such as Richard Foreman and Robert Wilson, who were concerned with the 'object' in theatrical terms, "putting it on stage and finding different ways of looking at it" (Foreman). Innovatory puppeteers such as Tony Sarg, Waldo Lanchester, John Wright, Bil Baird, Joan Baixas, Sergei Obraztsov, Philippe Genty, Peter Schumann, Dattatreya Aralikatte, The Little Players, Jim Henson, Dadi Pudumjee, and Julie Taymor have also continued to develop the forms and content of puppetry, so that the phrase 'puppet theatre' is no longer limited to traditional forms of marionettes, glove, or rod puppets. Directors and companies like Peter Schumann of Bread and Puppet Theatre, Bob Frith of Horse and Bamboo Theatre, and Sandy Spieler of In the Heart of the Beast Puppet and Mask Theatre have also combined mask and puppet theatre, where the performer, puppets and objects are integrated within a largely visual theatre world that minimises the use of spoken language.

The Jim Henson Foundation, founded by puppeteer and Muppet creator Jim Henson, is a philanthropic, charitable organization created to promote and develop puppetry in the United States. It has awarded 440 grants to innovative puppet theatre artists. Puppetry troupes of the early 21st century, such as HomeGrown Theatre in Boise, Idaho, continue the avant-garde satirical tradition for millennials.

Puppetry is the making and manipulation of puppets for use in some kind of theatrical show. A puppet is a figure—human, animal, or abstract in form—that is moved by human, and not mechanical, aid.

These definitions are wide enough to include an enormous variety of shows and an enormous variety of puppet types, but they do exclude certain related activities and figures. A doll, for instance, is not a puppet, and a girl playing with her doll as if it were a living baby is not giving a puppet show; but, if before an audience of her mother and father she makes the doll walk along the top of a table and act the part of a baby, she is then presenting a primitive puppet show. Similarly, automaton figures moved by clockwork that appear when a clock strikes are not puppets, and such elaborate displays of automatons as those that perform at the cathedral clock in Strasbourg, France, or the town hall clock in Munich, Germany, must be excluded from consideration.

Puppet shows seem to have existed in almost all civilizations and in almost all periods. In Europe, written records of them go back to the 5th century BCE (e.g., the Symposium of the Greek historian Xenophon). Written records in other civilizations are less ancient, but in China, India, Java, and elsewhere in Asia there are ancient traditions of puppet theatre, the origins of which cannot now be determined. Among the American Indians, there are traditions of puppetlike figures used in ritual magic. In Africa, records of puppets are meagre, but the mask is an important feature in almost all African magical ceremonies, and the dividing line between the puppet and the masked actor, as will be seen, is not always easily drawn. It may certainly be said that puppet theatre has everywhere antedated written drama and, indeed, writing of any kind. It represents one of the most primitive instincts of the human race.

It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

## #1827 2023-07-06 00:03:29

Jai Ganesh
Registered: 2005-06-28
Posts: 47,788

### Re: Miscellany

1830) Painting

Gist

Painting is defined as the process of applying paint, or another medium, to a solid surface – usually a canvas. Paint or other forms of color are commonly applied using a paintbrush. However, artists also use other tools, such as sponges, spray paint, or even knives.

Summary

Painting is the expression of ideas and emotions, with the creation of certain aesthetic qualities, in a two-dimensional visual language. The elements of this language—its shapes, lines, colours, tones, and textures—are used in various ways to produce sensations of volume, space, movement, and light on a flat surface. These elements are combined into expressive patterns in order to represent real or supernatural phenomena, to interpret a narrative theme, or to create wholly abstract visual relationships. An artist’s decision to use a particular medium, such as tempera, fresco, oil, acrylic, watercolour or other water-based paints, ink, gouache, encaustic, or casein, as well as the choice of a particular form, such as mural, easel, panel, miniature, manuscript illumination, scroll, screen or fan, panorama, or any of a variety of modern forms, is based on the sensuous qualities and the expressive possibilities and limitations of those options. The choices of the medium and the form, as well as the artist’s own technique, combine to realize a unique visual image.

Earlier cultural traditions—of tribes, religions, guilds, royal courts, and states—largely controlled the craft, form, imagery, and subject matter of painting and determined its function, whether ritualistic, devotional, decorative, entertaining, or educational. Painters were employed more as skilled artisans than as creative artists. Later the notion of the “fine artist” developed in Asia and Renaissance Europe. Prominent painters were afforded the social status of scholars and courtiers; they signed their work, decided its design and often its subject and imagery, and established a more personal—if not always amicable—relationship with their patrons.

During the 19th century painters in Western societies began to lose their social position and secure patronage. Some artists countered the decline in patronage support by holding their own exhibitions and charging an entrance fee. Others earned an income through touring exhibitions of their work. The need to appeal to a marketplace had replaced the similar (if less impersonal) demands of patronage, and its effect on the art itself was probably similar as well. Generally, artists in the 20th century could reach an audience only through commercial galleries and public museums, although their work may have been occasionally reproduced in art periodicals. They may also have been assisted by financial awards or commissions from industry and the state. They had, however, gained the freedom to invent their own visual language and to experiment with new forms and unconventional materials and techniques. For example, some painters combined other media, such as sculpture, with painting to produce three-dimensional abstract designs. Other artists attached real objects to the canvas in collage fashion or used electricity to operate coloured kinetic panels and boxes. Conceptual artists frequently expressed their ideas in the form of a proposal for an unrealizable project, while performance artists were an integral part of their own compositions. The restless endeavour to extend the boundaries of expression in art produced continuous international stylistic changes. The often bewildering succession of new movements in painting was further stimulated by the swift interchange of ideas by means of international art journals, traveling exhibitions, and art centres. Such exchanges accelerated in the 21st century with the explosion of international art fairs and the advent of social media, the latter of which offered not only new means of expression but direct communication between artists and their followers.

Details

Painting is the practice of applying paint, pigment, color or other medium to a solid surface (called the "matrix" or "support"). The medium is commonly applied to the base with a brush, but other implements, such as knives, sponges, and airbrushes, can be used.

In art, the term "painting" describes both the act and the result of the action (the final work is called "a painting"). The support for paintings includes such surfaces as walls, paper, canvas, wood, glass, lacquer, pottery, leaf, copper and concrete, and the painting may incorporate multiple other materials, including sand, clay, paper, plaster, gold leaf, and even whole objects.

Painting is an important form of visual art, bringing in elements such as drawing, composition, gesture, narration, and abstraction. Paintings can be naturalistic and representational (as in still life and landscape painting), photographic, abstract, narrative, symbolistic (as in Symbolist art), emotive (as in Expressionism) or political in nature (as in Artivism).

A portion of the history of painting in both Eastern and Western art is dominated by religious art. Examples of this kind of painting range from artwork depicting mythological figures on pottery, to Biblical scenes on the Sistine Chapel ceiling, to scenes from the life of Buddha (or other images of Eastern religious origin).

History

The oldest known paintings are approximately 40,000 years old, found both in the Franco-Cantabrian region of western Europe and in the caves of the district of Maros (Sulawesi, Indonesia). In November 2018, however, scientists reported the discovery of the then-oldest known figurative art painting, over 40,000 (perhaps as old as 52,000) years old, of an unknown animal, in the cave of Lubang Jeriji Saléh on the Indonesian island of Borneo (Kalimantan). In December 2019, figurative cave paintings depicting pig hunting in the Maros-Pangkep karst in Sulawesi were estimated to be even older, at least 43,900 years old. The finding was noted to be "the oldest pictorial record of storytelling and the earliest figurative artwork in the world". More recently, in 2021, cave art of a pig found on an Indonesian island and dated to more than 45,500 years ago was reported. However, the earliest evidence of the act of painting has been discovered in two rock-shelters in Arnhem Land, in northern Australia. In the lowest layer of material at these sites, there are used pieces of ochre estimated to be 60,000 years old. Archaeologists have also found a fragment of rock painting preserved in a limestone rock-shelter in the Kimberley region of north-western Australia that is dated to 40,000 years old. There are examples of cave paintings all over the world: in Indonesia, France, Spain, Portugal, Italy, China, India, Australia, Mexico, and elsewhere. In Western cultures, oil painting and watercolor painting have rich and complex traditions in style and subject matter. In the East, ink and color ink historically predominated as the choice of media, with equally rich and complex traditions.

The invention of photography had a major impact on painting. In the decades after the first photograph was produced in 1829, photographic processes improved and became more widely practiced, depriving painting of much of its historic purpose to provide an accurate record of the observable world. A series of art movements in the late 19th and early 20th centuries—notably Impressionism, Post-Impressionism, Fauvism, Expressionism, Cubism, and Dadaism—challenged the Renaissance view of the world. Eastern and African painting, however, continued a long history of stylization and did not undergo an equivalent transformation at the same time.

Modern and Contemporary art has moved away from the historic value of craft and documentation in favour of concept. This has not deterred the majority of living painters from continuing to practice painting either as a whole or part of their work. The vitality and versatility of painting in the 21st century defy the previous "declarations" of its demise. In an epoch characterized by the idea of pluralism, there is no consensus as to a representative style of the age. Artists continue to make important works of art in a wide variety of styles and aesthetic temperaments—their merits are left to the public and the marketplace to judge.

The Feminist art movement began in the 1960s during the second wave of feminism. The movement sought to gain equal rights and equal opportunities for female artists internationally.

Elements of painting:

Color and tone

Color, made up of hue, saturation, and value, dispersed over a surface is the essence of painting, just as pitch and rhythm are the essence of music. Color is highly subjective, but has observable psychological effects, although these can differ from one culture to the next. Black is associated with mourning in the West, but in the East, white is. Some painters, theoreticians, writers, and scientists, including Goethe, Kandinsky, and Newton, have written their own color theory.

Moreover, the use of language is only an abstraction for a color equivalent. The word "red", for example, can cover a wide range of variations from the pure red of the visible spectrum of light. There is not a formalized register of different colors in the way that there is agreement on different notes in music, such as F or C♯. For a painter, color is not simply divided into basic (primary) and derived (complementary or mixed) colors (like red, blue, green, brown, etc.).

Painters deal practically with pigments, so "blue" for a painter can be any of the blues: phthalocyanine blue, Prussian blue, indigo, Cobalt blue, ultramarine, and so on. Psychological and symbolical meanings of color are not, strictly speaking, means of painting. Colors only add to the potential, derived context of meanings, and because of this, the perception of a painting is highly subjective. The analogy with music is quite clear—sound in music (like a C note) is analogous to "light" in painting, "shades" to dynamics, and "coloration" is to painting as the specific timbre of musical instruments is to music. These elements do not necessarily form a melody (in music) of themselves; rather, they can add different contexts to it.
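As an aside for the computationally inclined, the hue–saturation–value decomposition of color described above can be illustrated with Python's standard `colorsys` module. The RGB triples below are rough, illustrative approximations of two painter's blues, not authoritative pigment definitions:

```python
import colorsys

# Approximate RGB values (each component in 0.0-1.0) for two of the
# "blues" a painter might use. These triples are illustrative only.
pigments = {
    "ultramarine": (0.07, 0.04, 0.56),
    "prussian_blue": (0.0, 0.19, 0.33),
}

for name, (r, g, b) in pigments.items():
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    # Hue is reported as a fraction of the color wheel (0.0-1.0).
    # Both pigments land in the blue region (roughly 0.55-0.70),
    # differing mainly in saturation and value.
    print(f"{name}: hue={h:.2f}, saturation={s:.2f}, value={v:.2f}")
```

This makes the point in the text concrete: the single word "blue" covers distinct colors that share a similar hue but differ in saturation and value.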

Modern artists have extended the practice of painting considerably to include, as one example, collage, which began with Cubism and is not painting in the strict sense. Some modern painters incorporate different materials such as metal, plastic, sand, cement, straw, leaves or wood for their texture. Examples of this are the works of Jean Dubuffet and Anselm Kiefer. There is a growing community of artists who use computers to "paint" color onto a digital "canvas" using programs such as Adobe Photoshop, Corel Painter, and many others. These images can be printed onto traditional canvas if required.

Rhythm

Jean Metzinger's mosaic-like Divisionist technique had its parallel in literature, a characteristic of the alliance between Symbolist writers and Neo-Impressionist artists:

I ask of divided brushwork not the objective rendering of light, but iridescences and certain aspects of color still foreign to painting. I make a kind of chromatic versification and for syllables, I use strokes which, variable in quantity, cannot differ in dimension without modifying the rhythm of a pictorial phraseology destined to translate the diverse emotions aroused by nature. (Jean Metzinger, circa 1907)

Rhythm, for artists such as Piet Mondrian, is important in painting as it is in music. If one defines rhythm as "a pause incorporated into a sequence", then there can be rhythm in paintings. These pauses allow creative force to intervene and add new creations—form, melody, coloration. The distribution of form or any kind of information is of crucial importance in the given work of art, and it directly affects the aesthetic value of that work. This is because the aesthetic value is functionality dependent, i.e. the freedom (of movement) of perception is perceived as beauty. Free flow of energy, in art as well as in other forms of "techne", directly contributes to the aesthetic value.

Music was important to the birth of abstract art since music is abstract by nature—it does not try to represent the exterior world, but expresses in an immediate way the inner feelings of the soul. Wassily Kandinsky often used musical terms to identify his works; he called his most spontaneous paintings "improvisations" and described more elaborate works as "compositions". Kandinsky theorized that "music is the ultimate teacher," and subsequently embarked upon the first seven of his ten Compositions. Hearing tones and chords as he painted, Kandinsky theorized that (for example), yellow is the color of middle C on a brassy trumpet; black is the color of closure, and the end of things; and that combinations of colors produce vibrational frequencies, akin to chords played on a piano. In 1871 the young Kandinsky learned to play the piano and cello. Kandinsky's stage design for a performance of Mussorgsky's Pictures at an Exhibition illustrates his "synaesthetic" concept of a universal correspondence of forms, colors and musical sounds.

Music defines much of modernist abstract painting. Jackson Pollock underscores that interest with his 1950 painting Autumn Rhythm (Number 30).

Aesthetics and theory

Aesthetics is the study of art and beauty; it was an important issue for 18th- and 19th-century philosophers such as Kant and Hegel. Classical philosophers like Plato and Aristotle also theorized about art and painting in particular. Plato disregarded painters (as well as sculptors) in his philosophical system; he maintained that painting cannot depict the truth—it is a copy of reality (a shadow of the world of ideas) and is nothing but a craft, similar to shoemaking or iron casting. By the time of Leonardo, painting had become a closer representation of the truth than it had been in Ancient Greece. Leonardo da Vinci, on the contrary, said that "La Pittura è cosa mentale" ("painting is a thing of the mind"). Kant distinguished between Beauty and the Sublime, in terms that clearly gave priority to the former. Although he did not refer to painting in particular, this concept was taken up by painters such as J.M.W. Turner and Caspar David Friedrich.

Nino Pisano, Apelles or the Art of Painting, detail (1334–1336); relief on Giotto's Bell Tower in Florence, Italy
Hegel recognized the failure of attaining a universal concept of beauty and, in his aesthetic essay, wrote that painting is one of the three "romantic" arts, along with Poetry and Music, for its symbolic, highly intellectual purpose. Painters who have written theoretical works on painting include Kandinsky and Paul Klee. In his essay, Kandinsky maintains that painting has a spiritual value, and he attaches primary colors to essential feelings or concepts, something that Goethe and other writers had already tried to do.

Iconography is the study of the content of paintings, rather than their style. Erwin Panofsky and other art historians first seek to understand the things depicted, before looking at their meaning for the viewer at the time, and finally analyzing their wider cultural, religious, and social meaning.

In 1890, the Parisian painter Maurice Denis famously asserted: "Remember that a painting—before being a warhorse, a naked woman or some story or other—is essentially a flat surface covered with colors assembled in a certain order." Thus, many 20th-century developments in painting, such as Cubism, were reflections on the means of painting rather than on the external world—nature—which had previously been its core subject. Recent contributions to thinking about painting have been offered by the painter and writer Julian Bell. In his book What is Painting?, Bell discusses the development, through history, of the notion that paintings can express feelings and ideas. In Mirror of The World, Bell writes:

A work of art seeks to hold your attention and keep it fixed: a history of art urges it onwards, bulldozing a highway through the homes of the imagination.



## #1828 2023-07-07 00:09:42

Jai Ganesh
Registered: 2005-06-28
Posts: 47,788

### Re: Miscellany

1831) Hotel

Summary

A hotel is a building that provides lodging, meals, and other services to the traveling public on a commercial basis. A motel performs the same functions as a hotel but in a format designed for travelers using automobiles.

Inns have existed since very ancient times to serve merchants and other travelers. In the Roman Empire, hostelries called mansiones were situated along the Roman road system to accommodate travelers on government or commercial business. The commercial revival of the European Middle Ages stimulated a widespread growth of inns and hostels. Many of these were operated by monastic brotherhoods in order to guarantee a haven for travelers in dangerous regions; a famous example is the hostel in the Great St. Bernard Pass in the Swiss Alps, which was founded in the 10th century by St. Bernard of Montjoux and is still operated by the community of Augustinian monks. In 13th-century China, Marco Polo found an extensive system of relay houses in existence to provide lodgings for travelers and way stations for the Mongol postal service.

Privately operated inns intended primarily for use by merchants were widespread in both Islamic and western European countries during the later Middle Ages. The rapid proliferation of stagecoach travel during the 18th century further stimulated the development of inns. But it was the Industrial Revolution of the 19th century that generated the most progress in innkeeping, especially in England, whose inns became a standard for the world on account of their cleanliness and comfort. Meanwhile, American innkeepers were setting a standard for size; by 1800 the inns of the United States were the largest in the world. The American trend toward large size continued into the 20th century and eventually was adopted by other countries.

The modern hotel was to a large extent the result of the railroad age; faster travel eliminated the need for the inns serving the old coach routes, and many of these were forced out of business as a result. On the other hand, many new and larger hotels were profitably built close to railroad stations. As travel for pleasure became increasingly popular during the 19th century, a new class of resort hotels was built in many countries. Along the French and Italian Riviera resort hotels were constructed to serve wealthy vacationers, who frequently came for the entire summer or winter season. Luxury hotels soon made their appearance in the cities; in 1889 the Savoy Hotel in London set a new standard with its own electricity and its host of special services for guests.

Another landmark was the opening in Buffalo, New York, in 1908 of the Statler Hotel, whose owner, Ellsworth Milton Statler, introduced many innovations in service and conveniences for the benefit of the large and growing class of business travelers. From the Buffalo Statler grew the Statler Company, the first great chain operation in hotelkeeping.

World War I was followed by a period of tremendous hotel construction, and hotels also increased in size; the Stevens Hotel (later the Conrad Hilton) in Chicago opened in 1927 with 3,000 rooms and retained the title of the world’s largest until the late 1960s, when the Hotel Rossiya (demolished 2006) opened in Moscow (since the late 20th century, a series of hotels, including the First World Hotel in Malaysia, assumed the title of world’s largest). After World War II many hotels were built at or near major airports.

The operation of hotel chains became a characteristic of modern hotelkeeping, particularly in the decades after World War II. A chain operation, in which one company operates two or more hotels, permits increased efficiency in such areas as purchasing, sales, and reservations.

The main categories of hotels are transient, resort, and residential. Hotels are classed as “mainly transient” when at least 75 percent of their guests are not permanent residents. The guest in a typical transient hotel can expect a room with private bath, telephone, radio, and television, in addition to such customer services as laundry, valet, and cleaning and pressing. A larger establishment usually has a coffee shop, dining room, cocktail lounge or nightclub, and a gift shop or newsstand-tobacco counter.

The resort hotel is a luxury facility intended primarily for vacationers and usually located near special attractions, such as beaches and seashores, scenic or historic areas, ski parks, or spas. Though some resorts operate on a seasonal basis, the majority now try to operate year-round. The residential hotel is basically an apartment building offering maid service, a dining room, and room meal service. Residential hotels range from the luxurious to the moderately priced. Some resort hotels operate on the so-called American plan, in which the cost of meals is included in the charge for the room. Others operate on the European plan, in which the rate covers only the room and guests make their own arrangements for meals. Transient hotels generally operate on the European plan.

Details

A hotel is an establishment that provides paid lodging on a short-term basis. Facilities provided inside a hotel room may range from a modest-quality mattress in a small room to large suites with bigger, higher-quality beds, a dresser, a refrigerator, and other kitchen facilities, upholstered chairs, a flat-screen television, and en-suite bathrooms. Small, lower-priced hotels may offer only the most basic guest services and facilities. Larger, higher-priced hotels may provide additional guest facilities such as a swimming pool, a business center with computers, printers, and other office equipment, childcare, conference and event facilities, tennis or basketball courts, gymnasium, restaurants, day spa, and social function services. Hotel rooms are usually numbered (or named in some smaller hotels and B&Bs) to allow guests to identify their room. Some boutique, high-end hotels have custom decorated rooms. Some hotels offer meals as part of a room and board arrangement. In Japan, capsule hotels provide a tiny room suitable only for sleeping and shared bathroom facilities.

The precursor to the modern hotel was the inn of medieval Europe. For a period of about 200 years from the mid-17th century, coaching inns served as a place for lodging for coach travelers. Inns began to cater to wealthier clients in the mid-18th century. One of the first hotels in a modern sense was opened in Exeter in 1768. Hotels proliferated throughout Western Europe and North America in the early 19th century, and luxury hotels began to spring up in the later part of the 19th century, particularly in the United States.

Hotel operations vary in size, function, complexity, and cost. Most hotels and major hospitality companies have set industry standards to classify hotel types. An upscale full-service hotel facility offers luxury amenities, full-service accommodations, an on-site restaurant, and the highest level of personalized service, such as a concierge, room service, and clothes-ironing staff. Full-service hotels often contain upscale full-service facilities with many full-service accommodations, an on-site full-service restaurant, and a variety of on-site amenities. Boutique hotels are smaller independent, non-branded hotels that often contain upscale facilities. Small to medium-sized hotel establishments offer a limited amount of on-site amenities. Economy hotels are small to medium-sized hotel establishments that offer basic accommodations with little to no services. Extended stay hotels are small to medium-sized hotels that offer longer-term full-service accommodations compared to a traditional hotel.

Timeshare and destination clubs are a form of property ownership involving ownership of an individual unit of accommodation for seasonal usage. A motel is a small-sized low-rise lodging with direct access to individual rooms from the car parking area. Boutique hotels are typically hotels with a unique environment or intimate setting. A number of hotels and motels have entered the public consciousness through popular culture. Some hotels are built specifically as destinations in themselves, for example casinos and holiday resorts.

Most hotel establishments are run by a general manager who serves as the head executive (often referred to as the "hotel manager"), department heads who oversee various departments within a hotel (e.g., food service), middle managers, administrative staff, and line-level supervisors. The organizational chart and volume of job positions and hierarchy varies by hotel size, function and class, and is often determined by hotel ownership and managing companies.

Etymology

The word hotel is derived from the French hôtel (coming from the same origin as hospital), which referred to a French version of a building seeing frequent visitors, and providing care, rather than a place offering accommodation. In contemporary French usage, hôtel now has the same meaning as the English term, and hôtel particulier is used for the old meaning, as well as "hôtel" in some place names such as Hôtel-Dieu (in Paris), which has been a hospital since the Middle Ages. The French spelling, with the circumflex, was also used in English, but is now rare. The circumflex replaces the 's' found in the earlier hostel spelling, which over time took on a new, but closely related meaning. Grammatically, hotels usually take the definite article – hence "The Astoria Hotel" or simply "The Astoria".

History

Facilities offering hospitality to travellers featured in early civilizations. In Greco-Roman culture and in ancient Persia, hospitals for recuperation and rest were built at thermal baths. Guinness World Records officially recognised Japan's Nishiyama Onsen Keiunkan, founded in 705, as the oldest hotel in the world. During the Middle Ages, various religious orders at monasteries and abbeys would offer accommodation for travellers on the road.

The precursor to the modern hotel was the inn of medieval Europe, possibly dating back to the rule of Ancient Rome. These would provide for the needs of travellers, including food and lodging, stabling and fodder for the traveller's horses and fresh horses for mail coaches. Famous London examples of inns include the George and the Tabard. A typical layout of an inn featured an inner court with bedrooms on the two sides, with the kitchen and parlour at the front and the stables at the back.

For a period of about 200 years from the mid-17th century, coaching inns served as a place for lodging for coach travellers (in other words, a roadhouse). Coaching inns stabled teams of horses for stagecoaches and mail coaches and replaced tired teams with fresh teams. Traditionally they were seven miles apart, but this depended very much on the terrain.

Some English towns had as many as ten such inns and rivalry between them became intense, not only for the income from the stagecoach operators but for the revenue from the food and drink supplied to the wealthy passengers. By the end of the century, coaching inns were being run more professionally, with a regular timetable being followed and fixed menus for food.

Inns began to cater to richer clients in the mid-18th century, and consequently grew in grandeur and in the level of service provided. Sudhir Andrews traces "the birth of an organised hotel industry" to Europe's chalets and small hotels which catered primarily to aristocrats. One of the first hotels in a modern sense, the Royal Clarence, opened in Exeter in 1768, although the idea only really caught on in the early-19th century. In 1812 Mivart's Hotel opened its doors in London, later changing its name to Claridge's.

Hotels proliferated throughout Western Europe and North America in the 19th century. Luxury hotels, including the 1829 Tremont House in Boston, the 1836 Astor House in New York City, the 1889 Savoy Hotel in London, and the Ritz chain of hotels in London and Paris in the late 1890s, catered to an ever more-wealthy clientele.

Title II of the Civil Rights Act of 1964 is part of a United States law that prohibits discrimination on the basis of race, religion, or national origin in places of public accommodation. Hotels are included as types of public accommodation in the Act.

Types

Hotel operations vary in size, function, and cost. Most hotels and major hospitality companies that operate hotels have set widely accepted industry standards to classify hotel types. General categories include the following:

International luxury

International luxury hotels offer high-quality amenities, full-service accommodations, on-site full-service restaurants, and the highest level of personalized and professional service in major or capital cities. International luxury hotels are classified with at least a Five Diamond rating or Five Star hotel rating depending on the country and local classification standards. Example brands include: Grand Hyatt, Conrad, InterContinental, Sofitel, Mandarin Oriental, Four Seasons, The Peninsula, Rosewood, JW Marriott and The Ritz-Carlton.

Lifestyle luxury resorts

Lifestyle luxury resorts are branded hotels that appeal to a guest with lifestyle or personal image in specific locations. They are typically full-service and classified as luxury. A key characteristic of lifestyle resorts is focus on providing a unique guest experience as opposed to simply providing lodging. Lifestyle luxury resorts are classified with a Five Star hotel rating depending on the country and local classification standards. Example brands include: Waldorf Astoria, St. Regis, Shangri-La, Oberoi, Belmond, Jumeirah, Aman, Taj Hotels, Hoshino, Raffles, Fairmont, Banyan Tree, Regent and Park Hyatt.

Upscale full-service

Upscale full-service hotels often provide a wide array of guest services and on-site facilities. Commonly found amenities may include: on-site food and beverage (room service and restaurants), meeting and conference services and facilities, fitness center, and business center. Upscale full-service hotels range in quality from upscale to luxury. This classification is based upon the quality of facilities and amenities offered by the hotel. Examples include: W Hotels, Sheraton, Langham, Kempinski, Pullman, Kimpton Hotels, Hilton, Lotte, Renaissance, Marriott and Hyatt Regency brands.

Boutique

Boutique hotels are smaller independent non-branded hotels that often contain mid-scale to upscale facilities of varying size in unique or intimate settings with full-service accommodations. These hotels are generally 100 rooms or fewer.

Focused or select service

Small to medium-sized hotel establishments that offer a limited number of on-site amenities that only cater and market to a specific demographic of travelers, such as the single business traveler. Most focused or select service hotels may still offer full-service accommodations but may lack leisure amenities such as an on-site restaurant or a swimming pool. Examples include Hyatt Place, Holiday Inn, Courtyard by Marriott and Hilton Garden Inn.

Economy and limited service

Small to medium-sized hotel establishments that offer a very limited number of on-site amenities and often only basic accommodations with little to no services. These facilities normally cater and market only to a specific demographic of travelers, such as the budget-minded traveler seeking a "no frills" accommodation. Limited service hotels often lack an on-site restaurant but in return may offer a limited complimentary food and beverage amenity such as an on-site continental breakfast service. Examples include Ibis Budget, Hampton Inn, Aloft, Holiday Inn Express, Fairfield Inn, and Four Points by Sheraton.

Extended stay

Extended stay hotels are small to medium-sized hotels that offer longer-term full-service accommodations compared to a traditional hotel. Extended stay hotels may offer non-traditional pricing methods such as a weekly rate that caters towards travelers in need of short-term accommodations for an extended period of time. Similar to limited and select service hotels, on-site amenities are normally limited and most extended stay hotels lack an on-site restaurant. Examples include Staybridge Suites, Candlewood Suites, Homewood Suites by Hilton, Home2 Suites by Hilton, Residence Inn by Marriott, Element, and Extended Stay America.

Timeshare and destination clubs

Timeshare and destination clubs are a form of property ownership, also referred to as vacation ownership, involving the purchase and ownership of an individual unit of accommodation for seasonal usage during a specified period of time. Timeshare resorts often offer amenities similar to those of a full-service hotel, with on-site restaurants, swimming pools, recreation grounds, and other leisure-oriented amenities. Destination clubs, on the other hand, may offer more exclusive private accommodations, such as private houses in a neighborhood-style setting. Examples of timeshare brands include Hilton Grand Vacations, Marriott Vacation Club International, Westgate Resorts, Disney Vacation Club, and Holiday Inn Club Vacations.

Motel

A motel, an abbreviation for "motor hotel", is a small-sized low-rise lodging establishment similar to a limited service, lower-cost hotel, but typically with direct access to individual rooms from the car park. Motels were built to serve road travellers, including travellers on road trip vacations and workers who drive for their job (travelling salespeople, truck drivers, etc.). Common during the 1950s and 1960s, motels were often located adjacent to a major highway, where they were built on inexpensive land at the edge of towns or along stretches of freeway.

New motel construction is rare in the 2000s, as hotel chains have been building economy-priced, limited-service franchised properties at freeway exits that compete for much the same clientele, largely saturating the market by the 1990s. Motels are still useful in less populated areas for driving travelers, but the more populated an area becomes, the more hotels move in to meet the demand for accommodation. While many motels are unbranded and independent, many others that remain in operation have joined national franchise chains, often rebranding themselves as hotels, inns or lodges. Some examples of chains with motels include EconoLodge, Motel 6, Super 8, and Travelodge.

Motels in some parts of the world are more often regarded as places for romantic assignations where rooms are often rented by the hour. This is fairly common in parts of Latin America.

Microstay

Hotels may offer rooms for microstays, a type of booking for less than 24 hours in which the customer chooses the check-in time and the length of the stay. This allows the hotel to increase revenue by reselling the same room several times a day. Microstays first gained popularity in Europe but are now common in major global tourist centers.

Management

Hotel management is a globally accepted professional career field and academic field of study. Degree programs such as hospitality management studies, a business degree, and/or certification programs formally prepare hotel managers for industry practice.

Most hotel establishments consist of a general manager who serves as the head executive (often referred to as the "hotel manager"), department heads who oversee various departments within a hotel, middle managers, administrative staff, and line-level supervisors. The organizational chart and the number of job positions and levels of hierarchy vary by hotel size and function, and are often determined by hotel ownership and managing companies.

It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

## #1829 2023-07-08 00:09:35

Jai Ganesh
Registered: 2005-06-28
Posts: 47,788

### Re: Miscellany

1832) Guesthouse

Gist

1. Chiefly US: a building that is separate from the main house of a property and that is used for guests; "The estate includes a small guesthouse."
2. Chiefly British: a small hotel; also: a private house that accepts paying guests.

Details

A guest house (also guesthouse) is a kind of lodging. In some parts of the world (such as the Caribbean), guest houses are a type of inexpensive hotel-like lodging. In others, it is a private home that has been converted for the exclusive use of lodging. The owner usually lives in an entirely separate area within the property and the guest house may serve as a form of lodging business.

Overview

In some areas of the world, guest houses are the only kind of accommodation available for visitors who have no local relatives to stay with. Among the features that distinguish a guest house from a hotel or inn is the lack of full-time staff.

Bed and breakfasts and guest houses in England are family-owned, and the family lives on the premises, though family members are not normally available during the evening. Most family members work a 10- to 12-hour day from 6 am, though they may employ part-time service staff. Hotels maintain a staff presence 24 hours a day, 7 days a week, whereas a guest house has a more limited staff presence. Because of this limited staff presence, check-in at a guest house is often by appointment. An inn also usually has a restaurant attached.

In India, tremendous growth can be seen in the guest house business, especially in Delhi-NCR (the National Capital Region), where progress in the IT sector and the 2010 Commonwealth Games were the two most influential factors. The guest house accommodation sector has since improved significantly, with even home-converted guest houses offering 3-star-equivalent facilities to their guests.

Security

Generally, there are two variations of paying guest houses:

* Home-converted guest house
* Professionally run guest house with all necessary amenities and staff

In the first version, the guest lives with a family that provides shelter and food (bed and breakfast) only; the remaining tasks, such as washing clothes and utensils and cleaning the room or the area around the bed, are done by the guest. In the second version, the guest receives all the amenities required to live comfortably, such as a fully furnished room, a comfortable bed, air-conditioning, a TV, hot and cold water supply, and, importantly, security.

A major advantage of a professionally run paying guest accommodation service is that the owner follows the safety norms set by the local government. Some of the important safety points are:

* Fire safety with regular fire drills
* Disaster management
* Updated safety equipment
* Information signboards for guests and staff
* Government certifications

It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

## #1830 2023-07-08 22:24:47

Jai Ganesh
Registered: 2005-06-28
Posts: 47,788

### Re: Miscellany

1833) Reading

Gist

Reading is making meaning from print. It requires that we:

* Identify the words in print – a process called word recognition
* Construct an understanding from them – a process called comprehension
* Coordinate identifying words and making meaning so that reading is automatic and accurate – an achievement called fluency

Sometimes you can make meaning from print without being able to identify all the words. Remember the last time you got a note in messy handwriting? You may have understood it, even though you couldn't decipher all the scribbles.

Summary

The ability to see and understand written or printed language is called reading. People who cannot read are said to be illiterate, or unlettered (see literacy and illiteracy). The ability to read is one of the foundation skills in all industrialized societies that use the written language to transmit culture and the benefits of civilization from one generation to another.

Like many human abilities, reading is a learned skill. It must be taught. Young children begin learning to read a few years after learning to speak. In doing so, the connection between the words they have learned to say and the ones they see on a printed page becomes clear. The words that appear on a page are printed symbols. The mind interprets those symbols as words it already knows in a rapid recognition process based on the individual’s past experiences. (See also learning; education.)

Perception or decoding of words is basic to all reading. Perception is an activity of the senses, and in the case of reading the sense involved for most people is sight. For the blind the sense is touch, because a blind person uses the fingers to read a code called Braille.

Words and their meanings are recognized together. Beyond the decoding of words is comprehension, which is more than just understanding the words, sentences, and paragraphs. It is a matter of seeing relationships and of connecting what is stated on a page with what is already known about a subject. Comprehension, assimilation, and interpretation of literature are steps toward building new concepts or ideas as well as increasing vocabulary. A reader corrects and refines concepts through each new reading experience. Part of a reader’s reaction involves making judgments about the worth of what is read. Some responses are emotional, while others may be intellectual—assessing the truth of what is read. Imagination is often stimulated as the reader’s mind creates pictures of what is being read.

Several factors determine a reader’s level of comprehension and assimilation: intellectual ability, the range of personal experiences, and the speed at which one reads. Intellectual ability and the breadth of experience are personal matters, and they often have something to do with the age of the individual. The more one has learned and experienced, the more one tends to gain from reading.

Speed of reading is more subject to control. With training, readers can usually learn to read faster. Skimming, or scanning, is a method of quickly looking through text to get information without going through the content line by line. Slower, analytical reading is necessary when content is more technical in nature and details must be absorbed. Following printed directions is required in many activities—using a recipe to bake a cake; studying a highway map or street guide; assembling bicycles, model airplanes, or furniture; or connecting technology such as video-game systems. Some reading calls for critical evaluation of what is read, such as comparing conflicting views or differing opinions encountered in newspapers and magazines.

Details

Reading is the process of taking in the sense or meaning of letters, symbols, etc., especially by sight or touch.

For educators and researchers, reading is a multifaceted process involving such areas as word recognition, orthography (spelling), alphabetics, phonics, phonemic awareness, vocabulary, comprehension, fluency, and motivation.

Other types of reading and writing, such as pictograms (e.g., a hazard symbol and an emoji), are not based on speech-based writing systems. The common link is the interpretation of symbols to extract the meaning from the visual notations or tactile signals (as in the case of braille).

Overview

Reading is typically an individual activity, done silently, although on occasion a person reads out loud for other listeners; or reads aloud for one's own use, for better comprehension. Before the reintroduction of separated text (spaces between words) in the late Middle Ages, the ability to read silently was considered rather remarkable.

Major predictors of an individual's ability to read both alphabetic and non-alphabetic scripts are oral language skills, phonological awareness, rapid automatized naming and verbal IQ.

Reading is an essential part of literacy, yet from a historical perspective literacy is about having the ability to both read and write.

And, since the 1990s some organizations have defined literacy in a wide variety of ways that may go beyond the traditional ability to read and write. The following are some examples:

* "the ability to read and write ... in all media (print or electronic), including digital literacy"
* "the ability to ... understand ... using printed and written materials associated with varying contexts"
* "the ability to read, write, speak and listen"
* "having the skills to be able to read, write and speak to understand and create meaning"
* "the ability to ... communicate using visual, audible, and digital materials"
* "the ability to use printed and written information to function in society, to achieve one's goals, and to develop one's knowledge and potential". It includes three types of adult literacy: prose (e.g., a newspaper article), documents (e.g., a bus schedule), and quantitative literacy (e.g., using arithmetic operations in a product advertisement).

In the academic field, some view literacy in a more philosophical manner and propose the concept of "multiliteracies". For example, they say, "this huge shift from traditional print-based literacy to 21st century multiliteracies reflects the impact of communication technologies and multimedia on the evolving nature of texts, as well as the skills and dispositions associated with the consumption, production, evaluation, and distribution of those texts" (Borsheim, Meritt, & Reed, 2008, p. 87). According to cognitive neuroscientist Mark Seidenberg, these "multiple literacies" have allowed educators to change the topic from reading and writing to "Literacy". He goes on to say that some educators, when faced with criticisms of how reading is taught, "didn't alter their practices, they changed the subject".

Also, some organizations might include numeracy skills and technology skills separately but alongside literacy skills.

In addition, since the 1940s the term literacy is often used to mean having knowledge or skill in a particular field (e.g., computer literacy, ecological literacy, health literacy, media literacy, quantitative literacy (numeracy) and visual literacy).

Writing systems

In order to understand a text, it is usually necessary to understand the spoken language associated with that text. In this way, writing systems are distinguished from many other symbolic communication systems. Once established, writing systems on the whole change more slowly than their spoken counterparts, and often preserve features and expressions which are no longer current in the spoken language. The great benefit of writing systems is their ability to maintain a persistent record of information expressed in a language, which can be retrieved independently of the initial act of formulation.

Cognitive benefits

Research suggests that reading can improve stress management, memory, focus, writing skills, and imagination.

The cognitive benefits of reading continue into mid-life and the senior years.

Research suggests that reading books and writing are among the brain-stimulating activities that can slow down cognitive decline in seniors.

Reading has been the subject of considerable research and reporting for decades. Many organizations measure and report on reading achievement for children and adults (e.g., NAEP, PIRLS, PISA PIAAC, and EQAO). (National Assessment of Educational Progress, Progress in International Reading Literacy Study, Programme for International Student Assessment, Programme for the International Assessment of Adult Competencies, Education Quality and Accountability Office).

Researchers have concluded that 95% of students can be taught to read by the end of first grade, yet in many countries 20% or more do not meet that expectation.

According to the 2019 Nation's Report Card, 34% of grade-four students in the United States failed to perform at or above the Basic reading level, with a significant difference by race and ethnicity (e.g., black students at 52% and white students at 23%). Following the impact of the COVID-19 pandemic, the average basic reading score dropped by 3% in 2022. According to a 2023 study in California, many teenagers who have spent time in California's juvenile detention facilities receive high school diplomas with grade-school reading skills: "There are kids getting their high school diplomas who aren't able to even read and write." During a five-year span beginning in 2018, 85% of these students who graduated from high school did not pass a 12th-grade reading assessment.

Between 2013 and 2022, 30 US States passed laws or implemented new policies related to evidence-based reading instruction. In 2023, New York City set about to require schools to teach reading with an emphasis on phonics. In that city, less than half of the students from the third grade to the eighth grade of school scored as proficient on state reading exams. More than 63% of Black and Hispanic test-takers did not make the grade.

Globally, the COVID-19 pandemic created a substantial overall learning deficit in reading abilities and other academic areas. It arose early in the pandemic, persists over time, and is particularly large among children from low socio-economic backgrounds. In the US, several research studies show that, in the absence of additional support, there is nearly a 90 percent chance that a poor reader in Grade 1 will remain a poor reader.

In Canada, the provinces of Ontario and Nova Scotia, respectively, reported that 26% and 30% of grade three students did not meet the provincial reading standards in 2019. In Ontario, 53% of Grade 3 students with special education needs (students who have an Individual Education Plan), were not meeting the provincial standard.

The Progress in International Reading Literacy Study (PIRLS) publishes reading achievement for fourth graders in 50 countries. The five countries with the highest overall reading average are the Russian Federation, Singapore, Hong Kong SAR, Ireland and Finland. Some others are: England 10th, United States 15th, Australia 21st, Canada 23rd, and New Zealand 33rd.

The Programme for International Student Assessment (PISA) measures 15-year-old school pupils' scholastic performance in mathematics, science, and reading.

The reading levels of adults, ages 16–65, in 39 countries are reported by the Programme for the International Assessment of Adult Competencies (PIAAC). Between 2011 and 2018, PIAAC reports the percentage of adults reading at-or-below level one (the lowest of five levels). Some examples are Japan 4.9%, Finland 10.6%, Netherlands 11.7%, Australia 12.6%, Sweden 13.3%, Canada 16.4%, England (UK) 16.4%, and the United States 16.9%.

According to the World Bank, 53% of all children in low- and middle-income countries suffer from 'learning poverty'. In 2019, using data from the UNESCO Institute for Statistics, it published a report entitled Ending Learning Poverty: What Will It Take? Learning poverty is defined as being unable to read and understand a simple text by age 10.

Although they say that all foundational skills are important, including reading, numeracy, basic reasoning ability, socio-emotional skills, and others, they focus specifically on reading. Their reasoning is that reading proficiency is an easily understood metric of learning, reading is a student's gateway to learning in every other area, and reading proficiency can serve as a proxy for foundational learning in other subjects.

They suggest five pillars to reduce learning poverty: 1) learners are prepared and motivated to learn, 2) teachers at all levels are effective and valued, 3) classrooms are equipped for learning, 4) schools are safe and inclusive spaces, and 5) education systems are well-managed.

It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

## #1831 2023-07-09 21:40:58

Jai Ganesh
Registered: 2005-06-28
Posts: 47,788

### Re: Miscellany

1834) Crown

Gist

A crown is a circular decoration for the head, usually made of gold and jewels (= precious stones), and worn by a king or queen at official ceremonies.

Summary

A crown, from the earliest times, is a distinctive head ornament that has served as a reward of prowess and a sign of honour and dominion. Athletes, poets, and successful warriors were awarded wreaths of different forms in Classical times, and the chief of a barbarian tribe customarily wore a distinctive helmet. In the earliest English coronation ritual, dating back more than 1,000 years, the king was invested with a helmet instead of a crown, and a helmet with an ornamental frame surmounts the unwarlike head of Edward the Confessor on his great seal.

Another crown form in England and abroad followed the principle of the wreath and might consist of a string of jewels tied at the back with a ribbon or set in a rigid band of gold. When this type of chaplet was adopted by the nobility in general, the royal crown was distinguished by a number of ornaments upstanding from its rim; by the 15th century the helmet form was incorporated by the addition of one or more arches. These rose from the rim and, crossing in the centre, supported a finial—usually a ball and cross but in France, from the time of Louis XIV, a fleur-de-lis.

Many of the early European crowns were made in sections hinged together by long pins, which enabled them to be taken apart for transport or storage and, when worn, to adapt themselves to the shape and size of the wearer’s head. A circlet was made for Queen Victoria on the same principle, with its sections hinged but not detachable.

The practice of grounding the arches not on the rim of the circlet but on the tops of the surrounding ornaments began in the 17th century. This led to a change in shape and a flattening or depression in the centre that later was explained away as having a royal or imperial significance. Many crowns are to be found in continental cathedrals, museums, and royal treasuries. Some are associated with early figures of history or romance; others—e.g., the steel crown of Romania—are comparatively modern. The only European states in which the crown is still imposed in the course of a religious ceremony of consecration are Great Britain and the Vatican.

Details

A crown is a traditional form of head adornment, or hat, worn by monarchs as a symbol of their power and dignity. A crown is often, by extension, a symbol of the monarch's government or items endorsed by it. The word itself is used, particularly in Commonwealth countries, as an abstract name for the monarchy itself, as distinct from the individual who inhabits it (that is, The Crown). A specific type of crown (or coronet for lower ranks of peerage) is employed in heraldry under strict rules. Indeed, some monarchies never had a physical crown, just a heraldic representation, as in the constitutional kingdom of Belgium.

Variations

* Costume headgear imitating a monarch's crown is also called a crown hat. Such costume crowns may be worn by actors portraying a monarch, people at costume parties, or ritual "monarchs" such as the king of a Carnival krewe, or the person who found the trinket in a king cake.
* The nuptial crown, sometimes called a coronal, worn by a bride, and sometimes the bridegroom, at their wedding has been found in many European cultures since ancient times. In the present day, it is most common in Eastern Orthodox cultures. The Eastern Orthodox marriage service has a section called the crowning, wherein the bride and groom are crowned as "king" and "queen" of their future household. In Greek weddings, the crowns are diadems usually made of white flowers, synthetic or real, often adorned with silver or mother of pearl. They are placed on the heads of the newlyweds and are held together by a ribbon of white silk. They are then kept by the couple as a reminder of their special day. In Slavic weddings, the crowns are usually made of ornate metal, designed to resemble an imperial crown, and are held above the newlyweds' heads by their best men. A parish usually owns one set to use for all the couples married there, since these are much more expensive than Greek-style crowns. This was also common in Catholic countries in the past.
* Crowns are also often used as symbols of religious status or veneration, by divinities (or their representations, such as a statue) or by their representatives (e.g., the Black Crown of the Karmapa Lama), sometimes used as a model for wider use by devotees.
* According to the New Testament, a crown of thorns was placed on the head of Jesus before his crucifixion; it has become a common symbol of martyrdom.
* According to Roman Catholic tradition, the Blessed Virgin Mary was crowned as Queen of Heaven after her assumption into heaven. She is often depicted wearing a crown, and statues of her in churches and shrines are ceremonially crowned during May.
* The Crown of Immortality is also common in historical symbolism.
* The heraldic symbol of Three Crowns, referring to the three evangelical Magi (wise men), traditionally called kings, is believed to have thus become the symbol of the Swedish kingdom, but it also fits the historical (personal, dynastic) Kalmar Union (1397–1520) between the three kingdoms of Denmark, Sweden, and Norway.
* In India, crowns are known as makuta (Sanskrit for "crest"); they have been used since ancient times and are described adorning Hindu gods or kings. The makuta style was then copied by the Indianized kingdoms that were influenced by the Hindu-Buddhist concept of kingship in Southeast Asia, such as in Java and Bali in Indonesia, Cambodia, Burma and Thailand.
* Dancers of certain traditional Thai dances often wear crowns (mongkut) on their heads. These are inspired by the crowns worn by deities and by kings.
* In pre-Colonial Philippines crown-like diadems, or putong, were worn by elite individuals and deities, among an array of golden ornaments.
* The shamsa was a massive, jewel-inlaid ceremonial crown hung by a chain that was part of the regalia of the Abbasid and Fatimid Caliphates.

Terminology

Three distinct categories of crowns exist in those monarchies that use crowns or state regalia.

Coronation

Worn by monarchs when being crowned.

State

Worn by monarchs on other state occasions.

Consort crowns

Worn by a consort, signifying rank granted as a matter of constitutional courtesy and protocol.

Crowns or similar headgear worn by nobility and other high-ranking people below the ruler are in English often called coronets; however, in many languages this distinction is not made, and the same word is used for both types of headgear (e.g., French couronne, German Krone, Dutch kroon). In some of these languages the term "rank crown" (rangkroon, etc.) refers to the way these crowns may be ranked according to hierarchical status. In classical antiquity, the crown (corona) that was sometimes awarded to people other than rulers, such as triumphant military generals or athletes, was actually a wreath or chaplet, or a ribbon-like diadem.

History

Crowns have been discovered at pre-historic sites in Haryana, India. The precursor to the crown was the browband called the diadem, which had been worn by the Achaemenid Persian emperors. It was adopted by Constantine I and was worn by all subsequent rulers of the later Roman Empire. Almost all Sassanid kings wore crowns. One of the most famous kings, who left numerous statues, reliefs and coins depicting his crown, is Shapur I.

Numerous crowns of various forms were used in antiquity, such as the Hedjet, Deshret, Pschent (double crown) and Khepresh of Pharaonic Egypt. The Pharaohs of Egypt also wore the diadem, which was associated with solar cults, an association which was not completely lost, as it was later revived under the Roman Emperor Augustus. By the time of the Pharaoh Amenophis III (r. 1390–1352 BC), wearing a diadem had clearly become a symbol of royalty. The wreaths and crowns of classical antiquity were sometimes made from natural materials such as laurel, myrtle, olive, or wild celery.

The corona radiata, the "radiant crown" known best on the Statue of Liberty, and perhaps worn by the Helios that was the Colossus of Rhodes, was worn by Roman emperors as part of the cult of Sol Invictus prior to the Roman Empire's conversion to Christianity. It was referred to as "the chaplet studded with sunbeams" by Lucian, about 180 AD.

Perhaps the oldest extant Christian crown in Europe is the Iron Crown of Lombardy, of Roman and Longobard antiquity, used by the Holy Roman Empire and the Kingdom of Italy. It was later used again to crown the modern kings of Napoleonic and Austrian Italy, and to represent united Italy after 1860. Today, the crown is kept in the Cathedral of Monza. In the Christian tradition of European cultures, where ecclesiastical sanction authenticates monarchic power when a new monarch ascends the throne, the crown is placed on the new monarch's head by a religious official in a coronation ceremony. Some, though not all, early Holy Roman Emperors travelled to Rome at some point in their careers to be crowned by the pope. Napoleon, according to legend, surprised Pius VII when he reached out and crowned himself, although in reality this order of ceremony had been pre-arranged.

Today, only the British Monarchy and Tongan Monarchy, with their anointed and crowned monarchs, continue this tradition, although many monarchies retain a crown as a national symbol. The French Crown Jewels were sold in 1885 on the orders of the Third French Republic, with only a token number, their precious stones replaced by glass, retained for historic reasons and displayed in the Louvre. The Spanish Crown Jewels were destroyed in a major fire in the 18th century while the so-called "Irish Crown Jewels" (actually merely the British Sovereign's insignia of the Most Illustrious Order of St Patrick) were stolen from Dublin Castle in 1907, just before the investiture of Bernard Edward Barnaby FitzPatrick, 2nd Baron Castletown.

The Crown of King George XII of Georgia was made of gold and decorated with 145 diamonds, 58 rubies, 24 emeralds, and 16 amethysts. It took the form of a circlet surmounted by ornaments and eight arches. A globe surmounted by a cross rested on the top of the crown.

Special headgear to designate rulers dates back to pre-history, and is found in many separate civilizations around the globe. Commonly, rare and precious materials are incorporated into the crown, but that is only essential for the notion of crown jewels. Gold and precious jewels are common in western and oriental crowns. In the Native American civilizations of the Pre-Columbian New World, rare feathers, such as that of the quetzal, often decorated crowns; so too in Polynesia (e.g., Hawaii).

Coronation ceremonies are often combined with other rituals, such as enthronement (the throne is as much a symbol of monarchy as the crown) and anointing (again, a religious sanction, the only defining act in the Biblical tradition of Israel).

In other cultures, no crown is used in the equivalent of coronation, but the head may still be otherwise symbolically adorned; for example, with a royal tikka in the Hindu tradition of India.

It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

## #1832 2023-07-10 16:09:55

Jai Ganesh
Registered: 2005-06-28
Posts: 47,788

### Re: Miscellany

1835) Throne

Gist

Throne: a) the special chair where a king or queen sits.
b) the position of being king or queen.

Summary

A throne is a chair of state often set on a dais and surmounted by a canopy, representing the power of the dignitary who sits on it and sometimes conferring that power. The extent to which seats of this kind have become symbolically identified with the status of their occupiers is suggested by the fact that in monarchies the office of the ruler is often referred to as The Throne and that at Papal conclaves, when an election has been made, the canopies are lowered from the thrones of all participating cardinals except the successful one.

From the very beginning of Greek history, thrones were identified as seats of the gods. Soon the meaning of the word included the symbolic seats of those who held secular or religious power—a meaning common to virtually all cultures, ranging from Benin to the empires of South America. In the ancient world, especially in the East, thrones almost invariably had symbolic magnificence. Solomon’s throne, for instance, is thus described in II Chron. 9:

'The king also made a great ivory throne, and overlaid it with pure gold. The throne had six steps and a footstool of gold, and on each side of the seats were arm rests and two lions standing beside the arm rests, while twelve lions stood there, one on each end of a step on the six steps. The like of it was never made in any kingdom.'

The throne of the Byzantine emperors was modelled on Solomon’s, with the added refinement that the lions were mechanical. In the British Museum there is a fragment encrusted with gold, ivory, lapis lazuli, and carnelian believed to have come from the throne of Sargon II of Assyria (died 705 BC).

Details

A throne is the seat of state of a potentate or dignitary, especially the seat occupied by a sovereign on state occasions; or the seat occupied by a pope or bishop on ceremonial occasions. "Throne" in an abstract sense can also refer to the monarchy or the Crown itself, an instance of metonymy, and is also used in many expressions such as "the power behind the throne".

Since the early advanced cultures, the throne has been known as a symbol of divine and secular rule, and the establishment of a throne as a defining sign of a claim to power and authority. It may have a high backrest and feature heraldic animals or other decorations as adornment and as a sign of power and strength. A throne can be placed underneath a canopy or baldachin. The throne can stand on steps or a dais and is thus always elevated. The expression "ascend (mount) the throne" takes its meaning from the steps leading up to the dais or platform on which the throne is placed, which were formerly included in the word's significance. Coats of arms or insignia can feature on the throne or canopy and represent the dynasty. Even in the physical absence of the ruler, an empty throne can symbolise the everlasting presence of monarchical authority.

When used in a political or governmental sense, a throne typically exists in a civilization, nation, tribe, or other politically designated group that is organized or governed under a monarchical system. Throughout much of human history societies have been governed under monarchical systems, at first autocratic and in most cases later evolving into constitutional monarchies within liberal democratic systems, resulting in a wide variety of thrones that have been used by given heads of state. These have ranged from stools, as in parts of Africa, to the ornate chairs and bench-like designs of Europe and Asia. Often, but not always, a throne is tied to a philosophical or religious ideology held by the nation or people in question, which serves a dual role in unifying the people under the reigning monarch and connecting the monarch upon the throne to his or her predecessors, who sat upon the throne previously. Accordingly, many thrones are typically held to have been constructed or fabricated out of rare or hard-to-find materials that may be valuable or important to the land in question. Depending on the size of the throne in question, it may be large and ornately designed as an emplaced instrument of a nation's power, or it may be a symbolic chair with little or no precious materials incorporated into the design.

When used in a religious sense, throne can refer to one of two distinct uses. The first derives from the practice in churches of having a bishop or higher-ranking religious official (archbishop, pope, etc.) sit on a special chair, referred to in written sources as a "throne" or cathedra (Latin for 'chair'), which gives such high-ranking religious officials a place to sit in their place of worship. The other use refers to a belief among many of the world's monotheistic and polytheistic religions that the deity or deities they worship are seated on a throne. Such beliefs go back to ancient times, and can be seen in surviving artwork and texts which discuss the idea of ancient gods (such as the Twelve Olympians) seated on thrones. In the major Abrahamic religions of Judaism, Christianity, and Islam, the Throne of God is attested to in religious scriptures and teachings, although the origin, nature, and idea of the Throne of God in these religions differs according to the given religious ideology practiced.

In the west, a throne is most identified as the seat upon which a person holding the title King, Queen, Emperor, or Empress sits in a nation using a monarchical political system, although there are a few exceptions, notably with regard to religious officials such as the pope and bishops of various sects of the Christian faith. Changing geo-political tides have resulted in the collapse of several dictatorial and autocratic governments, which in turn have left a number of throne chairs empty. Many of these thrones—such as China's Dragon Throne—survive today as historic examples of a nation's previous government.

Antiquity

Thrones were found throughout the canon of ancient furniture. The depiction of monarchs and deities as seated on chairs is a common topos in the iconography of the Ancient Near East.

The word throne itself is from Greek thronos, "seat, chair", in origin a derivation from the PIE root *dher- "to support" (also in dharma "post, sacrificial pole"). The early Greek Dios thronous was a term for the "support of the heavens", i.e. the axis mundi, which, when Zeus became an anthropomorphic god, was imagined as the "seat of Zeus". In Ancient Greek, a thronos was a specific but ordinary type of chair with a footstool, a high-status object but not necessarily with any connotations of power. The Achaeans (according to Homer) were known to place additional, empty thrones in the royal palaces and temples so that the gods could be seated when they wished to be. The most famous of these thrones was the throne of Apollo in Amyclae.

The Romans also had two types of thrones—one for the emperor and one for the goddess Roma whose statues were seated upon thrones, which became centers of worship.

Persia

In Persia, the traditional name of the throne is the Takht-e Padeshah. From the Achaemenid era to Pahlavi, the last Iranian dynasty, the throne served as the seat of the shahs.

Hebrew Bible

The word "throne" in English translations of the Bible renders a Hebrew term. The pharaoh of the Exodus is described as sitting on a throne (Exodus 11:5, 12:29), but mostly the term refers to the throne of the kingdom of Israel, often called the "throne of David" or "throne of Solomon". The literal throne of Solomon is described in 1 Kings 10:18–20: "Moreover the king made a great throne of ivory, and overlaid it with the best gold. The throne had six steps, and the top of the throne was round behind: and there were stays on either side on the place of the seat, and two lions stood beside the stays. And twelve lions stood there on the one side and on the other upon the six steps: there was not the like made in any kingdom." In the Book of Esther (5:3), the same word refers to the throne of the king of Persia.

The God of Israel himself is frequently described as sitting on a throne, referred to outside of the Bible as the Throne of God, in the Psalms and in a vision of Isaiah (6:1); notably, in Isaiah 66:1 YHWH says of himself, "The heaven is my throne, and the earth is my footstool" (this verse is alluded to in Matthew 5:34–35).

Christian:

Biblical

In the Old Testament, the First Book of Kings describes the throne of Solomon in chapter 10:18–20: "Then the king made a great throne covered with ivory and overlaid with fine gold. The throne had six steps, and its back had a rounded top. On both sides of the seat were armrests, with a lion standing beside each of them. Twelve lions stood on the six steps, one at either end of each step."

In the New Testament, the angel Gabriel also refers to this throne in the Gospel of Luke (1:32–33): "He will be great, and will be called the Son of the Highest; and the Lord God will give Him the throne of His father David. And He will reign over the house of Jacob forever, and of His kingdom there will be no end."

Jesus promised his apostles that they would sit upon "twelve thrones", judging the twelve tribes of Israel (Matthew 19:28). John's Revelation states: "And I saw a great white throne, and him that sat on it, from whose face the earth and the heaven fled away" (Revelation 20:11).

The Apostle Paul speaks of "thrones" in Colossians 1:16. Pseudo-Dionysius the Areopagite, in his work De Coelesti Hierarchia (VI.7), interprets this as referring to one of the ranks of angels (corresponding to the Hebrew Arelim or Ophanim). This concept was expanded upon by Thomas Aquinas in his Summa Theologica (I.108), wherein the thrones are concerned with carrying out divine justice.

In Medieval times the "Throne of Solomon" was associated with the Virgin Mary, who was depicted as the throne upon which Jesus sat. The ivory in the biblical description of the Throne of Solomon was interpreted as representing purity, the gold representing divinity, and the six steps of the throne stood for the six virtues. Psalm 45:9 was also interpreted as referring to the Virgin Mary, with the entire Psalm describing a royal throne room.

Ecclesiastical

From ancient times, bishops of the Roman Catholic, Eastern Orthodox, Anglican and other churches where episcopal offices exist, have been formally seated on a throne, called a cathedra (Greek:  seat). Traditionally located in the sanctuary, the cathedra symbolizes the bishop's authority to teach the faith (hence the expression "ex cathedra") and to govern his flock.

Ex cathedra refers to the explicative authority, notably the extremely rarely used procedure required for a papal declaration to be 'infallible' under Roman Catholic canon law. In several languages the word deriving from cathedra is commonly used for an academic teaching mandate, the professorial chair.

From the presence of this cathedra (throne), which can be as elaborate and precious as befits a secular prince (even if the prelate is not a prince of the church in the secular sense), a bishop's primary church is called a cathedral. In the Roman Catholic Church, a basilica—from the Greek basilikos, 'royal'—now refers to the presence there of a papal canopy (ombrellino), part of his regalia, and applies mainly to many cathedrals and Catholic churches of similar importance or splendor. In Roman antiquity a basilica was a secular public hall. Thus, the term basilica may also refer to a church designed after the manner of the ancient Roman basilica. Many of the churches built by the emperor Constantine the Great and Justinian are of the basilica style.

Some other prelates besides bishops are permitted the use of thrones, such as abbots and abbesses. These are often simpler than the thrones used by bishops and there may be restrictions on the style and ornamentation used on them, according to the regulations and traditions of the particular denomination.

As a mark of distinction, Roman Catholic bishops and higher prelates have a right to a canopy above their thrones at certain ecclesiastical functions. It is sometimes granted by special privilege to prelates inferior to bishops, but always with limitations as to the days on which it may be used and the character of its ornamentation. The liturgical color of the canopy should correspond with that of the other vestments. When ruling monarchs attend services, they are also allowed to be seated on a throne that is covered by a canopy, but their seats must be outside the sanctuary.

In the Greek Orthodox Church, the bishop's throne will often combine features of the monastic choir stall (kathisma) with appurtenances inherited from the Byzantine court, such as a pair of lions seated at the foot of the throne.

The term "throne" is often used in reference to patriarchs to designate their ecclesiastical authority; for instance, "the Ecumenical Throne" refers to the authority of the ecumenical patriarch of Constantinople.

Western bishops may also use a faldstool to fulfill the liturgical purpose of the cathedra when not in their own cathedral.

Papal

In the Roman Catholic Church, the pope is an elected monarch, both under canon law as supreme head of the church, and under international law as the head of state—styled "sovereign pontiff"—of the Vatican City State (the sovereign state within the city of Rome established by the 1929 Lateran Treaty). Until 1870, the pope was the elected monarch of the Papal States, which for centuries constituted one of the largest political powers on the divided Italian peninsula. To this day, the Holy See maintains officially recognised diplomatic status, and papal nuncios and legates are deputed on diplomatic missions throughout the world.

The pope's throne (Cathedra Romana) is located in the apse of the Archbasilica of St. John Lateran, his cathedral as Bishop of Rome.

In the apse of Saint Peter's Basilica, above the "Altar of the Chair" lies the Cathedra Petri, a throne believed to have been used by St Peter himself and other earlier popes; this relic is enclosed in a gilt bronze casting and forms part of a huge monument designed by Gian Lorenzo Bernini.

Unlike at his cathedral (Archbasilica of St. John Lateran), there is no permanent cathedra for the pope in St Peter's Basilica, so a removable throne is placed in the basilica for the pope's use whenever he presides over a liturgical ceremony. Prior to the liturgical reforms that occurred in the wake of the Second Vatican Council, a huge removable canopied throne was placed above an equally removable dais in the choir side of the "Altar of the Confession" (the high altar above the tomb of St Peter and beneath the monumental bronze baldachin); this throne stood between the apse and the Altar of the Confession.

This practice fell out of use with the 1960s and 1970s reform of the papal liturgy, and whenever the pope celebrates Mass in St. Peter's Basilica, a simpler portable throne is now placed on a platform in front of the Altar of the Confession. Whenever Pope Benedict XVI celebrated the Liturgy of the Hours at St Peter's, a more elaborate removable throne was placed on a dais to the side of the Altar of the Chair. When the pope celebrates Mass on the basilica steps facing St. Peter's Square, portable thrones are also used.

In the past, the pope was also carried on occasions in a portable throne, called the sedia gestatoria. Originally, the sedia was used as part of the elaborate procession surrounding papal ceremonies that was believed to be the most direct heir of pharaonic splendor, and included a pair of flabella (fans made from ostrich feathers) to either side. Pope John Paul I at first abandoned the use of these implements, but later in his brief reign began to use the sedia so that he could be seen more easily by the crowds. The use of the sedia was abandoned by Pope John Paul II in favor of the so-called "popemobile" when outside. Near the end of his pontificate, Pope John Paul II had a specially constructed throne on wheels that could be used inside.

Prior to 1978, at the papal conclave, each cardinal was seated on a throne in the Sistine Chapel during the balloting. Each throne had a canopy over it. After a successful election, once the new pope accepted election and decided by what name he would be known, the cardinals would all lower their canopies, leaving only the canopy over the newly elected pope. This was the new pope's first throne. This tradition was dramatically portrayed in the 1968 film The Shoes of the Fisherman.

Medieval and early modern periods

In European feudal countries, monarchs often were seated on thrones, based in all likelihood on the Roman magisterial chair. These thrones were originally quite simple, especially when compared to their Asian counterparts. One of the grandest and most important was the Throne of Ivan "the Terrible". Dating from the mid-16th century, it is shaped as a high-backed chair with arm rests, and adorned with ivory and walrus bone plaques intricately carved with mythological, heraldic and life scenes. The plaques carved with scenes from the biblical account of King David’s life are of particular relevance, as David was seen as the ideal for Christian monarchs. In practice, any chair the monarch occupied in a formal setting served as a "throne", though there were often special chairs used only for this kept in places the monarch often went to. Thrones began to be made in pairs, for the king and queen, which remained common in later periods. Sometimes they are identical, or the queen's throne may be slightly less grand.

The throne of the Byzantine Empire (Magnaura) included elaborate automatons of singing birds. In the 'regency' (nominally an Ottoman province, de facto an independent realm) of the bey of Tunis, the throne was called kursi.

Although medieval examples tended to be retained in the early modern period, having acquired the aura of tradition, when new thrones were made they either continued medieval styles or were just very grand and elaborate versions of contemporary chairs or armchairs.

South Asia

In the Indian subcontinent, the traditional Sanskrit name for the throne was siṃhāsana (lit., seat of a lion). In the Mughal times the throne was called Shāhī takht. The term gadi or gaddi referred to a seat with a cushion used as a throne by Indian princes. That term was usually used for the throne of a Hindu princely state's ruler, while among Muslim princes or Nawabs, save exceptions such as the Travancore State royal family the term musnad, also spelt as musnud, was more common, even though both seats were similar.

The Throne of Jahangir was built by Mughal emperor Jahangir in 1602 and is located at the Diwan-i-Khas (hall of private audience) at the Agra Fort.

The Peacock Throne was the seat of the Mughal emperors of India. It was commissioned in the early 17th century by Emperor Shah Jahan and was located in the Red Fort of Delhi. The original throne was subsequently captured and taken as a war trophy in 1739 by the Persian king Nadir Shah and has been lost ever since. A replacement throne based on the original was commissioned afterwards and existed until the Indian Rebellion of 1857.

Maharaja Ranjit Singh's throne was made by the goldsmith Hafez Muhammad Multani about 1820 to 1830. Made of wood and resin core, covered with sheets of repoussé, chased and engraved gold.

The Golden Throne or Chinnada Simhasana or Ratna Simahasana in Kannada is the royal seat of the rulers of the Kingdom of Mysore. The Golden Throne is kept at Mysore Palace.

Southeast Asia

In Burma, the traditional name for a throne is palin, from the Pali term pallaṅka, which means "couch" or "sofa". The Burmese palin in pre-colonial times was used to seat the sovereign and his main consort, and is today used to seat religious leaders such as sayadaws, and images of the Buddha. Royal thrones are called yazapalin, while thrones seating images or statues of the Buddha are called gaw pallin or samakhan, from the Pali term sammakhaṇḍa.

East Asia

The Dragon Throne is the term used to identify the throne of the emperor of China. As the dragon was the emblem of divine imperial power, the throne of the emperor, who was considered a living god, was known as the Dragon Throne. The term can refer to very specific seating, as in the special seating in various structures in the Forbidden City of Beijing or in the palaces of the Old Summer Palace. In an abstract sense, the "Dragon Throne" also refers rhetorically to the head of state and to the monarchy itself. The Daoguang Emperor is said to have referred to his throne as "the divine utensil."

The throne of the emperors of Vietnam is often referred to as ngai vàng ("golden throne") or ngôi báu, literally "great precious" (seat/position). The throne is always adorned with the pattern and motif of the Vietnamese dragon, which is the exclusive and privileged symbol of the Vietnamese emperors. The last surviving imperial throne in Vietnam is the throne of the Nguyễn emperors, placed in the Hall of Supreme Harmony at the Imperial City of Huế. It is designated as a national treasure of Vietnam. In Vietnamese folk religion, the gods, deities and ancestral spirits are believed to sit figuratively on thrones at places of worship. Therefore, on Vietnamese altars there are various types of liturgical "throne", often decorated with red paint and golden gilding.

The Phoenix Throne is the term used to identify the throne of the king of Korea. In an abstract sense, the Phoenix Throne also refers rhetorically to the head of state of the Joseon dynasty (1392–1897) and the Empire of Korea (1897–1910). The throne is located at Gyeongbok Palace in Seoul.

The Chrysanthemum Throne is the term used to identify the throne of the emperor of Japan. The term also can refer to very specific seating, such as the takamikura throne in the Shishin-den at Kyoto Imperial Palace.

The throne of the Ryukyu Kingdom is located in Shuri Castle, Naha.

Modern period

During the Russian Empire, the throne in St. George's Hall (the "Greater Throne Room") in the Winter Palace was regarded as the throne of Russia. It sits atop a seven-stepped dais with a proscenium arch above and the symbol of the imperial family behind (the two-headed eagle). Peter I's Room (the "Smaller Throne Room") is modest in comparison to the former. The throne was made for Empress Anna Ivanovna in London. There is also a throne in the Grand Throne Room of the Peterhof Palace.

In some countries with a monarchy, thrones are still used and have important symbolic and ceremonial meaning. Among the most famous thrones still in use are St Edward's Chair, on which the British monarch is crowned, and the thrones used by monarchs during the state opening of parliaments in the United Kingdom, the Netherlands, Canada, Australia, and Japan, among others.

Some republics use distinctive throne-like chairs in some state ceremonies. The president of Ireland sits on a former viceregal throne during his or her inauguration ceremony, while the lords mayor and lords provost of many British and Irish cities often preside over local councils from throne-like chairs.

Owing to its symbolic nature, a toilet is often jokingly referred to as "a throne".

It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

## #1833 2023-07-11 14:37:29

Jai Ganesh
Registered: 2005-06-28
Posts: 47,788

### Re: Miscellany

1836) Training

Gist

Training: the process of learning the skills you need to do a particular job or activity.

1) It is a process by which someone is taught the skills that are needed for an art, profession, or job.
2) It is the process by which an athlete prepares for competition by exercising, practicing, etc.

Summary

Training: Meaning, Definition and Types of Training!

Training constitutes a basic concept in human resource development. It is concerned with developing a particular skill to a desired standard by instruction and practice. Training is a highly useful tool that can bring an employee into a position where they can do their job correctly, effectively, and conscientiously. Training is the act of increasing the knowledge and skill of an employee for doing a particular job.

Definition of Training:

Dale S. Beach defines training as ‘the organized procedure by which people learn knowledge and/or skill for a definite purpose’. Training refers to the teaching and learning activities carried on for the primary purpose of helping members of an organization acquire and apply the knowledge, skills, abilities, and attitudes needed by a particular job and organization.

According to Edwin Flippo, ‘training is the act of increasing the skills of an employee for doing a particular job’.

Need for Training:

Every organization should provide training to all the employees irrespective of their qualifications and skills.

Specifically, the need for training arises for the following reasons:

1. Environmental changes:

Mechanization, computerization, and automation have resulted in many changes that require trained staff possessing enough skills. The organization should train the employees to enrich them with the latest technology and knowledge.

2. Organizational complexity:

With modern inventions, technological upgrades, and diversification, most organizations have become very complex. This has aggravated the problems of coordination. So, in order to cope with these complexities, training has become mandatory.

3. Human relations:

Every management has to maintain very good human relations, and this has made training one of the basic conditions for dealing with human problems.

4. To match employee specifications with the job requirements and organizational needs:

An employee's specifications may not exactly suit the requirements of the job and the organization, irrespective of past experience and skills. There is always a gap between an employee's present specifications and the organization's requirements. Training is required to fill this gap.

5. Change in the job assignment:

Training is also necessary when an existing employee is promoted to a higher level or transferred to another department. Training is also required to equip older employees with new techniques and technologies.

Importance of Training:

Training of employees and managers is absolutely essential in this changing environment. It is an important activity of HRD which helps in improving the competency of employees. Training gives many benefits to employees, such as improvement in efficiency and effectiveness, development of self-confidence, and assistance in self-management.

The stability and progress of an organization always depend on the training imparted to its employees. Training becomes mandatory at each and every step of expansion and diversification. Only training can improve quality and reduce wastage to a minimum. Training and development are also essential for adapting to a changing environment.

Types of Training:

Various types of training can be given to the employees such as induction training, refresher training, on the job training, vestibule training, and training for promotions.

Some of the commonly used training programs are listed below:

1. Induction training:

Also known as orientation training, this is given to new recruits in order to familiarize them with the internal environment of an organization. It helps employees understand the procedures, code of conduct, and policies existing in that organization.

2. Job instruction training:

This training provides an overview of the job, and experienced trainers demonstrate the entire job. Additional training is offered to employees after evaluating their performance, if necessary.

3. Vestibule training:

It is training in the actual work to be done by an employee, but conducted away from the workplace.

4. Refresher training:

This type of training is offered in order to incorporate the latest developments in a particular field. It is imparted to upgrade the skills of employees, and can also be used when promoting an employee.

5. Apprenticeship training:

An apprentice is a worker who spends a prescribed period of time under a supervisor.

Details

Training is teaching, or developing in oneself or others, any skills and knowledge or fitness that relate to specific useful competencies. Training has specific goals of improving one's capability, capacity, productivity and performance. It forms the core of apprenticeships and provides the backbone of content at institutes of technology (also known as technical colleges or polytechnics). In addition to the basic training required for a trade, occupation or profession, training may continue beyond initial competence to maintain, upgrade and update skills throughout working life. People within some professions and occupations may refer to this sort of training as professional development. Training also refers to the development of physical fitness related to a specific competence, such as sport, martial arts, military applications and some other occupations.

Types:

Physical training

Physical training concentrates on mechanistic goals: training programs in this area develop specific motor skills, agility, strength or physical fitness, often with an intention of peaking at a particular time.

In military use, training means gaining the physical ability to perform and survive in combat, and learn the many skills needed in a time of war. These include how to use a variety of weapons, outdoor survival skills, and how to survive being captured by the enemy, among many others. See military education and training.

For psychological or physiological reasons, people who believe it may be beneficial to them can choose to practice relaxation training or autogenic training in an attempt to increase their ability to relax or deal with stress. While some studies have indicated relaxation training is useful for some medical conditions, evidence for autogenic training is limited, coming from relatively few studies.

Occupational skills training

Some occupations are inherently hazardous and require a minimum level of competence before practitioners can perform the work at an acceptable level of safety to themselves or others in the vicinity. Occupational diving, rescue, firefighting, and the operation of certain types of machinery and vehicles may require assessment and certification of a minimum acceptable competence before the person is allowed to practice.

On-job training

Some commentators use a similar term for workplace learning to improve performance: "training and development". There are also additional services available online for those who wish to receive training above and beyond what is offered by their employers. Some examples of these services include career counseling, skill assessment, and supportive services. One can generally categorize such training as on-the-job or off-the-job.

The on-the-job training method takes place in a normal working situation, using the actual tools, equipment, documents or materials that trainees will use when fully trained. On-the-job training has a general reputation as most effective for vocational work. It involves employees training at the place of work while they are doing the actual job. Usually, a professional trainer (or sometimes an experienced and skilled employee) serves as the instructor, using hands-on practical experience which may be supported by formal classroom presentations. Sometimes training can occur by using web-based technology or video conferencing tools. On-the-job training is applicable in all departments within an organization.

Simulation-based training is another method which uses technology to assist in trainee development. This is particularly common in the training of skills requiring a very high degree of practice, and in those which include a significant responsibility for life and property. An advantage is that simulation training allows the trainer to find, study, and remedy skill deficiencies in their trainees in a controlled, virtual environment. This also allows the trainees an opportunity to experience and study events that would otherwise be rare on the job, e.g., in-flight emergencies, system failure, etc., wherein the trainer can run 'scenarios' and study how the trainee reacts, thus assisting in improving his/her skills if the event were to occur in the real world. Examples of skills that commonly include simulator training during stages of development include piloting aircraft, spacecraft, locomotives, and ships, operating air traffic control airspace/sectors, power plant operations training, advanced military/defense system training, and advanced emergency response training like fire training or first-aid training.

The off-the-job training method takes place away from normal work situations, implying that the employee does not count as a directly productive worker while such training takes place. It involves employee training at a site away from the actual work environment, and often utilizes lectures, seminars, case studies, role playing, and simulation, having the advantage of allowing people to get away from work and concentrate more thoroughly on the training itself. This type of training has proven more effective in inculcating concepts and ideas. Many personnel selection companies offer services that help to improve employee competencies and change attitudes toward the job. Internal personnel training topics can vary from effective problem-solving skills to leadership training.

A more recent development in job training is the On-the-Job Training Plan or OJT Plan. According to the United States Department of the Interior, a proper OJT plan should include: An overview of the subjects to be covered, the number of hours the training is expected to take, an estimated completion date, and a method by which the training will be evaluated.

Religion and spirituality

In religious and spiritual use, the word "training" may refer to the purification of the mind, heart, understanding and actions to obtain a variety of spiritual goals such as (for example) closeness to God or freedom from suffering. Note for example the institutionalised spiritual training of Threefold Training in Buddhism, meditation in Hinduism or discipleship in Christianity. These aspects of training can be short-term or can last a lifetime, depending on the context of the training and which religious group it is a part of.

Artificial-intelligence feedback

Researchers have developed training methods for artificial-intelligence devices as well. Evolutionary algorithms, including genetic programming and other methods of machine learning, use a system of feedback based on "fitness functions" to allow computer programs to determine how well an entity performs a task. The methods construct a series of programs, known as a “population” of programs, and then automatically test them for "fitness", observing how well they perform the intended task. The system automatically generates new programs based on members of the population that perform the best. These new members replace programs that perform the worst. The procedure repeats until the achievement of optimum performance. In robotics, such a system can continue to run in real-time after initial training, allowing robots to adapt to new situations and to changes in themselves, for example, due to wear or damage. Researchers have also developed robots that can appear to mimic simple human behavior as a starting point for training.
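The feedback loop described above can be sketched in a few lines. The following toy example uses a made-up fitness function that simply counts matches against a target bit pattern; it illustrates the general select-mutate-replace cycle, not any particular machine-learning system:

```python
import random

def evolve(target, pop_size=50, generations=300, mutation_rate=0.05, seed=0):
    """Toy evolutionary loop: candidates are bit lists, and the 'fitness
    function' counts how many bits match a target pattern."""
    rng = random.Random(seed)
    n = len(target)
    fitness = lambda ind: sum(a == b for a, b in zip(ind, target))
    # Initial "population" of random programs (here, bit lists).
    population = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        if fitness(population[0]) == n:          # optimum reached
            break
        survivors = population[: pop_size // 2]  # best half survive
        children = []
        while len(survivors) + len(children) < pop_size:
            parent = rng.choice(survivors)
            # Mutated copies of good performers replace the worst members.
            child = [b ^ 1 if rng.random() < mutation_rate else b for b in parent]
            children.append(child)
        population = survivors + children
    return max(population, key=fitness)

best = evolve([1, 0, 1, 1, 0, 0, 1, 0])
```

Real systems differ mainly in what a "program" is and how fitness is measured; the survive-and-replace structure is the same.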

It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

## #1834 2023-07-12 14:13:19

Jai Ganesh
Registered: 2005-06-28
Posts: 47,788

### Re: Miscellany

1837) Remote control

Gist

A remote control is a small device that is used to operate electronic equipment (such as a television) from a distance by using electronic signals. It is a process or system that makes it possible to control something from a distance by using electronic signals.

Details

In electronics, a remote control (also known as a remote or clicker) is an electronic device used to operate another device from a distance, usually wirelessly. In consumer electronics, a remote control can be used to operate devices such as a television set, DVD player or other home appliance. A remote control can allow operation of devices that are out of convenient reach for direct operation of controls. They function best when used from a short distance. This is primarily a convenience feature for the user. In some cases, remote controls allow a person to operate a device that they otherwise would not be able to reach, as when a garage door opener is triggered from outside.

Early television remote controls (1956–1977) used ultrasonic tones. Present-day remote controls are commonly consumer infrared devices which send digitally-coded pulses of infrared radiation. They control functions such as power, volume, channels, playback, track change, heat, fan speed, and various other features. Remote controls for these devices are usually small wireless handheld objects with an array of buttons. They are used to adjust various settings such as television channel, track number, and volume. The remote control code, and thus the required remote control device, is usually specific to a product line. However, there are universal remotes, which emulate the remote control made for most major brand devices.

Remote controls in the 2000s include Bluetooth or Wi-Fi connectivity, motion sensor-enabled capabilities and voice control. Remote controls for 2010s onward Smart TVs may feature a standalone keyboard on the rear side to facilitate typing, and be usable as a pointing device.

History

Wired and wireless remote control was developed in the latter half of the 19th century to meet the need to control unmanned vehicles (for the most part military torpedoes). These included a wired version by German engineer Werner von Siemens in 1870, radio-controlled ones by British engineers Ernest Wilson and C. J. Evans (1897), and a prototype that inventor Nikola Tesla demonstrated in New York in 1898. In 1903 Spanish engineer Leonardo Torres Quevedo introduced a radio-based control system called the "Telekino" at the Paris Academy of Sciences, which he hoped to use to control a dirigible airship of his own design. Unlike previous "on/off" mechanisms, Torres developed a system for controlling any mechanical or electrical device with multiple states of operation; specifically, he was able to perform up to 19 different actions with his prototypes. From 1904 to 1906 Torres conducted Telekino tests on a three-wheeled land vehicle with an effective range of 20 to 30 meters, and on an electrically powered launch that demonstrated a standoff range of 2 kilometers. The first remote-controlled model airplane flew in 1932, and the use of remote control technology for military purposes was worked on intensively during the Second World War, one result being the German Wasserfall missile.

By the late 1930s, several radio manufacturers offered remote controls for some of their higher-end models. Most of these were connected to the set being controlled by wires, but the Philco Mystery Control (1939) was a battery-operated low-frequency radio transmitter, thus making it the first wireless remote control for a consumer electronics device. Using pulse-count modulation, this also was the first digital wireless remote control.

Television remote controls

The first remote intended to control a television was developed by Zenith Radio Corporation in 1950. The remote, called Lazy Bones, was connected to the television by a wire. A wireless remote control, the Flashmatic, was developed in 1955 by Eugene Polley. It worked by shining a beam of light onto one of four photoelectric cells, but the cell did not distinguish between light from the remote and light from other sources. The Flashmatic also had to be pointed very precisely at one of the sensors in order to work.

The Zenith Space Commander 600 remote control

In 1956, Robert Adler developed Zenith Space Command, a wireless remote. It was mechanical and used ultrasound to change the channel and volume. When the user pushed a button on the remote control, it struck a bar and clicked, hence they were commonly called "clickers", and the mechanics were similar to a pluck. Each of the four bars emitted a different fundamental frequency with ultrasonic harmonics, and circuits in the television detected these sounds and interpreted them as channel-up, channel-down, sound-on/off, and power-on/off.

Later, the rapid decrease in price of transistors made possible cheaper electronic remotes that contained a piezoelectric crystal that was fed by an oscillating electric current at a frequency near or above the upper threshold of human hearing, though still audible to dogs. The receiver contained a microphone attached to a circuit that was tuned to the same frequency. Some problems with this method were that the receiver could be triggered accidentally by naturally occurring noises or deliberately by metal against glass, for example, and some people could hear the lower ultrasonic harmonics.

An RCA universal remote

In 1970, RCA introduced an all-electronic remote control that uses digital signals and metal–oxide–semiconductor field-effect transistor (MOSFET) memory. This was widely adopted for color television, replacing motor-driven tuning controls.

The impetus for a more complex type of television remote control came in 1973, with the development of the Ceefax teletext service by the BBC. Most commercial remote controls at that time had a limited number of functions, sometimes as few as three: next channel, previous channel, and volume/off. This type of control did not meet the needs of Teletext sets, where pages were identified with three-digit numbers. A remote control that selects Teletext pages would need buttons for each numeral from zero to nine, as well as other control functions, such as switching from text to picture, and the normal television controls of volume, channel, brightness, color intensity, etc. Early Teletext sets used wired remote controls to select pages, but the continuous use of the remote control required for Teletext quickly indicated the need for a wireless device. So BBC engineers began talks with one or two television manufacturers, which led to early prototypes in around 1977–1978 that could control many more functions. ITT was one of the companies and later gave its name to the ITT protocol of infrared communication.

TV, VHS and DVD remote controls

In 1980, the most popular remote control was the Starcom Cable TV Converter (from Jerrold Electronics, a division of General Instrument) which used 40-kHz sound to change channels. Then, a Canadian company, Viewstar, Inc., was formed by engineer Paul Hrivnak and started producing a cable TV converter with an infrared remote control. The product was sold through Philips for approximately $190 CAD. The Viewstar converter was an immediate success, the millionth converter being sold on March 21, 1985, with 1.6 million sold by 1989.

Other remote controls

The Blab-off was a wired remote control created in 1952 that turned a television's sound on or off so that viewers could avoid hearing commercials. In the 1980s Steve Wozniak of Apple started a company named CL 9. The purpose of this company was to create a remote control that could operate multiple electronic devices. The CORE unit (Controller Of Remote Equipment) was introduced in the fall of 1987. The advantage of this remote controller was that it could "learn" remote signals from different devices. It had the ability to perform specific or multiple functions at various times with its built-in clock. It was the first remote control that could be linked to a computer and loaded with updated software code as needed. The CORE unit never made a huge impact on the market: it was much too cumbersome for the average user to program, but it received rave reviews from those who could. These obstacles eventually led to the demise of CL 9, but two of its employees continued the business under the name Celadon. This was one of the first computer-controlled learning remote controls on the market.

In the 1990s, cars were increasingly sold with electronic remote control door locks. These remotes transmit a signal to the car which locks or unlocks the doors or opens the trunk. An aftermarket device sold in some countries is the remote starter, which enables a car owner to remotely start their car. This feature is most associated with countries with winter climates, where users may wish to run the car for several minutes before they intend to use it, so that the car heater and defrost systems can remove ice and snow from the windows.

Proliferation

By the early 2000s, the number of consumer electronic devices in most homes greatly increased, along with the number of remotes to control those devices. According to the Consumer Electronics Association, an average US home has four remotes. To operate a home theater as many as five or six remotes may be required, including one for cable or satellite receiver, VCR or digital video recorder (DVR/PVR), DVD player, TV and audio amplifier. Several of these remotes may need to be used sequentially for some programs or services to work properly. However, as there are no accepted interface guidelines, the process is increasingly cumbersome. One solution used to reduce the number of remotes that have to be used is the universal remote, a remote control that is programmed with the operation codes for most major brands of TVs, DVD players, etc. In the early 2010s, many smartphone manufacturers began incorporating infrared emitters into their devices, thereby enabling their use as universal remotes via an included or downloadable app.

Technique

The main technology used in home remote controls is infrared (IR) light. The signal between a remote control handset and the device it controls consists of pulses of infrared light, which is invisible to the human eye but can be seen through a digital camera, video camera or phone camera. The transmitter in the remote control handset sends out a stream of pulses of infrared light when the user presses a button on the handset. A transmitter is often a light emitting diode (LED) which is built into the pointing end of the remote control handset. The infrared light pulses form a pattern unique to that button. The receiver in the device recognizes the pattern and causes the device to respond accordingly.

The emission spectrum of a typical sound system remote control is in the near infrared.

The infrared diode modulates at a speed corresponding to a particular function. When seen through a digital camera, the diode appears to be emitting pulses of purple light.

Most remote controls for electronic appliances use a near infrared diode to emit a beam of light that reaches the device. A 940 nm wavelength LED is typical. This infrared light is not visible to the human eye but picked up by sensors on the receiving device. Video cameras see the diode as if it produces visible purple light. With a single channel (single-function, one-button) remote control the presence of a carrier signal can be used to trigger a function. For multi-channel (normal multi-function) remote controls more sophisticated procedures are necessary: one consists of modulating the carrier with signals of different frequencies. After the receiver demodulates the received signal, it applies the appropriate frequency filters to separate the respective signals. One can often hear the signals being modulated on the infrared carrier by operating a remote control in very close proximity to an AM radio not tuned to a station. Today, IR remote controls almost always use a pulse width modulated code, encoded and decoded by a digital computer: a command from a remote control consists of a short train of pulses of carrier-present and carrier-not-present of varying widths.
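The pulse-width-modulated coding described above can be illustrated with a small sketch: each bit becomes a fixed-length mark (carrier present) followed by a space (carrier absent) whose duration encodes the bit value. The timings below are illustrative, loosely in the style of common consumer IR remotes, and do not correspond to any specific protocol:

```python
# Illustrative timings (microseconds); not a real protocol's values.
MARK_US = 560       # carrier-present burst before every bit
SPACE_0_US = 560    # short gap encodes a 0
SPACE_1_US = 1690   # long gap encodes a 1

def encode(bits):
    """Turn a bit string into a list of (carrier_on, duration_us) pulses."""
    pulses = []
    for b in bits:
        pulses.append((True, MARK_US))
        pulses.append((False, SPACE_1_US if b == "1" else SPACE_0_US))
    return pulses

def decode(pulses):
    """Recover the bits by classifying each gap as short or long."""
    threshold = (SPACE_0_US + SPACE_1_US) // 2
    bits = []
    for i in range(0, len(pulses), 2):
        carrier_on, _ = pulses[i]
        assert carrier_on, "expected a mark before every space"
        _, gap_us = pulses[i + 1]
        bits.append("1" if gap_us > threshold else "0")
    return "".join(bits)
```

The decoder only needs to distinguish short from long gaps, which is why such codes tolerate imprecise timing at the receiver.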

Consumer electronics infrared protocols

Different manufacturers of infrared remote controls use different protocols to transmit the infrared commands. The RC-5 protocol, which has its origins within Philips, uses for instance a total of 14 bits for each button press. The bit pattern is modulated onto a carrier frequency that, again, can differ between manufacturers and standards; in the case of RC-5, the carrier is 36 kHz. Other consumer infrared protocols include the various versions of SIRCS used by Sony, RC-6 from Philips, the Ruwido R-Step, and the NEC TC101 protocol.
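As a rough illustration, the 14-bit RC-5 frame mentioned above is commonly documented as 2 start bits, 1 toggle bit, 5 address bits and 6 command bits, Manchester-coded onto the 36 kHz carrier. The sketch below assembles such a frame; the example address and command values are only illustrative:

```python
def rc5_frame(address, command, toggle=0):
    """Assemble the 14-bit RC-5 frame: 2 start bits, 1 toggle bit,
    5 address bits, 6 command bits (most significant bit first)."""
    assert 0 <= address < 32 and 0 <= command < 64
    bits = [1, 1, toggle & 1]
    bits += [(address >> i) & 1 for i in range(4, -1, -1)]
    bits += [(command >> i) & 1 for i in range(5, -1, -1)]
    return bits

def manchester(bits):
    """Manchester-code the frame: each bit becomes a half-bit pair,
    a 1 is carrier-off then carrier-on, a 0 is the reverse."""
    out = []
    for b in bits:
        out += ([0, 1] if b else [1, 0])
    return out

frame = rc5_frame(address=0, command=13)   # example values only
```

The toggle bit flips on each new key press, which lets the receiver tell a held-down button apart from two separate presses.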

Infrared, line of sight and operating angle

Since infrared (IR) remote controls use light, they require line of sight to operate the destination device. The signal can, however, be reflected by mirrors, just like any other light source. If operation is required where no line of sight is possible, for instance when controlling equipment in another room or installed in a cabinet, many brands of IR extenders are available for this on the market. Most of these have an IR receiver, picking up the IR signal and relaying it via radio waves to the remote part, which has an IR transmitter mimicking the original IR control. Infrared receivers also tend to have a more or less limited operating angle, which mainly depends on the optical characteristics of the phototransistor. However, it's easy to increase the operating angle using a matte transparent object in front of the receiver.

Radio remote control (RF remote control) is used to control distant objects using a variety of radio signals transmitted by the remote control device. As a complementary method to infrared remote controls, radio remote control is used with electric garage door or gate openers, automatic barrier systems, burglar alarms and industrial automation systems. Standards used for RF remotes are Bluetooth AVRCP, Zigbee (RF4CE) and Z-Wave. Most remote controls use their own coding, transmitting from 8 to 100 or more pulses, fixed or rolling code, using OOK or FSK modulation. Transmitters or receivers can also be universal, meaning they are able to work with many different codings. In this case, the transmitter is normally called a universal remote control duplicator because it is able to copy existing remote controls, while the receiver is called a universal receiver because it works with almost any remote control on the market.

A radio remote control system commonly has two parts: transmit and receive. The transmitter part is divided into two parts, the RF remote control and the transmitter module. This allows the transmitter module to be used as a component in a larger application. The transmitter module is small, but users must have detailed knowledge to use it; combined with the RF remote control it is much simpler to use.

The receiver is generally one of two types: a super-regenerative receiver or a superheterodyne receiver. The super-regenerative receiver works like an intermittent oscillation detection circuit; the superheterodyne works like the one in a radio receiver. The superheterodyne receiver is preferred because of its stability, high sensitivity, relatively good anti-interference ability, small package and lower price.

Usage:

Industry

Remote control is used for controlling substations, pumped-storage power stations and HVDC plants. For these systems, PLC systems working in the longwave range are often used.

Garage and gate

Garage and gate remote controls are very common, especially in some countries such as the US, Australia, and the UK, where garage doors, gates and barriers are widely used. Such a remote is very simple by design, usually having only one button, though some have more buttons to control several gates from one control. Such remotes can be divided into two categories by the encoder type used: fixed code and rolling code. If you find DIP switches in the remote, it is likely to be fixed code, an older technology which was widely used. However, fixed codes have been criticized for their lack of security, so rolling code has been used more and more widely in later installations.
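The security difference between fixed and rolling codes can be sketched with a toy example: the transmitter and receiver share a secret key and a counter, each press sends a code derived from the next counter value, and a replayed capture is rejected because the receiver's counter has already moved past it. This is not how real schemes such as KeeLoq work in detail; the HMAC construction and window size here are illustrative assumptions:

```python
import hmac
import hashlib

KEY = b"shared-secret"  # hypothetical key paired at installation time

def code_for(counter):
    """Derive the transmitted code from the shared key and counter."""
    return hmac.new(KEY, counter.to_bytes(4, "big"), hashlib.sha256).hexdigest()[:8]

class Receiver:
    def __init__(self, window=16):
        self.counter = 0
        self.window = window   # tolerate a few presses made out of range

    def accept(self, code):
        # Only codes AHEAD of the current counter are valid, so a
        # captured-and-replayed code can never open the gate again.
        for c in range(self.counter + 1, self.counter + 1 + self.window):
            if hmac.compare_digest(code, code_for(c)):
                self.counter = c   # resync; this and older codes expire
                return True
        return False
```

A fixed-code remote, by contrast, sends the same bit pattern every time, so anyone who records one transmission can replay it indefinitely.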

Military

Several types of remotely operated torpedoes were demonstrated in the late 19th century. The early 1870s saw remotely controlled torpedoes by John Ericsson (pneumatic), John Louis Lay (electric wire guided), and Victor von Scheliha (electric wire guided).

The Brennan torpedo, invented by Louis Brennan in 1877 was powered by two contra-rotating propellers that were spun by rapidly pulling out wires from drums wound inside the torpedo. Differential speed on the wires connected to the shore station allowed the torpedo to be guided to its target, making it "the world's first practical guided missile". In 1898 Nikola Tesla publicly demonstrated a "wireless" radio-controlled torpedo that he hoped to sell to the U.S. Navy.

Archibald Low was known as the "father of radio guidance systems" for his pioneering work on guided rockets and planes during the First World War. In 1917, he demonstrated a remote-controlled aircraft to the Royal Flying Corps and in the same year built the first wire-guided rocket. As head of the secret RFC experimental works at Feltham, A. M. Low was the first person to use radio control successfully on an aircraft, an "Aerial Target". It was "piloted" from the ground by future world aerial speed record holder Henry Segrave. Low's systems encoded the command transmissions as a countermeasure to prevent enemy intervention. By 1918 the secret D.C.B. Section of the Royal Navy's Signals School, Portsmouth, under the command of Eric Robinson V.C., used a variant of the Aerial Target's radio control system to control different types of naval vessels, including a submarine, from 'mother' aircraft.

The military also developed several early remote control vehicles. In World War I, the Imperial German Navy employed FL-boats (Fernlenkboote) against coastal shipping. These were driven by internal combustion engines and controlled remotely from a shore station through several miles of wire wound on a spool on the boat. An aircraft was used to signal directions to the shore station. The boats carried a high-explosive charge in the bow and traveled at speeds of thirty knots. The Soviet Red Army used remotely controlled teletanks during the 1930s in the Winter War against Finland and the early stages of World War II. A teletank is controlled by radio from a control tank at a distance of 500 to 1,500 meters, the two constituting a telemechanical group. The Red Army fielded at least two teletank battalions at the beginning of the Great Patriotic War. There were also remotely controlled cutters and experimental remotely controlled planes in the Red Army.

Remote controls in military usage employ jamming and countermeasures against jamming. Jammers are used to disable or sabotage the enemy's use of remote controls. The distances for military remote controls also tend to be much longer, up to intercontinental distance satellite-linked remote controls used by the U.S. for their unmanned airplanes (drones) in Afghanistan, Iraq, and Pakistan. Remote controls are used by insurgents in Iraq and Afghanistan to attack coalition and government troops with roadside improvised explosive devices, and terrorists in Iraq are reported in the media to use modified TV remote controls to detonate bombs.

Space

In the winter of 1971, the Soviet Union explored the surface of the moon with the lunar vehicle Lunokhod 1, the first roving remote-controlled robot to land on another celestial body. Remote control technology is also used in space travel, for instance, the Soviet Lunokhod vehicles were remote-controlled from the ground. Many space exploration rovers can be remotely controlled, though vast distance to a vehicle results in a long time delay between transmission and receipt of a command.
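The delay mentioned above is easy to estimate: a radio command travels at the speed of light, so the one-way delay is simply distance divided by c. A minimal sketch (the distances used here are rough averages chosen for illustration, not mission figures):

```python
# Rough sketch: one-way command delay at the speed of light.
# Distances are approximate averages, for illustration only.
C = 299_792_458  # speed of light, m/s

distances_km = {
    "Moon": 384_400,
    "Mars (closest approach)": 54_600_000,
    "Mars (farthest)": 401_000_000,
}

for body, km in distances_km.items():
    delay_s = km * 1000 / C
    print(f"{body}: one-way delay ~ {delay_s:.1f} s")
```

A Lunokhod operator thus saw the Moon roughly 1.3 seconds in the past and needed another 1.3 seconds for a command to arrive, while a Mars round trip can exceed 40 minutes, which is why distant rovers are commanded in batches rather than driven in real time.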

PC control

Existing infrared remote controls can be used to control PC applications. Any application that supports shortcut keys can be controlled via the infrared remote controls of other home devices (TV, VCR, AC). This is widely used with multimedia applications for PC-based home theater systems. For this to work, one needs a device that decodes IR remote control signals and a PC application that communicates with that device. The connection can be made via a serial port, USB port, or motherboard IrDA connector. Such devices are commercially available but can also be homemade using low-cost microcontrollers. LIRC (Linux IR Remote Control) and WinLIRC (for Windows) are software packages developed for controlling a PC with a TV remote; with minor modification they can also be used with homebrew remotes.
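The decode-and-dispatch idea can be sketched very simply: once a receiver has decoded a button press into a key name, a small table translates it into an application shortcut. The button names and shortcuts below are invented for illustration, not a real LIRC configuration:

```python
# Hypothetical sketch: map decoded IR button names to application
# shortcuts. All names and shortcuts here are assumed, for illustration.
IR_TO_SHORTCUT = {
    "KEY_PLAY": "space",          # toggle play/pause in a media player
    "KEY_VOLUMEUP": "ctrl+up",
    "KEY_VOLUMEDOWN": "ctrl+down",
    "KEY_POWER": "ctrl+q",        # quit the application
}

def translate(button):
    """Map a decoded IR button name to an application shortcut (or None)."""
    return IR_TO_SHORTCUT.get(button)

print(translate("KEY_PLAY"))      # -> space
```

In a real setup the translation table lives in the IR software's configuration, and the shortcut is injected into the focused application as a synthetic keystroke.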

Photography

Remote controls are used in photography, in particular to take long-exposure shots. Many action cameras, such as GoPros, as well as standard DSLRs, including Sony's Alpha series, incorporate Wi-Fi-based remote control systems. These can often be accessed and even controlled via cell phones and other mobile devices.

Video games

Video game consoles did not use wireless controllers until recently, mainly because of the difficulty involved in playing the game while keeping the infrared transmitter pointed at the console. Early wireless controllers were cumbersome and, when powered by alkaline batteries, lasted only a few hours before the batteries needed replacement. Some wireless controllers were produced by third parties, in most cases using a radio link instead of infrared. Even these were very inconsistent, and in some cases had transmission delays, making them virtually useless. Examples include the Double Player for the NES, the Master System Remote Control System, and the Wireless Dual Shot for the PlayStation.

The first official wireless game controller made by a first-party manufacturer was the CX-42 for the Atari 2600. The Philips CD-i 400 series also came with a remote control, and the WaveBird was later produced for the GameCube. In the seventh generation of gaming consoles, wireless controllers became standard. Some wireless controllers, such as those of the PlayStation 3 and Wii, use Bluetooth; others, like the Xbox 360's, use proprietary wireless protocols.

Standby power

To be turned on by a wireless remote, the controlled appliance must always be partly on, consuming standby power.

Alternatives

Hand-gesture recognition has been researched as an alternative to remote controls for television sets.

It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

## #1835 2023-07-13 14:06:23

Jai Ganesh
Registered: 2005-06-28
Posts: 47,788

### Re: Miscellany

1838) Coeducation

Gist

Coeducation is the education of both male and female students at the same institution.

Summary

Co-education is the education of males and females in the same schools. The practice has been different in different countries and at different times.

Most primary schools have been co-educational for a long time, since it was believed that there was no reason to educate females separately from males before the age of puberty. Also, the curriculum in primary schools is not controversial, since it emphasizes reading, writing, and arithmetic, with some elementary knowledge of geography and history. In some countries, it includes some religious and cultural education.

However, before the mid-19th century, girls were often educated at home or not educated at all. On that point, there were great differences in different parts of the world. In England and Wales, universal primary education was set up by the Elementary Education Act of 1870, and attendance from the ages of 5 to 10 was compulsory. That was extended in another Act of 1880. Since then, almost all primary education in the United Kingdom has been co-educational, as it is in many other countries.

With secondary education, children go through the process of puberty, and there is no general agreement as to whether both sexes should be educated together. People make arguments both for and against the idea. At one extreme is the United States, in which both sexes are usually educated together at all stages. At the other extreme are certain traditional societies in which girls do not get a secondary education at all. The tendency has been for more countries to move to co-education as the standard at every level of education. An exception is parts of the Islamic world, where girls are often educated separately from boys or, in some places, not educated at all.

The world's oldest co-educational school may be Archbishop Tenison's Church of England High School, Croydon. It was established in 1714 in what was then Surrey but is now South London. It has admitted both boys and girls since its opening. It has always been a day school only and is thought to be the oldest surviving mixed-gender school in the world.

Details

Coeducation is education of males and females in the same schools. A modern phenomenon, it was adopted earlier and more widely in the United States than in Europe, where tradition proved a greater obstacle.

Coeducation was first introduced in western Europe after the Reformation, when certain Protestant groups urged that girls as well as boys should be taught to read the Bible. The practice became especially marked in Scotland, the northern parts of England, and colonial New England, where young children of both sexes attended dame schools. In the latter half of the 18th century, girls were gradually admitted to town schools. The Society of Friends in England as well as in the United States were pioneers in coeducation as they were in universal education, and, in Quaker settlements in the British colonies, boys and girls generally attended school together. The new free public elementary, or common, schools, which after the American Revolution supplanted church institutions, were almost always coeducational, and by 1900 most public high schools were coeducational as well. Many private colleges from their inception admitted women (the first was Oberlin College in Oberlin, Ohio), and many state universities followed their example. By the end of the 19th century, 70 percent of American colleges were coeducational. In the second half of the 20th century, many institutions of higher learning that had been exclusively for persons of one gender became coeducational.

In western Europe the main exponents of primary and secondary coeducation were the Scandinavian countries. In Denmark coeducation extends back to the 18th century, and in Norway coeducation was adopted by law in 1896. In Germany, on the other hand, until the closing decades of the 19th century it was practically impossible for a girl to get a secondary education, and, when girls’ secondary schools were introduced, their status was inferior to that of schools for boys. At present in many large municipalities, such as Bremen, Hamburg, and Berlin, coeducation at the primary level is the rule; at the secondary level there has been little change.

Antagonism to coeducation in England and on the European continent diminished more rapidly in higher education than in secondary. In England, Girton College at Cambridge was established for women in 1869, and the London School of Economics was opened to women in 1874. Germany permitted women to matriculate in 1901, and by 1910 women had been admitted to universities in the Netherlands, Belgium, Denmark, Sweden, Switzerland, Norway, Austria-Hungary, France, and Turkey.

Since World War II, coeducation has been adopted in many developing countries; China and Cuba are outstanding examples. There are many other countries, however, where social conditioning and religious sanctions have limited its success. In most Arab countries, for example, girls tend to drop out of coeducational schools at the age of puberty.

By learning in a coeducational environment, boys and girls cultivate mutual respect, understanding, and support for one another. Students realise and appreciate their own individual value as well as each other’s.

By learning together, girls and boys cooperate and collaborate in ways that enable them to embrace and celebrate their differences as well as their similarities. Students develop skills as they navigate the range of perspectives that coeducation brings through lively debate, critical questioning and exploration.

Girls and boys embrace leadership positions across all areas of learning, providing all students with strong female and male role models.


Offline

## #1836 2023-07-14 13:53:22

Jai Ganesh
Registered: 2005-06-28
Posts: 47,788

### Re: Miscellany

1839) Experiment

Gist

An experiment is a scientific test that is done in order to get proof of something or new knowledge. It is a test, trial, or tentative procedure; an act or operation for the purpose of discovering something unknown or of testing a principle, supposition, etc.

Details

An experiment is a procedure carried out to support or refute a hypothesis, or determine the efficacy or likelihood of something previously untried. Experiments provide insight into cause-and-effect by demonstrating what outcome occurs when a particular factor is manipulated. Experiments vary greatly in goal and scale but always rely on repeatable procedure and logical analysis of the results. There also exist natural experimental studies.

A child may carry out basic experiments to understand how things fall to the ground, while teams of scientists may take years of systematic investigation to advance their understanding of a phenomenon. Experiments and other types of hands-on activities are very important to student learning in the science classroom. Experiments can raise test scores and help a student become more engaged and interested in the material they are learning, especially when used over time. Experiments can vary from personal and informal natural comparisons (e.g. tasting a range of chocolates to find a favorite), to highly controlled (e.g. tests requiring complex apparatus overseen by many scientists that hope to discover information about subatomic particles). Uses of experiments vary considerably between the natural and human sciences.

Experiments typically include controls, which are designed to minimize the effects of variables other than the single independent variable. This increases the reliability of the results, often through a comparison between control measurements and the other measurements. Scientific controls are a part of the scientific method. Ideally, all variables in an experiment are controlled (accounted for by the control measurements) and none are uncontrolled. In such an experiment, if all controls work as expected, it is possible to conclude that the experiment works as intended, and that results are due to the effect of the tested variables.

Overview

In the scientific method, an experiment is an empirical procedure that arbitrates competing models or hypotheses. Researchers also use experimentation to test existing theories or new hypotheses to support or disprove them.

An experiment usually tests a hypothesis, which is an expectation about how a particular process or phenomenon works. However, an experiment may also aim to answer a "what-if" question, without a specific expectation about what the experiment reveals, or to confirm prior results. If an experiment is carefully conducted, the results usually either support or disprove the hypothesis. According to some philosophies of science, an experiment can never "prove" a hypothesis, it can only add support. On the other hand, an experiment that provides a counterexample can disprove a theory or hypothesis, but a theory can always be salvaged by appropriate ad hoc modifications at the expense of simplicity.

An experiment must also control the possible confounding factors—any factors that would mar the accuracy or repeatability of the experiment or the ability to interpret the results. Confounding is commonly eliminated through scientific controls and/or, in randomized experiments, through random assignment.

In engineering and the physical sciences, experiments are a primary component of the scientific method. They are used to test theories and hypotheses about how physical processes work under particular conditions (e.g., whether a particular engineering process can produce a desired chemical compound). Typically, experiments in these fields focus on replication of identical procedures in hopes of producing identical results in each replication. Random assignment is uncommon.

In medicine and the social sciences, the prevalence of experimental research varies widely across disciplines. When used, however, experiments typically follow the form of the clinical trial, where experimental units (usually individual human beings) are randomly assigned to a treatment or control condition where one or more outcomes are assessed. In contrast to norms in the physical sciences, the focus is typically on the average treatment effect (the difference in outcomes between the treatment and control groups) or another test statistic produced by the experiment. A single study typically does not involve replications of the experiment, but separate studies may be aggregated through systematic review and meta-analysis.

There are various differences in experimental practice in each of the branches of science. For example, agricultural research frequently uses randomized experiments (e.g., to test the comparative effectiveness of different fertilizers), while experimental economics often involves experimental tests of theorized human behaviors without relying on random assignment of individuals to treatment and control conditions.


Offline

## #1837 2023-07-15 14:22:53

Jai Ganesh
Registered: 2005-06-28
Posts: 47,788

### Re: Miscellany

1840) Skipping rope

Details

A skipping rope or jump rope is a tool used in the sport of skipping/jump rope where one or more participants jump over a rope swung so that it passes under their feet and over their heads. There are multiple subsets of skipping/jump rope, including single freestyle, single speed, pairs, three-person speed (Double Dutch), and three-person freestyle (Double Dutch freestyle).

Rope skipping is commonly performed as an exercise or recreational activity, and there are also several major organizations that support jump rope as a competitive sport. Often separated by gender and age, events include hundreds of competitive teams all around the world. In the US, schools rarely have jump rope teams, and few states have sanctioned official events at the elementary school level. In freestyle events, jumpers use a variety of basic and advanced techniques in a routine of one minute, which is judged by a head judge, content judges, and performance judges. In speed events, a jumper alternates their feet with the rope going around the jumper every time one of their feet hits the ground for 30 seconds, one minute, or three minutes. The jumper is judged on the number of times the right foot touches the ground in those times.

History

Explorers reported seeing aborigines jumping with vines in the 16th century. European boys started skipping in the early 17th century. The activity was considered indecent for girls because of concerns about them showing their ankles. Girls began skipping in the 18th century, adding skipping chants, owning the rope, controlling the game, and deciding who might participate.

In the United States, domination of the activity by girls emerged as their families moved into cities in the late 19th century. There, they found sidewalks and other smooth surfaces conducive to skipping, along with a high density of peers with whom to engage in the sport.

Techniques

There are many techniques that can be used when skipping. These can be used individually or combined in a series to create a routine.

Solo participants

For solo jumping, the participant jumps and swings the rope under their feet. The timing of the swing is matched to the jump. This allows them to jump the rope and establish a rhythm more successfully. This can be contrasted with swinging the rope at the feet and jumping, which would mean they were matching the jump to the swing. This makes it harder to jump the rope and establish a rhythm.

Basic jump or easy jump

Jump with both feet slightly apart over the rope. Beginners usually master this technique first before moving onto more advanced techniques.

Alternate foot jump (speed step)

Use alternate feet to jump off the ground. This technique can be used to effectively double the number of jumps per minute as compared to the above technique. This step can be used for speed events.

Criss-cross

Also known as crossover or cross arms. Perform the basic jump whilst crossing arms in front of the body.

Side swing

The rope is passed by the side of the participant's body without jumping it.

EB (front-back cross or sailor)

Perform the criss-cross whilst crossing one arm behind the back.

Double under

A high basic jump, turning the rope twice under the feet. Turning the rope three times is called a triple under. In competitions, participants may attempt quadruple (quads) and quintuple under (quins) using the same method.

Boxer jump

One foot is positioned slightly forward and one foot slightly back. The person positions their bodyweight primarily over their front foot, with the back foot acting as a stabiliser. From this stance, the person jumps up several times (often 2-3 times) before switching their stance, so the front foot becomes the back foot, and the back foot becomes the front foot. And so forth. An advantage of this technique is that it allows the back leg a brief rest. So while both feet are still used in the jump, a person may find they can skip for longer than if they were using the basic two-footed technique.

Toad

Perform the criss-cross with one arm crossing under the opposite leg from the inside.

Leg over / Crougar

A basic jump with one arm hooked under the adjacent leg. Doing Crougar with the non-dominant leg in the air is easier.

Awesome Annie

Also known as Awesome Anna or swish. Alternates between a leg over and a toad without a jump in between.

Inverse toad

Perform the toad whilst one arm crosses the adjacent leg from the outside.

Elephant

A cross between the inverse toad and the toad, with both arms crossing under one leg.

Frog or Donkey kick

The participant does a handstand, returns to their feet, and turns the rope under them. A more advanced version turns the rope during the return to the ground.

TJ

A triple under where the first 'jump' is a side swing, the middle jump is a toad, and the final jump is in the open.

In competitions, participants are required to demonstrate competence using specific techniques. The selection depends on the judging system and the country in which the tournament is held.

Health effects

Skipping may be used as a cardiovascular workout, similar to jogging or bicycle riding, and has a high MET (metabolic equivalent of task) or intensity level. This aerobic exercise can achieve a "burn rate" of up to 700 to over 1,200 calories per hour of vigorous activity, with about 0.1 to nearly 1.1 calories consumed per jump, depending mainly upon the speed and intensity of jumps and leg folds. Ten minutes of skipping are roughly the equivalent of running an eight-minute mile. Skipping for 15–20 minutes is enough to burn off the calories from a candy bar and is equivalent to 45–60 minutes of running, depending upon the intensity of jumps and leg swings. Many professional trainers, fitness experts, and professional fighters strongly recommend skipping for burning fat over alternative exercises such as running and jogging.
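As a quick illustration of the per-jump figures quoted above (the jump rate here is assumed for the example, not a measured value):

```python
# Back-of-the-envelope calorie estimate using the per-jump range quoted
# above (~0.1 to ~1.1 kcal per jump); the jump rate is assumed.
def session_calories(jumps_per_minute, minutes, kcal_per_jump):
    """Total kcal burned in a skipping session."""
    return jumps_per_minute * minutes * kcal_per_jump

# A light session: 120 jumps/min for 10 minutes at 0.1 kcal/jump
print(session_calories(120, 10, 0.1))   # -> 120.0
```

At that light rate the hourly burn works out to about 720 kcal, consistent with the lower end of the 700–1,200 kcal/hour range above.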

Weighted skipping ropes are available for such athletes to increase the difficulty and effectiveness of such exercise. Individuals or groups can participate in the exercise, and learning proper techniques is relatively simple compared to many other athletic activities. The exercise is also appropriate for a wide range of ages and fitness levels.

Skipping grew in popularity in 2020 when gyms closed or people stayed home due to coronavirus restrictions across the world. These workouts can be done at home and do not require specialized equipment.

Competition:

International

The world governing body for the sport of jump rope is the International Jump Rope Union (IJRU). It is a merger of two previous rival world organizations: the International Rope Skipping Federation (FISAC-IRSF) and the World Jump Rope Federation (WJRF). There have been 11 World Championships, held every other year by FISAC-IRSF, with the final competition held in Shanghai, China. There have been 7 World Jump Rope Championships, held every year by WJRF, with the final competition taking place in Oslo, Norway. Previous locations of this championship included Washington, D.C., Orlando, France, and Portugal. IJRU plans to hold its inaugural World Jump Rope Championship in Colorado Springs, Colorado, in 2023.

In 2018, FISAC-IRSF and WJRF announced the merger organization IJRU. IJRU has become the 10th International Federation to gain GAISF Observer status. The decision was taken by the Global Association of International Sports Federations (GAISF) Council, which met during SportAccord in Bangkok. Observer status is the first step on a clear pathway for new International Federations towards the top of the Olympic Family pyramid. Those who wish to proceed will be assisted by GAISF, leading them into full GAISF membership through the Alliance of Independent Recognised Members of Sport (AIMS), and the Association of IOC Recognised International Sports Federations (ARISF).

In 2019 the International Rope Skipping Organisation (IRSO) re-emerged and reactivated its activities as a governing body of the sport of rope skipping/jump rope. The organization is headed by Richard Cendali, who is referred to as the grandfather of the sport of jump rope. IRSO had disagreements with both FISAC-IRSF and WJRF for ignoring several long-standing organizations in their merger; various organizations that had long contributed to the development of the sport were left out of the IJRU merger and came under IRSO. The USA Jump Rope Federation and the newly formed Asian Rope Skipping Association also joined IRSO and decided to host their World Championship in conjunction with the AAU.

World Inter School

The first World Inter-School Rope Skipping Championship was held in Dubai in November 2015. The championships are organized by the World Inter School Rope Skipping Organisation (WIRSO). The second, third, and fourth World Inter-School championships were held in Eger, Hungary (2017), Hong Kong (2018), and Belgium (2019), respectively.

United States

Historically, there were two competing jump rope organizations in the United States: the International Rope Skipping Organization (IRSO) and the World Rope Skipping Federation (WRSF). IRSO focused on stunt-oriented and gymnastic/athletic type moves, while WRSF appreciated the aesthetics and form of the exercise. In 1995, these two organizations merged to form the United States Amateur Jump Rope Federation, which is today known as USA Jump Rope (USAJR). USAJR has hosted annual national tournaments, as well as camps, workshops, and clinics on instruction since 1995. Jump rope is also part of the Amateur Athletic Union and participates in their annual AAU Junior Olympic Games. More recently, the American Jump Rope Federation was founded in 2016 by previous members of WJRF. It is recognized as the official governing body for the sport of jump rope in the United States by IJRU.

Types of jump ropes

Speed jump ropes are made from a thin vinyl cord or wire and are primarily used for speed jumping or double unders. They are best for indoor use, because they wear down fast on concrete or other harsh surfaces. Licorice jump ropes are also made from vinyl cord or PVC and are primarily used for freestyle jumping. Beaded ropes make rhythmic jumping very easy, because the jumper can hear the beads hitting the ground and strive for a rhythmic pattern. Leather jump ropes are thicker and less likely to tangle or wear down with outdoor use.

Jump rope, also called skip rope, is a children’s game played by individuals or teams with a piece of rope, which may have handles attached at each end. Jump rope, which dates back to the 19th century, is traditionally a girls’ playground or sidewalk activity in which two players turn a rope (holding it by its ends and swinging it in a circle) and the other players take turns jumping it while chanting a rhyme or counting. When it is played as a game, each player is required to move in while the rope is turning, complete the jump, and move out without contacting or stopping the rope; the jumps required usually become more complicated as the game proceeds.

There are many types of jumps, including single, double, backward, crossed-feet, hot pepper (twice as fast as usual), quarter turns, half turns, full turns, and two-at-a-time (jumpers); in double Dutch, two ropes (or one long rope such as a clothesline that has been doubled) are turned simultaneously in opposite directions; in criss-cross, performed by one person holding both ends of the rope, the arms are crossed back and forth on alternate turns of the rope.

There are countless chants, many originally from Germany and England, associated with jump rope, which often dictate the actions or stunts to be performed, such as:

One, two, touch my shoe,
Three, four, touch the floor,
Five, six, pick up sticks,
Seven, eight, double rate,
Nine, ten, out again.

In another version:

Apples, peaches, pears, and plums,
Tell me when your birthday comes...
the jumper chants the names of the months, then the days up to the date of her birthday.

More recent chants reflect inner-city culture. For instance:

Hey D.J., let’s sing that song, keep a footin’ all night long,
Hey D.J., let’s sing that song, keep a hoppin’ all night long,
Hey D.J., let’s sing that song, keep a turning, all night long,
Hey D.J., let’s sing that song, keep a clapping all night long.

In Chinese and Vietnamese jump rope, a stationary rope or string, commonly elastic, is held in a rectangular configuration around two players’ legs; the jumper performs designated hops in and out of the rectangle, with the rope being raised on each successive jump.

Single rope jumping or rope skipping is a popular form of cardiovascular exercise. This exercise originated with prizefighters to help develop their lungs and legs.


Offline

## #1838 2023-07-16 14:36:51

Jai Ganesh
Registered: 2005-06-28
Posts: 47,788

### Re: Miscellany

1841) Pendulum

Gist

A pendulum is a body suspended from a fixed point so as to swing freely to and fro under the action of gravity, commonly used to regulate movements (as of clockwork). The word can also refer to something (such as a state of affairs) that alternates between opposites.

Summary

A pendulum is a body suspended from a fixed point so that it can swing back and forth under the influence of gravity. Pendulums are used to regulate the movement of clocks because the interval of time for each complete oscillation, called the period, is constant.

The Italian scientist Galileo first noted (c. 1583) the constancy of a pendulum’s period by comparing the movement of a swinging lamp in the cathedral of Pisa with his pulse rate. The Dutch mathematician and scientist Christiaan Huygens invented a clock controlled by the motion of a pendulum in 1656. The priority of invention of the pendulum clock has been ascribed to Galileo by some authorities and to Huygens by others, but Huygens solved the essential problem of making the period of a pendulum truly constant by devising a pivot that caused the suspended body, or bob, to swing along the arc of a cycloid rather than that of a circle.

A simple pendulum consists of a bob suspended at the end of a thread that is so light as to be considered massless. The period of such a device can be made longer by increasing its length, as measured from the point of suspension to the middle of the bob. A change in the mass of the bob, however, does not affect the period, provided the length is not thereby affected. The period, on the other hand, is influenced by the position of the pendulum in relation to Earth. Because the strength of Earth’s gravitational field is not uniform everywhere, a given pendulum swings faster, and thus has a shorter period, at low altitudes and at Earth’s poles than it does at high altitudes and at the Equator.
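The dependences described above follow from the standard small-angle formula for a simple pendulum, T = 2π√(L/g) (the formula itself is standard physics, not stated in the passage). A minimal sketch:

```python
import math

def period(length_m, g=9.81):
    """Small-angle period of a simple pendulum: T = 2*pi*sqrt(L/g)."""
    return 2 * math.pi * math.sqrt(length_m / g)

# A "seconds pendulum" (2 s full period) is about 0.994 m long:
print(round(period(0.994), 3))            # -> 2.0
# Mass never enters the formula; a larger g (e.g. at the poles) shortens T:
print(round(period(0.994, g=9.832), 3))   # -> 1.998
```

Note that the length appears under a square root, so quadrupling the length only doubles the period, and the bob's mass does not appear at all.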

There are various other kinds of pendulums. A compound pendulum has an extended mass, like a swinging bar, and is free to oscillate about a horizontal axis. A special reversible compound pendulum called Kater’s pendulum is designed to measure the value of g, the acceleration of gravity.

Another type is the Schuler pendulum. When the Schuler pendulum is vertically suspended, it remains aligned to the local vertical even if the point from which it is suspended is accelerated parallel to Earth’s surface. This principle of the Schuler pendulum is applied in some inertial guidance systems to maintain a correct internal vertical reference, even during rapid acceleration.

A spherical pendulum is one that is suspended from a pivot mounting, which enables it to swing in any of an infinite number of vertical planes through the point of suspension. In effect, the plane of the pendulum’s oscillation rotates freely. A simple version of the spherical pendulum, the Foucault pendulum, is used to show that Earth rotates on its axis.

Details

A pendulum is a weight suspended from a pivot so that it can swing freely. When a pendulum is displaced sideways from its resting, equilibrium position, it is subject to a restoring force due to gravity that will accelerate it back toward the equilibrium position. When released, the restoring force acting on the pendulum's mass causes it to oscillate about the equilibrium position, swinging back and forth. The time for one complete cycle, a left swing and a right swing, is called the period. The period depends on the length of the pendulum and also to a slight degree on the amplitude, the width of the pendulum's swing.

The regular motion of pendulums was used for timekeeping and was the world's most accurate timekeeping technology until the 1930s. The pendulum clock invented by Christiaan Huygens in 1656 became the world's standard timekeeper, used in homes and offices for 270 years; pendulum clocks eventually achieved accuracy of about one second per year before the quartz clock superseded them as a time standard in the 1930s. Pendulums are also used in scientific instruments such as accelerometers and seismometers. Historically they were used as gravimeters to measure the acceleration of gravity in geophysical surveys, and even as a standard of length. The word pendulum is Neo-Latin, from the Latin pendulus, meaning "hanging".

Simple gravity pendulum

The simple gravity pendulum is an idealized mathematical model of a pendulum. This is a weight (or bob) on the end of a massless cord suspended from a pivot, without friction. When given an initial push, it will swing back and forth at a constant amplitude. Real pendulums are subject to friction and air drag, so the amplitude of their swings declines.
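
For small swings, the ideal pendulum described above has the closed-form period T = 2π√(L/g). A minimal sketch (Python; the 0.994 m figure is the standard length of a seconds pendulum, and g = 9.81 m/s² is assumed):

```python
import math

def pendulum_period(length_m: float, g: float = 9.81) -> float:
    """Small-angle period of an ideal simple pendulum: T = 2*pi*sqrt(L/g)."""
    return 2 * math.pi * math.sqrt(length_m / g)

# A "seconds pendulum" (a one-second swing each way, so a 2 s full period)
# is about 0.994 m long at standard gravity.
print(round(pendulum_period(0.994), 3))  # ~2.0
```

Note that the mass of the bob does not appear in the formula, which is why changing the bob's weight leaves the period unchanged.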

Use for time measurement

For 300 years, from its discovery around 1582 until development of the quartz clock in the 1930s, the pendulum was the world's standard for accurate timekeeping. In addition to clock pendulums, free-swinging seconds pendulums were widely used as precision timers in scientific experiments in the 17th and 18th centuries. Pendulums require great mechanical stability: a length change of only 0.02%, 0.2 mm in a grandfather clock pendulum, will cause an error of a minute per week.
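
The minute-per-week figure follows from the square-root dependence of period on length: a fractional length change ΔL/L shifts the clock rate by half that fraction. A quick check (Python; the numbers are those quoted above):

```python
SECONDS_PER_WEEK = 7 * 86400

def rate_error_per_week(frac_length_change: float) -> float:
    """Timing error per week from a fractional length change.

    Since T = 2*pi*sqrt(L/g), a small change obeys dT/T = (1/2) * dL/L.
    """
    return 0.5 * frac_length_change * SECONDS_PER_WEEK

# A 0.02% (0.0002) length increase makes the clock run slow
# by about a minute per week.
print(round(rate_error_per_week(0.0002)))  # ~60
```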

Clock pendulums

Pendulums in clocks are usually made of a weight or bob suspended by a rod of wood or metal. To reduce air resistance (which accounts for most of the energy loss in precision clocks) the bob is traditionally a smooth disk with a lens-shaped cross section, although in antique clocks it often had carvings or decorations specific to the type of clock. In quality clocks the bob is made as heavy as the suspension can support and the movement can drive, since this improves the regulation of the clock. A common weight for seconds-pendulum bobs is 15 pounds (6.8 kg). Instead of hanging from a pivot, clock pendulums are usually supported by a short straight spring of flexible metal ribbon. This avoids the friction and 'play' caused by a pivot, and the slight bending force of the spring merely adds to the pendulum's restoring force. The highest-precision clocks have pivots of 'knife' blades resting on agate plates. The impulses to keep the pendulum swinging are provided by an arm hanging behind the pendulum called the crutch, which ends in a fork whose prongs embrace the pendulum rod. The crutch is pushed back and forth by the clock's escapement.

Each time the pendulum swings through its centre position, it releases one tooth of the escape wheel. The force of the clock's mainspring, or of a driving weight hanging from a pulley, transmitted through the clock's gear train, causes the wheel to turn, and a tooth presses against one of the pallets, giving the pendulum a short push. The clock's wheels, geared to the escape wheel, move forward a fixed amount with each pendulum swing, advancing the clock's hands at a steady rate.

The pendulum always has a means of adjusting the period, usually by an adjustment nut under the bob which moves it up or down on the rod. Moving the bob up decreases the pendulum's length, causing the pendulum to swing faster and the clock to gain time. Some precision clocks have a small auxiliary adjustment weight on a threaded shaft on the bob, to allow finer adjustment. Some tower clocks and precision clocks use a tray attached near the midpoint of the pendulum rod, to which small weights can be added or removed. This effectively shifts the centre of oscillation and allows the rate to be adjusted without stopping the clock.

The pendulum must be suspended from a rigid support. During operation, any elasticity will allow tiny imperceptible swaying motions of the support, which disturbs the clock's period, resulting in error. Pendulum clocks should be attached firmly to a sturdy wall.

The most common pendulum length in quality clocks, which is always used in grandfather clocks, is the seconds pendulum, about 1 metre (39 inches) long. In mantel clocks, half-second pendulums, 25 cm (9.8 in) long or shorter, are used. Only a few large tower clocks use longer pendulums: the 1.5-second pendulum, 2.25 m (7.4 ft) long, or occasionally the two-second pendulum, 4 m (13 ft), which is used in Big Ben.

Temperature compensation

The largest source of error in early pendulums was slight changes in length due to thermal expansion and contraction of the pendulum rod with changes in ambient temperature. This was discovered when people noticed that pendulum clocks ran slower in summer, by as much as a minute per week (one of the first to note it was Godefroy Wendelin, as reported by Huygens in 1658). Thermal expansion of pendulum rods was first studied by Jean Picard in 1669. A steel pendulum rod expands by about 11.3 parts per million (ppm) with each degree Celsius increase, causing the clock to lose about 0.49 seconds per day for every degree Celsius rise in temperature, or about 16 seconds per day for a 33 °C (59 °F) change. Wood rods expand less, losing only about 6 seconds per day for a 33 °C (59 °F) change, which is why quality clocks often had wooden pendulum rods. The wood had to be varnished to prevent water vapor from getting in, because changes in humidity also affected the length.
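
The daily rate error can be derived from the expansion coefficient alone: a temperature rise Δθ lengthens the rod by the fraction αΔθ, and the clock slows by half that fraction of a day. A sketch (Python; 11.3 ppm/°C is the steel coefficient quoted above):

```python
def daily_rate_error(cte_per_degC: float, delta_temp_C: float) -> float:
    """Seconds lost per day when a pendulum rod warms by delta_temp_C.

    dT/T = (1/2) * alpha * dtheta, and one day is 86400 s.
    """
    return 0.5 * cte_per_degC * delta_temp_C * 86400

print(round(daily_rate_error(11.3e-6, 1.0), 2))   # ~0.49 s/day per deg C (steel)
print(round(daily_rate_error(11.3e-6, 33.0), 1))  # ~16 s/day for a 33 deg C swing
```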

Mercury pendulum

The first device to compensate for this error was the mercury pendulum, invented by George Graham in 1721. The liquid metal mercury expands in volume with temperature. In a mercury pendulum, the pendulum's weight (bob) is a container of mercury. With a temperature rise, the pendulum rod gets longer, but the mercury also expands and its surface level rises slightly in the container, moving its centre of mass closer to the pendulum pivot. By using the correct height of mercury in the container, these two effects cancel, leaving the pendulum's centre of mass, and hence its period, unchanged with temperature. The main disadvantage was that when the temperature changed, the rod would come to the new temperature quickly, but the mass of mercury might take a day or two, causing the rate to deviate during that time. To speed up thermal equalization, several thin metal containers were often used. Mercury pendulums remained the standard in precision regulator clocks into the 20th century.

Gridiron pendulum

The most widely used compensated pendulum was the gridiron pendulum, invented in 1726 by John Harrison. This consists of alternating rods of two different metals, one with a lower coefficient of thermal expansion (CTE), steel, and one with a higher CTE, zinc or brass. The rods are connected by a frame so that an increase in length of the zinc rods pushes the bob up, shortening the pendulum. With a temperature increase, the low-expansion steel rods make the pendulum longer, while the high-expansion zinc rods make it shorter. By making the rods the correct lengths, the greater expansion of the zinc cancels out the expansion of the steel rods, which have a greater combined length, and the pendulum stays the same length with temperature.

Zinc-steel gridiron pendulums are made with 5 rods, but the thermal expansion of brass is closer to steel, so brass-steel gridirons usually require 9 rods. Gridiron pendulums adjust to temperature changes faster than mercury pendulums, but scientists found that friction of the rods sliding in their holes in the frame caused gridiron pendulums to adjust in a series of tiny jumps. In high precision clocks this caused the clock's rate to change suddenly with each jump. Later it was found that zinc is subject to creep. For these reasons mercury pendulums were used in the highest precision clocks, but gridirons were used in quality regulator clocks.
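
The design condition for a gridiron is simply that the upward expansion of the high-CTE rods cancels the downward expansion of the low-CTE ones, i.e. α_steel·L_steel = α_high·L_high. A sketch (Python; the coefficients are typical handbook values, not taken from any particular clock):

```python
# Typical linear expansion coefficients, per degree Celsius
ALPHA_STEEL = 11.3e-6
ALPHA_ZINC = 26.2e-6
ALPHA_BRASS = 19.0e-6

def compensating_length(low_cte_length_m: float,
                        alpha_low: float, alpha_high: float) -> float:
    """Length of high-expansion rod needed to cancel the low-expansion rods."""
    return low_cte_length_m * alpha_low / alpha_high

# Per metre of steel acting downward:
print(round(compensating_length(1.0, ALPHA_STEEL, ALPHA_ZINC), 3))   # ~0.431 m of zinc
print(round(compensating_length(1.0, ALPHA_STEEL, ALPHA_BRASS), 3))  # ~0.595 m of brass
```

Because brass's coefficient is closer to steel's, a much greater length of brass is needed, which is why brass-steel gridirons require more rods than zinc-steel ones.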

Gridiron pendulums became so associated with good quality that, to this day, many ordinary clock pendulums have decorative 'fake' gridirons that don't actually have any temperature compensation function.

Invar and fused quartz

Around 1900, low-thermal-expansion materials were developed that could be used as pendulum rods, making elaborate temperature compensation unnecessary. These were only used in a few of the highest-precision clocks before the pendulum became obsolete as a time standard. In 1896 Charles Édouard Guillaume invented the nickel-steel alloy Invar. This has a CTE of about 0.9 ppm/°C (0.5 µin/(in·°F)), resulting in pendulum temperature errors over a 71 °F (39 °C) range of only 1.3 seconds per day, and the residual error could be compensated to zero with a few centimetres of aluminium under the pendulum bob. Invar pendulums were first used in 1898 in the Riefler regulator clock, which achieved accuracy of 15 milliseconds per day. Suspension springs of Elinvar were used to eliminate temperature variation of the spring's restoring force on the pendulum. Later, fused quartz was used, which has an even lower CTE. These materials remain the choice for modern high-accuracy pendulums.

Atmospheric pressure

The effect of the surrounding air on a moving pendulum is complex and requires fluid mechanics to calculate precisely, but for most purposes its influence on the period can be accounted for by three effects:

i) By Archimedes' principle the effective weight of the bob is reduced by the buoyancy of the air it displaces, while the mass (inertia) remains the same, reducing the pendulum's acceleration during its swing and increasing the period. This depends on the air pressure and the density of the pendulum, but not its shape.
ii) The pendulum carries an amount of air with it as it swings, and the mass of this air increases the inertia of the pendulum, again reducing the acceleration and increasing the period. This depends on both its density and shape.
iii) Viscous air resistance slows the pendulum's velocity. This has a negligible effect on the period, but dissipates energy, reducing the amplitude. This reduces the pendulum's Q factor, requiring a stronger drive force from the clock's mechanism to keep it moving, which causes increased disturbance to the period.

Increases in barometric pressure increase a pendulum's period slightly due to the first two effects, by about 0.11 seconds per day per kilopascal (0.37 seconds per day per inch of mercury or 0.015 seconds per day per torr). Researchers using pendulums to measure the acceleration of gravity had to correct the period for the air pressure at the altitude of measurement, computing the equivalent period of a pendulum swinging in vacuum. A pendulum clock was first operated in a constant-pressure tank by Friedrich Tiede in 1865 at the Berlin Observatory, and by 1900 the highest-precision clocks were mounted in tanks kept at a constant pressure to eliminate changes in atmospheric pressure. Alternatively, in some clocks a small aneroid barometer mechanism attached to the pendulum compensated for this effect.
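
The three pressure rates quoted above are one figure expressed in different units; a quick conversion check (Python):

```python
KPA_PER_INHG = 3.386   # kilopascals per inch of mercury
KPA_PER_TORR = 0.1333  # kilopascals per torr

rate_per_kpa = 0.11  # seconds per day, per kilopascal of pressure rise

print(round(rate_per_kpa * KPA_PER_INHG, 2))   # ~0.37 s/day per inch of mercury
print(round(rate_per_kpa * KPA_PER_TORR, 3))   # ~0.015 s/day per torr
```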

Gravity

Pendulums are affected by changes in gravitational acceleration, which varies by as much as 0.5% at different locations on Earth, so precision pendulum clocks have to be recalibrated after a move. Even moving a pendulum clock to the top of a tall building can cause it to lose measurable time from the reduction in gravity.
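
The size of the altitude effect can be estimated from the inverse-square law: at height h above the surface, g falls by roughly the fraction 2h/R, and the clock rate by half of that. A sketch (Python; the 100 m building height is an arbitrary example):

```python
EARTH_RADIUS_M = 6.371e6

def daily_loss_at_height(height_m: float) -> float:
    """Seconds per day a pendulum clock loses when raised by height_m.

    dg/g ~ -2h/R for small h, and dT/T = -(1/2) * dg/g = h/R.
    """
    return (height_m / EARTH_RADIUS_M) * 86400

print(round(daily_loss_at_height(100.0), 2))  # ~1.36 s/day atop a 100 m building
```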

It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

## #1839 2023-07-17 15:11:50

Jai Ganesh
Registered: 2005-06-28
Posts: 47,788

### Re: Miscellany

1842) Motorbike

Summary

A motorcycle is any two-wheeled or, less commonly, three-wheeled motor vehicle, usually propelled by an internal-combustion engine.

History

Just as the automobile was the answer to the 19th-century dream of self-propelling the horse-drawn carriage, the invention of the motorcycle created the self-propelled bicycle. The first commercial design was a three-wheeler built by Edward Butler in Great Britain in 1884. It employed a horizontal single-cylinder gasoline engine mounted between two steerable front wheels and connected by a drive chain to the rear wheel.

By 1900 many manufacturers were converting bicycles—or pedal cycles, as they were sometimes called—by adding small, centrally mounted spark ignition engines. The need for reliable constructions led to road motorcycle trial tests and competition between manufacturers. The original Tourist Trophy motorcycle races were held on the Isle of Man in 1907 as reliability or endurance races. Such events have been the proving ground for many new ideas from early two-stroke-cycle designs to supercharged, multivalve engines mounted on aerodynamic, carbon-fibre reinforced bodywork.

Components

Motorcycles are produced with both two-stroke- and four-stroke-cycle engines and with up to four cylinders. Most are air-cooled, though a few are water-cooled. Engines are generally limited to displacements of about 1,800 cc. The smallest designs, termed mopeds (from “motor pedal”), have very small engines (50 cc) with fuel economies of as much as 2.4 litres per 100 km (100 miles per gallon). Such units are not permitted on limited-access public roads because of their low speed capability. In order of increasing power capacity and engine displacements, the other five classifications are child bikes, trail bikes, road bikes, touring bikes, and racing bikes. A subcategory of racing bikes is known as superbikes. These are motorcycles that displace more than 900 cc and in which the seat is tilted forward so that the rider is hunched over the frame, creating a more aerodynamic profile.

The motorcycle frame is often of steel, usually a combination of tubes and sheets. The wheels are generally aluminum or steel rims with spokes, although some cast wheels are used. Graphite, composite, and magnesium parts are increasingly in use because of their high strength-to-weight characteristics. Tires are similar to those used on automobiles but are smaller and rounded to permit leaning to lower the centre of gravity in a turn without losing traction. The gyroscopic effect of motorcycle wheels rotating at high speed significantly improves stability and cornering ability. Inertia and steering geometry are also significant factors. Front-wheel suspension is provided by coil springs on a telescopic fork; rear-wheel springs are often mounted on shock absorbers similar to those used in automobiles.

Transmissions on motorcycles typically have four to six speeds, although small bikes may have as few as two. Power is normally transmitted to the rear-wheel sprockets by a chain, though occasionally belts or shafts are used.

The clutch and throttle, which control engine speed, are operated by twist-type controls on the handgrips. The front-wheel brake is controlled by a lever near the handgrip; the rear-wheel brake is engaged by a foot pedal. Except on very small machines, the front brake is usually of the hydraulic disc type. The rear brake may be disc or drum. The kick starter has been mostly replaced by an electric push-button starter.

Emissions standards

Tailpipe emissions standards for motorcycles continue to be strengthened. In 1980 the U.S. Environmental Protection Agency (EPA) first regulated new motorcycle hydrocarbon emissions, requiring motorcycles to emit less than 5.0 grams per km (0.3 ounce per mile) of highway driving. California and the European Union (EU) imposed stricter limits on hydrocarbons and added restrictions on nitric oxides and carbon monoxide. In 2006 emissions from new motorcycles sold in the United States were limited to a combined 1.4 grams of hydrocarbons and nitric oxides and 12.0 grams of carbon monoxide per km. The EPA decreased the limit on combined emissions of hydrocarbons and nitric oxides to 0.8 gram in 2010. The EU reduced emissions from new motorcycles in 2004 to 1.0 gram of hydrocarbons, 0.3 gram of nitric oxides, and 5.5 grams of carbon monoxide per km; in 2007 these levels were further reduced to 0.3 gram of hydrocarbons, 0.15 gram of nitric oxides, and 2.0 grams of carbon monoxide per km. The EU reduced emissions further in 2016, to 0.17 gram of hydrocarbons, 0.09 gram of nitric oxides, and 1.14 grams of carbon monoxide, with a further reduction to 0.1 gram of hydrocarbons, 0.06 gram of nitric oxides, and 1 gram of carbon monoxide planned for 2020. Although U.S. limits for carbon monoxide were not lowered by law, the required reductions in other pollutants effectively lowered carbon monoxide emissions as well. To meet these clean-air regulations, manufacturers installed more sophisticated catalytic converters and fuel-injection systems.

Details

A motorcycle (motorbike, bike, or trike (if three-wheeled)) is a two or three-wheeled motor vehicle steered by a handlebar from a saddle-style seat.

Motorcycle design varies greatly to suit a range of different purposes: long-distance travel, commuting, cruising, sport (including racing), and off-road riding. Motorcycling is riding a motorcycle and being involved in other related social activities such as joining a motorcycle club and attending motorcycle rallies.

The 1885 Daimler Reitwagen made by Gottlieb Daimler and Wilhelm Maybach in Germany was the first internal combustion, petroleum-fueled motorcycle. In 1894, Hildebrand & Wolfmüller became the first series production motorcycle.

Globally, motorcycles are roughly as popular as cars as a means of transport. In 2021, approximately 58.6 million new motorcycles were sold around the world, compared with 66.7 million cars over the same period.

In 2022, the top four motorcycle producers by volume and type were Honda, Yamaha, Kawasaki, and Suzuki. In developing countries, motorcycles are considered utilitarian due to lower prices and greater fuel economy. Of all the motorcycles in the world, 58% are in the Asia-Pacific and Southern and Eastern Asia regions, excluding car-centric Japan.

According to the US Department of Transportation, the number of fatalities per vehicle mile traveled was 37 times higher for motorcycles than for cars.

Types

The term motorcycle has different legal definitions depending on jurisdiction.

There are three major types of motorcycle: street, off-road, and dual purpose. Within these types, there are many sub-types of motorcycles for different purposes. There is often a racing counterpart to each type, such as road racing and street bikes, or motocross including dirt bikes.

Street bikes include cruisers, sportbikes, scooters and mopeds, and many other types. Off-road motorcycles include many types designed for dirt-oriented racing classes such as motocross and are not street legal in most areas. Dual purpose machines like the dual-sport style are made to go off-road but include features to make them legal and comfortable on the street as well.

Each configuration offers either specialised advantage or broad capability, and each design creates a different riding posture.

In some countries the use of pillions (rear seats) is restricted.

Technical aspects:

Construction

Motorcycle construction is the engineering, manufacturing, and assembly of components and systems for a motorcycle which results in the performance, cost, and aesthetics desired by the designer. With some exceptions, construction of modern mass-produced motorcycles has standardised on a steel or aluminium frame, telescopic forks holding the front wheel, and disc brakes. Some other body parts, designed for either aesthetic or performance reasons, may be added. A petrol-powered engine, typically consisting of between one and four cylinders (and less commonly, up to eight), coupled to a manual five- or six-speed sequential transmission, drives the swingarm-mounted rear wheel by a chain, driveshaft, or belt. Repairs are commonly carried out with the aid of a motorcycle lift.

Fuel economy

Motorcycle fuel economy varies greatly with engine displacement and riding style. A streamlined, fully faired Matzu Matsuzawa Honda XL125 achieved 470 mpg‑US (0.50 L/100 km; 560 mpg‑imp) in the Craig Vetter Fuel Economy Challenge "on real highways – in real conditions". Thanks to low engine displacements (100–200 cc (6.1–12.2 cu in)) and high power-to-mass ratios, motorcycles offer good fuel economy. Under conditions of fuel scarcity, as in 1950s Britain and in modern developing nations, motorcycles claim large shares of the vehicle market. In the United States, the average motorcycle fuel economy is 44 miles per US gallon (about 19 km per litre).
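
The figures above in different unit systems can be cross-checked with a simple conversion (Python):

```python
KM_PER_MILE = 1.60934
LITRES_PER_US_GALLON = 3.78541

def mpg_us_to_km_per_litre(mpg: float) -> float:
    """Convert US miles-per-gallon to kilometres per litre."""
    return mpg * KM_PER_MILE / LITRES_PER_US_GALLON

def mpg_us_to_l_per_100km(mpg: float) -> float:
    """Convert US miles-per-gallon to litres per 100 km."""
    return 100.0 / mpg_us_to_km_per_litre(mpg)

print(round(mpg_us_to_km_per_litre(44), 1))  # ~18.7 km/L (the "about 19" quoted)
print(round(mpg_us_to_l_per_100km(470), 2))  # ~0.5 L/100 km for the XL125 record
```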

Electric motorcycles

Electric motorcycles often achieve very high fuel-economy equivalents. They are nearly silent, zero-emission vehicles driven by electric motors. Operating range and top speed are limited by battery technology. Fuel cells and petroleum-electric hybrids are also under development to extend the range and improve the performance of the electric drive system.

Reliability

A 2013 survey of 4,424 readers of the US Consumer Reports magazine collected reliability data on 4,680 motorcycles purchased new from 2009 to 2012. The most common problem areas were accessories, brakes, electrical systems (including starters, charging, and ignition), and fuel systems, and the types of motorcycles with the greatest problems were touring, off-road/dual-sport, sport-touring, and cruisers. There were not enough sport bikes in the survey for a statistically significant conclusion, though the data hinted at reliability as good as cruisers. These results may be partially explained by accessories, including such equipment as fairings, luggage, and auxiliary lighting, which are frequently added to touring, adventure-touring/dual-sport, and sport-touring bikes. Trouble with fuel systems is often the result of improper winter storage, and brake problems may also be due to poor maintenance. Of the five brands with enough data to draw conclusions, Honda, Kawasaki, and Yamaha were statistically tied, with 11 to 14% of those bikes in the survey experiencing major repairs. Harley-Davidsons had a rate of 24%, while BMWs did worse, with 30% needing major repairs. There were not enough Triumph and Suzuki motorcycles surveyed for a statistically sound conclusion, though it appeared Suzukis were as reliable as the other three Japanese brands while Triumphs were comparable to Harley-Davidson and BMW. Three-fourths of the repairs in the survey cost less than US$200, and two-thirds of the motorcycles were repaired in less than two days. In spite of their relatively worse reliability in this survey, Harley-Davidson and BMW owners showed the greatest owner satisfaction, and three-fourths of them said they would buy the same bike again, followed by 72% of Honda owners and 60 to 63% of Kawasaki and Yamaha owners.

Dynamics

Two-wheeled motorcycles tend to stay upright while rolling partly because of the angular momentum of their spinning wheels: angular momentum points along the axle and resists changes in direction (the gyroscopic effect). Steering geometry, particularly trail, and the rider's steering inputs are also essential to keeping the machine balanced.

Different types of motorcycles have different dynamics and these play a role in how a motorcycle performs in given conditions. For example, one with a longer wheelbase provides the feeling of more stability by responding less to disturbances. Motorcycle tyres have a large influence over handling.

Motorcycles must be leaned in order to make turns. This lean is induced by the method known as countersteering, in which the rider momentarily steers the handlebars in the direction opposite of the desired turn. This practice is counterintuitive and therefore often confusing to novices – and even many experienced motorcyclists.

With such a short wheelbase, motorcycles can generate enough torque at the rear wheel, and enough stopping force at the front wheel, to lift the opposite wheel off the road. These actions, if performed on purpose, are known as wheelies and stoppies (or endos), respectively.

Accessories

Various features and accessories may be attached to a motorcycle either as OEM (original equipment manufacturer) (factory-fitted) or aftermarket. Such accessories are selected by the owner to enhance the motorcycle's appearance, safety, performance, or comfort, and may include anything from mobile electronics to sidecars and trailers.

It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

## #1840 2023-07-18 14:23:58

Jai Ganesh
Registered: 2005-06-28
Posts: 47,788

### Re: Miscellany

1843) Tropic of Cancer

Gist

Tropic of Cancer is a latitude approximately 23°27′ N of the terrestrial Equator. This latitude corresponds to the northernmost declination reached by the Sun on the ecliptic relative to the celestial equator. At the summer solstice in the Northern Hemisphere, around June 21, the Sun attains its greatest declination north and is directly over the Tropic of Cancer. At that time the Sun appears in the constellation Taurus, but, much earlier in history, it lay in the constellation Cancer, thereby resulting in the designation Tropic of Cancer. Because of the gradual change in the direction of Earth’s axis of rotation, the Sun will reappear in the constellation Cancer in approximately 24,000 years.

Summary

The Tropic of Cancer is a parallel of latitude on the Earth, 23.5 degrees north of the equator.

On the northern summer solstice/southern winter solstice (around 21 June each year), the Sun reaches its most northerly declination of +23.5 degrees. At this time, the Sun is directly overhead for an observer on the Tropic of Cancer. 3,000 years ago, this occurred when the Sun was in the zodiac constellation Cancer, and even though this is no longer the case (due to the precession of the equinoxes), the name remains.

The declination of the Sun at that moment, and the latitude of the Tropic of Cancer, are equal to the obliquity of the ecliptic, the angle that the ecliptic makes with the celestial equator.

Details

The Tropic of Cancer is a line of latitude circling the Earth at approximately 23.5° north of the equator. It is the northernmost point on Earth where the sun's rays can appear directly overhead at local noon. It is also one of the five major degree measures or circles of latitude dividing the Earth (the others are the Tropic of Capricorn, the equator, the Arctic Circle and the Antarctic Circle).

The Tropic of Cancer is significant to Earth's geography because, in addition to being the northernmost point where the sun's rays are directly overhead, it also marks the northern boundary of tropics, which is the region that extends from the equator north to the Tropic of Cancer and south to the Tropic of Capricorn.

Some of the Earth's largest countries and cities are at or near the Tropic of Cancer. For example, the line passes through the U.S. state of Hawaii, portions of Central America, northern Africa, and the Sahara Desert, and runs near Kolkata, India. Because of the greater amount of land in the Northern Hemisphere, the Tropic of Cancer passes through more cities than the equivalent Tropic of Capricorn in the Southern Hemisphere.

Naming of the Tropic of Cancer

At the June or summer solstice (around June 21), when the Tropic of Cancer was named, the sun pointed in the direction of the constellation Cancer, giving the new line of latitude its name. However, because this name was assigned over 2,000 years ago, the sun is no longer in the constellation Cancer at the solstice; today it is located in the constellation Taurus. For most purposes, though, it is easiest to identify the Tropic of Cancer by its latitude of 23.5°N.

Significance of the Tropic of Cancer

In addition to being used to divide the Earth into different parts for navigation and marking the northern boundary of the tropics, the Tropic of Cancer is also significant to the Earth's amount of solar insolation and the creation of seasons.

Solar insolation is the amount of incoming solar radiation on the Earth. It varies over the Earth's surface based on the amount of direct sunlight hitting the equator and tropics, and decreases toward the poles from there. Solar insolation is greatest at the subsolar point (the point on Earth directly beneath the Sun, where the rays strike the surface at 90 degrees), which migrates annually between the Tropics of Cancer and Capricorn because of the Earth's axial tilt. When the subsolar point is at the Tropic of Cancer, it is the June solstice, and this is when the northern hemisphere receives the most solar insolation.
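
The latitude of the subsolar point equals the Sun's declination, which a simple cosine model of the Earth's tilt approximates to within about a degree. A sketch (Python; the formula is a common textbook approximation, not an exact ephemeris):

```python
import math

def solar_declination_deg(day_of_year: int) -> float:
    """Approximate solar declination (degrees) for day_of_year (1-365).

    Uses delta = -23.44 * cos(360/365 * (N + 10)), accurate to about a degree.
    """
    return -23.44 * math.cos(math.radians(360.0 / 365.0 * (day_of_year + 10)))

print(round(solar_declination_deg(172), 1))  # June 21: near +23.4 (Tropic of Cancer)
print(round(solar_declination_deg(355), 1))  # Dec 21: near -23.4 (Tropic of Capricorn)
```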

During the June solstice, because the amount of solar insolation is greatest at the Tropic of Cancer, the areas north of the tropic in the northern hemisphere also receive the most solar energy which keeps it warmest and creates summer. In addition, this is also when the areas at latitudes higher than the Arctic Circle receive 24 hours of daylight and no darkness. By contrast, the Antarctic Circle receives 24 hours of darkness and lower latitudes have their winter season because of low solar insolation, less solar energy and lower temperatures.

The Tropic of Cancer, which is also referred to as the Northern Tropic, is the most northerly circle of latitude on Earth at which the Sun can be directly overhead. This occurs on the June solstice, when the Northern Hemisphere is tilted toward the Sun to its maximum extent. The Sun also reaches 90 degrees below the horizon there at solar midnight on the December solstice. Using a continuously updated formula, the circle is currently 23°26′10.4″ (or 23.43623°) north of the Equator.

Its Southern Hemisphere counterpart, marking the most southerly position at which the Sun can be directly overhead, is the Tropic of Capricorn. These tropics are two of the five major circles of latitude that mark maps of Earth, the others being the Arctic and Antarctic circles and the Equator. The positions of these two circles of latitude (relative to the Equator) are dictated by the tilt of Earth's axis of rotation relative to the plane of its orbit, and since the tilt changes, the location of these two circles also changes.

In geopolitics, it is known for being the southern limitation on the mutual defence obligation of NATO, as member states of NATO are not obligated to come to the defence of territory south of the Tropic of Cancer.

Name

When this line of latitude was named in the last centuries BCE, the Sun was in the constellation Cancer (Latin for crab) at the June solstice, the time each year that the Sun reaches its zenith at this latitude. Due to the precession of the equinoxes, this is no longer the case; today the Sun is in Gemini at the June solstice. The word "tropic" itself comes from the Greek "trope", meaning turn (change of direction, or circumstances), inclination, referring to the fact that the Sun appears to "turn back" at the solstices.

Drift

The Tropic of Cancer's position is not fixed, but constantly changes because of a slow variation in the Earth's axial tilt relative to the ecliptic, the plane in which the Earth orbits the Sun. Earth's axial tilt varies over a 41,000-year period from 22.1 to 24.5 degrees; as of 2000 it is about 23.4 degrees, a value that will remain approximately correct for about a millennium. This variation means that the Tropic of Cancer is currently drifting southward at a rate of almost half an arcsecond (0.468″) of latitude, or 15 m (49 ft), per year. The circle's position was at exactly 23°27′ N in 1917 and will be at 23°26′ N in 2045. The angular distance between the Antarctic Circle and the Tropic of Cancer is essentially constant, as the two move in tandem. This assumes a constant equator, but the precise location of the equator is not truly fixed.
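
The quoted positions for 1917 and 2045 are consistent with simple linear drift at the current rate. A rough check (Python; a linear extrapolation that ignores the slow change in the drift rate itself):

```python
DRIFT_ARCSEC_PER_YEAR = 0.468  # current southward drift of the Tropic of Cancer

def tropic_latitude_arcmin(year: int) -> float:
    """Approximate latitude of the Tropic of Cancer, in arcminutes past 23 deg N.

    Anchored at exactly 23 deg 27' in 1917 and drifting linearly southward.
    """
    return 27.0 - DRIFT_ARCSEC_PER_YEAR * (year - 1917) / 60.0

print(round(tropic_latitude_arcmin(2045), 2))  # ~26.0, i.e. about 23 deg 26' N
```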

Geography

North of the tropic are the subtropics and the North Temperate Zone. The equivalent line of latitude south of the Equator is called the Tropic of Capricorn, and the region between the two, centered on the Equator, is the tropics.

In the year 2000, more than half of the world's population lived north of the Tropic of Cancer.

There are approximately 13 hours, 35 minutes of daylight during the summer solstice. During the winter solstice, there are 10 hours, 41 minutes of daylight.
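These day lengths follow from the standard sunrise equation. A minimal sketch, assuming the conventional sunrise/sunset altitude of −0.833° (which accounts for atmospheric refraction and the apparent radius of the solar disc):

```python
import math

def day_length_hours(lat_deg, decl_deg, sun_alt_deg=-0.833):
    """Hours of daylight at latitude lat_deg when the Sun's declination is
    decl_deg, via the sunrise equation with a horizon-altitude correction."""
    lat, decl, h0 = (math.radians(x) for x in (lat_deg, decl_deg, sun_alt_deg))
    cos_h = (math.sin(h0) - math.sin(lat) * math.sin(decl)) / (
        math.cos(lat) * math.cos(decl))
    cos_h = max(-1.0, min(1.0, cos_h))  # clamp for polar day / polar night
    # acos gives the hour angle at sunrise; the Sun moves 15 degrees per hour.
    return 2 * math.degrees(math.acos(cos_h)) / 15

TROPIC = 23.436  # current latitude of the Tropic of Cancer, degrees

print(day_length_hours(TROPIC, +TROPIC))  # summer solstice: ≈ 13.58 h ≈ 13 h 35 min
print(day_length_hours(TROPIC, -TROPIC))  # winter solstice: ≈ 10.69 h ≈ 10 h 41 min
```

Dropping the −0.833° correction gives a purely geometric day length about nine minutes shorter, which is why quoted solstice day lengths always include it.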

Climate

The climate at the Tropic of Cancer is generally hot and dry, except for cooler highland regions in China, marine environments such as Hawaii, and easterly coastal areas, where orographic rainfall can be very heavy, in some places reaching 4 metres (160 in) annually. Most regions on the Tropic of Cancer experience two distinct seasons: an extremely hot summer with temperatures often reaching 45 °C (113 °F) and a warm winter with maxima around 22 °C (72 °F). Much land on or near the Tropic of Cancer is part of the Sahara Desert, while to the east, the climate is torrid monsoonal with a short wet season from June to September, and very little rainfall for the rest of the year.

The highest mountain on or adjacent to the Tropic of Cancer is Yu Shan in Taiwan; though it had glaciers descending as low as 2,800 metres (9,190 ft) during the Last Glacial Maximum, none survive and at present no glaciers exist within 470 kilometres (290 mi) of the Tropic of Cancer; the nearest currently surviving are the Minyong and Baishui in the Himalayas to the north and on Iztaccíhuatl to the south.

It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

## #1841 2023-07-19 14:02:41

Jai Ganesh
Registered: 2005-06-28
Posts: 47,788

### Re: Miscellany

1844) Tropic of Capricorn

Gist

The Tropic of Capricorn is an imaginary line of latitude going around the Earth at approximately 23.5° south of the equator. It is the southernmost point on Earth where the sun's rays can be directly overhead at local noon. It is also one of the five major circles of latitude dividing the Earth (the others are the Tropic of Cancer in the northern hemisphere, the equator, the Arctic Circle and the Antarctic Circle).

Summary

Tropic of Capricorn, latitude approximately 23°27′ S of the terrestrial Equator. This latitude corresponds to the southernmost declination of the Sun’s path (the ecliptic) relative to the celestial equator. At the winter solstice in the Northern Hemisphere, around December 21, the Sun is directly over the Tropic of Capricorn and lies within the boundaries of the constellation Sagittarius, having reached its southernmost declination along the ecliptic. Previously, however, it appeared in the constellation Capricornus at the winter solstice, hence the name Tropic of Capricorn. Because of the gradual change in the direction of Earth’s axis of rotation, the Sun will reappear in the constellation Capricornus in approximately 24,000 years.

Details

The Tropic of Capricorn (or the Southern Tropic) is the circle of latitude that contains the subsolar point at the December (or southern) solstice. It is thus the southernmost latitude where the Sun can be seen directly overhead. At solar midnight on the June solstice, the Sun there lies a full 90 degrees below the horizon. Its northern equivalent is the Tropic of Cancer.

The Tropic of Capricorn is one of the five major circles of latitude marked on maps of Earth. Its latitude is currently 23°26′10.4″ (or 23.43623°) south of the Equator, but it is very gradually moving northward, currently at the rate of 0.47 arcseconds, or 15 metres, per year.

Name

When this line of latitude was named in the last centuries BC, the Sun was in the constellation Capricornus at the December solstice. This is the date each year when the Sun reaches zenith at this latitude, the southernmost declination it reaches for the year. (Due to the precession of the equinoxes the Sun currently appears in Sagittarius at this solstice.)

Geography and environment

The Tropic of Capricorn is the dividing line between the Southern Temperate Zone to the south and the Tropics to the north. The Northern Hemisphere equivalent of the Tropic of Capricorn is the Tropic of Cancer.

The Tropic of Capricorn's position is not fixed, but constantly changes because of a slight wobble in the Earth's axial tilt relative to the plane of its orbit around the Sun. Earth's axial tilt varies over a 41,000-year period from 22.1 to 24.5 degrees and currently resides at about 23.4 degrees. This wobble means that the Tropic of Capricorn is currently drifting northward at a rate of almost half an arcsecond (0.468″) of latitude, or 15 metres, per year (it was at exactly 23°27′S in 1917 and will be at 23°26′S in 2045). The distance between the Arctic Circle and the Tropic of Capricorn is therefore essentially constant, as the two move in tandem.

There are approximately 10 hours, 41 minutes of daylight during the June solstice (Southern Hemisphere winter). During the December solstice (Southern Hemisphere summer), there are 13 hours, 35 minutes of daylight. The length of the Tropic of Capricorn at 23°26′11.7″S is 36,788 km (22,859 mi).
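The quoted length of the Tropic of Capricorn can be reproduced from the WGS84 ellipsoid: the radius of the parallel at latitude φ is a·cos φ / √(1 − e²·sin²φ), where a is the equatorial radius and e² the first eccentricity squared. A sketch using those two constants, not a full geodetic-library computation:

```python
import math

# WGS84 ellipsoid constants
A = 6378137.0          # equatorial radius, metres
E2 = 6.69437999014e-3  # first eccentricity squared

def parallel_length_km(lat_deg):
    """Circumference of the circle of latitude lat_deg on the WGS84 ellipsoid."""
    lat = math.radians(lat_deg)
    radius = A * math.cos(lat) / math.sqrt(1 - E2 * math.sin(lat) ** 2)
    return 2 * math.pi * radius / 1000

capricorn = 23 + 26 / 60 + 11.7 / 3600  # 23°26'11.7"S as decimal degrees
print(round(parallel_length_km(capricorn)))  # ≈ 36788 km, matching the figure above
```

A spherical-Earth estimate (circumference × cos φ) comes out about 20 km shorter; the √(1 − e²·sin²φ) term is what recovers the published 36,788 km.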

Africa

In most of this belt of southern Africa, a minimum of seasonal rainfall is reliable and farming is possible, though yields struggle to compete with those of, for example, the Mississippi basin, even with comparable soil fertilisation. Rivers have been successfully dammed, particularly those flowing from areas of relief precipitation (high eminences) and from the edge of the Great Rift Valley, such as the Zambezi, well within the Tropics. This, with alluvial or enriched soil, enables substantial grain farming in areas with good soil. Across this large region pasture farming is widespread; where it is intensive, brief and rotational, it helps to fertilise and stabilise the soil, preventing run-off and desertification. This approach is traditional to many tribes and is promoted by government advisors such as Allan Savory, a Zimbabwean-born biologist, farmer, game rancher, politician, international consultant and co-founder of the Savory Institute. According to the United Nations University's Our World publication, he is credited with developing "holistic management" in the 1960s and has led anti-desertification efforts in Africa for decades, using an approach counterintuitive to most developed economies: increasing the number of livestock on grasslands rather than fencing them off for conservation. Such practices in this area have seen success and won generous awards; he gave the keynote speech at the UNCCD's Land Day in 2018, and later that year a widely re-broadcast TED address.

Australia

In Australia, areas around the Tropic have some of the world's most variable rainfall. In the east, advanced plants such as flowering shrubs and eucalypts, and in most bioregions grasses, have adapted to cope by means such as deep roots and low transpiration. Wetter, seasonally watered areas are widely pasture farmed. As for animals, birds and marsupials are well adapted. Arable agriculture, naturally difficult here, specialises in dry fruits, nuts and produce with modest water needs. Other crops, especially wheat, are possible given reliable irrigation sources and, ideally, water-retentive enriched or alluvial soils; shallow irrigation sources very widely dry up in and after drought years. The multi-ridge Great Dividing Range brings enough relief precipitation to make hundreds of kilometres on either side cultivable, and its rivers are widely dammed to store necessary water; this benefits the settled areas of New South Wales and Queensland.

Beyond the green hills, away from the Pacific (which is subject to warm, negative phases of the El Niño–Southern Oscillation, colloquially an "El Niño year/season"), lies a white, red and yellow landscape of 2,800 to 3,300 kilometres of rain shadow. Heading west, it features in turn the normally arid cattle lands of the Channel Country, the white Kati Thanda-Lake Eyre National Park, the mainly red Mamungari Conservation Park, the Gibson Desert and, after others, the dry-landscape settlement of Kalbarri on the west coast and the country north of it. The Channel Country is an arid landscape with a series of ancient flood plains from rivers which flow only intermittently. The principal rivers are the Georgina River, Cooper Creek and the Diamantina River. In most years their waters are absorbed into the earth or evaporate, but when there is sufficient rainfall in their catchment area, these rivers flow into Lake Eyre, South Australia. One of the most significant rainfall events occurred in 2010, when a monsoonal low from ex-Cyclone Olga created a period of exceptional rainfall.

Adverse El Niño phases cause a shift in atmospheric circulation: rainfall is reduced over Indonesia and Australia, while rainfall and tropical cyclone formation increase over the tropical Pacific. The low-level surface trade winds, which normally blow from east to west along the equator, either weaken or start blowing from the other direction.

South America

In South America, whilst soils in the continental cratons are almost as old as in Australia and Southern Africa, the presence of the geologically young and evolving Andes means that this region is on the western side of the subtropical anticyclones and thus receives warm and humid air from the Atlantic Ocean. As a result, areas in Brazil adjacent to the Tropic are impressively productive agricultural regions, producing large quantities of crops such as sugarcane, and the natural rainforest vegetation has been almost entirely cleared, except for a few remaining patches of Atlantic Forest. Further south, in Argentina, the temperate grasslands of the Pampas region are similarly productive in wheat, soybeans, maize, and beef, making the country one of the largest agricultural exporters worldwide, a role similar to that of the Prairies region in Canada.

West of the Andes, which create a rain shadow, the air is further cooled and dried by the cold Humboldt Current, making the region very arid and creating the Atacama Desert, one of the driest in the world; no glaciers exist between Volcán Sajama at 18°30′S and Cerro Tres Cruces at 27°S. Vegetation here is almost non-existent, though on the eastern slopes of the Andes rainfall is adequate for rainfed agriculture.

It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

## #1842 2023-07-20 14:01:45

Jai Ganesh
Registered: 2005-06-28
Posts: 47,788

### Re: Miscellany

1845) Prime meridian

Summary

The prime meridian is the imaginary line from which longitude is measured; together with its antimeridian, it divides Earth into two equal parts: the Eastern Hemisphere and the Western Hemisphere. The prime meridian is also used as the basis for the world’s time zones.

The prime meridian appears on maps and globes. It is the starting point for the measuring system called longitude. Longitude is a system of imaginary north-south lines called meridians. They connect the North Pole to the South Pole. Meridians are used to measure distance in degrees east or west from the prime meridian. The prime meridian is 0° longitude. The 180th meridian is the line of longitude that is exactly opposite the prime meridian. It is 180° longitude. Lines of longitude east of the prime meridian are numbered from 1 to 179 east (E). Lines of longitude west of the prime meridian are numbered from 1 to 179 west (W).

The prime meridian is also called the Greenwich meridian because it passes through Greenwich, England. Before 1884 map makers usually began numbering the lines of longitude on their maps at whichever meridian passed through the site of their national observatory. Many countries used British maps because Great Britain was a world leader in exploration and map making. In 1884, therefore, scientists decided that the starting point of longitude for everyone would be the meridian running through Britain’s royal observatory in Greenwich.

Details

A prime meridian is an arbitrary meridian (a line of longitude) in a geographic coordinate system at which longitude is defined to be 0°. Together, a prime meridian and its anti-meridian (the 180th meridian in a 360°-system) form a great circle. This great circle divides a spheroid, like Earth, into two hemispheres: the Eastern Hemisphere and the Western Hemisphere (for an east-west notational system). For Earth's prime meridian, various conventions have been used or advocated in different regions throughout history. Earth's current international standard prime meridian is the IERS Reference Meridian. (The IERS Reference Meridian (IRM), also called the International Reference Meridian, is the prime meridian (0° longitude) maintained by the International Earth Rotation and Reference Systems Service (IERS). It passes about 5.3 arcseconds east of George Biddell Airy's 1851 transit circle which is 102 metres (335 ft) at the latitude of the Royal Observatory, Greenwich. Thus it differs slightly from the historical Greenwich meridian.) It is derived, but differs slightly, from the Greenwich Meridian, the previous standard.

A prime meridian for a planetary body not tidally locked (or at least not in synchronous rotation) is entirely arbitrary, unlike an equator, which is determined by the axis of rotation. However, for celestial objects that are tidally locked (more specifically, synchronous), their prime meridians are determined by the face permanently turned toward the body they orbit (a planet facing its star, or a moon facing its planet), just as equators are determined by rotation.

Longitudes for the Earth and Moon are measured from their prime meridian (at 0°) to 180° east and west. For all other Solar System bodies, longitude is measured from 0° (their prime meridian) to 360°. West longitudes are used if the rotation of the body is prograde (or 'direct', like Earth), meaning that its direction of rotation is the same as that of its orbit. East longitudes are used if the rotation is retrograde.

History

The notion of longitude was developed by the Greeks Eratosthenes (c. 276 – 195 BCE) in Alexandria and Hipparchus (c. 190 – 120 BCE) in Rhodes, and applied to a large number of cities by the geographer Strabo (64/63 BCE – c. 24 CE). But it was Ptolemy (c. 90 – 168 CE) who first used a consistent meridian for a world map in his Geographia.

Ptolemy used as his basis the "Fortunate Isles", a group of islands in the Atlantic, which are usually associated with the Canary Islands (13° to 18°W), although his maps correspond more closely to the Cape Verde islands (22° to 25° W). The main point is to be comfortably west of the western tip of Africa (17.5° W) as negative numbers were not yet in use. His prime meridian corresponds to 18° 40' west of Winchester (about 20°W) today. At that time the chief method of determining longitude was by using the reported times of lunar eclipses in different countries.

One of the earliest known descriptions of standard time in India appeared in the 4th century CE astronomical treatise Surya Siddhanta. Postulating a spherical earth, the book described the already ancient custom of placing the prime meridian, or zero longitude, through Avanti, the ancient name for the historic city of Ujjain, and Rohitaka, the ancient name for Rohtak (28°54′N 76°38′E), a city near Kurukshetra.

Ptolemy's Geographia was first printed with maps at Bologna in 1477, and many early globes in the 16th century followed his lead. But there was still a hope that a "natural" basis for a prime meridian existed. Christopher Columbus reported (1493) that the compass pointed due north somewhere in mid-Atlantic, and this fact was used in the important Treaty of Tordesillas of 1494, which settled the territorial dispute between Spain and Portugal over newly discovered lands. The Tordesillas line was eventually settled at 370 leagues (2,193 kilometers, 1,362 statute miles, or 1,184 nautical miles) west of Cape Verde. This is shown in the copies of Spain's Padron Real made by Diogo Ribeiro in 1527 and 1529. São Miguel Island (25.5°W) in the Azores was still used for the same reason as late as 1594 by Christopher Saxton, although by then it had been shown that the zero magnetic deviation line did not follow a line of longitude.

In 1541, Mercator produced his famous 41 cm terrestrial globe and drew his prime meridian precisely through Fuerteventura (14°1'W) in the Canaries. His later maps used the Azores, following the magnetic hypothesis. But by the time that Ortelius produced the first modern atlas in 1570, other islands such as Cape Verde were coming into use. In his atlas longitudes were counted from 0° to 360°, not 180°W to 180°E as is usual today. This practice was followed by navigators well into the 18th century. In 1634, Cardinal Richelieu used the westernmost island of the Canaries, El Hierro, 19° 55' west of Paris, as the choice of meridian. The geographer Delisle decided to round this off to 20°, so that it simply became the meridian of Paris disguised.

In the early 18th century the battle was on to improve the determination of longitude at sea, leading to the development of the marine chronometer by John Harrison. But it was the development of accurate star charts, principally by the first British Astronomer Royal, John Flamsteed, between 1680 and 1719, and disseminated by his successor Edmond Halley, that enabled navigators to use the lunar method of determining longitude more accurately with the octant developed by Thomas Godfrey and John Hadley.

In the 18th century most countries in Europe adopted their own prime meridian, usually through their capital: in France the Paris meridian was prime, in Germany it was the Berlin meridian, in Denmark the Copenhagen meridian, and in the United Kingdom the Greenwich meridian.

Between 1765 and 1811, Nevil Maskelyne published 49 issues of the Nautical Almanac based on the meridian of the Royal Observatory, Greenwich. "Maskelyne's tables not only made the lunar method practicable, they also made the Greenwich meridian the universal reference point. Even the French translations of the Nautical Almanac retained Maskelyne's calculations from Greenwich – in spite of the fact that every other table in the Connaissance des Temps considered the Paris meridian as the prime."

In 1884, at the International Meridian Conference in Washington, D.C., 22 countries voted to adopt the Greenwich meridian as the prime meridian of the world. The French argued for a neutral line, mentioning the Azores and the Bering Strait, but eventually abstained and continued to use the Paris meridian until 1911.

The current international standard Prime Meridian is the IERS Reference Meridian. The International Hydrographic Organization adopted an early version of the IRM in 1983 for all nautical charts. It was adopted for air navigation by the International Civil Aviation Organization on 3 March 1989.

International prime meridian

Since 1984, the international standard for the Earth's prime meridian has been the IERS Reference Meridian. Between 1884 and 1984, the meridian of Greenwich was the world standard. These meridians are physically very close to each other.

Prime meridian at Greenwich

In October 1884 the Greenwich Meridian was selected by the delegates (forty-one, representing twenty-five nations) to the International Meridian Conference, held in Washington, D.C., United States, to be the common zero of longitude and standard of time reckoning throughout the world.

The position of the historic prime meridian, based at the Royal Observatory, Greenwich, was established by Sir George Airy in 1851. It was defined by the location of the Airy Transit Circle from the first observation he took with it. Before that, it was defined by a succession of earlier transit instruments, the first of which was acquired by the second Astronomer Royal, Edmond Halley, in 1721. It was set up in the extreme north-west corner of the Observatory between Flamsteed House and the Western Summer House. This spot, now subsumed into Flamsteed House, is roughly 43 metres to the west of the Airy Transit Circle, a distance equivalent to roughly 2 seconds of longitude. It was Airy's transit circle that was adopted in principle (with the French delegates, who pressed for adoption of the Paris meridian, abstaining) as the Prime Meridian of the world at the 1884 International Meridian Conference.

All of these Greenwich meridians were located via an astronomic observation from the surface of the Earth, oriented via a plumb line along the direction of gravity at the surface. This astronomic Greenwich meridian was disseminated around the world, first via the lunar distance method, then by chronometers carried on ships, then via telegraph lines carried by submarine communications cables, then via radio time signals. One remote longitude ultimately based on the Greenwich meridian using these methods was that of the North American Datum 1927 or NAD27, an ellipsoid whose surface best matches mean sea level under the United States.

IERS Reference Meridian

Beginning in 1973 the International Time Bureau and later the International Earth Rotation and Reference Systems Service changed from reliance on optical instruments like the Airy Transit Circle to techniques such as lunar laser ranging, satellite laser ranging, and very-long-baseline interferometry. The new techniques resulted in the IERS Reference Meridian, the plane of which passes through the centre of mass of the Earth. This differs from the plane established by the Airy transit, which is affected by vertical deflection (the local vertical is affected by influences such as nearby mountains). The change from relying on the local vertical to using a meridian based on the centre of the Earth caused the modern prime meridian to be 5.3″ east of the astronomic Greenwich prime meridian through the Airy Transit Circle. At the latitude of Greenwich, this amounts to 102 metres. This was officially accepted by the Bureau International de l'Heure (BIH) in 1984 via its BTS84 (BIH Terrestrial System) that later became WGS84 (World Geodetic System 1984) and the various International Terrestrial Reference Frames (ITRFs).
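The correspondence between 5.3″ of longitude and 102 metres is just the east-west arc length that angle subtends at Greenwich's latitude. A spherical-Earth sketch, assuming Greenwich at about 51.4769°N:

```python
import math

EARTH_RADIUS_M = 6378137.0  # equatorial radius; spherical approximation

def ew_metres_per_arcsec(lat_deg):
    """East-west ground distance covered by 1 arcsecond of longitude at lat_deg."""
    one_arcsec_rad = math.radians(1 / 3600)
    return EARTH_RADIUS_M * math.cos(math.radians(lat_deg)) * one_arcsec_rad

greenwich_lat = 51.4769  # approximate latitude of the Royal Observatory
offset_m = 5.3 * ew_metres_per_arcsec(greenwich_lat)
print(round(offset_m))  # ≈ 102 m
```

The cos(latitude) factor is why the same angular offset would be about 164 m at the equator but shrinks toward the poles.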

Due to the movement of Earth's tectonic plates, the line of 0° longitude along the surface of the Earth has slowly moved toward the west from this shifted position by a few centimetres; that is, towards the Airy Transit Circle (or the Airy Transit Circle has moved toward the east, depending on your point of view) since 1984 (or the 1960s). With the introduction of satellite technology, it became possible to create a more accurate and detailed global map. With these advances there also arose the necessity to define a reference meridian that, whilst being derived from the Airy Transit Circle, would also take into account the effects of plate movement and variations in the way that the Earth was spinning. As a result, the IERS Reference Meridian was established and is commonly used to denote the Earth's prime meridian (0° longitude) by the International Earth Rotation and Reference Systems Service, which defines and maintains the link between longitude and time. Based on observations to satellites and celestial compact radio sources (quasars) from various coordinated stations around the globe, Airy's transit circle drifts northeast about 2.5 centimetres per year relative to this Earth-centred 0° longitude.

It is also the reference meridian of the Global Positioning System operated by the United States Department of Defense, and of WGS84 and its two formal versions, the ideal International Terrestrial Reference System (ITRS) and its realization, the International Terrestrial Reference Frame (ITRF). A current convention on the Earth uses the line of longitude 180° opposite the IRM as the basis for the International Date Line.

The prime meridian is the line of 0° longitude, the starting point for measuring distance both east and west around Earth.

The prime meridian is arbitrary, meaning it could be chosen to be anywhere. Any line of longitude (a meridian) can serve as the 0° longitude line. However, there is an international agreement that the meridian that runs through Greenwich, England, is considered the official prime meridian.

Governments did not always agree that the Greenwich meridian was the prime meridian, making navigation over long distances very difficult. Different countries published maps and charts with longitude based on the meridian passing through their capital city. France published maps with 0° longitude running through Paris. Cartographers in China published maps with 0° longitude running through Beijing. Even different parts of the same country published materials based on local meridians.

Finally, at an international convention called by U.S. President Chester Arthur in 1884, representatives from 25 countries agreed to pick a single, standard meridian. They chose the meridian passing through the Royal Observatory in Greenwich, England. The Greenwich Meridian became the international standard for the prime meridian.

UTC

The prime meridian also sets Coordinated Universal Time (UTC). UTC never changes for daylight saving time or anything else. Just as the prime meridian is the standard for longitude, UTC is the standard for time. All countries and regions measure their time zones according to UTC.

There are 24 principal time zones in the world. If an event happens at 11:00 a.m. in Houston, Texas, United States (during daylight saving time), it would be reported at 12:00 p.m. in Orlando, Florida, United States; 5:00 p.m. in Morocco; 9:30 p.m. in Kolkata, India; and 6:00 a.m. in Honolulu, Hawai'i, United States. The event happened at 4:00 p.m. UTC.
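Conversions like these can be checked with Python's standard zoneinfo module. A small sketch, assuming a July date (so US daylight saving time applies) and the usual IANA zone names:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# An event at 4:00 p.m. UTC on a July day.
event = datetime(2023, 7, 20, 16, 0, tzinfo=ZoneInfo("UTC"))

for zone in ("America/Chicago",    # Houston: 11:00 a.m. CDT
             "America/New_York",   # Orlando: 12:00 p.m. EDT
             "Asia/Kolkata",       # Kolkata: 9:30 p.m. IST (UTC+5:30)
             "Pacific/Honolulu"):  # Honolulu: 6:00 a.m. HST
    local = event.astimezone(ZoneInfo(zone))
    print(zone, local.strftime("%H:%M"))
```

Note that India's offset is +5:30, one of several half-hour zones that make the count of "24 time zones" a simplification.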

The prime meridian also helps establish the International Date Line. Earth's longitude spans 360°, so the halfway point from the prime meridian is the 180° longitude line. The meridian at 180° longitude is commonly known as the International Date Line. As you cross the International Date Line, you either add a day (going west) or subtract a day (going east).

Hemispheres

The prime meridian and the International Date Line create a circle that divides Earth into the Eastern and Western Hemispheres. This is similar to the way the Equator serves as the 0° latitude line and divides Earth into the Northern and Southern Hemispheres.

The Eastern Hemisphere is east of the prime meridian and west of the International Date Line. Most of Earth's landmasses, including all of Asia and Australia, and most of Africa, are part of the Eastern Hemisphere.

The Western Hemisphere is west of the prime meridian and east of the International Date Line. The Americas, the western part of the British Isles (including Ireland and Wales), and the northwestern part of Africa are land masses in the Western Hemisphere.

It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

## #1843 2023-07-21 14:55:34

Jai Ganesh
Registered: 2005-06-28
Posts: 47,788

### Re: Miscellany

1846) Exercise

Gist

Exercising regularly, every day if possible, is the single most important thing you can do for your health. In the short term, exercise helps to control appetite, boost mood, and improve sleep. In the long term, it reduces the risk of heart disease, stroke, diabetes, dementia, depression, and many cancers.

Whether you were once much more physically active or have never been one to exercise regularly, now is a great time to start an exercise and fitness regimen. Getting and staying in shape is just as important for seniors as it is for younger people.

Details

Exercise is any bodily activity that enhances or maintains physical fitness and overall health and wellness.

It is performed for various reasons, including weight loss or maintenance, to aid growth and improve strength, develop muscles and the cardiovascular system, hone athletic skills, improve health, or simply for enjoyment. Many individuals choose to exercise outdoors where they can congregate in groups, socialize, and improve well-being as well as mental health.

In terms of health benefits, the amount of recommended exercise depends upon the goal, the type of exercise, and the age of the person. Even doing a small amount of exercise is healthier than doing none.

Classification

Physical exercises are generally grouped into three types, depending on the overall effect they have on the human body:

* Aerobic exercise is any physical activity that uses large muscle groups and causes the body to use more oxygen than it would while resting. The goal of aerobic exercise is to increase cardiovascular endurance. Examples of aerobic exercise include running, cycling, swimming, brisk walking, skipping rope, rowing, hiking, dancing, playing tennis, continuous training, and long distance running.
* Anaerobic exercise, which includes strength and resistance training, can firm, strengthen, and increase muscle mass, as well as improve bone density, balance, and coordination. Examples of strength exercises include push-ups, pull-ups, lunges, squats, and bench presses. Anaerobic exercise also includes weight training, functional training, eccentric training, interval training, sprinting, and high-intensity interval training, which increase short-term muscle strength.
* Flexibility exercises stretch and lengthen muscles. Activities such as stretching help to improve joint flexibility and keep muscles limber. The goal is to improve the range of motion which can reduce the chance of injury.

Physical exercise can also include training that focuses on accuracy, agility, power, and speed.

Types of exercise can also be classified as dynamic or static. 'Dynamic' exercises, such as steady running, tend to produce a lowering of the diastolic blood pressure during exercise, due to the improved blood flow. Conversely, static exercise (such as weight-lifting) can cause the systolic pressure to rise significantly, albeit transiently, during the performance of the exercise.

Health effects

Physical exercise is important for maintaining physical fitness and can contribute to maintaining a healthy weight, regulating the digestive system, building and maintaining healthy bone density, muscle strength, and joint mobility, promoting physiological well-being, reducing surgical risks, and strengthening the immune system. Some studies indicate that exercise may increase life expectancy and the overall quality of life. People who participate in moderate to high levels of physical exercise have a lower mortality rate compared to individuals who are not physically active. Moderate levels of exercise have been correlated with preventing aging by reducing inflammatory potential.

The majority of the benefits from exercise are achieved with around 3500 metabolic equivalent (MET) minutes per week, with diminishing returns at higher levels of activity. For example, climbing stairs 10 minutes, vacuuming 15 minutes, gardening 20 minutes, running 20 minutes, and walking or bicycling for transportation 25 minutes on a daily basis would together achieve about 3000 MET minutes a week.

A lack of physical activity causes approximately 6% of the burden of disease from coronary heart disease, 7% of type 2 diabetes, 10% of breast cancer and 10% of colon cancer worldwide. Overall, physical inactivity causes 9% of premature mortality worldwide.
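MET minutes are simply an activity's MET intensity multiplied by the minutes spent on it. A sketch of the daily-routine example above, using illustrative MET values (the exact intensities are assumptions; published compendium values vary by activity and pace):

```python
# (MET value, minutes per day) — the MET values here are illustrative assumptions.
daily_activities = {
    "climbing stairs":       (8.0, 10),
    "vacuuming":             (3.5, 15),
    "gardening":             (3.8, 20),
    "running":               (8.0, 20),
    "walking for transport": (3.5, 25),
}

daily_met_minutes = sum(met * minutes for met, minutes in daily_activities.values())
weekly_met_minutes = 7 * daily_met_minutes
print(weekly_met_minutes)  # 3192.0 — on the order of the ~3000 MET minutes cited
```

Swapping the walking for faster cycling (a higher MET value) pushes the weekly total closer to the ~3500 MET minutes at which most of the benefit is achieved.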

Fitness

Most people can increase fitness by increasing physical activity levels. Increases in muscle size from resistance training are primarily determined by diet and testosterone, and the degree of improvement varies from person to person. This genetic variation in improvement from training is one of the key physiological differences between elite athletes and the larger population. There is evidence that exercising in middle age may lead to better physical ability later in life.

Early motor skills and development are also related to physical activity and performance later in life. Children who are more proficient with motor skills early on are more inclined to be physically active, and thus tend to perform well in sports and have better fitness levels. Early motor proficiency correlates positively with childhood physical activity and fitness levels, while lower motor proficiency is associated with a more sedentary lifestyle.

The type and intensity of physical activity performed may have an effect on a person's fitness level. There is some weak evidence that high-intensity interval training may improve a person's VO2 max slightly more than lower-intensity endurance training. However, poorly designed training methods can lead to sports injuries.

Cardiovascular system

The beneficial effect of exercise on the cardiovascular system is well documented. There is a direct correlation between physical inactivity and cardiovascular disease, and physical inactivity is an independent risk factor for the development of coronary artery disease. Low levels of physical exercise increase the risk of cardiovascular disease mortality.

Children who participate in physical exercise experience greater loss of body fat and increased cardiovascular fitness. Studies have shown that academic stress in youth increases the risk of cardiovascular disease in later years; however, these risks can be greatly decreased with regular physical exercise.

There is a dose–response relationship between the amount of exercise performed (from approximately 700 to 2,000 kcal of energy expenditure per week) and all-cause and cardiovascular disease mortality in middle-aged and elderly men. The greatest potential for reduced mortality is seen in sedentary individuals who become moderately active.

Because heart disease is the leading cause of death in women, regular exercise is particularly important for aging women: studies have shown that it leads to healthier cardiovascular profiles.

Most beneficial effects of physical activity on cardiovascular disease mortality can be attained through moderate-intensity activity (40–60% of maximal oxygen uptake, depending on age). Persons who modify their behavior after myocardial infarction to include regular exercise have improved rates of survival. Persons who remain sedentary have the highest risk for all-cause and cardiovascular disease mortality. According to the American Heart Association, exercise reduces the risk of cardiovascular diseases, including heart attack and stroke.

Some have suggested that increases in physical exercise might decrease healthcare costs, increase the rate of job attendance, as well as increase the amount of effort women put into their jobs.

Immune system

Although there have been hundreds of studies on physical exercise and the immune system, there is little direct evidence on its connection to illness. Epidemiological evidence suggests that moderate exercise has a beneficial effect on the human immune system, an effect which is modeled in a J curve. Moderate exercise has been associated with a 29% decreased incidence of upper respiratory tract infections (URTI), but studies of marathon runners found that their prolonged high-intensity exercise was associated with an increased risk of infection. However, another study did not find such an effect. Immune cell functions are impaired following acute sessions of prolonged, high-intensity exercise, and some studies have found that athletes are at a higher risk for infections. Studies have shown that strenuous stress for long durations, such as training for a marathon, can suppress the immune system by decreasing the concentration of lymphocytes. The immune systems of athletes and nonathletes are generally similar. Athletes may have a slightly elevated natural killer cell count and cytolytic action, but these differences are unlikely to be clinically significant.

Vitamin C supplementation has been associated with a lower incidence of upper respiratory tract infections in marathon runners.

Biomarkers of inflammation such as C-reactive protein, which are associated with chronic diseases, are reduced in active individuals relative to sedentary individuals, and the positive effects of exercise may be due to its anti-inflammatory effects. In individuals with heart disease, exercise interventions lower blood levels of fibrinogen and C-reactive protein, an important cardiovascular risk marker. The depression in the immune system following acute bouts of exercise may be one of the mechanisms for this anti-inflammatory effect.

Cancer

A systematic review evaluated 45 studies that examined the relationship between physical activity and cancer survival rates. According to the review, "[there] was consistent evidence from 27 observational studies that physical activity is associated with reduced all-cause, breast cancer–specific, and colon cancer–specific mortality. There is currently insufficient evidence regarding the association between physical activity and mortality for survivors of other cancers." Evidence suggests that exercise may positively affect the quality of life in cancer survivors, including factors such as anxiety, self-esteem and emotional well-being. For people with cancer undergoing active treatment, exercise may also have positive effects on health-related quality of life, such as fatigue and physical functioning. This is likely to be more pronounced with higher intensity exercise.

Exercise may contribute to a reduction of cancer-related fatigue in survivors of breast cancer. Although there is only limited scientific evidence on the subject, people with cancer cachexia are encouraged to engage in physical exercise. Due to various factors, some individuals with cancer cachexia have a limited capacity for physical exercise. Compliance with prescribed exercise is low in individuals with cachexia and clinical trials of exercise in this population often have high drop-out rates.

There is low-quality evidence for an effect of aerobic physical exercise on anxiety and serious adverse events in adults with hematological malignancies. Aerobic physical exercise may result in little to no difference in mortality, quality of life, or physical functioning. These exercises may result in a slight reduction in depression and a reduction in fatigue.

Neurobiological

The neurobiological effects of physical exercise are numerous and involve a wide range of interrelated effects on brain structure, brain function, and cognition. A large body of research in humans has demonstrated that consistent aerobic exercise (e.g., 30 minutes every day) induces persistent improvements in certain cognitive functions, healthy alterations in gene expression in the brain, and beneficial forms of neuroplasticity and behavioral plasticity; some of these long-term effects include: increased neuron growth, increased neurological activity (e.g., c-Fos and BDNF signaling), improved stress coping, enhanced cognitive control of behavior, improved declarative, spatial, and working memory, and structural and functional improvements in brain structures and pathways associated with cognitive control and memory. The effects of exercise on cognition have important implications for improving academic performance in children and college students, improving adult productivity, preserving cognitive function in old age, preventing or treating certain neurological disorders, and improving overall quality of life.

In healthy adults, aerobic exercise has been shown to induce transient effects on cognition after a single exercise session and persistent effects on cognition following regular exercise over the course of several months. People who regularly perform an aerobic exercise (e.g., running, jogging, brisk walking, swimming, and cycling) have greater scores on neuropsychological function and performance tests that measure certain cognitive functions, such as attentional control, inhibitory control, cognitive flexibility, working memory updating and capacity, declarative memory, spatial memory, and information processing speed. The transient effects of exercise on cognition include improvements in most executive functions (e.g., attention, working memory, cognitive flexibility, inhibitory control, problem solving, and decision making) and information processing speed for a period of up to 2 hours after exercising.

Aerobic exercise induces short- and long-term effects on mood and emotional states by promoting positive affect, inhibiting negative affect, and decreasing the biological response to acute psychological stress. Over the short-term, aerobic exercise functions as both an antidepressant and euphoriant, whereas consistent exercise produces general improvements in mood and self-esteem.

Regular aerobic exercise improves symptoms associated with a variety of central nervous system disorders and may be used as adjunct therapy for these disorders. There is clear evidence of exercise treatment efficacy for major depressive disorder and attention deficit hyperactivity disorder. The American Academy of Neurology's clinical practice guideline for mild cognitive impairment indicates that clinicians should recommend regular exercise (two times per week) to individuals who have been diagnosed with this condition. Reviews of clinical evidence also support the use of exercise as an adjunct therapy for certain neurodegenerative disorders, particularly Alzheimer's disease and Parkinson's disease. Regular exercise is also associated with a lower risk of developing neurodegenerative disorders. A large body of preclinical evidence and emerging clinical evidence supports the use of exercise as an adjunct therapy for the treatment and prevention of drug addictions. Regular exercise has also been proposed as an adjunct therapy for brain cancers.

Depression

A number of medical reviews have indicated that exercise has a marked and persistent antidepressant effect in humans, an effect believed to be mediated through enhanced BDNF signaling in the brain. Several systematic reviews have analyzed the potential for physical exercise in the treatment of depressive disorders. The 2013 Cochrane Collaboration review on physical exercise for depression noted that, based upon limited evidence, it is more effective than a control intervention and comparable to psychological or antidepressant drug therapies. Three subsequent 2014 systematic reviews that included the Cochrane review in their analysis concluded with similar findings: one indicated that physical exercise is effective as an adjunct treatment (i.e., treatments that are used together) with antidepressant medication; the other two indicated that physical exercise has marked antidepressant effects and recommended the inclusion of physical activity as an adjunct treatment for mild–moderate depression and mental illness in general. One systematic review noted that yoga may be effective in alleviating symptoms of prenatal depression. Another review asserted that evidence from clinical trials supports the efficacy of physical exercise as a treatment for depression over a 2–4 month period. These benefits have also been noted in old age, with a review conducted in 2019 finding that exercise is an effective treatment for clinically diagnosed depression in older adults.

A meta-analysis from July 2016 concluded that physical exercise improves overall quality of life in individuals with depression relative to controls.

Continuous aerobic exercise can induce a transient state of euphoria, colloquially known as a "runner's high" in distance running or a "rower's high" in crew, through the increased biosynthesis of at least three euphoriant neurochemicals: anandamide (an endocannabinoid), β-endorphin (an endogenous opioid), and phenethylamine (a trace amine and amphetamine analog).

Sleep

Preliminary evidence from a 2012 review indicated that physical training for up to four months may increase sleep quality in adults over 40 years of age. A 2010 review suggested that exercise generally improved sleep for most people, and may help with insomnia, but there is insufficient evidence to draw detailed conclusions about the relationship between exercise and sleep. A 2018 systematic review and meta-analysis suggested that exercise can improve sleep quality in people with insomnia.

Libido

One 2013 study found that exercising improved sexual arousal problems related to antidepressant use.

Respiratory system

People who participate in physical exercise experience increased cardiovascular fitness. There is some level of concern about additional exposure to air pollution when exercising outdoors, especially near traffic.

It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

## #1844 2023-07-22 14:28:08

Jai Ganesh
Registered: 2005-06-28
Posts: 47,788

### Re: Miscellany

1847) Pottery

Gist

Pottery is the process and the products of forming vessels and other objects with clay and other ceramic materials, which are fired at high temperatures to give them a hard, durable form. Major types include earthenware, stoneware and porcelain. The place where such wares are made by a potter is also called a pottery. The definition of pottery, used by the ASTM International, is "all fired ceramic wares that contain clay when formed, except technical, structural, and refractory products." In archaeology, especially of ancient and prehistoric periods, "pottery" often means vessels only, and figures of the same material are called "terracottas." Clay as a part of the materials used is required by some definitions of pottery, but this is dubious.

Pottery is one of the oldest human inventions, originating before the Neolithic period, with ceramic objects like the Gravettian culture Venus of Dolní Věstonice figurine discovered in the Czech Republic dating back to 29,000–25,000 BC, and pottery vessels that were discovered in Jiangxi, China, which date back to 18,000 BC.

Summary

Pottery is the process and the products of forming vessels and other objects with clay and other raw materials, which are fired at high temperatures to give them a hard and durable form. The place where such wares are made by a potter is also called a pottery (plural potteries). The definition of pottery, used by the ASTM International, is "all fired ceramic wares that contain clay when formed, except technical, structural, and refractory products". End applications include tableware, decorative ware, sanitaryware, and in technology and industry such as electrical insulators and laboratory ware. In art history and archaeology, especially of ancient and prehistoric periods, pottery often means vessels only, and sculpted figurines of the same material are called terracottas.

Pottery is one of the oldest human inventions, originating before the Neolithic period, with ceramic objects such as the Gravettian culture Venus of Dolní Věstonice figurine discovered in the Czech Republic dating back to 29,000–25,000 BC, and pottery vessels that were discovered in Jiangxi, China, which date back to 18,000 BC. Early Neolithic and pre-Neolithic pottery artifacts have been found, in Jōmon Japan (10,500 BC), the Russian Far East (14,000 BC), Sub-Saharan Africa (9,400 BC), South America (9,000s–7,000s BC), and the Middle East (7,000s–6,000s BC).

Pottery is made by forming a clay body into objects of a desired shape and heating them to high temperatures (600–1600 °C) in a bonfire, pit or kiln, which induces reactions that lead to permanent changes including increasing the strength and rigidity of the object. Much pottery is purely utilitarian, but some can also be regarded as ceramic art. An article can be decorated before or after firing.

Pottery is traditionally divided into three types: earthenware, stoneware and porcelain. All three may be glazed or unglazed. All may also be decorated by various techniques. In many examples the group a piece belongs to is immediately visually apparent, but this is not always the case; for example fritware uses little or no clay, so falls outside these groups. Historic pottery of all these types is often grouped as either "fine" wares, relatively expensive and well-made, and following the aesthetic taste of the culture concerned, or alternatively "coarse", "popular", "folk" or "village" wares, mostly undecorated, or simply so, and often less well-made.

Cooking in pottery became less popular once metal pots became available, but is still used for dishes that benefit from the qualities of pottery cooking, typically slow cooking in an oven, such as biryani, cassoulet, daube, tagine, jollof rice, kedjenou, cazuela and types of baked beans.

Details

Pottery is one of the oldest and most widespread of the decorative arts, consisting of objects made of clay and hardened with heat. The objects made are commonly useful ones, such as vessels for holding liquids or plates or bowls from which food can be served.

Kinds, processes, and techniques

Clay, the basic material of pottery, has two distinctive characteristics: it is plastic (i.e., it can be molded and will retain the shape imposed upon it); and it hardens on firing to form a brittle but otherwise virtually indestructible material that is not attacked by any of the agents that corrode metals or organic materials. Firing also protects the clay body against the effects of water. If a sun-dried clay vessel is filled with water, it will eventually collapse, but, if it is heated, chemical changes that begin to take place at about 900 °F (500 °C) preclude a return to the plastic state no matter how much water is later in contact with it. Clay is a refractory substance; it will vitrify only at temperatures of about 2,900 °F (1,600 °C). If it is mixed with a substance that will vitrify at a lower temperature (about 2,200 °F, or 1,200 °C) and the mixture is subjected to heat of this order, the clay will hold the object in shape while the other substance vitrifies. This forms a nonporous opaque body known as stoneware. When feldspar or soapstone (steatite) is added to the clay and exposed to a temperature of 2,000 to 2,650 °F (1,100 to 1,450 °C), the product becomes translucent and is known as porcelain. In this section, earthenware is used to denote all pottery substances that are not vitrified and are therefore slightly porous and coarser than vitrified materials.
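The paired Fahrenheit and Celsius figures quoted in this passage (and again later in the porcelain section) follow the standard conversion C = (F − 32) × 5/9. A minimal sketch checking that each rounded pair is consistent:

```python
def f_to_c(f):
    """Convert degrees Fahrenheit to degrees Celsius."""
    return (f - 32) * 5 / 9

# Temperature pairs as quoted in the text (the Celsius values there are rounded):
pairs = [(900, 500), (2200, 1200), (2650, 1450), (2900, 1600)]
for f, approx_c in pairs:
    c = f_to_c(f)
    print(f"{f} F = {c:.0f} C (text gives about {approx_c} C)")
```

Each computed value lands within a few degrees of the rounded Celsius figure in the text (2,900 °F is about 1,593 °C, quoted as 1,600 °C), confirming the pairs are the same temperatures expressed in two scales rather than independent measurements.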

The line of demarcation between the two classes of vitrified materials—stoneware and porcelain—is extremely vague. In the Western world, porcelain is usually defined as a translucent substance—when held to the light most porcelain does have this property—and stoneware is regarded as partially vitrified material that is not translucent. The Chinese, on the other hand, define porcelain as any ceramic material that will give a ringing tone when tapped. None of these definitions is completely satisfactory; for instance, some thinly potted stonewares are slightly translucent if they have been fired at a high temperature, whereas some heavily potted porcelains are opaque. Therefore, the application of the terms is often a matter of personal preference and should be regarded as descriptive, not definitive.

Kinds of pottery:

Earthenware

Earthenware was the first kind of pottery made, dating back about 9,000 years. In the 21st century, it is still widely used.

The earthenware body varies in colour from buff to dark red and from gray to black. The body can be covered or decorated with slip (a mixture of clay and water in a creamlike consistency, used for adhesive and casting as well as for decoration), with a clear glaze, or with an opaque tin glaze. Tin-glazed earthenware is usually called majolica, faience, or delft (see below Decorative glazing). If the clear-glazed earthenware body is a cream colour, it is called creamware. Much of the commercial earthenware produced beginning in the second half of the 20th century was heat- and cold-proof and could thus be used for cooking and freezing as well as for serving.

Stoneware

Stoneware is very hard and, although sometimes translucent, usually opaque. The colour of the body varies considerably; it can be red, brown, gray, white, or black.

Fine white stoneware was made in China as early as 1400 BCE (Shang dynasty). In Korea, stoneware was first made during the Silla dynasty (57 BCE–935 CE); in Japan, during the 13th century (Kamakura period). The first production of stoneware in Europe was in 16th-century Germany. When tea was first imported to Europe from China in the 17th century, each chest was accompanied by a red stoneware pot made at the Yixing kilns in Jiangsu province. This ware was copied in Germany, the Netherlands, and England. At the end of the 17th century, English potters made a salt-glazed white stoneware that was regarded by them as a substitute for porcelain (see below Decorative glazing). In the 18th century, the Englishman Josiah Wedgwood made a black stoneware called basaltes and a white stoneware (coloured with metallic oxides) called jasper. A fine white stoneware, called Ironstone china, was introduced in England early in the 19th century. In the 20th century, stoneware was used mostly by artist-potters, such as Bernard Leach and his followers.

Porcelain

Porcelain was first made in China during the Tang dynasty (618–907 CE). The kind most familiar in the West was not manufactured until the Yuan dynasty (1279–1368 CE). It was made from kaolin (white china clay) and petuntse (a feldspathic rock also called china stone), the latter being ground to powder and mixed with the clay. During the firing, which took place at a temperature of about 2,650 °F (1,450 °C), the petuntse vitrified, while the refractory clay ensured that the vessel retained its shape.

In medieval times isolated specimens of Chinese porcelain found their way to Europe, where they were much prized, principally because of their translucency. European potters made numerous attempts to imitate them, and, since at that time there was no exact body of chemical and physical knowledge whereby the porcelain could be analyzed and then synthesized, experiments proceeded strictly by analogy. The only manufactured translucent substance then known was glass, and it was perhaps inevitable that glass made opaque with tin oxide (the German Milchglas, or milk glass, for example) should have been used as a substitute for porcelain. The nature of glass, however, made it impossible to shape it by any of the means used by the potter, and a mixture of clay and ground glass was eventually tried. Porcelain made in this way resembles that of the Chinese only superficially and is always termed soft, or artificial, porcelain. The date and place of the first attempt to make soft porcelain are debatable, but some Middle Eastern pottery of the 12th century was made from glaze material mixed with clay and is occasionally translucent (see below Islamic: Egyptian). Much the same formula was employed with a measure of success in Florence about 1575 at workshops under the patronage of Duke Francesco de’Medici. No further attempts of any kind appear to have been made until the mid-17th century, when Claude and François Révérend, Paris importers of Dutch pottery, were granted a monopoly of porcelain manufacture in France. It is not known whether they succeeded in making it or not, but, certainly by the end of the 17th century, porcelain was being made in quantity, this time by a factory at Saint-Cloud, near Paris.

The secret of true, or hard, porcelain similar to that of China was not discovered until about 1707 in Saxony, when Ehrenfried Walter von Tschirnhaus, assisted by an alchemist called Johann Friedrich Böttger, substituted ground feldspathic rock for the ground glass in the soft porcelain formula. Soft porcelain, always regarded as a substitute for hard porcelain, was progressively discontinued because it was uneconomic; kiln wastage was excessive, occasionally rising to nine-tenths of the total.

The terms soft and hard porcelain refer to the soft firing (about 2,200 °F, or 1,200 °C) necessary for the first, and the hard firing (about 2,650 °F, or 1,450 °C) necessary for the second. By coincidence they apply also to the physical properties of the two substances: for example, soft porcelain can be cut with a file, whereas hard porcelain cannot. This is sometimes used as a test for the nature of the body.

In the course of experiments in England during the 18th century, a type of soft porcelain was made in which bone ash (a calcium phosphate made by roasting the bones of cattle and grinding them to a fine powder) was added to the ground glass. Josiah Spode the Second later added this bone ash to the true, hard porcelain formula, and the resulting body, known as bone china, has since become the standard English porcelain. Hard porcelain is strong, but its vitreous nature causes it to chip fairly easily and, unless especially treated, it is usually tinged slightly with blue or gray. Bone china is slightly easier to manufacture. It is strong, does not chip easily, and the bone ash confers an ivory-white appearance widely regarded as desirable. Generally, bone china is most popular for table services in England and the United States, while hard porcelain is preferred on the European continent.

Forming processes and techniques

Raw clay consists primarily of true clay particles and undecomposed feldspar mixed with other components of the igneous rocks from which it was derived, usually appreciable quantities of quartz and small quantities of mica, iron oxides, and other substances. The composition and thus the behaviour and plasticity of clays from different sources are therefore slightly different. Except for coarse earthenwares, which can be made from clay as it is found in the earth, pottery is made from special clays plus other materials mixed to achieve the desired results. The mixture is called the clay body, or batch.

To prepare the batch, the ingredients are combined with water and reduced to the desired degree of fineness. The surplus water is then removed.

Shaping the clay

The earliest vessels were modeled by hand, using the finger and thumb, a method employed still by the Japanese to make raku teabowls. Flat slabs of clay luted together (using clay slip as an adhesive) were employed to make square or oblong vessels, and the slabs could be formed into a cylinder and provided with a flat base by the same means. Coiled pottery was an early development. Long rolls of clay were coiled in a circle, layer upon layer, until the approximate shape had been attained; the walls of the vessel were then finished by scraping and smoothing. Some remarkably fine early pots were made in this way.

It is impossible to say when the potter’s wheel, which is a difficult tool and needs long apprenticeship, was introduced. A pot cannot be made by hand modeling or coiling without the potter’s either turning it or moving around it, and, as turning involves the least expenditure of human effort, it would obviously be preferred. The development of the slow, or hand-turned, wheel as an adjunct to pottery manufacture led eventually to the introduction of the kick wheel, rotated by foot, which became the potter’s principal tool. The potter throws the clay onto a rapidly rotating disc and shapes his pot by manipulating it with both hands. This is a considerable feat of manual dexterity that leads to much greater exactness and symmetry of form. Perhaps the most skillful of all potters have been the Chinese. Excellent examples of their virtuosity are the double-gourd vases, made from the 16th century onward, which were turned in separate sections and afterward joined together. By the 18th century the wheel was no longer necessarily turned by the potter’s foot but by small boys, and since the 19th century the motive power has been mechanical. Electrical power was common in the 20th century, but many artisans continued to prefer foot power.

Jollying, or jiggering, is the mechanical adaptation of wheel throwing and is used where mass production or duplication of the same shape—particularly cups and plates—is required. The jolly, or jigger, was introduced during the 18th century. It is similar to the wheel in appearance except that the head consists of a plaster mold shaped like the inside of an object, such as a plate. As it revolves, the interior of the plate is shaped by pressing the clay against the head, while the exterior, including the footring, is shaped by a profile (a flat piece of metal cut to the contour of the underside of the plate) brought into contact with the clay. Machines that make both cups and plates automatically on this principle were introduced in the 20th century. Small parts, such as cup handles, are made separately by pressing clay into molds and are subsequently attached to the vessel by luting.

One of the earliest methods of shaping clay was molding. Pots were made by smearing clay around the inside of a basket or coarsely woven sack. The matrix was consumed during firing, leaving the finished pot with the impression of the weave on the exterior. A more advanced method, used by the Greeks and others, is to press the pottery body into molds of fired clay. Though the early molds were comparatively simple, they later became more complex, a tendency best seen in those molds used for the manufacture of pottery figures. The unglazed earthenware figures of Tanagra (Boeotia, central Greece) were first modeled by hand, then molds of whole figures were used, and finally the components—arms, legs, heads, and torsos—were all molded separately. The parts were often regarded as interchangeable, so that a variety of models could be constructed from a limited number of components. No improvement on this method of manufacture had been devised by the 20th century: the European porcelain factories make their figures in precisely the same way.

Plaster of paris molds were introduced into Staffordshire about 1745. They enabled vessels to be cast in slip, for when the slip was poured into the mold the plaster absorbed the water from it, thus leaving a layer of clay on the surface of the mold. When this layer had reached a sufficient strength and thickness, the surplus slip was poured off, the cast removed and fired, and the mold used again. This method is still in common use.

Drying, turning, and firing

Newly shaped articles were formerly allowed to dry slowly in the open air. In 20th-century pottery factories, this stage was sped up by the introduction of automatic dryers, often in the form of hot, dry tunnels through which the ware passes on a conveyor belt.

Turning is the process of finishing the greenware (unfired ware) after it has dried to leather hardness. The technique is used to smooth and finish footrings on wheel-thrown wares or undercut places on molded or jiggered pieces. It is usually done on the potter’s wheel or jigger as the ware revolves. Lathe turning, like most hand operations, was tending to disappear in the mid-20th century except on the more ornamental and expensive objects.

The earliest vessels, which were sun-dried but not fired, could be used only for storing cereals and similar dry materials. If a sun-dried clay vessel is filled with water it absorbs the liquid, becomes very soft, and eventually collapses; but if it is heated, chemical changes that begin to take place at about 900 °F (500 °C) preclude a return to the plastic state.

After thorough drying, the pottery is fired in a kiln. In early pottery making, the objects were simply stacked in a shallow depression or hole in the ground, and a pyre of wood was built over them. Later, coal- or wood-fired ovens became almost universal. In the 20th century both gas and electricity were used as fuels. Many improvements were made in the design of intermittent kilns, in which the ware is stacked when cold and then raised to the desired temperature. These kilns were extravagant of fuel, however, and were awkward to fill or empty if they did not have time to cool completely. For these reasons they were replaced by continuous kilns, the most economical and successful of which is the tunnel kiln. In these kilns, the wares were conveyed slowly from a comparatively cool region at the entrance to the full heat in the centre. As they neared the exit after firing, they cooled gradually.

The atmosphere in the kiln at the time of firing, as well as the composition of the clay body, determines the colour of the fired earthenware pot. Iron is ubiquitous in earthenware clay, and under the usual firing conditions it oxidizes, giving a colour ranging from buff to dark red according to the amount present. In a reducing atmosphere (i.e., one where a limited supply of air causes the presence of carbon monoxide) the iron gives a colour varying from gray to black, although a dark colour may also occur as a result of the action of smoke. Both of the colours that result from iron in the clay can be seen in the black-topped vases of predynastic Egypt.
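As a reading aid, the colour rules described above (oxidizing atmosphere gives buff through dark red depending on iron content; reducing atmosphere gives gray to black) can be sketched as a small Python function. This is an illustrative toy only: the iron-content thresholds are invented for demonstration, not taken from ceramics practice.

```python
# Illustrative sketch of the firing-colour rules described in the text.
# The iron-content thresholds are hypothetical; real results depend on
# clay composition, firing temperature, and smoke.

def fired_colour(atmosphere: str, iron_fraction: float) -> str:
    """Rough colour prediction for an iron-bearing earthenware clay."""
    if atmosphere == "oxidizing":
        # Oxidized iron gives buff through dark red, deepening with iron content.
        if iron_fraction < 0.02:
            return "buff"
        elif iron_fraction < 0.06:
            return "red"
        else:
            return "dark red"
    elif atmosphere == "reducing":
        # Limited air (carbon monoxide present): iron gives gray to black.
        return "gray to black"
    raise ValueError("atmosphere must be 'oxidizing' or 'reducing'")

print(fired_colour("oxidizing", 0.01))  # buff
print(fired_colour("reducing", 0.05))   # gray to black
```

Both outcomes can occur on a single pot, as in the black-topped predynastic Egyptian vases mentioned above, where different parts of the vessel experienced different atmospheres.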

Decorating processes and techniques

Even the earliest pottery was usually embellished in one way or another. One of the earliest methods of decoration was to make an impression in the raw clay. Finger marks were sometimes used, as well as impressions from rope (as in Japanese Jōmon ware) or from a beater bound with straw (used to shape the pot in conjunction with a pad held inside it). Basketwork patterns are found on pots molded over baskets and are sometimes imitated on pots made by other methods.

The addition of separately modeled decoration, known as applied ornament (or appliqué), such as knops (ornamental knobs) or the reliefs on Wedgwood jasperware, came somewhat later. The earliest known examples are found on Mediterranean pottery made at the beginning of the 1st millennium. Raised designs are also produced by pressing out the wall of the vessel from inside, as in the Roman pottery known as terra sigillata, a technique that resembles the repoussé method adopted by metalworkers. Relief ornament was also executed—by the Etruscans, for example—by rolling a cylinder with the design recessed in intaglio over the soft clay, the principle being the same as that used to make Babylonian cylinder seals.

Incising, sgraffito, carving, and piercing

The earliest decoration was incised into the raw clay with a pointed stick or with the thumbnail, chevrons (inverted v’s) being a particularly common motif. Incised designs on a dark body were sometimes filled with lime, which effectively accents the decoration. Examples can be seen in some early work from Cyprus and in some comparatively modern work. Decoration engraved after firing is much less usual, but the skillful and accomplished engraving on one fine Egyptian pot of the predynastic period (i.e., before c. 3100 BCE) suggests that the practice may have been more frequent than was previously suspected.

Originally, defects of body colour suggested the use of slip, either white or coloured, as a wash over the vessel before firing. A common mode of decoration is to incise a pattern through the slip, revealing the differently coloured body beneath, a technique called sgraffito (“scratched”). Sgraffito ware was produced by Islamic potters and became common throughout the Middle East. The 18th-century scratched-blue class of English white stoneware is decorated with sgraffito patterns usually touched with blue.

Related to the sgraffito technique is slip carving: the clay body is covered with a thick coating of slip, which is carved out with a knife, leaving a raised design in slip (champlevé technique). Slip carving was done by Islamic and Chinese potters (Song dynasty).

Much pierced work—executed by piercing the thrown pot before firing—was done in China during the Ming dynasty (reign of Wanli). It was sometimes called “demon’s work” (guigong) because of the almost supernatural skill it was supposed to require. English white molded stoneware of the 18th century also has elaborate piercing.

Slip decorating

In addition to sgraffito and carving, slip can be used for painting, trailing, combing, and inlay. The earliest forms of decoration in ancient Egypt, for example, were animal and scenic motifs painted in white slip on a red body; and in the North American Indian cultures coloured slips provided the material for much of the painted freehand decoration.

Slip, too, is sometimes dotted and trailed in much the same way as a confectioner decorates a cake with icing sugar. The English slipwares of the 17th and 18th centuries are typical of this kind of work. Earthenware washed over with a white slip and covered with a colourless glaze is sometimes difficult to distinguish from ware covered with a tin glaze (see below Decorative glazing). In consequence it has sometimes been wrongly called faience. The term for French earthenware covered with a transparent glaze (in imitation of Wedgwood’s creamware) is faience fine, and in Germany it is called Steingut. Mezza-maiolica (Italy) and Halbfayence (Germany) refer to slip-covered earthenware with incised decoration.

Slip is also used for combed wares. The marbled effect on Chinese pottery of the Tang dynasty, for example, was sometimes achieved by mingling, with a comb, slips of contrasting colours after they had been put on the pot.

The Koreans used slip for their punch’ŏng (buncheong) inlay technique, which the Japanese called mishima. The designs were first incised into the clay, and the incisions were then filled with black and white slip.

Burnishing and polishing

When the clay used in early pottery was exceptionally fine, it was sometimes polished or burnished after firing. Such pottery has been excavated in Turkey (dating back to about 6500 BCE) and at the Banshan cemetery in Gansu province, China (about 2000 BCE). Most Inca pottery is red polished ware.

Decorative glazing

Early fired earthenware vessels held water, but, because these vessels were still slightly porous, the liquid percolated slowly to the outside, where it evaporated, cooling the contents of the vessel. Thus, the porosity of earthenware was, and still is, sometimes an advantage in hot countries, and the principle still is utilized in the 21st century in the construction of domestic milk and butter coolers and some food-storage cupboards.

Porosity, however, had many disadvantages; e.g., the vessels could not be used for storing wine or milk. To overcome the porosity, some peoples applied varnishes of one kind or another. Varnished pots were made, for example, in Fiji. The more advanced technique is glazing. The fired object was covered with a finely ground glass powder often suspended in water and was then fired again. During the firing the fine particles covering the surface fused into an amorphous, glasslike layer, sealing the pores of the clay body.

The art of glazing earthenware for decorative as well as practical purposes followed speedily upon its introduction. On stoneware, hard porcelain, and some soft porcelain, which are fired to the point of vitrification and are therefore nonporous, glazing is used solely for decoration.

Except for tin-glazed wares (see below Painting), earthenware glaze was added to the biscuit clay body, which was then fired a second time at a lower temperature. Soft porcelain glaze was always applied in this way. Hard porcelain glaze was usually (and stoneware salt glaze, always) fired at the same time as the raw clay body at the same high temperature.

Basically, there are four principal kinds of glazes: feldspathic, lead, tin, and salt. (Modern technology has produced new glazes that fall into none of these categories while remaining a type of glass.) Feldspathic, lead, and salt glazes are transparent; tin glaze is an opaque white. Hard porcelain takes a feldspathic glaze, soft porcelain usually a kind of lead glaze; glazed wares can thus be classified according to the kind of glaze used.

There are two main types of glazed earthenware: the one is covered with a transparent lead glaze, and the other with an opaque white tin glaze.

Tin glaze was no doubt employed in the first place to hide faults of colour in the body, for most clays contain a variable amount of iron that colours the body from buff to dark red. Tin-glazed wares look somewhat as though they have been covered with thick white paint. These wares are often referred to as “tin-enameled.” As noted above, other terms in common use are maiolica, faience, and delft. Unfortunately, these are variously defined by various authorities.

The art of tin-glazing was discovered by the Assyrians, who used it to cover courses of decorated brickwork. It was revived in Mesopotamia about the 9th century CE and spread to Moorish Spain, whence it was conveyed to Italy by way of the island of Majorca, or Majolica. In Italy, tin-glazed earthenware was called majolica after the place where it was mistakenly thought to have originated. The wares of Italy, particularly those of Faenza, were much prized abroad, and early in the 16th century the technique was imitated in southern France. The term faience, which is applied to French tin-glazed ware, is undoubtedly derived from Faenza. Wares made in Germany, Spain, and Scandinavia are known by the same name.

Early in the 17th century a flourishing industry for the manufacture of tin-glazed ware was established at the town of Delft, the Netherlands, and Dutch potters brought the art of tin-glazing to England together with the name of delft, which now applies to ware manufactured in the Netherlands and England. Some misleading uses of these terms include that of applying majolica to wares made outside Italy but in the Italian style, and faience to Egyptian blue-glazed ware and certain kinds of Middle Eastern earthenware.

Although glazed stoneware does not fall into such definite categories as glazed earthenware, to some extent it can be classified according to the kind of glaze used. The fine Chinese stonewares of the Song dynasty (960–1279 CE) were covered with a glaze made from feldspar, the same vitrifiable material later used in both the body and glaze of porcelain. Stoneware covered with a lead glaze is sometimes seen, but perhaps the majority of extant glazed wares are salt-glazed. In this process a shovelful of common salt (sodium chloride) is thrown into the kiln when the temperature reaches its maximum. The salt splits into its components, the sodium combining with the silica in the clay to form a smear glaze of sodium silicate, the chlorine escaping through the kiln chimney. Salt glazes have a pitted appearance similar to that of orange peel. A little red lead is sometimes added to the salt, which gives the surface an appearance of being glazed by the more usual means.

Some fusion usually occurs between glaze and body, and it is therefore essential that both should shrink by the same proportion and at the same rate on cooling. If there is a discrepancy, the glaze will either develop a network of fine cracks or will peel off altogether. This crazing of the glaze was sometimes deliberately induced as a decorative device by the Chinese.

One method of applying colour to pottery is to add colouring oxides to the glaze itself. Coloured glazes have been widely used on earthenware, stoneware, and porcelain and have led to the development of special techniques in which patterns were incised, or outlined with clay threads (cloisonné technique), so that differently coloured glazes could be used in the same design without intermingling; for example, in the lakabi wares of the Middle East.

Earthenware, stoneware, and porcelain are all found in unglazed as well as glazed forms. Wares fired without a glaze are called biscuit. Early earthenware pottery, as discussed above, was unglazed and therefore slightly porous. Of the unglazed stonewares, the most familiar are the Chinese Ming dynasty teapots and similar wares from Yixing in Jiangsu province, the red stoneware body made at Meissen in Saxony during the first three decades of the 18th century and revived in modern times, and the ornamental basaltes and jaspers made by Josiah Wedgwood and Sons since the 18th century. Biscuit porcelain was introduced in Europe in the 18th century. It was largely confined to figures, most of which were made at the French factories of Vincennes and Sèvres. Unglazed porcelain must be perfect, for the flaws cannot be concealed with glaze or enamel. The fashion for porcelain biscuit was revived in the 19th century and called Parian ware.

Painting

Painted designs are an early development, some remarkably fine work made before 3000 BCE coming from excavations at Ur and elsewhere in Mesopotamia, as well as urns from Banshan in Gansu that date back to 2000 BCE.

The earliest pottery colours appear to have been achieved by using slips stained with various metallic oxides (see above Slip decorating). At first these were undoubtedly oxides that occurred naturally in the clay; later they were added from other sources. Until the 19th century, when pottery colours began to be manufactured on an industrial scale, the oxides commonly used were those of tin, cobalt, copper, iron, manganese, and antimony.

Tin oxide supplied a useful white, which was also used in making tin glaze (see above Decorative glazing) and occasionally for painting. Cobalt blue, ranging in colour from grayish blue to pure sapphire, was widely used in East Asia and Europe for blue-and-white porcelain wares. Cupric oxide gives a distinctive series of blues, cuprous oxide a series of greens, and, in the presence of an excess of carbon monoxide (which the Chinese achieved by throwing wet wood into the kiln), cupric oxide yields a bluish red. This particular colour is known as reduced copper, and the kiln is said to have a reducing atmosphere. (For the effect of this atmosphere on the colour of the biscuit body, see above Drying, turning, and firing.)

The colours obtained from ferric iron range from pale yellow to black, the most important being a slightly orange red, referred to as iron red. Ferrous iron yields a green that can be seen at its best on Chinese celadon wares. Manganese gives colours varying from a bright red purple similar to permanganate of potash to a dark purplish brown that can be almost black. The aubergine purple of the Chinese was derived from this oxide. Antimony provides an excellent yellow.
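The oxide-to-colour correspondences above condense naturally into a small lookup table. The sketch below is a reading aid only; the groupings and names are simplified from the text, not a technical reference.

```python
# Traditional pottery colours by metallic oxide, condensed from the text above.
# Groupings are simplified; actual hues depend on glaze, temperature, and
# kiln atmosphere (e.g. reduced copper gives bluish red rather than blue).
OXIDE_COLOURS = {
    "tin": ["white"],
    "cobalt": ["grayish blue to sapphire"],
    "copper (cupric)": ["blues", "bluish red when reduced"],
    "copper (cuprous)": ["greens"],
    "iron (ferric)": ["pale yellow to black", "iron red"],
    "iron (ferrous)": ["celadon green"],
    "manganese": ["red purple to purplish brown", "aubergine"],
    "antimony": ["yellow"],
}

for oxide, colours in OXIDE_COLOURS.items():
    print(f"{oxide}: {', '.join(colours)}")
```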

Pottery colours are used in two ways—under the glaze or over it. Overglaze painting is executed on a fired clay body covered with a fired glaze; underglaze painting, on a fired, unglazed body (which includes a body that has been coated with raw, or unfired, glaze material).

Earthenware and stoneware are usually decorated with underglaze colours. After the body is manipulated into the desired shape it is fired. It is then painted, coated with glaze, and fired again. The second firing is at a lower temperature than the first, being just sufficient to fuse the glaze. In the case of most tin-glazed wares the fired object was first coated with the tin glaze, then painted, then fired again. The painting needed exceptional skill, since it was executed on the raw glaze and erasures were impossible. The addition of a transparent lead glaze over the painted decoration needed a third firing. In 18th-century Germany especially, tin-glazed wares were decorated with colours applied over the fired glaze, as on porcelain. The wares were sometimes called Fayence-Porcellaine.

The body and glaze of most hard porcelain are fired in one operation, since the fusion temperature of body and glaze is roughly the same. Underglaze colours are limited because they must be fired at the same temperature as the body and glaze, which is so high that many colours would “fire away” and disappear. Although the Chinese made some use of copper red, underglaze painting on porcelain is more or less limited to cobalt blue, an extremely stable and reliable colour that yields satisfactory results under both high- and low-temperature firings. On soft porcelain, manganese was sometimes used under the glaze, but examples are rare. All other porcelain colours were painted over the fired glaze and fixed by a second firing that is much lower than the first.

Underglaze pigments are known as high-temperature colours, or colours of the grand feu. Similarly, overglaze colours are known as low-temperature colours, or colours of the petit feu. Other terms for overglaze colours are enamel colours and muffle colours, the latter name being derived from the type of kiln, known as a muffle kiln, in which they are fired. Overglaze colours consist of pigments mixed with glaze material suspended in a medium, such as gum arabic, with an alkaline flux added to lower the melting point below that of the glaze. They were first used in Persia on earthenware (minai painting) in the 12th century and perhaps at the same date on Chinese stoneware made at Cizhou.

Lustre decoration is carried out by applying a colloid suspension of finely powdered gold, silver, platinum, or copper to the glazed and fired object. On a further, gentle firing, gold yields a purplish colour, silver a pale straw colour, platinum retains its natural hue, and copper varies from lemonish yellow to gold and rich brown. Lustre painting was invented by early Islamic potters.

Pottery may be gilded or silvered. The earliest gilding was done with gold mixed with an oil base. The use of gold ground in honey may be seen on the finest porcelain from Sèvres during the 18th century, as well as on that from Chelsea. Toward the end of the same century, gold was applied as an amalgam, the mercury subsequently being volatilized by heating. Silver was used occasionally for the same purposes as gold but with time has nearly always turned black through oxidation.

Transfer printing

The transfer print made from a copper plate was first used in England in the 18th century. In the 20th century transfers from copper plates were in common use for commercial wares, as were lithographic and other processes, such as silkscreen printing, which consists of rubbing the colour through a patterned screen of textile material. Combinations of hand-painted and transfer decorations were often used. The outline or other part of the decoration was applied with a transfer print, then parts of the design, such as leaves, flowers, clothing, or water, were painted in.

Marking

Most porcelain and much earthenware bears marks or devices for the purpose of identification. Stonewares, apart from those of Wedgwood, are not so often marked. Chinese porcelain marks usually record the dynasty and the name of an emperor, but great caution is necessary before accepting them at their face value. In the past Chinese vendors frequently used the mark of an earlier reign as a sign of veneration for the products of antiquity and occasionally for financial gain.

The majority of European factories adopted a device—for example, the well-known crossed swords of Meissen taken from the electoral arms of Saxony, or the royal monogram on Sèvres porcelain—but these, also, cannot be regarded as a guarantee of authenticity. Not only are false marks added to contemporary forgeries but the smaller 18th-century factories often copied the marks of their more august competitors. If 18th-century European porcelain is signed with the artist’s name, it generally means that the painting was done outside the factory. Permission to sign factory work was rarely given.

On earthenware, a factory mark is much less usual than on porcelain. Workmen’s marks of one kind or another are frequently seen, but signatures are rare. There are a few on Greek vases.

It is often desirable to identify the provenance and the date of manufacture of specimens of pottery as closely as possible. Not only does such information add to the interest of the specimen in question and increase understanding of the pottery art as a whole but it also often throws fresh light on historical questions or the social habits and technical skills of the time it was made. Since ceramics are not affected by any of the agents that attack metal, wood, or textiles, they are often found virtually unchanged after being buried for thousands of years, while other artifacts from the same period are partially or completely destroyed. For this reason archaeologists use pottery extensively—for example, to trace contacts between peoples, since vessels were often widely distributed in course of trade, either by the people who made them or by such maritime nations as the Phoenicians.

Pottery making is not universal. It is rarely found among nomadic tribes, since potters must live within reach of their raw materials. Moreover, if there are gourds, skins, and similar natural materials that can be made into vessels without trouble, there is no incentive to make pottery. Yet pottery making is one of the most widespread and oldest of the crafts.

It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

## #1845 2023-07-23 14:14:12

Jai Ganesh
Registered: 2005-06-28
Posts: 47,788

### Re: Miscellany

1848) Knife

Gist

A knife is a tool, usually with a metal blade and a handle, used for cutting and spreading food or other substances, or as a weapon.

Summary

A knife is a tool or implement for cutting, the blade being either fixed to the handle or fastened with a hinge so as to clasp into it. Knives form the largest class of cutting implements known collectively as cutlery.

Cutting tools and weapons used for hunting and defense were first made from stones and flint and later of bronze and iron. The Romans taught the early Britons to work iron, and the Norman invaders are said to have brought with them smiths and metalworkers. Steel-bladed eating knives dating from the Romano-British period have been excavated, but extremely few fine medieval knives with handles of precious or semiprecious material have survived; cleaning and grinding wore away the blades. Some of the early knives and weapons became famous for their perfection, among them the skilfully produced Toledo and Damascus blades.

In Europe prior to the 17th century, only in the houses of the wealthy were there enough cutlery sets for knives to be offered to guests. Men typically carried a personal knife in a sheath attached to their belts or in a compartment on their sword scabbards; women wore theirs attached to the girdle. By the later 17th century, household services of silver cutlery were sufficient to provide for guests. Although individual knives were no longer carried, a service including a knife, fork, spoon, and beaker was indispensable to the traveler, and such sets were made until well into the 19th century. The characteristic 18th-century table knife has a pistol-shaped handle in which is mounted a curved blade of so-called “scimitar” form. With the modern stainless steel table knife, standard patterns have evolved in which practical needs and durability are the first considerations.

Details

A knife is a tool or weapon with a cutting edge or blade, usually attached to a handle or hilt. One of the earliest tools used by humanity, knives appeared at least 2.5 million years ago, as evidenced by the Oldowan tools. Originally made of wood, bone, and stone (such as flint and obsidian), over the centuries, in step with improvements in both metallurgy and manufacturing, knife blades have been made from copper, bronze, iron, steel, ceramic, and titanium. Most modern knives have either fixed or folding blades; blade patterns and styles vary by maker and country of origin.

Knives can serve various purposes. Hunters use a hunting knife, soldiers use the combat knife, and scouts, campers, and hikers carry a pocketknife. There are kitchen knives for preparing foods (the chef's knife, the paring knife, bread knife, cleaver), table knives (butter knives and steak knives), weapons (daggers or switchblades), knives for throwing or juggling, and knives for religious ceremony or display (the kirpan).

Parts

A modern knife consists of:

the handle
the point – the end of the knife used for piercing
the edge – the cutting surface of the knife extending from the point to the heel
the grind – the cross section shape of the blade
the spine – the thickest section of the blade; on a single-edged knife, the side opposite the edge; on a two-edged knife, more toward the middle
the ricasso – the flat section of the blade located at the junction of the blade and the knife's bolster or guard
the guard – the barrier between the blade and the handle which prevents the hand from slipping forward onto the blade and protects the hand from the external forces that are usually applied to the blade during use
the hilt or butt – the end of the handle used for blunt force
the lanyard – a strap used to secure the knife to the wrist

The blade edge can be plain or serrated, or a combination of both. Single-edged knives may have a reverse edge or false edge occupying a section of the spine. These edges are usually serrated and are used to further enhance function.

The handle, used to grip and manipulate the blade safely, may include a tang, a portion of the blade that extends into the handle. Knives are made with partial tangs (extending part way into the handle, known as "stick tangs") or full tangs (extending the full length of the handle, often visible on top and bottom). There is also the enterçado construction method present in antique knives from Brazil, such as the Sorocaban Knife, which consists of riveting a repurposed blade to the ricasso of a bladeless handle. The handle may include a bolster, a piece of heavy material (usually metal) situated at the front or rear of the handle. The bolster, as its name suggests, is used to mechanically strengthen the knife.

Knife blades can be manufactured from a variety of materials, each of which has advantages and disadvantages. Carbon steel, an alloy of iron and carbon, can be very sharp. It holds its edge well, and remains easy to sharpen, but is vulnerable to rust and stains. Stainless steel is an alloy of iron, chromium, possibly nickel, and molybdenum, with only a small amount of carbon. It is not able to take quite as sharp an edge as carbon steel, but is highly resistant to corrosion. High carbon stainless steel is stainless steel with a higher amount of carbon, intended to incorporate the better attributes of carbon steel and stainless steel. High carbon stainless steel blades do not discolor or stain, and maintain a sharp edge. Laminated blades use multiple metals to create a layered sandwich, combining the attributes of both. For example, a harder, more brittle steel may be sandwiched between an outer layer of softer, tougher, stainless steel to reduce vulnerability to corrosion. In this case, however, the part most affected by corrosion, the edge, is still vulnerable. Damascus steel is a form of pattern welding with similarities to laminate construction. Layers of different steel types are welded together, but then the stock is manipulated to create patterns in the steel.

Titanium is a metal that has a better strength-to-weight ratio, is more wear resistant, and more flexible than steel. Although titanium is less hard and unable to take as sharp an edge, carbides in titanium alloys allow the blades to be heat-treated to a sufficient hardness. Ceramic blades are hard, brittle, and lightweight: they may maintain a sharp edge for years with no maintenance at all, but are as fragile as glass and will break if dropped on a hard surface. They are immune to common corrosion, and can only be sharpened on silicon carbide sandpaper and some grinding wheels. Plastic blades are not especially sharp and are typically serrated. They are often disposable.

Steel blades are commonly shaped by forging or stock removal. Forged blades are made by heating a single piece of steel, then shaping the metal while hot using a hammer or press. Stock removal blades are shaped by grinding and removing metal. With both methods, after shaping, the steel must be heat treated. This involves heating the steel above its critical point, then quenching the blade to harden it. After hardening, the blade is tempered to remove stresses and make the blade tougher. Mass manufactured kitchen cutlery uses both the forging and stock removal processes. Forging tends to be reserved for manufacturers' more expensive product lines, and can often be distinguished from stock removal product lines by the presence of an integral bolster, though integral bolsters can be crafted through either shaping method.
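The heat-treatment sequence described above (heat past the critical point, quench to harden, temper to toughen) can be set out as an ordered checklist. In this sketch the example temperatures are hypothetical placeholders for illustration, not real recipes; actual values depend on the specific steel alloy.

```python
# Illustrative outline of the blade heat-treatment sequence from the text.
# Example temperatures are invented placeholders, not metallurgical recipes.

HEAT_TREATMENT_STEPS = [
    ("heat", "raise the steel above its critical point (placeholder: ~800 °C)"),
    ("quench", "cool the blade rapidly to harden it"),
    ("temper", "reheat at a lower temperature (placeholder: ~200 °C) "
               "to relieve stresses and toughen the blade"),
]

def describe_heat_treatment(steps):
    """Render the ordered steps as numbered lines."""
    return [f"{i}. {name}: {detail}" for i, (name, detail) in enumerate(steps, 1)]

for line in describe_heat_treatment(HEAT_TREATMENT_STEPS):
    print(line)
```

The ordering matters: quenching before the steel has passed its critical point would not harden it, and skipping the temper leaves a hard but brittle blade, which is why the text presents the three stages as a fixed sequence.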

Many knives have holes in the blade for various uses. Holes are commonly drilled in blades to reduce friction while cutting, increase single-handed usability of pocket knives, and, for butchers' knives, allow hanging out of the way when not in use.

A fixed blade knife, sometimes called a sheath knife, does not fold or slide, and is typically stronger due to the tang, the extension of the blade into the handle, and lack of moving parts.

A folding knife connects the blade to the handle through a pivot, allowing the blade to fold into the handle. To prevent injury to the knife user through the blade accidentally closing on the user's hand, folding knives typically have a locking mechanism. Different locking mechanisms are favored by various individuals for reasons such as perceived strength (lock safety), legality, and ease of use. Popular locking mechanisms include:

Slip joint – Found most commonly on traditional pocket knives, the opened blade does not lock, but is held in place by a spring device that allows the blade to fold if a certain amount of pressure is applied.
Lockback – Also known as the spine lock, the lockback includes a pivoted latch affixed to a spring, and can be disengaged only by pressing the latch down to release the blade.
Linerlock – Invented by Michael Walker, a Linerlock is a folding knife with a side-spring lock that can be opened and closed with one hand without repositioning the knife in the hand. The lock is self-adjusting for wear.
Compression Lock – A variant of the Liner Lock, it uses a small piece of metal at the tip of the lock to lock into a small corresponding impression in the blade. This creates a lock that does not disengage when the blade is torqued; instead, it locks more tightly. It is released by pressing the tab of metal to the side, allowing the blade to be folded into its groove in the handle.
Frame Lock – Also known as the integral lock or monolock, this locking mechanism was invented by a custom knifemaker Chris Reeve for the Sebenza as an update to the liner lock. The frame lock works in a manner similar to the liner lock but uses a partial cutout of the actual knife handle, rather than a separate liner inside the handle to hold the blade in place.
Collar lock – found on Opinel knives.
Button Lock – Found mainly on automatic knives, this type of lock uses a small push-button to open and release the knife.
Axis Lock – A locking mechanism patented by Benchmade Knife Company until 2020. A cylindrical bearing is tensioned such that it will jump between the knife blade and some feature of the handle to lock the blade open.
Arc Lock – A locking mechanism exclusively licensed to SOG Specialty Knives. It differs from an axis lock in that the cylindrical bearing is tensioned by a rotary spring rather than an axial spring.
Ball Bearing Lock – A locking mechanism exclusively licensed to Spyderco. This lock is conceptually similar to the axis and arc locks but the bearing is instead a ball bearing.
Tri-Ad Lock – A locking mechanism exclusively licensed to Cold Steel. It is a form of lockback which incorporates a thick steel stop pin between the front of the latch and the back of the tang to transfer force from the blade into the handle.
PickLock – A round post on the back base of the blade locks into a hole in a spring tab in the handle. To close, manually lift (pick) the spring tab (lock) off the blade post with your fingers, or in "Italian Style Stilettos" swivel the bolster (hand guard) clockwise to lift the spring tab off the blade post.

Another prominent feature of many folding knives is the opening mechanism. Traditional pocket knives and Swiss Army knives commonly employ the nail nick, while modern folding knives more often use a stud, hole, disk, or flipper located on the blade, all of which have the benefit of allowing the user to open the knife with one hand.

The "wave" feature is another prominent design, which uses a part of the blade that protrudes outward to catch on one's pocket as it is drawn, thus opening the blade; this was patented by Ernest Emerson and is not only used on many of the Emerson knives, but also on knives produced by several other manufacturers, notably Spyderco and Cold Steel.

Automatic or switchblade knives open using the stored energy from a spring that is released when the user presses a button or lever or other actuator built into the handle of the knife. Automatic knives are severely restricted by law in the UK and most American states.

Increasingly common are assisted opening knives which use springs to propel the blade once the user has moved it past a certain angle. These differ from automatic or switchblade knives in that the blade is not released by means of a button or catch on the handle; rather, the blade itself is the actuator. Most assisted openers use flippers as their opening mechanism. Assisted opening knives can be as fast as, or faster than, automatic knives to deploy.

Common locking mechanisms

In the lock back, as in many folding knives, a stop pin acting on the top of (or behind) the blade prevents it from rotating clockwise. A hook on the tang of the blade engages a hook on the rocker bar, which prevents the blade from rotating counter-clockwise. The rocker bar is held in position by a torsion bar. To release the knife, the rocker bar is pushed downwards and pivots around the rocker pin, lifting the hook and freeing the blade.

When negative pressure (pushing down on the spine) is applied to the blade, all the stress is transferred from the hook on the blade's tang to the hook on the rocker bar and thence to the small rocker pin. Excessive stress can shear one or both of these hooks, rendering the knife effectively useless. Knife company Cold Steel uses a variant of the lock back called the Tri-Ad Lock, which introduces a pin in front of the rocker bar to relieve stress on the rocker pin, has an elongated hole around the rocker pin so the mechanism can wear over time without losing strength, and angles the hooks so that the faces no longer meet vertically.

The bolt in the bolt lock is a rectangle of metal that is constrained to slide only backward and forward. When the knife is open, a spring biases the bolt to the forward position, where it rests above the tang of the blade, preventing the blade from closing. Small knobs extend through the handle of the knife on both sides, allowing the user to slide the bolt backward, freeing the knife to close. The Axis Lock used by knife maker Benchmade is functionally identical to the bolt lock except that it uses a cylinder rather than a rectangle to trap the blade. The Arc Lock by knife maker SOG is similar to the Axis Lock except the cylinder follows a curved path rather than a straight path.

In the liner lock, an "L"-shaped split in the liner allows part of the liner to move sideways from its resting position against the handle to the centre of the knife, where it rests against the flat end of the tang. To disengage, this leaf spring is pushed so it again rests flush against the handle, allowing the knife to rotate. A frame lock is functionally identical but, instead of using a thin liner inside the handle material, uses a thicker piece of metal as the handle; the same split in it allows a section of the frame to press against the tang.

A sliding knife is a knife that can be opened by sliding the knife blade out the front of the handle. One method of opening is where the blade exits out the front of the handle point-first and then is locked into place (an example of this is the gravity knife). Another form is an OTF (out-the-front) switchblade, which only requires the push of a button or spring to cause the blade to slide out of the handle and lock into place. To retract the blade back into the handle, a release lever or button, usually the same control as to open, is pressed. A very common form of sliding knife is the sliding utility knife (commonly known as a Stanley knife or boxcutter).

Handle

The handles of knives can be made from a number of different materials, each of which has advantages and disadvantages. Handles are produced in a wide variety of shapes and styles. Handles are often textured to enhance grip.

* Wood handles provide good grip and are warm in the hand, but are more difficult to care for. They do not resist water well, and will crack or warp with prolonged exposure to water. Modern stabilized and laminated woods have largely overcome these problems. Many beautiful and exotic hardwoods are employed in the manufacture of custom and some production knives. In some countries it is now forbidden for commercial butchers' knives to have wood handles, for sanitary reasons.
* Plastic handles are more easily cared for than wooden handles, but can be slippery and become brittle over time.
* Injection molded handles made from higher grade plastics are composed of polyphthalamide, and when marketed under trademarked names such as Zytel or Grivory, are reinforced with Kevlar or fiberglass. These are often used by major knife manufacturers.
* Rubber handles such as Kraton or Resiprene-C are generally preferred over plastic due to their durable and cushioning nature.
* Micarta is a popular handle material on user knives due to its toughness and stability. Micarta is nearly impervious to water, is grippy when wet, and is an excellent insulator. Micarta has come to refer to any fibrous material cast in resin. There are many varieties of micarta available. One very popular version is a fiberglass impregnated resin called G-10.
* Leather handles are seen on some hunting and military knives, notably the KA-BAR. Leather handles are typically produced by stacking leather washers, or less commonly, as a sleeve surrounding another handle material. Russian manufacturers often use birchbark in the same manner.
* Skeleton handles refers to the practice of using the tang itself as the handle, usually with sections of material removed to reduce weight. Skeleton handled knives are often wrapped with parachute cord or other wrapping materials to enhance grip.
* Stainless steel and aluminum handles are durable and sanitary, but can be slippery. To counter this, premium knife makers make handles with ridges, bumps, or indentations to provide extra grip. Another problem with knives that have metal handles is that, since metal is an excellent heat-conductor, these knives can be very uncomfortable, and even painful or dangerous, when handled without gloves or other protective handwear in (very) cold climates.

More exotic materials, usually seen only on art or ceremonial knives, include: stone, bone, mammoth tooth, mammoth ivory, oosik (walrus baculum), walrus tusk, antler (often called stag in a knife context), sheep horn, buffalo horn, teeth, and MOP (mother of pearl or "pearl"). Many materials have been employed in knife handles.

Handles may be adapted to accommodate the needs of people with disabilities. For example, knife handles may be made thicker or with more cushioning for people with arthritis in their hands. A non-slip handle accommodates people with palmar hyperhidrosis.

Types:

Weapons

As a weapon, the knife has been universally adopted, and it is the essential element of a knife fight. Examples include:

Ballistic knife: A specialized combat knife with a detachable gas- or spring-propelled blade that can be fired to a distance of several feet or meters by pressing a trigger or switch on the handle.
Bayonet: A knife-shaped close-quarters combat weapon designed to attach to the muzzle of a rifle or similar weapon.
Butterfly knife: A folding pocket knife also known as a "balisong" or "batangas" with two counter-rotating handles where the blade is concealed within grooves in the handles.
Combat knife: Any knife intended to be used by soldiers in the field, as a general-use tool, but also for fighting.
Dagger: A single-edged or double-edged combat knife with a central spine and edge(s) sharpened along their full length, used primarily for thrusting or stabbing. Variations include the stiletto and push dagger.
Fighting knife: A knife with a blade designed to inflict a lethal injury in a physical confrontation between two or more individuals at very short range (grappling distance). Well known examples include the Bowie knife, Ka-Bar combat knife, and the Fairbairn–Sykes fighting knife.
Genoese knife: produced from the 12th century with a guardless handle
Karambit: A knife with a curved blade resembling a tiger's claw, and a handle with one or two safety holes.
Rampuri: An Indian gravity knife having a single-edged blade roughly 9 to 12 inches (23 to 30 cm) long.
Shiv: A crudely made homemade knife out of everyday materials, especially prevalent in prisons among inmates. An alternate name in some prisons is shank.
Sword: An evolution of the knife with a lengthened and strengthened blade used primarily for mêlée combat and hunting.
Throwing knife: A knife designed and weighted for throwing.
Trench knife: Purpose-made or improvised knives, intended for close-quarter fighting, particularly in trench warfare; some have a d-shaped integral hand guard.

Utensils

A primary aspect of the knife as a tool includes dining, used either in food preparation or as cutlery. Examples of this include:

Boning knife: A knife used for removing the bones of poultry, meat, and fish.
Butcher's Knife: A knife designed and used primarily for the butchering and/or dressing of animals.
Carving knife: A knife for carving large cooked meats such as poultry, roasts, and hams.
Canelle or Channel knife: The notch of the blade is used to cut a twist from a citrus fruit, usually in the preparation of cocktails.
Chef's knife: Also known as a French knife, a cutting tool used in preparing food
Cleaver: A large knife that varies in its shape but usually resembles a rectangular-bladed hatchet. It is used mostly for hacking through bones as a kitchen knife or butcher knife, and its broad side can also be used for crushing foods such as garlic.
Electric knife: An electrical device consisting of two serrated blades that are clipped together, providing a sawing action when powered on
Kitchen knife: Any knife, including the chef's knife, that is intended to be used in food preparation
Oyster knife: Has a short, thick blade for prying open oyster shells
Mezzaluna: A two-handled arc-shaped knife used in a rocking motion as an herb chopper or for cutting other foods
Paring or Coring Knife: A knife with a small but sharp blade used for cutting out the cores from fruit.
Rocker knife: A knife that cuts with a rocking motion, used primarily by people whose disabilities prevent them from using a fork and knife simultaneously.
Table knife or Case knife: A piece of cutlery, either a butter knife, steak knife, or both, that is part of a table setting, accompanying the fork and spoon

Tools

As a utility tool, the knife can take many forms, including:

Bowie knife: Commonly, any large sheath knife, or a specific style of large knife popularized by Jim Bowie.
Bushcraft knife: A sturdy, normally fixed blade knife used while camping in the wilderness.
Camping knife: A camping knife is used for camping and survival purposes in a wilderness environment.
Head knife or Round knife: A knife with a semicircular blade used since antiquity to cut leather.
Crooked knife: Sometimes referred to as a "curved knife", "carving knife" or in the Algonquian language the "mocotaugan" is a utilitarian knife used for carving.
Diver's knife: A knife adapted for use in diving and water sports and a necessary part of standard diving dress.
Electrician's knife: A short-bladed knife used to cut electrical insulation. Also, a folding knife with a large screwdriver as well as a blade. Typically the screwdriver locks, but the blade may not lock.
Folding knife: A folding knife is a knife with one or more blades that fit inside the handle that can still fit in a pocket. It is also known as a jackknife or jack-knife.
Hunting knife: A knife used to dress large game.
Kiridashi: A small Japanese knife having a chisel grind and a sharp point, used as a general-purpose utility knife.
Linoleum knife: A small knife with a short, stiff blade with a curved point, used to cut linoleum or other sheet materials.
Machete: A large heavy knife used to cut through thick vegetation such as sugar cane or jungle undergrowth; it may be used as an offensive weapon.
Marking knife: A woodworking tool used for marking out workpieces.
Palette knife: A knife, or frosting spatula, lacking a cutting edge, used by artists for tasks such as mixing and applying paint and in cooking for spreading icing.
Paper knife: Also called a "letter opener", a knife made of metal or plastic, used for opening mail.
Pocketknife: a folding knife designed to be carried in a pants pocket. Subtypes include:
Lockback knife: a folding knife with a mechanism that locks the blade into the open position, preventing accidental closure while in use
Multi-tool and Swiss Army knife, which combine a folding knife blade with other tools and implements, such as pliers, scissors, or screwdrivers
Produce knife: A knife with a rectangular profile and a blunt front edge used by grocers to cut produce.
Rigging knife: A knife used to cut rigging in sailing vessels.
Scalpel: A medical knife, used to perform surgery.
Straight razor: A reusable knife blade used for shaving hair.
Survival knife: A sturdy knife, sometimes with a hollow handle filled with survival equipment.
Switchblade: A knife with a folding blade that springs out of the grip when a button or lever on the grip is pressed.
Utility knife: A short knife with a replaceable (typically) triangular blade, used for cutting sheet materials including card stock, paperboard, and corrugated fiberboard, also called a boxcutter knife or boxcutter
Wood carving knife and whittling knives: Knives used to shape wood in the arts of wood carving and whittling, often with short, thin replaceable blades for better control.
Craft knife: A scalpel-like form of non-retractable utility knife with a (typically) long handle and a replaceable pointed blade, used for precise, clean cutting in arts and crafts, often called an X-acto knife in the US and Canada after the popular brand name.

Traditional and religious knives

Athame: A typically black-handled and double-edged ritual knife used in Wicca and other derivative forms of Neopagan witchcraft.
Dirk: A long bladed thrusting dagger worn by Scottish Highlanders for customary and ceremonial purposes.
Katar: An Indian push dagger sometimes used ceremonially.
Kilaya: A dagger used in Tibetan Buddhist rituals.
Kirpan: A ceremonial knife that all baptised Sikhs must wear as one of the five visible symbols of the Sikh faith (Kakars)
Kris: A dagger used in Indo-Malay cultures, often by nobility and sometimes in religious rituals
Kukri: A Nepalese knife used as a tool and weapon
Maguro bōchō: A traditional Japanese knife with a long specialized blade that is used to fillet large ocean fish.
Puukko: A traditional Finnish style woodcraft belt-knife used as a tool rather than a weapon
Seax: A Germanic single-edged knife, dagger or short sword used both as a tool and as a weapon.
Sgian-dubh: A small knife traditionally worn with the Highland and Isle dress (Kilt) of Scotland.
Ulu: An Inuit woman's all-purpose knife.
Yakutian knife: A traditional Yakuts knife used as a tool for wood carving and meat or fish cutting. Can be used as a part of yakutian ethnic costume.

Rituals and superstitions

The knife plays a significant role in some cultures through ritual and superstition, as the knife was an essential tool for survival since early man. Knife symbols can be found in various cultures to symbolize all stages of life; for example, a knife placed under the bed while giving birth is said to ease the pain, or, stuck into the headboard of a cradle, to protect the baby; knives were included in some Anglo-Saxon burial rites, so the dead would not be defenseless in the next world. The knife plays an important role in some initiation rites, and many cultures perform rituals with a variety of knives, including the ceremonial sacrifices of animals. Samurai warriors, as part of bushido, could perform ritual suicide, or seppuku, with a tantō, a common Japanese knife. An athame, a ceremonial knife, is used in Wicca and derived forms of neopagan witchcraft.

In Greece, a black-handled knife placed under the pillow is used to keep away nightmares. As early as 1646, laying a knife across another piece of cutlery was recorded as a sign of witchcraft. A common belief is that if a knife is given as a gift, the relationship of the giver and recipient will be severed. Something such as a small coin, dove or a valuable item is exchanged for the gift, rendering "payment."

Legislation

Knives are typically restricted by law, because they are often used in crime, although restrictions vary greatly by country or state and type of knife. For example, some laws prohibit carrying knives in public while other laws prohibit private ownership of certain knives, such as switchblades.

It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

## #1846 2023-07-24 14:09:15

Jai Ganesh
Registered: 2005-06-28
Posts: 47,788

### Re: Miscellany

1849) Brine

Gist

Brine is salt water, particularly a highly concentrated water solution of common salt (sodium chloride). Natural brines occur underground, in salt lakes, or as seawater and are commercially important sources of common salt and other salts, such as chlorides and sulfates of magnesium and potassium.

Brine is used as a preservative in meat-packing (as in corned beef) and pickling. In refrigeration and cooling systems, brines are used as heat-transfer media because of their low freezing temperatures or as vapour-absorption agents because of their low vapour pressure. Brine is also used to quench (cool) steel.

Summary

Brine is a solution of salt in water. Salt content can vary, but brine includes salt solutions ranging from about 3.5% (typical concentration of seawater, or the lower end of solutions used for brining foods) up to about 26% (a typical saturated solution, depending on temperature).

Brine is commonly used in large refrigeration installations for the transport of thermal energy from place to place. In colder temperatures, brine can be used to de-ice or reduce freezing temperatures on roads. In cooking, brine is used for food brining and salting.

Pitting and crevice corrosion of metal can occur in brine.

Brine is a concentrated solution of salt in water. It can be any solution of a salt in water e.g., potassium chloride brine. Natural brines occur underground, in salt lakes, or as seawater and are commercially important sources of salts, such as chlorides and sulfates of magnesium and potassium.

Brine can be used in:

* Preservation of food
* Heat transfer
* Vapor absorption
* Quenching (cooling) of steel

At 100 °C (212 °F), saturated sodium chloride brine is about 28% salt by weight, whereas at 0 °C (32 °F) brine can hold only about 26% salt. The thermal conductivity of brine decreases with increasing salinity and increases with increasing temperature.
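These percentages are weight fractions: the mass of dissolved salt divided by the total mass of the solution. A minimal sketch of the arithmetic (the 36 g/100 g solubility figure used below is an assumed round value consistent with the ~26% saturated figure quoted above):

```python
def weight_percent(salt_g: float, water_g: float) -> float:
    """Salinity as percent by weight: salt mass over total solution mass."""
    return 100.0 * salt_g / (salt_g + water_g)

# Roughly 36 g of NaCl dissolves in 100 g of water near 0 degrees C,
# which corresponds to the ~26% saturated brine mentioned above.
print(round(weight_percent(36.0, 100.0), 1))  # 26.5
```

Note that the denominator is the whole solution, not the water alone; dividing by the water mass instead would overstate the salinity (36/100 = 36%).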

Brine solution is used as a cooling medium in steel heat exchangers. This can cause corrosion, which can be reduced by reducing temperature, changing the composition of brine and removing dissolved oxygen from brine. Brine corrosion inhibitor is a borate-organic corrosion inhibitor specially designed for use in chloride brine closed re-circulating cooling systems to inhibit corrosion.

Brine is known to corrode stainless steel, as is bleach. A strong brine, such as calcium chloride, is highly aggressive toward metals and alloys. Corrosion rates in brine solutions are higher than those in distilled water, while the rate and nature of the attack vary from one material to another.

Details

Brine (or briny water) is a high-concentration solution of salt (typically sodium chloride or calcium chloride) in water. In diverse contexts, brine may refer to salt solutions ranging from about 3.5% (a typical concentration of seawater, at the lower end of that of solutions used for brining foods) up to about 26% (a typical saturated solution, depending on temperature). Brine forms naturally due to evaporation of ground saline water, but it is also generated in the mining of sodium chloride. Brine is used for food processing and cooking (pickling and brining), for de-icing of roads and other structures, and in a number of technological processes. It is also a by-product of many industrial processes, such as desalination, so it requires wastewater treatment for proper disposal or further utilization (fresh-water recovery).

In nature

Brines are produced in multiple ways in nature. Modification of seawater via evaporation concentrates salts in the residual fluid; as different dissolved ions reach the saturation states of minerals, typically gypsum and halite, a characteristic geologic deposit called an evaporite is formed. Dissolution of such salt deposits into water can produce brines as well. As seawater freezes, dissolved ions tend to remain in solution, resulting in a fluid termed a cryogenic brine. At the time of formation, these cryogenic brines are by definition cooler than the freezing temperature of seawater and can produce a feature called a brinicle, where cool brines descend, freezing the surrounding seawater.

Brines cropping out at the surface as saltwater springs are known as "licks" or "salines". The content of dissolved solids in groundwater varies widely from one location to another on Earth, both in terms of specific constituents (e.g. halite, anhydrite, carbonates, gypsum, fluoride salts, organic halides, and sulfate salts) and in concentration. Under one of several classifications of groundwater based on total dissolved solids (TDS), brine is water containing more than 100,000 mg/L TDS. Brine is commonly produced during well completion operations, particularly after the hydraulic fracturing of a well.

Wastewater

Brine is a byproduct of many industrial processes, such as desalination, power plant cooling towers, produced water from oil and natural gas extraction, acid mine or acid rock drainage, reverse osmosis reject, chlor-alkali wastewater treatment, pulp and paper mill effluent, and waste streams from food and beverage processing. Along with diluted salts, it can contain residues of pretreatment and cleaning chemicals, their reaction byproducts and heavy metals due to corrosion.

Wastewater brine can pose a significant environmental hazard, both due to corrosive and sediment-forming effects of salts and toxicity of other chemicals diluted in it.

Unpolluted brine from desalination plants and cooling towers can be returned to the ocean. The desalination process produces reject brine, which poses potential harm to marine life and habitats. To limit the environmental impact, it can be diluted with another stream of water, such as the outfall of a wastewater treatment or power plant. Since brine is heavier than seawater and would accumulate on the ocean bottom, methods are required to ensure proper diffusion, such as installing underwater diffusers in the sewerage. Other methods include drying in evaporation ponds, injecting into deep wells, and storing and reusing the brine for irrigation, de-icing or dust-control purposes.

Technologies for treatment of polluted brine include: membrane filtration processes, such as reverse osmosis and forward osmosis; ion exchange processes such as electrodialysis or weak acid cation exchange; or evaporation processes, such as thermal brine concentrators and crystallizers employing mechanical vapour recompression and steam. New methods for membrane brine concentration, employing osmotically assisted reverse osmosis and related processes, are beginning to gain ground as part of zero liquid discharge systems (ZLD).

It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

## #1847 2023-07-25 14:21:43

Jai Ganesh
Registered: 2005-06-28
Posts: 47,788

### Re: Miscellany

1850) Amusement park

Gist

A large outdoor area with fairground rides, shows, and other entertainments.

Details

An amusement park is a park that features various attractions, such as rides and games, as well as other events for entertainment purposes. A theme park is a type of amusement park that bases its structures and attractions around a central theme, often featuring multiple areas with different themes. Unlike temporary and mobile funfairs and carnivals, amusement parks are stationary and built for long-lasting operation. They are more elaborate than city parks and playgrounds, usually providing attractions that cater to a variety of age groups. While amusement parks often contain themed areas, theme parks place a heavier focus with more intricately-designed themes that revolve around a particular subject or group of subjects.

Amusement parks evolved from European fairs, pleasure gardens, and large picnic areas, which were created for people's recreation. World's fairs and other types of international expositions also influenced the emergence of the amusement park industry. Lake Compounce opened in 1846 and is considered the oldest continuously operating amusement park in North America.

History:

Origins

The amusement park evolved from three earlier traditions: traveling or periodic fairs, pleasure gardens, and exhibitions such as world's fairs. The oldest influence was the periodic fair of the Middle Ages; one of the earliest was the Bartholomew Fair in England, held from 1133. By the 18th and 19th centuries, fairs had evolved into places of entertainment for the masses, where the public could view freak shows, acrobatics, conjuring and juggling, take part in competitions and walk through menageries.

A wave of innovation in the 1860s and 1870s created mechanical rides, such as the steam-powered carousel (built by Thomas Bradshaw, at the Aylsham Fair), and its derivatives, notably from Frederick Savage of King's Lynn, Norfolk whose fairground machinery was exported all over the world; his "galloping horses" innovation is seen in carousels today. This inaugurated the era of the modern funfair ride, as the working classes were increasingly able to spend their surplus wages on entertainment.

The second influence was the pleasure garden. An example of this is the world's oldest amusement park, Bakken ("The Hill"), which opened in 1583 and is located north of Copenhagen in Klampenborg, Denmark.

Another early garden was the Vauxhall Gardens, founded in 1661 in London. By the late 18th century, the site had an admission fee for its many attractions. It regularly drew enormous crowds, with its paths often noted for romantic assignations; tightrope walkers, hot air balloon ascents, concerts and fireworks providing amusement. Although the gardens were originally designed for the elites, they soon became places of great social diversity. Public firework displays were put on at Marylebone Gardens, and Cremorne Gardens offered music, dancing, and animal acrobatics displays.

Prater in Vienna, Austria, began as a royal hunting ground which was opened in 1766 for public enjoyment. There followed coffee-houses and cafés, which led to the beginnings of the Wurstelprater as an amusement park.

The concept of a fixed park for amusement was further developed with the beginning of the world's fairs. The first World fair began in 1851 with the construction of the landmark Crystal Palace in London, England. The purpose of the exposition was to celebrate the industrial achievement of the nations of the world and it was designed to educate and entertain the visitors.

American cities and businesses also saw the world's fair as a way of demonstrating economic and industrial success. The World's Columbian Exposition of 1893 in Chicago, Illinois was an early precursor to the modern amusement park. The fair was an enclosed site that merged entertainment, engineering and education to entertain the masses. It set out to bedazzle the visitors, and successfully did so with a blaze of lights from the "White City." To make sure that the fair was a financial success, the planners included a dedicated amusement concessions area called the Midway Plaisance. Rides from this fair captured the imagination of visitors and of amusement parks around the world; the first steel Ferris wheel, for instance, was found in many other amusement areas, including the Prater, by 1896. The experience of the enclosed ideal city, with wonder, rides, culture and progress (electricity), was also based on the creation of an illusory place.

The "midway" introduced at the Columbian Exposition would become a standard part of most amusement parks, fairs, carnivals, and circuses. The midway contained not only the rides, but other concessions and entertainments such as shooting galleries, penny arcades, games of chance, and shows.

Trolley parks and pleasure resorts

Many modern amusement parks evolved from earlier pleasure resorts that had become popular with the public for day-trips or weekend holidays, for example, seaside areas such as Blackpool, United Kingdom and Coney Island, United States. In the United States, some amusement parks grew from picnic groves established along rivers and lakes that provided bathing and water sports, such as Lake Compounce in Connecticut, first established as a picturesque picnic park in 1846, and Riverside Park in Massachusetts, founded in the 1870s along the Connecticut River.

The trick was getting the public to the seaside or resort location. For Coney Island in Brooklyn, New York, on the Atlantic Ocean, a horse-drawn streetcar line brought pleasure seekers to the beach beginning in 1829. In 1875, a million passengers rode the Coney Island Railroad, and in 1876 two million visited Coney Island. Hotels and amusements were built to accommodate both the upper classes and the working class at the beach. The first carousel was installed in the 1870s; the first roller coaster, the "Switchback Railway", followed in 1884.

In England, Blackpool was a popular beachside location beginning in the 1700s. It rose to prominence as a seaside resort with the completion in 1846 of a branch line to Blackpool from Poulton on the main Preston and Wyre Joint Railway line. A sudden influx of visitors, arriving by rail, provided the motivation for entrepreneurs to build accommodation and create new attractions, leading to more visitors and a rapid cycle of growth throughout the 1850s and 1860s.

In 1879, large parts of the promenade at Blackpool were wired. The lighting and its accompanying pageants reinforced Blackpool's status as the North of England's most prominent holiday resort, and its specifically working class character. It was the forerunner of the present-day Blackpool Illuminations. By the 1890s, the town had a population of 35,000, and could accommodate 250,000 holidaymakers. The number of annual visitors, many staying for a week, was estimated at three million.

In the final decade of the 19th century, electric trolley lines were developed in many large American cities. Companies that established the trolley lines also developed trolley parks as destinations of these lines. Trolley parks such as Atlanta's Ponce de Leon Park, or Reading's Carsonia Park were initially popular natural leisure spots before local streetcar companies purchased the sites, expanding them from picnic groves to include regular entertainments, mechanical amusements, dance halls, sports fields, boat rides, restaurants and other resort facilities.

Some of these parks were developed in resort locations, such as bathing resorts at the seaside in New Jersey and New York. A premier example in New Jersey was Atlantic City, a famous vacation resort. Entrepreneurs erected amusement parks on piers that extended from the boardwalk out over the ocean. The first of several was the Ocean Pier in 1891, followed by the Steel Pier in 1898, both of which boasted rides and attractions typical of the time, such as midway-style games and electric trolley rides. The boardwalk also had the first Roundabout, installed in 1892 by William Somers, a wooden predecessor of the Ferris wheel. Somers installed two others in Asbury Park, New Jersey and Coney Island, New York.

Another early park was the Eldorado Amusement Park, which opened in 1891 on the banks of the Hudson River overlooking New York City and covered 25 acres.

Modern amusement parks

The first permanent enclosed entertainment area regulated by a single company was founded at Coney Island in Brooklyn in 1895: Sea Lion Park. It was one of the first parks to charge an admission fee at the gate in addition to selling tickets for individual rides within the park.

In 1897, Sea Lion Park was joined by Steeplechase Park, the first of three major amusement parks that would open in the Coney Island area. George Tilyou designed the park to provide thrills and entertainment. The combination of the nearby population center of New York City and the ease of access to the area made Coney Island the embodiment of the American amusement park. Coney Island also featured Luna Park (1903) and Dreamland (1904). Coney Island was a huge success, and by 1910 attendance on peak days could reach a million people. Fueled by the efforts of Frederick Ingersoll, who borrowed the name, other "Luna Parks" were quickly erected worldwide and opened to rave reviews.

The first amusement park in England, the Blackpool Pleasure Beach, was opened in 1896 by W. G. Bean. In 1904, Sir Hiram Maxim's Captive Flying Machine was introduced: Maxim had designed an early steam-powered aircraft that proved unsuccessful, and instead opened a pleasure ride of flying carriages revolving around a central pylon. Other rides included the 'Grotto' (a fantasy ride), 'River Caves' (a scenic railway), water chutes and a tobogganing tower.

Fire was a constant threat in those days, as much of the construction within the amusement parks of the era was wooden. In 1911, Dreamland was the first Coney Island amusement park to completely burn down; in 1944, Luna Park also burned to the ground. Most of Ingersoll's Luna Parks were similarly destroyed, usually by arson, before his death in 1927.

The Golden Age

During the Gilded Age, many Americans began working fewer hours and had more disposable income. With new-found money and time to spend on leisure activities, Americans sought new venues for entertainment. Amusement parks, set up outside major cities and in rural areas, emerged to meet this new economic opportunity. These parks served as a source of fantasy and escape from real life. By the early 1900s, hundreds of amusement parks were operating in the United States and Canada. Trolley parks stood outside many cities. Parks like Atlanta's Ponce de Leon and Idora Park, near Youngstown, OH, took passengers to traditionally popular picnic grounds, which by the late 1890s also often included rides like the Giant Swing, Carousel, and Shoot-the-Chutes. These amusement parks were often based on nationally known parks or world's fairs: they had names like Coney Island, White City, Luna Park, or Dreamland. The American Gilded Age was, in fact, amusement parks' Golden Age that reigned until the late 1920s.

The Golden Age of amusement parks also included the advent of the kiddie park. Founded in 1925, the original Kiddie Park is located in San Antonio, Texas, and is still in operation as of 2022. The kiddie parks became popular all over America after World War II.

This era saw the development of new innovations in roller coasters, including extreme drops and speeds to thrill riders. By the end of the First World War, people seemed to want even more exciting entertainment, a need met by roller coasters. Although the development of the automobile gave people more options for satisfying their entertainment needs, amusement parks after the war continued to be successful overall, while urban amusement parks saw declining attendance. The 1920s are more properly known as the Golden Age of roller coasters, a decade of frenetic building of these rides.

In England, Dreamland Margate opened in 1880, with Frederick Savage's carousel the first amusement ride installed. In 1920 the Scenic Railway roller coaster opened to the public with great success, carrying half a million passengers in its first year. The park also installed other rides common to the time, including a smaller roller coaster, the Joy Wheel, a Miniature Railway, The Whip and the River Caves. A ballroom was constructed on the site of the Skating Rink in 1920, and in 1923 a Variety Cinema was built on the site. Between 1920 and 1935 over £500,000 was invested in the site, constantly adding new rides and facilities and culminating in the construction of the Dreamland Cinema complex in 1934, which stands to this day.

Meanwhile, the Blackpool Pleasure Beach was also being developed. Frequent large-scale investments were responsible for the construction of many new rides, including the Virginia Reel, Whip, Noah's Ark, Big Dipper and Dodgems. In the 1920s the "Casino Building" was built, which remains to this day. In 1923, land was reclaimed from the sea front. It was at this period that the park moved to its 44-acre (18 ha) current location above what became Watson Road, which was built under the Pleasure Beach in 1932. During this time Joseph Emberton, an architect famous for his work in the amusement trade, was brought in to redesign the architectural style of the Pleasure Beach rides, working on the "Grand National" roller coaster, "Noah's Ark" and the Casino building, to name a few.

Depression and post-World War II decline

The Great Depression of the 1930s and World War II during the 1940s saw the decline of the amusement park industry. After the war, the affluent urban population moved to the suburbs, television became a source of entertainment, and families went to amusement parks less often.

By the 1950s, factors such as urban decay, crime, and tensions surrounding desegregation led to changing patterns in how people chose to spend their free time. Many of the older, traditional amusement parks closed or burned to the ground; many others were taken out by the wrecking ball to make way for suburban housing and development. In 1964, Steeplechase Park, once the king of all amusement parks, closed down for good. The traditional amusement parks that survived, for example Kennywood in West Mifflin, Pennsylvania, and Cedar Point in Sandusky, Ohio, did so in spite of the odds.

Mid-Century Comeback

In 1951, Walt Disney came up with the idea of building an amusement park next to his studios in Burbank. The park, to be called Mickey Mouse Park and built across the street, would feature a western area with a steam-driven paddleboat, a turn-of-the-century town, and a midway. It was rejected by the Burbank city council for fear of a carnival atmosphere. In 1952, Disney created WED Enterprises to design the park, which was now to be built in Anaheim, and in 1953 he was able to convince the bankers to fund the park with the help of studio artist Herb Ryman, who made an aerial drawing of Disneyland. By July 1954, construction had started, with a deadline of one year. Disneyland opened on July 17, 1955, and two months later welcomed its one millionth guest. The financial success of Disneyland reinvigorated the amusement industry. What became Busch Gardens Tampa opened in 1959 as a garden and bird sanctuary. Six Flags Over Texas opened in 1961, themed to the six different countries that had ruled over Texas. In 1964, Universal Studios Hollywood opened to the public with a studio tour of its backlot featuring multiple adventure scenes, and later became a proper theme park. That same year, SeaWorld San Diego opened, displaying many varieties of aquatic and marine life.

The Florida property was initially meant to house Walt Disney's dream idea, EPCOT (Experimental Prototype Community of Tomorrow), but Disney executives decided to build the park first in Walt Disney World and the city later. After six years of construction, Walt Disney World opened to the public on October 1, 1971. Meant to be a larger east-coast version of Disneyland, it had copies of most of the attractions from Disneyland (with Liberty Square and the Hall of Presidents as new additions), yet it was financially the most ambitious project Walt Disney Productions had ever undertaken, and it succeeded once the holiday crowds came in during Thanksgiving. In 1982, Walt Disney Productions opened the second Walt Disney World park, EPCOT Center, based on Walt Disney's futurist ideals and on world's fairs. Like a world's fair, the park displayed the latest technologies in an area called Future World, and cultural pavilions in World Showcase.

The 1990s

In 1987, Disney announced that it would open its third Disney World park, Disney-MGM Studios, in 1989, complete with a working backlot. Universal knew that its Californian backlot tour would not work as a standalone attraction next to Disney World (especially now that Disney was building one at Disney-MGM), so it divided the segments of its California tour into individual attractions, such as Jaws, Disaster!, and Kongfrontation. Disney-MGM Studios opened on May 1, 1989, with two major attractions: the Backlot Tour and The Great Movie Ride. The concept for the park started out as an EPCOT pavilion but was turned into a "half day" park, a complement to the rest of the resort. The rest of the park was themed to 1930s Hollywood and featured lost parts of Hollywood like the Brown Derby. Universal Studios Florida opened on June 7, 1990 (delayed by one year) to great fanfare, but all three of its major attractions (Jaws, Disaster!, and Kongfrontation) suffered severe technical difficulties. Disaster! and Kongfrontation were fixed by the end of June, but Jaws had to be rebuilt and reopened three years later. Universal learned from opening day, however, and began conducting exit surveys and offering special ticket deals.

In 1992, Disney opened its first European park, Euro Disneyland, outside Paris, France. Designed to be like the Magic Kingdom in Florida, it catered to European tastes through changes such as replacing Tomorrowland with Discoveryland, themed to great futuristic thinkers of European culture such as H. G. Wells and Jules Verne. A recession in the French economy and immense public backlash against the park led to financial hardship, putting the park into debt. However, this did not stop Disney from expanding Disney-MGM Studios with the Twilight Zone Tower of Terror in 1994, or from building its fourth Walt Disney World park, Disney's Animal Kingdom.

The 2000s

In the early 90s, after the opening of Universal Studios Florida, Universal sought to build a second theme park, one aimed more towards children and their families. Universal acquired the theme park rights to many properties, including Marvel and Dr. Seuss, to build the park around. In 1999, Universal opened Universal Studios Islands of Adventure under the new resort name Universal Studios Escape. The park was allegedly designed by former Disney Imagineers who left after the financial disaster of Disneyland Paris. In the late 80s, the Oriental Land Company (the owner and operator of the Tokyo Disneyland resort, which opened in 1983) wanted a second park. None of the existing non-Magic Kingdom parks satisfied the Japanese owners, but one concept discarded for Disneyland's second gate inspired a new one: DisneySea. Tokyo DisneySea is themed to stories of the ocean and nautical adventure. It was constructed at a cost of ¥335 billion and opened on September 4, 2001. The park's two signature attractions are a modernized version of 20,000 Leagues Under the Sea and Journey to the Center of the Earth.

In the early 90s, Michael Eisner wanted to remake Disneyland in the image of Walt Disney World's resort style. Plans were made for multiple hotels (such as one based on the Grand Floridian Hotel) and a new west coast version of EPCOT, called WESTCOT. WESTCOT never came to be, due to local opposition from residents, rising costs, and the financial fallout of Disneyland Paris. After a corporate retreat in Colorado, Disney executives decided to build a park themed to California, so that guests could experience all of California within the confines of the Disneyland Resort; it would be built across from Disneyland on its 100-acre parking lot. Disney's California Adventure would prove one of the biggest missteps Disney ever made: unlike Disneyland, it was set in the modern day and spoofed modern-day California with cheap, insincere, flat backdrops. The park would be adult-focused, sell fine food, and serve alcohol. When it opened on February 8, 2001, it received a chilly reception for its lack of attractions, poor environment (for example, the Hollywood Studios Backlot was themed to a modern-day movie backlot in modern-day Hollywood), and overemphasis on retail and dining. When John Hench (an original Imagineer who had worked with Walt and had been a chief creative executive at Imagineering since its founding) was asked for his opinion on the park, he reportedly said, "I preferred the parking lot."

Walt Disney Studios Park in Paris was the second Disneyland Paris park; Disney had to build a second park or risk losing the land to the French government. The park opened March 16, 2002, with only three rides and California Adventure-style theming. Hong Kong Disneyland was of higher quality than these other troubled parks, but, like California Adventure and Walt Disney Studios Park, it still lacked the number of attractions needed. It opened on September 12, 2005, with only four lands, and had exorbitant wait times on opening day for everything from rides to food.

In the early 2000s, the Harry Potter book series written by J.K. Rowling had become a pop-culture phenomenon. Universal and Disney entered a bidding war over the theme park rights to the books, and Disney seemed to have won after Rowling signed a letter of intent with it. However, Rowling was disappointed with Disney's small-scale plan to install an omnimover attraction themed to the Defense Against the Dark Arts class, with one shop and one restaurant, in the former submarine lagoon at the Magic Kingdom. She was also displeased with the lack of creative control she had, and exited the deal. She went to Universal next and was likewise displeased with its initial plan to redress Islands of Adventure's Lost Continent area. To remedy this, Rowling wrested creative control from Universal and pushed for the land to be a full-scale, realistic re-creation of Hogsmeade and Hogwarts rather than a refurbishment of an existing area. The project was announced in 2007, and in 2010 the land opened to the public, making Universal Orlando a must-visit destination.

Today, there are over 475 amusement parks in the United States, ranging from small independent parks to mega-parks operated by Warner Bros., Disney, Six Flags and NBCUniversal.

Amusement and theme parks today

The amusement park industry's offerings range from immersive theme parks such as Warner Bros. World Abu Dhabi, the Disneyland Resort and Universal Orlando Resort to thrilling coaster parks such as the Six Flags parks and Cedar Fair parks. Countless smaller ventures exist across the United States and around the world. Simpler theme parks directly aimed at smaller children have also emerged, such as Legoland.

Examples of amusement parks in shopping malls exist in West Edmonton Mall, Pier 39 and Mall of America.

Family fun parks starting as miniature golf courses have begun to grow to include batting cages, go-karts, bumper cars, bumper boats and water slides. Some of these parks have grown to include even roller coasters, and traditional amusement parks now also have these competition areas in addition to their thrill rides.

In 2015, theme parks in the United States had a revenue of US$8 billion and theme parks in China had a revenue of US$4.6 billion, with China expected to overtake the United States by 2020.

It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

## #1848 2023-07-26 14:07:47

Jai Ganesh
Registered: 2005-06-28
Posts: 47,788

### Re: Miscellany

1851) Street light

Summary

Sodium-vapour lamp is an electric discharge lamp using ionized sodium, used for street lighting and other illumination. A low-pressure sodium-vapour (LPS) lamp contains an inner discharge tube made of borosilicate glass that is fitted with metal electrodes and filled with neon and argon gas and a little metallic sodium. When current passes between the electrodes, it ionizes the neon and argon, giving a red glow until the hot gas vaporizes the sodium. The vapourized sodium ionizes and shines a nearly monochrome yellow. LPS lamps have been used widely for street lighting since the 1930s because of their efficiency (measured in lumens per watt) and the ability of their yellow light to penetrate fog. High-pressure sodium-vapour (HPS) lamps have an inner discharge tube made of translucent alumina that can withstand the corrosive effects of a mixture of mercury and sodium under greater pressure and higher temperature. HPS lamps give a whiter light and are used for extra-bright lighting in places such as road intersections, tunnels, sports stadiums, and other places where it is desirable to see a full spectrum of reflected colours.
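The efficiency figure mentioned above, luminous efficacy, is simply luminous flux divided by electrical power. As a minimal sketch of the comparison, the lumen and wattage values below are illustrative placeholders, not measured data:

```python
# Luminous efficacy = luminous flux (lumens) / electrical power (watts).
# All lamp figures below are illustrative placeholders, not measured data.
def efficacy(lumens, watts):
    """Return luminous efficacy in lumens per watt."""
    return lumens / watts

lamps = {
    "LPS (low-pressure sodium)": (33000, 180),
    "HPS (high-pressure sodium)": (16000, 150),
    "incandescent": (1600, 100),
}

for name, (lm, w) in lamps.items():
    print(f"{name}: {efficacy(lm, w):.0f} lm/W")
```

Even on placeholder numbers, the sodium lamps come out an order of magnitude more efficacious than an incandescent baseline, which is why they dominated street lighting for decades.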

Details

A street light, light pole, lamp pole, lamppost, street lamp, light standard, or lamp standard is a raised source of light on the edge of a road or path. Similar lights may be found on a railway platform. When urban electric power distribution became ubiquitous in developed countries in the 20th century, lights for urban streets followed, or sometimes led.

Many lamps have light-sensitive photocells that activate the lamp automatically when needed, at times when there is little-to-no ambient light, such as at dusk, dawn, or at the onset of dark weather conditions. This function in older lighting systems could be performed with the aid of a solar dial. Many street light systems are being connected underground instead of wiring from one utility post to another. Street lights are an important source of public security lighting intended to reduce crime.
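The photocell behaviour described above, switching on at dusk and off at dawn, can be sketched as a simple threshold controller. This is an illustrative model only; the lux thresholds and the `update` function are assumptions, not taken from any standard or product.

```python
# Dusk/dawn photocell sketch: two thresholds (hysteresis) keep the lamp
# from flickering when ambient light hovers near a single cut-off.
# Threshold values are illustrative assumptions.
DUSK_LUX = 20.0   # turn on when ambient light falls below this
DAWN_LUX = 40.0   # turn off when ambient light rises above this

def update(lamp_on, ambient_lux):
    """Return the lamp's next on/off state given the ambient light level."""
    if not lamp_on and ambient_lux < DUSK_LUX:
        return True
    if lamp_on and ambient_lux > DAWN_LUX:
        return False
    return lamp_on

# Evening: falling light switches the lamp on exactly once and keeps it on.
state = False
for lux in [120, 60, 35, 15, 25, 35]:
    state = update(state, lux)
```

The gap between the two thresholds is the point of the design: a single cut-off would make the lamp chatter on and off as clouds pass at twilight.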

Modern lights

Today, street lighting commonly uses high-intensity discharge lamps. Low-pressure sodium (LPS) lamps became commonplace after World War II for their low power consumption and long life. Late in the 20th century high-pressure sodium (HPS) lamps were preferred, taking further the same virtues. Such lamps provide the greatest amount of photopic illumination for the least consumption of electricity. However, white light sources have been shown to double driver peripheral vision and improve driver brake reaction time by at least 25%; to enable pedestrians to better detect pavement trip hazards and to facilitate visual appraisals of other people associated with interpersonal judgements. Studies comparing metal halide and high-pressure sodium lamps have shown that at equal photopic light levels, a street scene illuminated at night by a metal halide lighting system was reliably seen as brighter and safer than the same scene illuminated by a high-pressure sodium system.

Two national standards now allow for variation in illuminance when using lamps of different spectra. In Australia, HPS lamp performance needs to be reduced by a minimum value of 75%. In the UK, illuminances are reduced for lamps with higher S/P ratios.

New street lighting technologies, such as LED or induction lights, emit a white light that provides high levels of scotopic lumens, allowing streetlights with lower wattages and lower photopic lumens to replace existing streetlights. However, there have been no formal specifications written around Photopic/Scotopic adjustments for different types of light sources, causing many municipalities and street departments to hold back on implementation of these new technologies until the standards are updated. Eastbourne in East Sussex, UK is currently undergoing a project to see 6000 of its streetlights converted to LED and will be closely followed by Hastings in early 2014. Many UK councils are undergoing mass-replacement schemes to LED, and though streetlights are being removed along many long stretches of UK motorways (as they are not needed and cause light pollution), LEDs are preferred in areas where lighting installations are necessary.
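The S/P ratio mentioned above is defined as a source's scotopic output divided by its photopic output, so scotopic lumens follow directly from photopic lumens. A minimal sketch, with illustrative (not measured) S/P values:

```python
# scotopic lumens = photopic lumens * (S/P ratio), by definition of S/P.
# The S/P values below are illustrative assumptions, not measured data.
def scotopic_lumens(photopic_lm, sp_ratio):
    """Return scotopic luminous output given photopic output and S/P ratio."""
    return photopic_lm * sp_ratio

hps = scotopic_lumens(10000, 0.6)   # warm HPS: low S/P ratio
led = scotopic_lumens(10000, 2.0)   # cool-white LED: high S/P ratio
print(hps, led)
```

At equal photopic output, the higher-S/P source delivers several times more scotopic light, which is the argument for replacing streetlights with white sources of lower wattage and lower photopic lumens.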

(A light-emitting diode (LED) is a semiconductor device that emits light when current flows through it. Electrons in the semiconductor recombine with electron holes, releasing energy in the form of photons. The color of the light (corresponding to the energy of the photons) is determined by the energy required for electrons to cross the band gap of the semiconductor. White light is obtained by using multiple semiconductors or a layer of light-emitting phosphor on the semiconductor device.)
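The band-gap relation in the parenthetical above can be made concrete: a photon of energy E has wavelength λ = hc/E, so the band gap sets the emission colour. The band-gap values in the example are typical textbook figures, used here only for illustration:

```python
# Approximate emission wavelength of an LED from its band-gap energy:
#   wavelength = h * c / E_gap
H  = 6.626e-34    # Planck constant, J*s
C  = 2.998e8      # speed of light, m/s
EV = 1.602e-19    # joules per electron-volt

def wavelength_nm(band_gap_ev):
    """Return emission wavelength in nanometres for a band gap given in eV."""
    return H * C / (band_gap_ev * EV) * 1e9

print(wavelength_nm(1.9))  # ~650 nm: red light
print(wavelength_nm(2.8))  # ~440 nm: blue light
```

A blue LED combined with a yellow-emitting phosphor is the usual route to the white light described in the text.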

Milan, Italy, is the first major city to have entirely switched to LED lighting.

In North America, the city of Mississauga, Canada carried out one of the first and largest LED conversion projects, with over 46,000 lights converted to LED technology between 2012 and 2014. It is also one of the first cities in North America to use Smart City technology to control the lights; DimOnOff, a company based in Quebec City, was chosen as the Smart City partner for this project. In the United States, the city of Ann Arbor, Michigan was the first metropolitan area to fully implement LED street lighting, in 2006. Since then, sodium-vapor lamps have slowly been replaced by LED lamps.

Photovoltaic-powered LED luminaires are gaining wider acceptance. Preliminary field tests show that some LED luminaires are energy-efficient and perform well in testing environments.

In 2007, the Civil Twilight Collective created a variant of the conventional LED streetlight, namely the Lunar-resonant streetlight. These lights increase or decrease the intensity of the streetlight according to the lunar light. This streetlight design thus reduces energy consumption as well as light pollution.
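As a toy model of the lunar-resonant idea, lamp output can be scaled down as moonlight contributes more ambient light. The mapping and every constant below are invented for illustration; the actual Civil Twilight design is not specified in the text.

```python
# Toy lunar-resonant dimmer: dim linearly toward 20% of full output as
# moonlight approaches full-moon brightness. All constants here are
# illustrative assumptions, not from the Civil Twilight design.
FULL_MOON_LUX = 0.25   # rough full-moon ground illuminance, assumed

def lamp_level(moon_lux, max_level=1.0):
    """Return lamp output as a fraction of maximum for a given moonlight level."""
    frac = min(moon_lux / FULL_MOON_LUX, 1.0)
    return max_level * (1.0 - 0.8 * frac)

print(lamp_level(0.0))    # new moon: full output
print(lamp_level(0.25))   # full moon: dimmed to one fifth
```

Dimming with the moon saves energy on bright nights and reduces light pollution, which is the stated goal of the design.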

It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

## #1849 2023-07-27 14:04:58

Jai Ganesh
Registered: 2005-06-28
Posts: 47,788

### Re: Miscellany

1852) Light fixture

Gist

Luminaire, or light fixture, is a complete lighting unit, consisting of one or more lamps (bulbs or tubes that emit light), along with the socket and other parts that hold the lamp in place and protect it, wiring that connects the lamp to a power source, and a reflector that helps direct and distribute the light. Fluorescent fixtures usually have lenses or louvers to shield the lamp (thus reducing glare) and redirect the light emitted. Luminaires include both portable and ceiling- or wall-mounted fixtures.

Details

A light fixture (US English), light fitting (UK English), or luminaire is an electrical device containing an electric lamp that provides illumination. All light fixtures have a fixture body and one or more lamps. The lamps may be in sockets for easy replacement—or, in the case of some LED fixtures, hard-wired in place.

Fixtures may also have a switch to control the light, either attached to the lamp body or attached to the power cable. Permanent light fixtures, such as dining room chandeliers, may have no switch on the fixture itself, but rely on a wall switch.

Fixtures require an electrical connection to a power source, typically AC mains power, but some run on battery power for camping or emergency lights. Permanent lighting fixtures are directly wired. Movable lamps have a plug and cord that plugs into a wall socket.

Light fixtures may also have other features, such as reflectors for directing the light, an aperture (with or without a lens), an outer shell or housing for lamp alignment and protection, an electrical ballast or power supply, and a shade to diffuse the light or direct it towards a workspace (e.g., a desk lamp). A wide variety of special light fixtures are created for use in the automotive lighting industry, aerospace, marine and medicine sectors.

Portable light fixtures are often called lamps, as in table lamp or desk lamp. In technical terminology, the lamp is the light source, which, in casual terminology, is called the light bulb. Both the International Electrotechnical Commission (IEC) and the Illuminating Engineering Society (IES) recommend the term luminaire for technical use.

History

Fixture manufacturing began soon after production of the incandescent light bulb. When practical uses of fluorescent lighting were realized after 1924, the three leading companies to produce various fixtures were Lightolier, Artcraft Fluorescent Lighting Corporation, and Globe Lighting in the United States.

It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

## #1850 2023-07-28 13:34:24

Jai Ganesh
Registered: 2005-06-28
Posts: 47,788

### Re: Miscellany

1853) Business

Gist

Business is buying and selling as a way of earning money; commerce.

Summary

Business is the practice of making one's living or making money by producing or buying and selling products (such as goods and services). It is also "any activity or enterprise entered into for profit."

Having a business name does not separate the business entity from the owner, which means that the owner of the business is responsible and liable for debts incurred by the business. If the business acquires debts, the creditors can go after the owner's personal possessions. The taxation system for such businesses is different from that for corporations: a business structure does not allow for corporate tax rates, and the proprietor is personally taxed on all income from the business.

The term is also often used colloquially (but not by lawyers or public officials) to refer to a company, such as a corporation or cooperative.

Corporations, in contrast with sole proprietors and partnerships, are a separate legal entity and provide limited liability for their owners/members, as well as being subject to corporate tax rates. A corporation is more complicated and expensive to set up, but offers more protection and benefits for the owners/members.

Details

A business organization is an entity formed for the purpose of carrying on commercial enterprise. Such an organization is predicated on systems of law governing contract and exchange, property rights, and incorporation.

Business enterprises customarily take one of three forms: individual proprietorships, partnerships, or limited-liability companies (or corporations). In the first form, a single person holds the entire operation as his personal property, usually managing it on a day-to-day basis. Most businesses are of this type. The second form, the partnership, may have from 2 to 50 or more members, as in the case of large law and accounting firms, brokerage houses, and advertising agencies. This form of business is owned by the partners themselves; they may receive varying shares of the profits depending on their investment or contribution. Whenever a member leaves or a new member is added, the firm must be reconstituted as a new partnership. The third form, the limited-liability company, or corporation, denotes incorporated groups of persons—that is, a number of persons considered as a legal entity (or fictive “person”) with property, powers, and liabilities separate from those of its members. This type of company is also legally separate from the individuals who work for it, whether they be shareholders or employees or both; it can enter into legal relations with them, make contracts with them, and sue and be sued by them. Most large industrial and commercial organizations are limited-liability companies.

This article deals primarily with the large private business organizations made up chiefly of partnerships and limited-liability companies—called collectively business associations. Some of the principles of operation included here also apply to large individually owned companies and to public enterprises.

Business associations have three distinct characteristics: (1) they have more than one member (at least when they are formed); (2) they have assets that are legally distinct from the private assets of the members; and (3) they have a formal system of management, which may or may not include members of the association.

The first feature, plurality of membership, distinguishes the business association from the business owned by one individual; the latter does not need to be regulated internally by law, because the single owner totally controls the assets. Because the single owner is personally liable for debts and obligations incurred in connection with the business, no special rules are needed to protect its creditors beyond the ordinary provisions of bankruptcy law.

The second feature, the possession of distinct assets (or a distinct patrimony), is required for two purposes: (1) to delimit the assets to which creditors of the association can resort to satisfy their claims (though in the case of some associations, such as the partnership, they can also compel the members to make good any deficiency) and (2) to make clear what assets the managers of the association may use to carry on business. The assets of an association are contributed directly or indirectly by its members—directly if a member transfers a personally owned business or property or investments to the association in return for a share in its capital, indirectly if a member’s share of capital is paid in cash and the association then uses that contribution and like contributions in cash made by other members to purchase a business, property, or investments.

The third essential feature, a system of management, varies greatly. In a simple form of business association the members who provide the assets are entitled to participate in the management unless otherwise agreed. In the more complex form of association, such as the company or corporation of the Anglo-American common-law countries, members have no immediate right to participate in the management of the association’s affairs; they are, however, legally entitled to appoint and dismiss the managers (known also as directors, presidents, or administrators), and their consent is legally required (if only pro forma) for major changes in the company’s structure or activities, such as reorganizations of its capital and mergers with other associations. The role of a member of a company or corporation is basically passive; a member is known as a shareholder or stockholder, the emphasis being placed on the individual’s investment function. The managers of a business association, however, do not in law comprise all of the persons who exercise discretion or make decisions. Even the senior executives of large corporations or companies may be merely employees, and, like manual or clerical workers, their legal relationship with the corporation is of no significance in considering the law governing the corporation. Whether an executive is a director, president, or administrator (an element in the company or corporation’s legal structure) depends on purely formal considerations; whether the executive is named as such in the document constituting the corporation or is subsequently appointed or elected to hold such an office, the person’s actual functions in running the corporation’s business and the amount of power or influence wielded are irrelevant. 
Nevertheless, for certain purposes, such as liability for defrauding creditors in English law and liability for deficiencies of assets in bankruptcy in French law, people who act as directors and participate in the management of the company’s affairs are treated as such even though they have not been formally appointed.

Partnerships

The distinguishing features of the partnership are the personal and unrestricted liability of each partner for the debts and obligations of the firm (whether the partner assented to their being incurred or not) and the right of each partner to participate in the management of the firm and to act as an agent of it in entering into legal transactions on its behalf. The civil-law systems of most continental European countries have additionally always permitted a modified form of partnership, the limited partnership (société en commandite, Kommanditgesellschaft, società in accomandita), in which one or more of the partners are liable for the firm’s debts only to the extent of the capital they contribute or agree to contribute. Such limited partners are prohibited from taking part in the management of the firm, however; if they do, they become personally liable without limit for the debts of the firm, together with the general partners. English common law refused to recognize the limited partnership, and in the United States at the beginning of the 19th century only Louisiana, which was governed by French civil law, permitted such partnerships. During the 19th century most of the states enacted legislation allowing limited partnerships to be formed, and in 1907 Great Britain adopted the limited partnership by statute, but it has not been much used there in practice. Another distinction between kinds of partnership in civil law—one that has no equivalent in Anglo-American common-law countries—is that between civil and commercial partnerships. This distinction depends on whether the purposes for which the partnership is formed fall within the list of commercial activities in the country’s commercial code. These codes always make manufacturing, dealing in, and transporting goods commercial activities, while professional and agricultural activities are always noncommercial. 
Consequently, a partnership of lawyers, doctors, or farmers is a civil partnership, governed exclusively by the civil code of the country concerned and untouched by its commercial code. No such distinction is made in the common-law countries, where professional and business partnerships are subject to the same rules as trading partnerships, although only partners in a trading partnership have the power to borrow on the firm’s behalf.

Limited-liability companies, or corporations

The company or corporation, unlike the partnership, is formed not simply by an agreement entered into between its first members; it must also be registered at a public office or court designated by law or otherwise obtain official acknowledgment of its existence. Under English and American law the company or corporation is incorporated by filing the company’s constitution (memorandum and articles of association, articles or certificate of incorporation) signed by its first members at the Companies Registry in London or, in the United States, at the office of the state secretary of state or corporation commissioner. In France, Germany, and Italy and the other countries subject to a civil-law system, a notarized copy of the constitution is filed at the local commercial tribunal, and proof is tendered that the first members of the company have subscribed the whole or a prescribed fraction of the company’s capital and that assets transferred to the company in return for an allotment of its shares have been officially valued and found to be worth at least the amount of capital allotted for them. English and American law, together with the laws of the Netherlands and the Scandinavian countries, provide only one category of business company or corporation (in the Netherlands the naamloze vennootschap, in Sweden the aktiebolag), although all these systems of law make distinctions for tax purposes between private, or close, companies or corporations on the one hand and public companies or corporations on the other. English law also distinguishes between private and public companies for some purposes of company law; for example, a private company cannot have more than 50 members and cannot advertise subscriptions for its shares. 
Under the civil-law systems, however, a fundamental distinction is drawn between the public company (société anonyme, Aktiengesellschaft, società per azioni) and the private company (société à responsabilité limitée, Gesellschaft mit beschränkter Haftung [GmbH], società a responsabilità limitata), and in Germany the two kinds of companies are governed by different enactments, as they were in France until 1966. For practical purposes, however, public and private companies function the same way in all countries. Private companies are formed when there is no need to appeal to the public to subscribe for the company’s shares or to lend money to it, and often they are little more than incorporated partnerships whose directors hold all or most of the company’s shares. Public companies are formed—or more usually created by the conversion of private companies into public ones—when the necessary capital cannot be supplied by the directors or their associates and it is necessary to raise funds from the public by publishing a prospectus. In Great Britain, the Commonwealth countries, and the United States, this also requires the obtaining of a stock exchange listing for the shares or other securities offered or an offer on the Unlisted Securities Market (USM). In a typical public company the directors hold only a small fraction of its shares, often less than 1 percent, and in Great Britain and the United States, at least, it is not uncommon for up to one-half of the funds raised by the company to be represented not by shares in the company but by loan securities such as debentures or bonds.

In Anglo-American common-law countries, public and private companies account for most of the business associations formed, and partnerships are entered into typically only for professional activities. In European countries the partnership in both its forms is still widely used for commercial undertakings. In Germany a popular form of association combines both the partnership and the company. This is the GmbH & Co. KG, which is a limited partnership whose general partner (nominally liable without limit for the partnership’s debts) is a private company and whose limited partners are the same persons as the shareholders of the company. The limited partners enjoy the benefit of limited liability for the partnership’s debts, and, by ensuring that most of the partnership’s profits are paid to them as limited partners and not to them as shareholders in the private company, they largely avoid the incidence of corporation tax.

Shares and other securities

Under all systems of law, partners may assign their share or interest in a partnership to anyone they wish unless the partnership agreement forbids this, but an assignment does not make the assignee a partner unless all the other partners agree. If they do not, the assignee is merely entitled to receive the financial benefits attached to the share or interest without being able to take part in the management of the firm, but neither is the assignee personally liable for the debts of the firm.

The shares of a company are quite different. In the first place, they are freely transferable unless the company’s constitution imposes restrictions on their transfer or, in French and Belgian law, unless the company is a private one, in which case transfers require the consent of the holders of three-quarters of the company’s issued shares. The constitution of an English private company must always restrict the transfer of its shares for the company to qualify as private. The restriction is usually that the directors may refuse to register a transfer for any of several reasons or that the other shareholders shall have the right to buy the shares at a fair price when their holder wishes to sell. In American law similar restrictions may be imposed, but unreasonable restrictions are disallowed by the courts. According to French and German law, the transfer of shares in public companies may be restricted only by being made subject to the consent of the board of directors or of the management board, but under French law, if the directors do not find an alternative purchaser at a fair price within three months, their consent is considered as given.

Limited liability

The second significant difference between share holding and partnership is that shares in a company do not expose the holder to unlimited liability in the way that a partner (other than a limited one) is held liable for the debts of the firm. Under all systems of law, except those of Belgium and some U.S. states, all shares must have a nominal value expressed in money terms, such as $10, £1, €12,500, or €1, the latter two being the minimum permissible under German and French law, respectively. A company may issue shares for a price greater than this nominal value (the excess being known as a share premium), but it generally cannot issue them for less. Any part of that nominal value and the share premium that has not so far been paid is the measure of the shareholder’s maximum liability to contribute if the company becomes insolvent. If shares are issued without a nominal value (no par value shares), the subscription price is fixed by the directors and is the measure of the shareholder’s maximum liability to contribute. Usually the subscription price of shares is paid to the company fairly soon after they are issued. The period for payment of all the installments is rarely more than a year in common-law countries, and it is not uncommon for the whole subscription price to be payable when the shares are issued. The actual subscription price is influenced by market considerations, such as the company’s profit record and prospects, and by the market value of the company’s existing shares. Although directors have a duty to obtain the best subscription price possible, they can offer new shares to existing shareholders at favourable prices, and those shareholders can benefit either by subscribing for the new shares or by selling their subscription rights to other persons. Under European legislation, directors are bound to offer new shares to existing shareholders in the first place unless they explicitly forgo their preemptive rights. In most U.S. states (but not in the United Kingdom), such preemptive rights are implied if the new shares belong to the same class as existing shares, but the rights may be negated by the company’s constitution.
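The liability rule described above reduces to simple arithmetic: a shareholder’s maximum further exposure is the unpaid part of the nominal value plus the unpaid part of any share premium. A minimal sketch (the function name and all figures are illustrative, not drawn from any statute):

```python
# Sketch of the shareholder-liability rule described above:
# maximum further liability = (nominal value + share premium) - amount already paid.
# All names and figures here are hypothetical illustrations.

def max_further_liability(nominal_value, share_premium, amount_paid):
    """Return the most a shareholder can still be required to contribute
    on one share if the company becomes insolvent."""
    subscription_price = nominal_value + share_premium
    return max(subscription_price - amount_paid, 0.0)

# A £1 share issued at a premium of £0.50, with £0.90 paid so far:
print(max_further_liability(1.00, 0.50, 0.90))  # 0.6 still callable

# A fully paid share exposes the holder to no further liability:
print(max_further_liability(1.00, 0.50, 1.50))  # 0.0
```

For no par value shares the same function applies with the directors’ fixed subscription price in place of nominal value plus premium, and a premium of zero.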

Dividends

The third difference between share holding and partnership is that a partner is automatically entitled to a share of the profits of the firm as soon as they are ascertained, but a shareholder is entitled to a dividend out of the company’s profits only when it has been declared. Under English law, dividends are usually declared at annual general meetings of shareholders, though the company’s constitution usually provides that the shareholders cannot declare higher dividends than the directors recommend. Under U.S. law, dividends are usually declared by the directors, and, if shareholders consider, in view of the company’s profits, that too small a dividend has been paid, they may apply to the court to direct payment of a reasonable dividend. German law similarly protects shareholders of public companies against niggardly dividends by giving the annual general meeting power to dispose as it wishes of at least half the profit shown by the company’s annual accounts before making transfers to reserve. For the same object, Swedish law empowers the holders of 10 percent of a company’s shares to require at least one-fifth of its accumulated profits and reserves to be distributed as a dividend, provided that the total distribution does not exceed one-half of the profits of its last financial year. Thus, most national laws recognize the potential conflict of interest between directors and shareholders.
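The Swedish minority-dividend rule quoted above amounts to a simple formula: the compellable distribution is one-fifth of accumulated profits and reserves, capped at one-half of the last financial year’s profit. A minimal sketch, with invented figures for illustration:

```python
# Sketch of the Swedish minority-dividend rule described above:
# holders of 10 percent of the shares may require a distribution of at least
# one-fifth of accumulated profits and reserves, but the total distribution
# may not exceed one-half of the profits of the last financial year.
# Figures are hypothetical.

def compellable_dividend(accumulated_profits_and_reserves, last_year_profit):
    """Smallest dividend a 10 percent minority can force, under the stated cap."""
    floor = accumulated_profits_and_reserves / 5   # one-fifth of accumulated profits
    cap = last_year_profit / 2                     # half of last year's profit
    return min(floor, cap)

# Accumulated profits of 1,000 with a last-year profit of 600:
print(compellable_dividend(1000, 600))  # 200.0 — one-fifth is within the cap of 300

# With a weak last year (profit 300), the cap binds instead:
print(compellable_dividend(1000, 300))  # 150.0
```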

Classes of shares

Companies may issue shares of different classes, the commonest classes being ordinary and preference, or, in American terminology, common and preferred shares. Preference shares are so called because they are entitled by the terms on which they are issued to payment of a dividend of a fixed amount (usually expressed as a percentage of their nominal value) before any dividend is paid to the ordinary shareholders. In the case of cumulative preference shares, any unpaid part of a year’s dividend is carried forward and added to the next year’s dividend and so on until the arrears of preference dividend are paid off. The accumulation of arrears of preference dividend depreciates the value of the ordinary shares, whose holders cannot be paid a dividend until the arrears of preference dividend have been paid. Consequently, it has been common in the United States (but not in the United Kingdom) for companies to issue noncumulative preference shares, giving their holders the right to a fixed preferential dividend each year if the company’s profits are sufficient to pay it but limiting the dividend to the amount of the profits of the year if they are insufficient to pay the preference dividend in full. Preference shares are not common in Europe, but under German and Italian law they have the distinction of being the only kind of shares that can be issued without voting rights in general meetings, all other shares carrying voting rights proportionate to their nominal value by law.
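The difference between cumulative and noncumulative preference shares can be illustrated with a short simulation of the arrears mechanism described above (the profit figures and function names are invented for illustration; real dividend declarations involve many further legal constraints):

```python
# Illustration of cumulative vs. noncumulative preference dividends.
# Each year the preference holders are owed a fixed dividend; the company
# pays it only so far as the year's profits allow. With cumulative shares,
# the shortfall is carried forward as arrears. Figures are hypothetical.

def run_years(profits, fixed_dividend, cumulative):
    """Return (preference dividends paid each year, arrears outstanding at the end)."""
    arrears = 0.0
    paid_per_year = []
    for profit in profits:
        owed = fixed_dividend + (arrears if cumulative else 0.0)
        paid = min(owed, profit)                   # cannot pay more than the year's profit
        arrears = (owed - paid) if cumulative else 0.0
        paid_per_year.append(paid)
    return paid_per_year, arrears

profits = [100.0, 30.0, 120.0]                     # a lean middle year
print(run_years(profits, 80.0, cumulative=True))   # arrears of 50 carried into year 3
print(run_years(profits, 80.0, cumulative=False))  # year-2 shortfall simply lost
```

In the cumulative case the year-2 shortfall of 50 is added to year 3’s dividend, and until those arrears are cleared no dividend could be paid on the ordinary shares, which is exactly why their value is depreciated.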

History of the limited-liability company

The limited-liability company, or corporation, is a relatively recent innovation. Only since the mid-19th century have incorporated businesses risen to ascendancy over other modes of ownership. Thus, any attempt to trace the forerunners of the modern corporation should be distinguished from a general history of business or a chronicle of associated activity. People have embarked on enterprises for profit and have joined together for collective purposes since the dawn of recorded history, but these early enterprises were forerunners of the contemporary corporation in terms of their functions and activities, not in terms of their mode of incorporation. When a group of Athenian or Phoenician merchants pooled their savings to build or charter a trading vessel, their organization was not a corporation but a partnership; ancient societies did not have laws of incorporation that delimited the scope and standards of business activity.

The corporate form itself developed in the early Middle Ages with the growth and codification of civil and canon law. Several centuries passed, however, before business ownership was subsumed under this arrangement. The first corporations were towns, universities, and ecclesiastical orders. These differed from partnerships in that the corporation existed independently of any particular membership. Unlike modern business corporations, they were not the “property” of their participants. The holdings of a monastery, for example, belonged to the order itself; no individual owned shares in its assets. The same was true of the medieval guilds, which dominated many trades and occupations. As corporate bodies, they were chartered by government, and their business practices were regulated by public statutes; each guild member, however, was an individual proprietor who ran his own establishment, and, while many guilds had substantial properties, these were the historic accruals of the associations themselves. By the 15th century, the courts of England had agreed on the principle of “limited liability”: Si quid universitati debetur, singulis non debetur, nec quod debet universitas, singuli debent (“If something is owed to the group, it is not owed to the individuals nor do the individuals owe what the group owes”). Originally applied to guilds and municipalities, this principle set limits on how much an alderman of the Liverpool Corporation, for example, might be called upon to pay if the city ran into debt or bankruptcy. Applied later to stockholders in business corporations, it served to encourage investment because the most an individual could lose in the event of the firm’s failure would be the actual amount originally paid for the shares.

Incorporation of business enterprises began in England during the Elizabethan era. This was a period when business owners were beginning to accumulate substantial surpluses, and overseas exploration and trade presented expanded investment opportunities. This was an age that gave overriding regulatory powers to the state, which sought to ensure that business activity was consonant with current mercantilist conceptions of national prosperity. Thus, the first joint-stock companies, while financed with private capital, were created by public charters setting down in detail the activities in which the enterprises might operate. In 1600 Queen Elizabeth I granted to a group of investors headed by the earl of Cumberland the right to be “one body corporate,” known as the Governor and Company of Merchants of London, trading into the East Indies. The company was granted a trading monopoly in its territories and also was given authority to make and enforce laws in the areas it entered. The East India Company, the Royal African Company, the Hudson’s Bay Company, and similar incorporated firms were semipublic enterprises acting both as arms of the state and as vehicles for private profit. The same principle held with the colonial charters on the American continent. In 1606 the crown vested in a syndicate of “loving and well-disposed Subjects” the right to develop Virginia as a royal domain, including the power to coin money and to maintain a military force. The same was done in subsequent decades for the “Governor and Company of the Massachusetts Bay in New England” and for William Penn’s “Free Society of Traders” in Pennsylvania.

Much of North America’s settlement was initially underwritten as a business venture. But, while British investors accepted the regulations inhering in their charters, American entrepreneurs came to regard such rules as repressive and unrealistic. The U.S. War of Independence can be interpreted as a movement against the tenets of this mercantile system, raising serious questions about a direct tie between business enterprise and public policy. One result of that war, therefore, was to establish the premise that a corporation need not show that its activities advance a specific public purpose. Alexander Hamilton, the first secretary of the treasury and an admirer of Adam Smith, took the view that businessmen should be encouraged to explore their own avenues of enterprise. “To cherish and stimulate the activity of the human mind, by multiplying the objects of enterprise, is not among the least considerable of the expedients by which the wealth of a nation may be promoted,” he wrote in 1791.

The growth of independent corporations did not occur overnight. For a long time, both in Europe and in the United States, the corporate form was regarded as a creature of government, providing a form of monopoly. In the United States the new state legislatures granted charters principally to public-service companies intending to build or operate docks, bridges, turnpikes, canals, and waterworks, as well as to banks and insurance companies. Of the 335 companies receiving charters prior to 1800, only 13 were firms engaging in commerce or manufacturing. By 1811, however, New York had adopted a general act of incorporation, setting the precedent that businesspeople had only to provide a summary description of their intentions for permission to launch an enterprise. By the 1840s and ’50s the rest of the states had followed suit. In Great Britain after 1825 the statutes were gradually liberalized so that the former privilege of incorporating joint-stock companies became the right of any group complying with certain minimum conditions, and the principle of limited liability was extended to them. A similar development occurred in France and parts of what is now Germany.

By the late 20th century, in terms of size, influence, and visibility, the corporation had become the dominant business form in industrial nations. While corporations may be large or small, ranging from firms having hundreds of thousands of employees to neighbourhood businesses of very modest proportions, public attention increasingly focused on the several hundred giant companies that play a preponderant economic role in the United States, Japan, South Korea, the nations of western Europe, Canada, Australia, New Zealand, South Africa, and several other countries. These firms not only occupy important positions in the economy but have great social, political, and cultural influence as well. Both at home and abroad they affect the operations of national and local governments, give shape to local communities, and influence the values of ordinary individuals. Therefore, while in fact and in law corporate businesses are private enterprises, their activities have consequences that are public in character and as pervasive as those of many governments.

Besides the partnership and the company or corporation, there are a number of other forms of business association, of which some are developments or adaptations of the partnership or company, some are based on contract between the members or on a trust created for their benefit, and others are statutory creations. The first of these classes includes the cooperative society; the building society, home loan association, and its German equivalent, the Bausparkasse; the trustee savings bank, or people’s or cooperative bank; the friendly society, or mutual insurance association; and the American mutual fund investment company. The essential feature of these associations is that they provide for the small or medium investor. Although they originated as contractual associations, they are now governed in most countries by special legislation and not by the law applicable to companies or corporations.

The establishment and management of cooperatives is treated in most countries under laws distinct from those governing other business associations. The cooperative is a legal entity but typically is owned and controlled by those who use it or work in it, though there may be various degrees of participation and profit sharing. The essential point is that the directors and managers are accountable ultimately to the enterprise members, not to the outside owners of capital. This form is rooted in a strong sense of social purpose; it was devised in the 19th century as an idealistic alternative to the conventional capitalist business association. It has been particularly associated with credit, retailing, agricultural marketing, and crafts.

The second class comprises the English unit trust and the European fonds d’investissements or Investmentfonds, which fulfill the same functions as American mutual funds; the Massachusetts business trust (now little used but providing a means of limiting the liability of participants in a business activity like the limited partnership); the foundation (fondation, Stiftung), a European organization that has social or charitable objects and often carries on a business whose profits are devoted to those objects; and, finally, the cartel, or trade association, which regulates the business activities of its individual members and is itself extensively regulated by antitrust and antimonopoly legislation.

The third class of associations, those wholly created by statute, comprises corporations formed to carry on nationalized business undertakings (such as the Bank of England and the German Railway) or to coexist with other businesses in the same field (such as the Italian Istituto per la Ricostruzione Industriale) or to fulfill a particular governmental function (such as the Tennessee Valley Authority). Such statutory associations usually have no share capital, though they may raise loans from the public. They are regarded in European law as being creatures of public law, like departments and agencies of the government. In recent years, however, a hybrid between the state corporation and the privately owned corporation or company has appeared in the form of the mixed company or corporation (société mixte). In this kind of organization, part of the association’s share capital is held by the state or a state agency and part by private persons, this situation often resulting from a partial acquisition of the association’s shares by the state. Only in France and Italy are there special rules governing such associations; in the United Kingdom and Germany they are subject to the ordinary rules of company law.

Management and control of companies

The simplest form of management is the partnership. In Anglo-American common-law and European civil-law countries, every partner (other than a limited partner) is entitled to take part in the management of the firm’s business; however, a partnership agreement may provide that ordinary partners shall not participate in management, in which case they are dormant partners but are still personally liable for the debts and obligations incurred by the other managing partners.

The management structure of companies or corporations is more complex. The simplest is that envisaged by English, Belgian, Italian, and Scandinavian law, by which the shareholders of the company periodically elect a board of directors who collectively manage the company’s affairs and reach decisions by a majority vote but also have the right to delegate any of their powers, or even the whole management of the company’s business, to one or more of their number. Under this regime it is common for a managing director (directeur général, direttore generale) to be appointed, often with one or more assistant managing directors, and for the board of directors to authorize them to enter into all transactions needed for carrying on the company’s business, subject only to the general supervision of the board and to its approval of particularly important measures, such as issuing shares or bonds or borrowing. The U.S. system is a development of this basic pattern. By the laws of most states it is obligatory for the board of directors elected periodically by the shareholders to appoint certain executive officers, such as the president, vice president, treasurer, and secretary. The latter two have no management powers and fulfill the administrative functions that in an English company are the concern of its secretary, but the president and in his absence the vice president have by law or by delegation from the board of directors the same full powers of day-to-day management as are exercised in practice by an English managing director.

The most complex management structures are those provided for public companies under German and French law. The management of private companies under these systems is confided to one or more managers (gérants, Geschäftsführer) who have the same powers as managing directors. In the case of public companies, however, German law imposes a two-tier structure, the upper tier consisting of a supervisory committee (Aufsichtsrat) whose members are elected periodically by the shareholders and the employees of the company in the proportion of two-thirds shareholder representatives and one-third employee representatives (except in the case of mining and steel companies where shareholders and employees are equally represented), and the lower tier consisting of a management board (Vorstand) comprising one or more persons appointed by the supervisory committee but not from its own number. The affairs of the company are managed by the management board, subject to the supervision of the supervisory committee, to which it must report periodically and which can at any time require information or explanations. The supervisory committee is forbidden to undertake the management of the company itself, but the company’s constitution may require its approval for particular transactions, such as borrowing or the establishment of branches overseas, and by law it is the supervisory committee that fixes the remuneration of the managers and has power to dismiss them.

The French management structure for public companies offers two alternatives. Unless the company’s constitution otherwise provides, the shareholders periodically elect a board of directors (conseil d’administration), which “is vested with the widest powers to act on behalf of the company” but which is also required to elect a president from its members who “undertakes on his own responsibility the general management of the company,” so that in fact the board of directors’ functions are reduced to supervising the president. The similarity to the German pattern is obvious, and French legislation carries this further by openly permitting public companies to establish a supervisory committee (conseil de surveillance) and a management board (directoire) like the German equivalents as an alternative to the board of directors–president structure.

Dutch and Italian public companies tend to follow the German pattern of management, although it is not expressly sanctioned by the law of those countries. The Dutch commissarissen and the Italian sindaci, appointed by the shareholders, have taken over the task of supervising the directors and reporting on the wisdom and efficiency of their management to the shareholders.

Separation of ownership and control

The investing public is a major source of funds for new or expanding operations. As companies have grown, their need for funds has grown, with the consequence that legal ownership of companies has become widely dispersed. For example, in large American corporations, shareholders may run into the hundreds of thousands or even more. Although large blocks of shares may be held by wealthy individuals or institutions, the total amount of stock in these companies is so large that even a very wealthy person is not likely to own more than a small fraction of it.

The chief effect of this stock dispersion has been to give effective control of the companies to their salaried managers. Although each company holds an annual meeting open to all stockholders, who may vote on company policy, these gatherings, in fact, tend to ratify ongoing policy. Even if sharp questions are asked, the presiding officers almost invariably hold enough proxies to override outside proposals. The only real recourse for dissatisfied shareholders is to sell their stock and invest in firms whose policies are more to their liking. (If enough shareholders do this, of course, the price of the stock will fall quite markedly, perhaps impelling changes in management personnel or company policy.) Occasionally, there are “proxy battles,” when attempts are made to persuade a majority of shareholders to vote against a firm’s managers (or to secure representation of a minority bloc on the board), but such struggles seldom involve the largest companies. It is in the managers’ interest to keep the stockholders happy, for, if the company’s shares are regarded as a good buy, then it is easy to raise capital through a new stock issue.

Thus, if a company is performing well in terms of sales and earnings, its executives will have a relatively free hand. If a company gets into trouble, its usual course is to agree to be merged into another incorporated company or to borrow money. In the latter case, the lending institution may insist on a new chief executive of its own choosing. If a company undergoes bankruptcy and receivership, the court may appoint someone to head the operation. But managerial autonomy is the rule. The salaried executives typically have the discretion and authority to decide what products and services they will put on the market, where they will locate plants and offices, how they will deal with employees, and whether and in what directions they will expand their spheres of operation.

Executive management

The markets that corporations serve reflect the great variety of humanity and human wants; accordingly, firms that serve different markets exhibit great differences in technology, structure, beliefs, and practice. Because the essence of competition and innovation lies in differentiation and change, corporations are in general under degrees of competitive pressure to modify or change their existing offerings and to introduce new products or services. Similarly, as markets decline or become less profitable, they are under pressure to invent or discover new wants and markets. Resistance to this pressure for change and variety comes from the benefits derived from routinized manufacturing, from standardization of machines and tools, and from labour specialization. Every firm has to arrive at a mode of balancing change and stability, a conflict often expressed in distinctions drawn between capital and revenue and between long- and short-term operations and strategy. Many corporations have achieved relatively stable product-market relationships, providing further opportunity for growth within particular markets and expansion into new areas. Such relative market control endows corporate executives and officers with considerable discretion over resources and, in turn, with considerable corporate powers. In theory these men and women are hired to manage someone else’s property; in practice, however, many management officers have come increasingly to regard the stockholders as simply one of several constituencies to which they must report at periodic intervals through the year.

Managerial decision making

The guidelines governing management decisions cannot be reduced to a simple formula. Traditionally, economists have assumed that the goal of a business enterprise was to maximize its profits. There are, however, problems of interpretation with this simple assertion. First, over time the notion of “profit” is itself unclear in operational terms. Today’s profits can be increased at the expense of profits years away, by cutting maintenance, deferring investment, and exploiting staff. Second, there are questions over whether expenditure on offices, cars, staff expenses, and other trappings of status reduces shareholders’ wealth or whether these are part of necessary performance incentives for executives. Some proponents of such expenditures believe that they serve to enhance contacts, breed confidence, improve the flow of information, and stimulate business. Third, if management asserts primacy of profits, this may in itself provide negative signals to employees about systems of corporate values. Where long-term success requires goodwill, commitment, and cooperation, focus on short-term profit may alienate or drive away those very employees upon whom long-term success depends.

Generally speaking, most companies turn over only about half of their earnings to stockholders as dividends. They plow the rest of their profits back into the operation. A major motivation of executives is to expand their operations faster than those of their competitors. The important point, however, is that without profit over the long term no firm can survive. For growing firms in competitive markets, a major indicator of executive competence is the ability to augment company earnings by increasing sales or productivity or by achieving savings in other ways. This principle distinguishes the field of business from other fields. A drug company makes pharmaceuticals and may be interested in improving health, but it exists, first and foremost, to make profits. If it found that it could make more money by manufacturing frozen orange juice, it might choose to do so.
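The split described above can be made concrete with a minimal sketch. The figures here are hypothetical, chosen only to illustrate the roughly 50 percent payout ratio the text mentions:

```python
# Hypothetical illustration of the "about half" payout described above.
earnings = 10_000_000.0   # assumed annual earnings (illustrative only)
payout_ratio = 0.5        # roughly half of earnings paid as dividends

dividends = earnings * payout_ratio  # turned over to stockholders
retained = earnings - dividends      # plowed back into the operation

print(f"Dividends: {dividends:,.0f}, Retained: {retained:,.0f}")
```

A firm that retains half of a 10-million profit thus keeps 5 million available for expansion, which is the pool executives draw on when trying to grow faster than competitors.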

The modern executive

Much has been written about business executives as “organization men.” According to this view, typical company managers no longer display the individualism of earlier generations of entrepreneurs. They seek protection in committee-made decisions and tailor their personalities to please their superiors; they aim to be good “team” members, adopting the firm’s values as their own. The view is commonly held that there are companies—and entire industries—that have discouraged innovative ideas. The real question now is whether companies will develop policies to encourage autonomy and adventuresomeness among managers.

In Japan, where the employees of large corporations tend to remain with the same employer throughout their working lives, the corporations recruit young people upon their graduation from universities and train them as company cadets. Those among the cadets who demonstrate ability and a personality compatible with the organization are later selected as managers. Because of the seniority system, many are well past middle age before they achieve high status. There are signs that the system is weakening, however, as efforts are more often made to lift promising young men and women out of low-echelon positions. Criticism of the traditional method has been stimulated by the example of some of the newer corporations and of those owned by foreign capital. The few individuals in the Japanese business world who have emerged as personalities are either founders of corporations, managers of family enterprises, or small businesspeople. They share a strong inclination to make their own decisions and to minimize the role of directors and boards.

Modern trends

The sheer size of the largest limited-liability companies, or corporations—especially “multinationals,” with holdings across the world—has been a subject of discussion and public concern since the end of the 19th century, for with this rise has come market and political power. While some large firms have declined, been taken over, or gone out of business, others have grown to replace them. The giant firms continue to increase their sales and assets by expanding their markets, by diversifying, and by absorbing smaller companies. Diversification carried to the extreme has brought into being the conglomerate company, which acquires and operates subsidiaries that are often in unrelated fields. The holding company, like the conglomerate, acts as a kind of internal capital market, allocating funds to its subsidiaries on the basis of financial performance. The decline or failure of many conglomerates, however, has cast doubt upon the competence of any one group of executives to manage a diversity of unrelated operations. Empirical evidence from the United States suggests that conglomerates have been less successful financially than companies that have had a clear product-market focus based on organizational strengths and competencies.

The causes of such vast corporate growth have found varying explanations. One school of thought, most prominently represented by American economist John Kenneth Galbraith, sees growth as stemming from the imperatives of modern technology. Only a large firm can employ the range of talent needed for research and development in areas such as aerospace and nuclear energy. And only companies of this stature have the capacity for innovating industrial processes and entering international markets. Just as government has had to grow in order to meet new responsibilities, so have corporations found that producing for the contemporary economy calls for the intricate interaction of executives, experts, and extensive staffs of employees. While there is certainly room for small firms, the kinds of goods and services that the public seems to want increasingly require the resources that only a large company can master.

Others hold that the optimum size of the efficient firm is substantially smaller than many people believe, and some research has shown that profit rates in industries having a large number of smaller firms are just as high as in those in which a few big companies dominate a market. In this view, corporate expansion stems not from technological necessity but rather from an impulse to acquire or establish new subsidiaries or to branch out into new fields. The structures of most large corporations are really the equivalent of a congeries of semi-independent companies. In some cases these divisions compete against one another as if they were separately owned. The picture has been further complicated by growth across national boundaries, producing multinational companies, principally firms from western Europe and North America. Their enormous size and extent raise questions about their accountability and political and economic influence and power.

The impact of the large company

While it is generally agreed that the power of large companies extends beyond the economic sphere, this influence is difficult to measure in any objective way. The processes of business entail at least some effort to ensure the sympathetic enactment and enforcement of legislation, since costs and earnings are affected by tax rates and government regulations. Companies and business groups send agents to local and national capitals and use such vehicles as advertising to enlist support for policies that they favour. Although in many countries companies may not legally contribute directly to candidates running for public office, their executives and stockholders may do so as individuals. In the United States, a 2010 Supreme Court decision (Citizens United v. Federal Election Commission) gave companies the right to engage in independent political advertising. Companies may also make payments to lobbyists and contribute to committees working to pass or defeat legislative proposals. In practical terms, many lawmakers look upon companies as part of their constituency, although, if their districts depend on local plants, these lawmakers may be concerned more with preserving jobs than with protecting company profits. In any case, limited-liability companies are central institutions in society; it would be unrealistic to expect them to remain aloof from the political process that affects their operations, performance, and principles.

The decisions made by company managements have ramifications throughout society. In effect, companies can decide which parts of the country or even which parts of the world will prosper and which will decline by choosing where to locate their plants and other installations. The giant companies not only decide what to produce but also help to instill in their customers a desire for the amenities that the companies make available. To the extent that large firms provide employment, their personnel requirements determine the curricula of schools and universities. For these reasons, individuals’ aspirations and dissatisfactions are likely to be influenced by large companies. This does not mean that large business firms can influence the public in any way they choose; it is simply that they are the only institutions available to perform certain functions. Automobiles, computers, and electric toasters must come from company auspices if they are to be provided at all. Taking this dependence as a given, companies tend to create an environment congenial to the conduct of their business.

The social role of the large company

Some company executives believe that their companies should act as “responsible” public institutions, holding power in trust for the community. Most companies engage in at least some public-service projects and make contributions to charities. A certain percentage of these donations can be deducted from a corporation’s taxable income. Most of the donated money goes to private health, education, and welfare agencies, ranging from local hospital and charity funds to civil rights groups and cultural institutions.
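The deductibility mentioned above can be sketched with a toy calculation. Everything here is an assumption for illustration: the 10 percent cap is not a statement of any particular jurisdiction’s tax rules, and `deductible_donation` is a hypothetical helper, not a real tax API:

```python
def deductible_donation(taxable_income: float, donation: float,
                        cap_rate: float = 0.10) -> float:
    """Return the portion of a charitable donation that may be
    deducted, limited to cap_rate of taxable income (assumed cap)."""
    return min(donation, taxable_income * cap_rate)

# A 300,000 donation against 2,000,000 of taxable income:
# only 10% of income (200,000) is deductible under the assumed cap.
print(deductible_donation(2_000_000.0, 300_000.0))
```

Under this assumed rule, a donation below the cap is fully deductible, while anything above it yields no further tax benefit in the current year.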

At the other extreme, some hold that companies should reject the notion that they have public duties, arguing that society as a whole will be better off if companies maximize their profits, for doing so will expand employment, improve technology, raise living standards, and provide individuals with more money to donate to causes of their own choosing. A cornerstone of this argument is that management has no right to withhold dividends: if stockholders wish to give gifts, they should do so from their personal funds. Other critics complain instead that large companies have been much too conservative in defining their responsibilities. Not only have most firms avoided public controversy, but they also have sought to reap public-relations benefits from every sum that they donate. Very few, say these critics, have made more than a token effort to promote minority hiring, provide day-care centres, or take on school dropouts and former convicts. Companies have also been charged with abandoning the central cities, profiting from military contracts, misrepresenting their merchandise, and investing in foreign countries governed by repressive regimes. A perennial indictment has been that profits, prices, and executive compensation are too high, while the wages and taxes paid by corporations are too low.

In the late 20th century a new school of critics emerged who stressed the social costs of the large company. They charged that automobiles, pharmaceuticals, and other products were badly designed and dangerous to their users. The consumer movement, led by such figures as American lawyer Ralph Nader, was joined by environmental critics who pointed to the quantities of waste products released into streams and into the air. Local and national laws were passed in an effort to set higher standards of safety and to force companies to install antipollution devices. However, the costs of these measures are inevitably passed on to the consumer. If a nuclear power plant must have cooling towers so that it does not discharge heated water into an adjacent lake, for example, the extra equipment results in higher electricity bills. Most companies are hesitant to take such steps on their own initiative, fearing that they will need to raise prices without thereby increasing profits. Society, however, is already paying for the costs of traffic congestion, trash removal, and nutritional deficiencies. The prices charged by companies are far from reflecting the total impact that the manufacture and consumption of their products have upon human life.

It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline