Math Is Fun Forum

  Discussion about math, puzzles, games and fun.


#1751 2023-04-26 16:23:39

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,369

Re: Miscellany

1754) Goya Awards

Gist

The Goya Awards (Spanish: Premios Goya) are Spain's main national annual film awards, commonly referred to as the Academy Awards of Spain. The awards were established in 1987, a year after the founding of the Academy of Cinematographic Arts and Sciences, and the first awards ceremony took place on March 16, 1987 at the Teatro Lope de Vega, Madrid. The ceremony continues to take place annually at Centro de Congresos Príncipe Felipe, around the end of January/beginning of February, and awards are given to films produced during the previous year.

Summary

Goya Awards: The Spanish Academy of Motion Picture Arts and Sciences gives out these prestigious awards every year to the best professionals in the different technical and creative categories of Spanish filmmaking. Its first edition took place in 1987. There are 28 different categories, as well as the Goya of Honour, which recognises a whole lifetime dedicated to films. Winners of that honour have included actors such as Manuel Alexandre, José Sacristán, Ángela Molina, Pepa Flores (Marisol) and Alfredo Landa, and directors such as Mario Camus. Internationally renowned professionals such as Pedro Almodóvar, Amenábar, Carmen Maura, Antonio Banderas, Javier Bardem and Penélope Cruz have also been award winners. It is, therefore, an award that celebrates the quality of Spanish films.

Details

The Goya Awards (Spanish: Premios Goya) are Spain's main national annual film awards, commonly referred to as the Academy Awards of Spain.

The awards were established in 1987, a year after the founding of the Academy of Cinematographic Arts and Sciences, and the first awards ceremony took place on March 16, 1987, at the Teatro Lope de Vega in Madrid. The ceremony continues to take place annually at Centro de Congresos Príncipe Felipe, around the end of January/beginning of February, and awards are given to films produced during the previous year.

The award itself is a small bronze bust of Francisco Goya created by the sculptor José Luis Fernández, although the original sculpture for the first edition of the Goyas was by Miguel Ortiz Berrocal.

History

To reward the best Spanish films of each year, the Spanish Academy of Motion Pictures and Arts decided to create the Goya Awards. The Goya Awards are Spain's main national film awards, considered by many in Spain, and internationally, to be the Spanish equivalent of the American Academy Awards. The inaugural ceremony took place on March 16, 1987, at the Lope de Vega theatre in Madrid. From the 2nd edition until 1995, the awards were held at the Palacio de Congresos on the Paseo de la Castellana. They then moved to the similarly named Palacio Municipal de Congresos, also in Madrid. In 2000, the ceremony took place in Barcelona, at the Barcelona Auditorium.

In 2003, a large number of film professionals took advantage of the Goya awards ceremony to express their opposition to the Aznar government's support of the U.S. invasion of Iraq. In 2004, the AVT (a Spanish association against terrorism) demonstrated against terrorism and ETA, a paramilitary organization of Basque separatists, in front of the Lope de Vega theatre. In 2005, José Luis Rodríguez Zapatero became the first prime minister in the history of Spain to attend the event. In 2013, the minister of culture and education, José Ignacio Wert, did not attend, saying he had "other things to do"; some actors said that this decision reflected the government's lack of respect for their profession and industry. In the 2019 edition, the awards took place in Seville, and in 2020, the ceremony was held in Málaga.

Awards

The awards are currently delivered in 28 categories, excluding the Honorary Goya Award and the International Goya Award, with an increase to five nominees per category established for the upcoming 37th edition. From the 13th to the 36th edition there was a maximum of four candidates per category (there had been three candidates in the first edition, five in the 2nd and 3rd editions, and three from the fourth to the twelfth editions).




It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1752 2023-04-26 22:20:33

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,369

Re: Miscellany

1755) Data Entry

Summary

Data processing is the manipulation of data by a computer. It includes the conversion of raw data to machine-readable form, flow of data through the CPU and memory to output devices, and formatting or transformation of output. Any use of computers to perform defined operations on data can be included under data processing. In the commercial world, data processing refers to the processing of data required to run organizations and businesses.

Details

Data entry is the process of digitizing data by entering it into a computer system for organization and management purposes. It is a person-based process and one of the important basic tasks needed when no machine-readable version of the information is readily available for planned computer-based analysis or processing.

Sometimes the value of information about information "can be greater than the value of the information itself." Data entry can also involve filling in required information that is then "data-entered" from what was written on the source document, such as the growth in available items in a category. This is a higher level of abstraction than metadata, "information about data." Common errors in data entry include transposition errors, misclassified data, duplicate data, and omitted data; these are similar to bookkeeping errors.
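
To make the transposition point concrete, here is a minimal Python sketch (not from the text above) of a check-digit test. The Luhn algorithm, widely used on payment card and account numbers, catches every single-digit keying error and most adjacent transpositions; the sample number is a standard illustrative one.

```python
# Minimal sketch of a check-digit test for keying errors (illustrative).
# The Luhn algorithm detects every single-digit error and most
# adjacent transpositions, the classic data-entry mistakes.

def luhn_valid(number: str) -> bool:
    """Return True if the digit string passes the Luhn check."""
    digits = [int(ch) for ch in number if ch.isdigit()]
    if len(digits) < 2:
        return False
    total = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:        # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9        # equivalent to summing d's two digits
        total += d
    return total % 10 == 0

print(luhn_valid("79927398713"))  # True: correctly keyed number
print(luhn_valid("79927398731"))  # False: last two digits transposed
```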

Procedures

Data entry is often done with a keyboard and at times also using a mouse, although a manually-fed scanner may be involved.

Historically, devices lacking any pre-processing capabilities were used.

Keypunching

Data entry using keypunches was related to the concept of batch processing – there was no immediate feedback.

Computer keyboards

Computer keyboards and online data entry make it possible to give immediate feedback to the data entry clerk doing the work.

Numeric keypads

The addition of numeric keypads to computer keyboards introduced quicker and often also less error-prone entry of numeric data.

Computer mouse

The use of a computer mouse, typically on a personal computer, opened up another option for doing data entry.

Touch screens

Touch screens introduced even more options, including the ability to stand and do data entry, especially given "a proper height of work surface when performing data entry."

Spreadsheets

Although most data entered into a computer are stored in a database, a significant amount is stored in spreadsheets. The use of spreadsheets instead of databases for data entry can be traced to the 1979 introduction of VisiCalc, and the practice of storing computational data in what some consider the wrong place continues.

Format control and specialized data validation are reasons that have been cited for using database-oriented data entry software.

Data management

The search for assurance about the accuracy of the data entry process predates computer keyboards and online data entry. IBM even went beyond their 056 Card Verifier and developed their quieter IBM 059 model.

Modern techniques go beyond mere range checks, especially when the new data can be evaluated probabilistically against what is already known.
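
As an illustration of what such validation can look like, here is a minimal Python sketch combining format control, a range check, and a simple probabilistic flag against past entries. The field names and thresholds are invented for the example.

```python
import re
from statistics import mean, stdev

DATE_RE = re.compile(r"^\d{4}-\d{2}-\d{2}$")  # format control: ISO dates only

def validate(record, history):
    """Return a list of problems found in one entered record.

    `history` holds previously accepted values of the same numeric field,
    so new entries can be compared against what is statistically likely.
    """
    problems = []
    if not DATE_RE.match(record.get("date", "")):
        problems.append("date: not in YYYY-MM-DD format")
    qty = record.get("quantity")
    if not isinstance(qty, (int, float)) or not (0 <= qty <= 10_000):
        problems.append("quantity: missing or outside range 0-10000")
    elif len(history) >= 2 and stdev(history) > 0:
        # Beyond a plain range check: flag values that are improbable
        # relative to past entries (here, more than 3 standard deviations).
        z = abs(qty - mean(history)) / stdev(history)
        if z > 3:
            problems.append(f"quantity: {qty} is {z:.0f} sd from the mean")
    return problems

print(validate({"date": "2023-04-26", "quantity": 9500}, [90, 95, 102, 98]))
```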

Assessment

In one study, a medical school tested its second year students and found their data entry skills – needed if they are to do small-scale unfunded research as part of their training – were below what the school considered acceptable, creating potential barriers.

Additional Information

Data Entry Operator responsibilities include:

* Entering customer and account data from source documents within time limits
* Compiling, verifying accuracy of and sorting information to prepare source data for computer entry
* Reviewing data for deficiencies or errors, correcting any incompatibilities and checking output
* Researching and obtaining further information for incomplete documents
* Applying data program techniques and procedures
* Generating reports, storing completed work in designated locations and performing backup operations
* Scanning documents and printing files, when needed
* Keeping information confidential
* Responding to queries for information and accessing relevant files
* Complying with data integrity and security policies
* Ensuring proper use of office equipment and addressing any malfunctions

Requirements and skills

* Proven data entry work experience, as a Data Entry Operator or Office Clerk
* Experience with MS Office and data programs
* Familiarity with administrative duties
* Experience using office equipment, like fax machine and scanner
* Typing speed and accuracy
* Excellent knowledge of correct spelling, grammar and punctuation
* Attention to detail
* Confidentiality
* Organization skills, with an ability to stay focused on assigned tasks
* High school diploma; additional computer training or certification will be an asset

Frequently asked questions

What does a Data Entry Operator do?

A Data Entry Operator compiles and verifies data to ensure accuracy while appropriately formatting it. This includes preparing documents for entry and transcribing from paper formats into computer files using manual entry or scanners.

What are the duties and responsibilities of a Data Entry Operator?

The duties of a Data Entry Operator include coding information, troubleshooting processing errors and achieving an organization's goals by completing the necessary tasks. They are also responsible for complying with data integrity and security policies, printing and scanning files and generating reports.

What makes a good Data Entry Operator?

A good Data Entry Operator requires strong attention to detail and excellent written and verbal communication skills. They should also have the ability to perform repetitive tasks with a high degree of accuracy in an ever-changing working environment.

Who does a Data Entry Operator work with?

A Data Entry Operator typically works alongside a team of fellow operators in an office setting. In addition, they work closely with various administrative persons such as an Office Clerk.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1753 2023-04-27 18:53:34

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,369

Re: Miscellany

1756) Hallucination

Summary

A hallucination is a perception in the absence of an external stimulus that has the qualities of a real perception. Hallucinations are vivid, substantial, and perceived to be located in external objective space. Hallucination has been described as a combination of two conscious brain states: wakefulness and REM (rapid eye movement) sleep. Hallucinations are distinguishable from several related phenomena: dreaming (REM sleep), which does not involve wakefulness; pseudohallucination, which does not mimic real perception and is accurately perceived as unreal; illusion, which involves distorted or misinterpreted real perception; and mental imagery, which does not mimic real perception and is under voluntary control. Hallucinations also differ from "delusional perceptions", in which a correctly sensed and interpreted stimulus (i.e., a real perception) is given some additional significance. Many hallucinations also happen during sleep paralysis.

Hallucinations can occur in any sensory modality—visual, auditory, olfactory, gustatory, tactile, proprioceptive, equilibrioceptive, nociceptive, thermoceptive and chronoceptive. Hallucinations are referred to as multimodal if multiple sensory modalities occur.

A mild form of hallucination is known as a disturbance, and can occur in most of the senses above. These may be things like seeing movement in peripheral vision, or hearing faint noises or voices. Auditory hallucinations are very common in schizophrenia. They may be benevolent (telling the subject good things about themselves) or malicious, cursing the subject. 55% of auditory hallucinations are malicious in content; for example, people talking about the subject rather than speaking to them directly. Like auditory hallucinations, the source of the visual counterpart can also be behind the subject. This can produce a feeling of being looked at or stared at, usually with malicious intent. Frequently, auditory hallucinations and their visual counterpart are experienced by the subject together.

Hypnagogic hallucinations and hypnopompic hallucinations are considered normal phenomena. Hypnagogic hallucinations can occur as one is falling asleep and hypnopompic hallucinations occur when one is waking up. Hallucinations can be associated with drug use (particularly deliriants), sleep deprivation, psychosis, neurological disorders, and delirium tremens.

The word "hallucination" itself was introduced into the English language by the 17th-century physician Sir Thomas Browne in 1646, derived from the Latin word alucinari, meaning to wander in the mind. For Browne, a hallucination is a sort of vision that is "depraved and receive its objects erroneously".

Details

What Are Hallucinations?

If you're like most folks, you probably think hallucinations have to do with seeing things that aren't really there. But there's a lot more to it than that. It could mean you touch or even smell something that doesn't exist.

There are many different causes. It could be a mental illness called schizophrenia, a nervous system problem like Parkinson's disease, epilepsy, or a number of other things.

If you or a loved one has hallucinations, go see a doctor. You can get treatments that help control them, but a lot depends on what's behind the trouble. There are a few different types.

Common Causes of Hallucinations

Hallucinations most often result from:

* Schizophrenia. More than 70% of people with this illness get visual hallucinations, and 60%-90% hear voices. But some may also smell and taste things that aren't there.
* Parkinson's disease. Up to half of people who have this condition sometimes see things that aren't there.
* Alzheimer's disease and other forms of dementia, especially Lewy body dementia. These cause changes in the brain that can bring on hallucinations, which may be more likely when the disease is advanced.
* Migraines. About a third of people with this kind of headache also have an "aura," a type of visual hallucination. It can look like a multicolored crescent of light.
* Brain tumor. Depending on where it is, it can cause different types of hallucinations. If it's in an area that has to do with vision, you may see things that aren't real. You might also see spots or shapes of light. Tumors in some parts of the brain can cause hallucinations of smell and taste.
* Charles Bonnet syndrome. This condition causes people with vision problems like macular degeneration, glaucoma, or cataracts to see things. At first, you may not realize it's a hallucination, but eventually, you figure out that what you're seeing isn't real.
* Epilepsy. The seizures that go along with this disorder can make you more likely to have hallucinations. The type you get depends on which part of your brain the seizure affects.

Hearing Things (Auditory Hallucinations)

You may sense that the sounds are coming from inside or outside your mind. You might hear the voices talking to each other or feel like they're telling you to do something. Causes could include:

* Schizophrenia
* Bipolar disorder
* Psychosis
* Borderline personality disorder
* Posttraumatic stress disorder
* Hearing loss
* Sleep disorders
* Brain lesions
* Drug use

Seeing Things (Visual Hallucinations)

For example, you might:

* See things others don’t, like insects crawling on your hand or on the face of someone you know
* See objects with the wrong shape or see things moving in ways they usually don’t
* See flashes of light. A rare type of seizure, called an occipital seizure, may cause you to see brightly colored spots or shapes.

Other causes include:

* Irritation in the visual cortex, the part of your brain that helps you see
* Damage to brain tissue (the doctor will call this lesions)
* Schizophrenia
* Schizoaffective disorder
* Depression
* Bipolar disorder
* Delirium (from infections, drug use and withdrawal, or body and brain problems)
* Dementia
* Parkinson’s disease
* Seizures
* Migraines
* Brain lesions and tumors
* Sleep problems
* Drugs that make you hallucinate
* Metabolism problems
* Creutzfeldt-Jakob disease

Smelling Things (Olfactory Hallucinations)

You may think the odor is coming from something around you, or that it's coming from your own body. Causes can include:

* Head injury
* Colds
* Temporal lobe seizure
* Inflamed sinuses
* Brain tumors
* Parkinson’s disease

Tasting Things (Gustatory Hallucinations)

You may feel that something you eat or drink has an odd taste. Causes can include:

* Temporal lobe disease
* Brain lesions
* Sinus diseases
* Epilepsy

Feeling Things (Tactile or Somatic Hallucinations)

You might think you're being tickled even when no one else is around, or you may feel like insects are crawling on or under your skin. You could feel a blast of hot air on your face that isn't real. Causes include:

* Schizophrenia
* Schizoaffective disorder
* Drugs that make you hallucinate
* Delirium tremens
* Alcohol
* Alzheimer's disease
* Lewy body dementia
* Parkinson's disease

Diagnosis and Treatment of Hallucinations

First, your doctor needs to find out what's causing your hallucinations. They'll ask about your medical history and do a physical exam. Then they'll ask about your symptoms.

They may need to do tests to help figure out the problem. For instance, an EEG, or electroencephalogram, checks for unusual patterns of electrical activity in your brain. It could show if your hallucinations are due to seizures.

You might get an MRI, or magnetic resonance imaging, which uses powerful magnets and radio waves to make pictures of the inside of your body. It can find out if a brain tumor or something else, like an area that's had a small stroke, could be to blame.

Your doctor will treat the condition that's causing the hallucinations. This can include things like:

* Medication for schizophrenia or dementias like Alzheimer's disease
* Antiseizure drugs to treat epilepsy
* Treatment for macular degeneration, glaucoma, and cataracts
* Surgery or radiation to treat tumors
* Drugs called triptans, beta-blockers, or anticonvulsants for people with migraines

Your doctor may prescribe pimavanserin (Nuplazid). This medicine treats hallucinations and delusions linked to psychosis that affect some people with Parkinson’s disease.

Sessions with a therapist can also help. For example, cognitive behavioral therapy, which focuses on changes in thinking and behavior, helps some people manage their symptoms better.

Additional Information

Hallucination is the experience of perceiving objects or events that do not have an external source, such as hearing one’s name called by a voice that no one else seems to hear. A hallucination is distinguished from an illusion, which is a misinterpretation of an actual stimulus.

A historical survey of the study of hallucinations reflects the development of scientific thought in psychiatry, psychology, and neurobiology. By 1838 the significant relationship between the content of dreams and of hallucinations had been pointed out. In the 1840s the occurrence of hallucinations under a wide variety of conditions (including psychological and physical stress) as well as their genesis through the effects of such drugs as stramonium and hashish had been described.

French physician Alexandre-Jacques-François Brierre de Boismont in 1845 described many instances of hallucinations associated with intense concentration, or with musing, or simply occurring in the course of psychiatric disorder. In the last half of the 19th century, studies of hallucinations continued. Investigators in France were particularly oriented toward abnormal psychological function, and from this came descriptions of hallucinosis during sleepwalking and related reactions. In the 1880s English neurologist John Hughlings Jackson described hallucination as being released or triggered by the nervous system.

Other definitions of the term emerged later. Swiss psychiatrist Eugen Bleuler (1857–1939) defined hallucinations as “perceptions without corresponding stimuli from without,” while the Psychiatric Dictionary in 1940 referred to hallucination as the “apparent perception of an external object when no such object is present.” A spirited interest in hallucinations continued well into the 20th century. Sigmund Freud’s concepts of conscious and unconscious activities added new significance to the content of dreams and hallucinations. It was theorized that infants normally hallucinate the objects and processes that give them gratification. Although the notion has since been disputed, this “regression” hypothesis (i.e., that hallucinating is a regression, or return, to infantile ways) is still employed, especially by those who find it clinically useful. During the same period, others put forth theories that were more broadly biological than Freud’s but that had more points in common with Freud than with each other.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1754 2023-04-28 02:41:07

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,369

Re: Miscellany

1757) Balance of Payments

Summary

In international economics, the balance of payments (also known as balance of international payments and abbreviated BOP or BoP) of a country is the difference between all money flowing into the country in a particular period of time (e.g., a quarter or a year) and the outflow of money to the rest of the world. These financial transactions are made by individuals, firms and government bodies to compare receipts and payments arising out of trade of goods and services.

The balance of payments consists of two components: the current account and the capital account. The current account reflects a country's net income, while the capital account reflects the net change in ownership of national assets.

Details

The balance of payments (BOP) is the method countries use to monitor all international monetary transactions over a specific period. Usually, the BOP is calculated every quarter and every calendar year.

All trades conducted by both the private and public sectors are accounted for in the BOP to determine how much money is going in and out of a country. If a country has received money, this is known as a credit, and if a country has paid or given money, the transaction is counted as a debit.

Theoretically, the BOP should be zero, meaning that assets (credits) and liabilities (debits) should balance, but in practice, this is rarely the case. Thus, the BOP can tell the observer if a country has a deficit or a surplus and from which part of the economy the discrepancies are stemming.

* The balance of payments (BOP) is the record of all international financial transactions made by the residents of a country.
* There are three main categories of the BOP: the current account, the capital account, and the financial account.
* The current account is used to mark the inflow and outflow of goods and services into a country.
* The capital account is where all international capital transfers are recorded.
* In the financial account, international monetary flows related to investment in business, real estate, bonds, and stocks are documented.
* The current account should be balanced versus the combined capital and financial accounts, leaving the BOP at zero, but this rarely occurs.

The Balance of Payments Divided

The BOP is divided into three main categories: the current account, the capital account, and the financial account. Within these three categories are sub-divisions, each of which accounts for a different type of international monetary transaction.

The Current Account

The current account is used to mark the inflow and outflow of goods and services into a country. Earnings on investments, both public and private, are also put into the current account.

Within the current account are credits and debits on the trade of merchandise, which includes goods such as raw materials and manufactured goods that are bought, sold, or given away (possibly in the form of aid). Services refer to receipts from tourism, transportation (like the levy that must be paid in Egypt when a ship passes through the Suez Canal), engineering, business service fees (from lawyers or management consulting, for example), and royalties from patents and copyrights.

When combined, goods and services together make up a country's balance of trade (BOT). The BOT is typically the largest component of a country's balance of payments, as it covers total imports and exports. If a country has a balance of trade deficit, it imports more than it exports, and if it has a balance of trade surplus, it exports more than it imports.
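
To make the deficit/surplus arithmetic concrete, here is a tiny Python sketch; the export and import figures are invented for illustration.

```python
# Balance of trade (BOT) = exports - imports; figures are invented.
exports = 250.0  # goods and services sold abroad, in billions
imports = 300.0  # goods and services bought from abroad, in billions

bot = exports - imports
status = "surplus" if bot > 0 else "deficit" if bot < 0 else "balanced"
print(f"Balance of trade: {bot:+.1f} billion ({status})")
# -> Balance of trade: -50.0 billion (deficit)
```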

Receipts from income-generating assets such as stocks (in the form of dividends) are also recorded in the current account. The last component of the current account is unilateral transfers. These are credits that are mostly worker's remittances, which are salaries sent back into the home country of a national working abroad, as well as foreign aid that is directly received.

The Capital Account

The capital account is where all international capital transfers are recorded. This refers to the acquisition or disposal of non-financial assets (for example, a physical asset such as land) and non-produced assets, which are needed for production but have not been produced, like a mine used for the extraction of diamonds.

The capital account is broken down into the monetary flows branching from debt forgiveness, the transfer of goods and financial assets by migrants leaving or entering a country, the transfer of ownership of fixed assets (assets such as equipment used in the production process to generate income), the transfer of funds received from the sale or acquisition of fixed assets, gift and inheritance taxes, death levies, and, finally, uninsured damage to fixed assets.

The Financial Account

In the financial account, international monetary flows related to investment in business, real estate, bonds, and stocks are documented. Also included are government-owned assets, such as foreign reserves, gold, special drawing rights (SDRs) held with the International Monetary Fund (IMF), private assets held abroad, and direct foreign investment. Assets owned by foreigners, private and official, are also recorded in the financial account.

The Balancing Act

The current account should be balanced against the combined capital and financial accounts; however, as mentioned above, this rarely happens. We should also note that, with fluctuating exchange rates, the change in the value of money can add to BOP discrepancies.

If a country acquires a fixed asset abroad, the amount paid is marked as a capital account outflow. However, the sale of that fixed asset would be considered a current account inflow (earnings from investments). A current account deficit would thus be funded.

When a country has a current account deficit that is financed by the capital account, the country is actually foregoing capital assets for more goods and services. If a country is borrowing money to fund its current account deficit, this would appear as an inflow of foreign capital in the BOP.
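
The balancing described above can be sketched in a few lines of Python; the figures and the sign convention (inflows positive, outflows negative) are illustrative assumptions, not official accounting rules.

```python
# Illustrative figures; sign convention: inflows positive, outflows negative.
current_account   = -50.0  # net outflow: imports and income payments exceed receipts
capital_account   =   2.0  # net capital transfers received
financial_account =  47.0  # net inflow of foreign investment and borrowing

# In theory the three accounts sum to zero; in practice the residual is
# recorded as "net errors and omissions".
errors_and_omissions = -(current_account + capital_account + financial_account)
print(f"Net errors and omissions: {errors_and_omissions:+.1f}")  # -> +1.0
```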

Liberalizing the Accounts

The rise of global financial transactions and trade in the late-20th century spurred BOP and macroeconomic liberalization in many developing nations. With the advent of the emerging market economic boom, developing countries were urged to lift restrictions on capital- and financial-account transactions to take advantage of these capital inflows.

Some economists believe that the liberalization of BOP restrictions eventually led to financial crises in emerging market nations, such as the Asian financial crisis.

Many of these countries had restrictive macroeconomic policies, by which regulations prevented foreign ownership of financial and non-financial assets. The regulations also limited the transfer of funds abroad.

With capital and financial account liberalization, capital markets began to grow, not only allowing a more transparent and sophisticated market for investors but also giving rise to foreign direct investment (FDI).

For example, investments in the form of a new power station would bring a country greater exposure to new technologies and efficiency, eventually increasing the nation's overall gross domestic product (GDP) by allowing for greater volumes of production. Liberalization can also facilitate less risk by allowing greater diversification in various markets.

The Bottom Line

The balance of payments (BOP) is the method by which countries measure all of the international monetary transactions within a certain period. The BOP consists of three main accounts: the current account, the capital account, and the financial account. The current account is meant to balance against the sum of the financial and capital account but rarely does.

Globalization in the late 20th-century led to BOP liberalization in many emerging market economies. These countries lifted restrictions on BOP accounts to take advantage of the cash flows arriving from foreign, developed nations, which in turn boosted their economies.

Importance of Balance of Payment (BOP)

Balance of Payments is a systematic record of all economic transactions that take place between the residents of a country and the rest of the world.

The BOP is a comprehensive set of accounts that tracks the flow of currency and other monetary assets coming into and going out of a nation. These payments are used for international trade, foreign investments, and other financial activities. The balance of payments is divided into two accounts: the current account (which includes payments for imports, exports, services, and transfers) and the capital account (which includes payments for physical and financial assets). A deficit in one account is matched by a surplus in the other account. The balance of trade is only one part of the overall balance of payments set of accounts.

Importance of Balance of Payment (BOP):

(a) A country’s Balance of Payments reveals various aspects of a country’s international economic position. It presents the international financial position of the country. If the economy needs support in the form of imports, the government can prepare suitable policies to channel the imported funds and technology to the critical sectors of the economy that constrain potential growth.

(b) It helps the government in taking decisions on monetary and fiscal policies on the one hand, and on external trade and payments issues on the other. It breaks the business transactions of an economy down into exports and imports of goods and services for a particular financial year. Here, the government can recognize the areas that have the potential for export-oriented growth and can prepare policies supporting those domestic industries.

(c) In the case of a developing country, the balance of payments shows the extent of dependence of the country’s economic development on the financial assistance by the developed countries. The government can also use the indications from Balance of Payments to discern the state of the economy and formulate its policies of inflation control, monetary and fiscal policies based on that.

(d) The greatest importance of the balance of payments lies in its serving as an indicator of a country’s changing international economic position. The balance of payments is an economic barometer which can be used to appraise a nation’s short-term international economic prospects, to evaluate the degree of its international solvency, and to determine the appropriateness of the exchange rate of the country’s currency.

(e) However, a country’s favorable balance of payments cannot be taken as an indicator of economic prosperity, nor is an adverse or even unfavorable balance of payments a reflection of bankruptcy. The government can adopt defensive measures such as higher tariffs and duties on imports so as to discourage imports of non-essential items and encourage the domestic industries to be self-sufficient.

(f) A balance of payments deficit per se is not proof of the competitive weakness of a nation in foreign markets. However, the longer the balance of payments deficit continues, the more it would imply some fundamental problems in that economy.

(g) A favorable balance of payments should not always make a country complacent. A poor country may have a favorable balance of payments due to a large inflow of foreign loans and equity capital. A developed country may have an adverse balance of payments due to massive assistance given to developing countries.

(h) It does not provide data about assets and liabilities that relate one country to others. However, despite all these shortcomings, the significance of the balance of payments lies in the fact that it provides vital information to understand a country’s economic dealings with other countries.

(i) With the development of national income accounting, the balance of payments account has been used to assess the effect of foreign trade and transactions on the level of national income of the nation.

(j) If the country has a flourishing export trade, the government can adopt measures such as the devaluation of its currency to make its goods and services available in the international market at cheaper rates and bolster exports.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1755 2023-04-28 23:26:43

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,369

Re: Miscellany

1758) Neanderthal

Gist

Neanderthal is a species of the human genus (Homo) that inhabited much of Europe, the Mediterranean lands, and Central Asia c. 200,000–24,000 years ago. The name derives from the discovery in 1856 of remains in a cave above Germany’s Neander Valley. Most scholars designate the species as Homo neanderthalensis and do not consider Neanderthals direct ancestors of modern humans (Homo sapiens); however, both species share a common ancestor that lived as recently as 130,000 years ago. Some scholars report evidence of limited interbreeding between Neanderthals and early modern humans of European and Asian stock. Neanderthals were short, stout, and powerful. Their braincases were long, low, and wide, and their cranial capacity equaled or surpassed that of modern humans. Their limbs were heavy, but they seem to have walked fully erect and had hands as capable as those of modern humans. They were cave dwellers who used fire, wielded stone tools and wooden spears to hunt animals, buried their dead, and cared for their sick or injured. They may have used language and may have practiced a primitive form of religion.

Summary

Neanderthals (Homo neanderthalensis or H. sapiens neanderthalensis), also written as Neandertals, are an extinct species or subspecies of archaic humans who lived in Eurasia until about 40,000 years ago. The reasons for Neanderthal extinction are disputed. Theories for their extinction include demographic factors such as small population size and inbreeding, competitive replacement, interbreeding and assimilation with modern humans, climate change, disease or a combination of these factors.

It is unclear when the line of Neanderthals split from that of modern humans; studies have produced various intervals ranging from 315,000 to more than 800,000 years ago. The date of divergence of Neanderthals from their ancestor H. heidelbergensis is also unclear. The oldest potential Neanderthal bones date to 430,000 years ago, but the classification remains uncertain. Neanderthals are known from numerous fossils, especially from after 130,000 years ago. The type specimen, Neanderthal 1, was found in 1856 in the Neander Valley in present-day Germany. For much of the early 20th century, European researchers depicted Neanderthals as primitive, unintelligent, and brutish. Although knowledge and perception of them has markedly changed since then in the scientific community, the image of the unevolved caveman archetype remains prevalent in popular culture.

Neanderthal technology was quite sophisticated. It includes the Mousterian stone-tool industry and ability to create fire and build cave hearths, make adhesive birch bark tar, craft at least simple clothes similar to blankets and ponchos, weave, go seafaring through the Mediterranean, and make use of medicinal plants, as well as treat severe injuries, store food, and use various cooking techniques such as roasting, boiling, and smoking. Neanderthals made use of a wide array of food, mainly hoofed mammals, but also other megafauna, plants, small mammals, birds, and aquatic and marine resources. Although they were probably apex predators, they still competed with cave bears, cave lions, cave hyenas, and other large predators. A number of examples of symbolic thought and Palaeolithic art have been inconclusively attributed to Neanderthals, namely possible ornaments made from bird claws and feathers or shells, collections of unusual objects including crystals and fossils, engravings, music production (possibly indicated by the Divje Babe flute), and Spanish cave paintings contentiously dated to before 65,000 years ago. Some claims of religious beliefs have been made. Neanderthals were likely capable of speech, possibly articulate, although the complexity of their language is not known.

Compared with modern humans, Neanderthals had a more robust build and proportionally shorter limbs. Researchers often explain these features as adaptations to conserve heat in a cold climate, but they may also have been adaptations for sprinting in the warmer, forested landscape that Neanderthals often inhabited. Nonetheless, they had cold-specific adaptations, such as specialised body-fat storage and an enlarged nose to warm air (although the nose could have been caused by genetic drift). Average Neanderthal men stood around 165 cm (5 ft 5 in) and women 153 cm (5 ft 0 in) tall, similar to pre-industrial modern humans. The braincases of Neanderthal men and women averaged about 1,600 cm³ (98 cu in) and 1,300 cm³ (79 cu in) respectively, which is considerably larger than the modern human average. The Neanderthal skull was more elongated and the brain had smaller parietal lobes and cerebellum, but larger temporal, occipital, and orbitofrontal regions.

The total population of Neanderthals remained low, proliferating weakly harmful gene variants and precluding effective long-distance networks. Despite this, there is evidence of regional cultures and thus of regular communication between communities. They may have frequented caves and moved between them seasonally. Neanderthals lived in a high-stress environment with high trauma rates, and about 80% died before the age of 40. The 2010 Neanderthal genome project's draft report presented evidence for interbreeding between Neanderthals and modern humans. It possibly occurred 316,000 to 219,000 years ago, but more likely 100,000 years ago and again 65,000 years ago. Neanderthals also appear to have interbred with Denisovans, a different group of archaic humans, in Siberia. Around 1–4% of genomes of Eurasians, Indigenous Australians, Melanesians, Native Americans, and North Africans is of Neanderthal ancestry, while the inhabitants of sub-Saharan Africa have only 0.3% of Neanderthal genes, save possible traces from early sapiens-to-Neanderthal gene flow and/or more recent back-migration of Eurasians to Africa. In all, about 20% of distinctly Neanderthal gene variants survive today. Although many of the gene variants inherited from Neanderthals may have been detrimental and selected out, Neanderthal introgression appears to have affected the modern human immune system, and is also implicated in several other biological functions and structures, but a large portion appears to be non-coding DNA.

Details

Neanderthal (Homo neanderthalensis, Homo sapiens neanderthalensis), also spelled Neandertal, is a member of a group of archaic humans who emerged at least 200,000 years ago during the Pleistocene Epoch (about 2.6 million to 11,700 years ago) and were replaced or assimilated by early modern human populations (Homo sapiens) between 35,000 and perhaps 24,000 years ago. Neanderthals inhabited Eurasia from the Atlantic regions of Europe eastward to Central Asia, from as far north as present-day Belgium and as far south as the Mediterranean and southwest Asia. Similar archaic human populations lived at the same time in eastern Asia and in Africa. Because Neanderthals lived in a land of abundant limestone caves, which preserved bones well, and where there has been a long history of prehistoric research, they are better known than any other archaic human group. Consequently, they have become the archetypal “cavemen.” The name Neanderthal (or Neandertal) derives from the Neander Valley (German Neander Thal or Neander Tal) in Germany, where the fossils were first found.

Until the late 20th century, Neanderthals were regarded as genetically, morphologically, and behaviorally distinct from living humans. However, more recent discoveries about this well-preserved fossil Eurasian population have revealed an overlap between living and archaic humans. Neanderthals lived before and during the last ice age of the Pleistocene in some of the most unforgiving environments ever inhabited by humans. They developed a successful culture, with a complex stone tool technology, that was based on hunting, with some scavenging and local plant collection. Their survival during tens of thousands of years of the last glaciation is a remarkable testament to human adaptation.

First discoveries

The first human fossil assemblage described as Neanderthal was discovered in 1856 in the Feldhofer Cave of the Neander Valley, near Düsseldorf, Germany. The fossils, discovered by lime workers at a quarry, consisted of a robust cranial vault with a massive arched brow ridge, minus the facial skeleton, and several limb bones. The limb bones were robustly built, with large articular surfaces on the ends (that is, surfaces at joints that are typically covered with cartilage) and bone shafts that were bowed front to back. The remains of large extinct mammals and crude stone tools were discovered in the same context as the human fossils. Upon first examination, the fossils were deemed by anatomists as representing the oldest known human beings to inhabit Europe. Others disagreed and labeled the fossils H. neanderthalensis, a species distinct from H. sapiens. Some anatomists suggested that the bones were those of modern humans and that the unusual form was the result of pathology. This flurry of scientific debate coincided with the publication of On the Origin of Species (1859) by Charles Darwin, which provided a theoretical foundation upon which fossils could be viewed as a direct record of life over geologic time. When two fossil skeletons that resembled the original Feldhofer remains were discovered at Spy, Belgium, in 1886, the pathology explanation for the curious morphology of the bones was abandoned.

During the latter part of the 19th century and the early 20th century, additional fossils that resembled the Neanderthals from the Feldhofer and Spy caves were discovered, including those now in Belgium (Naulette), Croatia (Krapina), France (Le Moustier, La Quina, La Chapelle-aux-Saints and Pech de L’Azé), Italy (Guattari and Archi), Hungary (Subalyuk), Israel (Tabūn), the Czech Republic (Ochoz, Kůlna, and Šipka), the Crimea (Mezmaiskaya), Uzbekistan (Teshik-Tash), and Iraq (Shanidar). More recently, Neanderthals were discovered in the Netherlands (North Sea coast), Greece (Lakonis and Kalamakia), Syria (Dederiyeh), Spain (El Sidrón), and Russian Siberia (Okladnikov) and at additional sites in France (Saint Césaire, L’Hortus, and Roc de Marsal, near Les Eyzies-de-Tayac), Israel (Amud and Kebara), and Belgium (Scladina and Walou). Well over 200 individuals are represented, including over 70 juveniles. These sites range from nearly 200,000 years ago or earlier to 36,000 years before present, and some groups may have survived in the southern Iberian Peninsula until nearly 30,000–35,000 years ago or even possibly 28,000–24,000 years ago in Gibraltar. Most of the sites, however, are dated to approximately 120,000 to 35,000 years ago. The complete disappearance of the Neanderthals corresponds to, or precedes, the most recent glacial maximum—a time period of intense cold spells and frequent fluctuations in temperature beginning around 29,000 years ago or earlier—and the increasing presence and density in Eurasia of early modern human populations, and possibly their hunting dogs, beginning as early as 40,000 years ago.

Presumed ancestors of the Neanderthals were discovered at Sima de los Huesos (“Pit of the Bones”), at the Atapuerca site in Spain, dated to about 430,000 years ago, which yielded an impressive number of remains of all life stages. Sometimes these remains are attributed to H. heidelbergensis or archaic H. sapiens if one accepts Neanderthals as H. sapiens neanderthalensis—in other words, as a subspecies of modern humans. Presumed descendants of Neanderthals include a “love child” with both Neanderthal and modern human physical features from Portugal (Lagar Velho), dated to about 24,500 years ago.

What happened to the Neanderthals is one of the most-enduring questions in science, and it has been addressed in the early 21st century by applying genetic techniques to compare DNA from hundreds of living humans and ancient DNA recovered from Neanderthal fossils. The earliest genetic studies of Neanderthal mitochondrial DNA supported the idea that the origin of modern humans was a speciation event. More recently, however, it was reported that Eurasians generally carry about 2 percent Neanderthal nuclear DNA, which suggests that modern humans and Neanderthals interbred and thus were not two different biological species, despite most classifications treating them as such. It was previously argued on the basis of morphology that modern humans are distinct from Neanderthals, although the question of “how different is different” has always plagued debates on the apparent uniqueness of this fossil human group. When single specimens of Neanderthals and modern humans are compared, Neanderthals can easily be distinguished. When a broad range of individuals are examined, however, the variation observed fails to isolate Neanderthals as a group that is completely distinct from modern humans for every trait. Like Neanderthal fossils, early modern human fossils are robust in physical form, although they tend to differ from Neanderthal fossils in that they have a more juvenilized but large cranial vault coupled with a smaller face and a distinct mental trigone (chin).

Morphological traits:

Craniofacial features

Although Neanderthals possessed much in common physically with early modern humans, the constellation of Neanderthal features is unique, with much variation among individuals as far as craniofacial (head and facial) characteristics are concerned. Features of the cranium and lower jaw that were present more often in Neanderthals than in early and recent modern humans include a low-vaulted cranium, large orbital and nasal openings, and prominent arched brow ridges. A pronounced occipital region (the rear and base of the skull) served to anchor the large neck musculature. The cranial capacity of Neanderthals was similar to or larger than that of recent humans. The front teeth were larger than those in modern humans, but the molars and premolars were of a similar size. The lower jaw displayed a receding chin and was robustly built. The mental foramen, a small opening in the lower jaw through which nerves pass, was placed farther back in Neanderthals than in recent humans, and a space between the last molar and the ascending edge of the lower jaw occurred in many individuals. There was also apparently less lumbar lordosis (back curvature) in Neanderthals and their predecessors from Sima de los Huesos than in modern humans.

Body proportions and cold stress

Neanderthals were a cold-adapted people. As with their facial features, Neanderthals’ body proportions were variable. However, in general, they possessed relatively short lower limb extremities, compared with their upper arms and legs, and a broad chest. Their arms and legs must have been massive and heavily muscled. This body build would have protected the extremities against damage from cold stress. Voluminous pulp cavities, or taurodontism, in the teeth may also have been an adaptation to cold temperatures or perhaps arose from genetic isolation. Cold stress may have delayed maturation in Neanderthal children, although earlier weaning and dental development have also been suggested from studies of teeth.

Other adaptations

Until the early 2000s, it was widely thought that Neanderthals lacked the capacity for complex communication, such as spoken language. Supporting that hypothesis was the fact that the flattened cranial base of Neanderthals—similar to that of modern infants prior to two years old—did not provide sufficient space for the production of vowels, which are used in all spoken languages of modern humans. However, studies beginning in the late 1980s of the hypoglossal canal (one of two small openings in the lower part of the skull) and a hyoid (the bone located between the base of the tongue and the larynx) from the paleoanthropological site at Kebara, Israel, suggested that the Neanderthal vocal tract could have been similar to that found in modern humans. Moreover, genetic studies in the early 2000s involving the Neanderthal FOXP2 gene (a gene thought to allow for the capacity for speech and language) indicated that Neanderthals probably used language in the same way that modern humans have. Such a deduction had also been extrapolated from interpretations of the complex behaviours of Neanderthals—such as the development of an advanced stone tool technology, the burial of the dead, and the care of injured social group members. It is not known, however, whether Neanderthals were capable of the full range of phonemes, or sound tones, that characterize the languages of modern humans. Handedness, which was inferred from dental wear resulting from items held in the mouth for processing, occurred among Neanderthals at a rate similar to that in modern humans and suggests a lateralization (functional separation) of the brain that is fundamental to language.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1756 2023-04-29 23:28:02

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,369

Re: Miscellany

1759) Cro-Magnons

Gist

Cro-Magnons were the first humans (genus Homo) to have a prominent chin. The brain capacity was about 1,600 cc (100 cubic inches), somewhat larger than the average for modern humans. It is thought that Cro-Magnons were probably fairly tall compared with other early human species.

Summary

Early European modern humans (EEMH), or Cro-Magnons, were the first early modern humans (Homo sapiens) to settle in Europe, migrating from Western Asia and continuously occupying the continent possibly from as early as 56,800 years ago. They interacted and interbred with the indigenous Neanderthals (H. neanderthalensis) of Europe and Western Asia, who went extinct 40,000 to 35,000 years ago. The first wave of modern humans in Europe, from 45,000 to 40,000 years ago (the Initial Upper Paleolithic), left no genetic legacy to modern Europeans, but from 37,000 years ago a second wave succeeded in forming a single founder population, from which all EEMH descended and which contributes ancestry to present-day Europeans. EEMH produced Upper Palaeolithic cultures, the first major one being the Aurignacian, which was succeeded by the Gravettian by 30,000 years ago. The Gravettian split into the Epi-Gravettian in the east and the Solutrean in the west, due to major climate degradation during the Last Glacial Maximum (LGM), which peaked 21,000 years ago. As Europe warmed, the Solutrean evolved into the Magdalenian by 20,000 years ago, and these peoples recolonised Europe. The Magdalenian and Epi-Gravettian gave way to Mesolithic cultures as big game animals were dying out and the Last Glacial Period drew to a close.

EEMH were anatomically similar to present-day Europeans, but were more robust, having larger brains, broader faces, more prominent brow ridges, and bigger teeth. The earliest EEMH specimens also exhibit some features that are reminiscent of those found in Neanderthals. The first EEMH would have had darker skin; natural selection for lighter skin would not begin until 30,000 years ago. Before the LGM, EEMH had overall low population density, tall stature similar to post-industrial humans, and expansive trade routes stretching as long as 900 km (560 mi); they hunted big game animals. EEMH had much higher populations than the Neanderthals, possibly due to higher fertility rates; life expectancy for both species was typically under 40 years. Following the LGM, population density increased as communities travelled less frequently (though for longer distances), and the need to feed so many more people in tandem with the increasing scarcity of big game caused them to rely more heavily on small or aquatic game, and more frequently participate in game drive systems and slaughter whole herds at a time. The EEMH toolkit included spears, spear-throwers, harpoons, and possibly throwing sticks and Palaeolithic dogs. EEMH likely commonly constructed temporary huts while moving around, and Gravettian peoples notably made large huts on the East European Plain out of mammoth bones.

EEMH are well renowned for creating a diverse array of artistic works, including cave paintings, Venus figurines, perforated batons, animal figurines, and geometric patterns. They may have decorated their bodies with ochre crayons and perhaps tattoos, scarification, and piercings. The exact symbolism of these works remains enigmatic, but EEMH are generally (though not universally) thought to have practiced shamanism, in which cave art — specifically of those depicting human/animal hybrids — played a central part. They also wore decorative beads, and plant-fibre clothes dyed with various plant-based dyes, which were possibly used as status symbols. For music, they produced bone flutes and whistles, and possibly also bullroarers, rasps, drums, idiophones, and other instruments. They buried their dead, though possibly only people who had achieved or were born into high status received burial.

Remains of Palaeolithic cultures have been known for centuries, but they were initially interpreted in a creationist model, wherein they represented antediluvian peoples who were wiped out by the Great Flood. Following the conception and popularisation of evolution in the mid-to-late 19th century, EEMH became the subject of much scientific racism, with early race theories allying with Nordicism and Pan-Germanism. Such historical race concepts were overturned by the mid-20th century. During the first-wave feminism movement, the Venus figurines were notably interpreted as evidence of some matriarchal religion, though such claims had mostly died down in academia by the 1970s.

Details

Cro-Magnon refers to a population of early Homo sapiens dating from the Upper Paleolithic Period (c. 40,000 to c. 10,000 years ago) in Europe.

In 1868, in a shallow cave at Cro-Magnon near the town of Les Eyzies-de-Tayac in the Dordogne region of southwestern France, a number of obviously ancient human skeletons were found. The cave was investigated by the French geologist Louis Lartet, who uncovered five archaeological layers. The human bones found in the topmost layer proved to be between 10,000 and 35,000 years old. The prehistoric humans revealed by this find were called Cro-Magnon and have since been considered, along with Neanderthals (H. neanderthalensis), to be representative of prehistoric humans. Modern studies suggest that Cro-Magnons emerged even earlier, perhaps as early as 45,000 years ago.

Cro-Magnons were robustly built and powerful and are presumed to have been about 166 to 171 cm (about 5 feet 5 inches to 5 feet 7 inches) tall. The body was generally heavy and solid, apparently with strong musculature. The forehead was straight, with slight browridges, and the face short and wide. Cro-Magnons were the first humans (genus Homo) to have a prominent chin. The brain capacity was about 1,600 cc (100 cubic inches), somewhat larger than the average for modern humans. It is thought that Cro-Magnons were probably fairly tall compared with other early human species.

It is still hard to say precisely where Cro-Magnons belong in recent human evolution, but they had a culture that produced a variety of sophisticated tools such as retouched blades, end scrapers, “nosed” scrapers, the chisel-like tool known as a burin, and fine bone tools (see Aurignacian culture). They also seem to have made tools for smoothing and scraping leather. Some Cro-Magnons have been associated with the Gravettian industry, or Upper Perigordian industry, which is characterized by an abrupt retouching technique that produces tools with flat backs. Cro-Magnon dwellings are most often found in deep caves and in shallow caves formed by rock overhangs, although primitive huts, either lean-tos against rock walls or those built completely from stones, have been found. The rock shelters were used year-round; the Cro-Magnons seem to have been a settled people, moving only when necessary to find new hunting or because of environmental changes.

Like the Neanderthals, the Cro-Magnon people buried their dead. Some of the first examples of art by prehistoric peoples are Cro-Magnon. The Cro-Magnons carved and sculpted small engravings, reliefs, and statuettes not only of humans but also of animals. Their human figures generally depict large-breasted, wide-hipped, and often obviously pregnant women, from which it is assumed that these figures had significance in fertility rites. Numerous depictions of animals are found in Cro-Magnon cave paintings throughout France and Spain at sites such as Lascaux, Eyzies-de-Tayac, and Altamira, and some of them are surpassingly beautiful. It is thought that these paintings had some magic or ritual importance to the people. From the high quality of their art, it is clear that Cro-Magnons were not primitive amateurs but had previously experimented with artistic mediums and forms. Decorated tools and weapons show that they appreciated art for aesthetic purposes as well as for religious reasons.

It is difficult to determine how long the Cro-Magnons lasted and what happened to them. Presumably they were gradually absorbed into the European populations that came later.

Additional Information

The earliest known Cro-Magnon remains are between 35,000 and 45,000 years old, based on radiometric dating. The oldest remains, from 43,000 to 45,000 years ago, were found in Italy and Britain. Other remains also show that Cro-Magnons reached the Russian Arctic about 40,000 years ago.
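
As a rough illustration of how radiometric dating produces such figures, radiocarbon dating compares the carbon-14 remaining in a sample with its original level and inverts the exponential decay law, using carbon-14's half-life of about 5,730 years. The sketch below (in Python; the function name and sample fraction are illustrative, not taken from any particular study) shows the arithmetic:

    import math

    C14_HALF_LIFE_YEARS = 5730.0  # approximate half-life of carbon-14

    def radiocarbon_age(remaining_fraction):
        # Invert N = N0 * (1/2) ** (t / half_life) to solve for t.
        return C14_HALF_LIFE_YEARS * math.log(1.0 / remaining_fraction) / math.log(2.0)

    # A sample retaining about 0.8% of its original carbon-14
    # works out to roughly 40,000 years old:
    print(round(radiocarbon_age(0.008)))  # prints 39914, i.e. roughly 40,000 years

In practice, labs also calibrate such raw ages against tree-ring and other records, so published dates are not the bare formula above.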

Cro-Magnons had powerful, heavy, solid bodies with strong muscles. Unlike Neanderthals, who had slanted foreheads, Cro-Magnons had straight foreheads, like modern humans. Their faces were short and wide, with a large chin. Their brains were slightly larger than the average modern human's.

Naming

The name "Cro-Magnon" was created by Louis Lartet, who discovered the first Cro-Magnon skull in southwestern France in 1868. He called the place where he found the skull Abri de Cro-Magnon. Abri means "rock shelter" in French; cro means "hole" in the Occitan language; and "Magnon" was the name of the person who owned the land where Lartet found the skull. Basically, Cro-Magnon means "rock shelter in a hole on Magnon's land."

Because "Cro-Magnon" refers to a place rather than a taxon, scientists now prefer the term "European early modern humans." In taxonomy, the name "Cro-Magnon" has no formal meaning.

Cro-Magnon life

In daily life, Cro-Magnons:

* used bones, shells, and teeth to make jewelry
* spun, dyed, and tied knots in flax to make cords for their tools, make baskets, or sew clothing

Like most early humans, the Cro-Magnons mostly hunted large animals. For example, they killed mammoths, cave bears, horses, and reindeer for food. They hunted with spears, javelins, and spear-throwers. They also gathered and ate fruits and other plant foods.

The Cro-Magnons were nomadic or semi-nomadic: instead of living in just one place, they followed the migration of the animals they hunted. They may have built hunting camps from mammoth bones; remains of such camps have been found in Ukraine. They also made shelters from rocks, clay, tree branches, and animal skin (leather).



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1757 2023-04-30 23:02:17

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,369

Re: Miscellany

1760) Pulmonology

Gist

Pulmonology is a branch of medicine that deals with all diseases of the lungs, airways, and respiratory muscles, including asthma, bronchitis, interstitial lung disease, COPD, and sleep apnea.

Summary

Pulmonology or pneumonology is a medical specialty that deals with diseases involving the respiratory tract. It is also known as respirology, respiratory medicine, or chest medicine in some countries and areas.

Pulmonology is considered a branch of internal medicine, and is related to intensive care medicine. Pulmonology often involves managing patients who need life support and mechanical ventilation. Pulmonologists are specially trained in diseases and conditions of the chest, particularly pneumonia, asthma, tuberculosis, emphysema, and complicated chest infections.

Pulmonology/respirology departments work especially closely with certain other specialties: cardiothoracic surgery departments and cardiology departments.

History of pulmonology

One of the first major discoveries relevant to the field of pulmonology was the discovery of pulmonary circulation. Originally, it was thought that blood reaching the right side of the heart passed through small 'pores' in the septum into the left side to be oxygenated, as theorized by Galen; the discovery of pulmonary circulation disproved this theory, which had been accepted since the 2nd century. The thirteenth-century anatomist and physiologist Ibn Al-Nafis accurately theorized that there was no 'direct' passage between the two sides (ventricles) of the heart. He believed that the blood must pass through the pulmonary artery, through the lungs, and back into the heart to be pumped around the body. This is believed by many to be the first scientific description of pulmonary circulation.

Although pulmonary medicine only began to evolve as a medical specialty in the 1950s, William Welch and William Osler founded the 'parent' organization of the American Thoracic Society, the National Association for the Study and Prevention of Tuberculosis. The care, treatment, and study of tuberculosis of the lung is recognised as a discipline in its own right, phthisiology. When the specialty did begin to evolve, several discoveries linking the respiratory system and the measurement of arterial blood gases attracted more and more physicians and researchers to the developing field.

Pulmonology and its relevance in other medical fields

Surgery of the respiratory tract is generally performed by specialists in cardiothoracic surgery (or thoracic surgery), though minor procedures may be performed by pulmonologists. Pulmonology is closely related to critical care medicine when dealing with patients who require mechanical ventilation. As a result, many pulmonologists are certified to practice critical care medicine in addition to pulmonary medicine. There are fellowship programs that allow physicians to become board certified in pulmonary and critical care medicine simultaneously. Interventional pulmonology is a relatively new field within pulmonary medicine that deals with the use of procedures such as bronchoscopy and pleuroscopy to treat several pulmonary diseases. Interventional pulmonology is increasingly recognized as a specific medical specialty.

Details

A pulmonologist is a doctor who diagnoses and treats diseases of the respiratory system: the lungs and other organs that help you breathe.

For some relatively short-lasting illnesses that affect your lungs, like the flu or pneumonia, you might be able to get all the care you need from your regular doctor. But if your cough, shortness of breath, or other symptoms don't get better, you might need to see a pulmonologist.

What is pulmonology?

Internal medicine is the type of medical care that deals with adult health, and pulmonology is one of its many fields. Pulmonologists focus on the respiratory system and diseases that affect it. The respiratory system includes your:

* Mouth and nose
* Sinuses
* Throat (pharynx)
* Voice box (larynx)
* Windpipe (trachea)
* Bronchial tubes
* Lungs and the structures inside them, such as bronchioles and alveoli
* Diaphragm

What Conditions Do Pulmonologists Treat?

A pulmonologist can treat many kinds of lung problems. These include:

* Asthma, a disease that inflames and narrows your airways and makes it hard to breathe
* Chronic obstructive pulmonary disease (COPD), a group of lung diseases that includes emphysema and chronic bronchitis
* Cystic fibrosis, a disease caused by changes in your genes that makes sticky mucus build up in your lungs
* Emphysema, which damages the air sacs in your lungs
* Interstitial lung disease, a group of conditions that scar and stiffen your lungs
* Lung cancer, a type of cancer that starts in the lungs
* Obstructive sleep apnea, which causes repeated pauses in your breathing while you sleep
* Pulmonary hypertension, or high blood pressure in the arteries of your lungs
* Tuberculosis, a bacterial infection of the lungs
* Bronchiectasis, which damages your airways so they widen and become flabby or scarred
* Bronchitis, an inflammation of your airways that causes coughing and extra mucus and can lead to infection
* Pneumonia, an infection that makes the air sacs (alveoli) in your lungs inflamed and filled with pus
* COVID-19 pneumonia, which can cause severe breathing problems and respiratory failure

What Kind of Training Do Pulmonologists Have?

A pulmonologist's training starts with a medical school degree. Then, they do an internal medicine residency at a hospital for 3 years to get more experience. After their residency, doctors can get certified in internal medicine by the American Board of Internal Medicine.

That's followed by years of specialized training as a fellow in pulmonary medicine. Finally, they must pass specialty exams to become board-certified in pulmonology. Some doctors get even more training in interventional pulmonology, pulmonary hypertension, and lung transplantation. Others might specialize in younger or older patients.

How Do Pulmonologists Diagnose Lung Diseases?

Pulmonologists use tests to figure out what kind of lung problem you have. They might ask you to get:

* Blood tests. They check levels of oxygen and other things in your blood.
* Bronchoscopy. It uses a thin, flexible tube with a camera on the end to see inside your lungs and airways.
* X-rays. They use low doses of radiation to make images of your lungs and other things in your chest.
* CT scan. It's a powerful X-ray that makes detailed pictures of the inside of your chest.
* Spirometry. This tests how well your lungs work by measuring how much air you can breathe in and out, and how quickly you can exhale.
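
To make the spirometry item above concrete, here is a minimal sketch in Python (the function name, example volumes, and the conventional 0.70 cutoff are illustrative, not diagnostic). Doctors commonly compare FEV1, the air you can force out in the first second, against FVC, the total air you can force out:

    def interpret_spirometry(fev1_litres, fvc_litres):
        # FEV1: forced expiratory volume in one second.
        # FVC: forced vital capacity (total volume exhaled).
        # A ratio below about 0.70 is a common screening cutoff
        # for an obstructive pattern such as asthma or COPD.
        ratio = fev1_litres / fvc_litres
        if ratio < 0.70:
            return ratio, "possible obstructive pattern"
        return ratio, "ratio in the normal range"

    ratio, verdict = interpret_spirometry(fev1_litres=2.1, fvc_litres=3.5)
    print(f"FEV1/FVC = {ratio:.2f}: {verdict}")  # FEV1/FVC = 0.60: possible obstructive pattern

Real interpretation also compares measured values against predicted values for the patient's age, height, and sex; this sketch shows only the headline ratio.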

What Kinds of Procedures Do Pulmonologists Do?

Pulmonologists can do special procedures such as:

* Pulmonary hygiene. This clears fluid and mucus from your lungs.
* Airway ablation. This opens blocked air passages or eases difficult breathing.
* Biopsy. This takes tissue samples to diagnose disease.
* Bronchoscopy. This looks inside your lungs and airways to diagnose disease.

Why See a Pulmonologist

You might see a pulmonologist if you have symptoms such as:

* A cough that is severe or that lasts more than 3 weeks
* Chest pain
* Wheezing
* Dizziness
* Trouble breathing
* Severe tiredness
* Asthma that’s hard to control
* Bronchitis or a cold that keeps coming back



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1758 2023-05-01 21:14:49

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,369

Re: Miscellany

1761) Passport

Details

A passport is an official travel document issued by a government that contains a person's identity. A person with a passport can travel to and from foreign countries more easily and access consular assistance. A passport certifies the personal identity and nationality of its holder. It is typical for passports to contain the full name, photograph, place and date of birth, signature, and the expiration date of the passport. While passports are typically issued by national governments, certain subnational governments are authorised to issue passports to citizens residing within their borders.

Many nations issue (or plan to issue) biometric passports that contain an embedded microchip, making them machine-readable and difficult to counterfeit. As of January 2019, there were over 150 jurisdictions issuing e-passports. Previously issued non-biometric machine-readable passports usually remain valid until their respective expiration dates.

A passport holder is normally entitled to enter the country that issued the passport, though some people entitled to a passport may not be full citizens with right of abode (e.g. American nationals or British nationals). A passport does not of itself create any rights in the country being visited or obligate the issuing country in any way, such as providing consular assistance. Some passports attest to the bearer having a status as a diplomat or other official, entitled to rights and privileges such as immunity from arrest or prosecution.

History

One of the earliest known references to paperwork that served in a role similar to that of a passport is found in the Hebrew Bible. Nehemiah 2:7–9, dating from approximately 450 BC, states that Nehemiah, an official serving King Artaxerxes I of Persia, asked permission to travel to Judea; the king granted leave and gave him a letter "to the governors beyond the river" requesting safe passage for him as he traveled through their lands.

The Arthashastra (c. 3rd century BC) makes mention of passes issued at the rate of one masha per pass to enter and exit the country. Chapter 34 of its Second Book concerns the duties of the Mudrādhyakṣa (lit. 'Superintendent of Seals'), who had to issue sealed passes before a person could enter or leave the countryside.

Passports were an important part of the Chinese bureaucracy as early as the Western Han (202 BC – 9 AD), if not in the Qin Dynasty. They required such details as age, height, and bodily features. These passports (zhuan) determined a person's ability to move throughout imperial counties and through points of control. Even children needed passports, but those of one year or less who were in their mother's care may not have needed them.

In the medieval Islamic Caliphate, a form of passport was the bara'a, a receipt for taxes paid. Only people who paid their zakah (for Muslims) or jizya (for dhimmis) taxes were permitted to travel to different regions of the Caliphate; thus, the bara'a receipt was a "basic passport."

Etymological sources show that the term "passport" comes from a medieval Italian document that was required in order to pass through a harbor's customs (Italian "passa porto", to pass the harbor) or through the gates (Italian "passa porte", to pass the gates) of a city wall or city territory. In medieval Europe, such documents were issued by local authorities to foreign travellers (as opposed to local citizens, as is the modern practice) and generally contained a list of towns and cities the document holder was permitted to enter or pass through. On the whole, documents were not required for travel to sea ports, which were considered open trading points, but documents were required to pass harbor controls and to travel inland from sea ports. The transition from private to state control over movement was an essential aspect of the transition from feudalism to capitalism, and communal obligations to provide poor relief were an important source of the desire for controls on movement.

In the 12th century, the Republic of Genoa issued a document called Bulletta, which was issued to the nationals of the Republic who were traveling to the ports of the emporiums and the ports of the Genoese colonies overseas, as well as to foreigners who entered them.

King Henry V of England is credited with having invented what some consider the first British passport in the modern sense, as a means of helping his subjects prove who they were in foreign lands. The earliest reference to these documents is found in a 1414 Act of Parliament. In 1540, granting travel documents in England became a role of the Privy Council of England, and it was around this time that the term "passport" came into use. The 1548 Imperial Diet of Augsburg required the public to hold imperial documents for travel, at the risk of permanent exile. In 1794, issuing British passports became the job of the Office of the Secretary of State.

In 1791, Louis XVI masqueraded as a valet during his Flight to Varennes, since passports for the nobility typically listed a number of persons by their function but without further description.

A Pass-Card Treaty of October 18, 1850 among German states standardized information including issuing state, name, status, residence, and description of bearer. Tramping journeymen and jobseekers of all kinds were not to receive pass-cards.

A rapid expansion of railway infrastructure and wealth in Europe beginning in the mid-nineteenth century led to large increases in the volume of international travel and a consequent unique dilution of the passport system for approximately thirty years prior to World War I. The speed of trains, as well as the number of passengers that crossed multiple borders, made enforcement of passport laws difficult. The general reaction was the relaxation of passport requirements. In the later part of the nineteenth century and up to World War I, passports were not required, on the whole, for travel within Europe, and crossing a border was a relatively straightforward procedure. Consequently, comparatively few people held passports.

During World War I, European governments introduced border passport requirements for security reasons, and to control the emigration of people with useful skills. These controls remained in place after the war, becoming a standard, though controversial, procedure. British tourists of the 1920s complained, especially about attached photographs and physical descriptions, which they considered led to a "nasty dehumanisation". The British Nationality and Status of Aliens Act was passed in 1914, clearly defining the notions of citizenship and creating a booklet form of the passport.

In 1920, the League of Nations held a conference on passports, the Paris Conference on Passports & Customs Formalities and Through Tickets. Passport guidelines and a general booklet design resulted from the conference, which was followed up by conferences in 1926 and 1927. The League of Nations issued Nansen passports to stateless refugees from 1922 to 1938.

While the United Nations held a travel conference in 1963, no passport guidelines resulted from it. Passport standardization came about in 1980, under the auspices of the International Civil Aviation Organization (ICAO). ICAO standards include those for machine-readable passports. Such passports have an area where some of the information otherwise written in textual form is written as strings of alphanumeric characters, printed in a manner suitable for optical character recognition. This enables border controllers and other law enforcement agents to process these passports more quickly, without having to input the information manually into a computer. ICAO publishes Doc 9303 Machine Readable Travel Documents, the technical standard for machine-readable passports. A more recent standard is for biometric passports. These contain biometrics to authenticate the identity of travellers. The passport's critical information is stored on a tiny RFID computer chip, much like information stored on smartcards. Like some smartcards, the passport booklet design calls for an embedded contactless chip that is able to hold digital signature data to ensure the integrity of the passport and the biometric data.
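
To make the machine-readable zone concrete: in the publicly documented Doc 9303 scheme, key fields such as the document number and dates carry a check digit computed from the characters' values with repeating weights 7, 3, 1, which lets border systems catch optical character recognition misreads. A minimal sketch in Python (the function name is mine; the weighting scheme itself is ICAO's):

    def mrz_check_digit(field):
        # Digits keep their value, letters map A=10 .. Z=35,
        # and the filler character '<' counts as 0.
        # Weights cycle 7, 3, 1 across the field.
        weights = (7, 3, 1)
        total = 0
        for i, ch in enumerate(field.upper()):
            if ch.isdigit():
                value = int(ch)
            elif ch.isalpha():
                value = ord(ch) - ord('A') + 10
            else:
                value = 0  # '<' filler
            total += value * weights[i % 3]
        return total % 10

    # A date of birth encoded as YYMMDD, e.g. 7 August 1974:
    print(mrz_check_digit("740807"))  # prints 4

If a scanner misreads even one character, the recomputed digit almost always disagrees with the printed one, and the read is rejected.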

Historically, legal authority to issue passports is founded on the exercise of each country's executive discretion. Certain legal tenets follow, namely: first, passports are issued in the name of the state; second, no person has a legal right to be issued a passport; third, each country's government, in exercising its executive discretion, has complete and unfettered discretion to refuse to issue or to revoke a passport; and fourth, that the latter discretion is not subject to judicial review. However, legal scholars including A.J. Arkelian have argued that evolutions in both the constitutional law of democratic countries and the international law applicable to all countries now render those historical tenets both obsolete and unlawful.

Additional Information

A passport is a formal document or certification issued by a national government identifying a traveler as a citizen or national with a right to protection while abroad and a right to return to the country of citizenship.

Passports, letters of transit, and similar documents were used for centuries to allow individuals to travel safely in foreign lands, but the adoption of the passport by all countries is a development of the 19th and 20th centuries. A passport is a small booklet containing a description of the bearer and an accompanying photograph that can be used for purposes of identification. Many countries require travelers entering their borders to obtain a visa—i.e., an endorsement made on a passport by the proper authorities denoting that it has been examined and that the bearer may proceed. The visa permits the traveler to remain in a country for a specified period of time. By the late 20th century the demands of tourism had prompted several countries in western Europe to relax their travel regulations so that travelers could move between them without visas or, in some cases, even without passports. (This arrangement has since expanded to include most countries within the European Union.) At the same time, many countries around the world, to prevent fraud, have added to their passports security features such as holograms, digital watermarks, and embedded microchips that store biometric data.

In the United States, passports are issued to U.S. citizens by the Department of State. Applications are accepted by a number of regional passport agencies in various cities, by the clerks of certain federal and state courts, by certain designated post offices, libraries, and other government facilities, and by U.S. consular authorities abroad. The passport is required for both departure and reentry to the United States. It is valid for 10 years for adults but for only 5 years for persons aged 15 or younger. A U.S. passport must be completely replaced when it expires, either by renewing it in advance or by submitting a new application. In 2008 the United States also began producing a passport card, which, for less than the cost of a traditional passport book, allows citizens to enter the United States by land or by sea from Canada, Mexico, Bermuda, and the islands of the Caribbean.

In the United Kingdom, Her Majesty’s Passport Office within the Home Office issues passports through offices in several cities around the country. Passports are issued to citizens of the United Kingdom and its colonies but not to citizens of Commonwealth countries. As with U.S. passports, British passports are valid for 10 years for adults and for 5 years for persons under age 16, and they must be replaced upon their expiration.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1759 2023-05-02 21:40:43

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,369

Re: Miscellany

1762) Roof

Summary

A roof is a covering of the top of a building, serving to protect against rain, snow, sunlight, wind, and extremes of temperature. Roofs have been constructed in a wide variety of forms—flat, pitched, vaulted, domed, or in combinations—as dictated by technical, economic, or aesthetic considerations.

The earliest roofs constructed by man were probably thatched roofs that were made of straw, leaves, branches, or reeds; they were usually set at a slope, or pitch, so that rainfall could drain off them. Conical thatched roofs are a good example of this type and are still widely used in the rural areas of Africa and elsewhere. Thicker branches and timbers eventually came to be used to span a roof, with clay or some other relatively impermeable substance pressed into the interstices between them. Gabled and flat roofs were possible with these materials. With the invention of brick and cut stone for building, the basic roof forms of the dome and vault appeared.

Two main types of roofs are flat roofs and sloping ones. The flat roof has historically been widely used in the Middle East, the American Southwest, and anywhere else where the climate is arid and the drainage of water off the roof is thus of secondary importance. Flat roofs came into widespread use in Europe and the Americas in the 19th century, when new waterproof roofing materials and the use of structural steel and concrete made them more practical. Flat roofs soon became the most commonly used type to cover warehouses, office buildings, and other commercial buildings, as well as many residential structures.

Sloping roofs come in many different varieties. The simplest is the lean-to, or shed, which has only one slope. A roof with two slopes that form an “A” or triangle is called a gable, or pitched, roof. This type of roof was used as early as the temples of ancient Greece and has been a staple of domestic architecture in northern Europe and the Americas for many centuries. It is still a very common form of roof. A hip, or hipped, roof is a gable roof that has sloped instead of vertical ends. It was commonly used in Italy and elsewhere in southern Europe and is now a very common form in American houses. Gable and hip roofs can also be used for homes with more complicated layouts. The gambrel roof is a type of gable roof with two slopes on each side, the upper being less steep than the lower. The mansard roof is a hipped gambrel roof, thus having two slopes on every side. It was widely used in Renaissance and Baroque French architecture. Both of the aforementioned roof types can provide extra attic space or other room without building an entire additional floor. They can also have a strong aesthetic appeal.

The vault is a parallel series of arches used to form a roof, the most common form being a cylindrical or barrel vault. Vaults came into their greatest prominence in Gothic architecture. The dome is a hemispherical structure that can serve as a roof. Domes have surmounted some of the most grandiose buildings of ancient Roman, Islamic, and post-medieval Western architecture. Vaults and domes do not require a supporting framework directly below the vaulting because they are based on the principle of the arch, but flat and gable roofs frequently require internal supports such as trusses or other bracing. A truss is a structural member that is composed of a series of triangles lying in a single plane. Until the later 19th century, such supporting frameworks were made of wooden beams, sometimes in highly complicated systems. Steel and reinforced concrete have for the most part replaced such heavy wooden support systems, and such materials moreover have enabled the development of new and dramatic roof forms. Thin-shell roofs using concrete reinforced with steel rods can produce domes and barrel vaults that are only three inches thick yet span immense spaces, providing unobstructed interior views for stadiums and amphitheatres. In cantilevered roofs, a roof made of thin precast concrete is suspended from steel cables that are mounted on vertical towers or pylons of some sort. The geodesic dome is a modern structural variant of the dome form.

The external covering of a roof must prevent rainfall or other precipitation from penetrating a building. There are two main groups of roof coverings. One group consists of a waterproof membrane or film that is applied as a liquid and that repels water by its utter impermeability after it has dried; the tar that is used to coat roofing felt is the prime example of this type. The other group consists of pieces of a waterproof material that are arranged in such a way as to prevent the direct passage of water through the joints between those pieces. This group includes shingles made of various materials, tiles made of baked clay or slate, and corrugated sheets of steel, aluminum, lead, copper, or zinc. Flat roofs are normally covered with roofing felt and tar, while sloped roofs are generally covered with shingles or sheet metal.

Details

A roof (pl: roofs or rooves) is the top covering of a building, including all materials and constructions necessary to support it on the walls of the building or on uprights, providing protection against rain, snow, sunlight, extremes of temperature, and wind. A roof is part of the building envelope.

The characteristics of a roof are dependent upon the purpose of the building that it covers, the available roofing materials and the local traditions of construction and wider concepts of architectural design and practice, and may also be governed by local or national legislation. In most countries, a roof protects primarily against rain. A verandah may be roofed with material that protects against sunlight but admits the other elements. The roof of a garden conservatory protects plants from cold, wind, and rain, but admits light.

A roof may also provide additional living space, for example, a roof garden.

Etymology

Old English hrof 'roof, ceiling, top, summit; heaven, sky', also figuratively, 'highest point of something', from Proto-Germanic *khrofam (cf. Dutch roef 'deckhouse, cabin, coffin-lid', Middle High German rof 'penthouse', Old Norse hrof 'boat shed'). There are no apparent connections outside the Germanic family. "English alone has retained the word in a general sense, for which the other languages use forms corresponding to OE. þæc thatch".

Design elements

The elements in the design of a roof are:

* the material
* the construction
* the durability

The material of a roof may range from banana leaves, wheaten straw or seagrass to laminated glass, copper, aluminium sheeting and pre-cast concrete. In many parts of the world ceramic roof tiles have been the predominant roofing material for centuries, if not millennia. Other roofing materials include asphalt, coal tar pitch, EPDM rubber, Hypalon, polyurethane foam, PVC, slate, Teflon fabric, TPO, and wood shakes and shingles.

The construction of a roof is determined by its method of support, how the space underneath is bridged, and whether or not the roof is pitched. The pitch is the angle at which the roof rises from its lowest to highest point. Most US domestic architecture, except in very dry regions, has roofs that are sloped, or pitched. Although modern construction elements such as drainpipes may remove the need for pitch, roofs are pitched for reasons of tradition and aesthetics. The pitch thus depends partly on stylistic factors and partly on practicalities.

Some types of roofing, for example thatch, require a steep pitch in order to be waterproof and durable. Other types of roofing, for example pantiles, are unstable on a steeply pitched roof but provide excellent weather protection at a relatively low angle. In regions where there is little rain, an almost flat roof with a slight run-off provides adequate protection against an occasional downpour. Drainpipes also remove the need for a sloping roof.

A person who specializes in roof construction is called a roofer.

The durability of a roof is a matter of concern because the roof is often the least accessible part of a building for purposes of repair and renewal, while its damage or destruction can have serious effects.

Form

The shape of roofs differs greatly from region to region. The main factors which influence the shape of roofs are the climate and the materials available for roof structure and the outer covering.

The basic shapes of roofs are flat, mono-pitched, gabled, mansard, hipped, butterfly, arched and domed. There are many variations on these types. Roofs constructed of flat sections that are sloped are referred to as pitched roofs (generally if the angle exceeds 10 degrees). Pitched roofs, including gabled, hipped and skillion roofs, make up the greatest number of domestic roofs. Some roofs follow organic shapes, either by architectural design or because a flexible material such as thatch has been used in the construction.

Parts

There are two parts to a roof: its supporting structure and its outer skin, or uppermost weatherproof layer. In a minority of buildings, the outer layer is also a self-supporting structure.

The roof structure is generally supported upon walls, although some building styles, for example, geodesic and A-frame, blur the distinction between wall and roof.

Support

The supporting structure of a roof usually comprises beams that are long and of strong, fairly rigid material such as timber, and since the mid-19th century, cast iron or steel. In countries that use bamboo extensively, the flexibility of the material causes a distinctive curving line to the roof, characteristic of Oriental architecture.

Timber lends itself to a great variety of roof shapes. The timber structure can fulfil an aesthetic as well as practical function, when left exposed to view.

Stone lintels have been used to support roofs since prehistoric times, but cannot bridge large distances. The stone arch came into extensive use in the ancient Roman period and in variant forms could be used to span spaces up to 45 m (140 ft) across. The stone arch or vault, with or without ribs, dominated the roof structures of major architectural works for about 2,000 years, only giving way to iron beams with the Industrial Revolution and the designing of such buildings as Paxton's Crystal Palace, completed 1851.

With continual improvements in steel girders, these became the major structural support for large roofs, and eventually for ordinary houses as well. Another form of girder is the reinforced concrete beam, in which metal rods are encased in concrete, giving it greater strength under tension.

Roof supports can also enclose living space, as seen in roof decking: spaces within the roof structure that are converted into rooms.

Outer layer

This part of the roof shows great variation dependent upon availability of material. In vernacular architecture, roofing material is often vegetation, such as thatches, the most durable being sea grass with a life of perhaps 40 years. In many Asian countries bamboo is used both for the supporting structure and the outer layer where split bamboo stems are laid turned alternately and overlapped. In areas with an abundance of timber, wooden shingles, shakes and boards are used, while in some countries the bark of certain trees can be peeled off in thick, heavy sheets and used for roofing.

The 20th century saw the manufacture of composition asphalt shingles, which can last from a thin 20-year shingle to the thickest, "limited lifetime" shingles, the cost depending on the thickness and durability of the shingle. When a layer of shingles wears out, they are usually stripped, along with the underlay and roofing nails, allowing a new layer to be installed. An alternative method is to install another layer directly over the worn layer. While this method is faster, it does not allow the roof sheathing to be inspected and water damage, often associated with worn shingles, to be repaired. Having multiple layers of old shingles under a new layer causes roofing nails to be located further from the sheathing, weakening their hold. The greatest concern with this method is that the weight of the extra material could exceed the dead load capacity of the roof structure and cause collapse. Because of this, jurisdictions which use the International Building Code prohibit the installation of new roofing on top of an existing roof that has two or more applications of any type of roof covering; the existing roofing material must be removed before installing a new roof.
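
To put a rough number on the dead-load concern (all figures below are assumed, typical-order values, not taken from any building code): one layer of asphalt shingles weighs on the order of 2 to 2.5 pounds per square foot, so each re-cover adds a surprising amount of weight. A throwaway sketch in Python:

    ASPHALT_LAYER_PSF = 2.5    # assumed weight of one shingle layer, lb per sq ft
    ROOF_AREA_SQFT = 1500      # assumed roof area

    def shingle_dead_load(layers):
        # Total shingle weight carried by the structure, in pounds.
        return layers * ASPHALT_LAYER_PSF * ROOF_AREA_SQFT

    for layers in (1, 2, 3):
        print(layers, "layer(s):", shingle_dead_load(layers), "lb")
    # 1 layer(s): 3750.0 lb, 2 layer(s): 7500.0 lb, 3 layer(s): 11250.0 lb;
    # each added layer here puts nearly two more tons on the framing.

This is why the two-layer cap mentioned above exists: a third application can push the roof structure toward its rated capacity.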

Slate is an ideal and durable material, while in the Swiss Alps roofs are made from huge slabs of stone, several inches thick. The slate roof is often considered the best type of roofing. A slate roof may last 75 to 150 years, and even longer. However, slate roofs are often expensive to install – in the US, for example, a slate roof may have the same cost as the rest of the house. Often, the first part of a slate roof to fail is the fixing nails; they corrode, allowing the slates to slip. In the UK, this condition is known as "nail sickness". Because of this problem, fixing nails made of stainless steel or copper are recommended, and even these must be protected from the weather.

Asbestos, usually in bonded corrugated panels, has been used widely in the 20th century as an inexpensive, non-flammable roofing material with excellent insulating properties. Health and legal issues involved in the mining and handling of asbestos products means that it is no longer used as a new roofing material. However, many asbestos roofs continue to exist, particularly in South America and Asia.

Roofs made of cut turf (modern ones known as green roofs, traditional ones as sod roofs) have good insulating properties and are increasingly encouraged as a way of "greening" the Earth. The soil and vegetation function as living insulation, moderating building temperatures. Adobe roofs are roofs of clay, mixed with binding material such as straw or animal hair, and plastered on lathes to form a flat or gently sloped roof, usually in areas of low rainfall.

In areas where clay is plentiful, roofs of baked tiles have been the major form of roofing. The casting and firing of roof tiles is an industry that is often associated with brickworks. While the shape and colour of tiles was once regionally distinctive, now tiles of many shapes and colours are produced commercially, to suit the taste and pocketbook of the purchaser. Concrete roof tiles are also a common choice, being available in many different styles and shapes.

Sheet metal in the form of copper and lead has also been used for many hundreds of years. Both are expensive but durable, the vast copper roof of Chartres Cathedral, oxidised to a pale green colour, having been in place for hundreds of years. Lead, which is sometimes used for church roofs, was most commonly used as flashing in valleys and around chimneys on domestic roofs, particularly those of slate. Copper was used for the same purpose.

In the 19th century, iron, electroplated with zinc to improve its resistance to rust, became a light-weight, easily transported, waterproofing material. Its low cost and easy application made it the most accessible commercial roofing, worldwide. Since then, many types of metal roofing have been developed. Steel shingle or standing-seam roofs last about 50 years or more depending on both the method of installation and the moisture barrier (underlayment) used and are between the cost of shingle roofs and slate roofs. In the 20th century, a large number of roofing materials were developed, including roofs based on bitumen (already used in previous centuries), on rubber and on a range of synthetics such as thermoplastic and on fibreglass.

Functions

A roof assembly has more than one function. It may provide any or all of the following functions:

1. To shed water i.e., prevent water from standing on the roof surface. Water standing on the roof surface increases the live load on the roof structure, which is a safety issue. Standing water also contributes to premature deterioration of most roofing materials. Some roofing manufacturers' warranties are rendered void due to standing water.

2. To protect the building interior from the effects of weather elements such as rain, wind, sun, heat and snow.

3. To provide thermal insulation. Most modern commercial/industrial roof assemblies incorporate insulation boards or batt insulation. In most cases, the International Building Code and International Residential Code establish the minimum R-value required within the roof assembly. (A worked example of how R-values combine follows this list.)

4. To perform for the expected service life. All standard roofing materials have established histories of their respective longevity, based on anecdotal evidence. Most roof materials will last long after the manufacturer's warranty has expired, given adequate ongoing maintenance, and absent storm damage. Metal and tile roofs may last fifty years or more. Asphalt shingles may last 30–50 years. Coal tar built-up roofs may last forty or more years. Single-ply roofs may last twenty or more years.

5. Provide a desired, unblemished appearance. Some roofs are selected not only for the above functions, but also for aesthetics, similar to wall cladding. Premium prices are often paid for certain systems because of their attractive appearance and "curb appeal."
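
Here is the worked R-value example promised above: a minimal sketch with assumed layer values, treating only steady-state conduction, in US units (R in ft²·°F·h/BTU). R-values of layers stacked in series simply add, and heat flow is area times temperature difference divided by the total R-value:

    # Assumed R-values for a simple flat-roof assembly:
    layers = {
        "roof deck": 0.6,
        "rigid insulation board": 20.0,
        "membrane": 0.3,
    }

    r_total = sum(layers.values())   # layers in series: R-values add
    area_sqft = 1000                 # assumed roof area
    delta_t_f = 70 - 20              # indoors at 70 F, outdoors at 20 F

    heat_loss_btu_per_hour = area_sqft * delta_t_f / r_total
    print(f"R-total = {r_total:.1f}")                         # R-total = 20.9
    print(f"heat loss = {heat_loss_btu_per_hour:.0f} BTU/h")  # about 2392 BTU/h
    # Doubling the insulation board's R-value roughly halves the loss.

This additive behaviour is why codes can specify a single minimum R-value for the whole assembly rather than regulating each layer separately.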

Insulation

Because the purpose of a roof is to secure people and their possessions from climatic elements, the insulating properties of a roof are a consideration in its structure and the choice of roofing material.

Some roofing materials, particularly those of natural fibrous material, such as thatch, have excellent insulating properties. For those that do not, extra insulation is often installed under the outer layer. In developed countries, the majority of dwellings have a ceiling installed under the structural members of the roof. The purpose of a ceiling is to insulate against heat and cold, noise, dirt and often from the droppings and lice of birds who frequently choose roofs as nesting places.

Concrete tiles can be used as insulation. When installed with a space between the tiles and the roof surface, they can reduce heating caused by the sun.

Forms of insulation include felt or plastic sheeting, sometimes with a reflective surface, installed directly below the tiles or other material; synthetic foam batting laid above the ceiling; and recycled paper products and other such materials that can be inserted or sprayed into roof cavities. Cool roofs are becoming increasingly popular, and in some cases are mandated by local codes. Cool roofs are defined as roofs with both high reflectivity and high thermal emittance.

Poorly insulated and ventilated roofing can suffer from problems such as the formation of ice dams around the overhanging eaves in cold weather, causing water from melted snow on upper parts of the roof to penetrate the roofing material. Ice dams occur when heat escapes through the uppermost part of the roof, and the snow at those points melts, refreezing as it drips along the shingles, and collecting in the form of ice at the lower points. This can result in structural damage from stress, including the destruction of gutter and drainage systems.

Drainage

The primary job of most roofs is to keep out water. The large area of a roof repels a lot of water, which must be directed in some suitable way, so that it does not cause damage or inconvenience.

Flat roofs of adobe dwellings generally have a very slight slope. In Middle Eastern countries, where the roof may be used for recreation, it is often walled, and drainage holes must be provided to stop water from pooling and seeping through the porous roofing material.

Similar problems, although on a very much larger scale, confront the builders of modern commercial properties which often have flat roofs. Because of the very large nature of such roofs, it is essential that the outer skin be of a highly impermeable material. Most industrial and commercial structures have conventional roofs of low pitch.

In general, the pitch of the roof is proportional to the amount of precipitation. Houses in areas of low rainfall frequently have roofs of low pitch, while those in areas of high rainfall and snow have steep roofs. The longhouses of Papua New Guinea, for example, are roof-dominated architecture, their high roofs sweeping almost to the ground. The high, steeply pitched roofs of Germany and Holland are typical in regions of snowfall. In parts of North America, such as Buffalo, New York, United States, or Montreal, Quebec, Canada, there is a required minimum slope of 6 in 12 (1:2, a pitch of about 27°).

There are regional building styles which contradict this trend, the stone roofs of the Alpine chalets being usually of gentler incline. These buildings tend to accumulate a large amount of snow on them, which is seen as a factor in their insulation. The pitch of the roof is in part determined by the roofing material available, a pitch of 3 in 12 (1:4) or greater slope generally being covered with asphalt shingles, wood shake, corrugated steel, slate or tile.
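
Converting a rise-in-run pitch to an angle is simple trigonometry: the angle is the arctangent of rise over run, so 6-in-12 is about 27 degrees and 3-in-12 about 14 degrees. A small sketch in Python (the function name is illustrative):

    import math

    def pitch_angle_degrees(rise, run=12):
        # Angle of a roof that rises 'rise' units per 'run' units of horizontal travel.
        return math.degrees(math.atan2(rise, run))

    for rise in (3, 6, 12):
        print(f"{rise}-in-12 pitch is {pitch_angle_degrees(rise):.1f} degrees")
    # 3-in-12 is 14.0 degrees, 6-in-12 is 26.6 degrees, 12-in-12 is 45.0 degrees.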

The water repelled by the roof during a rainstorm is potentially damaging to the building that the roof protects. If it runs down the walls, it may seep into the mortar or through panels. If it lies around the foundations it may cause seepage to the interior, rising damp or dry rot. For this reason most buildings have a system in place to protect the walls of a building from most of the roof water. Overhanging eaves are commonly employed for this purpose. Most modern roofs and many old ones have systems of valleys, gutters, waterspouts, waterheads and drainpipes to remove the water from the vicinity of the building. In many parts of the world, roofwater is collected and stored for domestic use.

Areas prone to heavy snow benefit from a metal roof because their smooth surfaces shed the weight of snow more easily and resist the force of wind better than a wood shingle or a concrete tile roof.

Solar roofs

Newer systems include solar shingles which generate electricity as well as cover the roof. There are also solar systems available that generate hot water or hot air and which can also act as a roof covering. More complex systems may carry out all of these functions: generate electricity, recover thermal energy, and also act as a roof covering.

Solar systems can be integrated with roofs by:

* integration in the covering of pitched roofs, e.g. solar shingles,
* mounting on an existing roof, e.g. solar panel on a tile roof,
* integration in a flat roof membrane using heat welding (e.g. PVC) or
* mounting on a flat roof with a construction and additional weight to prevent uplift from wind.

banner1.jpg


It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1760 2023-05-03 16:24:51

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,369

Re: Miscellany

1763) Fossil fuel

Gist

Fossil fuels are made from decomposing plants and animals. These fuels are found in the Earth's crust and contain carbon and hydrogen, which can be burned for energy. Coal, oil, and natural gas are examples of fossil fuels.

Summary

A fossil fuel is any of a class of hydrocarbon-containing materials of biological origin occurring within Earth’s crust that can be used as a source of energy.

Fossil fuels include coal, petroleum, natural gas, oil shales, bitumens, tar sands, and heavy oils. All contain carbon and were formed as a result of geologic processes acting on the remains of organic matter produced by photosynthesis, a process that began in the Archean Eon (4.0 billion to 2.5 billion years ago). Most carbonaceous material occurring before the Devonian Period (419.2 million to 358.9 million years ago) was derived from algae and bacteria, whereas most carbonaceous material occurring during and after that interval was derived from plants.

All fossil fuels can be burned in air or with oxygen derived from air to provide heat. This heat may be employed directly, as in the case of home furnaces, or used to produce steam to drive generators that can supply electricity. In still other cases—for example, gas turbines used in jet aircraft—the heat yielded by burning a fossil fuel serves to increase both the pressure and the temperature of the combustion products to furnish motive power.

Since the beginning of the Industrial Revolution in Great Britain in the second half of the 18th century, fossil fuels have been consumed at an ever-increasing rate. Today they supply more than 80 percent of all the energy consumed by the industrially developed countries of the world. Although new deposits continue to be discovered, the reserves of the principal fossil fuels remaining on Earth are limited. The amounts of fossil fuels that can be recovered economically are difficult to estimate, largely because of changing rates of consumption and future value as well as technological developments. Advances in technology—such as hydraulic fracturing (fracking), rotary drilling, and directional drilling—have made it possible to extract smaller and difficult-to-obtain deposits of fossil fuels at a reasonable cost, thereby increasing the amount of recoverable material. In addition, as recoverable supplies of conventional (light-to-medium) oil became depleted, some petroleum-producing companies shifted to extracting heavy oil, as well as liquid petroleum pulled from tar sands and oil shales. See also coal mining; petroleum production.

One of the main by-products of fossil fuel combustion is carbon dioxide (CO2). The ever-increasing use of fossil fuels in industry, transportation, and construction has added large amounts of CO2 to Earth’s atmosphere. Atmospheric CO2 concentrations fluctuated between 275 and 290 parts per million by volume (ppmv) of dry air between 1000 CE and the late 18th century but increased to 316 ppmv by 1959 and rose to 412 ppmv in 2018. CO2 behaves as a greenhouse gas—that is, it absorbs infrared radiation (net heat energy) emitted from Earth’s surface and reradiates it back to the surface. Thus, the substantial CO2 increase in the atmosphere is a major contributing factor to human-induced global warming. Methane (CH4), another potent greenhouse gas, is the chief constituent of natural gas, and CH4 concentrations in Earth’s atmosphere rose from 722 parts per billion (ppb) before 1750 to 1,859 ppb by 2018. To counter worries over rising greenhouse gas concentrations and to diversify their energy mix, many countries have sought to reduce their dependence on fossil fuels by developing sources of renewable energy (such as wind, solar, hydroelectric, tidal, geothermal, and biofuels) while at the same time increasing the mechanical efficiency of engines and other technologies that rely on fossil fuels.
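
Putting the concentrations quoted above on a common footing with simple arithmetic (taking 290 ppmv as the late-18th-century CO2 level and the 2018 figures as given; a throwaway sketch in Python):

    def percent_rise(before, after):
        # Percentage increase from an earlier to a later concentration.
        return 100.0 * (after - before) / before

    print(f"CO2: {percent_rise(290, 412):.0f}% rise (ppmv)")  # about 42%
    print(f"CH4: {percent_rise(722, 1859):.0f}% rise (ppb)")  # about 157%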

Details

A fossil fuel is a hydrocarbon-containing material, such as coal, oil, and natural gas, formed naturally in the Earth's crust from the remains of dead plants and animals, that is extracted and burned as a fuel. Fossil fuels may be burned to provide heat for use directly (such as for cooking or heating), to power engines (such as internal combustion engines in motor vehicles), or to generate electricity. Some fossil fuels are refined into derivatives such as kerosene, gasoline, and propane before burning. The origin of fossil fuels is the anaerobic decomposition of buried dead organisms containing organic molecules created by photosynthesis. The conversion from these materials to high-carbon fossil fuels typically requires a geological process of millions of years.

In 2019, 84% of primary energy consumption in the world and 64% of its electricity came from fossil fuels. The large-scale burning of fossil fuels causes serious environmental damage: over 80% of the carbon dioxide (CO2) generated by human activity (around 35 billion tonnes a year) comes from burning them, compared with around 4 billion tonnes from land development. Natural processes on Earth, mostly absorption by the ocean, can remove only a small part of this, so there is a net increase of many billion tonnes of atmospheric carbon dioxide per year. Although methane leaks are significant, the burning of fossil fuels is the main source of the greenhouse gas emissions causing global warming and ocean acidification. Additionally, most air pollution deaths are due to fossil fuel particulates and noxious gases. It is estimated that this costs over 3% of global gross domestic product and that fossil fuel phase-out would save millions of lives each year.

Recognition of the climate crisis, pollution and other negative impacts caused by fossil fuels has led to a widespread policy transition and activist movement focused on ending their use in favor of sustainable energy. However, because the fossil-fuel industry is so heavily integrated in the global economy and heavily subsidized, this transition is expected to have significant economic impacts. Many stakeholders argue that this change needs to be a just transition and create policy that addresses the societal burdens created by the stranded assets of the fossil fuel industry.

International policy, in the form of United Nations sustainable development goals for affordable and clean energy and climate action, as well as the Paris Climate Agreement, is designed to facilitate this transition at a global level. In 2021, the International Energy Agency concluded that no new fossil fuel extraction projects could be opened if the global economy and society wants to avoid the worst impacts of climate change and meet international goals for climate change mitigation.

Origin

The theory that fossil fuels formed from the fossilized remains of dead plants by exposure to heat and pressure in Earth's crust over millions of years was first introduced by Andreas Libavius "in his 1597 Alchemia [Alchymia]" and later by Mikhail Lomonosov "as early as 1757 and certainly by 1763". The first use of the term "fossil fuel" occurs in the work of the German chemist Caspar Neumann, in English translation in 1759. The Oxford English Dictionary notes that in the phrase "fossil fuel" the adjective "fossil" means "obtained by digging; found buried in the earth", which dates to at least 1652, before the English noun "fossil" came to refer primarily to long-dead organisms in the early 18th century.

Aquatic phytoplankton and zooplankton that died and sedimented in large quantities under anoxic conditions millions of years ago began forming petroleum and natural gas as a result of anaerobic decomposition. Over geological time this organic matter, mixed with mud, became buried under further heavy layers of inorganic sediment. The resulting high temperature and pressure caused the organic matter to chemically alter, first into a waxy material known as kerogen, which is found in oil shales, and then with more heat into liquid and gaseous hydrocarbons in a process known as catagenesis. Despite these heat-driven transformations, the energy released in combustion is still photosynthetic in origin.

Terrestrial plants tended to form coal and methane. Many of the coal fields date to the Carboniferous period of Earth's history. Terrestrial plants also form type III kerogen, a source of natural gas. Although fossil fuels are continually formed by natural processes, they are classified as non-renewable resources because they take millions of years to form and known viable reserves are being depleted much faster than new ones are generated.

Importance


As economies recovered from the COVID-19 pandemic, energy company profits increased, driven by greater revenues from higher fuel prices following the Russian invasion of Ukraine, falling debt levels, tax write-downs of projects shut down in Russia, and the scaling back of earlier plans to reduce greenhouse gas emissions. Record profits sparked public calls for windfall taxes.

Fossil fuels have been important to human development because they can be readily burned in the open atmosphere to produce heat. The use of peat as a domestic fuel predates recorded history. Coal was burned in some early furnaces for the smelting of metal ore. Semi-solid hydrocarbons from oil seeps were also burned in ancient times,[30] though they were mostly used for waterproofing and embalming.

Commercial exploitation of petroleum began in the 19th century.

Natural gas, once flared-off as an unneeded byproduct of petroleum production, is now considered a very valuable resource. Natural gas deposits are also the main source of helium.

Heavy crude oil, which is much more viscous than conventional crude oil, and oil sands, where bitumen is found mixed with sand and clay, began to become more important as sources of fossil fuel in the early 2000s. Oil shale and similar materials are sedimentary rocks containing kerogen, a complex mixture of high-molecular-weight organic compounds, which yield synthetic crude oil when heated (pyrolyzed). With additional processing, they can be employed instead of other established fossil fuels. During the 2010s and 2020s there was disinvestment from exploitation of such resources due to their high carbon cost relative to more easily processed reserves.

Prior to the latter half of the 18th century, windmills and watermills provided the energy needed for work such as milling flour, sawing wood or pumping water, while burning wood or peat provided domestic heat. The wide-scale use of fossil fuels, coal at first and petroleum later, in steam engines enabled the Industrial Revolution. At the same time, gas lights using natural gas or coal gas were coming into wide use. The invention of the internal combustion engine and its use in automobiles and trucks greatly increased the demand for gasoline and diesel oil, both made from fossil fuels. Other forms of transportation, such as railways and aircraft, also require fossil fuels. The other major uses for fossil fuels are in generating electricity and as feedstock for the petrochemical industry. Tar, a leftover of petroleum extraction, is used in the construction of roads.

The energy for the Green Revolution was provided by fossil fuels in the form of fertilizers (natural gas), pesticides (oil), and hydrocarbon-fueled irrigation. The development of synthetic nitrogen fertilizer has significantly supported global population growth; it has been estimated that almost half of the Earth's population is currently fed as a result of synthetic nitrogen fertilizer use. According to the head of a fertilizer commodity price agency, "50% of the world's food relies on fertilisers."

Additional Information

Fossil fuels are made from decomposing plants and animals. These fuels are found in the Earth's crust and contain carbon and hydrogen, which can be burned for energy. Coal, oil, and natural gas are examples of fossil fuels. Coal is a material usually found in sedimentary rock deposits where rock and dead plant and animal matter are piled up in layers. More than 50 percent of a piece of coal's weight must be from fossilized plants. Oil can also occur as a solid organic material (kerogen) between layers of sedimentary rock, such as shale; this material is heated in order to produce the thick oil that can be used to make gasoline. Natural gas is usually found in pockets above oil deposits. It can also be found in sedimentary rock layers that don't contain oil. Natural gas is primarily made up of methane.

According to the National Academies of Sciences, 81 percent of the total energy used in the United States comes from coal, oil, and natural gas. This is the energy that is used to heat and provide electricity to homes and businesses and to run cars and factories. Unfortunately, fossil fuels are a nonrenewable resource and waiting millions of years for new coal, oil, and natural gas deposits to form is not a realistic solution. Fossil fuels are also responsible for almost three-fourths of the emissions from human activities in the last 20 years. Now, scientists and engineers have been looking for ways to reduce our dependence on fossil fuels and to make burning these fuels cleaner and healthier for the environment.

Scientists across the country and around the world are trying to find solutions to fossil fuel problems so that there is enough fuel and a healthy environment to sustain human life and activities in the future. The United States Department of Energy is working on technologies to make commercially available natural-gas-powered vehicles. They are also trying to make coal burning and oil drilling cleaner. Researchers at Stanford University in California have been using greener technologies to figure out a way to burn fossil fuels while lessening their impact on the environment. One solution is to use more natural gas, which emits 50 percent less carbon dioxide into the atmosphere than coal does. The Stanford team is also trying to remove carbon dioxide from the atmosphere and store it underground—a process called carbon capture and sequestration. Scientists at both Stanford and the University of Bath in the United Kingdom are trying something completely new by using carbon dioxide and sugar to make renewable plastic.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1761 2023-05-04 15:59:33

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,369

Re: Miscellany

1764) Weighing scale

Summary

Scales are used to measure the weight of an item. To use a scale, the item to be weighed is placed on one side of the scale. Then reference weights are placed on the other side. Once the scale balances (that is, when the pointer between the two pans rests in the middle), the item's weight is read as the total of the reference weights.

There are also modern scales, where the item is simply put on the scale. Its weight can then be read from an electronic or analogue display.

To use a traditional weighing scale, the item to be weighed is placed in one pan, and reference weights (traditionally stones) are placed in the other to compare against it. If both pans rest level, the weights are equal; if one pan sits lower than the other, its contents are heavier. For example, if you place a small stone in one pan and a watch in the other, and the pan with the watch goes down while the stone goes up, the watch weighs more than the stone.

Details

A scale or balance is a device used to measure weight or mass. These are also known as mass scales, weight scales, mass balances, and weight balances.

The traditional scale consists of two plates or bowls suspended at equal distances from a fulcrum. One plate holds an object of unknown mass (or weight), while objects of known mass or weight, called weights, are added to the other plate until static equilibrium is achieved and the plates level off, which happens when the masses on the two plates are equal. An ideal scale rests level when unloaded. A spring scale uses a spring of known stiffness to determine mass (or weight): suspending a mass extends the spring by an amount that depends on the spring's stiffness (or spring constant). The heavier the object, the more the spring stretches, as described by Hooke's law. Other types of scales making use of different physical principles also exist.
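
The spring-scale relationship can be made concrete with a short sketch in Python; the spring constant here is an illustrative assumption, not a value from the text:

    # Spring scale: by Hooke's law the extension x is proportional to the
    # load, F = k * x, and the load relates to mass by F = m * g.
    K = 400.0    # spring constant in N/m (assumed for illustration)
    G = 9.81     # gravitational acceleration in m/s^2

    def mass_from_extension(x_metres: float) -> float:
        """Infer the suspended mass (kg) from the spring's extension."""
        return K * x_metres / G

    print(mass_from_extension(0.025))  # a 2.5 cm stretch -> about 1.02 kg

Because the reading depends on g, a spring scale really measures weight rather than mass, which is why it must be recalibrated if moved somewhere with different local gravity.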

Some scales can be calibrated to read in units of force (weight) such as newtons instead of units of mass such as kilograms. Scales and balances are widely used in commerce, as many products are sold and packaged by mass.

Pan balance

History

The Ancient Egyptian Book of the Dead depicts a scene in which a scribe's heart is weighed against the feather of truth.

The balance scale is such a simple device that its usage likely far predates the evidence. What has allowed archaeologists to link artifacts to weighing scales are the stones used for determining absolute mass. The balance scale itself was probably used to determine relative mass long before absolute mass.

The oldest attested evidence for the existence of weighing scales dates to the Fourth Dynasty of Egypt, with deben balance weights excavated from the reign of Sneferu (c. 2600 BC), though earlier usage has been proposed. Carved stones bearing marks denoting mass and the Egyptian hieroglyphic symbol for gold have been discovered, which suggests that Egyptian merchants had been using an established system of mass measurement to catalog gold shipments or gold-mine yields. Although no actual scales from this era have survived, many sets of weighing stones, as well as murals depicting the use of balance scales, suggest widespread usage.

Examples, dating c. 2400–1800 BC, have also been found in the Indus River valley. Uniform, polished stone cubes discovered in early settlements were probably used as mass-setting stones in balance scales. Although the cubes bear no markings, their masses are multiples of a common denominator. The cubes are made of many different kinds of stones with varying densities. Clearly their mass, not their size or other characteristics, was a factor in sculpting these cubes.

In China, the earliest weighing balance excavated was from a tomb of the State of Chu of the Chinese Warring States Period, dating back to the 4th to 3rd century BC, in Mount Zuojiagong near Changsha, Hunan. The balance was made of wood and used bronze masses.

Variations on the balance scale, including devices like the cheap and inaccurate bismar (unequal-armed scales), began to see common usage by c. 400 BC among many small merchants and their customers. A plethora of scale varieties, each boasting advantages and improvements over one another, appears throughout recorded history, with such great inventors as Leonardo da Vinci lending a personal hand in their development.

Even with all the advances in weighing scale design and development, all scales until the seventeenth century AD were variations on the balance scale. The standardization of the weights used – and ensuring traders used the correct weights – was a considerable preoccupation of governments throughout this time.

The original form of a balance consisted of a beam with a fulcrum at its center. For highest accuracy, the fulcrum would consist of a sharp V-shaped pivot seated in a shallower V-shaped bearing. To determine the mass of the object, a combination of reference masses was hung on one end of the beam while the object of unknown mass was hung on the other end (see balance and steelyard balance). For high precision work, such as empirical chemistry, the center beam balance is still one of the most accurate technologies available, and is commonly used for calibrating test masses.

Bronze fragments discovered in central Germany and Italy were used during the Bronze Age as an early form of currency. In the same period, merchants from Great Britain to Mesopotamia used standard weights of equivalent mass, between 8 and 10.5 grams.

Mechanical balances

The balance (also balance scale, beam balance and laboratory balance) was the first mass measuring instrument invented. In its traditional form, it consists of a pivoted horizontal lever with arms of equal length – the beam – and a weighing pan suspended from each arm (hence the plural name "scales" for a weighing instrument). The unknown mass is placed in one pan and standard masses are added to the other pan until the beam is as close to equilibrium as possible. In precision balances, a more accurate determination of the mass is given by the position of a sliding mass moved along a graduated scale. Technically, a balance compares weight rather than mass, but, in a given gravitational field (such as Earth's gravity), the weight of an object is proportional to its mass, so the standard masses used with balances are usually labeled in units of mass (e.g. g or kg).

Unlike spring-based scales, balances are used for the precision measurement of mass as their accuracy is not affected by variations in the local gravitational field. (On Earth, for example, these can amount to ±0.5% between locations.) A change in the strength of the gravitational field caused by moving the balance does not change the measured mass, because the moments of force on either side of the beam are affected equally. A balance will render an accurate measurement of mass at any location experiencing a constant gravity or acceleration.
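
The cancellation described here can be written out directly. For equal beam arms of length \ell, equilibrium of moments gives

    m_1 g \ell = m_2 g \ell \;\Longrightarrow\; m_1 = m_2,

so the local value of g divides out of the comparison. A spring scale, by contrast, reads F = m g and therefore shifts with the ±0.5% variation quoted above.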

Very precise measurements are achieved by ensuring that the balance's fulcrum is essentially friction-free (a knife edge is the traditional solution), by attaching a pointer to the beam which amplifies any deviation from the balance position, and by using the lever principle, which allows fractional masses to be applied by moving a small mass along the measuring arm of the beam, as described above. For the greatest accuracy, an allowance must be made for buoyancy in air, whose effect depends on the densities of the masses involved.

To reduce the need for large reference masses, an off-center beam can be used. A balance with an off-center beam can be almost as accurate as a scale with a center beam, but the off-center beam requires special reference masses and cannot be intrinsically checked for accuracy by simply swapping the contents of the pans as a center-beam balance can. To reduce the need for small graduated reference masses, a sliding weight called a poise can be installed so that it can be positioned along a calibrated scale. A poise adds further intricacies to the calibration procedure, since the exact mass of the poise must be adjusted to the exact lever ratio of the beam.
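
The lever ratio that the poise calibration must match can be expressed as a moment balance; the symbols here are illustrative, not from the source:

    M g a = m_p g d \;\Longrightarrow\; M = m_p \, \frac{d}{a},

where M is the unknown mass hung at distance a from the fulcrum, m_p is the poise mass, and d is the poise's position along the graduated scale. Any error in m_p scales every reading by the same factor, which is why its mass must be adjusted to the beam's exact lever ratio.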

For greater convenience in placing large and awkward loads, a platform can be floated on a cantilever beam system which brings the proportional force to a nose-iron bearing; this pulls on a steelyard rod to transmit the reduced force to a conveniently sized beam.

One still sees this design in portable beam balances of 500 kg capacity, which are commonly used in harsh environments without electricity, as well as in the lighter-duty mechanical bathroom scale (which actually uses a spring scale internally). The additional pivots and bearings all reduce the accuracy and complicate calibration; the float system must be corrected for corner errors before the span is corrected by adjusting the balance beam and poise.

Roberval balance

In 1669 the Frenchman Gilles Personne de Roberval presented a new kind of balance scale to the French Academy of Sciences. This scale consisted of a pair of vertical columns separated by a pair of equal-length arms and pivoting in the center of each arm from a central vertical column, creating a parallelogram. From the side of each vertical column a peg extended. To the amazement of observers, no matter where Roberval hung two equal weights along the pegs, the scale still balanced. In this sense, the scale was revolutionary: it evolved into the more commonly encountered form consisting of two pans placed on vertical columns located above the fulcrum and the parallelogram below them. The advantage of the Roberval design is that no matter where equal weights are placed in the pans, the scale will still balance.

Further developments have included the "gear balance", in which the parallelogram is replaced by any odd number of interlocking gears greater than one, with alternating gears of the same size, the central gear fixed to a stand, and the outside gears fixed to pans. There is also the "sprocket gear balance", consisting of a bicycle-type chain looped around an odd number of sprockets, with the central one fixed and the outermost two free to pivot and attached to a pan.

Because it has more moving joints which add friction, the Roberval balance is consistently less accurate than the traditional beam balance, but for many purposes this is compensated for by its usability.

Torsion balance

The torsion balance is among the most accurate mechanical, analog balances. Pharmacy schools in the U.S. still teach how to use torsion balances. It uses pans like a traditional balance, mounted on top of a mechanical chamber that bases measurements on the amount of twisting of a wire or fiber inside the chamber. The scale must still use a calibration weight for comparison, and it can weigh objects greater than 120 mg with a margin of error of ±7 mg. Many microbalances and ultra-microbalances that weigh fractional-gram values are torsion balances. A common fiber type is quartz crystal.
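
A minimal statement of the torsion principle, under the usual small-angle assumption (symbols are illustrative): the restoring torque of the fiber is proportional to its twist,

    \tau = \kappa \theta \;\Longrightarrow\; m = \frac{\kappa \theta}{g r},

where \kappa is the fiber's torsion constant, \theta the measured angle of twist, and r the effective lever arm at which the load acts.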

Electronic devices:

Microbalance

A microbalance (also called an ultramicrobalance, or nanobalance) is an instrument capable of making precise measurements of the mass of objects of relatively small mass: on the order of a millionth of a gram (one microgram) and below.

Analytical balance

An analytical balance is a class of balance designed to measure small mass in the sub-milligram range. The measuring pan of an analytical balance (readability of 0.1 mg or better) is inside a transparent enclosure with doors so that dust does not collect and air currents in the room do not affect the balance's operation. This enclosure is often called a draft shield. The use of a mechanically vented balance safety enclosure, which has uniquely designed acrylic airfoils, allows a smooth, turbulence-free airflow that prevents balance fluctuation and permits the measurement of mass down to 1 μg without loss of product. The sample must also be at room temperature, to prevent natural convection from forming air currents inside the enclosure and causing an error in reading. Single-pan mechanical substitution balances maintain a consistent response throughout their useful capacity, which is achieved by maintaining a constant load on the balance beam (and thus the fulcrum) by subtracting mass on the same side of the beam to which the sample is added.

Electronic analytical scales measure the force needed to counter the mass being measured rather than using actual masses. As such they must have calibration adjustments made to compensate for gravitational differences. They use an electromagnet to generate a force to counter the sample being measured and output the result by measuring the force needed to achieve balance. Such a measurement device is called an electromagnetic force restoration sensor.
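
A sketch of the force-restoration idea (symbols are illustrative): the coil current I is servoed until the electromagnetic force balances the load,

    k_c I = m g_{local} \;\Longrightarrow\; m = \frac{k_c I}{g_{local}},

where k_c is the coil's force constant. Because g_{local} appears in the result, the reading shifts with local gravity, which is why these balances need the calibration adjustments mentioned above.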

Pendulum balance scales

Pendulum-type scales do not use springs. These designs use pendulums and operate as a balance that is unaffected by differences in gravity. An example of the application of this design is the scales made by the Toledo Scale Company.

Programmable scales

A programmable scale has a programmable logic controller in it, allowing it to be programmed for various applications such as batching, labeling, filling (with check weight function), truck scales, and more.

Another important function is counting, e.g. counting small parts in large quantities during annual stocktaking. Counting scales (which can also do plain weighing) can range from milligrams to tonnes.
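
The counting function reduces to simple arithmetic; here is a minimal sketch in Python, where the function names and the sample-based calibration step are illustrative assumptions:

    # Counting scale: weigh a known number of reference parts to get the
    # average piece weight, then infer the count of a bulk load.
    def average_piece_weight(sample_weight_g: float, sample_count: int) -> float:
        return sample_weight_g / sample_count

    def count_parts(bulk_weight_g: float, piece_weight_g: float) -> int:
        return round(bulk_weight_g / piece_weight_g)

    unit = average_piece_weight(25.0, 10)   # ten reference parts weigh 25 g
    print(count_parts(1500.0, unit))        # -> 600 parts

In practice the accuracy of the count depends on how uniform the parts are and on the resolution of the scale.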

Symbolism

The scales (specifically, a two-pan, beam balance) are one of the traditional symbols of justice, as wielded by statues of Lady Justice. This corresponds to the use in a metaphor of matters being "held in the balance". It has its origins in ancient Egypt.

Scales also are widely used as a symbol of finance, commerce, or trade, in which they have played a traditional, vital role since ancient times. For instance, balance scales are depicted in the seal of the U.S. Department of the Treasury and the Federal Trade Commission.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1762 2023-05-05 16:14:35

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,369

Re: Miscellany

1765) Hospital

Summary

A hospital is an institution that is built, staffed, and equipped for the diagnosis of disease; for the treatment, both medical and surgical, of the sick and the injured; and for their housing during this process. The modern hospital also often serves as a centre for investigation and for teaching.

To better serve the wide-ranging needs of the community, the modern hospital has often developed outpatient facilities, as well as emergency, psychiatric, and rehabilitation services. In addition, “bedless hospitals” provide strictly ambulatory (outpatient) care and day surgery. Patients arrive at the facility for short appointments. They may also stay for treatment in surgical or medical units for part of a day or for a full day, after which they are discharged for follow-up by a primary care health provider.

Hospitals have long existed in most countries. Developing countries, which contain a large proportion of the world’s population, generally do not have enough hospitals, equipment, and trained staff to handle the volume of persons who need care. Thus, people in these countries do not always receive the benefits of modern medicine, public health measures, or hospital care, and they generally have lower life expectancies.

In developed countries the hospital as an institution is complex, and it is made more so as modern technology increases the range of diagnostic capabilities and expands the possibilities for treatment. As a result of the greater range of services and the more-involved treatments and surgeries available, a more highly trained staff is required. A combination of medical research, engineering, and biotechnology has produced a vast array of new treatments and instrumentation, much of which requires specialized training and facilities for its use. Hospitals thus have become more expensive to operate, and health service managers are increasingly concerned with questions of quality, cost, effectiveness, and efficiency.

Details

A hospital is a health care institution providing patient treatment with specialized health science and auxiliary healthcare staff and medical equipment. The best-known type of hospital is the general hospital, which typically has an emergency department to treat urgent health problems ranging from fire and accident victims to a sudden illness. A district hospital typically is the major health care facility in its region, with many beds for intensive care and additional beds for patients who need long-term care. Specialized hospitals include trauma centers, rehabilitation hospitals, children's hospitals, seniors' (geriatric) hospitals, and hospitals for dealing with specific medical needs such as psychiatric treatment (see psychiatric hospital) and certain disease categories. Specialized hospitals can help reduce health care costs compared to general hospitals. Hospitals are classified as general, specialty, or government depending on the sources of income received.

A teaching hospital combines assistance to people with teaching to health science students and auxiliary healthcare students. A health science facility smaller than a hospital is generally called a clinic. Hospitals have a range of departments (e.g. surgery and urgent care) and specialist units such as cardiology. Some hospitals have outpatient departments and some have chronic treatment units. Common support units include a pharmacy, pathology, and radiology.

Hospitals are typically funded by public funding, health organisations (for-profit or nonprofit), health insurance companies, or charities, including direct charitable donations. Historically, hospitals were often founded and funded by religious orders, or by charitable individuals and leaders.

Currently, hospitals are largely staffed by professional physicians, surgeons, nurses, and allied health practitioners, whereas in the past, this work was usually performed by the members of founding religious orders or by volunteers. However, various Catholic religious orders, such as the Alexians and the Bon Secours Sisters, still focused on hospital ministry as of the late 1990s, as do several other Christian denominations, including the Methodists and Lutherans, which run hospitals. In accordance with the original meaning of the word, hospitals were originally "places of hospitality", and this meaning is still preserved in the names of some institutions such as the Royal Hospital Chelsea, established in 1681 as a retirement and nursing home for veteran soldiers.

Etymology


During peacetime, hospitals can be indicated by a variety of symbols. For example, a white 'H' on a blue background is often used in the United States. During times of armed conflict, a hospital may be marked with the emblem of the red cross, red crescent or red crystal in accordance with the Geneva Conventions.

During the Middle Ages, hospitals served different functions from modern institutions in that they were almshouses for the poor, hostels for pilgrims, or hospital schools. The word "hospital" comes from the Latin hospes, signifying a stranger or foreigner, hence a guest. Another noun derived from this, hospitium, came to signify hospitality, that is, the relation between guest and shelterer: hospitality, friendliness, and hospitable reception. By metonymy, the Latin word then came to mean a guest-chamber, a guest's lodging, an inn. Hospes is thus the root of the English words host (where the p was dropped for convenience of pronunciation), hospitality, hospice, hostel, and hotel. The last derives from Latin via the Old French word hostel, whose s fell silent and was eventually dropped from the spelling, a loss signified by the circumflex in the modern French hôtel. The German word Spital shares similar roots.

Types

Some patients go to a hospital just for diagnosis, treatment, or therapy and then leave ("outpatients") without staying overnight; while others are "admitted" and stay overnight or for several days or weeks or months ("inpatients"). Hospitals are usually distinguished from other types of medical facilities by their ability to admit and care for inpatients whilst the others, which are smaller, are often described as clinics.

General and acute care

The best-known type of hospital is the general hospital, also known as an acute-care hospital. These facilities handle many kinds of disease and injury, and normally have an emergency department (sometimes known as "accident & emergency") or trauma center to deal with immediate and urgent threats to health. Larger cities may have several hospitals of varying sizes and facilities. Some hospitals, especially in the United States and Canada, have their own ambulance service.

District

A district hospital typically is the major health care facility in its region, with large numbers of beds for intensive care, critical care, and long-term care.

In California, "district hospital" refers specifically to a class of healthcare facility created shortly after World War II to address a shortage of hospital beds in many local communities. Even today, district hospitals are the sole public hospitals in 19 of California's counties, and are the sole locally accessible hospital within nine additional counties in which one or more other hospitals are present at a substantial distance from a local community. Twenty-eight of California's rural hospitals and 20 of its critical-access hospitals are district hospitals. They are formed by local municipalities, have boards that are individually elected by their local communities, and exist to serve local needs. They are a particularly important provider of healthcare to uninsured patients and patients with Medi-Cal (which is California's Medicaid program, serving low-income persons, some senior citizens, persons with disabilities, children in foster care, and pregnant women). In 2012, district hospitals provided $54 million in uncompensated care in California.

Specialized

A specialty hospital is primarily and exclusively dedicated to one or a few related medical specialties. Subtypes include rehabilitation hospitals, children's hospitals, seniors' (geriatric) hospitals, long-term acute care facilities, and hospitals for dealing with specific medical needs such as psychiatric problems (see psychiatric hospital), cancer treatment, certain disease categories such as cardiac, oncology, or orthopedic problems, and so forth.

In Germany specialised hospitals are called Fachkrankenhaus; an example is Fachkrankenhaus Coswig (thoracic surgery). In India, specialty hospitals are known as super-specialty hospitals and are distinguished from multispecialty hospitals which are composed of several specialties.

Specialised hospitals can help reduce health care costs compared to general hospitals. For example, Narayana Health's cardiac unit in Bangalore specialises in cardiac surgery and handles a significantly greater number of patients: it has 3,000 beds and performs 3,000 paediatric cardiac operations annually, the largest number in the world for such a facility. Surgeons are paid a fixed salary instead of being paid per operation; thus, as the number of procedures increases, the hospital is able to take advantage of economies of scale and reduce its cost per procedure. Each specialist may also become more efficient by working on one procedure, like a production line.
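
The economics can be summarised in one line (a stylised model, not figures from the source):

    c(n) = \frac{F}{n} + v,

where F is the fixed annual cost (salaried surgeons, equipment), v the variable cost per operation, and n the number of procedures performed. The cost per procedure c(n) falls toward v as n grows, which is the economy of scale described above.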

Teaching

A teaching hospital delivers healthcare to patients as well as training to prospective medical professionals such as medical students and student nurses. It may be linked to a medical school or nursing school, and may be involved in medical research. Students may also observe clinical work in the hospital.

Clinics

Clinics generally provide only outpatient services, but some may have a few inpatient beds and a limited range of services that may otherwise be found in typical hospitals.

Departments or wards

A hospital contains one or more wards that house hospital beds for inpatients. It may also have acute services such as an emergency department, operating theatre, and intensive care unit, as well as a range of medical specialty departments. A well-equipped hospital may be classified as a trauma center. They may also have other services such as a hospital pharmacy, radiology, pathology, and medical laboratories. Some hospitals have outpatient departments such as behavioral health services, dentistry, and rehabilitation services.

A hospital may also have a department of nursing, headed by a chief nursing officer or director of nursing. This department is responsible for the administration of professional nursing practice, research, and policy for the hospital.

Many units have both a nursing and a medical director that serve as administrators for their respective disciplines within that unit. For example, within an intensive care nursery, a medical director is responsible for physicians and medical care, while the nursing manager is responsible for all the nurses and nursing care.

Support units may include a medical records department, release of information department, technical support, clinical engineering, facilities management, plant operations, dining services, and security departments.

Remote monitoring

The COVID-19 pandemic stimulated the development of virtual wards across the British NHS. Patients are managed at home, monitoring their own oxygen levels using an oxygen saturation probe if necessary and supported by telephone. West Hertfordshire Hospitals NHS Trust managed around 1200 patients at home between March and June 2020 and planned to continue the system after COVID-19, initially for respiratory patients. Mersey Care NHS Foundation Trust started a COVID Oximetry@Home service in April 2020. This enables them to monitor more than 5000 patients a day in their own homes. The technology allows nurses, carers, or patients to record and monitor vital signs such as blood oxygen levels.

Funding

Modern hospitals derive funding from a variety of sources. They may be funded by private payment, health insurance, public expenditure, or charitable donations.

In the United Kingdom, the National Health Service delivers health care to legal residents funded by the state "free at the point of delivery", and emergency care free to anyone regardless of nationality or status. Due to the need for hospitals to prioritise their limited resources, there is a tendency in countries with such systems for 'waiting lists' for non-crucial treatment, so those who can afford it may take out private health care to access treatment more quickly.

In the United States, hospitals typically operate privately and in some cases on a for-profit basis, such as HCA Healthcare. The list of procedures and their prices are billed with a chargemaster; however, these prices may be lower for health care obtained within healthcare networks. Legislation requires hospitals to provide care to patients in life-threatening emergency situations regardless of the patient's ability to pay. Privately funded hospitals which admit uninsured patients in emergency situations incur direct financial losses, such as in the aftermath of Hurricane Katrina.

Quality and safety

As the quality of health care has increasingly become an issue around the world, hospitals have increasingly had to pay serious attention to this matter. Independent external assessment of quality is one of the most powerful ways to assess this aspect of health care, and hospital accreditation is one means by which this is achieved. In many parts of the world such accreditation is sourced from other countries, a phenomenon known as international healthcare accreditation, by groups such as Accreditation Canada from Canada, the Joint Commission from the US, the Trent Accreditation Scheme from Great Britain, and the Haute Autorité de santé (HAS) from France. In England hospitals are monitored by the Care Quality Commission. In 2020 they turned their attention to hospital food standards after seven patient deaths from listeria linked to pre-packaged sandwiches and salads in 2019, saying "Nutrition and hydration is part of a patient’s recovery."

The World Health Organization noted in 2011 that going into hospital was far riskier than flying. Globally, the chance of a patient being subject to an error was about 10%, and the chance of death resulting from an error was about 1 in 300, according to Liam Donaldson. 7% of hospitalised patients in developed countries, and 10% in developing countries, acquire at least one health care-associated infection. In the USA, 1.7 million infections are acquired in hospital each year, leading to 100,000 deaths, a death toll far worse than in Europe, where 4.5 million infections led to 37,000 deaths.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1763 2023-05-06 14:56:03

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,369

Re: Miscellany

1766) Holiday

Gist

A holiday is a period of time during which you relax and enjoy yourself away from home. People sometimes refer to their holiday as their holidays. [British] I've just come back from a holiday in the United States.

Summary

A holiday (from "holy day") was originally a day dedicated to religious observance; in modern times it is a day of either religious or secular commemoration. Many holidays of the major world religions tend to occur at the approximate dates of more ancient, pagan festivals. In the case of Christianity, this is sometimes owing to the policy of the early church of scheduling Christian observances at dates when they would eclipse pagan ones—a practice that proved more efficacious than merely prohibiting the earlier celebrations. In other cases, the similarity of the date is due to the tendency to celebrate turning points of the seasons, or to a combination of the two factors.

In many countries, secular holidays are based on commemoration of historic events and the birthdays of national heroes—e.g., Independence Day in the United States and Bastille Day in France. Forms of observance of national holidays range from display of the national flag to exemption of public employees from work, the latter practice generally being followed also by business organizations. Traditionally, many holidays are celebrated by family reunions and the sending of greeting cards and gifts. In Great Britain, the so-called bank holidays are marked by the closing of banks and other institutions.

Details

A holiday is a day or other period of time set aside for festivals or recreation. Public holidays are set by public authorities and vary by state or region. Religious holidays are set by religious organisations for their members and are often also observed as public holidays in religious-majority countries. Some religious holidays, such as Christmas, have become or are becoming secularised by some or all of those who observe them. In addition to secularisation, many holidays have become commercialised due to the growth of industry.

Holidays can be thematic, celebrating or commemorating particular groups, events or ideas, or non-thematic, days of rest which do not have any particular meaning. In Commonwealth English, the term can refer to any period of rest from work, such as vacations or school holidays. In American English, the holidays typically refers to the period from Thanksgiving to New Year's, which contains many important holidays in American culture.

Terminology

The word originally referred only to special religious days.

The word holiday has differing connotations in different regions. In the United States the word is used exclusively to refer to the nationally, religiously or culturally observed day(s) of rest or celebration, or the events themselves, whereas in the United Kingdom and other Commonwealth nations, the word may refer to the period of time where leave from one's duties has been agreed, and is used as a synonym for the US-preferred vacation. This time is usually set aside for rest, travel or participation in recreational activities, with entire industries targeted to coincide with or enhance these experiences. The days of leave may not coincide with any specific customs or laws. Employers and educational institutes may designate 'holidays' themselves, which may or may not overlap with nationally or culturally relevant dates; this again comes under the second connotation, but it is the first sense detailed above that this article is concerned with. The modern use varies geographically. In North America, it means any dedicated day or period of celebration. In the United Kingdom, Australia and New Zealand, holiday is often used instead of the word vacation.

Global holidays

The celebration of the New Year has been a common holiday across cultures for at least four millennia. Such holidays normally celebrate the last day of a year and the arrival of the next year in a calendar system. In modern cultures using the Gregorian calendar, the New Year's celebration spans New Year's Eve on 31 December and New Year's Day on 1 January. However, other calendar systems also have New Year's celebration, such as Chinese New Year and Vietnamese Tet. New Year's Day is the most common public holiday, observed by all countries using the Gregorian calendar except Israel.

Christmas is a popular holiday globally due to the spread of Christianity. The holiday is recognised as a public holiday in many countries in Europe, the Americas, Africa and Australasia and is celebrated by over 2 billion people. Although a holiday with religious origins, Christmas is often celebrated by non-Christians as a secular holiday. For example, 61% of Brits celebrate Christmas in an entirely secular way. Christmas has also become a tradition in some non-Christian countries. For example, for many Japanese people, it has become customary to buy and eat fried chicken on Christmas.

Recently invented holidays commemorate a range of modern social and political issues and other important topics. The United Nations publishes a list of International Days and Weeks. One such day is International Women's Day on 8 March, which celebrates women's achievements and campaigns for gender equality and women's rights. Earth Day has been celebrated by people across the world since 1970, with 10,000 events in 2007. It is a holiday marking the dangers of environmental damage, such as pollution and the climate crisis.

Common secular holidays

Other secular holidays are observed regionally, nationally and across multi-country regions. The United Nations Calendar of Observances dedicates decades, years, months, weeks and days to specific topics. Holidays dedicated to an observance, such as the commemoration of the ending of World War II or the Shoah, can also be part of the reparation obligation under UN General Assembly Resolution 60/147, the Basic Principles and Guidelines on the Right to a Remedy and Reparation for Victims of Gross Violations of International Human Rights Law and Serious Violations of International Humanitarian Law.

Another example of a major secular holiday is the Lunar New Year, which is celebrated across East Asia and South East Asia. Many other days are marked to celebrate events or people, but are not strictly holidays as time off work is rarely given; examples include Arbor Day (originally U.S.), Labor Day (celebrated sometimes under different names and on different days in different countries), and Earth Day (22 April).

Public holidays:

Substitute holidays

If a holiday coincides with another holiday or a weekend day, a substitute holiday may be recognised in lieu. In the United Kingdom the government website states that "If a bank holiday is on a weekend, a 'substitute' weekday becomes a bank holiday, normally the following Monday", and the list of bank holidays for the year 2020 includes Monday 28 December as "Boxing Day (substitute day)", as 26 December is a Saturday. The process of moving a holiday from a weekend day to the following Monday is known as Mondayisation in New Zealand.
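
The substitution rule lends itself to a short sketch in Python. This simplified version ignores country-specific complications, such as two consecutive holidays both falling on a weekend:

    # "Mondayisation": a holiday falling on a weekend is observed on the
    # following Monday instead.
    from datetime import date, timedelta

    def substitute_day(holiday: date) -> date:
        if holiday.weekday() == 5:              # Saturday
            return holiday + timedelta(days=2)
        if holiday.weekday() == 6:              # Sunday
            return holiday + timedelta(days=1)
        return holiday                          # weekday: no substitution

    print(substitute_day(date(2020, 12, 26)))   # Boxing Day 2020 -> 2020-12-28

The example reproduces the UK case quoted above: 26 December 2020 fell on a Saturday, so the substitute bank holiday was Monday 28 December.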

National days

National days are days of significance to a nation or nation state. National days are typically celebratory of a state's independence (e.g. 4 July in the US), founding or unification (e.g. German Unity Day), the commemoration of a revolution (e.g. Bastille Day in France) or liberation (e.g. 9 May in the Channel Islands), or the feast day for a patron saint (e.g. St Patrick's Day in Ireland) or ruler (e.g. 5 December in Thailand). Every country other than Denmark and the United Kingdom observes a national day. In the UK, constituent countries have official or unofficial national days associated with their patron saint. A British national day has often been proposed, such as the date of the Acts of Union 1707 (1 May) or the King's Official Birthday, but never adopted.

Other days of national importance exist, such as those celebrating a country's military or veterans. For example, Armistice Day (11 November) is recognised in World War I Allied nations (and across the Commonwealth) to memorialise those lost in the World Wars. National leaders will typically attend remembrance ceremonies at national memorial sites.

Religious holidays

Many holidays are linked to faiths and religions. Christian holidays are defined as part of the liturgical year, the chief ones being Easter and Christmas. The Orthodox Christian and Western-Roman Catholic patronal feast day or "name day" is celebrated on each place's patron saint's day, according to the Calendar of saints. Jehovah's Witnesses annually commemorate "The Memorial of Jesus Christ's Death", but do not celebrate other holidays with any religious significance such as Easter, Christmas or New Year. This holds especially true for those holidays that have combined and absorbed rituals, overtones or practices from non-Christian beliefs into the celebration, as well as those holidays that distract from or replace the worship of Jehovah. In Islam, the largest holidays are Eid al-Fitr (immediately after Ramadan) and Eid al-Adha (at the end of the Hajj). Ahmadi Muslims additionally celebrate Promised Messiah Day, Promised Reformer Day, and Khilafat Day, though contrary to popular belief, none of these is regarded as a holiday. Hindus, Jains and Sikhs observe several holidays, one of the largest being Diwali (the Festival of Lights). Japanese holidays, as well as a few Catholic holidays, contain heavy references to several different faiths and beliefs. Celtic, Norse, and Neopagan holidays follow the order of the Wheel of the Year. For example, Christmas customs such as decorating trees and the colours green, red, and white closely parallel Yule, a lesser sabbat of the Wheel of the Year in modern Wicca (a modern Pagan belief); some of these customs are closely linked to Swedish festivities. The Baháʼí Faith observes 11 annual holidays on dates determined using the Baháʼí calendar. Jews have two holiday seasons: the Spring Feasts of Pesach (Passover) and Shavuot (Weeks, called Pentecost in Greek); and the Fall Feasts of Rosh Hashanah (Head of the Year), Yom Kippur (Day of Atonement), Sukkot (Tabernacles), and Shemini Atzeret (Eighth Day of Assembly).

Secularisation

Some religious holidays are also celebrated by many as secular holidays. For example, 61% of Brits celebrate Christmas in an entirely secular way. 81% of non-Christian Americans also celebrate Christmas. A 2019 Gallup poll found that two-thirds of Americans still celebrate an at least somewhat religious Christmas.

The claimed over-secularisation of particular holidays has caused controversy and claims of censorship of religion or political correctness. For example, in the 1990s, Birmingham City Council promoted a series of events in the Christmas season under the brand Winterval to create a more multi-cultural atmosphere about the seasonal festivities. The Bishop of Birmingham responded to the events, saying "the secular world, which expresses respect for all, is actually embarrassed by faith. Or perhaps it is Christianity which is censored". In the United States, conservative commentators have characterised the secularisation of Winter festivities as "the War on Christmas".

Unofficial holidays

These are holidays that are not traditionally marked on calendars. These holidays are celebrated by various groups and individuals. Some promote a cause, others recognize historical events not officially recognized, and others are "funny" holidays celebrated with humorous intent. For example, Monkey Day is celebrated on December 14, International Talk Like a Pirate Day is observed on September 19, and Blasphemy Day is held on September 30. Other examples are April Fools' Day on April 1 and World No Tobacco Day on May 31. Various community organizers and marketers promote odd social media holidays.

Commercialism

In the United States, holidays have been drawn into a culture of consumption since the late 19th century. Many civic, religious and folk festivals have been commercialised. As such, traditions have been reshaped to serve the needs of industry. Leigh Eric Schmidt argues that the growth of consumption culture allowed the growth of holidays as an opportunity for increased public consumption and the orderly timing of it. Thus, after the Civil War, as department stores became the spatial expression of commercialism, holidays became the temporal expression of it.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1764 2023-05-07 14:56:58

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,369

Re: Miscellany

1767) Visa Inc.

Summary

Visa Inc. (stylized as VISA) is an American multinational financial services corporation headquartered in San Francisco, California. It facilitates electronic funds transfers throughout the world, most commonly through Visa-branded credit cards, debit cards and prepaid cards. Visa is one of the world's most valuable companies.

Visa does not issue cards, extend credit or set rates and fees for consumers; rather, Visa provides financial institutions with Visa-branded payment products that they then use to offer credit, debit, prepaid and cash access programs to their customers. In 2015, the Nilson Report, a publication that tracks the credit card industry, found that Visa's global network (known as VisaNet) processed 100 billion transactions during 2014 with a total volume of US$6.8 trillion.
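
Those two figures imply an average transaction value, which makes a quick sanity check in Python:

    # Average value per transaction from the Nilson Report figures above.
    total_volume = 6.8e12      # US$6.8 trillion processed in 2014
    transactions = 100e9       # 100 billion transactions
    print(total_volume / transactions)   # -> 68.0, i.e. about US$68 each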

Visa was founded in 1958 by Bank of America (BofA) as the BankAmericard credit card program. In response to competitor Master Charge (now Mastercard), BofA began to license the BankAmericard program to other financial institutions in 1966. By 1970, BofA gave up direct control of the BankAmericard program, forming a cooperative with the other various BankAmericard issuer banks to take over its management. It was then renamed Visa in 1976.

Nearly all Visa transactions worldwide are processed through the company's directly operated VisaNet at one of four secure data centers, located in Ashburn, Virginia; Highlands Ranch, Colorado; London, England; and Singapore. These facilities are heavily secured against natural disasters, crime, and terrorism; can operate independently of each other and from external utilities if necessary; and can handle up to 30,000 simultaneous transactions and up to 100 billion computations every second.

Visa is the world's second-largest card payment organization (debit and credit cards combined), after being surpassed by China UnionPay in 2015, based on annual value of card payments transacted and number of issued cards. However, because UnionPay's size is based primarily on the size of its domestic market in China, Visa is still considered the dominant bankcard company in the rest of the world, where it commands a 50% market share of total card payments.

Details

Visa Credit cards

Whether you are looking for traditional benefits or premium rewards, there is a Visa card for you.

Visa Credit cards give you the convenience and security to make purchases, pay bills, or get cash from over 2 million ATMs worldwide. Whenever you use a credit card, you are actually borrowing money that you will pay back over time or in full. In addition to flexible payment options, credit cards can offer travel rewards, cash back, or other benefits.

Visa Debit cards

Enjoy the convenience of paying directly from your account, with all the security that Visa provides.

Visa Debit cards work like cash, only better. They are issued by your bank or other financial institution, and use funds directly from your bank account. Accepted worldwide, Visa Debit cards offer quick, secure and convenient access to your money in person, online, overseas and over the phone.

Visa prepaid cards

Easy to use and reloadable, Visa prepaid cards go wherever you go. No credit check or bank account needed.

With Visa prepaid cards, spend only what you have already deposited into your account and reload anytime. A smart alternative to cash, prepaid cards come in a range of options to suit your needs, from travel and teen cards to general-purpose cards.

What Is Visa?

In 1958, Bank of America launched BankAmericard, the first consumer-based credit card program. As cashless spending became popular, BankAmericard grew before officially becoming Visa in 1976 and forming a global corporation in 2007. The company is currently headquartered in San Francisco, California and has more than 20,000 employees worldwide.

In 2021, Visa had annual revenue of $21.4 billion. Visa competes with other large credit card networks — such as Mastercard, Discover, and American Express — but is the most common type of credit card, accounting for 52.8% of all cards in circulation.

So, what exactly is Visa? According to the company, Visa is a world leader in digital payments that aims to be a “force of good that advances inclusive, sustainable, and equitable growth for individuals, communities, and economies around the world.” The company hopes to remove barriers and give everyone the chance to connect to the global economy.

Visa Inc. (V) is one of the largest digital payments companies in the world, and serves individual consumers, merchants, governments, and financial institutions in more than 200 countries. The company does not issue debit or credit cards directly, but rather provides card services to major financial clients, as well as authorization, clearing, and settlement.

Visa generates revenue through selling its services as a middleman between merchants and financial institutions.

The top shareholders of Visa are Rajat Taneja, Alfred F. Kelly, Vasant M. Prabhu, Vanguard Group Inc., BlackRock Inc., and T. Rowe Price Associates Inc.

As of January 5, 2021, Visa's trailing-12-month (TTM) revenue is $21.9 billion and TTM net income is $10.9 billion. The company has a market capitalization of $473.0 billion.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1765 2023-05-08 14:11:58

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,369

Re: Miscellany

1768) Broadcasting

Gist

Broadcasting is the distribution of audio or video content to a dispersed audience via any electronic mass communications medium, but typically one using the electromagnetic spectrum (radio waves), in a one-to-many model.

Summary

Broadcasting is the distribution of audio or video content to a dispersed audience via any electronic mass communications medium, but typically one using the electromagnetic spectrum (radio waves), in a one-to-many model. Broadcasting began with AM radio, which came into popular use around 1920 with the spread of vacuum tube radio transmitters and receivers. Before this, most implementations of electronic communication (early radio, telephone, and telegraph) were one-to-one, with the message intended for a single recipient. The term broadcasting evolved from its use as the agricultural method of sowing seeds in a field by casting them broadly about. It was later adopted for describing the widespread distribution of information by printed materials or by telegraph. Examples applying it to "one-to-many" radio transmissions of an individual station to multiple listeners appeared as early as 1898.

Over-the-air broadcasting is usually associated with radio and television, though more recently, both radio and television transmissions have begun to be distributed by cable (cable television). The receiving parties may include the general public or a relatively small subset; the point is that anyone with the appropriate receiving technology and equipment (e.g., a radio or television set) can receive the signal. The field of broadcasting includes both government-managed services such as public radio, community radio and public television, and private commercial radio and commercial television. The U.S. Code of Federal Regulations, title 47, part 97 defines "broadcasting" as "transmissions intended for reception by the general public, either direct or relayed". Private or two-way telecommunications transmissions do not qualify under this definition. For example, amateur ("ham") and citizens band (CB) radio operators are not allowed to broadcast. As defined, "transmitting" and "broadcasting" are not the same.

Transmission of radio and television programs from a radio or television station to home receivers by radio waves is referred to as "over the air" (OTA) or terrestrial broadcasting and in most countries requires a broadcasting license. Transmissions using a wire or cable, like cable television (which also retransmits OTA stations with their consent), are also considered broadcasts but do not necessarily require a license (though in some countries, a license is required). In the 2000s, transmissions of television and radio programs via streaming digital technology have increasingly been referred to as broadcasting as well.

Details

Broadcasting is electronic transmission of radio and television signals that are intended for general public reception, as distinguished from private signals that are directed to specific receivers. In its most common form, broadcasting may be described as the systematic dissemination of entertainment, information, educational programming, and other features for simultaneous reception by a scattered audience with appropriate receiving apparatus. Broadcasts may be audible only, as in radio, or visual or a combination of both, as in television. Sound broadcasting in this sense may be said to have started about 1920, while television broadcasting began in the 1930s. With the advent of cable television in the early 1950s and the use of satellites for broadcasting beginning in the early 1960s, television reception improved and the number of programs receivable increased dramatically.

The scope of this article encompasses the nontechnical aspects of broadcasting in the pre-Internet era. It traces the development of radio and television broadcasting, surveys the state of broadcasting in various countries throughout the world, and discusses the relationship of the broadcaster to government and the public. Discussion of broadcasting as a medium of art includes a description of borrowings from other media.

Broadcasting systems:

The broadcaster and the government

Most observers recognize that no broadcast organization can be wholly independent of government, for all of them must be licensed in accordance with international agreements. Although broadcasters in democratic countries pride themselves on their freedom with respect to their governments, they are not always free of stockholder or advertiser pressure, nor are producers and editors truly independent if senior executives, under pressure from whatever source, interfere with their editorial functions. Independence, therefore, is a relative term when it is applied to broadcasting.

In a monograph written for the European Broadcasting Union, broadcasting systems are classified under four headings: state operation; establishment of a public corporation or authority; partnership of public authorities and private interests; and private management. A brief summary of these systems indicates the complex variations that have arisen.

State operation

Grouped under this heading are broadcasting systems that are operated by a government department or delegated to an administration, perhaps with a legal personality and even possibly independent in financial and administrative matters, but subject to the government and not essentially autonomous. Under this heading came the systems in most communist countries. In the Soviet Union a special committee was set up in 1957 to be in charge of Soviet radio and television under the direct authority of the U.S.S.R. Council of Ministers. Similar arrangements were made in Czechoslovakia and Poland, except that the committees were given a legal personality. Romania had delegated broadcasting to a committee attached to the Council of Ministers. All-India Radio is a department of the Ministry of Information and Broadcasting. Similar arrangements are common in countries that were colonies but have gained their independence since World War II.

Establishment of a public corporation or authority

The BBC has been the prototype of this kind of system. Provided it abides by the charter and terms of the license under which it operates, the BBC has maximum independence as regards the disposal of its funds (although its revenue is subject to governmental decision as to the cost of the license that is required for every television or radio receiver), the production and scheduling of programs, and, above all, editorial control. Certain residual government powers are either hedged around with agreed provisos or never exercised. Its income, save for profits on the sale of programs abroad and the sale of various phonograph records and publications, is exclusively derived from licenses. External broadcasting (i.e., broadcasting to areas outside national boundaries) is separately financed. The chairman and Board of Governors constitute the legal personality of the BBC; they are chosen by the government not as representatives of sectional interests but on the basis of their experience and standing. Political parties in office have been careful to avoid political prejudice in these appointments.

The Canadian Broadcasting Corporation (CBC), or Société Radio-Canada, also has substantial independent powers as determined by the Broadcasting Act of 1958 and its two successors, passed in 1968 and 1991. These later acts responded to technological as well as social changes, such as the specific needs of the regions and the aspirations of French-speaking Canadian citizens. The CBC is dependent on an annual parliamentary grant for its finance, supplemented by an income derived from advertising that amounts to about one-quarter of its annual revenue. Canadian broadcasting as a whole is a mixed system, with private broadcasting companies operating alongside the CBC.

The Japan Broadcasting Corporation, or the Nippon Hōsō Kyōkai (NHK), was charged by a series of acts in 1950 with the task of conducting “its broadcasting service for the public welfare in such a manner that its broadcasts may be received all over Japan.” The NHK Board of Governors is appointed by the prime minister with the consent of both houses of the Diet. The system is financed almost exclusively from the sale of licenses for receiving sets. Private broadcasting, allowed since 1950, has led to the creation of 170 private broadcasting companies.

Though German broadcasting is properly included in this category, the situation there is substantially different, for the basic radio and television services are a matter not for the federal government but for the individual states (Länder). The state broadcasting organizations are also grouped together in a national organization, the First German Television network. In each state, though there are some variations, there are a broadcasting council that is appointed by the legislature or nominated by churches, universities, associations of employers or trade unions, political parties, or the press; an administrative council; and a director general. Their revenue comes from receiving-set licenses and sometimes also from advertising.

The broadcasting system in Belgium provides an interesting example of a device that has been used successfully for coping with a two-language country. There are three public authorities: one for French broadcasts, a second for Flemish, and a third that owns the property, owns and operates the technical equipment, and is responsible for the symphony orchestra, record library, and central reference library.

Partnership of public authorities and private interests

In many cases this partnership is nominal and historical rather than substantial and actual. The outstanding example is Radiotelevisione Italiana (RAI), originally founded in 1924. In 1927 an agreement was made with the government for a 25-year broadcasting concession. The charter was extended to cover television in 1952. Two years later a government agency acquired control, and in 1985 it owned 99 percent of the shares. RAI’s administrative council consists of 20 members, 6 of whom are elected by the shareholders’ assembly, 10 elected by a parliamentary commission, and 4 selected from a list of candidates representing the regional councils. A parliamentary committee of 40 members is in charge of running the service. The organization must also prepare an outline of programs on a quarterly basis for approval by the Ministry of Posts and Telecommunications, aided by an advisory committee concerned with cultural, artistic, and educational policies. A separate organization runs the broadcast advertising business, which, together with receiving-set licenses, provides the revenue of RAI. State monopoly of broadcasting was terminated in 1976. By the early 1990s there were about 450 private television stations operating in Italy alongside the RAI.

In Sweden the broadcasting monopoly is technically a privately owned corporation in which the state has no financial interest, thus emphasizing the independence of Sveriges Radio from the government. The shares of the corporation must be held by the Swedish press (20 percent), large noncommercial national bodies or movements (60 percent), and commerce and industry (20 percent). The board of governors is made up of a chairman and government nominees and an equal number elected by the shareholders; there also are two employee representatives of Sveriges Radio on the board. The government reserves the right to determine the amount of revenue from receiving-set licenses on an annual basis and thus controls both investment and the amount of broadcasting. The government, however, does not control how that revenue is spent. On balance, Sveriges Radio has a substantial measure of freedom.

In Switzerland too there are elements of partnership between private interests and public authorities, but the federal constitution, the need to broadcast in three languages, and geographical factors have led to a system by which the Swiss Broadcasting Corporation is composed of three regional societies based in Lausanne, Zürich, and Lugano-Besso.

Private management

Most of the broadcasting organizations under this heading are commercial firms that derive their revenue from advertising, which takes the form of brief announcements scheduled at regular intervals throughout the day. In some cases a program, such as a sports event or concert, may be sponsored by one advertiser or group of advertisers. Methods and degree of government control vary, and no general characteristics may be isolated. Private-enterprise radio predominates in the United States and Latin America.

Subject to similar controls in these countries are many nonprofit educational stations, financed by universities, private subscriptions, and foundations. There is a public-service network, the Public Broadcasting Service, in the United States.

Other methods of distributing sound and vision programs by wire and cable are not strictly broadcasting. In the main, wire-diffusion enterprises concentrate on giving efficient reception of broadcast programs in densely populated areas, large blocks of buildings, and hotels. A tall apartment building, for example, may have one television antenna on its roof to which residents may attach their receivers. Programs such as sports, special events, films, and theatrical performances are also available via direct cable lines to subscribers as “Pay TV.” Cable television reached about 56,100,000 homes in the United States in 1991 and has created two industries in broadcasting: one to hook up homes, the other to supply the programs. Cable television has drawn viewers away from the major commercial television networks, whose share of the prime-time audience has fallen and is expected to decline further.

The broadcaster and the public:

Nature of the broadcast audience

The psychology and behaviour of a radio or television audience, which is composed principally of individuals in the privacy of their own homes, differ considerably from those of an audience in a theatre or lecture hall. There is none of the crowd atmosphere that prevails in a public assembly, and listeners are only casually aware that they are actually part of a large audience. This engenders a sense of intimacy that causes the listener to feel a close personal association with the speaker or performer. Furthermore, many people will not accept in their own homes many of the candid forms of expression that they readily condone or support on the stage or in literature.

Because it owes its license to operate to the state, if indeed it is not state-operated, and because of its intimate relationship to its audience, broadcasting functions in a quasi-public domain, open in all its phases to public scrutiny. It is therefore held to be invested with a moral as well as a legal responsibility to serve the public interest and must remain more sensitive to public sentiment and political opinion than most other forms of public expression.

Audience measurement

For economic reasons, as well as those outlined above, evaluation of audience opinion and response to radio or television programs is important to the broadcaster. Audience measurement presents difficult problems, because there is no box office by which to determine the exact number of listeners. Mail received comes principally from those who have the time and inclination to write and cannot be regarded as wholly representative. Audience-measurement information may also be obtained by telephone-sampling methods, interviews in the home by market-research organizations, or special recording devices attached to individual receiving sets. The latter, installed with the owner’s consent, record the amount of time the set is used, when it is turned on and off, and the stations tuned in. These devices are expensive, however, and do not necessarily indicate whether someone is actually watching or listening, and they are therefore limited to small samples of the total audience. Whatever the method of rating, commercial broadcasters are quick to alter or discontinue any program that shows lack of audience appeal, and the listeners are thus influential in determining the nature of the programs that are offered to them. In commercial broadcasting, sponsored programs also are affected by their apparent success or failure in selling the goods advertised.
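
Since the paragraph above reduces audience measurement to sampling, a short Python sketch of the underlying arithmetic may help. It is a minimal illustration only, not any ratings service's actual methodology, and the sample figures are invented.

    import math

    def estimate_share(viewing_homes, sample_size, z=1.96):
        # Proportion of sampled homes tuned in, with a 95% normal-approximation
        # margin of error for the population as a whole.
        p = viewing_homes / sample_size
        margin = z * math.sqrt(p * (1 - p) / sample_size)
        return p, margin

    # Hypothetical: 412 of 2,000 metered homes were tuned to a given program.
    share, moe = estimate_share(412, 2000)
    print(f"Estimated audience share: {share:.1%} +/- {moe:.1%}")  # 20.6% +/- 1.8%

This also shows why relatively small samples are tolerable: the margin of error shrinks with the square root of the sample size, not with the size of the total audience.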

Educational broadcasting

It is difficult to give an account of educational broadcasting in countries where broadcasting is largely or wholly a matter of private management and where the larger and more important stations and networks are private commercial enterprises. Nevertheless, considerable numbers of educational transmissions are made in the United States and Latin America by universities and colleges and sometimes by municipal or state-owned stations. The Public Broadcasting Service in the United States has increased the amount of educational and generally more thought-provoking material available on the air, and in Latin America some countries use broadcasts not only to support the work of teachers in schools but also to combat illiteracy and to impart advice to isolated rural populations in matters of public health, agricultural methods, and other social and practical subjects. The Roman Catholic Church has been in the forefront of the latter activity, operating, for example, the Rede Nacional de Emissôras Católicas in Brazil and the Acción Cultural Popular in Colombia. A similar use of broadcasting is made in most of the tropical countries of Africa and Asia.

Japan’s NHK has the most ambitious educational-broadcasting output in the world. One of its two television services and one of its two AM radio services are devoted wholly to education, while the general television services and FM radio also transmit material of this nature. Japan prepares programs for primary, secondary, and higher education, special offerings for the mentally and physically handicapped, and a wide range of transmissions under the general heading of “social education,” which includes foreign languages, vocational and technical instruction, advice on agriculture, forestry, fisheries, and business management, plus special programs for children, adolescents, and women. The educational broadcasts of NHK reach more than 90 percent of Japan’s primary and secondary schools.

In Europe the French state broadcasting service devotes more than one-half of its radio output to educational and cultural broadcasts in the arts, letters, and sciences; on television, about 14 percent of the airtime on its first and second networks is devoted to adult education. Primary and secondary instruction is offered, as are refresher courses for teachers and university-level courses.

Although Italian radio devotes less than 1 percent of its output specifically to educational programs for children, nearly 20 percent is given to cultural and allied offerings. Educational television began in Italy in 1958 with courses of a vocational nature, followed by transmissions aimed at secondary schools. In 1966 special programs were initiated for areas where there are no secondary schools. By the early 1980s, 17 percent of Italian television time was devoted to educational and school broadcasts and 4 percent to cultural programs.

Swedish radio offers a comprehensive service of educational and cultural broadcasting, with the output on television higher than that on radio. There is also a substantial output of adult education at the primary, secondary, and university levels, with about 1,400 school broadcasts a year, and Sweden has concentrated on vocational training and refresher courses for teachers. German broadcasting, by contrast, has been used much less for formal education. In the Netherlands more than two and a half hours of school and continuing-education programming are broadcast weekly on radio; in addition, nearly eight hours of educational television are transmitted every week.

The BBC pioneered educational broadcasting; its work, in both radio and television, has steadily expanded. The BBC offers primary and secondary students more than 100 radio series and nearly 40 television series. The BBC also offers a wide range of biweekly programs especially designed for study in degree courses with the Open University, created and financed by the government, with the broadcast teaching supplemented by publications and correspondence work. By the mid-1970s, BBC broadcasts for the Open University averaged 16 hours weekly on radio and more than 18 hours on television. In addition, the Independent Broadcasting Authority in the United Kingdom has required the commercial-program companies to contribute educational material both for schools and for adults; by 1970 this amounted to 10 hours weekly during periods totaling 28 weeks of the year.

In Australia there is a small educational output on the commercial stations, both radio and television, but by far the greater part of educational broadcasting is undertaken by the Australian Broadcasting Corporation. Educational programming accounts for about 4 percent of radio time and 18 percent of television output, the majority of which is broadcast to schools and kindergartens. The Canadian Broadcasting Corporation is required to provide educational programs in both English and French and does so on its AM and FM radio networks, as well as on television.

Broadcasts for external reception

International broadcasting—the transmission of programs by a country expressly for audiences beyond its own frontiers—dates from the earliest days of broadcasting. The Soviet Union began foreign-language transmissions for propaganda purposes in the 1920s. Fascist Italy and Nazi Germany made such broadcasts at a later date. France, Great Britain, and the Netherlands were next in the field among European countries, though their first use of shortwave broadcasting was aimed at French-, English-, or Dutch-speaking populations overseas. Great Britain began foreign-language broadcasting early in 1938 with a program in Arabic and transmissions in Spanish and Portuguese directed to Latin America. By August 1939, countries broadcasting in foreign languages included Albania, Bulgaria, China, France, Germany, Great Britain, Hungary, Italy, Japan, Romania, the Soviet Union, Spain, the United States, and Vatican City.

During World War II foreign-language broadcasting continued; the programs of the BBC in particular, because of their reliability and credibility, had an important effect in maintaining morale among the countries that were under German occupation. The continuance of international tension after World War II led to remarkable growth of foreign-language services. In 1950, for example, all of the communist countries of eastern Europe except East Germany had launched external services, although these were on a small scale, and even the Soviet Union was transmitting a total of more than 500 hours of broadcasts weekly in all foreign languages. The United Kingdom’s output, which had once led the field, had been reduced to slightly more than 600 hours a week and the Voice of America to less than 500 hours per week. By the early 1980s the situation had changed radically. The Soviet Union alone broadcast more than 2,000 hours per week, and the output of all communist countries of eastern Europe (excluding Yugoslavia) totaled about 1,500 hours. The United Kingdom logged 744 hours in 1981; West Germany logged 785 hours; and the United States broadcast over the Voice of America and Radio Free Europe/Radio Liberty 1,925 hours a week. The output of China had risen from 66 hours weekly in 1950 to 1,375 hours by 1981. The increase in Chinese broadcasts reflected in part the rising tension between China and the Soviet Union; significantly, the output of Albania, China’s ally for much of this period, rose from 26 to 560 hours weekly over the same span.

Monitoring and transcriptions

A logical development following from external broadcasting is the monitoring of foreign broadcasts and their analysis for intelligence purposes. The BBC in particular has a highly developed monitoring service; this activity often yields valuable information. The Central Intelligence Agency of the United States is also active in monitoring and analyzing foreign broadcasts. Transcriptions (recordings) of programs produced in either the domestic or the external services of one country can be acceptable for broadcast in others. Radio broadcasts of an educational nature can be used in different countries speaking the same language. Although many radio transcriptions are supplied free, in television the situation is different, and there is a substantial trade in television films.

Pirate and offshore stations

In some countries where broadcasting in general or radio alone is a monopoly, radio has had to compete for brief periods with independent commercial stations mounted on ships anchored at sea outside territorial waters. Sweden, Denmark, the Netherlands, and the United Kingdom have been the countries most affected by these stations, which have made use of unauthorized wavelengths, thus endangering other radio communications and operating free of any copyright obligations in respect to any of their broadcast material. Government action gradually has forced closure of such operations: in Sweden a competitive service of popular music proved effective; and in Denmark naval police action (the international legality of which may be questioned), followed by confiscation and heavy penalties, brought an end to the pirate station. The United Kingdom combined legislation penalizing any party who advertised or supplied such ships with the launching by the BBC of Radio 1, substantially a popular music service, to solve the problem. The French have had a particular problem of competition from the so-called postes périphériques, which include Europe No. 1 in the Saar and Radio Andorra in the Pyrenees, not to mention the French-language broadcasts of Monaco, Belgium, Luxembourg, and Switzerland. The strongest competition came from Europe No. 1, in which the French government finally purchased a controlling interest.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1766 2023-05-09 15:37:52

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,369

Re: Miscellany

1769) Gate

Summary

A gate or gateway is a point of entry to or from a space enclosed by walls. The word derives from the Old Norse "gat", meaning road or path; other terms include yett and port. The concept originally referred to the gap or hole in the wall or fence, rather than to a barrier which closed it. Gates may prevent or control the entry or exit of individuals, or they may be merely decorative. The moving part or parts of a gateway may be considered "doors", as they are fixed at one side whilst opening and closing like one.

A gate may have a latch that can be raised and lowered to open the gate or to hold it closed against swinging. Gates can be operated manually or by an automated gate operator. Locks are also used on gates to increase security.

Larger gates can be used for a whole building, such as a castle or fortified town. Actual doors can also be considered gates when they are used to block entry as prevalent within a gatehouse. Today, many gate doors are equipped with self-closing devices that can improve safety, security, and convenience.

It is important to choose a controlled gate closer to ensure a consistent closing speed, as well as safety and security. A self-closing gate can help prevent accidents involving children or pets, particularly around swimming pools, spas, beaches and hot tubs. A self-closing gate can also improve the security of the property by ensuring that the gate is closed and latched properly. There are various types of gate closers available, including exposed spring devices, spring hinges, and self-closing hinges. The appropriate type of closer will depend on the weight and size of the gate, as well as other factors like speed control, weather resistance, and ADA compliance.

Purpose-specific types of gate

* Baby gate: a safety gate to protect babies and toddlers
* City gate of a walled city
* Hampshire gate (a.k.a. New Zealand gate, wire gate, etc.)
* Kissing gate on footpaths
* Lychgate with a roof
* Mon (Japanese: gate): the religious torii is comparable to the Chinese pailou (paifang), the Indian torana, the Indonesian paduraksa, and the Korean hongsalmun. Mon are widespread in Japanese gardens.
* Portcullis of a castle
* Race gate: a gate used as a checkpoint on race tracks
* Slip gate on footpaths
* Turnstile
* Watergate of a castle by navigable water
* Slalom skiing gates
* Wicket gate

Details

A gate or gateway is a point of entry to or from a space enclosed by walls. Together with security fences and movable barriers, gates help manage entry to and exit from a property.

Security is the primary reason for installing an entrance gate: it helps prevent intruders from breaking into your house. Additional advantages of installing a residential gate include cutting insurance costs, increasing your property's value, and improving your house's appearance.

Several types of gates are available for use in homes and public locations. Six popular types, classified by their function, are described below.

1. Sliding Gate

Sliding gates are a classic design that is now widely used worldwide. As its name suggests, a sliding gate opens by sliding sideways along a track rather than swinging open from the middle like a conventional gate.

Modern sliding gates are made so precisely and are so light that anybody can open them by simply sliding them. The fact that they open sideways is among their most significant advantages: a sliding gate is the best option for homes with limited space.

Advantages of Sliding Gate

* Improved safety and security
* Reduced susceptibility to damage under high-wind loading conditions
* They look particularly attractive in front gardens and are an excellent way to safeguard your driveway.
* Can be secured with hook locks or automated gate operators

Disadvantages of Sliding Gate

* Sliding gates are louder because they have more moving parts; neither extra lubrication nor more frequent maintenance will eliminate the noise entirely.
* Regular maintenance is needed to keep the sliding gate's track clear of debris.
* Sliding gates cost more to install because of the side panel.

2. Turnstile Gate

Turnstile gates are the small gates used in subways and other public places that allow only one person to pass through at a time. Such gates were formerly used only near subways and train stations, but due to their numerous benefits, even large corporations have begun to use them in the workplace.

Advantages of Turnstile Gate

* Most venues utilize card readers for admission through turnstile gates to scan tickets and allow entry.
* Turnstile access control systems are often implemented to improve security at institutions and events. They may be set up to admit only those with the correct credentials.
* Turnstiles may also be used to track admissions and exits from an event. Many amusement parks, sports venues, and organizations use entrance and exit turnstiles with counters to estimate the number of visitors in their facilities accurately (see the counting sketch after this list).
* They are relatively cheap.
* Waterproof and dustproof, adaptable to the climate, and appropriate for both indoor and outdoor use.
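
As a toy illustration of the visitor-counting use mentioned above, the Python sketch below tallies entry and exit rotations to estimate current occupancy. The event names and event stream are invented for illustration; a real installation would read hardware signals.

    class TurnstileCounter:
        """Tally entry/exit turnstile rotations to estimate occupancy (toy example)."""

        def __init__(self):
            self.entries = 0
            self.exits = 0

        def record(self, event):
            # event is "in" for an entry rotation, "out" for an exit rotation
            if event == "in":
                self.entries += 1
            elif event == "out":
                self.exits += 1

        @property
        def occupancy(self):
            return self.entries - self.exits

    counter = TurnstileCounter()
    for event in ["in", "in", "in", "out", "in", "out"]:  # sample event stream
        counter.record(event)
    print(counter.entries, counter.exits, counter.occupancy)  # prints: 4 2 2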

Disadvantages of Turnstile Gate

* The channel width (the space that permits people’s entry) is quite narrow, often approximately 500 mm.
* Throughput is relatively slow.
* The turnstile arms can sometimes collide mechanically during operation, causing significant noise.

3. Vertical Pivot Gate

Vertical pivot gates are those that open up in the air to allow vehicles to pass through. Compared to swing and slide gates, which need twice as much area for opening on the ground, pivot gates use just the vertical space between the ground and the sky.

Advantages of Vertical Pivot Gate

* The convenience of pivot gate installation eliminates the need for highly qualified installers.
* It saves a lot of floor space because the gate opens vertically and doesn't need extra room on the ground.
* Opening and closing operation is fast.
* High security (great for prison applications)
* Low expenses for maintenance and repair
* Defense against rusting

Disadvantages of Vertical Pivot Gate

* The gate operator requires a large footing to rest on, which complicates installation.
* The cost of installing a vertical pivot gate is also much higher.

4. Swing Gate

Swing gates enhance the visual attractiveness of one's property. These gates used to come in a limited number of designs, but today there are many options available. Swing gates are a good option for securing workplaces inside factories, as well as houses. They require enough space on the ground to open, so a property next to a sidewalk could be a problem.

Advantages of Swing Gate

* The biggest advantage of swinging gates is their low maintenance requirements, especially manual gates. Swing gates have no motor or electronics, so they operate straightforwardly.
* Swing gates may be opened in either direction, making it simpler for people and vehicles from either direction to enter.
* Swing gates are typically the more affordable choice.
* They are more affordable to install since they have fewer moving components and, having no floor track, need less setup at the fence's edge.

Disadvantages of Swing Gate

* Swing gates may not be an option if your driveway is short since they need more space within the entrance. The gates must have a lot of space behind them to open inward.
* They are more difficult to fit on a slope.
* They are more susceptible to wind damage and are less secure than sliding gates.

5. Retractable Security Gate

Because a retractable security gate can be folded up and even stored away, it is an excellent choice for places with limited floor space. These gates are not only cost-effective, but they also give a high level of safety and make an impressive style statement.

Advantages of Retractable Gate

* For convenience, just one side of the gate may be opened at a time, saving you from having to open the whole gate.
* These gates are inexpensive.
* A foldable gate can be opened halfway or all the way.
* They are far lighter than other, solid gates.
* They provide a high level of defense against thieves, vandals, and other intruders.

Disadvantages of Retractable Gate

* Dust gathers in the gate's guide channel, making it increasingly difficult to open.
* The gate's steel channels might break if it is not operated correctly.
* Frequent lubrication is needed, so it is not maintenance-free.
* Painting needs to be done often, as these gates corrode more rapidly when exposed directly to sunshine or rain.

6. Automatic Gate

Electric gates are employed in many buildings. When gates are automated, the need for personnel to operate them is eliminated, which results in cost savings. Automatic gates can be opened with a gate pass card or a remote control.

Although these gates are usually connected to the mains power line, their batteries allow them to keep operating through electrical problems.

Advantages of Automatic Gate

* You can install a passcode or a card reader on an automated gate. These systems may improve access control.
* Automated gates increase security. You can reduce the danger of crimes like vandalism and theft since you can manage who enters your property.
* Manual gates and garages are inconvenient, as they must be pulled open or closed. With an automatic gate, one presses a button, or sensors open the gate without a button being pressed at all; many gates also close automatically once a vehicle has passed (a minimal controller sketch follows this section).
* An automated security gate may raise your house's value, since it boosts your property's curb appeal and security.
* Automatic gates are inexpensive to operate, since they do not need much energy to activate the motor and move.

Disadvantages of Automatic Gate

* The installation of electric entryways requires a significant financial investment from purchasing the gate to installing it.
* Most firms provide free after-sales support for a while. However, after the time limit has passed, it becomes quite expensive.
* Automated security gates rely heavily on electricity: when you lose power, so does your gate.
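
The open-on-trigger, close-after-clearing behaviour described above is essentially a small state machine. The Python sketch below is a minimal, hypothetical illustration of such a controller, not any manufacturer's firmware; the hold time and sensor inputs are invented for the example.

    from enum import Enum, auto

    class State(Enum):
        CLOSED = auto()
        OPENING = auto()
        OPEN = auto()
        CLOSING = auto()

    class GateController:
        """Toy automatic-gate controller: a button press or vehicle sensor
        opens the gate; the gate closes itself once the vehicle has cleared
        and a hold time has elapsed."""
        HOLD_OPEN_SECONDS = 5.0  # invented illustrative value

        def __init__(self):
            self.state = State.CLOSED
            self.opened_at = 0.0

        def on_trigger(self):
            # Button press or entry-sensor signal: start opening.
            if self.state in (State.CLOSED, State.CLOSING):
                self.state = State.OPENING

        def tick(self, now, fully_open, fully_closed, vehicle_present):
            # Called periodically with the current time and limit/vehicle sensors.
            if self.state is State.OPENING and fully_open:
                self.state, self.opened_at = State.OPEN, now
            elif self.state is State.OPEN:
                if vehicle_present:
                    self.opened_at = now  # safety: never start closing on a vehicle
                elif now - self.opened_at >= self.HOLD_OPEN_SECONDS:
                    self.state = State.CLOSING
            elif self.state is State.CLOSING and fully_closed:
                self.state = State.CLOSED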

FAQs

What is a gate?

A gate or gateway is a point of entry to or from a space enclosed by walls. It helps manage the entry and exit from a property by installing security fences and movable barriers.

What are the types of gates based on their function?

The types of gates based on their function are:
1. Sliding gate
2. Turnstile gate
3. Vertical pivot gate
4. Swing gate
5. Retractable security gate
6. Automatic gate

What are the advantages of automatic gates?

1. You can install a passcode or a card reader on an automated gate. These systems may improve access control.
2. Automated gates increase security. You can reduce the danger of crimes like vandalism and theft since you can manage who enters your property.
3. Manual gates and garages are inconvenient, as they must be pulled open or closed. With an automatic gate, one presses a button, or sensors open the gate without a button being pressed at all; many gates also close automatically once a vehicle has passed.
4. An automated security gate may raise your house's value, since it boosts your property's curb appeal and security.
5. Automatic gates are inexpensive to operate, since they do not need much energy to activate the motor and move.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1767 2023-05-10 14:21:31

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,369

Re: Miscellany

1770) Dream

Gist

a) a series of events or pictures which happen in your mind while you are asleep.
b) something that you want very much to happen, although it is not likely.

Summary

A dream is a succession of images, ideas, emotions, and sensations that usually occur involuntarily in the mind during certain stages of sleep. Humans spend about two hours dreaming per night, and each dream lasts around 5 to 20 minutes, although the dreamer may perceive the dream as being much longer than this.

The content and function of dreams have been topics of scientific, philosophical and religious interest throughout recorded history. Dream interpretation, practiced by the Babylonians in the third millennium BCE and even earlier by the ancient Sumerians, figures prominently in religious texts in several traditions, and has played a lead role in psychotherapy. The scientific study of dreams is called oneirology. Most modern dream study focuses on the neurophysiology of dreams and on proposing and testing hypotheses regarding dream function. It is not known where in the brain dreams originate, if there is a single origin for dreams or if multiple regions of the brain are involved, or what the purpose of dreaming is for the body or mind.

The human dream experience and what to make of it has undergone sizable shifts over the course of history. Long ago, according to writings from Mesopotamia and Ancient Egypt, dreams dictated post-dream behaviors to an extent sharply reduced in later millennia. These ancient writings about dreams highlight visitation dreams, where a dream figure, usually a deity or a prominent forebear, commands the dreamer to take specific actions, and which may predict future events. Framing the dream experience varies across cultures as well as through time.

Dreaming and sleep are intertwined. Dreams occur mainly in the rapid eye movement (REM) stage of sleep—when brain activity is high and resembles that of being awake. Because REM sleep is detectable in many species, and because research suggests that all mammals experience REM, linking dreams to REM sleep has led to conjectures that animals dream. However, humans also dream during non-REM sleep, and not all REM awakenings elicit dream reports. To be studied, a dream must first be reduced to a verbal report, which is an account of the subject's memory of the dream, not the subject's dream experience itself. So, dreaming by non-humans is currently unprovable, as is dreaming by human fetuses and pre-verbal infants.

Details

A dream is a hallucinatory experience that occurs during sleep.

Dreaming, a common and distinctive phenomenon of sleep, has throughout human history given rise to myriad beliefs, fears, and conjectures, both imaginative and experimental, regarding its mysterious nature. While any effort toward classification must be subject to inadequacies, beliefs about dreams nonetheless fall into various classifications depending upon whether dreams are held to be reflections of reality, sources of divination, curative experiences, or evidence of unconscious activity.

Efforts to study dreaming:

Dream reports

The manner in which people dream obviously defies direct observation. It has been said that each dream “is a personal document, a letter to oneself” and must be inferred from the observable behaviour of people. Furthermore, observational methods and purposes clearly affect conclusions to be drawn about the inferred dreams. Reports of dreams collected from people after morning awakenings at home tend to exhibit more content of an overt sexual and emotional nature than do those from laboratory subjects. Such experiences as dreaming in colour seldom are spontaneously mentioned but often emerge under careful questioning. Reports of morning dreams are typically richer and more complex than those collected early at night. Immediate recall differs from what is reported after longer periods of wakefulness. In spite of the unique qualities of each person’s dreams, there have been substantial efforts to describe the general characteristics of what people say they have dreamed.

Estimates by individuals of the length of their dreams can vary widely (and by inference, the actual length of the dreams varies widely as well). Spontaneously described dreams among laboratory subjects typically result in short reports; although some may exceed 1,000 words in length, about 90 percent of these reports are fewer than 150 words long. With additional probing, about a third of such reports are longer than 300 words.

Some investigators have been surprised by repeated findings that suggest dreams may be less fantastic or bizarre than generally supposed. One investigator stated that visual dreams are typically faithful to reality—that is, they are representational. To borrow terms from modern art, dreams are rarely described as abstract or surrealist. Except for those that are very short, dreams are reported to take place in ordinary physical settings, with about half of them seeming quite familiar to the dreamer. Only rarely is the setting said to be exotic or peculiar.

Apparently dreams are quite egocentric, with the dreamer perceiving himself as a participant, though the presence of others is typically recalled. Seldom does the person remember an empty, unpopulated dreamworld, and individuals seem to dream roughly two-thirds of the time about people they know. Usually these people are close acquaintances, with family members mentioned in about 20 percent of dream reports. Recollections of notables or weird representations of people are generally rare.

In cases of so-called lucid dreaming, subjects report having been aware that they were dreaming as the dream was taking place. Most lucid dreamers also report having been able to direct or manipulate the dream’s content to some extent. The nature of lucid dreaming and even the coherence of the notion have been disputed, however. Some researchers have suggested that it is a unique state of consciousness that combines elements of wakefulness and ordinary (nonlucid) dreaming.

The typical dream report is of visual imagery; indeed, in the absence of such imagery, the person may describe the phenomenon as thinking rather than “dreaming” while asleep. Rare statements about dreams dominated by auditory experience tend to be made with claims of actually having been awake. It is unusual, however, to hear of dreams without some auditory characteristics. Emotionally bland dreams are common. When dreams do contain emotional overtones, fear and anxiety are most commonly mentioned, followed by anger; pleasant feelings are most often those of friendliness. Reports of overtly erotic dreams, particularly among subjects studied in laboratory settings, are infrequent.

Many individuals report having recurring dreams, or dreams that are repeated over a short or a long period with at most minor variations. Some recurring dreams display common themes, such as being able to fly, being chased, being naked in public, or being late for an exam. Although there is no consensus among experts regarding the causes or interpretation of recurring dreams, many researchers believe that negative recurring dreams may be indicative of the presence of an unresolved conflict in the individual.

Despite their generally representational nature, dreams seem somehow odd or strange. Perhaps this is related to discontinuities in time and purpose. One may suddenly find oneself in a familiar auditorium viewing a fencing match rather than hearing a lecture and abruptly in the “next scene” walking beside a swimming pool. These sudden transitions contribute a feeling of strangeness, which is enhanced by the dreamer’s inability to recall the bulk of his dreams clearly, giving them a dim, mysterious quality.

Physiological dream research

A new era of dream research began in 1953 with the discovery that rapid eye movements during sleep seem often to signal that a person is dreaming. Researchers at the University of Chicago’s Sleep Research Laboratory observed that, about an hour after laboratory subjects fell asleep, they were apt to experience a burst of rapid eye movement (REM) under their closed lids, accompanied by a change in brain waves detected (by electroencephalography) as an electrical pattern resembling that of an alert waking person. When subjects were awakened during REM, they reported vivid dreams 20 out of 27 times; when roused during non-REM (NREM) sleep, they recalled dreams in only 4 of 23 instances. Subsequent systematic study confirmed this relationship between REM, activated brain waves (increased brain activity), and dream recall. Several thousand experimental studies utilizing these observable indexes of dreaming have since been conducted.
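
The recall figures in that finding are simple to check directly, and a standard two-proportion z-test (sketched below in Python) shows the REM/non-REM gap is far too large to be chance. The test is added here only as an illustration; just the 20-of-27 and 4-of-23 counts come from the study quoted above.

    import math

    rem_recall = 20 / 27    # vivid dream reported after REM awakenings (~74%)
    nrem_recall = 4 / 23    # dream reported after non-REM awakenings (~17%)

    # Two-proportion z-test for the difference between the recall rates.
    pooled = (20 + 4) / (27 + 23)
    se = math.sqrt(pooled * (1 - pooled) * (1 / 27 + 1 / 23))
    z = (rem_recall - nrem_recall) / se
    print(f"REM {rem_recall:.0%} vs non-REM {nrem_recall:.0%}, z = {z:.1f}")  # z ~ 4.0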

A major finding is that the usual report of a vivid, visual dream is primarily associated with REM and increased brain activity. On being aroused while exhibiting these signs, people recall dreams with visual imagery about 80 percent of the time. When awakened in the absence of them, however, people still report some kind of dream activity, though only about 30 to 50 percent of the time. In such cases they are apt to remember their sleep experiences as being relatively “thoughtlike” and realistic and as resembling the experiences of wakefulness.

D-state (desynchronized or dreaming) sleep has been reported for all mammals studied. It has been observed, for example, among monkeys, dogs, cats, rats, elephants, shrews, and opossums; these signs also have been reported in some birds and reptiles.

Surgical destruction of selected brain structures among laboratory animals has clearly demonstrated that the D-state depends on an area within the brain stem known as the pontine tegmentum (see pons). Evidence indicates that D-state sleep is associated with a mechanism involving a bodily chemical called norepinephrine; other stages of sleep seem to involve another chemical (serotonin) in the brain. Among other physiological changes found to be related to D-state sleep are increased variability in heart rate, increased activity in the respiratory system and sexual organs, and increases in blood pressure, accompanied by a near-complete relaxation of the skeletal muscles.

When people are chronically deprived of the opportunity to manifest D-state activity (by awakening them whenever there is EEG evidence of dreaming), it appears increasingly difficult to prevent them from dreaming. On recovery nights (after such deprivation), when the subject can sleep without interruption, there is a substantial increase in the number of reports of dreaming. This rebound effect continues in some degree on subsequent recovery nights, depending on how badly the person has been deprived.

During D-states in the last 6½ to 7½ hours of sleep, people are likely to wake by themselves about 40 percent of the time. This figure is about the same as that for dream recall, with subjects saying they had a dream the previous night about 35 percent of the time (roughly once every three or four nights). Evidence concerning the amount and kind of dreaming also depends on how rapidly one is roused and on the intensity of his effort to recall. Some people recall dreams more often than the average, while others rarely report them. These differences have nothing to do with the amount of D-state sleep. Evidence suggests instead that nonrecall reflects a tendency on the part of the individual to repress or to deny personal experiences.

The psychoanalytic literature is rich with reports indicating that what one dreams about reflects one’s needs as well as one’s immediate and remote past experience. Nevertheless, when someone in D-state sleep is stimulated (for example, by spoken words or by drops of water on the skin), the chances that the dreamer will report having dreamed about the stimulus (or anything like it) are quite low. Studies in which people have watched vivid movies before falling asleep indicate some possibility of influence on dreams, but such studies also emphasize the limitation of this influence. Highly suggestible people seem likely to dream as they are told to do while under hypnosis, but the influence of direct suggestion during ordinary wakefulness seems quite limited.

Variations within the usual quantity of D-state sleep (about 18 to 30 percent of D-state sleep in an average period of sleep) apparently are unrelated to differences in the amount or content of dreaming. The amount of D-state sleep seems independent of wide variations in the daily activities or personality characteristics of different people; groups of scientists, athletes, and artists, for example, cannot be distinguished from one another in terms of D-state activity. Such disorders as schizophrenia and intellectual disability appear to have no clearly discernible effect on the amount of time a person will spend in such REM-activated EEG sleep.

Dreamlike activities

Related states of awareness may be distinguished from the dream experiences typically reported; these include dreamlike states experienced as a person falls asleep and as he awakens, respectively called hypnagogic and hypnopompic reveries. During sleep itself there are nightmares, observable signs of sexual activity, and sleepwalking. Even people who ostensibly are awake may show evidence of such related phenomena as hallucinating, trance behaviour, and reactions to drugs.

Rapid eye movement is not characteristic of sleep onset; nevertheless, as people drift (as inferred from EEG activity) from wakefulness through drowsiness into sleep, they report dreamlike hypnagogic experiences about 90 percent of the time on being awakened. Most of these experiences (about 80 percent) are said to be visual. A person who awakens from drowsiness or at the onset of sleep will recall experiences that may be classified as dreams about 75 percent of the time. These “dreamlets” seem to differ from dream-associated REM sleep in being less emotional (neither pleasant nor unpleasant), more transient, and less elaborate. Such hypnagogic experiences seem to combine abstract thinking with recall of recent events (known in psychoanalytical terms as day residues). This is quite typical of falling asleep. Systematic studies remain to be made of the hypnopompic reveries commonly reported during mornings before full arousal, but it seems likely that they include recollections of the night’s dreams or represent one’s drifting back into transient REM sleep.

Extreme behavioral manifestations during sleep—night terrors, nightmares, sleepwalking, and enuresis (bedwetting)—have been found to be generally unrelated to ordinary dreaming. Night terrors are characterized by abrupt awakening, sometimes with a scream; a sleeping child may sit up in bed, apparently terror-stricken, with wide-open eyes and often with frozen posturing that may last several minutes. Afterward there typically is no recollection of dreamlike experience. Night terrors are observed in about 2 or 3 percent of children, and roughly half of the attacks occur between the ages of 4 and 7; about 10 percent of them are seen among youngsters as old as 12 to 14 years. Nightmares typically seem to be followed by awakening with feelings of suffocation and helplessness and expressions of fearful or threatening thoughts. Evidence of nightmares is observed for 5 to 10 percent of children, primarily about 8 to 10 years of age. Studies have suggested that signs of spontaneously generated night terrors and nightmares may be related to abrupt awakening from deep sleep that experimentally appears dreamless. This suggests that the vividly reported fears may well be produced by emotional disturbances that first occur on awakening.

Sleepwalking, observed in about 1 percent of children, predominantly appears between ages 11 and 14. Apparently sleeping individuals rise and walk from their beds, eyes open, usually avoiding obstacles, and later express no recollection of the episode. Studies of EEG data indicate that sleepwalking occurs only in deep sleep when dreams seem essentially absent; the behaviour remains to be reported for REM sleep. Enuresis occurs in about one-fourth of children over age four. These episodes seem not to be associated with REM as much as they do with deep sleep in the absence of D-state signs.

Nocturnal emission of sperm remains to be described in terms of any distinguishing EEG pattern; such events are extremely rare among sleeping laboratory subjects. Among a large sample of males who were interviewed about their sexual behaviour, about 85 percent reported having experienced emissions at some time in their lives, with typical frequency during the teens and 20s being about once a month. Of the females interviewed, 37 percent reported erotic dreams, sometimes with orgasm, averaging about three to four times a year. Most often, however, openly sexual dreams are said not to be accompanied by orgasm in either gender. Males usually could recall no dreams associated with emission, although most implicated erotic dreaming.

Dreamlike experiences induced by trances, delirium, or drug hallucination seem to stem from impairments to the central nervous system that lower the efficiency of processing sensory stimuli from the external environment. In such cases, apparently, one’s physiological activities begin to escape environmental constraint to the point that internalized, uncritical thinking and perceiving prevail.

Diverse views on the nature of dreams:

Dreams as reflecting reality

Philosophers have long noted the similarities between reality and dreaming and the logical difficulties of distinguishing in principle between the two. The English philosopher Bertrand Russell wrote, “It is obviously possible that what we call waking life may be only an unusual and persistent nightmare,” and he further stated that “I do not believe that I am now dreaming but I cannot prove I am not.” Philosophers have generally tried to resolve such questions by saying that so-called waking experience, unlike dreaming, seems vivid and coherent. As the French philosopher René Descartes put it, “Memory can never connect our dreams one with the other or with the whole course of our lives as it unites events which happen to us while we are awake.” Similarly, Russell stated, “Certain uniformities are observed in waking life, while dreams seem quite erratic.”

Dreams as a source of divination

There is an ancient belief that dreams predict the future; the Chester Beatty Papyrus is a record of Egyptian dream interpretations dating from the 12th dynasty (1991–1786 BCE). In Homer’s Iliad, Agamemnon is visited in a dream by a messenger of the god Zeus to prescribe his future actions. From India, a document called the Atharvaveda, dated to the 5th century BCE, contains a chapter on dream omens. A Babylonian dream guide was discovered in the ruins of the city of Nineveh among tablets from the library of the emperor Ashurbanipal (668–627 BCE). The Old Testament is rife with accounts of prophetic dreams, those of the pharaohs and of Joseph and Jacob being particularly striking. Among pre-Islamic peoples, dream divination so heavily influenced daily life that the practice was formally forbidden by Muhammad (570–632), the founder of Islam.

Ancient and religious literatures express the most confidence about so-called message dreams. Characteristically, a god or some other respected figure appears to the dreamer (typically a king, a hero, or a priest) in time of crisis and states a message. Such reports are found on ancient Sumerian and Egyptian monuments; frequent examples appear in the Bible. Joseph Smith (1805–44), the founder of Mormonism, said that an angel directed him to the location of buried golden tablets that described American Indians as descendants of the tribes of Israel.

Not all dream prophecies are so readily accepted. In Homer’s Odyssey, for example, dreams are classed as false (“passing through the Gate of Ivory”) and as true (“passing the Gate of Horn”). Furthermore, prophetic meaning may be attributed to dream symbolism. In the Bible, Joseph interpreted sheaves of grain and the Moon and stars as symbols of himself and his brethren. In general, the social status of dream interpreters varies; in cultures for which dreams loom important, their interpretation has often been an occupation of priests, elders, or medicine men.

An ancient book of dream interpretation, the Oneirocritica (from the Greek oneiros, “a dream”), was compiled by the 2nd-century soothsayer Artemidorus Daldianus. Contemporary studies cover dreams and dreaming from a number of perspectives, such as physiology, neuroscience, psychology, and interpretation.

Dreams as curative

So-called prophetic dreams in the Middle Eastern cultures of antiquity often were combined with other means of prophecy (such as animal sacrifice) and with efforts to heal the sick. In classical Greece, dreams became directly associated with healing. In a practice known as temple sleep, ailing people came to dream in oracular temples such as those of the Greek god of medicine, Asclepius; there, they performed rites or sacrifices in efforts to dream appropriately, and they then slept in wait of the appearance of the god (or his emissary, such as a priest), who would deliver a cure. Many stone monuments placed at the entrances of the temples survive to record dream cures. A practice similar to temple dreaming, known as dream incubation, is recorded in Babylon and Egypt.

Dreams as extensions of the waking state

Even in early human history, dreams were interpreted as reflections of waking experiences and of emotional needs. In his work Parva naturalia (On the Senses and Their Objects), the Greek philosopher Aristotle (384–322 BCE), despite the practice of divination and incubation among his contemporaries, attributed dreams to sensory impressions from “external objects…pauses within the body…eddies…of sensory movement often remaining like they were when they first started, but often too broken into other forms by collision with obstacles.” Anticipating work by the Austrian psychoanalyst Sigmund Freud (1856–1939), Aristotle wrote that sensory function is reduced in sleep, thereby favouring the susceptibility of dreams to emotional subjective distortions.

In spite of Aristotle’s unusually modern views and even after a devastating attack by the Roman statesman Marcus Tullius Cicero (106–43 BCE) in De divinatione (“On Divination”), the view that dreams have supernatural attributes was not again challenged on a serious level until the 1850s, with the classic work of the French scientist Alfred Maury, who studied thousands of reported recollections of dreams. Maury concluded that dreams arose from external stimuli, instantaneously accompanying such impressions as they acted upon the sleeping person. Citing a personal example, he wrote that part of his bed once fell on the back of his neck and woke him, leaving the memory of dreaming that he had been brought before a French revolutionary tribunal, questioned, condemned, led to the scaffold, and bound by the executioner, and that the guillotine blade had fallen.

The Scottish writer Robert Louis Stevenson said that much of his work was developed by “little people” in his dreams, and he specifically cited the Strange Case of Dr. Jekyll and Mr. Hyde (1886) in this context. The German chemist August Kekule von Stradonitz attributed his interpretation of the ring structure of the benzene molecule to his dream of a snake with its tail in its mouth. Otto Loewi, a German-born physician and pharmacologist, attributed to a dream his inspiration for an experiment with a frog’s nerve that helped him win the Nobel Prize for Physiology or Medicine in 1936. In all of these cases, the dreamers reported having thought about the same topics over considerable periods while they were awake.

Psychoanalytic interpretations

Among Freud’s earliest writings was The Interpretation of Dreams (1899), in which he insisted that dreams are “the royal road to knowledge of activities of the unconscious mind”—in other words, that dreams offer a means of understanding waking experience. He held this theory throughout his career, even mentioning it in his last published statement on dreams, printed about one year before his death. He also offered a theoretical explanation for the bizarre nature of dreams, invented a system for their interpretation, and elaborated on their curative potential.

Freud theorized that thinking during sleep tends to be primitive and regressive. Repressed wishes, particularly those associated with sex and hostility, were said to be released in dreams when the inhibitory demands of wakefulness diminished. The content of the dream was said to derive from such stimuli as urinary pressure in the bladder, traces of experiences from the previous day (day residues), and associated infantile memories. The specific dream details were called their manifest content; the presumably repressed wishes being expressed were called the latent content. Freud suggested that the dreamer kept himself from waking and avoided unpleasant awareness of repressed wishes by disguising them as bizarre manifest content in an effort called dreamwork. He held that impulses one fails to satisfy when awake are expressed in dreams as sensory images and scenes. In dreaming, Freud believed:

All of the linguistic instruments…of subtle thought are dropped…and abstract terms are taken back to the concrete.… The copious employment of symbols…for representing certain objects and processes is in harmony (with) the regression of the mental apparatus and the demands of censorship.

Freud submitted that one aspect of manifest content could come to represent a number of latent elements (and vice versa) through a process called condensation. Through a related process called displacement, emotional attitudes toward one object or person could be transferred in dreaming to another object or person, or might not appear in the dream at all. Freud further observed that when people wake and try to remember dreams, they may recall inaccurately, elaborating and rationalizing to give “the dream, a smooth facade,” or, by omission, to “display rents and cracks.” This waking activity he called “secondary revision” (or secondary elaboration).

In seeking the latent meaning of a dream, Freud advised the individual to associate freely about it. Dreams thus represented another source of free association in psychoanalysis. From listening to the associations, the analyst was supposed to determine what the dream represented, in part through an understanding of the personal needs of the dreamer. Using this information, the analyst could help the patient overcome inhibitions that were identified through dreamwork.

Unlike Freud, Carl Jung (1875–1961) did not view dreams as complementary to waking mental life with respect to specific instinctual impulses. Jung believed that dreams were instead compensatory, that they balanced whatever elements of character were underrepresented in the way people live their lives. Dreaming, to Jung, represented a continuous 24-hour flow of mental activity that would surface in sleep under conducive conditions but could also affect waking life when one’s behaviour denied important elements of one’s true personality.

In Jung’s view, then, dreams are constructed not to conceal or disguise forbidden wishes but to bring the underattended areas to attention. This function is carried out unconsciously in sleep when people are living emotionally well-balanced lives. When they are not, bad moods appear first, followed by symptoms in waking life. Then and only then do dreams need to be interpreted. This is best done not with a single dream and multiple free associations but with a series of dreams, so that the repetitive elements become apparent.

Conclusion

Since antiquity, dreams have been viewed as a source of divination, as a form of reality, as a curative force, and as an extension or adjunct of the waking state. Contemporary research focuses on efforts to discover and describe unique, complex biochemical and neurophysiological bases of dreaming. Psychoanalytic theorists emphasize the individual meaningfulness of dreams and their relation to personal hopes and fears. Other perspectives assert that dreams convey supernatural meaning, and some regard dreaming as nothing more than the normal activity of the nervous system. Such variety reflects the lack of any single, all-encompassing theory about the nature or purpose of dreams.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1768 2023-05-11 14:39:54

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,369

Re: Miscellany

1771) Renal replacement therapy

Summary

Continuous Renal Replacement Therapy

Continuous renal replacement therapy, or CRRT, is a non-stop, 24-hour dialysis therapy. It is used to help patients with acute kidney injury (AKI) and fluid overload. Your child may need CRRT if they look swollen or puffy or if their blood test shows they have high levels of waste products.

Details

Renal replacement therapy (RRT) is therapy that replaces the normal blood-filtering function of the kidneys. It is used when the kidneys are not working well, which is called kidney failure and includes acute kidney injury and chronic kidney disease. Renal replacement therapy includes dialysis (hemodialysis or peritoneal dialysis), hemofiltration, and hemodiafiltration, which are various ways of filtration of blood with or without machines. Renal replacement therapy also includes kidney transplantation, which is the ultimate form of replacement in that the old kidney is replaced by a donor kidney.

These treatments are not truly cures for kidney disease. In the context of chronic kidney disease, they are more accurately viewed as life-extending treatments, although if chronic kidney disease is managed well with dialysis and a compatible graft is found early and is successfully transplanted, the clinical course can be quite favorable, with life expectancy of many years. Likewise, in certain acute illnesses or trauma resulting in acute kidney injury, a person could very well survive for many years, with relatively good kidney function, before needing intervention again, as long as they had good response to dialysis, they got a kidney transplant fairly quickly if needed, their body did not reject the transplanted kidney, and they had no other significant health problems. Early dialysis (and, if indicated, early renal transplant) in acute kidney failure usually brings more favorable outcomes.

Types

Hemodialysis, hemofiltration, and hemodiafiltration can be continuous or intermittent and can use an arteriovenous route (in which blood leaves from an artery and returns via a vein) or a venovenous route (in which blood leaves from a vein and returns via a vein). This results in various types of RRT, as follows:

continuous renal replacement therapy (CRRT) — a form of dialysis therapy used in critical care settings. The benefit of CRRT for critically ill patients is that it runs slowly (generally over 24 hours to several days), allowing for removal of excess fluid and uremic toxins with less risk of hypotensive complications. Continuous modalities include:

* continuous hemodialysis (CHD)
* continuous arteriovenous hemodialysis (CAVHD)
* continuous venovenous hemodialysis (CVVHD)
* continuous hemofiltration (CHF)
* continuous arteriovenous hemofiltration (CAVH or CAVHF)
* continuous venovenous hemofiltration (CVVH or CVVHF)
* continuous hemodiafiltration (CHDF)
* continuous arteriovenous hemodiafiltration (CAVHDF)
* continuous venovenous hemodiafiltration (CVVHDF)
intermittent renal replacement therapy (IRRT) — dialysis delivered in sessions of limited duration rather than continuously. Intermittent modalities include:
* intermittent hemodialysis (IHD)
* intermittent venovenous hemodialysis (IVVHD)
* intermittent hemofiltration (IHF)
* intermittent venovenous hemofiltration (IVVH or IVVHF)
* intermittent hemodiafiltration (IHDF)
* intermittent venovenous hemodiafiltration (IVVHDF)

Additional Information

What is Continuous Renal Replacement Therapy (CRRT)?

CRRT stands for continuous renal replacement therapy.

CRRT is a blood purification procedure used to support kidney function in patients with acute kidney injury, sepsis-like syndromes, or multi-organ failure, particularly in patients who are hemodynamically unstable.

During CRRT, the patient’s blood passes through a blood purification machine, a filter, and a blood warmer. The therapy is a slow, continuous process that can run 24 hours a day, removing fluid and uremic toxins from the blood and returning the blood to the patient’s body.

This therapy helps patients with unstable heart rates and blood pressure to tolerate hemodialysis better.

CRRT is considered a life-saving and life-sustaining therapy.

Acute kidney injury (AKI)

Acute kidney injury (AKI), also referred to as acute renal failure (ARF) or sudden kidney failure, is a condition in which the kidneys lose their filtration capability. This causes waste products to accumulate and chemical imbalances to develop in the blood, such as increased levels of acid, fluid, phosphate, and potassium and decreased levels of calcium.

Acute kidney injury develops rapidly, over a few hours or a few days. It can result from uncontrolled diabetes, low blood pressure, organ failure, bleeding, severe diarrhea, heart attack, heart failure, overuse of pain medicines, severe allergic reactions, burns, trauma, major surgery, cancer, kidney stones, blood clots in the urinary tract, or direct damage to the kidneys from diseases and conditions such as sepsis, multiple myeloma, vasculitis, and interstitial nephritis.

The glomerular filtration rate (GFR) is the preferred diagnostic measure of the kidneys’ filtration rate in patients with acute kidney injury (AKI) or kidney failure.
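
For illustration only (this formula is not mentioned above), the Cockcroft-Gault equation is one widely used bedside estimate of creatinine clearance, a practical stand-in for GFR. A minimal Python sketch:

def cockcroft_gault(age_years, weight_kg, serum_creatinine_mg_dl, female):
    # Estimated creatinine clearance in mL/min, a rough proxy for GFR.
    crcl = (140 - age_years) * weight_kg / (72 * serum_creatinine_mg_dl)
    return crcl * 0.85 if female else crcl

# Example: a 60-year-old, 70 kg man with serum creatinine of 1.0 mg/dL
print(round(cockcroft_gault(60, 70, 1.0, female=False)))  # about 78 mL/min

In practice, AKI is diagnosed and staged from changes in serum creatinine and urine output over time, not from a single estimate like this.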

Types of CRRT

There are different methods of CRRT that are differentiated by their process of waste removal.

* Continuous Venovenous Hemofiltration (CVVH)
* Continuous Venovenous Hemodialysis (CVVHD)
* Continuous Venovenous Hemodiafiltration (CVVHDF)

Indications of CRRT

CRRT is indicated for acute renal failure complicated by any of the following; the indications are grouped into most common and less common:

Most common indications of CRRT

* Kidney failure with low blood pressure while on multiple life-supporting medications
* Kidney failure with expansion of extracellular fluid (ECF) volume (volume overload) along with severe heart failure
* Kidney failure with acute or chronic liver failure or cirrhosis
* Cerebral edema (brain swelling) along with kidney failure
* Hypercatabolism: an abnormally high metabolic breakdown of tissue that leads to physical deterioration and weight loss

Less common indications of CRRT

* Systemic inflammatory response syndrome (SIRS)
* Sepsis or septicemia
* Multiorgan failure syndrome
* Crush syndrome or traumatic rhabdomyolysis
* Tumor lysis syndrome (TLS)

Benefits of CRRT

Continuous renal replacement therapy (CRRT) is a preferred dialysis therapy for acute kidney failure patients and has many benefits, such as:

* Controls blood pressure
* Balances minerals and acid/base chemistry
* Maintains fluid balance
* Normalizes electrolyte levels
* Continuously removes excess fluid and uremic toxins
* Improves survival rates
* Improves the chances of complete recovery

Frequently asked questions:

Is CRRT safe?

The CRRT procedure is safe and effective for acute kidney injury patients, and it is usually well tolerated.

How long does CRRT take?

CRRT is a continuous blood-filtering process: it runs 24 hours a day and may continue for several days.

Is CRRT the same as hemodialysis?

CRRT is a type of hemodialysis. Routine hemodialysis is a procedure of limited duration (usually about 4 hours), whereas CRRT, as its name suggests, is continuous, and its technical aspects differ slightly from routine hemodialysis. CRRT can run continuously for days at a time.

What is the difference between CRRT and dialysis?

In hemodialysis, waste products are filtered rapidly, over a few hours. In CRRT, because of technical differences from routine hemodialysis, filtration happens slowly and can continue 24 hours a day, which leads to more stable blood pressure during the procedure.

What are the complications of CRRT?

CRRT can have the same types of complications as hemodialysis: low blood pressure, abnormal heart rhythms, allergic reactions, bleeding, blood clotting, hypothermia, fluid-balance errors, infections, and nutrient losses can all occur during CRRT.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1769 2023-05-12 15:16:40

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,369

Re: Miscellany

1772) Cruise ship

Gist

A cruise ship is a luxury vessel that is used to take passengers on a pleasure voyage in a journey that is as much a part of the experience as the various destinations on the way.

Summary

A cruise ship is a large ship used primarily for leisure cruising. While earlier cruises were usually undertaken on ships that had been built for a different primary purpose—such as mail delivery ships or ocean liners meant for transportation—modern cruise ships are essentially floating holiday resorts that feature entertainment, sports activities, and multiple restaurants.

Origins

The first cruise ships were actually mail delivery ships owned by the Peninsular and Oriental Steam Navigation Company (P&O). The company sold the first leisure cruise tickets on these ships in 1844, offering tourists a trip from London around the Mediterranean Sea. Over the years, P&O expanded its leisure offerings, and a handful of other ships began to offer cruises. Perhaps the most well-known mid-19th century cruise is the one taken by prominent American author Mark Twain aboard the wooden paddle steamer Quaker City in 1867, which was the first cruise to depart from North America.

The SS Ceylon was the first vessel repurposed to be a cruise ship. It was owned by the London-based ship brokerage Culliford & Clarke, which hoped to make cruising the centre of its business. The Ceylon was a single-screw, iron-hulled auxiliary steamer—meaning that it had both a steam engine and sails—and could hold up to 100 passengers. Its renovation had removed several dozen passenger cabins and replaced them with public spaces, including a dining room, a boudoir, a smoking room, and a steam-powered fairground organ. After its refit, the Ceylon offered the first around-the-world cruise in history. It departed from Liverpool in 1881 for its 10-month journey. However, it had difficulty attracting a full complement of guests, and the company failed to recover from the expense. In 1885 Culliford & Clarke went into liquidation.

Nonetheless, the idea of cruising had begun to catch on. The charter for the Ceylon was soon sold to the British Regent Street Polytechnic school, which added more berths and began offering affordable and educational leisure cruises to its mainly working-class students. Other organizations and naval companies began to offer cruises, using older ocean liners or ships that were not needed for other business.

Early history

The first cruise ship built solely for the purpose of leisure was produced by Albert Ballin of the German Hamburg-America Line company. Ballin spearheaded the development of the shipping company’s cruise offerings, culminating in the construction of the cruise ship Prinzessin Victoria Luise, a 407-foot (124-metre), 4,419-ton vessel with twin-screw engines. Ballin’s target market was people who were rich but not rich enough to own their own leisure yachts. The ship thus had 120 exclusively first-class cabins. It also had a gymnasium, a library, an art gallery, a ballroom, and a darkroom. The Victoria Luise was launched on June 29, 1900, and operated until December 1906, when it wrecked near Jamaica. Though all passengers survived, the ship was unsalvageable. The popularity of leisure cruising declined after the grounding of the Victoria Luise, the 1912 sinking of the Titanic, and the outbreak of World War I (1914–18), which saw the sinking of the Lusitania in 1915.

As countries began to recover from World War I, cruising slowly began to increase in popularity. In the United States, Prohibition (1920–33) drove the rise of affordable cruises. U.S. anti-alcohol law applied only as far as 3 nautical miles (5.6 km) from shore, and many shipping and ocean liner companies began to offer cruises outside this range. Some were simple “booze cruises,” in which the ships traveled 3.1 nautical miles (5.7 km) out to sea and then essentially floated around for a few days. Others were trips to islands in the Atlantic Ocean and Caribbean Sea, catering to passengers interested in a holiday that would include alcohol. The Great Depression and the repeal of Prohibition gradually ended booze cruises, although some leisure cruises remained.

In the 1930s the largest cruise operation in existence was run by the National Socialist Party of Germany, better known as the Nazi Party. The purpose of the cruise line was propaganda, as it allowed the party to advertise its apparent care for the German middle and working classes, who once on board were essentially a captive audience for indoctrination. The success of the operation led to the planning of the world’s first cruise fleet, although the war interrupted its construction.

Cruising came to a complete halt during World War II (1939–45), as all ships were repurposed for military uses. After the war, cruising slowly grew throughout the Atlantic Ocean. Mediterranean cruise companies were founded, and in the United States the business slowly expanded, though still using vessels with other primary commercial purposes.

Cruise ships post-World War II

The mid-century rise of cheap commercial flights across the Atlantic forced many naval companies to reorient toward cruising. Ocean liners, once essential to mass transportation, were largely repurposed as cruise ships. As older companies turned to cruising, many new companies exclusively focused on cruises were founded in the 1960s and ’70s. These included Princess Cruises in 1965, Norwegian Cruise Line in 1966, Royal Caribbean Cruises in 1968, MSC in 1970, and Carnival Cruise Lines in 1972. These companies offered prices that were increasingly affordable for middle-class consumers, driving the industry to expand and more purpose-built cruise ships to be commissioned.

The television show The Love Boat, which was set aboard a ship called the MS Pacific Princess, is generally credited with the explosion of the cruising industry in the 1980s. In 1987 Royal Caribbean launched the world’s first megaship, MS Sovereign of the Seas. It could hold almost 3,000 passengers and became the model for all modern cruise ships as floating resorts. The Sovereign had five restaurants, nine bars, a spa, four pools, and a casino.

From that point on, competition drove cruise companies to construct even bigger ships with more unique selling points. The largest cruise ships now can hold up to 7,000 passengers and include features such as planetariums, surf simulators, water parks, and much more.

Details

Cruise ships are large passenger ships used mainly for vacationing. Unlike ocean liners, which are used for transport, cruise ships typically embark on round-trip voyages to various ports-of-call, where passengers may go on tours known as "shore excursions". On "cruises to nowhere" or "nowhere voyages", cruise ships make two- to three-night round trips without visiting any ports of call.

Modern cruise ships tend to have less hull strength, speed, and agility compared to ocean liners. However, they have added amenities to cater to water tourists, with recent vessels being described as "balcony-laden floating condominiums".

As of December 2018, there were 314 cruise ships operating worldwide, with a combined capacity of 537,000 passengers. Cruising has become a major part of the tourism industry, with an estimated market of $29.4 billion per year, and over 19 million passengers carried worldwide annually as of 2011. The industry's rapid growth saw nine or more newly built ships catering to a North American clientele added every year since 2001, as well as others servicing European clientele until the COVID-19 pandemic in 2020 saw the entire industry all but shut down.

As of 2022, the world's largest passenger ship is Royal Caribbean's Wonder of the Seas.

Organization

Cruise ships are organized much like floating hotels, with a complete hospitality staff in addition to the usual ship's crew. It is not uncommon for the most luxurious ships to have more crew and staff than passengers.

Dining

Dining on almost all cruise ships is included in the cruise price. Traditionally, the ships' restaurants organize two dinner services per day, early dining and late dining, and passengers are allocated a set dining time for the entire cruise; a recent trend is to allow diners to dine whenever they want. Having two dinner times allows the ship to have enough time and space to accommodate all of their guests. Having two different dinner services can cause some conflicts with some of the ship's events (such as shows and performances) for the late diners, but this problem is usually fixed by having a shorter version of the event take place before late dinner. Cunard Line ships maintain the class tradition of ocean liners and have separate dining rooms for different types of suites, while Celebrity Cruises and Princess Cruises have a standard dining room and "upgrade" specialty restaurants that require pre-booking and cover charges. Many cruises schedule one or more "formal dining" nights. Guests dress "formally", however that is defined for the ship, often suits and ties or even tuxedos for men, and formal dresses for women. The menu is more upscale than usual.

Besides the dining room, modern cruise ships often contain one or more casual buffet-style eateries, which may be open 24 hours and with menus that vary throughout the day to provide meals ranging from breakfast to late-night snacks. In recent years, cruise lines have started to include a diverse range of ethnically themed restaurants aboard each ship. Ships also feature numerous bars and nightclubs for passenger entertainment; the majority of cruise lines do not include alcoholic beverages in their fares and passengers are expected to pay for drinks as they consume them. Most cruise lines also prohibit passengers from bringing aboard and consuming their own beverages, including alcohol, while aboard. Alcohol purchased duty-free is sealed and returned to passengers when they disembark.

There is often a central galley responsible for serving all major restaurants aboard the ship, though specialty restaurants may have their own separate galleys.

As with any vessel, adequate provisioning is crucial, especially on a cruise ship serving several thousand meals at each seating. For the Royal Princess, for example, a quasi "military operation" is required to load and unload 3,600 passengers and eight tons of food at the beginning and end of each cruise.

Other on-board facilities

Modern cruise ships typically have aboard some or all of the following facilities:

* Buffet restaurant
* Card room
* Casino — Only open when the ship is at sea to avoid conflict with local laws
* Child care facilities
* Cinema
* Clubs
* Fitness center
* Hot tub
* Indoor and/or outdoor swimming pool with water slides
* Infirmary and morgue
* Karaoke
* Library
* Lounges
* Observation lounge
* Ping pong tables
* Pool tables
* Shops — Only open when the ship is at sea to avoid merchandising licensing and local taxes
* Spa
* Teen Lounges
* Theatre with Broadway-style shows

Some ships have bowling alleys, ice skating rinks, rock climbing walls, sky-diving simulators, miniature golf courses, video arcades, ziplines, surfing simulators, water slides, basketball courts, tennis courts, chain restaurants, ropes obstacle courses, and even roller coasters.

Crew

Crew are usually hired on three to eleven month contracts which may then be renewed as mutually agreed, depending on service ratings from passengers as well as the cyclical nature of the cruise line operator. Most staff work 77-hour work weeks for 10 months continuously followed by two months of vacation.

There are no paid vacations or pensions for service (non-management) crew, although this depends on the level of the position and the type of contract. Non-service and management crew members get paid vacation, medical care, and retirement options, and can participate in the company's group insurance plan.

The direct salary is low by North American standards, though restaurant staff have considerable earning potential from passenger tips. Crew members do not have any expenses while on board, because food and accommodation, medical care, and transportation for most employees, are included. Oyogoa states that "Crewing agencies often exploit the desperation of potential employees."

Living arrangements vary by cruise line, but mostly by shipboard position. In general two employees share a cabin with a shower, commode and a desk with a television set, while senior officers are assigned single cabins. There is a set of facilities for the crew separate from that for passengers, such as mess rooms and bars, recreation rooms, prayer rooms/mosques, and fitness center, with some larger ships even having a crew deck with a swimming pool and hot tubs.

All crew members are required to bring their Standards of Training, Certification and Watchkeeping (STCW) certificates or to complete the training while on board. Crew members should consider completing this certification before embarking, since it is time-consuming and must otherwise be fitted around their daily work duties on board.

For the largest cruise operators, most "hotel staff" are hired from less industrialized countries in Asia, Eastern Europe, the Caribbean, and Central America. While several cruise lines are headquartered in the United States, their ships, like those of most international shipping companies, are registered in countries such as the Netherlands, the UK, the Bahamas, and Panama. The International Labour Organization's 2006 Maritime Labour Convention, also known as the "Seafarers' Bill of Rights," provides comprehensive rights and protections for all crew members. The ILO sets rigorous standards regarding hours of work and rest, health and safety, and living conditions for crew members, and requires governments to ensure that ships carrying their flags comply. For cruise routes around Hawaii, operators are required to register their ships in the United States and the crew is unionized, so these cruises are typically much more expensive than in the Caribbean or the Mediterranean.

Business model

Most cruise lines since the 2000s have to some extent priced the cruising experience à la carte, as passenger spending aboard generates significantly more than ticket sales. The passenger's ticket includes the stateroom accommodation, room service, unlimited meals in the main dining room (or main restaurant) and buffet, access to shows, and use of pool and gym facilities, while there is a daily gratuity charge to cover housekeeping and waiter service. However, there are extra charges for alcohol and soft drinks, official cruise photos, Internet and wi-fi access, and specialty restaurants. Cruise lines earn significantly from selling onshore excursions offered by local contractors; keeping 50% or more of what passengers spend for these tours. In addition, cruise ships earn significant commissions on sales from onshore stores that are promoted on board as "preferred" (as much as 40% of gross sales). Facilitating this practice are modern cruise terminals with establishments of duty-free shops inside a perimeter accessible only by passengers and not by locals. Ports of call have often oriented their own businesses and facilities towards meeting the needs of visiting cruise ships. In one case, Icy Strait Point in Alaska, the entire destination was created explicitly and solely for cruise ship visitors.
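
A rough sketch of this revenue model in Python (the passenger's ticket price and spending figures below are invented for illustration; only the 50% excursion share and the up-to-40% store commission come from the paragraph above):

EXCURSION_SHARE = 0.50    # cruise line keeps 50% or more of excursion sales
STORE_COMMISSION = 0.40   # up to 40% of gross sales at "preferred" shops

def operator_revenue(ticket, onboard_extras, excursion_sales, store_sales):
    # Ticket and onboard extras accrue fully to the line; shore spending
    # contributes only the line's share or commission.
    return (ticket + onboard_extras
            + EXCURSION_SHARE * excursion_sales
            + STORE_COMMISSION * store_sales)

# Hypothetical passenger: $1,000 ticket, $300 of drinks/photos/wi-fi,
# $200 of shore excursions, $500 spent in "preferred" onshore stores.
print(operator_revenue(1000, 300, 200, 500))  # 1600.0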

Travel to and from the port of departure is usually the passengers' responsibility, although purchasing a transfer pass from the cruise line for the trip between the airport and cruise terminal will guarantee that the ship will not leave until the passenger is aboard. Similarly, if the passenger books a shore excursion with the cruise line and the tour runs late, the ship is obliged to remain until the passenger returns.

Luxury cruise lines such as Regent Seven Seas Cruises and Crystal Cruises market their fares as "all-inclusive". For example, the base fare on Regent Seven Seas ships includes most alcoholic beverages on board ship and most shore excursions in ports of call, as well as all gratuities that would normally be paid to hotel staff on the ship. The fare may also include a one-night hotel stay before boarding, and the air fare to and from the cruise's origin and destination ports.

Many cruise lines have loyalty programs. Using these and by booking inexpensive tickets, some people have found it cheaper to live continuously on cruise ships instead of on land.

Cruise ship utilization

Cruise ships and former liners sometimes find use in applications other than those for which they were built. Due to their slower speed and reduced seaworthiness, as well as having been introduced largely after several major wars, cruise ships have generally not been used as troop transport vessels. By contrast, ocean liners were often seen as the pride of their country and used to rival the liners of other nations, and they were requisitioned during both World Wars and the Falklands War to transport soldiers and serve as hospital ships.

During the 1992 Summer Olympics, eleven cruise ships docked at the Port of Barcelona for an average of 18 days and served as floating hotels to help accommodate the large influx of visitors to the Games. They were available to sponsors and hosted 11,000 guests a day, making them the second largest concentration of Olympic accommodation behind the Olympic Village. This hosting solution has been used since then in Games held in coastal cities, such as Sydney 2000, Athens 2004, London 2012, Sochi 2014, and Rio 2016, and was going to be used at Tokyo 2020.

Cruise ships have been used to accommodate displaced persons during hurricanes. For example, on 1 September 2005, the U.S. Federal Emergency Management Agency (FEMA) contracted three Carnival Cruise Lines vessels (Carnival Fantasy, the former Carnival Holiday, and the Carnival Sensation) to house Hurricane Katrina evacuees. In 2017, cruise ships were used to help transport residents from some Caribbean islands destroyed by Hurricane Irma, as well as Puerto Rico residents displaced by Hurricane Maria.

Cruise ships have also been used for evacuations. In 2010, in response to the shutdown of UK airspace caused by the eruption of Iceland's Eyjafjallajökull volcano, the newly completed Celebrity Eclipse was used to rescue 2,000 British tourists stranded in Spain, as an act of goodwill by the owners. The ship departed from Southampton for Bilbao on 21 April and returned on 23 April. In 2020 a cruise ship was kept on standby in case the inhabitants of Kangaroo Island required evacuation after a series of fires burned on the island.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1770 2023-05-13 14:31:14

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,369

Re: Miscellany

1773) Runway

Gist

A paved or cleared strip on which planes land and take off.

Details

According to the International Civil Aviation Organization (ICAO), a runway is a "defined rectangular area on a land aerodrome prepared for the landing and takeoff of aircraft". Runways may be a man-made surface (often asphalt, concrete, or a mixture of both) or a natural surface (grass, dirt, gravel, ice, sand or salt). Runways, taxiways and ramps are sometimes referred to as "tarmac", though very few runways are built using tarmac. Takeoff and landing areas defined on the surface of water for seaplanes are generally referred to as waterways. Runway lengths are now commonly given in meters worldwide, except in North America where feet are commonly used.

History

In 1916, in a World War I war effort context, the first concrete-paved runway was built in Clermont-Ferrand in France, allowing local company Michelin to manufacture Bréguet Aviation military aircraft.

In January 1919, aviation pioneer Orville Wright underlined the need for "distinctly marked and carefully prepared landing places, but the preparing of the surface of reasonably flat ground is an expensive undertaking [and] there would also be a continuous expense for the upkeep."

Headings

For fixed-wing aircraft, it is advantageous to perform takeoffs and landings into the wind to reduce takeoff or landing roll and reduce the ground speed needed to attain flying speed. Larger airports usually have several runways in different directions, so that one can be selected that is most nearly aligned with the wind. Airports with one runway are often constructed to be aligned with the prevailing wind. Compiling a wind rose is in fact one of the preliminary steps taken in constructing airport runways. Wind direction is given as the direction the wind is coming from: a plane taking off from runway 09 faces east, into an "east wind" blowing from 090°.

Originally in the 1920s and 1930s, airports and air bases (particularly in the United Kingdom) were built in a triangle-like pattern of three runways at 60° angles to each other. The reason was that aviation was still in its infancy: although it was known that wind affects the runway distance required, little was known about wind behaviour. As a result, three runways were built in a triangle-like pattern, and the runway with the heaviest traffic would eventually expand into an airport's main runway, while the other two would be either abandoned or converted into taxiways. For example, Bristol Airport has only one runway—09/27 (9/27)—and two taxiways that form a 'V', which may have been runways on the original 1930s RAF Lulsgate Bottom airbase.

Naming

Runways are named by a number between 01 and 36, which is generally the magnetic azimuth of the runway's heading in decadegrees. This heading differs from true north by the local magnetic declination. A runway numbered 09 points east (90°), runway 18 is south (180°), runway 27 points west (270°) and runway 36 points to the north (360° rather than 0°). When taking off from or landing on runway 09, a plane is heading around 90° (east). A runway can normally be used in both directions, and is named for each direction separately: e.g., "runway 15" in one direction is "runway 33" when used in the other. The two numbers differ by 18 (= 180°). For clarity in radio communications, each digit in the runway name is pronounced individually: runway one-five, runway three-three, etc. (instead of "fifteen" or "thirty-three").
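
The naming arithmetic is easy to mechanize. A small illustrative Python sketch (not part of any aviation standard, just the rounding rule described above):

def runway_number(magnetic_heading):
    # Round the heading to the nearest 10 degrees; 360 is used instead of 0.
    n = round(magnetic_heading / 10) % 36
    return 36 if n == 0 else n

def reciprocal(number):
    # The opposite direction differs by 18 (that is, 180 degrees).
    r = (number + 18) % 36
    return 36 if r == 0 else r

for heading in (87, 154, 272, 356):
    n = runway_number(heading)
    print(f"heading {heading:03d} -> runway {n:02d} / {reciprocal(n):02d}")
# heading 087 -> runway 09 / 27, heading 154 -> runway 15 / 33, and so on.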

(Figure: FAA airport diagram of O'Hare International Airport. The two 14/32 runways run from upper left to lower right, the two 4/22 runways from lower left to upper right, and the two 9/27 and three 10/28 runways are horizontal.)

A leading zero, for example in "runway zero-six" or "runway zero-one-left", is included for all ICAO and some U.S. military airports (such as Edwards Air Force Base). However, most U.S. civil aviation airports drop the leading zero as required by FAA regulation. This also includes some military airfields such as Cairns Army Airfield. This American anomaly may lead to inconsistencies in conversations between American pilots and controllers in other countries. It is very common in a country such as Canada for a controller to clear an incoming American aircraft to, for example, runway 04, and for the pilot to read back the clearance as runway 4. Flight simulation programs of American origin might likewise apply U.S. usage to airports around the world. For example, runway 05 at Halifax will appear in the program as the single digit 5 rather than 05.

Military airbases may include smaller paved runways known as "assault strips" for practice and training next to larger primary runways. These strips eschew the standard numerical naming convention and instead employ the runway's full three digit heading; examples include Dobbins Air Reserve Base's Runway 110/290 and Duke Field's Runway 180/360.

Runways with non-hard surfaces, such as small turf airfields and waterways for seaplanes, may use the standard numerical scheme or may use traditional compass point naming, examples include Ketchikan Harbor Seaplane Base's Waterway E/W. Airports with unpredictable or chaotic water currents, such as Santa Catalina Island's Pebbly Beach Seaplane Base, may designate their landing area as Waterway ALL/WAY to denote the lack of designated landing direction.

Letter suffix

If there is more than one runway pointing in the same direction (parallel runways), each runway is identified by appending left (L), center (C) and right (R) to the end of the runway number to identify its position (when facing its direction)—for example, runways one-five-left (15L), one-five-center (15C), and one-five-right (15R). Runway zero-three-left (03L) becomes runway two-one-right (21R) when used in the opposite direction (derived from adding 18 to the original number for the 180° difference when approaching from the opposite direction). In some countries, regulations mandate that where parallel runways are too close to each other, only one may be used at a time under certain conditions (usually adverse weather).
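
Extending the sketch above, a hypothetical helper for parallel-runway designators: the number shifts by 18 and, as described, left and right swap while centre stays centre.

SUFFIX_SWAP = {"L": "R", "R": "L", "C": "C", "": ""}

def reciprocal_designator(designator):
    # e.g. "03L" -> "21R"; also works for bare numbers ("15" -> "33").
    number = int(designator[:2])
    suffix = designator[2:]
    r = (number + 18 - 1) % 36 + 1   # wrap within 1..36
    return f"{r:02d}{SUFFIX_SWAP[suffix]}"

print(reciprocal_designator("03L"))  # 21R
print(reciprocal_designator("15C"))  # 33C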

At large airports with four or more parallel runways (for example, at Chicago O'Hare, Los Angeles, Detroit Metropolitan Wayne County, Hartsfield-Jackson Atlanta, Denver, Dallas–Fort Worth and Orlando), some runway identifiers are shifted by 1 to avoid the ambiguity that would result with more than three parallel runways. For example, in Los Angeles, this system results in runways 6L, 6R, 7L, and 7R, even though all four runways are actually parallel at approximately 69°. At Dallas/Fort Worth International Airport, there are five parallel runways, named 17L, 17C, 17R, 18L, and 18R, all oriented at a heading of 175.4°. Occasionally, an airport with only three parallel runways may use different runway identifiers, such as when a third parallel runway was opened at Phoenix Sky Harbor International Airport in 2000 to the south of existing 8R/26L—rather than confusingly becoming the "new" 8R/26L it was instead designated 7R/25L, with the former 8R/26L becoming 7L/25R and 8L/26R becoming 8/26.

Suffixes may also be used to denote special use runways. Airports that have seaplane waterways may choose to denote the waterway on charts with the suffix W; such as Daniel K. Inouye International Airport in Honolulu and Lake Hood Seaplane Base in Anchorage. Small airports that host various forms of air traffic may employ additional suffixes to denote special runway types based on the type of aircraft expected to use them, including STOL aircraft (S), gliders (G), rotorcraft (H), and ultralights (U). Runways that are numbered relative to true north rather than magnetic north will use the suffix T; this is advantageous for certain airfields in the far north such as Thule Air Base.

Renumbering

Runway designations may change over time because Earth's magnetic field slowly drifts, carrying runways' magnetic headings with it. Depending on the airport location and how much drift occurs, it may be necessary to change the runway designation. As runways are designated with headings rounded to the nearest 10°, this affects some runways sooner than others. For example, if the magnetic heading of a runway is 233°, it is designated Runway 23. If the magnetic heading changes downwards by 5 degrees to 228°, the runway remains Runway 23. If, on the other hand, the original magnetic heading was 226° (Runway 23) and the heading decreased by only 2 degrees to 224°, the runway becomes Runway 22. Because magnetic drift itself is slow, runway designation changes are uncommon and unwelcome, as they require an accompanying change in aeronautical charts and descriptive documents. When a runway designation does change, especially at major airports, it is often done at night, because taxiway signs need to be changed and the numbers at each end of the runway need to be repainted with the new designators. In July 2009, for example, London Stansted Airport in the United Kingdom changed its runway designations from 05/23 to 04/22 during the night.
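
The examples in the paragraph above reduce to the same nearest-10-degrees rounding; a quick check in Python:

def designator(heading):
    # Nearest 10 degrees, reported as 1..36.
    n = round(heading / 10) % 36
    return 36 if n == 0 else n

for heading in (233, 228, 226, 224):
    print(f"{heading} degrees -> runway {designator(heading):02d}")
# 233 -> 23, 228 -> 23 (no change), 226 -> 23, 224 -> 22 (renumbered)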

Additional Information

There are many factors that determine if an aircraft can operate from a given airport. Of course, the availability of certain services, such as fuel, access to air stairs, and maintenance, is necessary. But before considering anything else, one must determine if the plane can physically land at an airport, and equally as important, take off.

Helicopters must think they’re so fancy. These privileged aircraft and their ability to land almost anywhere don’t impress me one bit. Well they actually do, a lot. But this blog will hopefully illustrate that the devil is in the details when it comes to fixed wing aircraft landing, and taking off. The details and dimensions of an airport's runway is critical, especially when it comes to arranging a private jet or charter flight into and out of a smaller airport.

Runway Length and Width

Looking at aerial views of runways can lead some to the assumption that they are all uniform, big and appropriate for any plane to land. This couldn't be further from the truth. A given aircraft type has its own individual set of requirements in regard to these dimensions. The classic 150' wide runway that can handle a wide-body plane for a large group charter flight isn't a guarantee at every airport. Knowing the width of available runways is important for a variety of reasons, including runway width illusions and crosswind conditions. Runways also have different approach categories based on width, and have universal threshold markings that indicate the actual width.

As for length, this one is a bit easier to put our heads around. In layman’s terms, a large metal bird on wheels needs a certain amount of space to take off, and to land safely. The weight of the aircraft comes into play here. The actual physics involved with landing and taking off are way over the head of this blogger, but what we are very familiar with is determining who and what can fly into where. 

ARFF Requirements

Once you’ve determined you have a plane that can both land and take off from a given runway based on the length and width, appropriate safety equipment will need to be considered.  Aircraft Rescue and Firefighting (ARFF) is a type of firefighting that involves the emergency response, mitigation, evacuation, and rescue of passengers and crew of aircraft involved in aviation accidents and incidents.

All airports with scheduled passenger flights require firefighting equipment in varying capacities. ARFF is broken into five different indexes (simply titled with the letters A through E), and depending on the index, between one and three firefighting vehicles are required. ARFF indexes also have varying requirements in regard to extinguishing agents. The ARFF index of a given airport will also help determine which aircraft can land. While a larger aircraft may be able to land based on the runway alone, if the airport doesn't have the appropriate ARFF certification, you may require a smaller aircraft or may need to hire additional services to accommodate the aircraft (which may be permitted at certain airports).

So what can take off and land from where?

A runway of at least 6,000 ft in length is usually adequate for aircraft weights below approximately 200,000 lb. Larger aircraft including wide-bodies will usually require at least 8,000 ft at sea level and somewhat more at higher altitude airports. International wide-body flights, which carry substantial amounts of fuel and are therefore heavier, may also have landing requirements of 10,000 ft or more and takeoff requirements of 13,000 ft.  The Boeing 747 is considered to have the longest takeoff distance of the more common aircraft types and has set the standard for runway lengths of larger international airports.
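
As a toy summary of these rules of thumb (a hypothetical helper encoding only the figures quoted above; real runway requirements come from aircraft performance charts and account for altitude, temperature, and wind):

def rough_runway_needed_ft(weight_lb, international_wide_body=False):
    # Rule-of-thumb figures from the text above, sea-level conditions.
    if international_wide_body:
        return 13_000          # heavy, fuel-laden takeoff requirement
    if weight_lb < 200_000:
        return 6_000
    return 8_000               # wide-bodies and other large aircraft

print(rough_runway_needed_ft(150_000))        # 6000
print(rough_runway_needed_ft(450_000))        # 8000
print(rough_runway_needed_ft(900_000, True))  # 13000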



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1771 2023-05-14 14:39:59

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,369

Re: Miscellany

1774) Chlorofluorocarbon

Gist

Chlorofluorocarbons (CFCs) are nontoxic, nonflammable chemicals containing atoms of carbon, chlorine, and fluorine. They are used in the manufacture of aerosol sprays, blowing agents for foams and packing materials, as solvents, and as refrigerants.

Summary

A chlorofluorocarbon (CFC) is any of several organic compounds composed of carbon, fluorine, and chlorine. When CFCs also contain hydrogen in place of one or more chlorines, they are called hydrochlorofluorocarbons, or HCFCs. CFCs are also called Freons, a trademark of the E.I. du Pont de Nemours & Company in Wilmington, Del. CFCs were originally developed as refrigerants during the 1930s. Some of these compounds, especially trichlorofluoromethane (CFC-11) and dichlorodifluoromethane (CFC-12), found use as aerosol-spray propellants, solvents, and foam-blowing agents. They are well suited for these and other applications because they are nontoxic and nonflammable and can be readily converted from a liquid to a gas and vice versa.

Their commercial and industrial value notwithstanding, CFCs were eventually discovered to pose a serious environmental threat. Studies, especially those of American chemists F. Sherwood Rowland and Mario Molina and Dutch chemist Paul Crutzen, indicated that CFCs, once released into the atmosphere, accumulate in the stratosphere, where they contribute to the depletion of the ozone layer. Stratospheric ozone shields life on Earth from the harmful effects of the Sun’s ultraviolet radiation; even a relatively small decrease in the stratospheric ozone concentration can result in an increased incidence of skin cancer in humans and genetic damage in many organisms. Ultraviolet radiation in the stratosphere causes the CFC molecules to dissociate, producing chlorine atoms and radicals (i.e., chlorodifluoromethyl radical; free radicals are species that contain one or more unpaired electrons).

Chemical equations

Ultraviolet light cleaves a carbon-chlorine bond in the CFC molecule; for CFC-12 the photolysis is:

CF2Cl2 + UV radiation → CF2Cl• + Cl•

The chlorine atoms then react with ozone in a catalytic cycle, whereby a single chlorine atom, regenerated at each pass, can cause the conversion of thousands of ozone molecules to oxygen:

Cl• + O3 → ClO• + O2
ClO• + O → Cl• + O2

Because of a growing concern over stratospheric ozone depletion and its attendant dangers, a ban was imposed on the use of CFCs in aerosol-spray dispensers in the late 1970s by the United States, Canada, and the Scandinavian countries. In 1990, 93 nations agreed, as part of the Montreal Protocol (established 1987), to end production of ozone-depleting chemicals by the end of the 20th century. By 1992 the list of participating countries had grown to 140, and the timetable for ending production of CFCs advanced to 1996. This goal has largely been met. HCFCs pose less of a risk than CFCs because they decompose more readily in the lower atmosphere; nevertheless, they too degrade the ozone layer and are scheduled to be phased out by 2030.

Details

Chlorofluorocarbons (CFCs) and hydrochlorofluorocarbons (HCFCs) are fully or partly halogenated hydrocarbons that contain carbon (C), hydrogen (H), chlorine (Cl), and fluorine (F), produced as volatile derivatives of methane, ethane, and propane.

The most common representative is dichlorodifluoromethane (R-12). R-12 is also commonly called Freon and is used as a refrigerant. Many CFCs have been widely used as refrigerants, propellants (in aerosol applications), and solvents. Because CFCs contribute to ozone depletion in the upper atmosphere, the manufacture of such compounds has been phased out under the Montreal Protocol, and they are being replaced with other products such as hydrofluorocarbons (HFCs) including R-410A and R-134a.

Structure, properties and production

As in simpler alkanes, the carbon atoms in CFCs bond with tetrahedral symmetry. Because the fluorine and chlorine atoms differ greatly in size and effective charge from hydrogen and from each other, the methane-derived CFCs deviate from perfect tetrahedral symmetry.

The physical properties of CFCs and HCFCs are tunable by changes in the number and identity of the halogen atoms. In general, they are volatile but less so than their parent alkanes. The decreased volatility is attributed to the molecular polarity induced by the halides, which induces intermolecular interactions. Thus, methane boils at −161 °C whereas the fluoromethanes boil between −51.7 (CF2H2) and −128 °C (CF4). The CFCs have still higher boiling points because the chloride is even more polarizable than fluoride. Because of their polarity, the CFCs are useful solvents, and their boiling points make them suitable as refrigerants. The CFCs are far less flammable than methane, in part because they contain fewer C-H bonds and in part because, in the case of the chlorides and bromides, the released halides quench the free radicals that sustain flames.

The densities of CFCs are higher than their corresponding alkanes. In general, the density of these compounds correlates with the number of chlorides.

Commercial development and use

During World War II, various chloroalkanes were in standard use in military aircraft, although these early halons suffered from excessive toxicity. Nevertheless, after the war they slowly became more common in civil aviation as well. In the 1960s, fluoroalkanes and bromofluoroalkanes became available and were quickly recognized as being highly effective fire-fighting materials. Much early research with Halon 1301 was conducted under the auspices of the US Armed Forces, while Halon 1211 was, initially, mainly developed in the UK. By the late 1960s they were standard in many applications where water and dry-powder extinguishers posed a threat of damage to the protected property, including computer rooms, telecommunications switches, laboratories, museums and art collections. Beginning with warships, in the 1970s, bromofluoroalkanes also progressively came to be associated with rapid knockdown of severe fires in confined spaces with minimal risk to personnel.

By the early 1980s, bromofluoroalkanes were in common use on aircraft, ships, and large vehicles as well as in computer facilities and galleries. However, concern was beginning to be expressed about the impact of chloroalkanes and bromoalkanes on the ozone layer. The Vienna Convention for the Protection of the Ozone Layer did not cover bromofluoroalkanes as it was thought, at the time, that emergency discharge of extinguishing systems was too small in volume to produce a significant impact, and too important to human safety for restriction.

Regulation

Since the late 1970s, the use of CFCs has been heavily regulated because of their destructive effects on the ozone layer. After the development of his electron capture detector, James Lovelock was the first to detect the widespread presence of CFCs in the air, finding a mole fraction of 60 ppt of CFC-11 over Ireland. In a self-funded research expedition ending in 1973, Lovelock went on to measure CFC-11 in both the Arctic and Antarctic, finding the presence of the gas in each of 50 air samples collected, and concluding that CFCs are not hazardous to the environment. The experiment did however provide the first useful data on the presence of CFCs in the atmosphere.

The damage caused by CFCs was discovered by Sherry Rowland and Mario Molina who, after hearing a lecture on the subject of Lovelock's work, embarked on research resulting in the first publication suggesting the connection in 1974. It turns out that one of CFCs' most attractive features—their low reactivity—is key to their most destructive effects. CFCs' lack of reactivity gives them a lifespan that can exceed 100 years, giving them time to diffuse into the upper stratosphere. Once in the stratosphere, the sun's ultraviolet radiation is strong enough to cause the homolytic cleavage of the C-Cl bond.

In 1976, under the Toxic Substances Control Act, the EPA banned commercial manufacturing and use of CFCs and aerosol propellants. This was later superseded by broader regulation by the EPA under the Clean Air Act to address stratospheric ozone depletion.

NASA projection of stratospheric ozone, in Dobson units, if chlorofluorocarbons had not been banned.
By 1987, in response to a dramatic seasonal depletion of the ozone layer over Antarctica, diplomats in Montreal forged a treaty, the Montreal Protocol, which called for drastic reductions in the production of CFCs. On 2 March 1989, 12 European Community nations agreed to ban the production of all CFCs by the end of the century. In 1990, diplomats met in London and voted to significantly strengthen the Montreal Protocol by calling for a complete elimination of CFCs by 2000. By 2010, CFCs should have been completely eliminated from developing countries as well.

Ozone-depleting gas trends

Because the only CFCs available to countries adhering to the treaty come from recycling, their prices have increased considerably. A worldwide end to production should also terminate the smuggling of this material. However, CFC smuggling remains a problem, as recognized by the United Nations Environment Programme (UNEP) in a 2006 report titled "Illegal Trade in Ozone Depleting Substances". UNEP estimates that between 16,000 and 38,000 tonnes of CFCs passed through the black market in the mid-1990s, and the report estimated that between 7,000 and 14,000 tonnes are smuggled annually into developing countries. Smuggling is concentrated in Asia: as of 2007, China, India and South Korea were found to account for around 70% of global CFC production, with South Korea banning CFC production in 2010. The report also examined possible reasons for continued smuggling: many products containing banned CFCs have long lifespans and remain in operation, and it is sometimes cheaper to keep such equipment running on smuggled CFCs than to replace it with a more ozone-friendly alternative. Additionally, CFC smuggling is not widely treated as a serious offence, so the perceived penalties for smuggling are low. In 2018, public attention was drawn to the discovery that, at an unknown location in East Asia, an estimated 13,000 metric tons of CFCs had been produced annually since about 2012 in violation of the protocol. While the eventual phaseout of CFCs is likely, efforts are being taken to stem these current non-compliance problems.

By the time of the Montreal Protocol, it was realised that deliberate and accidental discharges during system tests and maintenance accounted for substantially larger volumes than emergency discharges, and consequently halons were brought into the treaty, albeit with many exceptions.

Regulatory gap

While the production and consumption of CFCs are regulated under the Montreal Protocol, emissions from existing banks of CFCs are not regulated under the agreement. In 2002, there were an estimated 5,791 kilotons of CFCs in existing products such as refrigerators, air conditioners, aerosol cans and others. Approximately one-third of these CFCs are projected to be emitted over the next decade if action is not taken, posing a threat to both the ozone layer and the climate. A proportion of these CFCs can be safely captured and destroyed.
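
For a sense of scale, the two figures in this paragraph imply an average emission rate on the order of 200 kilotons per year. A quick check in Python (assuming "one-third over the next decade" means an even release over ten years):

    banks_kt = 5791           # estimated CFC banks in 2002 (kilotons, from the text)
    fraction_emitted = 1 / 3  # share projected to be emitted over the next decade
    years = 10

    emitted_kt = banks_kt * fraction_emitted
    print(f"~{emitted_kt:.0f} kt over the decade, "
          f"i.e. roughly {emitted_kt / years:.0f} kt per year")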

Regulation and DuPont

In 1978 the United States banned the use of CFCs such as Freon in aerosol cans, the beginning of a long series of regulatory actions against their use. The critical DuPont manufacturing patent for Freon ("Process for Fluorinating Halohydrocarbons", U.S. Patent #3258500) was set to expire in 1979. In conjunction with other industrial peers, DuPont formed a lobbying group, the "Alliance for Responsible CFC Policy", to combat regulation of ozone-depleting compounds. In 1986 DuPont, with new patents in hand, reversed its previous stance and publicly condemned CFCs. DuPont representatives appeared before the Montreal Protocol negotiations urging that CFCs be banned worldwide, and stated that their new HCFCs would meet the worldwide demand for refrigerants.

Phasing-out of CFCs

The use of certain chloroalkanes as solvents in large-scale applications, such as dry cleaning, has been phased out, for example by the IPPC directive on greenhouse gases in 1994 and by the volatile organic compounds (VOC) directive of the EU in 1997. Permitted chlorofluoroalkane uses are medicinal only.

Bromofluoroalkanes have been largely phased out, and the possession of equipment for their use has been prohibited in some countries, such as the Netherlands and Belgium, since 1 January 2004, based on the Montreal Protocol and guidelines of the European Union.

Production of new stocks ceased in most (probably all) countries in 1994. However, many countries still require aircraft to be fitted with halon fire suppression systems because no safe and completely satisfactory alternative has been discovered for this application, and there are a few other highly specialized uses. These remaining uses draw on recycled halon, managed through "halon banks" coordinated by the Halon Recycling Corporation, to ensure that discharge to the atmosphere occurs only in a genuine emergency and to conserve remaining stocks.

The interim replacements for CFCs are hydrochlorofluorocarbons (HCFCs), which deplete stratospheric ozone, but to a much lesser extent than CFCs. Ultimately, hydrofluorocarbons (HFCs) will replace HCFCs; unlike CFCs and HCFCs, HFCs have an ozone depletion potential (ODP) of 0. DuPont began producing hydrofluorocarbons as alternatives to Freon in the 1980s. These included Suva refrigerants and Dymel propellants. Natural refrigerants are climate-friendly solutions that are enjoying increasing support from large companies and governments interested in reducing global warming emissions from refrigeration and air conditioning.

Phasing-out of HFCs and HCFCs

Hydrofluorocarbons are included in the Kyoto Protocol and are regulated under the Kigali Amendment to the Montreal Protocol due to their very high Global Warming Potential and the recognition of halocarbon contributions to climate change.

On September 21, 2007, approximately 200 countries agreed to accelerate the elimination of hydrochlorofluorocarbons entirely by 2020 at a United Nations-sponsored summit in Montreal. Developing nations were given until 2030. Many nations, such as the United States and China, which had previously resisted such efforts, agreed to the accelerated phase-out schedule. India successfully phased out HCFCs by 2020.

Properly collecting, controlling, and destroying CFCs and HCFCs

While new production of these refrigerants has been banned, large volumes still exist in older systems and pose an ongoing threat to the environment. Preventing the release of these refrigerants has been ranked among the most effective single actions available for mitigating catastrophic climate change.

Development of alternatives for CFCs

Work on alternatives for chlorofluorocarbons in refrigerants began in the late 1970s after the first warnings of damage to stratospheric ozone were published.

The hydrochlorofluorocarbons (HCFCs) are less stable in the lower atmosphere, enabling them to break down before reaching the ozone layer. Nevertheless, a significant fraction of the HCFCs do break down in the stratosphere, and they have contributed to more chlorine buildup there than originally predicted. Later alternatives lacking chlorine, the hydrofluorocarbons (HFCs), have even shorter lifetimes in the lower atmosphere. One of these compounds, HFC-134a, was used in place of CFC-12 in automobile air conditioners. Hydrocarbon refrigerants (a propane/isobutane blend) were also used extensively in mobile air conditioning systems in Australia, the US and many other countries, as they have excellent thermodynamic properties and perform particularly well in high ambient temperatures. 1,1-Dichloro-1-fluoroethane (HCFC-141b) has replaced HFC-134a, due to its low ODP and GWP values. According to the Montreal Protocol, HCFC-141b is to be phased out completely and replaced with zero-ODP substances such as cyclopentane, HFOs, and HFC-345a before January 2020.

Among the natural refrigerants (along with ammonia and carbon dioxide), hydrocarbons have negligible environmental impacts; they are used worldwide in domestic and commercial refrigeration applications and are becoming available in new split-system air conditioners. Various other solvents and methods have replaced the use of CFCs in laboratory analytics.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1772 2023-05-15 13:49:47

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,369

Re: Miscellany

1775) Fasteners

Summary

Fasteners: In construction, fasteners are connectors between structural members. Bolted connections are used when it is necessary to fasten two elements tightly together, especially to resist shear and bending, as in column and beam connections. Threaded metal bolts are always used in conjunction with nuts. Another threaded fastener is the screw, which has countless applications, especially in wood construction. The wood screw carves a mating thread in the wood, ensuring a tight fit. Pins are used to keep two or more elements in alignment; since the pin is not threaded, it allows for rotational movement, as in machinery parts. Riveted connections, which resist shearing forces, were in wide use for steel construction before being replaced by welding. The rivet, visibly prominent on older steel bridges, is a metal pin fastener with one end flattened into a head by hammering it through a metal gusset plate. The common nail, less resistant to shear or pull-out forces, is useful for cabinet and finishing work, where stresses are minimal.

Details

A fastener (US English) or fastening (UK English) is a hardware device that mechanically joins or affixes two or more objects together. In general, fasteners are used to create non-permanent joints; that is, joints that can be removed or dismantled without damaging the joining components. Welding is an example of creating permanent joints. Steel fasteners are usually made of stainless steel, carbon steel, or alloy steel.

Other alternative methods of joining materials include crimping, welding, soldering, brazing, taping, gluing, cementing, or the use of other adhesives. Force may also be used, as with magnets, vacuum (like suction cups), or even friction (like sticky pads). Some types of woodworking joints make use of separate internal reinforcements, such as dowels or biscuits, which in a sense can be considered fasteners within the scope of the joint system, although on their own they are not general-purpose fasteners.

Furniture supplied in flat-pack form often uses cam dowels locked by cam locks, also known as conformat fasteners. Fasteners can also be used to close a container such as a bag, a box, or an envelope; or they may involve keeping together the sides of an opening of flexible material, attaching a lid to a container, etc. There are also special-purpose closing devices, e.g. a bread clip.

Items like a rope, string, wire, cable, chain, or plastic wrap may be used to mechanically join objects; but are not generally categorized as fasteners because they have additional common uses. Likewise, hinges and springs may join objects together, but are ordinarily not considered fasteners because their primary purpose is to allow articulation rather than rigid affixment.

Industry

In 2005, it was estimated that the United States fastener industry runs 350 manufacturing plants and employs 40,000 workers. The industry is strongly tied to the production of automobiles, aircraft, appliances, agricultural machinery, commercial construction, and infrastructure. More than 200 billion fasteners are used per year in the U.S., 26 billion of these by the automotive industry. The largest distributor of fasteners in North America is the Fastenal Company.

Materials

There are three major steels used in industrial fasteners: stainless steel, carbon steel, and alloy steel. The major grades used in stainless steel fasteners are the 200, 300, and 400 series. Titanium, aluminium, and various alloys are also common materials of construction for metal fasteners. In many cases, special coatings or platings may be applied to metal fasteners to improve their performance characteristics, for example by enhancing corrosion resistance. Common coatings/platings include zinc, chrome, and hot-dip galvanizing.

Applications

When selecting a fastener for industrial applications, it is important to consider a variety of factors. The threading, the applied load on the fastener, the stiffness of the fastener, and the number of fasteners needed should all be taken into account.

When choosing a fastener for a given application, it is important to know the specifics of that application to help select the proper material for the intended use. Factors that should be considered include the following (a toy scoring sketch follows the list):

* Accessibility
* Environment, including temperature, water exposure, and potentially corrosive elements
* Installation process
* Materials to be joined
* Reusability
* Weight restrictions
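
Purely as an illustration, these factors can be treated as a checklist and scored per candidate fastener. The sketch below is a toy Python example: the factor names come from the list above, while the class, the candidates, and the scoring rule are all invented for the example:

    from dataclasses import dataclass

    @dataclass
    class FastenerChoice:
        """One candidate fastener, checked against the factors listed above."""
        name: str
        accessible: bool            # accessibility
        corrosion_resistant: bool   # environment: temperature, water, corrosives
        simple_install: bool        # installation process
        material_compatible: bool   # materials to be joined
        reusable: bool              # reusability
        lightweight: bool           # weight restrictions

        def score(self) -> int:
            return sum((self.accessible, self.corrosion_resistant,
                        self.simple_install, self.material_compatible,
                        self.reusable, self.lightweight))

    # Hypothetical candidates for an outdoor wood-to-wood joint.
    candidates = [
        FastenerChoice("galvanized wood screw", True, True, True, True, True, True),
        FastenerChoice("plain steel nail", True, False, True, True, False, True),
    ]
    best = max(candidates, key=FastenerChoice.score)
    print(f"best match: {best.name} ({best.score()}/6 factors satisfied)")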

Automotive

Fasteners are an essential component of maintenance and repair in the automotive industry, as they secure exhaust systems, engine blocks, chassis components, and more. Further, fasteners hold trim pieces and exterior body panels in place. They can also be used to construct recreational vehicles such as boats or motorcycles.

Home improvement

Home improvement projects are made much easier with fasteners since they provide a secure hold without requiring special tools or skills. They can be used to attach furniture legs or hang curtains without drilling holes into walls or countersinking screws into furniture pieces. Fasteners are also effective for putting together cabinets and shelving units quickly and easily; all that is needed is to line up the two pieces with the fastener in place and then tighten it down with a screwdriver or wrench.

Industrial

In the industrial realm, fasteners can join wooden beams together or attach metal walls to concrete foundations in the construction industry, while in manufacturing processes they help assemble computers, robots, and other machines. With their strength and durability, these fasteners can withstand vibrations or harsh conditions.

Types

A threaded fastener has internal or external screw threads. The most common types are the screw, nut and bolt, possibly involving washers. Other, more specialized types of threaded fasteners include captive threaded fasteners, studs, threaded inserts, and threaded rods.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1773 2023-05-16 14:06:11

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,369

Re: Miscellany

1776) Paste (food)

Summary

A food paste is a semi-liquid colloidal suspension, emulsion, or aggregation used in food preparation or eaten directly as a spread. Pastes are often highly spicy or aromatic, are often prepared well in advance of actual usage, and are often made into a preserve for future use. Common pastes are some fruit preserves, curry pastes, and nut pastes. Purées are food pastes made from already cooked ingredients.

Some food pastes are considered to be condiments and are used directly, while others are made into sauces, which are more liquid than pastes. Ketchup and prepared mustard are pastes that are used both directly as condiments and as ingredients in sauces.

Many food pastes are an intermediary stage in the preparation of food. Perhaps the most notable such intermediary food paste is dough. A paste made of fat and flour, and often stock or milk, is an important intermediary used as the basis for a sauce or a binder for stuffing, whether called a beurre manié, a roux or a panada. Sago paste is an intermediary stage in the production of sago meal and sago flour from sago palms.

Food for babies and adults who have lost their teeth is often prepared as food pastes. Baby food is often very bland, while older adults often desire increased spiciness in their food pastes.

Preparation

Blenders, grinders, mortars and pestles, metates, and even chewing are used to reduce unprocessed food to a meal, a powder, or, when significant water is present in the original food, directly to a paste. If required, water, oil and other liquids are added to dry ingredients to make the paste. The resultant paste is often fermented or cooked to increase its longevity, and pastes are often steamed, baked or enclosed in pastry or bread dough to make them ready for consumption.

Preservation

Traditionally, salt, sugar, vinegar, citric acid and beneficial fermentation were all used to preserve food pastes. In modern times canning is used to preserve pastes in jars, bottles, tins and more recently in plastic bags and tubes.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1774 2023-05-17 13:25:19

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,369

Re: Miscellany

1777) Toothpaste

Toothpaste is a paste or gel dentifrice used with a toothbrush to clean and maintain the aesthetics and health of teeth. Toothpaste is used to promote oral hygiene: it is an abrasive that aids in removing dental plaque and food from the teeth, assists in suppressing halitosis, and delivers active ingredients (most commonly fluoride) to help prevent tooth decay (dental caries) and gum disease (gingivitis). Owing to differences in composition and fluoride content, not all toothpastes are equally effective in maintaining oral health. The decline of tooth decay during the 20th century has been attributed to the introduction and regular use of fluoride-containing toothpastes worldwide. Large amounts of swallowed toothpaste can be toxic. Common colors for toothpaste include white (sometimes with colored stripes or green tint) and blue.

Usefulness

Toothpastes are generally useful to maintain dental health. Toothpastes containing fluoride are effective at preventing tooth decay. Toothpastes may also help to control and remove plaque build-up, promoting healthy gums. A 2016 systematic review indicated that using toothpaste when brushing the teeth does not necessarily impact the level of plaque removal. However, the active ingredients in toothpastes are able to prevent dental diseases with regular use.

Ingredients:

Toothpastes are derived from a variety of components, the three main ones being abrasives, fluoride, and detergent.

Abrasives

Abrasives constitute 8–20% of a typical toothpaste. These insoluble particles are designed to help remove plaque from the teeth. The removal of plaque inhibits the accumulation of tartar (calculus), helping to minimize the risk of gum disease. Representative abrasives include particles of aluminum hydroxide (Al(OH)3), calcium carbonate (CaCO3), magnesium carbonate (MgCO3), sodium bicarbonate, various calcium hydrogen phosphates, various silicas and zeolites, and hydroxyapatite (Ca5(PO4)3OH).

Abrasives, like the dental polishing agents used in dentists' offices, also cause a small amount of enamel erosion, which is termed "polishing" action. After the Microbead-Free Waters Act of 2015, the use of microbeads in toothpaste was discontinued in the US; since then, the industry has shifted toward using FDA-approved "rinse-off" metallized-plastic glitter as a primary abrasive agent. Some brands contain powdered white mica, which acts as a mild abrasive and also adds a cosmetic glittery shimmer to the paste. The polishing of teeth removes stains from tooth surfaces, but has not been shown to improve dental health over and above the effects of the removal of plaque and calculus.

The abrasive effect of a toothpaste is indicated by its relative dentin abrasivity (RDA) value. Toothpastes with RDA values above 250 are potentially damaging to the surfaces of teeth; the American National Standards Institute and the American Dental Association consider toothpastes with an RDA below 250 to be safe and effective for a lifetime of use.

Fluorides

Fluoride in various forms is the most popular and effective active ingredient in toothpaste to prevent cavities. Fluoride is present in small amounts in plants, animals, and some natural water sources. The additional fluoride in toothpaste has beneficial effects on the formation of dental enamel and bones. Sodium fluoride (NaF) is the most common source of fluoride, but stannous fluoride (SnF2), and sodium monofluorophosphate (Na2PO3F) are also used. At similar fluoride concentrations, toothpastes containing stannous fluoride have been shown to be more effective than toothpastes containing sodium fluoride for reducing the incidence of dental caries and dental erosion, as well as reducing gingivitis. Some stannous fluoride-containing toothpastes also contain ingredients that allow for better stain and calculus removal. A systematic review revealed stabilised stannous fluoride-containing toothpastes had a positive effect on the reduction of plaque, gingivitis and staining, with a significant reduction in calculus and halitosis compared to other toothpastes. Furthermore, numerous clinical trials have shown gluconate chelated stannous fluoride toothpastes possess superior protection against dental erosion and dentine hypersensitivity compared to other fluoride-containing and fluoride-free toothpastes.

Much of the toothpaste sold in the United States has 1,000 to 1,100 parts per million fluoride. In European countries, such as the UK or Greece, the fluoride content is often higher; a sodium fluoride content of 0.312% w/w (1,450 ppm fluoride) or a stannous fluoride content of 0.454% w/w (1,100 ppm fluoride) is common. All of these concentrations are likely to prevent tooth decay, according to a 2019 Cochrane review. Concentrations below 1,000 ppm are not likely to be preventive, and the preventive effect increases with concentration. Clinical trials support the use of high-fluoride (5,000 ppm) dentifrices for the prevention of root caries in elderly adults: they reduce plaque accumulation, decrease the numbers of mutans streptococci and lactobacilli, and possibly promote calcium fluoride deposits to a higher degree than traditional fluoride-containing dentifrices.
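
The w/w percentages and the ppm figures above are related by simple arithmetic: multiply the compound's mass fraction by the mass fraction of fluorine in that compound. A short check in Python (the molar masses are standard values, not taken from the text):

    # Convert a compound's w/w percentage to ppm of fluoride ion.
    # Standard molar masses (g/mol): F = 19.00, Na = 22.99, Sn = 118.71.
    M_F, M_NA, M_SN = 19.00, 22.99, 118.71

    def fluoride_ppm(percent_w_w, fluorine_mass, compound_mass):
        return percent_w_w / 100 * (fluorine_mass / compound_mass) * 1_000_000

    # NaF at 0.312% w/w -> ~1,410 ppm fluoride; the ~1,450 ppm quoted above
    # corresponds to a slightly higher NaF loading of about 0.32% w/w.
    print(round(fluoride_ppm(0.312, M_F, M_NA + M_F)))          # NaF
    # SnF2 at 0.454% w/w -> ~1,100 ppm fluoride, matching the text.
    print(round(fluoride_ppm(0.454, 2 * M_F, M_SN + 2 * M_F)))  # SnF2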

Surfactants

Many, although not all, toothpastes contain sodium lauryl sulfate (SLS) or related surfactants (detergents). SLS is found in many other personal care products as well, such as shampoo, and is mainly a foaming agent, which enables uniform distribution of toothpaste, improving its cleansing power.

Other components:

Antibacterial agents

Triclosan, an antibacterial agent, is a common toothpaste ingredient in the United Kingdom. Triclosan or zinc chloride prevents gingivitis and, according to the American Dental Association, helps reduce tartar and bad breath. A 2006 review of clinical research concluded there was evidence for the effectiveness of 0.30% triclosan in reducing plaque and gingivitis. A 2013 Cochrane review found that triclosan achieved a 22% reduction in plaque and, in gingivitis, a 48% reduction in bleeding gums. However, there was insufficient evidence to show a difference in fighting periodontitis, and there was no evidence of any harmful effects associated with the use of triclosan toothpastes for more than 3 years. The evidence relating to plaque and gingivitis was considered to be of moderate quality, while that for periodontitis was of low quality. More recently, triclosan has been removed as an ingredient from well-known toothpaste formulations, which may be attributed to concerns about adverse effects associated with triclosan exposure. Triclosan use in cosmetics has been positively correlated with triclosan levels in human tissues, plasma and breast milk, and is considered to have potential neurotoxic effects. Long-term studies are needed to substantiate these concerns.

Chlorhexidine is another antimicrobial agent used in toothpastes; however, it is more commonly added to mouthwash products. Sodium laureth sulfate, a foaming agent, is a common toothpaste ingredient that also possesses some antimicrobial activity. There are also many commercial products on the market containing different essential oils, herbal ingredients (e.g. chamomile, neem, chitosan, Aloe vera), and natural or plant extracts (e.g. hinokitiol). These ingredients are claimed by the manufacturers to fight plaque, bad breath and gum disease. A 2020 systematic metareview found that herbal toothpastes are as effective as non-herbal toothpastes in reducing dental plaque at a short follow-up period (4 weeks). However, this evidence comes from low-quality studies.

The stannous (tin) ion, commonly added to toothpastes as stannous fluoride or stannous chloride, has been shown to have antibacterial effects in the mouth. Research has shown that stannous fluoride-containing toothpaste inhibits extracellular polysaccharide (EPS) production in a multispecies biofilm to a greater extent than sodium fluoride-containing toothpaste. This is thought to contribute to the reduction in plaque and gingivitis observed with stannous fluoride-containing toothpastes compared to other toothpastes, as evidenced through numerous clinical trials. In addition to its antibacterial properties, stabilised stannous fluoride toothpastes have been shown to protect against dental erosion and dentine hypersensitivity, making the stannous ion a multifunctional component in toothpaste formulations.

Flavorants

Toothpaste comes in a variety of colors and flavors intended to encourage use of the product. The three most common flavorants are peppermint, spearmint, and wintergreen. Toothpaste flavored with peppermint-anise oil is popular in the Mediterranean region. These flavors are provided by the respective oils, e.g. peppermint oil. More exotic flavors include anise (anethole), apricot, bubblegum, cinnamon, fennel, lavender, neem, ginger, vanilla, lemon, orange, and pine. Unflavored toothpastes also exist.

Remineralizing agents

Chemical repair (remineralization) of early tooth decay is promoted naturally by saliva. However, this process can be enhanced by various remineralization agents. Fluoride promotes remineralization but is limited by bioavailable calcium. Casein phosphopeptide-stabilised amorphous calcium phosphate (CPP-ACP) is a toothpaste ingredient containing bioavailable calcium that has been widely researched and is regarded as the most clinically effective remineralization agent, enhancing the action of saliva and fluoride. Peptide-based systems, hydroxyapatite nanocrystals and a variety of calcium phosphates have been advocated as remineralization agents; however, more clinical evidence is required to substantiate their effectiveness.

Miscellaneous components

Agents are added to suppress the tendency of toothpaste to dry into a powder. These include various sugar alcohols, such as glycerol, sorbitol, or xylitol, and related derivatives, such as 1,2-propylene glycol and polyethylene glycol. Strontium chloride or potassium nitrate is included in some toothpastes to reduce sensitivity. Two systematic meta-analyses reported that toothpastes containing arginine and calcium sodium phosphosilicate (CSPS), respectively, are also effective in alleviating dentinal hypersensitivity. Another randomized clinical trial found superior effects when both formulas were combined.

Sodium polyphosphate is added to minimize the formation of tartar.

Chlorhexidine mouthwash has been popular for its positive effect on controlling plaque and gingivitis; however, a systematic review of chlorhexidine toothpastes found insufficient evidence to support their use, and tooth surface discoloration was observed as a side effect, a drawback that can affect patients' compliance.

Sodium hydroxide, also known as lye or caustic soda, is listed as an inactive ingredient in some toothpaste, for example Colgate Total.

Xylitol

A systematic review reported that two out of ten studies, by the same authors on the same population, showed that toothpastes with xylitol as an ingredient were more effective at preventing dental caries in the permanent teeth of children than toothpastes containing fluoride alone. Furthermore, xylitol has not been found to cause any harmful effects. However, further investigation into the efficacy of toothpastes containing xylitol is required, as the currently available studies are of low quality and at high risk of bias.

Safety:

Fluoride

Fluoride-containing toothpaste can be acutely toxic if swallowed in large amounts, but instances are exceedingly rare and result from prolonged and excessive use of toothpaste (i.e. several tubes per week). Approximately 15 mg/kg body weight is the acute lethal dose, although an amount as small as 5 mg/kg may be fatal to some children.
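
Put in concrete terms, those thresholds correspond to implausibly large quantities of paste. A rough Python calculation; the dose figures come from the text above, while the 10 kg body weight and the 1,450 ppm paste strength are illustrative assumptions:

    # ppm of fluoride = mg of fluoride per kg of paste.
    FLUORIDE_PPM = 1450   # assumed paste strength (typical full-strength paste)
    BODY_WEIGHT_KG = 10   # illustrative small child

    for label, dose_mg_per_kg in (("possibly fatal", 5), ("acute lethal", 15)):
        dose_mg = dose_mg_per_kg * BODY_WEIGHT_KG
        paste_g = dose_mg * 1000 / FLUORIDE_PPM
        print(f"{label}: {dose_mg} mg fluoride ≈ {paste_g:.0f} g of paste")

On these assumptions, even the lower threshold corresponds to swallowing roughly a third of a large tube in one sitting.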

The risk of using fluoride is low enough that the use of full-strength toothpaste (1,350–1,500 ppm fluoride) is advised for all ages, although smaller volumes are used for young children, for example a smear of toothpaste until three years of age. A major concern is dental fluorosis in children under 12 months who ingest excessive fluoride through toothpaste. Nausea and vomiting are also problems which might arise with topical fluoride ingestion.

Diethylene glycol

The inclusion of sweet-tasting but toxic diethylene glycol in Chinese-made toothpaste led to a 2007 recall involving multiple toothpaste brands in several nations. The worldwide outcry led Chinese officials to ban the practice of using diethylene glycol in toothpaste.

Triclosan

Reports have suggested that triclosan, an active ingredient in many toothpastes, can combine with chlorine in tap water to form chloroform, which the United States Environmental Protection Agency classifies as a probable human carcinogen. An animal study revealed that the chemical might modify hormone regulation, and other laboratory research has indicated that bacteria might be able to develop resistance to triclosan in a way that also helps them resist antibiotics.

Polyethylene glycol - PEG

PEG is a common ingredient in some toothpaste formulas; it is a hydrophilic polymer that acts as a dispersant. It is also used in many cosmetic and pharmaceutical formulations, for example ointments, osmotic laxatives, some nonsteroidal anti-inflammatory drugs, other medications, and household products. However, 37 cases of PEG hypersensitivity (delayed and immediate) to PEG-containing substances have been reported since 1977, suggesting that PEGs have unrecognized allergenic potential.

Miscellaneous issues and debates

With the exception of toothpaste intended for pets such as dogs and cats, and toothpaste used by astronauts, most toothpaste is not intended to be swallowed, and doing so may cause nausea or diarrhea. The value of tartar-fighting toothpastes has been debated. Sodium lauryl sulfate (SLS) has been proposed to increase the frequency of mouth ulcers in some people, as it can dry out the protective layer of oral tissues, causing the underlying tissues to become damaged. In studies conducted by the University of Oslo on recurrent aphthous ulcers, it was found that SLS has a denaturing effect on the oral mucin layer, with high affinity for proteins, thereby increasing epithelial permeability. In a double-blind crossover study, a significantly higher frequency of aphthous ulcers was demonstrated when patients brushed with an SLS-containing versus a detergent-free toothpaste. Patients with oral lichen planus also benefited from avoiding SLS-containing toothpaste.

Alteration of taste perception

After using toothpaste, orange juice and other fruit juices are known to have an unpleasant taste if consumed shortly afterwards. Sodium lauryl sulfate, used as a surfactant in toothpaste, alters taste perception: it can break down phospholipids that inhibit taste receptors for sweetness, giving food a bitter taste. In contrast, apples are known to taste more pleasant after using toothpaste. Whether the bitter taste of orange juice results from stannous fluoride or from sodium lauryl sulfate remains unresolved, and it is thought that the menthol added for flavor may also play a part in the alteration of taste perception when binding to lingual cold receptors.

Whitening toothpastes

Many toothpastes make whitening claims. Some of these toothpastes contain peroxide, the same ingredient found in tooth bleaching gels, but it is the abrasive in these toothpastes, not the peroxide, that removes the stains. Whitening toothpaste cannot alter the natural color of teeth or reverse discoloration that penetrates below the surface or is caused by decay. To remove surface stains, whitening toothpaste may include abrasives to gently polish the teeth or additives such as sodium tripolyphosphate to break down or dissolve stains. When used twice a day, whitening toothpaste typically takes two to four weeks to make teeth appear whiter. Whitening toothpaste is generally safe for daily use, but excessive use might damage tooth enamel. Teeth whitening gels represent an alternative. A 2017 systematic review concluded that nearly all dentifrices specifically formulated for tooth whitening have a beneficial effect in reducing extrinsic stains, irrespective of whether or not a chemical discoloration agent was added. However, the whitening process can permanently reduce the strength of the teeth, as it scrapes away a protective outer layer of enamel.

Herbal and natural toothpastes

Companies such as Tom's of Maine, among others, manufacture natural and herbal toothpastes and market them to consumers who wish to avoid the artificial ingredients commonly found in regular toothpastes. Many herbal toothpastes do not contain fluoride or sodium lauryl sulfate. The ingredients found in natural toothpastes vary widely but often include baking soda, aloe, eucalyptus oil, myrrh, camomile, calendula, neem, toothbrush tree, plant extracts (such as strawberry extract), and essential oils. A 2014 systematic review found insufficient evidence to determine whether aloe vera herbal dentifrice can reduce plaque or improve gingival health, as the randomized studies examined were flawed, with a high risk of bias.

According to a study by the Delhi Institute of Pharmaceutical Sciences and Research, many of the herbal toothpastes being sold in India were adulterated with nicotine.

Charcoal has also been incorporated into toothpaste formulas; however, there is no evidence to determine its safety and effectiveness. A 2020 systematic metareview of 24 comparative randomised controlled trials, involving 1,597 adults aged 18 to 65, showed herbal toothpaste to be superior to non-herbal toothpaste, but not to fluoride toothpaste.

Government regulation

In the United States, toothpaste is regulated by the U.S. Food and Drug Administration as a cosmetic, except for ingredients with a medical purpose, such as fluoride, which are regulated as drugs. Drugs require scientific studies and FDA approval in order to be legally marketed in the United States, whereas cosmetic ingredients do not require pre-approval, except for color additives. The FDA does have labelling requirements and bans certain ingredients.

Striped toothpaste

Striped toothpaste was invented by Leonard Marraffino in 1955. The patent (US patent 2,789,731, issued 1957) was subsequently sold to Unilever, who marketed the novelty under the Stripe brand-name in the early 1960s. This was followed by the introduction of the Signal brand in Europe in 1965 (UK patent 813,514). Although Stripe was initially very successful, it never again achieved the 8% market share that it cornered during its second year.

Marraffino's design, which remains in use for single-color stripes, is simple. The main material, usually white, sits at the crimp end of the toothpaste tube and makes up most of its bulk. A thin pipe, through which that carrier material will flow, descends from the nozzle to it. The stripe material (red in the original Stripe) fills the gap between the carrier material and the top of the tube. The two materials are not in separate compartments, but they are sufficiently viscous that they will not mix. When pressure is applied to the toothpaste tube, the main material squeezes down the thin pipe to the nozzle. Simultaneously, some of that pressure is forwarded to the stripe material, which thereby issues out through small holes in the side of the pipe onto the main carrier material as it passes those holes.

In 1990, Colgate-Palmolive was granted a patent (USPTO 4,969,767) for two differently colored stripes. In this scheme, the inner pipe has a cone-shaped plastic guard around it, positioned about halfway up its length. Between the guard and the nozzle end of the tube is a space for the material for one color, which issues out of holes in the pipe. On the other side of the guard is space for the second stripe material, which has its own set of holes.

In 2016, Colgate-Palmolive was granted a patent (USPTO U.S. Patent 20,160,228,347) for suitable sorts of differently colored toothpastes to be filled directly into tubes to produce a striped mix without any separate compartments. This required adjustment of the different components' flow behavior (rheology) so that stripes are produced when the tube is squeezed.

Striped toothpaste should not be confused with layered toothpaste. Layered toothpaste requires a multi-chamber design (e.g. USPTO 5,020,694), in which two or three layers extrude out of the nozzle. This scheme, like that of pump dispensers (USPTO 4,461,403), is more complicated (and thus, more expensive to manufacture) than either the Marraffino design or the Colgate designs.

The iconic depiction of a wave-shaped blob of toothpaste sitting on a toothbrush is called a “nurdle”.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1775 2023-05-18 14:30:43

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,369

Re: Miscellany

1778) Apartment

Summary

An apartment house, also called an apartment block or a block of flats, is a building containing more than one dwelling unit, most of which are designed for domestic use, but sometimes including shops and other nonresidential features.

Apartment buildings have existed for centuries. In the great cities of the Roman Empire, because of urban congestion, the individual house, or domus, had given way in early imperial times to the communal dwelling, or insula, except for the residences of the very wealthy. Four stories were common, and six-, seven-, or eight-story buildings were occasionally constructed. Another type of apartment existed in Europe in the Middle Ages, consisting of a great house or mansion, part of which was subdivided into smaller sets of rooms in order to house the servants and other retainers of an important person. In contrast to these “apartments,” which were simply personal suites within great houses, the apartment house as it is known today first appeared in Paris and other large European cities in the 18th century, when tall blocks of flats for middle-class tenants began appearing. In the typical Parisian apartment building, the size of the apartments (and the financial means of the tenants) decreased with each successive story in a four- or five-story building.

By the mid-19th century, large numbers of inexpensive apartment houses were under construction to house swelling numbers of industrial labourers in cities and towns across Europe and in the United States. These buildings were often incredibly shabby, poorly designed, unsanitary, and cramped. The typical New York City apartment, or tenement, a type first constructed in the 1830s, consisted of apartments popularly known as railroad flats because the narrow rooms were arranged end-to-end in a row like boxcars. Indeed, few low-cost apartment buildings erected in Europe or America before 1918 were designed for either comfort or style. In many European cities, however, particularly in Paris and Vienna, the second half of the 19th century witnessed great progress in the design of apartments for the upper-middle class and the rich.

The modern large apartment building emerged in the early 20th century with the incorporation of elevators, central heating, and other conveniences that could be shared in common by a building’s tenants. Apartments for the well-to-do began to offer other amenities such as leisure facilities, delivery and laundry services, and communal dining rooms and gardens. The multistory apartment house continued to grow in importance as crowding and rising land values in cities made one-family homes less and less practicable in parts of many cities. Much government-subsidized, or public, housing has taken the form of apartment buildings, particularly for the urban elderly and working classes or those living in poverty. Apartment-block towers also were erected in large numbers in the Soviet Union and other countries where housing construction was the responsibility of the state.

Since World War II the demand for apartment housing has continued to grow as a result of continued urbanization. The mid- or high-rise apartment complex has become a fixture of the skylines of most of the world’s cities, and the two- or three-story “walk-up” apartment also remains popular in somewhat less built-up urban areas.

The most common form of occupancy of apartment houses has been on a rental basis. However, multiple ownership of units on a single site has become much more common in the 20th century. Such ownership can take the form of cooperatives or condominiums. In a cooperative, all the occupants of a building own the structure in common; cooperative housing is much more common in parts of Europe than it is in the United States. A condominium denotes the individual ownership of one dwelling unit in an apartment house or other multidwelling building. The increasing popularity of condominiums in the United States and elsewhere is based largely on the fact that, unlike members of a cooperative, condominium owners are not financially interdependent and can mortgage their property.

Details

An apartment (American English), flat (British English, Indian English, South African English), or unit (Australian English) is a self-contained housing unit (a type of residential real estate) that occupies part of a building, generally on a single storey. There are many names for the buildings that contain them (see below). The housing tenure of apartments also varies considerably, from large-scale public housing, to owner occupancy within what is legally a condominium (strata title or commonhold), to tenants renting from a private landlord.

Terminology

The term apartment is favoured in North America (although in some Canadian cities, flat is used for a unit which is part of a house containing two or three units, typically one to a floor). In the UK, the term apartment is more usual in professional real estate and architectural circles where otherwise the term flat is used commonly, but not exclusively, for an apartment on a single level (hence a 'flat' apartment).

In some countries, the word "unit" is a more general term referring to both apartments and rental business suites. The word 'unit' is generally used only in the context of a specific building.

"Mixed-use buildings" combine commercial and residential uses within the same structure. Typically, mixed-use buildings consist of businesses on the lower floors (often retail in street-facing ground floor and supporting subterranean levels) and residential apartments on the upper floors.

By housing tenure

Tenement law refers to the feudal basis of permanent property such as land or rents. It may be found combined as in "Messuage or Tenement" to encompass all the land, buildings and other assets of a property.

In the United States, some apartment-dwellers own their units, either as a housing cooperative, in which the residents own shares of a corporation that owns the building or development; or in a condominium, whose residents own their apartments and share ownership of the public spaces. Most apartments are in buildings designed for the purpose, but large older houses are sometimes divided into apartments. The word apartment denotes a residential unit or section in a building. In some locations, particularly the United States, the word connotes a rental unit owned by the building owner, and is not typically used for a condominium.

In England and Wales, some flat owners own shares in the company that owns the freehold of the building as well as holding the flat under a lease. This arrangement is commonly known as a "share of freehold" flat. The freehold company has the right to collect annual ground rents from each of the flat owners in the building. The freeholder can also develop or sell the building, subject to the usual planning and restrictions that might apply. This situation does not happen in Scotland, where long leasehold of residential property was formerly unusual, and is now impossible.

By size of the building

Apartment buildings are multi-story buildings where three or more residences are contained within one structure. Such a building may be called an apartment building, apartment complex, flat complex, block of flats, tower block, high-rise or, occasionally, mansion block (in British English), especially if it consists of many apartments for rent. A high-rise apartment building is commonly referred to as a residential tower, apartment tower, or block of flats in Australia.

A high-rise building is defined by its height differently in various jurisdictions. It may be only residential, in which case it might also be called a tower block, or it might include other functions such as hotels, offices, or shops. There is no clear difference between a tower block and a skyscraper, although a building with fifty or more stories is generally considered a skyscraper. High-rise buildings became possible with the invention of the elevator (lift) and cheaper, more abundant building materials. Their structural system usually is made of reinforced concrete and steel.

Low-rise and mid-rise buildings have fewer storeys, but the limits are not always clear. Emporis defines a low-rise as "an enclosed structure below 35 metres [115 feet] which is divided into regular floor levels." The city of Toronto defines a mid-rise as a building between 4 and 12 storeys.
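
Taken together, the thresholds in the last two paragraphs can be written down as a toy classifier. The cut-offs below are the ones quoted (Emporis's 35 m low-rise bound, Toronto's 4–12 storey mid-rise band, and the informal 50-storey skyscraper rule); combining them into a single scheme is a simplification for illustration only:

    def classify_building(storeys, height_m):
        """Toy classifier combining the thresholds quoted above."""
        if storeys >= 50:                  # informal skyscraper rule of thumb
            return "skyscraper"
        if height_m < 35 and storeys < 4:  # Emporis low-rise bound (below 35 m)
            return "low-rise"
        if 4 <= storeys <= 12:             # Toronto's mid-rise definition
            return "mid-rise"
        return "high-rise"

    for storeys, height in ((3, 10.0), (8, 28.0), (20, 70.0), (55, 200.0)):
        print(storeys, "storeys /", height, "m ->", classify_building(storeys, height))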

By country

In American English, the distinction between rental apartments and condominiums is that while rental buildings are owned by a single entity and rented out to many, condominiums are owned individually, while their owners still pay a monthly or yearly fee for building upkeep. Condominiums are often leased by their owner as rental apartments. A third alternative, the cooperative apartment building (or "co-op"), acts as a corporation with all of the tenants as shareholders of the building. Tenants in cooperative buildings do not own their apartment, but instead own a proportional number of shares of the entire cooperative. As in condominiums, cooperators pay a monthly fee for building upkeep. Co-ops are common in cities such as New York, and have gained some popularity in other larger urban areas in the U.S.

In British English the usual word is "flat", but apartment is used by property developers to denote expensive 'flats' in exclusive and expensive residential areas in, for example, parts of London such as Belgravia and Hampstead. In Scotland, it is called a block of flats or, if it is a traditional sandstone building, a tenement, a term which has a negative connotation elsewhere.

In India, the word flat is used to refer to multi-storey dwellings that have lifts.

Australian English and New Zealand English traditionally used the term flat (although it also applies to any rental property), and more recently also use the terms unit or apartment. In Australia, a 'unit' refers to flats, apartments or even semi-detached houses. In Australia, the terms "unit", "flat" and "apartment" are largely used interchangeably. Newer high-rise buildings are more often marketed as "apartments", as the term "flats" carries colloquial connotations. The term condominium or condo is rarely used in Australia despite attempts by developers to market it.

In Malaysian English, flat often denotes a housing block of two rooms with walk-up, no lift, without facilities, typically five storeys tall, and with outdoor parking space, while apartment is more generic and may also include luxury condominiums.

In Japanese English loanwords (Wasei-eigo), the term apartment (apaato) is used for lower-income housing, and mansion (manshon) is used for high-end apartments; both terms refer to what English speakers regard as an apartment. This use of the term mansion has a parallel with British English's mansion block, a term denoting prestigious apartment buildings from the Victorian and Edwardian eras, which usually feature an ornate facade and large, high-ceilinged flats with period features. Danchi is the Japanese word for a large cluster of apartment buildings of a particular style and design, typically built as public housing by government authorities.

Types and characteristics:

Studio apartment

The smallest self-contained apartments are referred to as studio, efficiency or bachelor apartments in the US and Canada, or studio flat in the UK. These units usually consist of a large single main room which acts as the living room, dining room and bedroom combined and usually also includes kitchen facilities, with a separate bathroom. In Korea, the term "one room" (wonroom) refers to a studio apartment.

A bedsit is a UK variant on single room accommodation: a bed-sitting room, probably without cooking facilities, with a shared bathroom. A bedsit is not self-contained and so is not an apartment or flat as this article uses the terms; it forms part of what the UK government calls a house in multiple occupation.

Garden apartment (US)

Merriam-Webster defines a garden apartment in American English as "a multiple-unit low-rise dwelling having considerable lawn or garden space." The apartment buildings are often arranged around courtyards that are open at one end. Such a garden apartment shares some characteristics of a townhouse: each apartment has its own building entrance, or shares that entrance via a staircase and lobby that adjoins other units immediately above and/or below it. Unlike a townhouse, each apartment occupies only one level. Such garden apartment buildings are almost never more than three stories high, since they typically lack elevators. However, the first "garden apartment" buildings in New York, USA, built in the early 1900s, were constructed five stories high. Some garden apartment buildings place a one-car garage under each apartment. The interior grounds are often landscaped.

Garden flat (UK)

The Oxford English Dictionary defines the use of "garden flat" in British English as "a basement or ground-floor flat with a view of and access to a garden or lawn", although its citations acknowledge that the reference to a garden may be illusory. "Garden flat" can serve simply as a euphemism for a basement. The large Georgian or Victorian townhouse was built with an excavated subterranean space around its front known as an area, often surrounded by cast iron railings. This lowest floor housed the kitchen, the main place of work for the servants, with a "tradesman's entrance" via the area stairs. This "lower ground floor" (another euphemism) has proven ideal for conversion to a self-contained "garden flat". One American term for this arrangement is an English basement.

Basement apartment:

Garret apartment

A garret apartment is a unit in the attic of a building, usually converted from domestic servants' quarters. These apartments are characterized by their sloping walls, which can restrict the usable space; the sloping walls and the stair climb in buildings that do not have elevators can make garret apartments less desirable than units on lower floors. However, because these apartments are located on the top floors of their buildings, they can offer the best views and are quieter owing to the lack of upstairs neighbours.

Secondary suite

When part of a house is converted for the ostensible use of the owner's family member, the self-contained dwelling may be known as an "in-law apartment", "annexe", or "granny flat", though these (sometimes illegally) created units are often occupied by ordinary renters rather than the landlord's relative. In Canada these are commonly located below the main house and are therefore "basement suites". Another term is an "accessory dwelling unit", which may be part of the main house, or a free-standing structure in its grounds.

Salon apartment

Salon apartment is a term for the exclusive apartments built as part of multi-family houses in Belgrade, and in certain towns in Yugoslavia, in the first decades of the 20th century. The layout of such apartments included a centrally located anteroom, with the combined function of a dining room, and one or more salon areas. Most of these apartments were built in Belgrade (Serbia), which produced the first examples popularly named 'salon apartments', with the concept of spatial and functional organization later spreading to other large urban centers in Yugoslavia.

Maisonette

Maisonette (a corruption of maisonnette, French for "little house" and originally the spelling in English as well, but which has since fallen into disuse) has no strict definition, but the OED suggests "a part of a residential building which is occupied separately, usually on more than one floor and having its own outside entrance." It differs from a flat in having, usually, more than one floor, with a staircase internal to the dwelling leading from the entrance floor to the upper (or, in some cases, lower) other floor. This is a very common arrangement in much post-war British housing (especially, but not exclusively, public housing) serving both to reduce costs by reducing the amount of space given to access corridors and to emulate the 'traditional' two-storey terrace house to which many of the residents would have been accustomed. It also allows for apartments, even when accessed by a corridor, to have windows on both sides of the building.

A maisonette could encompass Tyneside flats, pairs of single-storey flats within a two-storey terrace. Their distinctive feature is their use of two separate front doors onto the street, each door leading to a single flat. "Maisonette" could also stretch to cottage flats, also known as 'four-in-a-block flats', a style of housing common in Scotland.

One dwelling with two storeys

The vast majority of apartments are on one level, hence "flat". Some, however, have two storeys, joined internally by stairs, just as many houses do. One term for this is "maisonette", as above. Some housing in the United Kingdom, both public and private, was designed as scissor section flats. On a grander level, penthouses may have more than one storey, to emphasise the idea of space and luxury. Two storey units in new construction are sometimes referred to as "townhouses" in some countries (though not usually in Britain).

Small buildings with a few one-storey dwellings

"Duplex" refers to two separate units horizontally adjacent, with a common demising wall, or vertically adjacent, with a floor-ceiling assembly.

The precise description of a duplex varies with the part of the US, but the building generally has two dwellings, each with its own door, usually two front doors close together but separate. "Duplex" indicates the number of units, not the number of floors; in some areas of the country the two dwellings often occupy a single storey. Buildings with more units have corresponding names: a triplex has three dwellings, a fourplex four, and so on.

In the United States, regional forms have developed; see vernacular architecture. In Milwaukee, a Polish flat or "raised cottage" is an existing small house that has been lifted up to accommodate the creation of a basement floor housing a separate apartment, then set down again, thus becoming a modest pair of dwellings. In the Sun Belt, boxy small apartment buildings called dingbats, often with carports below, sprang up from the 1950s onward.

In the United Kingdom the term "duplex" is usually applied to an apartment with two storeys (with an internal staircase), neither of which is at ground level. Such homes are frequently found in low-cost rental housing, in apartment blocks constructed by local authorities, or above street-level retail units, where they may be occupied by the occupier of the retail unit or rented out separately. Buildings containing two dwellings with a common vertical wall are instead known as semi-detached, or colloquially a "semi". This form of construction is very common and is usually built as such rather than being a later conversion.

Loft apartment

This type of apartment developed in North America during the middle of the 20th century. The term initially described a living space created within a former industrial building, usually dating from the 19th century. These large apartments found favor with artists and musicians wanting accommodation in large cities (New York, for example), and the trend is related to the unused buildings in the decaying parts of such cities being occupied illegally by squatters.

These loft apartments were usually located in former high-rise warehouses and factories left vacant after town-planning rules and economic conditions changed in the mid-20th century. The resulting apartments fostered a new bohemian lifestyle and are arranged in a completely different way from most urban living spaces, often including workshops and art studio spaces. As the supply of old buildings of a suitable nature has dried up, developers have responded by constructing new buildings in the same aesthetic, with varying degrees of success.

An industrial, warehouse, or commercial space converted to an apartment is commonly called a loft, although some modern lofts are built by design.

Penthouse

An apartment located on the top floor of a high-rise apartment building.

Communal apartment

In Russia, a communal apartment is a room with a shared kitchen and bathroom. A typical arrangement is a cluster of five or so room-apartments with a common kitchen and bathroom and separate front doors, occupying a floor of a pre-Revolutionary mansion. Traditionally, each room is owned by the government and assigned to a family on a semi-permanent basis.

Serviced apartment

A serviced apartment is a living space of any size that includes regular maid and cleaning services provided by the rental agent. Serviced apartments or serviced flats developed in the early part of the 20th century and were briefly fashionable in the 1920s and 1930s. They are intended to combine the best features of luxury and self-contained apartments, often being an adjunct of a hotel. Like guests semi-permanently installed in a luxury hotel, residents could enjoy additional facilities such as housekeeping, laundry, catering and other services if and when desired.

A feature of these apartment blocks was their quite glamorous interiors, with lavish bathrooms but no kitchen or laundry spaces in each flat. This style of living became very fashionable as many upper-class people found they could not afford as many live-in staff after the First World War and revelled in the "lock-up and leave" lifestyle that serviced apartment hotels supplied. Some buildings have since been renovated with standard facilities in each apartment, but serviced apartment hotel complexes continue to be constructed. Recently a number of hotels have supplemented their traditional business model with serviced apartment wings, creating privately owned areas within their buildings, either freehold or leasehold.

Facilities

Apartments may be rented furnished, with furniture, or unfurnished, in which case the tenant moves in with their own furniture. Serviced apartments, intended to be convenient for shorter stays, include soft furnishings, kitchen utensils, and maid service.

Laundry facilities may be located in a common area accessible to all building tenants, or each apartment may have its own. Depending on when the building was built and on its design, utilities such as water, heating, and electricity may be shared by all of the apartments or metered separately and billed to each tenant. (Many areas in the US have ruled it illegal to split a water bill among all the tenants, especially if a pool is on the premises.) Outlets for telephone connections are typically included in apartments; telephone service is optional and is almost always billed separately from the rent. Cable television and similar amenities also cost extra. Parking space(s), air conditioning, and extra storage space may or may not be included. Rental leases often limit the maximum number of residents in each apartment.

On or around the ground floor of the apartment building, a bank of mailboxes is typically kept in a location accessible to the public and, thus, to the mail carrier. Each unit typically has its own mailbox with an individual key. Some very large apartment buildings with full-time staff may take mail from the carrier and provide a mail-sorting service. Near the mailboxes, or in some other location accessible to outsiders, a buzzer (equivalent to a doorbell) may be provided for each individual unit. In smaller apartment buildings such as two- or three-flats, or even four-flats, rubbish is often disposed of in trash containers similar to those used at houses. In larger buildings, rubbish is often collected in a common bin or dumpster. For cleanliness or to minimize noise, many lessors place restrictions on tenants regarding smoking or keeping pets in an apartment.

Various

In more urban areas, apartments close to the downtown area have the benefits of proximity to jobs and/or public transportation. However, prices per square foot are often much higher than in suburban areas.

Moving up in size from studio flats are one-bedroom apartments, in which the bedroom is separated from the other rooms of the apartment, usually by an internal door; these are followed by two-bedroom, three-bedroom, etc. apartments. Apartments with more than three bedrooms are rare.

Small apartments often have only one entrance. Large apartments often have two entrances, perhaps a door in the front and another in the back, or from an underground or otherwise attached parking structure. Depending on the building design, the entrance doors may be connected directly to the outside or to a common area inside, such as a hallway or a lobby.

In many American cities, the one-plus-five style of mid-rise, wood-framed apartment building has gained significant popularity following a 2009 revision to the International Building Code; these buildings typically feature four wood-framed floors above a concrete podium and are popular with developers because of their high density and relatively low construction costs.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

