Math Is Fun Forum

  Discussion about math, puzzles, games and fun.   Useful symbols: ÷ × ½ √ ∞ ≠ ≤ ≥ ≈ ⇒ ± ∈ Δ θ ∴ ∑ ∫ • π ƒ -¹ ² ³ °


#2126 2024-04-21 00:06:32

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,467

Re: Miscellany

2128) Talk show

Gist

A talk show is a chat show, especially one in which listeners, viewers, or the studio audience are invited to participate in the discussion.

Summary

Talk show, radio or television program in which a well-known personality interviews celebrities and other guests. The late-night television programs hosted by Johnny Carson, Jay Leno, David Letterman, and Conan O’Brien, for example, emphasized entertainment, incorporating interludes of music or comedy. Other talk shows focused on politics (see David Susskind), controversial social issues or sensationalistic topics (Phil Donahue), and emotional therapy (Oprah Winfrey).

Details

A talk show (sometimes chat show in British English) is a television, radio, or podcast genre structured around the act of spontaneous conversation. A talk show is distinguished from other programs by certain common attributes. In a talk show, one person (or a group of guests) discusses various topics put forth by a talk show host. This discussion can take the form of an interview or a simple conversation about important social, political or religious issues and events. The personality of the host shapes the tone and style of the show. A common feature, or unwritten rule, of talk shows is that they are based on "fresh talk": talk that is spontaneous or has the appearance of spontaneity.

The history of the talk show spans from the 1950s to the present.

Talk shows can also have several different subgenres, which all have unique material and can air at different times of the day via different avenues.

Attributes

Beyond the inclusion of a host, one or more guests, and a studio or call-in audience, specific attributes of talk shows may be identified:

* Talk shows focus on the viewers—including the participants calling in, sitting in a studio or viewing from online or on TV.
* Talk shows center around the interaction of guests with opposing opinions and/or differing levels of expertise, which include both experts and non-experts.
* Although talk shows include guests of various expertise levels, they often privilege the credibility of lived experience over formal educational expertise.
* Talk shows involve a host responsible for furthering the agenda of the show by mediating, instigating and directing the conversation to ensure the purpose is fulfilled. The purpose of talk shows is to either address or bring awareness to conflicts, to provide information, or to entertain.
* Talk shows consist of evolving episodes that focus on differing perspectives in respect to important issues in society, politics, religion or other popular areas.
* Talk shows are produced at low cost and are typically not aired during prime time.
* Talk shows are either aired live or recorded live with limited post-production editing.

Subgenres

There are several major formats of talk shows. Generally, each subgenre predominates during a specific programming block during the broadcast day.

* Breakfast chat or early morning shows that generally alternate between news summaries, political coverage, feature stories, celebrity interviews, and musical performances.
* Late morning chat shows that feature two or more hosts or a celebrity panel and focus on entertainment and lifestyle features.
* Daytime tabloid talk shows that generally feature a host, a guest or a panel of guests, and a live audience that interacts extensively with the host and guests. These shows may feature celebrities, political commentators, or "ordinary" people who present unusual or controversial topics.
* "Lifestyle" or self-help programs that generally feature a host or hosts of medical practitioners, therapists, or counselors and guests who seek intervention, describe medical or psychological problems, or offer advice. An example of this type of subgenre is The Oprah Winfrey Show, although it can easily fit into other categories as well.
* Evening panel discussion shows that focus on news, politics, or popular culture (such as the former UK series After Dark which was broadcast "late-night").
* Late-night talk shows that focus primarily on topical comedy and variety entertainment. Most traditionally open with a monologue by the host, with jokes relating to current events. Other segments typically include interviews with celebrity guests, recurring comedy sketches, as well as performances by musicians or other stand-up comics.
* Sunday morning talk shows are a staple of network programming in North America, and generally focus on political news and interviews with elected political figures and candidates for office, commentators, and journalists.
* Aftershows that feature in-depth discussion about a program on the same network that aired just before (for example, Talking Dead).
* Spoof talk shows, such as Space Ghost Coast to Coast, Tim and Eric Nite Live, Comedy Bang! Bang!, and The Eric Andre Show, that feature interviews that are mostly scripted, presented in a humorous and satirical way, or that subvert the norms of the format (particularly that of late-night talk shows).

These formats are not absolute; some afternoon programs have similar structures to late-night talk shows. These formats may vary across different countries or markets. Late night talk shows are especially significant in the United States. Breakfast television is a staple of British television. The daytime talk format has become popular in Latin America as well as the United States.

These genres also do not represent "generic" talk show genres. "Generic" genres are categorized based on audiences' social views of talk shows, derived from their cultural identities, preferences and character judgements of the shows in question. The subgenres listed above are based on television programming and are broadly defined by the TV guide rather than by the more specific categorizations of talk show viewers. However, there is a lack of research on "generic" genres, making it difficult to list them here. According to Mittell, "generic" genres are of significant importance in further identifying talk show genres: with such differentiation in cultural preferences within the subgenres, a further distinction of genres would better represent and target the audience.

Talk-radio host Howard Stern also hosted a talk show that was syndicated nationally in the US, then moved to satellite radio's Sirius. The tabloid talk show genre, pioneered by Phil Donahue in 1967 but popularized by Oprah Winfrey, was extremely popular during the last two decades of the 20th century.

Politics is hardly the only subject of American talk shows, however. Other radio talk show subjects include Car Talk, broadcast on NPR, and Coast to Coast AM, hosted by Art Bell and George Noory, which discusses the paranormal, conspiracy theories, and fringe science. Sports talk shows are also very popular, ranging from high-budget shows like The Best Damn Sports Show Period to Max Kellerman's original public-access cable TV show Max on Boxing.

History

Talk shows have been broadcast on television since the earliest days of the medium. Joe Franklin, an American radio and television personality, hosted the first television talk show. The show began in 1951 on WJZ-TV (later WABC-TV) and moved to WOR-TV (later WWOR-TV) from 1962 to 1993.

NBC's The Tonight Show is the world's longest-running talk show; having debuted in 1954, it continues to this day. The show underwent some minor title changes until settling on its current title in 1962, and despite a brief foray into a more news-style program in 1957 and then reverting that same year, it has remained a talk show. Ireland's The Late Late Show is the second-longest running talk show in television history, and the longest running talk show in Europe, having debuted in 1962.

Steve Allen was the first host of The Tonight Show, which began as a local New York show before being picked up by the NBC network in 1954. It in turn had evolved from his late-night radio talk show in Los Angeles. Allen pioneered the format of late night network TV talk shows, originating such talk show staples as an opening monologue, celebrity interviews, audience participation, and comedy bits in which cameras were taken outside the studio, as well as music, although the series' popularity was cemented by its second host, Jack Paar, who took over after Allen had left and the show had briefly abandoned the talk format.

TV news pioneer Edward R. Murrow hosted a talk show entitled Small World in the late 1950s and since then, political TV talk shows have predominantly aired on Sunday mornings.

Syndicated daily talk shows began to gain more popularity during the mid-1970s and reached their height of popularity with the rise of the tabloid talk show. Morning talk shows gradually replaced earlier forms of programming: morning game shows were plentiful during the 1960s and early-to-mid-1970s, and some stations formerly showed a morning movie in the time slot that many talk shows now occupy.

Current late night talk shows such as The Tonight Show Starring Jimmy Fallon, Conan and The Late Show with Stephen Colbert have aired featuring celebrity guests and comedy sketches. Syndicated daily talk shows range from tabloid talk shows, such as Jerry Springer and Maury, to celebrity interview shows, like Live with Kelly and Ryan, Tamron Hall, Sherri, Steve Wilkos, The Jennifer Hudson Show and The Kelly Clarkson Show, to industry leader The Oprah Winfrey Show, which popularized the former genre and has been evolving towards the latter. On November 10, 2010, Oprah Winfrey invited several of the most prominent American talk show hosts - Phil Donahue, Sally Jessy Raphael, Geraldo Rivera, Ricki Lake, and Montel Williams - to join her as guests on her show. The 1990s in particular saw a spike in the number of "tabloid" talk shows, most of which were short-lived and are now replaced by a more universally appealing "interview" or "lifestyle TV" format.

Talk shows have more recently started to appear on Internet radio. Several Internet blogs also follow a talk show format, including the Baugh Experience.

The current world record for the longest talk show is held by Rabi Lamichhane of Nepal, who stayed on air for 62 hours from April 11 to 13, 2013, breaking the previous record of 52 hours set by two Ukrainians in 2011.

In 2020, fear of the spread of the coronavirus led to large changes in the operation of talk shows, with many being filmed without live audiences to ensure adherence to social distancing rules. The inclusion of a live, participating audience is one of the defining attributes of talk shows. Operating without the interaction of viewers created difficult moments and awkward silences for hosts, who usually relied on audience responses to transition between conversations.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.


#2127 2024-04-22 00:06:25

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,467

Re: Miscellany

2129) Technocracy

Gist

a) a proponent, adherent, or supporter of technocracy.
b) a technological expert, especially one concerned with management or administration.

Summary

Technocracy, government by technicians who are guided solely by the imperatives of their technology. The concept developed in the United States early in the 20th century as an expression of the Progressive movement and became a subject of considerable public interest in the 1930s during the Great Depression. The origins of the technocracy movement may be traced to Frederick W. Taylor’s introduction of the concept of scientific management. Writers such as Henry L. Gantt, Thorstein Veblen, and Howard Scott suggested that businessmen were incapable of reforming their industries in the public interest and that control of industry should thus be given to engineers.

The much-publicized Committee on Technocracy, headed by Walter Rautenstrauch and dominated by Scott, was organized in 1932 in New York City. Scott proclaimed the invalidation, by technologically produced abundance, of all prior economic concepts based on scarcity; he predicted the imminent collapse of the price system and its replacement by a bountiful technocracy. Scott’s academic qualifications, however, were discredited in the press, some of the group’s data were questioned, and there were disagreements among members regarding social policy. The committee broke up within a year and was succeeded by the Continental Committee on Technocracy, which faded by 1936, and Technocracy, Inc., headed by Scott. Technocratic organizations sprang up across the United States and western Canada, but the technocracy movement was weakened by its failure to develop politically viable programs for change, and support was lost to the New Deal and third-party movements. There were also fears of authoritarian social engineering. Scott’s organization declined after 1940 but still survived in the late 20th century.

Details

What Is Technocracy?

A technocracy is a model of governance wherein decision-makers are chosen for office based on their technical expertise and background. A technocracy differs from a traditional democracy in that individuals selected to a leadership role are chosen through a process that emphasizes their relevant skills and proven performance, as opposed to whether or not they fit the majority interests of a popular vote.

The individuals that occupy such positions in a technocracy are known as "technocrats." An example of a technocrat could be a central banker who is a trained economist and follows a set of rules that apply to empirical data.

KEY TAKEAWAYS

* A technocracy is a form of governance whereby government officials or policymakers, known as technocrats, are chosen by some higher authority due to their technical skills or expertise in a specific domain.
* Decisions made by technocrats are supposed to be based on information derived from data and objective methodology, rather than opinion or self-interest.
* Critics complain that technocracy is undemocratic and disregards the will of the people.

How Technocracy Works

A technocracy is a political entity ruled by experts (technocrats) that are selected or appointed by some higher authority. Technocrats are, supposedly, selected specifically for their expertise in the area over which they are delegated authority to govern. In practice, because technocrats must always be appointed by some higher authority, the political structure and incentives that influence that higher authority will always also play some role in the selection of technocrats.

An official who is labeled as a technocrat may not possess the political savvy or charisma that is typically expected of an elected politician. Instead, a technocrat may demonstrate more pragmatic and data-oriented problem-solving skills in the policy arena.

Technocracy became a popular movement in the United States during the Great Depression when it was believed that technical professionals, such as engineers and scientists, would have a better understanding than politicians regarding the economy's inherent complexity.

Although democratically elected officials may hold seats of authority, most come to rely on the technical expertise of select professionals to execute their plans.

Defense measures and policies in government are often developed with considerable consultation with military personnel to provide their firsthand insight. Medical treatment decisions, meanwhile, are based heavily on the input and knowledge of physicians, and city infrastructures could not be planned, designed, or constructed without the input of engineers.

Critiques of Technocracy

Reliance on technocracy can be criticized on several grounds. The acts and decisions of technocrats can come into conflict with the will, rights, and interests of the people whom they rule over. This in turn has often led to populist opposition to both specific technocratic policy decisions and to the degree of power in general granted to technocrats. These problems and conflicts help give rise to the populist concept of the "deep state", which consists of a powerful, entrenched, unaccountable, and oligarchic technocracy which governs in its own interests.

In a democratic society, the most obvious criticism is that there is an inherent tension between technocracy and democracy. Technocrats often may not follow the will of the people because by definition they may have specialized expertise that the general population lacks. Technocrats may or may not be accountable to the will of the people for such decisions.

In a government where citizens are guaranteed certain rights, technocrats may seek to encroach upon these rights if they believe that their specialized knowledge suggests that it is appropriate or in the larger public interest. The focus on science and technical principles might also be seen as separate and disassociated from the humanity and nature of society. For instance, a technocrat might make decisions based on calculations of data rather than the impact on the populace, individuals, or groups within the population.

In any government, regardless of who appoints the technocrats or how, there is always a risk that technocrats will engage in policymaking that favors their own interests or others whom they serve over the public interest. Technocrats are necessarily placed in a position of trust, since the knowledge used to enact their decisions is to some degree inaccessible or not understandable to the general public. This creates a situation where there can be a high risk of self-dealing, collusion, corruption, and cronyism. Economic problems such as rent-seeking, rent-extraction, or regulatory capture are common in technocracy.


Additional Information

Technocracy is a form of government in which the decision-makers are selected based on their expertise in a given area of responsibility, particularly with regard to scientific or technical knowledge. Technocracy follows largely in the tradition of other meritocracy theories and assumes full state control over political and economic issues.

This system explicitly contrasts with representative democracy, the notion that elected representatives should be the primary decision-makers in government, though it does not necessarily imply eliminating elected representatives. Decision-makers are selected based on specialized knowledge and performance rather than political affiliations, parliamentary skills, or popularity.

The term technocracy was initially used to signify the application of the scientific method to solving social problems. In its most extreme form, technocracy is an entire government running as a technical or engineering problem and is mostly hypothetical. In more practical use, technocracy is any portion of a bureaucracy run by technologists. A government in which elected officials appoint experts and professionals to administer individual government functions, and recommend legislation, can be considered technocratic. Some uses of the word refer to a form of meritocracy, where the ablest are in charge, ostensibly without the influence of special interest groups. Critics have suggested that a "technocratic divide" challenges more participatory models of democracy, describing these divides as "efficacy gaps that persist between governing bodies employing technocratic principles and members of the general public aiming to contribute to government decision making".

History of the term

The term technocracy is derived from the Greek words τέχνη (tekhne), meaning skill, and κράτος (kratos), meaning power, as in governance or rule. William Henry Smyth, a California engineer, is usually credited with inventing the word technocracy in 1919 to describe "the rule of the people made effective through the agency of their servants, the scientists and engineers", although the word had been used before on several occasions. Smyth used the term in his 1919 article "'Technocracy'—Ways and Means to Gain Industrial Democracy" in the journal Industrial Management. Smyth's usage referred to industrial democracy: a movement to integrate workers into decision-making through existing firms or revolution.

In the 1930s, through the influence of Howard Scott and the technocracy movement he founded, the term technocracy came to mean 'government by technical decision making', using an energy metric of value. Scott proposed that money be replaced by energy certificates denominated in units such as ergs or joules, equivalent in total amount to an appropriate national net energy budget, and then distributed equally among the North American population, according to resource availability.

The derivative term technocrat is in common usage. The word technocrat can refer to someone exercising governmental authority because of their knowledge, "a member of a powerful technical elite", or "someone who advocates the supremacy of technical experts". McDonnell and Valbruzzi define a prime minister or minister as a technocrat if "at the time of their appointment to government, they: have never held public office under the banner of a political party; are not a formal member of any party; and are said to possess recognized non-party political expertise which is directly relevant to the role occupied in government". In Russia, the President of Russia has often nominated ministers from outside political circles on the basis of technical expertise, and these have been referred to as "technocrats".

Characteristics

Technocrats are individuals with technical training and occupations who perceive many important societal problems as being solvable with the applied use of technology and related applications. The administrative scientist Gunnar K. A. Njalsson theorizes that technocrats are primarily driven by their cognitive "problem-solution mindsets" and only in part by particular occupational group interests. Their activities and the increasing success of their ideas are thought to be a crucial factor behind the modern spread of technology and the largely ideological concept of the "information society". Technocrats may be distinguished from "econocrats" and "bureaucrats" whose problem-solution mindsets differ from those of the technocrats.

Examples

The former government of the Soviet Union has been referred to as a technocracy. Soviet leaders like Leonid Brezhnev often had a technical background. In 1986, 89% of Politburo members were engineers.

Leaders of the Chinese Communist Party used to be mostly professional engineers. According to surveys of municipal governments of cities with a population of 1 million or more in China, over 80% of government personnel had a technical education. Under the five-year plans of the People's Republic of China, projects such as the National Trunk Highway System, the China high-speed rail system, and the Three Gorges Dam have been completed. During China's 20th National Congress, a class of technocrats in finance and economics was replaced in favor of high-tech technocrats.

In 2013, a European Union library briefing on its legislative structure referred to the Commission as a "technocratic authority", holding a "legislative monopoly" over the EU lawmaking process. The briefing suggests that this system, which elevates the European Parliament to a vetoing and amending body, was "originally rooted in the mistrust of the political process in post-war Europe". This system is unusual since the Commission's sole right of legislative initiative is a power usually associated with Parliaments.

Several governments in European parliamentary democracies have been labelled 'technocratic' based on the participation of unelected experts ('technocrats') in prominent positions. Since the 1990s, Italy has had several such governments (in Italian, governo tecnico) in times of economic or political crisis, including the formation in which economist Mario Monti presided over a cabinet of unelected professionals. The term 'technocratic' has been applied to governments where a cabinet of elected professional politicians is led by an unelected prime minister, such as in the cases of the 2011–2012 Greek government led by economist Lucas Papademos and the Czech Republic's 2009–2010 caretaker government presided over by the state's chief statistician, Jan Fischer. In December 2013, in the framework of the national dialogue facilitated by the Tunisian National Dialogue Quartet, political parties in Tunisia agreed to install a technocratic government led by Mehdi Jomaa.

The article "Technocrats: Minds Like Machines" states that Singapore is perhaps the best advertisement for technocracy: the political and expert components of its governing system seem to have merged completely. This was underlined in a 1993 Wired article by Sandy Sandfort, which described how the island's information technology system, even at that early date, made it effectively intelligent.

Engineering

Following Samuel Haber, Donald Stabile argues that engineers were faced with a conflict between physical efficiency and cost efficiency in the new corporate capitalist enterprises of the late nineteenth-century United States. Because of their perceptions of market demand, the profit-conscious, non-technical managers of firms where the engineers work often impose limits on the projects that engineers desire to undertake.

The prices of all inputs vary with market forces, thereby upsetting the engineer's careful calculations. As a result, the engineer loses control over projects and must continually revise plans. To maintain control over projects, the engineer must attempt to control these outside variables and transform them into constant factors.




#2128 2024-04-22 22:52:21

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,467

Re: Miscellany

2130) Hairdresser

Gist

A hairdresser is a person who cuts people's hair and puts it into a style, usually working in a special shop, called a hairdresser's.

Summary

A hairdresser's job is to organise hair into a particular style or "look". They can cut hair, add colour to it or texture it. A hairdresser may be female or male. Qualified staff are usually called "stylists", who are supported by assistants. Most hairdressing businesses are unisex, that is, they serve both sexes, and have both sexes on their staff.

Male hairdressers who simply cut men's hair (and do not serve females) are often called barbers.

Qualifications for hairdressing usually mean a college course or an apprenticeship under a senior stylist. Some aspects of the job are quite technical (such as hair dyeing) and require careful teaching.

Details

A hairdresser is a person whose occupation is to cut or style hair in order to change or maintain a person's image. This is achieved using a combination of hair coloring, haircutting, and hair texturing techniques. A hairdresser may also be referred to as a 'barber' or 'hairstylist'.

History:

Ancient hairdressing

Hairdressing as an occupation dates back thousands of years. Both Aristophanes and Homer, Greek writers, mention hairdressing in their writings. Many Africans believed that hair is a method to communicate with the Divine Being; it is the highest part of the body and therefore the closest to the divine. Because of this, hairdressers held a prominent role in African communities. The status of hairdressing encouraged many to develop their skills, and close relationships were built between hairdressers and their clients. Hours would be spent washing, combing, oiling, styling and ornamenting hair. Men would work specifically on men, and women on other women. Before a master hairdresser died, they would give their combs and tools to a chosen successor during a special ceremony.

In ancient Egypt, hairdressers had specially decorated cases to hold their tools, including lotions, scissors and styling materials. Barbers also worked as hairdressers, and wealthy men often had personal barbers within their home. With the standard of wig wearing within the culture, wigmakers were also trained as hairdressers. In ancient Rome and Greece household slaves and servants took on the role of hairdressers, including dyeing and shaving. Men who did not have their own private hair or shaving services would visit the local barbershop. Women had their hair maintained and groomed at their homes. Historical documentation is lacking regarding hairstylists from the 5th century until the 14th century. Hair care service grew in demand after a papal decree in 1092 demanded that all Roman Catholic clergymen remove their facial hair.

Europe

The first appearance of the word "hairdresser" is in 17th century Europe, and hairdressing was considered a profession. Hair fashion of the period suggested that wealthy women wear large, complex and heavily adorned hairstyles, which would be maintained by their personal maids and other people, who would spend hours dressing the woman's hair. A wealthy man's hair would often be maintained by a valet. It was in France where men began styling women's hair for the first time, and many of the notable hairdressers of the time were men, a trend that would continue into contemporary times. The first famous male hairdresser was Champagne, who was born in Southern France. Upon moving to Paris, he opened his own hair salon and dressed the hair of wealthy Parisian women until his death in 1658.

Women's hair grew taller in style during the 17th century, popularized by the hairdresser Madame Martin. The hairstyle, "the tower," was the trend with wealthy English and American women, who relied on hairdressers to style their hair as tall as possible. Tall piles of curls were pomaded, powdered and decorated with ribbons, flowers, lace, feathers and jewelry. The profession of hairdressing was launched as a genuine profession when Legros de Rumigny was declared the first official hairdresser of the French court. In 1765 de Rumigny published his book Art de la Coiffure des Dames, which discussed hairdressing and included pictures of hairstyles designed by him. The book was a best seller amongst Frenchwomen, and four years later de Rumigny opened a school for hairdressers: Academie de Coiffure. At the school he taught men and women to cut hair and create his special hair designs.

By 1777, approximately 1,200 hairdressers were working in Paris. During this time, barbers formed unions and demanded that hairdressers do the same. Wigmakers also demanded that hairdressers cease taking away from their trade; hairdressers responded that their roles were not the same: hairdressing was a service, while wigmakers made and sold a product. De Rumigny died in 1770, and other hairdressers gained in popularity, specifically three Frenchmen: Frederic, Larseueur, and Léonard. Léonard and Larseueur were the stylists for Marie Antoinette. Léonard was her favorite, and he developed many hairstyles that became fashion trends within wealthy Parisian circles, including the loge d'opera, which towered five feet over the wearer's head. During the French Revolution he escaped the country hours before he was to be arrested, alongside the king, queen, and other clients. Léonard emigrated to Russia, where he worked as the premier hairdresser for Russian nobility.

19th century

Parisian hairdressers continued to develop influential styles during the early 19th century. Wealthy French women would have their favorite hairdressers style their hair from within their own homes, a trend seen in wealthy international communities. Hairdressing was primarily a service affordable only to those wealthy enough to hire professionals or to pay for servants to care for their hair. In the United States, Marie Laveau was one of the most famous hairdressers of the period. Laveau, located in New Orleans, began working as a hairdresser in the early 1820s, maintaining the hair of wealthy women of the city. She was a voodoo practitioner, called the "Voodoo Queen of New Orleans," and she used her connections to wealthy women to support her religious practice. She provided "help" to women who needed it for money, gifts and other favors.

French hairdresser Marcel Grateau developed the "Marcel wave" in the late part of the century. His wave required the use of a special hot hair iron and needed to be done by an experienced hairdresser. Fashionable women asked to have their hair "marceled." During this period, hairdressers began opening salons in cities and towns, led by Martha Matilda Harper, who developed one of the first retail chains of hair salons, the Harper Method.

20th century

Beauty salons became popular during the 20th century, alongside men's barbershops. They served as social spaces, allowing women to socialize while having their hair done and receiving other services such as facials. Wealthy women still had hairdressers visit their homes, but the majority of women visited salons for services, including high-end salons such as Elizabeth Arden's Red Door Salon.

Major advancements in hairdressing tools took place during this period. Electricity led to the development of permanent wave machines and hair dryers. These tools allowed hairdressers to promote salon visits over more limited in-home services. New coloring processes were developed, including those by Eugène Schueller in Paris, which allowed hairdressers to perform complicated styling techniques. After World War I, the bob cut and the shingle bob became popular, alongside other short haircuts. In the 1930s complicated styles came back into fashion, alongside the return of the Marcel wave. Hairdressing was one of the few professions acceptable for women during this time, alongside teaching, nursing and clerical work.

Modern hairdressing:

Specialties

Some hairstylists specialize in particular services; colorists, for example, specialize in coloring hair.

By country:

United States

Occupationally, hairdressing is expected to grow faster than the average for all occupations, at 20%. A state license is required for hairdressers to practice, with qualifications varying from state to state. Generally, a person interested in hairdressing must have a high school diploma or GED, be at least 16 years of age, and have graduated from a state-licensed barber or cosmetology school. Full-time programs often last 9 months or more, leading to an associate degree. After students graduate from a program, they take a state licensing exam, which often consists of a written test and a practical test of styling or an oral exam. Hairdressers must pay for licenses, which occasionally must be renewed. Some states allow hairdressers licensed in another state to work without obtaining a new license, while others require one. About 44% of hairdressers are self-employed; hairdressers often put in 40-hour work weeks, with the self-employed tending to work even longer. In 2008, 29% of hairstylists worked part-time, and 14% had variable schedules. As of 2008, people working as hairdressers totaled about 630,700, with a projected increase to 757,700 by 2018.
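As a quick arithmetic check, the 2008 and 2018 figures above are consistent with the stated 20% growth rate (a simple sketch, using only the numbers quoted in this paragraph):

```python
# Verify that the 2008-2018 employment projection matches the stated ~20% growth.
employed_2008 = 630_700
projected_2018 = 757_700

growth = (projected_2018 - employed_2008) / employed_2008
print(f"Projected growth: {growth:.1%}")  # prints "Projected growth: 20.1%"
```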

Occupational health hazards

Like many occupations, hairdressing is associated with potential health hazards stemming from the products workers use on the job as well as the environment they work in. Exposure risks are highly variable throughout the profession due to differences in the physical workspace, such as use of proper ventilation, as well as individual exposures to various chemicals throughout one's career. Hairdressers encounter a variety of chemicals on the job due to handling products such as shampoos, conditioners, sprays, chemical straighteners, permanent curling agents, bleaching agents, and dyes. While the U.S. Food and Drug Administration does hold certain guidelines over cosmetic products, such as proper labeling and provisions against adulteration, the FDA does not require approval of products prior to being sold to the public. This leaves opportunity for variations in product formulation, which can make occupational exposure evaluation challenging. However, there are certain chemicals that are commonly found in products used in hair salons and have been the subject of various occupational hazard studies.

Formaldehyde

Formaldehyde is a chemical used in various industries and has been classified by the International Agency for Research on Cancer (IARC) as "carcinogenic to humans". The presence of formaldehyde and methylene glycol, a formaldehyde derivative, has been found in hair smoothing products such as the Brazilian Blowout. The liquid product is applied to the hair, which is then dried using a blow dryer. Simulation studies as well as observational studies of working salons have shown formaldehyde levels in the air that meet and exceed occupational exposure limits. Variations in observed levels are a function of the ventilation used in the workplace as well as the levels of formaldehyde, and its derivatives, in the product itself.

Aromatic amines

Aromatic amines are a broad class of compounds containing an amine group attached to an aromatic ring. IARC has categorized most aromatic amines as known carcinogens. Their use spans several industries including use in pesticides, medications, and industrial dyes. Aromatic amines have also been found in oxidative (permanent) hair dyes; however due to their potential for carcinogenicity, they were removed from most hair dye formulations and their use was completely banned in the European Union.

Phthalates

Phthalates are a class of compounds that are esters of phthalic acid. Their main use has been as plasticizers, additives to plastic products to change certain physical characteristics. They have also been widely used in cosmetic products as preservatives, including shampoos and hair sprays. Phthalates have been implicated as endocrine disrupting chemicals, compounds that mimic the body's own hormones and can lead to dysregulation of the reproductive and neurologic systems as well as changes in metabolism and cell proliferation.

Health considerations:

Reproductive

Most hairdressers are women of childbearing age, which calls for additional consideration of potential workplace exposures and the risks they may pose. Studies have linked mothers who are hairdressers with adverse birth outcomes such as low birth weight, preterm delivery, perinatal death, and neonates who are small for gestational age. However, these studies failed to show a well-defined association between individual risk factors and adverse birth outcomes. Other studies have indicated a correlation between professional hairdressing and menstrual dysfunction as well as subfertility. However, subsequent studies did not show similar correlations. Due to such inconsistencies, further research is required.

Oncologic

The International Agency for Research on Cancer (IARC) has categorized occupational exposures of hairdressers and barbers to chemical agents found in the workplace as "probably carcinogenic to humans", or category 2A in its classification system. This is due in part to the presence of chemical compounds historically found in hair products that have exhibited mutagenic and carcinogenic effects in animal and in vitro studies. However, the same effects have yet to be consistently demonstrated in humans. Studies have shown a link between occupational exposure to hair dyes and increased risk of bladder cancer in male hairdressers, but not in females. Other malignancies such as ovarian, breast and lung cancers have also been studied in hairdressers, but the outcomes of these studies were either inconclusive due to potential confounding or did not exhibit an increase in risk.

Respiratory

Volatile organic compounds have been shown to be the largest inhalation exposure in hair salons, with the greatest concentrations occurring while mixing hair dyes and with use of hair sprays. Other notable respiratory exposures included ethanol, ammonia, and formaldehyde. The concentration of exposure was generally found to be a function of the presence or absence of ventilation in the area in which they were working. Studies have exhibited an increased rate of respiratory symptoms experienced such as cough, wheezing, rhinitis, and shortness of breath among hairdressers when compared to other groups. Decreased lung function levels on spirometry have also been demonstrated in hairdressers when compared to unexposed reference groups.

Dermal

Contact dermatitis is a common dermatological diagnosis affecting hairdressers. Allergen sensitization is considered the main cause of most cases of contact dermatitis in hairdressers, as products such as hair dyes, bleaches, and permanent curling agents contain chemicals that are known sensitizers. Hairdressers also spend a significant amount of time engaged in wet work, with their hands directly immersed in water or handling wet hair and tools. Over time, this type of work has also been implicated in an increased rate of irritant dermatitis among hairdressers, due to damage to the skin's natural protective barrier.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Online

#2129 2024-04-24 00:08:56

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,467

Re: Miscellany

2131) Amputation

Gist

Amputation is the loss or removal of a body part such as a finger, toe, hand, foot, arm or leg. It can be a life changing experience affecting your ability to move, work, interact with others and maintain your independence.

Summary

Amputation is the removal of a limb by trauma, medical illness, or surgery. As a surgical measure, it is used to control pain or a disease process in the affected limb, such as malignancy or gangrene. In some cases, it is carried out on individuals as a preventive surgery for such problems. A special case is that of congenital amputation, a congenital disorder in which fetal limbs have been cut off by constrictive bands. In some countries, judicial amputation is currently used to punish people who commit crimes. Amputation has also been used as a tactic in war and acts of terrorism; it may also occur as a war injury. In some cultures and religions, minor amputations or mutilations are considered a ritual accomplishment. The person executing an amputation is termed an amputator. The oldest evidence of this practice comes from a skeleton found buried in Liang Tebo cave, East Kalimantan, Indonesian Borneo, dating back to at least 31,000 years ago, when the amputee was a young child.

Details

Amputation is the surgical removal of all or part of a limb or extremity such as an arm, leg, foot, hand, toe, or finger.

About 1.8 million Americans are living with amputations. Amputation of the leg -- either above or below the knee -- is the most common amputation surgery.

Reasons for Amputation

There are many reasons an amputation may be necessary. The most common is poor circulation because of damage or narrowing of the arteries, called peripheral arterial disease. Without adequate blood flow, the body's cells cannot get oxygen and nutrients they need from the bloodstream. As a result, the affected tissue begins to die and infection may set in.

Other causes for amputation may include:

* Severe injury (from a vehicle accident or serious burn, for example)
* Cancerous tumor in the bone or muscle of the limb
* Serious infection that does not get better with antibiotics or other treatment
* Thickening of nerve tissue, called a neuroma
* Frostbite

The Amputation Procedure

An amputation usually requires a hospital stay of five to 14 days or more, depending on the surgery and complications. The procedure itself may vary, depending on the limb or extremity being amputated and the patient's general health.

Amputation may be done under general anesthesia (meaning the patient is asleep) or with spinal anesthesia, which numbs the body from the waist down.

When performing an amputation, the surgeon removes all damaged tissue while leaving as much healthy tissue as possible.

A doctor may use several methods to determine where to cut and how much tissue to remove. These include:

* Checking for a pulse close to where the surgeon is planning to cut
* Comparing skin temperatures of the affected limb with those of a healthy limb
* Looking for areas of reddened skin
* Checking to see if the skin near the site where the surgeon is planning to cut is still sensitive to touch

During the procedure itself, the surgeon will:

* Remove the diseased tissue and any crushed bone
* Smooth uneven areas of bone
* Seal off blood vessels and nerves
* Cut and shape muscles so that the stump, or end of the limb, will be able to have an artificial limb (prosthesis) attached to it

The surgeon may choose to close the wound right away by sewing the skin flaps (called a closed amputation). Or the surgeon may leave the site open for several days in case there's a need to remove additional tissue.

The surgical team then places a sterile dressing on the wound and may place a stocking over the stump to hold drainage tubes or bandages. The doctor may place the limb in traction, in which a device holds it in position, or may use a splint.

Recovery From Amputation

Recovery from amputation depends on the type of procedure and anesthesia used.

In the hospital, the staff changes the dressings on the wound or teaches the patient to change them. The doctor monitors wound healing and any conditions that might interfere with healing, such as diabetes or hardening of the arteries. The doctor prescribes medications to ease pain and help prevent infection.

If the patient has problems with phantom pain (a sense of pain in the amputated limb) or grief over the lost limb, the doctor will prescribe medication and/or counseling, as necessary.

Physical therapy, beginning with gentle, stretching exercises, often begins soon after surgery. Practice with the artificial limb may begin as soon as 10 to 14 days after surgery.

Ideally, the wound should fully heal in about four to eight weeks. But the physical and emotional adjustment to losing a limb can be a long process. Long-term recovery and rehabilitation will include:

* Exercises to improve muscle strength and control
* Activities to help restore the ability to carry out daily activities and promote independence
* Use of artificial limbs and assistive devices
* Emotional support, including counseling, to help with grief over the loss of the limb and adjustment to the new body image

Additional Information:

Prevention

Methods of preventing amputation, i.e. limb-sparing techniques, depend on the problems that might make amputation necessary. Chronic infections, often arising from diabetes or decubitus ulcers in bedridden patients, commonly lead to gangrene, which, when widespread, necessitates amputation.

There are two key challenges: many patients have impaired circulation in their extremities, and infections are difficult to cure in limbs with poor blood flow.

Crush injuries with extensive tissue damage and poor circulation can benefit from hyperbaric oxygen therapy (HBOT); the high level of oxygenation and revascularization speeds up recovery times and helps prevent infection.

A study found that the patented method called Circulator Boot achieved significant results in prevention of amputation in patients with diabetes and arteriosclerosis. Another study found it also effective for healing limb ulcers caused by peripheral vascular disease. The boot checks the heart rhythm and compresses the limb between heartbeats; the compression helps cure the wounds in the walls of veins and arteries, and helps to push the blood back to the heart.

For victims of trauma, advances in microsurgery in the 1970s have made replantations of severed body parts possible.

The establishment of laws, rules, and guidelines, and employment of modern equipment help protect people from traumatic amputations.

Prognosis

The individual may experience psychological trauma and emotional discomfort. The stump will remain an area of reduced mechanical stability. Limb loss can present significant or even drastic practical limitations.

A large proportion of amputees (50–80%) experience the phenomenon of phantom limbs; they feel body parts that are no longer there. These limbs can itch, ache, burn, feel tense, dry or wet, locked in or trapped or they can feel as if they are moving. Some scientists believe it has to do with a kind of neural map that the brain has of the body, which sends information to the rest of the brain about limbs regardless of their existence. Phantom sensations and phantom pain may also occur after the removal of body parts other than the limbs, e.g. after amputation of the breast, extraction of a tooth (phantom tooth pain) or removal of an eye (phantom eye syndrome).

A similar phenomenon is unexplained sensation in a body part unrelated to the amputated limb. It has been hypothesized that the portion of the brain responsible for processing stimulation from amputated limbs, being deprived of input, expands into the surrounding brain, such that an individual who has had an arm amputated will experience unexplained pressure or movement on his face or head.

In many cases, the phantom limb aids in adaptation to a prosthesis, as it permits the person to experience proprioception of the prosthetic limb. To support improved resistance or usability, comfort or healing, some type of stump socks may be worn instead of or as part of wearing a prosthesis.

Another side effect can be heterotopic ossification, especially when a bone injury is combined with a head injury. The brain signals the bone to grow instead of scar tissue to form, and nodules and other growth can interfere with prosthetics and sometimes require further operations. This type of injury has been especially common among soldiers wounded by improvised explosive devices in the Iraq War.

Due to technological advances in prosthetics, many amputees live active lives with little restriction. Organizations such as the Challenged Athletes Foundation have been developed to give amputees the opportunity to be involved in athletics and adaptive sports such as amputee soccer.

Nearly half of the individuals who have an amputation due to vascular disease will die within five years, usually secondary to extensive co-morbidities rather than to direct consequences of the amputation. This is higher than the five-year mortality rates for breast cancer, colon cancer, and prostate cancer. Of persons with diabetes who have a lower-extremity amputation, up to 55% will require amputation of the second leg within two to three years.

Amputation is surgery to remove all or part of a limb or extremity (outer limbs). Common types of amputation involve:

* Above-knee amputation, removing part of the thigh, knee, shin, foot and toes.
* Below-knee amputation, removing the lower leg, foot and toes.
* Arm amputation.
* Hand amputation.
* Finger amputation.
* Foot amputation, removing part of the foot.
* Toe amputation.

Why are amputations done?

Amputation can be necessary to keep an infection from spreading through your limbs and to manage pain. The most common reason for an amputation is a wound that does not heal. Often this can be from not having enough blood flow to that limb.

After a severe injury, such as a crushing injury, amputation may be necessary if the surgeon cannot repair your limb.

You also may need an amputation if you have:

* Cancerous tumors in the limb.
* Frostbite.
* Gangrene (tissue death).
* Neuroma, or thickening of nerve tissue.
* Peripheral arterial disease (PAD), or blockage of the arteries.
* Severe injury, such as from a car accident.
* Diabetes that leads to nonhealing or infected wounds or tissue death.




#2130 2024-04-25 00:10:52

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,467

Re: Miscellany

2132) Chemistry

Gist

Chemistry is a branch of natural science that deals principally with the properties of substances, the changes they undergo, and the natural laws that describe these changes.

Summary

Chemistry, the science that deals with the properties, composition, and structure of substances (defined as elements and compounds), the transformations they undergo, and the energy that is released or absorbed during these processes. Every substance, whether naturally occurring or artificially produced, consists of one or more of the hundred-odd species of atoms that have been identified as elements. Although these atoms, in turn, are composed of more elementary particles, they are the basic building blocks of chemical substances; there is no quantity of oxygen, mercury, or gold, for example, smaller than an atom of that substance. Chemistry, therefore, is concerned not with the subatomic domain but with the properties of atoms and the laws governing their combinations and how the knowledge of these properties can be used to achieve specific purposes.

The great challenge in chemistry is the development of a coherent explanation of the complex behaviour of materials, why they appear as they do, what gives them their enduring properties, and how interactions among different substances can bring about the formation of new substances and the destruction of old ones. From the earliest attempts to understand the material world in rational terms, chemists have struggled to develop theories of matter that satisfactorily explain both permanence and change. The ordered assembly of indestructible atoms into small and large molecules, or extended networks of intermingled atoms, is generally accepted as the basis of permanence, while the reorganization of atoms or molecules into different arrangements lies behind theories of change. Thus chemistry involves the study of the atomic composition and structural architecture of substances, as well as the varied interactions among substances that can lead to sudden, often violent reactions.

Chemistry also is concerned with the utilization of natural substances and the creation of artificial ones. Cooking, fermentation, glass making, and metallurgy are all chemical processes that date from the beginnings of civilization. Today, vinyl, Teflon, liquid crystals, semiconductors, and superconductors represent the fruits of chemical technology. The 20th century saw dramatic advances in the comprehension of the marvelous and complex chemistry of living organisms, and a molecular interpretation of health and disease holds great promise. Modern chemistry, aided by increasingly sophisticated instruments, studies materials as small as single atoms and as large and complex as DNA (deoxyribonucleic acid), which contains millions of atoms. New substances can even be designed to bear desired characteristics and then synthesized. The rate at which chemical knowledge continues to accumulate is remarkable. Over time more than 8,000,000 different chemical substances, both natural and artificial, have been characterized and produced. The number was less than 500,000 as recently as 1965.

Intimately interconnected with the intellectual challenges of chemistry are those associated with industry. In the mid-19th century the German chemist Justus von Liebig commented that the wealth of a nation could be gauged by the amount of sulfuric acid it produced. This acid, essential to many manufacturing processes, remains today the leading chemical product of industrialized countries. As Liebig recognized, a country that produces large amounts of sulfuric acid is one with a strong chemical industry and a strong economy as a whole. The production, distribution, and utilization of a wide range of chemical products is common to all highly developed nations. In fact, one can say that the “iron age” of civilization is being replaced by a “polymer age,” for in some countries the total volume of polymers now produced exceeds that of iron.

Details

Chemistry is the scientific study of the properties and behavior of matter. It is a physical science within the natural sciences that studies the chemical elements that make up matter and compounds made of atoms, molecules and ions: their composition, structure, properties, behavior and the changes they undergo during reactions with other substances. Chemistry also addresses the nature of chemical bonds in chemical compounds.

In the scope of its subject, chemistry occupies an intermediate position between physics and biology. It is sometimes called the central science because it provides a foundation for understanding both basic and applied scientific disciplines at a fundamental level. For example, chemistry explains aspects of plant growth (botany), the formation of igneous rocks (geology), how atmospheric ozone is formed and how environmental pollutants are degraded (ecology), the properties of the soil on the Moon (cosmochemistry), how medications work (pharmacology), and how to collect DNA evidence at a crime scene (forensics).

Chemistry has existed under various names since ancient times. It has evolved, and now chemistry encompasses various areas of specialisation, or subdisciplines, that continue to increase in number and interrelate to create further interdisciplinary fields of study. The applications of various fields of chemistry are used frequently for economic purposes in the chemical industry.

Etymology

The word chemistry comes from a modification during the Renaissance of the word alchemy, which referred to an earlier set of practices that encompassed elements of chemistry, metallurgy, philosophy, astrology, astronomy, mysticism, and medicine. Alchemy is often associated with the quest to turn lead or other base metals into gold, though alchemists were also interested in many of the questions of modern chemistry.

The modern word alchemy in turn is derived from the Arabic word al-kīmīā. This may have Egyptian origins, since al-kīmīā may in turn derive, via Ancient Greek, from Kemet, the ancient name of Egypt in the Egyptian language. Alternately, al-kīmīā may derive from a Greek word meaning 'cast together'.

Modern principles

The current model of atomic structure is the quantum mechanical model. Traditional chemistry starts with the study of elementary particles, atoms, molecules, substances, metals, crystals and other aggregates of matter. Matter can be studied in solid, liquid, gas and plasma states, in isolation or in combination. The interactions, reactions and transformations that are studied in chemistry are usually the result of interactions between atoms, leading to rearrangements of the chemical bonds which hold atoms together. Such behaviors are studied in a chemistry laboratory.

The chemistry laboratory stereotypically uses various forms of laboratory glassware. However, glassware is not central to chemistry, and a great deal of experimental (as well as applied and industrial) chemistry is done without it.

A chemical reaction is a transformation of some substances into one or more different substances. The basis of such a chemical transformation is the rearrangement of electrons in the chemical bonds between atoms. It can be symbolically depicted through a chemical equation, which usually involves atoms as subjects. The number of atoms on the left and the right in the equation for a chemical transformation is equal. (When the number of atoms on either side is unequal, the transformation is referred to as a nuclear reaction or radioactive decay.) The type of chemical reactions a substance may undergo and the energy changes that may accompany it are constrained by certain basic rules, known as chemical laws.
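The atom-conservation rule above can be illustrated with a small sketch: a balanced equation has the same count of each element on both sides. This is an illustrative helper of my own (the simple formula parser and the function names are assumptions, not standard chemistry software), handling plain formulas like "2H2O" without parentheses:

```python
from collections import Counter
import re

def atom_counts(formula: str) -> Counter:
    """Count atoms in a simple formula like 'H2O' or '2H2' (no parentheses)."""
    m = re.match(r"(\d*)(.*)", formula)
    coeff = int(m.group(1) or 1)          # leading stoichiometric coefficient
    counts = Counter()
    for symbol, num in re.findall(r"([A-Z][a-z]?)(\d*)", m.group(2)):
        counts[symbol] += coeff * int(num or 1)
    return counts

def is_balanced(left, right) -> bool:
    """True if both sides of the equation contain the same atoms."""
    total = lambda side: sum((atom_counts(f) for f in side), Counter())
    return total(left) == total(right)

# 2H2 + O2 -> 2H2O is balanced: 4 H and 2 O on each side
print(is_balanced(["2H2", "O2"], ["2H2O"]))   # prints True
# H2 + O2 -> H2O is not balanced
print(is_balanced(["H2", "O2"], ["H2O"]))     # prints False
```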

Energy and entropy considerations are invariably important in almost all chemical studies. Chemical substances are classified in terms of their structure, phase, as well as their chemical compositions. They can be analyzed using the tools of chemical analysis, e.g. spectroscopy and chromatography. Scientists engaged in chemical research are known as chemists. Most chemists specialize in one or more sub-disciplines. Several concepts are essential for the study of chemistry; some of them are:

Matter

In chemistry, matter is defined as anything that has rest mass and volume (it takes up space) and is made up of particles. The particles that make up matter have rest mass as well, though not all particles have rest mass; the photon, for example, does not. Matter can be a pure chemical substance or a mixture of substances.

Atom

The atom is the basic unit of chemistry. It consists of a dense core called the atomic nucleus surrounded by a space occupied by an electron cloud. The nucleus is made up of positively charged protons and uncharged neutrons (together called nucleons), while the electron cloud consists of negatively charged electrons which orbit the nucleus. In a neutral atom, the negatively charged electrons balance out the positive charge of the protons. The nucleus is dense; the mass of a nucleon is approximately 1,836 times that of an electron, yet the radius of an atom is about 10,000 times that of its nucleus.

The atom is also the smallest entity that can be envisaged to retain the chemical properties of the element, such as electronegativity, ionization potential, preferred oxidation state(s), coordination number, and preferred types of bonds to form (e.g., metallic, ionic, covalent).

Element

A chemical element is a pure substance composed of a single type of atom, characterized by the particular number of protons in the nuclei of its atoms, known as the atomic number and represented by the symbol Z. The mass number is the sum of the number of protons and neutrons in a nucleus. Although the nuclei of all atoms belonging to one element have the same atomic number, they may not necessarily have the same mass number; atoms of an element which have different mass numbers are known as isotopes. For example, all atoms with 6 protons in their nuclei are atoms of the chemical element carbon, but atoms of carbon may have mass numbers of 12 or 13.
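The relationship between atomic number and mass number can be made concrete with a one-line calculation: for a neutral atom, the neutron count is the mass number minus the atomic number. A minimal sketch using the carbon isotopes mentioned above (the helper function name is my own):

```python
def neutron_count(mass_number: int, atomic_number: int) -> int:
    """Neutrons in a nucleus = mass number (protons + neutrons) - protons."""
    return mass_number - atomic_number

# Carbon has atomic number Z = 6; its isotopes differ only in neutron count.
for a in (12, 13):
    print(f"carbon-{a}: 6 protons, {neutron_count(a, 6)} neutrons")
# prints:
# carbon-12: 6 protons, 6 neutrons
# carbon-13: 6 protons, 7 neutrons
```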

The standard presentation of the chemical elements is in the periodic table, which orders elements by atomic number. The periodic table is arranged in groups, or columns, and periods, or rows. The periodic table is useful in identifying periodic trends.

Compound

A compound is a pure chemical substance composed of more than one element. The properties of a compound bear little similarity to those of its elements. The standard nomenclature of compounds is set by the International Union of Pure and Applied Chemistry (IUPAC). Organic compounds are named according to the organic nomenclature system, and inorganic compounds according to the inorganic nomenclature system. When a compound has more than one component, the components are divided into two classes, the electropositive and the electronegative. In addition, the Chemical Abstracts Service has devised a method to index chemical substances; in this scheme each substance is identifiable by a number known as its CAS registry number.

Molecule

A molecule is the smallest indivisible portion of a pure chemical substance that has its unique set of chemical properties, that is, its potential to undergo a certain set of chemical reactions with other substances. However, this definition only works well for substances that are composed of molecules, which is not true of many substances (see below). Molecules are typically a set of atoms bound together by covalent bonds, such that the structure is electrically neutral and all valence electrons are paired with other electrons either in bonds or in lone pairs.

Thus, molecules exist as electrically neutral units, unlike ions. When this rule is broken, giving the "molecule" a charge, the result is sometimes named a molecular ion or a polyatomic ion. However, the discrete and separate nature of the molecular concept usually requires that molecular ions be present only in well-separated form, such as a directed beam in a vacuum in a mass spectrometer. Charged polyatomic collections residing in solids (for example, common sulfate or nitrate ions) are generally not considered "molecules" in chemistry. Some molecules contain one or more unpaired electrons, creating radicals. Most radicals are comparatively reactive, but some, such as nitric oxide (NO), can be stable.

The "inert" or noble gas elements (helium, neon, argon, krypton, xenon and radon) are composed of lone atoms as their smallest discrete unit, but the other isolated chemical elements consist of either molecules or networks of atoms bonded to each other in some way. Identifiable molecules compose familiar substances such as water, air, and many organic compounds like alcohol, sugar, gasoline, and the various pharmaceuticals.

However, not all substances or chemical compounds consist of discrete molecules, and indeed most of the solid substances that make up the solid crust, mantle, and core of the Earth are chemical compounds without molecules. These other types of substances, such as ionic compounds and network solids, are organized in such a way as to lack the existence of identifiable molecules per se. Instead, these substances are discussed in terms of formula units or unit cells as the smallest repeating structure within the substance. Examples of such substances are mineral salts (such as table salt), network solids like diamond and graphite (both forms of carbon), metals, and familiar silica and silicate minerals such as quartz and granite.

One of the main characteristics of a molecule is its geometry, often called its structure. While the structure of diatomic, triatomic, or tetra-atomic molecules may be trivial (linear, angular, pyramidal, etc.), the structure of polyatomic molecules, constituted of more than six atoms (of several elements), can be crucial to their chemical nature.

Substance and mixture

A chemical substance is a kind of matter with a definite composition and set of properties. A collection of substances is called a mixture. Examples of mixtures are air and alloys.

Mole and amount of substance

The mole is a unit of measurement that denotes an amount of substance (also called chemical amount). One mole is defined to contain exactly 6.02214076 × 10²³ particles (atoms, molecules, ions, or electrons), where the number of particles per mole is known as the Avogadro constant. Molar concentration is the amount of a particular substance per volume of solution, and is commonly reported in mol/dm³.
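As a minimal sketch (function names here are illustrative, not from the source), the mole relationships above translate directly into code:

```python
# The Avogadro constant: exact number of particles per mole (2019 SI definition).
AVOGADRO = 6.02214076e23

def particles_in(moles: float) -> float:
    """Particles (atoms, molecules, ions, or electrons) in an amount of substance."""
    return moles * AVOGADRO

def molar_concentration(moles: float, volume_dm3: float) -> float:
    """Amount of substance per volume of solution, in mol/dm^3."""
    return moles / volume_dm3

# 0.5 mol dissolved in 2 dm^3 of solution:
half_mole_in_2dm3 = molar_concentration(0.5, 2.0)  # -> 0.25 mol/dm^3
```

The same two functions cover most routine mole arithmetic: scale by the Avogadro constant to count particles, divide by volume to get concentration.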

Additional Information

Chemistry is the scientific study of matter, its properties, composition, and interactions. It is often referred to as the central science because it connects and bridges the other natural sciences, such as physics and biology. Understanding chemistry is crucial for comprehending the world around us, from the air we breathe to the food we eat and the materials we use in everyday life.

Chemistry has many sub-disciplines, such as analytical chemistry, physical chemistry, biochemistry, and more. Chemistry plays a crucial role in various industries, including pharmaceuticals, materials science, environmental science, and energy production, making it a cornerstone of modern science and technology.

Chemistry is the area of science devoted to studying the nature, composition, and properties of the elements and compounds that form matter, as well as the reactions by which they form new substances. Chemistry has been further categorized based on particular areas of study.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.


#2131 2024-04-26 00:08:30

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,467

Re: Miscellany

2133) Metallurgy

Gist

Metallurgy is a domain of materials science and engineering that studies the physical and chemical behavior of metallic elements, their inter-metallic compounds, and their mixtures, which are known as alloys.

Summary

Metallurgy is a domain of materials science and engineering that studies the physical and chemical behavior of metallic elements, their inter-metallic compounds, and their mixtures, which are known as alloys.

Metallurgy encompasses both the science and the technology of metals, including the production of metals and the engineering of metal components used in products for both consumers and manufacturers. Metallurgy is distinct from the craft of metalworking. Metalworking relies on metallurgy in a similar manner to how medicine relies on medical science for technical advancement. A specialist practitioner of metallurgy is known as a metallurgist.

The science of metallurgy is further subdivided into two broad categories: chemical metallurgy and physical metallurgy. Chemical metallurgy is chiefly concerned with the reduction and oxidation of metals, and the chemical performance of metals. Subjects of study in chemical metallurgy include mineral processing, the extraction of metals, thermodynamics, electrochemistry, and chemical degradation (corrosion). In contrast, physical metallurgy focuses on the mechanical properties of metals, the physical properties of metals, and the physical performance of metals. Topics studied in physical metallurgy include crystallography, material characterization, mechanical metallurgy, phase transformations, and failure mechanisms.

Historically, metallurgy has predominantly focused on the production of metals. Metal production begins with the processing of ores to extract the metal and includes the mixture of metals to make alloys. Metal alloys are often a blend of at least two different metallic elements. However, non-metallic elements are often added to alloys in order to achieve properties suitable for an application. The study of metal production is subdivided into ferrous metallurgy (also known as black metallurgy) and non-ferrous metallurgy, also known as colored metallurgy.

Ferrous metallurgy involves processes and alloys based on iron, while non-ferrous metallurgy involves processes and alloys based on other metals. The production of ferrous metals accounts for 95% of world metal production.

Modern metallurgists work in both emerging and traditional areas as part of an interdisciplinary team alongside material scientists and other engineers. Some traditional areas include mineral processing, metal production, heat treatment, failure analysis, and the joining of metals (including welding, brazing, and soldering). Emerging areas for metallurgists include nanotechnology, superconductors, composites, biomedical materials, electronic materials (semiconductors) and surface engineering. Many applications, practices, and devices associated or involved in metallurgy were established in ancient India and China, such as the innovation of wootz steel, bronze, the blast furnace, cast iron, hydraulic-powered trip hammers, and double-acting piston bellows.

Details

Metallurgy is the art and science of extracting metals from their ores and modifying the metals for use. Metallurgy customarily refers to commercial as opposed to laboratory methods. It also concerns the chemical, physical, and atomic properties and structures of metals and the principles whereby metals are combined to form alloys.

History of metallurgy

The present-day use of metals is the culmination of a long path of development extending over approximately 6,500 years. It is generally agreed that the first known metals were gold, silver, and copper, which occurred in the native or metallic state, of which the earliest were in all probability nuggets of gold found in the sands and gravels of riverbeds. Such native metals became known and were appreciated for their ornamental and utilitarian values during the latter part of the Stone Age.

Earliest development

Gold can be agglomerated into larger pieces by cold hammering, but native copper cannot, and an essential step toward the Metal Age was the discovery that metals such as copper could be fashioned into shapes by melting and casting in molds; among the earliest known products of this type are copper axes cast in the Balkans in the 4th millennium BCE. Another step was the discovery that metals could be recovered from metal-bearing minerals. These had been collected and could be distinguished on the basis of colour, texture, weight, and flame colour and smell when heated. The notably greater yield obtained by heating native copper with associated oxide minerals may have led to the smelting process, since these oxides are easily reduced to metal in a charcoal bed at temperatures in excess of 700 °C (1,300 °F), as the reductant, carbon monoxide, becomes increasingly stable. In order to effect the agglomeration and separation of melted or smelted copper from its associated minerals, it was necessary to introduce iron oxide as a flux. This further step forward can be attributed to the presence of iron oxide gossan minerals in the weathered upper zones of copper sulfide deposits.

Bronze

In many regions, copper-arsenic alloys, of superior properties to copper in both cast and wrought form, were produced in the next period. This may have been accidental at first, owing to the similarity in colour and flame colour between the bright green copper carbonate mineral malachite and the weathered products of such copper-arsenic sulfide minerals as enargite, and it may have been followed later by the purposeful selection of arsenic compounds based on their garlic odour when heated.

Arsenic contents varied from 1 to 7 percent, with up to 3 percent tin. Essentially arsenic-free copper alloys with higher tin content—in other words, true bronze—seem to have appeared between 3000 and 2500 BCE, beginning in the Tigris-Euphrates delta. The discovery of the value of tin may have occurred through the use of stannite, a mixed sulfide of copper, iron, and tin, although this mineral is not as widely available as the principal tin mineral, cassiterite, which must have been the eventual source of the metal. Cassiterite is strikingly dense and occurs as pebbles in alluvial deposits together with arsenic and gold; it also occurs to a degree in the iron oxide gossans mentioned above.

While there may have been some independent development of bronze in varying localities, it is most likely that the bronze culture spread through trade and the migration of peoples from the Middle East to Egypt, Europe, and possibly China. In many civilizations the production of copper, arsenical copper, and tin bronze continued together for some time. The eventual disappearance of arsenical copper is difficult to explain. Production may have been based on minerals that were not widely available and became scarce, but the relative scarcity of tin minerals did not prevent a substantial trade in that metal over considerable distances. It may be that tin bronzes were eventually preferred owing to the chance of contracting arsenic poisoning from fumes produced by the oxidation of arsenic-containing minerals.

As the weathered copper ores in given localities were worked out, the harder sulfide ores beneath were mined and smelted. The minerals involved, such as chalcopyrite, a copper-iron sulfide, needed an oxidizing roast to remove sulfur as sulfur dioxide and yield copper oxide. This not only required greater metallurgical skill but also oxidized the intimately associated iron, which, combined with the use of iron oxide fluxes and the stronger reducing conditions produced by improved smelting furnaces, led to higher iron contents in the bronze.

Iron

It is not possible to mark a sharp division between the Bronze Age and the Iron Age. Small pieces of iron would have been produced in copper smelting furnaces as iron oxide fluxes and iron-bearing copper sulfide ores were used. In addition, higher furnace temperatures would have created more strongly reducing conditions (that is to say, a higher carbon monoxide content in the furnace gases). An early piece of iron from a trackway in the province of Drenthe, Netherlands, has been dated to 1350 BCE, a date normally taken as the Middle Bronze Age for this area. In Anatolia, on the other hand, iron was in use as early as 2000 BCE. There are also occasional references to iron in even earlier periods, but this material was of meteoric origin.

Once a relationship had been established between the new metal found in copper smelts and the ore added as flux, the operation of furnaces for the production of iron alone naturally followed. Certainly, by 1400 BCE in Anatolia, iron was assuming considerable importance, and by 1200–1000 BCE it was being fashioned on quite a large scale into weapons, initially dagger blades. For this reason, 1200 BCE has been taken as the beginning of the Iron Age. Evidence from excavations indicates that the art of iron making originated in the mountainous country to the south of the Black Sea, an area dominated by the Hittites. Later the art apparently spread to the Philistines, for crude furnaces dating from 1200 BCE have been unearthed at Gerar, together with a number of iron objects.

Smelting of iron oxide with charcoal demanded a high temperature, and, since the melting temperature of iron at 1,540 °C (2,800 °F) was not attainable then, the product was merely a spongy mass of pasty globules of metal intermingled with a semiliquid slag. This product, later known as bloom, was hardly usable as it stood, but repeated reheating and hot hammering eliminated much of the slag, creating wrought iron, a much better product.

The properties of iron are much affected by the presence of small amounts of carbon, with large increases in strength associated with contents of less than 0.5 percent. At the temperatures then attainable—about 1,200 °C (2,200 °F)—reduction by charcoal produced an almost pure iron, which was soft and of limited use for weapons and tools, but when the ratio of fuel to ore was increased and furnace drafting improved with the invention of better bellows, more carbon was absorbed by the iron. This resulted in blooms and iron products with a range of carbon contents, making it difficult to determine the period in which iron may have been purposely strengthened by carburizing, or reheating the metal in contact with excess charcoal.

Carbon-containing iron had the further great advantage that, unlike bronze and carbon-free iron, it could be made still harder by quenching—i.e., rapid cooling by immersion in water. There is no evidence for the use of this hardening process during the early Iron Age, so that it must have been either unknown then or not considered advantageous, in that quenching renders iron very brittle and has to be followed by tempering, or reheating at a lower temperature, to restore toughness. What seems to have been established early on was a practice of repeated cold forging and annealing at 600–700 °C (1,100–1,300 °F), a temperature naturally achieved in a simple fire. This practice is common in parts of Africa even today.

By 1000 BCE iron was beginning to be known in central Europe. Its use spread slowly westward. Iron making was fairly widespread in Great Britain at the time of the Roman invasion in 55 BCE. In Asia iron was also known in ancient times, in China by about 700 BCE.

Brass

While some zinc appears in bronzes dating from the Bronze Age, this was almost certainly an accidental inclusion, although it may foreshadow the complex ternary alloys of the early Iron Age, in which substantial amounts of zinc as well as tin may be found. Brass, as an alloy of copper and zinc without tin, did not appear in Egypt until about 30 BCE, but after this it was rapidly adopted throughout the Roman world, for example, for currency. It was made by the calamine process, in which zinc carbonate or zinc oxide was added to copper and melted under a charcoal cover in order to produce reducing conditions. The general establishment of a brass industry was one of the important metallurgical contributions made by the Romans.

Precious metals

Bronze, iron, and brass were, then, the metallic materials on which successive peoples built their civilizations and of which they made their implements for both war and peace. In addition, by 500 BCE, rich lead-bearing silver mines had opened in Greece. Reaching depths of several hundred metres, these mines were vented by drafts provided by fires lit at the bottom of the shafts. Ores were hand-sorted, crushed, and washed with streams of water to separate valuable minerals from the barren, lighter materials. Because these minerals were principally sulfides, they were roasted to form oxides and were then smelted to recover a lead-silver alloy.

Lead was removed from the silver by cupellation, a process of great antiquity in which the alloy was melted in a shallow porous clay or bone-ash receptacle called a cupel. A stream of air over the molten mass preferentially oxidized the lead. Its oxide was removed partially by skimming the molten surface; the remainder was absorbed into the porous cupel. Silver metal and any gold were retained on the cupel. The lead from the skimmings and discarded cupels was recovered as metal upon heating with charcoal.

Native gold itself often contained quite considerable quantities of silver. These silver-gold alloys, known as electrum, may be separated in a number of ways, but presumably the earliest was by heating in a crucible with common salt. In time and with repetitive treatments, the silver was converted into silver chloride, which passed into the molten slag, leaving a purified gold. Cupellation was also employed to remove from the gold such contaminants as copper, tin, and lead. Gold, silver, and lead were used for artistic and religious purposes, personal adornment, household utensils, and equipment for the chase.

From 500 BCE to 1500 CE

In the thousand years between 500 BCE and 500 CE, a vast number of discoveries of significance to the growth of metallurgy were made. The Greek mathematician and inventor Archimedes, for example, demonstrated that the purity of gold could be measured by determining its weight and the quantity of water displaced upon immersion—that is, by determining its density. In the pre-Christian portion of the period, the first important steel production was started in India, using a process already known to ancient Egyptians. Wootz steel, as it was called, was prepared as sponge (porous) iron in a unit not unlike a bloomery. The product was hammered while hot to expel slag, broken up, then sealed with wood chips in clay containers and heated until the pieces of iron absorbed carbon and melted, converting it to steel of homogeneous composition containing 1 to 1.6 percent carbon. The steel pieces could then be heated and forged to bars for later use in fashioning articles, such as the famous Damascus swords made by medieval Arab armourers.
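Archimedes' density test lends itself to a small worked example. The sketch below is illustrative: the reference density of gold (about 19.3 g/cm³) is a standard handbook value, not taken from the source, and the figures are invented for the demonstration:

```python
# Archimedes' test: density = mass / volume of water displaced.
GOLD_DENSITY_G_PER_CM3 = 19.3  # approximate handbook value, for illustration

def density(mass_g: float, displaced_water_cm3: float) -> float:
    """Density in g/cm^3 from mass and volume of water displaced on immersion."""
    return mass_g / displaced_water_cm3

# A 193 g object displacing 10 cm^3 of water matches gold's density...
pure = density(193.0, 10.0)     # -> 19.3
# ...while the same mass displacing 14 cm^3 is too light to be pure gold.
suspect = density(193.0, 14.0)  # -> about 13.8, indicating a debased alloy
```

Because every admixed metal (silver, copper, lead) is less dense than gold, any displaced volume larger than expected for the weight exposes adulteration.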

Arsenic, zinc, antimony, and nickel may well have been known from an early date but only in the alloy state. By 100 BCE mercury was known and was produced by heating the sulfide mineral cinnabar and condensing the vapours. Its property of amalgamating (mixing or alloying) with various metals was employed for their recovery and refining. Lead was beaten into sheets and pipes, the pipes being used in early water systems. The metal tin was available, and the Romans had learned to use it to line food containers. Although the Romans made no extraordinary metallurgical discoveries, they were responsible, in addition to the establishment of the brass industry, for contributing toward improved organization and efficient administration in mining.

Beginning about the 6th century, and for the next thousand years, the most meaningful developments in metallurgy centred on iron making. Great Britain, where iron ore was plentiful, was an important iron-making region. Iron weapons, agricultural implements, domestic articles, and even personal adornments were made. Fine-quality cutlery was made near Sheffield. Monasteries were often centres of learning of the arts of metalworking. Monks became well known for their iron making and bell founding, the products made either being utilized in the monasteries, disposed of locally, or sold to merchants for shipment to more distant markets. In 1408 the bishop of Durham established the first water-powered bloomery in Britain, with the power apparently operating the bellows. Once power of this sort became available, it could be applied to a range of operations and enable the hammering of larger blooms.

In Spain, another iron-making region, the Catalan forge had been invented, and its use later spread to other areas. A hearth type of furnace, it was built of stone and was charged with iron ore, flux, and charcoal. The charcoal was kept ignited with air from a bellows blown through a bottom nozzle, or tuyere (see figure). The bloom that slowly collected at the bottom was removed and upon frequent reheating and forging was hammered into useful shapes. By the 14th century the furnace was greatly enlarged in height and capacity.

If the fuel-to-ore ratio in such a furnace was kept high, and if the furnace reached temperatures sufficiently hot for substantial amounts of carbon to be absorbed into the iron, then the melting point of the metal would be lowered and the bloom would melt. This would dissolve even more carbon, producing a liquid cast iron of up to 4 percent carbon and with a relatively low melting temperature of 1,150 °C (2,100 °F). The cast iron would collect in the base of the furnace, which technically would be a blast furnace rather than a bloomery in that the iron would be withdrawn as a liquid rather than a solid lump.

While the Iron Age peoples of Anatolia and Europe on occasion may have accidentally made cast iron, which is chemically the same as blast-furnace iron, the Chinese were the first to realize its advantages. Although brittle and lacking the strength, toughness, and workability of steel, it was useful for making cast bowls and other vessels. In fact, the Chinese, whose Iron Age began about 500 BCE, appear to have learned to oxidize the carbon from cast iron in order to produce steel or wrought iron indirectly, rather than through the direct method of starting from low-carbon iron.

After 1500

During the 16th century, metallurgical knowledge was recorded and made available. Two books were especially influential. One, by the Italian Vannoccio Biringuccio, was entitled De la pirotechnia (Eng. trans., The Pirotechnia of Vannoccio Biringuccio, 1943). The other, by the German Georgius Agricola, was entitled De re metallica. Biringuccio was essentially a metalworker, and his book dealt with smelting, refining, and assay methods (methods for determining the metal content of ores) and covered metal casting, molding, core making, and the production of such commodities as cannons and cast-iron cannonballs. His was the first methodical description of foundry practice.

Agricola, on the other hand, was a miner and an extractive metallurgist; his book considered prospecting and surveying in addition to smelting, refining, and assay methods. He also described the processes used for crushing and concentrating the ore and then, in some detail, the methods of assaying to determine whether ores were worth mining and extracting. Some of the metallurgical practices he described are retained in principle today.

Ferrous metals

From 1500 to the 20th century, metallurgical development was still largely concerned with improved technology in the manufacture of iron and steel. In England, the gradual exhaustion of timber led first to prohibitions on cutting of wood for charcoal and eventually to the introduction of coke, derived from coal, as a more efficient fuel. Thereafter, the iron industry expanded rapidly in Great Britain, which became the greatest iron producer in the world. The crucible process for making steel, introduced in England in 1740, by which bar iron and added materials were placed in clay crucibles heated by coke fires, resulted in the first reliable steel made by a melting process.

One difficulty with the bloomery process for the production of soft bar iron was that, unless the temperature was kept low (and the output therefore small), it was difficult to keep the carbon content low enough so that the metal remained ductile. This difficulty was overcome by melting high-carbon pig iron from the blast furnace in the puddling process, invented in Great Britain in 1784. In it, melting was accomplished by drawing hot gases over a charge of pig iron and iron ore held on the furnace hearth. During its manufacture the product was stirred with iron rabbles (rakes), and, as it became pasty with loss of carbon, it was worked into balls, which were subsequently forged or rolled to a useful shape. The product, which came to be known as wrought iron, was low in elements that contributed to the brittleness of pig iron and contained enmeshed slag particles that became elongated fibres when the metal was forged. Later, the use of a rolling mill equipped with grooved rolls to make wrought-iron bars was introduced.

The most important development of the 19th century was the large-scale production of cheap steel. Prior to about 1850, the production of wrought iron by puddling and of steel by crucible melting had been conducted in small-scale units without significant mechanization. The first change was the development of the open-hearth furnace by William and Friedrich Siemens in Britain and by Pierre and Émile Martin in France. Employing the regenerative principle, in which outgoing combusted gases are used to heat the next cycle of fuel gas and air, this enabled high temperatures to be achieved while saving on fuel. Pig iron could then be taken through to molten iron or low-carbon steel without solidification, scrap could be added and melted, and iron ore could be melted into the slag above the metal to give a relatively rapid oxidation of carbon and silicon—all on a much enlarged scale. Another major advance was Henry Bessemer’s process, patented in 1855 and first operated in 1856, in which air was blown through molten pig iron from tuyeres set into the bottom of a pear-shaped vessel called a converter. Heat released by the oxidation of dissolved silicon, manganese, and carbon was enough to raise the temperature above the melting point of the refined metal (which rose as the carbon content was lowered) and thereby maintain it in the liquid state. Very soon Bessemer had tilting converters producing 5 tons in a one-hour heat, compared with four to six hours for 50 kilograms (110 pounds) of crucible steel and two hours for 250 kilograms of puddled iron.
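Put in common units, the throughput figures quoted above make Bessemer's advantage vivid. The sketch below assumes a ton of roughly 1,000 kg and takes the midpoint (five hours) of the four-to-six-hour crucible heat; both assumptions are mine, not the source's:

```python
# Production rates implied by the figures in the text, in kg per hour.
bessemer_kg_per_h = 5000 / 1.0   # 5 tons (~5,000 kg) in a one-hour heat
crucible_kg_per_h = 50 / 5.0     # 50 kg per heat, midpoint of 4-6 hours
puddled_kg_per_h = 250 / 2.0     # 250 kg of puddled iron in two hours

# Bessemer converter vs. crucible steel, as a throughput ratio:
speedup_vs_crucible = bessemer_kg_per_h / crucible_kg_per_h  # -> 500.0
```

On these rough figures the converter out-produced crucible melting by two to three orders of magnitude, which is why it made cheap bulk steel possible.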

Neither the open-hearth furnace nor the Bessemer converter could remove phosphorus from the metal, so that low-phosphorus raw materials had to be used. This restricted their use in areas where phosphoric ores, such as those of the Minette range in Lorraine, were a main European source of iron. The problem was solved by Sidney Gilchrist Thomas, who demonstrated in 1876 that a basic furnace lining consisting of calcined dolomite, instead of an acidic lining of siliceous materials, made it possible to use a high-lime slag to dissolve the phosphates formed by the oxidation of phosphorus in the pig iron. This principle was eventually applied to both open-hearth furnaces and Bessemer converters.

As steel was now available at a fraction of its former cost, it saw an enormously increased use for engineering and construction. Soon after the end of the century it replaced wrought iron in virtually every field. Then, with the availability of electric power, electric-arc furnaces were introduced for making special and high-alloy steels. The next significant stage was the introduction of cheap oxygen, made possible by the invention of the Linde-Frankel cycle for the liquefaction and fractional distillation of air. The Linz-Donawitz process, invented in Austria shortly after World War II, used oxygen supplied as a gas from a tonnage oxygen plant, blowing it at supersonic velocity into the top of the molten iron in a converter vessel. As the ultimate development of the Bessemer/Thomas process, oxygen blowing became universally employed in bulk steel production.

Light metals

Another important development of the late 19th century was the separation from their ores, on a substantial scale, of aluminum and magnesium. In the earlier part of the century, several scientists had made small quantities of these light metals, but the most successful was Henri-Étienne Sainte-Claire Deville, who by 1855 had developed a method by which cryolite, a double fluoride of aluminum and sodium, was reduced by sodium metal to aluminum and sodium fluoride. The process was very expensive, but cost was greatly reduced when the American chemist Hamilton Young Castner developed an electrolytic cell for producing cheaper sodium in 1886. At the same time, however, Charles M. Hall in the United States and Paul-Louis-Toussaint Héroult in France announced their essentially identical processes for aluminum extraction, which were also based on electrolysis. Use of the Hall-Héroult process on an industrial scale depended on the replacement of storage batteries by rotary power generators; it remains essentially unchanged to this day.

Welding

One of the most significant changes in the technology of metals fabrication has been the introduction of fusion welding during the 20th century. Before this, the main joining processes were riveting and forge welding. Both had limitations of scale, although they could be used to erect substantial structures. In 1895 Henry-Louis Le Chatelier stated that the temperature in an oxyacetylene flame was 3,500 °C (6,300 °F), some 1,000 °C higher than the oxyhydrogen flame already in use on a small scale for brazing and welding. The first practical oxyacetylene torch, drawing acetylene from cylinders containing acetylene dissolved in acetone, was produced in 1901. With the availability of oxygen at even lower cost, oxygen cutting and oxyacetylene welding became established procedures for the fabrication of structural steel components.

The metal in a join can also be melted by an electric arc, and a process using a carbon rod as the negative electrode and the workpiece as the positive first became of commercial interest about 1902. Striking an arc from a coated metal electrode, which melts into the join, was introduced in 1910. Although it was not widely used until some 20 years later, in its various forms it is now responsible for the bulk of fusion welds.

Metallography

The 20th century has seen metallurgy change progressively, from an art or craft to a scientific discipline and then to part of the wider discipline of materials science. In extractive metallurgy, there has been the application of chemical thermodynamics, kinetics, and chemical engineering, which has enabled a better understanding, control, and improvement of existing processes and the generation of new ones. In physical metallurgy, the study of relationships between macrostructure, microstructure, and atomic structure on the one hand and physical and mechanical properties on the other has broadened from metals to other materials such as ceramics, polymers, and composites.

This greater scientific understanding has come largely from a continuous improvement in microscopic techniques for metallography, the examination of metal structure. The first true metallographer was Henry Clifton Sorby of Sheffield, England, who in the 1860s applied light microscopy to the polished surfaces of materials such as rocks and meteorites. Sorby eventually succeeded in making photomicrographic records, and by 1885 the value of metallography was appreciated throughout Europe, with particular attention being paid to the structure of steel. For example, there was eventual acceptance, based on micrographic evidence and confirmed by the introduction of X-ray diffraction by William Henry and William Lawrence Bragg in 1913, of the allotropy of iron and its relationship to the hardening of steel. During subsequent years there were advances in the atomic theory of solids; this led to the concept that, in nonplastic materials such as glass, fracture takes place by the propagation of preexisting cracklike defects and that, in metals, deformation takes place by the movement of dislocations, or defects in the atomic arrangement, through the crystalline matrix. Proof of these concepts came with the invention and development of the electron microscope; even more powerful field ion microscopes and high-resolution electron microscopes now make it possible to detect the position of individual atoms.

Another example of the development of physical metallurgy is a discovery that revolutionized the use of aluminum in the 20th century. Originally, most aluminum was used in cast alloys, but the discovery of age hardening by Alfred Wilm in Berlin about 1906 yielded a material that was twice as strong with only a small change in weight. In Wilm’s process, a solute such as magnesium or copper is trapped in supersaturated solid solution, without being allowed to precipitate out, by quenching the aluminum from a higher temperature rather than slowly cooling it. The relatively soft aluminum alloy that results can be mechanically formed, but, when left at room temperature or heated at low temperatures, it hardens and strengthens. With copper as the solute, this type of material came to be known by the trade name Duralumin. The advances in metallography described above eventually provided the understanding that age hardening is caused by the dispersion of very fine precipitates from the supersaturated solid solution; this restricts the movement of the dislocations that are essential to crystal deformation and thus raises the strength of the metal. The principles of precipitation hardening have been applied to the strengthening of a large number of alloys.

Additional Information

Metallurgy plays a pivotal role in many industries that depend on manufacturing, such as aviation, public transportation and electronics.

From the production of mighty machinery and sturdy construction materials to the creation of intricate electrical systems, metals take center stage. With their exceptional mechanical strength, remarkable thermal conductivity and impressive electrical properties, metals are the lifeblood of technological advancements.

Through skilled hands, metallurgy unlocks the potential of metals, shaping them into essential components that power our modern world. Metallurgists extract, refine and meticulously craft to meet the ever-evolving demands of industries, driving innovation and propelling us into the future.

What Is Metallurgy?

Metallurgy is the study and manipulation of metals and their properties. It is a field of science that focuses on understanding how metals behave and finding ways to improve their properties for different applications. Metallurgists work with widely used metals — like iron, aluminum, copper and steel — in various industries.

One important part of the field is extracting metals from their natural sources, such as ores. An ore is a naturally occurring rock or mineral that contains a valuable material, such as metal or gemstones, which can be extracted and processed for various industrial purposes.

Once extraction is complete, ores can be purified to remove impurities and improve their quality. Think of purification in metallurgy like filtering water. Just as you remove impurities and contaminants from water to make it clean and safe to drink, metallurgists use different methods to remove unwanted substances from metals or ores, making them pure and of higher quality for their intended use.

Metallurgists also study the structure of metals at a microscopic level. They examine how atoms are arranged in metals and how this arrangement affects their properties, like strength, hardness, and conductivity. By understanding the structure, metallurgists can modify metals through processes like heating and cooling, known as heat treatment, to improve their properties.

Metallurgists develop new alloys by combining different metals or adding other elements. Think of it as mixing different paint colors to create a vibrant masterpiece that is stronger, more durable, or corrosion-resistant. Stainless steel, for example, is an alloy that combines iron's strength with chromium's corrosion resistance, making it perfect for shiny kitchen appliances and sturdy construction materials. It's like having the best of both worlds in one metal combo!



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Online

#2132 2024-04-27 00:08:27

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,467

Re: Miscellany

2134) Stoichiometry

Gist

Stoichiometry, in chemistry, is the determination of the proportions in which elements or compounds react with one another. The rules followed in the determination of stoichiometric relationships are based on the laws of conservation of mass and energy and the law of combining weights or volumes.

Summary

Stoichiometry is based on the law of conservation of mass. This states that, in a closed system (one that no matter enters or leaves), the mass of the products is the same as the mass of the reactants. Stoichiometry is used to balance reactions so that they obey this law. It is also used to calculate the mass of products and/or reactants.

In order for an equation to be balanced, the number of atoms of each element needs to be equal on the left (reactants) and right (products) sides of the equation. We balance equations by using stoichiometric coefficients.

* Stoichiometry is the mathematical relationship between products and reactants in a chemical reaction.

* Stoichiometric coefficients are the numbers before an element/compound which indicate the number of moles present. They show the ratio between reactants and products. They are used to balance equations.

* Stoichiometry can be used to calculate yield by utilizing the ratio between reactants and products. This same concept is used to calculate needed reactant amounts.

* The limiting reactant is the reactant that is completely consumed in the reaction. Once this reactant is fully consumed, it stops the reaction and therefore limits the product made. It can be determined by calculating the yield for all reactants.

* For gas reactions, the ideal gas law must be used to calculate yield.

* The equation for the ideal gas law is PV = nRT  where P=pressure, V=volume, n=moles, R=ideal gas constant, and T=temperature.
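As a minimal sketch (the reactant amounts are hypothetical, not from the text above), the limiting-reactant rule from the list can be expressed in a few lines of Python:

```python
# Limiting reactant for CH4 + 2 O2 -> CO2 + 2 H2O:
# the reactant with the smallest (moles available / coefficient) ratio
# runs out first and limits how much product can form.
COEFF = {"CH4": 1, "O2": 2}

def limiting_reactant(moles_available):
    """Return the reactant that is consumed first, given moles on hand."""
    return min(moles_available, key=lambda r: moles_available[r] / COEFF[r])

# 1.0 mol of CH4 would need 2.0 mol of O2; only 1.5 mol is on hand.
print(limiting_reactant({"CH4": 1.0, "O2": 1.5}))  # O2
```

Dividing each available amount by its coefficient normalizes the comparison, so the check works for any balanced equation once the coefficients are filled in.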

Details

Stoichiometry is the relationship between the weights of reactants and products before, during, and following chemical reactions.

Stoichiometry is founded on the law of conservation of mass where the total mass of the reactants equals the total mass of the products, leading to the insight that the relations among quantities of reactants and products typically form a ratio of positive integers. This means that if the amounts of the separate reactants are known, then the amount of the product can be calculated. Conversely, if one reactant has a known quantity and the quantity of the products can be empirically determined, then the amount of the other reactants can also be calculated.

This is illustrated by the following balanced equation:

CH4 + 2 O2 → CO2 + 2 H2O

Here, one molecule of methane reacts with two molecules of oxygen gas to yield one molecule of carbon dioxide and two molecules of water. This particular chemical equation is an example of complete combustion. Stoichiometry measures these quantitative relationships, and is used to determine the amount of products and reactants that are produced or needed in a given reaction. Describing the quantitative relationships among substances as they participate in chemical reactions is known as reaction stoichiometry. In the example above, reaction stoichiometry measures the relationship between the quantities of methane and oxygen that react to form carbon dioxide and water.
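The mole ratios above translate directly into masses. A short Python sketch, assuming approximate molar masses in g/mol, computes the product masses for a given mass of methane (with oxygen in excess):

```python
# Reaction stoichiometry for CH4 + 2 O2 -> CO2 + 2 H2O.
MOLAR_MASS = {"CH4": 16.04, "O2": 32.00, "CO2": 44.01, "H2O": 18.02}
COEFF = {"CH4": 1, "O2": 2, "CO2": 1, "H2O": 2}

def product_mass(grams_ch4, product):
    """Mass of a product formed from a given mass of methane (excess O2)."""
    moles_ch4 = grams_ch4 / MOLAR_MASS["CH4"]
    moles_product = moles_ch4 * COEFF[product] / COEFF["CH4"]
    return moles_product * MOLAR_MASS[product]

# Burning 16.04 g (one mole) of methane:
print(round(product_mass(16.04, "CO2"), 2))  # 44.01 g of carbon dioxide
print(round(product_mass(16.04, "H2O"), 2))  # 36.04 g of water
```

Note that the masses balance: 16.04 g of CH4 plus 64.00 g of O2 gives 44.01 g of CO2 plus 36.04 g of H2O, consistent with conservation of mass.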

Because of the well-known relationship of moles to atomic weights, the ratios that are arrived at by stoichiometry can be used to determine quantities by weight in a reaction described by a balanced equation. This is called composition stoichiometry.

Gas stoichiometry deals with reactions involving gases, where the gases are at a known temperature, pressure, and volume and can be assumed to be ideal gases. For gases, the volume ratio is ideally the same by the ideal gas law, but the mass ratio of a single reaction has to be calculated from the molecular masses of the reactants and products. In practice, because of the existence of isotopes, molar masses are used instead in calculating the mass ratio.
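For gas stoichiometry, the number of moles can be recovered from PV = nRT. A minimal sketch, using hypothetical conditions of 2.0 L at 1.0 atm and 298 K:

```python
# Ideal gas law PV = nRT, solved for n (moles).
R = 0.08206  # ideal gas constant in L·atm/(mol·K)

def moles_from_pvt(pressure_atm, volume_l, temperature_k):
    """Return moles of an ideal gas from pressure, volume, and temperature."""
    return (pressure_atm * volume_l) / (R * temperature_k)

n = moles_from_pvt(1.0, 2.0, 298.0)
print(round(n, 4))  # ≈ 0.0818 mol
```

Once the moles of a gaseous reactant are known, the ordinary mole-ratio arithmetic of the balanced equation applies unchanged.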

Etymology

The term stoichiometry was first used by Jeremias Benjamin Richter in 1792 when the first volume of Richter's Anfangsgründe der Stöchyometrie, oder Messkunst chymischer Elemente (Fundamentals of Stoichiometry, or the Art of Measuring the Chemical Elements) was published. The term is derived from the Ancient Greek words στοιχεῖον stoicheion "element" and μέτρον metron "measure".

Definition

A stoichiometric amount or stoichiometric ratio of a reagent is the optimum amount or ratio where, assuming that the reaction proceeds to completion:

* All of the reagent is consumed
* There is no deficiency of the reagent
* There is no excess of the reagent.

Stoichiometry rests upon several basic laws: the law of conservation of mass, the law of definite proportions (i.e., the law of constant composition), the law of multiple proportions, and the law of reciprocal proportions. In general, chemical reactions combine in definite ratios of chemicals. Since chemical reactions can neither create nor destroy matter, nor transmute one element into another, the amount of each element must be the same throughout the overall reaction. For example, the number of atoms of a given element X on the reactant side must equal the number of atoms of that element on the product side, whether or not all of those atoms are actually involved in a reaction.

Chemical reactions, as macroscopic unit operations, consist of simply a very large number of elementary reactions, where a single molecule reacts with another molecule. As the reacting molecules (or moieties) consist of a definite set of atoms in an integer ratio, the ratio between reactants in a complete reaction is also in integer ratio. A reaction may consume more than one molecule, and the stoichiometric number counts this number, defined as positive for products (added) and negative for reactants (removed). The unsigned coefficients are generally referred to as the stoichiometric coefficients.

Each element has an atomic mass, and considering molecules as collections of atoms, compounds have a definite molar mass. By definition, the atomic mass of carbon-12 is 12 Da, giving a molar mass of 12 g/mol. The number of molecules per mole in a substance is given by the Avogadro constant, defined as 6.02214076 × 10²³ mol⁻¹. Thus, to calculate the stoichiometry by mass, the number of molecules required for each reactant is expressed in moles and multiplied by the molar mass of each to give the mass of each reactant per mole of reaction. The mass ratios can be calculated by dividing each by the total in the whole reaction.

Elements in their natural state are mixtures of isotopes of differing mass; thus, atomic masses and thus molar masses are not exactly integers. For instance, instead of an exact 14:3 proportion, 17.04 g of ammonia consists of 14.01 g of nitrogen and 3 × 1.01 g of hydrogen, because natural nitrogen includes a small amount of nitrogen-15, and natural hydrogen includes hydrogen-2 (deuterium).
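The ammonia figures above can be reproduced with a few lines of Python, using the same average atomic masses (N = 14.01, H = 1.01 g/mol):

```python
# Mass make-up of ammonia (NH3) from average atomic masses, illustrating
# why mass ratios are not exact integers (14:3) for natural isotope mixtures.
N_MASS, H_MASS = 14.01, 1.01  # g/mol

def ammonia_breakdown(grams_nh3):
    """Return (grams of nitrogen, grams of hydrogen) in a mass of ammonia."""
    molar_mass = N_MASS + 3 * H_MASS  # 17.04 g/mol
    moles = grams_nh3 / molar_mass
    return moles * N_MASS, moles * 3 * H_MASS

n_g, h_g = ammonia_breakdown(17.04)
print(round(n_g, 2), round(h_g, 2))  # 14.01 3.03
```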

A stoichiometric reactant is a reactant that is consumed in a reaction, as opposed to a catalytic reactant, which is not consumed in the overall reaction because it reacts in one step and is regenerated in another step.

Additional Information

Stoichiometry is the quantitative analysis of the reactants required or the products formed in a chemical reaction. Chemical reactions describe the chemical changes undergone by various compounds and elements.

A balanced chemical equation imparts a lot of information. The coefficients stipulate the molar ratios and the discrete number of particles participating in a specific reaction.

Although the word itself sounds complicated, the concept of stoichiometry is very simple and easy to understand.

What is Stoichiometry?

The word ‘stoichiometry‘ comes from the Greek words ‘stoicheion’ (meaning element) and ‘metron’ (meaning measure). Hence, it means measurement of the element.

Stoichiometry denotes the quantitative connection between a chemical reaction’s products and the reactants. Stoichiometry is a branch of chemistry that applies the laws of definite proportions and the conservation of mass and energy to chemical activity.

A balanced chemical equation and the coefficients of its reactants and products are all part of the stoichiometry of a reaction. In short, stoichiometry is the measurement of the small parts of a reaction.

Definition of Stoichiometry

The balanced chemical equation expresses the stoichiometry of a reaction. It represents the quantities or amounts of products and reactants involved.

In chemistry, stoichiometry is defined as the quantitative connection between two or more substances, especially in chemical or physical change processes.

It can also be described as the definite proportions in which elements or compounds react with one another. The rules followed in calculating stoichiometric relationships are based on the laws of conservation of mass and energy and the law of combining weights or volumes.




#2133 2024-04-28 00:12:08

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,467

Re: Miscellany

2135) Manicure

Gist

Manicure is a cosmetic treatment of the hands and fingernails, including trimming and polishing of the nails and removing cuticles.

Summary

A manicure is the act of beautifying one's hands or fingernails. Manicures can be done at home or by a professional in a nail salon. During a manicure, the nail is filed with a nail file; the free edge of the nail is cut, the cuticle is treated, the person's hand is massaged and nail polish is put on.

History

Manicures were first used about 5,000 years ago. In India, henna was used for manicures; the word mehendi, another word for henna, derives from the Sanskrit mehandika. Cixi, the Dowager Empress of China, had very long naturally grown nails.

How it is done

When you get a manicure, your fingernails get filed and shaped and possibly painted red. You might get a manicure at a spa or salon as a way to pamper yourself.

You can pay a professional to give you a manicure, or you can do a manicure at home with a nail file and a bottle of nail polish. Some people get manicures for fun and relaxation, while others don't feel well dressed without the perfect nails that a manicure provides. In French the word manicure means "care of the hands," and it comes from the Latin word for "hand," manus.

Details

A manicure is a mostly cosmetic beauty treatment for the fingernails and hands performed at home or in a nail salon. A manicure usually consists of filing and shaping the free edge of nails, pushing and clipping (with a cuticle pusher and cuticle nippers) any nonliving tissue (but limited to the cuticle and hangnails), treatments with various liquids, massage of the hand, and the application of fingernail polish. When the same is applied to the toenails and feet, the treatment is referred to as a pedicure. Together, the treatments may be known as a mani-pedi. Most nail polish can stay on nails for 2–3 days before another manicure is required for maintenance, if there is no damage done to it.

Some manicures include painting pictures or designs on the nails, applying small decals, or imitation jewels (from 2 dimension to 3 dimension). Other nail treatments may include the application of artificial gel nails, tips, or acrylics, which may be referred to as French manicures.

Nail technicians, such as manicurists and pedicurists, must be licensed in certain states and countries, and must follow government regulations. Since skin is manipulated and often trimmed, there is a risk of spreading infection when tools are used on many people. Improper sanitation can therefore pose serious issues.

Etymology

The English word manicure comes from the French word manucure, meaning care of the hands, which in turn originates from the Latin words manus, for hand, and cura, for care. Similarly, the English word pedicure comes from the Latin words pes (genitive case: pedis), for foot, and cura, for care. Colloquially, the word for manicure is sometimes shortened to mani.

Types

French manicures

Jeff Pink, founder of the professional nail brand ORLY, is credited with creating the natural nail look later called the French manicure in 1976.

In the mid-1970s, Pink was tasked by a film director to come up with a universal nail look that would save screen actresses from having to spend time getting their nails redone to go along with their costume changes. Inspired by the instant brightening effect of a white pencil applied to the underside, Pink suspected that the solution was to apply that same neutralizing principle to the top of the nail. "I got one gallon of white polish for the tips, and pink, beige, or rose for the nail," he recalled in a 2014 interview with The National.

Acrylic manicure with jewel design

The Natural Nail Kit, as Pink called it then, was a hit among movie stars and studios who found the time-saving strategy indispensable. "The director commented that I should get an Oscar for saving the industry so much money," he said. Eventually Pink took the trend to the catwalk crowd in Paris, and they liked it, too. But, it still needed, as he thought, a more pleasing name. He gave it the French rebranding on the flight back home to Los Angeles.

Nails that have undergone a French manicure are characterized by a lack of artificial base color and white tips at the free edge of the nail. For this reason, they are sometimes referred to as French tips. The nail tips are painted white, while the rest of the nails are polished in a pink or a suitable nude shade. French manicures can be achieved with artificial nails. However, it is also as common to perform a French manicure on natural nails. Another technique is to whiten the underside of the nail with white pencil and paint a sheer color over the entire nail.

Hot oil manicures

A hot oil manicure is a specific type of manicure that cleans the cuticles and softens them with oil. Types of oils that can be used are mineral oil, olive oil, some lotions or commercial preparations in an electric heater.

Dip powder manicures

Dip powder manicures are an alternative to traditional acrylic nails and gel polish. Dip powders have become popular due to ease of application. They are similar to traditional silk or fiberglass enhancements, with the fiber being replaced by acrylic powder. Both methods rely on layering cyanoacrylate over the natural nail and encasing either the fiber or acrylic powder. While a single layer of fiber is typical, multiple alternating layers of powder and cyanoacrylate may be used in dip nails.

Paraffin wax treatments

Hands or feet can be covered in melted paraffin wax for softening and moisturizing. Paraffin wax is used because it can be heated to temperatures of over 95 °F (35 °C) without burning or injuring the body. The intense heat allows for deeper absorption of emollients and essential oils. The wax is usually infused with various botanical ingredients such as aloe vera, azulene, chamomile, or tea tree oil, and fruit waxes such as apple, peach, and strawberry, are often used in salons. Paraffin wax treatments are often charged as an addition to the standard manicure or pedicure. They are often not covered in general training and are a rare treatment in most nail salons.

Professional services should not include dipping clients' hands or feet into a communal paraffin bath, as the wax can be a vector for disease. Paraffin should be applied in a way that avoids contamination, often by placing a portion of the wax into a bag or mitt, which is placed on the client's hand or foot and covered with a warm towel, cotton mitt, or booty to retain warmth. The paraffin is left for a few minutes until it has cooled.

Sanitation options

In Australia, the United States, and other countries, many nail salons offer personal nail tool kits for purchase to avoid some of the sanitation issues in the salon. The kits are often kept in the salon and given to the client to take home, or are thrown away after use. They are only used when that client comes in for a treatment.

Another option is to give the client the files and wooden cuticle sticks after the manicure. Since the 1970s, the overwhelming majority of professional salons use electric nail files that are faster and yield higher quality results, particularly with acrylic nail enhancements.

Shape

There are several nail shapes: the basic shapes are almond, oval, pointed, round, square, square oval, square with rounded corners, and straight with a rounded tip. The square oval shape is sometimes known as squoval, a term coined in 1984. The squoval is considered a sturdy shape, useful for those who work with their hands.

Additional Information

A nail technician or nail stylist is a person whose occupation is to style and shape a person's nails. This is achieved by decorating nails with coloured varnish, transfers, gems or glitter. Basic treatments include manicures and pedicures, as well as cleaning and filing nails and applying overlays or extensions.

Nail stylists can also paint designs onto nails by hand or with an airbrush, using a stencil or stamping. A nail stylist will often complete a consultation with the client to check for any signs of skin problems, deformities or nail disease before treatment, advise clients about looking after their hands and nails, and recommend nail care products.

Training to become a nail stylist involves completing a professional course that normally takes at least a year. Courses will more than likely cover anatomy and physiology of the nails, hands, arms, feet and legs; contraindications that may arise; identifying diseases and disorders; proper sanitation and sterilizing techniques; how to perform nail services safely; gel polish application; liquid and powder enhancements; and hard gel enhancements. The work itself tends to take place in a beauty salon, although some nail stylists will make house calls to clients. Once licensed, many nail stylists will keep their own regular client list. The basic equipment needed to carry out nail services can be easily obtained and can include nail drills, brushes, gel polish, and a UV lamp. Specialist equipment will be needed for specific nail applications.

Getting a manicure is an excellent way to pamper yourself after a hard week of work. There’s not much that is more relaxing than getting your hands massaged, and then beautified. We use our hands to do all the work we do, which means it’s important to take proper care of them once in a while.

Fortunately, there are a great many manicures that you can choose from at the nail salon. Let’s have a look at some of the more common ones.

Basic Manicure

Most manicures use a basic manicure as a starting point, but this can be a simple and beautiful look in its own right. This involves an oil, lotion or cream first being applied to your cuticles, and then your hands being soaked in warm water. Your cuticles will then be managed, and you can choose the shape and length you want for the nails. Your hands will then be massaged with oils. Following this, your stylist will apply a base coat, a main coat and finally a top coat of polish. Once you’re done with this basic step, you have the choice of either leaving it at that, or getting a more creative option.

French Manicure

The French manicure is probably the most classic of manicure styles, and the look that you probably associate with getting a manicure at all. It’s a clean and beautiful look, with a clear, beige or pale pink polish applied over the whole nail, and white polish at the top. Of course, there are variations in the style, with bolder colors being used for the nail and cuticle. Whatever variation you choose, you’re guaranteed a vibrant and gorgeous look that should be perfect for any occasion.

American Manicure

This style very much resembles a French manicure, but is set apart by the shape and color of the nails. Where the French styles tend to be more obvious, the American manicure seeks to create a subtler, more natural look. This involves having the tips rounded, and off-white or neutral colors being used on the tips, instead of the French manicure's bright whites. Again, there are variations of this style, and it can be used in the more classic form, or given a touch of modernity through the use of uncommon colors.

Reverse French Manicure

The reverse French manicure is essentially exactly what it sounds like. The look is relatively new, but attained popularity within the mainstream fairly quickly. It basically entails the nail being painted white, and the tip painted darker. Other options include a black and white look, or a strip of color on the tip coupled with an all white nail.

Paraffin Manicure

If your hands are particularly overworked or dry, this might be the right option for you. This manicure involves using paraffin wax on the hands in order to instantaneously infuse it with moisture and make it smooth. Paraffin manicures also involve a more energetic hand massage, as well as a basic manicure look.




#2134 2024-04-29 00:06:32

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,467

Re: Miscellany

2136) Pedicure

Gist

A pedicure is a cosmetic treatment of the feet and toenails, analogous to a manicure. During a pedicure, dead skin cells are rubbed off the bottom of the feet using a rough stone (often a pumice stone).

Summary

Simply defined, a pedicure is a full treatment for one’s feet and toenails. This service may include a foot spa, massage, and scrub, along with toenail cuticle removal, cutting, shaping, and buffing.

Nail grooming is no longer a luxury, but a necessity. Getting your nails cleaned, trimmed, buffed, scrubbed, and painted may seem like too much pampering. But hey, you deserve it! Your hands and feet are among the most overworked parts of your body. It’s only right to care for them regularly.

When it comes to nail grooming, you’ve got two choices: a manicure or a pedicure. A manicure service will get your hands relaxed with a massage and your fingernails spoiled with your choice of nail polish. A pedicure service is similar, only designed to care for your toenails. Here’s more about it:

What is a pedicure?

Simply defined, a pedicure is a full treatment for one’s feet and toenails. This service may include a foot spa, massage, and scrub, along with toenail cuticle removal, cutting, shaping, and buffing. There are many ways to provide a pedicure so it’s important for you to choose the right spa or salon to get the best pedicure service. It is best to go to a salon that can do everything, including nail art and nail extension, in case you decide to have them done too.

What are the benefits of a pedicure?

A pedicure service has several benefits, which is why some people make it a part of their regular therapeutic care rather than treating it as a purely cosmetic service. Going for a pedicure regularly will let you enjoy the following:

* It can serve as supplemental care for people suffering from foot problems such as cracked soles, corns, and calluses
* It offers stress relief as provided by its soothing and relaxing treatments
* It keeps your feet healthy as your skin feels softer, and your feet less sore from walking or standing
* It can help in the early detection of infections caused by bunions or fungal infections
* Pedicure nail arts can get your toenails looking glamorous for special occasions.

How often do you get a pedicure?

A standard pedicure treatment with nail polish will last for 1 - 3 weeks. But that will still depend on what type of pedicure treatment you got and how well you took care of your feet after leaving the salon. Gel pedicures may last between 2 and 4 weeks. You may get another pedicure as soon as the nail polish has chipped or every four to six weeks.

Be sure to care for your toenails after getting a pedicure to maximise the lifespan of the nail polish. Wear flip-flops or open-toed sandals instead of closed shoes when leaving the salon to ensure that the polish dries fully.

What to expect from a pedicure?

Each salon may follow its procedures when providing a pedicure service to their clients. A pedicure service may last for anywhere from 30 minutes to a full hour. Generally, here are the things that you should expect from it if you decide to get the full service.

Foot spa – your feet will be soaked in warm water. It’s like giving your feet a bath in a small Jacuzzi, where your feet will be cleansed and softened with scented water.
Exfoliating scrub – your feet will get a good scrub using special salts and minerals to remove dead skin cells.
Foot massage – this is an optional service where the aesthetician will give you either a reflexology or acupressure-based massage to relax your feet. Special oils and creams may also be used.
Nail polish – upon request, you can get your cuticles removed and your toenails cut and shaped. Additionally, you may get your toenails painted with the polish of your choice.

Different kinds of pedicure

Different kinds of pedicures provide you with benefits that match your needs. You’re free to decide which type to get to enjoy maximum pampering for your time and money. Just don’t forget to ask what the entire service includes. Some of your choices are:

Gel pedicure – this is the most popular type of pedicure service these days. It uses a gel nail polish and a special UV lamp to make sure that the polish will last for two to four weeks.
Shellac pedicure – this one is like a gel pedicure in its benefits and application process. The only difference is the type of nail polish used.
French pedicure – for this type, you’ll get your toenails cut and shaped square, and then only the tips will be painted white.
Spa pedicure – this is the term normally used for a full-service pedicure. It means you’ll enjoy all the bells and whistles, plus other special services such as paraffin wax and hot stone treatments, among others.
Special pedicure – for this type, certain elements make the pedicure exceptional. A good example is the chocolate pedicure, which uses warm chocolate for the foot bath and cocoa as the scrub.

Details

A pedicure is a cosmetic treatment of the feet and toenails, analogous to a manicure.

During a pedicure, dead skin cells are rubbed off the bottom of the feet using a rough stone (often a pumice stone). Skincare is often provided up to the knee, including granular exfoliation, moisturizing, and massage.

The word pedicure is derived from the Latin words pedis, which means "of the foot", and cura, which means "care".

History

People have been pedicuring their nails for more than 4,000 years. In southern Babylonia, noblemen used solid gold tools to give themselves manicures and pedicures. The use of nail polish can be traced back even further. Originating in China in 3000 BC, nail colour indicated one's social status; according to a Ming Dynasty manuscript, royal fingernails were painted black and red. Ancient Egyptians were manicuring as far back as 2300 BC.

A depiction of early manicures and pedicures was found on a carving from a pharaoh's tomb, and the Egyptians were known for paying special attention to their feet and legs. The Egyptians also colored their nails, using red to show the highest social class. It is said that Cleopatra's nails were painted a deep red, whereas Queen Nefertiti went with a flashier ruby shade. In ancient Egypt and Rome, military commanders also painted their nails to match their lips before they went off to battle.

Pedicures in the United States

Pedicures generally take approximately 45 minutes to an hour in the US. According to the US Department of Labor, manicure and pedicure specialists earned a median income of around $20,820 in 2015. Most professionals earn an hourly wage or salary which can be augmented through customer tips. Independent nail technicians depend on repeat and consistent business to earn a living. The most successful independent manicure technicians may earn salaries of over $50,000 per year. Nail technicians can earn up to $100 per hour from performing more technical nail treatments, such as a French pedicure and sculpting, although these treatments are not popular for the feet. A standard pedicure treatment usually costs around $40.

Tools and nail cosmetics

Tools

* Acetone
* Cotton balls
* Cuticle cream
* Cuticle pusher or Cuticle nipper
* Foot bath
* Lotion
* Nail buffer
* Nail file
* Nail polish
* Orange woodstick
* Toenail clippers
* Toe spacers
* Towels
* Pedicure Spa
* Pumice stone (removes dead skin from sole of foot)
* Paper towels (rolled between toes to separate them)

Nail cosmetics

* Base coat
* Cuticle creams
* Cuticle oil
* Cuticle remover
* Dry nail polish
* Liquid nail polish
* Nail bleach
* Nail conditioner
* Nail dryer
* Nail polish remover
* Nail polish thinner

Types of pedicures

There are various types of pedicures. Some of the most common are as follows (names and products may vary from spa to spa):

Regular pedicure: A simple treatment that includes foot soaking, foot scrubbing with a pumice stone or foot file, nail clipping, nail shaping, foot and calf massage, moisturizer and nail polishing.
Shanghai pedicure: A traditional foot treatment derived from Chinese medicine that involves soaking the feet in hot water and using a scalpel.
Spa pedicure: Includes the regular pedicure and generally adds one of the following: paraffin dip, masks, or a mud or seaweed treatment.
Dry or Waterless pedicure: A pedicure typically including nail shaping, cuticle cleanup, callus smoothing, moisturizer with massage, nail polish or buffing, but definitively without soaking the feet in water. Often the callus smoothing, nail shaping, and cuticle cleaning are all performed with an electric file.
Paraffin pedicure: A treatment that includes a regular pedicure but also includes the use of paraffin wax. The feet are covered with layers of paraffin wax to moisturize feet.
Stone pedicure: Essentially a foot massage in which different essential oils are applied with hot stones to massage the feet and legs.
French pedicure: A regular pedicure that involves the use of white polish on the nail tips with a sheer pink color on the base.
Mini pedicure: This focuses mainly on the toes with a quick soak, nail shaping and polish, but does not include the massage or sole care. This is designed for an appointment between regular pedicures for generally well maintained feet.
Athletic pedicure: Similar to a regular pedicure, for both genders. It includes either a clear polish or toenail buffing. Usually, the aromatics used will be more cooling, such as peppermint, cucumber, or eucalyptus.
Chocolate pedicure: A pedicure which may include a chocolate foot soak, chocolate foot mask, or chocolate moisturizing lotion.
Ice Cream pedicure: A pedicure where a "bath ball", which looks like a scoop of ice cream, is chosen. The soak is followed with a foot scrub (usually vanilla, chocolate, or strawberry) and topped with a whipped moisturizing lotion. Red nail polish simulates the ice cream's "cherry".
Margarita pedicure: A regular pedicure which includes a salt scrub, soaking water with fresh limes, a lime-based massage oil, and moisturizer.
Champagne or Wine pedicure: This is a regular pedicure usually featuring a grape-seed scrub, grape mask peel, and finished off with a grape seed oil or moisturizing massage.

Risks

Improper or unsanitary pedicures can increase the risk of infection. First, some pedicure practices can damage the skin if performed too aggressively and thus increase infection risk. For example, using a pumice stone to shave off calluses on the sole of the foot can result in abrasions, and cuticle nippers may accidentally remove too much of the cuticle. Second, instruments or foot baths may not be properly sterilized, introducing pathogens into already vulnerable skin. Mycobacterium fortuitum is known to cause infection in foot spas. These risks are particularly high for people with medical conditions that affect blood flow, sensation, immune response, or healing in the feet, such as diabetes. Major health organizations such as the CDC recommend that diabetics not soak their feet or remove calluses, and that they have a podiatrist cut their toenails, all of which are key parts of many pedicures.

Solutions and chemicals used to cleanse or soak feet can also cause skin irritation. There can be a risk of developing an ingrown toenail from improper trimming.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.


#2135 2024-04-30 00:05:54

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,467

Re: Miscellany

2137) Check/Cheque

Gist

A cheque is a written order directing a bank to pay out money, and it's exactly the same thing as a check, but with more exciting letters.

The American English word for the slip of paper that authorizes your bank to make a payment is check, which is the adjusted spelling of the British English cheque. That word comes from exchequer, which is like a bank, and so a cheque is a note that has the seal of the bank: an official piece of paper. Cheque can be used as a verb meaning "withdraw."

Summary

Cheques, cheques, what’s the big deal, right? They’re those little pieces of paper people deposit, and transactions happen. Simple, right? Well, cheques are more than just a mundane financial instrument; they’re a testament to the evolution of humanity itself. You see, my curious friend, the history of cheques dates back to the 13th century, and it’s a story that has shaped your very existence.

So, picture this: someone randomly asks you, “What is a cheque?”. But you don’t know the answer. So, you find yourself stammering and lost for words. It’s not just your reputation at stake; it’s the struggles of all humankind! But worry not. We’re here to rescue you from the embarrassment. Stay tuned till the end to know all about cheques.

What is a Cheque?

A cheque is a written instrument that serves as a substitute for cash. It is a negotiable instrument instructing the bank to pay a specific amount from the drawer’s account to a designated payee.

Cheques provide a convenient and secure way to:

* Transfer funds.
* Make payments.
* Settle debts between individuals and businesses.

Types of Cheque

Knowing what a cheque is is not enough. You must know about its various types to fully understand the dynamics of cheques. Here are some of the types of cheques which you should be aware of:

1. Bearer Cheque

A bearer cheque is payable to the person who holds or “bears” the cheque. It is a form of open instrument, and whoever possesses it can encash it.

2. Order Cheque

An order cheque is payable only to a specific person or organization mentioned on the cheque as the payee. It requires endorsement by the payee to be encashed.

3. Crossed Cheque

A crossed cheque has two parallel lines across its face. These lines indicate that the cheque must be deposited directly into the payee’s bank account. They also prevent the cheque from being encashed over the counter.

Sorry, folks, no loopholes here!

4. Open Cheque

An open cheque is not crossed or specified as a bearer or order cheque. It is payable to the person presenting it and can be encashed over the counter.

5. Post-Dated Cheque

A post-dated cheque carries a future date as the payment date. It becomes valid for encashment only on or after that date.

6. Stale Cheque

Just like that forgotten sandwich in the back of your fridge, cheques have an expiration date.

An expired cheque is called a stale cheque. So, remember, presenting stale cheques after their due date will leave you with nothing but disappointment.

7. Traveler’s Cheque

When Adventure calls, this cheque answers.

A traveler’s cheque is a pre-printed fixed-amount cheque. It is designed to be used while traveling. It also serves as a safer and more convenient alternative to carrying cash.

8. Self Cheque

A self cheque is a type of cheque written by the account holder and payable to themselves. It allows you to withdraw cash from your account.

It’s like a gift from you to yourself.

Features of Cheque

Here are certain features of a cheque:

1. Issued by Individuals

A bank cheque can be issued by individuals who hold a savings or current account. It provides them with a convenient payment method.

2. Fixed Amount

The amount written on the cheque cannot be changed later on. It provides a secure means of transferring a specific sum of money to the payee.

3. Formal Order of Payment

A cheque is an unconditional order and not a request to the bank. It represents a legally binding instruction to the bank to make the specified payment.

4. Validity and Authorization

The cheque is valid only when signed and dated. Unsigned cheques are considered invalid and cannot be honored by the bank.

5. MICR Code

Magnetic Ink Character Recognition (MICR) code is present at the bottom of the cheque. It facilitates automated processing by banks.

Parties to Cheque

There are three main parties involved in a bank cheque transaction:

1. Drawer

The drawer is the person who writes and signs the cheque, issuing the payment instruction. The drawer is typically the account holder who authorizes the payment from their bank account.

2. Payee

The payee is the person or organization named on the cheque to receive the payment. The payee is the intended recipient of the funds specified by the drawer.

3. Drawee

The drawee is the bank or financial institution on which the bank cheque is drawn. The drawee is responsible for paying the payee as instructed by the drawer.

The Key Takeaway

While digital payment methods dominate today’s financial landscape, cheques remain a valuable tool for certain transactions. Thus, properly understanding cheques equips you with the necessary skills to navigate the financial world effectively. So, we hope that you understand the value of these little pieces of paper by now.

Details

A cheque or check is a document that orders a bank, building society (or credit union) to pay a specific amount of money from a person's account to the person in whose name the cheque has been issued. The person writing the cheque, known as the drawer, has a transaction banking account (often called a current, cheque, chequing, checking, or share draft account) where the money is held. The drawer writes various details including the monetary amount, date, and a payee on the cheque, and signs it, ordering their bank, known as the drawee, to pay the amount of money stated to the payee.

Although forms of cheques have been in use since ancient times and at least since the 9th century, they became a highly popular non-cash method for making payments during the 20th century and usage of cheques peaked. By the second half of the 20th century, as cheque processing became automated, billions of cheques were issued annually; these volumes peaked in or around the early 1990s. Since then cheque usage has fallen, being replaced by electronic payment systems, such as debit cards and credit cards. In an increasing number of countries cheques have either become a marginal payment system or have been completely phased out.

Nature of a cheque

A cheque is a negotiable instrument instructing a financial institution to pay a specific amount of a specific currency from a specified transactional account held in the drawer's name with that institution. Both the drawer and payee may be natural persons or legal entities. Cheques are order instruments, and are not in general payable simply to the bearer as bearer instruments are, but must be paid to the payee. In some countries, such as the US, the payee may endorse the cheque, allowing them to specify a third party to whom it should be paid.

Cheques are a type of bill of exchange that were developed as a way to make payments without the need to carry large amounts of money. Paper money evolved from promissory notes, another form of negotiable instrument similar to cheques in that they were originally a written order to pay the given amount to whoever had it in their possession (the "bearer").

Spelling and etymology

Check is the original spelling in the English language. The newer spelling, cheque (from the French), is believed to have come into use around 1828, when the switch was made by James William Gilbart in his Practical Treatise on Banking. The spellings check, checque, and cheque were used interchangeably from the 17th century until the 20th century. However, since the 19th century, in the Commonwealth and Ireland, the spelling cheque (from the French word chèque) has become standard for the financial instrument, while check is used only for other meanings, thus distinguishing the two definitions in writing.

In American English, the usual spelling for both is check.

Etymological dictionaries attribute the financial meaning of check to "a check against forgery", with the use of "check" to mean "control" stemming from the check used in chess, a term which came into English through French, Latin, and Arabic, and ultimately from the Persian word shah, or "king".

History

The cheque had its origins in the ancient banking system, in which bankers would issue orders at the request of their customers, to pay money to identified payees. Such an order was referred to as a bill of exchange. The use of bills of exchange facilitated trade by eliminating the need for merchants to carry large quantities of currency (for example, gold) to purchase goods and services.

Early years

There is early evidence of the use of bills of exchange. In India, during the Maurya Empire (from 321 to 185 BC), a commercial instrument called the adesha was in use, which was an order on a banker desiring him to pay the money of the note to a third person.

The ancient Romans are believed to have used an early form of cheque known as praescriptiones in the 1st century BC.

Beginning in the third century AD, banks in Persian territory began to issue letters of credit. These letters were termed čak, meaning "document" or "contract". The čak became the sakk later used by traders in the Abbasid Caliphate and other Arab-ruled lands. Transporting a paper sakk was more secure than transporting money. In the ninth century, a merchant in one country could cash a sakk drawn on his bank in another country. The Persian poet, Ferdowsi, used the term "cheque" several times in his famous book, Shahnameh, when referring to the Sasanid dynasty.

Ibn Hawqal, living in the 10th century, records the use of a cheque written in Aoudaghost which was worth 42,000 dinars.

In the 13th century the bill of exchange was developed in Venice as a legal device to allow international trade without the need to carry large amounts of gold and silver. Their use subsequently spread to other European countries.

In the early 1500s, to protect large accumulations of cash, people in the Dutch Republic began depositing their money with "cashiers". These cashiers held the money for a fee. Competition drove cashiers to offer additional services including paying money to any person bearing a written order from a depositor to do so. They kept the note as proof of payment. This concept went on to spread to England and elsewhere.

Modern era

By the 17th century, bills of exchange were being used for domestic payments in England. Cheques, a type of bill of exchange, then began to evolve. Initially, they were called drawn notes, because they enabled a customer to draw on the funds that he or she had in the account with a bank and required immediate payment. These were handwritten, and one of the earliest known still to be in existence was drawn on Messrs Morris and Clayton, scriveners and bankers based in the City of London, and dated 16 February 1659.

In 1717, the Bank of England pioneered the first use of a pre-printed form. These forms were printed on "cheque paper" to prevent fraud, and customers had to attend in person and obtain a numbered form from the cashier. Once written, the cheque was brought back to the bank for settlement. The suppression of banknotes in eighteenth-century England further promoted the use of cheques.

Until about 1770, an informal exchange of cheques took place between London banks. Clerks of each bank visited all the other banks to exchange cheques while keeping a tally of balances between them until they settled with each other. Daily cheque clearing began around 1770 when the bank clerks met at the Five Bells, a tavern in Lombard Street in the City of London, to exchange all their cheques in one place and settle the balances in cash. This was the first bankers' clearing house.

Provincial clearinghouses were established in major cities throughout the UK to facilitate the clearing of cheques on banks in the same town. Birmingham, Bradford, Bristol, Hull, Leeds, Leicester, Liverpool, Manchester, Newcastle, Nottingham, Sheffield, and Southampton all had their own clearinghouses.

In America, the Bank of New York began issuing cheques after its establishment by Alexander Hamilton in 1784. The oldest surviving example of a complete American chequebook from the 1790s was discovered by a family in New Jersey. The documents are in some ways similar to modern-day cheques, with some data pre-printed on sheets of paper alongside blank spaces for where other information could be hand-written as needed.

It is thought that the Commercial Bank of Scotland was the first bank to personalize its customers' cheques, in 1811, by printing the name of the account holder vertically along the left-hand edge. In 1830 the Bank of England introduced books of 50, 100, and 200 forms and counterparts, bound or stitched. These cheque books became a common format for the distribution of cheques to bank customers.

In the late 19th century, several countries formalized laws regarding cheques. The UK passed the Bills of Exchange Act 1882, and India passed the Negotiable Instruments Act, 1881, both of which covered cheques.

An English cheque from 1956 with a bank clerk's red mark verifying the signature, a two-pence stamp duty, and holes punched by hand to cancel it. This is a "crossed cheque", disallowing the transfer of payment to another account.

In 1931, an attempt was made to simplify the international use of cheques by the Geneva Convention on the Unification of the Law Relating to Cheques. Many European and South American states, as well as Japan, joined the convention. However, countries including the US and members of the British Commonwealth did not participate, and so it remained very difficult for cheques to be used across country borders.

In 1959 a standard for machine-readable characters (MICR) was agreed upon and patented in the US for use with cheques. This opened the way for the first automated reader/sorting machines for clearing cheques. As automation increased, the following years saw a dramatic change in the way in which cheques were handled and processed. Cheque volumes continued to grow; in the late 20th century, cheques were the most popular non-cash method for making payments, with billions of them processed each year. Most countries saw cheque volumes peak in the late 1980s or early 1990s, after which electronic payment methods became more popular and the use of cheques declined.

In 1969 cheque guarantee cards were introduced in several countries, allowing a retailer to confirm that a cheque would be honored when used at a point of sale. The drawer would sign the cheque in front of the retailer, who would compare the signature to the signature on the card and then write the cheque-guarantee-card number on the back of the cheque. Such cards were generally phased out and replaced by debit cards, starting in the mid-1990s.

From the mid-1990s, many countries enacted laws to allow for cheque truncation, in which a physical cheque is converted into electronic form for transmission to the paying bank or clearing-house. This eliminates the cumbersome physical presentation and saves time and processing costs.

In 2002, the Eurocheque system was phased out and replaced with domestic clearing systems. Old Eurocheques could still be used, but they were now processed by national clearing systems. At that time, several countries took the opportunity to phase out the use of cheques altogether. As of 2010, many countries have either phased out the use of cheques altogether or signaled that they would do so in the future.

Additional Information

A check is a written, dated, and signed draft that directs a bank to pay a specific sum of money to the bearer. The person or entity writing the check is known as the payor or drawer, while the person to whom the check is written is the payee. The drawee, on the other hand, is the bank on which the check is drawn.

KEY TAKEAWAYS
* A check is a written, dated, and signed draft that directs a bank to pay a specific sum of money to the bearer.
* Checks instruct a financial institution to transfer funds from the payor’s account to the payee or that person's account.
* Check features include the date, the payee line, the amount of the check, the payor’s endorsement, and a memo line.
* Types of checks include certified checks, cashier’s checks, and payroll checks, also called paychecks.
* In some countries, such as Canada and England, the spelling used is “cheque.”

How Checks Work

A check is a bill of exchange or document that guarantees a certain amount of money. It is printed for the drawing bank to provide to an account holder (the payor) to use. The payor writes the check and gives it to the payee, who then takes it to their bank for cash or to deposit into an account.

Checks essentially provide a way to instruct the bank to transfer funds from the payor’s account to the payee or the payee’s account.

The use of checks allows two or more parties to make a monetary transaction without using physical currency. Instead, the amount for which the check is written is a substitute for physical currency of the same amount.

Checks are generally written against a checking account, but they can also be used to move funds from a savings or other type of account.

Checks can be used to make bill payments, as gifts, or to transfer sums between two people or entities. They are generally seen as a more secure way of transferring money than cash, especially with large sums. If a check is lost or stolen, a third party is not able to cash it, as the payee is the only one who can negotiate the check.

Features of a Check

While not all checks look alike, they generally share the same key features. The name and contact information of the person writing the check is located at the top left. The name of the bank that holds the drawer’s account appears on the check as well.

There are a number of lines that need to be filled in by the payor:

* The date must be written on the line in the top right corner of the check.
* The payee’s name goes on the first line in the center of the check. This is indicated by the phrase "Pay to the Order Of."
* The amount of the check in a dollar figure is filled out in the box next to the payee’s name.
* The amount written out in words goes on the line below the payee’s name.
* The payor signs the check on the line at the bottom right corner of the check. The check must be signed to be considered valid.

There is also a memo line in the bottom left corner of the check. The payor may use it to make notes, such as a reference number, an account number, or any particular reason for writing the check.
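
The lines described above can be modeled as a simple record. A minimal sketch in Python; the field names and the validity rule (signed and dated) are illustrative, not any banking standard:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class Check:
    """Illustrative model of the lines a payor fills in on a check."""
    date_written: date          # date line, top right corner
    payee: str                  # "Pay to the Order Of" line
    amount: float               # dollar figure in the box
    amount_in_words: str        # amount written out in words
    signed: bool = False        # signature line, bottom right corner
    memo: Optional[str] = None  # optional memo line, bottom left corner

    def is_valid(self) -> bool:
        # The check must be signed and dated to be considered valid.
        return self.signed and self.date_written is not None

check = Check(date(2024, 4, 30), "Jane Doe", 40.00,
              "Forty and 00/100 dollars", signed=True, memo="pedicure")
print(check.is_valid())  # True
```

Constructing the same check without `signed=True` makes `is_valid()` return `False`, mirroring the rule that an unsigned check cannot be honored.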

A series of coded numbers is found along the bottom edge of the check, directly underneath the memo line and extending toward the payor’s signature line. These numbers are:

* the bank’s routing number
* the payor’s account number
* the check number.

In certain countries, such as Canada, the routing number is replaced with an institution number—which represents the bank’s identifying code—and the transit or branch number where the account is held.
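
In simplified form, splitting that coded line into its three fields and checking the routing number is straightforward. A sketch in Python: the space-separated layout and sample digits are assumptions for illustration (real MICR lines use special E-13B delimiter symbols), while the mod-10 weighting is the standard ABA routing-number checksum:

```python
def parse_micr(micr_line: str) -> dict:
    """Split a simplified MICR line into its three fields.

    Assumes the fields are space-separated in the order printed on US
    checks; real MICR lines use special delimiter symbols instead.
    """
    routing, account, check_number = micr_line.split()
    return {"routing": routing, "account": account, "check_number": check_number}

def valid_routing(routing: str) -> bool:
    """ABA routing-number checksum: repeating weights 3, 7, 1, mod 10."""
    if len(routing) != 9 or not routing.isdigit():
        return False
    weights = [3, 7, 1] * 3
    return sum(w * int(d) for w, d in zip(weights, routing)) % 10 == 0

fields = parse_micr("021000021 123456789 0101")
print(fields["routing"], valid_routing(fields["routing"]))  # 021000021 True
```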




#2136 2024-04-30 22:52:02

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,467

Re: Miscellany

2138) Demand Draft

Gist

A demand draft is a way to initiate a bank transfer that does not require a signature, as is the case with a check. A demand draft is a prepaid instrument; therefore, you cannot stop payment on it in the case of fraud or mis-intended recipient.

Summary

A Demand Draft (DD) is a negotiable instrument issued by a bank. As a negotiable instrument, it guarantees payment of a specific amount and mentions the payee’s name.

A DD cannot be transferred to another person in any situation.

Features of a Demand Draft

* A bank issues a DD to a person who directs any other bank or branch to pay some amount to a payee. You can either get a demand draft online or offline.
* Compared to cheques, demand drafts are secure and hard to counterfeit. This is because the person must pay before the bank issues a DD, while one can issue a cheque without ensuring sufficient funds in the bank account. A cheque may therefore bounce, but a DD assures timely and safer payment.
* A DD is also payable on demand. A beneficiary must present the instrument to the branch directly; it cannot be paid to the bearer. One can also have it collected via the bank’s clearing mechanism.
* A DD is mostly issued when the parties are unknown to one another and do not have much trust. In such situations it is handy, as there is little chance of counterfeiting or fraud.

Types of Demand Draft

There are two main types of demand drafts, as follows:

1. Sight Demand Draft

This is a DD type that is only approved and payable after verifying certain documents. If the payee fails to present the required documents, they will not receive the amount.

2. Time Demand Draft

This is a DD type that is payable only after a specific period, and it cannot be drawn from the bank before that.

How to Make a Demand Draft?

* You can visit your bank or fill in the online application that the bank offers
* You must provide essential details such as your bank account information, the payee’s full name, and the payee’s bank address
* You must also provide the amount, its currency, the reason for payment, and instructions on how the DD should be sent (to you or directly to the payee)
* Additionally, you might be required to pay a fee before the bank issues the demand draft.

When can you use a DD?

You can use a DD when making an online purchase or when purchasing over the phone. You can also use it for recurring debits from a bank account, including bill payments.

Other uses of a DD include:

* Return item fees
* Customer payments made remotely
* Transfer payments between various bank accounts.

Telemarketers, credit card companies, utility firms, and insurance agencies therefore usually accept DDs.

Demand Draft Validity

It is common for people to delay depositing their cheques or demand drafts for credit into their bank accounts. There might be various reasons for the delays, but a person must be aware that the Reserve Bank of India has reduced the validity period for demand drafts and cheques.

According to the guidelines of the Reserve Bank of India, negotiable instruments, including demand drafts, cheques, pay orders, etc., are valid for only 3 months.

The period was reduced to prevent people from taking undue advantage and circulating the instruments as cash in the market. The RBI also directed all banks and their branches not to proceed with any payment if a person presents an instrument more than three months after the issuing date. If the validity of a demand draft expires, the DD purchaser must visit the concerned branch and submit an application for revalidation of the demand draft.
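
The three-month rule amounts to simple date arithmetic. A minimal sketch in Python; the month-addition helper, which clamps to month end, is one illustrative interpretation of "3 months":

```python
from datetime import date
import calendar

def add_months(d: date, months: int) -> date:
    """Return the same day-of-month `months` later, clamped to month end."""
    month_index = d.month - 1 + months
    year = d.year + month_index // 12
    month = month_index % 12 + 1
    day = min(d.day, calendar.monthrange(year, month)[1])
    return date(year, month, day)

def is_instrument_valid(issue_date: date, presented_on: date) -> bool:
    """A cheque or DD is valid for 3 months from its issue date (RBI rule);
    beyond that, the bank should not proceed with payment."""
    return issue_date <= presented_on <= add_months(issue_date, 3)

print(is_instrument_valid(date(2024, 1, 15), date(2024, 3, 30)))  # True
print(is_instrument_valid(date(2024, 1, 15), date(2024, 5, 1)))   # False
```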

Demand Draft vs Cheque

A key difference between a demand draft and a cheque is that the bank issues a demand draft, and any individual can issue a cheque. There are other differences too, including:

* A bank customer draws a cheque, while the bank draws a demand draft
* A person can stop payment of a cheque but not of a demand draft
* A demand draft is a prepaid instrument, and thus it will surely be processed once issued. However, a cheque might fail due to an insufficient balance in an account.

Details

What Is a Demand Draft?

A demand draft is a method used by an individual to make a transfer payment from one bank account to another. Demand drafts differ from regular checks in that they do not require signatures to be cashed. In 2005, due to the increasing fraudulent use of demand drafts, the Federal Reserve proposed new regulations increasing a victim's right to claim a refund and holding banks more accountable for cashing fraudulent checks.

KEY TAKEAWAYS

* A demand draft is a way to initiate a bank transfer that does not require a signature, as is the case with a check.
* A demand draft is a prepaid instrument; therefore, you cannot stop payment on it in the case of fraud or an unintended recipient.
* Because demand drafts can be used to defraud people, there are regulations now in place that allow victims to recover funds from the holding bank.

Demand drafts are less flexible compared to other payment methods but may offer greater security compared to electronic payments or online payment systems.

Understanding Demand Drafts

When a bank prepares a demand draft, the amount of the draft is taken from the account of the customer requesting the draft and is transferred to an account at another bank. The drawer is the person requesting the demand draft; the bank paying the money is the drawee; the party receiving the money is the payee. Demand drafts were originally designed to benefit legitimate telemarketers who needed to withdraw funds from customer checking accounts using their bank account numbers and bank routing numbers.

For example, if a small business owner purchases products from another company on credit, the small business owner asks his bank to send a demand draft to the company for payment of the products, making him the drawer. The bank issues the draft, making it the drawee. After the draft matures, the owner of the other company brings the demand draft to his bank and collects his payment, making him the payee.

Because a demand draft is a prepaid instrument, payment cannot be stopped, whereas payment of a check may be denied for insufficient funds.

Process of Obtaining Demand Draft

To obtain a demand draft, choose the issuing bank or financial institution from which you want to obtain the draft. If you're not an account holder, visit the bank branch and provide additional identification and documentation. You'll often have to fill out an application form with the required details including the amount to be paid, the name of the payee, and other relevant information.

The bank often asks you to provide supporting documents such as proof of identification and address. This complies with Know Your Customer (KYC) regulations. After you pay the required fees, you'll receive the demand draft in your name with a unique draft number printed on special security paper.

When you receive the demand draft, check the demand draft details. Ensure all information is correct including the payee's name, amount, and instructions to ensure they match your requirements. From there, all that's left is to deliver the demand draft to the payee depending on your preference and bank's policies.

Demand Drafts vs. Other Payment Methods

Demand Drafts vs. Checks

A demand draft is issued by a bank while a check is issued by an individual. Also, a demand draft is drawn by an employee of a bank while a check is drawn by a customer of a bank. Payment of a demand draft may not be stopped by the drawer as it may with a check.

Although a check can be hand-delivered, this is not the case with a demand draft. The draft may be drawn regardless of whether an individual holds an account at the bank while a check may be written only by an account holder.

Demand Draft vs. Wire Transfer

A demand draft is a physical payment instrument issued by a bank or financial institution representing a guaranteed form of payment as the purchaser pre-pays the funds. On the other hand, a wire transfer, also known as a bank transfer or electronic funds transfer (EFT), involves the electronic transfer of funds from one bank account to another.

The processing time for a demand draft may vary depending on factors such as the issuing bank and delivery method. Wire transfers, however, are generally faster, often completing within minutes or hours, allowing for swift transfer of funds.

Banks typically charge a fee for issuing a demand draft, which may vary depending on the bank and the amount of the draft. Additional charges may apply for services such as courier delivery. Wire transfers usually also involve transaction fees, which can vary depending on the banks involved, the transfer amount, and whether it is domestic or international.

Demand drafts are commonly used for secure transactions such as large amounts, educational fees, property purchases, or settling financial obligations, where proof of payment and secure delivery are highly important. Though wire transfers may also be used in these cases, they are a more versatile form of payment that extends to routine daily transactions of lower importance.

Demand Draft vs. Online Payment System

Online payment systems are digital platforms that facilitate electronic transactions over the Internet, allowing individuals and businesses to make payments or transfer funds between bank accounts or digital wallets without the need for physical instruments. Compared to demand drafts, online payment systems typically offer faster processing times, allowing transactions to be completed in real time.

While demand drafts often incur transaction fees, a growing number of online payment systems offer free transactions for certain transfers or within specific limits, whether capped by the number of transactions or by the transaction amount. Consider how popular shopping websites can easily facilitate online payments for free.

Online payment systems have gained significant popularity worldwide, used for various transactions, including e-commerce purchases, bill payments, peer-to-peer transfers, and subscription services. As noted above with wire transfers, demand drafts may be more suitable for more select types of transactions as opposed to online payments which may be used much more broadly.

How Long Does It Take for a Demand Draft to Clear?

The clearing time for a demand draft can vary depending on factors such as the banks involved and the method of presentation. It typically takes several business days for the demand draft to clear and for the funds to become available to the payee. The exact time frame can depend on the policies and processes of the banks involved.

What Fees and Charges Are Associated with Demand Drafts?

Fees associated with demand drafts include an issuance fee charged by the bank for providing the draft. There may also be charges for services like courier delivery if the draft needs to be sent to the payee through postal services or a courier. The fees can vary between banks, so check with your bank for the specific charges.

Can I Cancel or Stop a Demand Draft?

Yes, demand drafts can generally be canceled or stopped by the purchaser. If a demand draft needs to be canceled, the purchaser should contact the issuing bank immediately and provide the necessary details. The bank will guide the purchaser through the cancellation process which may involve submitting a written request and paying cancellation fees.

Additional Information

A demand draft (DD) is a negotiable instrument similar to a bill of exchange. A bank issues a demand draft to a client (drawer), directing another bank (drawee) or one of its own branches to pay a certain sum to the specified party (payee).

A demand draft can also be compared to a cheque. However, demand drafts are difficult to countermand or revoke. Cheques can be made payable to the bearer, whereas demand drafts can only be made payable to a specified party, also known as pay-to-order. Demand drafts are usually orders of payment from one bank to another bank, whereas cheques are orders of payment from an account holder to the bank.

A drawer has to visit a branch of the bank, fill in the demand draft form, and pay the amount either in cash or by another mode, and the bank will then issue the demand draft. A demand draft is valid for three months from its date of issue.

For instance, when enrolling in a college, an admission fee is required, which can be paid either in cash or by demand draft; cheques, however, are generally not accepted by most colleges. The primary reason is that demand drafts are considered a safer payment method than cheques, since the amount is paid to the bank before the demand draft is released. A cheque, on the other hand, may not be honoured, since the payee cannot be certain that the drawer's account contains the funds specified on the cheque. It is not compulsory for the drawer to be a bank customer, and a demand draft carries an official stamp for added authenticity.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.


#2137 2024-05-01 23:34:31

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,467

Re: Miscellany

2139) Chartered Financial Analyst

Summary

A chartered financial analyst (CFA) is a globally recognized professional designation given by the CFA Institute (formerly the Association for Investment Management and Research, or AIMR) that measures and certifies the competence and integrity of financial analysts. Candidates are required to pass three levels of exams covering areas such as accounting, economics, ethics, money management, and security analysis.

From 1963 through November 2023, more than 3.7 million candidates have taken the CFA exam. The overall pass rate was 45%. During 2014 through 2023, the 10-year average pass rate was 43%.

The pass rate of the exam has been below 50% in recent decades, making the CFA charter one of the most difficult financial certifications to obtain.

A minimum of 300 hours of study is recommended for each exam.

* The CFA charter is one of the most respected designations in finance and is widely considered to be the gold standard in the field of investment analysis.
* To become a charter holder, candidates must pass three difficult exams, have a bachelor's degree, and have at least 4,000 hours of relevant professional experience over a minimum of three years. Passing the CFA Program exams requires strong discipline and an extensive amount of studying.
* There are more than 200,000 CFA charter-holders worldwide in 164 countries.
* The designation is handed out by the CFA Institute, which has 11 offices worldwide and 160 local member societies.

Details

The Chartered Financial Analyst (CFA) program is a postgraduate professional certification offered internationally by the US-based CFA Institute (formerly the Association for Investment Management and Research, or AIMR) to investment and financial professionals. The program teaches a wide range of subjects relating to advanced investment analysis—including security analysis, statistics, probability theory, fixed income, derivatives, economics, financial analysis, corporate finance, alternative investments, portfolio management—and provides a generalist knowledge of other areas of finance.

A candidate who successfully completes the program and meets other professional requirements is awarded the "CFA charter" and becomes a "CFA charter-holder". As of November 2022, at least 190,000 people are charter-holders globally, growing 6% annually since 2012 (including effects of the pandemic). Successful candidates take an average of four years to earn their CFA charter.

The top employers of CFA charter-holders globally include JPMorgan Chase, UBS, Royal Bank of Canada, and Bank of America.

History

The predecessor of the CFA Institute, the Financial Analysts Federation (FAF), was established in 1947 as a service organization for investment professionals. The FAF founded the Institute of Chartered Financial Analysts in 1962; the earliest CFA charter-holders were "grandfathered" in through work experience only, but then a series of three examinations was established along with a requirement to be a practitioner for several years before taking the exams. In 1990, in the hopes of boosting the credential's public profile, the CFA Institute (formerly the Association for Investment Management and Research) merged with the FAF and the Institute of Chartered Financial Analysts.

The CFA exam was first administered in 1963 and began in the United States and Canada, but has become global with many people becoming charter-holders across Europe, Asia, and Australia. By 2003, fewer than half the candidates in the CFA program were based in the United States and Canada, with most of the other candidates based in Asia or Europe. The number of charter-holders in India and China had increased by 25% and 53%, respectively, from 2005 to 2006.

CFA Charter

The CFA designation is designed to demonstrate a strong foundation in advanced investment analysis and portfolio management, accompanied by a strict emphasis on ethical practice.

A charter holder is held to the highest ethical standards. Once an investment professional obtains the charter, this individual also makes an annual commitment to uphold and abide by a strict professional code of conduct and ethical standards. Violations of the CFA code of ethics may result in industry-related sanctions, suspension of the right to use the CFA designation, or a revocation of membership.

Requirements

To become a CFA charter-holder, candidates must satisfy the following requirements:

* Have obtained a bachelor's (or equivalent) degree or be in the final year of a bachelor's degree program. However, an accredited degree may not always be a requirement.
* Pass all three levels of the CFA program (mastery of the current CFA curriculum and passing three examinations).
* Have 4,000 hours of qualified work experience, acquired over a minimum of three years, acceptable to the CFA Institute. However, individual-level exams may be taken prior to satisfying this requirement.
* Have two or three letters of reference.
* Become a member of the CFA Institute.
* Adhere to the CFA Institute Code of Ethics and Standards of Professional Conduct.

Due to the timing of the exams, completing all three levels of the CFA is possible within two years, but candidates must still complete the work experience requirement of 4,000 hours over a minimum of three years to become a charter-holder.

Pass rates

The CFA exams are notoriously difficult, with low pass rates. During 2010–2021, pass rates for Levels 1–3 ranged from 22% to 56%. The CFA Level 1 examinations in May 2021 and July 2021 made headlines after plummeting to record-low pass rates of 25% and 22%, respectively, and in August 2021 the Level 2 pass rate fell to 29%.

Curriculum

The curriculum for the CFA program is based on a Candidate Body of Knowledge established by the CFA Institute. The CFA curriculum is updated annually to reflect the latest best practices, with the extent of changes varying by year and level. The curriculum comprises, broadly, the topic areas below. There are three exams ("levels") that test the academic portion of the CFA program. All three levels emphasize the subject of ethics. The material differences among the exams are:

* The Level I study program emphasizes tools and inputs and includes an introduction to asset valuation, financial reporting and analysis, and portfolio-management techniques.
* The Level II study program emphasizes asset valuation and includes applications of the tools and inputs (including economics, financial reporting and analysis, and quantitative methods) in asset valuation.
* The Level III study program emphasizes portfolio management and includes descriptions of strategies for applying the tools, inputs, and asset valuation models to manage equity, fixed income, and derivative investments for individuals and institutions.


For exams from 2008 onward, candidates are automatically provided the curriculum readings from the CFA Institute at the time of registration for the exam. The curriculum is not provided separately in the absence of exam registration. If the student fails an exam and is allowed to retest in the same year, the CFA Institute offers a slight rebate and will not send the curriculum again (the curriculum changes only on an annual basis). If the student retests in a year other than the year of failure, he or she will receive the curriculum again, as it may have been changed. Study materials for the CFA exams are available from numerous commercial learning providers, although they are not officially endorsed. Various organizations (some officially accredited) also provide course-based preparation. As of 2019, the examination includes questions on artificial intelligence, automated investment services, and mining unconventional sources of data.

Ethical and professional standards

The ethics section is primarily concerned with compliance and reporting rules when managing an investor's money or when issuing research reports. Some rules pertain more generally to professional behavior (such as prohibitions against plagiarism); others specifically relate to the proper use of the designation for charter-holders and candidates. These rules are delineated in the "Standards of Professional Conduct", within the context of an overarching "Code of Ethics".

Quantitative methods

This topic area is dominated by statistics: the topics are fairly broad, covering probability theory, hypothesis testing, (multi-variate) regression, and time-series analysis. Other topics include time value of money—incorporating basic valuation and yield and return calculations—portfolio-related calculations, and technical analysis. Recent additions, as mentioned above, are a survey of machine learning and big data.

Economics

Both microeconomics and macroeconomics are covered, including international economics (mainly related to currency conversions and how they are affected by international interest rates and inflation). By Level III, the focus is on applying economic analysis to portfolio management and asset allocation.

Financial statement analysis

The curriculum includes financial reporting topics (International Financial Reporting Standards and U.S. Generally Accepted Accounting Principles), and ratio and financial statement analysis. Financial reporting and analysis of accounting information is heavily tested at Levels I and II, but is not a significant part of Level III.

Corporate finance

The curriculum initially covers the major corporate finance topics: capital investment decisions, capital structure policy and implementation, and dividend policy; this builds on the accounting, economics, and statistics areas. It then extends to more advanced topics such as the analysis of mergers and acquisitions, corporate governance, and business and financial risk.

Security analysis

The curriculum includes coverage of global markets as well as analysis and valuation of the various asset types: equity (stocks), fixed income (bonds), derivatives (futures, forwards, options, and swaps), and alternative investments (real estate, private equity, hedge funds, and commodities). The Level I exam requires familiarity with these instruments. Level II focuses on valuation, employing the "tools" studied under quantitative methods, financial statement analysis, corporate finance, and economics. Level III centers on incorporating these instruments into portfolios.

Equity and fixed income

The curriculum for equity investments includes the functioning of the stock market, indices, stock valuation, and industry analysis. Fixed income topics similarly include the various debt securities, the risk associated with these, and valuations and yield spreads.

Derivatives

The curriculum includes coverage of the fundamental framework of derivatives markets, derivatives valuations, and hedging and trading strategies involving derivatives, including futures, forwards, swaps, and options. The curriculum incorporates various pricing models and frameworks, such as Black-Scholes and binomial option pricing (extending to coverage of interest rate trees), while coverage of the underlying mathematics is conceptual as opposed to technical.

Alternative investments

The curriculum includes coverage of a range of topics in the alternative investment category. Topics include hedge funds, private equity, real estate, commodities, infrastructure, and other alternative investments, including, as applicable, strategies, sub-categories, potential benefits and risks, fee structures, and due diligence.

Portfolio management and wealth planning

This section increases in importance with each of the three levels—it integrates and draws from the other topics, including ethics. It includes: (i) modern portfolio theory (efficient frontier, capital asset pricing model, etc.); (ii) investment practice (defining the investment policy for individual and institutional investors, resultant asset allocation, order execution, and hedging using derivatives); and (iii) measurement of investment performance.

Efficacy of the CFA program

Given the time and effort that candidates must undergo to complete the CFA program, it would be expected that CFA charter-holders have higher performance than those who do not complete the program. However, there is some evidence that differential analyst performance is economically inconsequential, suggesting the predominance of signaling; although other research in the Financial Analysts Journal (a journal published by CFA Institute) suggests a positive human capital impact from the CFA program.




#2138 2024-05-03 00:43:22

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,467

Re: Miscellany

2140) Sample space

Gist

A sample space is a collection or a set of possible outcomes of a random experiment. The sample space is represented using the symbol, “S”. The subset of possible outcomes of an experiment is called events. A sample space may contain a number of outcomes that depends on the experiment.

Summary

There are many experiments, such as tossing a coin or rolling a die, whose outcomes we cannot predict with certainty, although we can always list all the possible outcomes. These are what we call random phenomena or random experiments. Probability theory is usually concerned with such random phenomena or random experiments. But what is a sample space?

The set of all such possible outcomes is known as the sample space of the experiment. In other words, in a random experiment, the collection or set of possible outcomes is referred to as the sample space, normally denoted by S.

The sample space of a random experiment is written within curly brackets "{ }". The sample space may depend on the number of outcomes in the experiment, and a subset of the possible outcomes is referred to as an event. If the number of outcomes is finite, the sample space is known as a discrete or finite sample space.

Now, we have two more questions. First, what is the probability? What are events? We got you! Here is the answer.

Some Important Definitions

Probability:

In mathematics, probability is a branch concerned with numerical descriptions of how likely an event is to occur, or how likely it is that a proposition is true. The probability of an event is a number between 0 and 1, where 0 designates the impossibility of the event and 1 designates certainty.

Events:

An event is a subset of the possible outcomes of an experiment.

Difference Between a Sample Space and an Event

Even though a sample space and an event are both written within curly braces "{ }", there is a difference between them. When we roll a die, the sample space is {1, 2, 3, 4, 5, 6}, but an event might be the set of even numbers {2, 4, 6} or the set of odd numbers {1, 3, 5}.
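The definitions above can be made concrete in a short Python sketch. The variable names (`S`, `even`, `odd`, `three_coins`) are chosen here for illustration; the probability formula P(E) = |E| / |S| assumes equally likely outcomes.

```python
from itertools import product
from fractions import Fraction

# Sample space for rolling one die
S = {1, 2, 3, 4, 5, 6}

# Events are subsets of the sample space
even = {x for x in S if x % 2 == 0}   # {2, 4, 6}
odd = S - even                        # {1, 3, 5}

# With equally likely outcomes, P(E) = |E| / |S|
print(Fraction(len(even), len(S)))    # 1/2

# Sample space for tossing three coins: all ordered triples of H/T
three_coins = set(product("HT", repeat=3))
print(len(three_coins))               # 8 outcomes
```

Note that the three-coin sample space has 2^3 = 8 outcomes, one for each ordered sequence of heads and tails.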




#2139 2024-05-03 23:29:38

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,467

Re: Miscellany

2141) Digital Logic

Gist

Digital logic is the manipulation of binary values through printed circuit board technology that uses circuits and logic gates to construct the implementation of computer operations. Digital logic is a common part of electrical engineering and design courses.

Summary

Modern computing systems consist of complex subsystems and technologies. These technologies are built upon a set of fundamental, simple rules known as digital logic. Using digital logic gates, we can develop complex logical circuits for various purposes such as storing data, manipulating data, or simply representing data.

What is Digital Logic?

Digital electronics, or digital logic, is one of the most important branches of electronics and telecommunication science. Digital logic is mainly used for representing, manipulating, and processing digital information using discrete signals or binary digits (bits). It can perform logical operations, store and retrieve data, and transform data through logical circuit design.

What is Digital?

Previously, continuous signals or values were used to represent data; these are known as analog signals. In modern computing, data representation has shifted to discrete, non-continuous signals or values (only 0 or 1), which is known as digital. Here, the information is encoded as a sequence of bits, where every bit represents one of only two states (1 for high, 0 for low). This is known as the binary representation of information.

Why is Digital Logic Necessary?

In the modern computing realm, digital logic plays a significant role in many areas, discussed below:

* Universal representation: Digital logic is used to represent any type of data, such as images, text, video, and audio, by encoding the data in binary form. This binary format enables uniform handling of diverse data and allows seamless integration and compatibility.
* Error reduction and correction: Digital logic itself is far less prone to error, as it works with only two values (0 and 1). Moreover, redundancy checks and error-detection mechanisms built from digital logic codes can detect and rectify errors introduced during transmission, ensuring reliable and accurate data processing.
* Scalability and modularity: Digital logic provides a scalable framework in which complex systems can be developed from basic logic gates alone. This enables an easy and cost-effective way to develop large-scale systems with improved flexibility, maintainability, and ease of integration.
* Noise immunity: Because digital logic uses discrete signals, it is less susceptible to induced noise than analog signals. It therefore provides more robust communication and data processing through noise filtering and error mitigation.

Details

Digital logic is the underlying logic system that drives electronic circuit board design. Digital logic is the manipulation of binary values through printed circuit board technology that uses circuits and logic gates to construct the implementation of computer operations. Digital logic is a common part of electrical engineering and design courses.

A main component of digital logic consists of five different logic gates:

AND
OR
XOR
NAND
NOR

These basic logic gates are used in conjunction with one another to build elaborate engineering designs that deliver various computing outcomes. In addition to other types of circuitry and board and chip design, logic gates direct the computing and calculation work that electronic technologies do on a device. For example, circuits use logic gates to construct the outputs for digital numbers on calendars and other displays, by returning separate logical results for each particular digital component or “side” of one of these digital numbers.
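The behaviour of the five gates listed above can be sketched as Python functions operating on single bits; the function names mirror the gate names, and the truth-table printout is illustrative.

```python
# The five gates above, expressed as functions on single bits (0 or 1)
def AND(a, b):  return a & b          # 1 only if both inputs are 1
def OR(a, b):   return a | b          # 1 if either input is 1
def XOR(a, b):  return a ^ b          # 1 if exactly one input is 1
def NAND(a, b): return 1 - (a & b)    # inverse of AND
def NOR(a, b):  return 1 - (a | b)    # inverse of OR

# Print the combined truth table for all four input combinations
print("a b | AND OR XOR NAND NOR")
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "|", AND(a, b), OR(a, b), XOR(a, b), NAND(a, b), NOR(a, b))
```

Composing these functions mimics wiring gates together, which is how the larger designs described in this section are built.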

Digital, or Boolean, logic is the fundamental concept underpinning all modern computer systems. Put simply, it is the system of rules that allows us to make extremely complicated decisions based on relatively simple "yes/no" questions.


Digital circuitry

Digital logic circuits can be broken down into two subcategories: combinational and sequential. Combinational logic changes "instantly": the output of the circuit responds as soon as the input changes (with some delay, of course, since the propagation of the signal through the circuit elements takes a little time). Sequential circuits have a clock signal, and changes propagate through stages of the circuit on edges of the clock.

Typically, a sequential circuit will be built up of blocks of combinational logic separated by memory elements that are activated by a clock signal.

Programming

Digital logic is important in programming, as well. Understanding digital logic makes complex decision making possible in programs.

Additional Information

Logic design, basic organization of the circuitry of a digital computer. All digital computers are based on a two-valued logic system—1/0, on/off, yes/no (see binary code). Computers perform calculations using components called logic gates (or logic circuits), which are made up of integrated circuits that receive an input signal, process it, and change it into an output signal. The components of the gates pass or block a clock pulse as it travels through them, and the output bits of the gates control other gates or output the result.

Following Boolean algebra, there are three basic kinds of logic gates, called AND, which outputs 1 only if both inputs are 1; OR, which outputs 1 if either input is 1; and NOT, which outputs 1 if the input is 0 and outputs 0 if the input is 1. Other logic gates that can be constructed are NAND (NOT-AND), which outputs 1 if either input is 0; NOR (NOT-OR), which outputs 1 only if both inputs are 0; XOR (or EXOR, exclusive OR), which outputs 1 if only one input is 1; and XNOR (or EXNOR, exclusive NOR), which outputs 0 if only one input is 1. By connecting logic gates together, a device can be constructed that can perform basic arithmetic functions.
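The claim that connected gates can perform arithmetic can be illustrated with the textbook half-adder/full-adder construction, sketched here in Python with bitwise operators standing in for the gates (function names are chosen for the example).

```python
def half_adder(a, b):
    """Sum bit is XOR of the inputs; carry bit is AND of the inputs."""
    return a ^ b, a & b

def full_adder(a, b, cin):
    """Chain two half adders; the two carry bits are OR-ed together."""
    s1, c1 = half_adder(a, b)
    s2, c2 = half_adder(s1, cin)
    return s2, c1 | c2

def ripple_add(x_bits, y_bits):
    """Add two equal-length little-endian bit lists with a ripple-carry chain."""
    carry, out = 0, []
    for a, b in zip(x_bits, y_bits):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    out.append(carry)  # final carry becomes the most significant bit
    return out

# 3 (bits [1,1,0]) + 6 (bits [0,1,1]) = 9 (bits [1,0,0,1]), little-endian
print(ripple_add([1, 1, 0], [0, 1, 1]))  # [1, 0, 0, 1]
```

Each `full_adder` call corresponds to one stage of gates in a hardware adder, so the whole function mirrors how a physical ripple-carry adder is wired.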




#2140 2024-05-04 22:00:26

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,467

Re: Miscellany

2142) IP address

Gist

An Internet Protocol (IP) address is the unique identifying number assigned to every device connected to the internet. An IP address definition is a numeric label assigned to devices that use the internet to communicate.

Summary

An Internet Protocol (IP) address is the unique identifying number assigned to every device connected to the internet. An IP address definition is a numeric label assigned to devices that use the internet to communicate. Computers that communicate over the internet or via local networks share information to a specific location using IP addresses.

IP addresses have two distinct versions or standards. The Internet Protocol version 4 (IPv4) address is the older of the two, which has space for up to 4 billion IP addresses and is assigned to all computers. The more recent Internet Protocol version 6 (IPv6) has space for trillions of IP addresses, which accounts for the new breed of devices in addition to computers. There are also several types of IP addresses, including public, private, static, and dynamic IP addresses.

Every device with an internet connection has an IP address, whether it's a computer, laptop, IoT device, or even a toy. IP addresses allow for the efficient transfer of data between two connected devices, allowing machines on different networks to talk to each other.

How does an IP address work?

An IP address helps your device, whatever you are accessing the internet on, locate and retrieve the data or content you request.

Common tasks for an IP address include identifying a host or a network and locating a device. An IP address is not random: it is constructed mathematically, and its allocation is coordinated by the Internet Assigned Numbers Authority (IANA). The full range of IPv4 addresses runs from 0.0.0.0 to 255.255.255.255.

This mathematical assignment gives each device a unique identification that can be used to make a connection to a destination.
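The "math basis" above can be made concrete: an IPv4 address is simply a 32-bit number whose four bytes are written in decimal, separated by dots. A minimal sketch (the helper names to_int and to_dotted are ours, for illustration only):

```python
def to_int(dotted: str) -> int:
    """Pack the four decimal bytes of a dotted-quad address into one 32-bit number."""
    a, b, c, d = (int(x) for x in dotted.split("."))
    return (a << 24) | (b << 16) | (c << 8) | d

def to_dotted(n: int) -> str:
    """Unpack a 32-bit number back into dotted-quad notation."""
    return ".".join(str((n >> s) & 0xFF) for s in (24, 16, 8, 0))

print(to_int("192.0.2.1"))        # 3221225985
print(to_dotted(3221225985))      # 192.0.2.1
print(to_int("255.255.255.255"))  # 4294967295 = 2**32 - 1, the top of the range
```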

Public IP address

A public IP address, or external-facing IP address, applies to the main device people use to connect their business or home internet network to their internet service provider (ISP). In most cases, this will be the router. All devices that connect to a router communicate with other IP addresses using the router’s IP address.

Knowing an external-facing IP address is crucial for people to open ports used for online gaming, email and web servers, media streaming, and creating remote connections.

Private IP address

A private IP address, or internal-facing IP address, is assigned to devices within an office or home intranet (or local area network). The home/office router manages the private IP addresses of the devices that connect to it from within that local network. Network devices are thus mapped from their private IP addresses to public IP addresses by the router.

Private IP addresses are reused across multiple networks, thus preserving valuable IPv4 address space and extending addressability beyond the simple limit of IPv4 addressing (4,294,967,296 or 2^32).

In the IPv6 addressing scheme, every possible device has its own unique identifier assigned by the ISP or primary network organization, which has a unique prefix. Private addressing is possible in IPv6, and when it's used it's called Unique Local Addressing (ULA).

Static IP address

All public and private addresses are defined as static or dynamic. An IP address that a person manually configures and fixes to their device’s network is referred to as a static IP address. A static IP address cannot be changed automatically. An internet service provider may assign a static IP address to a user account. The same IP address will be assigned to that user for every session.

Dynamic IP address

A dynamic IP address is automatically assigned to a network when a router is set up. The Dynamic Host Configuration Protocol (DHCP) assigns the distribution of this dynamic set of IP addresses. The DHCP can be the router that provides IP addresses to networks across a home or an organization.

Each time a user logs into the network, a fresh IP address is assigned from the pool of available (currently unassigned) IP addresses. A user may randomly cycle through several IP addresses across multiple sessions.
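The pool-based assignment described above can be sketched in Python. This toy DhcpPool class is our own simplification, not the real DHCP protocol (which also involves lease times, broadcasts, and acknowledgments); it just shows the "pool of available addresses" idea:

```python
import ipaddress

class DhcpPool:
    """Toy lease pool: hand out the next free host address in a network."""
    def __init__(self, network: str):
        # hosts() skips the network and broadcast addresses.
        self.free = [str(h) for h in ipaddress.ip_network(network).hosts()]
        self.leases = {}  # client id -> leased address

    def request(self, client_id: str) -> str:
        addr = self.free.pop(0)        # take the next unassigned address
        self.leases[client_id] = addr
        return addr

    def release(self, client_id: str) -> None:
        self.free.append(self.leases.pop(client_id))  # return it to the pool

pool = DhcpPool("192.168.1.0/30")   # usable hosts: .1 and .2
print(pool.request("laptop"))       # 192.168.1.1
```

A client that releases its lease may later be handed a different address, which is why dynamic addresses "cycle" across sessions.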

Details

An Internet Protocol address (IP address) is a numerical label such as 192.0.2.1 that is assigned to a device connected to a computer network that uses the Internet Protocol for communication. IP addresses serve two main functions: network interface identification, and location addressing.

Internet Protocol version 4 (IPv4) defines an IP address as a 32-bit number. However, because of the growth of the Internet and the depletion of available IPv4 addresses, a new version of IP (IPv6), using 128 bits for the IP address, was standardized in 1998. IPv6 deployment has been ongoing since the mid-2000s.

IP addresses are written and displayed in human-readable notations, such as 192.0.2.1 in IPv4, and 2001:db8:0:1234:0:567:8:1 in IPv6. The size of the routing prefix of the address is designated in CIDR notation by suffixing the address with the number of significant bits, e.g., 192.0.2.1/24, which is equivalent to the historically used subnet mask 255.255.255.0.

The IP address space is managed globally by the Internet Assigned Numbers Authority (IANA), and by five regional Internet registries (RIRs) responsible in their designated territories for assignment to local Internet registries, such as Internet service providers (ISPs), and other end users. IPv4 addresses were distributed by IANA to the RIRs in blocks of approximately 16.8 million addresses each, but have been exhausted at the IANA level since 2011. Only one of the RIRs still has a supply for local assignments in Africa. Some IPv4 addresses are reserved for private networks and are not globally unique.

Network administrators assign an IP address to each device connected to a network. Such assignments may be on a static (fixed or permanent) or dynamic basis, depending on network practices and software features. Some jurisdictions consider IP addresses to be personal data.

Function

An IP address serves two principal functions: it identifies the host, or more specifically its network interface, and it provides the location of the host in the network, and thus the capability of establishing a path to that host. Its role has been characterized as follows: "A name indicates what we seek. An address indicates where it is. A route indicates how to get there." The header of each IP packet contains the IP address of the sending host and that of the destination host.

IP versions

Two versions of the Internet Protocol are in common use on the Internet today. The original version of the Internet Protocol that was first deployed in 1983 in the ARPANET, the predecessor of the Internet, is Internet Protocol version 4 (IPv4).

By the early 1990s, the rapid exhaustion of IPv4 address space available for assignment to Internet service providers and end-user organizations prompted the Internet Engineering Task Force (IETF) to explore new technologies to expand addressing capability on the Internet. The result was a redesign of the Internet Protocol that eventually became known as Internet Protocol version 6 (IPv6) in 1995. IPv6 technology was in various testing stages until the mid-2000s, when commercial production deployment commenced.

Today, these two versions of the Internet Protocol are in simultaneous use. Among other technical changes, each version defines the format of addresses differently. Because of the historical prevalence of IPv4, the generic term IP address typically still refers to the addresses defined by IPv4. The gap in version sequence between IPv4 and IPv6 resulted from the assignment of version 5 to the experimental Internet Stream Protocol in 1979, which however was never referred to as IPv5.

Other versions v1 to v9 were defined, but only v4 and v6 ever gained widespread use. v1 and v2 were names for TCP protocols in 1974 and 1977, as there was no separate IP specification at the time. v3 was defined in 1978, and v3.1 is the first version in which TCP is separated from IP. v6 is a synthesis of several suggested versions: v6 (Simple Internet Protocol), v7 (TP/IX: The Next Internet), v8 (PIP, the P Internet Protocol), and v9 (TUBA, TCP and UDP with Big Addresses).

Subnetworks

IP networks may be divided into subnetworks in both IPv4 and IPv6. For this purpose, an IP address is recognized as consisting of two parts: the network prefix in the high-order bits and the remaining bits called the rest field, host identifier, or interface identifier (IPv6), used for host numbering within a network. The subnet mask or CIDR notation determines how the IP address is divided into network and host parts.

The term subnet mask is only used within IPv4. Both IP versions however use the CIDR concept and notation. In this, the IP address is followed by a slash and the number (in decimal) of bits used for the network part, also called the routing prefix. For example, an IPv4 address and its subnet mask may be 192.0.2.1 and 255.255.255.0, respectively. The CIDR notation for the same IP address and subnet is 192.0.2.1/24, because the first 24 bits of the IP address indicate the network and subnet.
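Python's standard ipaddress module can confirm the equivalence described above between /24 notation and the mask 255.255.255.0 (a minimal sketch using real stdlib calls):

```python
import ipaddress

# An address with its routing prefix in CIDR notation.
iface = ipaddress.ip_interface("192.0.2.1/24")

print(iface.network)               # 192.0.2.0/24 -- the network part
print(iface.netmask)               # 255.255.255.0 -- the historic subnet-mask form
print(iface.ip in iface.network)   # True -- the host lies inside its own subnet
```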




#2141 2024-05-05 22:06:38

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,467

Re: Miscellany

2143) TCP/IP

Gist

The TCP/IP model defines how devices should transmit data between them and enables communication over networks and large distances. The model represents how data is exchanged and organized over networks.

Summary

What is TCP/IP?

TCP/IP stands for Transmission Control Protocol/Internet Protocol and is a suite of communication protocols used to interconnect network devices on the internet. TCP/IP is also used as a communications protocol in a private computer network -- an intranet or extranet.

The entire IP suite -- a set of rules and procedures -- is commonly referred to as TCP/IP. TCP and IP are the two main protocols, though others are included in the suite. The TCP/IP protocol suite functions as an abstraction layer between internet applications and the routing and switching fabric.

TCP/IP specifies how data is exchanged over the internet by providing end-to-end communications that identify how it should be broken into packets, addressed, transmitted, routed and received at the destination. TCP/IP requires little central management and is designed to make networks reliable with the ability to recover automatically from the failure of any device on the network.

Internet Protocol Version 4 (IPv4) is the primary version used on the internet today. However, due to a limited number of addresses, a newer protocol known as IPv6 was developed in 1998 by the Internet Engineering Task Force (IETF). IPv6 expands the pool of available addresses from IPv4 significantly and is progressively being embraced.

How are TCP and IP different?

The two main protocols in the IP suite serve specific functions and have numerous differences. The key differences between TCP and IP include the following:

TCP

* It ensures a reliable and orderly delivery of packets across networks.
* TCP is a higher-level smart communications protocol that still uses IP as a way to transport data packets, but it also connects computers, applications, web pages and web servers.
* TCP understands holistically the entire stream of data that these assets require to operate and it ensures the entire volume of data needed is sent the first time.
* TCP defines how applications can create channels of communication across a network.
* It manages how a message is assembled into smaller packets before they're transmitted over the internet and reassembled in the right order at the destination address.
* TCP operates at Layer 4, or the transport layer, of the Open Systems Interconnection (OSI) model.
* TCP is a connection-oriented protocol, which means it establishes a connection between the sender and the receiver before delivering data to ensure reliable delivery.
* As it does its work, TCP can also control the size and flow rate of data. It ensures that networks are free of any congestion that could block the receipt of data. An example is an application that wants to send a large amount of data over the internet. If the application only used IP, the data would have to be broken into multiple IP packets. This would require multiple requests to send and receive data, as IP requests are issued per packet.
* With TCP, only a single request to send an entire data stream is needed; TCP handles the rest.
* TCP runs checks to ensure data is delivered. It can detect problems that arise in IP and request retransmission of any data packets that were lost.
* TCP can reorganize packets so they're transmitted in the proper order. This minimizes network congestion by preventing network bottlenecks caused by out-of-order packet delivery.
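The reordering idea in the last bullet can be sketched as follows. This is a toy simplification of ours, not the real TCP state machine: the receiver buffers segments by sequence number and releases them only in order:

```python
def reassemble(segments):
    """segments: list of (sequence_number, payload), possibly out of order.
    Returns the payloads joined in sequence order, as TCP presents them."""
    buffered = dict(segments)      # buffer whatever has arrived so far
    message, expected = [], 0
    while expected in buffered:    # release only the next in-order segment
        message.append(buffered.pop(expected))
        expected += 1
    return "".join(message)

# Segments arrive out of order, yet the stream comes out in order:
print(reassemble([(2, "C"), (0, "A"), (1, "B")]))  # ABC
```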

IP

* IP is a low-level internet protocol that facilitates data communications over the internet.
* IP delivers packets of data that consist of a header, which contains routing information, such as the source and destination of the data and the data payload itself.
* It defines how to address and route each packet to ensure it reaches the right destination. Each gateway computer on the network checks this IP address to determine where to forward the message.
* IP is limited in the amount of data it can send in a single packet. The IPv4 header alone is between 20 and 60 bytes, and the maximum size of a single IP packet, including both the header and the data, is 65,535 bytes. This means that longer streams of data must be broken into multiple packets that are sent independently and then reorganized into the correct order.
* It provides the mechanism for delivering data from one network node to another.
* IP operates at Layer 3, or the network layer, of the OSI model.
* IP is a connectionless protocol, which means it doesn't guarantee delivery, nor does it provide error checking and correction.
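To make the header description above concrete, here is a hedged sketch that hand-builds and unpacks a minimal 20-byte IPv4 header using Python's standard struct and socket modules (the field values are our own example data, not a captured packet):

```python
import struct
import socket

# Fixed IPv4 header fields: version/IHL, TOS, total length, ID,
# flags/fragment offset, TTL, protocol (6 = TCP), checksum, source, destination.
header = struct.pack(
    "!BBHHHBBH4s4s",
    (4 << 4) | 5,                      # version 4, header length 5 * 4 = 20 bytes
    0, 20, 0, 0, 64, 6, 0,
    socket.inet_aton("192.0.2.1"),     # source address, as 4 raw bytes
    socket.inet_aton("198.51.100.7"),  # destination address
)

fields = struct.unpack("!BBHHHBBH4s4s", header)
print(fields[0] >> 4)                # 4 -- the IP version
print(socket.inet_ntoa(fields[8]))   # 192.0.2.1 -- routing info: the source
print(socket.inet_ntoa(fields[9]))   # 198.51.100.7 -- routing info: the destination
```

Every gateway along the path reads exactly these destination bytes to decide where to forward the packet.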

Details

What is TCP?

Transmission Control Protocol (TCP) is a communications standard that enables application programs and computing devices to exchange messages over a network. It is designed to send packets across the internet and ensure the successful delivery of data and messages over networks.

TCP is one of the basic standards that define the rules of the internet and is included within the standards defined by the Internet Engineering Task Force (IETF). It is one of the most commonly used protocols within digital network communications and ensures end-to-end data delivery.

TCP organizes data so that it can be transmitted between a server and a client. It guarantees the integrity of the data being communicated over a network. Before it transmits data, TCP establishes a connection between a source and its destination, which it ensures remains live until communication begins. It then breaks large amounts of data into smaller packets, while ensuring data integrity is in place throughout the process.

As a result, high-level protocols that need to transmit data all use TCP Protocol.  Examples include peer-to-peer sharing methods like File Transfer Protocol (FTP), Secure Shell (SSH), and Telnet. It is also used to send and receive email through Internet Message Access Protocol (IMAP), Post Office Protocol (POP), and Simple Mail Transfer Protocol (SMTP), and for web access through the Hypertext Transfer Protocol (HTTP).

An alternative to TCP in networking is the User Datagram Protocol (UDP), which is used to establish low-latency connections between applications and decrease transmission time. TCP can be an expensive network tool, as it retransmits absent or corrupted packets and protects data delivery with controls like acknowledgments, connection startup, and flow control.

UDP does not provide error correction or packet sequencing, nor does it signal a destination before it delivers data, which makes it less reliable but less expensive. As such, it is a good option for time-sensitive situations, such as Domain Name System (DNS) lookup, Voice over Internet Protocol (VoIP), and streaming media.
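UDP's fire-and-forget style can be illustrated with the standard Python socket API: one datagram over the loopback interface, with no connection setup at all (a minimal sketch, not production code):

```python
import socket

# A receiver bound to an OS-chosen free port on the loopback interface.
recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv.bind(("127.0.0.1", 0))          # port 0 = let the OS pick one
port = recv.getsockname()[1]

# The sender just fires the datagram: no handshake, no delivery guarantee.
send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send.sendto(b"ping", ("127.0.0.1", port))

data, addr = recv.recvfrom(1024)
print(data)                          # b'ping'
send.close(); recv.close()
```

Contrast this with the TCP examples later in the thread, where a connection must be accepted before any data moves.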

What is IP?

The Internet Protocol (IP) is the method for sending data from one device to another across the internet. Every device has an IP address that uniquely identifies it and enables it to communicate with and exchange data with other devices connected to the internet. Today, it is the standard method of communication between internet-connected devices, from servers to mobile devices.

IP is responsible for defining how applications and devices exchange packets of data with each other. It is the principal communications protocol responsible for the formats and rules for exchanging data and messages between computers on a single network or several internet-connected networks. It does this through the Internet Protocol Suite (TCP/IP), a group of communications protocols that are split into four abstraction layers.

IP is the main protocol within the internet layer of the TCP/IP suite. Its main purpose is to deliver data packets between the source application or device and the destination using methods and structures that place tags, such as address information, within data packets.

TCP vs IP: What is the difference?

TCP and IP are separate protocols that work together to ensure data is delivered to its intended destination within a network. IP obtains and defines the address—the IP address—of the application or device the data must be sent to. TCP is then responsible for reliably transporting data through the network architecture and ensuring it gets delivered to the destination application or device that IP has identified. Both technologies working together allow communication between devices over long distances, making it possible to transfer data where it needs to go in the most efficient way possible.

In other words, the IP address is akin to a phone number assigned to a smartphone. TCP is the computer networking version of the technology used to make the smartphone ring and enable its user to talk to the person who called them.

Now that we’ve looked at TCP and IP separately, what is TCP/IP? The two protocols are frequently used together and rely on each other for data to have a destination and safely reach it, which is why the process is regularly referred to as TCP/IP. With the right security protocols in place, the combination of TCP and IP allows users to follow a safe and secure process when they need to move data between two or more devices.

How Does Transmission Control Protocol (TCP)/IP Work?

The TCP/IP model is the default method of data communication on the Internet.  It was developed by the United States Department of Defense to enable the accurate and correct transmission of data between devices. It breaks messages into packets to avoid having to resend the entire message in case it encounters a problem during transmission. Packets are automatically reassembled once they reach their destination. Every packet can take a different route between the source and the destination computer, depending on whether the original route used becomes congested or unavailable.

TCP/IP divides communication tasks into layers that keep the process standardized, without hardware and software providers doing the management themselves. The data packets must pass through four layers before they are received by the destination device, then TCP/IP goes through the layers in reverse order to put the message back into its original format.

As a connection-based protocol, TCP establishes and maintains a connection between applications or devices until they finish exchanging data. It determines how the original message should be broken into packets, numbers and reassembles the packets, and sends them on to other devices on the network, such as routers, security gateways, and switches, then on to their destination. TCP also sends and receives packets from the network layer, handles the transmission of any dropped packets, manages flow control, and ensures all packets reach their destination.

A good example of how this works in practice is when an email is sent using SMTP from an email server. To start the process, the TCP layer in the server divides the message into packets, numbers them, and forwards them to the IP layer, which then transports each packet to the destination email server. When packets arrive, they are handed back to the TCP layer to be reassembled into the original message format and handed back to the email server, which delivers the message to a user’s email inbox.

TCP/IP uses a three-way handshake to establish a connection between a device and a server, which ensures multiple TCP socket connections can be transferred in both directions concurrently. Both the device and server must synchronize and acknowledge packets before communication begins, then they can negotiate, separate, and transfer TCP socket connections.
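The handshake-then-transfer sequence can be illustrated with a minimal loopback echo using Python's standard socket module: connect() and accept() complete the three-way handshake before any application data flows (a sketch, not a protocol implementation):

```python
import socket
import threading

# Server side: listen on an OS-chosen free loopback port.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]

def serve():
    conn, _ = server.accept()        # handshake completes here (SYN, SYN-ACK, ACK)
    conn.sendall(conn.recv(1024))    # echo the received data back
    conn.close()

t = threading.Thread(target=serve)
t.start()

# Client side: connect() triggers the handshake, then data can flow both ways.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))
client.sendall(b"hello")
reply = client.recv(1024)
print(reply)                         # b'hello'
client.close(); t.join(); server.close()
```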




#2142 2024-05-06 18:18:54

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,467

Re: Miscellany

2144) Programming Language

Gist

A programming language is a way for programmers (developers) to communicate with computers. Programming languages consist of a set of rules that allows string values to be converted into various ways of generating machine code, or, in the case of visual programming languages, graphical elements.

Summary

Programming Language

As we know, to communicate with a person we need a common language; similarly, to communicate with computers, programmers need a language, called a programming language.

Programming languages are the tools software engineers use to write computer programs. They are the means of interacting with and commanding computer systems. Numerous distinct programming languages exist, each with its benefits and downsides. Certain languages are more appropriate for particular roles than others. For example, some languages are made for general-purpose programming, while others are made for specific fields like networking, statistics, and web and app development.

Before learning about programming languages, let's understand what language itself is.

What is Language?

Language is a mode of communication used to share ideas and opinions with each other. For example, if we want to teach someone, we need a language that is understandable by both communicators.

What is a Programming Language?

A programming language is a computer language used by programmers (developers) to communicate with computers. It is a set of instructions written in a specific language (C, C++, Java, Python) to perform a specific task.

A programming language is mainly used to develop desktop applications, websites, and mobile applications.
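As a minimal concrete instance of this definition, here is a complete program in one specific language (Python) instructing the computer to perform one specific task:

```python
# A complete, runnable program: store a piece of data, then output it.
greeting = "Hello, world!"   # a variable holding string data
print(greeting)              # an instruction the interpreter executes
```

The same task could be expressed in C, C++, or Java; each language has its own syntax (form) for writing down the same instruction.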

Details

A programming language is a system of notation for writing computer programs.

Programming languages are described in terms of their syntax (form) and semantics (meaning), usually defined by a formal language. Languages usually provide features such as a type system, variables and mechanisms for error handling. An implementation of a programming language in the form of a compiler or interpreter allows programs to be executed, either directly or by producing what's known in programming as an executable.

Computer architecture has strongly influenced the design of programming languages, with the most common type (imperative languages—which implement operations in a specified order) developed to perform well on the popular von Neumann architecture. While early programming languages were closely tied to the hardware, over time they have developed more abstraction to hide implementation details for greater simplicity.

Thousands of programming languages—often classified as imperative, functional, logic, or object-oriented—have been developed for a wide variety of uses. Many aspects of programming language design involve tradeoffs—for example, exception handling simplifies error handling, but at a performance cost. Programming language theory is the subfield of computer science that studies the design, implementation, analysis, characterization, and classification of programming languages.

Definitions

There are a variety of criteria that may be considered when defining what constitutes a programming language.

Computer languages vs programming languages

The term computer language is sometimes used interchangeably with programming language. However, the usage of both terms varies among authors, including the exact scope of each. One usage describes programming languages as a subset of computer languages. Similarly, languages used in computing that have a different goal than expressing computer programs are generically designated computer languages. For instance, markup languages are sometimes referred to as computer languages to emphasize that they are not meant to be used for programming. One way of classifying computer languages is by the computations they are capable of expressing, as described by the theory of computation. The majority of practical programming languages are Turing complete, and all Turing complete languages can implement the same set of algorithms. ANSI/ISO SQL-92 and Charity are examples of languages that are not Turing complete, yet are often called programming languages. However, some authors restrict the term "programming language" to Turing complete languages.

Another usage regards programming languages as theoretical constructs for programming abstract machines and computer languages as the subset thereof that runs on physical computers, which have finite hardware resources. John C. Reynolds emphasizes that formal specification languages are just as much programming languages as are the languages intended for execution. He also argues that textual and even graphical input formats that affect the behavior of a computer are programming languages, despite the fact they are commonly not Turing-complete, and remarks that ignorance of programming language concepts is the reason for many flaws in input formats.

Domain and target

In most practical contexts, a programming language involves a computer; consequently, programming languages are usually defined and studied this way. Programming languages differ from natural languages in that natural languages are only used for interaction between people, while programming languages also allow humans to communicate instructions to machines.

The domain of the language is also worth consideration. Markup languages like XML, HTML, or troff, which define structured data, are not usually considered programming languages. Programming languages may, however, share the syntax with markup languages if a computational semantics is defined. XSLT, for example, is a Turing complete language entirely using XML syntax. Moreover, LaTeX, which is mostly used for structuring documents, also contains a Turing complete subset.

Abstractions

Programming languages usually contain abstractions for defining and manipulating data structures or controlling the flow of execution. The practical necessity that a programming language supports adequate abstractions is expressed by the abstraction principle. This principle is sometimes formulated as a recommendation to the programmer to make proper use of such abstractions.




#2143 2024-05-07 19:17:40

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,467

Re: Miscellany

2145) B School

Gist

What Is a B-School? “B-School” is an abbreviation for business school, which refers to schools specializing in business subjects. These include both undergraduate colleges and graduate schools. The most well-known B-School offering is the Master of Business Administration (MBA) degree program.

Summary

A business school is a higher education institution or professional school that teaches courses leading to degrees in business administration or management. A business school may also be referred to as school of management, management school, school of business administration, college of business, or colloquially b-school or biz school. A business school offers comprehensive education in various disciplines related to the world of business.

Types

There are several forms of business schools, including a school of business, business administration, and management.

* Most of the university business schools consist of faculties, colleges, or departments within the university, and predominantly teach business courses.
* In North America, a business school is often understood to be a university program that offers graduate Master of Business Administration degrees and/or undergraduate bachelor's degrees.
* In Europe and Asia, some universities teach predominantly business courses.
* Privately owned business school which is not affiliated with any university.
* Highly specialized business schools focussing on a specific sector or domain.
* In France, many business schools are public-private partnerships (École consulaire or EESC) largely financed by the public Chambers of Commerce. These schools offer accredited undergraduate and graduate degrees in business from the elite Conférence des Grandes Écoles and have only loose ties, or no ties at all, to any university.

Details

“B-School” is an abbreviation for business school, which refers to schools specializing in business subjects. These include both undergraduate colleges and graduate schools. The most well-known B-School offering is the Master of Business Administration (MBA) degree program.

B-Schools are known for their highly competitive admission standards, with the most sought-after schools regularly rejecting over 90% of applicants. These schools have also been the subject of debate in recent years because of their substantial financial costs (the tuition of some B-Schools can surpass $80,000 per year).

KEY TAKEAWAYS

* “B-School” is a shorthand term that refers to schools at universities that offer business degrees.
* B-Schools offer both undergraduate and graduate programs, although their most famous programs are Master in Business Administration (MBA) degree programs.
* B-Schools can differ significantly in terms of their national and international rankings and their costs of attendance.

Topics of Study at B-Schools

B-Schools are similar to other post-secondary higher education institutions, except that they are focused on subject areas related to business and finance. Common examples include accounting, finance, marketing, and entrepreneurship. In some cases, schools will offer specialized programs in less common areas of study, such as actuarial sciences or taxation law.

Like other institutions, various rankings aim to help students assess the quality and prestige of specific schools. These include rankings published by The Financial Times, The Economist, Forbes, and BusinessWeek. Although the exact placement of schools changes yearly, examples of schools with consistently high rankings include the Stanford Graduate School of Business, the University of Chicago’s Booth School of Business, London Business School, Harvard Business School, and the University of Pennsylvania’s Wharton School.

Although schools at the upper echelon of international B-School rankings will excel in multiple areas, they are often known for having certain areas in which they are particularly strong. For instance, the Wharton School is known for its excellence in finance, whereas Harvard Business School is known for its general managerial education.

Financial Cost of Attending B-Schools

In addition to considering each B-School's prestige and specialization areas, it is also important for prospective students to carefully weigh the costs of attendance against the potential benefits of obtaining a B-School degree. After all, attendance costs can reach above $160,000 for the elite B-Schools (tuition only), and even less prestigious schools will routinely cost over $65,000 annually. For many students, this will require incurring substantial student debt. Student debt can drain a student's financial life for many years or even decades following graduation—but the most prestigious schools generally take less time because of the average salary increase graduates experience.

Annual tuition at the most prestigious business schools is substantial, and living expenses add considerably more. In the following list, the total program costs come from the schools' websites and include living expenses (as of August 2023):

Chicago Booth (two-year): Yearly tuition, $80,961; total program costs, $244,620.

Northwestern University Kellogg (two-year): Yearly tuition, $81,015; total program costs $240,864.

University of Pennsylvania Wharton (two-year): Yearly tuition, $87,370; total program costs, $246,952.

Harvard Business School (two-year): Yearly tuition, $73,440; total program costs, $223,084.

Dartmouth College Tuck (two-year): Yearly tuition, $77,520; total program costs, $247,402.
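As a rough sanity check on the figures above, the non-tuition portion of each two-year program (living expenses, fees, and so on) can be backed out by assuming tuition stays at the stated yearly rate for both years. A minimal sketch:

```python
# Figures as listed above: (yearly tuition, total two-year program cost).
programs = {
    "Chicago Booth": (80_961, 244_620),
    "Kellogg":       (81_015, 240_864),
    "Wharton":       (87_370, 246_952),
    "Harvard":       (73_440, 223_084),
    "Tuck":          (77_520, 247_402),
}

for school, (yearly_tuition, total_cost) in programs.items():
    # Assume tuition is charged at the stated yearly rate for both years.
    non_tuition = total_cost - 2 * yearly_tuition
    print(f"{school}: ~${non_tuition:,} beyond tuition over two years")
```

Under that assumption, roughly $72,000 to $93,000 of each stated total is something other than tuition.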

Pros and Cons of B-Schools

While B-schools are known for their excellent curriculums and prestige in the business and finance industries, several downsides exist.

Pros

* B-schools generally provide networking opportunities that colleges without business-dedicated schools don't.
* A well-known B-school has more credibility than other institutions, which makes your background, experience, and education more attractive.
* Many B-school graduates experience an increase in opportunities with higher pay.

Cons

* When the total costs of attendance—tuition, room and board, books, fees, and living expenses—are considered, other schools may appear more attractive by comparison.
* A degree from a B-school doesn't guarantee an increase in income—even from a prestigious one. It only demonstrates that you have the knowledge and intestinal fortitude to push through a rigorous course of study designed for business leadership.
* There are a limited number of positions available for those with degrees from B-schools, so it's possible to spend an enormous amount of money and not be rewarded with a lucrative position.

What GPA Do You Need for Business School?

In general, a GPA of 3.0 to 3.5 is a "good" score. However, higher GPAs make you more competitive; some schools accept applicants with 3.0 GPAs, while others might require a 3.5 or higher.

Are Business Schools Worth It?

It can be worth it if your interests lie in learning about and analyzing corporate financial performance, studying economics, assessing strengths and weaknesses to create strategies, or any other aspect of upper-level business management. If you're seeking a graduate degree in business purely to make more money, it might not be worth it, because there is no guarantee that you will make more.

Which College Has the Best Business Class?

Many colleges have excellent business curriculums. The most well-known are the Wharton School and Harvard Business School. However, you don't necessarily need to attend the top-tier (and most expensive) schools to get a good business education.

The Bottom Line

B-school is an abbreviation of "business school," referring to any school that focuses on teaching business topics. When people talk about B-schools, they are usually referring to schools respected across industries as having the best business programs. However, a B-school doesn't need to be an expensive one; it only needs to be a school of business at an accredited college or university.

IMAGE_1698565148.webp


It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Online

#2144 2024-05-08 21:13:19

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,467

Re: Miscellany

2146) Fixed deposit

Gist

A fixed deposit (FD) is a tenured deposit account provided by banks or non-bank financial institutions which provides investors a higher rate of interest than a regular savings account, until the given maturity date.

Summary

A fixed deposit (FD), also known as a term deposit or time deposit, is a safe investment option offered by banks and other financial institutions. It allows you to deposit a lump sum of money at a fixed interest rate for a predetermined amount of time and earn assured returns.

You are not permitted to withdraw money during this period without paying a penalty. Generally, FDs have higher interest rates than standard saving accounts. People seeking a guaranteed return on their investment with minimal risk often opt for fixed deposits.

The FD interest rates for the general public range from 2.75% p.a. to 8.25% p.a. for tenures from 7 days up to 10 years. Senior citizens are offered interest rates 0.25% p.a. to 0.75% p.a. higher than those offered to the general public.

Details

A fixed deposit (FD) is a tenured deposit account provided by banks or non-bank financial institutions which provides investors a higher rate of interest than a regular savings account, until the given maturity date. It may or may not require the creation of a separate account. The term fixed deposit is most commonly used in India and the United States. It is known as a term deposit or time deposit in Canada, Australia, New Zealand, and as a bond in the United Kingdom.

A fixed deposit means that the money cannot be withdrawn before maturity, unlike a recurring deposit or a demand deposit. Due to this limitation, some banks offer additional services to FD holders, such as loans against FD certificates at competitive interest rates. Banks may offer lower interest rates under uncertain economic conditions. The tenure of an FD can vary from as little as 7 days to as long as 10 years.

In India these investments can be safer than Post Office Schemes as they are covered by the Deposit Insurance and Credit Guarantee Corporation (DICGC). However, the DICGC guarantees amounts only up to ₹500,000 (about $6,850) per depositor per bank. In India they also offer income tax and wealth tax benefits.

Functioning

Fixed deposits are high-interest-yielding term deposits offered by banks. The most popular form of term deposit is the fixed deposit, while other forms are the recurring deposit and the Flexi Fixed deposit (the latter is actually a combination of demand deposit and fixed deposit).

To compensate for the low liquidity, FDs offer higher rates of interest than savings accounts. The longest permissible term for FDs is 10 years. Generally, the longer the term of the deposit, the higher the rate of interest, though a bank may offer a lower rate for a longer period if it expects the rates at which a nation's central bank lends to banks ("repo rates") to dip in the future.

Usually the interest on FDs is paid every three months from the date of the deposit (e.g. if FD a/c was opened on 15 Feb, the first interest installment would be paid on 15 May). The interest is credited to the customers' Savings bank account or sent to them by cheque. This is a Simple FD. The customer may choose to have the interest reinvested in the FD account. In this case, the deposit is called the Cumulative FD or compound interest FD. For such deposits, the interest is paid with the invested amount on maturity of the deposit at the end of the term.
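The difference between the two payout styles described above can be sketched as follows (quarterly periods, as in the text; the principal and rate are illustrative):

```python
def simple_fd_quarterly_interest(principal, annual_rate):
    """Quarterly payout on a Simple FD: interest for one quarter."""
    return principal * annual_rate / 4

def cumulative_fd_maturity(principal, annual_rate, years):
    """Maturity value of a cumulative (compound-interest) FD,
    with interest reinvested every quarter."""
    quarters = int(years * 4)
    return principal * (1 + annual_rate / 4) ** quarters

# Example: 100,000 at 7% p.a. for 2 years.
print(round(simple_fd_quarterly_interest(100_000, 0.07), 2))  # 1750.0 per quarter
print(round(cumulative_fd_maturity(100_000, 0.07, 2), 2))     # about 114888 at maturity
```

The cumulative FD ends up ahead of eight simple payouts (8 × 1,750 = 14,000 interest) because each quarter's interest itself earns interest.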

Although banks can refuse to repay FDs before the end of the term, they generally don't; repayment before maturity is known as a premature withdrawal. In such cases, interest is paid at the rate applicable at the time of withdrawal. For example, suppose a deposit is made for 5 years at 8% but is withdrawn after 2 years. If the rate applicable on the date of deposit for a 2-year term was 5 percent, the interest will be paid at 5 percent. Banks can also charge a penalty for premature withdrawal.
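The premature-withdrawal rule can be illustrated with a short sketch. The rate card and the use of simple interest here are assumptions for illustration; actual banks apply their own published rates and compounding:

```python
# Illustrative rate card: rate (p.a.) offered on the deposit date, by tenure in years.
RATE_CARD = {1: 0.04, 2: 0.05, 5: 0.08}

def premature_withdrawal_interest(principal, years_held, penalty_rate=0.0):
    """Interest on early withdrawal: paid at the rate that applied, on the
    deposit date, to the tenure actually completed (simple interest for brevity)."""
    applicable = RATE_CARD[years_held] - penalty_rate
    return principal * applicable * years_held

# Booked for 5 years at 8%, withdrawn after 2 years: paid at the 2-year rate (5%).
print(round(premature_withdrawal_interest(100_000, 2), 2))  # 10000.0
```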

Banks issue a separate receipt for every FD because each deposit is treated as a distinct contract. This receipt is known as the Fixed Deposit Receipt (FDR), which has to be surrendered to the bank at the time of renewal or encashment.

Many banks offer the facility of automatic renewal of FDs, where customers do not give new instructions for the matured deposit. On the date of maturity, such deposits are renewed for a term similar to that of the original deposit, at the rate prevailing on the date of renewal.

Income tax regulations require that FD maturity proceeds exceeding Rs 20,000 not be paid in cash. Repayment of such deposits has to be either by an "A/c payee" crossed cheque in the name of the customer or by credit to the customer's savings or current account.

Nowadays, banks offer the facility of Flexi or sweep-in FDs, wherein customers can withdraw money through an ATM, a cheque, or a funds transfer from their FD account. In such cases, whatever interest has accrued on the amount withdrawn is credited to their savings account (the account linked to the FD) and the balance is automatically converted into a new FD. This system helps customers access funds from their FD account in a timely manner in an emergency.

Benefits

* Customers can avail loans against FDs up to 80 to 90 percent of the value of deposits. The rate of interest on the loan could be 1 to 2 percent over the rate offered on the deposit.
* Residents of India can open these accounts for a minimum of seven days.
* Investing in a fixed deposit earns customers a higher interest rate than depositing money in a saving account.
* Tax saving fixed deposits are a type of fixed deposits that allow the investor to save tax under Section 80C of the Income Tax Act.

Taxability

In India, tax is deducted at source by the banks on FDs if the interest paid to a customer at any bank exceeds ₹10,000 in a financial year. This applies whether the interest is paid out or reinvested. This is called Tax Deducted at Source (TDS) and is presently fixed at 10% of the interest. With core banking solutions (CBS), banks can tally a customer's FD holdings across branches, and TDS is applied if the total interest exceeds ₹10,000. Banks issue Form 16A to the customer every quarter as a receipt for Tax Deducted at Source.

However, the tax on interest from fixed deposits is not limited to 10%; it is applicable at the deposit holder's slab rate. If any tax on fixed deposit interest is due after TDS, the holder is expected to declare it in their income tax return and pay it themselves.

If the total income for a year does not fall within the taxable limits, customers can submit Form 15G (below 60 years of age) or Form 15H (60 years of age and above) to the bank when starting the FD and at the start of every financial year to avoid TDS.
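Putting the TDS rules above together, a hedged sketch (threshold and 10% TDS rate as stated in the text; the slab rate is the depositor's own and is a parameter here):

```python
TDS_THRESHOLD = 10_000  # rupees; per the text, TDS applies only above this
TDS_RATE = 0.10

def fd_tax_split(interest, slab_rate):
    """Return (TDS deducted by the bank, balance payable by the holder)."""
    tds = interest * TDS_RATE if interest > TDS_THRESHOLD else 0.0
    total_tax = interest * slab_rate  # tax is ultimately due at the slab rate
    return tds, max(total_tax - tds, 0.0)

tds, balance = fd_tax_split(25_000, 0.30)  # interest of 25,000 at a 30% slab
print(round(tds, 2), round(balance, 2))  # 2500.0 5000.0
```

So the bank deducts 2,500 at source, and the holder still owes 5,000 at return-filing time.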

How bank FD rates of interest vary with Central Bank policy

In certain macroeconomic conditions (particularly during periods of high inflation) a Central Bank adopts a tight monetary policy, that is, it hikes the interest rates at which it lends to banks ("repo rates"). Under such conditions, banks also hike both their lending (i.e. loan) as well as deposit (FD) rates. Under such conditions of high FD rates, FDs become an attractive investment avenue as they offer good returns and are almost completely secure with no risk.

Fixed-Deposits.jpg


It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Online

#2145 2024-05-09 22:54:25

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,467

Re: Miscellany

2147) Business mathematics

Business mathematics is mathematics used by commercial enterprises to record and manage business operations. Commercial organizations use mathematics in accounting, inventory management, marketing, sales forecasting, and financial analysis.

Mathematics typically used in commerce includes elementary arithmetic, elementary algebra, statistics and probability. For some management problems, more advanced mathematics - calculus, matrix algebra, and linear programming - may be applied.

High school

Business mathematics, sometimes called commercial math or consumer math, is a group of practical subjects used in commerce and everyday life. In schools, these subjects are often taught to students who are not planning a university education. In the United States, they are typically offered in high schools and in schools that grant associate's degrees; elsewhere they may be included under business studies. These courses often fulfill the general math credit for high school students.

The emphasis in these courses is on computational skills and their practical application, with practice being predominant. A (U.S.) business math course typically includes a review of elementary arithmetic, including fractions, decimals, and percentages; elementary algebra is also included. The practical applications of these techniques include revenues, checking accounts, price discounts, markups, payroll calculations, simple and compound interest, consumer and business credit, and mortgages.

University level

Undergraduate

Business mathematics comprises mathematics credits taken at an undergraduate level by business students. The course is often organized around the various business sub-disciplines, including the above applications, and usually includes a separate module on interest calculations; the mathematics itself comprises mainly algebraic techniques. Many programs, as mentioned, extend to more sophisticated mathematics. Common credits are Business Calculus and Business Statistics. Programs may also cover matrix operations, as above. At many US universities, business students instead study "finite mathematics", a course combining several applicable topics, including basic probability theory, an introduction to linear programming, some theory of matrices, and introductory game theory; the course sometimes includes a high-level treatment of calculus.

Since these courses are focused on problems from the business world, the syllabus is adjusted relative to standard courses in the mathematics or science fields. The calculus course especially emphasizes differentiation, leading to optimization of costs and revenue. Integration is less emphasized, as its business applications are fewer (some interest calculations, and theoretically aggregating costs and/or revenue) and it is more technically demanding. Relatedly, whereas students in a regular calculus course study trigonometric functions, courses here do not typically cover this area. As regards formal theory, since these are applied courses, very few proofs or derivations are included, unlike in standard courses.
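As an illustration of the optimization emphasis described above, consider maximizing profit under a linear demand curve; all parameter values here are hypothetical:

```python
# Hypothetical linear demand p = a - b*q and cost C(q) = c*q + f.
a, b = 100.0, 2.0   # demand intercept and slope
c, f = 20.0, 50.0   # marginal cost and fixed cost

# Profit P(q) = (a - b*q)*q - (c*q + f).
# Setting P'(q) = a - 2*b*q - c = 0 gives the profit-maximizing quantity:
q_star = (a - c) / (2 * b)
profit = (a - b * q_star) * q_star - (c * q_star + f)
print(q_star, profit)  # 20.0 750.0
```

This is exactly the differentiate-and-solve pattern a business calculus course drills: the fixed cost f shifts profit but not the optimal quantity.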

Note that economics majors, especially those planning to pursue graduate study in the field, are encouraged to instead take regular calculus, as well as linear algebra and other advanced math courses, especially real analysis. Some economics programs instead include a module in "mathematics for economists", providing a bridge between the above "Business Mathematics" courses and mathematical economics and econometrics. Programs in management accounting, operations management, risk management, and credit management may similarly include supplementary coursework in relevant quantitative techniques, generally regression, and often linear programming as above, as well as other optimization techniques and various probability topics.

Postgraduate

At the postgraduate level, generalist management and finance programs include quantitative topics which are foundational for the study in question - often exempting students with an appropriate background. These are usually "interest mathematics" and statistics, both at the above level. MBA programs often also include basic operations research (linear programming, as above) with the emphasis on practice, and may combine the topics as "quantitative analysis"; MSF programs may similarly cover applied / financial econometrics.

More technical Master's in these areas, such as those in management science and in quantitative finance, will entail a deeper, more theoretical study of operations research and econometrics, and extend to further advanced topics such as mathematical optimization and stochastic calculus.  These programs, then, do not include or entail "Business mathematics" per se.

Where mathematical economics is not a degree requirement, graduate economics programs often include "quantitative techniques", which covers (applied) linear algebra, multivariate calculus, and optimization, and may include dynamical systems and analysis;  regardless, econometrics is usually a separate course, and is dealt with in depth. Similarly, Master of Financial Economics (and MSc Finance) programs often include a supplementary or bridging course covering the calculus, optimization, linear algebra, and probability techniques required for specific topics.

800--v112243.jpg


It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Online

#2146 2024-05-10 17:38:30

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,467

Re: Miscellany

2148) Helipad

Gist

A helipad is a landing area or platform for helicopters and powered lift aircraft.

A helicopter landing pad (helipad) is a landing area for helicopters, like a runway for an airplane.

Summary

A "helipad" is a helicopter landing pad, a designated landing area for helicopters. Even though helicopters can usually land on any flat surface, a fabricated helipad provides a clearly marked, designated, hard surface area away from obstacles on which a helicopter can land. Helipads are often constructed out of concrete and are marked with a circle and/or a letter "H" so that rotorcraft can spot them from the air.

Helipads may be located at heliports or airports where fuel, air traffic control, and service facilities for aircraft are available. Helipads may also be located away from such facilities; they are commonly placed on the roofs of hospitals to facilitate medical evacuations (MEDEVACs), and sometimes on corporate office buildings. Large ships and sea vessels sometimes have a helipad on board to allow people to board or leave the vessel while it is still at sea, which is particularly useful in an emergency.

Helipads are not always constructed out of concrete; they can be set up in remote locations to allow rotorcraft to land in hard-to-reach areas. For example, helipads may be constructed in extreme conditions, such as on frozen ice or on mountain tops. Rooftop helipads often display a large two-digit number that represents the weight limit (in thousands of pounds) of the helipad. In addition, a second number may be present, which represents the maximum rotor diameter in feet.

Details

A helipad is a landing area or platform for helicopters and powered lift aircraft.

While helicopters and powered lift aircraft are able to operate on a variety of relatively flat surfaces, a fabricated helipad provides a clearly marked hard surface away from obstacles where such aircraft can land safely.

Larger helipads, intended for use by helicopters and other vertical take-off and landing (VTOL) aircraft, may be called vertiports. An example is Vertiport Chicago, which opened in 2015.

Usage

Helipads may be located at a heliport or airport where fuel, air traffic control and service facilities for aircraft are available.

Most helipads are located away from populated areas due to noise, wind, space, and cost constraints. Some skyscrapers have one on their roofs to accommodate air taxi services. Some basic helipads are built on top of high-rise buildings for evacuation in case of a major fire. Major police departments may use a dedicated helipad at a heliport as a base for police helicopters.

Large ships and oil platforms usually have a helipad on board for emergency use. In such a case, the terms "helicopter deck", "helideck", or "helodeck" are used.

Helipads are common features at hospitals where they serve to facilitate medical evacuation or air ambulance transfers of patients to trauma centers or to accept patients from remote areas without local hospitals or facilities capable of providing the level of emergency medicine required. In urban environments, these heliports are sometimes located on the roof of the hospital.

Rooftop helipads sometimes display a large two-digit number, representing the weight limit (in thousands of pounds) of the pad. A second number may be present, representing the maximum rotor diameter in feet.

Location identifiers are often, but not always, issued for helipads. They may be issued by the appropriate aviation authority. Authorized agencies include the Federal Aviation Administration in the United States, Transport Canada in Canada, the International Civil Aviation Organization, and the International Air Transport Association. Some helipads may have location identifiers from multiple sources, and these identifiers may be of different format and name.

Construction

Helipads are usually constructed out of concrete and are marked with a circle and/or a letter "H" so as to be visible from the air. Sometimes wildfire fighters will construct a temporary helipad from timber to receive supplies in remote areas. Rig mats may be used to build helipads. Landing pads may also be constructed in extreme conditions, such as on ice.

The world's highest helipad, built by India, is located on the Siachen Glacier at a height of 21,000 feet (6400 m) above sea level.

Portability

A portable helipad is a helipad structure with a rugged frame that can be used to land helicopters in any areas with slopes of up to 30 degrees, such as hillsides, riverbeds and boggy areas. Portable helipads can be transported by helicopter or powered-lift to place them where a VTOL needs to land, as long as there are no insurmountable obstructions nearby.

why-H-written-on-Helipad-2024-01-49257a2029697478fcbe5e19cb2cff01.jpg


It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Online

#2147 2024-05-11 19:46:22

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,467

Re: Miscellany

2149) Chromosphere

Gist

The chromosphere is a thin layer of plasma that lies between the Sun's visible surface (the photosphere) and the corona (the Sun's upper atmosphere). It extends for at least 2,000 km (1,200 mi.) above the surface.

Summary

Chromosphere, lowest layer of the Sun’s atmosphere, several thousand kilometres thick, located above the bright photosphere and below the extremely tenuous corona. The chromosphere (colour sphere), named by the English astronomer Sir Joseph Norman Lockyer in 1868, appears briefly as a bright crescent, red with hydrogen light, during solar eclipses when the body of the Sun is almost obscured by the Moon. The chromosphere can be observed at other times across the face of the Sun in filters that let through the red light of the hydrogen alpha line at 6562.8 angstroms. The lower chromosphere is more or less homogeneous. The upper contains comparatively cool columns of ascending gas known as spicules, having between them hotter gas much like that of the corona, into which the upper chromosphere merges gradually. Spicules occur at the edges of the chromosphere’s magnetic network, which traces areas of enhanced field strength. Temperatures in the chromosphere range from about 4,500 to 100,000 Kelvins (K), increasing with altitude; the mean temperature is about 6,000 K. Solar prominences are primarily chromospheric phenomena.

Details

A chromosphere ("sphere of color") is the second layer of a star's atmosphere, located above the photosphere and below the solar transition region and corona. The term usually refers to the Sun's chromosphere, but not exclusively.

In the Sun's atmosphere, the chromosphere is roughly 3,000 to 5,000 kilometers (1,900 to 3,100 miles) in height, or slightly more than 1% of the Sun's radius at maximum thickness. It possesses a homogeneous layer at the boundary with the photosphere. Hair-like jets of plasma, called spicules, rise from this homogeneous region and through the chromosphere, extending up to 10,000 km (6,200 mi) into the corona above.

The chromosphere has a characteristic red color due to electromagnetic emissions in the Hα spectral line. Information about the chromosphere is primarily obtained by analysis of its emitted electromagnetic radiation. The chromosphere is also visible in the light emitted by ionized calcium, Ca II, in the violet part of the solar spectrum at a wavelength of 393.4 nanometers (the Calcium K-line).

Chromospheres have also been observed on stars other than the Sun. On large stars, chromospheres sometimes make up a significant proportion of the entire star. For example, the chromosphere of supergiant star Antares has been found to be about 2.5 times larger in thickness than the star's radius.

Network

Images taken in typical chromospheric lines show the presence of brighter cells, usually referred to as the network, while the surrounding darker regions are named the internetwork. They look similar to the granules commonly observed on the photosphere due to heat convection.

On other stars

Chromospheres are present on almost all luminous stars other than white dwarfs. They are most prominent and magnetically active on lower main-sequence stars of F and later spectral types, on brown dwarfs, and on giant and subgiant stars.

A spectroscopic measure of chromospheric activity on other stars is the Mount Wilson S-index.

sun_regions_900x640.jpg


It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Online

#2148 2024-05-12 21:50:40

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,467

Re: Miscellany

2150) Proxy server

Gist

A proxy server is a system or router that provides a gateway between users and the internet. Therefore, it helps prevent cyber attackers from entering a private network. It is a server, referred to as an “intermediary” because it goes between end-users and the web pages they visit online.

Summary

In computer networking, a proxy server is a server application that acts as an intermediary between a client requesting a resource and the server providing that resource. It improves privacy, security, and performance in the process.

Instead of connecting directly to a server that can fulfill a request for a resource, such as a file or web page, the client directs the request to the proxy server, which evaluates the request and performs the required network transactions. This serves as a method to simplify or control the complexity of the request, or provide additional benefits such as load balancing, privacy, or security. Proxies were devised to add structure and encapsulation to distributed systems. A proxy server thus functions on behalf of the client when requesting service, potentially masking the true origin of the request to the resource server.

Details

A proxy server is a system or router that provides a gateway between users and the internet. Therefore, it helps prevent cyber attackers from entering a private network. It is a server, referred to as an “intermediary” because it goes between end-users and the web pages they visit online.

When a computer connects to the internet, it uses an IP address. This is similar to your home’s street address, telling incoming data where to go and marking outgoing data with a return address for other devices to authenticate. A proxy server is essentially a computer on the internet that has an IP address of its own.

Proxy Servers and Network Security

Proxies provide a valuable layer of security for your computer. They can be set up as web filters or firewalls, protecting your computer from internet threats like malware.

This extra security is also valuable when coupled with a secure web gateway or other email security products. This way, you can filter traffic according to its level of safety or how much traffic your network—or individual computers—can handle.

How to use a proxy?

Some people use proxies for personal purposes, such as hiding their location while watching movies online. For a company, however, they can be used to accomplish several key tasks, such as:

1. Improve security
2. Secure employees’ internet activity from people trying to snoop on them
3. Balance internet traffic to prevent crashes
4. Control the websites employees and staff access in the office
5. Save bandwidth by caching files or compressing incoming traffic

How a Proxy Works

Because a proxy server has its own IP address, it acts as a go-between for a computer and the internet. Your computer knows this address, and when you send a request on the internet, it is routed to the proxy, which gets the response from the web server and forwards the page data to your computer's browser, such as Chrome, Safari, Firefox, or Microsoft Edge.

How to Get a Proxy

There are hardware and software versions. Hardware proxies sit between your network and the internet, where they receive, send, and forward data from the web. Software proxies are typically hosted by a provider or reside in the cloud. You download and install an application on your computer that facilitates interaction with the proxy.

Often, a software proxy can be obtained for a monthly fee. Sometimes, they are free. The free versions tend to offer users fewer addresses and may only cover a few devices, while the paid proxies can meet the demands of a business with many devices.

How Is the Server Set Up?

To get started with a proxy server, you have to configure it on your computer, device, or network. Each operating system has its own setup procedures, so check the steps required for your computer or network.

In most cases, however, setup means using an automatic configuration script. If you want to do it manually, there will be options to enter the IP address and the appropriate port.
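As an illustration of manual setup, here is a minimal Python sketch of pointing a client at a proxy by IP address and port. The address used is from a documentation-reserved range, not a real proxy server:

```python
import urllib.request

# Hypothetical proxy address (203.0.113.0/24 is reserved for documentation).
PROXY = "http://203.0.113.10:8080"

# Route both HTTP and HTTPS requests through the proxy.
handler = urllib.request.ProxyHandler({"http": PROXY, "https": PROXY})
opener = urllib.request.build_opener(handler)

# Inspect the configuration rather than making a live request,
# since the address above is not a real server.
print(handler.proxies["https"])  # http://203.0.113.10:8080
```

Calling `opener.open(url)` would then send each request to the proxy instead of directly to the destination server.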

How Does the Proxy Protect Computer Privacy and Data?

A proxy server performs the function of a firewall and filter. The end-user or a network administrator can choose a proxy designed to protect data and privacy. The proxy examines the data going in and out of your computer or network and then applies rules so you don't have to expose your digital address to the world. Only the proxy's IP address is seen by hackers or other bad actors. Without your personal IP address, people on the internet do not have direct access to your personal data, schedules, apps, or files.

With it in place, web requests go to the proxy, which then reaches out and gets what you want from the internet. If the server has encryption capabilities, passwords and other personal data get an extra tier of protection.

Benefits of a Proxy Server

Proxies come with several benefits that can give your business an advantage:

1. Enhanced security: Can act like a firewall between your systems and the internet. Without them, hackers have easy access to your IP address, which they can use to infiltrate your computer or network.
2. Private browsing, watching, listening, and shopping: Use different proxies to help you avoid getting inundated with unwanted ads or the collection of IP-specific data. With a proxy, site browsing is better protected and much harder to track.
3. Access to location-specific content: You can designate a proxy server with an address associated with another country. You can, in effect, make it look like you are in that country and gain full access to all the content computers in that country are allowed to interact with. For example, the technology can allow you to open location-restricted websites by using local IP addresses of the location you want to appear to be in.
4. Prevent employees from browsing inappropriate or distracting sites: You can use it to block access to websites that run contrary to your organization’s principles. Also, you can block sites that typically end up distracting employees from important tasks. Some organizations block social media sites like Facebook and others to remove time-wasting temptations.
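A filtering proxy of the kind described in point 4 typically checks each requested host against a blocklist before forwarding the request. A minimal sketch of that check, with a hypothetical domain list:

```python
# Hypothetical blocklist; a real deployment would load this from policy config
BLOCKED_DOMAINS = {"facebook.com", "games.example.com"}

def is_blocked(host: str) -> bool:
    """Return True if host matches a blocked domain or any subdomain of it."""
    host = host.lower().rstrip(".")
    return any(host == d or host.endswith("." + d) for d in BLOCKED_DOMAINS)

# is_blocked("www.facebook.com")  -> True  (subdomain of a blocked domain)
# is_blocked("docs.python.org")   -> False
```

The suffix check matters: matching on substrings alone would wrongly block unrelated hosts such as "notfacebook.com".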



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.


#2149 2024-05-13 17:15:19

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,467

Re: Miscellany

2151) Orange Juice

Details

Not all orange juice is made the same: some products contain added sugars or only a small percentage of real juice, which reduces their nutritional value. Fresh-squeezed or 100% orange juice are healthier options.

Orange juice is the most popular fruit juice worldwide and has long been a breakfast staple.

Television commercials and marketing slogans portray this drink as unquestionably natural and healthy.

Yet, some scientists and health experts are concerned that this sweet beverage could harm your health.

From the Orchard to Your Glass

Most store-bought types of orange juice aren’t made by simply squeezing fresh-picked oranges and pouring the juice into bottles or cartons.

Rather, they’re produced through a multi-step, rigorously controlled process, and the juice can be stored in large tanks for up to a year before packaging.

First, oranges are washed and squeezed by a machine. Pulp and oils are removed. The juice is heat-pasteurized to inactivate enzymes and kill microbes that could otherwise cause deterioration and spoilage.

Next, some of the oxygen is removed, which helps reduce oxidative damage to vitamin C during storage. Juice to be stored as frozen concentrate is evaporated to remove most of the water.

Unfortunately, these processes also remove compounds that provide aroma and flavor. Some of them are later added back to the juice from carefully blended flavor packs.

Finally, before packaging, juice from oranges harvested at different times may be mixed to help minimize variations in quality. Pulp, which undergoes further processing after extraction, is added back to some juices.

SUMMARY

Supermarket orange juice isn’t the simple product it may appear to be. It undergoes complex, multi-step processing and can be stored in large tanks for up to a year before being packaged for sale in stores.

Orange Juice vs Whole Oranges

Orange juice and whole oranges are nutritionally similar, but there are some important differences.

Most notably, compared to a whole orange, a serving of orange juice has significantly less fiber and about twice the calories and carbs — which are mostly fruit sugar.

The nutrient content of whole oranges and juice is similar. Both are excellent sources of vitamin C — which supports immune health — and a good source of folate — which helps reduce the risk of certain birth defects in pregnancy.

However, juice would be even higher in these nutrients if some weren’t lost during processing and storage. For example, in one study, store-bought orange juice had 15% less vitamin C and 27% less folate than home-squeezed orange juice.

Though not listed on nutrition labels, oranges and orange juice are also rich in flavonoids and other beneficial plant compounds. Some of these are reduced during orange juice processing and storage.

What’s more, one study found that — compared to unprocessed orange juice — pasteurized orange juice had 26% less antioxidant activity immediately after heat processing and 67% less antioxidant activity after about a month in storage.

SUMMARY

An 8-ounce (240-ml) serving of orange juice has about twice the calories and sugar of a whole orange. Their vitamin and mineral content is similar, but juice loses some vitamins and beneficial plant compounds during processing and storage.

Potential Downsides

Though orange juice is linked to some health benefits, it also has drawbacks that are mainly linked to its calorie content and effects on blood sugar levels.

High in Calories

Fruit juice is less filling than whole fruits and quick to drink, increasing your risk of overeating and weight gain.

What’s more, studies show that when you drink calorie-rich beverages, such as orange juice, you don’t necessarily eat less food overall and may consume more calories than you would have without the juice.

Large observational studies in adults have linked each one-cup (240-ml) daily serving of 100% fruit juice with weight gain of 0.5–0.75 pounds (0.2–0.3 kg) over four years.

Additionally, when adults and teens drank two cups (500 ml) of orange juice with breakfast, it decreased their body’s fat burning after meals by 30% compared to drinking water. This may be partly due to the sugary juice stimulating fat production in the liver.

Perhaps most concerning are the effects of orange juice in children, as they’re the top consumers of juice and juice drinks.

Orange juice and other sugary drinks can contribute to excess calorie intake in children, as well as tooth decay. Diluting orange juice doesn’t necessarily decrease dental risks, though it can reduce calorie intake.

May Raise Blood Sugar Levels

Orange juice could also increase your blood sugar more than whole oranges.

The glycemic load — which is a measure of how a food’s carb quality and quantity affect blood sugar levels — ranges from 3–6 for whole oranges and 10–15 for orange juice.

The higher the glycemic load, the more likely a food is to raise your blood sugar.
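Glycemic load is conventionally computed as the glycemic index times the grams of carbohydrate per serving, divided by 100. The GI and carbohydrate figures below are illustrative assumptions chosen to land in the ranges quoted above, not measured values:

```python
def glycemic_load(glycemic_index, carbs_g):
    """Standard glycemic-load formula: GL = GI * carbs / 100."""
    return glycemic_index * carbs_g / 100

# Illustrative (assumed) values for one serving:
glycemic_load(43, 11)  # whole orange: 4.73, within the 3-6 range
glycemic_load(50, 26)  # orange juice: 13.0, within the 10-15 range
```

The formula makes the text's point concrete: juice scores higher mainly because a serving packs more than twice the carbohydrate of a whole orange.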

To help overcome some of these drawbacks of orange juice, scientists have tested the benefits of adding orange pomace — fiber and flavonoid-rich remnants of oranges retrieved from the segments, broken pulp and core — to juice.

Preliminary human studies suggest that the addition of pomace to orange juice may help reduce its blood sugar impact and improve feelings of fullness.

However, more research is needed, and pomace-enriched orange juice isn’t available in stores yet.

SUMMARY

Drinking orange juice isn’t very filling and may contribute to excess calorie intake and weight gain. It may also raise your blood sugar more than a whole orange and can increase your risk of dental decay.

The Bottom Line

Though nutritionally similar to whole oranges, orange juice provides very little fiber but twice the calories and sugar.

It may be an easy way to reach your recommended fruit intake but can cause blood sugar spikes and even weight gain.

It’s best to limit yourself to no more than 8 ounces (240 ml) per day.

Even better, if you can, opt for whole oranges over juice whenever possible.

Additional Information

Orange juice is a liquid extract of the orange tree fruit, produced by squeezing or reaming oranges. It comes in several different varieties, including blood orange, navel orange, Valencia orange, clementine, and tangerine. As well as variations in the oranges used, some varieties include differing amounts of juice vesicles, known as "pulp" in American English and "(juicy) bits" in British English. These vesicles contain the juice of the orange and can be left in or removed during the manufacturing process. How juicy these vesicles are depends upon many factors, such as species, variety, and season. In American English, the beverage name is often abbreviated as "OJ".

Commercial orange juice with a long shelf life is made by pasteurizing the juice and removing the oxygen from it. This removes much of the taste, necessitating the later addition of a flavor pack, generally made from orange products. Additionally, some juice is further processed by drying and later rehydrating the juice, or by concentrating the juice and later adding water to the concentrate.

The health value of orange juice is debatable: it has a high concentration of vitamin C, but also a very high concentration of simple sugars, comparable to soft drinks. As a result, some government nutritional advice has been adjusted to encourage substitution of orange juice with raw fruit, which is digested more slowly, and limit daily consumption.




#2150 2024-05-14 17:07:53

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,467

Re: Miscellany

2152) Pineapple Juice

Details

Pineapple juice contains a rich assortment of vitamins, minerals, and antioxidants. Possible health benefits of pineapple juice include boosting the immune system, aiding digestion, promoting healthy skin, and more.

People have long used pineapple as a home remedy for digestive issues and inflammation.

Researchers have also looked into its potential immune benefits and its ability to reduce healing time.

In this article, we examine some of the benefits that pineapple juice can provide, as well as some of the precautions that people should take. We also offer some tips on adding pineapple juice into the diet.

1. Boosting the immune system

Pineapple juice may help boost the immune system.

In a 2014 study that took place in the Philippines, researchers examined the effects of pineapple on 98 children of school age.

The children who ate canned pineapple every day developed fewer viral and bacterial infections than those who did not eat any of this fruit.

The participants who did develop an infection in the group that ate pineapple had a shorter recovery time.

While this is a limited study, the results suggest an association between pineapple intake and a more effective immune response to infection.

2. Aiding digestion

Bromelain is an enzyme that is present in the pineapple stem and juice. It helps the body break down and digest proteins. In capsule form, bromelain may reduce swelling, bruising, healing time, and pain after surgery, according to a clinical trial from 2018.

However, consuming pineapple and pineapple juice will not provide enough bromelain to heal wounds as a standalone medical treatment.

Standard procedures in the processing of pineapple juice severely decrease the amount of bromelain that it provides.

Despite this, when people consume it as part of a high fiber diet that is rich in fruits and vegetables, pineapple juice might promote improved digestion.

3. Fighting cancer

In a laboratory study from 2015, researchers exposed cancer cells to fresh pineapple juice. They found that juice from the core, stem, and flesh suppressed the growth of ovarian and colon cancer cells.

However, further studies in humans are necessary before scientists can confirm any connection between pineapple juice and cancer prevention or treatment.

Pineapples also provide a good quantity of beta carotene. In a 2014 study with 893 participants, people who ate more foods containing beta carotene showed signs of a reduced risk of colon cancer.

4. Promoting healthy skin

Pineapple juice contains vitamin C and beta carotene. These antioxidants can help reduce wrinkles, improve overall skin texture, and minimize skin damage from sun and pollution exposure.

Vitamin C also helps with the formation of collagen, which is a common protein in the body that gives the skin its strength and structure.

5. Supporting eye health

The vitamin C content of pineapples may help preserve eye health.

A 2016 study found that diets rich in vitamin C can reduce the risk of cataract progression by one-third. The researchers examined the effects of these diets on 1,027 pairs of female twins in the United Kingdom.

The fluid inside the eye contains vitamin C. A diet containing plenty of vitamin C may help maintain that fluid and prevent the breakdown that leads to cataracts.

Nutrition

According to the United States Department of Agriculture, one cup of canned, unsweetened pineapple juice weighing 250 grams (g) contains:

* 132 calories
* 0.9 g of protein
* 0.3 g of fat
* 32.2 g of carbohydrate
* 0.5 g of fiber
* 25 g of sugar

One cup of pineapple juice is also an abundant source of several vital vitamins and minerals, including:

* manganese
* vitamin C
* thiamine
* vitamin B-6
* folate

Pineapple also contains the following nutrients:

* potassium
* magnesium
* copper
* beta-carotene

Diet

People should pay attention to serving size when pouring out fruit juices. Although these drinks are rich in vitamins and minerals, they also contain high levels of carbohydrates and sugars.

Pineapple juice has a naturally sweet but tart flavor. It is most healthful to choose pineapple juice that does not contain added sugar.

People can make pineapple juice at home to get the most nutritional benefit from it. Processing and storing juice often decreases its nutritional content.

By making juice at home, a person can be sure that it contains no added preservatives or sweeteners and that they are getting the maximum nutritional benefit from the ripe fruit.

Precautions

Some people may experience tenderness or discomfort in the mouth, lips, or tongue after consuming pineapple juice due to the presence of bromelain.

Exposure to extremely high amounts of bromelain can cause rashes, vomiting, and diarrhea.

Bromelain can also interfere with certain medications, including some drugs in the following classes:

* antibiotics
* blood thinners
* antidepressants
* anticonvulsants

People who take these medications regularly should speak with their doctor about which foods they should exclude from their diet.

The acidity in pineapples and other tropical or citrus fruits can cause an increase in heartburn or reflux symptoms in people who have gastroesophageal reflux disease (GERD).

Consuming too much potassium can be harmful to individuals whose kidneys are not fully functional. If a person’s kidneys are unable to remove excess potassium from the blood, this mineral can cause a condition called hyperkalemia. This medical problem can be fatal in some people.

Excess potassium can also interfere with beta-blockers, a medication for heart disease and anxiety.

People with a latex allergy are more likely than others to be allergic to pineapple. Latex allergy symptoms can range from mild to severe and may include the following:

* rashes or hives
* swelling
* a sore throat
* stomach cramps
* wheezing
* itchy eyes

Additional Information

Pineapple juice is a juice made from pressing the natural liquid out from the pulp of the pineapple (a fruit from a tropical plant). Numerous pineapple varieties may be used to manufacture commercial pineapple juice, the most common of which are Smooth Cayenne, Red Spanish, Queen, and Abacaxi. In manufacturing, pineapple juice is typically canned.

It is used as a single or mixed juice beverage, and for smoothies, cocktails, culinary flavor, and as a meat tenderizer. Pineapple juice is a main ingredient in the piña colada and the tepache.




Board footer

Powered by FluxBB