Math Is Fun Forum

  Discussion about math, puzzles, games and fun.


#1 Re: Dark Discussions at Cafe Infinity » crème de la crème » Today 18:45:44

2429) Frederick Sanger

Gist:

Life

Frederick Sanger was born in the small village of Rendcomb, England. His father was a doctor who, after converting to Quakerism, brought up his sons as Quakers. Frederick Sanger studied at the University of Cambridge, where he received his PhD in 1943, and he remained in Cambridge for the rest of his career. He was married and had three children.

Work

Proteins, which are molecules made up of chains of amino acids, play a pivotal role in life processes in our cells. One important protein is insulin, a hormone that regulates sugar content in blood. Beginning in the 1940s, Frederick Sanger studied the composition of the insulin molecule. He used acids to break the molecule into smaller parts, which were separated from one another with the help of electrophoresis and chromatography. Further analyses determined the amino acid sequences in the molecule’s two chains, and in 1955 Sanger identified how the chains are linked together.

Summary

Frederick Sanger (born August 13, 1918, Rendcombe, Gloucestershire, England—died November 19, 2013, Cambridge) was an English biochemist who was twice the recipient of the Nobel Prize for Chemistry. He was awarded the prize in 1958 for his determination of the structure of the insulin molecule. He shared the prize (with Paul Berg and Walter Gilbert) in 1980 for his determination of base sequences in nucleic acids. Sanger was the fourth two-time recipient of the Nobel Prize.

Education

Sanger was the middle child of Frederick Sanger, a medical practitioner, and Cicely Crewsdon Sanger, the daughter of a wealthy cotton manufacturer. The family expected him to follow in his father’s footsteps and become a medical doctor. After much thought, he decided to become a scientist. In 1936 Sanger entered St. John’s College, Cambridge. He initially concentrated on chemistry and physics, but he was later attracted to the new field of biochemistry. He received a bachelor’s degree in 1939 and stayed at Cambridge an additional year to take an advanced course in biochemistry. He and Joan Howe married in 1940 and subsequently had three children.

Because of his Quaker upbringing, Sanger was a conscientious objector and was assigned as an orderly to a hospital near Bristol when World War II began. He soon decided to visit Cambridge to see if he could enter the doctoral program in biochemistry. Several researchers there were interested in having a student, especially one who did not need money. He studied lysine metabolism with biochemist Albert Neuberger. They also had a project in support of the war effort, analyzing nitrogen from potatoes. Sanger received a doctorate in 1943.

Insulin research

Biochemist Albert C. Chibnall and his protein research group moved from Imperial College in London to the safer wartime environment of the biochemistry department at Cambridge. Two schools of thought existed among protein researchers at the time. One group thought proteins were complex mixtures that would not readily lend themselves to chemical analysis. Chibnall was in the other group, which considered a given protein to be a distinct chemical compound.

Chibnall was studying insulin when Sanger joined the group. At Chibnall’s suggestion, Sanger set out to identify and quantify the free-amino groups of insulin. Sanger developed a method using dinitrofluorobenzene to produce yellow-coloured derivatives of amino groups (see amino acid). Information about a new separation technique, partition chromatography, had recently been published. In a pattern that typified Sanger’s career, he immediately recognized the utility of the new technique in separating the hydrolysis products of the treated protein. He identified two terminal amino groups for insulin, phenylalanine and glycine, suggesting that insulin is composed of two types of chains. Working with his first graduate student, Rodney Porter, Sanger used the method to study the amino terminal groups of several other proteins. (Porter later shared the 1972 Nobel Prize for Physiology or Medicine for his work in determining the chemical structure of antibodies.)

On the assumption that insulin chains are held together by disulphide linkages, Sanger oxidized the chains and separated two fractions. One fraction had phenylalanine at its amino terminus; the other had glycine. Whereas complete acid hydrolysis degraded insulin to its constituent amino acids, partial acid hydrolysis generated insulin peptides composed of several amino acids. Using another recently introduced technique, paper chromatography, Sanger was able to sequence the amino-terminal peptides of each chain, demonstrating for the first time that a protein has a specific sequence at a specific site. A combination of partial acid hydrolysis and enzymatic hydrolysis allowed Sanger and the Austrian biochemist Hans Tuppy to determine the complete sequence of amino acids in the phenylalanine chain of insulin. Similarly, Sanger and the Australian biochemist E.O.P. Thompson determined the sequence of the glycine chain.
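
The logic of piecing a complete sequence together from short overlapping fragments is, at heart, an assembly problem. As a purely illustrative Python sketch (the fragments, the greedy strategy, and the helper names are hypothetical, not Sanger's actual procedure), overlapping peptide fragments can be merged back into one sequence like this:

# Illustrative only: greedy assembly of a short peptide from overlapping
# fragments, mimicking the reasoning behind partial-hydrolysis sequencing.
# Fragment strings use one-letter amino acid codes and are hypothetical.

def best_overlap(a, b):
    """Length of the longest suffix of a that is also a prefix of b."""
    for k in range(min(len(a), len(b)), 0, -1):
        if a.endswith(b[:k]):
            return k
    return 0

def assemble(fragments):
    """Repeatedly merge the pair of fragments with the largest overlap."""
    pieces = list(fragments)
    while len(pieces) > 1:
        best_k, best_i, best_j = 0, None, None
        for i, a in enumerate(pieces):
            for j, b in enumerate(pieces):
                if i != j and best_overlap(a, b) > best_k:
                    best_k, best_i, best_j = best_overlap(a, b), i, j
        if best_k == 0:
            break                      # no overlaps left to exploit
        merged = pieces[best_i] + pieces[best_j][best_k:]
        pieces = [p for idx, p in enumerate(pieces) if idx not in (best_i, best_j)]
        pieces.append(merged)
    return pieces

fragments = ["GIVEQ", "EQCCA", "CCASV"]    # hypothetical overlapping peptides
print(assemble(fragments))                 # -> ['GIVEQCCASV']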

Two problems remained: the distribution of the amide groups and the location of the disulphide linkages. With the completion of those two puzzles in 1954, Sanger had deduced the structure of insulin. For being the first person to sequence a protein, Sanger was awarded the 1958 Nobel Prize for Chemistry.

Sanger and his coworkers continued their studies of insulin, sequencing insulin from several other species and comparing the results. Utilizing newly introduced radiolabeling techniques, Sanger mapped the amino acid sequences of the active centres from several enzymes. One of these studies was conducted with another graduate student, Argentine-born immunologist César Milstein. (Milstein later shared the 1984 Nobel Prize for Physiology or Medicine for discovering the principle for the production of monoclonal antibodies.)

RNA research

In 1962 the Medical Research Council opened its new laboratory of molecular biology in Cambridge. The Austrian-born British biochemist Max Perutz, British biochemist John Kendrew, and British biophysicist Francis Crick moved to the new laboratory. Sanger joined them as head of the protein division. It was a banner year for the group, as Perutz and Kendrew shared the 1962 Nobel Prize for Chemistry and Crick shared the 1962 Nobel Prize for Physiology or Medicine with the American geneticist James D. Watson and the New Zealand-born British biophysicist Maurice Wilkins for their discoveries concerning the molecular structure of DNA (deoxyribonucleic acid).

Sanger’s interaction with nucleic acid groups at the new laboratory led to his pursuing studies on ribonucleic acid (RNA). RNA molecules are much larger than proteins, so obtaining molecules small enough for technique development was difficult. The American biochemist Robert W. Holley and his coworkers were the first to sequence RNA when they sequenced alanine-transfer RNA. They used partial hydrolysis methods somewhat like those Sanger had used for insulin. Unlike other RNA types, transfer RNAs have many unusual nucleotides. This partial hydrolysis method would not work well with other RNA molecules, which contain only four types of nucleotides, so a new strategy was needed.

The goal of Sanger’s lab was to sequence a messenger RNA and determine the genetic code, thereby solving the puzzle of how groups of nucleotides code for amino acids. Working with British biochemists George G. Brownlee and Bart G. Barrell, Sanger developed a two-dimensional electrophoresis method for sequencing RNA. By the time the sequence methods were worked out, the code had been broken by other researchers, mainly the American biochemist Marshall Nirenberg and the Indian-born American biochemist Har Gobind Khorana, using in vitro protein synthesis techniques. The RNA sequence work of Sanger’s group did confirm the genetic code.

DNA research

By the early 1970s Sanger was interested in deoxyribonucleic acid (DNA). DNA sequence studies had not developed because of the immense size of DNA molecules and the lack of suitable enzymes to cleave DNA into smaller pieces. Building on the enzyme copying approach used by the Swiss chemist Charles Weissmann in his studies on bacteriophage RNA, Sanger began using the enzyme DNA polymerase to make new strands of DNA from single-strand templates, introducing radioactive nucleotides into the new DNA. DNA polymerase requires a primer that can bind to a known region of the template strand. Early success was limited by the lack of suitable primers. Sanger and British colleague Alan R. Coulson developed the “plus and minus” method for rapid DNA sequencing. It represented a radical departure from earlier methods in that it did not utilize partial hydrolysis. Instead, it generated a series of DNA molecules of varying lengths that could be separated by using polyacrylamide gel electrophoresis. For both plus and minus systems, DNA was synthesized from templates to generate random sets of DNA molecules from very short to very long. When both plus and minus sets were separated on the same gel, the sequence could be read from either system, one confirming the other. In 1977 Sanger’s group used this system to deduce most of the DNA sequence of bacteriophage ΦX174, the first complete genome to be sequenced.

A few problems remained with the plus and minus system. Sanger, Coulson, and British colleague Steve Nicklen developed a similar procedure using dideoxynucleotide chain-terminating inhibitors. DNA was synthesized until an inhibitor molecule was incorporated into the growing DNA chain. Using four reactions, each with a different inhibitor, sets of DNA fragments were generated ending in every nucleotide. For example, in the A reaction, a series of DNA fragments ending in A (adenine) was generated. In the C reaction, a series of DNA fragments ending in C (cytosine) was generated, and so on for G (guanine) and T (thymine). When the four reactions were separated side by side on a gel and an autoradiograph developed, the sequence was read from the film. Sanger and his coworkers used the dideoxy method to sequence human mitochondrial DNA. For his contributions to DNA sequencing methods, Sanger shared the 1980 Nobel Prize for Chemistry. He retired in 1983.
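
The readout of the dideoxy method can be pictured as four lists of fragment lengths, one per terminating base, read off the gel from shortest to longest. The Python sketch below is a toy simulation of that readout (the template is made up, and it glosses over the fact that the synthesized strand is the complement of the template), not laboratory software:

# Toy simulation of reading a chain-termination (dideoxy) gel.
# The template is hypothetical, and strand complementarity is ignored for
# simplicity: fragment lengths are treated as reporting the template itself.

def run_reactions(template):
    """For each base, collect the lengths of fragments ending in that base."""
    lanes = {base: [] for base in "ACGT"}
    for length, base in enumerate(template, start=1):
        lanes[base].append(length)
    return lanes

def read_gel(lanes):
    """Read the sequence from the shortest fragment to the longest."""
    calls = sorted((length, base) for base, lengths in lanes.items()
                   for length in lengths)
    return "".join(base for _, base in calls)

template = "ATGCGTACCTGA"                   # hypothetical sequence
lanes = run_reactions(template)
print(lanes)                                # e.g. lane 'A': [1, 7, 12]
print(read_gel(lanes) == template)          # True: the read recovers the sequence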

Additional Honors

Sanger’s additional honours included election as a fellow of the Royal Society (1954), being named a Commander of the Order of the British Empire (CBE; 1963), receiving the Royal Society’s Royal Medal (1969) and Copley Medal (1977), and election to the Order of the Companions of Honour (CH; 1981) and the Order of Merit (OM; 1986). In 1993 the Wellcome Trust and the British Medical Research Council established a genome research centre, honouring Sanger by naming it the Wellcome Trust Sanger Institute.

Details

Frederick Sanger (13 August 1918 – 19 November 2013) was a British biochemist who received the Nobel Prize in Chemistry twice.

He won the 1958 Chemistry Prize for determining the amino acid sequence of insulin and numerous other proteins, demonstrating in the process that each had a unique, definite structure; this was a foundational discovery for the central dogma of molecular biology.

At the newly constructed Laboratory of Molecular Biology in Cambridge, he developed and subsequently refined the first-ever DNA sequencing technique, which vastly expanded the number of feasible experiments in molecular biology and remains in widespread use today. The breakthrough earned him the 1980 Nobel Prize in Chemistry, which he shared with Walter Gilbert and Paul Berg.

He is one of only three people to have won multiple Nobel Prizes in the same category (the others being John Bardeen in physics and Karl Barry Sharpless in chemistry), and one of five persons with two Nobel Prizes.

Early life and education

Frederick Sanger was born on 13 August 1918 in Rendcomb, a small village in Gloucestershire, England, the second son of Frederick Sanger, a general practitioner, and his wife, Cicely Sanger (née Crewdson). He was one of three children. His brother, Theodore, was only a year older, while his sister May (Mary) was five years younger. His father had worked as an Anglican medical missionary in China but returned to England because of ill health. He was 40 in 1916 when he married Cicely, who was four years younger. Sanger's father converted to Quakerism soon after his two sons were born and brought up the children as Quakers. Sanger's mother was the daughter of an affluent cotton manufacturer and had a Quaker background, but was not a Quaker.

When Sanger was around five years old the family moved to the small village of Tanworth-in-Arden in Warwickshire. The family was reasonably wealthy and employed a governess to teach the children. In 1927, at the age of nine, he was sent to the Downs School, a residential preparatory school run by Quakers near Malvern. His brother Theo was a year ahead of him at the same school. In 1932, at the age of 14, he was sent to the recently established Bryanston School in Dorset. This used the Dalton system and had a more liberal regime, which Sanger much preferred. At the school he liked his teachers and particularly enjoyed scientific subjects. Having completed his School Certificate a year early, with seven credits, Sanger was able to spend most of his last year of school experimenting in the laboratory alongside his chemistry master, Geoffrey Ordish, who had originally studied at Cambridge University and been a researcher in the Cavendish Laboratory. Working with Ordish made a refreshing change from sitting and studying books and awakened Sanger's desire to pursue a scientific career. In 1935, before going up to university, Sanger was sent to Schule Schloss Salem in southern Germany on an exchange program. Because the school placed a heavy emphasis on athletics rather than academics, Sanger found himself well ahead of the other students in the course material. He was shocked to learn that each day started with readings from Hitler's Mein Kampf, followed by a Sieg Heil salute.

In 1936 Sanger went to St John's College, Cambridge, to study natural sciences. His father had attended the same college. For Part I of his Tripos he took courses in physics, chemistry, biochemistry and mathematics but struggled with physics and mathematics. Many of the other students had studied more mathematics at school. In his second year he replaced physics with physiology. He took three years to obtain his Part I. For his Part II he studied biochemistry and obtained a 1st Class Honours. Biochemistry was a relatively new department founded by Gowland Hopkins with enthusiastic lecturers who included Malcolm Dixon, Joseph Needham and Ernest Baldwin.

Both his parents died from cancer during his first two years at Cambridge. His father was 60 and his mother was 58. As an undergraduate Sanger's beliefs were strongly influenced by his Quaker upbringing. He was a pacifist and a member of the Peace Pledge Union. It was through his involvement with the Cambridge Scientists' Anti-War Group that he met his future wife, Joan Howe, who was studying economics at Newnham College. They courted while he was studying for his Part II exams and married after he had graduated in December 1940. Although shaped by his Quaker upbringing, Sanger later drifted away from the faith: as his research and scientific outlook developed, he came to see the world through a more scientific lens. He retained nothing but respect for the religion and said he took two things from it, truth and respect for all life. Under the Military Training Act 1939 he was provisionally registered as a conscientious objector, and again under the National Service (Armed Forces) Act 1939, before being granted unconditional exemption from military service by a tribunal. In the meantime he undertook training in social relief work at the Quaker centre, Spicelands, Devon, and served briefly as a hospital orderly.

Sanger began studying for a PhD in October 1940 under N.W. "Bill" Pirie. His project was to investigate whether edible protein could be obtained from grass. After little more than a month Pirie left the department and Albert Neuberger became his adviser. Sanger changed his research project to study the metabolism of lysine and a more practical problem concerning the nitrogen of potatoes. His thesis had the title, "The metabolism of the amino acid lysine in the animal body". He was examined by Charles Harington and Albert Charles Chibnall and awarded his doctorate in 1943.


#2 This is Cool » Integrated Circuit » Today 18:21:49


Integrated Circuit

Gist

An integrated circuit (IC), or microchip, is a tiny electronic device containing thousands to billions of interconnected transistors, resistors, and capacitors fabricated onto a single small piece of semiconductor material, usually silicon. This miniaturization allows for complex electronic functions, forming the backbone of modern electronics like smartphones, computers, and medical devices, replacing bulky, separate components. 

The components are interconnected through a complex network of pathways etched onto the chip's surface. These pathways allow electrical signals to flow between the components, enabling the IC to perform specific functions, such as processing data, amplifying signals, or storing information.

Summary

An integrated circuit (IC), also known as a microchip or simply chip, is a compact assembly of electronic circuits formed from various electronic components — such as transistors, resistors, and capacitors — and their interconnections.[1] These components are fabricated onto a thin, flat piece ("chip") of semiconductor material, most commonly silicon. Integrated circuits are integral to a wide variety of electronic devices — including computers, smartphones, and televisions — performing functions such as data processing, control, and storage. They have transformed the field of electronics by enabling device miniaturization, improving performance, and reducing cost.

Compared to assemblies built from discrete components, integrated circuits are orders of magnitude smaller, faster, more energy-efficient, and less expensive, allowing for a very high transistor count. Their suitability for mass production, their high reliability, and the standardized, modular approach of integrated circuit design enabled the rapid replacement of designs using discrete transistors. Today, ICs are present in virtually all electronic devices and have revolutionized modern technology. Products such as computer processors, microcontrollers, digital signal processors, and embedded processing chips in home appliances are foundational to contemporary society due to their small size, low cost, and versatility.

Very-large-scale integration was made practical by technological advancements in semiconductor device fabrication. Since their origins in the 1960s, the size, speed, and capacity of chips have progressed enormously, driven by technical advances that fit more and more transistors on chips of the same size – a modern chip may have many billions of transistors in an area the size of a human fingernail. These advances, roughly following Moore's law, mean that the computer chips of today have millions of times the capacity and thousands of times the speed of the computer chips of the early 1970s.
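
As a rough sanity check on that claim, here is a back-of-the-envelope Moore's-law projection in Python. The starting point (about 2,300 transistors on an early-1970s microprocessor) is a commonly cited figure, and the two-year doubling period is only an approximation:

# Rough Moore's-law projection: transistor counts doubling about every two years.
# The 1971 starting figure (~2,300 transistors, Intel 4004) is commonly cited;
# the doubling period is an approximation, not an exact law.

start_year, start_count = 1971, 2300
doubling_period = 2                         # years per doubling (approximate)

for year in (1981, 1991, 2001, 2011, 2021):
    doublings = (year - start_year) / doubling_period
    estimate = start_count * 2 ** doublings
    print(year, f"~{estimate:,.0f} transistors")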

ICs have three main advantages over circuits constructed out of discrete components: size, cost, and performance. Size and cost are low because the chips, with all their components, are printed as a unit by photolithography rather than being constructed one transistor at a time. Furthermore, packaged ICs use much less material than discrete circuits. Performance is high because the IC's components switch quickly and consume comparatively little power because of their small size and proximity. The main disadvantage of ICs is the high initial cost of designing them and the enormous capital cost of factory construction. This high initial cost means ICs are only commercially viable when high production volumes are anticipated.

Details

An integrated circuit (IC) — commonly called a chip — is a compact, highly efficient semiconductor device that contains a multitude of interconnected electronic components such as transistors, resistors, and capacitors, all fabricated on a single piece of silicon. This revolutionary technology forms the backbone of modern electronics, enabling high-speed, miniaturized, and reliable devices found in everything from smartphones and computers to medical equipment and vehicles.

Before the invention of ICs, electronic systems relied on discrete components connected individually, resulting in bulky and unreliable systems. Integrated circuits enabled the miniaturization, increased performance, and cost-effectiveness that define today’s digital world.

What Do ICs Do?

You’re probably familiar with the little black boxes nestled neatly inside your favorite devices. With their diminutive size and unassuming characteristics, it can be hard to believe these vessels are actually the linchpin of most modern electronics. But without integrated chips, most technologies would not be possible, and we — as a technology-dependent society — would be helpless.

Integrated circuits are compact electronic chips made up of interconnected components that include resistors, transistors, and capacitors. Built on a single piece of semiconductor material, such as silicon, integrated circuits can contain collections of hundreds to billions of components — all working together to make our world go ‘round.

The uses of integrated circuits are vast: children’s toys, cars, computers, mobile phones, spaceships, subway trains, airplanes, video games, toothbrushes, and more. Basically, if it has a power switch, it likely owes its electronic life to an integrated circuit. An integrated circuit can function within each device as a microprocessor, amplifier, or memory.

Integrated circuits are created using photolithography, a process that uses ultraviolet light to print the components onto a single substrate all at once — similar to the way you can make many prints of a photograph from a single negative. The efficiency of printing all the IC’s components together means ICs can be produced more cheaply and reliably than using discrete components. Other benefits of ICs include:

* Extremely small size, so devices can be compact
* High reliability
* High-speed performance
* Low power requirement

Who Invented the Integrated Circuit?

The integrated circuit was independently invented by two pioneering engineers in the late 1950s: Jack Kilby of Texas Instruments and Robert Noyce of Fairchild Semiconductor.

Jack Kilby built the first working IC prototype in 1958 using germanium, which earned him the Nobel Prize in Physics in 2000 for his contribution to technology.

Robert Noyce developed a practical method for mass-producing ICs using silicon and the planar process, which laid the foundation for the modern semiconductor industry and led to the founding of Intel.

Their combined innovations set the stage for the explosive growth of electronics and computing power that continues today.

Evolution of IC Manufacturing

Since their creation, integrated circuits have gone through several evolutions to make our devices ever smaller, faster, and cheaper. While the first generation of ICs consisted of only a few components on a single chip, each generation since has prompted exponential leaps in power and economy.

* 1950s: Integrated circuits were introduced with only a few transistors and diodes on one chip.
* 1960s: The introduction of bipolar junction transistors and small- and medium-scale integration made it possible for thousands of transistors to be connected on a single chip.
* 1970s: Large-scale integration and very large-scale integration (VLSI) allowed for chips with tens of thousands, then millions of components, enabling the development of the personal computer and advanced computing systems.
* 2000s: In the early 2000s, ultra-large-scale integration (ULSI) allowed billions of components to be integrated on one substrate.
* Next: The 2.5D and 3D integrated circuit (3D-IC) technologies currently under development will create unparalleled flexibility, propelling another great leap in electronics advancement.

The first IC manufacturers were vertically integrated companies that did all the design and manufacturing steps themselves. This is still the case for some companies like Intel, Samsung, and memory chip manufacturers. But since the 1980s, the “fabless” business model has become the norm in the semiconductor industry.

A fabless IC company does not manufacture the chips they design. Instead, they contract this out to dedicated manufacturing companies that operate fabrication facilities (fabs) shared by many design companies. Industry leaders like Apple, AMD, and NVIDIA are examples of fabless IC design houses. Leading IC manufacturers today include TSMC, Samsung, and GlobalFoundries.

What are the Main Types of Integrated Circuits?

ICs can be classified into different types based on their complexity and purpose. Some common types of ICs include:

* Digital ICs: These are used in devices such as computers and microprocessors. Digital ICs can be used for logic or for storing data (memory). They are economical and easy to design for low-frequency applications.
* Analog ICs: Analog ICs are designed to process continuous signals in which the signal magnitude varies from zero to full supply voltage. These ICs are used to process analog signals such as sound or light. In comparison to digital ICs, they are made of fewer transistors but are more difficult to design. Analog ICs can be used in a wide range of applications, including amplifiers, filters, oscillators, voltage regulators, and power management circuits. They are commonly found in electronic devices such as audio equipment, radio frequency (RF) transceivers, communications, sensors, and medical instruments.
* Mixed-signal ICs: Combining both digital and analog circuits, mixed-signal ICs are used in areas where both types of processing are required, such as screen, sensor, and communications applications in mobile phones, cars, and portable electronics.
* Memory ICs: These ICs are used to store data either temporarily or permanently. Examples of memory ICs include random access memory (RAM) and read-only memory (ROM). Memory ICs are among the largest ICs in terms of transistor count and require extremely high-capacity and fast simulation tools.
* Application-Specific Integrated Circuit (ASIC): An ASIC is designed to perform a particular task efficiently. It is not a general-purpose IC that can be implemented in most applications but is instead a system-on-chip (SoC) customized to execute a targeted function.

What is the Difference Between an IC and a Microprocessor?

While all microprocessors are integrated circuits, not all ICs are microprocessors. Here’s how they differ:

* Integrated Circuit (IC): A broad term for any chip that contains interconnected electronic components. ICs can be as simple as a single logic gate or as complex as a full system-on-chip (SoC).
* Microprocessor: A specific type of digital IC designed to function as the central processing unit (CPU) of a computer or embedded device. Microprocessors execute instructions, perform arithmetic and logic operations, and manage data flow.
In essence, a microprocessor is a highly specialized IC that acts as the “brain” of a computer, while ICs as a category include a wide range of chips with diverse functions.

Additional Information

An integrated circuit (IC) is an assembly of electronic components, fabricated as a single unit, in which miniaturized active devices (e.g., transistors and diodes) and passive devices (e.g., capacitors and resistors) and their interconnections are built up on a thin substrate of semiconductor material (typically silicon). The resulting circuit is thus a small monolithic “chip,” which may be as small as a few square centimetres or only a few square millimetres. The individual circuit components are generally microscopic in size.

Integrated circuits have their origin in the invention of the transistor in 1947 by William B. Shockley and his team at the American Telephone and Telegraph Company’s Bell Laboratories. Shockley’s team (including John Bardeen and Walter H. Brattain) found that, under the right circumstances, electrons would form a barrier at the surface of certain crystals, and they learned to control the flow of electricity through the crystal by manipulating this barrier. Controlling electron flow through a crystal allowed the team to create a device that could perform certain electrical operations, such as signal amplification, that were previously done by vacuum tubes. They named this device a transistor, from a combination of the words transfer and resistor. The study of methods of creating electronic devices using solid materials became known as solid-state electronics. Solid-state devices proved to be much sturdier, easier to work with, more reliable, much smaller, and less expensive than vacuum tubes. Using the same principles and materials, engineers soon learned to create other electrical components, such as resistors and capacitors. Now that electrical devices could be made so small, the largest part of a circuit was the awkward wiring between the devices.

In 1958 Jack Kilby of Texas Instruments, Inc., and Robert Noyce of Fairchild Semiconductor Corporation independently thought of a way to reduce circuit size further. They laid very thin paths of metal (usually aluminum or copper) directly on the same piece of material as their devices. These small paths acted as wires. With this technique an entire circuit could be “integrated” on a single piece of solid material and an integrated circuit (IC) thus created. ICs can contain hundreds of thousands of individual transistors on a single piece of material the size of a pea. Working with that many vacuum tubes would have been unrealistically awkward and expensive. The invention of the integrated circuit made technologies of the Information Age feasible. ICs are now used extensively in all walks of life, from cars to toasters to amusement park rides.

Basic IC types:

Analog versus digital circuits

Analog, or linear, circuits typically use only a few components and are thus some of the simplest types of ICs. Generally, analog circuits are connected to devices that collect signals from the environment or send signals back to the environment. For example, a microphone converts fluctuating vocal sounds into an electrical signal of varying voltage. An analog circuit then modifies the signal in some useful way—such as amplifying it or filtering it of undesirable noise. Such a signal might then be fed back to a loudspeaker, which would reproduce the tones originally picked up by the microphone. Another typical use for an analog circuit is to control some device in response to continual changes in the environment. For example, a temperature sensor sends a varying signal to a thermostat, which can be programmed to turn an air conditioner, heater, or oven on and off once the signal has reached a certain value.

A digital circuit, on the other hand, is designed to accept only voltages of specific given values. A circuit that uses only two states is known as a binary circuit. Circuit design with binary quantities, “on” and “off” representing 1 and 0 (i.e., true and false), uses the logic of Boolean algebra. (Arithmetic is also performed in the binary number system employing Boolean algebra.) These basic elements are combined in the design of ICs for digital computers and associated devices to perform the desired functions.
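
To make the Boolean-algebra point concrete, here is a minimal sketch (in Python, not tied to any particular IC) of a half adder, the kind of elementary binary logic element that digital ICs combine by the thousands to perform arithmetic:

# A half adder expressed with Boolean operations: it adds two one-bit values,
# producing a sum bit (XOR) and a carry bit (AND). Digital ICs are built from
# vast networks of gates like these.

def half_adder(a, b):
    total = a ^ b        # XOR gate: sum bit
    carry = a & b        # AND gate: carry bit
    return total, carry

for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        print(f"{a} + {b} -> sum={s}, carry={c}")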

Microprocessor circuits

Microprocessors are the most-complicated ICs. They are composed of billions of transistors that have been configured as thousands of individual digital circuits, each of which performs some specific logic function. A microprocessor is built entirely of these logic circuits synchronized to each other. Microprocessors typically contain the central processing unit (CPU) of a computer.

Just like a marching band, the circuits perform their logic functions only when directed by the bandmaster. The bandmaster in a microprocessor, so to speak, is called the clock. The clock is a signal that quickly alternates between two logic states. Every time the clock changes state, every logic circuit in the microprocessor does something. Calculations can be made very quickly, depending on the speed (clock frequency) of the microprocessor.

Microprocessors contain some circuits, known as registers, that store information. Registers are predetermined memory locations. Each processor has many different types of registers. Permanent registers are used to store the preprogrammed instructions required for various operations (such as addition and multiplication). Temporary registers store numbers that are to be operated on and also the result. Other examples of registers include the program counter (also called the instruction pointer), which contains the address in memory of the next instruction; the stack pointer (also called the stack register), which contains the address of the last instruction put into an area of memory called the stack; and the memory address register, which contains the address of where the data to be worked on is located or where the data that has been processed will be stored.
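
As a toy illustration of those ideas, the following Python sketch steps a made-up machine with a program counter and two named registers through a tiny instruction list. It is not modeled on any real microprocessor; the instruction names are invented for the example:

# Toy register machine: a program counter steps through a made-up instruction
# set while named registers hold intermediate values. Purely illustrative;
# real microprocessors are enormously more complex.

def run(program):
    registers = {"R0": 0, "R1": 0}
    pc = 0                                   # program counter
    while pc < len(program):
        op, *args = program[pc]
        if op == "LOAD":                     # LOAD reg, value
            registers[args[0]] = args[1]
        elif op == "ADD":                    # ADD dest, src  (dest += src)
            registers[args[0]] += registers[args[1]]
        elif op == "PRINT":                  # PRINT reg
            print(args[0], "=", registers[args[0]])
        pc += 1                              # advance to the next instruction
    return registers

run([("LOAD", "R0", 2), ("LOAD", "R1", 3), ("ADD", "R0", "R1"), ("PRINT", "R0")])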

Microprocessors can perform billions of operations per second on data. In addition to computers, microprocessors are common in video game systems, televisions, cameras, and automobiles.

Memory circuits

Microprocessors typically have to store more data than can be held in a few registers. This additional information is relocated to special memory circuits. Memory is composed of dense arrays of parallel circuits that use their voltage states to store information. Memory also stores the temporary sequence of instructions, or program, for the microprocessor.

Manufacturers continually strive to reduce the size of memory circuits—to increase capability without increasing space. In addition, smaller components typically use less power, operate more efficiently, and cost less to manufacture.

Digital signal processors

A signal is an analog waveform—anything in the environment that can be captured electronically. A digital signal is an analog waveform that has been converted into a series of binary numbers for quick manipulation. As the name implies, a digital signal processor (DSP) processes signals digitally, as patterns of 1s and 0s. For instance, using an analog-to-digital converter, commonly called an A-to-D or A/D converter, a recording of someone’s voice can be converted into digital 1s and 0s. The digital representation of the voice can then be modified by a DSP using complex mathematical formulas. For example, the DSP algorithm in the circuit may be configured to recognize gaps between spoken words as background noise and digitally remove ambient noise from the waveform. Finally, the processed signal can be converted back (by a D/A converter) into an analog signal for listening. Digital processing can filter out background noise so fast that there is no discernible delay and the signal appears to be heard in “real time.” For instance, such processing enables “live” television broadcasts to focus on a quarterback’s signals in an American gridiron football game.
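
A very crude stand-in for the kind of digital noise suppression described above is a moving-average filter applied to a digitized signal. The Python sketch below uses made-up sample values and is far simpler than the algorithms real DSPs run, but it shows the basic pattern of manipulating a signal as numbers:

# Illustrative only: a moving-average filter smoothing a digitized signal.
# Real DSP noise-suppression algorithms are far more sophisticated.

def moving_average(samples, window=3):
    """Replace each sample with the average of itself and its neighbours."""
    half = window // 2
    smoothed = []
    for i in range(len(samples)):
        lo, hi = max(0, i - half), min(len(samples), i + half + 1)
        smoothed.append(sum(samples[lo:hi]) / (hi - lo))
    return smoothed

noisy = [0.0, 0.9, 0.1, 1.1, -0.2, 1.0, 0.1, 0.8]   # hypothetical digitized samples
print(moving_average(noisy))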

DSPs are also used to produce digital effects on live television. For example, the yellow marker lines displayed during the football game are not really on the field; a DSP adds the lines after the cameras shoot the picture but before it is broadcast. Similarly, some of the advertisements seen on stadium fences and billboards during televised sporting events are not really there.

Application-specific ICs

An application-specific IC (ASIC) can be either a digital or an analog circuit. As their name implies, ASICs are not reconfigurable; they perform only one specific function. For example, a speed controller IC for a remote control car is hard-wired to do one job and could never become a microprocessor. An ASIC does not contain any ability to follow alternate instructions.

Radio-frequency ICs

Radio-frequency ICs (RFICs) are widely used in mobile phones and wireless devices. RFICs are analog circuits that usually run in the frequency range of 3 kHz to 2.4 GHz (3,000 hertz to 2.4 billion hertz); circuits that operate at about 1 THz (1 trillion hertz) are in development. They are usually thought of as ASICs even though some may be configurable for several similar applications.

Most semiconductor circuits that operate above 500 MHz (500 million hertz) cause the electronic components and their connecting paths to interfere with each other in unusual ways. Engineers must use special design techniques to deal with the physics of high-frequency microelectronic interactions.

Monolithic microwave ICs

A special type of RFIC is known as a monolithic microwave IC (MMIC; also called microwave monolithic IC). These circuits usually run in the 2- to 100-GHz range, or microwave frequencies, and are used in radar systems, in satellite communications, and as power amplifiers for cellular telephones.

Just as sound travels faster through water than through air, electron velocity is different through each type of semiconductor material. Silicon offers too much resistance for microwave-frequency circuits, and so the compound GaAs is often used for MMICs. Unfortunately, GaAs is mechanically much less sound than silicon. It breaks easily, so GaAs wafers are usually much more expensive to build than silicon wafers.


#3 Re: This is Cool » Miscellany » Today 17:48:23

2491) Red Sea

Gist

The Red Sea is a vital, narrow, hypersaline inlet of the Indian Ocean, about 2,250 km long, up to 355 km wide, and up to 2,730 m deep, positioned between Northeast Africa and the Arabian Peninsula. As a crucial warm-water maritime trade route (carrying about 12% of global trade) connected via the Suez Canal and Bab-el-Mandeb Strait, it separates countries like Egypt, Sudan, and Eritrea from Saudi Arabia and Yemen.

The Red Sea is known for its vital trade route (Suez Canal), incredibly rich biodiversity with vibrant coral reefs and unique marine life, extremely warm and salty water, year-round sunshine, and significant historical/religious importance, particularly the biblical story of Moses. It's a major global hotspot for scuba diving and snorkeling due to its clear waters and abundant fish species.

Summary

The Red Sea is a sea inlet of the Indian Ocean, lying between Africa and Asia. Its connection to the ocean is in the south, through the Bab-el-Mandeb Strait and the Gulf of Aden. To the north of the Red Sea lies the Sinai Peninsula, the Gulf of Aqaba, and the Gulf of Suez, which leads to the Suez Canal. It is underlain by the Red Sea Rift, which is part of the Great Rift Valley.

The Red Sea has a surface area of roughly 438,000 sq km (169,000 sq mi), is about 2,250 km (1,400 mi) long, and 355 km (221 mi) across at its widest point. It has an average depth of 490 m (1,610 ft), and in the central Suakin Trough, it reaches its maximum depth of 2,730 m (8,960 ft).

The Red Sea is quite shallow, with approximately 40% of its area being less than 100 m (330 ft) deep, and approximately 25% being less than 50 m (160 ft) deep. The extensive shallow shelves are noted for their marine life and corals. More than 1,000 invertebrate species and 200 types of soft and hard coral live in the sea. The Red Sea is the world's northernmost tropical sea and has been designated a Global 200 ecoregion.

Details

The Red Sea is a narrow strip of water extending southeastward from Suez, Egypt, for about 1,200 miles (1,930 km) to the Bab el-Mandeb Strait, which connects with the Gulf of Aden and thence with the Arabian Sea. Geologically, the Gulfs of Suez and Aqaba (Elat) must be considered as the northern extension of the same structure. The sea separates the coasts of Egypt, Sudan, and Eritrea to the west from those of Saudi Arabia and Yemen to the east. Its maximum width is 190 miles, its greatest depth 9,974 feet (3,040 metres), and its area approximately 174,000 square miles (450,000 square km).

The Red Sea contains some of the world’s hottest and saltiest seawater. With its connection to the Mediterranean Sea via the Suez Canal, it is one of the most heavily traveled waterways in the world, carrying maritime traffic between Europe and Asia. Its name is derived from the colour changes observed in its waters. Normally, the Red Sea is an intense blue-green; occasionally, however, it is populated by extensive blooms of the algae Trichodesmium erythraeum, which, upon dying off, turn the sea a reddish brown colour.

The following discussion focuses on the Red Sea and the Gulfs of Suez and Aqaba.

Physical features:

Physiography and submarine morphology

The Red Sea lies in a fault depression that separates two great blocks of Earth’s crust—Arabia and North Africa. The land on either side, inland from the coastal plains, reaches heights of more than 6,560 feet above sea level, with the highest land in the south.

At its northern end the Red Sea splits into two parts, the Gulf of Suez to the northwest and the Gulf of Aqaba to the northeast. The Gulf of Suez is shallow—approximately 180 to 210 feet deep—and it is bordered by a broad coastal plain. The Gulf of Aqaba, on the other hand, is bordered by a narrow plain, and it reaches a depth of 5,500 feet. From approximately 28° N, where the Gulfs of Suez and Aqaba converge, south to a latitude near 25° N, the Red Sea’s coasts parallel each other at a distance of roughly 100 miles apart. There the seafloor consists of a main trough, with a maximum depth of some 4,000 feet, running parallel to the shorelines.

South of this point and continuing southeast to latitude 16° N, the main trough becomes sinuous, following the irregularities of the shoreline. About halfway down this section, roughly between 20° and 21° N, the topography of the trough becomes more rugged, and several sharp clefts appear in the seafloor. Because of an extensive growth of coral banks, only a shallow narrow channel remains south of 16° N. The sill (submarine ridge) separating the Red Sea and the Gulf of Aden at the Bab el-Mandeb Strait is affected by this growth; therefore, the depth of the water is only about 380 feet, and the main channel becomes narrow.

The clefts within the deeper part of the trough are unusual seafloor areas in which hot brine concentrates are found. These patches apparently form distinct and separated deeps within the trough and have a north-south trend, whereas the general trend of the trough is from northwest to southeast. At the bottom of these areas are unique sediments, containing deposits of heavy metal oxides from 30 to 60 feet thick.

Most of the islands of the Red Sea are merely exposed reefs. There is, however, a group of active volcanoes just south of the Dahlak Archipelago (15° 50′ N), as well as a recently extinct volcano on the island of Jabal Al-Ṭāʾir.

Geology

The Red Sea occupies part of a large rift valley in the continental crust of Africa and Arabia. This break in the crust is part of a complex rift system that includes the East African Rift System, which extends southward through Ethiopia, Kenya, and Tanzania for almost 2,200 miles and northward for more than 280 miles from the Gulf of Aqaba to form the great Wadi Aqaba–Dead Sea–Jordan Rift; the system also extends eastward for 600 miles from the southern end of the Red Sea to form the Gulf of Aden.

The Red Sea valley cuts through the Arabian-Nubian Massif, which was a continuous central mass of Precambrian igneous and metamorphic rocks (i.e., formed deep within the Earth under heat and pressure more than 540 million years ago), the outcrops of which form the rugged mountains of the adjoining region. The massif is surrounded by these Precambrian rocks overlain by Paleozoic marine sediments (542 to 251 million years old). These sediments were affected by the folding and faulting that began late in the Paleozoic; the laying down of deposits, however, continued to occur during this time and apparently continued into the Mesozoic Era (251 to 65.5 million years ago). The Mesozoic sediments appear to surround and overlap those of the Paleozoic and are in turn surrounded by early Cenozoic sediments (i.e., between 65.5 and 55.8 million years old). In many places large remnants of Mesozoic sediments are found overlying the Precambrian rocks, suggesting that a fairly continuous cover of deposits once existed above the older massif.

The Red Sea is considered a relatively new sea, whose development probably resembles that of the Atlantic Ocean in its early stages. The Red Sea’s trough apparently formed in at least two complex phases of land motion. The movement of Africa away from Arabia began about 55 million years ago. The Gulf of Suez opened up about 30 million years ago, and the northern part of the Red Sea about 20 million years ago. The second phase began about 3 to 4 million years ago, creating the trough in the Gulf of Aqaba and also in the southern half of the Red Sea valley. This motion, estimated as amounting to 0.59 to 0.62 inch (15.0 to 15.7 mm) per year, is still proceeding, as indicated by the extensive volcanism of the past 10,000 years, by seismic activity, and by the flow of hot brines in the trough.
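
Those numbers are roughly self-consistent: if (very crudely) today's spreading rate is assumed to have applied throughout, tens of millions of years of motion produces an opening on the order of the sea's present width. A quick Python check using only figures quoted in this article:

# Back-of-the-envelope check, assuming (crudely) that the quoted present-day
# spreading rate applied over the whole opening of the northern Red Sea.

rate_mm_per_year = 15          # quoted as 15.0 to 15.7 mm per year
duration_years = 20e6          # northern Red Sea opened about 20 million years ago

opening_km = rate_mm_per_year * duration_years / 1e6    # mm -> km
print(f"~{opening_km:.0f} km")                          # ~300 km, comparable to the sea's width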

Climate

The Red Sea region receives very little precipitation in any form, although prehistoric artifacts indicate that there were periods with greater amounts of rainfall. In general, the climate is conducive to outdoor activity in fall, winter, and spring—except during windstorms—with temperatures varying between 46 and 82 °F (8 and 28 °C). Summer temperatures, however, are much higher, up to 104 °F (40 °C), and relative humidity is high, rendering vigorous activity unpleasant. In the northern part of the Red Sea area, extending down to 19° N, the prevailing winds are north to northwest. Best known are the occasional westerly, or “Egyptian,” winds, which blow with some violence during the winter months and generally are accompanied by fog and blowing sand. From latitude 14° to 16° N the winds are variable, but from June through August strong northwest winds move down from the north, sometimes extending as far south as the Bab el-Mandeb Strait; by September, however, this wind pattern retreats to a position north of 16° N. South of 14° N the prevailing winds are south to southeast.

Hydrology

No water enters the Red Sea from rivers, and rainfall is scant; but the evaporation loss—in excess of 80 inches per year—is made up by an inflow through the eastern channel of the Bab el-Mandeb Strait from the Gulf of Aden. This inflow is driven toward the north by prevailing winds and generates a circulation pattern in which these low-salinity waters (the average salinity is about 36 parts per thousand) move northward. Water from the Gulf of Suez has a salinity of about 40 parts per thousand, owing in part to evaporation, and consequently a high density. This dense water moves toward the south and sinks below the less dense inflowing waters from the Red Sea. Below a transition zone, which extends from depths of about 300 to 1,300 feet, the water conditions are stabilized at about 72 °F (22 °C), with a salinity of almost 41 parts per thousand. This south-flowing bottom water, displaced from the north, spills over the sill at Bab el-Mandeb, mostly through the eastern channel. It is estimated that there is a complete renewal of water in the Red Sea every 20 years.
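
To get a feel for the volumes involved, here is a rough Python estimate (combining the evaporation figure above with the surface area quoted earlier in this post, and ignoring the small rainfall and the details of the outflow) of how much water the Gulf of Aden inflow must replace each year:

# Rough water-budget estimate from figures quoted in the article:
# evaporation in excess of 80 inches per year over ~438,000 sq km of surface.

evaporation_m_per_year = 80 * 0.0254      # 80 inches is about 2.03 m of water per year
surface_area_km2 = 438_000                # surface area quoted in the summary

volume_km3_per_year = (evaporation_m_per_year / 1000) * surface_area_km2
print(f"~{volume_km3_per_year:.0f} cubic km of water lost to evaporation per year")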

Below this southward-flowing water, in the deepest portions of the trough, there is another transition layer, only 80 feet thick, below which, at some 6,400 feet, lie pools of hot brine. The brine in the Atlantis II Deep has an average temperature of almost 140 °F (60 °C), a salinity of 257 parts per thousand, and no oxygen. There are similar pools of water in the Discovery Deep and in the Chain Deep (at about 21°18′ N). Heating from below renders these pools unstable, so that their contents mix with the overlying waters; they thus become part of the general circulation system of the sea.

Economic aspects:

Resources

Five major types of mineral resources are found in the Red Sea region: petroleum deposits, evaporite deposits (sediments laid down as a result of evaporation, such as halite, sylvite, gypsum, and dolomite), sulfur, phosphates, and the heavy-metal deposits in the bottom oozes of the Atlantis II, Discovery, and other deeps. The oil and natural gas deposits have been exploited to varying degrees by the nations adjoining the sea; of note are the deposits near Jamsah (Gemsa) Promontory (in Egypt) at the juncture of the Gulf of Suez and the Red Sea. Despite their ready availability, the evaporites have been exploited only slightly, primarily on a local basis. Sulfur has been mined extensively since the early 20th century, particularly from deposits at Jamsah Promontory. Phosphate deposits are present on both sides of the sea, but the grade of the ore has been too low to warrant exploitation with existing techniques.

None of the heavy metal deposits have been exploited, although the sediments of the Atlantis II Deep alone have been estimated to be of considerable economic value. The average analysis of the Atlantis II Deep deposit has revealed an iron content of 29 percent; zinc 3.4 percent; copper 1.3 percent; and trace quantities of lead, silver, and gold. The total brine-free sediment estimated to be present in the upper 30 feet of the Atlantis II Deep is about 50 million tons. These deposits appear to extend to a depth of 60 feet below the present sediment surface, but the quality of the deposits below 30 feet is unknown. The sediments of the Discovery Deep and of several other deposits also have significant metalliferous content but at lower concentrations than that in the Atlantis II Deep, and thus they have not been of as much economic interest. The recovery of sediment located beneath 5,700 to 6,400 feet of water poses problems. But since most of these metalliferous deposits are fluid oozes, it is thought to be possible to pump them to the surface in much the same way as oil. There also are numerous proposals for drying and beneficiating (treating for smelting) these deposits after recovery.

Navigation

Navigation in the Red Sea is difficult. The unindented shorelines of the sea’s northern half provide few natural harbours, and in the southern half the growth of coral reefs has restricted the navigable channel and blocked some harbour facilities. At Bab el-Mandeb Strait, the channel is kept open to shipping by blasting and dredging. Atmospheric distortion (heat shimmer), sandstorms, and highly irregular water currents add to the navigational hazards.

Study and exploration

The Red Sea is one of the first large bodies of water mentioned in recorded history. It was important in early Egyptian maritime commerce (2000 bce) and was used as a water route to India by about 1000 bce. It is believed that it was reasonably well-charted by 1500 bce, because at that time Queen Hatshepsut of Egypt sailed its length. Later the Phoenicians explored its shores during their circumnavigatory exploration of Africa in about 600 bce. Shallow canals were dug between the Nile and the Red Sea before the 1st century ce but were later abandoned. A deep canal between the Mediterranean and Red seas was first suggested about 800 ce by the caliph Hārūn al-Rashīd, but it was not until 1869 that the French diplomat Ferdinand de Lesseps oversaw the completion of the Suez Canal connecting the two seas.

The Red Sea was subject to substantial scientific research in the 20th century, particularly since World War II. Notable cruises included those of the Swedish research vessel Albatross (1948) and the American Glomar Challenger (1972). In addition to studying the sea’s chemical and biological properties, researchers focused considerable attention on understanding its geologic structure. Much of the geologic study was in conjunction with oil exploration.

Additional Information

When my friends and family ask me what I am doing in my research, I respond that “I am investigating the winds and currents of the Red Sea in the Middle East.” Scary faces pop up. All they see are the winds of wars—the ever-present terrorist attacks, fighting, and killings in the region. “Are you crazy?” they say.

I get this same question (and sometimes the same reaction) from my oceanography colleagues. Since I began my postdoctoral research at Woods Hole Oceanographic Institution, working with WHOI physical oceanographers Amy Bower and Tom Farrar, I have learned two things: first, that few people realize how beautiful the Middle East is, and second, that the seas there have fascinating and unusual characteristics and far-reaching impacts on life in and around them. These seas furnish moisture for the arid Middle Eastern atmosphere and allowed great civilizations to flourish around them thousands of years ago.

For an oceanographer like myself, the Red Sea can be viewed as a mini-ocean, a kind of toy model. Most of the oceanic features found in a big ocean such as the Atlantic can also be found there.

But the Red Sea also has its own curious characteristics that are not seen in other oceans. It is extremely warm—temperatures in its surface waters reach more than 30° Celsius (86° Fahrenheit)—and water evaporates from it at a prodigious rate, making it extremely salty. Because of its narrow confines and constricted connection to the global ocean, and because it is subject to seasonal flip-flopping wind patterns governed by the monsoons, it has odd circulation patterns. Its currents change in summer and winter.

The Red Sea is one of the few places on Earth that has what is known as a poleward-flowing eastern boundary current. Eastern boundary currents are so called because they hug the eastern coasts of continents, and all other such currents in the northern hemisphere head south. The Red Sea Eastern Boundary Current, unlike all the others, flows in the direction of the North Pole.

Unravelling the intricate tapestry that creates this rare eastern boundary current in the Red Sea was a goal of my postdoctoral research. But I have found that the Red Sea is far more mesmerizing and complex than I initially imagined. A variety of exotic threads are woven into the tapestry that produce the Red Sea’s unusual oceanographic phenomena: seasonal monsoons, desert sandstorms, wind jets through narrow mountain gaps, the Strait of Bab Al Mandeb that squeezes passage in and out of the sea—even locust swarms.

The politics of the nations surrounding the Red Sea are also complex and make it among the more difficult places to collect data. That explains why many Red Sea phenomena have remained unknown. But unexplored regions are the juiciest for scientists, because they are the ripest places to make new discoveries.

Gateway to the Red Sea

In the Red Sea, the water evaporates at one of the highest rates in the world. Like a bathtub in a steam room, the sea needs a steady supply of new water to keep its level stable.

The Red Sea compensates for the large water volume it loses each year through evaporation by importing water from the Gulf of Aden—through the narrow Strait of Bab Al Mandeb between Yemen on the Arabian Peninsula and Djibouti and Eritrea on the Horn of Africa.

The Strait of Bab Al Mandeb works as a gate. All waters in and out of the sea must pass through it. No other gates exist, making the Red Sea what is known as a semi-enclosed marginal sea.

In winter, incoming surface waters from the Gulf of Aden flow in a typical western boundary current, hugging the western side of the Red Sea along the coasts of Eritrea and Sudan. The current transports the waters northward. But in the central part of the Red Sea, this current veers sharply to the right. When it reaches the eastern side, it continues its convoluted journey to the north, but now it hugs the eastern side of the sea along the coast of Saudi Arabia.

Here’s where the mystery deepens. The Red Sea Eastern Boundary Current exists only in winter. In summer, it’s not there. I wanted to find out how it forms, how it changes, and why it seasonally disappears.

Detectives and pirates

To unravel the complex tapestry that makes the Red Sea Eastern Boundary Current, I work like a CSI (crime scene investigation) agent, sifting through as much data as I can get and piecing it together to solve a mystery.

But it’s hard to obtain data from the Red Sea. Its narrow confines mean that its waters fall within the jurisdiction of the surrounding countries, which are often in conflict, and it is hard for researchers to get permission to enter them.

In addition, many waters in and en route to the Red Sea have been beset by piracy. In the spring of 2018, I was aboard the NOAA ship Ronald H. Brown in the Arabian Sea. It was the first time in more than a decade that the U.S. Navy allowed an American research vessel to go to the Arabian Sea. We were allowed to go only on the eastern side, and we couldn’t go anywhere beyond 17.5° N, because it wasn’t safe. On board, we conducted many safety drills, learning how to hide from pirates.

The Red Sea is also hard to observe by satellite. Its width is small compared with the spatial resolution of most oceanographic satellites: altimeter and wind satellites resolve features of roughly 30 to 50 kilometers, while the maximum width of the Red Sea is only 355 kilometers. In addition, satellites can give us information about what happens at the sea surface, but they can’t reveal the mixing and other processes that go on beneath the surface.

That’s why my research was kidnapped and carried off into an unanticipated direction, and my focus shifted from sea to air.

A plague of locusts

In the Red Sea, evaporation is a critical factor driving how the sea operates, and to determine how much water evaporates, we need to know about the winds. Why? Because evaporation rates depend on the winds: if the winds are stronger, evaporation is stronger; if the winds are weaker, evaporation is weaker.

To complicate the situation a bit more, evaporation depends not only on the strength of the winds but also on where the winds are coming from. If the winds blow in over the sea, the air they carry will be more humid and evaporation will be lower; if the winds come from the desert, the air will be dry and evaporation will be higher.
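
The dependence on wind speed and air dryness can be written down compactly. As a rough illustration only, the Python sketch below uses the standard bulk aerodynamic formula for evaporation, E = ρ_a C_E U (q_s − q_a); the formula is textbook air-sea interaction, but the coefficient and humidity values are illustrative assumptions of mine, not numbers from this article.

RHO_AIR = 1.2   # air density in kg per cubic meter (typical value, assumed)
C_E = 1.2e-3    # bulk transfer coefficient for moisture (typical value, assumed)

def evaporation_rate(wind_speed, q_sea, q_air):
    """Bulk estimate of evaporation in kg of water per square meter per second.
    wind_speed is in m/s; q_sea and q_air are the specific humidities (kg/kg)
    at the sea surface and in the air above it."""
    return RHO_AIR * C_E * wind_speed * (q_sea - q_air)

# Stronger winds and drier air both increase evaporation:
print(evaporation_rate(5.0, 0.020, 0.015))    # gentle wind, humid marine air
print(evaporation_rate(15.0, 0.020, 0.005))   # strong, dry desert wind: far larger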

So, to unravel the Eastern Boundary Current, we needed to have a pretty good picture of how the winds blow in winter. When I started my postdoctoral research, I was really surprised to see that this important factor—the wind variability of the Red Sea—wasn’t well known, even though interest in it goes way back!

Pioneering studies of winds in the Red Sea were motivated by a desire to predict the northward migration of desert locust swarms, which invade areas and voraciously consume all the vegetation in them. This plague has been described in the Old Testament of the Bible and has tormented countries bordering the Red Sea since time immemorial.

The locusts breed along the shores of the Red Sea. Summer monsoon rains spur locust eggs to hatch. When enough rains fall to create plenty of water and vegetation for food, large numbers of locusts hatch and form swarms. Winds determine where the swarms will be carried off to infest neighboring regions.

Lining both sides of the Red Sea are tall mountains that create a kind of tunnel, so that winds blow predominantly along, not across, the Red Sea. In summer, the winds blow from north to south. In winter, however, the monsoon flips the wind direction in the southern part of the Red Sea, and the two opposing airstreams meet somewhere in the central Red Sea at what is called the Red Sea Convergence Zone. The convergence zone acts as a conduit for migrating locust swarms, and where it is positioned determines where the swarms go.

Mountain-gap wind jets

The mountains along Red Sea coasts affect the winds in another way. The mountains aren’t entirely compact; there are several gaps in them. The tunnel surrounding the Red Sea has a few holes in both sides. Sometimes the winds blow through one of these holes and cross the tunnel. These are the mountain-gap wind jets.

The mountain-gap winds in summer blow from Africa to Saudi Arabia through the Tokar Gap near the Sudanese coast. In winter, the mountain-gap winds blow in the opposite direction, from Saudi Arabia to Africa, through many nameless gaps in the northern part of the Red Sea.

These jets stir up frequent sandstorms carrying sand and dirt from surrounding deserts into the Red Sea. The sandstorms carry fertilizing nutrients that promote life in the Red Sea. The sands also block incoming sunlight and cool the sea surface.

But do these overlooked jets also affect the Red Sea in other ways?

Blasts of dry air

We decided to put together lots of different data to find out the fundamental characteristics of these mountain-gap jet events. Our data came from satellites and from a heavily instrumented mooring that measured winds and humidity in the air and temperatures and salinity in the sea below. WHOI maintained the mooring in the Red Sea for two years in collaboration with King Abdullah University of Science and Technology.

The satellite images revealed that these events weren’t rare. We learned that in most winters, there are typically two to three events in December and January in which the winds blow west across the northern part of the Red Sea. In satellite images, they are impressive and beautiful.

The mountain-gap events typically last three to eight days. We observed large year-to-year differences, with an increasing number of events in the last decade.

We discovered that the wintertime mountain-gap wind events blast the Red Sea with dry air. They are like the cold-air outbreaks that hit the U.S. East Coast in winter. For the Red Sea, of course, it would be better to call them dry-air outbreaks!

The dry-air blasts abruptly increase evaporation on the surface of the sea. This colossal evaporation removes a large amount of heat and water vapor out of the sea, leaving it much saltier. The mountain-gap winds also stir up deeper, cooler waters that mix with surface waters.

The waters become saltier and colder. This disrupts the Eastern Boundary Current. During most wind-jet events, it seems to fade away.

Connecting the oceans

We are still looking for answers about how the Eastern Boundary Current forms and why it flows north. But we have learned much about the wind-jet events that cause it to disappear periodically in winter.

The large-scale evaporation from these wind-jet events may also drive waters in the northern Red Sea to become cooler, saltier, and dense enough to sink to the depths and flow all the way south and back out through the Strait of Bab Al Mandeb.

These salty Red Sea waters escape to the Gulf of Aden, where they start a long journey through the Indian Ocean. They cross the Equator. Some may travel into the Atlantic Ocean. Some may flow toward Western Australia.


#4 Dark Discussions at Cafe Infinity » Combined Quotes - II » Today 17:05:58

Jai Ganesh
Replies: 0

Combined Quotes - II

1. The human animal cannot be trusted for anything good except en masse. The combined thought and action of the whole people of any race, creed or nationality, will always point in the right direction. - Harry S Truman

2. I'll try to work on being an all-rounder and if something doesn't work out then my batting is always there. But Hardik Pandya combined with both bat and ball, it sounds better than just a batter. - Hardik Pandya

3. Maybe we can see more men's and women's combined events so the young players can be marketed better. - Mats Wilander

4. I have relationships with people I'm working with, based on our combined interest. It doesn't make the relationship any less sincere, but it does give it a focus that may not last beyond the experience. - Harrison Ford

5. My parent's divorce and hard times at school, all those things combined to mold me, to make me grow up quicker. And it gave me the drive to pursue my dreams that I wouldn't necessarily have had otherwise. - Christina Aguilera

6. There isn't a flaw in his golf or his makeup. He will win more majors than Arnold Palmer and me combined. Somebody is going to dust my records. It might as well be Tiger, because he's such a great kid. - Jack Nicklaus

7. You know you're not anonymous on our site. We're greeting you by name, showing you past purchases, to the degree that you can arrange to have transparency combined with an explanation of what the consumer benefit is. - Jeff Bezos

8. Intelligence and courtesy not always are combined; Often in a wooden house a golden room we find. - Henry Wadsworth Longfellow.

#5 Jokes » Corn Jokes - II » Today 16:43:57

Jai Ganesh
Replies: 0

Q: Why shouldn't you tell a secret on a farm?
A: Because the potatoes have eyes, the corn has ears, and the beans stalk.
* * *
Q: How is an ear of corn like an army?
A: It has lots of kernels.
* * *
Q: What do you call the State fair in Iowa?
A: A corn-ival.
* * *
Q: What do you call a buccaneer?
A: A good price for corn.
* * *
Q: What do you get when a corn cob is run over by a truck?
A: "Creamed" corn.
* * *

#6 Science HQ » Wavelength » Today 16:37:22

Jai Ganesh
Replies: 0

Wavelength

Gist

Wavelength is the spatial distance over which a wave's shape repeats, measured from one peak to the next (or trough to trough), and is represented by the Greek letter lambda. It is a fundamental property of waves (light, sound, water) and is inversely related to frequency: higher frequency means shorter wavelength, as with blue light compared with red. In everyday language, being "on the same wavelength" means sharing an understanding.

Wavelength is the distance between two corresponding points on consecutive waves, such as from one crest to the next or one trough to the next, and represents the spatial period of a wave. Denoted by the Greek letter lambda, it is a fundamental property of waves (light, sound, etc.) and is inversely proportional to frequency: longer wavelengths have lower frequencies, and shorter wavelengths have higher frequencies. Wavelength is measured in meters (m) or, for light, often in nanometers (nm).

Summary

Wavelength is the distance between corresponding points of two consecutive waves. “Corresponding points” refers to two points or particles in the same phase—i.e., points that have completed identical fractions of their periodic motion. Usually, in transverse waves (waves with points oscillating at right angles to the direction of their advance), wavelength is measured from crest to crest or from trough to trough; in longitudinal waves (waves with points vibrating in the same direction as their advance), it is measured from compression to compression or from rarefaction to rarefaction. Wavelength is usually denoted by the Greek letter lambda (λ); it is equal to the speed (v) of a wave train in a medium divided by its frequency (f): λ = v/f.
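
As a quick illustration of the relation λ = v/f, here is a minimal Python sketch; the function name and the example values (a 440 Hz tone in air at roughly 343 m/s) are my own choices, not figures taken from the text above.

def wavelength(speed, frequency):
    """Return the wavelength in meters for a wave of the given speed (m/s) and frequency (Hz)."""
    return speed / frequency

# A 440 Hz sound wave traveling at about 343 m/s in air:
print(wavelength(343.0, 440.0))   # roughly 0.78 m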

Details

In physics and mathematics, wavelength or spatial period of a wave or periodic function is the distance over which the wave's shape repeats. In other words, it is the distance between consecutive corresponding points of the same phase on the wave, such as two adjacent crests, troughs, or zero crossings. Wavelength is a characteristic of both traveling waves and standing waves, as well as other spatial wave patterns. The inverse of the wavelength is called the spatial frequency. Wavelength is commonly designated by the Greek letter lambda (λ). For a modulated wave, wavelength may refer to the carrier wavelength of the signal. The term wavelength may also apply to the repeating envelope of modulated waves or waves formed by interference of several sinusoids.

Assuming a sinusoidal wave moving at a fixed wave speed, wavelength is inversely proportional to the frequency of the wave: waves with higher frequencies have shorter wavelengths, and lower frequencies have longer wavelengths.

Wavelength depends on the medium (for example, vacuum, air, or water) that a wave travels through. Examples of waves are sound waves, light, water waves, and periodic electrical signals in a conductor. A sound wave is a variation in air pressure, while in light and other electromagnetic radiation the strength of the electric and the magnetic field vary. Water waves are variations in the height of a body of water. In a crystal lattice vibration, atomic positions vary.

The range of wavelengths or frequencies for wave phenomena is called a spectrum. The name originated with the visible light spectrum but now can be applied to the entire electromagnetic spectrum as well as to a sound spectrum or vibration spectrum.

Additional Information

There are many kinds of waves all around us. There are waves in the ocean and in lakes. Did you know that there are also waves in the air? Sound travels through the air in waves, and light is made up of waves of electromagnetic energy.

The wavelength of a wave describes how long the wave is. The distance from the "crest" (top) of one wave to the crest of the next wave is the wavelength. Alternately, we can measure from the "trough" (bottom) of one wave to the trough of the next wave and get the same value for the wavelength.

The frequency of a wave is inversely proportional to its wavelength. That means that waves with a high frequency have a short wavelength, while waves with a low frequency have a longer wavelength.

Light waves have very, very short wavelengths. Red light waves have wavelengths around 700 nanometers (nm), while blue and purple light have even shorter waves with wavelengths around 400 or 500 nm. Some radio waves, another type of electromagnetic radiation, have much longer waves than light, with wavelengths ranging from millimeters to kilometers.

Sound waves traveling through air have wavelengths from millimeters to meters. Low-pitch bass notes that humans can barely hear have huge wavelengths around 17 meters and frequencies around 20 hertz (Hz). Extremely high-pitched sounds that are on the other edge of the range that humans can hear have smaller wavelengths around 17 mm and frequencies around 20 kHz (kilohertz, or thousands of Hertz).
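
Assuming a speed of sound in air of about 343 m/s (a typical room-temperature value that the passage does not state explicitly), the figures above can be checked with two quick divisions:

speed_of_sound = 343.0                 # m/s in air at about 20 °C (assumed)
for frequency in (20.0, 20000.0):      # rough lower and upper limits of human hearing, in Hz
    print(frequency, "Hz ->", speed_of_sound / frequency, "m")
# 20 Hz    -> about 17 m
# 20000 Hz -> about 0.017 m, i.e. roughly 17 mm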


#7 Re: Jai Ganesh's Puzzles » General Quiz » Today 15:58:51

Hi,

#10739. What does the term in Biology Food chain mean?

#10740. What does the term in Biology Foramen mean?

#8 Re: Jai Ganesh's Puzzles » English language puzzles » Today 15:42:13

Hi,

#5935. What does the adjective freehand mean?

#5936. What does the noun freehold mean?

#9 Re: Jai Ganesh's Puzzles » Doc, Doc! » Today 15:30:37

Hi,

#2564. What does the medical term Dix–Hallpike test mean?

#13 This is Cool » Transistor » Yesterday 18:28:38

Jai Ganesh
Replies: 0

Transistor

Gist

A transistor is a semiconductor device that amplifies or switches electronic signals and power, acting as a fundamental building block of modern electronics, found in everything from smartphones to computers. It uses a small current or voltage at one terminal to control a much larger current flow between the other two, functioning like a tiny, fast electronic switch or amplifier.

A transistor is a semiconductor device that acts as either an electronic switch (turning current on/off) or an amplifier (boosting signals), controlling a larger current with a smaller one, forming the fundamental building block of modern electronics like computers, radios, and smartphones. 

Summary

A transistor is a semiconductor device used to amplify or switch electrical signals and power. It is one of the basic building blocks of modern electronics. It is composed of semiconductor material, usually with at least three terminals for connection to an electronic circuit. A voltage or current applied to one pair of the transistor's terminals controls the current through another pair of terminals. Because the controlled (output) power can be higher than the controlling (input) power, a transistor can amplify a signal. Some transistors are packaged individually, but many more in miniature form are found embedded in integrated circuits. Because transistors are the key active components in practically all modern electronics, many people consider them one of the 20th century's greatest inventions.

Physicist Julius Edgar Lilienfeld proposed the concept of a field-effect transistor (FET) in 1925, but it was not possible to construct a working device at that time. The first working device was a point-contact transistor invented in 1947 by physicists John Bardeen, Walter Brattain, and William Shockley at Bell Labs who shared the 1956 Nobel Prize in Physics for their achievement. The most widely used type of transistor, the metal–oxide–semiconductor field-effect transistor (MOSFET), was invented at Bell Labs between 1955 and 1960. Transistors revolutionized the field of electronics and paved the way for smaller and cheaper radios, calculators, computers, and other electronic devices.

Most transistors are made from very pure silicon, and some from germanium, but certain other semiconductor materials are sometimes used. A transistor may have only one kind of charge carrier in a field-effect transistor, or may have two kinds of charge carriers in bipolar junction transistor devices. Compared with the vacuum tube, transistors are generally smaller and require less power to operate. Certain vacuum tubes have advantages over transistors at very high operating frequencies or high operating voltages, such as traveling-wave tubes and gyrotrons. Many types of transistors are made to standardized specifications by multiple manufacturers.

Details

A transistor is a semiconductor device for amplifying, controlling, and generating electrical signals. Transistors are the active components of integrated circuits, or “microchips,” which often contain billions of these minuscule devices etched into their shiny surfaces. Deeply embedded in almost everything electronic, transistors have become the nerve cells of the Information Age.

There are typically three electrical leads in a transistor, called the emitter, the collector, and the base—or, in modern switching applications, the source, the drain, and the gate. An electrical signal applied to the base (or gate) influences the semiconductor material’s ability to conduct electrical current, which flows between the emitter (or source) and collector (or drain) in most applications. A voltage source such as a battery drives the current, while the rate of current flow through the transistor at any given moment is governed by an input signal at the gate—much as a faucet valve is used to regulate the flow of water through a garden hose.

The first commercial applications for transistors were for hearing aids and “pocket” radios during the 1950s. With their small size and low power consumption, transistors were desirable substitutes for the vacuum tubes (known as “valves” in Great Britain) then used to amplify weak electrical signals and produce audible sounds. Transistors also began to replace vacuum tubes in the oscillator circuits used to generate radio signals, especially after specialized structures were developed to handle the higher frequencies and power levels involved. Low-frequency, high-power applications, such as power-supply inverters that convert alternating current (AC) into direct current (DC), have also been transistorized. Some power transistors can now handle currents of hundreds of amperes at electric potentials over a thousand volts.

By far the most common application of transistors today is for computer memory chips—including solid-state multimedia storage devices for electronic games, cameras, and MP3 players—and microprocessors, where millions of components are embedded in a single integrated circuit. Here the voltage applied to the gate electrode, generally a few volts or less, determines whether current can flow from the transistor’s source to its drain. In this case the transistor operates as a switch: if a current flows, the circuit involved is on, and if not, it is off. These two distinct states, the only possibilities in such a circuit, correspond respectively to the binary 1s and 0s employed in digital computers. Similar applications of transistors occur in the complex switching circuits used throughout modern telecommunications systems. The potential switching speeds of these transistors now are hundreds of gigahertz, or more than 100 billion on-and-off cycles per second.

Development of transistors

The transistor was invented in 1947–48 by three American physicists, John Bardeen, Walter H. Brattain, and William B. Shockley, at the American Telephone and Telegraph Company’s Bell Laboratories. The transistor proved to be a viable alternative to the electron tube and, by the late 1950s, supplanted the latter in many applications. Its small size, low heat generation, high reliability, and low power consumption made possible a breakthrough in the miniaturization of complex circuitry. During the 1960s and ’70s, transistors were incorporated into integrated circuits, in which a multitude of components (e.g., diodes, resistors, and capacitors) are formed on a single “chip” of semiconductor material.

Motivation and early radar research

Electron tubes are bulky and fragile, and they consume large amounts of power to heat their cathode filaments and generate streams of electrons; also, they often burn out after several thousand hours of operation. Electromechanical switches, or relays, are slow and can become stuck in the on or off position. For applications requiring thousands of tubes or switches, such as the nationwide telephone systems developing around the world in the 1940s and the first electronic digital computers, this meant constant vigilance was needed to minimize the inevitable breakdowns.

An alternative was found in semiconductors, materials such as silicon or germanium whose electrical conductivity lies midway between that of insulators such as glass and conductors such as aluminum. The conductive properties of semiconductors can be controlled by “doping” them with select impurities, and a few visionaries had seen the potential of such devices for telecommunications and computers. However, it was military funding for radar development in the 1940s that opened the door to their realization. The “superheterodyne” electronic circuits used to detect radar waves required a diode rectifier—a device that allows current to flow in just one direction—that could operate successfully at ultrahigh frequencies over one gigahertz. Electron tubes just did not suffice, and solid-state diodes based on existing copper-oxide semiconductors were also much too slow for this purpose.

Crystal rectifiers based on silicon and germanium came to the rescue. In these devices a tungsten wire was jabbed into the surface of the semiconductor material, which was doped with tiny amounts of impurities, such as boron or phosphorus. The impurity atoms assumed positions in the material’s crystal lattice, displacing silicon (or germanium) atoms and thereby generating tiny populations of charge carriers (such as electrons) capable of conducting usable electrical current. Depending on the nature of the charge carriers and the applied voltage, a current could flow from the wire into the surface or vice-versa, but not in both directions. Thus, these devices served as the much-needed rectifiers operating at the gigahertz frequencies required for detecting rebounding microwave radiation in military radar systems. By the end of World War II, millions of crystal rectifiers were being produced annually by such American manufacturers as Sylvania and Western Electric.

Innovation at Bell Labs

Executives at Bell Labs had recognized that semiconductors might lead to solid-state alternatives to the electron-tube amplifiers and electromechanical switches employed throughout the nationwide Bell telephone system. In 1936 the new director of research at Bell Labs, Mervin Kelly, began recruiting solid-state physicists. Among his first recruits was William B. Shockley, who proposed a few amplifier designs based on copper-oxide semiconductor materials then used to make diodes. With the help of Walter H. Brattain, an experimental physicist already working at Bell Labs, he even tried to fabricate a prototype device in 1939, but it failed completely. Semiconductor theory could not yet explain exactly what was happening to electrons inside these devices, especially at the interface between copper and its oxide. Compounding the difficulty of any theoretical understanding was the problem of controlling the exact composition of these early semiconductor materials, which were binary combinations of different chemical elements (such as copper and oxygen).

With the close of World War II, Kelly reorganized Bell Labs and created a new solid-state research group headed by Shockley. The postwar search for a solid-state amplifier began in April 1945 with Shockley’s suggestion that silicon and germanium semiconductors could be used to make a field-effect amplifier (see integrated circuit: Field-effect transistors). He reasoned that an electric field from a third electrode could increase the conductivity of a sliver of semiconductor material just beneath it and thereby allow usable current to flow through the sliver. But attempts to fabricate such a device by Brattain and others in Shockley’s group again failed. The following March, John Bardeen, a theoretical physicist whom Shockley had hired for his group, offered a possible explanation. Perhaps electrons drawn to the semiconductor surface by the electric field were blocking the penetration of this field into the bulk material, thereby preventing it from influencing the conductivity.

Bardeen’s conjecture spurred a basic research program at Bell Labs into the behaviour of these “surface-state” electrons. While studying this phenomenon in November 1947, Brattain stumbled upon a way to neutralize their blocking effect and permit the applied field to penetrate deep into the semiconductor material. Working closely together over the next month, Bardeen and Brattain invented the first successful semiconductor amplifier, called the point-contact transistor, on December 16, 1947. Similar to the World War II crystal rectifiers, this weird-looking device had not one but two closely spaced metal wires jabbing into the surface of a semiconductor—in this case, germanium. The input signal on one of these wires (the emitter) boosted the conductivity of the germanium beneath both of them, thus modulating the output signal on the other wire (the collector). Observers present at a demonstration of this device the following week could hear amplified voices in the earphones that it powered. Shockley later called this invention a “magnificent Christmas present” for the farsighted company, which had supported the research program that made this breakthrough.

Not to be outdone by members of his own group, Shockley conceived yet another way to fabricate a semiconductor amplifier the very next month, on January 23, 1948. His junction transistor was basically a three-layer sandwich of germanium or silicon in which the adjacent layers would be doped with different impurities to induce distinct electrical characteristics. An input signal entering the middle layer—the “meat” of the semiconductor sandwich—determined how much current flowed from one end of the device to the other under the influence of an applied voltage. Shockley’s device is often called the bipolar junction transistor because its operation requires that the negatively charged electrons and their positively charged counterparts (the holes corresponding to an absence of electrons in the crystal lattice) coexist briefly in the presence of one another.

The name transistor, a combination of transfer and resistor, was coined for these devices in May 1948 by Bell Labs electrical engineer John Robinson Pierce, who was also a science-fiction author in his spare time. A month later Bell Labs announced the revolutionary invention in a press conference held at its New York City headquarters, heralding Bardeen, Brattain, and Shockley as the three coinventors of the transistor. The three were eventually awarded the Nobel Prize for Physics for their invention.

Although the point-contact transistor was the first transistor invented, it faced a difficult gestation period and was eventually used only in a switch made for the Bell telephone system. Manufacturing them reliably and with uniform operating characteristics proved a daunting problem, largely because of hard-to-control variations in the metal-to-semiconductor point contacts.

Shockley had foreseen these difficulties in the process of conceiving the junction transistor, which he figured would be much easier to manufacture. But it still required more than three years, until mid-1951, to resolve its own development problems. Bell Labs scientists, engineers, and technicians first had to find ways to make ultrapure germanium and silicon, form large crystals of these elements, dope them with narrow layers of the required impurities, and attach delicate wires to these layers to serve as electrodes. In July 1951 Bell Labs announced the successful invention and development of the junction transistor, this time with only Shockley in the spotlight.

Commercialization

Commercial transistors began to roll off production lines during the 1950s, after Bell Labs licensed the technology of their production to other companies, including General Electric, Raytheon, RCA, Sylvania, and Transitron Electronics. Transistors found ready applications in lightweight devices such as hearing aids and portable radios. Texas Instruments Inc., working with the Regency Division of Industrial Development Engineering Associates, manufactured the first transistor radio in late 1954. Selling for $49.95, the Regency TR-1 employed four germanium junction transistors in a multistage amplifier of radio signals. The very next year a new Japanese company, Sony, introduced its own transistor radio and began to corner the market for this and other transistorized consumer electronics.

Transistors also began replacing vacuum tubes in the digital computers manufactured by IBM, Control Data, and other companies. “It seems to me that in these robot brains the transistor is the ideal nerve cell,” Shockley had observed in a 1949 radio interview. “The advantage of the transistor is that it is inherently a small-size and low-power device,” noted Bell Labs circuit engineer Robert Wallace early in the 1950s. “This means you can pack a large number of them in a small space without excessive heat generation and achieve low propagation delays. And that’s what you need for logic applications. The significance of the transistor is not that it can replace the tube but that it can do things the vacuum tube could never do!” After 1955 IBM started purchasing germanium transistors from Texas Instruments to employ in its computer circuits. By the end of the 1950s, bipolar junction transistors had almost completely replaced electron tubes in computer applications.

Silicon transistors

During the 1950s, meanwhile, scientists and engineers at Bell Labs and Texas Instruments were developing advanced technologies needed to produce silicon transistors. Because of its higher melting temperature and greater reactivity, silicon was much more difficult to work with than germanium, but it offered major prospects for better performance, especially in switching applications. Germanium transistors make leaky switches; substantial leakage currents can flow when these devices are supposedly in their off state. Silicon transistors have far less leakage. In 1954 Texas Instruments produced the first commercially available silicon junction transistors and quickly dominated this new market—especially for military applications, in which their high cost was of little concern.

In the mid-1950s Bell Labs focused its transistor-development efforts around new diffusion technologies, in which very narrow semiconductor layers—with thicknesses measured in microns, or millionths of a metre—are prepared by diffusing impurity atoms into the semiconductor surface from a hot gas. Inside a diffusion furnace the impurity atoms penetrate more readily into the silicon or germanium surface; their penetration depth is controlled by varying the density, temperature, and pressure of the gas as well as the processing time. (See integrated circuit: Fabricating ICs.) For the first time, diodes and transistors produced by these diffusion implantation processes functioned at frequencies above 100 megahertz (100 million cycles per second). These diffused-base transistors could be used in receivers and transmitters for FM radio and television, which operate at such high frequencies.

Another important breakthrough occurred at Bell Labs in 1955, when Carl Frosch and Link Derick developed a means of producing a glassy silicon dioxide outer layer on the silicon surface during the diffusion process. This layer offered transistor producers a promising way to protect the silicon underneath from further impurities once the diffusion process was finished and the desired electrical properties had been established.

Texas Instruments, Fairchild Semiconductor Corporation, and other companies took the lead in applying these diffusion technologies to the large-scale manufacture of transistors. At Fairchild, physicist Jean Hoerni developed the planar manufacturing process, whereby the various semiconductor layers and their sensitive interfaces are embedded beneath a protective silicon dioxide outer layer. The company was soon making and selling planar silicon transistors, largely for military applications. Led by Robert Noyce and Gordon E. Moore, Fairchild’s scientists and engineers extended this revolutionary technique to the manufacture of integrated circuits.

In the late 1950s Bell Labs researchers developed ways to use the new diffusion technologies to realize Shockley’s original 1945 idea of a field-effect transistor (FET). To do so, they had to overcome the problem of surface-state electrons, which would otherwise have blocked external electric fields from penetrating into the semiconductor. They succeeded by carefully cleaning the silicon surface and growing a very pure silicon dioxide layer on it. This approach reduced the number of surface-state electrons at the interface between the silicon and oxide layers, permitting fabrication of the first successful field-effect transistor in 1960 at Bell Labs—which, however, did not pursue its development any further.

Refinements of the FET design by other companies, especially RCA and Fairchild, resulted in the metal-oxide-semiconductor field-effect transistor (MOSFET) during the early 1960s. The key problems to be solved were the stability and reliability of these MOS transistors, which relied upon interactions occurring at or near the sensitive silicon surface rather than deep inside. The two firms began to make MOS transistors commercially available in late 1964.

In early 1963 Frank Wanlass at Fairchild developed the complementary MOS (CMOS) transistor circuit, based on a pair of MOS transistors. This approach eventually proved ideal for use in integrated circuits because of its simplicity of production and very low power dissipation during standby operation. Stability problems continued to plague MOS transistors, however, until researchers at Fairchild developed solutions in the mid-1960s. By the end of the decade, MOS transistors were beginning to displace bipolar junction transistors in microchip manufacturing. Since the late 1980s CMOS has been the technology of choice for digital applications, while bipolar transistors are now used primarily for analog and microwave devices.

Transistor principles

The operation of junction transistors, as well as most other semiconductor devices, depends heavily on the behaviour of electrons and holes at the interface between two dissimilar layers, known as a p-n junction. Discovered in 1940 by Bell Labs electrochemist Russell Ohl, p-n junctions are formed by adding two different impurity elements to adjacent regions of germanium or silicon. The addition of these impurity elements is called doping. Atoms of elements from Group 15 of the periodic table (which possess five valence electrons), such as phosphorus or arsenic, contribute an electron that has no natural resting place within the crystal lattice. These excess electrons are therefore loosely bound and relatively free to roam about, acting as charge carriers that can conduct electrical current. Atoms of elements from Group 13 (which have three valence electrons), such as boron or aluminum, induce a deficit of electrons when added as impurities, effectively creating “holes” in the lattice. These positively charged quantum mechanical entities are also fairly free to roam around and conduct electricity. Under the influence of an electric field, the electrons and holes move in opposite directions. During and immediately after World War II, chemists and metallurgists at Bell Labs perfected techniques of adding impurities to high-purity silicon and germanium to induce the desired electron-rich layer (known as the n-layer) and the electron-poor layer (known as the p-layer) in these semiconductors, as described in the section Development of transistors.

A p-n junction acts as a rectifier, similar to the old point-contact crystal rectifiers, permitting easy flow of current in only a single direction. If no voltage is applied across the junction, electrons and holes will gather on opposite sides of the interface to form a depletion layer that will act as an insulator between the two sides. A negative voltage applied to the n-layer will drive the excess electrons within it toward the interface, where they will combine with the positively charged holes attracted there by the electric field. Current will then flow easily. If instead a positive voltage is applied to the n-layer, the resulting electric field will draw electrons away from the interface, so combinations of them with holes will occur much less often. In this case current will not flow (other than tiny leakage currents). Thus, electricity will flow in only one direction through a p-n junction.
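
The one-way behaviour described above is often summarized by the standard ideal-diode (Shockley) equation, I = I_s(e^(V/V_T) − 1). The short Python sketch below only illustrates that textbook model, not anything specific to this article, and the saturation-current and thermal-voltage values are illustrative assumptions.

import math

I_S = 1e-12   # saturation (leakage) current in amperes, an illustrative value
V_T = 0.025   # thermal voltage in volts, roughly kT/q at room temperature

def diode_current(voltage):
    """Ideal-diode current for an applied voltage (volts) across the p-n junction."""
    return I_S * (math.exp(voltage / V_T) - 1.0)

print(diode_current(0.6))    # forward bias: tens of milliamperes flow easily
print(diode_current(-0.6))   # reverse bias: only the tiny leakage current, about -I_S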

Junction transistors

Shortly after his colleagues John Bardeen and Walter H. Brattain invented their point-contact device, Bell Labs physicist William B. Shockley recognized that these rectifying characteristics might also be used in making a junction transistor. In a 1949 paper Shockley explained the physical principles behind the operation of these junctions and showed how to use them in a three-layer—n-p-n or p-n-p—device that could act as a solid-state amplifier or switch. Electric current would flow from one end to the other, with the voltage applied to the inner layer governing how much current rushed by at any given moment. In the n-p-n junction transistor, for example, electrons would flow from one n-layer through the inner p-layer to the other n-layer. Thus, a weak electrical signal applied to the inner, base layer would modulate the current flowing through the entire device. For this current to flow, some of the electrons would have to survive briefly in the presence of holes; in order to reach the second n-layer, they could not all combine with holes in the p-layer. Such bipolar operation was not at all obvious when Shockley first conceived his junction transistor. Experiments with increasingly pure crystals of silicon and germanium showed that it indeed occurred, making bipolar junction transistors possible.

To achieve bipolar operation, it also helps that the base layer be narrow, so that electrons (in n-p-n transistors) and holes (in p-n-p) do not have to travel very far in the presence of their opposite numbers. Narrow base layers also promote high-frequency operation of junction transistors: the narrower the base, the higher the operating frequency. That is a major reason why there was so much interest in developing diffused-base transistors during the 1950s, as described in the section Silicon transistors. Their microns-thick bases permitted transistors to operate above 100 megahertz (100 million cycles per second) for the first time.

MOS-type transistors

A similar principle applies to metal-oxide-semiconductor (MOS) transistors, but here it is the distance between source and drain that largely determines the operating frequency. In an n-channel MOS (NMOS) transistor, for example, the source and the drain are two n-type regions that have been established in a piece of p-type semiconductor, usually silicon. Except for the two points at which metal leads contact these regions, the entire semiconductor surface is covered by an insulating oxide layer. The metal gate, usually aluminum, is deposited atop the oxide layer just above the gap between source and drain. If there is no voltage (or a negative voltage) upon the gate, the semiconductor material beneath it will contain excess holes, and very few electrons will be able to cross the gap, because one of the two p-n junctions will block their path. Therefore, no current will flow in this configuration—other than unavoidable leakage currents. If the gate voltage is instead positive, an electric field will penetrate through the oxide layer and attract electrons into the silicon layer (often called the inversion layer) directly beneath the gate. Once this voltage exceeds a specific threshold value, electrons will begin flowing easily between source and drain. The transistor turns on.

Analogous behaviour occurs in a p-channel MOS transistor, in which the source and the drain are p-type regions formed in n-type semiconductor material. Here a negative voltage above a threshold induces a layer of holes (instead of electrons) beneath the gate and permits a current of them to flow from source to drain. For both n-channel and p-channel MOS (also called NMOS and PMOS) transistors, the operating frequency is largely governed by the speed at which the electrons or holes can drift through the semiconductor material divided by the distance from source to drain. Because electrons have mobilities through silicon that are about three times higher than holes, NMOS transistors can operate at substantially higher frequencies than PMOS transistors. Small separations between source and drain also promote high-frequency operation, and extensive efforts have been devoted to reducing this distance.

In the 1960s Frank Wanlass of Fairchild Semiconductor recognized that combinations of an NMOS and a PMOS transistor would draw extremely little current in standby operation—just the tiny, unavoidable leakage currents. These CMOS, or complementary metal-oxide-semiconductor, transistor circuits consume significant power only when the gate voltage exceeds some threshold and a current flows from source to drain. Thus, they can serve as very low-power devices, often a million times lower than the equivalent bipolar junction transistors. Together with their inherent simplicity of fabrication, this feature of CMOS transistors has made them the natural choice for manufacturing microchips, which today cram millions of transistors into a surface area smaller than a fingernail. In such cases the waste heat generated by the component’s power consumption must be kept to an absolute minimum, or the chips will simply melt.
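
To make the complementary idea concrete, here is a minimal sketch that treats each MOS device as an ideal threshold switch and wires an n-channel and a p-channel transistor into a CMOS inverter. The supply and threshold voltages and the function names are illustrative assumptions of mine, not values from the text; a real device conducts gradually rather than switching abruptly.

V_DD = 1.0   # supply voltage (illustrative)
V_TH = 0.5   # switching threshold for both devices (illustrative)

def nmos_on(gate_voltage):
    # An idealized n-channel device conducts when its gate is pulled high.
    return gate_voltage > V_TH

def pmos_on(gate_voltage):
    # An idealized p-channel device conducts when its gate is pulled low.
    return gate_voltage < V_DD - V_TH

def cmos_inverter(v_in):
    # In steady state exactly one of the two transistors conducts, so there is
    # no continuous current path from the supply to ground: standby power is
    # essentially zero, which is the point of the complementary arrangement.
    if pmos_on(v_in) and not nmos_on(v_in):
        return V_DD   # the p-channel device pulls the output up: logic 1
    if nmos_on(v_in) and not pmos_on(v_in):
        return 0.0    # the n-channel device pulls the output down: logic 0
    raise ValueError("input is in the transition region of this idealized model")

print(cmos_inverter(0.0))   # 1.0 -> a low input gives a high output
print(cmos_inverter(1.0))   # 0.0 -> a high input gives a low output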

Field-effect transistors

Another kind of unipolar transistor, called the metal-semiconductor field-effect transistor (MESFET), is particularly well suited for microwave and other high-frequency applications because it can be manufactured from semiconductor materials with high electron mobilities that do not support an insulating oxide surface layer. These include compound semiconductors such as germanium-silicon and gallium arsenide. A MESFET is built much like a MOS transistor but with no oxide layer between the gate and the underlying conduction channel. Instead, the gate makes a direct, rectifying contact with the channel, which is generally a thin layer of n-type semiconductor supported underneath by an insulating substrate. A negative voltage on the gate induces a depletion layer just beneath it that restricts the flow of electrons between source and drain. The device acts like a voltage-controlled resistor; if the gate voltage is large enough, it can block this flow almost completely. By contrast, a positive voltage on the gate encourages electrons to traverse the channel.

To improve MESFET performance even further, advanced devices known as heterojunction field-effect transistors have been developed, in which p-n junctions are established between two slightly dissimilar semiconductor materials, such as gallium arsenide and aluminum gallium arsenide. By properly controlling the impurities in the two substances, a high-conductivity channel can be formed at their interface, promoting the flow of electrons through it. If one semiconductor is a high-purity material, its electron mobility can be large, resulting in a high operating frequency for this kind of transistor. (The electron mobility of gallium arsenide, for example, is five times that of silicon.) Heterojunction MESFETs are increasingly used for microwave applications such as cellular telephone systems.

Transistors and Moore’s law

In 1965, four years after Fairchild Semiconductor Corporation and Texas Instruments Inc. marketed their first integrated circuits, Fairchild research director Gordon E. Moore made a prediction in a special issue of Electronics magazine. Observing that the total number of components in these circuits had roughly doubled each year, he blithely extrapolated this annual doubling to the next decade, estimating that microcircuits of 1975 would contain an astounding 65,000 components per chip.

History proved Moore correct. His bold extrapolation has since become enshrined as Moore’s law—though its doubling period was lengthened to 18 months in the mid-1970s. What has made this dramatic explosion in circuit complexity possible is the steadily shrinking size of transistors over the decades. Measured in millimetres in the late 1940s, the dimensions of a typical transistor are now about 10 nanometres, a reduction factor of over 100,000. Submicron transistor features were attained during the 1980s, when dynamic random-access memory (DRAM) chips began offering megabit storage capacities. At the dawn of the 21st century, these features approached 0.1 micron across, which allowed the manufacture of gigabit memory chips and microprocessors that operate at gigahertz frequencies. Moore’s law continued into the second decade of the 21st century with the introduction of three-dimensional transistors that were tens of nanometres in size.
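
Moore's 1965 extrapolation amounts to one line of arithmetic. In the sketch below, the starting count of 64 components is an assumption of mine, chosen only to show how ten annual doublings reach the figure of roughly 65,000 quoted above.

components_1965 = 64                          # assumed starting point, order of magnitude only
components_1975 = components_1965 * 2 ** 10   # ten annual doublings
print(components_1975)                        # 65536, i.e. roughly the 65,000 components per chip cited above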

As the size of transistors has shrunk, their cost has plummeted correspondingly from tens of dollars apiece to thousandths of a penny. As Moore was fond of saying, every year more transistors are produced than raindrops over California, and it costs less to make one than to print a single character on the page of a book. They are by far the most common human artifact on the planet. Deeply embedded in everything electronic, transistors permeate modern life almost as thoroughly as molecules permeate matter. Cheap, portable, and reliable equipment based on this remarkable device can be found in almost any village and hamlet in the world. This tiny invention, by making possible the Information Age, has transformed the world into a truly global society, making it a far more intimately connected place than ever before.

Additional Information:

What is a transistor?

Transistors are the key building blocks of integrated circuits and microchips. They’re basically microscopic electronic switches or amplifiers. As such, they control the flow of electrical signals, enabling the chip to process and store information.

A transistor is usually made from silicon or another semiconductor material. The properties of these materials lie between those of an electrical conductor (like a metal) and an insulator (like rubber).

Depending on the temperature, for example, or on the presence of impurities, they can either conduct or block electricity. This makes them perfectly suited to control electrical signals.

A transistor consists of three terminals: the base, collector, and emitter. Through these terminals, the transistor can control the flow of current in a circuit.

How does a transistor work?

When a small electrical current is applied to the transistor base, it allows a larger current to flow between the collector and the emitter. This is like a valve: a little pressure on the base controls a much bigger flow of electricity.

* If there is no current at the base, the transistor acts like a closed switch. No current flows between the collector and emitter.
* If there is a current at the base, the transistor opens up. Current flows through.

This ability to control electrical current allows transistors to work as a switch (switching things on and off) or as an amplifier (making signals stronger).

* As a switch: Transistors can rapidly turn on and off, representing binary states (0 and 1) that form the foundation of digital computing. When a small voltage is applied to the base terminal, it allows a larger current to flow between the collector and emitter, switching the transistor ‘on.’ When the voltage is removed, the current stops, and the transistor turns ‘off.’
* As an amplifier: Transistors can also be used to boost weak electrical signals. A small input signal applied to the base can control a larger output signal between the collector and emitter, amplifying it. This is essential in devices like radios, televisions, and audio systems, where signal amplification is necessary for proper operation (a rough numerical sketch follows this list).
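
As a rough numerical illustration of the amplifier role, the sketch below uses the common first-order approximation that the collector current equals the base current multiplied by a gain factor (often written beta), clipped once the device saturates. The gain and the saturation limit are illustrative assumptions, not figures from this text.

BETA = 100.0    # current gain, an illustrative value
I_C_MAX = 0.1   # collector current in amperes at which this simple model saturates

def collector_current(i_base):
    """First-order transistor model: I_C = BETA * I_B, limited by saturation."""
    return min(BETA * i_base, I_C_MAX)

print(collector_current(0.0))      # 0.0  -> no base current, the transistor is off
print(collector_current(0.0002))   # 0.02 -> 0.2 mA at the base controls 20 mA at the collector
print(collector_current(0.01))     # 0.1  -> driven hard enough, the transistor saturates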

In digital electronics, transistors are used in large numbers to build logic gates. These form the foundation of computer processors. By switching on and off very quickly, transistors help process the binary code that computers use to operate.

The first transistors

The history of transistors starts in the 1940s. Scientists were looking for better ways to control electrical signals in devices like radios and televisions. At the time, these devices used vacuum tubes, which were big, used a lot of power, and often broke.

In 1947, three scientists at Bell Labs—John Bardeen, Walter Brattain, and William Shockley—created the first working transistor. This compact device could do the same job as a vacuum tube. But it was much smaller, used less power, and was more reliable.

By the 1950s, transistors were used in radios and early computers, making these devices smaller and easier to carry around. In 1956, MIT built one of the first computers that used transistors instead of vacuum tubes, showing just how useful they were for building faster, more efficient machines. By the 1960s, scientists figured out how to put many transistors onto a single chip. This led to the creation of increasingly powerful computer chips.

Why are transistors so important?

Transistors are the core components in integrated circuits and chips. Today’s microchips can contain billions of transistors, allowing devices to perform complex tasks at high speeds while using less power. Their invention and continuous miniaturization revolutionized electronics, making it possible to build smaller, faster, and more efficient devices.

Today, transistors are everywhere—in smartphones, laptops, and virtually all electronic devices. They are the key to the digital world we live in, and keep getting smaller and more powerful as technology improves.


#14 Re: Dark Discussions at Cafe Infinity » crème de la crème » Yesterday 17:39:58

2428) Igor Tamm

Gist:

Work

In certain media the speed of light is lower than in a vacuum and particles can travel faster than light. One result of this was discovered in 1934 by Pavel Cherenkov, when he saw a bluish light around a radioactive preparation placed in water. Igor Tamm and Ilya Frank explained the phenomenon in 1937. On their way through a medium, charged particles disturb electrons in the medium. When these resume their position, they emit light. Normally this does not produce any light that can be observed, but if the particle moves faster than light, a kind of backwash of light appears.

Summary

Igor Yevgenyevich Tamm (born July 8 [June 26, Old Style], 1895, Vladivostok, Siberia, Russia—died April 12, 1971, Moscow, Russia, Soviet Union) was a Soviet physicist who shared the 1958 Nobel Prize for Physics with Pavel A. Cherenkov and Ilya M. Frank for his efforts in explaining Cherenkov radiation. Tamm was one of the theoretical physicists who contributed to the construction of the first Soviet thermonuclear bomb.

Tamm’s father was an engineer in the city of Yelizavetgrad (now Kirovohrad, Ukr.), where he was responsible for building and managing electric power stations and water systems. Tamm graduated from the gymnasium there in 1913 and went abroad to study at the University of Edinburgh. The following year he returned to Moscow State University, and he graduated in 1918. In 1924 he became a lecturer in the physics department, and in 1930 he succeeded his mentor, Leonid I. Mandelstam, to the chair of theoretical physics. In 1933 Tamm was elected a corresponding member of the Soviet Academy of Sciences. The following year, he joined the P.N. Lebedev Physics Institute of the Soviet Academy of Sciences (FIAN), where he organized and headed the theoretical division, a position he occupied until his death.

Tamm’s early studies of unique forms of electron bonding (“Tamm surface levels”) on the surfaces of crystalline solids had important applications in the later development of solid-state semiconductor devices. In 1934 Cherenkov had discovered that light is emitted when gamma rays pass through a liquid medium. In 1937 Tamm and Frank explained this phenomenon as the emission of light waves by electrically charged particles moving faster than the speed of light in a medium. Tamm developed this theory more fully in a paper published in 1939. For these discoveries Tamm, Frank, and Cherenkov received the 1958 Nobel Prize for Physics.

Immediately after World War II, Tamm, though a major theoretician, was not assigned to work on the atomic bomb project, possibly for political reasons. In particular, he was branded a “bourgeois idealist” and his brother an “enemy of the state.” Nevertheless, in June 1948, when physicist Igor V. Kurchatov needed a strong team to investigate the feasibility of creating a thermonuclear bomb, Tamm was recruited to organize the theoretical division of FIAN in Moscow. The Tamm group came to include physicists Yakov B. Zeldovich, Vitaly L. Ginzburg, Semyon Z. Belenky, Andrey D. Sakharov, Efim S. Fradkin, Yuri A. Romanov, and Vladimir Y. Fainberg. Between March and April 1950, Tamm and several members of his group were sent to the secret installation known as Arzamas-16 (near the present-day village of Sarov) to work under physicist Yuly Khariton’s direction on a thermonuclear bomb project. One bomb design, known as the Sloika (“Layer Cake”), was successfully tested on Aug. 12, 1953. Tamm was elected a full member of the Academy of Sciences in October 1953 and the same year was awarded the title Hero of Socialist Labour. On Nov. 22, 1955, the Soviet Union successfully tested a more modern thermonuclear bomb that was analogous to the design of the American physicists Edward Teller and Stanislaw Ulam.

Tamm spent the latter decades of his career at the Lebedev Institute, where he worked on controlling nuclear fusion by using a powerful magnetic field in a donut-shaped device known as a tokamak.

Details

Igor Yevgenyevich Tamm (8 July 1895 – 12 April 1971) was a Soviet physicist who received the 1958 Nobel Prize in Physics, jointly with Pavel Alekseyevich Cherenkov and Ilya Mikhailovich Frank, for their 1934 discovery and demonstration of Cherenkov radiation. He also predicted the quasi-particle of sound: the phonon; and in 1951, together with Andrei Sakharov, proposed the Tokamak system.

Biography

Igor Tamm was born in 1895 in Vladivostok into the family of Eugene Tamm, a civil engineer, and his wife Olga Davydova. According to Russian sources, Tamm had German noble descent on his father's side through his grandfather Theodor Tamm, who emigrated from Thuringia. Although his surname "Tamm" is rather common in Estonia, other sources state he was Jewish or had Jewish ancestry.

He studied at a gymnasium in Elisavetgrad (now Kropyvnytskyi, Ukraine). In 1913–1914 he studied at the University of Edinburgh together with his school-friend Boris Hessen.

At the outbreak of World War I in 1914 he joined the army as a volunteer field medic. In 1917 he joined the Revolutionary movement and became an active anti-war campaigner, serving on revolutionary committees after the March Revolution. He returned to the Moscow State University from which he graduated in 1918.

Tamm married Nataliya Shuyskaya (1894–1980) in September 1917. She belonged to the noble Rurikid Shuysky family. They eventually had two children, Irina (1921–2009, chemist) and Evgeny (1926–2008, experimental physicist and famous mountain climber, leader of the Soviet Everest expedition in 1982).

On 1 May 1923, Tamm began teaching physics at the Second Moscow State University. The same year, he finished his first scientific paper, Electrodynamics of the Anisotropic Medium in the Special Theory of Relativity. In 1928, he spent a few months with Paul Ehrenfest at the University of Leiden and made a life-long friendship with Paul Dirac. From 1934 until his death in 1971 Tamm was the head of the theoretical department at Lebedev Physical Institute in Moscow.

In 1932, Tamm published a paper with his proposal of the concept of surface states. This concept is important for metal–oxide–semiconductor field-effect transistor (MOSFET) physics.

In 1934, Tamm and Semen Altshuller suggested that the neutron has a non-zero magnetic moment. The idea was met with scepticism at the time, since the neutron was supposed to be an elementary particle with zero charge and thus could not have a magnetic moment. The same year, Tamm proposed that proton-neutron interactions can be described as an exchange force transmitted by an as-yet-unknown massive particle, an idea later developed by Hideki Yukawa into the theory of meson forces.

In 1945 he developed an approximation method for many-body physics. Because Sidney Dancoff developed it independently in 1950, it is now known as the Tamm-Dancoff approximation.

He shared the 1958 Nobel Prize in Physics with Pavel Cherenkov and Ilya Frank for the discovery and interpretation of the Cherenkov-Vavilov effect.

From the late 1940s to the early 1950s, Tamm was involved in the Soviet thermonuclear bomb project. In 1949–1953 he spent most of his time in the "secret city" of Sarov, working as head of the theoretical group developing the hydrogen bomb; he retired from the project and returned to the Lebedev Physical Institute in Moscow after the first successful test of a hydrogen bomb in 1953.

In 1951, together with Andrei Sakharov, Tamm proposed the tokamak system for achieving controlled thermonuclear fusion on the basis of a toroidal magnetic thermonuclear reactor, and soon afterwards the first such devices were built by the INF. Results from the Soviet T-3 magnetic confinement device in 1968, when plasma parameters unique for that time were obtained, showed temperatures more than an order of magnitude higher than the rest of the community expected. Western scientists visited the experiment and verified the high temperatures and confinement, sparking a wave of optimism for the tokamak and the construction of new experiments; the tokamak remains the dominant magnetic confinement device today.

In 1964 he was elected a Member of the German Academy of Sciences Leopoldina.

Tamm was a student of Leonid Isaakovich Mandelshtam in science and life.

Tamm was an atheist.

Tamm died in Moscow, Soviet Union, on 12 April 1971 and is buried at Novodevichy Cemetery. The lunar crater Tamm is named after him.


#15 Re: This is Cool » Miscellany » Yesterday 17:23:15

2490) Refractory

Gist

Refractory materials are specialized, non-metallic substances engineered to withstand extreme temperatures (often above 1,000 °F, or 538 °C), high pressure, and corrosion without softening or deforming. They are critical for lining furnaces, kilns, and reactors. Key properties include high thermal stability, low thermal expansion, and resistance to molten slag and thermal shock.

Refractory materials are inorganic (not from living material), non-metal substances that can withstand extremely high temperatures without any loss of strength or shape. They are used in devices, such as furnaces, that heat substances and in tanks and other storage devices that hold hot materials.

Summary:

What Are Refractories?

Refractories are ceramic materials designed to withstand the very high temperatures (in excess of 1,000°F [538°C]) encountered in modern manufacturing. More heat-resistant than metals, they are used to line the hot surfaces found inside many industrial processes.

In addition to being resistant to thermal stress and other physical phenomena induced by heat, refractories can withstand physical wear and corrosion caused by chemical agents. Thus, they are essential to the manufacture of petrochemical products and the refining of gasoline.

Refractory products generally fall into one of two broad categories: preformed shapes or unformed compositions, often called specialty or monolithic refractories. Then, there are refractory ceramic fibers, which resemble residential insulation, but insulate at much higher temperatures. Bricks and shapes are the more traditional form of refractories and historically have accounted for the majority of refractory production.

Refractories come in all shapes and sizes. They can be pressed or molded for use in floors and walls, produced in interlocking shapes and wedges, or curved to fit the insides of boilers and ladles. Some refractory parts are small and possess a complex and delicate geometry; others, in the form of precast or fusion-cast blocks, are massive and may weigh several tons.

What Are Refractories Made Of?

Refractories are produced from natural and synthetic materials, usually nonmetallic, or combinations of compounds and minerals such as alumina, fireclays, bauxite, chromite, dolomite, magnesite, silicon carbide, and zirconia.

What Are Refractories Used For?

From the simple (e.g., fireplace brick linings) to the sophisticated (e.g., reentry heat shields for the space shuttle), refractories are used to contain heat and protect processing equipment from intense temperatures. In industry, they are used to line boilers and furnaces of all types (reactors, ladles, stills, kilns, etc.).

It is a tribute to refractory engineers, scientists, technicians, and plant personnel that more than 5,000 brand name products in the United States are listed in the latest “Product Directory of the Refractories Industry.”

Details

In materials science, a refractory (or refractory material) is a material that is resistant to decomposition by heat or chemical attack and that retains its strength and rigidity at high temperatures. They are inorganic, non-metallic compounds that may be porous or non-porous, and their crystallinity varies widely: they may be crystalline, polycrystalline, amorphous, or composite. They are typically composed of oxides, carbides or nitrides of the following elements: silicon, aluminium, magnesium, calcium, boron, chromium and zirconium. Many refractories are ceramics, but some such as graphite are not, and some ceramics such as clay pottery are not considered refractory. Refractories are distinguished from the refractory metals, which are elemental metals and their alloys that have high melting temperatures.

Refractories are defined by ASTM C71 as "non-metallic materials having those chemical and physical properties that make them applicable for structures, or as components of systems, that are exposed to environments above 1,000 °F (811 K; 538 °C)". Refractory materials are used in furnaces, kilns, incinerators, and reactors. Refractories are also used to make crucibles and molds for casting glass and metals. The iron and steel industry and metal casting sectors use approximately 70% of all refractories produced.

Refractory materials

Refractory materials must be chemically and physically stable at high temperatures. Depending on the operating environment, they must be resistant to thermal shock, be chemically inert, and/or have specific ranges of thermal conductivity and of the coefficient of thermal expansion.

The oxides of aluminium (alumina), silicon (silica) and magnesium (magnesia) are the most important materials used in the manufacturing of refractories. Another oxide usually found in refractories is the oxide of calcium (lime). Fire clays are also widely used in the manufacture of refractories.

Refractories must be chosen according to the conditions they face. Some applications require special refractory materials. Zirconia is used when the material must withstand extremely high temperatures. Silicon carbide and carbon (graphite) are two other refractory materials used in some very severe temperature conditions, but they cannot be used in contact with oxygen, as they would oxidize and burn.

Binary compounds such as tungsten carbide or boron nitride can be very refractory. Hafnium carbide is the most refractory binary compound known, with a melting point of 3890 °C. The ternary compound tantalum hafnium carbide has one of the highest melting points of all known compounds (4215 °C).

Molybdenum disilicide has a high melting point of 2030 °C and is often used as a heating element.

Uses

Refractory materials are useful for the following functions:

* Serving as a thermal barrier between a hot medium and the wall of a containing vessel
* Withstanding physical stresses and preventing erosion of vessel walls due to the hot medium
* Protecting against corrosion
* Providing thermal insulation

Refractories have multiple useful applications. In the metallurgy industry, refractories are used for lining furnaces, kilns, reactors, and other vessels which hold and transport hot media such as metal and slag. Refractories have other high temperature applications such as fired heaters, hydrogen reformers, ammonia primary and secondary reformers, cracking furnaces, utility boilers, catalytic cracking units, air heaters, and sulfur furnaces. They are used for surfacing flame deflectors in rocket launch structures.

Additional Information

A refractory is any material that has an unusually high melting point and that maintains its structural properties at very high temperatures. Composed principally of ceramics, refractories are employed in great quantities in the metallurgical, glassmaking, and ceramics industries, where they are formed into a variety of shapes to line the interiors of furnaces, kilns, and other devices that process materials at high temperatures.

In this article the essential properties of ceramic refractories are reviewed, as are the principal refractory materials and their applications. At certain points in the article reference is made to the processing techniques employed in the manufacture of ceramic refractories; more detailed description of these processes can be found in the articles traditional ceramics and advanced ceramics. The connection between the properties of ceramic refractories and their chemistry and microstructure is explained in ceramic composition and properties.

Properties

Because of the high strengths exhibited by their primary chemical bonds, many ceramics possess unusually good combinations of high melting point and chemical inertness. This makes them useful as refractories. (The word refractory comes from the French réfractaire, meaning “high-melting.”) The property of chemical inertness is of special importance in metallurgy and glassmaking, where the furnaces are exposed to extremely corrosive molten materials and gases. In addition to temperature and corrosion resistance, refractories must possess superior physical wear or abrasion resistance, and they also must be resistant to thermal shock. Thermal shock occurs when an object is rapidly cooled from high temperature. The surface layers contract against the inner layers, leading to the development of tensile stress and the propagation of cracks. Ceramics, in spite of their well-known brittleness, can be made resistant to thermal shock by adjusting their microstructure during processing. The microstructure of ceramic refractories is quite coarse when compared with whitewares such as porcelain or even with less finely textured structural clay products such as brick. The size of filler grains can be on the scale of millimetres, instead of the micrometre scale seen in whiteware ceramics. In addition, most ceramic refractory products are quite porous, with large amounts of air spaces of varying size incorporated into the material. The presence of large grains and pores can reduce the load-bearing strength of the product, but it also can blunt cracks and thereby reduce susceptibility to thermal shock. However, in cases where a refractory will come into contact with corrosive substances (for example, in glass-melting furnaces), a porous structure is undesirable. The ceramic material can then be made with a higher density, incorporating smaller amounts of pores.

Composition and processing

The composition and processing of ceramic refractories vary widely according to the application and the type of refractory. Most refractories can be classified on the basis of composition as either clay-based or nonclay-based. In addition, they can be classified as either acidic (containing silica [SiO2] or zirconia [ZrO2]) or basic (containing alumina [Al2O3] or alkaline-earth oxides such as lime [CaO] or magnesia [MgO]). Among the clay-based refractories are fireclay, high-alumina, and mullite ceramics. There is a wide range of nonclay refractories, including basic, extra-high alumina, silica, silicon carbide, and zircon materials. Most clay-based products are processed in a manner similar to other traditional ceramics such as structural clay products; e.g., stiff-mud processes such as press forming or extrusion are employed to form the ware, which is subsequently dried and passed through long tunnel kilns for firing. Firing, as described in the article traditional ceramics, induces partial vitrification, or glass formation, which is a liquid-sintering process that binds particles together. Nonclay-based refractories, on the other hand, are bonded using techniques reserved for advanced ceramic materials. For instance, extra-high alumina and zircon ceramics are bonded by transient-liquid or solid-state sintering, basic bricks are bonded by chemical reactions between constituents, and silicon carbide is reaction-bonded from silica sand and coke. These processes are described in the article advanced ceramics.

Clay-based refractories

In this section the composition and properties of the clay-based refractories are described. Most are produced as preformed brick. Much of the remaining products are so-called monolithics, materials that can be formed and solidified on-site. This category includes mortars for cementing bricks and mixes for ramming or gunning (spraying from a pressure gun) into place. In addition, lightweight refractory insulation can be made in the form of fibreboards, blankets, and vacuum-cast shapes.

Fireclay

The workhorses of the clay-based refractories are the so-called fireclay materials. These are made from clays containing the aluminosilicate mineral kaolinite (Al2[Si2O5][OH]4) plus impurities such as alkalis and iron oxides. The alumina content ranges from 25 to 45 percent. Depending upon the impurity content and the alumina-to-silica ratio, fireclays are classified as low-duty, medium-duty, high-duty, and super-duty, with use temperature rising as alumina content increases. Fireclay bricks, or firebricks, exhibit relatively low expansion upon heating and are therefore moderately resistant to thermal shock. They are fairly inert in acidic environments but are quite reactive in basic environments. Fireclay bricks are used to line portions of the interiors of blast furnaces, blast-furnace stoves, and coke ovens.

High alumina

High-alumina refractories are made from bauxite, a naturally occurring material containing aluminum hydroxide (Al[OH]3) and kaolinitic clays. These raw materials are roasted to produce a mixture of synthetic alumina and mullite (an aluminosilicate mineral with the chemical formula 3Al2O3 · 2SiO2). By definition high-alumina refractories contain between 50 and 87.5 percent alumina. They are much more robust than fireclay refractories at high temperatures and in basic environments. In addition, they exhibit better volume stability and abrasion resistance. High-alumina bricks are used in blast furnaces, blast-furnace stoves, and liquid-steel ladles.

Mullite

Mullite is an aluminosilicate compound with the specific formula 3Al2O3 · 2SiO2 and an alumina content of approximately 70 percent. It has a melting point of 1,850° C (3,360° F). Various clays are mixed with bauxite in order to achieve this composition. Mullite refractories are solidified by sintering in electric furnaces at high temperatures. They are the most stable of the aluminosilicate refractories and have excellent resistance to high-temperature loading. Mullite bricks are used in blast-furnace stoves and in the forehearth roofs of glass-melting furnaces.

Non-clay-based refractories

Nonclay refractories such as those described below are produced almost exclusively as bricks and pressed shapes, though some magnesite-chrome and alumina materials are fuse-cast into molds. The usual starting materials for these products are carbonates or oxides of metals such as magnesium, aluminum, and zirconium.

Basic

Basic refractories include magnesia, dolomite, chrome, and combinations of these materials. Magnesia brick is made from periclase, the mineral form of magnesia (MgO). Periclase is produced from magnesite (a magnesium carbonate, MgCO3), or it is produced from magnesium hydroxide (Mg[OH]2), which in turn is derived from seawater or underground brine solutions. Magnesia bricks can be chemically bonded, pitch-bonded, burned, or burned and then pitch-impregnated.

Dolomite refractories take their name from the dolomite ore, a combination of calcium and magnesium carbonates (CaCO3 · MgCO3), from which they are produced. After burning they must be impregnated with tar or pitch to prevent rehydration of lime (CaO). Chrome brick is made from chromium ores, which are complex solid solutions of the spinel type (a series of oxide minerals including chromite and magnetite) plus silicate gangue, or impurity phases.

All the basic refractories exhibit outstanding resistance to iron oxides and the basic slags associated with steelmaking—especially when they incorporate carbon additions either as flakes or as residual carbon from pitch-bonding or tar-impregnation. For this reason they find wide employment in the linings of basic oxygen furnaces, electric furnaces, and open-hearth furnaces. They also are used to line the insides of copper converters.

Extra-high alumina

Extra-high alumina refractories are classified as having between 87.5 and 100 percent Al2O3 content. The alumina grains are fused or densely sintered together to obtain high density. Extra-high alumina refractories exhibit excellent volume stability to over 1,800° C (3,275° F).

Silica

Silica refractories are made from quartzites and silica gravel deposits with low alumina and alkali contents. They are chemically bonded with 3–3.5 percent lime. Silica refractories have good load resistance at high temperatures, are abrasion-resistant, and are particularly suited to containing acidic slags. Of the various grades—coke-oven quality, conventional, and super-duty—the super-duty, which has particularly low impurity contents, is used in the superstructures of glass-melting furnaces.

Zircon

Refractories made of zircon (a zirconium silicate, ZrSiO4) also are used in glass tanks because of their good resistance to the corrosive action of molten glasses. They possess good volume stability for extended periods at elevated temperatures, and they also show good creep resistance (i.e., low deformation under hot loading).

Silicon carbide

Silicon carbide (SiC) ceramics are made by a process referred to as reaction bonding, invented by the American Edward G. Acheson in 1891. In the Acheson process, pure silica sand and finely divided carbon (coke) are reacted in an electric furnace at temperatures in the range of 2,200°–2,480° C (4,000°–4,500° F). SiC ceramics have outstanding high-temperature load-bearing strength and dimensional stability. They also exhibit great thermal shock resistance because of their high thermal conductivity. (In this case, high thermal conductivity prevents the formation of extreme temperature differences between inner and outer layers of a material, which frequently are a source of thermal expansion stresses.) Therefore, SiC makes good kiln furniture for supporting other ceramics during their firing.

Other non-clay-based refractories

Other refractories produced in smaller quantities for special applications include graphite (a layered, multicrystalline form of carbon), zirconia (ZrO2), forsterite (Mg2SiO4), and combinations such as magnesia-alumina, magnesite-chrome, chrome-alumina, and alumina-zirconia-silica. Alumina-zirconia-silica (AZS), which is melted and cast into molds or directly into the melting tanks of glass furnaces, is an excellent corrosion-resistant refractory that does not release impurities into the glass melt. AZS is also poured to make tank blocks (also called soldier blocks or sidewall blocks) used in the construction and repair of glass furnaces.


#16 Science HQ » Reflection » Yesterday 16:44:00

Jai Ganesh
Replies: 0

Reflection

Gist

Reflection is the phenomenon in which waves, such as light or sound, bounce off a surface and return to their original medium, creating an image or an echo; for smooth surfaces, such as mirrors or still water, the angle of incidence (incoming angle) equals the angle of reflection (outgoing angle). The word also describes a "flip" in mathematics that creates a mirror image, and in everyday usage it can mean thoughtful consideration or something that is indicative of something else.

Reflection of light is the process in which light rays strike a surface (typically a smooth, shiny one such as a mirror) and bounce back into the original medium, enabling vision and image formation. It follows two key laws: the angle of incidence equals the angle of reflection, and the incident ray, the reflected ray, and the normal all lie in the same plane.

Summary

Reflection is an abrupt change in the direction of propagation of a wave that strikes the boundary between different mediums. At least part of the oncoming wave disturbance remains in the same medium. Regular reflection, which follows a simple law, occurs at plane boundaries. The angle between the direction of motion of the oncoming wave and a perpendicular to the reflecting surface (angle of incidence) is equal to the angle between the direction of motion of the reflected wave and a perpendicular (angle of reflection). Reflection at rough, or irregular, boundaries is diffuse. The reflectivity of a surface material is the fraction of energy of the oncoming wave that is reflected by it.

Details

Reflection is the change in direction of a wavefront at an interface between two different media so that the wavefront returns into the medium from which it originated. Common examples include the reflection of light, sound and water waves. The law of reflection says that for specular reflection (for example at a mirror) the angle at which the wave is incident on the surface equals the angle at which it is reflected.

In acoustics, reflection causes echoes and is used in sonar. In geology, it is important in the study of seismic waves. Reflection is observed with surface waves in bodies of water. Reflection is observed with many types of electromagnetic wave, besides visible light. Reflection of VHF and higher frequencies is important for radio transmission and for radar. Even hard X-rays and gamma rays can be reflected at shallow angles with special "grazing" mirrors.

Reflection of Light

A mirror provides the most common model for specular light reflection, and typically consists of a glass sheet with a metallic coating where the significant reflection occurs. Reflection is enhanced in metals by suppression of wave propagation beyond their skin depths. Reflection also occurs at the surface of transparent media, such as water or glass, although the reflection is generally less effective compared with mirrors.

In fact, reflection of light may occur whenever light travels from a medium of a given refractive index into a medium with a different refractive index. In the most general case, a certain fraction of the light is reflected from the interface, and the remainder is refracted. Solving Maxwell's equations for a light ray striking a boundary allows the derivation of the Fresnel equations, which can be used to predict how much of the light is reflected, and how much is refracted in a given situation. This is analogous to the way impedance mismatch in an electric circuit causes reflection of signals. Total internal reflection of light from a denser medium occurs if the angle of incidence is greater than the critical angle.
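
As a rough numerical illustration of what the Fresnel equations give (a sketch under the simplifying assumption of normal incidence, where the reflectance reduces to R = ((n1 - n2)/(n1 + n2))^2; the refractive indices below are assumed example values):

# Sketch: fraction of light reflected at normal incidence, using the simplified
# Fresnel result R = ((n1 - n2)/(n1 + n2))**2. Indices are assumed example values.
def normal_incidence_reflectance(n1, n2):
    return ((n1 - n2) / (n1 + n2)) ** 2

print(normal_incidence_reflectance(1.00, 1.52))  # air to glass: about 0.043 (4.3 %)
print(normal_incidence_reflectance(1.00, 1.33))  # air to water: about 0.020 (2.0 %)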

Total internal reflection is used as a means of focusing waves that cannot effectively be reflected by common means. X-ray telescopes are constructed by creating a converging "tunnel" for the waves. As the waves interact at low angle with the surface of this tunnel they are reflected toward the focus point (or toward another interaction with the tunnel surface, eventually being directed to the detector at the focus). A conventional reflector would be useless as the X-rays would simply pass through the intended reflector.

When light reflects off a material with a higher refractive index than the medium in which it is traveling, it undergoes a 180° phase shift. In contrast, when light reflects off a material with a lower refractive index, the reflected light is in phase with the incident light. This is an important principle in the field of thin-film optics.

Specular reflection forms images. Reflection from a flat surface forms a mirror image, which appears to be reversed from left to right because we compare the image we see to what we would see if we were rotated into the position of the image. Specular reflection at a curved surface forms an image which may be magnified or demagnified; curved mirrors have optical power. Such mirrors may have surfaces that are spherical or parabolic.

Laws of reflection

If the reflecting surface is very smooth, the reflection of light that occurs is called specular or regular reflection. The laws of reflection are as follows:

1) The incident ray, the reflected ray and the normal to the reflecting surface at the point of incidence lie in the same plane.
2) The angle which the incident ray makes with the normal is equal to the angle which the reflected ray makes with the same normal.
3) The reflected ray and the incident ray are on opposite sides of the normal.

These three laws can all be derived from the Fresnel equations.
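
One compact way to apply these laws numerically (a sketch, not from the article) is the vector form r = d - 2(d·n)n, where d is the incident direction and n the unit surface normal; the equal angles and the common plane both follow from this single expression.

# Sketch: law of reflection in vector form, r = d - 2*(d . n)*n.
# The incident direction and surface normal below are arbitrary example values.
def reflect(d, n):
    dot = sum(di * ni for di, ni in zip(d, n))
    return tuple(di - 2 * dot * ni for di, ni in zip(d, n))

incident = (1.0, -1.0, 0.0)   # ray travelling down and to the right
normal   = (0.0, 1.0, 0.0)    # unit normal of a horizontal mirror
print(reflect(incident, normal))  # -> (1.0, 1.0, 0.0): same angle, other side of the normal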

Mechanism

In classical electrodynamics, light is considered as an electromagnetic wave, which is described by Maxwell's equations. Light waves incident on a material induce small oscillations of polarisation in the individual atoms (or oscillation of electrons, in metals), causing each particle to radiate a small secondary wave in all directions, like a dipole antenna. All these waves add up to give specular reflection and refraction, according to the Huygens–Fresnel principle.

In the case of dielectrics such as glass, the electric field of the light acts on the electrons in the material, and the moving electrons generate fields and become new radiators. The refracted light in the glass is the combination of the forward radiation of the electrons and the incident light. The reflected light is the combination of the backward radiation of all of the electrons.

In metals, electrons with no binding energy are called free electrons. When these electrons oscillate with the incident light, the phase difference between their radiation field and the incident field is π radians (180°), so the forward radiation cancels the incident light, and backward radiation is just the reflected light.

Light–matter interaction in terms of photons is a topic of quantum electrodynamics, and is described in detail by Richard Feynman in his popular book QED: The Strange Theory of Light and Matter.

Diffuse reflection

When light strikes the surface of a (non-metallic) material it bounces off in all directions due to multiple reflections by the microscopic irregularities inside the material (e.g. the grain boundaries of a polycrystalline material, or the cell or fiber boundaries of an organic material) and by its surface, if it is rough. Thus, an 'image' is not formed. This is called diffuse reflection. The exact form of the reflection depends on the structure of the material. One common model for diffuse reflection is Lambertian reflectance, in which the light is reflected with equal luminance (in photometry) or radiance (in radiometry) in all directions, as defined by Lambert's cosine law.
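
A minimal sketch of the Lambertian model mentioned above, in which the reflected intensity scales with the cosine of the angle of incidence (the albedo value is an assumption chosen for illustration):

# Sketch: Lambert's cosine law for diffuse reflection.
# reflected intensity ~ albedo * max(0, cos(angle between normal and light direction))
import math

def lambertian(albedo, incidence_angle_deg):
    return albedo * max(0.0, math.cos(math.radians(incidence_angle_deg)))

for angle in (0, 30, 60, 90):
    print(angle, round(lambertian(0.8, angle), 3))   # albedo 0.8 is an assumed value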

The light sent to our eyes by most of the objects we see is due to diffuse reflection from their surface, so that this is our primary mechanism of physical observation.

Retroreflection

Some surfaces exhibit retroreflection. The structure of these surfaces is such that light is returned in the direction from which it came.

When flying over clouds illuminated by sunlight, the region seen around the aircraft's shadow will appear brighter, and a similar effect may be seen from dew on grass. This partial retroreflection is created by the refractive properties of the curved droplet surface and the reflective properties at the back of the droplet.

Some animals' retinas act as retroreflectors (see tapetum lucidum for more detail), as this effectively improves the animals' night vision. Since the lenses of their eyes modify reciprocally the paths of the incoming and outgoing light the effect is that the eyes act as a strong retroreflector, sometimes seen at night when walking in wildlands with a flashlight.

A simple retroreflector can be made by placing three ordinary mirrors mutually perpendicular to one another (a corner reflector). The image produced is the inverse of one produced by a single mirror. A surface can be made partially retroreflective by depositing a layer of tiny refractive spheres on it or by creating small pyramid like structures. In both cases internal reflection causes the light to be reflected back to where it originated. This is used to make traffic signs and automobile license plates reflect light mostly back in the direction from which it came. In this application perfect retroreflection is not desired, since the light would then be directed back into the headlights of an oncoming car rather than to the driver's eyes.

Multiple reflections

When light reflects off a mirror, one image appears. Two mirrors placed exactly face to face give the appearance of an infinite number of images along a straight line. The multiple images seen between two mirrors that sit at an angle to each other lie on a circle whose center is located at the imaginary intersection of the mirrors. A square of four mirrors placed face to face gives the appearance of an infinite number of images arranged in a plane. The multiple images seen between four mirrors assembled into a pyramid, in which each pair of mirrors sits at an angle to the other, lie on a sphere. If the base of the pyramid is rectangular, the images spread over a section of a torus.
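
A standard counting rule, consistent with the circular arrangement described above though not stated in the article, is that two plane mirrors meeting at an angle θ produce 360°/θ - 1 images whenever 360°/θ is an integer; the face-to-face case corresponds to θ approaching 0 and an unbounded number of images. A small sketch:

# Sketch: standard image-count rule for two plane mirrors at an angle
# (number of images = 360/theta - 1 when 360/theta is an integer).
def image_count(theta_deg):
    ratio = 360 / theta_deg
    return int(ratio) - 1 if ratio == int(ratio) else None  # None: count depends on object position

for theta in (90, 60, 45, 30):
    print(theta, image_count(theta))   # 90 -> 3, 60 -> 5, 45 -> 7, 30 -> 11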

Note that these are theoretical ideals, requiring perfect alignment of perfectly smooth, perfectly flat perfect reflectors that absorb none of the light. In practice, these situations can only be approached but not achieved because the effects of any surface imperfections in the reflectors propagate and magnify, absorption gradually extinguishes the image, and any observing equipment (biological or technological) will interfere.

Complex conjugate reflection

In this process (which is also known as phase conjugation), light bounces exactly back in the direction from which it came due to a nonlinear optical process. Not only the direction of the light is reversed, but the actual wavefronts are reversed as well. A conjugate reflector can be used to remove aberrations from a beam by reflecting it and then passing the reflection through the aberrating optics a second time. If one were to look into a complex conjugating mirror, it would be black because only the photons which left the pupil would reach the pupil.

Other types of reflection:

Neutron reflection

Materials that reflect neutrons, for example beryllium, are used in nuclear reactors and nuclear weapons. In the physical and biological sciences, the reflection of neutrons off atoms within a material is commonly used to determine the material's internal structure.

Sound reflection

When a longitudinal sound wave strikes a flat surface, sound is reflected in a coherent manner provided that the dimension of the reflective surface is large compared to the wavelength of the sound. Note that audible sound has a very wide frequency range (from 20 to about 17000 Hz), and thus a very wide range of wavelengths (from about 20 mm to 17 m). As a result, the overall nature of the reflection varies according to the texture and structure of the surface. For example, porous materials will absorb some energy, and rough materials (where rough is relative to the wavelength) tend to reflect in many directions—to scatter the energy, rather than to reflect it coherently. This leads into the field of architectural acoustics, because the nature of these reflections is critical to the auditory feel of a space. In the theory of exterior noise mitigation, reflective surface size mildly detracts from the concept of a noise barrier by reflecting some of the sound into the opposite direction. Sound reflection can affect the acoustic space.

Seismic reflection

Seismic waves produced by earthquakes or other sources (such as explosions) may be reflected by layers within the Earth. Study of the deep reflections of waves generated by earthquakes has allowed seismologists to determine the layered structure of the Earth. Shallower reflections are used in reflection seismology to study the Earth's crust generally, and in particular to prospect for petroleum and natural gas deposits.

Time reflections

Scientists have speculated that there could be time reflections. Scientists from the Advanced Science Research Center at the CUNY Graduate Center report that they observed time reflections by sending broadband signals into a strip of metamaterial filled with electronic switches. The "time reflections" in electromagnetic waves are discussed in a 2023 paper published in the journal Nature Physics.

Additional Information:

Introduction to the Reflection of Light

Light reflection occurs when a ray of light bounces off a surface and changes direction. From a detailed definition of ‘reflection of light’ to the different types of reflection and example images, our introductory article tells you everything you need to know about the reflection of light.

What is Reflection of Light?

Reflection of light (and other forms of electromagnetic radiation) occurs when the waves encounter a surface or other boundary that does not absorb the energy of the radiation and bounces the waves away from the surface.

Reflection of Light Example

The simplest example of visible light reflection is the surface of a smooth pool of water, where incident light is reflected in an orderly manner to produce a clear image of the scenery surrounding the pool. Throw a rock into the pool, and the water is perturbed to form waves, which disrupt the reflection by scattering the reflected light rays in all directions.

Who Discovered the Reflection of Light?

Some of the earliest accounts of light reflection originate from the ancient Greek mathematician Euclid, who conducted a series of experiments around 300 BC, and appears to have had a good understanding of how light is reflected. However, it wasn’t until a millennium and a half later that the Arab scientist Alhazen proposed a law describing exactly what happens to a light ray when it strikes a smooth surface and then bounces off into space.

The incoming light wave is referred to as an incident wave, and the wave that is bounced away from the surface is termed the reflected wave. Visible white light that is directed onto the surface of a mirror at an angle (incident) is reflected back into space by the mirror surface at another angle (reflected) that is equal to the incident angle, as presented for the action of a beam of light from a flashlight on a smooth, flat mirror. Thus, the angle of incidence is equal to the angle of reflection for visible light as well as for all other wavelengths of the electromagnetic radiation spectrum. This concept is often termed the Law of Reflection. It is important to note that the light is not separated into its component colors because it is not being “bent” or refracted, and all wavelengths are being reflected at equal angles. The best surfaces for reflecting light are very smooth, such as a glass mirror or polished metal, although almost all surfaces will reflect light to some degree.

Because light behaves in some ways as a wave and in other ways as if it were composed of particles, several independent theories of light reflection have emerged. According to wave-based theories, the light waves spread out from the source in all directions, and upon striking a mirror, are reflected at an angle determined by the angle at which the light arrives. The reflection process inverts each wave back-to-front, which is why a reverse image is observed. The shape of light waves depends upon the size of the light source and how far the waves have traveled to reach the mirror. Wavefronts that originate from a source near the mirror will be highly curved, while those emitted by distant light sources will be almost linear, a factor that will affect the angle of reflection.

According to particle theory, which differs in some important details from the wave concept, light arrives at the mirror in the form of a stream of tiny particles, termed photons, which bounce away from the surface upon impact. Because the particles are so small, they travel very close together (virtually side by side) and bounce from different points, so their order is reversed by the reflection process, producing a mirror image. Regardless of whether light is acting as particles or waves, the result of reflection is the same. The reflected light produces a mirror image.

The amount of light reflected by an object, and how it is reflected, is highly dependent upon the degree of smoothness or texture of the surface. When surface imperfections are smaller than the wavelength of the incident light (as in the case of a mirror), virtually all of the light is reflected equally. However, in the real world most objects have convoluted surfaces that exhibit a diffuse reflection, with the incident light being reflected in all directions. Many of the objects that we casually view every day (people, cars, houses, animals, trees, etc.) do not themselves emit visible light but reflect incident natural sunlight and artificial light. For instance, an apple appears a shiny red color because it has a relatively smooth surface that reflects red light and absorbs other non-red (such as green, blue, and yellow) wavelengths of light.

How Many Types of Reflection of Light Are There?

The reflection of light can be roughly categorized into two types of reflection. Specular reflection is defined as light reflected from a smooth surface at a definite angle, whereas diffuse reflection is produced by rough surfaces that tend to reflect light in all directions. There are far more occurrences of diffuse reflection than specular reflection in our everyday environment.

To visualize the differences between specular and diffuse reflection, consider two very different surfaces: a smooth mirror and a rough reddish surface. The mirror reflects all of the components of white light (such as red, green, and blue wavelengths) almost equally and the reflected specular light follows a trajectory having the same angle from the normal as the incident light. The rough reddish surface, however, does not reflect all wavelengths because it absorbs most of the blue and green components, and reflects the red light. Also, the diffuse light that is reflected from the rough surface is scattered in all directions.

How Do Mirrors Reflect Light?

Perhaps the best example of specular reflection, which we encounter on a daily basis, is the mirror image produced by a household mirror that people might use many times a day to view their appearance. The mirror's smooth reflective glass surface renders a virtual image of the observer from the light that is reflected directly back into the eyes. This image is referred to as "virtual" because it does not actually exist (no light is produced) and appears to be behind the plane of the mirror due to an assumption that the brain naturally makes. The way in which this occurs is easiest to visualize when looking at the reflection of an object placed on one side of the observer, so that the light from the object strikes the mirror at an angle and is reflected toward the observer's eyes at an equal angle.

The type of reflection that is seen in a mirror depends upon the mirror’s shape and, in some cases, how far away from the mirror the object being reflected is positioned. Mirrors are not always flat and can be produced in a variety of configurations that provide interesting and useful reflection characteristics. Concave mirrors, commonly found in the largest optical telescopes, are used to collect the faint light emitted from very distant stars. The curved surface concentrates parallel rays from a great distance into a single point for enhanced intensity. This mirror design is also commonly found in shaving or cosmetic mirrors where the reflected light produces a magnified image of the face. The inside of a shiny spoon is a common example of a concave mirror surface, and can be used to demonstrate some properties of this mirror type. If the inside of the spoon is held close to the eye, a magnified upright view of the eye will be seen (in this case the eye is closer than the focal point of the mirror). If the spoon is moved farther away, a demagnified upside-down view of the whole face will be seen. Here the image is inverted because it is formed after the reflected rays have crossed the focal point of the mirror surface.

Another common mirror having a curved-surface, the convex mirror, is often used in automobile rear-view reflector applications where the outward mirror curvature produces a smaller, more panoramic view of events occurring behind the vehicle. When parallel rays strike the surface of a convex mirror, the light waves are reflected outward so that they diverge. When the brain retraces the rays, they appear to come from behind the mirror where they would converge, producing a smaller upright image (the image is upright since the virtual image is formed before the rays have crossed the focal point). Convex mirrors are also used as wide-angle mirrors in hallways and businesses for security and safety. The most amusing applications for curved mirrors are the novelty mirrors found at state fairs, carnivals, and fun houses. These mirrors often incorporate a mixture of concave and convex surfaces, or surfaces that gently change curvature, to produce bizarre, distorted reflections when people observe themselves.

The concave mirror has a reflecting surface that curves inward, resembling a portion of the interior of a sphere. When light rays that are parallel to the principal or optical axis reflect from the surface of a concave mirror, they converge on the focal point in front of the mirror. The distance from the reflecting surface to the focal point is known as the mirror's focal length. The size of the image depends upon the distance of the object from the mirror and its position with respect to the mirror surface. When the object is placed beyond the center of curvature, the reflected image is upside down and positioned between the mirror's center of curvature and its focal point.

The convex mirror has a reflecting surface that curves outward, resembling a portion of the exterior of a sphere. Light rays parallel to the optical axis are reflected from the surface in a direction that diverges from the focal point, which is behind the mirror. Images formed with convex mirrors are always right side up and reduced in size. These images are also termed virtual images, because they occur where reflected rays appear to diverge from a focal point behind the mirror.
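
The image positions described for these two mirror shapes follow from the standard spherical-mirror relation 1/f = 1/d_o + 1/d_i with f roughly equal to half the radius of curvature (a sketch, not part of the article; the focal length and object distances used are arbitrary example values):

# Sketch: spherical-mirror relation 1/f = 1/d_o + 1/d_i (with f = R/2).
# Positive d_i means a real, inverted image in front of the mirror; negative d_i
# means a virtual, upright image behind it. Distances are assumed examples, in metres.
def image_distance(focal_length, object_distance):
    return 1.0 / (1.0 / focal_length - 1.0 / object_distance)

f = 0.10                                   # concave mirror, 10 cm focal length
print(image_distance(f, 0.30))             # object beyond C: real, inverted image at 0.15 m
print(image_distance(-f, 0.30))            # convex mirror (f < 0): virtual image at -0.075 m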

Total Internal Reflection of Light

The principle of total internal reflection is the basis for fiber optic light transmission that makes possible medical procedures such as endoscopy, telephone voice transmissions encoded as light pulses, and devices such as fiber optic illuminators that are widely used in microscopy and other tasks requiring precision lighting effects. The prisms employed in binoculars and in single-lens reflex cameras also utilize total internal reflection to direct images through several 90-degree angles and into the user’s eye. In the case of fiber optic transmission, light entering one end of the fiber is reflected internally numerous times from the wall of the fiber as it zigzags toward the other end, with none of the light escaping through the thin fiber walls. This method of “piping” light can be maintained for long distances and with numerous turns along the path of the fiber.

Total internal reflection is only possible under certain conditions. The light is required to travel in a medium that has relatively high refractive index, and this value must be higher than that of the surrounding medium. Water, glass, and many plastics are therefore suitable for use when they are surrounded by air. If the materials are chosen appropriately, reflections of the light inside the fiber or light pipe will occur at a shallow angle to the inner surface, and all light will be totally contained within the pipe until it exits at the far end. At the entrance to the optic fiber, however, the light must strike the end at a high incidence angle in order to travel across the boundary and into the fiber.
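
A minimal sketch of the condition described here, using Snell's law (sin θ_c = n2/n1 for light travelling from the denser medium n1 toward the rarer medium n2); the refractive indices below are assumed example values:

# Sketch: critical angle for total internal reflection, sin(theta_c) = n2/n1.
import math

def critical_angle_deg(n_core, n_outside):
    if n_core <= n_outside:
        return None   # total internal reflection is impossible in this case
    return math.degrees(math.asin(n_outside / n_core))

print(critical_angle_deg(1.52, 1.00))  # glass surrounded by air: about 41 degrees
print(critical_angle_deg(1.47, 1.44))  # assumed fibre core/cladding values: about 78 degrees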

The principles of reflection are exploited to great benefit in many optical instruments and devices, and this often includes the application of various mechanisms to reduce reflections from surfaces that take part in image formation. The concept behind antireflection technology is to control the light used in an optical device in such a manner that the light rays reflect from surfaces where it is intended and beneficial, and do not reflect away from surfaces where this would have a deleterious effect on the image being observed. One of the most significant advances made in modern lens design, whether for microscopes, cameras, or other optical devices, is the improvement in antireflection coating technology.


#17 Dark Discussions at Cafe Infinity » Combined Quotes - I » Yesterday 15:52:46

Jai Ganesh
Replies: 0

Combined Quotes - I

1. Your positive action combined with positive thinking results in success. - Shiv Khera

2. All the armies of Europe, Asia and Africa combined, with all the treasure of the earth (our own excepted) in their military chest; with a Buonaparte for a commander, could not by force, take a drink from the Ohio, or make a track on the Blue Ridge, in a trial of a thousand years. - Abraham Lincoln

3. A multitude of causes unknown to former times are now acting with a combined force to blunt the discriminating powers of the mind, and unfitting it for all voluntary exertion to reduce it to a state of almost savage torpor. - William Wordsworth

4. I always tell people that religious institutions and political institutions should be separate. So while I'm telling people this, I myself continue with them combined. Hypocrisy! - Dalai Lama

5. I love playing ego and insecurity combined. - Jim Carrey

6. Talent and effort, combined with our various backgrounds and life experiences, has always been the lifeblood of our singular American genius. - Michelle Obama

7. Rapid population growth and technological innovation, combined with our lack of understanding about how the natural systems of which we are a part work, have created a mess. - David Suzuki

8. A life of stasis would be population control, combined with energy rationing. That is the stasis world that you live in if you stay. And even with improvements in efficiency, you'll still have to ration energy. That, to me, doesn't sound like a very exciting civilization for our grandchildren's grandchildren to live in. - Jeff Bezos.

#18 Jokes » Corn Jokes - I » Yesterday 15:30:08

Jai Ganesh
Replies: 0

Q: Why didn't anyone laugh at the gardener's jokes?
A: Because they were too corny!
* * *
Q: How did the tomato court the corn?
A: He whispered sweet nothings into her ear.
* * *
Q: What did the corn say when he got complimented?
A: Aww, shucks!
* * *
Q: What does Chuck Norris do when he wants popcorn?
A: He breathes on Nebraska!
* * *
Q: What do you tell a vegetable after it graduates from College?
A: Corn-gratulations.
* * *

#19 Re: Jai Ganesh's Puzzles » General Quiz » Yesterday 15:17:55

Hi,

#10737. What does the term in Geography Cryosphere mean?

#10738. What does the term in Geography Cryoturbation mean?

#20 Re: Jai Ganesh's Puzzles » English language puzzles » Yesterday 15:06:30

Hi,

#5933. What does the adjective inflexible mean?

#5934. What does the verb (used with object) inflict mean?

#21 Re: Jai Ganesh's Puzzles » Doc, Doc! » Yesterday 14:56:29

Hi,

#2563. What does the medical term Hallux rigidus mean?

#25 This is Cool » X-ray » 2026-02-07 20:05:12

Jai Ganesh
Replies: 0

X-ray

Gist

An X-ray is a quick, painless test that captures images of the structures inside the body — particularly the bones. X-ray beams pass through the body. These beams are absorbed in different amounts depending on the density of the material they pass through.

The full name for "X-ray" is X-radiation, referring to its nature as a form of high-energy electromagnetic radiation, with the 'X' signifying its unknown nature when first discovered by Wilhelm Conrad Röntgen in 1895.

In many languages, it's also called Röntgen radiation, honoring its discoverer. Through experimentation, he found that the mysterious rays would pass through most substances but leave shadows of solid objects. Because he did not know what the rays were, he called them 'X' rays, with 'X' meaning 'unknown.'

Summary

An X-ray is a form of high-energy electromagnetic radiation with a wavelength shorter than those of ultraviolet rays and longer than those of gamma rays. Roughly, X-rays have a wavelength ranging from 10 nanometers to 10 picometers, corresponding to frequencies in the range of 30 petahertz to 30 exahertz (3×10^16 Hz to 3×10^19 Hz) and photon energies in the range of 100 eV to 100 keV, respectively.
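
As a quick consistency check of the quoted ranges (a sketch that simply evaluates E = hc/λ; the two wavelength endpoints work out to roughly 124 eV and 124 keV, in line with the approximate 100 eV to 100 keV figures):

# Sketch: photon energy E = h*c/lambda for the wavelength limits quoted above.
PLANCK = 6.62607015e-34      # J*s
C      = 299_792_458.0       # m/s
EV     = 1.602176634e-19     # J per electronvolt

for wavelength_m in (10e-9, 10e-12):            # 10 nm and 10 pm
    energy_ev = PLANCK * C / wavelength_m / EV
    print(f"{wavelength_m:.0e} m  ->  {energy_ev:.0f} eV")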

X-rays were discovered in 1895 by the German scientist Wilhelm Conrad Röntgen, who named it X-radiation to signify an unknown type of radiation.

X-rays can penetrate many solid substances such as construction materials and living tissue, so X-ray radiography is widely used in medical diagnostics (e.g., checking for broken bones) and materials science (e.g., identification of some chemical elements and detecting weak points in construction materials). However, X-rays are ionizing radiation, and exposure can be hazardous to health, causing DNA damage, cancer and, at higher intensities, burns and radiation sickness. Their generation and use are strictly controlled by public health authorities.

Details

X-rays are a way for healthcare providers to get pictures of the inside of your body. X-rays use radiation to create black-and-white images that a radiologist reads. Then, they send a report to your provider. X-rays are mostly known for looking at bones and joints. But providers can use them to diagnose other conditions, too.

Overview:

What is an X-ray?

An X-ray is a type of medical imaging that uses radiation to take pictures of the inside of your body. We often think of an X-ray as something that checks for broken bones. But X-ray images can help providers diagnose other injuries and diseases, too.

Many people think of X-rays as black-and-white, two-dimensional images. But modern X-ray technology is often combined with other technologies to make more advanced types of images.

Types of X-rays

Some specific imaging tests that use X-rays are:

* Bone density (DXA) scan: This test captures X-ray images while also checking the strength and mineral content of your bones.
* CT scan (computed tomography): CT scans use X-ray and computers to create 3D images of the inside of your body.
* Dental X-ray: A dental provider takes X-rays of your mouth to look for cavities or issues with your gums.
* Fluoroscopy: This test uses a series of X-rays to show the inside of your body in real time. Providers use it to help diagnose issues with specific body parts. They also use it to help guide certain procedures, like an angiogram.
* Mammogram: This is a special X-ray of your breasts that shows irregularities that could lead to breast cancer.

X-rays can help healthcare providers diagnose various conditions in your body. Some of the most common areas on your body to get an X-ray are:

* Abdominal X-ray: This X-ray helps providers evaluate parts of your digestive system and diagnose conditions like kidney stones and bladder stones.
* Bone X-ray: You might get a bone X-ray if your provider suspects you have a broken bone, dislocated joint or arthritis. Images from bone X-rays can also show signs of bone cancer or infection.
* Chest X-ray: Your provider might order a chest X-ray if you have symptoms like chest pain, shortness of breath or a cough. It can look for signs of infection in your lungs or congestive heart failure.
* Head X-ray: These can help providers see skull fractures from head injuries or conditions that affect how the bones in your skull form, like craniosynostosis.
* Spine X-ray: A provider can use a spine X-ray to look for arthritis and scoliosis.

Test Details:

How do X-rays work?

X-rays work by sending beams of radiation through your body to create images on an X-ray detector nearby. Radiation beams are invisible, and you can’t feel them.

As the beams go through your body, bones, soft tissues and other structures absorb radiation in different ways:

* Solid or dense tissues (like bones and tumors) absorb radiation easily, so they appear bright white on the image.
* Soft tissues (like organs, muscle and fat) don’t absorb radiation as easily, so they appear in shades of gray on the X-ray.

A radiologist interprets the image and writes a report for the physician who ordered the X-ray. They make note of anything that’s abnormal or concerning. Then, your healthcare provider shares the results with you.

How do I prepare?

Preparation for an X-ray depends on the type of X-ray you’re getting. Your provider may ask you to:

* Remove metal objects like jewelry, hairpins or hearing aids (metal can interfere with X-rays and make the results inaccurate)
* Wear comfortable clothing or change into a gown before the X-ray

Tell your healthcare provider about your health history, allergies and any medications you’re taking. Let them know if you’re pregnant or think you could be.

What can I expect during an X-ray?

The exact steps of an X-ray depend on the kind you’re getting. In general, your provider will follow these steps during an X-ray:

* They’ll ask you to sit, stand or lie down on a table. In the past, your provider may have covered you with a lead shield (apron), but newer evidence suggests that such shields aren’t necessary.
* Your provider will position the camera near the body part that they’re getting a picture of.
* Then, they’ll move your body or limbs in different positions and ask you to hold still. They may also ask you to hold your breath for a few seconds so the images aren’t blurry.

Sometimes, children can’t stay still long enough to produce clear images. Your child’s provider may recommend using a restraint during an X-ray. The restraint helps your child stay still and reduces the need for retakes. The restraints don’t hurt and won’t harm your child.

What happens after?

Most of the time, there aren’t any restrictions on what you can do after an X-ray. You can go back to your typical activities.

What are the risks or side effects of X-rays?

X-rays are safe and low risk.

X-rays use a safe and small amount of radiation — not much more than the naturally occurring radiation you get in your daily life. For instance, a dental X-ray exposes you to about the same amount of background radiation you’d get in one day.
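
As a rough back-of-the-envelope check (the figures below are commonly cited approximations, not numbers from this article), natural background radiation averages on the order of 3 mSv per year, while a single dental X-ray delivers roughly 0.005 mSv, which works out to about one day’s worth of background exposure:

# Back-of-the-envelope comparison using commonly cited approximate doses
# (illustrative figures only, not values from this article).
annual_background_msv = 3.0    # typical natural background radiation per year
dental_xray_msv = 0.005        # typical single dental (bitewing) X-ray

daily_background_msv = annual_background_msv / 365   # about 0.008 mSv per day
days_equivalent = dental_xray_msv / daily_background_msv

print(f"A dental X-ray is roughly {days_equivalent:.1f} day(s) of background radiation.")
# Prints about 0.6 day(s), i.e. on the order of a single day of natural exposure.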

X-ray radiation is usually safe for adults. But it can be harmful to a fetus. If you’re pregnant, your provider may choose another imaging test, like ultrasound.

Additional Information:

Overview

An X-ray is a quick, painless test that captures images of the structures inside the body — particularly the bones.

X-ray beams pass through the body. These beams are absorbed in different amounts depending on the density of the material they pass through. Dense materials, such as bone and metal, show up as white on X-rays. The air in the lungs shows up as black. Fat and muscle appear as shades of gray.

For some types of X-ray tests, a contrast medium — such as iodine or barium — is put into the body to get greater detail on the images.

Why it's done:

X-ray technology is used to examine many parts of the body.

Bones and teeth

* Fractures and infections. In most cases, fractures and infections in bones and teeth show up clearly on X-rays.
* Arthritis. X-rays of the joints can show evidence of arthritis. X-rays taken over the years can help your healthcare team tell if your arthritis is worsening.
* Dental decay. Dentists use X-rays to check for cavities in the teeth.
* Osteoporosis. Special types of X-ray tests can measure bone density.
* Bone cancer. X-rays can reveal bone tumors.

Chest

* Lung infections or conditions. Evidence of pneumonia, tuberculosis or lung cancer can show up on chest X-rays.
* Breast cancer. Mammography is a special type of X-ray test used to examine breast tissue.
* Enlarged heart. This sign of congestive heart failure shows up clearly on X-rays.
* Blocked blood vessels. Injecting a contrast material that contains iodine can help highlight sections of the circulatory system so they can be seen easily on X-rays.

Abdomen

* Digestive tract issues. Barium, a contrast medium delivered in a drink or an enema, can help show problems in the digestive system.
* Swallowed items. If a child has swallowed something such as a key or a coin, an X-ray can show the location of that object.

Risks:

Radiation exposure

Some people worry that X-rays aren't safe. This is because radiation exposure can cause cell changes that may lead to cancer. The amount of radiation you're exposed to during an X-ray depends on the tissue or organ being examined. Sensitivity to the radiation depends on your age, with children being more sensitive than adults.

Generally, however, radiation exposure from an X-ray is low, and the benefits from these tests far outweigh the risks.

If you’re pregnant or suspect that you may be pregnant, tell your healthcare team before having an X-ray. Though most diagnostic X-rays pose only a small risk to an unborn baby, your care team may decide to use another imaging test, such as ultrasound.

Contrast medium

In some people, the injection of a contrast medium can cause side effects such as:

* A feeling of warmth or flushing.
* A metallic taste.
* Lightheadedness.
* Nausea.
* Itching.
* Hives.

Rarely, severe reactions to a contrast medium occur, including:

* Very low blood pressure.
* Difficulty breathing.
* Swelling of the throat or other parts of the body.

How you prepare:

Different types of X-rays require different preparations. Ask your healthcare team to provide you with specific instructions.

What to wear

In general, you undress whatever part of your body needs examination. You may wear a gown during the exam depending on which area is being X-rayed. You also may be asked to remove jewelry, eyeglasses and any metal objects because they can show up on an X-ray.

Contrast material

Before having some types of X-rays, you’re given a liquid called a contrast medium. Contrast media, such as barium and iodine, help outline a specific area of your body on the X-ray image. You may swallow the contrast medium or receive it as an injection or an enema.

What you can expect:

During the X-ray

X-rays are performed at medical offices, dentists' offices, emergency rooms and hospitals — wherever an X-ray machine is available. The machine produces a safe level of radiation that passes through the body and records an image on a specialized plate. You can't feel an X-ray.

A technologist positions your body to get the necessary views. Pillows or sandbags may be used to help you hold the position. During the X-ray exposure, you remain still and sometimes hold your breath to avoid moving so that the image doesn't blur.

An X-ray procedure may take just a few minutes for a simple X-ray or longer for more-involved procedures, such as those using a contrast medium.

Your child's X-ray

If a young child is having an X-ray, restraints or other tools may be used to keep the child still. These won't harm the child and they prevent the need for a repeat procedure, which may be necessary if the child moves during the X-ray exposure.

You may be allowed to remain with your child during the test. If you remain in the room during the X-ray exposure, you'll likely be asked to wear a lead apron to shield you from unnecessary X-ray exposure.

After the X-ray

After an X-ray, you generally can resume usual activities. Routine X-rays usually have no side effects. However, if you're given contrast medium before your X-ray, drink plenty of fluids to help rid your body of the contrast. Call your healthcare team if you have pain, swelling or redness at the injection site. Ask your team about other symptoms to watch for.

Results:

X-rays are saved digitally on computers and can be viewed on-screen within minutes. A radiologist typically views and interprets the results and sends a report to a member of your healthcare team, who then explains the results to you. In an emergency, your X-ray results can be made available in minutes.
