Math Is Fun Forum

  Discussion about math, puzzles, games and fun.   Useful symbols: ÷ × ½ √ ∞ ≠ ≤ ≥ ≈ ⇒ ± ∈ Δ θ ∴ ∑ ∫ • π ƒ -¹ ² ³ °


#401 Dark Discussions at Cafe Infinity » Come Quotes - IV » 2026-02-14 16:38:49

Jai Ganesh
Replies: 0

Come Quotes - IV

1. If you have an important point to make, don't try to be subtle or clever. Use a pile driver. Hit the point once. Then come back and hit it again. Then hit it a third time - a tremendous whack. - Winston Churchill

2. There will be no end to the troubles of states, or of humanity itself, till philosophers become kings in this world, or till those we now call kings and rulers really and truly become philosophers, and political power and philosophy thus come into the same hands. - Plato

3. With mirth and laughter let old wrinkles come. - William Shakespeare

4. Come Fairies, take me out of this dull world, for I would ride with you upon the wind and dance upon the mountains like a flame! - William Butler Yeats

5. The automobile engine will come, and then I will consider my life's work complete. - Rudolf Diesel

6. It is very important to generate a good attitude, a good heart, as much as possible. From this, happiness in both the short term and the long term for both yourself and others will come. - Dalai Lama

7. Let us endeavor so to live that when we come to die even the undertaker will be sorry. - Mark Twain

8. Nothing brings me more happiness than trying to help the most vulnerable people in society. It is a goal and an essential part of my life - a kind of destiny. Whoever is in distress can call on me. I will come running wherever they are. - Princess Diana.

#402 Jokes » Grape Jokes - I » 2026-02-14 16:21:41

Jai Ganesh
Replies: 0

Q: What did the green grape say to the purple grape?
A: Breathe! Breathe!
* * *
Q: Why aren't grapes ever lonely?
A: Because they come in bunches!
* * *
Q: What is purple and long?
A: The grape wall of China.
* * *
Q: What did the grape say when he got stepped on?
A: He let out a little wine.
* * *
Q: "What's purple and huge and swims in the ocean?"
A: "Moby Grape."
* * *

#403 Science HQ » Oscillator » 2026-02-14 16:13:20

Jai Ganesh
Replies: 0

Oscillator

Gist

An oscillator is an electronic circuit that converts DC (Direct Current) into a periodic, repeating AC (Alternating Current) signal—such as a sine, square, or triangle wave—without needing an external input signal. These devices are essential for generating, timing, and controlling frequencies in systems like radio, clocks, computers, and sensors.

Oscillators are fundamental in electronics, generating precise frequencies for applications such as clocks in computers, carrier waves in radio and Wi-Fi, and timing signals in microcontrollers. They enable everything from timekeeping (watches) to data synchronization (Bluetooth) and medical devices (ultrasound), acting as versatile signal generators for diverse needs.
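As a rough illustration of the "DC in, periodic signal out" idea, the three common output shapes can be sampled in a few lines of Python. This is only a sketch of the ideal waveforms; the function name and sampling choices are illustrative, not from any library.

```python
import math

def sample_wave(shape, freq_hz, t):
    """Instantaneous value (-1..1) of an ideal oscillator output at time t (seconds)."""
    phase = (freq_hz * t) % 1.0          # position within one cycle, 0..1
    if shape == "sine":
        return math.sin(2 * math.pi * phase)
    if shape == "square":
        return 1.0 if phase < 0.5 else -1.0
    if shape == "triangle":
        # ramps from -1 up to +1 over the first half cycle, back down over the second
        return 4 * phase - 1 if phase < 0.5 else 3 - 4 * phase
    raise ValueError(shape)

# One cycle of a 1 kHz sine sampled at 8 points:
cycle = [round(sample_wave("sine", 1000, n / 8000), 3) for n in range(8)]
print(cycle)  # [0.0, 0.707, 1.0, 0.707, 0.0, -0.707, -1.0, -0.707]
```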

Summary

An electronic oscillator is an electronic circuit that produces a periodic, oscillating or alternating current (AC) signal, usually a sine wave, square wave or a triangle wave, powered by a direct current (DC) source. Oscillators are found in many electronic devices, such as radio receivers, television sets, radio and television broadcast transmitters, computers, computer peripherals, cellphones, radar, and many other devices.

Oscillators are often characterized by the frequency of their output signal:

* A low-frequency oscillator (LFO) is an oscillator that generates a frequency below approximately 20 Hz. This term is typically used in the field of audio synthesizers, to distinguish it from an audio frequency oscillator.
* An audio oscillator produces frequencies in the audio range, 20 Hz to 20 kHz.
* A radio frequency (RF) oscillator produces signals above the audio range, more generally in the range of 100 kHz to 100 GHz.
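The three frequency ranges above can be captured in a small helper. The boundaries used here are the approximate ones quoted in the list; the function name is illustrative.

```python
def classify_oscillator(freq_hz):
    """Rough classification of an oscillator by its output frequency."""
    if freq_hz < 20:
        return "low-frequency oscillator (LFO)"
    if freq_hz <= 20_000:
        return "audio oscillator"
    return "radio frequency (RF) oscillator"

print(classify_oscillator(5))      # a synthesizer LFO
print(classify_oscillator(440))    # concert-pitch A, in the audio range
print(classify_oscillator(2.4e9))  # a Wi-Fi carrier, firmly RF
```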

There are two general types of electronic oscillators: the linear or harmonic oscillator, and the nonlinear or relaxation oscillator. The two types are fundamentally different in how oscillation is produced, as well as in the characteristic type of output signal that is generated.

The most common linear oscillator in use is the crystal oscillator, in which the output frequency is controlled by a piezoelectric resonator consisting of a vibrating quartz crystal. Crystal oscillators are ubiquitous in modern electronics, being the source for the clock signal in computers and digital watches, as well as a source for the signals generated in radio transmitters and receivers. As a crystal oscillator's native output waveform is sinusoidal, a signal-conditioning circuit may be used to convert the output to other waveform types, such as the square wave typically utilized in computer clock circuits.

Details

Oscillators are essential components in the world of electronics, playing a crucial role in generating periodic signals. From the simplest applications to complex systems, oscillators provide the timing signals needed for synchronization and control. This article explains what oscillators are and how they work, explores the various types and their performance characteristics, highlights their applications across industries, and reviews recent advancements in this essential technology.

What is an Oscillator?

An oscillator is an electronic circuit that produces a continuous, periodic signal - typically in the form of a sine wave, square wave, or triangle wave - without requiring an input signal. These signals are defined by their frequency and amplitude, which can be precisely controlled to suit specific applications. In essence, an oscillator converts energy from a DC power supply into an AC signal.

Oscillators are found in a wide array of devices, including clocks, radios, and computers. They are considered the heartbeat of electronic systems, serving as timing references that enable circuits to synchronize and function properly.

What is an Oscillator in a CPU?

A CPU oscillator is responsible for generating clock signals that regulate the timing and speed of the processor. These clock signals synchronize various CPU components, allowing for the coordinated execution of instructions.

Typically, a crystal oscillator is used, which relies on the mechanical resonance of a vibrating quartz crystal to produce a stable frequency. This precise timing is critical to a CPU’s performance and efficiency, as it directly affects the instruction execution rate.

How Do Oscillators Work?

Oscillators generate a continuous, periodic signal - such as a sine wave or square wave - without requiring an input signal of the same frequency. They achieve this through the combined principles of feedback and resonance.

Basic Components

• Amplifier: Boosts the signal.

• Feedback Network: Determines the frequency of oscillation.

• Energy Source: Supplies power to sustain the oscillation.

The system continuously feeds part of its output back to the input, allowing the signal to regenerate itself. The frequency of oscillation depends on the configuration of components such as resistors, capacitors, and inductors within the feedback loop.
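The role of the loop gain in this regeneration can be sketched numerically: a tiny seed signal decays, sustains, or grows depending on whether the round-trip gain is below, at, or above one. This is a toy model of the feedback idea, not a circuit simulation.

```python
def amplitude_after(loop_gain, cycles, seed=1e-6):
    """Amplitude of the fed-back signal after `cycles` trips around the loop."""
    a = seed  # tiny starting disturbance (e.g. thermal noise)
    for _ in range(cycles):
        a *= loop_gain  # each pass: amplify, then feed back in phase
    return a

# loop gain < 1: oscillation dies out; = 1: sustained; > 1: grows until limited
assert amplitude_after(0.9, 200) < 1e-9   # decays toward zero
assert amplitude_after(1.0, 200) == 1e-6  # sustained at the seed level
assert amplitude_after(1.1, 200) > 1.0    # grows (a real circuit would limit this)
```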

Purpose of an Oscillator

The primary purpose of an oscillator is to generate consistent clock signals that control the timing and synchronization of electronic systems, especially CPUs. These signals are essential for ensuring the coordinated execution of instructions, which in turn impacts overall system performance.

Types of Oscillators:


Oscillators, essential components in electronic circuits, can be categorized based on the type of waveform they produce and their method of operation. These components are generally divided into two main categories.

Relaxation vs Linear Oscillators

Relaxation Oscillators: Produce non-sinusoidal waveforms such as sawtooth or square waves.

Linear Oscillators: Generate sinusoidal waveforms.

Specific Types

Crystal Oscillators: Crystal oscillators are linear oscillators, and use quartz crystals to generate precise frequencies. Known for their stability and accuracy, they are ideal for communication devices and clocks.

RC Oscillators: RC oscillators can be both relaxation oscillators and linear oscillators. These oscillators utilize resistors and capacitors to generate sine or square waves. Often used in audio applications due to their simplicity and cost-effectiveness.

LC Oscillators: LC oscillators are considered linear oscillators and use inductors (L) and capacitors (C) to produce oscillations. Typically employed in radio frequency (RF) applications due to their high-frequency capability.

Phase-Locked Loop (PLL) Oscillators: PLL oscillators are primarily considered linear oscillators and are used for frequency synthesis and modulation. Essential in telecommunications for signal processing and frequency control.

Emerging Oscillator Technologies

Recent advancements in oscillator technology focus on performance improvement, miniaturization, and integration with other electronic components.

MEMS Oscillators: Microelectromechanical systems offer smaller form factors, highly stable reference frequencies, and low power consumption - ideal for portable devices.

Programmable Oscillators: Allow for customized frequency outputs, reducing component count and streamlining the design process.

Devices That Use Oscillators

Many electronic devices rely on oscillators for essential functions like timing, signal generation, and frequency control. Their ability to produce consistent waveforms makes them indispensable in both consumer electronics and industrial systems.

Examples:

Quartz Watches: Use crystal oscillators to generate highly accurate timekeeping signals, ensuring the watch maintains precise seconds, minutes, and hours.

Radios: Rely on oscillators to generate carrier frequencies and to tune into specific broadcast channels for both AM and FM signals.

Computers: Employ oscillators in their system clocks to synchronize processor operations, manage data transfer, and maintain stable performance.

Cellphones: Utilize oscillators for network synchronization, frequency hopping in wireless communication, and internal clocking for processors and sensors.

Radar Systems: Depend on high-frequency oscillators to generate the radio waves that detect and measure the speed, range, and position of objects.

Metal Detectors: Use oscillators to produce electromagnetic fields that interact with metallic objects, enabling detection through changes in oscillation frequency or amplitude.

Performance Characteristics of Oscillators

Oscillators are evaluated based on several performance metrics that directly influence their suitability for specific applications. The three most critical are frequency stability, phase noise, and waveform shape.

Frequency Stability

Frequency stability describes an oscillator’s ability to maintain its output frequency under varying conditions over time.

• Short-Term Stability: Covers rapid variations over seconds or minutes, often caused by noise or small environmental changes.
• Long-Term Stability: Considers changes over hours, days, or years, typically influenced by component aging and gradual environmental shifts.
• Environmental Factors: Temperature fluctuations, supply voltage changes, and mechanical vibrations can affect stability.
• Crystal Oscillators: These oscillators excel in this area because the resonant frequency of a quartz crystal is highly resistant to such disturbances, making them ideal for precision timing applications like GPS, telecommunications, and laboratory measurement systems.
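The practical meaning of a stability figure is simple arithmetic: a part-per-million frequency error accumulates proportionally as timekeeping error. For example, a common watch-crystal spec of ±20 ppm allows roughly 1.7 seconds of drift per day (the function name here is illustrative):

```python
def drift_seconds_per_day(stability_ppm):
    """Worst-case timekeeping error per day implied by a given frequency stability."""
    return stability_ppm * 1e-6 * 86_400  # 86,400 seconds in a day

# +/-20 ppm corresponds to about 1.7 s of accumulated error per day:
print(round(drift_seconds_per_day(20), 2))  # 1.73
```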

Phase Noise

Phase noise measures short-term, rapid fluctuations in the oscillator's phase, which manifest as small, random deviations from the ideal frequency.

• It is usually represented as a power density (dBc/Hz) at a given frequency offset from the carrier signal.
• Low Phase Noise: Essential in high-performance systems, such as satellite communications, radar, and high-speed data links, where timing jitter can degrade system performance or cause data errors.
• High Phase Noise: Can lead to signal distortion, reduced sensitivity in receivers, and degraded performance in frequency synthesizers.

Waveform Shape

The oscillator’s output waveform determines how well it interfaces with downstream circuitry.

• Sine Waves: Preferred in RF applications because they have minimal harmonic content, reducing the need for filtering.
• Square Waves: Common in digital clocking applications, as their fast transitions make it easy for digital circuits to detect logic states.
• Sawtooth or Triangular Waveforms: May be required in specialized systems, such as sweep generators in analog oscilloscopes.
• Poor Waveform Shape: Can cause signal integrity issues, increased electromagnetic interference (EMI), or inaccurate timing in digital circuits.

Are Oscillators Active Components?

Oscillators are classified as active components. They contain amplifying devices and deliver signal power drawn from their DC supply, distinguishing them from passive components like resistors and capacitors. While oscillators incorporate passive elements in their circuits, their role in signal generation qualifies them as active devices.

Industries That Use Oscillators

Oscillators' ability to generate stable, precise signals makes them indispensable for timing, synchronization, and frequency control across a wide range of sectors. The specific oscillator type used often depends on the application's demands - whether it’s ultra-high precision, rugged durability, or low power consumption.

Telecommunications: Oscillators generate carrier signals for data transmission. Their stability and accuracy ensure signal integrity over long distances. Crystal and PLL oscillators are widely used here.

Consumer Electronics: Devices like smartphones and TVs rely on oscillators to generate clock signals for microcontrollers. Their precision directly impacts device performance.

Automotive: Used in engine control units, infotainment systems, and sensor applications (e.g., ABS), oscillators regulate timing for ignition and fuel injection.

Medical Devices: Essential in pacemakers and diagnostic tools, where reliability and precision are critical. Crystal oscillators are often chosen for their long-term stability.

Additional Information

An oscillator is an electronic device that produces repetitive oscillating signals in the form of a sine wave, a square wave, or a triangle wave. Basically, this circuit converts DC (Direct Current) into an AC (Alternating Current) signal at a specific frequency.

An oscillator is essential in various electronic devices. It is used in Bluetooth modules for frequency generation and maintaining a stable connection. In relays, oscillators help with debouncing and pulse generation.

In sensors, they are used for generating carrier signals and stabilizing readings. Integrated circuits (ICs) use oscillators for clock generation and data synchronization. In connectors, oscillators assist with signal integrity and timing matching.

Microcontrollers rely on oscillators for peripheral operation and system clock management. Additionally, oscillators are used in LCD and LED displays for backlight control and data driving.

A basic oscillator circuit typically includes components like an amplifier stage, a feedback network, frequency-determining components, and a power supply.

1. Amplifier

An amplifier in an oscillator can be a transistor, an operational amplifier, or any active device that boosts small signals to maintain continuous oscillations. To sustain oscillation, the amplifier must provide enough gain that the overall loop gain is at least one.

2. Feedback Network

The feedback network feeds a portion of the output back to the input with the correct phase. It is built from capacitive, inductive, or resistive elements, typically arranged as LC or RC circuits.

3. Frequency Determining Components

This component sets the frequency at which the oscillator operates, which includes RC networks, LC networks, and crystal resonators.

4. Power Supply

It provides the necessary voltage and current for operation.

Types of Oscillators

Based on the design, frequency range, and application, oscillators are classified into various types. They are as follows:

1. LC Oscillator

An LC oscillator uses an inductor and a capacitor to determine the frequency of oscillation. It is a high-frequency operation oscillator that gives a smooth sine wave output, and its frequency depends on the values of L and C.

LC oscillators come in several variants, such as the Hartley oscillator (uses a tapped inductor), the Colpitts oscillator (uses a capacitive voltage divider), and the Clapp oscillator (a variation of the Colpitts with an additional capacitor for better frequency stability).

It is mostly used in radio transmitters, RF communication circuits, and signal generators.
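The frequency set by the L and C values follows the standard resonance formula, f = 1/(2π√(LC)). A quick sketch, with component values chosen purely for illustration:

```python
import math

def lc_resonant_freq(l_henry, c_farad):
    """Resonant frequency of an LC tank: f = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2 * math.pi * math.sqrt(l_henry * c_farad))

# 1 uH with 100 pF resonates near 15.9 MHz -- comfortably in the RF range:
f = lc_resonant_freq(1e-6, 100e-12)
print(f"{f / 1e6:.1f} MHz")  # 15.9 MHz
```

Halving either L or C raises the frequency by a factor of √2, which is why variable capacitors were the classic way to tune an LC radio circuit.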

2. RC Oscillator

An RC oscillator uses resistors and capacitors to produce oscillations. It produces stable low-frequency sine waves and is ideal for audio frequency generation, offering a simple, cost-effective design.

This includes the Wien bridge oscillator (for audio applications) and the Phase shift oscillator (produces sine waves using multiple RC stages). RC oscillators are used in audio signal generation, function generation, and low-frequency timing circuits.
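For the Wien bridge oscillator mentioned above, the oscillation frequency follows f = 1/(2πRC). A quick check with illustrative component values:

```python
import math

def wien_bridge_freq(r_ohm, c_farad):
    """Oscillation frequency of a Wien bridge oscillator: f = 1 / (2*pi*R*C)."""
    return 1.0 / (2 * math.pi * r_ohm * c_farad)

# 10 kOhm with 16 nF oscillates near 1 kHz -- squarely in the audio range:
print(round(wien_bridge_freq(10e3, 16e-9)))  # 995
```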

3. Crystal Oscillator

To create a very stable frequency oscillation, a crystal oscillator uses the mechanical resonance of a quartz crystal. It generates a pure sine wave output with extremely high frequency stability and very low frequency drift over temperature changes.

These are of the types Pierce oscillator and AT-cut crystal oscillator (widely used in microcontrollers). It is used in microcontrollers and microprocessors, Bluetooth and Wi-Fi modules, digital watches and clocks, and GPS systems.

Working Principle of Oscillator

The working principle of an oscillator is based on the concept of positive feedback and energy conversion from a direct current (DC) source into an alternating current (AC) signal at a specific, stable frequency.

The working of the oscillator is explained in the steps below:

1. Initial

Due to thermal activity, every electronic circuit has inherent noise, and this tiny noise signal acts as the seed for oscillation.

2. Amplification

At the amplification stage, the amplifier boosts this initial noise signal, and amplification must be sufficient to compensate for any losses in the feedback network.

3. Positive Feedback Loop

A portion of the output is fed back to the input in phase, which reinforces the input signal rather than cancelling it.

4. Frequency Selection

The frequency-determining network (RC, LC, or crystal) controls the frequency of oscillation.

5. Steady State Oscillation

As the feedback sustains the oscillations, the amplitude stabilizes. Non-linear effects or amplitude limiting mechanisms prevent the output from growing indefinitely, ensuring stable oscillations.
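Steps 1-5 above can be sketched as a toy amplitude model: a noise-sized seed is amplified on each trip around the loop, and a simple gain-compression nonlinearity (purely illustrative, standing in for a real circuit's limiting mechanism) stops the growth where the effective gain falls to one.

```python
def settle_amplitude(small_signal_gain, cycles=500, seed=1e-6):
    """Amplitude growth from a noise seed, with a soft limiter on the gain."""
    a = seed  # step 1: thermal-noise seed
    for _ in range(cycles):
        gain = small_signal_gain / (1 + a * a)  # gain compresses as amplitude rises
        a *= gain                               # steps 2-3: amplify and feed back
    return a

# The amplitude settles where the effective gain equals one, i.e. 1 + a^2 = gain.
# With a small-signal gain of 2.0 that is a = sqrt(2.0 - 1) = 1:
print(round(settle_amplitude(2.0), 4))  # 1.0
```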

Applications of Oscillators

1. Communication Systems

* Oscillators generate high-frequency carrier signals for AM, FM, and digital modulation.
* Used to produce a range of frequencies from a single oscillator source.
* LC and crystal oscillators are used for tuning and frequency control.

Example: Radio Transmitters, Mobile phones, Wi-Fi modules, Bluetooth devices

2. Microcontrollers and Microprocessors

* Oscillators provide the clock signals needed for the timing and operation of microcontrollers and microprocessors.
* Crystal oscillators generate precise timing signals that ensure all processes operate in harmony and within correct timing constraints.

Example: Arduino boards, PIC microcontrollers, Embedded systems.

3. Sensors

* Oscillators are used in sensor circuits for data acquisition and signal processing.

Example: Proximity sensors, Ultrasonic sensors, and Environmental monitoring systems.

4. Display Technologies

Oscillators help maintain the refresh rate of digital displays. Used in the PWM (Pulse Width Modulation) circuits for adjusting display brightness.

Example: LED displays, LCD displays, OLED panels, Digital signage.

Frequently Asked Questions:

1. Is an Oscillator AC or DC?

An oscillator converts DC power into an AC signal by generating a continuous, oscillating waveform without an external input.

2. Is the Oscillator Negative or Positive?

An oscillator uses positive feedback to sustain continuous oscillations.

3. Which Oscillator is Better?

The crystal oscillator is considered better for applications requiring high-frequency stability and accuracy.

4. How Does an Oscillator Differ from an Amplifier?

An oscillator generates its own periodic signal without an external input, while an amplifier boosts the strength of an existing input signal.

5. What is the Difference Between RC and LC Oscillators?

An RC oscillator uses resistors and capacitors for low-frequency generation, while an LC oscillator uses inductors and capacitors for high-frequency generation.

6. What Causes an Oscillator to Fail?

An oscillator can fail due to component aging, temperature variations, power supply issues, or physical damage to the resonator elements, like crystals or inductors.

7. Can an Oscillator be Used as a Signal Generator?

Yes, an oscillator can be used as a signal generator to produce continuous waveforms like sine, square, or triangular signals.


#404 Re: Jai Ganesh's Puzzles » General Quiz » 2026-02-14 15:36:53

Hi,

#10749. What does the term in Geography Cusp or Beach cusps mean?

#10750. What does the term in Geography Cut bank mean?

#405 Re: Jai Ganesh's Puzzles » English language puzzles » 2026-02-14 15:19:08

Hi,

#5945. What does the verb (used with object) mutate mean?

#5946. What does the verb (used without object) mutter mean?

#406 Re: Jai Ganesh's Puzzles » Doc, Doc! » 2026-02-14 15:00:08

Hi,

#2569. What does the medical term Dilated cardiomyopathy (DCM) mean?

#410 Re: Dark Discussions at Cafe Infinity » crème de la crème » 2026-02-13 22:08:18

2433) Yang Chen-Ning

Gist:

Work

For a long time, physicists assumed that various symmetries characterized nature. In a kind of “mirror world” where right and left were reversed and matter was replaced by antimatter, the same physical laws would apply, they posited. The equality of these laws was questioned concerning the decay of certain elementary particles, however, and in 1956 Chen Ning Yang and Tsung Dao Lee formulated a theory that the left-right symmetry law is violated by the weak interaction. Measurements of electrons’ direction of motion during a cobalt isotope’s beta decay confirmed this.

Summary

Chen Ning Yang (born October 1, 1922, Hofei, Anhwei, China—died October 18, 2025, Beijing, China) was a Chinese-born American theoretical physicist whose research with Tsung-Dao Lee showed that parity—the symmetry between physical phenomena occurring in right-handed and left-handed coordinate systems—is violated when certain elementary particles decay. Until this discovery it had been assumed by physicists that parity symmetry was as universal a law as the conservation of energy or electric charge. This and other studies in particle physics earned Yang and Lee the Nobel Prize for Physics for 1957.

Life

Yang’s father, Yang Ko-chuen (also known as Yang Wu-chih), was a professor of mathematics at Tsinghua University, near Peking. While still young, Yang read the autobiography of Benjamin Franklin and adopted “Franklin” as his first name. After graduation from the Southwest Associated University, in K’unming, he took his B.Sc. in 1942 and his M.S. in 1944. On a fellowship, he studied in the United States, enrolling at the University of Chicago in 1946. He took his Ph.D. in nuclear physics with Edward Teller and then remained in Chicago for a year as an assistant to Enrico Fermi, the physicist who was probably the most influential in Yang’s scientific development. Lee had also come to Chicago on a fellowship, and the two men began the collaboration that led eventually to their Nobel Prize work on parity. In 1949 Yang went to the Institute for Advanced Study in Princeton, New Jersey, and became a professor there in 1955. He became a U.S. citizen in 1964.

Work

Almost from his earliest days as a physicist, Yang had made significant contributions to the theory of the weak interactions—the forces long thought to cause elementary particles to disintegrate. (The strong forces that hold nuclei together and the electromagnetic forces that are responsible for chemical reactions are parity-conserving. Since these are the dominant forces in most physical processes, parity conservation appeared to be a valid physical law, and few physicists before 1955 questioned it.) By 1953 it was recognized that there was a fundamental paradox in this field since one of the newly discovered mesons—the so-called K meson—seemed to exhibit decay modes into configurations of differing parity. Since it was believed that parity had to be conserved, this led to a severe paradox.

After exploring every conceivable alternative, Lee and Yang were forced to examine the experimental foundations of parity conservation itself. They discovered, in early 1956, that, contrary to what had been assumed, there was no experimental evidence against parity nonconservation in the weak interactions. The experiments that had been done, it turned out, simply had no bearing on the question. They suggested a set of experiments that would settle the matter, and, when these were carried out by several groups over the next year, large parity-violating effects were discovered. In addition, the experiments also showed that the symmetry between particle and antiparticle, known as charge conjugation symmetry, is also broken by the weak decays.

In addition to his work on weak interactions, Yang, in collaboration with Lee and others, carried out important work in statistical mechanics—the study of systems with large numbers of particles—and later investigated the nature of elementary particle reactions at extremely high energies. From 1965 Yang was Albert Einstein professor at the Institute of Science, State University of New York at Stony Brook, Long Island. During the 1970s he was a member of the board of Rockefeller University and the American Association for the Advancement of Science and, from 1978, of the Salk Institute for Biological Studies, San Diego. He was also on the board of Ben-Gurion University, Beersheba, Israel. He received the Einstein Award in 1957 and the Rumford Prize in 1980; in 1986 he received the Liberty Award and the National Medal of Science.

Details

Yang Chen-Ning (1 October 1922 – 18 October 2025) also known as C.N. Yang and Franklin Yang, was a Chinese-American theoretical physicist who made significant contributions to statistical mechanics, integrable systems, gauge theory, particle physics and condensed matter physics.

Yang is known for his collaboration with Robert Mills in 1954 in developing non-abelian gauge theory, widely known as the Yang–Mills theory, which describes the nuclear forces in the Standard Model of particle physics.

Yang and Tsung-Dao Lee received the 1957 Nobel Prize in Physics for their work on parity non-conservation of the weak interaction, which was confirmed by the Wu experiment in 1956. The two proposed that the conservation of parity, a physical law observed to hold in all other physical processes, is violated in weak nuclear reactions – those nuclear processes that result in the emission of beta or alpha particles.

Early life and education

Yang was born in Hefei, Anhui, China, on 1 October 1922. His mother was Luo Meng-hua and his father, Ko-Chuen Yang (1896–1973), was a mathematician.

Yang attended elementary school and high school in Beijing, and in the autumn of 1937 his family moved to Hefei after the Japanese invaded China. In 1938 they moved to Kunming, Yunnan, where National Southwestern Associated University was located. In the same year, as a second-year student, Yang passed the entrance examination and studied at National Southwestern Associated University. He received a Bachelor of Science in 1942, with his thesis on the application of group theory to molecular spectra, under the supervision of Ta-You Wu.

Yang continued to study graduate courses there for two years under the supervision of Wang Zhuxi (J.S. Wang), working on statistical mechanics. In 1944, he received a Master of Science from National Tsing Hua University, which had moved to Kunming during the Sino-Japanese War (1937–1945). Yang was then awarded a scholarship from the Boxer Indemnity Scholarship Program, set up by the United States government using part of the money China had been forced to pay following the Boxer Rebellion. His departure for the United States was delayed for one year, during which time he taught in a middle school as a teacher and studied field theory.

Yang entered the University of Chicago in January 1946 and studied with Edward Teller. He received a Doctor of Philosophy in 1948.

Career

Yang remained at the University of Chicago for a year as an assistant to Enrico Fermi. In 1949 he was invited to do his research at the Institute for Advanced Study in Princeton, New Jersey, where he began a period of fruitful collaboration with Tsung-Dao Lee. Lee and Yang published 32 papers together. He was made a permanent member of the Institute in 1952, and full professor in 1955. In 1963, Princeton University Press published his textbook, Elementary Particles. In 1965 he moved to Stony Brook University, where he was named the Albert Einstein Professor of Physics and the first director of the newly founded Institute for Theoretical Physics. Today this institute is known as the C. N. Yang Institute for Theoretical Physics. Yang retired from Stony Brook University in 1999.

Yang visited the Chinese mainland in 1971 for the first time after the thaw in China–US relations, and subsequently worked to help the Chinese physics community rebuild the research atmosphere, which later eroded due to political movements during the Cultural Revolution. After retiring from Stony Brook, he returned to Beijing as an honorary director of Tsinghua University, where he was the first Huang Jibei-Lu Kaiqun Professor at the Center for Advanced Study (CASTU). He was also one of the two Shaw Prize Founding Members and was a Distinguished Professor-at-Large at the Chinese University of Hong Kong.

Yang helped to establish the Theoretical Physics Division at the Chern Institute of Mathematics in 1986 at the request of Shiing-Shen Chern who was serving as the inaugural director of the Institute at the time.

Personal life and death

Yang married Tu Chih-li, a teacher, in 1950; they had two sons and a daughter together. His father-in-law was the Kuomintang general Du Yuming. Tu died in October 2003. In January 2005, Yang married Weng Fan, a university student. They met in 1995 at a physics seminar; the couple reestablished contact in February 2004 when Yang moved to China to become affiliated with Tsinghua University. Yang called Weng, who was 54 years his junior, his "final blessing from God".

Yang obtained U.S. citizenship during his research within the country. According to the state-run Xinhua News Agency, Yang said the decision was painful as his father never forgave him for that. According to Xinhua and other mainstream Chinese media, he formally renounced his American citizenship on April 1, 2015. He acknowledged that while the U.S. was a beautiful country that gave him good opportunities to study science, China since his youth had offered the best secondary and undergraduate institutions, though the US had the top graduate studies. However, circumstances changed in favor of China's growth by the turn of the century.

His son Guangnuo was a computer scientist. His second son Guangyu is an astronomer, and his daughter Youli is a doctor.

Yang turned 100 on 1 October 2022 and died in Beijing on 18 October 2025, at the age of 103. The day after the announcement of his death, people gathered and waited in line at Tsinghua University to pay tribute to him.

Views on the CEPC

Yang was known for opposing the construction of the Circular Electron Positron Collider (CEPC), a 100 km circumference particle collider proposed in China to study the Higgs boson. He characterized the project as guesswork without guaranteed results, saying that "even if they see something with the machine, it's not going to benefit the life of Chinese people any sooner."


#411 Re: Jai Ganesh's Puzzles » General Quiz » 2026-02-13 19:58:27

Hi,

#10747. What does the term Culture mean in Geography?

#10748. What does the term Culvert mean in Geography?

#412 Re: Jai Ganesh's Puzzles » English language puzzles » 2026-02-13 19:46:10

Hi,

#5943. What does the noun mandate mean?

#5944. What does the adjective mandatory mean?

#413 This is Cool » Anion » 2026-02-13 19:22:57

Jai Ganesh
Replies: 0

Anion

Gist

An anion is an atom or molecule that carries a net negative electrical charge because it has gained one or more electrons, leaving it with more electrons than protons. Typically formed by non-metals, anions are attracted to the positive electrode (anode) during electrolysis and readily combine with positive ions (cations) to form ionic compounds; common examples of anions include chloride and sulfate.

Anions are negatively charged ions (they have more electrons than protons because they have gained one or more electrons). Cations are also called positive ions, and anions are also called negative ions.

Summary

Anions are atoms or groups of atoms that have a negative electric charge. An anion has more electrons in its atomic orbitals than it has protons in its atomic nucleus. The opposite of an anion is a cation, which has a positive charge.

The name "anion" comes from the words anode and ion. In an electrochemical cell, anions are attracted to the positively charged anode.

Anions can be monatomic, made of only one atom, or polyatomic, made of multiple atoms. Anions can exist on their own only as gases: to form a solid, an ionic liquid, or a solution, the total electrical charge must be zero, which requires a mix of anions and cations.

Properties

In many crystals the anions are bigger; the little cations fit into the spaces between them.

All anions are Brønsted bases: they can make a chemical bond with a proton, H+, to form a conjugate acid.

Examples

Oxide is the most common anion on Earth. It is made from an oxygen atom with two extra electrons. The formula for oxide is written O2−. The oxide ion reacts with water, so it cannot be dissolved to make a solution.

Chloride is a monatomic anion made from an atom of chlorine with an extra electron. The chemical formula is written Cl−. Chloride is the most common anion in seawater.

Sulfate is the second most common anion in seawater after chloride. It is made of a sulfur atom, four oxygen atoms, and two extra electrons. Sulfate is a special type of anion called an oxyanion, which are made of a central element (like sulfur) surrounded by oxygen atoms.

Hydroxide is a polyatomic anion made of one oxygen atom, one hydrogen atom, and one extra electron. It has the formula OH−. Hydroxide is the conjugate base of water, so it is the strongest base that can be mixed with water. Other strong bases, including the oxide anion, react with water to make hydroxide.  

Details

Anions are negatively charged ions formed when an atom or group of atoms gains one or more valence electrons. The term "anion" comes from "anode ions," reflecting their movement toward the positive terminal in an electrolytic solution. Anions, along with positively charged cations, form ionic bonds that are fundamental to many chemical compounds, such as table salt (sodium chloride). Common examples of anions include fluoride, chloride, and sulfate, each named based on their elemental origin or structural characteristics, following specific nomenclature rules established by the International Union of Pure and Applied Chemistry (IUPAC).

The formation of anions is closely tied to the electronic structure of atoms, which consists of a dense nucleus surrounded by electron shells containing orbitals that dictate the arrangement and behavior of electrons. Elements tend to achieve stability by completing their outer electron shell, often following the octet rule, which promotes the formation of anions in various chemical reactions. Understanding anions is essential for grasping broader concepts related to atomic structure, chemical bonding, and the periodic table's organization.

The sections below define the basic structure of anions and trace the development of the modern theory of atomic structure. Electronic structure is fundamental to all chemical behavior and is responsible for the relationships seen in the periodic table.

The Nature of Anions

An anion is any atom or group of atoms that bears a net negative charge due to the presence of one or more extra valence electrons. The term is a contraction of "anode ions," which is a reference to the fact that when a direct electric current is applied to an electrolytic solution, negatively charged ions are attracted to the anode, or positive terminal, of the source of the current. By the same token, cations, from "cathode ions," have a net positive charge and are attracted to the cathode, or negative terminal, of the current source. Anions and cations often combine to form compounds held together with ionic bonds; one common example is sodium chloride (NaCl), better known as table salt, which is created when the sodium cation, Na+, bonds to the chloride anion, Cl−.

The formation of any ion is the result of an atom or molecule gaining or losing one or more valence electrons. This is most apparent in monatomic (single-atom) ions, in which the electrical charge is equal to the oxidation state. For example, the halogens—fluorine, chlorine, bromine, iodine, and astatine—are all highly electronegative, meaning that they readily accept an extra valence electron so that their outer electron shell is completely full and therefore stable. The resulting anions are called fluoride, chloride, bromide, iodide, and astatinide, respectively, and they have an electrical charge of 1− because they gained one electron and thus one unit of negative charge. Similarly, the chalcogens oxygen (O) and sulfur (S) readily accept two extra electrons to form the oxide ion, O2−, and the sulfide ion, S2−.

Compound ions in which a central atom is bonded to a number of oxygen atoms are extremely common. These oxoanions tend to form very stable compounds and are the basic materials of many minerals.

The Electronics of Anion Formation

According to the modern theory of atomic structure, each atom contains a very small, extremely dense nucleus that holds at least 99.98 percent of the atom’s mass and all of its positive electrical charge. The nucleus is surrounded by a diffuse and comparatively very large cloud of electrons containing all of the atom’s negative electrical charge. These electrons are allowed to possess only very specific energies. This restricts their movement around the nucleus to specific regions called "electron shells." Within each shell are well-defined regions called "orbitals." The strict geometric arrangement of the orbitals regulates the formation of chemical bonds between atoms.

Each shell and orbital is subject to a number of restrictions that dictate how many electrons it can hold. There are four different types of electron orbitals, designated s, p, d, and f, each of which can contain a specific number of electrons: s orbitals can hold a maximum of two electrons; p orbitals, a maximum of six; d orbitals, a maximum of ten; and f orbitals, a maximum of fourteen. One or more of these orbitals make up an electron shell. The various electron shells are indicated by an integer value known as the "principal quantum number," starting with 1 for the innermost shell, typically referred to as the "n = 1 shell." The standard notation to describe an electron orbital is the principal quantum number, followed by the type of orbital, followed by a superscript number indicating how many electrons it holds. For example, helium has only one s orbital, which is completely full, so its electron configuration is represented as 1s2. The first p orbital appears in the n = 2 shell, the first d orbital in the n = 3 shell, and the first f orbital in the n = 4 shell. Due to variances in energy levels, electrons usually fill atomic orbitals in the order 1s, 2s, 2p, 3s, 3p, 4s, 3d, 4p, 5s, 4d, 5p, 6s, 4f, 5d, 6p, 7s . . . rather than in strict numerical order.
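The filling order quoted above follows the Madelung (n + l) rule: subshells fill in order of increasing n + l, with ties broken by the smaller principal quantum number n. As a quick illustration (a minimal sketch, not part of the source text), the order can be generated in a few lines of Python:

```python
# Generate the Aufbau/Madelung filling order: sort subshells by
# (n + l), breaking ties with the smaller principal quantum number n.
L_LABELS = "spdf"  # orbital-type letters for l = 0, 1, 2, 3

subshells = [(n, l) for n in range(1, 8) for l in range(min(n, 4))]
order = sorted(subshells, key=lambda nl: (nl[0] + nl[1], nl[0]))
labels = [f"{n}{L_LABELS[l]}" for n, l in order]

print(" ".join(labels[:16]))
# → 1s 2s 2p 3s 3p 4s 3d 4p 5s 4d 5p 6s 4f 5d 6p 7s
```

The first sixteen entries reproduce exactly the sequence given in the paragraph above.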

The outermost electron shell is the valence shell, and in all elements, except the noble gases (helium, neon, argon, krypton, xenon, and radon), the valence shell is incompletely occupied. Because of this, the noble gases are often called "inert gases," reflecting the fact that they are far less chemically reactive than elements whose valence shells are not filled. All noble gases except for helium have eight electrons in their valence shell, as a shell with eight electrons approximates a stable closed-shell configuration of s2p6. This is the basis of the octet rule, which states that atoms of lower atomic numbers tend to achieve the greatest stability when they have eight electrons in their valence shells. The closer an element is to having eight valence electrons, the more likely it is to undergo ionization or a chemical reaction in order to achieve a stable configuration, either by gaining enough electrons to complete the octet or by losing all valence electrons so that the next-highest completed shell becomes the outermost shell. Elements with similar electron distributions in their valence shells typically exhibit similar chemical behaviors, which is the basis on which the periodic table of the elements is arranged.

Naming the Anions

The rules for naming various chemical species are established and standardized by the International Union of Pure and Applied Chemistry (IUPAC). Monatomic anions and polyatomic anions composed of a single element are named by adding the suffix -ide to the name of the element, either instead of or in addition to the existing suffix. Thus a sulfur anion becomes sulfide, a xenon anion becomes xenonide, a potassium anion becomes potasside, and so on. In some cases, the suffix is added to the element’s Latin name instead of its common name; for example, an anion of silver, which has the Latin name argentum, is called argentide.

Oxoanions are generally named for the central atom of the anion and the number of oxygen atoms that surround it, with the charge number given in parentheses at the end. An oxoanion consisting of a sulfur atom surrounded by three oxygen atoms has the formal name trioxidosulfate, while one with a sulfur atom and four oxygen atoms is called tetraoxidosulfate. The common (nonsystematic) names of these two oxoanions are sulfite and sulfate, respectively, following the convention that the oxoanion with fewer oxygen atoms takes the suffix -ite and the one with more oxygen atoms takes the suffix -ate. If one element is capable of forming more than two different oxoanions, as is the case with chlorine, the prefixes hypo- and per- are used as well, so that the common name of ClO− is hypochlorite, ClO2− is chlorite, ClO3− is chlorate, and ClO4− is perchlorate.
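The hypo-/-ite/-ate/per- convention for chlorine can be captured in a small lookup table (an illustrative sketch; the formulas and names are exactly those listed above):

```python
# Common (nonsystematic) names of the four chlorine oxoanions,
# keyed by the number of oxygen atoms around the central chlorine.
CHLORINE_OXOANIONS = {
    1: ("ClO-",  "hypochlorite"),
    2: ("ClO2-", "chlorite"),
    3: ("ClO3-", "chlorate"),
    4: ("ClO4-", "perchlorate"),
}

for n_oxygen, (formula, name) in sorted(CHLORINE_OXOANIONS.items()):
    print(f"{formula:<6} {name}")
```

Reading down the table shows the pattern: fewest oxygens takes hypo- and -ite, the most takes per- and -ate.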

Principal Terms

cation: any chemical species bearing a net positive electrical charge, which causes it to be drawn toward the negative pole, or cathode, of an electrochemical cell.
ionic bond: a type of chemical bond formed by mutual attraction between two ions of opposite charges.
ionization: the process by which an atom or molecule loses or gains one or more electrons to acquire a net positive or negative electrical charge.
oxoanion: an ion consisting of one or more central atoms bonded to a number of oxygen atoms and bearing a net negative electrical charge.
valence electron: an electron that occupies the outermost or valence shell of an atom and participates in chemical processes such as bond formation and ionization.

Additional Information

An anion is an atom that has a negative charge. So, given that anion definition, the answer to the question "Is an anion negative?" is yes. Anions are a type of atom, the smallest particle of an element that still retains the element's properties. Atoms are made of three types of subatomic particles: neutrons, protons, and electrons. Neutrons are neutrally charged subatomic particles and, along with protons, they make up the nucleus. Protons have a positive charge. Electrons are very small subatomic particles that orbit the nucleus in levels called shells. Electrons have a negative charge.

Anions are created when an atom gains one or more electrons. The number of electrons gained by an atom is determined by how many are needed to fill their outer shell. For example, fluorine has seven electrons in its outer shell, but a full outer shell contains eight electrons. Thus, fluorine tends to gain one electron to fill its outer shell and generally has a charge of -1. Oxygen, on the other hand, has an outer shell of six electrons. This means it requires two electrons to complete its outer shell and tends to carry a charge of -2.
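The charge rule just described (gain however many electrons are needed to complete an octet) can be sketched as a one-line helper. The function name `anion_charge` is hypothetical, used only for illustration:

```python
def anion_charge(valence_electrons: int) -> int:
    """Typical charge of the monatomic anion a main-group atom forms:
    it gains (8 - valence_electrons) electrons to complete the octet,
    so the resulting charge is valence_electrons - 8 (a negative number)."""
    return valence_electrons - 8

print(anion_charge(7))  # fluorine (7 valence electrons): -1
print(anion_charge(6))  # oxygen (6 valence electrons):   -2
```

This reproduces the fluorine (-1) and oxygen (-2) examples from the paragraph above, and gives -3 for group 15 elements such as nitrogen.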

Uses for Anions

Fluoride ion is widely used in water supplies to help prevent tooth decay. Chloride is an important component in ion balance in blood. Iodide ion is needed by the thyroid gland to make the hormone thyroxine.

Summary

* Anions are formed by the addition of one or more electrons to the outer shell of an atom.
* Group 17 elements add one electron to the outer shell, group 16 elements add two electrons, and group 15 elements add three electrons.
* Anions are named by dropping the ending of the element's name and adding -ide.


#414 Re: This is Cool » Miscellany » 2026-02-13 18:32:41

2495) Adhesive

Gist

An adhesive is a substance—such as glue, cement, or paste—that binds materials together through surface attachment, resisting separation. Primarily divided into natural (e.g., starch) and synthetic (e.g., epoxy, acrylic) types, they are used for bonding, sealing, and coating in industrial and consumer applications.

Adhesives are used to bond materials together across virtually every industry, from construction, automotive, and aerospace to everyday household tasks. They create strong, durable connections that distribute stress evenly, join dissimilar materials (such as metal to plastic), and offer aesthetic advantages over mechanical fasteners like nails or screws by avoiding holes and leaving smoother finishes. They are essential for product assembly, packaging (such as sealing boxes), labeling (stickers, bottle labels), and repairs, enabling lighter, stronger, and more complex designs in everything from smartphones to spacecraft.

Summary

Adhesive, also known as glue, cement, mucilage, or paste, is any non-metallic substance applied to one or both surfaces of two separate items that binds them together and resists their separation.

The use of adhesives offers certain advantages over other binding techniques such as sewing, mechanical fastening, and welding. These include the ability to bind different materials together, a more efficient distribution of stress across a joint, the cost-effectiveness of an easily mechanized process, and greater flexibility in design. Disadvantages of adhesive use include decreased stability at high temperatures, relative weakness in bonding large objects with a small bonding surface area, and greater difficulty in separating objects during testing. Adhesives are typically organized by the method of adhesion and then classified as reactive or non-reactive, terms that refer to whether the adhesive hardens by chemical reaction. Alternatively, they can be organized by their starting physical phase or by whether their raw stock is of natural or synthetic origin.

Adhesives may be found naturally or produced synthetically. The earliest human use of adhesive-like substances was approximately 200,000 years ago, when Neanderthals produced tar from the dry distillation of birch bark for use in binding stone tools to wooden handles. The first references to adhesives in literature appeared approximately 2000 BC. The Greeks and Romans made great contributions to the development of adhesives. In Europe, glue was not widely used until the period AD 1500–1700. From then until the 1900s increases in adhesive use and discovery were relatively gradual. Only since the 20th century has the development of synthetic adhesives accelerated rapidly, and innovation in the field continues to the present.

Details

An adhesive is any substance that is capable of holding materials together in a functional manner by surface attachment that resists separation. “Adhesive” as a general term includes cement, mucilage, glue, and paste—terms that are often used interchangeably for any organic material that forms an adhesive bond. Inorganic substances such as portland cement also can be considered adhesives, in the sense that they hold objects such as bricks and beams together through surface attachment, but this article is limited to a discussion of organic adhesives, both natural and synthetic.

Natural adhesives have been known since antiquity. Egyptian carvings dating back 3,300 years depict the gluing of a thin piece of veneer to what appears to be a plank of sycamore. Papyrus, an early nonwoven fabric, contained fibres of reedlike plants bonded together with flour paste. Bitumen, tree pitches, and beeswax were used as sealants (protective coatings) and adhesives in ancient and medieval times. The gold leaf of illuminated manuscripts was bonded to paper by egg white, and wooden objects were bonded with glues from fish, horn, and cheese. The technology of animal and fish glues advanced during the 18th century, and in the 19th century rubber- and nitrocellulose-based cements were introduced. Decisive advances in adhesives technology, however, awaited the 20th century, during which time natural adhesives were improved and many synthetics came out of the laboratory to replace natural adhesives in the marketplace. The rapid growth of the aircraft and aerospace industries during the second half of the 20th century had a profound impact on adhesives technology. The demand for adhesives that had a high degree of structural strength and were resistant to both fatigue and severe environmental conditions led to the development of high-performance materials, which eventually found their way into many industrial and domestic applications.

This article begins with a brief explanation of the principles of adhesion and then proceeds to a review of the major classes of natural and synthetic adhesives.

Adhesion

In the performance of adhesive joints, the physical and chemical properties of the adhesive are the most important factors. Also important in determining whether the adhesive joint will perform adequately are the types of adherend (that is, the components being joined—e.g., metal alloy, plastic, composite material) and the nature of the surface pretreatment or primer. These three factors—adhesive, adherend, and surface—have an impact on the service life of the bonded structure. The mechanical behaviour of the bonded structure in turn is influenced by the details of the joint design and by the way in which the applied loads are transferred from one adherend to the other.

Implicit in the formation of an acceptable adhesive bond is the ability of the adhesive to wet and spread on the adherends being joined. Attainment of such interfacial molecular contact is a necessary first step in the formation of strong and stable adhesive joints. Once wetting is achieved, intrinsic adhesive forces are generated across the interface through a number of mechanisms. The precise nature of these mechanisms has been the subject of physical and chemical study since at least the 1960s, with the result that a number of theories of adhesion exist. The main mechanism of adhesion is explained by the adsorption theory, which states that substances stick primarily because of intimate intermolecular contact. In adhesive joints this contact is attained by intermolecular or valence forces exerted by molecules in the surface layers of the adhesive and adherend.

In addition to adsorption, four other mechanisms of adhesion have been proposed. The first, mechanical interlocking, occurs when adhesive flows into pores in the adherend surface or around projections on the surface. The second, interdiffusion, results when liquid adhesive dissolves and diffuses into adherend materials. In the third mechanism, adsorption and surface reaction, bonding occurs when adhesive molecules adsorb onto a solid surface and chemically react with it. Because of the chemical reaction, this process differs in some degree from simple adsorption, described above, although some researchers consider chemical reaction to be part of a total adsorption process and not a separate adhesion mechanism. Finally, the electronic, or electrostatic, attraction theory suggests that electrostatic forces develop at an interface between materials with differing electronic band structures. In general, more than one of these mechanisms plays a role in achieving the desired level of adhesion for various types of adhesive and adherend.

In the formation of an adhesive bond, a transitional zone arises in the interface between adherend and adhesive. In this zone, called the interphase, the chemical and physical properties of the adhesive may be considerably different from those in the noncontact portions. It is generally believed that the interphase composition controls the durability and strength of an adhesive joint and is primarily responsible for the transference of stress from one adherend to another. The interphase region is frequently the site of environmental attack, leading to joint failure.

The strength of adhesive bonds is usually determined by destructive tests, which measure the stresses set up at the point or line of fracture of the test piece. Various test methods are employed, including peel, tensile lap shear, cleavage, and fatigue tests. These tests are carried out over a wide range of temperatures and under various environmental conditions. An alternate method of characterizing an adhesive joint is by determining the energy expended in cleaving apart a unit area of the interphase. The conclusions derived from such energy calculations are, in principle, completely equivalent to those derived from stress analysis.

Adhesive materials

Virtually all synthetic adhesives and certain natural adhesives are composed of polymers, which are giant molecules, or macromolecules, formed by the linking of thousands of simpler molecules known as monomers. The formation of the polymer (a chemical reaction known as polymerization) can occur during a “cure” step, in which polymerization takes place simultaneously with adhesive-bond formation (as is the case with epoxy resins and cyanoacrylates), or the polymer may be formed before the material is applied as an adhesive, as with thermoplastic elastomers such as styrene-isoprene-styrene block copolymers. Polymers impart strength, flexibility, and the ability to spread and interact on an adherend surface—properties that are required for the formation of acceptable adhesion levels.

Natural adhesives

Natural adhesives are primarily of animal or vegetable origin. Though the demand for natural products has declined since the mid-20th century, certain of them continue to be used with wood and paper products, particularly in corrugated board, envelopes, bottle labels, book bindings, cartons, furniture, and laminated film and foils. In addition, owing to various environmental regulations, natural adhesives derived from renewable resources are receiving renewed attention. The most important natural products are described below.

Animal glue

The term animal glue usually is confined to glues prepared from mammalian collagen, the principal protein constituent of skin, bone, and muscle. When treated with acids, alkalies, or hot water, the normally insoluble collagen slowly becomes soluble. If the original protein is pure and the conversion process is mild, the high-molecular-weight product is called gelatin and may be used for food or photographic products. The lower-molecular-weight material produced by more vigorous processing is normally less pure and darker in colour and is called animal glue.

Animal glue traditionally has been used in wood joining, book bindery, sandpaper manufacture, heavy gummed tapes, and similar applications. In spite of its advantage of high initial tack (stickiness), much animal glue has been modified or entirely replaced by synthetic adhesives.

Casein glue

This product is made by dissolving casein, a protein obtained from milk, in an aqueous alkaline solvent. The degree and type of alkali influences product behaviour. In wood bonding, casein glues generally are superior to true animal glues in moisture resistance and aging characteristics. Casein also is used to improve the adhering characteristics of paints and coatings.

Blood albumen glue

Glue of this type is made from serum albumen, a blood component obtainable from either fresh animal blood or dried soluble blood powder to which water has been added. Addition of alkali to albumen-water mixtures improves adhesive properties. A considerable quantity of glue products from blood is used in the plywood industry.

Starch and dextrin

Starch and dextrin are extracted from corn, wheat, potatoes, or rice. They constitute the principal types of vegetable adhesives, which are soluble or dispersible in water and are obtained from plant sources throughout the world. Starch and dextrin glues are used in corrugated board and packaging and as a wallpaper adhesive.

Natural gums

Substances known as natural gums, which are extracted from their natural sources, also are used as adhesives. Agar, a marine-plant colloid (suspension of extremely minute particles), is extracted by hot water and subsequently frozen for purification. Algin is obtained by digesting seaweed in alkali and precipitating either the calcium salt or alginic acid. Gum arabic is harvested from acacia trees that are artificially wounded to cause the gum to exude. Another exudate is natural rubber latex, which is harvested from Hevea trees. Most gums are used chiefly in water-remoistenable products.

Synthetic adhesives

Although natural adhesives are less expensive to produce, most important adhesives are synthetic. Adhesives based on synthetic resins and rubbers excel in versatility and performance. Synthetics can be produced in a constant supply and at constantly uniform properties. In addition, they can be modified in many ways and are often combined to obtain the best characteristics for a particular application.

The polymers used in synthetic adhesives fall into two general categories—thermoplastics and thermosets. Thermoplastics provide strong, durable adhesion at normal temperatures, and they can be softened for application by heating without undergoing degradation. Thermoplastic resins employed in adhesives include nitrocellulose, polyvinyl acetate, vinyl acetate-ethylene copolymer, polyethylene, polypropylene, polyamides, polyesters, acrylics, and cyanoacrylics.

Thermosetting systems, unlike thermoplastics, form permanent, heat-resistant, insoluble bonds that cannot be modified without degradation. Adhesives based on thermosetting polymers are widely used in the aerospace industry. Thermosets include phenol formaldehyde, urea formaldehyde, unsaturated polyesters, epoxies, and polyurethanes. Elastomer-based adhesives can function as either thermoplastic or thermosetting types, depending on whether cross-linking is necessary for the adhesive to perform its function. The characteristics of elastomeric adhesives include quick assembly, flexibility, variety of type, economy, high peel strength, ease of modification, and versatility. The major elastomers employed as adhesives are natural rubber, butyl rubber, butadiene rubber, styrene-butadiene rubber, nitrile rubber, silicone, and neoprene.

An important challenge facing adhesive manufacturers and users is the replacement of adhesive systems based on organic solvents with systems based on water. This trend has been driven by restrictions on the use of volatile organic compounds (VOCs), which include solvents that are released into the atmosphere and contribute to the formation of photochemical smog. In response to environmental regulation, adhesives based on aqueous emulsions and dispersions are being developed, and solvent-based adhesives are being phased out.

The polymer types noted above are employed in a number of functional types of adhesives. These functional types are described below.

Contact cements

Contact adhesives or cements are usually based on solvent solutions of neoprene. They are so named because they are usually applied to both surfaces to be bonded. Following evaporation of the solvent, the two surfaces may be joined to form a strong bond with high resistance to shearing forces. Contact cements are used extensively in the assembly of automotive parts, furniture, leather goods, and decorative laminates. They are effective in the bonding of plastics.

Structural adhesives

Structural adhesives are adhesives that generally exhibit good load-carrying capability, long-term durability, and resistance to heat, solvents, and fatigue. Ninety-five percent of all structural adhesives employed in original equipment manufacture fall into six structural-adhesive families: (1) epoxies, which exhibit high strength and good temperature and solvent resistance, (2) polyurethanes, which are flexible, have good peeling characteristics, and are resistant to shock and fatigue, (3) acrylics, a versatile adhesive family that bonds to oily parts, cures quickly, and has good overall properties, (4) anaerobics, or surface-activated acrylics, which are good for bonding threaded metal parts and cylindrical shapes, (5) cyanoacrylates, which bond quickly to plastic and rubber but have limited temperature and moisture resistance, and (6) silicones, which are flexible, weather well out-of-doors, and provide good sealing properties. Each of these families can be modified to provide adhesives that have a range of physical and mechanical properties, cure systems, and application techniques.

Polyesters, polyvinyls, and phenolic resins are also used in industrial applications but have processing or performance limitations. High-temperature adhesives, such as polyimides, have a limited market.

Hot-melt adhesives

Hot-melt adhesives are employed in many nonstructural applications. Based on thermoplastic resins, which melt at elevated temperatures without degrading, these adhesives are applied as hot liquids to the adherend. Commonly used polymers include polyamides, polyesters, ethylene-vinyl acetate, polyurethanes, and a variety of block copolymers and elastomers such as butyl rubber, ethylene-propylene copolymer, and styrene-butadiene rubber.

Hot-melts find wide application in the automotive and home-appliance fields. Their utility, however, is limited by their lack of high-temperature strength, the upper use temperature for most hot-melts being in the range of 40–65 °C (approximately 100–150 °F). In order to improve performance at higher temperatures, so-called structural hot-melts—thermoplastics modified with reactive urethanes, moisture-curable urethanes, or silane-modified polyethylene—have been developed. Such modifications can lead to enhanced peel adhesion, higher heat capability (in the range of 70–95 °C [160–200 °F]), and improved resistance to ultraviolet radiation.

Pressure-sensitive adhesives

Pressure-sensitive adhesives, or PSAs, represent a large industrial and commercial market in the form of adhesive tapes and films directed toward packaging, mounting and fastening, masking, and electrical and surgical applications. PSAs are capable of holding adherends together when the surfaces are mated under briefly applied pressure at room temperature. (The difference between these adhesives and contact cements is that the latter require no pressure to bond.)

Materials used to formulate PSA systems include natural and synthetic rubbers, thermoplastic elastomers, polyacrylates, polyvinylalkyl ethers, and silicones. These polymers, in both solvent-based and hot-melt formulations, are applied as a coating onto a substrate of paper, cellophane, plastic film, fabric, or metal foil. As solvent-based adhesive formulations are phased out in response to environmental regulations, water-based PSAs will find greater use.

Ultraviolet-cured adhesives

Ultraviolet-cured adhesives became available in the early 1960s but developed rapidly with advances in chemical and equipment technology during the 1980s. These types of adhesive normally consist of a monomer (which also can serve as the solvent) and a low-molecular-weight prepolymer combined with a photoinitiator. Photoinitiators are compounds that break down into free radicals upon exposure to ultraviolet radiation. The radicals induce polymerization of the monomer and prepolymer, thus completing the chain extension and cross-linking required for the adhesive to form. Because of the low process temperatures and very rapid polymerization (from 2 to 60 seconds), ultraviolet-cured adhesives are making rapid advances in the electronic, automotive, and medical areas. They consist mainly of acrylated formulations of silicones, urethanes, and methacrylates. Combined ultraviolet–heat-curing formulations also exist.

Additional Information

An adhesive is a substance that holds two or more materials together through cohesive forces and surface attachment. Adhesives can be made from a variety of substances such as tree sap, beeswax, cement, and epoxy. Broadly, there are two adhesive categories: natural and synthetic. Most commercially available adhesives are synthetic, as they provide better consistency, bond strength, and adaptability than natural adhesives. Synthetic adhesives are further classified as consumer adhesives and industrial adhesives based on their application.

An adhesive’s chemical composition determines its application methods, usage, and bonding strength. Therefore, adhesive manufacturers need to custom-engineer modern synthetic adhesives based on the needs of different industries and applications.

Adhesive Applications In Various Industries

1. Bonding:
Bonding is a process in which two surfaces are joined together with the help of a suitable adhesive, such as an epoxy adhesive. Adhesives are used for bonding materials in various industries, such as the electronics, medical, food, optical, chemical, and oil and gas industries, to bond a range of metals, ceramics, glasses, plastics, rubbers, and composites.

2. Sealing:
Unlike bonding, which joins two surfaces together, sealants are ideal for closing gaps and cavities to block fluids, dust, and dirt from either entering or escaping. Sealants are widely used in the aerospace, oil and gas, chemical, electronic, optical, automotive, and specialty OEM industries.

3. Coating:
Coatings are predominantly used in aerospace and in electronic conformal coating, with some additional uses in the OEM and oil and chemical industries. Industrial adhesive coatings can provide superior protection against chemicals, dust, and moisture; reduce friction; improve abrasion resistance; and provide EMI/RFI shielding.

4. Potting:
Potting is an encapsulation method used in the electronics industry to cover small or large electrical components placed inside a housing with a suitable potting material that can withstand high temperatures, protect the circuits from moisture, dirt, dust and other harsh conditions. Potting and encapsulation are used for electronic and microelectronic components, such as sensors, motors, coils, transformers, capacitors, switches, connectors, power supplies, and cable harnesses.

5. Impregnation:
Impregnation is a method used to wet various fibres, such as glass, carbon, Kevlar, and other aramids. Once the fibres are completely saturated with resin, the resin is allowed to cure fully in place, forming a composite substrate. Such impregnated composite surfaces are widely used in the aerospace, wind-turbine, and electronics and electrical industries.

How Do Adhesives Work?

How an adhesive works depends on the type of bonding process used to attach the surfaces to each other. Mechanical adhesion and chemical adhesion are the two types of bonding that can be used to stick one surface to another with adhesives.

Surfaces that need to be joined with adhesives usually have many micropores. When filled with adhesive, these pores act as grips that keep the other surface attached. This is called mechanical adhesion. In mechanical adhesion the adhesive is applied in liquid form and gradually penetrates the pores during the drying and curing process. Keep in mind that mechanical bonding depends on the surface roughness and surface energy of the substrates being bonded: the higher the surface energy and roughness of a substrate, the stronger the bond.

Chemical bonding, on the other hand, is completely different: the surface of one material bonds with the other material at the molecular level. It is a complex process but very effective. Chemical bonding is further categorised into two types, adsorption and chemisorption, depending on the type of bond between the adhesive’s molecules and the surface. Although chemical adhesives are readily available, they are not a common form of adhesive used in industry.

Types of Adhesives

1. Hot Melt:
Hot melt is a type of thermoplastic polymer adhesive. Thermoplastic polymer adhesives are solid at room temperature; during application they are liquefied by heating so they can be applied as an adhesive. Hot melt adhesives are used for manufacturing and packaging in a wide array of industries because of their superior bonding strength, versatility, and short setting time. They are also eco-friendly, safe, and have a long shelf life.

Different hot melt adhesives might have different softening points and hardening times as per their applications. Some of the common types of hot melt adhesives are polyurethane, metallocene, EVA hot melt, and polyethene hot melt adhesives.

Reactive hot melt adhesives differ from ordinary hot melt adhesives: once applied to a surface and cured, they cannot be melted again, because they form additional chemical bonds during the curing process. This makes reactive hot melts a better choice than simple hot melts where stronger adherence is needed. Reactive hot melt (RHM) adhesives are also known for their heightened resistance to moisture and other chemicals and for their higher thermal stability.

2. Thermosetting:
Thermosetting adhesives are materials which cannot be re-melted after they have cured. They usually consist of two parts, a resin and a hardener, although one-part forms can also be found.

There are various types of thermosetting grades such as:

* Phenolics
* Epoxies
* Polyesters
* Polyurethanes
* Silicones

Out of these, epoxy thermosetting resins are the most commonly used in various industries such as electronics and electrical, oil and chemical, automotive, aerospace, optical etc. This is due to their excellent resistance to heat and harsh chemicals and superior mechanical bonding properties.

3. Pressure Sensitive:
Pressure-sensitive adhesives are low-modulus elastomers, which means the bond can be taken apart easily; this makes them the best choice for light-duty use. Pressure-sensitive adhesives are easily found in tapes, bandages, sticky notes, etc. They are non-structural adhesives and are not suitable for demanding industrial applications, but they can be used on lighter and thinner surfaces for which strong adhesives are unsuitable. Pressure-sensitive adhesives are also cheaper than other adhesive materials and easier to obtain.

4. Contact Adhesive:
Contact adhesives are generally used to create strong mechanical bonds by applying adhesive to both surfaces that are to be bonded together. Contact adhesives are also elastomeric, meaning the polymers used have rubber-like properties that help them keep their shape. This gives contact adhesives excellent flexibility and mechanical strength. These adhesives are commonly used in the automotive, construction, aerospace, and OEM industries for sealing and coating, and they can also be found in rubber cement and countertop laminates. Contact adhesives are ideal for applications which require stability and durability.

Adhesive Application Methods

1. Manual:
As the name suggests, in this method the applicator uses handheld devices and tools to apply adhesives to the surfaces. Manual application methods include spraying, web coating, brush and roller application, curtain coating, etc. Manual application is cost-effective and is recommended for smaller jobs.

2. Glue Applicator:
Glue applicators are handheld devices that help you apply adhesives uniformly and at a faster rate than by hand. These applicators consist of a gun fitted with a cartridge containing the adhesive; a mixing tip attached to the front of the cartridge eliminates the need for manual mixing. These semi-automatic devices enable higher speed, precision, and efficiency. Glue applicators are ideal for medium- to large-scale applications and are commonly used in the aerospace, electronics, and optical industries to bond small and detailed pieces of equipment.

3. Automatic Dispensing:
Automatic dispensing is ideal for fast-paced, high-volume environments where consistency and a quality finish are crucial. This method is more costly than the two above; however, automatic dispensing can increase efficiency, reduce waste, and complete tasks at large scale. Meter-mix-dispense systems are used for two-component adhesives, and robotic dispensing is used for single-component adhesives.

Conclusion

Adhesives are used in almost every manufacturing and packaging industry and are an important part of their process. As we have seen, there are different types of adhesives with varying properties suitable for numerous industries.


#415 Dark Discussions at Cafe Infinity » Come Quotes - III » 2026-02-13 18:01:30

Jai Ganesh
Replies: 0

Come Quotes - III

1. Put two ships in the open sea, without wind or tide, and, at last, they will come together. Throw two planets into space, and they will fall one on the other. Place two enemies in the midst of a crowd, and they will inevitably meet; it is a fatality, a question of time; that is all. - Jules Verne

2. On two occasions I have been asked, 'Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?' I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question. - Charles Babbage

3. I've always believed that if you put in the work, the results will come. - Michael Jordan

4. From the deepest desires often come the deadliest hate. - Socrates

5. The most important thing about Spaceship Earth - an instruction book didn't come with it. - R. Buckminster Fuller

6. Hope smiles from the threshold of the year to come, whispering, 'It will be happier.' - Alfred Lord Tennyson

7. Tears come from the heart and not from the brain. - Leonardo da Vinci

8. What goes up must come down. - Isaac Newton.

#416 Jokes » Doughnut Jokes - II » 2026-02-13 17:43:27

Jai Ganesh
Replies: 0

Q: What kind of donuts can fly?
A: A plain one.
* * *
Q: What do you call a Jamaican donut?
A: Cinnamon.
* * *
Q: What did one donut say to the other?
A: I donut care.
* * *
Q: How did the police department figure out a perp stole a cop car?
A: The lojacked cop car went 5 hours without stopping at a Dunkin Donuts!
* * *
Donuts will make your clothes shrink.
* * *

#417 Re: Jai Ganesh's Puzzles » Doc, Doc! » 2026-02-13 17:35:30

Hi,

#2568. What does the medical term Humectant mean?

#421 Science HQ » Acoustics » 2026-02-13 16:34:38

Jai Ganesh
Replies: 0

Acoustics

Gist

Acoustics: The branch of physics that is concerned with the study of sound is known as acoustics. We can define acoustics as the science that deals with the study of sound and its production, transmission, and effects.

"Acoustic" relates to sound or hearing, describing things like instruments not needing electronic amplification (acoustic guitar), materials controlling sound (acoustic panels), or the scientific study of sound waves (acoustics) in physics, architecture, and medicine. It signifies natural sound production or properties that affect sound quality, from the science of hearing to the design of concert halls or musical styles.

Summary

Acoustics is a branch of continuum mechanics that deals with the study of mechanical waves in gases, liquids, and solids including topics such as vibration, sound, ultrasound and infrasound. A scientist who works in the field of acoustics is an acoustician while someone working in the field of acoustics technology may be called an acoustical engineer. The application of acoustics is present in almost all aspects of modern society with the most obvious being the audio and noise control industries.

Hearing is one of the most crucial means of survival in the animal world and speech is one of the most distinctive characteristics of human development and culture. Accordingly, the science of acoustics spreads across many facets of human society—music, medicine, architecture, industrial production, warfare and more. Likewise, animal species such as songbirds and frogs use sound and hearing as a key element of mating rituals or for marking territories. Art, craft, science and technology have provoked one another to advance the whole, as in many other fields of knowledge. Robert Bruce Lindsay's "Wheel of Acoustics" is a well-accepted overview of the various fields in acoustics.

Details

Acoustics is the science concerned with the production, control, transmission, reception, and effects of sound. The term is derived from the Greek akoustos, meaning “heard.”

Beginning with its origins in the study of mechanical vibrations and the radiation of these vibrations through mechanical waves, acoustics has had important applications in almost every area of life. It has been fundamental to many developments in the arts—some of which, especially in the area of musical scales and instruments, took place after long experimentation by artists and were only much later explained as theory by scientists. For example, much of what is now known about architectural acoustics was actually learned by trial and error over centuries of experience and was only recently formalized into a science.

Other applications of acoustic technology are in the study of geologic, atmospheric, and underwater phenomena. Psychoacoustics, the study of the physical effects of sound on biological systems, has been of interest since Pythagoras first heard the sounds of vibrating strings and of hammers hitting anvils in the 6th century bc, but the application of modern ultrasonic technology has only recently provided some of the most exciting developments in medicine. Even today, research continues into many aspects of the fundamental physical processes involved in waves and sound and into possible applications of these processes in modern life.

Sound waves follow physical principles that can be applied to the study of all waves; these principles are discussed thoroughly in the article mechanics of solids. The article here explains in detail the physiological process of hearing—that is, receiving certain wave vibrations and interpreting them as sound.

Early experimentation

The origin of the science of acoustics is generally attributed to the Greek philosopher Pythagoras (6th century bc), whose experiments on the properties of vibrating strings that produce pleasing musical intervals were of such merit that they led to a tuning system that bears his name. Aristotle (4th century bc) correctly suggested that a sound wave propagates in air through motion of the air—a hypothesis based more on philosophy than on experimental physics; however, he also incorrectly suggested that high frequencies propagate faster than low frequencies—an error that persisted for many centuries. Vitruvius, a Roman architectural engineer of the 1st century bc, determined the correct mechanism for the transmission of sound waves, and he contributed substantially to the acoustic design of theatres. In the 6th century ad, the Roman philosopher Boethius documented several ideas relating science to music, including a suggestion that the human perception of pitch is related to the physical property of frequency.

The modern study of waves and acoustics is said to have originated with Galileo Galilei (1564–1642), who elevated to the level of science the study of vibrations and the correlation between pitch and frequency of the sound source. His interest in sound was inspired in part by his father, who was a mathematician, musician, and composer of some repute. Following Galileo’s foundation work, progress in acoustics came relatively rapidly. The French mathematician Marin Mersenne studied the vibration of stretched strings; the results of these studies were summarized in the three Mersenne’s laws. Mersenne’s Harmonicorum Libri (1636) provided the basis for modern musical acoustics. Later in the century Robert Hooke, an English physicist, first produced a sound wave of known frequency, using a rotating cog wheel as a measuring device. Further developed in the 19th century by the French physicist Félix Savart, and now commonly called Savart’s disk, this device is often used today for demonstrations during physics lectures. In the late 17th and early 18th centuries, detailed studies of the relationship between frequency and pitch and of waves in stretched strings were carried out by the French physicist Joseph Sauveur, who provided a legacy of acoustic terms used to this day and first suggested the name acoustics for the study of sound.

One of the most interesting controversies in the history of acoustics involves the famous and often misinterpreted “bell-in-vacuum” experiment, which has become a staple of contemporary physics lecture demonstrations. In this experiment the air is pumped out of a jar in which a ringing bell is located; as air is pumped out, the sound of the bell diminishes until it becomes inaudible. As late as the 17th century many philosophers and scientists believed that sound propagated via invisible particles originating at the source of the sound and moving through space to affect the ear of the observer. The concept of sound as a wave directly challenged this view, but it was not established experimentally until the first bell-in-vacuum experiment was performed by Athanasius Kircher, a German scholar, who described it in his book Musurgia Universalis (1650). Even after pumping the air out of the jar, Kircher could still hear the bell, so he concluded incorrectly that air was not required to transmit sound. In fact, Kircher’s jar was not entirely free of air, probably because of inadequacy in his vacuum pump. By 1660 the Anglo-Irish scientist Robert Boyle had improved vacuum technology to the point where he could observe sound intensity decreasing virtually to zero as air was pumped out. Boyle then came to the correct conclusion that a medium such as air is required for transmission of sound waves. Although this conclusion is correct, as an explanation for the results of the bell-in-vacuum experiment it is misleading. Even with the mechanical pumps of today, the amount of air remaining in a vacuum jar is more than sufficient to transmit a sound wave. The real reason for a decrease in sound level upon pumping air out of the jar is that the bell is unable to transmit the sound vibrations efficiently to the less dense air remaining, and that air is likewise unable to transmit the sound efficiently to the glass jar. 
Thus, the real problem is one of an impedance mismatch between the air and the denser solid materials—and not the lack of a medium such as air, as is generally presented in textbooks. Nevertheless, despite the confusion regarding this experiment, it did aid in establishing sound as a wave rather than as particles.

Measuring the speed of sound

Once it was recognized that sound is in fact a wave, measurement of the speed of sound became a serious goal. In the 17th century, the French scientist and philosopher Pierre Gassendi made the earliest known attempt at measuring the speed of sound in air. Assuming correctly that the speed of light is effectively infinite compared with the speed of sound, Gassendi measured the time difference between spotting the flash of a gun and hearing its report over a long distance on a still day. Although the value he obtained was too high—about 478.4 metres per second (1,569.6 feet per second)—he correctly concluded that the speed of sound is independent of frequency. In the 1650s, Italian physicists Giovanni Alfonso Borelli and Vincenzo Viviani obtained the much better value of 350 metres per second using the same technique. Their compatriot G.L. Bianconi demonstrated in 1740 that the speed of sound in air increases with temperature. The earliest precise experimental value for the speed of sound, obtained at the Academy of Sciences in Paris in 1738, was 332 metres per second—incredibly close to the presently accepted value, considering the rudimentary nature of the measuring tools of the day. A more recent value for the speed of sound, 331.45 metres per second (1,087.4 feet per second), was obtained in 1942; it was amended in 1986 to 331.29 metres per second at 0° C (1,086.9 feet per second at 32° F).
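Bianconi's observation that the speed of sound in air increases with temperature can be illustrated with the standard ideal-gas approximation (a textbook formula, not part of the historical measurements described above), anchored to the 1986 value of 331.29 metres per second at 0 °C:

```python
import math

def speed_of_sound_air(temp_celsius):
    """Approximate speed of sound in dry air, in metres per second.

    Uses the common ideal-gas approximation
        v = v0 * sqrt(1 + T / 273.15),
    with v0 = 331.29 m/s, the accepted value at 0 degrees Celsius.
    """
    return 331.29 * math.sqrt(1 + temp_celsius / 273.15)

print(speed_of_sound_air(0))    # 331.29 m/s at the freezing point
print(speed_of_sound_air(20))   # roughly 343 m/s at room temperature
```

The square-root dependence explains why the increase with temperature is gentle: a 20 °C rise changes the speed by only about 3.5 percent.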

The speed of sound in water was first measured by Daniel Colladon, a Swiss physicist, in 1826. Strangely enough, his primary interest was not in measuring the speed of sound in water but in calculating water’s compressibility—a theoretical relationship between the speed of sound in a material and the material’s compressibility having been established previously. Colladon came up with a speed of 1,435 metres per second at 8° C; the presently accepted value interpolated at that temperature is about 1,439 metres per second.

Two approaches were employed to determine the velocity of sound in solids. In 1808 Jean-Baptiste Biot, a French physicist, conducted direct measurements of the speed of sound in 1,000 metres of iron pipe by comparing it with the speed of sound in air. A better measurement had earlier been carried out by a German, Ernst Florenz Friedrich Chladni, using analysis of the nodal pattern in standing-wave vibrations in long rods.

Modern advances

Simultaneous with these early studies in acoustics, theoreticians were developing the mathematical theory of waves required for the development of modern physics, including acoustics. In the early 18th century, the English mathematician Brook Taylor developed a mathematical theory of vibrating strings that agreed with previous experimental observations, but he was not able to deal with vibrating systems in general without the proper mathematical base. This was provided by Isaac Newton of England and Gottfried Wilhelm Leibniz of Germany, who, in pursuing other interests, independently developed the theory of calculus, which in turn allowed the derivation of the general wave equation by the French mathematician and scientist Jean Le Rond d’Alembert in the 1740s. The Swiss mathematicians Daniel Bernoulli and Leonhard Euler, as well as the Italian-French mathematician Joseph-Louis Lagrange, further applied the new equations of calculus to waves in strings and in the air. In the 19th century, Siméon-Denis Poisson of France extended these developments to stretched membranes, and the German mathematician Rudolf Friedrich Alfred Clebsch completed Poisson’s earlier studies. A German experimental physicist, August Kundt, developed a number of important techniques for investigating properties of sound waves.

One of the most important developments in the 19th century involved the theory of vibrating plates. In addition to his work on the speed of sound in metals, Chladni had earlier introduced a technique of observing standing-wave patterns on vibrating plates by sprinkling sand onto the plates—a demonstration commonly used today. Perhaps the most significant step in the theoretical explanation of these vibrations was provided in 1816 by the French mathematician Sophie Germain, whose explanation was of such elegance and sophistication that errors in her treatment of the problem were not recognized until some 35 years later, by the German physicist Gustav Robert Kirchhoff.

The analysis of a complex periodic wave into its spectral components was theoretically established early in the 19th century by Jean-Baptiste-Joseph Fourier of France and is now commonly referred to as the Fourier theorem. The German physicist Georg Simon Ohm first suggested that the ear is sensitive to these spectral components; his idea that the ear is sensitive to the amplitudes but not the phases of the harmonics of a complex tone is known as Ohm’s law of hearing (distinguishing it from the more famous Ohm’s law of electrical resistance).

Hermann von Helmholtz made substantial contributions to understanding the mechanisms of hearing and to the psychophysics of sound and music. His book On the Sensations of Tone As a Physiological Basis for the Theory of Music (1863) is one of the classics of acoustics. In addition, he constructed a set of resonators, covering much of the audio spectrum, which were used in the spectral analysis of musical tones. The Prussian physicist Karl Rudolph Koenig, an extremely clever and creative experimenter, designed many of the instruments used for research in hearing and music, including a frequency standard and the manometric flame. The flame-tube device, used to render standing sound waves “visible,” is still one of the most fascinating of physics classroom demonstrations. The English physical scientist John William Strutt, 3rd Baron Rayleigh, carried out an enormous variety of acoustic research; much of it was included in his two-volume treatise, The Theory of Sound, publication of which in 1877–78 is now thought to mark the beginning of modern acoustics. Much of Rayleigh’s work is still directly quoted in contemporary physics textbooks.

The study of ultrasonics was initiated by the American scientist John LeConte, who in the 1850s developed a technique for observing the existence of ultrasonic waves with a gas flame. This technique was later used by the British physicist John Tyndall for the detailed study of the properties of sound waves. The piezoelectric effect, a primary means of producing and sensing ultrasonic waves, was discovered by the French physical chemist Pierre Curie and his brother Jacques in 1880. Applications of ultrasonics, however, were not possible until the development in the early 20th century of the electronic oscillator and amplifier, which were used to drive the piezoelectric element.

Among 20th-century innovators were the American physicist Wallace Sabine, considered to be the originator of modern architectural acoustics, and the Hungarian-born American physicist Georg von Békésy, who carried out experimentation on the ear and hearing and validated the commonly accepted place theory of hearing first suggested by Helmholtz. Békésy’s book Experiments in Hearing, published in 1960, is the magnum opus of the modern theory of the ear.

Amplifying, recording, and reproducing

The earliest known attempt to amplify a sound wave was made by Athanasius Kircher, of “bell-in-vacuum” fame; Kircher designed a parabolic horn that could be used either as a hearing aid or as a voice amplifier. The amplification of body sounds became an important goal, and the first stethoscope was invented by a French physician, René Laënnec, in the early 19th century.

Attempts to record and reproduce sound waves originated with the invention in 1857 of a mechanical sound-recording device called the phonautograph by Édouard-Léon Scott de Martinville. The first device that could actually record and play back sounds was developed by the American inventor Thomas Alva Edison in 1877. Edison’s phonograph employed grooves of varying depth in a cylindrical sheet of foil, but a spiral groove on a flat rotating disk was introduced a decade later by the German-born American inventor Emil Berliner in an invention he called the gramophone. Much significant progress in recording and reproduction techniques was made during the first half of the 20th century, with the development of high-quality electromechanical transducers and linear electronic circuits. The most important improvement on the standard phonograph record in the second half of the century was the compact disc, which employed digital techniques developed in mid-century that substantially reduced noise and increased the fidelity and durability of the recording.

Architectural acoustics:

Reverberation time

Although architectural acoustics has been an integral part of the design of structures for at least 2,000 years, the subject was only placed on a firm scientific basis at the beginning of the 20th century by Wallace Sabine. Sabine pointed out that the most important quantity in determining the acoustic suitability of a room for a particular use is its reverberation time, and he provided a scientific basis by which the reverberation time can be determined or predicted.

When a source creates a sound wave in a room or auditorium, observers hear not only the sound wave propagating directly from the source but also the myriad reflections from the walls, floor, and ceiling. These latter form the reflected wave, or reverberant sound. After the source ceases, the reverberant sound can be heard for some time as it grows softer. The time required, after the sound source ceases, for the absolute intensity to drop by a factor of 10^6—or, equivalently, the time for the intensity level to drop by 60 decibels—is defined as the reverberation time (RT, sometimes referred to as RT60). Sabine recognized that the reverberation time of an auditorium is related to the volume of the auditorium and to the ability of the walls, ceiling, floor, and contents of the room to absorb sound. Using these assumptions, he set forth the empirical relationship through which the reverberation time could be determined: RT = 0.05V/A, where RT is the reverberation time in seconds, V is the volume of the room in cubic feet, and A is the total sound absorption of the room, measured by the unit sabin. The sabin is the absorption equivalent to one square foot of perfectly absorbing surface—for example, a one-square-foot hole in a wall or five square feet of surface that absorbs 20 percent of the sound striking it.
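Sabine's formula lends itself to a quick numerical sketch; the hall volume and absorption figures below are hypothetical, chosen only to illustrate the arithmetic:

```python
def reverberation_time(volume_ft3, absorption_sabins):
    """Sabine's empirical formula: RT = 0.05 * V / A.

    volume_ft3        -- room volume in cubic feet
    absorption_sabins -- total absorption in sabins
    Returns the reverberation time in seconds.
    """
    return 0.05 * volume_ft3 / absorption_sabins

# Hypothetical hall: 500,000 cubic feet with 12,500 sabins of absorption
print(reverberation_time(500_000, 12_500))  # 2.0 seconds
```

Per the definition above, a 100-square-foot surface that absorbs 20 percent of the sound striking it contributes 20 sabins to the total A.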

Both the design and the analysis of room acoustics begin with this equation. Using the equation and the absorption coefficients of the materials from which the walls are to be constructed, an approximation can be obtained for the way in which the room will function acoustically. Absorbers and reflectors, or some combination of the two, can then be used to modify the reverberation time and its frequency dependence, thereby achieving the most desirable characteristics for specific uses.

While there is no exact value of reverberation time that can be called ideal, there is a range of values deemed to be appropriate for each application. These vary with the size of the room, but the averages can be calculated and indicated by lines on a graph. The need for clarity in understanding speech dictates that rooms used for talking must have a reasonably short reverberation time. On the other hand, the full sound desirable in the performance of music of the Romantic era, such as Wagner operas or Mahler symphonies, requires a long reverberation time. Obtaining a clarity suitable for the light, rapid passages of Bach or Mozart requires an intermediate value of reverberation time. For playing back recordings on an audio system, the reverberation time should be short, so as not to create confusion with the reverberation time of the music in the hall where it was recorded.

Acoustic criteria

Many of the acoustic characteristics of rooms and auditoriums can be directly attributed to specific physically measurable properties. Because the music critic or performing artist uses a different vocabulary to describe these characteristics than does the physicist, it is helpful to survey some of the more important features of acoustics and correlate the two sets of descriptions.

“Liveness” refers directly to reverberation time. A live room has a long reverberation time and a dead room a short reverberation time. “Intimacy” refers to the feeling that listeners have of being physically close to the performing group. A room is generally judged intimate when the first reverberant sound reaches the listener within about 20 milliseconds of the direct sound. This condition is met easily in a small room, but it can also be achieved in large halls by the use of orchestral shells that partially enclose the performers. Another example is a canopy placed above a speaker in a large room such as a cathedral: this leads to both a strong and a quick first reverberation and thus to a sense of intimacy with the person speaking.

The amplitude of the reverberant sound relative to the direct sound is referred to as fullness. Clarity, the opposite of fullness, is achieved by reducing the amplitude of the reverberant sound. Fullness generally implies a long reverberation time, while clarity implies a shorter reverberation time. A fuller sound is generally required of Romantic music or performances by larger groups, while more clarity would be desirable in the performance of rapid passages from Bach or Mozart or in speech.

“Warmth” and “brilliance” refer to the reverberation time at low frequencies relative to that at higher frequencies. Above about 500 hertz, the reverberation time should be the same for all frequencies. At low frequencies, however, an increase in the reverberation time creates a warm sound, while a room whose reverberation time increases less at low frequencies is characterized as more brilliant.

“Texture” refers to the time interval between the arrival of the direct sound and the arrival of the first few reverberations. To obtain good texture, it is necessary that the first five reflections arrive at the observer within about 60 milliseconds of the direct sound. An important corollary to this requirement is that the intensity of the reverberations should decrease monotonically; there should be no unusually large late reflections.

“Blend” refers to the mixing of sounds from all the performers and their uniform distribution to the listeners. To achieve proper blend it is often necessary to place a collection of reflectors on the stage that distribute the sound randomly to all points in the audience.

Although the above features of auditorium acoustics apply to listeners, the idea of ensemble applies primarily to performers. In order to perform coherently, members of the ensemble must be able to hear one another. Reverberant sound cannot be heard by the members of an orchestra, for example, if the stage is too wide, has too high a ceiling, or has too much sound absorption on its sides.

Acoustic problems

Certain acoustic problems often result from improper design or from construction limitations. If large echoes are to be avoided, focusing of the sound wave must be avoided. Smooth, curved reflecting surfaces such as domes and curved walls act as focusing elements, creating large echoes and leading to bad texture. Improper blend results if sound from one part of the ensemble is focused to one section of the audience. In addition, parallel walls in an auditorium reflect sound back and forth, creating a rapid, repetitive pulsing of sound known as flutter echo and even leading to destructive interference of the sound wave. Resonances at certain frequencies should also be avoided by use of oblique walls.

Acoustic shadows, regions in which some frequency regions of sound are attenuated, can be caused by diffraction effects as the sound wave passes around large pillars and corners or underneath a low balcony. Large reflectors called clouds, suspended over the performers, can be of such a size as to reflect certain frequency regions while allowing others to pass, thus affecting the mixture of the sound.

External noise can be a serious problem for halls in urban areas or near airports or highways. One technique often used for avoiding external noise is to construct the auditorium as a smaller room within a larger room. Noise from air blowers or other mechanical vibrations can be reduced using techniques involving impedance and by isolating air handlers.

Good acoustic design must take account of all these possible problems while emphasizing the desired acoustic features. One of the problems in a large auditorium involves simply delivering an adequate amount of sound to the rear of the hall. As shown above, the intensity of a spherical sound wave decreases at a rate of six decibels for each doubling of distance from the source. If the auditorium is flat, a hemispherical wave will result. Absorption of the diffracted wave by the floor or audience near the bottom of the hemisphere causes the intensity level to fall off at roughly twice the theoretical rate, at about 12 decibels for each doubling of distance. Because of this absorption, the floors of an auditorium are generally sloped upward toward the rear.
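The falloff rates quoted above can be sketched numerically; the listener distances below are arbitrary examples:

```python
import math

# Spherical spreading: intensity falls as 1/r^2, so the level drops
# 20*log10(2), about 6 dB, for each doubling of distance from the source.
# As noted above, floor and audience absorption in a flat hall can roughly
# double this rate to about 12 dB per doubling.

def level_drop_db(r1, r2, db_per_doubling=6.02):
    """Level drop in decibels between distances r1 and r2 from the source."""
    return db_per_doubling * math.log2(r2 / r1)

# From 5 m to 40 m is three doublings of distance:
print(round(level_drop_db(5, 40), 1))      # ~18.1 dB in free-field conditions
print(round(level_drop_db(5, 40, 12), 1))  # ~36.0 dB with strong floor absorption
```

The second figure shows why, without a sloped floor, listeners at the rear of a flat hall receive so little direct sound.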

Additional Information

Acoustics is defined as the science that deals with the production, control, transmission, reception, and effects of sound (as defined by Merriam-Webster). Many people mistakenly think that acoustics is strictly musical or architectural in nature. While acoustics does include the study of musical instruments and architectural spaces, it also covers a vast range of topics, including noise control, SONAR for submarine navigation, ultrasound for medical imaging, thermoacoustic refrigeration, seismology, bioacoustics, and electroacoustic communication. Below is the so-called "Lindsay's Wheel of Acoustics," created by R. Bruce Lindsay in J. Acoust. Soc. Am. 36, 2242 (1964). The wheel describes the scope of acoustics starting from the four broad fields of Earth Sciences, Engineering, Life Sciences, and the Arts. The outer circle lists the various disciplines one may study to prepare for a career in acoustics; the inner circle lists the fields within acoustics that those disciplines naturally lead to.

Curiously enough, Lindsay (himself a physicist) didn't list physics specifically in the outer circle. This is likely because a background in physics provides the foundational knowledge necessary to study nearly any field of acoustics research. In fact, the Acoustical Society of America (ASA), founded in 1929, was one of the five original societies that helped form the American Institute of Physics in 1931. The ASA is composed of 13 main areas of study called Technical Committees (TCs):

* Acoustical Oceanography (AO)
* Animal Bioacoustics (AB)
* Architectural Acoustics (AA)
* Biomedical Ultrasound/Bioresponse to Vibration (BB)
* Engineering Acoustics (EA)
* Musical Acoustics (MU)
* Noise (NS)
* Physical Acoustics (PA)
* Psychological and Physiological Acoustics (PP)
* Signal Processing in Acoustics (SP)
* Speech Communication (SC)
* Structural Acoustics and Vibration (SA)
* Underwater Acoustics (UW).

Spectrum-of-Acoustics.jpg

WheelOfAcoustics.jpg

#422 Re: Dark Discussions at Cafe Infinity » crème de la crème » 2026-02-12 19:29:42

2432) Joshua Lederberg

Gist:

Work

It was long thought that bacteria multiply only by dividing, so that all bacteria have the same genetic make-up. Joshua Lederberg and Edward Tatum demonstrated in 1946 that bacterial genes can also change in a way similar to the sexual reproduction seen in more complex organisms: bacteria can go through a phase in which two cells exchange genetic material by passing pieces of DNA across a bridge-like connection. Lederberg also demonstrated the phenomenon known as transduction, in which DNA is transferred between bacteria via bacteriophages.

Summary

Joshua Lederberg (born May 23, 1925, Montclair, N.J., U.S.—died Feb. 2, 2008, New York, N.Y.) was an American geneticist and a pioneer in the field of bacterial genetics. He shared the 1958 Nobel Prize for Physiology or Medicine (with George W. Beadle and Edward L. Tatum) for discovering the mechanisms of genetic recombination in bacteria.

Lederberg studied under Tatum at Yale (Ph.D., 1948) and taught at the University of Wisconsin (1947–59), where he established a department of medical genetics. In 1959 he joined the faculty of the Stanford Medical School, serving as director of the Kennedy Laboratories of Molecular Medicine there from 1962 to 1978, when he moved to New York City to become president of Rockefeller University. He held that post until 1990.

With Tatum he published “Gene Recombination in Escherichia coli” (1946), in which he reported that mixing two different strains of a bacterium resulted in genetic recombination between them and thus in a new, crossbred strain of the bacterium. Scientists had previously thought that bacteria reproduced only asexually—i.e., by cells splitting in two; Lederberg and Tatum showed that they could also reproduce sexually, and that bacterial genetic systems are similar to those of multicellular organisms.

While biologists who had not previously believed that “sex” existed in bacteria such as E. coli were still confirming Lederberg’s discovery, he and his student Norton D. Zinder reported another and equally surprising finding. In the paper “Genetic Exchange in Salmonella” (1952), they revealed that certain bacteriophages (bacteria-infecting viruses) were capable of carrying a bacterial gene from one bacterium to another, a phenomenon they termed transduction.

Lederberg’s discoveries greatly increased the utility of bacteria as a tool in genetics research, and they soon became as important as the fruit fly Drosophila and the bread mold Neurospora. Moreover, his discovery of transduction provided the first hint that genes could be inserted into cells. The realization that the genetic material of living things could be directly manipulated eventually bore fruit in the field of genetic engineering, or recombinant DNA technology.

At the dawn of space exploration, Lederberg coined the term exobiology to describe the scientific study of life outside Earth’s atmosphere. He later served as a consultant to NASA’s Viking mission to Mars.

Details

Joshua Lederberg (May 23, 1925 – February 2, 2008) was an American molecular biologist known for his work in microbial genetics, artificial intelligence, and the United States space program. He was 33 years old when he won the 1958 Nobel Prize in Physiology or Medicine for discovering that bacteria can mate and exchange genes (bacterial conjugation). He shared the prize with Edward Tatum and George Beadle, who won for their work with genetics.

In addition to his contributions to biology, Lederberg did extensive research in artificial intelligence. This included work in the NASA experimental programs seeking life on Mars and the chemistry expert system Dendral.

Early life and education

Lederberg was born in Montclair, New Jersey, to a Jewish family, son of Esther Goldenbaum Schulman Lederberg and Rabbi Zvi Hirsch Lederberg, in 1925, and moved to Washington Heights, Manhattan as an infant. He had two younger brothers. Lederberg graduated from Stuyvesant High School in New York City at the age of 15 in 1941. After graduation, he was allowed lab space as part of the American Institute Science Laboratory, a forerunner of the Westinghouse Science Talent Search. He enrolled in Columbia University in 1941, majoring in zoology. Under the mentorship of Francis J. Ryan, he conducted biochemical and genetic studies on the bread mold Neurospora crassa. Intending to receive his MD and fulfill his military service obligations, Lederberg worked as a hospital corpsman during 1943 in the clinical pathology laboratory at St. Albans Naval Hospital, where he examined sailors' blood and stool samples for malaria. He went on to receive his undergraduate degree in 1944.

Bacterial genetics

Joshua Lederberg began medical studies at Columbia's College of Physicians and Surgeons while continuing to perform experiments. Inspired by Oswald Avery's discovery of the importance of DNA, Lederberg began to investigate his hypothesis that, contrary to prevailing opinion, bacteria did not simply pass down exact copies of genetic information, making all cells in a lineage essentially clones. After making little progress at Columbia, Lederberg wrote to Edward Tatum, Ryan's post-doctoral mentor, proposing a collaboration. In 1946 and 1947, Lederberg took a leave of absence to study under the mentorship of Tatum at Yale University. Lederberg and Tatum showed that the bacterium Escherichia coli entered a sexual phase during which it could share genetic information through bacterial conjugation. With this discovery and some mapping of the E. coli chromosome, Lederberg was able to receive his Ph.D. from Yale University in 1947. Joshua married Esther Miriam Zimmer (herself a student of Edward Tatum) on December 13, 1946.

Instead of returning to Columbia to finish his medical degree, Lederberg chose to accept an offer of an assistant professorship in genetics at the University of Wisconsin–Madison. His wife Esther Lederberg went with him to Wisconsin. She received her doctorate there in 1950.

Joshua Lederberg and Norton Zinder showed in 1951 that genetic material could be transferred from one strain of the bacterium Salmonella typhimurium to another using viral material as an intermediary step. This process is called transduction. In 1956, M. Laurance Morse, Esther Lederberg and Joshua Lederberg also discovered specialized transduction. The research in specialized transduction focused upon lambda phage infection of E. coli. Transduction and specialized transduction explained how bacteria of different species could gain resistance to the same antibiotic very quickly.

During her time in Joshua Lederberg's laboratory, Esther Lederberg also discovered fertility factor F, later publishing with Joshua Lederberg and Luigi Luca Cavalli-Sforza. In 1956, the Society of Illinois Bacteriologists simultaneously awarded Joshua Lederberg and Esther Lederberg the Pasteur Medal, for "their outstanding contributions to the fields of microbiology and genetics".

In 1957, Joshua Lederberg founded the Department of Medical Genetics at the University of Wisconsin–Madison. He held visiting professorships in bacteriology at the University of California, Berkeley (summer 1950) and the University of Melbourne (1957). Also in 1957, he was elected to the National Academy of Sciences.

Sir Gustav Nossal views Lederberg as his mentor, describing him as "lightning fast" and "loving a robust debate."

Post Nobel Prize research

In 1958, Joshua Lederberg received the Nobel Prize and moved to Stanford University, where he was the founder and chairman of the Department of Genetics. He collaborated with Frank Macfarlane Burnet to study viral antibodies.

With the launching of Sputnik in 1957, Lederberg became concerned about the biological impact of space exploration. In a letter to the National Academy of Sciences, he outlined his concerns that extraterrestrial microbes might gain entry to Earth aboard spacecraft, causing catastrophic diseases. He also argued that, conversely, microbial contamination of man-made satellites and probes might obscure the search for extraterrestrial life. He advised quarantine for returning astronauts and equipment, and sterilization of equipment prior to launch. Teaming up with Carl Sagan, his public advocacy for what he termed exobiology helped expand the role of biology in NASA.

Lederberg was elected to the American Academy of Arts and Sciences in 1959 and the American Philosophical Society in 1960.

In the 1960s, he collaborated with Edward Feigenbaum in Stanford's computer science department to develop DENDRAL.

In 1978, he became president of Rockefeller University, a post he held until stepping down in 1990, when he became professor emeritus of molecular genetics and informatics there, reflecting his extensive research and publications in these disciplines.

Throughout his career, Lederberg was active as a scientific advisor to the U.S. government. Starting in 1950, he was a member of various panels of the Presidential Science Advisory Committee. In 1979, he became a member of the U.S. Defense Science Board and the chairman of President Jimmy Carter's President's Cancer Panel. In 1989, he received the National Medal of Science for his contributions to the scientific world. In 1994, he headed the Department of Defense's Task Force on Persian Gulf War Health Effects, which investigated Gulf War Syndrome.

During a 1986 fact-finding mission concerning the 1979 Soviet anthrax epidemic that killed 66 people in the city of Sverdlovsk (now Yekaterinburg, Russia), Lederberg sided with the Soviet explanation that the outbreak had resulted from animal-to-human transmission, stating, "Wild rumors do spread around every epidemic" and "The current Soviet account is very likely to be true." After the fall of the Soviet Union, however, US investigations in the early 1990s confirmed that the outbreak was caused by an aerosol of anthrax pathogen released from a nearby military facility; the lab leak is one of the deadliest ever documented.

lederberg-13126-portrait-medium.jpg

#423 Re: This is Cool » Miscellany » 2026-02-12 18:55:43

2494) Metal Detector

Gist

Metal detectors use electromagnetic fields to find hidden metal, with major uses in security (airports, events for weapons), hobby/recreation (treasure hunting, coin collecting), industry (quality control in food/pharma, construction for pipes/wires), and archaeology/recovery (locating artifacts, landmines). They range from handheld wands to large walk-through arches, alerting users with sound or visual cues when metal is detected.

Metal detectors detect any conductive metal by sensing disruptions in their electromagnetic field. They can find everything from weapons and coins to jewelry and tools, and can distinguish between easier-to-detect ferrous (iron-based) metals and harder-to-detect non-ferrous metals (such as aluminum and copper), with sensitivity adjusted for different uses such as security screening or treasure hunting.

Summary

A metal detector is an instrument that detects the nearby presence of metal. Metal detectors are useful for finding metal objects on the surface, underground, and under water. A metal detector typically consists of a control box, an adjustable shaft, and a variable-shaped pickup coil. When the coil nears metal, the control box signals its presence with a tone, numerical reading, light, or needle movement. Signal intensity typically increases with proximity and depends on the metal's size and composition. A common type is the stationary "walk-through" metal detector used at access points in prisons, courthouses, airports, and psychiatric hospitals to detect concealed metal weapons on a person's body.

The simplest form of a metal detector consists of an oscillator producing an alternating current that passes through a coil producing an alternating magnetic field. If a piece of electrically conductive metal is close to the coil, eddy currents will be induced (inductive sensor) in the metal, and this produces a magnetic field of its own. If another coil is used to measure the magnetic field (acting as a magnetometer), the change in the magnetic field due to the metallic object can be detected.

The first industrial metal detectors came out in the 1960s, and were used for finding minerals, among other things. Metal detectors help find land mines. They also detect weapons like knives and guns, which is important for airport security. People most commonly use them to search for buried objects, like in archaeology and treasure hunting. Metal detectors are also used to detect foreign bodies in food, and in the construction industry to detect steel reinforcing bars in concrete and pipes and wires buried in walls and floors.

Details

Mention the words metal detector and you'll get completely different reactions from different people. For instance, some people think of combing a beach in search of coins or buried treasure. Other people think of airport security, or the handheld scanners at a concert or sporting event.

The fact is that all of these scenarios are valid. Metal-detector technology is a huge part of our lives, with a range of uses that spans from leisure to work to safety. The metal detectors in airports, office buildings, schools, government agencies and prisons help ensure that no one is bringing a weapon onto the premises. Consumer-oriented metal detectors provide millions of people around the world with an opportunity to discover hidden treasures (along with lots of junk).

In this article, you'll learn about metal detectors and the various technologies they use. Our focus will be on consumer metal detectors, but most of the information also applies to mounted detection systems, like the ones used in airports, as well as handheld security scanners.

Anatomy of a Metal Detector

A typical metal detector is light-weight and consists of just a few parts:

* Stabilizer (optional) - used to keep the unit steady as you sweep it back and forth
* Control box - contains the circuitry, controls, speaker, batteries and the microprocessor
* Shaft - connects the control box and the coil; often adjustable so you can set it at a comfortable level for your height
* Search coil - the part that actually senses the metal; also known as the "search head," "loop" or "antenna"

Most systems also have a jack for connecting headphones, and some have the control box below the shaft and a small display unit above.

Operating a metal detector is simple. Once you turn the unit on, you move slowly over the area you wish to search. In most cases, you sweep the coil (search head) back and forth over the ground in front of you. When you pass it over a target object, an audible signal occurs. More advanced metal detectors provide displays that pinpoint the type of metal it has detected and how deep in the ground the target object is located.

Metal detectors use one of three technologies:

* Very low frequency (VLF)
* Pulse induction (PI)
* Beat-frequency oscillation (BFO)

In the following sections, we will look at each of these technologies in detail to see how they work.

VLF Technology

Very low frequency (VLF), also known as induction balance, is probably the most popular detector technology in use today. In a VLF metal detector, there are two distinct coils:

* Transmitter coil - This is the outer coil loop. Within it is a coil of wire. Electricity is sent along this wire, first in one direction and then in the other, thousands of times each second. The number of times that the current's direction switches each second establishes the frequency of the unit.
* Receiver coil - This inner coil loop contains another coil of wire. This wire acts as an antenna to pick up and amplify frequencies coming from target objects in the ground.

The current moving through the transmitter coil creates an electromagnetic field, much as in an electric motor. The magnetic field is oriented perpendicular to the plane of the coil. Each time the current changes direction, the polarity of the magnetic field reverses. This means that if the coil of wire is parallel to the ground, the magnetic field is constantly pushing down into the ground and then pulling back out of it.

As the magnetic field pulses back and forth into the ground, it interacts with any conductive objects it encounters, causing them to generate weak magnetic fields of their own. The polarity of the object's magnetic field is directly opposite the transmitter coil's magnetic field. If the transmitter coil's field is pulsing downward, the object's field is pulsing upward.

The receiver coil is completely shielded from the magnetic field generated by the transmitter coil. However, it is not shielded from magnetic fields coming from objects in the ground. Therefore, when the receiver coil passes over an object giving off a magnetic field, a small electric current travels through the coil. This current oscillates at the same frequency as the object's magnetic field. The coil amplifies the frequency and sends it to the control box of the metal detector, where sensors analyze the signal.

The metal detector can determine approximately how deep the object is buried based on the strength of the magnetic field it generates. The closer to the surface an object is, the stronger the magnetic field picked up by the receiver coil and the stronger the electric current generated. The farther below the surface, the weaker the field. Beyond a certain depth, the object's field is so weak at the surface that it is undetectable by the receiver coil.

In the next section, we'll see how a VLF metal detector distinguishes between different types of metals.

VLF Phase Shifting

How does a VLF metal detector distinguish between different metals? It relies on a phenomenon known as phase shifting. Phase shift is the difference in timing between the transmitter coil's frequency and the frequency of the target object. This discrepancy can result from a couple of things:

* Inductance - An object that conducts electricity easily (is inductive) is slow to react to changes in the current. You can think of inductance as a deep river: Change the amount of water flowing into the river and it takes some time before you see a difference.
* Resistance - An object that does not conduct electricity easily (is resistive) is quick to react to changes in the current. Using our water analogy, resistance would be a small, shallow stream: Change the amount of water flowing into the stream and you notice a drop in the water level very quickly.

Basically, this means that an object with high inductance is going to have a larger phase shift, because it takes longer to alter its magnetic field. An object with high resistance is going to have a smaller phase shift.
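The inductance-versus-resistance idea can be sketched with a simple circuit model. This is only an illustration: it treats the target as a series R-L loop driven at the detector's frequency, and the component values below are hypothetical:

```python
import math

# Illustrative model of the phase-shift idea, NOT a real detector algorithm:
# for a target modeled as a simple series R-L loop driven at the detector's
# frequency f, the current lags the driving voltage by atan(2*pi*f*L / R).
# High inductance (good conductor) -> large phase shift;
# high resistance (poor conductor) -> small phase shift.
# The R and L values below are hypothetical, chosen only for illustration.

def phase_shift_deg(f_hz, inductance_h, resistance_ohm):
    """Phase lag in degrees for a series R-L loop at frequency f_hz."""
    return math.degrees(math.atan(2 * math.pi * f_hz * inductance_h / resistance_ohm))

f = 7_000  # an assumed VLF operating frequency, in hertz

# A highly conductive (inductance-dominated) target shows a large shift:
print(round(phase_shift_deg(f, 50e-6, 1.0), 1))
# A resistive target shows a much smaller shift:
print(round(phase_shift_deg(f, 50e-6, 50.0), 1))
```

A discriminator, in this picture, is just a threshold or window on the computed phase angle, which is what the notching feature described below adjusts.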

Phase shift provides VLF-based metal detectors with a capability called discrimination. Since most metals vary in both inductance and resistance, a VLF metal detector examines the amount of phase shift, using a pair of electronic circuits called phase demodulators, and compares it with the average for a particular type of metal. The detector then notifies you with an audible tone or visual indicator as to what range of metals the object is likely to be in.

Many metal detectors even allow you to filter out (discriminate) objects above a certain phase-shift level. Usually, you can set the level of phase shift that is filtered, generally by adjusting a knob that increases or decreases the threshold. Another discrimination feature of VLF detectors is called notching. Essentially, a notch is a discrimination filter for a particular segment of phase shift. The detector will not only alert you to objects above this segment, as normal discrimination would, but also to objects below it.

Advanced detectors even allow you to program multiple notches. For example, you could set the detector to disregard objects that have a phase shift comparable to a soda-can tab or a small nail. The disadvantage of discrimination and notching is that many valuable items might be filtered out because their phase shift is similar to that of "junk." But, if you know that you are looking for a specific type of object, these features can be extremely useful.

PI Technology

A less common form of metal detector is based on pulse induction (PI). Unlike VLF, PI systems may use a single coil as both transmitter and receiver, or they may have two or even three coils working together. This technology sends powerful, short bursts (pulses) of current through a coil of wire. Each pulse generates a brief magnetic field. When the pulse ends, the magnetic field reverses polarity and collapses very suddenly, resulting in a sharp electrical spike. This spike lasts a few microseconds (millionths of a second) and causes another current to run through the coil. This current is called the reflected pulse and is extremely short, lasting only about 30 microseconds. Another pulse is then sent and the process repeats. A typical PI-based metal detector sends about 100 pulses per second, but the number can vary greatly based on the manufacturer and model, ranging from a couple of dozen pulses per second to over a thousand.

If the metal detector is over a metal object, the pulse creates an opposite magnetic field in the object. When the pulse's magnetic field collapses, causing the reflected pulse, the magnetic field of the object makes it take longer for the reflected pulse to completely disappear. This process works something like echoes: If you yell in a room with only a few hard surfaces, you probably hear only a very brief echo, or you may not hear one at all; but if you yell in a room with a lot of hard surfaces, the echo lasts longer. In a PI metal detector, the magnetic fields from target objects add their "echo" to the reflected pulse, making it last a fraction longer than it would without them.

A sampling circuit in the metal detector is set to monitor the length of the reflected pulse. By comparing it to the expected length, the circuit can determine if another magnetic field has caused the reflected pulse to take longer to decay. If the decay of the reflected pulse takes more than a few microseconds longer than normal, there is probably a metal object interfering with it.
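The sampling logic described above can be sketched as a comparison of decay durations. The decay constants and margin below are hypothetical, chosen only to illustrate the mechanism:

```python
import math

# Sketch of the pulse-induction sampling idea described above: measure how
# long the reflected pulse takes to die away, and flag a target when the
# decay lasts a few microseconds longer than the no-target baseline.
# All decay constants here are hypothetical.

def decay_duration_us(tau_us, start_amplitude=1.0, threshold=0.01):
    """Microseconds for an exponential decay to fall below the threshold."""
    return tau_us * math.log(start_amplitude / threshold)

BASELINE_TAU_US = 6.0  # assumed decay constant of the coil with no target

def target_present(measured_tau_us, margin_us=3.0):
    """True if the measured decay outlasts the baseline by more than the margin."""
    return decay_duration_us(measured_tau_us) > decay_duration_us(BASELINE_TAU_US) + margin_us

print(target_present(6.0))   # False: decay matches the no-target baseline
print(target_present(8.5))   # True: a conductive target stretches the decay
```
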

The sampling circuit sends the tiny, weak signals that it monitors to a device called an integrator. The integrator reads the signals from the sampling circuit, amplifying and converting them to direct current (DC). The direct current's voltage is connected to an audio circuit, where it is changed into a tone that the metal detector uses to indicate that a target object has been found.

PI-based detectors are not very good at discrimination because the reflected pulse lengths of various metals are not easily separated. However, they are useful in many situations in which VLF-based metal detectors would have difficulty, such as in areas that have highly conductive material in the soil or general environment. A good example of such a situation is salt-water exploration. Also, PI-based systems can often detect metal much deeper in the ground than other systems.

BFO Technology

The most basic way to detect metal uses a technology called beat-frequency oscillator (BFO). In a BFO system, there are two coils of wire. One large coil is in the search head, and a smaller coil is located inside the control box. Each coil is connected to an oscillator that generates thousands of pulses of current per second. The frequency of these pulses is slightly offset between the two coils.

As the pulses travel through each coil, the coil generates radio waves. A tiny receiver within the control box picks up the radio waves and creates an audible series of tones (beats) based on the difference between the frequencies.
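The beat principle boils down to the difference between the two oscillator frequencies. Here is a minimal Python sketch; the specific frequencies are made-up example values, not those of any particular detector:

```python
# Sketch of the beat principle in a BFO detector (example frequencies only).
# Two oscillators run at slightly offset frequencies; the receiver produces
# a tone at their difference.

SEARCH_COIL_HZ = 100_000   # assumed oscillator driving the search-head coil
REFERENCE_HZ   = 100_550   # assumed oscillator for the coil in the control box

def beat_frequency(f1, f2):
    """Audible beat frequency heard by the receiver."""
    return abs(f1 - f2)

# With no metal nearby, a steady beat tone:
print(beat_frequency(SEARCH_COIL_HZ, REFERENCE_HZ))        # 550

# Metal near the search head shifts its oscillator, changing the tone:
print(beat_frequency(SEARCH_COIL_HZ - 120, REFERENCE_HZ))  # 670
```

A change in the tone, rather than the tone itself, is what tells the operator the search-head frequency has been pulled off its normal value.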

If the coil in the search head passes over a metal object, the magnetic field caused by the current flowing through the coil creates a magnetic field around the object. The object's magnetic field interferes with the frequency of the radio waves generated by the search-head coil. As the frequency deviates from the frequency of the coil in the control box, the audible beats change in duration and tone.

The simplicity of BFO-based systems allows them to be manufactured and sold for a very low cost. But these detectors do not provide the level of control and accuracy provided by VLF or PI systems.

Buried Treasure

Metal detectors are great for finding buried objects. But typically, the object must be within a foot or so of the surface for the detector to find it. Most detectors have a normal maximum depth somewhere between 8 and 12 inches (20 and 30 centimeters). The exact depth varies based on a number of factors:

* The type of metal detector - The technology used for detection is a major factor in the capability of the detector. Also, there are variations and additional features that differentiate detectors that use the same technology. For example, some VLF detectors use higher frequencies than others, while some provide larger or smaller coils. Plus, the sensor and amplification technology can vary between manufacturers and even between models offered by the same manufacturer.

* The type of metal in the object - Some metals, such as iron, create stronger magnetic fields than others.
* The size of the object - A dime is much harder to detect at deep levels than a quarter.
* The makeup of the soil - Certain minerals are natural conductors and can seriously interfere with the metal detector.
* The object's halo - When certain types of metal objects have been in the ground for a long time, they can actually increase the conductivity of the soil around them.
* Interference from other objects - This can be items in the ground, such as pipes or cables, or items above ground, like power lines.

Hobbyist metal detecting is a fascinating world with several sub-groups. Here are some of the more popular activities:

* Coin shooting - looking for coins after a major event, such as a ball game or concert, or just searching for old coins in general
* Prospecting - searching for valuable metals, such as gold nuggets
* Relic hunting - searching for items of historical value, such as weapons used in the U.S. Civil War
* Treasure hunting - researching and trying to find caches of gold, silver or anything else rumored to have been hidden somewhere

Many metal-detector enthusiasts join local or national clubs that provide tips and tricks for hunting. Some of these clubs even sponsor organized treasure hunts or other outings for their members.

Detective Work

In addition to recreational use, metal detectors serve a wide range of utilitarian functions. Mounted detectors usually use some variation of PI technology, while many of the basic handheld scanners are BFO-based.

Some nonrecreational applications for metal detectors are:

* Airport security - screen people before allowing access to the boarding area and the plane (see How Airport Security Works)
* Building security - screen people entering a particular building, such as a school, office or prison
* Event security - screen people entering a sporting event, concert or other large gathering of people
* Item recovery - help someone search for a lost item, such as a piece of jewelry
* Archaeological exploration - find metallic items of historical significance
* Geological research - detect the metallic composition of soil or rock formations

Manufacturers of metal detectors are constantly tuning the process to make their products more accurate, more sensitive and more versatile. On the next page, you will find links to the manufacturers, as well as clubs and more information on metal detecting as a hobby.

Additional Information

A metal detector is an electronic device that finds metal objects nearby. These devices are very useful for discovering metal pieces hidden inside other objects. They can also find metal items buried underground. Most metal detectors have a handheld part with a special sensor. You can sweep this sensor over the ground or other things. If the sensor gets close to metal, you will hear a changing sound in earphones. Sometimes, a needle on a display will move. The closer the metal is, the louder the sound or the higher the needle goes.

Another common type of metal detector is a "walk-through" scanner. These are used for security checks at places like airports. They help find hidden metal weapons on a person's body.

The basic idea behind a metal detector is simple. It uses an electronic circuit to create an alternating current. This current goes through a coil, which then makes an invisible magnetic field. If a piece of metal that conducts electricity comes close to this coil, tiny electric currents called eddy currents are created in the metal. These eddy currents then make their own magnetic field. The metal detector has another coil that measures these magnetic fields. When the magnetic field changes because of a metal object, the detector senses it.
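The transmit-coil/receive-coil idea above can be caricatured in code. This is a deliberately crude toy model, with every constant invented for illustration (real eddy-current response depends on coil geometry, frequency, and target shape in ways this ignores):

```python
# Toy model of the two-coil idea (all numbers illustrative).
# The receive coil sees the transmit field plus the small field from eddy
# currents in a nearby conductor; detection is a fractional change in the
# measured amplitude relative to the no-target baseline.

BASELINE = 1.0            # normalized receive-coil amplitude with no target
THRESHOLD = 0.02          # minimum fractional change counted as a hit

def eddy_response(conductivity, distance_m):
    """Assumed toy response: stronger for more conductive, closer targets."""
    return 0.0001 * conductivity / (distance_m ** 3)

def detects(conductivity, distance_m):
    change = eddy_response(conductivity, distance_m) / BASELINE
    return change > THRESHOLD

print(detects(conductivity=5.0, distance_m=0.5))  # too far away: False
print(detects(conductivity=5.0, distance_m=0.1))  # close target: True
```

The sharp falloff with distance in the toy formula echoes why real detectors only reach a foot or so into the ground.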

History and Uses of Metal Detectors

The first industrial metal detectors were made in the 1960s. They quickly became popular for finding minerals and for other industrial jobs.

Finding Hidden Treasures and More

Metal detectors have many interesting uses today:

* Finding landmines: They help locate dangerous land mines left behind after wars.
* Security: They are used to find weapons like knives and guns, especially at airports and other secure locations.
* Exploring the Earth: Scientists use them for geophysical prospecting, which means exploring the Earth's surface for minerals.
* Archaeology: Archaeologists use them to find old artifacts buried underground.
* Treasure hunting: Many people use metal detectors as a hobby to search for lost coins, jewelry, and other treasures.

Metal Detectors in Everyday Life

Metal detectors are also used in other important ways:

* Food safety: They help find foreign objects, like small pieces of metal, in food products. This keeps our food safe to eat.
* Construction: In the construction industry, they find steel reinforcing bars inside concrete. They can also locate pipes and wires hidden in walls and floors.

#424 This is Cool » Cation » 2026-02-12 18:13:42

Jai Ganesh
Replies: 0

Cation

Gist

A cation is an atom or molecule with a net positive electrical charge, formed when a neutral atom loses one or more negatively charged electrons, resulting in more protons than electrons. Most metals readily form cations, such as sodium (Na⁺) or calcium (Ca²⁺), by losing electrons to achieve a stable electron configuration. 

A cation is a positively charged ion, formed when a metal atom loses one or more electrons, resulting in more protons than electrons. Cations possess a net positive charge and are attracted to the cathode in an electric field. Common examples include sodium, calcium, and aluminum.

Cations are positively charged ions, formed when a metal atom loses one or more electrons, leaving the ion with fewer electrons than protons.

Summary

Cations are positively charged ions that result from an atom or group of atoms losing one or more valence electrons. The term "cation" is derived from "cathode ion," reflecting their attraction to the cathode in an electrolytic solution. Common examples of cations include sodium (Na⁺) and calcium (Ca²⁺), which typically form when alkali and alkaline earth metals lose their valence electrons to achieve a more stable electron configuration. The formation of cations is crucial to the creation of ionic compounds, such as sodium chloride (NaCl), where cations bond with negatively charged ions, or anions.

The electronic structure of atoms, including the arrangement of electrons in shells and orbitals, plays a significant role in determining the behavior of cations. Elements with similar valence electron configurations often exhibit similar chemical properties, a principle that underpins the organization of the periodic table. Naming conventions for cations are standardized by the International Union of Pure and Applied Chemistry (IUPAC), which includes the use of charge notation for monatomic and polyatomic cations. Understanding cations is fundamental to the study of chemistry, influencing reactions, bonding, and the properties of various compounds. 

Details

A cation is a type of ion that has a positive electric charge. This means it has fewer electrons than protons. The opposite of a cation is an anion, which has a negative charge.
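The proton/electron bookkeeping can be made concrete with a tiny sketch (a minimal illustration, not a chemistry library):

```python
# Minimal sketch: an ion's net charge is protons minus electrons.

def net_charge(protons, electrons):
    return protons - electrons

def ion_kind(protons, electrons):
    """Classify an atom or ion by its net charge."""
    q = net_charge(protons, electrons)
    if q > 0:
        return "cation"
    if q < 0:
        return "anion"
    return "neutral"

# Sodium (11 protons) losing one electron -> Na+ cation:
print(ion_kind(11, 10), net_charge(11, 10))   # cation 1
# Calcium (20 protons) losing two electrons -> Ca2+ cation:
print(ion_kind(20, 18), net_charge(20, 18))   # cation 2
# Chlorine (17 protons) gaining one electron -> Cl- anion:
print(ion_kind(17, 18), net_charge(17, 18))   # anion -1
```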

Cations can have only one atom (monatomic cations) or be made of multiple atoms together (polyatomic cations). Most metals form monatomic cations, while polyatomic cations are rarer.

Examples

Most metals make one or more monatomic cations. Alkali metals like sodium can lose one electron to make cations like Na+. Alkaline earth metals like calcium lose two electrons to make cations like Ca2+. These are the only ions these elements form, and so are just named after the element: the sodium cation Na+ is just called "sodium" in compounds like sodium chloride.

Transition metals and post-transition metals can make more than one type of cation: iron forms two cations, Fe2+ and Fe3+. The charge on transition metal cations is usually between +1 (such as silver in silver iodide) and +4 (such as titanium in titanium tetrachloride).

Because transition metal cations can have more than one charge, the charge (formally, the oxidation state) is included in the name of the cation in compounds using Roman numerals. Troilite is made of Fe2+ and the sulfide anion S2−, so it is called iron(II) sulfide. Hematite is made of Fe3+ and the oxide anion O2−, so it is called iron(III) oxide. Sometimes in older sources these cations have specific names: another name for iron(II) is the "ferrous" cation, while iron(III) is the "ferric" cation.
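The Roman-numeral convention is mechanical enough to capture in a hypothetical helper function (the function and its name are invented here for illustration; only the naming pattern itself comes from the convention above):

```python
# Hypothetical helper illustrating the Stock (Roman-numeral) naming
# convention for cations with more than one possible charge.

ROMAN = {1: "I", 2: "II", 3: "III", 4: "IV"}

def cation_name(metal, charge):
    """Format a metal cation name with its oxidation state, e.g. iron(II)."""
    return f"{metal}({ROMAN[charge]})"

print(cation_name("iron", 2))      # iron(II), the "ferrous" cation
print(cation_name("iron", 3))      # iron(III), the "ferric" cation
print(cation_name("titanium", 4))  # titanium(IV)
```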

Ammonium is an example of a polyatomic cation. It is made of a nitrogen atom connected to four hydrogen atoms. The formula for ammonium is written NH4+. Ammonium is made when an acid gives a hydrogen ion to a molecule of ammonia.

Additional Information

Frequently Asked Questions

What is an example of a cation?

Calcium in its most common state is a cation. It has a 2+ charge and thus has two more protons than electrons. Calcium is an important cation in the human body and its positive charge is needed to complete muscle contractions.

What is the difference between a cation and an anion?

A cation and an anion have a different net charge. Cations have more protons than electrons, and thus have an overall positive charge. Anions have more electrons than protons, and thus have an overall negative charge.

What is a cation?

A cation is any ion that is positively charged. This results in an atom or molecule which has a net positive charge due to the greater number of protons than electrons.

What are cations and anions?

Cations are atoms or molecules that have a positive charge because they have more protons than electrons. Anions are atoms or molecules with a negative charge because they have more electrons than protons.

#425 Dark Discussions at Cafe Infinity » Come Quotes - II » 2026-02-12 17:33:54

Jai Ganesh
Replies: 0

Come Quotes - II

1. You have to dream before your dreams can come true. - A. P. J. Abdul Kalam

2. Everyone's dream can come true if you just stick to it and work hard. - Serena Williams

3. Hope smiles from the threshold of the year to come, whispering, 'It will be happier.' - Alfred Lord Tennyson

4. Only a man who knows what it is like to be defeated can reach down to the bottom of his soul and come up with the extra ounce of power it takes to win when the match is even. - Muhammad Ali

5. To enjoy good health, to bring true happiness to one's family, to bring peace to all, one must first discipline and control one's own mind. If a man can control his mind he can find the way to Enlightenment, and all wisdom and virtue will naturally come to him. - Buddha

6. I will prepare and some day my chance will come. - Abraham Lincoln

7. I've always believed that if you put in the work, the results will come. - Michael Jordan

8. Death does not concern us, because as long as we exist, death is not here. And when it does come, we no longer exist. - Epicurus.
