Fracture
Gist
A fracture is a partial or complete break in the continuity of a bone, caused by trauma (falls, impacts, sports), repetitive stress (stress fractures), or underlying conditions (osteoporosis). Symptoms include severe pain, swelling, bruising, deformity, and inability to move the area, requiring immediate medical attention for diagnosis (X-ray) and treatment, often involving immobilization (cast) or surgery (pins, plates) for proper healing.
A fracture is a break or crack in a bone, typically caused by trauma like a fall or accident. It can range from a minor crack to a complete break where the bone is in multiple pieces. Other causes include repetitive stress on the bone, a condition called a stress fracture, or a disease that weakens the bone, known as a pathological fracture.
A bone fracture, or broken bone, is a break in the bone's continuity, caused by trauma (like falls, accidents) or stress, leading to symptoms like pain, swelling, and inability to move the limb normally. Fractures range from hairline cracks (incomplete) to multiple fragments (comminuted) or bone breaking the skin (open/compound). Treatment involves aligning the bone (casting, surgery) so it can knit back together, requiring immediate care for severe breaks.
Summary
A bone fracture (abbreviated FRX, Fx, or #) is a medical condition in which there is a partial or complete break in the continuity of any bone in the body. In more severe cases, the bone may be broken into several fragments, known as a comminuted fracture. An open fracture (or compound fracture) is a bone fracture where the broken bone breaks through the skin.
A bone fracture may be the result of high force impact or stress, or a minimal trauma injury as a result of certain medical conditions that weaken the bones, such as osteoporosis, osteopenia, bone cancer, or osteogenesis imperfecta, where the fracture is then properly termed a pathologic fracture. Most bone fractures require urgent medical attention to prevent further injury.
Signs and symptoms
Although bone tissue contains no pain receptors, a bone fracture is painful for several reasons:
* A break in the continuity of the periosteum, with or without a similar discontinuity in the endosteum, as both contain multiple pain receptors.
* Edema and hematoma of nearby soft tissues caused by ruptured bone marrow, which evoke pressure pain.
* Involuntary muscle spasms trying to hold bone fragments in place.
Damage to adjacent structures such as nerves, muscles or blood vessels, spinal cord, and nerve roots (for spine fractures), or cranial contents (for skull fractures) may cause other specific signs and symptoms.
Complications
Some fractures may lead to serious complications, including a condition known as compartment syndrome. If not treated, compartment syndrome may eventually require amputation of the affected limb. Other complications may include non-union, where the fractured bone fails to heal, or malunion, where the fractured bone heals in a deformed manner. One form of malunion is the malrotation of a bone, which is especially common after femoral and tibial fractures. Complications of fractures may be classified into three broad groups, depending upon their time of occurrence. These are as follows:
* Immediate complications – occurring at the time of the fracture.
* Early complications – occurring in the initial few days after the fracture.
* Late complications – occurring a long time after the fracture.
Details
Bone fractures are a very common injury and can affect anyone at any age. If you’re older than 50 or have a family history of osteoporosis, talk to your provider about a bone density screening.
Overview:
What is a bone fracture?
A bone fracture is the medical term for a broken bone.
Fractures are usually caused by traumas like falls, car accidents or sports injuries. But some medical conditions and repetitive forces (like running) can increase your risk for experiencing certain types of fractures.
If you break a bone, you might need surgery to repair it. Some people only need a splint, cast, brace or sling for their bone to heal. How long it takes to recover fully depends on which of your bones are fractured, where the fracture is and what caused it.
Bone fracture vs. break
Bone fractures and broken bones are the same injury and mean the same thing. You might see them used interchangeably. A fracture is the medical term for a broken bone, so your healthcare provider will probably refer to your broken bone as a certain type of fracture after they diagnose it.
Bone fracture vs. bone bruise
Bone fractures and bone bruises are both painful injuries caused by a strong force hitting your body — usually a fall, car accident or sports injury. The difference is how damaged your bone is.
Your bones are living tissue that can get bruised in lots of the same ways your skin can. It takes much more force to bruise a bone than it does your skin, but the injury is very similar. If something hits your bones with enough force, they can bleed without being broken. Blood trapped under the surface of your bone after an injury is a bone bruise.
A bone fracture happens when something hits your bone with enough force not only to damage it, but to break it in at least one place. Fractures are more serious injuries and can take much longer to heal than bone bruises.
If you’ve experienced a trauma and have pain on or near a bone, go to the emergency room or visit your provider as soon as possible. No matter which injury you have, it’s important to get your bone examined right away.
Bone fractures vs. sprains
Bone fractures and sprains are common sports injuries.
If you experience a bone fracture, you’ve broken one or more of your bones. You can’t sprain a bone. A sprain happens when one of your ligaments is stretched or torn.
It’s possible to experience a bone fracture and a ligament sprain during the same injury, especially if you damage a joint like your knee or elbow.
What are the different types of bone fractures?
There are many different types of fractures. Your provider will diagnose a specific fracture type depending on a few criteria, including its:
* Pattern: A fracture pattern is the medical term for the shape of a break or what it looks like.
* Cause: Some fractures are classified by how they happen.
* Body part: Where in your body you broke a bone.
Fractures diagnosed by pattern or shape
Some fractures are classified by their pattern. This can either be the direction a break goes (if it's a straight line across your bone) or its shape (if it's more than a single-line break).
Fractures that have a single straight-line break include:
* Oblique fractures.
* Transverse fractures.
* Longitudinal fractures (breaks that happen along the length of the bone).
Fracture patterns that don’t break your bone in a single straight line include:
* Greenstick fractures.
* Comminuted fractures.
* Segmental fractures.
* Spiral fractures.
Fractures diagnosed by cause
A few types of fractures are named or classified by what causes them. These include:
* Stress fractures (sometimes referred to as hairline fractures).
* Avulsion fractures.
* Buckle fractures (sometimes referred to as torus or impacted fractures).
Fractures diagnosed by location
Lots of fractures are specific to where they happen in your body. In some cases, it’s possible to experience a location-based fracture that’s also one of the other types listed above. For example, someone who experiences a severe fall might have a comminuted tibia (shin bone) fracture.
Fractures that affect people’s chest, arms and upper body include:
* Clavicle fractures (broken collarbones).
* Shoulder fractures.
* Humerus (upper arm bone) fractures.
* Elbow fractures.
* Rib fractures.
* Compression fractures.
* Facial fractures.
Some fractures that can affect your hands or wrists include:
* Barton fractures.
* Chauffeur fractures.
* Colles fractures.
* Smith fractures.
* Scaphoid fractures.
* Metacarpal fractures (breaking any of the bones in your hand that connect your wrist to your fingers).
Fractures that damage the bones in your lower body and legs include:
* Pelvic fractures.
* Acetabular fractures.
* Hip fractures.
* Femur fractures.
* Patella fractures.
* Growth plate fractures.
* Tibia (your shin bone) and fibula (your calf bone) fractures.
Fractures that affect your feet and ankles are more likely to have complications like nonunion. They include:
* Calcaneal stress fractures.
* Fifth metatarsal fractures.
* Jones fractures.
* Lisfranc fractures.
* Talus fractures.
* Trimalleolar fractures.
* Pilon fractures.
Open vs. closed fractures
Your provider will classify your fracture as either open or closed. If you have an open fracture, your bone breaks through your skin. Open fractures are sometimes referred to as compound fractures. Open fractures usually take longer to heal and have an increased risk of infections and other complications. Closed fractures are still serious, but your bone doesn’t push through your skin.
Displaced vs. non-displaced fractures
Displaced or non-displaced are more words your provider will use to describe your fracture. A displaced fracture means the pieces of your bone moved so much that a gap formed around the fracture when your bone broke. Non-displaced fractures are still broken bones, but the pieces weren’t moved far enough during the break to be out of alignment. Displaced fractures are much more likely to require surgery to repair.
Who gets bone fractures?
Bone fractures can affect anyone. Because they’re usually caused by traumas like falls, car accidents or sports injuries, it’s hard to know when someone will break a bone.
You’re more likely to experience a fracture if your bones are weakened by osteoporosis.
Osteoporosis
Osteoporosis weakens bones, making them more susceptible to sudden and unexpected fractures. Many people don’t know they have osteoporosis until after it causes them to break a bone. There usually aren’t obvious symptoms.
Females and adults older than 50 have an increased risk for developing osteoporosis. Talk to your provider about a bone density screening that can catch osteoporosis before it causes a fracture.
How common are bone fractures?
Bone fractures are a common injury. Millions of people break a bone every year.
Additional Information
A fracture is a partial or complete break in the bone. When a fracture happens, it’s classified as either open or closed:
* Open fracture (compound fracture): The bone pokes through the skin and can be seen. Or a deep wound exposes the bone through the skin.
* Closed fracture (simple fracture): The bone is broken, but the skin is intact.
Fractures have a variety of names. Here is a list of types that may happen:
* Greenstick. This is an incomplete break. A part of the bone is broken, causing the other side to bend.
* Transverse. The break is in a straight line across the bone.
* Spiral. The break spirals around the bone. This is common in a twisting injury.
* Oblique. The break is diagonal across the bone.
* Compression. The bone is crushed. This causes the broken bone to be wider or flatter in appearance.
* Comminuted. The bone has broken into 3 or more pieces. Fragments are present at the fracture site.
* Segmental. The same bone is broken in 2 places, so there is a "floating" piece of bone.
* Avulsion. The bone is broken near a tendon or ligament. A tendon or ligament pulls off a small piece of bone.
What causes fractures?
Fractures most often happen when more force is applied to the bone than the bone can take. Bones are weakest when they are twisted.
Bone fractures can be caused by falls, injury, or as a result of a direct hit or kick to the body.
Overuse or repetitive motions can tire muscles and put more pressure on the bone. This causes stress fractures. This is more common in athletes and military recruits.
Fractures can also be caused by diseases that weaken the bone. This includes osteoporosis or cancer in the bones.
What are the symptoms of a fracture?
Symptoms may be a bit different for each person. Symptoms of a broken or fractured bone may include:
* Sudden pain
* Trouble using or moving the injured area or nearby joints
* Inability to bear weight
* Swelling
* Obvious deformity
* Warmth, bruising, or redness
The symptoms of a broken bone may seem like other health conditions or problems. Always see a healthcare provider for a diagnosis.
How is a fracture diagnosed?
Your healthcare provider will take a full health history (including asking how the injury happened). You will also have a physical exam. Tests used for a fracture may include:
* X-ray. A diagnostic test that uses invisible electromagnetic energy beams to make pictures of internal tissues, bones, and organs on film.
* MRI. An imaging test that uses large magnets, radiofrequencies, and a computer to make detailed pictures of structures within the body.
* CT scan. This is an imaging test that uses X-rays and a computer to make detailed images of the body. A CT scan shows details of the bones, muscles, fat, and organs.

2406) Andrew Huxley
Gist:
Work
The nervous system in people and animals consists of many different cells. In cells, signals are conveyed by small electrical currents and by chemical substances. By measuring changes in electrical charges in a very large nerve fiber from a species of squid, Andrew Huxley and Alan Hodgkin were able to show how nerve impulses are conducted along nerve fibers. In 1952 they could demonstrate that a fundamental mechanism involves the passage of sodium and potassium ions in opposite directions in and out through the cell membrane, which gives rise to electrical charges.
Summary
Sir Andrew Fielding Huxley (born November 22, 1917, Hampstead, London, England—died May 30, 2012, Cambridge) was an English physiologist, cowinner (with Sir Alan Hodgkin and Sir John Carew Eccles) of the 1963 Nobel Prize for Physiology or Medicine. His researches centred on nerve and muscle fibres and dealt particularly with the chemical phenomena involved in the transmission of nerve impulses. He was knighted in 1974 and was president of the Royal Society from 1980 to 1985.
Andrew Fielding Huxley, a grandson of the biologist T.H. Huxley and son of the biographer and man of letters Leonard Huxley, received his M.A. from Trinity College, Cambridge, where later, from 1941 to 1960, he was a fellow and then director of studies, a demonstrator, an assistant director of research, and finally a reader in experimental biophysics in the Department of Physiology. In 1960 he went to University College, London, first as Jodrell professor and then, from 1969, as Royal Society research professor, in the Department of Physiology. Huxley and Hodgkin’s researches were concerned largely with studying the exchange of sodium and potassium ions that causes a brief reversal in a nerve cell’s electrical polarization; this phenomenon, known as an action potential, results in the transmission of an impulse along a nerve fibre. Apart from the researches directly mentioned in the Nobel citation, Huxley made contributions of fundamental importance to knowledge of the process of contraction by a muscle fibre. He published many important papers in periodicals, particularly in the Journal of Physiology. His Sherrington Lectures were published as Reflections on Muscle (1980).
Details
Sir Andrew Fielding Huxley (22 November 1917 – 30 May 2012) was an English physiologist and biophysicist. He was born into the prominent Huxley family. After leaving Westminster School in central London, he went to Trinity College, Cambridge, on a scholarship, after which he joined Alan Hodgkin to study nerve impulses. Their eventual discovery of the basis for propagation of nerve impulses (called an action potential) earned them the Nobel Prize in Physiology or Medicine in 1963. They made their discovery from the giant axon of the Atlantic squid. Soon after the outbreak of the Second World War, Huxley was recruited by the British Anti-Aircraft Command and later transferred to the Admiralty. After the war he resumed research at the University of Cambridge, where he developed interference microscopy that would be suitable for studying muscle fibres.
In 1952, he was joined by the German physiologist Rolf Niedergerke. Together they discovered in 1954 the mechanism of muscle contraction, popularly called the "sliding filament theory", which is the foundation of our modern understanding of muscle mechanics. In 1960 he became head of the Department of Physiology at University College London. He was elected a Fellow of the Royal Society in 1955, and President in 1980. The Royal Society awarded him the Copley Medal in 1973 for his collective contributions to the understanding of nerve impulses and muscle contraction. He was made a Knight Bachelor by the Queen in 1974, and was appointed to the Order of Merit in 1983. He was a fellow of Trinity College, Cambridge, until his death.
Career
Having entered Cambridge in 1935, Huxley graduated with a bachelor's degree in 1938. In 1939, Alan Lloyd Hodgkin returned from the US to take up a fellowship at Trinity College, and Huxley became one of his postgraduate students. Hodgkin was interested in the transmission of electrical signals along nerve fibres. Beginning in 1935 in Cambridge, he had made preliminary measurements on frog sciatic nerves suggesting that the accepted view of the nerve as a simple, elongated battery was flawed. Hodgkin invited Huxley to join him in researching the problem. The work was experimentally challenging. One major problem was that the small size of most neurons made it extremely difficult to study them using the techniques of the time. They overcame this by working at the Marine Biological Association laboratory in Plymouth using the giant axon of the longfin inshore squid (Doryteuthis (formerly Loligo) pealeii), which has the largest neurons known. The experiments were still extremely challenging, as nerve impulses last only a few milliseconds, during which time they needed to measure the changing electrical potential at different points along the nerve. Using equipment largely of their own construction and design, including one of the earliest applications of a technique of electrophysiology known as the voltage clamp, they were able to record ionic currents. In 1939, they jointly published a short paper in Nature reporting on the work done in Plymouth and announcing their achievement of recording action potentials from inside a nerve fibre.
Then World War II broke out, and their research was abandoned. Huxley was recruited by the British Anti-Aircraft Command, where he worked on radar control of anti-aircraft guns. Later he was transferred to the Admiralty to do work on naval gunnery, and worked in a team led by Patrick Blackett. Hodgkin, meanwhile, was working on the development of radar at the Air Ministry. When he had a problem concerning a new type of gun sight, he contacted Huxley for advice. Huxley did a few sketches, borrowed a lathe and produced the necessary parts.
Huxley was elected to a research fellowship at Trinity College, Cambridge, in 1941. In 1946, with the war ended, he was able to take this up and to resume his collaboration with Hodgkin on understanding how nerves transmit signals. Continuing their work in Plymouth, they were, within six years, able to solve the problem using equipment they built themselves. The solution was that nerve impulses, or action potentials, do not travel down the core of the fiber, but rather along the outer membrane of the fiber as cascading waves of sodium ions diffusing inward on a rising pulse and potassium ions diffusing out on a falling edge of a pulse. In 1952, they published their theory of how action potentials are transmitted in a joint paper, in which they also describe one of the earliest computational models in biochemistry. This model forms the basis of most of the models used in neurobiology during the following four decades.
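To make the 1952 result concrete, here is a minimal numerical sketch of a Hodgkin-Huxley-style membrane model in Python. The parameter values and rate functions are the standard textbook forms (assumed here, not taken from this article), and the simple Euler integration is an illustration of the kind of computation involved, not a reconstruction of their original calculation.

import math

def hh_step(V, m, h, n, I_ext, dt=0.01):
    # Voltage-dependent opening/closing rates for the gating variables
    # (standard modern-convention forms; V in mV, time in ms).
    a_m = 0.1 * (V + 40.0) / (1.0 - math.exp(-(V + 40.0) / 10.0))
    b_m = 4.0 * math.exp(-(V + 65.0) / 18.0)
    a_h = 0.07 * math.exp(-(V + 65.0) / 20.0)
    b_h = 1.0 / (1.0 + math.exp(-(V + 35.0) / 10.0))
    a_n = 0.01 * (V + 55.0) / (1.0 - math.exp(-(V + 55.0) / 10.0))
    b_n = 0.125 * math.exp(-(V + 65.0) / 80.0)
    # Ionic currents: inward sodium, outward potassium, plus a small leak.
    I_Na = 120.0 * m**3 * h * (V - 50.0)
    I_K = 36.0 * n**4 * (V + 77.0)
    I_L = 0.3 * (V + 54.4)
    # Membrane equation (capacitance 1 uF/cm^2) and gating kinetics, Euler step.
    V = V + dt * (I_ext - I_Na - I_K - I_L)
    m = m + dt * (a_m * (1.0 - m) - b_m * m)
    h = h + dt * (a_h * (1.0 - h) - b_h * h)
    n = n + dt * (a_n * (1.0 - n) - b_n * n)
    return V, m, h, n

V, m, h, n = -65.0, 0.05, 0.6, 0.32  # approximate resting state
for _ in range(2000):                # 20 ms of injected current at 10 uA/cm^2
    V, m, h, n = hh_step(V, m, h, n, I_ext=10.0)
print(round(V, 1))                   # membrane potential (mV) after the run

With sustained injected current the model fires repetitive action potentials, which is the behavior the 1952 equations were built to explain.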
In 1952, having completed work on action potentials, Huxley was teaching physiology at Cambridge and became interested in another difficult, unsolved problem: how does muscle contract? To make progress on understanding the function of muscle, new ways of observing how the network of filaments behaves during contraction were needed. Prior to the war, he had been working on a preliminary design for interference microscopy, which at the time he believed to be original, though it turned out to have been tried 50 years before and abandoned. He, however, was able to make interference microscopy work and to apply it to the problem of muscle contraction with great effect. He was able to view muscle contraction with greater precision than conventional microscopes allowed, and to distinguish types of fibre more easily. By 1953, with the assistance of Rolf Niedergerke, he began to identify the features of muscle movement. Around that time, Hugh Huxley and Jean Hanson made a similar observation. Authored in pairs, their papers were published simultaneously in the 22 May 1954 issue of Nature. Thus the four introduced what is called the sliding filament theory of muscle contraction. Huxley synthesized his findings, and the work of colleagues, into a detailed description of muscle structure and of how muscle contraction occurs and generates force, which he published in 1957. In 1966 his team provided the proof of the theory, which has remained the basis of modern understanding of muscle physiology.
In 1953, Huxley worked at Woods Hole, Massachusetts, as a Lalor Scholar. He gave the Herter Lectures at Johns Hopkins Medical School in 1959 and the Jesup Lectures at Columbia University in 1964. In 1961 he lectured on neurophysiology at Kiev University as part of an exchange scheme between British and Russian professors.
He was an editor of the Journal of Physiology from 1950 to 1957 and also of the Journal of Molecular Biology. In 1955, he was elected a Fellow of the Royal Society and served on the Council of the Royal Society from 1960 to 1962.
Huxley held college and university posts in Cambridge until 1960, when he became head of the Department of Physiology at University College London. In addition to his administrative and teaching duties, he continued to work actively on muscle contraction, and also made theoretical contributions to other work in the department, such as that on animal reflectors. In 1963, he was jointly awarded the Nobel Prize in Physiology or Medicine for his part in discoveries concerning the ionic mechanisms of the nerve cell. In 1969 he was appointed to a Royal Society Research Professorship, which he held in the Department of Physiology at University College London.
In 1980, Huxley was elected as President of the Royal Society, a post he held until 1985. In his Presidential Address in 1981, he chose to defend the Darwinian explanation of evolution, as his ancestor, T. H. Huxley had in 1860. Whereas T. H. Huxley was defying the bishops of his day, Sir Andrew was countering new theories of periods of accelerated change. In 1983, he defended the Society's decision to elect Margaret Thatcher as a fellow on the ground of her support for science even after 44 fellows had signed a letter of protest.
In 1984, he was elected Master of Trinity, succeeding his longtime collaborator, Sir Alan Hodgkin. His appointment broke the tradition that the office of Master of Trinity alternates between a scientist and an arts man. He was Master until 1990 and was fond of reminding interviewers that Trinity College had more Nobel Prize winners than did the whole of France. He maintained up to his death his position as a fellow at Trinity College, Cambridge, teaching in physiology, natural sciences and medicine. He was also a fellow of Imperial College London in 1980.
From his experimental work with Hodgkin, Huxley developed a set of differential equations that provided a mathematical explanation for nerve impulses—the "action potential". This work provided the foundation for all of the current work on voltage-sensitive membrane channels, which are responsible for the functioning of animal nervous systems. Quite separately, he developed the mathematical equations for the operation of myosin "cross-bridges" that generate the sliding forces between actin and myosin filaments, which cause the contraction of skeletal muscles. These equations presented an entirely new paradigm for understanding muscle contraction, which has been extended to provide understanding of almost all of the movements produced by cells above the level of bacteria. Together with the Swiss physiologist Robert Stämpfli, he demonstrated the existence of saltatory conduction in myelinated nerve fibres.
Awards and honours
Huxley, Alan Hodgkin and John Eccles jointly won the 1963 Nobel Prize in Physiology or Medicine "for their discoveries concerning the ionic mechanisms involved in excitation and inhibition in the peripheral and central portions of the nerve cell membrane". Huxley and Hodgkin won the prize for experimental and mathematical work on the process of nerve action potentials, the electrical impulses that enable the activity of an organism to be coordinated by a central nervous system. Eccles had made important discoveries on synaptic transmission.
Huxley was elected a Fellow of the Royal Society (FRS) in 1955, and was awarded its Copley Medal in 1973 "in recognition of his outstanding studies on the mechanisms of the nerve impulse and of activation of muscular contraction." Huxley was elected to the American Academy of Arts and Sciences in 1961. He was knighted by Queen Elizabeth II on 12 November 1974. He was elected to the American Philosophical Society in 1975 and the United States National Academy of Sciences in 1979. He was appointed to the Order of Merit on 11 November 1983. In 1976–77, he was President of the British Science Association and from 1980 to 1985 he served as President of the Royal Society. In 1986 he was elected an Honorary Fellow of the Royal Academy of Engineering then known as the Fellowship of Engineering.
Huxley's portrait by David Poole hangs in Trinity College's collection.

2459) Compass
Gist
A compass is a tool for finding direction. A simple compass is a magnetic needle mounted on a pivot, or short pin. The needle, which can spin freely, always points north. The pivot is attached to a compass card. The compass card is marked with the directions. To use a compass, a person lines up the needle with the marking for north. Then the person can figure out all the other directions.
A compass works because Earth is a huge magnet. A magnet has two main centers of force, called poles—one at each end. Lines of magnetic force connect these poles. Bits of metal near a magnet always arrange themselves along these lines. A compass needle acts like these bits of metal. It points north because it lines up with Earth’s lines of magnetic force.
Earth’s magnetic poles are not the same as the geographic North and South poles. The geographic poles are located at the very top and bottom of a globe. The magnetic poles are nearby but not at exactly the same places. A compass points to the magnetic North Pole, not the geographic North Pole. Therefore, a compass user has to make adjustments to find true north.
A special kind of compass called a gyrocompass does point to true north. The gyrocompass uses a device called a gyroscope, which always points in the same direction. Today large ships carry both magnetic compasses and gyrocompasses.
People in China and Europe first learned how to make magnetic compasses during the 1100s. They discovered that when a magnetized bit of iron floated in water, it pointed north. Sailors soon began to use compasses to navigate, or find their way, at sea.
Summary
A compass, in navigation or surveying, is the primary device for direction-finding on the surface of the Earth. Compasses may operate on magnetic or gyroscopic principles or by determining the direction of the Sun or a star.
The oldest and most familiar type of compass is the magnetic compass, which is used in different forms in aircraft, ships, and land vehicles and by surveyors. Sometime in the 12th century, mariners in China and Europe made the discovery, apparently independently, that a piece of lodestone, a naturally occurring magnetic ore, when floated on a stick in water, tends to align itself so as to point in the direction of the polestar. This discovery was presumably quickly followed by a second, that an iron or steel needle touched by a lodestone for long enough also tends to align itself in a north-south direction. From the knowledge of which way is north, of course, any other direction can be found.
The reason magnetic compasses work as they do is that the Earth itself acts as an enormous bar magnet with a north-south field that causes freely moving magnets to take on the same orientation. The direction of the Earth’s magnetic field is not quite parallel to the north-south axis of the globe, but it is close enough to make an uncorrected compass a reasonably good guide. The inaccuracy, known as variation (or declination), varies in magnitude from point to point upon the Earth. The deflection of a compass needle due to local magnetic influences is called deviation.
Over the centuries a number of technical improvements have been made in the magnetic compass. Many of these were pioneered by the English, whose large empire was kept together by naval power and who hence relied heavily upon navigational devices. By the 13th century the compass needle had been mounted upon a pin standing on the bottom of the compass bowl. At first only north and south were marked on the bowl, but then the other 30 principal points of direction were filled in. A card with the points painted on it was mounted directly under the needle, permitting navigators to read their direction from the top of the card. The bowl itself was subsequently hung on gimbals (rings on the side that let it swing freely), ensuring that the card would always be level. In the 17th century the needle itself took the shape of a parallelogram, which was easier to mount than a thin needle.
During the 15th century navigators began to understand that compass needles do not point directly to the North Pole but rather to some nearby point; in Europe, compass needles pointed slightly east of true north. To counteract this difficulty, British navigators adopted conventional meridional compasses, in which the north on the compass card and the “needle north” were the same when the ship passed a point in Cornwall, England. (The magnetic poles, however, wander in a predictable manner—in more recent centuries Europeans have found magnetic north to be west of true north—and this must be considered for navigation.)
In 1745 Gowin Knight, an English inventor, developed a method of magnetizing steel in such a way that it would retain its magnetization for long periods of time; his improved compass needle was bar-shaped and large enough to bear a cap by which it could be mounted on its pivot. The Knight compass was widely used.
Some early compasses did not have water in the bowl and were known as dry-card compasses; their readings were easily disturbed by shocks and vibration. Although they were less affected by shock, liquid-filled compasses were plagued by leaks and were difficult to repair when the pivot became worn. Neither the liquid nor the dry-card type was decisively advantageous until 1862, when the first liquid compass was made with a float on the card that took most of the weight off the pivot. A system of bellows was invented to expand and contract with the liquid, preventing most leaks. With these improvements liquid compasses made dry-card compasses obsolete by the end of the 19th century.
Modern mariners’ compasses are usually mounted in binnacles, cylindrical pedestals with provision for illuminating the compass face from below. Each binnacle contains specially placed magnets and pieces of steel that cancel the magnetic effects of the metal of the ship. Much the same kind of device is used aboard aircraft, except that, in addition, it contains a corrective mechanism for the errors induced in magnetic compasses when airplanes suddenly change course. The corrective mechanism is a gyroscope, which has the property of resisting efforts to change its axis of spin. This system is called a gyromagnetic compass.
Gyroscopes are also employed in a type of nonmagnetic compass called the gyrocompass. The gyroscope is mounted in three sets of concentric rings connected by gimbals, each ring spinning freely. When the initial axis of spin of the central gyroscope is set to point to true north, it will continue to do so and will resist efforts to realign it in any other direction; the gyroscope itself thus functions as a compass. If it begins to precess (wobble), a pendulum weight pulls it back into line. Gyrocompasses are generally used in navigation systems because they can be set to point to true north rather than to magnetic north.
Details
A compass is a device that shows the cardinal directions used for navigation and geographic orientation. It typically consists of a magnetized needle or another element, such as a compass card or compass rose, that pivots to align itself with magnetic north. Other methods may be used, including gyroscopes, magnetometers, and GPS receivers.
Compasses often show angles in degrees: north corresponds to 0°, and the angles increase clockwise, so east is 90°, south is 180°, and west is 270°. These numbers allow the compass to show azimuths or bearings, which are commonly stated in degrees. If the local variation between magnetic north and true north is known, then the direction of magnetic north also gives the direction of true north.
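As a small illustration of that last point, the sketch below converts a magnetic bearing to a true bearing. The east-positive sign convention for the variation (declination) is an assumption of this example, not something stated in the text.

def magnetic_to_true(magnetic_bearing_deg, declination_deg):
    # With east-positive declination, true bearing = magnetic bearing + declination.
    return (magnetic_bearing_deg + declination_deg) % 360.0

# Example: a compass reading of 90 degrees where the declination is 10 degrees east
# corresponds to a true bearing of 100 degrees.
print(magnetic_to_true(90.0, 10.0))  # 100.0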
Among the Four Great Inventions, the magnetic compass was first invented as a device for divination as early as the Chinese Han dynasty (from c. 206 BC), and was later adopted for navigation by the Song dynasty Chinese during the 11th century. The first recorded use of a compass in Western Europe and the Islamic world occurred around 1190.
The magnetic compass is the most familiar compass type. It functions as a pointer to "magnetic north", the local magnetic meridian, because the magnetized needle at its heart aligns itself with the horizontal component of the Earth's magnetic field. The magnetic field exerts a torque on the needle, pulling the North end or pole of the needle approximately toward the Earth's North magnetic pole, and pulling the other toward the Earth's South magnetic pole. The needle is mounted on a low-friction pivot point, in better compasses a jewel bearing, so it can turn easily. When the compass is held level, the needle turns until, after a few seconds to allow oscillations to die out, it settles into its equilibrium orientation.
In navigation, directions on maps are usually expressed with reference to geographical or true north, the direction toward the Geographical North Pole, the rotation axis of the Earth. The angle between true north and magnetic north, called magnetic declination, can vary widely with geographic location. The local magnetic declination is given on most maps, to allow the map to be oriented with a compass parallel to true north. The locations of the Earth's magnetic poles slowly change with time, which is referred to as geomagnetic secular variation. Because of this, a map with the latest declination information should be used. Some magnetic compasses include means to manually compensate for the magnetic declination, so that the compass shows true directions.
Additional Information
A compass is a device that indicates direction. It is one of the most important instruments used for navigation. Magnetic compasses are the best-known type of compass. While the design and construction of the magnetic compass have changed significantly over the centuries, the concept of how it works remains the same. A magnetic compass consists of a magnetized needle that rotates to line up with Earth's magnetic field. The ends point to what are known as the north magnetic pole and the south magnetic pole.
History of Compasses
The principle of magnetism has been observed by humans for thousands of years. Ancient Greeks observed the principle of magnetism, but they did not understand its relationship to the Earth or that a magnetized metal would point north. People in Ancient China also recognized magnetism. They learned that a magnetized bar of lodestone tied to a string would always point in the same direction. This observation became part of their spiritual beliefs as religious leaders used a magnetic spoon balanced on a plate to predict the future. Some consider this to be the earliest form of a compass, although compasses are more accurately defined as instruments devised for navigational purposes. A unique aspect of these early compasses in China is that they were oriented to the south and were referred to as “south pointing spoons” or “south pointers.” Later Chinese compasses were similarly oriented to point south rather than north like today’s compasses.
There is evidence that explorers from China and Europe were using compasses to navigate the seas as far back as the 1100s. In fact, many historians believe that people in China were using compasses to navigate long before that time. Miners in search of jade, for instance, appear to have used “south pointing spoons.” Some historians also believe that compasses originated in China and traveled to Europe through trade routes, but others think that Europeans developed the technology independently.
Early Compasses
Very early compasses were made of a magnetized needle attached to a piece of wood or cork that floated freely in a dish of water. As the needle settled, the marked end would point toward magnetic north.
As engineers and scientists learned more about magnetism, the compass needle was mounted and placed in the middle of a card that showed the cardinal directions: north, south, east, and west. In time, 32 points of direction were added to the compass card.
In their earliest use, compasses were likely used as a backup navigational tool for when the sun, stars, or other landmarks could not be seen. As compasses became more reliable and more explorers understood how to use them, compasses became an essential tool for travelers.
Adjustments and Adaptations
By the 15th century, explorers realized that the “north” indicated by a compass needle was not the same as Earth’s true geographic north. This discrepancy between magnetic north and true north is called variation (by mariners or pilots) or magnetic declination (by land navigators), and it varies depending on location. Because of this variation, a compass could lead a novice user many kilometers off-course. Navigators learn to adjust their compass readings to account for variation.
Other adaptations have been made to magnetic compasses over time, especially for their use in marine navigation. When ships evolved from being made of wood to being made of iron and steel, the magnetism of the ship affected compass readings. This difference is called deviation. Adjustments, such as placing soft iron balls (called Kelvin spheres) and soft iron bars (called Flinders bars) near the compass, help increase the accuracy of the readings. Deviation must also be taken into account on aircraft using compasses, due to the metal in the construction of an airplane.
Magnetic compasses come in many forms. The most basic are portable compasses for use on casual hikes. Magnetic compasses can have additional features, such as magnifiers for use with maps, a prism or mirror that allows the user to see the landscape and compass reading at the same time, or markings in Braille for people with a visual impairment. The most complicated compasses are complex devices on ships or planes that can calculate and adjust for motion, variation, and deviation.
Other Types of Compasses
Some compasses do not use Earth’s magnetism to indicate direction. The gyrocompass, invented in the early 20th century, uses a spinning gyroscope to follow Earth’s axis of rotation to point to true north. Since magnetic north is not measured, variation is not an issue. Once the gyroscope begins spinning, motion will not disturb it. This type of compass is often used on ships and aircraft.
A solar compass uses the sun as a navigational tool. It was used in the 19th and 20th centuries to survey land. Because a solar compass is not affected by iron deposits or location relative to the poles, it can be more accurate than a magnetic compass, particularly near the poles. The most common method is to use a compass card and the angle of the shadow of the sun to indicate direction.
Another type of solar compass is an old-fashioned analog (not digital) watch. Using the watch’s hands and the position of the sun, it is possible to determine north or south. Simply hold the watch parallel to the ground (in your hand) and point the 12 o'clock mark in the direction of the sun. The line that bisects the angle between the hour hand and the 12 o’clock mark is the north-south line. In the Southern Hemisphere, north will be the direction closer to the sun. In the Northern Hemisphere, north will be the direction further from the sun.
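A rough sketch of the watch method just described, assuming the watch shows local solar time (no daylight saving) and the 12 o'clock mark is pointed at the sun; the helper name is hypothetical.

def north_south_offset_deg(hour, minute=0):
    # The hour hand moves 30 degrees per hour around the dial.
    hour_angle = (hour % 12 + minute / 60.0) * 30.0
    # The north-south line bisects the angle between the hour hand and 12 o'clock.
    return hour_angle / 2.0

# Example: at 4:00 pm the hour hand is 120 degrees from the 12 o'clock mark,
# so the north-south line lies 60 degrees from the direction of the sun.
print(north_south_offset_deg(16))  # 60.0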
Receivers from the global positioning system (GPS) have begun to take the place of compasses. A GPS receiver coordinates with satellites orbiting Earth and monitoring stations on Earth to pinpoint the receiver's location. GPS receivers can plot latitude, longitude, and altitude on a map. In open areas and optimal conditions, standard GPS is accurate to about 6 meters (20 feet), and researchers have developed a so-called “SuperGPS” that is accurate within 10 centimeters (3.9 inches).
Many people throughout history have used knowledge of the stars and constellations as a kind of compass. For example, Polynesians have been using patterns in the sky to navigate the ocean for at least 2,000 years. A native Hawaiian historian, Charles Nainoa Thompson, developed a Hawaiian star compass in the mid-2000s to illustrate how Polynesians use the constellations for navigation. Much like a magnetic compass, there are 32 stars situated around a center point on a star compass.
Impact of the Compass
The compass had a major impact on the world, particularly for people who were navigating the sea. Because compasses were more accurate than other naval navigational tools, explorers used them to explore parts of the world that were unknown to them during the so-called Age of Exploration that began in the early 15th century, a development with both positive and negative impacts. European exploration contributed to trade and the circulation of knowledge, but it would also lead to the spread of disease, the colonization of new lands, and the enslavement of Africans and other indigenous populations. Because today’s global relationships have emerged from these relationships, the legacy of colonization—and the invention of the compass—continues to have a profound impact on the world today.

Collar Quotes
1. I remember in that red leisure suit I sort of felt like a Pizza Hut employee, and the white one was the ultimate, with the white turtleneck collar, that was the ultimate in bad taste. - Johnny Depp
2. I hate ready-made suits, button-down collars, and sports shirts. - Bobby Fischer
3. Joe Frazier was the epitome of a champion. I mean, here is a guy who was total old school, blue collar, who would fight anybody. You know, he didn't tell you he was the best fighter pound for pound. - Sugar Ray Leonard
Q: What's orange and sounds like a parrot?
A: A carrot.
* * *
Q: What's a Vegetable's favourite martial art?
A: Carrotee!
* * *
Q: How do you lead a horse to water?
A: With carrots.
* * *
Q: What vegetable are all others afraid of?
A: A scarrot.
* * *
Q: Why did the carrot get an award?
A: Because he was out standing in his field.
* * *
Hi,
#10691. What does the term in Geography Conifer mean?
#10692. What does the term in Geography Geographic contiguity mean?
Hi,
#5887. What does the adjective compliant mean?
#5888. What does the noun complicity mean?
Hi,
#2539. What does the medical term Occipital lobe mean?
2405) Alan Hodgkin
Gist:
Work
The nervous system in people and animals consists of many different cells. In cells, signals are conveyed by small electrical currents and by chemical substances. By measuring changes in electrical charges in a very large nerve fiber from a species of squid, Alan Hodgkin and Andrew Huxley were able to show how nerve impulses are conducted along nerve fibers. In 1952 they could demonstrate that a fundamental mechanism involves the passage of sodium and potassium ions in opposite directions in and out through the cell membrane, which gives rise to electrical charges.
Summary
Sir Alan Hodgkin (born February 5, 1914, Banbury, Oxfordshire, England—died December 20, 1998, Cambridge) was an English physiologist and biophysicist, who received (with Andrew Fielding Huxley and Sir John Eccles) the 1963 Nobel Prize for Physiology or Medicine for the discovery of the chemical processes responsible for the passage of impulses along individual nerve fibres.
Hodgkin was educated at Trinity College, Cambridge. After conducting radar research (1939–45) for the British Air Ministry, he joined the faculty at Cambridge, where he worked (1945–52) with Huxley on measuring the electrical and chemical behaviour of individual nerve fibres. By inserting microelectrodes into the giant nerve fibres of the squid Loligo forbesi, they were able to show that the electrical potential of a fibre during conduction of an impulse exceeds the potential of the fibre at rest, contrary to the accepted theory, which postulated a breakdown of the nerve membrane during impulse conduction.
They knew that the activity of a nerve fibre depends on the fact that a large concentration of potassium ions is maintained inside the fibre, while a large concentration of sodium ions is found in the surrounding solution. Their experimental results (1947) indicated that the nerve membrane allows only potassium to enter the fibre during the resting phase but allows sodium to penetrate when the fibre is excited.
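Those concentration gradients translate into equilibrium (Nernst) potentials, which is why potassium dominance sets the negative resting potential and sodium entry reverses it during an impulse. Below is a minimal sketch of that arithmetic; the squid-axon concentration values are typical textbook figures assumed for illustration, not numbers from this article.

import math

def nernst_mv(conc_out_mM, conc_in_mM, z=1, temp_K=291.0):
    # Nernst equation: E = (R*T / (z*F)) * ln([out]/[in]), converted to millivolts.
    R, F = 8.314, 96485.0
    return 1000.0 * (R * temp_K) / (z * F) * math.log(conc_out_mM / conc_in_mM)

# Typical squid-axon values (mM), assumed here: K+ 400 inside / 20 outside,
# Na+ 50 inside / 440 outside, at about 18 degrees C.
print(round(nernst_mv(20.0, 400.0), 1))   # K+  equilibrium, about -75 mV
print(round(nernst_mv(440.0, 50.0), 1))   # Na+ equilibrium, about +55 mV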
Hodgkin served as a research professor for the Royal Society (1952–69), professor of biophysics at Cambridge (from 1970), chancellor of the University of Leicester (1971–84), and master of Trinity College (1978–85). He was knighted in 1972 and admitted into the Order of Merit in 1973. Publications by Hodgkin include Conduction of the Nervous Impulse (1964) and his autobiography, Chance and Design: Reminiscences of Science in Peace and War (1992).
Details
Sir Alan Lloyd Hodgkin (5 February 1914 – 20 December 1998) was a British physiologist and biophysicist who shared the 1963 Nobel Prize in Physiology or Medicine with Andrew Huxley and John Eccles.
Early life and education
Hodgkin was born in Banbury, Oxfordshire, on 5 February 1914. He was the oldest of three sons of Quakers George Hodgkin and Mary Wilson Hodgkin. His father was the son of Thomas Hodgkin and had read for the Natural Sciences Tripos at Cambridge, where he had befriended the electrophysiologist Keith Lucas.
Because of poor eyesight, George was unable to study medicine and eventually ended up working for a bank in Banbury. As members of the Society of Friends, George and Mary opposed the Military Service Act of 1916, which introduced conscription, and had to endure a great deal of abuse from their local community, including an attempt to throw George in one of the town canals. In 1916, George Hodgkin travelled to Armenia as part of an investigation of distress. Moved by the misery and suffering of Armenian refugees, he attempted to go back there in 1918 on a route through the Persian Gulf (as the northern route was closed because of the October Revolution in Russia). He died of dysentery in Baghdad on 24 June 1918, just a few weeks after his youngest son, Keith, had been born.
From early life on, Hodgkin and his brothers were encouraged to explore the country around their home, which instilled in Alan an interest in natural history, particularly ornithology. At the age of 15, he helped Wilfred Backhouse Alexander with surveys of heronries and later, at Gresham's School, he overlapped and spent a lot of time with David Lack. In 1930, he was the winner of a bronze medal in the Public Schools Essay Competition organised by the Royal Society for the Protection of Birds.
School and university
Alan started his education at The Downs School, where his contemporaries included the future scientists Frederick Sanger and Alec Bangham ("neither outstandingly brilliant at school", according to Hodgkin), as well as the future artists Lawrence Gowing and Kenneth Rowntree. After the Downs School, he went on to Gresham's School, where he overlapped with the future composer Benjamin Britten as well as Maury Meiklejohn. He ended up receiving a scholarship at Trinity College, Cambridge, in botany, zoology and chemistry.
Between school and college, he spent May 1932 at the Freshwater Biological Station at Wray Castle based on a recommendation of his future Director of Studies at Trinity, Carl Pantin. After Wray Castle, he spent two months with a German family in Frankfurt as "in those days it was thought highly desirable that anyone intending to read science should have a reasonable knowledge of German." After his return to England in early August 1932, his mother Mary was remarried to Lionel Smith (1880–1972), the eldest son of A. L. Smith, whose daughter Dorothy was also married to Alan's uncle Robert Howard Hodgkin.
In the autumn of 1932, Hodgkin started as a freshman scholar at Trinity College where his friends included classicists John Raven and Michael Grant, fellow-scientists Richard Synge and John H. Humphrey, as well as Polly and David Hill, the children of Nobel laureate Archibald Hill. He took physiology with chemistry and zoology for the first two years, including lectures by Nobel laureate E.D. Adrian. For Part II of the tripos he decided to focus on physiology instead of zoology. Nevertheless, he participated in a zoological expedition to the Atlas Mountains in Morocco led by John Pringle in 1934. He finished Part II of the tripos in July 1935 and stayed at Trinity as a research fellow.
During his studies, Hodgkin, who described himself as "having been brought up as a supporter of the British Labour Party" was friends with communists and actively participated in the distribution of anti-war pamphlets. At Cambridge, he knew James Klugmann and John Cornford, but he emphasised in his autobiography that none of his friends "made any serious effort to convert me [to Communism], either then or later." From 1935 to 1937, Hodgkin was a member of the Cambridge Apostles.
Pre-war research
Hodgkin started conducting experiments on how electrical activity is transmitted in the sciatic nerve of frogs in July 1934. He found that a nerve impulse arriving at a cold or compression block can decrease the electrical threshold beyond the block, suggesting that the impulse produces a spread of an electrotonic potential in the nerve beyond the block. In 1936, Hodgkin was invited by Herbert Gasser, then director of the Rockefeller Institute in New York City, to work in his laboratory during 1937–38. There he met Rafael Lorente de Nó and Kenneth Stewart Cole, with whom he ended up publishing a paper. During that year he also spent time at the Woods Hole Marine Biological Laboratory, where he was introduced to the squid giant axon, which ended up being the model system with which he conducted most of the research that eventually led to his Nobel Prize. In the spring of 1938, he visited Joseph Erlanger at Washington University in St. Louis, who told him he would take Hodgkin's local circuit theory of nerve impulse propagation seriously if he could show that altering the resistance of the fluid outside a nerve fibre made a difference to the velocity of nerve impulse conduction. Working with single nerve fibres from shore crabs and squids, he showed that the conduction rate was much faster in seawater than in oil, providing strong evidence for the local circuit theory.
After his return to Cambridge he started collaborating with Andrew Huxley who had entered Trinity as a freshman in 1935, three years after Hodgkin. With a £300 equipment grant from the Rockefeller Foundation, Hodgkin managed to set up a similar physiology setup to the one he had worked with at the Rockefeller Institute. He moved all his equipment to the Plymouth Marine Laboratory in July 1939. There, he and Huxley managed to insert a fine cannula into the giant axon of squids and record action potentials from inside the nerve fibre. They sent a short note of their success to Nature just before the outbreak of World War II.
Later career and administrative positions
From 1951 to 1969, Hodgkin was the Foulerton Professor of the Royal Society at Cambridge. In 1970 he became the John Humphrey Plummer Professor of Biophysics at Cambridge. Around this time he also ended his experiments on nerve at the Plymouth Marine Laboratory and switched his focus to visual research which he could do in Cambridge with the help of others while serving as president of the Royal Society. Together with Denis Baylor and Peter Detwiler he published a series of papers on turtle photoreceptors.
From 1970 to 1975 Hodgkin served as the 53rd president of the Royal Society (PRS). During his tenure as PRS, he was knighted in 1972 and admitted into the Order of Merit in 1973. From 1978 to 1985 he was the 34th Master of Trinity College, Cambridge.
He served on the Royal Society Council from 1958 to 1960 and on the Medical Research Council from 1959 to 1963. He was foreign secretary of the Physiological Society from 1961 to 1967. He also held additional administrative posts such as Chancellor, University of Leicester, from 1971 to 1984.

2458) RADAR
Gist
RADAR stands for Radio Detection And Ranging, a system that uses radio waves to detect, locate, and track objects by sending out signals and analyzing the returning echoes to determine distance, speed, and direction. It works even in poor visibility, for things like aircraft, ships, weather, and vehicles, and is crucial in aviation, shipping, meteorology, and traffic control.
The principle of Radar (Radio Detection And Ranging) is to detect objects by sending out electromagnetic waves (radio waves) and analyzing the returning echoes, much like SONAR uses sound. A transmitter sends pulses, an antenna radiates them, and the signal reflects off targets, returning as weak echoes. By measuring the time delay and direction of these returning signals, the radar system calculates an object's distance, bearing (direction), and velocity (using Doppler shift), displaying the information visually.
Summary
Radar is a system that uses radio waves to determine the distance (ranging), direction (azimuth and elevation angles), and radial velocity of objects relative to the site. It is a radiodetermination method used to detect and track aircraft, ships, spacecraft, guided missiles, motor vehicles, weather formations and terrain. The term RADAR was coined in 1940 by the United States Navy as an acronym for "radio detection and ranging". The term radar has since entered English and other languages as an anacronym, a common noun, losing all capitalization.
A radar system consists of a transmitter producing electromagnetic waves in the radio or microwave domain, a transmitting antenna, a receiving antenna (often the same antenna is used for transmitting and receiving) and a receiver and processor to determine properties of the objects. Radio waves (pulsed or continuous) from the transmitter reflect off the objects and return to the receiver, giving information about the objects' locations and speeds. This device was developed secretly for military use by several countries in the period before and during World War II. A key development was the cavity magnetron in the United Kingdom, which allowed the creation of relatively small systems with sub-meter resolution.
The modern uses of radar are highly diverse, including air and terrestrial traffic control, radar astronomy, air-defense systems, anti-missile systems, marine radars to locate landmarks and other ships, aircraft anti-collision systems, ocean surveillance systems, outer space surveillance and rendezvous systems, meteorological precipitation monitoring, radar remote sensing, altimetry and flight control systems, guided missile target locating systems, self-driving cars, and ground-penetrating radar for geological observations. Modern high-tech radar systems use digital signal processing and machine learning and are capable of extracting useful information from very high noise levels.
Other systems which are similar to radar make use of other regions of the electromagnetic spectrum. One example is lidar, which uses predominantly infrared light from lasers rather than radio waves. With the emergence of driverless vehicles, radar is expected to assist the automated platform to monitor its environment, thus preventing unwanted incidents.
Details
A radar is an electromagnetic sensor used for detecting, locating, tracking, and recognizing objects of various kinds at considerable distances. It operates by transmitting electromagnetic energy toward objects, commonly referred to as targets, and observing the echoes returned from them. The targets may be aircraft, ships, spacecraft, automotive vehicles, and astronomical bodies, or even birds, insects, and rain. Besides determining the presence, location, and velocity of such objects, radar can sometimes obtain their size and shape as well. What distinguishes radar from optical and infrared sensing devices is its ability to detect faraway objects under adverse weather conditions and to determine their range, or distance, with precision.
Radar is an “active” sensing device in that it has its own source of illumination (a transmitter) for locating targets. It typically operates in the microwave region of the electromagnetic spectrum—measured in hertz (cycles per second), at frequencies extending from about 400 megahertz (MHz) to 40 gigahertz (GHz). It has, however, been used at lower frequencies for long-range applications (frequencies as low as several megahertz, which is the HF [high-frequency], or shortwave, band) and at optical and infrared frequencies (those of laser radar, or lidar). The circuit components and other hardware of radar systems vary with the frequency used, and systems range in size from those small enough to fit in the palm of the hand to those so enormous that they would fill several football fields.
Radar underwent rapid development during the 1930s and ’40s to meet the needs of the military. It is still widely employed by the armed forces, where many technological advances have originated. At the same time, radar has found an increasing number of important civilian applications, notably air traffic control, weather observation, remote sensing of the environment, aircraft and ship navigation, speed measurement for industrial applications and for law enforcement, space surveillance, and planetary observation.
Fundamentals of radar
Radar typically involves the radiating of a narrow beam of electromagnetic energy into space from an antenna. The narrow antenna beam scans a region where targets are expected. When a target is illuminated by the beam, it intercepts some of the radiated energy and reflects a portion back toward the radar system. Since most radar systems do not transmit and receive at the same time, a single antenna is often used on a time-shared basis for both transmitting and receiving.
A receiver attached to the output element of the antenna extracts the desired reflected signals and (ideally) rejects those that are of no interest. For example, a signal of interest might be the echo from an aircraft. Signals that are not of interest might be echoes from the ground or rain, which can mask and interfere with the detection of the desired echo from the aircraft. The radar measures the location of the target in range and angular direction. Range, or distance, is determined by measuring the total time it takes for the radar signal to make the round trip to the target and back. The angular direction of a target is found from the direction in which the antenna points at the time the echo signal is received. Through measurement of the location of a target at successive instants of time, the target’s recent track can be determined. Once this information has been established, the target’s future path can be predicted. In many surveillance radar applications, the target is not considered to be “detected” until its track has been established.
Pulse radar
The most common type of radar signal consists of a repetitive train of short-duration pulses. Consider a simple representation of a sine-wave pulse that might be generated by the transmitter of a medium-range radar designed for aircraft detection. The sine wave represents the variation with time of the output voltage of the transmitter. The numbers given here are meant only to be illustrative and are not necessarily those of any particular radar. They are, however, similar to what might be expected for a ground-based radar system with a range of about 50 to 60 nautical miles (90 to 110 km), such as the kind used for air traffic control at airports. The pulse width is taken to be 1 microsecond (10⁻⁶ second). Although such a pulse can be pictured as containing only a few cycles of the sine wave, in a radar system having the values indicated there would be 1,000 cycles within the pulse. The time between successive pulses is taken to be 1 millisecond (10⁻³ second), which corresponds to a pulse repetition frequency of 1 kilohertz (kHz). The power of the pulse, called the peak power, is taken here to be 1 megawatt. Since a pulse radar does not radiate continually, the average power is much less than the peak power. In this example, the average power is 1 kilowatt. The average power, rather than the peak power, is the measure of the capability of a radar system. Radars have average powers from a few milliwatts to as much as one or more megawatts, depending on the application.
A weak echo signal from a target might be as low as 1 picowatt (10⁻¹² watt). In short, the power levels in a radar system can be very large (at the transmitter) and very small (at the receiver).
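As a quick check on these figures, here is a minimal Python sketch (variable names are illustrative, and the duty-cycle relation is standard radar practice rather than something stated explicitly above):

# Average power of a pulse radar equals peak power times duty cycle,
# the fraction of time the transmitter is actually radiating.
peak_power_w = 1e6            # 1 megawatt peak power
pulse_width_s = 1e-6          # 1 microsecond pulse
prf_hz = 1e3                  # pulse repetition frequency of 1 kHz

duty_cycle = pulse_width_s * prf_hz            # 0.001
average_power_w = peak_power_w * duty_cycle    # 1,000 W, i.e. the 1 kW quoted above

# The "1,000 cycles within the pulse" implies a carrier frequency near 1 GHz:
carrier_hz = 1000 / pulse_width_s              # 1e9 Hz
print(duty_cycle, average_power_w, carrier_hz)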
Another example of the extremes encountered in a radar system is the timing. An air-surveillance radar (one that is used to search for aircraft) might scan its antenna 360 degrees in azimuth in a few seconds, but the pulse width might be about one microsecond in duration. Some radar pulse widths are even of nanosecond (10⁻⁹ second) duration.
Radar waves travel through the atmosphere at roughly 300,000 km per second (the speed of light). The range to a target is determined by measuring the time that a radar signal takes to travel out to the target and back. The range to the target is equal to cT/2, where c is the velocity of propagation of radar energy and T is the round-trip time as measured by the radar. Because the signal makes a round trip, each second of measured delay corresponds to 150,000 km of range. For example, if the time that it takes the signal to travel out to the target and back is measured by the radar to be 0.0006 second (600 microseconds), then the range of the target is 90 km. The ability to measure the range to a target accurately at long distances and under adverse weather conditions is radar’s most distinctive attribute. No other device can compete with radar in the measurement of range.
The range accuracy of a simple pulse radar depends on the width of the pulse: the shorter the pulse, the better the accuracy. Short pulses, however, require wide bandwidths in the receiver and transmitter (since bandwidth is equal to the reciprocal of the pulse width). A radar with a pulse width of one microsecond can measure the range to an accuracy of a few tens of metres or better. Some special radars can measure to an accuracy of a few centimetres. The ultimate range accuracy of the best radars is limited by the known accuracy of the velocity at which electromagnetic waves travel.
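A short sketch of the ranging arithmetic from the last two paragraphs (names are illustrative; the range-resolution formula c·τ/2 is the standard one and is used here as an assumption):

C_KM_PER_S = 300_000.0        # propagation speed of radio waves

def target_range_km(round_trip_time_s):
    # Range = c * T / 2; dividing by 2 removes the out-and-back trip.
    return C_KM_PER_S * round_trip_time_s / 2.0

print(target_range_km(600e-6))        # 600 microseconds -> 90.0 km, as above

pulse_width_s = 1e-6
bandwidth_hz = 1.0 / pulse_width_s    # reciprocal relation: ~1 MHz receiver bandwidth

# Range resolution (minimum separation of two echoes) for a simple pulse,
# which is distinct from the measurement accuracy quoted above:
resolution_m = 3.0e8 * pulse_width_s / 2.0    # 150 m for a 1-microsecond pulse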
Directive antennas and target direction
Almost all radars use a directive antenna—i.e., one that directs its energy in a narrow beam. (The beamwidth of an antenna of fixed size is inversely proportional to the radar frequency.) The direction of a target can be found from the direction in which the antenna is pointing when the received echo is at a maximum. A precise means for determining the direction of a target is the monopulse method, in which information about the angle of a target is obtained by comparing the amplitudes of signals received from two or more simultaneous receiving beams, each slightly offset (squinted) from the antenna’s central axis. A dedicated tracking radar—one that automatically follows a single target so as to determine its trajectory—generally has a narrow, symmetrical “pencil” beam. (A typical beamwidth might be about 1 degree.) Such a radar system can determine the location of the target in both azimuth angle and elevation angle. An aircraft-surveillance radar generally employs an antenna that radiates a “fan” beam, one that is narrow in azimuth (about 1 or 2 degrees) and broad in elevation (elevation beamwidths from 20 to 40 degrees or more). A fan beam allows only the measurement of the azimuth angle.
Doppler frequency and target velocity
Radar can extract the Doppler frequency shift of the echo produced by a moving target by noting how much the frequency of the received signal differs from the frequency of the signal that was transmitted. (The Doppler effect in radar is similar to the change in audible pitch experienced when a train whistle or the siren of an emergency vehicle moves past the listener.) A moving target will cause the frequency of the echo signal to increase if it is approaching the radar or to decrease if it is receding from the radar. For example, if a radar system operates at a frequency of 3,000 MHz and an aircraft is moving toward it at a speed of 400 knots (740 km per hour), the frequency of the received echo signal will be greater than that of the transmitted signal by about 4.1 kHz. The Doppler frequency shift in hertz is equal to 3.4 × f0 × vr, where f0 is the radar frequency in gigahertz and vr is the radial velocity (the rate of change of range) in knots.
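The 4.1-kHz example can be reproduced from the exact relation f_d = 2·vr/λ as well as from the rule of thumb just quoted (a sketch; the unit conversions are assumptions):

C_M_PER_S = 3.0e8                    # speed of light

f0_hz = 3.0e9                        # 3,000-MHz radar frequency
vr_m_per_s = 400.0 * 0.5144          # 400 knots converted to metres per second

wavelength_m = C_M_PER_S / f0_hz               # 0.1 m
doppler_hz = 2.0 * vr_m_per_s / wavelength_m   # ~4,115 Hz
rule_of_thumb_hz = 3.4 * 3.0 * 400.0           # 3.4 * f0[GHz] * vr[knots] = 4,080 Hz

print(doppler_hz, rule_of_thumb_hz)            # both round to about 4.1 kHz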
Since the Doppler frequency shift is proportional to radial velocity, a radar system that measures such a shift in frequency can provide the radial velocity of a target. The Doppler frequency shift can also be used to separate moving targets from stationary targets even when the echo signal from undesired clutter is much more powerful than the echo from the desired moving targets. A form of pulse radar that uses the Doppler frequency shift to eliminate stationary clutter is called either a moving-target indication (MTI) radar or a pulse Doppler radar, depending on the particular parameters of the signal waveform.
The above measurements of range, angle, and radial velocity assume that the target is a “point-scatterer.” Actual targets, however, are of finite size and can have distinctive shapes. The range profile of a finite-sized target can be determined if the range resolution of the radar is small compared with the target’s size in the range dimension. (The range resolution of a radar, given in units of distance, is a measure of the ability of a radar to separate two closely spaced echoes.) Some radars can have resolutions much smaller than one metre, which is quite suitable for determining the radial size and profile of many targets of interest.
The resolution in angle, or cross range, that can be obtained with conventional antennas is poor compared with that which can be obtained in range. It is possible, however, to achieve good resolution in angle by resolving in Doppler frequency (i.e., separating one Doppler frequency from another). If the radar is moving relative to the target (as when the radar is on an aircraft and the target is the ground), the Doppler frequency shift will be different for different parts of the target. Thus, the Doppler frequency shift can allow the various parts of the target to be resolved. The resolution in cross range derived from the Doppler frequency shift is far better than that achieved with a narrow-beam antenna. It is not unusual for the cross-range resolution obtained from Doppler frequency to be comparable to that obtained in the range dimension.
Radar imaging
Radar can distinguish one kind of target from another (such as a bird from an aircraft), and some systems are able to recognize specific classes of targets (for example, a commercial airliner as opposed to a military jet fighter). Target recognition is accomplished by measuring the size and speed of the target and by observing the target with high resolution in one or more dimensions. Propellers and jet engines modify the radar echo from aircraft and can assist in target recognition. The flapping of the wings of a bird in flight produces a characteristic modulation that can be used to recognize that a bird is present or even to distinguish one type of bird from another.
Cross-range resolution obtained from Doppler frequency, along with range resolution, is the basis for synthetic aperture radar (SAR). SAR produces an image of a scene that is similar, but not identical, to an optical photograph. One should not expect the image seen by radar “eyes” to be the same as that observed by optical eyes. Each provides different information. Radar and optical images differ because of the large difference in the frequencies involved; optical frequencies are approximately 100,000 times higher than radar frequencies.
SAR can operate from long range and through clouds or other atmospheric effects that limit optical and infrared imaging sensors. The resolution of a SAR image can be made independent of range, an advantage over passive optical imaging where the resolution worsens with increasing range. Synthetic aperture radars that map areas of the Earth’s surface with resolutions of a few metres can provide information about the nature of the terrain and what is on the surface.
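The range-independence of SAR resolution can be illustrated with the standard first-order approximations (these formulas and numbers are not given in the text above, so treat them as assumptions): a real antenna of length D at wavelength λ has a beamwidth of about λ/D radians, hence cross-range resolution of about R·λ/D at range R, while a focused SAR achieves roughly D/2 regardless of range.

# Illustrative comparison of real-aperture and synthetic-aperture resolution.
wavelength_m = 0.03          # X-band example (assumption)
antenna_len_m = 2.0          # physical antenna length D (assumption)
range_m = 100_000.0          # 100 km to the imaged scene (assumption)

real_aperture_res_m = range_m * wavelength_m / antenna_len_m   # 1,500 m, grows with range
sar_res_m = antenna_len_m / 2.0                                # 1 m, independent of range
print(real_aperture_res_m, sar_res_m)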
A SAR operates on a moving vehicle, such as an aircraft or spacecraft, to image stationary objects or planetary surfaces. Since relative motion is the basis for the Doppler resolution, high resolution (in cross range) also can be accomplished if the radar is stationary and the target is moving. This is called inverse synthetic aperture radar (ISAR). Both the target and the radar can be in motion with ISAR.
Additional Information
Radars are critical for understanding the weather; they allow us to “see” inside clouds and help us to observe what is really happening. Working together, engineers, technicians, and scientists collectively design, develop and operate the advanced technology of radars that are used to study the atmosphere.
What are Weather Radars?
Doppler weather radars are remote sensing instruments capable of detecting particle type (rain, snow, hail, insects, etc.), intensity, and motion. Radar data can be used to determine the structure of storms and to help predict their severity.
The Electromagnetic Spectrum
Energy is emitted at various frequencies and wavelengths, from long-wavelength radio waves to short-wavelength gamma rays. Radars emit microwave energy, which sits toward the longer-wavelength end of the spectrum.
How Do Radars Work?
The radar transmits a focused pulse of microwave energy (yup, just like a microwave oven or a cell phone, but stronger) at an object, most likely a cloud. Part of this beam of energy bounces back and is measured by the radar, providing information about the object. Radar can measure precipitation size, quantity, speed, and direction of movement within about a 100-mile radius of its location.
Doppler radar is a specific type of radar that uses the Doppler effect to gather velocity data from the particles that are being measured. For example, a Doppler radar transmits a signal that gets reflected off raindrops within a storm. The reflected radar signal is measured by the radar's receiver with a change in frequency. That frequency shift is directly related to the motion of the raindrops.
Why does NCAR use radars for research?
Atmospheric scientists use different types of ground-based and aircraft-mounted radar to study weather and climate. Radar can be used to help study severe weather events such as tornadoes and hurricanes, or long-term climate processes in the atmosphere.
Ground-based Research Radar
The NCAR S-Band Dual-Polarization Doppler Radar (S-Pol) is a 10-cm wavelength weather radar initially designed and fielded by NCAR in the 1990s. Continuously modified and improved, this state-of-the-art radar system now includes dual-wavelength capability: when a 0.8-cm wavelength Ka-band radar is added, the system is known as S-PolKa. S-PolKa’s mission is to promote a better understanding of weather and its causes and thereby ultimately provide improved forecasting of severe storms, tornadoes, floods, hail, damaging winds, aircraft icing conditions, and heavy snow.
Airborne Research Radar
In the air, research aircraft can be outfitted with an array of radars. The NCAR HIAPER Cloud Radar (HCR) can be mounted to the underside of the wing of the NSF/NCAR HIAPER research aircraft (a modified Gulfstream V jet) and delivers high quality observations of winds, precipitation and other particles. It was designed and manufactured by a collaborative team of mechanical, electrical, aerospace, and software engineers; research scientists; and instrument makers from EOL.
Collapse II/Collapsed/Collapsing Quotes
1. I come from a country in which I experienced economic collapse. - Angela Merkel
2. I can feel it in my bones that no matter what we do, even if we do not do anything, the revolutionary government of Madame Cory Aquino will collapse. - Ferdinand Marcos
3. If the U.N. is ineffective, the whole concept of multilateralism will collapse. - Sushma Swaraj
4. I would push myself so much that in the end I would collapse and I would have to be admitted to hospital, I would pray to God to save me, promise that I would be more careful in future. And then I would do it all over again. - Milkha Singh
5. My life collapsed. People ran from me because suddenly it was 'Oh my God! It's over for her now!' - Nicole Kidman
6. We would look up at the night sky together, and although Stephen wasn't actually very good at detecting constellations, he would tell me about the expanding universe and the possibility of it contracting again and describe a star collapsing in on itself to form a black hole in a way that was quite easy to understand. - Jane Hawking.
Q: Did you hear about the carrot detective?
A: He got to the root of every case.
* * *
Q: How can you make a soup rich?
A: Add 14 carrots (carats) to it.
* * *
Q: Why is a carrot orange and pointy?
A: Because if it was green and round it would want to pea!
* * *
Q: How do you kill a salad?
A: You go for the carrot-id artery.
* * *
Q: Where does a carrot wear a mask?
A: To the mascarrot ball. (Masquerade).
* * *
Hypotension
Gist
Hypotension (low blood pressure) means blood flows at lower than normal force, typically below 90/60 mmHg, potentially depriving organs of oxygen, causing dizziness, fatigue, blurred vision, or fainting, and can stem from dehydration, blood loss, medications, or underlying conditions like heart issues, with treatment focusing on managing the cause, increasing fluids/salt, or compression. While some have naturally low BP without issues, sudden drops (orthostatic hypotension) are common, requiring prompt attention to prevent serious complications like shock.
Hypotension treatment focuses on raising blood pressure through lifestyle changes (more water, salt intake with doctor's advice, smaller meals, avoiding sudden movements, compression stockings) and, if needed, medications like midodrine or fludrocortisone, or IV fluids for severe cases, all depending on the underlying cause, with severe drops requiring emergency care.
Summary
Hypotension is a condition in which the blood pressure is abnormally low, either because of reduced blood volume or because of increased blood-vessel capacity. Though not in itself an indication of ill health, it often accompanies disease.
Extensive bleeding is an obvious cause of reduced blood volume that leads to hypotension. There are other possible causes. A person who has suffered an extensive burn loses blood plasma—blood minus the red and white blood cells and the platelets. Blood volume is reduced in a number of conditions involving loss of salt and water from the tissues—as in excessive sweating and diarrhea—and its replacement with water from the blood. Loss of water from the blood to the tissues may result from exposure to cold temperatures. Also, a person who remains standing for as long as one-half hour may temporarily lose as much as 15 percent of the blood water into the tissues of the legs.
Orthostatic hypotension—low blood pressure upon standing up—seems to stem from a failure in the autonomic nervous system. Normally, when a person stands up, there is a reflex constriction of the small arteries and veins to offset the effects of gravity. Hypotension from an increase in the capacity of the blood vessels is a factor in fainting (see syncope). Hypotension is also a factor in poliomyelitis, in shock, and in overdose of depressant drugs, such as barbiturates.
Details
Hypotension, also known as low blood pressure, is a cardiovascular condition characterized by abnormally reduced blood pressure. Blood pressure is the force of blood pushing against the walls of the arteries as the heart pumps out blood and is indicated by two numbers, the systolic blood pressure (the top number) and the diastolic blood pressure (the bottom number), which are the maximum and minimum blood pressures within the cardiac cycle, respectively. A systolic blood pressure of less than 90 millimeters of mercury (mmHg) or diastolic of less than 60 mmHg is generally considered to be hypotension. Different numbers apply to children. However, in practice, blood pressure is considered too low only if noticeable symptoms are present.
Symptoms may include dizziness, lightheadedness, confusion, feeling tired, weakness, headache, blurred vision, nausea, neck or back pain, an irregular heartbeat or feeling that the heart is skipping beats or fluttering, and fainting. Hypotension is the opposite of hypertension, which is high blood pressure. It is best understood as a physiological state rather than a disease. Severely low blood pressure can deprive the brain and other vital organs of oxygen and nutrients, leading to a life-threatening condition called shock. Shock is classified based on the underlying cause, including hypovolemic shock, cardiogenic shock, distributive shock, and obstructive shock.
Hypotension can be caused by strenuous exercise, excessive heat, low blood volume (hypovolemia), hormonal changes, widening of blood vessels, anemia, vitamin B12 deficiency, anaphylaxis, heart problems, or endocrine problems. Some medications can also lead to hypotension. There are also syndromes that can cause hypotension in patients including orthostatic hypotension, vasovagal syncope, and other rarer conditions.
For many people, excessively low blood pressure can cause dizziness and fainting or indicate serious heart, endocrine or neurological disorders.
For some people who exercise and are in top physical condition, low blood pressure could be normal. A single session of exercise can induce hypotension, and water-based exercise can induce a hypotensive response.
Treatment depends on the cause of the low blood pressure. Treatment of hypotension may include the use of intravenous fluids or vasopressors. When using vasopressors, trying to achieve a mean arterial pressure (MAP) of greater than 70 mmHg does not appear to result in better outcomes than trying to achieve an MAP of greater than 65 mmHg in adults.
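The MAP figures above can be related to an ordinary cuff reading through the common clinical approximation MAP ≈ diastolic + (systolic − diastolic)/3; this formula is standard practice rather than something stated in the passage, so treat the sketch below as illustrative.

def mean_arterial_pressure(systolic_mmhg, diastolic_mmhg):
    # Diastole occupies roughly two-thirds of the cardiac cycle at resting
    # heart rates, so diastolic pressure is weighted more heavily.
    return diastolic_mmhg + (systolic_mmhg - diastolic_mmhg) / 3.0

print(mean_arterial_pressure(120, 80))   # ~93 mmHg for a typical normal reading
print(mean_arterial_pressure(90, 60))    # ~70 mmHg, near the vasopressor targets above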
Additional Information
Low blood pressure is a reading below 90/60 mm Hg. Many issues can cause low blood pressure. Treatment varies depending on what’s causing it. Symptoms of low blood pressure include dizziness and fainting, but many people don’t have symptoms. The cause also affects your prognosis.
Overview:
What is low blood pressure?
Hypotension, or low blood pressure, is when your blood pressure is much lower than expected. It can happen either as a condition on its own or as a symptom of a wide range of conditions. It may not cause symptoms. But when it does, you may need medical attention.
Types of low blood pressure
Hypotension has two definitions:
* Absolute hypotension: Your resting blood pressure is below 90/60 millimeters of mercury (mm Hg).
* Orthostatic hypotension: Your blood pressure stays low for longer than three minutes after you stand up from a sitting position. (It’s normal for your blood pressure to drop briefly when you change positions, but not for that long.) The drop must be 20 mm Hg or more for your systolic (top) pressure and 10 mm Hg or more for your diastolic (bottom) pressure. Another name for this is postural hypotension because it happens with changes in posture.
Measuring blood pressure involves two numbers:
* Systolic (top number): This is the pressure on your arteries each time your heart beats.
* Diastolic (bottom number): This is how much pressure your arteries are under between heartbeats.
What is considered low blood pressure?
Low blood pressure is below 90/60 mm Hg. Normal blood pressure is above that, up to 120/80 mm Hg.
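A minimal sketch tying together the numeric definitions above (the helper names are hypothetical, and real clinical assessment depends on symptoms and context, as the text notes):

def is_absolute_hypotension(systolic_mmhg, diastolic_mmhg):
    # Resting blood pressure below 90/60 mm Hg.
    return systolic_mmhg < 90 or diastolic_mmhg < 60

def is_orthostatic_drop(sys_before, dia_before, sys_standing, dia_standing):
    # A drop of at least 20 mm Hg systolic or 10 mm Hg diastolic that
    # persists beyond about three minutes after standing.
    return (sys_before - sys_standing) >= 20 or (dia_before - dia_standing) >= 10

print(is_absolute_hypotension(85, 55))        # True
print(is_orthostatic_drop(120, 80, 95, 72))   # True: a 25 mm Hg systolic drop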
How common is low blood pressure?
Because low blood pressure is common without any symptoms, it’s impossible to know how many people it affects. However, orthostatic hypotension seems to be more and more common as you get older. An estimated 5% of people have it at age 50, while that figure climbs to more than 30% in people over 70.
Who does low blood pressure affect?
Hypotension can affect people of any age and background, depending on why it happens. However, it’s more likely to cause symptoms in people over 50 (especially orthostatic hypotension). It can also happen (with no symptoms) to people who are very physically active, which is more common in younger people.
Symptoms and Causes:
What are the symptoms of low blood pressure?
Low blood pressure symptoms include:
* Dizziness or feeling lightheaded.
* Fainting or passing out (syncope).
* Nausea or vomiting.
* Distorted or blurred vision.
* Fast, shallow breathing.
* Fatigue or weakness.
* Feeling tired, sluggish or lethargic.
* Confusion or trouble concentrating.
* Agitation or other unusual changes in behavior (a person not acting like themselves).
For people with symptoms, the effects depend on why hypotension is happening, how fast it develops and what caused it. Slow decreases in blood pressure happen normally, so hypotension becomes more common as people get older. Fast decreases in blood pressure can mean certain parts of your body aren’t getting enough blood flow. That can have effects that are unpleasant, disruptive or even dangerous.
Usually, your body can automatically control your blood pressure and keep it from dropping too much. If it starts to drop, your body tries to make up for that, either by speeding up your heart rate or constricting blood vessels to make them narrower. Symptoms of hypotension happen when your body can’t offset the drop in blood pressure.
For many people, hypotension doesn’t cause any symptoms. Many people don’t even know their blood pressure is low unless they measure their blood pressure.
What are the possible signs of low blood pressure?
Your healthcare provider may observe these signs of low blood pressure:
* A heart rate that’s too slow or too fast.
* A skin color that looks lighter than it usually does.
* Cool kneecaps.
* Low cardiac output (how much blood your heart pumps).
* Low urine (pee) output.
What causes low blood pressure?
Hypotension can happen for a wide range of reasons. Causes of low blood pressure include:
* Orthostatic hypotension: This happens when you stand up too quickly and your body can’t compensate with more blood flow to your brain.
* Central nervous system diseases: Conditions like Parkinson’s disease can affect how your nervous system controls your blood pressure. People with these conditions may feel the effects of low blood pressure after eating because their digestive systems use more blood as they digest food.
* Low blood volume: Blood loss from severe injuries can cause low blood pressure. Dehydration can also contribute to low blood volume.
* Life-threatening conditions: These conditions include irregular heart rhythms (arrhythmias), pulmonary embolism (PE), heart attacks and collapsed lung. Life-threatening allergic reactions (anaphylaxis) or immune reactions to severe infections (sepsis) can also cause hypotension.
* Heart and lung conditions: You can get hypotension when your heart beats too quickly or too slowly, or if your lungs aren’t working as they should. Advanced heart failure (weak heart muscle) is another cause.
* Prescription medications: Hypotension can happen with medications that treat high blood pressure, heart failure, erectile dysfunction, neurological problems, depression and more. Don’t stop taking any prescribed medicine unless your provider tells you to stop.
* Alcohol or recreational drugs: Recreational drugs can lower your blood pressure, as can alcohol (for a short time). Certain herbal supplements, vitamins or home remedies can also lower your blood pressure. This is why you should always include these when you tell your healthcare provider what medications you’re taking.
* Pregnancy: Orthostatic hypotension is possible in the first and second trimesters of pregnancy. Bleeding or other complications of pregnancy can also cause low blood pressure.
* Extreme temperatures: Being too hot or too cold can affect hypotension and make its effects worse.
What are the complications of low blood pressure?
Complications that can happen because of hypotension include:
* Falls and fall-related injuries: These are the biggest risks with hypotension because it can cause dizziness and fainting. Falls can lead to broken bones, concussions and other serious or even life-threatening injuries. If you have hypotension, preventing falls should be one of your biggest priorities.
* Shock: When your blood pressure is low, that can affect your organs by reducing the amount of blood they get. That can cause organ damage or even shock (where your body starts to shut down because of limited blood flow and oxygen).
* Heart problems or stroke: Low blood pressure can cause your heart to try to compensate by pumping faster or harder. Over time, that can cause permanent heart damage and even heart failure. It can also cause problems like deep vein thrombosis (DVT) and stroke because blood isn’t flowing like it should, causing clots to form.
Diagnosis and Tests:
How is low blood pressure diagnosed?
Hypotension itself is easy to diagnose. Taking your blood pressure is all you need to do. But figuring out why you have hypotension is another story. If you have symptoms, a healthcare provider will likely use a variety of tests to figure out why it’s happening and if there’s any danger to you because of it.
What tests will be done to diagnose low blood pressure?
Your provider may recommend the following tests:
Lab testing
Tests on your blood and pee (urine) can look for any potential problems, like:
* Diabetes.
* Vitamin deficiencies.
* Thyroid or hormone problems.
* Low iron levels (anemia).
* Pregnancy (for anyone who can become pregnant).
Imaging
If providers suspect a heart or lung problem is behind your hypotension, they’ll likely use imaging tests to see if they’re right. These tests include:
* X-rays.
* Computed tomography (CT) scans.
* Magnetic resonance imaging (MRI).
* Echocardiogram or similar ultrasound-based tests.
Diagnostic testing
These tests look for specific problems with your heart or other body systems.
* Electrocardiogram (ECG or EKG).
* Exercise stress testing.
* Tilt table test (can help in diagnosing orthostatic hypotension).

Mobile Phone
Gist
A cell phone (or mobile phone) is a portable, wireless device that uses cellular networks (radio waves connecting to cell towers) for voice calls and data, evolving from simple phones to powerful smartphones with internet, apps, cameras, and more, replacing traditional phones for communication.
A cellphone, also known as a mobile phone, allows users to make and receive calls over a radio frequency network while on the move. Modern cellphones support additional services beyond calls such as texting, multimedia messaging, email, internet access, Bluetooth, apps, and photos.
Summary
A mobile phone or cell phone is a portable wireless telephone that allows users to make and receive calls over a radio frequency link while moving within a designated telephone service area, unlike fixed-location phones (landline phones). This radio frequency link connects to the switching systems of a mobile phone operator, providing access to the public switched telephone network (PSTN). Modern mobile telephony relies on a cellular network architecture, which is why mobile phones are often referred to as 'cell phones' in North America.
Beyond traditional voice communication, digital mobile phones have evolved to support a wide range of additional services. These include text messaging, multimedia messaging, email, and internet access (via LTE, 5G NR or Wi-Fi), as well as short-range wireless technologies like Bluetooth, infrared, and ultra-wideband (UWB).
Mobile phones also support a variety of multimedia capabilities, such as digital photography, video recording, and gaming. In addition, they enable multimedia playback and streaming, including video content, as well as radio and television streaming. Furthermore, mobile phones offer satellite-based services, such as navigation and messaging, as well as business applications and payment solutions (via scanning QR codes or near-field communication (NFC)). Mobile phones offering only basic features are often referred to as feature phones (slang: dumbphones), while those with advanced computing power are known as smartphones.
The first handheld mobile phone was demonstrated by Martin Cooper of Motorola in New York City on 3 April 1973, using a handset weighing c. 2 kilograms (4.4 lbs). In 1979, Nippon Telegraph and Telephone (NTT) launched the world's first cellular network in Japan. In 1983, the DynaTAC 8000x became the first commercially available handheld mobile phone. From 1993 to 2024, worldwide mobile phone subscriptions grew to over 9.1 billion, enough to provide one for every person on Earth. In 2024, the top smartphone manufacturers worldwide were Samsung, Apple and Xiaomi; smartphone sales represented about 50 percent of total mobile phone sales. For feature phones as of 2016, the top-selling brands were Samsung, Nokia and Alcatel.
Mobile phones are considered an important human invention as they have been one of the most widely used and sold pieces of consumer technology. The growth in popularity has been rapid in some places; for example, in the UK, the total number of mobile phones overtook the number of houses in 1999. Today, mobile phones are globally ubiquitous, and in almost half the world's countries, over 90% of the population owns at least one.
Details
A cell phone is a wireless telephone that permits telecommunication within a defined area that may include hundreds of square miles, using radio waves in the 800–900 megahertz (MHz) band. To implement a cell-phone system, a geographic area is broken into smaller areas, or cells, usually mapped as uniform hexagons but in fact overlapping and irregularly shaped. Each cell is equipped with a low-powered radio transmitter and receiver that permit propagation of signals between cell-phone users.
Cellular telephones, or simply cell phones, are portable devices that may be used in motor vehicles or by pedestrians. Communicating by radio waves, they permit a significant degree of mobility within a defined serving region that may range in area from a few city blocks to hundreds of square kilometres. The first mobile and portable subscriber units for cellular systems were large and heavy. With significant advances in component technology, though, the weight and size of portable transceivers have been significantly reduced. In this section, the concept of cell phones and the development of cellular systems are discussed.
Cellular communication
All cellular telephone systems exhibit several fundamental characteristics, as summarized in the following:
1) The geographic area served by a cellular system is broken up into smaller geographic areas, or cells. Uniform hexagons most frequently are employed to represent these cells on maps and diagrams; in practice, though, radio waves do not confine themselves to hexagonal areas, so the actual cells have irregular shapes.
2) All communication with a mobile or portable instrument within a given cell is made to a base station that serves the cell.
3) Because of the low transmitting power of battery-operated portable instruments, specific sending and receiving frequencies assigned to a cell may be reused in other cells within the larger geographic area. Thus, the spectral efficiency of a cellular system (that is, the uses to which it can put its portion of the radio spectrum) is increased by a factor equal to the number of times a frequency may be reused within its service area.
4) As a mobile instrument proceeds from one cell to another during the course of a call, a central controller automatically reroutes the call from the old cell to the new cell without a noticeable interruption in the signal reception. This process is known as handoff. The central controller, or mobile telephone switching office (MTSO), thus acts as an intelligent central office switch that keeps track of the movement of the mobile subscriber.
5) As demand for the radio channels within a given cell increases beyond the capacity of that cell (as measured by the number of calls that may be supported simultaneously), the overloaded cell is “split” into smaller cells, each with its own base station and central controller. The radio-frequency allocations of the original cellular system are then rearranged to account for the greater number of smaller cells.
Frequency reuse between discontiguous cells and the splitting of cells as demand increases are the concepts that distinguish cellular systems from other wireless telephone systems. They allow cellular providers to serve large metropolitan areas that may contain hundreds of thousands of customers.
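A rough sketch of how frequency reuse multiplies capacity, using the standard cluster model (the cluster size and cell count here are illustrative assumptions, not figures from the text):

# Channels are divided among a "cluster" of adjacent cells; the whole set
# of frequencies is then reused by every cluster in the service area.
total_channels = 832      # e.g., the full AMPS allocation discussed below
cluster_size = 7          # cells per reuse cluster (typical value; assumption)
num_cells = 140           # cells covering a metropolitan area (assumption)

channels_per_cell = total_channels // cluster_size   # ~118 channels in each cell
reuses = num_cells // cluster_size                   # spectrum reused 20 times over
system_capacity = channels_per_cell * num_cells      # 16,520 simultaneous calls

print(channels_per_cell, reuses, system_capacity)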
Development of cellular systems
In the United States, interconnection of mobile transmitters and receivers with the public switched telephone network (PSTN) began in 1946, with the introduction of mobile telephone service (MTS) by the American Telephone & Telegraph Company (AT&T). In the U.S. MTS system, a user who wished to place a call from a mobile phone had to search manually for an unused channel before placing the call. The user then spoke with a mobile operator, who actually dialed the call over the PSTN. The radio connection was simplex—i.e., only one party could speak at a time, the call direction being controlled by a push-to-talk switch in the mobile handset. In 1964 AT&T introduced the improved mobile telephone service (IMTS). This provided full duplex operation, automatic dialing, and automatic channel searching. Initially 11 channels were provided, but in 1969 an additional 12 channels were made available. Since only 11 (or 12) channels were available for all users of the system within a given geographic area (such as the metropolitan area around a large city), the IMTS system faced a high demand for a very limited channel resource. Moreover, each base-station antenna had to be located on a tall structure and had to transmit at high power in order to provide coverage throughout the entire service area. Because of these high power requirements, all subscriber units in the IMTS system were motor-vehicle-based instruments that carried large storage batteries.
During this time a truly cellular system, known as the advanced mobile phone system, or AMPS, was developed primarily by AT&T and Motorola, Inc. AMPS was initially based on 666 paired voice channels, spaced every 30 kilohertz in the 800-megahertz region; the allocation was later expanded to 832 channels. The system employed an analog modulation approach—frequency modulation, or FM—and was designed from the outset to support subscriber units for use both in automobiles and by pedestrians. It was publicly introduced in Chicago in 1983 and was a success from the beginning. At the end of the first year of service, there were a total of 200,000 AMPS subscribers throughout the United States; five years later there were more than 2,000,000. In response to expected service shortages, the American cellular industry proposed several methods for increasing capacity without requiring additional spectrum allocations. One analog FM approach, proposed by Motorola in 1991, was known as narrowband AMPS, or NAMPS. In NAMPS systems each existing 30-kilohertz voice channel was split into three 10-kilohertz channels. Thus, in place of the 832 channels available in AMPS systems, the NAMPS system offered 2,496 channels. A second approach, developed by a committee of the Telecommunications Industry Association (TIA) in 1988, employed digital modulation and digital voice compression in conjunction with a time-division multiple access (TDMA) method; this also permitted three new voice channels in place of one AMPS channel. Finally, in 1994 there surfaced a third approach, developed originally by Qualcomm, Inc., but also adopted as a standard by the TIA. This third approach used a form of spread spectrum multiple access known as code-division multiple access (CDMA)—a technique that, like the original TIA approach, combined digital voice compression with digital modulation. (For more information on the techniques of information compression, signal modulation, and multiple access, see telecommunications.) The CDMA system offered 10 to 20 times the capacity of existing AMPS cellular techniques. All of these improved-capacity cellular systems were eventually deployed in the United States, but, since they were incompatible with one another, they supported rather than replaced the older AMPS standard.
Although AMPS was the first cellular system to be developed, a Japanese system was the first cellular system to be deployed, in 1979. Other systems that preceded AMPS in operation include the Nordic mobile telephone (NMT) system, deployed in 1981 in Denmark, Finland, Norway, and Sweden, and the total access communication system (TACS), deployed in the United Kingdom in 1983. A number of other cellular systems were developed and deployed in many more countries in the following years. All of them were incompatible with one another. In 1988 a group of government-owned public telephone bodies within the European Community announced the digital global system for mobile communications, referred to as GSM, the first such system that would permit any cellular user in one European country to operate in another European country with the same equipment. GSM soon became ubiquitous throughout Europe.
The analog cellular systems of the 1980s are now referred to as “first-generation” (or 1G) systems, and the digital systems that began to appear in the late 1980s and early ’90s are known as the “second generation” (2G). Since the introduction of 2G cell phones, various enhancements have been made in order to provide data services and applications such as Internet browsing, two-way text messaging, still-image transmission, and mobile access by personal computers. One of the most successful applications of this kind is iMode, launched in 1999 in Japan by NTT DoCoMo, the mobile service division of the Nippon Telegraph and Telephone Corporation. Supporting Internet access to selected Web sites, interactive games, information retrieval, and text messaging, iMode became extremely successful; within three years of its introduction, more than 35 million users in Japan had iMode-enabled cell phones.
Beginning in 1985, a study group of the Geneva-based International Telecommunication Union (ITU) began to consider specifications for Future Public Land Mobile Telephone Systems (FPLMTS). These specifications eventually became the basis for a set of “third-generation” (3G) cellular standards, known collectively as IMT-2000. The 3G standards are based loosely on several attributes: the use of CDMA technology; the ability eventually to support three classes of users (vehicle-based, pedestrian, and fixed); and the ability to support voice, data, and multimedia services. The world’s first 3G service began in Japan in October 2001 with a system offered by NTT DoCoMo. Soon 3G service was being offered by a number of different carriers in Japan, South Korea, the United States, and other countries. Several new types of service compatible with the higher data rates of 3G systems have become commercially available, including full-motion video transmission, image transmission, location-aware services (through the use of global positioning system [GPS] technology), and high-rate data transmission.
The increasing demands placed on mobile telephones to handle even more data than 3G could led to the development of 4G technology. In 2008 the ITU set forward a list of requirements for what it called IMT-Advanced, or 4G; these requirements included data rates of 1 gigabit per second for a stationary user and 100 megabits per second for a moving user. The ITU in 2010 decided that two technologies, LTE-Advanced (Long Term Evolution; LTE) and WirelessMan-Advanced (also called WiMAX), met the requirements. The Swedish telephone company TeliaSonera introduced the first 4G LTE network in Stockholm in 2009.
Airborne cellular systems
In addition to the terrestrial cellular phone systems described above, there also exist several systems that permit the placement of telephone calls to the PSTN by passengers on commercial aircraft. These in-flight telephones, known by the generic name aeronautical public correspondence (APC) systems, are of two types: terrestrial-based, in which telephone calls are placed directly from an aircraft to an en route ground station; and satellite-based, in which telephone calls are relayed via satellite to a ground station. In the United States the North American terrestrial system (NATS) was introduced by GTE Corporation in 1984. Within a decade the system was installed in more than 1,700 aircraft, with ground stations in the United States providing coverage over most of the United States and southern Canada. A second-generation system, GTE Airfone GenStar, employed digital modulation. In Europe the European Telecommunications Standards Institute (ETSI) adopted a terrestrial APC system known as the terrestrial flight telephone system (TFTS) in 1992. This system employs digital modulation methods and operates in the 1,670–1,675- and 1,800–1,805-megahertz bands. In order to cover most of Europe, the ground stations must be spaced every 50 to 700 km (30 to 435 miles).
Satellite-based telephone communication
In order to augment the terrestrial and aircraft-based mobile telephone systems, several satellite-based systems have been put into operation. The goal of these systems is to permit ready connection to the PSTN from anywhere on Earth’s surface, especially in areas not presently covered by cellular telephone service. A form of satellite-based mobile communication has been available for some time in airborne cellular systems that utilize Inmarsat satellites. However, the Inmarsat satellites are geostationary, remaining approximately 35,000 km (22,000 miles) above a single location on Earth’s surface. Because of this high-altitude orbit, Earth-based communication transceivers require high transmitting power, large communication antennas, or both in order to communicate with the satellite. In addition, such a long communication path introduces a noticeable delay, on the order of a quarter-second, in two-way voice conversations. One viable alternative to geostationary satellites would be a larger system of satellites in low Earth orbit (LEO). Orbiting less than 1,600 km (1,000 miles) above Earth, LEO satellites are not geostationary and therefore cannot provide constant coverage of specific areas on Earth. Nevertheless, by allowing radio communications with a mobile instrument to be handed off between satellites, an entire constellation of satellites can assure that no call will be dropped simply because a single satellite has moved out of range.
The first LEO system intended for commercial service was the Iridium system, designed by Motorola, Inc., and owned by Iridium LLC, a consortium made up of corporations and governments from around the world. The Iridium concept employed a constellation of 66 satellites orbiting in six planes around Earth. They were launched from May 1997 to May 1998, and commercial service began in November 1998. Each satellite, orbiting at an altitude of 778 kilometres (483 miles), had the capability to transmit 48 spot beams to Earth. Meanwhile, all the satellites were in communication with one another via 23-gigahertz radio “crosslinks,” thus permitting ready handoff between satellites when communicating with a fixed or mobile user on Earth. The crosslinks provided an uninterrupted communication path between the satellite serving a user at any particular instant and the satellite connecting the entire constellation with the gateway ground station to the PSTN. In this way, the 66 satellites provided continuous telephone communication service for subscriber units around the globe. However, the service failed to attract sufficient subscribers, and Iridium LLC went out of business in March 2000. Its assets were acquired by Iridium Satellite LLC, which continued to provide worldwide communication service to the U.S. Department of Defense as well as business and individual users.
Another LEO system, Globalstar, consisted of 48 satellites that were launched about the same time as the Iridium constellation. Globalstar began offering service in October 1999, though it too went into bankruptcy, in February 2002; a reorganized Globalstar LP continued to provide service thereafter.
Smartphone
A smartphone is a mobile telephone with a display screen (typically a liquid crystal display, or LCD), built-in personal information management programs (such as an electronic calendar and address book), and an operating system (OS) that allows other computer software to be installed for Web browsing, email, music, video, and other applications. A smartphone may be thought of as a handheld computer integrated within a mobile telephone.
The first smartphone was designed by IBM and sold by BellSouth (formerly part of the AT&T Corporation) in 1993. It included a touchscreen interface for accessing its calendar, address book, calculator, and other functions. As the market matured and solid-state computer memory and integrated circuits became less expensive over the following decade, smartphones became more computer-like, and more advanced services, such as Internet access, became possible. Advanced services became ubiquitous with the introduction of the so-called third-generation (3G) mobile phone networks in 2001. Before 3G, most mobile phones could send and receive data at a rate sufficient for telephone calls and text messages. Using 3G, communication takes place at bit-rates high enough for sending and receiving photographs, video clips, music files, e-mails, and more. Most smartphone manufacturers license an operating system, such as Microsoft Corporation’s Windows Mobile OS, Symbian OS, Google’s Android OS, or Palm OS. Research in Motion’s BlackBerry and Apple Inc.’s iPhone have their own proprietary systems.
Smartphones contain either a keyboard integrated with the telephone number pad or a standard “QWERTY” keyboard for text messaging, e-mailing, and using Web browsers. “Virtual” keyboards can be integrated into a touch-screen design. Smartphones often have a built-in camera for recording and transmitting photographs and short videos. In addition, many smartphones can access Wi-Fi “hot spots” so that users can access VoIP (voice over Internet protocol) rather than pay cellular telephone transmission fees. The growing capabilities of handheld devices and transmission protocols have enabled a growing number of inventive and fanciful applications—for instance, “augmented reality,” in which a smartphone’s global positioning system (GPS) location chip can be used to overlay the phone’s camera view of a street scene with local tidbits of information, such as the identity of stores, points of interest, or real estate listings.
4G
4G refers to the fourth generation of cellular network technology, first introduced in the late 2000s and early 2010s. Compared to preceding third-generation (3G) technologies, 4G has been designed to support all-IP communications and broadband services, and eliminates circuit switching in voice telephony. It also has considerably higher data bandwidth compared to 3G, enabling a variety of data-intensive applications such as high-definition media streaming and the expansion of Internet of Things (IoT) applications.
The earliest deployed technologies marketed as "4G" were Long Term Evolution (LTE), developed by the 3GPP group, and Mobile Worldwide Interoperability for Microwave Access (Mobile WiMAX), based on IEEE specifications. These provided significant enhancements over earlier 3G and 2G technologies.
5G
5G, fifth-generation telecommunications technology. Introduced in 2019 and now globally deployed, 5G delivers faster connectivity with higher bandwidth and “lower latency” (shorter delay times), improving the performance of phone calls, streaming, videoconferencing, gaming, and business applications as well as the responsiveness of connected systems and mobile apps. 5G can double the download speeds for smartphones and improve performance considerably more for devices tied to the Internet of Things (IoT).
5G technology improves the data processing of more-advanced digital operations such as those tied to machine learning (ML), artificial intelligence (AI), virtual reality (VR), and augmented reality (AR), improving performance and the user experience alike. It also better supports autonomous vehicles, drones, and other robotic systems.
How 5G works
5G signals rely on a different part of the radiofrequency spectrum than previous versions of cellular technology. As a result, mobile phones and other devices must be built with a specific 5G microchip.
Three primary types of 5G technology exist: low-band networks that support a wide coverage area but increase speeds only by about 20 percent over 4G; high-band networks that deliver ultrafast connectivity but which are limited by distance and access to 5G base stations (which transmit the signals for the technology); and mid-band networks that balance both speed and breadth of coverage. 5G also supports “OpenRoaming” capabilities that allow a user to switch seamlessly and automatically from a cellular to a Wi-Fi connection while traveling, eliminating any interruption of service and the need for entering passwords to access the latter.
Telecom providers use a different type of antenna, known as MIMO (multiple-input multiple-output), to transmit 5G signals. This does not require the traditional large cell tower (base station) but can be deployed through a multiplicity of “small cells” (which are the micro boxes commonly seen on poles and lamp posts). Many observers see this as an aesthetic improvement to the city landscape. Proximity to these cells remains an issue globally, however, especially for rural and remote regions, underscoring the current limitations of 5G.
Security concerns accompany changing technologies. Since 5G networks rely on cloud-based data storage, they are susceptible to the same possible dangers as other types of cellular and noncellular networks, including data damage, cyberattacks, and theft. Additionally, companies must be mindful of data-point vulnerabilities during a transition to 5G from networks with different security capabilities.
How 5G is used
Besides the use of 5G for voice communications, the technology supports advanced IoT functionality. For example, 5G enables more-sophisticated smart home technology, including locks, lights, and appliances; more-advanced smart medical devices, such as blood sugar and blood pressure monitors; and enhanced retail experiences, facilitating such novelties as virtual product demonstrations and “phygital” shopping (blending the ease of online buying with the in-store experience).
5G technology can potentially enhance every field of work. Urban planners creating smart cities, for example, can move from magnetic loops embedded in roads for detecting vehicles (and triggering traffic signals and opening gates) to more efficient and cost-effective wireless cameras equipped with AI. Municipal trash collection can operate on demand, concentrating on key trash areas and at optimal times, instead of operating according to a schedule divorced from real-time needs. Inexpensive connected sensors can allow farmers to monitor water and soil nutrients remotely (and more frequently), while architects and engineers can more efficiently view information about infrastructure systems and operations, all done remotely on their smartphones or tablets; they can even contribute to site construction and building maintenance in real time through augmented-reality software. 5G can enable and enhance remote worker training, especially in fields with crippling worker shortages that result from frequent employee turnover and long training periods, as is common in emergency fields and medicine. Virtual reality, for instance, is already common in firefighter training. Emergency medical technicians (EMTs) can not only stay in better contact with 911 call centres and emergency rooms but also receive more efficient and effective interactive training, delivered to their personal phones and tablets through ultrarealistic emergency simulations, all enabled by high-speed, low-latency 5G technology.

2404) John Eccles (neurophysiologist)
Gist
Work
The nervous system in people and animals consists of many different cells. Within these cells, signals are conveyed by small electrical currents and by chemical substances. By measuring small variations in electrical charge at the contact surfaces between nerve cells, or synapses, John Eccles showed in the early 1950s how nerve impulses are conveyed from one cell to another. Synapses are of different types, each having either a stimulating or an inhibiting effect. A nerve cell receives signals from many different synapses, and the net effect is determined by which type prevails.
Summary
Sir John Carew Eccles (born Jan. 27, 1903, Melbourne, Australia—died May 2, 1997, Contra, Switz.) was an Australian research physiologist who received (with Alan Hodgkin and Andrew Huxley) the 1963 Nobel Prize for Physiology or Medicine for his discovery of the chemical means by which impulses are communicated or repressed by nerve cells (neurons).
After graduating from the University of Melbourne in 1925, Eccles studied at the University of Oxford under a Rhodes scholarship. He received a Ph.D. there in 1929 after having worked under the neurophysiologist Charles Scott Sherrington. He held a research post at Oxford before returning to Australia in 1937, teaching there and in New Zealand over the following decades.
Eccles conducted his prizewinning research while at the Australian National University, Canberra (1951–66). He demonstrated that one nerve cell communicates with a neighbouring cell by releasing chemicals into the synapse (the narrow cleft, or gap, between the two cells). He showed that the excitation of a nerve cell by an impulse causes one kind of synapse to release into the neighbouring cell a substance (probably acetylcholine) that expands the pores in nerve membranes. The expanded pores then allow free passage of sodium ions into the neighbouring nerve cell, reversing the polarity of its electric charge. This wave of electric charge, which constitutes the nerve impulse, is conducted from one cell to another. In the same way, Eccles found, an excited nerve cell induces another type of synapse to release into the neighbouring cell a substance that promotes outward passage of positively charged potassium ions across the membrane, reinforcing the existing polarity and inhibiting the transmission of an impulse.
Eccles’s research, which was based largely on the findings of Hodgkin and Huxley, settled a long-standing controversy over whether nerve cells communicate with each other by chemical or by electric means. His work had a profound influence on the medical treatment of nervous diseases and research on kidney, heart, and brain function.
Among his scientific books are Reflex Activity of the Spinal Cord (1932), The Physiology of Nerve Cells (1957), The Inhibitory Pathways of the Central Nervous System (1969), and The Understanding of the Brain (1973). He also wrote a number of philosophical works, including Facing Reality: Philosophical Adventures by a Brain Scientist (1970) and The Human Mystery (1979).
Details
Sir John Carew Eccles (27 January 1903 – 2 May 1997) was an Australian neurophysiologist and philosopher who won the 1963 Nobel Prize in Physiology or Medicine for his work on the synapse. He shared the prize with Andrew Huxley and Alan Lloyd Hodgkin.
Life and work
Early life
Eccles was born in Melbourne, Australia. He grew up there with his two sisters and his parents, William and Mary Carew Eccles (both teachers, who homeschooled him until he was 12). He initially attended Warrnambool High School (now Warrnambool College), where a science wing is named in his honour, then completed his final year of schooling at Melbourne High School. Aged 17, he was awarded a senior scholarship to study medicine at the University of Melbourne. As a medical undergraduate, he was never able to find a satisfactory explanation for the interaction of mind and body, and he started to think about becoming a neuroscientist. He graduated with first class honours in 1925 and was awarded a Rhodes Scholarship to study under Charles Scott Sherrington at Magdalen College, Oxford University, where he received his Doctor of Philosophy in 1929.
In 1937 Eccles returned to Australia, where he worked on military research during World War II. During this time, Eccles was the director of the Kanematsu Institute at Sydney Medical School, where he and Bernard Katz gave research lectures at the University of Sydney, strongly influencing the intellectual environment of the university. After the war, he became a professor at the University of Otago in New Zealand. From 1952 to 1966, he worked as a professor at the John Curtin School of Medical Research (JCSMR) of the Australian National University. From 1966 to 1968, Eccles worked at the Feinberg School of Medicine at Northwestern University in Chicago.
Career
In the early 1950s, Eccles and his colleagues performed the research that would lead to his receiving the Nobel Prize. To study synapses in the central nervous system, Eccles and colleagues used the stretch reflex as a model, which is easily studied because it involves only two neurones: a sensory neurone (the muscle spindle fibre) and the motor neurone. The sensory neurone synapses onto the motor neurone in the spinal cord. When a current is passed into a sensory neurone in the quadriceps, the motor neurone innervating the quadriceps produces a small excitatory postsynaptic potential (EPSP). When a similar current is passed through the hamstring, the opposing muscle to the quadriceps, an inhibitory postsynaptic potential (IPSP) is produced in the quadriceps motor neurone. Although a single EPSP is not enough to fire an action potential in the motor neurone, the sum of several EPSPs from multiple sensory neurones synapsing onto the motor neurone can cause it to fire, thus contracting the quadriceps. Conversely, IPSPs subtract from this sum of EPSPs and can prevent the motor neurone from firing.
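This summation can be illustrated with a minimal numerical sketch. The resting potential, threshold, and potential sizes below are assumed round figures chosen for illustration, not Eccles's measured values:

# Minimal sketch of synaptic summation at a motor neurone, assuming
# illustrative values: resting potential -70 mV, firing threshold -55 mV,
# each EPSP depolarizing by +5 mV and each IPSP hyperpolarizing by -5 mV.

RESTING_MV = -70.0
THRESHOLD_MV = -55.0
EPSP_MV = 5.0    # excitatory postsynaptic potential (assumed size)
IPSP_MV = -5.0   # inhibitory postsynaptic potential (assumed size)

def motor_neurone_fires(n_epsps: int, n_ipsps: int) -> bool:
    """Sum simultaneous EPSPs and IPSPs and compare against the threshold."""
    membrane_mv = RESTING_MV + n_epsps * EPSP_MV + n_ipsps * IPSP_MV
    return membrane_mv >= THRESHOLD_MV

print(motor_neurone_fires(1, 0))  # False: one EPSP alone is subthreshold
print(motor_neurone_fires(4, 0))  # True: several EPSPs sum to threshold
print(motor_neurone_fires(4, 2))  # False: IPSPs subtract from the sum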
Apart from these seminal experiments, Eccles was key to a number of important developments in neuroscience. Until around 1949, Eccles believed that synaptic transmission was primarily electrical rather than chemical. Although he was wrong in this hypothesis, his arguments led him and others to perform some of the experiments which proved chemical synaptic transmission. Bernard Katz and Eccles worked together on some of the experiments which elucidated the role of acetylcholine as a neurotransmitter in the brain.
Honours
He was appointed a Knight Bachelor in 1958 in recognition of services to physiological research.
He won the Australian of the Year Award in 1963, the same year he won the Nobel Prize.
In 1964, he became an honorary member of the American Philosophical Society, and in 1966 he moved to the United States to work as a professor at the Institute for Biomedical Research at the Feinberg School of Medicine in Chicago. Unhappy with the working conditions there, he left to become a professor at the State University of New York at Buffalo, where he remained from 1968 until his retirement in 1975. After retirement, he moved to Switzerland and wrote on the mind–body problem.
In 1981, Eccles became a founding member of the World Cultural Council.
In 1990 he was appointed a Companion of the Order of Australia (AC) in recognition of service to science, particularly in the field of neurophysiology. He died at the age of 94 in 1997 in Tenero-Contra, Locarno, Switzerland.
In March 2012, the Eccles Institute of Neuroscience was constructed in a new wing of the John Curtin School of Medical Research, with the assistance of a $63M grant from the Commonwealth Government. In 2021, a new $60M animal research building was opened at the University of Otago, Dunedin, New Zealand, and named the Eccles Building.
