Math Is Fun Forum



#2101 2024-03-25 00:04:00

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,428

Re: Miscellany

2103) Portable Network Graphics

Gist

The full form of PNG is Portable Network Graphics. It is a raster-image file format that uses lossless data compression.

Summary

This document describes PNG (Portable Network Graphics), an extensible file format for the lossless, portable, well-compressed storage of static and animated raster images. PNG provides a patent-free replacement for GIF and can also replace many common uses of TIFF. Indexed-colour, greyscale, and truecolour images are supported, plus an optional alpha channel. Sample depths range from 1 to 16 bits.

PNG is designed to work well in online viewing applications, such as the World Wide Web, so it is fully streamable with a progressive display option. PNG is robust, providing both full file integrity checking and simple detection of common transmission errors. Also, PNG can store colour space data for improved colour matching on heterogeneous platforms.

This specification defines two Internet Media Types, image/png and image/apng.

Status of This Document

This specification is intended to become an International Standard, but is not yet one. It is inappropriate to refer to this specification as an International Standard.

This document was published by the Portable Network Graphics (PNG) Working Group as a Candidate Recommendation Snapshot using the Recommendation track.

Publication as a Candidate Recommendation does not imply endorsement by W3C and its Members. A Candidate Recommendation Snapshot has received wide review, is intended to gather implementation experience, and has commitments from Working Group members to royalty-free licensing for implementations.

This Candidate Recommendation is not expected to advance to Proposed Recommendation any earlier than 21 December 2023.

This document was produced by a group operating under the W3C Patent Policy. W3C maintains a public list of any patent disclosures made in connection with the deliverables of the group; that page also includes instructions for disclosing a patent. An individual who has actual knowledge of a patent which the individual believes contains Essential Claim(s) must disclose the information in accordance with section 6 of the W3C Patent Policy.

This document is governed by the 12 June 2023 W3C Process Document.

Details

PNG (Portable Network Graphics) is a file format used for lossless image compression. PNG has almost entirely replaced the Graphics Interchange Format (GIF) that was widely used in the past.

Like a GIF, a PNG file is compressed in lossless fashion, meaning all image information is restored when the file is decompressed during viewing. A PNG file is not intended to replace the JPEG format, which is "lossy" but lets the creator make a trade-off between file size and image quality when the image is compressed. Typically, an image in a PNG file can be 10 percent to 30 percent more compressed than in a GIF format.
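
To make the lossless/lossy distinction concrete, here is a minimal sketch using the Pillow imaging library (an assumed dependency; the file names and test image are illustrative). A PNG round trip restores every pixel exactly, while a JPEG round trip generally does not:

    # Sketch: PNG's lossless round trip vs. JPEG's lossy one,
    # using Pillow (pip install Pillow). Names are illustrative.
    from PIL import Image

    # A small image with a hard color edge (a worst case for JPEG).
    img = Image.new("RGB", (64, 64), (255, 0, 0))
    for x in range(32):
        for y in range(64):
            img.putpixel((x, y), (0, 0, 255))

    img.save("test.png")  # lossless
    img.save("test.jpg")  # lossy

    png_back = list(Image.open("test.png").getdata())
    jpg_back = list(Image.open("test.jpg").getdata())

    print(png_back == list(img.getdata()))  # True: every pixel restored
    print(jpg_back == list(img.getdata()))  # almost always False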

File format of PNG

The PNG format includes these features:

* Not only can one color be made transparent, but the degree of transparency, carried in an alpha channel, can be controlled pixel by pixel (a short sketch follows this list).
* Supports image interlacing; an interlaced PNG displays a usable preview faster than an interlaced GIF.
* Gamma correction allows the image's color brightness to be tuned for the displays of specific manufacturers.
* Images can be saved using true color, as well as in the palette and grayscale formats provided by GIF.
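
As a sketch of the first feature above, the following uses Pillow (again an assumed dependency, with illustrative names) to write a PNG whose pixels fade from fully transparent to fully opaque, something GIF's single fully transparent palette entry cannot express:

    # Sketch: variable transparency through PNG's alpha channel.
    # Alpha 0 = fully transparent, 255 = fully opaque.
    from PIL import Image

    img = Image.new("RGBA", (256, 64))
    for x in range(256):
        for y in range(64):
            # Red that fades in from left (invisible) to right (opaque).
            img.putpixel((x, y), (255, 0, 0, x))

    img.save("fade.png")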

JPEG vs. PNG

JPEG and PNG are the two most commonly used image file formats on the web, but there are differences between them.

JPEG (Joint Photographic Experts Group) dates to 1986, when the committee that designed it was formed; the standard itself followed in 1992. This image format takes up very little storage space and is quick to upload or download. JPEGs can display millions of colors, so they’re perfect for real-life images, such as photographs. They work well on websites and are ideal for posting on social media.

Because JPEG is “lossy” -- meaning that when data is compressed, information judged redundant is permanently deleted from the file -- some quality is lost or compromised when a file is converted to a JPEG.

JPEG is the default file format for uploading pictures to the web, unless they contain text, need transparency, are animated, or consist of flat areas of solid color, such as logos or icons.

However, JPEGs aren’t good for images that contain very little color data, such as interface screenshots and other simple computer-generated graphics.

The main advantage of PNG over JPEG is that the compression is lossless, which means there’s no loss in quality each time a file is opened and saved again. PNG is also good for detailed, high-contrast images. Consequently, PNG is typically the default file format for screenshots because, instead of compressing groups of pixels together, it offers a nearly perfect pixel-for-pixel representation of the screen.

Another key feature of PNG is that it supports transparency. In both grayscale and color images, pixels in PNG files can be transparent, enabling users to create images that overlay neatly with the content of a website or another image.

Uses of PNG

PNG can be used for:

* Images with line art, such as drawings, illustrations and comics.
* Photos or scans of text, such as handwritten letters or newspaper articles.
* Charts, logos, graphs, architectural plans and blueprints.
* Anything with text, such as page layouts made in Photoshop or InDesign then saved as images.

Advantages of PNG

The advantages of the PNG format include:

* Lossless compression -- doesn’t lose detail and quality after image compression.
* Supports a large number of colors -- the format is suitable for different types of digital images, including photographs and graphics.
* Support for transparency -- supports compression of digital images with transparent areas.
* Perfect for editing images -- lossless compression makes it well suited to storing digital images for editing.
* Sharp edges and solid colors -- ideal for images containing texts, line arts and graphics.

The disadvantages of the PNG format include:

* Bigger file size -- compresses digital images at a larger file size.
* Not ideal for professional-quality print graphics -- doesn’t support non-RGB color spaces such as CMYK (cyan, magenta, yellow and black).
* Doesn’t support embedding EXIF metadata used by most digital cameras.
* Doesn’t natively support animation, but there are unofficial extensions available.

History of PNG

PNG was developed by an Internet working group, headed by Thomas Boutell, that came together in 1994 to begin creating the PNG format. At the time, the GIF format was already well-established. The group's goal was to increase color support as well as to provide an image format that didn’t need a patent license.

The GIF format was owned by Unisys, and its use in image-handling software involved licensing or other legal considerations. Web users could make, view and send GIF files freely, but they couldn’t develop software that created them without an arrangement with Unisys.

The first PNG draft was issued on January 4, 1995, and within a week, most of the major PNG features had been proposed and accepted. Over the next three weeks, the group produced seven important drafts.

By the beginning of March 1995, all the specifications were in place (draft nine) and accepted. In October 1996, the first version of the PNG specification was issued as a W3C recommendation. Additional versions were released in 1998, 1999 and 2003, when it became an international standard.

Additional Information

Portable Network Graphics (PNG) is a raster-graphics file format that supports lossless data compression. PNG was developed as an improved, non-patented replacement for Graphics Interchange Format (GIF)—unofficially, the initials PNG stood for the recursive acronym "PNG's not GIF".

PNG supports palette-based images (with palettes of 24-bit RGB or 32-bit RGBA colors), grayscale images (with or without an alpha channel for transparency), and full-color non-palette-based RGB or RGBA images. The PNG working group designed the format for transferring images on the Internet, not for professional-quality print graphics; therefore, non-RGB color spaces such as CMYK are not supported. A PNG file contains a single image in an extensible structure of chunks, encoding the basic pixels and other information such as textual comments and integrity checks documented in RFC 2083.
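
That chunk layout is simple enough to walk with a few lines of code. The following sketch uses only Python's standard library; it reads the 8-byte PNG signature, then each chunk's length, type, data, and CRC-32, verifying each CRC as it goes (this is the integrity checking mentioned above). The file name is illustrative:

    # Sketch: listing the chunks of a PNG file. Each chunk is a
    # 4-byte big-endian length, a 4-byte type, <length> data bytes,
    # and a CRC-32 computed over the type and data fields.
    import struct
    import zlib

    PNG_SIGNATURE = b"\x89PNG\r\n\x1a\n"

    with open("example.png", "rb") as f:
        assert f.read(8) == PNG_SIGNATURE, "not a PNG file"
        while True:
            length, ctype = struct.unpack(">I4s", f.read(8))
            data = f.read(length)
            (crc,) = struct.unpack(">I", f.read(4))
            ok = zlib.crc32(ctype + data) == crc
            print(ctype.decode("ascii"), length, "CRC OK" if ok else "CRC BAD")
            if ctype == b"IEND":  # always the last chunk
                break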

PNG files have the ".png" file extension and the "image/png" MIME media type. PNG was published as an informational RFC 2083 in March 1997 and as an ISO/IEC 15948 standard in 2004.

History and development

The motivation for creating the PNG format was the realization, on 28 December 1994, that the Lempel–Ziv–Welch (LZW) data compression algorithm used in the Graphics Interchange Format (GIF) format was patented by Unisys. The patent required that all software supporting GIF pay royalties, leading to a flurry of criticism from Usenet users. One of them was Thomas Boutell, who on 4 January 1995 posted a precursory discussion thread on the Usenet newsgroup "comp.graphics" in which he devised a plan for a free alternative to GIF. Other users in that thread put forth many propositions that would later be part of the final file format. Oliver Fromme, author of the popular JPEG viewer QPEG, proposed the PING name, eventually becoming PNG, a recursive acronym meaning PING is not GIF, and also the .png extension. Other suggestions later implemented included the deflate compression algorithm and 24-bit color support, the lack of the latter in GIF also motivating the team to create their file format. The group would become known as the PNG Development Group, and as the discussion rapidly expanded, it later used a mailing list associated with a CompuServe forum.

The full specification of PNG was released under the approval of W3C on 1 October 1996, and later as RFC 2083 on 15 January 1997. The specification was revised on 31 December 1998 as version 1.1, which addressed technical problems for gamma and color correction. Version 1.2, released on 11 August 1999, added the iTXt chunk as the specification's only change, and a reformatted version of 1.2 was released as a second edition of the W3C standard on 10 November 2003, and as an International Standard (ISO/IEC 15948:2004) on 3 March 2004.

Although GIF allows for animation, it was decided that PNG should be a single-image format. In 2001, the developers of PNG published the Multiple-image Network Graphics (MNG) format, with support for animation. MNG achieved moderate application support, but not enough among mainstream web browsers and no usage among web site designers or publishers. In 2008, certain Mozilla developers published the Animated Portable Network Graphics (APNG) format with similar goals. APNG is a format that is natively supported by Gecko- and Presto-based web browsers and is also commonly used for thumbnails on Sony's PlayStation Portable system (using the normal PNG file extension). In 2017, Chromium-based browsers adopted APNG support. In January 2020, Microsoft Edge became Chromium-based, thus inheriting support for APNG. With this, all major browsers now support APNG.

PNG Working Group

The original PNG specification was authored by an ad hoc group of computer graphics experts and enthusiasts. Discussions and decisions about the format were conducted by email. The original authors listed on RFC 2083 are:

Editor: Thomas Boutell
Contributing Editor: Tom Lane
Authors (in alphabetical order by last name): Mark Adler, Thomas Boutell, Christian Brunschen, Adam M. Costello, Lee Daniel Crocker, Andreas Dilger, Oliver Fromme, Jean-loup Gailly, Chris Herborth, Aleks Jakulin, Neal Kettler, Tom Lane, Alexander Lehmann, Chris Lilley, Dave Martindale, Owen Mortensen, Keith S. Pickens, Robert P. Poole, Glenn Randers-Pehrson, Greg Roelofs, Willem van Schaik, Guy Schalnat, Paul Schmidt, Tim Wegner, Jeremy Wohl.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2102 2024-03-26 00:02:32

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,428

Re: Miscellany

2104) Tagged Image File Format

Gist

TIF (or TIFF) is an image format used for containing high-quality graphics. It stands for “Tagged Image File Format” or “Tagged Image Format”. The format was created by the Aldus Corporation; Adobe later acquired the format and has made subsequent updates to it.

Summary

A TIFF, which stands for Tag Image File Format, is a computer file used to store raster graphics and image information. A favorite among photographers, TIFFs are a handy way to store high-quality images before editing if you want to avoid lossy file formats.

Aldus eventually merged with Adobe Systems, which has held the copyright on the format's specification from then on. Today, TIFF files are still widely used in the printing and publishing industry.

A TIFF file is a great choice when high quality is your goal, especially when it comes to printing photos or even billboards. TIFF is also an adaptable format that can support both lossy and lossless compression.

Details

Tag Image File Format or Tagged Image File Format, commonly known by the abbreviations TIFF or TIF, is an image file format for storing raster graphics images, popular among graphic artists, the publishing industry, and photographers. TIFF is widely supported by scanning, faxing, word processing, optical character recognition, image manipulation, desktop publishing, and page-layout applications. The format was created by the Aldus Corporation for use in desktop publishing. It published the latest version 6.0 in 1992, subsequently updated with an Adobe Systems copyright after the latter acquired Aldus in 1994. Several Aldus or Adobe technical notes have been published with minor extensions to the format, and several specifications have been based on TIFF 6.0, including TIFF/EP (ISO 12234-2), TIFF/IT (ISO 12639), TIFF-F (RFC 2306) and TIFF-FX (RFC 3949).

History

TIFF was created as an attempt to get desktop scanner vendors of the mid-1980s to agree on a common scanned image file format, in place of a multitude of proprietary formats. In the beginning, TIFF was only a binary image format (only two possible values for each pixel), because that was all that desktop scanners could handle. As scanners became more powerful, and as desktop computer disk space became more plentiful, TIFF grew to accommodate grayscale images, then color images. Today, TIFF, along with JPEG and PNG, is a popular format for deep-color images.

The first version of the TIFF specification was published by the Aldus Corporation in the autumn of 1986 after two major earlier draft releases. It can be labeled as Revision 3.0. It was published after a series of meetings with various scanner manufacturers and software developers. In April 1987 Revision 4.0 was released and it contained mostly minor enhancements. In October 1988 Revision 5.0 was released and it added support for palette color images and LZW compression.

TIFF is a complex format, defining many tags of which typically only a few are used in each file. This led to implementations supporting many varying subsets of the format, a situation that gave rise to the joke that TIFF stands for Thousands of Incompatible File Formats. This problem was addressed in revision 6.0 of the TIFF specification (June 1992) by introducing a distinction between Baseline TIFF (which all implementations were required to support) and TIFF Extensions (which are optional). Additional extensions are defined in two supplements to the specification, published September 1995 and March 2002 respectively.

Overview

A TIFF file contains one or several images, termed subfiles in the specification. The basic use case for having multiple subfiles is to encode a multipage telefax in a single file, but different subfiles may also be different variants of the same image, for example scanned at different resolutions. Rather than being a continuous range of bytes in the file, each subfile is a data structure whose top-level entity is called an image file directory (IFD). Baseline TIFF readers are only required to make use of the first subfile, but each IFD has a field for linking to a next IFD.

The IFDs are where the tags for which TIFF is named are located. Each IFD contains one or several entries, each of which is identified by its tag. The tags are arbitrary 16-bit numbers; their symbolic names, such as ImageWidth, often used in discussions of TIFF data, do not appear explicitly in the file itself. Each IFD entry has an associated value, which may be decoded based on general rules of the format, but what that value means depends on the tag. Within a single IFD there may be no more than one entry with any particular tag. Some tags are for linking to the actual image data, other tags specify how the image data should be interpreted, and still other tags are used for image metadata.
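
A minimal reader for this structure needs very little machinery. The sketch below (standard library only, illustrative file name) prints the raw entries of every IFD in a file; a real reader would go on to decode each entry's value according to its type, which is omitted here:

    # Sketch: dumping TIFF IFD entries. The header is a 2-byte
    # byte-order mark ("II" little-endian, "MM" big-endian), the
    # magic number 42, and the offset of the first IFD. Each IFD is
    # a 2-byte entry count, then 12-byte entries of the form
    # (tag, type, count, value-or-offset), then the offset of the
    # next IFD (0 if there is none).
    import struct

    with open("example.tif", "rb") as f:
        endian = "<" if f.read(2) == b"II" else ">"
        magic, ifd_offset = struct.unpack(endian + "HI", f.read(6))
        assert magic == 42, "not a TIFF file"

        while ifd_offset != 0:
            f.seek(ifd_offset)
            (n_entries,) = struct.unpack(endian + "H", f.read(2))
            for _ in range(n_entries):
                tag, dtype, count, value = struct.unpack(endian + "HHII", f.read(12))
                print(f"tag {tag}: type {dtype}, count {count}, value/offset {value}")
            # The field linking to the next IFD (the next subfile), as described above.
            (ifd_offset,) = struct.unpack(endian + "I", f.read(4))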

TIFF images are made up of rectangular grids of pixels. The two axes of this geometry are termed horizontal (or X, or width) and vertical (or Y, or length). Horizontal and vertical resolution need not be equal (since in a telefax they typically would not be equal). A baseline TIFF image divides the vertical range of the image into one or several strips, which are encoded (in particular: compressed) separately. Historically this served to facilitate TIFF readers (such as fax machines) with limited capacity to store uncompressed data — one strip would be decoded and then immediately printed — but the present specification motivates it by "increased editing flexibility and efficient I/O buffering".  A TIFF extension provides the alternative of tiled images, in which case both the horizontal and the vertical ranges of the image are decomposed into smaller units.

An example of these things, which also serves to give a flavor of how tags are used in the TIFF encoding of images, is that a striped TIFF image would use tags 273 (StripOffsets), 278 (RowsPerStrip), and 279 (StripByteCounts). The StripOffsets point to the blocks of image data, the StripByteCounts say how long each of these blocks are (as stored in the file), and RowsPerStrip says how many rows of pixels there are in a strip; the latter is required even in the case of having just one strip, in which case it merely duplicates the value of tag 257 (ImageLength). A tiled TIFF image instead uses tags 322 (TileWidth), 323 (TileLength), 324 (TileOffsets), and 325 (TileByteCounts). The pixels within each strip or tile appear in row-major order, left to right and top to bottom.
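
The arithmetic tying these tags together is plain ceiling division, as a short sketch with made-up values shows:

    # Sketch: strip arithmetic from tags 257 (ImageLength) and
    # 278 (RowsPerStrip). Values are made up for illustration.
    image_length = 1000    # tag 257: rows of pixels in the image
    rows_per_strip = 128   # tag 278

    # Ceiling division, as in the TIFF 6.0 specification:
    strips_per_image = (image_length + rows_per_strip - 1) // rows_per_strip
    print(strips_per_image)  # 8, so tags 273 and 279 each hold 8 values

    # The final strip contains only the leftover rows:
    print(image_length - (strips_per_image - 1) * rows_per_strip)  # 104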

The data for one pixel is made up of one or several samples; for example, an RGB image would have one Red sample, one Green sample, and one Blue sample per pixel, whereas a greyscale or palette color image has only one sample per pixel. TIFF allows for both additive (e.g. RGB, RGBA) and subtractive (e.g. CMYK) color models. TIFF does not constrain the number of samples per pixel (except that there must be enough samples for the chosen color model), nor does it constrain how many bits are encoded for each sample, but baseline TIFF only requires that readers support a few combinations of color model and bit depth. Support for custom sets of samples is very useful for scientific applications; 3 samples per pixel is at the low end of multispectral imaging, and hyperspectral imaging may require hundreds of samples per pixel. TIFF supports having all samples for a pixel next to each other within a single strip/tile (PlanarConfiguration = 1) but also different samples in different strips/tiles (PlanarConfiguration = 2). The default format for a sample value is an unsigned integer, but a TIFF extension allows declaring them as alternatively being signed integers or IEEE-754 floats, as well as specifying a custom range for valid sample values.
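
The two PlanarConfiguration layouts can be illustrated with plain Python lists (made-up sample values):

    # Sketch: chunky vs. planar sample layout for three RGB pixels.
    pixels = [(10, 20, 30), (40, 50, 60), (70, 80, 90)]

    # PlanarConfiguration = 1: RGBRGBRGB... within one strip or tile.
    chunky = [sample for pixel in pixels for sample in pixel]
    print(chunky)  # [10, 20, 30, 40, 50, 60, 70, 80, 90]

    # PlanarConfiguration = 2: each component in its own strip/tile.
    planar = [list(plane) for plane in zip(*pixels)]
    print(planar)  # [[10, 40, 70], [20, 50, 80], [30, 60, 90]]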

TIFF images may be uncompressed, compressed using a lossless compression scheme, or compressed using a lossy compression scheme. The lossless LZW compression scheme has at times been regarded as the standard compression for TIFF, but this is technically a TIFF extension, and the TIFF 6.0 specification notes the patent situation regarding LZW. Compression schemes vary significantly in at what level they process the data: LZW acts on the stream of bytes encoding a strip or tile (without regard to sample structure, bit depth, or row width), whereas the JPEG compression scheme both transforms the sample structure of pixels (switching to a different color model) and encodes pixels in 8×8 blocks rather than row by row.
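
To illustrate what "acting on the stream of bytes" means, here is a minimal textbook LZW encoder. It is a sketch of the general scheme only; the actual TIFF LZW variant additionally defines ClearCode and EndOfInformation codes and packs the output into 9- to 12-bit codes, all of which this omits:

    # Sketch: textbook LZW. The input is treated as opaque bytes,
    # with no knowledge of samples, bit depth, or row width.
    def lzw_encode(data: bytes) -> list:
        table = {bytes([i]): i for i in range(256)}  # all single bytes
        next_code = 256
        current = b""
        out = []
        for byte in data:
            candidate = current + bytes([byte])
            if candidate in table:
                current = candidate
            else:
                out.append(table[current])    # emit code for known prefix
                table[candidate] = next_code  # learn the new string
                next_code += 1
                current = bytes([byte])
        if current:
            out.append(table[current])
        return out

    print(lzw_encode(b"ABABABA"))  # [65, 66, 256, 258]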

Most data in TIFF files are numerical, but the format supports declaring data as being textual instead, where appropriate for a particular tag. Tags that take textual values include Artist, Copyright, DateTime, DocumentName, InkNames, and Model.

Internet Media Type

The MIME type image/tiff (defined in RFC 3302) without an application parameter is used for Baseline TIFF 6.0 files or to indicate that it is not necessary to identify a specific subset of TIFF or TIFF extensions. The optional "application" parameter (Example: Content-type: image/tiff; application=foo) is defined for image/tiff to identify a particular subset of TIFF and TIFF extensions for the encoded image data, if it is known. According to RFC 3302, specific TIFF subsets or TIFF extensions used in the application parameter must be published as an RFC.

MIME type image/tiff-fx (defined in RFC 3949 and RFC 3950) is based on TIFF 6.0 with TIFF Technical Notes TTN1 (Trees) and TTN2 (Replacement TIFF/JPEG specification). It is used for Internet fax compatible with the ITU-T Recommendations for Group 3 black-and-white, grayscale and color fax.

Digital preservation

Adobe holds the copyright on the TIFF specification (aka TIFF 6.0) along with the two supplements that have been published. These documents can be found on the Adobe TIFF Resources page. The Fax standard in RFC 3949 is based on these TIFF specifications.

TIFF files can be used for storing documents when they strictly use the basic "tag sets" defined in TIFF 6.0, restrict the compression technology to the methods identified in TIFF 6.0, and are adequately tested and verified by multiple sources for all documents being created. Commonly seen issues encountered in the content and document management industry arise when the structures contain proprietary headers, are not properly documented, contain "wrappers" or other containers around the TIFF datasets, include improper compression technologies, or implement those compression technologies improperly.

Variants of TIFF can be used within document imaging and content/document management systems using CCITT Group IV 2D compression, which supports black-and-white (bitonal, monochrome) images, among other compression technologies that support color. When storage capacity and network bandwidth were greater issues than they are in today's server environments, documents in high-volume scanning operations were scanned in black and white (not in color or grayscale) to conserve storage capacity.

The inclusion of the SampleFormat tag in TIFF 6.0 allows TIFF files to handle advanced pixel data types, including integer images with more than 8 bits per channel and floating point images. This tag made TIFF 6.0 a viable format for scientific image processing where extended precision is required. An example would be the use of TIFF to store images acquired using scientific CCD cameras that provide up to 16 bits per photosite of intensity resolution. Storing a sequence of images in a single TIFF file is also possible, and is allowed under TIFF 6.0, provided the rules for multi-page images are followed.

Details

TIFF is a flexible, adaptable file format for handling images and data within a single file, by including the header tags (size, definition, image-data arrangement, applied image compression) defining the image's geometry. A TIFF file, for example, can be a container holding JPEG (lossy) and PackBits (lossless) compressed images. A TIFF file also can include a vector-based clipping path (outlines, croppings, image frames). The ability to store image data in a lossless format makes a TIFF file a useful image archive, because, unlike standard JPEG files, a TIFF file using lossless compression (or none) may be edited and re-saved without losing image quality. This is not the case when using the TIFF as a container holding compressed JPEG. Other TIFF options are layers and pages.

TIFF offers the option of using LZW compression, a lossless data-compression technique for reducing a file's size. Use of this option was limited by patents on the LZW technique until their expiration in 2004.

The TIFF 6.0 specification consists of the following parts:

* Introduction (contains information about TIFF Administration, usage of Private fields and values, etc.)
* Part 1: Baseline TIFF
* Part 2: TIFF Extensions
* Part 3: Appendices

Additional Information

TIFFs are a file format popular with graphic designers and photographers for their flexibility, high quality, and near-universal compatibility. Learn more about these raster graphic files and how you can put them to use in your next project.

What is a TIFF file?


TIFF files:

* Have either a .tiff or .tif extension.
* Use lossless compression (or none at all), which means they’re larger than most image files but don’t lose image quality.
* Work with Windows, Linux, and macOS.

TIFFs aren’t the smallest files around, but they let a user tag extra image information and data, such as additional layers. They’re also compatible with editing software like Adobe Photoshop.

History of the TIFF file.

Aldus Corporation created the TIFF file in the mid-1980s for use in desktop publishing. TIFFs retained high-quality image data and allowed content to be published directly from a computer. The file was designed as a universally applicable format for desktop scanners -- hardware that previously handled, depending on the make and model, only a limited set of file formats.

Initially, TIFFs were restricted to print publications before they expanded into digital content. Aldus Corporation was later acquired by Adobe, which has since been responsible for the copyright of the file format.

What are TIFFs used for?

TIFFs are popular across a range of industries — such as design, photography, and desktop publishing. TIFF files can be used for:

* High-quality photographs.

TIFFs are perfect for retaining lots of impressively detailed image data because they use a predominantly lossless form of file compression. This makes them a great choice for professional photographers and editors.

* High-resolution scans.

The detailed image quality stored within a TIFF means they’re ideal for scanned images and high-resolution documents. You might find them a useful choice for storing high-resolution images of your artwork or personal documents.

* Container files.

TIFFs also work as container files that store smaller JPEGs. You could store several lower-resolution JPEGs within one TIFF if you wanted to email a selection of photos to a contact.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2103 2024-03-27 00:02:51

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,428

Re: Miscellany

2105) Anatomy

Gist

Anatomy is the study of the structure of a plant or animal. Human anatomy includes the cells, tissues, and organs that make up the body and how they are organized in the body.

Summary

Anatomy is the study of the structure of living things – animal, human, plant – from microscopic cells and molecules to whole organisms as large as whales.

Anatomy Is Everywhere

* Anthropologists study cultures around the world.
* Paleontologists use cutting-edge technology to discover the ancient world.
* Archeologists uncover our history one artifact at a time.
* Veterinarians help humans care for pets and farm animals.
* Zoologists ensure captive animals – from backyard critters to endangered species – receive optimal care.
* Medical students learn anatomy before becoming nurses, doctors, and dentists.
* Inventors create exoskeletons to give people mobility.
* Biomedical engineers create better pacemakers and prosthetics.
* Physical therapists find remedies for their patients’ challenges.

Who Are Anatomists?

‘Anatomist’ broadly describes someone who studies, researches, or teaches in the anatomical sciences, including the study of extinct species such as dinosaurs and Neanderthals. Anatomists help us understand how things are formed and constructed, which has enormous impact. However, not everyone who studies, applies, or researches anatomy calls themselves an ‘anatomist.’

WHAT ANATOMISTS DO

Anatomists work with students and researchers to better understand humans and animals, in order to teach the next generation of doctors, nurses, physical therapists, dentists, and veterinarians. Their research into cell and molecular anatomy means that conditions such as cleft palate, congenital heart defects, neurological disorders, and cancer biology are better understood – and can be treated.

WHERE ANATOMISTS WORK

Anatomists work in universities, research institutions, and private industry. They teach anatomy in medical, dental, and veterinary schools, as well as at large undergraduate universities. They run their own research labs at organizations and universities, and they work together in teams of scientists, postdoctoral researchers, and students to uncover discoveries that lead to better understanding of our biology.

Details

Anatomy (from Ancient Greek anatomḗ, 'dissection') is the branch of biology concerned with the study of the structure of organisms and their parts. Anatomy is a branch of natural science that deals with the structural organization of living things. It is an old science, having its beginnings in prehistoric times. Anatomy is inherently tied to developmental biology, embryology, comparative anatomy, evolutionary biology, and phylogeny, as these are the processes by which anatomy is generated, both over immediate and long-term timescales. Anatomy and physiology, which study the structure and function of organisms and their parts respectively, make a natural pair of related disciplines, and are often studied together. Human anatomy is one of the essential basic sciences that are applied in medicine, and is often studied alongside physiology.

Anatomy is a complex and dynamic field that is constantly evolving as new discoveries are made. In recent years, there has been a significant increase in the use of advanced imaging techniques, such as MRI and CT scans, which allow for more detailed and accurate visualizations of the body's structures.

The discipline of anatomy is divided into macroscopic and microscopic parts. Macroscopic anatomy, or gross anatomy, is the examination of an animal's body parts using unaided eyesight. Gross anatomy also includes the branch of superficial anatomy. Microscopic anatomy involves the use of optical instruments in the study of the tissues of various structures, known as histology, and also in the study of cells.

The history of anatomy is characterized by a progressive understanding of the functions of the organs and structures of the human body. Methods have also improved dramatically, advancing from the examination of animals by dissection of carcasses and cadavers (corpses) to 20th-century medical imaging techniques, including X-ray, ultrasound, and magnetic resonance imaging.

Etymology and definition

Derived from the Greek anatomḗ, 'dissection' (from anatémnō, 'I cut up, cut open', from aná, 'up', and témnō, 'I cut'), anatomy is the scientific study of the structure of organisms, including their systems, organs and tissues. It includes the appearance and position of the various parts, the materials from which they are composed, and their relationships with other parts. Anatomy is quite distinct from physiology and biochemistry, which deal respectively with the functions of those parts and the chemical processes involved. For example, an anatomist is concerned with the shape, size, position, structure, blood supply and innervation of an organ such as the liver; while a physiologist is interested in the production of bile, the role of the liver in nutrition and the regulation of bodily functions.

The discipline of anatomy can be subdivided into a number of branches, including gross or macroscopic anatomy and microscopic anatomy. Gross anatomy is the study of structures large enough to be seen with the naked eye, and also includes superficial anatomy or surface anatomy, the study by sight of the external body features. Microscopic anatomy is the study of structures on a microscopic scale, along with histology (the study of tissues), and embryology (the study of an organism in its immature condition). Regional anatomy is the study of the interrelationships of all of the structures in a specific body region, such as the abdomen. In contrast, systemic anatomy is the study of the structures that make up a discrete body system—that is, a group of structures that work together to perform a unique body function, such as the digestive system.

Anatomy can be studied using both invasive and non-invasive methods with the goal of obtaining information about the structure and organization of organs and systems. Methods used include dissection, in which a body is opened and its organs studied, and endoscopy, in which a video camera-equipped instrument is inserted through a small incision in the body wall and used to explore the internal organs and other structures. Angiography using X-rays or magnetic resonance angiography are methods to visualize blood vessels.

The term "anatomy" is commonly taken to refer to human anatomy. However, substantially similar structures and tissues are found throughout the rest of the animal kingdom, and the term also includes the anatomy of other animals. The term zootomy is also sometimes used to specifically refer to non-human animals. The structure and tissues of plants are of a dissimilar nature and they are studied in plant anatomy.

Animal tissues

The kingdom Animalia contains multicellular organisms that are heterotrophic and motile (although some have secondarily adopted a sessile lifestyle). Most animals have bodies differentiated into separate tissues, and these animals are also known as eumetazoans. They have an internal digestive chamber with one or two openings; the gametes are produced in multicellular sex organs, and the zygotes include a blastula stage in their embryonic development. Metazoans do not include the sponges, which have undifferentiated cells.

Unlike plant cells, animal cells have neither a cell wall nor chloroplasts. Vacuoles, when present, are more numerous and much smaller than those in the plant cell. The body tissues are composed of numerous types of cells, including those found in muscles, nerves and skin. Each typically has a cell membrane formed of phospholipids, cytoplasm and a nucleus. All of the different cells of an animal are derived from the embryonic germ layers. Those simpler invertebrates which are formed from two germ layers of ectoderm and endoderm are called diploblastic, and the more developed animals whose structures and organs are formed from three germ layers are called triploblastic. All of a triploblastic animal's tissues and organs are derived from the three germ layers of the embryo: the ectoderm, mesoderm and endoderm.

Animal tissues can be grouped into four basic types: connective, epithelial, muscle and nervous tissue.

Connective tissue

Connective tissues are fibrous and made up of cells scattered among inorganic material called the extracellular matrix. Connective tissue gives shape to organs and holds them in place. The main types are loose connective tissue, adipose tissue, fibrous connective tissue, cartilage and bone. The extracellular matrix contains proteins, the chief and most abundant of which is collagen. Collagen plays a major part in organizing and maintaining tissues. The matrix can be modified to form a skeleton to support or protect the body. An exoskeleton is a thickened, rigid cuticle which is stiffened by mineralization, as in crustaceans or by the cross-linking of its proteins as in insects. An endoskeleton is internal and present in all developed animals, as well as in many of those less developed.

Epithelium

Epithelial tissue is composed of closely packed cells, bound to each other by cell adhesion molecules, with little intercellular space. Epithelial cells can be squamous (flat), cuboidal or columnar and rest on a basal lamina, the upper layer of the basement membrane; the lower layer is the reticular lamina, which lies next to the connective tissue in the extracellular matrix secreted by the epithelial cells. There are many different types of epithelium, modified to suit a particular function. In the respiratory tract there is a type of ciliated epithelial lining; in the small intestine there are microvilli on the epithelial lining, and in the large intestine there are intestinal villi. Skin consists of an outer layer of keratinized stratified squamous epithelium that covers the exterior of the vertebrate body. Keratinocytes make up to 95% of the cells in the skin. The epithelial cells on the external surface of the body typically secrete an extracellular matrix in the form of a cuticle. In simple animals this may just be a coat of glycoproteins. In more advanced animals, many glands are formed of epithelial cells.

Muscle tissue

Muscle cells (myocytes) form the active contractile tissue of the body. Muscle tissue functions to produce force and cause motion, either locomotion or movement within internal organs. Muscle is formed of contractile filaments and is separated into three main types: smooth muscle, skeletal muscle and cardiac muscle. Smooth muscle has no striations when examined microscopically. It contracts slowly but maintains contractibility over a wide range of stretch lengths. It is found in such organs as sea anemone tentacles and the body wall of sea cucumbers. Skeletal muscle contracts rapidly but has a limited range of extension. It is found in the movement of appendages and jaws. Obliquely striated muscle is intermediate between the other two; its filaments are staggered, and this is the type of muscle found in earthworms, which can extend slowly or make rapid contractions. In higher animals striated muscles occur in bundles attached to bone to provide movement and are often arranged in antagonistic sets. Smooth muscle is found in the walls of the uterus, bladder, intestines, stomach, oesophagus, respiratory airways, and blood vessels. Cardiac muscle is found only in the heart, allowing it to contract and pump blood round the body.

Nervous tissue

Nervous tissue is composed of many nerve cells known as neurons which transmit information. In some slow-moving radially symmetrical marine animals such as ctenophores and cnidarians (including sea anemones and jellyfish), the nerves form a nerve net, but in most animals they are organized longitudinally into bundles. In simple animals, receptor neurons in the body wall cause a local reaction to a stimulus. In more complex animals, specialized receptor cells such as chemoreceptors and photoreceptors are found in groups and send messages along neural networks to other parts of the organism. Neurons can be connected together in ganglia. In higher animals, specialized receptors are the basis of sense organs and there is a central nervous system (brain and spinal cord) and a peripheral nervous system. The latter consists of sensory nerves that transmit information from sense organs and motor nerves that influence target organs. The peripheral nervous system is divided into the somatic nervous system which conveys sensation and controls voluntary muscle, and the autonomic nervous system which involuntarily controls smooth muscle, certain glands and internal organs, including the stomach.

Vertebrate anatomy

All vertebrates have a similar basic body plan and at some point in their lives, mostly in the embryonic stage, share the major chordate characteristics: a stiffening rod, the notochord; a dorsal hollow tube of nervous material, the neural tube; pharyngeal arches; and a tail posterior to the anus. The spinal cord is protected by the vertebral column and is above the notochord, and the gastrointestinal tract is below it. Nervous tissue is derived from the ectoderm, connective tissues are derived from mesoderm, and gut is derived from the endoderm. At the posterior end is a tail which continues the spinal cord and vertebrae but not the gut. The mouth is found at the anterior end of the animal, and the anus at the base of the tail. The defining characteristic of a vertebrate is the vertebral column, formed in the development of the segmented series of vertebrae. In most vertebrates the notochord becomes the nucleus pulposus of the intervertebral discs. However, a few vertebrates, such as the sturgeon and the coelacanth, retain the notochord into adulthood. Jawed vertebrates are typified by paired appendages, fins or legs, which may be secondarily lost. The limbs of vertebrates are considered to be homologous because the same underlying skeletal structure was inherited from their last common ancestor. This is one of the arguments put forward by Charles Darwin to support his theory of evolution.

Mammal anatomy

Mammals are a diverse class of animals, mostly terrestrial but some are aquatic and others have evolved flapping or gliding flight. They mostly have four limbs, but some aquatic mammals have no limbs or limbs modified into fins, and the forelimbs of bats are modified into wings. The legs of most mammals are situated below the trunk, which is held well clear of the ground. The bones of mammals are well ossified and their teeth, which are usually differentiated, are coated in a layer of prismatic enamel. The teeth are shed once (milk teeth) during the animal's lifetime or not at all, as is the case in cetaceans. Mammals have three bones in the middle ear and a cochlea in the inner ear. They are clothed in hair and their skin contains glands which secrete sweat. Some of these glands are specialized as mammary glands, producing milk to feed the young. Mammals breathe with lungs and have a muscular diaphragm separating the thorax from the abdomen which helps them draw air into the lungs. The mammalian heart has four chambers, and oxygenated and deoxygenated blood are kept entirely separate. Nitrogenous waste is excreted primarily as urea.

Mammals are amniotes, and most are viviparous, giving birth to live young. Exceptions to this are the egg-laying monotremes, the platypus and the echidnas of Australia. Most other mammals have a placenta through which the developing foetus obtains nourishment, but in marsupials, the foetal stage is very short and the immature young is born and finds its way to its mother's pouch where it latches on to a nipple and completes its development.

Human anatomy

In humans, dexterous hand movements and increased brain size are likely to have evolved simultaneously.

Humans have the overall body plan of a mammal. Humans have a head, neck, trunk (which includes the thorax and abdomen), two arms and hands, and two legs and feet.

Generally, students of certain biological sciences, paramedics, prosthetists and orthotists, physiotherapists, occupational therapists, nurses, podiatrists, and medical students learn gross anatomy and microscopic anatomy from anatomical models, skeletons, textbooks, diagrams, photographs, lectures and tutorials. In addition, medical students generally also learn gross anatomy through practical experience of dissection and inspection of cadavers. The study of microscopic anatomy (or histology) can be aided by practical experience examining histological preparations (or slides) under a microscope.

Human anatomy, physiology and biochemistry are complementary basic medical sciences, which are generally taught to medical students in their first year at medical school. Human anatomy can be taught regionally or systemically; that is, respectively, studying anatomy by bodily regions such as the head and chest, or studying by specific systems, such as the nervous or respiratory systems. The major anatomy textbook, Gray's Anatomy, has been reorganized from a systems format to a regional format, in line with modern teaching methods. A thorough working knowledge of anatomy is required by physicians, especially surgeons and doctors working in some diagnostic specialties, such as histopathology and radiology.

Academic anatomists are usually employed by universities, medical schools or teaching hospitals. They are often involved in teaching anatomy, and research into certain systems, organs, tissues or cells.

Additional Information

Anatomy is a field in the biological sciences concerned with the identification and description of the body structures of living things. Gross anatomy involves the study of major body structures by dissection and observation and in its narrowest sense is concerned only with the human body. “Gross anatomy” customarily refers to the study of those body structures large enough to be examined without the help of magnifying devices, while microscopic anatomy is concerned with the study of structural units small enough to be seen only with a light microscope. Dissection is basic to all anatomical research. The earliest record of its use was made by the Greeks, and Theophrastus called dissection “anatomy,” from ana temnein, meaning “to cut up.”

Comparative anatomy, the other major subdivision of the field, compares similar body structures in different species of animals in order to understand the adaptive changes they have undergone in the course of evolution.

Gross anatomy

This ancient discipline reached its culmination between 1500 and 1850, by which time its subject matter was firmly established. None of the world’s oldest civilizations dissected a human body, which most people regarded with superstitious awe and associated with the spirit of the departed soul. Beliefs in life after death and a disquieting uncertainty concerning the possibility of bodily resurrection further inhibited systematic study. Nevertheless, knowledge of the body was acquired by treating wounds, aiding in childbirth, and setting broken limbs. The field remained speculative rather than descriptive, though, until the achievements of the Alexandrian medical school and its foremost figure, Herophilus (flourished 300 BCE), who dissected human cadavers and thus gave anatomy a considerable factual basis for the first time. Herophilus made many important discoveries and was followed by his younger contemporary Erasistratus, who is sometimes regarded as the founder of physiology. In the 2nd century CE, Greek physician Galen assembled and arranged all the discoveries of the Greek anatomists, including with them his own concepts of physiology and his discoveries in experimental medicine. The many books Galen wrote became the unquestioned authority for anatomy and medicine in Europe because they were the only ancient Greek anatomical texts that survived the Dark Ages in the form of Arabic (and then Latin) translations.

Owing to church prohibitions against dissection, European medicine in the Middle Ages relied upon Galen’s mixture of fact and fancy rather than on direct observation for its anatomical knowledge, though some dissections were authorized for teaching purposes. In the early 16th century, the artist Leonardo da Vinci undertook his own dissections, and his beautiful and accurate anatomical drawings cleared the way for Flemish physician Andreas Vesalius to “restore” the science of anatomy with his monumental De humani corporis fabrica libri septem (1543; “The Seven Books on the Structure of the Human Body”), which was the first comprehensive and illustrated textbook of anatomy. As a professor at the University of Padua, Vesalius encouraged younger scientists to accept traditional anatomy only after verifying it themselves, and this more critical and questioning attitude broke Galen’s authority and placed anatomy on a firm foundation of observed fact and demonstration.

From Vesalius’s exact descriptions of the skeleton, muscles, blood vessels, nervous system, and digestive tract, his successors in Padua progressed to studies of the digestive glands and the urinary and reproductive systems. Hieronymus Fabricius, Gabriello Fallopius, and Bartolomeo Eustachio were among the most important Italian anatomists, and their detailed studies led to fundamental progress in the related field of physiology. William Harvey’s discovery of the circulation of the blood, for instance, was based partly on Fabricius’s detailed descriptions of the venous valves.

Microscopic anatomy

The new application of magnifying glasses and compound microscopes to biological studies in the second half of the 17th century was the most important factor in the subsequent development of anatomical research. Primitive early microscopes enabled Marcello Malpighi to discover the system of tiny capillaries connecting the arterial and venous networks, Robert Hooke to first observe the small compartments in plants that he called “cells,” and Antonie van Leeuwenhoek to observe muscle fibres and spermatozoa. Thenceforth attention gradually shifted from the identification and understanding of bodily structures visible to the naked eye to those of microscopic size.

The use of the microscope in discovering minute, previously unknown features was pursued on a more systematic basis in the 18th century, but progress tended to be slow until technical improvements in the compound microscope itself, beginning in the 1830s with the gradual development of achromatic lenses, greatly increased that instrument’s resolving power. These technical advances enabled Matthias Jakob Schleiden and Theodor Schwann to recognize in 1838–39 that the cell is the fundamental unit of organization in all living things. The need for thinner, more transparent tissue specimens for study under the light microscope stimulated the development of improved methods of dissection, notably machines called microtomes that can slice specimens into extremely thin sections. In order to better distinguish the detail in these sections, synthetic dyes were used to stain tissues with different colours. Thin sections and staining had become standard tools for microscopic anatomists by the late 19th century. The field of cytology, which is the study of cells, and that of histology, which is the study of tissue organization from the cellular level up, both arose in the 19th century with the data and techniques of microscopic anatomy as their basis.

In the 20th century anatomists tended to scrutinize tinier and tinier units of structure as new technologies enabled them to discern details far beyond the limits of resolution of light microscopes. These advances were made possible by the electron microscope, which stimulated an enormous amount of research on subcellular structures beginning in the 1950s and became the prime tool of anatomical research. About the same time, the use of X-ray diffraction for studying the structures of many types of molecules present in living things gave rise to the new subspecialty of molecular anatomy.

Anatomical nomenclature

Scientific names for the parts and structures of the human body are usually in Latin; for example, the name musculus biceps brachii denotes the biceps muscle of the upper arm. Some such names were bequeathed to Europe by ancient Greek and Roman writers, and many more were coined by European anatomists from the 16th century on. Expanding medical knowledge meant the discovery of many bodily structures and tissues, but there was no uniformity of nomenclature, and thousands of new names were added as medical writers followed their own fancies, usually expressing them in a Latin form.

By the end of the 19th century the confusion caused by the enormous number of names had become intolerable. Medical dictionaries sometimes listed as many as 20 synonyms for one name, and more than 50,000 names were in use throughout Europe. In 1887 the German Anatomical Society undertook the task of standardizing the nomenclature, and, with the help of other national anatomical societies, a complete list of anatomical terms and names was approved in 1895 that reduced the 50,000 names to 5,528. This list, the Basle Nomina Anatomica, had to be subsequently expanded, and in 1955 the Sixth International Anatomical Congress at Paris approved a major revision of it known as the Paris Nomina Anatomica (or simply Nomina Anatomica). In 1998 this work was supplanted by the Terminologia Anatomica, which recognizes about 7,500 terms describing macroscopic structures of human anatomy and is considered to be the international standard on human anatomical nomenclature. The Terminologia Anatomica, produced by the International Federation of Associations of Anatomists and the Federative Committee on Anatomical Terminology (later known as the Federative International Programme on Anatomical Terminologies), was made available online in 2011.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2104 2024-03-27 22:54:29

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,428

Re: Miscellany

2106) Muscular System

Gist

The muscular system is composed of specialized cells called muscle fibers. Their predominant function is contractibility. Muscles, attached to bones or internal organs and blood vessels, are responsible for movement. Nearly all movement in the body is the result of muscle contraction.

Summary

The muscular system is composed of specialized cells called muscle fibers. Their predominant function is contractibility. Muscles, attached to bones or internal organs and blood vessels, are responsible for movement. Nearly all movement in the body is the result of muscle contraction. Exceptions to this are the action of cilia, the flagellum on sperm cells, and amoeboid movement of some white blood cells.

The integrated action of joints, bones, and skeletal muscles produces obvious movements such as walking and running. Skeletal muscles also produce more subtle movements that result in various facial expressions, eye movements, and respiration.

In addition to movement, muscle contraction also fulfills some other important functions in the body, such as posture, joint stability, and heat production. Posture, such as sitting and standing, is maintained as a result of muscle contraction. The skeletal muscles are continually making fine adjustments that hold the body in stationary positions. The tendons of many muscles extend over joints and in this way contribute to joint stability. This is particularly evident in the knee and shoulder joints, where muscle tendons are a major factor in stabilizing the joint. Heat production, to maintain body temperature, is an important by-product of muscle metabolism. Nearly 85 percent of the heat produced in the body is the result of muscle contraction.

Details

Muscles play a part in every function of the body. The muscular system is made up of over 600 muscles. These include three muscle types: smooth, skeletal, and cardiac.

Only skeletal muscles are voluntary, meaning you can control them consciously. Smooth and cardiac muscles act involuntarily.

Each muscle type in the muscular system has a specific purpose. You’re able to walk because of your skeletal muscles. You can digest because of your smooth muscles. And your heart beats because of your cardiac muscle.

The different muscle types also work together to make these functions possible. For instance, when you run (skeletal muscles), your heart pumps harder (cardiac muscle) and you breathe more heavily (smooth muscles).

Keep reading to learn more about your muscular system’s functions.

1. Mobility

Your skeletal muscles are responsible for the movements you make. Skeletal muscles are attached to your bones and partly controlled by the central nervous system (CNS).

You use your skeletal muscles whenever you move. Fast-twitch skeletal muscles cause short bursts of speed and strength. Slow-twitch muscles function better for longer movements.

2. Circulation

The involuntary cardiac and smooth muscles keep your heart beating and blood flowing through your body; the heartbeat itself is driven by electrical impulses. The cardiac muscle (myocardium) is found in the walls of the heart. It is controlled by the autonomic nervous system, which regulates most involuntary bodily functions.

Cardiac muscle cells, like smooth muscle cells, each have a single, centrally located nucleus.

The walls of your blood vessels are made up of smooth muscle, which is also controlled by the autonomic nervous system.

3. Respiration

Your diaphragm is the main muscle at work during quiet breathing. Heavier breathing, like what you experience during exercise, may require accessory muscles to help the diaphragm. These can include the abdominal, neck, and back muscles.

4. Digestion

Digestion is controlled by smooth muscles found in your gastrointestinal tract. This comprises the:

* mouth
* esophagus
* stomach
* small and large intestines
* rectum
* anus (the last part of the digestive tract)

The digestive system also includes the liver, pancreas, and gallbladder.

Your smooth muscles contract and relax as food passes through your body during digestion. These muscles also help push food out of your body through defecation, or vomiting when you’re sick.

5. Urination

Smooth and skeletal muscles make up the urinary system. The urinary system includes the:

* kidneys
* bladder
* ureters
* urethra
* male or female reproductive organs
* prostate

All the muscles in your urinary system work together so you can urinate. The dome of your bladder is made of smooth muscles. You can release urine when those muscles tighten. When they relax, you can hold in your urine.

6. Childbirth

Smooth muscles are found in the uterus. During pregnancy, these muscles grow and stretch as the baby grows. When a woman goes into labor, the smooth muscles of the uterus contract and relax to help push the baby through the birth canal.

7. Vision

Each of your eye sockets contains six skeletal muscles that help you move your eye, and the internal muscles of your eyes are made up of smooth muscle. All these muscles work together to help you see. If you damage these muscles, you may impair your vision.

8. Stability

The skeletal muscles in your core help protect your spine and help with stability. Your core muscle group includes the abdominal, back, and pelvic muscles. This group is also known as the trunk. The stronger your core, the better you can stabilize your body. The muscles in your legs also help steady you.

9. Posture

Your skeletal muscles also control posture. Flexibility and strength are keys to maintaining proper posture. Stiff neck muscles, weak back muscles, or tight hip muscles can throw off your alignment. Poor posture can affect parts of your body and lead to joint pain and weaker muscles. These parts include the:

* shoulders
* spine
* hips
* knees

The bottom line

The muscular system is a complex network of muscles vital to the human body. Muscles play a part in everything you do. They control your heartbeat and breathing, help digestion, and allow movement.

Muscles, like the rest of your body, thrive when you exercise and eat healthily. But too much exercise can cause sore muscles. Muscle pain can also be a sign that something more serious is affecting your body.

The following conditions can affect your muscular system:

* myopathy (muscle disease)
* muscular dystrophy
* multiple sclerosis (MS)
* Parkinson’s disease
* fibromyalgia

Talk to your doctor if you have one of these conditions. They can help you find ways to manage your health. It’s important to take care of your muscles so they stay healthy and strong.

Additional Information

The muscular system is an organ system consisting of skeletal, smooth, and cardiac muscle. It permits movement of the body, maintains posture, and circulates blood throughout the body. The muscular systems in vertebrates are controlled through the nervous system although some muscles (such as the cardiac muscle) can be completely autonomous. Together with the skeletal system in the human, it forms the musculoskeletal system, which is responsible for the movement of the body.

Types

There are three distinct types of muscle: skeletal muscle, cardiac or heart muscle, and smooth (non-striated) muscle. Muscles provide strength, balance, posture, movement, and heat for the body to keep warm.

There are approximately 640 muscles in an adult male human body. Each muscle is made up of a kind of elastic tissue consisting of thousands, or tens of thousands, of small muscle fibers. Each fiber comprises many tiny strands called fibrils; impulses from nerve cells control the contraction of each muscle fiber.

Skeletal

Skeletal muscle is a type of striated muscle, composed of muscle cells (called muscle fibers), which are in turn composed of myofibrils. Myofibrils are composed of sarcomeres, the basic building blocks of striated muscle tissue. Upon stimulation by an action potential, skeletal muscles perform a coordinated contraction by shortening each sarcomere. The best proposed model for understanding contraction is the sliding filament model of muscle contraction. Within the sarcomere, actin and myosin fibers overlap in a contractile motion towards each other. Myosin filaments have club-shaped myosin heads that project toward the actin filaments, and provide attachment points on binding sites for the actin filaments. The myosin heads move in a coordinated style; they swivel toward the center of the sarcomere, detach and then reattach to the nearest active site of the actin filament. This is called a ratchet-type drive system.

This process consumes large amounts of adenosine triphosphate (ATP), the energy source of the cell. ATP binds to the cross-bridges between myosin heads and actin filaments. The release of energy powers the swiveling of the myosin head. When ATP is used, it becomes adenosine diphosphate (ADP), and since muscles store little ATP, they must continuously replace the discharged ADP with ATP. Muscle tissue also contains a stored supply of a fast-acting recharge chemical, creatine phosphate, which when necessary can assist with the rapid regeneration of ADP into ATP.

Calcium ions are required for each cycle of the sarcomere. Calcium is released from the sarcoplasmic reticulum into the sarcomere when a muscle is stimulated to contract. This calcium uncovers the actin-binding sites. When the muscle no longer needs to contract, the calcium ions are pumped from the sarcomere and back into storage in the sarcoplasmic reticulum.

There are approximately 639 skeletal muscles in the human body.

Cardiac

Heart muscle is striated muscle but is distinct from skeletal muscle because its muscle fibers are laterally connected. Furthermore, as with smooth muscle, its movement is involuntary. Heart muscle is controlled by the sinus node, which is influenced by the autonomic nervous system.

Smooth

Smooth muscle contraction is regulated by the autonomic nervous system, hormones, and local chemical signals, allowing for gradual and sustained contractions. This type of muscle tissue is also capable of adapting to different levels of stretch and tension, which is important for maintaining proper blood flow and the movement of materials through the digestive system.

Physiology:

Contraction

Neuromuscular junctions are the focal points where a motor neuron attaches to a muscle. Acetylcholine (a neurotransmitter used in skeletal muscle contraction) is released from the axon terminal of the nerve cell when an action potential reaches the microscopic junction called a synapse. These chemical messengers cross the synapse and stimulate electrical changes, which are produced in the muscle cell when the acetylcholine binds to receptors on its surface. Calcium is then released from its storage area in the cell's sarcoplasmic reticulum. An impulse from a nerve cell causes calcium release and brings about a single, short muscle contraction called a muscle twitch. If there is a problem at the neuromuscular junction, a very prolonged contraction may occur, such as the muscle contractions that result from tetanus. Also, a loss of function at the junction can produce paralysis.

Skeletal muscles are organized into hundreds of motor units, each of which involves a motor neuron, attached by a series of thin finger-like structures called axon terminals. These attach to and control discrete bundles of muscle fibers. A coordinated and fine-tuned response to a specific circumstance will involve controlling the precise number of motor units used. While individual muscle units contract as a unit, the entire muscle can contract on a predetermined basis due to the structure of the motor unit. Motor unit coordination, balance, and control frequently come under the direction of the cerebellum of the brain. This allows for complex muscular coordination with little conscious effort, such as when one drives a car without thinking about the process.

Tendon

A tendon is a piece of connective tissue that connects a muscle to a bone. When a muscle contracts, it pulls against the skeleton to create movement. A tendon connects this muscle to a bone, making this function possible.

Aerobic and anaerobic muscle activity

At rest, the body produces the majority of its ATP aerobically in the mitochondria without producing lactic acid or other fatiguing byproducts. During exercise, the method of ATP production varies depending on the fitness of the individual as well as the duration and intensity of exercise. At lower activity levels, when exercise continues for a long duration (several minutes or longer), energy is produced aerobically by combining oxygen with carbohydrates and fats stored in the body.

During activity that is higher in intensity, with possible duration decreasing as intensity increases, ATP production can switch to anaerobic pathways, such as the use of the creatine phosphate and the phosphagen system or anaerobic glycolysis. Aerobic ATP production is biochemically much slower and can only be used for long-duration, low-intensity exercise, but it produces no fatiguing waste products and yields a much greater number of ATP molecules per fat or carbohydrate molecule. Aerobic training allows the oxygen delivery system to be more efficient, allowing aerobic metabolism to begin more quickly. Anaerobic ATP production produces ATP much faster and allows near-maximal intensity exercise, but also produces significant amounts of lactic acid, which renders high-intensity exercise unsustainable for more than several minutes. The phosphagen system is also anaerobic. It allows for the highest levels of exercise intensity, but intramuscular stores of phosphocreatine are very limited and can only provide energy for exercises lasting up to ten seconds. Recovery is very quick, with full creatine stores regenerated within five minutes.

Clinical significance

Multiple diseases can affect the muscular system.

Muscular Dystrophy

Muscular dystrophy is a group of disorders associated with progressive muscle weakness and loss of muscle mass. These disorders are caused by mutations in a person’s genes. Globally, the disease affects between 19.8 and 25.1 people per 100,000 person-years.

There are more than 30 types of muscular dystrophy. Depending on the type, muscular dystrophy can affect the patient's heart and lungs, and/or their ability to move, walk, and perform daily activities. The most common types include:

* Duchenne muscular dystrophy (DMD) and Becker muscular dystrophy (BMD)
* Myotonic dystrophy
* Limb-girdle muscular dystrophy (LGMD)
* Facioscapulohumeral muscular dystrophy (FSHD)
* Congenital muscular dystrophy (CMD)
* Distal muscular dystrophy (DD)
* Oculopharyngeal muscular dystrophy (OPMD)
* Emery-Dreifuss muscular dystrophy (EDMD)



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2105 2024-03-28 23:08:49

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,428

Re: Miscellany

2107) Iron Lung

Gist

An iron lung is a device for artificial respiration in which rhythmic alternations in the air pressure in a chamber surrounding a patient's chest force air into and out of the lungs.

Summary

Can you name any truly life-changing inventions? There have been many over the course of human history. Just look at the field of medicine, for example. In this area alone, inventions like vaccines, anesthesia, and the stethoscope have changed the world.

Today’s Wonder of the Day is about another medical invention. Before the creation of respirators, it helped people who couldn’t breathe on their own. That’s right—today, we’re learning about the iron lung.

Iron lungs aren’t common today, but there was a time when they could be found in many hospitals. Invented in 1928, they offered treatment for severe cases of polio. This illness, which affected mostly children, could lead to life-threatening issues. It could even cause paralysis.

In these cases, patients could even lose the ability to breathe. This happened when the virus affected the diaphragm, a muscle below the lungs. Many of these patients regained the ability to breathe after a few weeks or months using an iron lung. Others relied on the machine for the rest of their lives.

How do iron lungs work? They rely on air pressure. To begin the treatment, patients are put on a sliding bed. A nurse or doctor pushes the bed into the machine, which is a large metal tube. Once patients are inside the lung, only their heads are outside of the tube. A rubber seal around their neck stops air from escaping the machine.

When the iron lung is switched on, it increases air pressure inside the tube. This causes the lungs to deflate, forcing the patient to exhale. Then, the air pressure decreases. This, in turn, leads the patient to inhale as their lungs inflate.

When patients begin treatment with an iron lung, they spend most of their time inside the machine. They may be taken out for mere minutes a day until they’re able to breathe on their own. As such, they rely on nurses, doctors, and other hospital staff to help them with everyday tasks like eating and changing clothes.

Does anyone still use an iron lung today? Yes, though very few. One example was Paul Alexander, who was diagnosed with polio in 1952 at age six. Alexander could spend short periods of time, sometimes hours, outside of the lung. He built a successful career as an attorney and lived a full life thanks to the breathing support he received from this machine.

Thanks to vaccines, there hasn’t been a new case of polio in the U.S. since 1979. This made it difficult for Alexander to find replacement parts for his iron lung, and to find people to repair the machine. Additionally, insurance companies no longer covered the repairs. As he relied on the machine for survival, Alexander had to pay for its upkeep out-of-pocket.

Have you ever seen an iron lung in action? Today, hospitals opt for modern respirators in place of these devices. Still, the iron lung is to thank for the survival of many children who contracted polio in the 20th century. Can you think of any other medical inventions that have changed the world?

Details

An iron lung is a type of negative pressure ventilator (NPV), a mechanical respirator which encloses most of a person's body and varies the air pressure in the enclosed space to stimulate breathing. It assists breathing when muscle control is lost, or the work of breathing exceeds the person's ability. Need for this treatment may result from diseases including polio and botulism and certain poisons (for example, barbiturates, tubocurarine).

The use of iron lungs is largely obsolete in modern medicine as more modern breathing therapies have been developed and due to the eradication of polio in most of the world. However, in 2020, the COVID-19 pandemic revived some interest in the device as a cheap, readily-producible substitute for positive-pressure ventilators, which were feared to be outnumbered by patients potentially needing temporary artificially assisted respiration.

The iron lung is a large horizontal cylinder designed to stimulate breathing in patients who have lost control of their respiratory muscles. The patient's head is exposed outside the cylinder, while the body is sealed inside. Air pressure inside the cylinder is cycled to facilitate inhalation and exhalation. Devices like the Drinker, Emerson, and Both respirators are examples of iron lungs, which can be manually or mechanically powered. Smaller versions, like the cuirass ventilator and jacket ventilator, enclose only the patient's torso. Breathing in humans occurs through negative pressure, where the rib cage expands and the diaphragm contracts, causing air to flow in and out of the lungs.

The concept of external negative pressure ventilation was introduced by John Mayow in 1670. The first widely used device was the iron lung, developed by Philip Drinker and Louis Shaw in 1928. Initially used for coal gas poisoning treatment, the iron lung gained fame for treating respiratory failure caused by polio in the mid-20th century. John Haven Emerson introduced an improved and more affordable version in 1931. The Both respirator, a cheaper and lighter alternative to the Drinker model, was invented in Australia in 1937. British philanthropist William Morris financed the production of the Both–Nuffield respirators, donating them to hospitals throughout Britain and the British Empire. During the polio outbreaks of the 1940s and 1950s, iron lungs filled hospital wards, assisting patients with paralyzed diaphragms in their recovery.

Polio vaccination programs and the development of modern ventilators have nearly eradicated the use of iron lungs in the developed world. Positive pressure ventilation systems, which blow air into the patient's lungs via intubation, have become more common than negative pressure systems like iron lungs. However, negative pressure ventilation is more similar to normal physiological breathing and may be preferable in rare conditions. As of 2024, after the death of Paul Alexander, only one patient in the U.S. is still using an iron lung. In response to the COVID-19 pandemic and the shortage of modern ventilators, some enterprises developed prototypes of new, easily producible versions of the iron lung.

Design and function

The iron lung is typically a large horizontal cylinder in which a person is laid, with their head protruding from a hole in the end of the cylinder, so that their full head (down to their voice box) is outside the cylinder, exposed to ambient air, and the rest of their body sealed inside the cylinder, where air pressure is continuously cycled up and down to stimulate breathing.

To cause the patient to inhale, air is pumped out of the cylinder, causing a slight vacuum, which causes the patient's chest and abdomen to expand (drawing air from outside the cylinder, through the patient's exposed nose or mouth, into their lungs). Then, for the patient to exhale, the air inside the cylinder is compressed slightly (or allowed to equalize to ambient room pressure), causing the patient's chest and abdomen to partially collapse, forcing air out of the lungs, as the patient exhales the breath through their exposed mouth and nose, outside the cylinder.
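To make that cycle concrete, here is a minimal Python sketch of the idea. Everything in it is an illustrative assumption (the numbers, the simple linear-compliance lung model, and the function names), not the specification of any real machine:

import math

AMBIENT_KPA = 101.325       # assumed ambient air pressure at the patient's mouth
COMPLIANCE_L_PER_KPA = 0.3  # assumed lung compliance: litres gained per kPa of vacuum
RESTING_VOLUME_L = 2.5      # assumed lung volume when tank pressure equals ambient

def tank_pressure(t, period_s=4.0, swing_kpa=2.0):
    # Cylinder pressure over time: it dips below ambient to draw the chest
    # outward (inhalation), then returns to ambient so the chest falls back
    # and the patient exhales.
    return AMBIENT_KPA - swing_kpa * max(0.0, math.sin(2 * math.pi * t / period_s))

def lung_volume(p_tank):
    # The chest expands in proportion to how far the tank is below ambient.
    return RESTING_VOLUME_L + COMPLIANCE_L_PER_KPA * (AMBIENT_KPA - p_tank)

for t in (0.0, 1.0, 2.0, 3.0):
    p = tank_pressure(t)
    phase = "inhaling" if p < AMBIENT_KPA else "exhaling/resting"
    print(f"t={t:.0f}s  tank={p:.2f} kPa  lung volume={lung_volume(p):.2f} L  ({phase})")

Running it prints one breath: lung volume rises while the tank pressure sits below ambient and falls back as the pressure returns to ambient, which is exactly the inhale/exhale cycle described above.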

Examples of the device include the Drinker respirator, the Emerson respirator, and the Both respirator. Iron lungs can be either manually or mechanically powered, but are normally powered by an electric motor linked to a flexible pumping diaphragm (commonly opposite the end of the cylinder from the patient's head). Larger "room-sized" iron lungs were also developed, allowing for simultaneous ventilation of several patients (each with their heads protruding from sealed openings in the outer wall), with sufficient space inside for a nurse or a respiratory therapist to be inside the sealed room, attending the patients.

Smaller, single-patient versions of the iron lung include the so-called cuirass ventilator (named for the cuirass, a torso-covering body armor). The cuirass ventilator encloses only the patient's torso, or chest and abdomen, but otherwise operates essentially the same as the original, full-sized iron lung. A lightweight variation on the cuirass ventilator is the jacket ventilator or poncho or raincoat ventilator, which uses a flexible, impermeable material (such as plastic or rubber) stretched over a metal or plastic frame over the patient's torso.

Method and use

Humans, like most mammals, breathe by negative pressure breathing: the rib cage expands and the diaphragm contracts, expanding the chest cavity. This causes the pressure in the chest cavity to decrease, and the lungs expand to fill the space. This, in turn, causes the pressure of the air inside the lungs to decrease (it becomes negative, relative to the atmosphere), and air flows into the lungs from the atmosphere: inhalation. When the diaphragm relaxes, the reverse happens and the person exhales. If a person loses part or all of the ability to control the muscles involved, breathing becomes difficult or impossible.

Invention and early use:

Initial development

In 1670, English scientist John Mayow came up with the idea of external negative pressure ventilation. Mayow built a model consisting of bellows and a bladder to pull in and expel air. The first negative pressure ventilator was described by British physician John Dalziel in 1832. Successful use of similar devices was described a few years later. Early prototypes included a hand-operated bellows-driven "Spirophore" designed by Dr Woillez of Paris (1876), and an airtight wooden box designed specifically for the treatment of polio by Dr Stueart of South Africa (1918). Stueart's box was sealed at the waist and shoulders with clay and powered by motor-driven bellows.

Drinker and Shaw tank

The first of these devices to be widely used, however, was developed in 1928 by Philip Drinker and Louis Shaw of the United States. The iron lung, often referred to in the early days as the "Drinker respirator", was invented by Philip Drinker (1894–1972) and Louis Agassiz Shaw Jr., professors of industrial hygiene at the Harvard School of Public Health. The machine was powered by an electric motor with air pumps from two vacuum cleaners. The air pumps changed the pressure inside a rectangular, airtight metal box, pulling air in and out of the lungs. The first clinical use of the Drinker respirator on a human was on October 12, 1928, at the Boston Children's Hospital in the US. The subject was an eight-year-old girl who was nearly dead as a result of respiratory failure due to polio. Her dramatic recovery within less than a minute of being placed in the chamber helped popularize the new device.

Variations

Boston manufacturer Warren E. Collins began production of the iron lung that same year, 1928. Although it was initially developed for the treatment of victims of coal gas poisoning, it was most famously used in the mid-20th century for the treatment of respiratory failure caused by polio.

Danish physiologist August Krogh, upon returning to Copenhagen in 1931 from a visit to New York where he saw the Drinker machine in use, constructed the first Danish respirator designed for clinical purposes. Krogh's device differed from Drinker's in that its motor was powered by water from the city pipelines. Krogh also made an infant respirator version.

In 1931, John Haven Emerson (1906–1997) introduced an improved and less expensive iron lung. The Emerson iron lung had a bed that could slide in and out of the cylinder as needed, and the tank had portal windows which allowed attendants to reach in and adjust limbs, sheets, or hot packs. Drinker and Harvard University sued Emerson, claiming he had infringed on patent rights. Emerson defended himself by making the case that such lifesaving devices should be freely available to all. Emerson also demonstrated that every aspect of Drinker's patents had been published or used by others at earlier times. Since an invention must be novel to be patentable, prior publication/use of the invention meant it was not novel and therefore unpatentable. Emerson won the case, and Drinker's patents were declared invalid.

The United Kingdom's first iron lung was designed in 1934 by Robert Henderson, an Aberdeen doctor. Henderson had seen a demonstration of the Drinker respirator in the early 1930s and built a device of his own upon his return to Scotland. Four weeks after its construction, the Henderson respirator was used to save the life of a 10-year-old boy from New Deer, Aberdeenshire who had poliomyelitis. Despite this success, Henderson was reprimanded for secretly using hospital facilities to build the machine.

Both respirator

The Both respirator, a negative pressure ventilator, was invented in 1937 when Australia's epidemic of poliomyelitis created an immediate need for more ventilating machines to compensate for respiratory paralysis. Although the Drinker model was effective and saved lives, its widespread use was hindered by the fact that the machines were very large, heavy (about 750 lbs or 340 kg), bulky, and expensive. In the US, an adult machine cost about $2,000 in 1930, and £2,000 delivered to Melbourne in 1936. The cost in Europe in the mid-1950s was around £1,500. Consequently, there were few of the Drinker devices in Australia and Europe.

The South Australia Health Department asked Adelaide brothers Edward and Don Both to create an inexpensive "iron lung". Biomedical engineer Edward Both designed and developed a cabinet respirator made of plywood that worked similarly to the Drinker device, with the addition of a bi-valved design which allowed temporary access to the patient's body. Far cheaper to make (only £100) than the Drinker machine, the Both Respirator also weighed less and could be constructed and transported more quickly. Such was the demand for the machines that they were often used by patients within an hour of production.

Visiting London in 1938 during another polio epidemic, Both produced additional respirators there which attracted the attention of William Morris (Lord Nuffield), a British motor manufacturer and philanthropist. Nuffield, intrigued by the design, financed the production of approximately 1700 machines at his car factory in Cowley and donated them to hospitals throughout all parts of Britain and the British Empire. Soon, the Both–Nuffield respirators were able to be produced by the thousand at about one-thirteenth the cost of the American design. By the early 1950s, there were over 700 Both-Nuffield iron lungs in the United Kingdom, but only 50 Drinker devices.

Polio epidemic

[Photo caption: staff in a Rhode Island hospital examine a patient in an iron lung tank respirator during a polio epidemic in 1960.]

Rows of iron lungs filled hospital wards at the height of the polio outbreaks of the 1940s and 1950s, helping children, and some adults, with bulbar polio and bulbospinal polio. A polio patient with a paralyzed diaphragm would typically spend two weeks inside an iron lung while recovering.

Modern development and usage

Polio vaccination programs have virtually eradicated new cases of poliomyelitis in the developed world. Because of this, the development of modern ventilators, and widespread use of tracheal intubation and tracheotomy, the iron lung has mostly disappeared from modern medicine. In 1959, 1,200 people were using tank respirators in the United States, but by 2004 that number had decreased to just 39. By 2014, only 10 people were left with an iron lung.

Replacement

Positive pressure ventilation systems are now more common than negative pressure systems. Positive pressure ventilators work by blowing air into the patient's lungs via intubation through the airway; they were used for the first time at Blegdams Hospital, Copenhagen, Denmark, during a polio outbreak in 1952. The technique proved a success, and by 1953 it had superseded the iron lung throughout Europe.

The positive pressure ventilator has the advantage that the patient's airways can be cleared and the patient can be nursed in a semi-seated position during the acute phase of polio. The fatality rate when using iron lungs on patients with respiratory paralysis could be as high as 80% to 90%: most patients either drowned in their own saliva because their swallowing muscles were paralyzed, or died of organ failure from acidosis as carbon dioxide accumulated in the bloodstream behind clogged airways. By using positive pressure ventilators instead of iron lungs, the Copenhagen hospital team was eventually able to decrease the fatality rate to 11%. The first patient treated this way was a 12-year-old girl named Vivi Ebert, who had bulbar polio.

The iron lung now has a marginal place in modern respiratory therapy. Most patients with paralysis of the breathing muscles use modern mechanical ventilators that push air into the airway with positive pressure. These are generally efficacious and have the advantage of not restricting patients' movements or caregivers' ability to examine the patients as significantly as an iron lung does.

Continued use

Despite the advantages of positive ventilation systems, negative pressure ventilation is a truer approximation of normal physiological breathing and results in a more normal distribution of air in the lungs. It may also be preferable in certain rare conditions, such as central hypoventilation syndrome, in which failure of the medullary respiratory centers at the base of the brain results in patients having no autonomic control of breathing. At least one reported polio patient, Dianne Odell, had a spinal deformity that caused the use of mechanical ventilators to be contraindicated.

At least a few patients today still use the older machines, often in their homes, despite the occasional difficulty of finding replacement parts.

Joan Headley of Post-Polio Health International said that as of May 28, 2008, about 30 patients in the US were still using an iron lung. That figure may be inaccurately low; Houston alone had 19 iron lung patients living at home in 2008.

Martha Mason of Lattimore, North Carolina, died on May 4, 2009, after spending 61 of her 72 years in an iron lung.

On October 30, 2009, June Middleton of Melbourne, Australia, who had been entered in the Guinness Book of Records as the person who spent the longest time in an iron lung, died aged 83, having spent more than 60 years in her iron lung.

In 2013, Post-Polio Health International (PHI) estimated that only six to eight iron lung users remained in the United States; as of 2017, its executive director knew of none. Press reports then emerged, however, of at least three (perhaps the last three) users of such devices, sparking interest in the makerspace community, including Naomi Wu, in manufacturing the obsolete components, particularly the gaskets.

In 2021, the National Public Radio programs Radio Diaries and All Things Considered gave a report on Martha Lillard, one of the last remaining Americans depending on the daily use of an iron lung, which she had been using since 1953. In her audio interview, she reported that she was having problems obtaining replacement parts to keep her machine working properly.

On March 11, 2024, Paul Alexander of Dallas, Texas, United States, died at the age of 78. He had been confined to an iron lung for 72 years from the age of six, longer than anyone, and was the last man living in an iron lung. With his death, Martha Lillard is the only person in the U.S. known to use an iron lung.

COVID-19 pandemic

In early 2020, reacting to the COVID-19 pandemic, to address the urgent global shortage of modern ventilators (needed for patients with advanced, severe COVID-19), some enterprises developed prototypes of new, readily-producible versions of the iron lung. These developments included:

* a compact, torso-sized "exovent" developed by a team in the United Kingdom, which included the University of Warwick, the Royal National Throat Nose and Ear Hospital, the Marshall Aerospace and Defence Group, the Imperial College Healthcare NHS Trust, along with teams of medical clinicians, academics, manufacturers, engineers and citizen scientists;
* a full-size iron lung developed in the United States by a team led by Hess Services, Inc., of Hays, Kansas.

Additional Information

The iron lung was born in 1927, when Philip Drinker and Louis Agassiz Shaw at Harvard University devised a machine that could maintain respiration, pulling air into and out of the lungs by changing the pressure in an airtight metal box.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2106 2024-03-30 00:07:09

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,428

Re: Miscellany

2108) Integumentary System

Gist

Your integumentary system is your body's outer layer. It consists of your skin, hair, nails and glands. These organs and structures are your first line of defense against bacteria and help protect you from injury and sunlight. Your integumentary system works with other systems in your body to keep it in balance.

Details

The integumentary system is the body's largest organ system, forming a physical barrier between the external environment and the internal environment that it serves to protect and maintain.

The integumentary system includes

* Skin (epidermis, dermis)
* Hypodermis
* Associated glands
* Hair
* Nails.

In addition to its barrier function, this system performs many intricate functions such as body temperature regulation, cell fluid maintenance, synthesis of Vitamin D, and detection of stimuli. The various components of this system work in conjunction to carry out these functions.

General Function

The integumentary system serves several purposes:

* Physical protection: The integumentary system is the covering of the human body, and its most apparent function is physical protection. The skin is a tightly knit network of cells, with each layer contributing to its strength. The epidermis has an outermost layer built from layers of dead keratin that can withstand the wear and tear of the outer environment; the dermis provides the epidermis with blood supply and has nerves that, among other functions, bring danger to attention; the hypodermis provides physical cushioning against mechanical trauma through adipose storage; glands secrete protective films throughout the body; nails protect the digits; and hairs filter harmful particles before they can enter the eyes, ears, nose, etc.
* Immunity: The skin is the body’s first line of defense, acting as a physical barrier that prevents the direct entry of pathogens. Antimicrobial peptides (AMPs) and lipids on the skin also act as a biomolecular barrier that disrupts bacterial membranes. Resident immune cells, both myeloid and lymphoid, are present in the skin, and some, e.g. Langerhans cells or dermal dendritic cells, can travel to the periphery and activate the greater immune system.
* Wound healing: When our body undergoes trauma with a resulting injury, the integumentary system orchestrates the wound healing process through hemostasis, inflammation, proliferation, and remodeling.
* Thermoregulation: The skin has a large surface area that is highly vascularized, which allows it to conserve and release heat through vasoconstriction and vasodilation, respectively.
* Vitamin D synthesis: The primary sources of vitamin D, which is crucial for bone health, are sun exposure and oral intake.

* Sensation: The skin is innervated by various types of sensory nerve endings that discriminate pain, temperature, touch, and vibration. Each type of receptor and nerve fiber varies in its adaptive and conductive speeds, leading to a wide range of signals that can be integrated to create an understanding of the external environment and help the body to react appropriately.

Organ Systems Involved:

Skin

The skin accounts for about 16% of your total body weight, and its surface area covers between 1.5 and 2 square metres. It is made up of two layers: the superficial epidermis and the deeper dermis.

Epidermis:

* Tough, outer layer that acts as the first line of defense against the external environment
* Regenerates from stem cells located in the basal layer, which grow up towards the stratum corneum. The epidermis itself is devoid of blood supply and derives its nutrition from the underlying dermis. Its layers, from superficial to deep, are:

* Stratum corneum
* Stratum granulosum
* Stratum spinosum
* Stratum basale
* In the palms and soles where the skin is thicker, there is an additional layer of skin between the stratum corneum and stratum granulosum called the stratum lucidum.

Dermis

* Underlying connective tissue framework that supports the epidermis
* The dermis as a whole contains blood and lymph vessels, nerves, sweat glands, hair follicles, and various other structures embedded within the connective tissue.

It further subdivides into two layers:

* Superficial papillary dermis - forms finger-like projections into the epidermis, known as dermal papillae, and consists of highly vascularized, loose connective tissue.
* Deep reticular layer - has dense connective tissue that forms a strong network.

Pathophysiology and injury, for example:

* Burns, e.g. of the hand
* Psoriatic arthritis
* Epidermolysis bullosa
* Cellulitis
* Cancer, e.g. melanoma
* Pressure sores
* Wounds

Hypodermis

The hypodermis lies between the dermis and underlying organs.

* Commonly referred to as subcutaneous tissue
* Composed of loose areolar tissue and adipose tissue.
* Provides additional cushion and insulation through its function of fat storage and connects the skin to underlying structures such as muscle.

Hair

Hair is a component of the integumentary system and extends downward into the dermal layer, where it sits in the hair follicle.

* The presence of hair is a primary differentiator of mammals as a unique class of organisms.
* In humans, it is a cherished and highly visible indicator of health, youth, and even class.
* It has a sensory function and protects against cold and UV radiation.
* Areas of clinical significance include hair loss, excess hair, alterations due to nutritional deficiencies, infectious causes, and the effects of drug reactions.

Nails

* Nails form as layers of keratin and appear at the dorsal tips of the fingers and toes.
* Nails function to protect the fingers and toes while increasing the precision of movements and enhancing sensation.
* Pathophysiology: onychomycosis (fungal infection; common clinical presentation involves nail discoloration, subungual hyperkeratosis, onycholysis, and splitting or destruction of the nail plate), pitting (presents in conditions such as psoriasis and eczema), koilonychia (spoon nail, associated with iron deficiency anemia but sometimes idiopathic), and clubbing (the most common manifestation of hypertrophic osteoarthropathy, correlating with many systemic conditions).

Associated Glands

There are four types of exocrine glands within human skin: sweat, sebaceous, ceruminous, and mammary glands.

Sweat glands are further divided into eccrine and apocrine glands.

* Eccrine glands are distributed throughout the body and primarily produce serous fluid to regulate body temperature.
* Apocrine glands are present in the axilla and pubic area and produce milky protein-rich sweat. These glands are responsible for odor as bacteria break down the secreted organic substances.
* Sebaceous glands are part of the pilosebaceous unit, which includes the hair, hair follicle, and arrector pili muscle.

* Sebaceous glands secrete an oily substance called sebum, a mixture of lipids that forms a thin film on the skin. This film adds a protective layer, prevents fluid loss, and also plays an antimicrobial role.
* Pathophysiology: e.g. seborrheic dermatitis, hyperhidrosis.

Conclusion

The integumentary system provides numerous functions necessary for human life while also maintaining an optimal internal environment for other critical components to thrive.

* When there is an imbalance in this system, many disorders can manifest.
* The integumentary system also acts as a reflection of underlying pathologies, e.g. showing jaundice with liver dysfunction, displaying petechiae with thrombocytopenia, or decreased skin turgor with dehydration.
* It is a system that can provide many external clues regarding an individual’s physiological state and is a vital component of a complete clinical picture.

Additional Information

The integumentary system is the set of organs forming the outermost layer of an animal's body. It comprises the skin and its appendages, which act as a physical barrier between the external environment and the internal environment that it serves to protect and maintain the body of the animal. Mainly it is the body's outer skin.

The integumentary system includes skin, hair, scales, feathers, hooves, and nails. It has a variety of additional functions: it may serve to maintain water balance, protect the deeper tissues, excrete wastes, and regulate body temperature, and is the attachment site for sensory receptors which detect pain, sensation, pressure, and temperature.

The integumentary system is the set of organs that forms the external covering of the body and protects it from many threats such as infection, desiccation, abrasion, chemical assault and radiation damage. In humans the integumentary system includes the skin – a thickened keratinized epithelium made of multiple layers of cells that is largely impervious to water. It also contains specialized cells that secrete melanin to protect the body from the carcinogenic effects of UV rays and cells that have an immune function. Sweat glands that excrete wastes and regulate body temperature are also part of the integumentary system. Somatosensory receptors and nociceptors are important components of this organ system that serve as warning sensors, allowing the body to move away from noxious stimuli.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2107 2024-03-31 00:03:22

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,428

Re: Miscellany

2109) Architectural Drawing

Gist

Architectural drawings serve as a vital language in the realm of architecture, encapsulating detailed visual representations of a building's design, structure, and spatial arrangements. These drawings translate an architect's conceptual ideas into precise, technical illustrations that communicate the exact specifications and details necessary for constructing a building or structure. They encompass various types, including plans, elevations, sections, and details, offering a comprehensive view of the proposed edifice from different perspectives. These drawings serve as a blueprint for construction teams, guiding them in materializing the architect's vision into a tangible, functional space.

Summary

Architecture drawings are the foundation of every successful construction and remodeling project. And fortunately, it’s easier than ever to create different types of architectural drawings.

Gone are the days of tediously drafting hand-drawn sketches or slaving over a CAD program for days on end.

With modern software you — even with limited design experience — can create professional architectural drawings in just a few hours.

What is an Architectural Drawing?

An architectural drawing is a sketch, plan, diagram, or schematic that communicates detailed information about a building. Architects and designers create these types of technical drawings during the planning stages of a construction project.

Architecture drawings are important for several reasons:

* They help owners and project planners understand how a building will look and function when it’s finished.
* They give necessary information and instructions so the construction crew can build the structure.
* And finally, an architect’s drawings provide a detailed record of the inner workings of a building, which is necessary for future maintenance.

Throughout the project, you’ll need to create different types of architectural drawings.

Details

An architectural drawing or architect's drawing is a technical drawing of a building (or building project) that falls within the definition of architecture. Architectural drawings are used by architects and others for a number of purposes: to develop a design idea into a coherent proposal, to communicate ideas and concepts, to convince clients of the merits of a design, to assist a building contractor to construct it based on design intent, as a record of the design and planned development, or to make a record of a building that already exists.

Architectural drawings are made according to a set of conventions, which include particular views (floor plan, section etc.), sheet sizes, units of measurement and scales, annotation and cross referencing.

Historically, drawings were made in ink on paper or similar material, and any copies required had to be laboriously made by hand. The twentieth century saw a shift to drawing on tracing paper so that mechanical copies could be run off efficiently. The development of the computer had a major impact on the methods used to design and create technical drawings, making manual drawing almost obsolete, and opening up new possibilities of form using organic shapes and complex geometry. Today the vast majority of drawings are created using CAD software.

History:

Size and scale

The size of drawings reflects the materials available and the size that is convenient to transport – rolled up or folded, laid out on a table, or pinned up on a wall. The drafting process may impose limitations on the size that is realistically workable. Sizes are determined by a consistent paper size system, according to local usage. Normally the largest paper size used in modern architectural practice is ISO A0 (841 mm × 1,189 mm or 33.1 in × 46.8 in) or in the USA Arch E (762 mm × 1,067 mm or 30 in × 42 in) or Large E size (915 mm × 1,220 mm or 36 in × 48 in).

Architectural drawings are drawn to scale so that relative sizes are correctly represented. The scale is chosen both to ensure the whole building will fit on the chosen sheet size and to show the required amount of detail. On the scale of one-eighth of an inch to one foot (1:96) or the metric equivalent of 1 to 100, walls are typically shown as simple outlines corresponding to the overall thickness. At a larger scale, half an inch to one foot (1:24) or the nearest common metric equivalent 1 to 20, the layers of different materials that make up the wall construction are shown. Construction details are drawn to a larger scale, in some cases full size (1 to 1 scale).

Scale drawings enable dimensions to be "read" off the drawing, i.e. measured directly. Imperial scales (feet and inches) are equally readable using an ordinary ruler. On a one-eighth inch to one-foot scale drawing, the one-eighth divisions on the ruler can be read off as feet. Architects normally use a scale ruler with different scales marked on each edge. A third method, used by builders in estimating, is to measure directly off the drawing and multiply by the scale factor.
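As a worked example of the scale arithmetic above, here is a short illustrative Python sketch (the function names are our own, chosen for this example):

def real_length_mm(measured_on_drawing_mm, scale_denominator):
    # Builder's method: measure off the drawing, multiply by the scale factor.
    return measured_on_drawing_mm * scale_denominator

def drawing_length_mm(real_mm, scale_denominator):
    return real_mm / scale_denominator

# A wall drawn 50 mm long at 1:100 represents a real length of 5 metres:
print(real_length_mm(50, 100))   # 5000 (mm)

# One eighth of an inch to one foot is 1:96, because a foot holds 96 eighths:
print(12 / (1 / 8))              # 96.0

Note that a drawing made at 1:100 but read with the 1:96 edge of a scale ruler would be off by about four percent, one more reason figured dimensions take precedence over scaled ones.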

Dimensions can be measured off drawings made on a stable medium such as vellum. All processes of reproduction introduce small errors, especially now that different copying methods mean that the same drawing may be re-copied, or copies made in several different ways. Consequently, dimensions need to be written ("figured") on the drawing. The disclaimer "Do not scale off dimensions" is commonly inscribed on architects' drawings, to guard against errors arising in the copying process.

Standard views used in architectural drawing
Floor plan

A floor plan is the most fundamental architectural diagram, a view from above showing the arrangement of spaces in a building in the same way as a map, but showing the arrangement at a particular level of a building. Technically it is a horizontal section cut through a building (conventionally at four feet / one metre and twenty centimetres above floor level), showing walls, windows and door openings, and other features at that level. The plan view includes anything that could be seen below that level: the floor, stairs (but only up to the plan level), fittings, and sometimes furniture. Objects above the plan level (e.g. beams overhead) can be indicated as dashed lines.

Geometrically, plan view is defined as a vertical orthographic projection of an object onto a horizontal plane, with the horizontal plane cutting through the building.

Site plan

A site plan is a specific type of plan, showing the whole context of a building or group of buildings. A site plan shows property boundaries and means of access to the site, and nearby structures if they are relevant to the design. For a development on an urban site, the site plan may need to show adjoining streets to demonstrate how the design fits into the urban fabric. Within the site boundary, the site plan gives an overview of the entire scope of work. It shows the buildings (if any) already existing and those that are proposed, usually as a building footprint; roads, parking lots, footpaths, hard landscaping, trees, and planting. For a construction project, the site plan also needs to show all the services connections: drainage and sewer lines, water supply, electrical and communications cables, exterior lighting etc.

Site plans are commonly used to represent a building proposal prior to detailed design: drawing up a site plan is a tool for deciding both the site layout and the size and orientation of proposed new buildings. A site plan is used to verify that a proposal complies with local development codes, including restrictions on historical sites. In this context the site plan forms part of a legal agreement, and there may be a requirement for it to be drawn up by a licensed professional: architect, engineer, landscape architect or land surveyor.

Elevation

An elevation is a view of a building seen from one side, a flat representation of one façade. This is the most common view used to describe the external appearance of a building. Each elevation is labelled in relation to the compass direction it faces, e.g. looking toward the north you would be seeing the southern elevation of the building. Buildings are rarely a simple rectangular shape in plan, so a typical elevation may show all the parts of the building that are seen from a particular direction.

Geometrically, an elevation is a horizontal orthographic projection of a building onto a vertical plane, the vertical plane normally being parallel to one side of the building.

Architects also use the word elevation as a synonym for façade, so the "north elevation" is the north-facing wall of the building.

Cross section

A cross section, also simply called a section, represents a vertical plane cut through the object, in the same way as a floor plan is a horizontal section viewed from the top. In the section view, everything cut by the section plane is shown as a bold line, often with a solid fill to show objects that are cut through, and anything seen beyond generally shown in a thinner line. Sections are used to describe the relationship between different levels of a building. In one such drawing of an observatory (the Observatorium), the section shows the dome which can be seen from the outside, a second dome that can only be seen inside the building, and the way the space between the two accommodates a large astronomical telescope: relationships that would be difficult to understand from plans alone.

A sectional elevation is a combination of a cross section, with elevations of other parts of the building seen beyond the section plane.

Geometrically, a cross section is a horizontal orthographic projection of a building on to a vertical plane, with the vertical plane cutting through the building.

Isometric and axonometric projections

Isometric and axonometric projections are a simple way of representing a three dimensional object, keeping the elements to scale and showing the relationship between several sides of the same object, so that the complexities of a shape can be clearly understood.

There is some confusion over the distinction between the terms isometric and axonometric. "Axonometric is a word that has been used by architects for hundreds of years. Engineers use the word axonometric as a generic term to include isometric, diametric and trimetric drawings." This article uses the terms in the architecture-specific sense.

Despite fairly complex geometrical explanations, for the purposes of practical drafting the difference between isometric and axonometric is simple. In both, the plan is drawn on a skewed or rotated grid, and the verticals are projected vertically on the page. All lines are drawn to scale so that relationships between elements are accurate. In many cases a different scale is required for different axes, and again this can be calculated but in practice was often simply estimated by eye.

* An isometric uses a plan grid at 30 degrees from the horizontal in both directions, which distorts the plan shape. Isometric graph paper can be used to construct this kind of drawing. This view is useful to explain construction details (e.g. three dimensional joints in joinery). The isometric was the standard view until the mid twentieth century, remaining popular until the 1970s, especially for textbook diagrams and illustrations.

* Cabinet projection is similar, but only one axis is skewed, the others being horizontal and vertical. Originally used in cabinet making, the advantage is that a principal side (e.g. a cabinet front) is displayed without distortion, so only the less important sides are skewed. The lines leading away from the eye are drawn at a reduced scale to lessen the degree of distortion. The cabinet projection is seen in Victorian engraved advertisements and architectural textbooks, but has virtually disappeared from general use.

* An axonometric uses a 45-degree plan grid, which keeps the original orthogonal geometry of the plan. The great advantage of this view for architecture is that the draftsman can work directly from a plan, without having to reconstruct it on a skewed grid. In theory the plan should be set at 45 degrees, but this introduces confusing coincidences where opposite corners align. Unwanted effects can be avoided by rotating the plan while still projecting vertically. This is sometimes called a planometric or plan oblique view, and allows freedom to choose any suitable angle to present the most useful view of an object.

Traditional drafting techniques used 30–60 and 45 degree set squares, and that determined the angles used in these views. Once the adjustable square became common those limitations were lifted.
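The projection geometry just described is easy to state in code. The following Python sketch is illustrative only (the function names and the sample box are our own): it maps a 3D point to 2D paper coordinates for an architectural isometric (both plan axes at 30 degrees from the horizontal) and for a plan-oblique axonometric (the true plan rotated 45 degrees), with verticals projected vertically in both.

import math

def isometric(x, y, z):
    # Both plan axes at 30 degrees from the horizontal; verticals stay
    # vertical. The plan shape is distorted.
    a = math.radians(30)
    return (x * math.cos(a) - y * math.cos(a),
            x * math.sin(a) + y * math.sin(a) + z)

def axonometric(x, y, z, plan_angle_deg=45):
    # The true plan is simply rotated on the page, so its right angles
    # survive; verticals again stay vertical.
    a = math.radians(plan_angle_deg)
    return (x * math.cos(a) - y * math.sin(a),
            x * math.sin(a) + y * math.cos(a) + z)

# The same corner of a 4 x 3 x 2 box under each projection:
print(isometric(4, 3, 2))
print(axonometric(4, 3, 2))

The axonometric preserves the plan's right angles on the page, which is precisely why a draftsman can work directly from the plan, while the isometric distorts them.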

The axonometric gained in popularity in the twentieth century, not just as a convenient diagram but as a formal presentation technique, adopted in particular by the Modern Movement. Axonometric drawings feature prominently in the influential 1970s drawings of Michael Graves, James Stirling and others, using not only straightforward views but also worm's-eye views, unusual and exaggerated rotations of the plan, and exploded elements.

Detail drawings

Detail drawings show a small part of the construction at a larger scale, to show how the component parts fit together. They are also used to show small surface details, for example decorative elements. Section drawings at large scale are a standard way of showing building construction details, typically showing complex junctions (such as floor to wall junction, window openings, eaves and roof apex) that cannot be clearly shown on a drawing that includes the full height of the building. A full set of construction details needs to show plan details as well as vertical section details. One detail is seldom produced in isolation: a set of details shows the information needed to understand the construction in three dimensions. Typical scales for details are 1/10, 1/5 and full size.

In traditional construction, many details were so fully standardized that few detail drawings were required to construct a building. For example, the construction of a sash window would be left to the carpenter, who would fully understand what was required, but unique decorative details of the façade would be drawn up in detail. In contrast, modern buildings need to be fully detailed because of the proliferation of different products, methods and possible solutions.

Architectural perspective

Perspective in drawing is an approximate representation on a flat surface of an image as it is perceived by the eye. The key concepts here are:

* Perspective is the view from a particular fixed viewpoint.
* Horizontal and vertical edges in the object are represented by horizontals and verticals in the drawing.
* Lines leading away into the distance appear to converge at a vanishing point.
* All horizontals converge to a point on the horizon, which is a horizontal line at eye level.
* Verticals converge to a point either above or below the horizon.

The basic categorization of artificial perspective is by the number of vanishing points:

* One-point perspective, where the faces of objects toward the viewer lie parallel to the picture plane, and receding lines converge to a single vanishing point.
* Two-point perspective reduces distortion by viewing objects at an angle, with all the horizontal lines receding to one of two vanishing points, both located on the horizon.
* Three-point perspective introduces additional realism by making the verticals recede to a third vanishing point, which is above or below depending upon whether the view is seen from above or below.

The normal convention in architectural perspective is to use two-point perspective, with all the verticals drawn as verticals on the page.
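
As a minimal numerical illustration of the vanishing-point idea (a sketch under the usual pinhole-camera assumption, not an architectural procedure): place the eye at the origin looking along the depth axis, with the picture plane at unit depth, so projection is simply division by depth. Points receding along a horizontal edge then converge towards a fixed point on the horizon.

def project(x, y, z):
    # One-point (pinhole) perspective: the eye is at the origin looking
    # along the z axis, the picture plane sits at z = 1, and eye level
    # is the line y = 0 (the horizon).
    return (x / z, y / z)

# A horizontal edge 2 units right of centre and 1 unit above eye level,
# receding into the distance:
for depth in (1, 2, 4, 8, 16, 1000):
    print(project(2.0, 1.0, depth))
# (2.0, 1.0), (1.0, 0.5), (0.5, 0.25), ... tending to (0, 0): a single
# vanishing point lying on the horizon, as the list above states.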

Three-point perspective gives a casual, photographic snapshot effect. In professional architectural photography, conversely, a view camera or a perspective control lens is used to eliminate the third vanishing point, so that all the verticals are vertical on the photograph, as with the perspective convention. This can also be done by digital manipulation of a photograph taken with a standard lens.

Aerial perspective is a technique in painting for indicating distance by approximating the effect of the atmosphere on distant objects. In daylight, as an ordinary object gets further from the eye, its contrast with the background is reduced, its color saturation is reduced, and its color becomes more blue. It should not be confused with an aerial view or bird's-eye view, which is the view as seen (or imagined) from a high vantage point. In J M Gandy's perspective of the Bank of England, Gandy portrayed the building as a picturesque ruin in order to show the internal plan arrangement, a precursor of the cutaway view.

A montage image is produced by superimposing a perspective image of a building on to a photographic background. Care is needed to record the position from which the photograph was taken, and to generate the perspective using the same viewpoint. This technique is popular in computer visualization, where the building can be photorealistically rendered, and the final image is intended to be almost indistinguishable from a photograph.

Sketches and diagrams

A sketch is a rapidly executed freehand drawing, a quick way to record and develop an idea, not intended as a finished work. A diagram could also be drawn freehand but deals with symbols, to develop the logic of a design. Both can be worked up into a more presentable form and used to communicate the principles of a design.

In architecture, the finished work is expensive and time consuming, so it is important to resolve the design as fully as possible before construction work begins. Complex modern buildings involve a large team of different specialist disciplines, and communication at the early design stages is essential to keep the design moving towards a coordinated outcome. Architects (and other designers) start investigating a new design with sketches and diagrams, to develop a rough design that provides an adequate response to the particular design problems.

There are two basic elements to a building design, the aesthetic and the practical. The aesthetic element includes the layout and visual appearance, the anticipated feel of the materials, and cultural references that will influence the way people perceive the building. Practical concerns include space allocated for different activities, how people enter and move around the building, daylight and artificial lighting, acoustics, traffic noise, legal matters and building codes, and many other issues. While both aspects are partly a matter of customary practice, every site is different. Many architects actively seek innovation, thereby increasing the number of problems to be resolved.

Architectural legend often refers to designs made on the back of an envelope or on a napkin. Initial thoughts are important, even if they have to be discarded along the way, because they provide the central idea around which the design can develop. Although a sketch is inaccurate, it is disposable and allows for freedom of thought, for trying different ideas quickly. Choice becomes sharply reduced once the design is committed to a scale drawing, and the sketch stage is almost always essential.

Diagrams are mainly used to resolve practical matters. In the early phases of the design architects use diagrams to develop, explore, and communicate ideas and solutions. They are essential tools for thinking, problem solving, and communication in the design disciplines. Diagrams can be used to resolve spatial relationships, but they can also represent forces and flows, e.g. the forces of sun and wind, or the flows of people and materials through a building.

An exploded view diagram shows component parts disassembled in some way, so that each can be seen on its own. These views are common in technical manuals, but are also used in architecture, either in conceptual diagrams or to illustrate technical details. In a cutaway view parts of the exterior are omitted to show the interior, or details of internal construction. Although common in technical illustration, including many building products and systems, the cutaway is in fact little used in architectural drawing.

Additional Information

An architectural drawing is a technical illustration of a building or building project. These drawings are used by architects for several purposes: to develop a design idea into a coherent proposal, to communicate ideas and concepts, to enable construction by a building contractor or to make a record of a building that already exists.

An architectural drawing can be a sketch, plan, diagram or schematic. Whatever form it takes, the architectural drawing is used to communicate detailed information about what’s being built. These technical drawings are made by architects according to a set of standards covering the view (a floor plan, section or another perspective of the building), sheet sizes, units of measurement and scales, annotation and cross-referencing.

In the past, construction drawings and specifications were produced with ink on paper, and copies were made by hand, which took a lot of time and effort. This led to drawing on tracing paper; with the development of the computer, almost all architectural construction drawings are now produced with computer-aided design (CAD) software.

There are many types of architectural drawings, most of which are a combination of words and pictures. They all communicate precise details on the style and aesthetics of the construction project. Various types of blueprints are used in vertical construction, such as architectural, structural, heating, ventilation and air conditioning (HVAC), electrical and plumbing, and fire protection plans.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2108 2024-04-01 00:08:49

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,428

Re: Miscellany

2110) Cardiovascular System

Gist

The cardiovascular system provides blood supply throughout the body. By responding to various stimuli, it can control the velocity and amount of blood carried through the vessels. The cardiovascular system consists of the heart, arteries, veins, and capillaries.

Summary

The cardiovascular system consists of the heart, which is an anatomical pump, with its intricate conduits (arteries, veins, and capillaries) that traverse the whole human body carrying blood. The blood contains oxygen, nutrients, wastes, and immune and other functional cells that help provide for homeostasis and basic functions of human cells and organs.

The pumping action of the heart usually maintains a balance between cardiac output and venous return. Cardiac output (CO) is the amount of blood pumped out by each ventricle in one minute. The normal adult blood volume is 5 liters (a little over 1 gallon) and it usually passes through the heart once a minute. Note that cardiac output varies with the demands of the body.

The cardiac cycle refers to the events that occur during one heartbeat and is split into ventricular systole (the contraction/ejection phase) and diastole (the relaxation/filling phase). A normal heart rate is approximately 72 beats/minute, and at that rate the cardiac cycle lasts about 0.8 seconds. The heart sounds transmitted are due to closing of heart valves, and abnormal heart sounds, called murmurs, usually represent valve incompetency or abnormalities.
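
The figures above can be cross-checked with a line or two of arithmetic (a sketch; the 70 mL stroke volume is an assumed typical value, not stated in the text):

heart_rate = 72           # beats per minute, as quoted above
stroke_volume_ml = 70     # mL ejected per beat -- an assumed typical
                          # value, not stated in the text
cardiac_output_l_min = heart_rate * stroke_volume_ml / 1000
print(cardiac_output_l_min)       # ~5.0 L/min, so the ~5 L blood volume
                                  # passes through the heart about once a minute
print(round(60 / heart_rate, 2))  # 0.83 s per cycle, matching the
                                  # quoted 0.8 seconds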

Blood is transported through the whole body by a continuum of blood vessels. Arteries are blood vessels that transport blood away from the heart, and veins transport the blood back to the heart. Capillaries carry blood to tissue cells and are the exchange sites of nutrients, gases, wastes, etc.

Details

The human cardiovascular system is an organ system that conveys blood through vessels to and from all parts of the body, carrying nutrients and oxygen to tissues and removing carbon dioxide and other wastes. It is a closed tubular system in which the blood is propelled by a muscular heart. Two circuits, the pulmonary and the systemic, consist of arterial, capillary, and venous components.

The primary function of the heart is to serve as a muscular pump propelling blood into and through vessels to and from all parts of the body. The arteries, which receive this blood at high pressure and velocity and conduct it throughout the body, have thick walls that are composed of elastic fibrous tissue and muscle cells. The arterial tree—the branching system of arteries—terminates in short, narrow, muscular vessels called arterioles, from which blood enters simple endothelial tubes (i.e., tubes formed of endothelial, or lining, cells) known as capillaries. These thin, microscopic capillaries are permeable to vital cellular nutrients and waste products that they receive and distribute. From the capillaries, the blood, now depleted of oxygen and burdened with waste products, moving more slowly and under low pressure, enters small vessels called venules that converge to form veins, ultimately guiding the blood on its way back to the heart.

The heart:

Description:

Shape and location

The adult human heart is normally slightly larger than a clenched fist, with average dimensions of about 13 × 9 × 6 cm (5 × 3.5 × 2.5 inches) and a weight of approximately 10.5 ounces (300 grams). It is cone-shaped, with the broad base directed upward and to the right and the apex pointing downward and to the left. It is located in the chest (thoracic) cavity behind the breastbone (sternum), in front of the windpipe (trachea), the esophagus, and the descending aorta, between the lungs, and above the diaphragm (the muscular partition between the chest and abdominal cavities). About two-thirds of the heart lies to the left of the midline.

Pericardium

The heart is suspended in its own membranous sac, the pericardium. The strong outer portion of the sac, or fibrous pericardium, is firmly attached to the diaphragm below, the mediastinal pleura on the side, and the sternum in front. It gradually blends with the coverings of the superior vena cava and the pulmonary (lung) arteries and veins leading to and from the heart. (The space between the lungs, the mediastinum, is bordered by the mediastinal pleura, a continuation of the membrane lining the chest. The superior vena cava is the principal channel for venous blood from the chest, arms, neck, and head.)

Smooth, serous (moisture-exuding) membrane lines the fibrous pericardium, then bends back and covers the heart. The portion of membrane lining the fibrous pericardium is known as the parietal serous layer (parietal pericardium), that covering the heart as the visceral serous layer (visceral pericardium or epicardium).

The two layers of serous membrane are normally separated by only 10 to 15 ml (0.6 to 0.9 cubic inch) of pericardial fluid, which is secreted by the serous membranes. The slight space created by the separation is called the pericardial cavity. The pericardial fluid lubricates the two membranes with every beat of the heart as their surfaces glide over each other. Fluid is filtered into the pericardial space through both the visceral and parietal pericardia.

Chambers of the heart

The heart is divided by septa, or partitions, into right and left halves, and each half is subdivided into two chambers. The upper chambers, the atria, are separated by a partition known as the interatrial septum; the lower chambers, the ventricles, are separated by the interventricular septum. The atria receive blood from various parts of the body and pass it into the ventricles. The ventricles, in turn, pump blood to the lungs and to the remainder of the body.

The right atrium, or right superior portion of the heart, is a thin-walled chamber receiving blood from all tissues except the lungs. Three veins empty into the right atrium: the superior and inferior venae cavae, bringing blood from the upper and lower portions of the body, respectively, and the coronary sinus, draining blood from the heart itself. Blood flows from the right atrium to the right ventricle. The right ventricle, the right inferior portion of the heart, is the chamber from which the pulmonary artery carries blood to the lungs.

The left atrium, the left superior portion of the heart, is slightly smaller than the right atrium and has a thicker wall. The left atrium receives the four pulmonary veins, which bring oxygenated blood from the lungs. Blood flows from the left atrium into the left ventricle. The left ventricle, the left inferior portion of the heart, has walls three times as thick as those of the right ventricle. Blood is forced from this chamber through the aorta to all parts of the body except the lungs.

External surface of the heart

Shallow grooves called the interventricular sulci, containing blood vessels, mark the separation between ventricles on the front and back surfaces of the heart. There are two grooves on the external surface of the heart. One, the atrioventricular groove, is along the line where the right atrium and the right ventricle meet; it contains a branch of the right coronary artery (the coronary arteries deliver blood to the heart muscle). The other, the anterior interventricular sulcus, runs along the line between the right and left ventricles and contains a branch of the left coronary artery.

On the posterior side of the heart surface, a groove called the posterior longitudinal sulcus marks the division between the right and left ventricles; it contains another branch of a coronary artery. A fourth groove, between the left atrium and ventricle, holds the coronary sinus, a channel for venous blood.

Origin and development

In the embryo, formation of the heart begins in the pharyngeal, or throat, region. The first visible indication of the embryonic heart occurs in the undifferentiated mesoderm, the middle of the three primary layers in the embryo, as a thickening of invading cells. An endocardial (lining) tube of flattened cells subsequently forms and continues to differentiate until a young tube with forked anterior and posterior ends arises. As differentiation and growth progress, this primitive tube begins to fold upon itself, and constrictions along its length produce four primary chambers. These are called, from posterior to anterior, the sinus venosus, atrium, ventricle, and truncus arteriosus. The characteristic bending of the tube causes the ventricle to swing first to the right and then behind the atrium, the truncus coming to lie between the sideways dilations of the atrium. It is during this stage of development and growth that the first pulsations of heart activity begin.

Endocardial cushions (local thickenings of the endocardium, or heart lining) “pinch” the single opening between the atrium and the ventricle into two portions, thereby forming two openings. These cushions are also responsible for the formation of the two atrioventricular valves (the valves between atria and ventricles), which regulate the direction of blood flow through the heart.

The atrium becomes separated into right and left halves first by a primary partition with a perforation and later by a secondary partition, which, too, has a large opening, called the foramen ovale, in its lower part. Even though the two openings do not quite coincide in position, blood still passes through, from the right atrium to the left. At birth, increased blood pressure in the left atrium forces the primary partition against the secondary one, so that the two openings are blocked and the atria are completely separated. The two partitions eventually fuse.

The ventricle becomes partially divided into two chambers by an indentation of myocardium (heart muscle) at its tip. This developing partition is largely muscular and is supplemented by membranous connective tissue that develops in conjunction with the subdivision of the truncus arteriosus by a spiral partition into two channels, one for systemic and one for pulmonary circulation (the aorta and the pulmonary artery, respectively). At this time, the heart rotates clockwise and to the left so that it resides in the left thorax, with the left chambers posterior and the right chambers anterior. The greater portion of blood passing through the right side of the heart in the fetus is returned to the systemic circulation by the ductus arteriosus, a vessel connecting the pulmonary artery and the aorta. At birth this duct becomes closed by a violent contraction of its muscular wall. Thereafter the blood in the right side of the heart is driven through the pulmonary arteries to the lungs for oxygenation and returned to the left side of the heart for ejection into the systemic circulation. A distinct median furrow at the apex of the ventricles marks the external subdivision of the ventricle into right and left chambers.

Structure and function:

Valves of the heart

To prevent backflow of blood, the heart is equipped with valves that permit the blood to flow in only one direction. There are two types of valves located in the heart: the atrioventricular valves (tricuspid and mitral) and the semilunar valves (pulmonary and aortic).

The atrioventricular valves are thin, leaflike structures located between the atria and the ventricles. The right atrioventricular opening is guarded by the tricuspid valve, so called because it consists of three irregularly shaped cusps, or flaps. The leaflets consist essentially of folds of endocardium (the membrane lining the heart) reinforced with a flat sheet of dense connective tissue. At the base of the leaflets, the middle supporting flat plate becomes continuous with that of the dense connective tissue of the ridge surrounding the openings.

Tendinous cords of dense tissue (chordae tendineae) covered by thin endocardium extend from the nipplelike papillary muscles to connect with the ventricular surface of the middle supporting layer of each leaflet. The chordae tendineae and the papillary muscles from which they arise limit the extent to which the portions of the valves near their free margin can billow toward the atria. The left atrioventricular opening is guarded by the mitral, or bicuspid, valve, so named because it consists of two flaps. The mitral valve is attached in the same manner as the tricuspid, but it is stronger and thicker because the left ventricle is by nature a more powerful pump working under high pressure.

Blood is propelled through the tricuspid and mitral valves as the atria contract. When the ventricles contract, blood is forced backward, passing between the flaps and walls of the ventricles. The flaps are thus pushed upward until they meet and unite, forming a complete partition between the atria and the ventricles. The expanded flaps of the valves are restrained by the chordae tendineae and papillary muscles from opening into the atria.

The semilunar valves are pocketlike structures attached at the point at which the pulmonary artery and the aorta leave the ventricles. The pulmonary valve guards the orifice between the right ventricle and the pulmonary artery. The aortic valve protects the orifice between the left ventricle and the aorta. Each of these valves has three leaflets, which are thinner than those of the atrioventricular valves but of the same general construction, except that they possess no chordae tendineae.

Closure of the heart valves is associated with an audible sound, called the heartbeat. The first sound occurs when the mitral and tricuspid valves close, the second when the pulmonary and aortic semilunar valves close. These characteristic heart sounds have been found to be caused by the vibration of the walls of the heart and major vessels around the heart. The low-frequency first heart sound is heard when the ventricles contract, causing a sudden backflow of blood that closes the valves and causes them to bulge back. The elasticity of the valves then causes the blood to bounce backward into each respective ventricle. This effect sets the walls of the ventricles into vibration, and the vibrations travel away from the valves. When the vibrations reach the chest wall where the wall is in contact with the heart, sound waves are created that can be heard with the aid of a stethoscope.

The second heart sound results from vibrations set up in the walls of the pulmonary artery, the aorta, and, to a lesser extent, the ventricles, as the blood reverberates back and forth between the walls of the arteries and the valves after the pulmonary and aortic semilunar valves suddenly close. These vibrations are then heard as a high-frequency sound as the chest wall transforms the vibrations into sound waves. The first heart sound is followed after a short pause by the second. A pause about twice as long comes between the second sound and the beginning of the next cycle. The opening of the valves is silent.

Wall of the heart

The wall of the heart consists of three distinct layers—the epicardium (outer layer), the myocardium (middle layer), and the endocardium (inner layer). Coronary vessels supplying arterial blood to the heart penetrate the epicardium before entering the myocardium. This outer layer, or visceral pericardium, consists of a surface of flattened epithelial (covering) cells resting upon connective tissue.

The myocardial layer contains the contractile elements of the heart. The bundles of striated muscle fibres present in the myocardium are arranged in a branching pattern and produce a wringing type of movement that efficiently squeezes blood from the heart with each beat. The thickness of the myocardium varies according to the pressure generated to move blood to its destination. The myocardium of the left ventricle, which must drive blood out into the systemic circulation, is, therefore, thickest; the myocardium of the right ventricle, which propels blood to the lungs, is moderately thickened, while the atrial walls are relatively thin.

The component of the myocardium that causes contraction consists of muscle fibres that are made up of cardiac muscle cells. Each cell contains smaller fibres known as myofibrils that house highly organized contractile units called sarcomeres. The mechanical function arising from sarcomeres is produced by specific contractile proteins known as actin and myosin (the thin and thick filaments, respectively). The sarcomere, found between two Z lines (or Z discs) in a muscle fibre, contains two populations of actin filaments that project from opposite Z lines in antiparallel fashion and are organized around thick filaments of myosin. As the actin filaments slide along crossbridges that project from the myosin filaments at regular intervals, the opposite Z lines are drawn toward each other. This process shortens the muscle fibre and causes contraction.

Interaction between actin and myosin is regulated by a variety of biological processes that are generally related to the concentration of calcium within the cell. The process of actin sliding over myosin requires large amounts of both calcium and energy. While the contractile machinery occupies about 70 percent of the cardiac cell volume, mitochondria occupy about 25 percent and provide the necessary energy for contraction. To facilitate energy and calcium conductance in cardiac muscle cells, unique junctions called intercalated discs (gap junctions) link the cells together and define their borders. Intercalated discs are the major portal for cardiac cell-to-cell communication, which is required for coordinated muscle contraction and maintenance of circulation.

Forming the inner surface of the myocardial wall is a thin lining called the endocardium. This layer lines the cavities of the heart, covers the valves and small muscles associated with opening and closing of the valves, and is continuous with the lining membrane of the large blood vessels.

Additional Information

The heart is a powerful automatic pump. It's the part of the cardiovascular system we think of most when we think about good health. But healthy blood and blood vessels are also vital for staying well.

The average adult has between 5 and 6 liters of blood, known as the blood volume. Blood carries oxygen and nutrients to all the living cells in the body. It also carries waste products to the systems that eliminate them.

Half the blood consists of a watery, protein-laden fluid called plasma. A little less than half is composed of red and white blood cells, along with solid elements called platelets. Platelets cause the blood to coagulate wherever an injury to a blood vessel occurs.

The heart is the center of the cardiovascular system. It is a hollow muscle that pumps blood via blood vessels throughout the entire body, a process that happens in less than 60 seconds. The circulating blood not only supplies the tissues and organs with oxygen and other nutrients, but also gets rid of waste material such as carbon dioxide. The cardiovascular system is divided into two components: the pulmonary circulation and the systemic (bodily) circulation; these two systems are connected.

The pulmonary circulation is responsible for exchanging carbon dioxide for oxygen, so that oxygen-rich blood can flow throughout the body. Veins are responsible for bringing oxygen-poor blood back to the heart so that this process can take place. Many small veins throughout the body converge into two large veins on the way back to the heart: the superior and inferior venae cavae. The inferior vena cava drains blood from below the diaphragm (the lower body), and the superior vena cava drains blood from above the diaphragm (the upper body). Both deliver oxygen-poor blood into the right atrium of the heart. From the right atrium, blood flows through the tricuspid valve into the right ventricle, and further through the pulmonary valve into the pulmonary arteries. The blood is then carried to the lungs, where carbon dioxide is released and oxygen is collected. The newly oxygenated blood travels via the pulmonary veins back to the left side of the heart.

The systemic circulation begins in the left atrium, where newly oxygenated blood has just arrived from the lungs. The blood flows from the left atrium through the mitral valve into the left ventricle. The left ventricle then pumps blood through the aortic valve into the aorta (the main artery of the body), which sends the oxygen-rich blood to the body. The aorta has many branches, which give way to smaller vessels called arteries, and even smaller vessels known as arterioles. This network of vessels delivers blood throughout the body and allows for the exchange of oxygen and other nutrients within the capillary network (the connection between the arterial and venous systems). After this exchange takes place, the blood is once again oxygen-poor (but rich in carbon dioxide). It is transported via venules (the venous counterpart to arterioles) and veins (the venous counterpart to arteries) into the superior or inferior vena cava, where the blood again enters the pulmonary circulation to become oxygenated.
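
The sequence of chambers and valves in the two circuits can be restated as a single ordered loop (a sketch that only repeats the ordering given in the last two paragraphs):

# The double circulation as one ordered loop, restating the text above.
flow_path = [
    "superior / inferior vena cava", "right atrium", "tricuspid valve",
    "right ventricle", "pulmonary valve", "pulmonary arteries",
    "lungs (CO2 released, O2 collected)", "pulmonary veins",
    "left atrium", "mitral valve", "left ventricle", "aortic valve",
    "aorta", "arteries", "arterioles", "capillaries (exchange)",
    "venules", "veins",   # ...and back to the venae cavae
]
for step in flow_path:
    print("->", step)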

The cardiovascular system requires a certain pressure in order to function; this pressure is called the blood pressure, and it is divided into the systolic and diastolic blood pressures. The systolic blood pressure is measured while the heart is contracting (squeezing); when the heart contracts, the pressure in the arteries and veins rises as well. This is the upper number on a blood pressure reading; the lower number is the diastolic blood pressure, which is measured while the heart muscle is relaxing and the pressure in the blood vessels falls.

[Image: diagram of the cardiovascular system]


It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2109 2024-04-02 00:06:09

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,428

Re: Miscellany

2111) Lymphatic System

Gist

The lymphatic system is a network of delicate tubes throughout the body. It drains fluid (called lymph) that has leaked from the blood vessels into the tissues and empties it back into the bloodstream via the lymph nodes. The main roles of the lymphatic system include managing the fluid levels in the body.

Summary

The lymphatic system, or lymphoid system, is an organ system in vertebrates that is part of the immune system, and complementary to the circulatory system. It consists of a large network of lymphatic vessels, lymph nodes, lymphoid organs, lymphoid tissues and lymph. Lymph is a clear fluid carried by the lymphatic vessels back to the heart for re-circulation. The Latin word for lymph, lympha, refers to the deity of fresh water, "Lympha".

Unlike the circulatory system, which is a closed system, the lymphatic system is open. The human circulatory system processes an average of 20 litres of blood per day through capillary filtration, which removes plasma from the blood. Roughly 17 litres of the filtered plasma is reabsorbed directly into the blood vessels, while the remaining three litres remain in the interstitial fluid. One of the main functions of the lymphatic system is to provide an accessory return route to the blood for the surplus three litres.
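
The bookkeeping in this paragraph reduces to one subtraction (a sketch restating only the figures given above):

filtered_l_per_day = 20    # plasma filtered out of the capillaries daily
reabsorbed_l_per_day = 17  # reabsorbed directly into the blood vessels
surplus = filtered_l_per_day - reabsorbed_l_per_day
print(surplus)             # 3 litres/day, the load the lymphatic system
                           # must return to the blood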

The other main function is that of immune defense. Lymph is very similar to blood plasma, in that it contains waste products and cellular debris, together with bacteria and proteins. The cells of the lymph are mostly lymphocytes. Associated lymphoid organs are composed of lymphoid tissue, and are the sites either of lymphocyte production or of lymphocyte activation. These include the lymph nodes (where the highest lymphocyte concentration is found), the spleen, the thymus, and the tonsils. Lymphocytes are initially generated in the bone marrow. The lymphoid organs also contain other types of cells such as stromal cells for support. Lymphoid tissue is also associated with mucosas such as mucosa-associated lymphoid tissue (MALT).

Fluid from circulating blood leaks into the tissues of the body by capillary action, carrying nutrients to the cells. The fluid bathes the tissues as interstitial fluid, collecting waste products, bacteria, and damaged cells, and then drains as lymph into the lymphatic capillaries and lymphatic vessels. These vessels carry the lymph throughout the body, passing through numerous lymph nodes which filter out unwanted materials such as bacteria and damaged cells. Lymph then passes into much larger lymph vessels known as lymph ducts. The right lymphatic duct drains the right upper quadrant of the body, and the much larger left lymphatic duct, known as the thoracic duct, drains the rest of the body. The ducts empty into the subclavian veins to return to the blood circulation. Lymph is moved through the system by muscle contractions. In some vertebrates, a lymph heart is present that pumps the lymph to the veins.

The lymphatic system was first described in the 17th century independently by Olaus Rudbeck and Thomas Bartholin.

Details

* The lymphatic system is our body’s ‘sewerage system’.
* It maintains fluid levels in our body tissues by removing all fluids that leak out of our blood vessels.
* The lymphatic system is important for the optimal functioning of our general and specific immune responses.
* The lymph nodes monitor the lymph flowing into them and produce cells and antibodies which protect our body from infection and disease.
* The spleen and thymus are lymphatic organs that monitor the blood and detect and respond to pathogens and malignant cells.
* The lymphatic system plays an important role in the absorption of fats from the intestine.
* When the lymphatic system is not formed well or has been damaged by surgery, radiotherapy or tissue damage, a swelling of a part of the body may occur (most commonly the legs or arms). When this swelling lasts more than about three months it is called lymphoedema.
* When it’s not functioning well the lymphatic system may have a role in obesity, Crohn’s disease and other disorders.

The lymphatic system is a network of delicate tubes throughout the body. It drains fluid (called lymph) that has leaked from the blood vessels into the tissues and empties it back into the bloodstream via the lymph nodes.

The main roles of the lymphatic system include:

* managing the fluid levels in the body
* reacting to bacteria
* dealing with cancer cells
* dealing with cell products that otherwise would result in disease or disorders
* absorbing some of the fats in our diet from the intestine.

The lymph nodes and other lymphatic structures like the spleen and thymus hold special white blood cells called lymphocytes. These can rapidly multiply and release antibodies in response to bacteria, viruses, and a range of other stimuli from dead or dying cells and abnormally behaving cells such as cancer cells.

The lymphatic system and fluid balance

The blood in our blood vessels is under constant pressure. We need that to push nutrients (food the cells need), fluids and some cells into the body’s tissues to supply those tissues with food, oxygen and defence.

All of the fluid and its contents that leaks out into the tissues (as well as waste products formed in the tissues, and bacteria that enter them through our skin) is removed by the lymphatic system.

When the lymphatic system does not drain fluids from the tissues properly, the tissues swell, appearing puffy and uncomfortable. If the swelling only lasts for a short period it is called oedema. If it lasts longer (more than about three months) it is called lymphoedema.

Lymphatic vessels

The lymphatic vessels are found everywhere in our body. Generally, more active areas have more of them.

The smaller lymphatic vessels, which take up the fluids, are called lymph capillaries. The larger lymphatic vessels have muscles in their walls which help them gently and slowly pulsate. These larger lymphatic vessels also have valves that stop the lymph flowing back the wrong way.

Lymph vessels take the lymph back to the lymph nodes (there are about 700 of these in total), which are found in our armpits and groin as well as many other areas of the body such as the mouth, throat and intestines.

The fluid that arrives in the lymph nodes is checked and filtered. Most of it continues on to where the lymphatic system from most of our body (the left arm, tummy, chest, and legs) empties out at the left shoulder area. Lymph from the right arm and face and part of the right chest empties into the blood at the right shoulder area.

Spleen

The spleen is located in the abdominal (tummy) area on the left side, just under the diaphragm. It is the largest of our lymphatic organs.

The spleen does many things as it filters and monitors our blood. It contains a range of cells, including macrophages – the body’s garbage trucks. It also produces and stores many cells, including a range of white blood cells, all of which are important for our body’s defence.

As well as removing microbes, the spleen also destroys old or damaged red blood cells. It can also help in increasing blood volume quickly if a person loses a lot of blood.

Thymus

The thymus is inside the ribcage, just behind the breastbone. It filters and monitors our blood content. It produces cells called T-lymphocytes which circulate around the body. These cells are important for cell mediated response to an immune challenge, such as may occur when we have an infection.

Other lymphoid tissue

Much of our digestive and respiratory system is lined with lymphatic tissue. It’s needed there because those systems are exposed to the external environment. This lymphatic tissue plays a very important role in the defence of our body.

The most important sites of this lymphoid tissue are in the throat (called the tonsils), in the intestine area (called Peyer’s patches) and in the appendix.

Lymph nodes

Lymph nodes are filters. They are found at various points around the body, including the throat, armpits, chest, abdomen and groin. Generally they are in chains or groups. All are embedded in fatty tissue and lie close to veins and arteries.

Lymph nodes have a wide range of functions but are generally associated with body defence. Bacteria (or their products) picked up from the tissues by cells called macrophages, or those that flow into the lymph, are forced to percolate through the lymph nodes. There, white blood cells called lymphocytes can attack and kill the bacteria. Viruses and cancer cells are also trapped and destroyed in the lymph nodes.

More lymphocytes are produced when you have an infection. That is why your lymph nodes tend to swell when you have an infection.

Common problems involving the lymphatic system

Common problems involving the lymphatic system can be separated into those related to:

* infection
* disease
* destruction or damage to the lymphatic system or its nodes.

Those related to infection include:

* glandular fever – symptoms include tender lymph nodes
* tonsillitis – infection of the tonsils in the throat
* Crohn’s disease – inflammatory bowel disorder.

Those related to disease include:

* Hodgkin’s disease – a type of cancer of the lymphatic system.

Those related to malformation or destruction or damage to the lymphatic system or its nodes include:

* primary lymphoedema – when the lymphatic system has not formed properly. It may present as a limb or part-body swelling at birth, or may develop at puberty or later in life
* secondary lymphoedema – when the lymphatic system is damaged by surgery or radiotherapy associated with the treatment of cancer, when the soft tissues are damaged by trauma, or when the lymphatic system has some other cause of structural or functional impairment.

Additional Information

Lymphatic system, a subsystem of the circulatory system in the vertebrate body that consists of a complex network of vessels, tissues, and organs. The lymphatic system helps maintain fluid balance in the body by collecting excess fluid and particulate matter from tissues and depositing them in the bloodstream. It also helps defend the body against infection by supplying disease-fighting cells called lymphocytes. This article focuses on the human lymphatic system.

The lymphatic system can be thought of as a drainage system needed because, as blood circulates through the body, blood plasma leaks into tissues through the thin walls of the capillaries. The portion of blood plasma that escapes is called interstitial or extracellular fluid, and it contains oxygen, glucose, amino acids, and other nutrients needed by tissue cells. Although most of this fluid seeps immediately back into the bloodstream, a percentage of it, along with the particulate matter, is left behind. The lymphatic system removes this fluid and these materials from tissues, returning them via the lymphatic vessels to the bloodstream, and thus prevents a fluid imbalance that would result in the organism’s death.

The fluid and proteins within the tissues begin their journey back to the bloodstream by passing into tiny lymphatic capillaries that infuse almost every tissue of the body. Only a few regions, including the epidermis of the skin, the mucous membranes, the bone marrow, and the central nervous system, are free of lymphatic capillaries, whereas regions such as the lungs, gut, genitourinary system, and dermis of the skin are densely packed with these vessels. Once within the lymphatic system, the extracellular fluid, which is now called lymph, drains into larger vessels called the lymphatics. These vessels converge to form one of two large vessels called lymphatic trunks, which are connected to veins at the base of the neck. One of these trunks, the right lymphatic duct, drains the upper right portion of the body, returning lymph to the bloodstream via the right subclavian vein. The other trunk, the thoracic duct, drains the rest of the body into the left subclavian vein. Lymph is transported along the system of vessels by muscle contractions, and valves prevent lymph from flowing backward. The lymphatic vessels are punctuated at intervals by small masses of lymph tissue, called lymph nodes, that remove foreign materials such as infectious microorganisms from the lymph filtering through them.

Role in immunity

In addition to serving as a drainage network, the lymphatic system helps protect the body against infection by producing white blood cells called lymphocytes, which help rid the body of disease-causing microorganisms. The organs and tissues of the lymphatic system are the major sites of production, differentiation, and proliferation of two types of lymphocytes—the T lymphocytes and B lymphocytes, also called T cells and B cells. Although lymphocytes are distributed throughout the body, it is within the lymphatic system that they are most likely to encounter foreign microorganisms.

Lymphoid organs

The lymphatic system is commonly divided into the primary lymphoid organs, which are the sites of B and T cell maturation, and the secondary lymphoid organs, in which further differentiation of lymphocytes occurs. Primary lymphoid organs include the thymus, bone marrow, fetal liver, and, in birds, a structure called the bursa of Fabricius. In humans the thymus and bone marrow are the key players in immune function. All lymphocytes derive from stem cells in the bone marrow. Stem cells destined to become B lymphocytes remain in the bone marrow as they mature, while prospective T cells migrate to the thymus to undergo further growth. Mature B and T lymphocytes exit the primary lymphoid organs and are transported via the bloodstream to the secondary lymphoid organs, where they become activated by contact with foreign materials, such as particulate matter and infectious agents, called antigens in this context.

Thymus

The thymus is located just behind the sternum in the upper part of the chest. It is a bilobed organ that consists of an outer, lymphocyte-rich cortex and an inner medulla. The differentiation of T cells occurs in the cortex of the thymus. In humans the thymus appears early in fetal development and continues to grow until puberty, after which it begins to shrink. The decline of the thymus is thought to be the reason T-cell production decreases with age.

In the cortex of the thymus, developing T cells, called thymocytes, come to distinguish between the body’s own components, referred to as “self,” and those substances foreign to the body, called “nonself.” This occurs when the thymocytes undergo a process called positive selection, in which they are exposed to self molecules that belong to the major histocompatibility complex (MHC). Those cells capable of recognizing the body’s MHC molecules are preserved, while those that cannot bind these molecules are destroyed. The thymocytes then move to the medulla of the thymus, where further differentiation occurs. There thymocytes that have the ability to attack the body’s own tissues are destroyed in a process called negative selection.

Positive and negative selection destroy a great number of thymocytes; only about 5 to 10 percent survive to exit the thymus. Those that survive leave the thymus through specialized passages called efferent (outgoing) lymphatics, which drain to the blood and secondary lymphoid organs. The thymus has no afferent (incoming) lymphatics, which supports the idea that the thymus is a T-cell factory rather than a rest stop for circulating lymphocytes.

Bone marrow

In birds B cells mature in the bursa of Fabricius. (The process of B-cell maturation was elucidated in birds—hence B for bursa.) In mammals the primary organ for B-lymphocyte development is the bone marrow, although the prenatal site of B-cell differentiation is the fetal liver. Unlike the thymus, the bone marrow does not atrophy at puberty, and therefore there is no concomitant decrease in the production of B lymphocytes with age.

Secondary lymphoid organs

Secondary lymphoid organs include the lymph nodes, spleen, and small masses of lymph tissue such as Peyer’s patches, the appendix, tonsils, and selected regions of the body’s mucosal surfaces (areas of the body lined with mucous membranes). The secondary lymphoid organs serve two basic functions: they are a site of further lymphocyte maturation, and they efficiently trap antigens for exposure to T and B cells.

Lymph nodes

The lymph nodes, or lymph glands, are small, encapsulated bean-shaped structures composed of lymphatic tissue. Thousands of lymph nodes are found throughout the body along the lymphatic routes, and they are especially prevalent in areas around the armpits (axillary nodes), groin (inguinal nodes), neck (cervical nodes), and knees (popliteal nodes). The nodes contain lymphocytes, which enter from the bloodstream via specialized vessels called the high endothelial venules. T cells congregate in the inner cortex (paracortex), and B cells are organized in germinal centres in the outer cortex. Lymph, along with antigens, drains into the node through afferent (incoming) lymphatic vessels and percolates through the lymph node, where it comes in contact with and activates lymphocytes. Activated lymphocytes, carried in the lymph, exit the node through the efferent (outgoing) vessels and eventually enter the bloodstream, which distributes them throughout the body.

Spleen

The spleen is found in the abdominal cavity behind the stomach. Although structurally similar to a lymph node, the spleen filters blood rather than lymph. One of its main functions is to bring blood into contact with lymphocytes. The functional tissue of the spleen is made up of two types of cells: the red pulp, which contains cells called macrophages that remove bacteria, old blood cells, and debris from the circulation; and surrounding regions of white pulp, which contain great numbers of lymphocytes. The splenic artery enters the red pulp through a web of small blood vessels, and blood-borne microorganisms are trapped in this loose collection of cells until they are gradually washed out through the splenic vein. The white pulp contains both B and T lymphocytes. T cells congregate around the tiny arterioles that enter the spleen, while B cells are located in regions called germinal centres, where the lymphocytes are exposed to antigens and induced to differentiate into antibody-secreting plasma cells.

Mucosa-associated tissues

Another group of important secondary lymphoid structures is the mucosa-associated lymphoid tissues. These tissues are associated with mucosal surfaces of almost any organ, but especially those of the digestive, genitourinary, and respiratory tracts, which are constantly exposed to a wide variety of potentially harmful microorganisms and therefore require their own system of antigen capture and presentation to lymphocytes. For example, Peyer’s patches, which are mucosa-associated lymphoid tissues of the small intestine, sample passing antigens and expose them to underlying B and T cells. Other, less-organized regions of the gut also play a role as secondary lymphoid tissue.

Diseases of the lymphatic system

The host of secondary lymphoid organs provides a system of redundancy for antigen sampling by the cells of the immune system. Removal of the spleen, selected lymph nodes, tonsils, or appendix does not generally result in an excessive increase in disease caused by pathogenic microorganisms. However, the importance of the primary lymphoid organs is clear. For example, two congenital immunodeficiency disorders, DiGeorge syndrome and Nezelof disease, result in the failure of the thymus to develop and in the subsequent reduction in T-cell numbers, and removal of the bursa from chickens results in a decrease in B-cell counts. The destruction of bone marrow also has devastating effects on the immune system, not only because of its role as the site of B-cell development but also because it is the source of the stem cells that are the precursors for lymphocyte differentiation.

[Image: lymphatic system anatomy and physiology]


It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2110 2024-04-03 00:10:56

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,428

Re: Miscellany

2112) Urinary System

Gist

The urinary system's function is to filter blood and create urine as a waste by-product. The organs of the urinary system include the kidneys, renal pelvis, ureters, bladder and urethra. The body takes nutrients from food and converts them to energy.

Summary

The human urinary system, also known as the urinary tract or renal system, consists of the kidneys, ureters, bladder, and the urethra. The purpose of the urinary system is to eliminate waste from the body, regulate blood volume and blood pressure, control levels of electrolytes and metabolites, and regulate blood pH. The urinary tract is the body's drainage system for the eventual removal of urine. The kidneys have an extensive blood supply via the renal arteries, and blood leaves the kidneys via the renal veins. Each kidney consists of functional units called nephrons. Following filtration of blood and further processing, wastes (in the form of urine) exit the kidney via the ureters, tubes made of smooth muscle fibres that propel urine towards the urinary bladder, where it is stored and subsequently expelled from the body by urination. The female and male urinary systems are very similar, differing only in the length of the urethra.

Urine is formed in the kidneys through a filtration of blood. The urine is then passed through the ureters to the bladder, where it is stored. During urination, the urine is passed from the bladder through the urethra to the outside of the body.

800–2,000 milliliters (mL) of urine are normally produced every day in a healthy human. This amount varies according to fluid intake and kidney function.

Structure

The urinary system refers to the structures that produce and transport urine to the point of excretion. In the human urinary system there are two kidneys that are located between the dorsal body wall and parietal peritoneum on both the left and right sides.

The formation of urine begins within the functional unit of the kidney, the nephrons. Urine then flows through the nephrons, through a system of converging tubules called collecting ducts. These collecting ducts then join together to form the minor calyces, followed by the major calyces that ultimately join the renal pelvis. From here, urine continues its flow from the renal pelvis into the ureter, transporting urine into the urinary bladder. The anatomy of the human urinary system differs between males and females at the level of the urinary bladder. In males, the urethra begins at the internal urethral orifice in the trigone of the bladder, continues as the prostatic, membranous, bulbar, and penile urethra, and ends at the external urethral orifice (meatus), through which urine exits. The female urethra is much shorter, beginning at the bladder neck and terminating in the vaginal vestibule.

Microanatomy

Under microscopy, the urinary system is covered in a unique lining called urothelium, a type of transitional epithelium. Unlike the epithelial lining of most organs, transitional epithelium can flatten and distend. Urothelium covers most of the urinary system, including the renal pelvis, ureters, and bladder.

Function

The main functions of the urinary system and its components are to:

* Regulate blood volume and composition (e.g. sodium, potassium and calcium).
* Regulate blood pressure.
* Regulate the pH homeostasis of the blood.
* Contribute to the production of red blood cells by the kidney.
* Help synthesize calcitriol (the active form of vitamin D).
* Store waste products (mainly urea and uric acid) before they and other products are removed from the body.

Urine formation

Average urine production in adult humans is about 1–2 litres (L) per day, depending on state of hydration, activity level, environmental factors, weight, and the individual's health. Producing too much or too little urine requires medical attention. Polyuria is a condition of excessive urine production (> 2.5 L/day). Conditions involving low output of urine are oliguria (< 400 mL/day) and anuria (< 100 mL/day).
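
These thresholds can be expressed as a simple classification (a sketch using only the ranges quoted above; the function name is mine):

def classify_daily_urine_output(litres):
    # Thresholds exactly as quoted in the paragraph above.
    if litres < 0.1:
        return "anuria (< 100 mL/day)"
    if litres < 0.4:
        return "oliguria (< 400 mL/day)"
    if litres > 2.5:
        return "polyuria (> 2.5 L/day)"
    return "not flagged (typical production is about 1-2 L/day)"

print(classify_daily_urine_output(1.5))   # not flagged
print(classify_daily_urine_output(0.05))  # anuria (< 100 mL/day)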

The first step in urine formation is the filtration of blood in the kidneys. In a healthy human, the kidney receives between 12 and 30% of cardiac output, but it averages about 20% or about 1.25 L/min.

The basic structural and functional unit of the kidney is the nephron. Its chief function is to regulate the concentration of water and soluble substances like sodium by filtering the blood, reabsorbing what is needed and excreting the rest as urine.

In the first part of the nephron, Bowman's capsule filters blood from the circulatory system into the tubules. Hydrostatic and osmotic pressure gradients facilitate filtration across a semipermeable membrane. The filtrate includes water, small molecules, and ions that easily pass through the filtration membrane. However, larger molecules such as proteins and blood cells are prevented from passing through the filtration membrane. The amount of filtrate produced every minute is called the glomerular filtration rate (GFR); it is about 125 mL per minute, which amounts to 180 litres per day. About 99% of this filtrate is reabsorbed as it passes through the nephron and the remaining 1% becomes urine.
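
The 99% figure ties together the GFR and the daily urine volume quoted earlier (a quick arithmetic check on the stated values):

gfr_l_per_day = 180           # filtrate produced per day, as stated
reabsorbed_fraction = 0.99    # ~99% reabsorbed along the nephron
urine_l_per_day = gfr_l_per_day * (1 - reabsorbed_fraction)
print(round(urine_l_per_day, 1))  # 1.8 L/day, consistent with the
                                  # 1-2 L/day average quoted earlier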

The urinary system is regulated by the endocrine system by hormones such as antidiuretic hormone, aldosterone, and parathyroid hormone.

Regulation of concentration and volume

The urinary system is under the influence of the circulatory system, nervous system, and endocrine system.

Aldosterone plays a central role in regulating blood pressure through its effects on the kidney. It acts on the distal tubules and collecting ducts of the nephron and increases reabsorption of sodium from the glomerular filtrate. Reabsorption of sodium results in retention of water, which increases blood pressure and blood volume. Antidiuretic hormone (ADH), also known as vasopressin, is a neurohypophysial hormone found in most mammals. Its two primary functions are to retain water in the body and to constrict blood vessels. Vasopressin regulates the body's retention of water by increasing water reabsorption in the collecting ducts of the kidney nephron. It increases the water permeability of the kidney's collecting duct and distal convoluted tubule by inducing translocation of aquaporin-CD water channels into the plasma membrane of the collecting duct.

Urination

Urination, also sometimes referred to as micturition, is the ejection of urine from the urinary bladder through the urethra to the outside of the body. In healthy humans (and many other animals), the process of urination is under voluntary control. In infants, some elderly individuals, and those with neurological injury, urination may occur as an involuntary reflex. Physiologically, micturition involves coordination between the central, autonomic, and somatic nervous systems. Brain centers that regulate urination include the pontine micturition center, periaqueductal gray, and the cerebral cortex. In placental mammals, the male ejects urine through the male reproductive organ, and the female through the female reproductive organ.

Details

Many of the body’s waste products are passed out of the body in urine. The urinary system is made up of the kidneys, ureters, bladder and urethra.

Kidneys

The human body has two kidneys, one on either side of the middle back, just under the ribs. Each kidney contains around a million small filters called nephrons. Each nephron has a mesh of capillaries, connecting it to the body’s blood supply. Around 180 litres of fluid are filtered by the kidneys every day. The main functions of the kidney include:

* Regulating the amount of water and salts in the blood
* Filtering out waste products
* Making a hormone that helps to control blood pressure.

Ureters

Each kidney has a tube called a ureter. The filtered waste products (urine) leave the kidneys via the ureters and enter the bladder.

Bladder

The bladder is a hollow organ that sits inside the pelvis. It stores the urine. When a certain amount of urine is inside the bladder, the bladder ‘signals’ the urge to urinate. Urine contains water and waste products like urea and ammonia.

Urethra

The urethra is the small tube connecting the bladder to the outside of the body. The male urethra is about 20 centimetres long, while the female urethra is shorter, about four centimetres. At the urethra’s connection to the bladder is a small ring of muscle, or sphincter. This stops urine from leaking out.

Common problems

Some of the more common problems of the urinary system include:

* Bladder infections (cystitis) - usually caused by bacteria.
* Enlarged prostate - in men, this can make it difficult to empty the bladder.
* Incontinence - when urine leaks out of the urethra.
* Kidney infections - when a bladder infection ‘backs up’ into the ureters.
* Kidney stones - caused by infection and high blood levels of calcium.

Details

* Your urinary system, also called the renal system or urinary tract, removes waste from your blood, in the form of urine.
* It also helps regulate your blood volume and pressure and controls the level of chemicals and salts (electrolytes) in your body's cells and blood.
* Common medical problems with the urinary system include infections, kidney stones, urinary retention and urinary incontinence.
* If you experience problems or changes in your urinary frequency or flow, see your doctor.
* You can help keep your urinary system healthy by drinking enough water, avoiding smoking and maintaining a healthy lifestyle.

What is the urinary system?

Your urinary system prevents waste and toxins from building up in your blood. It also:

* controls the levels of chemicals and salts in your blood
* maintains your body's water balance
* helps regulate your blood pressure
* maintains vitamin D production to help keep bones strong and healthy
* helps make your body's red blood cells

What are the different parts of the urinary system?

Your urinary system is made up of:

* 2 kidneys — body organs that filter blood to make urine
* the bladder — an organ for storing urine
* 2 ureters — tubes connecting your kidneys to your bladder
* the urethra — a tube connecting your bladder to your body's surface

How does the urinary system work?

Your kidneys work non-stop, filtering all of the blood in your body about every 5 minutes.
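
As a rough sanity check of that figure, assume a total blood volume of about 5 litres (an assumption, not stated in this article) together with the renal blood flow of roughly 1.25 L/min quoted in the earlier section:

blood_volume = 5.0   # assumed total adult blood volume, in litres
renal_flow = 1.25    # renal blood flow in L/min, from the earlier section

minutes_per_pass = blood_volume / renal_flow
print(minutes_per_pass)  # 4.0 minutes for one full pass

Four minutes for one complete pass is in line with the roughly 5-minute cycle described above.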

The urine that collects is a mix of waste and excess fluid. It is carried to your bladder to be stored. Muscles in the bladder wall stay relaxed, so it can expand as it fills. Other muscles work like a dam to keep urine in your bladder until you are ready to go to the toilet. Your brain controls your bladder, signalling it when to hold urine and when to empty. Urinary incontinence is when there is accidental or involuntary loss of urine from the bladder.

To urinate normally, all parts of your urinary tract must work together in proper order.

When you are ready to go to the toilet, your bladder outlet muscles (urethral sphincter and pelvic floor) relax and your bladder wall muscles contract. Urine empties from your bladder through your urethra and leaves your body.

What are some common medical conditions related to the urinary tract?

Urinary tract infection

Urinary tract infections (UTIs) occur when an infection, usually caused by bacteria, enters the urinary tract. The most common types of UTI include:

* cystitis, an infection of the bladder lining, and the most common lower urinary tract infection
* urethritis, an infection of the urethra
* pyelonephritis, an infection of the upper urinary tract, which can be very serious because it affects the kidneys

Kidney stones

Kidney stones develop when waste chemicals in your urine form crystals that clump together. They can cause severe pain. It is important to see your doctor if you think you might have a kidney stone.

Urinary retention

Urinary retention is being unable to empty your bladder. It can be acute (short term) or chronic (long term).

If you can't pass urine even though you feel the need to, and your bladder is full, this is acute urinary retention. If you feel you might have urinary retention, see your doctor, visit a medical centre urgently, or go to your nearest emergency department.

People with chronic urinary retention can urinate, but do not completely empty the urine from their bladders. This can be a slow-developing and long-lasting medical condition.

Urinary incontinence

Urinary incontinence is when you have trouble controlling your bladder. You may experience accidental or involuntary loss of urine from the bladder. It is very common, and can be distressing, but is usually treatable.

Prostate problems

In males, the urethra passes through the prostate gland. Because of this, swelling or enlargement of this gland can affect the flow of urine through the urethra. Problems with urinary flow are among the most common signs of possible prostate problems.

You should see your doctor if you have symptoms of problems with your urinary system.

Symptoms of bladder problems can include:

* problems with urinary control or incontinence
* needing to pass urine frequently during the day or multiple times overnight
* a weak or slow urine stream
* pain or burning when passing urine
* blood in your urine
* frequent urinary tract infections

Many people with kidney disease don't notice any symptoms in the early stages.

Some people with kidney disease may experience:

* changes in the amount of urine passed
* changes in their urine's appearance (for example, frothy or discoloured urine)
* blood in the urine
* pain in the abdomen or back
* leg swelling
* fatigue
* loss of appetite

If you're at risk of kidney disease, for example, because of diabetes, high blood pressure or smoking, you should see your doctor for a kidney health check at least every 2 years. A kidney health check usually includes a blood pressure check and blood tests, to make sure your kidneys are working well.

How can I look after my urinary system?

Here are some tips to help keep your urinary system healthy and to notice any potential problems early:

* Avoid smoking.
* Maintain a healthy diet.
* Keep physically active.
* Stay hydrated, preferably with water. Use a urine colour chart to assess your hydration.
* Limit alcohol intake.
* See your doctor if you have symptoms of any kidney or bladder problems.

Additional Information

The various activities of the body create waste by-products that must be expelled in order to maintain health. To excrete certain fluid wastes, the body has a specialized filtering and recycling system known as the urinary system. It’s also called the renal system or excretory system. Solid wastes are eliminated, or egested, through the large intestine.

In most mammals, including humans, wastes are absorbed from tissues into the passing bloodstream and transported to one of two identical organs called kidneys. In the kidneys, wastes and excess water are removed from the blood as urine. The urine then passes through tubes, called ureters, into a saclike organ called the bladder. From the bladder the urine is excreted, or passed out of the body, through another tube called the urethra.

In the kidneys, excess water and useful blood components such as amino acids, glucose, ions, and various nutrients are reabsorbed into the bloodstream, leaving a concentrated solution of waste material called final, or bladder, urine. It consists of water, urea (from amino-acid metabolism), inorganic salts, creatinine, ammonia, and pigmented products of blood breakdown, one of which (urochrome) gives urine its typically yellowish color. Any substances that cannot be reabsorbed into the blood remain in the urine.

From the kidneys, urine is carried through the ureters by waves of contractions in the ureteral walls. The ureters pass urine to the bladder for temporary storage. The bladder is a muscular organ at the bottom of the abdomen that expands like a sack as it fills. The bladder of an average adult human is uncomfortably distended when it holds a volume of about 1/3 quart (320 milliliters) of urine.

When the bladder is full, nerve endings in the bladder wall are stimulated. Impulses from the nerve endings are carried to the brain, triggering the bladder walls to contract and the sphincter, a ringlike muscle that guards the entrance from bladder to urethra, to relax. The response to the nerve signals is part involuntary and part learned and voluntary. Now urination can take place through the urethra, a tube lined with mucous membranes. The male urethra ends at the tip of the male reproductive organ; the female urethra ends just above the entrance to the female reproductive organ. Normally the bladder empties completely.

Disorders

The urinary system, like any other part of the body, is occasionally subject to breakdowns. One disorder of the urinary system is a blockage in the urethra, bladder, or ureters. Disorders of this type, called urinary tract obstruction, cause urine to dam up in the bladder or the kidneys. The condition may be congenital (existing from birth) or it may be caused by tumors, mineral deposits that form stones, or other physical disorders. Other urinary-system disorders include kidney malfunction or kidney diseases, which can lead to an accumulation of wastes in the body—a condition called uremia.


It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2111 2024-04-04 00:22:34

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,428

Re: Miscellany

2113) Musculoskeletal System

Gist

Bones, muscles and joints make up the musculoskeletal system, along with cartilage, tendons, ligaments and connective tissue. This system gives your body its structure and support and lets you move around. The parts of the musculoskeletal system grow and change throughout life.

Summary

The muscular and skeletal systems provide support to the body and allow for a wide range of movement. The bones of the skeletal system protect the body’s internal organs and support the weight of the body. The muscles of the muscular system contract and pull on the bones, allowing for movements as diverse as standing, walking, running, and grasping items.

Injury or disease affecting the musculoskeletal system can be very debilitating. In humans, the most common musculoskeletal diseases worldwide are caused by malnutrition. Ailments that affect the joints are also widespread, such as arthritis, which can make movement difficult and—in advanced cases—completely impair mobility. In severe cases in which the joint has suffered extensive damage, joint replacement surgery may be needed.

Progress in the science of prosthesis design has resulted in the development of artificial joints, with joint replacement surgery in the hips and knees being the most common. Replacement joints for shoulders, elbows, and fingers are also available. Even with this progress, there is still room for improvement in the design of prostheses. State-of-the-art prostheses still have limited durability and therefore wear out quickly, particularly in young or active individuals.

Details

The musculoskeletal system (locomotor system) is a human body system that provides our body with movement, stability, shape, and support. It is subdivided into two broad systems:

* Muscular system, which includes all types of muscles in the body. Skeletal muscles, in particular, are the ones that act on the body joints to produce movements. Besides muscles, the muscular system contains the tendons which attach the muscles to the bones.
* Skeletal system, whose main component is the bone. Bones articulate with each other and form the joints, providing our bodies with a hard-core, yet mobile, skeleton. The integrity and function of the bones and joints is supported by the accessory structures of the skeletal system; articular cartilage, ligaments, and bursae.

Besides its main function to provide the body with stability and mobility, the musculoskeletal system has many other functions; the skeletal part plays an important role in other homeostatic functions such as storage of minerals (e.g., calcium) and hematopoiesis, while the muscular system stores the majority of the body's carbohydrates in the form of glycogen.

Key facts about the musculoskeletal system

Definition : A human body system that provides the body with movement, stability, shape, and support
Components : Muscular system (skeletal muscles and tendons); skeletal system (bones, joints, articular cartilage, ligaments, and bursae)
Function : Muscles: movement production, joint stabilization, maintaining posture, body heat production. Skeleton: support and shape, mineral storage, hematopoiesis

Muscular system

The muscular system is an organ system composed of specialized contractile tissue called muscle tissue. There are three types of muscle tissue, based on which all the muscles are classified into three groups:

* Cardiac muscle, which forms the muscular layer of the heart (myocardium)
* Smooth muscle, which comprises the walls of blood vessels and hollow organs
* Skeletal muscle, which attaches to the bones and provides voluntary movement.

Based on their histological appearance, these types are classified into striated and non-striated muscles: the skeletal and cardiac muscles are grouped as striated, while the smooth muscle is non-striated. The skeletal muscles are the only ones that we can control by the power of our will, as they are innervated by the somatic part of the nervous system. In contrast to this, the cardiac and smooth muscles are innervated by the autonomic nervous system, thus being controlled involuntarily by the autonomic centers in our brain.

Skeletal muscles

The skeletal muscles are the main functional units of the muscular system. There are more than 600 muscles in the human body. They vary greatly in shape and size, with the smallest being the stapedius muscle in the middle ear, and the largest being the quadriceps femoris muscle in the thigh.

The skeletal muscles of the human body are organized into four groups, one for each region of the body:

* Muscles of the head and neck, which include the muscles of facial expression, muscles of mastication, muscles of the orbit, muscles of the tongue, muscles of the pharynx, muscles of the larynx, and muscles of the neck
* Muscles of the trunk, which include the muscles of the back, anterior and lateral abdominal muscles, and muscles of the pelvic floor
* Muscles of the upper limbs, which include muscles of the shoulder, muscles of the arm, muscles of the forearm and muscles of the hand
* Muscles of the lower limbs, which include hip and thigh muscles, leg muscles and foot muscles

Structure

Structurally, the skeletal muscles are composed of skeletal muscle cells, called myocytes or muscle fibres. Muscle fibres are specialized cells whose main feature is the ability to contract. They are elongated, cylindrical, multinucleated cells bounded by a cell membrane called the sarcolemma. The cytoplasm of skeletal muscle fibres (sarcoplasm) contains rod-like bundles called myofibrils, in which the contractile proteins actin and myosin are arranged into repeating patterns, forming the units of the contractile micro-apparatus called sarcomeres.

Each muscle fiber is enclosed with a loose connective tissue sheath called endomysium. Multiple muscle fibers are grouped into muscle fascicles or muscle bundles, which are encompassed by their own connective tissue sheath called the perimysium. Ultimately, a group of muscle fascicles comprises a whole muscle belly which is externally enclosed by another connective tissue layer called the epimysium. This layer is continuous with yet another layer of connective tissue called the deep fascia of skeletal muscle, that separates the muscles from other tissues and organs.

This structure gives the skeletal muscle tissue four main physiological properties:

* Excitability - the ability to detect the neural stimuli (action potential);
* Contractility - the ability to contract in response to a neural stimulus;
* Extensibility - the ability of a muscle to be stretched without tearing;
* Elasticity - the ability to return to its normal shape after being extended.

Muscle contraction

The most important property of skeletal muscle is its ability to contract. Muscle contraction occurs as a result of the interaction of myofibrils inside the muscle cells. This process either shortens the muscle or increases its tension, generating a force that either facilitates or slows down a movement.

There are two types of muscle contraction: isometric and isotonic. A muscle contraction is deemed isometric if the length of the muscle does not change during the contraction, and isotonic if the tension remains unchanged while the length of the muscle changes. There are two types of isotonic contractions:

* Concentric contraction, in which the muscle shortens due to generating enough force to overcome the imposed resistance. This type of contraction serves to facilitate any noticeable movement (e.g. lifting a barbell or walking on an incline).
* Eccentric contraction, in which the muscle stretches due to the resistance being greater than the force the muscle generates. During an eccentric contraction, the muscle maintains high tension. This type of contraction usually serves to slow down a movement (e.g. lowering a barbell or walking downhill).

The sequence of events that results in the contraction of a muscle cell begins as the nervous system generates a signal called the action potential. This signal travels through motor neurons to reach the neuromuscular junction, the site of contact between the motor nerve and the muscle. A group of muscle cells innervated by the branches of a single motor nerve is called the motor unit.

The incoming action potential from the motor nerve initiates the release of acetylcholine (ACh) from the nerve into the synaptic cleft, which is the space between the nerve ending and the sarcolemma. The ACh binds to the receptors on the sarcolemma and triggers a chemical reaction in the muscle cell. This involves the release of calcium ions from the sarcoplasmic reticulum, which in turn causes a rearrangement of contractile proteins within the muscle cell. The main proteins involved are actin and myosin, which, in the presence of ATP, slide over each other and pull the ends of each muscle cell together, causing a contraction. As the nerve signal diminishes, the chemical process reverses and the muscle relaxes.

Tendons

A tendon is a tough, flexible band of dense connective tissue that serves to attach skeletal muscles to bones. Tendons are found at both ends of a muscle, binding it to the periosteum of bone at its proximal (origin) and distal (insertion) attachments. As muscles contract, the tendons transmit the mechanical force to the bones, pulling them and causing movement.

Being made of dense regular connective tissue, the tendons have an abundance of parallel collagen fibers, which provide them with high tensile strength (resistance to longitudinal force). The collagen fibers within a tendon are organized into fascicles, and individual fascicles are ensheathed by a thin layer of dense connective tissue called endotenon. In turn, groups of fascicles are ensheathed by a layer of dense irregular connective tissue called epitenon. Finally, the epitenon is encircled with a synovial sheath and attached to it by a delicate connective tissue band called mesotenon.

Additional Information

The human musculoskeletal system (also known as the human locomotor system, and previously the activity system) is an organ system that gives humans the ability to move using their muscular and skeletal systems. The musculoskeletal system provides form, support, stability, and movement to the body.

It is made up of the bones of the skeleton, muscles, cartilage, tendons, ligaments, joints, and other connective tissue that supports and binds tissues and organs together. The musculoskeletal system's primary functions include supporting the body, allowing motion, and protecting vital organs. The skeletal portion of the system serves as the main storage system for calcium and phosphorus and contains critical components of the hematopoietic system.

This system describes how bones are connected to other bones and muscle fibers via connective tissue such as tendons and ligaments. The bones provide stability to the body. Muscles keep bones in place and also play a role in the movement of bones. To allow motion, different bones are connected by joints. Cartilage prevents the bone ends from rubbing directly onto each other. Muscles contract to move the bone attached at the joint.

There are, however, diseases and disorders that may adversely affect the function and overall effectiveness of the system. These diseases can be difficult to diagnose due to the close relation of the musculoskeletal system to other internal systems. The musculoskeletal system refers to the system having its muscles attached to an internal skeletal system and is necessary for humans to move to a more favorable position. Complex issues and injuries involving the musculoskeletal system are usually handled by a physiatrist (specialist in physical medicine and rehabilitation) or an orthopaedic surgeon.


It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2112 2024-04-04 23:27:07

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,428

Re: Miscellany

2114) Respiratory System

Gist

The respiratory system takes up oxygen from the air we breathe and expels the unwanted carbon dioxide. The main organs of the respiratory system are the lungs. Other respiratory organs include the nose, the trachea and the breathing muscles (the diaphragm and the intercostal muscles).

Summary

When the respiratory system is mentioned, people generally think of breathing, but breathing is only one of the activities of the respiratory system. The body cells need a continuous supply of oxygen for the metabolic processes that are necessary to maintain life. The respiratory system works with the circulatory system to provide this oxygen and to remove the waste products of metabolism. It also helps to regulate pH of the blood.

Respiration is the sequence of events that results in the exchange of oxygen and carbon dioxide between the atmosphere and the body cells. Every 3 to 5 seconds, nerve impulses stimulate the breathing process, or ventilation, which moves air through a series of passages into and out of the lungs. After this, there is an exchange of gases between the lungs and the blood. This is called external respiration. The blood transports the gases to and from the tissue cells. The exchange of gases between the blood and tissue cells is internal respiration. Finally, the cells utilize the oxygen for their specific activities: this is called cellular metabolism, or cellular respiration. Together, these activities constitute respiration.

Details

* Respiration is the uptake of oxygen and the removal of carbon dioxide from the body.
* This job is performed by the lungs.
* Breathing is achieved by contraction and relaxation of the diaphragm and rib muscles.

Our cells need oxygen to survive. One of the waste products produced by cells is another gas called carbon dioxide. The respiratory system takes up oxygen from the air we breathe and expels the unwanted carbon dioxide.

The main organs of the respiratory system are the lungs. Other respiratory organs include the nose, the trachea and the breathing muscles (the diaphragm and the intercostal muscles).

The nose and trachea

Breathing in through the nose warms and humidifies the incoming air. Nose hairs help to trap any particles of dust. The warmed air enters the lungs through the windpipe, or trachea. The trachea is a hollow tube bolstered by rings of cartilage to prevent it from collapsing.

The lungs

The lungs are inside the chest, protected by the ribcage and wrapped in a membrane called the pleura. The lungs look like giant sponges. They are filled with thousands of tubes, branching smaller and smaller. The smallest components of all are the air sacs, called 'alveoli'. Each one has a fine mesh of capillaries. This is where the exchange of oxygen and carbon dioxide takes place.

The breathing muscles

To stay inflated, the lungs rely on a vacuum inside the chest. The diaphragm is a sheet of muscle slung underneath the lungs. When we breathe in, the diaphragm contracts and flattens, enlarging the chest cavity and lowering the pressure inside it; when we breathe out, it relaxes and the pressure rises again. This change in air pressure means that air is ‘sucked’ into the lungs on inhalation and ‘pushed’ out of the lungs on exhalation.
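
The physics behind this is Boyle's law: for a fixed amount of gas at constant temperature, pressure times volume stays constant (P1 × V1 = P2 × V2), so enlarging the chest lowers the pressure inside it. The following Python sketch uses illustrative, not physiological, numbers:

atmospheric_kpa = 101.3   # assumed atmospheric pressure, in kPa
v_before = 2.4            # illustrative lung volume before inhalation, in litres
v_after = 2.5             # illustrative volume after the diaphragm contracts

# Boyle's law: p_before * v_before = p_after * v_after
p_after = atmospheric_kpa * v_before / v_after
print(round(p_after, 1))  # 97.2 kPa, below atmospheric, so air flows in

In reality the pressure swing during quiet breathing is far smaller, but the direction of the effect is the same.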

The intercostal muscles between the ribs help to change the internal pressure by lifting and relaxing the ribcage in rhythm with the diaphragm.

The exchange of gas

Blood containing carbon dioxide enters the capillaries lining the alveoli. The gas moves from the blood across a thin film of moisture and into the air sac. The carbon dioxide is then breathed out.

On inhalation, oxygen is drawn down into the alveoli where it passes into the blood using the same film of moisture.

Speech and the respiratory system

The respiratory system also allows us to talk. Exhaled air runs over the vocal cords inside the throat. The sound of the voice depends on:

* the tension and length of the vocal cords
* the shape of the chest
* how much air is being exhaled.

Problems of the respiratory system

Some common problems of the respiratory system include:

* asthma – wheezing and breathlessness caused by a narrowing of the airways
* bronchitis – inflammation of the lung’s larger airways
* emphysema – disease of the alveoli (air sacs) of the lungs
* hay fever – an allergic reaction to pollen, dust or other irritants
* influenza – caused by viruses
* laryngitis – inflammation of the voice box (larynx)
* pneumonia – infection of the lung.

Additional Information

The respiratory system (also respiratory apparatus, ventilatory system) is a biological system consisting of specific organs and structures used for gas exchange in animals and plants. The anatomy and physiology that make this happen vary greatly, depending on the size of the organism, the environment in which it lives and its evolutionary history. In land animals, the respiratory surface is internalized as linings of the lungs. Gas exchange in the lungs occurs in millions of small air sacs; in mammals and reptiles, these are called alveoli, and in birds, they are known as atria. These microscopic air sacs have a very rich blood supply, thus bringing the air into close contact with the blood. These air sacs communicate with the external environment via a system of airways, or hollow tubes, of which the largest is the trachea, which branches in the middle of the chest into the two main bronchi. These enter the lungs where they branch into progressively narrower secondary and tertiary bronchi that branch into numerous smaller tubes, the bronchioles. In birds, the bronchioles are termed parabronchi. It is the bronchioles, or parabronchi, that generally open into the microscopic alveoli in mammals and atria in birds. Air has to be pumped from the environment into the alveoli or atria by the process of breathing which involves the muscles of respiration.

In most fish, and a number of other aquatic animals (both vertebrates and invertebrates), the respiratory system consists of gills, which are either partially or completely external organs, bathed in the watery environment. This water flows over the gills by a variety of active or passive means. Gas exchange takes place in the gills which consist of thin or very flat filaments and lamellae which expose a very large surface area of highly vascularized tissue to the water.

Other animals, such as insects, have respiratory systems with very simple anatomical features, and in amphibians, even the skin plays a vital role in gas exchange. Plants also have respiratory systems but the directionality of gas exchange can be opposite to that in animals. The respiratory system in plants includes anatomical features such as stomata, that are found in various parts of the plant.

The respiratory system includes the organs, tissues, and muscles that help you breathe. It helps distribute oxygen throughout your body while filtering out carbon dioxide and other waste products.

The respiratory system aids the body in the exchange of gases between the air and blood, and between the blood and the body’s billions of cells.

It includes air passages, pulmonary vessels, the lungs, and breathing muscles.

Most of the organs of the respiratory system help to distribute air, but only the tiny, grape-like alveoli and the alveolar ducts are responsible for actual gas exchange.

In addition to air distribution and gas exchange, the respiratory system filters, warms, and humidifies the air you breathe. Organs in the respiratory system also play a role in speech and the sense of smell.

The respiratory system also helps the body maintain homeostasis, or balance among the many elements of the body’s internal environment.

The respiratory system is divided into two main components:

Upper respiratory tract: Composed of the nose, the pharynx, and the larynx, the organs of the upper respiratory tract are located outside the chest cavity.

* Nasal cavity: Inside the nose, the sticky mucous membrane lining the nasal cavity traps dust particles, and tiny hairs called cilia help move them to the nose to be sneezed or blown out.
* Sinuses: These air-filled spaces alongside the nose help make the skull lighter.
* Pharynx: Both food and air pass through the pharynx before reaching their appropriate destinations. The pharynx also plays a role in speech.
* Larynx: The larynx is essential to human speech.

Lower respiratory tract: Composed of the trachea, the lungs, and all segments of the bronchial tree (including the alveoli), the organs of the lower respiratory tract are located inside the chest cavity.

* Trachea: Located just below the larynx, the trachea is the main airway to the lungs.
* Lungs: Together the lungs form one of the body’s largest organs. They’re responsible for providing oxygen to capillaries and exhaling carbon dioxide.
* Bronchi: The bronchi branch from the trachea into each lung and create the network of intricate passages that supply the lungs with air.
* Diaphragm: The diaphragm is the main respiratory muscle that contracts and relaxes to allow air into the lungs.


It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2113 2024-04-05 23:25:18

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,428

Re: Miscellany

2115) Digestive System

Gist

The digestive system is made up of organs that are important for digesting food and liquids. These include the mouth, pharynx (throat), esophagus, stomach, small intestine, large intestine, rectum, and anus.

Summary

The human digestive system is the system used in the human body for the process of digestion. It consists primarily of the digestive tract, or the series of structures and organs through which food and liquids pass during their processing into forms that can be absorbed into the bloodstream. The system also consists of the structures through which wastes pass in the process of elimination and of organs that contribute juices necessary for the digestive process.

In order to function properly, the human body requires nutrients. Some such nutrients serve as raw materials for the synthesis of cellular materials, while others help regulate chemical reactions or, upon oxidation, yield energy. Many nutrients, however, are in a form that is unsuitable for immediate use by the body; to be useful, they must undergo physical and chemical changes, which are facilitated by digestion.

The digestive tract begins at the lips and ends at the anus. It consists of the mouth, or oral cavity, with its teeth, for grinding the food, and its tongue, which serves to knead food and mix it with saliva; the throat, or pharynx; the esophagus; the stomach; the small intestine, consisting of the duodenum, the jejunum, and the ileum; and the large intestine, consisting of the cecum, a closed-end sac connecting with the ileum, the ascending colon, the transverse colon, the descending colon, and the sigmoid colon, which terminates in the rectum. Glands contributing digestive juices include the salivary glands, the gastric glands in the stomach lining, the pancreas, and the liver and its adjuncts—the gallbladder and bile ducts. All of these organs and glands contribute to the physical and chemical breaking down of ingested food and to the eventual elimination of nondigestible wastes.

Details

The human digestive system consists of the gastrointestinal tract plus the accessory organs of digestion (the tongue, salivary glands, pancreas, liver, and gallbladder). Digestion involves the breakdown of food into smaller and smaller components, until they can be absorbed and assimilated into the body. The process of digestion has three stages: the cephalic phase, the gastric phase, and the intestinal phase.

The first stage, the cephalic phase of digestion, begins with secretions from gastric glands in response to the sight and smell of food. This stage includes the mechanical breakdown of food by chewing and the chemical breakdown by digestive enzymes that takes place in the mouth. Saliva contains the digestive enzymes amylase and lingual lipase, secreted by the salivary glands and by serous glands on the tongue. Chewing, in which the food is mixed with saliva, begins the mechanical process of digestion. This produces a bolus which is swallowed down the esophagus to enter the stomach.

The second stage, the gastric phase, happens in the stomach. Here, the food is further broken down by mixing with gastric acid until it passes into the duodenum, the first part of the small intestine.

The third stage, the intestinal phase, begins in the duodenum. Here, the partially digested food is mixed with a number of enzymes produced by the pancreas.

Digestion is helped by the chewing of food carried out by the muscles of mastication, the tongue, and the teeth, and also by the contractions of peristalsis, and segmentation. Gastric acid, and the production of mucus in the stomach, are essential for the continuation of digestion.

Peristalsis is the rhythmic contraction of muscles that begins in the esophagus and continues along the wall of the stomach and the rest of the gastrointestinal tract. This initially results in the production of chyme which, when fully broken down in the small intestine, is absorbed as chyle into the lymphatic system. Most of the digestion of food takes place in the small intestine. Water and some minerals are reabsorbed back into the blood in the colon of the large intestine. The waste products of digestion (feces) are defecated from the rectum via the anus.

Components:

There are several organs and other components involved in the digestion of food. The organs known as the accessory digestive organs are the liver, gall bladder and pancreas. Other components include the mouth, salivary glands, tongue, teeth and epiglottis.

The largest structure of the digestive system is the gastrointestinal tract (GI tract). This starts at the mouth and ends at the anus, covering a distance of about nine metres.

A major digestive organ is the stomach. Within its mucosa are millions of embedded gastric glands. Their secretions are vital to the functioning of the organ.

Most of the digestion of food takes place in the small intestine which is the longest part of the GI tract.

The largest part of the GI tract is the colon or large intestine. Water is absorbed here and the remaining waste matter is stored prior to defecation.

There are many specialised cells of the GI tract. These include the various cells of the gastric glands, taste cells, pancreatic duct cells, enterocytes and microfold cells.

Some parts of the digestive system are also part of the excretory system, including the large intestine.

Mouth

The mouth is the first part of the upper gastrointestinal tract and is equipped with several structures that begin the first processes of digestion. These include salivary glands, teeth and the tongue. The mouth consists of two regions: the vestibule and the oral cavity proper. The vestibule is the area between the teeth, lips and cheeks, and the rest is the oral cavity proper. Most of the oral cavity is lined with oral mucosa, a mucous membrane that produces a lubricating mucus, of which only a small amount is needed. Mucous membranes vary in structure in the different regions of the body but they all produce a lubricating mucus, which is either secreted by surface cells or more usually by underlying glands. The mucous membrane in the mouth continues as the thin mucosa which lines the bases of the teeth. The main component of mucus is a glycoprotein called mucin and the type secreted varies according to the region involved. Mucin is viscous, clear, and clinging. Underlying the mucous membrane in the mouth is a thin layer of smooth muscle tissue, and the loose connection to the membrane gives it its great elasticity. It covers the cheeks, inner surfaces of the lips, and floor of the mouth, and the mucin produced is highly protective against tooth decay.

The roof of the mouth is termed the palate and it separates the oral cavity from the nasal cavity. The palate is hard at the front of the mouth since the overlying mucosa is covering a plate of bone; it is softer and more pliable at the back being made of muscle and connective tissue, and it can move to swallow food and liquids. The soft palate ends at the uvula. The hard palate provides a firm surface against which the tongue can press food while leaving the nasal passage clear. The opening between the lips is termed the oral fissure, and the opening into the throat is called the fauces.

At either side of the soft palate are the palatoglossus muscles which also reach into regions of the tongue. These muscles raise the back of the tongue and also close both sides of the fauces to enable food to be swallowed.  Mucus helps in the mastication of food in its ability to soften and collect the food in the formation of the bolus.

Salivary glands

There are three pairs of main salivary glands and between 800 and 1,000 minor salivary glands, all of which mainly serve the digestive process, and also play an important role in the maintenance of dental health and general mouth lubrication, without which speech would be impossible. The main glands are all exocrine glands, secreting via ducts. All of these glands terminate in the mouth. The largest of these are the parotid glands—their secretion is mainly serous. The next pair, the submandibular glands, are located underneath the jaw; these produce both serous fluid and mucus. The serous fluid is produced by serous glands in these salivary glands which also produce lingual lipase. They produce about 70% of the oral cavity saliva. The third pair are the sublingual glands located underneath the tongue and their secretion is mainly mucus with a small percentage of saliva.

Within the oral mucosa, and also on the tongue, palates, and floor of the mouth, are the minor salivary glands; their secretions are mainly mucous and they are innervated by the facial nerve (CN7). The glands also secrete amylase, a first stage in the breakdown of food, acting on the carbohydrate in the food to transform the starch content into maltose. There are other serous glands on the surface of the tongue that encircle taste buds on the back part of the tongue and these also produce lingual lipase. Lipase is a digestive enzyme that catalyses the hydrolysis of lipids (fats). These glands are termed Von Ebner's glands; they have also been shown to secrete histatins, which offer an early defense (outside of the immune system) against microbes in food when it makes contact with these glands on the tongue tissue. Sensory information can stimulate the secretion of saliva, providing the necessary fluid for the tongue to work with and also to ease swallowing of the food.

Saliva

Saliva moistens and softens food, and along with the chewing action of the teeth, transforms the food into a smooth bolus. The bolus is further helped by the lubrication provided by the saliva in its passage from the mouth into the esophagus. Also of importance is the presence in saliva of the digestive enzymes amylase and lipase. Amylase starts to work on the starch in carbohydrates, breaking it down into the simple sugars of maltose and dextrose that can be further broken down in the small intestine. Saliva in the mouth can account for 30% of this initial starch digestion. Lipase starts to work on breaking down fats. Lipase is further produced in the pancreas where it is released to continue this digestion of fats. The presence of salivary lipase is of prime importance in young babies whose pancreatic lipase has yet to be developed.

As well as its role in supplying digestive enzymes, saliva has a cleansing action for the teeth and mouth. It also has an immunological role in supplying antibodies to the system, such as immunoglobulin A. This is seen to be key in preventing infections of the salivary glands, importantly that of parotitis.

Saliva also contains a glycoprotein called haptocorrin which is a binding protein to vitamin B12. It binds with the vitamin in order to carry it safely through the acidic content of the stomach. When it reaches the duodenum, pancreatic enzymes break down the glycoprotein and free the vitamin which then binds with intrinsic factor.

Tongue

Food enters the mouth where the first stage in the digestive process takes place, with the action of the tongue and the secretion of saliva. The tongue is a fleshy and muscular sensory organ, and the first sensory information is received via the taste buds in the papillae on its surface. If the taste is agreeable, the tongue will go into action, manipulating the food in the mouth which stimulates the secretion of saliva from the salivary glands. The liquid quality of the saliva will help in the softening of the food and its enzyme content will start to break down the food whilst it is still in the mouth. The first part of the food to be broken down is the starch of carbohydrates (by the enzyme amylase in the saliva).

The tongue is attached to the floor of the mouth by a ligamentous band called the frenum and this gives it great mobility for the manipulation of food (and speech); the range of manipulation is optimally controlled by the action of several muscles and limited in its external range by the stretch of the frenum. The tongue's two sets of muscles are four intrinsic muscles that originate in the tongue and are involved with its shaping, and four extrinsic muscles originating in bone that are involved with its movement.

Taste

Taste is a form of chemoreception that takes place in the specialised taste receptors, contained in structures called taste buds in the mouth. Taste buds are mainly on the upper surface (dorsum) of the tongue. The function of taste perception is vital to help prevent harmful or rotten foods from being consumed. There are also taste buds on the epiglottis and upper part of the esophagus. The taste buds are innervated by a branch of the facial nerve, the chorda tympani, and by the glossopharyngeal nerve. Taste messages are sent via these cranial nerves to the brain. The brain can distinguish between the chemical qualities of the food. The five basic tastes are referred to as those of saltiness, sourness, bitterness, sweetness, and umami. The detection of saltiness and sourness enables the control of salt and acid balance. The detection of bitterness warns of poisons: many of a plant's defences are poisonous compounds that are bitter. Sweetness guides us to those foods that will supply energy; the initial breakdown of the energy-giving carbohydrates by salivary amylase creates the taste of sweetness, since simple sugars are the first result. The taste of umami is thought to signal protein-rich food. Sour tastes signal acidity, which is often a sign of spoiled food. The brain has to decide very quickly whether the food should be eaten or not. The description of the first olfactory receptors in 1991 helped to prompt research into taste. The olfactory receptors are located on cell surfaces in the nose, where they bind to chemicals, enabling the detection of smells. It is assumed that signals from taste receptors work together with those from the nose to form an idea of complex food flavours.

Teeth

Teeth are complex structures made of materials specific to them. They are made of a bone-like material called dentin, which is covered by the hardest tissue in the body—enamel. Teeth have different shapes to deal with different aspects of mastication employed in tearing and chewing pieces of food into smaller and smaller pieces. This results in a much larger surface area for the action of digestive enzymes. The teeth are named after their particular roles in the process of mastication—incisors are used for cutting or biting off pieces of food; canines, are used for tearing, premolars and molars are used for chewing and grinding. Mastication of the food with the help of saliva and mucus results in the formation of a soft bolus which can then be swallowed to make its way down the upper gastrointestinal tract to the stomach. The digestive enzymes in saliva also help in keeping the teeth clean by breaking down any lodged food particles.

Epiglottis

The epiglottis is a flap of elastic cartilage attached to the entrance of the larynx. It is covered with a mucous membrane and there are taste buds on its lingual surface which faces into the mouth. Its laryngeal surface faces into the larynx. The epiglottis functions to guard the entrance of the glottis, the opening between the vocal folds. It is normally pointed upward during breathing with its underside functioning as part of the pharynx, but during swallowing, the epiglottis folds down to a more horizontal position, with its upper side functioning as part of the pharynx. In this manner it prevents food from going into the trachea and instead directs it to the esophagus, which is behind. During swallowing, the backward motion of the tongue forces the epiglottis over the glottis' opening to prevent any food that is being swallowed from entering the larynx which leads to the lungs; the larynx is also pulled upwards to assist this process. Stimulation of the larynx by ingested matter produces a strong cough reflex in order to protect the lungs.

Pharynx

The pharynx is a part of the conducting zone of the respiratory system and also a part of the digestive system. It is the part of the throat immediately behind the nasal cavity at the back of the mouth and above the esophagus and larynx. The pharynx is made up of three parts. The lower two parts, the oropharynx and the laryngopharynx, are involved in the digestive system. The laryngopharynx connects to the esophagus and it serves as a passageway for both air and food. Air enters the larynx anteriorly but anything swallowed has priority and the passage of air is temporarily blocked. The pharynx is innervated by the pharyngeal plexus of the vagus nerve. Muscles in the pharynx push the food into the esophagus. The pharynx joins the esophagus at the esophageal inlet, which is located behind the cricoid cartilage.

Esophagus

The esophagus, commonly known as the foodpipe or gullet, consists of a muscular tube through which food passes from the pharynx to the stomach. The esophagus is continuous with the laryngopharynx. It passes through the posterior mediastinum in the thorax and enters the stomach through a hole in the thoracic diaphragm—the esophageal hiatus, at the level of the tenth thoracic vertebra (T10). Its length averages 25 cm, varying with an individual's height. It is divided into cervical, thoracic and abdominal parts.

At rest the esophagus is closed at both ends, by the upper and lower esophageal sphincters. The opening of the upper sphincter is triggered by the swallowing reflex so that food is allowed through. The sphincter also serves to prevent back flow from the esophagus into the pharynx. The esophagus has a mucous membrane, and the epithelium, which has a protective function, is continuously replaced due to the volume of food that passes inside the esophagus. During swallowing, food passes from the mouth through the pharynx into the esophagus. The epiglottis folds down to a more horizontal position to direct the food into the esophagus, and away from the trachea.

Once in the esophagus, the bolus travels down to the stomach via rhythmic contraction and relaxation of muscles known as peristalsis. The lower esophageal sphincter is a muscular sphincter surrounding the lower part of the esophagus. The gastroesophageal junction between the esophagus and the stomach is controlled by the lower esophageal sphincter, which remains constricted at all times other than during swallowing and vomiting to prevent the contents of the stomach from entering the esophagus. As the esophagus does not have the same protection from acid as the stomach, any failure of this sphincter can lead to heartburn.

Diaphragm

The diaphragm is an important part of the body's digestive system. The muscular diaphragm separates the thoracic cavity from the abdominal cavity where most of the digestive organs are located. The suspensory muscle attaches the ascending duodenum to the diaphragm. This muscle is thought to be of help in the digestive system in that its attachment offers a wider angle to the duodenojejunal flexure for the easier passage of digesting material. The diaphragm also attaches to, and anchors the liver at its bare area. The esophagus enters the abdomen through a hole in the diaphragm at the level of T10.

Stomach

The stomach is a major organ of the gastrointestinal tract and digestive system. It is a consistently J-shaped organ joined to the esophagus at its upper end and to the duodenum at its lower end. Gastric acid (informally gastric juice), produced in the stomach plays a vital role in the digestive process, and mainly contains hydrochloric acid and sodium chloride. A peptide hormone, gastrin, produced by G cells in the gastric glands, stimulates the production of gastric juice which activates the digestive enzymes. Pepsinogen is a precursor enzyme (zymogen) produced by the gastric chief cells, and gastric acid activates this to the enzyme pepsin which begins the digestion of proteins. As these two chemicals would damage the stomach wall, mucus is secreted by innumerable gastric glands in the stomach, to provide a slimy protective layer against the damaging effects of the chemicals on the inner layers of the stomach.

At the same time that protein is being digested, mechanical churning occurs through the action of peristalsis, waves of muscular contractions that move along the stomach wall. This allows the mass of food to further mix with the digestive enzymes. Gastric lipase secreted by the chief cells in the fundic glands in the gastric mucosa of the stomach, is an acidic lipase, in contrast with the alkaline pancreatic lipase. This breaks down fats to some degree though is not as efficient as the pancreatic lipase.

The pylorus, the lowest section of the stomach which attaches to the duodenum via the pyloric canal, contains countless glands which secrete digestive enzymes, as well as the hormone gastrin. After an hour or two, a thick semi-liquid called chyme is produced. When the pyloric sphincter, or valve, opens, chyme enters the duodenum where it mixes further with digestive enzymes from the pancreas, and then passes through the small intestine, where digestion continues.

The parietal cells in the fundus of the stomach, produce a glycoprotein called intrinsic factor which is essential for the absorption of vitamin B12. Vitamin B12 (cobalamin), is carried to, and through the stomach, bound to a glycoprotein secreted by the salivary glands – transcobalamin I also called haptocorrin, which protects the acid-sensitive vitamin from the acidic stomach contents. Once in the more neutral duodenum, pancreatic enzymes break down the protective glycoprotein. The freed vitamin B12 then binds to intrinsic factor which is then absorbed by the enterocytes in the ileum.

The stomach is a distensible organ and can normally expand to hold about one litre of food. This expansion is enabled by a series of gastric folds in the inner walls of the stomach. The stomach of a newborn baby will only be able to expand to retain about 30 ml.

Spleen

The spleen is the largest lymphoid organ in the body but has other functions. It breaks down both red and white blood cells that are spent. This is why it is sometimes known as the 'graveyard of red blood cells'. A product of this digestion is the pigment bilirubin, which is sent to the liver and secreted in the bile. Another product is iron, which is used in the formation of new blood cells in the bone marrow. Medicine treats the spleen solely as belonging to the lymphatic system, though it is acknowledged that the full range of its important functions is not yet understood.

Liver

The liver is the second largest organ (after the skin) and is an accessory digestive gland which plays a role in the body's metabolism. The liver has many functions some of which are important to digestion. The liver can detoxify various metabolites; synthesise proteins and produce biochemicals needed for digestion. It regulates the storage of glycogen which it can form from glucose (glycogenesis). The liver can also synthesise glucose from certain amino acids. Its digestive functions are largely involved with the breaking down of carbohydrates. It also maintains protein metabolism in its synthesis and degradation. In lipid metabolism it synthesises cholesterol. Fats are also produced in the process of lipogenesis. The liver synthesises the bulk of lipoproteins. The liver is located in the upper right quadrant of the abdomen and below the diaphragm to which it is attached at one part, the bare area of the liver. This is to the right of the stomach and it overlies the gall bladder. The liver synthesises bile acids and lecithin to promote the digestion of fat.

Bile

Bile produced by the liver is made up of water (97%), bile salts, mucus and pigments, 1% fats, and inorganic salts. Bilirubin is its major pigment. Bile acts partly as a surfactant, lowering the surface tension between two liquids or between a solid and a liquid, and helps to emulsify the fats in the chyme. Food fat is dispersed by the action of bile into smaller units called micelles. The breaking down into micelles creates a much larger surface area for the pancreatic enzyme lipase to work on. Lipase digests the triglycerides, which are broken down into two fatty acids and a monoglyceride; these are then absorbed by villi on the intestinal wall. If fats are not absorbed in this way in the small intestine, problems can arise later in the large intestine, which is not equipped to absorb fats. Bile also helps in the absorption of vitamin K from the diet. Bile is collected and delivered through the common hepatic duct. This duct joins with the cystic duct to connect in a common bile duct with the gallbladder. Bile is stored in the gallbladder for release when food is discharged into the duodenum, and also after a few hours.

Gallbladder

The gallbladder is a hollow part of the biliary tract that sits just beneath the liver, with the gallbladder body resting in a small depression. It is a small organ where the bile produced by the liver is stored before being released into the small intestine. Bile flows from the liver through the bile ducts and into the gallbladder for storage. The bile is released in response to cholecystokinin (CCK), a peptide hormone released from the duodenum. The production of CCK (by endocrine cells of the duodenum) is stimulated by the presence of fat in the duodenum.

It is divided into three sections: a fundus, body, and neck. The neck tapers and connects to the biliary tract via the cystic duct, which then joins the common hepatic duct to form the common bile duct. At this junction is a mucosal fold called Hartmann's pouch, where gallstones commonly get stuck. The muscular layer of the body is of smooth muscle tissue that helps the gallbladder contract, so that it can discharge its bile into the bile duct. The gallbladder needs to store bile in a natural, semi-liquid form at all times. Hydrogen ions secreted from the inner lining of the gallbladder keep the bile acidic enough to prevent hardening. To dilute the bile, water and electrolytes from the digestive system are added, and salts attach themselves to cholesterol molecules in the bile to keep them from crystallising. If there is too much cholesterol or bilirubin in the bile, or if the gallbladder does not empty properly, the system can fail; gallstones form in this way, when a small piece of calcium gets coated with either cholesterol or bilirubin and the bile crystallises around it. The main purpose of the gallbladder is to store and release bile, or gall. Bile is released into the small intestine in order to help in the digestion of fats by breaking down larger molecules into smaller ones. After the fat is absorbed, the bile is also absorbed and transported back to the liver for reuse.

Pancreas

The pancreas is a major organ functioning as an accessory digestive gland in the digestive system. It is both an endocrine gland and an exocrine gland. The endocrine part secretes insulin when the blood sugar becomes high; insulin moves glucose from the blood into the muscles and other tissues for use as energy. The endocrine part also releases glucagon when the blood sugar is low; glucagon allows stored sugar to be broken down into glucose by the liver in order to re-balance the sugar levels. The pancreas produces and releases important digestive enzymes in the pancreatic juice that it delivers to the duodenum. The pancreas lies below and at the back of the stomach. It connects to the duodenum via the pancreatic duct, which it joins near the bile duct's connection, where both the bile and pancreatic juice can act on the chyme that is released from the stomach into the duodenum. Aqueous pancreatic secretions from pancreatic duct cells contain bicarbonate ions, which are alkaline and help, together with the bile, to neutralise the acidic chyme that is churned out by the stomach.

The pancreas is also the main source of enzymes for the digestion of fats and proteins. Some of these are released in response to the production of cholecystokinin in the duodenum. (The enzymes that digest polysaccharides, by contrast, are primarily produced by the walls of the intestines.) The cells are filled with secretory granules containing the precursor digestive enzymes. The major proteases, the pancreatic enzymes which work on proteins, are trypsinogen and chymotrypsinogen. Elastase is also produced. Smaller amounts of lipase and amylase are secreted. The pancreas also secretes phospholipase A2, lysophospholipase, and cholesterol esterase. The precursor zymogens are inactive variants of the enzymes, which avoids the onset of pancreatitis caused by autodegradation. Once released in the intestine, the enzyme enteropeptidase present in the intestinal mucosa activates trypsinogen by cleaving it to form trypsin; further cleavage results in chymotrypsin.

Lower gastrointestinal tract

The lower gastrointestinal tract (GI) includes the small intestine and all of the large intestine. The intestine is also called the bowel or the gut. The lower GI starts at the pyloric sphincter of the stomach and finishes at the anus. The small intestine is subdivided into the duodenum, the jejunum and the ileum. The cecum marks the division between the small and large intestine. The large intestine includes the rectum and anal canal, and is the last part of the digestive system.

Small intestine

Partially digested food starts to arrive in the small intestine as semi-liquid chyme one hour after it is eaten. The stomach is half empty after an average of 1.2 hours, and after four or five hours it has emptied completely.

In the small intestine, the pH becomes crucial; it needs to be finely balanced in order to activate digestive enzymes. The chyme, released from the stomach through the opening of the pyloric sphincter, is very acidic, with a low pH, and needs to be made much more alkaline. This is achieved in the duodenum by the addition of bile from the gallbladder combined with the bicarbonate secretions from the pancreatic duct and also with secretions of bicarbonate-rich mucus from duodenal glands known as Brunner's glands. The resulting alkaline fluid mix neutralises the gastric acid, which would otherwise damage the lining of the intestine, while the mucus component lubricates the walls of the intestine.

[Image: Layers of the small intestine]

When the digested food particles are reduced enough in size and composition, they can be absorbed by the intestinal wall and carried to the bloodstream. The first receptacle for this chyme is the duodenal bulb. From here it passes into the first of the three sections of the small intestine, the duodenum (the next section is the jejunum and the third is the ileum). The duodenum is the first and shortest section of the small intestine. It is a hollow, jointed C-shaped tube connecting the stomach to the jejunum. It starts at the duodenal bulb and ends at the suspensory muscle of the duodenum. The attachment of the suspensory muscle to the diaphragm is thought to help the passage of food by creating a wider angle at the duodenojejunal flexure.

Most food digestion takes place in the small intestine. Segmentation contractions act to mix and move the chyme more slowly in the small intestine, allowing more time for absorption (and these continue in the large intestine). In the duodenum, pancreatic lipase is secreted together with a co-enzyme, colipase, to further digest the fat content of the chyme. From this breakdown, smaller particles of emulsified fats called chylomicrons are produced. There are also digestive cells called enterocytes lining the intestines (the majority being in the small intestine). They are unusual cells in that they have villi on their surface, which in turn have innumerable microvilli on their surface. All these villi make for a greater surface area, not only for the absorption of chyme but also for its further digestion by the large numbers of digestive enzymes present on the microvilli.

The chylomicrons are small enough to pass through the enterocyte villi and into their lymph capillaries, called lacteals. A milky fluid called chyle, consisting mainly of the emulsified fats of the chylomicrons, results from the absorbed fats mixing with the lymph in the lacteals. Chyle is then transported through the lymphatic system to the rest of the body.

The suspensory muscle marks the end of the duodenum and the division between the upper gastrointestinal tract and the lower GI tract. The digestive tract continues as the jejunum, which in turn continues as the ileum. The jejunum, the midsection of the small intestine, contains circular folds, flaps of doubled mucosal membrane which partially, and sometimes completely, encircle the lumen of the intestine. These folds, together with villi, serve to increase the surface area of the jejunum, enabling an increased absorption of digested sugars, amino acids and fatty acids into the bloodstream. The circular folds also slow the passage of food, giving more time for nutrients to be absorbed.

The last part of the small intestine is the ileum. This also contains villi, and it is here that vitamin B12, bile acids and any residual nutrients are absorbed. When the chyme is exhausted of its nutrients, the remaining waste material changes into the semi-solids called feces, which pass to the large intestine, where bacteria in the gut flora further break down residual proteins and starches.

Transit time through the small intestine is an average of 4 hours. Half of the food residues of a meal have emptied from the small intestine by an average of 5.4 hours after ingestion. Emptying of the small intestine is complete after an average of 8.6 hours.

Cecum

The cecum is a pouch marking the division between the small intestine and the large intestine. It lies below the ileocecal valve in the lower right quadrant of the abdomen. The cecum receives chyme from the last part of the small intestine, the ileum, and connects to the ascending colon of the large intestine. At this junction there is a sphincter or valve, the ileocecal valve which slows the passage of chyme from the ileum, allowing further digestion. It is also the site of the appendix attachment.

Large intestine

In the large intestine, the passage of the digesting food through the colon is much slower, taking from 30 to 40 hours until it is removed by defecation; the time taken varies considerably between individuals. The colon mainly serves as a site for the fermentation of digestible matter by the gut flora. The remaining semi-solid waste is termed feces and is removed by the coordinated contractions of the intestinal walls, termed peristalsis, which propel the excreta forward to reach the rectum and exit through the anus via defecation. The wall has an outer layer of longitudinal muscles, the taeniae coli, and an inner layer of circular muscles. The circular muscle keeps the material moving forward and also prevents any back flow of waste. Also of help in the action of peristalsis is the basal electrical rhythm that determines the frequency of contractions. The taeniae coli can be seen and are responsible for the bulges (haustra) present in the colon. Most parts of the GI tract are covered with serous membranes and have a mesentery. Other, more muscular parts are lined with adventitia.

[Image: The Digestive System]


It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.


#2114 2024-04-07 00:03:03

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,428

Re: Miscellany

2116) Grand Central Terminal

Summary

With 44 platforms serving 67 tracks, Grand Central is the world's largest train station by number of platforms.

After having been rebuilt twice, the current Grand Central Terminal was completed in 1913. Embedded in the opulent facade are three 48-foot sculptures of the Roman deities Hercules, Minerva and Mercury, and a clock made of the world's largest piece of Tiffany glass.

Inside is the Grand Concourse, with a four-faced opal clock (valued at USD 10-20 million) atop the information booth. Be sure to gaze up at the stunning painting of the constellation-filled heavens on the ceiling. In the 1930s, nicotine grime obscured the mural so badly that it was repainted - but go to Michael Jordan's Steak House (on the second-floor balcony) to see a small square that workers left untouched to prove how dirty the original was.

The terminal has a slew of retail shops, delis and bakeries, and downstairs is a New York legend: the Grand Central Oyster Bar, serving fresh, delicious seafood and oysters from all over North America.

Details

Grand Central Terminal (GCT; also referred to as Grand Central Station or simply as Grand Central) is a commuter rail terminal located at 42nd Street and Park Avenue in Midtown Manhattan, New York City. Grand Central is the southern terminus of the Metro-North Railroad's Harlem, Hudson and New Haven Lines, serving the northern parts of the New York metropolitan area. It also contains a connection to the Long Island Rail Road through the Grand Central Madison station, a 16-acre (65,000 sq m) rail terminal underneath the Metro-North station, built from 2007 to 2023. The terminal also connects to the New York City Subway at Grand Central–42nd Street station. The terminal is the third-busiest train station in North America, after New York Penn Station and Toronto Union Station.

The distinctive architecture and interior design of Grand Central Terminal's station house have earned it several landmark designations, including as a National Historic Landmark. Its Beaux-Arts design incorporates numerous works of art. Grand Central Terminal is one of the world's ten most-visited tourist attractions, with 21.6 million visitors in 2018, excluding train and subway passengers. The terminal's Main Concourse is often used as a meeting place, and is especially featured in films and television. Grand Central Terminal contains a variety of stores and food vendors, including upscale restaurants and bars, a food hall, and a grocery marketplace. The building is also noted for its library, event hall, tennis club, control center and offices for the railroad, and sub-basement power station.

Grand Central Terminal was built by and named for the New York Central Railroad; it also served the New York, New Haven and Hartford Railroad and, later, successors to the New York Central. Opened in 1913, the terminal was built on the site of two similarly named predecessor stations, the first of which dated to 1871. Grand Central Terminal served intercity trains until 1991, when Amtrak began routing its trains through nearby Penn Station.

Grand Central covers 48 acres (19 ha) and has 44 platforms, more than any other railroad station in the world. Its platforms, all below ground, serve 30 tracks on the upper level and 26 on the lower. In total, there are 67 tracks, including a rail yard and sidings; of these, 43 tracks are in use for passenger service, while the remaining two dozen are used to store trains.

Name

Grand Central Terminal was named by and for the New York Central Railroad, which built the station and its two predecessors on the site. It has "always been more colloquially and affectionately known as Grand Central Station", the name of its immediate predecessor that operated from 1900 to 1910. The name "Grand Central Station" is also shared with the nearby U.S. Post Office station at 450 Lexington Avenue and, colloquially, with the Grand Central–42nd Street subway station next to the terminal.

The station has been named "Grand Central Terminal" since before its completion in 1913; the full title is inscribed on its 42nd Street facade. According to 21st-century sources, it is designated a "terminal" because trains originate and terminate there. The CSX Corporation Railroad Dictionary also considers "terminals" as facilities "for the breaking up, making up, forwarding, and servicing of trains" or "where one or more rail yards exist".

Additional Information

Grand Central Station, railroad terminal in New York City. It was designed and built (1903–13) by Reed & Stem in collaboration with the firm of Warren & Wetmore; the latter firm is credited with the aesthetics of the huge structure. The concourse, with its 125-foot (43-metre) ceiling vault painted with constellations, was one of the largest enclosed spaces of its time. A gem of the Beaux-Arts style, the terminal looks as though it could have been transported from 1870s France. Atop the symmetrical main facade are a large clock and sculptures of an American eagle and Roman deities. In the late 20th century the station was lavishly restored; this restoration effort brought national attention to the importance of preserving architectural landmarks. Although popularly known as Grand Central Station, the terminal is formally called Grand Central Terminal. Some observers insist on making a distinction between the terminal and the subway station below it and note that the nearby U.S. Post Office is also referred to as “Grand Central Station.”

[Image: Grand Central Terminal]


It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.


#2115 2024-04-08 00:53:25

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,428

Re: Miscellany

2117) Botany

Gist

“Botany is the branch of Biology that deals with the study of plants.” The term 'botany' is derived from the adjective 'botanic', which in turn derives from the Greek word 'botane'. One who studies botany is known as a 'botanist'. Botany is one of the world's oldest natural sciences.

Summary

Botany, also called plant science (or plant sciences), plant biology or phytology, is the science of plant life and a branch of biology. A botanist, plant scientist or phytologist is a scientist who specialises in this field. The term "botany" comes from the Ancient Greek word botanē, meaning "pasture", "herbs", "grass", or "fodder"; botanē is in turn derived from boskein, "to feed" or "to graze". Traditionally, botany has also included the study of fungi and algae by mycologists and phycologists respectively, with the study of these three groups of organisms remaining within the sphere of interest of the International Botanical Congress. Nowadays, botanists (in the strict sense) study approximately 410,000 species of land plants, of which some 391,000 species are vascular plants (including approximately 369,000 species of flowering plants), and approximately 20,000 are bryophytes.

Botany originated in prehistory as herbalism with the efforts of early humans to identify – and later cultivate – plants that were edible, poisonous, and possibly medicinal, making it one of the first endeavours of human investigation. Medieval physic gardens, often attached to monasteries, contained plants possibly having medicinal benefit. They were forerunners of the first botanical gardens attached to universities, founded from the 1540s onwards. One of the earliest was the Padua botanical garden. These gardens facilitated the academic study of plants. Efforts to catalogue and describe their collections were the beginnings of plant taxonomy, and led in 1753 to the binomial system of nomenclature of Carl Linnaeus that remains in use to this day for the naming of all biological species.

In the 19th and 20th centuries, new techniques were developed for the study of plants, including methods of optical microscopy and live cell imaging, electron microscopy, analysis of chromosome number, plant chemistry and the structure and function of enzymes and other proteins. In the last two decades of the 20th century, botanists exploited the techniques of molecular genetic analysis, including genomics and proteomics and DNA sequences to classify plants more accurately.

Modern botany is a broad, multidisciplinary subject with contributions and insights from most other areas of science and technology. Research topics include the study of plant structure, growth and differentiation, reproduction, biochemistry and primary metabolism, chemical products, development, diseases, evolutionary relationships, systematics, and plant taxonomy. Dominant themes in 21st century plant science are molecular genetics and epigenetics, which study the mechanisms and control of gene expression during differentiation of plant cells and tissues. Botanical research has diverse applications in providing staple foods, materials such as timber, oil, rubber, fibre and drugs, in modern horticulture, agriculture and forestry, plant propagation, breeding and genetic modification, in the synthesis of chemicals and raw materials for construction and energy production, in environmental management, and the maintenance of biodiversity.

Details

Botany is a branch of biology that deals with the study of plants, including their structure, properties, and biochemical processes. Also included are plant classification and the study of plant diseases and of interactions with the environment. The principles and findings of botany have provided the base for such applied sciences as agriculture, horticulture, and forestry.

Plants were of paramount importance to early humans, who depended upon them as sources of food, shelter, clothing, medicine, ornament, tools, and magic. Today it is known that, in addition to their practical and economic values, green plants are indispensable to all life on Earth: through the process of photosynthesis, plants transform energy from the Sun into the chemical energy of food, which makes all life possible. A second unique and important capacity of green plants is the formation and release of oxygen as a by-product of photosynthesis. The oxygen of the atmosphere, so absolutely essential to many forms of life, represents the accumulation of over 3,500,000,000 years of photosynthesis by green plants and algae.

Although the many steps in the process of photosynthesis have become fully understood only in recent years, even in prehistoric times humans somehow recognized intuitively that some important relation existed between the Sun and plants. Such recognition is suggested by the fact that worship of the Sun was often combined with the worship of plants by early tribes and civilizations.

Earliest humans, like the other anthropoid mammals (e.g., apes, monkeys), depended totally upon the natural resources of the environment, which, until methods were developed for hunting, consisted almost completely of plants. The behaviour of pre-Stone Age humans can be inferred by studying the botany of aboriginal peoples in various parts of the world. Isolated tribal groups in South America, Africa, and New Guinea, for example, have extensive knowledge about plants and distinguish hundreds of kinds according to their utility, as edible, poisonous, or otherwise important in their culture. They have developed sophisticated systems of nomenclature and classification, which approximate the binomial system (i.e., generic and specific names) found in modern biology. The urge to recognize different kinds of plants and to give them names thus seems to be as old as the human race.

In time plants were not only collected but also grown by humans. This domestication resulted not only in the development of agriculture but also in a greater stability of human populations that had previously been nomadic. From the settling down of agricultural peoples in places where they could depend upon adequate food supplies came the first villages and the earliest civilizations.

Because of the long preoccupation of humans with plants, a large body of folklore, general information, and actual scientific data has accumulated, which has become the basis for the science of botany.

Historical background

Theophrastus, a Greek philosopher who first studied with Plato and then became a disciple of Aristotle, is credited with founding botany. Only two of an estimated 200 botanical treatises written by him are known to science: originally written in Greek about 300 BCE, they have survived in the form of Latin manuscripts, De causis plantarum and De historia plantarum. His basic concepts of morphology, classification, and the natural history of plants, accepted without question for many centuries, are now of interest primarily because of Theophrastus’s independent and philosophical viewpoint.

Pedanius Dioscorides, a Greek botanist of the 1st century CE, was the most important botanical writer after Theophrastus. In his major work, an herbal in Greek, he described some 600 kinds of plants, with comments on their habit of growth and form as well as on their medicinal properties. Unlike Theophrastus, who classified plants as trees, shrubs, and herbs, Dioscorides grouped his plants under three headings: as aromatic, culinary, and medicinal. His herbal, unique in that it was the first treatment of medicinal plants to be illustrated, remained for about 15 centuries the last word on medical botany in Europe.

From the 2nd century BCE to the 1st century CE, a succession of Roman writers—Cato the Elder, Varro, Virgil, and Columella—prepared Latin manuscripts on farming, gardening, and fruit growing but showed little evidence of the spirit of scientific inquiry for its own sake that was so characteristic of Theophrastus. In the 1st century CE, Pliny the Elder, though no more original than his Roman predecessors, seemed more industrious as a compiler. His Historia naturalis—an encyclopaedia of 37 volumes, compiled from some 2,000 works representing 146 Roman and 327 Greek authors—has 16 volumes devoted to plants. Although uncritical and containing much misinformation, this work contains much information otherwise unavailable, since most of the volumes to which he referred have been destroyed.

The printing press revolutionized the availability of all types of literature, including that of plants. In the 15th and 16th centuries, many herbals were published with the purpose of describing plants useful in medicine. Written by physicians and medically oriented botanists, the earliest herbals were based largely on the work of Dioscorides and to a lesser extent on Theophrastus, but gradually they became the product of original observation. The increasing objectivity and originality of herbals through the decades is clearly reflected in the improved quality of the woodcuts prepared to illustrate these books.

In 1552 an illustrated manuscript on Mexican plants, written in Aztec, was translated into Latin by Badianus; other similar manuscripts known to have existed seem to have disappeared. Whereas herbals in China date back much further than those in Europe, they have become known only recently and so have contributed little to the progress of Western botany.

The invention of the optical lens during the 16th century and the development of the compound microscope about 1590 opened an era of rich discovery about plants; prior to that time, all observations by necessity had been made with the unaided eye. The botanists of the 17th century turned away from the earlier emphasis on medical botany and began to describe all plants, including the many new ones that were being introduced in large numbers from Asia, Africa, and America. Among the most prominent botanists of this era was Gaspard Bauhin, who for the first time developed, in a tentative way, many botanical concepts still held as valid.

In 1665 Robert Hooke published, under the title Micrographia, the results of his microscopic observations on several plant tissues. He is remembered as the coiner of the word “cell,” referring to the cavities he observed in thin slices of cork; his observation that living cells contain sap and other materials too often has been forgotten. In the following decade, Nehemiah Grew and Marcello Malpighi founded plant anatomy; in 1671 they communicated the results of microscopic studies simultaneously to the Royal Society of London, and both later published major treatises.

Experimental plant physiology began with the brilliant work of Stephen Hales, who published his observations on the movements of water in plants under the title Vegetable Staticks (1727). His conclusions on the mechanics of water transpiration in plants are still valid, as is his discovery—at the time a startling one—that air contributes something to the materials produced by plants. In 1774, Joseph Priestley showed that plants exposed to sunlight give off oxygen, and Jan Ingenhousz demonstrated, in 1779, that plants in the dark give off carbon dioxide. In 1804 Nicolas-Théodore de Saussure demonstrated convincingly that plants in sunlight absorb water and carbon dioxide and increase in weight, as had been reported by Hales nearly a century earlier.

The widespread use of the microscope by plant morphologists provided a turning point in the 18th century—botany became largely a laboratory science. Until the invention of simple lenses and the compound microscope, the recognition and classification of plants were, for the most part, based on such large morphological aspects of the plant as size, shape, and external structure of leaves, roots, and stems. Such information was also supplemented by observations on more subjective qualities of plants, such as edibility and medicinal uses.

In 1753 Linnaeus published his master work, Species Plantarum, which contains careful descriptions of 6,000 species of plants from all of the parts of the world known at the time. In this work, which is still the basic reference work for modern plant taxonomy, Linnaeus established the practice of binomial nomenclature—that is, the denomination of each kind of plant by two words, the genus name and the specific name, as Rosa canina, the dog rose. Binomial nomenclature had been introduced much earlier by some of the herbalists, but it was not generally accepted; most botanists continued to use cumbersome formal descriptions, consisting of many words, to name a plant. Linnaeus for the first time put the contemporary knowledge of plants into an orderly system, with full acknowledgment to past authors, and produced a nomenclatural methodology so useful that it has not been greatly improved upon. Linnaeus also introduced a "sexual system" of plants, by which the numbers of flower parts—especially stamens, which produce male sex cells, and styles, which are prolongations of plant ovaries that receive pollen grains—became useful tools for easy identification of plants. This simple system, though effective, had many imperfections. Other classification systems, in which as many characters as possible were considered in order to determine the degree of relationship, were developed by other botanists; indeed, some appeared before the time of Linnaeus. The application of the concepts of Charles Darwin (on evolution) and Gregor Mendel (on genetics) to plant taxonomy has provided insights into the process of evolution and the production of new species.

Systematic botany now uses information and techniques from all the subdisciplines of botany, incorporating them into one body of knowledge. Phytogeography (the biogeography of plants), plant ecology, population genetics, and various techniques applicable to cells—cytotaxonomy and cytogenetics—have contributed greatly to the current status of systematic botany and have to some degree become part of it. More recently, phytochemistry, computerized statistics, and fine-structure morphology have been added to the activities of systematic botany.

The 20th century saw an enormous increase in the rate of growth of research in botany and the results derived therefrom. The combination of more botanists, better facilities, and new technologies, all with the benefit of experience from the past, resulted in a series of new discoveries, new concepts, and new fields of botanical endeavour. Some important examples are mentioned below.

New and more precise information is being accumulated concerning the process of photosynthesis, especially with reference to energy-transfer mechanisms.

The discovery of the pigment phytochrome, which constitutes a previously unknown light-detecting system in plants, has greatly increased knowledge of the influence of both internal and external environment on the germination of seeds and the time of flowering.

Several types of plant hormones (internal regulatory substances) have been discovered—among them auxin, gibberellin, and kinetin—whose interactions provide a new concept of the way in which the plant functions as a unit.

The discovery that plants need certain trace elements usually found in the soil has made it possible to cultivate areas lacking some essential element by adding it to the deficient soil.

The development of genetical methods for the control of plant heredity has made possible the generation of improved and enormously productive crop plants.

The development of radioactive-carbon dating of plant materials as old as 50,000 years is useful to the paleobotanist, the ecologist, the archaeologist, and especially to the climatologist, who now has a better basis on which to predict climates of future centuries. (A short sketch of the underlying computation follows this list of examples.)

The discovery of alga-like and bacteria-like fossils in Precambrian rocks has pushed the estimated origin of plants on Earth to 3,500,000,000 years ago.

The isolation of antibiotic substances from fungi and bacteria-like organisms has provided control over many bacterial diseases and has contributed biochemical information of basic scientific importance as well.

The use of phylogenetic data to establish a consensus on the taxonomy and evolutionary lineages of angiosperms (flowering plants) is coordinated through an international effort known as the Angiosperm Phylogeny Group.
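
To make the radiocarbon-dating example above concrete, here is a minimal sketch in Python, assuming simple exponential decay with the conventional carbon-14 half-life of about 5,730 years; real laboratories additionally apply calibration curves and isotopic corrections that this illustration omits.

import math

HALF_LIFE_C14 = 5730.0  # years, conventional carbon-14 half-life

def radiocarbon_age(remaining_fraction: float) -> float:
    # Age in years from the fraction of carbon-14 remaining, assuming
    # pure exponential decay: N(t) = N0 * exp(-decay_constant * t).
    decay_constant = math.log(2) / HALF_LIFE_C14
    return math.log(1.0 / remaining_fraction) / decay_constant

# A sample retaining 25% of its carbon-14 is two half-lives old:
print(round(radiocarbon_age(0.25)))  # prints 11460

Note that the roughly 50,000-year limit quoted above corresponds to only about 0.2% of the original carbon-14 remaining, which is why older materials cannot be dated this way.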

Areas of study

For convenience, but not on any mutually exclusive basis, several major areas or approaches are recognized commonly as disciplines of botany. These are morphology, physiology, ecology, and systematics.

Morphology

[Image: Structures of a leaf, showing the waxy cuticle and epidermis, the stomata with their guard cells, the xylem and phloem vascular tissues that carry water, minerals and sugars, and the chloroplast-containing mesophyll layer where photosynthesis occurs.]

Morphology deals with the structure and form of plants and includes such subdivisions as: cytology, the study of the cell; histology, the study of tissues; anatomy, the study of the organization of tissues into the organs of the plant; reproductive morphology, the study of life cycles; and experimental morphology, or morphogenesis, the study of development.

Physiology

Physiology deals with the functions of plants. Its development as a subdiscipline has been closely interwoven with the development of other aspects of botany, especially morphology. In fact, structure and function are sometimes so closely related that it is impossible to consider one independently of the other. The study of function is indispensable for the interpretation of the incredibly diverse nature of plant structures. In other words, around the functions of the plant, structure and form have evolved. Physiology also blends imperceptibly into the fields of biochemistry and biophysics, as the research methods of these fields are used to solve problems in plant physiology.

Ecology

Ecology deals with the mutual relationships and interactions between organisms and their physical environment. The physical factors of the atmosphere, the climate, and the soil affect the physiological functions of the plant in all its manifestations, so that, to a large degree, plant ecology is a phase of plant physiology under natural and uncontrolled conditions. Plants are intensely sensitive to the forces of the environment, and both their association into communities and their geographical distribution are determined largely by the character of climate and soil. Moreover, the pressures of the environment and of organisms upon each other are potent forces, which lead to new species and the continuing evolution of larger groups. Ecology also investigates the competitive or mutualistic relationships that occur at different levels of ecosystem composition, such as those between individuals, populations, or communities. Plant-animal interactions, such as those between plants and their herbivores or pollinators, are also an important area of study.

Systematics

Systematics deals with the identification and ranking of all plants. It includes classification and nomenclature (naming) and enables the botanist to comprehend the broad range of plant diversity and evolution.

Other subdisciplines

In addition to the major subdisciplines, several specialized branches of botany have developed as a matter of custom or convenience. Among them are bacteriology, the study of bacteria; mycology, the study of fungi; phycology, the study of algae; bryology, the study of mosses and liverworts; pteridology, the study of ferns and their relatives; and paleobotany, the study of fossil plants. Palynology is the study of modern and fossil pollen and spores, with particular reference to their identification; plant pathology deals with the diseases of plants; economic botany deals with plants of practical use to humankind; and ethnobotany covers the traditional use of plants by local peoples, now and in the distant past.

Botany also relates to other scientific disciplines in many ways, especially to zoology, medicine, microbiology, agriculture, chemistry, forestry, and horticulture, and specialized areas of botanical information may relate closely to such humanistic fields as art, literature, history, religion, archaeology, sociology, and psychology.

Fundamentally, botany remains a pure science, including any research into the life of plants and limited only by humanity’s technical means of satisfying curiosity. It has often been considered an important part of a liberal education, not only because it is necessary for an understanding of agriculture, horticulture, forestry, pharmacology, and other applied arts and sciences but also because an understanding of plant life is related to life in general.

Because humanity has always been dependent upon plants and surrounded by them, plants are woven into designs, into the ornamentation of life, even into religious symbolism. A Persian carpet and a bedspread from a New England loom both employ conventional designs derived from the forms of flowers. Medieval painters and great masters of the Renaissance represented various revered figures surrounded by roses, lilies, violets, and other flowers, which symbolized chastity, martyrdom, humility, and other Christian attributes.

Methods in botany

Morphological aspects

The invention of the compound microscope provided a valuable and durable instrument for the investigation of the inner structure of plants. Early plant morphologists, especially those studying cell structure, were handicapped as much by the lack of adequate knowledge of how to prepare specimens as they were by the imperfect microscopes of the time. A revolution in the effectiveness of microscopy occurred in the second half of the 19th century with the introduction of techniques for fixing cells and for staining their component parts. Before the development of these techniques, the cell, viewed with the microscope, appeared as a minute container with a dense portion called the nucleus. The discovery that parts of the cell respond to certain stains made observation easier. The development of techniques for preparing tissues of plants for microscopic examination was continued in the 1870s and 1880s and resulted in the gradual refinement of the field of nuclear cytology, or karyology. Chromosomes were recognized as constant structures in the life cycle of cells, and the nature and meaning of meiosis, a type of cell division in which the daughter cells have half the number of chromosomes of the parent, was discovered; without this discovery, the significance of Mendel’s laws of heredity might have gone unrecognized. Vital stains, dyes that can be used on living material, were first used in 1886 and have been greatly refined since then.

Improvement of the methodology of morphology has not been particularly rapid, even though satisfactory techniques for histology, anatomy, and cytology have been developed. The embedding of material in paraffin wax, the development of the rotary microtome for slicing very thin sections of tissue for microscope viewing, and the development of stain techniques are refinements of previously known methods. The invention of the phase microscope made possible the study of unfixed and unstained living material—hopefully nearer its natural state. The development of the electron microscope, however, has provided the plant morphologist with a new dimension of magnification of the structure of plant cells and tissues. The fine structure of the cell and of its components, such as mitochondria and the Golgi apparatus, has come under intensive study. Knowledge of the fine structure of plant cells has enabled investigators to determine the sites of important biochemical activities, especially those involved in the transfer of energy during photosynthesis and respiration. The scanning electron microscope, a relatively recent development, provides a three-dimensional image of surface structures at very great magnifications.

For experimental research on the morphogenesis of plants, isolated organs in their embryonic stage, clumps of cells, or even individual cells are grown. One of the most interesting techniques developed thus far permits the growing of plant tissue of higher plants as single cells; aeration and continuous agitation keep the cells suspended in the liquid culture medium.

Physiological aspects

Plant physiology and plant biochemistry are the most technical areas of botany; most major advances in physiology also reflect the development of either a new technique or the dramatic refinement of an earlier one to give a new degree of precision. Fortunately, the methodology of measurement has been vastly improved in recent decades, largely through the development of various electronic devices. The phytotron at the California Institute of Technology represents the first serious attempt to control the environment of living plants on a relatively large scale; much important information has been gained concerning the effects on plants of day length and night length and the effects on growth, flowering, and fruiting of varying night temperatures. Critical measurements of other plant functions have also been obtained.

Certain complex biochemical processes, such as photosynthesis and respiration, have been studied stepwise by immobilizing the process through the use of extreme cold or biochemical inhibitors and by analyzing the enzymatic activity of specific cell contents after spinning cells at very high speeds in a centrifuge. The pathways of energy transfer from molecule to molecule during photosynthesis and respiration have been determined by biophysical methods, especially those utilizing radioactive isotopes.

An investigation of the natural metabolic products of plants requires, in general, certain standard biochemical techniques—e.g., gas and paper chromatography, electrophoresis, and various kinds of spectroscopy, including infrared, ultraviolet, and nuclear magnetic resonance. Useful information on the structure of the extremely large cellulose molecule has been provided by X-ray crystallography.

Ecological aspects

When plant ecology first emerged as a subscience of botany, it was largely descriptive. Today, however, it has become a common meeting ground for all the plant sciences, as well as for other sciences. In addition, it has become much more quantitative. As a result, the tools and methods of plant ecologists are those available for measuring the intensity of the environmental factors that impinge on the plant and the reaction of the plant to these factors. The extent of the variability of many physical factors must be measured. The integration and reporting of such measurements, which cannot be regarded as constant, may therefore conceal some of the most dynamic and significant aspects of the environment and the responses of the plant to them. Because the physical environment is a complex of biological and physical components, it is measured by biophysical tools. The development of electronic measuring and recording devices has been crucial for a better understanding of the dynamics of the environment. Such devices, however, produce so much information that computer techniques must be used to reduce the data to meaningful results.

The ecologist might be concerned primarily with measuring the effect of the external environment on a plant and could adapt the methodology of the plant physiologist to field conditions.

The plant community ecologist is concerned with both the relation of different kinds of plants to each other and the nature and constitution of their association in natural communities. One widely used technique in this respect is to count the various kinds of plants within a standard area in order to determine such factors as the percentage of ground cover, dominance of species, aggressiveness, and other characteristics of the community. In general, the community ecologist has relatively few quantitative factors to measure, which nevertheless gives extremely useful results and some degree of predictability.

Some ecologists are most concerned with the inner environment of the plant and the way in which it reacts to the external environment. This approach, which is essentially physiological and biochemical, is useful for determining energy flow in ecosystems. The physiological ecologist is also concerned with evaluating the adaptations that certain plants have made toward survival in a hostile environment.

In summary, the techniques and methodology of plant ecology are as diverse and as varied as the large number of sciences that are drawn upon by ecologists. Completely new techniques, although few, are important; among them are techniques for measuring the amount of radioactive carbon-14 in plant deposits up to 50,000 years old. The most important new method in plant ecology is the rapidly growing use of computer techniques for handling vast amounts of data. Furthermore, modern digital computers can be used to simulate simple ecosystems and to analyze real ones.
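
As a toy illustration of what simulating a simple ecosystem can mean, the sketch below (in Python) steps the classic logistic-growth model dN/dt = rN(1 - N/K) forward in yearly increments; the growth rate, carrying capacity and starting population are invented for illustration and do not come from any real study.

def simulate_logistic(n0, r, k, years):
    # Logistic growth, Euler-stepped one year at a time:
    # each year the population grows by r*N*(1 - N/K).
    n = n0
    trajectory = []
    for _ in range(years):
        n += r * n * (1 - n / k)
        trajectory.append(n)
    return trajectory

# Hypothetical plant population: 10 individuals, 40% annual growth,
# carrying capacity of 1,000; growth slows as N approaches K.
for year, n in enumerate(simulate_logistic(10, 0.4, 1000, 25), start=1):
    if year % 5 == 0:
        print(f"year {year:2d}: {n:7.1f} plants")

Even this toy model reproduces the S-shaped curve ecologists measure in the field: near-exponential growth while resources are plentiful, then a levelling off as the carrying capacity is approached.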

Taxonomic aspects

Experimental research under controlled conditions, made possible by botanical gardens and their ranges of greenhouses and controlled environmental chambers, has become an integral part of the methodology of modern plant taxonomy.

A second major tool of the taxonomist is the herbarium, a reference collection consisting of carefully selected and dried plants attached to paper sheets of a standard size and filed in a systematic way so that they may be easily retrieved for examination. Each specimen is a reference point representing the features of one plant of a certain species; it lasts indefinitely if properly cared for, and, if the species becomes extinct in nature—as thousands have—it remains the only record of the plant’s former existence. The library is also an essential reference resource for descriptions and illustrations of plants that may not be represented in a particular herbarium.

One of the earliest methods of the taxonomist, the study of living plants in the field, has benefited greatly by fast and easy methods of transportation. Botanists may carry on fieldwork in any part of the world and make detailed studies of the exact environmental conditions under which each species grows.

Many new approaches have been applied to the elucidation of problems in systematic botany. The transmission electron microscope and the scanning electron microscope have added to the knowledge of plant morphology, upon which classical taxonomy so much depends.

Refined methods for cytological and genetical studies of plants have given the taxonomist new insights into the origin of the great diversity among plants, especially the mechanisms by which new species arise and by which they then maintain their individuality in nature. From such studies have arisen further methods and also the subdisciplines of cytotaxonomy, cytogenetics, and population genetics.

Phytochemistry, or the chemistry of plants, one of the early subdivisions of organic chemistry, has been of great importance in the identification of plant substances of medicinal importance. With the development of new phytochemical methods, new information has become available for use in conjunction with plant taxonomy. Thus has arisen the modern field of chemotaxonomy, or biochemical systematics. Each species tends to differ to some degree from every other species, even in the same genus, in the biochemistry of its natural metabolic products. Sometimes the difference is subtle and difficult to determine; sometimes it is obvious and easily perceptible. With new analytical techniques, a large number of individual compounds from one plant can be identified quickly and with certainty. Such information is extremely useful in adding confirmatory or supplemental evidence of an objective and quantitative nature. An interesting by-product of chemical plant taxonomy has been a better understanding of the restriction of certain insects to specific plants.

Computer techniques have been applied to plant taxonomy to develop a new field, numerical taxonomy, or taximetrics, by which relationships between plant species or those within groups of species are determined quantitatively and depicted graphically. Another method measures the degree of molecular similarity of deoxyribonucleic acid (DNA) molecules in different plants. By this procedure it should be possible to determine the natural taxonomic relationships (phylogeny) among different plants and plant groups by determining the extent of the relationship of their DNA: closely related plants will have more similarities in their DNA than will unrelated ones.
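
To give a flavour of how such DNA-similarity comparisons work, here is a minimal sketch in Python that scores pairs of aligned, equal-length DNA fragments by the fraction of matching positions; the sequences and species names are made up for illustration, and real phylogenetic methods rest on far more sophisticated alignment and statistical models.

from itertools import combinations

def similarity(seq_a, seq_b):
    # Fraction of positions at which two aligned sequences agree.
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to the same length")
    matches = sum(a == b for a, b in zip(seq_a, seq_b))
    return matches / len(seq_a)

# Hypothetical aligned fragments for three plant species.
plants = {
    "species_A": "ATGGCGTACGTTAGC",
    "species_B": "ATGGCGTACGATAGC",  # differs from A at one position
    "species_C": "TTGACGAACGTTGGA",  # more distant from both
}

for (name_a, seq_a), (name_b, seq_b) in combinations(plants.items(), 2):
    print(f"{name_a} vs {name_b}: {similarity(seq_a, seq_b):.2f}")

On this made-up data, species_A and species_B score highest and so would be placed closest together; a matrix of such pairwise scores is exactly the kind of quantitative input from which numerical taxonomy builds its graphical depictions of relationship.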

[Image: What is Botany]


It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.


#2116 2024-04-10 00:02:04

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,428

Re: Miscellany

2118) Zoology

Gist

Zoology is the study of all animals of all shapes and sizes, from tiny insects to large mammals. Zoologists investigate what animals eat and how they live, and how animals interact with their habitats.

Summary

Zoology is the scientific study of animals. Its studies include the structure, embryology, classification, habits, and distribution of all animals, both living and extinct, and how they interact with their ecosystems. Zoology is one of the primary branches of biology. The term is derived from the Ancient Greek zōion ('animal') and logos ('knowledge', 'study').

Although humans have always been interested in the natural history of the animals they saw around them, and used this knowledge to domesticate certain species, the formal study of zoology can be said to have originated with Aristotle. He viewed animals as living organisms, studied their structure and development, and considered their adaptations to their surroundings and the function of their parts. Modern zoology has its origins during the Renaissance and early modern period, with Carl Linnaeus, Antonie van Leeuwenhoek, Robert Hooke, Charles Darwin, Gregor Mendel and many others.

The study of animals has largely moved on to deal with form and function, adaptations, relationships between groups, behaviour and ecology. Zoology has increasingly been subdivided into disciplines such as classification, physiology, biochemistry and evolution. With the discovery of the structure of DNA by Francis Crick and James Watson in 1953, the realm of molecular biology opened up, leading to advances in cell biology, developmental biology and molecular genetics.

Details

Zoology is the branch of biology that studies the members of the animal kingdom and animal life in general. It includes both the inquiry into individual animals and their constituent parts, even to the molecular level, and the inquiry into animal populations, entire faunas, and the relationships of animals to each other, to plants, and to the nonliving environment. Though this wide range of studies results in some isolation of specialties within zoology, the conceptual integration in the contemporary study of living things that has occurred in recent years emphasizes the structural and functional unity of life rather than its diversity.

Historical background

Prehistoric man’s survival as a hunter defined his relation to other animals, which were a source of food and danger. As man’s cultural heritage developed, animals were variously incorporated into man’s folklore and philosophical awareness as fellow living creatures. Domestication of animals forced man to take a systematic and measured view of animal life, especially after urbanization necessitated a constant and large supply of animal products.

Study of animal life by the ancient Greeks became more rational, if not yet scientific, in the modern sense, after the cause of disease—until then thought to be demons—was postulated by Hippocrates to result from a lack of harmonious functioning of body parts. The systematic study of animals was encouraged by Aristotle’s extensive descriptions of living things, his work reflecting the Greek concept of order in nature and attributing to nature an idealized rigidity.

In Roman times Pliny brought together in 37 volumes a treatise, Historia naturalis, that was an encyclopaedic compilation of both myth and fact regarding celestial bodies, geography, animals and plants, metals, and stone. Volumes VII to XI concern zoology; volume VIII, which deals with the land animals, begins with the largest one, the elephant. Although Pliny’s approach was naïve, his scholarly effort had a profound and lasting influence as an authoritative work.

Zoology continued in the Aristotelian tradition for many centuries in the Mediterranean region and by the Middle Ages, in Europe, it had accumulated considerable folklore, superstition, and moral symbolisms, which were added to otherwise objective information about animals. Gradually, much of this misinformation was sifted out: naturalists became more critical as they compared directly observed animal life in Europe with that described in ancient texts. The use of the printing press in the 15th century made possible an accurate transmission of information. Moreover, mechanistic views of life processes (i.e., that physical processes depending on cause and effect can apply to animate forms) provided a hopeful method for analyzing animal functions; for example, the mechanics of hydraulic systems were part of William Harvey’s argument for the circulation of the blood—although Harvey remained thoroughly Aristotelian in outlook. In the 18th century, zoology passed through reforms provided by both the system of nomenclature of Carolus Linnaeus and the comprehensive works on natural history by Georges-Louis Leclerc de Buffon; to these were added the contributions to comparative anatomy by Georges Cuvier in the early 19th century.

Physiological functions, such as digestion, excretion, and respiration, were easily observed in many animals, though they were not as critically analyzed as was blood circulation.

Following the introduction of the word cell in the 17th century and microscopic observation of these structures throughout the 18th century, the cell was incisively defined as the common structural unit of living things in 1839 by two Germans: Matthias Schleiden and Theodor Schwann. In the meantime, as the science of chemistry developed, it was inevitably extended to an analysis of animate systems. In the middle of the 18th century the French physicist René Antoine Ferchault de Réaumur demonstrated that the fermenting action of stomach juices is a chemical process. And in the mid-19th century the French physician and physiologist Claude Bernard drew upon both the cell theory and knowledge of chemistry to develop the concept of the stability of the internal bodily environment, now called homeostasis.

The cell concept influenced many biological disciplines, including that of embryology, in which cells are important in determining the way in which a fertilized egg develops into a new organism. The unfolding of these events—called epigenesis by Harvey—was described by various workers, notably the German-trained comparative embryologist Karl von Baer, who was the first to observe a mammalian egg within an ovary. Another German-trained embryologist, Christian Heinrich Pander, introduced in 1817 the concept of germ, or primordial, tissue layers into embryology.

In the latter part of the 19th century, improved microscopy and better staining techniques using aniline dyes, such as hematoxylin, provided further impetus to the study of internal cellular structure.

By this time Darwin had made necessary a complete revision of man’s view of nature with his theory that biological changes in species occur through the process of natural selection. The theory of evolution—that organisms are continuously evolving into highly adapted forms—required the rejection of the static view that all species are especially created and upset the Linnaean concept of species types. Darwin recognized that the principles of heredity must be known to understand how evolution works; but, even though the concept of hereditary factors had by then been formulated by Mendel, Darwin never heard of his work, which was essentially lost until its rediscovery in 1900.

Genetics developed throughout the 20th century and is now essential to many diverse biological disciplines. The discovery of the gene as a controlling hereditary factor for all forms of life has been a major accomplishment of modern biology. There has also emerged a clearer understanding of the interaction of organisms with their environment. Such ecological studies help not only to show the interdependence of the three great groups of organisms—plants, as producers; animals, as consumers; and fungi and many bacteria, as decomposers—but they also provide information essential to man’s control of the environment and, ultimately, to his survival on Earth. Closely related to this study of ecology are inquiries into animal behaviour, or ethology. Such studies are often cross-disciplinary in that ecology, physiology, genetics, development, and evolution are combined as man attempts to understand why an organism behaves as it does. This approach now receives substantial attention because it seems to provide useful insight into man’s biological heritage—that is, the historical origin of man from nonhuman forms.

The emergence of animal biology has had two particular effects on classical zoology. First, and somewhat paradoxically, there has been a reduced emphasis on zoology as a distinct subject of scientific study; for example, workers think of themselves as geneticists, ecologists, or physiologists who study animal rather than plant material. They often choose a problem congenial to their intellectual tastes, regarding the organism used as important only to the extent that it provides favourable experimental material. Current emphasis is, therefore, slanted toward the solution of general biological problems; contemporary zoology thus is to a great extent the sum total of that work done by biologists pursuing research on animal material.

Second, there is an increasing emphasis on a conceptual approach to the life sciences. This has resulted from the concepts that emerged in the late 19th and early 20th centuries: the cell theory; natural selection and evolution; the constancy of the internal environment; the basic similarity of genetic material in all living organisms; and the flow of matter and energy through ecosystems. The lives of microbes, plants, and animals now are approached using theoretical models as guides rather than by following the often restricted empiricism of earlier times. This is particularly true in molecular studies, in which the integration of biology with chemistry allows the techniques and quantitative emphases of the physical sciences to be used effectively to analyze living systems.

Areas of study

Although it is still useful to recognize many disciplines in animal biology—e.g., anatomy or morphology; biochemistry and molecular biology; cell biology; developmental studies (embryology); ecology; ethology; evolution; genetics; physiology; and systematics—the research frontiers occur as often at the interfaces of two or more of these areas as within any given one.

Anatomy or morphology

Descriptions of external form and internal organization are among the earliest records available regarding the systematic study of animals. Aristotle was an indefatigable collector and dissector of animals. He found differing degrees of structural complexity, which he described with regard to ways of living, habits, and body parts. Although Aristotle had no formal system of classification, it is apparent that he viewed animals as arranged from the simplest to the most complex in an ascending series. Since man was even more complex than animals and, moreover, possessed a rational faculty, he therefore occupied the highest position and a special category. This hierarchical perception of the animate world proved to be useful in every century to the present, except that in the modern view there is no such “scale of nature,” and there is change in time by evolution from the simple to the complex.

After the time of Aristotle, Mediterranean science was centred at Alexandria, where the study of anatomy, particularly the central nervous system, flourished and, in fact, first became recognized as a discipline. Galen studied anatomy at Alexandria in the 2nd century and later dissected many animals. Much later, the contributions of the Renaissance anatomist Andreas Vesalius, though made in the context of medicine, as were those of Galen, stimulated to a great extent the rise of comparative anatomy. During the latter part of the 15th century and throughout the 16th century, there was a strong tradition in anatomy; important similarities were observed in the anatomy of different animals, and many illustrated books were published to record these observations.

But anatomy remained a purely descriptive science until the advent of functional considerations in which the correlation between structure and function was consciously investigated, as in the work of the French biologists Buffon and Cuvier. Cuvier cogently argued that a trained naturalist could deduce from one suitably chosen part of an animal’s body the complete set of adaptations that characterized the organism. Because it was obvious that organisms with similar parts pursue similar habits, they were placed together in a system of classification. Cuvier pursued this viewpoint, which he called the theory of correlations, in a somewhat dogmatic manner and placed himself in opposition to the romantic natural philosophers, such as the German intellectual Johann Wolfgang von Goethe, who saw a tendency to ideal types in animal form. The tension between these schools of thought—adaptation as the consequence of necessary bodily functions and adaptation as an expression of a perfecting principle in nature—runs as a leitmotiv through much of biology, with overtones extending into the early 20th century.

The twin concepts of homology (similarity of origin) and analogy (similarity of appearance), in relation to structure, are the creation of the 19th-century British anatomist Richard Owen. Although they antedate the Darwinian view of evolution, the anatomical data on which they were based became, largely as a result of the work of the German comparative anatomist Carl Gegenbaur, important evidence in favour of evolutionary change, despite Owen’s steady unwillingness to accept the view of diversification of life from a common origin.

In summary, anatomy moved from a purely descriptive phase, as an adjunct to classificatory studies, into a partnership with studies of function and became, in the 19th century, a major contributor to the concept of evolution.

Taxonomy or systematics

Not until the work of Carolus Linnaeus did the variety of life receive a widely accepted systematic treatment. Linnaeus strove for a “natural method of arrangement,” one that is now recognizable as an intuitive grasp of homologous relationships, reflecting evolutionary descent from a common ancestor; however, the natural method of arrangement sought by Linnaeus was more akin to the tenets of idealized morphology because he wanted to define a “type” form as epitomizing a species.

It was in the nomenclatorial aspect of classification that Linnaeus created a revolutionary advance with the introduction of a Latin binomial system: each species received a Latin name, which was not influenced by local names and which invoked the authority of Latin as a language common to the learned people of that day. The Latin name has two parts. The first word in the Latin name for the common chimpanzee, Pan troglodytes, for example, indicates the larger category, or genus, to which chimpanzees belong; the second word is the name of the species within the genus. In addition to species and genera, Linnaeus also recognized other classificatory groups, or taxa (singular taxon), which are still used; namely, order, class, and kingdom, to which have been added family (between genus and order) and phylum (between class and kingdom). Each of these can be divided further by the appropriate prefix of sub- or super-, as in subfamily or superclass. Linnaeus’ great work, the Systema naturae, went through 12 editions during his lifetime; the 13th, and final, edition appeared posthumously. Although his treatment of the diversity of living things has been expanded in detail, revised in terms of taxonomic categories, and corrected in the light of continuing work—for example, Linnaeus treated whales as fish—it still sets the style and method, even to the use of Latin names, for contemporary nomenclatorial work.
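
The rank structure just described can be pictured as a simple nested hierarchy. Below is a minimal sketch in Python (written for this post; the rank values are the standard ones for the chimpanzee, but the data structure itself is purely illustrative and not any established taxonomy library), showing how the Latin binomial is read off from the two lowest ranks:

# Linnaean ranks for the common chimpanzee, from most to least inclusive.
# The rank names and values are standard; the structure is illustrative.
chimpanzee = {
    "kingdom": "Animalia",
    "phylum": "Chordata",
    "class": "Mammalia",
    "order": "Primates",
    "family": "Hominidae",
    "genus": "Pan",
    "species": "troglodytes",
}

def binomial(taxon):
    # The binomial is simply the genus name plus the species epithet.
    return taxon["genus"] + " " + taxon["species"]

print(binomial(chimpanzee))  # prints: Pan troglodytes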

Linnaeus sought a natural method of arrangement, but he actually defined types of species on the basis of idealized morphology. The greatest change from Linnaeus’ outlook is reflected in the phrase “the new systematics,” which was introduced in the 20th century and through which an explicit effort is made to have taxonomic schemes reflect evolutionary history. The basic unit of classification, the species, is also the basic unit of evolution—i.e., a population of actually or potentially interbreeding individuals. Such a population shares, through interbreeding, its genetic resources. In so doing, it creates the gene pool—its total genetic material—that determines the biological resources of the species and on which natural selection continuously acts. This approach has guided work on classifying animals away from somewhat arbitrary categorization of new species to that of recreating evolutionary history (phylogeny) and incorporating it in the system of classification. Modern taxonomists or systematists, therefore, are among the foremost students of evolution.

Physiology

The practical consequences of physiology have always been an unavoidable human concern, in both medicine and animal husbandry. Inevitably, from Hippocrates to the present, practical knowledge of human bodily function has accumulated along with that of domestic animals and plants. This knowledge has been expanded, especially since the early 1800s, by experimental work on animals in general, a study known as comparative physiology. The experimental dimension had wide applications following Harvey’s demonstration of the circulation of blood. From then on, medical physiology developed rapidly; notable texts appeared, such as Albrecht von Haller’s eight-volume work Elementa Physiologiae Corporis Humani (Elements of Human Physiology), which had a medical emphasis. Toward the end of the 18th century the influence of chemistry on physiology became pronounced through Antoine Lavoisier’s brilliant analysis of respiration as a form of combustion. This French chemist not only determined that oxygen was consumed by living systems but also opened the way to further inquiry into the energetics of living systems. His studies further strengthened the mechanistic view, which holds that the same natural laws govern both the inanimate and the animate realms.

Physiological principles achieved new levels of sophistication and comprehensiveness with Bernard’s concept of constancy of the internal environment, the point being that only under certain constantly maintained conditions is there optimal bodily function. His rational and incisive insights were augmented by concurrent developments in Germany, where Johannes Müller explored the comparative aspects of animal function and anatomy, and Justus von Liebig and Carl Ludwig applied chemical and physical methods, respectively, to the solution of physiological problems. As a result, many useful techniques were advanced—e.g., means for precise measurement of muscular action and changes in blood pressure and means for defining the nature of body fluids.

By this time the organ systems—circulatory, digestive, endocrine, excretory, integumentary, muscular, nervous, reproductive, respiratory, and skeletal—had been defined, both anatomically and functionally, and research efforts were focussed on understanding these systems in cellular and chemical terms, an emphasis that continues to the present and has resulted in specialties in cell physiology and physiological chemistry. General categories of research now deal with the transportation of materials across membranes; the metabolism of cells, including synthesis and breakdown of molecules; and the regulation of these processes.

Interest has also increased in the most complex of physiological systems, the nervous system. Much comparative work has been done by utilizing animals with structures especially amenable to various experimental techniques; for example, the large nerves in squids have been extensively studied in terms of the transmission of nerve impulses, and insect and crustacean eyes have yielded significant information on patterns of sensory inputs. Most of this work is closely associated with studies on animal orientation and behaviour. Although the contemporary physiologist often studies functional problems at the molecular and cellular levels, he is also aware of the need to integrate cellular studies into the many-faceted functions of the total organism.

Embryology, or developmental studies

Embryonic growth and differentiation of parts have been major biological problems since ancient times. A 17th-century explanation of development assumed that the adult existed as a miniature—a homunculus—in the microscopic material that initiates the embryo. But in 1759 the German physician Caspar Friedrich Wolff firmly introduced into biology the interpretation that undifferentiated materials gradually become specialized, in an orderly way, into adult structures. Although this epigenetic process is now accepted as characterizing the general nature of development in both plants and animals, many questions remain to be solved. The French physician Marie François Xavier Bichat declared in 1801 that differentiating parts consist of various components called tissues; with the subsequent statement of the cell theory, tissues were resolved into their cellular constituents. The idea of epigenetic change and the identification of structural components made possible a new interpretation of differentiation. It was demonstrated that the egg gives rise to three essential germ layers out of which specialized organs, with their tissues, subsequently emerge. Then, following his own discovery of the mammalian ovum, von Baer in 1828 usefully applied this information when he surveyed the development of various members of the vertebrate groups. At this point, embryology, as it is now recognized, emerged as a distinct subject.

The concept of cellular organization had an effect on embryology that continues to the present day. In the 19th century, cellular mechanisms were considered essentially to be the basis for growth, differentiation, and morphogenesis, or molding of parts. The distribution of the newly formed cells of the rapidly dividing zygote (fertilized egg) was precisely followed to provide detailed accounts not only of the time and mode of germ layer formation but also of the contribution of these layers to the differentiation of tissues and organs. Such descriptive information provided the background for experimental work aimed at elucidating the role of chromosomes and other cellular constituents in differentiation. About 1895, before the formulation of the chromosomal theory of heredity, Theodor Boveri demonstrated that chromosomes show continuity from one cell generation to the next. In fact, biologists soon concluded that in all cells arising from a fertilized egg, half the chromosomes are of maternal and half of paternal origin. The discovery of the constant transmission of the original chromosomal endowment to all cells of the body served to deepen the mystery surrounding the factors that determine cellular differentiation.

The present view is that differential activity of genes is the basis for cellular and tissue differentiation; that is, although the cells of a multicellular body contain the same genetic information, different genes are active in different cells. The result is the formation of various gene products, which regulate the functional and structural differentiation of cells. The actual mechanism involved in the inactivation of certain genes and the activation of others, however, has not yet been established. That cells can move extensively throughout the embryo and selectively adhere to other cells, thus starting tissue aggregations, also contributes to development as does the fate of cells—i.e., certain ones continue to multiply, others stop, and some die.

Research methods in embryology now exploit many experimental situations: both unicellular and multicellular forms; regeneration (replacement of lost parts) and normal development; and growth of tissues outside and inside the host. Hence, the processes of development can be studied with material other than embryos; and the study of embryology has become incorporated into the more inclusive subdiscipline of developmental biology.

Evolutionism

Darwin was not the first to speculate that organisms can change from generation to generation and so evolve, but he was the first to propose a mechanism by which the changes are accumulated. He proposed that heritable variations occur in conjunction with a never-ending competition for survival and that the variations favouring survival are automatically preserved. In time, therefore, the continued accumulation of variations results in the emergence of new forms. Because the variations that are preserved relate to survival, the survivors are highly adapted to their environment. To this process Darwin gave the apt name natural selection.

Many of Darwin’s predecessors, notably Jean-Baptiste Lamarck, were willing to accept the idea of species variation, even though to do so meant denying the doctrine of special creation and the static-type species of Linnaeus. But they argued that some idealized perfecting principle, expressed through the habits of an organism, was the basis of variation. The contrast between the romanticism of Lamarck and the objective analysis of Darwin clearly reveals the type of revolution provoked by the concept of natural selection. Although mechanistic explanations had long been available to biologists—forming, for example, part of Harvey’s explanation of blood circulation—they did not pervade the total structure of biological thinking until the advent of Darwinism.

There were two immediate consequences of Darwin’s viewpoints. One has involved a reappraisal of all subject areas of biology; reinterpretations of morphology and embryology are good examples. The comparative anatomy of the British anatomist Owen became a cornerstone of the evidence for evolution, and German anatomists provided the basis for the comment that evolutionary thinking was born in England but gained its home in Germany. The reinterpretation of morphology carried over into the study of fossil forms, as paleontologists sought and found evidence of gradual change in their study of fossils. But some workers, although accepting evolution in principle, could not easily interpret the changes in terms of natural selection. The German paleontologist Otto Schindewolf, for example, found in shelled mollusks called ammonites evidence of progressive complexity and subsequent simplification of forms. The American paleontologist George Gaylord Simpson, however, has been a consistent interpreter of vertebrate fossils by Darwinian selection. Embryology was seen in an evolutionary light when the German zoologist Ernst Haeckel proposed that the epigenetic sequence of embryonic development (ontogeny) repeated its evolutionary history (phylogeny). Thus, the presence of gill clefts in the mammalian embryo and also in less highly evolved vertebrates can be understood as a remnant of a common ancestor.

The other consequence of Darwinism—to make more explicit the origin and nature of heritable variations and the action of natural selection on them—depended on the emergence of the following: genetics and the elucidation of the rules of Mendelian inheritance; the concept of the gene as the unit of inheritance; and the nature of gene mutation. The development of these ideas provided the basis for the genetics of natural populations.

The subject of population genetics began with the Mendelian laws of inheritance and now takes into account selection, mutation, migration (movement into and out of a given population), breeding patterns, and population size. These factors affect the genetic makeup of a group of organisms that either interbreed or have the potential to do so; i.e., a species. Accurate appraisal of these factors allows precise predictions regarding the content of a given gene pool over significant periods of evolutionary time. From work involving population genetics has come the realization, eloquently documented by two contemporary American evolutionists, Theodosius Dobzhansky and Ernst Mayr, that the species is the basic unit of evolution. The process of speciation occurs as a gene pool breaks up to form isolated gene pools. When selection pressures similar to those of the original gene pool persist in the new gene pools, similar functions and the similar structures on which they depend also persist. When selection pressures differ, however, differences arise. Thus, the process of speciation through natural selection preserves the evolutionary history of a species. The record may be discerned not only in the gross, or macroscopic, anatomy of organisms but also in their cellular structure and molecular organization. Significant work now is carried out, for example, on the homologies of the nucleic acids and proteins of different species.
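
How such predictions are made can be illustrated with a minimal sketch in Python (the fitness values are invented for the example, and mutation, migration, and finite population size are deliberately ignored): a single gene with two variants, A and a, followed through generations of selection.

# One-gene, two-allele selection model with invented fitness values.
# Each generation, genotype frequencies follow from random mating,
# and the allele frequency is then updated by genotype fitness.
w_AA, w_Aa, w_aa = 1.00, 0.95, 0.80  # relative fitnesses of the genotypes
p = 0.10                             # starting frequency of allele A

for generation in range(50):
    q = 1.0 - p
    w_bar = p * p * w_AA + 2 * p * q * w_Aa + q * q * w_aa  # mean fitness
    p = (p * p * w_AA + p * q * w_Aa) / w_bar               # frequency after selection

print(round(p, 3))  # A, the favoured allele, has risen toward fixation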

Genetics

The problem of heredity had been the subject of careful study before its definitive analysis by Mendel. As with Darwin’s predecessors, those of Mendel tended to idealize and interpret all inherited traits as being transmitted through the blood or as determined by various “humors” or other vague entities in animal organisms. When studying plants, Mendel was able to free himself of anthropomorphic and holistic explanations. By studying seven carefully defined pairs of characteristics—e.g., tall and short plants, red and white flowers—as they were transmitted through as many as three successive generations, he was able to establish patterns of inheritance that apply to all sexually reproducing forms. Darwin, who was searching for an explanation of inheritance, apparently never saw Mendel’s work, which was published in 1866 in the obscure journal of his local natural history society; it was independently rediscovered in 1900 by three European scientists.
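
A worked example of the resulting pattern, using the tall/short pair (tall is dominant in Mendel’s peas): crossing two hybrid Aa plants yields offspring AA, Aa, Aa, and aa in equal proportions, so on average three tall plants appear for every short one, the famous 3:1 ratio.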

Further progress in genetics was made early in the 20th century, when it was realized that hereditary factors are found on chromosomes. The term gene was coined for these factors. Studies by the American geneticist Thomas Hunt Morgan on the fruit fly (Drosophila) moved animal genetics to the forefront of genetic research. The work of Morgan and his students established such major concepts as the linear array of genes on chromosomes; the exchange of parts between chromosomes; and the interaction of genes in determining traits, including sexual differences. In 1927 one of Morgan’s former students, Hermann Muller, used X rays to induce mutations (changes in genes) in the fruit fly, thereby opening the door to major studies on the nature of variation.

Meanwhile, other organisms were being used for genetic studies, most notably fungi and bacteria. The results of this work provided insights into animal genetics just as principles initially obtained from animal genetics provided insight into botanical and microbial forms. Work continues not only on the genetics of humans, domestic animals, and plants but also on the control of development through the orderly regulation of gene action in different cells and tissues.

Cellular and molecular biology

Although the cell was recognized as the basic unit of life early in the 19th century, its most exciting period of inquiry has probably occurred since the 1940s. The new techniques developed since that time, notably the perfection of the electron microscope and the tools of biochemistry, have changed the cytological studies of the 19th and early 20th centuries from a largely descriptive inquiry, dependent on the light microscope, into a dynamic, molecularly oriented inquiry into fundamental life processes.

The so-called cell theory, which was enunciated about 1838, was never actually a theory. As Edmund Beecher Wilson, the noted American cytologist, stated in his great work, The Cell,

By force of habit we still continue to speak of the cell ‘theory’ but it is a theory only in name. In substance it is a comprehensive general statement of fact and as such stands today beside the evolution theory among the foundation stones of modern biology.

More precisely, the cell doctrine was an inductive generalization based on the microscopical examination of certain plant and animal species.

Rudolf Virchow, a German medical officer specializing in cellular pathology, first expressed the fundamental dictum regarding cells in his phrase omnis cellula e cellula (all cells from cells). For cellular reproduction is the ultimate basis of the continuity of life; the cell is not only the basic structural unit of life but also the basic physiological and reproductive unit. All areas of biology were affected by the new perspective afforded by the principle of cellular organization. Especially in conjunction with embryology was the study of the cell most prominent in animal biology. The continuity of cellular generations by reproduction also had implications for genetics. It is little wonder, then, that the full title of Wilson’s survey of cytology at the turn of the century was The Cell: Its Role in Development and Heredity.

The study of the cell nucleus, its chromosomes, and their behaviour served as the basis for understanding the regular distribution of genetic material during both sexual and asexual reproduction. This orderly behaviour of the nucleus made it appear to dominate the life of the cell, for by contrast the components of the rest of the cell appeared to be randomly distributed.

The biochemical study of life had helped in the characterization of the major molecules of living systems—proteins, nucleic acids, fats, and carbohydrates—and in the understanding of metabolic processes. That nucleic acids are a distinctive feature of the nucleus was recognized after their discovery by the Swiss biochemist Johann Friedrich Miescher in 1869. In 1944 a group of American bacteriologists, led by Oswald T. Avery, published work on the causative agent of pneumonia in mice (a bacterium) that culminated in the demonstration that deoxyribonucleic acid (DNA) is the chemical basis of heredity. Discrete segments of DNA correspond to genes, or Mendel’s hereditary factors. Proteins were discovered to be especially important for their role in determining cell structure and in controlling chemical reactions.

The advent of techniques for isolating and characterizing proteins and nucleic acids now allows a molecular approach to essentially all biological problems—from the appearance of new gene products in normal development or under pathological conditions to a monitoring of changes in and between nerve cells during the transmission of nerve impulses.

Ecology

The harmony that Linnaeus found in nature, which redounded to the glory and wisdom of a Judaeo-Christian god, was the 18th-century counterpart of the balanced interaction now studied by ecologists. Linnaeus recognized that plants are adapted to the regions in which they grow, that insects play a role in flower pollination, and that certain birds prey on insects and are in turn eaten by other birds. This realization implies, in contemporary terms, the flow of matter and energy in a definable direction through any natural assemblage of plants, animals, and microorganisms. Such an assemblage, termed an ecosystem, starts with the plants, which are designated as producers because they maintain and reproduce themselves at the expense of energy from sunlight and inorganic materials taken from the nonliving environment around them (earth, air, and water). Animals are called consumers because they ingest plant material or other animals that feed on plants, using the energy stored in this food to sustain themselves. Lastly, the organisms known as decomposers, mostly fungi and bacteria, break down plant and animal material and return it to the environment in a form that can be used again by plants in a constantly renewed cycle.

The term ecology, first formulated by Haeckel in the latter part of the 19th century as “oecology” (from the Greek word for house, oikos), referred to the dwelling place of organisms in nature. In the 1890s various European and U.S. scientists laid the foundations for modern work through studies of natural ecosystems and the populations of organisms contained within them.

Animal ecology, the study of consumers and their interactions with the environment, is very complex; attempts to study it usually focus on one particular aspect. Some studies, for example, involve the challenge of the environment to individuals with special adaptations (e.g., water conservation in desert animals); others may involve the role of one species in its ecosystem or the ecosystem itself. Food-chain sequences have been determined for various ecosystems, and the efficiency of the transfer of energy and matter within them has been calculated so that their capacity is known; that is, productivity in terms of numbers of organisms or weight of living matter at a specific level in the food chain can be accurately determined.
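
As a rough worked example (the figure of about 10 percent energy transfer per step is a common ecological approximation, not a value given above): if the plants of a pond fix 10,000 kilocalories, the herbivores feeding on them capture roughly 10,000 × 0.10 = 1,000 kilocalories, and the carnivores at the next level roughly 100; this steep loss is one reason food chains rarely run to more than four or five levels.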

In spite of advances in understanding animal ecology, this subject area of zoology does not yet have the major unifying theoretical principles found in genetics (gene theory) or evolution (natural selection).

Ethology

The study of animal behaviour (ethology) is largely a 20th-century phenomenon and is exclusively a zoological discipline. Only animals have nervous systems, with their implications for perception, coordination, orientation, learning, and memory. Not until the end of the 19th century did animal behaviour become free from anthropocentric interests and assume an importance in its own right. The British behaviourist C. Lloyd Morgan was probably most influential with his emphasis on parsimonious explanations—i.e., that the explanation “which stands lower in the psychological scale” must be invoked first. This principle is exemplified in the American zoologist Herbert Spencer Jennings’ pioneering 1906 work The Behavior of Lower Organisms.

The study of animal behaviour now includes many diverse topics, ranging from swimming patterns of protozoans to socialization and communication among the great apes. Many disparate hypotheses have been proposed in an attempt to explain the variety of behavioural patterns found in animals. They focus on the mechanisms that stimulate courtship in reproductive behaviour of such diverse groups as spiders, crabs, and domestic fowl; and on whole life histories, starting from the special attachment of newly hatched ducklings and newborn goats to their actual mothers or to surrogate (substitute) mothers. The latter phenomenon, called imprinting, has been intensively studied by the Austrian ethologist Konrad Lorenz. Physiologically oriented behaviour now receives much attention; studies range from work on conditioned reflexes to the orientation of crustaceans and the location and communication of food among bees; such diversity of material is one measure of the somewhat diffuse but exciting current state of these studies.

General trends

Zoology has become animal biology—that is, the life sciences display a new unity, one that is founded on the common basis of all life, on the gene pool–species organization of organisms, and on the obligatory interacting of the components of ecosystems. Even as regards the specialized features of animals—involving physiology, development, or behaviour—the current emphasis is on elucidating the broad biological principles that identify animals as one aspect of nature. Zoology has thus given up its exclusive emphasis on animals—an emphasis maintained from Aristotle’s time well into the 19th century—in favour of a broader view of life. The successes in applying physical and chemical ideas and techniques to life processes have not only unified the life sciences but have also created bridges to other sciences in a way only dimly foreseen by earlier workers. The practical and theoretical consequences of this trend have just begun to be realized.

Methods in zoology

Because the study of animals may be concentrated on widely different topics, such as ecosystems and their constituent populations, organisms, cells, and chemical reactions, specific techniques are needed for each kind of investigation. The emphasis on the molecular basis of genetics, development, physiology, behaviour, and ecology has placed increasing importance on those techniques involving cells and their many components. Microscopy, therefore, is a necessary technique in zoology, as are certain physicochemical methods for isolating and characterizing molecules. Computer technology also has a special role in the analysis of animal life. These newer techniques are used in addition to the many classical ones—measurement and experimentation at the tissue, organ, organ system, and organismic levels.

Microscopy

In addition to continuous improvements in the techniques of staining cells, so that their components can be seen clearly, the light used in microscopy can now be manipulated to make visible certain structures in living cells that are otherwise undetectable. The ability to observe living cells is an advantage of light microscopes over electron microscopes; the latter require the cells to be in an environment that kills them. The particular advantage of the electron microscope, however, is its far greater resolving power. Theoretically, it can resolve single atoms; in biology, however, magnifications of lesser magnitude are most useful in determining the nature of structures lying between whole cells and their constituent molecules.
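
The limit for light can be stated as a simple standard formula (not spelled out above): the smallest separation a microscope can resolve is d ≈ λ ÷ (2 × NA), where λ is the wavelength of the illumination and NA is the numerical aperture of the objective lens. For green light, λ ≈ 550 nm and NA ≈ 1.4 give d ≈ 200 nm, about half a wavelength; the electrons used in electron microscopy have effective wavelengths thousands of times shorter, which is the source of their far finer resolution.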

Separation and purification techniques

The characterization of components of cellular systems is necessary for biochemical studies. The specific molecular composition of cellular organelles, for example, affects their shape and density (mass per unit volume); as a result, cellular components settle at different rates (and thus can be separated) when they are spun in a centrifuge.

Other methods of purification rely on other physical properties. Molecules vary in their affinity for the positive or negative pole of an electrical field. Migration to or away from these poles, therefore, occurs at different rates for different molecules and allows their separation; the process is called electrophoresis. The separation of molecules by liquid solvents exploits the fact that the molecules differ in their solubility, and hence they migrate to various degrees as a solvent flows past them. This process, known as chromatography because the earliest separations were of coloured substances whose positions could be identified by eye, yields samples of extraordinarily high purity.

Radioactive tracers

Radioactive compounds are especially useful in biochemical studies involving metabolic pathways of synthesis and degradation. Radioactive compounds are incorporated into cells in the same way as their nonradioactive counterparts. These compounds provide information on the sites of specific metabolic activities within cells and insights into the fates of these compounds in both organisms and the ecosystem.

Computers

Computers can complete calculations as complex and diverse as statistical analyses and determinations of enzymatically controlled reaction rates. Computers with access to extensive data files can select information associated with a specific problem and display it to aid the researcher in formulating possible solutions. They help perform routine examinations such as scanning chromosome preparations in order to identify abnormalities in number or shape. Test organisms can be electronically monitored with computers, so that adjustments can be made during experiments; this procedure improves the quality of the data and allows experimental situations to be fully exploited. Computer simulation is important in analyzing complex problems; as many as 100 variables, for example, are involved in the management of salmon fisheries. Simulation makes possible the development of models that approach the complexities of conditions in nature, a procedure of great value in studying wildlife management and related ecological problems.
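
A minimal sketch in Python of the kind of simulation described (all numbers invented for the illustration; a real fisheries model would track many more variables, as noted above): a fish stock growing logistically while a fixed quota is harvested each year.

# Toy fisheries simulation: logistic growth minus a fixed annual harvest.
# All parameter values are invented for illustration only.
r, K = 0.4, 100_000   # intrinsic growth rate and carrying capacity
harvest = 8_000       # fish removed each year
stock = 50_000        # starting population

for year in range(1, 21):
    stock += r * stock * (1 - stock / K) - harvest  # grow, then harvest
    stock = max(stock, 0.0)                         # a stock cannot go negative
    print(year, int(stock))                         # the stock settles near 72,000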

Applied zoology

Animal-related industries produce food (meats and dairy products), hides, furs, wool, organic fertilizers, and miscellaneous chemical byproducts. There has been a dramatic increase in the productivity of animal husbandry since the 1870s, largely as a consequence of selective breeding and improved animal nutrition. The purpose of selective breeding is to develop livestock whose desirable traits have strong heritable components and can therefore be propagated. Heritable components are distinguished from environmental factors by determining the coefficient of heritability, which is defined as the ratio of variance in a gene-controlled character to total variance.
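
A worked example of the coefficient (conventionally written h², with numbers invented for the illustration): if, for some trait in a herd, the variance attributable to genetic differences is 6 units and the variance attributable to environment is 4 units, the total variance is 10 and the heritability is h² = 6 ÷ 10 = 0.6; a trait with h² near zero would respond mainly to better husbandry, not to breeding.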

Another aspect of food production is the control of pests. The serious side effects of some chemical pesticides make extremely important the development of effective and safe control mechanisms. Animal food resources include commercial fishing. The development of shellfish resources and fisheries management (e.g., growth of fish in rice paddies in Asia) are important aspects of this industry.


It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2117 2024-04-11 22:59:26

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,428

Re: Miscellany

2119) Film Director

A film director is a person who controls a film's artistic and dramatic aspects and visualizes the screenplay (or script) while guiding the film crew and actors in the fulfillment of that vision. The director has a key role in choosing the cast members, production design and all the creative aspects of filmmaking.

The film director gives direction to the cast and crew and creates an overall vision through which a film is eventually realized. Directors need to be able to mediate differences in creative visions and stay within the budget.

There are many pathways to becoming a film director. Some film directors started as screenwriters, cinematographers, producers, film editors or actors. Other film directors have attended film school. Directors use different approaches. Some outline a general plotline and let the actors improvise dialogue, while others control every aspect and demand that the actors and crew follow instructions precisely. Some directors also write their own screenplays or collaborate on screenplays with long-standing writing partners. Other directors edit or appear in their films or compose the music score for their films.

Responsibility

A film director's task is to envisage a way to translate a screenplay into a fully formed film, and then to realize this vision. To do this, they oversee the artistic and technical elements of film production. This entails organizing the film crew in such a way as to achieve their vision of the film and communicating with the actors. This requires skills of group leadership, as well as the ability to maintain a singular focus even in the stressful, fast-paced environment of a film set. Moreover, it is necessary to have an artistic eye to frame shots and to give precise feedback to cast and crew; thus, excellent communication skills are a must.

Because the film director depends on the successful cooperation of many different creative individuals with possibly strongly contradicting artistic ideals and visions, they also need to possess conflict-resolution skills to mediate whenever necessary. Thus the director ensures that all individuals involved in the film production are working towards an identical vision for the completed film. The set of varying challenges they have to tackle has been described as "a multi-dimensional jigsaw puzzle with egos and weather thrown in for good measure". It adds to the pressure that the success of a film can influence when and how they will work again, if at all.

Generally, the sole superiors of the director are the producers and the studio that is financing the film, although sometimes the director can also be a producer of the same film. The role of a director differs from that of producers in that producers typically manage the logistics and business operations of the production, whereas the director is tasked with making creative decisions. The director must work within the restrictions of the film's budget and the demands of the producer and studio (such as the need to get a particular age rating).

Directors also play an important role in post-production. While the film is still in production, the director sends "dailies" to the film editor and explains their overall vision for the film, allowing the editor to assemble an editor's cut. In post-production, the director works with the editor to edit the material into the director's cut. Well-established directors have the "final cut privilege", meaning that they have the final say on which edit of the film is released. For other directors, the studio can order further edits without the director's permission.

The director is one of the few positions that requires intimate involvement during every stage of film production. Thus, the position of film director is widely considered to be a highly stressful and demanding one. It has been said that "20-hour days are not unusual". Some directors also take on additional roles, such as producing, writing or editing.

Under European Union law, the film director is considered the "author" or one of the authors of a film, largely as a result of the influence of auteur theory. Auteur theory is a film criticism concept that holds that a film director's film reflects the director's personal creative vision, as if they were the primary "auteur" (the French word for "author"). In spite of—and sometimes even because of—the production of the film as part of an industrial process, the auteur's creative voice is distinct enough to shine through studio interference and the collective process.

Career pathways

Some film directors started as screenwriters, film editors, producers, actors, or film critics, as well as directing for similar media like television and commercials. Several American cinematographers have become directors, including Barry Sonnenfeld, originally the Coen brothers' Director of Photography, and Wally Pfister, cinematographer on Christopher Nolan's three Batman films, who made his directorial debut with Transcendence (2014). Despite the misnomer, assistant director has become a completely separate career path and is not typically a position for aspiring directors, but there are exceptions in some countries such as India where assistant directors are indeed directors-in-training.

Education

Many film directors have attended a film school to get a bachelor's degree studying film or cinema. Film students generally study the basic skills used in making a film. This includes, for example, preparation, shot lists and storyboards, blocking, communicating with professional actors, communicating with the crew, and reading scripts. Some film schools are equipped with sound stages and post-production facilities. Besides basic technical and logistical skills, students also receive education on the nature of professional relationships that occur during film production. A full degree course can take up to five years of study. Future directors usually complete short films during their enrollment. The National Film School of Denmark has its students' final projects presented on national TV. Some film schools retain the rights for their students' works. Many directors successfully prepared for making feature films by working in television; the German Film and Television Academy Berlin accordingly cooperates with the Berlin/Brandenburg TV station RBB (Berlin-Brandenburg Broadcasting) and ARTE.

In recent decades American directors have primarily been coming out of USC, UCLA, AFI, Columbia University, and NYU, each of which is known for cultivating a certain style of filmmaking. Notable film schools outside of the United States include Beijing Film Academy, Centro de Capacitación Cinematográfica in Mexico City, Dongseo University in South Korea, FAMU in Prague, Film and Television Institute of India, HFF Munich, La Fémis in Paris, Tel Aviv University, and Vancouver Film School.

Compensation

Film directors usually are self-employed and hired per project based on recommendations and industry reputation. Compensation might be arranged as a flat fee for the project, as a weekly salary, or as a daily rate.

A handful of top Hollywood directors, such as James Cameron and Steven Spielberg, made from $133.3 million to $257.95 million in 2011, but the average United States film director or producer made $89,840 in 2018. A new Hollywood director typically gets paid around $400,000 for directing their first studio film.

The average annual salary is £50,440 in England, $62,408 in Canada, and from $75,230 to $97,119 in Western Australia. In France the average is €4,000 per month, with pay arranged per project. Luc Besson was the highest paid French director in 2017, making €4.44 million for Valerian and the City of a Thousand Planets. That same year, the top ten French directors' salaries in total represented 42% of the total directors' salaries in France.

Film directors in Japan average a yearly salary from ¥4 million to ¥10 million, and the Directors Guild of Japan requires a minimum payment of ¥3.5 million. Korean directors make 300 million to 500 million won for a film, and beginning directors start out making around 50 million won. A Korean director who breaks into the Chinese market might make 1 billion won for a single film.

Gender disparities

According to a 2018 report from UNESCO, the film industry throughout the world has a disproportionately higher number of male directors than female directors; the report cites as an example that only 20% of films in Europe are directed by women. Some 44% of graduates from a sample of European film schools are women, yet women are only 24% of working film directors in Europe. However, only a fraction of film school graduates aspire to direct, with the majority entering the industry in other roles. In Hollywood, women make up only 12.6 percent of film directors, as reported by a UCLA study of the 200 top theatrical films of 2017, but that number is a significant increase from 6.9% in 2016. As of 2014, there were only 20 women out of the 550 total members of the Directors Guild of Japan. Women are also greatly underrepresented among Indian film directors, even compared with other countries, though there has been a recent trend of more attention to women directors in India, brought on partly by Amazon and Netflix moving into the industry. Of the movies produced in Nollywood, women direct only 2%.

Awards

There are many different awards for film directing, run by various academies, critics associations, film festivals, and guilds. The Academy Award for Best Director and Cannes Film Festival Award for Best Director are considered among the most prestigious awards for directing, and there is even an award for worst directing given out during the Golden Raspberry Awards.


It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2118 2024-04-13 00:02:03

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,428

Re: Miscellany

2120) Physics

Gist

Physics is a science that deals with the structure of matter and the interactions between the fundamental constituents of the observable universe. In the broadest sense, physics (from the Greek physikos) is concerned with all aspects of nature on both the macroscopic and submicroscopic levels.

Summary

Physics is the natural science of matter, involving the study of its fundamental constituents, its motion and behavior through space and time, and the related entities of energy and force. Physics is one of the most fundamental scientific disciplines, with its main goal being to understand how the universe behaves. A scientist who specializes in the field of physics is called a physicist.

Physics is one of the oldest academic disciplines and, through its inclusion of astronomy, perhaps the oldest. Over much of the past two millennia, physics, chemistry, biology, and certain branches of mathematics were a part of natural philosophy, but during the Scientific Revolution in the 17th century these natural sciences emerged as unique research endeavors in their own right. Physics intersects with many interdisciplinary areas of research, such as biophysics and quantum chemistry, and the boundaries of physics are not rigidly defined. New ideas in physics often explain the fundamental mechanisms studied by other sciences and suggest new avenues of research in these and other academic disciplines such as mathematics and philosophy.

Advances in physics often enable new technologies. For example, advances in the understanding of electromagnetism, solid-state physics, and nuclear physics led directly to the development of new products that have dramatically transformed modern-day society, such as television, computers, domestic appliances, and nuclear weapons; advances in thermodynamics led to the development of industrialization; and advances in mechanics inspired the development of calculus.

Details

Physics is the science that deals with the structure of matter and the interactions between the fundamental constituents of the observable universe. In the broadest sense, physics (from the Greek physikos) is concerned with all aspects of nature on both the macroscopic and submicroscopic levels. Its scope of study encompasses not only the behaviour of objects under the action of given forces but also the nature and origin of gravitational, electromagnetic, and nuclear force fields. Its ultimate objective is the formulation of a few comprehensive principles that bring together and explain all such disparate phenomena.

Physics is the basic physical science. Until rather recent times physics and natural philosophy were used interchangeably for the science whose aim is the discovery and formulation of the fundamental laws of nature. As the modern sciences developed and became increasingly specialized, physics came to denote that part of physical science not included in astronomy, chemistry, geology, and engineering. Physics plays an important role in all the natural sciences, however, and all such fields have branches in which physical laws and measurements receive special emphasis, bearing such names as astrophysics, geophysics, biophysics, and even psychophysics. Physics can, at base, be defined as the science of matter, motion, and energy. Its laws are typically expressed with economy and precision in the language of mathematics.

Both experiment, the observation of phenomena under conditions that are controlled as precisely as possible, and theory, the formulation of a unified conceptual framework, play essential and complementary roles in the advancement of physics. Physical experiments result in measurements, which are compared with the outcome predicted by theory. A theory that reliably predicts the results of experiments to which it is applicable is said to embody a law of physics. However, a law is always subject to modification, replacement, or restriction to a more limited domain, if a later experiment makes it necessary.

The ultimate aim of physics is to find a unified set of laws governing matter, motion, and energy at small (microscopic) subatomic distances, at the human (macroscopic) scale of everyday life, and out to the largest distances (e.g., those on the extragalactic scale). This ambitious goal has been realized to a notable extent. Although a completely unified theory of physical phenomena has not yet been achieved (and possibly never will be), a remarkably small set of fundamental physical laws appears able to account for all known phenomena. The body of physics developed up to about the turn of the 20th century, known as classical physics, can largely account for the motions of macroscopic objects that move slowly with respect to the speed of light and for such phenomena as heat, sound, electricity, magnetism, and light. The modern developments of relativity and quantum mechanics modify these laws insofar as they apply to higher speeds, very massive objects, and to the tiny elementary constituents of matter, such as electrons, protons, and neutrons.

The scope of physics

The traditionally organized branches or fields of classical and modern physics are delineated below.

Mechanics

Mechanics is generally taken to mean the study of the motion of objects (or their lack of motion) under the action of given forces. Classical mechanics is sometimes considered a branch of applied mathematics. It consists of kinematics, the description of motion, and dynamics, the study of the action of forces in producing either motion or static equilibrium (the latter constituting the science of statics). The 20th-century subjects of quantum mechanics, crucial to treating the structure of matter, subatomic particles, superfluidity, superconductivity, neutron stars, and other major phenomena, and relativistic mechanics, important when speeds approach that of light, are forms of mechanics that will be discussed later in this section.

In classical mechanics the laws are initially formulated for point particles in which the dimensions, shapes, and other intrinsic properties of bodies are ignored. Thus in the first approximation even objects as large as Earth and the Sun are treated as pointlike—e.g., in calculating planetary orbital motion. In rigid-body dynamics, the extension of bodies and their mass distributions are considered as well, but they are imagined to be incapable of deformation. The mechanics of deformable solids is elasticity; hydrostatics and hydrodynamics treat, respectively, fluids at rest and in motion.

The three laws of motion set forth by Isaac Newton form the foundation of classical mechanics, together with the recognition that forces are directed quantities (vectors) and combine accordingly. The first law, also called the law of inertia, states that, unless acted upon by an external force, an object at rest remains at rest, or if in motion, it continues to move in a straight line with constant speed. Uniform motion therefore does not require a cause. Accordingly, mechanics concentrates not on motion as such but on the change in the state of motion of an object that results from the net force acting upon it. Newton’s second law equates the net force on an object to the rate of change of its momentum, the latter being the product of the mass of a body and its velocity. Newton’s third law, that of action and reaction, states that when two particles interact, the forces each exerts on the other are equal in magnitude and opposite in direction. Taken together, these mechanical laws in principle permit the determination of the future motions of a set of particles, providing their state of motion is known at some instant, as well as the forces that act between them and upon them from the outside. From this deterministic character of the laws of classical mechanics, profound (and probably incorrect) philosophical conclusions have been drawn in the past and even applied to human history.
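
The deterministic character of the second law is easy to demonstrate numerically. The following minimal Python sketch (all values invented for the example) steps a single particle forward in time under constant gravity; the initial state together with the force law fixes the entire subsequent motion:

# Newton's second law stepped forward in time for a thrown ball.
# Constant downward gravity; invented initial values.
m = 2.0            # mass in kilograms
g = -9.8           # gravitational acceleration, metres per second squared
x, v = 0.0, 20.0   # initial height (m) and upward speed (m/s)
dt = 0.01          # time step in seconds
t = 0.0

while x >= 0.0:
    F = m * g          # net force on the particle
    v += (F / m) * dt  # second law: acceleration equals force over mass
    x += v * dt        # update position from the new velocity
    t += dt

print(round(t, 2))  # time of flight, close to 2 × 20 ÷ 9.8 ≈ 4.08 s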

Lying at the most basic level of physics, the laws of mechanics are characterized by certain symmetry properties, as exemplified in the aforementioned symmetry between action and reaction forces. Other symmetries, such as the invariance (i.e., unchanging form) of the laws under reflections and rotations carried out in space, reversal of time, or transformation to a different part of space or to a different epoch of time, are present both in classical mechanics and in relativistic mechanics, and with certain restrictions, also in quantum mechanics. The symmetry properties of the theory can be shown to have as mathematical consequences basic principles known as conservation laws, which assert the constancy in time of the values of certain physical quantities under prescribed conditions. The conserved quantities are the most important ones in physics; included among them are mass and energy (in relativity theory, mass and energy are equivalent and are conserved together), momentum, angular momentum, and electric charge.

The study of gravitation

Laser Interferometer Space Antenna (LISA), a Beyond Einstein Great Observatory, is scheduled for launch in 2034. Funded by the European Space Agency, LISA will consist of three identical spacecraft that will trail the Earth in its orbit by about 50 million km (30 million miles). The spacecraft will contain thrusters for maneuvering them into an equilateral triangle, with sides of approximately 5 million km (3 million miles), such that the triangle's centre will be located along the Earth's orbit. By measuring the transmission of laser signals between the spacecraft (essentially a giant Michelson interferometer in space), scientists hope to detect and accurately measure gravitational waves.

This field of inquiry has in the past been placed within classical mechanics for historical reasons, because both fields were brought to a high state of perfection by Newton, and also because of the universal character of gravitation. Newton’s gravitational law states that every material particle in the universe attracts every other one with a force that acts along the line joining them and whose strength is directly proportional to the product of their masses and inversely proportional to the square of their separation. Newton’s detailed accounting for the orbits of the planets and the Moon, as well as for such subtle gravitational effects as the tides and the precession of the equinoxes (a slow cyclical change in direction of Earth’s axis of rotation), through this fundamental force was the first triumph of classical mechanics. No further principles are required to understand the principal aspects of rocketry and space flight (although, of course, a formidable technology is needed to carry them out).
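
In symbols the law reads F = Gm1m2/r². As a quick illustration (the rounded Earth–Moon figures below are standard textbook values, not taken from this article), the force the Earth and the Moon exert on each other can be computed directly:

G = 6.674e-11        # gravitational constant, N·m^2/kg^2
m_earth = 5.972e24   # mass of Earth, kg
m_moon = 7.35e22     # mass of the Moon, kg
r = 3.844e8          # mean Earth-Moon separation, m

force = G * m_earth * m_moon / r**2
print(f"{force:.2e} N")  # roughly 2e20 newtons, pulling each body toward the other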

The four-dimensional space-time continuum itself is distorted in the vicinity of any mass, with the amount of distortion depending on the mass and the distance from the mass. Thus, relativity accounts for Newton's inverse square law of gravity through geometry and thereby does away with the need for any mysterious “action at a distance.”

The modern theory of gravitation was formulated by Albert Einstein and is called the general theory of relativity. From the long-known equality of the quantity “mass” in Newton’s second law of motion and that in his gravitational law, Einstein was struck by the fact that acceleration can locally annul a gravitational force (as occurs in the so-called weightlessness of astronauts in an Earth-orbiting spacecraft) and was led thereby to the concept of curved space-time. Completed in 1915, the theory was valued for many years mainly for its mathematical beauty and for correctly predicting a small number of phenomena, such as the gravitational bending of light around a massive object. Only in recent years, however, has it become a vital subject for both theoretical and experimental research. (Relativistic mechanics refers to Einstein’s special theory of relativity, which is not a theory of gravitation.)

The study of heat, thermodynamics, and statistical mechanics

Heat is a form of internal energy associated with the random motion of the molecular constituents of matter or with radiation. Temperature is an average of a part of the internal energy present in a body (it does not include the energy of molecular binding or of molecular rotation). The lowest possible energy state of a substance is defined as the absolute zero (−273.15 °C, or −459.67 °F) of temperature. An isolated body eventually reaches uniform temperature, a state known as thermal equilibrium, as do two or more bodies placed in contact. The formal study of states of matter at (or near) thermal equilibrium is called thermodynamics; it is capable of analyzing a large variety of thermal systems without considering their detailed microstructures.

First law

The first law of thermodynamics is the energy conservation principle of mechanics (i.e., for all changes in an isolated system, the energy remains constant) generalized to include heat.

Second law

The second law of thermodynamics asserts that heat will not flow from a place of lower temperature to one where it is higher without the intervention of an external device (e.g., a refrigerator). The concept of entropy involves the measurement of the state of disorder of the particles making up a system. For example, if tossing a coin many times results in a random-appearing sequence of heads and tails, the result has a higher entropy than if heads and tails tend to appear in clusters. Another formulation of the second law is that the entropy of an isolated system never decreases with time.
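
The coin-tossing example can be made quantitative with Boltzmann's relation S = k ln W, where W is the number of microscopic arrangements (here, head-tail sequences) consistent with a given overall description. A minimal sketch, assuming a run of 100 tosses:

import math

k = 1.380649e-23  # Boltzmann constant, J/K

def entropy(n_tosses, n_heads):
    W = math.comb(n_tosses, n_heads)  # sequences with exactly n_heads heads
    return k * math.log(W)            # Boltzmann entropy, S = k ln W

print(entropy(100, 50))  # disordered 50/50 macrostate: largest W, highest S
print(entropy(100, 0))   # perfectly ordered all-tails state: W = 1, so S = 0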

Third law

The third law of thermodynamics states that the entropy at the absolute zero of temperature is zero, corresponding to the most ordered possible state.

Statistical mechanics

The science of statistical mechanics derives bulk properties of systems from the mechanical properties of their molecular constituents, assuming molecular chaos and applying the laws of probability. Regarding each possible configuration of the particles as equally likely, the chaotic state (the state of maximum entropy) is so enormously more likely than ordered states that an isolated system will evolve to it, as stated in the second law of thermodynamics. Such reasoning, placed in mathematically precise form, is typical of statistical mechanics, which is capable of deriving the laws of thermodynamics but goes beyond them in describing fluctuations (i.e., temporary departures) from the thermodynamic laws that describe only average behaviour. An example of a fluctuation phenomenon is the random motion of small particles suspended in a fluid, known as Brownian motion.
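
A random walk is the simplest model of such a fluctuation. In the sketch below (the step and trial counts are arbitrary choices), a particle receives a random kick left or right at each step; averaged over many trials, its mean squared displacement grows in proportion to the number of steps, the hallmark of Brownian motion:

import random

def mean_squared_displacement(steps=1000, trials=2000):
    total = 0.0
    for _ in range(trials):
        x = 0
        for _ in range(steps):
            x += random.choice((-1, 1))  # one random molecular kick
        total += x * x                   # squared net displacement
    return total / trials

print(mean_squared_displacement())  # close to 1000, i.e. proportional to steps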

Quantum statistical mechanics plays a major role in many other modern fields of science, as, for example, in plasma physics (the study of fully ionized gases), in solid-state physics, and in the study of stellar structure. From a microscopic point of view the laws of thermodynamics imply that, whereas the total quantity of energy of any isolated system is constant, what might be called the quality of this energy is degraded as the system moves inexorably, through the operation of the laws of chance, to states of increasing disorder until it finally reaches the state of maximum disorder (maximum entropy), in which all parts of the system are at the same temperature, and none of the state’s energy may be usefully employed. When applied to the universe as a whole, considered as an isolated system, this ultimate chaotic condition has been called the “heat death.”

The study of electricity and magnetism

Although conceived of as distinct phenomena until the 19th century, electricity and magnetism are now known to be components of the unified field of electromagnetism. Particles with electric charge interact by an electric force, while charged particles in motion produce and respond to magnetic forces as well. Many subatomic particles, including the electrically charged electron and proton and the electrically neutral neutron, behave like elementary magnets. On the other hand, in spite of systematic searches, no magnetic monopoles, which would be the magnetic analogues of electric charges, have ever been found.

The field concept plays a central role in the classical formulation of electromagnetism, as well as in many other areas of classical and contemporary physics. Einstein’s gravitational field, for example, replaces Newton’s concept of gravitational action at a distance. The field describing the electric force between a pair of charged particles works in the following manner: each particle creates an electric field in the space surrounding it, and so also at the position occupied by the other particle; each particle responds to the force exerted upon it by the electric field at its own position.

Classical electromagnetism is summarized by the laws of action of electric and magnetic fields upon electric charges and upon magnets and by four remarkable equations formulated in the latter part of the 19th century by the Scottish physicist James Clerk Maxwell. The latter equations describe the manner in which electric charges and currents produce electric and magnetic fields, as well as the manner in which changing magnetic fields produce electric fields, and vice versa. From these relations Maxwell inferred the existence of electromagnetic waves—associated electric and magnetic fields in space, detached from the charges that created them, traveling at the speed of light, and endowed with such “mechanical” properties as energy, momentum, and angular momentum. The light to which the human eye is sensitive is but one small segment of an electromagnetic spectrum that extends from long-wavelength radio waves to short-wavelength gamma rays and includes X-rays, microwaves, and infrared (or heat) radiation.
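
For reference, one standard modern way of writing Maxwell's four equations (in vacuum with sources, SI units; this compact vector notation is a later convention, not Maxwell's own) is:

\nabla \cdot \mathbf{E} = \rho / \varepsilon_0
\nabla \cdot \mathbf{B} = 0
\nabla \times \mathbf{E} = -\,\partial \mathbf{B} / \partial t
\nabla \times \mathbf{B} = \mu_0 \mathbf{J} + \mu_0 \varepsilon_0 \,\partial \mathbf{E} / \partial t

The last two equations encode the mutual generation of fields described above, and combining them in empty space yields wave solutions travelling at speed c = 1/√(μ₀ε₀), which is how Maxwell identified light as an electromagnetic wave.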

Optics

Because light consists of electromagnetic waves, the propagation of light can be regarded as merely a branch of electromagnetism. However, it is usually dealt with as a separate subject called optics: the part that deals with the tracing of light rays is known as geometrical optics, while the part that treats the distinctive wave phenomena of light is called physical optics. More recently, there has developed a new and vital branch, quantum optics, which is concerned with the theory and application of the laser, a device that produces an intense coherent beam of unidirectional radiation useful for many applications.

The formation of images by lenses, microscopes, telescopes, and other optical devices is described by ray optics, which assumes that the passage of light can be represented by straight lines, that is, rays. The subtler effects attributable to the wave property of visible light, however, require the explanations of physical optics. One basic wave effect is interference, whereby two waves present in a region of space combine at certain points to yield an enhanced resultant effect (e.g., the crests of the component waves adding together); at the other extreme, the two waves can annul each other, the crests of one wave filling in the troughs of the other. Another wave effect is diffraction, which causes light to spread into regions of the geometric shadow and causes the image produced by any optical device to be fuzzy to a degree dependent on the wavelength of the light. Optical instruments such as the interferometer and the diffraction grating can be used for measuring the wavelength of light precisely (about 0.5 micrometre for visible light) and for measuring distances to a small fraction of that length.

Atomic and chemical physics

Between 1909 and 1910 the American physicist Robert Millikan conducted a series of oil-drop experiments. By comparing applied electric force with changes in the motion of the oil drops, he was able to determine the electric charge on each drop. He found that all of the drops had charges that were simple multiples of a single number, the fundamental charge of the electron.

One of the great achievements of the 20th century was the establishment of the validity of the atomic hypothesis, first proposed in ancient times, that matter is made up of relatively few kinds of small, identical parts—namely, atoms. However, unlike the indivisible atom of Democritus and other ancients, the atom, as it is conceived today, can be separated into constituent electrons and nucleus. Atoms combine to form molecules, whose structure is studied by chemistry and physical chemistry; they also form other types of compounds, such as crystals, studied in the field of condensed-matter physics. Such disciplines study the most important attributes of matter (not excluding biologic matter) that are encountered in normal experience—namely, those that depend almost entirely on the outer parts of the electronic structure of atoms. Only the mass of the atomic nucleus and its charge, which is equal to the total charge of the electrons in the neutral atom, affect the chemical and physical properties of matter.

Although there are some analogies between the solar system and the atom due to the fact that the strengths of gravitational and electrostatic forces both fall off as the inverse square of the distance, the classical forms of electromagnetism and mechanics fail when applied to tiny, rapidly moving atomic constituents. Atomic structure is comprehensible only on the basis of quantum mechanics, and its finer details require as well the use of quantum electrodynamics (QED).

Atomic properties are inferred mostly by the use of indirect experiments. Of greatest importance has been spectroscopy, which is concerned with the measurement and interpretation of the electromagnetic radiations either emitted or absorbed by materials. These radiations have a distinctive character, which quantum mechanics relates quantitatively to the structures that produce and absorb them. It is truly remarkable that these structures are in principle, and often in practice, amenable to precise calculation in terms of a few basic physical constants: the mass and charge of the electron, the speed of light, and Planck’s constant (approximately 6.62606957 × 10^−34 joule∙second), the fundamental constant of the quantum theory named for the German physicist Max Planck.
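
The simplest such calculation uses Planck's relation E = hν: the energy of one quantum of radiation is Planck's constant times the radiation's frequency. For green light of wavelength 550 nm (an illustrative value):

h = 6.62607015e-34  # Planck's constant, J·s
c = 2.99792458e8    # speed of light, m/s

wavelength = 550e-9         # green light, m
frequency = c / wavelength  # about 5.5e14 Hz
energy = h * frequency      # E = h * nu
print(f"{energy:.2e} J")    # about 3.6e-19 J per photon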

Condensed-matter physics

The first transistor, invented by American physicists John Bardeen, Walter H. Brattain, and William B. Shockley.

This field, which treats the thermal, elastic, electrical, magnetic, and optical properties of solid and liquid substances, grew at an explosive rate in the second half of the 20th century and scored numerous important scientific and technical achievements, including the transistor. Among solid materials, the greatest theoretical advances have been in the study of crystalline materials whose simple repetitive geometric arrays of atoms are multiple-particle systems that allow treatment by quantum mechanics. Because the atoms in a solid are coordinated with each other over large distances, the theory must go beyond that appropriate for atoms and molecules. Thus conductors, such as metals, contain some so-called free electrons, or valence electrons, which are responsible for the electrical and most of the thermal conductivity of the material and which belong collectively to the whole solid rather than to individual atoms. Semiconductors and insulators, either crystalline or amorphous, are other materials studied in this field of physics.

Other aspects of condensed matter involve the properties of the ordinary liquid state, of liquid crystals, and, at temperatures near absolute zero, of the so-called quantum liquids. The latter exhibit a property known as superfluidity (completely frictionless flow), which is an example of macroscopic quantum phenomena. Such phenomena are also exemplified by superconductivity (completely resistance-less flow of electricity), a low-temperature property of certain metallic and ceramic materials. Besides their significance to technology, macroscopic liquid and solid quantum states are important in astrophysical theories of stellar structure in, for example, neutron stars.

Nuclear physics

Particle tracks from the collision of an accelerated nucleus of a niobium atom with another niobium nucleus. The single line on the left is the track of the incoming projectile nucleus, and the other tracks are fragments from the collision.

This branch of physics deals with the structure of the atomic nucleus and the radiation from unstable nuclei. The nucleus is about 10,000 times smaller than the atom, and its constituent particles, protons and neutrons, attract one another so strongly by the nuclear forces that nuclear energies are approximately 1,000,000 times larger than typical atomic energies. Quantum theory is needed for understanding nuclear structure.

Like excited atoms, unstable radioactive nuclei (either naturally occurring or artificially produced) can emit electromagnetic radiation. The energetic nuclear photons are called gamma rays. Radioactive nuclei also emit other particles: negative and positive electrons (beta rays), accompanied by neutrinos, and helium nuclei (alpha rays).

A principal research tool of nuclear physics involves the use of beams of particles (e.g., protons or electrons) directed as projectiles against nuclear targets. Recoiling particles and any resultant nuclear fragments are detected, and their directions and energies are analyzed to reveal details of nuclear structure and to learn more about the strong force. A much weaker nuclear force, the so-called weak interaction, is responsible for the emission of beta rays. Nuclear collision experiments use beams of higher-energy particles, including those of unstable particles called mesons produced by primary nuclear collisions in accelerators dubbed meson factories. Exchange of mesons between protons and neutrons is directly responsible for the strong force. (For the mechanism underlying mesons, see below Fundamental forces and fields.)

In radioactivity and in collisions leading to nuclear breakup, the chemical identity of the nuclear target is altered whenever there is a change in the nuclear charge. In fission and fusion nuclear reactions in which unstable nuclei are, respectively, split into smaller nuclei or amalgamated into larger ones, the energy release far exceeds that of any chemical reaction.

Particle physics

One of the most significant branches of contemporary physics is the study of the fundamental subatomic constituents of matter, the elementary particles. This field, also called high-energy physics, emerged in the 1930s out of the developing experimental areas of nuclear and cosmic-ray physics. Initially investigators studied cosmic rays, the very-high-energy extraterrestrial radiations that fall upon Earth and interact in the atmosphere (see below The methodology of physics). However, after World War II, scientists gradually began using high-energy particle accelerators to provide subatomic particles for study. Quantum field theory, a generalization of QED to other types of force fields, is essential for the analysis of high-energy physics. Subatomic particles cannot be visualized as tiny analogues of ordinary material objects such as billiard balls, for they have properties that appear contradictory from the classical viewpoint. That is to say, while they possess charge, spin, mass, magnetism, and other complex characteristics, they are nonetheless regarded as pointlike.

During the latter half of the 20th century, a coherent picture evolved of the underlying strata of matter involving two types of subatomic particles: fermions (baryons and leptons), which have odd half-integral angular momentum (spin 1/2, 3/2) and make up ordinary matter; and bosons (gluons, mesons, and photons), which have integral spins and mediate the fundamental forces of physics. Leptons (e.g., electrons, muons, taus), gluons, and photons are believed to be truly fundamental particles. Baryons (e.g., neutrons, protons) and mesons (e.g., pions, kaons), collectively known as hadrons, are believed to be formed from indivisible elements known as quarks, which have never been isolated.

Quarks come in six types, or “flavours,” and have matching antiparticles, known as antiquarks. Quarks have charges that are either positive two-thirds or negative one-third of the electron’s charge, while antiquarks have the opposite charges. Like quarks, each lepton has an antiparticle with properties that mirror those of its partner (the antiparticle of the negatively charged electron is the positive electron, or positron; that of the neutrino is the antineutrino). In addition to their electric and magnetic properties, quarks participate in both the strong force (which binds them together) and the weak force (which underlies certain forms of radioactivity), while leptons take part in only the weak force.

Baryons, such as neutrons and protons, are formed by combining three quarks—thus baryons have a charge of −1, 0, 1, or 2. Mesons, which are the particles that mediate the strong force inside the atomic nucleus, are composed of one quark and one antiquark; all known mesons have a charge of −1, 0, or 1. Most of the possible quark combinations, or hadrons, have very short lifetimes, and many of them have never been seen, though additional ones have been observed with each new generation of more powerful particle accelerators.
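
These charge ranges follow from simple arithmetic on the two quark charges given above, as the short enumeration below confirms (a sketch only; it ignores which combinations are actually realized in nature):

from fractions import Fraction
from itertools import product

quark = (Fraction(2, 3), Fraction(-1, 3))  # the two possible quark charges
antiquark = tuple(-q for q in quark)       # antiquarks carry opposite charges

baryon_charges = {sum(c) for c in product(quark, repeat=3)}    # three quarks
meson_charges = {q + a for q, a in product(quark, antiquark)}  # quark + antiquark

print(sorted(int(q) for q in baryon_charges))  # [-1, 0, 1, 2]
print(sorted(int(q) for q in meson_charges))   # [-1, 0, 1]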

The quantum fields through which quarks and leptons interact with each other and with themselves consist of particle-like objects called quanta (from which quantum mechanics derives its name). The first known quanta were those of the electromagnetic field; they are also called photons because light consists of them. A modern unified theory of weak and electromagnetic interactions, known as the electroweak theory, proposes that the weak force involves the exchange of particles about 100 times as massive as protons. These massive quanta have been observed—namely, two charged particles, W+ and W−, and a neutral one, Z0.

In the theory of the strong force known as quantum chromodynamics (QCD), eight quanta, called gluons, bind quarks to form baryons and also bind quarks to antiquarks to form mesons, the force itself being dubbed the “colour force.” (This unusual use of the term colour is a somewhat forced analogue of ordinary colour mixing.) Quarks are said to come in three colours—red, blue, and green. (The opposites of these imaginary colours, minus-red, minus-blue, and minus-green, are ascribed to antiquarks.) Only certain colour combinations, namely colour-neutral, or “white” (i.e., equal mixtures of the above colours cancel out one another, resulting in no net colour), are conjectured to exist in nature in an observable form. The gluons and quarks themselves, being coloured, are permanently confined (deeply bound within the particles of which they are a part), while the colour-neutral composites such as protons can be directly observed. One consequence of colour confinement is that the observable particles are either electrically neutral or have charges that are integral multiples of the charge of the electron. A number of specific predictions of QCD have been experimentally tested and found correct.

Quantum mechanics

Although the various branches of physics differ in their experimental methods and theoretical approaches, certain general principles apply to all of them. The forefront of contemporary advances in physics lies in the submicroscopic regime, whether it be in atomic, nuclear, condensed-matter, plasma, or particle physics, or in quantum optics, or even in the study of stellar structure. All are based upon quantum theory (i.e., quantum mechanics and quantum field theory) and relativity, which together form the theoretical foundations of modern physics. Many physical quantities whose classical counterparts vary continuously over a range of possible values are in quantum theory constrained to have discontinuous, or discrete, values. Furthermore, the intrinsically deterministic character of values in classical physics is replaced in quantum theory by intrinsic uncertainty.

According to quantum theory, electromagnetic radiation does not always consist of continuous waves; instead it must be viewed under some circumstances as a collection of particle-like photons, the energy and momentum of each being directly proportional to its frequency (or inversely proportional to its wavelength, the photons still possessing some wavelike characteristics). Conversely, electrons and other objects that appear as particles in classical physics are endowed by quantum theory with wavelike properties as well, such a particle’s quantum wavelength being inversely proportional to its momentum. In both instances, the proportionality constant is the characteristic quantum of action (action being defined as energy × time)—that is to say, Planck’s constant h (or, when frequencies are expressed as angular frequencies, h divided by 2π, written ℏ).
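
As a worked case of the second relation, λ = h/p: the quantum wavelength of an electron moving at about one percent of the speed of light (an illustrative, comfortably non-relativistic speed) comes out comparable to the size of an atom, which is why electrons diffract off crystals:

h = 6.62607015e-34   # Planck's constant, J·s
m_e = 9.1093837e-31  # electron mass, kg

v = 3.0e6                     # about 1% of light speed, m/s (illustrative)
p = m_e * v                   # classical momentum is adequate at this speed
wavelength = h / p            # de Broglie relation, lambda = h/p
print(f"{wavelength:.2e} m")  # about 2.4e-10 m, roughly an atomic diameter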

In principle, all of atomic and molecular physics, including the structure of atoms and their dynamics, the periodic table of elements and their chemical behaviour, as well as the spectroscopic, electrical, and other physical properties of atoms, molecules, and condensed matter, can be accounted for by quantum mechanics. Roughly speaking, the electrons in the atom must fit around the nucleus as some sort of standing wave (as given by the Schrödinger equation) analogous to the waves on a plucked violin or guitar string. As the fit determines the wavelength of the quantum wave, it necessarily determines its energy state. Consequently, atomic systems are restricted to certain discrete, or quantized, energies. When an atom undergoes a discontinuous transition, or quantum jump, its energy changes abruptly by a sharply defined amount, and a photon of that energy is emitted when the energy of the atom decreases, or is absorbed in the opposite case.

Although atomic energies can be sharply defined, the positions of the electrons within the atom cannot be, quantum mechanics giving only the probability for the electrons to have certain locations. This is a consequence of the feature that distinguishes quantum theory from all other approaches to physics, the uncertainty principle of the German physicist Werner Heisenberg. This principle holds that measuring a particle’s position with increasing precision necessarily increases the uncertainty as to the particle’s momentum, and conversely. The ultimate degree of uncertainty is controlled by the magnitude of Planck’s constant, which is so small as to have no apparent effects except in the world of microstructures. In the latter case, however, because both a particle’s position and its velocity or momentum must be known precisely at some instant in order to predict its future history, quantum theory precludes such certain prediction and thus escapes determinism.
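
Quantitatively, the principle sets Δx·Δp ≥ ℏ/2. The sketch below contrasts an electron confined to an atom-sized region with a small dust grain confined to a micrometre (both confinement sizes and the grain's mass are illustrative assumptions), showing why the effect matters only for microstructures:

hbar = 1.054571817e-34  # reduced Planck constant, J·s

def min_velocity_spread(mass_kg, position_spread_m):
    dp = hbar / (2 * position_spread_m)  # smallest allowed momentum spread
    return dp / mass_kg                  # corresponding velocity spread, m/s

print(min_velocity_spread(9.11e-31, 1e-10))  # electron in an atom: ~6e5 m/s
print(min_velocity_spread(1e-12, 1e-6))      # nanogram dust grain: ~5e-17 m/s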

When a beam of X-rays is aimed at a target material, some of the beam is deflected, and the scattered X-rays have a greater wavelength than the original beam. The physicist Arthur Holly Compton concluded that this phenomenon could only be explained if the X-rays were understood to be made up of discrete bundles or particles, now called photons, that lost some of their energy in the collisions with electrons in the target material and then scattered at lower energy.

The complementary wave and particle aspects, or wave–particle duality, of electromagnetic radiation and of material particles furnish another illustration of the uncertainty principle. When an electron exhibits wavelike behaviour, as in the phenomenon of electron diffraction, this excludes its exhibiting particle-like behaviour in the same observation. Similarly, when electromagnetic radiation in the form of photons interacts with matter, as in the Compton effect in which X-ray photons collide with electrons, the result resembles a particle-like collision and the wave nature of electromagnetic radiation is precluded. The principle of complementarity, asserted by the Danish physicist Niels Bohr, who pioneered the theory of atomic structure, states that the physical world presents itself in the form of various complementary pictures, no one of which is by itself complete, all of these pictures being essential for our total understanding. Thus both wave and particle pictures are needed for understanding either the electron or the photon.

Although it deals with probabilities and uncertainties, the quantum theory has been spectacularly successful in explaining otherwise inaccessible atomic phenomena and in thus far meeting every experimental test. Its predictions, especially those of QED, are the most precise and the best checked of any in physics; some of them have been tested and found accurate to better than one part per billion.

Relativistic mechanics

In classical physics, space is conceived as having the absolute character of an empty stage in which events in nature unfold as time flows onward independently; events occurring simultaneously for one observer are presumed to be simultaneous for any other; mass is taken as impossible to create or destroy; and a particle given sufficient energy acquires a velocity that can increase without limit. The special theory of relativity, developed principally by Albert Einstein in 1905 and now so adequately confirmed by experiment as to have the status of physical law, shows that all these, as well as other apparently obvious assumptions, are false.

Specific and unusual relativistic effects flow directly from Einstein’s two basic postulates, which are formulated in terms of so-called inertial reference frames. These are reference systems that move in such a way that in them Isaac Newton’s first law, the law of inertia, is valid. The set of inertial frames consists of all those that move with constant velocity with respect to each other (accelerating frames therefore being excluded). Einstein’s postulates are: (1) All observers, whatever their state of motion relative to a light source, measure the same speed for light; and (2) The laws of physics are the same in all inertial frames.

The first postulate, the constancy of the speed of light, is an experimental fact from which follow the distinctive relativistic phenomena of space contraction (or Lorentz-FitzGerald contraction), time dilation, and the relativity of simultaneity: as measured by an observer assumed to be at rest, an object in motion is contracted along the direction of its motion, and moving clocks run slow; two spatially separated events that are simultaneous for a stationary observer occur sequentially for a moving observer. As a consequence, space intervals in three-dimensional space are related to time intervals, thus forming so-called four-dimensional space-time.

The second postulate is called the principle of relativity. It is equally valid in classical mechanics (but not in classical electrodynamics until Einstein reinterpreted it). This postulate implies, for example, that table tennis played on a train moving with constant velocity is just like table tennis played with the train at rest, the states of rest and motion being physically indistinguishable. In relativity theory, mechanical quantities such as momentum and energy have forms that are different from their classical counterparts but give the same values for speeds that are small compared to the speed of light, the maximum permissible speed in nature (about 300,000 kilometres per second, or 186,000 miles per second). According to relativity, mass and energy are equivalent and interchangeable quantities, the equivalence being expressed by Einstein’s famous mass-energy equation E = mc^2, where m is an object’s mass and c is the speed of light.
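
Two quick computations illustrate these statements (the one-gram mass and the airliner-like speed are arbitrary examples): the rest energy locked in even a small mass is vast, while at everyday speeds the relativistic correction factor differs from 1 by a practically invisible amount:

import math

c = 2.99792458e8  # speed of light, m/s

m = 1e-3                    # one gram, in kilograms
print(f"{m * c**2:.2e} J")  # about 9e13 J of rest energy

v = 300.0                              # roughly an airliner's speed, m/s
gamma = 1 / math.sqrt(1 - (v / c)**2)  # relativistic correction factor
print(gamma - 1)                       # about 5e-13: utterly negligible here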

The general theory of relativity is Einstein’s theory of gravitation, which uses the principle of the equivalence of gravitation and locally accelerating frames of reference. Einstein’s theory has special mathematical beauty; it generalizes the “flat” space-time concept of special relativity to one of curvature. It forms the background of all modern cosmological theories. In contrast to some vulgarized popular notions of it, which confuse it with moral and other forms of relativism, Einstein’s theory does not argue that “all is relative.” On the contrary, it is largely a theory based upon those physical attributes that do not change, or, in the language of the theory, that are invariant.

Conservation laws and symmetry

Since the early period of modern physics, there have been conservation laws, which state that certain physical quantities, such as the total electric charge of an isolated system of bodies, do not change in the course of time. In the 20th century it has been proved mathematically that such laws follow from the symmetry properties of nature, as expressed in the laws of physics. The conservation of mass-energy of an isolated system, for example, follows from the assumption that the laws of physics may depend upon time intervals but not upon the specific time at which the laws are applied. The symmetries and the conservation laws that follow from them are regarded by modern physicists as being even more fundamental than the laws themselves, since they are able to limit the possible forms of laws that may be proposed in the future.

Conservation laws are valid in classical, relativistic, and quantum theory for mass-energy, momentum, angular momentum, and electric charge. (In nonrelativistic physics, mass and energy are separately conserved.) Momentum, a directed quantity equal to the mass of a body multiplied by its velocity or to the total mass of two or more bodies multiplied by the velocity of their centre of mass, is conserved when, and only when, no external force acts. Similarly angular momentum, which is related to spinning motions, is conserved in a system upon which no net turning force, called torque, acts. External forces and torques break the symmetry conditions from which the respective conservation laws follow.

In quantum theory, and especially in the theory of elementary particles, there are additional symmetries and conservation laws, some exact and others only approximately valid, which play no significant role in classical physics. Among these are the conservation of so-called quantum numbers related to left-right reflection symmetry of space (called parity) and to the reversal symmetry of motion (called time reversal). These quantum numbers are conserved in all processes except those involving the weak force.

Other symmetry properties not obviously related to space and time (and referred to as internal symmetries) characterize the different families of elementary particles and, by extension, their composites. Quarks, for example, have a property called baryon number, as do protons, neutrons, nuclei, and unstable quark composites. All of these except the quarks are known as baryons. A failure of baryon-number conservation would exhibit itself, for instance, by a proton decaying into lighter non-baryonic particles. Indeed, intensive search for such proton decay has been conducted, but so far it has been fruitless. Similar symmetries and conservation laws hold for an analogously defined lepton number, and they also appear, as does the law of baryon conservation, to hold absolutely.

Fundamental forces and fields

The four basic forces of nature, in order of increasing strength, are thought to be: (1) the gravitational force between particles with mass; (2) the electromagnetic force between particles with charge or magnetism or both; (3) the colour force, or strong force, between quarks; and (4) the weak force by which, for example, quarks can change their type, so that a neutron decays into a proton, an electron, and an antineutrino. The strong force that binds protons and neutrons into nuclei and is responsible for fission, fusion, and other nuclear reactions is in principle derived from the colour force. Nuclear physics is thus related to QCD as chemistry is to atomic physics.

According to quantum field theory, each of the four fundamental interactions is mediated by the exchange of quanta, called vector gauge bosons, which share certain common characteristics. All have an intrinsic spin of one unit, measured in terms of Planck’s constant ℏ. (Leptons and quarks each have one-half unit of spin.) Gauge theory studies the group of transformations, or Lie group, that leaves the basic physics of a quantum field invariant. Lie groups, which are named for the 19th-century Norwegian mathematician Sophus Lie, possess a special type of symmetry and continuity that made them first useful in the study of differential equations on smooth manifolds (an abstract mathematical space for modeling physical processes). This symmetry was first seen in the equations for electromagnetic potentials, quantities from which electromagnetic fields can be derived. It is possessed in pure form by the eight massless gluons of QCD, but in the electroweak theory—the unified theory of electromagnetic and weak force interactions—gauge symmetry is partially broken, so that only the photon remains massless, with the other gauge bosons (W+, W−, and Z) acquiring large masses. Theoretical physicists continue to seek a further unification of QCD with the electroweak theory and, more ambitiously still, to unify them with a quantum version of gravity in which the force would be transmitted by massless quanta of two units of spin called gravitons.

The methodology of physics

Physics has evolved and continues to evolve without any single strategy. It is essentially an experimental science, and refined measurements can reveal unexpected behaviour. On the other hand, mathematical extrapolation of existing theories into new theoretical areas, critical reexamination of apparently obvious but untested assumptions, argument by symmetry or analogy, aesthetic judgment, pure accident, and hunch—each of these plays a role (as in all of science). Thus, for example, the quantum hypothesis proposed by the German physicist Max Planck was based on observed departures of the character of blackbody radiation (radiation emitted by a heated body that absorbs all radiant energy incident upon it) from that predicted by classical electromagnetism. The English physicist P.A.M. Dirac predicted the existence of the positron in making a relativistic extension of the quantum theory of the electron. The elusive neutrino, without charge and virtually without mass, was hypothesized by the Austrian-born physicist Wolfgang Pauli as an alternative to abandoning the conservation laws in the beta-decay process. Maxwell conjectured that if changing magnetic fields create electric fields (which was known to be so), then changing electric fields might create magnetic fields, leading him to the electromagnetic theory of light. Albert Einstein’s special theory of relativity was based on a critical reexamination of the meaning of simultaneity, while his general theory of relativity rests on the equivalence of inertial and gravitational mass.

Although the tactics may vary from problem to problem, the physicist invariably tries to make unsolved problems more tractable by constructing a series of idealized models, with each successive model being a more realistic representation of the actual physical situation. Thus, in the theory of gases, the molecules are at first imagined to be particles that are as structureless as billiard balls with vanishingly small dimensions. This ideal picture is then improved on step by step.

The correspondence principle, a useful guiding principle for extending theoretical interpretations, was formulated by the Danish physicist Niels Bohr in the context of the quantum theory. It asserts that when a valid theory is generalized to a broader arena, the new theory’s predictions must agree with the old one in the overlapping region in which both are applicable. For example, the more comprehensive theory of physical optics must yield the same result as the more restrictive theory of ray optics whenever wave effects proportional to the wavelength of light are negligible on account of the smallness of that wavelength. Similarly, quantum mechanics must yield the same results as classical mechanics in circumstances when Planck’s constant can be considered as negligibly small. Likewise, for speeds small compared to the speed of light (as for baseballs in play), relativistic mechanics must coincide with Newtonian classical mechanics.
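
The last statement is easy to check numerically. In the sketch below, the relativistic kinetic energy (γ − 1)mc² is compared with the classical ½mv² at a few speeds (chosen arbitrarily); the ratio approaches 1 as the speed becomes small compared with c, exactly as the correspondence principle requires:

import math

c = 2.99792458e8  # speed of light, m/s

def kinetic_energy_ratio(v, m=1.0):
    gamma = 1 / math.sqrt(1 - (v / c)**2)
    relativistic = (gamma - 1) * m * c**2  # relativistic kinetic energy
    classical = 0.5 * m * v**2             # Newtonian kinetic energy
    return relativistic / classical

for v in (3.0e7, 3.0e5, 3.0e3):        # 10% of c, then much slower speeds
    print(v, kinetic_energy_ratio(v))  # ratio tends to 1 as v/c shrinks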

Some ways in which experimental and theoretical physicists attack their problems are illustrated by the following examples.

The modern experimental study of elementary particles began with the detection of new types of unstable particles produced in the atmosphere by primary radiation, the latter consisting mainly of high-energy protons arriving from space. The new particles were detected in Geiger counters and identified by the tracks they left in instruments called cloud chambers and in photographic plates. After World War II, particle physics, then known as high-energy nuclear physics, became a major field of science. Today’s high-energy particle accelerators can be several kilometres in length, cost hundreds (or even thousands) of millions of dollars, and accelerate particles to enormous energies (trillions of electron volts). Experimental teams, such as those that discovered the W+, W−, and Z quanta of the weak force at the European Laboratory for Particle Physics (CERN) in Geneva, which is funded by its 20 European member states, can have 100 or more physicists from many countries, along with a larger number of technical workers serving as support personnel. A variety of visual and electronic techniques are used to interpret and sort the huge amounts of data produced by their efforts, and particle-physics laboratories are major users of the most advanced technology, be it superconductive magnets or supercomputers.

Theoretical physicists use mathematics both as a logical tool for the development of theory and for calculating predictions of the theory to be compared with experiment. Newton, for one, invented integral calculus to solve the following problem, which was essential to his formulation of the law of universal gravitation: Assuming that the attractive force between any pair of point particles is inversely proportional to the square of the distance separating them, how does a spherical distribution of particles, such as Earth, attract another nearby object? Integral calculus, a procedure for summing many small contributions, yields the simple solution that Earth itself acts as a point particle with all its mass concentrated at the centre. In modern physics, Dirac predicted the existence of the then-unknown positive electron (or positron) by finding an equation for the electron that would combine quantum mechanics and the special theory of relativity.
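
That result can also be verified numerically by replacing the integral with a finite sum. The sketch below (the shell radius, distance, and slice count are arbitrary choices) slices a thin spherical shell into rings, sums their axial pulls on an outside test mass, and compares the total with the simple point-mass formula GM/d²:

import math

G = 6.674e-11  # gravitational constant, N·m^2/kg^2

def shell_pull(M=1.0, R=1.0, d=3.0, slices=20000):
    """Force of a thin spherical shell (mass M, radius R) on a unit test
    mass at distance d > R from the centre, summed ring by ring."""
    total = 0.0
    dtheta = math.pi / slices
    for i in range(slices):
        theta = (i + 0.5) * dtheta               # polar angle of this ring
        dm = M * 0.5 * math.sin(theta) * dtheta  # mass of the ring
        z = d - R * math.cos(theta)              # axial distance to the ring
        s = math.hypot(z, R * math.sin(theta))   # distance to points on it
        total += G * dm * z / s**3               # axial force component
    return total

print(shell_pull())      # numerical sum over the whole shell
print(G * 1.0 / 3.0**2)  # the same value: all mass treated as a central point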

Relations between physics and other disciplines and society

Influence of physics on related disciplines

Because physics elucidates the simplest fundamental questions in nature on which there can be a consensus, it is hardly surprising that it has had a profound impact on other fields of science, on philosophy, on the worldview of the developed world, and, of course, on technology.

Indeed, whenever a branch of physics has reached such a degree of maturity that its basic elements are comprehended in general principles, it has moved from basic to applied physics and thence to technology. Thus almost all current activity in classical physics consists of applied physics, and its contents form the core of many branches of engineering. Discoveries in modern physics are converted with increasing rapidity into technical innovations and analytical tools for associated disciplines. There are, for example, such nascent fields as nuclear and biomedical engineering, quantum chemistry and quantum optics, and radio, X-ray, and gamma-ray astronomy, as well as such analytic tools as radioisotopes, spectroscopy, and lasers, which all stem directly from basic physics.

Apart from its specific applications, physics—especially Newtonian mechanics—has become the prototype of the scientific method, its experimental and analytic methods sometimes being imitated (and sometimes inappropriately so) in fields far from the related physical sciences. Some of the organizational aspects of physics, based partly on the successes of the radar and atomic-bomb projects of World War II, also have been imitated in large-scale scientific projects, as, for example, in astronomy and space research.

The great influence of physics on the branches of philosophy concerned with the conceptual basis of human perceptions and understanding of nature, such as epistemology, is evidenced by the earlier designation of physics itself as natural philosophy. Present-day philosophy of science deals largely, though not exclusively, with the foundations of physics. Determinism, the philosophical doctrine that the universe is a vast machine operating with strict causality whose future is determined in all detail by its present state, is rooted in Newtonian mechanics, which obeys that principle. Moreover, the schools of materialism, naturalism, and empiricism have in large degree considered physics to be a model for philosophical inquiry. An extreme position is taken by the logical positivists, whose radical distrust of the reality of anything not directly observable leads them to demand that all significant statements must be formulated in the language of physics.

The uncertainty principle of quantum theory has prompted a reexamination of the question of determinism, and its other philosophical implications remain in doubt. Particularly problematic is the matter of the meaning of measurement, for which recent theories and experiments confirm some apparently noncausal predictions of standard quantum theory. It is fair to say that though physicists agree that quantum theory works, they still differ as to what it means.

Influence of related disciplines on physics

The relationship of physics to its bordering disciplines is a reciprocal one. Just as technology feeds on fundamental science for new practical innovations, so physics appropriates the techniques and instrumentation of modern technology for advancing itself. Thus experimental physicists utilize increasingly refined and precise electronic devices. Moreover, they work closely with engineers in designing basic scientific equipment, such as high-energy particle accelerators. Mathematics has always been the primary tool of the theoretical physicist, and even abstruse fields of mathematics such as group theory and differential geometry have become invaluable to the theoretician classifying subatomic particles or investigating the symmetry characteristics of atoms and molecules. Much of contemporary research in physics depends on the high-speed computer. It allows the theoretician to perform computations that are too lengthy or complicated to be done with paper and pencil. Also, it allows experimentalists to incorporate the computer into their apparatus, so that the results of measurements can be provided nearly instantaneously on-line as summarized data while an experiment is in progress.

The physicist in society

Tracks emerging from a proton-antiproton collision at the centre of the UA1 detector at CERN include those of an energetic electron (straight down) and a positron (upper right). These two particles have come from pair production through the decay of a Z0; when their energies are added together, the total is equal to the Z0's mass.

Because of the remoteness of much of contemporary physics from ordinary experience and its reliance on advanced mathematics, physicists have sometimes seemed to the public to be initiates in a latter-day secular priesthood who speak an arcane language and can communicate their findings to laymen only with great difficulty. Yet the physicist has come to play an increasingly significant role in society, particularly since World War II. Governments have supplied substantial funds for research at academic institutions and at government laboratories through such agencies as the National Science Foundation and the Department of Energy in the United States, which has also established a number of national laboratories, including the Fermi National Accelerator Laboratory in Batavia, Ill., with one of the world’s largest particle accelerators. CERN, with some 20 European member states, operates a large accelerator complex on the Swiss–French border. Physics research is supported in Germany by the Max Planck Society for the Advancement of Science and in Japan by the Japan Society for the Promotion of Science. In Trieste, Italy, there is the International Center for Theoretical Physics, which has strong ties to developing countries. These are only a few examples of the widespread international interest in fundamental physics.

Basic research in physics is obviously dependent on public support and funding, and with this development has come, albeit slowly, a growing recognition within the physics community of the social responsibility of scientists for the consequences of their work and for the more general problems of science and society.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2119 2024-04-14 00:02:05

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,428

Re: Miscellany

2121) Professional video camera

A professional video camera (often called a television camera even though its use has spread beyond television) is a high-end device for creating electronic moving images (as opposed to a movie camera, which earlier recorded the images on film). Originally developed for use in television studios or with outside broadcast trucks, such cameras are now also used for music videos, direct-to-video movies (see digital movie camera), and corporate, educational, and wedding videos, among other uses. Since the 2000s, most professional video cameras have been digital rather than analog.

The distinction between professional video cameras and movie cameras narrowed as HD digital video cameras with sensors the same size as those of 35mm movie cameras, plus dynamic range (exposure latitude) and color rendition approaching film quality, were introduced in the late 2010s. Nowadays, HDTV cameras designed for broadcast television, news, sports, events, and other work such as reality TV are termed professional video cameras, while a digital movie camera is designed for movies or scripted television, recording files that are then color corrected during post-production. The video signal from a professional video camera can be broadcast live or edited quickly with little or no color or exposure adjustment needed.

History

The earliest video cameras were mechanical flying-spot scanners, in use in the 1920s and 1930s during the period of mechanical television. Improvements in video camera tubes in the 1930s ushered in the era of electronic television. Early cameras were very large devices, almost always in two sections. The camera section held the lens, camera tubes, pre-amplifiers, and other necessary electronics, and was connected by a large-diameter multicore cable to the remainder of the camera electronics, usually mounted in a separate room in the studio or in a remote truck. The camera head could not generate a video picture signal on its own; the video signal was output to the studio for switching and transmission. By the 1950s, electronic miniaturization had progressed to the point where some monochrome cameras could operate standalone and even be handheld. But the studio configuration remained, with the large cable bundle carrying the signals back to the camera control unit (CCU). The CCU in turn was used to align and operate the camera's functions, such as exposure, system timing, and video and black levels.

The first color cameras (introduced in the 1950s in the US and the early 1960s in Europe), notably the RCA TK-40/41 series, were much more complex, with three (and in some models four) pickup tubes, and their size and weight increased drastically. Handheld color cameras did not come into general use until the early 1970s; the first generation was split into a camera head unit (the body of the camera, containing the lens and pickup tubes, held on the shoulder or on a body brace in front of the operator) connected via a cable bundle to a backpack CCU.

The Ikegami HL-33, the RCA TKP45, and the Thomson Microcam were portable two-piece color cameras introduced in the early 1970s. For field work a separate VTR was still required to record the camera's video output, typically either a portable 1-inch reel-to-reel VTR or a portable 3/4-inch U-matic VCR. The two camera units would be carried by the camera operator, while a tape operator carried the portable recorder. With the introduction of the RCA TK-76 in 1976, camera operators were finally able to carry on their shoulders a one-piece camera containing all the electronics needed to output a broadcast-quality composite video signal. A separate videotape recording unit was still required.

Electronic news-gathering (ENG) cameras replaced 16mm film cameras for TV news production from the 1970s onwards, because the cost of shooting on film was significantly higher than that of shooting on reusable tape. Portable videotape production also enabled much faster turnaround for completing news stories, compared with the need to chemically process film before it could be shown or edited. However, some news feature stories for weekly news-magazine shows continued to use 16mm film cameras until the 1990s.

At first all these cameras used tube-based sensors, but charge-coupled device (CCD) imagers came on the scene in the mid-1980s, bringing numerous benefits. Early CCD cameras could not match the colour or resolution of their tube counterparts, but CCD technology offered smaller, lighter cameras and a better, more stable image that was not prone to burn-in or lag and needed no registration. Development of CCD imagers therefore took off quickly, and once they rivaled and then surpassed tube sensors in image quality, they began displacing tube-based cameras, which were all but abandoned by the early 1990s. Eventually, cameras with the recorder permanently mated to the camera head became the norm for ENG.

In studio cameras, the camera electronics shrank, and CCD imagers replaced the pickup tubes. The thick multicore cables connecting the camera head to the CCU were replaced in the late 1970s with triax connections, a slender video cable that carried multiple video signals, intercom audio, and control circuits and could be run for a mile or more. As the camera innards shrank, the electronics no longer dictated the size of the enclosure; the box shape remained, however, because it was needed to hold the large studio lenses, teleprompters, electronic viewfinder (EVF), and other paraphernalia required for studio and sports production. Electronic field production (EFP) cameras were often mounted in studio configurations inside a mounting cage that supported the additional studio accessories.

In the late 1990s, as HDTV broadcasting commenced, HDTV cameras suitable for news and general-purpose work were introduced. Though they delivered much better image quality, their overall operation was identical to that of their standard-definition predecessors. New tapeless recording methods were also introduced to supplant videotape. Ikegami and Avid introduced EditCam in 1996, based on interchangeable hard drives. Panasonic introduced P2 cameras, which recorded a DVCPRO signal on interchangeable flash-memory media. Several other data-storage recording systems were introduced, notably XDCAM from Sony. Sony also introduced SxS (S-by-S), a flash-memory format compliant with the ExpressCard standard created by Sony and SanDisk. Eventually flash storage largely supplanted other forms of recording media.

In the 2000s, major manufacturers such as Sony and Philips introduced digital professional video cameras. These cameras used CCD sensors and recorded video digitally on flash storage; they were followed by digital HDTV cameras. As digital technology improved, and with the digital television transition, digital professional video cameras have dominated television studios, ENG, and EFP since the 2010s, and have spread into other areas as well. CCD sensors were eventually replaced by CMOS sensors.

Chronology

* From 1926 to 1933, "cameras" were a type of flying-spot scanner using a mechanical disk.
* 1936 saw the arrival of RCA's iconoscope camera.
* 1946: RCA's TK-10 studio camera used a 3" image orthicon (IO) tube with a four-lens turret. The RCA TK-30 (1946) was widely used as a field camera; a TK-30 is simply a TK-10 with a portable camera control unit.
* The 1948 Dumont Marconi MK IV was an image orthicon camera. Marconi's first camera was shown in 1938. EMI cameras from the UK, such as the EMI 203/4, were used in the US in the early 1960s; later in the 1960s came the EMI 2000 and EMI 2001.
* In 1950 the arrival of the Vidicon camera tube made smaller cameras possible. 1952 saw the first Walkie-Lookie "portable cameras". Image orthicon tubes were still used until the arrival of the Plumbicon.
* The RCA TK-40 is considered to be the first color television camera for broadcasts, in 1953. RCA continued its lead in the high-end camera market until the TK-47 (1978), the last of its high-end tube cameras.
* 1954: RCA's TK-11 studio camera used a 3" image orthicon (IO) tube with a four-lens turret. The RCA TK-31 (1954) was widely used as a field camera; a TK-31 is simply a TK-11 with a portable camera control unit. There is some commonality between the TK-11/TK-31 and the earlier TK-10/TK-30.
* Ikegami introduced the first truly portable hand-held TV camera in 1962.
* Philips' line of Norelco cameras was also very popular, with models such as the PC-60 (1965), PC-70 (1967) and PCP-90 (1968, handheld). Major US broadcaster CBS was a notable early customer of the PC-60 and PC-70 units. Philips/BTS (Broadcast Television Systems Inc.) later came out with the LDK line of cameras, ending with its last high-end tube camera, the LDK 6 (1982). Philips invented the Plumbicon pickup tube in 1965, which gave tube cameras a cleaner picture. BTS introduced its first handheld frame-transfer CCD (charge-coupled device) camera, the LDK 90, in 1987.
* Bosch Fernseh marketed a line of high-end cameras (KCU, KCN, KCP, KCK) in the US, ending with the tube camera KCK-40 (1978). Image Transform (in Universal City, California) used specially modified 24-frame KCK-40s for its "Image Vision" system, a custom pre-HDTV video system with a 10 MHz bandwidth, almost twice that of NTSC. At its peak this system was used to make "Monty Python Live at the Hollywood Bowl" in 1982, the first major high-definition analog wideband videotape-to-film post-production to use a film recorder for film-out.
* In the 2000s, major manufacturers such as Sony and Philips introduced flash-storage-based digital television cameras. Since the 2010s, these digital cameras have become the most widely used of all systems.

Usage types

Most professional cameras utilize an optical prism block directly behind the lens. This prism block (a trichroic assembly comprising two dichroic prisms) separates the image into the three primary colors, red, green, and blue, directing each color into a separate charge-coupled device (CCD) or active-pixel sensor (CMOS image sensor) mounted to the face of each prism. Some high-end consumer cameras also do this, producing a higher-resolution image with better color fidelity than is normally possible with just a single video pickup.

In both single-sensor Bayer-filter and triple-sensor designs, the weak signal created by the sensors is amplified before being encoded into analog signals for use by the viewfinder, and also encoded into digital signals for transmission and recording. The analog outputs were normally in the form of either a composite video signal, which combined the color and luminance information into a single output, or a Y, R-Y, B-Y component video output through three separate connectors.
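
As a rough illustration of the component encoding mentioned above, here is a minimal sketch in Python, assuming the Rec. 601 luma coefficients used for standard-definition video (the function name is illustrative, not any real camera's interface):

# Minimal sketch: deriving Y, R-Y, B-Y component signals from RGB,
# assuming Rec. 601 luma coefficients (standard-definition video).
def rgb_to_components(r, g, b):
    # r, g, b are normalized to the range 0.0-1.0
    y = 0.299 * r + 0.587 * g + 0.114 * b  # luminance (luma) signal
    return y, r - y, b - y                 # luma plus two colour-difference signals

# Pure white gives full luma and (approximately) zero colour difference:
print(rgb_to_components(1.0, 1.0, 1.0))    # roughly (1.0, 0.0, 0.0)

Combining these three signals into one waveform (plus sync) yields the composite output, while carrying them on three separate connectors gives the component output described above.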

Studio cameras

Most television studio cameras stand on the floor, usually with pneumatic or hydraulic mechanisms called pedestals to adjust the height and position in the studio. The cameras in a multiple-camera setup are controlled by a device known as a camera control unit (CCU), to which they are connected via a triax, fibre optic or the almost obsolete multicore cable. The CCU, along with genlock and other equipment, is installed in the central apparatus room (CAR) of the television studio. A remote control panel in the production control room (PCR) for each camera is then used by the vision engineer(s) to balance the pictures.

When used outside a formal television studio in outside broadcasting (OB), they are often on tripods that may or may not have wheels (depending on the model of the tripod). Initial models used analog technology but are now obsolete, having been supplanted by digital models.

Studio cameras are light and small enough to be taken off the pedestal, with the lens changed to a smaller size, and used handheld on a camera operator's shoulder, but they still have no recorder of their own and are cable-bound. Cameras can also be mounted on a tripod, a dolly or a crane, making them much more versatile than previous generations of studio cameras. These cameras have a tally light, a small signal lamp that indicates, for the benefit of those being filmed as well as the camera operator, that the camera is 'live' – i.e. its signal is being used for the 'main program' at that moment.

ENG cameras

ENG (electronic news gathering) video cameras were originally designed for use by news camera operators. While they have some similarities to the smaller consumer camcorder, they differ in several regards:

* ENG cameras are larger and heavier (which helps damp small movements), and usually supported by a camera shoulder support or shoulder stock on the camera operator's shoulder, taking the weight off the hand, which is freed to operate the zoom lens control.
* The camera mounts on tripods with fluid heads and other supports with a quick-release plate.
* Three CCDs or CMOS active-pixel sensors are used, one for each of the primary colors.
* They have interchangeable lenses.
* The lens is focused manually and directly, without intermediate servo controls. However, in a television studio configuration, the lens zoom and focus can be operated with remote controls by a camera control unit (CCU).
* A rotating behind-the-lens filter wheel, for selecting an 85A color-correction filter or neutral-density filters.
* Controls that need quick access, such as gain select, white/black balance, color-bar select, and record start, are on physical switches rather than in menus, located in the same general place on the camera irrespective of the manufacturer.
* All settings, white balance, focus, and iris can be manually adjusted, and automatics can be completely disabled.
* Professional BNC connectors for video out and genlock in.
* Can operate an electronic viewfinder (EVF) or external CRT viewfinder.
* At least two XLR input connectors for audio are included.
* Direct slot-in for portable wireless microphones.
* Audio is adjusted manually, with easily accessed physical knobs.
* A complete time code section is available, allowing time presets; multiple-camera setups can be time-code-synchronized or jam-synced to a master clock (see the sketch after this list).
* "Bars and tone" are available in-camera (the SMPTE color bars (Society of Motion Picture and Television Engineers) Bars, a reference signal that simplifies calibration of monitors and setting levels when duplicating and transmitting the picture.)
* Recording is to a professional medium such as some variant of Betacam or DVCPRO, or direct-to-disk recording or flash memory. If, as in the latter two, it is a data recording, much higher data rates (or less video compression) are used than in consumer devices.
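
As a rough illustration of the time code mentioned in the list above, here is a minimal Python sketch that converts a running frame count to a non-drop-frame SMPTE-style HH:MM:SS:FF label; the frame rate and the function name are illustrative assumptions, not any specific camera's interface:

def frames_to_timecode(frame_count, fps=25):
    # Convert a frame count to a non-drop-frame HH:MM:SS:FF string.
    ff = frame_count % fps
    total_seconds = frame_count // fps
    ss = total_seconds % 60
    mm = (total_seconds // 60) % 60
    hh = total_seconds // 3600
    return f"{hh:02d}:{mm:02d}:{ss:02d}:{ff:02d}"

# Two cameras jam-synced to the same master count label identical frames
# identically, which is what makes multi-camera editing straightforward.
print(frames_to_timecode(90000))  # 01:00:00:00 at 25 fps

This is the idea behind jam sync: each camera's internal counter is set from a master clock so that all recordings share one consistent time base.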

EFP cameras

Electronic field production cameras are similar to studio cameras in that they are used primarily in multiple-camera switched configurations, but outside the studio environment, for concerts, sports and live news coverage of special events. These versatile cameras can be carried on the shoulder or mounted on camera pedestals and cranes, with the large, very long-focal-length zoom lenses made for studio camera mounting. These cameras have no recording ability of their own, and transmit their signals back to the broadcast truck through a fiber optic, triax, radio frequency or the virtually obsolete multicore cable.

Others

Remote cameras are typically very small camera heads designed to be operated by remote control. Despite their small size, they are often capable of performance close to that of the larger ENG and EFP types.

Block cameras are so called because the camera head is a small block, often smaller than the lens itself. Some block cameras are completely self-contained, while others contain only the sensor block and its pre-amps, thus requiring connection to a separate camera control unit in order to operate. All the functions of the camera can be controlled from a distance, and often there is a facility for controlling the lens focus and zoom as well. These cameras are mounted on pan-and-tilt heads, and may be placed in a stationary position, such as atop a pole or tower, in a corner of a broadcast booth, or behind a basketball hoop. They can also be placed on robotic dollies, at the end of camera booms and cranes, or "flown" in a cable-supported harness.

Lipstick cameras are so called because the lens and sensor block combined are similar in size and appearance to a lipstick container. These are either hard-mounted in a small location, such as a race car, or on the end of a boom pole. The sensor block and lens are separated from the rest of the camera electronics by a long, thin multi-conductor cable. The camera settings are manipulated from this electronics box, while the lens settings are normally set when the camera is mounted in place.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2120 2024-04-15 00:02:17

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,428

Re: Miscellany

2122) Grammar

Gist

1. The study of the classes of words, their inflections, and their functions and relations in a language. 2. The facts of language with which grammar deals.

Summary

Grammar, rules of a language governing the sounds, words, sentences, and other elements, as well as their combination and interpretation. The word grammar also denotes the study of these abstract features or a book presenting these rules. In a restricted sense, the term refers only to the study of sentence and word structure (syntax and morphology), excluding vocabulary and pronunciation.

A brief treatment of grammar follows.

Conceptions of grammar

A common contemporary definition of grammar is the underlying structure of a language that any native speaker of that language knows intuitively. The systematic description of the features of a language is also a grammar. These features are the phonology (sound), morphology (system of word formation), syntax (patterns of word arrangement), and semantics (meaning). Depending on the grammarian’s approach, a grammar can be prescriptive (i.e., provide rules for correct usage), descriptive (i.e., describe how a language is actually used), or generative (i.e., provide instructions for the production of an infinite number of sentences in a language). The traditional focus of inquiry has been on morphology and syntax, and for some contemporary linguists (and many traditional grammarians) this is the only proper domain of the subject.

Ancient and medieval grammars

In Europe the Greeks were the first to write grammars. To them, grammar was a tool that could be used in the study of Greek literature; hence their focus on the literary language. The Alexandrians of the 1st century BC further developed Greek grammar in order to preserve the purity of the language. Dionysius Thrax of Alexandria later wrote an influential treatise called The Art of Grammar, in which he analyzed literary texts in terms of letters, syllables, and eight parts of speech.

The Romans adopted the grammatical system of the Greeks and applied it to Latin. Except for Varro, of the 1st century BC, who believed that grammarians should discover structures, not dictate them, most Latin grammarians did not attempt to alter the Greek system and also sought to protect their language from decay. Whereas the model for the Greeks and Alexandrians was the language of Homer, the works of Cicero and Virgil set the Latin standard. The works of Donatus (4th century AD) and Priscian (6th century AD), the most important Latin grammarians, were widely used to teach Latin grammar during the European Middle Ages. In medieval Europe, education was conducted in Latin, and Latin grammar became the foundation of the liberal arts curriculum. Many grammars were composed for students during this time. Aelfric, the abbot of Eynsham (11th century), who wrote the first Latin grammar in Anglo-Saxon, proposed that this work serve as an introduction to English grammar as well. Thus began the tradition of analyzing English grammar according to a Latin model.

The modistae, grammarians of the mid-13th to mid-14th century who viewed language as a reflection of reality, looked to philosophy for explanations of grammatical rules. The modistae sought one “universal” grammar that would serve as a means of understanding the nature of being. In 17th-century France a group of grammarians from Port-Royal were also interested in the idea of universal grammar. They claimed that common elements of thought could be discerned in grammatical categories of all languages. Unlike their Greek and Latin counterparts, the Port-Royal grammarians did not study literary language but claimed instead that usage should be dictated by the actual speech of living languages. Noting their emphasis on linguistic universals, the contemporary linguist Noam Chomsky called the Port-Royal group the first transformational grammarians.

Modern and contemporary grammars

By 1700 grammars of 61 vernacular languages had been printed. These were written primarily for purposes of reforming, purifying, or standardizing language and were put to pedagogical use. Rules of grammar usually accounted for formal, written, literary language only and did not apply to all the varieties of actual, spoken language. This prescriptive approach long dominated the schools, where the study of grammar came to be associated with “parsing” and sentence diagramming. Opposition to teaching solely in terms of prescriptive and proscriptive (i.e., what must not be done) rules grew during the middle decades of the 20th century.

The simplification of grammar for classroom use contrasted sharply with the complex studies that scholars of linguistics were conducting about languages. During the 19th and early 20th centuries the historical point of view flourished. Scholars who realized that every living language was in a constant state of flux studied all types of written records of modern European languages to determine the courses of their evolution. They did not limit their inquiry to literary languages but included dialects and contemporary spoken languages as well. Historical grammarians did not follow earlier prescriptive approaches but were interested, instead, in discovering where the language under study came from.

As a result of the work of historical grammarians, scholars came to see that the study of language can be either diachronic (its development through time) or synchronic (its state at a particular time). The Swiss linguist Ferdinand de Saussure and other descriptive linguists began studying the spoken language. They collected a large sample of sentences produced by native speakers of a language and classified their material starting with phonology and working their way to syntax.

Generative, or transformational, grammarians of the second half of the 20th century, such as Noam Chomsky, studied the knowledge that native speakers possess which enables them to produce and understand an infinite number of sentences. Whereas descriptivists like Saussure examined samples of individual speech to arrive at a description of a language, transformationalists first studied the underlying structure of a language. They attempted to describe the “rules” that define a native speaker’s “competence” (unconscious knowledge of the language) and account for all instances of the speaker’s “performance” (strategies the individual uses in actual sentence production). See generative grammar; transformational grammar.

The study of grammatical theory has been of interest to philosophers, anthropologists, psychologists, and literary critics over the centuries. Today, grammar exists as a field within linguistics but still retains a relationship with these other disciplines. For many people, grammar still refers to the body of rules one must know in order to speak or write “correctly.” However, from the last quarter of the 20th century a more sophisticated awareness of grammatical issues has taken root, especially in schools. In some countries, such as Australia and the United Kingdom, new English curricula have been devised in which grammar is a focus of investigation, avoiding the prescriptivism of former times and using techniques that promote a lively and thoughtful spirit of inquiry.

Details

In linguistics, the grammar of a natural language is its set of structural rules on speakers' or writers' usage and creation of clauses, phrases, and words. The term can also refer to the study of such rules, a subject that includes phonology, morphology, and syntax, together with phonetics, semantics, and pragmatics. There are, broadly speaking, two different ways to study grammar: traditional grammar and theoretical grammar.

Fluent speakers of a language variety or lect have internalised these rules, the vast majority of which – at least in the case of one's native language(s) – are acquired not by intentional study or instruction but by hearing other speakers. Much of this internalisation occurs during early childhood; learning a language later in life usually involves more direct instruction.

The term "grammar" can also describe the linguistic behaviour of groups of speakers and writers rather than individuals. Differences in scale are important to this meaning: for example, the term "English grammar" could refer to the whole of English grammar (that is, to the grammar of all the language's speakers) in which case it covers lots of variation. At a smaller scale, it may refer only to what is shared among the grammars of all or most English speakers (such as subject–verb–object word order in simple sentences). At the smallest scale, this sense of "grammar" can describe the conventions of just one form of English that is better defined than others (such as standard English for a region).

A description, study, or analysis of such rules may also be known as a grammar, or as a grammar book. A reference book describing the grammar of a language is called a "reference grammar" or simply "a grammar" (see History of English grammars). A fully explicit grammar, which describes the grammatical constructions of a particular speech variety in great detail, is called a descriptive grammar. This kind of linguistic description contrasts with linguistic prescription, an attempt to actively discourage or suppress some constructions while promoting others, either absolutely or with reference to a standard variety. For example, some pedants insist that sentences in English should not end with prepositions, a ban that has been traced to John Dryden (1631–1700). His unjustified rejection of the practice may have led other English speakers to avoid it and discourage its use. Yet ending sentences with a preposition has a long history in Germanic languages like English, where it is so widespread as to be the norm.

Outside linguistics, the word grammar often has a different meaning. It may be used more widely to include rules of spelling and punctuation, which linguists would not typically consider part of grammar but rather of orthography, the conventions used for writing a language. It may also be used more narrowly to refer to a set of prescriptive norms only, excluding those aspects of a language's grammar that are not subject to variation or debate about their acceptability. Jeremy Butterfield claimed that, for non-linguists, "Grammar is often a generic way of referring to any aspect of English that people object to".

Etymology

The word grammar is derived from Greek grammatikḕ téchnē, which means "art of letters", from grámma, "letter", itself from γράφειν (gráphein), "to draw, to write". The same Greek root also appears in the words graphics, grapheme, and photograph.

History

The first systematic grammar of Sanskrit originated in Iron Age India, with Yaska (6th century BC), Pāṇini (6th–5th century BC) and his commentators Pingala (c. 200 BC), Katyayana, and Patanjali (2nd century BC). Tolkāppiyam, the earliest Tamil grammar, is mostly dated to before the 5th century AD. The Babylonians also made some early attempts at language description.

Grammar appeared as a discipline in Hellenism from the 3rd century BC forward with authors such as Rhianus and Aristarchus of Samothrace. The oldest known grammar handbook is the Art of Grammar, a succinct guide to speaking and writing clearly and effectively, written by the ancient Greek scholar Dionysius Thrax (c. 170–c. 90 BC), a student of Aristarchus of Samothrace who founded a school on the Greek island of Rhodes. Dionysius Thrax's grammar book remained the primary grammar textbook for Greek schoolboys until as late as the twelfth century AD. The Romans based their grammatical writings on it, and its basic format remains the basis for grammar guides in many languages even today. Latin grammar developed by following Greek models from the 1st century BC, due to the work of authors such as Orbilius Pupillus, Remmius Palaemon, Marcus Valerius Probus, Verrius Flaccus, and Aemilius Asper.

The grammar of Irish originated in the 7th century with Auraicept na n-Éces. Arabic grammar emerged with Abu al-Aswad al-Du'ali in the 7th century. The first treatises on Hebrew grammar appeared in the High Middle Ages, in the context of Midrash (exegesis of the Hebrew Bible). The Karaite tradition originated in Abbasid Baghdad. The Diqduq (10th century) is one of the earliest grammatical commentaries on the Hebrew Bible. In the 12th century, Ibn Barun compared the Hebrew language with Arabic in the Islamic grammatical tradition.

Belonging to the trivium of the seven liberal arts, grammar was taught as a core discipline throughout the Middle Ages, following the influence of authors from Late Antiquity, such as Priscian. Treatment of vernaculars began gradually during the High Middle Ages, with isolated works such as the First Grammatical Treatise, but became influential only in the Renaissance and Baroque periods. In 1486, Antonio de Nebrija published Las introduciones Latinas contrapuesto el romance al Latin, and the first Spanish grammar, Gramática de la lengua castellana, in 1492. During the 16th-century Italian Renaissance, the Questione della lingua was the discussion of the status and ideal form of the Italian language, initiated by Dante's De vulgari eloquentia (Pietro Bembo, Prose della volgar lingua, Venice 1525). The first grammar of Slovene was written in 1583 by Adam Bohorič, and Grammatica Germanicae Linguae, the first grammar of German, was published in 1578.

Grammars of some languages began to be compiled for the purposes of evangelism and Bible translation from the 16th century onward, such as Grammatica o Arte de la Lengua General de Los Indios de Los Reynos del Perú (1560), a Quechua grammar by Fray Domingo de Santo Tomás.

From the latter part of the 18th century, grammar came to be understood as a subfield of the emerging discipline of modern linguistics. The Deutsche Grammatik of Jacob Grimm was first published in the 1810s. The Comparative Grammar of Franz Bopp, the starting point of modern comparative linguistics, came out in 1833.

Development of grammar

Grammars evolve through usage. Historically, with the advent of written representations, formal rules about language usage tended to appear as well, although such rules describe writing conventions more accurately than conventions of speech. Formal grammars are codifications of usage which are developed by repeated documentation and observation over time. As rules are established and developed, the prescriptive concept of grammatical correctness can arise. This often produces a discrepancy between contemporary usage and that which has been accepted, over time, as being standard or "correct". Linguists tend to view prescriptive grammars as having little justification beyond their authors' aesthetic tastes, although style guides may give useful advice about standard language employment based on descriptions of usage in contemporary writings of the same language. Linguistic prescriptions also form part of the explanation for variation in speech, particularly variation in the speech of an individual speaker (for example, why some speakers say "I didn't do nothing", some say "I didn't do anything", and some say one or the other depending on social context).

The formal study of grammar is an important part of children's schooling from a young age through advanced learning, though the rules taught in schools are not a "grammar" in the sense that most linguists use, particularly as they are prescriptive in intent rather than descriptive.

Constructed languages (also called planned languages or conlangs) are more common in the modern day, although still extremely uncommon compared to natural languages. Many have been designed to aid human communication (for example, naturalistic Interlingua, schematic Esperanto, and the highly logical Lojban). Each of these languages has its own grammar.

Syntax refers to linguistic structure above the word level (for example, how sentences are formed), though without taking into account intonation, which is the domain of phonology. Morphology, by contrast, refers to structure at and below the word level (for example, how compound words are formed), but above the level of individual sounds, which, like intonation, are in the domain of phonology. However, no clear line can be drawn between syntax and morphology. Analytic languages use syntax to convey information that is encoded by inflection in synthetic languages. In other words, word order is not significant and morphology is highly significant in a purely synthetic language, whereas morphology is not significant and syntax is highly significant in an analytic language. For example, Chinese and Afrikaans are highly analytic, and meaning is therefore very context-dependent. (Both have some inflections, and both have had more in the past; thus, they are becoming even less synthetic and more "purely" analytic over time.) Latin, which is highly synthetic, uses affixes and inflections to convey the same information that Chinese does with syntax. Because Latin words are quite (though not totally) self-contained, an intelligible Latin sentence can be made from elements that are arranged almost arbitrarily. Latin has complex affixation and simple syntax, whereas Chinese has the opposite.

Education

Prescriptive grammar is taught in primary and secondary school. The term "grammar school" historically referred to a school (attached to a cathedral or monastery) that taught Latin grammar to future priests and monks. It originally referred to a school that taught students how to read, scan, interpret, and declaim Greek and Latin poets (including Homer, Virgil, Euripides, and others). These should not be mistaken for the related, albeit distinct, modern British grammar schools.

A standard language is a dialect that is promoted above other dialects in writing, education, and, broadly speaking, in the public sphere; it contrasts with vernacular dialects, which may be the objects of study in academic, descriptive linguistics but which are rarely taught prescriptively. The standardized "first language" taught in primary education may be subject to political controversy because it may sometimes establish a standard defining nationality or ethnicity.

Recently, efforts have begun to update grammar instruction in primary and secondary education. The main focus has been to prevent the use of outdated prescriptive rules in favor of setting norms based on earlier descriptive research and to change perceptions about the relative "correctness" of prescribed standard forms in comparison to non-standard dialects. A series of metastudies have found that the explicit teaching of grammatical parts of speech and syntax has little or no effect on the improvement of student writing quality in elementary school, middle school, or high school; other methods of writing instruction had a far greater positive effect, including strategy instruction, collaborative writing, summary writing, process instruction, sentence combining and inquiry projects.

The preeminence of Parisian French has reigned largely unchallenged throughout the history of modern French literature. Standard Italian is based on the speech of Florence rather than the capital because of its influence on early literature. Likewise, standard Spanish is not based on the speech of Madrid but on that of educated speakers from more northern areas such as Castile and León (see Gramática de la lengua castellana). In Argentina and Uruguay the Spanish standard is based on the local dialects of Buenos Aires and Montevideo (Rioplatense Spanish). Portuguese has, for now, two official standards, respectively Brazilian Portuguese and European Portuguese.

The Serbian variant of Serbo-Croatian is likewise divided; Serbia and the Republika Srpska of Bosnia and Herzegovina use their own distinct normative subvarieties, with differences in yat reflexes. The existence and codification of a distinct Montenegrin standard is a matter of controversy: some treat Montenegrin as a separate standard lect, while others think it should be considered another form of Serbian.

Norwegian has two standards, Bokmål and Nynorsk, the choice between which is subject to controversy: Each Norwegian municipality can either declare one as its official language or it can remain "language neutral". Nynorsk is backed by 27 percent of municipalities. The main language used in primary schools, chosen by referendum within the local school district, normally follows the official language of its municipality. Standard German emerged from the standardized chancellery use of High German in the 16th and 17th centuries. Until about 1800, it was almost exclusively a written language, but now it is so widely spoken that most of the former German dialects are nearly extinct.

Standard Chinese has official status as the standard spoken form of the Chinese language in the People's Republic of China (PRC), the Republic of China (ROC), and the Republic of Singapore. Pronunciation of Standard Chinese is based on the local accent of Mandarin Chinese from Luanping, Chengde in Hebei Province near Beijing, while grammar and syntax are based on modern vernacular written Chinese.

Modern Standard Arabic is directly based on Classical Arabic, the language of the Qur'an. The Hindustani language has two standards, Hindi and Urdu.

In the United States, the Society for the Promotion of Good Grammar designated 4 March as National Grammar Day in 2008.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2121 2024-04-16 00:01:54

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,428

Re: Miscellany

2123) Essay

Gist

Essay is a short piece of writing on a particular subject, often expressing personal views. In a school test, an essay is a written answer that includes information and discussion, usually to test how well the student understands the subject.

Summary

Essay is an analytic, interpretative, or critical literary composition usually much shorter and less systematic and formal than a dissertation or thesis and usually dealing with its subject from a limited and often personal point of view.

Some early treatises—such as those of Cicero on the pleasantness of old age or on the art of “divination,” Seneca on anger or clemency, and Plutarch on the passing of oracles—presage to a certain degree the form and tone of the essay, but not until the late 16th century was the flexible and deliberately nonchalant and versatile form of the essay perfected by the French writer Michel de Montaigne. Choosing the name essai to emphasize that his compositions were attempts or endeavours, a groping toward the expression of his personal thoughts and experiences, Montaigne used the essay as a means of self-discovery. His Essais, published in their final form in 1588, are still considered among the finest of their kind. Later writers who most nearly recall the charm of Montaigne include, in England, Robert Burton, though his whimsicality is more erudite, Sir Thomas Browne, and Laurence Sterne, and in France, with more self-consciousness and pose, André Gide and Jean Cocteau.

At the beginning of the 17th century, social manners, the cultivation of politeness, and the training of an accomplished gentleman became the theme of many essayists. This theme was first exploited by the Italian Baldassare Castiglione in his Il libro del cortegiano (1528; The Book of the Courtier). The influence of the essay and of genres allied to it, such as maxims, portraits, and sketches, proved second to none in molding the behavior of the cultured classes, first in Italy, then in France, and, through French influence, in most of Europe in the 17th century. Among those who pursued this theme was the 17th-century Spanish Jesuit Baltasar Gracián in his essays on the art of worldly wisdom.

Keener political awareness in the 18th century, the age of Enlightenment, made the essay an all-important vehicle for the criticism of society and religion. Because of its flexibility, its brevity, and its potential both for ambiguity and for allusions to current events and conditions, it was an ideal tool for philosophical reformers. The Federalist Papers in America and the tracts of the French Revolutionaries are among the countless examples of attempts during this period to improve the human condition through the essay.

The genre also became the favoured tool of traditionalists of the 18th and 19th centuries, such as Edmund Burke and Samuel Taylor Coleridge, who looked to the short, provocative essay as the most potent means of educating the masses. Essays such as Paul Elmer More’s long series of Shelburne Essays (published between 1904 and 1935), T.S. Eliot’s After Strange Gods (1934) and Notes Towards the Definition of Culture (1948), and others that attempted to reinterpret and redefine culture, established the genre as the most fitting to express the genteel tradition at odds with the democracy of the new world.

Whereas in several countries the essay became the chosen vehicle of literary and social criticism, in other countries the genre became semipolitical, earnestly nationalistic, and often polemical, playful, or bitter. Essayists such as Robert Louis Stevenson and Willa Cather wrote with grace on several lighter subjects, and many writers—including Virginia Woolf, Edmund Wilson, and Charles du Bos—mastered the essay as a form of literary criticism.

Details

An essay is, generally, a piece of writing that gives the author's own argument, but the definition is vague, overlapping with those of a letter, a paper, an article, a pamphlet, and a short story. Essays have been sub-classified as formal and informal: formal essays are characterized by "serious purpose, dignity, logical organization, length," whereas the informal essay is characterized by "the personal element (self-revelation, individual tastes and experiences, confidential manner), humor, graceful style, rambling structure, unconventionality or novelty of theme," etc.

Essays are commonly used as literary criticism, political manifestos, learned arguments, observations of daily life, recollections, and reflections of the author. Almost all modern essays are written in prose, but works in verse have been dubbed essays (e.g., Alexander Pope's An Essay on Criticism and An Essay on Man). While brevity usually defines an essay, voluminous works like John Locke's An Essay Concerning Human Understanding and Thomas Malthus's An Essay on the Principle of Population are counterexamples.

In some countries (e.g., the United States and Canada), essays have become a major part of formal education. Secondary students are taught structured essay formats to improve their writing skills; admission essays are often used by universities in selecting applicants, and in the humanities and social sciences essays are often used as a way of assessing the performance of students during final exams.

The concept of an "essay" has been extended to other media beyond writing. A film essay is a movie that often incorporates documentary filmmaking styles and focuses more on the evolution of a theme or idea. A photographic essay covers a topic with a linked series of photographs that may have accompanying text or captions.

Definitions

The word essay derives from the French infinitive essayer, "to try" or "to attempt". In English essay first meant "a trial" or "an attempt", and this is still an alternative meaning. The Frenchman Michel de Montaigne (1533–1592) was the first author to describe his work as essays; he used the term to characterize these as "attempts" to put his thoughts into writing.

Subsequently, essay has been defined in a variety of ways. One definition is a "prose composition with a focused subject of discussion" or a "long, systematic discourse". It is difficult to define the genre into which essays fall. Aldous Huxley, a leading essayist, gives guidance on the subject. He notes that "the essay is a literary device for saying almost everything about almost anything", and adds that "by tradition, almost by definition, the essay is a short piece". Furthermore, Huxley argues that "essays belong to a literary species whose extreme variability can be studied most effectively within a three-poled frame of reference". These three poles (or worlds in which the essay may exist) are:

* The personal and the autobiographical: The essayists that feel most comfortable in this pole "write fragments of reflective autobiography and look at the world through the keyhole of anecdote and description".
* The objective, the factual, and the concrete particular: The essayists that write from this pole "do not speak directly of themselves, but turn their attention outward to some literary or scientific or political theme. Their art consists of setting forth, passing judgment upon, and drawing general conclusions from the relevant data".
* The abstract-universal: In this pole "we find those essayists who do their work in the world of high abstractions", who are never personal and who seldom mention the particular facts of experience.
Huxley adds that the most satisfying essays "...make the best not of one, not of two, but of all the three worlds in which it is possible for the essay to exist."

Academic

In countries like the United States and the United Kingdom, essays have become a major part of a formal education in the form of free-response questions. Secondary students in these countries are taught structured essay formats to improve their writing skills, and essays are often used by universities in these countries in selecting applicants. In both secondary and tertiary education, essays are used to judge the mastery and comprehension of the material. Students are asked to explain, comment on, or assess a topic of study in the form of an essay. In some courses, university students must complete one or more essays over several weeks or months. In addition, in fields such as the humanities and social sciences, mid-term and end-of-term examinations often require students to write a short essay in two or three hours.

In these countries, so-called academic essays, also called papers, are usually more formal than literary ones. They may still allow the presentation of the writer's own views, but this is done in a logical and factual manner, with the use of the first person often discouraged. Longer academic essays (often with a word limit of between 2,000 and 5,000 words) are often more discursive. They sometimes begin with a short summary analysis of what has previously been written on a topic, which is often called a literature review.

Longer essays may also contain an introductory page that defines words and phrases of the essay's topic. Most academic institutions require that all substantial facts, quotations, and other supporting material in an essay be referenced in a bibliography or works cited page at the end of the text. This scholarly convention helps others (whether teachers or fellow scholars) to understand the basis of facts and quotations the author uses to support the essay's argument. The bibliography also helps readers evaluate to what extent the argument is supported by evidence and to evaluate the quality of that evidence. The academic essay tests the student's ability to present their thoughts in an organized way and is designed to test their intellectual capabilities.

One of the challenges facing universities is that in some cases, students may submit essays purchased from an essay mill (or "paper mill") as their own work. An "essay mill" is a ghostwriting service that sells pre-written essays to university and college students. Since plagiarism is a form of academic dishonesty or academic fraud, universities and colleges may investigate papers they suspect are from an essay mill by using plagiarism detection software, which compares essays against a database of known mill essays and by orally testing students on the contents of their papers.

Magazine or newspaper

Essays often appear in magazines, especially magazines with an intellectual bent, such as The Atlantic and Harper's. Magazine and newspaper essays use many of the essay types described in the section on forms and styles (e.g., descriptive essays, narrative essays, etc.). Some newspapers also print essays in the op-ed section.

Employment

Employment essays detailing experience in a certain occupational field are required when applying for some jobs, especially government jobs in the United States. Essays known as Knowledge Skills and Executive Core Qualifications are required when applying to certain US federal government positions.

A KSA, or "Knowledge, Skills, and Abilities", is a series of narrative statements that are required when applying to Federal government job openings in the United States. KSAs are used along with resumes to determine who the best applicants are when several candidates qualify for a job. The knowledge, skills, and abilities necessary for the successful performance of a position are contained on each job vacancy announcement. KSAs are brief and focused essays about one's career and educational background that presumably qualify one to perform the duties of the position being applied for.

An Executive Core Qualification, or ECQ, is a narrative statement that is required when applying to Senior Executive Service positions within the US Federal government. Like the KSAs, ECQs are used along with resumes to determine who the best applicants are when several candidates qualify for a job. The Office of Personnel Management has established five executive core qualifications that all applicants seeking to enter the Senior Executive Service must demonstrate.

Non-literary types

Film

A film essay (also essay film or cinematic essay) consists of the evolution of a theme or an idea rather than a plot per se, or the film literally being a cinematic accompaniment to a narrator reading an essay. From another perspective, an essay film could be defined as combining a documentary film's visual basis with a form of commentary that contains elements of self-portrait (rather than autobiography), where the signature (rather than the life story) of the filmmaker is apparent. The cinematic essay often blends documentary, fiction, and experimental filmmaking, using varied tones and editing styles.

The genre is not well-defined but might include propaganda works of early Soviet filmmakers like Dziga Vertov, and present-day filmmakers including Chris Marker, Michael Moore (Roger & Me, Bowling for Columbine and Fahrenheit 9/11), Errol Morris (The Thin Blue Line), Morgan Spurlock (Super Size Me) and Agnès Varda. Jean-Luc Godard describes his recent work as "film-essays". Two filmmakers whose work was the antecedent to the cinematic essay include Georges Méliès and Bertolt Brecht. Méliès made a short film, The Coronation of Edward VII (1902), about the 1902 coronation of King Edward VII, which mixes actual footage with shots of a recreation of the event. Brecht was a playwright who experimented with film and incorporated film projections into some of his plays. Orson Welles made an essay film in his own pioneering style, released in 1974, called F for Fake, which dealt specifically with art forger Elmyr de Hory and with the themes of deception, "fakery", and authenticity in general.

David Winks Gray's article "The essay film in action" states that the "essay film became an identifiable form of filmmaking in the 1950s and '60s". He states that since that time, essay films have tended to be "on the margins" of the filmmaking world. Essay films have a "peculiar searching, questioning tone ... between documentary and fiction" but without "fitting comfortably" into either genre. Gray notes that just like written essays, essay films "tend to marry the personal voice of a guiding narrator (often the director) with a wide swath of other voices". The University of Wisconsin Cinematheque website echoes some of Gray's comments; it calls a film essay an "intimate and allusive" genre that "catches filmmakers in a pensive mood, ruminating on the margins between fiction and documentary" in a manner that is "refreshingly inventive, playful, and idiosyncratic".

Music

In the realm of music, composer Samuel Barber wrote a set of "Essays for Orchestra", relying on the form and content of the music to guide the listener's ear, rather than any extra-musical plot or story.

Photography

A photographic essay strives to cover a topic with a linked series of photographs. Photo essays range from purely photographic works to photographs with captions or small notes to full-text essays with a few or many accompanying photographs. Photo essays can be sequential in nature, intended to be viewed in a particular order—or they may consist of non-ordered photographs viewed all at once or in an order that the viewer chooses. All photo essays are collections of photographs, but not all collections of photographs are photo essays. Photo essays often address a certain issue or attempt to capture the character of places and events.

Visual arts

In the visual arts, an essay is a preliminary drawing or sketch that forms a basis for a final painting or sculpture, made as a test of the work's composition (this meaning of the term, like several of those following, comes from the word essay's meaning of "attempt" or "trial").



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2122 2024-04-17 00:01:37

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,428

Re: Miscellany

2124) Satellite

Gist

A satellite is something small or less powerful that orbits around something bigger. It often describes a body in space, such as an artificial satellite that orbits the Earth and beams down signals that power devices like cell phones.

The word satellite was first used to describe a follower of someone in a superior position. The word's meaning later broadened to describe anything small that's dependent on something larger. The small satellite circles around the more powerful force, like a moon orbiting a planet. Satellite can describe a small country — a satellite country — controlled by a larger one, or a large organization that has a small office — a satellite office — in another location.

Summary

Satellite is a natural object (moon) or spacecraft (artificial satellite) orbiting a larger astronomical body. Most known natural satellites orbit planets; the Earth’s Moon is the most obvious example.

All the planets in the solar system except Mercury and Venus have natural satellites. More than 160 such objects have so far been discovered, with Jupiter and Saturn together contributing about two-thirds of the total. The planets’ natural satellites vary greatly in size. Some of them measure less than 10 km (6 miles) in diameter, as in the case of some of Jupiter’s moons. A few are larger than Mercury—for example, Saturn’s Titan and Jupiter’s Ganymede, each of which is more than 5,000 km (about 3,100 miles) in diameter. The satellites also differ significantly in composition. The Moon, for example, consists almost entirely of rocky material. On the other hand, the composition of Saturn’s Enceladus is 50 percent or more ice. Some asteroids are known to have their own tiny moons.

Artificial satellites can be either unmanned (robotic) or manned. The first artificial satellite to be placed in orbit was the unmanned Sputnik 1, launched October 4, 1957, by the Soviet Union. Since then, thousands have been sent into Earth orbit. Various robotic artificial satellites have also been launched into orbit around Venus, Mars, Jupiter, and Saturn, as well as around the Moon and the asteroid Eros. Spacecraft of this type are used for scientific research and for other purposes, such as communication, weather forecasting, navigation and global positioning, Earth resources management, and military intelligence. Examples of manned satellites include space stations, space shuttle orbiters circling Earth, and Apollo spacecraft in orbit around the Moon or Earth.

Details

A satellite or artificial satellite is an object, typically a spacecraft, placed into orbit around a celestial body. Satellites have a variety of uses, including communication relay, weather forecasting, navigation (e.g. the Global Positioning System, GPS), broadcasting, scientific research, and Earth observation. Additional military uses are reconnaissance, early warning, signals intelligence and, potentially, weapon delivery. Other satellites include the final rocket stages that place satellites in orbit and formerly useful satellites that have since become defunct.

Except for passive satellites, most satellites have an electricity generation system for equipment on board, such as solar panels or radioisotope thermoelectric generators (RTGs). Most satellites also have a method of communication to ground stations, called transponders. Many satellites use a standardized bus to save cost and work, the most popular of which are small CubeSats. Similar satellites can work together as groups, forming constellations. Because of the high launch cost to space, most satellites are designed to be as lightweight and robust as possible. Most communication satellites are radio relay stations in orbit and carry dozens of transponders, each with a bandwidth of tens of megahertz.

Satellites are carried from the surface into orbit by launch vehicles, high enough to avoid orbital decay caused by the atmosphere. Satellites can then change or maintain their orbit by propulsion, usually by chemical or ion thrusters. As of 2018, about 90% of the satellites orbiting the Earth are in low Earth orbit or geostationary orbit; geostationary means the satellite appears to stay still in the sky relative to a fixed point on the ground. Some imaging satellites choose a Sun-synchronous orbit because they can scan the entire globe with similar lighting. As the number of satellites and amount of space debris around Earth increases, the threat of collision has become more severe. A small number of satellites orbit other bodies (such as the Moon, Mars, and the Sun) or many bodies at once (two for a halo orbit, three for a Lissajous orbit).

Earth observation satellites gather information for reconnaissance, mapping, monitoring the weather, ocean, forest, etc. Space telescopes take advantage of outer space's near perfect vacuum to observe objects with the entire electromagnetic spectrum. Because satellites can see a large portion of the Earth at once, communications satellites can relay information to remote places. The signal delay from satellites and their orbit's predictability are used in satellite navigation systems, such as GPS. Space probes are satellites designed for robotic space exploration outside of Earth, and space stations are in essence crewed satellites.

The first artificial satellite launched into the Earth's orbit was the Soviet Union's Sputnik 1, on October 4, 1957. As of 31 December 2022, there were 6,718 operational satellites in Earth orbit, of which 4,529 belonged to the United States (3,996 commercial), 590 to China, 174 to Russia, and 1,425 to other nations.

History

Early proposals

The first published mathematical study of the possibility of an artificial satellite was Newton's cannonball, a thought experiment by Isaac Newton to explain the motion of natural satellites, in his Philosophiæ Naturalis Principia Mathematica (1687). The first fictional depiction of a satellite being launched into orbit was a short story by Edward Everett Hale, "The Brick Moon" (1869). The idea surfaced again in Jules Verne's The Begum's Fortune (1879).

In 1903, Konstantin Tsiolkovsky (1857–1935) published Exploring Space Using Jet Propulsion Devices, which was the first academic treatise on the use of rocketry to launch spacecraft. He calculated the orbital speed required for a minimal orbit, and inferred that a multi-stage rocket fueled by liquid propellants could achieve this.
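
As a rough illustration of the calculation Tsiolkovsky performed, here is a minimal Python sketch of the circular orbital speed v = sqrt(GM/r); the constants are standard values, and the function name is an illustrative assumption:

import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24   # mass of the Earth, kg
R_EARTH = 6.371e6    # mean radius of the Earth, m

def circular_orbital_speed(altitude_m):
    # Speed needed to remain in a circular orbit at the given altitude.
    r = R_EARTH + altitude_m          # orbital radius measured from Earth's centre
    return math.sqrt(G * M_EARTH / r)

# At roughly 200 km, a "minimal orbit", the required speed is about 7.8 km/s.
print(circular_orbital_speed(200e3))

Reaching a speed of this magnitude is what led to his conclusion that a multi-stage, liquid-fueled rocket would be needed.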

Herman Potočnik explored the idea of using orbiting spacecraft for detailed peaceful and military observation of the ground in his 1928 book, The Problem of Space Travel. He described how the special conditions of space could be useful for scientific experiments. The book described geostationary satellites (first put forward by Konstantin Tsiolkovsky) and discussed communication between them and the ground using radio, but fell short of the idea of using satellites for mass broadcasting and as telecommunications relays.

In a 1945 Wireless World article, English science fiction writer Arthur C. Clarke described in detail the possible use of communications satellites for mass communications. He suggested that three geostationary satellites would provide coverage over the entire planet.
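
To see why three geostationary satellites suffice, here is a minimal back-of-the-envelope sketch in Python (standard constants; the variable names are illustrative): Kepler's third law gives the geostationary radius, and simple geometry gives the fraction of the Earth's surface visible from that altitude.

import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24   # mass of the Earth, kg
R_EARTH = 6.371e6    # mean radius of the Earth, m
T_SIDEREAL = 86164.0 # sidereal day in seconds (one Earth rotation)

# Kepler's third law solved for the orbital radius: r = (G*M*T^2 / (4*pi^2))^(1/3)
r_geo = (G * M_EARTH * T_SIDEREAL**2 / (4 * math.pi**2)) ** (1.0 / 3.0)
altitude = r_geo - R_EARTH                  # about 35,800 km above the surface

# Fraction of a sphere visible from altitude h is h / (2 * (R + h)).
visible_fraction = altitude / (2 * r_geo)   # about 0.42

print(f"GEO altitude: {altitude / 1e3:.0f} km, visible fraction: {visible_fraction:.1%}")

Each satellite therefore sees roughly 42% of the surface, so three spaced 120° apart cover essentially the whole planet apart from the polar regions, which is what Clarke proposed.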

In May 1946, the United States Air Force's Project RAND released the Preliminary Design of an Experimental World-Circling Spaceship, which stated "A satellite vehicle with appropriate instrumentation can be expected to be one of the most potent scientific tools of the Twentieth Century." The United States had been considering launching orbital satellites since 1945 under the Bureau of Aeronautics of the United States Navy. Project RAND eventually released the report, but considered the satellite to be a tool for science, politics, and propaganda, rather than a potential military weapon.

In 1946, American theoretical astrophysicist Lyman Spitzer proposed an orbiting space telescope.

In February 1954, Project RAND released "Scientific Uses for a Satellite Vehicle", by R. R. Carhart. This expanded on potential scientific uses for satellite vehicles and was followed in June 1955 with "The Scientific Use of an Artificial Satellite", by H. K. Kallmann and W. W. Kellogg.

First satellites

The first artificial satellite was Sputnik 1, launched by the Soviet Union on 4 October 1957 under the Sputnik program, with Sergei Korolev as chief designer. Sputnik 1 helped to identify the density of high atmospheric layers through measurement of its orbital change and provided data on radio-signal distribution in the ionosphere. The unanticipated announcement of Sputnik 1's success precipitated the Sputnik crisis in the United States and ignited the so-called Space Race within the Cold War.

In the context of activities planned for the International Geophysical Year (1957–1958), the White House announced on 29 July 1955 that the U.S. intended to launch satellites by the spring of 1958. This became known as Project Vanguard. On 31 July, the Soviet Union announced its intention to launch a satellite by the fall of 1957.

Sputnik 2 was launched on 3 November 1957 and carried the first living passenger into orbit, a dog named Laika.

In early 1955, following pressure from the American Rocket Society and the National Science Foundation, and with the International Geophysical Year approaching, the Army and Navy worked on Project Orbiter with two competing programs. The Army used the Jupiter-C rocket, while the civilian Navy program used the Vanguard rocket to launch a satellite. Explorer 1 became the United States' first artificial satellite on 31 January 1958. The information sent back from its radiation detector led to the discovery of the Earth's Van Allen radiation belts. The TIROS-1 spacecraft, launched on April 1, 1960, as part of NASA's Television Infrared Observation Satellite (TIROS) program, sent back the first television footage of weather patterns to be taken from space.

In June 1961, three and a half years after the launch of Sputnik 1, the United States Space Surveillance Network cataloged 115 Earth-orbiting satellites. Astérix or A-1 (initially conceptualized as FR.2 or FR-2) is the first French satellite. It was launched on 26 November 1965 by a Diamant A rocket from the CIEES launch site at Hammaguir, Algeria. With Astérix, France became the sixth country to have an artificial satellite and the third country to launch a satellite on its own rocket.

Early satellites were built to unique designs. With advancements in technology, multiple satellites began to be built on single model platforms called satellite buses. The first standardized satellite bus design was the HS-333 geosynchronous (GEO) communication satellite, launched in 1972. Since 1997, FreeFlyer has been available as a commercial off-the-shelf software application for satellite mission analysis, design, and operations.

Later satellite development

While Canada was the third country to build a satellite that was launched into space, it was launched aboard an American rocket from an American spaceport. The same goes for Australia, whose launch of its first satellite involved a donated U.S. Redstone rocket and American support staff, as well as a joint launch facility with the United Kingdom. The first Italian satellite, San Marco 1, was launched on 15 December 1964 on a U.S. Scout rocket from Wallops Island (Virginia, United States) with an Italian launch team trained by NASA. Similarly, almost all subsequent first national satellites were launched on foreign rockets.

By the early 2000s, and particularly after the advent of CubeSats and the increased launch of microsats, frequently placed in the lower altitudes of low Earth orbit (LEO), satellites began to be designed to break up and burn up entirely in the atmosphere at the end of their lives. For example, SpaceX Starlink satellites, the first large satellite internet constellation to exceed 1,000 active satellites on orbit in 2020, are designed to be fully demisable, burning up completely on atmospheric reentry at the end of their life or in the event of an early satellite failure.

After the late 2010s, and especially after the advent and operational fielding of large satellite internet constellations, during which the number of active satellites on orbit more than doubled over a period of five years, the companies building the constellations began to propose regular planned deorbiting of older satellites reaching end of life as part of the regulatory process of obtaining a launch license. The largest artificial satellite ever built is the International Space Station.

Over the years, many countries, including Algeria, Argentina, Australia, Austria, Brazil, Canada, Chile, China, Denmark, Egypt, Finland, France, Germany, India, Iran, Israel, Italy, Japan, Kazakhstan, South Korea, Malaysia, Mexico, the Netherlands, Norway, Pakistan, Poland, Russia, Saudi Arabia, South Africa, Spain, Switzerland, Thailand, Turkey, Ukraine, the United Kingdom and the United States, have had satellites in orbit.

Japan's space agency (JAXA) and NASA plan to send a wooden satellite prototype called LignoSat into orbit in the summer of 2024. They have been working on this project for a few years and sent the first wood samples into space in 2021 to test the material's resilience to space conditions.

Components

Orbit and altitude control

Most satellites use chemical or ion propulsion to adjust or maintain their orbit, coupled with reaction wheels to control their three axes of rotation, or attitude. Satellites close to Earth are affected most by variations in the Earth's magnetic and gravitational fields and by the Sun's radiation pressure; satellites farther away are affected more by the gravitational fields of other bodies, such as the Moon and the Sun. Satellites use ultra-white reflective coatings to prevent damage from UV radiation. Without orbit and orientation control, satellites in orbit cannot communicate with ground stations on the Earth.

Chemical thrusters on satellites usually use monopropellant (one-part) or bipropellant (two-part) fuels that are hypergolic, meaning they combust spontaneously on contact with each other or with a catalyst. The most commonly used propellant mixtures on satellites are hydrazine-based monopropellants or monomethylhydrazine–dinitrogen tetroxide bipropellants. Ion thrusters on satellites are usually Hall-effect thrusters, which generate thrust by accelerating positive ions through a negatively charged grid. Ion propulsion is more propellant-efficient than chemical propulsion, but its thrust is very small (around 0.5 N or 0.1 lbf) and thus requires a longer burn time. The thrusters usually use xenon because it is inert, is easily ionized, has a high atomic mass, and is storable as a high-pressure liquid.
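
The trade-off between the two propulsion types can be made concrete with the 0.5 N figure above. In the sketch below, the 1,000 kg satellite mass, the 100 m/s manoeuvre, and the 400 N chemical thrust are all assumed for illustration.

m = 1000.0   # assumed satellite mass, kg
dv = 100.0   # assumed change in velocity for the manoeuvre, m/s

# Burn time t = m * dv / F, ignoring the small change in propellant mass
for name, thrust_n in [("chemical thruster", 400.0), ("ion thruster", 0.5)]:
    t = m * dv / thrust_n
    print(name, round(t / 3600, 2), "hours of burn time")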

Power

Most satellites use solar panels to generate power, while a few in deep space with limited sunlight use radioisotope thermoelectric generators. Slip rings attach the solar panels to the satellite; they allow the panels to rotate so that they stay perpendicular to the sunlight and generate the most power. All satellites with solar panels must also carry batteries, because sunlight is blocked inside the launch vehicle and during the night portion of each orbit. The most common type of battery for satellites is lithium-ion; in the past, nickel–hydrogen was used.
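
A rough sketch of why the batteries are sized as they are: they must carry the full load through each eclipse at a limited depth of discharge. All numbers below are assumptions for illustration, not figures from the source.

load_w = 1500.0           # assumed spacecraft load during eclipse, W
eclipse_h = 0.6           # assumed eclipse duration per orbit (~35 min), h
depth_of_discharge = 0.3  # assumed: use only 30% of capacity per cycle

# Capacity must cover the eclipse load at the allowed depth of discharge
required_wh = load_w * eclipse_h / depth_of_discharge
print(round(required_wh), "Wh of battery capacity")  # 3,000 Wh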

Additional Information

A satellite is an object in space that orbits or circles around a bigger object. There are two kinds of satellites: natural (such as the moon orbiting the Earth) or artificial (such as the International Space Station orbiting the Earth).

There are dozens upon dozens of natural satellites in the solar system, with almost every planet having at least one moon. Saturn, for example, has at least 53 natural satellites, and between 2004 and 2017, it also had an artificial one — the Cassini spacecraft, which explored the ringed planet and its moons.

Artificial satellites, however, did not become a reality until the mid-20th century. The first artificial satellite was Sputnik, a Soviet beach-ball-size satellite that lifted off on Oct. 4, 1957. That act shocked much of the Western world, as it was believed the Soviets did not have the capability to send satellites into space.

Following that feat, on Nov. 3, 1957 the Soviets launched an even more massive satellite — Sputnik 2 — which carried a dog, Laika. The United States' first satellite was Explorer 1 on Jan. 31, 1958. The satellite was only 2 percent the mass of Sputnik 2, however, at 30 pounds (13 kg).

The Sputniks and Explorer 1 became the opening shots in a space race between the United States and the Soviet Union that lasted until at least the late 1960s. The focus on satellites as political tools began to give way to people as both countries sent humans into space in 1961. Later in the decade, however, the aims of both countries began to split. While the United States went on to land people on the moon and create the space shuttle, the Soviet Union constructed the world's first space station, Salyut 1, which launched in 1971. (Other stations followed, such as the United States' Skylab and the Soviet Union's Mir.)

Other countries began to send their own satellites into space as the benefits rippled through society. Weather satellites improved forecasts, even for remote areas. Land-watching satellites such as the Landsat series (on its ninth generation now) tracked changes in forests, water and other parts of Earth's surface over time. Telecommunications satellites made long-distance telephone calls and eventually, live television broadcasts from across the world a normal part of life. Later generations helped with Internet connections.

With the miniaturization of computers and other hardware, it's now possible to send up much smaller satellites that can do science, telecommunications or other functions in orbit. It's common now for companies and universities to create "CubeSats", or cube-shaped satellites that frequently populate low-Earth orbit.

These can be lofted on a rocket along with a bigger payload, or deployed from a mobile launcher on the International Space Station (ISS). NASA is now considering sending CubeSats to Mars or to Jupiter's moon Europa for future missions, although the CubeSats aren't confirmed for inclusion.

The ISS is the biggest satellite in orbit, and took over a decade to construct. Piece by piece, 15 nations contributed financial and physical infrastructure to the orbiting complex, which was put together between 1998 and 2011. Program officials expect the ISS to keep running until at least 2024.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2123 2024-04-18 00:02:02

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,428

Re: Miscellany

2125) Orator

Gist

A person giving a speech is called an orator, like the gifted orator who raised excellent points, making everyone in the audience want to join his revolution.

The noun orator traces back to the Latin word orare, meaning to “speak before a court or assembly, plead.” Orator is really just a formal way of saying “speaker.” Technically, you can use it to describe anyone who is giving a speech, whether it’s a speaker at the United Nations or a classmate giving a short presentation. However, orator often implies that the speaker is particularly gifted.

Summary

An orator is someone who pleads a case in public. Originally, it meant speaking in a public place for or against a person or a proposal. In ancient Greece and Rome, and later in France and England, orators spoke for and against the accused in courts, and for or against big political decisions, such as whether to go to war. Gradually, it came to mean someone who spoke in public on formal occasions.

Oratory, or rhetoric, is the skill of argument or persuasion used by orators. The invention of printing allowed books to be multiplied and produced cheaply, which made it possible for orators to do their persuading in print as well as in speech. Adolf Hitler and Winston Churchill are good examples of how orators in the 20th century used media such as radio and movies where once they could only use speech. Both wrote books which sold in large numbers, though Churchill's books were about more than politics. Today television and newspapers play a vital role in deciding elections; the web less so.

Other types of orator are those who wish to change beliefs. Religious preachers like Martin Luther and John Knox changed religion in western Europe; William Wilberforce and Sojourner Truth led the fight against the evil of slavery. Emmeline Pankhurst, Martin Luther King Jr and others fought to get equal rights for all citizens.

We have orators today as much as the ancient Greeks did. The main difference is that the Greeks could see and listen to them face to face, but we rarely do today.

Rhetoric

The study of how persuasion is done by orators is called rhetoric. It has been studied for at least 2,500 years, and there are a huge number of books about it.

Details

Oratory is the rationale and practice of persuasive public speaking. It is immediate in its audience relationships and reactions, but it may also have broad historical repercussions. The orator may become the voice of political or social history.

A vivid instance of the way a speech can focus the concerns of a nation was Martin Luther King's address to a massive civil rights demonstration in Washington, D.C., in 1963. Repeating the phrase "I have a dream," King applied the oratorical skill he had mastered as a preacher to heighten his appeal for further rights for Black Americans to an intensity that galvanized millions.

An oration involves a speaker; an audience; a background of time, place, and other conditions; a message; transmission by voice, articulation, and bodily accompaniments; and may, or may not, have an immediate outcome.

Rhetoric, classically the theoretical basis for the art of oratory, is the art of using words effectively. Oratory is instrumental and practical, as distinguished from poetic or literary composition, which traditionally aims at beauty and pleasure. Oratory is of the marketplace and as such not always concerned with the universal and permanent. The orator in his purpose and technique is primarily persuasive rather than informational or entertaining. An attempt is made to change human behaviour or to strengthen convictions and attitudes. The orator would correct wrong positions of the audience and establish psychological patterns favourable to his own wishes and platform. Argument and rhetorical devices are used, as are evidence, lines of reasoning, and appeals that support the orator’s aims. Exposition is employed to clarify and enforce the orator’s propositions, and anecdotes and illustrations are used to heighten response.

The orator need not be a first-rate logician, though a capacity for good, clear thought helps to penetrate into the causes and results of tentative premises and conclusions and to use analogy, generalizations, assumptions, deductive–inductive reasoning, and other types of inference. Effective debaters, who depend more heavily on logic, however, are not always impressive orators because superior eloquence also requires strong appeals to the motives, sentiments, and habits of the audience. Oratorical greatness is invariably identified with strong emotional phrasing and delivery. When the intellectual qualities dominate with relative absence of the affective appeals, the oration fails just as it does when emotion sweeps aside reason.

The ideal orator is personal in his appeals and strong in ethical proofs, rather than objective or detached. He enforces his arguments by his personal commitment to his advocacy. William Pitt, later Lord Chatham, punctuated his dramatic appeals for justice to the American colonies with references to his own attitudes and beliefs. So were personal appeals used by the Irish orator Daniel O’Connell, the French orators Mirabeau and Robespierre, and the Americans Daniel Webster, Wendell Phillips, and Robert G. Ingersoll.

The orator, as illustrated by Edmund Burke, has a catholic attitude. Burke's discussion of American taxation, conciliation, Irish freedoms, justice for India, and the French Revolution shows analytical and intellectual maturity, the power of apt generalization, and comprehensiveness of treatment.

Oratory has traditionally been divided into legal, political, or ceremonial, or, according to Aristotle, forensic, deliberative, or epideictic.

Typically, forensic, or legal, oratory is at its best in the defense of individual freedom and resistance to prosecution. It was the most characteristic type of oratory in ancient Athens, where laws stipulated that litigants should defend their own causes. In the so-called Golden Age of Athens, the 4th century BC, great speakers in both the law courts and the assembly included Lycurgus, Demosthenes, Hyperides, Aeschines, and Dinarchus.

In the 1st century BC of ancient Rome, Cicero became the foremost forensic orator and exerted a lasting influence on later Western oratory and prose style. Cicero successfully prosecuted Gaius Verres, notorious for his mismanagement while governor of Sicily, and drove him into exile, and he dramatically presented arguments against Lucius Sergius Catiline that showed a command of analysis and logic and great skill in motivating his audience. Cicero also delivered 14 bitter indictments against Mark Antony, who was to him the embodiment of despotism.

Among the great forensic orators of later times was the 18th- and 19th-century English advocate Thomas Erskine, who contributed to the cause of English liberties and the humane application of the legal system.

Demosthenes, the Athenian lawyer, soldier, and statesman, was a great deliberative orator. In one of his greatest speeches, “On the Crown,” he defended himself against the charge by his political rival Aeschines that he had no right to the golden crown granted him for his services to Athens. So brilliant was Demosthenes’ defense of his public actions and principles that Aeschines, who was also a powerful orator, left Athens for Rhodes in defeat.

The third division of persuasive speaking, epideictic, or ceremonial, oratory was panegyrical, declamatory, and demonstrative. Its aim was to eulogize an individual, a cause, occasion, movement, city, or state, or to condemn them. Prominent in ancient Greece were the funeral orations in honour of those killed in battle. The outstanding example of these is one by Pericles, perhaps the most finished orator of the 5th century BC, in honour of those killed in the first year of the Peloponnesian War.

The 19th-century American speaker Daniel Webster excelled in all three major divisions—forensic, deliberative, and epideictic oratory. He brought more than 150 pleas before the U.S. Supreme Court, including the Dartmouth College Case (1819) and the Gibbons v. Ogden case (1824); he debated in the U.S. Senate against Robert Young Hayne and John Calhoun on the issues of federal government versus states’ rights, slavery, and free trade; and he delivered major eulogies, including those on the deaths of Thomas Jefferson and John Adams.

Another major type of persuasive speaking that developed later than ancient Greek and Roman rhetoric was religious oratory. For more than 1,000 years after Cicero the important orators were churchmen rather than politicians, lawyers, or military spokesmen. This tradition derived from the Judaean prophets, such as Jeremiah and Isaiah, and in the Christian Era, from the Apostle Paul, his evangelistic colleagues, and such later fathers of the church as Tertullian, Chrysostom, and St. Augustine. Ecclesiastical speaking became vigorously polemical. The rhetorical principles of Aristotle and Cicero were adopted by ecclesiastical leaders who challenged rival doctrines and attacked the sins of the communities.

In the Middle Ages, Pope Urban II elicited a great response to his oratorical pleas for enlistment in the First Crusade. The Second Crusade was urged on with great eloquence by St. Bernard, abbot of Clairvaux. In the 15th and 16th centuries the revolt against the papacy and the Reformation movement stimulated the eloquence of Huldrych Zwingli, John Calvin, Hugh Latimer, and, most notably, Martin Luther. At the Diet of Worms, as elsewhere, Luther spoke with courage, sincerity, and well-buttressed logic. Religious controversies in the 17th century engaged such great oratorical skills as those of Richard Baxter, the English Puritan, and Catholic bishop J.B. Bossuet of France. In the 18th century the Methodist George Whitefield in England and North America, and the Congregationalist Jonathan Edwards in America, were notably persuasive speakers. Preachers of oratorical power in the 19th century included Henry Ward Beecher, famous for his antislavery speeches and his advocacy of women’s suffrage from his Congregational pulpit in Plymouth Church, Brooklyn, N.Y., and William Ellery Channing, American spokesman for Unitarianism.

Because the orator intuitively expresses the fears, hopes, and attitudes of his audience, a great oration is to a large extent a reflection of those to whom it is addressed. The audience of Pericles in ancient Greece, for example, was the 30,000 or 40,000 citizens out of the state’s total population of 200,000 or 300,000, including slaves and others. These citizens were sophisticated in the arts, politics, and philosophy. Directing their own affairs in their Assembly, they were at once deliberative, administrative, and judicial. Speaker and audience were identified in their loyalty to Athens. Similarly, the senatorial and forum audience of Cicero in ancient Rome was an even smaller elite among the hundreds of thousands of slaves and aliens who thronged the Roman world. In the Forum the citizens, long trained in law, and with military, literary, and political experience, debated and settled the problems. The speeches of Cato, Catiline, Cicero, Julius Caesar, Brutus, Antony, Augustus, and the others were oratory of and for the Roman citizen.

In the Christian Era, however, the religious orator often found himself addressing an alien audience that he hoped to convert. To communicate with them, the Christian often appealed to ancient Greek and Roman thought, which had achieved widespread authority, and to Judaean thought and method, which had the sanction of scripture. By the time of the Reformation, however, Christian dogma had become so codified that most of the disputation could be carried on in terms of doctrine that had become well known to all.

The history of the British Parliament reveals a continuing trend toward common speech and away from the allusions to ancient Greek and Roman thought that abounded when the members consisted largely of classically educated aristocrats.

In the golden age of British political oratory of the late 18th century, greater parliamentary freedom and the opportunity to defend and extend popular rights gave political oratory tremendous energy, personified by such brilliant orators as both the elder and the younger William Pitt, John Wilkes, Charles James Fox, Richard Sheridan, Edmund Burke, and William Wilberforce. Parliamentary reforms of the 19th century, initiated and promoted by Macaulay, Disraeli, Gladstone, and others of the century, led to more and more direct political speaking on the hustings with the rank and file outside Parliament. Burke and his contemporaries had spoken almost entirely in the Commons or Lords, or to limited electors in their borough homes, but later political leaders appealed directly to the population. With the rise of the Labour Party in the 20th century and the further adaptation of government to the people, delivery became less declamatory and studied. The dramatic stances of the 18th-century parliamentary debaters disappeared as a more direct, spontaneous style prevailed. As delivery habits changed, so did the oratorical language. Alliteration, antithesis, parallelism, and other rhetorical figures of thought and of language had sometimes been carried to extremes, in speeches addressed to those highly trained in Latin- and Greek-language traditions. These devices gave way, however, to a clearness of style and vividness consonant with the idiom of the common man and later with the vocabulary of radio and television.

Similarly, American speech inherited and then gradually discarded British oratorical techniques for its own speaking vernacular. John Calhoun, in his addresses to Congress on behalf of the South, absorbed much of the Greek political philosophy and methods of oral composition and presentation, and his principal opponent in debate, Daniel Webster, too, had the marks of British communicative tradition. This inheritance was absorbed into the speaking adjustments indigenous to those later peoples of New England, the West, and the South. The orator whose speech preceded Lincoln's at Gettysburg—Edward Everett, statesman and former professor of Greek literature at Harvard—was a classical scholar. Lincoln, on the same platform, had an address born of his native Middle West yet expressed with authentic eloquence.

The 20th century saw the development of two leaders of World War II who applied oratorical techniques in vastly different ways with equal effect. It was primarily through his oratory that Adolf Hitler whipped the defeated and divided Germans into a frenzy of conquest, while Winston Churchill used his no less remarkable powers to summon up in the English people their deepest historical reserves of strength against the onslaught. Subsequently, though the importance of persuasive speech in no way diminished, radio and television so reshaped the method of delivery that much of the theory of traditional oratory often seemed no longer to apply. The radio fireside chats of Pres. Franklin Roosevelt were the most successful of his persuasions. In the televised debates of John F. Kennedy and Richard Nixon during the U.S. presidential campaign in 1960, the candidates might be said to have been most persuasive when they were least oratorical, in the traditional sense of the term. Nonetheless, even conventional oratory persisted as peoples in newly developing nations were swept up into national and international political struggles.

A good general collection is H. Peterson (ed.), A Treasury of the World’s Great Speeches, rev. ed. (1965).

Additional Information

An orator, or oratist, is a public speaker, especially one who is eloquent or skilled.

Etymology

Recorded in English c. 1374, with a meaning of "one who pleads or argues for a cause", from Anglo-French oratour, Old French orateur (14th century), Latin orator ("speaker"), from orare ("speak before a court or assembly; plead"), derived from a Proto-Indo-European base *or- ("to pronounce a ritual formula").

The modern meaning of the word, "public speaker", is attested from c. 1430.

History

In ancient Rome, the art of speaking in public (Ars Oratoria) was a professional competence especially cultivated by politicians and lawyers. As the Greeks were still seen as the masters in this field, as in philosophy and most sciences, the leading Roman families often either sent their sons to study these things under a famous master in Greece (as was the case with the young Julius Caesar), or engaged a Greek teacher (under pay or as a slave).

In the young revolutionary French Republic, Orateur (French for "orator") was the formal title for members delegated by the Tribunat to the Corps législatif, similar to the role of a parliamentary speaker, to explain the Tribunat's position on a presented bill.

In the 19th century, orators and lecturers such as Mark Twain, Charles Dickens, and Col. Robert G. Ingersoll were major providers of popular entertainment.

A pulpit orator is a Christian author, often a clergyman, renowned for their ability to write or deliver (from the pulpit in church, hence the word) rhetorically skilled religious sermons.

In some universities, the title 'Orator' is given to the official whose task it is to give speeches on ceremonial occasions, such as the presentation of honorary degrees.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2124 2024-04-18 23:13:39

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,428

Re: Miscellany

2126) Medical Transcriptionist

Gist

Medical transcription, also known as MT, is an allied health profession dealing with the process of transcribing voice-recorded medical reports that are dictated by physicians, nurses and other healthcare practitioners. Medical reports can be voice files, notes taken during a lecture, or other spoken material.

Summary

A medical transcriptionist is a professional who converts voice recordings from healthcare appointments into written reports. You see these reports in your electronic health records, and they help your provider give you the best possible care. Medical transcriptionists receive training in anatomy, physiology, medical terms and grammar.

What is a medical transcriptionist?

A medical transcriptionist is a healthcare professional who converts voice recordings into written reports. Primary care physicians and other healthcare providers create voice recordings to quickly save appointment notes. Instead of listening to these recordings, you’ll see reports in your electronic medical records or health portal. Medical transcriptionists likely play a role in creating these records so you and your provider can access them later.

You see your healthcare providers in person and have conversations with them. But you’ll probably never meet your medical transcriptionists. They’re like the stage crew behind the curtains that makes sure the show goes on smoothly and safely.

Transcriptionists must know medical concepts and terms to ensure your medical notes are accurate and clear. It’s a high-stakes job. Mistakes in medical reports that seem small could have a serious impact on your health. This sense of responsibility, as well as the fast-paced nature of the job, may cause stress for medical transcriptionists.

Advances in technology are changing the field. For example, transcription software can convert voice recordings to written transcripts with increasing accuracy. As a result, employers (like hospitals) may not need to hire as many new transcriptionists. But it’s still an important role that’s worth knowing about, whether you receive care in a medical setting or are interested in working in the field. Medical transcriptionists are part of the larger team that supports your health and helps you get the care you need.

Another name for a medical transcriptionist is a healthcare documentation specialist.

Details

Medical transcription, also known as MT, is an allied health profession dealing with the process of transcribing voice-recorded medical reports that are dictated by physicians, nurses and other healthcare practitioners. Medical reports can be voice files, notes taken during a lecture, or other spoken material. These are dictated over the phone or uploaded digitally via the Internet or through smart phone apps.

History

Medical transcription as it is currently known has existed since the beginning of the 20th century when standardization of medical records and data became critical to research. At that time, medical stenographers recorded medical information, taking doctors' dictation in shorthand. With the creation of audio recording devices, it became possible for physicians and their transcribers to work asynchronously.

Over the years, transcription equipment has changed from manual typewriters, to electric typewriters, to word processors, and finally, to computers. Storage methods have also changed: from plastic disks and magnetic belts to cassettes, endless loops, and digital recordings. Today, speech recognition (SR), also known as continuous speech recognition (CSR), is increasingly used, with medical transcriptionists and, in some cases, "editors" providing supplemental editorial services. Natural-language processing takes "automatic" transcription a step further, providing an interpretive function that speech recognition alone does not provide.

In the past, these medical reports consisted of very abbreviated handwritten notes that were added in the patient's file for interpretation by the primary physician responsible for the treatment. Ultimately, these handwritten notes and typed reports were consolidated into a single patient file and physically stored along with thousands of other patient records in the medical records department. Whenever the need arose to review the records of a specific patient, the patient's file would be retrieved from the filing cabinet and delivered to the requesting physician. To enhance this manual process, many medical record documents were produced in duplicate or triplicate by means of carbon copy.

In recent years, medical records have changed considerably. Although many physicians and hospitals still maintain paper records, the majority are stored as electronic records. This digital format allows immediate remote access by any physician who is authorized to review the patient information. Reports are stored electronically and printed selectively as the need arises. Many healthcare providers today work using handheld PCs or personal digital assistants (PDAs) and use software on them to record dictation.

Overview

Medical transcription is part of the healthcare industry that renders and edits doctor dictated reports, procedures, and notes in an electronic format in order to create files representing the treatment history of patients. Health practitioners dictate what they have done after performing procedures on patients, and MTs transcribe the oral dictation, edit reports that have gone through speech recognition software, or both.

Pertinent, up-to-date and confidential patient information is converted to a written text document by a medical transcriptionist (MT). This text may be printed and placed in the patient's record, retained only in its electronic format, or placed in the patient's record and also retained in its electronic format. Medical transcription can be performed by MTs who are employees in a hospital or who work at home as telecommuting employees for the hospital; by MTs working as telecommuting employees or independent contractors for an outsourced service that performs the work offsite under contract to a hospital, clinic, physician group or other healthcare provider; or by MTs working directly for the providers of service (doctors or their group practices) either onsite or telecommuting as employees or contractors. Hospital facilities often prefer electronic storage of medical records due to the sheer volume of hospital patients and the accompanying paperwork. The electronic storage in their database gives immediate access to subsequent departments or providers regarding the patient's care to date, notation of previous or present medications, notification of allergies, and establishes a history on the patient to facilitate healthcare delivery regardless of geographical distance or location.

The term transcript, or "report" is used to refer to a healthcare professional's specific encounter with a patient. This report is also referred to by many as a "medical record". Each specific transcribed record or report, with its own specific date of service, is then merged and becomes part of the larger patient record commonly known as the patient's medical history. This record is often called the patient's "chart" in a hospital setting.

Medical transcription encompasses the medical transcriptionist performing document typing and formatting functions according to an established criterion or format, transcribing the spoken word of the patient's care information into a written, easily readable form. A proper transcription requires correct spelling of all terms and words and correction of medical terminology or dictation errors. Medical transcriptionists also edit the transcribed documents and print or return the completed documents in a timely fashion. All transcription reports must comply with medico-legal concerns, policies and procedures, and laws under patient confidentiality.

In transcribing directly for a doctor or a group of physicians, specific formats and report types are used, depending on that doctor's speciality of practice, although history and physical exams or consults are most common. At most non-hospital sites, independent medical practices perform consultations as a second opinion, pre-surgical exams, and IMEs (Independent Medical Examinations) for liability insurance or disability claims. Some private practice family doctors choose not to use a medical transcriptionist, preferring to keep their patients' records in handwritten format, although this is not true of all family practitioners.

Currently, a growing number of medical providers send their dictation as digital voice files, using a method of transcription called speech or voice recognition. Speech recognition is still a nascent technology that loses much in translation. For dictating physicians to use the software, they must first train the program to recognize their spoken words. Dictation is read into the database, and the program continuously "learns" the spoken words and phrases.

Poor speech habits and other problems, such as heavy accents and mumbling, complicate the process for both the MT and the recognition software. An MT can "flag" such a report as unintelligible, but the recognition software will transcribe the unintelligible word(s) from the existing database of "learned" language. The result is often a "word salad" or missing text. Thresholds can be set to reject a bad report and return it for standard dictation, but these settings are arbitrary. Below a set percentage rate, the word salad passes for actual dictation. The MT simultaneously listens, reads, and "edits" the correct version. Every word must be confirmed in this process. The downside of the technology is when the time spent in this process cancels out the benefits. The quality of recognition can range from excellent to poor, with whole words and sentences missing from the report. Not infrequently, negative contractions and the word "not" are dropped altogether. These flaws trigger concerns that the present technology could have adverse effects on patient care. Control over quality can also be reduced when providers choose a server-based program from a vendor Application Service Provider (ASP).
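
The thresholding described above can be pictured as a simple triage rule over per-word recognition confidence scores. The sketch below is hypothetical; the function and thresholds are invented for illustration, not taken from any real product.

def route_report(word_confidences, word_threshold=0.85, mean_threshold=0.90):
    # Hypothetical triage: reject drafts whose recognition confidence is
    # too low and return them for standard dictation instead of editing.
    mean = sum(word_confidences) / len(word_confidences)
    low_words = sum(1 for c in word_confidences if c < word_threshold)
    if mean < mean_threshold or low_words > 0.1 * len(word_confidences):
        return "return for standard dictation"
    return "send draft to MT for editing"

print(route_report([0.99, 0.97, 0.55, 0.60, 0.58]))  # rejected: low confidence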

Downward adjustments in MT pay rates for voice recognition are controversial. Understandably, a client will seek optimum savings to offset any net costs. Yet vendors that overstate the gains in productivity do harm to MTs paid by the line. Despite the new editing skills required of MTs, significant reductions in compensation for voice recognition have been reported. Reputable industry sources put the field average for increased productivity in the range of 30–50%; yet this is still dependent on several other factors involved in the methodology.
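
A back-of-the-envelope check makes the dispute concrete. If the per-line rate is cut by 40% while productivity rises only 35% (both numbers hypothetical, the latter within the 30–50% range cited above), hourly earnings still fall.

rate_cut = 0.40           # hypothetical reduction in the per-line rate
productivity_gain = 0.35  # hypothetical increase in lines per hour

# Relative hourly earnings = (new rate) x (new output)
relative_earnings = (1 - rate_cut) * (1 + productivity_gain)
print(round((relative_earnings - 1) * 100), "% change in earnings")  # -19%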

Operationally, speech recognition technology (SRT) is an interdependent, collaborative effort. It is a mistake to treat it as compatible with the same organizational paradigm as standard dictation, a largely "stand-alone" system. The new software supplants an MT's former ability to realize immediate time-savings from programming tools such as macros and other word/format expanders. Requests for client/vendor format corrections delay those savings. If remote MTs cancel each other out with disparate style choices, they and the recognition engine may be trapped in a seesaw battle over control. Voice recognition managers should take care to ensure that the impositions on MT autonomy are not so onerous as to outweigh its benefits.

Medical transcription is still the primary mechanism for a physician to clearly communicate with other healthcare providers who access the patient record, to advise them on the state of the patient's health and past/current treatment, and to assure continuity of care. More recently, following Federal and State Disability Act changes, a written report (IME) became a requirement for documentation of a medical bill or an application for Workers' Compensation (or continuation thereof) insurance benefits based on requirements of Federal and State agencies.

As a profession

An individual who performs medical transcription is known as a medical transcriptionist (MT) or a Medical Language Specialist (MLS). The equipment used is called a medical transcriber, e.g., a cassette player with foot controls operated by the MT for report playback and transcription.

Education and training can be obtained through certificate or diploma programs, distance learning, or on-the-job training offered in some hospitals, although there are countries currently employing transcriptionists that require 18 months to 2 years of specialized MT training. Working in medical transcription leads to a mastery in medical terminology and editing, ability to listen and type simultaneously, utilization of playback controls on the transcriber (machine), and use of foot pedal to play and adjust dictations – all while maintaining a steady rhythm of execution. Medical transcription training normally includes coursework in medical terminology, anatomy, editing and proofreading, grammar and punctuation, typing, medical record types and formats, and healthcare documentation.

While medical transcription does not mandate registration or certification, individual MTs may seek out registration/certification for personal or professional reasons. Obtaining a certificate from a medical transcription training program does not entitle an MT to use the title of Certified Medical Transcriptionist. A Certified Healthcare Documentation Specialist (CHDS) credential can be earned by passing a certification examination conducted solely by the Association for Healthcare Documentation Integrity (AHDI), formerly the American Association for Medical Transcription (AAMT), as the credentialing designation they created. AHDI also offers the credential of Registered Healthcare Documentation Specialist (RHDS). According to AHDI, RHDS is an entry-level credential while the CHDS is an advanced level. AHDI maintains a list of approved medical transcription schools. Generally, certified medical transcriptionists earn more than their non-certified counterparts. It is also notable that training through an educational program that is approved by AHDI will increase the chances of an MT getting certified and getting hired.

There is a great degree of internal debate about which training program best prepares an MT for industry work. Yet, whether one has learned medical transcription from an online course, community college, high school night course, or on-the-job training in a doctor's office or hospital, a knowledgeable MT is highly valued. In lieu of these AHDI certification credentials, MTs who can consistently and accurately transcribe multiple document work-types and return reports within a reasonable turnaround-time (TAT) are sought after. TATs set by the service provider or agreed to by the transcriptionist should be reasonable but consistent with the need to return the document to the patient's record in a timely manner.

On March 7, 2006, the MT occupation became an eligible U.S. Department of Labor Apprenticeship, a 2-year program focusing on acute care facility (hospital) work. In May 2004, a pilot program for Vermont residents was initiated, with 737 applicants for only 20 classroom pilot-program openings. The objective was to train the applicants as MTs in a shorter time period.

The medical transcription process

When a patient visits a doctor, the doctor spends time with the patient discussing their medical problems and performing diagnostic services. After the patient leaves the office, the doctor uses a voice-recording device to record information about the patient encounter. This information may be recorded into a hand-held cassette recorder or into a regular telephone, dialed into a central server located in the hospital or transcription service office, which 'holds' the report for the transcriptionist. The report is then accessed by a medical transcriptionist, who listens to the dictation and transcribes it into the required format for the medical record; this medical record is considered a legal document. The next time the patient visits the doctor, the doctor will call for the medical record or the patient's entire chart, which will contain all reports from previous encounters. The doctor can on occasion refill the patient's medications after seeing only the medical record, although doctors prefer not to refill prescriptions without seeing the patient to establish whether anything has changed.

It is very important to have a properly formatted, edited, and reviewed medical transcription document. If a medical transcriptionist accidentally types the wrong medication or the wrong diagnosis, the patient could be at risk if the doctor (or their designee) does not review the document for accuracy. Both the doctor and the medical transcriptionist play an important role in making sure the transcribed dictation is correct and accurate. The doctor should speak slowly and concisely, especially when dictating medications or details of diseases and conditions. The medical transcriptionist must possess hearing acuity, medical knowledge, and good reading comprehension, in addition to checking references when in doubt.

However, some doctors do not review their transcribed reports for accuracy, and the computer attaches an electronic signature with the disclaimer that a report is "dictated but not read". This electronic signature is readily acceptable in a legal sense. The transcriptionist is bound to transcribe verbatim (exactly what is said) and make no changes, but has the option to flag any report inconsistencies. On some occasions, the doctors do not speak clearly, or voice files are garbled. Some doctors are time-challenged and need to dictate their reports quickly. In addition, there are many regional or national accents and (mis)pronunciations of words the MT must contend with. It is imperative, and a large part of the transcriptionist's job, to look up the correct spelling of complex medical terms and medications, to catch obvious dosage or dictation errors, and, when in doubt, to "flag" a report. A "flag" on a report requires the dictator (or their designee) to fill in a blank on a finished report, which has been returned to them, before it is considered complete. Transcriptionists are never permitted to guess, or 'just put in anything', in a report transcription. Furthermore, medicine is constantly changing. New equipment, new medical devices, and new medications come on the market on a daily basis, and the medical transcriptionist needs to be creative and to tenaciously research (quickly) to find these new words. An MT needs to have access to, or keep in memory, an up-to-date library to quickly facilitate the insertion of correctly spelled terms.

Medical transcription editing

Medical transcription editing is the process of listening to a voice-recorded file and comparing that to the transcribed report of that audio file, correcting errors as needed. Although speech recognition technology has become better at understanding human language, editing is still needed to ensure better accuracy. Medical transcription editing is also performed on medical reports transcribed by medical transcriptionists.
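
In software terms, the editing pass amounts to comparing the recognizer's draft against what the audio actually says and resolving the differences. A minimal sketch using Python's standard-library difflib; the sample sentences are invented.

import difflib

asr_draft = "patient denies chest pain and shortness of breath"
corrected = "patient denies chest pain or shortness of breath"

# Word-level diff: lines starting with "-" were removed, "+" were added
diff = difflib.ndiff(asr_draft.split(), corrected.split())
print("\n".join(d for d in diff if d.startswith(("-", "+"))))

A dropped negative of the kind described earlier would surface in exactly the same way, which is why every word must be confirmed against the audio.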

Medical transcription editors

Recent advances in speech recognition technology have shifted the job responsibilities of medical transcriptionists from transcribing alone toward editing as well. Editing has always been a part of the medical transcriptionist's job; however, editing is now a larger requirement, as reports are more often being transcribed electronically. With different accents, articulations, and pronunciations, speech recognition technology can still have problems deciphering words. This is where the medical transcription editor steps in. Medical transcription editors compare and correct the transcribed file against the voice-recorded audio file. The job is similar to medical transcription, as editing also uses a foot pedal, and the education and training requirements are mostly the same.

Training and education

Education and training requirements for medical transcription editors are very similar to those of medical transcriptionists. Many medical transcription editors start out as medical transcriptionists and transition to editing. Several of the AHDI-approved medical transcription schools have seen the need for medical transcription editing training and have incorporated editing into their training programs. Quality training is key to success as a medical transcription / healthcare documentation specialist. It is also very important to get work experience while training to ensure employers will be willing to hire freshly graduated students. Students who receive 'real world' training are much better suited for the medical transcription industry than those who do not.

Additional Information

Medical transcription (MT) is the manual processing of voice reports dictated by physicians and other healthcare professionals into text format.

The MT team of a hospital typically receives the voice files with dictation of medical documents from healthcare providers. The voice files are then converted into text.

The transcribed medical reports are usually created in digital format and submitted to the hospital's Electronic Health Record (EHR) or Electronic Medical Record (EMR) system.

Today, the medical field relies on speech recognition software and medical transcription software (MTS) for transcribing.

Duties involved in a medical transcription service

Medical professionals require a wide range of transcription services on a day-to-day basis. Their duties involve the following:

* Transcribe the voice recordings of a patient's medical history for a variety of medical specialties including Radiology, Acute Care and Oncology.
* Interpret medical information and categorize the data in notes, operative reports, patient records, consultations and discharge summaries.
* Review and edit the transcription of speech recognition apps to ensure the accuracy of medical terminology and optimize patient care.
* Enter patient information into the organization's medical records system.

How to become a medical transcriptionist

There are many education programs available for potential medical transcriptionists, although there is no single, standardized path to achieving the training required to be an MT.

However, in many cases, an MT will go through post-secondary education at a vocational school or community college. Along with this, the MT will have proficiency in English or whatever the healthcare system's primary language is, an understanding of medical terminology, and excellent typing and listening skills.

Candidates can become certified medical transcriptionists through the Association for Healthcare Documentation Integrity (AHDI). The AHDI offers two certificates: the Registered Healthcare Documentation Specialist (RHDS) and the Certified Healthcare Documentation Specialist (CHDS).

The CHDS is only offered to medical language specialists who have already attained the RHDS certificate.

The future of medical transcription

The healthcare industry is increasingly adopting medical transcription services and software to achieve greater cost-efficiency.

This has led to rapid growth in the transcription industry as a more cost-effective, on-demand solution. By outsourcing transcription services, healthcare professionals no longer incur fixed costs for managing their needs in-house.

Along with outsourcing, speech recognition technology has seen and will continue to see an increase in adoption, with many organizations and MTs using it as a means to simplify and streamline the medical transcription process.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2125 2024-04-20 00:07:55

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,428

Re: Miscellany

2127) Legal Transcription

Summary

What Does a Legal Transcriptionist Do?

A day in the life of a legal transcriptionist first involves accepting a transcription job. If you work for a law firm or transcriptionist company, you’ll likely be expected to complete a specific number of jobs either daily or weekly.

As an independent contractor/freelancer, you’ll call the shots regarding which work you decide to accept. However, you’ll need to come with a little more hustle and develop a nice network of law firms who’ll call on you for their legal transcription needs.

So then, what do legal transcribers do? Once you accept a job, the client will send you an audio file that you’ll be required to transcribe verbatim. These audio files can be anything legally-related and may include legal memos, pleadings, motions, correspondence, depositions, and more.

While court reporters use stenography equipment to transcribe the spoken word, you'll simply use your computer and a standard keyboard to get the job done. The pace of the job is slower for a transcriptionist than for a court reporter, so the use of stenography equipment to take shorthand is unnecessary. Plus, you'll have the luxury of pausing and rewinding the recording to get your transcription just right; a court reporter doesn't have the option of pausing courtroom proceedings most of the time.

Once you’ve transcribed the document, you’ll carefully review it, checking for grammatical, punctuation, spelling, and typographical errors. The transcribed document you return to the client should be clean and free of any errors.

Details

Overview

The daily duties of lawyers are quite demanding, and they usually put in long hours because they have a lot on their plates. This means they might not have much free time to manage tasks outside their legal practices and offices. Lawyers are busy with many activities related to their cases, from analyzing papers and attending court hearings to evaluating significant interviews and court recordings. The good news is that there are many solutions to simplify their everyday tasks.

To reduce their burden, lawyers often utilize legal transcribing services, which results in saving more time for their mainstream work. This can help them utilize their time to expand their clientele and their services and even catch up on important life events.

What is Legal Transcription?

A written record of a recorded legal proceeding is known as a legal transcription. It refers to the process of converting a recorded video or audio file into a written legal document. There is no room for error; the text must be reproduced word for word. While you're probably familiar with court reporters, who transcribe live court events into a written record, your work as a legal transcriptionist does not involve any live transcription.

A legal transcriptionist's job is to view or listen to video or audio materials. They will next construct a written text by precisely transcribing all spoken words in a properly prepared legal document. Since they are plain, easy to read, and quick to scan, transcribed documents assist lawyers in locating the particular information they want.

During the last several years, legal transcribing services have grown in popularity. Beyond court sessions, they now cover interrogations, depositions, testimony, legal briefs and papers, client letters, and general communication. Legal transcriptionists may specialize in different areas of law, which makes them more effective at producing exact legal transcripts.

What distinguishes legal transcriptionists from court reporters?

People who aren't "in the know" frequently get legal transcribing and court reporting mixed up. While the two are comparable in many aspects of the legal world, they are not the same.

Court reporters, also known as stenographers, are often qualified individuals who attend a court proceeding and write a transcript of what happens there. This occurs in real-time and requires a quick pace of work. The good news is that stenographers may type in shorthand, which their computers subsequently translate into complete words and phrases. The stenographer then proofreads the entire document before forwarding it to courts and attorneys.

On the other hand, many legal transcription professionals work remotely and are not present when court proceedings take place. The ability to authenticate transcripts is not a skill that all transcriptionists possess, and, unlike court reporters, they cannot administer oaths or swear in witnesses.

Who is the most likely to use legal transcription services?

Legal professionals, including paralegals, lawyers, and law firms, need access to legal transcription services; people who work in the legal industry are the target market. To give their clients accurate information, legal practitioners must be able to rely on word-for-word transcriptions that have not been altered in any way.

Transcriptions of video or audio recordings used as evidence or witness testimony are exact replicas of the originals. Nearly all court sessions are transcribed so that they are available to jurors, defense counsel, and judges. To avoid confusion, courts frequently present audio or video recordings alongside their transcripts.

What's the process of legal transcription?

Sending the audio or video file to the agency

Most legal transcription companies accept audio and video material in a number of formats. Even so, it is advisable to confirm with the agency that it can accept your material in its particular format.

Legal transcriptionists listen to and watch recordings of court proceedings while typing up what they hear to produce a written record. The proper document layout is crucial so that lawyers and other legal experts can find the relevant information. The transcription company should also know your submission deadlines.

Document analysis

When the legal transcriptionist completes the transcription, the document is reviewed and proofread by a different individual. This safeguards the accuracy and grammatical integrity of the transcription company's work.

Submission of documents

Once the document has been reviewed and proofread, the agency sends it to the customer. Transcriptionists convert video and audio into text files, recording what they observe and hear. When transcribing legal papers, you must not edit the grammar. In addition, all background noise and nonverbal cues must be noted in the official transcription, which should also include pauses and filler words like "um," "err," "uh," and similar ones.

The agency may deliver the transcription as a paper copy or a digital copy, whichever the client requests. It also archives the final transcription so the client can obtain additional copies of the file in the event of loss or damage.

Cost factor consideration

The cost of legal transcription services is usually calculated per minute of recording, with different rates depending on how intricate the project is. Higher rates apply to audio files featuring four or more speakers and to recordings dealing with specialized content such as panel discussions, focus groups, or heavy legal terminology.

Translation or interpretation on top of transcription also carries an additional fee. An incorrectly translated document might cast doubt on your claims, harm your position in court, and even cost you the case.
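As a rough illustration of this per-minute pricing model, here is a minimal Python sketch. Every rate, the multi-speaker surcharge, and the translation fee are hypothetical placeholders invented for the example, not figures from any actual service.

# Hypothetical per-minute pricing sketch; all rates are assumptions.
BASE_RATE = 1.50        # USD per audio minute (assumed)
COMPLEX_RATE = 2.25     # rate for 4+ speakers or specialized content (assumed)
TRANSLATION_FEE = 0.75  # extra per minute when translation is requested (assumed)

def quote(minutes, speakers, specialized=False, translation=False):
    """Estimate the cost of transcribing a recording of the given length."""
    rate = COMPLEX_RATE if speakers >= 4 or specialized else BASE_RATE
    if translation:
        rate += TRANSLATION_FEE
    return round(minutes * rate, 2)

# A 90-minute deposition, two speakers, heavy legal terminology, no translation:
print(quote(90, speakers=2, specialized=True))  # 202.5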

Why are Legal Transcription Services important?

The ideal format for lawyers

Legal transcription, done by a professional, is ideal for all lawyers. Timestamps and speaker identification let you examine the flow of information and the timeline of events, which helps when building a case or spotting inconsistencies in testimony.

You may also provide a transcript to every jury member and other participants in the case or hearing, ensuring that everyone is working from the same information and preventing misunderstandings.

Most effective defense tool

Without the aid of a specialist, it is challenging to transcribe legal audio or other legal documents accurately. If you have your files transcribed by experts, you will have a strong defense tool for handling any accusation leveled at you by opposing counsel.

Better case planning

Perhaps the major advantage of legal transcriptions is that they greatly simplify the process of reviewing cases and preparing a line of defense or attack. By providing this tool, legal transcriptionists enable attorneys to prepare probing questions that could expose the opposing side's defense completely.

Improves the legal record's accuracy

If you've ever tried to jot down notes during a discussion, you know how challenging it can be to recall every word. This is particularly true for difficult judicial proceedings, which may involve a lot of technical language. A legal transcriptionist will ensure that every word said throughout the proceeding is correctly captured. Maintaining the accuracy of the legal record can be crucial for law firms and courts.

Organization

Legal transcripts can be easily gathered and correctly organized in accordance with the legal system and the regulations specific to your practice. Transcription makes information easy to find, store, and search. Legal departments and law firms use transcription to stay on schedule and organized, and outsourcing it to a reputable business lets legal professionals focus on other important activities, raising overall productivity.

Simplifies the process of referencing particular points

It can be challenging to refer to a specific statement made at a court hearing if you have only a recording of the proceedings, because you must listen through the audio again to locate the particular spot you want. With a copy of the transcript, however, you can use a simple text search to jump straight to the passage you need. Both legal firms and courts benefit from this.
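As a small illustration of why a transcript is easier to search than raw audio, here is a minimal Python sketch; the timestamped transcript structure and the sample lines are invented for the example, not taken from any real proceeding.

# Minimal sketch: find every timestamped transcript line mentioning a phrase.
# The transcript layout and contents below are invented for illustration.
transcript = [
    ("00:01:12", "Counsel", "Objection, your honor."),
    ("00:04:55", "Witness", "I signed the contract on March 3rd."),
    ("00:09:30", "Witness", "The contract was never amended."),
]

def find_mentions(lines, phrase):
    phrase = phrase.lower()
    return [line for line in lines if phrase in line[2].lower()]

for time, speaker, text in find_mentions(transcript, "contract"):
    print(f"{time} {speaker}: {text}")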

Helps in time savings

The ability to save time is another advantage of having a legal transcript: you can read through the transcript rather than going back to listen to a recording of the proceedings. This is especially useful for legal firms, which can rapidly review what was said during a proceeding, and for courts, which can use transcripts to help prepare for upcoming cases.

Conclusion

Transcribing audio and video recordings is crucial in the legal sector since it makes it easier to review all the details of the case. Ensure your legal transcription service has a solid process focused on accuracy and quality. One of your best choices could be to outsource.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.
