2178) Doctor of Philosophy
Gist
The term PhD, short for Doctor of Philosophy, is an abbreviation of the Latin phrase 'philosophiae doctor'. A PhD degree typically involves students independently conducting original and significant research in a specific field or subject, before producing a publication-worthy thesis.
Summary
The term PhD, short for Doctor of Philosophy, is an abbreviation of the Latin phrase 'philosophiae doctor'.
A PhD degree typically involves students independently conducting original and significant research in a specific field or subject, before producing a publication-worthy thesis.
While some Doctorates include taught components, PhD students are almost always assessed on the quality and originality of the argument presented in their independent research project.
What are the most popular PhD subjects?
* clinical psychology
* law
* philosophy
* psychology
* economics
* creative writing
* education
* computer science
* engineering
How long does a Doctorate degree take?
Full-time PhDs usually last for three or four years, while part-time PhDs can take up to six or seven. However, the thesis deadline can be extended by up to four years at the institution's discretion. Indeed, many students who enrol on three-year PhDs only finish their thesis in their fourth year.
While most PhD studentships begin in September or October, both funded and self-funded PhDs can be undertaken at any point during the year.
Do I need a Masters to do a PhD?
The majority of institutions require PhD candidates to possess a Masters degree, plus a Bachelors degree at 2:1 or above. However, some universities demand only the latter, while self-funded PhD students or those with significant professional experience may also be accepted with lower grades.
You may need to initially register for a one or two-year Master of Philosophy (MPhil) or Master of Research (MRes) degree rather than a PhD. If you make sufficient progress, you and your work will then be 'upgraded' to a PhD programme. If not, you may be able to graduate with a Masters degree.
Details
A Doctor of Philosophy (PhD, Ph.D., or DPhil; Latin: philosophiae doctor or doctor philosophiae) is a terminal degree that usually denotes the highest level of academic achievement in a given discipline and is awarded following a course of graduate study and original research. The name of the degree is most often abbreviated PhD (or, at times, as Ph.D. in North America), pronounced as three separate letters.
The abbreviation DPhil, from the English "Doctor of Philosophy", is used by a small number of British universities, including the University of Oxford and, formerly, the University of York and the University of Sussex.
PhDs are awarded for programs across the whole breadth of academic fields. Since it is an earned research degree, those studying for a PhD are required to produce original research that expands the boundaries of knowledge, normally in the form of a dissertation, and, in some cases, defend their work before a panel of other experts in the field. The completion of a PhD is typically required for employment as a university professor, researcher, or scientist in many fields.
Definition
In the context of the Doctor of Philosophy and other similarly titled degrees, the term "philosophy" does not refer to the field or academic discipline of philosophy, but is used in a broader sense in accordance with its original Greek meaning, which is "love of wisdom". In most of Europe, all fields (history, philosophy, social sciences, mathematics, and natural philosophy/sciences) other than theology, law, and medicine (the so-called professional, vocational, or technical curricula) were traditionally known as philosophy, and in Germany and elsewhere in Europe the basic faculty of liberal arts was known as the "faculty of philosophy".
A PhD candidate must submit a project, thesis, or dissertation often consisting of a body of original academic research, which is in principle worthy of publication in a peer-reviewed journal. In many countries, a candidate must defend this work before a panel of expert examiners appointed by the university. Universities sometimes award other types of doctorate besides the PhD, such as the Doctor of Musical Arts (D.M.A.) for music performers, Doctor of Juridical Science (S.J.D.) for legal scholars and the Doctor of Education (Ed.D.) for studies in education. In 2005 the European University Association defined the "Salzburg Principles", 10 basic principles for third-cycle degrees (doctorates) within the Bologna Process. These were followed in 2016 by the "Florence Principles", seven basic principles for doctorates in the arts laid out by the European League of Institutes of the Arts, which have been endorsed by the European Association of Conservatoires, the International Association of Film and Television Schools, the International Association of Universities and Colleges of Art, Design and Media, and the Society for Artistic Research.
The specific requirements to earn a PhD degree vary considerably according to the country, institution, and time period, from entry-level research degrees to higher doctorates. During the studies that lead to the degree, the student is called a doctoral student or PhD student; a student who has completed any necessary coursework and related examinations and is working on their thesis/dissertation is sometimes known as a doctoral candidate or PhD candidate. A student attaining this level may be granted a Candidate of Philosophy degree at some institutions or may be granted a master's degree en route to the doctoral degree. Sometimes this status is also colloquially known as "ABD", meaning "all but dissertation". PhD graduates may undertake a postdoc in the process of transitioning from study to academic tenure.
Individuals who have earned the Doctor of Philosophy degree use the title Doctor (often abbreviated "Dr" or "Dr."), although the etiquette associated with this usage may be subject to the professional ethics of the particular scholarly field, culture, or society. Those who teach at universities or work in academic, educational, or research fields are usually addressed by this title "professionally and socially in a salutation or conversation". Alternatively, holders may use post-nominal letters such as "Ph.D.", "PhD", or "DPhil", depending on the awarding institution. It is, however, traditionally considered incorrect to use both the title and post-nominals together, although usage in that regard has been evolving over time.
History
Medieval and early modern Europe
In the universities of Medieval Europe, study was organized in four faculties: the basic faculty of arts, and the three higher faculties of theology, medicine, and law (canon law and civil law). All of these faculties awarded intermediate degrees (bachelor of arts, of theology, of laws, of medicine) and final degrees. Initially, the titles of master and doctor were used interchangeably for the final degrees—the title Doctor was merely a formality bestowed on a Teacher/Master of the art—but by the late Middle Ages the terms Master of Arts and Doctor of Theology/Divinity, Doctor of Law, and Doctor of Medicine had become standard in most places (though in the German and Italian universities the term Doctor was used for all faculties).
The doctorates in the higher faculties were quite different from the current PhD degree in that they were awarded for advanced scholarship, not original research. No dissertation or original work was required, only lengthy residency requirements and examinations. Besides these degrees, there was the licentiate. Originally this was a license to teach, awarded shortly before the award of the master's or doctoral degree by the diocese in which the university was located, but later it evolved into an academic degree in its own right, in particular in the continental universities.
According to Keith Allan Noble (1994), the first doctoral degree was awarded in medieval Paris around 1150. The doctorate of philosophy developed in Germany as the terminal teacher's credential in the 17th century (circa 1652). There were no PhDs in Germany before the 1650s, when they gradually started replacing the MA as the highest academic degree; arguably, one of the earliest German PhD holders was Erhard Weigel (Dr. phil. hab., Leipzig, 1652).
The full course of studies might, for example, lead in succession to the degrees of Bachelor of Arts, Licentiate of Arts, and Master of Arts, or to Bachelor of Medicine, Licentiate of Medicine, and Doctor of Medicine, but before the early modern era many exceptions to this existed. Most students left the university without becoming masters of arts, whereas regulars (members of monastic orders) could skip the arts faculty entirely.
Educational reforms in Germany
This situation changed in the early 19th century through the educational reforms in Germany, most strongly embodied in the model of the University of Berlin, founded in 1810 and controlled by the Prussian government. The arts faculty, which in Germany was labelled the faculty of philosophy, started demanding contributions to research, attested by a dissertation, for the award of their final degree, which was labelled Doctor of Philosophy (abbreviated as Ph.D.)—originally this was just the German equivalent of the Master of Arts degree. Whereas in the Middle Ages the arts faculty had a set curriculum, based upon the trivium and the quadrivium, by the 19th century it had come to house all the courses of study in subjects now commonly referred to as sciences and humanities. Professors across the humanities and sciences focused on their advanced research. Practically all the funding came from the central government, and it could be cut off if the professor was politically unacceptable.
These reforms proved extremely successful, and fairly quickly the German universities started attracting foreign students, notably from the United States. The American students would go to Germany to obtain a PhD after having studied for a bachelor's degree at an American college. So influential was this practice that it was imported to the United States, where in 1861 Yale University started granting the PhD degree to younger students who, after having obtained the bachelor's degree, had completed a prescribed course of graduate study and successfully defended a thesis or dissertation containing original research in science or in the humanities. In Germany, the name of the doctorate was adapted after the philosophy faculty started being split up (e.g. Dr. rer. nat. for doctorates in the faculty of natural sciences), but in most of the English-speaking world the name "Doctor of Philosophy" was retained for research doctorates in all disciplines.
The PhD degree and similar awards spread across Europe in the 19th and early 20th centuries. The degree was introduced in France in 1808, replacing diplomas as the highest academic degree; in Russia in 1819, when the Doktor Nauk degree, roughly equivalent to a PhD, gradually started replacing the specialist diploma, roughly equivalent to the MA, as the highest academic degree; and in Italy in 1927, when PhDs gradually started replacing the Laurea as the highest academic degree.
History in the United Kingdom
Research degrees first appeared in the UK in the late 19th century in the shape of the Doctor of Science (DSc or ScD) and other such "higher doctorates". The University of London introduced the DSc in 1860, but as an advanced study course, following on directly from the BSc, rather than a research degree. The first higher doctorate in the modern sense was Durham University's DSc, introduced in 1882.
This was soon followed by other universities, including the University of Cambridge establishing its ScD in the same year and the University of London transforming its DSc into a research degree in 1885. These were, however, very advanced degrees, rather than research-training degrees at the PhD level. Harold Jeffreys said that getting a Cambridge ScD was "more or less equivalent to being proposed for the Royal Society."
In 1917, the current PhD degree was introduced, along the lines of the American and German model, and quickly became popular with both British and foreign students. The slightly older degrees of Doctor of Science and Doctor of Literature/Letters still exist at British universities; together with the much older degrees of Doctor of Divinity (DD), Doctor of Music (DMus), Doctor of Civil Law (DCL), and Doctor of Medicine (MD), they form the higher doctorates, but apart from honorary degrees, they are only infrequently awarded.
In English (but not Scottish) universities, the Faculty of Arts had become dominant by the early 19th century. Indeed, the higher faculties had largely atrophied, since medical training had shifted to teaching hospitals, the legal training for the common law system was provided by the Inns of Court (with some minor exceptions, see Doctors' Commons), and few students undertook formal study in theology. This contrasted with the situation in the continental European universities at the time, where the preparatory role of the Faculty of Philosophy or Arts was to a great extent taken over by secondary education: in modern France, the Baccalauréat is the examination taken at the end of secondary studies. The reforms at the Humboldt University transformed the Faculty of Philosophy or Arts (and its more recent successors such as the Faculty of Sciences) from a lower faculty into one on a par with the Faculties of Law and Medicine.
Similar developments occurred in many other continental European universities, and at least until reforms in the early 21st century, many European countries (e.g., Belgium, Spain, and the Scandinavian countries) had in all faculties triple degree structures of bachelor (or candidate), licentiate, and doctor, as opposed to bachelor, master, and doctor; the meaning of the different degrees varied from country to country, however. To this day, this is also still the case for the pontifical degrees in theology and canon law; for instance, in sacred theology, the degrees are Bachelor of Sacred Theology (STB), Licentiate of Sacred Theology (STL), and Doctor of Sacred Theology (STD), and in canon law: Bachelor of Canon Law (JCB), Licentiate of Canon Law (JCL), and Doctor of Canon Law (JCD).
History in the United States
Until the mid-19th century, advanced degrees were not a criterion for professorships at most colleges. That began to change as the more ambitious scholars at major schools went to Germany for one to three years to obtain a PhD in the sciences or humanities. Graduate schools slowly emerged in the United States. Although honorary PhDs had been awarded in the United States beginning in the early 19th century, the first earned PhD in the nation was at Bucknell University in Lewisburg, Pennsylvania, which awarded the nation's first doctorate in 1852 to Ebenezer Newton Elliott. Nine years later, in 1861, Yale University awarded three PhDs to Eugene Schuyler, Arthur Williams Wright, and James Morris Whiton.
Over the following two decades, Harvard University, New York University, Princeton University, and the University of Pennsylvania also began granting the degree. Major shifts toward graduate education were foretold by the opening of Clark University in 1887, which offered only graduate programs, and the Johns Hopkins University, which focused on its PhD program. By the 1890s, Harvard, Columbia, Michigan, and Wisconsin were building major graduate programs, whose alumni were hired by new research universities. By 1900, 300 PhDs were awarded annually, most of them by six universities. It was no longer necessary to study in Germany. However, half of the institutions awarding earned PhDs in 1899 were undergraduate institutions that granted the degree for work done away from campus. Degrees awarded by universities without legitimate PhD programs accounted for about a third of the 382 doctorates recorded by the US Department of Education in 1900; another 8–10% of those were honorary.
At the start of the 20th century, U.S. universities were held in low regard internationally and many American students were still traveling to Europe for PhDs. The lack of centralised authority meant anyone could start a university and award PhDs. This led to the formation of the Association of American Universities by 14 leading research universities (producing nearly 90% of the approximately 250 legitimate research doctorates awarded in 1900), with one of the main goals being to "raise the opinion entertained abroad of our own Doctor's Degree."
In Germany, the national government funded the universities and the research programs of the leading professors. It was impossible for professors who were not approved by Berlin to train graduate students. In the United States, by contrast, private universities and state universities alike were independent of the federal government. Independence was high, but funding was low. The breakthrough came from private foundations, which began regularly supporting research in science and history; large corporations sometimes supported engineering programs. The postdoctoral fellowship was established by the Rockefeller Foundation in 1919. Meanwhile, the leading universities, in cooperation with the learned societies, set up a network of scholarly journals. "Publish or perish" became the formula for faculty advancement in the research universities. After World War II, state universities across the country expanded greatly in undergraduate enrollment, and eagerly added research programs leading to masters or doctorate degrees. Their graduate faculties had to have a suitable record of publication and research grants. Late in the 20th century, "publish or perish" became increasingly important in colleges and smaller universities.
Requirements
Detailed requirements for the award of a PhD degree vary throughout the world and even from school to school. Students are usually required to hold an Honours degree or a Master's degree with high academic standing in order to be considered for a PhD program. In the US, Canada, India, and Denmark, for example, many universities require coursework in addition to research for PhD degrees. In other countries (such as the UK) there is generally no such condition, though this varies by university and field. Some individual universities or departments specify additional requirements for students not already in possession of a bachelor's degree or equivalent or higher. In order to submit a successful PhD admission application, copies of academic transcripts, letters of recommendation, a research proposal, and a personal statement are often required. Most universities also invite candidates to an interview before admission.
A candidate must submit a project, thesis, or dissertation often consisting of a body of original academic research, which is in principle worthy of publication in a peer-reviewed context. Moreover, some PhD programs, especially in science, require one to three published articles in peer-reviewed journals. In many countries, a candidate must defend this work before a panel of expert examiners appointed by the university; this defense is open to the public in some countries, and held in private in others; in other countries, the dissertation is examined by a panel of expert examiners who stipulate whether the dissertation is in principle passable and any issues that need to be addressed before the dissertation can be passed.
Some universities in the non-English-speaking world have begun adopting similar standards to those of the anglophone PhD degree for their research doctorates.
A PhD student or candidate is conventionally required to study on campus under close supervision. With the growing popularity of distance education and e-learning technologies, however, some universities now accept students enrolled in part-time distance education programmes.
In a "sandwich PhD" program, PhD candidates do not spend their entire study period at the same university. Instead, the PhD candidates spend the first and last periods of the program at their home universities and in between conduct research at another institution or field research. Occasionally a "sandwich PhD" will be awarded by two universities.
2179) Master of Laws
Gist
The Master of Laws (LL.M.) is an internationally recognised postgraduate degree generally acquired after one year of full-time legal studies. Law students and professionals typically pursue an LL.M. to deepen their legal expertise and enhance their career prospects.
Summary
The Master of Laws (LLM) is the flagship degree in the world-renowned Melbourne Law Masters program, offering students a choice of over 170 subjects across dozens of specialist legal areas.
It’s available only for law graduates and the flexible structure makes it ideal for working professionals looking to immerse themselves in selected areas of the law.
As an LLM student, you can choose from all subjects available in the Melbourne Law Masters program, allowing you to tailor the degree to suit your professional aspirations and personal interests. You can also choose to undertake the degree as a combination of coursework and a minor thesis.
The UNSW Master of Laws (LLM) is a one-year full-time postgraduate degree that offers you the opportunity to develop an advanced and contemporary understanding of one or more areas of legal study, acquire further expertise and enhance your career prospects.
You can choose from eight specialisation areas that reflect UNSW Law & Justice’s expertise and the latest developments in legal scholarship. Our LLM areas of specialisation include:
* Chinese International Business & Economic Law
* Corporate, Commercial & Taxation Law
* Criminal Justice & Criminology
* Dispute Resolution
* Environmental Law & Sustainable Development
* Human Rights Law & Policy
* International Law
* Media, Intellectual Property & Technology Law
Alternatively, you can complete a generalist program and benefit from choosing courses across our specialisations.
Details
A Master of Laws (M.L. or LL.M.; Latin: Magister Legum or Legum Magister) is an advanced postgraduate academic degree, pursued by those either holding an undergraduate academic law degree, a professional law degree, or an undergraduate degree in a related subject. In most jurisdictions, the LL.M. is the advanced professional degree for those usually already admitted into legal practice.
Definition
To become a lawyer and practice law in most states and countries, a person must first obtain a law degree. In most common law countries, a Bachelor of Laws (LL.B.) is required. In the United States, the Juris Doctor (J.D.) is generally a requirement to practice law. Some jurisdictions, such as Canada and Australia, require either an LL.B. or J.D. Individuals with law degrees must typically pass an additional set of examinations to qualify as a lawyer.
The LL.M. program is a postgraduate program, typically for individuals who either possess a law degree or have qualified as a lawyer. The word legum is the genitive plural form of the Latin word lex and means "of the laws". When used in the plural, it signifies a specific body of laws, as opposed to the general collective concept embodied in the word jus, from which the words "juris" and "justice" derive.
An LL.M. is also typically a requirement for entry into research doctoral programs in law, such as the Doctor of Juridical Science (S.J.D. or J.S.D.), the Doctor of Philosophy (Ph.D. or DPhil) or doctorat en droit (in France), the Doktor der Rechtswissenschaften (Dr. iur.) (in Germany), the Doctor of Civil Law (D.C.L.), and the Doctor of Laws (LL.D.).
Historically, the LL.M. degree is an element particular to the education systems of English-speaking countries, which are based on a distinction between bachelor's and master's degrees. In recent years, however, specialized LL.M. programs have been introduced in many European countries.
Types of LL.M. degrees
A wide range of LL.M. programs are available worldwide, allowing students to focus on almost any area of the law. Most universities offer only a small number of LL.M. programs. One of the most popular LL.M. degrees in the United States is in tax law, sometimes referred to as an MLT (Master of Laws in Taxation).
In Europe, LL.M. programs in European law are popular, often referred to as LL.M. Eur (Master of European Law).
In the Netherlands and its former colonies, the title used was Meester in de Rechten (mr.). This title is still widely used in the Netherlands and Flanders (Belgium), especially by those who studied Dutch or Belgian law respectively.
Some LL.M. programs, particularly in the United States, and also in China, focus on teaching foreign lawyers the basic legal principles of the host country.
The length of time to study an LLM program depends on the mode of study. Most full-time on-campus courses take one academic year to complete. Other students may complete their LLM program on a part-time basis over two years, and increasingly courses are available online. Part-time online courses can take between two and five years to complete.
Requirements
LL.M. programs are usually only open to those students who have first obtained a degree in law, typically an LL.B. or J.D. Some programs are exceptions to this, requiring only an undergraduate degree or extensive experience in a related field. Full-time LL.M. programs usually last one year and vary in their graduation requirements. Most programs require or allow students to write a thesis. Some programs are research oriented with little classroom time, while others require students to take a set number of classes.
LL.M. degrees are often earned by students wishing to develop more concentrated expertise in a particular area of law. Pursuing an LL.M. degree may also allow law students to build a professional network. Some associations provide LL.M. degree holders with structures designed to strengthen their connections among peers and to access a competitive business environment, much like an MBA degree.
LL.M. programs by country:
Australia
In Australia, the LL.M. is generally only open to law graduates. However, some universities permit non-law graduates to undertake variants of the degree. There are nearly 100 LLM courses taught in English in Australia, across 25 institutions.
Unique variants of the LL.M. exist, such as the Master of Legal Practice (M.L.P.) available at the Australian National University, where students who have completed the Graduate Diploma of Legal Practice (which law graduates must obtain before being able to be admitted as a solicitor/barrister), will be granted some credit towards the Master qualification. Other variants of the LL.M. are more similar to the LL.M. available in the wider Commonwealth but under a different title, for example Master of Commercial Law, Master of International Law or Master of Human Rights Law. These courses are usually more specialised than a standard LL.M.
Canada
In Canada, the LL.M. is generally open to law graduates holding an LL.B., LL.L., B.C.L., or a J.D. as a first degree. Students can choose to take research-based LL.M. degrees or course-based LL.M. degrees. Research-based LL.M. degrees are one- or two-year programs that require students to write a thesis that makes a significant contribution to their field of research. Course-based LL.M. degrees do not require a significant research paper. An LL.M. can be studied part-time, and at some schools, through distance learning. LL.M. degrees can be general, or students can choose to pursue a specialized area of research.
Foreign trained lawyers who wish to practice in Canada will first need to have their education and experience assessed by the Federation of Law Societies of Canada's National Committee on Accreditation. Upon having received a certificate of accreditation from the National Committee on Accreditation, foreign law graduates would then have to obtain articles with a law firm, take the professional legal training course, and pass the professional exams to be called to the bar in a province. The University of British Columbia's LLM in Common Law is an example of one of a few LLM courses that help to prepare students for the professional exams.
China (Mainland)
The LL.M. is available at China University of Political Science and Law, and the entrance requirements are: native English competency, or near native English, with any bachelor's degree. The program is flexible and allows students to study Mandarin and assists with organizing work experience in Beijing and other cities in China. It normally takes two years, but can be completed in one and a half years if students take the required credits in time.
The flagship of the China-EU School of Law (CESL) in Beijing is a Double Master Programme including a Master of Chinese Law and a Master of European and International Law. The Master of European and International Law is taught in English, open for international students and can be studied as a single master programme. CESL also offers an International Master of Chinese Law (IMCL) which is an LL.M. in Chinese law taught entirely in English.
Beijing Foreign Studies University has launched an online LLM for international professionals. The course is taken over two years: the first covers online lessons through video and assignments, while the second is devoted to the dissertation, with an online defense required at the end. Students are required to attend Beijing for an introductory week in September to enroll and meet students and staff. Students also have the opportunity to take up work experience at a top-five law firm in China.
LL.M degree programs are available at many other universities in Mainland China, such as at Peking University, Tsinghua University, Shanghai Jiaotong University, and Shanghai International Studies University.
Finland
In Finland, an LL.M. is the standard graduate degree required to practice law. No other qualifications are required.
France
In France, LL.M. programs are taught in English. The LL.M. in International Business Law is available at Panthéon-Assas University (Paris), the oldest school of law in France.
The entrance requirements are:
* A very good level of English, with a master's degree in law (or equivalent); or
* An alternative diploma and four years' professional experience.
The course is flexible and allows students to study French.
A further 11 institutions in France offer almost 20 other LLM programs taught in English, with specialisms including European Law.
Germany
In Germany, the LL.M. is seen as an advanced legal qualification of supplementary character. Some graduates choose to undertake their LL.M. directly following their "Erstes Juristisches Staatsexamen" (the "first state examination", which constitutes the first stage of the official German legal training), an alternative postgraduate course, or their "Zweites Juristisches Staatsexamen" (that is, the second and final stage of the official German legal training).
Hong Kong
LL.M. degree programmes are offered by the law faculties of The University of Hong Kong, The Chinese University of Hong Kong and the City University of Hong Kong. An LL.B. degree is usually required for admission, but some specialised programmes, such as the LL.M. in Human Rights offered by HKU, accept an undergraduate degree in law or any related discipline.
India
As in the United Kingdom, a master's degree in law in India is generally pursued in order to specialize in particular areas of law. Traditionally, the most popular areas of specialization in Indian master's degrees in law have been constitutional law, family law and taxation law.
With the establishment of specialized autonomous law schools in India in 1987 (the first was the National Law School of India University), much greater emphasis has been placed on the master's level of legal education in India. With the establishment of these universities, the focus of specialization has shifted to newer areas such as corporate law, intellectual property law and international trade law. LL.M. programs in India were previously two years in duration but presently typically last one year.
Ireland
A number of universities and colleges in Ireland offer LL.M. programs, such as Dublin City University, Trinity College Dublin, University College Cork, National University of Ireland, Galway (NUIG), National University of Ireland, Maynooth (NUIM), the Law Society of Ireland in partnership with Northumbria University, and Griffith College.
University College Dublin also offers the Masters in Common Law (MCL/ Magisterii in Jure Communi, M.Jur.Com), an advanced two-year programme for non-law graduates. The degree is a qualifying law degree for admittance to the entrance exams of the Honorable Society of King's Inns.
Italy
Italy offers master's programs both in Italian and in English, depending on the school. They are often called "laurea specialistica", that is, the second step of the Bologna plan (European curriculum), and in this case they last two years. For example, the University of Milan offers a two-year LLM in Sustainable Development. In South Tyrol, programmes are also taught in German, as in Bolzano.
In Italy the term "master" often refers to a vocational master's, 6 or 12 months long, in specific areas such as "law and internet security" or "law of administrative management"; these are often taught part-time to allow professionals already working in the field to improve their skills.
Mauritius
The LLM in International Business Law from Panthéon-Assas University is also available in Mauritius in Medine Village campus.
Netherlands
To be allowed to practice law in the Netherlands, one needs an LL.M. degree with a specific set of courses in litigation law. The Dutch Order of Lawyers (NOVA) requires these courses for every prospective lawyer who wants to be conditionally registered with a district court for three years. After receiving all the diplomas prescribed by NOVA and working under the supervision of a "patron", a lawyer is eligible to run their own practice and is unconditionally registered with a court for life.
Norway
The Norwegian legal degree master i rettsvitenskap (literally "master of jurisprudence") is officially translated into English as Master of Laws (LL.M.), as the degree is more comprehensive than the basic graduate law degree in common law countries (e.g., the J.D. and LL.B.). The last year of the five-year professional Norwegian law degree program is thus considered to correspond to an LL.M. specialization. In addition, the universities with law faculties offer several LL.M. programmes at the master's level. For example, Universitetet i Oslo offers tuition-free LL.M. courses in Public International Law, Maritime Law, and Information and Communication Technology Law (ICT Law), as well as distinct specializations in human rights.
Pakistan
In Pakistan, the University of the Punjab, the University of Karachi, Shaheed Zulfiqar Ali Bhutto University of Law, International Islamic University, Islamabad, Government College University, Faisalabad, and the University of Sargodha are LL.M. degree-awarding institutions. Completing an LL.M. qualification in Pakistan consists of studying eight subjects over four semesters. This spans a period of two years and also requires the student to write a thesis on a proposed topic. A student has to pass each of the subjects in order to qualify for the LL.M. degree, and the passing mark is set at 60%. The program is taught in English.
Universities in Pakistan teach comparative constitutional law, comparative human rights law and comparative jurisprudence as mandatory subjects. The programs also include research methodology and four elective subjects, which may include company law, taxation law, intellectual property law and banking law.
Portugal
The Master of Laws programmes offered in Portugal are extremely varied but do not, for the most part, use the designation LL.M., being more commonly called Mestrado em Direito (Master's Degree in Law), like the ones at Coimbra University's Faculty of Law and Lusíada University of Porto. Although the classical Mestrado em Direito takes two years to finish and involves a dissertation, there are some shorter variants. A few Mestrados with an international theme have specifically adopted the LL.M designation: the LL.M in European and Transglobal Business Law at the School of Law of the University of Minho and the LL.M. Law in a European and Global Context and the Advanced LL.M. in International Business Law, both at the Católica Global School of Law, in Lisbon.
Singapore
In Singapore, the LL.M. is in English. The LL.M. in International Business Law from Panthéon-Assas University is also available in Singapore in Insead campus.
South Africa
In South Africa, the LL.M. is a postgraduate degree offered both as a course-based and research-based master's degree. In the former case, the degree comprises advanced coursework in a specific area of law as well as limited related research, usually in the form of a short dissertation, while in the latter, the degree is entirely thesis based. The first type, typically, comprises "practice-oriented" topics (e.g. in tax, mining law), while the second type is theory-oriented, often preparing students for admission to LL.D. or Ph.D. programmes. Admission is generally limited to LL.B. graduates, although holders of other law degrees, such as the BProc, may be able to apply if admitted as attorneys and/or by completing supplementary LL.B. coursework.
Sweden
A master's degree (LL.M.) is the standard graduate degree among legal practitioners in Sweden. The master's programme takes four and a half years to complete. At most universities, the last one and a half years consist of specialization at an advanced level (advanced-level courses and a thesis). It is possible to obtain a bachelor's degree after three years, but the vast majority of legal practitioners in Sweden hold an LL.M.
Taiwan
In Taiwan, law can be studied as a postgraduate degree resulting in an LL.M. Some LL.M. programs in Taiwan are offered to students with or without a legal background. However, the graduation requirements for students with a legal background are lower than for those students who do not have a legal background (to account for fundamental legal subjects that were taken during undergraduate studies). Students studying in an LL.M. program normally take three years to earn the necessary credits and finish a master's thesis.
United Kingdom
In the United Kingdom, an LL.M. programme is open to those holding a recognised legal qualification, generally an undergraduate degree in Laws or equivalent. They do not have to be or intend to be legal practitioners. An LL.M. is not required, nor is it a sufficient qualification in itself to practise as a solicitor or barrister, since this requires completion of the Legal Practice Course, Bar Professional Training Course, or, if in Scotland, the Diploma in Legal Practice. As with other degrees, an LL.M. can be studied on a part-time basis at many institutions and in some circumstances by distance learning. Some providers of the Bar Professional Training Course and the Legal Practice Course also allow the student to gain an LL.M. qualification on top of these professional courses by writing a dissertation.
The UK offers over 1,000 different LL.M.s. Large law faculties such as Queen Mary University of London offer multiple programs, while others, such as Aston University, offer just one. Some institutions allow those without a first degree in law onto their LL.M. programme. Examples of such programmes include the Master of Studies in Legal Research at Oxford, the LL.M. degrees at the University of Edinburgh and the LL.M.s at the University of Leicester. In addition, Queen's University Belfast offers an LL.M. suite, accessible to legal and social science graduates, leading to specialisms in sustainable development, corporate governance, devolution or human rights. Northumbria University offers a different route to the LL.M. qualification, with students starting the master's programme as undergraduates: those completing this four-year programme graduate with a combined LL.M. and a Legal Practice Course or BPTC professional qualification.
Oxbridge
The University of Cambridge has a taught postgraduate law course, which formerly conferred an LL.B. on successful candidates (undergraduates studying law at Cambridge received a B.A.). In 1982 the LL.B. for postgraduate students was replaced with a more conventional LL.M. to avoid confusion with undergraduate degrees in other universities. Additionally in 2012 the University of Cambridge introduced the M.C.L. (Masters of Corporate Law) aimed at postgraduate students with interests in corporate law.
The University of Oxford unconventionally names its taught master's degrees in law the B.C.L. (Bachelor of Civil Law) and the M.Jur. (Magister Juris), and its research master's degrees either the MPhil (Master of Philosophy) or the MSt (Master of Studies). Oxford continues to name its principal postgraduate law degree the B.C.L. for largely historic reasons, as the B.C.L. is one of the oldest and most senior degrees, having been conferred since the sixteenth century. The M.Jur. was introduced in 1991. At present there is no LL.M. degree conferred by the university. Oxford claims that the B.C.L. is "the most highly regarded taught masters-level qualification in the common law world". Additionally, the University of Oxford introduced the MSc in Law and Finance (MLF) and the MSc in Taxation in 2010 and 2016, respectively.
United States
In the United States, the acquisition of an LL.M. degree is often a way to specialize in an area of law such as tax law, business law, international business law, health law, trial advocacy, environmental law or intellectual property. A number of schools have combined J.D.-LL.M. programs, while others offer the degree through online study. Some LL.M. programs feature a general study of American law. Degree requirements vary by school, and they often differ for LL.M. students who previously earned a J.D. from an American law school and LL.M. students who previously earned a law degree from a non-American law school.
Programs for foreign legal graduates
An LL.M. degree from an ABA-approved law school also allows a foreign lawyer to become eligible to apply for admission to the bar in certain states. Each state has different rules relating to the admittance of foreign-educated lawyers to state bar associations.
An LL.M. degree from an ABA-approved law school qualifies a foreign legal graduate to take the bar exam in Alabama, California, New Hampshire, New York, Texas, as well as in the independent republic of Palau.
In addition, legal practice in the home jurisdiction plus a certain amount of coursework at an accredited law school qualifies a foreign legal graduate to take the bar exam in Alaska, the District of Columbia, Massachusetts, Missouri, Pennsylvania, Rhode Island, Tennessee, Utah and West Virginia. However, a number of states, including Arizona, Florida, Georgia, New Jersey and North Carolina only recognize J.D. degrees from accredited law schools as qualification to take the bar.
New York allows foreign lawyers from civil law countries to sit for the New York bar exam once they have completed a minimum of 24 credit hours (usually but not necessarily in an LL.M. program) at an ABA-approved law school involving at least two basic subjects tested on the New York bar exam, including 12 credits in specific areas of law. Lawyers from common-law countries face more lenient restrictions and may not need to study at an ABA-approved law school. Foreign lawyers from both civil law and common law jurisdictions, however, are required to demonstrate that they have successfully completed a course of law studies of at least three years that would fulfill the educational requirements to bar admission in their home country.
International law and other LL.M. programs
As of 2008, there is one LL.M. degree in International Law offered by The Fletcher School of Law and Diplomacy at Tufts University, the oldest school of international affairs in the United States. Given that the degree specializes in international law, the program has not sought ABA accreditation.
The Notre Dame Law School at the University of Notre Dame offers an LL.M in International Human Rights Law to JD graduates from ABA-accredited US schools or LL.B or equivalent from accredited non-US schools.
Both Duke University School of Law and Cornell Law School offer J.D. students the opportunity to simultaneously pursue an LL.M. in International and Comparative Law.
The University of Nebraska-Lincoln College of Law provides an LL.M. in Space, Cyber & Telecommunications Law, the only program providing focused study in these three areas. The program was established using a grant from NASA and a partnership with the U.S. Air Force Strategic Command.
St. Mary's University offers the LL.M. in International and Comparative Law, with students having the option to complete both it and their J.D. simultaneously.
The University of Tulsa College of Law offers an LL.M. in American Indian and Indigenous Peoples Law to JD graduates from ABA-accredited US schools or LL.B or equivalent from accredited non-US schools.
The University of Washington School of Law offers an LL.M. in Sustainable International Development, the first program of its kind to focus on international development law. LL.M. students in this program are also able to elect for a concentration in Indigenous Rights Law. Similarly, the school offers a separate LL.M. degree in Asian and Comparative Law.
There is one institution that offers an ABA-approved LL.M. but does not offer a J.D. degree: the U.S. Army Judge Advocate General's Legal Center and School offers an officer's resident graduate course, a specialized program beyond the first degree in law, leading to an LL.M. in Military Law.
2180) International Space Station
Details
The International Space Station Program brings together international flight crews, multiple launch vehicles, globally distributed launch and flight operations, training, engineering, and development facilities, communications networks, and the international scientific research community.
Overview
The Space Station was officially given approval by President Reagan and a budget approved by the US Congress in 1984. NASA Administrator James Beggs immediately set out to find international partners who would cooperate on the program. Canada, Japan and many member nations of the European Space Agency began to participate in the program soon after.
The Station was designed between 1984 and 1993. Elements of the Station were in construction throughout the US, Canada, Japan, and Europe beginning in the late 1980s.
In 1993, as the Station was undergoing a redesign, the Russians were invited to participate.
Agreement was made to proceed in two phases. During the first phase, NASA Space Shuttles would carry astronauts and cosmonauts to the Russian Mir Orbital Station. The US would help to modify two Russian-built modules to house US and international experiments and to establish working processes between the participating nations. During Phase 2, led by the US and Russia, all of the participating nations would contribute elements and crewmembers to a new International Space Station (ISS).
Phase 1, called NASA-Mir, took place between 1995 and 1998. Eleven Space Shuttle launches went to Mir, with the last ten docking to Mir and astronauts and cosmonauts transferring between the two vehicles. Two new Russian modules, Spektr and Priroda, were launched, became part of Mir, and housed dozens of US payloads and seven US astronauts.
In Phase 2, the elements of the new ISS were launched beginning in 1998.
Five partner agencies, the Canadian Space Agency, the European Space Agency, the Japan Aerospace Exploration Agency, the National Aeronautics and Space Administration, and the State Space Corporation “Roscosmos”, operate the International Space Station, with each partner responsible for managing and controlling the hardware it provides. The station was designed from the outset to be interdependent and relies on contributions from across the partnership to function.
The International Space Station (ISS) is a unique blend of unified and diversified goals among the world’s space agencies that will lead to improvements in life on Earth for all people of all nations. While the various space agency partners may emphasize different aspects of research to achieve their goals in the use of the ISS, they are unified in several important overarching goals. All of the agencies recognize the importance of leveraging the ISS as an education platform to encourage and motivate today’s youth to pursue careers in science, technology, engineering, and math (STEM): educating the children of today to be the leaders and space explorers of tomorrow. All the agencies are also unified in their goal of applying knowledge gained through ISS research in human physiology, radiation, materials science, engineering, biology, fluid physics, and technology to enable future space exploration missions.
The ISS program’s greatest accomplishment is as much a human achievement as a technological one. The global partnership of space agencies exemplifies meshing of cultural differences and political intricacies to plan, coordinate, provide, and operate the complex elements of the ISS. The program also brings together international flight crews and globally distributed launch, operations, training, engineering, communications networks, and scientific research communities.
Although the primary Mission Control centers are in the US and Russia, several ancillary control centers in Canada, Japan, and Europe also have a role in managing each nation’s elements and crew members.
The intended life span of ISS has been extended several times. Since several elements are now beyond their originally intended lifespans, analyses are conducted periodically to ensure the Station is safe for continued habitation and operation. Much of the Station is modular and so as parts and systems wear out, new parts are launched to replace or augment the original. The ISS will continue to be a working laboratory and outpost in orbit until at least 2030.
How it All Began
The idea of living in space was the very first step towards a space station. The first person to write about living and traveling in space was the noted renaissance astronomer Johannes Kepler in the early 1600s. He was the first to realize that planets were worlds, that there was space between the planets and he wrote that one day people would travel through space.
In the 1860s, Edward Everett Hale wrote “The Brick Moon”, which was published in The Atlantic Monthly magazine. The Brick Moon had many of the characteristics of a space station; it was a man-made structure that orbited Earth and provided housing and life support for its crew while serving as a navigation aid for people on Earth.
Others, like the Russian theoretician Konstantin Tsiolkovsky, were thinking about designs for space stations that could use sunlight for power and that would serve as miniature Earths, with vegetation growing in the interior.
The first details of the engineering, design and construction of a space station were described by Herman Noordung in 1928. He described a “Wohnrad”, or “living wheel”: a wheel-shaped rotating space station. He reasoned that the rotation would be required to create artificial gravity for the crewmembers. He described how it would be assembled first on the ground for testing and then its individual parts launched by rocket for reassembly in orbit.
Willy Ley wrote about life in a space station in 1952. “When man first takes up residence in space, it will be within the spinning hull of a wheel-shaped space station [revolving] around the earth much as the moon does. Life will be cramped and complicated for space dwellers; they will exist under conditions comparable to those in a modern submarine…it will be a self-contained community in which all man’s needs, from air-conditioning to artificial gravity, have been supplied.” [Ley and Chesley Bonestell in The Conquest of Space, Viking Press.] Their ideas went nationwide in Collier’s Magazine and on the Walt Disney television program.
The US government began to develop space station concepts in the 1950s. One of the early concepts was the US Army Project Horizon modular orbital station which would serve to house crews and refuel spacecraft on their way to a moon base. In the early 1960s, NASA’s Manned Spacecraft Center (now Johnson Space Center in Houston) elaborated on the requirements for a station and they patented the concept. Concepts for the first US space station, which would later become known as Skylab, started about this time.
Almost simultaneously, the Soviet Union planned a super rocket launcher that would orbit a large space station. The rocket, designated the N-1, would also be pressed into service for the Soviet manned Moon landing program. But test launches beginning in 1969 proved unsuccessful and so the Soviets turned their attention to smaller stations which could be launched by their most powerful functioning rocket, the Proton.
Assembly
The ISS components were built in various countries around the world, with each piece performing its function only once connected in space, a testament to the teamwork and cultural coordination involved.
Like a Lego set, each piece of the ISS was launched and assembled in space, using complex robotics systems and humans in spacesuits connecting fluid lines and electrical wires.
The ISS is the largest human-made object ever to orbit Earth. The ISS has a pressurized volume of approximately 900 m^3 (31,000 ft^3) and a mass of over 400,000 kg (900,000 lb). Actual numbers vary as logistics resupply vehicles come and go on a frequent and regular basis.
The ISS solar arrays cover an area of 2,247 m^2 (24,187 ft^2) and can generate 735,000 kW-hours of electrical power per year.
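To put the annual figure in perspective, a quick back-of-the-envelope calculation converts it into an average continuous power level. The short Python sketch below is illustrative only: it uses the 735,000 kWh figure quoted above and ignores leap years.

```python
# Rough sanity check: convert the quoted annual electrical energy into an
# average continuous power. Illustrative arithmetic only, not an official figure.
annual_energy_kwh = 735_000        # electrical energy generated per year (quoted above), kWh
hours_per_year = 365 * 24          # leap years ignored for a rough estimate

average_power_kw = annual_energy_kwh / hours_per_year
print(f"Average continuous power: {average_power_kw:.0f} kW")  # roughly 84 kW
```

Averaged over the year this comes to roughly 84 kW of continuous power; instantaneous output is higher while the arrays are in sunlight and zero during the shadowed part of each orbit, with batteries bridging the gap.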
The ISS structure measures 109 m (358 ft) (across arrays) by 51 m (168 ft) (module length from the forward end of PMA2 to the aft end of the SM).
The ISS orbits at an altitude of between 370 and 460 km (200–250 nmi). It falls toward Earth continually due to atmospheric drag and requires periodic rocket firings to boost its orbit. The ISS orbital inclination is 51.6°, permitting the ISS to fly over 90% of the inhabited Earth.
The ISS carries a crew of between 3 and 13, depending on the number of people and passenger vehicles present during handover periods. It normally hosts a continuous crew of seven.
Building the ISS required 36 Space Shuttle assembly flights and 6 Russian Proton and Soyuz rocket launches. More launches are continuing as new modules are completed and ready to become part of the orbiting complex.
Logistics, resupply and crew exchange have been provided by a number of vehicles including the Space Shuttle, Russian Progress and Soyuz, Japanese H-II Transfer Vehicle (HTV), European Automated Transfer Vehicle (ATV) and commercial Dragon, Cygnus and Starliner vehicles.
Additional Information
The International Space Station (ISS) is a large space station assembled and maintained in low Earth orbit by a collaboration of five space agencies and their contractors: NASA (United States), Roscosmos (Russia), JAXA (Japan), ESA (Europe), and CSA (Canada). The ISS is the largest space station ever built. Its primary purpose is to perform microgravity and space environment experiments.
Operationally, the station is divided into two sections: the Russian Orbital Segment (ROS) assembled by Roscosmos, and the US Orbital Segment, assembled by NASA, JAXA, ESA and CSA. A striking feature of the ISS is the Integrated Truss Structure, which connects the large solar panels and radiators to the pressurized modules. The pressurized modules are specialized for research, habitation, storage, spacecraft control, and airlock functions. Visiting spacecraft dock at the station via its eight docking and berthing ports. The ISS maintains an orbit with an average altitude of 400 kilometres (250 mi) and circles the Earth in roughly 93 minutes, completing 15.5 orbits per day.
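The quoted period and daily orbit count follow directly from the altitude via Kepler's third law. The Python sketch below is a minimal illustration, assuming a circular orbit at 400 km and using standard textbook values for Earth's radius and gravitational parameter (constants not given in the text above).

```python
import math

# Minimal sketch: period of a circular orbit at ISS altitude via Kepler's third law.
# The constants below are standard textbook values, not figures from the text.
MU_EARTH = 398_600.4418      # Earth's gravitational parameter, km^3/s^2
EARTH_RADIUS = 6_371.0       # mean Earth radius, km

altitude_km = 400.0                          # average ISS altitude quoted above
a = EARTH_RADIUS + altitude_km               # semi-major axis of a circular orbit, km
period_s = 2 * math.pi * math.sqrt(a**3 / MU_EARTH)

print(f"Orbital period: {period_s / 60:.1f} minutes")   # about 92.4 minutes
print(f"Orbits per day: {86_400 / period_s:.1f}")       # about 15.6
```

The result, roughly 92–93 minutes and about 15.5 orbits per day, matches the figures quoted above to within rounding and the slight eccentricity of the real orbit.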
The ISS programme combines two prior plans to construct crewed Earth-orbiting stations: Space Station Freedom planned by the United States, and the Mir-2 station, planned by the Soviet Union. The first ISS module was launched in 1998. Major modules have been launched by Proton and Soyuz rockets and by the Space Shuttle launch system. The first long-term residents, Expedition 1, arrived on November 2, 2000. Since then, the station has been continuously occupied for 23 years and 223 days, the longest continuous human presence in space. As of March 2024, 279 individuals from 22 countries have visited the space station. The ISS is expected to have additional modules (the Axiom Orbital Segment, for example) before being de-orbited by a dedicated NASA spacecraft in January 2031.
More Details
International Space Station (ISS), space station assembled in low Earth orbit largely by the United States and Russia, with assistance and components from a multinational consortium.
The project, which began as an American effort, was long delayed by funding and technical problems. Originally called Freedom in the 1980s by U.S. Pres. Ronald Reagan, who authorized the National Aeronautics and Space Administration (NASA) to build it within 10 years, it was redesigned in the 1990s to reduce costs and expand international involvement, at which time it was renamed. In 1993 the United States and Russia agreed to merge their separate space station plans into a single facility, integrating their respective modules and incorporating contributions from the European Space Agency (ESA) and Japan.
Assembly of the International Space Station (ISS) began with the launches of the Russian control module Zarya on November 20, 1998, and the U.S.-built Unity connecting node the following month, which were linked in orbit by U.S. space shuttle astronauts. In mid-2000 the Russian-built module Zvezda, a habitat and control center, was added, and on November 2 of that year the ISS received its first resident crew, comprising Russian cosmonauts Sergey Krikalev and Yuri Gidzenko and American astronaut William Shepherd, who flew up in a Soyuz spacecraft. The ISS has been continuously occupied since then. A NASA microgravity laboratory called Destiny and other elements were subsequently joined to the station, with the overall plan calling for the assembly, over a period of several years, of a complex of laboratories and habitats crossed by a long truss supporting four units that held large solar-power arrays and thermal radiators. Aside from the United States and Russia, station construction involved Canada, Japan, and 11 ESA members. Russian modules were carried into space by Russian expendable launch vehicles, after which they automatically rendezvoused with and docked to the ISS. Other elements were ferried up by space shuttle and assembled in orbit during space walks. During ISS construction, both shuttles and Russian Soyuz spacecraft transported people to and from the station, and a Soyuz remained docked to the ISS at all times as a “lifeboat.”
After the shuttle resumed regular flights in 2006, the ISS crew size was increased to three. Construction resumed in September of that year, with the addition of a pair of solar wings and a thermal radiator. The European-built American node, Harmony, was placed on the end of Destiny in October 2007. Harmony has a docking port for the space shuttle and connecting ports for a European laboratory, Columbus, and a Japanese laboratory, Kibo. In February 2008 Columbus was mounted on Harmony’s starboard side. Columbus was Europe’s first long-duration crewed space laboratory and contained experiments in such fields as biology and fluid dynamics. In the following month an improved variant of the Ariane V rocket launched Europe’s heaviest spacecraft, the Jules Verne Automated Transfer Vehicle (ATV), which carried 7,700 kg (17,000 pounds) of supplies to the ISS. Also in March shuttle astronauts brought the Canadian robot, Dextre, which was so sophisticated that it would be able to perform tasks that previously would have required astronauts to make space walks, and the first part of Kibo. In June 2008 the main part of Kibo was installed.
The ISS became fully operational in May 2009 when it began hosting a six-person crew; this required two Soyuz lifeboats to be docked with the ISS at all times. The six-person crew typically consisted of three Russians, two Americans, and one astronaut from either Japan, Canada, or the ESA. An external platform was attached to the far end of Kibo in July, and a Russian docking port and airlock, Poisk, was attached to the Zvezda module in November. A third node, Tranquility, was installed in 2010, and mounted on this was a cupola, whose robotic workstation and many windows enabled astronauts to supervise external operations.
After completion of the ISS, the shuttle was retired from service in 2011. Thereafter, the ISS was serviced by Russia’s Progress, Europe’s ATV, Japan’s H-II Transfer Vehicle, and two commercial cargo vehicles, SpaceX’s Dragon and Orbital Sciences Corporation’s Cygnus. A new American crew capsule, SpaceX’s Crew Dragon, had its first flight to the ISS in 2020, and the Boeing Company’s CST-100 Starliner was scheduled to have its first crewed test flight in 2024. Prior to Crew Dragon, all astronauts used Soyuz spacecraft to reach the ISS. Crew Dragon carried four astronauts to the station, and the ISS was then able to accommodate a crew of seven. A Russian science module, Nauka, was added to the station in 2021.
More than 200 astronauts from 20 different countries have visited the ISS. Astronauts typically stay on the ISS for about six months. The return of a Soyuz to Earth marks the end of an ISS Expedition, and the command of the ISS is transferred to another astronaut.
However, a few astronauts have spent much longer times on the ISS. On a special mission called “A Year in Space,” Russian cosmonaut Mikhail Korniyenko and American astronaut Scott Kelly spent 340 days in orbit from March 2015 to March 2016. Kelly’s flight was the longest by an American. (Since Kelly’s brother, Mark, was his identical twin, as well as a former astronaut himself, scientists were able to use Mark as a baseline for how the long spaceflight had changed Scott.) In 2017 Russia temporarily cut the number of its ISS crew from three to two, and American astronaut Peggy Whitson extended her mission to 289 days, which at that time was the longest single spaceflight by a woman, so the station would have a full crew of six. Whitson has been to the ISS on three other flights and in total has spent more than 675 days in space, a record for an American and a woman. Whitson’s longest consecutive spaceflight record was surpassed by American astronaut Christina Koch, who spent 328 days on the ISS from March 2019 to February 2020. During that time Koch and American astronaut Jessica Meir performed the first all-female space walk. Russian cosmonaut Pyotr Dubrov and American astronaut Mark Vande Hei stayed on the station for 355 days from April 2021 to March 2022. Vande Hei broke Kelly’s record for longest American spaceflight.
The United States, ESA, Japan, and Canada have not definitively decided when the program will end, but in 2021 the Joe Biden administration indicated that the program would receive U.S. support through 2030. The ESA, Japan, and Canada have also committed to support the ISS through 2030. Russia announced that it would support the station through 2028 and then begin work on its own orbital space station.
2181) Economics
Gist
Economics is the study of how people allocate scarce resources for production, distribution, and consumption, both individually and collectively. The field of economics is connected with and has ramifications on many others, such as politics, government, law, and business.
Summary
Economics is a social science that seeks to analyze and describe the production, distribution, and consumption of wealth. In the 19th century economics was the hobby of gentlemen of leisure and the vocation of a few academics; economists wrote about economic policy but were rarely consulted by legislators before decisions were made. Today there is hardly a government, international agency, or large commercial bank that does not have its own staff of economists. Many of the world’s economists devote their time to teaching economics in colleges and universities around the world, but most work in various research or advisory capacities, either for themselves (in economics consulting firms), in industry, or in government. Still others are employed in accounting, commerce, marketing, and business administration; although they are trained as economists, their occupational expertise falls within other fields. Indeed, this can be considered “the age of economists,” and the demand for their services seems insatiable. Supply responds to that demand, and in the United States alone some 400 institutions of higher learning grant about 900 new Ph.D.’s in economics each year.
Definition
No one has ever succeeded in neatly defining the scope of economics. Many have agreed with Alfred Marshall, a leading 19th-century English economist, that economics is “a study of mankind in the ordinary business of life; it examines that part of individual and social action which is most closely connected with the attainment, and with the use of the material requisites of wellbeing”—ignoring the fact that sociologists, psychologists, and anthropologists frequently study exactly the same phenomena. In the 20th century, English economist Lionel Robbins defined economics as “the science which studies human behaviour as a relationship between (given) ends and scarce means which have alternative uses.” In other words, Robbins said that economics is the science of economizing. While his definition captures one of the striking characteristics of the economist’s way of thinking, it is at once too wide (because it would include in economics the game of chess) and too narrow (because it would exclude the study of the national income or the price level). Perhaps the only foolproof definition is that attributed to Canadian-born economist Jacob Viner: economics is what economists do.
Difficult as it may be to define economics, it is not difficult to indicate the sorts of questions that concern economists. Among other things, they seek to analyze the forces determining prices—not only the prices of goods and services but the prices of the resources used to produce them. This involves the discovery of two key elements: what governs the way in which human labour, machines, and land are combined in production and how buyers and sellers are brought together in a functioning market. Because prices of the various things must be interrelated, economists therefore ask how such a “price system” or “market mechanism” hangs together and what conditions are necessary for its survival.
These questions are representative of microeconomics, the part of economics that deals with the behaviour of individual entities such as consumers, business firms, traders, and farmers. The other major branch of economics is macroeconomics, which focuses attention on aggregates such as the level of income in the whole economy, the volume of total employment, the flow of total investment, and so forth. Here economists are concerned with the forces determining the income of a country or the level of total investment, and they seek to learn why full employment is so rarely attained and what public policies might help a country achieve higher employment or greater price stability.
But these examples still do not exhaust the range of problems that economists consider. There is also the important field of development economics, which examines the attitudes and institutions supporting the process of economic development in poor countries as well as those capable of self-sustained economic growth (for example, development economics was at the heart of the Marshall Plan). In this field the economist is concerned with the extent to which the factors affecting economic development can be manipulated by public policy.
Cutting across these major divisions in economics are the specialized fields of public finance, money and banking, international trade, labour economics, agricultural economics, industrial organization, and others. Economists are frequently consulted to assess the effects of governmental measures such as taxation, minimum-wage laws, rent controls, tariffs, changes in interest rates, changes in government budgets, and so on.
Details
What Is Economics?
Economics is a social science that focuses on the production, distribution, and consumption of goods and services. The study of economics is primarily concerned with analyzing the choices that individuals, businesses, governments, and nations make to allocate limited resources. Economics has ramifications on a wide range of other fields, including politics, psychology, business, and law.
KEY TAKEAWAYS
* Economics is the study of how people allocate scarce resources for production, distribution, and consumption, both individually and collectively.
* The field of economics is connected with and has ramifications on many others, such as politics, government, law, and business.
* The two branches of economics are microeconomics and macroeconomics.
* Economics focuses on efficiency in production and exchange.
* Gross Domestic Product (GDP) and the Consumer Price Index (CPI) are two of the most widely used economic indicators.
Understanding Economics
Assuming humans have unlimited wants within a world of limited means, economists analyze how resources are allocated for production, distribution, and consumption.
The study of microeconomics focuses on the choices of individuals and businesses, and macroeconomics concentrates on the behavior of the economy on an aggregate level.
One of the earliest recorded economists was the 8th-century B.C. Greek farmer and poet Hesiod who wrote that labor, materials, and time needed to be allocated efficiently to overcome scarcity. The publication of Adam Smith's 1776 book An Inquiry Into the Nature and Causes of the Wealth of Nations sparked the beginning of the current Western contemporary economic theories.
Microeconomics
Microeconomics studies how individual consumers and firms make decisions to allocate resources. Whether a single person, a household, or a business, economists may analyze how these entities respond to changes in price and why they demand what they do at particular price levels.
Microeconomics analyzes how and why goods are valued differently, how individuals make financial decisions, and how they trade, coordinate, and cooperate.
Within the dynamics of supply and demand, the costs of producing goods and services, and how labor is divided and allocated, microeconomics studies how businesses are organized and how individuals approach uncertainty and risk in their decision-making.
Macroeconomics
Macroeconomics is the branch of economics that studies the behavior and performance of an economy as a whole. Its primary focus is recurrent economic cycles and broad economic growth and development.
It focuses on foreign trade, government fiscal and monetary policy, unemployment rates, the level of inflation, interest rates, the growth of total production output, and business cycles that result in expansions, booms, recessions, and depressions.
Using aggregate indicators, economists use macroeconomic models to help formulate economic policies and strategies.
What Is the Role of an Economist?
An economist studies the relationship between a society's resources and its production or output, and their opinions help shape economic policies related to interest rates, tax laws, employment programs, international trade agreements, and corporate strategies.
Economists analyze economic indicators such as gross domestic product and the consumer price index to identify potential trends or make economic forecasts.
According to the Bureau of Labor Statistics (BLS), 38% of all economists in the United States work for a federal or state agency. Economists are also employed as consultants, professors, by corporations, or as part of economic think tanks.
What Are Economic Indicators?
Economic indicators detail a country's economic performance. Published periodically by governmental agencies or private organizations, economic indicators often have a considerable effect on stocks, employment, and international markets. They may predict future economic conditions that will move markets and guide investment decisions.
Gross domestic product (GDP)
The gross domestic product (GDP) is considered the broadest measure of a country's economic performance. It calculates the total market value of all finished goods and services produced in a country in a given year. In the U.S., the Bureau of Economic Analysis (BEA) issues a regular GDP report during the latter part of each month.
Many investors, analysts, and traders focus on the advance GDP report and the preliminary report, both issued before the final GDP figures, because GDP is considered a lagging indicator, meaning it can confirm a trend but cannot predict one.
Retail sales
Reported by the U.S. Department of Commerce (DOC) during the middle of each month, the retail sales report measures the total receipts, or dollar value, of all merchandise sold in stores.
Because it samples retailers across the country, the report acts as a proxy for consumer spending levels. Consumer spending represents more than two-thirds of GDP, making the report useful for gauging the economy's general direction.
Industrial production
The industrial production report, released monthly by the Federal Reserve, reports changes in the production of factories, mines, and utilities in the U.S. One measure included in this report is the capacity utilization rate, which estimates the portion of productive capacity that is being used rather than standing idle in the economy. Capacity utilization in the range of 82% to 85% is considered "tight" and can increase the likelihood of price increases or supply shortages in the near term. Levels below 80% are interpreted as showing "slack" in the economy, which may increase the likelihood of a recession.
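As an illustration only (the figures and function names below are hypothetical, and the 80% and 82–85% bands are just the rules of thumb described above), the capacity utilization rate is simply actual output divided by full productive capacity:

def capacity_utilization(actual_output, full_capacity):
    """Capacity utilization rate, as a percentage of full productive capacity."""
    return 100.0 * actual_output / full_capacity

def interpret_utilization(rate_pct):
    """Read the rate against the rule-of-thumb bands described above."""
    if rate_pct < 80.0:
        return "slack: recession risk may be rising"
    if 82.0 <= rate_pct <= 85.0:
        return "tight: price increases or supply shortages more likely"
    return "no strong signal"

rate = capacity_utilization(actual_output=83.0, full_capacity=100.0)  # hypothetical values
print(f"{rate:.1f}% -> {interpret_utilization(rate)}")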
Employment Data
The Bureau of Labor Statistics releases employment data in a report called the nonfarm payrolls on the first Friday of each month.
Sharp increases in employment indicate prosperous economic growth, while significant decreases may signal that a contraction is imminent. These are generalizations, however, and it is important to consider the current position of the economy.
Consumer Price Index (CPI)
The Consumer Price Index (CPI), also issued by the BLS, measures the level of retail price changes, and the costs that consumers pay, and is the benchmark for measuring inflation. Using a basket that is representative of the goods and services in the economy, the CPI compares the price changes month after month and year after year.
This report is an important economic indicator, and its release can increase volatility in equity, fixed income, and forex markets. Greater-than-expected price increases are considered a sign of inflation, which will likely cause the underlying currency to depreciate.
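To make those month-over-month and year-over-year comparisons concrete, the short sketch below (with made-up index values, not actual BLS data) shows how a percentage inflation rate is derived from two CPI readings:

def inflation_rate(cpi_new, cpi_old):
    """Percentage change between two CPI readings."""
    return 100.0 * (cpi_new - cpi_old) / cpi_old

# Hypothetical index values, for illustration only.
cpi_last_year = 296.0
cpi_this_year = 305.0
print(f"Year-over-year inflation: {inflation_rate(cpi_this_year, cpi_last_year):.1f}%")  # about 3.0%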
Additional Information
Economics is a social science that studies the production, distribution, and consumption of goods and services.
Economics focuses on the behaviour and interactions of economic agents and how economies work. Microeconomics analyses what are viewed as the basic elements of the economy, including individual agents and markets, their interactions, and the outcomes of those interactions. Individual agents may include, for example, households, firms, buyers, and sellers. Macroeconomics analyses the economy as a system in which production, distribution, consumption, savings, and investment expenditure interact, along with the factors affecting it: the factors of production (such as labour, capital, land, and enterprise), inflation, economic growth, and the public policies that have an impact on these elements.
Other broad distinctions within economics include those between positive economics, describing "what is", and normative economics, advocating "what ought to be"; between economic theory and applied economics; between rational and behavioural economics; and between mainstream economics and heterodox economics.
Economic analysis can be applied throughout society, including business, finance, cybersecurity, health care, engineering and government. It is also applied to such diverse subjects as crime, education, the family, feminism, law, philosophy, politics, religion, social institutions, war, science and the environment.
2182) Driver's License
A driver's license, driving licence, or driving permit is a legal authorization, or the official document confirming such an authorization, for a specific individual to operate one or more types of motorized vehicles—such as motorcycles, cars, trucks, or buses—on a public road. Such licenses are often plastic and the size of a credit card.
In most international agreements the wording "driving permit" is used, for instance in the Vienna Convention on Road Traffic. In Australian English, Canadian English, New Zealand English, and American English, the terms "driver license" or "driver's license" are used, while in British English and in many former British colonies the term is "driving licence".
The laws relating to the licensing of drivers vary between jurisdictions. In some jurisdictions, a permit is issued after the recipient has passed a driving test, while in others a person acquires their permit or a learner's permit before beginning to drive. Different categories of permit often exist for different types of motor vehicles, particularly large trucks and passenger vehicles. The difficulty of the driving test varies considerably between jurisdictions, as do factors such as age and the required level of competence and practice.
History
Karl Benz, inventor of the modern car, received a written "Genehmigung" (permit) from the Grand Ducal authorities to operate his car on public roads in 1888 after residents complained about the noise and smell of his Motorwagen. Up until the start of the 20th century, European authorities issued similar permits to drive motor vehicles ad hoc, if at all.
Mandatory licensing for drivers in the United Kingdom came into force on 1 January 1904 after the Motor Car Act 1903 received royal assent. Every car owner had to register their vehicle with their local government authority and be able to prove registration of their vehicle on request. The minimum qualifying age was set at 17. The "driving licence" gave its holder 'freedom of the road' with a maximum 20 mph (32 km/h) speed limit. Compulsory testing was introduced in 1934, with the passing of the Road Traffic Act.
Prussia, then a kingdom within the German Empire, introduced compulsory licensing on 29 September 1903. A test on mechanical aptitude had to be passed and the Dampfkesselüberwachungsverein ("steam boiler supervision association") was charged with conducting these tests. In 1910, the German imperial government mandated the licensing of drivers on a national scale, establishing a system of tests and driver's education requirements that was adopted in other countries.
Other countries in Europe also introduced driving tests during the twentieth century, the last of them being Belgium where, until as recently as 1977, it was possible to purchase and hold a permit without having to undergo a driving test.
As traffic-related fatalities soared in North America, public outcry provoked legislators to begin studying the French and German statutes as models. On 1 August 1910, North America's first licensing law for motor vehicles went into effect in the U.S. state of New York, though it initially applied only to professional chauffeurs. In July 1913, the state of New Jersey became the first to require all drivers to pass a mandatory examination before being licensed.
In 1909, the Convention with Respect to the International Circulation of Motor Vehicles recognized the need for qualifications, examination, and authorization for international driving.
The notion of an "International Driving Permit" was first mooted in an international convention held in Paris in 1926.
In 1949, the United Nations hosted the Geneva Convention on Road Traffic that standardised rules on roads, occupants, rules, signs, driver's permits and such. It specified that national "driving permits" should be pink and that an "International Driving Permit" for driving in a number of countries should have grey covers with white pages and that "The entire last page shall be drawn up in French".
In 1968, the Vienna Convention on Road Traffic, ratified in 1977 and further updated in 2011, further modernised these agreements.
Its main regulations about driving permits are in Annex 6 (Domestic Driving Permit) and Annex 7 (International Driving Permit). The current version of these annexes has been in force in each contracting party since no later than 29 March 2011 (Article 43).
Article 41 of the convention describes key requirements:
* every driver of a motor vehicle must hold appropriate documentation;
* "driving permits" can be issued only after passing theoretical and practical exams, which are regulated by each country or jurisdiction;
* Contracting parties shall recognize as valid for driving in their territories:
* "domestic driving permits" conforming to the provisions of Annex 6 to the convention;
* an "International Driving Permit" conforming to the provisions of Annex 7 to the convention, on condition that it is presented with the corresponding domestic driving permit;
* "domestic driving permits" issued by a contracting party shall be recognised in the territory of another contracting party until this territory becomes the place of normal residence of their holder;
* all of the above does not apply to learner-driver permits;
* the period of validity of an international driving permit shall be either no more than three years after the date of issue or until the date of expiry of the domestic driving permit, whichever is earlier;
* Contracting parties may refuse to recognize the validity of driving permits for persons under eighteen or, for categories C, D, CE and DE, under twenty-one;
* an international driving permit shall only be issued by the contracting party in whose territory the holder has their normal residence and that issued the domestic driving permit or that recognized the driving permit issued by another contracting party; it shall not be valid for use in that territory.
In 2018, ISO/IEC standard 18013 was published, which established guidelines for the design format and data content of an ISO-compliant driving licence (IDL). The design approach is to establish a secure domestic driving permit (DDP) and an accompanying booklet for international use, instead of the international driving permit (IDP) paper document. The guiding principle is a minimum acceptable set of requirements with regard to the content and layout of the data elements, with sufficient freedom afforded to the issuing authorities of driving licences to meet domestic needs. The ISO standard specifies requirements for a card that is aligned with the UN Conventions on Road Traffic. However, this standard has no official mandate or recognition from WP.1 of UNECE as a replacement for the current IDP standards described in the 1949 and 1968 Conventions.
The specification of the booklet's layout is defined in Annex G of ISO/IEC 18013-1:2018. There are two options: a booklet with some personalisation or a booklet with no personalisation. The booklet shall be marginally larger than an ID-1 size card, with an insert pocket for storage of the card and for convenient carrying of the booklet. The front cover should include the logo of the UN or the issuing country and the words "Translation of Driving Licence" and "Traduction du Permis de Conduire".
2183) Master (naval)
The master, or sailing master, is a historical rank for a naval officer trained in and responsible for the navigation of a sailing vessel.
In the Royal Navy, the master was originally a warrant officer who ranked with, but after, the lieutenants. The rank became a commissioned officer rank and was renamed navigating lieutenant in 1867; the rank gradually fell out of use from around 1890 since all lieutenants were required to pass the same examinations.
When the United States Navy was formed in 1794, master was listed as one of the warrant officer ranks and ranked between midshipmen and lieutenants. The rank was also a commissioned officer rank from 1837 until it was replaced with the current rank of lieutenant, junior grade in 1883.
Russia
Until 1733 the sailing masters in the Imperial Russian Navy were rated as petty officers, but in that year the rank of Master was introduced after the British model. Masters ranked above sub-lieutenants, but under lieutenants. Meritorious masters could be given lieutenant's rank, but only if they were noblemen. In 1741 the rank of master was abolished, and the officers holding that rank were promoted to lieutenants, while second masters and master's mates became ensigns. Henceforth masters could be promoted to sea officers, even if they were commoners.
The Pauline military reforms also included the navy, and the sailing department henceforth contained masters of VIII Class (rank as lieutenant commanders); masters of IX Class (below lieutenant commander but above lieutenant); masters of XII Class (rank as sub-lieutenants); masters of XIV Class (junior to sub-lieutenants); as well as master's mates and master's apprentices which were rated as petty officers.
In 1827 a navigation corps was founded, which was also in charge of the hydrographic service. In common with other non-executive corps in the Russian navy, members of the navigation corps were given military ranks. This corps contained one major general and a number of colonels, lieutenant colonels, captains, staff captains, lieutenants, second lieutenants and ensigns, as well as conductors (warrant officers). In 1885 the navigation corps was slated for abolition, and its responsibilities were transferred to the executive corps.
Spain
Spanish sailing masters belonged to a navigation corps, called the Cuerpo de Pilotos. Unlike their British counterparts, they were theoretically trained at the famous navigation schools, called Real Colegios Seminarios de San Telmo, in Seville and Málaga. In order to be accepted at these schools, the applicant had to be a Spaniard between eight and fourteen years of age. Colored persons, Romani people, heretics, Jews, those punished by the Inquisition, and those whose parents pursued disreputable professions were not eligible for enrollment. The master's apprentices were called meritorios de pilotaje and were rated at sea as common seamen. In order to become a master's assistant, called a pilotín, during the 18th century, three voyages in Europe and one round trip to America were required, along with passing a special examination. Promotion to second master could only take place if a berth was available.
Masters, called primeros pilotos, were originally ranked as ensigns, while the second masters, called pilotos, were ranked below officers but above petty officers. Later the masters were given rank as lieutenant commanders or lieutenants, while the second masters were ranked as sub-lieutenants or ensigns according to seniority. Master's assistant lacked formal rank. From 1821, masters ranked as lieutenants, second masters as sub-lieutenants, and third masters as ensigns. Promotion from the navigation corps to the sea officer corps was not unusual.
Early on, members of the navigation corps sought to improve its status. It was not until 1770, however, that the sailing masters received a uniform different from that of the petty officers. Under royal orders, members of the navigation corps were from 1781 to be called Don, be regarded as caballeros (gentlemen), carry small swords, and take oaths by swearing on a crucifix. In 1823, the senior ranks of the navigation corps were transferred to the executive corps, and in 1846 the corps was abolished and its remaining members were included among the sea officers with the rank of sub-lieutenant.
Sweden
Sailing master (ansvarsstyrman, literally: "responsible navigator") was in the Royal Swedish Navy until 1868 a berth, held by the ship's senior warrant officer of the sailing branch, in charge of navigation, steering, anchors, and ballast. In 1868, the responsibility for navigation was transferred to a commissioned officer berth, the navigating officer, and the sailing master became an assistant navigator in charge of navigation stores.
Royal Navy
In the Middle Ages, when 'warships' were typically merchant vessels hired by the crown, the man in charge of the ship and its mariners, as with all ships and indeed most endeavours ashore, was termed the master; the company of embarked soldiers was commanded by their own captain.
From the time of the reforms of Henry VIII, the master was a warrant officer, appointed by the Council of the Marine (later the Navy Board) who also built and provisioned the Navy's ships. The master was tasked with sailing the ship as directed by the captain, who fought the ship when an enemy was engaged. The captain had a commission from (and was responsible to) the Admiralty, who were in charge of the Navy's strategy and tactics.
Duties
The master's main duty was navigation, taking the ship's position at least daily and setting the sails as appropriate for the required course and conditions. During combat, he was stationed on the quarterdeck, next to the captain. The master was responsible for fitting out the ship, and making sure they had all the sailing supplies necessary for the voyage. The master also was in charge of stowing the hold and ensuring the ship was not too weighted down to sail effectively. The master, through his subordinates, hoisted and lowered the anchor, docked and undocked the ship, and inspected the ship daily for problems with the anchors, sails, masts, ropes, or pulleys. Issues were brought to the attention of the master, who would notify the captain. The master was in charge of the entry of parts of the official log such as weather, position, and expenditures.
Promotion
Masters were promoted from the rank of the master's mates, quartermasters, or midshipmen. Masters were also recruited from the merchant service. A prospective master had to pass an oral examination before a senior captain and three masters at Trinity House. After passing the examination, they would be eligible to receive a warrant from the Navy Board, but promotion was not automatic.
Second master
Second master was a rating introduced in 1753 that indicated a deputy master on a first-, second- or third-rate ship-of-the-line. A second master was generally a master's mate who had passed his examination for master and was deemed worthy of being master of a vessel. Master's mates would act as second master of vessels too small to be allocated a warranted master. Second masters were paid significantly more than master's mates, £5 5s per month. Second masters were given the first opportunity for master vacancies as they occurred.
Uniforms
Originally, the sailing master did not have an official officer uniform, which caused problems when they were captured because they had trouble convincing their captors they should be treated as officers and not ordinary sailors. In 1787 the warrant officers of wardroom rank (master, purser and surgeon) received an official uniform, but it did not distinguish them by rank. In 1807, masters, along with pursers, received their own uniform.
Transition to commissioned officer
By the classic Age of Sail the Master in the Royal Navy had become the warrant officer trained specifically in navigation, the senior warrant officer rank, and the second most important officer aboard rated ships. In 1808, Masters (along with Pursers and Surgeons) were given similar status to commissioned officers, as warrant officers of wardroom rank. The master ate in the wardroom with the other officers, had a large cabin in the gunroom, and had a smaller day cabin next to the captain's cabin on the quarterdeck for charts and navigation equipment.
However, the number of sailing masters halved from 140 to 74 between 1840 and 1860: partly because the pay and privileges were less than those of equivalent ranks in the military branch, and also because the master's responsibilities had been largely assumed by the executive officers. In 1843 the wardroom warrant officers were given commissioned status. The Admiralty, under the First Lord of the Admiralty the Duke of Somerset, began to phase out the title of master after 1862. The ranks of staff commander and staff captain were introduced in 1863 and 1864 respectively; and in 1867 the Masters Branch was re-organised as the Navigating Branch with a new pay scale, with the following ranks:
* staff captain
* staff commander
* navigating lieutenant (formerly Master)
* navigating sub-lieutenant (second Master)
* navigating midshipman (Master's assistant)
* navigating cadet (formerly Naval cadet 2nd class)
The Royal Naval College exams for navigating lieutenant and lieutenant were the same after 1869. By 1872 the number of navigating cadets had fallen to twelve, and an Admiralty experiment in 1873 under the First Sea Lord George Goschen further merged the duties of navigating lieutenants and sailing masters with those of lieutenants and staff commanders. There were no more masters warranted after 1883, and the last one retired in 1892.
Although the actual rank of navigating lieutenant fell out of use about the same time, lieutenants who had passed their navigating exams were distinguished in the Navy List by an N in a circle by their name, and by N for those passed for first-class ships. The last staff commander disappeared in around 1904, and the last staff captain left the Active List in 1913.
United States Navy
Master, originally sailing master, was a historic warrant officer rank of the United States Navy, ranking above midshipman (after 1819, passed midshipman; after 1862, ensign) and below lieutenant.
Some masters were appointed to command ships, with the rank of master commandant. In 1837, sailing master was renamed master, master commandant was renamed commander, and some masters were commissioned as officers, formally "master in line for promotion" to distinguish them from the warrant masters who would not be promoted.
After 1855, passed midshipmen who were graduates of the Naval Academy filled the positions of master. Both the commissioned officer rank of master and warrant officer rank of master were maintained until both were merged into the current rank of lieutenant, junior grade on 3 March 1883.
In 1862 masters wore a gold bar for rank insignia, which became a silver bar in 1877. In 1881 they started wearing sleeve stripes of one 1⁄2-inch (13 mm) and one 1⁄4-inch-wide (6.4 mm) strip of gold lace, still used for the rank of lieutenant, junior grade.
2184) Accounting
Gist
Accounting, also known as accountancy, is the process of recording and processing information about economic entities, such as businesses and corporations.
Summary
Accounting, also known as accountancy, is the process of recording and processing information about economic entities, such as businesses and corporations. Accounting measures the results of an organization's economic activities and conveys this information to a variety of stakeholders, including investors, creditors, management, and regulators. Practitioners of accounting are known as accountants. The terms "accounting" and "financial reporting" are often used interchangeably.
Accounting can be divided into several fields including financial accounting, management accounting, tax accounting and cost accounting. Financial accounting focuses on the reporting of an organization's financial information, including the preparation of financial statements, to the external users of the information, such as investors, regulators and suppliers. Management accounting focuses on the measurement, analysis and reporting of information for internal use by management to enhance business operations. The recording of financial transactions, so that summaries of the financials may be presented in financial reports, is known as bookkeeping, of which double-entry bookkeeping is the most common system. Accounting information systems are designed to support accounting functions and related activities.
Accounting has existed in various forms and levels of sophistication throughout human history. The double-entry accounting system in use today was developed in medieval Europe, particularly in Venice, and is usually attributed to the Italian mathematician and Franciscan friar Luca Pacioli. Today, accounting is facilitated by accounting organizations such as standard-setters, accounting firms and professional bodies. Financial statements are usually audited by accounting firms, and are prepared in accordance with generally accepted accounting principles (GAAP). GAAP is set by various standard-setting organizations such as the Financial Accounting Standards Board (FASB) in the United States and the Financial Reporting Council in the United Kingdom. As of 2012, "all major economies" have plans to converge towards or adopt the International Financial Reporting Standards (IFRS).
Details
Accountancy is the process of measuring, processing and recording an organization's financial and non-financial information. It has a wider scope than accounting, since it provides the framework for the accounting process: accountancy prescribes the conventions, principles, and techniques to be followed by an organization during that process. The nature of accounting is dynamic and analytical, and it therefore requires special abilities and skills to interpret information effectively.
Accountancy is the practice of recording, classifying, and reporting on business transactions for a business. It provides feedback to management regarding the financial results and status of an organization. The key accountancy tasks are noted below.
Recordation
The recording of business transactions usually involves several key transactions that are handled on a repetitive basis, which are issuing customer invoices, paying supplier invoices, recording cash receipts from customers, and paying employees. These tasks are handled by the billing clerk, payables clerk, cashier, and payroll clerk, respectively.
There are also a number of business transactions that are non-repetitive in nature, and so require the use of journal entries to record them in the accounting records. The fixed asset accountant, general ledger clerk, and tax accountant are most likely to be involved in the use of journal entries. There may be a number of closing entries at the end of each reporting period that the general ledger clerk is tasked with entering into the accounting system.
Classification
The results of the efforts of the preceding accountants are accumulated into a set of accounting records, of which the summary document is the general ledger. The general ledger consists of a number of accounts, each of which stores information about a particular type of transaction, such as product sales, depreciation expense, accounts receivable, debt, and so on. Certain high-volume transactions, such as customer billings, may be stored in a subledger, with only its totals rolling into the general ledger. The ending balances in the general ledger may be altered with adjusting entries each month, mostly to record expenses incurred but not yet recorded.
The information in the general ledger is used to derive financial statements, and may also be the source of some information used for internal management reports.
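As an illustrative sketch only (the account names, amounts, and helper function are hypothetical, and real accounting systems are far richer), the following shows the double-entry discipline behind the general ledger described above: every journal entry must carry equal debits and credits, and account balances accumulate from the postings.

from collections import defaultdict

ledger = defaultdict(float)  # account name -> balance (debits positive, credits negative)

def post(entry):
    """Post a journal entry: a list of (account, debit, credit) lines that must balance."""
    total_debits = sum(debit for _, debit, _ in entry)
    total_credits = sum(credit for _, _, credit in entry)
    assert abs(total_debits - total_credits) < 1e-9, "entry does not balance"
    for account, debit, credit in entry:
        ledger[account] += debit - credit

# Hypothetical transactions: a customer invoice, then the cash receipt.
post([("Accounts receivable", 1000.0, 0.0), ("Sales revenue", 0.0, 1000.0)])
post([("Cash", 1000.0, 0.0), ("Accounts receivable", 0.0, 1000.0)])

for account, balance in ledger.items():
    print(f"{account:22s} {balance:10.2f}")

# Trial-balance check: across all accounts, debit and credit balances net to zero.
assert abs(sum(ledger.values())) < 1e-9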
Reporting
The reporting aspects of accountancy are considerable, and so have been divided into smaller areas of specialization, which are noted below.
Financial Accounting
Financial accounting is the province of the general ledger accountant, controller, and chief financial officer, and is concerned with the accumulation of business transactions into financial statements. These documents are presented based on sets of rules known as accounting frameworks, of which the best known are Generally Accepted Accounting Principles (GAAP) and International Financial Reporting Standards (IFRS). The financial statements include the income statement, balance sheet, statement of cash flows, and a number of disclosures that are included in the accompanying footnotes.
Management Accounting
Management accounting is the province of the cost accountant and financial analyst, who investigate ways to improve the profitability of a business and present their results to management. Their reports may be derived from the main system of accounts, but may also include separate data accumulation systems, as may be found with activity-based costing systems. Management accounting is not governed by any accounting framework; the structure of the reports issued to management is tailored to the needs of the business.
In short, accountancy involves each of the preceding tasks - recordation, classification, and reporting.
Additional Information
Accounting is the process of recording financial transactions pertaining to a business. The accounting process includes summarizing, analyzing, and reporting these transactions to oversight agencies, regulators, and tax collection entities. The financial statements used in accounting are a concise summary of financial transactions over an accounting period, summarizing a company's operations, financial position, and cash flows.
KEY TAKEAWAYS
* Regardless of the size of a business, accounting is a necessary function for decision making, cost planning, and measurement of economic performance.
* A bookkeeper can handle basic accounting needs, but a Certified Public Accountant (CPA) should be utilized for larger or more advanced accounting tasks.
* Two important types of accounting for businesses are managerial accounting and cost accounting. Managerial accounting helps management teams make business decisions, while cost accounting helps business owners decide how much a product should cost.
* Professional accountants follow a set of standards known as the Generally Accepted Accounting Principles (GAAP) when preparing financial statements.
* Accounting is an important function of strategic planning, external compliance, fundraising, and operations management.
What Is the Purpose of Accounting?
Accounting is one of the key functions of almost any business. A bookkeeper or an accountant may handle it at a small firm. At larger companies, there might be sizable finance departments guided by a unified accounting manual with dozens of employees. The reports generated by various streams of accounting, such as cost accounting and managerial accounting, are invaluable in helping management make informed business decisions.
The financial statements that summarize a large company's operations, financial position, and cash flows over a particular period are concise and consolidated reports based on thousands of individual financial transactions. As a result, all professional accounting designations are the culmination of years of study and rigorous examinations combined with a minimum number of years of practical accounting experience.
2185) Literature
Gist
Literature is a group of works of art that are made of words. Most are written, but some are shared by word of mouth. Literature usually means a work of poetry, theatre or narrative. There are many different kinds of literature, such as poetry, plays, or novels. They can also be put into groups by their language, historical time, place of origin, genre, and subject. The word "literature" comes from the Latin word "literatura," which means "writing formed with letters."
Most of the earliest works were epic poems. Epic poems are long stories or myths about adventures, such as the Epic of Gilgamesh from ancient Mesopotamia. The Ramayana and Mahabharata, two Indian epics, are still read today. The Iliad and Odyssey are two famous Greek poems by Homer. They were shared over time through speaking and memory and were written down around the 9th or 8th century BCE.
Literature can also mean imaginative or creative writing, which is read for its artistic value. Literature is also related to people and community. According to Sangidu (2004), as quoted by Arfani et al. (2023), "literature is a part of society, a fact that inspires authors to involve themselves in the life order of the community where they are and try to fight for the position of the social structure and the problems faced in society".
Summary
Literature, a body of written works. The name has traditionally been applied to those imaginative works of poetry and prose distinguished by the intentions of their authors and the perceived aesthetic excellence of their execution. Literature may be classified according to a variety of systems, including language, national origin, historical period, genre, and subject matter.
For historical treatment of various literatures within geographical regions, see such articles as African literature; African theater; Oceanic literature; Western literature; Central Asian arts; South Asian arts; and Southeast Asian arts. Some literatures are treated separately by language, by nation, or by special subject (e.g., Arabic literature, Celtic literature, Latin literature, French literature, Japanese literature, and biblical literature).
Definitions of the word literature tend to be circular. The 11th edition of Merriam-Webster’s Collegiate Dictionary considers literature to be “writings having excellence of form or expression and expressing ideas of permanent or universal interest.” The 19th-century critic Walter Pater referred to “the matter of imaginative or artistic literature” as a “transcript, not of mere fact, but of fact in its infinitely varied forms.” But such definitions assume that the reader already knows what literature is. And indeed its central meaning, at least, is clear enough. Deriving from the Latin littera, “a letter of the alphabet,” literature is first and foremost humankind’s entire body of writing; after that it is the body of writing belonging to a given language or people; then it is individual pieces of writing.
But already it is necessary to qualify these statements. To use the word writing when describing literature is itself misleading, for one may speak of “oral literature” or “the literature of preliterate peoples.” The art of literature is not reducible to the words on the page; they are there solely because of the craft of writing. As an art, literature might be described as the organization of words to give pleasure. Yet through words literature elevates and transforms experience beyond “mere” pleasure. Literature also functions more broadly in society as a means of both criticizing and affirming cultural values.
Details
Literature is any collection of written work, but it is also used more narrowly for writings specifically considered to be an art form, especially novels, plays, and poems, and including both print and digital writing. In recent centuries, the definition has expanded to include oral literature, much of which has been transcribed. Literature is a method of recording, preserving, and transmitting knowledge and entertainment, and can also have a social, psychological, spiritual, or political role.
Literature, as an art form, can also include works in various non-fiction genres, such as biography, diaries, memoir, letters, and essays. Within its broad definition, literature includes non-fictional books, articles or other written information on a particular subject.
Etymologically, the term derives from Latin literatura/litteratura "learning, a writing, grammar", originally "writing formed with letters", from litera/littera "letter". In spite of this, the term has also been applied to spoken or sung texts. Literature is often referred to synecdochically as "writing", especially creative writing, and poetically as "the craft of writing" (or simply "the craft"). Syd Field described his discipline, screenwriting, as "a craft that occasionally rises to the level of art."
Developments in print technology have allowed an ever-growing distribution and proliferation of written works, which now include electronic literature.
Definitions
Definitions of literature have varied over time. In Western Europe, prior to the 18th century, literature denoted all books and writing. Literature can be seen as returning to older, more inclusive notions, so that cultural studies, for instance, include, in addition to canonical works, popular and minority genres. The word is also used in reference to non-written works: to "oral literature" and "the literature of preliterate culture".
A value judgment definition of literature considers it as consisting solely of high quality writing that forms part of the belles-lettres ("fine writing") tradition. An example of this is in the 1910–1911 Encyclopædia Britannica, which classified literature as "the best expression of the best thought reduced to writing".
History:
Oral literature
The use of the term "literature" here poses some issue due to its origins in the Latin littera, "letter", essentially writing. Alternatives such as "oral forms" and "oral genres" have been suggested but the word literature is widely used.
Australian Aboriginal culture has thrived on oral traditions and oral histories passed down through tens of thousands of years. In a study published in February 2020, new evidence showed that both Budj Bim and Tower Hill volcanoes erupted between 34,000 and 40,000 years ago. Significantly, this is a "minimum age constraint for human presence in Victoria", and also could be interpreted as evidence for the oral histories of the Gunditjmara people, an Aboriginal Australian people of south-western Victoria, which tell of volcanic eruptions being some of the oldest oral traditions in existence. An axe found underneath volcanic ash in 1947 had already proven that humans inhabited the region before the eruption of Tower Hill.
Oral literature is an ancient human tradition found in "all corners of the world". Modern archaeology has been unveiling evidence of the human efforts to preserve and transmit arts and knowledge that depended completely or partially on an oral tradition, across various cultures:
The Judeo-Christian Bible reveals its oral traditional roots; medieval European manuscripts are penned by performing scribes; geometric vases from archaic Greece mirror Homer's oral style. (...) Indeed, if these final decades of the millennium have taught us anything, it must be that oral tradition never was the other we accused it of being; it never was the primitive, preliminary technology of communication we thought it to be. Rather, if the whole truth is told, oral tradition stands out as the single most dominant communicative technology of our species as both a historical fact and, in many areas still, a contemporary reality.
The earliest poetry is believed to have been recited or sung, employed as a way of remembering history, genealogy, and law.
In Asia, the transmission of folklore, mythologies as well as scriptures in ancient India, in different Indian religions, was by oral tradition, preserved with precision with the help of elaborate mnemonic techniques.
The early Buddhist texts are also generally believed to be of oral tradition. The anthropologist Jack Goody first compared inconsistencies in the transmitted versions of literature from various oral societies, such as the Greek, Serbian and other cultures, and then noted that the Vedic literature is too consistent and vast to have been composed and transmitted orally across generations without being written down. According to Goody, the Vedic texts likely involved both a written and an oral tradition, making them "parallel products of a literate society".
All ancient Greek literature was to some degree oral in nature, and the earliest literature was completely so. Homer's epic poetry, states Michael Gagarin, was largely composed, performed and transmitted orally. As folklores and legends were performed in front of distant audiences, the singers would substitute the names in the stories with local characters or rulers to give the stories a local flavor and thus connect with the audience, but this made the historicity embedded in the oral tradition unreliable. The lack of surviving texts about the Greek and Roman religious traditions has led scholars to presume that these were ritualistic and transmitted as oral traditions, but some scholars disagree that the complex rituals in the ancient Greek and Roman civilizations were an exclusive product of an oral tradition.
Writing systems are not known to have existed among Native North Americans (north of Mesoamerica) before contact with Europeans. Oral storytelling traditions flourished in a context without the use of writing to record and preserve history, scientific knowledge, and social practices. While some stories were told for amusement and leisure, most functioned as practical lessons from tribal experience applied to immediate moral, social, psychological, and environmental issues. Stories fuse fictional, supernatural, or otherwise exaggerated characters and circumstances with real emotions and morals as a means of teaching. Plots often reflect real life situations and may be aimed at particular people known by the story's audience. In this way, social pressure could be exerted without directly causing embarrassment or social exclusion. For example, rather than yelling, Inuit parents might deter their children from wandering too close to the water's edge by telling a story about a sea monster with a pouch for children within its reach.
The enduring significance of oral traditions is underscored in a systematic literature review on indigenous languages in South Africa, within the framework of contemporary linguistic challenges. Oral literature is crucial for cultural preservation, linguistic diversity, and social justice, as evidenced by the postcolonial struggles and ongoing initiatives to safeguard and promote South African indigenous languages.
Oratory
Oratory or the art of public speaking was considered a literary art for a significant period of time. From Ancient Greece to the late 19th century, rhetoric played a central role in Western education in training orators, lawyers, counselors, historians, statesmen, and poets.
Writing
Around the 4th millennium BC, the complexity of trade and administration in Mesopotamia outgrew human memory, and writing became a more dependable method of recording and presenting transactions in a permanent form, though in both ancient Egypt and Mesoamerica writing may have first emerged because of the need to record historical and environmental events. Subsequent innovations included more uniform, predictable legal systems, sacred texts, and the origins of modern practices of scientific inquiry and knowledge-consolidation, all largely reliant on portable and easily reproducible forms of writing.
Early written literature
Ancient Egyptian literature and Sumerian literature are considered the world's oldest literatures. The primary genres of the literature of ancient Egypt—didactic texts, hymns and prayers, and tales—were written almost entirely in verse. By the Old Kingdom (26th century BC to 22nd century BC), literary works included funerary texts, epistles and letters, hymns and poems, and commemorative autobiographical texts recounting the careers of prominent administrative officials. It was not until the early Middle Kingdom (21st century BC to 17th century BC) that a narrative Egyptian literature was created.
Many works of early periods, even in narrative form, had a covert moral or didactic purpose, such as the Sanskrit Panchatantra (c. 200 BC – 300 AD), based on older oral tradition. Drama and satire also developed as urban culture provided a larger public audience, and later readership, for literary production. Lyric poetry (as opposed to epic poetry) was often the speciality of courts and aristocratic circles, particularly in East Asia where songs were collected by the Chinese aristocracy as poems, the most notable being the Shijing or Book of Songs (1046–c. 600 BC).
In ancient China, early literature was primarily focused on philosophy, historiography, military science, agriculture, and poetry. China, the origin of modern paper making and woodblock printing, produced the world's first print cultures. Much of Chinese literature originates with the Hundred Schools of Thought period that occurred during the Eastern Zhou dynasty (769‒269 BC). The most important of these include the Classics of Confucianism, of Daoism, of Mohism, of Legalism, as well as works of military science (e.g. Sun Tzu's The Art of War, c. 5th century BC) and Chinese history (e.g. Sima Qian's Records of the Grand Historian, c. 94 BC). Ancient Chinese literature had a heavy emphasis on historiography, with often very detailed court records. An exemplary piece of narrative history of ancient China was the Zuo Zhuan, which was compiled no later than 389 BC, and attributed to the blind 5th-century BC historian Zuo Qiuming.
In ancient India, literature originated from stories that were originally orally transmitted. Early genres included drama, fables, sutras and epic poetry. Sanskrit literature begins with the Vedas, dating back to 1500–1000 BC, and continues with the Sanskrit Epics of Iron Age India. The Vedas are among the oldest sacred texts. The Samhitas (vedic collections) date to roughly 1500–1000 BC, and the "circum-Vedic" texts, as well as the redaction of the Samhitas, date to c. 1000‒500 BC, resulting in a Vedic period spanning the mid-2nd to mid-1st millennium BC, or the Late Bronze Age and the Iron Age. The period between approximately the 6th and 1st centuries BC saw the composition and redaction of the two most influential Indian epics, the Mahabharata and the Ramayana, with subsequent redaction progressing down to the 4th century AD; much later retellings include the Ramcharitmanas.
The earliest known Greek writings are Mycenaean (c. 1600–1100 BC), written in the Linear B syllabary on clay tablets. These documents contain prosaic records largely concerned with trade (lists, inventories, receipts, etc.); no real literature has been discovered. Michael Ventris and John Chadwick, the original decipherers of Linear B, state that literature almost certainly existed in Mycenaean Greece, but it was either not written down or, if it was, it was on parchment or wooden tablets, which did not survive the destruction of the Mycenaean palaces in the twelfth century BC. Homer's epic poems, the Iliad and the Odyssey, are central works of ancient Greek literature. It is generally accepted that the poems were composed at some point around the late eighth or early seventh century BC; ancient accounts of Homer's life, by contrast, are regarded by modern scholars as legendary. Most researchers believe that the poems were originally transmitted orally. From antiquity until the present day, the influence of Homeric epic on Western civilization has been significant, inspiring many of its most famous works of literature, music, art and film. The Homeric epics were the greatest influence on ancient Greek culture and education; to Plato, Homer was simply the one who "has taught Greece" – ten Hellada pepaideuken. Hesiod's Works and Days (c. 700 BC) and Theogony are some of the earliest and most influential works of ancient Greek literature. Classical Greek genres included philosophy, poetry, historiography, comedies and dramas. Plato (428/427 or 424/423 – 348/347 BC) and Aristotle (384–322 BC) authored philosophical texts that are regarded as the foundation of Western philosophy, Sappho (c. 630 – c. 570 BC) and Pindar were influential lyric poets, and Herodotus (c. 484 – c. 425 BC) and Thucydides were early Greek historians. Although drama was popular in ancient Greece, of the hundreds of tragedies written and performed during the classical age, only a limited number of plays by three authors still exist: Aeschylus, Sophocles, and Euripides. The plays of Aristophanes (c. 446 – c. 386 BC) provide the only real examples of a genre of comic drama known as Old Comedy, the earliest form of Greek comedy, and are in fact used to define the genre.
The Hebrew religious text, the Torah, is widely seen as a product of the Persian period (539–333 BC, probably 450–350 BC). This consensus echoes a traditional Jewish view which gives Ezra, the leader of the Jewish community on its return from Babylon, a pivotal role in its promulgation. This represents a major source of Christianity's Bible, which has had a major influence on Western literature.
The beginning of Roman literature dates to 240 BC, when a Roman audience saw a Latin version of a Greek play. Literature in Latin would flourish for the next six centuries, and includes essays, histories, poems, plays, and other writings.
The Qur'an (610 AD to 632 AD), the main holy book of Islam, had a significant influence on the Arabic language, and marked the beginning of Islamic literature. Muslims believe it was transcribed in the Arabic dialect of the Quraysh, the tribe of Muhammad. As Islam spread, the Qur'an had the effect of unifying and standardizing Arabic.
Theological works in Latin were the dominant form of literature in Europe typically found in libraries during the Middle Ages. Western vernacular literature includes the Poetic Edda and the sagas, or heroic epics, of Iceland, the Anglo-Saxon Beowulf, and the German Song of Hildebrand. A later form of medieval fiction was the romance, an adventurous and sometimes magical narrative with strong popular appeal.
Controversial, religious, political and instructional literature proliferated during the European Renaissance as a result of Johannes Gutenberg's invention of the printing press around 1440, while the medieval romance developed into the novel.
It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.
Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.
Online
2186) Biochemistry
Gist
Biochemistry is both a life science and a chemical science: it explores the chemistry of living organisms and the molecular basis for the changes occurring in living cells. Using the methods of chemistry, biochemistry has become the foundation for understanding all biological processes.
Summary
Biochemistry or biological chemistry is the study of chemical processes within and relating to living organisms. A sub-discipline of both chemistry and biology, biochemistry may be divided into three fields: structural biology, enzymology, and metabolism. Over the last decades of the 20th century, biochemistry has become successful at explaining living processes through these three disciplines. Almost all areas of the life sciences are being uncovered and developed through biochemical methodology and research. Biochemistry focuses on understanding the chemical basis which allows biological molecules to give rise to the processes that occur within living cells and between cells, in turn relating greatly to the understanding of tissues and organs as well as organism structure and function. Biochemistry is closely related to molecular biology, the study of the molecular mechanisms of biological phenomena.
Much of biochemistry deals with the structures, bonding, functions, and interactions of biological macromolecules such as proteins, nucleic acids, carbohydrates, and lipids. They provide the structure of cells and perform many of the functions associated with life. The chemistry of the cell also depends upon the reactions of small molecules and ions. These can be inorganic (for example, water and metal ions) or organic (for example, the amino acids, which are used to synthesize proteins). The mechanisms used by cells to harness energy from their environment via chemical reactions are known as metabolism. The findings of biochemistry are applied primarily in medicine, nutrition and agriculture. In medicine, biochemists investigate the causes and cures of diseases. Nutrition studies how to maintain health and wellness and also the effects of nutritional deficiencies.[9] In agriculture, biochemists investigate soil and fertilizers with the goal of improving crop cultivation, crop storage, and pest control. In recent decades, biochemical principles and methods have been combined with problem-solving approaches from engineering to manipulate living systems in order to produce useful tools for research, industrial processes, and diagnosis and control of disease—the discipline of biotechnology.
Details
Biochemistry is the study of the chemical substances and processes that occur in plants, animals, and microorganisms and of the changes they undergo during development and life. It deals with the chemistry of life, and as such it draws on the techniques of analytical, organic, and physical chemistry, as well as on those of physiologists concerned with the molecular basis of vital processes.
All chemical changes within the organism—either the degradation of substances, generally to gain necessary energy, or the buildup of complex molecules necessary for life processes—are collectively called metabolism. These chemical changes depend on the action of organic catalysts known as enzymes, and enzymes, in turn, depend for their existence on the genetic apparatus of the cell. It is not surprising, therefore, that biochemistry enters into the investigation of chemical changes in disease, drug action, and other aspects of medicine, as well as in nutrition, genetics, and agriculture.
The term biochemistry is synonymous with two somewhat older terms: physiological chemistry and biological chemistry. Those aspects of biochemistry that deal with the chemistry and function of very large molecules (e.g., proteins and nucleic acids) are often grouped under the term molecular biology. Biochemistry has been known under that term since about 1900. Its origins, however, can be traced much further back; its early history is part of the early history of both physiology and chemistry.
Historical background
Before chemistry could contribute adequately to medicine and agriculture, it first had to free itself from immediate practical demands in order to become a pure science. This happened in the period from about 1650 to 1780, starting with the work of Robert Boyle and culminating in that of Antoine-Laurent Lavoisier, the father of modern chemistry. Boyle questioned the basis of the chemical theory of his day and taught that the proper object of chemistry was to determine the composition of substances. His contemporary John Mayow observed the fundamental analogy between the respiration of an animal and the burning, or oxidation, of organic matter in air. Then, when Lavoisier carried out his fundamental studies on chemical oxidation, grasping the true nature of the process, he also showed, quantitatively, the similarity between chemical oxidation and the respiratory process.
Photosynthesis was another biological phenomenon that occupied the attention of the chemists of the late 18th century. The demonstration, through the combined work of Joseph Priestley, Jan Ingenhousz, and Jean Senebier, that photosynthesis is essentially the reverse of respiration was a milestone in the development of biochemical thought.
In spite of these early fundamental discoveries, rapid progress in biochemistry had to wait upon the development of structural organic chemistry, one of the great achievements of 19th-century science. A living organism contains many thousands of different chemical compounds. The elucidation of the chemical transformations undergone by these compounds within the living cell is a central problem of biochemistry. Clearly, the determination of the molecular structure of the organic substances present in living cells had to precede the study of the cellular mechanisms, whereby these substances are synthesized and degraded.
There are few sharp boundaries in science, and the boundaries between organic and physical chemistry, on the one hand, and biochemistry, on the other, have always shown much overlap. Biochemistry has borrowed the methods and theories of organic and physical chemistry and applied them to physiological problems. Progress in this path was at first impeded by a stubborn misconception in scientific thinking—the error of supposing that the transformations undergone by matter in the living organism were not subject to the chemical and physical laws that applied to inanimate substances and that consequently these “vital” phenomena could not be described in ordinary chemical or physical terms. Such an attitude was taken by the vitalists, who maintained that natural products formed by living organisms could never be synthesized by ordinary chemical means. The first laboratory synthesis of an organic compound, urea, by Friedrich Wöhler in 1828, was a blow to the vitalists but not a decisive one. They retreated to new lines of defense, arguing that urea was only an excretory substance—a product of breakdown and not of synthesis. The success of the organic chemists in synthesizing many natural products forced further retreats of the vitalists. It is axiomatic in modern biochemistry that the chemical laws that apply to inanimate materials are equally valid within the living cell.
At the same time that progress was being impeded by a misplaced kind of reverence for living phenomena, the practical needs of humans operated to spur the progress of the new science. As organic and physical chemistry erected an imposing body of theory in the 19th century, the needs of the physician, the pharmacist, and the agriculturalist provided an ever-present stimulus for the application of the new discoveries of chemistry to various urgent practical problems.
Two outstanding figures of the 19th century, Justus von Liebig and Louis Pasteur, were particularly responsible for dramatizing the successful application of chemistry to the study of biology. Liebig studied chemistry in Paris and carried back to Germany the inspiration gained by contact with the former students and colleagues of Lavoisier. He established at Giessen a great teaching and research laboratory, one of the first of its kind, which drew students from all over Europe.
Besides putting the study of organic chemistry on a firm basis, Liebig engaged in extensive literary activity, attracting the attention of all scientists to organic chemistry and popularizing it for the layman as well. His classic works, published in the 1840s, had a profound influence on contemporary thought. Liebig described the great chemical cycles in nature. He pointed out that animals would disappear from the face of Earth if it were not for the photosynthesizing plants, since animals require for their nutrition the complex organic compounds that can be synthesized only by plants. The animal excretions and the animal body after death are also converted by a process of decay to simple products that can be re-utilized only by plants.
In contrast with animals, green plants require for their growth only carbon dioxide, water, mineral salts, and sunlight. The minerals must be obtained from the soil, and the fertility of the soil depends on its ability to furnish the plants with these essential nutrients. But the soil is depleted of these materials by the removal of successive crops; hence the need for fertilizers. Liebig pointed out that chemical analysis of plants could serve as a guide to the substances that should be present in fertilizers. Agricultural chemistry as an applied science was thus born.
In his analysis of fermentation, putrefaction, and infectious disease, Liebig was less fortunate. He admitted the similarity of these phenomena but refused to admit that living organisms might function as the causative agents. It remained for Pasteur to clarify that matter. In the 1860s Pasteur proved that various yeasts and bacteria were responsible for “ferments,” substances that caused fermentation and, in some cases, disease. He also demonstrated the usefulness of chemical methods in studying these tiny organisms and was the founder of what came to be called bacteriology.
Later, in 1877, Pasteur’s ferments were designated as enzymes, and, in 1897, German chemist Eduard Buchner clearly showed that fermentation could occur in a press juice of yeast, devoid of living cells. Thus a life process of cells was reduced by analysis to a nonliving system of enzymes. The chemical nature of enzymes remained obscure until 1926, when the first pure crystalline enzyme (urease) was isolated. This enzyme and many others subsequently isolated proved to be proteins, which had already been recognized as high-molecular-weight chains of subunits called amino acids.
The mystery of how minute amounts of dietary substances known as the vitamins prevent diseases such as beriberi, scurvy, and pellagra became clear in 1935, when riboflavin (vitamin B2) was found to be an integral part of an enzyme. Subsequent work has substantiated the concept that many vitamins are essential in the chemical reactions of the cell by virtue of their role in enzymes.
In 1929 the substance adenosine triphosphate (ATP) was isolated from muscle. Subsequent work demonstrated that the production of ATP was associated with respiratory (oxidative) processes in the cell. In 1940 F.A. Lipmann proposed that ATP is the common form of energy exchange in many cells, a concept now thoroughly documented. ATP has been shown also to be a primary energy source for muscular contraction.
The use of radioactive isotopes of chemical elements to trace the pathway of substances in the animal body was initiated in 1935 by two U.S. chemists, Rudolf Schoenheimer and David Rittenberg. That technique provided one of the single most important tools for investigating the complex chemical changes that occur in life processes. At about the same time, other workers localized the sites of metabolic reactions by ingenious technical advances in the studies of organs, tissue slices, cell mixtures, individual cells, and, finally, individual cell constituents, such as nuclei, mitochondria, ribosomes, lysosomes, and membranes.
In 1869 a substance was isolated from the nuclei of pus cells and was called nucleic acid, which later proved to be deoxyribonucleic acid (DNA), but it was not until 1944 that the significance of DNA as genetic material was revealed, when bacterial DNA was shown to change the genetic matter of other bacterial cells. Within a decade of that discovery, the double helix structure of DNA was proposed by Watson and Crick, providing a firm basis for understanding how DNA is involved in cell division and in maintaining genetic characteristics.
Advances have continued since that time, with such landmark events as the first chemical synthesis of a protein, the detailed mapping of the arrangement of atoms in some enzymes, and the elucidation of intricate mechanisms of metabolic regulation, including the molecular action of hormones.
Areas of study
A description of life at the molecular level includes a description of all the complexly interrelated chemical changes that occur within the cell—i.e., the processes known as intermediary metabolism. The processes of growth, reproduction, and heredity, also subjects of the biochemist’s curiosity, are intimately related to intermediary metabolism and cannot be understood independently of it. The properties and capacities exhibited by a complex multicellular organism can be reduced to the properties of the individual cells of that organism, and the behavior of each individual cell can be understood in terms of its chemical structure and the chemical changes occurring within that cell.
Chemical composition of living matter
Every living cell contains, in addition to water and salts or minerals, a large number of organic compounds, substances composed of carbon combined with varying amounts of hydrogen and usually also of oxygen. Nitrogen, phosphorus, and sulfur are likewise common constituents. In general, the bulk of the organic matter of a cell may be classified as (1) protein, (2) carbohydrate, and (3) fat, or lipid. Nucleic acids and various other organic derivatives are also important constituents. Each class contains a great diversity of individual compounds. Many substances that cannot be classified in any of the above categories also occur, though usually not in large amounts.
Proteins are fundamental to life, not only as structural elements (e.g., collagen) and to provide defense (as antibodies) against invading destructive forces but also because the essential biocatalysts are proteins. The chemistry of proteins is based on discoveries made by German chemist Emil Fischer, whose work from 1882 demonstrated that proteins are very large molecules, or polymers, built up of about 24 amino acids. Proteins may vary in size from small—insulin with a molecular weight of 5,700 (based on the weight of a hydrogen atom as 1)—to very large—molecules with molecular weights of more than 1,000,000. The first complete amino acid sequence was determined for the insulin molecule in the 1950s.
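To make the link between amino acid composition and molecular weight concrete, here is a minimal Python sketch (not from the source; the residue masses are assumed, rounded average values, and only a handful of the common amino acids are included) that estimates a peptide's molecular weight by summing residue masses and adding back one water molecule for the free termini.

# Approximate average residue masses in daltons (assumed, rounded values).
RESIDUE_MASS = {
    "G": 57.05, "A": 71.08, "S": 87.08, "V": 99.13,
    "L": 113.16, "K": 128.17, "E": 129.12, "F": 147.18,
}
WATER = 18.02  # one water molecule remains after all peptide bonds are formed

def peptide_mass(sequence: str) -> float:
    """Sum the residue masses and add back one water for the free termini."""
    return sum(RESIDUE_MASS[aa] for aa in sequence) + WATER

# A short illustrative heptapeptide; real proteins run to hundreds of residues.
print(round(peptide_mass("GAVLKEF"), 2))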
By 1963 the chain of amino acids in the protein enzyme ribonuclease (molecular weight 12,700) had also been determined, aided by the powerful physical techniques of X-ray-diffraction analysis. In the 1960s, Nobel Prize winners Sir John Cowdery Kendrew and Max Ferdinand Perutz, utilizing X-ray studies, constructed detailed atomic models of the proteins hemoglobin and myoglobin (the respiratory pigment in muscle), which were later confirmed by sophisticated chemical studies. The abiding interest of biochemists in the structure of proteins rests on the fact that the arrangement of chemical groups in space yields important clues regarding the biological activity of molecules.
Carbohydrates include such substances as sugars, starch, and cellulose. The second quarter of the 20th century witnessed a striking advance in the knowledge of how living cells handle small molecules, including carbohydrates. The metabolism of carbohydrates became clarified during this period, and elaborate pathways of carbohydrate breakdown and subsequent storage and utilization were gradually outlined in terms of cycles (e.g., the Embden–Meyerhof glycolytic cycle and the Krebs cycle). The involvement of carbohydrates in respiration and muscle contraction was well worked out by the 1950s.
Fats, or lipids, constitute a heterogeneous group of organic chemicals that can be extracted from biological material by organic solvents such as ethanol, ether, and benzene. The classic work concerning the formation of body fat from carbohydrates was accomplished during the early 1850s. Those studies, and later confirmatory evidence, have shown that the conversion of carbohydrate to fat occurs continuously in the body. The liver is the main site of fat metabolism. Fat absorption in the intestine was studied as early as the 1930s. The control of fat absorption is known to depend upon the combined action of secretions of the pancreas and bile salts. Abnormalities of fat metabolism, which result in disorders such as obesity and rare clinical conditions, are the subject of much biochemical research. Equally interesting to biochemists is the association between high levels of fat in the blood and the occurrence of arteriosclerosis (“hardening” of the arteries).
Nucleic acids are large, complex compounds of very high molecular weight present in the cells of all organisms and in viruses. They are of great importance in the synthesis of proteins and in the transmission of hereditary information from one generation to the next. Originally discovered as constituents of cell nuclei (hence their name), it was assumed for many years after their isolation in 1869 that they were found nowhere else. This assumption was not challenged seriously until the 1940s, when it was determined that two kinds of nucleic acid exist: DNA, in the nuclei of all cells and in some viruses; and ribonucleic acid (RNA), in the cytoplasm of all cells and in most viruses.
The profound biological significance of nucleic acids came gradually to light during the 1940s and 1950s. Attention turned to the mechanism by which protein synthesis and genetic transmission was controlled by nucleic acids (see below Genes). During the 1960s, experiments were aimed at refinements of the genetic code. Promising attempts were made during the late 1960s and early 1970s to accomplish duplication of the molecules of nucleic acids outside the cell—i.e., in the laboratory. By the mid-1980s genetic engineering techniques had accomplished, among other things, in vitro fertilization and the recombination of DNA (so-called gene splicing).
Nutrition
Biochemists have long been interested in the chemical composition of the food of animals. All animals require organic material in their diet, in addition to water and minerals. This organic matter must be sufficient in quantity to satisfy the caloric, or energy, requirements of the animals. Within certain limits, carbohydrate, fat, and protein may be used interchangeably for this purpose. In addition, however, animals have nutritional requirements for specific organic compounds. Certain essential fatty acids, about ten different amino acids (the so-called essential amino acids), and vitamins are required by many higher animals. The nutritional requirements of various species are similar but not necessarily identical; thus man and the guinea pig require vitamin C, or ascorbic acid, whereas the rat does not.
That plants differ from animals in requiring no preformed organic material was appreciated soon after the plant studies of the late 1700s. The ability of green plants to make all their cellular material from simple substances—carbon dioxide, water, salts, and a source of nitrogen such as ammonia or nitrate—was termed photosynthesis. As the name implies, light is required as an energy source, and it is generally furnished by sunlight. The process itself is primarily concerned with the manufacture of carbohydrate, from which fat can be made by animals that eat plant carbohydrates. Protein can also be formed from carbohydrate, provided ammonia is furnished.
In spite of the large apparent differences in nutritional requirements of plants and animals, the patterns of chemical change within the cell are the same. The plant manufactures all the materials it needs, but these materials are essentially similar to those that the animal cell uses and are often handled in the same way once they are formed. Plants could not furnish animals with their nutritional requirements if the cellular constituents in the two forms were not basically similar.
Digestion
The organic food of animals, including humans, consists in part of large molecules. In the digestive tracts of higher animals, these molecules are hydrolyzed, or broken down, to their component building blocks. Proteins are converted to mixtures of amino acids, and polysaccharides are converted to monosaccharides. In general, all living forms use the same small molecules, but many of the large complex molecules are different in each species. An animal, therefore, cannot use the protein of a plant or of another animal directly but must first break it down to amino acids and then recombine the amino acids into its own characteristic proteins. The hydrolysis of food material is necessary also to convert solid material into soluble substances suitable for absorption. The liquefaction of stomach contents aroused the early interest of observers, long before the birth of modern chemistry, and the hydrolytic enzymes secreted into the digestive tract were among the first enzymes to be studied in detail. Pepsin and trypsin, the proteolytic enzymes of gastric and pancreatic juice, respectively, continue to be intensively investigated.
The products of enzymatic action on the food of an animal are absorbed through the walls of the intestines and distributed to the body by blood and lymph. In organisms without digestive tracts, substances must also be absorbed in some way from the environment. In some instances simple diffusion appears to be sufficient to explain the transfer of a substance across a cell membrane. In other cases, however (e.g., in the case of the transfer of glucose from the lumen of the intestine to the blood), transfer occurs against a concentration gradient. That is, the glucose may move from a place of lower concentration to a place of higher concentration.
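As a hedged point of comparison, simple (passive) diffusion is described by Fick's first law,

J = -D \frac{dC}{dx},

where J is the flux, D the diffusion coefficient, and dC/dx the concentration gradient. The minus sign means passive flow always runs from higher to lower concentration, so movement of glucose against its gradient cannot be accounted for by this relation and must be coupled to a source of metabolic energy.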
In the case of the secretion of hydrochloric acid into gastric juice, it has been shown that active secretion is dependent on an adequate oxygen supply (i.e., on the respiratory metabolism of the tissue), and the same holds for absorption of salts by plant roots. The energy released during the tissue oxidation must be harnessed in some way to provide the energy necessary for the absorption or secretion. This harnessing is achieved by a special chemical coupling system. The elucidation of the nature of such coupling systems has been an objective of the biochemist.
Blood
One of the animal tissues that has always excited special curiosity is blood. Blood has been investigated intensively from the early days of biochemistry, and its chemical composition is known with greater accuracy and in more detail than that of any other tissue in the body. The physician takes blood samples to determine such things as the sugar content, the urea content, or the inorganic-ion composition of the blood, since these show characteristic changes in disease.
The blood pigment hemoglobin has been intensively studied. Hemoglobin is confined within the blood corpuscles and carries oxygen from the lungs to the tissues. It combines with oxygen in the lungs, where the oxygen concentration is high, and releases the oxygen in the tissues, where the oxygen concentration is low. The hemoglobins of higher animals are related but not identical. In invertebrates, other pigments may take the place and function of hemoglobin. The comparative study of these compounds constitutes a fascinating chapter in biochemical investigation.
The proteins of blood plasma also have been extensively investigated. The gamma-globulin fraction of the plasma proteins contains the antibodies of the blood and is of practical value as an immunizing agent. An animal develops resistance to disease largely by antibody production. Antibodies are proteins with the ability to combine with an antigen (i.e., an agent that induces their formation). When this agent is a component of a disease-causing bacterium, the antibody can protect an organism from infection by that bacterium. The chemical study of antigens and antibodies and their interrelationship is known as immunochemistry.
Metabolism and hormones
The cell is the site of a constant, complex, and orderly set of chemical changes collectively called metabolism. Metabolism is associated with a release of heat. The heat released is the same as that obtained if the same chemical change is brought about outside the living organism. This confirms the fact that the laws of thermodynamics apply to living systems just as they apply to the inanimate world. The pattern of chemical change in a living cell, however, is distinctive and different from anything encountered in nonliving systems. This difference does not mean that any chemical laws are invalidated. It instead reflects the extraordinary complexity of the interrelations of cellular reactions.
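The statement that the heat released is the same inside and outside the organism follows from enthalpy being a state function (Hess's law); as a brief illustration, not drawn from the source,

\Delta H_{\text{overall}} = \sum_i \Delta H_{\text{step},i},

so for a given overall change at constant pressure, such as the complete oxidation of glucose to carbon dioxide and water, the total heat evolved is fixed by the initial and final states, whether the change occurs in a single combustion or through the many enzyme-catalysed steps of cellular metabolism.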
Hormones, which may be regarded as regulators of metabolism, are investigated at three levels, to determine (1) their physiological effects, (2) their chemical structure, and (3) the chemical mechanisms whereby they operate. The study of the physiological effects of hormones is properly regarded as the province of the physiologist. Such investigations obviously had to precede the more analytical chemical studies. The chemical structures of thyroxine and adrenaline are known. The chemistry of the sex and adrenal hormones, which are steroids, has also been thoroughly investigated. The hormones of the pancreas—insulin and glucagon—and the hormones of the hypophysis (pituitary gland) are peptides (i.e., compounds composed of chains of amino acids). The structures of most of these hormones have been determined. The chemical structures of the plant hormones, auxin and gibberellic acid, which act as growth-controlling agents in plants, are also known.
The first and second phases of the hormone problem thus have been well, though not completely, explored, but the third phase is still in its infancy. It seems likely that different hormones exert their effects in different ways. Some may act by affecting the permeability of membranes; others appear to control the synthesis of certain enzymes. Evidently some hormones also control the activity of certain genes.
Genes
Genetic studies have shown that the hereditary characteristics of a species are maintained and transmitted by the self-duplicating units known as genes, which are composed of nucleic acids and located in the chromosomes of the nucleus. One of the most fascinating chapters in the history of the biological sciences contains the story of the elucidation, in the mid-20th century, of the chemical structure of the genes, their mode of self-duplication, and the manner in which the DNA of the nucleus causes the synthesis of RNA, which, among its other activities, causes the synthesis of protein. Thus, the capacity of a protein to behave as an enzyme is determined by the chemical constitution of the gene (DNA) that directs the synthesis of the protein. The relationship of genes to enzymes has been demonstrated in several ways. The first successful experiments, devised by the Nobel Prize winners George W. Beadle and Edward L. Tatum, involved the bread mold Neurospora crassa; the two men were able to collect a variety of strains that differed from the parent strain in nutritional requirements. Such strains had undergone a mutation (change) in the genetic makeup of the parent strain. The mutant strains required a particular amino acid not required for growth by the parent strain. It was then shown that such a mutant had lost an enzyme essential for the synthesis of the amino acid in question. The subsequent development of techniques for the isolation of mutants with specific nutritional requirements led to a special procedure for studying intermediary metabolism.
Evolution and origin of life
The exploration of space beginning in the mid-20th century intensified speculation about the possibility of life on other planets. At the same time, man was beginning to understand some of the intimate chemical mechanisms used for the transmission of hereditary characteristics. It was possible, by studying protein structure in different species, to see how the amino acid sequences of functional proteins (e.g., hemoglobin and cytochrome) have been altered during phylogeny (the development of species). It was natural, therefore, that biochemists should look upon the problem of the origin of life as a practical one. The synthesis of a living cell from inanimate material was not regarded as an impossible task for the future.
Applied biochemistry
An early objective in biochemistry was to provide analytical methods for the determination of various blood constituents because it was felt that abnormal levels might indicate the presence of metabolic diseases. The clinical chemistry laboratory now has become a major investigative arm of the physician in the diagnosis and treatment of disease and is an indispensable unit of every hospital. Some of the older analytical methods directed toward diagnosis of common diseases are still the most commonly used—for example, tests for determining the levels of blood glucose, in diabetes; urea, in kidney disease; uric acid, in gout; and bilirubin, in liver and gallbladder disease. With development of the knowledge of enzymes, determination of certain enzymes in blood plasma has assumed diagnostic value, such as alkaline phosphatase, in bone and liver disease; acid phosphatase, in prostatic cancer; amylase, in pancreatitis; and lactate dehydrogenase and transaminase, in cardiac infarct. Electrophoresis of plasma proteins is commonly employed to aid in the diagnosis of various liver diseases and forms of cancer. Both electrophoresis and ultracentrifugation of serum constituents (lipoproteins) are used increasingly in the diagnosis and examination of therapy of atherosclerosis and heart disease. Many specialized and sophisticated methods have been introduced, and machines have been developed for the simultaneous automated analysis of many different blood constituents in order to cope with increasing medical needs.
Analytical biochemical methods have also been applied in the food industry to develop crops superior in nutritive value and capable of retaining nutrients during the processing and preservation of food. Research in this area is directed particularly to preserving vitamins as well as color and taste, all of which may suffer loss if oxidative enzymes remain in the preserved food. Tests for enzymes are used for monitoring various stages in food processing.
Biochemical techniques have been fundamental in the development of new drugs. The testing of potentially useful drugs includes studies on experimental animals and man to observe the desired effects and also to detect possible toxic manifestations; such studies depend heavily on many of the clinical biochemistry techniques already described. Although many of the commonly used drugs have been developed on a rather empirical (trial-and-error) basis, an increasing number of therapeutic agents have been designed specifically as enzyme inhibitors to interfere with the metabolism of a host or invasive agent. Biochemical advances in the knowledge of the action of natural hormones and antibiotics promise to aid further in the development of specific pharmaceuticals.
Methods in biochemistry
Like other sciences, biochemistry aims at quantifying, or measuring, results, sometimes with sophisticated instrumentation. The earliest approach to a study of the events in a living organism was an analysis of the materials entering an organism (foods, oxygen) and those leaving (excretion products, carbon dioxide). This is still the basis of so-called balance experiments conducted on animals, in which, for example, both foods and excreta are thoroughly analyzed. For this purpose many chemical methods involving specific color reactions have been developed, requiring spectrum-analyzing instruments (spectrophotometers) for quantitative measurement. Gasometric techniques are those commonly used for measurements of oxygen and carbon dioxide, yielding respiratory quotients (the ratio of carbon dioxide to oxygen). Somewhat more detail has been gained by determining the quantities of substances entering and leaving a given organ and also by incubating slices of a tissue in a physiological medium outside the body and analyzing the changes that occur in the medium. Because these techniques yield an overall picture of metabolic capacities, it became necessary to disrupt cellular structure (homogenization) and to isolate the individual parts of the cell—nuclei, mitochondria, lysosomes, ribosomes, membranes—and finally the various enzymes and discrete chemical substances of the cell in an attempt to understand the chemistry of life more fully.
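As a brief worked illustration of the respiratory quotient mentioned above (standard textbook values, not from the source):

RQ = \frac{\text{moles of CO}_2 \text{ produced}}{\text{moles of O}_2 \text{ consumed}}

For the complete oxidation of glucose, C6H12O6 + 6 O2 -> 6 CO2 + 6 H2O, the quotient is 6/6 = 1.0, whereas fat oxidation gives a value of roughly 0.7, so the measured quotient indicates which fuel predominates.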
Centrifugation and electrophoresis
An important tool in biochemical research is the centrifuge, which through rapid spinning imposes high centrifugal forces on suspended particles, or even molecules in solution, and causes separations of such matter on the basis of differences in weight. Thus, red cells may be separated from plasma of blood, nuclei from mitochondria in cell homogenates, and one protein from another in complex mixtures. Proteins are separated by ultracentrifugation—very high speed spinning; with appropriate photography of the protein layers as they form in the centrifugal field, it is possible to determine the molecular weights of proteins.
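The molecular-weight determination alluded to here is commonly expressed through the Svedberg equation; as a hedged summary rather than a statement of any particular author's procedure,

M = \frac{sRT}{D(1 - \bar{v}\rho)},

where s is the sedimentation coefficient measured in the ultracentrifuge, D the diffusion coefficient, \bar{v} the partial specific volume of the protein, \rho the solvent density, R the gas constant, and T the absolute temperature.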
Another property of biological molecules that has been exploited for separation and analysis is their electrical charge. Amino acids and proteins possess net positive or negative charges according to the acidity of the solution in which they are dissolved. In an electric field, such molecules adopt different rates of migration toward positively (anode) or negatively (cathode) charged poles and permit separation. Such separations can be effected in solutions or when the proteins saturate a stationary medium such as cellulose (filter paper), starch, or acrylamide gels. By appropriate color reactions of the proteins and scanning of color intensities, a number of proteins in a mixture may be measured. Separate proteins may be isolated and identified by electrophoresis, and the purity of a given protein may be determined. (Electrophoresis of human hemoglobin revealed the abnormal hemoglobin in sickle-cell anemia, the first definitive example of a “molecular disease.”)
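A minimal way to state the separation principle (a standard definition, not taken from the source): the electrophoretic mobility of a molecule is

\mu = \frac{v}{E},

where v is its migration velocity and E the applied field strength. Because a protein's net charge depends on the pH of the buffer relative to its isoelectric point, proteins of different charge and size show different mobilities and therefore separate into distinct bands.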
Chromatography and isotopes
The different solubilities of substances in aqueous and organic solvents provide another basis for analysis. In its earlier form, a separation was conducted in complex apparatus by partition of substances in various solvents. A simplified form of the same principle evolved as “paper chromatography,” in which small amounts of substances could be separated on filter paper and identified by appropriate color reactions. In contrast to electrophoresis, this method has been applied to a wide variety of biological compounds and has contributed enormously to research in biochemistry.
The general principle has been extended from filter paper strips to columns of other relatively inert media, permitting larger scale separation and identification of closely related biological substances. Particularly noteworthy has been the separation of amino acids by chromatography in columns of ion-exchange resins, permitting the determination of exact amino acid composition of proteins. Following such determination, other techniques of organic chemistry have been used to elucidate the actual sequence of amino acids in complex proteins. Another technique of column chromatography is based on the relative rates of penetration of molecules into beads of a complex carbohydrate according to size of the molecules. Larger molecules are excluded relative to smaller molecules and emerge first from a column of such beads. This technique not only permits separation of biological substances but also provides estimates of molecular weights.
Perhaps the single most important technique in unraveling the complexities of metabolism has been the use of isotopes (heavy or radioactive elements) in labeling biological compounds and “tracing” their fate in metabolism. Measurement of the isotope-labeled compounds has required considerable technology in mass spectroscopy and radioactive detection devices.
A variety of other physical techniques, such as nuclear magnetic resonance, electron spin spectroscopy, circular dichroism, and X-ray crystallography, have become prominent tools in revealing the relation of chemical structure to biological function.
It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.
Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.
Online
2187) Computer Aided Design
Gist
Computer-aided design (CAD) is a way to digitally create 2D drawings and 3D models of real-world products before they're ever manufactured. With 3D CAD, you can share, review, simulate, and modify designs easily, opening doors to innovative and differentiated products that get to market fast.
Summary
What is the Full form of CAD?
The full form of CAD is Computer-Aided Design. Computer-Aided Design (CAD) is a high-tech tool used by architects, engineers, drafters, and artists to create designs and technical drawings in 2D and 3D. It’s a mix of hardware and software that makes designing and producing things easier.
Before a product is ever created, it is possible to digitally develop 2D drawings and 3D models of it by using computer-aided design (CAD). By making designs simple to share, review, simulate, and edit using 3D CAD, you can quickly bring new, unique items to market.
What is CAD?
CAD stands for Computer-Aided Design, which is also referred to as computer-aided drafting. It is a technology used to create and design the layout of physical components in manufactured products. CAD allows designers to work with two-dimensional or three-dimensional space, as well as curves, surfaces, and solids.
CAD software for mechanical design uses vector-based graphics to depict the objects of traditional drafting. CAD (computer-aided design) software is used by architects, engineers, drafters, artists, and others to create precision drawings or technical illustrations.
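As a minimal illustration of what "vector-based" means in practice, the short Python sketch below (not any particular CAD program's data format; names and units are assumptions) stores a drawing as geometry, in this case line segments defined by endpoint coordinates, so the drawing can be transformed exactly at any scale.

from dataclasses import dataclass

@dataclass
class Segment:
    x1: float
    y1: float
    x2: float
    y2: float

def scale(seg: Segment, factor: float) -> Segment:
    """Scaling a vector entity just multiplies its coordinates; no detail is lost."""
    return Segment(seg.x1 * factor, seg.y1 * factor,
                   seg.x2 * factor, seg.y2 * factor)

# A 100 x 50 rectangle described by four segments (units are arbitrary, e.g. mm).
rectangle = [
    Segment(0, 0, 100, 0),
    Segment(100, 0, 100, 50),
    Segment(100, 50, 0, 50),
    Segment(0, 50, 0, 0),
]
doubled = [scale(s, 2.0) for s in rectangle]  # edges stay perfectly sharp at any zoom
print(doubled[1])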
History of CAD
There are two significant milestones in the development of CAD technology.
* The first major innovation in CAD technology occurred in 1963 when Ivan Sutherland created Sketchpad, a pioneering interactive graphics system operated with a light pen, as part of his PhD thesis at MIT.
* The second milestone was the founding of MCS (Manufacturing and Consulting Services, Inc.) in 1971 by Patrick J. Hanratty, who introduced the CAD software ADAM (Automated Drafting And Machining).
CAD Software/Tools
Designers and engineers can use a variety of CAD tools. Some CAD tools, like those used in industrial design or architecture, are made to match particular use cases and industries, while other, general-purpose CAD applications serve many industries. Below are some commonly used CAD tools.
* MicroStation (offered by Bentley Systems)
* AutoCAD (offered by Autodesk)
* CorelCAD
* IronCAD
* CADTalk
* SolidWorks
* LibreCAD
* OpenSCAD
* Vectorworks
* Solid Edge
A CAD workstation also typically uses dedicated hardware, such as:
* A high-quality graphics monitor
* Light Pen
* Digitizing tablet
* The Mouse
* Printer or plotter.
Characteristics of CAD
* Efficiency: Efficient software uses fewer resources to deliver better output.
* Simplicity: Software must be easy to understand, simple to use and must be user friendly.
* Flexibility: The software must be able to integrate the design modification without much difficulty.
* Readability: The software should provide built-in help so that the user can find guidance as and when required.
* Portability: The software must have the capacity to get transferred from one system to another.
* Recoverability: Good software must be able to give warnings before crashing and must be able to recover.
Applications of CAD
Here are some examples of how CAD is used in various industries or domains:
* 3D printing: Three-dimensional printing is a method for turning a digital model into a three-dimensional, physical object. It uses an additive approach that entails successively stacking layers of material, usually thermoplastic. Each thinly cut layer is a horizontal cross-section of the final item (a simple slicing sketch follows this list).
* Dental industry: CAD technology is currently one of the greatest technologies for aiding in the design and manufacture of components connected to dental care. This digital technology is almost exclusively used in restorative dentistry operations since it can give a 3D depiction of the patient’s oral anatomy.
* Mapping: Custom maps help people avoid getting lost when exploring territory without cell service, where online maps become useless. Users travelling, for example, to the mountains can add areas of interest, their lodging, and the routes they take to get there to create a personalized map. With CAD, the map can be kept in digital form on a smart device or printed out.
* Fashion: The first step in the design process for fashion designers is to use 2D CAD software. The fashion design industry can benefit from CAD in a variety of ways, from mass-market to high couture. Fashion designers, garment manufacturers, export businesses, etc. all rely on CAD software because of their functions like pattern development, virtual test fitting, pattern grading, marker generation, etc.
* Architecture: CAD can be used to create 2D or 3D models, which can then be utilized to create animations and other presentational materials. Technical drawings convey comprehensive instructions on how to create something. Using CAD, technical drawings might incorporate designs for mechanical engineering and architectural constructions.
* Interior design: CAD may be used to create room layout plans and mockups. The software makes it far easier than by hand to create a 2D or 3D model mockup of any physical location. As they work with a customer to establish the overall positioning of significant furniture or fixtures, the majority will start with a 2D layout.
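The Python sketch below (hypothetical numbers, not from the source) illustrates the slicing idea from the 3D printing item above: each layer of a print is a horizontal cross-section of the model taken at a fixed layer height, and for a sphere the cross-section at a given height is a circle whose radius follows from Pythagoras.

import math

def slice_sphere(radius_mm: float, layer_height_mm: float):
    """Return (layer height, cross-section radius) pairs from bottom to top."""
    layers = []
    z = 0.0
    while z <= 2 * radius_mm:
        d = abs(radius_mm - z)  # distance from the sphere's equatorial plane
        r = math.sqrt(max(radius_mm ** 2 - d ** 2, 0.0))
        layers.append((round(z, 2), round(r, 2)))
        z += layer_height_mm
    return layers

# Slice a 10 mm radius sphere into layers 2 mm apart.
for height, r in slice_sphere(radius_mm=10.0, layer_height_mm=2.0):
    print(f"layer at z = {height} mm -> circle of radius {r} mm")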
Advantages of CAD
Here are the key benefits of CAD software:
* Save time: Computer-aided design tools help you create better, more effective designs in a shorter amount of time.
* Easy to edit: When creating designs, you sometimes need to make changes. It will be considerably simpler to make modifications when utilizing computer-aided design software because you can easily correct mistakes and edit the drawings.
* Reduction in error percentage: Because CAD software uses some of the best tools, it considerably lowers the percentage of errors that result from manual designing.
* Decrease design effort: The effort required to design the various models is reduced because the software automates most of the tasks.
* Re-use: Existing designs and drawing components can be reused many times. Because the entire task is carried out with computer tools, duplication of labour is avoided.
Disadvantages of CAD
* When computers break down suddenly, work may be lost.
* Work can be lost or corrupted by computer viruses.
* Work could easily be “hacked”.
* The process of learning how to use or execute the software is time-consuming.
* Most popular CAD software is high priced for individuals.
* Training the employees who will work on it will take time and money.
* Constant updating of operating systems or software.
* CAD/CAM systems reduce the need for employees.
* Software Complexity.
* Maintenance and Upkeep.
* With every new release of CAD software, the operator has to update their skills.
Details
Computer-aided design (CAD) is the use of computers (or workstations) to aid in the creation, modification, analysis, or optimization of a design. This software is used to increase the productivity of the designer, improve the quality of design, improve communications through documentation, and to create a database for manufacturing. Designs made through CAD software help protect products and inventions when used in patent applications. CAD output is often in the form of electronic files for print, machining, or other manufacturing operations. The terms computer-aided drafting (CAD) and computer-aided design and drafting (CADD) are also used.
Its use in designing electronic systems is known as electronic design automation (EDA). In mechanical design it is known as mechanical design automation (MDA), which includes the process of creating a technical drawing with the use of computer software.
CAD software for mechanical design uses either vector-based graphics to depict the objects of traditional drafting, or may also produce raster graphics showing the overall appearance of designed objects. However, it involves more than just shapes. As in the manual drafting of technical and engineering drawings, the output of CAD must convey information, such as materials, processes, dimensions, and tolerances, according to application-specific conventions.
CAD may be used to design curves and figures in two-dimensional (2D) space; or curves, surfaces, and solids in three-dimensional (3D) space.
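As a small, hedged illustration of how free-form curves are commonly represented in CAD systems, the Python sketch below evaluates a cubic Bezier curve from four control points (the control points are arbitrary example values, not taken from the source).

# Cubic Bezier curve in Bernstein form:
#   B(t) = (1-t)^3 P0 + 3(1-t)^2 t P1 + 3(1-t) t^2 P2 + t^3 P3,  0 <= t <= 1
def cubic_bezier(p0, p1, p2, p3, t):
    """Return the (x, y) point on the curve at parameter t."""
    u = 1.0 - t
    x = u**3 * p0[0] + 3 * u**2 * t * p1[0] + 3 * u * t**2 * p2[0] + t**3 * p3[0]
    y = u**3 * p0[1] + 3 * u**2 * t * p1[1] + 3 * u * t**2 * p2[1] + t**3 * p3[1]
    return (x, y)

# Arbitrary example control points; sample a few positions along the curve.
control = [(0, 0), (1, 2), (3, 3), (4, 0)]
for t in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(t, cubic_bezier(*control, t))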
CAD is an important industrial art extensively used in many applications, including automotive, shipbuilding, and aerospace industries, industrial and architectural design (building information modeling), prosthetics, and many more. CAD is also widely used to produce computer animation for special effects in movies, advertising and technical manuals, often called DCC digital content creation. The modern ubiquity and power of computers means that even perfume bottles and shampoo dispensers are designed using techniques unheard of by engineers of the 1960s. Because of its enormous economic importance, CAD has been a major driving force for research in computational geometry, computer graphics (both hardware and software), and discrete differential geometry.
The design of geometric models for object shapes, in particular, is occasionally called computer-aided geometric design (CAGD).
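As a small illustration of the kind of geometry CAGD deals with, the Python sketch below evaluates points on a cubic Bezier curve, one of the basic free-form curve primitives used in CAD systems; the control-point coordinates are arbitrary example values, not drawn from any particular package.

def cubic_bezier(p0, p1, p2, p3, t):
    """Return the point on the cubic Bezier curve at parameter t in [0, 1]."""
    u = 1.0 - t
    x = u**3 * p0[0] + 3 * u**2 * t * p1[0] + 3 * u * t**2 * p2[0] + t**3 * p3[0]
    y = u**3 * p0[1] + 3 * u**2 * t * p1[1] + 3 * u * t**2 * p2[1] + t**3 * p3[1]
    return (x, y)

if __name__ == "__main__":
    # Example control points only; a real CAD model would store many such curves.
    control_points = [(0, 0), (1, 2), (3, 3), (4, 0)]
    for i in range(11):
        t = i / 10
        x, y = cubic_bezier(*control_points, t)
        print(f"t={t:.1f}  ({x:.3f}, {y:.3f})")

Running the loop prints eleven sampled points along the curve; a CAD system performs essentially this evaluation whenever it displays or machines a free-form curve.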
Overview
Computer-aided design is one of the many tools used by engineers and designers and is used in many ways depending on the profession of the user and the type of software in question.
CAD is one part of the whole digital product development (DPD) activity within the product lifecycle management (PLM) processes, and as such is used together with other tools, which are either integrated modules or stand-alone products, such as:
* Computer-aided engineering (CAE) and finite element analysis (FEA, FEM)
* Computer-aided manufacturing (CAM) including instructions to computer numerical control (CNC) machines
* Photorealistic rendering and motion simulation
* Document management and revision control using product data management (PDM)
CAD is also used for the accurate creation of photo simulations that are often required in the preparation of environmental impact reports, in which computer-aided designs of intended buildings are superimposed onto photographs of existing environments to represent what the locale will look like if the proposed facilities are built. Potential blockage of view corridors and shadow studies are also frequently analyzed through the use of CAD.
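As a toy illustration of the arithmetic behind a shadow study (a simplification of what CAD packages actually do with full 3D site models), the sketch below estimates the shadow length cast on flat ground by a building of a given height for a given sun elevation; the height and angles are made-up example values.

import math

def shadow_length(building_height_m, sun_elevation_deg):
    """Length of the shadow cast on flat ground: height / tan(elevation)."""
    return building_height_m / math.tan(math.radians(sun_elevation_deg))

# Example values only: a 30 m building at a few sun elevations.
for elevation in (15, 30, 45, 60):
    print(f"elevation {elevation:>2} deg -> shadow {shadow_length(30, elevation):6.1f} m")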
Additional Information
CAD (computer-aided design) is the use of computer-based software to aid in design processes. CAD software is frequently used by different types of engineers and designers. CAD software can be used to create two-dimensional (2-D) drawings or three-dimensional (3-D) models.
The purpose of CAD is to optimize and streamline the designer's workflow, increase productivity, improve the quality and level of detail in the design, improve documentation communications and often contribute toward a manufacturing design database. CAD software outputs come in the form of electronic files, which are then used accordingly for manufacturing processes.
CAD is often used in tandem with digitized manufacturing processes. CAD/CAM (computer-aided design/computer-aided manufacturing) is software used to design products such as electronic circuit boards in computers and other devices.
Who uses CAD?
Computer-aided design is used in a wide variety of professions. CAD software is used heavily within various architecture, arts and engineering projects. CAD use cases are specific to industry and job functions. Professions that use CAD tools include, but are not limited to:
* Architects
* Engineers
* City planners
* Graphic designers
* Animation illustrators
* Drafters
* Fashion designers
* Interior designers
* Exterior designers
* Game designers
* Product designers
* Industrial designers
* Manufacturers
CAD benefits
Compared to traditional technical sketching and manual drafting, the use of CAD design tools can have significant benefits for engineers and designers:
* Lower production costs for designs;
* Quicker project completion due to efficient workflow and design process;
* Changes can be made independent of other design details, without the need to completely re-do a sketch;
* Higher quality designs with documentation (such as angles, measurements, presets) built into the file;
* Clearer designs, better legibility and ease of interpretation by collaborators, as handmade drawings are not as clear or detailed;
* Use of digital files can make collaborating with colleagues simpler; and
* Software features can support generative design, solid modeling, and other technical functions.
CAD software/tools
A number of CAD tools exist to assist designers and engineers. Some CAD tools are tailored to fit specific use cases and industries, such as industrial design or architecture. Other CAD software tools can be used to support a variety of industries and project types. Some widely-used CAD tools are:
* MicroStation (offered by Bentley Systems)
* AutoCAD (offered by Autodesk)
* CorelCAD
* IronCAD
* CADTalk
* SolidWorks
* Onshape
* Catia
* LibreCAD
* OpenSCAD
* Vectorworks
* Solid Edge
* Altium Designer
2188) Metal Detector
A metal detector is an instrument that detects the nearby presence of metal. Metal detectors are useful for finding metal objects on the surface, underground, and under water. A metal detector consists of a control box, an adjustable shaft, and a variable-shaped pickup coil. When the coil nears metal, the control box signals its presence with a tone, light, or needle movement. Signal intensity typically increases with proximity. A common type is the stationary "walk through" metal detector used at access points in prisons, courthouses, airports and psychiatric hospitals to detect concealed metal weapons on a person's body.
The simplest form of a metal detector consists of an oscillator producing an alternating current that passes through a coil producing an alternating magnetic field. If a piece of electrically conductive metal is close to the coil, eddy currents will be induced (inductive sensor) in the metal, and this produces a magnetic field of its own. If another coil is used to measure the magnetic field (acting as a magnetometer), the change in the magnetic field due to the metallic object can be detected.
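A rough numerical sketch of this principle follows. The target is modelled as a single resistor-inductor loop coupled to the transmit and receive coils by small mutual inductances, and complex (phasor) arithmetic gives the amplitude and phase of the signal the eddy currents induce in the receive coil. Every component value and coupling factor here is invented purely for illustration.

import cmath, math

f = 10e3                      # detector operating frequency, Hz (assumed)
omega = 2 * math.pi * f
I_tx = 1.0                    # transmit coil current amplitude, A (assumed)

# Target modelled as a small conductive loop: resistance R and inductance L.
R_target, L_target = 0.05, 2e-6          # ohms, henries (assumed)
M_tx_target = 1e-7                       # mutual inductance, transmit -> target (assumed)
M_target_rx = 1e-7                       # mutual inductance, target -> receive (assumed)

# Phasor of the eddy current driven in the target by the transmit field.
emf_target = -1j * omega * M_tx_target * I_tx
I_eddy = emf_target / (R_target + 1j * omega * L_target)

# Voltage the eddy currents induce back into the (otherwise balanced) receive coil.
v_rx = -1j * omega * M_target_rx * I_eddy

print(f"eddy current:   {abs(I_eddy) * 1e3:.2f} mA at {math.degrees(cmath.phase(I_eddy)):.1f} deg")
print(f"receive signal: {abs(v_rx) * 1e6:.2f} uV at {math.degrees(cmath.phase(v_rx)):.1f} deg")

With no conductive target nearby the receive-coil term vanishes, which is exactly the imbalance a practical detector listens for.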
The first industrial metal detectors came out in the 1960s. They were used for finding minerals among other things. Metal detectors help find land mines. They also detect weapons like knives and guns, which is important for airport security. People even use them to search for buried objects, like in archaeology and treasure hunting. Metal detectors are also used to detect foreign bodies in food, and in the construction industry to detect steel reinforcing bars in concrete and pipes and wires buried in walls and floors.
History and development
In 1841 Professor Heinrich Wilhelm Dove published an invention he called the "differential inductor". It was a 4-coil induction balance, with 2 glass tubes each having 2 well-insulated copper wire solenoids wound around them. Charged Leyden jars (high-voltage capacitors) were discharged through the 2 primary coils; this current surge induced a voltage in the secondary coils. When the secondary coils were wired in opposition, the induced voltages cancelled as confirmed by the Professor holding the ends of the secondary coils. When a piece of metal was placed inside one glass tube the Professor received a shock. This then was the first magnetic induction metal detector, and the first pulse induction metal detector.
In late 1878 and early 1879 Professor (of music) David Edward Hughes published his experiments with the 4-coil induction balance. He used his own recent invention the microphone and a ticking clock to generate regular pulses and a telephone receiver as detector. To measure the strength of the signals he invented a coaxial 3-coil induction balance which he called the "electric sonometer". Hughes did much to popularize the induction balance, quickly leading to practical devices that could identify counterfeit coins. In 1880 Mr. J. Munro, C.E. suggested the use of the 4-coil induction balance for metal prospecting. Hughes's coaxial 3-coil induction balance would also see use in metal detecting.
In July 1881 Alexander Graham Bell initially used a 4-coil induction balance to attempt to locate a bullet lodged in the chest of American President James Garfield. After much experimenting the best bullet detection range he achieved was only 2 inches. He then used his own earlier discovery, the partially overlapping 2-coil induction balance, and the detection range increased to 5 inches. But the attempt was still unsuccessful because the metal coil spring bed Garfield was lying on confused the detector. Bell's 2-coil induction balance would go on to evolve into the popular double D coil.
On December 16, 1881, Captain Charles Ambrose McEvoy applied for British Patent No. 5518, Apparatus for Searching for Submerged Torpedoes, &c., which was granted on June 16, 1882. His US269439 patent application of July 12, 1882 was granted on December 19, 1882. It was a 4-coil induction balance for detecting submerged metallic torpedoes and iron ships and the like. Given the development time involved, this may have been the earliest known device specifically constructed as a metal detector using magnetic induction.
In 1892 George M. Hopkins described an orthogonal 2-coil induction balance for metal detecting.
In 1915 Professor Camille Gutton developed a 4-coil induction balance to detect unexploded shells in the farmland of former battlefields in France. Unusually, both coil pairs were used for detection. (A 1919 photograph documents a later version of Gutton's detector.)
Modern developments
The modern development of the metal detector began in the 1920s. Gerhard Fischer had developed a system of radio direction-finding, which was to be used for accurate navigation. The system worked extremely well, but Fischer noticed there were anomalies in areas where the terrain contained ore-bearing rocks. He reasoned that if a radio beam could be distorted by metal, then it should be possible to design a machine which would detect metal using a search coil resonating at a radio frequency. In 1925 he applied for, and was granted, the first patent for an electronic metal detector.
Although Gerhard Fischer was the first person granted a patent for an electronic metal detector, the first to apply was Shirl Herr, a businessman from Crawfordsville, Indiana. His application for a hand-held Hidden-Metal Detector was filed in February 1924, but not patented until July 1928. Herr assisted Italian leader Benito Mussolini in recovering items remaining from the Emperor Caligula's galleys at the bottom of Lake Nemi, Italy, in August 1929. Herr's invention was used by Admiral Richard Byrd's Second Antarctic Expedition in 1933, when it was used to locate objects left behind by earlier explorers. It was effective up to a depth of eight feet.
However, it was one Lieutenant Józef Stanisław Kosacki, a Polish officer attached to a unit stationed in St Andrews, Fife, Scotland, during the early years of World War II, who refined the design into a practical Polish mine detector. These units were still quite heavy, as they ran on vacuum tubes, and needed separate battery packs.
The design invented by Kosacki was used extensively during the Second Battle of El Alamein when 500 units were shipped to Field Marshal Montgomery to clear the minefields of the retreating Germans, and later used during the Allied invasion of Sicily, the Allied invasion of Italy and the Invasion of Normandy.
As the creation and refinement of the device was a wartime military research operation, the knowledge that Kosacki created the first practical metal detector was kept secret for over 50 years.
Beat frequency induction
Many manufacturers of these new devices brought their own ideas to the market. White's Electronics of Oregon began in the 1950s by building a machine called the Oremaster Geiger Counter. Another leader in detector technology was Charles Garrett, who pioneered the BFO (beat frequency oscillator) machine. With the invention and development of the transistor in the 1950s and 1960s, metal detector manufacturers and designers made smaller, lighter machines with improved circuitry, running on small battery packs. Companies sprang up all over the United States and Britain to supply the growing demand. Beat frequency induction requires movement of the detector coil, much as moving a conductor near a magnet induces an electric current.
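To make the beat-frequency idea concrete, here is a small illustrative calculation (not taken from any actual detector design): an LC search oscillator's frequency is f = 1/(2*pi*sqrt(L*C)), and mixing it with a fixed reference oscillator produces an audible beat at the difference of the two frequencies; when nearby metal shifts the search coil's inductance slightly, the beat tone changes. All component values below are invented for the example.

import math

def lc_frequency(L, C):
    """Resonant frequency of an LC oscillator: f = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2 * math.pi * math.sqrt(L * C))

C = 10e-9                 # tank capacitance, farads (assumed)
L_ref = 250e-6            # reference oscillator inductance, henries (assumed)
L_search = 250e-6         # search coil inductance with no metal nearby (assumed)

f_ref = lc_frequency(L_ref, C)
f_no_metal = lc_frequency(L_search, C)
f_metal = lc_frequency(L_search * 0.99, C)   # assume nearby non-ferrous metal lowers L by ~1%

print(f"reference oscillator:  {f_ref / 1e3:.2f} kHz")
print(f"beat with no metal:    {abs(f_ref - f_no_metal):.1f} Hz")
print(f"beat with metal near:  {abs(f_ref - f_metal):.1f} Hz")

Even a one percent inductance shift moves the beat from silence to a tone of several hundred hertz, which is what the operator hears.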
Refinements
Modern top models are fully computerized, using integrated circuit technology to allow the user to set sensitivity, discrimination, track speed, threshold volume, notch filters, etc., and hold these parameters in memory for future use. Compared to just a decade ago, detectors are lighter, deeper-seeking, use less battery power, and discriminate better.
State-of-the-art metal detectors have further incorporated wireless technologies for the earphones and can connect to Wi-Fi networks and Bluetooth devices. Some also use built-in GPS to keep track of the search location and the locations of items found. Some connect to smartphone applications to further extend functionality.
Discriminators
The biggest technical change in detectors was the development of a tunable induction system. This system involved two coils that are electro-magnetically tuned. One coil acts as an RF transmitter, the other as a receiver; in some cases these can be tuned to between 3 and 100 kHz. When metal is in their vicinity, a signal is detected owing to eddy currents induced in the metal. What allowed detectors to discriminate between metals was the fact that every metal has a different phase response when exposed to alternating current: longer waves (lower frequencies) penetrate the ground more deeply and select for high-conductivity targets such as silver and copper, while shorter waves (higher frequencies), though less ground-penetrating, select for low-conductivity targets such as iron. Unfortunately, higher frequencies are also more sensitive to interference from ground mineralization. This selectivity, or discrimination, allowed detectors to be developed that could selectively detect desirable metals while ignoring undesirable ones.
Even with discriminators, it was still a challenge to avoid undesirable metals, because some of them have similar phase responses (e.g. tinfoil and gold), particularly in alloy form. Thus, improperly tuning out certain metals increased the risk of passing over a valuable find. Another disadvantage of discriminators was that they reduced the sensitivity of the machines.
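A hedged sketch of the phase idea, under the simplifying assumption that a target behaves like a series resistance-inductance loop: the phase lag of its eddy-current response is arctan(omega*L/R), so high-conductivity (low-resistance) targets answer with a noticeably different phase than low-conductivity ones. The target parameters below are rough, invented values, not measurements.

import math

def target_phase_deg(R, L, freq_hz):
    """Phase lag of a target modelled as a series R-L eddy-current loop."""
    return math.degrees(math.atan2(2 * math.pi * freq_hz * L, R))

freq = 15e3   # detector frequency, Hz (assumed)

# (label, R in ohms, L in henries) -- invented, illustrative values only.
targets = [("silver coin (high conductivity)", 0.01, 1.5e-6),
           ("iron nail (low conductivity)",    0.50, 1.5e-6),
           ("tinfoil scrap",                   0.04, 0.5e-6)]

for label, R, L in targets:
    print(f"{label:35s} phase ~ {target_phase_deg(R, L, freq):5.1f} deg")

A discriminator circuit effectively sorts targets by this phase angle; the overlap problem mentioned above corresponds to two different targets landing on nearly the same angle.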
New coil designs
Coil designers also tried out innovative designs. The original induction balance coil system consisted of two identical coils placed on top of one another. Compass Electronics produced a new design: two coils in a D shape, mounted back-to-back to form a circle. The system was widely used in the 1970s, and both concentric and double D type (or widescan as they became known) had their fans. Another development was the invention of detectors which could cancel out the effect of mineralization in the ground. This gave greater depth, but was a non-discriminate mode. It worked best at lower frequencies than those used before, and frequencies of 3 to 20 kHz were found to produce the best results. Many detectors in the 1970s had a switch which enabled the user to switch between the discriminate mode and the non-discriminate mode. Later developments switched electronically between both modes. The development of the induction balance detector would ultimately result in the motion detector, which constantly checked and balanced the background mineralization.
Pulse induction
At the same time, developers were looking at using a different technique in metal detection called pulse induction. Unlike the beat frequency oscillator or the induction balance machines, which both used a uniform alternating current at a low frequency, the pulse induction (PI) machine simply magnetized the ground with a relatively powerful, momentary current through a search coil. In the absence of metal, the field decayed at a uniform rate, and the time it took to fall to zero volts could be accurately measured. However, if metal was present when the machine fired, a small eddy current would be induced in the metal, and the time for sensed current decay would be increased. These time differences were minute, but the improvement in electronics made it possible to measure them accurately and identify the presence of metal at a reasonable distance. These new machines had one major advantage: they were mostly impervious to the effects of mineralization, and rings and other jewelry could now be located even under highly mineralized black sand. The addition of computer control and digital signal processing have further improved pulse induction sensors.
One particular advantage of using a pulse induction detector includes the ability to ignore the minerals contained within heavily mineralized soil; in some cases the heavy mineral content may even help the PI detector function better. Where a VLF detector is affected negatively by soil mineralization, a PI unit is not.
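To make the decay-timing idea concrete, the sketch below compares how long the sensed voltage takes to fall to a detection threshold with and without a target, modelling both cases as simple exponential decays; the time constants and threshold are invented illustrative values, not figures from any real PI detector.

import math

V0 = 1.0                 # coil voltage right after the transmit pulse ends, volts (assumed)
TAU_GROUND = 10e-6       # decay time constant with no target present, seconds (assumed)
TAU_TARGET = 25e-6       # slower decay when target eddy currents are present (assumed)
THRESHOLD = 0.01         # detection threshold voltage, volts (assumed)

def time_to_threshold(tau):
    """Time for V0 * exp(-t / tau) to fall to THRESHOLD: t = tau * ln(V0 / THRESHOLD)."""
    return tau * math.log(V0 / THRESHOLD)

print(f"no target  : {time_to_threshold(TAU_GROUND) * 1e6:6.1f} microseconds to reach threshold")
print(f"with target: {time_to_threshold(TAU_TARGET) * 1e6:6.1f} microseconds to reach threshold")

The detector's timing circuit only has to notice that the decay takes measurably longer than the ground-only baseline, which is why mineralized ground disturbs it far less than it does an induction-balance design.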
Uses
Large portable metal detectors are used by archaeologists and treasure hunters to locate metallic items, such as jewelry, coins, clothes buttons and other accessories, bullets, and other various artifacts buried beneath the surface.
Archaeology
Metal detectors are widely used in archaeology, with the first recorded use by military historian Don Rickey in 1958, who used one to detect the firing lines at Little Bighorn. However, archaeologists oppose the use of metal detectors by "artifact seekers" or "site looters" whose activities disrupt archaeological sites. The problem with the use of metal detectors on archaeological sites, or by hobbyists who find objects of archaeological interest, is that the context in which an object was found is lost and no detailed survey of its surroundings is made. Outside of known sites, the significance of objects may not be apparent to a metal detector hobbyist.
Discriminators and circuits
The development of transistors, discriminators, modern search coil designs, and wireless technology significantly impacted the design of metal detectors as we know them today: lightweight, compact, easy-to-use, and deep-seeking systems. The invention of a tunable induction device was the most significant technological advancement in detectors. Two electro-magnetically tuned coils were used in this method. One coil serves as an RF transmitter, while the other serves as a receiver; in some situations, these coils may be tuned to frequencies ranging from 3 to 100 kHz.
Due to eddy currents induced in the metal, a signal is detected when metal is present. The fact that every metal has a different phase response when exposed to alternating current allowed detectors to differentiate between metals. Longer waves (low frequency) penetrate the ground deeper and select for high conductivity targets like silver and copper, while shorter waves (higher frequency) select for low conductivity targets like iron. Unfortunately, ground mineralization interference affects high frequency as well. This selectivity or discrimination allowed the development of detectors that can selectively detect desirable metals.
Even with discriminators, avoiding undesirable metals was difficult because some of them have similar phase responses (for example, tinfoil and gold), particularly in alloy form. As a result, tuning out those metals incorrectly increased the chance of missing a valuable discovery. Discriminators also had the downside of lowering the sensitivity of the devices.
2189) Diabetic Neuropathy
Gist
Diabetic neuropathy is nerve damage that is caused by diabetes.
Nerves are bundles of special tissues that carry signals between your brain and other parts of your body. The signals
* send information about how things feel
* move your body parts
* control body functions such as digestion.
Summary
Diabetic neuropathy is a complication of diabetes that results in damage to the nervous system. It is a progressive disease, and symptoms get worse over time.
Neuropathy happens when high levels of fats or sugar in the blood damage the nerves in the body. It can affect virtually any nerve in the body, with a wide range of symptoms.
Nerves are essential to how the body works. They enable people to move, send messages about how things feel, and control automatic functions, such as breathing.
There are several types. Some involve the peripheral nerves, while others damage the nerves that supply the internal organs, such as the heart, the bladder, and the gut. In this way, it can affect many body functions.
Between one-third and a half of people with diabetes have neuropathy, according to the National Institute of Diabetes and Digestive and Kidney Diseases (NIDDK).
Details
Diabetic neuropathy is a type of nerve damage that develops gradually and is caused by long-term high blood sugar levels. While there’s no cure, managing blood sugar levels can slow its progression and prevent complications.
Diabetic neuropathy is a serious and common complication of type 1 and type 2 diabetes. It’s a type of nerve damage caused by long-term high blood sugar levels. The condition usually develops slowly, sometimes over the course of several decades.
If you have diabetes and notice numbness, tingling, pain, or weakness in your hands or feet, you should see a doctor or healthcare professional, as these are early symptoms of peripheral neuropathy. The danger is usually when you can’t feel pain and an ulcer develops on your foot.
In cases of severe or prolonged peripheral neuropathy, you may be vulnerable to injuries or infections. In serious cases, poor wound healing or infection can lead to amputation.
There are different types of diabetic neuropathy that affect different areas of your body, causing a variety of symptoms. If you have diabetes, it’s important to regularly check your blood glucose levels and contact a doctor if you have any symptoms of neuropathy.
What are the symptoms of diabetic neuropathy?
It’s common for symptoms of neuropathy to appear gradually. In many cases, the first type of nerve damage to occur involves the nerves of the feet. This can lead to the symptom of sometimes painful “pins and needles” in your feet.
Symptoms vary depending on the areas affected. Common signs and symptoms of the different types of diabetic neuropathy include:
* sensitivity to touch
* loss of sense of touch
* difficulty with coordination when walking
* numbness or pain in your hands or feet
* burning sensation in feet, especially at night
* muscle weakness or wasting
* bloating or fullness
* nausea, indigestion, or vomiting
* diarrhea or constipation
* dizziness when you stand up
* excessive or decreased sweating
* bladder problems such as incomplete bladder emptying
* vaginal dryness
* erectile dysfunction
* inability to sense low blood glucose
* vision trouble such as double vision
* increased heart rate
What are the different types of diabetic neuropathy?
The term neuropathy is used to describe several types of nerve damage. In people with diabetes, there are four main types of neuropathy.
1. Peripheral neuropathy
The most common form of neuropathy is peripheral neuropathy. Peripheral neuropathy usually affects the feet and legs, but it can also affect the arms or hands. Symptoms are varied and can be mild to severe. They include:
* numbness
* tingling or burning sensations
* extreme sensitivity to touch
* insensitivity to hot and cold temperatures
* sharp pain or cramping
* muscle weakness
* loss of balance or coordination
Some people experience symptoms more often at night.
If you have peripheral neuropathy, you may not feel an injury or sore on your foot. People with diabetes often have poor circulation, which makes it more difficult for wounds to heal. This combination increases the risk of infection. In extreme cases, infection can lead to amputation.
2. Autonomic neuropathy
The second most common type of neuropathy in people with diabetes is autonomic neuropathy.
The autonomic nervous system runs other systems in your body over which you have no conscious control. Many organs and muscles are controlled by it, including your:
* digestive system
* sweat glands
* sex organs and bladder
* cardiovascular system
Digestion problems
Nerve damage to the digestive system may cause:
* constipation
* diarrhea
* swallowing trouble
* gastroparesis, which causes the stomach to empty too slowly into the small intestines
Gastroparesis causes a delay in digestion, which can worsen over time, leading to frequent nausea and vomiting. You’ll typically feel full too quickly and be unable to finish a meal.
Delayed digestion often makes it more difficult to control blood glucose levels, too, with frequently alternating high and low readings.
Also, symptoms of hypoglycemia, such as sweating and heart palpitations, can go undetected in people with autonomic neuropathy. This can mean not noticing when you have low blood sugar, increasing the risk of a hypoglycemic emergency.
Sexual and bladder problems
Autonomic neuropathy may also cause sexual problems such as erectile dysfunction, vaginal dryness, or difficulty achieving orgasm. Neuropathy in the bladder can cause incontinence or make it difficult to fully empty your bladder.
Cardiovascular problems
Damage to the nerves that control your heart rate and blood pressure can make them respond more slowly. You may experience a drop in blood pressure and feel light-headed or dizzy when you stand up after sitting or lying down, or when you exert yourself. Autonomic neuropathy can also cause an abnormally fast heart rate.
Autonomic neuropathy can make it difficult to identify some of the symptoms of a heart attack. You may not feel any chest pain when your heart isn’t getting enough oxygen. If you have autonomic neuropathy, you should know the other symptoms of a heart attack, including:
* profuse sweating
* pain in the arm, back, neck, jaw, or stomach
* shortness of breath
* nausea
* lightheadedness
3. Proximal neuropathy
A rare form of neuropathy is proximal neuropathy, also known as diabetic amyotrophy. This form of neuropathy is more common in adults over 50 years old with type 2 diabetes and is diagnosed more often in men.
It often affects the hips, buttocks, or thighs. You may experience sudden and sometimes severe pain. Muscle weakness in your legs may make it difficult to stand up without assistance. Diabetic amyotrophy usually affects only one side of the body.
After the onset of symptoms, they usually get worse and then eventually begin to improve slowly. Most people recover within a few years, even without treatment.
4. Focal neuropathy
Focal neuropathy, or mononeuropathy, occurs when there’s damage to one specific nerve or group of nerves, causing weakness in the affected area. This occurs most often in your hand, head, torso, or leg. It appears suddenly and is usually very painful.
Like proximal neuropathy, most focal neuropathies go away in a few weeks or months and leave no lasting damage. The most common type is carpal tunnel syndrome.
Although most don’t feel the symptoms of carpal tunnel syndrome, about 25% of people with diabetes have some degree of nerve compression at the wrist.
Symptoms of focal neuropathy include:
* pain, numbness, tingling in fingers
* an inability to focus
* double vision
* aching behind the eyes
* Bell’s palsy
* pain in isolated areas such as the front of the thigh, lower back, pelvic region, chest, stomach, inside the foot, outside the lower leg, or weakness in big toe
What causes diabetic neuropathy?
Diabetic neuropathy is caused by high blood sugar levels sustained over a long period of time. Other factors can lead to nerve damage such as:
* damage to the blood vessels caused by high cholesterol levels
* mechanical injury such as injuries caused by carpal tunnel syndrome
* lifestyle factors such as smoking or alcohol use
Low levels of vitamin B12 can also lead to neuropathy. Metformin, a common medication used to manage diabetes, can decrease levels of vitamin B12. You can ask a doctor for a simple blood test to identify any vitamin deficiencies.
How is diabetic neuropathy diagnosed?
A doctor will determine whether or not you have neuropathy, starting by asking about your symptoms and medical history. You’ll also have a physical examination. They’ll check your level of sensitivity to temperature and touch, heart rate, blood pressure, and muscle tone.
A doctor may do a filament test to test the sensitivity in your feet. For this, they’ll use a nylon fiber to check your limbs for any loss of sensation. A tuning fork may be used to test your vibration threshold. A doctor may also test your ankle reflexes.
In some cases, they may also perform a nerve conduction study, which can assess nerve damage by measuring the speed and strength of nerve signals.
How is diabetic neuropathy treated?
There’s no cure for diabetic neuropathy, but you can slow its progression. Keeping your blood sugar levels within a healthy range is the best way to decrease the likelihood of developing diabetic neuropathy or slow its progression. It can also relieve some symptoms.
Quitting smoking, if applicable, and exercising regularly are also parts of a comprehensive treatment plan. Always talk with a doctor or healthcare professional before beginning a new fitness routine. You may also ask a doctor about complementary treatments or supplements for neuropathy.
Pain management
Medications may be used to treat pain caused by diabetic neuropathy. Talk with a doctor about the available medications and their potential side effects. Several medications have been shown to help with symptoms.
You may also want to consider alternative therapies such as acupuncture. Some research has found capsaicin to be helpful. Alternative therapies may provide additional relief when used in conjunction with medication.
Managing complications
Depending on your type of neuropathy, a doctor can suggest medications, therapies, or lifestyle changes that may help deal with symptoms and ward off complications.
For example, if you have problems with digestion as a result of your neuropathy, a doctor may suggest you eat smaller meals more often and limit the amount of fiber and fat in your diet.
If you have vaginal dryness, a doctor may suggest a lubricant. If you have erectile dysfunction, they may prescribe medication that can help.
Peripheral neuropathy is very common in people with diabetes and can lead to serious foot complications, which in turn can lead to amputation. If you have peripheral neuropathy, it’s important to take special care of your feet and to quickly get help if you have an injury or sore.
Can I prevent diabetic neuropathy?
Diabetic neuropathy can often be avoided if you manage your blood glucose vigilantly. To do this, be consistent in:
* monitoring your blood glucose levels
* taking medications as prescribed
* managing your diet
* being active
If you do develop diabetic neuropathy, work closely with a doctor and follow their recommendations for slowing its progression. With proper care, you can reduce the damage to your nerves and avoid complications.
Additional Information
Diabetic neuropathy is various types of nerve damage associated with diabetes mellitus. Symptoms depend on the site of nerve damage and can include motor changes such as weakness; sensory symptoms such as numbness, tingling, or pain; or autonomic changes such as urinary symptoms. These changes are thought to result from a microvascular injury involving small blood vessels that supply nerves (vasa nervorum). Relatively common conditions which may be associated with diabetic neuropathy include distal symmetric polyneuropathy; third, fourth, or sixth cranial nerve palsy; mononeuropathy; mononeuropathy multiplex; diabetic amyotrophy; and autonomic neuropathy.
2190) Personal Digital Assistant
Gist
The main purpose of a personal digital assistant (PDA) is to act as an electronic organizer or day planner that is portable, easy to use and capable of sharing information with your PC. It's supposed to be an extension of the PC, not a replacement.
PDAs, also called handhelds or palmtops, have definitely evolved over the years. Not only can they manage your personal information, such as contacts, appointments, and to-do lists, today's devices can also connect to the Internet, act as global positioning system (GPS) devices, and run multimedia software. What's more, manufacturers have combined PDAs with cell phones, multimedia players and other electronic gadgetry.
As its capabilities continue to grow, the standard PDA device is changing.
Summary
A personal digital assistant (PDA) is a handheld computer. Personal digital assistants were designed to replace non-electronic day planners. Many PDAs can work as an address book, a calculator, a clock, and a calendar. Some also have games on them. Newer PDAs, now called smartphones, have Wi-Fi and touch screens, and can read e-mail, record video, play music and make phone calls.
Processor
A PDA's processor is the chip that makes the device work. PDAs can have different processors with different speeds. If Internet access, GPS, or video recording is needed, the PDA needs a fast processor.
Memory
A PDA stores data and applications in RAM (random access memory) inside the device. PDAs can also use SD (Secure Digital), SDIO and MMC cards. These are easy to remove and are called flash memory cards. USB flash drives can be used to add more storage.
Memory capacity guide
* 16 MB: can hold simple data such as contacts and text files.
* 32 MB: allows use of e-mail and office programs.
* 64 MB: can handle Internet access and play video, images and music.
* 128 MB+: recommended for Internet access; makes the PDA run better and easily handles music, images and video (multimedia).
Screen
Most PDAs have touch screens. This means the device has fewer buttons, or none at all. PDAs usually come with a removable 'pen' (stylus) used to touch the screen. The pen acts like a mouse on a desktop PC or a touch pad on a laptop. On other PDAs, a finger is used instead of a pen.
Battery
PDAs are powered by batteries. There are two types: removable and fixed. Older PDAs used alkaline batteries, which the user had to take out and replace when they ran out. Most newer PDAs have rechargeable Li-ion batteries. PDAs use more battery power when playing video, accessing the Internet or transferring data over Bluetooth. Large, bright screens also draw more power, which makes the battery run out faster.
Connection
A PDA can communicate with other devices such as other PDAs and PCs. This is done using an infrared (IrDA) port or Bluetooth. With this ability, devices can transfer data and download music, images, video, games, etc.
Synchronization means keeping the same data on both the PDA and the PC (or another device). When changes are made on either the PDA or the PC, synchronizing applies those changes to the data on the other device. Synchronization also protects against loss of the stored information.
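As a rough illustration of the idea (not of any specific PDA product's sync protocol), the sketch below merges a PDA copy and a PC copy of an address book by keeping, for each contact, whichever record was modified most recently; the record fields and timestamps are invented.

def synchronize(pda_records, pc_records):
    """Return one merged dict; for shared keys, keep the most recently modified record."""
    merged = dict(pda_records)
    for key, record in pc_records.items():
        if key not in merged or record["modified"] > merged[key]["modified"]:
            merged[key] = record
    return merged

# Invented example data: each record carries a last-modified timestamp.
pda = {"alice": {"phone": "555-0101", "modified": 1710000000},
       "bob":   {"phone": "555-0202", "modified": 1710005000}}
pc  = {"alice": {"phone": "555-0199", "modified": 1710009000},   # newer on the PC
       "carol": {"phone": "555-0303", "modified": 1710001000}}

merged = synchronize(pda, pc)
for name, record in sorted(merged.items()):
    print(name, record["phone"])

After the merge, both devices would store the same merged set; real synchronization software additionally has to handle deletions and conflicting edits, which this sketch ignores.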
Adaptation (Customization)
Customization means adding more memory, a miniature keyboard, or other accessories to the PDA. It can also mean installing additional software (for example, downloaded Internet programs) on the device.
Details
A personal digital assistant (PDA) is a small, mobile, handheld device that provides computing and information storage and retrieval capabilities for personal or business use, often for keeping schedules, calendars and address book information handy.
Popular in the 1990s and early 2000s, PDAs were the precursors to smartphones. Most PDAs had a small physical keyboard, and some had an electronically sensitive pad which could process handwriting. Original uses for a personal digital assistant included schedule and address book storage and note-entering. However, many types of applications were written for PDAs.
Types of PDA devices
Apple's Newton was the first widely sold PDA that accepted handwriting. Other popular PDA devices included Hewlett-Packard's Palmtop and Palm's PalmPilot. Some PDAs offered a variation of the Microsoft Windows operating system called Windows CE. Other products had their own or another operating system.
Apple CEO John Sculley coined the term PDA in 1992, but devices fitting that description had existed for nearly a decade prior. In the mid-1990s, the manufacturers of PDAs, pagers and cellular telephones began to combine the functionality of those devices into a new device type now known as the smartphone.
PDA and pager manufacturer Research in Motion Limited released its first Blackberry smartphone in 2000, and the company dominated the market for most of the ensuing decade. However, in 2007, Apple released the first iPhone, a touchscreen smartphone. Within five years, the market had shifted away from devices with physical keyboards.
Future of PDAs
In the 2010s, the technology industry recycled the term "personal digital assistant." Now, it's more common for the term to refer to software that recognizes a user's voice and uses artificial intelligence to respond to queries. Examples of this type of personal digital assistant include Apple's Siri, Microsoft's Cortana and Amazon's Alexa.
Additional Information
PDA, an electronic handheld organizer used in the 1990s and 2000s to store contact information, manage calendars, communicate by e-mail, and handle documents and spreadsheets, usually in communication with the user’s personal computer (PC).
The first PDAs were developed in the early 1990s as digital improvements upon the traditional pen-and-paper organizers used to record personal information such as telephone numbers, addresses, and calendars. The first electronic organizers were large, had limited capabilities, and were often incompatible with other electronic systems. As computer technology improved, however, so did personal organizers. Soon companies such as Sharp Electronics Corporation, Casio Computer Company, and Psion PLC developed more-efficient models. Those PIMs, or personal information managers, were more user-friendly and could connect to PCs, and they had stylus interfaces and upgrade capabilities. In addition, later versions offered e-mail access and the option to download e-books. These improved devices still faced compatibility issues, however.
In 1993 Apple Inc. released the Newton MessagePad, for which John Sculley, then Apple’s chief executive officer, coined the term PDA. Although an improvement in some areas, the Newton’s handwriting recognition was only 85 percent effective, resulting in ridicule and poor sales.
In 1996 Palm, Inc., released the first Palm Pilot PDAs, which quickly became the model for other companies to follow. The Pilot did not try to replace the computer but made it possible to organize and carry information with an electronic calendar, a telephone number and address list, a memo pad, and expense-tracking software and to synchronize that data with a PC. The device included an electronic cradle to connect to a PC and pass information back and forth. It also featured a data-entry system called “graffiti,” which involved writing with a stylus using a slightly altered alphabet that the device recognized. Its success encouraged numerous software companies to develop applications for it.
In 1998 Microsoft Corporation produced Windows CE, a stripped-down version of its Windows OS (operating system), for use on mobile devices such as PDAs. This encouraged several established consumer electronics firms to enter the handheld organizer market. These small devices also often possessed a communications component and benefited from the sudden popularization of the Internet and the World Wide Web. In particular, the BlackBerry PDA, introduced by the Canadian company Research in Motion in 2002, established itself as a favourite in the corporate world because of features that allowed employees to make secure connections with their companies’ databases.
Most new PDAs were easy to use and featured keyboards, colour displays, touch screens, sound, increased memory, PC connectivity, improved software (including e-mail and word-processing programs), and wireless Internet access. In addition, technologies such as Bluetooth allowed PDAs to communicate wirelessly with a user’s primary computer and with other users’ PDAs. Most PDAs also offered extensive music storage capabilities as well as access to telephone networks, either through the Internet or through traditional cellular telephone technologies. The steady growth of smartphone sales beginning in the 2000s coincided with a decline in sales of PDAs, which were eventually supplanted by the smartphone.
Note: A personal digital assistant (PDA) is a multi-purpose mobile device which functions as a personal information manager. PDAs have been mostly displaced by the widespread adoption of highly capable smartphones, in particular those based on iOS and Android, and thus saw a rapid decline in use after 2007.
A PDA has an electronic visual display. Most models also have audio capabilities, allowing usage as a portable media player, and also enabling many of them to be used as telephones. By the early 2000s, nearly all PDA models had the ability to access the Internet, intranets or extranets via Wi-Fi or Wireless WANs, and since that time PDAs generally include a web browser. Sometimes, instead of buttons, PDAs employ touchscreen technology.
2191) Test of English as a Foreign Language
Details
Test of English as a Foreign Language is a standardized test to measure the English language ability of non-native speakers wishing to enroll in English-speaking universities. The test is accepted by more than 11,000 universities and other institutions in over 190 countries and territories. TOEFL is one of several major English-language tests worldwide, including IELTS, Duolingo English Test, Cambridge Assessment English, and Trinity College London exams.
TOEFL is a trademark of the Educational Testing Service (ETS), a private non-profit organization, which designs and administers the tests. ETS issues official score reports which are sent independently to institutions and are valid for two years following the test.
History
In 1962, a national council made up of representatives of thirty government and private organizations was formed to address the problem of ensuring English language proficiency for non-native speakers wishing to study at U.S. universities. This council recommended the development and administration of the TOEFL exam for the 1963–1965 time frame.
The test was initially developed at the Center for Applied Linguistics under the direction of Stanford University applied linguistics professor Charles A. Ferguson.
The TOEFL test was first administered in 1964 by the Modern Language Association financed by grants from the Ford Foundation and Danforth Foundation.
In 1965, The College Board and ETS jointly assumed responsibility for continuing the TOEFL testing program.
In 1973, a cooperative arrangement was made between ETS, The College Board, and the Graduate Record Examinations board of advisers to oversee and run the program. ETS was to administer the exam with the guidance of the TOEFL board.
To the present day, college admission criteria exempt international students who are nationals of certain Commonwealth countries from taking the TOEFL exam. Applicants from parts of the English-speaking world where English is the de facto official language, from most Commonwealth realms to former British colonies such as Hong Kong SAR and former territories or protectorates of the United States such as the Philippines and Puerto Rico, are automatically granted a TOEFL exemption, with some restrictions: for example, residents of Quebec are required to take the TOEFL while the rest of Canada is exempt, and the exemption also covers some Commonwealth nations where English is not widely spoken, such as Mozambique or Namibia (where English is co-official but spoken by about 3% of the population). However, the exemption does not apply to some Commonwealth nations outside the Anglosphere, such as India, Pakistan, and Bangladesh, where the IELTS is used instead, even though English may be a de facto official language there.
Formats and contents
Internet-based test
The TOEFL Internet-based test (iBT) measures all four academic English skills: reading, listening, speaking, and writing. Since its introduction in late 2005, the Internet-based test format has progressively replaced the computer-based test (CBT) and paper-based test (PBT), although paper-based testing is still used in select areas. The TOEFL iBT test was introduced in phases, starting with the United States, Canada, France, Germany, and Italy in 2005 and the rest of the world in 2006, with test centers added regularly. It is offered weekly at authorized test centers. The CBT was discontinued in September 2006 and those scores are no longer valid.
Initially, the demand for test seats was higher than availability, and candidates had to wait for months. It is now possible to take the test within one to four weeks in most countries. People who wish to take the test create an account on the official website to find the nearest test center. In the past the test lasted about four hours; today it takes around three hours.
The test consists of four sections, each measuring one of the basic language skills (while some tasks require integrating multiple skills), and all tasks focus on the language used in an academic, higher-education environment. Note-taking is allowed during the TOEFL iBT test. The test cannot be taken more than once every 3 days, starting from September 2019.
Home edition
The TOEFL iBT Home Edition is essentially the same test as the TOEFL iBT. However, it is taken at home while a human proctor watches through a web camera (usually built into most laptops) and via sharing of the computer screen. The popularity of the Home Edition grew during the COVID-19 pandemic, when it was the only option during lockdowns. Many students experience technical or security problems during Home Edition tests; the ETS browser used to administer the test has been unreliable in many cases. Students whose exams are interrupted are unlikely to get a refund or the chance to reschedule, as the technical problems are hard to document and the processing of complaints is slow given the popularity of the Home Edition and the number of complaints. If the test runs smoothly, the results are accepted by most companies and universities that accept the standard TOEFL iBT.
Additional Information
The Test of English as a Foreign Language, or TOEFL, is a test which measures people's English language skills to see if they are good enough to take a course at a university or graduate school in an English-speaking country. It is for people whose native language is not English but who wish to study at an international university. It measures how well a person uses listening, reading, speaking and writing skills to perform academic tasks. The test is accepted by more than 10,000 colleges, universities, and agencies in more than 150 countries, which makes it the most widely recognized English test in the world.
The TOEFL test has had three formats. The first was the PBT (paper-based TOEFL test), which tests listening, reading and grammar skills, with a perfect score of 677. Some centers where computers are not available still offer this format. The second format is the CBT (computer-based TOEFL test), in which each person is provided with a computer to take the test. A writing section was added to the original three sections, and the difficulty of the listening and grammar questions adjusted automatically to the test taker's level. The third format is the iBT (Internet-based test), introduced around the world, which measures listening, speaking, reading and writing.
Taking the test involves a few steps. First, decide where and when to take it, since the format (iBT or CBT) depends on the location; people need to register two to three months in advance to get a place. Registration can be done in person, online, by phone, or by email; online registration is the most common, and payment is required to complete it. The cost varies depending on the type of exam taken.
As more universities and colleges require a TOEFL score, more people are taking the test. In South Korea in 2010, nearly 115,000 people took it to demonstrate their ability in English, about 20% of all test takers. It is also seen as one route for middle and high school students applying to highly ranked domestic universities; studying at such an advanced level can improve their English skills and help them earn high scores in other English exams.
2192) Graduate Record Examinations
Gist
The Graduate Record Examinations (GRE) is a standardized test that is part of the admissions process for many graduate schools in the United States and Canada and a few other countries. The GRE is owned and administered by Educational Testing Service (ETS).
Summary
GRE, or the Graduate Record Examination, is a standardized test, usually requested by universities in the U.S., that students take prior to enrolling in a Master's degree.
What makes the GRE stand out is that it is one of the most popular tests Business schools ask their future MBA (Master of Business Administration) students to take. Of course, you could also apply to other Master's degrees with it, so taking the GRE is probably as versatile as a jean jacket.
What also makes the GRE exam stand out is the option to choose which scores you send to the university after taking the GRE!
Hear us out:
If, after you take the test, you want to highlight the sections where you scored well, you can send just those scores to the university. And, if you feel like you can do better, you can RETAKE THE TEST.
And that’s amazing, because most other tests won’t even let you go back to some of the questions you already answered!
Details
The Graduate Record Examinations (GRE) is a standardized test that is part of the admissions process for many graduate schools in the United States and Canada and a few other countries. The GRE is owned and administered by Educational Testing Service (ETS). The test was established in 1936 by the Carnegie Foundation for the Advancement of Teaching.
According to ETS, the GRE aims to measure verbal reasoning, quantitative reasoning, analytical writing, and critical thinking skills that have been acquired over a long period of learning. The content of the GRE consists of certain specific data analysis or interpretation, arguments and reasoning, algebra, geometry, arithmetic, and vocabulary sections. The GRE General Test is offered as a computer-based exam administered at testing centers owned or authorized by Prometric. In the graduate school admissions process, the level of emphasis that is placed upon GRE scores varies widely among schools and departments. The importance of a GRE score can range from being a mere admission formality to an important selection factor.
The GRE was significantly overhauled in August 2011, resulting in an exam that is adaptive on a section-by-section basis, rather than question by question, so that the performance on the first verbal and math sections determines the difficulty of the second sections presented (excluding the experimental section). Overall, the test retained the sections and many of the question types from its predecessor, but the scoring scale was changed to a 130 to 170 scale (from a 200 to 800 scale).
The cost to take the test is US$205, although ETS will reduce the fee under certain circumstances. It also provides financial aid to GRE applicants who prove economic hardship. ETS does not release scores that are older than five years, although graduate program policies on the acceptance of scores older than five years will vary.
Once almost universally required for admission to Ph.D. science programs in the U.S., its use for that purpose has fallen precipitously.
History
The Graduate Record Examinations was "initiated in 1936 as a joint experiment in higher education by the graduate school deans of four Ivy League universities and the Carnegie Foundation for the Advancement of Teaching."
The first universities to experiment with the test on their students were Harvard University, Yale University, Princeton University and Columbia University. The University of Wisconsin was the first public university to ask their students to take the test in 1938. It was first given to students at the University of Iowa in 1940, where it was analyzed by psychologist Dewey Stuit. It was first taken by students at Texas Tech University in 1942. In 1943, it was taken by students at Michigan State University, where it was analyzed by Paul Dressel. It was taken by over 45,000 students applying to 500 colleges in 1948.
"Until the Educational Testing Service was established in January, 1948, the Graduate Record Examination remained a project of the Carnegie Foundation."
2011 revision
In 2006, ETS announced plans to make significant changes in the format of the GRE. Planned changes for the revised GRE included a longer testing time, a departure from computer-adaptive testing, a new grading scale, and an enhanced focus on reasoning skills and critical thinking for both the quantitative and qualitative sections.
On April 2, 2007, ETS announced the decision to cancel plans for revising the GRE. The announcement cited concerns over the ability to provide clear and equal access to the new test after the planned changes as an explanation for the cancellation. The ETS stated, however, that they did plan "to implement many of the planned test content improvements in the future", although specific details regarding those changes were not initially announced.
Changes to the GRE took effect on November 1, 2007, as ETS started to include new types of questions in the exam. The changes mostly centered on "fill in the blank" type answers for the mathematics section that requires the test-taker to fill in the blank directly, without being able to choose from a multiple choice list of answers. ETS announced plans to introduce two of these new types of questions in each quantitative section, while the majority of questions would be presented in the regular format.
Since January 2008, the Reading Comprehension within the verbal sections has been reformatted, passages' "line numbers will be replaced with highlighting when necessary in order to focus the test taker on specific information in the passage" to "help students more easily find the pertinent information in reading passages."
In December 2009, ETS announced plans to move forward with significant revisions to the GRE in 2011. Changes include a new 130–170 scoring scale, the elimination of certain question types such as antonyms and analogies, the addition of an online calculator, and the elimination of the CAT format of question-by-question adjustment, in favor of a section by section adjustment.
On August 1, 2011, the Revised GRE General Test replaced the General GRE Test. The revised GRE is said to be better by design and to provide a better test-taking experience. The new types of questions in the revised format are intended to test the skills needed in graduate and business school programs. From July 2012 onward, ETS has offered an option, called ScoreSelect, that lets test takers choose which scores to send.
Before October 2002
The earliest versions of the GRE tested only for verbal and quantitative ability. For a number of years before October 2002, the GRE had a separate Analytical Ability section which tested candidates on logical and analytical reasoning abilities. This section was replaced by the Analytical Writing Assessment.
Structure
The computer-based GRE General Test consists of six sections. The first section is always the analytical writing section involving separately timed issue and argument tasks. The next five sections consist of two verbal reasoning sections, two quantitative reasoning sections, and either an experimental or research section. These five sections may occur in any order. The experimental section does not count towards the final score but is not distinguished from the scored sections. Unlike the question-by-question computer-adaptive test used before August 2011, the GRE General Test is a multistage test: the examinee's performance on earlier sections determines the difficulty of subsequent sections, so the test adapts at the section level rather than at the question level. This format allows the examinee to move freely back and forth between questions within each section, and the testing software allows the user to "mark" questions within each section for later review if time remains. The entire testing procedure lasts about 3 hours 45 minutes. One-minute breaks are offered after each section and a 10-minute break after the third section.
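To make the section-level adaptation concrete, here is a minimal, purely illustrative Python sketch of a two-stage (multistage) test: the difficulty of the next scored section is chosen from the number of correct answers in the previous one. The routing thresholds and difficulty labels are invented for this illustration and are not ETS's actual routing rules.

# Illustrative sketch of section-level (multistage) adaptivity.
# The thresholds and difficulty labels are assumptions made for this
# example; they are not the GRE's actual routing rules.

def route_next_section(correct_answers, total_questions=20):
    """Pick the difficulty of the next section from performance so far."""
    fraction_correct = correct_answers / total_questions
    if fraction_correct >= 0.75:
        return "hard"
    if fraction_correct >= 0.40:
        return "medium"
    return "easy"

for correct in (6, 12, 18):
    print(correct, "correct ->", route_next_section(correct), "next section")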
The paper-based GRE General Test also consists of six sections. The analytical writing is split up into two sections, one section for each issue and argument task. The next four sections consist of two verbal and two quantitative sections in varying order. There is no experimental section on the paper-based test.
Verbal section
The computer-based verbal sections assess reading comprehension, critical reasoning, and vocabulary usage. The verbal test is scored on a scale of 130–170, in 1-point increments. (Before August 2011, the scale was 200–800, in 10-point increments.) In a typical examination, each verbal section consists of 20 questions to be completed in 30 minutes. Each verbal section consists of about 6 text completion, 4 sentence equivalence, and 10 critical reading questions. The changes in 2011 include a reduced emphasis on rote vocabulary knowledge and the elimination of antonyms and analogies. Text completion items have replaced sentence completions, and new reading question types that allow the selection of multiple answers have been added.
Quantitative section
The computer-based quantitative sections assess basic high school level mathematical knowledge and reasoning skills. The quantitative test is scored on a scale of 130–170, in 1-point increments. (Before August 2011, the scale was 200–800, in 10-point increments.) In a typical examination, each quantitative section consists of 20 questions to be completed in 35 minutes. Each quantitative section consists of about 8 quantitative comparisons, 9 problem solving items, and 3 data interpretation questions. The changes in 2011 include the addition of numeric entry items requiring the examinee to fill in the blank and multiple-choice items requiring the examinee to select multiple correct responses.
Analytical writing section
The analytical writing section consists of two different essays, an "issue task" and an "argument task". The writing section is graded on a scale of 0–6, in half-point increments. The essays are written on a computer using a word processing program specifically designed by ETS. The program allows only basic computer functions and does not contain a spell-checker or other advanced features. Each essay is scored by at least two readers on a six-point holistic scale. If the two scores are within one point, the average of the scores is taken. If the two scores differ by more than a point, a third reader examines the response.
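The two-reader rule described above can be expressed directly in code. The sketch below is a simplified illustration: it assumes half-point scores on the 0–6 scale, and the way it resolves a large disagreement with the third reader is an assumption for illustration, not ETS's documented procedure.

# Sketch of the essay-scoring rule described above: average the two readers'
# scores when they are within one point, otherwise bring in a third reader.
# The resolution step used when a third reader is needed is an assumption
# for illustration only.

def score_essay(reader1, reader2, third_reader=None):
    if abs(reader1 - reader2) <= 1.0:
        return (reader1 + reader2) / 2
    if third_reader is None:
        raise ValueError("Scores differ by more than a point; a third reading is required.")
    # Assumed resolution: average the third reader's score with whichever
    # of the first two scores is closer to it.
    closer = min((reader1, reader2), key=lambda s: abs(s - third_reader))
    return (closer + third_reader) / 2

print(score_essay(4.0, 4.5))        # 4.25
print(score_essay(3.0, 5.0, 4.5))   # resolved with the third reader: 4.75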
Issue Task
The test taker is given 30 minutes to write an essay about a selected topic. Issue topics are selected from a pool of questions, which the GRE Program has published in its entirety. Individuals preparing for the GRE may access the pool of tasks on the ETS website.
Argument Task
The test taker will be given an argument (i.e. a series of facts and considerations leading to a conclusion) and asked to write an essay that critiques the argument. Test takers are asked to consider the argument's logic and to make suggestions about how to improve the logic of the argument. Test takers are expected to address the logical flaws of the argument and not provide a personal opinion on the subject. The time allotted for this essay is 30 minutes. The Arguments are selected from a pool of topics, which the GRE Program has published in its entirety. Individuals preparing for the GRE may access the pool of tasks on the ETS website.
Experimental section
The experimental section, which can be either verbal or quantitative, contains new questions ETS is considering for future use. Although the experimental section does not count towards the test-taker's score, it is unidentified and appears identical to the scored sections. Because test takers have no definite way of knowing which section is experimental, it is typically advised that test takers try their best and be focused on every section. Sometimes an identified research section at the end of the test is given instead of the experimental section. There is no experimental section on the paper-based GRE.
2193) GMAT (Graduate Management Admission Test)
Details
The Graduate Management Admission Test (GMAT) is a standardized test used to measure a test taker's aptitude in mathematics, verbal skills, and analytical writing. The GMAT is most commonly used by business schools as the primary exam for admission to an MBA program. The exam is generally offered by computer only; in areas of the world where computer networks are limited, the exam may be given as a paper-based test.
KEY TAKEAWAYS
* The GMAT, which stands for the Graduate Management Admission Test, is the most common test used by business schools to assess candidates.
* The test consists of four sections: analytical writing, verbal reasoning, quantitative reasoning, and integrated reasoning.
* Overall, the GMAT takes three and one-half hours to complete and has a maximum score of 800 points.
Understanding the Graduate Management Admission Test (GMAT)
The GMAT exam consists of four sections: analytical writing assessment, verbal reasoning, integrated reasoning, and quantitative reasoning. The maximum score achievable for the GMAT is 800, and exam scores are generally valid for five years following the exam's completion. On average, the exam takes three and one-half hours to complete.
Every year, over 255,000 individuals take the GMAT. As of August 2023, it costs $275 to take the test at a test center and $300 to take it online in the U.S. Due to the widespread nature of the admissions test, GMATs are offered almost every day of the year and can be taken every 16 calendar days. However, the test can be taken no more than eight times total and no more than five times in a 12-month period. Most applicants take the exam once or twice before applying.
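The retake limits just described (a gap of at least 16 calendar days between attempts, at most five attempts in any rolling 12-month period, and at most eight attempts in total) translate into a simple eligibility check. The Python sketch below is an illustration of those rules only, not an official GMAC tool.

# Illustrative check of the GMAT retake limits described above:
# at least 16 days between attempts, at most 5 attempts in any rolling
# 12-month period, and at most 8 attempts in a lifetime.
from datetime import date, timedelta

def can_register(previous_attempts, proposed_date):
    if len(previous_attempts) >= 8:                          # lifetime cap
        return False
    if any(abs((proposed_date - d).days) < 16 for d in previous_attempts):
        return False                                         # 16-day spacing
    window_start = proposed_date - timedelta(days=365)
    recent = [d for d in previous_attempts if window_start < d <= proposed_date]
    return len(recent) < 5                                   # rolling 12-month cap

attempts = [date(2023, 1, 10), date(2023, 3, 1)]
print(can_register(attempts, date(2023, 3, 10)))  # False: only 9 days after the last attempt
print(can_register(attempts, date(2023, 4, 1)))   # True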
How the GMAT Is Applied
The Graduate Management Admission Council administers the exam. In addition to testing comprehension of writing and math, the GMAT is also used to assess an individual’s critical reasoning skills and logic as applicable to business and management in the real world.
Starting in 2012, the exam added a section called Integrated Reasoning, which assesses an individual’s evaluation skills for dealing with information gathered from multiple sources and in new formats. This section also intends to test students in the context of working with data and technology.
Approximately 3,391 graduate programs and institutions around the world use the GMAT to assess applicants to their programs. The Graduate Management Admission Council has recommended that the GMAT be used as one factor for determining whether a student is accepted into a program.
The council cautions that for some international students, the writing analysis section might show the limits of their English language comprehension rather than their critical thinking and reasoning capacity.
Moreover, the GMAT and the Graduate Record Examination (GRE) differ in nature and in what they test applicants on, which makes it inappropriate to treat the two exams as interchangeable. The Graduate Management Admission Council recommends not using a so-called cutoff score when reviewing applicants but instead looking at their applications holistically. If a cutoff score is implemented, the council suggests the institution take additional measures to show that the cutoff does not lead to discrimination based on age, gender, or ethnicity.
Requirements for the GMAT
The Graduate Management Admission Council requires identification to take the test, which can include:
* International travel passport—a passport is always required when taking the exam at a location outside of your country. Expired passports are not acceptable.
* Green Cards (Permanent Resident Cards) for non-citizen residents
* Government-issued driver's license
* Government-issued national/state/province identity card
* Military ID card
Acceptable identification varies by country, so you should check the GMAC's website to learn what is acceptable in your country. In addition to having proper identification, you should be eligible to apply for a graduate program at the school you wish to attend.
GMAT vs. GRE
Compared to the GMAT's four sections, the GRE has three: an Analytical Writing Assessment, a Quantitative section, and a Verbal section. You're given three hours and 45 minutes to finish the test; the Quantitative and Verbal sections are each scored up to a maximum of 170, and the average section score is around 150.8.
It is not uncommon for graduate programs to use either the GMAT or the Graduate Record Examination (GRE) to assess an applicant. Due to the differences in how the two tests are scaled, GMAT and GRE scores cannot be directly compared. However, some schools accept either the GMAT or the GRE, depending on the program you're applying for. Many graduate programs use the GRE, while business graduate programs more commonly use the GMAT.
Is the GMAT Hard or Easy?
The GMAT requires that you have the critical thinking and reasoning skills required of business leaders. You might need to study and prepare for the test, or you might not—whether it is hard depends on your experiences, knowledge, and level of preparation.
Is 700 a Bad GMAT Score?
The maximum score on the GMAT is 800; the average score in 2022 was 651. So, a score of 700 is above average, but the closer you get to 800, the better it is for your application.
Do Colleges Prefer GMAT or GRE?
It depends on the college you're applying to and the program you wish to study. Colleges might prefer the GMAT over the GRE if you're interested in business. However, it's up to the specific college you're applying to whether it accepts the GRE or GMAT for your area of interest.
The Bottom Line
The GMAT is a test designed to evaluate your ability to think critically and make decisions using math, analytical, and communication skills. It is accepted at most business schools and is one of the first steps to take if you're considering a graduate degree in business.
Additional Information
The Graduate Management Admission Test is a computer adaptive test (CAT) intended to assess certain analytical, quantitative, verbal, and data literacy skills for use in admission to a graduate management program, such as a Master of Business Administration (MBA) program. Answering the test questions requires reading comprehension and mathematical skills such as arithmetic and algebra. The Graduate Management Admission Council (GMAC) owns and operates the test, and states that the GMAT assesses critical thinking and problem-solving abilities while also addressing data analysis skills that it believes to be vital to real-world business and management success. It can be taken up to five times a year but no more than eight times total. Attempts must be at least 16 days apart.
GMAT is a registered trademark of the Graduate Management Admission Council. More than 7,700 programs at more than 2,400 graduate business schools around the world accept the GMAT as part of the selection criteria for their programs. Business schools use the test as a criterion for admission into a wide range of graduate management programs, including MBA, Master of Accountancy, and Master of Finance programs, among others. The GMAT is administered online and in standardized test centers in 114 countries around the world. According to a survey conducted by Kaplan Test Prep, the GMAT is still the number one choice for MBA aspirants. According to GMAC, it has continually performed validity studies to statistically verify that the exam predicts success in business school programs. The number of GMAT test-takers plummeted from 2012 to 2021 as more students opted for MBA programs that didn't require the GMAT.
2194) Scholastic Aptitude Test
Summary
The SAT is a standardized test administered by the College Board, a non-profit organization that runs other programs including the PSAT (Preliminary SAT), AP (Advanced Placement) and CLEP (College-Level Examination Program). The SAT and the ACT are the primary entrance exams used by colleges and universities in the United States.
The SAT and the Problem of "Aptitude"
The letters SAT originally stood for the Scholastic Aptitude Test. The idea of "aptitude," one's natural ability, was central to the exam's origins. The SAT was supposed to be an exam that tested one's abilities, not one's knowledge. As such, it was supposed to be an exam for which students could not study, and it would provide colleges with a useful tool for measuring and comparing the potential of students from different schools and backgrounds.
The reality, however, was that students could indeed prepare for the exam and that the test was measuring something other than aptitude. Not surprisingly, the College Board changed the name of the exam to the Scholastic Assessment Test, and later to the SAT Reasoning Test. Today the letters SAT stand for nothing at all. In fact, the evolution of the meaning of "SAT" highlights many of the problems associated with the exam: it's never been entirely clear what it is that the test measures.
The SAT competes with the ACT, the other widely used exam for college admissions in the United States. The ACT, unlike the SAT, has never focused on the idea of "aptitude." Instead, the ACT tests what students have learned in school. Historically, the tests have been different in meaningful ways, and students who do poorly on one might do better on the other. In recent years, the ACT surpassed the SAT as the most widely used college admissions entrance exam. In response to both its loss of market share and criticisms about the very substance of the exam, the SAT launched an entirely redesigned exam in the spring of 2016. If you were to compare the SAT to the ACT today, you'd find that the exams are much more similar than they had been historically.
Details
The SAT is a standardized test widely used for college admissions in the United States. Since its debut in 1926, its name and scoring have changed several times. For much of its history, it was called the Scholastic Aptitude Test and had two components, Verbal and Mathematical, each of which was scored on a range from 200 to 800. Later it was called the Scholastic Assessment Test, then the SAT I: Reasoning Test, then the SAT Reasoning Test, then simply the SAT.
The SAT is wholly owned, developed, and published by the College Board, a private, not-for-profit organization in the United States. It is administered on behalf of the College Board by the Educational Testing Service, another non-profit organization which until shortly before the 2016 redesign of the SAT developed the test and maintained a repository of items (test questions) as well. The test is intended to assess students' readiness for college. Originally designed not to be aligned with high school curricula, several adjustments were made for the version of the SAT introduced in 2016. College Board president David Coleman added that he wanted to make the test reflect more closely what students learn in high school with the new Common Core standards, which have been adopted by the District of Columbia and many states.
Many students prepare for the SAT using books, classes, online courses, and tutoring, which are offered by a variety of companies and organizations. One of the best known such companies is Kaplan, Inc., which has offered SAT preparation courses since 1946. Starting with the 2015–16 school year, the College Board began working with Khan Academy to provide free SAT preparation online courses.
Historically, starting around 1937, the tests offered under the SAT banner also included optional subject-specific SAT Subject Tests, which were called SAT Achievement Tests until 1993 and then were called SAT II: Subject Tests until 2005; these were discontinued after June 2021.
Historically, the test also included an essay section, which became optional in 2016 and was discontinued (with some exceptions) after June 2021.
Historically, the test was taken using paper forms that were filled in using a number 2 pencil and were scored (except for hand-written response sections) using Scantron-type optical mark recognition technology. Starting in March 2023 for international test-takers and March 2024 within the U.S., the testing method changed to use a computer program application called Bluebook running on a laptop or tablet computer provided either by the student or the testing service. The test was also made adaptive, customizing the questions that are presented to the student based on how they perform on questions asked earlier in the test, and shortened from three hours to two hours and 14 minutes.
While a considerable amount of research has been done on the SAT, many questions and misconceptions remain. Outside of college admissions, the SAT is also used by researchers studying human intelligence in general and intellectual precociousness in particular, and by some employers in the recruitment process.
Function
The SAT is typically taken by high school juniors and seniors. The College Board states that the SAT is intended to measure literacy, numeracy and writing skills that are needed for academic success in college. They state that the SAT assesses how well the test-takers analyze and solve problems—skills they learned in school that they will need in college.
The College Board also claims that the SAT, in combination with high school grade point average (GPA), provides a better indicator of success in college than high school grades alone, as measured by college freshman GPA. Various studies conducted over the lifetime of the SAT show a statistically significant increase in correlation of high school grades and college freshman grades when the SAT is factored in. The predictive validity and powers of the SAT are topics of research in psychometrics.
The SAT is a norm-referenced test intended to yield scores that follow a bell curve distribution among test-takers. To achieve this distribution, test designers include challenging multiple-choice questions with plausible but incorrect options, known as "distractors", exclude questions that a majority of students answer correctly, and impose tight time constraints during the examination.
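As a rough illustration of what "norm-referenced" means in practice, the sketch below converts a composite score into an approximate percentile under an assumed bell-curve (normal) distribution. The mean and standard deviation used here are stand-in values chosen for the example, not the College Board's published norms.

# Rough illustration of a norm-referenced score: convert a composite score
# into an approximate percentile assuming a normal (bell curve) distribution.
# The mean and standard deviation below are assumed values for illustration,
# not official College Board statistics.
from statistics import NormalDist

ASSUMED_MEAN = 1050
ASSUMED_SD = 200

def approximate_percentile(score):
    return 100 * NormalDist(mu=ASSUMED_MEAN, sigma=ASSUMED_SD).cdf(score)

for s in (900, 1050, 1350):
    print(s, "-> approximately percentile", round(approximate_percentile(s)))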
There are substantial differences in funding, curricula, grading, and difficulty among U.S. secondary schools due to U.S. federalism, local control, and the prevalence of private, distance, and home schooled students. SAT (and ACT) scores are intended to supplement the secondary school record and help admission officers put local data—such as course work, grades, and class rank—in a national perspective.
Historically, the SAT was more widely used by students living in coastal states and the ACT was more widely used by students in the Midwest and South; in recent years, however, an increasing number of students on the East and West coasts have been taking the ACT. Since 2007, all four-year colleges and universities in the United States that require a test as part of an application for admission have accepted either the SAT or the ACT, and as of Fall 2022, more than 1,400 four-year colleges and universities did not require any standardized test scores at all for admission, though some of them were planning to apply this policy only temporarily due to the coronavirus pandemic.
SAT test-takers are given two hours and 14 minutes to complete the test, plus a 10-minute break between the Reading and Writing section and the Math section. As of 2024, the test costs US$60.00, with additional fees for late registration, registration by phone, registration changes, rapid delivery of results, delivery of results to more than four institutions, result deliveries ordered more than nine days after the test, and testing administered outside the United States, as applicable; fee waivers are offered to low-income students within the U.S. and its territories. Scores on the SAT range from 400 to 1600, combining test results from two 200-to-800-point sections: the Mathematics section and the Evidence-Based Reading and Writing section. Although taking the SAT, or its competitor the ACT, is required for freshman entry to many colleges and universities in the United States, many institutions made these entrance exams optional during the late 2010s. This did not stop students from attempting to achieve high scores, as they and their parents are skeptical of what "optional" means in this context; in fact, the test-taking population was increasing steadily. While this may have resulted in a long-term decline in scores, experts cautioned against using this to gauge the scholastic levels of the entire U.S. population.
2195) Common Admission Test
Gist
The Indian Institutes of Management started this exam and use the test for selecting students for their business administration programs (MBA or PGDM). The test is conducted once a year, usually on the last Sunday of November, by one of the Indian Institutes of Management on a rotational basis.
Details
The Common Admission Test (CAT) is a computer-based test for admission to graduate management programs. The test consists of three sections: Verbal Ability and Reading Comprehension, Data Interpretation and Logical Reasoning, and Quantitative Ability. The exam is taken online over a period of three hours, with one hour per section. In 2020, due to COVID-19 precautions, the Indian Institute of Management Indore decided to conduct the CAT exam in two hours, with 40 minutes devoted to each section. The Indian Institutes of Management started this exam and use the test for selecting students for their business administration programs (MBA or PGDM). The test is conducted every year by one of the Indian Institutes of Management based on a policy of rotation.
In August 2011, it was announced that Indian Institutes of Technology (IITs) and Indian Institute of Science (IISc) would also use the CAT scores, instead of the Joint Management Entrance Test (JMET), to select students for their management programmes starting with the 2012-15 batch.
Before 2010, CAT was a paper based test conducted on a single day for all candidates. The pattern, number of questions and duration have seen considerable variations over the years.
On 1 May 2009, it was announced that CAT would be a computer-based test starting from 2009. The American firm Prometric was entrusted with the responsibility of conducting the test from 2009 to 2013. The first computer-based CAT was marred by technical snags. The issue was so serious that it prompted the Government of India to seek a report from the convenor. The trouble was diagnosed as 'Conficker' and 'W32 Nimda', two viruses that attacked the systems displaying the test and caused servers to slow down. From 2014 onward, CAT has been conducted by Tata Consultancy Services (TCS). CAT 2015 and CAT 2016 were 180-minute tests consisting of 100 questions (34 from Quantitative Ability, 34 from Verbal Ability and Reading Comprehension, and 32 from Data Interpretation and Logical Reasoning). From CAT 2020 onwards, the exam duration has been reduced to two hours, with 40 minutes allotted per section.
Eligibility for CAT
The candidate must satisfy the below specified criteria:
* Hold a bachelor's degree, with not less than 50% marks or equivalent CGPA (45% for Scheduled Caste (SC), Scheduled Tribe (ST) and Persons with Disability (PWD)/Differently Abled (DA) candidates)
* The degree should be awarded by a university incorporated by an act of the central or state legislature in India, by another educational institution established by an act of Parliament, or by an institution declared to be a deemed university under Section 3 of the UGC Act, 1956; alternatively, the candidate may possess an equivalent qualification recognized by the Ministry of HRD, Government of India.
* Candidates appearing for the final year of a bachelor's degree (or equivalent qualification), and those who have completed their degree requirements and are awaiting results, may also apply. If selected, such applicants are admitted provisionally, provided they submit a certificate from the principal/registrar of their college/institute, issued no later than 30 June of the year following the exam, stating that the candidate has completed all requirements for obtaining the bachelor's degree or equivalent qualification as of the date of issue of the certificate.
Exam pattern
The Common Admission Test (CAT), like virtually all large-scale exams, utilises multiple forms, or versions, of the test. Hence there are two types of scores involved: a raw score and a scaled score.
The raw score is calculated for each section based on the number of questions answered correctly, incorrectly, or left unattempted. Candidates are given +3 points for each correct answer and -1 point for each incorrect answer; there is no negative marking for TITA (Type in the Answer) questions. No points are given for questions that are not answered. The raw scores are then adjusted through a process called equating. Equated raw scores are then placed on a common scale or metric to ensure appropriate interpretation of the scores. This process is called 'scaling'.
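A minimal sketch of the marking scheme just described, before any equating or scaling, is given below: +3 for a correct answer, -1 for an incorrect MCQ, no penalty for an incorrect TITA answer, and 0 for an unattempted question. The response encoding is invented for the example.

# Sketch of CAT raw scoring as described above: +3 per correct answer,
# -1 per incorrect MCQ, no negative marking for TITA questions, and 0 for
# unattempted questions. Equating and scaling are separate steps applied
# afterwards and are not modelled here.

def raw_score(responses):
    """responses: list of (question_type, outcome) pairs, where question_type
    is "MCQ" or "TITA" and outcome is "correct", "incorrect" or "skipped"."""
    score = 0
    for question_type, outcome in responses:
        if outcome == "correct":
            score += 3
        elif outcome == "incorrect" and question_type == "MCQ":
            score -= 1
    return score

example = [("MCQ", "correct"), ("MCQ", "incorrect"), ("TITA", "incorrect"), ("MCQ", "skipped")]
print(raw_score(example))  # 3 - 1 + 0 + 0 = 2

With 66 questions, answering everything correctly gives the maximum raw score of 66 x 3 = 198 marks, which matches the maximum described in the exam pattern below.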
The total number of questions and the number of questions per section in CAT can vary by year. At present there are 66 questions in all. The first section, Verbal Ability and Reading Comprehension, contains 24 questions (16 on reading comprehension and 8 on verbal ability); the next section, Data Interpretation and Logical Reasoning, contains 20 questions; and the last section, Quantitative Ability, contains 22 questions, making 66 questions in total.
CAT is conducted in three slots/sessions (Morning Slot, Afternoon Slot, Evening Slot).
CAT Pattern and Duration
The CAT 2024 exam will be conducted in three 40-minute sessions, for a total of 120 minutes. The CAT exam pattern consists of multiple-choice questions (MCQs) and non-multiple-choice, or TITA (Type In The Answer), questions. The three sections in the exam are as follows:
* Verbal Ability & Reading Comprehension (VARC)
* Data Interpretation & Logical Reasoning (DILR)
* Quantitative Ability (QA)
-> VARC: 24 questions, of which 8 are verbal ability (para jumbles - 2 TITA questions, para summary - 2 MCQs, odd one out - 2 TITA questions, sentence placement - 2 MCQs) and 16 are reading comprehension, asked across 4 passages with 4 questions each (all MCQs).
-> DILR: 20 questions, asked in 4 sets following a 6-6-4-4 or 5-5-5-5 pattern.
-> QA: 22 independent questions from topics such as Arithmetic, Algebra, Geometry, Number Systems and Modern Math.
There will be a maximum score of 198 marks and 66 total questions in the CAT exam pattern.
Candidates cannot jump between the three sections while taking the exam. The order of the sections is fixed: VARC -> DILR -> QA.
2196) Cloud computing
Gist
Cloud computing is the on-demand availability of computing resources (such as storage and infrastructure), as services over the internet. It eliminates the need for individuals and businesses to self-manage physical resources themselves, and only pay for what they use.
The definition for the cloud can seem murky, but essentially, it's a term used to describe a global network of servers, each with a unique function. The cloud is not a physical entity, but instead is a vast network of remote servers around the globe which are hooked together and meant to operate as a single ecosystem.
The name comes from the fact that the data gets stored on remote servers - in the cloud. In practice, 'the cloud' is shorthand used in the tech industry for the servers and networking infrastructure that allow users to store and access data through the internet.
Summary
Cloud computing is the on-demand availability of computer system resources, especially data storage (cloud storage) and computing power, without direct active management by the user. Large clouds often have functions distributed over multiple locations, each of which is a data center. Cloud computing relies on sharing of resources to achieve coherence and typically uses a pay-as-you-go model, which can help in reducing capital expenses but may also lead to unexpected operating expenses for users.
Definition
A European Commission communication issued in 2012 argued that the breadth of scope offered by cloud computing made a general definition "elusive", whereas the United States National Institute of Standards and Technology's 2011 definition of cloud computing identified "five essential characteristics":
* On-demand self-service. A consumer can unilaterally provision computing capabilities, such as server time and network storage, as needed automatically without requiring human interaction with each service provider.
* Broad network access. Capabilities are available over the network and accessed through standard mechanisms that promote use by heterogeneous thin or thick client platforms (e.g., mobile phones, tablets, laptops, and workstations).
* Resource pooling. The provider's computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to consumer demand.
* Rapid elasticity. Capabilities can be elastically provisioned and released, in some cases automatically, to scale rapidly outward and inward commensurate with demand. To the consumer, the capabilities available for provisioning often appear unlimited and can be appropriated in any quantity at any time.
* Measured service. Cloud systems automatically control and optimize resource use by leveraging a metering capability at some level of abstraction appropriate to the type of service (e.g., storage, processing, bandwidth, and active user accounts). Resource usage can be monitored, controlled, and reported, providing transparency for both the provider and consumer of the utilized service.
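As an illustration of the measured-service characteristic above, the sketch below computes a simple pay-as-you-go bill from metered usage. The resource names and unit prices are invented for the example and do not correspond to any particular provider's pricing.

# Illustration of "measured service": resource usage is metered and billed
# per unit on a pay-as-you-go basis. The resources and unit prices below
# are invented for this example and do not reflect any real provider's rates.

UNIT_PRICES = {
    "compute_hours": 0.05,       # dollars per VM-hour
    "storage_gb_months": 0.02,   # dollars per GB stored per month
    "egress_gb": 0.09,           # dollars per GB of outbound bandwidth
}

def monthly_bill(usage):
    """Multiply each metered quantity by its unit price and sum the results."""
    return sum(UNIT_PRICES[resource] * quantity for resource, quantity in usage.items())

usage = {"compute_hours": 720, "storage_gb_months": 500, "egress_gb": 120}
print(round(monthly_bill(usage), 2))  # 720*0.05 + 500*0.02 + 120*0.09 = 56.8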
History
Cloud computing has a rich history which extends back to the 1960s, with the initial concepts of time-sharing becoming popularized via remote job entry (RJE). The "data center" model, where users submitted jobs to operators to run on mainframes, was predominantly used during this era. This was a time of exploration and experimentation with ways to make large-scale computing power available to more users through time-sharing, optimizing the infrastructure, platform, and applications, and increasing efficiency for end users.
The "cloud" metaphor for virtualized services dates to 1994, when it was used by General Magic for the universe of "places" that mobile agents in the Telescript environment could "go". The metaphor is credited to David Hoffman, a General Magic communications specialist, based on its long-standing use in networking and telecom.[7] The expression cloud computing became more widely known in 1996 when Compaq Computer Corporation drew up a business plan for future computing and the Internet. The company's ambition was to supercharge sales with "cloud computing-enabled applications". The business plan foresaw that online consumer file storage would likely be commercially successful. As a result, Compaq decided to sell server hardware to internet service providers.[8]
In the 2000s, the application of cloud computing began to take shape with the establishment of Amazon Web Services (AWS) in 2002, which allowed developers to build applications independently. In 2006, the beta version of Google Docs was released, along with Amazon Simple Storage Service (Amazon S3) and the Amazon Elastic Compute Cloud (EC2); in 2008, NASA developed the first open-source software for deploying private and hybrid clouds.
The following decade saw the launch of various cloud services. In 2010, Microsoft launched Microsoft Azure, and Rackspace Hosting and NASA initiated an open-source cloud-software project, OpenStack. IBM introduced the IBM SmartCloud framework in 2011, and Oracle announced the Oracle Cloud in 2012. In December 2019, Amazon launched AWS Outposts, a service that extends AWS infrastructure, services, APIs, and tools to customer data centers, co-location spaces, or on-premises facilities.
Since the global pandemic of 2020, cloud technology has surged in popularity due to the level of data security it offers and the flexibility of working options it provides for all employees, notably remote workers.
Details
Cloud computing is a method of running application software and storing related data in central computer systems and providing customers or other users access to them through the Internet.
Early development
The origin of the expression cloud computing is obscure, but it appears to derive from the practice of using drawings of stylized clouds to denote networks in diagrams of computing and communications systems. The term came into popular use in 2008, though the practice of providing remote access to computing functions through networks dates back to the mainframe time-sharing systems of the 1960s and 1970s. In his 1966 book The Challenge of the Computer Utility, the Canadian electrical engineer Douglas F. Parkhill predicted that the computer industry would come to resemble a public utility “in which many remotely located users are connected via communication links to a central computing facility.”
For decades, efforts to create large-scale computer utilities were frustrated by constraints on the capacity of telecommunications networks such as the telephone system. It was cheaper and easier for companies and other organizations to store data and run applications on private computing systems maintained within their own facilities.
The constraints on network capacity began to be removed in the 1990s when telecommunications companies invested in high-capacity fibre-optic networks in response to the rapidly growing use of the Internet as a shared network for exchanging information. In the late 1990s, a number of companies, called application service providers (ASPs), were founded to supply computer applications to companies over the Internet. Most of the early ASPs failed, but their model of supplying applications remotely became popular a decade later, when it was renamed cloud computing.
Cloud services and major providers
Cloud computing encompasses a number of different services. One set of services, sometimes called software as a service (SaaS), involves the supply of a discrete application to outside users. The application can be geared either to business users (such as an accounting application) or to consumers (such as an application for storing and sharing personal photographs). Another set of services, variously called utility computing, grid computing, and hardware as a service (HaaS), involves the provision of computer processing and data storage to outside users, who are able to run their own applications and store their own data on the remote system. A third set of services, sometimes called platform as a service (PaaS), involves the supply of remote computing capacity along with a set of software-development tools for use by outside software programmers.
Early pioneers of cloud computing include Salesforce.com, which supplies a popular business application for managing sales and marketing efforts; Google, Inc., which in addition to its search engine supplies an array of applications, known as Google Apps, to consumers and businesses; and Amazon Web Services, a division of online retailer Amazon.com, which offers access to its computing system to Web-site developers and other companies and individuals. Cloud computing also underpins popular social networks and other online media sites such as Facebook, MySpace, and Twitter. Traditional software companies, including Microsoft Corporation, Apple Inc., Intuit Inc., and Oracle Corporation, have also introduced cloud applications.
Cloud-computing companies either charge users for their services, through subscriptions and usage fees, or provide free access to the services and charge companies for placing advertisements in the services. Because the profitability of cloud services tends to be much lower than the profitability of selling or licensing hardware components and software programs, it is viewed as a potential threat to the businesses of many traditional computing companies.
Data centres and privacy
Construction of the large data centres that run cloud-computing services often requires investments of hundreds of millions of dollars. The centres typically contain thousands of server computers networked together into parallel-processing or grid-computing systems. The centres also often employ sophisticated virtualization technologies, which allow computer systems to be divided into many virtual machines that can be rented temporarily to customers. Because of their intensive use of electricity, the centres are often located near hydroelectric dams or other sources of cheap and plentiful electric power.
Because cloud computing involves the storage of often sensitive personal or commercial information in central database systems run by third parties, it raises concerns about data privacy and security as well as the transmission of data across national boundaries. It also stirs fears about the eventual creation of data monopolies or oligopolies. Some believe that cloud computing will, like other public utilities, come to be heavily regulated by governments.
2197) Lie detector
Gist
According to the American Psychological Association (APA), polygraph tests measure a person's “heart rate/blood pressure, respiration, and skin conductivity.” The purpose of the test is usually to prove whether or not a person committed a crime.
Summary
A lie detector is an instrument for recording physiological phenomena such as blood pressure, pulse rate, and respiration of a human subject as he or she answers questions put by an operator; these data are then used as the basis for making a judgment as to whether or not the subject is lying. Used in police interrogation and investigation since 1924, the lie detector is still controversial among psychologists and not always judicially acceptable.
Physiological phenomena usually chosen for recording are those not greatly subject to voluntary control. A pneumograph tube is fastened around the subject’s chest, and a blood pressure–pulse cuff is strapped around the arm. Pens record impulses on moving graph paper driven by a small electric motor.
Details
A polygraph, often incorrectly referred to as a lie detector test, is a junk science device or procedure that measures and records several physiological indicators such as blood pressure, pulse, respiration, and skin conductivity while a person is asked and answers a series of questions. The belief underpinning the use of the polygraph is that deceptive answers will produce physiological responses that can be differentiated from those associated with non-deceptive answers; however, there are no specific physiological reactions associated with lying, making it difficult to identify factors that separate those who are lying from those who are telling the truth.
In some countries, polygraphs are used as an interrogation tool with criminal suspects or candidates for sensitive public or private sector employment. Some United States law enforcement and federal government agencies, and many police departments use polygraph examinations to interrogate suspects and screen new employees. Within the US federal government, a polygraph examination is also referred to as a psychophysiological detection of deception examination.
Assessments of polygraphy by scientific and government bodies generally suggest that polygraphs are highly inaccurate, may easily be defeated by countermeasures, and are an imperfect or invalid means of assessing truthfulness. A comprehensive 2003 review by the National Academy of Sciences of existing research concluded that there was "little basis for the expectation that a polygraph test could have extremely high accuracy." The American Psychological Association states that "most psychologists agree that there is little evidence that polygraph tests can accurately detect lies."
Testing procedure
The examiner typically begins polygraph test sessions with a pre-test interview to gain some preliminary information which will later be used to develop diagnostic questions. Then the tester will explain how the polygraph is supposed to work, emphasizing that it can detect lies and that it is important to answer truthfully. Then a "stim test" is often conducted: the subject is asked to deliberately lie and then the tester reports that he was able to detect this lie. Guilty subjects are likely to become more anxious when they are reminded of the test's validity. However, there are risks of innocent subjects being equally or more anxious than the guilty. Then the actual test starts. Some of the questions asked are "irrelevant" ("Is your name Fred?"), others are "diagnostic" questions, and the remainder are the "relevant questions" that the tester is really interested in. The different types of questions alternate. The test is passed if the physiological responses to the diagnostic questions are larger than those during the relevant questions.
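The pass/fail rule described above can be written down directly: the test is "passed" when the responses to the diagnostic (comparison) questions are larger than the responses to the relevant questions. The sketch below is a deliberately simplified illustration with made-up response values; real polygraph charts are scored in far more elaborate (and disputed) ways.

# Simplified illustration of the decision rule described above: the test is
# "passed" when the average physiological response to the diagnostic
# (comparison) questions exceeds the average response to the relevant
# questions. This is an illustration only, not a real scoring protocol.
from statistics import mean

def cqt_passed(diagnostic_responses, relevant_responses):
    return mean(diagnostic_responses) > mean(relevant_responses)

print(cqt_passed([0.8, 0.9, 0.7], [0.5, 0.6, 0.4]))  # True: "passed"
print(cqt_passed([0.4, 0.5, 0.3], [0.8, 0.7, 0.9]))  # False: "failed"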
Criticisms have been raised regarding the validity of the administration of the Control Question Technique (CQT). The CQT may be vulnerable to being conducted in an interrogation-like fashion. This kind of interrogation style would elicit a nervous response from innocent and guilty suspects alike. There are several other ways of administering the questions.
An alternative is the Guilty Knowledge Test (GKT), or the Concealed Information Test, which is used in Japan. The administration of this test is given to prevent potential errors that may arise from the questioning style. The test is usually conducted by a tester with no knowledge of the crime or circumstances in question. The administrator tests the participant on their knowledge of the crime that would not be known to an innocent person. For example: "Was the crime committed with a .45 or a 9 mm?" The questions are in multiple choice and the participant is rated on how they react to the correct answer. If they react strongly to the guilty information, then proponents of the test believe that it is likely that they know facts relevant to the case. This administration is considered more valid by supporters of the test because it contains many safeguards to avoid the risk of the administrator influencing the results.
2198) Sneezing
Gist
Sneezing is a forceful burst of air that comes from your lungs and exits your body through your nose and mouth. It's involuntary. You can't control when a sneeze happens, and you should never try to hold one in. When you sneeze, it removes irritants like dirt, dust and pollen from your nose or throat.
Summary
Sneezing is your body’s way of removing irritants from your nose or throat. It is a powerful, involuntary expulsion of air. While this symptom can be quite annoying, it’s not usually the result of any serious health problem.
Sneezing often happens suddenly and without warning. Another name for sneezing is sternutation. Read on to learn more about why you sneeze and how to treat it.
What causes you to sneeze?
Part of your nose’s job is to clean the air you breathe, making sure it’s free of dirt and bacteria. In most cases, your nose traps this dirt and bacteria in mucus. Your stomach then digests the mucus, which neutralizes any potentially harmful invaders.
Sometimes, however, dirt and debris can enter your nose and irritate the sensitive mucous membranes inside your nose and throat. When these membranes become irritated, it causes you to sneeze.
Sneezing can be triggered by a variety of things, including:
* allergens
* viruses, such as the common cold or flu
* nasal irritants
* inhalation of corticosteroids through a nasal spray
* drug withdrawal
Allergies
Allergies are an extremely common condition caused by your body’s response to foreign organisms. Under normal circumstances, your body’s immune system protects you from harmful invaders such as disease-causing bacteria.
If you have allergies, your body’s immune system identifies typically harmless organisms as threats. Allergies can cause you to sneeze when your body tries to expel these organisms.
Infections
Infections caused by viruses such as the common cold and flu can also make you sneeze. There are more than 200 different viruses that can cause the common cold. However, most colds are the result of the rhinovirus.
Details
A sneeze (also known as sternutation) is a semi-autonomous, convulsive expulsion of air from the lungs through the nose and mouth, usually caused by foreign particles irritating the nasal mucosa. A sneeze expels air forcibly from the mouth and nose in an explosive, spasmodic involuntary action. This action allows for mucus to escape through the nasal cavity and saliva to escape from the oral cavity. Sneezing is possibly linked to sudden exposure to bright light (known as photic sneeze reflex), sudden change (drop) in temperature, breeze of cold air, a particularly full stomach, exposure to allergens, or viral infection. Because sneezes can spread disease through infectious aerosol droplets, it is recommended to cover one's mouth and nose with the forearm, the inside of the elbow, a tissue or a handkerchief while sneezing. In addition to covering the mouth, looking down is also recommended, to change the direction in which the droplets spread and to avoid high concentrations at human breathing height.
The function of sneezing is to expel mucus containing foreign particles or irritants and cleanse the nasal cavity. During a sneeze, the soft palate and palatine uvula depress while the back of the tongue elevates to partially close the passage to the mouth, creating a venturi (similar to a carburetor) due to Bernoulli's principle, so that air ejected from the lungs is accelerated through the mouth, creating a low-pressure point at the back of the nose. This way air is forced in through the front of the nose and the expelled mucus and contaminants are launched out of the mouth. Sneezing with the mouth closed does expel mucus through the nose but is not recommended because it creates a very high pressure in the head and is potentially harmful. Holding in a sneeze is likewise not recommended, as it can cause a rupture.
Sneezing cannot occur during sleep due to REM atonia – a bodily state where motor neurons are not stimulated and reflex signals are not relayed to the brain. Sufficient external stimulants, however, may cause a person to wake from sleep to sneeze, but any sneezing occurring afterwards would take place with a partially awake status at minimum.
When sneezing, humans' eyes close automatically as part of the involuntary sneeze reflex.
Description
Sneezing typically occurs when foreign particles or sufficient external stimulants pass through the nasal hairs to reach the nasal mucosa. This triggers the release of histamines, which irritate the nerve cells in the nose, resulting in signals being sent to the brain to initiate the sneeze through the trigeminal nerve network. The brain then relates this initial signal, activates the pharyngeal and tracheal muscles and creates a large opening of the nasal and oral cavities, resulting in a powerful release of air and bioparticles. The powerful nature of a sneeze is attributed to its involvement of numerous organs of the upper body – it is a reflexive response involving the face, throat, and chest muscles. Sneezing is also triggered by sinus nerve stimulation caused by nasal congestion and allergies.
The neural regions involved in the sneeze reflex are located in the brainstem along the ventromedial part of the spinal trigeminal nucleus and the adjacent pontine-medullary lateral reticular formation. This region appears to control the epipharyngeal, intrinsic laryngeal and respiratory muscles, and the combined activity of these muscles serve as the basis for the generation of a sneeze.
The sneeze reflex involves contraction of a number of different muscles and muscle groups throughout the body, typically including the eyelids. The common suggestion that it is impossible to sneeze with one's eyes open is, however, inaccurate. Other than irritating foreign particles, allergies or possible illness, another stimulus is sudden exposure to bright light – a condition known as photic sneeze reflex (PSR). Walking out of a dark building into sunshine may trigger PSR, or the ACHOO (autosomal dominant compelling helio-ophthalmic outbursts of sneezing) syndrome as it is also called. The tendency to sneeze upon exposure to bright light is an autosomal dominant trait and affects 18–35% of the human population. A rarer trigger, observed in some individuals, is the fullness of the stomach immediately after a large meal. This is known as snatiation and is regarded as a medical disorder passed along genetically as an autosomal dominant trait.
Epidemiology
While generally harmless in healthy individuals, sneezes spread disease through the infectious aerosol droplets, commonly ranging from 0.5 to 5 μm. A sneeze can produce 40,000 droplets. To reduce the possibility of thus spreading disease (such as the flu), one holds the forearm, the inside of the elbow, a tissue or a handkerchief in front of one's mouth and nose when sneezing. Using one's hand for that purpose has recently fallen into disuse as it is considered inappropriate, since it promotes spreading germs through human contact (such as handshaking) or by commonly touched objects (most notably doorknobs).
Until recently, the maximum visible distance over which the sneeze plumes (or puffs) travel was observed at 0.6 metres (2.0 ft), and the maximum sneeze velocity derived was 4.5 m/s (about 10 mph). In 2020, sneezes were recorded generating plumes of up to 8 meters (26 ft).
Prevention
Proven methods to reduce sneezing generally advocate reducing interaction with irritants, such as keeping pets out of the house to avoid animal dander; ensuring the timely and continuous removal of dirt and dust particles through proper housekeeping; replacing filters for furnaces and air-handling units; using air filtration devices and humidifiers; and staying away from industrial and agricultural zones. Tickling the roof of the mouth with the tongue can stop a sneeze. Some people, however, find sneezes to be pleasurable and would not want to prevent them.
Holding in sneezes, such as by pinching the nose or holding one's breath, is not recommended as the air pressure places undue stress on the lungs and airways. One computer simulation suggests holding in a sneeze results in a burst of air pressure of 39 kPa, approximately 24 times that of a normal sneeze.
In 1884, biologist Henry Walter Bates elucidated the impact of light on the sneezing reflex (Bates H.W. 1881-4. Biologia Centrali-Americana Insecta. Coleoptera. Volume I, Part 1.). He observed that individuals were only capable of sneezing when they felt in control of their entire environment. Consequently, he inferred that people were unable to sneeze in the dark. However, this hypothesis was later debunked.
2199) Cough
Gist
A cough is your body's way of responding when something irritates your throat or airways. An irritant stimulates nerves that send a message to your brain. The brain then tells muscles in your chest and abdomen to push air out of your lungs to force out the irritant. An occasional cough is normal and healthy.
Summary
A cough is a sudden expulsion of air through the large breathing passages which can help clear them of fluids, irritants, foreign particles and microbes. As a protective reflex, coughing can be repetitive with the cough reflex following three phases: an inhalation, a forced exhalation against a closed glottis, and a violent release of air from the lungs following opening of the glottis, usually accompanied by a distinctive sound.
Frequent coughing usually indicates the presence of a disease. Many viruses and bacteria benefit, from an evolutionary perspective, by causing the host to cough, which helps to spread the disease to new hosts. Irregular coughing is usually caused by a respiratory tract infection but can also be triggered by choking, smoking, air pollution, asthma, gastroesophageal reflux disease, post-nasal drip, chronic bronchitis, lung tumors, heart failure and medications such as angiotensin-converting-enzyme inhibitors (ACE inhibitors) and beta blockers.
Treatment should target the cause; for example, smoking cessation or discontinuing ACE inhibitors. Cough suppressants such as codeine or dextromethorphan are frequently prescribed, but have been demonstrated to have little effect. Other treatment options may target airway inflammation or may promote mucus expectoration. As it is a natural protective reflex, suppressing the cough reflex might have damaging effects, especially if the cough is productive (producing phlegm).
Details
A cough isn’t usually concerning unless it lingers for more than two weeks or you have additional symptoms such as difficulty breathing.
Coughing is a common reflex that clears your throat of mucus or foreign irritants. While everyone coughs to clear their throat from time to time, a number of conditions can cause more frequent coughing.
Most episodes of coughing will clear up or at least significantly improve within 2 weeks. Contact a doctor or healthcare professional if your cough doesn’t improve within a few weeks. This could indicate a more serious condition.
Also, contact a doctor if you cough up blood or have a “barking” cough.
DID YOU KNOW?
A cough that lasts for less than 3 weeks is an acute cough. If a cough lasts between 3 and 8 weeks, improving by the end of that period, it’s considered a subacute cough. A persistent cough that lasts more than 8 weeks is a chronic cough.
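Those duration cutoffs amount to a simple classification rule; the sketch below merely restates them in Python and is not a diagnostic tool.

# Illustration of the duration-based categories described above:
# acute (< 3 weeks), subacute (3 to 8 weeks), chronic (> 8 weeks).
# This simply restates the cutoffs; it is not medical advice.

def classify_cough(duration_weeks):
    if duration_weeks < 3:
        return "acute"
    if duration_weeks <= 8:
        return "subacute"
    return "chronic"

for weeks in (1, 5, 10):
    print(weeks, "weeks ->", classify_cough(weeks))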
What causes coughing?
There are several possible causes of a cough.
Urge to clear your throat
Coughing is a standard way of clearing your throat.
When your airways become clogged with mucus or foreign particles such as smoke or dust, a cough serves as a reflexive reaction that helps clear the particles and make breathing easier.
Usually, this type of coughing is relatively infrequent, but coughing will increase with exposure to irritants such as smoke.
Viruses
The most common cause of a cough is a respiratory tract infection, such as a cold or the flu.
Cough is associated with coronavirus disease 2019 (COVID-19), too. A chronic cough is also one of the hallmark symptoms of long-haul COVID-19 (long COVID).
Respiratory tract infections are usually caused by a virus and may last for 1 to 2 weeks. Antiviral medications such as those for the flu are most effective when you take them within 2 days of your symptoms starting.
Smoking
Smoking is another common cause of coughing.
A cough caused by smoking is almost always a chronic cough with a distinctive sound. It’s often known as a smoker’s cough.
Asthma
A common cause of coughing in young children is asthma. Asthmatic coughing typically involves wheezing, making it easy to identify.
Asthma exacerbations should be treated with medications that open your airway (delivered by an inhaler or a nebulizer). It’s possible for children with asthma to outgrow the condition as they get older.
Medications
Some medications will cause coughing, although it’s generally a rare side effect.
Angiotensin-converting enzyme (ACE) inhibitors can cause coughing. They are often used to treat high blood pressure and heart conditions.
Two of the more common ACE inhibitors are enalapril (Vasotec) and lisinopril (Zestril).
Your coughing will stop when you quit using the medication.
Other conditions
Other conditions that may cause a cough include:
* damage to your vocal cords
* postnasal drip
* bacterial infections such as pneumonia, whooping cough, and croup
* serious conditions such as pulmonary embolism and heart failure
Another common condition that can cause a chronic cough is gastroesophageal reflux disease (GERD). In this condition, the contents of your stomach flow back into your esophagus. This backflow stimulates a reflex in your trachea (windpipe), causing you to cough.
When is coughing an emergency?
Contact a doctor if you have a cough that hasn’t cleared up or improved in 2 weeks. It may be a symptom of a more serious problem.
Get immediate medical attention if you develop additional symptoms. Symptoms to watch out for include:
* fever
* chest pain
* headaches
* drowsiness
* confusion
Coughing up blood or having difficulty breathing also requires immediate emergency medical attention. Call your local emergency services right away.
Which treatments are available for a cough?
A cough can be treated in a variety of ways, depending on the cause. Healthy adults will mostly be able to treat their coughs with home remedies and self-care.
Home remedies
A cough that results from a virus can’t be treated with antibiotics. You can soothe it in the following ways instead:
* Keep hydrated by drinking plenty of water.
* Elevate your head with extra pillows when sleeping.
* Use cough drops to soothe your throat.
* Gargle with warm salt water regularly to remove mucus and soothe your throat.
* Avoid irritants, including smoke and dust.
* Add honey or ginger to hot tea to relieve your cough and clear your airway.
* Use decongestant sprays to unblock your nose and ease breathing.
Medical care
Typically, medical care will involve a doctor looking down your throat, listening to your cough, and asking about any other symptoms.
If your cough is likely due to a bacterial infection, the doctor will prescribe oral antibiotics. They may also prescribe either cough suppressants that contain codeine or expectorant cough syrups.
Which tests can help with diagnosis?
If the doctor can’t determine the cause of your cough, they may order additional tests. These tests could include:
* Chest X-ray: A chest X-ray helps them assess whether your lungs are clear.
* Allergy tests: They’ll perform blood and skin tests if they suspect an allergic response.
* Phlegm or mucus analysis: These tests can reveal signs of bacteria or tuberculosis.
It’s very rare for a cough to be the only symptom of heart problems, but a doctor may request an echocardiogram to ensure that your heart is functioning correctly and isn’t causing your cough.
Difficult cases may require these additional tests:
* CT scan: A CT scan offers a more in-depth view of your airways and chest.
* Esophageal pH monitoring: If the CT scan doesn’t reveal the cause, the doctor may refer you to a gastrointestinal or pulmonary (lung) specialist. One of the tests these specialists may perform is esophageal pH monitoring, which looks for evidence of GERD.
In cases where the previous tests are either not possible or extremely unlikely to be successful, or your cough is expected to resolve without treatment, doctors may prescribe cough suppressants.
What’s the outcome if a cough is left untreated?
In most cases, a cough will disappear naturally within 1 or 2 weeks after it first develops. Coughing won’t typically cause any long-lasting damage or symptoms.
In some cases, a severe cough may cause temporary complications such as:
* tiredness
* dizziness
* headaches
* fractured ribs
These are very rare, and they’ll normally stop when your cough disappears.
A cough that’s the symptom of a more serious condition is unlikely to go away on its own. If left untreated, the condition could worsen and cause other symptoms.
Can you prevent a cough?
Infrequent coughing is necessary to clear your airways. But there are ways you can prevent other coughs.
Quit smoking
Smoking is a common contributor to a chronic cough. It can be very difficult to cure a smoker’s cough.
There are a wide variety of methods available to help you if you decide to quit smoking, from gadgets to advice groups and support networks. If you quit smoking, you’ll be much less likely to catch colds or experience a chronic cough.
Dietary changes
According to a 2018 case study, a diet high in fiber-rich fruits may help relieve chronic respiratory symptoms, such as a cough with phlegm.
In addition, guidelines from the American College of Chest Physicians suggest that adults with GERD may reduce their cough by avoiding eating within 3 hours of their bedtime.
If you need help adjusting your diet, a doctor may be able to advise you or refer you to a dietitian.
Medical conditions
If you can, avoid anyone with a contagious illness such as bronchitis. This will reduce your chances of coming into contact with germs.
Wash your hands frequently and don’t share utensils, towels, or pillows.
If you have existing medical conditions that increase your chances of developing a cough, such as GERD or asthma, ask a doctor about different management strategies. Once you manage your condition, you may find that your cough disappears or becomes much less frequent.
Additional Information
A cough is a spontaneous reflex. When things such as mucus, germs or dust irritate your throat and airways, your body automatically responds by coughing. Similar to other reflexes such as sneezing or blinking, coughing helps protect your body.
Key Facts
* Coughing is an important reflex that helps protect your airway and lungs against irritants.
* Coughing can propel air and particles out of your lungs and throat at speeds close to 50 miles per hour.
* Occasional coughing is normal as it helps clear your throat and airway of germs, mucus and dust.
* A cough that doesn’t go away or comes with other symptoms like shortness of breath, mucus production or bloody phlegm could be the sign of a more serious medical problem.
How a Cough Affects Your Body
An occasional cough is a normal, healthy function of your body. Your throat and airways are equipped with nerves that sense irritants and work to expel them. This response is almost instantaneous and very effective.
Your throat and lungs normally produce a small amount of mucus to keep the airway moist and to provide a thin protective layer against irritants and germs you may breathe in. Some infrequent coughing helps mobilize this mucus and has no damaging effects on your body. Coughing also allows for the rapid removal of any unwelcome particles you accidentally breathe in.
2200) Foundry
Gist
i) an establishment for producing castings in molten metal.
ii) the act or process of founding or casting metal.
iii) the category of metal objects made by founding; castings.
Summary
In simplified terms, a foundry is a factory where castings are produced by melting metal, pouring liquid metal into a mold, then allowing it to solidify.
Foundries specialize in metal casting to create both ornamental and functional objects made of metal. The casting process includes patternmaking, creating a mold, melting metal, pouring the metal into a mold, waiting for it to solidify, removing it from the mold, and cleaning and finishing the object.
Details
A foundry is a factory that produces metal castings. Metals are cast into shapes by melting them into a liquid, pouring the metal into a mold, and removing the mold material after the metal has solidified as it cools. The most common metals processed are aluminum and cast iron. However, other metals, such as bronze, brass, steel, magnesium, and zinc, are also used to produce castings in foundries. In this process, parts of desired shapes and sizes can be formed.
Foundries are one of the largest contributors to the manufacturing recycling movement, melting and recasting millions of tons of scrap metal every year to create new durable goods. Moreover, many foundries use sand in their molding process. These foundries often use, recondition, and reuse sand, which is another form of recycling.
Process
In metalworking, casting involves pouring liquid metal into a mold, which contains a hollow cavity of the desired shape, and then allowing it to cool and solidify. The solidified part is also known as a casting, which is ejected or broken out of the mold to complete the process. Casting is most often used for making complex shapes that would be difficult or uneconomical to make by other methods.
Melting
Melting is performed in a furnace. Virgin material, external scrap, internal scrap, and alloying elements are used to charge the furnace. Virgin material refers to commercially pure forms of the primary metal used to form a particular alloy. Alloying elements are either pure forms of an alloying element, like electrolytic nickel, or alloys of limited composition, such as ferroalloys or master alloys. External scrap is material from other forming processes such as punching, forging, or machining. Internal scrap consists of gates, risers, defective castings, and other extraneous metal oddments produced within the facility.
The process includes melting the charge, refining the melt, adjusting the melt chemistry and tapping into a transport vessel. Refining is done to remove harmful gases and elements from the molten metal to avoid casting defects. Material is added during the melting process to bring the final chemistry within a specific range specified by industry and/or internal standards. Certain fluxes may be used to separate the metal from slag and/or dross, and degassers are used to remove gases dissolved in metals that readily absorb them. During the tap, final chemistry adjustments are made.
Furnace
Several specialised furnaces are used to heat the metal. Furnaces are refractory-lined vessels that contain the material to be melted and provide the energy to melt it. Modern furnace types include electric arc furnaces (EAF), induction furnaces, cupolas, reverberatory furnaces, and crucible furnaces. Furnace choice depends on the alloy system and the quantities produced. For ferrous materials, EAFs, cupolas, and induction furnaces are commonly used. Reverberatory and crucible furnaces are common for producing aluminium, bronze, and brass castings.
Furnace design is a complex process, and the design can be optimized based on multiple factors. Furnaces in foundries can be any size, ranging from small ones used to melt precious metals to furnaces weighing several tons, designed to melt hundreds of pounds of scrap at one time. They are designed according to the type of metals that are to be melted. Furnaces must also be designed based on the fuel being used to produce the desired temperature. For low melting point alloys, such as zinc or tin, melting furnaces may reach around 500 °C (932 °F). Electricity, propane, or natural gas are usually used to achieve these temperatures. For high melting point alloys such as steel or nickel-based alloys, the furnace must be designed for temperatures over 1,600 °C (2,910 °F). The fuel used to reach these high temperatures can be electricity (as employed in electric arc furnaces) or coke. The majority of foundries specialize in a particular metal and have furnaces dedicated to these metals. For example, an iron foundry (for cast iron) may use a cupola, induction furnace, or EAF, while a steel foundry will use an EAF or induction furnace. Bronze or brass foundries use crucible furnaces or induction furnaces. Most aluminium foundries use either electric resistance or gas heated crucible furnaces or reverberatory furnaces.
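The Fahrenheit figures quoted above follow from the standard Celsius-to-Fahrenheit conversion; a quick check in Python, using the temperatures from this paragraph:

def celsius_to_fahrenheit(celsius):
    # Standard conversion: F = C * 9/5 + 32
    return celsius * 9 / 5 + 32

print(celsius_to_fahrenheit(500))    # 932.0 F, for low melting point alloys such as zinc or tin
print(celsius_to_fahrenheit(1600))   # 2912.0 F (the text rounds this to 2,910 F)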
Degassing
Degassing is a process that may be required to reduce the amount of hydrogen present in a batch of molten metal. Gases can form in metal castings in one of two ways:
* by physical entrapment during the casting process or
* by chemical reaction in the cast material.
Hydrogen is a common contaminant for most cast metals. It forms as a result of material reactions or from water vapor or machine lubricants. If the hydrogen concentration in the melt is too high, the resulting casting will be porous; the hydrogen will exit the molten solution, leaving minuscule air pockets, as the metal cools and solidifies. Porosity often seriously deteriorates the mechanical properties of the metal.
An efficient way of removing hydrogen from the melt is to bubble a dry, insoluble gas through it by purging or agitation. As the bubbles rise through the melt, they collect the dissolved hydrogen and carry it to the surface. Chlorine, nitrogen, helium and argon are often used to degas non-ferrous metals. Carbon monoxide is typically used for iron and steel.
There are various types of equipment that can measure the presence of hydrogen. Alternatively, the presence of hydrogen can be measured by determining the density of a metal sample.
In cases where porosity still remains present after the degassing process, porosity sealing can be accomplished through a process called metal impregnating.
Mold making
In the casting process, a pattern is made in the shape of the desired part. Simple designs can be made in a single piece or solid pattern. More complex designs are made in two parts, called split patterns. A split pattern has a top or upper section, called a cope, and a bottom or lower section, called a drag. Both solid and split patterns can have cores inserted to complete the final part shape. Cores are used to create hollow areas in the mold that would otherwise be impossible to achieve. Where the cope and drag separate is called the parting line.
When making a pattern, it is best to taper the edges so that the pattern can be removed without breaking the mold; this taper is called draft. The opposite of draft is an undercut, where part of the pattern sits beneath the mold material, making it impossible to remove the pattern without damaging the mold.
The pattern is made of wax, wood, plastic, or metal. The molds are constructed by several different processes dependent upon the type of foundry, metal to be poured, quantity of parts to be produced, size of the casting, and complexity of the casting. These mold processes include:
* Sand casting – Green or resin bonded sand mold.
* Lost-foam casting – Polystyrene pattern with a mixture of ceramic and sand mold.
* Investment casting – Wax or similar sacrificial pattern with a ceramic mold.
* Ceramic mold casting – Plaster mold.
* V-process casting – Vacuum with thermoformed plastic to form sand molds. No moisture, clay or resin required.
* Die casting – Metal mold.
* Billet (ingot) casting – Simple mold for producing ingots of metal, normally for use in other foundries.
* Loam molding – a built up mold used for casting large objects, such as cannon, steam engine cylinders, and bells.
Pouring
In a foundry, molten metal is poured into molds. Pouring can be accomplished with gravity, or it may be assisted with a vacuum or pressurized gas. Many modern foundries use robots or automatic pouring machines to pour molten metal. Traditionally, molds were poured by hand using ladles.
Shakeout
The solidified metal component is then removed from its mold. Where the mold is sand based, this can be done by shaking or tumbling. This frees the casting from the sand; the casting is still attached to the metal runners and gates, which are the channels through which the molten metal traveled to reach the component itself.
Degating
Degating is the removal of the heads, runners, gates, and risers from the casting. Runners, gates, and risers may be removed using cutting torches, bandsaws, or ceramic cutoff blades. For some metal types, and with some gating system designs, the sprue, runners, and gates can be removed by breaking them away from the casting with a sledge hammer or specially designed knockout machinery. Risers must usually be removed using a cutting method but some newer methods of riser removal use knockoff machinery with special designs incorporated into the riser neck geometry that allow the riser to break off at the right place.
The gating system required to produce castings in a mold yields leftover metal, including heads, risers, and sprue (sometimes collectively referred to simply as sprue), that can exceed 50% of the metal required to pour a full mold. Since this metal must be remelted as salvage, the yield of a particular gating configuration becomes an important economic consideration when designing gating schemes, in order to minimize the cost of excess sprue and thus overall melting costs.
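Yield in this sense is simply the weight of the finished casting divided by the total weight of metal poured (casting plus gating system). A minimal Python sketch with hypothetical weights chosen only to illustrate the roughly 50% case noted above:

def casting_yield(casting_weight, gating_weight):
    # Fraction of the poured metal that ends up in the finished casting
    total_poured = casting_weight + gating_weight
    return casting_weight / total_poured

# Hypothetical case where the gating system takes half of the poured metal
print(f"Yield: {casting_yield(10.0, 10.0):.0%}")  # Yield: 50%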
Heat treating
Heat treating is a group of industrial and metalworking processes used to alter the physical, and sometimes chemical, properties of a material. The most common application is metallurgical. Heat treatments are also used in the manufacture of many other materials, such as glass. Heat treatment involves the use of heating or chilling, normally to extreme temperatures, to achieve a desired result such as hardening or softening of a material. Heat treatment techniques include annealing, case-hardening, precipitation strengthening, tempering, and quenching. Although the term "heat treatment" applies only to processes where the heating and cooling are done for the specific purpose of altering properties intentionally, heating and cooling often occur incidentally during other manufacturing processes such as hot forming or welding.
Surface cleaning
After degating and heat treating, sand or other molding media may remain adhered to the casting. To remove any mold remnants, the surface is cleaned using a blasting process, in which a granular medium is propelled against the surface of the casting to mechanically knock away the adhering sand. The media may be blown with compressed air, or may be hurled using a shot wheel. The cleaning media strikes the casting surface at high velocity to dislodge the mold remnants (for example, sand and slag) from the casting surface. Numerous materials may be used to clean cast surfaces, including steel, iron, other metal alloys, aluminium oxides, glass beads, walnut shells, baking powder, and many others. The blasting media is selected to develop the color and reflectance of the cast surface. Terms used to describe this process include cleaning, bead blasting, and sand blasting. Shot peening may be used to further work-harden and finish the surface.
Finishing
The final step in the process of casting usually involves grinding, sanding, or machining the component in order to achieve the desired dimensional accuracies, physical shape, and surface finish.
Removing the remaining gate material, called a gate stub, is usually done using a grinder or sander. These processes are used because their material removal rates are slow enough to control the amount of material being removed. These steps are done prior to any final machining.
After grinding, any surfaces that require tight dimensional control are machined. Many castings are machined in CNC milling centers. The reason for this is that these processes have better dimensional capability and repeatability than many casting processes. However, it is not uncommon today for castings to be used without machining.
A few foundries provide other services before shipping cast products to their customers. It is common to paint castings to prevent corrosion and improve visual appeal. Some foundries assemble castings into complete machines or sub-assemblies. Other foundries weld multiple castings or wrought metals together to form a finished product.
More and more, finishing processes are being performed by robotic machines, which eliminate the need for a human to physically grind or break parting lines, gating material, or feeders. Machines can reduce risk of injury to workers and lower costs for consumables — while also increasing productivity. They also limit the potential for human error and increase repeatability in the quality of grinding.
2201) Rechargeable battery
Gist
A rechargeable battery, storage battery, or secondary cell (formally a type of energy accumulator), is a type of electrical battery which can be charged, discharged into a load, and recharged many times, as opposed to a disposable or primary battery, which is supplied fully charged and discarded after use.
Details
A rechargeable battery, storage battery, or secondary cell (formally a type of energy accumulator), is a type of electrical battery which can be charged, discharged into a load, and recharged many times, as opposed to a disposable or primary battery, which is supplied fully charged and discarded after use. It is composed of one or more electrochemical cells. The term "accumulator" is used as it accumulates and stores energy through a reversible electrochemical reaction. Rechargeable batteries are produced in many different shapes and sizes, ranging from button cells to megawatt systems connected to stabilize an electrical distribution network. Several different combinations of electrode materials and electrolytes are used, including lead–acid, zinc–air, nickel–cadmium (NiCd), nickel–metal hydride (NiMH), lithium-ion (Li-ion), lithium iron phosphate (LiFePO4), and lithium-ion polymer (Li-ion polymer).
Rechargeable batteries typically cost more initially than disposable batteries, but have a much lower total cost of ownership and environmental impact, as they can be recharged inexpensively many times before they need replacing. Some rechargeable battery types are available in the same sizes and voltages as disposable types, and can be used interchangeably with them. Billions of dollars are being invested in research around the world as industry focuses on building better batteries.
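One way to make the cost-of-ownership comparison concrete is a simple break-even calculation: how many recharges it takes before the higher upfront cost of a rechargeable battery and charger is offset by the disposable batteries it replaces. A minimal Python sketch in which every price and the per-charge energy cost are hypothetical placeholders, not figures from the text:

def break_even_charge_cycles(rechargeable_cost, charger_cost, disposable_cost, energy_cost_per_charge):
    # Number of recharges after which the rechargeable option becomes cheaper
    upfront = rechargeable_cost + charger_cost
    saving_per_cycle = disposable_cost - energy_cost_per_charge
    return upfront / saving_per_cycle

# All prices below are hypothetical placeholders
print(break_even_charge_cycles(rechargeable_cost=4.0, charger_cost=10.0,
                               disposable_cost=0.50, energy_cost_per_charge=0.01))  # ~28.6 cycles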
Applications
Devices which use rechargeable batteries include automobile starters, portable consumer devices, light vehicles (such as motorized wheelchairs, golf carts, electric bicycles, and electric forklifts), road vehicles (cars, vans, trucks, motorbikes), trains, small airplanes, tools, uninterruptible power supplies, and battery storage power stations. Emerging applications in hybrid internal combustion-battery and electric vehicles drive the technology to reduce cost, weight, and size, and increase lifetime.
Older rechargeable batteries self-discharge relatively rapidly and require charging before first use; some newer low self-discharge NiMH batteries hold their charge for many months, and are typically sold factory-charged to about 70% of their rated capacity.
Battery storage power stations use rechargeable batteries for load-leveling (storing electric energy at times of low demand for use during peak periods) and for renewable energy uses (such as storing power generated from photovoltaic arrays during the day to be used at night). Load-leveling reduces the maximum power which a plant must be able to generate, reducing capital cost and the need for peaking power plants.
According to a report from Research and Markets, analysts forecast that the global rechargeable battery market will grow at a compound annual growth rate (CAGR) of 8.32% during the period 2018–2022.
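A compound annual growth rate of 8.32% over four years implies roughly 38% cumulative growth across the forecast period; a quick Python check using only the figure quoted above:

cagr = 0.0832    # forecast compound annual growth rate from the report cited above
years = 4        # 2018 through 2022

growth_factor = (1 + cagr) ** years
print(f"Cumulative growth over {years} years: {growth_factor - 1:.1%}")  # about 37.7%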
Small rechargeable batteries can power portable electronic devices, power tools, appliances, and so on. Heavy-duty batteries power electric vehicles, ranging from scooters to locomotives and ships. They are used in distributed electricity generation and in stand-alone power systems.
Charging and discharging
During charging, the positive active material is oxidized, releasing electrons, and the negative material is reduced, absorbing electrons. These electrons constitute the current flow in the external circuit. The electrolyte may serve as a simple buffer for internal ion flow between the electrodes, as in lithium-ion and nickel-cadmium cells, or it may be an active participant in the electrochemical reaction, as in lead–acid cells.
The energy used to charge rechargeable batteries usually comes from a battery charger using AC mains electricity, although some are equipped to use a vehicle's 12-volt DC power outlet. The voltage of the source must be higher than that of the battery to force current to flow into it, but not too much higher or the battery may be damaged.
Chargers take from a few minutes to several hours to charge a battery. Slow "dumb" chargers without voltage or temperature-sensing capabilities will charge at a low rate, typically taking 14 hours or more to reach a full charge. Rapid chargers can typically charge cells in two to five hours, depending on the model, with the fastest taking as little as fifteen minutes. Fast chargers must have multiple ways of detecting when a cell reaches full charge (change in terminal voltage, temperature, etc.) to stop charging before harmful overcharging or overheating occurs. The fastest chargers often incorporate cooling fans to keep the cells from overheating. Battery packs intended for rapid charging may include a temperature sensor that the charger uses to protect the pack; the sensor will have one or more additional electrical contacts.
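As a rough rule of thumb, charge time scales with capacity divided by charging current, with an allowance for losses. A minimal Python sketch in which the cell capacity, charge currents, and efficiency factor are illustrative assumptions rather than figures from the text:

def estimated_charge_time_hours(capacity_mah, charge_current_ma, efficiency=0.8):
    # Ideal time is capacity / current; dividing by an efficiency factor allows for charging losses
    return (capacity_mah / charge_current_ma) / efficiency

# Hypothetical 2000 mAh cell
print(estimated_charge_time_hours(2000, 200))    # 12.5 h - comparable to a slow "dumb" charger
print(estimated_charge_time_hours(2000, 1000))   # 2.5 h - within the typical rapid-charger range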
Different battery chemistries require different charging schemes. For example, some battery types can be safely recharged from a constant voltage source. Other types need to be charged with a regulated current source that tapers as the battery reaches fully charged voltage. Charging a battery incorrectly can damage a battery; in extreme cases, batteries can overheat, catch fire, or explosively vent their contents.
2202) Emergency light
An emergency light is a battery-backed lighting device that switches on automatically when a building experiences a power outage.
In the United States, emergency lights are standard in new commercial and high occupancy residential buildings, such as college dormitories, apartments, and hotels. Most building codes in the US require that they be installed in older buildings as well. Incandescent light bulbs were originally used in emergency lights, before fluorescent lights and later light-emitting diodes (LEDs) superseded them in the 21st century.
History
By the nature of the device, an emergency light is designed to come on when the power goes out. Every model, therefore, requires some sort of a battery or generator system that could provide electricity to the lights during a blackout. The earliest models used incandescent light bulbs, which could dimly light an area during a blackout and perhaps provide enough light to solve the power problem or evacuate the building. It was quickly realized, however, that a more focused, brighter, and longer-lasting light was needed. Modern emergency floodlights provide a high-lumen, wide-coverage light that can illuminate an area quite well. Some lights are halogen, and provide a light source and intensity similar to that of an automobile headlight.
Early battery backup systems were huge, dwarfing the size of the lights for which they provided power. The systems normally used lead acid batteries to store a full 120 VDC charge. For comparison, an automobile uses a single lead acid battery as part of the ignition system. Simple transistor or relay technology was used to switch on the lights and battery supply in the event of a power failure. The size of these units, as well as the weight and cost, made them relatively rare installations. As technology developed further, the voltage requirements for lights dropped, and subsequently the size of the batteries was reduced as well. Modern lights are only as large as the bulbs themselves - the battery fits quite well in the base of the fixture.
Modern installations
In the United States, modern emergency lighting is installed in virtually every commercial and high occupancy residential building. The lights consist of one or more incandescent bulbs or one or more clusters of high-intensity light-emitting diodes (LED). The emergency lighting heads have usually been either incandescent PAR 36 sealed beams or wedge base lamps, but LED illumination is increasingly common. All units have some sort of a device to focus and intensify the light they produce. This can either be in the form of a plastic cover over the fixture, or a reflector placed behind the light source. Most individual light sources can be rotated and aimed for where light is needed most in an emergency, such as toward fire exits.
Modern fixtures usually have a test button of some sort which simulates a power failure and causes the unit to switch on the lights and operate from battery power, even if the main power is still on. Modern systems operate at relatively low voltage, usually 6-12 VDC. This both reduces the size of the batteries required and reduces the load on the circuit to which the emergency light is wired. Modern fixtures include a small transformer in the base of the fixture which steps down the mains voltage to the low voltage required by the lights. Batteries are commonly made of lead-calcium, and can last for 10 years or more on continuous charge. US fire safety codes require a minimum of 90 minutes on battery power along the path of egress during a power outage.
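The 90-minute requirement and the low operating voltage together determine how much battery capacity a fixture needs: the current draw (lamp power divided by battery voltage) multiplied by the required run time. A minimal Python sketch in which the lamp wattage is an illustrative assumption; the 6-12 V range and 90-minute minimum come from the paragraph above:

def required_capacity_ah(lamp_watts, battery_volts, runtime_hours):
    # Capacity (Ah) = current draw (A) x required run time (h)
    current_amps = lamp_watts / battery_volts
    return current_amps * runtime_hours

# Hypothetical 8 W lamp head on a 6 V battery, run for the 90-minute (1.5 h) minimum
print(f"{required_capacity_ah(8, 6, 1.5):.1f} Ah")  # 2.0 Ah, before any design margin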
Compliance codes
New York City requires emergency lights to carry a Calendar Number signifying approval for local installation; Chicago requires emergency lighting to have a metal face plate; and Los Angeles requires additional exit signs to be installed within 18 inches (460 mm) of the floor around doors to mark exits during a fire, since smoke rises and tends to obscure higher-mounted units.
As there are strict requirements to provide an average of one foot candle of light along the path of egress, emergency lighting should be selected carefully to ensure codes are met.
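For designers working in SI units, one foot-candle is one lumen per square foot, or roughly 10.76 lux; a quick Python conversion:

FOOTCANDLE_TO_LUX = 10.764  # 1 foot-candle = 1 lm/ft^2, which is about 10.764 lm/m^2 (lux)

def footcandles_to_lux(footcandles):
    return footcandles * FOOTCANDLE_TO_LUX

print(footcandles_to_lux(1))  # 10.764 lux, the average egress-path level mentioned above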
In recent years, emergency lighting has made less use of the traditional two-head unit, with manufacturers stretching the concept to integrate emergency lighting into the building's architecture.
An emergency lighting installation may be either a central standby source such as a bank of lead acid batteries and control gear/chargers supplying slave fittings throughout the building, or may be constructed using self-contained emergency fittings which incorporate the lamp, battery, charger and control equipment.
Self-contained emergency lighting fittings may operate in "Maintained" mode (illuminated all the time or controlled by a switch) or "Non-Maintained" mode (illuminated only when the normal supply fails).
Some emergency lighting manufacturers offer dimming solutions for common-area emergency lighting that use embedded occupancy sensors to save energy for building owners when areas are unoccupied.
Another popular option for lighting designers, architects and contractors is battery backup ballasts that install within or adjacent to existing lighting fixtures. Upon sensing power loss, the ballasts switch into emergency mode, turning the existing lighting into emergency lighting in order to meet both the NFPA's Life Safety Code and the National Electrical Code without the need to wire separate circuits or external wall mounts.
Codes of practice for remote mounted emergency lighting generally mandate that wiring from the central power source to emergency luminaires be kept segregated from other wiring, and constructed in fire resistant cabling and wiring systems.
Codes of practice lay down minimum illumination levels in escape routes and open areas. Codes of practice also lay down requirements governing siting of emergency lighting fittings, for example the UK code of practice, BS5266, specifies that a fitting must be within 2 metres (6 ft 7 in) horizontal distance of a fire alarm call point or location for fire fighting appliances.
The most recent codes of practice require the designer to allow for both failure of the supply to the building and the failure of an individual lighting circuit. BS5266 requires that when Non Maintained fittings are used, they must be supplied from the same final circuit as the main lighting circuit in the area.
UK specific information
Emergency lighting testing, or emergency lighting compliance (ELC), is the process of ensuring that emergency lights are in working order and compliant with safety regulations. This typically involves monthly and annual tests, as well as regular maintenance and replacement of batteries and bulbs. Regular testing is important to ensure that emergency lights will be able to provide adequate illumination in the event of a power outage or other emergency.
According to British fire safety law, a full assessment of the system must be conducted yearly, and the system must be “flick-tested” at least once a month. Emergency lighting serves multiple purposes: illuminating pathways for occupants to escape from hazardous situations, and helping individuals locate nearby fire-fighting equipment in an emergency.
Types
For UK and Australian regulations, two types are distinguished:
* Maintained luminaires are permanently illuminated, and remain illuminated when power fails. They are used for tasks such as emergency exit lighting. In some cases they may be switched off deliberately, but are usually required to be active when a building is occupied, or when the public are admitted, such as for a theatre.
* Sustained or non-maintained luminaires may be switched on and off normally. If the power fails, they turn on automatically.