This glossary of engineering terms is a list of definitions about the major concepts of
engineering. Please see the bottom of the page for glossaries of specific fields of engineering.
In electrochemistry, the absolute electrode potential, according to an IUPAC definition,[1] is the electrode potential of a metal measured with respect to a universal reference system (without any additional metal–solution interface).
The lower limit of the
thermodynamic temperature scale, a state at which the
enthalpy and
entropy of a cooled
ideal gas reach their minimum value, taken as 0. Absolute zero is the point at which the fundamental particles of nature have minimal vibrational motion, retaining only quantum mechanical,
zero-point energy-induced particle motion. The theoretical temperature is determined by extrapolating the
ideal gas law; by international agreement, absolute zero is taken as −273.15° on the
Celsius scale (
International System of Units),[2][3] which equals −459.67° on the
Fahrenheit scale (
United States customary units or
Imperial units).[4] The corresponding
Kelvin and
Rankine temperature scales set their zero points at absolute zero by definition.
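For illustration, the scale relationships quoted above can be checked numerically. The following is a minimal Python sketch (the function names are illustrative, not from any standard library):

```python
# Temperature-scale conversions; absolute zero maps between the values quoted above.

def celsius_to_kelvin(c):
    return c + 273.15

def celsius_to_fahrenheit(c):
    return c * 9 / 5 + 32

def fahrenheit_to_rankine(f):
    return f + 459.67

absolute_zero_c = -273.15
print(celsius_to_kelvin(absolute_zero_c))      # 0.0 K
print(celsius_to_fahrenheit(absolute_zero_c))  # ≈ -459.67 °F
print(fahrenheit_to_rankine(-459.67))          # 0.0 °R
```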
Absorbance or decadic absorbance is the common logarithm of the ratio of incident to transmitted radiant power through a material, and spectral absorbance or spectral decadic absorbance is the common logarithm of the ratio of incident to transmitted spectral radiant power through a material.[5]
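The definition above reduces to a one-line computation. A minimal sketch (variable names are assumptions for illustration):

```python
from math import log10

# Decadic absorbance: A = log10(incident radiant power / transmitted radiant power).
def absorbance(incident_power, transmitted_power):
    return log10(incident_power / transmitted_power)

# A sample transmitting 10% of the incident radiant power has absorbance 1.
print(absorbance(100.0, 10.0))  # 1.0
```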
Activated sludge is a type of wastewater treatment process for treating sewage or industrial wastewaters using aeration and a biological floc composed of bacteria and protozoa.
In cellular biology, active transport is the movement of molecules across a membrane from a region of their lower concentration to a region of their higher concentration—against the concentration gradient. Active transport requires cellular energy to achieve this movement. There are two types of active transport: primary active transport that uses
ATP, and secondary active transport that uses an electrochemical gradient. An example of active transport in
human physiology is the uptake of
glucose in the
intestines.
An actuator is a device that accepts two inputs (a control signal and an energy source) and outputs kinetic energy in the form of physical movement (linear, rotary, or oscillatory). The control signal input specifies which motion should be taken. The energy source input is typically either an electric current, hydraulic pressure, or pneumatic pressure. An actuator can be the final element of a control loop.
Adenosine triphosphate (ATP) is a complex organic chemical that provides energy to drive many processes in living cells, e.g. muscle contraction, nerve impulse propagation, and chemical synthesis. Found in all forms of life, ATP is often referred to as the "molecular unit of
currency" of intracellular
energy transfer.[7]
Adhesion is the tendency of dissimilar particles or surfaces to cling to one another (cohesion refers to the tendency of similar or identical particles/surfaces to cling to one another).
Aerodynamics is the study of the motion of air, particularly its interaction with a solid object, such as an airplane wing. It is a sub-field of fluid dynamics and gas dynamics, and many aspects of aerodynamics theory are common to these fields.
Aerospace engineering is the primary field of engineering concerned with the development of aircraft and spacecraft.[10] It has two major and overlapping branches: aeronautical engineering and astronautical engineering.
Avionics engineering is similar, but deals with the
electronics side of aerospace engineering.
Alpha particles consist of two
protons and two
neutrons bound together into a particle identical to a
helium-4 nucleus. They are generally produced in the process of
alpha decay, but may also be produced in other ways. Alpha particles are named after the first letter in the
Greek alphabet,
α.
In chemistry, an amphoteric compound is a molecule or ion that can react both as an
acid as well as a
base.[20] Many metals (such as
copper,
zinc,
tin,
lead,
aluminium, and
beryllium) form amphoteric oxides or hydroxides. Amphoterism depends on the
oxidation states of the oxide. Al2O3 is an example of an amphoteric oxide.
The amplitude of a
periodic variable is a measure of its change over a single
period (such as
time or
spatial period). There are various definitions of amplitude, which are all
functions of the magnitude of the difference between the variable's
extreme values. In older texts the
phase is sometimes called the amplitude.[21]
Anaerobic digestion is a collection of processes by which
microorganisms break down
biodegradable material in the absence of
oxygen.[22] The process is used for industrial or domestic purposes to
manage waste or to produce fuels. Much of the
fermentation used industrially to produce food and drink products, as well as home fermentation, uses anaerobic digestion.
In
physics, angular momentum (rarely, moment of momentum or rotational momentum) is the rotational equivalent of
linear momentum. It is an important quantity in physics because it is a
conserved quantity—the total angular momentum of a system remains constant unless acted on by an external
torque.
An anion is an ion with more electrons than protons, giving it a net negative charge (since electrons are negatively charged and protons are positively charged).[24]
In
particle physics, annihilation is the process that occurs when a
subatomic particle collides with its respective
antiparticle to produce other particles, such as an
electron colliding with a
positron to produce two
photons.[25] The total
energy and
momentum of the initial pair are conserved in the process and distributed among a set of other particles in the final state. Antiparticles have exactly opposite additive
quantum numbers from particles, so the sums of all quantum numbers of such an original pair are zero. Hence, any set of particles may be produced whose total quantum numbers are also zero as long as
conservation of energy and
conservation of momentum are obeyed.[26]
The American National Standards Institute is a private
non-profit organization that oversees the development of
voluntary consensus standards for products, services, processes, systems, and personnel in the United States.[27] The organization also coordinates U.S. standards with international standards so that American products can be used worldwide.
Anti-gravity (also known as non-gravitational field) is a theory of creating a place or object that is free from the force of
gravity. It does not refer to the lack of weight under gravity experienced in
free fall or
orbit, or to balancing the force of gravity with some other force, such as electromagnetism or aerodynamic lift.
Applied engineering is the field concerned with the application of management, design, and technical skills for the design and integration of systems, the execution of new product designs, the improvement of manufacturing processes, and the management and direction of physical and/or technical functions of a firm or organization. Applied-engineering degree programs typically include instruction in basic engineering principles,
project management, industrial processes, production and operations management, systems integration and control, quality control, and statistics.[28]
Arc length is the distance between two points along a section of a
curve. Determining the length of an irregular arc segment is also called rectification of a curve. The advent of
infinitesimal calculus led to a general formula that provides
closed-form solutions in some cases.
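Where no closed form exists, rectification can be done numerically by summing short chords along the curve. A sketch under the assumption of a smooth function y = f(x) (not from any library):

```python
from math import sqrt, hypot

# Approximate the length of the curve y = f(x) on [a, b] by summing n short chords.
def arc_length(f, a, b, n=100_000):
    total = 0.0
    x0, y0 = a, f(a)
    for i in range(1, n + 1):
        x1 = a + (b - a) * i / n
        y1 = f(x1)
        total += hypot(x1 - x0, y1 - y0)  # length of one chord
        x0, y0 = x1, y1
    return total

# The line y = x from 0 to 1 has length sqrt(2):
print(arc_length(lambda x: x, 0.0, 1.0))  # ≈ 1.41421 (sqrt(2))
```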
Archimedes' principle states that the upward
buoyant force that is exerted on a body immersed in a
fluid, whether fully or partially submerged, is equal to the
weight of the fluid that the body
displaces and acts in the upward direction at the center of mass of the displaced fluid.[29] Archimedes' principle is a
law of physics fundamental to fluid mechanics. It was formulated by
Archimedes of Syracuse.[30]
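The principle gives the buoyant force directly as the weight of displaced fluid. A small sketch, using standard approximate values for water density and g (the function name is illustrative):

```python
# Buoyant force from Archimedes' principle: F = rho_fluid * g * V_displaced.
def buoyant_force(fluid_density, displaced_volume, g=9.81):
    """Upward force in newtons, for density in kg/m^3 and volume in m^3."""
    return fluid_density * g * displaced_volume

# A fully submerged 0.002 m^3 object in fresh water (1000 kg/m^3):
print(buoyant_force(1000.0, 0.002))  # ≈ 19.62 N
```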
The second moment of area, also known as moment of inertia of plane area, area moment of inertia, or second area moment, is a geometrical property of an area which reflects how its points are distributed with regard to an arbitrary axis. The second moment of area is typically denoted with either an I (for an axis that lies in the plane) or with a J (for an axis perpendicular to the plane). In both cases, it is calculated with a
multiple integral over the object in question. Its dimension is L (length) to the fourth power. Its
unit of dimension when working with the
International System of Units is meters to the fourth power,
m⁴.
In
mathematics and
statistics, the arithmetic mean, or simply the
mean or average when the context is clear, is the sum of a collection of numbers divided by the number of numbers in the collection.[31]
In
mathematics, an arithmetic progression (AP) or arithmetic sequence is a
sequence of
numbers such that the difference between the consecutive terms is constant. Difference here means the second minus the first. For instance, the sequence 5, 7, 9, 11, 13, 15, . . . is an arithmetic progression with common difference of 2.
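The nth term and partial sum of an arithmetic progression follow directly from the definition. A small sketch (function names are illustrative):

```python
# Arithmetic progression with first term a and common difference d.
def ap_term(a, d, n):
    """n-th term (1-indexed): a + (n - 1) * d."""
    return a + (n - 1) * d

def ap_sum(a, d, n):
    """Sum of the first n terms: n * (2a + (n - 1) * d) / 2."""
    return n * (2 * a + (n - 1) * d) // 2

# The example sequence 5, 7, 9, 11, 13, 15, ... has a = 5, d = 2:
print([ap_term(5, 2, n) for n in range(1, 7)])  # [5, 7, 9, 11, 13, 15]
print(ap_sum(5, 2, 6))                          # 60
```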
An aromatic hydrocarbon or arene[32] (or sometimes aryl hydrocarbon)[33] is a
hydrocarbon with
sigma bonds and delocalized
pi electrons between carbon atoms forming a circle. In contrast,
aliphatic hydrocarbons lack this delocalization. The term "aromatic" was assigned before the physical mechanism determining
aromaticity was discovered; the term was coined as such simply because many of the compounds have a sweet or pleasant odour. The configuration of six carbon atoms in aromatic compounds is known as a benzene ring, after the simplest possible such hydrocarbon,
benzene. Aromatic hydrocarbons can be monocyclic (MAH) or polycyclic (PAH).
The Arrhenius equation is a formula for the temperature dependence of
reaction rates. The equation was proposed by
Svante Arrhenius in 1889, based on the work of Dutch chemist
Jacobus Henricus van 't Hoff who had noted in 1884 that
Van 't Hoff's equation for the temperature dependence of
equilibrium constants suggests such a formula for the rates of both forward and reverse reactions. This equation has a vast and important application in determining rate of chemical reactions and for calculation of energy of activation. Arrhenius provided a physical justification and interpretation for the formula.[34][35][36] Currently, it is best seen as an
empirical relationship.[37]: 188 It can be used to model the temperature variation of diffusion coefficients, population of crystal vacancies, creep rates, and many other thermally-induced processes/reactions. The
Eyring equation, developed in 1935, also expresses the relationship between rate and energy.
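The Arrhenius form k = A·exp(−Ea/(R·T)) is easy to evaluate numerically. A sketch in which the pre-exponential factor and activation energy are illustrative assumptions, not measured values:

```python
from math import exp

R = 8.314  # gas constant, J/(mol*K)

# Arrhenius equation: rate constant k = A * exp(-Ea / (R * T)).
def rate_constant(A, Ea, T):
    return A * exp(-Ea / (R * T))

# With Ea = 50 kJ/mol, warming from 298 K to 308 K speeds the reaction up roughly 2x:
k1 = rate_constant(1e13, 50_000, 298.0)
k2 = rate_constant(1e13, 50_000, 308.0)
print(k2 / k1)  # ≈ 1.93
```

This illustrates the common rule of thumb that a modest temperature rise can roughly double a reaction rate, depending on the activation energy.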
Artificial intelligence (AI) is
intelligence demonstrated by
machines, unlike the natural intelligence
displayed by humans and
animals. Leading AI textbooks define the field as the study of "
intelligent agents": any device that perceives its environment and takes actions that maximize its chance of successfully achieving its goals.[40] Colloquially, the term "artificial intelligence" is often used to describe machines (or computers) that mimic "cognitive" functions that humans associate with the
human mind, such as "learning" and "problem solving".[41]
In
atomic theory and
quantum mechanics, an atomic orbital is a
mathematical function that describes the wave-like behavior of either one
electron or a pair of electrons in an
atom.[42] This function can be used to calculate the
probability of finding any electron of an atom in any specific region around the
atom's nucleus. The term atomic orbital may also refer to the physical region or space where the electron can be calculated to be present, as defined by the particular mathematical form of the orbital.[43]
An audio frequency (abbreviation: AF), or audible frequency is characterized as a
periodic vibration whose
frequency is audible to the average human. The
SI unit of audio frequency is the
hertz (Hz). It is the property of
sound that most determines
pitch.[44]
Austenitization means to heat the iron, iron-based metal, or steel to a temperature at which it changes crystal structure from ferrite to austenite.[45] The more open structure of the austenite is then able to absorb carbon from the iron-carbides in carbon steel. An incomplete initial austenitization can leave undissolved
carbides in the matrix.[46] For some irons, iron-based metals, and steels, the presence of carbides may occur during the austenitization step. The term commonly used for this is two-phase austenitization.[47]
Automation is the technology by which a process or procedure is performed with minimum human assistance.[48] Automation[49] or automatic control is the use of various
control systems for operating equipment such as machinery, processes in factories, boilers and heat treating ovens, switching on telephone networks, steering and stabilization of ships, aircraft and other applications and vehicles with minimal or reduced human intervention. Some processes have been completely automated.
In
chemistry, bases are substances that, in
aqueous solution, release
hydroxide (OH−) ions, are slippery to the touch, can taste
bitter if an alkali,[50] change the color of indicators (e.g., turn red
litmus paper blue), react with
acids to form
salts, promote certain chemical reactions (
base catalysis), accept
protons from any proton donor, and/or contain completely or partially displaceable OH− ions.
The Beer–Lambert law, also known as Beer's law, the Lambert–Beer law, or the Beer–Lambert–Bouguer law relates the
attenuation of
light to the properties of the material through which the light is travelling. The law is commonly applied to
chemical analysis measurements and used in understanding attenuation in
physical optics, for
photons,
neutrons or rarefied gases. In
mathematical physics, this law arises as a solution of the
BGK equation.
Belt friction is a term describing the friction forces between a
belt and a surface, such as a belt wrapped around a
bollard. When one end of the belt is being pulled only part of this force is transmitted to the other end wrapped about a surface. The friction force increases with the amount of wrap about a surface and makes it so the
tension in the belt can be different at both ends of the belt. Belt friction can be modeled by the
Belt friction equation.[51]
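The capstan form of the belt friction equation, T_load = T_hold · exp(μθ), shows how quickly the holding force drops with wrap angle. A sketch with illustrative inputs:

```python
from math import exp, pi

# Capstan (belt friction) equation: T_load = T_hold * exp(mu * theta).
def holding_tension(load_tension, mu, wrap_angle):
    """Force needed to hold a load via a belt wrapped wrap_angle radians around a post."""
    return load_tension / exp(mu * wrap_angle)

# One full turn (2*pi rad) around a post with friction coefficient 0.3:
print(holding_tension(1000.0, 0.3, 2 * pi))  # ≈ 151.8 N holds a 1000 N load
```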
In
applied mechanics, bending (also known as flexure) characterizes the behavior of a slender
structural element subjected to an external
load applied perpendicularly to a longitudinal axis of the element. The structural element is assumed to be such that at least one of its dimensions is a small fraction, typically 1/10 or less, of the other two.[52]
Cost–benefit analysis (CBA), sometimes called benefit costs analysis (BCA), is a systematic approach to estimating the strengths and weaknesses of alternatives (for example in transactions, activities, functional business requirements); it is used to determine options that provide the best approach to achieve benefits while preserving savings.[55] It may be used to compare potential (or completed) courses of actions; or estimate (or evaluate) the value against
costs of a single decision, project, or policy.
An ordinary differential equation of the form y′ + P(x)y = Q(x)y^n is called a Bernoulli differential equation, where n is any real number with n ≠ 0 and n ≠ 1.[56] It is named after
Jacob Bernoulli who discussed it in 1695. Bernoulli equations are special because they are nonlinear differential equations with known exact solutions. A famous special case of the Bernoulli equation is the
logistic differential equation.
A beta particle, also called beta ray or beta radiation (symbol β), is a high-energy, high-speed
electron or
positron emitted by the
radioactive decay of an
atomic nucleus during the process of
beta decay. There are two forms of beta decay, β− decay and β+ decay, which produce electrons and positrons respectively.[62]
Biocatalysis refers to the use of
living (biological) systems or their parts to speed up (
catalyze) chemical reactions. In biocatalytic processes, natural catalysts, such as
enzymes, perform chemical transformations on
organic compounds. Both enzymes that have been more or less
isolated and enzymes still residing inside living
cells are employed for this task.[63][64][65] The modern usage of biotechnologically produced and possibly modified enzymes for
organic synthesis is termed chemoenzymatic synthesis; the reactions performed are chemoenzymatic reactions.
Biomedical Engineering (BME) or Medical Engineering is the application of engineering principles and design concepts to medicine and biology for healthcare purposes (e.g. diagnostic or therapeutic). This field seeks to close the gap between
engineering and
medicine, combining the design and problem solving skills of engineering with medical biological sciences to advance health care treatment, including
diagnosis,
monitoring, and
therapy.[66]
The Biot number (Bi) is a
dimensionless quantity used in heat transfer calculations. It is named after the eighteenth century French physicist
Jean-Baptiste Biot (1774–1862), and gives a simple index of the ratio of the heat transfer resistances inside of and at the surface of a body. This ratio determines whether or not the temperatures inside a body will vary significantly in space, while the body heats or cools over time, from a thermal gradient applied to its surface.
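The index itself is just a ratio, Bi = h·Lc/k. A sketch with rough illustrative material values (a common rule of thumb, Bi < 0.1, indicates a nearly uniform interior temperature):

```python
# Biot number Bi = h * Lc / k: surface heat-transfer resistance vs. internal conduction.
def biot_number(h, characteristic_length, k):
    """h: surface heat transfer coefficient (W/m^2*K); k: thermal conductivity (W/m*K)."""
    return h * characteristic_length / k

# Small copper sphere in moving air: conduction dominates, so Bi is tiny.
print(biot_number(h=100.0, characteristic_length=0.005, k=400.0))  # ≈ 0.00125
```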
The boiling point of a substance is the temperature at which the
vapor pressure of a
liquid equals the
pressure surrounding the liquid[73][74] and the liquid changes into a vapor.
Boiling-point elevation describes the phenomenon that the
boiling point of a
liquid (a
solvent) will be higher when another compound is added, meaning that a
solution has a higher boiling point than a pure solvent. This happens whenever a non-volatile solute, such as a salt, is added to a pure solvent, such as water. The boiling point can be measured accurately using an
ebullioscope.
Boyle's law (sometimes referred to as the Boyle–Mariotte law, or Mariotte's law[84]) is an experimental
gas law that describes how the
pressure of a
gas tends to increase as the
volume of the container decreases. A modern statement of Boyle's law is:
The absolute pressure exerted by a given mass of an
ideal gas is inversely proportional to the volume it occupies if the
temperature and
amount of gas remain unchanged within a
closed system.[85][86]
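At constant temperature the law gives P1·V1 = P2·V2, so the new pressure after a volume change follows immediately. A sketch with illustrative numbers:

```python
# Boyle's law at constant temperature: P1 * V1 = P2 * V2.
def pressure_after_compression(p1, v1, v2):
    """New pressure when an ideal gas at (p1, v1) is taken to volume v2, T fixed."""
    return p1 * v1 / v2

# Halving the volume of a gas at 101.325 kPa doubles its pressure:
print(pressure_after_compression(101.325, 2.0, 1.0))  # 202.65
```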
In
geometry and
crystallography, a Bravais lattice, named after
Auguste Bravais (
1850),[87] is an infinite array (or a finite array, if edges are considered) of discrete points generated by a set of
discrete translation operations described in three-dimensional space by

R = n₁a₁ + n₂a₂ + n₃a₃,

where the nᵢ are any integers and the aᵢ are known as the primitive vectors, which lie in different directions (not necessarily mutually perpendicular) and span the lattice. This discrete set of vectors must be closed under vector addition and subtraction. For any choice of position vector R, the lattice looks exactly the same.
The break-even point (BEP) in
economics,
business—and specifically
cost accounting—is the point at which total cost and total revenue are equal, i.e. "even". There is no net loss or gain, and one has "broken even", though
opportunity costs have been paid and capital has received the risk-adjusted, expected return. In short, all costs that must be paid are paid, and there is neither profit nor loss.[88][89]
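In unit terms the break-even point is fixed costs divided by the contribution margin (price per unit minus variable cost per unit). A sketch with illustrative figures:

```python
# Break-even point in units: fixed costs / (price per unit - variable cost per unit).
def break_even_units(fixed_costs, price_per_unit, variable_cost_per_unit):
    margin = price_per_unit - variable_cost_per_unit  # contribution margin per unit
    if margin <= 0:
        raise ValueError("price must exceed variable cost to break even")
    return fixed_costs / margin

# $50,000 fixed costs, $25 sale price, $15 variable cost -> 5,000 units:
print(break_even_units(50_000, 25.0, 15.0))  # 5000.0
```

At that volume, total revenue (units × price) exactly equals total cost (fixed plus units × variable cost).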
Brewster's angle (also known as the polarization angle) is an
angle of incidence at which
light with a particular
polarization is perfectly transmitted through a transparent
dielectric surface, with no
reflection. When unpolarized light is incident at this angle, the light that is reflected from the surface is therefore perfectly polarized. This special angle of incidence is named after the Scottish physicist
Sir David Brewster (1781–1868).[90][91]
A material is brittle if, when subjected to
stress, it breaks without significant
plastic deformation. Brittle materials absorb relatively little
energy prior to fracture, even those of high
strength. Breaking is often accompanied by a snapping sound. Brittle materials include most
ceramics and
glasses (which do not deform plastically) and some
polymers, such as
PMMA and
polystyrene. Many
steels become brittle at low temperatures (see
ductile–brittle transition temperature), depending on their composition and processing.
Brownian motion or pedesis is the random motion of
particles suspended in a
fluid (a
liquid or a
gas) resulting from their collision with the fast-moving
molecules in the fluid.[94]
A buffer solution (more precisely,
pH buffer or
hydrogen ion buffer) is an
aqueous solution consisting of a
mixture of a
weak acid and its
conjugate base, or vice versa. Its pH changes very little when a small amount of
strong acid or
base is added to it. Buffer solutions are used as a means of keeping pH at a nearly constant value in a wide variety of chemical applications. In nature, there are many systems that use buffering for pH regulation.
The bulk modulus (K or B) of a substance is a measure of how resistant to compression that substance is. It is defined as the ratio of the infinitesimal pressure increase to the resulting relative decrease of the volume, K = −V dP/dV.[95]
Other moduli describe the material's response (
strain) to other kinds of
stress: the
shear modulus describes the response to shear, and
Young's modulus describes the response to linear stress. For a
fluid, only the bulk modulus is meaningful. For a complex
anisotropic solid such as
wood or
paper, these three moduli do not contain enough information to describe its behaviour, and one must use the full generalized
Hooke's law.
Capillary action (sometimes capillarity, capillary motion, capillary effect, or wicking) is the ability of a
liquid to flow in narrow spaces without the assistance of, or even in opposition to, external forces like
gravity. The effect can be seen in the drawing up of liquids between the hairs of a paintbrush, in a thin tube, in porous materials such as paper and plaster, in some non-porous materials such as sand and liquefied
carbon fiber, or in a cell. It occurs because of
intermolecular forces between the liquid and surrounding solid surfaces. If the diameter of the tube is sufficiently small, then the combination of
surface tension (which is caused by
cohesion within the liquid) and
adhesive forces between the liquid and container wall act to propel the liquid.
The Carnot cycle is a hypothetical thermodynamic cycle for a heat engine; no thermodynamic cycle can be more efficient than a Carnot cycle operating between the same two temperature limits.
Castigliano's method, named for Carlo Alberto Castigliano, is a method for determining the displacements of a
linear-elastic system based on the
partial derivatives of the
energy. He is known for his two theorems. The basic concept may be easy to understand by recalling that a change in energy is equal to the causing force times the resulting displacement. Therefore, the causing force is equal to the change in energy divided by the resulting displacement. Alternatively, the resulting displacement is equal to the change in energy divided by the causing force. Partial derivatives are needed to relate causing forces and resulting displacements to the change in energy.
The cell membrane (also known as the plasma membrane or cytoplasmic membrane, and historically referred to as the plasmalemma) is a
biological membrane that separates the
interior of all
cells from the
outside environment (the extracellular space), protecting the cell from its environment.[96][97] The membrane consists of a
lipid bilayer with embedded
proteins.
In
cell biology, the nucleus (pl. nuclei; from
Latin nucleus or nuculeus, meaning kernel or seed) is a
membrane-enclosed
organelle found in
eukaryotic cells. Eukaryotes usually have a single nucleus, but a few cell types, such as mammalian red blood cells, have
no nuclei, and a few others including
osteoclasts have
many.
In
biology, cell theory is the historic
scientific theory, now universally accepted, that living organisms are made up of
cells, that they are the basic structural/organizational unit of all organisms, and that all cells come from pre-existing cells. Cells are the basic unit of structure in all organisms and also the basic unit of reproduction.
The center of pressure is the point where the total sum of a
pressure field acts on a body, causing a
force to act through that point. The total force vector acting at the center of pressure is the value of the integrated vectorial pressure field. The resultant force and center of pressure location produce equivalent force and moment on the body as the original pressure field.
In
probability theory, the central limit theorem (CLT) establishes that, in some situations, when
independent random variables are added, their properly normalized sum tends toward a
normal distribution (informally a "bell curve") even if the original variables themselves are not normally distributed. The theorem is a key concept in probability theory because it implies that probabilistic and statistical methods that work for normal distributions can be applicable to many problems involving other types of distributions.
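The theorem can be seen in a short simulation: means of samples drawn from a (decidedly non-bell-shaped) uniform distribution cluster around the population mean with spread close to the predicted σ/√n. The sample sizes below are arbitrary choices for the demonstration:

```python
import random
from math import sqrt

# Means of n uniform(0, 1) draws should cluster near 0.5 with
# standard deviation ~ sqrt(1/12) / sqrt(n), per the CLT.
random.seed(0)
n, trials = 30, 2000
sample_means = [sum(random.random() for _ in range(n)) / n for _ in range(trials)]

mean_of_means = sum(sample_means) / trials
spread = sqrt(sum((m - mean_of_means) ** 2 for m in sample_means) / trials)

print(mean_of_means)                   # close to 0.5
print(spread, sqrt(1 / 12) / sqrt(n))  # both close to ~0.0527
```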
A central processing unit (CPU) is the
electronic circuitry within a
computer that carries out the
instructions of a
computer program by performing the basic
arithmetic, logic, controlling and
input/output (I/O) operations specified by the instructions. The computer industry has used the term "central processing unit" at least since the early 1960s.[98] Traditionally, the term "CPU" refers to a processor, more specifically to its processing unit and
control unit (CU), distinguishing these core elements of a computer from external components such as
main memory and
I/O circuitry.[99]
A chain reaction is a sequence of reactions where a reactive product or by-product causes additional reactions to take place. In a chain reaction,
positive feedback leads to a self-amplifying
chain of events.
Charles's law (also known as the law of volumes) is an experimental
gas law that describes how
gases tend to expand when heated. A modern statement of Charles's law is:
When the
pressure on a sample of a dry gas is held constant, the Kelvin temperature and the volume will be in direct proportion.[103]
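The direct proportion means V1/T1 = V2/T2 with temperatures in kelvin. A sketch with illustrative numbers:

```python
# Charles's law at constant pressure: V1 / T1 = V2 / T2 (temperatures in kelvin).
def volume_after_heating(v1, t1_kelvin, t2_kelvin):
    return v1 * t2_kelvin / t1_kelvin

# Heating a gas from 300 K to 600 K at constant pressure doubles its volume:
print(volume_after_heating(1.0, 300.0, 600.0))  # 2.0
```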
In a
chemical reaction, chemical equilibrium is the state in which both reactants and products are present in
concentrations which have no further tendency to change with time, so that there is no observable change in the properties of the system.[104] Usually, this state results when the forward reaction proceeds at the same rate as the
reverse reaction. The
reaction rates of the forward and backward reactions are generally not zero, but equal. Thus, there are no net changes in the concentrations of the reactant(s) and product(s). Such a state is known as
dynamic equilibrium.[105][106]
Chemical kinetics, also known as reaction kinetics, is the study of
rates of
chemical processes. Chemical kinetics includes investigations of how different experimental conditions can influence the speed of a
chemical reaction and yield information about the
reaction's mechanism and
transition states, as well as the construction of
mathematical models that can describe the characteristics of a chemical reaction.
A chemical reaction is a process that leads to the
chemical transformation of one set of
chemical substances to another.[107] Classically, chemical reactions encompass changes that only involve the positions of
electrons in the forming and breaking of
chemical bonds between
atoms, with no change to the nuclei (no change to the elements present), and can often be described by a
chemical equation.
Nuclear chemistry is a sub-discipline of chemistry that involves the chemical reactions of unstable and radioactive elements where both electronic and nuclear changes can occur.
Chromate salts contain the chromate anion, CrO₄²⁻. Dichromate salts contain the dichromate anion, Cr₂O₇²⁻. They are oxyanions of chromium in the 6+ oxidation state. They are moderately strong
oxidizing agents. In an
aqueous solution, chromate and dichromate ions can be interconvertible.
In
physics, circular motion is a movement of an object along the
circumference of a
circle or
rotation along a circular path. It can be uniform, with constant angular rate of rotation and constant speed, or non-uniform with a changing rate of rotation. The
rotation around a fixed axis of a three-dimensional body involves circular motion of its parts. The equations of motion describe the movement of the
center of mass of a body.
The Clausius–Clapeyron relation characterizes the phase transition between two phases of matter:

dP/dT = L / (T Δv) = Δs / Δv,

where dP/dT is the slope of the tangent to the coexistence curve at any point, L is the specific latent heat, T is the temperature, Δv is the specific volume change of the phase transition, and Δs is the specific entropy change of the phase transition.
The Clausius theorem (1855) states that for a system exchanging heat with external reservoirs and undergoing a cyclic process (one that ultimately returns the system to its original state),

∮ δQ/T_surr ≤ 0,

where δQ is the infinitesimal amount of heat absorbed by the system from the reservoir and T_surr is the temperature of the external reservoir (surroundings) at a particular instant in time. In the special case of a reversible process, the equality holds.[114] The reversible case is used to introduce the entropy state function. This is because in a cyclic process the variation of a state function is zero. In words, the Clausius statement states that it is impossible to construct a device whose sole effect is the transfer of heat from a cool reservoir to a hot reservoir.[115] Equivalently, heat spontaneously flows from a hot body to a cooler one, not the other way around.[116] The generalized "inequality of Clausius"[117]

dS ≥ δQ/T_surr

for an infinitesimal change in entropy S applies not only to cyclic processes, but to any process that occurs in a closed system.
The coefficient of performance or COP (sometimes CP or CoP) of a
heat pump, refrigerator or air conditioning system is a ratio of useful heating or cooling provided to work required.[118][119] Higher COPs equate to lower operating costs. The COP usually exceeds 1, especially in heat pumps, because, instead of just converting work to heat (which, if 100% efficient, would be a COP_hp of 1), it pumps additional heat from a heat source to where the heat is required. For complete systems, COP calculations should include energy consumption of all power consuming auxiliaries. COP is highly dependent on operating conditions, especially absolute temperature and relative temperature between sink and system, and is often graphed or averaged against expected conditions.[120]
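The ratio itself, and the ideal (Carnot) limit it is compared against, are simple to compute. A sketch with illustrative figures (temperatures must be in kelvin):

```python
# Coefficient of performance: COP = useful heat moved / work input.
def cop(useful_heat, work_input):
    return useful_heat / work_input

# Carnot limit for a heat pump in heating mode: T_hot / (T_hot - T_cold).
def carnot_cop_heating(t_hot_k, t_cold_k):
    return t_hot_k / (t_hot_k - t_cold_k)

# A heat pump delivering 9 kWh of heat for 3 kWh of electricity:
print(cop(9.0, 3.0))                     # 3.0
# Ideal limit between 273 K outdoors and 308 K indoors:
print(carnot_cop_heating(308.0, 273.0))  # ≈ 8.8
```

A real unit's COP always falls below the Carnot limit for the same temperature pair.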
In
physics, two wave sources are perfectly coherent if they have a constant
phase difference and the same
frequency, and the same
waveform. Coherence is an ideal property of
waves that enables stationary (i.e. temporally and spatially constant)
interference. It contains several distinct concepts, which are limiting cases that never quite occur in reality but allow an understanding of the physics of waves, and has become a very important concept in quantum physics. More generally, coherence describes all properties of the
correlation between
physical quantities of a single wave, or between several waves or wave packets.
Cohesion, also called cohesive attraction or cohesive force, is the action or property of like
molecules sticking together, being mutually
attractive. It is an intrinsic property of a
substance that is caused by the shape and structure of its molecules, which makes the distribution of orbiting
electrons irregular when molecules get close to one another, creating
electrical attraction that can maintain a microscopic structure such as a
water drop. In other words, cohesion allows for
surface tension, creating a "solid-like" state upon which light-weight or low-density materials can be placed.
Cold forming, or cold working, is any metalworking procedure (such as hammering, rolling, shearing, bending, or milling) carried out below the metal's recrystallization temperature.
Compensation is planning for side effects or other unintended issues in a design. In simpler terms, it is a "counter-procedure" plan for an expected side effect, performed to produce more efficient and useful results. The design of an
invention can itself also be to compensate for some other existing issue or
exception.
Compressive strength or compression strength is the capacity of a material or structure to withstand loads tending to reduce size, as opposed to
tensile strength, which withstands loads tending to elongate. In other words, compressive strength resists
compression (being pushed together), whereas tensile strength resists
tension (being pulled apart). In the study of
strength of materials, tensile strength, compressive strength, and
shear strength can be analyzed independently.
A computer is a device that can be instructed to carry out sequences of
arithmetic or
logical operations automatically via
computer programming. Modern computers have the ability to follow generalized sets of operations, called programs. These programs enable computers to perform an extremely wide range of tasks.
Computer-aided design (CAD) is the use of
computer systems (or workstations) to aid in the creation, modification, analysis, or optimization of a
design.[122] CAD software is used to increase the productivity of the designer, improve the quality of design, improve communications through documentation, and to create a database for manufacturing.[123] CAD output is often in the form of electronic files for print, machining, or other manufacturing operations. The term CADD (for Computer Aided Design and Drafting) is also used.[124]
Computer-aided manufacturing (CAM) is the use of software to control
machine tools and related ones in the
manufacturing of workpieces.[125][126][127][128][129] This is not the only definition for CAM, but it is the most common;[125] CAM may also refer to the use of a computer to assist in all operations of a manufacturing plant, including planning, management, transportation and storage.[130][131]
Is the theory, experimentation, and engineering that form the basis for the design and use of
computers. It involves the study of
algorithms that process, store, and communicate
digital information. A
computer scientist specializes in the theory of computation and the design of computational systems.[133]
Lenses are classified by the curvature of the two optical surfaces. A lens is biconvex (or double convex, or just convex) if both surfaces are
convex. If both surfaces have the same radius of curvature, the lens is equiconvex. A lens with two
concave surfaces is biconcave (or just concave). If one of the surfaces is flat, the lens is plano-convex or plano-concave depending on the curvature of the other surface. A lens with one convex and one concave side is convex-concave or meniscus.
Is the field of physics that deals with the macroscopic and microscopic physical properties of matter. In particular it is concerned with the "condensed" phases that appear whenever the number of constituents in a system is extremely large and the interactions between the constituents are strong.
In
statistics, a confidence interval or compatibility interval (CI) is a type of
interval estimate, computed from the statistics of the observed data, that might contain the true value of an unknown
population parameter. The interval has an associated confidence level that, loosely speaking, quantifies the level of confidence that the parameter lies in the interval. More strictly speaking, the confidence level represents the frequency (i.e. the proportion) of possible confidence intervals that contain the true value of the unknown population parameter. In other words, if confidence intervals are constructed using a given confidence level from an infinite number of independent sample statistics, the proportion of those intervals that contain the true value of the parameter will be equal to the confidence level.[134][135][136]
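The frequency interpretation above can be checked with a small simulation (a sketch with assumed parameters: normally distributed data with known sigma, and z = 1.96 for an approximate 95% confidence level):

```python
import math
import random
import statistics

random.seed(0)
TRUE_MEAN, SIGMA, N, Z95 = 10.0, 2.0, 50, 1.96  # assumed illustrative values

def ci(sample, sigma=SIGMA, z=Z95):
    """Two-sided z-interval for the mean, with known sigma."""
    m = statistics.fmean(sample)
    half = z * sigma / math.sqrt(len(sample))
    return m - half, m + half

TRIALS = 2000
hits = 0
for _ in range(TRIALS):
    s = [random.gauss(TRUE_MEAN, SIGMA) for _ in range(N)]
    lo, hi = ci(s)
    hits += lo <= TRUE_MEAN <= hi
print(hits / TRIALS)  # close to 0.95, as the confidence level predicts
```

Over many repeated samples, roughly 95% of the computed intervals cover the true mean, which is exactly the frequency property the confidence level asserts.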
A conjugate acid, within the
Brønsted–Lowry acid–base theory, is a
species formed by the
reception of a proton (
H+) by a
base—in other words, it is a base with a
hydrogen ion added to it. On the other hand, a conjugate base is what is left over after an acid has donated a proton during a chemical reaction. Hence, a conjugate base is a species formed by the
removal of a proton from an acid.[137] Because
some acids are capable of releasing multiple protons, the conjugate base of an acid may itself be acidic.
In physics and chemistry, the law of conservation of energy states that the total
energy of an
isolated system remains constant; it is said to be
conserved over time.[138] This law means that energy can neither be created nor destroyed; rather, it can only be transformed or transferred from one form to another.
The law of conservation of mass or principle of mass conservation states that for any
system closed to all transfers of
matter and
energy, the
mass of the system must remain constant over time: since the system's mass cannot change, quantity can neither be added nor removed. Hence, the quantity of mass is conserved over time.
A continuity equation in physics is an
equation that describes the transport of some quantity. It is particularly simple and powerful when applied to a
conserved quantity, but it can be generalized to apply to any
extensive quantity. Since
mass,
energy,
momentum,
electric charge and other natural quantities are conserved under their respective appropriate conditions, a variety of physical phenomena may be described using continuity equations.
Is a branch of
mechanics that deals with the mechanical behavior of materials modeled as a continuous mass rather than as discrete particles. The French mathematician
Augustin-Louis Cauchy was the first to formulate such models in the 19th century.
Control engineering or control systems engineering is an
engineering discipline that applies
automatic control theory to design systems with desired behaviors in
control environments.[139] The discipline of controls overlaps and is usually taught along with
electrical engineering at many institutions around the world.[139]
Is a
natural process, which converts a refined metal to a more chemically-stable form, such as its
oxide,
hydroxide, or
sulfide. It is the gradual destruction of materials (usually
metals) by chemical and/or electrochemical reaction with their environment.
Corrosion engineering is the field dedicated to controlling and stopping corrosion.
Thus, it is also the amount of excess charge on a
capacitor of one
farad charged to a potential difference of one
volt: 1 C = 1 F × 1 V.
The coulomb is equivalent to the charge of approximately 6.242×10^18 (1.036×10^−5 mol)
protons, and −1 C is equivalent to the charge of approximately 6.242×10^18 electrons.
A
new definition, in terms of the
elementary charge, will take effect on 20 May 2019.[141] The new definition fixes the
elementary charge (the charge of the proton) at exactly 1.602176634×10^−19 coulombs, implicitly defining the coulomb as the charge of exactly 1/(1.602176634×10^−19) ≈ 6.2415×10^18 elementary charges.
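The figures in this entry can be reproduced directly from the fixed elementary charge (a minimal sketch):

```python
# 2019 SI redefinition: the elementary charge e is fixed exactly.
E_CHARGE = 1.602176634e-19  # coulombs, exact by definition

charges_per_coulomb = 1 / E_CHARGE
print(f"{charges_per_coulomb:.3e}")  # ~6.242e+18, matching the figure above
```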
Coulomb's law, or Coulomb's inverse-square law, is a
law of
physics for quantifying Coulomb's force, or electrostatic force. Electrostatic force is the amount of force with which stationary,
electrically charged particles either repel, or attract each other. This force and the law for quantifying it, represent one of the most basic forms of force used in the physical sciences, and were an essential basis to the study and development of the theory and field of
classical electromagnetism. The law was first published in 1785 by French physicist
Charles-Augustin de Coulomb.[142]
In its
scalar form, the law is:
F = ke·q1·q2 / r^2,
where ke is the
Coulomb constant (ke ≈ 9×10^9 N⋅m^2⋅C^−2), q1 and q2 are the signed magnitudes of the charges, and the scalar r is the distance between the charges. The force of the interaction between the charges is attractive if the charges have opposite signs (i.e., F is negative) and repulsive if like-signed (i.e., F is positive).
Being an
inverse-square law, the law is analogous to
Isaac Newton's inverse-square
law of universal gravitation. Coulomb's law can be used to derive
Gauss's law, and vice versa.
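A hedged sketch of the scalar law, F = ke·q1·q2/r^2, with illustrative charges and distance (not from the text):

```python
# Coulomb's law with signed charges: positive F = repulsive, negative = attractive.
K_E = 8.9875517923e9  # Coulomb constant, N*m^2/C^2

def coulomb_force(q1, q2, r):
    """Signed electrostatic force in newtons between two point charges."""
    return K_E * q1 * q2 / r**2

# Two like 1 microcoulomb charges 10 cm apart (illustrative values):
print(coulomb_force(1e-6, 1e-6, 0.1))  # ~0.899 N, repulsive
```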
Crystallization is the (natural or artificial) process by which a solid forms, where the atoms or molecules are highly organized into a structure known as a
crystal. Some of the ways by which crystals form are
precipitating from a
solution,
freezing, or more rarely
deposition directly from a
gas. Attributes of the resulting crystal depend largely on factors such as temperature, air pressure, and in the case of liquid crystals, time of fluid evaporation.
Describes the motion of a moving particle that conforms to a known or fixed curve. The study of such motion involves the use of two coordinate systems, the first being planar motion and the second being cylindrical motion.
In
chemistry and
physics, Dalton's law (also called Dalton's law of partial pressures) states that in a mixture of non-reacting gases, the total
pressure exerted is equal to the sum of the
partial pressures of the individual gases.[150]
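A minimal sketch of the law; the partial pressures below are illustrative figures approximating sea-level air, not values from the text:

```python
# Dalton's law: total pressure is the sum of the partial pressures.
partials_kpa = {"N2": 79.1, "O2": 21.2, "Ar": 0.9, "CO2": 0.04}
total = sum(partials_kpa.values())
print(round(total, 2))  # 101.24 kPa
```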
In
materials science, deformation refers to any changes in the shape or size of an object due to
an applied
force (the deformation energy in this case is transferred through work) or
a change in temperature (the deformation energy in this case is transferred through heat).
The first case can be a result of
tensile (pulling) forces,
compressive (pushing) forces,
shear,
bending or
torsion (twisting).
In the second case, the most significant factor, which is determined by the temperature, is the mobility of the structural defects such as grain boundaries, point vacancies, line and screw dislocations, stacking faults and twins in both crystalline and non-crystalline solids. The movement or displacement of such mobile defects is thermally activated, and thus limited by the rate of atomic diffusion.[152][153]
Deformation in
continuum mechanics is the transformation of a body from a reference configuration to a current configuration.[154] A configuration is a set containing the positions of all particles of the body.
A deformation may be caused by
external loads,[155] body forces (such as
gravity or
electromagnetic forces), or changes in temperature, moisture content, or chemical reactions, etc.
The density, or more precisely, the volumetric mass density, of a substance is its
mass per unit
volume. The symbol most often used for density is ρ (the lower case Greek letter
rho), although the Latin letter D can also be used. Mathematically, density is defined as mass divided by volume:[156]
ρ = m / V,
where ρ is the density, m is the mass, and V is the volume. In some cases (for instance, in the United States oil and gas industry), density is loosely defined as its
weight per unit
volume,[157] although this is scientifically inaccurate – this quantity is more specifically called
specific weight.
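A short sketch distinguishing density from specific weight (water-like illustrative values, assumed for the example):

```python
# Density rho = m / V; specific weight gamma = rho * g is a distinct quantity.
G = 9.80665  # standard gravity, m/s^2

def density(mass_kg, volume_m3):
    return mass_kg / volume_m3

rho = density(1.0, 0.001)  # 1 kg of water in 1 litre
print(rho)                 # 1000.0 kg/m^3 (density)
print(rho * G)             # ~9806.65 N/m^3 (specific weight, not density)
```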
The derivative of a
function of a real variable measures the sensitivity to change of the function value (output value) with respect to a change in its argument (input value). Derivatives are a fundamental tool of
calculus. For example, the derivative of the position of a moving object with respect to
time is the object's
velocity: this measures how quickly the position of the object changes when time advances.
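The position/velocity example can be sketched numerically with a central difference (the position function below is an illustrative assumption):

```python
# Central-difference estimate of a derivative: velocity from position x(t).
def velocity(x, t, h=1e-6):
    """Numerically differentiate the position function x at time t."""
    return (x(t + h) - x(t - h)) / (2 * h)

x = lambda t: 5 * t * t  # position under constant acceleration (assumed)
print(velocity(x, 3.0))  # ~30.0, matching the exact derivative 10*t at t = 3
```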
Diamagnetic materials are repelled by a
magnetic field; an applied magnetic field creates an
induced magnetic field in them in the opposite direction, causing a repulsive force. In contrast,
paramagnetic and
ferromagnetic materials are attracted by a magnetic field. Diamagnetism is a
quantum mechanical effect that occurs in all materials; when it is the only contribution to the magnetism, the material is called diamagnetic. In paramagnetic and ferromagnetic substances the weak diamagnetic force is overcome by the attractive force of
magnetic dipoles in the material. The
magnetic permeability of diamagnetic materials is less than μ0, the permeability of vacuum. In most materials diamagnetism is a weak effect which can only be detected by sensitive laboratory instruments, but a
superconductor acts as a strong diamagnet because it repels a magnetic field entirely from its interior.
A differential pulley, also called Weston differential pulley, or colloquially chain fall, is used to manually lift very heavy objects like
car engines. It is operated by pulling upon the slack section of a continuous chain that wraps around pulleys. The relative size of two connected pulleys determines the maximum weight that can be lifted by hand. The load will remain in place (and not lower under the force of
gravity) until the chain is pulled.[158]
Is the net movement of molecules or atoms from a region of higher concentration (or high chemical potential) to a region of lower concentration (or low chemical potential).
is the analysis of the relationships between different
physical quantities by identifying their
base quantities (such as
length,
mass,
time, and
electric charge) and
units of measure (such as miles vs. kilometers, or pounds vs. kilograms) and tracking these dimensions as calculations or comparisons are performed. The
conversion of units from one dimensional unit to another is often somewhat complex. Dimensional analysis, or more specifically the factor-label method, also known as the unit-factor method, is a widely used technique for such conversions using the rules of
algebra.[159][160][161]
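A factor-label sketch: multiplying by conversion ratios equal to one makes the unwanted units cancel (the 60 mph figure is illustrative):

```python
# Convert 60 miles/hour to metres/second by the factor-label method:
# (mi/h) * (m/mi) / (s/h) -> m/s, so the mile and hour units cancel.
MILE_M, HOUR_S = 1609.344, 3600.0

mph = 60.0
mps = mph * MILE_M / HOUR_S
print(round(mps, 3))  # 26.822 m/s
```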
Direct integration is a
structural analysis method for measuring internal shear, internal moment, rotation, and deflection of a beam.
For a beam with an applied weight w(x), taking downward to be positive, the internal
shear force is given by taking the negative integral of the weight: V(x) = −∫ w(x) dx.
The internal moment M(x) is the integral of the internal shear: M(x) = ∫ V(x) dx.
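A minimal numeric sketch of direct integration for a uniform load (constants of integration taken as zero, i.e. V(0) = M(0) = 0; the load value is illustrative):

```python
# Direct integration for a uniform load w(x) = w0, downward positive.
w0 = 2.0  # kN/m (assumed)

def V(x):  # internal shear: V(x) = -integral of w = -w0*x
    return -w0 * x

def M(x):  # internal moment: M(x) = integral of V = -w0*x**2/2
    return -w0 * x**2 / 2

print(V(3.0), M(3.0))  # -6.0 kN and -9.0 kN*m at x = 3 m
```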
In
optics, dispersion is the phenomenon in which the
phase velocity of a wave depends on its frequency.[162]
Media having this common property may be termed dispersive media. Sometimes the term chromatic dispersion is used for specificity.
Although the term is used in the field of optics to describe
light and other
electromagnetic waves, dispersion in the same sense can apply to any sort of wave motion such as
acoustic dispersion in the case of sound and seismic waves, in
gravity waves (ocean waves), and for telecommunication signals along
transmission lines (such as
coaxial cable) or
optical fiber.
In
fluid mechanics, displacement occurs when an object is immersed in a
fluid, pushing it out of the way and taking its place. The volume of the fluid displaced can then be measured, and from this, the volume of the immersed object can be deduced (the volume of the immersed object will be exactly equal to the volume of the displaced fluid).
Is a
vector whose length is the shortest
distance from the initial to the final
position of a point P.[163] It quantifies both the distance and direction of an imaginary motion along a straight line from the initial position to the final position of the point. A displacement may be identified with the
translation that maps the initial position to the final position.
The Doppler effect (or the Doppler shift) is the change in
frequency or
wavelength of a
wave in relation to an
observer who is moving relative to the wave source.[164] It is named after the
Austrian physicist
Christian Doppler, who described the phenomenon in 1842.
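A sketch of the classical Doppler shift for sound, for an observer at rest and an approaching source (the speed of sound and the figures are assumptions, not from the text):

```python
# Classical Doppler shift, observer at rest, source approaching:
# f_observed = f * v / (v - v_source)
V_SOUND = 343.0  # m/s in air at ~20 C (assumed)

def doppler_approaching(f, v_source, v=V_SOUND):
    return f * v / (v - v_source)

# A 440 Hz source approaching at 30 m/s:
print(round(doppler_approaching(440.0, 30.0), 1))  # ~482.2 Hz, shifted upward
```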
The dose–response relationship, or exposure–response relationship, describes the magnitude of the
response of an
organism, as a
function of exposure (or
doses) to a
stimulus or
stressor (usually a
chemical) after a certain exposure time.[165] Dose–response relationships can be described by dose–response curves. A stimulus response function or stimulus response curve is defined more broadly as the response from any type of stimulus, not limited to chemicals.
In
fluid dynamics, drag (sometimes called air resistance, a type of
friction, or fluid resistance, another type of friction or fluid friction) is a
force acting opposite to the relative motion of any object moving with respect to a surrounding fluid.[166] This can exist between two fluid layers (or surfaces) or a fluid and a
solid surface. Unlike other resistive forces, such as dry
friction, which are nearly independent of velocity, drag forces depend on velocity.[167][168]
Drag force is proportional to the velocity for a
laminar flow and the squared velocity for a
turbulent flow. Even though the ultimate cause of a drag is viscous friction, the turbulent drag is independent of
viscosity.[169] Drag forces always decrease fluid velocity relative to the solid object in the fluid's
path.
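The velocity-squared dependence for turbulent flow can be sketched with the standard drag equation (the density, coefficient, and area are illustrative assumptions):

```python
# Quadratic (turbulent) drag: F = 0.5 * rho * v^2 * Cd * A.
def drag_force(rho, v, cd, area):
    return 0.5 * rho * v**2 * cd * area

# A car-sized object (Cd ~ 0.3, A ~ 2.2 m^2) at 30 m/s in air (assumed values):
print(round(drag_force(1.225, 30.0, 0.3, 2.2), 1))  # ~363.8 N
```

Doubling the velocity quadruples this force, which is why drag dominates at high speed.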
Is a measure of a material's ability to undergo significant plastic deformation before rupture, which may be expressed as percent elongation or percent area reduction from a tensile test.
In physics and chemistry, effusion is the process in which a gas escapes from a container through a hole of diameter considerably smaller than the
mean free path of the molecules.[170]
In
physics, elasticity is the ability of a body to resist a distorting influence and to return to its original size and shape when that influence or force is removed. Solid objects will
deform when adequate
forces are applied to them. If the material is elastic, the object will return to its initial shape and size when these forces are removed.
is the
physical property of
matter that causes it to experience a
force when placed in an
electromagnetic field. There are two types of electric charges; positive and negative (commonly carried by
protons and
electrons respectively). Like charges repel and unlike attract. An object with an absence of net charge is referred to as neutral. Early knowledge of how charged substances interact is now called
classical electrodynamics, and is still accurate for problems that do not require consideration of
quantum effects.
Is a flow of
electric charge.[171]: 2 In
electric circuits this charge is often carried by moving
electrons in a
wire. It can also be carried by
ions in an
electrolyte, or by both ions and electrons such as in an ionised gas (
plasma).[172]
The
SI unit for measuring an electric current is the
ampere, which is the flow of electric charge across a surface at the rate of one
coulomb per second. Electric current is measured using a device called an
ammeter.[173]
Surrounds an
electric charge, and exerts force on other charges in the field, attracting or repelling them.[174][175] Electric field is sometimes abbreviated as E-field.
Is an
electrical machine that converts
electrical energy into
mechanical energy. Most electric motors operate through the interaction between the motor's
magnetic field and
winding currents to generate force in the form of
rotation. Electric motors can be powered by
direct current (DC) sources, such as from batteries, motor vehicles or rectifiers, or by
alternating current (AC) sources, such as a power grid,
inverters or electrical generators. An
electric generator is mechanically identical to an electric motor, but operates in the reverse direction, accepting mechanical energy (such as from flowing water) and converting this mechanical energy into electrical energy.
(Also called the electric field potential, potential drop or the electrostatic potential) is the amount of
work needed to move a unit of
positive charge from a reference point to a specific point inside the field without producing an acceleration. Typically, the reference point is the
Earth or a point at
infinity, although any point beyond the influence of the electric field charge can be used.
Electric potential energy, or electrostatic potential energy, is a
potential energy (measured in
joules) that results from
conservative Coulomb forces and is associated with the configuration of a particular set of point
charges within a defined
system. An object may have electric potential energy by virtue of two key elements: its own electric charge and its relative position to other electrically charged objects. The term "electric potential energy" is used to describe the potential energy in systems with
time-variant electric fields, while the term "electrostatic potential energy" is used to describe the potential energy in systems with
time-invariant electric fields.
Is an object or type of material that allows the flow of charge (
electrical current) in one or more directions. Materials made of metal are common electrical conductors. Electrical current is generated by the flow of negatively charged electrons, positively charged holes, and positive or negative ions in some cases.
Is the measure of the opposition that a
circuit presents to a
current when a
voltage is applied. The term complex impedance may be used interchangeably.
Is a material whose internal
electric charges do not flow freely; very little
electric current will flow through it under the influence of an
electric field. This contrasts with other materials,
semiconductors and
conductors, which conduct electric current more easily. The property that distinguishes an insulator is its
resistivity; insulators have higher resistivity than semiconductors or conductors.
Is a type of
magnet in which the
magnetic field is produced by an
electric current. Electromagnets usually consist of wire wound into a
coil. A current through the wire creates a magnetic field which is concentrated in the hole, denoting the centre of the coil. The magnetic field disappears when the current is turned off. The wire turns are often wound around a
magnetic core made from a
ferromagnetic or
ferrimagnetic material such as
iron; the magnetic core concentrates the
magnetic flux and makes a more powerful magnet.
Electromechanics[180][181][182][183] combines processes and procedures drawn from
electrical engineering and
mechanical engineering. Electromechanics focuses on the interaction of electrical and mechanical systems as a whole and how the two systems interact with each other. This process is especially prominent in systems such as those of DC or AC rotating electrical machines which can be designed and operated to generate power from a mechanical process (
generator) or used to power a mechanical effect (
motor). Electrical engineering in this context also encompasses
electronics engineering.
In
chemistry, an electron pair, or Lewis pair, consists of two
electrons that occupy the same
molecular orbital but have opposite
spins.
Gilbert N. Lewis introduced the concepts of both the electron pair and the covalent bond in a landmark paper he published in 1916.[189]
Symbolized as χ, is the measurement of the tendency of an
atom to attract a shared pair of
electrons (or
electron density).[190] An atom's electronegativity is affected by both its
atomic number and the distance at which its
valence electrons reside from the charged nucleus. The higher the associated electronegativity, the more an atom or a substituent group attracts electrons.
Is a process where a sample of some material (e.g., soil, waste or drinking water, bodily fluids,
minerals,
chemical compounds) is analyzed for its
elemental and sometimes
isotopic composition.[citation needed] Elemental analysis can be qualitative (determining what elements are present), and it can be quantitative (determining how much of each are present). Elemental analysis falls within the ambit of
analytical chemistry, the set of instruments involved in deciphering the chemical nature of our world.
Is any process with an increase in the
enthalpy H (or
internal energy U) of the system.[192] In such a process, a closed system usually absorbs
thermal energy from its surroundings, which is
heat transfer into the system. It may be a chemical process, such as dissolving ammonium nitrate in water, or a physical process, such as the melting of ice cubes.
Is the use of
scientific principles to design and build machines, structures, and other items, including bridges, tunnels, roads, vehicles, and buildings.[195] The discipline of engineering encompasses a broad range of more specialized
fields of engineering, each with a more specific emphasis on particular areas of
applied mathematics,
applied science, and types of application. The term engineering is derived from the
Latin ingenium, meaning "cleverness", and ingeniare, meaning "to contrive, devise".[196]
Engineering economics, previously known as engineering economy, is a subset of
economics concerned with the use and "...application of economic principles"[197] in the analysis of engineering decisions.[198] As a discipline, it is focused on the branch of economics known as
microeconomics in that it studies the behavior of individuals and firms in making decisions regarding the allocation of limited resources. Thus, it focuses on the decision making process, its context and environment.[197] It is pragmatic by nature, integrating economic theory with engineering practice.[197] But, it is also a simplified application of microeconomic theory in that it assumes elements such as price determination, competition and demand/supply to be fixed inputs from other sources.[197] As a discipline though, it is closely related to others such as
statistics,
mathematics and
cost accounting.[197] It draws upon the logical framework of economics but adds to that the analytical power of mathematics and statistics.[197]
Or engineering science, refers to the study of the combined disciplines of
physics,
mathematics,
chemistry,
biology, and
engineering, particularly computer, nuclear, electrical, electronic, aerospace, materials or mechanical engineering. By focusing on the
scientific method as a rigorous basis, it seeks ways to apply, design, and develop new solutions in engineering.[201][202][203][204]
In
statistics, an estimator is a rule for calculating an estimate of a given quantity based on
observed data: thus the rule (the estimator), the quantity of interest (the
estimand) and its result (the estimate) are distinguished.[206] For example, the
sample mean is a commonly used estimator of the
population mean. There are
point and
interval estimators. The
point estimators yield single-valued results, although this includes the possibility of single vector-valued results and results that can be expressed as a single function. This is in contrast to an
interval estimator, where the result would be a range of plausible values (or vectors or functions).
Euler–Bernoulli beam theory (also known as engineer's beam theory or classical beam theory)[207] is a simplification of the
linear theory of elasticity which provides a means of calculating the load-carrying and
deflection characteristics of
beams. It covers the case for small deflections of a
beam that are subjected to lateral loads only. It is thus a special case of
Timoshenko beam theory. It was first enunciated circa 1750,[208] but was not applied on a large scale until the development of the
Eiffel Tower and the
Ferris wheel in the late 19th century. Following these successful demonstrations, it quickly became a cornerstone of engineering and an enabler of the
Second Industrial Revolution. Additional
mathematical models have been developed such as
plate theory, but the simplicity of beam theory makes it an important tool in the sciences, especially
structural and
mechanical engineering.
In
thermodynamics, the term exothermic process (exo- : "outside") describes a process or reaction that releases
energy from the system to its surroundings, usually in the form of
heat, but also in a form of
light (e.g. a spark, flame, or flash),
electricity (e.g. a battery), or
sound (e.g. explosion heard when burning hydrogen). Its etymology stems from the Greek prefix ἔξω (exō, which means "outwards") and the Greek word θερμικός (thermikós, which means "thermal").[209]
(FoS), also known as (and used interchangeably with) safety factor (SF), expresses how much stronger a system is than it needs to be for an intended load.
[210] The farad (symbol: F) is the
SI derived unit of electrical
capacitance, the ability of a body to store an electrical charge. It is named after the English physicist
Michael Faraday.
The Faraday constant is given by F = e × NA. Both of these values have exact defined values, and hence F has a known exact value. NA is the
Avogadro constant (the ratio of the number of particles, N, which is unitless, to the amount of substance, n, in units of moles), and e is the
elementary charge or the magnitude of the charge of an electron. This relation holds because the amount of charge of a mole of electrons is equal to the amount of charge in one electron multiplied by the number of electrons in a mole.
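The relation described, F = NA × e, can be evaluated directly (a sketch using the exact 2019 SI values):

```python
# Faraday constant: the charge of one mole of electrons, F = N_A * e.
N_A = 6.02214076e23         # 1/mol, exact (2019 SI)
E_CHARGE = 1.602176634e-19  # C, exact (2019 SI)

faraday = N_A * E_CHARGE
print(round(faraday, 2))  # 96485.33 C/mol
```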
In
optics, Fermat's principle, or the principle of least time, named after French mathematician
Pierre de Fermat, is the principle that the path taken between two points by a ray of light is the path that can be traversed in the least time. This principle is sometimes taken as the definition of a ray of light.[215] However, this version of the principle is not general; a more modern statement of the principle is that rays of light traverse the path of stationary optical length with respect to variations of the path.[216] In other words, a ray of light prefers the path such that there are other paths, arbitrarily nearby on either side, along which the ray would take almost exactly the same time to traverse.
(FEM), is the most widely used method for solving problems of engineering and
mathematical models. Typical problem areas of interest include the traditional fields of
structural analysis,
heat transfer,
fluid flow, mass transport, and
electromagnetic potential.
The FEM is a particular
numerical method for solving
partial differential equations in two or three space variables (i.e., some
boundary value problems). To solve a problem, the FEM subdivides a large system into smaller, simpler parts that are called finite elements. This is achieved by a particular space
discretization in the space dimensions, which is implemented by the construction of a
mesh of the object: the numerical domain for the solution, which has a finite number of points.
The finite element method formulation of a boundary value problem finally results in a system of
algebraic equations. The method approximates the unknown function over the domain.[217]
The simple equations that model these finite elements are then assembled into a larger system of equations that models the entire problem. The FEM then uses
variational methods from the
calculus of variations to approximate a solution by minimizing an associated error function.
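A hedged, minimal one-dimensional illustration of the method (the model problem −u″ = 1 on [0,1] with u(0) = u(1) = 0, linear "hat" elements on a uniform mesh, is an assumption for this sketch, not from the text):

```python
# Minimal 1D FEM sketch: subdivide [0,1] into n elements, assemble the
# tridiagonal stiffness system for -u'' = 1, and solve it directly.
def fem_1d(n):
    h = 1.0 / n
    # Assembled interior equations: (-u[i-1] + 2u[i] - u[i+1]) / h = h
    a = [2.0 / h] * (n - 1)  # diagonal of the stiffness matrix
    b = [h] * (n - 1)        # load vector (integral of each hat function)
    off = -1.0 / h           # off-diagonal entries
    # Thomas algorithm: forward elimination, then back substitution.
    for i in range(1, n - 1):
        m = off / a[i - 1]
        a[i] -= m * off
        b[i] -= m * b[i - 1]
    u = [0.0] * (n - 1)
    u[-1] = b[-1] / a[-1]
    for i in range(n - 3, -1, -1):
        u[i] = (b[i] - off * u[i + 1]) / a[i]
    return u  # interior nodal values

u = fem_1d(8)
print(round(u[3], 6))  # midpoint node x = 0.5: matches exact x*(1-x)/2 = 0.125
```

For this particular problem, the nodal values of the linear-element solution coincide with the exact solution u(x) = x(1 − x)/2, which makes it a convenient sanity check.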
For Inspiration and Recognition of Science and Technology – is an organization founded by inventor Dean Kamen in 1989 to develop ways to inspire students in engineering and technology fields.
In
physics and
engineering, fluid dynamics is a subdiscipline of
fluid mechanics that describes the flow of
fluids—
liquids and
gases. It has several subdisciplines, including
aerodynamics (the study of air and other gases in motion) and hydrodynamics (the study of liquids in motion).
Fluid statics, or hydrostatics, is the branch of
fluid mechanics that studies "
fluids at rest and the pressure in a fluid or exerted by a fluid on an immersed body".[221]
Is a mechanical device specifically designed to use the conservation of
angular momentum so as to efficiently store
rotational energy; a form of kinetic energy proportional to the product of its
moment of inertia and the square of its
rotational speed. In particular, if we assume the flywheel's moment of inertia to be constant (i.e., a flywheel with fixed mass and
second moment of area revolving about some fixed axis) then the stored (rotational) energy is directly associated with the square of its rotational speed.
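The stored energy E = ½·I·ω² can be evaluated directly; a minimal sketch (the function name is an assumption):

```python
import math

def flywheel_energy(moment_of_inertia, rpm):
    """Rotational kinetic energy E = 1/2 * I * omega**2, in joules."""
    omega = rpm * 2.0 * math.pi / 60.0  # convert rev/min to rad/s
    return 0.5 * moment_of_inertia * omega ** 2

# a 1 kg*m^2 flywheel spinning at 3000 rpm stores roughly 49.3 kJ
print(flywheel_energy(1.0, 3000.0))
```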
In
geometrical optics, a focus, also called an image point, is the point where
light rays originating from a point on the object
converge.[222] Although the focus is conceptually a point, physically the focus has a spatial extent, called the
blur circle. This non-ideal focusing may be caused by
aberrations of the imaging optics. In the absence of significant aberrations, the smallest possible blur circle is the
Airy disc, which is caused by
diffraction from the optical system's
aperture. Aberrations tend to worsen as the aperture diameter increases, while the Airy circle is smallest for large apertures.
In
materials science, fracture toughness is the critical
stress intensity factor of a sharp crack where propagation of the crack suddenly becomes rapid and unlimited. A component's thickness affects the constraint conditions at the tip of a crack with thin components having
plane stress conditions and thick components having
plane strain conditions. Plane strain conditions give the lowest fracture toughness value which is a
material property. The critical value of stress intensity factor in
mode I loading measured under plane strain conditions is known as the plane strain fracture toughness, denoted KIc.[226] When a test fails to meet the thickness and other test requirements that are in place to ensure plane strain conditions, the fracture toughness value produced is given the designation Kc. Fracture toughness is a quantitative way of expressing a material's resistance to crack propagation, and standard values for a given material are generally available.
The melting point (or, rarely, liquefaction point) of a substance is the
temperature at which it changes
state from
solid to
liquid. At the melting point the solid and liquid phases exist in
equilibrium. The melting point of a substance depends on
pressure and is usually specified at a
standard pressure such as 1
atmosphere or 100
kPa.
When considered as the temperature of the reverse change from liquid to solid, it is referred to as the freezing point or crystallization point. Because of the ability of substances to
supercool, the freezing point can easily appear to be below its actual value. When the "characteristic freezing point" of a substance is determined, in fact the actual methodology is almost always "the principle of observing the disappearance rather than the formation of ice, that is, the
melting point".[227]
Is the
force resisting the relative motion of solid surfaces, fluid layers, and material elements
sliding against each other.[228] There are several types of friction:
Dry friction is a force that opposes the relative lateral motion of two solid surfaces in contact. Dry friction is subdivided into static friction ("
stiction") between non-moving surfaces, and kinetic friction between moving surfaces. With the exception of atomic or molecular friction, dry friction generally arises from the interaction of surface features, known as
asperities.
Fluid friction describes the friction between layers of a
viscous fluid that are moving relative to each other.[229][230]
Lubricated friction is a case of fluid friction where a
lubricant fluid separates two solid surfaces.[231][232][233]
Skin friction is a component of
drag, the force resisting the motion of a fluid across the surface of a body.
Internal friction is the force resisting motion between the elements making up a solid material while it undergoes
deformation.[230]
In mathematics, a function[note 2] is a
binary relation between two
sets that associates every element of the first set to exactly one element of the second set. Typical examples are functions from
integers to integers, or from the
real numbers to real numbers.
The fundamental frequency, often referred to simply as the fundamental, is defined as the lowest
frequency of a
periodic waveform. In music, the fundamental is the musical
pitch of a note that is perceived as the lowest
partial present. In terms of a superposition of
sinusoids, the fundamental frequency is the lowest-frequency sinusoid in the sum of harmonically related frequencies, or the frequency of the difference between adjacent frequencies. In some contexts, the fundamental is abbreviated as f0, indicating the lowest frequency
counting from zero.[234][235][236] In other contexts, it is more common to abbreviate it as f1, the first
harmonic.[237][238][239][240][241] (The second harmonic is then f2 = 2⋅f1, etc. In this context, the zeroth harmonic would be 0
Hz.)
In
physics, the fundamental interactions, also known as fundamental forces, are the interactions that do not appear to be reducible to more basic interactions. There are four fundamental interactions known to exist: the
gravitational and
electromagnetic interactions, which produce significant long-range forces whose effects can be seen directly in everyday life, and the
strong and
weak interactions, which produce forces at
minuscule, subatomic distances and govern nuclear interactions. Some scientists hypothesize that a
fifth force might exist, but these hypotheses remain speculative.[242][243][244]
The Fundamentals of Engineering (FE) exam, also referred to as the Engineer in Training (EIT) exam, and formerly in some states as the Engineering Intern (EI) exam, is the first of two examinations that
engineers must pass in order to be licensed as a
Professional Engineer in the
United States. The second examination is
Principles and Practice of Engineering Examination. The FE
exam is open to anyone with a
degree in engineering or a related field, or currently enrolled in the last year of an
ABET-accredited engineering degree program. Some state licensure boards permit students to take it prior to their final year, and numerous states allow those who have never attended an approved program to take the exam if they have a state-determined number of years of work experience in engineering. Some states allow those with ABET-accredited "Engineering Technology" or "ETAC" degrees to take the examination. The state of
Michigan has no admission pre-requisites for the FE.[245] The exam is administered by the
National Council of Examiners for Engineering and Surveying (NCEES).
A galvanic cell or voltaic cell, named after
Luigi Galvani or
Alessandro Volta, respectively, is an
electrochemical cell that derives electrical energy from spontaneous
redox reactions taking place within the cell. It generally consists of two different metals immersed in electrolytes, or of individual half-cells with different metals and their ions in solution connected by a
salt bridge or separated by a porous membrane. Volta was the inventor of the
voltaic pile, the first
electrical battery. In common usage, the word "battery" has come to include a single galvanic cell, but a battery properly consists of multiple cells.[246]
Is one of the
four fundamental states of matter (the others being
solid,
liquid, and
plasma). A pure gas may be made up of individual
atoms (e.g. a
noble gas like
neon),
elemental molecules made from one type of atom (e.g.
oxygen), or
compound molecules made from a variety of atoms (e.g.
carbon dioxide). A gas
mixture, such as
air, contains a variety of pure gases. What distinguishes a gas from liquids and solids is the vast separation of the individual gas particles.
In mathematics, the geometric mean is a
mean or
average, which indicates the
central tendency or typical value of a set of numbers by using the product of their values (as opposed to the
arithmetic mean which uses their sum). The geometric mean is defined as the
nth root of the
product of n numbers, i.e., for a set of numbers x1, x2, ..., xn, the geometric mean is defined as (x1 · x2 · ⋯ · xn)^(1/n).
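A minimal sketch of the definition, computed via logarithms so the intermediate product cannot overflow for large sets (the helper name is hypothetical):

```python
import math

def geometric_mean(xs):
    """nth root of the product of n positive numbers."""
    return math.exp(sum(math.log(x) for x in xs) / len(xs))

print(geometric_mean([2, 8]))  # ~4, since sqrt(2 * 8) = 4
```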
Is, with
arithmetic, one of the oldest branches of
mathematics. It is concerned with properties of space that are related with distance, shape, size, and relative position of figures.[247] A mathematician who works in the field of geometry is called a
geometer.
Graham's law of effusion (also called Graham's law of
diffusion) was formulated by Scottish physical chemist
Thomas Graham in 1848.[254] Graham found experimentally that the rate of
effusion of a gas is inversely proportional to the square root of the mass of its particles.[254] This formula can be written as:
Rate1/Rate2 = √(M2/M1),
where:
Rate1 is the rate of effusion for the first gas (volume or number of moles per unit time).
Rate2 is the rate of effusion for the second gas.
M1 is the molar mass of gas 1.
M2 is the molar mass of gas 2.
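As a numerical illustration of the law (a sketch; the function name is an assumption), hydrogen effuses roughly four times as fast as oxygen:

```python
import math

def effusion_rate_ratio(molar_mass_1, molar_mass_2):
    """Graham's law: Rate1/Rate2 = sqrt(M2 / M1)."""
    return math.sqrt(molar_mass_2 / molar_mass_1)

# H2 (2.016 g/mol) vs O2 (32.00 g/mol): ratio ~ 3.98
print(effusion_rate_ratio(2.016, 32.00))
```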
Gravitational energy or gravitational potential energy is the
potential energy a
massive object has in relation to another massive object due to
gravity. It is the potential energy associated with the
gravitational field, which is released (converted into
kinetic energy) when the objects
fall towards each other. Gravitational potential energy increases when two objects are brought further apart.
For two pairwise interacting point particles, the gravitational potential energy U is given by
U = −G m1 m2 / r,
where m1 and m2 are the masses of the two particles, r is the distance between them, and G is the
gravitational constant.[255]
Close to the Earth's surface, the gravitational field is approximately constant, and the gravitational potential energy of an object of mass m at height h reduces to U = mgh.
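Both the point-mass form and the near-surface form can be evaluated numerically; a minimal sketch (function names are assumptions):

```python
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def gravitational_pe(m1, m2, r):
    """U = -G * m1 * m2 / r for two point masses, in joules."""
    return -G * m1 * m2 / r

def near_surface_pe(m, h, g=9.81):
    """U = m * g * h, valid close to the Earth's surface."""
    return m * g * h

# lifting 2 kg by 10 m near the surface stores about 196 J
print(near_surface_pe(2.0, 10.0))
```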
In
physics, a gravitational field is a
model used to explain the influences that a massive body extends into the space around itself, producing a force on another massive body.[256] Thus, a gravitational
field is used to explain
gravitational phenomena, and is measured in
newtons per
kilogram (N/kg). In its original concept,
gravity was a
force between point
masses. Following
Isaac Newton,
Pierre-Simon Laplace attempted to model gravity as some kind of
radiation field or
fluid, and since the 19th century, explanations for gravity have usually been taught in terms of a field model, rather than a point attraction.
In a field model, rather than two particles attracting each other, the particles distort
spacetime via their mass, and this distortion is what is perceived and measured as a "force".[citation needed] In such a model one states that matter moves in certain ways in response to the curvature of spacetime,[257] and that there is either no gravitational force,[258] or that gravity is a
fictitious force.[259]
Gravity is distinguished from other forces by its obedience to the
equivalence principle.
In
classical mechanics, the gravitational potential at a location is equal to the
work (
energy transferred) per unit mass that would be needed to move an object to that location from a fixed reference location. It is
analogous to the
electric potential with
mass playing the role of
charge. The reference location, where the potential is zero, is by convention
infinitely far away from any mass, resulting in a negative potential at any
finite distance.
In mathematics, the gravitational potential is also known as the
Newtonian potential and is fundamental in the study of
potential theory. It may also be used for solving the electrostatic and magnetostatic fields generated by uniformly charged or polarized ellipsoidal bodies.[260]
Or gravitation, is a
natural phenomenon by which all things with
mass or
energy—including
planets,
stars,
galaxies, and even
light[267]—are brought toward (or gravitate toward) one another. On
Earth, gravity gives
weight to
physical objects, and the
Moon's
gravity causes the ocean
tides. The gravitational attraction of the original gaseous matter present in the
Universe caused it to begin
coalescing and
forming stars and caused the stars to group together into galaxies, so gravity is responsible for many of the large-scale structures in the Universe. Gravity has an infinite range, although its effects become increasingly weaker as objects get further away.
The period of time over which one-half of a quantity of an unstable isotope decays into other elements; more generally, the time required for half of a substance to diffuse out of or otherwise react in a system.
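The definition implies the decay law N(t) = N0 · (1/2)^(t / t½); a short sketch (names are assumptions):

```python
def remaining_fraction(t, half_life):
    """Fraction of the original quantity left after time t."""
    return 0.5 ** (t / half_life)

print(remaining_fraction(1.0, 1.0))  # 0.5 after one half-life
print(remaining_fraction(2.0, 1.0))  # 0.25 after two half-lives
```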
In
mathematics, the harmonic mean (sometimes called the subcontrary mean) is one of several kinds of
average, and in particular, one of the
Pythagorean means. Typically, it is appropriate for situations when the average of
rates is desired.
The harmonic mean can be expressed as the
reciprocal of the
arithmetic mean of the reciprocals of the given set of observations. As a simple example, the harmonic mean of 1, 4, and 4 is 3 / (1/1 + 1/4 + 1/4) = 3/1.5 = 2.
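The definition translates directly into code (a sketch; the function name is an assumption):

```python
def harmonic_mean(xs):
    """Reciprocal of the arithmetic mean of the reciprocals."""
    return len(xs) / sum(1.0 / x for x in xs)

print(harmonic_mean([1, 4, 4]))  # 2.0
```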
Is a discipline of
thermal engineering that concerns the generation, use, conversion, and exchange of
thermal energy (
heat) between physical systems. Heat transfer is classified into various mechanisms, such as
thermal conduction,
thermal convection,
thermal radiation, and transfer of energy by
phase changes. Engineers also consider the transfer of mass of differing chemical species, either cold or hot, to achieve heat transfer. While these mechanisms have distinct characteristics, they often occur simultaneously in the same system.
In
thermodynamics, the Helmholtz free energy (or Helmholtz energy) is a
thermodynamic potential that measures the useful
work obtainable from a
closed thermodynamic system at a constant
temperature and
volume (
isothermal,
isochoric). The negative of the change in the Helmholtz energy during a process is equal to the maximum amount of work that the system can perform in a thermodynamic process in which temperature is held constant. If the volume were not held constant, part of this work would be performed as boundary work. This makes the Helmholtz energy useful for systems held at constant volume. Furthermore, at constant temperature, the Helmholtz free energy is minimized at equilibrium.
Can be used to estimate the
pH of a
buffer solution. The numerical value of the
acid dissociation constant, Ka, of the acid is known or assumed. The pH is calculated for given values of the concentrations of the acid, HA and of a salt, MA, of its conjugate base, A−; for example, the solution may contain
acetic acid and
sodium acetate.
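A minimal sketch of the calculation, using pH = pKa + log10([A−]/[HA]) (the pKa of acetic acid is the commonly tabulated ≈4.76; the function name is an assumption):

```python
import math

def buffer_ph(pKa, conjugate_base_conc, acid_conc):
    """Henderson-Hasselbalch: pH = pKa + log10([A-]/[HA])."""
    return pKa + math.log10(conjugate_base_conc / acid_conc)

# equal concentrations of acetic acid and acetate give pH = pKa
print(buffer_ph(4.76, 0.10, 0.10))  # 4.76
```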
In physical
chemistry, Henry's law is a
gas law that states that the amount of dissolved gas in a liquid is proportional to its
partial pressure above the liquid. The proportionality factor is called Henry's law constant. It was formulated by the English chemist
William Henry, who studied the topic in the early 19th century.
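In its pressure-over-concentration form the law reads c = p / kH; a sketch, where the kH value for oxygen in water (~769 L·atm/mol) is an approximate tabulated figure and the function name is an assumption:

```python
def dissolved_concentration(partial_pressure_atm, k_h):
    """Henry's law, c = p / kH, with kH in L*atm/mol and c in mol/L."""
    return partial_pressure_atm / k_h

# O2 at its atmospheric partial pressure (~0.21 atm): roughly 2.7e-4 mol/L
print(dissolved_concentration(0.21, 769.0))
```

Note that Henry's law constants are tabulated in several reciprocal conventions, so the units of kH must be checked before plugging in a value.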
Is a device used for lifting or lowering a load by means of a drum or lift-wheel around which rope or chain wraps. It may be manually operated, electrically or
pneumatically driven and may use chain, fiber or wire
rope as its lifting medium. The most familiar form is an
elevator, the car of which is raised and lowered by a hoist mechanism. Most hoists couple to their loads using a
lifting hook. Today, there are a few governing bodies for the North American overhead hoist industry which include the Hoist Manufacturers Institute (
HMI), ASME, and the Occupational Safety and Health Administration (
OSHA). HMI is a product council of the Material Handling Industry of America consisting of hoist manufacturers promoting safe use of their products.
In
mathematics, an identity is an
equality relating one mathematical expression A to another mathematical expression B, such that A and B (which might contain some
variables) produce the same value for all values of the variables within a certain range of validity.[280] In other words, A = B is an identity if A and B define the same
functions, and an identity is an equality between functions that are differently defined. For example, (a + b)² = a² + 2ab + b² and cos²θ + sin²θ = 1 are identities.[280] Identities are sometimes indicated by the
triple bar symbol ≡ instead of =, the
equals sign.[281]
Also known as a ramp, is a flat supporting surface tilted at an angle, with one end higher than the other, used as an aid for raising or lowering a load.[282][283][284] The inclined plane is one of the six classical
simple machines defined by Renaissance scientists. Inclined planes are widely used to move heavy loads over vertical obstacles; examples vary from a ramp used to load goods into a truck, to a person walking up a pedestrian ramp, to an automobile or railroad train climbing a grade.[284]
In
electromagnetism and
electronics, inductance is the tendency of an
electrical conductor to oppose a change in the
electric current flowing through it. The flow of electric current creates a
magnetic field around the conductor. The field strength depends on the magnitude of the current, and follows any changes in current. From
Faraday's law of induction, any change in magnetic field through a circuit induces an
electromotive force (EMF) (
voltage) in the conductors, a process known as
electromagnetic induction. This induced voltage created by the changing current has the effect of opposing the change in current. This is stated by
Lenz's law, and the voltage is called back EMF. Inductance is defined as the ratio of the induced voltage to the rate of change of current causing it. It is a proportionality factor that depends on the geometry of circuit conductors and the
magnetic permeability of nearby materials.[286] An
electronic component designed to add inductance to a circuit is called an
inductor. It typically consists of a
coil or helix of wire.
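The defining ratio can be illustrated with a finite difference, v = L·(Δi/Δt) (a sketch; names are assumptions):

```python
def induced_emf(inductance_henries, delta_current, delta_time):
    """Magnitude of the back EMF, v = L * (di/dt)."""
    return inductance_henries * delta_current / delta_time

# a 10 mH inductor with current ramping 1 A per millisecond: 10 V
print(induced_emf(10e-3, 1.0, 1e-3))
```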
Is an engineering profession that is concerned with the optimization of complex
processes,
systems, or
organizations by developing, improving and implementing integrated systems of people, money, knowledge, information and equipment. Industrial engineers use specialized
knowledge and
skills in the mathematical, physical and
social sciences, together with the
principles and methods of
engineering analysis and design, to specify, predict, and evaluate the results obtained from systems and processes.[288] From these results, they are able to create new
systems, processes or situations for the useful coordination of
labour,
materials and
machines and also improve the
quality and
productivity of systems, physical or social.[289]
Is the resistance of any physical
object to any change in its
velocity. This includes changes to the object's
speed, or
direction of motion.
An aspect of this property is the tendency of objects to keep moving in a straight line at a constant speed, when no
forces act upon them.
Infrasound, sometimes referred to as low-frequency sound, describes sound waves with a frequency below the lower limit of audibility (generally 20 Hz). Hearing becomes gradually less sensitive as frequency decreases, so for humans to perceive infrasound, the
sound pressure must be sufficiently high. The ear is the primary organ for sensing low sound, but at higher intensities it is possible to feel infrasound vibrations in various parts of the body.
In
mathematics, an integral assigns numbers to functions in a way that describes displacement, area, volume, and other concepts that arise by combining
infinitesimal data. The process of finding integrals is called integration. Along with
differentiation, integration is a fundamental operation of calculus,[b] and serves as a tool to solve problems in mathematics and
physics involving the area of an arbitrary shape, the length of a curve, and the volume of a solid, among others.
In
mathematics, an integral transform maps a
function from its original
function space into another function space via
integration, where some of the properties of the original function might be more easily characterized and manipulated than in the original function space. The transformed function can generally be mapped back to the original function space using the inverse transform.
In
statistics, interval estimation is the use of
sample data to calculate an
interval of possible values of an unknown
population parameter; this is in contrast to
point estimation, which gives a single value.
Jerzy Neyman (1937) identified interval estimation ("estimation by interval") as distinct from
point estimation ("estimation by unique estimate"). In doing so, he recognized that then-recent work quoting results in the form of an
estimate plus-or-minus a
standard deviation indicated that interval estimation was actually the problem
statisticians really had in mind.
Is a
particle,
atom or
molecule with a net
electrical charge. By convention, the charge of the electron is negative and the charge of the proton is positive. The net charge of an ion is non-zero because its total number of
electrons being unequal to its total number of
protons.
Is a type of
chemical bonding that involves the
electrostatic attraction between oppositely charged
ions, or between two
atoms with sharply different
electronegativities,[291] and is the primary interaction occurring in
ionic compounds. It is one of the main types of bonding along with
covalent bonding and
metallic bonding. Ions are atoms (or groups of atoms) with an electrostatic charge. Atoms that gain electrons make negatively charged ions (called
anions). Atoms that lose electrons make positively charged ions (called
cations). This transfer of electrons is known as electrovalence in contrast to
covalence. In the simplest case, the cation is a
metal atom and the anion is a
nonmetal atom, but these ions can be of a more complex nature, e.g.
molecular ions like NH₄⁺ or SO₄²⁻. In simpler words, an ionic bond results from the transfer of electrons from a
metal to a
non-metal in order to obtain a full valence shell for both atoms.
Ionization or ionisation is the process by which an
atom or a
molecule acquires a negative or positive
charge by gaining or losing
electrons, often in conjunction with other chemical changes. The resulting electrically charged atom or molecule is called an
ion. Ionization can result from the loss of an electron after collisions with
subatomic particles, collisions with other atoms, molecules and ions, or through the interaction with
electromagnetic radiation.
Heterolytic bond cleavage and heterolytic
substitution reactions can result in the formation of ion pairs. Ionization can occur through radioactive decay by the
internal conversion process, in which an excited nucleus transfers its energy to one of the
inner-shell electrons causing it to be ejected.
The SI unit of energy. The joule, (symbol: J), is a
derived unit of
energy in the
International System of Units.[295] It is equal to the energy transferred to (or
work done on) an object when a
force of one
newton acts on that object in the direction of the object's motion through a distance of one
metre (1 newton-metre or N⋅m). It is also the energy dissipated as heat when an electric
current of one
ampere passes through a
resistance of one
ohm for one second. It is named after the English physicist
James Prescott Joule (1818–1889).[296][297][298]
In statistics and control theory, Kalman filtering, also known as linear quadratic estimation (LQE), is an algorithm that uses a series of measurements observed over time, containing statistical noise and other inaccuracies, and produces estimates of unknown variables that tend to be more accurate than those based on a single measurement alone, by estimating a joint probability distribution over the variables for each timeframe. The Kalman filter has numerous applications in technology.
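A minimal scalar sketch of the predict/update cycle, assuming a constant-value process model with process variance q and measurement variance r (all names and numeric values here are illustrative):

```python
def kalman_1d(measurements, q=1e-4, r=0.1, x0=0.0, p0=1.0):
    """Scalar Kalman filter: predict (variance grows by q),
    then update the estimate x with each noisy measurement z."""
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p += q                 # predict step: uncertainty grows
        k = p / (p + r)        # Kalman gain
        x += k * (z - x)       # update step: blend prediction and z
        p *= 1.0 - k           # updated estimate variance
        estimates.append(x)
    return estimates

# repeated measurements of a constant pull the estimate toward it
print(kalman_1d([1.0] * 50)[-1])  # close to 1.0
```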
Is a branch of
classical mechanics that describes the
motion of points, bodies (objects), and systems of bodies (groups of objects) without considering the forces that caused the motion.[301][302][303]
In
fluid dynamics, laminar flow is characterized by fluid particles following smooth paths in layers, with each layer moving smoothly past the adjacent layers with little or no mixing.[304] At low velocities, the fluid tends to flow without lateral mixing, and adjacent layers slide past one another like
playing cards. There are no cross-currents perpendicular to the direction of flow, nor
eddies or swirls of fluids.[305] In laminar flow, the motion of the particles of the fluid is very orderly with particles close to a solid surface moving in straight lines parallel to that surface.[306]
Laminar flow is a flow regime characterized by high
momentum diffusion and low momentum
convection.
Le Chatelier's principle, also called Chatelier's principle, is a principle of
chemistry used to predict the effect of a change in conditions on
chemical equilibria. The principle is named after French chemist
Henry Louis Le Chatelier, and sometimes also credited to
Karl Ferdinand Braun, who discovered it independently. It can be stated as:
When any system at equilibrium for a long period of time is subjected to a change in
concentration,
temperature,
volume, or
pressure, (1) the system changes to a new equilibrium, and (2) this change partly counteracts the applied change.
It is common to treat the principle as a more general observation of
systems,[309] such as
When a settled system is disturbed, it will adjust to diminish the change that has been made to it
Lenz's law, named after the physicist
Emil Lenz who formulated it in 1834,[310] states that the direction of the
electric current which is
induced in a
conductor by a changing
magnetic field is such that the magnetic field created by the induced current opposes the initial changing magnetic field.
It is a
qualitative law that specifies the direction of induced current, but states nothing about its magnitude. Lenz's law explains the direction of many effects in
electromagnetism, such as the direction of voltage induced in an
inductor or
wire loop by a changing current, or the drag force of
eddy currents exerted on moving objects in a magnetic field.
Lenz's law may be seen as analogous to
Newton's third law in
classical mechanics.[311]
Is a
simple machine consisting of a
beam or rigid rod pivoted at a fixed
hinge, or fulcrum. A lever is a rigid body capable of rotating on a point on itself. On the basis of the locations of fulcrum, load and effort, the lever is divided into
three types. Also,
leverage is mechanical advantage gained in a system. It is one of the six
simple machines identified by Renaissance scientists. A lever amplifies an input force to provide a greater output force, which is said to provide leverage. The ratio of the output force to the input force is the
mechanical advantage of the lever. As such, the lever is a
mechanical advantage device, trading off force against movement.
In
mathematics, more specifically
calculus, L'Hôpital's rule or L'Hospital's rule (French: [lopital],
English: /ˌloʊpiːˈtɑːl/,
loh-pee-TAHL) provides a technique to evaluate
limits of
indeterminate forms. Application (or repeated application) of the rule often converts an indeterminate form to an expression that can be easily evaluated by substitution. The rule is named after the 17th-century
French mathematician Guillaume de l'Hôpital. Although the rule is often attributed to L'Hôpital, the theorem was first introduced to him in 1694 by the Swiss mathematician
Johann Bernoulli.
L'Hôpital's rule states that for functions f and g which are
differentiable on an open
interval I except possibly at a point c contained in I, if lim x→c f(x) = lim x→c g(x) = 0 or ±∞, g′(x) ≠ 0 for all x in I with x ≠ c, and lim x→c f′(x)/g′(x) exists, then
lim x→c f(x)/g(x) = lim x→c f′(x)/g′(x).
The differentiation of the numerator and denominator often simplifies the quotient or converts it to a limit that can be evaluated directly.
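A standard worked example of the rule applied to the indeterminate form 0/0:

```latex
\lim_{x \to 0} \frac{\sin x}{x}
  \;=\; \lim_{x \to 0} \frac{\cos x}{1}
  \;=\; 1
```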
Is an
actuator that creates motion in a straight line, in contrast to the circular motion of a conventional
electric motor. Linear actuators are used in machine tools and industrial machinery, in computer
peripherals such as disk drives and printers, in
valves and
dampers, and in many other places where linear motion is required.
Hydraulic or
pneumatic cylinders inherently produce linear motion. Many other mechanisms are used to generate linear motion from a rotating motor.
Is a mathematical model of how solid objects deform and become internally stressed due to prescribed loading conditions. It is a simplification of the more general
nonlinear theory of elasticity and a branch of
continuum mechanics.
A liquid is a nearly
incompressible fluid that conforms to the shape of its container but retains a (nearly) constant volume independent of pressure. As such, it is one of
the four fundamental states of matter (the others being
solid,
gas, and
plasma), and is the only state with a definite volume but no fixed shape. A liquid is made up of tiny vibrating particles of matter, such as atoms, held together by
intermolecular bonds. Like a gas, a liquid is
able to flow and take the shape of a container. Most liquids resist compression, although others can be compressed. Unlike a gas, a liquid does not disperse to fill every space of a container, and maintains a fairly constant density. A distinctive property of the liquid state is
surface tension, leading to
wetting phenomena.
Water is, by far, the most common liquid on Earth.
In
mathematics, the logarithm is the
inverse function to
exponentiation. That means the logarithm of a given number x is the
exponent to which another fixed number, the base b, must be raised to produce that number x. In the simplest case, the logarithm counts the number of occurrences of the same factor in repeated multiplication; e.g., since 1000 = 10 × 10 × 10 = 10³, the "logarithm base 10" of 1000 is 3, or log10(1000) = 3. The logarithm of x to base b is denoted as logb(x), or without parentheses, logb x, or even without the explicit base, log x, when no confusion is possible, or when the base does not matter such as in
big O notation.
More generally, exponentiation allows any positive
real number as base to be raised to any real power, always producing a positive result, so logb(x) for any two positive real numbers b and x, where b is not equal to 1, is always a unique real number y. More explicitly, the defining relation between exponentiation and logarithm is:
y = logb(x) exactly if b^y = x and x > 0 and b > 0 and b ≠ 1.
For example, log2 64 = 6, as 26 = 64.
The logarithm base 10 (that is b = 10) is called the decimal or
common logarithm and is commonly used in science and engineering. The
natural logarithm has the
number e (that is b ≈ 2.718) as its base; its use is widespread in mathematics and
physics, because of its simpler
integral and
derivative. The
binary logarithm uses base 2 (that is b = 2) and is frequently used in
computer science. Logarithms are examples of
concave functions.
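The three common bases are available directly in most languages; a short Python sketch:

```python
import math

print(math.log2(64))      # binary logarithm: 6.0, since 2**6 = 64
print(math.log10(1000))   # common logarithm: 3.0
print(math.log(math.e))   # natural logarithm: 1.0
```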
(Also known as log mean temperature difference, LMTD) is used to determine the temperature driving force for
heat transfer in flow systems, most notably in
heat exchangers. The LMTD is a
logarithmic average of the temperature difference between the hot and cold feeds at each end of the double pipe exchanger. For a given heat exchanger with constant area and heat transfer coefficient, the larger the LMTD, the more heat is transferred. The use of the LMTD arises straightforwardly from the analysis of a heat exchanger with constant flow rate and fluid thermal properties.
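The definition translates directly into code; a sketch in which the equal-ends case is handled separately, since the formula then becomes 0/0 (the function name is an assumption):

```python
import math

def lmtd(delta_t_1, delta_t_2):
    """Log mean of the temperature differences at the two exchanger ends."""
    if delta_t_1 == delta_t_2:
        return delta_t_1  # limiting value of the 0/0 form
    return (delta_t_1 - delta_t_2) / math.log(delta_t_1 / delta_t_2)

# 60 K difference at one end, 20 K at the other: LMTD ~ 36.4 K
print(lmtd(60.0, 20.0))
```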
A lumped-capacitance model, also called lumped system analysis,[317] reduces a
thermal system to a number of discrete "lumps" and assumes that the
temperature difference inside each lump is negligible. This approximation is useful to simplify otherwise complex
differential heat equations. It was developed as a mathematical analog of
electrical capacitance, although it also includes thermal analogs of
electrical resistance as well.
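For a body cooling by convection, the model gives an exponential approach to the ambient temperature, T(t) = T∞ + (T0 − T∞)·e^(−t/τ) with time constant τ = mc/(hA); a sketch (all names and values are assumptions):

```python
import math

def lumped_temperature(t, t0, t_env, h, area, mass, c_p):
    """Lumped-capacitance cooling: exponential decay toward t_env."""
    tau = mass * c_p / (h * area)   # thermal time constant, seconds
    return t_env + (t0 - t_env) * math.exp(-t / tau)

# after one time constant the excess temperature drops by a factor of e
print(lumped_temperature(1.0, 100.0, 20.0, 10.0, 0.1, 1.0, 1.0))
```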
The lumped-element model (also called lumped-parameter model, or lumped-component model) simplifies the description of the behaviour of spatially distributed physical systems into a
topology consisting of discrete entities that approximate the behaviour of the distributed system under certain assumptions. It is useful in
electrical systems (including
electronics), mechanical
multibody systems,
heat transfer,
acoustics, etc. Mathematically speaking, the simplification reduces the
state space of the system to a
finite dimension, and the
partial differential equations (PDEs) of the continuous (infinite-dimensional) time and space model of the physical system into
ordinary differential equations (ODEs) with a finite number of parameters.
^The words map, mapping, transformation, correspondence, and operator are often used synonymously.
Halmos 1970, p. 30.
^"Newtonian constant of gravitation" is the name introduced for G by Boys (1894). Use of the term by T.E. Stern (1928) was misquoted as "Newton's constant of gravitation" in Pure Science Reviewed for Profound and Unsophisticated Students (1930), in what is apparently the first use of that term. Use of "Newton's constant" (without specifying "gravitation" or "gravity") is more recent, as "Newton's constant" was also
used for the
heat transfer coefficient in
Newton's law of cooling, but has by now become quite common, e.g.
Calmet et al., Quantum Black Holes (2013), p. 93; P. de Aquino, Beyond Standard Model Phenomenology at the LHC (2013), p. 3.
The name "Cavendish gravitational constant", sometimes "Newton–Cavendish gravitational constant", appears to have been common in the 1970s to 1980s, especially in (translations from) Soviet-era Russian literature, e.g. Sagitov (1970 [1969]), Soviet Physics: Uspekhi 30 (1987), Issues 1–6, p. 342 [etc.].
"Cavendish constant" and "Cavendish gravitational constant" is also used in Charles W. Misner, Kip S. Thorne, John Archibald Wheeler, "Gravitation", (1973), 1126f.
Colloquial use of "Big G", as opposed to "
little g" for gravitational acceleration dates to the 1960s (R.W. Fairbridge, The encyclopedia of atmospheric sciences and astrogeology, 1967, p. 436; note use of "Big G's" vs. "little g's" as early as the 1940s of the
Einstein tensor Gμν vs. the
metric tensor gμν, Scientific, medical, and technical books published in the United States of America: a selected list of titles in print with annotations: supplement of books published 1945–1948, Committee on American Scientific and Technical Bibliography National Research Council, 1950, p. 26).
^Integral calculus is a very well established mathematical discipline for which there are many sources. See
Apostol 1967 and
Anton, Bivens & Davis 2016, for example.
^For example, the SI unit of
velocity is the metre per second, m⋅s⁻¹; of
acceleration is the metre per second squared, m⋅s⁻²; etc.
^For example the
newton (N), the unit of
force, equivalent to kg⋅m⋅s⁻²; the
joule (J), the unit of
energy, equivalent to kg⋅m²⋅s⁻², etc. The most recently named derived unit, the
katal, was defined in 1999.
^For example, the recommended unit for the
electric field strength is the volt per metre, V/m, where the
volt is the derived unit for
electric potential difference. The volt per metre is equal to kg⋅m⋅s⁻³⋅A⁻¹ when expressed in terms of base units.
^"Unit of thermodynamic temperature (kelvin)". SI Brochure, 8th edition. Bureau International des Poids et Mesures. 13 March 2010 [1967]. Section 2.1.1.5. Archived from
the original on 7 October 2014. Retrieved 20 June 2017. Note: The triple point of water is 0.01 °C, not 0 °C; thus 0 K is −273.15 °C, not −273.16 °C.
^Callister, W. D. "Materials Science and Engineering: An Introduction" 2007, 7th edition, John Wiley and Sons, Inc. New York, Section 4.3 and Chapter 9.
^"Amino". Dictionary.com. 2015. Retrieved 3 July 2015.
^"amino acid". Cambridge Dictionaries Online. Cambridge University Press. 2015. Retrieved 3 July 2015.
^"amino". FreeDictionary.com. Farlex. 2015. Retrieved 3 July 2015.
Poole, Mackworth & Goebel (1998), which provides the version that is used in this article. These authors use the term "computational intelligence" as a synonym for artificial intelligence.[38]
Russell & Norvig (2003) (who prefer the term "rational agent") write "The whole-agent view is now widely accepted in the field".[39]
^Lambers HG, Tschumak S, Maier HJ, Canadinc D (Apr 2009). "Role of Austenitization and Pre-Deformation on the Kinetics of the Isothermal Bainitic Transformation". Metall. Mater. Trans. A. 40 (6): 1355–1366.
Bibcode:
2009MMTA...40.1355L.
doi:
10.1007/s11661-009-9827-z.
S2CID 136882327.
^Johll, Matthew E. (2009). Investigating chemistry: a forensic science perspective (2nd ed.). New York: W. H. Freeman and Co.
ISBN 978-1-4292-0989-2.
OCLC 392223218.
^Anthonsen, Thorlief (2000).
"Reactions Catalyzed by Enzymes". In Adlercreutz, Patrick; Straathof, Adrie J. J. (eds.). Applied Biocatalysis (2nd ed.). Taylor & Francis. pp. 18–59.
ISBN 978-90-5823-024-9. Retrieved 9 February 2013.
^Jayasinghe, Leonard Y.; Smallridge, Andrew J.; Trewhella, Maurie A. (1993). "The yeast mediated reduction of ethyl acetoacetate in petroleum ether". Tetrahedron Letters. 34 (24): 3949–3950.
doi:
10.1016/S0040-4039(00)79272-0.
^Frederick M. Steingress (2001). Low Pressure Boilers (4th ed.). American Technical Publishers.
ISBN 0-8269-4417-5.
^Frederick M. Steingress, Harold J. Frost and Darryl R. Walker (2003). High Pressure Boilers (3rd ed.). American Technical Publishers.
ISBN 0-8269-4300-4.
^Goldberg, David E. (1988). 3,000 Solved Problems in Chemistry (1st ed.). McGraw-Hill. section 17.43, p. 321.
ISBN 0-07-023684-4.
^Theodore, Louis; Dupont, R. Ryan; Ganesan, Kumar, eds. (1999). Pollution Prevention: The Waste Management Approach to the 21st Century. CRC Press. section 27, p. 15.
ISBN 1-56670-495-2.
^Carroll, Sean (2007). Guidebook. Dark Matter, Dark Energy: The dark side of the universe. The Teaching Company. Part 2, p. 43.
ISBN 978-1-59803-350-2. ... boson: A force-carrying particle, as opposed to a matter particle (fermion). Bosons can be piled on top of each other without limit. Examples include photons, gluons, gravitons, weak bosons, and the Higgs boson. The spin of a boson is always an integer, such as 0, 1, 2, and so on ...
^Brönsted, J. N. (1923). "Einige Bemerkungen über den Begriff der Säuren und Basen" [Some observations about the concept of
acids and bases]. Recueil des Travaux Chimiques des Pays-Bas. 42 (8): 718–728.
doi:
10.1002/recl.19230420815.
^Jaspersen, S. L.; Winey, M. (2004). "THE BUDDING YEAST SPINDLE POLE BODY: Structure, Duplication, and Function". Annual Review of Cell and Developmental Biology. 20 (1): 1–28.
doi:
10.1146/annurev.cellbio.20.022003.114106.
PMID 15473833.
^Fullick, P. (1994), Physics, Heinemann, pp. 141–142,
ISBN 0-435-57078-1
^Lakatos, John; Oenoki, Keiji; Judez, Hector; Oenoki, Kazushi; Hyun Kyu Cho (March 1998).
"Learn Physics Today!". Lima, Peru: Colegio Dr. Franklin D. Roosevelt. Archived from
the original on 2009-02-27. Retrieved 2009-03-10.
^Purcell, Edward M.; Morin, David J. (2013). Electricity and Magnetism (3rd ed.). New York: Cambridge University Press. pp. 14–20.
ISBN 978-1-107-01402-2.
^Browne, p 225: "... around every charge there is an aura that fills all space. This aura is the electric field due to the charge. The electric field is a vector field... and has a magnitude and direction."
^Purcell and Morin, Harvard University (2013). Electricity and Magnetism, 820 pp. (3rd ed.). Cambridge University Press, New York.
ISBN 978-1-107-01402-2. p. 430: "These waves... require no medium to support their propagation. Traveling electromagnetic waves carry energy, and... the Poynting vector describes the energy flow...;" p. 440: "... the electromagnetic wave must have the following properties: 1) The field pattern travels with speed c (speed of light); 2) At every point within the wave... the electric field strength E equals "c" times the magnetic field strength B; 3) The electric field and the magnetic field are perpendicular to one another and to the direction of travel, or propagation."
^Browne, Michael (2013). Physics for Engineering and Science, p. 427 (2nd ed.). McGraw Hill/Schaum, New York.
ISBN 978-0-07-161399-6. p. 319: "For historical reasons, different portions of the EM spectrum are given different names, although they are all the same kind of thing. Visible light constitutes a narrow range of the spectrum, from wavelengths of about 400–800 nm...."; p. 320: "An electromagnetic wave carries forward momentum... If the radiation is absorbed by a surface, the momentum drops to zero and a force is exerted on the surface... Thus the radiation pressure of an electromagnetic wave is (formula)."
^Course in Electro-mechanics, for Students in Electrical Engineering, 1st Term of 3d Year, Columbia University, Adapted from Prof. F.E. Nipher's "Electricity and Magnetism". By
Fitzhugh Townsend. 1901.
^Szolc T.;
Konowrocki R.; Michajłow M.; Pregowska A. (2014). "An investigation of the dynamic electromechanical coupling effects in machine drive systems driven by asynchronous motors". Mechanical Systems and Signal Processing. 49 (1–2): 118–134.
Bibcode:
2014MSSP...49..118S.
doi:
10.1016/j.ymssp.2014.04.004.
^Konowrocki R.; Szolc T.; Pochanke A.; Pregowska A. (2016). "An influence of the stepping motor control and friction models on precise positioning of the complex mechanical system". Mechanical Systems and Signal Processing. 70–71: 397–413.
Bibcode:
2016MSSP...70..397K.
doi:
10.1016/j.ymssp.2015.09.030.
ISSN 0888-3270.
^Oxtoby, D. W.; Gillis, H. P.; Butler, L. J. (2015). Principles of Modern Chemistry. Brooks Cole. p. 617.
ISBN 978-1-305-07911-3
^"Motor". Dictionary.reference.com. Retrieved 2011-05-09. a person or thing that imparts motion, esp. a contrivance, as a steam engine, that receives and modifies energy from some source in order to utilize it in driving machinery.
^Dictionary.com: (World heritage) "3. any device that converts another form of energy into mechanical energy so as to produce motion"
^Bureau of Labor Statistics, U.S. Department of Labor. Occupational Outlook Handbook, 2010–11 Edition.
^"Major: Engineering Physics". The Princeton Review. 2017. p. 01. Retrieved June 4, 2017.
^"Introduction" (online). Princeton University. Retrieved June 26, 2011.
^Khare, P.; A. Swarup (2009-01-26). Engineering Physics: Fundamentals & Modern Applications (13th ed.). Jones & Bartlett Learning. pp. xiii–Preface. ISBN 978-0-7637-7374-8.
^The International System of Units (SI) (8th ed.). Bureau International des Poids et Mesures (International Committee for Weights and Measures). 2006. p. 144.
^The term "magnitude" is used in the sense of "
absolute value": The charge of an electron is negative, but F is always defined to be positive.
^Daryl L. Logan (2011). A first course in the finite element method. Cengage Learning.
ISBN 978-0-495-66825-1.
^Duderstadt, James J.; Martin, William R. (1979). "Chapter 4:The derivation of continuum description from transport equations". In Wiley-Interscience Publications (ed.). Transport theory. New York. p. 218.
ISBN 978-0-471-04492-5.
^Freidberg, Jeffrey P. (2008). "Chapter 10:A self-consistent two-fluid model". In Cambridge University Press (ed.). Plasma Physics and Fusion Energy (1 ed.). Cambridge. p. 225.
ISBN 978-0-521-73317-5.
^
Reif (1965):
"[in the special case of purely thermal interaction between two system:] The mean energy transferred from one system to the other as a result of purely thermal interaction is called 'heat'" (p. 67).
the quantity Q [...] is simply a measure of the mean energy change not due to the change of external parameters. [...] splits the total mean energy change into a part W due to mechanical interaction and a part Q due to thermal interaction [...] by virtue of [the definition ΔU = Q − W, present notation, physics sign convention], both heat and work have the dimensions of energy" (p. 73). C.f.: "heat is thermal energy in transfer" Stephen J. Blundell, Katherine M. Blundell, Concepts in Thermal Physics (2009),
p. 13. Archived 24 June 2018 at the
Wayback Machine.
^Serway, A. Raymond; Jewett, John W.; Wilson, Jane; Wilson, Anna; Rowlands, Wayne (1 October 2016). "32". Physics for global scientists and engineers (2nd ed.). Cengage AU. p. 901.
ISBN 978-0-17-035552-0.
^Alexander, Charles; Sadiku, Matthew. Fundamentals of Electric Circuits (3 ed.). McGraw-Hill. p. 211.
^Salvendy, Gabriel. Handbook of Industrial Engineering. John Wiley & Sons, Inc.; 3rd edition, p. 5.
^"What IEs Do". www.iienet2.org. Retrieved September 24, 2015.
^Noakes, Cath; Sleigh, Andrew (January 2009).
"Real Fluids". An Introduction to Fluid Mechanics. University of Leeds. Archived from
the original on 21 October 2010. Retrieved 23 November 2010.
^Pal, G.K.; Pal, Pravati (2001).
"chapter 52". Textbook of Practical Physiology (1st ed.). Chennai: Orient Blackswan. p. 387.
ISBN 978-81-250-2021-9. Retrieved 11 October 2013. The human eye has the ability to respond to all the wavelengths of light from 400–700 nm. This is called the visible part of the spectrum.
^Buser, Pierre A.; Imbert, Michel (1992). Vision. MIT Press. p.
50.
ISBN 978-0-262-02336-8. Retrieved 11 October 2013. Light is a special class of radiant energy embracing wavelengths between 400 and 700 nm (or mμ), or 4000 to 7000 Å.
A device that accepts two inputs (a control signal and an energy source) and outputs kinetic energy in the form of physical movement (linear, rotary, or oscillatory). The control signal specifies which motion should be taken. The energy source is typically either electric current, hydraulic pressure, or pneumatic pressure. An actuator can be the final element of a control loop.
A complex
organic chemical that provides energy to drive many processes in living
cells, e.g. muscle contraction, nerve impulse propagation, chemical synthesis. Found in all forms of life, ATP is often referred to as the "molecular unit of
currency" of intracellular
energy transfer.[7]
The tendency of dissimilar particles or surfaces to cling to one another (cohesion refers to the tendency of similar or identical particles/surfaces to cling to one another).
The study of the motion of air, particularly its interaction with a solid object, such as an airplane wing. It is a sub-field of fluid dynamics and gas dynamics, and many aspects of aerodynamics theory are common to these fields.
Aerospace engineering is the primary field of
engineering concerned with the development of
aircraft and
spacecraft.[10] It has two major and overlapping branches: aeronautical engineering and astronautical engineering.
Avionics engineering is similar, but deals with the
electronics side of aerospace engineering.
Alpha particles consist of two
protons and two
neutrons bound together into a particle identical to a
helium-4 nucleus. They are generally produced in the process of
alpha decay, but may also be produced in other ways. Alpha particles are named after the first letter in the
Greek alphabet,
α.
In chemistry, an amphoteric compound is a molecule or ion that can react both as an
acid and as a
base.[20] Many metals (such as
copper,
zinc,
tin,
lead,
aluminium, and
beryllium) form amphoteric oxides or hydroxides. Amphoterism depends on the
oxidation states of the oxide. Al₂O₃ is an example of an amphoteric oxide.
The amplitude of a
periodic variable is a measure of its change over a single
period (such as
time or
spatial period). There are various definitions of amplitude, which are all
functions of the magnitude of the difference between the variable's
extreme values. In older texts the
phase is sometimes called the amplitude.[21]
Anaerobic digestion is a collection of processes by which
microorganisms break down
biodegradable material in the absence of
oxygen.[22] The process is used for industrial or domestic purposes to
manage waste or to produce fuels. Much of the
fermentation used industrially to produce food and drink products, as well as home fermentation, uses anaerobic digestion.
In
physics, angular momentum (rarely, moment of momentum or rotational momentum) is the rotational equivalent of
linear momentum. It is an important quantity in physics because it is a
conserved quantity—the total angular momentum of a system remains constant unless acted on by an external
torque.
An anion is an ion with more electrons than protons, giving it a net negative charge (since electrons are negatively charged and protons are positively charged).[24]
In
particle physics, annihilation is the process that occurs when a
subatomic particle collides with its respective
antiparticle to produce other particles, such as an
electron colliding with a
positron to produce two
photons.[25] The total
energy and
momentum of the initial pair are conserved in the process and distributed among a set of other particles in the final state. Antiparticles have exactly opposite additive
quantum numbers from particles, so the sums of all quantum numbers of such an original pair are zero. Hence, any set of particles may be produced whose total quantum numbers are also zero as long as
conservation of energy and
conservation of momentum are obeyed.[26]
The American National Standards Institute is a private
non-profit organization that oversees the development of
voluntary consensus standards for products, services, processes, systems, and personnel in the United States.[27] The organization also coordinates U.S. standards with international standards so that American products can be used worldwide.
Anti-gravity (also known as non-gravitational field) is a theory of creating a place or object that is free from the force of
gravity. It does not refer to the lack of weight under gravity experienced in
free fall or
orbit, or to balancing the force of gravity with some other force, such as electromagnetism or aerodynamic lift.
Applied engineering is the field concerned with the application of management, design, and technical skills for the design and integration of systems, the execution of new
product designs, the improvement of manufacturing processes, and the management and direction of physical and/or technical functions of a firm or organization. Applied-engineering degreed programs typically include instruction in basic engineering principles,
project management, industrial processes, production and operations management, systems integration and control, quality control, and statistics.[28]
Arc length is the distance between two points along a section of a
curve. Determining the length of an irregular arc segment is also called rectification of a curve. The advent of
infinitesimal calculus led to a general formula that provides
closed-form solutions in some cases.
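When no closed form is available, rectification can be done numerically by summing chord lengths along the curve; a small sketch (the `arc_length` helper, test function, and sample count are illustrative choices):

```python
import math

def arc_length(f, a, b, n=100_000):
    """Approximate length of y = f(x) over [a, b] by a polygonal (chord) sum."""
    xs = [a + (b - a) * i / n for i in range(n + 1)]
    pts = [(x, f(x)) for x in xs]
    return sum(math.dist(p, q) for p, q in zip(pts, pts[1:]))

# Quarter of the unit circle, y = sqrt(1 - x^2) on [0, 1]; the exact length is pi/2.
print(arc_length(lambda x: math.sqrt(max(0.0, 1.0 - x * x)), 0.0, 1.0))
```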
Archimedes' principle states that the upward
buoyant force that is exerted on a body immersed in a
fluid, whether fully or partially submerged, is equal to the
weight of the fluid that the body
displaces and acts in the upward direction at the center of mass of the displaced fluid.[29] Archimedes' principle is a
law of physics fundamental to fluid mechanics. It was formulated by
Archimedes of Syracuse.[30]
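Numerically, the buoyant force is simply the weight of the displaced fluid, F = ρ·V·g; a minimal sketch (density and gravity values are rounded, and the object volume is illustrative):

```python
RHO_WATER = 1000.0   # kg/m^3, approximate density of fresh water
G = 9.81             # m/s^2, approximate standard gravity

def buoyant_force(fluid_density, displaced_volume, g=G):
    """Archimedes' principle: buoyant force equals the weight of displaced fluid, in newtons."""
    return fluid_density * displaced_volume * g

# A fully submerged 2-litre (0.002 m^3) object in water:
print(buoyant_force(RHO_WATER, 0.002))   # ~19.6 N upward
```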
The 2nd moment of area, also known as moment of inertia of plane area, area moment of inertia, or second area moment, is a geometrical property of an area which reflects how its points are distributed with regard to an arbitrary axis. The second moment of area is typically denoted with either an I (for an axis that lies in the plane) or with a J (for an axis perpendicular to the plane). In both cases, it is calculated with a
multiple integral over the object in question. Its dimension is L (length) to the fourth power. Its
unit of dimension when working with the
International System of Units is meters to the fourth power,
m⁴.
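For standard cross-sections the multiple integral reduces to well-known closed forms, e.g. I = b·h³/12 for a solid rectangle about its centroidal axis and I = π·d⁴/64 for a solid circle. A small sketch (the section dimensions are illustrative):

```python
import math

def rect_second_moment(b, h):
    """I of a solid rectangle (width b, height h) about the centroidal axis parallel to b."""
    return b * h ** 3 / 12

def circle_second_moment(d):
    """I of a solid circular section of diameter d about any centroidal axis."""
    return math.pi * d ** 4 / 64

# A 50 mm x 100 mm rectangular beam section, dimensions in metres:
print(rect_second_moment(0.05, 0.10))   # ~4.17e-6 m^4
```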
In
mathematics and
statistics, the arithmetic mean, or simply the
mean or average when the context is clear, is the sum of a collection of numbers divided by the number of numbers in the collection.[31]
In
mathematics, an arithmetic progression (AP) or arithmetic sequence is a
sequence of
numbers such that the difference between the consecutive terms is constant. Difference here means the second minus the first. For instance, the sequence 5, 7, 9, 11, 13, 15, . . . is an arithmetic progression with common difference of 2.
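The n-th term and partial sum of an arithmetic progression follow directly from the definition; a minimal sketch (the helper names are illustrative), using the sequence 5, 7, 9, 11, 13, 15 from above:

```python
def ap_term(a1, d, n):
    """n-th term of an arithmetic progression: a1 + (n - 1) * d."""
    return a1 + (n - 1) * d

def ap_sum(a1, d, n):
    """Sum of the first n terms: n * (first term + last term) / 2."""
    return n * (a1 + ap_term(a1, d, n)) / 2

print(ap_term(5, 2, 6))   # 15, the sixth term
print(ap_sum(5, 2, 6))    # 60.0, the sum 5 + 7 + 9 + 11 + 13 + 15
```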
An aromatic hydrocarbon or arene[32] (or sometimes aryl hydrocarbon)[33] is a
hydrocarbon with
sigma bonds and delocalized
pi electrons between carbon atoms forming a circle. In contrast,
aliphatic hydrocarbons lack this delocalization. The term "aromatic" was assigned before the physical mechanism determining
aromaticity was discovered; the term was coined as such simply because many of the compounds have a sweet or pleasant odour. The configuration of six carbon atoms in aromatic compounds is known as a benzene ring, after the simplest possible such hydrocarbon,
benzene. Aromatic hydrocarbons can be monocyclic (MAH) or polycyclic (PAH).
The Arrhenius equation is a formula for the temperature dependence of
reaction rates. The equation was proposed by
Svante Arrhenius in 1889, based on the work of Dutch chemist
Jacobus Henricus van 't Hoff who had noted in 1884 that
Van 't Hoff's equation for the temperature dependence of
equilibrium constants suggests such a formula for the rates of both forward and reverse reactions. This equation has a vast and important application in determining rate of chemical reactions and for calculation of energy of activation. Arrhenius provided a physical justification and interpretation for the formula.[34][35][36] Currently, it is best seen as an
empirical relationship.[37]: 188 It can be used to model the temperature variation of diffusion coefficients, population of crystal vacancies, creep rates, and many other thermally-induced processes/reactions. The
Eyring equation, developed in 1935, also expresses the relationship between rate and energy.
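The equation itself is k = A·exp(−Ea/(R·T)); a hedged sketch (the pre-exponential factor A and activation energy Ea below are illustrative inputs, not values for any specific reaction):

```python
import math

R = 8.314  # J/(mol K), molar gas constant

def arrhenius_k(A, Ea, T):
    """Rate constant k = A * exp(-Ea / (R * T)); Ea in J/mol, T in kelvin."""
    return A * math.exp(-Ea / (R * T))

# Same illustrative reaction (Ea = 50 kJ/mol) compared at 300 K and 310 K:
k300 = arrhenius_k(1e13, 50_000, 300)
k310 = arrhenius_k(1e13, 50_000, 310)
print(k310 / k300)   # ~1.9: a 10 K rise nearly doubles the rate
```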
Artificial intelligence (AI) is
intelligence demonstrated by
machines, unlike the natural intelligence
displayed by humans and
animals. Leading AI textbooks define the field as the study of "
intelligent agents": any device that perceives its environment and takes actions that maximize its chance of successfully achieving its goals.[40] Colloquially, the term "artificial intelligence" is often used to describe machines (or computers) that mimic "cognitive" functions that humans associate with the
human mind, such as "learning" and "problem solving".[41]
In
atomic theory and
quantum mechanics, an atomic orbital is a
mathematical function that describes the wave-like behavior of either one
electron or a pair of electrons in an
atom.[42] This function can be used to calculate the
probability of finding any electron of an atom in any specific region around the
atom's nucleus. The term atomic orbital may also refer to the physical region or space where the electron can be calculated to be present, as defined by the particular mathematical form of the orbital.[43]
An audio frequency (abbreviation: AF), or audible frequency is characterized as a
periodic vibration whose
frequency is audible to the average human. The
SI unit of audio frequency is the
hertz (Hz). It is the property of
sound that most determines
pitch.[44]
Austenitization means heating iron, an iron-based metal, or steel to a temperature at which its crystal structure changes from ferrite to austenite.[45] The more open structure of the austenite is then able to absorb carbon from the iron carbides in carbon steel. An incomplete initial austenitization can leave undissolved
carbides in the matrix.[46] For some irons, iron-based metals, and steels, the presence of carbides may occur during the austenitization step. The term commonly used for this is two-phase austenitization.[47]
Automation is the technology by which a process or procedure is performed with minimum human assistance.[48] Automation[49] or automatic control is the use of various
control systems for operating equipment such as machinery, processes in factories, boilers and heat treating ovens, switching on telephone networks, steering and stabilization of ships, aircraft and other applications and vehicles with minimal or reduced human intervention. Some processes have been completely automated.
In
chemistry, bases are substances that, in
aqueous solution, release
hydroxide (OH⁻) ions, are slippery to the touch, can taste
bitter if an alkali,[50] change the color of indicators (e.g., turn red
litmus paper blue), react with
acids to form
salts, promote certain chemical reactions (
base catalysis), accept
protons from any proton donor, and/or contain completely or partially displaceable OH⁻ ions.
The Beer–Lambert law, also known as Beer's law, the Lambert–Beer law, or the Beer–Lambert–Bouguer law relates the
attenuation of
light to the properties of the material through which the light is travelling. The law is commonly applied to
chemical analysis measurements and used in understanding attenuation in
physical optics, for
photons,
neutrons or rarefied gases. In
mathematical physics, this law arises as a solution of the
BGK equation.
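In its decadic form the law reads A = ε·l·c, with transmittance T = 10^(−A); a sketch with illustrative solution parameters (the helper names are not from any particular library):

```python
def absorbance(epsilon, path_cm, conc):
    """Decadic absorbance A = epsilon * l * c (Beer-Lambert law)."""
    return epsilon * path_cm * conc

def transmittance(A):
    """Fraction of incident radiant power transmitted: T = 10**(-A)."""
    return 10.0 ** (-A)

# Illustrative solution: epsilon = 1500 L/(mol cm), 1 cm cell, c = 2e-4 mol/L
A = absorbance(1500.0, 1.0, 2e-4)
print(A, transmittance(A))   # A ~ 0.3, about half the light transmitted
```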
Belt friction is a term describing the friction forces between a
belt and a surface, such as a belt wrapped around a
bollard. When one end of the belt is being pulled only part of this force is transmitted to the other end wrapped about a surface. The friction force increases with the amount of wrap about a surface and makes it so the
tension in the belt can be different at both ends of the belt. Belt friction can be modeled by the
Belt friction equation.[51]
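The relation is the capstan form T_hold = T_load·exp(−μ·φ), with φ the wrap angle in radians; a minimal sketch (load, friction coefficient, and wrap are illustrative values):

```python
import math

def hold_force(load, mu, wrap_rad):
    """Belt (capstan) friction: holding tension needed to resist a load tension."""
    return load * math.exp(-mu * wrap_rad)

# 1000 N load, friction coefficient 0.3, rope wrapped twice (4*pi radians) on a bollard:
print(hold_force(1000.0, 0.3, 4 * math.pi))   # ~23 N suffices to hold back 1000 N
```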
In
applied mechanics, bending (also known as flexure) characterizes the behavior of a slender
structural element subjected to an external
load applied perpendicularly to a longitudinal axis of the element. The structural element is assumed to be such that at least one of its dimensions is a small fraction, typically 1/10 or less, of the other two.[52]
Cost–benefit analysis (CBA), sometimes called benefit–cost analysis (BCA), is a systematic approach to estimating the strengths and weaknesses of alternatives (for example in transactions, activities, functional business requirements); it is used to determine options that provide the best approach to achieve benefits while preserving savings.[55] It may be used to compare potential (or completed) courses of action, or to estimate (or evaluate) the value against
costs of a single decision, project, or policy.
A first-order ordinary differential equation of the form y′ + P(x)y = Q(x)yⁿ is called a Bernoulli differential equation, where n is any real number and n ≠ 0 and n ≠ 1.[56] It is named after
Jacob Bernoulli who discussed it in 1695. Bernoulli equations are special because they are nonlinear differential equations with known exact solutions. A famous special case of the Bernoulli equation is the
logistic differential equation.
A beta particle, also called beta ray or beta radiation (symbol β), is a high-energy, high-speed
electron or
positron emitted by the
radioactive decay of an
atomic nucleus during the process of
beta decay. There are two forms of beta decay, β⁻ decay and β⁺ decay, which produce electrons and positrons respectively.[62]
Biocatalysis refers to the use of
living (biological) systems or their parts to speed up (
catalyze) chemical reactions. In biocatalytic processes, natural catalysts, such as
enzymes, perform chemical transformations on
organic compounds. Both enzymes that have been more or less
isolated and enzymes still residing inside living
cells are employed for this task.[63][64][65] The modern usage of biotechnologically produced and possibly modified enzymes for
organic synthesis is termed chemoenzymatic synthesis; the reactions performed are chemoenzymatic reactions.
Biomedical Engineering (BME) or Medical Engineering is the application of engineering principles and design concepts to medicine and biology for healthcare purposes (e.g. diagnostic or therapeutic). This field seeks to close the gap between
engineering and
medicine, combining the design and problem solving skills of engineering with medical biological sciences to advance health care treatment, including
diagnosis,
monitoring, and
therapy.[66]
The Biot number (Bi) is a
dimensionless quantity used in heat transfer calculations. It is named after the eighteenth century French physicist
Jean-Baptiste Biot (1774–1862), and gives a simple index of the ratio of the heat transfer resistances inside of and at the surface of a body. This ratio determines whether or not the temperatures inside a body will vary significantly in space, while the body heats or cools over time, from a thermal gradient applied to its surface.
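With the characteristic length taken as volume over surface area, Bi = h·Lc/k; a common rule of thumb is that Bi below about 0.1 justifies lumped-capacitance analysis. A sketch with illustrative material and convection values:

```python
import math

def biot_number(h, k, volume, area):
    """Bi = h * Lc / k, with characteristic length Lc = volume / surface area."""
    return h * (volume / area) / k

# Illustrative case: 10 mm diameter steel sphere (k ~ 50 W/(m K)) in air (h ~ 25 W/(m^2 K)).
r = 0.005
bi = biot_number(h=25.0, k=50.0, volume=4 / 3 * math.pi * r ** 3, area=4 * math.pi * r ** 2)
print(bi)   # ~8e-4, far below 0.1: internal temperature gradients are negligible
```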
The boiling point of a substance is the temperature at which the
vapor pressure of a
liquid equals the
pressure surrounding the liquid[73][74] and the liquid changes into a vapor.
Boiling-point elevation describes the phenomenon that the
boiling point of a
liquid (a
solvent) will be higher when another compound is added, meaning that a
solution has a higher boiling point than a pure solvent. This happens whenever a non-volatile solute, such as a salt, is added to a pure solvent, such as water. The boiling point can be measured accurately using an
ebullioscope.
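For dilute solutions the elevation is commonly modeled as ΔTb = i·Kb·m, with Kb the ebullioscopic constant of the solvent, m the molality, and i the van 't Hoff factor; a hedged sketch with illustrative numbers for salt water:

```python
def bp_elevation(K_b, molality, i=1):
    """Boiling-point elevation dTb = i * K_b * m for a dilute, non-volatile solute."""
    return i * K_b * molality

# 1 mol NaCl (i ~ 2, since it dissociates) per kg of water (K_b ~ 0.512 K kg/mol):
print(100.0 + bp_elevation(0.512, 1.0, i=2))   # boils near 101 degC instead of 100 degC
```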
Boyle's law (sometimes referred to as the Boyle–Mariotte law, or Mariotte's law[84]) is an experimental
gas law that describes how the
pressure of a
gas tends to increase as the
volume of the container decreases. A modern statement of Boyle's law is:
The absolute pressure exerted by a given mass of an
ideal gas is inversely proportional to the volume it occupies if the
temperature and
amount of gas remain unchanged within a
closed system.[85][86]
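The statement above is equivalent to p₁V₁ = p₂V₂ for two states of the same closed, isothermal sample; a minimal sketch (the pressure and volume values are illustrative):

```python
def boyle_p2(p1, v1, v2):
    """Boyle's law: p1 * v1 == p2 * v2 at fixed temperature and amount of gas."""
    return p1 * v1 / v2

# Halving the volume of a closed sample doubles its absolute pressure:
print(boyle_p2(101.325, 2.0, 1.0))   # 202.65 (kPa)
```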
In
geometry and
crystallography, a Bravais lattice, named after
Auguste Bravais (
1850),[87] is an infinite array (or a finite array, if we consider the edges, obviously) of discrete points generated by a set of
discrete translation operations described in three-dimensional space by R = n₁a₁ + n₂a₂ + n₃a₃,
where nᵢ are any integers and aᵢ are known as the primitive vectors, which lie in different directions (not necessarily mutually perpendicular) and span the lattice. This discrete set of vectors must be closed under vector addition and subtraction. For any choice of position vector R, the lattice looks exactly the same.
The break-even point (BEP) in
economics,
business—and specifically
cost accounting—is the point at which total cost and total revenue are equal, i.e. "even". There is no net loss or gain, and one has "broken even", though
opportunity costs have been paid and capital has received the risk-adjusted, expected return. In short, all costs that must be paid are paid, and there is neither profit nor loss.[88][89]
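In the simplest single-product model, the break-even quantity is fixed costs divided by the contribution margin per unit (price minus variable cost). A Python sketch with hypothetical figures:

```python
def break_even_units(fixed_costs, unit_price, unit_variable_cost):
    """Units at which total revenue equals total cost (single-product model)."""
    return fixed_costs / (unit_price - unit_variable_cost)

# Hypothetical numbers: 50,000 in fixed costs, 25 per unit sold,
# 15 of variable cost per unit.
print(break_even_units(50_000, 25, 15))  # 5000.0 units
```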
Brewster's angle (also known as the polarization angle) is an
angle of incidence at which
light with a particular
polarization is perfectly transmitted through a transparent
dielectric surface, with no
reflection. When unpolarized light is incident at this angle, the light that is reflected from the surface is therefore perfectly polarized. This special angle of incidence is named after the Scottish physicist
Sir David Brewster (1781–1868).[90][91]
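Brewster's angle follows from Snell's law as θB = arctan(n2/n1) for light travelling from a medium of refractive index n1 into one of index n2; a minimal sketch (the indices below are illustrative):

```python
import math

def brewster_angle(n1, n2):
    """Polarization angle in degrees for light passing from index n1 into n2."""
    return math.degrees(math.atan2(n2, n1))

# Air (n = 1.0) into common glass (n = 1.5): about 56.3 degrees.
print(round(brewster_angle(1.0, 1.5), 1))  # 56.3
```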
A material is brittle if, when subjected to
stress, it breaks without significant
plastic deformation. Brittle materials absorb relatively little
energy prior to fracture, even those of high
strength. Breaking is often accompanied by a snapping sound. Brittle materials include most
ceramics and
glasses (which do not deform plastically) and some
polymers, such as
PMMA and
polystyrene. Many
steels become brittle at low temperatures (see
ductile–brittle transition temperature), depending on their composition and processing.
Brownian motion or pedesis is the random motion of
particles suspended in a
fluid (a
liquid or a
gas) resulting from their collision with the fast-moving
molecules in the fluid.[94]
A buffer solution (more precisely,
pH buffer or
hydrogen ion buffer) is an
aqueous solution consisting of a
mixture of a
weak acid and its
conjugate base, or vice versa. Its pH changes very little when a small amount of
strong acid or
base is added to it. Buffer solutions are used as a means of keeping pH at a nearly constant value in a wide variety of chemical applications. In nature, there are many systems that use buffering for pH regulation.
The bulk modulus (K or B) of a substance is a measure of how resistant to compression that substance is. It is defined as the ratio of the
infinitesimal pressure increase to the resulting relative decrease of the
volume:[95]
K = −V (dP/dV).
Other moduli describe the material's response (
strain) to other kinds of
stress: the
shear modulus describes the response to shear, and
Young's modulus describes the response to linear stress. For a
fluid, only the bulk modulus is meaningful. For a complex
anisotropic solid such as
wood or
paper, these three moduli do not contain enough information to describe its behaviour, and one must use the full generalized
Hooke's law.
Capillary action (sometimes capillarity, capillary motion, capillary effect, or wicking) is the ability of a
liquid to flow in narrow spaces without the assistance of, or even in opposition to, external forces like
gravity. The effect can be seen in the drawing up of liquids between the hairs of a paintbrush, in a thin tube, in porous materials such as paper and plaster, in some non-porous materials such as sand and liquefied
carbon fiber, or in a cell. It occurs because of
intermolecular forces between the liquid and surrounding solid surfaces. If the diameter of the tube is sufficiently small, then the combination of
surface tension (which is caused by
cohesion within the liquid) and
adhesive forces between the liquid and container wall act to propel the liquid.
A hypothetical thermodynamic cycle for a heat engine; no thermodynamic cycle can be more efficient than a Carnot cycle operating between the same two temperature limits.
Named for
Carlo Alberto Castigliano, is a method for determining the displacements of a
linear-elastic system based on the
partial derivatives of the
energy. He is known for his two theorems. The basic concept may be easy to understand by recalling that a change in energy is equal to the causing force times the resulting displacement. Therefore, the causing force is equal to the change in energy divided by the resulting displacement. Alternatively, the resulting displacement is equal to the change in energy divided by the causing force. Partial derivatives are needed to relate causing forces and resulting displacements to the change in energy.
The cell membrane (also known as the plasma membrane or cytoplasmic membrane, and historically referred to as the plasmalemma) is a
biological membrane that separates the
interior of all
cells from the
outside environment (the extracellular space) which protects the cell from its environment[96][97] consisting of a
lipid bilayer with embedded
proteins.
In
cell biology, the nucleus (pl. nuclei; from
Latin nucleus or nuculeus, meaning kernel or seed) is a
membrane-enclosed
organelle found in
eukaryotic cells. Eukaryotes usually have a single nucleus, but a few cell types, such as mammalian red blood cells, have
no nuclei, and a few others including
osteoclasts have
many.
In
biology, cell theory is the historic
scientific theory, now universally accepted, that living organisms are made up of
cells, that they are the basic structural/organizational unit of all organisms, and that all cells come from pre-existing cells. Cells are the basic unit of structure in all organisms and also the basic unit of reproduction.
Is the point where the total sum of a
pressure field acts on a body, causing a
force to act through that point. The total force vector acting at the center of pressure is the value of the integrated vectorial pressure field. The resultant force and center of pressure location produce equivalent force and moment on the body as the original pressure field.
In
probability theory, the central limit theorem (CLT) establishes that, in some situations, when
independent random variables are added, their properly normalized sum tends toward a
normal distribution (informally a "bell curve") even if the original variables themselves are not normally distributed. The theorem is a key concept in probability theory because it implies that probabilistic and statistical methods that work for normal distributions can be applicable to many problems involving other types of distributions.
A central processing unit (CPU) is the
electronic circuitry within a
computer that carries out the
instructions of a
computer program by performing the basic
arithmetic, logic, controlling and
input/output (I/O) operations specified by the instructions. The computer industry has used the term "central processing unit" at least since the early 1960s.[98] Traditionally, the term "CPU" refers to a processor, more specifically to its processing unit and
control unit (CU), distinguishing these core elements of a computer from external components such as
main memory and
I/O circuitry.[99]
Is a sequence of reactions where a reactive product or by-product causes additional reactions to take place. In a chain reaction,
positive feedback leads to a self-amplifying
chain of events.
Charles's law (also known as the law of volumes) is an experimental
gas law that describes how
gases tend to expand when heated. A modern statement of Charles's law is:
When the
pressure on a sample of a dry gas is held constant, the Kelvin temperature and the volume will be in direct proportion.[103]
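The direct proportion V1/T1 = V2/T2 (with T in kelvin) makes Charles's law easy to apply numerically; a minimal sketch:

```python
def charles_volume(v1, t1, t2):
    """Volume after an isobaric temperature change: V1/T1 = V2/T2 (T in kelvin)."""
    return v1 * t2 / t1

# Warming a gas sample from 300 K to 600 K at constant pressure doubles its volume.
print(charles_volume(v1=1.0, t1=300.0, t2=600.0))  # 2.0
```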
In a
chemical reaction, chemical equilibrium is the state in which both reactants and products are present in
concentrations which have no further tendency to change with time, so that there is no observable change in the properties of the system.[104] Usually, this state results when the forward reaction proceeds at the same rate as the
reverse reaction. The
reaction rates of the forward and backward reactions are generally not zero, but equal. Thus, there are no net changes in the concentrations of the reactant(s) and product(s). Such a state is known as
dynamic equilibrium.[105][106]
Chemical kinetics, also known as reaction kinetics, is the study of
rates of
chemical processes. Chemical kinetics includes investigations of how different experimental conditions can influence the speed of a
chemical reaction and yield information about the
reaction's mechanism and
transition states, as well as the construction of
mathematical models that can describe the characteristics of a chemical reaction.
A chemical reaction is a process that leads to the
chemical transformation of one set of
chemical substances to another.[107] Classically, chemical reactions encompass changes that only involve the positions of
electrons in the forming and breaking of
chemical bonds between
atoms, with no change to the nuclei (no change to the elements present), and can often be described by a
chemical equation.
Nuclear chemistry is a sub-discipline of chemistry that involves the chemical reactions of unstable and radioactive elements where both electronic and nuclear changes can occur.
Chromate salts contain the chromate anion, CrO₄²⁻. Dichromate salts contain the dichromate anion, Cr₂O₇²⁻. They are
oxyanions of
chromium in the 6+
oxidation state. They are moderately strong
oxidizing agents. In an
aqueous solution, chromate and dichromate ions are interconvertible.
In
physics, circular motion is a movement of an object along the
circumference of a
circle or
rotation along a circular path. It can be uniform, with constant angular rate of rotation and constant speed, or non-uniform with a changing rate of rotation. The
rotation around a fixed axis of a three-dimensional body involves circular motion of its parts. The equations of motion describe the movement of the
center of mass of a body.
The Clausius–Clapeyron relation characterizes the coexistence curve of two phases on a pressure–temperature diagram:
dP/dT = L/(T Δv) = Δs/Δv,
where dP/dT is the slope of the tangent to the coexistence curve at any point, L is the specific
latent heat, T is the
temperature, Δv is the
specific volume change of the phase transition, and Δs is the
specific entropy change of the phase transition.
The Clausius theorem (1855) states that for a system exchanging heat with external reservoirs and undergoing a cyclic process (one that ultimately returns the system to its original state),
∮ δQ/T ≤ 0,
where δQ is the infinitesimal amount of heat absorbed by the system from the reservoir and T is the
temperature of the external reservoir (surroundings) at a particular instant in time. In the special case of a reversible process, the equality holds.[114] The reversible case is used to introduce the
entropy state function, because in a cyclic process the variation of a state function is zero. In words, the Clausius statement says that it is impossible to construct a device whose sole effect is the transfer of heat from a cool reservoir to a hot reservoir.[115] Equivalently, heat spontaneously flows from a hot body to a cooler one, not the other way around.[116] The generalized "inequality of Clausius"[117]
dS ≥ δQ/Tsurr
for an infinitesimal change in entropy S applies not only to cyclic processes, but to any process that occurs in a closed system.
The coefficient of performance or COP (sometimes CP or CoP) of a
heat pump, refrigerator or air conditioning system is a ratio of useful heating or cooling provided to work required.[118][119] Higher COPs equate to lower operating costs. The COP usually exceeds 1, especially in heat pumps, because, instead of just converting work to heat (which, if 100% efficient, would be a COP of 1), it pumps additional heat from a heat source to where the heat is required. For complete systems, COP calculations should include the energy consumption of all power-consuming auxiliaries. COP is highly dependent on operating conditions, especially absolute temperature and relative temperature between sink and system, and is often graphed or averaged against expected conditions.[120]
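The defining ratio is straightforward to compute; a minimal sketch with hypothetical figures:

```python
def cop(useful_heat, work_input):
    """Coefficient of performance: useful heating (or cooling) per unit of work."""
    return useful_heat / work_input

# A heat pump delivering 3 kWh of heat for each 1 kWh of electricity has a COP of 3.
print(cop(3.0, 1.0))  # 3.0
```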
In
physics, two wave sources are perfectly coherent if they have a constant
phase difference and the same
frequency, and the same
waveform. Coherence is an ideal property of
waves that enables stationary (i.e. temporally and spatially constant)
interference. It contains several distinct concepts, which are limiting cases that never quite occur in reality but allow an understanding of the physics of waves, and has become a very important concept in quantum physics. More generally, coherence describes all properties of the
correlation between
physical quantities of a single wave, or between several waves or wave packets.
Or cohesive attraction or cohesive force is the action or
property of like
molecules sticking together, being mutually
attractive. It is an intrinsic property of a
substance that is caused by the shape and structure of its molecules, which makes the distribution of orbiting
electrons irregular when molecules get close to one another, creating
electrical attraction that can maintain a microscopic structure such as a
water drop. In other words, cohesion allows for
surface tension, creating a "solid-like" state upon which light-weight or low-density materials can be placed.
Or cold working, any metal-working procedure (such as hammering, rolling, shearing, bending, milling, etc.) carried out below the metal's recrystallization temperature.
Is planning for side effects or other unintended issues in a
design. In simpler terms, it is a "counter-procedure" plan for an expected side effect, intended to produce more efficient and useful results. The design of an
invention can itself also be to compensate for some other existing issue or
exception.
Compressive strength or compression strength is the capacity of a material or structure to withstand loads tending to reduce size, as opposed to
tensile strength, which withstands loads tending to elongate. In other words, compressive strength resists
compression (being pushed together), whereas tensile strength resists
tension (being pulled apart). In the study of
strength of materials, tensile strength, compressive strength, and
shear strength can be analyzed independently.
A computer is a device that can be instructed to carry out sequences of
arithmetic or
logical operations automatically via
computer programming. Modern computers have the ability to follow generalized sets of operations, called programs. These programs enable computers to perform an extremely wide range of tasks.
Computer-aided design (CAD) is the use of
computer systems (or workstations) to aid in the creation, modification, analysis, or optimization of a
design.[122] CAD software is used to increase the productivity of the designer, improve the quality of design, improve communications through documentation, and to create a database for manufacturing.[123] CAD output is often in the form of electronic files for print, machining, or other manufacturing operations. The term CADD (for Computer Aided Design and Drafting) is also used.[124]
Computer-aided manufacturing (CAM) is the use of software to control
machine tools and related ones in the
manufacturing of workpieces.[125][126][127][128][129] This is not the only definition for CAM, but it is the most common;[125] CAM may also refer to the use of a computer to assist in all operations of a manufacturing plant, including planning, management, transportation and storage.[130][131]
Is the theory, experimentation, and engineering that form the basis for the design and use of
computers. It involves the study of
algorithms that process, store, and communicate
digital information. A
computer scientist specializes in the theory of computation and the design of computational systems.[133]
Lenses are classified by the curvature of the two optical surfaces. A lens is biconvex (or double convex, or just convex) if both surfaces are
convex. If both surfaces have the same radius of curvature, the lens is equiconvex. A lens with two
concave surfaces is biconcave (or just concave). If one of the surfaces is flat, the lens is plano-convex or plano-concave depending on the curvature of the other surface. A lens with one convex and one concave side is convex-concave or meniscus.
Is the field of physics that deals with the macroscopic and microscopic physical properties of matter. In particular it is concerned with the "condensed" phases that appear whenever the number of constituents in a system is extremely large and the interactions between the constituents are strong.
In
statistics, a confidence interval or compatibility interval (CI) is a type of
interval estimate, computed from the statistics of the observed data, that might contain the true value of an unknown
population parameter. The interval has an associated confidence level that, loosely speaking, quantifies the level of confidence that the parameter lies in the interval. More strictly speaking, the confidence level represents the frequency (i.e. the proportion) of possible confidence intervals that contain the true value of the unknown population parameter. In other words, if confidence intervals are constructed using a given confidence level from an infinite number of independent sample statistics, the proportion of those intervals that contain the true value of the parameter will be equal to the confidence level.[134][135][136]
A conjugate acid, within the
Brønsted–Lowry acid–base theory, is a
species formed by the
reception of a proton (
H+) by a
base—in other words, it is a base with a
hydrogen ion added to it. On the other hand, a conjugate base is what is left over after an acid has donated a proton during a chemical reaction. Hence, a conjugate base is a species formed by the
removal of a proton from an acid.[137] Because
some acids are capable of releasing multiple protons, the conjugate base of an acid may itself be acidic.
In physics and chemistry, the law of conservation of energy states that the total
energy of an
isolated system remains constant; it is said to be
conserved over time.[138] This law means that energy can neither be created nor destroyed; rather, it can only be transformed or transferred from one form to another.
The law of conservation of mass or principle of mass conservation states that for any
system closed to all transfers of
matter and
energy, the
mass of the system must remain constant over time, as mass can neither be added to nor removed from such a system. Hence, the quantity of mass is conserved over time.
A continuity equation in physics is an
equation that describes the transport of some quantity. It is particularly simple and powerful when applied to a
conserved quantity, but it can be generalized to apply to any
extensive quantity. Since
mass,
energy,
momentum,
electric charge and other natural quantities are conserved under their respective appropriate conditions, a variety of physical phenomena may be described using continuity equations.
Is a branch of
mechanics that deals with the mechanical behavior of materials modeled as a continuous mass rather than as discrete particles. The French mathematician
Augustin-Louis Cauchy was the first to formulate such models in the 19th century.
Control engineering or control systems engineering is an
engineering discipline that applies
automatic control theory to design systems with desired behaviors in
control environments.[139] The discipline of controls overlaps and is usually taught along with
electrical engineering at many institutions around the world.[139]
Is a
natural process, which converts a refined metal to a more chemically-stable form, such as its
oxide,
hydroxide, or
sulfide. It is the gradual destruction of materials (usually
metals) by chemical and/or electrochemical reaction with their environment.
Corrosion engineering is the field dedicated to controlling and stopping corrosion.
Thus, it is also the amount of excess charge on a
capacitor of one
farad charged to a potential difference of one
volt:
1 C = 1 F × 1 V.
The coulomb is equivalent to the charge of approximately 6.242×10¹⁸ (1.036×10⁻⁵ mol)
protons, and −1 C is equivalent to the charge of approximately 6.242×10¹⁸ electrons.
A
new definition, in terms of the
elementary charge, will take effect on 20 May 2019.[141] The new definition defines the
elementary charge (the charge of the proton) as exactly 1.602176634×10⁻¹⁹ coulombs. This implicitly defines the coulomb as 1/(1.602176634×10⁻¹⁹) elementary charges.
Coulomb's law, or Coulomb's inverse-square law, is a
law of
physics for quantifying Coulomb's force, or electrostatic force. Electrostatic force is the amount of force with which stationary,
electrically charged particles either repel, or attract each other. This force and the law for quantifying it, represent one of the most basic forms of force used in the physical sciences, and were an essential basis to the study and development of the theory and field of
classical electromagnetism. The law was first published in 1785 by French physicist
Charles-Augustin de Coulomb.[142]
In its
scalar form, the law is:
F = ke·q1q2/r²,
where ke is the
Coulomb constant (ke ≈ 9×10⁹ N⋅m²⋅C⁻²), q1 and q2 are the signed magnitudes of the charges, and the scalar r is the distance between the charges. The force of the interaction between the charges is attractive if the charges have opposite signs (i.e., F is negative) and repulsive if like-signed (i.e., F is positive).
Being an
inverse-square law, the law is analogous to
Isaac Newton's inverse-square
law of universal gravitation. Coulomb's law can be used to derive
Gauss's law, and vice versa.
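A direct numerical application of the scalar form, with ke taken as about 8.9875×10⁹ N·m²·C⁻² (the charges and separation below are illustrative):

```python
K_E = 8.9875e9  # Coulomb constant, N*m^2/C^2 (approximate value)

def coulomb_force(q1, q2, r):
    """Signed electrostatic force in newtons; positive means repulsive."""
    return K_E * q1 * q2 / r ** 2

# Two like 1 microcoulomb charges held 1 m apart repel with roughly 9 mN.
f = coulomb_force(1e-6, 1e-6, 1.0)
print(round(f, 4))  # 0.009
```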
Crystallization is the (natural or artificial) process by which a solid forms, where the atoms or molecules are highly organized into a structure known as a
crystal. Some of the ways by which crystals form are
precipitating from a
solution,
freezing, or more rarely
deposition directly from a
gas. Attributes of the resulting crystal depend largely on factors such as temperature, air pressure, and in the case of liquid crystals, time of fluid evaporation.
Describes the motion of a moving particle that conforms to a known or fixed curve. The study of such motion involves the use of two coordinate systems, the first being planar motion and the second being cylindrical motion.
In
chemistry and
physics, Dalton's law (also called Dalton's law of partial pressures) states that in a mixture of non-reacting gases, the total
pressure exerted is equal to the sum of the
partial pressures of the individual gases.[150]
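Dalton's law amounts to simple addition of the partial pressures; a sketch with illustrative sea-level values in kPa:

```python
def total_pressure(partial_pressures):
    """Dalton's law: total pressure of a non-reacting mixture is the sum of partials."""
    return sum(partial_pressures)

# Illustrative partial pressures of dry air at sea level, in kPa:
# nitrogen ~79.1, oxygen ~21.2, argon ~0.9, trace gases ~0.1.
print(round(total_pressure([79.1, 21.2, 0.9, 0.1]), 1))  # 101.3
```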
In
materials science, deformation refers to any changes in the shape or size of an object due to
an applied
force (the deformation energy in this case is transferred through work) or
a change in temperature (the deformation energy in this case is transferred through heat).
The first case can be a result of
tensile (pulling) forces,
compressive (pushing) forces,
shear,
bending or
torsion (twisting).
In the second case, the most significant factor, which is determined by the temperature, is the mobility of the structural defects such as grain boundaries, point vacancies, line and screw dislocations, stacking faults and twins in both crystalline and non-crystalline solids. The movement or displacement of such mobile defects is thermally activated, and thus limited by the rate of atomic diffusion.[152][153]
Deformation in
continuum mechanics is the transformation of a body from a reference configuration to a current configuration.[154] A configuration is a set containing the positions of all particles of the body.
A deformation may be caused by
external loads,[155] body forces (such as
gravity or
electromagnetic forces), or changes in temperature, moisture content, or chemical reactions, etc.
The density, or more precisely, the volumetric mass density, of a substance is its
mass per unit
volume. The symbol most often used for density is ρ (the lower case Greek letter
rho), although the Latin letter D can also be used. Mathematically, density is defined as mass divided by volume:[156]
ρ = m/V,
where ρ is the density, m is the mass, and V is the volume. In some cases (for instance, in the United States oil and gas industry), density is loosely defined as its
weight per unit
volume,[157] although this is scientifically inaccurate – this quantity is more specifically called
specific weight.
The derivative of a
function of a real variable measures the sensitivity to change of the function value (output value) with respect to a change in its argument (input value). Derivatives are a fundamental tool of
calculus. For example, the derivative of the position of a moving object with respect to
time is the object's
velocity: this measures how quickly the position of the object changes when time advances.
Diamagnetic materials are repelled by a
magnetic field; an applied magnetic field creates an
induced magnetic field in them in the opposite direction, causing a repulsive force. In contrast,
paramagnetic and
ferromagnetic materials are attracted by a magnetic field. Diamagnetism is a
quantum mechanical effect that occurs in all materials; when it is the only contribution to the magnetism, the material is called diamagnetic. In paramagnetic and ferromagnetic substances the weak diamagnetic force is overcome by the attractive force of
magnetic dipoles in the material. The
magnetic permeability of diamagnetic materials is less than μ0, the permeability of vacuum. In most materials diamagnetism is a weak effect which can only be detected by sensitive laboratory instruments, but a
superconductor acts as a strong diamagnet because it repels a magnetic field entirely from its interior.
A differential pulley, also called Weston differential pulley, or colloquially chain fall, is used to manually lift very heavy objects like
car engines. It is operated by pulling upon the slack section of a continuous chain that wraps around pulleys. The relative size of two connected pulleys determines the maximum weight that can be lifted by hand. The load will remain in place (and not lower under the force of
gravity) until the chain is pulled.[158]
Is the net movement of molecules or atoms from a region of higher concentration (or high chemical potential) to a region of lower concentration (or low chemical potential).
Is the analysis of the relationships between different
physical quantities by identifying their
base quantities (such as
length,
mass,
time, and
electric charge) and
units of measure (such as miles vs. kilometers, or pounds vs. kilograms) and tracking these dimensions as calculations or comparisons are performed. The
conversion of units from one dimensional unit to another is often somewhat complex. Dimensional analysis, or more specifically the factor-label method, also known as the unit-factor method, is a widely used technique for such conversions using the rules of
algebra.[159][160][161]
Direct integration is a
structural analysis method for measuring internal shear, internal moment, rotation, and deflection of a beam.
For a beam with an applied weight w(x), taking downward to be positive, the internal
shear force is given by taking the negative integral of the weight:
V(x) = −∫ w(x) dx.
The internal moment M(x) is the integral of the internal shear:
M(x) = ∫ V(x) dx.
In
optics, dispersion is the phenomenon in which the
phase velocity of a wave depends on its frequency.[162]
Media having this common property may be termed dispersive media. Sometimes the term chromatic dispersion is used for specificity.
Although the term is used in the field of optics to describe
light and other
electromagnetic waves, dispersion in the same sense can apply to any sort of wave motion such as
acoustic dispersion in the case of sound and seismic waves, in
gravity waves (ocean waves), and for telecommunication signals along
transmission lines (such as
coaxial cable) or
optical fiber.
In
fluid mechanics, displacement occurs when an object is immersed in a
fluid, pushing it out of the way and taking its place. The volume of the fluid displaced can then be measured, and from this, the volume of the immersed object can be deduced (the volume of the immersed object will be exactly equal to the volume of the displaced fluid).
Is a
vector whose length is the shortest
distance from the initial to the final
position of a point P.[163] It quantifies both the distance and direction of an imaginary motion along a straight line from the initial position to the final position of the point. A displacement may be identified with the
translation that maps the initial position to the final position.
The Doppler effect (or the Doppler shift) is the change in
frequency or
wavelength of a
wave in relation to an
observer who is moving relative to the wave source.[164] It is named after the
Austrian physicist
Christian Doppler, who described the phenomenon in 1842.
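For sound, the observed frequency is f·(v + v_observer)/(v − v_source), with velocities taken as positive when moving toward the other party; a sketch with illustrative numbers:

```python
def doppler_frequency(f_source, v_wave, v_observer=0.0, v_source=0.0):
    """Observed frequency; velocities are positive when closing the distance."""
    return f_source * (v_wave + v_observer) / (v_wave - v_source)

# A 440 Hz siren approaching at 34.3 m/s, with a sound speed of ~343 m/s:
# the pitch rises by about 11%.
print(round(doppler_frequency(440.0, 343.0, v_source=34.3), 1))  # 488.9
```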
The dose–response relationship, or exposure–response relationship, describes the magnitude of the
response of an
organism, as a
function of exposure (or
doses) to a
stimulus or
stressor (usually a
chemical) after a certain exposure time.[165] Dose–response relationships can be described by dose–response curves. A stimulus response function or stimulus response curve is defined more broadly as the response from any type of stimulus, not limited to chemicals.
In
fluid dynamics, drag (sometimes called air resistance, a type of
friction, or fluid resistance, another type of friction or fluid friction) is a
force acting opposite to the relative motion of any object moving with respect to a surrounding fluid.[166] This can exist between two fluid layers (or surfaces) or a fluid and a
solid surface. Unlike other resistive forces, such as dry
friction, which are nearly independent of velocity, drag forces depend on velocity.[167][168]
Drag force is proportional to the velocity for a
laminar flow and the squared velocity for a
turbulent flow. Even though the ultimate cause of a drag is viscous friction, the turbulent drag is independent of
viscosity.[169] Drag forces always decrease fluid velocity relative to the solid object in the fluid's
path.
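In the turbulent regime described above, drag is commonly modelled by the drag equation F = ½ρv²CdA; a Python sketch with illustrative values (the drag coefficient and frontal area are assumptions):

```python
def drag_force(density, velocity, drag_coefficient, area):
    """Quadratic drag: F = 0.5 * rho * v**2 * Cd * A (turbulent regime)."""
    return 0.5 * density * velocity ** 2 * drag_coefficient * area

# Illustrative: air density ~1.2 kg/m^3, speed 10 m/s, Cd = 1.0, area 0.5 m^2.
f = drag_force(1.2, 10.0, 1.0, 0.5)
print(round(f, 1))  # 30.0 N
```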
Is a measure of a material's ability to undergo significant plastic deformation before rupture, which may be expressed as percent elongation or percent area reduction from a tensile test.
In physics and chemistry, effusion is the process in which a gas escapes from a container through a hole of diameter considerably smaller than the
mean free path of the molecules.[170]
In
physics, elasticity is the ability of a body to resist a distorting influence and to return to its original size and shape when that influence or force is removed. Solid objects will
deform when adequate
forces are applied to them. If the material is elastic, the object will return to its initial shape and size when these forces are removed.
Is the
physical property of
matter that causes it to experience a
force when placed in an
electromagnetic field. There are two types of electric charges; positive and negative (commonly carried by
protons and
electrons respectively). Like charges repel and unlike attract. An object with an absence of net charge is referred to as neutral. Early knowledge of how charged substances interact is now called
classical electrodynamics, and is still accurate for problems that do not require consideration of
quantum effects.
Is a flow of
electric charge.[171] In
electric circuits this charge is often carried by moving
electrons in a
wire. It can also be carried by
ions in an
electrolyte, or by both ions and electrons such as in an ionised gas (
plasma).[172]
The
SI unit for measuring an electric current is the
ampere, which is the flow of electric charge across a surface at the rate of one
coulomb per second. Electric current is measured using a device called an
ammeter.[173]
Surrounds an
electric charge, and exerts force on other charges in the field, attracting or repelling them.[174][175] Electric field is sometimes abbreviated as E-field.
Is an
electrical machine that converts
electrical energy into
mechanical energy. Most electric motors operate through the interaction between the motor's
magnetic field and
winding currents to generate force in the form of
rotation. Electric motors can be powered by
direct current (DC) sources, such as from batteries, motor vehicles or rectifiers, or by
alternating current (AC) sources, such as a power grid,
inverters or electrical generators. An
electric generator is mechanically identical to an electric motor, but operates in the reverse direction, accepting mechanical energy (such as from flowing water) and converting this mechanical energy into electrical energy.
(Also called the electric field potential, potential drop or the electrostatic potential) is the amount of
work needed to move a unit of
positive charge from a reference point to a specific point inside the field without producing an acceleration. Typically, the reference point is the
Earth or a point at
infinity, although any point beyond the influence of the electric field charge can be used.
Electric potential energy, or electrostatic potential energy, is a
potential energy (measured in
joules) that results from
conservative Coulomb forces and is associated with the configuration of a particular set of point
charges within a defined
system. An object may have electric potential energy by virtue of two key elements: its own electric charge and its relative position to other electrically charged objects. The term "electric potential energy" is used to describe the potential energy in systems with
time-variant electric fields, while the term "electrostatic potential energy" is used to describe the potential energy in systems with
time-invariant electric fields.
Is an object or type of material that allows the flow of charge (
electrical current) in one or more directions. Materials made of metal are common electrical conductors. Electrical current is generated by the flow of negatively charged electrons, positively charged holes, and positive or negative ions in some cases.
Is the measure of the opposition that a
circuit presents to a
current when a
voltage is applied. The term complex impedance may be used interchangeably.
Is a material whose internal
electric charges do not flow freely; very little
electric current will flow through it under the influence of an
electric field. This contrasts with other materials,
semiconductors and
conductors, which conduct electric current more easily. The property that distinguishes an insulator is its
resistivity; insulators have higher resistivity than semiconductors or conductors.
Is a type of
magnet in which the
magnetic field is produced by an
electric current. Electromagnets usually consist of wire wound into a
coil. A current through the wire creates a magnetic field which is concentrated in the hole, denoting the centre of the coil. The magnetic field disappears when the current is turned off. The wire turns are often wound around a
magnetic core made from a
ferromagnetic or
ferrimagnetic material such as
iron; the magnetic core concentrates the
magnetic flux and makes a more powerful magnet.
Electromechanics[180][181][182][183] combines processes and procedures drawn from
electrical engineering and
mechanical engineering. Electromechanics focuses on the interaction of electrical and mechanical systems as a whole and how the two systems interact with each other. This process is especially prominent in systems such as those of DC or AC rotating electrical machines which can be designed and operated to generate power from a mechanical process (
generator) or used to power a mechanical effect (
motor). Electrical engineering in this context also encompasses
electronics engineering.
In
chemistry, an electron pair, or Lewis pair, consists of two
electrons that occupy the same
molecular orbital but have opposite
spins.
Gilbert N. Lewis introduced the concepts of both the electron pair and the covalent bond in a landmark paper he published in 1916.[189]
Symbolized as χ, is the measurement of the tendency of an
atom to attract a shared pair of
electrons (or
electron density).[190] An atom's electronegativity is affected by both its
atomic number and the distance at which its
valence electrons reside from the charged nucleus. The higher the associated electronegativity, the more an atom or a substituent group attracts electrons.
Is a process where a sample of some material (e.g., soil, waste or drinking water, bodily fluids,
minerals,
chemical compounds) is analyzed for its
elemental and sometimes
isotopic composition.[citation needed] Elemental analysis can be qualitative (determining what elements are present), and it can be quantitative (determining how much of each are present). Elemental analysis falls within the ambit of
analytical chemistry, the set of instruments involved in deciphering the chemical nature of our world.
Is any process with an increase in the
enthalpy, H (or
internal energy, U) of the system.[192] In such a process, a closed system usually absorbs
thermal energy from its surroundings, which is
heat transfer into the system. It may be a chemical process, such as dissolving ammonium nitrate in water, or a physical process, such as the melting of ice cubes.
Is the use of
scientific principles to design and build machines, structures, and other items, including bridges, tunnels, roads, vehicles, and buildings.[195] The discipline of engineering encompasses a broad range of more specialized
fields of engineering, each with a more specific emphasis on particular areas of
applied mathematics,
applied science, and types of application. The term engineering is derived from the
Latin ingenium, meaning "cleverness", and ingeniare, meaning "to contrive, devise".[196]
Engineering economics, previously known as engineering economy, is a subset of
economics concerned with the use and "...application of economic principles"[197] in the analysis of engineering decisions.[198] As a discipline, it is focused on the branch of economics known as
microeconomics in that it studies the behavior of individuals and firms in making decisions regarding the allocation of limited resources. Thus, it focuses on the decision making process, its context and environment.[197] It is pragmatic by nature, integrating economic theory with engineering practice.[197] But, it is also a simplified application of microeconomic theory in that it assumes elements such as price determination, competition and demand/supply to be fixed inputs from other sources.[197] As a discipline though, it is closely related to others such as
statistics,
mathematics and
cost accounting.[197] It draws upon the logical framework of economics but adds to that the analytical power of mathematics and statistics.[197]
Or engineering science, refers to the study of the combined disciplines of
physics,
mathematics,
chemistry,
biology, and
engineering, particularly computer, nuclear, electrical, electronic, aerospace, materials or mechanical engineering. By focusing on the
scientific method as a rigorous basis, it seeks ways to apply, design, and develop new solutions in engineering.[201][202][203][204]
In
statistics, an estimator is a rule for calculating an estimate of a given quantity based on
observed data: thus the rule (the estimator), the quantity of interest (the
estimand) and its result (the estimate) are distinguished.[206] For example, the
sample mean is a commonly used estimator of the
population mean. There are
point and
interval estimators. The
point estimators yield single-valued results, although this includes the possibility of single vector-valued results and results that can be expressed as a single function. This is in contrast to an
interval estimator, where the result would be a range of plausible values (or vectors or functions).
Euler–Bernoulli beam theory (also known as engineer's beam theory or classical beam theory)[207] is a simplification of the
linear theory of elasticity which provides a means of calculating the load-carrying and
deflection characteristics of
beams. It covers the case for small deflections of a
beam that are subjected to lateral loads only. It is thus a special case of
Timoshenko beam theory. It was first enunciated circa 1750,[208] but was not applied on a large scale until the development of the
Eiffel Tower and the
Ferris wheel in the late 19th century. Following these successful demonstrations, it quickly became a cornerstone of engineering and an enabler of the
Second Industrial Revolution. Additional
mathematical models have been developed such as
plate theory, but the simplicity of beam theory makes it an important tool in the sciences, especially
structural and
mechanical engineering.
In
thermodynamics, the term exothermic process (exo- : "outside") describes a process or reaction that releases
energy from the system to its surroundings, usually in the form of
heat, but also in a form of
light (e.g. a spark, flame, or flash),
electricity (e.g. a battery), or
sound (e.g. explosion heard when burning hydrogen). Its etymology stems from the Greek prefix ἔξω (exō, which means "outwards") and the Greek word θερμικός (thermikós, which means "thermal").[209]
(FoS), also known as (and used interchangeably with) safety factor (SF), expresses how much stronger a system is than it needs to be for an intended load.
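The definition reduces to a simple ratio of capacity to demand; a minimal sketch (the function name and the cable example are illustrative assumptions):

```python
def factor_of_safety(failure_load, design_load):
    """FoS = capacity / demand: how much stronger the system is than it needs to be."""
    return failure_load / design_load

# A cable that fails at 40 kN carrying a 10 kN working load has FoS = 4.
print(factor_of_safety(40.0, 10.0))  # 4.0
```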
[210] The farad (symbol: F) is the
SI derived unit of electrical
capacitance, the ability of a body to store an electrical charge. It is named after the English physicist
Michael Faraday.
The Faraday constant is the product F = NA·e. Both NA and e have exactly defined values, and hence F has a known exact value. NA is the
Avogadro constant (the ratio of the number of particles, N, which is unitless, to the amount of substance, n, in units of moles), and e is the
elementary charge or the magnitude of the charge of an electron. This relation holds because the amount of charge of a mole of electrons is equal to the amount of charge in one electron multiplied by the number of electrons in a mole.
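Since the 2019 SI redefinition, both NA and e are exact, so the relation described above can be evaluated directly (the variable names below are illustrative):

```python
N_A = 6.02214076e23   # Avogadro constant, mol^-1 (exact SI value)
e = 1.602176634e-19   # elementary charge, C (exact SI value)

F = N_A * e           # Faraday constant: charge of one mole of electrons, C/mol
print(round(F, 2))    # 96485.33
```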
In
optics, Fermat's principle, or the principle of least time, named after French mathematician
Pierre de Fermat, is the principle that the path taken between two points by a ray of light is the path that can be traversed in the least time. This principle is sometimes taken as the definition of a ray of light.[215] However, this version of the principle is not general; a more modern statement of the principle is that rays of light traverse the path of stationary optical length with respect to variations of the path.[216] In other words, a ray of light prefers the path such that there are other paths, arbitrarily nearby on either side, along which the ray would take almost exactly the same time to traverse.
(FEM), is the most widely used method for solving problems of engineering and
mathematical models. Typical problem areas of interest include the traditional fields of
structural analysis,
heat transfer,
fluid flow, mass transport, and
electromagnetic potential.
The FEM is a particular
numerical method for solving
partial differential equations in two or three space variables (i.e., some
boundary value problems). To solve a problem, the FEM subdivides a large system into smaller, simpler parts that are called finite elements. This is achieved by a particular space
discretization in the space dimensions, which is implemented by the construction of a
mesh of the object: the numerical domain for the solution, which has a finite number of points.
The finite element method formulation of a boundary value problem finally results in a system of
algebraic equations. The method approximates the unknown function over the domain.[217]
The simple equations that model these finite elements are then assembled into a larger system of equations that models the entire problem. The FEM then uses
variational methods from the
calculus of variations to approximate a solution by minimizing an associated error function.
For Inspiration and Recognition of Science and Technology (FIRST) is an organization founded by inventor Dean Kamen in 1989 to develop ways to inspire students in engineering and technology fields.
In
physics and
engineering, fluid dynamics is a subdiscipline of
fluid mechanics that describes the flow of
fluids—
liquids and
gases. It has several subdisciplines, including
aerodynamics (the study of air and other gases in motion) and hydrodynamics (the study of liquids in motion).
Fluid statics, or hydrostatics, is the branch of
fluid mechanics that studies "
fluids at rest and the pressure in a fluid or exerted by a fluid on an immersed body".[221]
Is a mechanical device specifically designed to use the conservation of
angular momentum so as to efficiently store
rotational energy; a form of kinetic energy proportional to the product of its
moment of inertia and the square of its
rotational speed. In particular, if we assume the flywheel's moment of inertia to be constant (i.e., a flywheel with fixed mass and
second moment of area revolving about some fixed axis) then the stored (rotational) energy is directly associated with the square of its rotational speed.
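The stored energy described above is E = ½·I·ω²; a minimal numeric sketch (the function name and the sample values are illustrative assumptions):

```python
def flywheel_energy(moment_of_inertia, omega):
    """Stored rotational kinetic energy, E = 0.5 * I * omega^2 (joules)."""
    return 0.5 * moment_of_inertia * omega ** 2

# For constant I, doubling the rotational speed quadruples the stored energy.
print(flywheel_energy(2.0, 100.0))  # 10000.0
```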
In
geometrical optics, a focus, also called an image point, is the point where
light rays originating from a point on the object
converge.[222] Although the focus is conceptually a point, physically the focus has a spatial extent, called the
blur circle. This non-ideal focusing may be caused by
aberrations of the imaging optics. In the absence of significant aberrations, the smallest possible blur circle is the
Airy disc, which is caused by
diffraction from the optical system's
aperture. Aberrations tend to worsen as the aperture diameter increases, while the Airy disc is smallest for large apertures.
In
materials science, fracture toughness is the critical
stress intensity factor of a sharp crack where propagation of the crack suddenly becomes rapid and unlimited. A component's thickness affects the constraint conditions at the tip of a crack with thin components having
plane stress conditions and thick components having
plane strain conditions. Plane strain conditions give the lowest fracture toughness value which is a
material property. The critical value of stress intensity factor in
mode I loading measured under plane strain conditions is known as the plane strain fracture toughness, denoted KIc.[226] When a test fails to meet the thickness and other test requirements that are in place to ensure plane strain conditions, the fracture toughness value produced is given the designation Kc. Fracture toughness is a quantitative way of expressing a material's resistance to crack propagation, and standard values for a given material are generally available.
The melting point (or, rarely, liquefaction point) of a substance is the
temperature at which it changes
state from
solid to
liquid. At the melting point the solid and liquid phase exist in
equilibrium. The melting point of a substance depends on
pressure and is usually specified at a
standard pressure such as 1
atmosphere or 100
kPa.
When considered as the temperature of the reverse change from liquid to solid, it is referred to as the freezing point or crystallization point. Because of the ability of substances to
supercool, the freezing point can easily appear to be below its actual value. When the "characteristic freezing point" of a substance is determined, the actual methodology is almost always "the principle of observing the disappearance rather than the formation of ice, that is, the
melting point".[227]
Is the
force resisting the relative motion of solid surfaces, fluid layers, and material elements
sliding against each other.[228] There are several types of friction:
Dry friction is a force that opposes the relative lateral motion of two solid surfaces in contact. Dry friction is subdivided into static friction ("
stiction") between non-moving surfaces, and kinetic friction between moving surfaces. With the exception of atomic or molecular friction, dry friction generally arises from the interaction of surface features, known as
asperities (see Figure 1).
Fluid friction describes the friction between layers of a
viscous fluid that are moving relative to each other.[229][230]
Lubricated friction is a case of fluid friction where a
lubricant fluid separates two solid surfaces.[231][232][233]
Skin friction is a component of
drag, the force resisting the motion of a fluid across the surface of a body.
Internal friction is the force resisting motion between the elements making up a solid material while it undergoes
deformation.[230]
In mathematics, a function[note 2] is a
binary relation between two
sets that associates every element of the first set to exactly one element of the second set. Typical examples are functions from
integers to integers, or from the
real numbers to real numbers.
The fundamental frequency, often referred to simply as the fundamental, is defined as the lowest
frequency of a
periodic waveform. In music, the fundamental is the musical
pitch of a note that is perceived as the lowest
partial present. In terms of a superposition of
sinusoids, the fundamental frequency is the lowest frequency sinusoidal in the sum of harmonically related frequencies, or the frequency of the difference between adjacent frequencies. In some contexts, the fundamental is usually abbreviated as f0, indicating the lowest frequency
counting from zero.[234][235][236] In other contexts, it is more common to abbreviate it as f1, the first
harmonic.[237][238][239][240][241] (The second harmonic is then f2 = 2⋅f1, etc. In this context, the zeroth harmonic would be 0
Hz.)
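The harmonic series fn = n·f1 described above can be sketched in a few lines; the function name and the 440 Hz fundamental are illustrative assumptions:

```python
def harmonics(f1, n):
    """First n harmonics of a fundamental f1, using f_k = k * f1."""
    return [k * f1 for k in range(1, n + 1)]

# For an illustrative 440 Hz fundamental:
print(harmonics(440.0, 4))  # [440.0, 880.0, 1320.0, 1760.0]
```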
In
physics, the fundamental interactions, also known as fundamental forces, are the interactions that do not appear to be reducible to more basic interactions. There are four fundamental interactions known to exist: the
gravitational and
electromagnetic interactions, which produce significant long-range forces whose effects can be seen directly in everyday life, and the
strong and
weak interactions, which produce forces at
minuscule, subatomic distances and govern nuclear interactions. Some scientists hypothesize that a
fifth force might exist, but these hypotheses remain speculative.[242][243][244]
The Fundamentals of Engineering (FE) exam, also referred to as the Engineer in Training (EIT) exam, and formerly in some states as the Engineering Intern (EI) exam, is the first of two examinations that
engineers must pass in order to be licensed as a
Professional Engineer in the
United States. The second examination is
Principles and Practice of Engineering Examination. The FE
exam is open to anyone with a
degree in engineering or a related field, or currently enrolled in the last year of an
ABET-accredited engineering degree program. Some state licensure boards permit students to take it prior to their final year, and numerous states allow those who have never attended an approved program to take the exam if they have a state-determined number of years of work experience in engineering. Some states allow those with ABET-accredited "Engineering Technology" or "ETAC" degrees to take the examination. The state of
Michigan has no admission pre-requisites for the FE.[245] The exam is administered by the
National Council of Examiners for Engineering and Surveying (NCEES).
A galvanic cell or voltaic cell, named after
Luigi Galvani or
Alessandro Volta, respectively, is an
electrochemical cell that derives electrical energy from spontaneous
redox reactions taking place within the cell. It generally consists of two different metals immersed in electrolytes, or of individual half-cells with different metals and their ions in solution connected by a
salt bridge or separated by a porous membrane. Volta was the inventor of the
voltaic pile, the first
electrical battery. In common usage, the word "battery" has come to include a single galvanic cell, but a battery properly consists of multiple cells.[246]
Is one of the
four fundamental states of matter (the others being
solid,
liquid, and
plasma). A pure gas may be made up of individual
atoms (e.g. a
noble gas like
neon),
elemental molecules made from one type of atom (e.g.
oxygen), or
compound molecules made from a variety of atoms (e.g.
carbon dioxide). A gas
mixture, such as
air, contains a variety of pure gases. What distinguishes a gas from liquids and solids is the vast separation of the individual gas particles.
In mathematics, the geometric mean is a
mean or
average, which indicates the
central tendency or typical value of a set of numbers by using the product of their values (as opposed to the
arithmetic mean which uses their sum). The geometric mean is defined as the
nth root of the
product of n numbers, i.e., for a set of numbers x1, x2, ..., xn, the geometric mean is defined as (x1 · x2 · ⋯ · xn)^(1/n).
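The nth-root-of-the-product definition translates directly into code; a minimal sketch (the function name and the sample data are illustrative assumptions):

```python
import math

def geometric_mean(xs):
    """nth root of the product of n numbers."""
    return math.prod(xs) ** (1.0 / len(xs))

# The geometric mean of 2 and 8 is the square root of 16.
print(geometric_mean([2, 8]))  # 4.0
```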
Is, with
arithmetic, one of the oldest branches of
mathematics. It is concerned with properties of space that are related with distance, shape, size, and relative position of figures.[247] A mathematician who works in the field of geometry is called a
geometer.
Graham's law of effusion (also called Graham's law of
diffusion) was formulated by Scottish physical chemist
Thomas Graham in 1848.[254] Graham found experimentally that the rate of
effusion of a gas is inversely proportional to the square root of the mass of its particles.[254] This formula can be written as:
Rate1/Rate2 = √(M2/M1),
where:
Rate1 is the rate of effusion for the first gas (volume or number of moles per unit time),
Rate2 is the rate of effusion for the second gas, and
M1 and M2 are the molar masses of the two gases, respectively.
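Graham's law — the effusion-rate ratio of two gases equals the square root of the inverse ratio of their molar masses — can be checked numerically; the function name below is an illustrative assumption:

```python
import math

def effusion_rate_ratio(m1, m2):
    """Graham's law: Rate1/Rate2 = sqrt(M2/M1) for molar masses M1, M2."""
    return math.sqrt(m2 / m1)

# Hydrogen (~2 g/mol) effuses four times faster than oxygen (~32 g/mol).
print(effusion_rate_ratio(2.0, 32.0))  # 4.0
```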
Gravitational energy or gravitational potential energy is the
potential energy a
massive object has in relation to another massive object due to
gravity. It is the potential energy associated with the
gravitational field, which is released (converted into
kinetic energy) when the objects
fall towards each other. Gravitational potential energy increases when two objects are brought further apart.
For two pairwise interacting point particles, the gravitational potential energy U is given by
U = −G·m1·m2/r,
where m1 and m2 are the masses of the two particles, r is the distance between them, and G is the
gravitational constant.[255]
Close to the Earth's surface, the gravitational field is approximately constant, and the gravitational potential energy of an object reduces to U = m·g·h, where m is the object's mass, g is the acceleration due to gravity, and h is the object's height above a reference level.
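Both the general two-body form and the near-surface approximation can be sketched numerically; the function names and sample values are illustrative assumptions:

```python
G = 6.674e-11  # gravitational constant, N*m^2/kg^2 (approximate)

def gravitational_pe(m1, m2, r):
    """Two-body potential energy, U = -G*m1*m2/r (joules); negative by convention."""
    return -G * m1 * m2 / r

def near_surface_pe(m, h, g=9.81):
    """Near-Earth approximation, U = m*g*h."""
    return m * g * h

# Raising 1 kg by 1 m near the surface stores about 9.81 J.
print(near_surface_pe(1.0, 1.0))  # 9.81
```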
In
physics, a gravitational field is a
model used to explain the influences that a massive body extends into the space around itself, producing a force on another massive body.[256] Thus, a gravitational
field is used to explain
gravitational phenomena, and is measured in
newtons per
kilogram (N/kg). In its original concept,
gravity was a
force between point
masses. Following
Isaac Newton,
Pierre-Simon Laplace attempted to model gravity as some kind of
radiation field or
fluid, and since the 19th century, explanations for gravity have usually been taught in terms of a field model, rather than a point attraction.
In a field model, rather than two particles attracting each other, the particles distort
spacetime via their mass, and this distortion is what is perceived and measured as a "force".[citation needed] In such a model one states that matter moves in certain ways in response to the curvature of spacetime,[257] and that there is either no gravitational force,[258] or that gravity is a
fictitious force.[259]
Gravity is distinguished from other forces by its obedience to the
equivalence principle.
In
classical mechanics, the gravitational potential at a location is equal to the
work (
energy transferred) per unit mass that would be needed to move an object to that location from a fixed reference location. It is
analogous to the
electric potential with
mass playing the role of
charge. The reference location, where the potential is zero, is by convention
infinitely far away from any mass, resulting in a negative potential at any
finite distance.
In mathematics, the gravitational potential is also known as the
Newtonian potential and is fundamental in the study of
potential theory. It may also be used for solving the electrostatic and magnetostatic fields generated by uniformly charged or polarized ellipsoidal bodies.[260]
Or gravitation, is a
natural phenomenon by which all things with
mass or
energy—including
planets,
stars,
galaxies, and even
light[267]—are brought toward (or gravitate toward) one another. On
Earth, gravity gives
weight to
physical objects, and the
Moon's
gravity causes the ocean
tides. The gravitational attraction of the original gaseous matter present in the
Universe caused it to begin
coalescing and
forming stars and caused the stars to group together into galaxies, so gravity is responsible for many of the large-scale structures in the Universe. Gravity has an infinite range, although its effects become increasingly weaker as objects get further away.
The period at which one-half of a quantity of an unstable isotope has decayed into other elements; the time at which half of a substance has diffused out of or otherwise reacted in a system.
In
mathematics, the harmonic mean (sometimes called the subcontrary mean) is one of several kinds of
average, and in particular, one of the
Pythagorean means. Typically, it is appropriate for situations when the average of
rates is desired.
The harmonic mean can be expressed as the
reciprocal of the
arithmetic mean of the reciprocals of the given set of observations. As a simple example, the harmonic mean of 1, 4, and 4 is 3/(1/1 + 1/4 + 1/4) = 3/1.5 = 2.
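The reciprocal-of-the-mean-of-reciprocals definition is a one-liner in code; this sketch reuses the 1, 4, 4 example from the entry (the function name is an illustrative assumption):

```python
def harmonic_mean(xs):
    """Reciprocal of the arithmetic mean of the reciprocals."""
    return len(xs) / sum(1.0 / x for x in xs)

print(harmonic_mean([1, 4, 4]))  # 2.0
```

Python's standard library also provides `statistics.harmonic_mean` for the same computation.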
Is a discipline of
thermal engineering that concerns the generation, use, conversion, and exchange of
thermal energy (
heat) between physical systems. Heat transfer is classified into various mechanisms, such as
thermal conduction,
thermal convection,
thermal radiation, and transfer of energy by
phase changes. Engineers also consider the transfer of mass of differing chemical species, either cold or hot, to achieve heat transfer. While these mechanisms have distinct characteristics, they often occur simultaneously in the same system.
In
thermodynamics, the Helmholtz free energy (or Helmholtz energy) is a
thermodynamic potential that measures the useful
work obtainable from a
closed thermodynamic system at a constant
temperature and
volume (
isothermal,
isochoric). The negative of the change in the Helmholtz energy during a process is equal to the maximum amount of work that the system can perform in a thermodynamic process in which volume is held constant. If the volume were not held constant, part of this work would be performed as boundary work. This makes the Helmholtz energy useful for systems held at constant volume. Furthermore, at constant temperature, the Helmholtz free energy is minimized at equilibrium.
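The Helmholtz energy is defined as A = U − T·S; a minimal numeric sketch (the function name and the sample state values are illustrative assumptions):

```python
def helmholtz_free_energy(u, t, s):
    """Helmholtz free energy A = U - T*S for internal energy U (J), temperature T (K), entropy S (J/K)."""
    return u - t * s

# Illustrative values: U = 5000 J, T = 300 K, S = 10 J/K.
print(helmholtz_free_energy(5000.0, 300.0, 10.0))  # 2000.0
```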
can be used to estimate the
pH of a
buffer solution. The numerical value of the
acid dissociation constant, Ka, of the acid is known or assumed. The pH is calculated for given values of the concentrations of the acid, HA and of a salt, MA, of its conjugate base, A−; for example, the solution may contain
acetic acid and
sodium acetate.
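The buffer-pH estimate described above is the Henderson–Hasselbalch equation, pH = pKa + log10([A−]/[HA]); a minimal sketch (the function name and the pKa ≈ 4.76 acetic-acid example are illustrative assumptions):

```python
import math

def buffer_ph(pka, conjugate_base, weak_acid):
    """Henderson-Hasselbalch: pH = pKa + log10([A-]/[HA])."""
    return pka + math.log10(conjugate_base / weak_acid)

# Equal acetate and acetic acid concentrations give pH = pKa.
print(buffer_ph(4.76, 0.10, 0.10))  # 4.76
```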
In physical
chemistry, Henry's law is a
gas law that states that the amount of dissolved gas in a liquid is proportional to its
partial pressure above the liquid. The proportionality factor is called Henry's law constant. It was formulated by the English chemist
William Henry, who studied the topic in the early 19th century.
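Henry's law as stated above is a direct proportionality; in one common form, dissolved concentration c = p / kH. The sketch below assumes that form and an approximate room-temperature kH for CO2 of about 29.4 L·atm/mol — both the function name and the constant are illustrative assumptions:

```python
def dissolved_concentration(partial_pressure, k_h):
    """Henry's law: c = p / k_H, with k_H in units of pressure per concentration."""
    return partial_pressure / k_h

# CO2 at 1 atm over water with k_H ~ 29.4 L*atm/mol dissolves to ~0.034 mol/L.
print(round(dissolved_concentration(1.0, 29.4), 3))  # 0.034
```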
Is a device used for lifting or lowering a load by means of a drum or lift-wheel around which rope or chain wraps. It may be manually operated, electrically or
pneumatically driven and may use chain, fiber or wire
rope as its lifting medium. The most familiar form is an
elevator, the car of which is raised and lowered by a hoist mechanism. Most hoists couple to their loads using a
lifting hook. Today, there are a few governing bodies for the North American overhead hoist industry, which include the Hoist Manufacturers Institute (
HMI), ASME, and the Occupational Safety and Health Administration (
OSHA). HMI is a product council of the Material Handling Industry of America consisting of hoist manufacturers promoting the safe use of their products.
In
mathematics, an identity is an
equality relating one mathematical expression A to another mathematical expression B, such that A and B (which might contain some
variables) produce the same value for all values of the variables within a certain range of validity.[280] In other words, A = B is an identity if A and B define the same
functions, and an identity is an equality between functions that are differently defined. For example, (a + b)² = a² + 2ab + b² and cos²θ + sin²θ = 1 are identities.[280] Identities are sometimes indicated by the
triple bar symbol ≡ instead of =, the
equals sign.[281]
Also known as a ramp, is a flat supporting surface tilted at an angle, with one end higher than the other, used as an aid for raising or lowering a load.[282][283][284] The inclined plane is one of the six classical
simple machines defined by Renaissance scientists. Inclined planes are widely used to move heavy loads over vertical obstacles; examples vary from a ramp used to load goods into a truck, to a person walking up a pedestrian ramp, to an automobile or railroad train climbing a grade.[284]
In
electromagnetism and
electronics, inductance is the tendency of an
electrical conductor to oppose a change in the
electric current flowing through it. The flow of electric current creates a
magnetic field around the conductor. The field strength depends on the magnitude of the current, and follows any changes in current. From
Faraday's law of induction, any change in magnetic field through a circuit induces an
electromotive force (EMF) (
voltage) in the conductors, a process known as
electromagnetic induction. This induced voltage created by the changing current has the effect of opposing the change in current. This is stated by
Lenz's law, and the voltage is called back EMF. Inductance is defined as the ratio of the induced voltage to the rate of change of current causing it. It is a proportionality factor that depends on the geometry of circuit conductors and the
magnetic permeability of nearby materials.[286] An
electronic component designed to add inductance to a circuit is called an
inductor. It typically consists of a
coil or helix of wire.
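The defining ratio — induced voltage to rate of change of current — gives v = L·dI/dt; a minimal sketch (the function name and the 10 mH example are illustrative assumptions):

```python
def induced_voltage(inductance, di_dt):
    """Magnitude of the EMF across an inductor, v = L * dI/dt (volts)."""
    return inductance * di_dt

# A 10 mH inductor with current ramping at 5 A/s develops 0.05 V.
print(induced_voltage(0.010, 5.0))  # 0.05
```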
Is an engineering profession that is concerned with the optimization of complex
processes,
systems, or
organizations by developing, improving and implementing integrated systems of people, money, knowledge, information and equipment. Industrial engineers use specialized
knowledge and
skills in the mathematical, physical and
social sciences, together with the
principles and methods of
engineering analysis and design, to specify, predict, and evaluate the results obtained from systems and processes.[288] From these results, they are able to create new
systems, processes or situations for the useful coordination of
labour,
materials and
machines and also improve the
quality and
productivity of systems, physical or social.[289]
Is the resistance of any physical
object to any change in its
velocity. This includes changes to the object's
speed, or
direction of motion.
An aspect of this property is the tendency of objects to keep moving in a straight line at a constant speed, when no
forces act upon them.
Infrasound, sometimes referred to as low-frequency sound, describes sound waves with a frequency below the lower limit of audibility (generally 20 Hz). Hearing becomes gradually less sensitive as frequency decreases, so for humans to perceive infrasound, the
sound pressure must be sufficiently high. The ear is the primary organ for sensing infrasound, but at higher intensities it is possible to feel infrasound vibrations in various parts of the body.
In
mathematics, an integral assigns numbers to functions in a way that describes displacement, area, volume, and other concepts that arise by combining
infinitesimal data. The process of finding integrals is called integration. Along with
differentiation, integration is a fundamental operation of calculus,[b] and serves as a tool to solve problems in mathematics and
physics involving the area of an arbitrary shape, the length of a curve, and the volume of a solid, among others.
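The idea of combining infinitesimal data can be illustrated with a simple numerical approximation; the trapezoidal rule below, and the choice of integrand, are a standard textbook sketch rather than anything prescribed by the text.

```python
# Numerical integration sketch: approximate a definite integral by summing
# the areas of many thin trapezoids under the curve.

def trapezoid(f, a, b, n=10_000):
    """Approximate the integral of f over [a, b] using n trapezoids."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        total += f(a + i * h)
    return total * h

# The area under y = x**2 from 0 to 1 is exactly 1/3.
area = trapezoid(lambda x: x * x, 0.0, 1.0)
print(round(area, 6))
```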
In
mathematics, an integral transform maps a
function from its original
function space into another function space via
integration, where some of the properties of the original function might be more easily characterized and manipulated than in the original function space. The transformed function can generally be mapped back to the original function space using the inverse transform.
In
statistics, interval estimation is the use of
sample data to calculate an
interval of possible values of an unknown
population parameter; this is in contrast to
point estimation, which gives a single value.
Jerzy Neyman (1937) identified interval estimation ("estimation by interval") as distinct from
point estimation ("estimation by unique estimate"). In doing so, he recognized that then-recent work quoting results in the form of an
estimate plus-or-minus a
standard deviation indicated that interval estimation was actually the problem
statisticians really had in mind.
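A minimal sketch of interval estimation, assuming a normal approximation with a fixed z-quantile (strictly, small samples call for a t-quantile); the sample values are made up for illustration.

```python
import math

# Interval-estimation sketch: a 95% confidence interval for a population
# mean, using the normal approximation with z = 1.96.

def mean_confidence_interval(sample, z=1.96):
    n = len(sample)
    mean = sum(sample) / n
    var = sum((x - mean) ** 2 for x in sample) / (n - 1)  # sample variance
    half_width = z * math.sqrt(var / n)
    return mean - half_width, mean + half_width

lo, hi = mean_confidence_interval([4.9, 5.1, 5.0, 4.8, 5.2])
print(f"({lo:.3f}, {hi:.3f})")  # an interval around the sample mean of 5.0
```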
Is a
particle,
atom or
molecule with a net
electrical charge. By convention the charge of the electron is negative and the charge of the proton is positive, the two being equal in magnitude. The net charge of an ion is non-zero because its total number of
electrons is unequal to its total number of
protons.
Is a type of
chemical bonding that involves the
electrostatic attraction between oppositely charged
ions, or between two
atoms with sharply different
electronegativities,[291] and is the primary interaction occurring in
ionic compounds. It is one of the main types of bonding along with
covalent bonding and
metallic bonding. Ions are atoms (or groups of atoms) with an electrostatic charge. Atoms that gain electrons make negatively charged ions (called
anions). Atoms that lose electrons make positively charged ions (called
cations). This transfer of electrons is known as electrovalence in contrast to
covalence. In the simplest case, the cation is a
metal atom and the anion is a
nonmetal atom, but these ions can be of a more complex nature, e.g.
molecular ions like NH₄⁺ or SO₄²⁻. In simpler words, an ionic bond results from the transfer of electrons from a
metal to a
non-metal in order to obtain a full valence shell for both atoms.
Ionization or ionisation is the process by which an
atom or a
molecule acquires a negative or positive
charge by gaining or losing
electrons, often in conjunction with other chemical changes. The resulting electrically charged atom or molecule is called an
ion. Ionization can result from the loss of an electron after collisions with
subatomic particles, collisions with other atoms, molecules and ions, or through the interaction with
electromagnetic radiation.
Heterolytic bond cleavage and heterolytic
substitution reactions can result in the formation of ion pairs. Ionization can occur through radioactive decay by the
internal conversion process, in which an excited nucleus transfers its energy to one of the
inner-shell electrons causing it to be ejected.
The SI unit of energy. The joule, (symbol: J), is a
derived unit of
energy in the
International System of Units.[295] It is equal to the energy transferred to (or
work done on) an object when a
force of one
newton acts on that object in the direction of the object's motion through a distance of one
metre (1 newton-metre or N⋅m). It is also the energy dissipated as heat when an electric
current of one
ampere passes through a
resistance of one
ohm for one second. It is named after the English physicist
James Prescott Joule (1818–1889).[296][297][298]
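Both parts of the definition can be checked by direct arithmetic: one joule as work (force times distance) and one joule as resistive heat (I²Rt). The helper names below are illustrative.

```python
# Two equivalent routes to one joule, following the definition above:
# work = force * distance, and resistive dissipation = I**2 * R * t.

def work_joules(force_newtons, distance_metres):
    return force_newtons * distance_metres

def resistive_heat_joules(current_amps, resistance_ohms, time_seconds):
    return current_amps ** 2 * resistance_ohms * time_seconds

print(work_joules(1.0, 1.0))                 # 1.0: one newton over one metre
print(resistive_heat_joules(1.0, 1.0, 1.0))  # 1.0: one ampere through one ohm for one second
```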
In statistics and control theory, Kalman filtering, also known as linear quadratic estimation (LQE), is an algorithm that uses a series of measurements observed over time, containing statistical noise and other inaccuracies, and produces estimates of unknown variables that tend to be more accurate than those based on a single measurement alone, by estimating a joint probability distribution over the variables for each timeframe. The Kalman filter has numerous applications in technology.
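A minimal one-dimensional sketch of the predict/update cycle, estimating a constant value from noisy readings; the noise variances and measurements are illustrative assumptions, not part of any particular application.

```python
# One-dimensional Kalman filter sketch: repeatedly predict (uncertainty
# grows by the process variance) and update (blend in each measurement
# weighted by the Kalman gain).

def kalman_1d(measurements, process_var=1e-5, meas_var=0.1):
    estimate, error = 0.0, 1.0  # initial state estimate and its variance
    history = []
    for z in measurements:
        error += process_var                  # predict step
        gain = error / (error + meas_var)     # update step: Kalman gain
        estimate += gain * (z - estimate)
        error *= (1 - gain)
        history.append(estimate)
    return history

# Noisy readings of a true value near 1.0 converge toward it.
estimates = kalman_1d([1.1, 0.9, 1.05, 0.97, 1.02])
print(round(estimates[-1], 3))
```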
Is a branch of
classical mechanics that describes the
motion of points, bodies (objects), and systems of bodies (groups of objects) without considering the forces that caused the motion.[301][302][303]
In
fluid dynamics, laminar flow is characterized by fluid particles following smooth paths in layers, with each layer moving smoothly past the adjacent layers with little or no mixing.[304] At low velocities, the fluid tends to flow without lateral mixing, and adjacent layers slide past one another like
playing cards. There are no cross-currents perpendicular to the direction of flow, nor
eddies or swirls of fluids.[305] In laminar flow, the motion of the particles of the fluid is very orderly with particles close to a solid surface moving in straight lines parallel to that surface.[306]
Laminar flow is a flow regime characterized by high
momentum diffusion and low momentum
convection.
Le Chatelier's principle, also called Chatelier's principle, is a principle of
chemistry used to predict the effect of a change in conditions on
chemical equilibria. The principle is named after French chemist
Henry Louis Le Chatelier, and sometimes also credited to
Karl Ferdinand Braun, who discovered it independently. It can be stated as:
When any system at equilibrium for a long period of time is subjected to a change in
concentration,
temperature,
volume, or
pressure, (1) the system changes to a new equilibrium, and (2) this change partly counteracts the applied change.
It is common to treat the principle as a more general observation of
systems,[309] such as
When a settled system is disturbed, it will adjust to diminish the change that has been made to it
Lenz's law, named after the physicist
Emil Lenz who formulated it in 1834,[310] states that the direction of the
electric current which is
induced in a
conductor by a changing
magnetic field is such that the magnetic field created by the induced current opposes the initial changing magnetic field.
It is a
qualitative law that specifies the direction of induced current, but states nothing about its magnitude. Lenz's law explains the direction of many effects in
electromagnetism, such as the direction of voltage induced in an
inductor or
wire loop by a changing current, or the drag force of
eddy currents exerted on moving objects in a magnetic field.
Lenz's law may be seen as analogous to
Newton's third law in
classical mechanics.[311]
Is a
simple machine consisting of a
beam or rigid rod pivoted at a fixed
hinge, or fulcrum. A lever is a rigid body capable of rotating on a point on itself. On the basis of the locations of fulcrum, load and effort, the lever is divided into
three types. Also,
leverage is mechanical advantage gained in a system. It is one of the six
simple machines identified by Renaissance scientists. A lever amplifies an input force to provide a greater output force, which is said to provide leverage. The ratio of the output force to the input force is the
mechanical advantage of the lever. As such, the lever is a
mechanical advantage device, trading off force against movement.
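The force–movement trade-off can be sketched as the ratio of the effort arm to the load arm; the arm lengths and forces below are illustrative numbers.

```python
# Lever sketch: mechanical advantage as the ratio of effort arm to load arm,
# equivalently output force over input force for an ideal (lossless) lever.

def mechanical_advantage(effort_arm_m, load_arm_m):
    return effort_arm_m / load_arm_m

def output_force(input_force_n, effort_arm_m, load_arm_m):
    return input_force_n * mechanical_advantage(effort_arm_m, load_arm_m)

# A 2 m effort arm working against a 0.5 m load arm quadruples the force.
print(mechanical_advantage(2.0, 0.5))  # 4.0
print(output_force(50.0, 2.0, 0.5))    # 200.0
```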
In
mathematics, more specifically
calculus, L'Hôpital's rule or L'Hospital's rule (French: [lopital],
English: /ˌloʊpiːˈtɑːl/,
loh-pee-TAHL) provides a technique to evaluate
limits of
indeterminate forms. Application (or repeated application) of the rule often converts an indeterminate form to an expression that can be easily evaluated by substitution. The rule is named after the 17th-century
French mathematician Guillaume de l'Hôpital. Although the rule is often attributed to L'Hôpital, the theorem was first introduced to him in 1694 by the Swiss mathematician
Johann Bernoulli.
L'Hôpital's rule states that for functions f and g which are
differentiable on an open
interval I except possibly at a point c contained in I, if lim x→c f(x) = lim x→c g(x) = 0 or ±∞, g′(x) ≠ 0 for all x in I with x ≠ c, and lim x→c f′(x)/g′(x) exists, then lim x→c f(x)/g(x) = lim x→c f′(x)/g′(x).
The differentiation of the numerator and denominator often simplifies the quotient or converts it to a limit that can be evaluated directly.
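The rule can be illustrated numerically on the classic 0/0 form sin(x)/x as x → 0: differentiating numerator and denominator gives cos(x)/1, which equals 1 at x = 0, and the original quotient approaches the same value.

```python
import math

# L'Hôpital illustration: sin(x)/x is indeterminate (0/0) at x = 0, but the
# ratio of derivatives cos(x)/1 evaluates to 1 there, matching the limit.

def ratio(x):
    return math.sin(x) / x

for x in (0.1, 0.01, 0.001):
    print(x, ratio(x))  # tends toward 1.0 as x shrinks

print(math.cos(0.0) / 1.0)  # 1.0, the value predicted by the rule
```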
Is an
actuator that creates motion in a straight line, in contrast to the circular motion of a conventional
electric motor. Linear actuators are used in machine tools and industrial machinery, in computer
peripherals such as disk drives and printers, in
valves and
dampers, and in many other places where linear motion is required.
Hydraulic or
pneumatic cylinders inherently produce linear motion. Many other mechanisms are used to generate linear motion from a rotating motor.
Is a mathematical model of how solid objects deform and become internally stressed due to prescribed loading conditions. It is a simplification of the more general
nonlinear theory of elasticity and a branch of
continuum mechanics.
A liquid is a nearly
incompressible fluid that conforms to the shape of its container but retains a (nearly) constant volume independent of pressure. As such, it is one of
the four fundamental states of matter (the others being
solid,
gas, and
plasma), and is the only state with a definite volume but no fixed shape. A liquid is made up of tiny vibrating particles of matter, such as atoms, held together by
intermolecular bonds. Like a gas, a liquid is
able to flow and take the shape of a container. Most liquids resist compression, although others can be compressed. Unlike a gas, a liquid does not disperse to fill every space of a container, and maintains a fairly constant density. A distinctive property of the liquid state is
surface tension, leading to
wetting phenomena.
Water is, by far, the most common liquid on Earth.
In
mathematics, the logarithm is the
inverse function to
exponentiation. That means the logarithm of a given number x is the
exponent to which another fixed number, the base b, must be raised, to produce that number x. In the simplest case, the logarithm counts the number of occurrences of the same factor in repeated multiplication; e.g., since 1000 = 10 × 10 × 10 = 10^3, the "logarithm base 10" of 1000 is 3, or log_10(1000) = 3. The logarithm of x to base b is denoted as log_b(x), or without parentheses, log_b x, or even without the explicit base, log x, when no confusion is possible, or when the base does not matter such as in
big O notation.
More generally, exponentiation allows any positive
real number as base to be raised to any real power, always producing a positive result, so log_b(x) for any two positive real numbers b and x, where b is not equal to 1, is always a unique real number y. More explicitly, the defining relation between exponentiation and logarithm is:
y = log_b(x) exactly if b^y = x and x > 0 and b > 0 and b ≠ 1.
For example, log_2(64) = 6, as 2^6 = 64.
The logarithm base 10 (that is b = 10) is called the decimal or
common logarithm and is commonly used in science and engineering. The
natural logarithm has the
number e (that is b ≈ 2.718) as its base; its use is widespread in mathematics and
physics, because of its simpler
integral and
derivative. The
binary logarithm uses base 2 (that is b = 2) and is frequently used in
computer science. Logarithms are examples of
concave functions.
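The three bases named above are available directly in Python's standard library, which makes the inverse relationship to exponentiation easy to check:

```python
import math

# Logarithm-as-inverse-exponentiation sketch: log_b(x) is the exponent y
# with b**y == x, for the common, binary, and natural bases.

print(math.log10(1000))  # 3.0, the common logarithm: 10**3 == 1000
print(math.log2(64))     # 6.0, the binary logarithm: 2**6 == 64
print(math.log(math.e))  # 1.0, the natural logarithm of e
```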
(Also known as log mean temperature difference, LMTD) is used to determine the temperature driving force for
heat transfer in flow systems, most notably in
heat exchangers. The LMTD is a
logarithmic average of the temperature difference between the hot and cold feeds at each end of the double pipe exchanger. For a given heat exchanger with constant area and heat transfer coefficient, the larger the LMTD, the more heat is transferred. The use of the LMTD arises straightforwardly from the analysis of a heat exchanger with constant flow rate and fluid thermal properties.
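The logarithmic average can be written out explicitly: with temperature differences ΔT₁ and ΔT₂ at the two ends, LMTD = (ΔT₁ − ΔT₂) / ln(ΔT₁/ΔT₂). The end temperatures below are illustrative.

```python
import math

# LMTD sketch: log mean of the temperature differences at the two ends
# of a heat exchanger. Equal end differences reduce to that common value.

def lmtd(dt_hot_end, dt_cold_end):
    if dt_hot_end == dt_cold_end:
        return dt_hot_end  # limiting case of the log-mean formula
    return (dt_hot_end - dt_cold_end) / math.log(dt_hot_end / dt_cold_end)

# End differences of 60 K and 20 K give an LMTD of about 36.4 K,
# below the arithmetic mean of 40 K.
print(round(lmtd(60.0, 20.0), 1))
```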
A lumped-capacitance model, also called lumped system analysis,[317] reduces a
thermal system to a number of discrete "lumps" and assumes that the
temperature difference inside each lump is negligible. This approximation is useful to simplify otherwise complex
differential heat equations. It was developed as a mathematical analog of
electrical capacitance, although it also includes thermal analogs of
electrical resistance as well.
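Under the lumped assumption, a body at uniform temperature relaxes toward ambient along a single exponential, T(t) = T∞ + (T₀ − T∞)·exp(−t/τ), the thermal analogue of an RC circuit discharging. The temperatures and time constant below are illustrative.

```python
import math

# Lumped-capacitance cooling sketch: one "lump" at uniform temperature
# relaxing exponentially toward the ambient temperature.

def lumped_temperature(t, t0=100.0, t_inf=20.0, tau=60.0):
    """Temperature at time t for initial temperature t0, ambient t_inf,
    and thermal time constant tau (all units illustrative)."""
    return t_inf + (t0 - t_inf) * math.exp(-t / tau)

print(round(lumped_temperature(0.0), 1))   # 100.0: starts at t0
print(round(lumped_temperature(60.0), 1))  # about 49.4 after one time constant
```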
The lumped-element model (also called lumped-parameter model, or lumped-component model) simplifies the description of the behaviour of spatially distributed physical systems into a
topology consisting of discrete entities that approximate the behaviour of the distributed system under certain assumptions. It is useful in
electrical systems (including
electronics), mechanical
multibody systems,
heat transfer,
acoustics, etc. Mathematically speaking, the simplification reduces the
state space of the system to a
finite dimension, and the
partial differential equations (PDEs) of the continuous (infinite-dimensional) time and space model of the physical system into
ordinary differential equations (ODEs) with a finite number of parameters.
^The words map, mapping, transformation, correspondence, and operator are often used synonymously.
Halmos 1970, p. 30.
^"Newtonian constant of gravitation" is the name introduced for G by Boys (1894). Use of the term by T.E. Stern (1928) was misquoted as "Newton's constant of gravitation" in Pure Science Reviewed for Profound and Unsophisticated Students (1930), in what is apparently the first use of that term. Use of "Newton's constant" (without specifying "gravitation" or "gravity") is more recent, as "Newton's constant" was also
used for the
heat transfer coefficient in
Newton's law of cooling, but has by now become quite common, e.g.
Calmet et al, Quantum Black Holes (2013), p. 93; P. de Aquino, Beyond Standard Model Phenomenology at the LHC (2013), p. 3.
The name "Cavendish gravitational constant", sometimes "Newton–Cavendish gravitational constant", appears to have been common in the 1970s to 1980s, especially in (translations from) Soviet-era Russian literature, e.g. Sagitov (1970 [1969]), Soviet Physics: Uspekhi 30 (1987), Issues 1–6, p. 342 [etc.].
"Cavendish constant" and "Cavendish gravitational constant" is also used in Charles W. Misner, Kip S. Thorne, John Archibald Wheeler, "Gravitation", (1973), 1126f.
Colloquial use of "Big G", as opposed to "
little g" for gravitational acceleration dates to the 1960s (R.W. Fairbridge, The encyclopedia of atmospheric sciences and astrogeology, 1967, p. 436; note use of "Big G's" vs. "little g's" as early as the 1940s of the
Einstein tensor Gμν vs. the
metric tensor gμν, Scientific, medical, and technical books published in the United States of America: a selected list of titles in print with annotations: supplement of books published 1945–1948, Committee on American Scientific and Technical Bibliography National Research Council, 1950, p. 26).
^Integral calculus is a very well established mathematical discipline for which there are many sources. See
Apostol 1967 and
Anton, Bivens & Davis 2016, for example.
^For example, the SI unit of
velocity is the metre per second, m⋅s−1; of
acceleration is the metre per second squared, m⋅s−2; etc.
^For example the
newton (N), the unit of
force, equivalent to kg⋅m⋅s−2; the
joule (J), the unit of
energy, equivalent to kg⋅m2⋅s−2, etc. The most recently named derived unit, the
katal, was defined in 1999.
^For example, the recommended unit for the
electric field strength is the volt per metre, V/m, where the
volt is the derived unit for
electric potential difference. The volt per metre is equal to kg⋅m⋅s−3⋅A−1 when expressed in terms of base units.
^"Unit of thermodynamic temperature (kelvin)". SI Brochure, 8th edition. Bureau International des Poids et Mesures. 13 March 2010 [1967]. Section 2.1.1.5. Archived from
the original on 7 October 2014. Retrieved 20 June 2017. Note: The triple point of water is 0.01 °C, not 0 °C; thus 0 K is −273.15 °C, not −273.16 °C.
^Callister, W. D. "Materials Science and Engineering: An Introduction" 2007, 7th edition, John Wiley and Sons, Inc. New York, Section 4.3 and Chapter 9.
^"Amino". Dictionary.com. 2015. Retrieved 3 July 2015.
^"amino acid". Cambridge Dictionaries Online. Cambridge University Press. 2015. Retrieved 3 July 2015.
^"amino". FreeDictionary.com. Farlex. 2015. Retrieved 3 July 2015.
Poole, Mackworth & Goebel (1998), which provides the version that is used in this article. These authors use the term "computational intelligence" as a synonym for artificial intelligence.[38]
Russell & Norvig (2003) (who prefer the term "rational agent") and write "The whole-agent view is now widely accepted in the field".[39]
^Lambers HG, Tschumak S, Maier HJ, Canadinc D (Apr 2009). "Role of Austenitization and Pre-Deformation on the Kinetics of the Isothermal Bainitic Transformation". Metall. Mater. Trans. A. 40 (6): 1355–1366.
Bibcode:
2009MMTA...40.1355L.
doi:
10.1007/s11661-009-9827-z.
S2CID 136882327.
^Johll, Matthew E. (2009). Investigating chemistry: a forensic science perspective (2nd ed.). New York: W. H. Freeman and Co.
ISBN 978-1-4292-0989-2.
OCLC 392223218.
^Anthonsen, Thorlief (2000).
"Reactions Catalyzed by Enzymes". In Adlercreutz, Patrick; Straathof, Adrie J. J. (eds.). Applied Biocatalysis (2nd ed.). Taylor & Francis. pp. 18–59.
ISBN 978-90-5823-024-9. Retrieved 9 February 2013.
^Jayasinghe, Leonard Y.; Smallridge, Andrew J.; Trewhella, Maurie A. (1993). "The yeast mediated reduction of ethyl acetoacetate in petroleum ether". Tetrahedron Letters. 34 (24): 3949–3950.
doi:
10.1016/S0040-4039(00)79272-0.
^Frederick M. Steingress (2001). Low Pressure Boilers (4th ed.). American Technical Publishers.
ISBN 0-8269-4417-5.
^Frederick M. Steingress, Harold J. Frost and Darryl R. Walker (2003). High Pressure Boilers (3rd ed.). American Technical Publishers.
ISBN 0-8269-4300-4.
^Goldberg, David E. (1988). 3,000 Solved Problems in Chemistry (1st ed.). McGraw-Hill. section 17.43, p. 321.
ISBN 0-07-023684-4.
^Theodore, Louis; Dupont, R. Ryan; Ganesan, Kumar, eds. (1999). Pollution Prevention: The Waste Management Approach to the 21st Century. CRC Press. section 27, p. 15.
ISBN 1-56670-495-2.
^Carroll, Sean (2007). Guidebook. Dark Matter, Dark Energy: The dark side of the universe. The Teaching Company. Part 2, p. 43.
ISBN 978-1-59803-350-2. ... boson: A force-carrying particle, as opposed to a matter particle (fermion). Bosons can be piled on top of each other without limit. Examples include photons, gluons, gravitons, weak bosons, and the Higgs boson. The spin of a boson is always an integer, such as 0, 1, 2, and so on ...
^Brönsted, J. N. (1923). "Einige Bemerkungen über den Begriff der Säuren und Basen" [Some observations about the concept of
acids and bases]. Recueil des Travaux Chimiques des Pays-Bas. 42 (8): 718–728.
doi:
10.1002/recl.19230420815.
^Jaspersen, S. L.; Winey, M. (2004). "THE BUDDING YEAST SPINDLE POLE BODY: Structure, Duplication, and Function". Annual Review of Cell and Developmental Biology. 20 (1): 1–28.
doi:
10.1146/annurev.cellbio.20.022003.114106.
PMID 15473833.
^Fullick, P. (1994), Physics, Heinemann, pp. 141–142,
ISBN 0-435-57078-1
^Lakatos, John; Oenoki, Keiji; Judez, Hector; Oenoki, Kazushi; Hyun Kyu Cho (March 1998).
"Learn Physics Today!". Lima, Peru: Colegio Dr. Franklin D. Roosevelt. Archived from
the original on 2009-02-27. Retrieved 2009-03-10.
^Purcell, Edward M.; Morin, David J. (2013). Electricity and Magnetism (3rd ed.). New York: Cambridge University Press. pp. 14–20.
ISBN 978-1-107-01402-2.
^Browne, p 225: "... around every charge there is an aura that fills all space. This aura is the electric field due to the charge. The electric field is a vector field... and has a magnitude and direction."
^*Purcell and Morin, Harvard University. (2013). Electricity and Magnetism, 820p (3rd ed.). Cambridge University Press, New York.
ISBN 978-1-107-01402-2. p 430: "These waves... require no medium to support their propagation. Traveling electromagnetic waves carry energy, and... the Poynting vector describes the energy flow...;" p 440: ... the electromagnetic wave must have the following properties: 1) The field pattern travels with speed c (speed of light); 2) At every point within the wave... the electric field strength E equals "c" times the magnetic field strength B; 3) The electric field and the magnetic field are perpendicular to one another and to the direction of travel, or propagation."
^* Browne, Michael (2013). Physics for Engineering and Science, p427 (2nd ed.). McGraw Hill/Schaum, New York.
ISBN 978-0-07-161399-6. p 319: "For historical reasons, different portions of the EM spectrum are given different names, although they are all the same kind of thing. Visible light constitutes a narrow range of the spectrum, from wavelengths of about 400–800 nm...."; p 320: "An electromagnetic wave carries forward momentum... If the radiation is absorbed by a surface, the momentum drops to zero and a force is exerted on the surface... Thus the radiation pressure of an electromagnetic wave is (formula)."
^Course in Electro-mechanics, for Students in Electrical Engineering, 1st Term of 3d Year, Columbia University, Adapted from Prof. F.E. Nipher's "Electricity and Magnetism". By
Fitzhugh Townsend. 1901.
^Szolc T.;
Konowrocki R.; Michajłow M.; Pregowska A. (2014). "An investigation of the dynamic electromechanical coupling effects in machine drive systems driven by asynchronous motors". Mechanical Systems and Signal Processing. 49 (1–2): 118–134.
Bibcode:
2014MSSP...49..118S.
doi:
10.1016/j.ymssp.2014.04.004.
^Konowrocki R.; Szolc T.; Pochanke A.; Pregowska A. (2016). "An influence of the stepping motor control and friction models on precise positioning of the complex mechanical system". Mechanical Systems and Signal Processing. 70–71. Mechanical Systems and Signal Processing, Vol. 70–71, pp. 397–413: 397–413.
Bibcode:
2016MSSP...70..397K.
doi:
10.1016/j.ymssp.2015.09.030.
ISSN 0888-3270.
^Oxtoby, D. W.; Gillis, H. P.; Butler, L. J. (2015). Principles of Modern Chemistry. Brooks Cole. p. 617.
ISBN 978-1-305-07911-3
^"Motor". Dictionary.reference.com. Retrieved 2011-05-09. a person or thing that imparts motion, esp. a contrivance, as a steam engine, that receives and modifies energy from some source in order to utilize it in driving machinery.
^Dictionary.com: (World heritage) "3. any device that converts another form of energy into mechanical energy so as to produce motion"
^"Bureau of Labor Statistics, U.S. Department of Labor.U.S. Department of Labor, Occupational Outlook Handbook, 2010–11 Edition"
^"Major: Engineering Physics". The Princeton Review. 2017. p. 01. Retrieved June 4, 2017.
^"Introduction" (online). Princeton University. Retrieved June 26, 2011.
^Khare, P.; A. Swarup (2009-01-26). Engineering Physics: Fundamentals & Modern Applications (13th ed.). Jones & Bartlett Learning. pp. xiii–Preface. ISBN 978-0-7637-7374-8.
^The International System of Units (SI) (8th ed.). Bureau International des Poids et Mesures (International Committee for Weights and Measures). 2006. p. 144.
^The term "magnitude" is used in the sense of "
absolute value": The charge of an electron is negative, but F is always defined to be positive.
^Daryl L. Logan (2011). A first course in the finite element method. Cengage Learning.
ISBN 978-0-495-66825-1.
^Duderstadt, James J.; Martin, William R. (1979). "Chapter 4:The derivation of continuum description from transport equations". In Wiley-Interscience Publications (ed.). Transport theory. New York. p. 218.
ISBN 978-0-471-04492-5.
^Freidberg, Jeffrey P. (2008). "Chapter 10:A self-consistent two-fluid model". In Cambridge University Press (ed.). Plasma Physics and Fusion Energy (1 ed.). Cambridge. p. 225.
ISBN 978-0-521-73317-5.
^
Reif (1965):
"[in the special case of purely thermal interaction between two system:] The mean energy transferred from one system to the other as a result of purely thermal interaction is called 'heat'" (p. 67).
the quantity Q [...] is simply a measure of the mean energy change not due to the change of external parameters. [...] splits the total mean energy change into a part W due to mechanical interaction and a part Q due to thermal interaction [...] by virtue of [the definition ΔU = Q − W, present notation, physics sign convention], both heat and work have the dimensions of energy" (p. 73). C.f.: "heat is thermal energy in transfer" Stephen J. Blundell, Katherine M. Blundell, Concepts in Thermal Physics (2009),
p. 13. Archived 24 June 2018 at the
Wayback Machine.
^Serway, A. Raymond; Jewett, John W.; Wilson, Jane; Wilson, Anna; Rowlands, Wayne (1 October 2016). "32". Physics for global scientists and engineers (2nd ed.). Cengage AU. p. 901.
ISBN 978-0-17-035552-0.
^Alexander, Charles; Sadiku, Matthew. Fundamentals of Electric Circuits (3 ed.). McGraw-Hill. p. 211.
^Salvendy, Gabriel. Handbook of Industrial Engineering. John Wiley & Sons, Inc; 3rd edition p. 5
^"What IEs Do". www.iienet2.org. Retrieved September 24, 2015.
^Noakes, Cath; Sleigh, Andrew (January 2009).
"Real Fluids". An Introduction to Fluid Mechanics. University of Leeds. Archived from
the original on 21 October 2010. Retrieved 23 November 2010.
^Pal, G.K.; Pal, Pravati (2001).
"chapter 52". Textbook of Practical Physiology (1st ed.). Chennai: Orient Blackswan. p. 387.
ISBN978-81-250-2021-9. Retrieved 11 October 2013. The human eye has the ability to respond to all the wavelengths of light from 400–700 nm. This is called the visible part of the spectrum.
^Buser, Pierre A.; Imbert, Michel (1992). Vision. MIT Press. p.
50.
ISBN978-0-262-02336-8. Retrieved 11 October 2013. Light is a special class of radiant energy embracing wavelengths between 400 and 700 nm (or mμ), or 4000 to 7000 Å.