*1: Electromagnetic force  *2: Weak force  *3: Strong force

| Year | Scientist | Force | Contribution |
|---|---|---|---|
| 1687 | Isaac Newton | | Proposed the law of universal gravitation |
| 1802 | John Dalton | | Proposed the theory that all matter is composed of atoms |
| 1869 | Dmitri Mendeleev | | Published the periodic table of the elements |
| 1873 | James Clerk Maxwell | *1 | Described the electromagnetic force: published the complete set of Maxwell's equations |
| 1896 | Henri Becquerel | *2 | Discovered radioactivity |
| 1897 | J. J. Thomson | | Discovered the electron via cathode rays; pictured the atom as positive charge with embedded electrons |
| 1898 | Ernest Rutherford | *2 | Discovered the radioactive half-life; named alpha and beta rays |
| 1909 | Ernest Rutherford | *3 | Discovered the atomic nucleus: in the Rutherford scattering experiment, alpha particles were scattered at large angles |
| 1932 | James Chadwick | | Discovered the neutron; the atomic model took shape: a nucleus of protons and neutrons with electrons moving outside it |
| | | *3 | Realization that the nuclear force (what binds protons and neutrons together?) cannot be explained by gravity or electromagnetism |
| 1934 | Hideki Yukawa | *3 | Predicted the existence of mesons as carriers of the nuclear force |
| 1950s | | | Many new particles were discovered |
| 1954 | Chen-Ning Yang & Robert Mills | *3 | Introduced non-abelian gauge field theory to explain strong interactions |
| 1961 | Sheldon Glashow | *1, *2 | Treated the weak and electromagnetic forces together, uncovering the electroweak interaction |
| 1964 | Murray Gell-Mann & George Zweig | *3 | Quark model: a classification scheme for hadrons |
| 1967 | Steven Weinberg & Abdus Salam | *1, *2 | Standard Model of elementary particle theory |
| 1974 | Samuel C. C. Ting & Burton Richter | *3 | Discovery of the J/ψ meson: confirmed the charm quark, supporting the quark model and quantum chromodynamics (QCD) |
| Unit | Abbreviation | Equivalent in kilograms (kg) |
|---|---|---|
| milligram | mg | 0.000001 kg |
| gram | g | 0.001 kg |
| kilogram | kg | 1 kg |
| metric ton | t | 1000 kg |
| Taiwanese catty | 台斤 | 0.6 kg |
| Taiwanese tael | 台兩 | 0.0375 kg |
| ounce | oz | 0.02835 kg |
| pound | lb | 0.4536 kg |
| long ton (UK) | UK ton | 1016.05 kg |
| short ton (US) | US ton | 907.18 kg |
Newtonian mechanics, also known as classical mechanics, is the branch of physics built on Isaac Newton's laws of motion; it describes how objects move under the action of forces. The theory applies at macroscopic scales and low speeds, and it laid an important foundation for the development of modern physics.

At the core of Newtonian mechanics are the three laws of motion. The second law is usually written

F = m * a

where F is the net external force, m is the mass, and a is the acceleration. Newtonian mechanics applies to macroscopic objects moving at speeds far below the speed of light.
Newton's law of universal gravitation describes the gravitational interaction between two masses:
F = G * (m₁ * m₂) / r²
where F is the gravitational force, G is the universal gravitational constant, m₁ and m₂ are the masses of the two objects, and r is the distance between them.

Although Newtonian mechanics works very well in the macroscopic world, it breaks down at speeds approaching the speed of light, at atomic scales, and in strong gravitational fields, where relativity and quantum mechanics take over.
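As a rough numerical illustration of the formula F = G * (m₁ * m₂) / r², it can be evaluated with rounded reference values for the Earth–Moon system (the constants below are approximate, illustrative values):

```python
# Newton's law of universal gravitation, F = G * m1 * m2 / r^2,
# evaluated for rounded Earth-Moon reference values.
G = 6.674e-11        # gravitational constant, N·m²/kg²
m_earth = 5.972e24   # kg (approximate)
m_moon = 7.348e22    # kg (approximate)
r = 3.844e8          # mean Earth-Moon distance, m (approximate)

F = G * m_earth * m_moon / r**2   # on the order of 2e20 N
```

The result, roughly 2 × 10²⁰ N, is the mutual attractive force that keeps the Moon in its orbit.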
Momentum is an important physical quantity that describes the state of motion of an object and is widely used in classical mechanics, quantum mechanics and the theory of relativity.
Momentum is the product of an object’s mass and velocity, and its expression is:
p = m * v
where p is the momentum vector, m is the mass of the object, and v is the velocity vector.

In a closed system, the total momentum remains constant; this is a fundamental conservation law of physics:
p_initial = p_final
This law applies to all types of collisions and interactions.
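A minimal sketch of momentum conservation, using a perfectly inelastic 1D collision with made-up masses and velocities:

```python
# Perfectly inelastic 1D collision: the two bodies stick together afterwards.
m1, v1 = 2.0, 3.0    # kg, m/s
m2, v2 = 1.0, -1.5   # kg, m/s

p_initial = m1 * v1 + m2 * v2        # 2*3 + 1*(-1.5) = 4.5 kg·m/s
v_final = p_initial / (m1 + m2)      # the combined body moves at 1.5 m/s
p_final = (m1 + m2) * v_final

# total momentum is unchanged even though kinetic energy is lost
assert abs(p_initial - p_final) < 1e-12
```

Kinetic energy is not conserved here (the collision is inelastic), but the total momentum is, exactly as p_initial = p_final requires.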
Angular momentum is the cross product of the position vector and the momentum, and is used to describe the rotation of an object about a center point:
L = r × p
where L is the angular momentum, r is the position vector, and p is the momentum vector.

Under high-speed motion, the classical momentum formula must be modified to its relativistic form:
p = γ * m * v
where γ is the Lorentz factor, γ = 1 / √(1 - v²/c²), and c is the speed of light.

Work and energy are important concepts in physics that describe the motion and interaction of objects; they are widely used in mechanics, thermodynamics, and other fields.
Work is the inner product of force and displacement when a force acts on an object and causes it to move:
W = F * d * cos(θ)
where W is the work, F is the magnitude of the force, d is the displacement, and θ is the angle between the force and the displacement.

Kinetic energy: K = 0.5 * m * v²

Gravitational potential energy: U = m * g * h

Total mechanical energy: E = K + U
The relationship between work and energy is described by the work-energy theorem:
W = ΔK
This states that the net work done on an object is equal to the change in its kinetic energy.
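A small worked example of W = F * d * cos(θ) combined with the work–energy theorem, using illustrative values:

```python
import math

# Work done by a constant force F over displacement d at angle theta:
F, d, theta = 10.0, 5.0, math.radians(60.0)
W = F * d * math.cos(theta)        # 10 * 5 * 0.5 = 25 J

# Work-energy theorem W = delta-K: starting from rest with mass m,
# the final speed follows from 0.5 * m * v^2 = W.
m = 2.0
v_final = math.sqrt(2.0 * W / m)   # 5 m/s
```

All 25 J of net work appear as kinetic energy, so the 2 kg object ends up moving at 5 m/s.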
Energy is neither created nor destroyed; it can only be converted from one form to another or transferred from one system to another:
E_initial = E_final
The simple harmonic oscillator is an important model in physics, used to describe the simple harmonic motion of an object under the action of a restoring force near its equilibrium position. This model is widely used in many fields such as classical mechanics, quantum mechanics, and electricity.
The motion of a simple harmonic oscillator is described by the following second-order differential equation:
m * (d²x/dt²) + k * x = 0
where:

- m is the mass of the object.
- k is the spring (force) constant.
- x is the displacement from the equilibrium position.

The solution to this equation is simple harmonic motion, whose displacement varies with time as a sine or cosine function:
x(t) = A * cos(ω * t + φ)
where:

- A is the amplitude, the maximum displacement.
- ω = √(k/m) is the angular frequency.
- φ is the initial phase, determined by the initial conditions.

The total energy of a simple harmonic oscillator is the sum of its kinetic and potential energy and remains constant in the absence of resistance:

K = 0.5 * m * v²

U = 0.5 * k * x²

E = K + U = 0.5 * k * A²

In practice, oscillators are often affected by damping or external driving forces.
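A short numerical check that the oscillator's total energy stays at 0.5 * k * A² along the analytic solution x(t) = A cos(ωt + φ); the parameter values are arbitrary:

```python
import math

m, k = 2.0, 8.0                  # mass (kg) and spring constant (N/m)
A, phi = 0.5, 0.0                # amplitude (m) and initial phase
omega = math.sqrt(k / m)         # angular frequency = 2.0 rad/s

E_expected = 0.5 * k * A**2      # 0.5 * 8 * 0.25 = 1.0 J

for i in range(100):
    t = i * 0.05
    x = A * math.cos(omega * t + phi)
    v = -A * omega * math.sin(omega * t + phi)
    E = 0.5 * m * v**2 + 0.5 * k * x**2
    # kinetic and potential energy trade off, but their sum is constant
    assert abs(E - E_expected) < 1e-12
```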
Simple harmonic oscillators are widely used in many fields of physics and engineering.
Vibration science is the science that studies the reciprocating motion of objects after being subjected to force. It mainly analyzes the motion rules, vibration characteristics of the system and its impact on the outside world. Vibration is divided into three types: free vibration, forced vibration and damped vibration.
Learning vibration science requires a solid foundation in mathematics and mechanics. It is recommended to be familiar with differential equations, linear algebra and dynamics, and to use tools such as MATLAB or ANSYS for simulation and experimental analysis.
Collision and scattering are important phenomena in physics that describe the interaction of particles or objects, and are widely used in fields such as classical mechanics, quantum mechanics, and high-energy physics.
In an elastic collision, both momentum and kinetic energy are conserved:

m₁ * v₁ + m₂ * v₂ = m₁ * v₁' + m₂ * v₂'

0.5 * m₁ * v₁² + 0.5 * m₂ * v₂² = 0.5 * m₁ * v₁'² + 0.5 * m₂ * v₂'²

The scattering cross section is a key physical quantity for quantifying the scattering process, indicating the effective range of influence of the target particles on the incident particles.
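The two elastic-collision conservation equations have a standard closed-form solution in one dimension; a sketch, with example values chosen so that equal masses simply exchange velocities:

```python
def elastic_collision_1d(m1, v1, m2, v2):
    """Final velocities of a 1D elastic collision (closed-form solution
    of the momentum and kinetic-energy conservation equations)."""
    v1f = ((m1 - m2) * v1 + 2.0 * m2 * v2) / (m1 + m2)
    v2f = ((m2 - m1) * v2 + 2.0 * m1 * v1) / (m1 + m2)
    return v1f, v2f

# Equal masses: the moving ball stops and the target moves on with its speed.
v1f, v2f = elastic_collision_1d(1.0, 2.0, 1.0, 0.0)
```

Both conserved quantities can be verified directly: the total momentum and total kinetic energy before and after the collision agree.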
In quantum mechanics, the scattering process is described by the Schrödinger equation or quantum field theory. The transition probability between the initial state and the final state of the particle is usually calculated through the scattering matrix (S matrix).
Rigid body motion is a theory in physics that describes the motion behavior of rigid bodies under the action of external forces or external moments. A rigid body is defined as an idealized object in which the distance between any two points within it remains constant during motion.
Rigid body motion can be divided into two main types: translation, in which every point of the body moves with the same velocity, and rotation about an axis.

Rigid body motion is described by quantities such as the position and velocity of the center of mass, the angular velocity, and the moment of inertia.
The total kinetic energy of rigid body motion includes translational kinetic energy and rotational kinetic energy:
- Translational kinetic energy: K₁ = 0.5 * M * v², where M is the total mass of the rigid body and v is the center-of-mass velocity.
- Rotational kinetic energy: K₂ = 0.5 * I * ω², where I is the moment of inertia of the rigid body about the rotation axis and ω is the angular velocity.
- Total kinetic energy: K = K₁ + K₂
- Angular momentum: L = I * ω

The theory of rigid body motion has important applications in many engineering and physics problems.
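As an illustration of K = K₁ + K₂, consider a solid cylinder rolling without slipping (so v = ωR and I = 0.5 M R²); the mass, radius, and speed below are arbitrary example values:

```python
# Solid cylinder rolling without slipping: v = omega * R, I = 0.5 * M * R^2.
M, R, v = 3.0, 0.2, 1.5           # kg, m, m/s

I = 0.5 * M * R**2                # moment of inertia about the symmetry axis
omega = v / R                     # rolling constraint

K_trans = 0.5 * M * v**2          # 3.375 J
K_rot = 0.5 * I * omega**2        # 1.6875 J
K_total = K_trans + K_rot         # 5.0625 J
```

For a solid cylinder the total always works out to (3/4) M v²: rotation adds half as much kinetic energy again on top of the translational part.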
The Kepler Problem is a classic problem in celestial mechanics, which mainly studies the motion behavior of planets, satellites or other objects under the influence of gravity. The problem takes its name from Johannes Kepler, who proposed three laws describing planetary motion.
Kepler's problem can be described through universal gravitation and Newton's laws of motion, which can predict the motion of planets in a gravitational field. Mathematically, the equation of motion of Kepler’s problem can be expressed as:
F = - (G * M * m) / r²
where F represents the gravitational force, G is the gravitational constant, M and m are the masses of the two bodies, and r is the distance between them.

Kepler's problem is one of the core concepts of celestial mechanics. By combining Kepler's laws with the law of universal gravitation, scientists can accurately describe the motion of celestial bodies. This theory has had a profound impact on the development of modern astronomy, aerospace engineering, and physics.
Lagrangian dynamics is a classical mechanics expression with energy as the core, replacing the vector form of "force = mass × acceleration" in Newtonian dynamics. It is particularly suitable for dealing with complex coordinate systems or systems with constraints.
In Lagrangian mechanics, the state of a system is represented by a set of generalized coordinates qᵢ rather than only rectangular coordinates. These coordinates can be angles, lengths, or parameters of any curvilinear coordinate system.
The Lagrangian is defined as the difference between the kinetic energy and potential energy of the system:
L(qi, 𝑞̇i, t) = T - V
Each generalized coordinate corresponds to an equation of motion, called the Euler–Lagrange equation:
d/dt (∂L/∂𝑞̇i) - ∂L/∂qi = 0
These equations combined describe the complete dynamic behavior of the system.
Simple pendulum: consider a simple pendulum of length l, taking the angle θ as the generalized coordinate. Its Lagrangian is L = (1/2) m l² 𝜃̇² + m g l cos θ.

Plugging this into the Lagrange equation gives:
d/dt (ml²𝜃̇) + mgl sin θ = 0 ⇒ 𝜃̈ + (g/l) sin θ = 0
This is the nonlinear equation of motion of a simple pendulum.
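Because of the sin θ term this equation has no elementary closed-form solution, but it integrates easily. A sketch using semi-implicit Euler (an assumed choice of integrator), which measures the period and compares it with the small-angle estimate T = 2π√(l/g):

```python
import math

# Integrate  theta'' = -(g/l) * sin(theta)  with semi-implicit Euler and
# measure the period from successive downward zero crossings of theta.
g, l = 9.81, 1.0
theta, omega = 0.05, 0.0        # small initial angle (rad), released from rest
dt = 1e-4

t = 0.0
crossings = []                  # times at which theta crosses zero downward
prev = theta
while t < 3.0 and len(crossings) < 2:
    omega += -(g / l) * math.sin(theta) * dt
    theta += omega * dt
    t += dt
    if prev > 0.0 >= theta:     # theta just crossed zero going downward
        crossings.append(t)
    prev = theta

period = crossings[1] - crossings[0]            # one full oscillation
T_small_angle = 2 * math.pi * math.sqrt(l / g)  # ≈ 2.006 s
```

At a 0.05 rad amplitude the measured period agrees with the linearized estimate to well under one percent; at large amplitudes the sin θ nonlinearity makes the true period noticeably longer.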
Lagrangian mechanics views the evolution of a system through the principle of least action, a core idea in keeping with nature's tendency to follow extremal paths. This laid a solid foundation for the later Hamiltonian mechanics and quantum field theory.
Hamilton–Jacobi theory is an important framework of classical mechanics that transforms dynamical problems into the problem of solving a partial differential equation; it has had a profound influence on quantum mechanics and modern physics.

Hamilton's canonical equations:

dqᵢ/dt = ∂H/∂pᵢ

dpᵢ/dt = -∂H/∂qᵢ

where qᵢ and pᵢ are the generalized coordinates and generalized momenta, respectively, and H is the Hamiltonian.

The Hamilton–Jacobi equation:

H(qᵢ, ∂S/∂qᵢ, t) + ∂S/∂t = 0

where S(qᵢ, t) is the action function. The action function S is the core of Hamilton–Jacobi theory and describes the dynamic behavior of the system; its differential is

dS = ∑(pᵢ dqᵢ) - H dt

Hamilton–Jacobi theory is closely related to the variational principle, applying the principle of least action to classical mechanics and formalizing it through a partial differential equation.
Under certain circumstances, the Hamilton–Jacobi equation can be solved by separation of variables. This requires the Hamiltonian to have a specific form such that the action function S can be split into a sum of time and space parts:
S(qᵢ, t) = W(qᵢ) - E * t
where W(qᵢ) is the spatial part of the action and E is the energy of the system.

Gravity is one of the four fundamental forces in nature, generated by mass and acting on other masses. In Newtonian mechanics, gravity is an instantaneous action at a distance; in Einstein's general theory of relativity, gravity is reinterpreted as the result of the curvature of space-time caused by mass.
According to general relativity, mass and energy change the geometry of space-time around them. The trajectory of an object moving in this curved space-time is the "gravitational effect" we observe. This theory successfully explains observational phenomena such as Mercury's perihelion precession and light bending.
When the mass acceleration changes, the changes in the curvature of space-time will propagate outward in the form of waves, forming gravitational waves. These fluctuations are very weak and require extremely sophisticated instruments to detect. Common sources include the merger of binary neutron stars or black holes.
According to Einstein's theory, gravitational waves propagate in a vacuum at the speed of light (approximately 299,792,458 meters per second). This was experimentally verified when LIGO and Virgo detected the GW170817 event in 2017, because gravitational waves and electromagnetic wave signals arrived at the Earth almost at the same time.
The observation of gravitational waves has opened up a new field of astronomy, "gravitational wave astronomy," which can detect cosmic events that cannot be observed with traditional telescopes, allowing us to gain a deeper understanding of the structure and evolution of the universe.
Electromagnetism is the branch of physics that studies electric and magnetic fields and their interactions. Major core concepts include Coulomb's law, Ampere's law, Faraday's law of electromagnetic induction, and Gauss's law.
The electric field is a spatial property produced by electric charges and describes the interaction between charges. Magnetic fields are related to moving charges or magnetic materials and represent the range of magnetic force.
Maxwell's equations are the basic theory of electromagnetism and contain four main equations:
Electromagnetism is widely used in modern technology, including wireless communications, power generation, medical imaging (such as MRI), radar technology, and electronic device design.
Electromagnetic research requires sophisticated experimental and measurement equipment, such as electric field probes, magnetometers, and oscilloscopes, to accurately analyze electromagnetic phenomena.
Electromagnetism is an important tool for understanding one of the fundamental forces in nature and has a profound impact on the development of science and engineering.
The electromagnetic equations proposed by James Clerk Maxwell are a set of equations that describe how electric and magnetic fields interact. These equations unified the concepts of electricity and magnetism and became the basis of modern electromagnetism.
1. Gauss’s law (electric field):
∮ E • dA = Q_enc / ε₀
2. Gauss’s law (magnetic field):
∮ B • dA = 0
3. Faraday’s law of electromagnetic induction:
∮ E • dl = - dΦ_B / dt
4. Ampere-Maxwell’s law:
∮ B • dl = μ₀ I_enc + μ₀ ε₀ dΦ_E / dt
Maxwell's equations play an important role in wireless communications, power transmission, optics, and various electromagnetic devices, helping us understand and design modern electronic devices.
Ampere–Maxwell's law is part of Maxwell's system of equations and describes how magnetic fields are generated by current flow and a changing electric field. Its differential form is:
∇ × B = μ₀J + μ₀ε₀ ∂E/∂t
where B is the magnetic field, J is the current density, E is the electric field, μ₀ is the vacuum permeability, and ε₀ is the vacuum permittivity.
Maxwell added the "displacement current" term (μ₀ε₀∂E/∂t) to Ampere's law, making the electromagnetic theory mathematically and physically complete and self-consistent.
Combining Ampere–Maxwell’s law and Faraday’s law of induction:
∇ × E = -∂B/∂t
The equations for electromagnetic waves in free space can be derived. For example, for the electric field E, the wave equation is:
∇²E = μ₀ε₀ ∂²E/∂t²
This is a standard wave equation whose solutions are waves propagating with speed c.

From the wave equation, the propagation speed c of electromagnetic waves in vacuum is:
c = 1 / √(μ₀ε₀)
Substituting the experimentally measured constants μ₀ ≈ 4π × 10⁻⁷ H/m and ε₀ ≈ 8.854 × 10⁻¹² F/m gives:
c ≈ 2.998 × 10⁸ m/s
This is exactly the speed of light. This result shows that light is essentially an electromagnetic wave, and electromagnetic waves of all frequencies propagate at the same speed in a vacuum.
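The calculation c = 1/√(μ₀ε₀) takes one line to reproduce numerically (the constant values below are the standard reference values):

```python
import math

mu0 = 4 * math.pi * 1e-7    # vacuum permeability, H/m
eps0 = 8.8541878128e-12     # vacuum permittivity, F/m

c = 1.0 / math.sqrt(mu0 * eps0)
# c comes out to about 2.998e8 m/s, matching the measured speed of light
```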
Ampere-Maxwell's law not only unified electricity and magnetism, but also revealed the nature of light and laid the theoretical foundation for modern communications, optics, and quantum electrodynamics.
Lenz's Law is a basic law in electromagnetic induction, which explains the relationship between the direction of the induced current and the change of the magnetic field.
Lenz's law states: "The direction of an induced current is always such that the magnetic field it generates opposes the change in magnetic flux that produced it."
This means that the induced current will try to resist an increase or decrease in magnetic flux to maintain the stability of the system.
ε = -dΦ/dt
where ε is the induced electromotive force, Φ is the magnetic flux, and t is time.

The Laplace equation is an important second-order partial differential equation, widely used in mathematics and physics to describe steady-state phenomena. Its scalar form is usually written:
∇²φ = 0
In a three-dimensional Cartesian coordinate system, this equation expands to:
(∂²φ / ∂x²) + (∂²φ / ∂y²) + (∂²φ / ∂z²) = 0
Functions that satisfy Laplace's equation are called harmonic functions. This type of function has the following core mathematical properties:
Laplace's equation is mainly used to describe field distribution in areas without "sources" or "sinks":
| Field of physics | Meaning of φ | Physical description |
|---|---|---|
| electrostatics | Electric potential (V) | Describes the electric field distribution in a charge-free region. |
| gravitational field | Gravitational potential (Φ) | Describes the gravitational field in a mass-free region. |
| steady-state heat transfer | Temperature (T) | Describes the temperature field inside an object in thermal equilibrium. |
| fluid mechanics | Velocity potential | Describes the motion of an ideal fluid that is incompressible and irrotational. |
Since Laplace's equation is a partial differential equation, boundary conditions must be specified to obtain a unique solution, typically Dirichlet conditions (the value of φ fixed on the boundary) or Neumann conditions (its normal derivative fixed).
Common analytical solution methods include the separation of variables method (often used for symmetrical geometries), while complex geometries often use numerical solutions such as the finite element method (FEM) or the finite difference method (FDM).
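A minimal finite-difference sketch: Jacobi relaxation on a square grid, with a Dirichlet boundary chosen arbitrarily (φ = 1 on the top edge, 0 elsewhere). Each interior point is repeatedly replaced by the average of its four neighbours, which is exactly the discrete form of ∇²φ = 0:

```python
# Jacobi relaxation for Laplace's equation on an N x N grid.
# Dirichlet boundary: phi = 1 on the top edge, phi = 0 on the other three.
N = 20
phi = [[0.0] * N for _ in range(N)]
for j in range(N):
    phi[0][j] = 1.0

for _ in range(1500):
    new = [row[:] for row in phi]
    for i in range(1, N - 1):
        for j in range(1, N - 1):
            # harmonic: each interior point is the average of its 4 neighbours
            new[i][j] = 0.25 * (phi[i-1][j] + phi[i+1][j]
                                + phi[i][j-1] + phi[i][j+1])
    phi = new
```

The converged solution exhibits the maximum principle of harmonic functions: every interior value lies strictly between the boundary extremes, and the field decays smoothly away from the "hot" edge.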
Poisson Equation is a second-order partial differential equation widely used in mathematics, physics and engineering to describe field distribution affected by source terms. It is an expanded version of Laplace's equation. When there is a "source" in space, Laplace's equation evolves into Poisson's equation. Its standard form is:
∇²φ = f
In the Cartesian coordinate system, the formula expands to:
(∂²φ / ∂x²) + (∂²φ / ∂y²) + (∂²φ / ∂z²) = f(x, y, z)
In the equation, φ represents a potential field (such as electric potential, gravitational potential, or temperature), while f is called the source term.
Poisson's equation is a basic tool for describing many physical fields:
| Application area | Potential φ | Source term f | Physical description |
|---|---|---|---|
| electrostatics | Electric potential (V) | -ρ / ε₀ | How the charge density ρ produces a potential distribution in space. |
| gravitation | Gravitational potential (Φ) | 4πGρ | The gravitational field produced by the mass density ρ. |
| heat conduction | Temperature (T) | -q / k | The steady-state temperature distribution when the object contains a heat source q. |
| fluid mechanics | Velocity potential | Vorticity or source strength | The velocity potential of a fluid in the presence of vorticity, sources, or sinks. |
Poisson's equation is closely related to Laplace's equation: when the source term f is zero everywhere, it reduces to Laplace's equation.
Since analytical solutions of Poisson's equation are usually difficult to obtain for complex geometries, numerical methods such as the finite difference method (FDM) and the finite element method (FEM) are often used in engineering.
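A one-dimensional finite-difference sketch with a source term chosen so the exact answer is known: for φ'' = f with constant f = -2 and φ(0) = φ(1) = 0, the exact solution is φ(x) = x(1 - x). Gauss–Seidel iteration on the standard 3-point stencil recovers it:

```python
# Gauss-Seidel solution of the 1D Poisson equation  phi'' = f  on [0, 1]
# with phi(0) = phi(1) = 0 and constant source f = -2.
# The exact solution is phi(x) = x * (1 - x).
N = 50
h = 1.0 / N
f = [-2.0] * (N + 1)
phi = [0.0] * (N + 1)

for _ in range(5000):
    for i in range(1, N):
        # 3-point stencil: (phi[i-1] - 2*phi[i] + phi[i+1]) / h^2 = f[i]
        phi[i] = 0.5 * (phi[i-1] + phi[i+1] - h * h * f[i])

# phi[N // 2] approximates phi(0.5) = 0.25
```

Because the exact solution is quadratic, the 3-point stencil is exact at the grid points here; the only remaining error is from incomplete iteration.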
Eddy current is a ring current induced inside a conductor due to changes in the magnetic field. When a conductor is exposed to a changing magnetic field, according to Faraday's law of electromagnetic induction, an induced electromotive force will be generated in the conductor, causing the free electrons to form a closed loop current, which is an eddy current.
Although eddy currents may cause energy losses, they also have valuable applications in many fields of engineering and technology. Through proper design and control, its characteristics can be effectively utilized to achieve functions such as precision detection, electromagnetic damping, and thermal energy conversion.
When a wave (such as a light wave, sound wave, or electromagnetic wave) encounters a medium interface, part of the energy returns to the original medium (reflection), and part of the energy travels across into the new medium (transmission). These two phenomena are basic concepts in wave theory and are widely used in optics, acoustics and quantum mechanics.
Snell's law of refraction: n₁ sinθ₁ = n₂ sinθ₂, where n₁ and n₂ are the refractive indices and θ₁ and θ₂ are the angles of incidence and refraction.
The total energy of the wave is divided between reflection and transmission, with the ratio depending on the properties of the medium and the angle of incidence.
In quantum mechanics, even if the particle energy is less than the barrier height, there may still be partial transmission, which is called the tunneling effect. This phenomenon has no classical counterpart and is a quantum extension of reflection and transmission.
Reflection and transmission are the basic results of the interaction between waves and media. Understanding these two behaviors is crucial to explaining and applying various wave phenomena, playing a key role in both daily optical devices and advanced science and technology.
Waveguides are structures used to guide electromagnetic waves (such as microwaves and light waves) along specific directions. Common forms include hollow metal tubes and optical fibers, designed to confine the direction of wave propagation and reduce energy loss.
Resonant cavities are closed structures that can store electromagnetic waves of specific frequencies. Resonance occurs when waves are reflected repeatedly inside the cavity and form a stable standing-wave pattern.
| project | waveguide | resonant cavity |
|---|---|---|
| Function | transmit electromagnetic waves | Store electromagnetic waves |
| structure | Open at one or both ends | closed |
| modal | TE, TM modes | standing wave mode |
| application | communication, transmission | Oscillation, filtering, resonance |
Waveguides and resonant cavities play central roles in electromagnetic theory and applied technology, and are used to effectively guide and store energy respectively. From microwave communications to laser systems to particle accelerators, their design and analysis are of critical value to modern physics and engineering.
Scattering refers to the change in propagation direction, energy distribution, or phase that occurs when waves or particles encounter obstacles or inhomogeneous media. Scattering can occur for light, sound, electrons, particles, and other kinds of waves and matter.
In the quantum realm, scattering is an important method for studying particle interactions. The possibility and intensity of scattering are described by the scattering cross section.
Scattering phenomena reveal the interaction between matter and waves and are a key tool for studying microscopic and macroscopic structures in nature. Whether in daily optical observations, particle experiments or high-tech instruments, the theory and application of scattering play an indispensable role.
Geometric Theory of Diffraction (GTD) is an extension of traditional geometric optics and is used to describe the phenomenon of diffraction of waves when they encounter the edges or corners of objects. The theory, proposed by J. B. Keller in 1957, treats diffraction as an additional "ray," making up for the failure of geometric optics to predict near boundaries.
In GTD, when an incident wave ray (or reflected ray) hits a geometric discontinuity (such as an object's sharp corner, edge), a diffraction ray is generated that propagates along the direction that satisfies the diffraction conditions.
The diffraction coefficient describes the intensity and phase changes of diffracted rays and varies depending on geometry and boundary conditions. Different boundaries (such as PEC perfect conductors or dielectrics) have corresponding diffraction coefficients.
| theory | characteristic | Applicable situations |
|---|---|---|
| Huygens-Fresnel principle | Each point on the wave front is a secondary wave source | Applies to the overall diffraction field |
| Kirchhoff diffraction integral | Exact wave-field integration | Requires extensive numerical computation |
| Geometric Theory of Diffraction (GTD) | Describe diffraction in terms of rays | High frequency approximation, suitable for engineering applications |
Geometric diffraction theory is of great practical value in engineering electromagnetics and high-frequency wave analysis. It incorporates diffraction behavior into the ray theory framework and takes into account both physical explanation and computational efficiency. It is a powerful tool for the analysis of complex boundaries and obstacles.
When electromagnetic waves encounter wedge-shaped structures with impedance boundary conditions (such as conductive materials, corners or thin films covering media), complex diffraction phenomena will occur. This type of problem is extremely important in electromagnetic wave scattering, antenna design and radar cross-section analysis. Especially in high-frequency situations, it can be modeled and solved through geometric diffraction theory (GTD) and its extended theory.
Assume a two-dimensional infinite wedge with impedance properties on both sides. When an incident wave hits its sharp corners, diffracted waves will be generated and propagate in space. The angular distribution and amplitude of diffraction are affected by the wedge angle and boundary impedance conditions.
For a wedge boundary with a resistive surface, its electric and magnetic fields need to satisfy general impedance boundary conditions:
Et = Zs Hn

where Et is the tangential electric field, Zs is the surface impedance, and Hn is the corresponding tangential magnetic field component.
According to the theory of Sommerfeld and Maliuzhinets, the diffraction field for the impedance wedge problem can be expressed in integral form, and the diffraction coefficient D(θi, θs) will be closely related to boundary conditions.
The diffraction problem of impedance wedges combines geometric optics, wave theory and boundary electromagnetic theory, and is a typical problem in engineering and physics. Through the combination of analytical and numerical methods, the impact of complex structures on electromagnetic waves can be effectively predicted, thereby optimizing design and interference control.
Plasma is the fourth state of matter, alongside the solid, liquid, and gaseous states. When a gas is heated to extremely high temperatures or subjected to a strong electromagnetic field, electrons break away from the atomic nuclei, forming an ionized gas composed of positively charged ions and negatively charged electrons.
The process of plasma formation is called ionization. Compared with an ordinary gas, a plasma has unique physical properties, including high electrical conductivity, strong response to electromagnetic fields, and collective behavior.
| category | Specific examples |
|---|---|
| natural phenomenon | Lightning, Aurora, Sun and Stars, Electric Arc. |
| industrial technology | Plasma cutting, semiconductor etching, surface treatment. |
| consumer technology | Fluorescent lamps, neon lamps, plasma air purifiers. |
| Frontier Energy | Nuclear fusion research (such as tokamak devices), ion thrusters. |
Plasmas can be divided into two categories based on their temperature distribution: high-temperature (thermal) plasmas, in which electrons and ions are near thermal equilibrium, and low-temperature (non-thermal) plasmas.
Optics is a branch of physics that studies the properties, behavior and interaction of light with matter. Optics involves the phenomena of light propagation, reflection, refraction, interference, diffraction, and polarization. As an important natural science subject, the theory and application of optics widely influence the development of science and technology.
Light has dual properties, showing both particle and wave properties. According to quantum theory, light is made up of particles called photons; while according to wave theory, light travels in the form of waves. This dual nature allows light to exhibit different behaviors under different conditions.
Optics can be divided into several main branches, including geometrical optics, wave (physical) optics, and quantum optics.
Optics contains many interesting phenomena that appear frequently in daily life and scientific experiments, such as reflection, refraction, interference, diffraction, and polarization.
Optics has a wide range of applications in modern technology, including imaging, optical communications, and laser systems.
Optics is a subject that studies the properties of light and its applications. With the advancement of science and technology, optics plays an increasingly important role in many fields.
Geometrical Optics is a theory that describes the propagation behavior of light. It is assumed that light propagates in a straight line (called a ray) without considering the wave nature. This theory applies when the wavelength of light is much smaller than the size of the object.
Law of refraction (Snell's law):

n₁ sinθ₁ = n₂ sinθ₂

Thin-lens imaging equation:

1/f = 1/dₒ + 1/dᵢ

where f is the focal length, dₒ is the object distance, and dᵢ is the image distance.

Geometric optics is one of the basic theories of optics. It describes the path and imaging behavior of light in an intuitive way and is suitable for most everyday optical design and analysis. Although it cannot handle wave properties, it still plays an extremely important role in engineering and technical applications.
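A brief sketch applying both formulas; the focal length, object distance, and refractive indices are arbitrary example values:

```python
import math

def image_distance(f, d_o):
    """Thin-lens equation 1/f = 1/d_o + 1/d_i, solved for the image distance."""
    return 1.0 / (1.0 / f - 1.0 / d_o)

# Object 30 cm in front of a converging lens with f = 10 cm:
d_i = image_distance(10.0, 30.0)   # 15 cm: a real image behind the lens
magnification = -d_i / 30.0        # -0.5: inverted and half size

# Snell's law: refraction from air (n = 1.0) into glass (n = 1.5)
# at 30 degrees incidence; the ray bends toward the normal.
theta_t = math.degrees(math.asin(1.0 * math.sin(math.radians(30.0)) / 1.5))
```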
Laser optics is a discipline that studies the generation, propagation, and interaction of laser light with matter. Laser is a light source with high monochromaticity, directionality, high intensity and coherence.
The generation of laser is based on the principle of stimulated emission. The main processes include:
According to the different working substances of laser, it can be divided into the following types:
Laser technology is widely used in the following fields:
The development direction of laser optics includes the design of more efficient lasers, ultrafast laser technology, the research and development of new laser materials, and the exploration of quantum laser technology.
The attosecond (symbol: as) is a unit of time; 1 attosecond equals 10⁻¹⁸ seconds. An attosecond light pulse is an extremely short pulse of light whose duration lies at the attosecond scale, mainly in the extreme-ultraviolet (XUV) or soft X-ray band. It is the shortest artificial light source known on any time scale.

Attosecond pulses are usually produced through the nonlinear optical process of High-Harmonic Generation (HHG).

Because an attosecond pulse is so short in time, its spectrum is extremely broad (it can span tens or even hundreds of electron-volts), making it a broadband, non-monochromatic light source.

The 2023 Nobel Prize in Physics was awarded to three pioneers of attosecond science, Pierre Agostini, Ferenc Krausz, and Anne L'Huillier, for their contributions to the generation and application of attosecond light pulses.

The development of attosecond light pulses enables humans, for the first time, to observe and control the dynamics of electrons and other subatomic processes, marking a new limit of time resolution and an important milestone in modern ultrafast science.
Optical soliton (Soliton) is a kind of light pulse whose shape and speed can remain stable for a long time when propagating in optical fiber or nonlinear media. It is the result of a balance between nonlinear and dispersive effects, so it does not broaden or distort as it propagates like ordinary light pulses.
Optical solitons can be described by the Nonlinear Schrödinger Equation (NLSE):
i ∂ψ/∂z + (1/2)β₂ ∂²ψ/∂t² + γ|ψ|²ψ = 0
where ψ is the pulse envelope, z the propagation distance, t time, β₂ the group-velocity dispersion coefficient, and γ the nonlinear coefficient. When dispersion (the second term) and self-phase modulation (the third term) balance each other, the solution is a stable soliton.
In the 1980s, American scientist Linn Mollenauer successfully experimentally demonstrated that optical solitons can be stably transmitted over long distances in optical fibers, confirming the practical value of theoretical predictions.
Optical solitons are a striking phenomenon in which nonlinearity and dispersion balance each other. They have far-reaching significance in optical fiber communications and nonlinear optics, and are a key concept in modern photonic technology.
Thermodynamics is a physical discipline that studies energy conversion and energy transfer between substances. Its main focus is on how different systems use heat, work, etc. to change their internal energy states. Thermodynamics consists primarily of four fundamental laws, each of which describes how energy is transferred and transformed in nature.
Thermodynamics is widely used in various engineering disciplines, natural sciences, and daily life. For example, devices such as car engines, refrigerators, and air conditioners all operate using the principles of thermodynamics. At the same time, thermodynamics also plays an important role in astronomy, biology, chemistry and other disciplines.
Entropy is a core concept in thermodynamics and information theory, used to measure the "disorder" or "uncertainty" of a system.
In physics, entropy describes the possible number of microscopic states of a system; in information theory, entropy represents the uncertainty of information or the average amount of information.
In thermodynamics, entropy was first proposed by Rudolf Clausius in the 1850s to describe the irreversibility of energy conversion. It is defined as:
ΔS = ∫(dQrev / T)
where dQ_rev is the heat absorbed in a reversible process and T is the absolute temperature.
This means that in a reversible process, the entropy change of the system is equal to the heat absorbed divided by the temperature.
The second law of thermodynamics states:
ΔStotal ≥ 0
This means that the entropy of an isolated system never decreases, it only remains the same or increases. The increase in entropy symbolizes the directionality of natural processes and also represents the "arrow" of time.
In statistical mechanics, Ludwig Boltzmann gave the microscopic definition of entropy:
S = kB ln Ω
where k_B is Boltzmann's constant and Ω is the number of microscopic states (microstates) accessible to the system.
When the system has more possible microscopic arrangements, the entropy is greater, which means the system is more "disordered".
Suppose there are gas molecules in a container. The state in which the molecules are uniformly distributed in space has more possible microscopic combinations than the state concentrated on one side, so the entropy of uniform distribution is higher.
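This can be made quantitative with a toy model: for N distinguishable molecules each independently in the left or right half of the container, the number of microstates with n molecules on the left is the binomial coefficient C(N, n). A short sketch (the value N = 20 is illustrative):

```python
from math import comb, log

N = 20  # number of gas molecules (illustrative)

# Number of microstates with n molecules in the left half of the box
omega_even = comb(N, N // 2)   # evenly spread: n = 10
omega_skew = comb(N, 2)        # almost all on one side: n = 2

print(omega_even, omega_skew)  # the even split has far more microstates

# Boltzmann entropy difference in units of k_B: S = ln(Omega)
delta_S = log(omega_even) - log(omega_skew)
print(delta_S > 0)             # uniform distribution has higher entropy
```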
Claude Shannon introduced the information definition of entropy in 1948:
H = −∑ pi log₂(pi)
where p_i is the probability of the i-th event and the sum runs over all possible events.
When the probability of all events is equal, the entropy is the largest, indicating the highest uncertainty; when the probability of an event is close to 1, the entropy approaches 0, indicating that the system is almost certain.
Toss a fair coin: p(heads) = 0.5, p(tails) = 0.5, then
H = −[0.5 log₂(0.5) + 0.5 log₂(0.5)] = 1 bit
If the coin is biased, for example p(heads) = 0.9 and p(tails) = 0.1, the entropy is:
H = −[0.9 log₂(0.9) + 0.1 log₂(0.1)] ≈ 0.47 bit
which indicates lower uncertainty.
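The two coin examples above can be reproduced directly from the definition H = −∑ p_i log₂ p_i:

```python
from math import log2

def shannon_entropy(probs):
    """Shannon entropy in bits: H = -sum(p * log2(p)), skipping zero terms."""
    return -sum(p * log2(p) for p in probs if p > 0)

h_fair = shannon_entropy([0.5, 0.5])    # fair coin
h_biased = shannon_entropy([0.9, 0.1])  # biased coin
print(h_fair, h_biased)                 # 1.0 bit and about 0.47 bit
```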
Modern theories (such as Landauer’s Principle) point out:
ΔE ≥ kBT ln 2
This means that erasing one bit of information dissipates at least k_B T ln 2 of energy.
This closely links information entropy and physical entropy, showing the concept of "information is physics".
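For a sense of scale, the Landauer bound at room temperature can be evaluated directly (T = 300 K is an illustrative choice):

```python
from math import log

k_B = 1.380649e-23   # Boltzmann constant, J/K (exact in SI since 2019)
T = 300.0            # room temperature, K (illustrative)

E_min = k_B * T * log(2)   # minimum energy to erase one bit
print(E_min)               # about 2.9e-21 J
```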
Entropy is not only the cornerstone of thermodynamics, but also an important concept for understanding the direction of time, information theory, and even the evolution of the universe.
The Carnot Cycle is a theoretical model of an ideal heat engine proposed by Sadi Carnot to describe the highest possible efficiency of energy conversion.
The Carnot cycle consists of four reversible processes:
1. Isothermal expansion at T_H: the gas absorbs heat Q_H from the hot reservoir and expands isothermally.
2. Adiabatic expansion: the gas expands without heat exchange, cooling from T_H to T_C.
3. Isothermal compression at T_C: the gas releases heat Q_C to the cold reservoir and is compressed isothermally.
4. Adiabatic compression: the gas is compressed without heat exchange, warming back to T_H.
The efficiency of the Carnot cycle is given by:
η = 1 - T_C / T_H
where η is the heat engine efficiency, T_H is the absolute temperature of the hot reservoir, and T_C is the absolute temperature of the cold reservoir. The formula shows that the efficiency depends only on the reservoir temperatures and is independent of the working substance.
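As a worked example (with illustrative reservoir temperatures), an engine running between 600 K and 300 K:

```python
def carnot_efficiency(t_hot, t_cold):
    """Maximum (Carnot) efficiency of a heat engine between two reservoirs (K)."""
    return 1.0 - t_cold / t_hot

eta = carnot_efficiency(600.0, 300.0)
print(eta)  # 0.5: at best half of the absorbed heat becomes work
```

Note that shrinking the temperature difference always lowers the ceiling: `carnot_efficiency(400.0, 300.0)` is only 0.25.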
Thermal radiation is an electromagnetic wave emitted by all objects with temperature, originating from the thermal motion of particles within the object. Even in a vacuum, thermal radiation can still transfer energy, unlike thermal conduction and convection.
A black body is an idealized object that can completely absorb and emit electromagnetic radiation of various wavelengths. Blackbody radiation provides a benchmark model for thermal radiation research, describing its spectral distribution according to Planck's law.
In thermodynamics, when an object reaches thermal equilibrium with its surroundings, the radiant energy it absorbs and emits is equal. A black body is an ideal system that can perfectly absorb and emit radiant energy at any wavelength, and is used to describe the properties of thermal radiation in equilibrium.
Radiation has entropy and changes with the distribution of energy. At thermal equilibrium, the entropy density of blackbody radiation can be expressed by the following relationship:
s = (4/3) · (u / T)
where s is the entropy density, u is the energy density, and T is the absolute temperature.
The energy density of blackbody radiation is proportional to the fourth power of temperature:
u = aT⁴
where a is the radiation constant (related to the Stefan–Boltzmann constant σ by a = 4σ/c). The corresponding radiation pressure is:
P = u / 3
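These relations are easy to evaluate numerically. The sketch below computes the radiation constant a = 4σ/c and then the energy density and pressure of blackbody radiation at an illustrative temperature of 5800 K:

```python
sigma = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4
c = 2.99792458e8        # speed of light, m/s

a = 4 * sigma / c       # radiation constant, J m^-3 K^-4
T = 5800.0              # illustrative temperature, K

u = a * T**4            # energy density of blackbody radiation, J/m^3
P = u / 3               # radiation pressure, Pa
print(a, u, P)
```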
According to the second law of thermodynamics, energy always flows from high temperature to low temperature. In the case of radiation, high-temperature objects radiate more energy, which is absorbed by low-temperature objects until thermal equilibrium is reached. This process is accompanied by an increase in total entropy and conforms to the principle of entropy increase.
If objects with different temperatures are placed in a completely reflective cavity, they will eventually reach a common temperature by absorbing and emitting radiation. The radiation field in this system will approach the blackbody radiation state, showing that thermal radiation has the ability to achieve thermal equilibrium.
In thermal engines or photovoltaic devices, thermal radiation can be used as part of the energy conversion. According to Carnot efficiency, the theoretical maximum efficiency of any energy conversion based on thermal radiation is determined by the temperature difference between high and low temperatures:
η = 1 - (Tcold / Thot)
This formula limits the maximum conversion efficiency of solar thermal engines and infrared thermoelectric devices.
Planck's law describes the energy emitted by a black body per unit area, unit time, and unit wavelength. Its formula is:
E(λ, T) = (2hc² / λ⁵) / (e^(hc / λkT) - 1)
where λ is the wavelength, T is the temperature, h is Planck's constant, c is the speed of light, and k is Boltzmann's constant.
Wien's law states that the maximum intensity wavelength of blackbody radiation is inversely proportional to temperature:
λmax = b / T
where b is Wien's displacement constant (approximately 2.898 × 10⁻³ m·K). This explains why very hot objects such as the sun appear white or bluish, while cooler glowing objects appear reddish.
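For example, applying Wien's law to the Sun's surface temperature (about 5778 K, an approximate figure):

```python
b = 2.898e-3     # Wien's displacement constant, m*K
T_sun = 5778.0   # approximate surface temperature of the Sun, K

lam_max = b / T_sun   # wavelength of peak emission
print(lam_max)        # about 5.0e-7 m (~500 nm, in the visible range)
```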
This law states that the total radiated energy of a black body is proportional to the fourth power of its absolute temperature:
P = σAT⁴
where P is the total radiated power, A is the surface area, and σ is the Stefan–Boltzmann constant.
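As a check of scale, treating the Sun as a black body (radius ≈ 6.96 × 10⁸ m and T ≈ 5778 K, both approximate figures) gives a total radiated power close to the measured solar luminosity:

```python
from math import pi

sigma = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4
R = 6.96e8              # solar radius, m (approximate)
T = 5778.0              # solar surface temperature, K (approximate)

A = 4 * pi * R**2       # surface area of a sphere
P = sigma * A * T**4    # total radiated power
print(P)                # about 3.8e26 W
```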
Thermal radiation has applications in infrared thermal imaging cameras, stellar spectral analysis, space telescope cooling systems, and energy-saving building design.
Fluid mechanics is the branch of science that studies the motion and behavior of fluids (liquids and gases) and their interaction with the surrounding environment. Fluid mechanics has important applications in fields such as physics, engineering, atmospheric science, biomedicine and oceanography. Through fluid-mechanical analysis, we can understand and predict fluid phenomena such as the lift of airplanes, the formation of storms, and the flow of water in pipes.
Fluids are continuous and deformable; these characteristics allow a fluid to deform and flow continuously under stress. Basic parameters in fluid mechanics include density, pressure, velocity, and viscosity.
Fluid mechanics can be divided into several main branches, such as fluid statics and fluid dynamics.
Fluid mechanics follows a series of physical laws that describe and analyze the motion of fluids, including the conservation of mass, momentum, and energy.
Fluid mechanics has a wide range of applications in modern engineering and science, from aircraft lift and weather to pipe flow and blood circulation.
Fluid mechanics is a discipline that studies the properties of fluids and their motion behavior. It is crucial for explaining many phenomena in nature and plays an important role in various fields of technology and engineering.
Fluid modeling is the process of using mathematical and computational methods to describe and simulate the behavior of fluids (liquids and gases). These models are used in a wide range of fields, including physics, engineering, meteorology, oceanography, biomedicine and computer animation.
∂ρ/∂t + ∇·(ρv) = 0 — the continuity equation, stating that fluid mass neither appears nor disappears in space.
ρ(∂v/∂t + v·∇v) = −∇p + μ∇²v + f — the momentum (Navier–Stokes) equation, describing the motion of a fluid under pressure, viscosity, and external forces.
Fluid modeling is a key tool for understanding dynamic changes in natural and engineering systems. Combining physical laws and computational methods is of great significance to the development of modern technology and science.
Molecular Fluid Stresses describe the mechanical strains and stresses produced by microscopic molecular motion and interaction in macroscopic fluids. It extends the concept of stress in traditional continuum media to the molecular level, which is of great significance especially in nanoscale and non-equilibrium systems.
In traditional fluid mechanics, stress is defined as force per unit area. At the molecular scale, however, stress arises from the transport of molecular momentum (a kinetic contribution) and from intermolecular forces (a virial contribution).
The classic Irving–Kirkwood formula provides a representation of the stress tensor at the molecular scale:
σαβ = −(1/V) ⟨∑ mi vi,α vi,β + ½ ∑∑ rij,α Fij,β⟩
The study of stress in molecular fluids is a bridge between continuum mechanics and statistical mechanics, and is crucial to understanding the behavior of materials and fluids at the microscale. Through molecular simulation, we can more accurately capture micromechanical properties that traditional theory cannot handle.
In fluid mechanics and continuum mechanics, the relationship between stress and velocity describes how the velocity field affects the internal stress distribution during the deformation process of a fluid or solid. This relationship is critical to understanding the flow properties and viscous behavior of materials.
For a Newtonian fluid, the shear stress is proportional to the velocity gradient:
τ = μ (du/dy)
This relationship means: the faster the velocity changes, the greater the resistance (stress) generated inside the fluid.
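A quick numerical illustration (water's viscosity is about 1.0 × 10⁻³ Pa·s at 20 °C; the velocity gradient is an arbitrary example value):

```python
mu = 1.0e-3    # dynamic viscosity of water at 20 C, Pa*s (approximate)
dudy = 100.0   # velocity gradient du/dy, 1/s (illustrative)

tau = mu * dudy  # Newtonian shear stress: tau = mu * (du/dy)
print(tau)       # 0.1 Pa
```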
The stress–velocity relationship of non-Newtonian fluids is complex, often nonlinear or time dependent; examples include shear-thinning, shear-thickening, and viscoelastic fluids.
In a three-dimensional flow field, the relationship between stress and velocity is usually represented by a tensor:
σ = −pI + τ
τ_ij = μ (∂v_i/∂x_j + ∂v_j/∂x_i)
This representation is applied to the Navier-Stokes equations and describes how the internal stress of a fluid is determined by the velocity field.
In molecular dynamics, the relationship between stress and velocity can be established through average momentum flow and intermolecular forces, which is especially suitable for microfluidic or nanoscale systems.
The relationship between stress and velocity is the core of fluid and solid dynamics, and plays an indispensable role in engineering design, materials science, and basic physics.
Microscopic conservation equations are basic equations that describe how physical quantities (such as mass, momentum, and energy) change with time and space at the molecular or particle scale. These equations are a bridge between continuum mechanics and statistical mechanics and are commonly used in molecular dynamics simulations and non-equilibrium statistical physics.
∂ρ/∂t + ∇·(ρv) = 0
This is the most basic microscopic continuity equation, describing how mass is distributed and changed in space as particles move.
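The conservation expressed by the continuity equation can be demonstrated numerically: a conservative (flux-form) finite-difference update of ∂ρ/∂t + ∂(ρv)/∂x = 0 on a periodic 1-D grid keeps the total mass constant to rounding error. A minimal upwind sketch (grid size, velocity, and the density bump are all illustrative choices):

```python
import numpy as np

# 1-D continuity equation on a periodic domain with constant velocity v > 0.
N, L, v = 200, 1.0, 1.0
dx = L / N
dt = 0.4 * dx / v                    # CFL-stable time step
x = np.linspace(0.0, L, N, endpoint=False)
rho = 1.0 + 0.5 * np.exp(-((x - 0.5) ** 2) / 0.01)  # a density bump

mass0 = rho.sum() * dx
for _ in range(500):
    flux = rho * v                                  # upwind flux (v > 0)
    rho = rho - dt / dx * (flux - np.roll(flux, 1)) # conservative update

mass = rho.sum() * dx
print(mass0, mass)  # total mass is unchanged: flux out of one cell enters the next
```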
∂(ρv)/∂t + ∇·(ρv ⊗ v) = −∇·σ + ρf
∂e/∂t + ∇·(e v) = −∇·q + σ : ∇v + ρr
Through the Irving–Kirkwood or Hardy method, the continuous field expression of microscopic conserved quantities can be derived from the particle position and velocity, in the following form:
ρ(r, t) = ∑ mi δ(r − ri(t))
v(r, t) = (1/ρ) ∑ mi vi δ(r − ri(t))
These formulas map discrete particle distributions into continuous fields via Dirac delta functions.
Microscopic conservation equations are the core tool for deriving continuous physical quantities based on the behavior of elementary particles. They are important for understanding flow and stress behavior at the nanoscale, as well as the microscopic foundations of continuum models.
Internal flow (Internal Flows) refers to the phenomenon of fluid flowing in closed or partially closed channels, such as water flow in water pipes, air flow in air conditioning ducts, blood flow in blood vessels, etc. It is characterized by continuous contact between the fluid and the solid boundary and is controlled by it.
| item | internal flow | external flow |
|---|---|---|
| boundary | fluid completely enclosed by solid boundaries | fluid flows around the outside of an object |
| pressure change | usually decreases along the flow | may have regions of rising and falling pressure |
| applications | piping, cooling, chemical engineering | aerodynamics, wind tunnels, car body design |
If heat conduction is taken into account, the internal flow is coupled to the energy equation. For example, under forced convection, the temperature difference between the wall and the fluid will affect the overall heat exchange efficiency.
Internal flows (Internal Flows) are the most common and practical research objects in fluid mechanics, and are crucial to fields such as heat exchange, transportation systems, and microfluidic technology. Understanding its flow regime, pressure loss and heat transfer characteristics will help improve engineering design efficiency and performance.
External flows refer to the movement of fluid outside an object, such as air flowing around an airplane wing, water flowing over a bridge pier, or the air flowing around a vehicle. This type of flow is characterized by the fact that a major portion of the fluid is not restricted by boundaries and can diffuse freely.
| item | external flow | internal flow |
|---|---|---|
| boundary conditions | only partially bounded (e.g., by the surface of an object) | completely confined by closed channels |
| pressure change | regions of rising and falling pressure, closely tied to the object's shape | pressure usually decreases in one direction |
| typical applications | flight, aerospace, vehicle aerodynamics | pipe design, cooling systems, blood flow |
External flows play a key role in aviation, transportation, architecture and sports science. Mastering its flow field behavior, boundary layer development and fluid forces is a core element of engineering design and performance optimization.
Macroscopic Balance Equations are conservation equations that describe the changes of mass, momentum and energy with time and space in continuous media. These equations are the basic theoretical basis for fluid mechanics, thermodynamics and transport phenomena.
∂ρ/∂t + ∇·(ρv) = 0
This equation shows that in any control volume, mass can neither be created nor destroyed out of thin air.
ρ(∂v/∂t + v·∇v) = −∇p + ∇·τ + ρf
This is the manifestation of Newton's second law in continuous media, indicating that the change in momentum comes from pressure gradient, viscous force and external force.
ρ(∂e/∂t + v·∇e) = −∇·q + τ : ∇v + ρr
This equation includes factors such as heat conduction, work done, and internal heat sources, and is an expression of the first law of thermodynamics.
The macroscopic equilibrium equation integrates the most basic conservation principles of nature and is a key tool for understanding and predicting physical behavior in the fields of engineering and science. Through these equations, we can mathematically accurately describe complex phenomena and simulate their evolution.
Quantum mechanics is a branch of physics that describes the behavior of particles (such as electrons, photons, etc.) in the microscopic world. Unlike classical mechanics, quantum mechanics reveals that particles have characteristics such as wave-particle duality and uncertainty.
The state of a quantum system evolves according to the Schrödinger equation, iħ ∂ψ/∂t = Ĥψ. The Heisenberg uncertainty relation, ΔxΔp ≥ ħ/2, sets a fundamental limit on how precisely position and momentum can be known simultaneously. Quantum mechanics has a wide range of applications in modern technology, including semiconductors, lasers, and quantum computing.
Despite its great success, quantum mechanics still has unsolved mysteries, such as the measurement problem and the interpretation of wave-function collapse.
The Uncertainty Principle (Heisenberg Uncertainty Principle) is one of the basic principles of quantum mechanics, proposed by the German physicist Heisenberg in 1927. This principle states that certain pairs of physical quantities (such as position and momentum) cannot be measured accurately at the same time; the more accurately one quantity is measured, the greater the uncertainty in the other.
The uncertainty relation between position x and momentum p is:
Δx · Δp ≥ ℏ / 2
where Δx is the uncertainty in position, Δp is the uncertainty in momentum, and ℏ is the reduced Planck constant.
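The bound can be checked numerically for a Gaussian wave packet, which saturates it (Δx·Δp = ℏ/2). A sketch in units where ℏ = 1 (grid size and packet width are illustrative):

```python
import numpy as np

hbar = 1.0
N, L = 2048, 40.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = L / N
sigma = 1.0

# Normalized Gaussian wave packet: a minimum-uncertainty state
psi = (2 * np.pi * sigma**2) ** -0.25 * np.exp(-x**2 / (4 * sigma**2))

# Position uncertainty from <x^2> (the mean <x> is zero by symmetry)
delta_x = np.sqrt(np.sum(x**2 * np.abs(psi) ** 2) * dx)

# Momentum uncertainty from the Fourier transform (p = hbar * k)
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)
phi2 = np.abs(np.fft.fft(psi)) ** 2
delta_p = hbar * np.sqrt(np.sum(k**2 * phi2) / np.sum(phi2))

print(delta_x * delta_p)  # equals hbar/2 = 0.5 for a Gaussian
```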
The uncertainty principle also applies to other pairs of conjugate variables, such as energy and time (ΔE·Δt ≥ ℏ/2).
A number of quantum interference and scattering experiments have confirmed the uncertainty principle, such as electron diffraction, single photon interference, etc., showing that particles cannot have a definite path and interference pattern at the same time.
In classical physics, the position and momentum of an object can theoretically be measured simultaneously with any accuracy. But in quantum mechanics, due to the wave nature, particles do not have "absolutely precise trajectories", so their states must be described probabilistically.
The uncertainty principle subverts the belief in determinism of classical physics and reveals the essential randomness and limitations of the microscopic world. It is one of the most revolutionary concepts in quantum mechanics.
In quantum mechanics and linear algebra, a Hermitian operator (also called a self-adjoint operator) is an operator equal to its own adjoint. Symbolically, an operator H is Hermitian when H = H†, where the † symbol denotes the conjugate transpose (transposing the matrix and taking the complex conjugate).
The Hermitian operator has two mathematical properties that are crucial to physics: its eigenvalues are all real, and eigenvectors belonging to distinct eigenvalues are orthogonal.
In the postulates of quantum mechanics, every observable physical quantity corresponds to a linear Hermitian operator, because measurement outcomes must be real numbers and the operator's eigenstates form a complete basis for the state space.
The Hermitian operator is the bridge connecting "abstract mathematical space" and "real physical measurement". When we say that an operator is Hermitian, we are essentially declaring that the physical quantity represented by this operator can be observed in the real world and has definite physical meaning. If an operator is not Hermitian, its eigenvalues may contain imaginary numbers, which will lose physical reality when describing observable physical quantities.
The Dirac equation was proposed by Paul Dirac in 1928 to describe the motion of spin-1/2 fermions (such as electrons). It is a landmark equation that unites quantum mechanics and special relativity, and has the form:
(iγμ ∂μ - m)ψ = 0
where γμ are the Dirac gamma matrices, ∂μ is the four-gradient, m is the particle's mass (in natural units), and ψ is a four-component spinor.
The Dirac matrices γμ are 4×4 matrices. The four matrices are γ0, γ1, γ2 and γ3, corresponding to time and the three spatial dimensions. In the Dirac representation they are commonly written as:
γ0 = [ [ 1, 0, 0, 0 ],
[ 0, 1, 0, 0 ],
[ 0, 0, -1, 0 ],
[ 0, 0, 0, -1 ] ]
γ1 = [ [ 0, 0, 0, 1 ],
[ 0, 0, 1, 0 ],
[ 0, -1, 0, 0 ],
[ -1, 0, 0, 0 ] ]
These γi matrices couple the spinor components along the different spacetime directions. The Dirac equation is actually four simultaneous partial differential equations governing the evolution of the components of the spinor. Written in 2×2 block form, it has the following structure:
[ i ∂t − m                      i(σ1 ∂x + σ2 ∂y + σ3 ∂z) ] [ ψ₁ ]
[ −i(σ1 ∂x + σ2 ∂y + σ3 ∂z)    −i ∂t − m                 ] [ ψ₂ ]  =  0
These simultaneous equations describe the dynamic changes of each component of a spin particle in time and space, and predict the existence of antiparticles. This is one of the great contributions of the Dirac equation.
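The defining property of the γ matrices is the Clifford-algebra relation {γμ, γν} = 2ημν I, with metric η = diag(1, −1, −1, −1). The Dirac-representation matrices quoted above (with γ2 and γ3 built from the Pauli matrices σ2 and σ3) can be verified numerically:

```python
import numpy as np

# Pauli matrices
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

I2 = np.eye(2, dtype=complex)
Z2 = np.zeros((2, 2), dtype=complex)

# Dirac representation: gamma^0 = diag(I, -I), gamma^i = [[0, s_i], [-s_i, 0]]
g0 = np.block([[I2, Z2], [Z2, -I2]])
gamma = [g0] + [np.block([[Z2, s], [-s, Z2]]) for s in (s1, s2, s3)]

eta = np.diag([1.0, -1.0, -1.0, -1.0])
ok = all(
    np.allclose(gamma[m] @ gamma[n] + gamma[n] @ gamma[m],
                2 * eta[m, n] * np.eye(4))
    for m in range(4) for n in range(4)
)
print(ok)  # True: the anticommutation relations hold
```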
In quantum mechanics and quantum field theory, Dirac notation and its dagger operator provide an effective tool for describing quantum states and matrix operations.
Dirac notation includes two basic vector forms: the ket |ψ⟩, a column vector representing a quantum state, and the bra ⟨ψ|, its conjugate-transpose row vector.
The inner product of quantum states is written ⟨ψ|φ⟩, and the outer product is written |ψ⟩⟨φ|.
The dagger operator, written †, denotes the conjugate transpose of a matrix. For example, if A is a matrix, its conjugate transpose is A†. For a ket vector |ψ⟩, the Hermitian conjugate is the bra ⟨ψ|.
This system of symbols and operators is very useful in quantum mechanics, providing a concise tool for expressing interactions between states.
The one-dimensional Infinite Potential Well, often called "Particle-in-a-Box", is the most basic and inspiring model in quantum mechanics. It describes a particle of mass m confined in a one-dimensional space of length L. Inside the box, the potential energy is zero, and the particles can move freely; but at the boundary, the potential energy is infinite, which means that the particles are absolutely unable to penetrate the boundary and escape outside.
In quantum mechanics, we no longer describe particles in terms of precise trajectories, but in terms of wave functions (psi). Due to the boundary restrictions, the wave of particles in the box behaves like a "standing wave" generated by a string fixed at both ends. This means that the wave function must be equal to zero at the boundaries (x=0 and x=L).
This is the most important conclusion of this model: the energy of a particle is not continuous, but can only take on specific, discontinuous values. This phenomenon is called "quantization". According to the wave properties, the energy E of the nth energy level can be expressed as:
E_n = (n² h²) / (8 m L²)
Here n must be a positive integer (1, 2, 3...), and h is Planck's constant. It can be seen from the formula that the smaller the size L of the box, the greater the gap between energy levels and the more obvious the quantum effect.
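Plugging in numbers for an electron confined to an illustrative 1 nm box:

```python
h = 6.62607015e-34       # Planck constant, J*s
m_e = 9.1093837015e-31   # electron mass, kg
L = 1.0e-9               # box length: 1 nm (illustrative)
eV = 1.602176634e-19     # joules per electron-volt

def E_n(n):
    """Energy of level n for a particle in a 1-D infinite well."""
    return n**2 * h**2 / (8 * m_e * L**2)

print(E_n(1) / eV)   # ground state, roughly 0.38 eV
print(E_n(2) / eV)   # n = 2 is four times higher: levels scale as n^2
```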
Particle-on-a-Ring is the basic model for studying rotational motion in quantum mechanics. It describes a particle of mass m constrained to move in a circular orbit of radius R. Unlike one-dimensional potential energy wells (straight-line paths), the particles of this model move in a closed circular path, with their position usually defined by the angle phi (between 0 and 2pi).
In the torus model, the particle does not have a physical hard boundary (such as a wall), but it must satisfy periodic boundary conditions. This means that when a particle goes around a circle and returns to the origin, its wave function must be exactly the same as when it started. The mathematical expression is: psi(phi) = psi(phi + 2pi).
In order to satisfy this condition, the wave function must assume a specific oscillatory form, usually expressed as a complex exponential form: psi(phi) = A * exp(i * m_l * phi). In order for the wave function to remain continuous after one revolution, the parameter m_l must be an integer (0, +/-1, +/-2...). This is the fundamental source of quantization of the system.
According to the Schrödinger equation, we can derive the allowable energy value of the particles on the ring. Its energy E is proportional to the square of the quantum number m_l:
E = (m_l² ℏ²) / (2I)
where I = mR² is the moment of inertia of the particle and ℏ is the reduced Planck constant. The formula reveals several important physical properties: the energy is quantized, levels with ±m_l are doubly degenerate (the particle can circulate in either direction with the same energy), and the ground state m_l = 0 has zero energy.
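A small sketch of these properties, in illustrative units with ℏ = 1 and I = 1:

```python
hbar = 1.0
I = 1.0  # moment of inertia (illustrative units)

def E_ring(m_l):
    """Energy of a particle on a ring for quantum number m_l."""
    return m_l**2 * hbar**2 / (2 * I)

print(E_ring(0))              # ground state has zero energy
print(E_ring(1), E_ring(-1))  # +/- m_l degenerate: two senses of rotation
print(E_ring(2))              # energies grow as m_l^2
```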
The particle-on-a-ring model is more than a theoretical exercise; it plays a key role in describing rotational phenomena in the microscopic world, such as the delocalized π electrons of aromatic ring molecules.
The spherical particle model (Particle on a Sphere) is an important model in quantum mechanics. It mainly describes a particle with mass m moving freely on a spherical surface with a fixed radius r. This model is the basis for understanding the rotational spectra of molecules (such as diatomic molecules) and the angular momentum of atomic orbitals.
In the spherical coordinate system, since the radius is fixed, the Laplacian operator simplifies to include only the angular part. Its Hamiltonian operator is proportional to the squared angular momentum operator L^2:
H = L^2 / (2mr^2) = L^2 / (2I)
where I = mr^2 is called the moment of inertia.
By solving the Schrödinger equation, we obtain the quantized energy levels E_l = l(l+1)ℏ² / (2I), where l = 0, 1, 2, … and each level is (2l + 1)-fold degenerate.
The wave functions of this system are the spherical harmonics, usually written Y_lm(θ, φ).
Quantum spin coherent electron correlation is an important field in quantum mechanics that studies correlation effects between electrons. Especially when spin and quantum coherence are considered, it is of great significance to electron dynamics in condensed matter physics, quantum computing, and chemistry.
The spin of an electron is described by the quantum number ±1/2, with basis states |↑⟩ and |↓⟩; a general spin state is the superposition α|↑⟩ + β|↓⟩, where α and β are complex coefficients.
The path integral is a formulation of quantum physics used to describe the dynamical behavior of particles. Proposed by Richard Feynman, it converts a quantum-mechanical problem into a sum over all possible paths of a particle in order to calculate the probability amplitude for the particle to travel from one point to another.
In the path-integral formulation of quantum mechanics, the transition amplitude for a particle from time t₁ to time t₂ can be expressed as a sum over all paths:
⟨x(t₂)|x(t₁)⟩ = ∫ e^(iS[x]/ħ) Dx
where S[x] is the action, ħ is the reduced Planck constant, and Dx denotes integration over all possible paths.
Path integral is a powerful mathematical tool used to analyze the uncertain behavior of quantum particles, providing another perspective on quantum physics. It plays an important role in many modern physical theories and helps scientists understand the complexity of the microscopic world.
The Standard Model is the basic theoretical framework of modern particle physics. It describes the elementary particles that make up all visible matter in the universe, and the three basic interactions between them: electromagnetic force, weak interaction, and strong interaction (excluding gravity).
The elementary particles in the Standard Model can be divided into fermions (the constituents of matter, comprising quarks and leptons) and bosons (the carriers of force).
The gauge bosons mediate the interactions: the photon carries the electromagnetic force, the W± and Z bosons carry the weak force, and the gluons carry the strong force.
The Higgs boson is the only scalar particle in the Standard Model and gives other particles mass through the Higgs mechanism.
The electroweak theory unifies SU(2) × U(1) and describes the unified source of the electromagnetic and weak forces.
The Standard Model is a quantum field theory built on the gauge symmetry SU(3) × SU(2) × U(1), and achieves the mass generation of particles through the spontaneous breaking of symmetry and the Higgs field.
The Standard Model has been verified by decades of high-precision experiments, including the discovery of the Higgs boson at the LHC, the precise measurement of the properties of the Z boson by the LEP, and observations at multiple hadron and lepton colliders.
The Standard Model is one of the most successful theories in particle physics today. It accurately describes the behavior and interactions of particles in the microscopic world. Although it is not yet complete, it lays the foundation for subsequent unified theories (such as string theory, supersymmetry or quantum gravity).
In the Standard Model, elementary particles such as W and Z bosons and fermions (such as electrons and quarks) have mass. However, if the mass term is directly added to the Lagrangian, the gauge symmetry will be destroyed, making the quantum field theory unable to be self-consistent. The Higgs mechanism provides a way to impart mass to particles without breaking local gauge symmetries.
The core of the Higgs mechanism is "spontaneous symmetry breaking". Although some theories have symmetry, their vacuum state (the lowest energy state) does not obey this symmetry.
To illustrate with a classic example: a ball is symmetrical at the center of a circular valley, but it may roll to a lower point in either direction, and the final state breaks the original symmetry.
Introduce a complex scalar field φ, called the Higgs field, whose potential energy is:
V(φ) = μ²|φ|² + λ|φ|⁴, with μ² < 0
This potential energy is in the shape of a "Mexican hat", and the vacuum expectation value of φ is not zero, that is:
⟨φ⟩ ≠ 0
This means that the vacuum of the universe itself is filled with the Higgs field.
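The nonzero vacuum expectation value can be seen by minimizing the potential: for V(φ) = μ²|φ|² + λ|φ|⁴ with μ² < 0, the minimum sits at |φ|² = −μ²/(2λ), away from the origin. A numerical sketch with illustrative parameters μ² = −1 and λ = 0.25:

```python
import numpy as np

mu2, lam = -1.0, 0.25   # illustrative parameters with mu^2 < 0

def V(phi):
    """Mexican-hat potential as a function of the field magnitude |phi|."""
    return mu2 * phi**2 + lam * phi**4

phi = np.linspace(0.0, 3.0, 30001)
phi_min = phi[np.argmin(V(phi))]

print(phi_min)  # near sqrt(-mu2 / (2*lam)) = sqrt(2), not at phi = 0
```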
When other particle fields (such as W and Z bosons or fermions) couple with the Higgs field, they "sense" this non-zero vacuum expectation and thus gain mass. The size of the mass depends on the strength of its coupling to the Higgs field.
In the electroweak theory, W⁺, W⁻ and Z bosons gain mass through the Higgs mechanism, while photons remain massless. This illustrates that electromagnetic force and weak interaction can be unified into one theory at high energy, but differentiate into different properties at low energy.
Quantizing the Higgs field produces an observable particle: the Higgs boson. The particle, first discovered at CERN's LHC in 2012, has a mass of about 125 GeV and is experimental evidence for the existence of the Higgs mechanism.
The Higgs mechanism successfully explains the source of mass and maintains the gauge symmetry and renormalizability in the Standard Model. It is an indispensable theoretical mechanism in modern particle physics and one of the deepest understandings of the structure of matter in the universe.
Quantum Entanglement is a non-classical correlation in quantum mechanics. When two or more particles interact in a certain way, their quantum states can no longer be described as separate states, but must be regarded as a superposition state of the overall system.
In other words, observations of one particle can instantly affect the state of the other, even if they are far apart.
An example of the entangled state of two particles is as follows (one of the Bell states):
|Ψ⟩ = (1/√2)(|↑⟩A|↓⟩B + |↓⟩A|↑⟩B)
Here A and B label the two particles, |↑⟩ means spin up, and |↓⟩ means spin down. This state means that if particle A is measured to be ↑ then B must be ↓, and vice versa; the two particles cannot be described separately.
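The inseparability of such a state can be shown numerically: tracing out particle B leaves A in the maximally mixed state I/2, so A alone carries no definite spin. A sketch using the basis ordering |↑↑⟩, |↑↓⟩, |↓↑⟩, |↓↓⟩:

```python
import numpy as np

# Bell state (1/sqrt(2))(|up,down> + |down,up>) in the basis [uu, ud, du, dd]
psi = np.array([0, 1, 1, 0], dtype=complex) / np.sqrt(2)

rho = np.outer(psi, psi.conj())           # density matrix of the pair
rho4 = rho.reshape(2, 2, 2, 2)            # indices: (A, B, A', B')
rho_A = np.trace(rho4, axis1=1, axis2=3)  # partial trace over particle B

print(rho_A)  # I/2: particle A by itself is maximally mixed
```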
Entangled systems exhibit quantum nonlocality: there is a correlation between particles that transcends distances in classical space.
This does not mean that there is information transmitted at faster than the speed of light, but that the quantum state itself is mathematically integrated.
Einstein, Podolsky and Rosen proposed the EPR paradox in 1935, believing that entanglement revealed the incompleteness of quantum mechanics, and proposed that "hidden variable theory" should be used to supplement it.
Einstein called this instantaneous connection "spooky action at a distance."
In 1964, John Bell derived Bell's inequality, stating that if hidden variable theory is correct, certain correlations should satisfy this inequality.
However, since the 1970s, a number of experiments, including the Aspect experiment, have proven that Bell's inequality is violated, confirming that quantum entanglement is a real natural phenomenon.
Quantum entanglement is the core feature that separates quantum mechanics from classical physics. It challenges humankind's traditional concepts of reality and causality, and reveals the deep connections hidden among fundamental particles in the universe.
Bell's Inequality is a mathematical expression of the conflict between quantum mechanics and local realism. It was proposed by physicist John Bell in 1964 to test whether quantum entanglement can be explained by classical hidden variable theory.
In the classical view, each particle is assumed to carry "hidden variables" that determine the measurement outcomes in advance, with no instantaneous influence between distant particles. This assumption is called local realism (Local Realism). But the entangled states of quantum mechanics predict that even when two particles are far apart, correlations can arise that exceed classical expectations.
Taking two entangled particles A and B as an example, measure the spins in three different directions:
A measurement direction: \( a \) or \( a' \)
B measurement direction: \( b \) or \( b' \)
Define the measurement results as \( A(a, \lambda), A(a', \lambda), B(b, \lambda), B(b', \lambda) \), and their values are ±1 respectively. Bell derived that for any theory of local hidden variables, the following inequalities must hold:
| E(a, b) - E(a, b') | + E(a', b) + E(a', b') ≤ 2
Where \( E(a, b) \) is the expected value of the measurement result:
E(a, b) = ∫ A(a, λ) B(b, λ) ρ(λ) dλ
If experimental observations violate this inequality, it means that nature does not conform to local realism.
For quantum entangled spin states:
|ψ⟩ = (|↑↓⟩ - |↓↑⟩) / √2

its expected value can be expressed as:

E(a, b) = -cos(θ)

where θ is the angle between the two measurement directions. With appropriately chosen measurement angles (such as 0°, 45°, 90°, 135°), the quantum-mechanical prediction reaches:

|E(a, b) - E(a, b')| + E(a', b) + E(a', b') = 2√2 > 2

This is a clear violation of Bell's inequality.
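The 2√2 value can be checked numerically. The sketch below uses the singlet correlation E(a, b) = -cos(a - b); the specific angles are one conventional choice that maximizes the combination used in the inequality above:

```python
from math import cos, pi, sqrt

def E(x, y):
    """Singlet-state correlation predicted by quantum mechanics."""
    return -cos(x - y)

# Measurement angles (radians) chosen to maximize
# |E(a,b) - E(a,b')| + E(a',b) + E(a',b').
a, a2, b, b2 = 0.0, -pi / 2, pi / 4, 3 * pi / 4

S = abs(E(a, b) - E(a, b2)) + E(a2, b) + E(a2, b2)
print(S)  # 2*sqrt(2) ≈ 2.828, exceeding the classical bound of 2
```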
Since the 1980s (especially Alain Aspect's photon polarization experiments), a large number of experimental results have shown that the predictions of quantum mechanics are correct and that Bell's inequality is indeed violated. Nature therefore exhibits nonlocality.
In 1935, Einstein, Podolsky and Rosen (EPR) proposed a thought experiment to question the completeness of quantum mechanics, arguing that "hidden variables" must exist to account for its randomness. John Bell gave the question mathematical form in 1964 with Bell's inequality, pointing out that if nature follows local realism, the observed correlations must satisfy certain probabilistic inequalities.
If the experiment violates Bell's inequality, it means that nature does not follow local realism. Instead, as predicted by quantum mechanics, there is a non-local quantum entanglement relationship between particles.
In the early 1980s, a team led by French physicist Alain Aspect rigorously verified the violation of Bell's inequality for the first time. In 2015, teams in the Netherlands, Austria, and the United States published loophole-free experiments almost simultaneously, fully closing the two major loopholes of earlier tests (the locality and detection loopholes).
Experiments since the 1970s have shown that the universe exhibits non-local characteristics in its deep structure. Entanglement is not only the core of quantum theory, but also the basis for the development of future quantum technology.
Local Realism is a core assumption in the philosophy of physics and the interpretation of quantum mechanics. It combines two basic concepts, "Realism" and "Locality": the properties of the physical world exist prior to observation, and influences between events cannot propagate faster than the speed of light.
Realism holds:
Every observable property of a physical system (such as a particle's spin, position, momentum) has a definite value, whether or not the observer measures it.
In other words, observation only reveals these existing properties, rather than creating them.
Locality holds:

The influence of a physical event cannot be transmitted instantaneously to arbitrarily distant locations.
In other words, the measurement results at point A will not immediately affect the particles at distant point B unless the information is transmitted at no faster than the speed of light.
This concept comes from Einstein's theory of relativity, so it is also called "No Action at a Distance".
In 1935, Einstein, Podolsky and Rosen (Einstein-Podolsky-Rosen, EPR) proposed the famous "EPR paradox", arguing that the description of quantum mechanics is incomplete and that there should be some unobserved "hidden variables" to restore reality and locality.
However, in 1964 the physicist John Bell derived Bell's Inequality, showing that if nature really obeys local realism, then the correlations between measurement results must satisfy certain mathematical constraints.
Experiments show that measurements on quantum-entangled particles violate Bell's inequality. Therefore, quantum mechanics demonstrates that nature admits Nonlocal Correlation: correlations that cannot be explained by any classical exchange of messages.
Local realism is the basic belief of classical physics, but it is challenged by Bell-type experiments and quantum entanglement at the quantum level. Most physicists today believe that nature does not fully comply with local realism, and that the non-locality of quantum mechanics is one of the fundamental characteristics of the world.
The Ising model is a model used to describe spin systems in statistical physics. It was proposed by German physicist Ernst Ising in 1925. This model is used to study the interaction of spins in magnetic materials, especially the phase transition behavior at different temperatures.
The Hamiltonian of the Ising model can be written in the following form:
H = -J Σ⟨i,j⟩ sᵢsⱼ - h Σᵢ sᵢ
where J is the coupling constant between neighboring spins (J > 0 favors alignment, i.e. ferromagnetism), sᵢ = ±1 is the spin variable at site i, h is the external magnetic field, and Σ⟨i,j⟩ runs over nearest-neighbor pairs.
The Ising model is one of the basic models in statistical physics and condensed matter physics. It helps scientists understand the fundamental mechanisms of phase transitions, criticality phenomena and collective behavior. Although the model is relatively simple, it provides deep insights into complex systems and has broad applications across multiple disciplines.
The Ising model provides a simple yet powerful tool for studying interactions in matter. Through this model, we can have a deep understanding of the magnetism, phase transition and critical behavior of materials, and can apply it to interdisciplinary research. It is one of the important theories in modern physics.
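As a sketch of how the model is explored in practice, the following is a minimal Metropolis Monte Carlo simulation of the 2D Ising Hamiltonian above; the function name, lattice size, and default parameters are illustrative choices, not part of the model itself:

```python
import math
import random

def metropolis_ising(L=16, T=1.5, J=1.0, h=0.0, steps=100_000, seed=1):
    """Metropolis sampling of H = -J sum_<i,j> s_i s_j - h sum_i s_i on an LxL grid."""
    rng = random.Random(seed)
    s = [[rng.choice((-1, 1)) for _ in range(L)] for _ in range(L)]
    for _ in range(steps):
        i, j = rng.randrange(L), rng.randrange(L)
        # Field from the four nearest neighbors (periodic boundaries).
        nb = (s[(i + 1) % L][j] + s[(i - 1) % L][j]
              + s[i][(j + 1) % L] + s[i][(j - 1) % L])
        dE = 2 * s[i][j] * (J * nb + h)  # energy change if spin (i, j) flips
        if dE <= 0 or rng.random() < math.exp(-dE / T):
            s[i][j] = -s[i][j]
    return sum(sum(row) for row in s) / L**2  # magnetization per spin

m = metropolis_ising()
print(m)  # below Tc (about 2.27 for J=1) the spins tend to align
```

Sweeping T across the critical temperature and recording |m| reproduces the phase transition the text describes.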
Atoms in solid substances are usually arranged in regular arrangements to form crystals. Crystal structures can be divided into cubic, hexagonal, tetragonal and other types. The most common ones are face-centered cubic (FCC), body-centered cubic (BCC) and hexagonal closest packing (HCP).
A crystal can be thought of as a repeated stack of basic units - a lattice. Each lattice point can be attached with a group of atoms to form a "primitive unit", which together form the three-dimensional structure of the crystal.
In solids, electron energy levels create energy bands due to interactions between atoms. The difference between conductors, semiconductors and insulators mainly depends on the energy gap between the valence band and the conduction band.
Semiconductors such as silicon (Si) have a medium energy gap and can control conductivity through doping. N-type and P-type semiconductors have more free electrons and holes respectively, forming the basis of various electronic components.
The vibration of atoms in a crystal can be described by phonons, which are the main carriers of thermal energy transfer. Thermal conductivity depends on the scattering and propagation characteristics of phonons.
The magnetism of solids comes from the internal spin and orbital angular momentum of atoms. Common types of magnetism are ferromagnetic, antiferromagnetic and paramagnetic.
The resistance of some materials drops to zero at extremely low temperatures and enters a superconducting state. Superconductors can also repel magnetic fields (the Meissner effect), a property that is extremely important in quantum technology and magnetic levitation applications.
Band Theory is a core theory in solid state physics, used to explain the electronic properties of materials such as conductors, semiconductors, and insulators. It describes the energy range and distribution that electrons can occupy in a crystal.
When a large number of atoms form a crystal, the atomic orbitals overlap and the energy levels split, forming continuous energy bands (Energy Bands). The most common features are the valence band, the conduction band, and the energy gap between them.
The analytical formula of the energy band is usually derived through quantum mechanical models, such as the tight-binding model or the free electron model. For example:
E(k) = (ħ²k²)/(2m)
where E(k) is the electron energy, ħ is the reduced Planck constant, k is the wave vector, and m is the effective mass of the electron. In the tight-binding model, a one-dimensional band takes the form:

E(k) = E₀ - 2t cos(ka)
where E₀ is the central energy level, t is the hopping parameter (related to the coupling strength between atoms), and a is the lattice constant.

A nonlinear system is one whose output is not simply proportional to its input. In such systems, small changes in the input can lead to large changes in the output. Nonlinear systems are characterized by complexity and diversity, with applications across physics, chemistry, biology, economics, and many other disciplines.
Examples of nonlinear equations are:
dx/dt = rx - x²
Here the stability of the system's behavior depends on the value of the parameter r.
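A quick numerical check: integrating dx/dt = rx - x² with a simple forward-Euler scheme shows the trajectory settling on the stable fixed point x* = r for r > 0 (step size and initial value are arbitrary illustrative choices):

```python
def simulate(r, x0=0.1, dt=0.01, steps=5000):
    """Forward-Euler integration of dx/dt = r*x - x**2."""
    x = x0
    for _ in range(steps):
        x += dt * (r * x - x * x)
    return x

# The equation has fixed points at x = 0 (unstable for r > 0) and x = r (stable).
print(simulate(2.0))  # approaches 2.0
```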
Chaos is a behavioral pattern of nonlinear systems that refers to the extreme sensitivity of the system to initial conditions. This behavior is known as the "butterfly effect", where a small initial change can have a huge impact on the entire system. Chaotic systems cannot be predicted over the long term due to their unpredictability and complexity.
Common examples of chaotic systems include the Lorenz System:
dx/dt = σ(y - x)
dy/dt = x(ρ - z) - y
dz/dt = xy - βz
where the parameters σ, ρ, and β control the chaotic behavior of the system.
A fractal is a geometric structure whose basic characteristic is self-similarity, the recurrence of structures at different scales. Fractals are often used to describe irregular shapes in nature, such as coastlines, mountains, and clouds.
A common example of a fractal is the Mandelbrot Set, which is defined as:
zₙ₊₁ = zₙ² + c

where c is a complex number and the iteration starts from z₀ = 0; if the sequence zₙ does not tend to infinity, then c belongs to the Mandelbrot set.
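The membership test can be sketched in a few lines of Python; the escape radius 2 and the iteration cap are the standard practical choices (an orbit whose modulus exceeds 2 is guaranteed to diverge):

```python
def in_mandelbrot(c, max_iter=100):
    """Return True if c appears to lie in the Mandelbrot set.

    Iterates z -> z**2 + c from z = 0; if |z| ever exceeds 2
    the orbit escapes to infinity and c is outside the set.
    """
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return False
    return True

print(in_mandelbrot(0j), in_mandelbrot(-1 + 0j), in_mandelbrot(1 + 0j))
# True True False
```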
A complex system is a system composed of a large number of interacting component units, whose overall behavior is difficult to derive from the properties of any single part. Its behavior often exhibits emergence, nonlinear relationships, self-organization, and other characteristics.
The overall behavior or structure of a system does not reside in any single element, but arises from simple rules and interactions among many parts.
Complex systems are difficult to predict accurately, but understanding their characteristics can help improve system resilience, prevent systemic collapse, and improve decision-making mechanisms. For example, predicting financial crises or designing resilient urban systems.
Complex systems reveal the nonlinear and interactive phenomena prevalent in nature and human society, and are an important core of interdisciplinary science and technology research in the 21st century.
The Lorenz Attractor is a classic example of Chaos Theory, proposed by the meteorologist Edward Lorenz in 1963.
He originally tried to build a simplified atmospheric convection model, but unexpectedly found that this system was extremely sensitive to initial conditions, resulting in unpredictable long-term behavior. This phenomenon became a representative of chaotic phenomena.
The Lorenz model consists of three coupled differential equations:

dx/dt = σ (y - x)
dy/dt = x (ρ - z) - y
dz/dt = x y - β z
where σ is the Prandtl number, ρ is the Rayleigh number, and β is a geometric parameter of the convection model.
When the parameters take the values

σ = 10, ρ = 28, β = 8/3

the system displays a chaotic trajectory; drawn in three-dimensional space, it traces a butterfly-shaped figure known as the "Lorenz attractor".
The Lorenz attractor gave rise to the famous "Butterfly Effect": "A butterfly flapping its wings in Brazil may cause a tornado in Texas."
It symbolizes that small changes can cause huge macro effects and is the core idea of chaos theory.
The Lorenz system can be simulated numerically, for example with the Runge–Kutta method. Starting from the initial conditions (x₀, y₀, z₀) = (0, 1, 1.05), the trajectory settles onto the butterfly-shaped chaotic attractor.
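A minimal sketch of such a simulation in pure Python, using the classical fourth-order Runge–Kutta step (the step size and integration length are illustrative):

```python
def lorenz(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the Lorenz equations."""
    x, y, z = state
    return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)

def rk4_step(f, s, dt):
    """One classical Runge-Kutta step for the autonomous system f."""
    k1 = f(s)
    k2 = f(tuple(a + dt / 2 * b for a, b in zip(s, k1)))
    k3 = f(tuple(a + dt / 2 * b for a, b in zip(s, k2)))
    k4 = f(tuple(a + dt * b for a, b in zip(s, k3)))
    return tuple(a + dt / 6 * (b + 2 * c + 2 * d + e)
                 for a, b, c, d, e in zip(s, k1, k2, k3, k4))

s = (0.0, 1.0, 1.05)          # initial condition from the text
for _ in range(10_000):       # integrate 100 time units at dt = 0.01
    s = rk4_step(lorenz, s, 0.01)
print(s)  # a point on the bounded, butterfly-shaped attractor
```

Collecting every intermediate state instead of only the final one gives the points needed to plot the attractor.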
The Lorenz attractor has a non-integer dimension, about 2.06, and is a type of "Strange Attractor".
Its trajectory neither converges nor diverges, but rotates infinitely around two unstable fixed points.
Koch Snowflake is a famous fractal pattern that creates a snowflake-like shape by recursively dividing each edge and adding small details.
This snowflake fractal is drawn with the HTML5 <canvas> element. Using a recursive algorithm, we can gradually generate a snowflake-like pattern, demonstrating the self-similarity of fractals.
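As a language-neutral alternative to the canvas drawing, a short Python sketch generates the Koch polyline recursively; it computes only the points, leaving rendering to any plotting tool. The rotation by 60° builds the outward bump on each middle third:

```python
from math import cos, sin, pi

def koch(p, q, depth):
    """Return the polyline points of one Koch edge from p to q."""
    if depth == 0:
        return [p, q]
    (x0, y0), (x1, y1) = p, q
    dx, dy = (x1 - x0) / 3, (y1 - y0) / 3
    a = (x0 + dx, y0 + dy)             # 1/3 point
    b = (x0 + 2 * dx, y0 + 2 * dy)     # 2/3 point
    # Apex of the equilateral bump: rotate the middle third by 60 degrees.
    ang = pi / 3
    t = (a[0] + dx * cos(ang) - dy * sin(ang),
         a[1] + dx * sin(ang) + dy * cos(ang))
    pts = []
    for s, e in [(p, a), (a, t), (t, b), (b, q)]:
        pts += koch(s, e, depth - 1)[:-1]
    return pts + [q]

# Each recursion level multiplies the segment count by 4, so the
# perimeter grows without bound while the enclosed area stays finite.
pts = koch((0.0, 0.0), (1.0, 0.0), 3)
print(len(pts) - 1)  # 64 segments = 4**3
```

Applying `koch` to the three sides of an equilateral triangle yields the full snowflake outline.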
A related source of complex patterns is the reaction–diffusion system, in which two interacting fields u and v diffuse at different rates:

∂u/∂t = f(u, v) + D₁∇²u
∂v/∂t = g(u, v) + D₂∇²v
The logistic growth model in discrete form is:

Nₜ₊₁ = r · Nₜ · (1 - Nₜ / K)

The exponential growth model assumes unlimited resources, so the population grows without bound. The logistic model accounts for finite resources (the carrying capacity K) and reflects the "self-regulated growth in a limited environment" seen in reality. When the parameter r enters the high-sensitivity range, the model also reveals the nature of chaotic systems.
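A few lines of Python make the contrast concrete; for small r the iteration settles on the equilibrium N* = K(1 - 1/r), while larger r values produce oscillations and chaos (initial value and iteration count are illustrative):

```python
def logistic(r, K=1.0, n0=0.1, steps=200):
    """Iterate N_{t+1} = r * N_t * (1 - N_t / K) and return the final value."""
    n = n0
    for _ in range(steps):
        n = r * n * (1 - n / K)
    return n

# At r = 2 the population converges to the fixed point K * (1 - 1/r) = 0.5.
print(logistic(2.0))  # approximately 0.5
```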
Cellular automata is a discrete mathematical model that consists of a large number of simple units (called cells). Each cell evolves in discrete time steps according to specific update rules based on the state of neighboring cells. Although the rules are simple, they can exhibit complex and diverse dynamic behaviors.
Cellular automata demonstrate the possibility of generating complex behaviors from simple rules and are an important tool for studying complex systems and emergent phenomena. It can simulate biological evolution, traffic flows, social interactions and physical processes.
Relativity is a set of physical theories proposed by Albert Einstein at the beginning of the 20th century, which brought revolutionary changes to physics' understanding of time, space, and gravity. The theory of relativity consists of two main parts: special relativity and general relativity.
Special Relativity was proposed in 1905. It mainly deals with the problem of motion between different reference systems under the premise that the speed of light remains constant. The core ideas of special relativity are:
According to the theoretical predictions of special relativity, an object moving near the speed of light exhibits a series of effects, such as time dilation, length contraction, and relativistic mass increase. Special relativity changed the absolute view of space and time and showed that they are interdependent.
General Relativity was proposed by Einstein in 1915 to further explore the relationship between gravity and acceleration. According to general relativity, gravity is not a "force" as traditionally understood, but the distortion of space-time by mass. When an object has mass, it causes the surrounding space-time to curve, and other objects move along this curved space-time, producing the gravitational effects we observe.
General relativity has a wide range of applications and explains many astronomical phenomena, such as black holes, gravitational lenses, universe expansion, etc. General relativity has also been widely verified experimentally, such as the precession of Mercury's orbit and the gravitational red shift phenomenon.
The introduction of the theory of relativity completely changed the basic concepts of time, space and gravity in physics. It not only has a profound impact on the development of modern physics, but also brings many applications in science and technology. The Global Positioning System (GPS) is an example. Because the satellites are at high altitudes and traveling at high speeds, special and general relativity predict that time will be slightly faster than time on the Earth's surface and must be corrected to ensure positioning accuracy.
Relativity and quantum mechanics are two cornerstones of modern physics. The former describes motion and gravitational effects at the macroscale, while the latter focuses on particle behavior at the microscale. Currently, scientists are still studying how to unify these two theories to achieve a grand unified theory.
In the 19th century, physicists generally believed that light was a wave that relied on the "ether" as its propagation medium. As the Earth orbits the Sun, it should move relative to the ether, so the measured speed of light should vary with direction. Michelson and Morley designed an experiment to test this hypothesis.
They used a Michelson interferometer to split a beam of light into two beams that propagated in mutually perpendicular directions, were reflected, and then merged again. If the speed of light differs due to the motion of the Earth relative to the ether, this will produce an observable shift in the interference pattern.
According to the ether hypothesis, the propagation time of the beam along the direction of the Earth's motion should be different from that in the vertical direction, resulting in measurable displacement of the interference fringes. A change in fringes should be observed after turning the interferometer.
After many precise measurements, Michelson and Morley could not observe the expected displacement of the interference fringes. This means that the Earth's motion has no measurable effect on the speed of light, contradicting the expectations of ether theory.
This experiment is considered the most famous "negative result experiment" in the history of physics. It indirectly denied the existence of the ether, paving the way for Einstein to propose the "Special Theory of Relativity" in 1905, which claimed that the speed of light is constant in all inertial reference systems without assuming the ether.
From the perspective of modern theory, the Michelson-Morley experiment confirmed the invariance of the speed of light and is one of the core experimental supports for the theory of relativity. It also shows that time and space are not absolute but depend on the motion state of the observer.
Lorentz Transformation is the core mathematical tool in the special theory of relativity, which is used to describe the space and time transformation relationship between two relatively moving inertial reference systems.
When two inertial reference frames move relative to each other along the x-axis with velocity v, the Lorentz transformation keeps the speed of light c constant in both frames and ensures that the laws of physics take the same form in each.

Suppose an event occurs at (x, t) in frame S; in frame S′, moving with velocity v relative to S, the space-time coordinates of the same event are (x′, t′). Then:
x' = γ(x - vt)
t' = γ(t - vx/c²)
y' = y
z' = z
where γ (the gamma factor) is:
γ = 1 / √(1 - v²/c²)
The Lorentz space-time transformation is the mathematical basis of special relativity, describing how the space-time coordinates of events transform between inertial reference frames. The transformation ensures that the speed of light is constant in all inertial frames and explains time dilation and length contraction under high-speed motion.

Assume two reference frames S and S′, where S′ moves relative to S along the x-axis with speed v and the two coincide at t = t′ = 0. If an event has coordinates (x, y, z, t) in S and (x′, y′, z′, t′) in S′, the conversion between the two is:
x′ = γ(x − vt)
y′ = y
z′ = z
t′ = γ(t − vx / c²)
where γ is the Lorentz factor:
γ = 1 / √(1 − v² / c²)
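The transformation can be verified numerically: applying the boost to an event and recomputing the interval c²t² - x² gives the same value in both frames (the event coordinates and the speed 0.6c below are arbitrary example numbers):

```python
from math import sqrt

C = 299_792_458.0  # speed of light in m/s

def boost(x, t, v):
    """Lorentz boost of event (x, t) along the x-axis with velocity v."""
    g = 1 / sqrt(1 - v**2 / C**2)
    return g * (x - v * t), g * (t - v * x / C**2)

# The interval s^2 = c^2 t^2 - x^2 is the same in both frames.
x, t, v = 1.0e8, 1.0, 0.6 * C
x2, t2 = boost(x, t, v)
s2 = C**2 * t**2 - x**2
s2p = C**2 * t2**2 - x2**2
print(s2, s2p)  # equal up to floating-point rounding
```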
Representing four-dimensional space-time events as vectors (ct, x, y, z), the Lorentz transformation along the x-axis direction can be expressed as:
Λx =
| γ     −βγ    0    0 |
| −βγ    γ     0    0 |
| 0      0     1    0 |
| 0      0     0    1 |

where β = v/c.
If the time coordinate is rewritten in the imaginary form ict, the four-dimensional vector becomes (x, y, z, ict). The transformation matrix can then be written as:
Λrot-like =
| γ      0    0    iβγ |
| 0      1    0    0   |
| 0      0    1    0   |
| −iβγ   0    0    γ   |
This form makes the Lorentz transformation mathematically analogous to rotations in Euclidean space, and this representation is often used to simplify the analysis of Wick rotations in field theory and statistical physics.
In the imaginary time representation, the Lorentz transformation can also be expressed in differential form:
dx′ = γ(dx − v dt)
d(it′) = γ(d(it) − i (v / c²) dx)
Treating it as the fourth coordinate, the transformation is written as:
dX′μ = Λμν dXν
Among them, dX is a tiny four-dimensional imaginary displacement vector, and Λ is the rotation matrix shown in the above formula.
The differential form can be used to derive the tensor transformation rules of Lorentz covariance and be applied to the transformation analysis of four-dimensional gradients and momentum in field theory.
To preserve the Minkowski interval:
s² = ημν xμ xν
The Lorentz transformation matrix needs to satisfy:
Λᵀ η Λ = η
where η = diag(−1, 1, 1, 1) is the Minkowski metric tensor and Λ is the Lorentz transformation matrix.
This means that the Lorentz transformation maintains the inner product invariant in four-dimensional space-time, ensuring that physical quantities (such as time distance, four-momentum length) are consistent for all inertial observers.
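This defining property can be checked directly for a boost along x; the matrix entries follow the (ct, x, y, z) convention used above, with β = 0.6 as an arbitrary example value:

```python
from math import sqrt

beta = 0.6
gamma = 1 / sqrt(1 - beta**2)

# Boost along the x-axis in (ct, x, y, z) coordinates.
L = [[gamma, -beta * gamma, 0, 0],
     [-beta * gamma, gamma, 0, 0],
     [0, 0, 1, 0],
     [0, 0, 0, 1]]

# Minkowski metric with signature (-1, 1, 1, 1).
eta = [[-1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]

def matmul(A, B):
    """4x4 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

LT = [list(row) for row in zip(*L)]       # transpose of L
result = matmul(matmul(LT, eta), L)       # should reproduce eta
print(result)
```

The off-diagonal terms cancel because γ² − β²γ² = 1, which is exactly the pseudo-orthogonality condition.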
The Lorentz transformation reveals that time and space are not independent, but constitute a unified four-dimensional space-time structure. The matrix form expresses this symmetry, and the imaginary-number ict representation further strengthens its rotation-like character. The differential form makes it applicable to tensor calculus and field-theoretic derivations. The condition on the transposed matrix ensures that physical quantities remain unchanged under the transformation, one of the foundations of the relativistic framework.
In some early or mathematically oriented formulations, in order to make the Lorentz transformation formally resemble a rotation in Euclidean space, physicists multiplied the time coordinate t by the imaginary unit i, making time imaginary: ict. The purpose is to convert the Minkowski space-time metric:
s² = c²t² − x² − y² − z²
into a form that looks Euclidean:

(ict)² + x² + y² + z² = −c²t² + x² + y² + z²

which equals −s², so the two conventions agree up to an overall sign.
At this time, the four-dimensional space-time looks like part of the "four-dimensional Euclidean space", except that the time component is an imaginary number, which can be mathematically unified into the form of the rotation group SO(4).
Note: the ict representation serves a purely technical, mathematical purpose. It is only an equivalent rewriting; in actual physical quantities, time remains a real number and should not be interpreted as genuine "imaginary time".
In order to unify the units of time and space (that is, both use "length" as the unit), time is often multiplied by the speed of light in the special theory of relativity:
x⁰ = ct
In this way, each component in the four-dimensional vector (ct, x, y, z) is a unit of length such as "meter", which facilitates unified mathematical representation and four-dimensional tensor operations.
When time is written as ict, the time coordinate is first multiplied by the speed of light to unify units and then by the imaginary unit i, turning the boost into a rotation-like structure:
x⁰ = ict
So the four-dimensional coordinates are:
(x, y, z, ict)
Space appears to be a four-dimensional Euclidean space, but the time axis is an imaginary axis, so that the time direction retains different geometric properties (that is, it is a "time-like" direction).
Minkowski space is a four-dimensional space-time structure in the special theory of relativity, proposed by the mathematician Hermann Minkowski. This coordinate system combines three-dimensional space and one-dimensional time to uniformly describe the spatiotemporal relationship between motion and events.
In Minkowski spacetime, the position of an event is represented by four components:
x^μ = (ct, x, y, z)
where c is the speed of light, t is time, and x, y, z are the spatial coordinates. Multiplying time by the speed of light gives it the same units (length) as space, simplifying calculations.
The geometry of Minkowski spacetime is described by a non-Euclidean metric tensor, whose standard form is:
ds² = -c²dt² + dx² + dy² + dz²
Or expressed in the form of a four-dimensional tensor as:
ds² = ημν dx^μ dx^ν
where ημν is the Minkowski metric tensor, with diagonal elements (−1, 1, 1, 1) and all other elements 0. The space-time interval ds² is invariant in all inertial coordinate systems.
According to the sign of the interval ds², the relationship between two events falls into three categories: timelike (ds² < 0, a causal connection is possible), lightlike (ds² = 0, the events can be connected by a light signal), and spacelike (ds² > 0, no causal connection is possible).
In Minkowski coordinates, the transformation between different inertial observers is described by the Lorentz transformation. These transformations keep the interval ds² invariant, ensuring that the laws of physics take the same form in all inertial reference frames.
Minkowski space provides the geometric language of special relativity, allowing time and space to be treated in a unified manner. A particle's worldline is its path through space-time, and its light cone determines the causal structure of reachable events.
The Minkowski coordinate system not only reveals the relativity of time and space, but also provides the flat-space-time foundation for the curved space-time of general relativity. It is an indispensable mathematical framework for modern theoretical physics.
The Twin Paradox is a famous thought experiment in the special theory of relativity, used to illustrate the phenomenon of time dilation. The core of the paradox is: Why do two observers in different motion states produce asymmetric observations of each other's time passage?
Suppose there are a pair of twins, one (A) stays on the earth, and the other (B) sets off on a spaceship traveling at close to the speed of light, flies to somewhere and then returns. From A's point of view, because B is moving at high speed, its time dilation should be younger than A after returning.
The contradiction of the paradox is that according to the principle of relativity of special relativity, B can also say that it is stationary, but A is moving, and logically it can also be said that A's time is slower. But in fact, the two are not symmetrical.
The key to truly solving the paradox is that B has experienced acceleration and deceleration during the journey, especially when turning back, B is no longer in the inertial reference frame. In the special theory of relativity, only reference systems in inertial motion have relative symmetry. Therefore, the passage of time for B is not equivalent to that of A.
If the astronaut travels at speed v for a time t (measured from Earth), then the proper time elapsed on board (measured by the traveler's own clock) is:
τ = t √(1 - v²/c²)
This means that the twins who traveled in space experienced a shorter period of time and would return younger than the twins who stayed on Earth.
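A worked number for the formula above: a 10-year round trip (Earth time) at v = 0.8c shrinks the traveler's elapsed proper time to 6 years, since √(1 − 0.8²) = 0.6:

```python
from math import sqrt

def proper_time(t, v_over_c):
    """Proper time elapsed for a traveler at speed v, given Earth-frame time t."""
    return t * sqrt(1 - v_over_c**2)

print(proper_time(10.0, 0.8))  # approximately 6.0 years
```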
The twin effect is not a simple thought experiment, it has been confirmed by experiments. For example, high-speed atomic clocks do indeed run slower than ground-based atomic clocks; the same phenomenon occurs in GPS satellites, where relativistic corrections must be taken into account to keep time accurate.
The twin paradox shows that time is not absolute, but is related to the state of motion of the observer. This has a profound impact on our understanding of time, motion and causality, and is one of the most intuitive and inspiring examples of the theory of relativity.
General Relativity is a theory proposed by Albert Einstein in 1915 that describes how gravity affects the geometry of space-time.
The core equations of general relativity are:
Gμν = (8πG/c⁴) Tμν
where Gμν is the Einstein tensor, describing the curvature of space-time; Tμν is the energy-momentum tensor, describing the distribution of matter and energy; G is the gravitational constant; and c is the speed of light.

Paul Ehrenfest raised a question about the special theory of relativity in 1909, called "Ehrenfest's Paradox". He considered a rigid disk rotating about its center at extremely high angular velocity and explored how the geometric properties of the disk change under special relativity.
The length contraction effect of special relativity says that an object contracts along its direction of motion. For a rotating disk, each small segment of the rim moves at high speed in the tangential direction and should contract, while the center remains stationary. From this arises Ehrenfest's question: what happens to the ratio of the disk's circumference to its radius?
Assume the disk rotates with angular velocity ω, so the tangential velocity at a point on the circumference is v = ωR. According to special relativity, the circumference should contract:
L' = L · √(1 - v²/c²)
However, the radial ruler does not shorten because its direction is perpendicular to the motion. As a result, the circumference of the circle becomes shorter but the radius remains unchanged, and the ratio of the circumference to the radius will be less than 2π, which is inconsistent with Euclidean geometry.
Ehrenfest's paradox shows that special relativity cannot consistently describe the geometric relationships in non-inertial (rotating) systems, especially in the treatment of rigid bodies.
Ehrenfest's thought experiment became an important motivation for further research on non-inertial coordinate systems and curved space-time. Inspired by this, Einstein further developed the general theory of relativity, which unified gravity and acceleration into the curved space-time structure.
In modern physics, the space of a rotating disk is considered to have a non-Euclidean geometry. Its circumference is indeed no longer equal to 2πR, but is related to the metric. This shows that the space-time geometry in non-inertial systems needs to rely on a broader theoretical treatment, which is beyond the scope of application of special relativity.
Polymer Physics is a branch of physics focused on the structure, properties, and dynamic behavior of polymer materials, and on their physical characteristics in various applications. Polymer materials include plastics, rubber, fibers, proteins, and so on; they possess distinctive elasticity, toughness, and thermal stability, and are widely used in modern industry and biomedicine.
Polymers are long chain structures formed by a large number of small molecular units (monomers) connected to each other through chemical bonds. Repeated arrangements of these monomers give polymers different properties from ordinary small molecules. The properties of polymers are affected by factors such as their chain structure, molecular weight, and intermolecular forces.
Polymer physics mainly studies the structure, dynamics, and thermodynamic behavior of polymers.
Polymer physics uses a range of theories to describe the behavior of polymers, including the Gaussian chain model and viscoelasticity theory discussed below.
The study of polymer physics has important applications in many fields, from industrial materials to biomedicine.
Polymer physics is a discipline that explores the properties and behavior of polymer materials. With the development of new polymer materials, this field is playing an increasingly important role in technology and scientific research.
Gaussian Chain Model is a statistical model describing the configuration of polymer chains in polymer physics. It assumes that the polymer chain is composed of many independent nodes (monomers), each node is connected with a random step size, and satisfies a Gaussian distribution. This model ignores volume repulsion effects and intermolecular interactions and focuses on the random coiling properties of the chains.
Let the chain consist of N segments, each of length b. The mean square of the chain's end-to-end vector R is then:
⟨R²⟩ = N b²
If the chain segment direction obeys Gaussian distribution, the probability distribution of the end-to-end distance of the chain is:
P(R) = \(\left(\frac{3}{2 \pi N b^2}\right)^{3/2} \exp\left(-\frac{3R^2}{2Nb^2}\right)\)
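The scaling ⟨R²⟩ = Nb² can be checked by direct simulation. The sketch below samples freely-jointed chains (random unit-length bonds) rather than strictly Gaussian steps, but the mean-square end-to-end distance obeys the same law; chain length, sample count, and seed are illustrative:

```python
import random
from math import sqrt

rng = random.Random(42)  # fixed seed so the run is reproducible

def end_to_end_sq(N, b=1.0):
    """Sample one freely-jointed 3D chain of N bonds and return |R|^2."""
    x = y = z = 0.0
    for _ in range(N):
        # Draw a uniformly random unit vector by rejection sampling.
        while True:
            u, v, w = rng.uniform(-1, 1), rng.uniform(-1, 1), rng.uniform(-1, 1)
            r2 = u * u + v * v + w * w
            if 0.0 < r2 <= 1.0:
                break
        r = sqrt(r2)
        x += b * u / r
        y += b * v / r
        z += b * w / r
    return x * x + y * y + z * z

N, samples = 100, 2000
mean = sum(end_to_end_sq(N) for _ in range(samples)) / samples
print(mean)  # close to N * b**2 = 100
```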
Viscoelasticity theory describes the properties of materials that have both viscosity and elasticity. This type of material can both store energy like a spring and dissipate energy like a fluid under the action of external forces. Common viscoelastic materials include polymers, rubber, asphalt, biological tissues, etc.
Viscoelastic behavior results from the motion and relaxation processes of molecular chains. On short time scales, the material behaves like an elastic solid; on long time scales, it approaches a viscous fluid. This property makes viscoelastic theory a core foundation for understanding polymer physics and biomechanics.
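The two regimes above can be captured by the Maxwell model, the simplest standard viscoelastic model (a spring in series with a dashpot; it is not named in the text, and the parameter values below are purely illustrative). After a step strain, the stress relaxes exponentially with a characteristic relaxation time τ:

```python
import math

def maxwell_relaxation_modulus(t, G0, tau):
    """Maxwell model: spring (modulus G0) in series with a dashpot
    (viscosity eta = G0 * tau). After a step strain, the relaxation
    modulus decays exponentially with relaxation time tau."""
    return G0 * math.exp(-t / tau)

G0, tau = 1.0e6, 2.0  # illustrative values: 1 MPa modulus, 2 s relaxation time
for t in (0.01, 2.0, 20.0):
    G = maxwell_relaxation_modulus(t, G0, tau)
    print(f"t = {t:5.2f} s  G(t) = {G:12.1f} Pa")
# t << tau: G(t) ~ G0 (behaves like an elastic solid)
# t >> tau: G(t) -> 0  (behaves like a viscous fluid)
```

Real polymers exhibit a spectrum of relaxation times rather than a single τ, but the single-mode Maxwell model already shows the short-time elastic and long-time fluid-like behavior described above.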
A unified theory is one that attempts to incorporate the different fundamental forces of nature into the same mathematical framework. Over the history of physics, seemingly independent forces have gradually been merged, and the ultimate goal is to construct a "Theory of Everything" (TOE) that unifies gravity, electromagnetism, and the weak and strong forces.
Grand unified theories attempt to unify the strong force and the electroweak force at higher energy scales. Common candidate groups are SU(5), SO(10), E₆, etc. The symmetries of these groups will spontaneously break at low energy and differentiate into strong interactions and electroweak interactions. One of the main predictions is proton decay, but it has not been observed so far.
General relativity successfully describes gravity, but it is incompatible with the quantum field theory framework. To incorporate gravity into a unified theory, it is necessary to develop quantum gravity theories, such as string theory, loop quantum gravity, and the brane universe model.
Unified theory is one of the ultimate goals of physics, aiming to describe all the fundamental forces of nature within a single mathematical framework. Although still unfinished, the pursuit has driven advances in theoretical physics, mathematics, and high-energy experimental physics.
From electromagnetic unification to the standard model, to grand unification and string theory, the development of unified theory demonstrates physics' exploration of the "deep order of nature." If gravity can be successfully integrated with other forces in the future, it will mark a new era in physics moving toward the "theory of everything."
Quantum Field Theory (QFT) is the core architecture of modern theoretical physics. It combines quantum mechanics and special relativity to describe the interaction between particles and fields. In this framework, particles are viewed as quantized excitations of fields rather than as mere independent point-like objects.
Quantum field theory is the basis of the Standard Model of elementary particles. The Standard Model successfully describes the electromagnetic interaction, weak interaction, and strong interaction, and explains the source of particle mass through the Higgs mechanism. However, gravity has not yet been included.
Quantum field theory is the theoretical cornerstone of current high-energy physics, cosmology and condensed matter physics. It not only explains the interaction between microscopic particles, but is also widely used in actual physical systems such as superconductors and semiconductors.
Quantum field theory is the core language of modern physics. It has successfully unified the description of particles and fields and is highly consistent with experimental results. Although there are still unsolved challenges, it provides the most solid mathematical and theoretical tools for humans to explore deeper natural laws.
String theory is a theory that seeks to unify quantum mechanics and general relativity. It assumes that all elementary particles are not point-like, but extremely tiny "string"-like objects. These strings vibrate in space, producing different vibration patterns that manifest as different particle properties (such as mass and charge). String theory therefore explains all the fundamental forces and particle properties in the universe as the different modes of vibration of these tiny strings.
Superstring theory builds on string theory by adding the concept of supersymmetry: the theoretical assumption that every particle has a corresponding supersymmetric partner particle, which makes the theory more unified. Superstring theory lives in a higher-dimensional spacetime, typically ten dimensions, which resolves some mathematical problems that arise in string theory and enhances its applicability in physics.
String theory and superstring theory are considered candidates for a "theory of everything," meaning they could be a theoretical framework that unifies all fundamental forces in the universe (gravity, electromagnetism, weak nuclear force, and strong nuclear force). However, these theories are still under development and have not yet been fully confirmed experimentally. If string theory or superstring theory can be verified, it may change our understanding of the structure of the universe.
Supersymmetry (SUSY for short) is a theoretical symmetry that attempts to unify "bosons" (integer spin particles, responsible for transmitting force) and "fermions" (half-integer spin particles, constituting matter) under the same symmetry framework. This theory proposes that every known particle should have an undiscovered supersymmetric "superpartner".
The mathematical basis of supersymmetry comes from superalgebra, which extends traditional symmetry groups (such as rotations and translations) by allowing anticommuting generators alongside the usual commuting ones. This allows bosons and fermions to transform into each other under the same symmetry operation.
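The core of this superalgebra is an anticommutation relation between the fermionic generators Q. In the simplest case (N = 1 supersymmetry in four dimensions) it reads:

```latex
\{ Q_\alpha, \bar{Q}_{\dot{\beta}} \} = 2\,\sigma^\mu_{\alpha\dot{\beta}} P_\mu
```

Two successive supersymmetry transformations compose into a spacetime translation (the momentum operator P_μ), which is what ties supersymmetry directly to the geometry of spacetime.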
So far, the Large Hadron Collider (LHC) has not discovered any superpartner particles, which poses a challenge to the simplest supersymmetry models. However, physicists continue to study more complex versions (such as softly broken or non-minimal supersymmetric models) and regard supersymmetric particles as candidate dark matter.
Supersymmetry is a theory that has not yet been experimentally verified, but its elegant mathematical structure and potential to explain problems still make it an important research direction in modern theoretical physics. Future higher-energy experiments or cosmic observations may provide key evidence for the existence of supersymmetric particles.
"Three-dimensional time" refers to the assumption that time has more than one independent direction (such as t₁, t₂, t₃), juxtaposed with three-dimensional space (x, y, z), making the space-time dimension reach 6 dimensions. This idea is mostly seen in theoretical exploration or philosophical discussions, and is not a mainstream physical architecture.
If three time components t₁, t₂, t₃ are introduced, the generalized interval can be written:
ds² = c²(dt₁² + dt₂² + dt₃²) − (dx² + dy² + dz²)
Its metric signature is ( +, +, +, −, −, − ). It is also possible to more generally allow mixing terms between time components, but this would lead to more complex causal structures.
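A small numeric illustration of the signature above (units with c = 1 are assumed, and the displacement values are arbitrary examples) shows how extra time components change which displacements count as time-like:

```python
def interval_squared(dt, dx, c=1.0):
    # Generalized interval for signature (+, +, +, -, -, -):
    # ds^2 = c^2 (dt1^2 + dt2^2 + dt3^2) - (dx^2 + dy^2 + dz^2)
    return c * c * sum(t * t for t in dt) - sum(x * x for x in dx)

# With only one active time component, this displacement is space-like:
print(interval_squared((1.0, 0.0, 0.0), (2.0, 0.0, 0.0)))  # -3.0
# Activating the extra time components makes the same spatial
# displacement time-like, blurring the usual causal boundary:
print(interval_squared((1.0, 1.0, 1.5), (2.0, 0.0, 0.0)))  # 0.25
```

The sign flip in the second case is exactly the complication described below: the boundary between time-like and space-like displacements no longer forms a single cone.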
One-dimensional time provides a clear time-like/space-like boundary, a cone-shaped causal structure, and a stable quantum vacuum; while introducing multiple times increases symmetry, it generally destroys the above-mentioned key physical properties.
Three-dimensional time is an inspiring mathematical and philosophical construct, but given current physical evidence and theoretical consistency requirements, one-dimensional time remains the most successful and testable framework for describing nature. The constant c, as the conversion factor between units and the limit of the causal cone, and i, as a mathematical tool, still play key roles in the standard one-dimensional-time theory.