Tripartite Essentialism and its Expert System

Three part Essentialism

Robots, scanners, and AI need a fully functional general systems theory [TRE] with which to map the essence of all things and all the ways that those things behave. TRE can then map every event as a series of exchanges taking place in various mediums, and label those exchanges with nouns, verbs, and adjectives drawn from objects at all scales and in all areas of knowledge. These maps look like topographical geography. In some cases a robot, scanner, or AI will encounter an unknown situation and, in order to recognise the process that it is seeing, it will borrow a map that looks similar from its set of geographical maps for other sorts of knowledge in other subject areas or domains. This is called isomorphism between domains. http://en.wikibooks.org/wiki/Systems_Theory/Isomorphic_Systems
TRE provides the general systems theory [6 keys systems theory], the mapping and knowledge representation system [TREES], and the specialised language [HX PROLOG] used to query and instruct the database and maps.
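As an illustration of the "borrowing a similar map" idea above, here is a minimal Python sketch (not TRE's own HX PROLOG machinery, which is not shown in this text): domain maps are represented as tiny graphs, and a crude structural fingerprint is used to pick the most similar stored map for an unknown situation. The names, graphs, and similarity measure are all illustrative assumptions.

```python
# A minimal sketch of map-borrowing between domains. Every detail here is illustrative.

def degree_signature(graph):
    """Return the sorted list of node degrees - a crude structural fingerprint."""
    return sorted(len(neighbours) for neighbours in graph.values())

def most_similar_map(unknown, library):
    """Pick the library map whose degree signature differs least from the unknown one."""
    target = degree_signature(unknown)
    def distance(item):
        name, graph = item
        sig = degree_signature(graph)
        # pad the shorter signature with zeros so the comparison is element-wise
        n = max(len(target), len(sig))
        a = target + [0] * (n - len(target))
        b = sig + [0] * (n - len(sig))
        return sum(abs(x - y) for x, y in zip(a, b))
    return min(library.items(), key=distance)[0]

# Hypothetical domain maps: the hydraulic system shares a star-like shape with the unknown circuit.
library = {
    "hydraulics": {"pump": {"pipe1", "pipe2"}, "pipe1": {"pump"}, "pipe2": {"pump"}},
    "ecology": {"sun": {"plant"}, "plant": {"sun", "herbivore"},
                "herbivore": {"plant", "carnivore"}, "carnivore": {"herbivore"}},
}
unknown = {"battery": {"wire1", "wire2"}, "wire1": {"battery"}, "wire2": {"battery"}}

print(most_similar_map(unknown, library))   # -> "hydraulics"
```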
At the heart of the metaphysics (called tripartite essentialism) is the essence of every transaction in the universe, at all scales and magnitudes. There are eight models (that are 'logically real') of the one and only universal transaction, of the form A to B through a common context C. [If undecided/modal states at time 1 are included, so that the definition becomes A to B through a common C with the intercession of at least one D, there are 27 three-part descriptions covering those extra undecided or modal states.]

i.e.
Universally and logically, every transaction A to B through a common medium C can have eight and only eight forms of integrity at time 1.
The state description of Object A relates how it functions and how integrated and effective it is at any given time.
Object A also makes a donation of surplus energy in a competitive environment or context. A is a developed and sophisticated object or process that is capable of emerging or losing surplus from its investments or internal works C, and this surplus is its assets/qualities at B.
Energy flows from higher to lower down a gradient of exchange through its internal structures: A to B through common C.

0 0 0 0 1 1 1 1   A - OBJECT (high frequency)
0 0 1 1 0 0 1 1   C - PROCESS
0 1 0 1 0 1 0 1   B - QUALITY (low frequency surplus, oogenic investment)

TRE uses organic models in a general systems theory with which to map the universe and its behaviour at all scales and in all contexts.
In any context, for a physical event or process, the physical transaction (the essential exchange) can be modelled and categorised by the properties of its high frequency components, which then emerge and facilitate its low frequency qualities and assets, e.g. its emerged assets, seeds, investments, etc.; i.e. the process is oogenic: seed-making, crystallising, telic.
All physical transactions take the form ACB.
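The combinatorics just described (eight binary forms of the A-C-B exchange, twenty-seven once an undecided/modal value is allowed) can be checked with a few lines of Python; the state labels are illustrative placeholders, not TRE's own notation.

```python
# Enumerate the state descriptions of the A (object), C (process), B (quality) triple.
from itertools import product

binary_states = list(product((0, 1), repeat=3))                 # two values per position
ternary_states = list(product((0, 1, "undecided"), repeat=3))   # adding a modal/undecided value

print(len(binary_states))    # 8  - the eight forms of integrity at time 1
print(len(ternary_states))   # 27 - the twenty-seven three-part descriptions
for a, c, b in binary_states:
    print(f"A={a} C={c} B={b}")
```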

In natural and logical language, TRE has objects (nouns), processes (verbs), and qualities/assets (adjectives). In tripartite metaphysics every object and event can be mapped out in natural language as noun, verb, and adjective of the type object, process, and quality/asset.
The metaphysics of a universal three-part exchange is given below, followed by the language [HX] Assembler, which can be used to model the fine details and context within each exchange.

An essence characterizes a substance or a form, in the sense of the Forms or Ideas in Platonic idealism. It is permanent, unalterable, and eternal; and present in every possible world. Classical humanism has an essentialist conception of the human being, which means that it believes in an eternal and unchangeable human nature.
Essentialism, in its broadest sense, is any philosophy that acknowledges the primacy of Essence. Unlike Existentialism, which posits "being" as the fundamental reality, the essentialist ontology must be approached from a metaphysical perspective. Empirical knowledge is developed from experience of a relational universe whose components and attributes are defined and measured in terms of intellectually constructed laws.

The three-part essences describe the integrity of the exchange at time 1 and at time 2, i.e. whether the object, its process, and its quality are in an integrated or disintegrated state, and can resemble Boolean logic.
This kind of knowledge about the three part states is a priori or synthetic a priori.
The terms a priori ("from the earlier") and a posteriori ("from the later") are used in philosophy (epistemology) to distinguish two types of knowledge, justifications or arguments. A priori knowledge or justification is independent of experience (for example "All bachelors are unmarried"); a posteriori knowledge or justification is dependent on experience or empirical evidence (for example "Some bachelors are very happy"). A posteriori justification makes reference to experience; but the issue concerns how one knows the proposition or claim in question: what justifies or grounds one's belief in it. Galen Strawson wrote that an a priori argument is one in which "you can see that it is true just lying on your couch. You don't have to get up off your couch and go outside and examine the way things are in the physical world. You don't have to do any science."[1] There are many points of view on these two types of assertions, and their relationship is one of the oldest problems in modern philosophy.
The terms "a priori" and "a posteriori" are used in philosophy to distinguish two different types of knowledge, justification, or argument: 'a priori knowledge' is known independently of experience (conceptual knowledge), and "a posteriori knowledge" is proven through experience. Thus, they are primarily used as adjectives to modify the noun "knowledge", or taken to be compound nouns that refer to types of knowledge (for example, "a priori knowledge"). However, "a priori" is sometimes used as an adjective to modify other nouns, such as "truth". Additionally, philosophers often modify this use. For example, "apriority" and "aprioricity" are sometimes used as nouns to refer (approximately) to the quality of being "a priori".

The status of TRE and its before-the-fact (a priori) essentialist states introduces certainty into its applications. This means that limited and closed sets of numbers with which to represent the infinite will enable any computation based on these TRE numbers to overcome the Halting problem.

In computability theory, the halting problem can be stated as follows: Given a description of an arbitrary computer program, decide whether the program finishes running or continues to run forever. This is equivalent to the problem of deciding, given a program and an input, whether the program will eventually halt when run with that input, or will run forever.
Alan Turing proved in 1936 that a general algorithm to solve the halting problem for all possible program-input pairs cannot exist. A key part of the proof was a mathematical definition of a computer and program, which became known as a Turing machine; the halting problem is undecidable over Turing machines. It is one of the first examples of a decision problem proved to be undecidable.
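The flavour of Turing's diagonal argument can be sketched in Python. This is an illustration only: `halts` stands for the hypothetical general decider whose existence the argument rules out, and nothing here is part of TRE.

```python
# A compact sketch of the standard diagonal argument. It illustrates the contradiction;
# it is not (and cannot be) a working halting decider.

def halts(program_source: str, input_data: str) -> bool:
    """Hypothetical: would return True iff program_source halts on input_data."""
    raise NotImplementedError("Turing (1936): no such total algorithm can exist")

def paradox(program_source: str) -> None:
    """Built from `halts`; feeding paradox its own source yields a contradiction."""
    if halts(program_source, program_source):
        while True:   # if the decider says "halts", loop forever
            pass
    # if the decider says "loops forever", halt immediately
```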

The halting problem is historically important because it was one of the first problems to be proved undecidable. (Turing's proof went to press in May 1936, whereas Alonzo Church's proof of the undecidability of a problem in the lambda calculus had already been published in April 1936.) Subsequently, many other undecidable problems have been described; the typical method of proving a problem to be undecidable is with the technique of reduction. To do this, it is sufficient to show that if a solution to the new problem were found, it could be used to decide an undecidable problem by transforming instances of the undecidable problem into instances of the new problem. Since we already know that no method can decide the old problem, no method can decide the new problem either. Often the new problem is reduced to solving the halting problem.

For example, one such consequence of the halting problem's undecidability is that there cannot be a general algorithm that decides whether a given statement about natural numbers is true or not. The reason for this is that the proposition stating that a certain program will halt given a certain input can be converted into an equivalent statement about natural numbers. If we had an algorithm that could decide every statement about natural numbers, it could certainly decide this one; but that would determine whether the original program halts, which is impossible, since the halting problem is undecidable.
Rice's theorem generalizes the theorem that the halting problem is unsolvable. It states that any non-trivial property of the partial function that is implemented by a program is undecidable. (A partial function is a function which may not always produce a result, and so is used to model programs, which can either produce results or fail to halt.) For example, the property "halts for the input 0" is undecidable. Note that this theorem holds only for properties of the partial function implemented by the program; Rice's theorem does not apply to properties of the program itself. For example, "halts on input 0 within 100 steps" is not a property of the partial function that is implemented by the program; it is a property of the program implementing the partial function, and is very much decidable.
Gregory Chaitin has defined a halting probability, represented by the symbol Ω, a type of real number that informally is said to represent the probability that a randomly produced program halts. These numbers have the same Turing degree as the halting problem. It is a normal and transcendental number which can be defined but cannot be completely computed. This means one can prove that there is no algorithm which produces the digits of Ω, although its first few digits can be calculated in simple cases.
While Turing's proof shows that there can be no general method or algorithm to determine whether algorithms halt, individual instances of that problem may very well be susceptible to attack. Given a specific algorithm, one can often show that it must halt for any input, and in fact computer scientists often do just that as part of a correctness proof. But each proof has to be developed specifically for the algorithm at hand; there is no mechanical, general way to determine whether algorithms on a Turing machine halt. However, there are some heuristics that can be used in an automated fashion to attempt to construct a proof, which succeed frequently on typical programs. This field of research is known as automated termination analysis.
Since the negative answer to the halting problem shows that there are problems that cannot be solved by a Turing machine, the Church-Turing thesis limits what can be accomplished by any machine that implements effective methods. However, not all machines conceivable to human imagination are subject to the Church-Turing thesis (e.g. oracle machines are not). It is an open question whether there can be actual deterministic physical processes that, in the long run, elude simulation by a Turing machine, and in particular whether any such hypothetical process could usefully be harnessed in the form of a calculating machine (a hypercomputer) that could solve the halting problem for a Turing machine amongst other things. It is also an open question whether any such unknown physical processes are involved in the working of the human brain, and whether humans can solve the halting problem.

TRE therefore belongs to the paradigm called Logical Atomism and is also the basis of a General Systems Theory.


Logical atomism is a philosophical belief that originated in the early 20th century with the development of analytic philosophy. Its principal exponents were the British philosopher Bertrand Russell, the early work of his Austrian-born pupil and colleague Ludwig Wittgenstein, and his German counterpart Rudolf Carnap.
The theory holds that the world consists of ultimate logical "facts" (or "atoms") that cannot be broken down any further. Having originally propounded this stance in his Tractatus Logico-Philosophicus, Wittgenstein rejected it in his later Philosophical Investigations.

The name for this kind of theory was coined in 1918 by Russell in response to what he called "logical holism", i.e. the belief that the world operates in such a way that no part can be known without the whole being known first. This belief is commonly called monism, and in particular Russell (and G.E. Moore) were reacting to the absolute idealism then dominant in Britain.
The term was first coined in a 1911 essay by Russell. However, it became widely known only when Russell gave a series of lectures in 1918 entitled "The Philosophy of Logical Atomism". Russell was much influenced by Ludwig Wittgenstein, as an introductory note explicitly acknowledges.
Russell and Moore broke themselves free from British Idealism which, for nearly 90 years, had dominated British Philosophy. Russell would later recall in "My Mental Development" that "with a sense of escaping from prison, we allowed ourselves to think that grass is green, that the sun and stars would exist if no one was aware of them ... ".
The principles of logical atomism
Russell described his atomistic doctrine as contrary to the monistic logic "of the people who more or less follow Hegel".
The first principle of logical atomism is that the world contains "facts". Facts are complex structures consisting of objects ("particulars"). An atomic fact consists either of a single object with a simple property or of several objects standing in a relation to one another (PLA 199). In addition, there are judgements ("beliefs"), which stand in a relationship to the facts and are true or false through that relationship.
According to this theory even ordinary objects of daily life "are apparently complex entities". According to Russell, words like "this" and "that" are used to denote particulars. In contrast, ordinary names such as "Socrates" are actually definite descriptions. In the analysis of "Plato talks with his pupils", "Plato" needs to be replaced with something like "the man who was the teacher of Aristotle".

TRE is a General Systems Theory

Systems theory is the interdisciplinary study of systems in general, with the goal of elucidating principles that can be applied to all types of systems at all nesting levels in all fields of research. The term does not yet have a well-established, precise meaning, but systems theory can reasonably be considered a specialization of systems thinking, a generalization of systems science, a systems approach. The term originates from Bertalanffy's General System Theory (GST) and is used in later efforts in other fields, such as the action theory of Talcott Parsons and the system-theory of Niklas Luhmann.
In this context the word systems is used to refer specifically to self-regulating systems, i.e. systems that are self-correcting through feedback. Self-regulating systems are found in nature, including the physiological systems of our body, in local and global ecosystems, in climate, and in human learning processes.
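As a concrete toy example of a self-regulating, feedback-driven system of the kind just mentioned, here is a small Python sketch of a thermostat applying negative feedback; the numbers and the gain are arbitrary illustrative choices.

```python
# A tiny negative-feedback loop: each step nudges the temperature toward the set point.

def thermostat_step(temperature: float, set_point: float, gain: float = 0.5) -> float:
    """Apply a corrective nudge proportional to the error (negative feedback)."""
    error = set_point - temperature
    return temperature + gain * error

temperature = 30.0
for step in range(10):
    temperature = thermostat_step(temperature, set_point=20.0)
print(round(temperature, 3))   # converges toward 20.0
```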
General systems research and systems inquiry
Many early systems theorists aimed at finding a general systems theory that could explain all systems in all fields of science. The term goes back to Bertalanffy's book titled "General System theory: Foundations, Development, Applications" from 1968. According to Von Bertalanffy, he developed the "allgemeine Systemlehre" (general systems teachings) first via lectures beginning in 1937 and then via publications beginning in 1946.
Von Bertalanffy's objective was to bring together under one heading the organismic science that he had observed in his work as a biologist. His desire was to use the word "system" to describe those principles which are common to systems in general. In GST, he writes:
...there exist models, principles, and laws that apply to generalized systems or their subclasses, irrespective of their particular kind, the nature of their component elements, and the relationships or "forces" between them. It seems legitimate to ask for a theory, not of systems of a more or less special kind, but of universal principles applying to systems in general.
Ervin Laszlo in the preface of von Bertalanffy's book Perspectives on General System Theory:
Thus when von Bertalanffy spoke of Allgemeine Systemtheorie it was consistent with his view that he was proposing a new perspective, a new way of doing science. It was not directly consistent with an interpretation often put on "general system theory", to wit, that it is a (scientific) "theory of general systems." To criticize it as such is to shoot at straw men. Von Bertalanffy opened up something much broader and of much greater significance than a single theory (which, as we now know, can always be falsified and has usually an ephemeral existence): he created a new paradigm for the development of theories.
Ludwig von Bertalanffy outlines systems inquiry into three major domains: Philosophy, Science, and Technology. In his work with the Primer Group, Béla H. Bánáthy generalized the domains into four integratable domains of systemic inquiry:
Philosophy: the ontology, epistemology, and axiology of systems
Theory: a set of interrelated concepts and principles applying to all systems
Methodology: the set of models, strategies, methods, and tools that instrumentalize systems theory and philosophy
Application: the application and interaction of the domains
These domains operate in a recursive relationship, he explained. Integrating philosophy and theory as knowledge, and methodology and application as action, systems inquiry is then knowledgeable action.

TRE has its own unique knowledge representation system, and with this it is possible to build topographical maps of all domains of knowledge using the empirical values and scientific units of measurement (e.g. watts, volts) for each object or noun in the domain.

Knowledge Representation (KR) research involves analysis of how to reason accurately and effectively and how best to use a set of symbols to represent a set of facts within a knowledge domain. A symbol vocabulary and a system of logic are combined to enable inferences about elements in the KR to create new KR sentences. Logic is used to supply formal semantics of how reasoning functions should be applied to the symbols in the KR system. Logic is also used to define how operators can process and reshape the knowledge. Examples of operators and operations include negation, conjunction, adverbs, adjectives, quantifiers and modal operators. The logic is the interpretation theory. These elements (symbols, operators, and interpretation theory) are what give sequences of symbols meaning within a KR.
A key parameter in choosing or creating a KR is its expressivity. The more expressive a KR, the easier and more compact it is to express a fact or element of knowledge within the semantics and grammar of that KR. However, more expressive languages are likely to require more complex logic and algorithms to construct equivalent inferences. A highly expressive KR is also less likely to be complete and consistent. Less expressive KRs may be both complete and consistent. Autoepistemic temporal modal logic is a highly expressive KR system, encompassing meaningful chunks of knowledge with brief, simple symbol sequences (sentences). Propositional logic is much less expressive but highly consistent and complete and can efficiently produce inferences with minimal algorithm complexity. Nonetheless, only the limitations of an underlying knowledge base affect the ease with which inferences may ultimately be made (once the appropriate KR has been found). This is because a knowledge set may be exported from a knowledge model or knowledge base system (KBS) into different KRs, with different degrees of expressiveness, completeness, and consistency. If a particular KR is inadequate in some way, that set of problematic KR elements may be transformed by importing them into a KBS, modified and operated on to eliminate the problematic elements or augmented with additional knowledge imported from other sources, and then exported into a different, more appropriate KR.
In applying KR systems to practical problems, the complexity of the problem may exceed the resource constraints or the capabilities of the KR system. Recent developments in KR include the concept of the Semantic Web, and development of XML-based knowledge representation languages and standards, including Resource Description Framework (RDF), RDF Schema, Topic Maps, DARPA Agent Markup Language (DAML), Ontology Inference Layer (OIL)[2], and Web Ontology Language (OWL).
There are several KR techniques such as frames, rules, tagging, and semantic networks, which originated in cognitive science (a minimal semantic-network example is sketched after the list below). Since knowledge is used to achieve intelligent behavior, the fundamental goal of knowledge representation is to facilitate reasoning, inferencing, or drawing conclusions. A good KR must capture both declarative and procedural knowledge. What knowledge representation is can best be understood in terms of five distinct roles it plays, each crucial to the task at hand:
* A knowledge representation (KR) is most fundamentally a surrogate, a substitute for the thing itself, used to enable an entity to determine consequences by thinking rather than acting, i.e., by reasoning about the world rather than taking action in it.
* It is a set of ontological commitments, i.e., an answer to the question: In what terms should I think about the world?
* It is a fragmentary theory of intelligent reasoning, expressed in terms of three components: (i) the representation's fundamental conception of intelligent reasoning; (ii) the set of inferences the representation sanctions; and (iii) the set of inferences it recommends.
* It is a medium for pragmatically efficient computation, i.e., the computational environment in which thinking is accomplished. One contribution to this pragmatic efficiency is supplied by the guidance a representation provides for organizing information so as to facilitate making the recommended inferences.
* It is a medium of human expression, i.e., a language in which we say things about the world.
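As noted above, here is a minimal semantic-network sketch in Python, one of the KR techniques mentioned earlier (frames, rules, tagging, semantic networks). The nodes, relations, and the simple inheritance lookup are illustrative assumptions, not part of TREES.

```python
# A tiny semantic network: each node is a frame of relation -> value pairs,
# and "is_a" links provide simple property inheritance.

semantic_net = {
    "canary": {"is_a": "bird", "can": "sing", "colour": "yellow"},
    "bird":   {"is_a": "animal", "can": "fly", "has": "feathers"},
    "animal": {"can": "breathe"},
}

def lookup(node: str, relation: str):
    """Follow is_a links upward until the relation is found (simple inheritance)."""
    while node in semantic_net:
        frame = semantic_net[node]
        if relation in frame:
            return frame[relation]
        node = frame.get("is_a")
    return None

print(lookup("canary", "has"))   # -> "feathers" (inherited from "bird")
```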
The inference engine is a computer program designed to produce reasoning from rules. In order to produce reasoning, it is based on logic. There are several kinds of logic: propositional logic, predicate logic of first order or higher, epistemic logic, modal logic, temporal logic, fuzzy logic, etc. Except for propositional logic, all are complex and can only be understood by mathematicians, logicians or computer scientists. Propositional logic is the basic human logic, expressed in syllogisms. An expert system that uses that logic is also called a zeroth-order expert system. With logic, the engine is able to generate new information from the knowledge contained in the rule base and the data to be processed.
The engine has two ways to run: batch or conversational. In batch, the expert system has all the necessary data to process from the beginning. For the user, the program works as a classical program: the user provides data and receives results immediately. Reasoning is invisible. The conversational method becomes necessary when the developer knows that the user cannot be asked for all the necessary data at the start, the problem being too complex. The software must "invent" the way to solve the problem, request the missing data from the user, and gradually approach the goal as quickly as possible. The result gives the impression of a dialogue led by an expert. To guide a dialogue, the engine may have several levels of sophistication: "forward chaining", "backward chaining" and "mixed chaining". Forward chaining is the questioning of an expert who has no idea of the solution and investigates progressively (e.g. fault diagnosis). In backward chaining, the engine has an idea of the target (e.g. is it okay or not? Or: there is danger, but what is the level?). It starts from the goal in the hope of finding the solution as soon as possible. In mixed chaining the engine has an idea of the goal but it is not enough: it deduces in forward chaining from previous user responses all that is possible before asking the next question. Quite often it deduces the answer to the next question before asking it.
A strong point of using logic is that this kind of software is able to give the user a clear explanation of what it is doing (the "Why?") and of what it has deduced (the "How?"). Better yet, thanks to logic the most sophisticated expert systems are able to detect contradictions[30] in the user's information or in the knowledge base and can explain them clearly, revealing at the same time the expert's knowledge and way of thinking.
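A minimal forward-chaining engine of the kind described above can be sketched in a few lines of Python; the rule base and facts are invented for illustration and are not HX PROLOG or any real expert-system shell.

```python
# Forward chaining: fire any rule whose conditions are all in the fact base,
# adding its conclusion, until nothing new can be deduced (a fixed point).

rules = [
    ({"fuse blown"}, "no current"),
    ({"no current", "lamp switched on"}, "lamp stays dark"),
]

def forward_chain(facts: set) -> set:
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"fuse blown", "lamp switched on"}))
# -> {'fuse blown', 'lamp switched on', 'no current', 'lamp stays dark'}
```

Backward chaining would instead start from a goal such as "lamp stays dark" and work back through the rules, asking the user only for the facts it still needs.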

The knowledge maps of each domain, and the energy profiles of the objects that they contain, create geographical maps.
In artificial intelligence, a machine will then be able to automatically borrow analogies and models for unknown scenarios from a database of maps it holds for other domains and other physical events at all scales of magnitude.


Isomorphism
Isomorphism is the formal mapping between complex structures where the two structures contain equal parts. This formal mapping is a fundamental premise used in mathematics and is derived from the Greek words Isos, meaning equal, and morphe, meaning shape. Identifying isomorphic structures in science is a powerful analytical tool used to gain deeper knowledge of complex objects. Isomorphic mapping aids biological and mathematical studies where the structural mapping of complex cells and sub-graphs is used to understand equally related objects.

Isomorphic Mapping
Isomorphic mapping is applied in systems theory to gain advanced knowledge of the behavior of phenomena in our world. Finding isomorphism between systems opens up a wealth of knowledge that can be shared between the analyzed systems. Systems theorists further define isomorphism to include equal behavior between two objects: isomorphic systems behave similarly when the same set of input elements is presented. As in scientific analysis, systems theorists seek out isomorphism in systems so as to create a synergetic understanding of the intrinsic behavior of systems. Mastering the knowledge of how one system works and successfully mapping that system's intrinsic structure to another releases a flow of knowledge between two critical knowledge domains. Discovering isomorphism between a well-understood system and a lesser-known, newly defined system can create a powerful impact in science, medicine or business, since future, complex behaviors of the lesser-understood system become revealed.

Methods
General systems theorists strive to find concepts, principles and patterns between differing systems so that they can be readily applied and transferred from one system to another. Systems are mathematically modeled so that the level of isomorphism can be determined. Event graphs and data flow graphs are created to represent the behavior of a system. Identical vertices and edges within the graphs are identified to establish equal structure between systems. Identifying this isomorphism between modeled systems allows shared abstract patterns and principles to be discovered and applied to both systems. Thus, isomorphism is a powerful element of systems theory which propagates knowledge and understanding between different groups. The archive of knowledge obtained for each system is increased. This empowers decision makers and leaders to make critical choices concerning the systems in which they participate. As the future behavior of a system becomes better understood, good decision making concerning the potential balance and operation of the system is facilitated.
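To make the "identical vertices and edges" test concrete, here is a brute-force isomorphism check in Python. It tries every possible vertex mapping, so it is only suitable for tiny graphs; the example graphs are invented, and practical work would use a dedicated algorithm such as VF2.

```python
# Brute-force graph isomorphism: accept any vertex mapping that preserves all edges.
from itertools import permutations

def are_isomorphic(edges_a, nodes_a, edges_b, nodes_b) -> bool:
    if len(nodes_a) != len(nodes_b) or len(edges_a) != len(edges_b):
        return False
    edge_set_b = {frozenset(e) for e in edges_b}   # undirected edges
    for perm in permutations(nodes_b):
        mapping = dict(zip(nodes_a, perm))
        if all(frozenset((mapping[u], mapping[v])) in edge_set_b for u, v in edges_a):
            return True
    return False

# Two different labellings of the same triangle-plus-tail structure.
print(are_isomorphic([("p", "q"), ("q", "r"), ("r", "p"), ("r", "s")], ["p", "q", "r", "s"],
                     [(1, 2), (2, 3), (3, 1), (1, 4)], [1, 2, 3, 4]))   # -> True
```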
Uses
Isomorphism has been used extensively in information technology as computers have evolved from simple low-level circuitry with a minimal external interface to highly distributed clusters of dedicated application servers. All computer science concepts are derived from fundamental mathematical theory, so isomorphic theory is easily applied within the computer science domain. Finding isomorphism between emerging and existing technologies is a powerful goal within the IT industry as scientists determine the proper path for implementing new technologies. Modeling an abstract dedicated computer or large application on paper is much less costly than building the actual instance with hardware components. Finding isomorphism within these modeled, potential computer technologies allows scientists to gain an understanding of the potential performance, drawbacks and behavior of emerging technologies. Isomorphic theory is also critical in discovering "design patterns" within applications. Computer scientists recognized similar abstract data structures and architecture types within software as programs migrated from low-level assembler language to the higher-level languages used today. Patterns of equivalent technical solution architectures have been documented in detail. Modularization, functionality, interfacing, optimization, and platform-related issues are identified for each common architecture so as to further assist developers implementing today's applications. Examples of common patterns include the "proxy" and "adapter" patterns. The proxy design pattern defines the best way to implement a remote object's interface, while the adapter pattern defines how to build interface wrappers around frequently instantiated objects. Current research into powerful new abstract solutions for industry-specific applications and for the protection of user security and privacy will further benefit from implementing isomorphic principles.
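As a small illustration of the adapter pattern mentioned above, the following Python sketch wraps a legacy object so that client code can use the interface it expects; the class and method names are hypothetical.

```python
# Adapter pattern: a wrapper gives a legacy object the interface the client expects.

class LegacyThermometer:
    def read_fahrenheit(self) -> float:
        return 98.6

class CelsiusAdapter:
    """Wraps the legacy object so clients can keep calling read_celsius()."""
    def __init__(self, legacy: LegacyThermometer):
        self._legacy = legacy
    def read_celsius(self) -> float:
        return (self._legacy.read_fahrenheit() - 32.0) * 5.0 / 9.0

print(round(CelsiusAdapter(LegacyThermometer()).read_celsius(), 1))   # -> 37.0
```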

Comparing real vs model
The most powerful use for isomorphic research occurs when comparing a synthetic model of a natural system with the real existence of that system in nature. Systems theorists build models to solve business, engineering and scientific problems and to gain a valid representation of the natural world. These models facilitate understanding of natural phenomena. Theorists work to build these powerful isomorphic properties between the synthetic models they create and real-world phenomena. Discovering significant isomorphism between the modeled and the real world facilitates our understanding of our own world. Equal structure must exist between the man-made model and the natural system so as to ensure an isomorphic link between the two systems. The defined behavior and principles built inside the synthetic model must directly parallel the natural world. Success in this analytical and philosophical drive leads man to gain a deeper understanding of himself and the natural world he lives in.

The General Systems Theory of TRE holds that osmosis is a natural and universal law, and that the relativity of A to B through a common membrane C is regulated by natural power laws like the inverse-square law seen in Ohm's law, Fajans' rules, gravity, etc.

Osmosis is the movement of solvent molecules through a partially permeable membrane into a region of higher solute concentration, aiming to equalize the solute concentrations on the two sides. It may also be used to describe a physical process in which any solvent moves, without input of energy, across a semipermeable membrane (permeable to the solvent, but not the solute) separating two solutions of different concentrations. Although osmosis does not require input of energy, it does use kinetic energy and can be made to do work.

Net movement of solvent is from the less concentrated (hypotonic) to the more concentrated (hypertonic) solution, which tends to reduce the difference in concentrations. This effect can be countered by increasing the pressure of the hypertonic solution, with respect to the hypotonic. The osmotic pressure is defined to be the pressure required to maintain an equilibrium, with no net movement of solvent. Osmotic pressure is a colligative property, meaning that the osmotic pressure depends on the molar concentration of the solute but not on its identity.
Osmosis is essential in biological systems, as biological membranes are semipermeable. In general, these membranes are impermeable to large and polar molecules, such as ions, proteins, and polysaccharides, while being permeable to non-polar and/or hydrophobic molecules like lipids as well as to small molecules like oxygen, carbon dioxide, nitrogen, nitric oxide, etc. Permeability depends on solubility, charge, or chemistry, as well as solute size. Water molecules travel through the plasma membrane, tonoplast membrane (vacuole) or protoplast by diffusing across the phospholipid bilayer via aquaporins (small transmembrane proteins similar to those in facilitated diffusion and in creating ion channels). Osmosis provides the primary means by which water is transported into and out of cells. The turgor pressure of a cell is largely maintained by osmosis, across the cell membrane, between the cell interior and its relatively hypotonic environment.
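The colligative behaviour described above can be made concrete with the standard van 't Hoff relation for dilute solutions (osmotic pressure = i M R T; not quoted in the text above, but the usual formula). A short worked example, with illustrative numbers:

```python
# Osmotic pressure of a dilute solution (van 't Hoff relation): depends on
# concentration, not on the identity of the solute.

R = 0.082057   # ideal gas constant, L*atm/(mol*K)

def osmotic_pressure(molarity: float, temperature_k: float, vant_hoff_i: float = 1.0) -> float:
    """Osmotic pressure in atmospheres."""
    return vant_hoff_i * molarity * R * temperature_k

# 0.1 mol/L of a non-dissociating solute (i = 1) at body temperature (310 K):
print(round(osmotic_pressure(0.1, 310.0), 2))   # -> about 2.54 atm
```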

Voltage, otherwise known as electrical potential difference or electric tension (denoted ∆V and measured in volts, or joules per coulomb), is the potential difference between two points - or the difference in electric potential energy per unit charge between two points.[1] Voltage is equal to the work which would have to be done, per unit charge, against a static electric field to move the charge between the two points. A voltage may represent either a source of energy (electromotive force), or it may represent lost or stored energy (potential drop). A voltmeter can be used to measure the voltage (or potential difference) between two points in a system; usually a common reference potential such as the ground of the system is used as one of the points. Voltage can be caused by static electric fields, by electric current through a magnetic field, by time-varying magnetic fields, or a combination of all three.

Inverse Square Power Law

In physics, an inverse-square law is any physical law stating that a specified physical quantity or strength is inversely proportional to the square of the distance from the source of that physical quantity.
The divergence of a vector field which is the resultant of radial inverse-square law fields with respect to one or more sources is everywhere proportional to the strength of the local sources, and hence zero outside sources.


The lines represent the flux emanating from the source. The total number of flux lines depends on the strength of the source and is constant with increasing distance. A greater density of flux lines (lines per unit area) means a stronger field. The density of flux lines is inversely proportional to the square of the distance from the source because the surface area of a sphere increases with the square of the radius. Thus the strength of the field is inversely proportional to the square of the distance from the source.
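The geometric argument above can be checked numerically: the total flux is constant, so flux per unit area falls off as the inverse square of the distance. A short sketch with an arbitrary source strength:

```python
# Inverse-square law: flux density = source strength / surface area of the sphere.
import math

def flux_density(source_strength: float, radius: float) -> float:
    """Flux per unit area at a given distance from a point source."""
    return source_strength / (4.0 * math.pi * radius ** 2)

s = 100.0   # arbitrary source strength
print(round(flux_density(s, 1.0) / flux_density(s, 2.0), 6))   # -> 4.0 (doubling the distance quarters the field)
```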
Ohm's law is an empirical law, a generalization from many experiments that have shown that current is approximately proportional to electric field for most materials. It is less fundamental than Maxwell's equations and is not always obeyed. Any given material will break down under a strong-enough electric field, and some materials of interest in electrical engineering are "non-ohmic" under weak fields.
Ohm's law has been observed on a wide range of length scales. In the early 20th century, it was thought that Ohm's law would fail at the atomic scale, but experiments have not borne out this expectation. As of 2012, researchers have demonstrated that Ohm's law works for silicon wires as small as four atoms wide and one atom high.
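For completeness, a tiny numeric illustration of the proportionality Ohm's law asserts (V = I R, equivalently I = V / R); the resistance value is arbitrary:

```python
# Ohm's law for an ohmic resistor: current scales linearly with applied voltage.

def current(voltage: float, resistance_ohms: float) -> float:
    return voltage / resistance_ohms

R_LOAD = 50.0   # illustrative resistance in ohms
print(current(5.0, R_LOAD), current(10.0, R_LOAD))   # -> 0.1 0.2 (doubling V doubles I)
```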
