Examples of using Operatorname in English and their translation into Ukrainian
Welcome! My name is operatorName. How can I help you?
E {\displaystyle \operatorname {E} } is the expectation.
It is denoted by Supp(M) {\displaystyle \operatorname {Supp} (M)}.
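For context (a standard definition, not part of the example above): for a module M over a commutative ring R, the support is the set of prime ideals at which the localisation of M is non-zero:
{\displaystyle \operatorname {Supp} (M)=\{{\mathfrak {p}}\in \operatorname {Spec} (R):M_{\mathfrak {p}}\neq 0\}.}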
Here E{\displaystyle\operatorname{E}} is the expected value operator, and I is the information content of X.
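In the entropy setting this example comes from, the two operators combine as H(X) = E[I(X)] (a standard identity; the discrete form below assumes a probability mass function p):
{\displaystyle \mathrm {H} (X)=\operatorname {E} [\operatorname {I} (X)]=-\sum _{x}p(x)\log p(x).}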
Proposition (extremal property of E {\displaystyle \operatorname {E} }).
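The statement itself is truncated in this example; the extremal property usually meant by this name (stated here as an assumption about the intended proposition) is that the expected value minimises the mean squared deviation:
{\displaystyle \operatorname {E} [(X-c)^{2}]\geq \operatorname {E} [(X-\operatorname {E} [X])^{2}]{\text{ for all }}c.}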
grad(f) = ∇f {\displaystyle \operatorname {grad} (f)=\nabla f} measures the rate and direction of change in a scalar field.
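In Cartesian coordinates this operator expands componentwise (standard form, assuming a differentiable scalar field f(x, y, z)):
{\displaystyle \operatorname {grad} (f)=\nabla f=\left({\frac {\partial f}{\partial x}},{\frac {\partial f}{\partial y}},{\frac {\partial f}{\partial z}}\right).}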
It follows from the definition of Lebesgue integral that E[C] = c {\displaystyle \operatorname {E} [C]=c}.
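The step alluded to is short: a constant random variable C = c integrates to c times the total measure, which is 1 on a probability space:
{\displaystyle \operatorname {E} [C]=\int _{\Omega }c\,dP=c\,P(\Omega )=c.}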
E[X] = p {\displaystyle \operatorname {E} [X]={\boldsymbol {p}}} Let X {\displaystyle {\boldsymbol {X}}} be the realisation from a categorical distribution.
The one-rule program p ← not p {\displaystyle p\leftarrow \operatorname {not} p} has no stable models.
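A quick check of this claim (standard stable-model reasoning, spelled out here for illustration): for the candidate set ∅ the reduct is the fact p {\displaystyle p}, whose minimal model {p} ≠ ∅; for the candidate {p} the rule is deleted, leaving an empty reduct with minimal model ∅ ≠ {p}. Neither candidate reproduces itself, so no stable model exists.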
Assume that E[X] {\displaystyle \operatorname {E} [X]} is defined, i.e. min(E[X+], E[X−]) < ∞ {\displaystyle \min(\operatorname {E} [X^{+}],\operatorname {E} [X^{-}])<\infty }.
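Here the positive and negative parts are the usual ones, and the condition above ensures the difference below is not of the form ∞ − ∞ (standard definitions):
{\displaystyle X^{+}=\max(X,0),\quad X^{-}=\max(-X,0),\quad \operatorname {E} [X]=\operatorname {E} [X^{+}]-\operatorname {E} [X^{-}].}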
As a counterexample, look at the sign function sgn(x) {\displaystyle \operatorname {sgn}(x)}, which is defined as follows.
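The definition referred to is the usual piecewise one:
{\displaystyle \operatorname {sgn}(x)={\begin{cases}-1,&x<0,\\0,&x=0,\\1,&x>0.\end{cases}}}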
The hyperbolic trig function \operatorname{sech}\,x appears as one solution to the Korteweg-de Vries equation, which describes the motion of a soliton wave in a canal.
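For illustration, one standard single-soliton solution of the KdV equation u_t + 6uu_x + u_xxx = 0 has this form (normalisation conventions vary between references; c is the wave speed):
{\displaystyle u(x,t)={\frac {c}{2}}\operatorname {sech} ^{2}\!\left({\frac {\sqrt {c}}{2}}\,(x-ct)\right).}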
There are four cases that can be interpreted as follows: nec(U) = 1 {\displaystyle \operatorname {nec} (U)=1} means that U {\displaystyle U} is necessary.
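The necessity measure used in such statements is the standard dual of possibility, with the bar denoting the complement of U:
{\displaystyle \operatorname {nec} (U)=1-\operatorname {pos} ({\overline {U}}).}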
To estimate the compression/expansion work in an isothermal process, it may be assumed that the compressed air obeys the ideal gas law, pV = nRT = constant {\displaystyle pV=nRT=\operatorname {constant} }.
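Under that assumption, the isothermal work follows by integrating p dV with pV held constant (sign conventions differ; this is the work done by the gas expanding from V₁ to V₂):
{\displaystyle W=\int _{V_{1}}^{V_{2}}p\,dV=nRT\int _{V_{1}}^{V_{2}}{\frac {dV}{V}}=nRT\ln {\frac {V_{2}}{V_{1}}}.}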
Further noting that X + Y ∼ Pois(λ + μ) {\displaystyle X+Y\sim \operatorname {Pois} (\lambda +\mu )}, and computing a lower bound on the unconditional probability gives the result.
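The distributional fact used here follows from a convolution and the binomial theorem (standard derivation, assuming X and Y are independent):
{\displaystyle P(X+Y=k)=\sum _{i=0}^{k}{\frac {e^{-\lambda }\lambda ^{i}}{i!}}\,{\frac {e^{-\mu }\mu ^{k-i}}{(k-i)!}}={\frac {e^{-(\lambda +\mu )}(\lambda +\mu )^{k}}{k!}}.}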
Finally, the query s {\displaystyle s} succeeds, because each of the subgoals p {\displaystyle p}, not q {\displaystyle \operatorname {not} q} succeeds.
The proof follows from the arithmetic-geometric mean inequality, AM ≤ max {\displaystyle \operatorname {AM} \leq \max }, and reciprocal duality (min {\displaystyle \min } and max {\displaystyle \max } are also reciprocal dual to each other).
The proposition in probability theory known as the law of total expectation,[1] the law of iterated expectations,[2] the tower rule,[3] Adam's law, and the smoothing theorem,[4] among other names, states that if X {\displaystyle X} is a random variable whose expected value E(X) {\displaystyle \operatorname {E} (X)} is defined, and Y {\displaystyle Y} is any random variable on the same probability space, then E(X) = E(E(X ∣ Y)) {\displaystyle \operatorname {E} (X)=\operatorname {E} (\operatorname {E} (X\mid Y))}.
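In the discrete case the conclusion unpacks into a weighted average over the values of Y {\displaystyle Y} (standard form of the law):
{\displaystyle \operatorname {E} (X)=\sum _{y}\operatorname {E} (X\mid Y=y)\,P(Y=y).}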
Michael Gelfond proposed to read not p{\displaystyle\operatorname{not} p} in the body of a rule as" p{\displaystyle p} is not believed", and to understand a rule with negation as the corresponding formula of autoepistemic logic.
Axiom 2 could be interpreted as the assumption that the evidence from which pos {\displaystyle \operatorname {pos} } was constructed is free of any contradiction.
Thus I(X;X) ≥ I(X;Y) {\displaystyle \operatorname {I} (X;X)\geq \operatorname {I} (X;Y)}, and one can formulate the basic principle that a variable contains at least as much information about itself as any other variable can provide.
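The principle rests on a one-line identity: the mutual information of a variable with itself is its entropy, since H(X ∣ X) = 0:
{\displaystyle \operatorname {I} (X;X)=\mathrm {H} (X)-\mathrm {H} (X\mid X)=\mathrm {H} (X)\geq \operatorname {I} (X;Y).}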
The moment of inertia of a cloud of n points with a covariance matrix of Σ {\displaystyle \Sigma } is given by I = n(1_{3×3} tr(Σ) − Σ) {\displaystyle I=n(\mathbf {1} _{3\times 3}\operatorname {tr} (\Sigma )-\Sigma ).} This difference between moment of inertia in physics and in statistics is clear for points that are gathered along a line.
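A worked instance of that contrast (hypothetical data, for illustration, reading 1_{3×3} as the identity matrix): for points spread along the x-axis with Σ = diag(σ², 0, 0), the formula gives zero moment of inertia about the line itself even though the statistical spread along it is σ²:
{\displaystyle I=n\left(\mathbf {1} _{3\times 3}\,\sigma ^{2}-\operatorname {diag} (\sigma ^{2},0,0)\right)=n\operatorname {diag} (0,\sigma ^{2},\sigma ^{2}).}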
Operators arg min {\displaystyle \operatorname {arg\,min} } and arg max {\displaystyle \operatorname {arg\,max} } are sometimes also written as argmin {\displaystyle \operatorname {argmin} } and argmax {\displaystyle \operatorname {argmax} }, and stand for argument of the minimum and argument of the maximum.
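Formally, these operators return the set of argument values attaining the extremum (standard set-valued definition over a domain S):
{\displaystyle \operatorname {argmax} _{x\in S}f(x)=\{x\in S:f(y)\leq f(x){\text{ for all }}y\in S\}.}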
Possibility can be seen as an upper probability: any possibility distribution defines a unique set of admissible probability distributions by {p : ∀S, p(S) ≤ pos(S)} {\displaystyle \left\{\,p:\forall S\ p(S)\leq \operatorname {pos} (S)\,\right\}.} This allows one to study possibility theory using the tools of imprecise probabilities.
Using base-2 logarithms: (For reference, the mutual information I(X;Y) {\displaystyle \operatorname {I} (X;Y)} would then be 0.2141709.) Pointwise mutual information has many of the same relationships as the mutual information.
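For reference, the pointwise quantity and its expectation relate as follows (standard definitions):
{\displaystyle \operatorname {pmi} (x;y)=\log {\frac {p(x,y)}{p(x)\,p(y)}},\qquad \operatorname {I} (X;Y)=\operatorname {E} [\operatorname {pmi} (x;y)].}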
To sum up, the behavior of SLDNF resolution on the given program can be represented by the following truth assignment: On the other hand, the rules of the given program can be viewed as propositional formulas if we identify the comma with conjunction ∧ {\displaystyle \land }, the symbol not {\displaystyle \operatorname {not} } with negation ¬ {\displaystyle \neg }, and agree to treat F ← G {\displaystyle F\leftarrow G} as the implication G → F {\displaystyle G\rightarrow F} written backwards.
As with discrete-time Markov decision processes, in continuous-time Markov decision processes we want to find the optimal policy or control which could give us the optimal expected integrated reward: max E_u[⋯] {\displaystyle \max \operatorname {E} _{u}\left[\cdots \right]}, where 0 ≤ γ < 1 {\displaystyle 0\leq \gamma <1}. If the state space and action space are finite, we could use linear programming to find the optimal policy, which was one of the earliest approaches applied.