Let's say that we have an objective function $f(\mathbf x,\mathbf y)$ which has the parameters $\mathbf x=[x_1\ldots x_n]$ and $\mathbf y=[y_1\ldots y_n]$. Here, $\mathbf y$ is a blackbox variable which is calculated from a simulation of a network $\mathcal N$ by taking $\mathbf x$ as input.
$f$ is an objective function to be minimized for a given problem. Let's say:
$$ f(\mathbf x,\mathbf y) = \sum\limits_{i=1}^{n}\alpha_ix_i+\beta_ix_iy_i $$
where $\mathbf y= \mathcal N(\mathbf x)$ is a blackbox function whose analytical form is unknown and which takes $\mathbf x$ as input,
$\mathcal N$ refers to the network being simulated, and
$\alpha_i$ and $\beta_i$ are constants, $i=1,\ldots,n$.
The problem has the following constraints:
$$ \sum\limits_{i=1}^{n}x_i=C\\ x_i^\min \leq x_i \leq x_i^\max\\ 0 \leq y_i \leq y_i^\max\\ 0 \leq x_iy_i \leq (x_iy_i)^\max $$ $C,x_i^\min, x_i^\max,y_i^\max,(x_iy_i)^\max$ being some fixed constants, $i=1,\ldots,n$.
A two-variable ($n=2$) example is as follows:
Objective:
$$ \min 2x_1+4x_1y_1+3x_2+5x_2y_2 $$
where
$$ [y_1,y_2]=\mathcal N([x_1,x_2]) $$
and $\alpha_1=2,\alpha_2=3,\beta_1=4,\beta_2=5$ according to the previous definitions.
Constraints:
$$ x_1+x_2 = 10\\ 0\leq x_1\leq 10\\ 5\leq x_2 \leq 10\\ 0\leq y_1\leq 5\\ 0\leq y_2\leq 10\\ 0\leq x_1y_1\leq 50\\ 0\leq x_2y_2\leq 50\\ $$
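For concreteness, here is a minimal sketch (not a recommendation of any particular algorithm) that encodes this two-variable example as a penalized objective and probes it with plain random search; simulate_network is a hypothetical stand-in for the real simulator $\mathcal N$, and the penalty weight is an arbitrary choice.

import random

ALPHA, BETA = [2.0, 3.0], [4.0, 5.0]
C = 10.0
X_BOUNDS = [(0.0, 10.0), (5.0, 10.0)]
Y_MAX = [5.0, 10.0]
XY_MAX = [50.0, 50.0]

def simulate_network(x):
    # Hypothetical stand-in for the blackbox y = N(x); the real one is a simulation.
    return [0.5 * xi for xi in x]

def penalized_objective(x, penalty=1e6):
    y = simulate_network(x)
    f = sum(a * xi + b * xi * yi for a, b, xi, yi in zip(ALPHA, BETA, x, y))
    # Add a large penalty for violating the constraints on y_i and x_i*y_i.
    viol = 0.0
    for xi, yi, y_max, xy_max in zip(x, y, Y_MAX, XY_MAX):
        viol += max(0.0, -yi) + max(0.0, yi - y_max)
        viol += max(0.0, -xi * yi) + max(0.0, xi * yi - xy_max)
    return f + penalty * viol

def random_feasible_x():
    # Sample x1 in its box, force x1 + x2 = C, reject if x2 leaves its box.
    while True:
        x1 = random.uniform(*X_BOUNDS[0])
        x2 = C - x1
        if X_BOUNDS[1][0] <= x2 <= X_BOUNDS[1][1]:
            return [x1, x2]

best_x, best_f = None, float("inf")
for _ in range(10000):
    x = random_feasible_x()
    fx = penalized_objective(x)
    if fx < best_f:
        best_x, best_f = x, fx
print(best_x, best_f)

Any of the algorithms mentioned below (simulated annealing, a genetic algorithm, a surrogate-model optimizer) would slot in where the random sampling is.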
I have gone through some of the stochastic algorithms such as
simulated annealing, hill climbing, evolutionary algorithms like genetic algorithms, and so on. And I'm also aware that my problem, containing a blackbox function within itself and subject to a number of constraints, has a high chance of falling into the ugly world of the "No Free Lunch Theorem".
Are there any appropriate algorithms or suggestions that can solve such a problem? |
Definition: Pointwise Addition of Mappings
Let $\left({G, \circ}\right)$ be an algebraic structure and let $S$ be a set. Let $G^S$ be the set of all mappings from $S$ to $G$.
Then pointwise addition on $G^S$ is the binary operation $\circ: G^S \times G^S \to G^S$ (the $\circ$ is the same as for $G$) defined by: $\forall f,g \in G^S: \forall s \in S: \left({f \circ g}\right) \left({s}\right) := f \left({s}\right) \circ g \left({s}\right)$
The double use of $\circ$ is justified as $\left({G^S, \circ}\right)$ inherits all abstract-algebraic properties $\left({G, \circ}\right)$ might have.
This is rigorously formulated and proved on Mappings to Algebraic Structure form Similar Algebraic Structure.
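As a small illustration (not part of the ProofWiki page), the same idea in code with $S = G = \Bbb Z$ and ordinary addition playing the role of $\circ$:

from operator import add

def pointwise(op, f, g):
    # The mapping s -> f(s) op g(s), i.e. (f op g)(s) := f(s) op g(s).
    return lambda s: op(f(s), g(s))

f = lambda s: s * s      # a mapping Z -> Z
g = lambda s: 3 * s + 1  # another mapping Z -> Z
h = pointwise(add, f, g) # their pointwise sum
assert h(4) == f(4) + g(4) == 29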
Pointwise Multiplication
Let $\circ$ be used with multiplicative notation.
Then the operation defined above is called pointwise multiplication instead.
Examples: Pointwise Addition of Real-Valued Functions, Pointwise Addition of Extended Real-Valued Functions, Pointwise Multiplication of Real-Valued Functions, Pointwise Multiplication of Extended Real-Valued Functions |
Quasi In Situ Growth Observation and Precipitation Kinetics Assessment of the β-Mn Phase in Fe-30Mn-9Al-1C Steel
Abstract
This paper investigated the microstructure evolution of cold-rolled Fe-30Mn-9Al-1C steel at various heat treatment temperatures and found that the β-Mn phase could rapidly precipitate at moderate temperatures. A method of quasi in situ observation was used to observe the precipitation of the intergranular and intragranular β-Mn phase. A transmission electron microscopy analysis showed that the transition band can act as a rapid precipitation passage of the β-Mn phase. Intragranular κ-carbides caused the β-Mn phase to precipitate without the formation of the α-ferrite phase. The precipitation reaction, \( \gamma \xrightarrow{\kappa} \beta\text{-Mn} + \gamma^{\prime} \), was verified by estimating the Gibbs free energy change to be −46,741 J/mol according to the Fe-Mn-Al-C quaternary thermodynamics model. The strain storage energy at the deformation band boundaries can be up to 55% according to the kinetics calculation, which supported the priority of the transition band in the rapid precipitation of the β-Mn phase.
Acknowledgements
We appreciate Prof. Youn-Bae Kang for the instruction about the thermodynamic calculations. This work was supported by the Key Scientific Research Project in Shanxi Province (Grant Nos. MC2016-06, 201603D111004, 20181101014 and 201805D121003), Research Project Supported by Shanxi Scholarship Council of China (2017-029), and Patent Promotion and Implement Found of Shanxi Province (20171003).
|
But if you don't want to have a Google account: Chrome is really good. Much faster than FF (I can't run FF on either of the laptops here) and more reliable (if it crashes, it restores your previous session with 100% certainty).
And Chrome has a Personal Blocklist extension which does what you want.
: )
Of course you already have a Google account but Chrome is cool : )
Guys, I feel a little defeated in trying to understand infinitesimals. I'm sure you all think this is hilarious. But if I can't understand this, then I'm yet again stalled. How did you guys come to terms with them, later in your studies?
do you know the history? Calculus was invented based on the notion of infinitesimals. There were serious logical difficulties found in it, and a new theory was developed based on limits. In modern times, using some quite deep ideas from logic, a new rigorous theory of infinitesimals was created.
@QED No. This is my question as best as I can put it: I understand that lim_{x->a} f(x) = f(a), but then to say that the gradient of the tangent curve is some value, is like saying that when x=a, then f(x) = f(a). The whole point of the limit, I thought, was to say, instead, that we don't know what f(a) is, but we can say that it approaches some value.
I have a problem with showing that the limit of the following expression $$\frac{\sqrt{\frac{3 \pi}{2n}} -\int_0^{\sqrt 6}(1-\frac{x^2}{6}+\frac{x^4}{120})^n dx}{\frac{3}{20}\frac 1n \sqrt{\frac{3 \pi}{2n}}}$$ equals $1$ as $n \to \infty$.
@QED When I said, "So if I'm working with function f, and f is continuous, my derivative dy/dx is by definition not continuous, since it is undefined at dx=0." I guess what I'm saying is that (f(x+h)-f(x))/h is not continuous since it's not defined at h=0.
@KorganRivera There are lots of things wrong with that: dx=0 is wrong. dy/dx - what's y? "dy/dx is by definition not continuous" - it's not a function, so how can you ask whether or not it's continuous, ... etc.
In general this stuff with 'dy/dx' is supposed to help as some kind of memory aid, but since there's no rigorous mathematics behind it - all it's going to do is confuse people
in fact there was a big controversy about it since using it in obvious ways suggested by the notation leads to wrong results
@QED I'll work on trying to understand that the gradient of the tangent is the limit, rather than the gradient of the tangent approaches the limit. I'll read your proof. Thanks for your help. I think I just need some sleep. O_O
@NikhilBellarykar Either way, don't highlight everyone and ask them to check out some link. If you have a specific user which you think can say something in particular feel free to highlight them; you may also address "to all", but don't highlight several people like that.
@NikhilBellarykar No. I know what the link is. I have no idea why I am looking at it, what should I do about it, and frankly I have enough as it is. I use this chat to vent, not to exercise my better judgment.
@QED So now it makes sense to me that the derivative is the limit. What I think I was doing in my head was saying to myself that g(x) isn't continuous at x=h so how can I evaluate g(h)? But that's not what's happening. The derivative is the limit, not g(h).
@KorganRivera, in that case you'll need to be proving $\forall \varepsilon > 0,\,\,\,\, \exists \delta,\,\,\,\, \forall x,\,\,\,\, 0 < |x - a| < \delta \implies |f(x) - L| < \varepsilon.$ by picking some correct L (somehow)
Hey guys, I have a short question a friend of mine asked me which I cannot answer because I have not learnt about measure theory (or whatever is needed to answer the question) yet. He asks what is wrong with $\int_0^{2 \pi} \frac{d}{dn} e^{inx} dx$ when he applies Lebesgue's dominated convergence theorem, because apparently, if he first integrates and then differentiates, the result is 0, but if he first differentiates and then integrates, it's not 0. Does anyone know? |
(Sorry was asleep at that time but forgot to log out, hence the apparent lack of response)
Yes you can (since $k=\frac{2\pi}{\lambda}$). To convert from path difference to phase difference, divide by k, see this PSE for details
http://physics.stackexchange.com/questions/75882/what-is-the-difference-between-phase-difference-and-path-difference |
TL;DR: Unless you assume people are unreasonably bad at judging car color, or that blue cars are unreasonably rare, the large number of people in your example means the probability that the car is blue is basically 100%.
Matthew Drury already gave the right answer but I'd just like to add to that with some numerical examples, because you chose your numbers such that you actually get pretty similar answers for a wide range of different parameter settings. For example, let's assume, as you said in one of your comments, that the probability that people judge the color of a car correctly is 0.9. That is: $$p(\text{say it's blue}|\text{car is blue})=0.9=1-p(\text{say it isn't blue}|\text{car is blue})$$and also$$p(\text{say it isn't blue}|\text{car isn't blue})=0.9=1-p(\text{say it is blue}|\text{car isn't blue})$$
Having defined that, the remaining thing we have to decide is: what is the prior probability that the car is blue? Let's pick a very low probability just to see what happens, and say that $p(\text{car is blue})=0.001$, i.e. only 0.1% of all cars are blue. Then the posterior probability that the car is blue can be calculated as:
\begin{align*}&p(\text{car is blue}|\text{answers})\\&=\frac{p(\text{answers}|\text{car is blue})\,p(\text{car is blue})}{p(\text{answers}|\text{car is blue})\,p(\text{car is blue})+p(\text{answers}|\text{car isn't blue})\,p(\text{car isn't blue})}\\&=\frac{0.9^{900}\times 0.1^{100}\times0.001}{0.9^{900}\times 0.1^{100}\times0.001+0.1^{900}\times0.9^{100}\times0.999}\end{align*}
If you look at the denominator, it's pretty clear that the second term in that sum will be negligible, since the relative size of the terms in the sum is dominated by the ratio of $0.9^{900}$ to $0.1^{900}$, which is on the order of $10^{58}$. And indeed, if you do this calculation on a computer (taking care to avoid numerical underflow issues) you get an answer that is equal to 1 (within machine precision).
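(For the curious, a sketch of that computation done in log space to dodge the underflow; the counts 900 and 100, the accuracy 0.9 and the prior 0.001 are the numbers used above.)

import math

def log_posterior_blue(n_blue, n_not, acc, prior_blue):
    # Log-likelihood of the answers under each hypothesis.
    log_like_blue = n_blue * math.log(acc) + n_not * math.log(1 - acc)
    log_like_not = n_blue * math.log(1 - acc) + n_not * math.log(acc)
    a = log_like_blue + math.log(prior_blue)
    b = log_like_not + math.log(1 - prior_blue)
    # log-sum-exp for the denominator of Bayes' rule.
    m = max(a, b)
    log_evidence = m + math.log(math.exp(a - m) + math.exp(b - m))
    return a - log_evidence, log_like_blue - log_like_not

log_post, log_lr = log_posterior_blue(900, 100, 0.9, 0.001)
print(math.exp(log_post))     # posterior ~ 1.0 (up to machine precision)
print(log_lr / math.log(10))  # log10 of the likelihood ratio ~ 763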
The reason the prior probabilities don't really matter much here is because you have so much evidence for one possibility (the car is blue) versus another. This can be quantified by the
likelihood ratio, which we can calculate as:$$\frac{p(\text{answers}|\text{car is blue})}{p(\text{answers}|\text{car isn't blue})}=\frac{0.9^{900}\times 0.1^{100}}{0.1^{900}\times 0.9^{100}}\approx 10^{763}$$
So before even considering the prior probabilities, the evidence suggests that one option is already astronomically more likely than the other, and for the prior to make any difference, blue cars would have to be unreasonably, stupidly rare (so rare that we would expect to find 0 blue cars on earth).
So what if we change how accurate people are in their descriptions of car color? Of course, we could push this to the extreme and say they get it right only 50% of the time, which is no better than flipping a coin. In this case, the posterior probability that the car is blue is simply equal to the prior probability, because the people's answers told us nothing. But surely people do at least a little better than that, and even if we say that people are accurate only 51% of the time, the likelihood ratio still works out such that it is roughly $10^{13}$ times more likely for the car to be blue.
This is all a result of the rather large numbers you chose in your example. If it had been 9/10 people saying the car was blue, it would have been a very different story, even though the same ratio of people were in one camp vs. the other, because statistical evidence doesn't depend on this ratio, but rather on the numerical difference between the opposing factions. In fact, in the likelihood ratio (which quantifies the evidence), the 100 people who say the car isn't blue exactly cancel 100 of the 900 people who say it is blue, so it's the same as if you had 800 people all agreeing it was blue. And that's obviously pretty clear evidence.
(Edit: As Silverfish pointed out, the assumptions I made here actually implied that whenever a person describes a non-blue car incorrectly, they will default to saying it's blue. This isn't realistic of course, because they could really say any color, and will say blue only some of the time. This makes no difference to the conclusions though, since the less likely people are to mistake a non-blue car for a blue one, the stronger the evidence that it is blue when they say it is. So if anything, the numbers given above are actually only a lower bound on the pro-blue evidence.) |
I) OP wrote (v2):
I know that $\mathcal{L}$ is a functional not a function.
Actually for local theories the Lagrangian density $$ \mathcal{L}(\phi(x), \partial\phi(x), \partial^2\phi(x), \ldots; x) $$ is a function of $\phi(x)$, $\partial\phi(x)$, $\partial^2\phi(x)$, $\ldots$, and $x$. In contrast, the action $S[\phi]=\int \!d^dx~\mathcal{L}$ is a functional of $\phi$, cf. e.g. this Phys.SE post.
II) OP wrote (v2):
The Lagrangian density of a Poincare invariant theory should not depend
explicitly on the space-time coordinates. Does this mean
$$\tag{1} \partial_{\mu} \mathcal{L}~=~0~? $$
If $\partial_\mu$ in OP's notation denotes explicit space-time differentiation $\frac{\partial}{\partial x^{\mu}}$, then the answer is Yes. Note however that there is also implicit dependence on the space-time coordinates via the fields $\phi$. The total space-time derivative becomes
$$ d_{\mu}\mathcal{L}~=~\partial_{\mu} \mathcal{L} + \frac{\partial\mathcal{L}}{\partial\phi}\partial_{\mu}\phi+ \frac{\partial\mathcal{L}}{\partial(\partial_{\nu}\phi)}\partial_{\nu}\partial_{\mu}\phi + \ldots . $$
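A concrete illustration (a free scalar field, added here for definiteness): for
$$ \mathcal{L}~=~\tfrac{1}{2}\,\partial_{\nu}\phi\,\partial^{\nu}\phi-\tfrac{1}{2}m^2\phi^2 $$
there is no explicit $x$-dependence, so $\partial_{\mu} \mathcal{L}~=~0$, while the total derivative
$$ d_{\mu}\mathcal{L}~=~-m^2\phi\,\partial_{\mu}\phi+\partial^{\nu}\phi\,\partial_{\nu}\partial_{\mu}\phi $$
is generally nonzero, because the fields carry implicit $x$-dependence.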
III) Translational invariance (1) implies via Noether's theorem that the corresponding energy-momentum 4-vector $P_{\mu}$ is conserved. |
And I think people said that reading the first chapter of Do Carmo mostly fixed the problems in that regard. The only person I asked about the second pset said that his main difficulty was in solving the ODEs
Yeah here there's the double whammy in grad school that every grad student has to take the full year of algebra/analysis/topology, while a number of them already don't care much for some subset, and then they only have to pass the class rather than do well in it
I know 2 years ago apparently it mostly avoided commutative algebra, half because the professor himself doesn't seem to like it that much and half because he was like yeah the algebraists all place out so I'm assuming everyone here is an analyst and doesn't care about commutative algebra
Then the year after another guy taught and made it mostly commutative algebra + a bit of varieties + Cech cohomology at the end from nowhere and everyone was like uhhh. Then apparently this year was more of an experiment, in part from requests to make things more geometric
It's got 3 "underground" floors (quotation marks because the place is on a very tall hill so the first 3 floors are a good bit above the street), and then 9 floors above ground. The grad lounge is on the top floor and overlooks the city and lake, it's real nice
The basement floors have the library and all the classrooms (each of them has a lot more area than the higher ones), floor 1 is basically just the entrance, I'm not sure what's on the second floor, 3-8 is all offices, and 9 has the grad lounge mainly
And then there's one weird area called the math bunker that's trickier to access, you have to leave the building from the first floor, head outside (still walking on the roof of the basement floors), go to this other structure, and then get in. Some number of grad student cubicles are there (other grad students get offices in the main building)
It's hard to get a feel for which places are good at undergrad math. Highly ranked places are known for having good researchers but there's no "How well does this place teach?" ranking which is kinda more relevant if you're an undergrad
I think interest might have started the trend, though it is true that grad admissions now is starting to make it closer to an expectation (friends of mine say that for experimental physics, classes and all definitely don't cut it anymore)
In math I don't have a clear picture. It seems there are a lot of Mickey Mouse projects that don't seem to help people much, but more and more people seem to do more serious things and that seems to be becoming a bonus
One of my professors said it to describe a bunch of REUs, basically boils down to problems that some of these give their students which nobody really cares about but which undergrads could work on and get a paper out of
@TedShifrin i think universities have been ostensibly a game of credentialism for a long time, they just used to be gated off to a lot more people than they are now (see: ppl from backgrounds like mine) and now that budgets shrink to nothing (while administrative costs balloon) the problem gets harder and harder for students
In order to show that $x=0$ is asymptotically stable, one needs to show that $$\forall \varepsilon > 0, \; \exists\, T > 0 \; \mathrm{s.t.} \; t > T \implies || x ( t ) - 0 || < \varepsilon.$$The intuitive sketch of the proof is that one has to fit a sublevel set of continuous functions $...
"If $U$ is a domain in $\Bbb C$ and $K$ is a compact subset of $U$, then for all holomorphic functions on $U$, we have $\sup_{z \in K}|f(z)| \leq C_K \|f\|_{L^2(U)}$ with $C_K$ depending only on $K$ and $U$" this took me way longer than it should have
Well, $A$ has these two distinct eigenvalues, meaning that $A$ can be diagonalised to a diagonal matrix with these two values as its diagonal. What will that mean when multiplied with a given vector $(x,y)$, and how will the magnitude of that vector change?
Alternately, compute the operator norm of $A$ and see if it is larger or smaller than 2, 1/2
Generally speaking, given $\alpha=a+b\sqrt{\delta}$, $\beta=c+d\sqrt{\delta}$, we have that multiplication (which I am writing as $\otimes$) is $\alpha\otimes\beta=(a\cdot c+b\cdot d\cdot\delta)+(b\cdot c+a\cdot d)\sqrt{\delta}$
Yep, the reason I am exploring alternative routes of showing associativity is because writing out three elements worth of variables is taking up more than a single line in Latex, and that is really bugging my desire to keep things straight.
hmm... I wonder if you can argue about the rationals forming a ring (hence using commutativity, associativity and distributivity). You cannot do that for the field you are calculating, but you might be able to take shortcuts by using the multiplication rule and then properties of the ring $\Bbb{Q}$
for example writing $x = ac+bd\delta$ and $y = bc+ad$ we then have $(\alpha \otimes \beta) \otimes \gamma = (xe +yf\delta) + (ye + xf)\sqrt{\delta}$ and then you can argue with the ring property of $\Bbb{Q}$ thus allowing you to deduce $\alpha \otimes (\beta \otimes \gamma)$
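If it helps, here is a quick exact-arithmetic sanity check of associativity of that multiplication rule on a few sample triples (a check, not a proof; the value of $\delta$ is an arbitrary choice):

from fractions import Fraction as Q
from itertools import product

DELTA = Q(5)  # any fixed rational delta works for this check

def mul(u, v):
    # (a + b*sqrt(delta)) * (c + d*sqrt(delta)), stored as coefficient pairs
    (a, b), (c, d) = u, v
    return (a * c + b * d * DELTA, b * c + a * d)

samples = [(Q(1, 2), Q(-3)), (Q(2), Q(7, 3)), (Q(-5, 4), Q(1))]
for x, y, z in product(samples, repeat=3):
    assert mul(mul(x, y), z) == mul(x, mul(y, z))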
I feel like there's a vague consensus that an arithmetic statement is "provable" if and only if ZFC proves it. But I wonder what makes ZFC so great, that it's the standard working theory by which we judge everything.
I'm not sure if I'm making any sense. Let me know if I should either clarify what I mean or shut up. :D
Associativity proofs in general have no shortcuts for arbitrary algebraic systems; that is why non-associative algebras are more complicated and need things like Lie algebra machinery and morphisms to make sense of them
One aspect, which I will illustrate, of the "push-button" efficacy of Isabelle/HOL is its automation of the classic "diagonalization" argument by Cantor (recall that this states that there is no surjection from the naturals to its power set, or more generally any set to its power set).theorem ...
The axiom of triviality is also used extensively in computer verification languages... take Cantor's Diagonalization theorem. It is obvious.
(but seriously, the best tactic is over powered...)
Extension is such a powerful idea. I wonder if there exists an algebraic structure such that any extension of it will produce a contradiction. Oh wait, there are maximal algebraic structures such that, given some ordering, they are the largest possible, e.g. the surreals are the largest ordered field possible
It says on Wikipedia that any ordered field can be embedded in the Surreal number system. Is this true? How is it done, or if it is unknown (or unknowable) what is the proof that an embedding exists for any ordered field?
Here's a question for you: We know that no set of axioms will ever decide all statements, from Gödel's Incompleteness Theorems. However, do there exist statements that cannot be decided by any set of axioms except ones which contain one or more axioms dealing directly with that particular statement?
"Infinity exists" comes to mind as a potential candidate statement.
Well, take ZFC as an example, CH is independent of ZFC, meaning you cannot prove nor disprove CH using anything from ZFC. However, there are many equivalent axioms to CH or derives CH, thus if your set of axioms contain those, then you can decide the truth value of CH in that system
@Rithaniel That is really the crux of those rambles about infinity I made in this chat some weeks ago. I wonder whether one can show that is false by finding a finite sentence and procedure that can produce infinity
but so far failed
Put it in another way, an equivalent formulation of that (possibly open) problem is:
> Does there exist a computable proof verifier P such that the axiom of infinity becomes a theorem without assuming the existence of any infinite object?
If you were to show that you can attain infinity from finite things, you'd have a bombshell on your hands. It's widely accepted that you can't. In fact, I believe there are some proofs floating around that you can't attain infinity from the finite.
My philosophy of infinity however is not good enough as implicitly pointed out when many users who engaged with my rambles always managed to find counterexamples that escape every definition of an infinite object I proposed, which is why you don't see my rambles about infinity in recent days, until I finish reading that philosophy of infinity book
The knapsack problem or rucksack problem is a problem in combinatorial optimization: Given a set of items, each with a weight and a value, determine the number of each item to include in a collection so that the total weight is less than or equal to a given limit and the total value is as large as possible. It derives its name from the problem faced by someone who is constrained by a fixed-size knapsack and must fill it with the most valuable items.The problem often arises in resource allocation where there are financial constraints and is studied in fields such as combinatorics, computer science...
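For reference, a minimal sketch of the textbook dynamic-programming recurrence for the variant just described, where any item may be taken repeatedly; the item weights, values and capacity are made-up illustrative numbers:

def unbounded_knapsack(capacity, items):
    # items: list of (weight, value) pairs; best[w] = max value with total weight <= w.
    best = [0] * (capacity + 1)
    for w in range(1, capacity + 1):
        for weight, value in items:
            if weight <= w:
                best[w] = max(best[w], best[w - weight] + value)
    return best[capacity]

assert unbounded_knapsack(10, [(3, 4), (4, 6), (5, 7)]) == 14  # e.g. weights 3 + 3 + 4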
O great, given a transcendental $s$, computing $\min_P(|P(s)|)$ is a knapsack problem
hmm...
By the fundamental theorem of algebra, every complex polynomial $P$ can be expressed as:
$$P(x) = a_n\prod_{k=1}^n (x - \lambda_k)$$
If the coefficients of $P$ are natural numbers, then all $\lambda_k$ are algebraic
Thus given $s$ transcendental, to minimise $|P(s)|$ will be given as follows:
The first thing I think of with that particular one is to replace the $(1+z^2)$ with $z^2$. Though, this is just at a cursory glance, so it would be worth checking to make sure that such a replacement doesn't have any ugly corner cases.
In number theory, a Liouville number is a real number x with the property that, for every positive integer n, there exist integers p and q with q > 1 and such that $0 < \left|x - \frac{p}{q}\right| < \frac{1}{q^n}$.
Do these still exist if the axiom of infinity is blown up?
Hmmm...
Under a finitist framework where only potential infinity in the form of natural induction exists, define the partial sum:
$$\sum_{k=1}^M \frac{1}{b^{k!}}$$
The resulting partial sums for each $M$ form a monotonically increasing sequence, which converges by the ratio test
therefore by induction, there exists some number $L$ that is the limit of the above partial sums. The proof of transcendence can then proceed as usual, thus transcendental numbers can be constructed in a finitist framework
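To illustrate (a numerical aside of mine, with $b=10$, i.e. the partial sums of Liouville's constant), the finitely many terms can be summed exactly:

from fractions import Fraction
from math import factorial

def partial_sum(M, b=10):
    # Sum of 1/b^(k!) for k = 1..M, as an exact rational.
    return sum(Fraction(1, b ** factorial(k)) for k in range(1, M + 1))

for M in range(1, 5):
    print(M, partial_sum(M))  # 1/10, 11/100, 110001/1000000, ...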
There's this theorem in Spivak's book of Calculus:Theorem 7Suppose that $f$ is continuous at $a$, and that $f'(x)$ exists for all $x$ in some interval containing $a$, except perhaps for $x=a$. Suppose, moreover, that $\lim_{x \to a} f'(x)$ exists. Then $f'(a)$ also exists, and$$f'...
and neither Rolle nor mean value theorem need the axiom of choice
Thus under finitism, we can construct at least one transcendental number. If we throw away all transcendental functions, it means we can construct a number that cannot be reached from any algebraic procedure
Therefore, the conjecture is that actual infinity has a close relationship to transcendental numbers. For anything else, I need to finish reading that book before commenting
typo: neither Rolle nor mean value theorem need the axiom of choice nor an infinite set
> are there palindromes such that the explosion of palindromes is a palindrome nonstop palindrome explosion palindrome prime square palindrome explosion palirome prime explosion explosion palindrome explosion cyclone cyclone cyclone hurricane palindrome explosion palindrome palindrome explosion explosion cyclone clyclonye clycone mathphile palirdlrome explosion rexplosion palirdrome expliarome explosion exploesion |
Given the function
$g(n,m)=\min\Big\{f(a,b)+f(n-a,c)+f(n,m-bc)\Big|\\a,b,c\ \ \text{with} \left\{\begin{matrix} a,\ b,\ n-a,\ c,\ m-bc \geq 0 \\ b\leq a! \\ c\leq (n-a)! \\ \end{matrix}\right. \Big\} $
Assuming that $n,m\geq 0,\ ((\lceil n/2\rceil)!)^2\leq m\leq n!,\ f(n,m)=\Omega (n)$,
is it true that $g(n,m) \geq 2f(\lfloor n/2\rfloor ,(\lfloor n/2\rfloor)!)+f(n,m-((\lceil n/2\rceil)!)^2)$ ?
I tried KKT conditions, but can't derive this (as it contains factorials).
Also, it seems that the condition $f(n,m)=\Omega (n)$ implies that $f$ is convex on our domain (and thus satisfies the regularity condition for using KKT), but I managed to prove it only when $f$ is a polynomial.
So I am fully stuck on this...
Any help would be highly appreciated! |
I found another mistake in Romer's "proof", but I also got some (wrong-headed) pushback on the econ job rumors forum about my take on Paul Romer's mathiness. Here's the comment:
Yeah, [this post] is complete garbage. Mathematically, it's an issue of pointwise vs. uniform convergence, plain and simple. Economically, Romer says that Lucas + Moll don't get endogenous growth because of diffusion. Rather, they get endogenous growth because the "knowledge frontier" is already unbounded. Any empirical observations (which necessarily occur at finite T) depend critically on the rate of knowledge arrival. This guy just does not get it.
I think this shows us a bit where economics has gone off the rails with the mathiness. Economics should never refer to uniform vs pointwise convergence. This is a topic from real analysis [0]. If uniform vs pointwise convergence matters in economics, you're doing it wrong.
And it's not even a correct assessment -- it's actually a case of almost uniform convergence (the set of points where the limit doesn't converge uniformly has measure zero, and is in fact exactly the point $\beta = 0$).
An example from physics might help make this a bit clearer. Let's say we have a model of all possible collections of nuclei that decay via $\alpha$ decay with time constant $\tau$ so that the number of your original nuclei left at time t is given by
$$
N_{\tau}(t) = N_{0} e^{-t/\tau}
$$
This function has the essential properties of Lucas and Moll's growth function $g_{\beta}(t)$ where basically $1/\beta$ is now $\tau$.
$$
\lim_{\tau \rightarrow \infty} N_{\tau}(T) = N_{0}
$$
$$
\lim_{t \rightarrow \infty} N_{\tau}(t) = 0
$$
The English versions of these two limits:
If the nuclei are stable, they never decay. Unstable nuclei always eventually decay.
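A quick numerical illustration of why the order of the two limits matters (my own aside, taking $N_0 = 1$):

import math

def N(tau, t, N0=1.0):
    return N0 * math.exp(-t / tau)

# "tau -> infinity first": at any fixed t, a huge tau keeps N near N0 = 1.
print(N(tau=1e9, t=1e3))  # ~ 1.0
# "t -> infinity first": at any fixed tau, a huge t drives N to 0.
print(N(tau=1e3, t=1e9))  # ~ 0.0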
Lucas and Moll (2014) essentially is a model of decaying nuclei and their limit is the second one. They don't care about stable nuclei (economies where nothing happens), they care about something else (economies where everything has already happened [1]). Probably why they ignored Romer's comment! I'd ignore it too. And send back a pithy comment.
"Any empirical observations (which necessarily occur at finite T) depend critically on the rate of knowledge arrival."
That is exactly my point! They also critically depend on finite $\beta$, so why are you taking any limits at all? The whole concept of choosing T only makes sense relative to $\beta$. And the model only exists at finite $\beta$, so whatever limit you are taking has to be relative to $\beta$.
This entire argument might have been avoided by a simple $\beta > 0$ statement. But then, we get to Paul Romer's second mistake. He apparently notices that $\beta > 0$. He takes a limit that depends critically on a point he himself says doesn't exist as a point in the space!
His mathy proposition says that $\beta \in (0, \tilde{\beta})$, not $\beta \in [0, \tilde{\beta})$, so the limit $\beta \rightarrow 0$ takes $\beta$ to be arbitrarily small, but not zero. One notation for that is $0^{+}$ and that limit is uniformly convergent. You can exchange the order of limits under this condition:
$$
\lim_{t \rightarrow \infty} \lim_{\beta \rightarrow 0^{+}} g_{\beta}(t) = \lim_{\beta \rightarrow 0^{+}} \lim_{t \rightarrow \infty} g_{\beta}(t) = \gamma
$$
All of this nonsense could have been avoided by lightening up on the real analysis and not forgetting that we are trying to describe a real-life system!
Footnotes:
[0] I know. In addition to physics, I have a math degree too. I've been through that punishment.
[1] Not sure why they want to do this -- the interesting bit is where $T \sim 1/\beta$. And if this is what Romer is objecting to, why do it with the silly proof and the argument about limits? Just say the economy Lucas and Moll describe doesn't make economic sense -- not that they didn't properly account for the order of limits. |
Discrete & Continuous Dynamical Systems - B
March 2007, Volume 7, Issue 2
ISSN: 1531-3492; eISSN: 1553-524X
Abstract:
In this paper, we analyze theoretically an age structured population model with cannibalism. The model is nonlinear in that cannibalism decreases the birth rate based on total population density. We use degree theory to prove the existence of a unique solution. We also investigate the asymptotic stability of the solutions, and prove, under special hypotheses, local and global attractivity of a unique nontrivial steady state. We convert the problem to a delay differential equation and prove that quasiconvergence leads to global attraction. Some numerical simulations are presented exhibiting sustained oscillations which may occur when the hypotheses of the theoretical analysis are not satisfied.
Abstract:
In this paper we propose the analysis of the incompressible non-homogeneous Navier-Stokes equations with nonlinear outflow boundary condition. This kind of boundary condition appears to be, in some situations, a useful way to perform numerical computations of the solution to the unsteady Navier-Stokes equations when the Dirichlet data are not given explicitly by the physical context on a part of the boundary of the computational domain. The boundary condition we propose, following previous works in the homogeneous case, is a relationship between the normal component of the stress and the outflow momentum flux taking into account inertial effects. We prove the global existence of a weak solution to this model both in 2D and 3D. In particular, we show that the nonlinear boundary condition under study holds for such a solution in a weak sense, even though the normal component of the stress and the density may not have traces in the usual sense.
Abstract:
This paper is devoted to the study of travelling wave solutions for a simple epidemic model. This model consists in a single scalar equation with age-dependence and spatial structure. We prove the existence of travelling waves for a continuum of admissible wave speeds as well as some qualitative properties, like exponential decay and monotonicity with respect to the direction of front's propagation. Our proofs extensively use the comparison principle that allows us to construct suitable sub and super-solutions or to use the classical sliding method to obtain qualitative properties of the wave front.
Abstract:
The standard Melnikov method for analyzing the onset of chaos in the vicinity of a separatrix is used to explore the possibility of suppression of chaos of a certain class of dynamical systems. For a given dynamical system we apply an external perturbation, which we call the stabilizing perturbation, with the goal that after its action the chaos present in the system is suppressed. We apply this method to the nonlinear pendulum as a paradigm, and obtain some analytical expressions for the corresponding external perturbations that eliminate chaotic behavior. Numerical simulations in the pendulum show a complete agreement with the analytical results.
Abstract:
In this paper, we study a two-dimensional Burgers--Korteweg-de Vries-type equation with higher-order nonlinearities. A class of solitary wave solution is obtained by means of the Divisor Theorem which is based on the ring theory of commutative algebra. Our result indicates that the presentation of traveling wave solution in [J. Phys. A (Math. Gen.) 35 (2002) 8253--8265] is incorrect; an explanation as to why this is so is given.
Abstract:
In systems governing two-dimensional turbulence, surface quasi-geostrophic turbulence (more generally $\alpha$-turbulence), two-layer quasi-geostrophic turbulence, etc., there often exist two conservative quadratic quantities, one "energy"-like and one "enstrophy"-like. In a finite inertial range there are in general two spectral fluxes, one associated with each conserved quantity. We derive here an inequality comparing the relative magnitudes of the "energy" and "enstrophy" fluxes for finite or infinitesimal dissipations, and for hyper- or hypo-viscosities. When this inequality is satisfied, as is the case of 2D turbulence, where the energy flux contribution to the energy spectrum is small, the subdominant part will be effectively hidden. In sQG turbulence, it is shown that the opposite is true: the downscale energy flux becomes the dominant contribution to the energy spectrum. A combination of these two behaviors appears to be the case in 2-layer QG turbulence, depending on the baroclinicity of the system.
Abstract:
In this paper we give a rigorous mathematical proof of the instability of stationary radial flame ball solutions of a three dimensional free boundary model which models the combustion of a gaseous mixture with dust in a microgravity environment.
Abstract:
In this paper, we introduce a class of one-dimensional non-autonomous dynamical systems that allow an explicit study of their orbits, of the associated variational equations as well as of certain types of bifurcations. In a special case, the model class can be transformed into the non-autonomous Beverton-Holt equation. We use these model functions for analyzing various notions of non-autonomous transcritical and pitchfork bifurcations that have been recently proposed in the literature.
Abstract:
An age structured $s$-$i$-$s$ epidemic model with random diffusion is studied. The model is described by the system of nonlinear and nonlocal integro-differential equations. Finite differences along the characteristics in age-time domain combined with Galerkin finite elements in spatial domain are used in the approximation. It is shown that a positive periodic solution to the discrete system resulting from the approximation can be generated, if the initial condition is fertile. It is proved that the endemic periodic solution is globally stable once it exists.
Abstract:
We discuss the applicability of Kolmogorov's theorem on existence of invariant tori to the real Sun-Jupiter-Saturn system. Using computer algebra, we construct a Kolmogorov's normal form defined in a neighborhood of the actual orbit in the phase space, giving a sharp evidence of the convergence of the algorithm. If not a rigorous proof, we consider our calculation as a strong indication that Kolmogorov's theorem applies to the motion of the two biggest planets of our solar system.
Abstract:
A family of delay-differential models of the glucose-insulin system is introduced, whose members represent adequately the Intra-Venous Glucose Tolerance Test and allied experimental procedures of diabetological interest. All the models in the family admit positive bounded unique solutions for any positive initial condition and are persistent. The models agree with the physics underlying the experiments, and they all present a unique positive equilibrium point.
Local stability is investigated in a pair of interesting member models: one, a discrete-delays differential system; the other, a distributed-delay system reducing to an ordinary differential system evolving on a suitably defined extended state space. In both cases conditions are given on the physical parameters in order to ensure the local asymptotic stability of the equilibrium point. These conditions are always satisfied, given the actual parameter estimates obtained experimentally. A study of the global stability properties is performed, but while from simulations it could be conjectured that the models considered are globally asymptotically stable, sufficient stability criteria, formally derived, are not actually satisfied for physiological parameter values. Given the practical importance of the models studied, further analytical work may be of interest to conclusively characterize their behavior.
Abstract:
We show that in the limit of small Rossby number $\varepsilon$, the primitive equations of the ocean (OPEs) can be approximated by "higher-order quasi-geostrophic equations'' up to an exponential accuracy in $\varepsilon$. This approximation assumes well-prepared initial data and is valid for a timescale of order one (independent of $\varepsilon$). Our construction uses Gevrey regularity of the OPEs and a classical method to bound errors in higher-order perturbation theory.
Abstract:
In this paper I will investigate the bifurcation and asymptotic behavior of solutions of the Swift-Hohenberg equation and the generalized Swift-Hohenberg equation with the Dirichlet boundary condition on a one-dimensional domain $(0,L)$. I will also study the bifurcation and stability of patterns in the $n$-dimensional Swift-Hohenberg equation with the odd-periodic and periodic boundary conditions. It is shown that each equation bifurcates from the trivial solution to an attractor $\mathcal A_\lambda$ when the control parameter $\lambda$ crosses $\lambda _{c} $, the principal eigenvalue of $(I+\Delta)^2$. The local behavior of solutions and their bifurcation to an invariant set near higher eigenvalues are analyzed as well.
Abstract:
Let $f: M \to M$ be a continuous map of a locally compact metric space. Models of interacting populations often have a closed invariant set $\partial M$ that corresponds to the loss or extinction of one or more populations. The dynamics of $f$ subject to bounded random perturbations for which $\partial M$ is absorbing are studied. When these random perturbations are sufficiently small, almost sure absorption (i.e. extinction) for all initial conditions is shown to occur if and only if $M\setminus \partial M$ contains no attractors for $f$. Applications to evolutionary bimatrix games and uniform persistence are given. In particular, it is shown that random perturbations of evolutionary bimatrix game dynamics result in almost sure extinction of one or more strategies.
|
We have a DFA $A = (Q,\Sigma, \delta, q_0, F)$. States $p, q \in Q$ are equivalent, $p \sim q$, if $\forall w \in \Sigma^\ast$: $\delta^*(p,w) \in F \iff \delta^*(q,w) \in F$.
States $p,q$ are equivalent after $i$ steps, $p \sim^i q$, if $\forall w \in \Sigma^\ast$ with $|w|\le i$: $\delta^*(p,w) \in F \iff \delta^*(q,w) \in F$.
$p \sim q$ if and only if $\forall i$: $p \sim^i q$.
State equivalence $\sim$ creates a partition $Q/\sim$ of states to equivalence classes.
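One way to experiment with this (a sketch, assuming the DFA is given as a dict delta keyed by (state, symbol) pairs) is to compute $\sim^0, \sim^1, \ldots$ by partition refinement and watch when they stabilize:

def equivalence_rounds(states, alphabet, delta, accepting):
    # part[q] labels q's class; start with ~^0, i.e. split by membership in F.
    part = {q: int(q in accepting) for q in states}
    rounds = [dict(part)]
    while True:
        # ~^{i+1}: p, q stay together iff they agree under ~^i and every
        # symbol sends them into the same ~^i class.
        labels, new_part = {}, {}
        for q in states:
            sig = (part[q], tuple(part[delta[q, a]] for a in alphabet))
            new_part[q] = labels.setdefault(sig, len(labels))
        if len(set(new_part.values())) == len(set(part.values())):
            return rounds  # refinement stabilized, so ~ equals the last ~^i
        part = new_part
        rounds.append(dict(part))

The index of the last round is then the smallest $i$ with $\sim^i = \sim$, which is the quantity the construction has to push up to exactly $n-2$.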
I am trying to find a $n$-state DFA for all $n \ge 2$ where the state equivalence relation $\sim$ is equal to the state equivalence relation after $n - 2$ steps, $\sim^{n-2}$, but is not equal to the state equivalence relation after $n - 3$ steps, $\sim^{n-3}$.
So I'm searching for DFAs which require exactly $n - 2$ transitions to check the equivalence. I don't know how to approach this problem. |
I'm trying to convert some English statements to first order logic statements and I'm trying to use Prolog to verify the translations. My question is: how do I convert a first order logic statement (having $\forall$ and $\exists$ quantifiers) into Prolog rules?
For example, there's this English statement:
Every voter votes for a candidate which some voter doesn't vote for.
and here's my translation of the English statement into first order logic:
$$\forall x, y[Voter(x) \land Candidate(y) \land Votes(x, y) \rightarrow \exists z[ Voter(z) \land \lnot Votes(z, y) ] ]$$
Now I'm not sure if this translation is correct. That's what I want to find out. My question is: how do I convert this first order logic statement to a Prolog rule?
So first I'm trying to fill a Prolog database with some facts.
human(p1). human(p2). human(p3). human(p4). human(p5). human(p6). human(p7). human(p8). human(p9).
%humans who are neither voters nor candidates
human(p10). human(p11).
%humans who are only voters
voter(p1). voter(p2). voter(p3).
%humans who are candidates and voters
voter(p4). voter(p5). voter(p6).
candidate(p4). candidate(p5). candidate(p6).
%humans who are only candidates
candidate(p7). candidate(p8). candidate(p9).
%some random votes
votes(p1, p6). votes(p2, p6). votes(p3, p6). votes(p4, p7). votes(p5, p8). votes(p6, p5).
I'm using human, voter, candidate, and votes. Here are some attempts to model the statement into a Prolog rule:
rule1 :- foreach((voter(X), candidate(Y), votes(X, Y)), (voter(Z), \+votes(Z, Y))).
rule2 :- foreach((human(X), voter(X), candidate(Y), votes(X, Y)), (human(Z), voter(Z), \+votes(Z, Y))). |
Alright, I have this group $\langle x_i, i\in\mathbb{Z}\mid x_i^2=x_{i-1}x_{i+1}\rangle$ and I'm trying to determine whether $x_ix_j=x_jx_i$ or not. I'm unsure there is enough information to decide this, to be honest.
Nah, I have a pretty garbage question. Let me spell it out.
I have a fiber bundle $p : E \to M$ where $\dim M = m$ and $\dim E = m+k$. Usually a normal person defines $J^r E$ as follows: for any point $x \in M$ look at local sections of $p$ over $x$.
For two local sections $s_1, s_2$ defined on some nbhd of $x$ with $s_1(x) = s_2(x) = y$, say $J^r_p s_1 = J^r_p s_2$ if with respect to some choice of coordinates $(x_1, \cdots, x_m)$ near $x$ and $(x_1, \cdots, x_{m+k})$ near $y$ such that $p$ is projection to first $m$ variables in these coordinates, $D^I s_1(0) = D^I s_2(0)$ for all $|I| \leq r$.
This is a coordinate-independent (chain rule) equivalence relation on local sections of $p$ defined near $x$. So let the set of equivalence classes be $J^r_x E$ which inherits a natural topology after identifying it with $J^r_0(\Bbb R^m, \Bbb R^k)$ which is space of $r$-order Taylor expansions at $0$ of functions $\Bbb R^m \to \Bbb R^k$ preserving origin.
Then declare $J^r p : J^r E \to M$ is the bundle whose fiber over $x$ is $J^r_x E$, and you can set up the transition functions etc no problem so all topology is set. This becomes an affine bundle.
Define the $r$-jet sheaf $\mathscr{J}^r_E$ to be the sheaf which assigns to every open set $U \subset M$ an $(r+1)$-tuple $(s = s_0, s_1, s_2, \cdots, s_r)$ where $s$ is a section of $p : E \to M$ over $U$, $s_1$ is a section of $dp : TE \to TU$ over $U$, $\cdots$, $s_r$ is a section of $d^r p : T^r E \to T^r U$ where $T^k X$ is the iterated $k$-fold tangent bundle of $X$, and the tuple satisfies the following commutation relation for all $0 \leq k < r$
$$\require{AMScd}\begin{CD} T^{k+1} E @>>> T^k E\\ @AAA @AAA \\ T^{k+1} U @>>> T^k U \end{CD}$$
@user193319 It converges uniformly on $[0,r]$ for any $r\in(0,1)$, but not on $[0,1)$, cause deleting a measure zero set won't prevent you from getting arbitrarily close to $1$ (for a non-degenerate interval has positive measure).
The top and bottom maps are tangent bundle projections, and the left and right maps are $s_{k+1}$ and $s_k$.
@RyanUnger Well I am going to dispense with the bundle altogether and work with the sheaf, is the idea.
The presheaf is $U \mapsto \mathscr{J}^r_E(U)$ where $\mathscr{J}^r_E(U) \subset \prod_{k = 0}^r \Gamma_{T^k E}(T^k U)$ consists of all the $(r+1)$-tuples of the sort I described
It's easy to check that this is a sheaf, because basically sections of a bundle form a sheaf, and when you glue two of those $(r+1)$-tuples of the sort I describe, you still get an $(r+1)$-tuple that preserves the commutation relation
The stalk of $\mathscr{J}^r_E$ over a point $x \in M$ is clearly the same as $J^r_x E$, consisting of all possible $r$-order Taylor series expansions of sections of $E$ defined near $x$.
Let $M \subset \mathbb{R}^d$ be a compact smooth $k$-dimensional manifold embedded in $\mathbb{R}^d$. Let $\mathcal{N}(\varepsilon)$ denote the minimal cardinal of an $\varepsilon$-cover $P$ of $M$; that is for every point $x \in M$ there exists a $p \in P$ such that $\| x - p\|_{2}<\varepsilon$....
The same result should be true for abstract Riemannian manifolds. Do you know how to prove it in that case?
I think there you really do need some kind of PDEs to construct good charts.
I might be way overcomplicating this.
If we define $\tilde{\mathcal H}^k_\delta$ to be the $\delta$-Hausdorff "measure" but instead of $diam(U_i)\le\delta$ we set $diam(U_i)=\delta$, does this converge to the usual Hausdorff measure as $\delta\searrow 0$?
I think so by the squeeze theorem or something.
this is a larger "measure" than $\mathcal H^k_\delta$ and that increases to $\mathcal H^k$
but then we can replace all of those $U_i$'s with balls, incurring some fixed error
In fractal geometry, the Minkowski–Bouligand dimension, also known as Minkowski dimension or box-counting dimension, is a way of determining the fractal dimension of a set S in a Euclidean space Rn, or more generally in a metric space (X, d). It is named after the German mathematician Hermann Minkowski and the French mathematician Georges Bouligand.To calculate this dimension for a fractal S, imagine this fractal lying on an evenly spaced grid, and count how many boxes are required to cover the set. The box-counting dimension is calculated by seeing how this number changes as we make the grid...
@BalarkaSen what is this
ok but this does confirm that what I'm trying to do is wrong haha
In mathematics, Hausdorff dimension (a.k.a. fractal dimension) is a measure of roughness and/or chaos that was first introduced in 1918 by mathematician Felix Hausdorff. Applying the mathematical formula, the Hausdorff dimension of a single point is zero, of a line segment is 1, of a square is 2, and of a cube is 3. That is, for sets of points that define a smooth shape or a shape that has a small number of corners—the shapes of traditional geometry and science—the Hausdorff dimension is an integer agreeing with the usual sense of dimension, also known as the topological dimension. However, formulas...
Let $a,b \in \Bbb{R}$ be fixed, and let $n \in \Bbb{Z}$. If $[\cdot]$ denotes the greatest integer function, is it possible to bound $|[abn] - [a[bn]]|$ by a constant that is independent of $n$? Are there any nice inequalities with the greatest integer function?
I am trying to show that $n \mapsto [abn]$ and $n \mapsto [a[bn]]$ are equivalent quasi-isometries of $\Bbb{Z}$...that's the motivation. |
I kind of have an elementary solution, it seems to be fine but I am not sure if everything is correct; please point out the mistake(s) I'm making, if any.
Define $$H_n:=\sum_{i=1}^n \frac{1}{i}$$Since $0<H_n<n$, if $\exists$ some $n$ for which $H_n$ is integral then $H_n=k$ where $0<k<n$. Then $$H_n=k=1+\frac{1}{2}+\frac{1}{3}+\cdots\ +\frac{1}{k}+\cdots\ +\frac{1}{n}\\\Rightarrow k=\frac{1}{k}+\frac{p}{q}\Rightarrow qk^2-pk-q=0$$ where $\gcd(p,q)=1$. Then we get $$k=\frac{p\pm \sqrt{p^2+4q^2}}{2q}$$ Since $k$ is integer $$p^2+4q^2=r^2$$ for some $r\in \mathbb{Z}^+$. Let $\gcd(p,2q,r)=d$ and let $\displaystyle x=\frac{p}{d},\ y=\frac{2q}{d},\ z=\frac{r}{d}$. Then $$x^2+y^2=z^2$$ Now, I make the following claim:
Claim: $p$ is odd and $q$ is even.
Proof: Let $s=2^m\le n$ be the largest power of $2$ in $\{1,2,\cdots,\ n\}$. Then, if $k\ne s$, the numerator of $\displaystyle \frac{p}{q}$ is the sum of $n-1$ terms out of which one will be odd, and hence $p$ is odd. On the other hand, $q$ will have the term $s$ as a factor. So $q$ is even.
Now, if $k=s$, then since $n>2$ (otherwise there is nothing to prove), there will be a factor $2^{m-1}\ge 2$ in $q$ and one of the sum terms in $p$ that corresponds to $2^{m-1}$ will be odd. Hence in this case also, $p$ is odd and $q$ is even. So the claim is proved. $\Box$
So, now we see that $d\ne 2$ and hence $2|y$. So we have a Pythagorean equation with $2|y, \ x,y,z>0$. Hence the solutions will be $$x=u^2-v^2,\ y=2uv,\ z=u^2+v^2$$ with $(u,v)=1.$ So, since $k$ is positive, $$k=\frac{d(x+z)}{dy}=\frac{u}{v}$$ But since $(u,v)=1$, $k$ is not an integer (for $n\ge 2$) which is a contradiction. So $H_n$ can not be an integer. $\Box$ |
I need to calculate the number of ways of distributing $n$ balls among $k$ boxes, each box may contain no ball, but if it contains any, then it must contain $\geq L$ & $\leq M$ balls.
This effectively solves:
$x_1+x_2+x_3+\dotsb+x_k = n; \quad x_i\in \{0\}\cup\{L,L+1,L+2,\dotsc,M-1,M\}$.
Is there a known solution to this? Googling "bounded combinatorics" and similar doesn't reveal anything, except for the post below which is a solution for an upper-bound.
It feels like there should be a solution to the $L \leq x_i \leq M$ case, and then the $0$-possible case can then (hopefully) be added to this as a solution to "ways to distribute $n$ balls among $k$ boxes such that at least one box contains no balls" |
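A small dynamic-programming cross-check can be used to test any candidate closed form (a hedged Python sketch; the function name and the sample parameters are my own illustrative choices):

from functools import lru_cache

def count_distributions(n, k, L, M):
    # Number of solutions of x_1 + ... + x_k = n with each x_i in {0} or {L, ..., M}
    # (identical balls, distinguishable boxes).
    allowed = [0] + list(range(L, M + 1))

    @lru_cache(maxsize=None)
    def ways(boxes_left, balls_left):
        if boxes_left == 0:
            return 1 if balls_left == 0 else 0
        return sum(ways(boxes_left - 1, balls_left - v)
                   for v in allowed if v <= balls_left)

    return ways(k, n)

print(count_distributions(10, 3, 2, 4))  # 10 balls, 3 boxes, each box empty or holding 2..4 balls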
This is my first question, please forgive me if I mistake something, since I don't think I will be allowed to edit the question later.
So, let me explain the kind of matrices I'm talking about. Think of an $n\times n$ square matrix, and think of a sequence of $n^2$ consecutive squares starting at $k^2$ (the most natural choice being $k=1$). Now place those squares in the matrix as if you were writing, row by row. I'll dare to invent a notation for these "Square Matrices filled with Consecutive Squares": $SMCS(n,k)$. So we have, for example: $$ SMCS(3,4)= \begin{bmatrix} 4^2 & 5^2 & 6^2 \\ 7^2 & 8^2 & 9^2 \\ 10^2 & 11^2 & 12^2 \\ \end{bmatrix}= \begin{bmatrix} 16 & 25 & 36 \\ 49 & 64 & 81 \\ 100 & 121 & 144 \\ \end{bmatrix} $$ I wasn't able to find anything about matrices like these either here or elsewhere. Probably they are of no mathematical interest. My interest starts from a lecture, years ago, when the professor made us notice that the "Square Matrix filled with Consecutive Integers" $1$ to $9$ (it is the cell phone keypad! I'll call it $SMCI(3,1)$) is singular. That is: $$ \det \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \\ \end{bmatrix}=0 $$ I find very intuitive that this elegant property holds for matrices of higher order and even if starting from a different integer, that is $$ \det \left[SMCI(n,k)\right]=0 \qquad n\geqslant 3, \;\forall k $$ I can recognize a clear pattern (where the middle columns are an "average" of the ones at their sides) so that I feel relieved from the need to provide an explicit proof for this result, that shouldn't be too hard anyway.
Now I'm looking at the determinants of the matrices filled with squares. With no surprise I find nonsingular matrices of order $2$. Then, with some hope, I look at $SMCS(3,1)$ (the cell phone keypad with each number squared) but I find that the determinant is $-216$. So I find myself admitting that, understandably, the property doesn't hold for matrices filled with powers. Yet something curious happens: the result is the same even if I take a different starting point, that is $$ \det\left[SMCS(3,k)\right]=-216\qquad \forall k $$ This was a surprise, but it was pretty straightforward proving it with some direct algebra calculations.
Here comes the big deal. I went on looking at higher orders, wondering if there were a characteristic constant for each $n$. Instead... $0$'s began to appear again! Me unbeliever! I've grown convinced that $$ \det \left[SMCS(n,k)\right]=0 \qquad n\geqslant 4, \;\forall k $$ but attempting a direct proof, if only for $n=4$, was out of the question. Let alone a general proof for any $n$, which is what I would really be interested in, but I wouldn't know even where to start from. I don't really need the rigorous proof, what I'd love is to be sure of the validity of the property, and possibly a way to "understand" it in a manner similar to what I could do with the $SMCI$ matrices, to be "convinced" of the result. Can anyone help me with this? Thank you!
Wow, great, I got it! It took me some effort, but I got it all, thank you JeanMarie! Forgive me for coming back with some delay, but I saw you constantly improving your answer, and considering my time zone disadvantage I had to give up for the day.
Understanding your "trick" allows to easily extend the property to even higher orders; for example rows (or columns) of consecutive cubes give singular matrices from $5 \times 5$ on. If I extend the notation to "Square Matrix filled with Consecutive Powers", I shall write
$$ \det \left[SMCP(p,n,k)\right]=0 \qquad n\geqslant p+2, \;\forall k $$
I couldn't resist exploring the case just before the first singularity occurence, when the matrix order is greater than the power only by $1$. I mean, in the general case, because it is easy to algebraically verify that
$$ \det \left[SMCP(1,2,k)\right]=-2 \qquad \forall k $$
$$ \det \left[SMCP(2,3,k)\right]=-216 \qquad \forall k $$
I made a few direct attempts to find that (probably $\forall k$, but I don't dare writing it)
$$ \det \left[SMCP(3,4,k)\right]=5308416 $$
$$ \det \left[SMCP(4,5,k)\right]=7776 \cdot 10^{10} $$
I hoped to infer a rule from this sequence start, and try a proof at a later stage, but I cannot see anything. Nor do I think that the kernel trick can be helpful here. So the question now would be
$$ \det \left[SMCP(p,n,k)\right]=\;? \qquad \forall k \;\textrm{ when }\; n=p+1 $$
but I don't know how hard it can be to answer it, and I won't ask anyone an excessive effort about something that now has gone well beyond what I was looking for in the first place. I leave it here just in case someone is able to easily see something that doesn't occur to me. Again thank you! |
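For anyone who wants to experiment further, here is a quick exact-arithmetic check of the observations above (a minimal Python sketch assuming sympy is available; smcs is just an illustrative name):

from sympy import Matrix

def smcs(n, k):
    # n x n matrix filled row by row with the consecutive squares k^2, (k+1)^2, ...
    return Matrix(n, n, lambda i, j: (k + i * n + j) ** 2)

# Order 3: the determinant is the constant -216 for every starting point k.
print([smcs(3, k).det() for k in range(1, 6)])
# Order >= 4: the determinant vanishes, as conjectured.
print([smcs(n, 1).det() for n in range(4, 8)])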
Suppose we have a real function $ f: \mathbb{R} \to \mathbb{R}$ that is two times differentiable and we draw its graph $\{(x,f(x)), x \in \mathbb{R} \} $. We know, for example, that when the first derivative is not continuous at a point we then have a "corner" in the graph. How about a discontinuity of $f''(x)$ at a point $x_0$ though? Can we spot that just by drawing the graph of $f(x)$ (not by drawing the graph of the first derivative and noticing it has a "corner" at $x_0$; that's cheating)? What about discontinuities of higher derivatives, which will of course be way harder to "see"?
Seeing the second derivative by eye is rather impossible as the examples in the other answers show. Nonetheless there is a good way to visualize the second derivative as curvature, which gives a good way to see the discontinuities if some additional visual aids are available.
Whereas the first derivative describes the slope of the line that best approximates the graph, the second derivative describes the curvature of the best approximating circle. The best approximating circle to the graph of $f:\mathbb{R}\to\mathbb{R}$ at a point $x$ is the circle tangent to $(x,f(x))$ with radius equal to $((1+f'(x)^2)^{3/2})/\left\vert f''(x) \right\vert$. This leaves exactly two possible circles, and the sign of $f''(x)$ determines which is the correct one. That is, whether the circle should be above ($f''(x)>0$) or below ($f''(x)<0$) the graph. Note that the case when $f''(x)=0$ corresponds to a circle with "infinite" radius, i.e., just a line.
As a demonstration, consider the difference between some of the functions mentioned before. For $x\mapsto x^3$, the second derivative is continuous, and the best approximating circle varies in a continuous manner ("continuous" here needs to be interpreted with care, since the radius passes through the degenerate $\infty$-case when it switches sign)
For $x\mapsto \begin{cases}x^2,&x>0\\-x^2,&x\leq 0\end{cases}$ the best approximating circle has a visible discontinuity at $x=0$, where the circle abruptly hops from one side to the other.
The pictures above were made with Sage. The code used to create the second one is below to play around with:
def tangent_circle(f, x0, df=None):
    if df is None:
        df = f.derivative(x)
    r = (1 + df(x=x0)^2)^(3/2) / df.derivative(x)(x=x0)
    tang = df(x=x0)
    unitnormal = vector(SR, (-tang, 1)) / sqrt(1 + tang^2)
    c = vector(SR, (x0, f(x0))) + unitnormal*r
    return c, r

def plot_curvatures(f, df=None, plotrange=(-1.2, 1.2), xran=(-1, 1), framenum=50):
    fplot = plot(f, (x, plotrange[0], plotrange[1]), aspect_ratio=1, color="black")
    xmin = xran[0]
    xmax = xran[1]
    pts = [xmin + (xmax - xmin)*(k/(framenum - 1)) for k in range(framenum)]
    return [fplot + circle((x0, f(x0)), 0.03, fill=True, color="blue")
                  + circle(*tangent_circle(f, x0, df), color="blue") for x0 in pts]

f = lambda x: x^2 if x > 0 else -x^2
df = 2*abs(x)
frames = plot_curvatures(f, df)
animate(frames, xmin=-1.5, xmax=1.5, ymin=-1, ymax=1)
Whether or not you can "see" this is in some sense a question about the acuity of human vision. I think the answer is "no". If you graph the function $$ g(x) = \int_0^x |t|dt $$ you will see the usual parabola in the right half plane and its negative in the left half plane. They meet at the origin with derivative $0$. The derivative of this function is $|x|$, whose derivative is undefined at $0$. The second derivative is $-1$ on the left and $1$ on the right, undefined at the origin.
Plot that and see if you can see the second derivative. Unless you draw it really carefully and know what you are looking for the graph will look like that of $x^3$, which is quite respectable.
Assuming your graph is made from reflective material, you can see discontinuous second derivatives in the reflection. For example, if you stand at $(0,-1)$ and look upwards at the graph of $f(x)=\max\{x,0\}$ the reflection will have a discontinuity at the point of nondifferentiability of $f$, namely $(0,0)$, even though $f$ is continuous there. Similarly, if you look at a graph that is differentiable but not twice differentiable, the reflection will not be differentiable.
Some years ago I crammed my adult body into a rather small car of a child's ride called The Wild Mouse. It was a "roller coaster" with no perceptible vertical drops as it went in a helical fashion on a path that consisted of alternating line segments and quarter circles. Children could be delighted at a transition between line and circle because the path has a discontinuity in its second derivative at each transition.
The car essentially moved at a constant speed, so on the straight segments the car was not accelerating, and there was "no" force on the car. However, a particle moving at constant speed in a circular path has constant non-zero acceleration pointing toward the circle's center, so the car, and child, experiences a force toward the circle center. The children could feel the discontinuity in the second derivative of the car's path.
Railroad engineers have known about this abrupt change in force since before the mid 19th century, and used curves with spiral characteristics, now called "transition curves." Such curves are used by railroad and highway designers to provide smoother rides for their customers.
Not sure if there is a "picture rigorous" way of checking, but there are subtle clues that could indicate it's possible.
For instance, if you know the first derivative $f'$ has a "corner" at $x=a$, then the graph of $f$ has to be monotonically increasing/decreasing around $x=a$. Furthermore, this indicates a change in concavity has to be present. So places where you have
a change in concavity AND don't change increasing/decreasing behavior (we allow derivative of zero here),
then it could happen at that point (in terms of limiting down the choice of possibilities). Take for instance the antiderivative of $|x|$.
Obviously this isn't saying this is always true (see $x^3$ for instance). Just that if you want to narrow your focus to potential points, do this trick.
Please let me know if there is something I'm missing or wrong about. |
Let $E$ be a non-empty subset of an ordered space. Suppose that $\alpha$ is a lower bound of $E$ and $\beta$ is an upper bound of $E$. Show that $ \alpha \leq \beta $.
Proof: (1) If $\alpha$ is a lower bound of $E$ then $\forall x\in E\quad x\geq \alpha$. (2) If $\beta$ is an upper bound of $E$ then $\forall x\in E\quad x\leq \beta$.
From (1) and (2), and since $E$ is non-empty, there exists some $x \in E$ for which
$$\alpha\leq x\leq\beta\Rightarrow \alpha\leq\beta$$
I'm starting now with real analysis and I'm still learning the art of demonstrating. Is this right? That's enough? |
The problem I'm trying to solve is as follows:
Let $X$ be a space and let $A$ be a $\sigma$-algebra on $X$ that contains infinitely many elements. An infinite partition is a countably infinite sequence of non-empty and pairwise disjoint sets $C_1, C_2, ...$, all in $A$, such that their union is $X$. Show that such a partition exists.
I tried to show this using the method described in this question, namely:
For the function $f:X\rightarrow A$ defined as $f(x) := \bigcap_{x\in S \in A} S$ we have $f(x) \cap f(y) = \varnothing$ for all $x \neq y$. Then, the set $f(X) := \{f(x) | x \in X\}$ partitions $X$.
However, this method results in a countable partition iff $X$ is countable. Does anyone know another method that will definitely result in a countable partition? |
This is the question i would like to discuss, properly stated.
Given a model $M$ for a collection of set theory axioms (ZFC, for example), list all basic modal formulas $\phi$ such that $M\Vdash \phi$ and $\nVdash \phi$ (that is, $\phi$ is valid on the basic modal frame $M$, and $\phi$ is not a formula valid in the class of all basic modal frames).
I've been studying modal logic for a while now, albeit slowly. Recently, i've encountered the concept of frame definability, which is about relating modal formulas to the class of frames they are valid on. Now, i've never had any formal training on Set Theory, but thanks to the internet, i believe a large part of it consists of stating a collection of logical sentences such that any structure with signature $(W,\in)$ that models said collection behaves much like we'd expect sets to behave.
As it turns out, the structure signature $(W,\in)$ is precisely that of the basic modal frames, with the membership relation providing the interpretation for the $\Diamond$ modality. I can't help but wonder if there aren't any ways to define sets using modal logic, or if not, how "close" can we get to them.
Alas, i'm not sure if these questions are appropriate on a StackExchange site (too vague). So for now i'd like to know something simpler. We know that there are many modal formulas valid on the class of all frames; that's the smallest normal modal logic. My question is then, what does the set of modal formulas valid on a model of, say, ZFC, or NF, look like? Just how much bigger than the smallest normal modal logic is that set?
As a follow-up, maybe i could ask (if it isn't asking too much) if there are modal formulas that can distinguish between different models of the same set theory, that is, by being valid on one model but not in another.
EDIT: This question's previous wording was simpler; all i asked was whether there were a non-trivially (that is, not a member of the smallest normal modal logic) valid modal formula or not. As it turns out, there's a pretty easy one, $\Diamond \top$, since every set must be a member of another set (and in fact the infinite set consisting of $\{ \Diamond \top, \Diamond \Diamond \top, \Diamond \Diamond \Diamond \top, ... \}$ is valid).
I'm changing it to a harder statement (asking to define the entire logic generated by the set model), but perhaps even more interesting would be asking if there are non-trivially valid modal formulas for a "converse set theory" $M=(W,\in_c)$ model, in which $x\in_c y $ iff $y \in x$ (and, of course, your choice of axioms would need to be altered, in order to invert the operands in the $\in$ predicate). Dunno what to ask... |
I read the following statement in my modal logic book.
Propositional calculus system $L$ is consistent if and only if for every proposition symbol $p$ in $L$, $\not\vdash p$
I wonder how to prove this statement. And is this also true in FOL?
To make sure, I write some definitions here.
The propositional calculus system $ L $ is a formal system $(A,C)$ where:
a) The set $A$ is a countably infinite set of proposition symbols.
b) The set $C$ is a set of logical connectives, that is $\{\neg, \vee, \wedge, \to \}$.
And we use natural deduction as a proof system. And by definition, system $L$ with the proof system is consistent if and only if there is no WFF $\alpha$ such that $\vdash \alpha$ and $\vdash \neg \alpha$ |
Is there a function in Mathematica that can be used to find the perturbation solution of an equation like $x^2 − 1 = \epsilon \,x$, $x − 2 = \epsilon \cosh(x)$ or $x^2 − 1 = \epsilon\, e^x$?
Decide up to which power you would like to expand:
pow = 4;
Let's do one of the equations you mentioned as an example (bring all terms to one side and save as a single expression, eq in this case):
eq = x - 2 - e Cosh[x];
Write an ansatz for the x solution with unknown coefficients:
ansatz = Sum[a[i] e^i, {i, 0, pow}];
Substitute the ansatz into your expression and do a series expansion:
expand = Series[eq /. x -> ansatz, {e, 0, pow}];
Solve the constraints of overall factors vanishing:
vars = Table[a[i], {i, 0, pow}];
sols = Solve[expand == 0, vars][[1]] // Simplify // Quiet;
Finally, insert the solution into your ansatz to obtain the result:
res = ansatz /. sols
2 + e Cosh[2] + e^2 Cosh[2] Sqrt[-1 + Cosh[2]^2] + 1/3 e^4 Cosh[2] Sqrt[-1 + Cosh[2]^2] (-3 + 8 Cosh[2]^2) + e^3 (-Cosh[2] + (3 Cosh[2]^3)/2)
Don't forget to test numerically, whether your result is actually correct at the end:
enum = 10^-10;
xnum = x /. FindRoot[(eq /. e -> enum) == 0, {x, 2}, WorkingPrecision -> 60];
(res /. e -> enum) - xnum
-3.627605747396*10^-47
This shows that expanding to fourth order with an e = 10^-10 indeed consistently matches the result up to about 10^-50 accuracy, so the expansion was correct. Rinse and repeat for the other examples.
PS: In cases where your equation admits several solutions you might have to be a little bit more careful, but the principle still stays the same.
For these simple examples you could use InverseSeries. For example:
InverseSeries[Series[(x^2 - 1)/x, {x, 1, 10}], e]
InverseSeries[Series[(x - 2)/Cosh[x], {x, 2, 10}], e]
InverseSeries[Series[(x^2 - 1)/E^x, {x, 1, 10}], e]
You need to solve for e, and then do a series around the value of x when e is zero.
The new-in-M12 function AsymptoticSolve can be used to find these perturbation expansions:
AsymptoticSolve[x^2 - 1 == ϵ x, {x, 1}, {ϵ, 0, 7}]
{{x -> 1 + ϵ/2 + ϵ^2/8 - ϵ^4/128 + ϵ^6/ 1024}}
AsymptoticSolve[x - 2 == ϵ Cosh[x], {x, 2}, {ϵ, 0, 7}]
{{x -> 2 + ϵ Cosh[2] + ϵ^2 Cosh[2] Sinh[2] + 1/2 ϵ^3 (Cosh[2]^3 + 2 Cosh[2] Sinh[2]^2) + 1/3 ϵ^4 (5 Cosh[2]^3 Sinh[2] + 3 Cosh[2] Sinh[2]^3) + 1/24 ϵ^5 (13 Cosh[2]^5 + 88 Cosh[2]^3 Sinh[2]^2 + 24 Cosh[2] Sinh[2]^4) + 1/15 ϵ^6 (47 Cosh[2]^5 Sinh[2] + 100 Cosh[2]^3 Sinh[2]^3 + 15 Cosh[2] Sinh[2]^5) + 1/720 ϵ^7 (541 Cosh[2]^7 + 7746 Cosh[2]^5 Sinh[2]^2 + 7800 Cosh[2]^3 Sinh[2]^4 + 720 Cosh[2] Sinh[2]^6)}}
AsymptoticSolve[x^2 - 1 == ϵ Exp[x], {x, 1}, {ϵ, 0, 7}]
AsymptoticSolve[x^2 - 1 == ϵ Exp[x], {x, -1}, {ϵ, 0, 7}]
{{x -> 1 + (E ϵ)/2 + (E^2 ϵ^2)/8 + (E^3 ϵ^3)/16 + ( 13 E^4 ϵ^4)/384 + (E^5 ϵ^5)/48 + (69 E^6 ϵ^6)/ 5120 + (841 E^7 ϵ^7)/92160}}
{{x -> -1 - ϵ/(2 E) + (3 ϵ^2)/(8 E^2) - (7 ϵ^3)/( 16 E^3) + (235 ϵ^4)/(384 E^4) - (121 ϵ^5)/(128 E^5) + ( 7959 ϵ^6)/(5120 E^6) - (245953 ϵ^7)/(92160 E^7)}}
According to standard perturbation theory for static Hamiltonians of this type, the ground state of the system may be approximated by:
\begin{align} |0\rangle_V &= |0\rangle_H - \epsilon \sum_{\alpha\neq 0} \frac{U_{\alpha 0}}{\alpha} |\alpha\rangle_H \\ \notag &\simeq N^{-\frac12} \left ( |0\rangle_H + \beta_1 |1\rangle_H + \beta_2 |2\rangle_H \right)\,, \end{align} where $N=1+\beta_1^2+\beta_2^2$, $\beta_k = -\epsilon U_{\alpha 0}/\alpha$ and $U_{\alpha 0}={}_H\langle \alpha|U| 0 \rangle_H$, $\alpha=1,2$.
Hope this helps to formulate your problem. |
Seminar
Location: Space Science Lab, Room 105
In this work, we study the volume ratio of the projective tensor products $\ell^n_p\otimes_{\pi}\ell_q^n\otimes_{\pi}\ell_r^n$ with $1\leq p\leq q \leq r \leq \infty$. The asymptotic formulas we obtain are sharp in almost all cases. As a consequence of our estimates, these spaces allow for an almost Euclidean decomposition of Kashin type whenever $1\leq p \leq q\leq r \leq 2$ or $1\leq p \leq 2 \leq r \leq \infty$ and $q=2$. Also, from the Bourgain-Milman bound on the volume ratio of Banach spaces in terms of their cotype $2$ constant, we obtain information on the cotype of these $3$-fold projective tensor products. Our results naturally generalize to the $k$-fold products $\ell_{p_1}^n\otimes_{\pi}\dots \otimes_{\pi}\ell_{p_k}^n$ with $k\in\mathbb{N}$ and $1\leq p_1 \leq \dots\leq p_k \leq \infty$.
Joint work with O. Giladi, J. Prochno, E. Werner, and N. Tomczak-Jaegermann. |
What does it mean to say \(P(event) = something \)
This blog post is an attempt to explain the following tutorial to a more general audience. I gave that tutorial to explain some of my favorite ideas in the Shafer and Vovk book Probability and Finance: It's Only a Game!
Horse Gambling Games
Way before Probability Theory was its own field let alone a field with rigorous foundations, two fairly celebrated mathematicians Pascal and Fermat wanted to win a ton of money betting on horses. Before doing so they of course had to first check whether horse gambling was a fair game. This meant they had to first define what it means for a game to be fair. The definition they came up with involves two parties which we’ll call A and B involved in a zero sum game.
If A pays x$
and B pays x$,
the winner gets paid 2x$.
This definition seems fairly intuitive because we can actually attribute a meaning to the title \(P(event) = something \) by claiming that:
P(event) = how much money you’re willing to spend on a game where you could win 1$.
Mathematization of Probability Theory
The above definition is indeed intuitive but it hardly seems in line with what we'd imagine a mathematical definition of probability would look like.
Kolmogorov rigorously formalized probability theory using just 6 axioms; what was previously a subfield of mathematical physics became its own deep topic. I might need a whole other blog post on Kolmogorov’s 6 axioms but I’ll mention one small but interesting thing here. The sixth axiom uses what is called the axiom of choice. The axiom of choice is . For our purposes though it would be interesting to explore the possibility of a different foundation to probability theory in games instead of measure theory.
Sequential Learning
Von Mises was the first to propose that probability theory could find its foundation in games.
Given an observed bit string 001110, what is the probability that the next bit observed is a 1, so that the sequence becomes 0011101?
The question is meaningful because if the length of the string is sufficiently large, then we expect that looking at a subset of the string will give us a statistical estimate of the frequency of each bit.
Fortunately we also have a method of quantifying how random a length \(n \) bitstring is: Kolmogorov Complexity!
A brief fugue into Kolmogorov Complexity
Suppose you’re given two strings:
\(A = 01 01 01 01 01 01 01 \) \(B = 00 10 01 00 10 11 11 \)
My question is which string looks more random? Well one way to approach this is to see if a string has many repetitions and we notice indeed that \(A \) is simply 01 repeated 7 times so it’s essentially the same message appended to itself 7 times. \(B \) on the other hand seems to lack such a clear looking pattern so we can claim that \(B \) is more random than \(A \).
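One crude but computable way to see this intuition in action is to use a general-purpose compressor as a stand-in for Kolmogorov complexity (it is only a proxy, and the length and seed below are arbitrary illustrative choices):

import zlib, random

random.seed(0)
n = 2000
A = ("01" * (n // 2)).encode()                                  # highly patterned
B = "".join(random.choice("01") for _ in range(n)).encode()     # "random-looking"
# The patterned string compresses to far fewer bytes than the random-looking one.
print(len(zlib.compress(A, 9)), len(zlib.compress(B, 9)))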
This is exactly the notion that Kolmogorov complexity captures! Unfortunately, it is also impossible to compute the Kolmogorov complexity of an arbitrary string! Whaaaat? Kolmogorov complexity belongs to the class of undecidable problems, which means that the halting problem can be reduced to it.
If there’s enough interest in the comments I can write a whole other blog post about Kolmogorov complexity and the halting problem but to give you an intuition of what the halting problem is all about: just think about how you’d know in a finite amount of time if something is going to run forever. Turns out computer science has answers to lots of philosophical questions.
The weak law of large numbers
The weak law of large numbers is generally covered in undergraduate statistics class using measure theoretic tools even though measure theory is outside the scope of many undergraduate programs. But it’s worth going over the “classical” proof to appreciate the game theoretic proof.
We will make the typical assumption that the \(n \) random variables \(X_1, \dots, X_n \) we observe are i.i.d. with mean \(\mu \) and variance \(\sigma^2 \).
The i.i.d. assumption stands for independent and identically distributed; this assumption is an important one because without it learning seems quite impossible. In the most degenerate case, if every random variable depends on every other one and each has its own distribution, it is impossible to generalize any knowledge about the points.
Let us define \(A_n = \frac{X_1 + \dots + X_n}{n} \) to be the average value of the random variables. Then the expected value of \(A_n \) is \(E[A_n] = \frac{n \mu}{n} = \mu \). Similarly \(Var[A_n] = \frac{n \sigma^2}{n^2} = \frac{\sigma^2}{n} \).
The proof of the weak law of large numbers is then concluded using Chebyshev’s inequality which is simply a formalization of the intuition that rare things happen rarely.
\(P(|A_n - \mu | \geq \epsilon) \leq \frac{Var[A_n]}{\epsilon^2} = \frac{\sigma^2}{n \epsilon^2} \)
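A quick simulation makes the bound concrete (a minimal Python sketch for fair coin flips; the sample sizes and trial count are arbitrary):

import random

def tail_probability(n, eps=0.1, trials=5000):
    # Estimate P(|A_n - 0.5| >= eps) where A_n is the mean of n fair coin flips.
    hits = 0
    for _ in range(trials):
        mean = sum(random.random() < 0.5 for _ in range(n)) / n
        if abs(mean - 0.5) >= eps:
            hits += 1
    return hits / trials

for n in (10, 100, 1000):
    print(n, tail_probability(n))   # shrinks as n grows, consistent with sigma^2 / (n eps^2)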
Bounded Fair Coin Game
Now we will instead show how to prove the weak law of large numbers using games. We will set up a game between two parties that we will call the skeptic and nature.
\(K_i \) will be the skeptic's capital at time \(i \)
\(M_n \) is the amount of tickets that the skeptic purchases
\(x_n \) is the value of a ticket determined by nature
Now we can finally introduce the game:
\(K_0 = 1 \)
For \(n = 1,2, \dots \):
  Skeptic announces \(M_n \in \mathbb{R} \)
  Reality announces \(x_n \in \{0,1\} \)
  \(K_n = K_{n - 1} + M_{n}x_{n} \)
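To get a feel for the protocol, here is a toy simulation (my own illustrative sketch, not from the book: I use outcomes in {-1, +1} instead of {0, 1}, and a naive skeptic who bets a small fraction of capital on the side seen more often so far):

import random

def play(rounds=10000, fraction=0.01, bias=0.6):
    # K_n = K_{n-1} + M_n * x_n, with x_n in {-1, +1} chosen by "nature" with P(+1) = bias.
    K, plus, n = 1.0, 0, 0
    for _ in range(rounds):
        side = +1 if plus >= n - plus else -1      # bet on the majority side so far
        M = fraction * K * side
        x = +1 if random.random() < bias else -1
        K += M * x
        n += 1
        plus += (x == +1)
    return K

print(play(bias=0.6))   # biased nature: the skeptic's capital grows enormously
print(play(bias=0.5))   # (nearly) fair nature: no reliable growth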
What we will prove is that there exists a winning strategy for the skeptic. But first we will use a favorite trick employed by philosophers and define what we mean by winning. The relationship with the weak law of large numbers will come out soon.
We claim that the skeptic wins if \(K_n > 0 \) for all \(n \) (we call this the no-bankruptcy condition) and one of two things happens: either
\(\lim_{n \rightarrow \infty} \frac{1}{n}\sum_{i = 1}^{n}x_i = 0 \) or
\(\lim_{n \rightarrow \infty} K_n = \infty\)
Every time an author proposes new definitions, I find it helpful to understand what the equations mean in plain English.
The condition after the "or" states that the skeptic wins if he becomes infinitely rich, and I'm sure we can all agree that this constitutes a reasonable definition of winning.
The first needs some more clarification: it states that the skeptic also wins if the outcome of the game is truly random. What is meant here is that if nature plays a completely arbitrary strategy then no winning strategy exists, which is another way of saying that any given skeptic can't possibly play better than any other skeptic in retrospect.
Note how I use the term arbitrarily instead of the term randomly; this distinction seems pedantic but is quite important, since randomly means according to some distribution while arbitrarily means there is absolutely no limitation on how the variable may act.
Proving the bounded fair coin game
Now for the proof: If the skeptic bets an infinitely small amount \(\epsilon \) on heads then nature will be forced to not play heads often, or else the skeptic will become infinitely rich. Therefore nature will start playing tails; when that happens, the skeptic starts betting on tails. Essentially this almost silly example provides a proof of concept for machine learning: if nature's strategy is not completely arbitrary then it has an underlying pattern that can be learnt from data!
TLDR: If nature’s strategy is not completely uniform, meaning if nature does not play heads as often as tails then the skeptic can learn nature’s strategy and become infinitely rich by playing an infinite number of rounds.
And that’s the proof of the weak law of large numbers! We can think of nature as an entity trying to minimize rare events: that’s why there aren’t that many infinitely rich folks out there! (Actually the real proof is slightly more complicated and is covered in my original tutorial
Not only is this a completely valid proof of the weak law of large numbers, it also does not make the i.i.d. assumption, so it also implies the measure theoretic result!
\(A \wedge B \rightarrow A \)
Epilogue
The efficient market hypothesis is an economic concept that states that it is impossible to consistently achieve better returns than the market. Did we also prove this statement as a corollary? |
I'm working out this proof needed for the Carathéodory-Koebe theory.
The idea is quite simple to understand, but there is an argument using sequences which I'm questioning.
The statement is the following:
Let $D \subset \mathbb{C}$ a domain with $0 \in D$. $f,g$ holomorphic in $D$ with $f,g$ not constant. $g$ is injective, $f(0) = 0$ and $\mid f(z) \mid \geq \mid g(z) \mid$ for all $z \in D$.
Then $\sup\lbrace r > 0 \mid B_r(0) \subset f(D) \rbrace \geq \sup\lbrace r > 0 \mid B_r(0) \subset g(D) \rbrace$
This seems quite confusing on the first look but is actually quite easy to understand (at least when drawing a picture).
(Note: $\sup\lbrace r > 0 \mid B_r(0) \subset f(D) \rbrace$ is called the inner radius of $f(D)$ and just represents the supremum of the radii of disks around $0$ that fit into $f(D)$.)
Proof:
We prove this by taking a disk $B = B_r(0) \subset g(D)$. This is actually possible because $g(D)$ has to contain $0$, because of $f(0) = 0$ and $\mid f(z) \mid \geq \mid g(z) \mid$. Also $g(D)$ has to be a domain because of the open mapping theorem ($ g \neq $const). This means there is a disk around $0$ fitting in $g(D)$. We show the statement by showing $B \subset f(g^{-1}(B))$ (this is where we need $g$ to be injective). This will follow from: If $b \in \partial f(g^{-1}(B))$, then $\mid b \mid \geq r$.
After defining $B$ there has to be a sequence $a_n \in B$ with $a_n \rightarrow a \in \partial B \quad (n \rightarrow \infty)$ and $f(g^{-1}(a_n)) \rightarrow b \quad (n \rightarrow \infty)$.
That is my question. Why can we say that such a sequence has to exist?
Using this sequence we continue:
$\mid b \mid = \lim \mid f(g^{-1}(a_n)) \mid \geq \lim \mid g(g^{-1} ( a_n)) \mid = \lim \mid a_n \mid = \mid a \mid = r$.
How can we say that there is such a sequence $a_n$ which converges in $B$ to $a \in \partial B$ AND $\lim f(g^{-1}(a_n)) = b \in \partial f(g^{-1} (B))$? |
But if you don't want to have a Google account: Chrome is really good. Much faster than FF (I can't run FF on either of the laptops here) and more reliable (it restores your previous session if it crashes with 100% certainty).
And Chrome has a Personal Blocklist extension which does what you want.
: )
Of course you already have a Google account but Chrome is cool : )
Guys, I feel a little defeated in trying to understand infinitesimals. I'm sure you all think this is hilarious. But if I can't understand this, then I'm yet again stalled. How did you guys come to terms with them, later in your studies?
do you know the history? Calculus was invented based on the notion of infinitesimals. There were serious logical difficulties found in it, and a new theory developed based on limits. In modern times using some quite deep ideas from logic a new rigorous theory of infinitesimals was created.
@QED No. This is my question as best as I can put it: I understand that lim_{x->a} f(x) = f(a), but then to say that the gradient of the tangent curve is some value, is like saying that when x=a, then f(x) = f(a). The whole point of the limit, I thought, was to say, instead, that we don't know what f(a) is, but we can say that it approaches some value.
I have problem with showing that the limit of the following function$$\frac{\sqrt{\frac{3 \pi}{2n}} -\int_0^{\sqrt 6}(1-\frac{x^2}{6}+\frac{x^4}{120})^ndx}{\frac{3}{20}\frac 1n \sqrt{\frac{3 \pi}{2n}}}$$equal to $1$, with $n \to \infty$.
@QED When I said, "So if I'm working with function f, and f is continuous, my derivative dy/dx is by definition not continuous, since it is undefined at dx=0." I guess what I'm saying is that (f(x+h)-f(x))/h is not continuous since it's not defined at h=0.
@KorganRivera There are lots of things wrong with that: dx=0 is wrong. dy/dx - what/s y? "dy/dx is by definition not continuous" it's not a function how can you ask whether or not it's continous, ... etc.
In general this stuff with 'dy/dx' is supposed to help as some kind of memory aid, but since there's no rigorous mathematics behind it - all it's going to do is confuse people
in fact there was a big controversy about it since using it in obvious ways suggested by the notation leads to wrong results
@QED I'll work on trying to understand that the gradient of the tangent is the limit, rather than the gradient of the tangent approaches the limit. I'll read your proof. Thanks for your help. I think I just need some sleep. O_O
@NikhilBellarykar Either way, don't highlight everyone and ask them to check out some link. If you have a specific user which you think can say something in particular feel free to highlight them; you may also address "to all", but don't highlight several people like that.
@NikhilBellarykar No. I know what the link is. I have no idea why I am looking at it, what should I do about it, and frankly I have enough as it is. I use this chat to vent, not to exercise my better judgment.
@QED So now it makes sense to me that the derivative is the limit. What I think I was doing in my head was saying to myself that g(x) isn't continuous at x=h so how can I evaluate g(h)? But that's not what's happening. The derivative is the limit, not g(h).
@KorganRivera, in that case you'll need to be proving $\forall \varepsilon > 0,\,\,\,\, \exists \delta,\,\,\,\, \forall x,\,\,\,\, 0 < |x - a| < \delta \implies |f(x) - L| < \varepsilon.$ by picking some correct L (somehow)
Hey guys, I have a short question a friend of mine asked me which I cannot answer because I have not learnt about measure theory (or whatever is needed to answer the question) yet. He asks what is wrong with \int_0^{2 \pi} \frac{d}{dn} e^{inx} dx when he applies Lesbegue's dominated convergence theorem, because apparently, if he first integrates and then derives, the result is 0 but if he first derives and then integrates it's not 0. Does anyone know? |
Call the strategies of rock, paper, and scissors A, B, and C: C beats B beats A beats C.Label the possible positions in this game with $2(n-1)$ dollars in the pot as either: $T_{n-1}$ if the previous result was a tie; $G_{n-1}$ if player I has a winning streak of 1 using strategy A (where A could be any of rock, paper or scissors); or $H_{n-1}$ if player I has a winning streak of 2 using strategy A (and will thus take the pot if he wins the next round using strategy A). We are using $n-1$ for the amount in the pot rather than $n$ for algebraic convenience when presenting the sub-game matrices.
(Clearly, this omits the cases where player II has a winning streak, but the values of those positions $G^{-}_{n-1}$ and $H^{-}_{n-1}$ are just the negatives of the values of the corresponding $G_{n-1}$ and $H_{n-1}$ positions.)
Let the value of any position in this game be the optimum game value minus the result that would happen if the players were to quit and immediately split the pot. Then by symmetry $V(T_{n-1}) = 0$ for all $n >0$ and the optimal strategy for any $T_{n-1}$ position is the trivial one of choosing A, B, or C each with probability 1/3.
Because the value of $T_{n-1}$ is zero, in the position analyses the values and strategies are unaffected if we were to say we don't care how the $2(n-1)$ dollars got into the pot, and that any tie ends the game by splitting the pot. So the game overall for some value of $n-1$ is characterized by which of 4 positions you are at.
From $G_{n-1}$ the game may end (by a tie, in which case the players restart the game at $T_n$ but since that has zero value we don't care), transition to $H_{n-1}$ with player I gaining a dollar, or transition to $G^{-}_{n-1}$ with player II gaining a dollar.
From $H_{n-1}$ the game may end by a tie, transition to $G_{n-1}$ with player I gaining a dollar having won using strategy B or C, transition to $G^{-}_{n-1}$ with player II gaining a dollar, or end with player I winning the pot (gaining $n$ dollars altogether) by winning using strategy A.
For $n=1$, with no money in the pot, there is no advantage at all to having a winning streak, the position values are all zero, and the optimal strategies are the trivial equal-probability strategies. All the interesting features are in games where there is something in the pot.
The two game matrices (for $n>1$) are:$$G_{n-1} = \left(\begin{array}{ccc}0 & -g & h \\+g & 0 & -g \\-g & g & 0\end{array}\right)$$(where for convenience we introduce $g \equiv 1+V(G_{n-1})$ and $h \equiv 1+V(H_{n-1})$) and $$H_{n-1} = \left(\begin{array}{ccc}0 & -g & n \\+g & 0 & -g \\-g & g & 0\end{array}\right)$$
To solve $G_{n-1}$ we do the usual mantra of subtracting columns and taking determinants:$$G_{n-1}:\begin{array}{ccc|cc|c}0 & -g & h & g & -h & 3g^2 \\+g & 0 & -g & g & 2g & g^2 + 2hg \\-g & g & 0 & -2g & -g & 2g^2 + hg \\\hline-g & g & h+g & & & \\g & -2g & h & & & \\\hline2g^2 + hg & g^2 + 2hg & 3g^2 & & & \end{array}$$The game value is $\frac{hg-g^2}{3h+6g}$. So $$g = 1 + V(G_{n-1}) = 1 + \frac{hg-g^2}{3h+6g}$$
Similarly, we solve $H_{n-1}$:$$H_{n-1}:\begin{array}{ccc|cc|c}0 & -g & n & g & -n & 3g^2 \\+g & 0 & -g & g & 2g & g^2 + 2ng \\-g & g & 0 & -2g & -g & 2g^2 + ng \\\hline-g & g & n+g & & & \\g & -2g & n & & & \\\hline2g^2 + ng & g^2 + 2ng & 3g^2 & & & \end{array}$$The game value is $\frac{ng-g^2}{3n+6g}$. So $$h = 1 + \frac{ng-g^2}{3n+6g}$$
Before working with these two equations, we can look at the situation for $n$ very large. There, $ h = 1 + g/3 + O(1/n)$ and we can substitute to find that $$\begin{array}{l}g = \frac{15+\sqrt{1053}}{46} \approx 1.031521 \\V(G_{\infty}) \approx 0.031521 \\h = 1 + g/3 \approx 1.34384 \\V(H_{\infty}) \approx 0.34384 \end{array}$$That is, with a very large pot, if player I has a winning streak of 2 wins playing strategy A, player II will very rarely (probability $\frac{3g}{3n+6g}$) risk using strategy C, which could result in losing the whole pot. Player I will almost always choose strategies B (about 2/3 of the time) or C (about 1/3), and the game is favorable to player I with a value of $+\frac{1}{3}$. And if player I has just a 1 game winning streak, then the roughly 1.34 reward in the upper right hand corner (going over to $H_{n-1}$ with another A win) biases the game in player I's favor, but only by about 0.03.
Okay, now look at the case for $n$ not large enough to ignore $1/n$ effects: Substituting the expression for $h$ into the expression for $g$ we find$$40g^3 + (23n-21)g^2 - (15n+18)g - 9n = 0$$
So for example, if there is $2 \times 1$ dollars in the pot ($n = 2$) then the value of the game to a player with a winning streak is $G_1 \approx 0.008118$ and the value of a two game winning streak is $H_1 \approx 0.082991$. In game $G_1$ the strategy for player I is to choose A (the winning streak choice) to (B which beats A) to C in ratio $$3g : g + 2h : 2g + h = 3.024 : 3.172 : 3.099$$ and in game $H_1$ the ratios are$$3g : g + 2n : 2g + n = 3.024 : 5.008 : 4.016$$The game values for $n = 11$ (ten dollars in the pot) are $G_{10} = 0.024413$ and $H_{10} = 0.261048$; the strategies for player I in game $H_{10}$ are in ratio$$3.073 : 21.024 : 12.049 \approx 1 : 7 : 4$$and for player II they are in ratio$$2n+g : n+2g : 3g = 21.024 : 12.049 : 3.073 \approx 7 : 4 : 1$$ for A, B and C in that order. |
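As a numerical cross-check of the cubic above (a Python sketch assuming numpy; streak_values is just an illustrative name):

import numpy as np

def streak_values(n):
    # Root of 40 g^3 + (23n - 21) g^2 - (15n + 18) g - 9n = 0 near 1,
    # then h = 1 + (n g - g^2) / (3n + 6g).
    roots = np.roots([40, 23 * n - 21, -(15 * n + 18), -9 * n])
    g = min((r.real for r in roots if abs(r.imag) < 1e-9 and r.real > 0),
            key=lambda r: abs(r - 1))
    h = 1 + (n * g - g ** 2) / (3 * n + 6 * g)
    return g - 1, h - 1        # V(G_{n-1}), V(H_{n-1})

for n in (2, 11, 10 ** 6):
    print(n, streak_values(n))
# n = 2 and n = 11 reproduce the values quoted above; very large n approaches (0.0315, 0.3438).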
Developing flow From Thermal-FluidsPedia
For forced convective heat and mass transfer with constant properties, the hydrodynamic entrance length is independent of Pr or Sc. When assuming fully developed flow, the point at which the temperature profile becomes fully developed for forced convection in tubes is linearly proportional to RePr.
Analysis of these criteria for a fully developed flow and temperature profile shows that when Pr ≫ 1, as is the case with fluids with high viscosities such as oils, the temperature profile takes a longer distance to completely develop. In these circumstances (Pr ≫ 1), it makes sense to assume fully developed velocity since the thermal entrance is much longer than the hydrodynamic entrance.
Obviously, from the definition of Prandtl number and the above criteria, one expects that when Pr ≈ 1 for fluids such as gases, the temperature and velocity develop at the same rate. When Pr ≪ 1, as in the case of liquid metals, the temperature profile will develop much faster than the velocity profile, and therefore a uniform velocity assumption (slug flow) is appropriate.
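As a rough illustration of these regimes, the following Python sketch uses the commonly quoted laminar estimates L_h ≈ 0.05 Re D and L_t ≈ 0.05 Re Pr D (standard textbook correlations, not taken from this article):

def entrance_lengths(Re, Pr, D):
    # Commonly quoted laminar-flow estimates (illustrative only):
    # hydrodynamic entrance length ~ 0.05 Re D, thermal entrance length ~ 0.05 Re Pr D.
    return 0.05 * Re * D, 0.05 * Re * Pr * D

# Oil-like fluid (Pr >> 1): the thermal entrance is much longer than the hydrodynamic one.
print(entrance_lengths(Re=500, Pr=100, D=0.01))
# Liquid metal (Pr << 1): the temperature profile develops much faster than the velocity profile.
print(entrance_lengths(Re=500, Pr=0.01, D=0.01))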
Similar analysis and conclusions can be made with the Schmidt number, Sc, relative to mass transfer problems concerning the entrance effects due to mass diffusion. If one needs to get detailed information concerning the hydrodynamic, thermal or concentration entrance effects, the conservation equations should be solved without a fully developed velocity, concentration, or temperature profile.
Consider laminar forced convective heat and mass transfer in a circular tube for the case of steady two-dimensional constant properties. The inlet velocity, temperature and concentration are uniform at the entrance with the possibility of mass transfer between the wall and fluid, as shown in figure to the right.
The conservation equations with the above assumptions, as well as neglecting the viscous dissipation and assuming an incompressible Newtonian fluid, are
Typical boundary conditions are:

Axial velocity at wall: $u(x, r_o) = 0$ (no-slip boundary condition)

Radial velocity at wall: $v(x, r_o) = \begin{cases} v_w = 0 & \text{impermeable wall} \\ v_w > 0 \ \text{injection, } v_w < 0 \ \text{suction} & \\ \dot{m}''_w = \text{mass flux due to diffusion} = \rho\left[\omega_{1,w} v_w - D_{12}\left.\dfrac{\partial \omega_1}{\partial r}\right|_{r=r_o}\right] & \end{cases}$

Thermal condition at the wall ($r = r_o$): $T_w = \text{const.}$, or $q''_w = -k\left.\dfrac{\partial T}{\partial r}\right|_{r=r_o} = \text{const.}$, or $T_w = f(x)$, or $q''_w = g(x)$

Inlet condition at $x = 0$: $T = T_{in}$, $\omega_1 = \omega_{1,in}$, $u = u_{in}$

Outlet condition at $x = L$: $T = ?$, $\omega_1 = ?$, $u = ?$, $P = ?$
Clearly there are five partial differential equations and five unknowns ($u$, $v$, $P$, $T$, $\omega_1$). All equations are of elliptic nature and one can neglect the axial diffusion terms, $\left( \frac{\partial^2 u}{\partial x^2},\ \frac{\partial^2 v}{\partial x^2},\ \frac{\partial^2 T}{\partial x^2},\ \frac{\partial^2 \omega_1}{\partial x^2} \right)$, under some circumstances in order to make the conservation equations of parabolic nature. These axial diffusion terms can also be neglected under boundary layer assumptions.
Making boundary layer assumptions makes the result invalid very close to the tube entrance where the Reynolds number is very small. Shah and London [1] showed that the momentum boundary layer assumption will lead to error if Re < 400 and $L_H/D < 0.005\,\mathrm{Re}$. In these circumstances, the full Navier-Stokes equations should be solved.
It was also shown in Basics of Internal Forced Convection that there are circumstances other than boundary layer assumptions where axial diffusion terms, such as the axial conduction term, can be neglected. However, as we showed in the case of the energy equation, one cannot neglect axial conduction for a very low Prandtl number despite the thermal boundary layer assumption.
In general, elliptic equations are more complex to solve analytically or numerically than parabolic equations. Furthermore, to solve the equations as elliptic you need pertinent information at the outlet as well, which in some cases is unknown. The momentum equation is nonlinear while the energy equation is linear under the constant property assumption.
In most cases, the momentum, energy, and species equations are uncoupled, except under the following circumstances which make the equations coupled.
1. Variable properties, such as density variation as a function of temperature in natural convection problems.
2. Coupled governing equations and/or boundary conditions in phase change problems, such as absorption or dissolution problems.
3. Existence of a source term in one conservation equation that is a function of the dependent variable in another conservation equation.
Langhaar obtained approximate solutions for the momentum equation for circular tubes by solving the linearized momentum equation. Hornbeck [2] solved the momentum equation numerically by making boundary layer assumptions (parabolic form). Several investigators solved the energy equation either using Langhaar's approximate velocity profile, or solving the momentum and energy equations numerically for both constant wall temperature and constant wall heat flux in circular tubes. Heat transfer in the hydrodynamic and thermal entrance region has been solved numerically based on the full elliptic governing equations [3] (Bahrami, H., 2009, Personal Communication, Storrs, CT).
Variations of local and average Nusselt numbers for different Prandtl numbers under constant wall temperature and constant heat flux using full elliptic governing equations are shown in the two figures to the right, respectively. The local and average Nusselt numbers for different Prandtl numbers and boundary conditions are also presented in the following two tables.
Table Local and average Nusselt number for the entrance region of a circular tube with constant wall temperature
x+       Nu_x, Pr=0.7   Nu_x, Pr=2   Nu_x, Pr=5   Nu_m, Pr=0.7   Nu_m, Pr=2   Nu_m, Pr=5
0.001    17.0           12.5         10.6         60.7           34.9         24.9
0.002    11.6           8.86         8.04         37.3           22.6         17.0
0.004    8.29           6.64         6.30         23.3           15.1         12.0
0.008    6.29           5.23         5.09         14.9           10.4         8.80
0.01     5.81           4.89         4.80         13.1           9.36         8.03
0.02     4.69           4.12         4.12         8.9            6.89         6.21
0.04     4.01           3.75         3.76         6.46           5.39         5.06
0.06     3.79           3.67         3.68         5.56           4.83         4.61
0.08     3.71           3.66         3.66         5.09           4.54         4.37
0.1      3.68           3.66         3.66         4.81           4.36         4.23
0.12     3.66           3.66         3.66         4.62           4.24         4.14
∞        3.66           3.66         3.66         3.66           3.66         3.66

Table Local and mean Nusselt number for the entrance region of a circular tube with constant wall heat flux
x+       Nu_x, Pr=0.7   Nu_x, Pr=2   Nu_x, Pr=5   Nu_m, Pr=0.7   Nu_m, Pr=2   Nu_m, Pr=5
0.001    23.0           17.8         15.0         61.0           43.4         34.2
0.002    15.6           12.5         11.1         39.8           29.0         23.5
0.004    10.9           9.16         8.44         26.3           19.8         16.5
0.008    7.92           7.01         6.65         17.7           13.8         11.9
0.01     7.24           6.49         6.22         15.7           12.4         10.8
0.02     5.69           5.30         5.21         11.0           9.11         8.23
0.04     4.81           4.64         4.62         8.09           7.00         6.54
0.06     4.53           4.46         4.45         6.94           6.18         5.87
0.08     4.43           4.39         4.39         6.33           5.74         5.51
0.1      4.39           4.37         4.37         5.95           5.47         5.29
0.12     4.37           4.36         4.36         5.69           5.29         5.13
0.16     4.36           4.36         4.36         5.36           5.23         4.94
∞        4.36           4.36         4.36         4.36           4.36         4.36
Heaton et al. [4] approximated the result for linearized momentum and energy equations using the energy equation for constant wall heat flux for a group of circular tube annulus for several Prandtl numbers. The following table summarizes the results for parallel plates and circular annulus.
Table Local Nusselt number for the entrance region of a group of circular Tube Annulus with Constant Wall Heat Flux
                Parallel planes           Circular-tube annulus, K = 0.50
Pr              Nu_11     θ*_1            Nu_ii     Nu_oo     θ*_i      θ*_o
0.01            24.2      0.048           --        24.2      --        0.0322
                11.7      0.117           --        11.8      --        0.0786
                8.8       0.176           9.43      8.9       0.252     0.118
                5.77      0.378           6.4       5.88      0.525     0.231
                5.53      0.376           6.22      5.6       0.532     0.238
                5.39      0.346           6.18      5.04      0.528     0.216
0.7             18.5      0.037           19.22     18.3      0.0513    0.0243
                9.62      0.096           10.47     9.45      0.139     0.063
                7.68      0.154           8.52      7.5       0.228     0.0998
                5.55      0.327           6.35      5.27      0.498     0.207
                5.4       0.345           6.19      5.06      0.527     0.215
                5.39      0.346           6.18      5.04      0.528     0.216
10              15.6      0.0311          16.86     15.14     0.045     0.0201
                9.2       0.092           10.2      8.75      0.136     0.0583
                7.49      0.149           8.43      7.09      0.224     0.0943
                5.55      0.327           6.35      5.2       0.498     0.204
                5.4       0.345           6.19      5.05      0.527     0.215
                5.39      0.346           6.18      5.04      0.528     0.216 |
Another method, not covered by the answers above, is finite automaton transformation. As a simple example, let us show that the regular languages are closed under the shuffle operation, defined as follows:$$L_1 \mathop{S} L_2 = \{ x_1y_1 \ldots x_n y_n \in \Sigma^* : x_1 \ldots x_n \in L_1, y_1 \ldots y_n \in L_2 \}$$You can show closure under shuffle using closure properties, but you can also show it directly using DFAs. Suppose that $A_i = \langle \Sigma, Q_i, F_i, \delta_i, q_{0i} \rangle$ is a DFA that accepts $L_i$ (for $i=1,2$). We construct a new DFA $\langle \Sigma, Q, F, \delta, q_0 \rangle$ as follows: The set of states is $Q_1 \times Q_2 \times \{1,2\}$, where the third component remembers whether the next symbol is an $x_i$ (when 1) or a $y_i$ (when 2). The initial state is $q_0 = \langle q_{01}, q_{02}, 1 \rangle$. The accepting states are $F = F_1 \times F_2 \times \{1\}$. The transition function is defined by $\delta(\langle q_1, q_2, 1 \rangle, \sigma) = \langle \delta_1(q_1,\sigma), q_2, 2 \rangle$ and $\delta(\langle q_1, q_2, 2 \rangle, \sigma) = \langle q_1, \delta_2(q_2,\sigma), 1 \rangle$.
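The construction is easy to try out on small machines. The following Python sketch is only an illustration of the product construction described above (the tuple encoding of a DFA and the helper names are my own choices):

def shuffle_dfa(A1, A2):
    # Each DFA is (states, alphabet, delta, start, accepting) with delta a dict {(q, a): q'}.
    Q1, S, d1, s1, F1 = A1
    Q2, _, d2, s2, F2 = A2
    states = {(q1, q2, t) for q1 in Q1 for q2 in Q2 for t in (1, 2)}
    delta = {}
    for (q1, q2, t) in states:
        for a in S:
            if t == 1:
                delta[((q1, q2, 1), a)] = (d1[(q1, a)], q2, 2)   # this symbol belongs to the x-word
            else:
                delta[((q1, q2, 2), a)] = (q1, d2[(q2, a)], 1)   # this symbol belongs to the y-word
    return states, S, delta, (s1, s2, 1), {(f1, f2, 1) for f1 in F1 for f2 in F2}

def accepts(dfa, word):
    _, _, delta, q, F = dfa
    for a in word:
        q = delta[(q, a)]
    return q in F

# Example: A1 accepts words over {0,1} with an even number of 1s, A2 with an odd number.
par = lambda F: ({0, 1}, {"0", "1"}, {(q, a): q ^ (a == "1") for q in (0, 1) for a in "01"}, 0, F)
shuffled = shuffle_dfa(par({0}), par({1}))
print(accepts(shuffled, "0101"))   # x = "00" (even), y = "11" (even, not odd) -> rejected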
A more sophisticated version of this method involves guessing. As an example, let us show that regular languages are closed under reversal, that is,$$ L^R = \{ w^R : w \in L \}. $$(Here $(w_1\ldots w_n)^R = w_n \ldots w_1$.) This is one of the standard closure operations, and closure under reversal easily follows from manipulation of regular expressions (which may be regarded as the counterpart of finite automaton transformation to regular expressions) – just reverse the regular expression. But you can also prove closure using NFAs. Suppose that $L$ is accepted by a DFA $\langle \Sigma, Q, F, \delta, q_0 \rangle$. We construct an NFA $\langle \Sigma, Q', F', \delta', q'_0 \rangle$, where The set of states is $Q' = Q \cup \{q'_0\}$. The initial state is $q'_0$. The unique accepting state is $q_0$. The transition function is defined as follows: $\delta'(q'_0,\epsilon) = F$, and for any state $q \in Q$ and $\sigma \in \Sigma$, $\delta'(q', \sigma) = \{ q : \delta(q,\sigma) = q' \}$.
(We can get rid of $q'_0$ if we allow multiple initial states.) The guessing component here is the final state of the word after reversal.
Guessing often involves also verifying. One simple example is closure under rotation:$$ R(L) = \{ yx \in \Sigma^* : xy \in L \}. $$Suppose that $L$ is accepted by the DFA $\langle \Sigma, Q, F, \delta, q_0 \rangle$. We construct an NFA $\langle \Sigma, Q', F', \delta', q'_0 \rangle$, which operates as follows. The NFA first guesses $q=\delta(q_0,x)$. It then verifies that $\delta(q,y) \in F$ and that $\delta(q_0,x) = q$, moving from $y$ to $x$ non-deterministically. This can be formalized as follows: The states are $Q' = \{q'_0\} \cup Q \times Q \times \{1,2\}$. Apart from the initial state $q'_0$, the states are $\langle q,q_{curr}, s \rangle$, where $q$ is the state that we guessed, $q_{curr}$ is the current state, and $s$ specifies whether we are at the $y$ part of the input (when 1) or at the $x$ part of the input (when 2). The final states are $F' = \{\langle q,q,2 \rangle : q \in Q\}$: we accept when $\delta(q_0,x)=q$. The transitions $\delta'(q'_0,\epsilon) = \{\langle q,q,1 \rangle : q \in Q\}$ implement guessing $q$. The transitions $\delta'(\langle q,q_{curr},s \rangle, \sigma) = \langle q,\delta(q_{curr},\sigma),s \rangle$ (for every $q,q_{curr} \in Q$ and $s \in \{1,2\}$) simulate the original DFA. The transitions $\delta'(\langle q,q_f,1 \rangle, \epsilon) = \langle q,q_0,2 \rangle$, for every $q \in Q$ and $q_f \in F$, implement moving from the $y$ part to the $x$ part. This is only allowed if we've reached a final state on the $y$ part.
Another variant of the technique incorporates bounded counters. As an example, let us consider closure under edit distance:$$ E_k(L) = \{ x \in \Sigma^* : \text{ there exists $y \in L$ whose edit distance from $x$ is at most $k$} \}. $$Given a DFA $\langle \Sigma, Q, F, \delta, q_0 \rangle$ for $L$, we construct an NFA $\langle \Sigma, Q', F', \delta', q'_0 \rangle$ for $E_k(L)$ as follows: The set of states is $Q' = Q \times \{0,\ldots,k\}$, where the second item counts the number of changes done so far. The initial state is $q'_0 = \langle q_0,0 \rangle$. The accepting states are $F' = F \times \{0,\ldots,k\}$. For every $q,\sigma,i$ we have transitions $\langle \delta(q,\sigma), i \rangle \in \delta'(\langle q,i \rangle, \sigma)$. Insertions are handled by transitions $\langle q,i+1 \rangle \in \delta'(\langle q,i \rangle, \sigma)$ for all $q,\sigma,i$ such that $i < k$. Deletions are handled by transitions $\langle \delta(q,\sigma), i+1 \rangle \in \delta'(\langle q,i \rangle, \epsilon)$ for all $q,\sigma,i$ such that $i < k$. Substitutions are similarly handled by transitions $\langle \delta(q,\sigma), i+1 \rangle \in \delta'(\langle q,i \rangle, \tau)$ for all $q,\sigma,\tau,i$ such that $i < k$. |
And I think people said that reading first chapter of Do Carmo mostly fixed the problems in that regard. The only person I asked about the second pset said that his main difficulty was in solving the ODEs
Yeah here there's the double whammy in grad school that every grad student has to take the full year of algebra/analysis/topology, while a number of them already don't care much for some subset, and then they only have to pass the class
I know 2 years ago apparently it mostly avoided commutative algebra, half because the professor himself doesn't seem to like it that much and half because he was like yeah the algebraists all place out so I'm assuming everyone here is an analyst and doesn't care about commutative algebra
Then the year after another guy taught and made it mostly commutative algebra + a bit of varieties + Cech cohomology at the end from nowhere and everyone was like uhhh. Then apparently this year was more of an experiment, in part from requests to make things more geometric
It's got 3 "underground" floors (quotation marks because the place is on a very tall hill so the first 3 floors are a good bit above the the street), and then 9 floors above ground. The grad lounge is in the top floor and overlooks the city and lake, it's real nice
The basement floors have the library and all the classrooms (each of them has a lot more area than the higher ones), floor 1 is basically just the entrance, I'm not sure what's on the second floor, 3-8 is all offices, and 9 has the grad lounge mainly
And then there's one weird area called the math bunker that's trickier to access, you have to leave the building from the first floor, head outside (still walking on the roof of the basement floors), go to this other structure, and then get in. Some number of grad student cubicles are there (other grad students get offices in the main building)
It's hard to get a feel for which places are good at undergrad math. Highly ranked places are known for having good researchers but there's no "How well does this place teach?" ranking which is kinda more relevant if you're an undergrad
I think interest might have started the trend, though it is true that grad admissions now is starting to make it closer to an expectation (friends of mine say that for experimental physics, classes and all definitely don't cut it anymore)
In math I don't have a clear picture. It seems there are a lot of Mickey Mouse projects that don't seem to help people much, but more and more people seem to do more serious things and that seems to be becoming a bonus
One of my professors said it to describe a bunch of REUs, basically boils down to problems that some of these give their students which nobody really cares about but which undergrads could work on and get a paper out of
@TedShifrin i think universities have been ostensibly a game of credentialism for a long time, they just used to be gated off to a lot more people than they are now (see: ppl from backgrounds like mine) and now that budgets shrink to nothing (while administrative costs balloon) the problem gets harder and harder for students
In order to show that $x=0$ is asymptotically stable, one needs to show that $$\forall \varepsilon > 0, \; \exists\, T > 0 \; \mathrm{s.t.} \; t > T \implies || x ( t ) - 0 || < \varepsilon.$$The intuitive sketch of the proof is that one has to fit a sublevel set of continuous functions $...
"If $U$ is a domain in $\Bbb C$ and $K$ is a compact subset of $U$, then for all holomorphic functions on $U$, we have $\sup_{z \in K}|f(z)| \leq C_K \|f\|_{L^2(U)}$ with $C_K$ depending only on $K$ and $U$" this took me way longer than it should have
Well, $A$ has these two distinct eigenvalues meaning that $A$ can be diagonalised to a diagonal matrix with these two values as its diagonal. What will that mean when multiplied to a given vector (x,y) and how will the magnitude of that vector change?
Alternately, compute the operator norm of $A$ and see if it is larger or smaller than 2, 1/2
Generally, speaking, given. $\alpha=a+b\sqrt{\delta}$, $\beta=c+d\sqrt{\delta}$ we have that multiplication (which I am writing as $\otimes$) is $\alpha\otimes\beta=(a\cdot c+b\cdot d\cdot\delta)+(b\cdot c+a\cdot d)\sqrt{\delta}$
Yep, the reason I am exploring alternative routes of showing associativity is because writing out three elements worth of variables is taking up more than a single line in Latex, and that is really bugging my desire to keep things straight.
hmm... I wonder if you can argue about the rationals forming a ring (hence using commutativity, associativity and distributivitity). You cannot do that for the field you are calculating, but you might be able to take shortcuts by using the multiplication rule and then properties of the ring $\Bbb{Q}$
for example writing $x = ac+bd\delta$ and $y = bc+ad$ we then have $(\alpha \otimes \beta) \otimes \gamma = (xe +yf\delta) + (ye + xf)\sqrt{\delta}$ and then you can argue with the ring property of $\Bbb{Q}$ thus allowing you to deduce $\alpha \otimes (\beta \otimes \gamma)$
I feel like there's a vague consensus that an arithmetic statement is "provable" if and only if ZFC proves it. But I wonder what makes ZFC so great, that it's the standard working theory by which we judge everything.
I'm not sure if I'm making any sense. Let me know if I should either clarify what I mean or shut up. :D
Associativity proofs in general have no shortcuts for arbitrary algebraic systems, that is why non associative algebras are more complicated and need things like Lie algebra machineries and morphisms to make sense of
One aspect, which I will illustrate, of the "push-button" efficacy of Isabelle/HOL is its automation of the classic "diagonalization" argument by Cantor (recall that this states that there is no surjection from the naturals to its power set, or more generally any set to its power set).theorem ...
The axiom of triviality is also used extensively in computer verification languages... take Cantor's Diagonalization theorem. It is obvious.
(but seriously, the best tactic is over powered...)
Extensions is such a powerful idea. I wonder if there exists algebraic structure such that any extensions of it will produce a contradiction. O wait, there a maximal algebraic structures such that given some ordering, it is the largest possible, e.g. surreals are the largest field possible
It says on Wikipedia that any ordered field can be embedded in the Surreal number system. Is this true? How is it done, or if it is unknown (or unknowable) what is the proof that an embedding exists for any ordered field?
Here's a question for you: We know that no set of axioms will ever decide all statements, from Gödel's Incompleteness Theorems. However, do there exist statements that cannot be decided by any set of axioms except ones which contain one or more axioms dealing directly with that particular statement?
"Infinity exists" comes to mind as a potential candidate statement.
Well, take ZFC as an example, CH is independent of ZFC, meaning you cannot prove nor disprove CH using anything from ZFC. However, there are many equivalent axioms to CH or derives CH, thus if your set of axioms contain those, then you can decide the truth value of CH in that system
@Rithaniel That is really the crux of those rambles about infinity I made in this chat some weeks ago. I wonder if I can show that is false by finding a finite sentence and procedure that can produce infinity
but so far failed
Put it in another way, an equivalent formulation of that (possibly open) problem is:
> Does there exists a computable proof verifier P such that the axiom of infinity becomes a theorem without assuming the existence of any infinite object?
If you were to show that you can attain infinity from finite things, you'd have a bombshell on your hands. It's widely accepted that you can't. In fact, I believe there are some proofs floating around that you can't attain infinity from the finite.
My philosophy of infinity however is not good enough as implicitly pointed out when many users who engaged with my rambles always managed to find counterexamples that escape every definition of an infinite object I proposed, which is why you don't see my rambles about infinity in recent days, until I finish reading that philosophy of infinity book
The knapsack problem or rucksack problem is a problem in combinatorial optimization: Given a set of items, each with a weight and a value, determine the number of each item to include in a collection so that the total weight is less than or equal to a given limit and the total value is as large as possible. It derives its name from the problem faced by someone who is constrained by a fixed-size knapsack and must fill it with the most valuable items.The problem often arises in resource allocation where there are financial constraints and is studied in fields such as combinatorics, computer science...
O great, given a transcendental $s$, computing $\min_P(|P(s)|)$ is a knapsack problem
hmm...
By the fundamental theorem of algebra, every complex polynomial $P$ can be expressed as:
$$P(x) = \prod_{k=0}^n (x - \lambda_k)$$
If the coefficients of $P$ are natural numbers, then all $\lambda_k$ are algebraic
Thus, given $s$ transcendental, minimising $|P(s)|$ can be set up as follows:
The first thing I think of with that particular one is to replace the $(1+z^2)$ with $z^2$. Though, this is just at a cursory glance, so it would be worth checking to make sure that such a replacement doesn't have any ugly corner cases.
In number theory, a Liouville number is a real number x with the property that, for every positive integer n, there exist integers p and q with q > 1 such that $0 < \left|x - \frac{p}{q}\right| < \frac{1}{q^n}$.
Do these still exist if the axiom of infinity is blown up?
Hmmm...
Under a finitist framework where only potential infinity in the form of natural induction exists, define the partial sum:
$$\sum_{k=1}^M \frac{1}{b^{k!}}$$
The resulting partial sums for each M form a monotonically increasing sequence, which converges by ratio test
therefore by induction, there exists some number $L$ that is the limit of the above partial sums. The proof of transcendence can then proceed as usual; thus transcendental numbers can be constructed in a finitist framework
There's this theorem in Spivak's book of Calculus:Theorem 7Suppose that $f$ is continuous at $a$, and that $f'(x)$ exists for all $x$ in some interval containing $a$, except perhaps for $x=a$. Suppose, moreover, that $\lim_{x \to a} f'(x)$ exists. Then $f'(a)$ also exists, and$$f'...
and neither Rolle nor mean value theorem need the axiom of choice
Thus under finitism, we can construct at least one transcendental number. If we throw away all transcendental functions, it means we can construct a number that cannot be reached from any algebraic procedure
Therefore, the conjecture is that actual infinity has a close relationship to transcendental numbers. Anything else I need to finish that book to comment
typo: neither Rolle nor mean value theorem need the axiom of choice nor an infinite set
> are there palindromes such that the explosion of palindromes is a palindrome nonstop palindrome explosion palindrome prime square palindrome explosion palirome prime explosion explosion palindrome explosion cyclone cyclone cyclone hurricane palindrome explosion palindrome palindrome explosion explosion cyclone clyclonye clycone mathphile palirdlrome explosion rexplosion palirdrome expliarome explosion exploesion |
As I understand it, the definition of the hazard ratio is the ratio of two hazard rates. Often the exp(coef) from a Cox model is also used as an estimate of the hazard ratio. These methods give two different, although similar, results. Are there circumstances in which it's appropriate to use one method vs. the other?
I would suggest that you write the model you are working with, and then compute the hazard ratio you are interested in, in order to know how to estimate it from your $\hat{\beta}$'s.
Example
Suppose your Cox's proportional hazards model takes the form $$h(t) = h_0(t) \exp(x_1 \beta_1 + x_2 \beta_2)$$ where $h_0$ is the baseline hazard function, and $\beta_1$ and $\beta_2$ are regression coefficients associated with the covariates $x_1$ and $x_2$. Suppose further that $x_1$ is binary ($0$ or $1$).
$$\frac{h(t | x_1=1, x_2)}{h(t | x_1=0, x_2)} = \exp(\beta_1).$$ So, $\exp(\beta_1)$ is a conditional (given the value of $x_2$) hazard ratio ($x_1 = 1$ versus $x_1 = 0$).
In your comment you make clear that you are comparing the weighted instantaneous hazard ratio (i.e., exp(beta)) with some summary measure of the ratios of cumulative hazards (-log(frac.surv pop2)*time). I think you will find that they are roughly the same during the early phases of follow-up but they diverge as time increases. The reason for this is that as survival approaches zero the ratio of cumulative survival necessarily goes to unity, whereas the hazard ratio may not. |
Basically, what you're saying is that a homogeneous electric field $\vec{E}(\vec{r},t)=\cos \omega t \ \hat {z}$ in vacuum requires you to have a magnetic field that satisfies$$\nabla \times \mathbf{B} =\frac{1}{c^2}\frac{\partial \mathbf{E}}{\partial t},$$giving you a non-homogeneous magnetic field, which would then require the electric field to satisfy$$\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t},$$that therefore needs an inhomogeneous electric field, in conflict with your initial assumption? Or, in other words, you're saying that the electric field assumed by the question, $\vec{E}(\vec{r},t)=\cos \omega t \ \hat {z}$, is inconsistent with the Maxwell equations?
If that is what you're saying, then: good! you're right. That electric field is not, strictly speaking, compatible with the Maxwell equations.
It is, however, an excellent and interesting approximation to useful solutions under suitable approximations (specifically, the dipole approximation). And, under that approximation, you neglect both the magnetic field and the conflict that this creates with the Ampère-Maxwell law. This obviously means that the fields are not precisely correct, and that there is the possibility of non-dipole effects (which come in a hierarchy, starting with magnetic dipole effects and then through electric quadrupole field dependences, and then upwards), but if the system is localized enough then those effects can be completely negligible. |
Homework is due by 4:30 pm next Wednesday, October 17th and can be handed in outside 329 Annenberg, or electronically by emailing keenan@cs.caltech.edu. Homework will be graded not only on the basis of correctness, but also on presentation! Remember that proofs are meant for human beings, not for machines. Legible handwriting, complete sentences, an outline of your approach, and illustrations (where appropriate) are all strongly recommended if you wish to receive a good score.
Euler Characteristic
Exercise 1.1: Polyhedral Formula A topological disk is, roughly speaking, any shape you can get by deforming the region bounded by a circle without tearing it, puncturing it, or gluing its edges together. Some examples of shapes that are disks include a flag, a leaf, and a glove. Some examples of shapes that are not disks include a circle (why?), a ball, a sphere, a DVD, a donut, and a teapot. A polygonal disk is any disk constructed out of simple polygons. Similarly, a topological sphere is any shape resembling the standard sphere, and a polyhedron is a sphere made of polygons. More generally, a piecewise linear surface is any surface made by gluing together polygons along their edges; a simplicial surface is a special case of a piecewise linear surface where all the faces are triangles. The boundary of a piecewise linear surface is the set of edges that are contained in only a single face (all other edges are shared by exactly two faces). For example, a disk has a boundary whereas a polyhedron does not. You may assume that surfaces have no boundary unless otherwise stated.
Show that for any polygonal disk with $V$ vertices, $E$ edges, and $F$ faces, the following relationship holds:
\[ V - E + F = 1 \]
and conclude that for any polyhedron $V - E + F = 2$.
Hint: use induction. Note that induction is generally easier if you start with a given object and decompose it into smaller pieces rather than trying to make it larger, because there are fewer cases to think about.
Euler-Poincaré Formula
Clearly not all surfaces look like disks or spheres. Some surfaces have additional handles that distinguish them topologically; the number of handles $g$ is known as the genus of the surface (see illustration above for examples). In fact, among all surfaces that have no boundary and are connected (meaning a single piece), compact (meaning closed and contained in a ball of finite size), and orientable (having two distinct sides), the genus is the only thing that distinguishes two surfaces. A more general formula applies to such surfaces, namely
\[ V - E + F = 2 - 2g, \]
which is known as the Euler-Poincaré formula. (You do not have to prove anything about this statement, but it will be useful in later calculations.)
Tessellation
Exercise 1.2: Regular Valence
The valence of a vertex in a piecewise linear surface is the number of faces that contain that vertex. A vertex of a simplicial surface is said to be regular when its valence equals six. Show that the only (connected, orientable) simplicial surface for which every vertex has regular valence is a torus ($g=1$). You may assume that the surface has finitely many faces. Hint: apply the Euler-Poincaré formula.
Exercise 1.3: Minimally Irregular Valence Show that the minimum possible number of irregular valence vertices in a (connected, orientable) simplicial surface $K$ of genus $g$ is given by
\[
m(K) = \begin{cases} 4, & g = 0 \\ 0, & g = 1 \\ 1, & g \geq 2, \end{cases} \]
assuming that all vertices have valence at least three and that there are finitely many faces.
Exercise 1.4: Mean Valence Show that the mean valence approaches six as the number of vertices in a (connected, orientable) simplicial surface goes to infinity, and that the ratio of vertices to edges to faces hence approaches
\[ V:E:F = 1:3:2. \]
Discrete Gaussian Curvature
Exercise 1.5: Area of a Spherical Triangle Show that the area of a spherical triangle on the unit sphere with interior angles $\alpha_1$, $\alpha_2$, $\alpha_3$ is
\[ A = \alpha_1 + \alpha_2 + \alpha_3 - \pi. \]
Hint: consider the areas $A_1$, $A_2$, $A_3$ of the three shaded regions (called “diangles”) pictured below.
Exercise 1.6: Area of a Spherical Polygon Show that the area of a spherical polygon with consecutive interior angles $\beta_1, \ldots, \beta_n$ is
\[ A = (2-n) \pi + \sum_{i=1}^n \beta_i. \]
Hint: use the expression for the area of a spherical triangle you just derived!
Exercise 1.7: Angle Defect Recall that for a discrete planar curve we can define the curvature at a vertex as the distance on the unit circle between the two adjacent normals. For a discrete surface we can define (Gaussian) curvature at a vertex $v$ as the area on the unit sphere bounded by a spherical polygon whose vertices are the unit normals of the faces around $v$. Show that this area is equal to the angle defect
\[ d(v) = 2\pi - \sum_{ f \in F_v } \angle_f(v) \]
where $F_v$ is the set of faces containing $v$ and $\angle_f(v)$ is the interior angle of the face $f$ at vertex $v$.
Hint: consider planes that contain two consecutive normals and their intersection with the unit sphere.
Exercise 1.8: Discrete Gauss-Bonnet Theorem Consider a (connected, orientable) simplicial surface $K$ with finitely many vertices $V$, edges $E$ and faces $F$. Show that a discrete analog of the Gauss-Bonnet theorem holds for simplicial surfaces, namely
\[ \sum_{ v \in V } d(v) = 2 \pi \chi \]
where $\chi = |V| - |E| + |F|$ is the Euler characteristic of the surface. |
An RLC circuit is a simple electric circuit with a resistor, inductor and capacitor in it -- with resistance R, inductance L and capacitance C, respectively. It's one of the simplest circuits that displays non-trivial behavior.
You can derive an equation for the behavior by using Kirchhoff's laws (conservation of the stocks and flows of electrons) and the properties of the circuit elements. Wikipedia does a fine job.
You arrive at a solution for the current as a function of time that looks generically like this (not the most general solution, but a solution):
$$
i(t) = A e^{\left( -\alpha + \sqrt{\alpha^{2} - \omega^{2}} \right) t}
$$
with $\alpha = R/2L$ and $\omega = 1/\sqrt{L C}$. If you fill in some numbers for these parameters, you can get all kinds of behavior:
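To make the parameter dependence concrete, here is a minimal Python sketch (the specific R, L, C values are invented for illustration) that evaluates the solution above for an underdamped and an overdamped choice; the sign of $\alpha^2 - \omega^2$ decides whether you get oscillation or plain decay:

```python
import numpy as np

def rlc_current(t, R, L, C, A=1.0):
    """i(t) = A * exp((-alpha + sqrt(alpha^2 - omega^2)) * t), complex-safe."""
    alpha = R / (2 * L)
    omega = 1 / np.sqrt(L * C)
    s = -alpha + np.sqrt(complex(alpha**2 - omega**2))
    return A * np.exp(s * t)

t = np.linspace(0, 10, 6)
for R, L, C in [(0.2, 1.0, 1.0),   # alpha^2 < omega^2: oscillating current
                (5.0, 1.0, 1.0)]:  # alpha^2 > omega^2: slow monotone decay
    print(R, L, C, np.round(rlc_current(t, R, L, C).real, 3))
```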
As you can tell from that diagram, the Kirchhoff conservation laws don't in any way nail down the behavior of the circuit. The values you choose for R, L and C do. You could have a slowly decaying current or a quickly oscillating one. It depends on R, L and C.
Now you may wonder why I am talking about this on an economics blog. Well, Cullen Roche implicitly asked a question:
Although [stock flow consistent models are] widely used in the Fed and on Wall Street it hasn’t made much impact on more mainstream academic economic modeling techniques for reasons I don’t fully know.
The reason is that the content of stock flow consistent modeling is identical to Kirchhoff's laws. Currents are flows of electrons (flows of money); voltages are stocks of electrons (stocks of money).
Kirchhoff's laws do not in any way nail down the behavior of an RLC circuit.
SFC models do not nail down the behavior of the economy.
If you asked what the impact of some policy was and I gave you the graph above, you'd probably never ask again.
What SFC models do in order to hide the fact that anything could result from an SFC model is effectively assume R = L = C = 1, which gives you this:
I'm sure to get objections to this. There might even be legitimate objections. But I ask of any would-be objector:
How is accounting for money different from accounting for electrons?
Before saying this circuit model is in continuous time, note that there are circuits with clock cycles -- in particular the device you are currently reading this post with.
I can't for the life of me think of any objection, and I showed exactly this problem with a SFC model from Godley and Lavoie:
But to answer Cullen's implicit question -- as the two Mathematica notebooks above show, SFC models don't specify the behavior of an economy without assuming R = L = C = 1 ... that is to say Γ = 1.
Update: Nick Rowe is generally better than me at these things. |
I am working on the following problem:
Let $Y$ be the image of $\mathbb{P}^2$ in $\mathbb{P}^5$ by the Veronese embedding. Let $Z$ be a closed subvariety of $Y$ of dimension 1. Show that there exists a hypersurface $V$ of $ \mathbb{P}^5$ such that $V\cap Y = Z$
This is what I have done so far:
As $Z$ is a subvariety of $Y$ and the Veronese embedding is an injection, I can see the preimage of $Z$, noted $X$, as a closed subvariety of $\mathbb{P}^2$. This means that $X=V(f)$ where $f$ is an irreducible polynomial in $k[X_0,X_1,X_2]$.
Now, I have that $f^2 = g \in k[X_0^2,X_1^2,X_2^2,X_0X_1,X_0X_2,X_1X_2]$.
Then $V(f) \subset V(g)$. I can factor $g$ into irreducible $g=g_1\dots g_n$ where each $g_i \in k[X_0^2,X_1^2,X_2^2,X_0X_1,X_0X_2,X_1X_2]$.
I know there should be one $g_i$ such that $Y\cap V(g_i)=Z$. I don't know how to continue
Is this reasoning right? I have found "easier" ways to prove this but I can't see them clearly (Example Why this property holds in a Veronese surface)
How do I know there is one $g_i$ with such property? |
ISSN: 1547-5816, eISSN: 1553-166X
Journal of Industrial & Management Optimization
October 2013, Volume 9, Issue 4
Abstract:
In this paper, we propose a primal-dual approach for solving the generalized fractional programming problem. The outer iteration of the algorithm is a variant of interval-type Dinkelbach algorithm, while the augmented Lagrange method is adopted for solving the inner min-max subproblems. This is indeed a very unique feature of the paper because almost all Dinkelbach-type algorithms in the literature addressed only the outer iteration, while leaving the issue of how to practically solve a sequence of min-max subproblems untouched. The augmented Lagrange method attaches a set of artificial variables as well as their corresponding Lagrange multipliers to the min-max subproblem. As a result, both the primal and the dual information is available for updating the iterate points and the min-max subproblem is then reduced to a sequence of minimization problems. Numerical experiments show that the primal-dual approach can achieve a better precision in fewer iterations.
Abstract:
In this paper, we consider an optimal investment-consumption problem subject to a closed convex constraint. In the problem, a constraint is imposed on both the investment and the consumption strategy, rather than just on the investment. The existence of solution is established by using the Martingale technique and convex duality. In addition to investment, our technique embeds also the consumption into a family of fictitious markets. However, with the addition of consumption, it leads to nonreflexive dual spaces. This difficulty is overcome by employing the so-called technique of ``relaxation-projection" to establish the existence of solution to the problem. Furthermore, if the solution to the dual problem is obtained, then the solution to the primal problem can be found by using the characterization of the solution. An illustrative example is given with a dynamic risk constraint to demonstrate the method.
Abstract:
Due to globalization and technological advances, increasing competition and falling prices have forced enterprises to reduce cost; this poses new challenges in pricing and replenishment strategy. The study develops a piecewise production-inventory model for a multi-market deteriorating product with time-varying and price-sensitive demand. Optimal product pricing and material replenishment strategy is derived to optimize the manufacturer's total profit. Sensitivity analyses of how the major parameters affect the decision variables were carried out. Finally, the single production cycle is extended to multiple production cycles. We find that the total profit for multiple production cycle increases 5.77/100 when compared with the single production cycle.
Abstract:
The system of absolute value equations $Ax+B|x|=b$, denoted by AVEs, is proved to be NP-hard, where $A, B$ are arbitrary given $n\times n$ real matrices and $b$ is arbitrary given $n$-dimensional vector. In this paper, we reformulate AVEs as a family of parameterized smooth equations and propose a smoothing-type algorithm to solve AVEs. Under the assumption that the minimal singular value of the matrix $A$ is strictly greater than the maximal singular value of the matrix $B$, we prove that the algorithm is well-defined. In particular, we show that the algorithm is globally convergent and the convergence rate is quadratic without any additional assumption. The preliminary numerical results are reported, which show the effectiveness of the algorithm.
Abstract:
We examine the problem of optimal capacity reservation policy on innovative product in a setting of one supplier and one retailer. The parameters of capacity reservation policy are two dimensional: reservation price and excess capacity that the supplier will have in additional to the reservation amount. The above problem is analyzed using a two-stage Stackelberg game. In the first stage, the supplier announces the capacity reservation policy. The retailer forecasts the future demand and then determines the reservation amount. After receiving the reservation amount, the supplier expands the capacity. In the second stage, the uncertainty in demand is resolved and the retailer places a firm order. The supplier salvages the excess capacity and the associated payments are made.
In the paper, with exogenous reservation price or exogenous excess capacity level, we study the optimal expansion policy and then investigate the impacts of reservation price or excess capacity level on the optimal strategies. Finally, we characterize Nash Equilibrium and derive the optimal capacity reservation policy, in which the supplier will adopt exact capacity expansion policy.
Abstract:
This paper develops three (re)ordering models of a supply chain consisting of one risk-neutral manufacturer and one loss-averse retailer to study the coordination mechanism and the effects of the reordering policy on the coordination mechanism. The three (re)ordering policies are twice ordering policy with break-even quantity, twice ordering policy without break-even quantity and once ordering policy, respectively. We design a buyback-setup-cost-sharing mechanism to coordinate the supply chain for each policy, and Pareto analysis indicates that both the manufacturer and the retailer will realize a 'win-win' situation. By comparing the models, we find that twice ordering policy with break-even quantity is absolutely dominant for both the retailer and the supply chain. However, only if the break-even quantity is less than the mean quantity to failure, twice ordering policy without break-even quantity is dominant over the once ordering policy. The higher marginal revenue can induce more order quantity of the retailer under both twice ordering policy with break-even quantity and once ordering policy. However, it is interesting that it has no effect on the order plan of centralized decision-maker in twice ordering policy without break-even quantity.
Abstract:
In today's business environment, there are various reasons, namely, bulk purchase discounts, seasonality of products, re-order costs, etc., which force the buyer to order more than the warehouse capacity (owned warehouse). Such reasons call for additional storage space to store the excess units purchased. This additional storage space is typically a rented warehouse. It is known that the demand of seasonal products increases at the beginning of the season up to a certain moment and then is stabilized to a constant rate for the remaining time of the season (ramp type demand rate). As a result, the buyer prefers to keep a higher inventory at the beginning of the season and so more units than can be stored in owned warehouse may be purchased. The excess quantities need additional storage space, which is facilitated by a rented warehouse.
In this study an order level two-warehouse inventory model for deteriorating seasonal products is studied. Shortages at the owned warehouse are allowed subject to partial backlogging. This two-warehouse inventory model is studied under two different policies. The first policy starts with an instant replenishment and ends with shortages and the second policy starts with shortages and ends without shortages. For each of the models, conditions for the existence and uniqueness of the optimal solution are derived and a simple procedure is developed to obtain the overall optimal replenishment policy. The dynamics of the model and the solution procedure have been illustrated with the help of a numerical example and a comprehensive sensitivity analysis, with respect to the most important parameters of the model, is considered.
Abstract:
In the framework of multi-choice games, we propose a specific reduction to construct a dynamic process for the multi-choice Shapley value introduced by Nouweland et al. [8].
Abstract:
In [8], Zhang et al. proposed a modified three-term HS (MTTHS) conjugate gradient method and proved that this method converges globally for nonconvex minimization in the sense that $\liminf_{k\to\infty}\|\nabla f(x_k)\|=0$ when the Armijo or Wolfe line search is used. In this paper, we further study the convergence property of the MTTHS method. We show that the MTTHS method has strongly global convergence property (i.e., $\lim_{k\to\infty}\|\nabla f(x_k)\|=0$) for nonconvex optimization by the use of the backtracking type line search in [7]. Some preliminary numerical results are reported.
Abstract:
This paper analyzes an M/G/1 queue with general setup times from an economical point of view. In such a queue whenever the system becomes empty, the server is turned off. A new customer's arrival will turn the server on after a setup period. Upon arrival, the customers decide whether to join or balk the queue based on observation of the queue length and the status of the server, along with the reward-cost structure of the system. For the observable and almost observable cases, the equilibrium joining strategies of customers who wish to maximize their expected net benefit are obtained. Two numerical examples are presented to illustrate the equilibrium joining probabilities for these cases under some specific distribution functions of service times and setup times.
Abstract:
In this paper, a new non-monotone trust-region algorithm is proposed for solving unconstrained nonlinear optimization problems. We modify the retrospective ratio which is introduced by Bastin et al. [Math. Program., Ser. A (2010) 123: 395-418] to form a convex combination ratio for updating the trust-region radius. Then we combine the non-monotone technique with this new framework of trust-region algorithm. The new algorithm is shown to be globally convergent to a first-order critical point. Numerical experiments on CUTEr problems indicate that it is competitive with both the original retrospective trust-region algorithm and the classical trust-region algorithms.
Abstract:
The aim of this paper is to develop an improved inventory model which helps the enterprises to advance their profit increasing and cost reduction in a single vendor-single buyer environment with permissible delay in payments depending on the ordering quantity and imperfect production. Through this study, some numerical examples available in the literature are provided herein to apply the permissible delay in payments depending on the ordering quantity strategy. Furthermore, imperfect products will cause the cost and increase number of lots through the whole model. Therefore, for more closely conforming to the actual inventories and responding to the factors that contribute to inventory costs, our proposed model can be the references to the business applications. Finally, results of this study showed applying the permissible delay in payments can promote the cost reduction; and also showed a longer trade credit term can decrease costs for the complete supply chain.
Abstract:
Channel coordination is an optimal state with operation of channel. For achieving channel coordination, we present a quantity discount mechanism based on a fairness preference theory. Game models of the channel discount mechanism are constructed based on the entirely rationality and self-interest. The study shows that as long as the degree of attention (parameters) of retailer to manufacturer's profit and the fairness preference coefficients (parameters) of retailers satisfy certain conditions, channel coordination can be achieved by setting a simple wholesale price and fixed costs. We also discuss the allocation method of channel coordination profit, the allocation method ensure that retailer's profit is equal to the profit of independent decision-making, and manufacturer's profit is raised.
Abstract:
Constraint qualification (CQ) is an important concept in nonlinear programming. This paper investigates the motivation of introducing constraint qualifications in developing KKT conditions for solving nonlinear programs and provides a geometric meaning of constraint qualifications. A unified framework of designing constraint qualifications by imposing conditions to equate the so-called ``locally constrained directions" to certain subsets of ``tangent directions" is proposed. Based on the inclusion relations of the cones of tangent directions, attainable directions, feasible directions and interior constrained directions, constraint qualifications are categorized into four levels by their relative strengths. This paper reviews most, if not all, of the commonly seen constraint qualifications in the literature, identifies the categories they belong to, and summarizes the inter-relationship among them. The proposed framework also helps design new constraint qualifications of readers' specific interests.
|
Does anyone know if there is an algorithm for directly writing a context-free grammar that generates the language of a given regular expression?
I assume you want to get a grammar that generates the same language as the given regular expression.
You can achieve that by the following steps:
Translate the regular expression into an NFA. Translate the NFA into a (right-)regular grammar.
Both translations are standard and covered in basic textbooks on formal languages and automata. Note that any regular grammar is also context-free.
Yes. I give the high-level answer, without many details.
First you have to parse the expressions. That can be done using a simple recursive descent parser. Several examples on the web.
Then you should add "semantic" rules to the parser, when returning from the recursion. Those are standard in any formal language theory course. If $S_1$ and $S_2$ are non-terminals that generate expressions $E_1$ and $E_2$ then we can generate $E_1+E_2$ by $S$ and the rules $S\to S_1$, $S\to S_2$. We can generate concatenation $E_1 E_2$ by $S$ and the rule $S\to S_1 S_2$. We can generate $E_1^*$ by $S$ and the rules $S\to S_1 S$, $S\to \lambda$. Assuming we choose fresh nonterminals each time.
Like a previous answer, I assume you want to get a grammar that generates the same language as the given regular expression $r$.
A recursive algorithm for constructing a context-free grammar $G$ with $L(G) = L(r)$ goes as follows:
if $r = \emptyset$, output a grammar with no production rules (or $S \to S$ if it must have a rule). if $r = \Lambda$, output $S \to \Lambda$. if $r = a$ (the expression is just a single letter), output $S \to a$. if $r = a \cup b$ (union, also notated as + or |): construct disjoint grammars for $a$ and $b$ with start symbols $S_a$ and $S_b$, combine them and add $S \to S_a \mid S_b$. if $r = l m$ (concatenation, also notated as $\cdot$): construct disjoint grammars for $l$ and $m$ with start symbols $S_l$ and $S_m$, combine them and add $S \to S_l S_m$. if $r = x^*$ (Kleene star): construct a grammar for $x$ with start symbol $S_x$ and add $S \to S_x S \mid \Lambda$.
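A rough Python sketch of this recursion (the regular expression is assumed to be already parsed into a tuple tree; all names here are made up for illustration):

```python
from itertools import count

fresh = (f"S{i}" for i in count())

def regex_to_cfg(r, rules):
    """r is a parsed regex: 'empty', 'eps', ('sym', a),
    ('union', r1, r2), ('concat', r1, r2) or ('star', r1).
    Appends productions to `rules` and returns the start symbol."""
    S = next(fresh)
    if r == 'empty':
        pass                                    # no productions: empty language
    elif r == 'eps':
        rules.append((S, []))                   # S -> epsilon
    elif r[0] == 'sym':
        rules.append((S, [r[1]]))               # S -> a
    elif r[0] == 'union':
        S1, S2 = regex_to_cfg(r[1], rules), regex_to_cfg(r[2], rules)
        rules += [(S, [S1]), (S, [S2])]         # S -> S1 | S2
    elif r[0] == 'concat':
        S1, S2 = regex_to_cfg(r[1], rules), regex_to_cfg(r[2], rules)
        rules.append((S, [S1, S2]))             # S -> S1 S2
    elif r[0] == 'star':
        S1 = regex_to_cfg(r[1], rules)
        rules += [(S, [S1, S]), (S, [])]        # S -> S1 S | epsilon
    return S

# Example: (a|b)*a
rules = []
start = regex_to_cfg(('concat', ('star', ('union', ('sym', 'a'), ('sym', 'b'))), ('sym', 'a')), rules)
for lhs, rhs in rules:
    print(lhs, "->", " ".join(rhs) or "epsilon")
```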
This is essentially equivalent to Hendrik's answer, but with more detail which may be useful. |
Kaon femtoscopy in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV
(Elsevier, 2017-12-21)
We present the results of three-dimensional femtoscopic analyses for charged and neutral kaons recorded by ALICE in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV. Femtoscopy is used to measure the space-time ...
Anomalous evolution of the near-side jet peak shape in Pb-Pb collisions at $\sqrt{s_{\mathrm{NN}}}$ = 2.76 TeV
(American Physical Society, 2017-09-08)
The measurement of two-particle angular correlations is a powerful tool to study jet quenching in a $p_{\mathrm{T}}$ region inaccessible by direct jet identification. In these measurements pseudorapidity ($\Delta\eta$) and ...
Online data compression in the ALICE O$^2$ facility
(IOP, 2017)
The ALICE Collaboration and the ALICE O2 project have carried out detailed studies for a new online computing facility planned to be deployed for Run 3 of the Large Hadron Collider (LHC) at CERN. Some of the main aspects ...
Evolution of the longitudinal and azimuthal structure of the near-side peak in Pb–Pb collisions at $\sqrt{s_{\rm NN}}=2.76$ TeV
(American Physical Society, 2017-09-08)
In two-particle angular correlation measurements, jets give rise to a near-side peak, formed by particles associated to a higher $p_{\mathrm{T}}$ trigger particle. Measurements of these correlations as a function of ...
J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV
(American Physical Society, 2017-12-15)
We report a precise measurement of the J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector at the LHC. The J/$\psi$ mesons are reconstructed at mid-rapidity ($|y| < 0.9$) ...
Highlights of experimental results from ALICE
(Elsevier, 2017-11)
Highlights of recent results from the ALICE collaboration are presented. The collision systems investigated are Pb–Pb, p–Pb, and pp, and results from studies of bulk particle production, azimuthal correlations, open and ...
Event activity-dependence of jet production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV measured with semi-inclusive hadron+jet correlations by ALICE
(Elsevier, 2017-11)
We report measurement of the semi-inclusive distribution of charged-particle jets recoiling from a high transverse momentum ($p_{\rm T}$) hadron trigger, for p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in p-Pb events ...
System-size dependence of the charged-particle pseudorapidity density at $\sqrt {s_{NN}}$ = 5.02 TeV with ALICE
(Elsevier, 2017-11)
We present the charged-particle pseudorapidity density in pp, p–Pb, and Pb–Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV over a broad pseudorapidity range. The distributions are determined using the same experimental apparatus and ...
Photoproduction of heavy vector mesons in ultra-peripheral Pb–Pb collisions
(Elsevier, 2017-11)
Ultra-peripheral Pb-Pb collisions, in which the two nuclei pass close to each other, but at an impact parameter greater than the sum of their radii, provide information about the initial state of nuclei. In particular, ...
Measurement of $J/\psi$ production as a function of event multiplicity in pp collisions at $\sqrt{s} = 13\,\mathrm{TeV}$ with ALICE
(Elsevier, 2017-11)
The availability at the LHC of the largest collision energy in pp collisions allows a significant advance in the measurement of $J/\psi$ production as function of event multiplicity. The interesting relative increase ... |
I was inspired by Dietrich Vollrath's latest blog post to work out the generalization of the macro ensemble version of the information equilibrium condition [1] to more than one factor of production. However, as it was my lunch break, I didn't have time to LaTeX up all the steps so I'm just going to post the starting place and the result (for now).
We have two ensembles of information equilibrium relationships $A_{i} \rightleftarrows B$ and $A_{j} \rightleftarrows C$ (with two factors of production $B$ and $C$), and we generalize the partition function analogously to multiple thermodynamic potentials (see also here):
$$
Z = \sum_{i j} e^{-k_{i}^{(1)} \log B/B_{0} -k_{j}^{(2)} \log C/C_{0}}
$$
Playing the same game as worked out in [1], except with partial derivatives, you obtain:
$$
\begin{align}
\frac{\partial \langle A \rangle}{\partial B} = & \; \langle k^{(1)} \rangle \frac{\langle A \rangle}{B}\\
\frac{\partial \langle A \rangle}{\partial C} = & \; \langle k^{(2)} \rangle \frac{\langle A \rangle}{C}
\end{align}
$$
This is the same as before, except now the values of $k$ can change. If the $\langle k \rangle$ change slowly (i.e. treated as almost constant), the solution can be approximated by a Cobb-Douglas production function:
$$
\langle A \rangle = a \; B^{\langle k^{(1)} \rangle} C^{\langle k^{(2)} \rangle}
$$
And now you can read Vollrath's piece keeping in mind that using an ensemble of information equilibrium relationships implies $\beta$ (e.g. $\langle k^{(1)} \rangle$) can change and we aren't required to maintain $\langle k^{(1)} \rangle + \langle k^{(2)} \rangle = 1$.
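As a concrete toy illustration of these ensemble averages, here is a short Python sketch with invented IT index values $k_i^{(1)}$ and $k_j^{(2)}$; it just evaluates $\langle k^{(1)} \rangle$ and $\langle k^{(2)} \rangle$ from the partition function above and shows them drifting as $B$ and $C$ grow:

```python
import numpy as np

# Invented IT indices for the two ensembles of relationships
k1 = np.array([0.3, 0.5, 0.9, 1.2])
k2 = np.array([0.2, 0.4, 0.7])

def ensemble_k(B, C, B0=1.0, C0=1.0):
    # weights e^{-k_i^(1) log(B/B0) - k_j^(2) log(C/C0)} over all pairs (i, j)
    w = np.exp(-np.add.outer(k1 * np.log(B / B0), k2 * np.log(C / C0)))
    Z = w.sum()
    avg_k1 = (w * k1[:, None]).sum() / Z
    avg_k2 = (w * k2[None, :]).sum() / Z
    return avg_k1, avg_k2

for B, C in [(1.5, 1.5), (5.0, 5.0), (50.0, 50.0)]:
    print(B, C, ensemble_k(B, C))
```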
Update 28 July 2017
I'm sure it was obvious to readers, but this generalizes to any number of factors of production using the partition function
$$
Z = \sum_{i_{n}} \exp \left( - \sum_{n} k_{i_{n}}^{(n)} \log B^{(n)}/B_{0}^{(n)} \right)
$$
where instead of $B$ and $C$ (or $D$), we'd have $B^{(1)}$ and $B^{(2)}$ (or $B^{(3)}$). You'd obtain:
$$
\frac{\partial \langle A \rangle}{\partial B^{(n)}} = \; \langle k^{(n)} \rangle \frac{\langle A \rangle}{B^{(n)}}
$$ |
Given that you already accept that the duality gap along the central path is $m/t$, then the inequality you're struggling with is really rather simple. Remember, the dual problem provides lower bounds for the optimal value of the primal. So $p^*$ is necessarily in between the objective values of any feasible primal point and any feasible dual point.
That is: we have$$f(x^*(t)) - g(\gamma^*(t),\lambda^*(t)) = m/t$$ where $g$ is the dual objective. But we also have$$p^* \leq f(x) \quad \forall x\text{ feasible}$$and$$p^* \geq g(\gamma,\lambda) \quad \forall \gamma,\lambda\text{ feasible}$$Thus for any feasible primal and dual points, we have$$f(x)-p^* \leq f(x) - g(\gamma,\lambda) \quad \forall x,\gamma,\lambda\text{ feasible}$$And along the central path,$$f(x^*(t))-p^* \leq f(x^*(t)) - g(\gamma^*(t),\lambda^*(t))=m/t.$$
EDIT: Actually, it looks like you may not accept the premise that the $m/t$ difference applies to the original problem. So let's look at this for an LP. The primal and dual problems are$$\begin{array}{llcll}\text{minimize} & c^T x & \quad & \text{maximize} & b^T \lambda \\\text{subject to} & A x = b & & \text{subject to} & A^T \lambda + \gamma = c \\& x \succeq 0 & & & \gamma \succeq 0\end{array}$$I prefer $y$ and $z$ above to $\lambda$ and $\gamma$, but for consistency I am keeping your variable names. $\lambda$ is the Lagrange multiplier for the equality constraints, $\gamma$ is the Lagrange multiplier for the inequalities. The barrier problem for the primal is$$\begin{array}{ll}\text{minimize} & t c^T x - \textstyle \sum_{i=1}^m \log x_i \\ \text{subject to} & A x = b\end{array}$$Note that the domain of this problem is limited to $x\succ 0$ by the barrier term.The optimality conditions for a fixed $t>0$ satisfy$$t c - \mathop{\textrm{diag}}(x)^{-1} \mathbf{1} - A^Ty = 0 \quad Ax=b \quad x \succ 0$$where $y$ is a Lagrange multiplier for the equality constraints. That second term is a bit clumsy: define $z_i\triangleq x_i^{-1}$ for $i=1,2,\dots,m$ so$$t c - z - A^Ty =0 \quad Ax=b \quad x,z \succ 0 \quad z_i\triangleq x_i^{-1},~i=1,2,\dots,m$$By inspection, we see that if $(x,y,z)$ satisfies these equality constraints, then $(\gamma,\lambda)=(t^{-1}z,t^{-1}y)$ is a feasible dual point for the original problem. The duality gap for this primal/dual pair $(x,\gamma,\lambda)$ is$$c^Tx-b^T\lambda=c^Tx-t^{-1}b^Ty=c^Tx-t^{-1}x^TA^Ty=(c-t^{-1}A^Ty)^Tx=t^{-1}z^Tx=m/t.$$So even though we're looking at the modified barrier model, the set of optimal points $x^*(t)$ are all feasible for the original, and they all lead to feasible dual points $(\gamma^*(t),\lambda^*(t))$ for the dual problem as well. The duality gap for the original model is $m/t$. |
Production of Σ(1385)± and Ξ(1530)0 in proton–proton collisions at √s = 7 TeV
(Springer, 2015-01-10)
The production of the strange and double-strange baryon resonances (Σ(1385)±, Ξ(1530)0) has been measured at mid-rapidity (|y|< 0.5) in proton–proton collisions at √s = 7 TeV with the ALICE detector at the LHC. Transverse ...
Forward-backward multiplicity correlations in pp collisions at √s = 0.9, 2.76 and 7 TeV
(Springer, 2015-05-20)
The strength of forward-backward (FB) multiplicity correlations is measured by the ALICE detector in proton-proton (pp) collisions at √s = 0.9, 2.76 and 7 TeV. The measurement is performed in the central pseudorapidity ...
Inclusive photon production at forward rapidities in proton-proton collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV
(Springer Berlin Heidelberg, 2015-04-09)
The multiplicity and pseudorapidity distributions of inclusive photons have been measured at forward rapidities ($2.3 < \eta < 3.9$) in proton-proton collisions at three center-of-mass energies, $\sqrt{s}=0.9$, 2.76 and 7 ...
Rapidity and transverse-momentum dependence of the inclusive J/$\mathbf{\psi}$ nuclear modification factor in p-Pb collisions at $\mathbf{\sqrt{\textit{s}_{NN}}}=5.02$ TeV
(Springer, 2015-06)
We have studied the transverse-momentum ($p_{\rm T}$) dependence of the inclusive J/$\psi$ production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in three center-of-mass rapidity ($y_{\rm cms}$) regions, down to ...
Measurement of pion, kaon and proton production in proton–proton collisions at √s = 7 TeV
(Springer, 2015-05-27)
The measurement of primary π±, K±, p and p̄ production at mid-rapidity (|y| < 0.5) in proton–proton collisions at √s = 7 TeV performed with a large ion collider experiment at the large hadron collider (LHC) is reported. ...
Two-pion femtoscopy in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV
(American Physical Society, 2015-03)
We report the results of the femtoscopic analysis of pairs of identical pions measured in p-Pb collisions at $\sqrt{s_{\mathrm{NN}}}=5.02$ TeV. Femtoscopic radii are determined as a function of event multiplicity and pair ...
Measurement of charm and beauty production at central rapidity versus charged-particle multiplicity in proton-proton collisions at $\sqrt{s}$ = 7 TeV
(Springer, 2015-09)
Prompt D meson and non-prompt J/$\psi$ yields are studied as a function of the multiplicity of charged particles produced in inelastic proton-proton collisions at a centre-of-mass energy of $\sqrt{s}=7$ TeV. The results ...
Charged jet cross sections and properties in proton-proton collisions at $\sqrt{s}=7$ TeV
(American Physical Society, 2015-06)
The differential charged jet cross sections, jet fragmentation distributions, and jet shapes are measured in minimum bias proton-proton collisions at centre-of-mass energy $\sqrt{s}=7$ TeV using the ALICE detector at the ...
Centrality dependence of high-$p_{\rm T}$ D meson suppression in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(Springer, 2015-11)
The nuclear modification factor, $R_{\rm AA}$, of the prompt charmed mesons ${\rm D^0}$, ${\rm D^+}$ and ${\rm D^{*+}}$, and their antiparticles, was measured with the ALICE detector in Pb-Pb collisions at a centre-of-mass ...
Coherent $\rho^0$ photoproduction in ultra-peripheral Pb-Pb collisions at $\mathbf{\sqrt{\textit{s}_{\rm NN}}} = 2.76$ TeV
(Springer, 2015-09)
We report the first measurement at the LHC of coherent photoproduction of $\rho^0$ mesons in ultra-peripheral Pb-Pb collisions. The invariant mass and transverse momentum distributions for $\rho^0$ production are studied ... |
Equivalence point means equal numbers of moles of acid and base have been added. If $x$ moles of the weak base $\ce{B}$ and the strong acid $\ce{HA}$ have been added, then the $\ce{H^+}$ ions will react away $\ce{OH^-}$ until all the weak base has associated into $\ce{BH^+}$ and $\ce{OH^-}$ (which of course has been reacted with $\ce{H^+}$ forming water). This leaves us with water and $x$ moles of the salt $\ce{BHA}$. This salt is ionic and so soluble in water, so it completely ionises to form $\ce{BH^+}$ and $\ce{A^-}$ ions in solution.
As $\ce{BH^+}$ is the conjugate acid of the weak base we had originally, it can react with water in an equilibrium reaction as follows:
$\ce{BH^+} + \ce{H2O} \rightleftharpoons \ce{B} + \ce{H3O^+}$
As $\ce{B}$ is a weak base, the above equilibrium will lie to the right, and hence there will be a considerable concentration of $\ce{H3O^+}$ in the solution at equivalence, which leads to an acidic pH.
Calculating the pH depends on the fact that $K_b K_a = K_w$ for a base and its conjugate acid. This is derived here.
For the equilibrium
$\ce{BH^+} + \ce{H2O} \rightleftharpoons \ce{B} + \ce{H3O^+}$
the acid dissociation constant has the form
$K_a = \dfrac{[\ce{B}][\ce{H3O^+}]}{[\ce{BH^+}]}$
As we know $K_b K_a = K_w$, we can rewrite this equilibrium expression as
$[\ce{H3O^+}]=\sqrt{[\ce{BH^+}]\dfrac{K_w}{K_b}}$
To find pH you can $-\log$ this. (Note $[\ce{B}]=[\ce{H3O^+}]$ as one molecule of each is produced for each molecule of $\ce{BH^+}$ that reacts with a water molecule).
So in order to find the pH at the equivalence point, you need to know the $K_b$ of the base you're using. And as everything is in the same volume, all the volumes will cancel from the concentration terms.
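As a worked example (the numbers are assumed purely for illustration, they are not from the question): for aqueous ammonia, $K_b \approx 1.8 \times 10^{-5}$, so $K_a = K_w/K_b \approx 5.6 \times 10^{-10}$; if $[\ce{BH^+}] \approx 0.10\ \mathrm{M}$ at the equivalence point, then $[\ce{H3O^+}] = \sqrt{0.10 \times 5.6 \times 10^{-10}} \approx 7.5 \times 10^{-6}\ \mathrm{M}$, giving pH $\approx 5.1$, which is acidic, as argued above.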
I hope this helps, and please correct me if I've made any errors. |
Given two independent events $A$ and $B$, with given conditions:
$0 \lt P(A), P(B) \lt 1$. Which one of the following options is/are false? (1) $A$ and $B'$ are independent. (2) $A'$ and $B'$ are independent. (3) $P(A|B) = P(A|B')$. (4) For any event $C$, with $0 \lt P(C) \lt 1$, $P(AB|C) = P(A|C)\cdot P(B|C)$.
Here is what I tried:
A and B are independent iff: $P(A \cap B)$ $=$ $P(A)\cdot P(B)$
Now, we have : $P(A) = P(A \cap B) + P(A \cap B')$ So, $ P(A \cap B')$ $=$ $P(A) - P(A \cap B)$ $=$ $P(A) - P(A)\cdot P(B)$ $=$ $[1-P(B)]\cdot P(A)$ $=$ $P(A)\cdot P(B')$ Thus, 1 is true.
We know that, $P(A' \cap B') = P((A \cup B)')$
$ =1 - P(A \cup B)$ $ =1 - P(A) - P(B) + P( A \cap B)$ $ =1 - P(A) - P(B) + P(A)\cdot P(B)$ $ = [1-P(A)] \cdot [1-P(B)]$ $ =P(A')P(B') $ Thus, 2 is also true.
By conditional probability, $P(A | B)$ $=$ $\frac{P(A \cap B)} {P(B)}$
$=$ $\frac{P(A)\cdot P(B)}{P(B)}$ $=$ $P(A) $ And $P(A | B') $ $ =$ $\frac{P(A \cap B')}{P(B')}$ $=$ $\frac{P(A)\cdot P(B')}{P(B')}$ $=$ $P(A) $ Thus, 3 is also true.
The problem is with 4. I tried to disprove it, by finding a counter-example, and I couldn't.
What is the correct answer? |
Consider a heat equation in one space dimension $$\frac{\partial u(t,x)}{\partial t} = \frac12\Theta(x)\frac{\partial^2u(t,x)}{\partial x^2} \tag{1}$$ where the Heaviside function $$ \Theta(x) = \begin{cases} 0,\, x<0 \\ 1,\, x\ge 0 \end{cases} $$ with initial condition $$u(t=0,x) = x_+.$$
Obviously, $u(t,x)=0,\,\forall x<0$. In effect we may consider Eq. (1) as a heat equation with unit conductivity on the positive real axis. If we take this route, how should we specify the boundary condition for this heat equation on the positive axis? Would $u$ be continuous over $x$? Would the right hand side first spatial derivative be equal to the left hand side for positive time $t$, i.e. $\frac{\partial u}{\partial x}(t>0,x=0_+)=0$?
On the other hand, we observe, for the particular initial condition at hand, if we ignore the differentiability of $u(t>0,x)$ with respect to $x$ at $x=0$, $u(t,x)=x_+$ is a solution on the whole $x$-axis.
Further, for $t>0$ fixed, is $u(t>0,x)\in C^\infty(-\infty,\infty)$?
If it is true $\frac{\partial u}{\partial x}(t>0,x=0_+)=0$, I am then puzzled that it seems physically, the heat should diffuse to the left and accumulates at $x=0$ that the temperature $u$ at $x=0_+$ should be positive and higher than $x_+$ immediately to the right of $x=0$. If $\frac{\partial u}{\partial x}(t>0,x=0_+)=0$, it implies that the heat diffuses away to the right of $x=0$ and the temperature immediate to the right of $x=0$ is lower than $x_+$. How should one explain that? |
There are many explanations to be found about shot noise in optics, but the answers I find are incompatible. There are three ways shot noise in optics is explained.
(Note that according to Wikipedia, in general, shot noise is a type of noise which can be modeled by a Poisson process.)
It is the noise purely arising form (vacuum) fluctuations of the EM-field. For example, the book of Gerry and Knight states that "In an actual experiment, the signal beam is first blocked in order to obtain the shot-noise level." I guess the number of photons you would detect in this way follows a Poissonian distribution, hence the name `shot noise'. (For context, see screenshot of relevant section below - courtesy of Google Books)
It is due to 'the particle nature of light'. Semi-classically, a low intensity laser beam will emit photons following a Poisson distribution. If the beam is incident on a photon detector, this detector will receive a fluctuating number of photons per time bin (according to the Poissonian). Thus the intensity (~number of photons per time bin) will fluctuate. These fluctuations are the `shot noise'.
A laser beam emits a coherent state $|\alpha \rangle$. The probability to find $n$ photons upon measurement follows the Poisson distribution, $P(n)=|\langle n | \alpha \rangle|^2= \frac{\bar n^n}{n!}e^{-\bar n}$ with $\bar n = |\alpha|^2= \langle \alpha | a^\dagger a | \alpha \rangle $ the average number of photons. Thus there is shot noise in the number of photons. (Here $| n \rangle $ is a Fock basis state while $|\alpha \rangle $ is a coherent state.)
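(As a quick numerical illustration of explanations 2 and 3 — a sketch with an arbitrary mean photon number — sampling Poissonian counts per time bin reproduces the defining shot-noise property that the variance equals the mean:)

```python
import numpy as np

rng = np.random.default_rng(0)
n_bar = 25.0                                  # arbitrary mean photon number per bin
counts = rng.poisson(n_bar, size=100_000)     # photon counts in each time bin

print(counts.mean(), counts.var())            # both close to n_bar (variance = mean)
print(counts.std() / counts.mean())           # relative fluctuation ~ 1 / sqrt(n_bar)
```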
So what is shot noise? Can you have multiple sources of shot noise, throw them all on one heap and call the combination 'the shot noise'. Then how can you 'measure the shot noise level' as in 1, or 'measure at the shot noise level'?
Explanation 1 is incompatible with 2 and 3, for both 2 and 3 will cause
no photons at all to be counted in the vacuum state. (The vacuum state is the coherent state with $\alpha=0$.) |
First of all, the answer that applies here was already given by Raphael in the comments to the question: "Given that we don't even know how to find one simple shortest path in linear time, I doubt it." In what follows, therefore, I will assume you are interested in knowing about the best available algorithms in the current state of the art. I describe the best worst-case bound (to the best of my knowledge) but also an algorithm that might run in linear time in some specific cases.
But before describing the latest developments in the state of the art, I wanted to emphasize the importance of simple paths in this specific problem. As a matter of fact, many people overlook the importance of this requirement and some do not even understand it at first glance.
When computing the shortest path from a start vertex to a goal vertex, the optimal path is necessarily
simple, i.e., it does not repeat any vertices. However, when computing $k$ optimal paths, the second, third, ... best paths might not be simple. To prove it, I provide here an example that has been slightly adapted from Hershberger, Maxel & Suri, 2007:
The Figure shows a digraph whose optimal solution (from the source vertex $s$ to the goal vertex $t$) is the path $\pi_1 : \langle s, A, B, C, D, t\rangle$ with a cost equal to 5. If paths are not required to be simple, then the second and third optimal paths are $\pi_2 : \langle s, A, B, C, D, C, D, t\rangle$ and $\pi_3 : \langle s, A, B, A, B, C, D, t\rangle$ both with a cost equal to 7. However, if paths are required to be simple, then the second and third optimal paths would be $\pi_2 : \langle s, F, t\rangle$ and $\pi_3 : \langle s, G, t\rangle $ with costs 10 and 11, respectively.
Given a graph $G(V, E)$ where $V$ is the set of vertices and $\langle u, v\rangle\in E, u, v\in V$ if there is an edge between vertices $u$ and $v$, the current state of the art for this problem to the best of my knowledge is described below:
The first significant improvement to solve the $k$ optimal paths problem is Eppstein's algorithm (Eppstein, 1998), which runs in $O(|E|+|V|\log |V|+k)$. However, this algorithm requires the graph to be given explicitly. $K^*$ alleviates this requirement while maintaining the low complexity (Aljazzar & Leue, 2011) and, additionally, enables the application of admissible heuristics. In both cases, the paths computed by these algorithms are not necessarily simple.
In case paths are required to be simple, the best result is due to Yen (Yen, 1971, 1972), generalized later by Lawler (Lawler, 1972), which using modern data structures can be implemented in $O(k|V|(|E|+|V|\log |V|))$ worst-case time. In the case of undirected graphs, Katoh, Ibaraki and Mine (Katoh, Ibaraki & Mine, 1982) improve Yen's algorithm to $O(k(|E|+|V|\log |V|))$ time. While Yen's asymptotic worst-case bound for enumerating $k$ simple shortest paths in a directed graph remains unbeaten (again, to the best of my knowledge!), several attempts have been made to outperform it in practice.
One such work is due to John Hershberger et al., who introduced a replacement-paths algorithm which is known to fail only rarely. As a result, their algorithm delivers a speedup that grows linearly with the average number of edges in the $k$ shortest paths but, in some cases (such as random graphs), this speedup is minimal.
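In practice, if you just need the $k$ shortest simple paths rather than the tightest asymptotics, a ready-made Yen-style generator is available in NetworkX. The toy graph and weights below are made up for illustration; they are not the graph from the figure:

```python
from itertools import islice
import networkx as nx

# networkx.shortest_simple_paths yields simple paths in order of increasing cost
# and is based on Yen's algorithm.
G = nx.DiGraph()
G.add_weighted_edges_from([
    ("s", "a", 1), ("a", "b", 1), ("b", "t", 1),   # illustrative weights only
    ("s", "c", 2), ("c", "t", 3),
    ("s", "t", 7),
])

k = 3
for path in islice(nx.shortest_simple_paths(G, "s", "t", weight="weight"), k):
    cost = sum(G[u][v]["weight"] for u, v in zip(path, path[1:]))
    print(path, cost)
```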
Hope this helps,
Bibliography
Aljazzar, H. & Leue, S. (2011). $K^*$: A heuristic search algorithm for finding the $k$ shortest paths. Artificial Intelligence, 175(18), 2129-2154.
Eppstein, D. (1998). Finding the $k$ shortest paths. SIAM Journal on Computing, 28(2), 652-673.
Hershberger, J., Maxel, M. & Suri, S. (2007). Finding the $k$ shortest simple paths: A new algorithm and its implementation. ACM Transactions on Algorithms, 3(4), 45-46
Katoh, N., Ibaraki, T. & Mine, H. (1982). An efficient algorithm for $k$ shortest simple paths. Networks, 12, 411-427.
Lawler, E. L. (1972). A procedure for computing the $k$ best solutions to discrete optimization problems and its application to the shortest path problem. Management Science, 18, 401-405.
Yen, J.Y. (1971). Finding the $k$ shortest loopless paths in a network. Management Science, 17, 712-716.
Yen, J.Y. (1972). Another algorithm for finding the $k$ shortest loopless network paths. Proceedings of 41st Management Operations Research Society of America, 20. |
Skills to Develop
When a substrate binds to one enzymatic subunit, the rest of the subunits are stimulated and become active. Ligand binding can show non-cooperativity, positive cooperativity, or negative cooperativity.
A significant portion of enzymes function such that their properties can be studied using the Michaelis-Menten equation. However, a particular class of enzymes exhibit kinetic properties that cannot be studied using the Michaelis-Menten equation. The rate equation of these unique enzymes is characterized by an “S-shaped” sigmoidal curve, which is different from the majority of enzymes whose rate equation exhibits hyperbolic curves. Allosteric regulation is the regulation of an enzyme or other protein by binding an effector molecule at the protein's allosteric site (that is, a site other than the protein's active site). Effectors that enhance the protein's activity are referred to as allosteric activators, whereas those that decrease the protein's activity are called allosteric inhibitors. The term allostery refers to the fact that the regulatory site of an allosteric protein is
physically distinct from its active site. Allosteric regulations are a natural example of control loops, such as feedback from downstream products or feedforward from upstream substrates. Long-range allostery is especially important in cell signaling.
Allosteric Modulation (Cooperativity)
Cooperativity is a phenomenon displayed by enzymes or receptors that have multiple binding sites where the affinity of the binding sites for a ligand is increased, positive cooperativity, or decreased, negative cooperativity, upon the binding of a ligand to a binding site. We also see cooperativity in large chain molecules made of many identical (or nearly identical) subunits (such as DNA, proteins, and phospholipids), when such molecules undergo phase transitions such as melting, unfolding or unwinding. This is referred to as subunit cooperativity (discussed below).
An example of positive cooperativity is the binding of oxygen to hemoglobin. One oxygen molecule can bind to the ferrous iron of a heme molecule in each of the four chains of a hemoglobin molecule. Deoxy-hemoglobin has a relatively low affinity for oxygen, but when one molecule binds to a single heme, the oxygen affinity increases, allowing the second molecule to bind more easily, and the third and fourth even more easily. The oxygen affinity of 3-oxy-hemoglobin is ~300 times greater than that of deoxy-hemoglobin. This behavior leads the affinity curve of hemoglobin to be sigmoidal, rather than hyperbolic as with the monomeric myoglobin. By the same process, the ability for hemoglobin to lose oxygen increases as fewer oxygen molecules are bound.
Negative allosteric modulation (also known as allosteric inhibition) occurs when the binding of one ligand
decreases the affinity for substrate at other active sites. For example, when 2,3-BPG binds to an allosteric site on hemoglobin, the affinity for oxygen of all subunits decreases. This is when a regulator is absent from the binding site.
Another instance in which negative allosteric modulation can be seen is between ATP and the enzyme Phosphofructokinase within the negative feedback loop that regulates glycolysis. Phosphofructokinase (generally referred to as PFK) is an enzyme that catalyses the third step of glycolysis: the phosphorylation of Fructose-6-phosphate into Fructose 1,6-bisphosphate. PFK can be allosterically inhibited by high levels of ATP within the cell. When ATP levels are high, ATP will bind to an allosteric site on phosphofructokinase, causing a change in the enzyme's three-dimensional shape. This change causes its affinity for substrate (fructose-6-phosphate and ATP) at the active site to decrease, and the enzyme is deemed inactive. This causes glycolysis to cease when ATP levels are high, thus conserving the body's glucose and maintaining balanced levels of cellular ATP. In this way, ATP serves as a negative allosteric modulator for PFK, despite the fact that it is also a substrate of the enzyme.
Sigmoidal kinetic profiles are the result of enzymes that demonstrate positive cooperative binding. Cooperativity refers to the observation that binding of the substrate or ligand at one binding site affects the affinity of other sites for their substrates. For enzymatic reactions with multiple substrate binding sites, this increased affinity for the substrate causes a rapid and coordinated increase in the velocity of the reaction at higher \([S]\) until \(V_{max}\) is achieved. Plotting \(V_0\) vs. \([S]\) for a cooperative enzyme, we observe the characteristic sigmoidal shape with low enzyme activity at low substrate concentration and a rapid and immediate increase in enzyme activity to \(V_{max}\) as \([S]\) increases. The phenomenon of cooperativity was initially observed in the oxygen-hemoglobin interaction that functions in carrying oxygen in blood. Positive cooperativity implies allosteric binding – binding of the ligand at one site increases the enzyme's affinity for another ligand at a different site. Enzymes that demonstrate cooperativity are defined as allosteric. There are several types of allosteric interactions: homotropic (positive) and heterotropic (negative).
Figure 1: Rate of Reaction (velocity) vs. Substrate Concentration.
Positive and negative allosteric interactions (as illustrated through the phenomenon of cooperativity) refer to the enzyme's binding affinity for other ligands at other sites, as a result of ligand binding at the initial binding site. When the ligands interacting are all the same compounds, the effect of the allosteric interaction is considered homotropic. When the ligands interacting are different, the effect of the allosteric interaction is considered heterotropic. It is also very important to remember that allosteric interactions tend to be driven by ATP hydrolysis.
The Hill Equation
The degree of cooperativity is determined by the Hill equation (Equation \(\ref{Eq1}\)) for non-Michaelis-Menten kinetics. The Hill equation accounts for allosteric binding at sites other than the active site; \(n\) is the "Hill coefficient."
\[ \theta = \dfrac{[L]^n}{K_d+[L]^n} = \dfrac{[L]^n}{K_a^n+[L]^n} \label{Eq1}\]
where
\( \theta \) is the fraction of ligand binding sites filled
\([L]\) is the ligand concentration
\(K_d\) is the apparent dissociation constant derived from the law of mass action (the equilibrium constant for dissociation)
\(K_a\) is the ligand concentration producing half occupation (the ligand concentration occupying half of the binding sites), that is, the microscopic dissociation constant
\(n\) is the Hill coefficient that describes the cooperativity
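As a small numerical illustration (the parameter values below are arbitrary), the fractional saturation can be computed directly from Equation \(\ref{Eq1}\):

```python
import numpy as np

def hill(L, Ka, n):
    """Fractional saturation theta from the Hill equation."""
    return L**n / (Ka**n + L**n)

L = np.linspace(0.0, 10.0, 6)      # arbitrary ligand concentrations
for n in (0.5, 1, 4):              # negative, no, and positive cooperativity
    print(n, np.round(hill(L, Ka=2.0, n=n), 3))
```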
Taking the logarithm of both sides of the equation leads to an alternative formulation of the Hill Equation.
\[ \log \left( \dfrac{\theta}{1-\theta} \right) = n\log [L] - \log K_d \label{Eq2}\]
when \(n < 1\), there is negative cooperativity
when \(n = 1\), there is no cooperativity
when \(n > 1\), there is positive cooperativity
Allosteric Models
Currently, there are 2 models for illustrating cooperativity: the concerted model and the sequential model. Most allosteric effects can be explained by the concerted MWC model put forth by Monod, Wyman, and Changeux, or by the sequential model described by Koshland, Nemethy, and Filmer. Both postulate that enzyme subunits exist in one of two conformations, tensed (T) or relaxed (R), and that relaxed subunits bind substrate more readily than those in the tense state. The two models differ most in their assumptions about subunit interaction and the preexistence of both states.
The Concerted model
The concerted model of allostery, also referred to as the symmetry model or MWC model, postulates that enzyme subunits are connected in such a way that a conformational change in one subunit is necessarily conferred to all other subunits. Thus, all subunits must exist in the same conformation. The model further holds that, in the absence of any ligand (substrate or otherwise), the equilibrium favours one of the conformational states, T or R. The equilibrium can be shifted to the R or T state through the binding of one ligand (the allosteric effector or ligand) to a site that is different from the active site (the allosteric site).
The Sequential model
The sequential model of allosteric regulation holds that subunits are not connected in such a way that a conformational change in one induces a similar change in the others. Thus, all enzyme subunits do not necessitate the same conformation. Moreover, the sequential model dictates that molecules of substrate bind via an induced fit protocol. In general, when a subunit randomly collides with a molecule of substrate, the active site, in essence, forms a glove around its substrate. While such an induced fit converts a subunit from the tensed state to relaxed state, it does not propagate the conformational change to adjacent subunits. Instead, substrate-binding at one subunit only slightly alters the structure of other subunits so that their binding sites are more receptive to substrate. To summarize:
subunits need not exist in the same conformation molecules of substrate bind via induced-fit protocol conformational changes are not propagated to all subunits
Note: Allosteric database
Allostery is a direct and efficient means for regulation of biological macromolecule function, produced by the binding of a ligand at an allosteric site topographically distinct from the orthosteric site. Due to the often high receptor selectivity and lower target-based toxicity, allosteric regulation is also expected to play an increasing role in drug discovery and bioengineering. The AlloSteric Database (ASD) provides a central resource for the display, search and analysis of the structure, function and related annotation for allosteric molecules. Currently, ASD contains allosteric proteins from more than 100 species and modulators in three categories (activators, inhibitors, and regulators). Each protein is annotated with a detailed description of allostery, biological process and related diseases, and each modulator with binding affinity, physicochemical properties and therapeutic area. Integrating the information of allosteric proteins in ASD should allow the prediction of allostery for unknown proteins, to be followed by experimental validation. In addition, modulators curated in ASD can be used to investigate potential allosteric targets for a query compound, and can help chemists to implement structure modifications for novel allosteric drug design.
Summary
Allosteric enzymes are an exception to the Michaelis-Menten model. Because they have multiple subunits and active sites, they do not obey Michaelis-Menten kinetics, but instead have sigmoidal kinetics. Since allosteric enzymes are cooperative, a sigmoidal plot of \(V_0\) versus \([S]\) results. There are distinct properties of allosteric enzymes that make them different from other enzymes.
One is that allosteric enzymes do not follow Michaelis-Menten kinetics. This is because allosteric enzymes have multiple active sites. These multiple active sites exhibit the property of cooperativity, where the binding of one active site affects the affinity of other active sites on the enzyme. As mentioned earlier, it is these other affected active sites that result in a sigmoidal curve for allosteric enzymes. Allosteric enzymes are influenced by substrate concentration. For example, at high concentrations of substrate, more enzymes are found in the R state. The T state is favored when there is an insufficient amount of substrate to bind to the enzyme. In other words, the T and R state equilibrium depends on the concentration of the substrate. Allosteric enzymes are regulated by other molecules. This is seen when 2,3-BPG, pH, and CO2 modulate the binding affinity of hemoglobin to oxygen. 2,3-BPG reduces the binding affinity of O2 to hemoglobin by stabilizing the T state. Lowering the pH from the physiological pH=7.4 to 7.2 (the pH in the muscles and tissues) favors the release of \(O_2\). Hemoglobin is more likely to release oxygen in \(CO_2\)-rich areas of the body.
There are two primary models for illustrating cooperativity.
The concerted model (also called the Monod-Wyman-Changeux model) illustrates cooperativity by assuming that proteins have two or more subunits, and that each part of the protein molecule is able to exist in either the relaxed (R) state or the tense (T) state - the tense state of a protein molecule is favored when it doesn't have any substrates bound. All aspects, including binding and dissociation constants are the same for each ligand at the respective binding sites. The sequential model aims to demonstrate cooperativity by assuming that the enzyme/protein molecule affinity is relative and changes as substrates bind. Unlike the concerted model, the sequential model accounts for different species of the protein molecule. |
rmsprop
This module provides an implementation of rmsprop.
class climin.rmsprop.RmsProp(wrt, fprime, step_rate, decay=0.9, momentum=0, step_adapt=False, step_rate_min=0, step_rate_max=inf, args=None)
RmsProp optimizer.
RmsProp [tieleman2012rmsprop] is an optimizer that utilizes the magnitude of recent gradients to normalize the gradients. We always keep a moving average over the root mean squared (hence Rms) gradients, by which we divide the current gradient. Let \(f'(\theta_t)\) be the derivative of the loss with respect to the parameters at time step \(t\). In its basic form, given a step rate \(\alpha\) and a decay term \(\gamma\) we perform the following updates:\[\begin{split}r_t &=& (1 - \gamma)~f'(\theta_t)^2 + \gamma r_{t-1} , \\ v_{t+1} &=& {\alpha \over \sqrt{r_t}} f'(\theta_t), \\ \theta_{t+1} &=& \theta_t - v_{t+1}.\end{split}\]
In some cases, adding a momentum term \(\beta\) is beneficial. Here, Nesterov momentum is used:\[\begin{split}\theta_{t+{1 \over 2}} &=& \theta_t - \beta v_t, \\ r_t &=& (1 - \gamma)~f'(\theta_{t + {1 \over 2}})^2 + \gamma r_{t-1}, \\ v_{t+1} &=& \beta v_t + {\alpha \over \sqrt{r_t}} f'(\theta_{t + {1 \over 2}}), \\ \theta_{t+1} &=& \theta_t - v_{t+1}\end{split}\]
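(A minimal NumPy transcription of the basic update above — purely illustrative, using a made-up quadratic objective — may help make the roles of \(\alpha\) and \(\gamma\) concrete:)

```python
import numpy as np

def rmsprop_step(theta, grad, r, step_rate=1e-2, decay=0.9):
    """One basic RmsProp update: r <- (1 - decay) * grad**2 + decay * r,
    then theta <- theta - step_rate / sqrt(r) * grad."""
    r = (1.0 - decay) * grad**2 + decay * r
    theta = theta - step_rate / np.sqrt(r) * grad
    return theta, r

# Toy objective f(theta) = 0.5 * ||theta||^2, so f'(theta) = theta.
theta, r = np.array([3.0, -2.0]), np.zeros(2)
for _ in range(1000):
    theta, r = rmsprop_step(theta, theta, r)
print(theta)   # both components end up within a few step sizes of the origin
```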
Additionally, this implementation has adaptable step rates. As soon as the components of the step and the momentum point in the same direction (thus have the same sign), the step rate for that parameter is multiplied with 1 + step_adapt. Otherwise, it is multiplied with 1 - step_adapt. In any case, the minimum and maximum step rates step_rate_min and step_rate_max are respected and exceeding values are truncated to them.
RmsProp has several advantages; for one, it is a very robust optimizer which has pseudo curvature information. Additionally, it can deal with stochastic objectives very nicely, making it applicable to mini batch learning.
Note
Works with gnumpy.
[tieleman2012rmsprop] Tieleman, T. and Hinton, G. (2012), Lecture 6.5 - rmsprop, COURSERA: Neural Networks for Machine Learning
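A minimal usage sketch follows. Only the constructor signature documented above is taken from this page; the problem is made up, and the iteration pattern (climin optimizers are iterable and yield per-iteration info dictionaries) is assumed from the general climin interface:

```python
import numpy as np
from climin.rmsprop import RmsProp

# Hypothetical least-squares problem.
X = np.random.randn(200, 5)
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + 0.1 * np.random.randn(200)

wrt = np.zeros(5)                        # solution array, operated upon in place

def fprime(w):                           # gradient of 0.5 * ||Xw - y||^2 / n
    return X.T @ (X @ w - y) / len(y)

opt = RmsProp(wrt, fprime, step_rate=0.01, decay=0.9, momentum=0.5)
for i, info in enumerate(opt):
    if i >= 500:
        break
print(wrt)
```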
Attributes
wrt (array_like) Current solution to the problem. Can be given as a first argument to .fprime.
fprime (Callable) First derivative of the objective function. Returns an array of the same shape as .wrt.
step_rate (float or array_like) Step rate of the optimizer. If an array, means that per parameter step rates are used.
momentum (float or array_like) Momentum of the optimizer. If an array, means that per parameter momentums are used.
step_adapt (float or bool) Constant to adapt step rates. If False, step rate adaption is not done.
step_rate_min (float, optional, default 0) When adapting step rates, do not move below this value.
step_rate_max (float, optional, default inf) When adapting step rates, do not move above this value.
Methods
__init__(wrt, fprime, step_rate, decay=0.9, momentum=0, step_adapt=False, step_rate_min=0, step_rate_max=inf, args=None)
Create an RmsProp object.
Parameters:
wrt: array_like
Array that represents the solution. Will be operated upon in place. fprime should accept this array as a first argument.
fprime: callable
step_rate: float or array_like
Step rate to use during optimization. Can be given as a single scalar value or as an array for a different step rate of each parameter of the problem.
decay: float
Decay parameter for the moving average. Must lie in [0, 1), where lower numbers mean a shorter “memory”.
momentum: float or array_like
Momentum to use during optimization. Can be specified analogously to (but independently of) the step rate.
step_adapt: float or bool
Constant to adapt step rates. If False, step rate adaption is not done.
step_rate_min: float, optional, default 0
When adapting step rates, do not move below this value.
step_rate_max: float, optional, default inf
When adapting step rates, do not move above this value.
args: iterable
Iterator over arguments which fprime will be called with. |
Definition:Harmonic Numbers
Definition
The harmonic numbers are denoted $H_n$ and are defined for non-negative integers $n$: $\displaystyle \forall n \in \Z, n \ge 0: H_n = \sum_{k \mathop = 1}^n \frac 1 k$
From the definition of vacuous summation it is clear that $H_0 = 0$.
Let $r \in \R_{>0}$.
For $n \in \N_{> 0}$ the harmonic numbers of order $r$ are defined as follows: $\displaystyle H_n^{\left({r}\right)} = \sum_{k \mathop = 1}^n \frac 1 {k^r}$
Examples:
$H_0 = 0$
$H_1 = 1$
$H_2 = \dfrac 3 2$
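A short computational check of these values (a sketch using exact rationals for the small cases):

```python
from fractions import Fraction

def H(n, r=1):
    """Harmonic number of order r: sum of 1/k^r for k = 1..n, with H(0) = 0."""
    return sum(Fraction(1, k**r) for k in range(1, n + 1))

print(H(0), H(1), H(2))                         # 0, 1, 3/2
print(sum(1.0 / k for k in range(1, 10001)))    # ~9.78760603604438, cf. below
```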
To $15$ decimal places:
$H_{10000} \approx 9 \cdotp 78760 \, 60360 \, 44382 \, \ldots$
Notation
There is no standard notation for this series.
The notations $h_n$, $S_n$ and $\psi \left({n + 1}\right) + \gamma$ can be found in the literature.
The notation given here is as advocated by Donald E. Knuth.
Also see
Results about harmonic numbers can be found here.
Sources
1997: Donald E. Knuth: The Art of Computer Programming: Volume 1: Fundamental Algorithms (3rd ed.) ... (previous) ... (next): $\S 1.2.7$: Harmonic Numbers: $(1)$ |
Steve Roth linked to Ben Casselman in a Tweet about the latest JOLTS data, and both brought attention to the fact that the ratio of unemployed to job openings appears to have flattened out; Steve also made the observation that it doesn't seem to go below 1.
I thought that the JOLTS data might make a good candidate for my new favorite go-to model ‒ the naive dynamic equilibrium that I recently used to make an unwarranted forecast of future unemployment. However, in this case, I think the framing of the same data from voxeu.org in terms of the logarithm might be the better way to look at the data.
We will assume that the logarithmic derivative of the inverse ratio (i.e. openings over unemployed) is a constant
$$
\frac{d}{dt} \log \; \frac{V}{U} \simeq \;\text{ constant}
$$
where $\exp \; (- \log (V/U)) = U/V$ which is the ratio Ben graphs. Using the same entropy minimization procedure, fitting to a sum of logistic functions, and finally converting back to the $U/V$ variable Ben and Steve were talking about, we have a pretty decent fit (using two negative shocks):
The shocks are centered at 2000.6 and 2008.6, consistent with the NBER recessions just like the unemployment model. There's a bit of overshooting after the recession hits just like in the unemployment case as well. I've also projected the result out to 2020. If there weren't any shocks, we'd expect the ratio to fall below 1 in 2018 while spending approximately three to four years in the neighborhood of 1.3. And I think that explains why we haven't seen a ratio below 1 often: the economy would have to experience several years without a shock. A ratio near 2 requires a shorter time to pass; a ratio near 4 requires an even shorter time. Also in this picture, Ben's "stabilization" is not particularly unexpected.
However, there's a bit more apparent in this data that is not entirely obvious in the $U/V$ representation shown above. For example, Obamacare pushed job openings in health care higher, likely even reducing the unemployment rate by a percentage point or more (as I showed here). This also has a noticeable effect on the JOLTS data; here's the fit with three shocks ‒ two negative and one positive:
With the positive Obamacare shock (an onset of 2014.3, or about mid-April 2014, is consistent with the unemployment findings, and the negative shocks are in nearly the same places at 2000.8 and 2008.7), we can see that what originally looked like mean reversion in the past year now looks a bit like the beginning of a deviation. As I asked in my unwarranted forecast at the link above, what if we're seeing the early signs of a future recession? The result hints at a coming rise, but is fairly inconclusive:
I wanted to keep the previous three graphs in order for you to be able to cycle through them on a web browser and see the subtle changes. This graph of the logarithm of $V/U$ that I mentioned at the top of the post allows you to see the Obamacare bump and the potential leading edge of the next recession a bit clearer:
It's important to note that leading edge may vanish; for example, look at 2002-2005 on the graph above where a leading edge of a recovery and recession both appeared to vanish.
Now is this the proper frame to understand the JOLTS data? I don't know the answer to that. And the frame you put on the data can have pretty big impacts on what you think is the correct analysis. The dynamic equilibrium is at least an incredibly simple frame (much like my discussion of consumption smoothing here) that's pretty easy to generate with a matching model ($U$ and $V$ grow/shrink at a constant relative rate). Even better, an information equilibrium model because the equation at the top of this page essentially says that $V \rightleftarrows U$ with a constant IT index $k$ so that:
$$
\log \; V = \; k \; \log \; U + c
$$
therefore if the total number of unemployed grows at a constant rate $r$, i.e. $U\sim e^{rt}$ (say, with population growth)
$$
\begin{align}
\log \; \frac{V}{U} = & \; \log V - \log U\\
= & \; k\; \log U + c - \log U\\
= & \; (k-1)\; \log U + c\\
\frac{d}{dt}\log \; \frac{V}{U} = & \; (k-1)\;\frac{d}{dt} \log U \\
= & \; (k-1)r
\end{align}
$$
which is a constant (which is about 0.21, or about 21% of unemployed per year or 1.7% per month, at least since 2000 ‒ multiple equilibria are possible).
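For readers who want to experiment, a minimal sketch of the fitting procedure described above (a constant-slope log-ratio plus logistic shocks) might look like the following. The data here are synthetic placeholders generated from the model itself, not the actual JOLTS series:

```python
import numpy as np
from scipy.optimize import curve_fit

def log_ratio(t, a, b, A1, t1, w1, A2, t2, w2):
    """log(V/U) as a constant-slope trend plus two logistic shocks."""
    shock = lambda A, t0, w: A / (1.0 + np.exp(-(t - t0) / w))
    return a * t + b + shock(A1, t1, w1) + shock(A2, t2, w2)

t = np.arange(1.0, 17.0, 1.0 / 12.0)                   # years since 2000, monthly
true = [0.2, -1.6, -0.6, 0.6, 0.4, -0.7, 8.6, 0.4]     # made-up "true" parameters
y = log_ratio(t, *true) + 0.03 * np.random.randn(t.size)  # synthetic data

p0 = [0.1, -1.0, -0.5, 1.0, 0.5, -0.5, 8.0, 0.5]       # rough initial guesses
params, _ = curve_fit(log_ratio, t, y, p0=p0, maxfev=20000)
print(params)
```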
In case you are interested, here is the code (click to expand):
Update 11 January 2017
In case you are interested, here is the code (click to expand): |
When solving the advection equation in 1D, that is:
$$ \frac{\partial u}{\partial t} + c\frac{\partial u}{\partial x} = 0 $$ with $ u'(t,0) = 0$ and $u(t,L) = 0$ , $u(0,x) = u_{0} $
one numerical scheme is the FTCS (Forward time-centered space), but this numerical scheme is unstable.
$$\frac{u_{j}^{n+1}-u_{j}^{n}}{ h_{t}} = -c \frac{u_{j+1}^{n}-u_{j-1}^{n}}{ 2h_{x}} $$
But when solving
$$ \frac{\partial u}{\partial t} + c\frac{\partial u}{\partial x} = \alpha \frac{\partial^2 u}{\partial x^2} $$ the advection-diffusion equation in 1D with $ u'(t,0) = 0$ and $u(t,L) = 0$ , $u(0,x) = u_{0} $
Since the advection-diffusion equation is a second order equation I'd like to use a second order approximation.
If we define $u_{k}^{n} := u(t_{n},x_{k}) $; $\ \ x_{k} = kh $ and $ \ \ k = 0,1,2,...,N$. Here $h$ is known as the mesh size or step size.
For the second derivative:
$$ \frac{\partial^2u_{k}^{n}}{\partial x^2} \approx \frac{u_{k+1}^{n}-2u_{k}^{n}+u_{k-1}^{n}}{h^2} = \frac{u_{k-1}^{n}-2u_{k}^{n}+u_{k+1}^{n}}{h^2} $$ for $k=0,1,...,N-1$
Since $u'(t,0) = 0$ and $ u(t_{n},L) = u(t_{n},x_{N}) = u_{N}^{n} = 0 $ we get the following matrix representation of the second derivative operator
\begin{equation} \frac{\partial^2}{\partial x^2} \approx L_{2} = \frac{1}{h^2}\left(\begin{matrix} -2 & 1 & & 0\\ 1 & \ddots & \ddots & \\ & \ddots & \ddots & 1 \\ 0 & & 1 & -2 \end{matrix} \right) \end{equation}
for $k=0$ , we get
$$ \frac{\partial u_{0}^{n}}{\partial x} = \frac{u_{0+1}^{n}-u_{0-1}^{n}}{ 2h} = 0 $$ this implies that $ u_{1}^{n}=u_{-1}^{n} $ and
$$ \frac{\partial^2u_{0}^{n}}{\partial x^2} \approx \frac{u_{0+1}^{n}-2u_{0}^{n}+u_{0-1}^{n}}{h^2} = \frac{u_{0-1}^{n}-2u_{0}^{n}+u_{0+1}^{n}}{h^2} = \frac{-2u_{0}^{n}+2u_{1}^{n}}{h^2} $$
thus we have to modify the entry $1,2$ of $L_{2}$
\begin{equation} L_{2} = \frac{1}{h^2}\left(\begin{matrix} -2 & 2 & & 0\\ 1 & \ddots & \ddots & \\ & \ddots & \ddots & 1 \\ 0 & & 1 & -2 \end{matrix} \right) \end{equation}
What I have done, is $\mathbf{impose \ the \ Neumann \ boundary \ condition}$ in $L_{2}$ .
I want to approximate the first derivative using central difference(Second order approximation):
$$ \frac{\partial u_{k}^{n}}{\partial x} = \frac{u_{k+1}^{n}-u_{k-1}^{n}}{ 2h} $$
The matrix representation is: \begin{equation} \frac{\partial}{\partial x} \approx L_{1} = \frac{1}{2h}\left(\begin{matrix} 0 & 1 & & 0\\ -1 & \ddots & \ddots & \\ & \ddots & \ddots & 1 \\ 0 & & -1 & 0 \end{matrix} \right) \end{equation}
for $k=0$ , we get
$$ \frac{\partial u_{0}^{n}}{\partial x} = \frac{u_{0+1}^{n}-u_{0-1}^{n}}{ 2h} = 0 $$ this implies that $ u_{1}^{n}=u_{-1}^{n} $
But I'm stuck when I try to $\mathbf{impose \ the \ Neumann \ boundary \ condition \ in}$ $L_{1}$. I don't know how to do that.
If we solve that problem, we can solve the differential equation
$$ \frac{ \partial u }{\partial t} = \Big(-cL_{1} +\alpha L_{2} \Big)u $$ integrating in time( C-N, Back-Euler, RK4 )
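Here is a sketch of how the operators defined above can be assembled in NumPy. The grid size, $c$ and $\alpha$ are arbitrary, and the first row of $L_{1}$ — the open question — is tentatively set to zero, which is what the ghost-node relation $u_{1}^{n}=u_{-1}^{n}$ would give for the centered difference at $k=0$; whether that is the treatment you want is exactly the issue:

```python
import numpy as np

N = 50                           # arbitrary number of grid points
h = 1.0 / N
c, alpha = 1.0, 0.05             # arbitrary advection speed and diffusivity

# Second-derivative operator L2 with the Neumann modification described above.
L2 = (np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
      + np.diag(np.ones(N - 1), -1)) / h**2
L2[0, 1] = 2.0 / h**2            # entry (1,2) doubled, from u_{-1} = u_1

# Centered first-derivative operator L1.
L1 = (np.diag(np.ones(N - 1), 1) - np.diag(np.ones(N - 1), -1)) / (2.0 * h)
L1[0, :] = 0.0                   # tentative Neumann row: the centered difference vanishes

A = -c * L1 + alpha * L2         # right-hand side operator for du/dt = A u
u = np.linspace(1.0, 0.0, N)     # some initial profile, consistent with u(L) = 0
dt = 0.2 * h**2 / alpha          # explicit Euler purely for illustration
for _ in range(200):
    u = u + dt * (A @ u)
print(u[:5])
```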
$\mathbf{please \ help!\ \ How \ do \ you \ impose \ the \ Neumann \ Boundary \ Condition \ in \ L_{1}?}$ |
Inertia
In power systems engineering, "inertia" is a concept that typically refers to rotational inertia or rotational kinetic energy. For synchronous systems that run at some nominal frequency (i.e. 50Hz or 60Hz), inertia is the energy that is stored in the rotating masses of equipment electro-mechanically coupled to the system, e.g. generator rotors, fly wheels, turbine shafts.
Derivation
Below is a basic derivation of power system rotational inertia from first principles, starting from the basics of circle geometry and ending at the definition of the moment of inertia (and its relationship to kinetic energy).
The length of a circle arc is given by:
[math] L = \theta r [/math]
where [math]L[/math] is the length of the arc (m)
[math]\theta[/math] is the angle of the arc (radians) [math]r[/math] is the radius of the circle (m)
A cylindrical body rotating about the axis of its centre of mass therefore has a rotational velocity of:
[math] v = \frac{\theta r}{t} [/math]
where [math]v[/math] is the rotational velocity (m/s)
[math]t[/math] is the time it takes for the mass to rotate L metres (s)
Alternatively, rotational velocity can be expressed as:
[math] v = \omega r [/math]
where [math]\omega = \frac{\theta}{t} = \frac{2 \pi \times n}{60}[/math] is the angular velocity (rad/s)
[math]n[/math] is the speed in revolutions per minute (rpm)
The kinetic energy of a circular rotating mass can be derived from the classical Newtonian expression for the kinetic energy of rigid bodies:
[math] KE = \frac{1}{2} mv^{2} = \frac{1}{2} m(\omega r)^{2}[/math]
where [math]KE[/math] is the rotational kinetic energy (Joules or kg.m²/s² or MW.s, all of which are equivalent)
[math]m[/math] is the mass of the rotating body (kg)
Alternatively, rotational kinetic energy can be expressed as:
[math] KE = \frac{1}{2} J\omega^{2} [/math]
where [math]J = mr^{2}[/math] is called the moment of inertia (kg.m²).
Note that in physics, the moment of inertia [math]J[/math] is normally denoted as [math]I[/math]. In electrical engineering, the convention is for the letter "i" to always be reserved for current, and is therefore often replaced by the letter "j", e.g. the complex number operator i in mathematics is j in electrical engineering.
Normalised Inertia Constants
The moment of inertia can be expressed as a normalised quantity called the
inertia constant H, calculated as the ratio of the rotational kinetic energy of the machine at nominal speed to its rated power (VA): [math]H = \frac{1}{2} \frac{J \omega_0^{2}}{S_{b}}[/math]
where [math]\omega_0 = 2 \pi \frac{n}{60}[/math] is the nominal mechanical angular frequency (rad/s)
[math]n[/math] is the nominal speed of the machine (revolutions per minute) [math]S_{b}[/math] is the rated power of the machine (VA)
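A small worked example (the machine parameters below are made up purely for illustration):

```python
import math

def inertia_constant(J, n_rpm, S_rated):
    """Inertia constant H (in seconds) = 0.5 * J * omega0^2 / S_rated."""
    omega0 = 2.0 * math.pi * n_rpm / 60.0     # nominal mechanical speed, rad/s
    return 0.5 * J * omega0**2 / S_rated

# Hypothetical machine: J = 16000 kg.m^2, 3000 rpm, 200 MVA.
print(inertia_constant(J=16000, n_rpm=3000, S_rated=200e6))   # ~3.9 s
```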
Based on actual generator data, the normalised constants for different types of generators are summarised in the table below:
Generator Type | Number of samples | MVA Rating (Min / Median / Max) | Inertia constant H (Min / Median / Max)
Steam turbine | 45 | 28.6 / 389 / 904 | 2.1 / 3.2 / 5.7
Gas turbine | 47 | 22.5 / 99.5 / 588 | 1.9 / 5.0 / 8.9
Hydro turbine | 22 | 13.3 / 46.8 / 312.5 | 2.4 / 3.7 / 6.8
Combustion engine | 26 | 0.3 / 1.25 / 2.5 | 0.6 / 0.95 / 1.6
Generator Inertia
The moment of inertia for a generator is dependent on its mass and apparent radius, which in turn is largely driven by its prime mover type. |
I read in a book, it is hard to formulate the theory of everything by unifying all the forces, because general relativity is a background independent theory while electromagnetism isn't. Why is this true?
I believe this to be a very important conceptual novelty of GR. Let me explain.
Electromagnetism depends on the background
Consider the action functional for the Maxwell's theory of the electromagnetic field: $$ S[A] = - \frac{1}{4} \int d^4 x\, \sqrt{-g} F_{\mu \nu} F^{\mu \nu}. $$
This action depends on the electromagnetic potential $A_{\mu}(x)$, as well as on the metric tensor $g_{\mu \nu}(x)$. However, $A_{\mu}(x)$ appears in the square brackets of $S[A]$ and $g_{\mu \nu}(x)$ doesn't.
This actually makes all the difference. We say that $A$ is a dynamical variable, while $g$ is an external parameter, the
background. Maxwell's theory is thus dependent on the exact background geometry, which is usually taken to be Minkowski flat spacetime.
Gravity is background independent
Now General Relativity is given by a diffeomorphism-invariant integral $$ S[g] = \frac{1}{16 \pi G} \intop_{M} d^4 x \sqrt{-g} \,R_{\mu \nu} g^{\mu \nu}. $$
Note the profound difference: now $g$ appears in the square brackets of $S[g]$! Spacetime geometry is a dynamical variable in gravity, just as the electromagnetic field is a dynamical variable in Maxwell's theory.
We no longer have an external background spacetime, because spacetime is dynamical. The (classical) spacetime metric tensor is not given a-priori, but is a solution of Einstein's equations. That's background independence.
Diff invariance vs background independence
These concepts are very different.
Diffeomorphism invariance simply means that we are building a field theory on a spacetime manifold, and any coordinate patch is equally good for describing the theory. Note how both the Maxwell (for electromagnetism) and Einstein-Hilbert (for gravity) actions in this post are written in the diffeomorphism-invariant form.
Background independence is basically whether a theory depends on an external spacetime geometry or not. Maxwell action depends on external background geometry, while Einstein-Hilbert (and its high-energy modifications) doesn't. In simple words, it is about whether $g$ is in the square brackets of $S$.
Why should we care
First, background independent physics is very different from the "old" physics on a fixed background. The absence of a timelike Killing vector field renders concepts such as time and energy generally ill-defined.
Second, there's a physical insight of background independence: space and time aren't some static arena inhabited by fields, but rather have the same dynamical properties that fields have.
Third, there is no place for evolution in external time in the background independent context, because there is no external time. The implications are far-reaching: no general energy conservation (except in particular solutions), no Hamiltonian, no unitarity in the quantum theory. This is known as the problem of time. This doesn't indicate that background independent theories are unphysical, however, just that we have to utilize completely different techniques in order to derive predictions. E.g. the background independent dynamics is described in terms of constraints.
Making electromagnetism background independent
It is really easy to make Maxwell's theory background independent. We just have to couple it to gravity:
$$ S[A,g] = \intop_{\mathcal{M}} d^4 x \sqrt{-g} \left( \frac{1}{16 \pi G} R_{\mu \nu} g^{\mu \nu} - \frac{1}{4} F_{\mu \nu} F^{\mu \nu} \right). $$
Anything coupled to General Relativity (or its high-energy modification) is background independent, because $g$ appears in the square brackets of the total action.
Short conclusion
Electromagnetism is a theory of an (electromagnetic) field in spacetime. Gravity is a theory of spacetime itself.
Why is it really difficult to formulate a ToE
The difficulty in formulating a ToE lies in two completely different issues:
1. Gravity is hard to consistently quantize. This is partially related to background independence. Manifestly background independent quantization procedures exist, e.g. Loop Quantum Gravity.
2. It is hard to establish a natural unification of gravity and the Standard Model (unless you believe in compactifications). This has little to do with background independence, mostly being an issue of obtaining a complex gravity+gauge Lagrangian from some simple geometrical model.
This is my guess, basically (I self study, have pity).
From the link you gave, the answer is contained in the first paragraph. When dealing with electromagnetic phenomena, the charge and mass of an electron, for example, are free parameters.
So as we don't have a theory for these parameters, we just have to accept our measurements.
In other words, by the (latest, as I have read a few variations) definition of background independence, EM is not background independent.
Background independence, a very nice (and long) answer by Luboš Motl, is worth reading; I discovered the link after writing the above.
It is useful to keep in mind that there does not seem to be a universally agreed upon definition of background (in)dependence.
My understanding is roughly as follows.
When we talk about background-independent theories, we mean theories that provide answers to what the space-time (background) should look like. That is, amongst its principles and derivations, the theory would contain something that would determine the geometry or stage on which the objects of the theory "dance." As an example, Einstein's theory of General Relativity does not have to say anything about space-time geometry in its postulates. Instead, the geometry comes out as a consequence of the theory.
On the other hand, if a theory is background-dependent, then it depends on a postulate about a certain geometry of space. That is, one needs to make these assumptions before one can even begin talking about the theory in question. To be specific, electrodynamics does not contain enough information to make conclusions about the existing geometry of space(time).
1. Measurement of D⁎±, D± and Ds± meson production cross sections in pp collisions at s=7 TeV with the ATLAS detector
Nuclear Physics, Section B, ISSN 0550-3213, 06/2016, Volume 907, pp. 717 - 763
The production of D⁎±, D± and Ds± charmed mesons has been measured with the ATLAS detector in pp collisions at √s = 7 TeV at the LHC, using data corresponding to an integrated...
phase space [kinematics] | Hadron Collisions | Science & Technology | Phenomenology | production [meson] | Nuclear Experiment | Deep inelastic scattering | total cross section | Subatomär fysik | Quark Fragmentation Function | Physik | perturbation theory [quantum chromodynamics] | rapidity | Experiment | CERN LHC Coll | Hadron-Production | Fractions | Hera | Hadron production | Deep-Inelastic Scattering | rapidity dependence | QUARK FRAGMENTATION FUNCTION; DEEP-INELASTIC SCATTERING; HADRON-PRODUCTION; E(+)E(-) COLLISIONS; JET FRAGMENTATION; HERA; ANNIHILATION; CHARM; MODEL; FRACTIONS | transverse momentum dependence | differential cross section | scattering [p p] | Subatomic Physics | transverse momentum | quantum chromodynamics | Jet Fragmentation | transverse momentum [D] | High Energy Physics - Experiment | production [charm] | High Energy Physics | Annihilation | Charm | 7000 GeV-cms | ATLAS detector; LHC; proton-proton | experimental results | Nuclear and High Energy Physics | E(+)E(-) Collisions | measured [differential cross section] | Mesons | fragmentation [charm] | charmed meson | measured [total cross section] | Model | colliding beams [p p] | hadroproduction [charm] | hadroproduction [charmed meson] | Fysik | Physical Sciences | Naturvetenskap | Natural Sciences
Journal Article
Physical Review Letters, ISSN 0031-9007, 09/2013, Volume 111, Issue 11, pp. 111801 - 111801
We measure the mass difference Δm0 between the D*(2010)+ and the D0 and the natural linewidth Γ of the transition D*(2010)+ → D0π+. The data were recorded with...
Physics - High Energy Physics - Experiment | Física de partícules | Experiments | Particle physics | Physics | High Energy Physics - Experiment
Journal Article
3. A Lithium‐Ion Battery using a 3 D‐Array Nanostructured Graphene–Sulfur Cathode and a Silicon Oxide‐Based Anode
ChemSusChem, ISSN 1864-5631, 05/2018, Volume 11, Issue 9, pp. 1512 - 1520
Journal Article
Physical Review Letters, ISSN 0031-9007, 09/2012, Volume 109, Issue 10
Based on the full BABAR data sample, we report improved measurements of the ratios R(D(*))=B(B̄→D(*)τ⁻ν¯τ)/B(B̄→D(*)ll¯ν¯l), where l is either e or μ. These...
Journal Article
Physical Review Letters, ISSN 0031-9007, 2013, Volume 111, Issue 11
Journal Article
Physical Review D - Particles, Fields, Gravitation and Cosmology, ISSN 1550-7998, 09/2013, Volume 88, Issue 5
Journal Article
7. Thrombin-Facilitated Efflux of d-[3H]-Aspartate from Cultured Astrocytes and Neurons Under Hyponatremia and Chemical Ischemia
Neurochemical Research, ISSN 0364-3190, 7/2014, Volume 39, Issue 7, pp. 1219 - 1231
Thrombin effect increasing swelling-induced glutamate efflux was examined in cultured cortical astrocytes, cerebellar granule neurons (CGN), hippocampal and...
Neurochemistry | Biochemistry, general | Neurology | Neurosciences | Biomedicine | Volume sensitive pathway | Excitotoxicity | Protease activated receptors | Cytotoxic swelling | Cell Biology | MUSCARINIC CHOLINERGIC-RECEPTORS | GLUTAMATE RELEASE | BIOCHEMISTRY & MOLECULAR BIOLOGY | SH-SY5Y NEUROBLASTOMA-CELLS | PROTEIN-COUPLED RECEPTORS | OSMOLYTE EFFLUX | HIPPOCAMPAL-NEURONS | NEUROSCIENCES | CEREBELLAR GRANULE CELLS | VOLUME-SENSITIVE EFFLUX | REGULATED ANION CHANNELS | CEREBRAL-ISCHEMIA | Astrocytes - drug effects | Cell Survival - drug effects | Cell Hypoxia - drug effects | Cell Hypoxia - physiology | Rats, Wistar | Thrombin - toxicity | Cells, Cultured | Rats | Cerebral Cortex - cytology | Cerebral Cortex - metabolism | Animals | Tritium - metabolism | Aspartic Acid - metabolism | Neurons - metabolism | Cerebral Cortex - drug effects | Neurons - drug effects | Cell Survival - physiology | Astrocytes - metabolism | Hyponatremia - metabolism | Prevention | Ischemia | Diastereomers | Neurons | Mortality | Aspartate | Thrombin | Explosives | Glutamate | Index Medicus
Journal Article
8. A- and D-ring structural modifications of an androsterone derivative inhibiting 17β-hydroxysteroid dehydrogenase type 3: Chemical synthesis and structure-activity relationships
Journal of medicinal chemistry, ISSN 0022-2623, 07/2019, Volume 62, Issue 15, pp. 7070 - 7088
Journal Article
Physical Review D - Particles, Fields, Gravitation and Cosmology, ISSN 1550-7998, 10/2013, Volume 88, Issue 7
Journal Article
PHYSICAL REVIEW LETTERS, ISSN 0031-9007, 09/2012, Volume 109, Issue 10
Journal Article
11. Measurement of $D^{*\pm}$, $D^\pm$ and $D_s^\pm$ meson production cross sections in $pp$ collisions at $\sqrt{s}=7$ TeV with the ATLAS detector
Nuclear Physics B, ISSN 0550-3213, 12/2015, Volume 907, p. 717–763
Nucl. Phys. B 907 (2016) 717 The production of $D^{*\pm}$, $D^\pm$ and $D_s^\pm$ charmed mesons has been measured with the ATLAS detector in $pp$ collisions at...
Physics - High Energy Physics - Experiment | Physics | High Energy Physics - Experiment | HEPEX | PHYSICS OF ELEMENTARY PARTICLES AND FIELDS
Physics - High Energy Physics - Experiment | Physics | High Energy Physics - Experiment | HEPEX | PHYSICS OF ELEMENTARY PARTICLES AND FIELDS
Journal Article
12. Observation of direct CP violation in the measurement of the Cabibbo-Kobayashi-Maskawa angle gamma with $B^\pm\to D^{(*)}K^{(*)\pm}$ decays
Physical Review D, ISSN 1550-7998, 2013, Volume 87, Issue 5, p. 052015
We report the determination of the Cabibbo-Kobayashi-Maskawa CP-violating angle γ through the combination of various measurements involving B
Physics - High Energy Physics - Experiment | Física de partícules | Experiments | Particle physics | Physics | High Energy Physics - Experiment | Experiment-HEP,HEPEX
Journal Article
13. Evidence for direct CP violation in the measurement of the Cabibbo-Kobayashi-Maskawa angle γ with B± → D(*)K(*)± decays
Physical Review Letters, ISSN 0031-9007, 09/2010, Volume 105, Issue 12, pp. 121801 - 121801
Journal Article
14. Vitamin D deficiency is associated with retinopathy in children and adolescents with type 1 diabetes
Diabetes Care, ISSN 0149-5992, 06/2011, Volume 34, Issue 6, pp. 1400 - 1402
OBJECTIVE-To examine the hypothesis that vitamin D deficiency (VDD) is associated with an increased prevalence of microvascular complications in young people...
D-RECEPTOR | POLYMORPHISM | INHIBITS ANGIOGENESIS | ENDOCRINOLOGY & METABOLISM | Prevalence | Cross-Sectional Studies | Vitamin D Deficiency - complications | Glycated Hemoglobin A - metabolism | Humans | Male | Diabetes Mellitus, Type 1 - complications | Australia - epidemiology | Young Adult | Diabetic Retinopathy - epidemiology | Adolescent | Female | Diabetic Retinopathy - etiology | Child | Diabetic retinopathy | Vitamin D deficiency | Diabetics | Diet therapy | Vitamin D | Calcifediol | Diabetes | Research | Alfacalcidol | Studies | Celiac disease | Socioeconomic factors | Index Medicus | Original Research
Journal Article |
(Sorry was asleep at that time but forgot to log out, hence the apparent lack of response)
Yes you can (since $k=\frac{2\pi}{\lambda}$). To convert from path difference to phase difference, multiply by $k$; see this PSE post for details: http://physics.stackexchange.com/questions/75882/what-is-the-difference-between-phase-difference-and-path-difference |
In a certain economy, time is discrete with periods $t=0,1,2,...$. The economy is populated by many households and identical firms. The utility of a household is:
$\displaystyle\sum^{\infty}_{t=0}\beta^t\Big(\log c_t - \gamma L_t^{\frac{1}{\gamma}}\Big)$
where $c_t$ is consumption and $L_t$ is labor in hours worked. There are two assets: capital, $K_t$, and a one-period risk-free bond $b_t$.
Capital is rented to the firms at a rate $v_t$. The law of motion for capital is $K_{t+1}=(1-\delta)K_t+I_t$. $I_t$ is gross investment and $I_t-\delta K_t$ is net investment.
The bond, is in zero net supply and yields $r_t$ per period.
Firms are owned by households and produce output $Y_t^p=K_t^{\alpha}L_t^{1-\alpha}$ where $0<\alpha<1$. Wages are $w_t$.
The government has a sequence of public spending $\{G_t\}_{t=0}^{\infty}$ that will be paid with a combination of taxed bonds at a rate $\tau_t^b$, taxed net investment at a rate $\tau_t^I$, taxed labor income at a rate $\tau_t^w$, and taxed consumption at a rate $\tau_t^c$. The government cannot borrow or save, $Y_t=Y_t^p$.
I need to set up the firm's problem, the household's problem, and the social planner's problem but have no idea how to incorporate the tax structure.
For example, here is what I have for the household's problem, but I'm pretty sure it's not right:
$\begin{aligned} \max_{c_t, L_t, K_{t+1}} \quad & \displaystyle\sum_{t=0}^{\infty} \beta^t u(c_t, L_t)\\ \textrm{s.t.} \quad & c_t + I_t + \tau_t = Y_t = K_t^{\alpha} L_t^{1-\alpha} + G_t\\ \end{aligned}$
I'd appreciate any hints for setting up these problems. |
Let T be a second-order arithmetical theory, Λ a well-order, λ < Λ and X ⊆ ℕ. We use $[\lambda |X]_T^{\rm{\Lambda }}\varphi$ as a formalization of “φ is provable from T and an oracle for the set X, using ω-rules of nesting depth at most λ”.
For a set of formulas Γ, define predicative oracle reflection for T over Γ (Pred–O–RFNΓ(T)) to be the schema that asserts that, if X ⊆ ℕ, Λ is a well-order and φ ∈ Γ, then
$$\forall \,\lambda < {\rm{\Lambda }}\,([\lambda |X]_T^{\rm{\Lambda }}\varphi \to \varphi ).$$
In particular, define predicative oracle consistency (Pred–O–Cons(T)) as Pred–O–RFN{0=1}(T).
Our main result is as follows. Let ATR0 be the second-order theory of Arithmetical Transfinite Recursion, ${\rm{RCA}}_0^{\rm{*}}$ be Weakened Recursive Comprehension and ACA be Arithmetical Comprehension with Full Induction. Then,
$${\rm{ATR}}_0 \equiv {\rm{RCA}}_0^{\rm{*}} + {\rm{Pred - O - Cons\ }}\left( {{\rm{RCA}}_0^{\rm{*}} } \right) \equiv {\rm{RCA}}_0^{\rm{*}} + \,{\rm{Pred - O - RFN}}\,_{{\bf{\Pi }}_2^1 } \left( {{\rm{ACA}}} \right).$$
We may even replace ${\rm{RCA}}_0^{\rm{*}}$ by the weaker ECA0, the second-order analogue of Elementary Arithmetic.
Thus we characterize ATR0, a theory often considered to embody Predicative Reductionism, in terms of strong reflection and consistency principles. |
I do not understand why: $$\cos(x)-i\sin(x) = \cos(-x)+i\sin(-x)$$
The last step I understand. (It is one of the solutions of a quadratic equation, if it matters)
This is so, because
$$\cos (-x)=\cos (x)$$
And, $$\sin (-x)=-\sin (x) $$
Because $\cos$ is even and $\sin$ is odd. In detail, it's because $$\cos(z)=\frac{e^{iz}+e^{-iz}}{2}$$ and $$\sin(z)=\frac{e^{iz}-e^{-iz}}{2i}.$$ |
When I typed this question into Google I found this link: http://octomatics.org/ Just from the graphic point of view, this system seems to be easier (when he explains that you can overlap the lines). He also talks about how base 8 is useful for computers. Is base 8 really the best? What makes one base better than another? Is there some mathematical argument that we can use to decide which is the best base to use? Thank you for your insight. I think the answer has to be entangled with other areas of knowledge, such as cognitive science, or simply with how large the numbers we use most often are, but: arithmetic is something that almost everybody does, so it is probably worthwhile to try to see if we are making it as easy as possible. Thank you very much in advance.
We could say that there are two desirable goals for a base system:
A minimized number of symbols (e.g. there are two symbols in binary, 0 and 1). Minimal digital lengths for each number (e.g. the number 10101 has a length of 5 digits in binary).
Let us say our base is $b$, and hence we have exactly $b$ symbols to represent numbers with. The digital length of a number is then represented as $\log_bN$ for a positive number $N.$
There are many ways to attempt to minimize both of these values simultaneously. One way is to minimize $||\langle b,\space \log_bN \rangle||_p$, where $||\small \vec x||_p$ is the p-norm of $\vec x$, and where $N$ can be any arbitrary number (hence $N$ becomes a weighting parameter of sorts).
The results you obtain with the above algorithm are completely dependent on what values you choose for $N$ and $p$. So while there is no absolute answer, you can obtain one by assigning fixed values to these two parameters.
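For illustration, here is a small sketch (not part of the original answer) that evaluates $||\langle b,\space \log_bN \rangle||_p$ over a range of integer bases; the scan range and the example values of $N$ and $p$ are arbitrary choices.

```python
import math

def base_cost(b, N, p):
    """p-norm of the pair (symbol count b, digit length of N in base b)."""
    digits = math.log(N, b)                 # continuous approximation of the digital length
    return (b ** p + digits ** p) ** (1.0 / p)

N, p = 10 ** 6, 2                           # arbitrary weighting: numbers around a million, Euclidean norm
costs = {b: base_cost(b, N, p) for b in range(2, 21)}
best = min(costs, key=costs.get)
print("best base for this weighting:", best)
for b in (2, 3, 8, 10, 12, 16, best):
    print(b, round(costs[b], 2))
```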
A while back I tried doing some pencil-and-paper arithmetic in base 16 (extraction of square roots) to see what it was like, and found that it was hard mainly because I don't have the base-16 multiplication table memorized, and there is a lot to memorize. My conclusion at the end was that the easiest way to do base-16 arithmetic is to do it in base 4 and then convert at the end; the conversion is trivial. All the numerals have twice as many digits, but it takes much less than half as long to calculate each digit, so it is much faster. Converting to base 2 goes too far and loses the tradeoff; now the numerals are twice as long again, but you do not gain enough speed to make it worth while.
For base $n$, you have to memorize $\frac12\bigl(n^2-3n+2\bigr)$ multiplication table entries, since $0\times n$ and $1\times n$ are trivial, and $m\times n = n\times m$. This increases rapidly with $n$: in base 10 you have 36 products to memorize; in base 16 there are 105. For base 4, you only have to remember $2\times 2 = 10, 2\times 3 = 12, $ and $3\times 3 = 21$.
I did not try any of the usual methods for speeding up paper-and-pencil multiplication, such as Napier's bones or Genaille-Lucas rulers. But I don't think these would help enough. If you don't have the multiplication tables down pat, you lose a lot of number sense that you probably take for granted. For example, you can't estimate the answer to division problems. The square root algorithm requires that you guess the answer to a lot of sub-problems of the type "How many times does 2D,409 go into 1C0,000?", and whereas this sort of thing is easy in base 10 because you have years of practice (What's $1,835,000\div 185,353$ to one decimal place?), it is hard in base 16. So knowing the multiplication tables is important, and in base 16 there is a lot to know.
I found the exercise very enlightening, and I recommend it. If nothing else, it may be interesting to re-experience feelings that you have not had since third grade.
Converting to and from base 8 is not so felicitous, but the multiplication table is not too hard to memorize. Still, I think the advantages over bases 4 or 16 are minimal, if they exist at all. I think the worst choice for pencil calculation would be a large prime number, since there's no easy conversion to a smaller base as there is from bases 8 or 16 to base 4.
The desiderata for computer calculation are of course completely different, and for abacus calculation are different again. For computers I have seen claims that base 3 is in some sense optimal, but I think 70 years of engineering practice refutes that completely. There is a certain sense in which modern computers operate not in base 2 but in base 256, but it seems to be something of a philosophical question which it is; it depends on what level of operation you are looking at.
To have the largest readership possible, along with the greatest probability of your readers reading your work until the end, and the probability of your readers also finding your work easy to read, at present, it seems you want base ten.
There doesn't exist any mathematical argument as to which base system comes as the best until we have criteria to determine what "the best" would mean. Who do we want to talk to? For what purpose do we talk to them? What sort of background do they have? What sort of tools do they have? What can those tools do?
Base 8 has direct conversion back and forth to binary and would work almost equally well.
Having base 2 rather than anything else is a matter of electronics/hardware concerns before even getting into the values-in-any-base zone. Each separate bit is converted from an analog signal to binary by a threshold judgement: it is cast as 1 if it is closer to 1 than it is to 0, and vice versa (an electronics expert could comment further on this). Splitting a signal among more levels would decrease its accuracy and is heavier on hardware costs. The hardware was more expensive than the software running on it back in the early days, when these conventions were all being set.
Base 3 is optimal because it can be shown to be the most efficient means of information storage, as a balance between the number of symbols and the space they take up. I wish I had the paper handy that provides the proof so I could reference it; unfortunately it is buried in some boxes. But it does exist, and the reason has to do with the natural log (technically a base of $e$ is the most efficient, but a fractional base, while possible, is obviously not practical).
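This claim is usually stated in terms of "radix economy": the cost of writing $N$ in base $b$ is taken to be roughly $b\log_b N$, which is proportional to $b/\ln b$ and is minimised at $b=e$, making 3 the best integer base by that particular measure. A small sketch of the computation (mine, not from the answer):

```python
import math

def radix_economy(b, N):
    """Approximate storage cost of N in base b: b symbols per place times log_b(N) places."""
    return b * math.log(N, b)

N = 10 ** 6
for b in (2, 3, 4, 8, 10, 12, 16):
    print(b, round(radix_economy(b, N), 1))
# The cost is proportional to b / ln(b), minimised at b = e (~2.718); 3 is the best integer base.
```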
However, that doesn't make base 3 best for humans, because we are, obviously, not simple calculating machines. For us, we have to take pattern recognition and memory into account. This means an effective base has to strike a balance between our ability to recall a long number made of few symbols and our ability to manipulate a short number made of many symbols. Which, to get to the point, means it must lie somewhere between 4 and 16. Also, odd numbers are out because dividing by two evenly is a practical necessity. Beyond that we have to consider other practical aspects.
Both 4 and 16 push the limits of practical everyday use. Base 4 is very long-winded, and 16 requires too much table memorization. While base 16 has utility for working with computers, that is really the only reason for its use, although, to its credit, it is the most concise. Base 8 is much more practical for everyday usage while still being useful for computer work. Isaac Asimov was a big proponent of using base 8 instead of base 10.
In contrast, many people think base 12 is better because it has the most divisors: 2, 3, 4 and 6. This makes it very convenient for working with the division of things. Dividing a base 12 number by any of these is as easy as dividing a base 10 number by 5. On the downside, base 12 still has a somewhat large times table to memorize. Avoiding that leads us to base 6 (also known as senary), which has a trivial times table and yet has most of the division benefits of base 12. Though it can't be divided by 4 evenly, base 6 numbers can be taken in pairs and evenly divided by 4 and 9, which easily makes up for the deficiency. Another cool aspect of base 6 is that it can be counted on one's hands as easily as decimal: each hand simply represents a digit from 0 to 5, allowing a person to count from 0 to 35.
In summary, while there is some subjectivity involved in designating one base as better than another, there are some clear criteria that limit the selection to only a handful of possible values.
Well, there is some subjectivity: the larger the base, the larger the times table. Base 16 (hexadecimal) is awesome because it can be divided in half 4 times, making it great for computer work. However, base 12 is great because it has more divisors than any other 'small' base (1, 2, 3, 4, 6, 12), which makes division very easy. Base 8 and base 6 have some (but not all) of the advantages of base 16 and 12 (respectively) but are smaller (half) and therefore have an easier times table to memorise. What can be agreed, however, is that base 10 is not a great choice, as it has relatively few divisors (1, 2, 5, 10), and you end up with awkward expansions for thirds and quarters (i.e. .33333 and .25).
Basically, we can't agree on what the best choice would be, but if we had it to do over again we wouldn't choose 10; we would probably choose 6, 8, 12, or 16 as a base.
It depends what you find 'best'. Personally I don't care whether it's easy to convert to binary, since computers do that just fine for any base, so I never need to.
What is a little bit more annoying for humans is fractions that don't terminate. So I think to get the best base, you just multiply prime numbers from low to high until you feel the base gets too big. E.g.
1 * 2 = 2
1 * 2 * 3 = 6
1 * 2 * 3 * 5 = 30
1 * 2 * 3 * 5 * 7 = 210
...
Using base 30, fractions like 1/2, 1/3, 1/5, 1/6, 1/8, 1/9, 1/10 or 1/15 terminate nicely because the denominator's prime factors are also prime factors of your base. In that respect, base 12, or any $2^N$ base, wastes numbers because it has repeated prime factors: the same fractions that terminate in base 12 also terminate in base 6.
Base 210 is definitely too big, since you would need to remember something like 21,000 multiplications. Base 30 isn't really fun either at around 400, but that's doable. The best bet may be 6. A quarter is not ideal though (3/20).
I studied algebraic topology from a mathematical point of view, so I can only try to explain the physical interpretation.
Beginning with the mathematics. The fundamental group associated with a pointed topological space is a set of equivalence classes of closed loops (under homotopy). But what does this mean? Let's start with basics.
A topological space $X$ is a general mathematical structure that is equipped with the notion of continuity and convergence (etc.).
A pointed topological space is just the old topological space $X$, now equipped with a chosen point $a\in X$. One writes this as $\left(X,a\right)$.
Why do we need a pointed topological space? Because then we can look at loops starting and ending at $a$. Those are continuous functions $\gamma :[0,1]\rightarrow X$ with $\gamma(0)=\gamma(1)=a$.
Those loops are interesting, because you can connect them one after another
$$(\gamma_{1}\ast\gamma_{2})(t)=\left\{\begin{matrix}\gamma_{1}(2t) & 0\leq t\leq\frac{1}{2}\\ \gamma_{2}(2t-1) & \frac{1}{2}< t\leq 1\end{matrix}\right.$$
and you can find their inverse (caution! this is incorrect, see below)
$$\gamma^{-1}(t)=\gamma(1-t)$$
You also have an identity element
$${\rm id}(t)=a$$
Therefore, you have here (almost) a group structure. For this to be a group, you can't really look at all the loops as different - you must consider two loops $\gamma_{1}$ and $\gamma_{2}$ to be the same if you can deform $\gamma_{1}$ into $\gamma_{2}$ continuously. This is known as a homotopy. A great illustration of this is available on this Wikipedia page; I attach it also here.
Mathematicians call this kind of "considering two objects to be the same" an equivalence relation, and this results in equivalence classes. Let's now denote by $[\gamma_{1}]$ the class of all the loops that are equivalent to $\gamma_{1}$ under the homotopy relation. The set of all such classes
$$\pi_{1}(X,a)=\left\{[\gamma]|\gamma : [0,1]\rightarrow X,\: \gamma(0)=\gamma(1)=a\right\}$$
is called the fundamental group. Now, returning to the inverse of a loop, try to think why
$$\gamma^{-1}\ast\gamma\neq{\rm id}$$
but in fact
$$[\gamma^{-1}]\ast[\gamma]=[{\rm id}]$$
Some examples
Let $X=\mathbb{R}^3$, i.e. the 3D space, and choose $a=0$ to be the origin. You can imagine that you can shrink every loop in $\mathbb{R}^{3}$ to a point, so all the loops are equivalent. This means that $\pi_{1}(\mathbb{R}^3,0)=\{{\rm id}\}$ is trivial.
Let $X=S^{1}$, the unit circle, and choose $a=(1,0)$. Note that every loop is characterized by how many times it circles the origin. That's an integer known as the winding number. So for instance, you have a loop that circles the origin once counter-clockwise ($=1$), and you have a loop that circles the origin twice, but clockwise ($=-2$), and so on. Therefore $\pi_{1}(S^{1},(1,0))=\mathbb{Z}$.
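As a small numerical illustration of the winding-number picture (my own sketch, not part of the original answer), the class of a loop in the punctured plane can be computed by accumulating the signed angles between successive sample points of the loop:

```python
import math

def winding_number(points):
    """Total signed angle swept by a closed polygonal loop around the origin, in whole turns."""
    total = 0.0
    for i in range(len(points)):
        x0, y0 = points[i]
        x1, y1 = points[(i + 1) % len(points)]
        # signed angle between consecutive position vectors (cross, dot)
        total += math.atan2(x0 * y1 - y0 * x1, x0 * x1 + y0 * y1)
    return round(total / (2 * math.pi))

# A loop that circles the origin twice, clockwise (winding number -2)
loop = [(math.cos(-2 * 2 * math.pi * t / 100), math.sin(-2 * 2 * math.pi * t / 100))
        for t in range(100)]
print(winding_number(loop))  # -2
```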
More relevant examples
Note that ${\rm SO}(1)=\{1\}$, so of course $\pi_{1}({\rm SO}(1))=\{{\rm id}\}$ is trivial.
For ${\rm SO}(2)$, you can argue that it is the same as $S^{1}$, in the sense that rotations in 2D are equivalent to $e^{i\theta}$, which form the unit circle. Thus $\pi_{1}({\rm SO}(2))=\mathbb{Z}$.
For ${\rm SO}(3)$, every rotation can be achieved by giving the axis of rotation, which is a point on the sphere $S^{2}$, and the rotation angle, which is in $S^{1}$. Note, however, that $(\boldsymbol{n},\theta)\sim(-\boldsymbol{n},-\theta)$ are equivalent rotations. This turns out to be equivalent to something known as the real projective space ${\rm RP}^{3}$, which has $\pi_{1}({\rm SO}(3))=\pi_{1}({\rm RP}^{3})=\mathbb{Z}_2$.
Now to the physics! What does the physics care about loops in ${\rm SO}(D)$? Let $t$ denote the time. How can we describe the exchange of two particles? The answer is by a curve $\gamma : [0,1]\rightarrow {\rm SO}(D)$ that describes the rotation of the two particles in the center-of-mass frame. This is a loop, because a full rotation of $2\pi$ is equivalent to doing nothing. Therefore, the number of topologically distinct loops is in fact the number of the different statistics possible. See also the following beautiful illustration taken from this Wikipedia page.
Anticlockwise rotation $\qquad\qquad\qquad$ Clockwise rotation
To quote the caption of this figure from Wikipedia
Exchange of two particles in $2 + 1$ spacetime by rotation. The rotations are inequivalent, since one cannot be deformed into the other (without the worldlines leaving the plane, an impossibility in $2D$ space). |
Some authors claim that an alignment between the Earth, the Sun, and the black hole in the center of the Milky Way will occur on 12/21/2012. We show why this is impossible, but also discuss why it wouldn't be a problem if it did. Claims
Some proponents claim that there will be an eclipse of the central black hole of our milky way galaxy on 12/21/2012, and that this will cause some dangerous effects on earth. We have seen claims of earthquakes, tsunamis, pole shifts and the like. Below are some of the claims found on the internet regarding black holes in 2012:
Black hole collisions in 2012 may create massive gravity radiation creating imbalance in our entire galaxy India Daily Technology Team Jun. 21, 2006
Scientists are worried about possible many Black hole collisions in and around the year of 2012. All galaxies are believed to contain supermassive black holes at their centers. Galaxies grow by merging with other galaxies, and when this occurs, the central black holes form a binary system and revolve around each other, eventually coalescing into one. The coalescence is driven by the emission of gravitational radiation. It follows the cysles of chilled universe below the Hyperspace – the cycle of creation, maintenance and eventual destruction.
According to some proponents, black holes are headed for collisions in and around the year of 2012. This can eject massive gravity radiation creating imbalance in our entire galaxy if not the whole universe. Black holes can also get ejected out of a galaxy and head towards another galaxy.[1]
Impossible alignment
As we discuss in the Solstice Alignment page, a perfect alignment with the center of the galaxy is not possible. In the case of the black hole, this is even more significant since the claims of a perfect alignment are central to the various claims made about this event. The point in the sky at which the galactic latitude and longitude are both zero is 17h 45m 37.224s −28° 56′ 10.23″ (J2000). This is offset slightly from the radio source Sagittarius A*, which is the best physical marker of the true galactic center. Sagittarius A* is located at 17h 45m 40.04s −29° 00′ 28.1″ (J2000), or galactic longitude 359° 56′ 39.5″, galactic latitude −0° 2′ 46.3″. On 12/21/2012 at 11:11 UT, the sun will be centered on 18h 00m 00s -23° 26′ 09″.
Therefore, the sun will not occlude the galactic black hole.
Gravity
The most frequently cited force for causing problems during an alignment with the galaxy's central black hole is gravity. The gravity well of a black hole is indeed very steep…
if you are close to the black hole. If you are far away (and we're 26,000 ±1,400 light years away) then the gravity well is not steep, and the gravity of the black hole is no different than the gravity of a similar mass object at the same distance. If the sun were to be replaced with a $1M_\odot$ (that is, 1 Solar Mass) black hole, we would not notice the difference as far as gravity. It would be the same force, and the orbit of the earth would not change.[3]
If I say "There is a 4-million $M_\odot$ black hole in the center of the galaxy!" it sounds a lot scarier than "There are 4 million $M_\odot$ stars in the center of the galaxy!".
The formula for gravity (Newton's law of universal gravitation) is:
$$F = G\frac{m_1 m_2}{d^2} \tag{1}$$
where $m_1$ and $m_2$ are the masses of the two objects, $d$ is the distance between the centers of mass of the objects, and $G$ is the gravitational constant:
$$G \approx 6.674 \times 10^{-11}\ \mathrm{N\,m^2\,kg^{-2}} \tag{2}$$
Using these formulas we can determine that the force of gravity between the Sun and the Earth is $\approx 3.54 \times 10^{22}$ Newtons (3sf). The gravitational force between the Sun and the black hole at the center of the Galaxy (assuming that the black hole is 3,000,000 solar masses and 8 kpc away) is $\approx 1.30 \times 10^{16}$ Newtons (3sf). The gravitational force between the Earth and the Moon is $\approx 1.98 \times 10^{20}$ Newtons (3sf).
In other words, gravitational force between the Earth and Sun is about 1 million times larger than that between the Sun and the black hole at the center of the Galaxy.[5]
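The comparison above is easy to reproduce. The following sketch uses rough textbook values for the masses and distances (these particular numbers are my assumptions, not taken from this page):

```python
G = 6.674e-11            # m^3 kg^-1 s^-2
M_sun   = 1.989e30       # kg
M_earth = 5.972e24       # kg
M_moon  = 7.35e22        # kg
M_bh    = 3.0e6 * M_sun  # ~3 million solar masses, as assumed in the text

AU   = 1.496e11          # m, Earth-Sun distance
d_em = 3.844e8           # m, Earth-Moon distance
kpc  = 3.086e19          # m
d_bh = 8 * kpc           # m, Sun to galactic centre

def gravity(m1, m2, d):
    """Newton's law of universal gravitation."""
    return G * m1 * m2 / d**2

print(f"Sun-Earth : {gravity(M_sun, M_earth, AU):.3e} N")     # ~3.5e22
print(f"Earth-Moon: {gravity(M_earth, M_moon, d_em):.3e} N")  # ~2.0e20
print(f"Sun-BH    : {gravity(M_sun, M_bh, d_bh):.3e} N")      # ~1.3e16
```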
Other claimed forces
As you can see from the quotations above, the 2012 proponents do not limit themselves to gravity. They propose other forms of radiation or other forces that will 'bathe the earth' during the 2012 solstice. As is frequently the case, there is a tiny germ of truth in this claim. We are constantly being 'bathed with radiation' from the black hole. We are also constantly being 'bathed with radiation' from the Sun, from the stars, and from distant galaxies. In fact, it is essential to our ability to detect them. If we weren't able to detect the radiation from these objects, we wouldn't be able to see them!
There is nothing unusual about the 2012 solstice as far as radiation received from the galactic center. It will not increase or decrease during this time. We are in fact cut off from visible light from the galactic center (see the Dark Rift page for an explanation), but we can see the galactic center in infrared and x-ray light.
Some proponents make exactly the opposite claim. They say that when the 'sun eclipses the black hole' (which it won't) then we will be cut off from 'life-giving rays from the galactic core'. We challenge them to describe the form of radiation or force that is emanating from the black hole that is not also emanating from other sources (like the Sun), and why any such radiation or force is exempt from the inverse square law.
Conclusion
In conclusion, we have shown that an alignment between the Earth, the Sun and the galactic super-massive black hole is impossible. We have also shown that the gravity from the super-massive black hole is about one millionth the gravity of the Sun.
References
- Black hole collisions in 2012 may create massive gravity radiation creating imbalance in our entire galaxy. IndiaDaily. http://www.indiadaily.com/editorial/10106.asp (accessed 2009-06-03)
- The Mayan Prophecy of 2012. TerryNazon.com. http://www.terrynazon.com/MayanProphecy2012.php (accessed 2009-06-03)
- Death from the Skies! Chapter 5, p. 133.
- December 21, 2012… The End… or just another beginning. AboveTopSecret.com discussion forum. http://www.abovetopsecret.com/forum/thread283740/pg1 (accessed 2009-06-03)
Inertia
In power systems engineering, "inertia" is a concept that typically refers to rotational inertia or rotational kinetic energy. For synchronous systems that run at some nominal frequency (i.e. 50Hz or 60Hz), inertia is the energy that is stored in the rotating masses of equipment electro-mechanically coupled to the system, e.g. generator rotors, fly wheels, turbine shafts.
Derivation
Below is a basic derivation of power system rotational inertia from first principles, starting from the basics of circle geometry and ending at the definition of moment of inertia (and its relationship to kinetic energy).
The length of a circle arc is given by:
[math] L = \theta r [/math]
where [math]L[/math] is the length of the arc (m)
[math]\theta[/math] is the angle of the arc (radians)
[math]r[/math] is the radius of the circle (m)
A cylindrical body rotating about the axis of its centre of mass therefore has a rotational velocity of:
[math] v = \frac{\theta r}{t} [/math]
where [math]v[/math] is the rotational velocity (m/s)
[math]t[/math] is the time it takes for the mass to rotate L metres (s)
Alternatively, rotational velocity can be expressed as:
[math] v = \omega r [/math]
where [math]\omega = \frac{\theta}{t} = \frac{2 \pi \times n}{60}[/math] is the angular velocity (rad/s)
[math]n[/math] is the speed in revolutions per minute (rpm)
The kinetic energy of a circular rotating mass can be derived from the classical Newtonian expression for the kinetic energy of rigid bodies:
[math] KE = \frac{1}{2} mv^{2} = \frac{1}{2} m(\omega r)^{2}[/math]
where [math]KE[/math] is the rotational kinetic energy (in Joules or kg.m^2/s^2 or W.s, all of which are equivalent units; in power systems it is commonly quoted in MW.s)
[math]m[/math] is the mass of the rotating body (kg)
Alternatively, rotational kinetic energy can be expressed as:
[math] KE = \frac{1}{2} J\omega^{2} [/math]
where [math]J = mr^{2}[/math] is called the
moment of inertia (kg.m^2).
Notes about the moment of inertia:
In physics, the moment of inertia [math]J[/math] is normally denoted as [math]I[/math]. In electrical engineering, the convention is for the letter "i" to always be reserved for current, and it is therefore often replaced by the letter "j", e.g. the complex number operator i in mathematics is j in electrical engineering.
The moment of inertia can also be denoted as [math]WR^{2}[/math] or [math]WK^{2}[/math], where [math]WK^{2} = \frac{1}{2} WR^{2}[/math]. WR^2 literally stands for weight x radius squared.
WR^2 is often used with imperial units of lb.ft^2.
Normalised Inertia Constants
The moment of inertia can be expressed as a normalised quantity called the
inertia constant H, calculated as the ratio of the rotational kinetic energy of the machine at nominal speed to its rated power (VA): [math]H = \frac{1}{2} \frac{J \omega_0^{2}}{S_{b}}[/math]
where [math]H[/math] is the inertia constant (s)
[math]\omega_{0} = 2 \pi \times \frac{n}{60}[/math] is the nominal mechanical angular frequency (rad/s)
[math]n[/math] is the nominal speed of the machine (revolutions per minute)
[math]S_{b}[/math] is the rated power of the machine (VA)
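As a quick illustration of the formula above, the sketch below computes H from the moment of inertia, nominal speed and MVA rating; the example machine figures are made up for illustration only.

```python
import math

def inertia_constant(J, n_rpm, S_mva):
    """H (seconds) = rotational kinetic energy at nominal speed / rated power."""
    omega0 = 2 * math.pi * n_rpm / 60.0   # nominal mechanical angular frequency (rad/s)
    ke = 0.5 * J * omega0 ** 2            # stored kinetic energy (J = W.s)
    return ke / (S_mva * 1e6)             # seconds

# Hypothetical 200 MVA, 3000 rpm machine with J = 8000 kg.m^2
print(round(inertia_constant(J=8000, n_rpm=3000, S_mva=200), 2), "s")
```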
Generator Inertia
The moment of inertia for a generator is dependent on its mass and apparent radius, which in turn is largely driven by its prime mover type.
Based on actual generator data, the normalised inertia constants for different types and sizes of generators are summarised in the table below:
Machine type | Number of samples | MVA rating (min / median / max) | Inertia constant H (min / median / max)
Steam turbine | 45 | 28.6 / 389 / 904 | 2.1 / 3.2 / 5.7
Gas turbine | 47 | 22.5 / 99.5 / 588 | 1.9 / 5.0 / 8.9
Hydro turbine | 22 | 13.3 / 46.8 / 312.5 | 2.4 / 3.7 / 6.8
Combustion engine | 26 | 0.3 / 1.25 / 2.5 | 0.6 / 0.95 / 1.6

Relationship between Inertia and Frequency
TBA |
In my job as a data scientist, I am required to model the relationship between the price of a product and the sales or number of unit sold. I am trying to build a simplistic model, the assumptions of which are given below. I am not sure if all the assumptions will hold true simultaneously, or if there is a missing assumption or if some assumptions are wrong. Can any expert have a look and comment/give suggestions?
Assumptions:
Different customers have different incomes, so a product priced at $P$ may be affordable to some and not affordable to others. But from a practical business point of view, we cannot build a model of each individual. So we assume that the average income of the customers remains fairly constant in the short term, and that we are modelling the behavior of a single representative customer whose income is constant and equal to the average income of all the customers.
All other external conditions such as macroeconomics, competitive products, government policies and business rules remain constant.
The product has demand, it is mature and stable, and so at present the number of units of the product sold depends only on its price.
As the price changes, the probability that the customer will purchase the product changes. Let $a_1$ be the probability that the customer purchases the product at price $P_1$ and $a_2$ be the probability that the customer purchases the product at price $P_2$. Then $a_2$ depends only on the initial price, the initial probability and the new price, i.e. $a_2 = f(a_1,P_1,P_2)$ where $f$ is some probability function whose behavior we are trying to find out, assuming that such a function exists in the first place.
If the price remains constant, then the purchase probability remains unchanged, i.e. $f(a_1,P_1,P_1) = a_1.$
Hypothetically, if the product is critical for survival (e.g. purchasing your daily oxygen in a Mars colony) the customer will buy it no matter what the price is, i.e. $f(1,P_1,x) = 1$ for all $x$.
If the product is not critical for survival then it will be out of demand at infinite price, i.e. $\lim_{x \to \infty} f(a_1,P_1,x) = 0.$
If the product is important and very cheap then everybody will buy it, i.e. $\lim_{x \to 0} f(a_1,P_1,x) = 1$.
As price increases, sales decrease, i.e. if $P_1 < P_2 < P_3$ then $0 \le f(a_1,P_1,P_3) < f(a_1,P_1,P_2) \le 1.$
Since $f$ is a probability density function, its value must be between 0 and 1, i.e. $0 \le f(a,x,y) \le 1$ for all $x$ and $y$.
Mathematically, since $f$ is a probability density function, and price cannot be negative, the total area under the probability density curve must be 1, i.e. $$\int_{-\infty}^{\infty} f(a_1,P_1, x) dx = \int_{0}^{\infty} f(a_1,P_1,x) dx = 1.$$
Now there will be infinitely many functions $f$ satisfying all the above conditions. For example
$$ a_1^{P_2/P_1}, \frac{a_1\log(1+P_2/P_1)}{\log 2} $$
We eliminate the ones that do not fit the observed data. But even then we can still be left with multiple functions that satisfy all the assumptions of the framework as well as fit the data within an acceptable error range as defined by the business.
Question 1: What other criteria/conditions/assumptions can we use to choose one mathematical model over another?
One way is to use Occam's Razor and go with the simplest model, defined as the one which uses the fewest parameters, in which case $$ f(a_1,P_1,P_2) = a_1^{P_2/P_1} $$
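For illustration only, here is a minimal sketch that evaluates the two candidate functions mentioned earlier against some invented (price, probability) observations and compares their squared errors; the data points, parameter values and the error criterion are my assumptions, not part of the framework.

```python
import math

def f_power(a1, P1, P2):
    """Candidate 1: a1 ** (P2 / P1); satisfies f(a1, P1, P1) = a1."""
    return a1 ** (P2 / P1)

def f_log(a1, P1, P2):
    """Candidate 2: a1 * log(1 + P2/P1) / log(2); also satisfies f(a1, P1, P1) = a1."""
    return a1 * math.log(1 + P2 / P1) / math.log(2)

a1, P1 = 0.6, 10.0
observed = [(12.0, 0.52), (15.0, 0.44), (20.0, 0.33)]   # hypothetical (price, probability) data

for name, f in [("power", f_power), ("log", f_log)]:
    sse = sum((f(a1, P1, P2) - a2) ** 2 for P2, a2 in observed)
    print(name, round(sse, 4))
```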
Question 2: Let's ignore my entire framework. What other models in economics can be used to estimate the purchase probability given $a_1, P_1$ and $P_2$?
Question 3: Which assumption is simply impossible and should be removed? What additional assumptions are required, if any?
I am facing a simple (at first glance) problem. I need to implement a numerical scheme for the solution of the first order wave propagation equation with chromatic dispersion included. My original problem is (for a forward propagating wave):
\begin{equation} \frac{1}{c} \frac{\partial u(x,t)}{\partial t} = -\frac{ \partial u}{ \partial x} - \frac{i \beta_2}{2} \frac{ \partial^2 u}{ \partial t^2}, \end{equation} where $c$ is the velocity of light, $u$ is the (complex) envelope of the field, $\beta_2$ is the 2nd order dispersion coefficient. Assume also that the wave is propagating inside a ring cavity of length , say, L where I take periodic boundary conditions: $u(x+L,t) = u(x,t)$ and also that at $t=0$ we know $u(x,0) $ and $u_t(x,0)$.
I am trying to implement a time-stepping numerical scheme and in the process I tried the following:
1) MOL approach, where I do semidiscretization along $x$, reduce the set of equations to a system of first order ODEs (by setting $v = \dot{u}$) and I establish a system: \begin{equation} \begin{bmatrix} \dot{v} \\ \dot{u} \end{bmatrix} = A \begin{bmatrix} v \\ u \end{bmatrix} . \end{equation}
When I solve the corresponding ODEs via 4th-order Runge-Kutta, Crank-Nicolson, or simply by precomputing the matrix exponential, unfortunately all my solutions eventually blow up to Inf. I implemented the periodic boundary conditions by modifying the matrix $A$ as $A \leftarrow PA$, where $P$ is the identity matrix with its first row replaced by a copy of the last row.
I also tried a simple finite differences approach where the spatial derivative is approximated via an upwind FD (first order) but to no avail.
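For reference, here is a minimal sketch (mine, not the asker's code) of the first-order upwind step for the advection part alone, with the dispersion term dropped and periodic boundaries handled via `np.roll`; it is only a baseline illustration of the upwind discretization, not the full scheme for the dispersive equation.

```python
import numpy as np

# Advection-only model problem: u_t = -c * u_x on a periodic ring of length L
c, L, nx = 3e8, 1.0, 200
dx = L / nx
dt = 0.9 * dx / c                      # CFL number 0.9 for stability of first-order upwind
x = np.arange(nx) * dx
u = np.exp(-((x - 0.5 * L) / 0.05) ** 2).astype(complex)   # smooth initial envelope

for _ in range(500):
    # backward (upwind) difference for a forward-propagating wave; np.roll gives periodicity
    u = u - c * dt / dx * (u - np.roll(u, 1))

print(abs(u).max())                    # the pulse is advected (and numerically diffused), not amplified
```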
Lastly I tried a Strang splitting approach based on the two equations: \begin{align} \frac{1}{c}\dot{u} &= -\frac{\partial u }{\partial x} \\ \frac{1}{c}\dot{u} &= -\frac{i\beta_2}{2}\frac{\partial^2 u }{\partial t^2} , \end{align}
where this time the solution does not blow up but it looks somehow unphysical.
Does someone here know a stable and possibly higher-than-first-order time-stepping scheme for this equation? Please note that a solution based on a Fourier transform in $x$ is also not a good option for me, because I would like the flexibility to implement different non-periodic boundary conditions. I would also dislike substituting the second-order derivative in time with one in space, because this complicates the implementation of boundary conditions.
Thanks. |
Practical Guide to Variational Inference
Update: since I wrote this blog post 5 years ago, it's been quite a ride along the variational inference path, both for me and for the state of the art! There was no mention of VAEs or normalizing flows or autodiff variational inference because they had not been invented yet (though BBVI was around this time). This post is focused on the tricky analytic derivations, which still have their place for certain models requiring lower variance updates during inference.
There are a few standard techniques for performing inference on hierarchical Bayesian models. Finding the posterior distribution over parameters or performing prediction requires an intractable integral for most Bayesian models, arising from the need to marginalise ("integrate out") nuisance parameters. In the face of this intractability there are two main ways to perform approximate inference: either transform the integration into a sampling problem (e.g., Gibbs sampling, slice sampling) or an optimisation problem (e.g., expectation-maximisation, variational inference).
Probably the most straightforward method is Gibbs sampling (see Chapter 29 of "Information Theory, Inference, and Learning Algorithms" by David MacKay) because you only need to derive conditional probability distributions for each random variable and then sample from these distributions in turn. Of course you have to handle convergence of the Markov chain, and make sure your samples are independent, but you can't go far wrong with the derivation of the conditional distributions themselves. The downside of sampling methods is their slow speed. A related issue is that sampling methods are not ideal for online scalable inference (e.g., learning from streaming social network data).
For these reasons, I have spent the last 6 months learning how to apply variational inference to my mobility models. While there are some very good sources describing variational inference (e.g., chapter 10 of "Pattern Recognition and Machine Learning" by Bishop, this tutorial by Fox and Roberts, this tutorial by Blei), I feel that the operational details can get lost among the theoretical motivation. This makes it hard for someone just starting out to know what steps to follow. Having successfully derived variational inference for several custom hierarchical models (e.g., stick-breaking hierarchical HMMs, extended mixture models), I'm writing a practical summary for anyone about go down the same path. So, here is my summary for how you actually apply variational Bayes to your model.
Preliminaries
I'm omitting an in-depth motivation because it has been covered so well by the aforementioned tutorials. But briefly, the way that mean-field variational inference transforms an integration problem into an optimisation problem is by first assuming that your model factorises further than you originally specified. It then defines a measure of error between the simpler factorised model and the original model (usually, this function is the Kullback-Leibler divergence, which is a measure of distance between two distributions).
The optimisation problem is to minimise this error by modifying the parameters to the factorised model (i.e., the variational parameters).
Something that can be confusing is that these variational parameters have a similar role in the variational model as the (often, fixed) hyperparameters have in the original model, which is to control things like prior mean, variance, and concentration. The difference is that you will be updating the variational parameters to optimise the factorised model, while the fixed hyperparameters to the original model are left alone. The way that you do this is by using the following equations for the optimal distributions over the parameters and latent variables, which follow from the assumptions made earlier:
$$\mathrm{ln} \; q^*(V_i) = \mathbb{E}_{-V_i}\left( \mathrm{ln} \; p(X, Z, V | \alpha) \right)$$
$$\mathrm{ln} \; q^*(Z_i) = \mathbb{E}_{-Z_i}\left( \mathrm{ln} \; p(X, Z, V | \alpha) \right)$$
where \(X\) is the observed data, \(V\) is the set of parameters, \(Z\) is the set of latent variables, and \(\alpha\) is the set of hyperparameters. Another source of possible confusion is that these equations do not explicitly include the variational parameters, yet these parameters are the primary source of interest in the variational scheme. In the steps below, I describe how to derive the update equations for the variational parameters from these equations.
1. Write down the joint probability of your model
Specify the distributions and conditional dependencies of the data, parameters, and latent variables for your original model. Then write down the joint probability of the model, given the hyperparameters. In the following steps, I'm assuming that all the distributions are conjugate to each other (e.g., multinomial data have Dirichlet priors, Gaussian data have Gaussian-Wishart priors and so on).
The joint probability will usually look like this:
$$p(X, Z, V | \alpha) = p(V | \alpha) \prod_n^N \mathrm{<data \; likelihood \; of \; V, Z_n>} \mathrm{<probability \; of \; Z_n>}$$
where \(N\) is the number of observations. For example, in a mixture model, the data likelihood is \(p(X_n | Z_n, V)\) and the probability of \(Z_n\) is \(p(Z_n | V)\). An HMM has the same form, except that \(Z_n\) now has probability \(p(Z_n | Z_{n-1}, V)\). A Kalman filter is an HMM with continuous \(Z_n\). A topic model introduces an outer product over documents and additional set of (global) parameters.
2. Decide on the independence assumptions for the variational model
Decide on the factorisation that will allow tractable inference on the simpler model. The assumption that the latent variables are independent of the parameters is a common way to achieve this. Interestingly, you will find that a single assumption of factorisation will often induce further factorisations as a consequence. These come "for free" in the sense that you get simpler and easier equations without having to make any additional assumptions about the structure of the variational model.
Your variational model will probably factorise like this:
$$q(Z, V) = q(Z) q(V)$$
and you will probably get \(q(V) = \prod_i q(V_i)\) as a set of induced factorisations.
3. Derive the variational update equations
We now address the optimisation problem of minimising the difference between the factorised model and the original one.
Parameters
Use the general formula that we saw earlier:
$$\mathrm{ln} \; q^*(V_i) = \mathbb{E}_{-V_i}\left( \mathrm{ln}\; p(X, Z, V | \alpha) \right)$$
The trick is that most of the terms in \(p(X, Z, V | \alpha)\) do not involve \(V_i\), so can be removed from the expectation and absorbed into a single constant (which becomes a normalising factor when you take the exponential of both sides). You will get something that looks like this:
$$\mathrm{ln} \; q^*(V_i) = \mathbb{E}_{-V_i}\left( \mathrm{ln} \; p(V_i | \alpha) + \sum_n^N \mathrm{ln} \; \mathrm{<data \; likelihood \; of \; V_i, Z_n>} \right) + \mathrm{constant}$$
What you are left with is the log prior distribution of \(V_i\) plus the total log data likelihood of \(V_i\) given \(Z_n\). Even in the two remaining equations, you can often find terms that do not involve \(V_i\), so a lot of the work in this step involves discarding irrelevant parts.
The remaining work, assuming you chose conjugate distributions for your model, is to manipulate the equations to look like the prior distribution of \(V_i\) (i.e., to have the same functional form as \(p(V_i | \alpha)\)). You will end up with something that looks like this:
$$\mathrm{ln} \; q^*(V_i) = \mathbb{E}_{-V_i}\left( \mathrm{ln} \; p(V_i | \alpha_i') \right) + \mathrm{constant}$$
where your goal is to find the value of \(\alpha_i'\) through equation manipulation. \(\alpha_i'\) is your variational parameter, and it will involve expectations of other parameters \(V_{-i}\) and/or \(Z\) (if it didn't, then you wouldn't need an iterative method). It's helpful to remember at this point that there are standard equations to calculate \(\mathbb{E} \left( \mathrm{ln} \; V_j \right)\) for common types of distribution (e.g., a Dirichlet \(V_j\) with parameter vector \(\alpha_j'\) has \(\mathbb{E} \left( \mathrm{ln} \; V_{j,k} \right) = \psi(\alpha_{j,k}') - \psi(\sum_{k'} \alpha_{j,k'}')\), where \(\psi\) is the digamma function). Sometimes you will have to do further manipulation to find expectations of other functions of the parameters. We consider next how to find the expectations of the latent variables \(\mathbb{E}(Z)\).
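As a minimal illustration of the Dirichlet identity just mentioned (the parameter values here are arbitrary):

```python
import numpy as np
from scipy.special import digamma

alpha = np.array([2.0, 5.0, 1.5])                 # variational Dirichlet parameters
E_log_V = digamma(alpha) - digamma(alpha.sum())   # E[ln V_k] under Dirichlet(alpha)
print(E_log_V)
```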
Latent variables
Start with:
$$\mathrm{ln} \; q^*(Z_n) = \mathbb{E}_{-Z_{n}}\left( \mathrm{ln} \; p(X, Z, V | \alpha) \right)$$
and try to factor out \(Z_{n}\). This will usually be the largest update equation because you will not be able to absorb many terms into the constant. This is because you need to consider the parameters generating the latent variables as well as the parameters that control their effect on observed data. Using the example of multinomial independent \(Z_n\) (e.g., in a mixture model), this works out to be:
$$\mathrm{ln} \; q^*(Z_{n,k}) = \mathbb{E}_{-Z_{n,k}}\left( Z_{n,k} \mathrm{ln} \; V_k + Z_{n,k} \mathrm{ln} \; p(X_n | V_k) \right) + \mathrm{constant}$$
factorising out \(Z_{n,k}\) to get:
$$\mathrm{ln} \; \mathbb{E}(Z_{n,k}) = \mathbb{E}(\mathrm{ln} \; V_k) + \mathbb{E}(\mathrm{ln} \; p(X_n | V_k)) + \mathrm{constant}$$
4. Implement the update equations
Put your update equations from step 3 into code. Iterate over the parameters (M-step) and latent variables (E-step) in turn until your parameters converge. Multiple restarts from random initialisations of the expected latent variables are recommended, as variational inference converges to a local optimum.
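A bare-bones sketch of what step 4 can look like in code; the `update_params` and `update_latents` callables stand in for the model-specific update equations derived in step 3, and the convergence test on the variational parameters is just one possible choice.

```python
import numpy as np

def fit_variational(X, init_params, update_params, update_latents,
                    max_iter=200, tol=1e-6):
    """Generic coordinate-ascent loop: alternate latent-variable (E) and parameter (M) updates."""
    params = init_params(X)                        # dict of variational parameter arrays
    for _ in range(max_iter):
        latents = update_latents(X, params)        # E-step: q*(Z) given current q*(V)
        new_params = update_params(X, latents)     # M-step: q*(V) given current q*(Z)
        if all(np.allclose(new_params[k], params[k], atol=tol) for k in params):
            return new_params, latents             # converged
        params = new_params
    return params, latents
```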
The video below shows what variational inference looks like on a mixture model. The green scatters represent the observed data, the blue diamonds are the ground truth means (not known by the model, obviously), the red dots are the inferred means and the ellipses are the inferred covariance matrices: |
A recent question discussed the now-classical dynamic programming algorithm for TSP, due independently to Bellman and Held-Karp. The algorithm is universally reported to run in $O(2^n n^2)$ time. However, as one of my students recently pointed out, this running time may require an unreasonably powerful model of computation.
Here is a brief description of the algorithm. The input consists of a directed graph $G=(V,E)$ with $n$ vertices and a non-negative length function $\ell\colon E\to\mathbb{R}^+$. For any vertices $s$ and $t$, and any subset $X$ of vertices that excludes $s$ and $t$, let $L(s,X,t)$ denote the length of the shortest Hamiltonian path from $s$ to $t$ in the induced subgraph $G[X\cup\{s,t\}]$. The Bellman-Held-Karp algorithm is based on the following recurrence (or as economists and control theorists like to call it, “Bellman's equation”):
$$ L(s,X,t) = \begin{cases} \ell(s,t) & \text{if $X = \varnothing_{\strut} $} \\ \min_{v\in X}~ \big(L(s, X\setminus\lbrace v\rbrace, v) + \ell(v,t)\big) & \text{otherwise} \end{cases} $$
For any vertex $s$, the length of the optimal traveling salesman tour is $L(s,V\setminus\{s\}, s)$. Because the first parameter $s$ is constant in all recursive calls, there are $\Theta(2^n n)$ different subproblems, and each subproblem depends on at most $n$ others. Thus, the dynamic programming algorithm runs in $O(2^n n^2)$ time.
Or does it?!
The standard integer RAM model allows constant-time manipulation of integers with $O(\log n)$ bits, but at least for arithmetic and logical operations, larger integers must be broken into word-sized chunks. (Otherwise, strange things can happen.) Is this not also true of access to longer memory addresses? If an algorithm uses superpolynomial space, is it reasonable to assume that memory accesses require only constant time?
For the Bellman-Held-Karp algorithm in particular, the algorithm must transform the description of the subset $X$ into the description of the subset $X\setminus\{v\}$, for each $v$, in order to access the memoization table. If the subsets are represented by integers, these integers require $n$ bits and therefore cannot be manipulated in constant time; if they are not represented by integers, their representation cannot be used directly as an index into the memoization table.
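For concreteness, here is a short memoized implementation (mine, not from the question) in which subsets are represented as integers and $X\setminus\{v\}$ is exactly the bit operation `X & ~(1 << v)` discussed above; on an $n$-bit mask this operation manipulates $\Theta(n)$ bits, which is the point at issue.

```python
from functools import lru_cache

def held_karp(length, n, s=0):
    """length[u][v] is the edge length; returns the optimal tour cost starting and ending at s."""
    @lru_cache(maxsize=None)
    def L(X, t):
        # L(X, t): shortest path from s to t visiting exactly the vertices in bitmask X before t
        if X == 0:
            return length[s][t]
        return min(L(X & ~(1 << v), v) + length[v][t]   # the subset X \ {v} is one bitmask operation
                   for v in range(n) if X >> v & 1)
    full = ((1 << n) - 1) & ~(1 << s)                   # all vertices except s
    return L(full, s)

# 4-city example
d = [[0, 1, 4, 2], [1, 0, 3, 5], [4, 3, 0, 1], [2, 5, 1, 0]]
print(held_karp(d, 4))   # 7, e.g. the tour 0 -> 1 -> 2 -> 3 -> 0
```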
So:
What is the actual asymptotic running time of the Bellman-Held-Karp algorithm? |
For a homework, I struggled to solve the following question but couldn't go further:
endowment of person 1 = (30,0)
endowment of person 2 = (0,20)
utility functions are such that:\begin{eqnarray*} U (a_1,b_1) & = & \min(a_1,b_1) \\\\ U (a_2,b_2) & = & \min(4a_2,b_2).\end{eqnarray*}
What I am doing is setting $a_1$ equal to $b_1$ and $4a_2$ equal to $b_2$. After that I'm writing these: $$ p_1 a_1 + p_2 b_1 = 30p_1 \mbox{ and } p_1 a_2 + 4 p_2 a_2 = 20p_2 $$ Finally, by looking at the feasibility condition, I'm writing down: \begin{eqnarray*} \frac{30p_1}{p_1+p_2} + \frac{20p_2}{p_1+4p_2} & = & 30 \\ \\ \frac{30p_1}{p_1+p_2} + \frac{4 \cdot 20p_2}{p_1+4p_2} & = & 20. \end{eqnarray*}
Here, if I do some calculations, they result in $p_1 = p_2$ and accordingly $a_1=b_1=15, a_2=4$ and $b_2=16.$ But then the problem is that there is excess demand for good 2, and so there isn't a WE.
Alternatively, I am considering having $p_2 = 0$ so that there exists a Walrasian Equilibrium.
BUT, I'm stuck at this point and lack the correct intuition for the next steps. Or I might be on a completely wrong track. Please explain to me what should be done about this excess demand.
Thank you in advance. |
Golden Meantone
Golden Meantone is based on making the ratio of the whole tone to the diatonic semitone interval be the Golden Ratio
[math]\varphi = \frac 1 2 (\sqrt{5}+1) \approx 1.61803\,39887\ldots\,[/math]
This makes the Golden fifth exactly
[math](8 - \varphi) / 11[/math]
octave, or
[math](9600 - 1200 \varphi) / 11[/math]
cents, approximately 696.214 cents.
Edo systems in a Fibonacci style recurrence beginning with 7 and 12 are successively better approximations to this ideal.
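As a quick numerical check (not part of the original page), the golden fifth can be compared with the fifth of each edo in the 7, 12, 19, 31, … sequence; the fifth of each edo is the octave minus the fourth from the table in the next section, and its step sizes 4, 7, 11, 18, … follow the same Fibonacci-style recurrence.

```python
PHI = (5 ** 0.5 + 1) / 2
golden_fifth = (9600 - 1200 * PHI) / 11               # ~696.214 cents

edos   = [7, 12, 19, 31, 50, 81, 131]
fifths = [4, 7, 11, 18, 29, 47, 76]                   # fifth = octave - fourth, in steps of each edo
for n, k in zip(edos, fifths):
    cents = 1200 * k / n
    print(f"{n:>3}edo fifth = {cents:8.3f} cents (error {cents - golden_fifth:+7.3f})")
```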
Construction
Golden Meantone is approximated with increasing accuracy by the infinite sequence of temperaments indicated in the table below. In any meantone temperament the five intervals in the column headings form part of a Fibonacci sequence (in the sense that each adjacent pair sums to the interval to its immediate right), and in these equal temperaments the sizes of these intervals (expressed in step units) are consecutive numbers from the integer Fibonacci sequence 0, 1, 1, 2, 3, 5... Both the rows and the columns of the table form Fibonacci sequences, and because the five intervals sum to an octave, the octave cardinalities in the first column are formed by summing the five numbers to their right. As the cardinality increases, the interval sequence better approximates a geometric progression.
Temperament | chroma | semitone | tone | minor third | fourth
7edo | 0 | 1 | 1 | 2 | 3
12edo | 1 | 1 | 2 | 3 | 5
19edo | 1 | 2 | 3 | 5 | 8
31edo | 2 | 3 | 5 | 8 | 13
50edo | 3 | 5 | 8 | 13 | 21
81edo | 5 | 8 | 13 | 21 | 34
131edo | 8 | 13 | 21 | 34 | 55
... | ... | ... | ... | ... | ...

Evaluation
Graham Breed writes:
I think of this as the standard melodic meantone because the all these ratios are the same. It has the mellow sound of 1/4 comma, but does still have a character of its own. Some algorithms make this almost exactly the optimum 5-limit tuning. It's fairly good as a 7-limit tuning as well. Almost the optimum (according to me) for diminished sevenths. I toyed with this as a guitar tuning, but rejected it because 4:6:9 chords aren't quite good enough. That is, the poor fifth leads to a sludgy major ninth. Listening
Liber Abaci - Composition by Alex Ness, based on successive equal-tempered approximations of the Golden Meantone temperament |
If there were an algorithm that factored in polynomial time by means of examining each possible factor of a complex number efficiently, could one not also use this algorithm to solve unbounded knapsack problems since two factors can be viewed as one value, say within the set for the knapsack problem, and the other being the number of copies of the first factor?
FACTOR 15; 3, 5
Unbounded KNAPSACK with a value of 15 and the set of all integers; {5,5,5} and/or {3,3,3,3,3}
Would this mean FACTOR was NP-Complete?
Would solving unbounded knapsack problems in polynomial time in this way prove P=NP?
(1) NP-complete only contains problems that can be answered by Yes or No, which are called decision problems. So FACTOR is not an NP-complete problem even if your reduction is correct. In fact, if your reduction were correct, it would prove that FACTOR is NP-hard.
(2) If you want to prove the NP-hardness of FACTOR by reducing UKP (unbounded knapsack problem) to it, you should find an integer $M$ (in polynomial time) for each instance $I$ of UKP and show that the answer of $I$ can be gotten (in polynomial time) using the factorization of $M$.
In your proof, you can only solve a specific subset of UKP instances by FACTOR, so it is not a correct reduction.
If there were an algorithm that factored in polynomial time by means of examining each possible factor of a complex number efficiently
Starting from a self-contradicting statement like this, you can prove anything. Why is it self-contradicting?
Let $n$ be a natural number. The number of candidate factors of $n$ that have to be examined is about $\sqrt{n}$; there may be number theory that pushes this down more, but there is no such result that eliminates all but $\Theta(\log^k n)$ numbers¹. So, you have to consider $\Omega(n^\varepsilon)$ numbers for some $\varepsilon > 0$. Note that the length of $n$ -- which is the input size! -- is $\lceil \log n \rceil$. Therefore, your assumed algorithm examines super-polynomially² many numbers in polynomial time³ -- impossible.
1. If there were such a result (and you could list those candidates in polynomial time), factoring would be possible in polynomial time, which would be well-known. As far as I know, the problem is still open.
2. $n^\varepsilon = 2^{\varepsilon\log n}$
3. Assuming you mean "in polynomial time" when you say "efficiently".
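To make the size mismatch concrete, here is a toy sketch (mine): trial division examines up to about $\sqrt n$ candidates, while the input size is only `n.bit_length()` bits, so the candidate count is exponential in the input size.

```python
import math

def smallest_factor(n):
    """Smallest non-trivial factor of n, found by examining candidates up to sqrt(n)."""
    for d in range(2, math.isqrt(n) + 1):
        if n % d == 0:
            return d
    return n   # n is prime

n = 10 ** 12 + 39   # an arbitrary odd number
print(smallest_factor(n),
      "| input size:", n.bit_length(), "bits",
      "| candidates examined: up to", math.isqrt(n))
```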
Primes are integers greater than 1 that do not have any proper divisor except 1. Primes can be regarded as the building blocks of all integers with respect to multiplication.
Theorem \(\PageIndex{1}\): Fundamental Theorem of Arithmetic
Given any integer \(n\geq 2\), there exist primes \(p_1 \leq p_2 \leq \cdots \leq p_s\) such that \(n = p_1 p_2 \ldots p_s\). Furthermore, this factorization is unique, in the sense that if \(n = q_1 q_2 \ldots q_t\) for some primes \(q_1 \leq q_2 \leq \cdots \leq q_t\), then \(s=t\) and \(p_i = q_i\) for each \(i\), \(1\leq i\leq s\).
Proof
We first prove the existence of the factorization. Let \(S\) be the set of integers \(n\geq2\) that are
not expressible as the product of primes. Since a product may contain as few as just one prime, \(S\) does not contain any prime. Suppose \(S\neq\emptyset\); then the principle of well-ordering implies that \(S\) has a smallest element \(d\). Since \(S\) does not contain any prime, \(d\) is composite, so \(d=xy\) for some integers \(x\) and \(y\), where \(2\leq x,y<d\). The minimality of \(d\) implies that \(x,y\not\in S\). So both \(x\) and \(y\) can be expressed as products of primes, and then \(d=xy\) is also a product of primes, which is a contradiction, because \(d\) belongs to \(S\). Therefore, \(S=\emptyset\), which means every integer \(n\geq2\) can be expressed as a product of primes.
Next, we prove that the factorization is unique. Assume there are two ways to factor \(n\), say \(n = p_1 p_2 \ldots p_s = q_1 q_2 \ldots q_t\). Without loss of generality, we may assume \(s\leq t\). Suppose there exists a smallest \(i\), where \(1\leq i\leq s\), such that \(p_i \neq q_i\). Then \[p_1 = q_1, \quad p_2 = q_2, \quad \cdots \quad p_{i-1} = q_{i-1}, \quad\mbox{but}\quad p_i \neq q_i.\] It follows that \[p_i p_{i+1} \cdots p_s = q_i q_{i+1} \cdots q_t,\] in which both sides have at least two factors (why?). Without loss of generality, we may assume \(p_i < q_i\). Since \(p_i \mid q_i q_{i+1} \cdots q_t\), and \(p_i\) is prime, Euclid’s lemma implies that \(p_i \mid q_j\) for some \(j\), where \(i < j \leq t\). Since \(q_j\) is prime, we must have \(p_i = q_j \geq q_i\), which contradicts the assumption that \(p_i < q_i\). Therefore, there does not exist any \(i\) for which \(p_i\neq q_i\). This means \(p_i=q_i\) for each \(i\), and as a consequence, we must have \(s=t\).
Interestingly, we can use the strong form of induction to prove the existence part of the Fundamental Theorem of Arithmetic.
Proof
(Existence) Induct on \(n\). The claim obviously holds for \(n=2\). Assume it holds for \(n=2,3,\ldots,k\) for some integer \(k\geq2\). We want to show that it also holds for \(k+1\). If \(k+1\) is a prime, we are done. Otherwise, \(k+1=\alpha\beta\) for some integers \(\alpha\) and \(\beta\), both less than \(k+1\). Since \(2\leq \alpha,\beta \leq k\), both \(\alpha\) and \(\beta\) can be expressed as a product of primes. Putting these primes together, and relabeling and rearranging them if necessary, we see that \(k+1\) is also expressible as a product of primes in the form we desire. This completes the induction.
The next result is one of the oldest theorems in mathematics; numerous proofs can be found in the literature.
Theorem \(\PageIndex{2}\)
There are infinitely many primes.
Proof
Suppose there are only a finite number of primes \(p_1, p_2, \ldots, p_n\). Consider the integer \[x = 1 + p_1 p_2 \cdots p_n.\] It is obvious that \(x\neq p_i\) for any \(i\). Since \(p_1, p_2, \ldots, p_n\) are assumed to be the only primes, the integer \(x\) must be composite, hence can be factored into a product of primes. Let \(p_k\) be one of these prime factors, so that \(x=p_kq\) for some integer \(q\). Then \[\begin{aligned} 1 &=& x-p_1 p_2 \cdots p_n \\ &=& p_kq-p_1 p_2 \cdots p_n \\ &=& p_k(q-p_1p_2\cdots p_{k-1} p_{k+1} \cdots p_n), \end{aligned}\] which is impossible. This contradiction proves that there are infinitely many primes.
Some of the primes listed in the Fundamental Theorem of Arithmetic can be identical. If we group the identical primes together, we obtain the prime-power factorization, or canonical factorization, of an integer.
Theorem \(\PageIndex{3}\)
All integers \(n\geq 2\) can be uniquely expressed in the form \(n = p_1^{e_1} p_2^{e_2} \cdots p_t^{e_t}\) for some distinct primes \(p_i\) and positive integers \(e_i\).
Once we find the prime-power factorization of two integers, their greatest common divisor can be obtained easily.
Example \(\PageIndex{1}\label{eg:FTA-01}\)
From the factorizations \(246 = 2\cdot 3\cdot 41\) and \(426 = 2\cdot 3\cdot 71\), it is clear that \(\gcd(246,426) = 2\cdot3 = 6\).
hands-on exercise \(\PageIndex{1}\label{he:FTA-01}\)
Find the factorizations of 153 and 732, and use them to compute \(\gcd(153,732)\).
Although the set of primes that divide two different positive integers \(a\) and \(b\) may be different, we could nevertheless write both \(a\) and \(b\) as the product of powers of all the primes involved. For example, by combining the prime factors of \[12300 = 2^2\cdot 3\cdot 5^2\cdot 41, \quad\mbox{and}\quad 34128 = 2^4\cdot 3^3\cdot 79,\] we could write them as \[12300 = 2^2\cdot 3^1\cdot 5^2\cdot 41^1\cdot 79^0, \quad\mbox{and}\quad 34128 = 2^4\cdot 3^3\cdot 5^0\cdot 41^0\cdot 79^1.\] It follows that \[\gcd(12300,34128) = 2^2\cdot 3^1\cdot 5^0\cdot 41^0\cdot 79^0 = 12.\] The generalization is immediate.
Theorem \(\PageIndex{4}\)
If \(a = p_1^{e_1} p_2^{e_2} \cdots p_t^{e_t}\) and \(b = p_1^{f_1} p_2^{f_2} \cdots p_t^{f_t}\) for some distinct primes \(p_i\), where \(e_i, f_i\geq0\) for each \(i\), then \(\gcd(a,b) = p_1^{\min(e_1,f_1)} p_2^{\min(e_2,f_2)} \cdots p_t^{\min(e_t,f_t)}\).
In this theorem, we allow the exponents to be zero. In the usual prime-power factorization, the exponents have to be positive.
hands-on exercise \(\PageIndex{2}\label{he:FTA-02}\)
Compute \(\gcd(2^3\cdot5\cdot7\cdot11^2, 2^2\cdot3^2\cdot5^2\cdot7^2)\).
Definition: least common multiple
The least common multiple of the integers \(a\) and \(b\), denoted \(\mathrm{ lcm }(a,b)\), is the smallest positive common multiple of both \(a\) and \(b\).
Theorem \(\PageIndex{5}\)
If \(a = p_1^{e_1} p_2^{e_2} \cdots p_t^{e_t}\) and \(b = p_1^{f_1} p_2^{f_2} \cdots p_t^{f_t}\) for some distinct primes \(p_i\), where \(e_i, f_i\geq0\) for each \(i\), then \(\mathrm{ lcm }(a,b) = p_1^{\max(e_1,f_1)} p_2^{\max(e_2,f_2)} \cdots p_t^{\max(e_t,f_t)}\).
hands-on exercise \(\PageIndex{3}\label{he:FTA-03}\)
Compute \(\mathrm{ lcm }(2^3\cdot5\cdot7\cdot11^2, 2^2\cdot3^2\cdot5^2\cdot7^2)\).
Corollary \(\PageIndex{6}\)
For any positive integers \(a\) and \(b\), we have \(ab = \gcd(a,b)\cdot \mathrm{ lcm }(a,b)\).
Proof
For each \(i\), one of the two numbers \(e_i\) and \(f_i\) is the minimum, and the other is the maximum. Hence, \[e_i + f_i = \min(e_i,f_i) + \max(e_i,f_i),\] from which we obtain \[p_i^{e_i} p_i^{f_i} = p_i^{e_i+f_i} = p_i^{\min(e_i,f_i)+\max(e_i,f_i)} = p_i^{\min(e_i,f_i)} p_i^{\max(e_i,f_i)}.\] Therefore, \(ab\) equals the product of \(\gcd(a,b)\) and \(\mathrm{ lcm }(a,b)\).
Example \(\PageIndex{2}\label{eg:FTA-02}\)
Since \(12300 = 2^2\cdot 3^1\cdot 5^2\cdot 41^1\cdot 79^0\), and \(34128 = 2^4\cdot 3^3\cdot 5^0\cdot 41^0\cdot 79^1\), it follows that \[\mathrm{ lcm }(12300,34128) = 2^4\cdot 3^3\cdot 5^2\cdot 41^1\cdot 79^1 = 34981200.\] We have seen that \(\gcd(12300,34128)=12\), and we do have \(12\cdot 34981200 = 12300\cdot 34128\).
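A short computational check of this example (a sketch, using the exponent vectors over the primes 2, 3, 5, 41, 79):

```python
from math import gcd

primes = [2, 3, 5, 41, 79]
e = [2, 1, 2, 1, 0]        # 12300 = 2^2 * 3 * 5^2 * 41
f = [4, 3, 0, 0, 1]        # 34128 = 2^4 * 3^3 * 79

g, l = 1, 1
for p, ei, fi in zip(primes, e, f):
    g *= p ** min(ei, fi)   # gcd takes the minimum exponent of each prime
    l *= p ** max(ei, fi)   # lcm takes the maximum exponent of each prime

print(g, l)                        # 12 34981200
print(g * l == 12300 * 34128)      # True: ab = gcd(a,b) * lcm(a,b)
print(gcd(12300, 34128) == g)      # True
```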
hands-on exercise \(\PageIndex{4}\label{he:FTA-04}\)
Knowing that \(\gcd(246,426)=6\), how would you compute the value of \(\mathrm{ lcm }(246,426)\)?
Example \(\PageIndex{3}\label{eg:FTA-03}\)
When we add two fractions, we first take the common denominator, as in \[\frac{7}{8} + \frac{5}{12} = \frac{7}{8}\cdot\frac{3}{3} + \frac{5}{12}\cdot\frac{2}{2} = \frac{21+10}{24} = \frac{31}{24}.\] Clear enough, the least common denominator is precisely the least common multiple of the two denominators.
Example \(\PageIndex{4}\label{eg:FTA-04}\)
The control panel of a machine has two signal lights, one red and one blue. The red light blinks once every 10 seconds, and the blue light blinks once every 14 seconds. When the machine is turned on, both lights blink simultaneously. After how many seconds will they blink at the same time again?
Solution
This problem illustrates a typical application of least common multiple. The red light blinks at 10, 20, 30, … seconds, while the blue light blinks at 14, 28, 42, … seconds. In general, the red light blinks at \(t\) seconds if \(t\) is a multiple of 10, and the blue light blinks when \(t\) is a multiple of 14. Therefore, both lights blink together when \(t\) is a multiple of both 10 and 14. The next time it happens will be \(\mathrm{ lcm }(10,14)=70\) seconds later.
hands-on exercise \(\PageIndex{5}\label{he:FTA-05}\)
Two comets travel on fixed orbits around the earth. One of them returns to Earth every 35 years, the other every 42 years. If they both appear in 2012, when is the next time they will return to Earth in the same year?
hands-on exercise \(\PageIndex{6}\label{he:FTA-06}\)
Given relatively prime positive integers \(m\) and \(n\), what are the possible values of \(\mathrm{ lcm }(4m-6n,6m+4n)\)?
Example \(\PageIndex{5}\label{eg:FTA-05}\)
What does \(2\mathbb{Z}\cap3\mathbb{Z}\) equal to?
Solution
Assume \(x\in 2\mathbb{Z}\cap3\mathbb{Z}\), then \(x\in2\mathbb{Z}\) and \(x\in3\mathbb{Z}\). This means \(x\) is a multiple of both 2 and 3. Consequently, \(x\) is a multiple of \(\mathrm{ lcm }(2,3)=6\), which means \(x\in6\mathbb{Z}\). Therefore, \(2\mathbb{Z}\cap3\mathbb{Z} \subseteq 6\mathbb{Z}\).
Next, assume \(x\in 6\mathbb{Z}\), then \(x\) is a multiple of 6. Consequently, \(x\) is a multiple of 2, as well as a multiple of 3. This means \(x\in2\mathbb{Z}\), and \(x\in3\mathbb{Z}\). As a result, \(x\in 2\mathbb{Z}\cap3\mathbb{Z}\). Therefore, \(6\mathbb{Z} \subseteq 2\mathbb{Z}\cap3\mathbb{Z}\). Together with \(2\mathbb{Z}\cap3\mathbb{Z} \subseteq 6\mathbb{Z}\), we conclude that \(2\mathbb{Z}\cap3\mathbb{Z} = 6\mathbb{Z}\).
hands-on exercise \(\PageIndex{7}\label{he:FTA-07}\)
What does \(4\mathbb{Z}\cap6\mathbb{Z}\) equal?
Summary and Review
- There are infinitely many primes.
- Any positive integer \(n>1\) can be uniquely factored into a product of prime powers.
- Primes can be considered as the building blocks (through multiplication) of all positive integers exceeding one.
- Given two positive integers \(a\) and \(b\), their least common multiple is denoted as \(\mathrm{ lcm }(a,b)\).
- For any positive integers \(a\) and \(b\), we have \(ab = \gcd(a,b)\cdot\mathrm{ lcm }(a,b)\).
Exercise \(\PageIndex{1}\label{ex:FTA-01}\)
Find the prime-power factorization of these integers.
(a) 4725  (b) 9702
(c) 180625  (d) 1662405
Exercise \(\PageIndex{2}\label{ex:FTA-02}\)
Find the least common multiple of each of the following pairs of integers.
(a) 27, 81  (b) 24, 84  (c) 120, 615
(d) 412, 936  (e) 1380, 3020  (f) 1122, 3672
Exercise \(\PageIndex{3}\label{ex:FTA-03}\)
Richard follows a very rigid routine. He orders a pizza for lunch every 10 days, and has dinner with his parents every 25 days. If he orders a pizza for lunch and has dinner with his parents today, when will he do both on the same day again?
Exercise \(\PageIndex{4}\label{ex:FTA-04}\)
Compute \(\gcd(15\cdot50,25\cdot21)\), and \(\mathrm{ lcm }(15\cdot50,25\cdot21)\).
Exercise \(\PageIndex{5}\label{ex:FTA-05}\)
What does \(10\mathbb{Z}\cap15\mathbb{Z}\) equal? Prove your claim.
Exercise \(\PageIndex{6}\label{ex:FTA-06}\)
Let \(m\) and \(n\) be positive integers. What does \(m\mathbb{Z}\cap n\mathbb{Z}\) equal? Prove your claim.
Exercise \(\PageIndex{7}\label{ex:FTA-07}\)
Let \(p\) be an odd prime. Show that
\(p\) is of the form \(4k+1\) or of the form \(4k+3\) for some nonnegative integer \(k\).
\(p\) is of the form \(6k+1\) or of the form \(6k+5\) for some nonnegative integer \(k\).
Exercise \(\PageIndex{8}\label{ex:FTA-08}\)
Give three examples of an odd prime \(p\) of each of the following forms
(a) \(4k+1\)  (b) \(4k+3\)
(c) \(6k+1\)  (d) \(6k+5\)
Exercise \(\PageIndex{9}\label{ex:FTA-09}\)
Prove that any prime of the form \(3n+1\) is also of the form \(6k+1\).
Exercise \(\PageIndex{10}\label{ex:FTA-10}\)
Prove that if a positive integer \(n\) is of the form \(3k+2\), then it has a prime factor of the same form.
Hint
Consider its contrapositive.
Exercise \(\PageIndex{11}\label{ex:FTA-11}\)
Prove that 5 is the only prime of the form \(n^2-4\).
Hint
Consider the factorization of \(n^2-4\).
Exercise \(\PageIndex{12}\label{ex:FTA-12}\)
Use the result “Any odd prime \(p\) is of the form \(6k+1\) or of the form \(6k+5\) for some nonnegative integer \(k\)” to prove the following results.
If \(p\geq5\) is a prime, then \(p^2+2\) is composite.
If \(p\geq q\geq5\) are primes, then \(24\mid(p^2-q^2)\).
Let $x$ denote the solution of $Ax=b$ and let $\hat{x}$ denote the computed solution. We cannot hope to do better than $$\hat{x} = \text{fl}(x),$$ i.e., the floating point representation of $x$. In this, the most favorable case, we have $\hat{x}_j = x_j(1+\delta_j)$ where $|\delta_j| \leq u$ and $u$ is the unit roundoff. It follows that $\|x-\hat{x}\|_2 \leq u \|x\|_2$. Now, let $r$ denote the residual given by $$ r = b - A\hat{x} = A(x-\hat{x}).$$ We have $$\|r\|_2 \leq \|A\|_2 \|x-\hat{x}\|_2 \leq u \|A\|_2 \|x\|_2 \leq u \|A\|_2 \| A^{-1} \|_2 \|b\|_2.$$ We conclude that the relative residual satisfies $$ \frac{\|r\|_2}{\|b\|_2} \leq u \, \kappa_2(A), $$ where $\kappa_2(A) = \|A\|_2 \| A^{-1} \|_2$ denotes the 2-norm condition number of the matrix $A$.
The estimate above is true for a general matrix $A$. In practice, you will find that the relative residual of iterative methods stagnates at the level of $u \, \kappa_2(A)$. There is no hope of the CG algorithm doing better in general.
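To make the bound concrete, here is a small numerical sketch (my own illustration, with an arbitrarily constructed ill-conditioned matrix, not part of the original answer): the exact solution is perturbed componentwise by relative errors of size $u$, which is the best any solver can deliver, and the resulting relative residual is compared against $u\,\kappa_2(A)$.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 80
A = rng.standard_normal((n, n)) @ np.diag(np.logspace(0, -6, n))  # ill-conditioned test matrix
x = rng.standard_normal(n)
b = A @ x

u = np.finfo(float).eps / 2                 # unit roundoff for IEEE double precision
delta = u * (2 * rng.random(n) - 1)         # |delta_j| <= u
x_hat = x * (1 + delta)                     # the best we can hope for: fl(x)

rel_res = np.linalg.norm(b - A @ x_hat) / np.linalg.norm(b)
bound = u * np.linalg.cond(A, 2)
print(rel_res <= bound, rel_res, bound)     # the bound holds (usually with plenty of slack)
```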
Now showing items 1-9 of 9
Production of $K*(892)^0$ and $\phi$(1020) in pp collisions at $\sqrt{s}$ =7 TeV
(Springer, 2012-10)
The production of K*(892)$^0$ and $\phi$(1020) in pp collisions at $\sqrt{s}$=7 TeV was measured by the ALICE experiment at the LHC. The yields and the transverse momentum spectra $d^2 N/dydp_T$ at midrapidity |y|<0.5 in ...
Transverse sphericity of primary charged particles in minimum bias proton-proton collisions at $\sqrt{s}$=0.9, 2.76 and 7 TeV
(Springer, 2012-09)
Measurements of the sphericity of primary charged particles in minimum bias proton--proton collisions at $\sqrt{s}$=0.9, 2.76 and 7 TeV with the ALICE detector at the LHC are presented. The observable is linearized to be ...
Pion, Kaon, and Proton Production in Central Pb--Pb Collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(American Physical Society, 2012-12)
In this Letter we report the first results on $\pi^\pm$, K$^\pm$, p and pbar production at mid-rapidity (|y|<0.5) in central Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV, measured by the ALICE experiment at the LHC. The ...
Measurement of prompt J/psi and beauty hadron production cross sections at mid-rapidity in pp collisions at root s=7 TeV
(Springer-verlag, 2012-11)
The ALICE experiment at the LHC has studied J/ψ production at mid-rapidity in pp collisions at s√=7 TeV through its electron pair decay on a data sample corresponding to an integrated luminosity Lint = 5.6 nb−1. The fraction ...
Suppression of high transverse momentum D mesons in central Pb--Pb collisions at $\sqrt{s_{NN}}=2.76$ TeV
(Springer, 2012-09)
The production of the prompt charm mesons $D^0$, $D^+$, $D^{*+}$, and their antiparticles, was measured with the ALICE detector in Pb-Pb collisions at the LHC, at a centre-of-mass energy $\sqrt{s_{NN}}=2.76$ TeV per ...
J/$\psi$ suppression at forward rapidity in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(American Physical Society, 2012)
The ALICE experiment has measured the inclusive J/ψ production in Pb-Pb collisions at √sNN = 2.76 TeV down to pt = 0 in the rapidity range 2.5 < y < 4. A suppression of the inclusive J/ψ yield in Pb-Pb is observed with ...
Production of muons from heavy flavour decays at forward rapidity in pp and Pb-Pb collisions at $\sqrt {s_{NN}}$ = 2.76 TeV
(American Physical Society, 2012)
The ALICE Collaboration has measured the inclusive production of muons from heavy flavour decays at forward rapidity, 2.5 < y < 4, in pp and Pb-Pb collisions at $\sqrt {s_{NN}}$ = 2.76 TeV. The pt-differential inclusive ...
Particle-yield modification in jet-like azimuthal dihadron correlations in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(American Physical Society, 2012-03)
The yield of charged particles associated with high-pT trigger particles (8 < pT < 15 GeV/c) is measured with the ALICE detector in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV relative to proton-proton collisions at the ...
Measurement of the Cross Section for Electromagnetic Dissociation with Neutron Emission in Pb-Pb Collisions at √sNN = 2.76 TeV
(American Physical Society, 2012-12)
The first measurement of neutron emission in electromagnetic dissociation of 208Pb nuclei at the LHC is presented. The measurement is performed using the neutron Zero Degree Calorimeters of the ALICE experiment, which ... |
Fishburn–Shepp inequality ($xyz$ inequality)
An inequality for linear extensions of a finite partially ordered set $(X,\prec)$. Elements $x,y\in X$ are incomparable if $x\neq y$ and neither $x\prec y$ nor $y\prec x$. Denote by $<_0$ a general linear order extension of $\prec$ on $X$, let $N$ be the number of linear extensions $<_0$, and let $N(a_1<_0b_1,\ldots,a_n<_0b_n)$ be the number of linear extensions in which $a_i<_0b_i$ for $i=1,\ldots,n$.
The Fishburn–Shepp inequality says that if $x$, $y$ and $z$ are mutually incomparable members of $(X,\prec)$, then
$$N(x<_0 y)N(x<_0 z)<N(x<_0 y,x<_0 z)N.$$
The inequality is also written as
$$\mu(x<_0y)\mu(x<_0z)<\mu(x<_0y,x<_0z),$$
where $\mu(A)$ denotes the probability that a randomly chosen linear extension $<_0$ of $\prec$ satisfies $A$. When written as $\mu(x<_0y,x<_0z)/\mu(x<_0y)>\mu(x<_0z)$, one sees that the probability of $x<_0z$ increases when it is true that also $x<_0y$. Some plausible related inequalities are false [a3].
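As a quick sanity check (an illustration added here, not part of the original entry), the inequality can be verified by brute force on a small poset; in the sketch below the poset on $\{x,y,z,w\}$ has the single relation $w\prec y$, so $x$, $y$, $z$ are mutually incomparable.

```python
from itertools import permutations

elements = ['x', 'y', 'z', 'w']
relations = [('w', 'y')]                      # w precedes y; all other pairs incomparable

def is_linear_extension(order):
    pos = {e: i for i, e in enumerate(order)}
    return all(pos[a] < pos[b] for a, b in relations)

exts = [p for p in permutations(elements) if is_linear_extension(p)]
N = len(exts)

def count(*pairs):
    # number of linear extensions in which a precedes b for every pair (a, b)
    return sum(all(p.index(a) < p.index(b) for a, b in pairs) for p in exts)

lhs = count(('x', 'y')) * count(('x', 'z'))
rhs = count(('x', 'y'), ('x', 'z')) * N
print(lhs, rhs, lhs < rhs)                    # 48 60 True: strict inequality, as the theorem asserts
```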
References
[a1] R. Ahlswede, D.E. Daykin, "An inequality for the weights of two families, their unions and intersections", Z. Wahrscheinlichkeitsth. verw. Gebiete, 43 (1978), pp. 183–185
[a2] P.C. Fishburn, "A correlational inequality for linear extensions of a poset", Order, 1 (1984), pp. 127–137
[a3] L.A. Shepp, "The XYZ conjecture and the FKG inequality", Ann. of Probab., 10 (1982), pp. 824–827
In last night's post, I looked at the ratio of the level of unemployment $U$ to job vacancies $V$ in terms of the "naive dynamic equilibrium" model. There was a slight change from the original version describing the unemployment rate $u$ where instead of taking the dynamic equilibrium to be:
$$
\frac{du}{dt} \approx \; \text{constant}
$$
I used the logarithmic derivative:
$$
\frac{d}{dt} \log \; \frac{V}{U} \approx \; \text{constant}
$$
This latter form is more interesting (to me) because it can be directly related to a simple information equilibrium model $V \rightleftarrows U$ (vacancies in information equilibrium with unemployment) as shown in last night's post. Can we unify these two approaches so that both use the logarithmic form? Yes, it turns out it works just fine. Because $u$ doesn't have a high dynamic range, $\log u(t)$ is directly related via linear transform to $u(t)$ such that basically we end up with a different constant.
If we have an information equilibrium relationship $U \rightleftarrows L$ where $L$ is the civilian labor force such that $u = U/L$, and we posit a dynamic equilibrium:
$$
\frac{d}{dt} \log \; \frac{U}{L} =\; \frac{d}{dt} \log \; u \; \approx \; \text{constant}
$$
Then we can apply the same procedure as documented here and come up with a better version of the dynamic equilibrium model of the unemployment rate data:
[Update: added derivative graph.] The transition points are at 1991.0, 2001.7, 2008.8, and 2014.3 (the first three are the recessions, and the last one is the "Obamacare boom"). The improvements are mostly near the turnaround points (peaks and valleys). Additionally, the recent slowing in the decrease in the unemployment rate no longer looks as much like the leading edge of a future recession (the recent data is consistent with the normal flattening out that is discussed here).
I just wanted to repeat the calculation I did in the previous post for the unemployment rate. We have an information equilibrium relationship $U \rightleftarrows L$ with a constant IT index $k$ so that:
$$
\log \; U = \; k \; \log \; L + c
$$
therefore if the total number of people in the labor force grows at a constant rate $r$, i.e. $L\sim e^{rt}$ (with, say, $r$ being population growth)
$$
\begin{align}
\log \; \frac{U}{L} = & \; \log U - \log L\\
= & \; k\; \log L + c - \log L\\
= & \; (k-1)\; \log L + c\\
\frac{d}{dt}\log \; u = & \; (k-1)\;\frac{d}{dt} \log L \\
= & \; (k-1)r
\end{align}
$$
which is a constant (and $u \equiv U/L$).
Update +4 hours:
I went back and added the recessions through the 1960s. Because the logarithm of the unemployment rate is much better behaved, I was able to fit the entire series from 1960 to the present with a single function (a sum of 7 logistic functions) instead of the piecewise method I had used for the non-logarithmic version that fit the pre-1980s, the 1980s, and the post-1980s separately. Here is the result:
The recession years are: 1970.3, 1974.8, 1981.1, 1991.1, 2001.7, and 2008.8 ‒ plus the positive ACA shock in 2014.4. And except for one major difference, the model is basically within ± 0.2 percentage points (20 basis points). That major difference is the first half of the 1980s when Volcker was experimenting on the economy ...
It's possible that we need to resolve what may actually be multiple shocks in the early 1980s.
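For concreteness, here is a minimal sketch of the functional form used in these fits: a constant logarithmic slope plus a sum of logistic shock steps. This is my own illustration, not the actual fitting code; only the shock centers below come from the fit quoted above, while the amplitudes, widths, slope, and offset are made-up values.

```python
import numpy as np

def log_u(t, slope, const, shocks):
    """log u(t) = slope * t + const + sum of logistic shock steps."""
    out = slope * t + const
    for a_i, b_i, t_i in shocks:                  # amplitude, width, center of each shock
        out = out + a_i / (1.0 + np.exp(-(t - t_i) / b_i))
    return out

# Centers taken from the fit above; amplitudes and widths are illustrative guesses.
shocks = [(0.5, 0.4, 1991.0), (0.45, 0.4, 2001.7), (0.8, 0.5, 2008.8), (-0.2, 0.4, 2014.3)]
t = np.linspace(1985, 2017, 600)
u = np.exp(log_u(t, -0.09, 180.0, shocks))        # unemployment rate, arbitrary normalization
```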
Update 14 January 2017
Later posts examine the EU and Japan. First the EU:
The recession centers are at 2003.1, 2009.1, and 2012.2. And for Japan:
The centroids of the recessions in the fit are at 1974.8, 1983.1, 1993.9, 1998.4, 2001.6, and 2009.0.
...
Update 17 January 2017
US employment level using the model above:
Update 22 January 2017
A post on the model for the UK: |
1) Given \(\vecs r(t)=(3t^2−2)\,\hat{\mathbf{i}}+(2t−\sin t)\,\hat{\mathbf{j}}\),
a. find the velocity of a particle moving along this curve.
b. find the acceleration of a particle moving along this curve.
Answer: a. \(\vecs v(t)=6t\,\hat{\mathbf{i}}+(2−\cos t)\,\hat{\mathbf{j}}\) b. \(\vecs a(t)=6\,\hat{\mathbf{i}}+\sin t\,\hat{\mathbf{j}}\) In questions 2 - 5, given the position function, find the velocity, acceleration, and speed in terms of the parameter \(t\).
2) \(\vecs r(t)=e^{−t}\,\hat{\mathbf{i}}+t^2\,\hat{\mathbf{j}}+\tan t\,\hat{\mathbf{k}}\)
3) \(\vecs r(t)=⟨3\cos t,\,3\sin t,\,t^2⟩\)
Answer: \(\vecs v(t)=-3\sin t\,\hat{\mathbf{i}}+3\cos t\,\hat{\mathbf{j}}+2t\,\hat{\mathbf{k}}\) \(\vecs a(t)=-3\cos t\,\hat{\mathbf{i}}-3\sin t\,\hat{\mathbf{j}}+2\,\hat{\mathbf{k}}\) \(\text{Speed}(t) = \|\vecs v(t)\| = \sqrt{9 + 4t^2}\)
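For readers who want to check such computations symbolically, here is a short SymPy sketch for Problem 3 (added for illustration; it is not part of the original exercise set).

```python
import sympy as sp

t = sp.symbols('t', real=True)
r = sp.Matrix([3*sp.cos(t), 3*sp.sin(t), t**2])   # position r(t) = <3 cos t, 3 sin t, t^2>
v = r.diff(t)                                     # velocity
a = v.diff(t)                                     # acceleration
speed = sp.sqrt(v.dot(v)).simplify()
print(v.T, a.T, speed)                            # speed simplifies to sqrt(4*t**2 + 9)
```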
4) \(\vecs r(t)=t^5\,\hat{\mathbf{i}}+(3t^2+2t- 5)\,\hat{\mathbf{j}}+(3t-1)\,\hat{\mathbf{k}}\)
5) \(\vecs r(t)=2\cos t\,\hat{\mathbf{j}}+3\sin t\,\hat{\mathbf{k}}\). The graph is shown here:
Answer: \(\vecs v(t)=-2\sin t\,\hat{\mathbf{j}}+3\cos t\,\hat{\mathbf{k}}\) \(\vecs a(t)=-2\cos t\,\hat{\mathbf{j}}-3\sin t\,\hat{\mathbf{k}}\) \(\text{Speed}(t) = \|\vecs v(t)\| = \sqrt{4\sin^2 t+9\cos^2 t}=\sqrt{4+5\cos^2 t}\) In questions 6 - 8, find the velocity, acceleration, and speed of a particle with the given position function.
6) \(\vecs r(t)=⟨t^2−1,t⟩\)
7) \(\vecs r(t)=⟨e^t,e^{−t}⟩\)
Answer: \(\vecs v(t)=⟨e^t,−e^{−t}⟩\), \(\vecs a(t)=⟨e^t, e^{−t}⟩,\) \( \|\vecs v(t)\| = \sqrt{e^{2t}+e^{−2t}}\)
8) \(\vecs r(t)=⟨\sin t,t,\cos t⟩\). The graph is shown here:
9) The position function of an object is given by \(\vecs r(t)=⟨t^2,5t,t^2−16t⟩\). At what time is the speed a minimum?
Answer: \(t = 4\)
10) Let \(\vecs r(t)=r\cosh(ωt)\,\hat{\mathbf{i}}+r\sinh(ωt)\,\hat{\mathbf{j}}\). Find the velocity and acceleration vectors and show that the acceleration is proportional to \(\vecs r(t)\).
11) Consider the motion of a point on the circumference of a rolling circle. As the circle rolls, it generates the cycloid \(\vecs r(t)=(ωt−\sin(ωt))\,\hat{\mathbf{i}}+(1−\cos(ωt))\,\hat{\mathbf{j}}\), where \(\omega\) is the angular velocity of the circle (the radius of the circle is taken to be \(1\)):
Find the equations for the velocity, acceleration, and speed of the particle at any time.
Answer: \(\vecs v(t)=(ω−ω\cos(ωt))\,\hat{\mathbf{i}}+(ω\sin(ωt))\,\hat{\mathbf{j}}\) \(\vecs a(t)=(ω^2\sin(ωt))\,\hat{\mathbf{i}}+(ω^2\cos(ωt))\,\hat{\mathbf{j}}\) \(\begin{align*} \text{speed}(t) &= \sqrt{(ω−ω\cos(ωt))^2 + (ω\sin(ωt))^2} \\ &= \sqrt{ω^2 - 2ω^2 \cos(ωt) + ω^2\cos^2(ωt) + ω^2\sin^2(ωt)} \\ &= \sqrt{2ω^2(1 - \cos(ωt))} \end{align*} \)
12) A person on a hang glider is spiraling upward as a result of the rapidly rising air on a path having position vector \(\vecs r(t)=(3\cos t)\,\hat{\mathbf{i}}+(3\sin t)\,\hat{\mathbf{j}}+t^2\,\hat{\mathbf{k}}\). The path is similar to that of a helix, although it is not a helix. The graph is shown here:
Find the following quantities:
a. The velocity and acceleration vectors
b. The glider’s speed at any time
Answer: \(∥\vecs v(t)∥=\sqrt{9+4t^2}\)
c. The times, if any, at which the glider’s acceleration is orthogonal to its velocity
13) Given that \(\vecs r(t)=⟨e^{−5t}\sin t,e^{−5t}\cos t,4e^{−5t}⟩\) is the position vector of a moving particle, find the following quantities:
a. The velocity of the particle
Answer: \(\vecs v(t)=⟨e^{−5t}(\cos t−5\sin t),−e^{−5t}(\sin t+5\cos t),−20e^{−5t}⟩\)
b. The speed of the particle
c. The acceleration of the particle
Answer: \(\vecs a(t)=⟨e^{−5t}(−\sin t−5\cos t)−5e^{−5t}(\cos t−5\sin t), −e^{−5t}(\cos t−5\sin t)+5e^{−5t}(\sin t+5\cos t),100e^{−5t}⟩\)
14) Find the maximum speed of a point on the circumference of an automobile tire of radius 1 ft when the automobile is traveling at 55 mph.
15) Find the position vector-valued function \(\vecs r(t)\), given that \(\vecs a(t)=\hat{\mathbf{i}}+e^t \,\hat{\mathbf{j}}, \quad \vecs v(0)=2\,\hat{\mathbf{j}}\), and \(\vecs r(0)=2\,\hat{\mathbf{i}}\).
16) Find \(\vecs r(t)\) given that \(\vecs a(t)=−32\,\hat{\mathbf{j}}, \vecs v(0)=600\sqrt{3} \,\hat{\mathbf{i}}+600\,\hat{\mathbf{j}}\), and \(\vecs r(0)=\vecs 0\).
17) The acceleration of an object is given by \(\vecs a(t)=t\,\hat{\mathbf{j}}+t\,\hat{\mathbf{k}}\). The velocity at \(t=1\) sec is \(\vecs v(1)=5\,\hat{\mathbf{j}}\) and the position of the object at \(t=1\) sec is \(\vecs r(1)=0\,\hat{\mathbf{i}}+0\,\hat{\mathbf{j}}+0\,\hat{\mathbf{k}}\). Find the object’s position at any time.
Answer: \(\vecs r(t)=0\,\hat{\mathbf{i}}+(\frac{1}{6}t^3+4.5t−\frac{14}{3})\,\hat{\mathbf{j}}+(\frac{t^3}{6}−\frac{1}{2}t+\frac{1}{3})\,\hat{\mathbf{k}}\) Projectile Motion
18) A projectile is shot in the air from ground level with an initial velocity of 500 m/sec at an angle of 60° with the horizontal. The graph is shown here:
a. At what time does the projectile reach maximum height?
Answer: \(44.185\) sec
b. What is the approximate maximum height of the projectile?
c. At what time is the maximum range of the projectile attained?
Answer: \(t=88.37\) sec
d. What is the maximum range?
e. What is the total flight time of the projectile?
Answer: \(t=88.37\) sec
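As a quick numerical cross-check of these answers (my own sketch, assuming \(g = 9.8\ \text{m/s}^2\) and level ground):

```python
import numpy as np

g, v0, theta = 9.8, 500.0, np.radians(60.0)
t_peak = v0 * np.sin(theta) / g            # time of maximum height   ~ 44.19 s
h_max = (v0 * np.sin(theta))**2 / (2 * g)  # maximum height           ~ 9566 m
t_flight = 2 * t_peak                      # total flight time        ~ 88.37 s
x_range = v0 * np.cos(theta) * t_flight    # maximum range            ~ 22093 m
print(t_peak, h_max, t_flight, x_range)
```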
19) A projectile is fired at a height of 1.5 m above the ground with an initial velocity of 100 m/sec and at an angle of 30° above the horizontal. Use this information to answer the following questions:
a. Determine the maximum height of the projectile.
b. Determine the range of the projectile.
Answer: The range is approximately 886.29 m.
20) A golf ball is hit in a horizontal direction off the top edge of a building that is 100 ft tall. How fast must the ball be launched to land 450 ft away?
21) A projectile is fired from ground level at an angle of 8° with the horizontal. The projectile is to have a range of 50 m. Find the minimum velocity (speed) necessary to achieve this range.
Answer: \(v=42.16\) m/sec
e. Prove that an object moving in a straight line at a constant speed has an acceleration of zero. |
This question already has an answer here:
Let $f : X \longrightarrow Y$ be continuous, and let $A \subseteq X$. Show that $f(\overline A) \subseteq \overline{f(A)}$.
My attempt
Let $y \in f(\overline A)$. Then there exists $x \in \overline A$ such that $f(x) = y$. Now let us take an open neighbourhood $V$ of $y$ in $Y$ arbitrarily. If we can show that $V \cap f(A) \neq \emptyset$ then our purpose will be served.
Now since $f(x) = y \in V$, we have $x \in f^{-1} (V)$. Since $x \in \overline A$ and $f^{-1} (V)$ is open in $X$ (since $f$ is continuous), we have that $f^{-1} (V) \cap A \neq \emptyset$.
Let $z \in f^{-1} (V) \cap A$. Then $f(z) \in V \cap f(A)$, which proves that $V \cap f(A) \neq \emptyset$ and we are done.
Is my reasoning correct at all? Please verify it. |
(Sorry was asleep at that time but forgot to log out, hence the apparent lack of response)
Yes you can (since $k=\frac{2\pi}{\lambda}$). To convert from path difference to phase difference, multiply by $k$; see this PSE post for details http://physics.stackexchange.com/questions/75882/what-is-the-difference-between-phase-difference-and-path-difference
Well done Shaun from Nottingham and Maria from Seville.
In this diagram $OX$ makes an angle $\theta$ with the vertical, which means that the $2$ and $3$ weights both make an angle $\theta$ with the horizontal.
Take the length $OX$ as the unit, and think about a vertical line through $O$ and the horizontal distance between that line and vertical lines through the $2$ and $3$ weights, and also through $X$.
We can see that the horizontal shift from the pivot for $X$ is $\sin\theta$ and the horizontal shift from the pivot for the $2$ and for the $3$ is $\cos\theta$. The balance will come to a settled position so that:
$$3\cos\theta = 2\cos\theta + X\sin\theta$$
which means that $X\sin\theta$ must equal $\cos\theta$ or, after a little rearranging,
$$ X = \frac{ \cos\theta}{ \sin\theta}\;.$$
At Stage 4 this equation is properly best solved by trial and improvement but if you have gone just a little bit further with your maths, you may know that:
$$\tan\theta = \frac{ \sin\theta}{ \cos\theta}\;.$$
Investigate that if you haven't seen it before.
So here $\tan\theta$ will equal $1/X$, and all we need to do is find $1/X$ on a calculator and then take the inverse tangent of that value. In degrees the specific angles were $45$ degrees and $26.6$ degrees (to $1$ decimal place).
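A one-line check of the two quoted cases (assuming the weight $X$ takes the values $1$ and $2$, which reproduce the quoted angles):

```python
import math

# theta = arctan(1/X); for X = 1 and X = 2 this gives 45.0 and 26.6 degrees.
for X in (1, 2):
    print(X, round(math.degrees(math.atan(1 / X)), 1))
```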
| Numeric | Symbolic |
| --- | --- |
| fast | usually slower |
| approximate | exact |
| well suited for simulations/DE/root finding | well suited for algebraic/geometric problems |
| sensitive to instability | always stable |
| not certified | certified |
| bertini/PHCpack... | CoCoA/Singular/Macaulay/Magma... |
| not good for exact problems | hopeless for certain problems |
Let $C = V(f), f\in \mathbb{C}[x,y]$ be a plane affine curve. We are interested in the topology of the pair $(\mathbb{C}^2, C)$.
This has several ingredients:
So the complexity lies on the embedding. In particular, we will focus on the complement $\mathbb{C}^2\setminus C$. Its main invariant is the fundamental group.
The fundamental group $\pi_1(\mathbb{C}^2\setminus C)$ is given by the quotient of the free group $F_d$ by the action of the braid monodromy.
So our problem can be reduced to compute the braids corresponding to the loops around the points of the discriminant.
Two curves $C_1= V(f_1), C_2=V(f_2)$ form a
weak arithmetic Zariski pair if these two conditions hold:
There exist weak arithmetic Zariski pairs. In particular it means that we cannot use only algebraic methods to study the embedded topology.
Nope:
Two curves $C_1= V(f_1), C_2=V(f_2)$ form an
arithmetic Zariski pair if:
There exist arithmetic Zariski pairs. So we cannot use purely algebraic methods to compute the fundamental group either.
$\mathbb{R}_{int} := \{[a,b] \mid a,b \in\mathbb{R}, a\leq b\}$
Pseudoinverses:
$\mathbb{C}_{int} := \{A + i\cdot B \mid A, B \in \mathbb{R}_{int}\}$$$[X] = \bigcap_{X\subseteq Y \in \mathbb{C}_{int}} Y$$
Let $Y\in\mathbb{C}_{int}, y_0 \in Y$. Let $f:Y\to \mathbb{C}$ be a holomorphic function. Assume that $0 \notin [f'(Y)]$, and$$N(f, y_0, Y):= y_0-\frac{f(y_0)}{[f'(Y)]} \subseteq Y.$$
Then there exists a unique zero of $f$ in $Y$. Moreover, this zero lives in $N(f,y_0,Y)$
We can use this theorem to ensure that we have a tubular neighborhood of our piecewise linear braid that contains the actual braid.
note that using intervals allows us to compute with irrational numbers, even transcendental!
So the braid will lie inside the interval around $y_0$ for $x \in [0,0.25]$.
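To illustrate the mechanics of the interval Newton test, here is a toy real-interval sketch written for this exposition (it ignores directed rounding, so it is not a certified implementation like the one actually used): we certify the root of $f(x) = x^2 - 2$ inside $Y = [1, 2]$ by checking $N(f, y_0, Y) \subseteq Y$.

```python
class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)
    def __truediv__(self, other):          # assumes 0 is not contained in the divisor
        cands = [self.lo / other.lo, self.lo / other.hi,
                 self.hi / other.lo, self.hi / other.hi]
        return Interval(min(cands), max(cands))
    def contains(self, other):
        return self.lo <= other.lo and other.hi <= self.hi

def f(x):  return x * x - 2
def df(Y): return Interval(2 * Y.lo, 2 * Y.hi)    # f'(x) = 2x evaluated over Y

Y = Interval(1.0, 2.0)
y0 = 1.5
N = Interval(y0, y0) - Interval(f(y0), f(y0)) / df(Y)
print(N.lo, N.hi, Y.contains(N))   # True => a unique root of f lies in N, which contains sqrt(2)
```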
load("my_library.pyx")load("ZVK.py")
Compiling ./my_library.pyx...
R.<x,y> = QQ[]
f = y^2 - x^3 - x^2
implicit_plot(f, (x, -3, 3), (y, -3, 3))
fundamental_group(f)
Finitely presented group < x0 | >
f = (x^2+y^2)^2+18*(x^2+y^2) - 27 -8*(x^3-3*x*y^2)
implicit_plot(f,(x,-3,3), (y,-3,3))
fundamental_group(f)
Finitely presented group < x0, x1, x2 | x0*x2*x0^-1*x2^-1*x0^-1*x2, x2*x1*x2*x1^-1*x2^-1*x1^-1, x1^-1*x0*x1*x0*x1^-1*x0^-1 >
f = (x^2 - 1)*(x - 1)^2 + (y^2 - 1)^2
implicit_plot(f, (x, -3, 3), (y, -3, 3))
fundamental_group(f)
Finitely presented group < x0 | >
f = (x^2 + y^2)^2 - 2*x^2 + 2*y^2
implicit_plot(f, (x,-2, 2), (y, -2, 2))
fundamental_group(f)
Finitely presented group < x0 | >
f = x^4 - x^2*y + y^3
implicit_plot(f, (x,-2, 2), (y, -2, 2))
fundamental_group(f)
Finitely presented group < x0 | >
f = (x^2 + y^2)^2 - x^3 + 3*x*y^2
implicit_plot(f, (x, -3, 3), (y, -3, 3))
fundamental_group(f)
Finitely presented group < x0 | >
a = QQ[x](x^5-1).roots(QQbar)[-1][0]
a
0.3090169943749474? + 0.9510565162951536?*I
F = NumberField(a.minpoly(), 'a', embedding = a)
F.inject_variables()
F
Defining a Number Field in a with defining polynomial x^4 + x^3 + x^2 + x + 1
R.<x,y> = F[]
f = x^4 + a*y^4 - 2*x^2*y^2
fundamental_group(f)
Finitely presented group < x0, x1, x3, x4 | x1*x3*x4*x0*x1^-1*x0^-1*x4^-1*x3^-1, x4*x0*x1*x3^-1*x1^-1*x0^-1*x4^-1*x3, x4^-1*x3^-1*x0*x1*x3*x4*x1^-1*x0^-1 > |
Interested in the following function:$$ \Psi(s)=\sum_{n=2}^\infty \frac{1}{\pi(n)^s}, $$where $\pi(n)$ is the prime counting function.When $s=2$ the sum becomes the following:$$ \Psi(2)=\sum_{n=2}^\infty \frac{1}{\pi(n)^2}=1+\frac{1}{2^2}+\frac{1}{2^2}+\frac{1}{3^2}+\frac{1}{3^2}+\frac{1...
Consider a random binary string where each bit can be set to 1 with probability $p$.Let $Z[x,y]$ denote the number of arrangements of a binary string of length $x$ and the $x$-th bit is set to 1. Moreover, $y$ bits are set 1 including the $x$-th bit and there are no runs of $k$ consecutive zer...
The field $\overline F$ is called an algebraic closure of $F$ if $\overline F$ is algebraic over $F$ and if every polynomial $f(x)\in F[x]$ splits completely over $\overline F$.
Why in def of algebraic closure, do we need $\overline F$ is algebraic over $F$? That is, if we remove '$\overline F$ is algebraic over $F$' condition from def of algebraic closure, do we get a different result?
Consider an observer located at radius $r_o$ from a Schwarzschild black hole of radius $r_s$. The observer may be inside the event horizon ($r_o < r_s$).Suppose the observer receives a light ray from a direction which is at angle $\alpha$ with respect to the radial direction, which points outwa...
@AlessandroCodenotti That is a poor example, as the algebraic closure of the latter is just $\mathbb{C}$ again (assuming choice). But starting with $\overline{\mathbb{Q}}$ instead and comparing to $\mathbb{C}$ works.
Seems like everyone is posting character formulas for simple modules of algebraic groups in positive characteristic on arXiv these days. At least 3 papers with that theme the past 2 months.
Also, I have a definition that says that a ring is a UFD if every element can be written as a product of irreducibles which is unique up units and reordering. It doesn't say anything about this factorization being finite in length. Is that often part of the definition or attained from the definition (I don't see how it could be the latter).
Well, that then becomes a chicken and the egg question. Did we have the reals first and simplify from them to more abstract concepts or did we have the abstract concepts first and build them up to the idea of the reals.
I've been told that the rational numbers from zero to one form a countable infinity, while the irrational ones form an uncountable infinity, which is in some sense "larger". But how could that be? There is always a rational between two irrationals, and always an irrational between two rationals, ...
I was watching this lecture, and in reference to above screenshot, the professor there says: $\frac1{1+x^2}$ has a singularity at $i$ and at $-i$, and power series expansions are limits of polynomials, and limits of polynomials can never give us a singularity and then keep going on the other side.
On page 149 Hatcher introduces the Mayer-Vietoris sequence, along with two maps $\Phi : H_n(A \cap B) \to H_n(A) \oplus H_n(B)$ and $\Psi : H_n(A) \oplus H_n(B) \to H_n(X)$. I've searched through the book, but I couldn't find the definitions of these two maps. Does anyone know how to define them or where there definition appears in Hatcher's book?
suppose $\sum a_n z_0^n = L$, so $a_n z_0^n \to 0$, so $|a_n z_0^n| < \dfrac12$ for sufficiently large $n$, so for $|z| < |z_0|$ we have $|a_n z^n| = |a_n z_0^n| \left(\left|\dfrac{z}{z_0}\right|\right)^n < \dfrac12 \left(\left|\dfrac{z}{z_0}\right|\right)^n$, so $a_n z^n$ is absolutely summable, so $a_n z^n$ is summable
Let $g : [0,\frac{ 1} {2} ] → \mathbb R$ be a continuous function. Define $g_n : [0,\frac{ 1} {2} ] → \mathbb R$ by $g_1 = g$ and $g_{n+1}(t) = \int_0^t g_n(s) ds,$ for all $n ≥ 1.$ Show that $lim_{n→∞} n!g_n(t) = 0,$ for all $t ∈ [0,\frac{1}{2}]$ .
Can you give some hint?
My attempt:- $t\in [0,1/2]$ Consider the sequence $a_n(t)=n!g_n(t)$
If $\lim_{n\to \infty} \frac{a_{n+1}}{a_n}<1$, then it converges to zero.
I have a bilinear functional that is bounded from below
I try to approximate the minimum by a ansatz-function that is a linear combination
of any independent functions of the proper function space
I now obtain an expression that is bilinear in the coeffcients
using the stationarity condition (all derivaties of the functional w.r.t the coefficients = 0)
I get a set of $n$ equations with the $n$ the number of coefficients
a set of $n$ linear homogeneous equations in the $n$ coefficients
Now instead of "directly attempting to solve" the equations for the coefficients I rather look at the secular determinant that should be zero, otherwise no non trivial solution exists
This "characteristic polynomial" directly yields all permissible approximation values for the functional from my linear ansatz.
Avoiding the neccessity to solve for the coefficients.
I have problems now formulating the question. But it strikes me that a direct solution of the equation can be circumvented and instead the values of the functional are directly obtained by using the condition that the determinant is zero.
I wonder if there is something deeper in the background, or so to say a more very general principle.
If $x$ is a prime number and a number $y$ exists which is the digit reverse of $x$ and is also a prime number, then there must exist an integer z in the mid way of $x, y$ , which is a palindrome and digitsum(z)=digitsum(x).
> Bekanntlich hat P. du Bois-Reymond zuerst die Existenz einer überall stetigen Funktion erwiesen, deren Fouriersche Reihe an einer Stelle divergiert. Herr H. A. Schwarz gab dann ein einfacheres Beispiel.
(Translation: It is well known that Paul du Bois-Reymond was the first to demonstrate the existence of a continuous function whose Fourier series diverges at a point. Afterwards, Hermann Amandus Schwarz gave a simpler example.)
It's discussed very carefully (but no formula explicitly given) in my favorite introductory book on Fourier analysis. Körner's Fourier Analysis. See pp. 67-73. Right after that is Kolmogoroff's result that you can have an $L^1$ function whose Fourier series diverges everywhere!! |
In Section 4.2.2 of the book "Principles of Model Checking", there is a definition (Definition 4.16; Page 165) of "Product of Transition System and NFA". You are right about the states (i.e., $S \times Q$) of the product but make mistakes about its transition relation. Below I focus on the transition relation.Definition 4.16 Product of Transition System $...
John Harrison's book is an exception in going all the way from theory to practice and making all the source code available. I think you will find it difficult to find an equivalent book for model checking, but there are a few that achieve a close approximation.Principles of Model Checking by Baier and Katoen contains a lot of examples and pretty detailed ...
Although there are frameworks created specifically for the purpose of prototyping programming languages (including their semantics, type systems, evaluation, as well as checking properties about them), the best choice depends on your particular case and specific needs.Having that said, there are multiple (perhaps not so distinct) alternatives you might ...
Probably the most common fixpoint expressions in model checking are things like $\mu X.A\cup(B\cap\circ X)$ and $\nu X.A\cap(B\cup\circ X)$, where $\circ$ is some flavour of "next state" operator. That is, the least $X$ such that $X = A\cup(B\cap\circ X)$, and the greatest $X$ such that $X = A\cap(B\cup\circ X)$, respectively. More generally, we are talking ...
Symbolic model checking can be very useful for verifying the correctness of communications and security protocols. For example:A symbolic model of an OAUTH2 implementation could help check for unintended consequences where an adversary obtains secret authentication tokens or related circumstantial data that could help them contravene the process.A ...
Technically, LTL and CTL are incomparable in their differentiating power over Kripke structures. So intuitively, it shouldn't be easier to design one or the other.However, most people tend to find it easier to think in linear time. Making sure a certain property holds for all paths is easier than studying a branching-time property. In particular, this is ...
It seems to me that "$\Phi≡\Psi$" is equivalent to "Neither $(\Phi ∧ ¬\Psi)$ nor $(\Psi ∧ ¬\Phi)$ is satisfiable".Therefore deciding equivalence is as difficult as deciding satisfiability, since "$\Phi$ satisfiable" is equivalent to "not ($¬\Phi≡\top$)".In this article there is a mention of a an exponential procedure to decide satisfiability in ...
You are certainly right that the level of rigor found in old papers making such claims can be a bit low at times when viewed from today's perspective.The claim is correct anyway, even if it does not follow from Sistla/Clarke's proof. The reason is that LTL satisfiability checking is also PSPACE-complete. You can see satisfiability checking as a special ...
You have some misconception regarding transition systems:The alphabet of the system is $2^P$ for some set $P$, and paths induce words over $2^P$. In the first system, there is a single path: $(S_0,S_1)^\omega$, and it induces a single word: $(\{a\},\{a,b\})^\omega$.Now, this word satisfies $aUb$, since $a$ holds until the second letter, in which $b$ holds....
How does the transition system (denoted by $\texttt{TS}$) relate to the sequential circuit (denoted by $\texttt{SC}$)?The states of $\texttt{TS}$ are all possible combinations of values of variable $x$ and register $r$ (no output variable $y$) in $\texttt{SC}$.The label of each state in $\texttt{TS}$ consists of all the variables (including the register) ...
Symbolic Model Checking is Model Checking that works on symbolic states. That is, they encode the states into symbolic representations, typically Ordered Binary Decision Diagrams (OBDDs).The question is what do they do and how do they work.You first have your source code for some application. You then transform your source code into some state-transition ...
In general when we talk about code generation (or model-to-model transformation in general), clearly defined semantics is quite important, since such transformations usually make sense when both the source and the target model semantically match according to some criteria. For example, programmers might describe the behaviour of a program with a formal ...
ACTL is the universal fragment of CTL.Thus, existential path quantification is not allowed.So a path formula is of the form $AF\psi$, $AG\psi$, or $AX\psi$ (or a conjunction or disjunction of path formulas).Moreover, you are not allowed a general NOT operator, but rather negations have to be on the atomic propositions (otherwise this fragment would be ...
Your interpretation of the $G$ modality is incorrect; it does not inherently talk about all paths. In particular, the example you give specifies that there is a path such that from some point on, all states on that path satisfy $p$.As you suspect, for CTL* it is in general not possible to use a simple bottom-up evaluation, as you would for CTL, the reason ...
Presumably it can check any "liveness" property that that can be formulated in LTL. A "liveness" property is typically described as a property stating that "something good eventually happens". This is usually contrasted to a "safety" property which states that "nothing bad ever happens". See e.g. Slide 20 of this SPIN tutorial. Basically, a basic safety ...
I don't think it's possible in CTL nor LTL to model two competing players.You would probably need ATL (Alternating-time Temporal Logic). In ATL, the formula $\langle\langle A \rangle\rangle \phi$ says that agent (or coalition) $A$ can enforce $\phi$ to come about. In your case, $\langle\langle P_1 \rangle\rangle \text{Win}_1$.In modal µ-calculus, it ...
You can learn a lot about CTL at Wikipedia page.The sentence you need to write, expressed more closely in the vocabulary of CTL operators, would beAlong all paths starting from current state, it always has to hold that there exists at least one path where $p$ eventually is true.I think you can take it from there, but comment if you have troubles.hint: ...
The set of states satisfying $\exists a U b$ is the smallest set $S$ such that$S$ contains all states satisfying $b$, and$S$ contains all states satisfying $a$ which have a successor in $S$Note that we specify the "smallest" such set because otherwise you could pick the set of all states, or include arbitrary cycles of $a$-states, etc. Do you see how ...
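A minimal explicit-state sketch of this least-fixpoint computation (added here as an illustration; the state space, labelling, and transition relation below are made up):

```python
def eval_EU(states, succ, sat_a, sat_b):
    """Return the set of states satisfying E[a U b] by least-fixpoint iteration."""
    S = set(sat_b)                          # start from all b-states
    changed = True
    while changed:
        changed = False
        for s in states:
            if s not in S and s in sat_a and any(t in S for t in succ[s]):
                S.add(s)                    # an a-state with some successor already in S
                changed = True
    return S

states = {0, 1, 2, 3}
succ = {0: {1}, 1: {2}, 2: {2}, 3: {0}}
print(eval_EU(states, succ, sat_a={0, 1, 3}, sat_b={2}))   # {0, 1, 2, 3}
```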
DefinitionsFrom "Principle of Model Checking" By Joost-Pieter Katoen and Christel baier:A program graph over a set Var of typed variables is a tuple $(Loc, Act, >Effect, \rightarrow, Loc_0, g_0)$ where:$Loc$ is a set of locations$Effect: Act \times Eval(Var) \to Eval(Var)$ is the effect function$\rightarrow \subseteq Loc \times Cond(...
CTL formulas are always evaluated from the starting state of the Kripke structure. Indeed, CTL stands for computation Tree logic, and the tree in question is the unwinding of the Kripke structure, starting from the initial state.If you want to specify that $EF q$ should hold from every (reachable) state, that can be specified as $AGEFq$, which is still a ...
First, to answer the question in your question title: the difference between equivalence and implication in CTL formulae is the same as the difference between equivalence and implication in propositional logic, that is, $A \leftrightarrow B$ is the same $(A \to B) \land (B \to A)$.But your real question is whether $\mathrm{AG}\,(A \land B)$ is equivalent ...
You did not provide any particular resources for neither transition system nor program graph. My answer below is based on the book "Principles of Model Checking" by Christel Baier and Joost-Pieter Katoen (The MIT Press).For completeness, I first present the definitions of both transition system and program graph in this book. (Keep the numbering of the ...
What you describe is symbolic model checking, and it is treated in this set of slides, using reduced ordered BDDs.In a nutshell, you still do the fixpoint iteration, the main issue being how to do the transformation $Q\mapsto \phi_2\vee(\phi_1\wedge EXQ)$ on BDDs. The elementary operations you need are renaming (to replace unprimed by primed variables in $...
In general, logical formulae can be thought of as trees; inner nodes are operators and leaves are atomic propositions. Therefore, every formula consists of as many direct subformulae (that is on the first level) as its top-most operator's arity. For example,$\qquad \varphi \land \psi$has two direct subformulae $\varphi$ and $\psi$. This can be continued ...
If you want to prove the identities by hand, I do not know if there are absolutely general techniques. You can start with the axioms and well known identities for CTL and work from there.If you want the answer and worry about having a human readable proof separately, you can use a CTL satisfiability checker like MLSolver.
If the function has two outputs, a standard way to represent this is as a function $f: A \to (B \times C)$, i.e., $f$ outputs a pair of an AST and a symbol table.If the function updates an existing symbol table, you could represent this as a function $f : (A \times C) \to (B \times C)$. It takes as an input a pair of a string and a symbol table, and ...
It depends on how you model the system and what proof approach you're using. For early versions of Hoare Logic, there isn't really any notion of "scope", so there's absolutely nothing special you need to do. You simply have a pre-conditions and post-conditions on the global state. For Hoare Logic, types have nothing to do with it.In a bit more detail, in a ...
The two definitions are equivalent:As you point out, the second one implies the first, as if $\phi$ holds, then clearly $\phi\vee \psi$ holds.Conversely, consider a computation that satisfies the first definition, and assume by way of contradiction that it does not satisfy the second definition.Thus, there exists some time in which $\psi$ holds, and in ...
Here are two techniques I've been able to identify:Identify an explicit Skolem function. Suppose Priscilla can identify an explicit function $f$ such that$$\forall x . P(x) \Leftrightarrow Q(x,f(x))$$holds. Then it follows that Priscilla's claim is correct.This means that Priscilla can help us verify her claim by providing a function $f$ so that the ...
I'm pretty sure that you need your system to be designed in Promela, rather than Java. You then input the system in Promela, and the specification in LTL (or CTL) to SPIN, which outputs a C code for a model checker for the specific instance. |
Now showing items 1-10 of 26
Kaon femtoscopy in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV
(Elsevier, 2017-12-21)
We present the results of three-dimensional femtoscopic analyses for charged and neutral kaons recorded by ALICE in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV. Femtoscopy is used to measure the space-time ...
Anomalous evolution of the near-side jet peak shape in Pb-Pb collisions at $\sqrt{s_{\mathrm{NN}}}$ = 2.76 TeV
(American Physical Society, 2017-09-08)
The measurement of two-particle angular correlations is a powerful tool to study jet quenching in a $p_{\mathrm{T}}$ region inaccessible by direct jet identification. In these measurements pseudorapidity ($\Delta\eta$) and ...
Online data compression in the ALICE O$^2$ facility
(IOP, 2017)
The ALICE Collaboration and the ALICE O2 project have carried out detailed studies for a new online computing facility planned to be deployed for Run 3 of the Large Hadron Collider (LHC) at CERN. Some of the main aspects ...
Evolution of the longitudinal and azimuthal structure of the near-side peak in Pb–Pb collisions at $\sqrt{s_{\rm NN}}=2.76$ TeV
(American Physical Society, 2017-09-08)
In two-particle angular correlation measurements, jets give rise to a near-side peak, formed by particles associated to a higher $p_{\mathrm{T}}$ trigger particle. Measurements of these correlations as a function of ...
J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV
(American Physical Society, 2017-12-15)
We report a precise measurement of the J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector at the LHC. The J/$\psi$ mesons are reconstructed at mid-rapidity ($|y| < 0.9$) ...
Enhanced production of multi-strange hadrons in high-multiplicity proton-proton collisions
(Nature Publishing Group, 2017)
At sufficiently high temperature and energy density, nuclear matter undergoes a transition to a phase in which quarks and gluons are not confined: the quark–gluon plasma (QGP)1. Such an exotic state of strongly interacting ...
K$^{*}(892)^{0}$ and $\phi(1020)$ meson production at high transverse momentum in pp and Pb-Pb collisions at $\sqrt{s_\mathrm{NN}}$ = 2.76 TeV
(American Physical Society, 2017-06)
The production of K$^{*}(892)^{0}$ and $\phi(1020)$ mesons in proton-proton (pp) and lead-lead (Pb-Pb) collisions at $\sqrt{s_\mathrm{NN}} =$ 2.76 TeV has been analyzed using a high luminosity data sample accumulated in ...
Production of $\Sigma(1385)^{\pm}$ and $\Xi(1530)^{0}$ in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV
(Springer, 2017-06)
The transverse momentum distributions of the strange and double-strange hyperon resonances ($\Sigma(1385)^{\pm}$, $\Xi(1530)^{0}$) produced in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV were measured in the rapidity ...
Charged–particle multiplicities in proton–proton collisions at $\sqrt{s}=$ 0.9 to 8 TeV, with ALICE at the LHC
(Springer, 2017-01)
The ALICE Collaboration has carried out a detailed study of pseudorapidity densities and multiplicity distributions of primary charged particles produced in proton-proton collisions, at $\sqrt{s} =$ 0.9, 2.36, 2.76, 7 and ...
Energy dependence of forward-rapidity J/$\psi$ and $\psi(2S)$ production in pp collisions at the LHC
(Springer, 2017-06)
We present ALICE results on transverse momentum ($p_{\rm T}$) and rapidity ($y$) differential production cross sections, mean transverse momentum and mean transverse momentum square of inclusive J/$\psi$ and $\psi(2S)$ at ... |
Could anyone comment on the following ODE problem? Thank you.
Given a 2-d system in polar coordinates: $$\dot{r}=r+r^{5}-r^{3}(1+\sin^{2}\theta)$$ $$\dot{\theta}=1$$
Prove that there are at least two nonconstant periodic solutions to this system.
It's easy to prove that there is a nonconstant periodic solution using the Poincaré–Bendixson theorem, but I don't know how to prove the existence of two nonconstant periodic solutions.
According to Bennett McCallum [1], the quantity theory of money (QTM) is the macroeconomic observation that the economy obeys long run neutrality of money (it's not just
$MV = PY$). This implies supply and demand functions will be homogeneous of degree zero, i.e. ratios of $D$ to $S$ such that if $D \rightarrow \alpha D$ and $S \rightarrow \alpha S$ then $g(D,S) \rightarrow \alpha^{0} g(D,S) = g(D,S)$. The simplest differential equation consistent with this observation is
$$
\text{(1) } \frac{dD}{dS} = \frac{1}{\kappa}\; \frac{D}{S}
$$
We can identify the RHS with the price level $P$ (the ratio of NGDP to the money supply: the exchange rate for the marginal unit of AD for the marginal unit of AS should be proportional to the price level). The solution (for varying $D$ and $S$) is $D \sim S^{1/\kappa}$, or

$$
\text{(2) } P = \frac{1}{\kappa} S^{1/\kappa - 1}
$$
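As a quick symbolic check (my own sketch, not from the original post), one can verify that $D = c\, S^{1/\kappa}$ solves equation (1) and that the RHS reproduces equation (2) up to the integration constant $c$:

```python
import sympy as sp

S, c, kappa = sp.symbols('S c kappa', positive=True)
D = c * S**(1/kappa)                 # candidate solution D ~ S^(1/kappa)
lhs = sp.diff(D, S)                  # dD/dS
rhs = D / (kappa * S)                # (1/kappa) * D/S, identified with the price level P
print(sp.simplify(lhs - rhs))        # 0, so D solves equation (1)
print(sp.simplify(rhs))              # c*S**(1/kappa)/(kappa*S), i.e. (c/kappa) * S**(1/kappa - 1)
```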
If we take $D =$ NGDP and $S =$ MB, this equation does well over segments of the price level, with different values of $\kappa$ (which I'll just call the IT index for now):
If we let $\kappa$ become $\log S / \log D$ [2] then with $D, S \gg 1$ and $\alpha$ small (i.e. small changes in NGDP or MB), there is still approximate short run homogeneity of degree zero. Additionally with
$S, D \rightarrow \infty$ with $S/D$ finite, we have "long run" homogeneity of degree zero in a growing economy [3]. And it turns out the best fit to the local values of $\kappa$ is approximately $\log$ MB$/\log$ NGDP (measured in billions of dollars):
But why should we be satisfied with (1)? In Part II, we'll motivate the equation via information theory. In this Part I, we'll resort to one of my favorite topics from physics:
effective field theory.
A general homogeneous differential equation (of first order) is given by
$$
\frac{dD}{dS} = g(D/S)
$$
Where $g$ is an arbitrary function. This would capture any possible (first order) theory of supply and demand for money with long run neutrality. A Taylor expansion of $g$ around $D = S$ results in an equation of the form
$$
\text{(3) } \frac{dD}{dS} = c_0 + c_1 \frac{D}{S} + c_2 \left( \frac{D}{S} \right)^{2} + c_3 \left( \frac{D}{S} \right)^{3} + \cdots
$$
Note that higher order derivatives by themselves are not consistent with homogeneity. If we take
$D \rightarrow \alpha D$, $S \rightarrow \alpha S$ means that $d^{2}D/dS^{2} \rightarrow (1/\alpha)\, d^{2}D/dS^{2}$. Terms like $D\, d^{2}D/dS^{2}$ would be necessary, which we'll subsume into "generalized" second order terms.
The analogous construction in particle physics is an effective Lagrangian, e.g. for a real scalar field

$$
\text{(4) }\mathcal{L} = \partial_{\mu} \phi \partial^{\mu} \phi + m^{2} \phi^2 + g_1 \phi^{4} + g_2 \phi^{6} + \cdots
$$
where all terms we consider must be consistent with Lorentz symmetry (i.e. special relativity; this particular theory is also symmetric under charge conjugation); the resulting theory is guaranteed to be consistent with Lorentz invariance. This way of coming up with particle theories is called an effective field theory. Generally, one writes down every possible term consistent with the symmetries under consideration. Our process with equation (3) was analogous to writing down every term consistent with long run neutrality of money (analogous to a symmetry).
The higher order terms in field theory (4) represent higher order interactions (2, 4-particle, etc interactions with coupling constants $g_1$, $g_2$). They tend to be "suppressed" (in physics) because the coefficient has dimensions of mass and that mass is considered "heavy". The higher terms (degree > 2) in the Lagrangian represent vertices (interactions) in Feynman diagrams.
It is possible that the analogous terms in our long run neutrality of money (money invariance) theory (3) represent three or more party transactions which would be heavily suppressed by the existence of money (you'd likely trade apples for money and then get oranges with some of that money rather than work out some complicated contract between the three parties allocating money, oranges and apples). I.e. the higher order coefficients $c_2$, $c_3$, ... might be suppressed by factors of
$1/MB$ (!) where $MB$ is the size of the monetary base (how much money is out there).
The previous paragraph is just reasoning to justify taking $c_2$, $c_3$,
$\ldots = 0$ [4]. But the best reason to do so is that the model fits the empirical data! If we use $\kappa = \log S/\log D$, then equation (2) does an excellent job of describing the price level:
In Part II, we'll motivate (1) with information theory.
[1]
Long-Run Monetary Neutrality and Contemporary Policy Analysis, Bennett T. McCallum (2004)
[2] This prescription of
$\kappa = \log S/\log D$ is reminiscent of the beta function in quantum field theory; it is motivated by empirical evidence and the information theory in Part II because $\kappa$ is the ratio of the number of symbols used to describe $S$ to the number of symbols used to describe $D$. In an older post, I refer to it as the unit of account effect when used to describe the price level: the size of the money supply defines the unit of money in which aggregate demand is measured.
[3] Series expansions around
$\alpha \sim 0$ have a small coefficient for the linear term (if $S, D \gg 1$) and the limit as $S, D \rightarrow \infty$ is independent of $\alpha$.
[4] We'll take
$c_0$ to be zero, too. Although we should be careful. Einstein famously took the equivalent of $c_0$ (the cosmological constant) in general relativity to be non-zero in order to allow a steady-state universe. He later regretted that action, but more recent results show that it is not actually zero, but very very close.
@Mathphile I found no prime of the form $$n^{n+1}+(n+1)^{n+2}$$ for $n>392$ yet and neither a reason why the expression cannot be prime for odd n, although there are far more even cases without a known factor than odd cases.
@TheSimpliFire That´s what I´m thinking about, I had some "vague feeling" that there must be some elementary proof, so I decided to find it, and then I found it, it is really "too elementary", but I like surprises, if they´re good.
It is in fact difficult, I did not understand all the details either. But the ECM-method is analogue to the p-1-method which works well, then there is a factor p such that p-1 is smooth (has only small prime factors)
Brocard's problem is a problem in mathematics that asks to find integer values of n and m for which $n!+1=m^2$, where n! is the factorial. It was posed by Henri Brocard in a pair of articles in 1876 and 1885, and independently in 1913 by Srinivasa Ramanujan.== Brown numbers ==Pairs of the numbers (n, m) that solve Brocard's problem are called Brown numbers. There are only three known pairs of Brown numbers:(4,5), (5,11...
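A quick brute-force scan (added as an illustration; it only reproduces the known pairs and proves nothing):

```python
from math import factorial, isqrt

# Scan small n for n! + 1 being a perfect square; the known Brown pairs appear at n = 4, 5, 7.
for n in range(1, 200):
    m2 = factorial(n) + 1
    m = isqrt(m2)
    if m * m == m2:
        print(n, m)        # prints (4, 5), (5, 11), (7, 71)
```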
$\textbf{Corollary.}$ No solutions to Brocard's problem (with $n>10$) occur when $n$ that satisfies either \begin{equation}n!=[2\cdot 5^{2^k}-1\pmod{10^k}]^2-1\end{equation} or \begin{equation}n!=[2\cdot 16^{5^k}-1\pmod{10^k}]^2-1\end{equation} for a positive integer $k$. These are the OEIS sequences A224473 and A224474.
Proof: First, note that since $(10^k\pm1)^2-1\equiv((-1)^k\pm1)^2-1\equiv1\pm2(-1)^k\not\equiv0\pmod{11}$, $m\ne 10^k\pm1$ for $n>10$. If $k$ denotes the number of trailing zeros of $n!$, Legendre's formula implies that \begin{equation}k=\min\left\{\sum_{i=1}^\infty\left\lfloor\frac n{2^i}\right\rfloor,\sum_{i=1}^\infty\left\lfloor\frac n{5^i}\right\rfloor\right\}=\sum_{i=1}^\infty\left\lfloor\frac n{5^i}\right\rfloor\end{equation} where $\lfloor\cdot\rfloor$ denotes the floor function.
The upper limit can be replaced by $\lfloor\log_5n\rfloor$ since for $i>\lfloor\log_5n\rfloor$, $\left\lfloor\frac n{5^i}\right\rfloor=0$. An upper bound can be found using geometric series and the fact that $\lfloor x\rfloor\le x$: \begin{equation}k=\sum_{i=1}^{\lfloor\log_5n\rfloor}\left\lfloor\frac n{5^i}\right\rfloor\le\sum_{i=1}^{\lfloor\log_5n\rfloor}\frac n{5^i}=\frac n4\left(1-\frac1{5^{\lfloor\log_5n\rfloor}}\right)<\frac n4.\end{equation}
Thus $n!$ has $k$ zeroes for some $n\in(4k,\infty)$. Since $m=2\cdot5^{2^k}-1\pmod{10^k}$ and $2\cdot16^{5^k}-1\pmod{10^k}$ have at most $k$ digits, $m^2-1$ has only at most $2k$ digits by the conditions in the Corollary. The Corollary follows if $n!$ has more than $2k$ digits for $n>10$. From equation $(4)$, $n!$ has at least the same number of digits as $(4k)!$. Stirling's formula implies that \begin{equation}(4k)!>\frac{\sqrt{2\pi}\left(4k\right)^{4k+\frac{1}{2}}}{e^{4k}}\end{equation}
Since the number of digits of an integer $t$ is $1+\lfloor\log t\rfloor$ where $\log$ denotes the logarithm in base $10$, the number of digits of $n!$ is at least \begin{equation}1+\left\lfloor\log\left(\frac{\sqrt{2\pi}\left(4k\right)^{4k+\frac{1}{2}}}{e^{4k}}\right)\right\rfloor\ge\log\left(\frac{\sqrt{2\pi}\left(4k\right)^{4k+\frac{1}{2}}}{e^{4k}}\right).\end{equation}
Therefore it suffices to show that for $k\ge2$ (since $n>10$ and $k<n/4$), \begin{equation}\log\left(\frac{\sqrt{2\pi}\left(4k\right)^{4k+\frac{1}{2}}}{e^{4k}}\right)>2k\iff8\pi k\left(\frac{4k}e\right)^{8k}>10^{4k}\end{equation} which holds if and only if \begin{equation}\left(\frac{10}{\left(\frac{4k}e\right)}\right)^{4k}<8\pi k\iff k^2(8\pi k)^{\frac1{4k}}>\frac58e^2.\end{equation}
Now consider the function $f(x)=x^2(8\pi x)^{\frac1{4x}}$ over the domain $\Bbb R^+$, which is clearly positive there. Then after considerable algebra it is found that \begin{align*}f'(x)&=2x(8\pi x)^{\frac1{4x}}+\frac14(8\pi x)^{\frac1{4x}}(1-\ln(8\pi x))\\\implies f'(x)&=\frac{2f(x)}{x^2}\left(x-\frac18\ln(8\pi x)\right)>0\end{align*} for $x>0$ as $\min\{x-\frac18\ln(8\pi x)\}>0$ in the domain.
Thus $f$ is monotonically increasing in $(0,\infty)$, and since $2^2(8\pi\cdot2)^{\frac18}>\frac58e^2$, the inequality in equation $(8)$ holds. This means that the number of digits of $n!$ exceeds $2k$, proving the Corollary. $\square$
We get $n^n+3\equiv 0\pmod 4$ for odd $n$, so we can see from here that it is even (or, we could have used @TheSimpliFire's one-or-two-step method to derive this without any contradiction - which is better)
@TheSimpliFire Hey! with $4\pmod {10}$ and $0\pmod 4$ then this is the same as $10m_1+4$ and $4m_2$. If we set them equal to each other, we have that $5m_1=2(m_2-m_1)$ which means $m_1$ is even. We get $4\pmod {20}$ now :P
Yet again a conjecture!Motivated by Catalan's conjecture and a recent question of mine, I conjecture thatFor distinct, positive integers $a,b$, the only solution to this equation $$a^b-b^a=a+b\tag1$$ is $(a,b)=(2,5).$It is of anticipation that there will be much fewer solutions for incr... |
Trudy Mat. Inst. Steklov., 1976, Volume 142, Pages 254–266 (Mi tm2576) This article is cited in scientific papers (total in 3 4 papers) The Hamiltonian system connected with the equation $u_{\xi_\eta}+\sin u=0$ L. A. Takhtadzhyan , L. D. Faddeev Full text: PDF file (1081 kB) English version: Proceedings of the Steklov Institute of Mathematics, 1979, 142, 277–289 Bibliographic databases: UDC: 511 Citation: L. A. Takhtadzhyan, L. D. Faddeev, “The Hamiltonian system connected with the equation $u_{\xi_\eta}+\sin u=0$”, Number theory, mathematical analysis and their applications, A collection of articles dedicated to Ivan Matveev Vinogradov on the occasion of his eightieth birthday, Trudy Mat. Inst. Steklov., 142, 1976, 254–266; Proc. Steklov Inst. Math., 142 (1979), 277–289 Citation in format AMSBIB
\Bibitem{TakFad76}
\by L.~A.~Takhtadzhyan, L.~D.~Faddeev \paper The Hamiltonian system connected with the equation $u_{\xi_\eta}+\sin u=0$ \inbook Number theory, mathematical analysis and their applications \bookinfo A~collection of articles dedicated to Ivan Matveev Vinogradov on the occasion of his eightieth birthday \serial Trudy Mat. Inst. Steklov. \yr 1976 \vol 142 \pages 254--266 \mathnet{http://mi.mathnet.ru/tm2576} \mathscinet{http://www.ams.org/mathscinet-getitem?mr=650258} \zmath{https://zbmath.org/?q=an:0412.35065} \transl \jour Proc. Steklov Inst. Math. \yr 1979 \vol 142 \pages 277--289 Linking options: http://mi.mathnet.ru/eng/tm2576 http://mi.mathnet.ru/eng/tm/v142/p254 This publication is cited in the following articles: P. P. Kulish, S. A. Tsyplyaev, “Supersymmetric $\cos\Phi_2$ model and the inverse scattering technique”, Theoret. and Math. Phys., 46:2 (1981), 114–124; S. A. Tsyplyaev, “Commutation relations of the transition matrix in the classical and quantum inverse scattering methods (local case)”, Theoret. and Math. Phys., 48:1 (1981), 580–586; Yu. S. Osipov, A. A. Gonchar, S. P. Novikov, V. I. Arnol'd, G. I. Marchuk, P. P. Kulish, V. S. Vladimirov, E. F. Mishchenko, “Lyudvig Dmitrievich Faddeev (on his sixtieth birthday)”, Russian Math. Surveys, 50:3 (1995), 643–659; L. A. Takhtajan, A. Yu. Alekseev, I. Ya. Aref'eva, M. A. Semenov-Tian-Shansky, E. K. Sklyanin, F. A. Smirnov, S. L. Shatashvili, “Scientific heritage of L. D. Faddeev. Survey of papers”, Russian Math. Surveys, 72:6 (2017), 977–1081 |
In a paper by Joos and Zeh, Z Phys B 59 (1985) 223, they say:This 'coming into being of classical properties' appears related to what Heisenberg may have meant by his famous remark [7]: 'Die "Bahn" entsteht erst dadurch, dass wir sie beobachten.'Google Translate says this means something ...
@EmilioPisanty Tough call. It's technical language, so you wouldn't expect every German speaker to be able to provide a correct interpretation—it calls for someone who know how German is used in talking about quantum mechanics.
Litmus are a London-based space rock band formed in 2000 by Martin (bass guitar/vocals), Simon (guitar/vocals) and Ben (drums), joined the following year by Andy Thompson (keyboards, 2001–2007) and Anton (synths). Matt Thompson joined on synth (2002–2004), while Marek replaced Ben in 2003. Oli Mayne (keyboards) joined in 2008, then left in 2010, along with Anton. As of November 2012 the line-up is Martin Litmus (bass/vocals), Simon Fiddler (guitar/vocals), Marek Bublik (drums) and James Hodkinson (keyboards/effects). They are influenced by mid-1970s Hawkwind and Black Sabbath, amongst others.They...
@JohnRennie Well, they repeatedly stressed their model is "trust work time" where there are no fixed hours you have to be there, but unless the rest of my team are night owls like I am I will have to adapt ;)
I think u can get a rough estimate, COVFEFE is 7 characters, probability of a 7-character length string being exactly that is $(1/26)^7\approx 1.2\times 10^{-10}$ so I guess you would have to type approx a billion characters to start getting a good chance that COVFEFE appears.
@ooolb Consider the hyperbolic space $H^n$ with the standard metric. Compute $$\inf\left\{\left(\int u^{2n/(n-2)}\right)^{-(n-2)/n}\left(4\frac{n-1}{n-2}\int|\nabla u|^2+\int Ru^2\right): u\in C^\infty_c\setminus\{0\}, u\ge0\right\}$$
@BalarkaSen sorry if you were in our discord you would know
@ooolb It's unlikely to be $-\infty$ since $H^n$ has bounded geometry so Sobolev embedding works as expected. Construct a metric that blows up near infinity (incomplete is probably necessary) so that the inf is in fact $-\infty$.
@Sid Eating glamorous and expensive food on a regular basis and not as a necessity would mean you're embracing consumer fetish and capitalism, yes. That doesn't inherently prevent you from being a communist, but it does have an ironic implication.
@Sid Eh. I think there's plenty of room between "I think capitalism is a detrimental regime and think we could be better" and "I hate capitalism and will never go near anything associated with it", yet the former is still conceivably communist.
Then we can end up with people arguing in favor of "Communism" who distance themselves from, say, the USSR and red China, and people arguing in favor of "Capitalism" who distance themselves from, say, the US and the European Union.
since I come from a rock n' roll background, the first thing is that I prefer a tonal continuity. I don't like beats as much as I like a riff or something atmospheric (that's mostly why I don't like a lot of rap)
I think I liked Madvillany because it had nonstandard rhyming styles and Madlib's composition
Why is the graviton spin 2, beyond hand-waving? My sense is, you do the gravitational waves thing of reducing $R_{00} = 0$ to $g^{\mu \nu} g_{\rho \sigma,\mu \nu} = 0$ for a weak gravitational field in harmonic coordinates, with solution $g_{\mu \nu} = \varepsilon_{\mu \nu} e^{ikx} + \varepsilon_{\mu \nu}^* e^{-ikx}$, then magic? |
CS-92-56 Title Applying Database Dependency Theory to Software Engineering Authors D. Raymond and F.W. Tompa Abstract We describe the use of database dependency theory for investigating software designs. Dependency theory captures some of the essential constraints implicit in a system, and focuses attention on its update properties. The fundamental choice between redundancy and normalization is directly related to the issue of reuse. We show how dependency theory can be applied to the design of text editors and spreadsheet systems, and discuss its implications for object-oriented programming. Date December 1992 Report Applying Database Dependency Theory to Software Engineering (PDF) Compressed PostScript:
Applying Database Dependency Theory to Software Engineering (PS.Z)
CS-92-55 Title Partitioning a Chordal Graph Into Transitive Subgraphs for Parallel Sparse Triangular Solution Authors A. Pothen, B.W. Peyton and X. Yuan Date December 1992 Report README Partitioning a Chordal Graph Into Transitive Subgraphs for Parallel Sparse Triangular Solution (PDF) Compressed PostScript:
Partitioning a Chordal Graph Into Transitive Subgraphs for Parallel Sparse Triangular Solution (PS.Z)
CS-92-53 Title A Small-Domain Lower Bound for Parallel Maximum Computation Authors P. Ragde Abstract Recent work has shown that parallel algorithms that are sensitive to the size of the input domain can improve on more general parallel algorithms. A paper by Berkman et al in FOCS 1990 demonstrates an O(log log log s)-step algorithm on an n-processor CRCW PRAM for finding the prefix-maxima of n numbers in the range [1..s]. This paper proves a lower bound demonstrating that no algorithm is asymptotically faster as a function of s. Date November 1992 Report A Small-Domain Lower Bound for Parallel Maximum Computation (PDF) Compressed PostScript:
A Small-Domain Lower Bound for Parallel Maximum Computation (GZIP)
CS-92-52 Title The Stability of the Partitioned Inverse Approach to Parallel Sparse Triangular Authors A. Pothen and N.J. Higham Date October 1992 Report README The Stability of the Partitioned Inverse Approach to Parallel Sparse Triangular (PDF) Compressed PostScript:
The Stability of the Partitioned Inverse Approach to Parallel Sparse Triangular (PS.Z)
CS-92-51 Title Highly Parallel Sparse Triangular Solution Authors A. Pothen, F.L. Alvarado and R.S. Schreiber Date October 1992 Report README Highly Parallel Sparse Triangular Solution (PDF) Compressed PostScript:
Highly Parallel Sparse Triangular Solution (PS.Z)
CS-92-50 Title Solving Partial Constraint Satisfaction Problems Using Local Search and Abstraction Authors Qiang Yang and Philip W.L. Fong Abstract Partial constraint satisfaction problems (PCSPs) were proposed by Freuder and Wallace to address some of the representational difficulties with traditional constraint satisfaction techniques. However, the reasoning method of their proposal was limited to traditional backtracking based algorithms. In this paper, we extend the PCSP model by associating it with a local search algorithm, which has found great successes in solving many large scale problems in the past. Furthermore, we extend the combined model to incorporate abstract problem solving, and show that the extended model has not only the advantages of both PCSP and local search, but also a number of new features useful for scheduling applications. We demonstrate the feasibility of our approach by an application to a university course scheduling domain. Date September 1992 Report Solving Partial Constraint Satisfaction Problems Using Local Search and Abstraction (PDF)
CS-92-49 Title Speech Acts and Pragmatics in Sentence Generation Masters Thesis Authors Cameron Shelley Abstract A fundamental advance in recent theories about natural language pragmatics involves the realization that people use language not just to describe propositions, but also to perform actions. This idea can be taken as a given starting point for investigating the questions: how are linguistic actions, or {\it speech acts}, performed and understood? How far can descriptions, as locutions, be used as speech acts? What role does inference play in the performance and understanding of speech acts?
Many previous theories of speech acts have taken speech acts to be independent and primitive units of communication, implicit in, but separate from, description and inference. In this thesis, I argue for an alternative model of speech acts. I propose that speech acts can be explained by a combination of description and inference, without the requirement of separate conventions. This explanation relies instead on an account of explicit linguistic units, including clausal moods and performative verbs, in addition to the inferential mechanism provided by Gricean conversational implicature.
In addition to outlining a model for the description and comparison of speech acts, I present a small sentence generator that partially implements the model, and discuss how it can be enhanced in future. Also, I illustrate the relevance of this model for a current computational theory of syntactic style. My model of speech acts suggests how this computational stylistic theory can be extended into the areas of semantic style and lexical style.
Date September 1992 Report Speech Acts and Pragmatics in Sentence Generation Masters Thesis (PDF) Compressed PostScript:
Speech Acts and Pragmatics in Sentence Generation Masters Thesis (GZIP)
CS-92-45 Title Downward Refinement and the Efficiency of Hierarchical Problem Solving Authors Fahiem Bacchus and Qiang Yang Abstract Analysis and experiments have shown that hierarchical problem-solving is most effective when the hierarchy satisfies the downward refinement property “DRP”, whereby every abstract solution can be refined to a concrete-level solution without backtracking across abstraction levels. However, the DRP is a strong requirement that is not often met in practice. In this paper we examine the case when the DRP fails, and provide an analytical model of search complexity parameterized by the probability of an abstract solution being refinable. Our model provides a more accurate picture of the effectiveness of hierarchical problem-solving. We then formalize the DRP in Abstrips-style hierarchies, providing a syntactic test that can be applied to determine if a hierarchy satisfies the DRP. Finally, we describe an algorithm called Highpoint that we have developed. This algorithm builds on the Alpine algorithm of Knoblock in that it automatically generates abstraction hierarchies. However, it uses the theoretical tools we have developed to generate hierarchies superior to those generated by Alpine. This superiority is demonstrated empirically. Date September 1992 Report Downward Refinement and the Efficiency of Hierarchical Problem Solving (PDF)
CS-92-44 Title Anisotropic Mesh Transformations and Optimal Error Control Authors R. B. Simpson Date August 1992 Report Anisotropic Mesh Transformations and Optimal Error Control (PDF) Compressed PostScript:
Anisotropic Mesh Transformations and Optimal Error Control (PS.Z)
CS-92-42 Title Justified Plans and Ordered Hierarchies Authors Eugene Fink Abstract The use of abstraction in problem solving is an effective approach to reducing search, but finding good abstractions is a difficult problem. The first attempt to automatically generate a hierarchy of abstraction spaces was made by Sacerdoti in 1974. In 1990 Knoblock built the system ALPINE, which completely automates the formation of a hierarchy by abstracting preconditions of operators. To formalize his method, Knoblock introduced the notion of ordered abstraction hierarchies, in an attempt to capture the intuition behind "good" hierarchies.
In this thesis we continue the work started by Knoblock. We present further formalization of several important notions of abstract planning and describe methods to increase the number of abstraction levels without violating the ordered property of a hierarchy. We start by defining the justification of a non-linear plan. Justification captures the intuition behind "good" plans, which do not contain useless actions. We introduce several kinds of justification, and describe algorithms that find different justifications of a given plan by removing useless operators. We prove that the task to find the "best possible" justification is NP-complete.
The notion of justified plans leads us to define several kinds of semi-ordered abstraction hierarchies, which preserve the "good" properties of Knoblock's ordered hierarchies, but may have more abstraction levels. Finally, we present an algorithm for automatically abstracting not only preconditions but also effects of operators. This algorithm generates hierarchies with more levels of abstraction than ALPINE, and may increase the efficiency of planning in many problem domains. The algorithm may generate both problem-independent and problem-specific hierarchies.
Date August 1992 Report Justified Plans and Ordered Hierarchies (PDF) Compressed PostScript:
Justified Plans and Ordered Hierarchies (PS.Z)
CS-92-34 Title An Evaluation of the Temporal Coherence Heuristic for Nonlinear Planning Authors Cheryl Murray and Qiang Yang Abstract This paper presents an evaluation of a heuristic for partial-order planning, known as temporal coherence. The temporal coherence heuristic was proposed by Drummond and Currie as a method to improve the efficiency of partial-order planning without losing the ability to find a solution (i.e. completeness). It works by using a set of domain constraints to prune away plans that do not "make sense," or are temporally incoherent. Our analysis shows that, while intuitively appealing, temporal coherence can only be applied to a very specific implementation of a partial-order planner and still maintain completeness. Furthermore, the heuristic does not always improve planning efficiency; in some cases, its application can actually degrade the efficiency of planning dramatically. To understand when the heuristic will work well, we conducted complexity analysis and empirical tests. Our results show that temporal coherence works well when strong domain constraints exist that significantly reduce the search space, when the number of subgoals is small, when the plan size is not too large and when it is inexpensive to check each domain constraint. Date June 1992 Report An Evaluation of the Temporal Coherence Heuristic for Nonlinear Planning (PDF) Compressed PostScript:
An Evaluation of the Temporal Coherence Heuristic for Nonlinear Planning (GZIP)
CS-92-18 Title A Sublinear Space, Polynomial Time Algorithm for Directed $s$-$t$ Connectivity Authors Greg Barnes, Jonathon F. Buss, Walter L. Ruzzo and Baruch Schieber Abstract Directed $s$-$t$ connectivity is the problem of detecting whether there is a path from vertex $s$ to vertex $t$ in a directed graph. We present the first known deterministic sublinear space, polynomial time algorithm for directed $s$-$t$ connectivity. For $n$-vertex graphs, the algorithm can use as little as $n/2^{\Theta(\sqrt{\log n})}$ space while still running in polynomial time. Date September 1993 Report A Sublinear Space, Polynomial Time Algorithm for Directed $s$-$t$ Connectivity (PDF) A Sublinear Space, Polynomial Time Algorithm for Directed $s$-$t$ Connectivity (PS.Z) |
Hello, I've never ventured into chat before but cfr suggested that I ask in here about a better name for the quiz package that I am getting ready to submit to ctan (tex.stackexchange.com/questions/393309/…). Is something like latex2quiz too audacious?
Also, is anyone able to answer my questions about submitting to ctan, in particular about the format of the zip file and putting a configuration file in $TEXMFLOCAL/scripts/mathquiz/mathquizrc
Thanks. I'll email first but it sounds like a flat file with a TDS included in the right approach. (There are about 10 files for the package proper and the rest are for the documentation -- all of the images in the manual are auto-generated from "example" source files. The zip file is also auto generated so there's no packaging overhead...)
@Bubaya I think luatex has a command to force “cramped style”, which might solve the problem. Alternatively, you can lower the exponent a bit with f^{\raisebox{-1pt}{$\scriptstyle(m)$}} (modify the -1pt if need be).
@Bubaya (gotta go now, no time for followups on this one …)
@egreg @DavidCarlisle I already tried to avoid ascenders. Consider this MWE:
\documentclass[10pt]{scrartcl}\usepackage{lmodern}\usepackage{amsfonts}\begin{document}\noindentIf all indices are even, then all $\gamma_{i,i\pm1}=1$.In this case the $\partial$-elementary symmetric polynomialsspecialise to those from at $\gamma_{i,i\pm1}=1$,which we recognise at the ordinary elementary symmetric polynomials $\varepsilon^{(n)}_m$.The induction formula from indeed gives\end{document}
@PauloCereda -- okay. poke away. (by the way, do you know anything about glossaries? i'm having trouble forcing a "glossary" that is really an index, and should have been entered that way, into the required series style.)
@JosephWright I'd forgotten all about it but every couple of months it sends me an email saying I'm missing out. Oddly enough facebook and linked in do the same, as did research gate before I spam filtered RG:-)
@DavidCarlisle Regarding github.com/ho-tex/hyperref/issues/37, do you think that \textNFSSnoboundary would be okay as name? I don't want to use the suggested \textPUnoboundary as there is a similar definition in pdfx/l8uenc.def. And textnoboundary isn't imho good either, as it is more or less only an internal definition and not meant for users.
@UlrikeFischer I think it should be OK to use @, I just looked at puenc.def and for example \DeclareTextCompositeCommand{\b}{PU}{\@empty}{\textmacronbelow}% so @ needs to be safe
@UlrikeFischer that said I'm not sure it needs to be an encoding specific command, if it is only used as \let\noboundary\zzznoboundary when you know the PU encoding is going to be in force, it could just be \def\zzznoboundary{..} couldn't it?
@DavidCarlisle But puarenc.def is actually only an extension of puenc.def, so it is quite possible to do \usepackage[unicode]{hyperref}\input{puarenc.def}. And while I used a lot @ in the chess encodings, since I saw you do \input{tuenc.def} in an example I'm not sure if it was a good idea ...
@JosephWright it seems to be the day for merge commits in pull requests. Does github's "squash and merge" make it all into a single commit anyway so the multiple commits in the PR don't matter or should I be doing the cherry picking stuff (not that the git history is so important here) github.com/ho-tex/hyperref/pull/45 (@UlrikeFischer)
@JosephWright I really think I should drop all the generation of README and ChangeLog in html and pdf versions. It failed there as the xslt is version 1 and I've just upgraded to a version 3 engine, and it's dropped 1.0 compatibility:-)
$$x = c_1\cos (t) + c_2\sin (t) $$ is a two-parameter family of solutions of the second-order DE $$x'' + x = 0. $$
Find a solution of the second-order IVP consisting of this differential equation and the given initial conditions:
$$x\left(\frac{\pi}{6}\right) = \frac{1}{2} $$
and
$$x'\left(\frac{\pi}{6}\right) = 0 $$
I found that:
$$x'(t) = - c_1\sin (t) + c_2\cos (t) $$
and then I solved for $c_2$
$$c_2 = 1 - c_1\sqrt 3 $$
and plugged $c_2$ into $$x'\left(\frac{\pi}{6}\right) = 0 $$
where
$$x'\left(\frac{\pi}{6}\right) = - c_1\sin (t) + (1 - c_1\sqrt 3 )\cos (t) = 0 $$
and $$ = \frac{1}{2}c_1 + 1 - c_1\frac{{\sqrt 3 }}{2} = 0 $$
and finally got as far as
$$c_1 = \frac{{(\frac{1}{2} - \frac{{\sqrt 3 }}{2})}}{{\frac{1}{2} - \frac{{\sqrt 3 }}{2}}} = \frac{{ - 1}}{{\frac{1}{2} - \frac{{\sqrt 3 }}{2}}} = $$
However the book's solutions guide came up with
$$c_1 = \frac{{\sqrt 3 }}{4},c_2 = \frac{1}{4} $$
from
$$\frac{{\sqrt 3 }}{2}c_1 + \frac{1}{2}c_2 = \frac{1}{2} $$
and
$$ - \frac{1}{2}c_1 + \frac{{\sqrt 3 }}{2}c_2 = 0 $$
I just can't follow their logic, so any help would be appreciated. |
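(Added note, not part of the original question: a quick numerical check, with names I chose, confirming that the book's values do satisfy both initial conditions.)

from math import sin, cos, sqrt, pi, isclose

c1, c2 = sqrt(3) / 4, 1 / 4           # the book's values
x  = lambda t:  c1 * cos(t) + c2 * sin(t)
xp = lambda t: -c1 * sin(t) + c2 * cos(t)

print(isclose(x(pi / 6), 1 / 2))                 # True: x(pi/6) = 1/2
print(isclose(xp(pi / 6), 0, abs_tol=1e-12))     # True: x'(pi/6) = 0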
Definition:Differential/Functional Definition
Let $J \sqbrk y$ be a differentiable functional.
Let $h$ be an increment of the independent variable $y$.
Then the term linear with respect to $h$ is called the differential of the functional $J$, and is denoted by $\delta J \sqbrk {y; h}$. Notes For a differentiable functional it holds that: $\Delta J \sqbrk {y;h} = \phi \sqbrk {y; h} + \epsilon \size h$
where $\phi$ is linear with respect to $h$.
Thus:
$\delta J \sqbrk {y; h} = \phi \sqbrk {y; h}$
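Example (an illustrative addition, not part of the source page): take $J \sqbrk y = \int_a^b (y')^2 \, \mathrm d x$. Then:
$\Delta J \sqbrk {y; h} = \int_a^b 2 y' h' \, \mathrm d x + \int_a^b (h')^2 \, \mathrm d x$
The first term is linear in $h$ while the second is of higher order in $\size h$, so:
$\delta J \sqbrk {y; h} = \phi \sqbrk {y; h} = \int_a^b 2 y' h' \, \mathrm d x$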
Also known as
The differential $\delta J \sqbrk {y; h}$ is also known as the (first) variation. |
Suppose I made a tag and it is used by many people every day, so will it increase my reputation? And also, suppose no one uses it even once for a long time, i.e. 6 months, then what?
Maybe it could be called book-errata? I cannot give many examples of discussions from math.SE offhand but at least one example from here Find limit of unknown function This is an example from a different forum: http://www.sosmath.com/CBB/viewtopic.php?p=181367 I guess questions (and answers) r...
"Math-review is a tag for questions concerning troubles with a text one is reading. Typos and interpretation issues are also pertinent. Please be sure to mention the source you are using and to make quotation to make your question more precise "
I created a new proposal at area 51 its called Math Review (http://area51.stackexchange.com/proposals/90443/math-review) and it is intended to be a Q&A site concerning troubles one find while reading text books and articles. Most of them are bad typos, but not always, for instance I recently fou...
In the definition of martingales, one finds in Stroock and Varadhan (Multidimensional Diffusion processes - page 20) the strange request that it be a right-continuous process. However no such requirement is made in the wiki https://en.wikipedia.org/wiki/Martingale_%28probability_theory%29 no...
In the book Multidimensional diffusion processes, of Stroock and Varadhan one reads (page 23): This is the proof of $(i)$. Here the authors say Define $f_t$ on $(\{\tau \leq t\}, \mathcal{F}_t [\{\tau \leq t\}])$ What is the $\sigma$-algebra $\mathcal{F}_t [\{\tau \leq t\}]$? Is it $\{...
This is a meta meta question on the principles by which we should decide which tags to create. The motivation for the question stems from this meta discussion, in which a majority thought that there shouldn't be an errata tag, whereas a significant minority of $5$ upvoted my answer saying there s...
|
This is something of a partial idea, a work in progress. Let's say there is some factor of production $M$ allocated across $p$ different firms. The $p$-volume bounded by this budget constraint is:
$$
V = \frac{M^{p}}{p!}
$$
p-volume bounded by budget constraint M
Let's say total output $N$ is proportional to the volume $V$. Take the logarithm of the volume expression
$$
\log V = p \log M - \log p!
$$
and use Stirling's approximation for a large number of firms:
$$
\log V = p \log M - p \log p + p
$$
If we assume $V \sim e^{\nu t}$ and $M \sim e^{\mu t}$, take the (logarithmic) derivative (continuously compounded rate of change), and re-arrange a bit:
$$
\nu = \left( p + \left( t - \frac{\log p}{\mu} \right) \frac{dp}{dt} \right) \mu
$$
Now let's take $p \sim e^{\pi t}$ and re-arrange a bit more:
$$
\text{(1) }\; \nu = p \left( 1 + \left(1 - \frac{\pi}{\mu} \right) \pi t \right) \mu
$$
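As a quick check of equation (1) (my own addition; the symbols mirror the ones above, with $M$ and $p$ taken as pure exponentials), a short sympy script confirms that the derivative of $\log V$ matches the right-hand side:

import sympy as sp

t, mu, pi_ = sp.symbols('t mu pi', positive=True)

M = sp.exp(mu * t)    # factor of production, M ~ e^{mu t}
p = sp.exp(pi_ * t)   # number of firms, p ~ e^{pi t}

# Stirling-approximated log-volume: log V = p log M - p log p + p
logV = p * sp.log(M) - p * sp.log(p) + p

nu = sp.diff(logV, t)                              # growth rate of V
rhs = p * (1 + (1 - pi_ / mu) * pi_ * t) * mu      # equation (1)

print(sp.simplify(nu - rhs))   # expected output: 0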
In the information equilibrium model, for exponentially growing functions with growth rates $a$ and $b$ we have the relationship (see e.g. here)
$$
a = k b
$$
where $k$ is the information transfer index. So in equation (1) we can identify the IT index
$$
k \equiv p \left( 1 + \left(1 - \frac{\pi}{\mu} \right) \pi t \right)
$$
In a sense, we have shown one way in which the information equilibrium condition with $k \neq 1$ can manifest itself. For short time scales $\pi t \ll 1$, we can say $p \approx p_{0} (1 + \pi t)$ and:
$$
k \approx p_{0} \left( 1 + \pi t + \left(1 - \frac{\pi}{\mu} \right) \pi t \right)
$$
This is an interesting expression. If $\pi > 2 \mu$ then the IT index falls. That is to say, if the number of firms grows more than twice as fast as the factor of production, then the IT index falls. Is this the beginning of an explanation for falling growth with regard to secular stagnation? I'm not so sure. |
Suppose we have $75$ boxes that are labeled from $1$ to $75$ and that in each box there is at least one ball, but there are not more than $125$ balls total. I'm trying to find the largest number $n \in \left\{1,\, \ldots,\, 75 \right\}$ such that this statement is true: There is a collection of neighbouring boxes that contain exactly $n$ balls together. With neighbouring boxes I mean that their labels have to be consecutive.
I hope it's clear what I mean. We can rephrase the statement like this: $1 \leq x_i$, $1 \leq i \leq 75$ and $\sum_{i=1}^{75} x_i \leq 125$. Then there is a sequence $x_i,\, x_{i+1},\, \ldots,\, x_{i+j}$ such that $n=x_i + x_{i+1} + \ldots + x_{i+j}$ for some $j$.
The pigeonhole principle allows an easy proof of the statement for $n \leq 24$ that doesn't work for $n \geq 25$. I tried to apply the following algorithm to create simple counter-examples: To prevent the first $n$ boxes from contributing to a collection, we need $n+1$ balls in the $n$th box. Then we do the same with the following boxes, starting at $n+1$ and so on.
The algorithm only works for $33 \leq n \leq 49$. For smaller $n$, we would need two boxes with $n+1$ balls, but since we have at most $125$ balls altogether and each box contains at least one ball, there are not enough spare balls. For $50 \leq n \leq 75$, there are not even enough balls to put $n+1$ balls in any box.
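(Aside, my own addition for experimentation — not part of the original question: a small brute-force helper, with names I made up, that tests whether a given assignment of balls admits a block of neighbouring boxes summing to exactly $n$.)

def has_consecutive_sum(balls, n):
    # balls[i] = number of balls in box i+1; scan all consecutive runs
    for start in range(len(balls)):
        total = 0
        for end in range(start, len(balls)):
            total += balls[end]
            if total == n:
                return True
            if total > n:       # balls are positive, so longer runs only grow
                break
    return False

# Example: the construction described above for n = 40
balls = [1] * 75
balls[39] = 41                            # box 40 gets n + 1 balls
print(sum(balls) <= 125)                  # True: stays within the ball budget
print(has_consecutive_sum(balls, 40))     # False: no block sums to exactly 40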
If the algorithm I mentioned above is optimal, then the statement is true for all $n$, except those between $33$ and $49$. But if that's true, how can I prove it for $25 \leq n \leq 32$ and $50 \leq n \leq 75$? |
Parametric 3D plot with parameters satisfying an implicit equation
I am trying to plot a function $ {\bf x}(t) = r(t)\hat{v} + {\bf b}(t) $, with unit vector $ \hat{v} $ and vector $ {\bf b}(t) $ given in spherical coordinates. For simplicity, assume that $ \hat{v} = (\sin\theta\cos\phi, \sin\theta\sin\phi, \cos\theta) $, and $ {\bf b}(t) = [t, \sin(a+t), \cos(b + at^2)] $, where $ a $ and $ b $ are constants. This is easy to plot, however, I want the angles $ \theta $ and $ \phi $ to satisfy an implicit equation, say, $ \cos\theta + a(\phi + b t)^2 + c = 0 $. |
Consider a tensor $T\in\mathbb{R}^{N\times N\times N\times M}$ and two vectors $x,y\in\mathbb{R}^N$. I want to compute the $N\times M$ matrix defined by $X_{ij}=\operatorname{tr}(x^\top T_{:,:,i,j}y)=\operatorname{tr}_{12}(yx^\top T)$ efficiently.
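(For reference — my own addition, not part of the original question — the quantity is the contraction $X_{ij}=\sum_{a,b} x_a T_{abij} y_b$; a small NumPy sketch with made-up sizes illustrates what is being computed:)

import numpy as np

N, M = 40, 3                      # small illustrative sizes
x = np.random.rand(N)
y = np.random.rand(N)
T = np.random.rand(N, N, N, M)

# X[i, j] = sum over a, b of x[a] * T[a, b, i, j] * y[b]
X = np.einsum('a,abij,b->ij', x, T, y)
print(X.shape)                    # (40, 3)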
I tried this in two different ways:
TCtable[x_, T_, y_] :=
  ParallelTable[
    Module[{Tslice},
      Tslice = T[[;; , ;; , i, j]];
      Tr[x\[Transpose].Tslice.y]],
    {i, 1, Length[x]}, {j, 1, Last[Dimensions[T]]}];

TCtrace[x_, T_, y_] := TensorContract[y.x\[Transpose].T, {1, 2}];
My tensors have the very nice property that $T_{:,:,i,j}$ is
very sparse for all $i,j$ (so I am representing my tensor in Mathematica as a sparse array).
With $N=500,M=3$ the parallel table method takes about 1 second, while the explicit tensor multiplication and partial trace takes about 20 seconds. Are there other clever ways to speed this up? I want to compute this for many different tensors and vectors of the same size so if there's a way to compile the code or amortize the complexity that would also be great! |
I'm a bit confused by all these notions. Let $E$ be a normed vector space of infinite dimension (also Banach, but it's probably not important).
The Eberlein–Šmulian theorem says that: every bounded sequence has a subsequence that converges weakly $\iff$ the space is reflexive.
(In fact it just says implication, but the converse is also true)... anyway.
Q1) Does it mean that if $E$ is reflexive, then despite the fact that the weak topology is not metrizable, the property
$C\subset E$ is compact $\iff$ $C$ is sequentially compact
hold? Because in reflexive spaces, $\{x\in E\mid \|x\|\leq 1\}$ is weakly compact. The Eberlein–Šmulian theorem says that
$\{x\in E\mid \|x\|\leq 1\}$ is compact $\iff$ it's sequentially compact.
Can this be generalized for any compact $C$ ?
Q2) If $E$ is separable, we know that $\{x\in E\mid \|x\|\leq 1\}$ is metrizable for the weak topology. In particular, can we conclude from this that
If $E$ is separable, a set $C\subset E$ is compact $\iff$ it's sequentially compact. |