I have just uploaded to the arXiv my paper "A Maclaurin type inequality". This paper concerns a variant of the Maclaurin inequality for the elementary symmetric means

$$s_k = s_k(y_1,\dots,y_n) := \frac{1}{\binom{n}{k}} \sum_{1 \le i_1 < \dots < i_k \le n} y_{i_1} \cdots y_{i_k}$$

of $n$ real numbers $y_1,\dots,y_n$. This inequality asserts that

$$s_l^{1/l} \le s_k^{1/k}$$

whenever $1 \le k \le l \le n$ and the $y_1,\dots,y_n$ are all non-negative. It can be proven as a consequence of the Newton inequality

$$s_{k-1} s_{k+1} \le s_k^2,$$

valid for all $1 \le k \le n-1$ and arbitrary real $y_1,\dots,y_n$ (in particular, here the $y_i$ are allowed to be negative). Note that the $n=2$, $k=1$ case of this inequality is just the arithmetic mean-geometric mean inequality

$$y_1 y_2 \le \left(\frac{y_1+y_2}{2}\right)^2.$$

The general case of this inequality can be deduced from this special case by a number of standard manipulations (the most non-obvious of which is the operation of differentiating the real-rooted polynomial $\prod_{i=1}^n (z - y_i)$ to obtain another real-rooted polynomial, thanks to Rolle's theorem; the key point is that this operation preserves all the elementary symmetric means up to $s_{n-1}$). One can think of Maclaurin's inequality as providing a refined version of the arithmetic mean-geometric mean inequality on $n$ variables (which corresponds to the case $k=1$, $l=n$).

Whereas Newton's inequality works for arbitrary real $y_i$, the Maclaurin inequality breaks down once one or more of the $y_i$ are permitted to be negative. A key example occurs when $n$ is even, half of the $y_i$ are equal to $+1$, and the other half are equal to $-1$. Here, one can verify that the elementary symmetric means $s_k$ vanish for odd $k$ and are equal to $(-1)^{k/2} \binom{n/2}{k/2} / \binom{n}{k}$ for even $k$. In particular, some routine estimation then gives the order of magnitude bound

$$|s_k|^{1/k} \asymp \left(\frac{k}{n}\right)^{1/2} \qquad (1)$$

for $k$ even, thus giving a significant violation of the Maclaurin inequality even after putting absolute values around the $s_k$. In particular, vanishing of one $s_k$ does not imply vanishing of all subsequent $s_l$. On the other hand, it was observed by Gopalan and Yehudayoff that if two consecutive values $s_k, s_{k+1}$ are small, then this makes all subsequent values $s_l$ small as well. More precise versions of this statement were subsequently observed by Meka-Reingold-Tal and Doron-Hatami-Hoza, who obtained estimates of the shape

$$|s_l|^{1/l} \ll l^{1/2} \max\left(|s_k|^{1/k}, |s_{k+1}|^{1/(k+1)}\right) \qquad (2)$$

whenever $1 \le k \le l \le n$ and $y_1,\dots,y_n$ are real (but possibly negative). For instance, setting $l = n$ we obtain the inequality

$$|s_n|^{1/n} \ll n^{1/2} \max\left(|s_k|^{1/k}, |s_{k+1}|^{1/(k+1)}\right),$$

which can be established by combining the arithmetic mean-geometric mean inequality

$$|s_n| \le \left(\frac{y_1^2 + \dots + y_n^2}{n}\right)^{n/2}$$

with suitable bounds relating the right-hand side to $s_k$ and $s_{k+1}$. As with the proof of the Newton inequalities, the general case of (2) can be obtained from this special case after some standard manipulations (including the differentiation operation mentioned previously). However, if one inspects the bound (2) against the bounds (1) given by the key example, we see a mismatch – the right-hand side of (2) is larger than the left-hand side by a factor of about $k^{1/2}$. The main result of the paper rectifies this by establishing the optimal (up to constants) improvement

$$|s_l|^{1/l} \ll \left(\frac{l}{k}\right)^{1/2} \max\left(|s_k|^{1/k}, |s_{k+1}|^{1/(k+1)}\right) \qquad (3)$$

of the previous bound. This answers a question posed on MathOverflow.

Unlike the previous arguments, we do not rely primarily on the arithmetic mean-geometric mean inequality. Instead, our primary tool is a new inequality

$$\sum_{k=0}^n \binom{n}{k} |s_k| r^k \ge \left(\prod_{i=1}^n (1 + r^2 y_i^2)\right)^{1/2}, \qquad (4)$$

valid for all $r > 0$. Roughly speaking, the bound (3) would follow from (4) by setting $r$ appropriately, provided that we can show that the terms of the left-hand side with indices near $k$ dominate the sum in this regime. This can be done, after a technical step of passing to tuples which nearly optimize the required inequality (3). We sketch the proof of the inequality (4) as follows. One can use some standard manipulations to reduce to the case $r = 1$; after replacing each $y_i$ with $r y_i$, one is now left with establishing the inequality

$$\sum_{k=0}^n \binom{n}{k} |s_k| \ge \left(\prod_{i=1}^n (1 + y_i^2)\right)^{1/2}.$$

Note that equality is attained in the previously discussed example with half of the $y_i$ equal to $+1$ and the other half equal to $-1$, thanks to the binomial theorem. To prove this inequality, we consider the polynomial

$$P(z) := \prod_{i=1}^n (1 + y_i z) = \sum_{k=0}^n \binom{n}{k} s_k z^k.$$

Evaluating this polynomial at $z = \sqrt{-1}$, taking absolute values, using the triangle inequality, and then taking logarithms, we conclude that

$$\log \sum_{k=0}^n \binom{n}{k} |s_k| \ge \frac{1}{2} \sum_{i=1}^n \log(1 + y_i^2).$$

A convexity argument then gives the required lower bound.
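For readers who want to experiment, here is a minimal numerical sketch (the function name `sym_means` is my own, not from the paper) that spot-checks the Maclaurin chain $s_1 \ge s_2^{1/2} \ge \dots \ge s_n^{1/n}$ for non-negative inputs, and the Newton inequality $s_{k-1} s_{k+1} \le s_k^2$ for inputs of arbitrary sign:

```python
import random
from math import comb

def sym_means(y):
    """Return [s_1, ..., s_n], the normalized elementary symmetric means of y."""
    n = len(y)
    coeffs = [1.0]  # coefficients of prod_i (1 + y_i z), built up factor by factor
    for yi in y:
        new = coeffs + [0.0]
        for j in range(len(coeffs)):
            new[j + 1] += coeffs[j] * yi
        coeffs = new
    return [coeffs[k] / comb(n, k) for k in range(1, n + 1)]

random.seed(1)
for _ in range(200):
    # Maclaurin: requires non-negative entries.
    y = [random.uniform(0, 3) for _ in range(6)]
    s = sym_means(y)
    powers = [s[k] ** (1.0 / (k + 1)) for k in range(len(s))]
    assert all(a >= b - 1e-9 for a, b in zip(powers, powers[1:]))

    # Newton: holds for arbitrary real entries (s_0 = 1 prepended).
    z = [random.uniform(-3, 3) for _ in range(6)]
    t = [1.0] + sym_means(z)
    assert all(t[k - 1] * t[k + 1] <= t[k] ** 2 + 1e-9 for k in range(1, len(t) - 1))
```

This is only a sanity check on small random instances, of course, not a proof of either inequality.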
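The key example (half of the $y_i$ equal to $+1$, half equal to $-1$) can also be checked directly for small $n$. The helper below is illustrative (names are mine): it computes $s_k$ exactly with rational arithmetic and confirms that the odd means vanish while the even ones match the closed form $(-1)^{k/2}\binom{n/2}{k/2}/\binom{n}{k}$:

```python
from fractions import Fraction
from itertools import combinations
from math import comb

def s(y, k):
    """Elementary symmetric mean s_k(y) = e_k(y) / C(n, k), computed exactly."""
    total = Fraction(0)
    for idx in combinations(range(len(y)), k):
        p = Fraction(1)
        for i in idx:
            p *= y[i]
        total += p
    return total / comb(len(y), k)

n = 8
y = [1] * (n // 2) + [-1] * (n // 2)  # the key example: half +1, half -1
for k in range(1, n + 1):
    if k % 2 == 1:
        assert s(y, k) == 0  # odd symmetric means vanish
    else:
        expected = Fraction((-1) ** (k // 2) * comb(n // 2, k // 2), comb(n, k))
        assert s(y, k) == expected  # even means match the closed form
```

(The brute-force sum over subsets is exponential in $n$, so this is only suitable for small examples.)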
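As a sanity check on the key inequality, in the form paraphrased above — $\sum_{k=0}^n \binom{n}{k}|s_k| r^k \ge \prod_{i=1}^n (1+r^2 y_i^2)^{1/2}$, where $\binom{n}{k} s_k = e_k$ is the $k$-th elementary symmetric polynomial; the paper's precise statement may differ — here is a short numerical experiment (helper names are my own) testing it on random real inputs and verifying the equality case in the $\pm 1$ example:

```python
import random
from math import prod

def elem_sym(y):
    """Coefficients e_0, ..., e_n of prod_i (1 + y_i z)."""
    coeffs = [1.0]
    for yi in y:
        new = coeffs + [0.0]
        for j in range(len(coeffs)):
            new[j + 1] += coeffs[j] * yi
        coeffs = new
    return coeffs

random.seed(0)
for _ in range(100):
    y = [random.uniform(-2, 2) for _ in range(6)]
    r = random.uniform(0.1, 3)
    lhs = sum(abs(e) * r**k for k, e in enumerate(elem_sym(y)))
    rhs = prod(1 + r**2 * yi**2 for yi in y) ** 0.5
    assert lhs >= rhs - 1e-9  # the sum dominates the half-power product

# Equality in the half +1 / half -1 example (here with r = 1), via the
# binomial theorem: both sides equal 2^(n/2).
y = [1.0, 1.0, 1.0, -1.0, -1.0, -1.0]
r = 1.0
lhs = sum(abs(e) * r**k for k, e in enumerate(elem_sym(y)))
rhs = prod(1 + r**2 * yi**2 for yi in y) ** 0.5
assert abs(lhs - rhs) < 1e-9
```

An inequality of this shape does follow from the polynomial argument sketched above: evaluating $\prod_i (1+y_i z)$ at $z = \sqrt{-1}\,r$ and applying the triangle inequality.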