Welcome to the mathematics section of the Wikipedia reference desk.
August 2

Conditional probability (please check my math)

As measured by recent Polymarket betting odds, Kamala Harris has a 45% chance of winning the POTUS election (electoral vote)[1] and a 70% chance of winning the popular vote.[2] Let's ignore edge cases like third-party candidates and electoral ties. Also, because of how EVs are distributed between red and blue states, while it's theoretically possible for Trump to win the PV and lose the EV, it's unlikely in practice, so let's assign that a probability of 0 (Polymarket puts it around 1%). So the event "Harris wins EV" is a subset of "Harris wins PV".

Let HP = the event "Harris wins PV", HE = Harris wins EV, TP = Trump wins PV, TE = Trump wins EV.

So (abusing notation) HP=70% is split into 45% inside HE and 25% in TE. TE itself is 55%.

Does this mean that Pr[HP|TE], the probability of Harris winning the popular vote conditional on Trump winning the electoral vote, is .25/.55 = about .45? Does that sound realistic in terms of recent US politics? I guess we saw it happen in 2016 and in 2008. Not sure about 2012. Obviously whether this possibility is a good one or bad one is subjective and political and is not being asked here. The .45 probability is higher than I would have guessed.

2602:243:2008:8BB0:F494:276C:D59A:C992 (talk) 03:21, 2 August 2024 (UTC)Reply
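The conditional-probability arithmetic above can be checked with a short Python sketch; the only inputs are the question's own numbers:

```python
# Probabilities taken from the question (Polymarket, early August 2024).
p_HP = 0.70            # Harris wins the popular vote
p_HE = 0.45            # Harris wins the electoral vote
p_TE = 1 - p_HE        # Trump wins the electoral vote (edge cases ignored)

# Since HE is assumed to be a subset of HP, the part of HP outside HE
# is exactly the overlap of HP with TE.
p_HP_and_TE = p_HP - p_HE              # 0.25

# Pr[HP | TE] = Pr[HP & TE] / Pr[TE]
p_HP_given_TE = p_HP_and_TE / p_TE
print(round(p_HP_given_TE, 4))         # 0.4545
```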

Your calculation matches mine for the first question in terms of the conditional probability, given your assumptions. Whether it's realistic or not is not a math question. Note that Polymarket gives estimated probabilities for all combinations here, with P(HP & HE) = .44, P(TP & TE) = .29, P(HP & TE) = .28 and P(TP & HE) = .01, matching your values to within a few percent. I've been following PredictIt myself, which currently gives Harris a 4-point lead over Trump, so there seems to be a fairly large margin of error in these internet odds sites. --RDBury (talk) 04:43, 2 August 2024 (UTC)Reply
Thanks. Can I ask how it is possible that different betting sites can have different odds for the exact same event? Does that create arbitrage potential that should get used immediately? I noticed yesterday that the Iowa Electronic Markets gave a very high probability (like 70%+) of Harris winning the PV and that surprised me, and I wondered why they made no market (as far as I saw) for the EV. I didn't notice till today that Polymarket also gave around 70% for Harris wins PV (not sure if it is the exact same conditions as IEM's). I'm quite surprised by all of this, especially the PV prediction. For the EV, news outlets have been bipolar for weeks (about Biden and more recently Harris) while the prediction markets stayed fairly serene. Now they're swinging more towards Harris and IDK whether anything has substantively changed. 2602:243:2008:8BB0:F494:276C:D59A:C992 (talk) 04:57, 2 August 2024 (UTC)Reply
Suppose two betting markets have substantively different odds for the same event, say p and q expressed as probabilities in the unit interval [0, 1] with p < q. Given the way bookmakers calculate the pay-off, which guarantees them a margin of profit, betting in one market on the event occurring (at p) and at the same time in the other market on it not occurring (at q) can only give you a guaranteed net win if the difference q − p is sufficiently large.  --Lambiam 08:22, 2 August 2024 (UTC)Reply
I think you just need p < q (depending on how you distribute the bets) but the difference has to be enough that the profit is more than you'd get just putting your money in the bank and collecting interest. Plus I think that there's a nominal transaction fee with these things. I'm pretty sure it reduces to a minimax problem; find the proportion of bets on E on site a to bets on ~E on site b to maximize the minimum amount you'd win. But I'm also sure plenty of other people have already worked this out and are applying it to equalize the markets. RDBury (talk) 16:46, 2 August 2024 (UTC)Reply
PS. The technical name for this kind of thing is Arbitrage, but it's a bit more complicated than the usual case with stocks and whatnot. In normal arbitrage you buy X in market A and sell X in market B at a higher price. In this case we're buying X in market A and not X in market B, then waiting until X has been resolved to true or false, which will be months from now. Another factor is that the X in one market may not be exactly the same as the X in the other market, so you have to read the details on each site. For example one site may say "Harris wins" while the other site says "Democrats win". If you don't think it makes a difference then you're not accounting for black swan type events like the presumptive candidate suddenly dropping out of the race. --RDBury (talk) 17:03, 2 August 2024 (UTC)Reply
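The two-market bet described above can be sketched numerically. The prices in this example are hypothetical, fees and interest are ignored, and both contracts are assumed to pay 1 per share; a riskless profit exists exactly when the YES price on one site plus the NO price on the other sum to less than 1:

```python
def arbitrage(yes_price_a, no_price_b, budget=100.0):
    """Split `budget` between YES shares on market A and NO shares on
    market B so the payout is the same whichever way the event resolves
    (both contracts pay 1 per share).  Returns (stake_yes, stake_no,
    guaranteed_payout)."""
    total_price = yes_price_a + no_price_b
    shares = budget / total_price       # buy equal share counts of each
    stake_yes = shares * yes_price_a
    stake_no = shares * no_price_b
    # Exactly one of the two contracts pays off, so the payout is
    # `shares` either way.
    return stake_yes, stake_no, shares

# Hypothetical prices: the event trades at 0.45 on one site, and its
# complement at 0.50 on another (i.e. the second site prices the event
# at 0.55).
stake_yes, stake_no, payout = arbitrage(0.45, 0.50)
profit = payout - 100.0    # positive exactly because 0.45 + 0.50 < 1
```

With YES at 0.45 and NO at 0.50 the stakes come out near 47.37 and 52.63, for a guaranteed payout of about 105.26 on a budget of 100.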

Probability question

Can someone please help me with this probability question? Say I have 10,000 songs on my iPod, 50 of which are by the Beatles. The iPod can play a sequence of songs at random. Assuming no songs are repeated, what is the probability of two Beatles songs being played consecutively? Thanks, --Viennese Waltz 18:03, 2 August 2024 (UTC)Reply

Just to clarify, do you mean the probability that, if you choose two songs at random, then both will be Beatles songs, or the probability that, if you play all 10,000 songs at random, there will be two consecutive Beatles songs somewhere in the shuffled list? GalacticShoe (talk) 18:11, 2 August 2024 (UTC)Reply
I thought this was a simple question that would not require further elucidation, but apparently not. I don't understand your request for clarification, I'm afraid. Imagine I turn the iPod on and start playing songs at random. What is the probability that two Beatles songs will appear consecutively? I can't be any more specific than that, I'm afraid. --Viennese Waltz 18:14, 2 August 2024 (UTC)Reply
How many songs are you playing total? Are you playing possibly all 10,000, until you hit two consecutive Beatles songs? GalacticShoe (talk) 18:20, 2 August 2024 (UTC)Reply
I have no idea how many I'm playing total. I don't see it as a finite activity. I just want to know the probability that two will pop up consecutively. If the question is unanswerable in this form, please let me know! --Viennese Waltz 18:22, 2 August 2024 (UTC)Reply
Well the problem is that you only have a finite number of songs, so it has to be a finite activity unless you define some behavior for your playlist looping. For example, if the first 10,000 songs are exhausted, then do you reshuffle and play all 10,000 songs again? If that's the case then the probability that you eventually hit two consecutive Beatles songs is essentially 100%. GalacticShoe (talk) 18:25, 2 August 2024 (UTC)Reply
Even if you did manage to get a well-posed version of the problem, I doubt there would be a simple formula. In the ball & urn model, you have m balls in an urn, k of which are colored, and you're selecting n balls without replacement; what is the probability that two colored balls in a row will be chosen? There are well-known formulas for the probability that l of the selected balls will be colored, but they don't say anything about the order in which they appear. Part of the problem may be that "no songs are repeated" is vague. Does it mean that no song is repeated twice in a row, or that once a song is played it will never be played again? I think most people here would assume the second meaning, which would imply that the sequence would have to end after all the songs have been played. If it's the first meaning then the sequence could continue forever, but in that case the probability of two consecutive Beatles songs is 1. --RDBury (talk) 19:00, 2 August 2024 (UTC)Reply
Assuming that these are independent events: the odds of a random Beatles song playing are 50:10,000, or 1:200. The consecutive odds are 1:200×200, or 1:40,000. The next step is to subtract the odds that they are the same... Modocc (talk) 19:18, 2 August 2024 (UTC)Reply
The odds that the same song plays consecutively are 1:10,000×10,000, which is many orders of magnitude smaller than for any two of them. Modocc (talk) 19:31, 2 August 2024 (UTC)Reply
See Probability#Independent_events. Modocc (talk) 19:57, 2 August 2024 (UTC)Reply
The events aren't independent, because they are drawn without replacement. Tito Omburo (talk) 20:04, 2 August 2024 (UTC)Reply
My take is the apps' songs actually repeat (randomly) and from my experience it seems they do, but the OP simply excluded it (as in that doesn't count). Modocc (talk) 20:13, 2 August 2024 (UTC)Reply
Let's assume instead that the app does not repeat any, but plays all of them. The app selects a Beatles song with exactly the same odds initially, and puts every song played into a dustbin. Another similar example should help. Take 10,000 cards with lyrics and randomly assign them to [deal one card to each of the] 10,000 players. With 10,000 non-repetitious events and 50 prized Beatles cards each player has a one in 200 chance of winning a prize card. The chances that any two players (consecutive or not) are assigned these prizes is slightly greater than[differ from] 1:40,000 though because they are no longer independent. Modocc (talk) 22:07, 2 August 2024 (UTC)Reply
I'm not sure how this model relates to the original question. Take the case of an iPod with 3 songs, 2 of which are by the Fab Four. After the first of the 2 Beatle cards has been assigned randomly to one of 3 players, the probability that the player who is randomly selected to receive the second card happens to be the same person is 1/3. Among the possible shuffles of the songs, only those with the non-Beatle song in the middle have no adjacent Beatle songs. The probability of this song randomly being assigned to the middle is 1/3, so the likelihood of a random shuffle producing adjacent Beatle songs equals 2/3, much higher than the card model suggests.  --Lambiam 14:00, 3 August 2024 (UTC)Reply
I meant one card. It's been 40 years since I aced stats, I'm rusty and I've forgotten some of it. Modocc (talk) 14:46, 3 August 2024 (UTC)Reply
With three independent deals of one card each: 2:3 to win, 2/3x2/3 is 4/9 per pair which is too low. I aced linear algebra too, honors and all that, but I cannot seem to do any of the maths now, but I think the inaccuracy gets smaller with a larger deck because only two consecutive plays are involved.
Modocc (talk) 16:05, 3 August 2024 (UTC)Reply
(ec) One way of interpreting the question is this: given an urn with 10,000 marbles, 9,950 of which are white and 50 of which are black, drawing one marble at a time from the urn, without replacement but putting them neatly in a row in the order in which they were drawn, what is the probability that the final sequence of 10,000 marbles contains somewhere two adjacent marbles that are both black? Generalizing it and taking a combinatorial view, let N(n, k) stand for the number of permutations (π1, ..., πn) of the numbers 1, ..., n in which no two adjacent elements πi and πi+1 are both at most k, where k ≤ n. The answer is then 1 − N(10000, 50)/10000!. I don't have a general formula for N(n, k), but I suppose the computation can be made tractable using a (mutual) recurrence relation.  --Lambiam 22:08, 2 August 2024 (UTC)Reply
Experimentally, repeating the process 10,000 times, 2130 runs had adjacent black marbles. So 1 − N(10000, 50)/10000! should be in the order of 0.21.  --Lambiam 22:53, 2 August 2024 (UTC)Reply
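The experiment above is easy to reproduce. Rather than shuffling all 10,000 marbles, it is equivalent (and faster) to sample the 50 positions of the black marbles directly and look for two adjacent ones; a sketch with a modest trial count:

```python
import random

def has_adjacent_blacks(n=10_000, k=50, rng=random):
    """One run: drop k black marbles into n positions uniformly at random
    and report whether any two of them ended up adjacent."""
    positions = sorted(rng.sample(range(n), k))
    return any(b - a == 1 for a, b in zip(positions, positions[1:]))

trials = 2_000
hits = sum(has_adjacent_blacks() for _ in range(trials))
print(hits / trials)   # close to 0.218 (cf. the 2130/10000 above)
```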

Let n be the number of songs in the playlist, and k the number of Beatles songs. Assuming that every song plays exactly once, we count the number of permutations having the property that two Beatles songs are played consecutively. First, note that the number of all such configurations is n!, which we write as n! = C(n, k) · k! · (n − k)!. This can be interpreted as follows: from the n slots of the playlist, we first select k positions in C(n, k) ways, into which we insert the k Beatles songs in k! ways; then the remaining n − k songs are inserted into the remaining n − k slots in (n − k)! ways. We modify this by replacing the binomial coefficient by the quantity A(n, k), whose definition is the number of ways of selecting k among n objects, at least two of which are adjacent. Now, we have

A(n, k) = C(n, k) − D(n, k)

where D(n, k) is the number of ways of selecting k from the n objects such that none of the k selected are adjacent. We then have D(n, k) = C(n − k + 1, k). Now the desired probability is just

A(n, k) / C(n, k) = 1 − C(n − k + 1, k) / C(n, k).

When n = 10000 and k = 50, we get a probability of about 0.21824. Tito Omburo (talk) 16:40, 3 August 2024 (UTC)Reply
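The closed form above is easy to evaluate with exact integer arithmetic; the count of selections with no two adjacent positions is the standard C(n − k + 1, k):

```python
from math import comb

def p_adjacent(n, k):
    """Probability that a uniformly random ordering of n songs, k of them
    by the Beatles, plays two Beatles songs back to back."""
    # comb(n - k + 1, k) counts the selections of k pairwise non-adjacent
    # positions out of n in a row.
    return 1 - comb(n - k + 1, k) / comb(n, k)

print(p_adjacent(10_000, 50))   # ≈ 0.21824
print(p_adjacent(3, 2))         # 2/3, matching the 3-song example above
```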

The experimentally observed number of 2130 runs out of 10,000 is within 1.3 sigma of the expected value 2182.4, so this is a good agreement.  --Lambiam 19:38, 3 August 2024 (UTC)Reply
Restated as I understand the OP: "what is the probability that two songs played in succession are different Beatles songs?"
I assume that two songs is the minimum they listen to in one session and the odds are small and my mouse iis malfuctioninggg too... Modocc (talk) 16:57, 3 August 2024 (UTC)Reply
The probability is 0.21824 when the entire playlist of 10,000 songs is listened to. There are 9999 consecutive pairs (5000 starting at odd positions plus 4999 starting at even positions). Supposing their expectation values are equal, that is a probability of about 0.21824/9999, or 2.183e-5, or about 1 in 45,076. The values are converging then, as I thought they would. Modocc (talk) 18:30, 3 August 2024 (UTC)Reply
One can add to the model a variable s, which determines the length of a session. Assuming as before that no song is played more than once, we first insert all songs into a playlist in n! ways. Let S_i denote the event that the number of Beatles songs in the first s songs is i. Then, for i from 0 to k, these events partition the probability space. We have P(S_i) = C(k, i)·C(n − k, s − i) / C(n, s). Now, the conditional probability of two consecutive Beatles songs given S_i is 1 − C(s − i + 1, i) / C(s, i). So we get

P(two consecutive Beatles songs in the session) = Σ_{i=0}^{k} [C(k, i)·C(n − k, s − i) / C(n, s)] · [1 − C(s − i + 1, i) / C(s, i)]

For example, when n = 10000, k = 50, and s = 10000, we get 0.21824; whereas if s = 100 (we only listen to 100 songs), we have a probability of consecutive Beatles songs of 0.00241166. Tito Omburo (talk) 20:26, 3 August 2024 (UTC)Reply
For clarity's sake, it's worth noting that when s = n, this simplifies to 1 − C(n − k + 1, k) / C(n, k). GalacticShoe (talk) 20:35, 3 August 2024 (UTC)Reply
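The session formula can be evaluated directly. The sketch below uses the hypergeometric weight for the number of Beatles songs landing in the session, times the within-session non-adjacency correction, per the derivation above, and reproduces both quoted figures:

```python
from math import comb

def p_adjacent_session(n, k, s):
    """Probability of two consecutive Beatles songs somewhere in the
    first s songs of a random ordering of n songs, k by the Beatles."""
    total = 0.0
    for i in range(min(k, s) + 1):
        p_i = comb(k, i) * comb(n - k, s - i) / comb(n, s)   # P(S_i)
        p_adj = 1 - comb(s - i + 1, i) / comb(s, i)          # given S_i
        total += p_i * p_adj
    return total

print(p_adjacent_session(10_000, 50, 100))      # ≈ 0.0024 (cf. 0.00241166)
print(p_adjacent_session(10_000, 50, 10_000))   # ≈ 0.21824
```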

Given 2 finite field elements, is it possible to enlarge or decrease the characteristic at the cost of the dimension?

Simple question: given 2 finite field elements, is it possible to compute 2 other equivalent elements having a larger or lower characteristic while keeping the same old discrete logarithm between the 2 new elements (through enlarging or decreasing the dimension and without computing the discrete logarithm)? 82.66.26.199 (talk) 21:48, 2 August 2024 (UTC)Reply

The algebraic notion of characteristic is usually defined for rings (and therefore also for fields). How do you define it for elements of a field?  --Lambiam 08:45, 3 August 2024 (UTC)Reply
Let's say I have 2 finite field elements having a discrete logarithm s between them. My aim is to redefine 2 equivalent elements having a larger prime but a lower dimension (while keeping the same discrete logarithm s). 82.66.26.199 (talk) 10:02, 3 August 2024 (UTC)Reply
If by "prime" you mean the prime number p of the Galois field GF(p^n), there is no way to relate its elements algebraically to those of a Galois field with another prime. If you remain in the same field, there is a fair chance that   but the left-hand side may have more solutions in   of the equation   than the right-hand side has for the case    --Lambiam 12:58, 3 August 2024 (UTC)Reply
Correct, I was talking about GF(p^k) where p is a 64-bit prime and p^k as a power is even larger. Isn't it possible to decrease k by increasing p while keeping the same previous m between the 2 equivalent finite field elements? (without knowing what the discrete logarithm m is) 2A01:E0A:401:A7C0:9CB:33F3:E8EB:8A5D (talk) 13:35, 3 August 2024 (UTC)Reply
I don't think there is any hope. The algebraic structures of the fields are completely different, so there are no homomorphisms between them.  --Lambiam 14:14, 3 August 2024 (UTC)Reply
Stupid beginner question: why are Pohlig–Hellman variants sometimes applicable to elliptic curves but never to finite fields having a composite dimension? 2A01:E0A:401:A7C0:F4D5:FA63:12A5:B6B6 (talk) 16:46, 3 August 2024 (UTC)Reply
Pohlig–Hellman is applicable in any finite cyclic group such that the prime factorization of the order is known. It is possible to modify Pohlig–Hellman to any finite abelian group (under the same assumption), but if the rank of the abelian group is greater than one, you need several basis elements rather than just one generator for the discrete log. Notably, the multiplicative group of a finite field is always cyclic, so assuming you know how to factor p^n − 1, you can use regular PH here. For the group of units in a finite ring, you typically need more information. For example, in the units modulo n, where n is composite, you need to be able to compute phi(n) (equivalent to the prime factorization of n), and phi(phi(n)). To answer your original question, if you know that a = b^x (mod p), and have another prime p', just let b' be a primitive root mod p', and define a' as b'^x (mod p'). Tito Omburo (talk) 17:18, 3 August 2024 (UTC)Reply
As I said, I'm talking about finite fields of composite dimension. This excludes GF(p). As far as I understand, Pohlig–Hellman variants don't apply between elements of GF(p^n), unlike the ECDLP, but why? 2A01:E0A:401:A7C0:F4D5:FA63:12A5:B6B6 (talk) 19:20, 3 August 2024 (UTC)Reply
It applies if you can determine the prime factorization of p^n-1. The multiplicative group of a finite field is cyclic. Tito Omburo (talk) 20:32, 3 August 2024 (UTC)Reply
It seems to me you are confusing finite fields and finite rings. Finite field elements of dimension greater than 1 are expressed as polynomials, like 848848848848487489219*a^4+7378478947844783*a^3+43445998848848898*a^2+87837838383837*a+637837871093836 2A01:E0A:401:A7C0:F4D5:FA63:12A5:B6B6 (talk) 21:46, 3 August 2024 (UTC)Reply
No, I'm not. Any finite (multiplicative) subgroup of *any* field is cyclic. Using PH only requires knowing the prime factorization of the order of the group. In the case of a field with p^n elements, the group of units is cyclic of order p^n − 1. (Of course this may not have many small prime factors, e.g., if n = 1 and p is a safe prime.) Tito Omburo (talk) 22:37, 3 August 2024 (UTC)Reply
Ok but then how to use the resulting factors on a larger polynomial ? 2A01:E0A:401:A7C0:F4D5:FA63:12A5:B6B6 (talk) 00:12, 4 August 2024 (UTC)Reply

The details of how you do multiplication in the field are irrelevant. PH works if the group multiplication is just considered as a black box. Tito Omburo (talk) 12:13, 4 August 2024 (UTC)Reply

No, I mean: once I've factored the order, how do I shrink the 2 finite field elements to the lower prime factor in order to solve the FFDLP in the lower factor? For example, in the ECDLP this is a matter of dividing each point by the prime factor… Especially if the base prime p is lower than the dimension/power k.
Though I admit I would like sample code/pseudocode to understand at that point… 82.66.26.199 (talk) 15:58, 4 August 2024 (UTC)Reply
Suppose you want to solve g^x = y where g is a generator of the group of units. If you have the unique factorization of the group order into prime powers, N = p_1^{k_1} ⋯ p_r^{k_r}, then with m_i = N / p_i^{k_i}, the element g^{m_i} generates the group G_i of p_i^{k_i}-th roots of unity. Then one uses the prime-power case of PH to solve (g^{m_i})^{x_i} = y^{m_i} in G_i. This can be achieved efficiently (provided p_i is not a large prime) using Pohlig–Hellman algorithm#Groups of prime-power order. (Basically, you start with x_i mod p_i and successively remove powers of p_i inductively. This is a search over k_i·p_i possibilities, so polynomial time in k_i for fixed p_i.) So, we get a solution exponent x_i in each group G_i. Finally, let x be an integral combination with x ≡ x_i (mod p_i^{k_i}) for each i (by the Chinese remainder theorem), and x is the solution. Tito Omburo (talk) 17:25, 4 August 2024 (UTC)Reply
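The recipe above can be sketched as a black-box-group computation in Python. For concreteness the example works in the multiplicative group modulo the small prime 211 (chosen here only because its group order 210 = 2·3·5·7 is smooth); nothing depends on how group elements are represented, so the same steps carry over to GF(p^n) once multiplication there is implemented:

```python
def dlog_prime_power(g, y, mod, q, k):
    """Solve g^x = y (mod `mod`) where g has order q**k, lifting the
    base-q digits of x one at a time (prime-power case of PH)."""
    x = 0
    gamma = pow(g, q ** (k - 1), mod)          # element of order q
    for i in range(k):
        h = pow(y * pow(g, -x, mod) % mod, q ** (k - 1 - i), mod)
        d = next(d for d in range(q) if pow(gamma, d, mod) == h)
        x += d * q ** i
    return x

def pohlig_hellman(g, y, mod, order, factors):
    """Solve g^x = y in a cyclic group of known order, given its prime
    factorization `factors` = {q: k}; pieces are combined with the CRT."""
    x = 0
    for q, k in factors.items():
        m = order // q ** k
        xi = dlog_prime_power(pow(g, m, mod), pow(y, m, mod), mod, q, k)
        # Fold the congruence x ≡ xi (mod q**k) into the running answer.
        x = (x + xi * m * pow(m, -1, q ** k)) % order
    return x

p, order, factors = 211, 210, {2: 1, 3: 1, 5: 1, 7: 1}   # 210 = 2*3*5*7
# Find a generator: g generates iff g^(order/q) != 1 for every prime q.
g = next(g for g in range(2, p)
         if all(pow(g, order // q, p) != 1 for q in factors))
y = pow(g, 123, p)
assert pow(g, pohlig_hellman(g, y, p, order, factors), p) == y
```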
So let's say I want to solve g^x = y in GF(7^10). How would those 2 elements be written (what would the polynomials look like) in order to solve the sub-discrete log in the subgroup of order 2801? 82.66.26.199 (talk) 22:28, 4 August 2024 (UTC)Reply
First raise them both to the power (7^10 − 1)/2801 in the field, and then check for which among the 2801 possibilities you have g^x = y. Tito Omburo (talk) 22:37, 4 August 2024 (UTC)Reply
So the elements are still polynomials with 10 coefficients per finite field element, just for a subgroup of order 2801? 2A01:E0A:401:A7C0:F4D5:FA63:12A5:B6B6 (talk) 01:02, 5 August 2024 (UTC)Reply

August 4

Even binomial coefficients

Is there an elementary way to show that all the entries, except the outer two, on a row of Pascal's triangle corresponding to a power of 2 are even? 2A00:23C6:AA0D:F501:658A:3BC6:7F9D:2A4A (talk) 17:37, 4 August 2024 (UTC)Reply

The entries in Pascal's triangle are the coefficients of a^n b^(n−i) in (a+b)^n. It's easy to see that (a+b)^2 = a^2 + b^2 mod 2, and consequently (a+b)^4 = a^4 + b^4 mod 2, (a+b)^8 = a^8 + b^8 mod 2, and in general (a+b)^n = a^n + b^n mod 2 if n is a power of 2. This implies that all the coefficients of a^n b^(n−i) are 0 mod 2 except when i=0 or i=n. Note, there are more complex formulas for finding the coefficient of a^n b^(n−i) mod 2 or any other prime for any values of n and i. --RDBury (talk) 18:40, 4 August 2024 (UTC)Reply
@RDBury: Did you mean a^i b^(n−i) in the first sentence...? --CiaPan (talk) 09:44, 5 August 2024 (UTC)Reply
Yes, good catch. RDBury (talk) 13:20, 5 August 2024 (UTC)Reply
Another way to see it is, we divide the 2n objects into two equal piles of size n, so that to select k things from 2n is to select i things from one pile and k − i things from the other. That is, we have the following special case of Vandermonde's identity:

C(2n, k) = Σ_{i=0}^{k} C(n, i) · C(n, k − i)

In the right-hand side, every term appears twice, except the middle term C(n, k/2)^2 (if k is even). We thus have

C(2n, k) ≡ C(n, k/2)^2 ≡ C(n, k/2) (mod 2) if k is even, and C(2n, k) ≡ 0 (mod 2) if k is odd.

We iterate this until k is either odd (and the right-hand side is even by induction), or k = 0 or k = 2n, in which case the coefficient is C(2n, 0) = 1 or C(2n, 2n) = 1, which corresponds to one of the two outer binomial coefficients. Tito Omburo (talk) 18:51, 4 August 2024 (UTC)Reply
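Both arguments above are easy to spot-check numerically, e.g. with Python's math.comb:

```python
from math import comb

# Rows 2, 4, 8, ..., 128 of Pascal's triangle: every interior entry even.
for m in range(1, 8):
    n = 2 ** m
    assert all(comb(n, i) % 2 == 0 for i in range(1, n))
    assert comb(n, 0) == comb(n, n) == 1      # the two outer entries
print("checked rows 2^1 .. 2^7")
```

(The general statement behind this is Lucas' theorem: C(n, i) is odd exactly when every binary digit of i is at most the corresponding binary digit of n; for n a power of 2 that forces i = 0 or i = n.)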
(OP) I think the first approach is more my level, but thanks to both for the replies. 2A00:23C6:AA0D:F501:6CF2:A683:CAFC:3D8 (talk) 07:10, 5 August 2024 (UTC)Reply


August 6

Bishop and generalised leaper checkmate

We know from the 1983 work of J. Telesin that king, bishop, and knight can force mate against the lone king on any n×n board in O(n^2) moves (a clear presentation), provided that the board has a corner of the colour the bishop travels on.

So, what if we replace the knight with a general (p,q)-leaper with gcd(p,q)=1, so that it can travel to any square on the board? (I guess I'm using the convention gcd(p,0)=|p|.) Is it still the case that mate is forceable in O(n^2) moves on an n×n board, for n sufficiently large that the leaper cannot get stuck by the board edges?

(An almost identical question was asked on Kirill Kryukov's forum in 2015 by "byakuugan", but no general answer came.) Double sharp (talk) 11:52, 6 August 2024 (UTC)Reply

Note that the king and bishop may be used to pen the black king into a corner, unless the black king "waits" (just moves back and forth without attempting to escape the enclosure). This is the only time the knight (or leaper) needs to take action (and it doesn't matter how many moves it takes to check the king). If you can devise conditions such that half of the board above a main diagonal is connected under knight-moves, I think basically the solution given in that arxiv link works, although I am confused about the second pane of Figure 1 since this does not appear to be a chess position that can be arrived at legally. Presumably checkmate can only happen in the corner? Tito Omburo (talk) 16:08, 6 August 2024 (UTC)Reply

Identical partitions of cells of the 24-cell?

I'm wondering, for each divisor d of 24, whether the cells of a 24-cell can be split into identical pieces (not necessarily connected) of d cells (octahedra) each. Note that since the 24-cell is self-dual, an identical partition into pieces of d cells is equivalent to a partition of the vertices into pieces of 24/d vertices, which can be viewed as a partition into 24/d cells by taking, for each cell, the corresponding vertex in the dual. (I'm mostly using the (±1, ±1, 0, 0) coordinates here.) d = 1 and d = 24 are trivial. For pieces of 2 cells, pairing a cell with its sign-flip works. For 12 cells, grabbing one cell from each of the pairs in the 2-cell split gives the partition. This leaves the 3- and 8-cell pieces and the 4- and 6-cell pieces. For the 4, the vertices can be split based on which two dimensions are zero, so the faces in the dual can be split that way; but I don't think the same flip trick from the 2/12 case works. I think for the 8-cell pieces a consistent coloring of the octahedra can be done by using three different colors on a single edge and then continuing the coloring so that each edge has three colors, or by pairing the 4-cell pieces, but I'm not quite sure how to do 3 or 6. Naraht (talk) 14:44, 6 August 2024 (UTC)Reply

I wasn't quite able to follow everything, so I'll settle for restating and expanding slightly on what you have so far. For 12 sets of 2, as you pointed out, you can pair each vertex with its negative. For 2 sets of 12 an explicit partition can be given as {vertices of type (±1, ±1, 0, 0) with first sign +} and {vertices of type (±1, ±1, 0, 0) with first sign -}. For 6 sets of 4, as you also pointed out, you can divide up the vertices of (±1, ±1, 0, 0) according to the positions of the 0's. For 3 sets of 8 you can use an equivalent formulation of the 24-cell using 8 vertices of type (±2, 0, 0, 0) and 16 of type (±1, ±1, ±1, ±1). Divide these into the (±2, 0, 0, 0) type vertices, the (±1, ±1, ±1, ±1) type with an even number of +'s, and the (±1, ±1, ±1, ±1) type with an odd number of +'s. Geometrically this corresponds to a compound of 3 hypercubes whose intersection is the 24-cell. I assume you can turn this into the octahedra coloring you were talking about. That leaves 8 sets of 3 and 4 sets of 6. Anyway, this doesn't really add much new information, but mainly I wanted to give some sort of response to show your question isn't being ignored. My feeling is that there are no such partitions, but I don't see a way of proving this without a lot of computation. --RDBury (talk) 19:51, 9 August 2024 (UTC)Reply
PS. I was able to find a partition of 8 sets of 3 as follows:
{(1, 1, 0, 0), (1, 0, 1, 0), (1, 0, 0, 1)}
{(1, 0, 0, -1), (1, 0, -1, 0), (1, -1, 0, 0)}
{(0, 1, 1, 0), (0, 1, 0, 1), (0, 0, 1, 1)}
{(0, 1, 0, -1), (0, 1, -1, 0), (0, 0, -1, -1)}
{(0, 0, 1, -1), (0, -1, 1, 0), (0, -1, 0, -1)}
{(0, 0, -1, 1), (0, -1, 0, 1), (0, -1, -1, 0)}
{(-1, 1, 0, 0), (-1, 0, 1, 0), (-1, 0, 0, 1)}
{(-1, 0, 0, -1), (-1, 0, -1, 0), (-1, -1, 0, 0)}
Each triple forms an equilateral triangle of adjacent vertices, which shows they are identical, but there are many more than 8 such triangles and I used a certain amount of trial and error to find this. That leaves 4 sets of 6 still unsolved. --RDBury (talk) 06:05, 13 August 2024 (UTC)Reply
The natural next step would be to look for sets of 6 that mark out octahedra, but those won't include both the (all w=1) and (all w=-1) vertices, since the remaining w=0 vertices can't be split that way. Naraht (talk) 18:17, 13 August 2024 (UTC)Reply
I don't think regular octahedra will work, but I found the following set of four irregular octahedra:
{(±1, ±1, 0, 0), (1, 0, 1, 0), (-1, 0, -1, 0)}
{(0, ±1, ±1, 0), (0, 1, 0, 1), (0, -1, 0, -1)}
{(0, 0, ±1, ±1), (1, 0, -1, 0), (-1, 0, 1, 0)}
{(±1, 0, 0, ±1), (0, 1, 0, -1), (0, -1, 0, 1)}
Each set can be transformed to any other set by a combination of coordinate permutations and sign changes so they are congruent. Again, finding this was more a matter of trial and error than an actual method. --RDBury (talk) 21:55, 13 August 2024 (UTC)Reply

Calculate percent below bell curve for a given standard deviation

Assume a normal bell curve. I know that 34.1% of the data lies within 1 standard deviation above the mean (and another 34.1% within 1 standard deviation below the mean). What is the method for calculating the percentage above/below the mean for any given number of standard deviations, such as 0.6 standard deviations above (or below) the mean? 12.116.29.106 (talk) 16:43, 6 August 2024 (UTC)Reply

There is an analytical expression that makes use of the error function erf(x). Specifically, the function erf(z/√2) gives the fraction of data within z standard deviations (above + below) of the mean of a normal distribution. Get the amount above (or below) by dividing by two. Tito Omburo (talk) 17:11, 6 August 2024 (UTC)Reply
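In practice one calls a library implementation of erf rather than evaluating the integral by hand; Python's math.erf reproduces the figures discussed here (34.1% for z = 1, about 22.6% for z = 0.6):

```python
from math import erf, sqrt

def frac_within(z):
    """Fraction of a normal distribution within z standard deviations
    of the mean (both sides)."""
    return erf(z / sqrt(2))

def frac_one_side(z):
    """Fraction between the mean and z standard deviations above it."""
    return frac_within(z) / 2

print(round(frac_one_side(1.0), 4))   # 0.3413
print(round(frac_one_side(0.6), 4))   # 0.2257
```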
The "method for calculating" it is not answered there. Of course, the easy way is to invoke an implementation of the error function that you find in all kinds of standard libraries, spreadsheets, etc. But if you really want to know how to calculate it, then you need to do the integral, numerically, and maybe build a table of polynomial approximants from doing the integral at a bunch of places. Or look it up in a book from a trusted source. Dicklyon (talk) 04:39, 8 August 2024 (UTC)Reply
The section Error function § Numerical approximations gives formulas for calculating approximations with lower and lower maximum errors.  --Lambiam 07:12, 8 August 2024 (UTC)Reply
Nice! I should have known WP would have the comprehensive answer in the article Omburo linked! Dicklyon (talk) 14:13, 8 August 2024 (UTC)Reply




August 13

Another probability question

Hello, this is not homework. It is a conundrum that arose out of something I was trying to work out for myself. If we have only one sample of a random variable, and no other information at all, then the best estimate of the mean of the distribution is the sample itself. It may be a rubbish estimate, but nothing more can be said. However, suppose we also know that the mean is >= 0. With this extra information, can any better estimate of the mean be achieved from a single data point? (If necessary, please define "better" in any sensible way that aids the question.) It seems to me that, in the event that the sample is negative, replacing it with zero would always be a better estimate. However, the average of multiple adjusted samples will no longer converge to the true mean, assuming that positive samples are left alone, which seems undesirable. It's not obvious to me anyway that positive samples have to be left alone. Can anything better be achieved? How about if we also know anything else necessary about the distribution, even down to its exact "shape", except that we do not know the mean, apart from that it is >= 0? What then? Does that extra information help at all? 2A00:23C8:7B0C:9A01:8D5A:A879:6AFC:AB6 (talk) 20:56, 13 August 2024 (UTC)Reply

Adjusting positive samples will do you little good if you don't know how to adjust them – but if nothing else is known about the distribution than μ ≥ 0, there is no information to base an adjustment on. For an extreme case, assume all samples have sample size 1. Unknown to the sampler, the distribution is discrete with possible outcomes {−9, 1}. The sample means are then equal to the single element in each sample. Adjusting them to be nonnegative amounts to left-censoring. If μ is the true mean of the distribution, the average of the adjusted (left-censored) sample means tends to (μ + 9) / 10. For the average of the adjusted sample means to tend to the true mean, the positive sample means should be replaced by 10μ / (μ + 9). But this replacement requires already knowing both the value of μ and the outcome space – in fact, all there is to know about the distribution.  --Lambiam 07:33, 14 August 2024 (UTC)Reply
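The two-point example above can be checked numerically; μ = 0.5 is an arbitrary choice with μ ≥ 0, and the censored average indeed drifts to (μ + 9)/10 rather than μ:

```python
import random

mu = 0.5                      # true mean, chosen >= 0
p = (mu + 9) / 10             # P(outcome = 1), so that 10*p - 9 == mu
rng = random.Random(42)

samples = [1 if rng.random() < p else -9 for _ in range(100_000)]
censored = [max(s, 0) for s in samples]        # left-censor at zero

raw_mean = sum(samples) / len(samples)         # near mu = 0.5
censored_mean = sum(censored) / len(censored)  # near (mu + 9)/10 = 0.95
```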

August 14


Hypothetical US Senate reform


Suppose that the US Senate is reformed to be lightly weighted by population, with each state having either 1, 2, or 3 senators (and keeping the total number of 100 senators constant). What would be a plausible method for determining which states get 1, 2, or 3? 71.126.57.129 (talk) 03:33, 14 August 2024 (UTC)

Representatives are assigned using a greedy algorithm: First, every state is allocated one representative; then whichever state currently has the highest ratio of population to representatives is given another representative; repeat until out of representatives. One could reasonably use the same algorithm for senators, with a maximum number of senators per state allowed.--Antendren (talk) 06:31, 14 August 2024 (UTC)
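A minimal Python sketch of the capped greedy procedure described above; the state populations are made-up round numbers for illustration, not real census figures:

```python
def apportion(populations, total_seats, max_seats):
    """Give every state 1 seat, then repeatedly give another seat to the
    state with the highest population-per-seat ratio that is still under
    the cap, until all seats are handed out."""
    seats = [1] * len(populations)
    while sum(seats) < total_seats:
        best = max(
            (i for i in range(len(populations)) if seats[i] < max_seats),
            key=lambda i: populations[i] / seats[i],
        )
        seats[best] += 1
    return seats

# 50 hypothetical states with populations of 1 to 50 million
pops = [1_000_000 * k for k in range(1, 51)]
result = apportion(pops, 100, 3)
print(sum(result))            # all 100 seats allocated
print(result[0], result[-1])  # smallest state at the floor, largest at the cap
```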
(ec) You can think of the states as each being one party in a multi-party system, and of the population of each state as being voters for their own party (voting for the party, not for a specific candidate). Several multi-winner voting systems can be used, specifically party-list proportional representation, tweaked to keep the number of seats for each party in a given 1 to n range – in the question as posed n = 3. One possible way is the D'Hondt method, which will assign the seats one by one to the state that is proportionally the least represented and still has fewer than n seats. As you can see at Party-list proportional representation § Apportionment of party seats, there are many other methods.  --Lambiam 06:36, 14 August 2024 (UTC)
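For comparison, a Python sketch of a capped D'Hondt variant (again with hypothetical populations): the standard highest-averages rule awards the next seat by the quotient population / (seats + 1), rather than by the population-per-current-seat ratio:

```python
def dhondt_capped(populations, total_seats, min_seats=1, max_seats=3):
    """Capped D'Hondt: every state starts at the minimum; each remaining
    seat goes to the eligible state with the highest quotient
    population / (seats + 1)."""
    seats = [min_seats] * len(populations)
    while sum(seats) < total_seats:
        best = max(
            (i for i in range(len(populations)) if seats[i] < max_seats),
            key=lambda i: populations[i] / (seats[i] + 1),
        )
        seats[best] += 1
    return seats

pops = [1_000_000 * k for k in range(1, 51)]  # hypothetical populations
result = dhondt_capped(pops, 100)
print(sum(result))  # all 100 seats allocated
```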
One plausible way is to start moving senators from the smallest states to the largest. So pick the smallest state first, take one of their senators and assign it to the largest. Then repeat with the next smallest and largest. Keep going until you run out of states that are small or large enough for it to make sense. This works because, if you require the total to be unchanged with 50 states and 100 senators, the number of states with 1 senator must equal the number with 3. --2A04:4A43:90FF:FB2D:F067:D97A:B158:E36B (talk) 12:44, 14 August 2024 (UTC)
One approach is to define what is meant by a fair distribution of Senate seats. A reasonable ansatz is to minimize the sum of seats/population subject to the constraints that there are 100 seats total and each state gets between 1 and 3 seats. This is a variant of the knapsack problem. Tito Omburo (talk) 13:15, 14 August 2024 (UTC)
Another theory-based approach is to minimize total dissatisfaction, assuming that the members of a group will feel less happy if the group is underrepresented. Let, for group <math>i,</math> the quantity <math>p_i</math> stand for the fraction the group takes up in the total population, and <math>s_i</math> for the fraction of the seats allocated to the group (so <math>\textstyle\sum_i p_i = \sum_i s_i = 1</math>). Ideally, <math>s_i = p_i</math> for all groups, but this is rarely possible, given the constraints. For a model of total dissatisfaction, choose some function <math>f(p, s)</math> that represents the dissatisfaction of a group with population weight <math>p</math> and representation weight <math>s;</math> then <math>\textstyle\sum_i p_i f(p_i, s_i)</math> stands for the proportionally weighted combined dissatisfaction. Plausibly, <math>s \ge p,</math> meaning that a group is not underrepresented, should imply that <math>f(p, s) = 0;</math> furthermore, for fixed <math>p,</math> the dissatisfaction should weakly monotonically decrease with increasing <math>s;</math> conversely, for fixed <math>s,</math> the dissatisfaction should weakly monotonically increase with increasing <math>p.</math> A possible formula is given by <math>f(p, s) = (p \mathbin{\dot-} s)/p,</math> using truncated subtraction.  --Lambiam 16:55, 14 August 2024 (UTC)
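A dissatisfaction model of this kind can be evaluated directly. A minimal Python sketch, assuming the dissatisfaction function f(p, s) = max(p - s, 0) / p (relative underrepresentation with truncated subtraction, zero whenever a group is not underrepresented) and made-up population shares:

```python
def dissatisfaction(p, s):
    # Relative underrepresentation with truncated subtraction:
    # zero whenever the group is not underrepresented (s >= p).
    return max(p - s, 0.0) / p

def total_dissatisfaction(pop_shares, seat_shares):
    # Population-weighted sum of per-group dissatisfaction.
    return sum(p * dissatisfaction(p, s)
               for p, s in zip(pop_shares, seat_shares))

# Toy example: three groups, one allocation proportional, one skewed
pop_shares = [0.5, 0.3, 0.2]
exact = total_dissatisfaction(pop_shares, [0.5, 0.3, 0.2])   # perfectly proportional
skewed = total_dissatisfaction(pop_shares, [0.4, 0.3, 0.3])  # largest group shorted
print(exact, skewed)
```

Only the underrepresented largest group contributes to the skewed total; the overrepresented smallest group contributes nothing, matching the requirement that s >= p implies zero dissatisfaction.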

August 15


Can two polygons have the same interior angles (in order) if they are noncongruent and have no parallel sides?


Was thinking about the question of whether the interior angles, in order, determine a polygon up to congruence, when I realized that the answer is obviously no if there are parallel sides which you can contract and lengthen at will without changing any of the angles. Barring these polygons with parallel sides and their lengthenings/contractions, then, are there any polygons that share the same interior angles in order without being congruent? GalacticShoe (talk) 17:30, 15 August 2024 (UTC)

I'm not exactly sure what you're asking, but it seems to me that an example can be obtained by starting with a regular pentagon, and lengthening two adjacent sides, and the third opposite side, and shrinking the remaining two sides, while maintaining all angles. Tito Omburo (talk) 17:37, 15 August 2024 (UTC)
I'm realizing now that this question is embarrassingly simple; if we consider extending polygon sides to lines, with the sides then being line segments between the intersections of adjacent sides, then moving lines always preserves angles. For example, in the pentagon example, the lengthening/shrinking you mentioned is akin to moving two lines further away from the center of the pentagon. GalacticShoe (talk) 17:48, 15 August 2024 (UTC)
Upon further searching online, the answer is very much no; see this Math StackExchange post. In particular, there are many polygons where you can simply intersect the polygon with a planar half-space such that 1. the line L demarcating the planar half-space is parallel to one of the sides S, and 2. all sides of the polygon either fall on the interior of the half-space, share a vertex with S and intersect L, or are S. GalacticShoe (talk) 17:39, 15 August 2024 (UTC)
I think you meant to say that the answer is, Yes – they can have the same interior angles and yet be noncongruent (from tetragons on).  --Lambiam 22:25, 15 August 2024 (UTC)
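The affirmative answer is easy to verify numerically, using the fact that the side vectors of a closed polygon must sum to zero. In the Python sketch below, two pentagons share the same five side directions (hence the same interior angles of 108° and no parallel sides) but have different positive side lengths, so they are noncongruent; the perturbation is chosen so that the closure condition still holds:

```python
import cmath
import math

# Side directions of a pentagon with all interior angles 108 degrees:
# consecutive sides turn by the exterior angle of 72 degrees.
dirs = [cmath.exp(1j * math.radians(72 * k)) for k in range(5)]

# Any positive side lengths l_k with sum(l_k * dirs[k]) == 0 give a closed
# polygon with this angle sequence.  l_k = 1 works (the regular pentagon);
# so does l_k = 1 + t*cos(144k degrees), because the perturbation terms
# also sum to zero (they are built from full sets of fifth roots of unity).
t = 0.5
regular = [1.0] * 5
perturbed = [1.0 + t * math.cos(math.radians(144 * k)) for k in range(5)]

for lengths in (regular, perturbed):
    closure = sum(l * d for l, d in zip(lengths, dirs))
    assert abs(closure) < 1e-12          # the polygon closes up
    assert all(l > 0 for l in lengths)   # every side has positive length
print(perturbed)  # different side lengths: a noncongruent pentagon
```

Since the directions differ by multiples of 72° and never by 180°, no two sides are parallel, so this also answers the question as originally restricted.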

August 16
