Soccer Flipcup
Soccer Flipcup can be expressed as a one-dimensional random walk with absorbing boundaries at $0$ and $N$. In each of the intermediate states $1, \ldots, N-1$, a pair of competing players plays “flip cup”. If the player from the onward team (the team attacking toward $N$) wins, the state increases from $n$ to $n + 1$, and the $(n+1)$th players compete. Otherwise, the state decreases from $n$ to $n - 1$, and the $(n-1)$th players compete. This continues until the state reaches either $0$ or $N$, at which point play stops and the corresponding team wins.
The game typically begins at state $\frac{N}{2}$ when each team has an odd number of players ($N$ even), or at $\frac{N}{2} \pm \frac{1}{2}$ when each team has an even number of players ($N$ odd).
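The dynamics above can be sketched as a short simulation. This is a minimal sketch; the function `play_game` and the dictionary `q` of per-state duel-win probabilities are illustrative names, not from the original.

```python
import random

random.seed(0)

def play_game(q, N, start):
    """Simulate one game of Soccer Flipcup as a random walk.

    q[n] is the probability that team A's player wins the flip-cup
    duel at state n; team A wins the game when the state reaches N,
    team B when it reaches 0. Returns True if team A wins.
    """
    state = start
    while 0 < state < N:
        if random.random() < q[state]:
            state += 1  # team A's player wins the duel
        else:
            state -= 1  # team B's player wins the duel
    return state == N

# N = 8 means 7 players per team; fair duels, starting in the middle.
N = 8
q = {n: 0.5 for n in range(1, N)}
wins = sum(play_game(q, N, N // 2) for _ in range(20_000))
print(wins / 20_000)  # close to 1/2 for a fully symmetric game
```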
Each team fields $N - 1$ players. For each team, we ask: what is the optimal assignment of states to the $N - 1$ players as a function of their rank in flip-cup skill?
This game, for one of the teams, can be represented by the following set of equations, where $p(n)$ is that team's probability of victory given that the action is at state $n$, and where $q_n$ is the probability that the team's player wins the duel at state $n$:

$$p(0) = 0, \qquad p(N) = 1,$$

$$p(n) = q_n \, p(n+1) + (1 - q_n)\, p(n-1), \qquad 1 \le n \le N - 1.$$
The nontrivial equation rearranges to

$$p(n+1) - p(n) = \frac{1 - q_n}{q_n}\left(p(n) - p(n-1)\right).$$
Recursively, since $p(0) = 0$,

$$p(n+1) - p(n) = p(1) \prod_{k=1}^{n} \frac{1 - q_k}{q_k}.$$
Notice we can also write $p\left( n \right)$ as a telescoping sum of these differences:

$$p(n) = \sum_{m=0}^{n-1} \left(p(m+1) - p(m)\right) = p(1)\left(1 + \sum_{m=1}^{n-1} \prod_{k=1}^{m} \frac{1 - q_k}{q_k}\right).$$
To find $p\left( 1 \right)$, we use the fact that $p\left( N \right) = 1$:

$$p(1) = \left(1 + \sum_{m=1}^{N-1} \prod_{k=1}^{m} \frac{1 - q_k}{q_k}\right)^{-1}.$$
Now we have a closed-form equation for $p(n)$:

$$p(n) = \frac{\displaystyle 1 + \sum_{m=1}^{n-1} \prod_{k=1}^{m} \frac{1 - q_k}{q_k}}{\displaystyle 1 + \sum_{m=1}^{N-1} \prod_{k=1}^{m} \frac{1 - q_k}{q_k}}.$$
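The closed form can be checked numerically. This is a sketch under my own naming (`p_win` is not from the original); the biased case is compared against the classical gambler's-ruin formula $(1 - r^n)/(1 - r^N)$ with constant $r = (1-q)/q$, which the closed form reduces to when every $q_k$ is equal.

```python
def p_win(q, N, n):
    """Victory probability p(n) from the closed form above, valid for
    1 <= n <= N. The ratios r_k = (1 - q_k) / q_k accumulate into the
    partial sums appearing in the numerator (up to n) and the
    denominator (up to N)."""
    def partial(upto):
        total, prod = 1.0, 1.0
        for m in range(1, upto):
            prod *= (1 - q[m]) / q[m]
            total += prod
        return total
    return partial(n) / partial(N)

N = 8
fair = {k: 0.5 for k in range(1, N)}    # equilibrium duels
biased = {k: 0.6 for k in range(1, N)}  # constant edge, r = 2/3

print(p_win(fair, N, 4))    # n / N = 0.5
print(p_win(biased, N, 4))  # (1 - (2/3)^4) / (1 - (2/3)^8)
```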
Nash Equilibrium
What is clear is that given two exactly identical teams and perfect information, the teams can mirror each other's assignments so that $q_{n} = \frac{1}{2}$ for all $n$. If one team seeks an advantage by swapping players $y$ and $z$, the other team can make the identical swap to restore $q_{n} = \frac{1}{2}$. As this process continues, both teams end up with optimal positions and neither can improve its odds. This end state is a proper Nash equilibrium, and in it $q_{n} = \frac{1}{2}$ for all $n$. In the equilibrium state, $p\left( n \right)$ simplifies to

$$p(n) = \frac{n}{N}.$$
That is, for games with an odd number of players per team (starting at $n = \frac{N}{2}$), each side wins with probability $\frac{1}{2}$; for an even number of players (starting at $n = \frac{N}{2} \pm \frac{1}{2}$), one team wins with probability $\frac{N + 1}{2N}$, an advantage of $\frac{1}{N}$ over its opponent.
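A quick worked check of the even-player case, with an illustrative $N = 9$ (eight players per team) not taken from the original:

```python
# At equilibrium q_n = 1/2, p(n) = n / N. For odd N the starting state
# is (N + 1)/2 or (N - 1)/2, so one team starts slightly ahead.
N = 9
ahead = ((N + 1) // 2) / N   # 5/9: win probability of the favored team
behind = ((N - 1) // 2) / N  # 4/9: win probability of the other team
print(ahead - behind)        # gap of exactly 1/N
```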
What is this Nash Equilibrium state?
To find which position $n = t$ has the most importance, and therefore requires the greatest skill, we take the partial derivative of $p\left( n \right)$ with respect to $q_{t}$, evaluated at the Nash equilibrium $q_{n} = \frac{1}{2}$:

$$\frac{\partial p(n)}{\partial q_t}\bigg|_{q = \frac{1}{2}} = \frac{4}{N^2} \times \begin{cases} t\,(N - n), & t \le n, \\ n\,(N - t), & t \ge n. \end{cases}$$

This equation tells us, for a team of $N - 1$ players and a starting position $n$, which positions $t$ are the most important and therefore require the strongest players.
The optimal strategy is as follows:

The derivative is maximized at $t = n$ (both falloff rates are positive since $0 < n < N$). That shouldn’t surprise anyone: the best player belongs in the starting position.

$\frac{\partial p\left( n \right)}{\partial q_t}$ decreases as $t$ moves away from $n$ in either direction. The best players should be placed around the starting position.

As $t$ increases past $n$, $\frac{\partial p\left( n \right)}{\partial q_t}$ falls off in proportion to $n$ per step; as $t$ decreases below $n$, it falls off in proportion to $N - n$ per step.

This implies that for $n = \frac{N}{2}$, a common starting position, there is symmetry: you are indifferent to swapping the players at positions $\frac{N}{2} + a$ and $\frac{N}{2} - a$.

However, suppose $n > \frac{N}{2}$. Then the falloff toward $0$ is shallower than the falloff toward $N$, so you would want to put your second-best player in the $n - 1$ spot. This is particularly relevant for games with an even number of players.
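The strategy in the list above can be sketched as a greedy assignment: weight each position by its importance and give the strongest players the heaviest spots. The function names are mine, and the weight formula $t(N-n)$ for $t \le n$, $n(N-t)$ for $t \ge n$ is my reading of the falloff rules above.

```python
def position_importance(N, n):
    """Importance weight of each position t, proportional to
    dp(n)/dq_t at the q = 1/2 equilibrium: a linear falloff of
    N - n per step below n and n per step above n."""
    return {t: t * (N - n) if t <= n else n * (N - t) for t in range(1, N)}

def assign_players(N, n):
    """Map each position t to a player rank (1 = strongest player),
    strongest player on the most important position."""
    w = position_importance(N, n)
    order = sorted(w, key=lambda t: -w[t])
    return {t: rank for rank, t in enumerate(order, start=1)}

# N = 8, starting above the midpoint at n = 5: the best player takes
# the starting spot and the second-best goes to n - 1 = 4.
assignment = assign_players(8, 5)
print(assignment[5], assignment[4])
```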
Here is a diagram for the special case of $N = 8$, where each row represents a starting position $n$ and each column represents a position $t$; each entry is proportional to $\frac{\partial p(n)}{\partial q_t}$.
n \ t   1    2    3    4    5    6    7
  1     7    6    5    4    3    2    1
  2     6   12   10    8    6    4    2
  3     5   10   15   12    9    6    3
  4     4    8   12   16   12    8    4
  5     3    6    9   12   15   10    5
  6     2    4    6    8   10   12    6
  7     1    2    3    4    5    6    7
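The diagram's entries follow the pattern $t(N - n)$ for $t \le n$ and $n(N - t)$ for $t \ge n$, i.e. the derivative with its constant $\frac{4}{N^2}$ scale dropped so the entries stay integers. A short sketch (my own naming) reproduces the table:

```python
def entry(N, n, t):
    # Proportional to dp(n)/dq_t at q = 1/2; the 4 / N^2 scale factor
    # is dropped so that every entry is an integer.
    return t * (N - n) if t <= n else n * (N - t)

N = 8
table = [[entry(N, n, t) for t in range(1, N)] for n in range(1, N)]
for n, row in enumerate(table, start=1):
    print(n, *row)
```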