Congratulations! Since it's your first job, try hard to learn good habits and to avoid bad ones; it can be hard to tell the two apart. If your job is even remotely related to software development, invest your time in reading something along the lines of "The Pragmatic Programmer" or "Clean Code"; it will pay off tenfold in the long term.
Well, the picture is symmetric. The vertical symmetry comes from the fact that if z is a root of a polynomial with real coefficients, then its complex conjugate z' is also a root of the same polynomial (if z = a + bi, the complex conjugate is z' = a - bi). The horizontal symmetry comes from the fact that if z is a root of a polynomial, then -z is a root of the polynomial obtained by flipping the sign of every odd-degree coefficient. Since I'm looking at all polynomials with coefficients +1 or -1, for every root z of some polynomial there is a root -z of some other polynomial.
This indeed gives some possibilities:
- I can draw only 1/4 of the picture, and construct the result out of it
- I can store only 1/4 of all the roots (only top right quadrant, for example)
- I can do the calculations for only 1/2 of all the polynomials (I'd need some logic to keep track of whether a given polynomial's roots must be computed directly or derived by symmetry -- simple, though)
- I cannot calculate only half of the roots, because, as mentioned above, vertically symmetric roots come in conjugate pairs within the same polynomial, and the method the GSL library uses for finding roots does not produce them one by one; it calculates the whole set of roots at once.
All in all, these optimizations would save about 50% of the calculation time and about 75% of the memory, and I'd be able to double the resolution. I should be able to do degree 25, maybe 26, but not 27. And I dream about doing 32 and increasing the resolution of the picture ten-fold. So I'd rather focus first on doing the whole thing in a more parallel manner, with results not being stored in memory -- but yes, symmetry-based optimization is definitely one of the things to do after that. Thanks for the suggestion!
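The odd-coefficient sign flip behind the horizontal symmetry can be sketched like this (in Python for brevity -- my actual code is OCaml, and `flip_odd`/`eval_poly` are illustrative names, not real ones). It checks the identity p(-z) = q(z) that the symmetry rests on:

```python
# Horizontal symmetry sketch: negating every odd-degree coefficient of p
# gives a polynomial q with q(z) = p(-z), so roots come in ± pairs across
# the family of all ±1-coefficient polynomials.

def flip_odd(coeffs):
    """coeffs[k] is the coefficient of x^k; negate the odd-degree ones."""
    return [-c if k % 2 == 1 else c for k, c in enumerate(coeffs)]

def eval_poly(coeffs, z):
    """Horner evaluation of sum coeffs[k] * z^k."""
    acc = 0
    for c in reversed(coeffs):
        acc = acc * z + c
    return acc

# check p(-z) = flip_odd(p)(z) at a sample complex point
p = [1, -1, 1, 1, -1]
z = 0.3 + 0.7j
assert abs(eval_poly(p, -z) - eval_poly(flip_odd(p), z)) < 1e-12
```

Note that for an even degree like 24, the flip leaves the leading +1 coefficient alone, so the transformed polynomial stays inside the same family.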
I did some computer experiments, just for fun:
Both play the optimal strategies p = 0.25, q = 0.5: over 100000 rounds, the total value is 449756, about 4.5 per round.
Now Blue deviates to p = 0.5, and Red is clever enough to choose q = 1. Same number of rounds, total 400438: Blue got less than it could.
Now Red deviates to q = 0.25, and Blue is clever enough to choose p = 1. Same number of rounds, total 524991: Red lost more than it had to.
If Blue plays the optimal p = 0.25, Red cannot do anything to make Blue win less than 4.5 per round (see the derivation: the q-dependent factor (1 - 2q) is multiplied by 2p - 0.5 = 0). For example, p = 0.25, q = 0.75, 100000 rounds: total value 450115.
In a similar way, if Red plays q = 0.5, Blue cannot do anything to win more than 4.5 per round. Let's say q = 0.5, p = 0.75: the result is 449799.
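These experiments are easy to reproduce; here is a Monte Carlo sketch (my re-implementation in Python, not the original code), using the payoffs 3 for (1,1), 5 for (2,1), 6 for (1,2), 4 for (2,2) from the analysis:

```python
# Blue picks 1 with probability p, Red picks 1 with probability q;
# the total over `rounds` independent rounds should hover near
# 4.5 * rounds when either player uses their optimal strategy.
import random

PAYOFF = {(1, 1): 3, (2, 1): 5, (1, 2): 6, (2, 2): 4}

def simulate(p, q, rounds=100_000, seed=0):
    rng = random.Random(seed)  # seeded so runs are repeatable
    total = 0
    for _ in range(rounds):
        blue = 1 if rng.random() < p else 2
        red = 1 if rng.random() < q else 2
        total += PAYOFF[(blue, red)]
    return total

print(simulate(0.25, 0.5))  # about 450000, i.e. ~4.5 per round
```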
Let's remove all the psychology from the game, and assume that Blue's and Red's strategies are independent. Let's say Blue chooses 1 with probability p, and Red chooses 1 with probability q. Then the expected value of one round is
3 * (probability both chose 1: pq)
+ 5 * (probability Blue chose 2, Red chose 1: (1-p)q)
+ 6 * (probability Blue chose 1, Red chose 2: p(1-q))
+ 4 * (probability Blue chose 2, Red chose 2: (1-p)(1-q))
Now we are getting:
value = 3pq + 5(1 - p)q + 6p(1 - q) + 4(1 - p)(1 - q) =
= 3pq + 5q - 5pq + 6p - 6pq + 4 - 4p - 4q + 4pq =
= 4 - 4pq + 2p + q = (2p - 0.5)(1 - 2q) + 4.5
Now, if p = 0.25, the expected value is 4.5 (for Blue) no matter what q is; if p is different from that, Red can choose q to make the expected value less than 4.5. If p > 0.25, any q > 0.5 gives a value below 4.5; if p < 0.25, any q < 0.5 does. So the optimal strategy for Blue is to choose 1 with probability 25% and 2 with probability 75%.
In very much the same way, if q = 0.5, the expected value is 4.5 (for Blue) no matter what p is; if q > 0.5 then any p < 0.25 gives a value bigger than 4.5, and if q < 0.5 then any p > 0.25 does. Since Red wants to minimize the value, Red's optimal strategy is q = 0.5, i.e. choose 1 and 2 with equal probability.
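The expansion and factorization above can be verified numerically in a few lines (a Python sketch, purely to double-check the algebra):

```python
# The expanded payoff expression and the factored form agree for every
# (p, q), and at p = 0.25 the value is 4.5 whatever q is.

def value(p, q):
    return 3*p*q + 5*(1 - p)*q + 6*p*(1 - q) + 4*(1 - p)*(1 - q)

def factored(p, q):
    return (2*p - 0.5) * (1 - 2*q) + 4.5

for i in range(11):
    for j in range(11):
        p, q = i / 10, j / 10
        assert abs(value(p, q) - factored(p, q)) < 1e-9

# Blue's maximin: at p = 0.25 the q-dependent factor vanishes
assert abs(value(0.25, 0.9) - 4.5) < 1e-9
```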
I live in Finland: a 37.5-hour work week, and nobody expects you to work longer than that. You can be asked to do overtime, but only if you agree, and you get paid for it -- more than the normal salary, in some cases 2 times (or 2.5?) more. Also: 5 weeks of paid vacation every year, and you are expected to take them. And paid sick leave.
This all helps a lot in managing workload, and, I believe, improves the productivity in the long run.
Doing hobby projects is more of a challenge, though :)
P.S. I have experience (over 15 years) in both being a programmer and in leading teams of programmers.
Side comment first: yes, a root of a polynomial is a value that makes it equal to zero, so there is a 0 on the right side of every equation.
You're completely right up to the point where you try to find the complex roots of a quadratic equation. They are found in exactly the same way as real roots:
x = (-b ± √(b² - 4ac)) / (2a)
So if your equation is 1x² + 1x + 1 = 0, then a=1, b=1, c=1 and roots are
(-b ± √(b² - 4ac)) / (2a) = (-1 ± √(1 - 4)) / 2 = (-1 ± √(-3)) / 2
Since √(-3) = i√3 (because i√3 · i√3 = (i · i)(√3 · √3) = -1 · 3 = -3), the roots are -1/2 ± (√3/2)i
It is also possible to find the complex roots your way, you just have to be more careful with the imaginary part. Let's represent x not as 'a + b', as you do, but as 'a + bi', where both a and b are real numbers and i is, well, the square root of -1. Then we get:
1(a + bi)² + 1(a + bi) + 1 = 0 =>
a² + 2abi + (bi)² + a + bi + 1 = 0 =>
a² + 2abi - b² + a + bi + 1 = 0 =>
(a² - b² + a + 1) + (2ab + b)i = 0
Now look at the number on the left side: it has real part (a² - b² + a + 1) and imaginary part (2ab + b)i. For this number to be zero, both parts must be zero (you cannot turn a nonzero real number into zero by adding an imaginary part to it).
So we got that
a² - b² + a + 1 = 0 and (2ab + b)i = 0
I'm going to look at the imaginary part first.
(2ab + b)i = 0 =>
2ab + b = 0 =>
(2a + 1)b = 0
Now we have two numbers (2a + 1 and b) whose product is zero, so one of them must be zero. It cannot be b: substituting b = 0 turns the real-part equation into a² + a + 1 = 0, which has no real solutions (its discriminant is 1 - 4 = -3), and a is a real number. Hence
2a + 1 = 0 => a = -1/2
Now, when we know a, let's look at the real part.
a² - b² + a + 1 = 0 =>
1/4 - b² - 1/2 + 1 = 0 =>
b² = 3/4 =>
b = ±√3/2
We substituted x = a + bi, so we can get our two roots now: -1/2 ± (√3/2)i
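The two roots are easy to double-check numerically; here's a quick Python sketch (Python writes the imaginary unit i as j):

```python
# Confirm that -1/2 ± (√3/2)i are roots of x² + x + 1.
import math

def p(x):
    return x*x + x + 1

r1 = complex(-0.5, math.sqrt(3) / 2)
r2 = r1.conjugate()
print(abs(p(r1)), abs(p(r2)))  # both zero up to rounding error
```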
There is a theorem that the number of (complex) roots of a polynomial is equal to its degree. Well, the whole thing is a bit less straightforward, as some roots can repeat, like in x² - 2x + 1 = 0: there is only one solution, x = 1, but since the same equation can be written as (x - 1)(x - 1) = 0, the solution x = 1 is counted twice -- once for each "x - 1". I'm not going deeper here, but what's important to know is: if you take a random polynomial of degree n, it will almost always have exactly n different complex roots.
It is not possible to calculate exact roots of a general polynomial of degree over 4 -- not that we don't know how: the Abel-Ruffini theorem proves that no general formula in radicals can exist.
About my picture
So, I have equations that look like ..x^24 + ..x^23 + ...and so on... + ..x^2 + ..x + .. = 0, where every '..' is either 1 or -1. I lose nothing by assuming that the coefficient of x^24 is always 1 (multiplying an equation by -1 does not change its roots), so there are 24 free coefficients left. That gives me 2^24 = 16777216 equations. I find approximate solutions for each of them, and each has exactly 24 roots, so I get 2^24 * 24 = 402653184 complex numbers. For each such number a + bi, I put a point at coordinates (a, b). And that's how I got my picture!
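The enumeration itself is just counting in binary; here is a sketch (in Python for illustration -- the original code is OCaml, and `coeffs` is a name I made up). Bit k of an integer mask decides whether the coefficient of x^k is +1 or -1, and the leading coefficient of x^n is pinned to +1:

```python
# Each mask in [0, 2^n) encodes one sign pattern for the coefficients
# of x^0 .. x^(n-1); the coefficient of x^n is always +1.

def coeffs(n, mask):
    """Return [c_0, ..., c_n] for one of the 2^n polynomials."""
    return [1 if (k == n or mask & (1 << k)) else -1 for k in range(n + 1)]

n = 3
for mask in range(1 << n):  # 2^3 = 8 polynomials of degree 3
    print(coeffs(n, mask))

print(1 << 24)  # 16777216 polynomials at degree 24
```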
Okay, technical details.
There are multiple numerical methods for finding complex roots of polynomials. I'm aware of the Durand-Kerner method (https://en.wikipedia.org/wiki/Durand%E2%80%93Kerner_method) and the Dandelin-Graeffe method (https://en.wikipedia.org/wiki/Graeffe's_method), where "aware" means "I know that they exist". I did not implement root finding myself; instead I used the GNU Scientific Library (http://www.gnu.org/software/gsl/) and its built-in function gsl_poly_complex_solve (http://www.gnu.org/software/gsl/manual/html_node/General-Polynomial-Equations.html#General-Polynomial-Equations). The documentation says it uses "balanced-QR reduction of the companion matrix"; I have no idea what that is and was not aware of its existence before. Since I wrote my code in OCaml, I used OCaml bindings for GSL (from https://github.com/mmottl/gsl-ocaml).
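For flavor, here is a minimal Durand-Kerner iteration (the first method linked above), written in Python purely as an illustration of the idea -- the picture itself was computed with GSL's solver, not with this:

```python
# Durand-Kerner: iterate all n root estimates simultaneously;
# c[k] is the coefficient of x^k, c[n] the leading coefficient.
import cmath

def eval_poly(c, x):
    """Horner evaluation of sum c[k] * x^k."""
    acc = 0
    for ck in reversed(c):
        acc = acc * x + ck
    return acc

def durand_kerner(c, iters=200):
    n = len(c) - 1
    # distinct starting points on the unit circle, offset to break symmetry
    z = [cmath.exp(1j * (2 * cmath.pi * k / n + 0.4)) for k in range(n)]
    for _ in range(iters):
        for i in range(n):
            d = c[n]
            for j in range(n):
                if j != i:
                    d *= z[i] - z[j]
            z[i] = z[i] - eval_poly(c, z[i]) / d
    return z

print(durand_kerner([1, 1, 1]))  # approximates the roots -1/2 ± (√3/2)i
```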
After the roots are calculated, I transform them into pixel coordinates, making sure the whole thing is scaled to the given image size (I must admit I crop out some purely real roots; nothing interesting there). Then, according to the pixel coordinates, I write 'FFFFFF' (i.e. white) bytes to the corresponding positions in an array. This array backs a Cairo (http://cairographics.org/) surface of size 20000x20000, and Cairo's PNG module is used to write the result to a file. Again, I needed OCaml bindings (https://github.com/Chris00/ocaml-cairo).
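The root-to-pixel step can be sketched roughly like this (names, buffer layout, and the 3-bytes-per-pixel assumption are my illustration, not the actual code):

```python
# Scale a complex root into a w x h image covering the given real and
# imaginary ranges, then write a white RGB pixel into a flat byte buffer
# like the one backing the Cairo surface.

def plot(buf, w, h, re_range, im_range, z):
    re_min, re_max = re_range
    im_min, im_max = im_range
    px = int((z.real - re_min) / (re_max - re_min) * (w - 1))
    py = int((z.imag - im_min) / (im_max - im_min) * (h - 1))
    if 0 <= px < w and 0 <= py < h:  # roots outside the window are dropped
        off = (py * w + px) * 3      # 3 bytes per pixel: RGB
        buf[off:off + 3] = b"\xFF\xFF\xFF"

buf = bytearray(4 * 4 * 3)
plot(buf, 4, 4, (-1.0, 1.0), (-1.0, 1.0), 1 + 0j)
```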
This approach limits the resolution of the image; for example, Cairo fails to create a 40000x40000 surface. See below for future plans.
Most of the time goes into calculating the roots and scaling them to the given image size; drawing and writing the PNG file contribute noticeably but not much. On a MacBook Pro with a 3 GHz Intel Core i7 the whole process takes about 40 minutes; drawing the picture and writing it to a file takes about 2-3 minutes.
- I preferred writing convenience over optimization, so there's a lot of work for the garbage collector when it kicks in -- the pauses are quite noticeable.
- The whole thing is written in a pretty dumb way: it runs in a single thread, so there's no multi-CPU, multi-core benefit.
- Everything is kept in memory until the image is rendered. As a result I cannot do degree 25 or more, as I run out of memory (I have 16 GB of RAM). Memory usage can definitely be optimized; see the "convenience over optimization" point above.
I have started to rewrite everything in a parallel fashion. I also plan to do the rendering in chunks (top-left quarter, top-right quarter, bottom-left quarter, bottom-right quarter) -- this would allow much higher resolutions. When this is done, I'll rent an hour on some high-memory, high-CPU Amazon server and produce a much higher resolution and a higher degree.
I expected that drawing the degree-23 picture on top of the degree-24 picture would be interesting. Not so. Here is degree 23 (in green) on top of degree 24 (in cyan):
Green is distributed basically everywhere -- denser in the middle of the ring, but also present in all the interesting fractal-like parts.
Here is degree 22 (in orange) on top of degree 23 (in green) on top of degree 24 (in cyan):
Same story. Adding more degrees just creates a mess of colored pixels.
But what's clear (from bigger pictures which I do not share) is that my resolution is too rough to see how exactly degree 23 roots overlap degree 24 roots in interesting areas. Stay tuned :)