Oh dude, that screencap is from several thousand lines into a routine; it's not a standalone thing, sorry. I haven't broken everything up into subroutines yet, I'm still optimizing.
Well then, sorry if I seemed condescending or nit-picky. For what it's worth, it was a genuine question as I don't even know what language this is. :P
I'm pretty decent at coding, and it's where the demand is right now. As far as the implementation stuff goes, I have a lot of room for improvement, but when it comes to pioneering completely new directions and techniques for visualizations and analysis, apparently I'm your guy.
I'm not doubting any of that. Right now I'm actually learning how to make my data in any way presentable, as the typical lab report I get back is laden with comments along the lines of "great layout, too detailed theory section, make the graphs again but this time remove all the suck." As far as code is concerned, studying alongside CS people isn't helping anyone's confidence but I'm getting there.
I took a class once. It was a waste of time. People just need to start solving problems ASAP. Like my friend with a Raspberry Pi who wants to program a stick of LEDs for audio spectral responses.
I'm of two minds on that one. On one hand, I have been making some neat projects from the very beginning. One of my first thoughts after going through K&R on my own was to try making things like an arbitrary-precision library (I was made aware of GMP, I just wanted to see if I could do it), a dice roller (and various character generators for RPGs), and solvers for some of the more repetitive types of problems I had in school (LRC circuits, various chemistry problems, quadratic equations, etc.). I definitely learned how to apply what little knowledge I had and get something that works out of it.
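(For anyone curious, a dice roller really is a good first project. Here's a minimal sketch in Python rather than the C I wrote back then; the "NdS+M" spec format and the `roll` name are my assumptions, not what the original looked like:)

```python
import random
import re

def roll(spec):
    """Roll dice given a spec like '3d6+2' or 'd20-1'.

    Hypothetical mini-parser: 'NdS' means N dice with S sides,
    with an optional +/- modifier at the end.
    """
    m = re.fullmatch(r"(\d*)d(\d+)([+-]\d+)?", spec)
    if not m:
        raise ValueError(f"bad dice spec: {spec}")
    count = int(m.group(1) or 1)   # 'd20' means one die
    sides = int(m.group(2))
    mod = int(m.group(3) or 0)
    return sum(random.randint(1, sides) for _ in range(count)) + mod
```

Parsing the spec with a regex keeps the whole thing to a dozen lines, which is about the right size for a first post-K&R project.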
However, I can't find a single project from back then that I could call even passable. Global variables were overused, my use of data structures showed a clear lack of understanding of wtf I was doing, and one of the very few times I've wanted to travel back in time was to slap some sense about race conditions into my younger self. Code comments were atrocious, documentation was trite, and I can count the number of good examples I wrote on one hand. To this day I have no idea how the fuck I managed to make a sorting algorithm that was worse than O(n²), but I did it. At least no-one can say I was cheating with that solution…
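(It's genuinely possible to write a correct, terminating sort that's worse than O(n²) without trying to cheat; the textbook example is Stooge sort, which runs in roughly O(n^2.71). A Python sketch, just to show what I mean; I obviously don't remember what my own version looked like:)

```python
def stooge_sort(a, lo=0, hi=None):
    """Stooge sort: a correct comparison sort with ~O(n^2.71) runtime.

    Recursively sorts the first 2/3, then the last 2/3,
    then the first 2/3 again. Sorts `a` in place and returns it.
    """
    if hi is None:
        hi = len(a) - 1
    if lo >= hi:
        return a
    if a[lo] > a[hi]:
        a[lo], a[hi] = a[hi], a[lo]
    if hi - lo + 1 > 2:
        t = (hi - lo + 1) // 3
        stooge_sort(a, lo, hi - t)       # sort first two-thirds
        stooge_sort(a, lo + t, hi)       # sort last two-thirds
        stooge_sort(a, lo, hi - t)       # sort first two-thirds again
    return a
```

The three overlapping recursive calls give the recurrence T(n) = 3T(2n/3) + O(1), hence the exponent log 3 / log 1.5 ≈ 2.71.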
I'll admit that some of the programming classes I took were pointless and served only to fill point quotas. Others improved my abilities enough that I at least know what I might be doing wrong. Some of the genuinely good ones taught me how to organise, document, test, and (at least attempt to) debug or refactor my code. From what I hear from programmers, that's 90% of the work, so I'm not going to say it was all for nought.
Probably, as with almost everything, the best approach is a healthy mix of both. The theoretical side of computer science and its best practices are a must if you care at all about things like efficiency or maintainability, but when it's devoid of context or practical examples you just look at it with frustration. It's a lot like physics, where everyone wants to get to the cool stuff but very few want to take the time to grind through all the maths and conceptual framework. I mean, do I need to know how to calculate elastic collisions to get to M-Theory? ;D
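(And for the record, the elastic-collision grind really is just a few lines once you've done it. The standard 1-D textbook formulas, sketched in Python:)

```python
def elastic_1d(m1, v1, m2, v2):
    """Final velocities after a 1-D elastic collision.

    Standard result from conserving momentum and kinetic energy:
      v1' = ((m1 - m2)*v1 + 2*m2*v2) / (m1 + m2)
      v2' = ((m2 - m1)*v2 + 2*m1*v1) / (m1 + m2)
    """
    v1p = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
    v2p = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)
    return v1p, v2p
```

Equal masses simply swap velocities, which is a quick sanity check on the formulas.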